\section{Introduction}
\label{intr}
Kroll et al. (1987) showed that the near infrared fluxes and colors of
Chemically Peculiar stars (CP stars, in the scheme of Preston 1974),
when compared to a black body, are normal, like those of early
main sequence stars. IRAS data even showed that the IR fluxes remain
normal out to at least 25$\mu$ (Kroll 1987): only two CP4 stars
showed flux excesses longward of 60$\mu$, indicating cold circumstellar
material, which is not uncommon among early B stars. Moreover, Leone \&
Catalano (1991) have shown that the solar composition Kurucz model
atmospheres, which are used to fit the spectra of CP stars from $\lambda$5500
to $\lambda$16500~\AA, give a fair representation of the overall flux
distribution, with the exception of the Balmer region, where CP stars
appear generally brighter than normal, this excess being just a few
percent of the total flux. \\
\begin{table}[t]
\footnotesize
\begin{center}
\caption{The CP stars checked for variability in the near infrared.}
\label{t1}
\begin{tabular}{rrrrrr} \hline\hline
SrCrEu & HD~~~3980 & HD~~24712 & HD~~49976 & HD~~72968 & HD~~83368 \\
& HD~~96616 & HD~~98088 & HD~101065 & HD~111133 & HD~118022 \\
& HD~125248 & HD~126515 & HD~137949 & HD~148898 & HD~153882 \\
& HD~164258 & HD~203006 & HD~206088 & HD~220825 & HD~221760 \\
Si et al. & HD~~10783 & HD~~12447 & HD~~74521 & HD~~90044 & HD~116458 \\
& HD~119419 & HD~125630 & HD~147010 & HD~166469 & HD~170397 \\
& HD~187473 & HD~223640 & & & \\
Si & HD~~12767 & HD~~19832 & HD~~25267 & HD~~29305 & HD~~37808 \\
& HD~~54118 & HD~~56455 & HD~~66255 & HD~~73340 & HD~~92664 \\
& HD~114365 & HD~116890 & HD~122532 & HD~124224 & HD~133880 \\
& HD~144231 & HD~145102 & HD~203585 & HD~221006 & \\
He weak & HD~~~5737 & HD~~22470 & HD~~28843 & HD~~35456 & HD~~37151 \\
& HD~~49333 & HD~~74196 & HD~125823 & HD~137509 & HD~142990 \\
& HD~144334 & HD~148199 & HD~168733 & HD~175362 & \\
He rich & HD~~36485 & HD~~37017 & HD~~37479 & HD~~37776 & HD~~59260 \\
& HD~~60344 & HD~~64740 & & & \\ \hline\hline
\end{tabular} \end{center} \end{table}
\vspace{-2mm}
However, in spite of this normal infrared behavior, peculiar abundances
and/or magnetic fields seem to affect the near infrared too; in fact,
Catalano et al. (1991) have shown that, out of the eight CP stars
monitored throughout their rotational periods, at least six are variable
in the near infrared, although with smaller amplitudes than in the
visible. This unexpected result led us to start an observational
campaign aimed at searching for infrared variability and also to better
understand the origin of the light variability, which is one of the
outstanding observational aspects of these stars.
\vspace{-3mm}
\section{Observations}
\label{obse}
\vspace{-2mm}
The observations have been carried out in the near IR bands J, H, and K
at the 1-m photometric telescope at ESO, La Silla, Chile, using an InSb
detector cooled with liquid nitrogen. A detailed description of the ESO
infrared photometers can be found in Bouchet (1989).
The data have been collected during several observing runs from July 1986
through January 1993. All program stars were measured relative to nearby
comparison stars, chosen to be as similar in color and brightness as
possible. The integration times, the number of cycles, and the desired
r.m.s. accuracy in the mean level were optimized to get a 2\% maximum
error in the observations: the resulting accuracy in the final reduced
data is typically 0.006 mag. ESO standard software was used for all
reduction steps. Magnitudes in the standard IR system have also been
obtained by observing suitable standard stars from the ESO list (Bouchet
et al. 1991).
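For reference, the standard conversion between a relative flux error and a magnitude error (a textbook relation, not stated in the text above) is

```latex
\Delta m = 2.5 \log_{10}\!\left(1 + \frac{\Delta F}{F}\right)
         \approx 1.086\,\frac{\Delta F}{F},
```

so the 2\% single-observation flux criterion corresponds to roughly 0.022~mag, and the final accuracy of 0.006~mag to about 0.6\% in flux.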
The adopted ephemeris elements of the infrared light curves for the
programme stars have been mainly taken from Catalano \& Renson (1984,
1988, 1997), Catalano, Renson \& Leone (1991, 1993), and references
therein. The results concerning the SrCrEu and Si et al. stars have
been published elsewhere (Catalano et al. 1997, 1998).
\vspace{-2mm}
\section{Discussion and conclusions}
\vspace{-2mm}
Near infrared variability has been found to be present in the large
majority of the CP2 stars studied. The typical trend of CP2 stars to
present smaller amplitude light variations at increasing wavelength
is confirmed: the amplitudes in the near infrared are smaller than
in the visible. In most cases the variations have been found to show
very similar behavior and in phase with each other in all filters.
In a previous paper (Catalano et al. 1991) we investigated the effects
of high metallicity at the near infrared wavelengths and showed that a
Kurucz model atmosphere with a metal content ten times the solar one
could explain a three percent variation in the near infrared brightness,
which is the typically observed value.
The influence of the magnetic field on the atmospheric structure has
been discussed quantitatively by several authors for particular
configurations; the most general approach, however, is that of
Stepien (1978), who showed that, according to the direction of the
toroidal electric currents in the outermost layers, the star's shape can be
prolate or oblate with respect to the magnetic axis: the differences
between the polar and equatorial values of the radius being up to 3\%.
The results obtained by Stepien lend support to a distorted figure of the
star up to a few percent and to small variations (2-3\%) of the effective
temperature over the surface, which in some cases, can contribute to
the observed light variations. While this explanation cannot account
for the visible light variations of many CP stars,
because of the different behaviors presented by the $u$, $v$, $b$,
and $y$ curves, it cannot be excluded that the non-spherical shape
of the star as seen at the infrared wavelengths could contribute to the
observed variability, since the importance of the magnetic pressure increases
in the outer layers.
After completing the analysis of our infrared data, we hope to be able to
disentangle the relative contributions of these two mechanisms from
the study of the phase relation between the magnetic field and infrared
variations.
\vspace{-3mm}
\section{Introduction}
Extensive work has been done on temporal action/activity localization~\cite{shou2016temporal,zhao2017temporal,dai2017temporal,buch2017sst,gao2017turn,chao2018rethinking}, where an action of interest is segmented from long, untrimmed videos. These methods only identify actions from a pre-defined set of categories, which limits their application to situations where only unconstrained language descriptions are available. This more general problem is referred to as natural language localization (NLL)~\cite{anne2017localizing,gao2017tall}. The goal is to retrieve a temporal segment from an untrimmed video based on an arbitrary text query. Recent work focuses on learning the mapping from visual segments to the input text~\cite{anne2017localizing,gao2017tall,liu2018temporal,hendricks18emnlp,zhang2018man} and retrieving segments based on the alignment scores. However, successfully training an NLL model requires a large number of diverse language descriptions for different temporal segments of videos, which incurs a high human labeling cost.
We propose Weakly Supervised Language Localization Networks (WSLLN), which requires only video-sentence pairs during training, with no information about where the activities temporally occur. Intuitively, it is much easier to annotate video-level descriptions than segment-level descriptions. Moreover, when combined with text-based video retrieval techniques, video-sentence pairs may be obtained with minimal human intervention. The proposed model is simple and clean, and can be trained end-to-end in a single stage. We validate our model on \emph{ActivityNet Captions} and \emph{DiDeMo}. The results show that our model achieves the state of the art among weakly supervised approaches and performs comparably to some supervised approaches.
\section{Related Work}
\noindent\textbf{Temporal Action Localization} in long videos is widely studied in both offline and online scenarios. In the offline setting, temporal action detectors~\cite{shou2016temporal,buch2017sst,gao2017turn,chao2018rethinking} predict the start and end times of actions after observing the whole video, while online approaches~\cite{de2016online,gao2017red,shou2018online,xu2018temporal,gao2019startnet} label action classes in a per-frame manner without accessing future information. The goal of temporal action detectors is to localize actions in pre-defined categories. However, activities in the wild are very complicated, and it is challenging to cover all the activities of interest with a finite set of categories.
\noindent\textbf{Natural Language Localization} in untrimmed videos was first introduced in~\cite{gao2017tall,anne2017localizing}, where given an arbitrary text query, the methods attempt to localize the text (predict its start and end times) in a video. Hendricks \emph{et al.} proposed MCN~\cite{anne2017localizing}, which embeds the features of visual proposals and sentence representations in the same space and ranks proposals according to their similarity with the sentence. Gao \emph{et al.} proposed CTRL~\cite{gao2017tall}, where alignment and regression are conducted for clip candidates. Liu \emph{et al.} introduced TMN~\cite{liu2018temporal}, which measures the clip-sentence alignment guided by the semantic structure of the text query. Later, Hendricks \emph{et al.} proposed MLLC~\cite{hendricks18emnlp}, which explicitly reasons about temporal clips of a video. Zhang \emph{et al.} proposed MAN~\cite{zhang2018man}, which utilizes Graph Convolutional Networks~\cite{kipf2016semi} to model temporal relations among visual clips. Although these methods achieve considerable success, they need segment-level annotations for training. Duan \emph{et al.} proposed WSDEC to handle weakly supervised dense event captioning in~\cite{duan2018weakly} by alternating between language localization and caption generation iteratively. WSDEC generates language localization as intermediate results and can be trained using video-level labels. Thus, we set it as a baseline, although it is not designed for NLL.
\noindent\textbf{Weakly Supervised Localization} has been studied extensively for object detection in images and action localization in videos~\cite{oquab2015object,Bilen_2016_CVPR, tang2017multiple, gao2018c,kantorov2016contextlocnet, Li_2016_CVPR, jie2017deep, diba2017weakly, papadopoulos2017training, duchenne2009automatic,laptev2008learning,bojanowski2014weakly, huang2016connectionist,wang2017untrimmednets,shou2018autoloc}. Some methods use class labels to train object detectors. Oquab \emph{et al.} showed that object locations may be obtained for free when training classification models~\cite{oquab2015object}. Bilen \emph{et al.} proposed WSDDN~\cite{Bilen_2016_CVPR}, which focuses on both object recognition and localization. Their two-stream architecture inspired several weakly supervised approaches~\cite{tang2017multiple, gao2018c, wang2017untrimmednets}, including our method. Li \emph{et al.} presented an adaptation strategy in~\cite{Li_2016_CVPR} which uses the output of a weak detector as pseudo groundtruth to train a detector in a fully supervised way. OICR~\cite{tang2017multiple} integrates multiple instance learning and iterative classifier refinement in a single network. Other works use different types of weak supervision to optimize detectors. In~\cite{papadopoulos2017training}, Papadopoulos \emph{et al.} used clicks to train detectors. Gao \emph{et al.} utilized object counts for weakly supervised object detection~\cite{gao2018c}. Instead of using temporally labeled segments, weakly supervised action detectors use weaker annotations, \emph{e.g.}, movie scripts~\cite{duchenne2009automatic,laptev2008learning}, the order of the occurring action classes in videos~\cite{bojanowski2014weakly, huang2016connectionist}, and video-level class labels~\cite{wang2017untrimmednets,shou2018autoloc}.
\section{Weakly Supervised Language Localization Networks (WSLLN)}
\subsection{Problem Statement}
Following the setting of its strongly supervised counterpart~\cite{gao2017tall,anne2017localizing}, the goal of a weakly supervised language localization (WSLL) method is to localize the event that is described by a sentence query in a long, untrimmed video. Formally, given a video consisting of a sequence of image frames, $\textbf{V}_i=[I_i^1, I_i^2, ..., I_i^T]$, and a text query ${Q}_i$, the model aims to localize a temporal segment, $[I_i^{st}, ...,I_i^{ed}]$, which semantically aligns best with the query. $st$ and $ed$ indicate the start and end times, respectively. The difference is that WSLL methods only utilize video-sentence pairs, $\{\textbf{V}_i, Q_i\}_{i=1}^N$, for training, while supervised approaches have access to the start and end times of the queries.
\begin{figure*}[t]
\begin{center}
\includegraphics[width=1.0\linewidth]{figs/pipeline.pdf}
\end{center}
\caption{
The workflow of our method. Visual and text features are extracted from $n$ video proposals and the input sentence. Fully-connected (FC) layers are used to transform the features to the same length, $d$. The two features are combined by multi-modal processing~\cite{gao2017tall} and input to the two-branch structure. Scores from both parts are merged. Video-level scores, $vq$, are obtained by summing $s$ over proposals. The whole pipeline is trained end-to-end using video-level and pseudo segment-level labels. $x\times z$ indicates dimensions.
}
\label{fig:pipeline}
\end{figure*}
\subsection{The Proposed Approach}
Taking frame sequences, $[I_i^1, I_i^2, ..., I_i^T]$, as inputs, the model first generates a set of temporal proposals, $\{p_i^1, p_i^2,...,p_i^n\}$, where $p_i^j$ consists of temporally-continuous image frames. Then, the method aligns the proposals with the input query and outputs scores for proposals, $\{s_i^1, s_i^2,...,s_i^n\}$, indicating their likelihood of containing the event.
\noindent\textbf{Feature Description}.
Given a sentence query ${Q}_i$ of arbitrary length, sentence encoders can be used to extract text feature, $fq_i$, from the query. For a video, $\textbf{V}_i=[I_i^1, I_i^2, ..., I_i^T]$, features, $\textbf{fv}_i=[fv_i^1, fv_i^2, ..., fv_i^T]$, are extracted from each frame. Following~\cite{anne2017localizing}, the visual feature, $fp_i^j$, of a proposal $p_i^j$ is obtained using Eq.~\ref{eq: proposal_feat}, where $pool(x, t_1, t_2)$ means average pooling features $x$ from time $t_1$ to $t_2$, $||$ indicates concatenation, $j_{st}$/$j_{ed}$ indicates start/end times of the proposal and $\bar{j}$ means time is normalized to $[0, 1]$.
\begin{equation}
\label{eq: proposal_feat}
pool({\textbf{fv}_i}, j_{st}, j_{ed}) || pool({\textbf{fv}_i}, 0, T)||[\bar{j}_{st}, \bar{j}_{ed}]
\end{equation}
We see that the feature of each proposal contains the information of its visual pattern, the overall context and its relative position in the video.
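The pooling in Eq.~\ref{eq: proposal_feat} can be sketched in a few lines of NumPy; the frame count, feature dimension, and function name below are illustrative, not taken from the original implementation:

```python
import numpy as np

def proposal_feature(fv, j_st, j_ed, T):
    """Sketch of Eq. (1): the feature of one temporal proposal.

    fv         : (T, d_v) array of per-frame visual features
    j_st, j_ed : start/end frame indices of the proposal
    T          : total number of frames (used to normalize times to [0, 1])
    """
    local = fv[j_st:j_ed].mean(axis=0)         # pool(fv, j_st, j_ed): proposal content
    context = fv[0:T].mean(axis=0)             # pool(fv, 0, T): whole-video context
    position = np.array([j_st / T, j_ed / T])  # normalized start/end times
    return np.concatenate([local, context, position])

# Toy example: 100 frames with 500-dim features.
fv = np.random.rand(100, 500)
fp = proposal_feature(fv, 20, 60, 100)
print(fp.shape)  # (1002,) = 500 (local) + 500 (context) + 2 (position)
```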
Following~\cite{gao2017tall}, features of the sentence and a visual proposal are combined as in Eq.~\ref{eq: vs_comb}. The feature, $fm$, will be used to measure the matching between a candidate proposal and the input query.
\begin{equation}
\label{eq: vs_comb}
fm=(fp+fq) || (fp\cdot fq) || FC(fp||fq)
\end{equation}
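The fusion in Eq.~\ref{eq: vs_comb} can be sketched as follows; the random weight matrix stands in for the learned FC layer, and all dimensions are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 1000                  # common feature length after the projection layers
fp = rng.random(d)        # proposal feature (already projected to length d)
fq = rng.random(d)        # sentence feature (already projected to length d)

# FC(fp || fq): a single fully-connected layer mapping 2d -> d.
# The weights are random here; in the model they are learned.
W_fc = rng.standard_normal((d, 2 * d)) / np.sqrt(2 * d)
fc_out = W_fc @ np.concatenate([fp, fq])

# Eq. (2): sum, element-wise product, and FC output, concatenated.
fm = np.concatenate([fp + fq, fp * fq, fc_out])
print(fm.shape)  # (3000,)
```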
The workflow of WSLLN is illustrated in Fig.~\ref{fig:pipeline}. Inspired by the success of two-stream structures in weakly supervised object and action detection~\cite{Bilen_2016_CVPR, wang2017untrimmednets}, WSLLN consists of two branches, \emph{i.e.}, an alignment branch and a detection branch. The semantic consistency between the input text and each visual proposal is measured in the alignment branch; the proposals are compared and selected in the detection branch. Scores from the two branches are merged to produce the final results.
\noindent\textbf{Alignment Branch} produces the consistency scores, $sa_i \in \mathrm{R}^{n\times2}=[sa_i^1, sa_i^2,...,sa_i^n]$, for the proposals of a video-sentence pair. $sa_i$, given in Eq.~\ref{eq: align}, measures how well each proposal matches the text. The scores of different proposals are calculated independently; $softmax_a$ indicates applying the softmax function over the last dimension.
\begin{equation}
\label{eq: align}
sa_i=softmax_a(\textbf{W}_afm_i)
\end{equation}
\noindent\textbf{Detection Branch} performs proposal selection. The selection score, $sd_i \in \mathrm{R}^{n\times2}=[sd_i^1, sd_i^2,...,sd_i^n]$ in Eq.~\ref{eq: detection}, is obtained by applying the softmax function over proposals. Through this softmax, the score of a proposal is affected by those of the other proposals, so the operation encourages competition among segments.
\begin{equation}
\label{eq: detection}
sd_i=softmax_d(\textbf{W}_dfm_i)
\end{equation}
\noindent\textbf{Score Merging} combines the two branches by element-wise multiplication, \emph{i.e.}, $s_i=sa_i \cdot sd_i$, over proposals. $s_i$ is used as the final segment-sentence matching score during inference.
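The two branches and the score merging can be sketched together; the random weight matrices stand in for the learned heads $\textbf{W}_a$ and $\textbf{W}_d$, and the dimensions are illustrative:

```python
import numpy as np

def softmax(x, axis):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
n, d = 15, 3000                            # proposals, fused-feature length
fm = rng.random((n, d))                    # fused features for one video-sentence pair
W_a = rng.standard_normal((d, 2)) * 0.01   # alignment head (learned in the model)
W_d = rng.standard_normal((d, 2)) * 0.01   # detection head (learned in the model)

sa = softmax(fm @ W_a, axis=1)  # Eq. (3): softmax over classes, per proposal
sd = softmax(fm @ W_d, axis=0)  # Eq. (4): softmax over proposals -> competition
s = sa * sd                     # score merging: element-wise product
vq = s.sum(axis=0)              # video-level matching score: sum over proposals
```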
\noindent\textbf{Training Phase}. To utilize video-sentence pairs as supervision, our model is optimized as a video-sentence matching classifier. We compute the matching score of a given video-sentence pair by summing $s_i^j$ over proposals, $vq_i=\sum_{j=1}^{n} s_i^j$. Then, $L_v$ is obtained in Eq.~\ref{eq: lv} by comparing this score with the video-sentence match label $l_i \in \{0, 1\}$. Positive video-sentence pairs can be obtained directly; we generate negative ones by pairing each video with a randomly selected sentence from the training set, ensuring that no positive pair is included in the negative set.
\begin{equation}
\label{eq: lv}
L_v=loss(vq_i, l_i)
\end{equation}
Results can be further refined by adding an auxiliary loss $L_r$ in Eq.~\ref{eq: lr}, where $\hat{y}_i \in \{0, 1, ..., n-1 \}$ indicates the index of the segment that best matches the sentence during training. The real segment-level labels are not available; thus, we generate pseudo labels by setting $\hat{y}_i={\operatorname*{argmax}}_{j} s_i^j[:,1]$. This loss further encourages competition among proposals.
\begin{equation}
\label{eq: lr}
L_r=loss(s_i^j, \hat{y}_i)
\end{equation}
The overall objective is to minimize $L$ in Eq.~\ref{eq: loss_full}, where $\lambda$ is a balancing scalar and $loss$ is the cross-entropy loss.
\begin{equation}
\label{eq: loss_full}
L= loss(vq_i, l_i) +\lambda loss(s_i^j, \hat{y}_i) .
\end{equation}
\section{Experiments}
\subsection{Experimental Settings}
\noindent\textbf{Implementation Details}. BERT~\cite{devlin2018bert} is used as the sentence encoder, where the feature of `[CLS]' at the last layer is extracted as the sentence representation. Visual and sentence features are linearly transformed to have the same dimension, $d=1000$. The hidden layers for both branches have 256 units. For \emph{ActivityNet Captions}, we take the $n=15$ proposals over multiple scales of each video provided by~\cite{duan2018weakly} and use the C3D~\cite{tran2015learning} features provided by~\cite{krishna2017dense}. For \emph{DiDeMo}, we use the $n=21$ proposals and VGG~\cite{simonyan2014very} features (RGB and Flow) provided in~\cite{anne2017localizing}.
\noindent\textbf{Evaluation Metrics}. Following~\cite{gao2017tall,anne2017localizing}, \emph{R@k,IoU=th} and \emph{mIoU} are used for evaluation. Proposals are ranked according to their matching scores with the input sentence. If the temporal IoU between at least one of the top-$k$ proposals and the groundtruth is greater than or equal to $th$, the sentence is counted as matched. \emph{R@k,IoU=th} is the percentage of matched sentences over the total number of sentences for given $k$ and $th$. \emph{mIoU} is the mean IoU between the top-1 proposal and the groundtruth.
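A minimal sketch of the temporal IoU and the \emph{R@k,IoU=th} criterion described above; the segment boundaries and helper names are illustrative:

```python
def temporal_iou(a, b):
    """IoU of two temporal segments given as (start, end) pairs."""
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    union = max(a[1], b[1]) - min(a[0], b[0])
    return inter / union if union > 0 else 0.0

def matched_at_k(ranked_proposals, gt, k, th):
    """A sentence counts as matched if any top-k proposal has IoU >= th."""
    return any(temporal_iou(p, gt) >= th for p in ranked_proposals[:k])

# Toy example: three proposals ranked by matching score, groundtruth (10, 20).
ranked = [(12, 22), (0, 5), (10, 20)]
print(matched_at_k(ranked, (10, 20), k=1, th=0.5))  # True: IoU = 8/12 >= 0.5
```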
\subsection{Experiments on ActivityNet Captions}
\noindent\textbf{Dataset Description}. \emph{ActivityNet Captions}~\cite{krishna2017dense} is a large-scale dataset of human activities. It contains 20k videos including 100k video-sentences in total. We train our models on the training set and test them on the validation set. Although the dataset provides segment-level annotation, we only use video-sentence pairs during training.
\noindent\textbf{Baselines}. We compare with strongly supervised approaches, \emph{i.e.}, CTRL~\cite{gao2017tall}, ABLR~\cite{yuan2018find} and WSDEC-S~\cite{duan2018weakly}, to see how much accuracy our method sacrifices by using only weak labels. Originally proposed for dense captioning, WSDEC-W~\cite{duan2018weakly} achieves state-of-the-art performance for weakly supervised language localization. Despite its good performance, WSDEC-W involves a complicated training procedure, alternating between sentence localization and caption generation for several iterations.
\begin{table}[]
\centering
\setlength\tabcolsep{1.7pt}
\begin{tabular}{lccccc}
Model & WS & IoU=0.1 & IoU=0.3 & IoU=0.5 & mIoU \\
\hline
CTRL & False & 49.1 & 28.7 & 14.0 & 20.5 \\
ABLR& False & 73.3 & 55.7 & 36.8 & 37.0\\
WSDEC-S & False & 70.0 & 52.9 & 37.6 & 40.4 \\
\hline
WSDEC-W & True & 62.7 & 42.0 & 23.3 & 28.2 \\
\textbf{WSLLN} & True &75.4 &42.8 &22.7 & 32.2\\
\hline
\end{tabular}
\caption{Comparison results based on $R@1$ on \emph{ActivityNet Captions}. All baseline numbers are reprinted from~\cite{duan2018weakly}. WS: weakly supervised.}
\label{tab: act}
\end{table}
\begin{table}[]
\centering
\begin{tabular}{lcccccc}
$\lambda$ $\rightarrow$& 0.0 & 0.1 & 0.2 & 0.3 & 0.4 & 0.5 \\
\hline
IoU=0.1 & 64.9 &75.4& 75.5& 75.5& 75.5& 66.6 \\
IoU=0.3 & 36.2&42.8& 42.9& 42.9& 42.9& 38.3 \\
IoU=0.5 & 19.4&22.7& 22.7& 22.8& 22.7& 20.7 \\
mIoU & 27.4& 32.2 & 32.3& 32.3& 32.3& 28.8 \\
\hline
\end{tabular}
\caption{R@1 results of our method on \emph{ActivityNet Captions} when $\lambda$ in Eq.~\ref{eq: loss_full} is set to be different values.}
\label{tab: ablation_act}
\end{table}
\subsubsection{ Comparison Results}
Comparison results are displayed in Tab.~\ref{tab: act}. WSLLN outperforms WSDEC-W by $\sim$$4\%$ $mIoU$, and outperforms the strongly supervised CTRL by over $11\%$ $mIoU$. Under the $R@1, IoU=0.1$ metric, our model largely outperforms all the baselines, both strongly and weakly supervised, which means that when a scenario is flexible about IoU coverage, our method has a clear advantage. When $th=$$0.3/0.5$, our model has results comparable to WSDEC-W and largely outperforms CTRL. The overall results demonstrate the good performance of WSLLN, even though a substantial gap remains between weakly supervised methods and some supervised ones, \emph{i.e.}, ABLR and WSDEC-S. The $mIoU$ (mean$\pm$std) of WSLLN across 3 runs is $32.2\pm0.05$, which demonstrates the robustness of our method.
\subsubsection{Ablation Study}
\noindent\textbf{Effect of $\lambda$}. We evaluate the effect of $\lambda$ (see Eq.~\ref{eq: loss_full}) in Tab.~\ref{tab: ablation_act}. As the table shows, our model performs stably when $\lambda$ is set between $0.1$ and $0.4$. When $\lambda=0$, the refining module is disabled and the performance drops. When $\lambda$ is set to a large value, \emph{e.g.}, $0.5$, the contribution of $L_v$ is reduced and performance also drops.
\noindent\textbf{Effect of Sentence Encoder}. WSDEC-W uses a GRU~\cite{cho2014learning} as its sentence encoder, while our method uses BERT. This may seem an unfair comparison, since BERT is generally more powerful than a GRU. However, we use a pretrained BERT model without fine-tuning on our dataset, while WSDEC-W trains its GRU end-to-end, so it is unclear which setting is favored. To resolve this concern, we replace BERT with a GRU following WSDEC-W. The $R@1$ results for $IoU$ set to 0.1, 0.3 and 0.5 are 74.0, 42.3 and 22.5, respectively; the mIoU is 31.8. Thus, our model with a GRU achieves results comparable to those with BERT.
\noindent\textbf{Effect of Two-branch Design}. We create two baselines, \emph{i.e.}, \emph{Align-only} and \emph{Detect-only}, to demonstrate the effectiveness of our design. For a fair comparison, both are trained using only video-sentence pairs.
\emph{Align-only} contains only the alignment branch. For a positive video-sentence pair, we give positive labels to all proposals; negative pairs receive negative labels for all proposals. The loss is calculated between the proposal scores and these generated segment-level labels.
\emph{Detect-only} contains only the detection branch. The loss is calculated at each training iteration using the highest detection score over proposals and the video-level label.
Comparison results are displayed in Tab.~\ref{tab: act_ablation}. It shows that the two baselines underperform WSLLN by a large margin, which demonstrates
the effectiveness of our design.
\begin{table}[]
\centering
\setlength\tabcolsep{1.7pt}
\begin{tabular}{lcccc}
Model & IoU=0.1 & IoU=0.3 & IoU=0.5 & mIoU \\
\hline
Align-only & 40.0& 18.9& 7.5& 13.4 \\
Detect-only & 33.7& 18.3& 10.4& 13.6\\
\hline
\end{tabular}
\caption{Ablation study based on $R@1$ on \emph{ActivityNet Captions}. Both methods are trained using weak supervisions.}
\label{tab: act_ablation}
\end{table}
\subsection{Experiments on DiDeMo}
\noindent\textbf{Dataset Description}. \emph{DiDeMo} was proposed in~\cite{anne2017localizing} for the language localization task. It contains 10k, 30-second videos including 40k annotated segment-sentence pairs. Our models are trained using video-sentence pairs in the train set and tested on the test set.
\noindent\textbf{Baselines}. To the best of our knowledge, no weakly supervised method has been evaluated on \emph{DiDeMo}. So, we compare with some supervised methods, \emph{i.e.}, MCN~\cite{anne2017localizing} and LOR~\cite{hu2016natural}. MCN is a supervised NLL model. LOR is a supervised language-object retrieval model. It utilizes much more expensive (object-level) annotations for training. We follow the same setup of LOR as in~\cite{anne2017localizing} to evaluate LOR for our task.
\noindent\textbf{Comparison Results} are shown in Tab.~\ref{tab: didemo}. WSLLN performs better than LOR in terms of $R@1/5$. We also observe that the gap between our method and the supervised NLL model is much larger on \emph{DiDeMo} than on \emph{ActivityNet Captions}. This may be due to the fact that \emph{DiDeMo} is a much smaller dataset which is a disadvantage for weakly supervised learning.
\begin{table}[]
\centering
\begin{tabular}{lccccc}
Model & WS&Input & R@1 & R@5& mIoU \\
\hline
Chance & --&-- & 3.75 & 22.50 & 22.64 \\
LOR & False &RGB&16.2 &43.9 &27.2 \\
MCN & False& RGB & 23.1 & 73.4 & 35.5 \\
MCN & False& Flow & 25.8 & 75.4 & 38.9 \\
\hline
\textbf{WSLLN} & True& RGB & 19.4 & 53.1 & 25.4 \\
\textbf{WSLLN} & True & Flow & 18.4 & 54.4 & 27.4 \\
\hline
\end{tabular}
\caption{Comparison results on \emph{DiDeMo}. Following MCN, we set $th=1.0$ for the IoU threshold. All baseline numbers are reprinted from~\cite{anne2017localizing}. WS: weakly supervised.}
\label{tab: didemo}
\end{table}
\section{Conclusion}
We propose WSLLN, a simple language localization network. Unlike most existing methods, which require segment-level supervision, our method is optimized using only video-sentence pairs. WSLLN is based on a two-branch architecture where one branch performs segment-sentence alignment and the other conducts segment selection. Experiments show that WSLLN achieves promising results on \emph{ActivityNet Captions} and \emph{DiDeMo}.
\section{Introduction}\label{s-1}
\include{introduction}
\include{td}
\include{braungardt}
\include{boundary3}
\include{schottky}
\frenchspacing
\section{Boundary points of Teichm\"uller curves}
The aim of this chapter is to study the boundary points
of the Teichm\"uller disks
and Teichm\"uller curves introduced in Chapter \ref{bp} in
$\overline{T}_g$ and $\overline{M}_g$, respectively.
Here and later, whenever we speak about $\overline{T}_g$ and its boundary, we
mean the bordification of the Teich\-m\"ul\-ler space
described in Chapter \ref{volker}.\\
In particular we will derive, in Section \ref{bpofdisks},
the following description
of the boundary points of Teichm\"uller curves
(see Proposition~\ref{iquerfortc} and Corollary~\ref{finalcor}
for a more precise formulation):
\begin{theorem}\label{thm}
One obtains the boundary points of a Teichm\"uller curve
by contracting the centers of
all cylinders in Strebel directions.
They are determined by the parabolic elements in
the associated mirror Veech group.
\end{theorem}
This statement seems to be well known to experts, although
we are not aware of a published proof.\\
In Section \ref{srendpoint} we prepare for the proof of Theorem \ref{thm}
by introducing Strebel
rays. They are special
geodesic rays in Teichm\"uller space which always converge
to a point on the boundary.
Following Masur \cite{M}, we describe
this boundary point
quite explicitly using the affine structure of the quadratic differential $q$ that defines the Strebel ray.\\
In Section \ref{bpofdisks} we turn to the boundary points of
Strebel rays that are contained in a Teichm\"uller disk.
In particular if the Teichm\"uller disk
descends to a Teichm\"uller curve in the moduli space, all its boundary points
can be determined explicitly with the aid of the projective Veech group.
One obtains Theorem \ref{thm} as a conclusion.
\subsection{Hitting the boundary via a Strebel ray}
\label{srendpoint}
In this section, we introduce Strebel rays and describe their end point
on the boundary of $\overline{T}_g$.
As before, everything might be done
as well for punctured surfaces and the Teich\-m\"ul\-ler space $T_{g,n}$ with $3g-3+n > 0$,
but for ease of notation, we restrict to the case $n=0$.\\
Let $X$ be a Riemann surface of genus $g \geq 2$, $q$ a
holomorphic quadratic differential on $X$.
Recall from Section \ref{deform} that
with $q$ we have chosen a
natural flat structure $\mu$ on the surface
$X^* = X - \{\mbox{critical points of $q$}\}$ whose charts were given
in (\ref{natchart}).
The maximal real curves in $X^*$
which are locally mapped by these charts to horizontal
(resp. vertical) line segments
are called {\em horizontal}\index{trajectory!horizontal}\index{trajectory} (resp. {\em vertical})\index{trajectory!vertical} {\em trajectories}. A
trajectory is {\em critical}\index{trajectory!critical} if it ends in a critical point. Otherwise
it is {\em regular}\index{trajectory!regular}.
\begin{definition}
We say that a holomorphic quadratic differential $q$
is {\em Strebel}\index{Strebel differential},
if all regular horizontal trajectories are closed.
\end{definition}
Strebel differentials play an exceptional role
in the following sense. Recall from Section \ref{sr} that
each holomorphic quadratic differential defines a geodesic ray.
If $q$ is Strebel, then the geodesic ray defined by its negative $-q$
converges in $\overline{T}_g$ to an end point on the boundary.
This is described more precisely in the following proposition which
was proven by Masur in \cite{M}. We give a version of his proof
with parts of the notation and arguments adapted to the context of
our article.\\
Recall also from \ref{sr} that we obtain the geodesic ray\index{geodesic ray} to $-q$ as the image
of the isometric embedding
\begin{equation}\label{minusq}
\gamma = \gamma_{-q}: \left\{
\begin{array}{lcl}
[0,\infty) & \rightarrow & T_g\\
t & \mapsto & (X_K,f_K) = [(X,\mu_{-q})\circ \begin{pmatrix} K&0\\0&1\end{pmatrix}, \mbox{id}]
\qquad \mbox{with } K = e^t .
\end{array}\right.
\end{equation}
Note that here $(X_K,f_K)$ is the Teichm\"uller deformation of
$X$ of dilatation $K$ with respect to $-q$. Furthermore, $\mu_{-q}$ is
the translation structure on $X^*$ defined by $-q$.
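In the natural charts of $\mu_{-q}$, the matrix action in (\ref{minusq}) is simply a horizontal stretch; this is the standard description of Teichm\"uller deformations, sketched here for orientation:

```latex
z = x + iy \;\longmapsto\; Kx + iy, \qquad K = e^t,
```

a quasiconformal map of dilatation $K$, constant in each chart and hence affine with respect to the flat structure.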
\begin{proposition} \label{rayminusq}
Suppose $q \neq 0$ is a Strebel differential.
For the geodesic ray defined by $\gamma_{-q}$
in $T_g$, one has:
\begin{itemize}
\item[a)] The ray converges towards a unique point $(X_{\infty},f_{\infty})$
on the boundary of
the Teich\-m\"uller space $T_g$.
\item[b)] One obtains this point by contracting the
central lines of the horizontal cylinders defined by $q$ as is described in \ref{contract}.
\end{itemize}
\end{proposition}
\begin{definition}\label{sray}
In the previous Proposition, the geodesic ray
defined by $-q$, i.~e. the image of $\gamma_{-q}$ in $T_g$,
is called a {\it Strebel ray}\index{Strebel ray}.
\end{definition}
For the proof of Proposition~\ref{rayminusq} one may
use two slightly different perspectives
of the Strebel ray.
They are described in \ref{patch}, \ref{stretch} and
\ref{dan}, \ref{contract}.
In \ref{endpoint} we describe the boundary point $(X_{\infty},f_{\infty})$.
In \ref{conv} we show that the Strebel ray in fact converges
towards this point.\\
Throughout Section \ref{srendpoint}, we assume that the
differential $q$ is Strebel.
\subsubsection[$X$ as patchwork of rectangles]{$X$ as patchwork of
rectangles\\[2mm]}
\label{patch}
One may regard $X$ as a patchwork of rectangles\index{patchwork!of rectangles} in the complex plane, as is
described in the following.\\
Since $q$ is Strebel, the surface $X$, with the critical points and critical
horizontal trajectories removed, is swept out by closed horizontal trajectories.
More precisely, it follows from the work of Strebel (cf. \cite{st},
also see \cite[Theorem B]{M} which contains a list of the results
we use here) that
the surface $X$, except for the critical points and critical
horizontal trajectories, is covered by a finite number of
maximal {\em hori\-zontal cylinders}\index{cylinder!horizontal} $Z_1$, \ldots,
$Z_p$, i.~e. annuli that are swept out by closed horizontal trajectories.
For each $Z_i$ one may choose a vertical trajectory $\beta_i$
joining opposite boundary components
of $Z_i$.
If we remove $\beta_i$ from $Z_i$, the remainder is mapped, by the natural
chart $w_i$ defined by $\mu$ (see (\ref{natchart})), to an open rectangle $R_i$
in the complex plane.
The horizontal and vertical edges have lengths
\[a_i = \int_{\alpha_i}|q(z)|^{\frac12}dz \;\; \mbox{ and } \;\;
b_i = \int_{\beta_i}|q(z)|^{\frac12}dz,\]
where $\alpha_i$ is any closed horizontal trajectory in the cylinder $Z_i$.\\
One may extend $w_i^{-1}$
uniquely to a
map from the closure $\bar{R}_i$ of $R_i$
to the closure of the annulus $Z_i$.
Then the two horizontal edges of $\bar{R}_i$ are
mapped to the two horizontal boundary components of $Z_i$ and
the two vertical edges are both mapped to $\beta_i$.
The critical points of $q$
that lie on the boundary of $Z_i$ define by their preimage
marked points on the horizontal edges of $\bar{R}_i$
and decompose them into segments. \\
For each such segment $s$ on a horizontal edge
of $\bar{R}_i$, its image on $X$ joins the annulus $Z_i$
to an annulus $Z_j$, possibly with $i=j$.\\
Thus the map $w_j\circ w_i^{-1}$ (where $w_i^{-1}$ denotes the
extended map and $w_j$ is locally
the inverse of the extended map $w_j^{-1}$)
is an {\em identification map}
between $s$ and a segment on a horizontal edge
of $\bar{R}_j$; images of critical points have to be excluded.\\
These identification maps are of the form $z \mapsto \pm z + c$
with a constant $c \in \mathbb{C}$.\\
Conversely, given the closed rectangles
$\bar{R}_1$, \ldots, $\bar{R}_p$,
the marked points on their horizontal edges
and these identification maps, we may recover the surface $X$
as follows: for each $i$ glue the two vertical edges of $\bar{R}_i$ by a
translation and the horizontal edges (with the marked points removed)
by the identification maps.
In this way, one obtains a surface $X^*$ with the flat structure
on it inherited from the euclidean plane $\mathbb{C}$.
By filling in the punctures at vertices, we obtain the original compact
Riemann surface $X$.\\
In this sense one may
consider $X$ as a patchwork
of the rectangles
$\bar{R}_1$, \ldots, $\bar{R}_p$. This description
depends of course on the
chosen holomorphic quadratic Strebel differential $q$.
\begin{example}{\em Two Riemann surfaces $X$ given as a patchwork of
rectangles}:\label{examp}\\[2mm]
In the two examples in Figure \ref{l} and Figure
\ref{kk}, the two vertical edges of each rectangle are glued by
a translation.
Horizontal segments with the same name are glued.
The direction of the arrow indicates whether the identification
is a translation or a rotation by
$180\grad$. In the example in Figure \ref{l}
one only has translations,
in the example in Figure \ref{kk} only rotations.\\[2mm]
In the first example the surface $X$ is of genus $2$ and all marked points
are identified and thus give only one point on $X$. In the second example
one obtains a surface of genus $0$ with four marked points
indicated by the four symbols $\bullet$, $\star$, $\circ$ and
{\footnotesize $\square$}.\\
\begin{minipage}[t]{6cm}
{\em Surface of genus 2 with 1 marked point:}\\
\setlength{\unitlength}{1cm}
\begin{picture}(6,4)
\put(0,0){\framebox(5,1.5){$R_1$}}
\put(0,2.5){\framebox(2,1){$R_2$}}
\put(-.1,-.1){\LARGE $\bullet$}
\put(1.9,-.1){\LARGE $\bullet$}
\put(.9,1.4){\LARGE $\bullet$}
\put(2.9,1.4){\LARGE $\bullet$}
\put(-.1,2.4){\LARGE $\bullet$}
\put(1.9,2.4){\LARGE $\bullet$}
\put(-.1,3.4){\LARGE $\bullet$}
\put(1.9,3.4){\LARGE $\bullet$}
\put(1,-.3){$s_1$}
\put(3.5,-.3){$s_2$}
\put(0.3,1.65){$s_2$}
\put(2,1.65){$s_3$}
\put(4,1.65){$s_2$}
\put(1,2.2){$s_3$}
\put(1,3.65){$s_1$}
\put(1.5,-.1){$>$}
\put(4,-.1){$>$}
\put(.6,1.42){$>$}
\put(2.6,1.42){$>$}
\put(4.4,1.42){$>$}
\put(1.5,2.4){$>$}
\put(1.5,3.42){$>$}
\put(-0.4, 1){$b_1$}
\put(4.3,-.5){$a_1$}
\put(-0.4, 3.1){$b_2$}
\put(1.7,2.2){$a_2$}
\end{picture}\\
\begin{center}
\refstepcounter{diagramm}{\it Figure \arabic{diagramm}}
\label{l}
\end{center}
\end{minipage}
\hspace*{5mm}
\begin{minipage}[t]{4cm}
{\em Surface of genus 0 with 4 marked points:}\\
\setlength{\unitlength}{1cm}
\begin{picture}(3,3.5)
\put(0,1.5){\framebox(4,1){$R_1$}}
\put(-.1,1.4){\LARGE $\bullet$}
\put(3.9,1.4){\LARGE $\bullet$}
\put(-.15,2.4){\LARGE $\star$}
\put(3.85,2.4){\LARGE $\star$}
\put(1.9,1.35){\LARGE $\circ$}
\put(1.75,2.37){ $\square$}
\put(1,1.2){$s_1$}
\put(3,1.2){$s_1$}
\put(1,2.65){$s_2$}
\put(3,2.65){$s_2$}
\put(1.4,1.4){$>$}
\put(2.55,1.4){$<$}
\put(1.4,2.45){$>$}
\put(2.55,2.45){$<$}
\put(-0.5, 2.2){$b_1$}
\put(3.55,1){$a_1$}
\end{picture}\\
\begin{center}
\refstepcounter{diagramm}{\it Figure \arabic{diagramm}}
\label{kk}
\end{center}
\end{minipage}
\end{example}
\subsubsection[Stretching the cylinders]{Stretching
the cylinders\\[2mm]} \label{stretch}
We will now redescribe the Strebel ray\index{Strebel ray} defined by $-q$
by stretching the rectangles in the ``patchwork'' from \ref{patch}
in the vertical direction.\\
The flat structure defined by $-q = e^{\pi i}\cdot q$ is obtained from
the flat structure $\mu$ defined by $q$
by composing each chart with a rotation by $\frac{\pi}{2}$.
Thus the deformation $(X_K,f_K)$ of dilatation $K$ with respect to $-q$
is equal to the affine deformation
\[ \begin{pmatrix} K & 0\\ 0 & 1\end{pmatrix} \circ \begin{pmatrix} 0&-1\\1&0\end{pmatrix} \circ (X,\mu)
\;\; = \;\; \begin{pmatrix} 0 & -K\\ 1 & 0\end{pmatrix} \circ (X,\mu).
\]
This defines by (\ref{PU}) the same point in $T_g$ as the affine deformation
\[\begin{pmatrix} 0&1\\-1&0\end{pmatrix} \circ \begin{pmatrix} 0 & -K\\ 1 & 0\end{pmatrix} \circ (X,\mu)
= \begin{pmatrix} 1 & 0\\ 0 & K\end{pmatrix} \circ (X,\mu). \]
Thus the isometric embedding $\gamma = \gamma_{-q}$ in (\ref{minusq})
is equivalently given by
\begin{equation}\label{qminuszwei}
\gamma_{-q}: \left\{
\begin{array}{lcl}
[0,\infty) & \rightarrow & T_g\\
t & \mapsto & (X_K, f_K) \, = \,
[ \begin{pmatrix} 1 & 0\\ 0 & K\end{pmatrix} \circ (X,\mu),\;\; \mbox{id}],\quad
K = e^t .
\end{array}\right.
\end{equation}
Recall again that here $(X_K,f_K) = (X_K^{-q},f_K^{-q})$
is the Teichm\"uller deformation with
respect to the differential $-q$.\\
Hence we obtain the point $\gamma_{-q}(t)$ as follows:
Each chart of $\mu$ is composed with the map
$x + iy \mapsto x + iKy, \; (x,y \in \mathbb{R}) $ with $K = e^t$,
and the marking is topologically the identity.
Now, let $X$ be given as a patchwork of the rectangles $\bar{R}_1$,
\ldots, $\bar{R}_p$ as in \ref{patch}. Then we obtain the surface
$X_K = X_K^{-q}$ in the following way:
We stretch each rectangle
$\bar{R}_i$, which has horizontal and
vertical edges of lengths $a_i$ and $b_i$, into a rectangle
$\bar{R}_i(K)$ with horizontal
and vertical edges of lengths $a_i$ and $K\cdot b_i$. The identification
maps of the horizontal segments are again translations or rotations
identifying the same segments as before. The surface
$X_K = X^{-q}_K$ then is the patchwork obtained from $\bar{R}_1(K)$, \ldots,
$\bar{R}_p(K)$ as described in \ref{patch}.\\
On $\bar{R_i}$, the diffeomorphism $f_K = f^{-q}_K$
has image $\bar{R}_i(K)$ and is given by
\[x+iy \;\; \mapsto \;\; x + iKy. \]
This glues to a well defined diffeomorphism on $X^*$,
which can be uniquely extended to $X$.
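A short supplementary check (not part of Masur's original argument, but
perhaps helpful) shows why the stretched rectangles can be glued by
identifications of the same type: each identification map is of the form
$\varphi: z \mapsto \pm z + c$ with $c \in \mathbb{C}$, and for
$f_K: x+iy \mapsto x+iKy$ a direct computation gives
\[ f_K\circ\varphi\circ f_K^{-1}:\; z \;\mapsto\; \pm z \,+\, \mbox{Re}(c) + iK\,\mbox{Im}(c), \]
which is again of the form $z \mapsto \pm z + c'$, now with
$c' = \mbox{Re}(c) + iK\,\mbox{Im}(c)$. Hence the maps identifying the
horizontal segments of the $\bar{R}_i(K)$ are translations and rotations
exactly as before.\\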
\begin{example} {\em $K$-stretched surfaces:}\\
\hspace*{7mm}
\begin{minipage}[t]{6cm}
\setlength{\unitlength}{1cm}
\begin{picture}(6,6.7)
\put(0,0){\framebox(5,3){$R_1$}}
\put(0,4){\framebox(2,2){$R_2$}}
\put(-.1,-.1){\LARGE $\bullet$}
\put(1.9,-.1){\LARGE $\bullet$}
\put(.9,2.9){\LARGE $\bullet$}
\put(2.9,2.9){\LARGE $\bullet$}
\put(-.1,3.9){\LARGE $\bullet$}
\put(1.9,3.9){\LARGE $\bullet$}
\put(-.1,5.9){\LARGE $\bullet$}
\put(1.9,5.9){\LARGE $\bullet$}
\put(1,-.3){$s_1$}
\put(3.2,-.3){$s_2$}
\put(0.3,3.15){$s_2$}
\put(2,3.15){$s_3$}
\put(4,3.15){$s_2$}
\put(1,3.8){$s_3$}
\put(1,6.15){$s_1$}
\put(1.5,-.1){$>$}
\put(4,-.1){$>$}
\put(.6,2.92){$>$}
\put(2.6,2.92){$>$}
\put(4.4,2.92){$>$}
\put(1.5,3.9){$>$}
\put(1.5,5.92){$>$}
\put(-0.8, 2){$Kb_1$}
\put(4.3,-.55){$a_1$}
\put(-0.8, 5.5){$Kb_2$}
\put(1.7,3.6){$a_2$}
\end{picture}\\
\begin{center}
\refstepcounter{diagramm}{\it Figure \arabic{diagramm}}
\label{Kl}
\end{center}
\end{minipage}
\begin{minipage}[t]{4.5cm}
\setlength{\unitlength}{1cm}
\hspace*{5mm}
\begin{picture}(3,5.7)
\put(0,3.5){\framebox(4,2){$R_1$}}
\put(-.1,3.4){\LARGE $\bullet$}
\put(3.9,3.4){\LARGE $\bullet$}
\put(-.15,5.4){\LARGE $\star$}
\put(3.85,5.4){\LARGE $\star$}
\put(1.9,3.35){\LARGE $\circ$}
\put(1.75,5.37){ $\square$}
\put(1,3.2){$s_1$}
\put(3,3.2){$s_1$}
\put(1,5.65){$s_2$}
\put(3,5.65){$s_2$}
\put(1.4,3.4){$>$}
\put(2.55,3.4){$<$}
\put(1.4,5.45){$>$}
\put(2.55,5.45){$<$}
\put(-0.8, 5){$Kb_1$}
\put(3.5,3){$a_1$}
\end{picture}\\
\begin{center}
\refstepcounter{diagramm}{\it Figure \arabic{diagramm}}
\label{Kkk}
\end{center}
\end{minipage}\\
One obtains the surface $X_K = X^{-q}_K$
from the surface $X$ in Example \ref{examp}
as the patchwork of
the stretched rectangles in Figure \ref{Kl} and Figure \ref{Kkk},
respectively.
\end{example}
\subsubsection[$X$ as patchwork of double annuli]{$X$ as patchwork
of double annuli\index{patchwork!of double annuli}\\[2mm]}
\label{dan}
Recall that, in \ref{patch}, we used $\mu$ to identify the horizontal
cylinder $Z_i$ on
$X$ with the euclidean cylinder defined by
the rectangle $R_i$ in $\mathbb{C}$; we did so by adding the vertical boundary edges and
identifying them by a translation.
It turns out to be easier to describe the end point of the
Strebel ray, if we identify the $Z_i$ with so called double annuli
$A_i$.
\begin{definition} \label{dani}
A cylinder $Z$ of length $a$ and height $b$
defines a {\em double annulus}\index{double annulus} $A$ as follows:
\begin{itemize}
\item Take two disjoint open annuli $A^1$ and $A^2$ given as
\[A^1 = A^2 = \{z \in \mathbb{C} |\, r \leq |z| < 1\}
\;\;\mbox{ with } r = e^{-\pi\frac{b}{a}}.\]
\item Glue their inner boundary lines $\{|z| = r\}$
by the map $z \mapsto \frac{1}{z}\cdot r^2$.
\item We call the resulting surface $A$ the {\em double annulus}
of $Z$.
\end{itemize}
\end{definition}
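The choice of $r$ in this definition can be motivated by a modulus
computation (a supplementary remark): each half of $Z$ is a Euclidean
cylinder of circumference $a$ and height $\frac{b}{2}$, hence of modulus
$\frac{b}{2a}$, while the annulus $\{z \in \mathbb{C}|\, r \leq |z| < 1\}$ has modulus
\[ \frac{1}{2\pi}\log\frac{1}{r} \;=\; \frac{1}{2\pi}\cdot\pi\frac{b}{a} \;=\; \frac{b}{2a}. \]
Since the modulus is a conformal invariant, $r = e^{-\pi\frac{b}{a}}$ is the
only inner radius for which the biholomorphic identification given below can
exist.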
\begin{remark}
$A$ is biholomorphic to $Z$.
\end{remark}
The identification is given explicitly as follows:
\begin{itemize}
\item $Z$ is biholomorphic
to the Euclidean cylinder defined by the rectangle
\[\{z \in \mathbb{C}|\, 0 \leq \mbox{Re}(z) \leq a,\;\; 0 < \mbox{Im}(z) < b\}.\]
\item
Decompose the rectangle into two halves of height $\frac{b}{2}$,\\
a lower half
$R^1 = \{z \in \mathbb{C}|\;\; 0 \, \leq \, \mbox{Re}(z) \, \leq \, a,\;\;
0 \, < \, \mbox{Im}(z) \, \leq \, \frac{b}{2}\} $\\
and an upper half
$R^2 = \{z \in \mathbb{C}|\;\; 0 \, \leq \, \mbox{Re}(z) \, \leq \, a,\;\;
\frac{b}{2} \, \leq \, \mbox{Im}(z) \, < \, b\}$.
\item
The cylinder defined by
$R^1$ is mapped to $A^1$
by \; $z \; \mapsto \; e^{2\pi i \frac{z}{a}}$.\\[1mm]
The cylinder defined by
$R^2$ is mapped to $A^2$
by $z \; \mapsto \; e^{2\pi i \frac{a+bi-z}{a}}$.\\[1mm]
These maps respect the identifications and define a biholomorphic
map from $Z$ to $A$, as shown in Figure \ref{abab}.
\end{itemize}
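As a consistency check (supplementing Figure \ref{abab}), note that for $z$
on the middle line $\mbox{Im}(z) = \frac{b}{2}$ one has
\[ e^{2\pi i \frac{a+bi-z}{a}} \;=\; e^{2\pi i}\cdot e^{-2\pi\frac{b}{a}}\cdot e^{-2\pi i\frac{z}{a}}
\;=\; r^2\cdot \left(e^{2\pi i \frac{z}{a}}\right)^{-1}, \]
so the images of $z$ in $A^2$ and in $A^1$ are related precisely by the
gluing map $w \mapsto \frac{r^2}{w}$ of Definition \ref{dani}; moreover both
images have modulus $e^{-\pi\frac{b}{a}} = r$, i.~e. they lie on the inner
boundary circles.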
\vspace*{-.5mm}
\begin{minipage}{\linewidth}
\begin{center}
\setlength{\unitlength}{1cm}
\begin{picture}(11,3.5)
\put(0,0.5){\framebox(4,2)}
\qbezier[40](0,1.5)(2,1.5)(4,1.5)
\put(1.8,.8){$R^1$}
\put(1.8,1.9){$R^2$}
\put(-.2,.2){$0$}
\put(3.8,.2){$a$}
\put(-.45,2.3){$bi$}
\put(-.45,1.35){$\frac{b}{2}i$}
\put(9,.7){\circle{1.3}}
\put(9,.7){\circle{.6}}
\put(8.9,.1){$A_1$}
\put(9,2.5){\circle{1.3}}
\put(9,2.5){\circle{.6}}
\put(8.82,2.87){$A_2$}
\put(9,1){\vector(0,1){1.23}}
\put(9,1.15){\vector(0,-1){.15}}
\put(9.9,2.1){$z$}
\put(9.8,1){$\begin{array}{l}
e^{-2\pi \frac{b}{a}}\cdot\frac{1}{z}
\end{array}$}
\put(9.9,1.89){\line(1,0){.2}}
\put(10,1.9){\vector(0,-1){0.7}}
\qbezier(3.5,2)(5.5,3.5)(8.5,2.7)
\put(8.3,2.75){\vector(4,-1){.3}}
\put(5.5,3.1){$z \; \mapsto \; e^{2\pi i \frac{a+bi-z}{a}}$}
\qbezier(3.5,1)(5.5,.5)(8.5,.7)
\put(8.5,.7){\vector(1,0){.1}}
\put(5,.8){$ z \mapsto
\hspace*{5mm} e^{2\pi i \frac{z}{a}}$}
\qbezier[15](9,.7)(9.3,.7)(9.6,.7)
\put(9.2,.5){$r$}
\put(9.7,.5){$1$}
\end{picture}
\end{center}
\begin{center}
\refstepcounter{diagramm}{\it Figure \arabic{diagramm}}
\label{abab}
\end{center}
\end{minipage}
Consider the double annuli $A_1,\ldots ,A_p$
defined by the cylinders $Z_1, \ldots, Z_p$.
The biholomorphic map $Z_i \to A_i$ extends to a continuous
map from the closure of
$Z_i$ to the closure $\bar{A}_i$ of $A_i$. The
zeroes of $q$ on the boundary of $Z_i$ define marked points on the
boundary of $A_i$ and decompose it into segments.
The surface $X$ can now be described as a
{\em patchwork of the closed double annuli}
$\bar{A}_1$, \ldots, $\bar{A}_p$.
The identification maps between the segments on the boundary of the
$A_i$ are essentially the same as in \ref{patch}.
\subsubsection[Contracting the central lines]{Contracting the central lines\\[2mm]}
\label{contract}
Suppose that $X$ is given as a patchwork of double annuli
$\bar{A}_1$, \ldots,
$\bar{A}_p$ as in \ref{dan}. We may describe
the points $(X_K,f_K) = (X_K^{-q},f_K^{-q})$
on the Strebel ray\index{Strebel ray} also as a patchwork of double annuli:\\
Let $A_i(K) = A_i^1(K) \cup A_i^2(K)$ $(i \in \{1, \ldots, p\})$ be the double annulus from
Definition~\ref{dani} with $r = r_i(K) = r_i^K$, where
$r_i = e^{-\pi\frac{b_i}{a_i}}$ is the inner radius of $A_i$,
and define $X_K = X_K^{-q}$ to be the surface
obtained by gluing the closures
$\bar{A}_1(K)$, \ldots, $\bar{A}_p(K)$ with the same
maps as $\bar{A}_1$, \ldots,
$\bar{A}_p$.
Furthermore, define the diffeomorphism $f_K = f_K^{-q}$ on $A_i$
by
\begin{eqnarray*}
&f_K^{-q}:\; A_i^1 \to A_i^1(K) \; \mbox { and } \; A_i^2 \to A_i^2(K),&\\
&\hspace*{8mm}
z=r\cdot e^{i\varphi} \; \mapsto \; r^K\cdot e^{i\varphi} \quad
\mbox{on both parts.}
\end{eqnarray*}
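A supplementary check shows that $f_K^{-q}$ is indeed well defined on the
glued inner boundaries: a point $z = r_i\cdot e^{i\varphi}$ on the inner
circle of $A_i^1$ is glued to $z' = \frac{r_i^2}{z} = r_i\cdot e^{-i\varphi}$
on the inner circle of $A_i^2$, and the images satisfy
\[ f_K^{-q}(z') \;=\; r_i^K\cdot e^{-i\varphi} \;=\; \frac{r_i^{2K}}{f_K^{-q}(z)}, \]
which is exactly the gluing map $z \mapsto \frac{r_i^{2K}}{z}$ of the double
annulus $A_i(K)$. Hence $f_K^{-q}$ descends to a diffeomorphism
$A_i \to A_i(K)$.\\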
Then the following diagram is commutative:\\[1mm]
\begin{minipage}{\linewidth}
\setlength{\unitlength}{1cm}
\begin{picture}(15,7.5)
\put(1,1.5){\circle{2}}
\put(.65,1.02){$\scriptscriptstyle A_i^2(K)$}
\put(1,1.5){\circle{.4}}
\qbezier(1.2,1.5)(2.25,1.7)(3.3,1.5)
\put(3.02,1.57){\vector(4,-1){.3}}
\put(1.85,1.75){$\scriptscriptstyle z \mapsto \frac{\scriptsize r_i^{\scriptscriptstyle 2K}}{\scriptsize z}$}
\put(3.5,1.5){\circle{2}}
\put(3.5,1.5){\circle{.4}}
\put(3.15,1.02){$\scriptscriptstyle A_i^1(K)$}
\put(1,4.4){\vector(0,-1){1.8}}
\put(2.1,3.8){$f_K$}
\put(3.5,4.4){\vector(0,-1){1.8}}
\put(9,0){\framebox(2,3)}
\qbezier[40](9,1.5)(10,1.5)(11,1.5)
\put(9.8,.5){$R_i^1(K)$}
\put(9.8,2.1){$R_i^2(K)$}
\put(9.5,4.6){\vector(0,-1){1.2}}
\put(9.6,3.8){$f_K$}
\put(10.8,-.3){$a_i$}
\put(8.5,3.1){$Kb_i$}
\qbezier[100](9.5,2.26)(6,5)(1.3,2)
\put(1.5,2.05){\vector(-4,-1){.3}}
\put(5.2,3.7){$z \mapsto e^{2\pi i\frac{a_i + Kb_ii-z}{a_i}}$}
\qbezier[70](9.3,1)(6,.7)(3.99,1.3)
\put(4.25,1.23){\vector(-4,1){.3}}
\put(5.5,1.13){$z \mapsto e^{2\pi i\frac{z}{a_i}}$}
\put(1,5.5){\circle{2}}
\put(1,5.5){\circle{.7}}
\put(.8,4.92){$\scriptscriptstyle A_i^2$}
\qbezier(1.35,5.5)(2.25,5.7)(3.142,5.5)
\put(2.865,5.57){\vector(4,-1){.3}}
\put(1.85,5.75){$\scriptscriptstyle z \mapsto \frac{\scriptsize r_i^2}{\scriptsize z}$}
\put(3.5,5.5){\circle{2}}
\put(3.5,5.5){\circle{.7}}
\put(3.3,4.92){$\scriptscriptstyle A_i^1$}
\put(9,5){\framebox(2,1)}
\qbezier[40](9,5.5)(10,5.5)(11,5.5)
\put(9.7,5.65){$R_i^2$}
\put(9.7,5.12){$R_i^1$}
\put(10.8,4.7){$a_i$}
\put(8.8,6.15){$b_i$}
\qbezier[100](9.3,5.7)(6,8)(1.3,6)
\put(1.5,6.05){\vector(-4,-1){.3}}
\put(5.8,7){$z \mapsto e^{2\pi i\frac{a_i+b_i i - z}{a_i}}$}
\qbezier[70](9.3,5.3)(6,4.7)(3.99,5.3)
\put(4.25,5.23){\vector(-4,1){.3}}
\put(5.5,5.2){$z \mapsto e^{2\pi i\frac{z}{a_i}}$}
\end{picture}
\begin{center}
\refstepcounter{diagramm}{\it Figure \arabic{diagramm}}
\end{center}
\end{minipage}\\[3mm]
where \; $f_K = (re^{i\varphi} \mapsto r^Ke^{i\varphi})$ \;
on the left side and \; $f_K = (x+iy \mapsto x+Kiy)$ \;
on the right side of the diagram.
Thus, in particular, $(X_K,f_K) = (X_K^{-q},f_K^{-q})$ as defined here
is the same surface (up to isomorphism) with the same diffeomorphism
as in \ref{stretch}.
\subsubsection[The end point of the Strebel ray]{The end point of the Strebel ray\\[2mm]}
\label{endpoint}
We use the description of the Strebel ray in \ref{contract} to
obtain its end point\index{Strebel ray!end point of} $(X_{\infty}, f_{\infty}) \in \overline{T}_g$.
Recall from \ref{v-tgnq} that a point in $\overline{T}_g$
consists of a stable Riemann surface $X_{\infty}$
and a deformation $f_{\infty}: X \to X_{\infty}$.\\
If $K \to \infty$
in \ref{contract}, the
interior radius $r_i(K) = r_i^K$ of the two annuli $A_i^1(K)$ and $A_i^2(K)$
that form the double annulus $A_i(K)$
tends to $0$ ($i \in \{1,\ldots, p\}$). $A_i(K)$ tends to a double
cone $A_i(\infty)$ and the whole surface $X_K$ to a stable
Riemann surface $X_{\infty}$.
More precisely, we define $A_i(\infty)$ and $X_{\infty}$
as complex spaces in the following way.
\begin{definition}
Let $A_i^1(\infty)$ and $A_i^2(\infty)$ both be
the punctured disk
\[\{z \in \mathbb{C}|\, 0<|z| < 1\},\]
and let $\mbox{\it pt}$ be an arbitrary point.
The disjoint union
\[A_i(\infty) = A_i^1(\infty) \cup A_i^2(\infty) \cup \{\mbox{\it pt}\}\]
becomes a complex cone by the following chart:
\begin{eqnarray*}
&&\varphi:\,\; A_i(\infty) \;\; \to \;\; \{(z_1,z_2) \in \mathbb{C}^2|\, z_1\cdot z_2 = 0,
|z_1|, |z_2| < 1\}\\
&&\varphi|_{A_i^1(\infty)}:\, z \mapsto (0,z),\; \quad
\varphi|_{A_i^2(\infty)}:\, z \mapsto (z,0), \; \quad
\varphi(\mbox{\it pt}) = (0,0)
\end{eqnarray*}
The closures of the double cones $\bar{A}_1(\infty)$, \ldots,
$\bar{A}_p(\infty)$ are glued to each other
by the same identification maps as in the ``finite'' case in
\ref{contract}. We call the resulting stable Riemann surface
$X_{\infty}$. Topologically, $X_{\infty}$ is obtained from the surface $X$
by a contraction $f_{\infty}$ of the middle curves of the cylinders.
\end{definition}
\begin{minipage}{\linewidth}
\begin{center}
\setlength{\unitlength}{.73cm}
\begin{picture}(6.5,6.5)
\put(1,1.5){\circle{2}}
\put(1,1.5){\circle*{.08}}
\put(.65,1.02){$\scriptscriptstyle A_i^2(\infty)$}
\put(.8,1.65){$\mbox{\it pt}$}
\qbezier(1.15,1.53)(3.5,2)(5.85,1.53)
\put(5.7,1.57){\vector(4,-1){.3}}
\put(1.3,1.57){\vector(-4,-1){.3}}
\put(6,1.5){\circle{2}}
\put(6,1.5){\circle*{.08}}
\put(5.65,1.02){$\scriptscriptstyle A_i^1(\infty)$}
\put(5.8,1.65){$\mbox{\it pt}$}
\put(1,4.4){\vector(0,-1){1.8}}
\put(1.1,3.5){$\begin{array}{l}
f_{\infty}:\\
r\cdot e^{i\varphi} \, \mapsto \,
h_{i,\infty}(r)\cdot e^{i\varphi}
\end{array}$}
\put(6,4.4){\vector(0,-1){1.8}}
\put(1,5.5){\circle{2}}
\put(1,5.5){\circle{.7}}
\put(.8,4.92){$\scriptscriptstyle A_i^2$}
\qbezier(1.35,5.5)(3.25,6)(5.642,5.5)
\put(1.65,5.57){\vector(-4,-1){.3}}
\put(5.365,5.57){\vector(4,-1){.3}}
\put(2.5,6){$z \mapsto \frac{r_i^2}{z}$}
\put(6,5.5){\circle{2}}
\put(6,5.5){\circle{.7}}
\put(5.8,4.92){$\scriptscriptstyle A_i^1$}
\end{picture}
\end{center}
\begin{center}
\refstepcounter{diagramm}{\it Figure \arabic{diagramm}}
\end{center}
\end{minipage}\\
\noindent
We now define the contraction $f_{\infty}$
as the following map:
Let $A_i^1$ and $A_i^2$ be the two annuli in Definition \ref{dani}
that form the double annulus $A_i$ ($i\in\{1,\ldots,p\}$).
Then $f_{\infty}$ is
given by
\[
\begin{array}{lllcl}
f_{\infty}:& A_i^j &\to&
A_i^j(\infty) &
\mbox{ for } j \in \{1,2\}\\
&z = r\cdot e^{i\varphi} &\mapsto&
f_{\infty}(z) = h_{i,\infty}(r)\cdot e^{i\varphi}&
\end{array}
\]
with an arbitrary monotonically increasing diffeomorphism $
h_{i,\infty}: [r_i,1) \to [0,1)$. The isotopy class of $f_{\infty}$ is
independent of the choice of $h_{i,\infty}$.\\
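One possible concrete choice (given purely for illustration; by the above,
the isotopy class of $f_{\infty}$ does not depend on it) is the affine map
\[ h_{i,\infty}: [r_i,1) \to [0,1), \quad r \;\mapsto\; \frac{r - r_i}{1 - r_i}, \]
which is a monotonically increasing diffeomorphism as required.\\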
\subsubsection[Convergence]{Convergence\\[2mm]}
\label{conv}
We now show that, in the above notation, the Strebel ray $\gamma_{-q}$
converges to the point
$(X_{\infty},f_{\infty})$
on the boundary of $T_g$. Recall from
(\ref{uke}) in Chapter \ref{volker} that
a base of open neighbourhoods of $(X_{\infty},f_{\infty})$ is given
by the open sets
\[U_{V,\varepsilon}(X_{\infty},f_{\infty}) = \{(X',f')|
\begin{array}[t]{l}
\exists \; \varphi: X' \to X_{\infty}
\mbox{ s.t. }
\varphi \mbox{ is a deformation}, \\
\varphi\circ f' \mbox{ is isotopic to } f_{\infty} \mbox{ and }\\
\varphi|_{X' \backslash\varphi^{-1}(V)}
\mbox{ has dilatation } < 1+\varepsilon
\},
\end{array}
\]
for all compact neighbourhoods $V$ of the singular points of $X_{\infty}$
and for all $\varepsilon > 0$.
We may restrict to compact neighbourhoods $V$ of the form
\[V = V(\kappa) = V_1 \cup \ldots \cup V_p, \quad
\kappa = (\kappa_1, \ldots, \kappa_p),\;\; 0< \kappa_i < 1\]
where $V_i$ is a double cone defined by
\[V_i = V_i^1 \, \cup \, V_i^2 \, \cup \, \{\mbox{\it pt}\} \; \mbox{ with }
V_i^j = \{0 < |z| \leq \kappa_i \} \, \subseteq \, A_i^j(\infty)
\quad (j \in \{1,2\} ).\]
\begin{lemma}\label{lemconv}
For each such $V = V(\kappa)$
and each $\varepsilon > 0$, there is some $K_0 \in \mathbb{R}_{>0}$
such that all points $(X_K,f_K) = (X_K^{-q},f_K^{-q})$ with $K > K_0$
are in $U_{V,\varepsilon}(X_{\infty},f_{\infty})$.
\end{lemma}
\begin{proof}
Choose $K_0$ such that $r_i^{K_0} < \kappa_i$ for all $i \in \{1,\ldots, p\}$
and suppose that $K > K_0$. Define the diffeomorphism
$\varphi: X_K \to X_{\infty}$ on $\bar{A}_i^j(K)$ by
\[\varphi:\; z = r\cdot e^{i\theta} \mapsto
\left\{
\begin{array}{ll}
z \;\, \in \, A_i^j(\infty), \; &\mbox{ if } 1 > |z| \geq \kappa_i\\
h^i_K(r)\cdot e^{i\theta} \;\, \in \, A_i^j(\infty), \;
&\mbox{ if } \kappa_i \geq |z| > r_i^K\\
\mbox{\it pt} \;\, \in \, A_i^j(\infty), \; &\mbox{ if } |z| = r_i^K
\end{array} \right.
\]
with an arbitrary monotonically increasing diffeomorphism
$h^i_K: (r_i^K,\kappa_i) \to (0,\kappa_i)$.
Then $\varphi\circ f_K$ is isotopic to
$f_{\infty}$ and
$\varphi|_{X_K \backslash \varphi^{-1}(V)}$ is holomorphic, hence its
dilatation is
$1$. Thus $(X_K, f_K)$ is in $U_{V,\varepsilon}(X_{\infty},f_{\infty})$.
\begin{center}
\setlength{\unitlength}{1cm}
\begin{picture}(5,3)
\put(.5,2.1){$A_i^j(K)$}
\put(1,1){\circle{4}}
\put(1,1){\circle{.4}}
\qbezier[10](.55,1)(.55,1.45)(1,1.45)
\qbezier[10](1,1.45)(1.45,1.45)(1.45,1)
\qbezier[10](1.45,1)(1.45,.55)(1,.55)
\qbezier[10](1,.55)(.55,.55)(.55,1)
\qbezier(1,0)(.9,0)(0.35,0)
\put(.82,0.05){\line(0,-1){.15}}
\put(.95,-.5){$r_i^K$}
\put(.55,0.05){\line(0,-1){.15}}
\put(.5,-.5){$\kappa_i$}
\put(.35,0.05){\line(0,-1){.15}}
\put(.2,-.5){$1$}
\put(5.6,2.1){$A_i^j(\infty)$}
\put(6,1){\circle{4}}
\put(6,1){\circle*{.03}}
\qbezier[10](5.55,1)(5.55,1.45)(6,1.45)
\qbezier[10](6,1.45)(6.45,1.45)(6.45,1)
\qbezier[10](6.45,1)(6.45,.55)(6,.55)
\qbezier[10](6,.55)(5.55,.55)(5.55,1)
\qbezier(6,0)(5.9,0)(5.35,0)
\put(5.55,0.05){\line(0,-1){.15}}
\put(5.55,-.5){$\kappa_i$}
\put(5.35,0.05){\line(0,-1){.15}}
\put(5.2,-.5){$1$}
\qbezier(1,1.6)(3.5,2.5)(6,1.6)
\put(5.8,1.7){\vector(2,-1){.3}}
\put(3.5,2.15){id}
\qbezier(1.27,1)(4,1)(5.8,1)
\put(5.8,1){\vector(1,0){.03}}
\put(2.2,1.1){$ re^{i\theta} \mapsto h^i_K(r)e^{\scriptscriptstyle i\theta}$}
\end{picture}
\end{center}
\begin{center}
\refstepcounter{diagramm}{\it Figure \arabic{diagramm}}
\end{center}
\end{proof}
With Lemma \ref{lemconv} we have obtained the desired result
and completed the proof of
Proposition~\ref{rayminusq}.
\begin{corollary}
The Strebel ray\index{Strebel ray} defined by $-q$ converges to the point
$(X_{\infty},f_{\infty})$ on the boundary of $T_g$.
\end{corollary}
\subsection{Boundary points of Teichm\"uller disks}\label{bpofdisks}
\index{Teichm\"uller disk!boundary points of}
In this section we study the boundary points of a Teichm\"uller disk
$\Delta = \Delta_{\iota}$ in the
bordification $\overline{T}_g$ of the Teichm\"uller space; in particular, we consider
the case that $\Delta_{\iota}$ projects to an affine curve in the moduli
space $M_g$. For convenience, we use the upper half plane model
and consider Teichm\"uller embeddings as maps from $\HH$ to $T_g$.
We will obtain Theorem \ref{thm} as our final result. We proceed
in two steps:
\begin{itemize}
\item In \ref{iotaquer}, we show that a Teichm\"uller embedding
$\iota:\HH \hookrightarrow T_g$
has a natural extension
\[\bar{\iota}: \HH \cup \{\mbox{cusps of $\Gammaquer^*_{\iota}$}\}
\hookrightarrow
\overline{T}_g.\]
\item
In \ref{igitter}, we show that the image of
$\bar{\iota}$ is the whole closure of $\Delta_{\iota}$ in $\overline{T}_g$,
if the Teichm\"uller disk $\Delta_{\iota}$
projects onto a Teichm\"uller curve in $M_g$.\\
It will follow from this that one obtains the boundary points of $\Delta_{\iota}$
precisely by contracting the central lines of the cylinders in ``parabolic
directions''. The parabolic directions correspond to the cusps of the
projective mirror Veech group $\Gammaquer^*_{\iota}$.
\end{itemize}
Throughout this section, we assume that
$\iota: \HH \hookrightarrow T_g$ is a Teichm\"uller embedding defined by
a fixed holomorphic quadratic differential $q$ on $X = X_{\mbox{\fs ref}}$ and that
$\mu$ is the translation structure defined by $q$ as in
Section~\ref{deform}. Recall from Section \ref{tc} that the associated
projective Veech group $\bar{\Gamma}_{\iota} = \bar{\Gamma}(X,\mu)$
and its mirror image $\Gammaquer^*_{\iota} = R\bar{\Gamma}_{\iota} R^{-1}$ (with
$R$ as in Remark \ref{achtionstogether}) are both Fuchsian groups
in $\mbox{PSL}_2(\mathbb{R})$.
\subsubsection[Extending Teichm\"uller embeddings to the cusps of the
mirror Veech group]{Extending Teichm\"uller embeddings to the cusps of
$\Gammaquer^*$\\[2mm]}
\label{iotaquer}
Let $\tilde{s}\in \mathbb{R}^{\infty} = \mathbb{R} \cup \{\infty\}$ be a cusp of the
Fuchsian group $\Gammaquer^*_{\iota}$, i.e. $\tilde{s}$ is a fixed point of
some parabolic element $\tilde{A}$ of $\Gammaquer^*_{\iota}$. We associate
to $\tilde{s}$ a point
$\bar{\iota}(\tilde{s}) = (X_{\infty}(\tilde{s}),f_{\infty}(\tilde{s}))$
on the boundary of $T_g$ in the following way:
\begin{itemize}
\item In a natural way we associate to $\tilde{s}$ a Strebel ray\index{Strebel ray}.
\item We show that this Strebel ray
is the image in $T_g$ of the hyperbolic ray in $\HH$ from
$i$ to $\tilde{s}$ under $\iota$.
\item
$\bar{\iota}(\tilde{s}) = (X_{\infty}(\tilde{s}),f_{\infty}(\tilde{s}))$ is defined to
be the end point of the
Strebel ray.
\end{itemize}
\noindent
{\em The Strebel ray associated to $\tilde{s}$:}\\[1mm]
$A = R^{-1}\tilde{A}R$ is a parabolic element in the projective Veech group
$\bar{\Gamma}_{\iota}$.
Let $v$ be its unit eigenvector. \\
By Proposition 2.4 in \cite{V},
the direction $v$ is fixed by some affine diffeomorphism $h$ of $(X,\mu)$.
The
derivative of $h$ is $A$ and $v$ is a Strebel direction.
More precisely: The
trajectories in the direction of $v$ are preserved by $h$ and each leaf is
either closed or a saddle connection\index{saddle connection}, i.~e. connects two
critical points.\\
As in \ref{patch}, $X$ decomposes into maximal cylinders\index{cylinder!maximal}
of closed leaves parallel to $v$ and the cylinders are bounded by
saddle
connections. The affine diffeomorphism $h$ can be described nicely
as follows: Passing to a power of $h$ if necessary, one may assume that
$h$ fixes all critical points of $q$. Then $h$ is the composition of
Dehn twists along the core curves of the cylinders. Each trajectory is
mapped by $h$ to itself and the saddle connections are fixed pointwise.\\
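This can be made quantitative by a standard computation (cf. \cite{V}; we
add it here only as an illustration): after rotating $v$ to the horizontal
direction, an affine diffeomorphism with derivative
$\begin{pmatrix} 1 & c\\ 0 & 1 \end{pmatrix}$, i.~e. $(x,y) \mapsto (x+cy,y)$,
maps a horizontal cylinder of circumference $a_i$ and height $b_i$ to itself
as an $n_i$-fold Dehn twist if and only if
\[ c\cdot b_i \;=\; n_i\cdot a_i \quad \mbox{ for some } n_i \in \mathbb{Z}. \]
Such a $c \neq 0$ exists simultaneously for all cylinders precisely if their
moduli $\frac{b_i}{a_i}$ are commensurable; $h$ (or a power of it) is then
the composition of the corresponding Dehn twists.\\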
Now, let us take the matrix
\[U \; = \; U_{\theta} \; \in \, \mbox{SO}_2(\mathbb{R}) \;\; \mbox{ such that } \;
U\cdot v = \vec{e_1} = \begin{pmatrix} 1\\0\end{pmatrix}\]
with $U_{\theta}$ defined as in (\ref{decompose}).\\
Consider the affine deformation
$\mbox{id}: (X,\mu) \to (X,\mu_U) = (X,\mu)\circ U$ as in
Definition \ref{defdeform}. The
vector $v$ is mapped to $\vec{e_1}$.
Thus the same trajectories are now the horizontal ones.\\
Recall from \ref{affdeforms} that the flat structure $(X,\mu_U)$
is defined by the quadratic differential
$e^{2\theta i}\cdot q$. Thus $e^{2\theta i}\cdot q$
is Strebel. The associated Strebel ray is, by (\ref{qminuszwei}), given as:
\begin{eqnarray*}
\gamma_{\tilde{s}} = \gamma_{-e^{2\theta i}\cdot q}\; : \;
[0,\infty) &\to& T_g \\
t &\mapsto & [\begin{pmatrix} 1&0\\0&K\end{pmatrix} \circ (X,\mu_{U_{\theta}}), \mbox{id}]
\;\; = \;\;[(X,\mu_{A_K}), \mbox{id}]
\end{eqnarray*}
with $K = e^t$ and
$A_{K} = \begin{pmatrix} 1 & 0\\ 0& K \end{pmatrix}\cdot U_{\theta}$.\\
\noindent
{\em The Strebel ray $\gamma_{\tilde{s}}$ is the image of the geodesic ray
in $\HH$ from $i$
to the cusp $\tilde{s}$:}\\[1mm]
From
Remark~\ref{ioeinszwei} (see also Figure \ref{gb}) one obtains that
\[ \gamma_{\tilde{s}}(t) =
[(X,\mu_{A_K}), \mbox{id}]
= \hat{\iota}(A_K) = \iota(-\overline{A_K^{-1}(i)}).\]
Furthermore, we have
\[-\overline{A_K^{-1}(i)}
= -\overline{U_{\theta}^{-1}(K\cdot i)} = -U_{\theta}^{-1}(-Ki)
= RU_{\theta}^{-1}R^{-1}(Ki).\]
Thus the image of $\gamma_{\tilde{s}}$
is equal to the image of the
ray $RU_{\theta}^{-1}R^{-1}(Ki)$ ($K \in [1,\infty)$)
under $\iota$. But the latter
is the geodesic ray in $\HH$
from $i$ to $RU^{-1}R^{-1}(\infty) = -U^{-1}(\infty)$.\\
Observe finally that $-U^{-1}(\infty) = \tilde{s}$:\;
Since $U\cdot v = \vec{e_1}$ for the eigenvector $v$ of
$A$, one has for the fixed point $s$ of $A$ that $U(s) = \infty$. Hence,
one has for the fixed point $\tilde{s}$ of $\tilde{A} = RAR^{-1}$ that
$\tilde{s} = -s = -U^{-1}(\infty)$. Thus the Strebel ray defined
by $\gamma_{\tilde{s}}$ is the image of the
geodesic ray from $i$ to $\tilde{s}$ in $\HH$ under $\iota$.\\
Finally we define
$\bar{\iota}(\tilde{s})
= (X_{\infty}(\tilde{s}),f_{\infty}(\tilde{s})) \in \overline{T}_g$
to be the end point of the Strebel ray $\gamma_{\tilde{s}}$.
We then define the map $\bar{\iota}$ as follows.
\begin{definition}
$\bar{\iota}$\index{Teichm\"uller embedding!extension of} is
the extension of $\iota$
defined by
\begin{eqnarray*}
\bar{\iota}: \quad \HH \cup \{\mbox{cusps of $\Gammaquer^*_{\iota}$}\}
&\to& \overline{T}_g,\\
t &\mapsto& \left \{
\begin{array}{l}
\iota(t) , \mbox{ if } t \in \HH\\
\bar{\iota}(t) = (X_{\infty}(\tilde{s}),f_{\infty}(\tilde{s})),
\mbox{ if } t = \tilde{s} \mbox{ is a cusp of $\Gammaquer^*_{\iota}$}
\end{array} \right .
\end{eqnarray*}
\end{definition}
We consider $\HH \cup \{\mbox{cusps of $\Gammaquer^*_{\iota}$}\}$
as topological space endowed with the horocycle topology
as in Example~\ref{horo}.
\begin{proposition} \label{iquer}
$\bar{\iota}$ is a continuous embedding.
\end{proposition}
\begin{proof}
{\em $\bar{\iota}$ is continuous:}\\[2mm]
Let $s$ be a cusp of $\Gammaquer^*_{\iota}$, i.e. $s$ is a fixed point
of some parabolic element $\tilde{A} \in \Gammaquer^*_{\iota}$, and
$c:[0,\infty) \to \HH$ an arbitrary path in $\HH$
converging to $s$ in the horocycle
topology. \\
By Remark \ref{achtionstogether}, the action of $\tilde{A}$
on $\HH$ fits together with the action of $\rho(A) \in \Gamma_g$ on
$\overline{T}_g$. Both actions may be extended continuously to
$\HH_s = \HH \cup \{s\}$ (endowed with the horocycle topology)
and to $\overline{T}_g$, respectively, and one obtains the following commutative
diagram:
\[ \xymatrix{
\HH_{s} =
\HH \cup \{s\}
\ar[rr]^{\bar{\iota}} \ar[d]_{p_{\tilde{A}}} &&
\overline{T}_g\ar[d]^{p} \\
\overline{\HH_s/<\tilde{A}>} \ar[rr]^{i_{\tilde{A}}} &&
\overline{M}_g
}\]
Here the map $ i_{\tilde{A}}:\overline{\HH_s/<\tilde{A}>} \to \overline{M}_g$
is the map induced by
$\bar{\iota}$, and $\overline{\HH_s/<\tilde{A}>}$ \, is a disk with center
$p_{\tilde{A}}(s)$.\\
Let $W$ be a neighbourhood of
\[\bar{P}_{\infty} = i_{\tilde{A}}(p_{\tilde{A}}(s)) = p(\bar{\iota}(s)).\]
For some index set $I$, let
$\{P_{\infty}^i \;|\; i \in I\}$ be the set of preimages of
$\bar{P}_{\infty}$ in $\overline{T}_g$ under $p$. One of them is $\bar{\iota}(s)$, by the
commutativity of the
diagram.\\
Since $\{P_{\infty}^i|\; i \in I\}$ is discrete we may choose the neighbourhood $W$
in such a manner that
its preimage under $p$ is of the form:
\[V = p^{-1}(W) = \bigcup_{i\in I} V_i \;\; \subseteq \, \overline{T}_g\]
where the $V_i$ are the connected components of $V$ with
$P_{\infty}^i \in V_i$ and $V_i$ is invariant under
the stabilizer of $P_{\infty}^i$ in the mapping class group $\Gamma_g$.\\
Furthermore, we may choose $W$ such that the preimage of $W$
under $i_{\tilde{A}}$ is a simply connected neighbourhood of $p_{\tilde{A}}(s)$.
Then, again, the preimage
\[U = p_{\tilde{A}}^{-1}(i_{\tilde{A}}^{-1}(W))\]
is a neighbourhood of $s$ in the horocycle topology.\\
Thus an end piece of the path $c$ is completely contained in $U$, i.e.
there is some $l \in \mathbb{R}_{>0}$ such that $c([l,\infty))$
is contained in $U$.\\
Since the above diagram is commutative and the $V_i$
are disjoint, the image of $U$ is one of the $V_i$.
This $V_i$ then contains $\bar{\iota}(c[l,\infty))$.
In addition, $V_i$ has to contain the end piece of the Strebel ray
leading to $s$ that was used to define $\bar{\iota}(s)$. Hence,
$V_i$ is the component that contains $\bar{\iota}(s)$.\\
Making $W$ arbitrarily small, the neighbourhood $U$ of $s$ becomes
arbitrarily small. Thus $\iota\circ c$ converges to $\bar{\iota}(s)$.\\
\noindent{\em $\bar{\iota}$ is injective:}\\[2mm]
Suppose there are two cusps $s_1$ and $s_2$ with
$P_{\infty} = \bar{\iota}(s_1) = \bar{\iota}(s_2)$. Thus we have two Strebel rays
defined by the negative of the Strebel differentials $q_1 = e^{i\theta_1}\cdot q$ and
$q_2 = e^{i\theta_2}\cdot q$
with initial point $P_{0} = \iota(i)$ and the same end point $P_{\infty}$
in $\overline{T}_g$. Let $(X_{\infty},f_{\infty})$ and $(Y_{\infty},g_{\infty})$
be the two marked stable Riemann surfaces defined by the two Strebel rays, respectively.
Since they define the same point in $\overline{T}_g$ the following diagram
is commutative up to homotopy with some biholomorphic $h$:
\[ \xymatrix{
& X_{\infty}\\
X_{\mbox{\footnotesize ref}} \ar[ru]^{f_{\infty}} \ar[rd]^{g_{\infty}}& \\
& Y_{\infty} \ar[uu]_{h}
}\]
The core curves of the cylinders relative to the flat structure
on $X$ defined
by $q_1$ are mapped by $f_{\infty}$ to the singular points of
$X_{\infty}$. Similarly
the core curves coming from $q_2$ are mapped to the singular points of $Y_{\infty}$.
Since the diagram is commutative up to homotopy, the two
systems of core curves are homotopic. Thus the two Strebel rays
are similar by definition,
using the terminology in \cite[Section 5]{M}. From Theorem~2 in
\cite{M}
it follows that there is some constant $M < \infty$ such that for
two points $Q \neq R$ lying on the two Strebel rays
which are equidistant from the
initial point $P_0$, one has
$d(Q,R) \leq M$. But then, since $\iota$ is an isometric embedding,
$M$ would have to be an upper bound for the distance of equidistant points
on two different geodesic rays in $\HH$ starting from $i$.
This cannot be true.
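For the reader's convenience we make this last step explicit (a standard computation in $\HH$): on the imaginary axis one has $d_{\HH}(ia, ib) = |\log(b/a)|$ for $a, b > 0$. Hence, on the two geodesic rays from $i$ towards $\infty$ and towards $0$, the points $e^{t}i$ and $e^{-t}i$ are both at distance $t$ from $i$, while
\[ d_{\HH}(e^{t}i,\, e^{-t}i) = 2t \;\longrightarrow\; \infty \quad (t \to \infty); \]
any two distinct geodesic rays emanating from $i$ diverge in the same way.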
\end{proof}
\subsubsection[Boundary of Teichm\"uller disks that
lead to Teichm\"uller curves]{Boundary of Teichm\"uller disks
that lead to Teichm\"uller curves\\[2mm]}\label{igitter}\index{Teichm\"uller disk!boundary points of}\index{Teichm\"uller curve!boundary points of}
Let now $\iota: \HH \hookrightarrow T_g$ be a Teichm\"uller embedding such that
its image $\Delta_{\iota}$ projects to a Teich\-m\"uller curve
$C$ in the moduli space $M_g$.
\begin{proposition}\label{iquerfortc}
In this situation, the extended embedding from \ref{iotaquer}
\[\bar{\iota}:\; \HH \cup \{\mbox{\em cusps of }\Gammaquer^*_{\iota}\}
\;\;\hookrightarrow\;\; \overline{\Delta}_{\iota} \; \subseteq \; \overline{T}_g \]
is surjective onto the closure $\overline{\Delta}_{\iota}$
of $\Delta_{\iota}$ in $\overline{T}_g$.
\end{proposition}
\begin{proof}
Recall from Corollary~\ref{latticeproperty} that
if $\iota$ leads to a Teichm\"uller curve then
the projective Veech group $\bar{\Gamma} = \bar{\Gamma}_{\iota}$
is a lattice in $\mbox{PSL}_2(\mathbb{R})$,
$\HH/\Gammaquer^*$ is a complex algebraic curve and
$\HH/\Gammaquer^* \to C \subset M_g$ is
the normalization of $C$.
Thus it extends to a surjective morphism
\[\varphi: \overline{\HH/\Gammaquer^*}
\;\; \to \;\; \overline{C} \quad \subseteq \; \overline{M}_g,\]
where $\overline{\HH/\Gammaquer^*}$ and $\overline{C}$ are
the projective closure of $\HH/\Gammaquer^*$ and the closure of
$C$ in $\overline{M}_g$, respectively.\\
Furthermore, the map $\HH \to \HH/\Gammaquer^*$ extends continuously to
a surjective map $p_{\bar{\Gamma}}: \HH \cup \{\mbox{cusps of } \Gammaquer^*\}
\to \overline{\HH/\Gammaquer^*}$, since $\Gammaquer^*$ is a lattice
in $\mbox{PSL}_2(\mathbb{R})$.
Here we use the horocycle topology on
$\HH \cup \{\mbox{cusps of } \Gammaquer^*\}$.\\
Thus one has the following commutative diagram of continuous maps:
\[ \xymatrix{
\HH \cup \{\mbox{cusps of } \Gammaquer^*\}
\ar[rr]^{\bar{\iota}} \ar[d]_{p_{\bar{\Gamma}}} &&
\;\;\overline{\Delta}_{\iota} \;\; \subseteq \; \overline{T}_g
\ar@<-3.5ex>[d]_{p|_{\,\overline{\Delta}_{\iota}}}
\ar@<3.5ex>[d]^{p} \\
\overline{\HH/\Gammaquer^*} \ar[rr]^{\varphi} &&
\;\;\overline{C} \;\; \subseteq \; \overline{M}_g
}\]
Let now $P_{\infty}$ be a point on the boundary of $\Delta_{\iota}$. Similarly
as in the proof of the continuity of $\bar{\iota}$ we may choose
a neighbourhood $W$ of $p(P_{\infty})$ in $\overline{C}$
such that all connected components
$V_i$ of the preimage $p^{-1}(W)$ contain only one preimage of
$p(P_{\infty})$.
One of them, let's say $V_0$, contains of course $P_{\infty}$ itself.\\
We choose an arbitrary path
$c_{\iota}:[0,\infty) \to W\backslash\{p(P_{\infty})\} \,
\subseteq C$ that converges to $p(P_{\infty})$.
Let $\hat{c}_{\iota}:[0,\infty) \,\to\, V_0$ be an arbitrary lift
of $c_{\iota}$ via $p$ in $V_0$. Since we may choose $W$ arbitrarily small,
$V_0$
may become arbitrarily small and
$\hat{c}_{\iota}$ converges to $P_{\infty}$.\\
Now let
$c:[0,\infty) \,\to\, \HH$ be the preimage of $\hat{c}_{\iota}$
under $\iota$, i.~e.
the path such that $\iota\circ c = \hat{c}_{\iota}$.
We project it by $p_{\bar{\Gamma}}$ to $\overline{\HH/\Gammaquer^*}$,
i.~e. we take the path $p_{\bar{\Gamma}}\circ c$.
Its image under $\varphi$ is
$\varphi \circ p_{\bar{\Gamma}}\circ c = p\circ \hat{c}_{\iota} =
c_{\iota}$
and converges to $p(P_{\infty})$ in $\overline{C}$.
Thus $p_{\bar{\Gamma}}\circ c$ converges
in $\overline{\HH/\Gammaquer^*}$,
since $\varphi$
is an open map.\\
Since also $p_{\bar{\Gamma}}$ is open,
$c$ converges to some
$t_{\infty} \in \HH \cup \{\mbox{cusps of } \Gammaquer^*\}$. By continuity
of $\bar{\iota}$ one has
$\bar{\iota}(t_{\infty}) = P_{\infty}$. Thus $\bar{\iota}$
is surjective onto $\overline{\Delta}_{\iota}$.
\end{proof}
One obtains immediately the following conclusions.
\begin{corollary} \label{finalcor} If $\iota:\HH \hookrightarrow T_g$
leads to a Teichm\"uller curve $C$, then
\begin{enumerate}
\item[a)] the boundary points of the Teichm\"uller disk $\Delta_{\iota}$
are precisely the end points of the Strebel rays in $\Delta_{\iota}$
with initial point $\iota(i)$.
\item[b)] These boundary points correspond to the fixed points of parabolic elements
in the projective Veech group\index{Veech group!parabolic elements of}.
\item[c)] Each boundary point of the Teichm\"uller curve $C$ is obtained
by contracting the core curves of the cylinders\index{cylinder!core curve of} in
the direction of $v$, where
$v$ is the eigenvector of a parabolic element in the Veech group.
\end{enumerate}
\end{corollary}
This finishes the proof of Theorem \ref{thm}.
\section{Braungardt's construction of $\overline{T}_{g,n}$}
\label{volker}
Before we continue our study of Teichm\"uller disks and pass to the boundary,
we want to explain the partial compactification $\overline{T}_{g,n}$\index{Teichm\"uller
space!partial compactification $\overline{T}_{g,n}$} of the Teichm\"uller space $T_{g,n}$ that
we shall use in the subsequent chapters. As mentioned in the introduction,
$\overline{T}_{g,n}$ will be a locally ringed space which, as a topological space,
coincides with Abikoff's augmented Teichm\"uller space\index{Teichm\"uller
space!augmented}\index{augmented Teichm\"uller space} $\hat T_{g,n}$ (see the
discussion following
Proposition \ref{homeo}). The points of this space can be considered as marked
stable Riemann surfaces $(X,f)$, where $f:X_{\mbox{\scriptsize ref}}\to X$ is a deformation map.
The forgetful map $(X,f)\mapsto X$ defines a natural map from $\overline{T}_{g,n}$ to the
moduli space $\overline{M}_{g,n}$ of stable $n$-pointed Riemann surfaces of genus $g$.
This map extends the projection $T_{g,n}\to M_{g,n}$ and is in fact also the quotient
map for the natural action of the mapping class group $\Gamma_{g,n}$. But the
stabilizers of the boundary points are infinite, and at the boundary the
topology of $\overline{T}_{g,n}$ is quite far from that of a manifold.\\
In his thesis \cite{VDiss}, V.~Braungardt gave a construction of $\overline{T}_{g,n}$ which
uses only the complex structure of $\overline{M}_{g,n}$ and the boundary divisor
$\partialM_{g,n}$. Moreover his construction endows $\overline{T}_{g,n}$ with the structure of a
locally ringed space and he shows that it is a fine moduli space for
``marked'' stable Riemann surfaces.
In this chapter we give a brief account of his
approach.
\subsection{Coverings with cusps}
\label{v-cov}
The basic idea of Braungardt's construction\index{Braungardt's construction}
is to study, for a complex manifold $S$,
quotient maps $W\to W/G = S$ that have ``cusps'' over a divisor $D$
in $S$. This
concept, which will be explained in this section, generalizes the familiar
ramified coverings. The key result is that, in
the appropriate category of such quotient maps, there exists a universal
object $p:\tilde W\to S$ with cusps
over $D$.\\
In general $\tilde W$ cannot be a complex manifold or even a complex space.
Therefore we have to work in the larger category of {\it locally complex
ringed spaces}\index{locally ringed space}, i.\,e.\ topological spaces $W$
endowed with a sheaf ${\cal O}_W$ of $\mathbb{C}$-algebras (called the {\it structure
sheaf\index{structure sheaf}}) such that at each point $x\in W$ the stalk
${\cal O}_{W,x}$ is a local $\mathbb{C}$-algebra. The basic properties of such spaces
can be found e.\,g.\ in \cite[Ch.\,1, \S\,1]{GR} (where they are called
$\mathbb{C}$-ringed spaces).\\
In our situation Braungardt constructs a normal locally
complex ringed space $\tilde W$ such that the subspace $\tilde W_0=\tilde W - p^{-1}(D)$ is a complex manifold
and the restriction $p|_{\tilde W_0}:\tilde W_0\to S_0=S - D$ is the usual
universal
covering.
\begin{example}\label{horo}
The simplest example is well known and quite typical: Take
$S$ to be the unit disk $\mathbb D=\{z\in\mathbb{C}:|z|<1\}$ and $D=\{0\}$. The universal
covering of $S - D$ is, of course, $\exp:\HH\to\mathbb D-\{0\}$, $z\mapsto e^{2\pi
iz}$. It turns out that the universal covering in Braungardt's sense is
$\hat\HH=\HH\cup\{\mbox{i}\infty\}$ with the {\it horocycle topology}\index{horocycle topology},
i.\,e.\ the sets
$\HH_R=\{z\in\mathbb{C}:\mbox{Im}\,z>R\}\cup\{\mbox{i}\infty\}$
for $R>0$ form a basis of neighbourhoods of the point $\mbox{i}\infty$. Note
that this topology is not the one induced from the Euclidean topology if
$\HH\cup\{\infty\}$ is considered as a subset of the Riemann sphere
$\hat\mathbb{C}$.\\[1mm]
$\hat\HH$ is given the structure of a normal complex ringed space by taking
${\cal
O}(U)$ to be the holomorphic functions on $U$ for open subsets $U$ of $\HH$,
and by defining
${\cal O}(\HH_R)$ to be the set of holomorphic functions on
$\{z\in\mathbb{C}:\mbox{Im}\,z>R\}$ that have a
continuous extension to $\mbox{i}\infty$. Clearly ${\cal O}(\HH_R)$ contains
the functions $z\mapsto e^{2\pi iz/n}$ for all $n\ge1$.
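Indeed (a brief check): writing $z = x + iy$, one computes
\[ |e^{2\pi iz/n}| = e^{-2\pi y/n} \;\longrightarrow\; 0 \quad (y \to \infty), \]
so each of these functions extends continuously to $\mbox{i}\infty$ with value $0$; note that they are precisely the $n$-th roots of the covering map $z \mapsto e^{2\pi iz}$.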
\end{example}
We now give the precise definitions. We begin with the class of spaces that we
need
(cf.~\cite[3.1.3]{VDiss}):
\begin{definition}
\label{R-space}
Let $(W,{\cal O}_W)$ be a locally complex ringed space whose structure sheaf
${\cal O}_W$ is a subsheaf of the sheaf ${\cal C}^\infty(W,\mathbb{C})$ of continuous
complex valued functions on $W$. \\[1mm]
{\bf a)} A subset $B\subset W$ is called {\it analytic} if there is an open
covering $(U_i)_{i\in I}$ of $W$ and for each $i\in I$ there are finitely many
elements $f_{i,1},\dots,f_{i,n_i}\in{\cal O}_W(U_i)$ such that $B\cap U_i$
is the zero set of $\{f_{i,1},\dots,f_{i,n_i}\}$.\\[1mm]
{\bf b)} We call $(W,{\cal O}_W)$ an {\it
R-space}\index{R-space} if, for every open $U\subseteq W$ and every proper
closed analytic
subset $B\subset U$, a continuous function $f:U\to\mathbb{C}$ is in ${\cal O}_W(U)$ if
and only if its restriction to $U-B$ is in ${\cal O}_W(U-B)$.
\end{definition}
Note that all complex spaces are R-spaces: The required property is just
Riemann's extension theorem\index{Riemann's extension theorem},
see \cite[Chapter 7]{GR}.
\begin{definition}
\label{cusps}
Let $S$ be a complex manifold and $D\subset S$ a proper closed analytic
subset. Then a surjective morphism $p:W\to S$ from an R-space $(W,{\cal O}_W)$
to $S$ is called a
{\it covering with cusps over}\index{covering with cusps} $D$ if there is a
group $G$ of
automorphisms of $W$ (as locally complex ringed space) such that
\begin{itemize}
\item[(i)] $p$ is the quotient map $W\to W/G=S$,
\item[(ii)] $W_0=p^{-1}(S-D)$ is a complex manifold and $p|_{W_0}:W_0\to
S_0=S-D$ is an unramified covering,
\item[(iii)] for any $x\in W$ there is a basis of neighbourhoods $U_x$ that
are {\it precisely invariant}\index{precisely invariant} under the stabilizer
$G_x$ of $x$ in $G$ (i.\,e.\
$G_x(U_x)=U_x$ and $g(U_x)\cap U_x=\emptyset$ for each $g\in G-G_x$).
\end{itemize}
\end{definition}
Note that, in particular, any ramified normal covering of complex manifolds is
a covering in the sense of this definition (with cusps over the branch
locus). As mentioned before, the basic result is (see \cite[Satz 3.1.9]{VDiss})
\begin{theorem}
\label{univ}
{\em (i)} For any complex manifold $S$ and any proper closed analytic subset
$D\subset
S$ there exists an initial object $p:(\tilde W,{\cal O}_{\tilde W})\to S$ in
the category of coverings of $S$ with cusps over $D$; it is
called the {\em universal covering with cusps over}\index{covering with
cusps!universal}
$D$. The restriction of $p$ to $\tilde W_0=p^{-1}(S_0)$ is the universal
covering of $S_0$, and the group
$G=\mbox{\em Aut}(\tilde W/S)$ is the fundamental group $\pi_1(S_0)$.\\[1mm]
{\em (ii)} If $S'$ is an open submanifold of $S$ and $\tilde W'$ the universal
covering of $S'$ with cusps over $D'=D\cap S'$, then $\tilde W'/H'$ embeds as
an open
subspace into $\tilde W$, where $H'$ is the kernel of the homomorphism
$\pi_1(S'-D')\to\pi_1(S-D)=G$.
\end{theorem}
\begin{proof}
We only sketch the construction of the space $(\tilde W,{\cal O}_{\tilde W})$.
The details that it
satisfies all the required properties are worked out in \cite{VDipl}. For the
proof of (ii) we refer to \cite{VDiss}.\\
Let $S_0=S-D$, $G=\pi_1(S_0)$ and $p_0:W_0\to S_0$ the universal covering.
$\tilde W$
is obtained from $W_0$ by ``filling in the holes above $D$'' in such a way
that the $G$-action extends from $W_0$ to $\tilde W$. More formally, the fibre
$\tilde W_s$
of $\tilde W$ over any point $s\in S$ is constructed as follows: let $\mathfrak
U(s)$ be the set of open connected neighbourhoods of $s$ in $S$; for any $U\in
\mathfrak U(s)$ denote by $X(U)$ the set of connected components of
$p_0^{-1}(U)$. Then
\[ \tilde W_s = \{(x_U)_{U\in\mathfrak U(s)}:x_U\in X(U),\,x_U\cap x_{U'}\not=\emptyset\
\mbox{for all}\ U,U'\in\mathfrak U(s)\}.\]
Clearly $\tilde W_s=p_0^{-1}(s)$ for $s\in S_0$. Note that by definition, $G$ acts
transitively on each $\tilde W_s$, thus $\tilde W/G=S$. For any $x=(x_U)\in \tilde W$ define the
sets $x_U\cup\{x\}$, $U\in\mathfrak U(s)$, to be open neighbourhoods of
$x$. Finally define the structure sheaf by
\begin{align}
\label{calO}
{\cal O}_{\tilde W}(U) = \{f:U\to\mathbb{C}\ \mbox{continuous}: f\ \mbox{holomorphic on}\
U\cap \tilde W_0\}
\end{align}
for any open subset $U$ of $\tilde W$.
\end{proof}
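To illustrate the construction in the simplest case (Example~\ref{horo}, $S = \mathbb D$, $D = \{0\}$): for $U = \{z \in \mathbb{C} : |z| < r\} \in \mathfrak U(0)$ one has
\[ p_0^{-1}(U - \{0\}) = \{z \in \HH : \mbox{Im}\,z > R\}, \qquad R = \tfrac{1}{2\pi}\log\tfrac{1}{r}, \]
which is connected; the same holds for every $U \in \mathfrak U(0)$. Thus each $X(U)$ is a singleton, the fibre $\tilde W_0$ consists of the single point $\mbox{i}\infty$, and the deck group $G = \pi_1(\mathbb D - \{0\}) \cong \mathbb{Z}$ fixes it, in accordance with the transitivity of the $G$-action on the fibres.\\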
A key point in Braungardt's proof of Theorem \ref{univ} is the existence of
neighbourhoods $U$ for any point $a\in D$ such that the natural homomorphism
$\pi_1(U-D) \to \lim\limits_{\stackrel{\longrightarrow}{U'\in\mathfrak U(a)}}\,\pi_1(U'-D)$ is an isomorphism. He calls such
neighbourhoods {\it decent}\index{decent neighbourhood}.
The importance of this notion is that if $U$ is a decent neighbourhood of a
point $a\in D$ and $\bar x_U$ a connected component of $p^{-1}(U)$, then $\bar
x_U$ is
precisely invariant under the stabilizer $G_x$ in $G$ of the unique point $x\in
\bar x_U\cap p^{-1}(a)$.\\
Decent neighbourhoods in the above sense do not exist in general for singular
complex spaces. For example, if $S$ is a stable Riemann surface and $s\in S$ a
node, $U-\{s\}$ is not even connected for small neighbourhoods $U$ of
$s$. Nevertheless the construction can be generalized to this case, and
the proof of the theorem carries over to this case as
Braungardt explains in \cite[Anm.\ 3.1.4]{VDiss}; we therefore have:
\begin{corollary}
\label{univstab}
Any stable Riemann surface\index{stable Riemann surface} has a universal covering with cusps over the nodes.
\end{corollary}
Near the inverse image of a node, the universal covering of a stable Riemann
surface looks like two copies of $\hat\HH$ glued together in the cusps. If such a
neighbourhood is embedded into the complex plane or $\mathbb{P}^1(\mathbb{C})$ it is called a
{\it doubly cusped region}\index{doubly cusped region}, cf.\ \cite[VI.A.8]{Mask}.
\subsection{The cusped universal covering of $\overline{M}_{g,n}$}
\label{v-tgnq}
Let us now fix nonnegative integers $g$, $n$ such that $3g-3+n>0$. We want to
construct the space $\overline{T}_{g,n}$ as the universal covering of $\overline{M}_{g,n}$ with cusps
over the compactification (or boundary) divisor $\partial\Mgn$. But we cannot apply
Theorem~\ref{univ} directly to $\overline{M}_{g,n}$ since it is not a
manifold, but only an orbifold (or smooth stack). Braungardt circumvents this
difficulty by
\begin{definition}
\label{covmgnq}
A morphism $p:Y\to\overline{M}_{g,n}$ of locally complex ringed spaces is called a {\it
covering with cusps over} $D=\partialM_{g,n}$ if there is an open covering
$(U_i)_{i\in I}$ of $\overline{M}_{g,n}$ and for each $i\in I$ a covering
$q_i:U'_i\to U_i$ by a complex manifold $U'_i$ such that $p|_{p^{-1}(U_i)}$
factors as
$p^{-1}(U_i)\stackrel{\scriptsize{p'_i}}{\longrightarrow}U'_i\stackrel{\scriptsize{q_i}}{\longrightarrow}U_i$,
where $p'_i$ is a covering with cusps over $q_i^{-1}(D)$ (in the sense of
Definition \ref{cusps}).
\end{definition}
Then one can use Theorem \ref{univ} to prove
\begin{proposition}
\label{tgnq}
There is a universal covering $\overline{T}_{g,n}\to\overline{M}_{g,n}$\index{Teichm\"uller space!partial compactification $\overline{T}_{g,n}$}\index{moduli space!compactification $\overline{M}_{g,n}$} with cusps over
$\partialM_{g,n}$.
\end{proposition}
\begin{proof}
We first construct local universal coverings and then glue them together.
For any $s\in\overline{M}_{g,n}$ choose an open neighbourhood $U$ and a covering
$q':U'\to U$ with a manifold $U'$. Let $\tilde W'$ be the universal covering
of $U'$ with cusps over $D'=q'^{-1}(D)$. Let $H'$ be the kernel of the
homomorphism from $\pi_1(U'-D')$ to $\Gamma_{g,n}$.
Theorem \ref{univ} (ii) suggests that the quotient $\tilde W'/H'$
should be an open part of the universal covering of $\overline{M}_{g,n}$. All that remains
to show is that the
$\tilde W'/H'$ glue together to a covering with cusps over $D$. This is done in \cite[3.2.1]{VDiss}.
\end{proof}
Locally $\overline{T}_{g,n}$ looks like a product of a ball with some copies of the
universal covering $\hat\HH$ of $\mathbb D$ with cusps over $\{0\}$ which
was explained in Section \ref{v-cov}:
\begin{corollary}
\label{local}
Let $x\in\overline{T}_{g,n}$ correspond to a stable Riemann surface $X$ with $k$ nodes. Then $x$ has
a neighbourhood that is isomorphic to
\[\hat\HH^k\times\mathbb D^{3g-3+n-k}.\]
\end{corollary}
\begin{proof}
Let $s\in\overline{M}_{g,n}$ be the image point of $x$. The deformation theory of stable
Riemann surfaces gives us a map from $\mathbb D^{3g-3+n}$ onto a neighbourhood of
$s$ such that the inverse image of $D=\partial\Mgn$ is the union of the coordinate hyperplanes
$D'=\{(z_1,\dots,z_{3g-3+n}): z_1\cdot\ldots\cdot z_k = 0\}$, see
\cite[Sect.~3B]{HM}.
The fundamental group of $\mathbb D^{3g-3+n}-D'$ is a free abelian group on $k$
generators; they correspond to Dehn twists about the loops that are
contracted in $X$. Thus the homomorphism
$\pi_1(\mathbb D^{3g-3+n}-D')\to\Gamma_{g,n}$ is injective. By Proposition~\ref{tgnq} and
its proof the universal covering $\tilde W$ of $\mathbb D^{3g-3+n}$ with cusps over $D'$ is
therefore a neighbourhood of $x$. It is not hard to see that $\tilde W$ is of the
given form.
\end{proof}
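A minimal concrete instance of Corollary~\ref{local}: for $(g,n) = (1,1)$ one has $3g-3+n = 1$, so a point of $\overline{T}_{1,1}$ corresponding to a stable surface with one node ($k = 1$) has a neighbourhood isomorphic to
\[ \hat\HH^{1} \times \mathbb D^{0} = \hat\HH, \]
the cusped upper half plane with the horocycle topology from Example~\ref{horo}.\\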
Our next goal is to compare $\overline{T}_{g,n}$ to the {\it augmented} Teichm\"uller space
$\hat T_{g,n}$ introduced by Abikoff \cite{Abideg}.
\begin{proposition}[cf.~\cite{VDiss}, Satz 3.4.2]
\label{homeo}
$\overline{T}_{g,n}$\index{Teichm\"uller space!partial compactification $\overline{T}_{g,n}$} is homeomorphic to the augmented Teichm\"uller space\index{augmented Teichm\"uller space} $\hat T_{g,n}$.
\end{proposition}
Before proving the proposition we summarize the definition and some properties
of $\hat T_{g,n}$: As a point set,
\begin{align}\label{tgndach}
\hat T_{g,n} = \{(X,f): X\ \mbox{a stable Riemann surface of type}\ (g,n),\nonumber\\
f:X_{\mbox{\scriptsize ref}}\to X\ \mbox{a deformation}\}/\sim
\end{align}
As mentioned in the introduction, a deformation\index{deformation map} is a map that contracts some
disjoint loops on $X_{\mbox{\scriptsize ref}}$ to points (the nodes of $X$) and is a homeomorphism
otherwise. The equivalence relation is the same as for $T_{g,n}$:
$(X,f)\sim(X',f')$ if and only if there is a biholomorphic map $h:X\to X'$
such that $f'$ is homotopic to $h\circ f$.\\[1mm]
Abikoff puts a topology on $\hat T_{g,n}$ by
defining neighbourhoods $U_{V,\varepsilon}$ of a point $(X,f)$
for a compact neighbourhood $V$ of the set of nodes in $X$ and
$\varepsilon>0$:
\begin{align}\label{uke}
U_{V,\varepsilon} = \{(X',f'):\exists\ \mbox{deformation}\ h:X'\to X,\ \
(1+\varepsilon)\,\mbox{-quasiconformal}\nonumber\\
\mbox{on}\ h^{-1}(X-V),\ \mbox{such that}\ f\ \mbox{is homotopic to}\
h\circ f'\}/\sim
\end{align}
The action of the mapping class
group $\Gamma_{g,n}$ extends continuously to $\hat T_{g,n}$ (\cite[Thm.~4]{Abideg}), and the
orbit space $\hat T_{g,n}/\Gamma_{g,n}$ is $\overline{M}_{g,n}$ (as a topological space).
\begin{proof}[Proof of Proposition \ref{homeo}]
Braungardt shows (see \cite[Hilfssatz 3.4.4]{VDiss}) that the stabilizer
of a point $(X,f)\in\hat T_{g,n}$ in $\Gamma_{g,n}$ is an extension of the free abelian group
generated by the Dehn twists about the contracted loops by the holomorphic
automorphism group Aut$(X)$ of $X$. For any $V$ and
$\varepsilon$, $\bigcap\limits_{\sigma\in\mbox{\scriptsize Aut}(X)}\sigma(U_{V,\varepsilon})$ is
invariant under the stabilizer of $(X,f)$, and for sufficiently small $V$ and
$\varepsilon$, it is precisely invariant. Therefore the quotient map $\hat T_{g,n}\to
\overline{M}_{g,n}$ is a covering with cusps over $\partial\Mgn$ in the sense of Definition
\ref{covmgnq}, except that so far no structure
sheaf has been defined on $\hat T_{g,n}$. But this can be done in the same way as in
(\ref{calO}).
The universal property of $\overline{T}_{g,n}$ then
yields a map $p:\overline{T}_{g,n}\to\hat T_{g,n}$ compatible with the action of $\Gamma_{g,n}$ on both
sides. To show that this map is an isomorphism we compare the stabilizers in
$\Gamma_{g,n}$ for the points in both spaces. For a point in $\hat T_{g,n}$ we just
described this stabilizer, and the proof of Corollary~\ref{local} shows that
for a corresponding point in $\overline{T}_{g,n}$ it is also an extension of $\mathbb{Z}^k$ by
Aut$(X)$.
\end{proof}
\subsection{Teichm\"uller structures}
\label{v-structure}
In this section we explain how Braungardt extends the universal family\index{universal family}
of marked Riemann surfaces that is well known to exist over $T_{g,n}$ to a family
over $\overline{T}_{g,n}$ which still is universal for the appropriate notion of marking or
{\it Teich\-m\"uller structure}.\\
As above we fix a reference Riemann surface $X_{\mbox{\scriptsize ref}}$ of type $(g,n)$; let
$Q_1,\dots,Q_n$ be the marked points and $X^0_{\mbox{\scriptsize ref}} = X_{\mbox{\scriptsize ref}} -
\{Q_1,\dots,Q_n\}$. Let us also fix a universal covering $U_{\mbox{\scriptsize ref}}\toX^0_{\mbox{\scriptsize ref}}$ and
identify $\pi_{g,n}=\pi_1(X^0_{\mbox{\scriptsize ref}})$ with the group Aut$(U_{\mbox{\scriptsize ref}}/X^0_{\mbox{\scriptsize ref}})$ of deck
transformations.\\
A classical construction of the family ${\cal C}_{g,n}$ over $T_{g,n}$ goes as follows
(cf.~\cite{Bers}): For
every point $x=(X,P_1,\dots,P_n,f)\inT_{g,n}$ take a universal covering of $X^0 =
X-\{P_1,\dots,P_n\}$ and arrange them so that they form an $\HH$\,-bundle
$\Omega^+$ over $T_{g,n}$. Then ${\cal C}_{g,n}$ is obtained as the quotient of
$\Omega^+$ by the natural action of $\pi_{g,n}$. More precisely, $\Omega^+$ is defined as follows: to $x\inT_{g,n}$ there corresponds the quasifuchsian group $G_x=w^\mu G(w^\mu)^{-1}$, where $G=\mbox{Aut}(U_{\mbox{\scriptsize ref}}/X^0_{\mbox{\scriptsize ref}})\cong\pi_{g,n}$ and $w^\mu$ is the quasiconformal automorphism of $\mathbb{P}^1(\mathbb{C})$ associated to $x$, see e.\,g.\ \cite[6.1.1]{IT}. The domain of discontinuity of $G_x$ consists of two connected components $\Omega^-(x)=w^\mu(L)$ (where $L$ is the lower half plane) and $\Omega^+(x)=w^\mu(\HH)$. Then $\Omega^+(x)/G_x=X^0$, whereas $\Omega^-(x)/G_x={X^{0,*}_{\mbox{\scriptsize ref}}}$, the mirror image of $X^0_{\mbox{\scriptsize ref}}$.\\
To extend this family we identify $\overline{T}_{g,n}$ with $\hat T_{g,n}$ via Proposition
\ref{homeo}. As explained in \cite{Abideg}, any point
$x=(X,P_1,\dots,P_n,f)\in\hat T_{g,n}-T_{g,n}$ corresponds to a {\it regular B-group}\index{regular B-group}
$G_x$. This means that $G_x$ is a Kleinian group isomorphic to $\pi_{g,n}$ whose domain of discontinuity $\Omega(G_x)$ has a
unique simply connected invariant component $\Omega^-(G_x)$ such that $\Omega^-(G_x)/G_x$ is
isomorphic to ${X^{0,*}_{\mbox{\scriptsize ref}}}$. For the union $\Omega^+(G_x)=\Omega^+(x)$ of the other
components of $\Omega(G_x)$ it holds that $\Omega^+(G_x)/G_x\cong
X^0-\{\mbox{nodes}\}$. To every node in $X$ there corresponds a conjugacy
class of parabolic elements in $G_x$, each of which is
accidental\index{accidental parabolic element} (i.\,e.\ it becomes hyperbolic in the Fuchsian group $hG_xh^{-1}$,
where $h:\Omega^-(G_x)\to\HH$ is a conformal map). Near a fixed point of such a
parabolic element, $\Omega^+(G_x)$ is a doubly cusped region\index{doubly cusped region}, cf.\ the remark at the end of
Section \ref{v-cov}. If we denote by $\hat\Omega^+(x)$ the union of $\Omega^+(G_x)$ with
the fixed points of the parabolic elements in $G_x$ (accidental or not), then $\hat\Omega^+(x)\to X$ is
the universal covering of $X$ with cusps over the nodes (cf.~Corollary
\ref{univstab}).
\begin{definition}
\label{cgnq}
Let
\begin{align}
\hat\Omega^+_{g,n}=\{(x,z)\in\overline{T}_{g,n}\times\mathbb{P}^1(\mathbb{C}):z\in\hat\Omega^+(x)\}.\nonumber
\end{align}
On $\hat\Omega^+_{g,n}$, $\pi_{g,n}$ acts in such a way that for fixed $x\in\overline{T}_{g,n}$ the action on $\Omega^+(x)$ is that of $G_x$. $\overline{\cal C}_{g,n}=\hat\Omega^+_{g,n}/\pi_{g,n}$ is called the {\it universal family} over $\overline{T}_{g,n}$\index{universal family over $\overline{T}_{g,n}$}.
\end{definition}
Braungardt shows (\cite[Hilfssatz 4.2.1]{VDiss}) that
$\Omega^+_{g,n}=\{(x,z)\in\hat\Omega^+_{g,n}: x \in T_{g,n}, \; z\in\Omega^+(x)\}$ is an open subset of
$\overline{T}_{g,n}\times\mathbb{P}^1(\mathbb{C})$ and hence has a well defined structure of a complex
ringed space. One can extend this structure sheaf to all of $\hat\Omega^+_{g,n}$ in the
same way as in (\ref{calO}). Then clearly $\overline{\cal C}_{g,n}$ is also a complex ringed
space, and the fibre over $x$ is isomorphic to the stable Riemann surface $X$
represented by $x$.\\
To justify the name ``universal'' family for $\overline{\cal C}_{g,n}$ we introduce the notion
of a Teichm\"uller structure\index{Teichm\"uller structure}: For a single smooth Riemann surface $(X,P_1,\dots,P_n)$ of
type $(g,n)$, a Teichm\"uller structure is just a marking: so far in this article we have used markings as classes of mappings $X_{\mbox{\scriptsize ref}}\to X$; equivalently a marking can be given as an isomorphism $\pi_{g,n}\to\pi_1(X-\{P_1,\dots,P_n\})$ inducing an
isomorphism $\pi_g=\pi_1(X_{\mbox{\scriptsize ref}})\to\pi_1(X)$ and respecting the orientation and the
conjugacy classes of the loops around the $Q_i$ resp.\ $P_i$. Yet another equivalent way to give a
marking is as a universal covering $U\to X^0$ together with an isomorphism
$\pi_{g,n}\to \mbox{Aut}(U/X^0)$. This last characterization also works for a stable Riemann surface if
we take for $U$ a universal covering with cusps over the nodes. Before we
extend this definition to the relative situation we recall the notion of a
family of stable Riemann surfaces:
\begin{definition}
\label{famstab}
Let $S$ be a complex ringed space. A {\it family of stable Riemann surfaces}\index{stable Riemann surfaces!family of} of type $(g,n)$ over
$S$ is a complex ringed space ${\cal C}$ together with a proper flat map
$\pi:{\cal C}\to S$
such that the fibres $X_s=\pi^{-1}(s)$, $s\in S$, are stable Riemann surfaces
of genus $g$. In addition we are given $n$ disjoint sections $P_i:S\to {\cal
C}$, $i=1,\dots,n$, of $\pi$ such that $P_i(s)$ is not a node on $X_s$. We denote by ${\cal
C}^0={\cal C}-\bigcup\limits_{i=1}^nP_i(S)$ the complement of the marked sections.
\end{definition}
\begin{definition}
\label{t-structure}
Let ${\cal C}/S$ be a family of stable Riemann surfaces of type $(g,n)$ over a complex ringed space $S$. A {\it
Teichm\"uller structure} on ${\cal C}$ is a complex ringed space ${\cal U}$ together with a morphism ${\cal U}\to {\cal C}$ such
that for every $s\in S$ the (restriction of the) fibre $U_s^0\to X_s^0$ is a universal covering
with cusps over the nodes, together with an isomorphism
$\pi_{g,n}\to\mbox{Aut}({\cal U}/{\cal C}^0)$.
\end{definition}
Putting everything together we obtain
\begin{theorem}
\label{fein}
$\overline{T}_{g,n}$ is a fine moduli space for stable Riemann surfaces with Teichm\"uller
structure. $\overline{\cal C}_{g,n}\to\overline{T}_{g,n}$ is the universal family and $\hat\Omega^+_{g,n}\to\overline{\cal C}_{g,n} =
\hat\Omega^+_{g,n}/\pi_{g,n}$ is the universal Teichm\"uller structure.
\end{theorem}
Finally Braungardt gives a very elegant and conceptual description of $\overline{\cal C}_{g,n}$
which extends a classical result of Bers (\cite[Thm.~9]{Bers}) to the boundary:
\begin{proposition}
\label{tgn+1}
$\overline{T}_{g,n+1}/\pi_{g,n}$ is in a natural way isomorphic to $\overline{\cal C}_{g,n}$.
\end{proposition}
\begin{proof}
The kernel of the obvious homomorphism $\Gamma_{g,n+1}\to\Gamma_{g,n}$ can be identified with
$\pi_{g,n}$, which gives the action on $\overline{T}_{g,n+1}$. The holomorphic map $T_{g,n+1}\to T_{g,n}$
which forgets the last marked point extends to a map $\overline{T}_{g,n+1}\to\overline{T}_{g,n}$ by a
general property of universal coverings with cusps. The difficult step in
Braungardt's proof is to show that the induced map $\overline{T}_{g,n+1}/\pi_{g,n}\to\overline{T}_{g,n}$ has the right fibres. For this
purpose he constructs a map $\hat\Omega^+_{g,n}\to\overline{T}_{g,n+1}$ and shows that it is
bijective and induces isomorphisms on the fibres over $\overline{T}_{g,n}$.
\end{proof}
\section{Introduction} \label{intro}
One of the original motivations that led to the discovery of Teichm\"uller space\index{Teichm\"uller space}
was to better understand the classification of Riemann surfaces. Riemann
himself already saw that the compact Riemann surfaces of genus $g$ with $n$
marked points depend on $3g-3+n$ complex parameters (if this number is
positive). More precisely, there is a complex analytic space $M_{g,n}$\index{moduli space} whose
points correspond in a natural way to the isomorphism classes of such Riemann
surfaces. $M_{g,n}$ is even an algebraic variety, but its geometry is not easy to
understand. Most of the basic properties are known today, but many
questions on the finer structure of $M_{g,n}$ are still open.\footnote{Although
we consider this general setting in a large part of this paper, we shall
restrict ourselves in this introduction to the case $n=0$ and write, as usual,
$M_g$ instead of $M_{g,0}$ (and later $T_g$ instead of $T_{g,0}$).}\\
Many classification problems become more accessible if the objects are endowed
with an additional structure or {\it marking}\index{marked Riemann surface}. The general strategy is to
first classify the marked objects and then, in a second step, to try to
understand the equivalence relation that forgets the marking. The markings
that Teichm\"uller introduced for a compact Riemann surface $X$ consist of
orientation preserving diffeomorphisms $f:X_{\mbox{\scriptsize ref}}\to X$ from a reference Riemann
surface $X_{\mbox{\scriptsize ref}}$ to $X$. Markings $(X,f)$ and $(X',f')$ are considered the same
if $f'\circ f^{-1}$ is homotopic to a biholomorphic map. Thus different
markings of a fixed Riemann surface differ by a homotopy class of
diffeomorphisms of $X_{\mbox{\scriptsize ref}}$. In other words the {\it mapping class group}\index{mapping class group}\index{Teichm\"uller modular group} (or
{\it Teichm\"uller modular group})
\begin{equation}\label{mapgroup}
\Gamma_g=\mbox{Diffeo}^+(X_{\mbox{\scriptsize ref}})/\mbox{Diffeo}^0(X_{\mbox{\scriptsize ref}})
\end{equation}
acts on the set $T_g$ of all marked Riemann surfaces of genus $g$, and the
orbit space $T_g/\Gamma_g$ is equal to $M_g$ (here $\mbox{Diffeo}^+(X_{\mbox{\scriptsize ref}})$
denotes the group of orientation preserving diffeomorphisms of $X_{\mbox{\scriptsize ref}}$ and
$\mbox{Diffeo}^0(X_{\mbox{\scriptsize ref}})$ the subgroup of those that are homotopic to the
identity).\\
Teichm\"uller discovered that in each homotopy class of diffeomorphisms
between compact Riemann surfaces $X$ and $X'$ there is a unique ``extremal
mapping'', i.\,e.\ a quasiconformal map with minimal dilatation. The logarithm
of this dilatation puts a metric on $T_g$, the {\it Teichm\"uller
metric}\index{Teichm\"uller metric}. With it $T_g$ is a complete metric space, diffeomorphic to
$\mathbb{R}^{6g-6}$, and $\Gamma_g$ acts on $T_g$ by isometries. There is
also a structure as complex manifold on $T_g$, for which the elements of $\Gamma_g$ act holomorphically and thus
make the quotient map $T_g\to M_g$ into an analytic map between complex spaces.
\\
That the complex structure on $T_g$ is the ``right one'' for the
classification problem can be seen from the fact that there is a family
${\cal C}_g$ of Riemann surfaces over $T_g$ which in a very precise sense is
universal\index{universal family}\index{Riemann surfaces!universal family of}. This family can be obtained as follows: By the uniformization
theorem, the universal covering of a compact Riemann surface $X$ of genus
$g\ge 2$ is (isomorphic to) the upper half plane $\HH$. Any marking
$f:X_{\mbox{\scriptsize ref}}\to X$ induces an isomorphism $f_*$ from $\pi_g = \pi_1(X_{\mbox{\scriptsize ref}})$,
the fundamental group of the reference surface, to $\pi_1(X)$. We may
obtain a holomorphic action of $\pi_g$ on $T_g\times\HH$ as follows: for
$\gamma\in\pi_g$, $x=(X,f)\in T_g$ and $z\in\HH$ put
\[\gamma(x,z) = (x,f_*(\gamma)(z)),\]
where we identify $\pi_1(X)$ with the group of deck transformations of the
universal covering $\HH\to X$.
The quotient ${\cal C}_g = (T_g\times\HH)/\pi_g$ is a complex manifold with a natural
projection $p:{\cal C}_g\to T_g$; the fibre $p^{-1}(X,f)$ is isomorphic to
$X$. Moreover $p$ is proper and therefore $p:{\cal C}_g\to T_g$ is a {\it family of
Riemann surfaces}. The representation of ${\cal C}_g$ as a quotient of a manifold
by an action of $\pi_g$ is called a {\it Teichm\"uller structure}\index{Teichm\"uller structure} on this
family. It follows from results of Bers on the uniformization of families (see e.\,g.\ \cite[Thm.\,XVII]{B}) that this family is {\it universal}, i.\,e.\ every other family of
Riemann surfaces of genus $g$ with a Teichm\"uller structure can be obtained
as a pullback from $p:{\cal C}_g\to T_g$. In fancier language: $T_g$ is a fine
moduli space\index{moduli space!fine} for Riemann surfaces of genus $g$ with Teichm\"uller
structure.\\
It follows by the same arguments that for any family $\pi:{\cal C}\to S$ of Riemann
surfaces (over some complex space $S$) there is an analytic map
$\mu=\mu_\pi:S\to M_g$, which maps $s\in S$ to the point in $M_g$ that
corresponds to the isomorphism class of the fibre
$\pi^{-1}(s)$. Unfortunately, $\Gamma_g$ does not act freely on $T_g$;
therefore the quotient of ${\cal C}_g$ by the action of $\Gamma_g$ does not give a
universal family over $M_g$: the fixed points of elements in $\Gamma_g$
correspond to automorphisms of the Riemann surface, and the fibre over $[X]\in
M_g$ in the family ${\cal C}_g/\Gamma_g\to M_g$ is the Riemann surface
$X/\mbox{Aut}(X)$ (whose genus is strictly less than $g$ if $\mbox{Aut}(X)$ is
nontrivial). As a consequence, $M_g$ is not a fine moduli space for Riemann
surfaces, but only a ``coarse''\index{moduli space!coarse} one (see e.\,g.\ \cite[1A]{HM} for a precise
definition of fine and coarse moduli spaces).\\
There are several equivalent ways to define markings of Riemann surfaces and to describe Teichm\"uller space. Instead of classes of diffeomorphisms $f:X_{\mbox{\scriptsize ref}}\to X$ often conjugacy classes of group isomorphisms $\pi_g\to\pi_1(X)$ are used as markings. For the purpose of this paper the approach to Teich\-m\"uller space via Teichm\"uller deformations\index{Teichm\"uller deformation} is very well suited; it is developed in Section \ref{deform}. The starting point is the observation that a holomorphic quadratic differential $q$ on a Riemann surface $X$ defines a flat structure $\mu$ on $X^* = X-\{\mbox{zeroes of\ }q\}$. Composing the chart maps of $\mu$ with a certain (real) affine map yields a new point in $T_g$. Any point in $T_g$ is in a unique way such a Teichm\"uller deformation of a given base point $(X_{\mbox{\scriptsize ref}},\mbox{id})$, cf.~Section \ref {sr}.\\
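In flat coordinates this deformation takes an explicit form (a sketch of the standard normalization, with stretch factor $K>1$ as used later for Strebel rays): in a natural coordinate $z=x+iy$ of the flat structure $\mu$, the Teichm\"uller deformation replaces each chart by the affine map
\[x+iy\;\longmapsto\;Kx+iy,\qquad\mbox{i.\,e.}\quad z\;\longmapsto\;\tfrac{1}{2}\bigl((K+1)z+(K-1)\bar z\bigr),\]
a quasiconformal map of dilatation $K$; the composed charts define the complex structure of the deformed surface.\\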
The main objects of interest in this article are {\it Teichm\"uller embeddings}\index{Teichm\"uller embedding}, i.\,e.\ holomorphic isometric embeddings $\iota:\HH\to T_g$ (or $\iota:\mathbb D\to T_g$), where $\HH$ (resp.\ $\mathbb D$) is given
the hyperbolic metric and $T_g$ the Teichm\"uller metric, see Definition \ref{emb}. The restriction of
$\iota$ to a hyperbolic geodesic line in $\HH$ (or $\mathbb D$) is then a (real) geodesic line
in $T_g$ in the usual sense. The image $\Delta_\iota$ of such an embedding $\iota$ is called a
{\it Teichm\"uller geodesic}\index{Teichm\"uller geodesic} or {\it Teichm\"uller disk}\index{Teichm\"uller disk} in $T_g$. There are
plenty of Teichm\"uller disks in $T_g$. To see this note first that the tangent space
to $T_g$ at a point $x=(X,f)\in T_g$ is naturally isomorphic to the vector space
$Q_X=H^0(X,\Omega_X^{\otimes 2})$ of holomorphic quadratic differentials on $X$ (this results from the Bers embedding of $T_g$ as a bounded open subdomain of $Q_X$). We shall explain in Section \ref{defdisks} in three different ways how one can, for a given holomorphic quadratic differential $q$ on a Riemann surface $X$, construct a Teichm\"uller embedding $\iota:\mathbb D\to T_g$ with $\iota(0)=x$ and $\iota'(0)=q$. This shows that for any $x\in T_g$ and any
(complex) tangent vector at $x$ there is a Teichm\"uller disk passing
through $x$ in direction of the given tangent vector.\\
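One of these constructions can be summarized by a formula (a sketch; the normalizations of the three variants in Section \ref{defdisks} differ slightly): for $t\in\mathbb D$ consider the Beltrami differential
\[\mu_t\;=\;t\,\frac{\bar q}{|q|}\]
on $X$, and let $\iota(t)\in T_g$ be the marked Riemann surface obtained from a quasiconformal map with Beltrami coefficient $\mu_t$. Then $\iota$ is a Teichm\"uller embedding with $\iota(0)=x$ and tangent direction $q$ at $x$.\\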
There are several natural and closely related objects attached to a Teich\-m\"uller disk $\Delta_\iota$ (or a Teichm\"uller embedding $\iota:\HH\to T_g$): The first is a discrete subgroup of $\mbox{PSL}_2(\mathbb{R})$ called the {\it (projective) Veech group}\index{Veech group} $\bar\Gamma_\iota$, cf.~Section \ref{vg}. If $q$ is the quadratic differential on the Riemann surface $X$ by which $\iota$ is induced, $\bar\Gamma_\iota$ consists of the derivatives of those diffeomorphisms of $X$ that are affine with respect to the flat structure defined by $q$. Veech showed that this subgroup of $\mbox{PSL}_2(\mathbb{R})$ is always discrete (\cite[Prop.~2.7]{V}).\\[1mm]
A second group naturally attached to $\iota$ is the {\it stabilizer}\index{Teichm\"uller disk!stabilizer}
\[\mbox{Stab}(\Delta_\iota) = \{\varphi\in\Gamma_g:\varphi(\Delta_\iota)=\Delta_\iota\}\]
of $\Delta_\iota$ in the Teichm\"uller modular group.
The pointwise stabilizer
\[\mbox{Stab}^0(\Delta_\iota)=\{\varphi\in\Gamma_g:\varphi|_{\Delta_\iota}=\mbox{id}_{\Delta_\iota}\}\]
is a finite subgroup of $\mbox{Stab}(\Delta_\iota)$, and $\mbox{Stab}(\Delta_\iota)/\mbox{Stab}^0(\Delta_\iota)$ is then (via $\iota$) a
group of isometries of $\HH$ and thus a subgroup of $\mbox{PSL}_2(\mathbb{R})$. This subgroup coincides with the projective Veech group $\bar\Gamma_\iota$, see Section \ref{lattice}.\\[1mm]
Given a Teichm\"uller embedding $\iota$ we are also interested in the
image $C_\iota$ of $\Delta_\iota$ in the moduli space $M_g$.
The map $\Delta_\iota\to C_\iota$ obviously factors through the Riemann surface
$\HH/\bar\Gamma_\iota$ or rather through its mirror image $V_\iota$, see Section \ref{tc} and in particular \ref{lattice}. The typical case seems to be $\mbox{Stab}(\Delta_\iota)=\{\mbox{id}\}$ (although it
is not trivial to give explicit examples). Much attention has been given in
recent years to the other extreme case that $\bar\Gamma_\iota$ is a lattice\index{lattice in $\mbox{PSL}_2(\mathbb{R})$} in
$\mbox{PSL}_2(\mathbb{R})$. Then $V_\iota$ is of finite hyperbolic volume and hence a
Riemann surface of finite type, or equivalently an algebraic curve. In this
case the induced map $V_\iota\to C_\iota$ is birational (see \cite{EG}), i.\,e.\
$V_\iota$ is the desingularization (or normalization) of $C_\iota$. It follows
from a result of Veech (\cite{V}) that $V_\iota$ (and hence also $C_\iota$)
cannot be projective. If $\bar\Gamma_\iota$ is a lattice, the affine curve $C_\iota$ is
called a {\it Teichm\"uller curve}\index{Teichm\"uller curve}, cf.~Sect.~\ref{tc}.
First examples were given by Veech \cite{V};
in them, $\bar\Gamma_\iota$ is a hyperbolic triangle group. Later more examples with
triangle groups as Veech groups were found, see \cite{HS} for a comprehensive
overview and \cite{BM} for recent results. Explicit examples of Teichm\"uller curves
whose Veech groups are not triangle groups can be found
e.\,g.\ in \cite{mcm}, \cite{C} and \cite{L}. M\"oller has shown
(\cite{Martin}) that every Teichm\"uller curve is, as a subvariety of $M_g$, defined
over a number field. This implies that there are at most countably many
Teichm\"uller curves.\\
A special class of Teichm\"uller curves is obtained by {\it origamis}\index{origami} (or {\it square-tiled surfaces}\index{square-tiled surface}). They arise from finite coverings of an elliptic curve that are ramified over only one point. Given such a covering $p:X\to E$, the quadratic differential $q=(p^*\omega_E)^2$ (where $\omega_E$ is the invariant holomorphic differential on $E$) induces a Teichm\"uller embedding whose Veech group is commensurable to $\mbox{SL}_2(\mathbb{Z})$, see \cite{GJ}. Lochak proposed in \cite{L} a combinatorial construction for such coverings (which led to the name ``origami''), and Schmith\"usen \cite{Algo} gave a group theoretic characterization of the Veech group. In \cite{Diss}, origamis and their Veech groups are systematically studied and numerous examples are presented. Origamis in genus 2 where $q$ has one zero are classified in \cite{HL}. Using the description of origamis by gluing squares it is not difficult to see that there are, for any $g\ge2$, infinitely many Teichm\"uller curves in $M_g$ that come from origamis. In genus 3 there
is even an explicit example of an origami curve that is intersected by infinitely many others,
see \cite{WMS}. \\
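Combinatorially, an origami with $d$ squares may be encoded by a pair of permutations (a common description in the literature cited above; the notation is not standardized): $\sigma_r,\sigma_u\in S_d$, where $\sigma_r(i)$ is the square glued to the right edge of square $i$ and $\sigma_u(i)$ the square glued to its upper edge, and $\langle\sigma_r,\sigma_u\rangle$ acts transitively on $\{1,\dots,d\}$. The covering $p:X\to E$ then has degree $d$, is branched over a single point, and the genus of $X$ can be read off from the commutator:
\[2g-2\;=\;d-c,\]
where $c$ is the number of cycles of $\sigma_r\sigma_u\sigma_r^{-1}\sigma_u^{-1}$; the cycles correspond to the singularities of the flat structure, a cycle of length $k$ giving a cone point of angle $2\pi k$. \\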
We want to study boundary points of Teichm\"uller disks and Teichm\"uller
curves; by this we mean, for a Teichm\"uller embedding $\iota:\HH\to T_g$, the
closures of $\Delta_\iota$ and $C_\iota$ in suitable (partial) compactifications
of $T_g$ and $M_g$, respectively. For the moduli space we shall use the
compactification\index{moduli space!compactification of} $\overline{M}_g$ by stable Riemann surfaces. Here a one-dimensional connected
compact complex space $X$ is called a {\it stable Riemann surface}\index{stable Riemann surface} if all
singular points of $X$ are ordinary double points, i.\,e.\ have a neighbourhood
isomorphic to $\{(z,w)\in\mathbb{C}^2:z\cdot w=0,|z|<1,|w|<1\}$; moreover we require
that every irreducible component $L$ of $X$ that is isomorphic to the
projective line $\hat\mathbb{C} = \mathbb{P}^1(\mathbb{C})$ intersects $\overline{X-L}$ in at least
three points. It was shown by Deligne and Mumford\index{Deligne-Mumford compactification} (\cite{DM}) that stable
Riemann surfaces are classified by an irreducible compact variety $\overline{M}_g$ that,
like $M_g$, has the quality of a coarse moduli space. In fact, with the
approach of Deligne-Mumford it is possible to classify stable algebraic curves
over an arbitrary ground field: they construct a proper scheme over $\mathbb{Z}$ of
which $\overline{M}_g$ is the set of complex-valued points. Some years later, Knudsen
\cite{K} showed that $\overline{M}_g$ is a projective variety.\\
If $\iota:\HH\to T_g$ is a Teichm\"uller embedding such that $C_\iota$ is a
Teichm\"uller curve, the closure $\bar C_\iota$ of $C_\iota$ in $\overline{M}_g$ is
Zariski closed and therefore a projective curve. In particular, $\bar C_\iota -
C_\iota$ consists of finitely many points, called the {\it cusps}\index{Teichm\"uller curve!cusps of} of $C_\iota$. It is very
interesting to know, for a given Teichm\"uller curve $C_\iota$, the number of
cusps and the stable Riemann surfaces that correspond to the cusps. In the
case that $\iota$ is induced by an origami
there is an algorithm that determines (among other information) the precise
number of cusps of $C_\iota$, see \cite{Algo}.\\
The boundary\index{moduli space!boundary of} $\partial M_g = \overline{M}_g-M_g$ is a divisor, i.\,e.\ a projective subvariety
of (complex) codimension 1. It has irreducible components
$D_0,D_1,\dots,D_{[\frac{g}{2}]}$; the points in $D_0$ correspond to
irreducible stable Riemann surfaces with a double point, while for
$i=1,\dots,[\frac{g}{2}]$, $D_i$ classifies stable Riemann surfaces consisting
of two nonsingular irreducible components that intersect transversally, one of
genus $i$ and the other of genus $g-i$. The combinatorial structure of the
intersections of the $D_i$ is best described in terms of the {\it intersection
graph}\index{intersection graph}: For a stable Riemann surface $X$, we define a graph $\Gamma(X)$ as
follows: the vertices of $\Gamma(X)$ are the irreducible components of $X$,
the edges are the double points (connecting two irreducible components of $X$
which need not be distinct). For every graph $\Gamma$ let $\overline{M}_g(\Gamma)$ be the
set of points in $\overline{M}_g$ corresponding to stable Riemann surfaces with
intersection graph isomorphic to $\Gamma$. It is not hard to see that for
a given genus $g$, there are only finitely many graphs $\Gamma$ with nonempty
$\overline{M}_g(\Gamma)$, and that the $\overline{M}_g(\Gamma)$ are the strata of a stratification
of $\overline{M}_g$. This means that each $\overline{M}_g(\Gamma)$ is a locally closed subset of
$\overline{M}_g$ (for the Zariski topology), that $\overline{M}_g$ is the disjoint union of the
$\overline{M}_g(\Gamma)$, and that the closure of each $\overline{M}_g(\Gamma)$ is a finite union
of other $\overline{M}_g(\Gamma')$. A natural question in our context is: which
$\overline{M}_g(\Gamma)$ contain cusps of Teichm\"uller curves? In \cite{michi} Maier showed that if $\Gamma$ has no ``bridge'', i.\,e.\ no edge $e$ such that $\Gamma-e$ is disconnected, then the stratum $\overline{M}_g(\Gamma)$ contains
points on a compactified Teich\-m\"uller curve $\bar C_\iota$ with a Teichm\"uller embedding $\iota$ that corresponds to an origami. M\"oller and
Schmith\"usen observed that this condition on the graph is necessary if the Teich\-m\"uller curve comes from a quadratic differential which is the square of a holomorphic 1-form (or equivalently from a translation structure on $X^*$).\\
Most of our knowledge about cusps of Teichm\"uller curves comes from studying boundary points of Teichm\"uller disks in a suitable extension of Teichm\"uller space. Several
different boundaries for Teichm\"uller space with very different properties
have been studied, like the Thurston boundary or the one coming from the Bers embedding. In the framework of this paper we look for a space $\overline{T}_g$
in which $T_g$ is open and dense such that the action of the group $\Gamma_g$
extends to an action on $\overline{T}_g$, and the quotient space $\overline{T}_g/\Gamma_g$ is
equal to $\overline{M}_g$. Such a space is the ``augmented'' Teichm\"uller space\index{augmented Teichm\"uller space} $\hat T_{g}$
introduced by Abikoff \cite{Abideg}. The points in $\hat T_{g}$
are equivalence classes of pairs $(X,f)$, where $X$ is a stable Riemann
surface of genus $g$ and $f:X_{\mbox{\scriptsize ref}}\to X$ is a {\it deformation}\index{deformation map}. This is a
continuous surjective map such that there are finitely many loops
$c_1,\dots,c_k$ on $X_{\mbox{\scriptsize ref}}$ with the property that $f$ is a homeomorphism outside the $c_i$
and maps each $c_i$ to a single point $P_i$ on $X$. Abikoff defined a topology
on this space and showed that the quotient for the natural action of
$\Gamma_g$ on the pairs $(X,f)$ is the moduli space $\overline{M}_g$ as a topological
space.\\
In his thesis \cite{VDiss}, Braungardt\index{Braungardt's construction} introduced the concept of a covering of
a complex manifold $S$ with cusps over a divisor $D$. He showed that under
mild assumptions on $S$ there exists a universal covering $\tilde X$ of this
type which extends the usual holomorphic universal covering of $S-D$ by
attaching ``cusps'' over $D$. $\tilde X$ is no longer a complex manifold or a
complex space, but Braungardt introduced a natural notion of holomorphic
functions in a neighbourhood of a cusp and thus defined a sheaf ${\cal
O}_{\tilde X}$ of rings (of holomorphic functions) on $\tilde X$. In this
way $\tilde X$ is a locally complex ringed space, and the quotient map $\tilde
X\to\tilde X/\pi_1(S-D)=S$ is analytic for this structure. When applied to
$S=\overline{M}_g$ and $D=\partial M_g$, Braungardt showed that the universal covering $\overline{T}_g$ of
$\overline{M}_g$ with cusps over $\partial M_g$ is, as a topological space with an action of
$\Gamma_g$, homeomorphic to Abikoff's augmented Teichm\"uller space. We shall
reserve the symbol $\overline{T}_g$ in this article exclusively for this space
(considered as a locally ringed space). In Chapter \ref{volker}, we review
Braungardt's construction and results.\\
Our key technique to investigate boundary points of Teichm\"uller disks is the use of {\it Strebel rays}\index{Strebel ray}, see Definition \ref{sray}. By this we mean a geodesic ray in $T_g$ that corresponds by the construction in Section \ref{sr} to a Strebel quadratic differential on the Riemann surface $X$ at the starting point of the ray. A Strebel differential decomposes $X$ into cylinders swept out by horizontal trajectories. Mainly following \cite{M} we give in Section \ref{srendpoint} two explicit descriptions of the marked Riemann surfaces $(X_K,f_K)$ (for $K>1$) on a Strebel ray. This allows us to identify the boundary point $(X_\infty,f_\infty)$ at the ``end''\index{Strebel ray!end point of} of the ray as the stable Riemann surface that is obtained by contracting on $X$ the core lines of the cylinders in a prescribed way, see Sections \ref{endpoint} and \ref{conv}.\\
In the case that the Teichm\"uller embedding $\iota$ leads to a Teichm\"uller curve $C_\iota$ we show in Section \ref{bpofdisks} that all boundary points\index{Teichm\"uller disk!boundary points of} of $\Delta_\iota$ are obtained in this way. This shows in particular that all cusps of Teichm\"uller curves\index{Teichm\"uller curve!cusps of} are obtained by contracting, on a corresponding Riemann surface, the center lines of the cylinders of a Strebel differential. For the proof of this result we show that the Teichm\"uller embedding $\iota$ can be extended\index{Teichm\"uller embedding!extension of} to a continuous embedding $\bar\iota:\HH\cup\{\mbox{cusps of}\ \Gammaquer^*_{\iota}\}
\hookrightarrow \overline{T}_g$, see Prop.~\ref{iquer}. Moreover, if the Veech group $\bar\Gamma_\iota$ is a lattice in $\mbox{PSL}_2(\mathbb{R})$, the image of $\bar{\iota}$ is the closure of $\Delta_\iota$ in $\overline{T}_g$, see Prop.~\ref{iquerfortc}.\\
Since $\overline{T}_g$ has these cusp singularities at the boundary that prevent it from
being an ordinary complex space, whereas the boundary of $\overline{M}_g$ is a nice
divisor in a projective variety, it is interesting to look at spaces that lie
properly between Teichm\"uller and moduli space and to ask for a boundary that
fits somehow in between $\overline{T}_g$ and $\overline{M}_g$. An example of such a space is
provided by the {\it Schottky space}\index{Schottky space} which goes back to the paper \cite{S} of F.~Schottky
from 1887. He studied discontinuous groups that are freely generated by
M\"obius transformations $\gamma_1,\dots,\gamma_g$ (for some $g\ge1$) chosen
in such a way that there are disjoint closed Jordan domains $D_1,D_1',\dots,D_g,D_g'$
such that $\gamma_i$ maps $D_i$ onto the complement of the interior of
$D_i'$. The Riemann surface of such a {\it Schottky group}\index{Schottky group} is compact of genus
$g$. It can be shown that every compact Riemann surface $X$ admits such a {\it
Schottky uniformization}\index{Schottky uniformization} $X =\Omega/\Gamma$ (with $\Omega\subset\mathbb{P}^1(\mathbb{C})$
open and $\Gamma$ a Schottky group), see Section \ref{s-struc}. The covering
$\Omega\to X$ is called a {\it Schottky covering}\index{Schottky covering}.
It is minimal for the property that $\Omega$ is planar, i.\,e.\
biholomorphic to a subdomain of $\mathbb{P}^1(\mathbb{C})$; here minimality means that
each unramified holomorphic covering $Y \to X$ with a planar manifold $Y$
factors through $\Omega$.\\
Schottky coverings are classified by a complex manifold $S_g$ of dimension
$3g-3$, called the Schottky space. The natural map from $T_g$ to $M_g$ factors
through $S_g$, therefore there is a subgroup $\Gamma_g(\alpha)$ of the mapping class group
$\Gamma_g$ such that $T_g/\Gamma_g(\alpha)=S_g$. Unfortunately the subgroup $\Gamma_g(\alpha)$ is not
normal and depends on the choice of a certain group homomorphism $\alpha$. As
a consequence the induced map $S_g\to M_g$ is not the quotient for a group
action. We review this classical but not so widely known material in Sections
\ref{s-struc} and \ref{s-teich}.\\
The concept of Schottky coverings can be extended to stable Riemann
surfaces. If the analogous construction as for ordinary Riemann surfaces is
applied to a surface $X$ with nodes, we obtain a covering space $\Omega$ which
is not planar, but on which nevertheless a free group $\Gamma$ acts by
holomorphic automorphisms with quotient space $\Omega/\Gamma=X$. Although the
groups $\Gamma$ are no longer subgroups of $\mbox{PSL}_2(\mathbb{C})$, it is possible to
find parameters for them in almost the same way as for Schottky groups, namely
by cross ratios of fixed points. It then turns out that these
generalized Schottky coverings\index{Schottky covering!generalized} are classified by a complex mani\-fold $\overline{S}_g$
(which contains $S_g$ as an open dense subset), see Section
\ref{s-stab}. This result was originally proved in \cite{GH}; here we show
that it can easily be derived from Braungardt's characterization of $\overline{T}_g$ as
the universal covering of $\overline{M}_g$ with cusps over $\partial M_g$, see Section \ref{s-ext}.\\
Finally we wonder what the image in $S_g$ of a Teichm\"uller disk
$\Delta_\iota$ in $T_g$ might look like. In the general case we have no
idea. Of course, the image may depend on the choice of the subgroup $\Gamma_g(\alpha)$
that gives the map $T_g\to S_g$. In the special situation that $C_\iota$ is a
Teichm\"uller curve we prove that for suitable choice of $\alpha$, the image
of $\Delta_\iota$ in $S_g$ is not a disk, see Prop.~\ref{durchschnitt}.\\
{\bf Acknowledgments:} We would like to thank Volker Braungardt for allowing us to include his results on $\overline{T}_g$, and for his helpful comments on an earlier version of Chapter \ref{volker}. We are also grateful to Pierre Lochak and Martin M\"oller for many valuable conversations on Teichm\"uller disks, Teichm\"uller curves, and their boundaries. Furthermore, we would like to thank
Bill Abikoff for his useful suggestions that helped to improve the
exposition considerably.
\section{Schottky spaces}
In this chapter we first recall the construction of Schottky coverings for
smooth and stable Riemann surfaces. We use them to define markings called
Schottky structures. In the smooth case they are classified by the well known Schottky space
$S_g$, a complex manifold of dimension $3g-3$ (if $g\ge 2$). In \cite{GH} it
was shown that also the Schottky structures on stable Riemann surfaces are
parameterized by a complex manifold $\overline{S}_g$. Here we show how to obtain $\overline{S}_g$
from Braungardt's extension $\overline{T}_g$ of the Teichm\"uller space introduced in
Chapter~\ref{volker}. In the last section of this chapter we study the image of a Teichm\"uller disk in the Schottky space.
\subsection{Schottky coverings}
\label{s-struc}
We recall the basic definitions and properties of Schottky uniformization of
Riemann surfaces. We introduce the Schottky space $S_g$ and sketch, following
\cite{GH}, the construction of a universal family over it.
\begin{definition}
\label{s-group}
A group $\Gamma\subset\mbox{PSL}_2(\mathbb{C})$ of M\"obius transformations on
$\mathbb{P}^1(\mathbb{C})$ is called
a {\it Schottky group}\index{Schottky group} if there are, for some $g\ge1$, disjoint closed simply connected domains
$D_1$, $D_1',\dots,D_g$, $D_g'$ bounded by Jordan curves $C_i = \partial D_i$,
$C_i' = \partial D_i'$, and generators $\gamma_1,\dots,\gamma_g$ of $\Gamma$
such that $\gamma_i(C_i)=C_i'$ and $\gamma_i(D_i) = \mathbb{P}^1(\mathbb{C})-\bar D_i'$ for
$i=1,\dots,g$. The generators $\gamma_1,\dots,\gamma_g$ are called a {\it
Schottky basis\index{Schottky basis}} of $\Gamma$.
\end{definition}
In Schottky's original paper \cite{S}, the $D_i$ in the definition were disks. With the same notation let
\[F=F(\Gamma)=\mathbb{P}^1(\mathbb{C})-\cup_{i=1}^g(\bar D_i\cup\bar D_i')\ \ \mbox{and}\ \
\Omega=\Omega(\Gamma)=\cup_{\gamma\in\Gamma}\gamma(F).\]
It is well known (see e.\,g.\ \cite[X.H.]{Mask}) that $\Gamma$ is a Kleinian
group, free of rank $g$ with free generators $\gamma_1,\dots,\gamma_g$, that
$\Omega$ is the region of discontinuity of $\Gamma$, and that
$X=\Omega/\Gamma$ is a closed Riemann surface of genus $g$. The quotient map
$\Omega\to X$ is called a {\it Schottky covering}\index{Schottky covering}.\\
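The simplest case $g=1$ already illustrates the definition (a standard example; the radii are chosen only for concreteness): let $\gamma_1(z)=\lambda z$ with $0<|\lambda|<1$, let $C_1=\{|z|=1\}$ and $C_1'=\{|z|=|\lambda|\}$, and take for $D_1$ the disk around $\infty$ bounded by $C_1$ and for $D_1'$ the disk around $0$ bounded by $C_1'$. Then $\gamma_1(C_1)=C_1'$ and $\gamma_1$ maps $D_1$ onto the complement of the interior of $D_1'$, so $\Gamma=\langle\gamma_1\rangle$ is a Schottky group of rank 1. Here $F$ is the annulus $\{|\lambda|<|z|<1\}$, the region of discontinuity is $\Omega=\mathbb{C}-\{0\}$, and $X=\Omega/\Gamma$ is a torus whose modulus depends on $\lambda$ (cf.\ Prop.~\ref{s-open}\,a)).\\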
An important fact is the following uniformization\index{Schottky uniformization} theorem:
\begin{proposition}
\label{s-unif}
Every compact Riemann surface $X$ of genus $g\ge1$ admits a Schottky covering
by a Schottky group of rank $g$.
\end{proposition}
\begin{proof}
The proof is based on the following construction that we shall extend to
stable Riemann surfaces in Section \ref{s-ext}: choose disjoint simple loops
$c_1,\dots,c_g$ on $X$ which are independent in homology, i.\,e.\
$F=X-\cup_{i=1}^gc_i$ is connected. Then $F$ is conformally equivalent to a
plane domain that is bounded by $2g$ closed Jordan curves. For $i=1,\dots,g$
denote by $C_i$ and $C_i'$ the two boundary components of $F$ that result from
cutting along $c_i$. Now let $\Phi_g$ be a free group on generators
$\varphi_1,\dots,\varphi_g$, and take a copy $F_w$ of $F$ for every element
$w\in\Phi_g$. The $F_w$ are glued according to the following rule: if $w$ and
$w'$ are reduced words in $\varphi_1,\dots,\varphi_g$ and if $w=w'\varphi_i$
then the boundary component $C_i$ on $F_{w'}$ is glued to $C_i'$ on $F_w$; if
$w$ ends with $\varphi_i^{-1}$ the roles of $C_i$ and $C_i'$ are
interchanged. By this construction we obtain a plane domain $\Omega$
together with a holomorphic action of $\Phi_g$ on it: an element
$\varphi\in\Phi_g$ maps the copy $F_w$ to $F_{w\varphi}$. The crucial step in
the proof now is to show that this action extends to all of $\mathbb{P}^1(\mathbb{C})$,
i.\,e.\ $\Phi_g$ acts by M\"obius transformations. For this we refer to \cite[Ch.~IV, Thm.~19\,F]{AS}.
\end{proof}
\begin{definition}
\label{sg}
Let $\tilde S_g$ be the set of all $(\gamma_1,\dots,\gamma_g)\in\mbox{PSL}_2(\mathbb{C})^g$
that generate a Schottky group $\Gamma$ and form a Schottky basis for
$\Gamma$. The set $S_g$ of equivalence classes of $g$-tuples
$(\gamma_1,\dots,\gamma_g)\in\tilde S_g$ under simultaneous conjugation is called
the {\it Schottky space}\index{Schottky space} of genus $g$.
\end{definition}
For a point $s=(\gamma_1,\dots,\gamma_g)\in\tilde S_g$ let $\Gamma(s)$ be the
Schottky group generated by $\gamma_1,\dots,\gamma_g$, $\Omega(s)$ the region
of discontinuity of $\Gamma(s)$, and $X(s)=\Omega(s)/\Gamma(s)$ the associated
Riemann surface. This leads to an alternative description of the Schottky space:
\begin{remark}
\label{sg-alt}
$S_g$ is the set of equivalence classes of pairs $(X,\sigma)$, where $X$ is a Riemann surface of genus $g$ and $\sigma:\Phi_g\to\mbox{\em PSL}_2(\mathbb{C})$ is an injective homomorphism such that $\Gamma:=\sigma(\Phi_g)$ is a Schottky group and $\Omega(\Gamma)/\Gamma\cong X$.\\
$(X,\sigma)$ and $(X',\sigma')$ are equivalent if there is some $A\in\mbox{\em PSL}_2(\mathbb{C})$ such that $\sigma'(\gamma)=A\sigma(\gamma)A^{-1}$ for all $\gamma\in\Phi_g$. Note that then $X'$ is isomorphic to $X$.
\end{remark}
To endow $S_g$ with a complex structure we proceed as follows: Taking the fixed points and the multipliers of the $\gamma_i$ we obtain an embedding of $\tilde S_g$ as an open subdomain of $\mathbb{P}^1(\mathbb{C})^{3g}$. For $g=1$ each equivalence class contains a unique M\"obius transformation of the form $z\mapsto \lambda z$ for some $\lambda\in\mathbb{C}$, $0<|\lambda|<1$.
If $g\ge 2$ we find in each equivalence class in $\tilde S_g$ a unique representative
$(\gamma_1,\dots,\gamma_g)$ such that $\gamma_1$ and $\gamma_2$ have
attracting fixed points 0 and 1, respectively, and $\gamma_1$ has repelling
fixed point $\infty$. This defines a section to the projection $\tilde S_g\to S_g$
and embeds $S_g$ as a closed subspace of $\tilde S_g$ which, moreover,
lies in $\{0\}\times\{\infty\}\times\{1\}\times \mathbb{C}^{3g-3} \subseteq
\mathbb{P}^1(\mathbb{C})^{3g}$. Thus we have shown:
\begin{proposition}
\label{s-open}
{\bf a)} $S_1$ is a punctured disk.\\[1mm]
{\bf b)} For $g\ge2$, $S_g$ carries a complex structure as an open
subdomain of $\mathbb{C}^{3g-3}$.
\end{proposition}
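To make part a) concrete, here is the standard computation behind it (included only as an illustration, with our choice of coordinate $\tau$): a normalized generator of a rank one Schottky group is $\gamma(z)=\lambda z$ with $0<|\lambda|<1$, its fixed points are $0$ and $\infty$, so $\Omega(\gamma)=\mathbb{C}^*$, and the associated Riemann surface is a complex torus:

```latex
% gamma(z) = lambda z, 0 < |lambda| < 1, Omega(gamma) = C \ {0}.
% Writing lambda = e^{2 pi i tau} with Im(tau) > 0, the map w -> e^{2 pi i w} induces
X \;=\; \Omega/\langle\gamma\rangle
  \;=\; \mathbb{C}^{*}\big/(z \sim \lambda z)
  \;\cong\; \mathbb{C}\big/(\mathbb{Z}+\mathbb{Z}\tau),
\qquad \lambda = e^{2\pi i\tau}.
```

Thus the coordinate $\lambda$ on the punctured disk $S_1$ is precisely the multiplier of $\gamma$, and every complex torus arises in this way.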
Our next goal is to show that this complex structure on $S_g$ is natural. The main step in this direction is
\begin{proposition}
\label{mu}
The forgetful map $\mu:S_g\to M_g$ that sends $s=(X,\sigma)$ to the isomorphism class of $X$ is analytic and surjective.
\end{proposition}
\begin{proof} The surjectivity of $\mu$ follows from Prop.~\ref{s-unif}.
To show that $\mu$ is analytic we use the fact that $M_g$ is a coarse moduli
space for Riemann surfaces. Therefore it suffices to find a holomorphic
family $\pi:{\cal C}_g\to S_g$ of Riemann
surfaces\index{family of Riemann surfaces} over $S_g$ which induces $\mu$ in the sense that for $s\in S_g$,
$\mu(s)$ is the isomorphism class of the fibre $C_s=\pi^{-1}(s)\subset{\cal
C}_g$. \\[1mm]
The family ${\cal C}_g$ is obtained as in Section
\ref{v-structure}: Let
\[\Omega_g=\{(s,z)\in S_g\times\mathbb{P}^1(\mathbb{C}):z\in\Omega(s)\}.\]
$\Omega_g$ is a complex manifold on which the free group $\Phi_g$ acts
holomorphically by $\varphi(s,z) = (s,\sigma(\varphi)(z))$ for $s=(X,\sigma)\in S_g$, $\varphi\in\Phi_g$ and $z\in\Omega(s)$.\\[1mm]
The projection pr$_1:\Omega_g\to S_g$ onto the first component factors through
the orbit space ${\cal C}_g = \Omega_g/\Phi_g$, and the induced map $\pi:{\cal C}_g\to S_g$
is the family of Riemann surfaces we were looking for.
\end{proof}
The family ${\cal C}_g$ is in fact universal for Riemann surfaces with {\it Schottky
structure}\index{Schottky structure}, a kind of marking that we now recall from \cite[Section~1.3]{GH}:
\begin{definition}
\label{s-structure}
{\bf a)} Let ${\cal U}\to S$ be an analytic map of complex manifolds and $\Gamma\subset\mbox{Aut}({\cal U}/S)$ a properly discontinuous subgroup. Then the analytic quotient
map ${\cal U}\to{\cal U}/\Gamma={\cal C}$ is called a {\it Schottky covering}
if the induced map ${\cal C}\to S$ is a family of Riemann surfaces and
if for every $x\in S$ the restriction $U_x\to C_x$ of the quotient map to the fibres is a Schottky
covering.\\[1mm]
{\bf b)} A {\it Schottky structure} is a Schottky covering \,${\cal U}\to{\cal
U}/\Gamma={\cal C}$ together with an equivalence class of isomorphisms
$\sigma:\Phi_g\to\Gamma$, where $\sigma$ and $\sigma'$ are considered
equivalent if they differ only by an inner automorphism of $\Phi_g$.
\end{definition}
Note that the construction in the proof of Proposition \ref{mu} endows the
family ${\cal C}_g/S_g$ with a Schottky structure. \\
A Schottky structure on a single Riemann surface $X$ is given by a Schott\-ky covering $\Omega\to\Omega/\Gamma=X$ and an isomorphism $\sigma:\Phi_g\to\Gamma$. Comparing the respective equivalence relations we find that the points $(X,\sigma)$ in $S_g$ correspond bijectively to the isomorphism classes of Riemann surfaces with Schottky structure. In fact a much stronger result holds:
\begin{theorem}
\label{s-fine}
$S_g$ is a fine moduli space\index{fine moduli space!for Riemann surfaces with Schottky structure} for Riemann surfaces with Schottky structure.
\end{theorem}
\begin{proof}
Let ${\cal C}/S$ be a family of Riemann surfaces and $({\cal U}\to{\cal
U}/\Gamma={\cal C},\sigma:\Phi_g\stackrel{\sim}{\longrightarrow}\Gamma)$ a
Schottky structure on ${\cal C}$. Then we have a map $f:S\to S_g$ which maps
a point $x$ to the isomorphism class of the Schottky covering $U_x\to C_x$. We
have to show that $f$ is analytic. Then the other properties of a fine
moduli space follow easily from the definitions, namely that ${\cal C}$ is
the fibre product ${\cal C}_g\times_{S_g}S$ and that ${\cal U}$ is isomorphic to
$\Omega_g\times_{{\cal C}_g}{\cal C} = \Omega_g\times_{S_g}S$ such that the
projection ${\cal U}\to\Omega_g$ onto the first factor is equivariant for
the actions of $\Gamma$ and $\Phi_g$ via the isomorphism $\sigma$.\\
The universal property of $M_g$ as a coarse moduli space gives us, as above for
$\mu$, that the composition $\mu\circ f$ is analytic. Since $\mu$ has
discrete fibres, it suffices to show that $f$ is continuous. This
is quite subtle, see \cite[\S\,3]{GH}.
\end{proof}
\subsection{Relation to Teichm\"uller space}
\label{s-teich}
In this section we explain that Schottky space\index{Schottky space} can
be obtained as a quotient
space of the Teichm\"uller space which was introduced in
Section \ref{v-structure}. For this purpose we first endow the universal
family
${\cal C}_{g,0}$ over the Teichm\"uller space $T_g=T_{g,0}$ with a
Schottky structure as follows: \\
Let $a_1,b_1,\dots,a_g,b_g$ be a set of standard generators\index{standard generators} of $\pi_g$, the
fundamental group of the reference surface $X_{\mbox{\scriptsize ref}}$; this means that they
satisfy the relation $\Pi_{i=1}^g a_ib_ia_i^{-1}b_i^{-1}=1$. Then
$b_1,\dots,b_g$ are homologically independent, hence the construction in the proof of Prop.~\ref{s-unif} provides us with a corresponding
Schottky covering $\Omega_{\mbox{\scriptsize ref}}\to X_{\mbox{\scriptsize ref}}$. The group Aut$(\Omega_{\mbox{\scriptsize ref}}/X_{\mbox{\scriptsize ref}})$ of deck
transformations is isomorphic to the free group on $b_1,\dots,b_g$. Denoting
by $U_{\mbox{\scriptsize ref}}\to X_{\mbox{\scriptsize ref}}$ the universal covering, there is a covering
map $U_{\mbox{\scriptsize ref}}\to\Omega_{\mbox{\scriptsize ref}}$ over $X_{\mbox{\scriptsize ref}}$. The group Aut$(U_{\mbox{\scriptsize ref}}/\Omega_{\mbox{\scriptsize ref}})$ is the kernel
$N_\alpha$ of the homomorphism $\alpha:\pi_g\to\Phi_g$ which maps $b_i$ to
$\varphi_i$ and $a_i$ to 1; in other words, $N_\alpha$ is the normal closure in
$\pi_g$ of the subgroup generated by $a_1,\dots,a_g$.\\
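As an illustration of the tower $U_{\mbox{\scriptsize ref}}\to\Omega_{\mbox{\scriptsize ref}}\to X_{\mbox{\scriptsize ref}}$ (not part of the construction above, which concerns arbitrary genus), consider the simplest case $g=1$, where the relation reads $aba^{-1}b^{-1}=1$ and $\alpha$ kills $a$. For a torus $X=\mathbb{C}/(\mathbb{Z}+\mathbb{Z}\tau)$, with $a$ acting on $U=\mathbb{C}$ by $w\mapsto w+1$ and $b$ by $w\mapsto w+\tau$, the tower takes the explicit form

```latex
% N_alpha = <a>, and U/N_alpha is realized by the exponential map:
\mathbb{C} \;\xrightarrow{\;w\,\mapsto\,e^{2\pi i w}\;}\;
\mathbb{C}^{*} \;\longrightarrow\;
\mathbb{C}^{*}\big/\langle z \mapsto e^{2\pi i\tau} z\rangle \;=\; X .
```

Here $\mbox{Aut}(U/\Omega)=\langle a\rangle=N_\alpha$, while $b$ descends to the Schottky generator $z\mapsto e^{2\pi i\tau}z$ of $\mbox{Aut}(\Omega/X)$.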
In Section \ref{v-structure} we described the family
$\Omega^+_{g,0}\to{\cal C}_{g,0}$ of universal coverings
of the surfaces in the family ${\cal C}_{g,0}$; the fundamental group $\pi_g$ and hence also $N_\alpha$ acts on the fibres of this covering, and we obtain:
\begin{remark}
\label{ssonteich}
The induced map $\Omega^+_{g,0}/N_\alpha\to{\cal C}_{g,0}$ is a Schottky covering, and the universal Teichm\"uller structure\index{Teichm\"uller structure!universal}
$\tau:\pi_g\stackrel{\sim}{\longrightarrow}\mbox{\em Aut}(\Omega^+_{g,0}/{\cal
C}_{g,0})$ (cf.~Theorem~\ref{fein}) descends via $\alpha$ to a Schottky structure
$\sigma_\alpha:\Phi_g = \pi_g/N_\alpha
\stackrel{\sim}{\longrightarrow}\mbox{\em Aut}((\Omega^+_{g,0}/N_\alpha)/{\cal C}_{g,0})$
on ${\cal C}_{g,0}$.
\end{remark}
By Theorem \ref{s-fine} this Schottky structure induces an analytic map
$s_\alpha:T_g\to S_g$. To describe $s_\alpha$ as the quotient map for
a subgroup of the mapping class group\index{mapping class group} $\Gamma_g$,
we first identify $\Gamma_g$ with the group
$\mbox{Out}^+(\pi_g)$ of orientation preserving outer automorphisms of
$\pi_g$; then, to a diffeomorphism $f:X_{\mbox{\scriptsize ref}}\to X_{\mbox{\scriptsize ref}}$, we associate the induced automorphism $\varphi=f_*:\pi_g\to\pi_g$. It follows from the Dehn-Nielsen theorem that this gives an isomorphism $\Gamma_g\stackrel{\sim}{\longrightarrow}\mbox{Out}^+(\pi_g)$. In this chapter, by $\varphi\in\Gamma_g$ we always mean an element of $\mbox{Out}^+(\pi_g)$.
\begin{proposition}
\label{teichtos}
{\bf a)} $s_\alpha$ is the quotient map for the subgroup
\[\Gamma_g(\alpha)=\{\varphi\in\Gamma_g:\alpha\circ\varphi\equiv\alpha\ \mbox{\em
mod Inn}(\pi_g)\}\]
of the mapping class group $\Gamma_g$ (where $\mbox{\em Inn}(\pi_g)$ denotes the group of inner automorphisms).\\[1mm]
{\bf b)} $s_\alpha:T_g\to S_g$ is the universal covering of the Schottky
space.\\[1mm]
{\bf c)} $s_\alpha$ lifts to maps $\tilde s_\alpha$ and $\omega_\alpha$ that
make the following diagram commutative:
\begin{center}
$\xymatrix@=6ex{\ar[dr]^{/N_\alpha}\ar[dd]_{/\pi_g}\Omega^+_{g,0}\\
&\ar[r]^{\omega_\alpha}\ar[dl]\Omega^+_{g,0}/N_\alpha&\ar[d]^{/\Phi_g}\Omega_g\\
\ar[d]\ar[rr]^{\tilde s_\alpha}{\cal C}_{g,0}&&\ar[d]{\cal C}_g\\
\ar[rr]^{s_\alpha}\ar[dr]_{/\Gamma_g}T_g&&\ar[dl]^\mu S_g\\
&M_g}$
\end{center}
\end{proposition}
\begin{proof}
{\bf a)} Let $x=(X,f)\in T_g$. Recall from Section \ref{v-structure} that the fibre over $x$ in $\Omega^+_{g,0}$ is the component $\Omega^+(x)$ of the region of discontinuity of the quasifuchsian group $G_x$ associated to $x$. The universal Teichm\"uller structure on ${\cal C}_{g,0}$ induces an isomorphism $\tau_x:\pi_g\to G_x=\mbox{Aut}(\Omega^+(x)/X)$. From Remark \ref{ssonteich} we see that the point $s_\alpha(x)=(X,\sigma)\in S_g$ is given by the restriction $\sigma_{\alpha,x}$ of $\sigma_\alpha$ to the fibre over $x$; explicitly,
\[\sigma=\sigma_{\alpha,x}:\Phi_g=\pi_g/N_\alpha\stackrel{\sim}{\longrightarrow}\mbox{Aut}((\Omega^+(x)/\tau_x(N_\alpha))/X)=G_x/\tau_x(N_\alpha).\]
For $\varphi\in\Gamma_g$ we have $s_\alpha(x)=s_\alpha(\varphi(x))$ if and only if $\sigma_{\alpha,x}=\sigma_{\alpha,\varphi(x)}$ up to an inner automorphism.
Since $\tau_{\varphi(x)}=\tau_x\circ\varphi^{-1}$ this happens if and only if $\varphi$ induces an inner automorphism on $\pi_g/N_\alpha$, i.\,e.\ if and only if $\varphi\in\Gamma_g(\alpha)$.\\[1mm]
{\bf b)} This is clear from the fact that $T_g$ is simply connected and
$\Gamma_g(\alpha)$ is torsion free, hence $s_\alpha$ is unramified. Using the
construction in a) one can give a direct proof which in turn provides an
independent proof that $T_g$ is simply connected, see
\cite[Prop.~6]{GH}.\\[1mm]
{\bf c)} It follows from Remark \ref{ssonteich} that $\Omega^+_{g,0}/N_\alpha
\to {\cal C}_{g,0}$ is a Schottky covering. Therefore, by the universal
property of $S_g$ (Theorem \ref{s-fine}), ${\cal C}_{g,0}$ is the fibre
product $T_g\times_{S_g}{\cal C}_g$, and $\tilde s_\alpha$ is the projection
to ${\cal C}_g$. Moreover the Schottky covering $\Omega^+_{g,0}/N_\alpha
\to {\cal C}_{g,0}$ is a pullback of the universal Schottky covering $\Omega_g
\to{\cal C}_g$, i.\,e.\ $\Omega^+_{g,0}/N_\alpha = {\cal C}_{g,0}\times_{{\cal
C}_g}\Omega_g$, and again $\omega_\alpha$ is the projection to the second
factor.
\end{proof}
In fact, the action of $\Gamma_g(\alpha)$ on $T_g$ extends to $\Omega^+_{g,0}$, $\Omega^+_{g,0}/N_\alpha$ and
${\cal C}_{g,0}$; then $\tilde s_\alpha$ and $\omega_\alpha$ are the quotient maps
for these actions.
\subsection{Schottky coverings of stable Riemann surfaces}
\label{s-stab}
In this and the following section we introduce a partial compactification $\overline{S}_g$ of\index{Schottky space!partial compactification of}
$S_g$ that fits in between $\overline{T}_g$ and $\overline{M}_g$. We have presented two different
ways to define $S_g$, and we shall see that both are suited for extension to
stable Riemann surfaces: The first way is to construct Schottky coverings for
surfaces with nodes, define Schottky structures and find parameters for
them. This approach was pursued in \cite{GH} and will be sketched in this
section. The other possibility is to extend the action of $\Gamma_g(\alpha)$ to (part of)
the boundary of $\overline{T}_g$ and show that the quotient exists and has the desired
properties; this will be done in Section \ref{s-ext}.
\begin{definition}
\label{cut}
Let $X$ be a stable Riemann surface\index{stable Riemann surface} of genus $g$. A {\it cut system}\index{cut system} on $X$ is
a collection of disjoint simple loops $c_1,\dots,c_g$ on $X$, not
passing through any of the nodes, such that $X-\cup_{i=1}^gc_i$ is connected.
\end{definition}
\begin{proposition}
\label{cut-exist}
On any stable Riemann surface there exist cut systems.
\end{proposition}
\begin{proof}
Let $f:X_{\mbox{\scriptsize ref}}\to X$ be a deformation; we must find
disjoint and homologically independent loops $\tilde c_1,\dots,\tilde c_g$ on
$X_{\mbox{\scriptsize ref}}$ that are disjoint from the loops $a_1,\dots,a_k$ that are contracted
by $f$. For this we complete $a_1,\dots,a_k$ to a maximal system
$a_1,\dots,a_{3g-3}$ of homotopically independent loops (such a system
decomposes $X_{\mbox{\scriptsize ref}}$ into pairs of pants). Among the $a_i$ we find
$a_{i_1},\dots,a_{i_g}$ that are homologically independent. If $i_\nu>k$ we
take $\tilde c_\nu = a_{i_\nu}$, and for $i_\nu\le k$ we replace $a_{i_\nu}$
by a loop $\tilde c_\nu$ that is homotopic to $a_{i_\nu}$ and disjoint from
it.
\end{proof}
Once we have found $c_1,\dots,c_g$ as above, we proceed as in the proof of
Proposition \ref{s-unif} to construct a Schottky covering of $X$: Let
$F=X-\cup_{i=1}^gc_i$, take a copy $F_w$ of $F$ for each $w\in\Phi_g$, and
glue these copies exactly as before to obtain a space $\Omega$. Of course,
neither $F$ nor $\Omega$ is planar whenever $X$ has nodes.
In all cases, the complex structure on $X$ lifts to a structure of
a one-dimensional complex space on $F$. The group
$\Phi_g$ acts on this space by holomorphic automorphisms.
More precisely, there is an isomorphism $\Phi_g\to\Gamma=\mbox{Aut}(\Omega/X)$, and $X$ is isomorphic to $\Omega/\Gamma$ as a complex space.
\begin{definition}
\label{s-covq}
The covering $\Omega\to X$ constructed above for a cut system
$c=(c_1,\dots,c_g)$ on a stable Riemann surface $X$ is called the {\it
Schottky covering}\index{stable Riemann surface!Schottky covering of} of $X$ relative to $c$.
A covering of $X$ is called a
{\it Schottky covering} if it is the Schottky covering relative to some cut system.
\end{definition}
The next goal is to define a space $\overline{S}_g$ that classifies Schottky coverings in a way
analogous to Definition \ref{sg}. Since the covering space $\Omega$ is in
general not a subspace of $\mathbb{P}^1(\mathbb{C})$, and thus the group of deck
transformations is not a subgroup of $\mbox{PSL}_2(\mathbb{C})$, we cannot directly extend
Definition \ref{sg}.\\
A closer look at the construction of a Schottky covering
$\Omega\to\Omega/\Gamma=X$ of a stable Riemann surface $X$ shows the following:\\
Each irreducible component $L$ of $\Omega$ is an open dense subset of a projective
line; more precisely, the stabilizer of $L$ in $\Gamma$ is a Schottky group as in
Definition \ref{s-group}, and $L$ is its region of discontinuity. Moreover the
intersection graph of the irreducible components of $\Omega$ is a
tree (hence $\Omega$ is called a {\it tree of projective lines})\index{tree of projective lines}.\\
Therefore, for each irreducible component $L$, there is a well defined
projection $\pi_L:\Omega\to L$ which is the identity on $L$: For a point $x\in\Omega-L$ there is a unique chain $L_0,L_1,\dots,L_n=L$ of mutually distinct components such that $x\in L_0$ and $L_i$ intersects $L_{i+1}$ for $i=0,\dots,n-1$; then define $\pi_L(x)$ to be the intersection point of $L_{n-1}$ and $L$.\\
An {\it end} of $\Omega$ is an equivalence class of infinite chains $L_0,L_1,L_2,\dots$ of irreducible components as above (i.\,e.\ $L_i\not=L_j$ for $i\not=j$ and $L_i\cap L_{i+1}\not=\emptyset$), where two chains are equivalent if they differ only by finitely many components. Let $\Omega^*=\Omega\cup\{\mbox{ends of}\ \Omega\}$. Clearly the projection $\pi_L$ to a component $L$ can be extended to $\Omega^*$.\\
For any three different points or ends
$y_1, y_2, y_3$ in $\Omega^*$ there is a unique component $L=L(y_1,y_2,y_3)$
(called the {\it median} of the three points) such that the points
$\pi_L(y_1),\pi_L(y_2),\pi_L(y_3)$ are distinct. Now given any four distinct
points or ends $y_1,\dots,y_4$ in $\Omega^*$ we can define a {\it cross ratio}\index{cross ratio}
$\lambda(y_1,\dots,y_4)$ by taking the usual cross ratio of
$\pi_L(y_1),\dots,\pi_L(y_4)$ on the median component $L=L(y_1,y_2,y_3)$ of
the first three of them; note that $\lambda(y_1,\dots,y_4)$ will be 0, 1 or
$\infty$ if $\pi_L(y_4)$ coincides with $\pi_L(y_1)$, $\pi_L(y_2)$ or
$\pi_L(y_3)$, respectively. \\
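One normalization of the usual cross ratio that produces exactly these degenerate values (the particular convention is chosen here only for definiteness; any fixed one works) is the image of the fourth point under the M\"obius transformation sending the first three to $0$, $1$, $\infty$:

```latex
% For four distinct points z_1, ..., z_4 on a projective line:
\lambda(z_1,z_2,z_3,z_4)
  \;=\; \frac{(z_4-z_1)(z_2-z_3)}{(z_4-z_3)(z_2-z_1)}\,,
% so that lambda = 0, 1, infinity precisely when z_4 = z_1, z_2, z_3, respectively.
```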
To obtain parameters for the group $\Gamma$ observe that any
$\gamma\in\Gamma$, $\gamma\not=1$, has exactly two fixed points on the
boundary of $\Omega$, where boundary points of $\Omega$ are either points in
the closure of a component, or ends of $\Omega$; one of the fixed points is
attracting, the other repelling. For any four different (primitive) elements
$\gamma_1,\dots,\gamma_4$ in $\Gamma$ we define
$\lambda(\gamma_1,\dots,\gamma_4)$ to be the cross ratio of their attracting
fixed points. It is a remarkable fact that from these cross ratios both the
space $\Omega$ and the group $\Gamma\subset\mbox{Aut}(\Omega)$ can be recovered.
For any particular Schottky covering finitely many of them suffice, but for
different Schottky coverings we must, in general, take the cross ratios of
different elements of $\Phi_g$. To parameterize all
Schottky coverings we therefore have to use infinitely many of these cross
ratios. We consider them as (projective) coordinates on an infinite product
of projective lines $\mathbb{P}^1(\mathbb{C})$. The cross ratios satisfy many
algebraic relations, which define a closed subset $B$ of this huge space.
Every point of $B$ represents a tree of projective lines $\Omega$ as above
together with an action of $\Gamma$ on it. $\overline{S}_g$ is the open subset
of $B$ where this action defines a Schottky covering. For details
and in particular the technical complication caused by the presence
of infinitely many variables and equations, see
\cite[\S2]{GH} and \cite{He}. In principle, one can proceed as in
Section~\ref{s-struc} to construct a family of stable Riemann surfaces over
$\overline{S}_g$.\\
Given a family ${\cal C}/S$ of stable Riemann surfaces over a complex manifold
$S$, we can define the notion of a {\it Schottky covering} ${\cal U}/S\to {\cal
C}/S$ and of a {\it Schottky structure}\index{Schottky structure} on ${\cal U}$ exactly as in
Definition \ref{s-structure}, except that now ${\cal U}$ is not assumed to be
a manifold, but only a complex space. It is shown in \cite[\S3]{GH} that
the family over $\overline{S}_g$ carries a universal Schottky structure:
\begin{theorem}
\label{sgq}
$\overline{S}_g$ is a fine moduli space for stable Riemann surfaces with Schottky
structure.
\end{theorem}
\subsection{$\overline{S}_g$ as quotient of $\overline{T}_g$}
\label{s-ext}
It is not possible to extend the quotient map $s_\alpha:T_g\to S_g$ constructed
in Section~\ref{s-teich} to the whole boundary of $T_g$ in $\overline{T}_g$. Instead we shall, for
each $\alpha$, identify a part $\overline{T}_g(\alpha)$ of $\overline{T}_g$ to which the action of $\Gamma_g(\alpha)$
and hence the morphism $s_\alpha$ can be extended. It will turn out that the
quotient space is the extended Schottky space\index{Schottky space!extended} $\overline{S}_g$ described in the previous
section.\\
We begin with the definition of the admissible group homomorphisms $\alpha$
and the associated parts $\overline{T}_g(\alpha)$ of $\overline{T}_g$:
\begin{definition}
\label{sympl}
{\bf a)} A surjective homomorphism $\alpha:\pi_g\to\Phi_g$ is called
{\it symplectic}\index{symplectic homomorphism} if there are standard generators $a_1,b_1,\dots,a_g,b_g$ of
$\pi_g$ (in the sense of Section \ref{s-teich}) such that $\alpha(a_i)=1$ for
$i=1,\dots,g$.\\[1mm]
{\bf b)} Recall from Chapter \ref{volker} that a point in $\overline{T}_g$ can be described as an equivalence class of
pairs $(X,f)$, where $X$ is a stable Riemann surface and $f:X_{\mbox{\scriptsize ref}}\to X$ is a
deformation (see Corollary \ref{homeo}).\\
For a symplectic homomorphism $\alpha:\pi_g\to\Phi_g$ let
\[\overline{T}_g(\alpha) =
\{(X,f)\in\overline{T}_g:\mbox{ker}\,(f_*)\subseteq\mbox{ker}\,(\alpha)\}.\]
\end{definition}
\begin{proposition}
\label{Tgqa}
{\bf a)} $\overline{T}_g(\alpha)$ is an open subset of $\overline{T}_g$; it contains $T_g$ and is
invariant under the group $\Gamma_g(\alpha)$ introduced in Prop.~\ref{teichtos}.\\[1mm]
{\bf b)} $\overline{T}_g$ is the union of the $\overline{T}_g(\alpha)$, where $\alpha$ runs through the symplectic homomorphisms.\\[1mm]
{\bf c)} The restriction to $\overline{T}_g(\alpha)$ of the universal covering $p:\overline{T}_g\to\overline{M}_g$
is surjective for every symplectic $\alpha$.
\end{proposition}
\begin{proof}
{\bf a)} Let $(X,f)$ be a point in $\overline{T}_g$ and $c_1,\dots,c_k$ the loops on
$X_{\mbox{\scriptsize ref}}$ that are contracted under $f$. Then the kernel of $\pi_1(f):\pi_g\to\pi_1(X)$ is the
normal subgroup generated by $c_1,\dots,c_k$. The local description of $\overline{T}_g$
in Corollary \ref{local} shows that there is a neighbourhood $U$ of $(X,f)$ in
$\overline{T}_g$ such that for every $(X',f')\in U$ the map $f':X_{\mbox{\scriptsize ref}}\to X'$ contracts a
subset of $\{c_1,\dots,c_k\}$. Hence the kernel of $\pi_1(f')$ is contained in
ker$\,(\pi_1(f))$. Thus if $(X,f)\in\overline{T}_g(\alpha)$, also $U\subseteq\overline{T}_g(\alpha)$. The
remaining assertions are clear.\\[1mm]
{\bf b)} Again let $(X,f)$ be a point in $\overline{T}_g$ and $c_1,\dots,c_k$ the
loops on $X_{\mbox{\scriptsize ref}}$ contracted by $f$. By Proposition~\ref{cut-exist} we can find a cut system
$a_1,\dots,a_g$ on $X$ and a corresponding Schottky covering.
This covering induces a surjective homomorphism
$\pi_1(X)\to\Phi_g$. Composing this homomorphism with $\pi_1(f)$ yields a
homomorphism $\alpha:\pi_g\to\Phi_g$ which corresponds to a Schottky covering of
$X_{\mbox{\scriptsize ref}}$ (relative to the cut system $f^{-1}(a_1),\dots,f^{-1}(a_g)$) and hence
is symplectic. By construction, $c_1,\dots,c_k$ are in the kernel of
$\alpha$.\\[1mm]
{\bf c)} Let $\alpha:\pi_g\to\Phi_g$ be symplectic and $a_1,b_1,\dots,a_g,b_g$
standard generators of $\pi_g$ such that $\alpha(a_i) = 1$ for all $i$. For an
arbitrary stable Riemann surface $X$ choose a deformation $f:X_{\mbox{\scriptsize ref}}\to X$ and
let $c_1,\dots,c_k$ be the loops that are contracted by $f$. As
in the proof of b) we find standard generators $a'_1,b'_1,\dots,a'_g,b'_g$
such that the $c_j$ are contained in the normal subgroup generated by
the $a'_i$. The map $a_i\mapsto a'_i, b_i\mapsto b'_i$ defines an
automorphism $\varphi$ of $\pi_g$ and thus an element of $\Gamma_g$. Then by construction
$(X,f\circ\varphi)$ lies in $\overline{T}_g(\alpha)$ and $p(X,f\circ\varphi) = X$.
\end{proof}
As a side remark we note that
$\overline{T}_g(\alpha)$ is not only invariant under $\Gamma_g(\alpha)$, but also under the larger ``handlebody'' group\index{handlebody group}
\[H_g(\alpha) = \{\varphi\in\Gamma_g:\varphi(N_\alpha) = N_\alpha\}\]
(where $N_\alpha = \mbox{ker}\,(\alpha)$ as in Section \ref{s-teich}). Note
that $H_g(\alpha)$ is the normalizer of $\Gamma_g(\alpha)$ in $\Gamma_g$, and that we have an
exact sequence
\[1\to\Gamma_g(\alpha)\to H_g(\alpha)\to\mbox{Out}\,(\Phi_g)\to 1.\]
The quotient space $\hat S_g=T_g/H_g(\alpha)=S_g/\mbox{Out}\,(\Phi_g)$ is a parameter
space for Schottky groups of rank $g$ (without any marking).
\begin{proposition}
\label{Sgqa}
For any symplectic homomorphism $\alpha:\pi_g\to\Phi_g$, the quotient space
$\overline{T}_g(\alpha)/\Gamma_g(\alpha)$ is a complex manifold $\overline{S}_g(\alpha)$.
\end{proposition}
\begin{proof}
This is a local statement which is clear for points $(X,f)\in T_g$ since
$\Gamma_g(\alpha)$ is torsion free. For an
arbitrary $x=(X,f)\in\overline{T}_g$ we saw in Section \ref{v-tgnq} that the Dehn twists
$\tau_1,\dots,\tau_k$ around the loops $c_1,\dots,c_k$ that are contracted by
$f$ generate a finite index subgroup $\Gamma_x^0$ of the stabilizer $\Gamma_x$
of $x$ in $\Gamma_g$ (the quotient being the finite group Aut$\,(X)$). Let $\alpha$ be a symplectic
homomorphism with respect to standard generators $a_1,b_1,\dots,a_g,b_g$, and
assume $(X,f)\in\overline{T}_g(\alpha)$. Since the $c_i$ are in the normal subgroup generated
by $a_1,\dots,a_g$, they do not intersect any of the $a_j$ and thus
$\tau_i(a_j) = a_j$ for all $i$ and $j$. This shows $\Gamma_x\subseteq\Gamma_g(\alpha)$.\\
Now choose a neighbourhood $U$ of $x=(X,f)$ in $\overline{T}_g(\alpha)$ which is precisely
invariant under $\Gamma_x$. Then it follows from Proposition \ref{tgnq} (and
Definition \ref{covmgnq}) that $U/\Gamma_x$ is a complex manifold.
\end{proof}
For any two sets $a_1,b_1,\dots,a_g,b_g$ and $a'_1,b'_1,\dots,a'_g,b'_g$ of
standard generators, $a_i\mapsto a'_i$, $b_i\mapsto b'_i$ defines an
automorphism of $\pi_g$. Therefore for any two symplectic homomorphisms
$\alpha$ and $\alpha'$ there is an automorphism $\psi\in\Gamma_g$ such that
$\alpha = \alpha'\circ\psi$. Then clearly $N_\alpha = \psi(N_{\alpha'})$
and $\Gamma_g(\alpha') = \psi\Gamma_g(\alpha)\psi^{-1}$. This shows that, as an automorphism of
$\overline{T}_g$, $\psi$ maps $\overline{T}_g(\alpha)$ to $\overline{T}_g(\alpha')$ and descends to an isomorphism
$\bar\psi:\overline{S}_g(\alpha)\to\overline{S}_g(\alpha')$. We have shown:
\begin{remark}
\label{nureinSgqa}
The complex manifolds $\overline{S}_g(\alpha)$ are isomorphic for all symplectic homomorphisms
$\alpha$.
\end{remark}
It remains to show that the $\overline{S}_g(\alpha)$ coincide with the fine moduli space $\overline{S}_g$
of Section~\ref{s-stab}. This is achieved by showing that $\overline{S}_g(\alpha)$ satisfies
the same universal property as $\overline{S}_g$:
\begin{proposition}
\label{sgqa=sgq}
For any symplectic $\alpha$, $\overline{S}_g(\alpha)$ is a fine moduli space for stable Riemann
surfaces with Schottky structure and hence isomorphic to $\overline{S}_g$.
\end{proposition}
\begin{proof}
The idea of the proof is to endow the universal family over $\overline{T}_g(\alpha)$ with
a Schottky structure and to transfer this to a Schottky structure on the image
family over $\overline{S}_g(\alpha)$.\\[1mm]
Before explaining this for the whole family we consider
a single stable Riemann surface $X$. Let $d_1,\dots,d_k$ be
the nodes on $X$, $f:X_{\mbox{\scriptsize ref}}\to X$ a
deformation and
$\alpha:\pi_g\to\Phi_g$ a symplectic homomorphism such that
$x=(X,f)\in\overline{T}_g(\alpha)$. In Section \ref{v-structure} we described the universal
covering $\hat\Omega^+(x)\to\hat\Omega^+(x)/G_x=X$ of $X$ with cusps over the nodes. Recall
that $\hat\Omega^+(x)$ is the union of the plane region $\Omega^+(x)$ with the common
boundary points
of the doubly cusped regions lying over the nodes $d_i$, and that $G_x$ is
isomorphic to $\pi_g$.
\begin{remark}
\label{OplusxdmodNa}
Using the above notation, let $\rho:\pi_g\to G_x$ be an isomorphism and
$N_\alpha^{G_x}=\mbox{\em ker}\,(\alpha\circ\rho^{-1})\subseteq G_x$. Then
$\Omega=\hat\Omega^+(x)/N_\alpha^{G_x}$ is a complex space, $G_x/N_\alpha^{G_x}\cong\Phi_g$ acts
holomorphically on $\Omega$, and $\Omega\to\Omega/\Phi_g=X$ is a Schottky
covering.
\end{remark}
\begin{proof}
The key observation is that the stabilizer in $G_x$ of a point $\tilde d_i\in\hat\Omega^+(x)$
lying over $d_i$ is
generated by an element $\gamma_i$ corresponding under $\rho$ to a conjugate
of the loop
$f^{-1}(d_i)$. Since we assumed $(X,f)\in\overline{T}_g(\alpha)$, we have $\gamma_i\in
N_\alpha^{G_x}$. This shows that $\Omega$ is a complex space, more precisely: a
Riemann surface with nodes. The other assertions then follow directly from the definitions.
\end{proof}
The above construction can be carried over to families in the following way:
First consider the universal family $\overline{\cal C}_g$ over $\overline{T}_g$ and the universal
Teichm\"uller structure $\hat\Omega^+_g\to\overline{\cal C}_g$ on it. Denote by $\overline{\cal C}_g(\alpha)$ resp.\
$\hat\Omega^+_g(\alpha)$ the restriction to $\overline{T}_g(\alpha)$. Then the quotient space
$\hat\Omega^+_g(\alpha)/N_\alpha$ is a complex space on which $\Phi_g=\pi_g/N_\alpha$ acts.
The quotient map $\hat\Omega^+_g(\alpha)/N_\alpha\to\overline{\cal C}_g(\alpha)$ is a Schottky covering and the
identification of $\Phi_g$ with the group of deck transformations defines a
Schottky structure. \\[1mm]
The group $\Gamma_g(\alpha)$ acts not only on $\overline{T}_g(\alpha)$, but also on $\hat\Omega^+_g(\alpha)$
as follows: for
$\varphi\in\Gamma_g(\alpha)$ and $(x,z)\in \hat\Omega^+_g(\alpha)$ with $x\in\overline{T}_g(\alpha)$ and $z\in\hat\Omega^+(x)$
we set
\[\varphi(x,z)=(\varphi(x),z).\]
Note that the groups $G_x$ and $G_{\varphi(x)}$ are the same (only the
isomorphism with $\pi_g$ has changed); therefore
$\hat\Omega^+(x)=\hat\Omega^+(\varphi(x))$. This action, which is trivial on the
fibres,
descends to actions of $\Gamma_g(\alpha)$ on $\hat\Omega^+_g(\alpha)/N_\alpha$ and on $\overline{\cal C}_g(\alpha)$. The
respective orbit spaces give a family $\overline{\cal C}_g = \overline{\cal C}_g(\alpha)/\Gamma_g(\alpha)$ over $\overline{S}_g(\alpha)$ and a
Schottky structure on it. Using the universal property of the family over
$\overline{T}_g(\alpha)$ (see Theorem~\ref{fein}) and the fact that Schottky structures are
locally induced by Teichm\"uller structures, we find that the Schottky
structure on $\overline{\cal C}_g$ is in fact universal.
\end{proof}
The following diagram collects the relations between the spaces introduced and
used in this section. The horizontal maps are open embeddings, the last two
vertical maps are analytic with discrete fibres; all
other maps in the diagram are quotient maps for the groups indicated (to be
precise, the map from $\overline{T}_g(\alpha)$ to $\overline{M}_g$ is the restriction of the orbit map
for the action of $\Gamma_g$ on $\overline{T}_g$).\\[1mm]
\newpage
\begin{center}
$\xymatrix@=9ex{\ar@{^{(}->}[rr]\ar[drr]^(.55){/\Gamma_g(\alpha)}\ar[ddrr]^(.6){/H_g(\alpha)}\ar[dddrr]_{/\Gamma_g}T_g\
&&\ar[drr]^{/\Gamma_g(\alpha)}\ar@{-}'[dr]\ar[dddrr]^{/\Gamma_g}|(.33)\hole|(.666)\hole\overline{T}_g(\alpha)\\&&\ar@{^{(}->}[rr]\ar[d]^{/\mbox{\scriptsize
Out}(\Phi_g)}S_g\ &\ar[dr]^(.35){/H_g(\alpha)}&\ar[d]^{/\mbox{\scriptsize
Out}(\Phi_g)}\overline{S}_g\\&&\ar@{^{(}->}[rr]\ar[d]\hat S_g\ &&\ar[d]\hat{\overline{ S}}_g\\&&\ar@{^{(}->}[rr]M_g&&\overline{M}_g\ }$\\[5mm]
\end{center}
\subsection{Teichm\"uller disks in Schottky space}
\label{s-geod}
Let $\iota:\HH\to T_g$ be a Teichm\"uller embedding\index{Teichm\"uller embedding} as in Definition \ref{emb} and $\Delta
= \iota(\HH)$ its image in $T_g$. Let $\mbox{Stab}(\Delta)$ be the stabilizer of $\Delta$ in $\Gamma_g$. We have seen in Section \ref{lattice} that $\mbox{Stab}(\Delta)$ maps surjectively to the projective Veech group $\bar\Gamma_\iota$ of $\iota$ (see Definition \ref{Veechgroup}); the kernel of this map is the pointwise stabilizer of $\Delta$.\\
In this section we assume that $\bar\Gamma_\iota$ is a lattice in
PSL$_2(\mathbb{R})$, or equivalently that the image $C_\iota$ of $\Delta$ in $M_g$ is a Teichm\"uller curve\index{Teichm\"uller curve}
(cf.~Corollary \ref{latticeproperty}). As mentioned in the introduction, Veech showed that $C_\iota$ is not a projective curve and thus cannot be closed in $\overline{M}_g$.
\begin{proposition}
\label{durchschnitt}
Let $\iota:\HH\to T_g$ be a Teichm\"uller embedding such that $\bar\Gamma_\iota$ is a lattice in
$\mbox{\em PSL}_2(\mathbb{R})$. Then there exists a symplectic homomorphism
$\alpha:\pi_g\to\Phi_g$ such that
\[{\mbox{\em Stab}(\Delta)}\cap\Gamma_g(\alpha)\not=\{1\}.\]
\end{proposition}
Since $\Gamma_g(\alpha)$ is torsion free, this implies that the intersection is
infinite. As a consequence, the image of the Teichm\"uller disk $\Delta$ in the Schottky space\index{Teichm\"uller disk!image in Schottky space} $S_g$ is the quotient by an infinite group and in particular
not isomorphic to a disk.
\begin{proof}
Denote by $\overline{\Delta}$ and $\bar
C_\iota$ the closures of $\Delta$ and $C_\iota$ in $\overline{T}_g$ and $\overline{M}_g$,
respectively. Since $C_\iota$ is not closed, we can
find a point $z\in \bar C_\iota-C_\iota$; let $x\in\overline \Delta$ be a point above $z$. By
Prop.~\ref{Tgqa}\,b) there is a symplectic homomorphism $\alpha$ such that
$x\in\overline{T}_g(\alpha)$.\\
Let $\bar s_\alpha:\overline{T}_g(\alpha)\to\overline{S}_g$ be the quotient map for $\Gamma_g(\alpha)$
(see~Prop.~\ref{Sgqa} and Prop.~\ref{sgqa=sgq}) and let
$D(\iota)=s_\alpha(\Delta)$ be the image of $\Delta$
in $S_g$. Then the closure $\bar D(\iota)$ of $D(\iota)$ in $\overline{S}_g$ contains
$\bar s_\alpha(x)$, and we have $\bar C_\iota=\bar \mu(\bar
D(\iota))$, cf.~the diagram at the end of Section \ref{s-ext}.\\
By our assumption, $\bar C_\iota$ is Zariski closed in $\overline{M}_g$. Therefore
$\bar\mu^{-1}(\bar C_\iota)$ is an analytic subset of $\overline{S}_g$. $\bar
D(\iota)$ is an irreducible component of $\bar\mu^{-1}(\bar C_\iota)$ and
hence also an analytic subset.\\
Recall, from Corollary~\ref{latticeproperty}, that $\Delta/\mbox{Stab}(\Delta)$ is the
normalization of $C_\iota$. Furthermore, by Prop.~\ref{iquerfortc}, $\overline{\Delta}$ is isomorphic to $\HH\cup\{\mbox{cusps of}\ \Gammaquer^*_{\iota}\}$. Therefore $\overline{\Delta}/\mbox{Stab}(\Delta)$ is the normalization of $\bar C_\iota$. The restriction of the quotient map $\overline{\Delta}\to\overline{\Delta}/\mbox{Stab}(\Delta)$ to the intersection
$\overline{\Delta}_\alpha=\overline
\Delta\cap\overline{T}_g(\alpha)$ factors through $\bar s_\alpha$. If the intersection
$\mbox{Stab}(\Delta)\cap\Gamma_g(\alpha)$ were trivial,
this restriction would be an
isomorphism. But then $\overline{\Delta}_\alpha$ would be isomorphic to an
analytic subset of a complex manifold. This is impossible since
$\overline{\Delta}_\alpha$ contains $x\in \overline{T}_g-T_g$ and hence is not a complex space.
\end{proof}
\section{Geodesic rays, Teichm\"uller disks and Teichm\"uller curves}\label{bp}
The aim of this section is to introduce Teichm\"uller
disks and Teichm\"uller curves.
We start by recalling in \ref{deform} the concept of
Teich\-m\"uller deformations and use them to give an
alternative definition of the Teichm\"uller space $T_g$ to the one
given in the introduction.
This will help us to define geodesic rays
in the Teich\-m\"uller space in \ref{sr}.
In \ref{defdisks}
we introduce Teichm\"uller disks as a complex version of geodesic rays,
giving several alternative definitions. Finally, in \ref{tc}
we introduce the Veech group and Teichm\"uller
curves and summarize some facts about the interrelation between these objects.
\subsection{Teichm\"uller deformations}\label{deform}
As one of numerous possibilities, one can define the Teichm\"uller space
as the space of Teichm\"uller deformations\index{Teichm\"uller deformations}.
We briefly recall
this concept here. It is described e.g. in \cite[Ch I, \S 3]{A}.
At the end of this subsection we extend it to the corresponding concept
for punctured Riemann surfaces and their Teichm\"uller space $T_{g,n}$,
cf. \cite[Chapter II, \S 1]{A}.\\
Let $X = X_{\mbox{\fs ref}}$ be a fixed Riemann surface of genus $g \geq 2$ and $q$ be a
holomorphic quadratic differential on $X$. We refer to the zeros of $q$
as {\em critical points}\index{holomorphic quadratic differential!critical point of}, all other points are {\em regular points}\index{holomorphic quadratic differential!regular point of}.
Then on the surface
\[X^* = X - \{P \in X |\; P \mbox{ is a critical point of }q\}\]
the differential $q$ naturally defines a
{\em flat structure}\index{flat structure} $\mu$, i.e.\
an atlas such that all transition maps are of the form
$z \mapsto \pm z + c$, with
some constant $ c \in \mathbb{C}$. The charts of $\mu$
in regular points
$z_0$ are given as
\begin{equation}\;\label{natchart}z \,\mapsto\, \int_{z_0}^z\sqrt{q(\xi)}d\xi.
\end{equation}
One may deform this flat structure
by composing each
chart with the map
\begin{equation} \label{Kdeform}
x + iy \;\; \mapsto \;\; Kx + iy \quad
= \; \frac{1}{2}(K+1)z + \frac{1}{2}(K-1)\overline{z},
\quad (x,y \in \mathbb{R})
\end{equation}
with $K$ an arbitrary real number $>1$. This defines a new
flat structure on $X^*$ which can be uniquely extended to a holomorphic
structure on $X$.\\
We call $X_K$ the Riemann surface that we obtain this way,
$X_1 = X$ the surface with the original complex structure and
$f_K: X_1 \to X_K$ the map that is topologically the identity.
The map $f_K$ is a Teichm\"uller map and has constant complex dilatation \index{complex dilatation}
\[
k(z) = \frac{(f_K)_{\overline{z}}}{(f_K)_{z}} \;\; = \;\; \frac{K-1}{K+1}.\]
Its maximal dilatation\index{maximal dilatation}
$\sup_{z \in X}\frac{1+|k(z)|}{1-|k(z)|}$ (as a quasiconformal map)
is equal to $K$.
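As a quick sanity check of the relation between $k$ and $K$ (our own example, not taken from \cite{A}): for $K = 3$ one gets

```latex
\[
k = \frac{K-1}{K+1} = \frac{2}{4} = \frac{1}{2},
\qquad
\frac{1+|k|}{1-|k|} = \frac{3/2}{1/2} = 3 = K,
\]
```

so the maximal dilatation of $f_K$ indeed recovers $K$.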
\begin{definition}
Let $q$ be a holomorphic quadratic differential on $X$
and $K \in \mathbb{R}_{>1}$.
The pair $(X_K,f_K)$ as defined above is called the
{\em Teichm\"uller deformation}\index{Teichm\"uller deformation} of $X$ of
constant dilatation $K$ with respect to $q$.
\end{definition}
The pair $(X_K, f_K)$ defines a point in the Teichm\"uller space
$T_g$ which for simplicity we also denote as $(X_K,f_K)$.
Since the constant dilatation of $f_K$ is equal to $K$,
the Teichm\"uller distance\index{Teichm\"uller distance} between the points $(X_1,\mbox{id})$ and $(X_K, f_K)$
of $T_g$ is $\log(K)$.\\
If two holomorphic quadratic differentials on $X$
are positive scalar multiples of each other, they define, for each $K$,
the same point in $T_g$. Thus one restricts to
differentials with norm $1$. By Teichm\"uller's existence
and uniqueness theorems, see e.g. \cite[Chapter I, (3.5), (4.2)]{A},
one can show that each point in $T_g$ is uniquely obtained as a
Teichm\"uller deformation.
If $Q_X$ is the vector space of
all holomorphic quadratic differentials on $X$ and if $\Sigma_X$
is the unit sphere
in $Q_X$, one may thus write
\begin{equation} \label{tgdeform}
\{(X,q,k)| \; q \in \Sigma_X, k \in (0,1)\} \cup \{0\} \;= \; T_g,
\end{equation}
where the identification of the two sets is given by the map:
\begin{eqnarray} \label{tgtotdef}
(X,q,k) & \mapsto &
(X_K,f_K) \quad \mbox{ with }
K = \frac{1+k}{1-k} \Leftrightarrow k = \frac{K-1}{K+1} \quad
\mbox{ and }\nonumber \\
0 & \mapsto & \mbox{ the base point } (X,\mbox{id})
\end{eqnarray}
Of course, $(X_K,f_K)$ depends by definition on the differential $q$.
In the following we will also denote the base point by $(X,q,0)$.\\
The map (\ref{tgtotdef}) is a homeomorphism.
Here, on the
left-hand side of (\ref{tgdeform}), one takes the topology obtained
by identifying this set with the open unit ball
in $Q_X$. In particular, it follows that $T_g$ is contractible.\\
Teichm\"uller deformations can be understood as
{\em affine deformations}\index{affine deformation}
in the following sense:
Let us here and in the rest of the article identify $\mathbb{C}$ with $\mathbb{R}^2$
by the $\mathbb{R}$-linear map sending $(1,i)$ to the standard basis of $\mathbb{R}^2$.
Then the map in (\ref{Kdeform}) is equal to the affine map
\[\begin{pmatrix} x\\y \end{pmatrix} \;\mapsto \; \begin{pmatrix} K & 0\\ 0&1\end{pmatrix}\cdot \begin{pmatrix} x\\y\end{pmatrix} \]
Since composing charts with a biholomorphic map does not change the point in
Teich\-m\"uller
space, one obtains the same point $(X_K,f_K)$ in $T_g$ if
one composes each chart of the flat structure $\mu$ on $X$ with
the affine map
\begin{eqnarray}\label{XDK}
\begin{pmatrix} x\\y \end{pmatrix} \;\mapsto \; D_K\cdot\begin{pmatrix} x\\y\end{pmatrix} \;\; \mbox{ with } \,
D_K = \begin{pmatrix} \sqrt{K} & 0\\ 0&\frac{1}{\sqrt{K}}\end{pmatrix} \; \in \, \mbox{SL}_2(\mathbb{R})
\end{eqnarray}
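The passage from (\ref{Kdeform}) to (\ref{XDK}) can be made explicit (a short check of our own): the map $x+iy \mapsto Kx+iy$ is given by the matrix $\mbox{diag}(K,1)$, and composing with the biholomorphic rescaling $z \mapsto z/\sqrt{K}$ does not change the point in $T_g$, so

```latex
\[
\frac{1}{\sqrt{K}}\begin{pmatrix} K & 0\\ 0 & 1 \end{pmatrix}
\;=\; \begin{pmatrix} \sqrt{K} & 0\\ 0 & \frac{1}{\sqrt{K}} \end{pmatrix}
\;=\; D_K \;\in\; \mbox{SL}_2(\mathbb{R}).
\]
```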
We will use the following notations which are compatible with those in
Section~\ref{affdeforms} where we introduce the general concept
of affine deformations.
\begin{definition}\label{defDK}
Let $X$ be a compact Riemann surface of genus $g$, $q$ a holomorphic quadratic
differential, $\mu$ the flat structure defined by $q$.
We call the flat structure defined by (\ref{XDK})\; $\mu_{D_K}$
and denote $(X,\mu)\circ D_K = (X,\mu_{D_K})$.
\end{definition}
Note that $(X,\mu_{D_K})$, as a Riemann surface, is isomorphic to $X_K$.
Thus the point $[(X,\mu_{D_K}),\mbox{id}]$ in $T_g$ defined by the marking
$\mbox{id}:X \to (X,\mu_{D_K})$ is equal to $(X_K,f_K)$.\\
Finally, let us turn to {\em Teichm\"uller deformations of punctured
Riemann surfaces}\index{Teichm\"uller deformation!of punctured Riemann surface}:
The definition is done almost in the same way as in the case without
punctures, see \cite[Chapter II, \S 1]{A}.
Suppose that $g$ and $n$ are natural numbers with $3g-3+n>0$.
Let $X$ be a Riemann surface of genus $g$
with $n$ marked points $P_1$, \ldots, $P_n$, and
$X_{\mbox{\fs ref}} = X_0 = X - \{P_1, \ldots, P_n\}$.\\
In this case, one uses {\em admissible holomorphic quadratic
differentials}\index{holomorphic quadratic differential!admissible} on $X_0$. They are by
definition those meromorphic quadratic differentials
on $X$ that restrict to a holomorphic quadratic differential on $X_0$
and have at each puncture either a simple pole or extend holomorphically across
the puncture, see \cite[Chapter II, (1.4)]{A}. The vector space of these
differentials is called $Q_{X_0}$. For $q \in Q_{X_0}$ we define the
{\em critical points}\index{holomorphic quadratic differential!critical point of} to be the marked points and all zeroes of $q$; the remaining
points are called {\em regular}\index{holomorphic quadratic differential!regular point of}. Now the definition of the Teichm\"uller deformation
proceeds exactly as before, replacing $Q_X$ by $Q_{X_0}$
throughout. One obtains in the same way:
\begin{equation} \label{tgdeformpunct}
\{(X,q,k)| \; q \in \Sigma_{X_0}, k \in (0,1)\} \cup \{0\} \;= \; T_{g,n}.
\end{equation}
Here $\Sigma_{X_0}$ is the unit sphere in $Q_{X_0}$.
\subsection{Geodesic rays}\index{geodesic ray}
\label{sr}
Let $X = X_{\mbox{\fs ref}}$ be a Riemann surface of genus $g$.
A holomorphic quadratic differential $q$ on $X$
naturally defines a geodesic embedding of $\mathbb{R}_{\geq 0}$
into $T_g$ with respect to the Teichm\"uller metric on $T_g$
as is described in the following.
\begin{definition}
Let $q$ be a holomorphic quadratic differential on $X$
and $\gamma$ the map:
\begin{equation}\label{geodesicray}
\gamma = \gamma_q: \left\{
\begin{array}{lcl}
[0,\infty) & \rightarrow & T_g\\
t & \mapsto & (X_K,f_K) \; = \; (X,\mu_{D_K})
\; = \; (X,q,k) \\[1mm]
& &\phantom{(X_K,f_K) }
\mbox{with } \; K = e^t, \quad
k = \frac{K-1}{K+1}
\end{array}\right.
\end{equation}
The image of $\gamma$ is called the {\em geodesic ray
in $T_g$ in direction of (or with respect to) $q$ starting at $(X,\mbox{\em id})$}.
\end{definition}
Here we use the notation of the last section:
\[(X_K,f_K) \stackrel{\mbox{\footnotesize Def. }\ref{defDK}}{=} (X,\mu_{D_K})
\stackrel{(\ref{tgtotdef})}{=} (X,q,k)\]
is the point in $T_g$
defined by the Teichm\"uller deformation of
$X$ of dilatation $K$ with respect to $q$.
Recall from the last section that the distance between the
two points $(X_K, f_K)$
and $(X,\mbox{id})$ in $T_g$
is $\log(K)$. Thus $\gamma$ is an isometric embedding.\\
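In passing we note (a reformulation of our own) that along the ray the dilatation parameter is

```latex
\[
k(t) \;=\; \frac{K-1}{K+1} \;=\; \frac{e^{t}-1}{e^{t}+1}
\;=\; \tanh\Big(\frac{t}{2}\Big),
\]
```

which is exactly the Euclidean radius of the point at hyperbolic distance $t$ from $0$ in the Poincar{\'e} disk model of curvature $-1$.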
In fact, from the description of $T_g$ given in (\ref{tgdeform})
one observes that all points in $T_g$ which have distance
$\log(K)$ to the base point
$(X,\mbox{id})$ are Teichm\"uller deformations of $X$
of constant dilatation $K$ with respect to a holomorphic quadratic
differential. It follows that each isometric embedding of
$[0,\infty)$ into $T_g$ is of the form (\ref{geodesicray}).
\subsection{Teichm\"uller disks}\label{defdisks}\index{Teichm\"uller disk}
In this section we define Teichm\"uller disks. They can be
found defined under this name e.g. in \cite[p.~149/150]{n} and
\cite[8.1-8.2]{GL}.
One may find comprehensive overviews e.~g. in \cite{V}
and \cite{EG}, or more recently \cite{mcm} and \cite{L},
to pick only a few of numerous references where they occur.
We introduce them here
in detail, comparing three different ways
to construct them. For completeness we have
included most of the proofs.
\begin{definition} \label{emb}
Let $3g-3+n > 0$.
A {\em Teichm\"uller disk $\Delta_{\iota}$}
is the image of a holomorphic isometric
embedding
\[ \iota: \mathbb D \hookrightarrow T_{g,n} \]
of the complex unit disk $\mathbb D = \{z \in \mathbb{C}| |z| < 1\}$ into the
Teichm\"uller space. Here we take
the Poincar{\'e} metric of constant curvature $-1$ on $\mathbb D$ and the
Teichm\"uller metric on
$T_{g,n}$. The embedding $\iota$ is also called {\em Teichm\"uller embedding}\index{Teichm\"uller embedding}.
\end{definition}
Instead of the unit disk $\mathbb D$ one may just as well take the
upper half plane $\HH$ with the hyperbolic metric. We will switch between
these two models using the holomorphic isometry
\begin{eqnarray} \label{uebergang}
f: \;\; \HH \rightarrow \mathbb D,\quad t \mapsto \frac{i-t}{i+t}.
\end{eqnarray}
Thus Teichm\"uller disks are obtained equivalently
as images of holomorphic
isometric embeddings $\HH \hookrightarrow T_{g,n}$ of the upper half plane $\HH$
into the Teichm\"uller space $T_{g,n}$.\\
How does one find such embeddings?
Similarly as for geodesic rays, each holomorphic quadratic differential
$q$ on a Riemann surface $X$ defines
a Teich\-m\"uller disk. In the following we describe three alternative
constructions starting from such a differential
$q$ that all lead to the same Teichm\"uller disk $\Delta_q$. For simplicity
we only consider the case $n = 0$ and $g \geq 2$. However, the same
constructions can be carried out in the general case of punctured surfaces.
\subsubsection[Teichm\"uller disks as a collection of geodesic rays]{
Teichm\"uller disks as a collection of geodesic rays}\label{coll}\index{Teichm\"uller disk}
\begin{definition}
Let $q$ be a holomorphic
quadratic differential on a Riemann surface $X$ of genus $g$.
Let $\iota_1$ be the map
\begin{equation*}
\iota_1: \left\{
\begin{array}{ccl}
\mathbb D &\to& T_g\\
z = r \cdot e^{i\varphi} &\mapsto& (X, e^{-i\varphi}\cdot q, r).
\end{array} \right.
\end{equation*}
\end{definition}
Here we use the definition of $T_g$ given by (\ref{tgdeform}).
Hence, $(X, e^{-i\varphi}\cdot q, r)$ is the point
defined by the Teichm\"uller deformation
of $X$ of dilatation $K = \frac{1+r}{1-r}$ with respect to
$q_{-\varphi} = e^{-i\varphi}\cdot q$. \\
We shall show in Proposition \ref{alleszs} that
$\iota_1$ is an isometric holomorphic
embedding, thus the image $\Delta_{\iota_1}$ of $\iota_1$ is
a Teichm\"uller disk.\\
The map $\iota_1$ may be considered as a collection of geodesic rays
in the following sense:
Let $\tau_{\varphi}$ be the geodesic ray in $\mathbb D$ starting from $0$
in direction $\varphi$, i.e.:
\[ \tau_{\varphi}:
\left \{\begin{array}{lcl}
[0,\infty) &\to& \mathbb D\\
t &\mapsto& r(t)\cdot e^{i\varphi}
\quad \mbox{ with } r(t) = \frac{e^t - 1}{e^t + 1}
\end{array}
\right.
\]
Then $\iota_1 \circ \tau_{\varphi}: [0,\infty) \to T_g$
is equal to the map given in (\ref{geodesicray}) that defines the geodesic ray
to the holomorphic quadratic differential $q_{-\varphi} = e^{-i\varphi}\cdot q$
on $X$.\\
Thus the Teichm\"uller disk $\Delta_{\iota_1}$ is the union of
all geodesic rays defined by the differentials $e^{i\varphi}\cdot q$
with $\varphi \in [0,2\pi)$.
Furthermore, $\iota_1 \circ \tau_{\varphi}$
is the parameterization by length of the restriction $\iota_1|_{R_{\varphi}}$
of $\iota_1$ to the ray $R_{\varphi} = \{r\cdot e^{i\varphi}|\,r \in [0,1)\}$.\\[3mm]
\subsubsection[Teichm\"uller disks by affine
deformations]{Teichm\"uller disks by affine
deformations\\[2mm]}\label{affdeforms}\index{Teichm\"uller disk}
We now describe a second approach that, starting from
a holomorphic quadratic differential $q$,
leads to the same Teichm\"uller disk
as in \ref{coll}.\\
Recall from Section \ref{deform} that a holomorphic
quadratic differential $q$ defines on
$X^* = X - \{\mbox{zeroes of $q$}\}$ a flat structure
$\mu$.
The group $\mbox{SL}_2(\mathbb{R})$ acts on the flat structures
of $X^*$ (as topological
surface) in the following way:
Let $B \in \mbox{SL}_2(\mathbb{R})$ and $\mu$ be a flat structure on
$X^*$. Composing each chart of $\mu$ with the affine map
$z \mapsto B\cdot z$ gives a new flat structure on $X^*$ which we denote
$B \circ (X,\mu)$
or $(X,\mu_B)$. In the special case $B = D_K$
we obtain the Teichm\"uller deformation of dilatation $K$,
cf. Definition \ref{defDK}.
\begin{definition}\label{defdeform}
We call $(X,\mu_B) = B \circ (X,\mu)$ {\em affine deformation}\index{affine deformation}
of $(X,\mu)$ by the matrix $B$.
\end{definition}
\noindent
Note that for $B_1$, $B_2$ in $\mbox{SL}_2(\mathbb{R})$ one may write
\[B_1\circ (B_2 \circ (X,\mu)) = B_1 \circ (X,\mu_{B_2}) = (X,\mu_{B_1B_2}) =
B_1\cdot B_2 \circ(X,\mu).\]
\vspace*{2mm}
The flat structure $\mu_B$ defines in particular
a complex structure on $X$. We identify here
the complex plane $\mathbb{C}$ with $\mathbb{R}^2$ as we already did in
Section \ref{deform}.
In general the new complex structure will be
different from the one defined by $\mu$.
Taking the identity $\mbox{id}:(X,\mu) \to (X,\mu_B)$ on
$X$ as marking, we obtain a point $P_B = [(X,\mu_B),\mbox{id}]$
in the Teichm\"uller space $T_g$. By abuse of notation we will
sometimes denote this point also just as $(X,\mu_B)$.\\
Thus one obtains the map
\[\hat{\iota}_2:\,\mbox{SL}_2(\mathbb{R}) \to T_g, \;\; B \; \mapsto \;
P_B = [(X,\mu_B),\mbox{id}] = (X,\mu_B) \]
If, however, a matrix $U$ is in $\mbox{SO}_2(\mathbb{R})$, the
map $\mbox{id}: (X,\mu_B) \to (X,\mu_{U\cdot B})$ is holomorphic,
thus the point in Teichm\"uller
space does not change, i.~e.
\begin{equation}\label{PU}
U \in \mbox{SO}_2(\mathbb{R}) \;\; \Rightarrow \;\;
P_{UA} = P_A \mbox{ for all } A \in \mbox{SL}_2(\mathbb{R})
\end{equation}
Hence $\hat{\iota}_2$
induces a map
\[\iota_2: \mbox{SO}_2(\mathbb{R})\backslash\mbox{SL}_2(\mathbb{R}) \to T_g,\quad
\mbox{SO}_2(\mathbb{R})\cdot B \; \mapsto \;
P_B = [(X,\mu_B),\mbox{id}] = (X,\mu_B) .\]
\vspace*{2mm}
Note that the action of $\mbox{SL}_2(\mathbb{R})$ on the flat structures
$\{(X,\mu_A)|\; A \in \mbox{SL}_2(\mathbb{R})\}$ does not descend to the
image set $\{P_A|\; A \in \mbox{SL}_2(\mathbb{R})\}$ in $T_g$;
in particular,
$P_{U} = P_{I}$ does not imply $P_{AU} = P_{A}$!\\
\noindent
{\bf The Teichm\"uller disk:}\\[1mm]
One may identify $\mbox{SO}_2(\mathbb{R})\backslash\mbox{SL}_2(\mathbb{R})$
with the upper half plane
$\HH$ in the following way:
Let $\mbox{SL}_2(\mathbb{R})$ act by
M\"obius transformations
on the upper half plane $\HH$. This action is transitive and
$\mbox{SO}_2(\mathbb{R})$ is the stabilizing
group of $i$. Thus the map
\begin{equation}\label{pequ}
p:\;\mbox{SL}_2(\mathbb{R}) \to \HH, \;\; A \, \mapsto \, -\overline{A^{-1}(i)}
\end{equation}
induces a bijection $\mbox{SO}_2(\mathbb{R})\backslash\mbox{SL}_2(\mathbb{R}) \to \HH$.
Its inverse map is induced by
\[\HH \to \mbox{SL}_2(\mathbb{R}),\quad t \, \mapsto \,
\frac{1}{\sqrt{\mbox{Im}(t)}}\begin{pmatrix} 1&\mbox{Re}(t)\\0&\mbox{Im}(t) \end{pmatrix}\]
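One can check directly (our computation) that this indeed inverts $p$: for $t = x + iy \in \HH$ and $A_t = \frac{1}{\sqrt{y}}\begin{pmatrix}1&x\\0&y\end{pmatrix}$ one finds

```latex
\[
A_t^{-1} \;=\; \begin{pmatrix} \sqrt{y} & -x/\sqrt{y}\\ 0 & 1/\sqrt{y} \end{pmatrix},
\qquad
A_t^{-1}(i) \;=\; \frac{\sqrt{y}\, i - x/\sqrt{y}}{1/\sqrt{y}} \;=\; -x + iy,
\qquad
-\overline{A_t^{-1}(i)} \;=\; x + iy \;=\; t.
\]
```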
Composing $\iota_2$ from above with this bijection
one obtains a map from $\HH$ to $T_g$ which we also denote
by $\iota_2$.
\begin{definition}\label{defi2}
Let $q$ be a holomorphic quadratic differential on the
Riemann surface $X$ and $\mu$ the flat structure
defined by $q$.
Let $\iota_2$ be the map
\[\iota_2:\, \HH \to T_g, \quad
t \,\mapsto\, P_{A_t} = [(X,\mu_{A_t}),\mbox{id}]\]
with $A_t$ chosen such that $-\overline{A_t^{-1}(i)} = t$.
\end{definition}
Note that the identification of $\mbox{SO}_2(\mathbb{R})\backslash\mbox{SL}_2(\mathbb{R})$ with
$\HH$
given by $p$ may seem a bit cumbersome, but
one has to compose $A \mapsto A^{-1}(i)$ with the reflection in
the imaginary axis in order for $\iota_2$ to become holomorphic.
We will see this later in \ref{beltrams}.
In fact one has much more, as is stated in the
following proposition.
\begin{proposition} \label{alleszs}
The maps $\iota_1$ and $\iota_2$ are Teichm\"uller embeddings.
They define the same Teichm\"uller disk
\begin{eqnarray}\label{deltaq}
\Delta_q \; = \; \Delta_{\iota_1} \; = \; \iota_1(\mathbb D)
\; = \; \Delta_{\iota_2} \; = \;
\iota_2(\HH).
\end{eqnarray}
\end{proposition}
\begin{proof}
The proof is given in the rest of Subsection \ref{affdeforms}
and in \ref{beltrams}:\\
In Proposition \ref{ioeinszwei} we show that
$\iota_2 = \iota_1 \circ f$ with $f$ from (\ref{uebergang})
(see also Figure~\ref{gb}); thus it suffices
to show for only one of them that it is isometric,
and likewise that it is holomorphic.\\
In Proposition \ref{remisom} it is shown that $\iota_2$
is isometric. In Subsection \ref{beltrams}, it is shown
that $\iota_1$ is holomorphic (see Corollary \ref{corhol}).
For this purpose we introduce an
embedding $\iota_3:\mathbb D \to T_g$, using Beltrami differentials,
for which it is not difficult to see that it is holomorphic,
and show that it is equal
to $\iota_1$.\\
That $\iota_1$ and $\iota_2$ define the same
Teichm\"uller disk then also follows from Proposition
\ref{ioeinszwei}.
\end{proof}
In fact, the described constructions yield not only some special
examples but all Teichm\"uller disks:
Each Teichm\"uller disk is equal to $\Delta_q$ as in
(\ref{deltaq}) for some holomorphic quadratic differential $q$.
And all Teich\-m\"uller embeddings are of the form $\iota_1:\mathbb D \hookrightarrow T_g$
or equivalently $\iota_2: \HH \hookrightarrow T_g$, see \cite[7.4]{GL}. \\
In order to see that $\iota_2$ from Definition \ref{defi2}
is isometric we first calculate the Teichm\"uller distance
between two affine deformations.\index{affine deformations!Teichm\"uller distance between two}\\[3mm]
\noindent
{\bf Teichm\"uller distance between two affine deformations:}\\[1mm]
In what follows we will constantly use the following fact
about matrices in $\mbox{SL}_2(\mathbb{R})$:
\begin{remark}
Each matrix $A \in \mbox{\em SL}_2(\mathbb{R})$ with $A \not\in \mbox{\em SO}_2(\mathbb{R})$
can be decomposed, uniquely
up to sign, as follows:
\begin{eqnarray}\label{decompose}
&A = U_1\cdot D_K \cdot U_2 \;\; \mbox{ with } U_1, U_2 \in \mbox{\em SO}_2(\mathbb{R}),
\quad D_K = \begin{pmatrix} \sqrt{K} & 0 \\ 0 & \frac{1}{\sqrt{K}}\end{pmatrix}, \;\;K > 1.
\nonumber&\\
&\mbox{We may denote:} \quad
U_2 = U_{\theta} =
\begin{pmatrix} \cos(\theta)&-\sin(\theta)\\ \sin(\theta) & \cos(\theta) \end{pmatrix}.&
\end{eqnarray}
\end{remark}
This fact can e.g. be seen geometrically as follows: $\mbox{SL}_2(\mathbb{R})$
acts transitively
on the upper half plane $\HH$ by M\"obius transformations.
The point $i\in \HH$ can be mapped to $A(i)\neq i$ by first
doing a stretching
along the imaginary axis in direction $\infty$ and afterwards
a rotation around $i$, i.~e. $A(i) = U_1(D_K(i))$
with suitably chosen $U_1 \in \mbox{SO}_2(\mathbb{R})$ and $D_K$ with $K>1$
as in the remark.
Since the stabilizer of $i$ in $\mbox{SL}_2(\mathbb{R})$ is $\mbox{SO}_2(\mathbb{R})$,
one has $A = U_1\cdot D_K \cdot U_2$ with $U_2$ also in $\mbox{SO}_2(\mathbb{R})$.
A short calculation gives the uniqueness claim.\\
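As an illustration (our own example): since $A = U_1D_KU_2$ implies $AA^{T} = U_1D_K^{2}U_1^{-1}$, the dilatation $K$ is determined by the trace via $K + K^{-1} = \mbox{tr}(AA^{T})$. For the shear $A = \begin{pmatrix}1&1\\0&1\end{pmatrix}$ one gets

```latex
\[
A A^{T} \;=\; \begin{pmatrix} 2 & 1\\ 1 & 1 \end{pmatrix},
\qquad
K + K^{-1} \;=\; 3,
\qquad
K \;=\; \frac{3+\sqrt{5}}{2}.
\]
```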
In the following proposition again let
$q$ be a holomorphic quadratic differential on
$X = X_{\mbox{\fs ref}}$ and $\mu$ the flat structure that $q$ defines.
\begin{proposition}\label{distance}
Let $A$ and $B$ be in $\mbox{\em SL}_2(\mathbb{R})$ with $A\cdot B^{-1} \not\in \mbox{\em SO}_2(\mathbb{R})$ and
\[ A\cdot B^{-1} = U_1 \cdot D_K \cdot U_2 \]
with $U_1$, $U_2$ and $D_K$ as in (\ref{decompose}). Then
the Teichm\"uller distance between the two points
$P_A = [(X,\mu_A),\mbox{\em id}]$ and $P_B = [(X,\mu_B),\mbox{\em id}]$
in $T_g$ is $\log(K)$.
\end{proposition}
\begin{proof}
We will proceed in three steps:\\
\noindent
{\bf a)} Suppose $B$ is the identity matrix $I$ and
\[A \;=\; D_K \;=\; \begin{pmatrix} \sqrt{K} & 0\\ 0&\frac{1}{ \sqrt{K}} \end{pmatrix}
\;\;\mbox{ for some }
K \in \mathbb{R}_{>1}.\]
Thus we have in fact that $P_A = [(X,\mu_{D_K}),\mbox{id}]$ is the point in $T_g$
defined by the Teichm\"uller deformation of dilatation $K$
with respect to $q$, see Definition \ref{defDK}. Hence the distance
between $P_A$ and the base point $(X_{\mbox{\fs ref}},\mbox{id}) = P_I$
is $\log(K)$.\\
\noindent
{\bf b)} Suppose again that $B = I$, but $A$ is an arbitrary
matrix in $\mbox{SL}_2(\mathbb{R})$.\\
Thus \; $A = U_1 \cdot D_K \cdot U_2$ \; and
the map $\mbox{id}: (X,\mu) \to (X,\mu_A)$ is the composition of three maps:
\[(X,\mu) \stackrel{\footnotesize\mbox{id}}{\to} (X,\mu_{U_2})
\stackrel{\footnotesize\mbox{id}}{\to} (X,\mu_{D_KU_2})
\stackrel{\footnotesize\mbox{id}}{\to} (X,\mu_{U_1D_KU_2}) \]
Since the first and the third map are biholomorphic
the Teichm\"uller distance is again $\log(K)$.\\
More precisely, write \; $U_2 = U_{\theta}$ \;
as in (\ref{decompose}).
Then $\mu_{U_2}$ is the flat structure obtained by
composing each chart with $z \mapsto e^{i\theta}\cdot z$. This
is equal to the flat structure defined by the quadratic differential
$q_{2\theta} = (e^{i\theta})^2\cdot q$ which is holomorphic on
the Riemann surface $X$.\\
Now,\; $\mbox{id}:(X,\mu_{U_2}) \to (X,\mu_{D_KU_2})$\; is (up to the stretching
$z \mapsto \sqrt{K}\cdot z$)
the Teich\-m\"uller deformation of dilatation $K$ with respect to
the holomorphic quadratic differential
$q_{2\theta}$. Thus the distance between $P_A = P_{U_1D_KU_2}
\stackrel{(\ref{PU})}{=} P_{D_KU_2}$ and
the base point $P_B = P_I$ is $\log(K)$.\\
\noindent
{\bf c)} Let now $A$, $B$ be arbitrary in $\mbox{SL}_2(\mathbb{R})$. The Teichm\"uller
metric does not depend on the chosen base point. Thus we may consider
$P_B$ as base point and $P_A$ as coming from the affine deformation defined
by the matrix $A\cdot B^{-1}$.
Then with the given decomposition $A\cdot B^{-1} = U_1 \cdot D_K \cdot U_2$
the distance is as in b) equal to $\log(K)$.
\end{proof}
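For instance (our computation), consider two deformations along the same differential: for $A = D_{K_1}$ and $B = D_{K_2}$ with $K_1 > K_2 > 1$ one has $AB^{-1} = D_{K_1/K_2}$ with $K_1/K_2 > 1$, so Proposition \ref{distance} gives

```latex
\[
d_T\big(P_{D_{K_1}}, P_{D_{K_2}}\big) \;=\; \log\Big(\frac{K_1}{K_2}\Big),
\]
```

in accordance with the fact that $t \mapsto P_{D_{e^t}}$ parameterizes a geodesic ray.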
\begin{proposition} \label{remisom}
$\iota_2$ is an isometric embedding.
\end{proposition}
\begin{proof}
We denote by $\rho$ the Poincar{\'e} distance in $\HH$
and by $d_T$ the Teichm\"uller distance in $T_g$. Let
$t_1$ and $t_2$ be arbitrary distinct points in $\HH$. We
may write $t_1 = p(A)$ and $t_2 = p(B)$ with $A$, $B$ in
$\mbox{SL}_2(\mathbb{R})$, $p$ as in (\ref{pequ}). Let
$AB^{-1} = U_1D_KU_2$ be the decomposition of $AB^{-1}$
from (\ref{decompose}); note that $AB^{-1} \notin \mbox{SO}_2(\mathbb{R})$
because $t_1 \neq t_2$. Then:
\begin{eqnarray*}
\rho(t_1,t_2) &=& \rho(-\overline{B^{-1}(i)}, -\overline{A^{-1}(i)})
= \rho(B^{-1}(i), A^{-1}(i)) = \rho(AB^{-1}(i),i)\\
&=& \rho(U_1D_KU_2(i),i)
= \rho(U_1D_K(i),i) \stackrel{\star}{=} \rho(D_K(i),i)\\
&=& \rho(Ki,i)\, =\, \log(K) \stackrel{\mbox{\footnotesize Prop. \ref{distance}}}{=}
d_T(P_B,P_A) = d_T(\iota_2(t_1),\iota_2(t_2))
\end{eqnarray*}
The equality $\star$ is given since $U_1$ is a hyperbolic
rotation with center $i$ and thus does not change the distance
to $i$.
\end{proof}
Now we show that $\iota_1$ and $\iota_2$
are ``almost'' the same map.
\begin{proposition}\label{ioeinszwei}
$\iota_1$ and $\iota_2$ fit together. More precisely:
$\iota_1 \circ f = \iota_2,$
with the isomorphism $f:\HH \to \mathbb D$ from (\ref{uebergang}).
\end{proposition}
The following diagram may be helpful while reading the
proof. Some parts will be explained only after the proof; in particular
the space $B(X)$ of Beltrami differentials will be introduced in
\ref{beltrams}.\\
\begin{minipage}{\linewidth}
\[\hspace*{-5mm} \xymatrix{
& \mathbb D \ar[rd]^{b} \ar@/^1.5cm/[rrd]^{\iota_1}
\save[]+<-22mm,2mm> *\txt<8pc>{%
${\scriptstyle
z = r\cdot e^{\alpha i} = \frac{i-t}{i+t}
= \frac{K-1}{K+1}\cdot e^{-2i\theta} \in }$}
\restore
& & \\
\mbox{SL}_2(\mathbb{R})
\ar[rd]^{p}
\ar@/_2.7cm/[rrr]_{\hat{\iota}_2}
\save[]+<9mm,4mm> *\txt<8pc>
$ \scriptstyle \ni \;\; A \; =\; U_1D_KU_{\theta} $
}
\restore
& &
B(X) \ar[r]^{\Phi}
\save[]+<4mm,7mm> *\txt<8pc> {
$z\cdot\frac{\bar{q}}{|q|} =
\frac{i-t}{i+t}\frac{\bar{q}}{|q|}
$}\restore
&
**[r]\hspace*{3mm}T_g \; \ni &{}
\save[]+<15mm,0mm>*\txt<5cm>{%
$\left\{
\begin{minipage}{4cm}
$\iota_1(z)$ \newline
$\scriptsize = (X,e^{-\alpha i}q,r)$\newline
\quad \; $\scriptsize = (X,e^{2\theta i}q,
\frac{\scriptscriptstyle K-1}{\scriptscriptstyle K+1}) =$
\newline
$\iota_2(t)$ \newline
$\scriptsize = P_A = [(X,\mu_A), \mbox{id}]$ \newline
$\scriptsize = [D_K \circ U_2 \circ (X,\mu), \mbox{id}]$
\end{minipage}\right.$}
\restore \\
& \hspace*{1cm}\HH \hspace*{1cm}
\ar[ru] \ar[uu]^{f}
\save[]+<14mm,-2mm>*\txt<8pc>
$\scriptsize \ni t = -\overline{A^{-1}(i)}$}
\restore
& &
}\]
\begin{center}
\refstepcounter{diagramm}{\it Figure \arabic{diagramm}}:
{\it Diagram for alternative definitions of the Teichm\"uller disk
$\Delta_q$}
\label{gb}
\end{center}
\end{minipage}
\vspace*{3mm}
\begin{proof}
We proceed in two steps:
\begin{enumerate}
\item
Let $A \in \mbox{SL}_2(\mathbb{R})$ be decomposed as in (\ref{decompose}):
$A = U_1\cdot D_K \cdot U_2$, \; $U_2 = U_{\theta}$.\\[1.5mm]
We show that $(f\circ p)(A) = r\cdot e^{-2i\theta}$ with $r = \frac{K-1}{K+1}$.
\item We show that $\iota_1(r\cdot e^{-2i\theta}) = \hat{\iota}_2(A)$.\\
\end{enumerate}
\noindent
{\bf Step 1:}
One may express $t := p(A)$ in terms of $K$ and $\theta$ as follows:
\begin{eqnarray*}
t &=& -\overline{A^{-1}(i)} = -U_2^{-1}D_K^{-1}(\overline{i})
= -U_2^{-1}(-\frac{i}{K})
= -\frac{\cos(\theta)\cdot \frac{-i}{K} + \sin(\theta)}
{-\sin(\theta)\cdot\frac{-i}{K} + \cos(\theta)}\\
&=& \frac{i\cos(\theta) - K\sin(\theta)}{i\sin(\theta)+K\cos(\theta)}
\end{eqnarray*}
Now one has:
\begin{eqnarray*}
f(p(A))
&=& f(t) \,=\, \frac{-t+i}{t+i}
\,=\, \frac{-i\cos(\theta) + K\sin(\theta) +i(i\sin(\theta)+K\cos(\theta))}{
i\cos(\theta) - K\sin(\theta) +i(i\sin(\theta)+K\cos(\theta))}\\
&& \hspace*{-20mm}
\,=\, \;\,
\frac{(K-1)[\sin(\theta)+i\cos(\theta)]}
{(K+1)[-\sin(\theta)+i\cos(\theta)]}
\,=\, \frac{K-1}{K+1} \cdot
\frac{-(\sin(\theta)+ i\cos(\theta))^2}
{(\sin(\theta)-i\cos(\theta))(\sin(\theta)+i\cos(\theta))} \\
&& \hspace*{-20mm}
\,=\, \;\,
\frac{K-1}{K+1}(\cos(\theta) - i\sin(\theta))^2
\,=\, \frac{K-1}{K+1}\cdot e^{-2i\theta}
\end{eqnarray*}
\noindent
{\bf Step 2:}
$\iota_1(r\cdot e^{-2i\theta}) = (X,e^{2i\theta}\cdot q, r) \in T_g$
is the point in the Teichm\"uller space that is obtained as
Teichm\"uller deformation of dilatation $\frac{1+r}{1-r} = K$ with respect to
the quadratic differential $e^{2i\theta}\cdot q$.
Recall from the proof of Proposition \ref{distance} that this is
precisely the point in
$T_g$ defined by the affine deformation
$D_K \circ U_{\theta} \circ (X,\mu) = (X,\mu_{D_KU_{\theta}})
= (X, \mu_{D_KU_2})$. Thus
\begin{equation}\label{DUK}
(X,e^{2i\theta}\cdot q, r) = P_{D_KU_{\theta}} =
P_{D_KU_2} \stackrel{(\ref{PU})}{=}
P_{U_1D_KU_2} = P_A =
\hat{\iota}_2(A).
\end{equation}
\end{proof}
Using (\ref{DUK}) one may also describe the geodesic rays
$\iota_1\circ \tau_{\varphi}$ from \ref{coll} in
the Teichm\"uller disk
$\Delta_q = \Delta_{\iota_1} = \Delta_{\iota_2}$ as follows.
\begin{corollary}\label{DKU}
Define $D_K, U_{\theta}$ as in (\ref{decompose}).
The map
\[
[0,\infty ) \;\to\; T_g, \quad t \;\mapsto \;
P_{D_KU_{\theta}} = [D_{K} \circ (X,\mu_{U_\theta}), \mbox{\em id}]
\;\mbox{ with } K = e^t
\]
is equal to $\iota_1\circ \tau_{-2\theta}$.\\ It is thus by \ref{coll}
the geodesic ray\index{geodesic ray} in direction of the quadratic differential
$q_{2\theta} = e^{2\theta i}q$.
\end{corollary}
\begin{proof}
One has: \;\;
$ t \;\stackrel{\tau_{-2\theta}}{\mapsto}\; r(t)e^{-2\theta i}
\;\stackrel{\iota_1}{\mapsto}\; (X,e^{2\theta i}\cdot q, r(t))
\;\stackrel{(\ref{DUK})}{=}\; P_{D_KU_{\theta}}.
$
\end{proof}
Hence, geometrically one obtains the geodesic ray to $q_{\varphi}$
by rotating the flat structure by $U_{\frac{\varphi}{2}}$ and then stretching
in vertical direction with dilatation $K$.
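Explicitly, combining $K = e^t$ with the relation $\frac{1+r}{1-r} = K$ from the proof of Proposition \ref{ioeinszwei}, the radial coordinate of this ray in $\mathbb D$ is

```latex
r(t) \;=\; \frac{K-1}{K+1}
     \;=\; \frac{e^{t}-1}{e^{t}+1}
     \;=\; \tanh\Big(\frac{t}{2}\Big),
```

so as $t \to \infty$ the dilatation grows exponentially while the ray converges to the boundary point $e^{-2i\theta}$ of $\mathbb D$.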
\subsubsection[Beltrami differentials]{Beltrami differentials\\[2mm]}\label{beltrams}
In order to see that $\iota_1$ and $\iota_2$ are holomorphic
we introduce an alternative way to define $\iota_1$
using Beltrami differentials\index{Beltrami differential}. We keep this aspect
short and refer to e.g. \cite{n} for more details.\\
Let
\[M(X) = \{(X_1,f)|\;
\begin{array}[t]{l}
X_1 \mbox{ Riemann surface },\\
f:X\to X_1 \mbox{ is a quasiconformal homeomorphism}
\}/\approx \end{array}\]
with $(X_1,f_1) \approx (X_2,f_2) \Leftrightarrow f_2\circ f_1^{-1}$
is biholomorphic.\\
One has a natural projection $M(X) \to T_g$. Furthermore $M(X)$
can be ca\-nonical\-ly identified with the open unit ball
$B(X)$ in the Banach space
$L^{\infty}_{(-1,1)}(X)$ of $(-1,1)$-forms by the bijection:
\[M(X) \to B(X),\;\; (X_1,f) \mapsto \mu_f, \]
where $\mu_f$ is the Beltrami differential (or complex dilatation)
of $f$, cf. \cite[2.1.4]{n}.\\
Thus one obtains a projection
$\Phi: B(X) \to T_g$.
The map $\Phi$ is holomorphic (\cite[3.1]{n}).
Furthermore, for each quadratic differential $q$
and for all $k \in (0,1)$ the form $k\frac{\bar{q}}{|q|}$
is in $B(X)$
(\cite[2.6.3]{n}).
Thus one may define the map
\[\iota_3: \left\{
\begin{array}{lclcl}
\mathbb D &\stackrel{b}{\to}& B(X) &\stackrel{\Phi}{\to}& T_g\\
z &\mapsto& z\cdot\frac{\bar{q}}{|q|} & \mapsto &
\Phi(z\cdot\frac{\bar{q}}{|q|})
\end{array} \right .
\]
It is the composition of two holomorphic maps and thus itself holomorphic.\\
We will show in the following remark that $\iota_3 = \Phi \circ b = \iota_1$,
cf.\ Figure \ref{gb}.
\begin{remark}
For all $z_0 \in \mathbb D: \iota_3(z_0) = \iota_1(z_0)$.
\end{remark}
\begin{proof}
Let $z_0 = r\cdot e^{i\alpha} \in \mathbb D$ and $A \in \mbox{SL}_2(\mathbb{R})$
with $f(p(A)) = z_0$.\\
Decompose $A = U_1D_KU_2$ as in (\ref{decompose}) with $U_2 = U_{\theta}$.
Then by Step 1 of the proof of Proposition \ref{ioeinszwei},
$r = \frac{K-1}{K+1}$ \, and $\alpha = -2\theta$. Furthermore,
by Proposition \ref{ioeinszwei}
\[\iota_1(z_0) = \hat{\iota}_2(A) = [(X,\mu_A), \mbox{id}] = [(X,\mu_{D_KU_2}),\mbox{id}].\]
Let us calculate the Beltrami differential of the Teichm\"uller deformation
$f = \mbox{id}: X \to (X,\mu_{D_KU_2})$. We will see that it is equal to
$z_0\cdot \frac{\bar{q}}{|q|}$. From this it follows that
$\iota_1(z_0) = \iota_3(z_0)$.\\[2mm]
One has $f = g\circ h$ with $h = \mbox{id}:X \to (X,\mu_{U_2})$ and
$g = \mbox{id}: (X,\mu_{U_2}) \to (X,\mu_{D_KU_2})$. Locally in the
charts of
the flat structure defined by $q$, the maps $g$ and $h$ are given by
\[g: z \mapsto K\cdot\mbox{Re}(z) + i\cdot\mbox{Im}(z) \;\quad\;\mbox{ and }\;\quad\;
h: z \mapsto e^{i\theta}\cdot z.\]
Thus in terms of these charts one has:
\begin{eqnarray*}
&&f_z \;=\; {g}_z\cdot h_z + g_{\bar{z}}\cdot \bar{h}_{z}
\;=\; e^{i\theta}\cdot g_z \;\;\qquad
f_{\bar{z}} = g_z\cdot h_{\bar{z}} + g_{\bar{z}}\cdot \bar{h}_{\bar{z}}
\;=\; e^{-i\theta}\cdot g_{\bar{z}} \hspace*{2cm}\\
&&\Rightarrow \quad \frac{f_{\bar{z}}}{f_z}
\;=\; e^{-2i\theta}\cdot\frac{g_{\bar{z}}}{g_z}
\;=\; e^{-2i\theta}\cdot\frac{K-1}{K+1}
\;=\; e^{i\alpha}\cdot r \;=\; z_0
\end{eqnarray*}
Hence the Beltrami differential of $f$ is $z_0\cdot\frac{\bar{q}}{|q|}$.
\end{proof}
One obtains immediately the following conclusion.
\begin{corollary}\label{corhol}
$\iota_1 = \iota_3$ is holomorphic. By Proposition \ref{ioeinszwei}
$\,\iota_2$ is also holomorphic.
\end{corollary}
\subsection[Teichm\"uller curves]{Teichm\"uller
curves\\[2mm]}\label{tc}\index{Teichm\"uller curve}
In this section we introduce Teichm\"uller curves and recall some
properties of them, in particular their relation to Veech groups.
This was explored by Veech in his article \cite{V} and has been studied
by many authors since then. Overviews and further properties
can be found e.g. in \cite{mcm}, \cite{EG} or \cite{HS}. \\
Let $\iota: \mathbb D \hookrightarrow T_g$ be a Teichm\"uller embedding and
$\Delta = \Delta_{\iota} = \iota(\mathbb D)$ its image. We may consider
the image of $\Delta_{\iota}$ in the moduli space $M_g$
under the natural projection
$T_g \to M_g$, cf. Chapter \ref{intro}.
In general this image will have a large closure, but
occasionally it is an algebraic curve. Such a curve is
called a Teichm\"uller curve.
\begin{definition} If the image of
the Teichm\"uller disk $\Delta$ in
the moduli space $M_g$ is an algebraic curve $C$,
then $C$ is called a {\em Teichm\"uller curve}.\\
A surface $(X,q)$, with a Riemann surface $X$ and
a holomorphic quadratic differential $q$
such that the Teichm\"uller
disk $\Delta = \Delta_q$ defined by $q$
projects to a Teichm\"uller curve
is called {\em Veech surface}\index{Veech surface}.
\end{definition}
How can one decide whether a surface $(X,q)$ induces a Teichm\"uller
curve or not? An answer to this question is given
by the Veech group, a subgroup of $\mbox{SL}_2(\mathbb{R})$ associated to $(X,q)$.
This is explained in the following two subsections.
\subsubsection[Veech groups]{Veech groups\\[2mm]}\label{vg}\index{Veech group}
Let $X$ be a Riemann surface and $q$ a holomorphic quadratic
differential on $X$. Let $\mu$ be the flat structure on $X$ defined by $q$.
One obtains a discrete subgroup of $\mbox{SL}_2(\mathbb{R})$
as follows: Let $\mbox{Aff}^+(X,\mu)$ be the group
of orientation preserving diffeomorphisms
which are affine\index{affine diffeomorphism} with respect to the flat structure
$\mu$, i.e. diffeomorphisms which are in terms of a local chart $z$
of $\mu$
given by
\[z \mapsto A\cdot z + t, \quad \mbox{ for some }
A = \begin{pmatrix} a&b\\c&d\end{pmatrix} \in \mbox{SL}_2(\mathbb{R}), t \in \mathbb{C}.\]
As above we identify the complex plane
$\mathbb{C}$ with $\mathbb{R}^2$. Furthermore, we denote for $z=x+iy$: $A\cdot z = ax+by + i(cx + dy)$.\\
Since $\mu$ is a flat structure, up to change of sign the matrix $A$ does not
depend on the charts. Thus one has a group homomorphism:
\[D: \;\mbox{Aff}^+(X,\mu) \to \mbox{PSL}_2(\mathbb{R}), \;\; f \mapsto [A].\]
For simplicity we will denote the image $[A]$ of the matrix $A$
in $\mbox{PSL}_2(\mathbb{R})$ often also just by $A$.
\begin{definition} \label{Veechgroup}
The image $\bar{\Gamma}(X,\mu) = D(\mbox{Aff}^+(X,\mu))$ of $D$
is called the {\em projective Veech group} of $(X,\mu)$.
\end{definition}
We will denote the projective Veech group also by $\bar{\Gamma}(X,q)$
and $\bar{\Gamma}_{\iota}$,
where $\iota:\mathbb D \hookrightarrow T_g$ or $\iota:\HH \hookrightarrow T_g$ is the Teichm\"uller
embedding defined by $q$ as described in \ref{defdisks}.
$\bar{\Gamma}(X,\mu)$ is a discrete subgroup of $\mbox{PSL}_2(\mathbb{R})$,
see \cite[Prop. 2.7]{V}.
\subsubsection[The action of the Veech group on the Teichm\"uller disk]{
The action of the Veech group on the Teichm\"uller disk\\[2mm]}
Recall that the projection
$T_g \to M_g$ from the Teich\-m\"uller space to the moduli space
is given by the quotient
for the action of the mapping class group \index{mapping class group}
\[\Gamma_g = \mbox{Diffeo}^+(X)/\mbox{Diffeo}_0(X),\]
cf. (\ref{mapgroup}) in the introduction.
The action of $\mbox{Diffeo}^+(X)$ on $T_g$ is given by
\begin{eqnarray*}
&\rho:&\mbox{Diffeo}^+(X) \;\to\;\mbox{Aut}(T_g) \cong \Gamma_g, \;\;
\varphi \mapsto \rho_{\varphi}\\
&&\mbox{with } \; \rho_{\varphi}:\; T_g \to T_g, \;\;
(X_1,h) \mapsto (X_1,h\circ\varphi^{-1}).
\end{eqnarray*}
The affine group\index{affine group} $\mbox{Aff}^+(X,\mu)$
acts as subgroup of $\mbox{Diffeo}^+(X)$ on $T_g$.
The following remark (cf. \cite[Theorem 1]{EG}) determines
this action restricted to the Teichm\"uller disk
\[\Delta = \Delta_q =
\{P_B = [(X,\mu_B),\mbox{id}] \in T_g| B \in \mbox{SL}_2(\mathbb{R})\}.\]
\begin{remark} \label{actab}
$\mbox{\em Aff}^+(X,\mu)$ stabilizes $\Delta$. Its action
on $\Delta$ is given by:
\begin{eqnarray*}
\varphi \in \mbox{\em Aff}^+(X,\mu),\, B \in \mbox{\em SL}_2(\mathbb{R}) &\Rightarrow&
\rho_{\varphi}(P_B) =
P_{BA^{-1}}\\
&&\mbox{ with } A \in \mbox{\em SL}_2(\mathbb{R})
\mbox{ a preimage of }
D(\varphi) = [A].
\end{eqnarray*}
\end{remark}
\begin{proof}
Let $\varphi \in \mbox{Aff}^+(X)$, $B \in \mbox{SL}_2(\mathbb{R})$ and
$A \in \mbox{SL}_2(\mathbb{R})$ be a preimage of $D(\varphi) = [A] \in \mbox{PSL}_2(\mathbb{R})$.
In the following commutative diagram
\[
\xymatrix @-1pc {
(X,\mu) \ar[rr]^{\varphi^{-1}} \ar[rrrrdd]_{\mbox{\footnotesize id}} &&
(X,\mu) \ar[rr]^{\mbox{\footnotesize id}} && (X,\mu_B) \\&&&&\\
& && & (X,\mu_{BA^{-1}}) \ar[uu]
}
\]
the map $(X,\mu_{BA^{-1}}) \, \to \, (X,\mu_B)$ is, as a composition
of affine maps, itself affine. Its derivative is
$D(\mbox{id} \circ \varphi^{-1} \circ \mbox{id}^{-1}) =
BA^{-1}(B{A}^{-1})^{-1} = I$.
Thus it
is biholomorphic and $\rho_{\varphi}([(X,\mu_B),\mbox{id}]) =
[(X,\mu_{BA^{-1}}),\mbox{id}]$.
\end{proof}
It follows from Remark \ref{actab}
that $\mbox{Aff}^+(X,\mu)$ is mapped by $\rho$
to Stab$(\Delta)$, the global stabilizer\index{Teichm\"uller disk!global stabilizer of} of $\Delta$
in $\Gamma_g$. Furthermore
$\rho: \mbox{Aff}^+(X,\mu) \to \mbox{Stab}(\Delta)\,\subseteq \Gamma_g$ is in fact
an isomorphism: It is injective, see \cite[Lemma~5.2]{EG}
and surjective, see \cite[Theorem~1]{EG}. Thus we
have $\mbox{Aff}^+(X,\mu) \cong \mbox{Stab}(\Delta)$.\\
From Remark \ref{actab} it also becomes clear that
the action of $\varphi \in \mbox{Aff}^+(X,\mu)$
depends only on $D(\varphi)$. Thus one obtains in fact
an action of the projective Veech group
$\bar{\Gamma}(X,\mu)$ on $\Delta$.
\begin{corollary}
$\bar{\Gamma}(X,\mu) \, \subseteq \, \mbox{\em PSL}_2(\mathbb{R})$ acts on
$\Delta = \{P_B \in T_g|\, B \in \mbox{\em SL}_2(\mathbb{R})\}$
by:
\begin{equation}\label{abc}
\rho_{[A]}(P_B) = P_{BA^{-1}} \quad \mbox{ where
$A$ is a preimage in $\mbox{\em SL}_2(\mathbb{R})$ of $[A]$}.
\end{equation}
\end{corollary}
Finally one may use the Teichm\"uller embedding
$\iota_2:\HH \to T_g$ defined by $q$ (cf. \ref{defi2})
in order to compare the action of $\bar{\Gamma}(X,\mu)$
on $\Delta = \Delta_{\iota} = \iota(\HH)$ with its action on
$\HH$ via M\"obius
transformations. One obtains the diagram in the following remark
(cf. \cite[Proposition 3.2.]{mcm}).
\begin{remark} \label{achtionstogether}
Let $A \in \mbox{\em PSL}_2(\mathbb{R})$. Denote by $A:\,\HH \to \HH$
its action as M\"obius transformation on $\HH$. The following diagram
is commutative:
\[ \xymatrix @-1pc {
\HH \ar[rr]^{t \mapsto -\bar{t}} \ar[d]_{A}
&& \HH \ar[rr]^{\iota}\ar[d]^{RAR^{-1}} && \Delta \ar[d]^{\rho_A} \\
\HH \ar[rr]^{t \mapsto -\bar{t}} && \HH \ar[rr]^{\iota} && \Delta\\
}\]
\begin{center}
\refstepcounter{diagramm}{\it Figure \arabic{diagramm}}
\label{diagramactions}
\end{center}
Here $R = \begin{pmatrix} -1 & 0 \\0&1\end{pmatrix}$, thus $R$ acts on $\mathbb{P}^1(\mathbb{C})$
by $z \mapsto -z$.
\end{remark}
\begin{proof}
Let $t \in \HH$. Choose some $B \in \mbox{SL}_2(\mathbb{R})$ with
$-\overline{B^{-1}(i)} = -\bar{t}$,
thus $\iota(-\bar{t}) = P_B = [(X,\mu_B),\mbox{id}]$ and using (\ref{abc})
we obtain the diagram:
\[ \xymatrix @-1pc {
t \ar@{|->}[rr]^{t \mapsto -\bar{t}} \ar@{|->}[d]_{A}
&& -\bar{t} \ar@{|->}[rr]^(0.3){\iota}
&& P_B = [(X,\mu_B),\mbox{id}] \ar@{|->}[d]^{\rho_A} \\
A(t) \ar@{|->}[rr]^{t \mapsto -\bar{t}} && -\overline{A(t)} &&
P_{BA^{-1}}= [(X,\mu_{BA^{-1}}),\mbox{id}]\\
}\]
The commutativity of the diagram in Figure \ref{diagramactions}
then follows from
\begin{eqnarray*}
&&RAR^{-1}(-\bar{t}) = -A(\bar{t}) = -\overline{A(t)} \mbox{ and}\\
&&-\overline{(BA^{-1})^{-1}(i)} =
-A(\overline{B^{-1}(i)}) = -A(\bar{t}) = -\overline{A(t)}
\mbox{, thus } \iota(-\overline{A(t)}) = P_{BA^{-1}}.
\end{eqnarray*}
\end{proof}
\subsubsection[Veech groups and Teichm\"uller
curves]{Veech groups and Teichm\"uller curves\\[2mm]}\label{lattice}\index{Veech group}\index{Teichm\"uller curve}
In Remark \ref{actab} we saw that the affine group $\mbox{Aff}^+(X,\mu)$
maps isomorphically to the global stabilizer of the Teichm\"uller disk
$\Delta$ in $\Gamma_g$.
Denote by $\mbox{proj}:T_g \to M_g$ the canonical projection. It then
follows from Remark \ref{achtionstogether}
that the map
\[ \mbox{proj}\circ \iota:\;\;\HH \;\,\to\;\, \mbox{proj}(\Delta) \;\,\subseteq\; M_g\]
factors through $\HH/R\bar{\Gamma}(X,\mu)R^{-1}$.
We call
\[\Gammaquer^*(X,\mu) = R\bar{\Gamma}(X,\mu)R^{-1}\]
the {\em mirror projective Veech group}\index{mirror Veech group}, since $\HH/\Gammaquer^*(X,\mu)$
is a mirror image of $\HH/\bar{\Gamma}(X,\mu)$, and
refer to it also as $\Gammaquer^*(X,q)$ or $\Gammaquer^*_{\iota}$.\\
$\HH/\Gammaquer^*(X,\mu)$ is a surface of finite type and hence an algebraic
curve if and only if $\Gammaquer^*(X,\mu)$ is a lattice in $\mbox{PSL}_2(\mathbb{R})$.
Altogether one obtains the following situation
(cf. \cite[Corollary 3.3]{mcm}).
\begin{corollary} \label{latticeproperty}
$(X,q)$ induces a Teichm\"uller curve $C$ if and only if $\bar{\Gamma}(X,\mu)$
is a lattice in $\mbox{\em PSL}_2(\mathbb{R})$. In this case the
following diagram holds:
\[ \xymatrix @-1pc {
\HH \ar[rr]^{t \,\mapsto\, -\bar{t}} \ar[d]
&& \;\; \HH \;\; \ar[d] \ar[rr]^{\iota}
&& \; \Delta = \Delta_{\iota} \;\; \subseteq \;\; T_g
\ar@<-18pt>[d]^{\mbox{\footnotesize \em proj}} \ar@<28pt>[d]^{\mbox{\footnotesize \em proj}}\\
\HH/\bar{\Gamma}(X,\mu) \ar[rr]^{\mbox{\footnotesize antihol.}}
&& \;\; \HH/\Gammaquer^*(X,\mu) \; \ar[rr]^(.44){\mbox{\footnotesize birat.}}
&& \quad \;\; C
\quad \quad \subseteq \;\; M_g
}\]
In particular if $\bar{\Gamma}(X,\mu)$
is a lattice, then
\begin{itemize}
\item
$\HH/\Gammaquer^*(X,\mu)$ is the normalization
of the Teichm\"uller curve $C$,
\item $\HH/\bar{\Gamma}(X,\mu)$ is antiholomorphic
to $\HH/\Gammaquer^*(X,\mu)$.
\end{itemize}
\end{corollary}
\section{Introduction}
\label{sec:intro}
Stars form in the high-density parts of molecular clouds,
i.e., dense cores. A dense core prior to the protostellar
phase is called a starless core, and a gravitationally
bound/unstable starless core is referred to as a pre-stellar
core \citep{1994MNRAS.268..276W}.
The pre-stellar phase is considered the starting point in
the star formation process \citep{2007ARA&A..45..339B,
2012A&ARv..20...56C}.
Pre-stellar cores can form single or multiple stellar systems
under the combined effect of gravity, magnetic fields, and
turbulence \citep{2015Natur.518..213P}.
Understanding the properties of pre-stellar cores is critical to
characterize the initial conditions of cluster formation.
The pre-stellar cores in low-mass star-forming regions have
been intensively studied toward nearby molecular clouds,
e.g., Perseus, Ophiuchus, Chamaeleon, Serpens, and Taurus
\citep{1999ApJ...526..788L,2001ApJS..136..703L,
2008ApJ...684.1240E,2010ApJ...718..306S,
2016ApJ...823..160D,2017ApJ...838..114K,2020ApJ...899...10T}.
However, studies of pre-stellar cores in massive cluster-forming
regions are still limited by small-number statistics
\citep{2018A&A...618L...5N,2019ApJ...886..130L,
2019ApJ...886..102S,2021ApJ...907L..15S}.
The massive infrared dark cloud NGC\,6334S (also known as IRDC
G350.56+0.44) is located at the southwestern end of the NGC\,6334
molecular cloud complex that is a nearby
\citep[1.3 kpc;][]{2014ApJ...784..114C} young and massive
`mini-starburst' star-forming region \citep{2013ApJ...778...96W}.
With a mass of 1.3 $\times \, 10^{3} \, M_{\odot}$
\citep{2020ApJ...896..110L}, comparable to the clumps with
embedded massive protostars and protoclusters in the complex,
NGC\,6334S has the potential to form a cluster
encompassing both low- ($<$ 2 $M_{\odot}$\xspace) and high-mass ($>$ 8 $M_{\odot}$\xspace) stars.
NGC\,6334S provides an ideal laboratory to search for and study the
early stages (e.g., pre-stellar phase) of star formation in the high-mass
regime.
In order to investigate massive star and cluster formation, we performed
Atacama Large Millimeter/submillimeter Array (ALMA)
and Karl G. Jansky Very Large Array (JVLA) observations of NGC\,6334S.
A study of 49 continuum dense cores (hereafter continuum cores)
revealed by the 3~mm wavelength continuum image was presented in
\citet{2020ApJ...896..110L}, in which we reveal that the nonthermal
motions are predominantly subsonic and transonic in both clump scale
and continuum core scale.
Here we use molecular lines to study the starless/pre-stellar cores.
\section{Observations}
\label{sec:obs}
\subsection{ALMA Observations}
We carried out a 55-pointing mosaic of NGC\,6334S using
the ALMA 12 m array in 2017 March (ID: 2016.1.00951.S).
Two 234.4 MHz width spectral windows with a spectral resolution
of 61~kHz ($\sim$0.21 km s$^{-1}$\xspace at 86 GHz) were configured to cover
the H$^{13}$CO$^{+}$ (1--0, 86.754 GHz) and ortho-NH$_{2}$D
($1_{11}-1_{01}$, 85.926 GHz) lines, respectively. Three additional
1.875 GHz wide spectral windows at 88.5 GHz, 98.5 GHz, and
100.3 GHz with a coarse spectral resolution (3.0 -- 3.3 km s$^{-1}$\xspace)
were employed to obtain broad band continuum
and rotational
transitions of several molecular species (e.g., HCO$^{+}$ 1--0,
HCN 1--0, CS 2--1, HNCO $4_{0,4}-3_{0,3}$, H$^{15}$NC 1--0,
CH$_{3}$OH $5_{1,4}-4_{1,3}$, SO $2_{2}-1_{1}$,
HC$_{3}$N 11--10). The maximum recoverable scales (MRS)
is $\sim$25$^{\prime\prime}$\ in the ALMA data.
The details of the observations can be found
in \citet{2020ApJ...896..110L}.
Data calibration and imaging were performed using CASA 4.7.0
\citep{2007ASPC..376..127M}.
We used Briggs' robust weighting of 0.5 to the visibilities for both
the continuum and lines, which results in a synthesized beam of
3.\arcsec6 $\times$ 2.\arcsec4 with a position angle (P.A.) of
81$^{\circ}$ and 4.\arcsec1 $\times$ 2.\arcsec8 (P.A. = 83$^{\circ}$)
for continuum and line images, respectively.
The achieved 1$\sigma$ rms noise level is about 6 mJy~beam$^{-1}$
per 0.21 km s$^{-1}$\xspace\ for the line images and 30~$\mu$Jy~beam$^{-1}$ for
the continuum image.
All images in the paper are presented prior to primary beam correction,
while all measured fluxes are corrected for the primary beam attenuation.
\subsection{JVLA Observations}
We carried out a 4-pointing mosaic of the central region of
NGC\,6334S using the JVLA C-configuration in 2014 August
(ID: 14A-241).
The NH$_{3}$ (1, 1) through (5, 5) metastable inversion transitions
and H$_{2}$O maser were simultaneously covered in this observation.
CASA versions 4.7.0 and 5.1.1 were used to perform data
calibration and imaging.
An elliptical Gaussian with a FWHM of 6$^{\prime\prime}$ $\times$ 3$^{\prime\prime}$
(P.A. = 0$^{\circ}$) was used to taper the visibilities, in order to
increase the signal-to-noise ratio (S/N).
This yields a synthesized beam size of about
10$^{\prime\prime}$ $\times$ 5$^{\prime\prime}$ (P.A. = 26$^{\circ}$) with a 1$\sigma$
rms noise of 9 mJy beam$^{-1}$ per 0.2 km s$^{-1}$\xspace for the NH$_{3}$ lines.
More details on the observations are presented in \cite{2020ApJ...896..110L}.
\begin{figure*}[!ht]
\centering
\includegraphics[scale=0.28]{NH2D_cont.pdf}
\caption{
Panel a: the gray scale in the background shows the NH$_{2}$D
velocity-integrated intensity ($W_{\rm NH_{2}D}$) image.
Cyan open triangles show the NH$_{2}$D cores.
Yellow open squares present 49 continuum cores identified by
ALMA~3~mm continuum image \citep{2020ApJ...896..110L}.
Purple cross `x' and blue plus `+' symbols are 25 Class I and 58 Class II YSOs
\citep{2013ApJ...778...96W}, respectively.
The beam size of the NH$_{2}$D image is shown on the bottom
left of the panel. Dashed purple contour shows the area mosaicked
with VLA.
Panel b: the core-averaged spectra of NH$_{2}$D ($1_{11}-1_{01}$)
and H$^{13}$CO$^{+}$ (1-0) for continuum core \#2 (black solid line)
and NH$_{2}$D core M1 (blue solid line).
Panel c: the core-averaged spectra of three wide (1.875 GHz)
spectral windows for continuum core \#2 (black solid line) and
NH$_{2}$D core M1 (blue solid line), respectively.
}
\label{fig:cont}
\end{figure*}
\begin{figure*}[!ht]
\centering
\includegraphics[scale=0.58]{NH2D_cores.pdf}
\caption{
The 3~mm continuum image (black contours) overlaid on the
velocity-integrated intensity of NH$_{2}$D for the candidates that
are gravitationally bound with $\alpha_{\rm vir} <$ 2.
The integrated velocity range is from $-11.5$ to 5.6 km s$^{-1}$\xspace.
Black contours are (5, 7, 9, 11, 15, 17,
20, 24, 29)$\times \sigma$, where $\sigma$ = 30 $\mu$Jy~beam$^{-1}$
is the rms noise level of the continuum image. The yellow ellipses
show the identified NH$_{2}$D cores. The green open stars
indicate continuum cores. The beam size of NH$_{2}$D image
is shown on the bottom right of M10 panel.
}
\label{fig:cores}
\end{figure*}
\section{Results and Analysis}
\label{sec:results}
\subsection{Cold and Quiescent Cores}
\label{sec:core}
NH$_{2}$D ($1_{11}-1_{01}$, critical density
$n_{\rm cr} \sim$ 10$^{5}$ cm$^{-3}$)
is a good tracer of cold and dense molecular gas
\citep{2007A&A...470..221C,2013ApJ...773..123S}, which can survive
in the gas phase in the dense interior region of pre-stellar cores
\citep{2017A&A...600A..61H}. In our ALMA data,
the NH$_{2}$D ($1_{11}-1_{01}$) line
emission is in general well correlated with the 3~mm continuum
emission, but there are some exceptions; there are 17 bright
compact structures in the NH$_{2}$D emission that are
associated with weak (3$\sigma$ -- 11$\sigma$) or no
continuum emission ($< 3\sigma$) at all (Figure~\ref{fig:cont} and
\ref{fig:cores}), and with no young stellar object (YSO) counterparts
\citep[e.g., Class~\uppercase\expandafter{\romannumeral1}/\uppercase\expandafter{\romannumeral2};][]{2013ApJ...778...96W}.
There is no reliable Class~0 catalogue in this region.
The details on the identification of NH$_{2}$D compact structures
are summarized in Appendix~\ref{app:identification}.
We refer to these NH$_{2}$D compact structures as NH$_{2}$D
cores, naming them M1, M2, M3, ... in order of descending
NH$_{2}$D velocity-integrated intensity.
They have diameters ranging from 0.018 to 0.04 pc (see
Table~\ref{tab:nh2d}).
Among the NH$_{2}$D cores, M1 is the most prominent
pre-stellar candidate with high signal-to-noise (S/N) ratio and a
relatively isolated environment that eliminates the contamination
from the outflows driven by the continuum cores. The following
analysis and discussion will therefore use M1 as a showcase,
while the physical parameters of the remaining NH$_{2}$D
cores will only be included in statistical analyses.
Using the HCO$^{+}$, CS, HCN, SO, HNCO, and H$_{2}$O maser
lines (see Figure~\ref{fig:cont}), we have searched for signatures
of protostellar activity, such as bipolar/monopolar/multipolar outflow
activity and H$_{2}$O maser emission. All the NH$_{2}$D cores
show no signs of protostellar activity.
Although we currently cannot
rule out the existence of unresolved or weak outflows due to the
coarse spectral resolution and the lack of better low-mass outflow
tracers such as CO, these NH$_{2}$D cores appear to be
starless with the available evidence.
In addition, the NH$_{2}$D
cores do not show any appreciable ($< 3\sigma$) continuum emission
at infrared wavelengths from 3.6 to 70 $\mu \rm m$, and therefore they are
completely dark in these infrared images.
The upper limit on the internal luminosity ($L_{\rm int}$) of NH$_{2}$D
cores is estimated to be $\sim$1.26 $L_{\odot}$ using
\texttt{Herschel}/PACS 70 $\mu \rm m$\ data \citep[][see Appendix~\ref{app:lum}
for detailed derivation of internal luminosity]{2017A&A...602A..77T}.
These results suggest that these NH$_{2}$D cores are cold and quiescent
in terms of star-forming activity, though some undetected very faint
embedded low-mass protostars cannot be fully ruled out
\citep{2008ApJS..179..249D,2009ApJS..181..321E,2011ApJ...736...53O,
2017ApJ...840...69F}.
An example of the comparison between NH$_{2}$D core (M1)
and continuum core (\#2) is shown in Figure~\ref{fig:cont}, which
includes all of the observed spectral windows. Continuum core \#2 is
a protostellar core as evidenced by the clear molecular outflows in the
CS, HCN, and HCO$^{+}$ line emission, and it is one of the most
chemically rich among the continuum cores.
Nonetheless, only a few lines are detected toward continuum
core \#2. This indicates that continuum core \#2 is at
a stage prior to the hot cores/corinos. We note that M1 shows even
less chemical complexity than continuum core \#2, with fewer lines,
molecular species, and narrower line profiles. This suggests that the
evolutionary stages of the NH$_{2}$D cores are likely earlier than
those of the continuum cores.
On the other hand, we cannot completely rule out the possibility that
some of the continuum cores are relatively evolved pre-stellar cores,
because of the lack of high sensitivity and high spectral resolution
outflow tracers (e.g., CO, SiO) to distinguish the evolutionary stages
\citep{2020ApJ...903..119L}.
In this study, we will focus only on the NH$_{2}$D cores.
\begin{figure*}[!ht]
\centering
\includegraphics[scale=0.27]{sigma.pdf}
\caption{Panel a: NH$_{2}$D spectrum (black curve) extracted
from single pixels toward M1. Red curve shows the best result from
the hyperfine fitting that includes NH$_{2}$D 36 hyperfine
components \citep{2011ascl.soft09001G,2016A&A...586L...4D},
with the best-fit parameters displayed in the panel.
Panel b: the NH$_{2}$D integrated intensity (black contours)
overlaid on its $\sigma_{\rm obs}$ toward M1.
The contours are (3, 5, 10, 20, 30, 40, 50)$\times \sigma$,
with $\sigma$ = 0.011~Jy~beam$^{-1}$~km~s$^{-1}$.
The beam size is shown on the lower left of the panel.
Panel c: the annularly averaged observed velocity dispersions
($\sigma_{\rm obs}$) as a function of radial distance from the
center (given in Table~\ref{tab:nh2d}) of M1.
The error bars show the statistical standard deviation at each bin.
}
\label{fig:sigma}
\end{figure*}
\subsection{Small Velocity Dispersions}
\label{linewidth}
The NH$_{2}$D line cube is fitted pixel by pixel, following the fitting
processes described in \cite{2020ApJ...896..110L}. In the hyperfine fits,
the uncertainties of the LSR velocities and line widths are small, while
those of the excitation temperature and optical depth are relatively
large (Figure~\ref{fig:sigma}).
Figure~\ref{fig:comb} shows the observed velocity dispersions
($\sigma_{\rm obs}$) distribution for each NH$_{2}$D core. All the
NH$_{2}$D cores show small velocity dispersion
($\langle \sigma_{\rm obs} \rangle \sim$ 0.16 km s$^{-1}$\xspace)
and the majority of them are smaller than the sound speed,
$c_{\rm s}$(10 K) = 0.19 km s$^{-1}$\xspace.
The observed velocity dispersions may be regarded as upper limits,
given the limited spectral resolution, the blending of velocity
components along the line of sight and opacity broadening effects.
For the NH$_{2}$D cores, the line widths of detected lines
are smaller than those of the majority of continuum cores
(see Figure~\ref{fig:cont} and \ref{fig:comb}).
Among all the NH$_{2}$D cores, M1 shows the strongest
NH$_{2}$D emission and the smallest velocity dispersion.
For M1, pixel by pixel fit to the NH$_{2}$D
line cube yields a small mean velocity dispersion
$\langle \sigma_{\rm obs} \rangle \approx$ 0.107 km~s$^{-1}$
(Figure~\ref{fig:comb}), corresponding to an intrinsic observed
velocity dispersion of $\sigma_{\rm obs,int}$ = 0.06 km~s$^{-1}$
after removing the smearing effect due to the channel width
using $\sigma_{\rm obs,int} = \sqrt{ \sigma_{\rm obs}^{2} -
\sigma_{\rm ch}^{2} }$, where $\sigma_{\rm ch} =
\triangle_{\rm ch}/(2\sqrt{2\; \rm ln2})$ = 0.089 km~s$^{-1}$
is the channel width.
For the extremely narrow line emission, a higher spectral
resolution data is required to accurately characterize the actual
intrinsic velocity dispersion. Considering the
$\langle \sigma_{\rm obs} \rangle$ is 1.2 times larger than the
$\sigma_{\rm ch}$, the aforementioned derived
$\sigma_{\rm obs,int}$ can be regarded as a reasonable
approximation for the following analysis
\citep[see Appendix C in][]{2020ApJ...896..110L}.
Assuming the observed linewidths of the NH$_{2}$D line are caused
only by thermal broadening, the gas temperature is found to be
about 7~K, $\sigma_{\rm th,NH_{2}D}$(7~K)\footnote{
The molecular thermal velocity dispersion can be calculated by
$\sigma_{\rm th}\;=\; \sqrt{(k_{\rm B}T)/(\mu m_{\rm H})} \;=\; 9.08\times 10^{-2} \rm \;km\; s^{-1} (\frac{T}{K})^{0.5}\; \mu^{-0.5}$, where $\mu \;=\; m/m_{\rm H}$
is the molecular weight, $m$ is the molecular mass and $m_{\rm H}$
is the proton mass.} = $\sigma_{\rm obs,int}$ = 0.06 km~s$^{-1}$.
This indicates that the gas kinematics are dominated by thermal motions
with a kinetic temperature of at most 7 K.
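The two corrections above can be checked numerically with a short script (a minimal sketch: the prefactor $9.08\times10^{-2}$ km s$^{-1}$ and $\mu(\rm NH_{2}D)=18$ follow from the footnote formula, while $\mu = 2.33$ for the mean free particle is a standard assumption, not stated in this section):

```python
import math

def sigma_thermal(T, mu):
    """Thermal velocity dispersion in km/s, from the footnote formula:
    sigma_th = 9.08e-2 km/s * (T/K)^0.5 * mu^-0.5."""
    return 9.08e-2 * math.sqrt(T / mu)

# Channel-width smearing: sigma_ch = delta_ch / (2 sqrt(2 ln 2))
delta_ch = 0.21  # km/s, spectral resolution of the NH2D cube
sigma_ch = delta_ch / (2.0 * math.sqrt(2.0 * math.log(2.0)))  # ~0.089 km/s

# Intrinsic dispersion of M1 after removing the channel smearing
sigma_obs = 0.107  # km/s, mean observed velocity dispersion of M1
sigma_int = math.sqrt(sigma_obs**2 - sigma_ch**2)  # ~0.06 km/s

# A purely thermal NH2D line (mu = 18) of this width implies a kinetic
# temperature of roughly 7--8 K, well below the 10 K sound speed
# c_s = sigma_thermal(10, 2.33) ~ 0.19 km/s (mu = 2.33 assumed).
T_kin = (sigma_int / 9.08e-2) ** 2 * 18.0
print(sigma_ch, sigma_int, T_kin, sigma_thermal(10.0, 2.33))
```

This reproduces the quoted $\sigma_{\rm ch}$ = 0.089 km s$^{-1}$, $\sigma_{\rm obs,int} \approx$ 0.06 km s$^{-1}$, and $c_{\rm s}$(10 K) $\approx$ 0.19 km s$^{-1}$.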
In addition to M1, there are
four NH$_{2}$D cores (M9, M12, M13 and M15) that also show
extremely small velocity dispersions comparable to the NH$_{2}$D
thermal velocity dispersion in the temperature range of 7 -- 10 K.
The velocity dispersions of remaining cores are relatively higher
than the NH$_{2}$D thermal velocity dispersion at a
temperature of 10 K ($\sigma_{\rm th, NH_{2}D} \sim$ 0.07 km s$^{-1}$\xspace),
while they are much smaller than the sound speed
(Figure~\ref{fig:comb}). This indicates that the gas kinematics are
dominated by thermal motions toward these NH$_{2}$D cores.
Figure~\ref{fig:sigma} shows the annularly averaged observed velocity
dispersion as a function of the radial distance ($R_{\rm dist}$) from
the center of M1. The observed velocity dispersion appears to increase
with increasing $R_{\rm dist}$ (see Figure~\ref{fig:sigma}), which
prevails in pre-stellar and starless cores
\citep{2005ApJ...619..379C,2019ApJ...872..207A}. The increase of
$\sigma_{\rm obs}$ along R$_{\rm dist}$ is also detected in the
other 9 cores (M4, M6, M7, M8, M9, M10, M13, M15, M17),
although the annularly averaged $\sigma_{\rm obs}$ starts to suffer
from low S/N toward the outer edges of the cores.
This increasing trend of $\sigma_{\rm obs}$ with R$_{\rm dist}$
indicates that turbulence is significantly dissipated toward the
core centers, and that such turbulence dissipation is common
among these NH$_{2}$D cores.
Alternatively, in the larger-scale collapse scenario, if the pre-stellar
cores undergo accretion, the infall speed decreases towards the
center, and thus the line width will be smaller for more central
regions \citep{2021MNRAS.tmp..412G}. Although no clear infall
signatures are detected in M1 using coarser spectral resolution
(3.0 -- 3.3 km s$^{-1}$\xspace) optically thick lines (e.g., HCO$^{+}$ 1-0, HCN
1-0 and CS 2-1), this possibility cannot be ruled out and should
be explored using higher spectral resolution data and other
tracers \citep[e.g., HCN 5-4/4-3/3-2 and HCO$^{+}$ 5-4/3-2;
][]{2014MNRAS.444..874C}.
Overall, the observed small velocity dispersions and their spatial
distributions resemble those observed in the well known pre-stellar
cores in Ophiuchus/H-MM1 and L\,1544
\citep{2007A&A...470..221C,2017A&A...600A..61H,2018ApJ...855..112P}.
This suggests that the identified NH$_{2}$D cores may not
have been affected by star formation activities yet.
\begin{figure}[!ht]
\centering
\includegraphics[scale=0.311]{N_Sigma_all.pdf}
\caption{
Panel a to c: violin plots of the $\sigma_{\rm obs}$, $N_{\rm NH_{2}D}$,
and $\chi(\rm NH_{2}D)$ distributions for all NH$_{2}$D cores.
The shape of each distribution shows the probability density of the data
smoothed by a kernel density estimator. The blue bars from the top to
bottom represent the maximum, mean, and minimum values, respectively.
Panel a: the red solid line is the sound speed at a temperature of 10 K.
The blue solid line is the mean observed velocity dispersion of the
continuum cores, $\langle \sigma_{\rm obs, cont} \rangle$ = 0.23 $\pm$
0.08 km s$^{-1}$\xspace.
The shaded gray region shows the spectral resolution limit of the
NH$_{2}$D line.
Panel b: the red and blue solid lines show the observed $N(\rm NH_{2}D)$
in the pre-stellar core L1544 \citep[1.75 $\times\, 10^{14}$ cm$^{-2}$;][]{
2007A&A...470..221C} and
Ophiuchus/H-MM1 \citep[1.1 $\times\, 10^{14}$ cm$^{-2}$;][]{
2017A&A...600A..61H}.
Panel c: the blue and red solid lines show the observed abundance
$X(\rm NH_{2}D)$ in the pre-stellar cores Ophiuchus/H-MM1
\citep[2 $\times\, 10^{-9}$,][]{2017A&A...600A..61H} and L1544
\citep[2 $\times\, 10^{-10}$, ][]{2007A&A...470..221C}, respectively.
}
\label{fig:comb}
\end{figure}
\subsection{Gas Masses and Dynamical States}
\label{sec:mass}
Assuming a typical temperature ($T_{\rm dust}$ = 10 K) of pre-stellar
cores (see also $\S$~\ref{sec:XNH2D}), the gas masses ($M_{\rm gas}$)
of the NH$_{2}$D cores are estimated from the continuum emission using
two different approaches, depending on its intensity (see
Appendix~\ref{app:mass} for detailed derivation of gas mass).
First, the continuum emission is used to compute the gas mass
if the continuum peak emission is higher than $3\sigma$.
Second, a 3$\sigma$ mass sensitivity of 0.13 $M_{\odot}$\xspace\ is used as an upper
limit for the core mass if the continuum peak emission is $\leqslant 3\sigma$.
The derived core masses range from $<$ 0.13 to 0.87 $M_{\odot}$\xspace, with a mean
value of 0.45 $M_{\odot}$\xspace\ (Table~\ref{tab:nh2d}). Following the same procedure,
the core-averaged H$_{2}$ column densities ($N_{\rm H_{2}}$) are derived
to be between $<$ 1.1~$\times$~10$^{22}$ and
3.0~$\times$~10$^{22}$ cm$^{-2}$ (Appendix~\ref{app:mass}),
with a mean value of 2.0~$\times$~10$^{22}$ cm$^{-2}$.
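As an illustration of the first approach, the optically thin dust continuum mass can be sketched as below (a hedged sketch, not the appendix derivation: the dust opacity, gas-to-dust ratio of 100, an observing frequency of $\sim$98 GHz, and a distance of 1.3 kpc are assumptions introduced here for illustration):

```python
import numpy as np

# Constants (SI)
h, c, k_B = 6.62607015e-34, 2.99792458e8, 1.380649e-23
M_sun, pc = 1.98892e30, 3.0857e16

def planck(nu, T):
    """Planck function B_nu(T) [W m^-2 Hz^-1 sr^-1]."""
    return 2 * h * nu**3 / c**2 / np.expm1(h * nu / (k_B * T))

def gas_mass(S_nu_jy, nu, T=10.0, d_pc=1300.0, kappa_cgs=0.18, R_gd=100.0):
    """Optically thin dust continuum mass -> gas mass,
    M = S_nu d^2 R_gd / (kappa_nu B_nu(T)).
    kappa_nu [cm^2 per gram of dust], R_gd, nu, and d are assumed values."""
    S = S_nu_jy * 1e-26          # Jy -> W m^-2 Hz^-1
    kappa = kappa_cgs * 0.1      # cm^2 g^-1 -> m^2 kg^-1
    d = d_pc * pc
    return S * d**2 * R_gd / (kappa * planck(nu, T)) / M_sun

# Example with M1's integrated flux density from Table 1 (0.294 mJy):
m_example = gas_mass(0.294e-3, 98e9)  # same order as the tabulated mass,
                                      # given the assumed opacity and frequency
```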
In order to evaluate whether the NH$_{2}$D cores are
gravitationally bound, we estimated
the virial masses ($M_{\rm vir} $) that range from 0.43 to 2.54 $M_{\odot}$\xspace\
(see Appendix~\ref{app:mass} for derivation of virial mass).
The derived virial parameters,
$\alpha_{\rm vir}$ = $M_{\rm vir}/M_{\rm gas}$,
are between 0.93 and 6.03. Nine out of the 17 NH$_{2}$D cores are
gravitationally bound with virial parameters $<$ 2, and the remaining
eight NH$_{2}$D cores are gravitationally unbound with
$\alpha_{\rm vir} >$ 2 if external pressure is ignored
\citep{2013ApJ...779..185K}.
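A sketch of the virial analysis (the prefactor in $M_{\rm vir}$ depends on the assumed density profile; the value 5, appropriate for a uniform-density sphere, is an assumption here and may differ from the appendix derivation):

```python
import numpy as np

G = 6.674e-11        # gravitational constant [m^3 kg^-1 s^-2]
M_sun = 1.98892e30   # [kg]
pc = 3.0857e16       # [m]

def virial_mass(sigma_kms, R_pc, a=5.0):
    """M_vir = a * sigma^2 * R / G; a = 5 for a uniform-density sphere
    (assumed here -- the appendix may use a different prefactor or the
    total, thermal-plus-non-thermal, line width)."""
    sigma = sigma_kms * 1e3
    return a * sigma**2 * (R_pc * pc) / G / M_sun

def virial_parameter(sigma_kms, R_pc, M_gas):
    """alpha_vir = M_vir / M_gas; alpha_vir < 2 suggests a gravitationally
    bound core if external pressure and magnetic fields are ignored."""
    return virial_mass(sigma_kms, R_pc) / M_gas

# Example with M2-like numbers (sigma = 0.30 km/s, R = 0.040 pc):
alpha = virial_parameter(0.30, 0.040, 0.61)
```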
\subsection{NH$_{2}$D Abundances and NH$_{3}$ Deuterium Fractionation}
\label{sec:XNH2D}
Assuming a constant temperature of 10 K and an ortho-to-para
ratio of 3 \citep{2017A&A...600A..61H}, the estimated NH$_{2}$D
column densities ($N(\rm NH_{2}D)$) range from 7.3~$\times$~10$^{12}$~cm$^{-2}$
to 3.4~$\times$~10$^{15}$~cm$^{-2}$, with a mean value of
5.2~$\times$~10$^{14}$~cm$^{-2}$ (see Figure~\ref{fig:comb}).
The derived abundances
$X(\rm NH_{2}D)$ = $N(\rm NH_{2}D)$/$N(\rm H_{2})$ toward
the NH$_{2}$D cores range from 1.8~$\times$~10$^{-9}$ to
2.0~$\times$~10$^{-7}$, with a mean value of
2.5~$\times$~10$^{-8}$.
The JVLA NH$_{3}$ observations only cover the central region of
NGC\,6334S (see Figure~\ref{fig:cont}). There are ten NH$_{2}$D
cores located in the field of the NH$_{3}$ observations. Owing to
the relatively low S/N, both the NH$_{3}$ (1,1) and (2,2) transitions are
detected only in M4 and M10 (S/N $\sim$ 4--5), while NH$_{3}$ (1,1)
is marginally detected in M13 and undetected in the other seven
NH$_{2}$D cores. Using \texttt{PySpecKit} we fit the core-averaged
spectrum, and we derive a kinetic temperature of
$T_{\rm k}$ = 12.7 $\pm$ 3.1 K and a column density of
$N_{\rm NH_{3}}$ = $(4.4\, \pm \,2.5) \times 10^{14}$~cm$^{-2}$
for M4, and the derived parameters are $T_{\rm k}$ = 12.1 $\pm$ 3.2 K
and $N_{\rm NH_{3}}$ = $(8.9 \pm 5.0) \times 10^{14}$ cm$^{-2}$
for M10, assuming an ortho-to-para ratio of 1 for NH$_{3}$
\citep{2017A&A...600A..61H}.
The column density ratio NH$_{2}$D-to-NH$_{3}$
($D_{\rm NH_{3}}$, also known as NH$_{3}$ deuterium fractionation)
is found to be 0.25 and 0.11 for M4 and M10, respectively.
A 3$\sigma$ integrated intensity is used to estimate an upper
limit on the NH$_{3}$ column density (3.8 $\times$ 10$^{14}$ cm$^{-2}$) for
the other eight cores, assuming an excitation temperature of 10 K.
The resulting lower limits on $D_{\rm NH_{3}}$ range from 0.11 to 0.39
(see Table~\ref{tab:nh2d}), which are close to those in the pre-stellar
core L~1544 \citep[0.5$\pm$0.2;][]{2007A&A...470..221C} and
Ophiuchus/H-MM1 \citep[0.45$\pm$0.09;][]{2017A&A...600A..61H}.
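The fractionation values follow directly from the column-density ratio; a minimal sketch (the example numbers are illustrative and need not match the beam-matched values adopted for Table~\ref{tab:nh2d}):

```python
def deuterium_fraction(N_nh2d, N_nh3, nh3_is_upper_limit=False):
    """D_NH3 = N(NH2D) / N(NH3). If only a 3-sigma upper limit on
    N(NH3) is available, the ratio is a lower limit on D_NH3."""
    d = N_nh2d / N_nh3
    return d, (">" if nh3_is_upper_limit else "=")

# Illustrative: an NH2D column of 1.5e14 cm^-2 against the 3-sigma
# NH3 upper limit of 3.8e14 cm^-2 yields a lower limit D_NH3 > 0.39.
d, flag = deuterium_fraction(1.5e14, 3.8e14, nh3_is_upper_limit=True)
```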
\section{Discussion}
\label{sec:dis}
As shown in Figure~\ref{fig:comb}, the derived NH$_{2}$D column
densities and fractional abundances are higher than the values of
Ophiuchus/H-MM1 \citep[$1.1 \,\times\, 10^{14}$~cm$^{-2}$ and
$2 \,\times\, 10^{-9}$;][]{2017A&A...600A..61H}
and L1544 \citep[$1.8 \,\times\, 10^{14}$~cm$^{-2}$ and
$2\, \times \,10^{-10}$;][]{2007A&A...470..221C}.
The high abundances of NH$_{2}$D could benefit from the
significant depletion of CO, which occurs faster in cold and
dense environments \citep[e.g., starless or pre-stellar
cores; the detailed chemical processes related to NH$_{2}$D
formation can be found
in][]{2015A&A...581A.122S,2017A&A...600A..61H}.
The small velocity dispersion, small Mach number (Table~\ref{tab:nh2d}),
and positive $\sigma_{\rm obs}$--$R_{\rm dist}$
relation indicate that the turbulence is likely dissipated toward these
NH$_{2}$D cores. In addition, it implies that star formation has not
yet started; outflow motions would otherwise widen the lines, as
seen in the continuum cores (Figure~\ref{fig:cont}).
Therefore, these results in conjunction with the high NH$_{3}$
deuterium fractionation, and the lack of embedded YSOs and 70~$\mu \rm m$
emission indicate that the identified NH$_{2}$D cores are still in a
starless phase. Nine of the 17 starless cores are gravitationally bound
with $\alpha_{\rm vir} <$ 2, and are therefore identified as excellent
pre-stellar core candidates.
The dust temperature may drop to $\sim$5.5 K toward the core center
\citep[e.g., L1544;][]{2007A&A...470..221C}. The derived gas mass
would increase by a factor of 2.25 if a temperature of 5.5 K were
assumed, and the resulting virial parameter would decrease by the same factor.
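The factor of 2.25 can be verified from the ratio of Planck functions, since the optically thin mass scales as $1/B_\nu(T_{\rm dust})$ (the $\sim$3 mm, 98 GHz observing frequency is an assumption here; the $2h\nu^{3}/c^{2}$ prefactors cancel in the ratio):

```python
import numpy as np

h, k_B = 6.62607015e-34, 1.380649e-23

def mass_rescale(nu, T_new, T_old=10.0):
    """M proportional to 1/B_nu(T); the 2*h*nu^3/c^2 prefactor cancels
    in the ratio, leaving only the exponential terms."""
    return np.expm1(h * nu / (k_B * T_new)) / np.expm1(h * nu / (k_B * T_old))

factor = mass_rescale(98e9, 5.5)  # close to the quoted factor of 2.25
```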
The level of the missing flux is unknown due to the lack of single-dish
data. However, the estimated masses and radii of the cores may not be
significantly affected by the missing flux, because the observations were
carried out with the most compact 12~m array configuration (C40-1)
and the maximum recoverable scale (MRS) of $\sim$25$^{\prime\prime}$\ is much larger than the core sizes.
Therefore, the estimated virial parameters are not seriously affected
by the missing flux.
In addition, the ambient pressure from the parental clumps and
filaments could also provide additional confinement to make these
NH$_{2}$D cores bound \citep[e.g.,][]{2019ApJ...877...93C,
2020ApJ...896..110L}.
Therefore, we cannot completely rule out the possibility that the other
eight starless cores could also be pre-stellar core candidates.
The estimated masses of NH$_{2}$D cores range from 0.13 to
0.87 $M_{\odot}$\xspace, suggesting that they are in the low-mass core regime.
The majority (15) of NH$_{2}$D cores are associated with filamentary
structures revealed by the H$^{13}$CO$^{+}$ line emission
(Li et al. 2021, in preparation). Therefore, the NH$_{2}$D cores
have the potential to accrete a significant amount of material from the
parental clump via filamentary accretion.
The masses of the continuum cores are found to range from the low- to
the high-mass regime \citep[0.17 -- 14.03 $M_{\odot}$\xspace;][]{2020ApJ...896..110L}.
These results suggest that cores with a large range of masses are
simultaneously forming in this cluster-forming cloud.
The massive cores are found toward the center of the cloud
\citep[see][]{2020ApJ...896..110L}, while the low-mass cores are
widespread throughout the region.
The evolutionary stages vary significantly over the sample
of NH$_{2}$D cores, continuum cores, and YSOs (Class~\uppercase\expandafter{\romannumeral1} and \uppercase\expandafter{\romannumeral2}).
This indicates that there is a temporally continuous formation of low-
to high-mass cores.
In summary, NGC\,6334S is forming a stellar protocluster: its
embedded cores show a wide diversity in masses (0.13 -- 14.03 $M_{\odot}$\xspace)
and evolutionary stages (pre-stellar -- Class~\uppercase\expandafter{\romannumeral2} phases),
and the embedded cores are expected to grow in mass by
gas accretion from the parental clump via filaments. This seems to be
consistent with the competitive accretion massive star formation
scenario \citep{2006MNRAS.370..488B}, in which the dense cores
located near the center of the gravitational potential continue
accreting material to form massive stars.
\section{Summary}
\label{summary}
We present ALMA and JVLA high spatial resolution observations
toward the massive IRDC NGC\,6334S. We have identified 17
low-mass starless core candidates that show bright NH$_{2}$D
emission, but no YSO counterparts and no signs of protostellar
activity, although we cannot completely rule out the possibility of
very weak thus undetected outflows or very faint embedded
low-mass protostars.
These candidates show almost-thermal velocity
dispersions (down to $\sigma_{\rm tot} \sim$ 0.06 km s$^{-1}$\xspace),
high NH$_{2}$D abundances (up to $\sim$10$^{-7}$), high
NH$_{3}$ deuterium fractionations (up to $>$ 0.39), and are
completely dark at infrared wavelengths from 3.6 up to
70~$\mu \rm m$.
In addition, $\sigma_{\rm obs}$ appears to decrease toward
the core centers in 10 of them. These results suggest that
turbulence has significantly dissipated and the NH$_{2}$D
abundance has been significantly enhanced toward these cores.
The gas kinematics resemble those of the well-known pre-stellar cores,
e.g., Ophiuchus/H-MM1 and L\,1544, but with one to two orders
of magnitude greater NH$_{2}$D abundance
\citep{2007A&A...470..221C,2017A&A...600A..61H}.
Nine out of the 17 NH$_{2}$D cores are gravitationally bound,
and are therefore identified as pre-stellar core candidates.
Hence, the NH$_{2}$D line could be a powerful tracer
to reveal the starless and pre-stellar cores that do not show
significant dust continuum emission. This is the first detection
of a cluster of low-mass starless and pre-stellar core candidates in a
massive star cluster-forming cloud. Low- to high-mass
cores are simultaneously forming in this cluster-forming cloud.
\acknowledgments
We thank the anonymous referee for constructive comments that
helped improve this paper.
This work was partially supported by National Natural Science Foundation
of China (NSFC) (Grant Nos. 11988101, 11911530226).
C.W.L. is supported by the Basic Science Research Program through
the National Research Foundation of Korea (NRF) funded by the
Ministry of Education, Science and Technology (NRF-2019R1A2C1010851).
H.B. acknowledges support from the European Research Council
under the European Community's Horizon 2020 framework program
(2014-2020) via the ERC Consolidator Grant `From Cloud to Star
Formation (CSF)' (project number 648505). H.B. also acknowledges
funding from the Deutsche Forschungsgemeinschaft (DFG) via the
Collaborative Research Center (SFB 881) `The Milky Way System'
(subproject B1).
I.J.-S. has received partial support from the Spanish State Research
Agency (AEI; project number PID2019-105552RB-C41).
K.Q. is partially supported by National Key R\&D Program of China
No. 2017YFA0402600, and acknowledges the National Natural
Science Foundation of China (NSFC) grant U1731237.
A.P. acknowledges financial support from the UNAM-PAPIIT IN111421
grant, the Sistema Nacional de Investigadores of CONACyT, and from
the CONACyT project number 86372 of the `Ciencia de Frontera 2019’
program, entitled `Citlalc\'oatl: A multiscale study at the new frontier of
the formation and early evolution of stars and planetary systems’, M\'exico.
J.M.G. acknowledges the support of the grant AYA2017-84390-C2-R
(AEI/FEDER, EU).
K.W. acknowledges support by the National Key Research and
Development Program of China (2017YFA0402702, 2019YFA0405100),
the National Science Foundation of China (11973013, 11721303), NSFC
grant 11629302, and
the starting grant at the Kavli Institute for Astronomy and Astrophysics,
Peking University (7101502287).
H.B.L. is supported by the Ministry of Science and Technology (MoST)
of Taiwan (Grant Nos. 108-2112-M-001-002-MY3).
This paper makes use of the following ALMA data:
ADS/JAO.ALMA\#2016.1.00951.S. ALMA is a partnership of ESO
(representing its member states), NSF (USA) and NINS (Japan),
together with NRC (Canada), MOST and ASIAA (Taiwan), and KASI
(Republic of Korea), in cooperation with the Republic of Chile. The
Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ.
\newpage
\vspace{5mm}
\facilities{ALMA, JVLA, \texttt{Herschel}.}
\software{ CASA \citep{2007ASPC..376..127M},
APLpy \citep{2012ascl.soft08017R},
Astropy \citep{2013A&A...558A..33A},
Matplotlib \citep{4160265},
PySpecKit \citep{2011ascl.soft09001G}.
}
\newpage
\begin{deluxetable*}{cccccccccccccccccccc}
\setlength{\tabcolsep}{0.42mm}{
\tabletypesize{\scriptsize}
\rotate
\tablecolumns{20}
\tablewidth{0pc}
\tablecaption{Physical parameters of the NH$_{2}$D cores. \label{tab:nh2d}}
\tablehead{
\colhead{Core ID} &\colhead{R.A.} &\colhead{Decl.}
&\colhead{$\rm Maj \times Min$} &\colhead{P.A.} &\colhead{$R$}
&\colhead{$\sigma_{\rm obs}$} &\colhead{$v_{\rm LSR}$} &\colhead{$\mathcal{M}$}
&\colhead{$W_{\rm NH_{2}D}^{\rm peak}$}
&\colhead{$W_{\rm NH_{2}D}$} &\colhead{$S_{\rm \nu}^{\rm peak}$}&\colhead{$S_{\rm \nu}$}
&\colhead{$M_{\rm gas}$} &\colhead{$M_{\rm vir}$} &\colhead{$\alpha_{\rm vir}$}
&\colhead{$N_{\rm NH_{2}D}$} &\colhead{$N_{\rm H_{2}}$}
&\colhead{$X(\rm NH_{2}D)$} &\colhead{$D_{\rm NH_{3}}$} \\
&\colhead{(J2000)} &\colhead{(J2000)}
&\colhead{($^{\prime\prime}$$\times$$^{\prime\prime}$)} &\colhead{(deg)} &\colhead{(pc)}
&\colhead{(km/s)} &\colhead{(km/s)} &
&\colhead{(Jy/beam km/s)}
&\colhead{(Jy/beam km/s)} &\colhead{(mJy/beam)} &\colhead{(mJy)}
&\colhead{($M_{\odot}$)} &\colhead{($M_{\odot}$)} &
&\colhead{(cm$^{-2}$)} &\colhead{(cm$^{-2}$)}
& &
}
\decimalcolnumbers
\startdata
M1 & 17:19:09.38 & -36:03:37.96 & 7.1$\times$2.8 & 107.6 & 0.028 & 0.11 & -3.59 & $<$0.68 & 0.69 & 140.95 & 0.174 & 0.294 & 0.43 & $<$0.68 & $<$1.57 & 2.2E+15 & 1.8E+22 & 1.1E-07 & … \\
M2 & 17:19:04.28 & -36:06:57.17 & 9.3$\times$4.3 & 126.6 & 0.040 & 0.30 & -1.36 & 2.41 & 0.40 & 136.13 & 0.186 & 0.412 & 0.61 & 2.54 & 4.18 & 6.1E+14 & 1.6E+22 & 3.2E-08 & $>$0.39 \\
M3 & 17:19:05.61 & -36:11:01.29 & 7.5$\times$3.9 & 75.4 & 0.034 & 0.20 & -2.95 & 1.38 & 0.41 & 104.66 & 0.292 & 0.462 & 0.68 & 1.25 & 1.84 & 7.0E+14 & 2.4E+22 & 2.8E-08 & … \\
M4 & 17:19:11.12 & -36:05:41.37 & 9.4$\times$3.2 & 79.5 & 0.034 & 0.17 & -5.07 & 1.10 & 0.31 & 82.90 & 0.214 & 0.589 & 0.87 & 1.00 & 1.15 & 4.8E+14 & 2.8E+22 & 1.8E-08 & 0.25 \\
M5 & 17:19:07.39 & -36:10:24.52 & 7.5$\times$2.1 & 72.1 & 0.025 & 0.15 & -2.47 & 0.84 & 0.44 & 82.36 & 0.180 & 0.258 & 0.38 & 0.63 & 1.65 & 9.3E+14 & 1.8E+22 & 4.5E-08 & … \\
M6 & 17:19:12.04 & -36:06:55.76 & 7.3$\times$3.9 & 57.3 & 0.034 & 0.18 & -2.80 & 1.13 & 0.28 & 69.97 & 0.230 & 0.458 & 0.68 & 1.08 & 1.60 & 4.0E+14 & 2.4E+22 & 1.7E-08 & $>$0.25 \\
M7 & 17:19:09.91 & -36:09:05.62 & 6.6$\times$5.2 & 21.1 & 0.037 & 0.23 & -4.17 & 1.69 & 0.23 & 65.62 & 0.206 & 0.447 & 0.66 & 1.83 & 2.77 & 1.8E+14 & 2.2E+22 & 8.0E-09 & … \\
M8 & 17:19:10.30 & -36:09:09.49 & 5.8$\times$4.4 & 122.3 & 0.032 & 0.19 & -4.22 & 1.32 & 0.24 & 56.76 & 0.237 & 0.450 & 0.66 & 1.27 & 1.91 & 3.0E+14 & 2.5E+22 & 1.2E-08 & … \\
M9 & 17:19:11.40 & -36:08:44.12 & 4.9$\times$3.9 & 78.0 & 0.028 & 0.12 & -1.68 & 0.34 & 0.24 & 45.38 & 0.158 & 0.201 & 0.30 & 0.73 & 2.47 & 2.7E+14 & 1.4E+22 & 1.4E-08 & 0.22 \\
M10 & 17:19:05.12 & -36:06:45.00 & 4.2$\times$3.7 & 9.9 & 0.025 & 0.15 & -3.47 & 0.78 & 0.26 & 45.19 & 0.310 & 0.366 & 0.54 & 0.79 & 1.46 & 6.9E+14 & 2.8E+22 & 2.5E-08 & $>$0.11 \\
M11 & 17:18:58.66 & -36:07:04.76 & 4.1$\times$2.2 & 48.5 & 0.019 & 0.11 & -2.01 & $<$0.73 & 0.33 & 44.30 & … & … & $<$0.13 & $<$0.51 & 3.82 & 7.5E+14 & $<$1.5E+22 & $>$7.1E-08 & $>$0.32 \\
M12 & 17:19:06.86 & -36:06:22.65 & 6.8$\times$3.6 & 124.3 & 0.031 & 0.13 & -3.51 & 0.39 & 0.18 & 41.11 & 0.244 & 0.538 & 0.79 & 0.74 & 0.93 & 2.5E+14 & 3.0E+22 & 8.9E-09 & $>$0.16 \\
M13 & 17:19:07.39 & -36:05:55.13 & 6.9$\times$3.7 & 116.2 & 0.032 & 0.10 & -5.36 & $<$0.57 & 0.15 & 36.21 & … & … & $<$0.13 & $<$0.80 & 6.03 & 1.3E+14 & $<$1.5E+22 & $>$7.8E-09 & $>$0.14 \\
M14 & 17:19:04.55 & -36:05:31.47 & 3.7$\times$2.2 & 29.7 & 0.018 & 0.16 & -4.20 & 0.93 & 0.26 & 33.59 & … & … & $<$0.13 & 0.54 & $>$4.04 & 2.8E+14 & $<$1.5E+22 & $>$1.2E-08 & … \\
M15 & 17:19:08.88 & -36:08:01.55 & 5.3$\times$2.7 & 78.2 & 0.024 & 0.11 & -4.26 & $<$0.70 & 0.19 & 29.68 & … & … & $<$0.13 & $<$0.62 & 4.64 & 2.0E+14 & $<$1.5E+22 & $>$2.0E-08 & $>$0.17 \\
M16 & 17:19:07.21 & -36:08:16.60 & 6.4$\times$2.3 & 125.7 & 0.024 & 0.15 & -4.18 & 0.81 & 0.16 & 29.59 & 0.160 & 0.168 & 0.25 & 0.60 & 2.44 & 1.6E+14 & 1.2E+22 & 1.1E-08 & $>$0.14 \\
M17 & 17:19:07.78 & -36:10:07.54 & 4.8$\times$1.8 & 57.5 & 0.019 & 0.13 & -2.52 & 0.49 & 0.13 & 18.00 & 0.158 & 0.170 & 0.25 & 0.43 & 1.70 & 1.6E+14 & 1.6E+22 & 7.2E-09 & … \\ \hline
Mean & & & & & 0.028 & 0.16 & & 0.96 & 0.29 & 62.50 & 0.211 & 0.370 & 0.45 & 0.94 & 2.60 & 5.1E+14 & 2.0E+22 & 2.6E-08 & 0.22 \\ \hline
Median & & & & & 0.028 & 0.15 & & 0.81 & 0.26 & 45.38 & 0.206 & 0.412 & 0.43 & 0.74 & 1.91 & 3.0E+14 & 1.8E+22 & 1.7E-08 & 0.20 \\ \hline
Minimum & & & & & 0.018 & 0.10 & & 0.34 & 0.13 & 18.00 & 0.158 & 0.168 & 0.13 & 0.43 & 0.93 & 1.3E+14 & 1.2E+22 & 7.2E-09 & 0.11 \\ \hline
Maximum & & & & & 0.040 & 0.30 & & 2.41 & 0.69 & 140.95 & 0.310 & 0.589 & 0.87 & 2.54 & 6.03 & 2.2E+15 & 3.0E+22 & 1.1E-07 & 0.39 \\
\enddata
\tablenotetext{}{Notes.
For M1, M11, M13, and M15, $\sigma_{\rm nt,NH_{2}D}$ = $\sigma_{\rm obs}$
was used since the $\sigma_{\rm obs,int} \leqslant \sigma_{\rm nt,NH_{2}D}(\rm 10 K)$.
(4)-(5) beam-deconvolved size.
(6) beam-deconvolved effective radius.
(7)-(8) $\sigma_{\rm obs}$ and $v_{\rm LSR}$ are derived by the
core-averaged spectrum.
(9) Mach number $\mathcal{M}$ = $\sqrt{3} \sigma_{\rm nt,NH_{2}D}/c_{\rm s}$.
(10) the peak value of NH$_{2}$D velocity-integrated image.
(11) the integrated flux density of the NH$_{2}$D line emission.
(12)-(13) the peak and integrated intensity of continuum emission.
(14) gas mass estimated by continuum emission.
(15) virial mass.
(16) virial parameter.
(17) mean NH$_{2}$D column density.
(18) mean H$_{2}$ column density.
(19) mean NH$_{2}$D abundance fraction.
(20) the NH$_{3}$ deuterium fractionation $D_{\rm NH_{3}}$.
}
}
\end{deluxetable*}
\section{Introduction and Related Work}
\vspace{-0.4cm}
Humans interact with a rich palette of sounds \cite{gemmeke2017audio} in a wide range of acoustic environments. We experience music not only in individual settings, such as headphones and home speakers, but also in highly reverberant spaces, such as theatres and concert halls. Listening to live music performed in these acoustic spaces often adds enjoyment and richness to the experience \cite{rossing2004principles}. The design of these acoustic spaces shapes the timbre and volume dynamics of the input signals, sometimes tailored precisely to the genre and type of music performed \cite{bagenal1930bach}. Detailed studies of the acoustics of caves inhabited by paleolithic humans suggest that the reverberant qualities of these caves might have made them the first auditoriums in which humans experienced sound \cite{fazenda2017cave}. It is conceivable that as early humans found safety and comfort inhabiting caves, they also found comfort in their dwelling's rich acoustical environment \cite{shipton201878}. One can also hypothesize that human evolution has adapted to reverberant sounds, in contrast to the discomfort of anechoic ones, and that reverberation adds pleasantness to sound. As shown in \cite{maconie2010chapter}, musicians and listeners dislike experiencing and playing music in environments devoid of natural reflections, i.e., anechoic chambers. Neural architectures have recently revolutionized the field of audio signal processing. With the advent of Transformer based architectures \cite{vaswani2017attention}, there has been a pivot toward approaching almost all problems in areas such as computer vision \cite{dosovitskiy2020image}, NLP \cite{vaswani2017attention,wei2021finetuned} and audio \cite{verma2021audio,verma2021generative,dhariwal2020jukebox} with powerful attention-based architectures. This work touches on ways to derive audio embeddings and to perform a conditioned transformation based on them. 
Audio embeddings have proven powerful in a variety of applications such as ASR and audio understanding \cite{Chung2018-Speech2Vec}, \cite{haque2019audio}, conditional audio synthesis \cite{haque2019audio,skerry2018towards}, as well as transformation \cite{oord2017neural, verma2018neural}. These latent vectors can also summarize the contents of the audio signal, after which a head similar to \cite{chen2020simple, wang2021multi} can be used for classification purposes. There has been a recent surge of interest, both in academia and industry, in transforming audio into a desired acoustic space. \cite{su2020acoustic} proposed a way of embedding impulse responses and using them to conditionally guide a generative auto-regressive architecture resembling WaveNet \cite{oord2016wavenet}.
\begin{figure}
\includegraphics[width=0.5\textwidth,left]{oneshot1.png}
\caption{A single snippet of sound "y" is used as a proxy for a balloon pop, typically used to measure room acoustics. A Transformer architecture is trained to conditionally transform audio "x", as heard in acoustic space "h1", to space "h2" using the learned residual. With a trained model and conditioning audio, we can transform the input audio to the desired acoustic space of interest.}
\label{fig:img1}
\end{figure}
This process is slow due to the auto-regressive prediction of samples one at a time. \cite{Singh_2021_ICCV} proposed a way of generating room impulse responses directly from a 2-D image. Since measuring and recording impulse responses via traditional methods, such as a balloon pop, is time-consuming and challenging to scale, the authors proposed this workaround. Similar methods extending to 3-D environments to estimate impulse responses directly from 2-D/3-D data have been proposed recently \cite{majumder2022few}. \cite{chen2022visual} used an image of a target environment to compute an embedding and conditionally re-synthesized a source audio waveform using that embedding, relying on a complex adversarial formulation. With these methods, collecting the data and building the desired dataset are complex. Moreover, even within an image of a single acoustic environment, the impulse response varies significantly (closer to walls vs. windows vs. the center), which audio signals can capture. Visual data cannot capture the complete geometry beyond its field of view, whereas audio signals can. The contributions of this work are as follows: 1. We show (for the first time, to our knowledge) audio-conditioned acoustic transfer in a non-autoregressive setting. 2. We use a simple architecture, inspired by the signal processing properties of convolution, that directly learns a residual signal instead of a waveform or a spectrogram. 3. We use a novel min-max loss function that circumvents some drawbacks of regression-based loss functions such as the euclidean loss and mean absolute error. 4. We showcase, for the first time in the audio and signal processing literature, a robust Transformer-based framework that maps spectrogram to spectrogram conditioned on audio signals. \vspace{-0.5cm}
\section{Dataset} \vspace{-0.4cm}
Since, to the best of our knowledge, no such dataset existed for music signals, we built the data-generation framework ourselves. With the ubiquitous nature of transformer architectures, we expect our approach to hold for other kinds of audio signals. Since large-scale anechoic data is unavailable for a large variety of music except for a few subgenres \cite{d2020anechoic}, we use a mixture-of-mixtures approach. The initial audio signals are real-world recordings of composers like Mozart, Vivaldi, and Beethoven, encompassing a range of instruments such as piano, violin, and symphonic ensembles. For reproducibility purposes, we have shared the audio files at the following URL: http://ai.stanford.edu/~prateekv/testurl.txt. These files mimic a variety of playing conditions. Let us assume that for a concert hall, $h^{i}_{ir}[n]$ is the room impulse response already present at the location where the audio was recorded. The input signal $x_{i}[n]$ thus already reflects the microphone acoustics, intrinsic room acoustics, recording compression, and the instrument's sound production system. The input dataset is a collection of signals $x[n]$, each of which is produced by convolving the signal $x_{i}[n]$ with $h^{i}_{ir}[n]$. We further convolve it with different conditioning/input IRs. \vspace{-0.4cm}
\subsection{Impulse Response}
\vspace{-0.2cm}
We use a dataset of synthetic impulse responses to simulate the effects of various acoustic spaces (concert halls and rooms). Techniques like balloon pops, swept sinusoids, or the method proposed by \cite{abel2010estimating} are used to measure room acoustics. However, these are non-trivial to carry out and difficult to scale. Therefore, we started with \cite{ko2017study} as the primary source of synthetic impulse responses. To the best of our knowledge, this was the largest dataset in terms of the number of unique impulse response measurements known to us. It contains 200 rooms, uniformly distributed among small, medium, and large sizes, with various absorption coefficients, heights, and measurement locations inside every room, yielding quite a diverse set of IRs.
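For intuition, a toy synthetic impulse response can be generated as exponentially decaying noise (a sketch only, in the spirit of statistical reverberation models; the dataset of \cite{ko2017study} is instead produced with a proper room simulator, and the RT60 value here is an arbitrary assumption):

```python
import numpy as np

def synthetic_rir(rt60, sr=16000, length_s=1.0, seed=0):
    """Toy synthetic room impulse response: Gaussian noise shaped by an
    exponential energy decay set by the desired RT60 (the time for the
    energy to drop by 60 dB). Not the image-method simulator used to
    build the actual IR dataset."""
    rng = np.random.default_rng(seed)
    n = int(sr * length_s)
    t = np.arange(n) / sr
    decay = 10.0 ** (-3.0 * t / rt60)   # amplitude decay: -60 dB energy at t = rt60
    ir = rng.standard_normal(n) * decay
    return ir / np.max(np.abs(ir))      # peak-normalize

ir = synthetic_rir(rt60=0.5)
```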
\subsection{Combining the two}
\vspace{-0.2cm}
Now, given a collection of thousands of small patches of these signals $x[n]$ and room impulses $h^{pr}[n]$, we generate the training data. The goal of our work is, given an audio signal played in one acoustic space, to transform it into another acoustic space. The current work uses a simple convolution to mimic how audio would sound in an acoustic space. A signal of interest $x[n]$ would sound in a room $r$ at point $p$ as $y[n] = x[n] \ast h^{pr}[n]$. Realistic reverberation using signal processing algorithms is a field in itself, and \cite{valimaki2016more} describes a detailed study. We carry out a similar approach to generate the conditioning audio spectrogram.
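The data generation thus reduces to a convolution; a minimal sketch (using direct convolution via NumPy; an actual pipeline would likely use FFT-based convolution for speed):

```python
import numpy as np

def reverberate(x, h):
    """Simulate audio x played in a space with impulse response h:
    y[n] = (x * h)[n], truncated to the input length."""
    return np.convolve(x, h)[: len(x)]

# Sanity check: a unit impulse through h reproduces h itself.
x = np.zeros(8)
x[0] = 1.0
h = np.array([1.0, 0.5, 0.25])
y = reverberate(x, h)
```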
\section{Methodology}
\vspace{-0.2cm}
This section describes the components and the architectural choices of the current work. Broadly, the goal is to transform the input audio spectra $X_{a}^{i}$ to $X_{a}^{j}$, with conditioning audio spectra $X_{b}^{j}$, where $X_{a,b}^{i,j}$ is audio $a,b$ played in acoustic space (impulse response) $i,j$.
\vspace{-0.25cm}
\subsection{Need For Residual Learning}
\vspace{-0.2cm}
We aim to transform the input content audio into a different acoustic space. If the room impulse response of the target acoustic space at a particular location is $h_{d}[n]$, then the desired output audio of interest is
$y[n] = x_{i}[n] \ast h_{d}[n]$. Instead of having access to $h_{d}[n]$, we have the conditioning audio as input. Therefore, our architecture should extract a latent representation of $h_{d}[n]$ directly from the conditioning audio, or derive a similar latent space. Thus, given an input audio $x_{i}[n]$ and a latent embedding of $h_{d}[n]$, we should be able to recover $y[n]$. We explore working in the frequency domain because convolution in the time domain corresponds to multiplication in the frequency domain. Let us assume $X_{i}(f,n)$ is the STFT representation of the input signal, and $H_{d}(f,n)$ is that of the impulse response. Then, the output STFT is
$Y(f,n) = X_{i}(f,n) \odot H_{d}(f,n)$. Now, working in the log domain,
\begin{equation}
log{Y(f,n)} = log X_{i}(f,n) + log H_{d}(f,n)
\end{equation}
Therefore, for log-magnitude spectrograms, we need to add a signal, or a \textbf{\textit{residual}} component. This framework falls into the realm of residual learning \cite{he2016deep}; neural architectures are good at learning residuals, yielding better convergence and improved performance \cite{he2020resnet}. Thus, for our conditioned generation of audio signals, following Eqn.~(1), we devise a neural architecture that takes in an audio spectrogram and learns a residual signal, which is added back to the input to give the desired transformation.
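Eqn.~(1) can be checked numerically. With full-length DFTs, zero-padded so that the spectral product equals the linear convolution, the additivity in the log-magnitude domain is exact; with STFT frames it holds only approximately, when the analysis window is long relative to the impulse response:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(1024)   # stand-in for the dry input signal
h = rng.standard_normal(64)     # stand-in for a room impulse response

n = len(x) + len(h) - 1         # zero-pad: DFT product == linear convolution
X, H = np.fft.rfft(x, n), np.fft.rfft(h, n)
Y = X * H
assert np.allclose(np.fft.irfft(Y, n), np.convolve(x, h))

# log|Y| = log|X| + log|H|: the room contributes an additive residual.
log_residual = np.log(np.abs(H))
log_out = np.log(np.abs(X)) + log_residual
```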
\vspace{-0.4cm}
\subsection{Deriving Acoustic Signature \& the Setup}
\vspace{-0.2cm}
Many papers have explored neural architectures that can conditionally perform a desired transformation, the closest to our work being \cite{haque2018conditional}, whose primary architecture uses conditioning information as latent embeddings to drive the transformation. We use an architecture similar to \cite{su2020acoustic} to extract acoustic space embeddings: the final layer passes through a global average pooling layer to produce an embedding that acts as a condition to a Transformer architecture. Our primary transformational architecture is a Transformer, whose attention mechanisms are capable of capturing long-term dependencies; it also allows condition-based transformation, as seen in \cite{verma2021generative} and \cite{keskar2019ctrl}. We devise a Transformer architecture that maps an input of dimensions $257 \times 300$ to an output of the same dimensions, where 300 corresponds to the time steps in 3\,s, each time step being a log-magnitude Fourier slice of dimension 257. Sinusoidal positional encodings are added as in \cite{vaswani2017attention}. The embedding size is 257, with feed-forward layers of size 512 and 8 attention heads in each of three layers. A conditioning embedding of size 257 is concatenated at position 0, for a total of 301 tokens at the input of the first layer and the output of the last. This ensures that the Transformer can attend to this token as and when needed, in whatever hierarchy it chooses. We slice the output of the last layer of the Transformer to retain only the 300 desired tokens; the attention mechanism ensures that the correct log-magnitude Fourier slices are present at suitable locations. We call the output of the last layer of the Transformer the residual signal: according to the conditioning and input audio, it can increase or decrease the gain of each time-frequency point. Adding this residual signal to the input log-magnitude spectrogram gives us the desired transformation.
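The token layout can be sketched as follows. This is a toy single-head, single-layer numpy attention (the trained model uses three layers, 8 heads, and feed-forward size 512); the point is only the shape bookkeeping: prepend the condition as token 0, attend over 301 tokens, then slice back to 300 residual slices:

```python
import numpy as np

rng = np.random.default_rng(0)
d, T = 257, 300                      # embedding size, time steps in 3 s

x = rng.standard_normal((T, d))      # input log-magnitude spectrogram, one row per step
cond = rng.standard_normal((1, d))   # acoustic-space embedding from the conditioning encoder

tokens = np.concatenate([cond, x], axis=0)   # prepend at position 0 -> 301 tokens

# Toy single-head self-attention.
Wq, Wk, Wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
q, k, v = tokens @ Wq, tokens @ Wk, tokens @ Wv
att = q @ k.T / np.sqrt(d)
att = np.exp(att - att.max(axis=1, keepdims=True))
att /= att.sum(axis=1, keepdims=True)            # softmax rows
out = att @ v                         # every token can attend to the condition token

residual = out[1:]                    # drop the condition token -> back to 300 steps
y = x + residual                      # add the residual to the input spectrogram
print(tokens.shape, residual.shape, y.shape)   # (301, 257) (300, 257) (300, 257)
```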
\vspace{-0.4cm}
\subsection{The Loss Function}
\vspace{-0.2cm}In continuous-value regression problems, the mean squared error (MSE) or mean absolute error (MAE) is typically the criterion to be minimized. However, both of these metrics are scale-dependent, i.e., the larger the value, the larger the error. Our experiments also found that minimizing MAE/MSE focused first on higher harmonics (typically having larger values) rather than on lower-order harmonics, even though all harmonics are equally crucial for the perception of music. This may be one of the main reasons why, across the literature, a plain $L1$/$L2$ loss is often supplemented with an adversarial loss, which is difficult to tune in a multi-criterion setup. We propose a novel min-max loss formulation inspired by the classic optimization literature. Let $X_{p}$ and $X_{t}$ denote the predicted and target spectrograms, each consisting of $f$ frequency points (257 in our case) and $n$ time points (300 in our case). We define the min-max loss as a single scalar measuring the distance between the desired output and the predicted output, which we choose to minimize. Mathematically,
\vspace{-0.2cm}
$$\mathcal{L} (X_{t},X_{p}) = \sum_{\forall k \in [1,n]} \max\limits_{f} {|(X_{t}(k,f)- X_{p}(k,f))|} $$
This ensures that the prediction is not dominated by the largest values within a spectrogram slice. To remove implicit volume-dependent biases, we randomly scale the input volume with maximum values chosen uniformly in [0,1]. Finally, we sum the maximum deviation within each time slice across all time points to obtain a single scalar, telling how far apart the prediction is, for the neural architecture to minimize.
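A direct numpy implementation of this loss, with a toy example showing that a single bad frequency bin fully accounts for its frame's term instead of being diluted over all $f \times n$ points as under MSE:

```python
import numpy as np

def minmax_loss(X_t, X_p):
    """Sum over time of the worst absolute deviation across frequency."""
    return np.sum(np.max(np.abs(X_t - X_p), axis=0))  # axis 0 = frequency

rng = np.random.default_rng(0)
X_t = rng.standard_normal((257, 300))   # target spectrogram (f x n)
X_p = X_t.copy()
X_p[200, 10] += 0.5                     # one bad frequency bin in one frame

# Only frame 10 deviates, and its term equals the worst bin's error.
print(minmax_loss(X_t, X_p))            # 0.5
```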
Our experiments produced much richer audio, particularly in the higher harmonics, compared to the $L1$ and $L2$ loss criteria. During inference, we use Griffin-Lim reconstruction \cite{griffin1984signal} to go from the generated spectrogram to the actual waveform. Furthermore, we used data augmentation techniques \cite{mcfee2015software} to modify the input signals, and dropout rates were tuned to close the training-validation loss gap. As noted in \cite{hershey2017cnn}, using a large amount of training data is a good regularizer. We train all the architectures for about 100 epochs, using the Adam optimizer with a learning rate of $2\times10^{-4}$, reduced down to $10^{-6}$ whenever the loss started plateauing. Our training dataset consisted of 300,000 3-second patches randomly sampled from roughly 20 hours of training audio, paired with random room impulse responses.
\section{Results and Discussion}
\vspace{-0.2cm}To demonstrate the results of our work, we share audio examples, perform listening experiments, and devise quantitative experiments demonstrating evidence of the transformation. We have put up a small subset of the transformations on a webpage.
This webpage contains different audio examples, conditioning audio, the input audio transformed to the acoustic space of interest, the target audio, and our predicted audio. \newline
http://ai.stanford.edu/$\sim$prateekv/IRtransfer.html
\vspace{-0.2cm}
\subsection{Quantitative Results}
\vspace{-0.2cm}
We devise an experiment to show quantitatively the transformation of interest in our paper. Complementing the listening test, we propose a convolutional architecture that scores how close two audio signals are in terms of acoustic space. E.g., if two audio signals are played in the same acoustic space, the neural architecture (CNN network henceforth) should give a score of 0, whereas if they are played in different acoustic spaces, it should give a score of 1. We train the convolutional architecture in a siamese setup, irrespective of the content of the input signals (e.g., the two audio signals can have different instruments/composers, or a mixture of the two). First, we devise positive pairs (same acoustic space) as follows: i) For half of the positive pairs, we take the same audio with the same impulse response; the audio is randomly scaled, and the augmented copies of the input spectrogram are assigned a score of 0. For the second half of the positive pairs, we take different audio content, randomly scale the amplitudes, and convolve both with the same IR to get $audio_{1} \ast imp_{1}$ and $audio_{2} \ast imp_{1}$. ii) For creating negative pairs (different acoustic spaces), for the first half we randomly sample an audio signal $audio_{1}$ and convolve it with two differently sampled impulse responses $imp_{1}$ and $imp_{2}$ to get $audio_{1} \ast imp_{1}$ and $audio_{1} \ast imp_{2}$. For the second half of the negative pairs, we randomly sample two different audio signals as well as two impulse responses and compute $audio_{1} \ast imp_{1}$ and $audio_{2} \ast imp_{2}$. The validation and test sets do not contain overlapping content or IRs. The convolutional architecture takes in the spectrogram representations of two audio signals $audio_{1}$ and $audio_{2}$ and assigns a score of 0 for the same acoustic space and 1 for different acoustic spaces. 
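A minimal sketch of the pair construction (treating the second positive case as different content played through the *same* IR, since a positive pair must share the acoustic space; names like `play` and the toy exponential IRs are illustrative, not the paper's actual data pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)

def play(audio, ir):
    """Simulate playing `audio` in the room described by impulse response `ir`."""
    return np.convolve(audio, ir)[: len(audio)]

def rand_scale(x):
    """Random amplitude scaling, max value drawn from [0, 1]."""
    return x * rng.uniform(0.0, 1.0)

a1, a2 = rng.standard_normal(1000), rng.standard_normal(1000)
ir1 = rng.standard_normal(64) * np.exp(-np.arange(64) / 8.0)
ir2 = rng.standard_normal(64) * np.exp(-np.arange(64) / 2.0)

pairs = [
    # positive pairs (same acoustic space) -> label 0
    ((rand_scale(play(a1, ir1)), rand_scale(play(a1, ir1))), 0),  # same content, same room
    # negative pairs (different acoustic space) -> label 1
    ((play(a1, ir1), play(a1, ir2)), 1),   # same content, different rooms
    ((play(a1, ir1), play(a2, ir2)), 1),   # different content and rooms
]
```

The second positive case (different content, same room: `play(a1, ir1)` vs `play(a2, ir1)`) follows the same pattern.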
The log-spectrograms are computed with a 10\,ms hop and a 512-point FFT using the librosa library \cite{mcfee2015librosa}. We use typical data-augmentation strategies, such as volume/amplitude scaling, random flips, random cutout, and jittering, as described in \cite{mcfee2015software}, with every audio signal. The augmented data is passed to a convolutional encoder. We use an EfficientNet-B0 \cite{tan2019efficientnet} architecture that takes 3\,s of audio spectrogram representation of size $257 \times 300$ and maps it to a latent embedding of size 128. With two input audio spectrograms, we get two 128-dimensional vectors $emb1$ and $emb2$. We do not use a large classification head, in order to force the encoders to extract good embeddings: we take the embeddings, subtract them, and use a linear head of dimension 256 followed by an output layer of size 2. Cross-entropy loss minimizes the prediction error between the target and the predicted output. We obtained an accuracy of about 94\%, which in itself is quite remarkable, and it gives us a proximity score for two pieces played in the same or different rooms. We compare the scores for the input signal and the conditioning signal (before transformation) against those for the input signal and the predicted transformation. The average score across a validation set of about 2000 snippets across several rooms decreases from 0.95 to 0.6. Although we achieve good scores in the listening experiment, we do not quite reach as small a score with a neural architecture doing the same task. One way of looking at this is that the neural evaluator is a ``very picky and idealistic'' distance function, which can pick up subtle differences between any two audio spectrograms played in different/similar acoustic spaces. Nevertheless, this idealistic evaluator still indicates that the transformed audio moves closer to the acoustic space of interest.
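The scoring head can be sketched as follows; the random projection stand-in for the EfficientNet-B0 encoder is purely hypothetical, and only the embed-subtract-classify wiring reflects the description above:

```python
import numpy as np

rng = np.random.default_rng(0)

def encoder(spec, W):
    """Hypothetical stand-in for the EfficientNet-B0 encoder:
    flattens a (257, 300) spectrogram to a 128-d embedding."""
    return np.tanh(spec.ravel() @ W)

d_emb, d_hidden = 128, 256
W_enc = rng.standard_normal((257 * 300, d_emb)) * 0.01
W1 = rng.standard_normal((d_emb, d_hidden)) * 0.1
W2 = rng.standard_normal((d_hidden, 2)) * 0.1

spec1 = rng.standard_normal((257, 300))
spec2 = rng.standard_normal((257, 300))

diff = encoder(spec1, W_enc) - encoder(spec2, W_enc)   # subtract the two embeddings
logits = np.maximum(diff @ W1, 0.0) @ W2               # 256-d linear head -> 2 classes
p = np.exp(logits - logits.max()); p /= p.sum()        # softmax over {same, different}
print(p.shape)   # (2,)
```

In training, `p` would be fed to the cross-entropy loss against the 0/1 pair label.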
\vspace{-0.45cm}
\subsection{Listening Experiment Results}
\vspace{-0.3cm}
For the listening experiments, we ask users two questions: i) Given the desired output, what is the quality of the output? ii) Are the input and the transformed audio similar or different? In other words, given the transformed output, is it closer to the input or to the desired target? We use human listeners to provide 100 randomized ratings via a webpage to validate the results. To evaluate quality, we ask the users to rate both the actual ground-truth target and the prediction of our architecture, separately, on a mean opinion score (MOS) scale between 1 and 5, with 1 being poor quality. We obtain a 3.6 MOS score for our predicted audio, with the ground truth at 4.1. Finally, we ask users to rate whether the predicted output is closer to the input or to the target audio. Human raters judged correctly about 76\% of the time that the transformed output and the ground truth were played in the same space, validating our experiments.
\vspace{-0.2cm}
\section{Conclusion and Future Work}
\vspace{-0.3cm}
We have demonstrated a pipeline that can perform acoustic transfer of an input signal to an acoustic space of interest, irrespective of the conditioning audio's content (e.g., piece, instrument, composer). We described a neural architecture with ideas grounded in signal processing, using a novel min-max loss function. A CNN architecture that extracts acoustic signatures is used to condition a Transformer architecture; a generation Transformer uses the learned acoustic signature and the input signal to generate a conditionally dependent output spectrogram. Subjective listening tests show how human raters validate the claims of acoustic transfer to a different reverberant environment. We also develop neural architecture-based scores that objectively confirm the listening tests of human raters. It will be interesting to extend this work to speech signals and to jointly optimize our scoring mechanism with the generator. Finally, as shown, conditional generation using a robust attention-based model is a compelling idea with applications far beyond the current work.
\section{Acknowledgement}
We thank the Institute of Human-Centered AI at Stanford University (Stanford HAI) for supporting this work through a generous Google Cloud computing grant for the academic year 2021-22 to carry out computational experiments. We thank both HAI and Google for the initiative. Research was, in part, supported by the Templeton Religion Trust's \textit{Art Seeking Understanding} initiative. We would also like to thank Prof. Preeti Rao and students of DAPLAB and EE 679 at IIT Bombay, and students of Stanford University, for taking part in the listening experiments.
\bibliographystyle{IEEEbib}
\section{Real Case, integer $r$}
\label{sec:real-case-integer}
Here we prove Proposition $1$, for {\it real} matrices with integer
dimension $r$, \textit{not necessarily even}.
A similar result, with proof extending that of \citet[Lemma 2]{omh13}
has been obtained by Alexei Onatski (personal communication) and will
appear elsewhere.
Our goal is to prove the validity of the following
expression for $0 \leq p \leq q+1$:
\begin{align}
\label{master1}
{}_pF^2_q(a,b;X,Y)=\frac{\Gamma(m+1)}{x^m\rho_m'}\frac{1}{2\pi i}\int_\Gamma
{}_pF_q\left(a-m,b-m;xs\right) \Delta_y(s) {\rm d}s
\end{align}
where we have defined $\Delta_y(s) =
\prod_{j=1}^r\left(s-y_j\right)^{-\frac{1}{2}}$.
The contour $\Gamma$ starts from $-\infty$ and encircles
$y_1,y_2, \cdots,y_r$ in the positive direction (i.e.,
counter-clockwise) and goes back to $-\infty$.
In what follows, we provide an inductive proof for the above claim.
First we establish the initial cases: ${}_0F_q$ for $q \geq 0$ and,
separately, ${}_1F_0$.
The inductive step establishes truth for ${}_{p+1}F_{q+1}$ given truth
for ${}_pF_q$.
Also, it is worth mentioning that we assume all powers have
their principal values and all angles in the range $[-\pi,\pi)$.
The following alternative representation of the hypergeometric
function of two matrix arguments is useful in the sequel. Let
$\mathcal{O}(r)$ be the orthogonal group and let $({\rm d} Q)$ be the
invariant measure on $\mathcal{O}(r)$ normalized to make the total
measure unity. Then, following \cite{jame64}, we can write
\begin{align}
\label{mxint}
{}_pF^2_{q}\left(a,b;X,Y\right)=\int_{\mathcal{O}(r)} {}_pF^2_{q}\left(a, b;X Q'Y Q\right) ({\rm d} Q).
\end{align}
Moreover, let us assume, without loss of generality, that $Y=\text{diag}\left(y_1,y_2,\cdots,y_r\right)$.
Since $X$ is rank-$1$, we can further simplify (\ref{mxint}) to yield
\begin{align}
\label{matrix_int}
{}_pF^2_{q}\left(a,b;X,Y\right)
= \int_{\mathcal{S}(r)} {}_pF_{q}\left(a,b;x q_r'Y q_r\right) ({\rm d}q_r).
\end{align}
where $\mathcal{S}(r)$ is the $(r-1)$-dimensional sphere embedded in $\mathbb{R}^r$, $q_r$ is the first column of $Q$, and $({\rm d}q_r)$ is the invariant measure on $\mathcal{S}(r)$ normalized such that the total measure is one.
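The spherical average in (\ref{matrix_int}) can be explored numerically; here is a small Monte Carlo sketch for the ${}_0F_0$ specialization (where ${}_0F_q$ with $q=0$ reduces to ${}_0F_0(z)=e^z$), using the standard fact that normalized Gaussian vectors are uniform on the sphere:

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere_average(f, r, n_samples=200_000):
    """Monte Carlo estimate of an average over the unit sphere S(r):
    normalized standard Gaussian vectors are uniformly distributed on it."""
    g = rng.standard_normal((n_samples, r))
    q = g / np.linalg.norm(g, axis=1, keepdims=True)
    return float(np.mean(f(q)))

x, r = 0.7, 4

# Sanity check with Y = y*I: the integrand e^{x q'Yq} = e^{xy} is constant.
y = 1.3
est_const = sphere_average(lambda q: np.exp(x * y * np.sum(q * q, axis=1)), r)
print(abs(est_const - np.exp(x * y)))  # ~0

# Generic diagonal Y: the average must lie between e^{x y_min} and e^{x y_max}.
yd = np.array([0.2, 0.5, 1.0, 1.5])
est_gen = sphere_average(lambda q: np.exp(x * np.sum(yd * q * q, axis=1)), r)
print(np.exp(x * yd.min()), est_gen, np.exp(x * yd.max()))
```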
\subsection{Initial cases}
We first show that the statement
(\ref{master1}) is true for ${}_0F_q$.
With the standard notation
${}_0\mathbf{F}_q(b;z) = {}_0F_q(b;z)/\prod_{j=1}^q \Gamma(b_j)$,
this is equivalent to showing that, for $q\geq 0$,
\begin{equation}
\label{master3}
{}_0\mathbf{F}^2_{q}(b;X,Y)=\frac{\Gamma(m+1)}{x^m} \frac{1}{2\pi i}\int_{\Gamma}
{}_0\mathbf{F}_{q}\left(b-m;xs\right) \Delta_y(s) {\rm d}s.
\end{equation}
Our tool is a contour representation of
\cite[eq. (7.4)]{erde37}:
\begin{align}
\label{erd_eq}
{}_0\mathbf{F}_q(b;z)=\frac{1}{(2\pi i)^q}
\int_{-\infty}^{(0+)}\cdots \int_{-\infty}^{(0+)} e^{\left(\sum_{j=1}^q w_j+\frac{z}{\prod_{j=1}^q w_j}\right)}\prod_{j=1}^q \frac{{\rm d}w_j}{w_j^{b_j}}
\end{align}
where each contour starts from $-\infty$ and encircles the origin in
the positive sense and goes back to $-\infty$.
We use multi-index notation $w^b = \prod w_j^{b_j}, w = \prod w_j$ and
${\rm d}w = \prod {\rm d}w_j$.
We use the spherical average (\ref{matrix_int}), then Erd\'elyi's
representation,
and change order of integration, to get
\begin{align}
\label{beforeint}
{}_0\mathbf{F}^2_q(b;X,Y)
& = \int_{\mathcal{S}(r)} {}_0\mathbf{F}_{q}\left(b;xq_r'Y
q_r\right) ({\rm d}q_r) \\
& = \frac{1}{(2\pi i)^q}
\int_{-\infty}^{(0+)}\cdots \int_{-\infty}^{(0+)}
e^{\sum_{j=1}^q w_j}
\int_{\mathcal{S}(r)}
e^{\frac{x}{w} q_r'Y q_r }
({\rm d}q_r) \frac{{\rm d}w}{w^b}.
\end{align}
A change of variable in \cite[Lemma 2]{omh13} shows that for $x,w > 0$,
\begin{equation}
\label{eq:onatski}
\int_{\mathcal{S}(r)}
e^{\frac{x}{w} q_r'Y q_r} ({\rm d}q_r)
= \frac{\Gamma(r/2)}{ 2 \pi i} \Bigl( \frac{w}{x} \Bigr)^{r/2 -1}
\int_\Gamma e^{\frac{x}{w}s}\Delta_y(s)
{\rm d}s,
\end{equation}
and the equality extends by analyticity to all nonzero $w \in \mathbb{C}$.
Inserting this integral in (\ref{beforeint}) and noting that $\frac{r}{2}=m+1$, we obtain
\begin{align*}
{}_0\mathbf{F}^2_q(b;X,Y)=& \frac{\Gamma(m+1)}{x^m (2\pi i)^{q+1}}
\int_{-\infty}^{(0+)}\cdots \int_{-\infty}^{(0+)}
e^{\sum_{j=1}^q w_j}
\int_{\Gamma}
e^{\frac{xs}{w}} \Delta_y(s)
{\rm d}s
\frac{{\rm d}w}{w^{b-m}}
\end{align*}
Finally, we change the order of integration and again make use of
(\ref{erd_eq}) to arrive at the desired equality \eqref{master3}.
This proves the validity of the statement (\ref{master1}) for $p=0$.
\bigskip
\bigskip
Now we show that, for
$x\max\{y_j\}<1$,
\begin{align}
\label{1f0main}
{}_{1}F^2_{0}(a;X,Y) =\frac{\Gamma(m+1)}{x^m (a-m)_m}\frac{1}{2\pi i}
\int_{\Gamma} {}_{1}F_{0}\left(a-m;xs\right) \Delta_y(s)
{\rm d}s.
\end{align}
We use identity (\ref{matrix_int}), the special form
${}_1F_0(a;z)=(1-z)^{-a}$ and the relation
\begin{equation}
\label{eq:gamma}
\frac{1}{s^a}=\frac{1}{\Gamma(a)}\int_0^\infty t^{a-1} e^{-st} {\rm
d}t, \qquad \;\; \Re(s)>0, \Re(a)>0
\end{equation}
to obtain, after observing that $x\max\{y_j\}<1$ implies $x q_r' Y
q_r<1$,
\begin{equation}
\label{eq_beg}
{}_1F^2_0\left(a;X,Y\right)
= \int_{\mathcal{S}(r)}\frac{1}{\left(1-x q_r' Y q_r\right)^a}({\rm d} q_r)
= \int_0^\infty t^{a-1} e^{-t}
\int_{\mathcal{S}(r)} e^{txq_r'
Y q_r} ({\rm d} q_r) \;{\rm d}t.
\end{equation}
Now substitute the contour identity (\ref{eq:onatski}) with $t = 1/w$,
and with the contour chosen to encircle $\{ y_j \}$ and to lie to the
left of $1/x$. We obtain
\begin{align*}
{}_1F^2_0\left(a;X,Y\right)
& =\frac{\Gamma\left(r/2\right)}{\Gamma(a)
{x}^{\frac{r}{2}-1}} \frac{1}{2\pi i}
\int_0^\infty \int_{\Gamma} t^{a-\frac{r}{2}} e^{-t\left(1-x
s\right)} \Delta_y(s) {\rm d}s\; {\rm d}t \\
& =\frac{\Gamma\left(r/2\right)\Gamma\left(a+1-\frac{r}{2}\right)}{\Gamma(a)
x^{\frac{r}{2}-1}}
\frac{1}{2\pi i}
\int_{\Gamma} (1-xs)^{r/2 -a-1}
\Delta_y(s) {\rm d}s,
\end{align*}
valid for $\Re(a)>\frac{r}{2}-1$, after changing the order of integration
and using (\ref{eq:gamma}) and the fact that $\Re(s) < 1/x$.
Recalling that $m = r/2 - 1$ and ${}_1F_0(a;z) = (1-z)^{-a}$, the
final form reduces to the right-hand side of (\ref{1f0main}),
under the condition $\Re(a)>m$. However, both sides of the above
equality, which we have established only in the domain $\Re(a)>m$ of
the complex plane, are analytic functions of $a$. Therefore, the equality must
hold in the whole region of analyticity in $a$. This establishes
the claim (\ref{1f0main}).
\subsection{Inductive step}
\label{sec:inductive-step}
First, some notation.
We write $a_+ = (\alpha, a_1, \ldots, a_p)$ and
$b_+ = (\beta, b_1, \ldots, b_q)$ for the augmentations of $a$ and
$b$, and abbreviate ${}_{p+1}F_{q+1}$ by ${}_{p+}F_{q+}$.
Thus, the induction step
amounts to establishing the validity of the following statement, given
the statement (\ref{master1}) is true
\begin{equation}
\label{ini}
{}_{p+}F^2_{q+}(a_+,b_+;X,Y)
=\frac{\Gamma(m+1)}{x^m \rho_{m+}'}\frac{1}{2\pi i}\int_{\Gamma}
{}_{p+}F_{q+}\left(a_+ -m, b_+ -m;xs\right) \Delta_y(s) {\rm d}s
\end{equation}
where
\begin{equation}
\label{eq:rhomplus}
\rho_{m+}'= \rho_m'
\frac{\Gamma(\alpha)\Gamma(\beta-m)}{\Gamma(\alpha-m) \Gamma(\beta)}.
\end{equation}
We use a reparametrized version of the beta density
\begin{equation*}
\phi(t; \alpha,\beta)
= \frac{\Gamma(\beta)}{\Gamma(\alpha) \Gamma(\beta-\alpha)}
t^{\alpha-1} (1-t)^{\beta-\alpha-1},
\end{equation*}
and the
integral representation of
the generalized hypergeometric function \citep[eq. (3.2)]{erde37}
\begin{equation}
\label{hypo_alt}
{}_{p+}F_{q+}(a_+,b_+;x)
= \int_{0}^1 \phi(t; \alpha, \beta) \,
{}_{p}F_{q}(a,b;xt) {\rm d}t
\end{equation}
where $\Re(\beta)>\Re(\alpha)>0$,
along with (\ref{matrix_int}),
in order to write the left side of
(\ref{ini}) as
\begin{align*}
{}_{p+}F^2_{q+}(a_+,b_+;X,Y)
& = \int_{\mathcal{S}(r)}
{}_{p+}F_{q+}(a_+,b_+;x \, q_r'Y q_r) ({\rm d} q_r)\nonumber\\
& = \int_{\mathcal{S}(r)} \int_0^1 \phi(t; \alpha, \beta)
{}_pF_{q}(a,b;xt \, q_r'Y q_r){\rm d}t \;({\rm d} q_r) \nonumber\\
& = \int_0^1 \phi(t; \alpha, \beta) \int_{\mathcal{S}(r)}
{}_pF_{q}(a,b;xt \, q_r'Y q_r)({\rm d} q_r) \;{\rm d}t \\
& = \int_0^1 \phi(t; \alpha, \beta) \; {}_pF^2_{q}\left(a,b;tX,Y\right)
\;{\rm d}t,
\end{align*}
where we have changed the order of integration and again used
(\ref{matrix_int}).
The final expression can be rewritten with the help of
our induction hypothesis (\ref{master1}) as
\begin{equation}
\label{eq:rep}
\frac{\Gamma(m+1)}{x^m\rho_m'}\frac{1}{2\pi i}
\int_0^1 t^{-m} \phi(t; \alpha, \beta)
\int_{\Gamma}
{}_pF_{q}\left(a-m,b-m;xts\right) \Delta_y(s) {\rm d}s\; {\rm d}t.
\end{equation}
Now use the identity
\begin{equation*}
t^{-m} \phi(t; \alpha, \beta)
= \phi(t; \alpha-m, \beta-m) \frac{\Gamma(\beta)
\Gamma(\alpha-m)}{\Gamma(\beta-m)\Gamma(\alpha)}
\end{equation*}
and note from \eqref{eq:rhomplus} that the ratio of Gamma functions
equals $\rho_m'/\rho_{m+}'$. Inserting this into \eqref{eq:rep} and
changing the order of integration, we obtain
\begin{equation*}
\frac{\Gamma(m+1)}{x^m\rho_{m+}'}\frac{1}{2\pi i}
\int_{\Gamma} \Delta_y(s)
\int_0^1 \phi(t; \alpha-m,\beta-m)
{}_pF_{q}\left(a-m,b-m;xts\right) {\rm d}t\; {\rm d}s.
\end{equation*}
Now again use (\ref{hypo_alt}), along
with the restriction $\Re(\alpha)>m$, to yield \eqref{ini}
in the domain $\Re(\beta)>\Re(\alpha)>m$ of $\mathbb{C}$.
Since both sides of equality \eqref{ini}
are analytic functions of $\alpha$ and $\beta$, the equality must hold
in their whole region of analyticity.
This completes the induction step.
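The beta-density identity used in the induction step can be sanity-checked numerically (stdlib only; the parameter values are arbitrary test points):

```python
from math import gamma

def phi(t, a, b):
    """The reparametrized beta density phi(t; a, b) from the text."""
    return gamma(b) / (gamma(a) * gamma(b - a)) * t ** (a - 1) * (1 - t) ** (b - a - 1)

alpha, beta, m, t = 3.7, 6.2, 2, 0.41
lhs = t ** (-m) * phi(t, alpha, beta)
rhs = phi(t, alpha - m, beta - m) * (gamma(beta) * gamma(alpha - m)) / (gamma(beta - m) * gamma(alpha))
print(abs(lhs - rhs))  # ~0
```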
\section{Introduction}
The eigenvalues of one or two sample covariance matrices play a
central role in multivariate analysis. A long list of
examples, including principal components analysis (PCA), canonical
correlations analysis (CCA), multivariate analysis of variance
(MANOVA) and multiple response linear regression are the main subject
of many standard textbooks, such as \citet{mkb79,ande03}.
Under the common assumption of Gaussian data, much is known about the
joint and marginal distribution of the eigenvalues.
For example, under the typical null hypotheses, the joint density of the
eigenvalues has an explicit formula, derived in 1939 in the celebrated
and independent work of
Fisher, Girshick, Hsu, Mood and Roy.
Under general alternatives, the joint density is given by an integral
over a group of matrices. If the number of variables, and hence
eigenvalues, is large, $p$ say, as is common nowadays, this integral
will be high dimensional, of dimension $O(p^2)$.
A remarkable classification of the joint density functions was given
by \citet{jame64}, using hypergeometric functions of matrix argument.
He showed how the classical multivariate methods could be organized
into five cases, involving hypergeometric functions $\,_pF_q$ of
different orders, specifically
$\,_0F_0, \,_0F_1, \,_1F_0, \,_1F_1,$ and $\,_2F_1.$
Remarkable though this work is, and despite significant progress on
the numerical computation of hypergeometric functions, e.g.
\citet{koed06}, these expressions for the joint densities have proved
challenging to work with in application.
In many high dimensional applications, however, it may be reasonable
to consider alternative hypotheses which are low rank departures from
the null. For some examples, see \citet{jona14}.
In this note we consider the simplest case, namely rank one deviations, and
show that the joint eigenvalue density can then be reduced to a single
(contour) integral.
We believe this integral representation to be of interest at least
because it is amenable to approximation when dimension $p$ is large,
leading to simple approximations to at least certain aspects of these
multivariate eigenvalue distributions.
We mention two examples of such applications.
\begin{itemize}
\item[(i)] derivation of limiting Gaussian approximations for `linear
statistics' (including, for example, the likelihood ratio test, and
`high-dimension-corrected' likelihood ratio test,
\citet{omh13,wsy13}). Particular
cases ($\,_0F_0, \,_0F_1, \,_1F_1$) have been given for complex data
by \citet{pmc14}.
\item[(ii)] delineation of the region of contiguous alternatives to
the null hypothesis, and description of the Gaussian limit for the
log-likelihood ratio process inside the contiguity region.
This leads to a comparative understanding of the power properties of
various hypothesis tests, both traditional and new, in the contiguity
region. This example has been studied in the case of PCA,
corresponding to $\,_0F_0$, by \citet{omh13}, and
work is in progress to apply the result of this note to the general
$\,_pF_q$ cases.
\end{itemize}
We will adopt James' systematization in order to give a unified
derivation of our contour formulas.
We give the rank one formula for $\,_pF_q$ in real and complex cases,
Section 2.
This can be converted directly into an expression for the joint
density function for the eigenvalues in each of James' five cases
(for both $\mathbb{R}$ and $\mathbb{C}$). Section 3 illustrates this process in
one case, testing equality of covariance matrices, for real data
(i.e. $\,_1F_0$).
In the real case, the proof of Section \ref{sec:cont-integr-repr}
applies only to even dimension $p$. Section
\ref{sec:real-case-integer} gives a different proof valid for all
integer $p$.
\section{Contour integral representation for rank one}
\label{sec:cont-integr-repr}
Let $X, Y$ be $r \times r$ Hermitian matrices.
The definitions of hypergeometric functions with one and two matrix
arguments are given, for example, by \citet{jame64}, with separate
expressions for real and complex cases.
The definitions simplify in our special case in which $X$ has rank
one, with nonzero eigenvalue $x$.
For $a \in \mathbb{C}$, let $(a)_k = a(a+1) \cdots (a+k-1), (a)_0 = 1$ be the rising
factorial, and for vectors of parameters
$a = (a_l)_{l=1}^p, b = (b_l)_{l=1}^q$
with $a_l \in \mathbb{C}$ and $b_l \in \mathbb{C} \backslash \{0, -1, -2, \ldots \}$,
adopt the abbreviation
\begin{displaymath}
\rho_k = \rho_k(a,b)
= \frac{ (a_1)_k \cdots (a_p)_k}{(b_1)_k \cdots (b_q)_k}.
\end{displaymath}
If $X$ has rank one as described, define
\begin{equation}
\label{eq:rankone}
\,_pF_q^\alpha (a,b; X, Y)
= \sum_{k=0}^\infty \rho_k \frac{(1/\alpha)_k}{(r/\alpha)_k}
\frac{x^k C_k^\alpha(Y)}{k!}.
\end{equation}
Here $\alpha > 0$ indexes a one parameter family that includes the
real ($\alpha = 2$) and complex ($\alpha = 1$) cases.
Also, $C_k^\alpha$ are Jack polynomials (e.g. \citet{macd95}): in the
real case ($\alpha = 2$), they
reduce to James' zonal polynomials (e.g. \citet{muir82}), and in the complex
case ($\alpha=1$), to a normalization of the Schur functions
(e.g. \citet{des07}).
A contour formula for $C_k^\alpha (Y)$ is quoted below; for now we
note that $C_k^\alpha (X) = x^k$, and (e.g. \citet[eq. (245)]{wang12}) that
\begin{displaymath}
C_k^\alpha(I) = \prod_{j=0}^{k-1} \frac{r+\alpha j}{1+\alpha j}
= \frac{(r/\alpha)_k}{(1/\alpha)_k},
\end{displaymath}
which explains the form of the two ratios in formula
(\ref{eq:rankone}) as
$C_k^\alpha(X) C_k^\alpha(Y) / C_k^\alpha(I)$.
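The product/Pochhammer identity for $C_k^\alpha(I)$ is elementary (cancel a factor $\alpha$ in each term); a quick stdlib check:

```python
from math import prod

def poch(c, k):
    """Rising factorial (c)_k."""
    return prod(c + j for j in range(k))

alpha, r, k = 2.0, 7, 5
lhs = prod((r + alpha * j) / (1 + alpha * j) for j in range(k))
rhs = poch(r / alpha, k) / poch(1 / alpha, k)
print(abs(lhs - rhs))  # ~0
```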
The series (\ref{eq:rankone}) converges for all $x, Y$ if $p \leq q$;
for $x \| Y \| < 1$ if $p=q+1$ (where $\|Y\|$ denotes the maximum
eigenvalue in absolute value of $Y$); and if $p > q+1$ it diverges
unless it terminates
(e.g. \citet{mph95}).
With this notation, the scalar generalized hypergeometric function,
which does not depend on $\alpha$, is
\begin{displaymath}
\,_pF_q (a,b;x) = \sum_{k=0}^\infty \rho_k(a,b) \frac{x^k}{k!}.
\end{displaymath}
The main result of this note can now be stated.
\begin{proposition}
Suppose that $p \leq q+1$,
$X$ is rank 1 with positive eigenvalue $x$ and that $Y$
is positive definite with eigenvalues $(y_j)_{j=1}^r$.
(i) Suppose that $r/\alpha$ is a positive integer, say $r/\alpha =
m+1$, and that
$a_l \notin \{1, \ldots, m\}$ and $ b_l \notin \{ m, m-1, m-2, \ldots \}$.
Then,
\begin{equation}
\label{eq:contour-formula}
\,_pF_q^\alpha (a,b; X, Y)
= \frac{\Gamma(m+1)}{x^m \rho_m'} \frac{1}{2 \pi i}
\int_\Gamma \,_pF_q (a-m,b-m; xs)
\prod_{j=1}^r \frac{1}{(s-y_j)^{1/\alpha}} {\rm d}s,
\end{equation}
where the contour $\Gamma$ starts at $- \infty$, encircles $0$ and $\{
y_j \}$ counterclockwise and returns to $-\infty$. Further,
$a-m$ denotes the vector with entries $a_i -m$ and
\begin{displaymath}
\rho_m' = \rho_m(a-m, b-m).
\end{displaymath}
Equality holds in the common domain of analyticity of both sides:
$\mathbb{C}$ if $p \leq q$ and $\mathbb{C} \backslash (1,\infty)$ if $p = q+1$.
(ii) If instead $r/\alpha = m + \epsilon$ for $\epsilon \in (0,1)$ and
non-negative integer $m$, then under the same conditions
\begin{equation}
\label{eq:contour-formula-mod}
\,_pF_q^\alpha (a,b; X, Y)
= \frac{(\epsilon)_m}{x^m \rho_m'} \frac{1}{2 \pi i}
\int_\Gamma s^{\epsilon -1} \,_{p+1}F_{q+1} (a-m,1,b-m,\epsilon; xs)
\prod_{j=1}^r \frac{1}{(s-y_j)^{1/\alpha}} {\rm d}s.
\end{equation}
(iii) If $\alpha = 2$, then formula (2) holds for \textbf{any} integer
$r$, still with $m+1 = r/2$, if
the symbol $(a)_m$ is interpreted as $\Gamma(a+m)/\Gamma(a)$ for
non-integer $m$.
\end{proposition}
Thus, in the real ($\alpha = 2$) and complex ($\alpha = 1$) cases of most
interest in applications, formula (\ref{eq:contour-formula}) holds for
all positive integer $r$.
Particular cases of (\ref{eq:contour-formula}) are already known:
$\,_0F_0$ for both real and complex cases
\citep{mo12,omh13}, for general $\alpha$, \citet{wang12,forr11},
and for the complex
case only, $\,_0F_1$ \citep{pd13}
and $\,_1F_1$ \citep{pmc14}.
\citet{wang12} also gives formula (\ref{eq:contour-formula-mod}) in the
$\,_0F_0$ case.
A generalization of (i)
to the multi-spike case
has been given for $\,_0F_0$ by \citet{onat14} and recently extended
to $\,_pF_q$ by \citet{pmc14a}.
\textbf{Proof.}
Parts (i) and (ii) are shown here; part (iii) uses a different
argument and is deferred to Section 4.
We begin with a result from \citet[eq. (248)]{wang12}, which states that
\begin{displaymath}
(1/\alpha)_k \, \frac{C_k^\alpha(Y)}{k!}
= \frac{1}{2 \pi i} \int_{\Gamma'} \prod_{j=1}^r
\frac{1}{(1-zy_j)^{1/\alpha}} \frac{{\rm d}z}{z^{k+1}}.
\end{displaymath}
Here the contour $\Gamma'$ encircles zero and is chosen small enough so that all
$y_j^{-1}$ lie outside.
Insert this into \eqref{eq:rankone} and interchange summation and
integration to obtain
\begin{equation}
\label{eq:Fcontour}
\,_pF_q^\alpha (a,b; X, Y)
= \frac{1}{2 \pi i} \int_{\Gamma'} \prod_{j=1}^r
\frac{1}{(1-zy_j)^{1/\alpha}} G(z;x) {\rm d}z
\end{equation}
where the series
\begin{displaymath}
G(z;x) = \sum_{k=0}^\infty \frac{\rho_k}{(r/\alpha)_k}
\frac{x^k}{z^{k+1}}
\end{displaymath}
converges for all $x, z$ if $p \leq q$ and for $|x/z| < 1$ if $p =
q+1$.
Now write $r/\alpha = m+1$ and introduce the variable $l = k+m$, so
that
\begin{equation}
\label{eq:secondform}
G(z;x) = \sum_{l=m}^\infty \frac{\rho_{l-m}}{(m+1)_{l-m}}
\frac{x^{l-m}}{z^{l-m+1}}
= \frac{m!}{x^m} \frac{z^{m-1}}{\rho_m'}
\sum_{l=m}^\infty \frac{\rho_l(a-m,b-m)}{l!}
\left(\frac{x}{z}\right)^l,
\end{equation}
where we have used $(m+1)_{l-m} = l!/m!$, and noted
that $(c)_{l-m} = (c-m)_l / (c-m)_m$ so that
\begin{displaymath}
\rho_{l-m}(a,b) = \frac{ \rho_l(a-m,b-m)}{\rho_m(a-m,b-m)}.
\end{displaymath}
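The Pochhammer shift used here, $(c)_{l-m} = (c-m)_l/(c-m)_m$ and hence $\rho_{l-m}(a,b) = \rho_l(a-m,b-m)/\rho_m(a-m,b-m)$, can likewise be checked numerically (stdlib only; the parameter values are arbitrary):

```python
from math import prod

def poch(c, k):
    """Rising factorial (c)_k."""
    return prod(c + j for j in range(k))

def rho(a, b, k):
    """rho_k(a, b) = prod_i (a_i)_k / prod_j (b_j)_k."""
    return prod(poch(ai, k) for ai in a) / prod(poch(bj, k) for bj in b)

a, b, m, l = (1.5, 2.3), (3.1,), 3, 7
am, bm = tuple(ai - m for ai in a), tuple(bj - m for bj in b)
lhs = rho(a, b, l - m)
rhs = rho(am, bm, l) / rho(am, bm, m)
print(abs(lhs - rhs))  # ~0
```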
Let $G_0(z;x)$ denote the function obtained by extending the summation
in \eqref{eq:secondform} down to $l = 0$, so that
\begin{displaymath}
G_0(z;x) = \frac{m!}{x^m} \frac{z^{m-1}}{\rho_m'}
\,_pF_q (a-m,b-m; x/z).
\end{displaymath}
Since we are thereby adding to $G$ a polynomial in $z$, which is
analytic within the contour in \eqref{eq:Fcontour}, the value of the
integral is unchanged.
Hence
\begin{displaymath}
\,_pF_q^\alpha (a,b; X, Y)
= \frac{m!}{x^m} \frac{1}{\rho_m'}
\frac{1}{2 \pi i} \int_{\Gamma'} \prod_{j=1}^r
\frac{z^{m-1}}{(1-zy_j)^{1/\alpha}} \,_pF_q (a-m,b-m; x/z) {\rm d}z.
\end{displaymath}
The change of variables $z = 1/s$ yields
\begin{align*}
\frac{1}{2 \pi i} \int_{\Gamma'} \frac{z^{m-1}}{\prod (1-zy_j)^{1/\alpha}}
F(x/z) {\rm d}z
& = \frac{1}{2 \pi i} \int_{\Gamma''} \frac{1}{s^{m+1}}
\frac{F(xs)}{\prod (1-y_j/s)^{1/\alpha}} {\rm d}s, \\
& = \frac{1}{2 \pi i} \int_\Gamma \frac{F(xs)}{\prod (s-y_j)^{1/\alpha}}
{\rm d}s,
\end{align*}
where the image $\Gamma''$ of $\Gamma'$ is deformed to $\Gamma$ as
described in the Proposition statement in order to avoid the branch
cut in the final formula. Here we use the analytic continuation of
$\,_pF_q$: it is entire for $p \leq q$, while for $p = q+1$ it is
analytic off the cut $[1,\infty)$ along the positive real axis.
The result follows.
When $r/\alpha = m + \epsilon$, we modify the argument.
In (\ref{eq:secondform}), replace $(m+1)_{l-m}$ by
$(m+\epsilon)_{l-m}=(\epsilon)_l/(\epsilon)_m$ to obtain
\begin{displaymath}
G(z;x) = \frac{(\epsilon)_m}{x^m} \frac{z^{m-1}}{\rho_m'}
\sum_{l=m}^\infty \frac{\rho_l(a-m,b-m)(1)_l}{(\epsilon)_l}
\frac{1}{l!} \left(\frac{x}{z}\right)^l.
\end{displaymath}
Proceeding as before, and extending the summation to $l=0$, so that
\begin{displaymath}
G_0(z;x) = \frac{(\epsilon)_m}{x^m} \frac{z^{m-1}}{\rho_m'}
\,_{p+1}F_{q+1} (a-m,1,b-m,\epsilon; x/z),
\end{displaymath}
we obtain formula (\ref{eq:contour-formula-mod}).
\section{Example}
\label{sec:example}
Consider the problem of testing equality of covariance matrices---the
$\,_1F_0$ case in \citet{jame64}.
Thus, suppose that $n_1, n_2 \geq p$ and that
$p \times n_1$ and $p \times n_2$ real data matrices
$X = [ X_1 \cdots X_{n_1}]$ and
$Y = [ Y_1 \cdots Y_{n_2}]$ have columns $X_\nu, Y_\nu$ with mean zero and
covariance matrices $\Sigma_1$ and $\Sigma_2$ respectively.
A signal detection application is described in
\citet[Sec. 3]{jona14}.
Suppose that the observation vectors are independent Gaussian, so that
$A_1 = X X'$ and $A_2 = Y Y'$ have Wishart distributions
$W_p(n_1,\Sigma_1)$ and $W_p(n_2,\Sigma_2)$ respectively.
Then \citet[eq. (65)]{jame64} gives an expression for the joint
density of the eigenvalues $(f_j)$ of $A_1 A_2^{-1}$.
To state it, we introduce notation
$|A| = \det(A), F = \text{diag} (f_j)$ and $\Delta = \Sigma_1
\Sigma_2^{-1}$.
We transform this expression, following \citet[p. 313-4]{muir82}, to
obtain for $n=n_1+n_2$ and $f_1 > f_2 > \cdots > f_p$,
\begin{equation}
\label{eq:1F0-eq}
p(f;\Delta)
= \frac{c_{p,n_1,n_2}}{|\Delta|^{n_1/2}}
\frac{|F|^{(n_1-p-1)/2}}{|I+F|^{n/2}}
\,_1F_0(\tfrac{n}{2};I-\Delta^{-1}, F(I+F)^{-1})
\prod_{j<j'}^p (f_j - f_{j'}),
\end{equation}
where in this real case, $\alpha = 2$, we have written
$\,_1F_0$ for $\,_1F_0^2$.
The normalization constant is given in terms of the multivariate gamma
function \citep[p. 61]{muir82} by
\begin{displaymath}
c_{p,n_1,n_2}
= \frac{\pi^{p^2/2} \Gamma_p(\textstyle{\frac{1}{2}} n)}{\Gamma_p(\textstyle{\frac{1}{2}} p) \Gamma_p(\textstyle{\frac{1}{2}}
n_1) \Gamma_p(\textstyle{\frac{1}{2}} n_2)}.
\end{displaymath}
In the spirit of application (ii) in the Introduction, we may consider the
likelihood ratio for testing the null hypothesis that $\Sigma_1 =
\Sigma_2$. Writing $\Lambda = F(I+F)^{-1}$, we have
\begin{displaymath}
L(\Delta; \Lambda)
= \frac{p(\Lambda; \Delta)}{p(\Lambda; I)}
= |\Delta|^{-n_1/2} \,_1F_0(\tfrac{n}{2}; I- \Delta^{-1}, \Lambda).
\end{displaymath}
Turning now to apply the result of this paper, suppose that $\Sigma_1$
is a rank one perturbation of $\Sigma_2$, so that
$\Sigma_1 = (I + \psi h \psi') \Sigma_2$ for real $h$ and for $\psi$ a
unit vector in $\mathbb{R}^p$.
In this case, $\Delta = I + \psi h \psi'$, so that
$I - \Delta^{-1}$ has rank one, with nonzero eigenvalue
$\tau = h/(1+h)$.
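As a numerical aside (not in the original argument), one can confirm that $\Delta^{-1} = I - \tau \psi\psi'$, so that $I - \Delta^{-1} = \tau\psi\psi'$ is rank one with the single nonzero eigenvalue $\tau = h/(1+h)$:

```python
import math

def outer(u):
    # rank-one matrix u * u'
    return [[ui * uj for uj in u] for ui in u]

def matmul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

h = 0.7
norm = math.sqrt(1 + 4 + 9)
psi = [1 / norm, 2 / norm, 3 / norm]   # a unit vector in R^3
tau = h / (1 + h)
eye = [[float(i == j) for j in range(3)] for i in range(3)]
P = outer(psi)
Delta = [[eye[i][j] + h * P[i][j] for j in range(3)] for i in range(3)]
Cand = [[eye[i][j] - tau * P[i][j] for j in range(3)] for i in range(3)]

# Delta * (I - tau*psi*psi') = I + (h - tau - h*tau)*psi*psi' = I
prod = matmul(Delta, Cand)
assert all(abs(prod[i][j] - eye[i][j]) < 1e-12 for i in range(3) for j in range(3))
# tau*psi*psi' has trace tau, its only nonzero eigenvalue
assert abs(sum(tau * P[i][i] for i in range(3)) - tau) < 1e-12
```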
Since all components of $\Lambda = F(I+F)^{-1}$ are less than one, we
may apply the contour formula (\ref{eq:contour-formula}).
Since $\,_1F_0 (a;x) = (1-x)^{-a}$, we obtain
\begin{displaymath}
L(\tau; \Lambda)
= \frac{n-p}{2} B\biggl(\frac{p}{2}, \frac{n-p}{2}\biggr)
\frac{(1-\tau)^{n_1 /2}}{\tau^{p/2 -1}}
\frac{1}{2 \pi i} \int_\Gamma \frac{(1-\tau s)^{-(n-p+2)/2}}{\prod_j
(s-\lambda_j)^{1/2}} {\rm d}s,
\end{displaymath}
where $B(\alpha, \beta)$ is the usual beta function.
This is a form suitable for asymptotic approximation, the details of
which will be reported elsewhere.
\textit{Remark.} \
A useful check on this last formula is obtained by letting the error
degrees of freedom $n_2 \to \infty$ while keeping $p$ and $n_1$ fixed.
This limit corresponds to the case where $\Sigma_2$ is known, say
$\Sigma_2 = I$ for convenience here, and we consider the single matrix
rank one model $\Sigma_1 = I + \psi h \psi'$ and test the hypothesis
that $h = 0$.
To compare with the formula of \citet[Lemma 3]{omh13}, let
$(\mu_j)$ be the eigenvalues of $n_1^{-1} A_1 (n_2^{-1} A_2)^{-1}$, so
that $\lambda_j = f_j/(1+f_j) = n_1 \mu_j/(n_2 + n_1 \mu_j)$.
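The eigenvalue bookkeeping here is elementary: $n_1^{-1}A_1(n_2^{-1}A_2)^{-1} = (n_2/n_1)A_1A_2^{-1}$, so $\mu_j = (n_2/n_1)f_j$, and substituting $f_j = n_1\mu_j/n_2$ into $\lambda_j = f_j/(1+f_j)$ gives the stated relation. A one-line numerical check (illustrative only):

```python
# mu is an eigenvalue of n1^{-1} A1 (n2^{-1} A2)^{-1} = (n2/n1) A1 A2^{-1},
# so mu = (n2/n1) f and lambda = f/(1+f) = n1*mu/(n2 + n1*mu)
n1, n2 = 7, 11
for f in (0.3, 1.5, 4.0):
    mu = (n2 / n1) * f
    lam = f / (1 + f)
    assert abs(lam - n1 * mu / (n2 + n1 * mu)) < 1e-12
```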
With the change of variables $s = n_1 z/n_2$,
the previous display converges to
\begin{displaymath}
L(\tau; \mu)
= \Gamma\left(\frac{p}{2}\right) \left(\frac{2}{n_1}\right)^{p/2-1}
\frac{(1-\tau)^{n_1 /2}}{\tau^{p/2 -1}}
\frac{1}{2 \pi i} \int_{\Gamma} e^{n_1 \tau z/2}
\prod_j (z - \mu_j)^{-1/2} {\rm d}z,
\end{displaymath}
which is the cited expression for the $\,_0F_0$ likelihood ratio.
\section{Introduction}
The main aim of this article is to study the cohomological properties of representations of ${\rm GL}(5,\mathbb{R})$ which are obtained by a Langlands transfer of cohomological representations of $\Sp(4,\mathbb{R})$. This project started with the following observation of Labesse and Schwermer: A cohomological representation $\pi$ of ${\rm GL}(2,\mathbb{R})$ transfers to a cohomological representation of ${\rm GL}(3,\mathbb{R})$ via the symmetric power transfer (see \cite{labesse-schwermer}). This result was extended to the symmetric power transfer from ${\rm GL}(2,\mathbb{R})$ to ${\rm GL}(n+1,\mathbb{R})$ by Raghuram in \cite{Ra}. Such a result was then used to study the arithmetic of symmetric power $L$-functions attached to $\pi.$ The reader is also referred to \cite{Raghuram-Shahidi} where there is a general discussion involving Langlands functoriality, cohomological representations, and applications to the special values of $L$-functions. Further, in \cite{Ra-Sa}, the author and Raghuram determined when a tempered representation of a classical group transfers to a cohomological representation of an appropriate ${\rm GL}(n,\mathbb{R})$ or ${\rm GL}(n,\mathbb{C})$. This led to the following question: When does a cohomological representation (tempered or not) of a classical group transfer to a cohomological representation of an appropriate ${\rm GL}(n,\mathbb{R})$ or ${\rm GL}(n,\mathbb{C})$?
We answer this question completely in the special case of transferring cohomological representations with trivial coefficients from $\Sp(4,\mathbb{R})$ to ${\rm GL}(5,\mathbb{R})$. The main result of this article is Theorem \ref{sp4-main-result-triv-coeff}, which states that a cohomological representation of $\Sp(4,\mathbb{R})$ is transferred to a cohomological representation if it is the trivial representation, a discrete series representation or it is induced from the Siegel parabolic. We also work out a toy case of transfer from ${\rm SL}(2,\mathbb{R})$ to ${\rm GL}(3,\mathbb{R})$ in Section \ref{SL2}. This example does not shed much light on whether cohomologicalness is preserved under functoriality in general since the only cohomological representations of ${\rm SL}(2,\mathbb{R})$ are the discrete series representations and the trivial representation. The two main tools which are used in proving Theorem \ref{sp4-main-result-triv-coeff} are the Vogan-Zuckerman classification of unitary, irreducible cohomological representations (henceforth called cohomological representations), which can be found in \cite{Vo-Zu}, and a similar classification for ${\rm GL}(5,\mathbb{R})$ given by Speh in \cite{Sp2}. Section \ref{Notations} introduces basic definitions and fixes notations. Sections \ref{V-Z Classification} and \ref{Speh} recall the Vogan-Zuckerman classification of cohomological representations and Speh's classification. Then, using the classification of Vogan-Zuckerman, we list all the cohomological representations of $\Sp(4,\mathbb{R})$ in Section \ref{V-Z for Sp}. We then explicitly compute the transfer of these representations in Section \ref{Transfer triv-coeff} and check, using Speh, which of the resulting representations are cohomological. 
Finally, we summarize our results for cohomological representations with trivial coefficients in Section \ref{Summary Triv-coeff} and we make further observations in the case of non-trivial coefficients in Section \ref{Transfer non-triv coeff}. Though we do not have a complete result in the case of non-trivial coefficients, we make a plausible conjecture in Section \ref{Summary non-triv coeff}.
\bigskip
\noindent {\small {\it Acknowledgements:}
I would like to thank Raghuram for suggesting this problem and giving his valuable inputs from time to time. I would also like to thank Dipendra Prasad and Arvind Nair for their interest in the results of this project, and for helpful tutorials on Langlands parameters.}
\bigskip
\section{Background and Notations}
\label{Notations}
Let $\Sp(2n,\mathbb{R})=\{A \in {\rm GL}(2n,\mathbb{R}): {}^tAJA=J \},$ where $J=\begin{pmatrix}
0 & I_n \\
-I_n & 0
\end{pmatrix}.$ Let $\mathfrak{g}_0$ be the corresponding real Lie algebra. Let $$\mathfrak{h}_0=\Bigg\{ \begin{pmatrix}
& & & & x_1 & & \\
& 0 & & & & \ddots & \\
& & & & & & x_n \\
-x_1 & & & & & & \\
& \ddots & & & & 0 & \\
& & -x_n & & & &
\end{pmatrix} \Bigg| x_i \in \mathbb{R} \Bigg\}.$$
Let $K= \left\lbrace \begin{pmatrix}
A & B \\
-B & A
\end{pmatrix}: \ A, B \in {\rm GL}(n,\mathbb{R}), A{}^tB={}^tBA, \ A{}^tA + B{}^tB = I_n \right\rbrace $ be a maximal compact subgroup of $\Sp(2n,\mathbb{R})$ and $W_K$ be the Weyl group of $K$. Any element of $W_K$ acts on an element of $i\mathfrak{h}_0$ by permuting the entries $x_i$.\\
For $G = \Sp(4,\mathbb{R})$, we fix the following basis of the Lie algebra of $G$ (see \cite{Am-Sc}):
\begin{align*}
Z=-i\begin{pmatrix}
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 \\
-1 & 0 & 0 & 0 \\
0 & 0 & 0 & 0
\end{pmatrix}, \hskip 8mm & Z'= -i\begin{pmatrix}
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 \\
0 & 0 & 0 & 0 \\
0 & -1 & 0 & 0
\end{pmatrix},\\
N_+=\frac{1}{2}\begin{pmatrix}
0 & 1 & 0 & -i \\
-1 & 0 & -i & 0 \\
0 & i & 0 & 1 \\
i & 0 & -1 & 0
\end{pmatrix}, \hskip 5mm & N_- = \frac{1}{2}\begin{pmatrix}
0 & 1 & 0 & i \\
-1 & 0 & i & 0 \\
0 & -i & 0 & 1 \\
-i & 0 & -1 & 0
\end{pmatrix},
\end{align*}
\newpage
\begin{align*}
X_+=\frac{1}{2}\begin{pmatrix}
1 & 0 & i & 0 \\
0 & 0 & 0 & 0 \\
i & 0 & -1 & 0 \\
0 & 0 & 0 & 0
\end{pmatrix}, \hskip 5mm & X_-=\frac{1}{2}\begin{pmatrix}
1 & 0 & -i & 0 \\
0 & 0 & 0 & 0 \\
-i & 0 & -1 & 0 \\
0 & 0 & 0 & 0
\end{pmatrix}, \\
P_{1+} = \frac{1}{2}\begin{pmatrix}
0 & 1 & 0 & i \\
1 & 0 & i & 0 \\
0 & i & 0 & -1 \\
i & 0 & -1 & 0
\end{pmatrix}, \hskip 5mm & P_{1-} = \frac{1}{2}\begin{pmatrix}
0 & 1 & 0 & -i \\
1 & 0 & -i & 0 \\
0 & -i & 0 & -1 \\
-i & 0 & -1 & 0
\end{pmatrix},\\
P_{0+} = \frac{1}{2}\begin{pmatrix}
0 & 0 & 0 & 0 \\
0 & 1 & 0 & i \\
0 & 0 & 0 & 0 \\
0 & i & 0 & -1
\end{pmatrix}, \hskip 5mm & P_{0-} = \frac{1}{2}\begin{pmatrix}
0 & 0 & 0 & 0 \\
0 & 1 & 0 & -i \\
0 & 0 & 0 & 0 \\
0 & -i & 0 & -1
\end{pmatrix}.
\end{align*}
Note that we have the Cartan decomposition for $\mathfrak{g} = \mathfrak{sp}(4) = \mathfrak{k} \oplus \mathfrak{p}$, where $\mathfrak{k} = \< Z,Z',N_+,N_- \>$ and $\mathfrak{p} = \< X_+,X_-,P_{1+},P_{1-},P_{0+},P_{0-} \>.$
The table of Lie brackets for the above basis is as follows:
\vskip 3mm
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|}
\hline
& $Z$ & $Z'$ & $N_+$ & $N_-$ & $X_+$ & $X_-$ & $P_{1+}$ & $P_{1-}$ & $P_{0+}$ & $P_{0-}$\\
\hline
$Z$ & 0 & 0 & $N_+$ & $-N_-$ & $2X_+$ & $-2X_-$ & $P_{1+}$ & $-P_{1-}$ & 0 & 0 \\
\hline
$Z'$ & 0 & 0 & $-N_+$ & $N_-$ & $0$ & $0$ & $P_{1+}$ & $-P_{1-}$ & $2P_{0+}$ & $-2P_{0-}$\\
\hline
$N_+$ & $-N_+$ & $N_-$ & 0 & $Z'-Z$ & $0$ & $-P_{1-}$ & $2X_+$ & $-2P_{0-}$ & $P_{1+}$ & $0$\\
\hline
$N_-$ & $N_-$ & $-N_-$ & $Z-Z'$ & 0 & $-P_{1+}$ & $0$ & $-2P_{0+}$ & $2X_{-}$ & $0$ & $P_{1-}$ \\
\hline
$X_+$ & $-2X_+$ & $0$ & $0$ & $P_{1+}$ & $0$ & $Z$ & $0$ & $N_+$ & $0$ & $0$ \\
\hline
$X_-$ & $2X_-$ & $0$ & $P_{1-}$ & $0$ & $-Z$ & $0$ & $N_-$ & $0$ & $0$ & $0$ \\
\hline
$P_{1+}$ & $-P_{1+}$ & $-P_{1+}$ & $-2X_+$ & $2P_{0+}$ & $0$ & $-N_-$ & $0$ & $Z+Z'$ & $0$ & $N_+$ \\
\hline
$P_{1-}$ & $P_{1-}$ & $P_{1-}$ & $2P_{0-}$ & $-2X_-$ & $-N_+$ & $0$ & $-Z-Z'$ & $0$ & $N_-$ & $0$ \\
\hline
$P_{0+}$ & $0$ & $-2P_{0+}$ & $-P_{1+}$ & $0$ & $0$ & $0$ & $0$ & $-N_-$ & $0$ & $Z'$ \\
\hline
$P_{0-}$ & $0$ & $2P_{0-}$ & $0$ & $-P_{1-}$ & $0$ & $0$ & $-N_+$ & $0$ & $-Z'$ & $0$ \\
\hline
\end{tabular}
\end{center}
\vskip 3mm
These will come in handy for computations later on.
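As a consistency check (illustrative only, not part of the paper), a few entries of this bracket table can be verified numerically from the explicit matrices above:

```python
# numerical spot-check of the bracket table; 1j is the imaginary unit
def mat(rows, scale):
    return [[scale * x for x in row] for row in rows]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)] for i in range(4)]

def bracket(a, b):
    ab, ba = matmul(a, b), matmul(b, a)
    return [[ab[i][j] - ba[i][j] for j in range(4)] for i in range(4)]

def close(a, b, tol=1e-12):
    return all(abs(a[i][j] - b[i][j]) < tol for i in range(4) for j in range(4))

def comb(c1, a, c2, b):
    return [[c1 * a[i][j] + c2 * b[i][j] for j in range(4)] for i in range(4)]

i = 1j
Z   = mat([[0,0,1,0],[0,0,0,0],[-1,0,0,0],[0,0,0,0]], -i)
Zp  = mat([[0,0,0,0],[0,0,0,1],[0,0,0,0],[0,-1,0,0]], -i)
Np  = mat([[0,1,0,-i],[-1,0,-i,0],[0,i,0,1],[i,0,-1,0]], 0.5)
Nm  = mat([[0,1,0,i],[-1,0,i,0],[0,-i,0,1],[-i,0,-1,0]], 0.5)
Xp  = mat([[1,0,i,0],[0,0,0,0],[i,0,-1,0],[0,0,0,0]], 0.5)
Xm  = mat([[1,0,-i,0],[0,0,0,0],[-i,0,-1,0],[0,0,0,0]], 0.5)
P0p = mat([[0,0,0,0],[0,1,0,i],[0,0,0,0],[0,i,0,-1]], 0.5)
P0m = mat([[0,0,0,0],[0,1,0,-i],[0,0,0,0],[0,-i,0,-1]], 0.5)

assert close(bracket(Xp, Xm), Z)                   # [X_+, X_-] = Z
assert close(bracket(P0p, P0m), Zp)                # [P_{0+}, P_{0-}] = Z'
assert close(bracket(Np, Nm), comb(1, Zp, -1, Z))  # [N_+, N_-] = Z' - Z
assert close(bracket(Z, Xp), comb(2, Xp, 0, Xp))   # [Z, X_+] = 2 X_+
```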
\section[Cohomological representations]{Vogan-Zuckerman classification of cohomological representations}
\label{V-Z Classification}
We briefly recall the Vogan-Zuckerman classification for cohomological representations and an algorithm to compute the Langlands inducing data for these representations. For more details the reader is referred to \cite{Vo-Zu}. Let $G$ be a connected real semi-simple Lie group with finite center. Let $\mathfrak{g}_0$ be the Lie algebra of $G$ and $\mathfrak{g}$ be the complexification of $\mathfrak{g}_0$. Let $K \subseteq G$ be a maximal compact subgroup of $G$ and $\theta$ be the corresponding Cartan involution of $G$. Then, we have the Cartan decomposition $$\mathfrak{g} = \mathfrak{k} \oplus \mathfrak{p},$$ where $\mathfrak{k}$ is the $+1$ eigenspace of $\theta$ and $\mathfrak{p}$ the $-1$
eigenspace.
Harish-Chandra in $1953$ proved the following result:
\begin{thm}[Harish-Chandra, \cite{H-C1}]
Let $(\pi,V)$ be an irreducible unitary representation of $G$. Then $V_{K}^{{}^\infty}$ is irreducible as a $\mathfrak{g}$-module and determines $\pi$ up to unitary equivalence, where $V_{K}^{{}^\infty}$ is the subspace of smooth $K$-finite vectors in $V$.
\end{thm}
The subspace $V_{K}^{{}^\infty}$ has a $(\mathfrak{g},K)$-module structure and the result implies that it is enough to study $(\mathfrak{g},K)$-modules. Vogan and Zuckerman describe those $(\mathfrak{g},K)$-modules for which the $(\mathfrak{g},\mathfrak{k})$-cohomology groups do not vanish. We need two parameters: a $\theta$-stable parabolic subalgebra $\mathfrak{q}$ of $\mathfrak{g}$ and an admissible homomorphism $\lambda$ on the Levi part of $\mathfrak{q}$.
We construct a $\theta$-stable parabolic subalgebra as follows: Let $x \in i\mathfrak{k}_0$. Since $K$ is compact, $ad(x): \mathfrak{g} \rightarrow \mathfrak{g}$ is diagonalizable with real eigenvalues. Define,
\begin{eqnarray*}
\mathfrak{q} & = & \text{ sum of non-negative eigenspaces of $ad(x)$},\\
\mathfrak{l} & = & \text{ the zero eigenspace of $ad(x)$ = centralizer of $x$},\\
\mathfrak{u} & = & \text{ sum of positive eigenspaces.}
\end{eqnarray*}
Then $\mathfrak{q}$ is a parabolic subalgebra of $\mathfrak{g}$ and $\mathfrak{q} = \mathfrak{l} + \mathfrak{u}$ is the Levi decomposition of $\mathfrak{q}$. Further $\mathfrak{l}_0 = \mathfrak{g}_0 \cap \mathfrak{l}$. Since $\theta(x) =x$, the subalgebras $\mathfrak{q},\mathfrak{l},\mathfrak{u}$ are all invariant under the Cartan involution $\theta$. The subalgebra $\mathfrak{q}$ is called a $\theta$-stable parabolic subalgebra which is one of the two parameters. Let $\mathfrak{t}_0 \subseteq \mathfrak{k}_0$ be a Cartan subalgebra containing $ix.$ Then $\mathfrak{t} \subseteq \mathfrak{l}$. For any subspace $\mathfrak{f}$, which is stable under $ad(\mathfrak{t})$, let $\Delta(\mathfrak{f},\mathfrak{t})=\{\alpha_1,\dots,\alpha_r\}$ be the roots of $\mathfrak{t}$ occurring in $\mathfrak{f}$. We allow multiplicities in the set $\Delta(\mathfrak{f},\mathfrak{t})$.
Define
$$\rho(\mathfrak{f})= \frac{1}{2} \sum\limits_{\alpha_i \in \Delta(\mathfrak{f})} \alpha_i.$$
Let $L \subseteq G$ be the connected subgroup of $G$ with Lie algebra $\mathfrak{l}_0$. A representation $\lambda: \mathfrak{l} \rightarrow \mathbb{C}$ is called admissible if
\begin{itemize}
\item $\lambda$ is a differential of a unitary character (also denoted by $\lambda$) of $L$.
\item If $\alpha \in \Delta(\mathfrak{u})$, then $\langle \alpha, \lambda\vert_{\mathfrak{t}} \rangle \geq 0.$
\end{itemize}
Given a $\theta$-stable parabolic subalgebra $\mathfrak{q}$ and an admissible $\lambda$, define
$$\mu(\mathfrak{q},\lambda)= \text{Representation of $K$ of highest weight } \lambda\vert_{\mathfrak{t}}+2\rho(\mathfrak{u} \cap \mathfrak{p}).$$
We have the following classification result:
\begin{thm}[\cite{Vo-Zu} Theorem 5.3]
Let $\mathfrak{q}$ be a $\theta$-stable parabolic subalgebra and let $\lambda: \mathfrak{l} \rightarrow \mathbb{C}$ be an admissible character. Then there is a unique irreducible $\mathfrak{g}$-module $A_{\mathfrak{q}}(\lambda)$ such that:
\begin{enumerate}
\item The restriction of $A_{\mathfrak{q}}(\lambda)$ to $\mathfrak{k}$ contains $\mu(\mathfrak{q},\lambda).$
\item The center $Z(\mathfrak{g})$ of the universal enveloping algebra acts by $\chi_{\lambda+\rho}$ on $A_{\mathfrak{q}}(\lambda)$.
\item If a representation of highest weight $\delta$ of $\mathfrak{k}$ appears in the restriction of $A_{\mathfrak{q}}(\lambda)$, then
$$\delta = \lambda\vert_\mathfrak{t} + 2\rho(\mathfrak{u} \cap \mathfrak{p}) + \sum\limits_{\alpha \in \Delta(\mathfrak{u} \cap \mathfrak{p})} n_{\alpha}\alpha,$$ with $n_\alpha$'s non-negative integers.
\end{enumerate}
\end{thm}
This classifies all irreducible unitary cohomological representations of the Lie group $G$. The representation $A_\mathfrak{q}(\lambda)$ has non-trivial cohomology with respect to the finite-dimensional representation of $G$ with highest weight $\lambda$.
\begin{rmk} \cite{Vo-Zu} \label{DS}
$A_\mathfrak{q}(\lambda)$ is a discrete series representation if and only if $\mathfrak{l} \subseteq \mathfrak{k}$. Further, $A_\mathfrak{q}(\lambda)$ is a tempered representation if and only if $[\mathfrak{l},\mathfrak{l}] \subseteq \mathfrak{k}$.
\end{rmk}
\medskip
\subsection{Langlands data for $A_{\mathfrak{q}}(\lambda)$}
We obtain the Langlands inducing data for $A_{\mathfrak{q}}(\lambda)$'s as follows. For details, the reader is referred to \cite{Vo-Zu}. Fix a maximally split $\theta$-stable Cartan subgroup $H=TA$ of $L$ (corresponding to the Levi part $\mathfrak{l}$ of the $\theta$-stable parabolic subalgebra $\mathfrak{q}$) and an Iwasawa decomposition $L=(L\cap K)AN^L.$ Put
\begin{eqnarray*}
MA & = & \text{Langlands decomposition of centralizer of } A \text{ in } G,\\
\nu & = & (\frac{1}{2} \text{ sum of roots of $\mathfrak{a}$ in $\mathfrak{n}^L$})+\lambda\vert_{\mathfrak{a}} \hskip 3mm
\in \mathfrak{a}^*.
\end{eqnarray*}
Now, let $P$ be any parabolic subgroup of $G$ with Levi factor $MA$ satisfying $\langle Re(\nu),\alpha \rangle \geq 0$ for all roots $\alpha$ of $\mathfrak{a}$ in $\mathfrak{n}^L$. The Harish-Chandra parameter of the discrete series representation of $M$ is given by $\rho^+ + \lambda\vert_{\mathfrak{t}}+\rho(\mathfrak{u})$, where $\rho^+$ is half the sum of the positive roots of $\mathfrak{t}$ in $\mathfrak{m} \cap \mathfrak{l}$ and $\rho(\mathfrak{u})$ is half the sum of the roots of $\mathfrak{t}$ in $\mathfrak{u}$. We denote this discrete series representation by $\sigma$. The only difficulty here is that if $M$ is not connected then the Harish-Chandra parameter does not completely determine the discrete series representation $\sigma$ of $M$. We fix this as follows:\\
Let
\begin{eqnarray*}
\mu^{M}(\mathfrak{q},\lambda) & = & \text{Representation of $M \cap K$ of extremal weight } \\
& & \lambda \vert _{\mathfrak{t}} + 2\rho(\wedge^{dim(\mathfrak{u} \cap \mathfrak{p})}(\mathfrak{u} \cap \mathfrak{p}))\vert _{\mathfrak{t}}.
\end{eqnarray*}
Let $\sigma$ be the discrete series representation with lowest $M \cap K$ type $\mu^{M}(\mathfrak{q},\lambda)$. This completely determines the discrete series representation of $M$. The parabolic subgroup $P$, the character $\nu$ of $\mathfrak{a}$, and the representation $\sigma$ of $M$ give us the Langlands inducing data for $A_{\mathfrak{q}}(\lambda)$.
\bigskip
\section{Speh's Classification}
\label{Speh}
We now recall Speh's classification of irreducible, unitary representations of ${\rm GL}(n,\mathbb{R})$ which are cohomological with respect to trivial coefficients. Let $G={\rm GL}(2n,\mathbb{R}), n>1$. Let $C_n=T_nA_n$ be the Cartan subgroup containing matrices of the form:\\
$$\begin{pmatrix}
\cos \phi_1 & \sin \phi_1 & & & \\
-\sin \phi_1 & \cos \phi_1 & & & \\
 & & \ddots & & \\
 & & & \cos \phi_n & \sin \phi_n\\
 & & & -\sin \phi_n & \cos \phi_n
\end{pmatrix}
\begin{pmatrix}
a_1 & & & & \\
& a_1 & & & \\
& & \ddots & & \\
& & & a_n & \\
& & & & a_n
\end{pmatrix}.$$
Then the root system $\Phi(\mathfrak{c}_n,\mathfrak{g})$ is of type $A_{n-1}$ with each root occurring $4$ times. Let $\Phi^+$ be the set of positive roots and set $\rho_n=\sum\limits_{\alpha \in \Phi^+}2\alpha$.\newline Let $P=M_nA_nN$ be the parabolic subgroup determined by the set of positive roots $\Phi^+$. The connected component $M_n^\circ$ of $M_n$ is isomorphic to $n$ copies of ${\rm SL}(2,\mathbb{R})$ and $T_n$ is isomorphic to a product of $n$ copies of ${\rm O}(2)$.
Let $\chi(k)$, $k > 0$, be the quasi-character of $C_n$ such that the restriction of $\chi(k)$ to each ${\rm SO}(2)$ component is $e^{2\pi ik}$ and the restriction to $A_n$ is $\exp(\frac{1}{2}\rho_n)$. Define $$I(k)=J(\chi(k)),$$ where $J(\chi(k))$ is the Langlands quotient of the induced representation ${\rm Ind}_{P}^{{\rm GL}(2n)}(\pi(k) \otimes \chi(k))$ and $\pi(k) = D_k \otimes D_k \otimes \dots \otimes D_k$ is a representation of $M = {\rm SL}_{\pm}(2,\mathbb{R})^n$. If $G={\rm GL}(2,\mathbb{R})$, then we set $I(k)$ to be the discrete series representation $D_k$ of ${\rm GL}(2,\mathbb{R})$.
The cohomological representations of ${\rm GL}(n,\mathbb{R})$ are obtained as follows: Let $(n_0,n_1,\dots,n_r)$ be a partition of $n$ with $n_0 \geq 0$ and $n_i = 2m_i > 0$ for all $1 \leq i \leq r$. Let $P = MAN$ be the parabolic corresponding to the partition $(n_0,\dots,n_r)$. Then, $$M = \prod\limits_{i=0}^{r} {\rm SL}_{\pm}(n_i,\mathbb{R}).$$
Let $k_i = n - \sum\limits_{j=i+1}^{r}n_j - m_i$. Define the induced representation
$${\rm Ind}_P^G( \otimes_{i=1}^r I(k_i) \otimes \chi_{n_0} \otimes \chi_0),$$
where $I(k_i)$ are representations of ${\rm SL}_{\pm}(n_i,\mathbb{R})$ and $\chi_{n_0}$ and $\chi_0$ are trivial representations of ${\rm SL}_{\pm}(n_0,\mathbb{R})$ and $AN$ respectively. Then we have,
\begin{thm}[see \cite{Sp2}]
The induced representation $${\rm Ind}_P^G( \otimes_{i=1}^r I(k_i) \otimes \chi_{n_0} \otimes \chi_0)$$ is irreducible and classifies all the unitary, irreducible representations of ${\rm GL}(n,\mathbb{R})$ which have cohomology with trivial coefficients.
\end{thm}
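The indices $k_i$ in the theorem are determined mechanically by the partition. The following Python helper (our own bookkeeping device, not from \cite{Sp2}) illustrates the computation $k_i = n - \sum_{j>i} n_j - m_i$:

```python
# hypothetical helper: compute the k_i of the theorem from a partition
# (n_0, n_1, ..., n_r) of n with n_i = 2*m_i > 0 for i >= 1
def speh_k(n, parts):
    n0, rest = parts[0], list(parts[1:])
    assert n0 + sum(rest) == n and all(ni > 0 and ni % 2 == 0 for ni in rest)
    ks = []
    for i, ni in enumerate(rest, start=1):
        mi = ni // 2
        ks.append(n - sum(rest[i:]) - mi)   # k_i = n - sum_{j>i} n_j - m_i
    return ks

# e.g. for GL(5,R) and the partition (1, 2, 2): k_1 = 5-2-1 = 2, k_2 = 5-0-1 = 4
assert speh_k(5, (1, 2, 2)) == [2, 4]
assert speh_k(2, (0, 2)) == [1]   # GL(2,R) case
```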
For the purposes of our computations, it will be convenient for us to write down the Langlands inducing data for these representations.
With notations as above, choose a Cartan subgroup $C_{(n-n_0)/2}$ in $MA$ with the following properties:
\begin{itemize}
\item $C_{(n-n_0)/2} \cap {\rm SL}_{\pm}(n_l,\mathbb{R})$ is the fundamental Cartan subgroup of ${\rm SL}_{\pm}(n_l,\mathbb{R})$ for $l \geq 1$, and
\item $C_{(n-n_0)/2} \cap {\rm SL}_{\pm}(n_0,\mathbb{R})$ is the split Cartan subgroup of ${\rm SL}_{\pm}(n_0,\mathbb{R})$.
\end{itemize}
Then we can decompose $C_{(n-n_0)/2}$ as $T_{(n-n_0)/2}A_{(n-n_0)/2}$ with the following properties:
\begin{itemize}
\item $T_{(n-n_0)/2} = \prod\limits_{l=0}^{r}T_{n_l/2}$ with $T_{n_l/2} = T_{(n-n_0)/2} \cap {\rm SL}_{\pm}(n_l,\mathbb{R})$ for $l \geq 0$, and
\item $A_{(n-n_0)/2} = A\prod\limits_{l=0}^{r}A_{n_l}$ with $A_{n_l} = A_{(n-n_0)/2} \cap {\rm SL}_{\pm}(n_l,\mathbb{R})$.
\end{itemize}
Choose a cuspidal parabolic subgroup $Q$ containing $C_{(n-n_0)/2}$ and the upper triangular matrices and write, for $0 \leq l \leq r$, $2\rho_l$ for the sum of the positive roots of $(\mathfrak{sl}(n_l,\mathbb{R}),\mathfrak{a}_l)$ determined by $Q$. Let $\chi(n) \in \hat{C}_{(n-n_0)/2}$ be such that the following holds:
\begin{itemize}
\item $\chi(n)|_{A} = \chi_0$,
\item $\chi(n)|_{A_0} = \rho_0$,
\item $\chi(n)|_{T_0}$ is trivial,
\item $\chi(n)|_{A_l} = \frac{1}{2}\rho_l$ for $l \geq 1$,
\item $\chi(n)|_{T_l}$ is a product of factors $\exp\big(2\pi i\big(n-\sum\limits_{i=l+1}^{r}n_i - m_l\big)\big)$.
\end{itemize}
Then, by \cite{Sp2} Proposition 4.1.1, ${\rm Ind}_P^G( \otimes_{i=1}^r I(k_i) \otimes \chi_{n_0} \otimes \chi_0) \cong J(\chi(n))$.
\bigskip
\section{${\rm SL}(2,\mathbb{R})$ to ${\rm GL}(3,\mathbb{R})$}
\label{SL2}
In this section, as a warm-up example we will study the cohomological properties of representations of ${\rm GL}(3,\mathbb{R})$ which are obtained by transferring $A_{\mathfrak{q}}(\lambda)$'s of ${\rm SL}(2,\mathbb{R})$. We denote by $\mathfrak{sl}(2,\mathbb{C})$ the complexified Lie algebra of ${\rm SL}(2,\mathbb{R})$.
Let $H = \left( \begin{smallmatrix}
1 & 0 \\
0 & -1
\end{smallmatrix} \right), X = \left( \begin{smallmatrix}
0 & 1 \\
0 & 0
\end{smallmatrix} \right)$ and
$Y = \left( \begin{smallmatrix}
0 & 0 \\
1 & 0
\end{smallmatrix} \right)$ be a basis of $\mathfrak{sl}(2,\mathbb{R})$.
Let $w = \left( \begin{smallmatrix}
0 & 1 \\
1 & 0
\end{smallmatrix} \right)$, $\phi(A) = -{}^{t}A$ and $\theta = int(w) \circ \phi$, where $int(w)$ is the inner automorphism by $w$. Then $\theta$ is a Cartan involution on $\mathfrak{sl}(2,\mathbb{C})$ such that
$\mathfrak{sl}(2,\mathbb{C}) = \langle H \rangle \oplus \langle X, Y \rangle,$ is the Cartan decomposition of $\mathfrak{sl}(2,\mathbb{C})$ with $\mathfrak{k} = \langle H \rangle $ and $\mathfrak{p} = \langle X, Y \rangle$.
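This Cartan decomposition can be checked directly on the basis (an illustrative computation, not needed for the argument): $\theta(A) = w(-{}^tA)w^{-1}$ fixes $H$ and negates $X$ and $Y$.

```python
# theta(A) = w * (-A^t) * w with w = [[0,1],[1,0]] fixes H and negates X, Y
def mm(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def theta(a):
    w = [[0, 1], [1, 0]]
    minus_at = [[-a[j][i] for j in range(2)] for i in range(2)]  # -A^t
    return mm(mm(w, minus_at), w)   # note w = w^{-1}

H = [[1, 0], [0, -1]]
X = [[0, 1], [0, 0]]
Y = [[0, 0], [1, 0]]

assert theta(H) == H                                   # H spans k
assert theta(X) == [[-x for x in r] for r in X]        # X, Y span p
assert theta(Y) == [[-y for y in r] for r in Y]
```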
There are three $\theta$-stable parabolic subalgebras of $\mathfrak{sl}(2,\mathbb{C})$ corresponding to $0$, $H$ and $-H$.
\begin{enumerate}
\item Corresponding to $0$: This gives the full algebra $\mathfrak{q}_0 = \mathfrak{sl}(2,\mathbb{C}) = \mathfrak{l}$.
\item Corresponding to $H$: The parabolic subalgebra is
$$\mathfrak{q}_1 = \langle H \rangle \oplus \langle X \rangle,$$ where $\mathfrak{l} = \langle H \rangle$ and $\mathfrak{u} = \langle X \rangle. $
\item Corresponding to $-H$: The parabolic subalgebra is
$$\mathfrak{q}_2 = \langle H \rangle \oplus \langle Y \rangle,$$ where $\mathfrak{l} = \langle H \rangle$ and $\mathfrak{u} = \langle Y \rangle.$
\end{enumerate}
Note that the only possible admissible character $\lambda$ for $\mathfrak{q}_0$ is $\lambda = 0.$ This gives rise to the trivial representation of ${\rm SL}(2,\mathbb{R})$. This representation is transferred to the trivial representation of ${\rm GL}(3,\mathbb{R})$ which is cohomological with respect to the trivial coefficients.
Observe that the Levi parts of both $\mathfrak{q}_1$ and $\mathfrak{q}_2$ are contained in $\mathfrak{k}$. Thus the cohomological representations $A_{\mathfrak{q}_1}(\lambda)$ and $A_{\mathfrak{q}_2}(\lambda)$ are discrete series representations with highest weight $\lambda$. The Langlands parameter for a representation of ${\rm SL}(2,\mathbb{R})$ is a homomorphism from the Weil group of $\mathbb{R}$ to ${\rm PGL}(2,\mathbb{C})$. The parameter $\phi(D_n)$ for the discrete series representation of ${\rm SL}(2,\mathbb{R})$ is given by
$$z \mapsto \begin{pmatrix}
(\frac{z}{\bar{z}})^\frac{n}{2} & 0 \\
0 & 1
\end{pmatrix}, \hskip 5mm j \mapsto \begin{pmatrix}
0 & 1 \\
1 & 0
\end{pmatrix}.$$
To compute the transfer of the discrete series representations of ${\rm SL}(2,\mathbb{R})$ to ${\rm GL}(3,\mathbb{R})$, we embed ${\rm PGL}(2,\mathbb{C})$ into ${\rm GL}(3,\mathbb{C})$ via the $3$-dimensional representation of ${\rm GL}(2,\mathbb{C})$ taking
$$\begin{pmatrix}
a & 0 \\
0 & b
\end{pmatrix} \mapsto \begin{pmatrix}
\frac{a}{b} & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & \frac{b}{a}
\end{pmatrix}.$$
The image of ${\rm PGL}(2,\mathbb{C})$ can be identified with ${\rm SO}(3) \subset {\rm GL}(3,\mathbb{C})$ which preserves the quadratic form $\begin{pmatrix}
0 & 0 & 1 \\
0 & -1 & 0 \\
1 & 0 & 0
\end{pmatrix}$.
Thus, one observes that the transfer of a discrete series representation of ${\rm SL}(2,\mathbb{R})$, with highest weight $n$, to ${\rm GL}(3,\mathbb{R})$ has Langlands parameter
$$z \mapsto \begin{pmatrix}
(\frac{z}{\bar{z}})^\frac{n}{2} & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & (\frac{z}{\bar{z}})^{-\frac{n}{2}}
\end{pmatrix}; \hskip 20mm j \mapsto \begin{pmatrix}
0 & 0 & 1 \\
0 & -1 & 0 \\
1 & 0 & 0
\end{pmatrix}.$$
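One can check numerically (an illustrative verification; we take an even $n$ and a unit-modulus $z$ for concreteness) that both matrices of this parameter preserve the quadratic form above and have determinant $1$, i.e.\ land in the copy of ${\rm SO}(3)$ inside ${\rm GL}(3,\mathbb{C})$:

```python
import cmath

Q = [[0, 0, 1], [0, -1, 0], [1, 0, 0]]   # the quadratic form preserved by SO(3)

def mm(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

def transpose(a):
    return [[a[j][i] for j in range(3)] for i in range(3)]

def det3(a):
    return (a[0][0] * (a[1][1] * a[2][2] - a[1][2] * a[2][1])
          - a[0][1] * (a[1][0] * a[2][2] - a[1][2] * a[2][0])
          + a[0][2] * (a[1][0] * a[2][1] - a[1][1] * a[2][0]))

def preserves_Q(m, tol=1e-12):
    g = mm(mm(transpose(m), Q), m)       # check m^t Q m = Q
    return all(abs(g[i][j] - Q[i][j]) < tol for i in range(3) for j in range(3))

n = 4
z = cmath.exp(0.37j)                     # any z with |z| = 1
d = (z / z.conjugate()) ** (n / 2)
Mz = [[d, 0, 0], [0, 1, 0], [0, 0, 1 / d]]   # image of z
Mj = [[0, 0, 1], [0, -1, 0], [1, 0, 0]]      # image of j

assert preserves_Q(Mz) and abs(det3(Mz) - 1) < 1e-12
assert preserves_Q(Mj) and abs(det3(Mj) - 1) < 1e-12
```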
We know that this corresponds to a representation of ${\rm GL}(3,\mathbb{R})$ which is cohomological with respect to the finite dimensional representation with highest weight $(n,0,-n)$. We have already seen that $M_n$, the finite dimensional representation of ${\rm SL}(2,\mathbb{R})$ with highest weight $n$, transfers to the finite dimensional representation of ${\rm GL}(3,\mathbb{R})$ with highest weight $(n,0,-n)$. Thus we have the following result:
\begin{prop}
Let $\pi$ be an irreducible unitary cohomological representation of ${\rm SL}(2,\mathbb{R})$ with respect to the finite dimensional representation $M$. Then the representation of ${\rm GL}(3,\mathbb{R})$, $\iota(\pi)$, obtained by the Langlands transfer is cohomological with respect to $\iota(M)$.
\end{prop}
\begin{rmk}
Note that the only cohomological representations of ${\rm SL}(2,\mathbb{R})$ are the discrete series representations and the trivial representation.
\end{rmk}
\bigskip
\section{Vogan-Zuckerman classification for $\Sp(4,\mathbb{R})$}
\label{V-Z for Sp}
\subsection{$\theta$-stable subalgebras for $\Sp(2n,\mathbb{R})$}
We will parameterize the $\theta$-stable parabolic subalgebras of $\Sp(2n,\mathbb{R})$.
We have the following result to aid us in listing all the $\theta$-stable subalgebras of $\Sp(4,\mathbb{R}).$
\begin{lemma}
\label{Clas-theta-parabolics}
The following sets are in $1-1$ correspondence:
\begin{enumerate}
\item \{open, polyhedral root cones in $i\mathfrak{h}/W_K$ \}
\item \{ordered partitions of $n$: $n=\sum\limits_{j=1}^{s}(n_j+m_j)+m$ with $n_j,m_j,m,s \geq 0, n_j+m_j > 0$\}
\end{enumerate}
\end{lemma}
\begin{proof}
Let $x=(x_1,\dots,x_n) \in i\mathfrak{h}/W_K$. Since $W_K$ acts by permuting the coordinates of $x$, we can assume that the coordinates are arranged in decreasing order, $x_1 \geq x_2 \geq \dots \geq x_n$. This can also be expressed as follows:
$$x=(\underbrace{s,\dots,s}_{n_s},\underbrace{s-1,\dots,s-1}_{n_{s-1}},\dots,\underbrace{1,\dots,1}_{n_1},\underbrace{0,\dots,0}_{m},\underbrace{-1,\dots,-1}_{m_1},\dots,\underbrace{-s,\dots,-s}_{m_s}),$$ with $n=\sum\limits_{j=1}^{s}(n_j+m_j)+m$ with $n_j,m_j,m,s \geq 0, n_j+m_j > 0.$ This gives us a bijection between the two sets above.
\end{proof}
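The bijection in the proof is easy to make explicit. The following Python sketch (our own illustration, assuming an integer representative in the normal form displayed above) extracts the data $(s,(n_j),(m_j),m)$ from a coordinate vector:

```python
# map a representative x in i*h_0 (integer entries, in the normal form of the
# proof) to the partition data (s, (n_1,...,n_s), (m_1,...,m_s), m)
def cone_partition(x):
    xs = sorted(x, reverse=True)
    s = max((abs(v) for v in xs), default=0)
    n_pos = [xs.count(j) for j in range(1, s + 1)]    # n_j = #{x_i = j}
    m_neg = [xs.count(-j) for j in range(1, s + 1)]   # m_j = #{x_i = -j}
    m = xs.count(0)
    assert sum(n_pos) + sum(m_neg) + m == len(xs)     # the partition of n
    return s, n_pos, m_neg, m

# e.g. x = (2, 1, 1, 0, -1) gives s = 2, (n_1, n_2) = (2, 1),
# (m_1, m_2) = (1, 0) and m = 1, so n = (2+1) + (1+0) + 1 = 5
assert cone_partition((2, 1, 1, 0, -1)) == (2, [2, 1], [1, 0], 1)
```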
Let $\mathfrak{Q}$ be the set of all $\theta$-stable parabolic subalgebras of $\mathfrak{g}.$ The group $K$ acts on the set $\mathfrak{Q}$ via the adjoint action, and the quotient $\mathfrak{Q}/K$ is a finite set of conjugacy classes of $\theta$-stable parabolic subalgebras. The following lemma gives us a bijection between $\mathfrak{Q}/K$ and open polyhedral root cones in $i\mathfrak{h}/W_K$.
\begin{lemma}
\label{char-theta-sta-subalg}
Every $x \in i\mathfrak{h}/W_K$ defines a $\theta$-stable parabolic subalgebra $\mathfrak{q}_x$ by setting \linebreak $\mathfrak{q}_x = \mathfrak{l}_x + \mathfrak{u}_x$, where $$\mathfrak{l}_x = \mathfrak{h} \oplus \bigoplus\limits_{\alpha \in \Delta(\mathfrak{g},\mathfrak{h}),\alpha(x)=0}\mathfrak{g}_{\alpha}; \hskip 5mm
\mathfrak{u}_x = \bigoplus\limits_{\alpha \in \Delta(\mathfrak{g},\mathfrak{h}),\alpha(x)>0} \mathfrak{g}_\alpha.$$
Two $\theta$-stable parabolic subalgebras $\mathfrak{q}_x,\mathfrak{q}_y$ are equal if and only if $x$ and $y$ are in the same open polyhedral root cone.\\
\noindent
Conversely, up to conjugacy by $K$, any $\theta$-stable parabolic subalgebra $\mathfrak{q}$ is $\mathfrak{q}=\mathfrak{q}_x$ for some $x \in i\mathfrak{h}/W_K$.
\end{lemma}
Two $\theta$-stable parabolic subalgebras, $\mathfrak{q}_1,\mathfrak{q}_2$, are said to be equivalent if $\Delta(\mathfrak{u}_1 \cap \mathfrak{p}) = \Delta(\mathfrak{u}_2 \cap \mathfrak{p})$, i.e. if the non-compact parts in the unipotent radicals of the parabolic subalgebras are equal. We list all the relevant data for $\Sp(4,\mathbb{C})$.
\subsection{Parabolic subgroups of $G=\Sp(4,\mathbb{R})$}
The parabolic subgroup is one of the components in the Langlands inducing data for a representation. To compute the Langlands parameter for $A_{\mathfrak{q}}(\lambda)$, we will need to realize $A_{\mathfrak{q}}(\lambda)$ as a Langlands quotient of an induced representation. Thus it will be important for us to list the parabolic subgroups of $\Sp(4,\mathbb{R})$. The parabolic subgroups containing a Borel subgroup are in bijection with the subsets of a base corresponding to the Borel (see \cite{Spr}).
For $\Sp(4,\mathbb{R})$, a base for the root system is $\{e_1-e_2,2e_2 \}$. There are $4$ parabolic subgroups of $\Sp(4,\mathbb{R})$ containing the Borel, each corresponding to a subset of the base. One of them is the group itself, corresponding to the full base. This leaves $3$ proper parabolic subgroups of $\Sp(4,\mathbb{R})$, which are:
\begin{enumerate}
\item \textbf{Minimal parabolic:} The Borel subgroup $B=M_0A_0N_0$, corresponding to the empty subset of the base, with \\ $M_0=\{\begin{pmatrix}
\epsilon_1 & 0 & 0 & 0\\
0 & \epsilon_2 & 0 & 0\\
0 & 0 & \epsilon_1 & 0\\
0 & 0 & 0 & \epsilon_2\\
\end{pmatrix}: \epsilon_i \in \{\pm 1\} \},$\\
$A_0=\{ \text{diag}(a,b,a^{-1},b^{-1}): a,b \in \mathbb{R}_{> 0}^{\times} \},$ and\\
$N_0 = \{ n(x_0,x_1,x_2,x_3) = \begin{pmatrix}
1 & 0 & x_1 & x_2 \\
0 & 1 & x_2 & x_3 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1 \\
\end{pmatrix} \begin{pmatrix}
1 & x_0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & -x_0 & 1 \\
\end{pmatrix} \} \subset \Sp(4).$
\item \textbf{Siegel parabolic:} The Siegel parabolic $P_S = M_SA_SN_S,$ corresponding to the subset $\Sigma=\{e_1 - e_2\}$ of the base, with\\
$M_S=\{\begin{pmatrix}
m & 0 \\
0 & {}^tm^{-1}
\end{pmatrix}: m \in SL^{\pm}(2,\mathbb{R}) \},$\\
$A_S=\{ \text{diag}(a,a,a^{-1},a^{-1}): a > 0 \},$ and\\
$N_S = \{ \begin{pmatrix}
1_2 & x \\
0 & 1_2
\end{pmatrix}: x={}^tx \in M_2(\mathbb{R}) \}.$
\item \textbf{Jacobi Parabolic:} The Jacobi parabolic $P_J = M_JA_JN_J$, corresponding to the subset $\Sigma=\{2e_2 \}$ of the base, with\\
$M_J=\{\begin{pmatrix}
\epsilon & 0 & 0 & 0\\
0 & a & 0 & b\\
0 & 0 & \epsilon & 0\\
0 & c & 0 & d\\
\end{pmatrix} : \begin{pmatrix}
a & b\\
c & d\\
\end{pmatrix} \in SL(2,\mathbb{R}), \epsilon = \pm 1 \}$\\
$A_J=\{ \text{diag}(a,1,a^{-1},1): a \in \mathbb{R}_{> 0}^{\times} \},$ and\\
$N_J = \{ n(x_0,x_1,x_2,0): x_i \in \mathbb{R} \}.$
\end{enumerate}
We will need these parabolics when we compute the Langlands parameters for the $A_{\mathfrak{q}}(\lambda)$'s.
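As a numerical sanity check (not part of the argument), the displayed generators of $N_0$, $M_S$ and $N_S$ can be verified to be symplectic. We assume the standard symplectic form $J=\begin{pmatrix} 0 & I_2 \\ -I_2 & 0 \end{pmatrix}$, which is consistent with the block shapes of $M_S$ and $N_S$ above:

```python
import numpy as np

I2 = np.eye(2)
Z2 = np.zeros((2, 2))
J = np.block([[Z2, I2], [-I2, Z2]])   # assumed symplectic form

def is_symplectic(g):
    return np.allclose(g.T @ J @ g, J)

# an element of N_0, built from the two displayed factors n(x0, x1, x2, x3)
x0, x1, x2, x3 = 0.3, -1.2, 0.5, 2.0
n1 = np.eye(4); n1[0, 2], n1[0, 3], n1[1, 2], n1[1, 3] = x1, x2, x2, x3
n2 = np.eye(4); n2[0, 1], n2[3, 2] = x0, -x0
assert is_symplectic(n1 @ n2)

# an element of M_S: diag(m, t(m)^{-1}) with m in SL^{pm}(2, R)
m = np.array([[2.0, 1.0], [3.0, 2.0]])            # det m = 1
gM = np.block([[m, Z2], [Z2, np.linalg.inv(m).T]])
assert is_symplectic(gM)

# an element of N_S: upper unipotent with symmetric block x
x = np.array([[1.0, 0.7], [0.7, -2.0]])
gN = np.block([[I2, x], [Z2, I2]])
assert is_symplectic(gN)
```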
\subsection{$\theta$-stable parabolic subalgebras of $\mathfrak{sp}(4)$}
We list all the $\theta$-stable parabolic subalgebras $\mathfrak{q}=\mathfrak{l} + \mathfrak{u}$ of $\mathfrak{sp}(4)$ and the possible admissible characters $\lambda: \mathfrak{l} \rightarrow \mathbb{C}$ which can be obtained from a highest weight of $\mathfrak{h}$. We note that a highest weight of $\mathfrak{h}$ can be extended to an admissible character of $\mathfrak{l}$ if and only if $\lambda\vert_{\mathfrak{h}\cap [\mathfrak{l}_0,\mathfrak{l}_0]} = 0$ and $\lambda \vert_{\mathfrak{a}} = 0$, where $\mathfrak{l}_0=\mathfrak{l} \cap \mathfrak{sp}(4,\mathbb{R})$ (see \cite{Ha-Ra}). Along with the $\theta$-stable parabolic subalgebras and their corresponding admissible characters, we will also list some useful data for each $\theta$-stable parabolic subalgebra, which will come in handy when we compute the Langlands parameters. To make the list we use Lemma \ref{Clas-theta-parabolics} and Lemma \ref{char-theta-sta-subalg}.
\begin{enumerate}
\label{theta-stable algebras}
\item $x=0$ corresponding to the partition $2=2.$\\
The $\theta$-stable parabolic subalgebra corresponding to $x$ is:
$$\mathfrak{q}_1 = \mathfrak{sp}_4(\mathbb{C}) + 0$$
The Levi part is: $\mathfrak{l} = \mathfrak{sp}_4(\mathbb{C}).$\\
$\mathfrak{l_0} = \mathfrak{l} \cap \mathfrak{sp}_4(\mathbb{R}) = \mathfrak{sp}_4(\mathbb{R}).$\\
$[\mathfrak{l_0}, \mathfrak{l_0}] = \mathfrak{l_0}.$\\
So $\mathfrak{h} \cap [\mathfrak{l_0},\mathfrak{l_0}] = \mathfrak{h}.$\\
Therefore, $\lambda$ of $\mathfrak{h}$ can be extended to get an admissible character of $\mathfrak{l}$ if and only if $\lambda = 0.$ This $\theta$-stable parabolic subalgebra corresponds to the parabolic subgroup $G.$
\item $x=-Z-2Z'$ corresponding to the partition $2=(0+1)+(0+1)+0.$\\
The $\theta$-stable parabolic subalgebra corresponding to $x$ is:
$$\mathfrak{q}_2 = <Z,Z'> + <N_+,X_-,P_{1-},P_{0-}>$$
The Levi part is: $\mathfrak{l} = <Z,Z'>.$\\
$\mathfrak{l_0} = \mathfrak{l} \cap \mathfrak{sp}_4(\mathbb{R}) = <iZ,iZ'> = \mathfrak{h}.$\\
$[\mathfrak{l_0}, \mathfrak{l_0}] = 0.$\\
So $\mathfrak{h} \cap [\mathfrak{l_0},\mathfrak{l_0}] = 0.$\\
Therefore, any highest weight $\lambda$ of $\mathfrak{h}$ is an admissible character of $\mathfrak{l} = \mathfrak{h}$. This $\theta$-stable parabolic subalgebra corresponds to the parabolic subgroup $B.$
\item $x=2Z-Z'$ corresponding to the partition $2=(1+0)+(0+1)+0.$\\
The $\theta$-stable parabolic subalgebra corresponding to $x$ is:
$$\mathfrak{q}_3 = <Z,Z'> + <N_+,X_+,P_{1+},P_{0-}>$$
The Levi part is: $\mathfrak{l} = <Z,Z'>.$\\
$\mathfrak{l_0} = \mathfrak{l} \cap \mathfrak{sp}_4(\mathbb{R}) = <iZ,iZ'> = \mathfrak{h}.$\\
$[\mathfrak{l_0}, \mathfrak{l_0}] = 0.$\\
So $\mathfrak{h} \cap [\mathfrak{l_0},\mathfrak{l_0}] = 0.$\\
Therefore, any highest weight $\lambda$ of $\mathfrak{h}$ is an admissible character of $\mathfrak{l} = \mathfrak{h}$. This $\theta$-stable parabolic subalgebra corresponds to the parabolic subgroup $B.$
\item $x=2Z'-Z$ corresponding to the partition $2=(0+1)+(1+0)+0.$\\
The $\theta$-stable parabolic subalgebra corresponding to $x$ is:
$$\mathfrak{q}_4 = <Z,Z'> + <N_-,X_-,P_{1+},P_{0+}>$$
The Levi part is: $\mathfrak{l} = <Z,Z'>.$\\
$\mathfrak{l_0} = \mathfrak{l} \cap \mathfrak{sp}_4(\mathbb{R}) = <iZ,iZ'> = \mathfrak{h}.$\\
$[\mathfrak{l_0}, \mathfrak{l_0}] = 0.$\\
So $\mathfrak{h} \cap [\mathfrak{l_0},\mathfrak{l_0}] = 0.$\\
Therefore, any highest weight $\lambda$ of $\mathfrak{h}$ is an admissible character of $\mathfrak{l} = \mathfrak{h}$. This $\theta$-stable parabolic subalgebra corresponds to the parabolic subgroup $B.$
\item $x=2Z+Z'$ corresponding to the partition $2=(1+0)+(1+0)+0.$\\
The $\theta$-stable parabolic subalgebra corresponding to $x$ is:
$$\mathfrak{q}_5 = <Z,Z'> + <N_+,X_+,P_{1+},P_{0+}>$$
The Levi part is: $\mathfrak{l} = <Z,Z'>.$\\
$\mathfrak{l_0} = \mathfrak{l} \cap \mathfrak{sp}_4(\mathbb{R}) = <iZ,iZ'> = \mathfrak{h}.$\\
$[\mathfrak{l_0}, \mathfrak{l_0}] = 0.$\\
So $\mathfrak{h} \cap [\mathfrak{l_0},\mathfrak{l_0}] = 0.$\\
Therefore, any highest weight $\lambda$ of $\mathfrak{h}$ is an admissible character of $\mathfrak{l} = \mathfrak{h}$.
This $\theta$-stable parabolic subalgebra corresponds to the parabolic subgroup $B.$
\item $x=Z+Z'$ corresponding to the partition $2=(2+0)+0.$\\
The $\theta$-stable parabolic subalgebra corresponding to $x$ is:
$$\mathfrak{q}_6 = <Z,Z',N_+,N_-> + <X_+,P_{1+},P_{0+}>$$
Note that $\mathfrak{u} \cap \mathfrak{p} = \mathfrak{u}$ which is also equal to the intersection of the unipotent part of $\mathfrak{q}_5$ and $\mathfrak{p}$. Thus $\mathfrak{q}_6$ is equivalent to $\mathfrak{q}_5$, and the corresponding $A_\mathfrak{q}(\lambda)$'s are isomorphic.
\item $x=-(Z+Z')$ corresponding to the partition $2=(0+2)+0.$\\
The $\theta$-stable parabolic subalgebra corresponding to $x$ is:
$$\mathfrak{q}_7 = <Z,Z',N_+,N_-> + <X_-,P_{1-},P_{0-}>$$
Note that $\mathfrak{u} \cap \mathfrak{p} = \mathfrak{u}$ which is also equal to the intersection of the unipotent part of $\mathfrak{q}_2$ and $\mathfrak{p}$. Thus $\mathfrak{q}_7$ is equivalent to $\mathfrak{q}_2$, and the corresponding $A_\mathfrak{q}(\lambda)$'s are isomorphic.
\item $x=Z$ corresponding to the partition $2=(1+0)+1.$\\
The $\theta$-stable parabolic subalgebra corresponding to $x$ is:
$$\mathfrak{q}_8 = <Z,Z',P_{0+},P_{0-}> + <N_+,X_+,P_{1+}>$$
The Levi part is: $\mathfrak{l} = <Z,Z',P_{0+},P_{0-}>.$\\
$\mathfrak{l_0} = \mathfrak{l} \cap \mathfrak{sp}_4(\mathbb{R}) = <iZ,iZ',P_{0+}+P_{0-},i(P_{0+}-P_{0-})>.$\\
$[\mathfrak{l_0}, \mathfrak{l_0}] = <\mathfrak{h}_2 = iZ'>.$\\
So $\mathfrak{h} \cap [\mathfrak{l_0},\mathfrak{l_0}] = <\mathfrak{h}_2 = iZ'>.$\\
Therefore, a highest weight $\lambda$ of $\mathfrak{h}$ can be extended to get an admissible character of $\mathfrak{l}$ if and only if $\lambda(\mathfrak{h}_2) = 0.$ Therefore, $\lambda$ has the form $(\lambda_1,0)$. This $\theta$-stable parabolic subalgebra corresponds to the parabolic subgroup $P_J.$
\item $x=-Z'$ corresponding to the partition $2=(0+1)+1.$\\
The $\theta$-stable parabolic subalgebra corresponding to $x$ is:
$$\mathfrak{q}_9 = \langle Z,Z',X_{+},X_{-}\rangle + \langle N_+,P_{1-},P_{0-}\rangle$$
The Levi part is: $\mathfrak{l} = \langle Z,Z',X_{+},X_{-}\rangle.$\\
$\mathfrak{l_0} = \mathfrak{l} \cap \mathfrak{sp}_4(\mathbb{R}) = \langle iZ,iZ',X_{+}+X_{-},i(X_{+}-X_{-})\rangle.$\\
$[\mathfrak{l_0}, \mathfrak{l_0}] = \langle iZ, X_{+}+X_{-}, i(X_{+}-X_{-}) \rangle.$\\
So $\mathfrak{h} \cap [\mathfrak{l_0},\mathfrak{l_0}] = \langle \mathfrak{h}_1 = iZ \rangle.$\\
Therefore, a highest weight $\lambda$ of $\mathfrak{h}$ can be extended to get an admissible character of $\mathfrak{l}$ if and only if $\lambda(\mathfrak{h}_1) = 0$, i.e. $\lambda= (0,\lambda_1).$
Note that this integral weight is conjugate under the Weyl group to an integral weight of the form $\lambda = (\lambda_1,0)$. This $\theta$-stable parabolic subalgebra corresponds to the parabolic subgroup $P_J$.
\item $x=Z-Z'$ corresponding to the partition $2=(1+1)+0.$\\
The $\theta$-stable parabolic subalgebra corresponding to $x$ is:
$$\mathfrak{q}_{10} = \langle Z,Z',P_{1+},P_{1-} \rangle + \langle N_+,X_+,P_{0-} \rangle$$
The Levi part is: $\mathfrak{l} = \langle Z,Z',P_{1+},P_{1-} \rangle.$\\
$\mathfrak{l_0} = \mathfrak{l} \cap \mathfrak{sp}_4(\mathbb{R}) = \langle iZ,iZ',P_{1+}+P_{1-},i(P_{1+}-P_{1-}) \rangle.$\\
$[\mathfrak{l_0}, \mathfrak{l_0}] = \langle i(P_{1+}-P_{1-}),-(P_{1+}+P_{1-}), -2i(Z+Z') \rangle.$\\
So $\mathfrak{h} \cap [\mathfrak{l_0},\mathfrak{l_0}] = \langle \mathfrak{h}_1+\mathfrak{h}_2 \rangle.$\\
Therefore, a highest weight $\lambda$ of $\mathfrak{h}$ can be extended to get an admissible character of $\mathfrak{l}$ if and only if $\lambda(\mathfrak{h}_1+\mathfrak{h}_2) = 0$ i.e. $\lambda(\mathfrak{h}_1) = -\lambda(\mathfrak{h}_2)$.
Note that such an integral weight is conjugate to an integral weight of the form $(\lambda_1,\lambda_1)$.
This $\theta$-stable parabolic subalgebra corresponds to the parabolic subgroup $P_S$.
\end{enumerate}
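The correspondence $x \mapsto \mathfrak{q}_x$ of Lemma \ref{char-theta-sta-subalg} can be checked mechanically against the list above. In the following Python sketch we assume the identification of $x=aZ+bZ'$ with coordinates $(a,b)$ and the root labels $N_\pm \leftrightarrow \pm(e_1-e_2)$, $X_\pm \leftrightarrow \pm 2e_1$, $P_{1\pm} \leftrightarrow \pm(e_1+e_2)$, $P_{0\pm} \leftrightarrow \pm 2e_2$; these conventions are inferred from the displayed subalgebras, not stated explicitly in the text.

```python
# roots of sp(4) in (e1, e2) coordinates, labelled as in the enumeration above
ROOTS = {
    'N+': (1, -1), 'N-': (-1, 1), 'X+': (2, 0), 'X-': (-2, 0),
    'P1+': (1, 1), 'P1-': (-1, -1), 'P0+': (0, 2), 'P0-': (0, -2),
}

def q_x(x):
    """Return (Levi roots, nilradical roots) of the theta-stable parabolic q_x."""
    val = lambda a: a[0] * x[0] + a[1] * x[1]
    levi = frozenset(k for k, a in ROOTS.items() if val(a) == 0)
    nil = frozenset(k for k, a in ROOTS.items() if val(a) > 0)
    return levi, nil

# representative x's from the enumeration, in coordinates of (Z, Z')
reps = {1: (0, 0), 2: (-1, -2), 3: (2, -1), 4: (-1, 2), 5: (2, 1),
        6: (1, 1), 7: (-1, -1), 8: (1, 0), 9: (0, -1), 10: (1, -1)}

assert q_x(reps[2])[1] == frozenset({'N+', 'X-', 'P1-', 'P0-'})   # u of q_2
assert q_x(reps[8])[1] == frozenset({'N+', 'X+', 'P1+'})          # u of q_8
assert q_x(reps[10])[0] == frozenset({'P1+', 'P1-'})              # root part of l for q_10
assert len({q_x(v) for v in reps.values()}) == 10                 # the ten are pairwise distinct
```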
We summarize the $\theta$-stable parabolic subalgebras and the relevant data as below:
\begin{center}
\begin{tabular}{|c|c|c|}
\hline
Parabolic & Corresponding & Possible highest\\
subalgebras & Parabolic subgroups & weight $\lambda$ \\
\hline
$\mathfrak{q}_1$ & $G$ & $\lambda = 0$\\
\hline
$\mathfrak{q}_2 \sim \mathfrak{q}_7,\mathfrak{q}_3,\mathfrak{q}_4,\mathfrak{q}_5 \sim \mathfrak{q}_6$ & $B$ & Any $\lambda$ \\
\hline
$\mathfrak{q}_8$ & $P_J$ & $\lambda = (\lambda_1,0)$ \\
\hline
$\mathfrak{q}_9$ & $P_J$ & $\lambda = (\lambda_1,0)$ \\
\hline
$\mathfrak{q}_{10}$ & $P_S$ & $\lambda = (\lambda_1,\lambda_1)$\\
\hline
\end{tabular}
\end{center}
\section{Parabolic subgroups of ${\rm SO}(5,\mathbb{C})$}
For $G=\Sp(4,\mathbb{R})$, we know that ${}^LG^\circ={\rm SO}(5,\mathbb{C})$. Recall that, for a given representation $\pi$ of $G$, the Langlands parameter is a map $\phi(\pi): W_\mathbb{R} \rightarrow {}^LG$, and the image of $W_\mathbb{R}$ under $\phi(\pi)$ is contained in a parabolic subgroup of ${}^LG^\circ$. Hence, we list the parabolic subgroups of ${\rm SO}(5)$. For ${\rm SO}(5,\mathbb{C})$, the bilinear form is chosen to be $J=\text{anti-diag}(1,-1,1,-1,1).$ The maximal torus of ${\rm SO}(5,\mathbb{C})$ then consists of elements of the form $\text{diag}(a,b,1,b^{-1},a^{-1})$. For ${\rm SO}(5,\mathbb{C})$, there are $3$ proper parabolic subgroups containing the Borel, which are enumerated below:
\begin{enumerate}
\item \textbf{Minimal parabolic:} The Borel $B=M_0A_0N_0$, corresponding to the empty subset of the base, with $M_0=\{I_5\},$ $A_0=\{ \text{diag}(a,b,1,b^{-1},a^{-1}): a,b \in \mathbb{C}^{\times} \},$ and \\
$N_0 = \{ \begin{pmatrix}
1 & * & * & * & *\\
0 & 1 & * & * & *\\
0 & 0 & 1 & * & *\\
0 & 0 & 0 & 1 & *\\
0 & 0 & 0 & 0 & 1\\
\end{pmatrix} \} \subset {\rm SO}(5).$
\item \textbf{Siegel parabolic:} The Siegel Parabolic $P_S = M_SA_SN_S$, corresponding to the subset \newline $\Sigma=\{e_1-e_2\}$ of the base, with\\
$M_S=\{\begin{pmatrix}
A & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & A
\end{pmatrix}: A \in SL(2,\mathbb{C}) \},$\\
$A_S=\{ \text{diag}(a,a,1,a^{-1},a^{-1}): a \in \mathbb{C}^{\times} \},$ and\\
$N_S = \{ \begin{pmatrix}
1 & 0 & * & * & *\\
0 & 1 & * & * & *\\
0 & 0 & 1 & * & *\\
0 & 0 & 0 & 1 & 0\\
0 & 0 & 0 & 0 & 1\\
\end{pmatrix} \} \subset {\rm SO}(5).$\\
\item \textbf{Jacobi Parabolic:} The Jacobi parabolic $P_J = M_JA_JN_J$, corresponding to the subset \newline $\Sigma=\{e_2\}$ of the base, with\\
$M_J=\{\begin{pmatrix}
1 & 0 & 0 \\
0 & A & 0\\
0 & 0 & 1\\
\end{pmatrix}: A \in {\rm SO}(3) \} \subset {\rm SO}(5),$\\
$A_J=\{ \text{diag}(a,1,1,1,a^{-1}): a \in \mathbb{C}^{\times} \},$ and\\
$N_J = \{ \begin{pmatrix}
1 & * & * & * & *\\
0 & 1 & 0 & 0 & *\\
0 & 0 & 1 & 0 & *\\
0 & 0 & 0 & 1 & *\\
0 & 0 & 0 & 0 & 1\\
\end{pmatrix} \} \subset {\rm SO}(5).$
\end{enumerate}
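One can likewise check numerically that the displayed Siegel Levi elements preserve the chosen bilinear form $J=\text{anti-diag}(1,-1,1,-1,1)$; the identity ${}^tA^{-1} = J_2 A J_2^{-1}$ for $A \in SL(2)$, with $J_2=\begin{pmatrix}0&1\\-1&0\end{pmatrix}$, is what makes the block matrix $\text{diag}(A,1,A)$ orthogonal here. A small sketch:

```python
import numpy as np

# J = anti-diag(1, -1, 1, -1, 1)
J = np.zeros((5, 5))
for i, v in enumerate([1, -1, 1, -1, 1]):
    J[i, 4 - i] = v

def in_SO5(g):
    return np.allclose(g.T @ J @ g, J) and np.isclose(np.linalg.det(g), 1)

A = np.array([[2.0, 3.0], [1.0, 2.0]])   # det A = 1, so A is in SL(2)
gS = np.zeros((5, 5)); gS[2, 2] = 1
gS[:2, :2] = A; gS[3:, 3:] = A           # Siegel Levi element diag(A, 1, A)
assert in_SO5(gS)

# an element of A_J: diag(a, 1, 1, 1, a^{-1})
a = 2.5
assert in_SO5(np.diag([a, 1, 1, 1, 1 / a]))
```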
We can now compute the Langlands parameters for the $A_\mathfrak{q}(\lambda)$'s and their transfers to representations of ${\rm GL}(5,\mathbb{R})$.
\section[Trivial coefficients]{Cohomological representations with trivial coefficients}
\label{Transfer triv-coeff}
Let $\mathfrak{Q}(\lambda)$ be the set of all non-equivalent $\mathfrak{q}$'s such that $\lambda$ can be extended to an admissible character of $\mathfrak{q}$. For $\lambda=0$, $\mathfrak{Q}(\lambda)$ consists of all $8$ non-equivalent $\theta$-stable parabolic subalgebras listed in Section {\ref{theta-stable algebras}}.
\subsection{Trivial and the Discrete Series representations}
For $\mathfrak{q}_1=\mathfrak{sp}_4(\mathbb{C})$, the representation $A_\mathfrak{q}$ is the trivial representation of $\Sp(4,\mathbb{R})$. This representation is transferred to the trivial representation of ${\rm GL}(5,\mathbb{R})$, which is cohomological with respect to trivial coefficients.\\
From Remark \ref{DS}, we note that $A_\mathfrak{q}$ is a discrete series representation if $\mathfrak{q}$ is one of the following:
\begin{itemize}
\item $\mathfrak{q}_2 = \langle Z,Z' \rangle \oplus \langle N_+,X_-,P_{1-},P_{0-} \rangle,$
\item $\mathfrak{q}_3 = \langle Z,Z' \rangle \oplus \langle N_+,X_+,P_{1+},P_{0-} \rangle,$
\item $\mathfrak{q}_4 = \langle Z,Z' \rangle \oplus \langle N_-,X_-,P_{1+},P_{0+} \rangle,$
\item $\mathfrak{q}_5 = \langle Z,Z' \rangle \oplus \langle N_+,X_+,P_{1+},P_{0+} \rangle.$
\end{itemize}
The transfer of these representations has been dealt with in \cite{Ra-Sa}, where it is shown that their transfers are cohomological with respect to trivial coefficients.
This leaves us with $3$ $\theta$-stable parabolic subalgebras and their corresponding cohomological representations. The remaining parabolic subalgebras are:
\begin{itemize}
\label{non-temp alg}
\item $\mathfrak{q}_8=\langle Z,Z',P_{0+},P_{0-} \rangle \oplus \langle N_+,X_+,P_{1+} \rangle$
\item $\mathfrak{q}_9=\langle Z,Z',X_{+},X_{-} \rangle \oplus \langle N_+,P_{1-},P_{0-} \rangle$
\item $\mathfrak{q}_{10}=\langle Z,Z',P_{1+},P_{1-} \rangle \oplus \langle N_+,X_+,P_{0-} \rangle$
\end{itemize}
We analyze these case by case. The non-tempered representations of $\Sp(4,\mathbb{R})$ which are cohomological with respect to trivial coefficients are listed in Section $2$ of \cite{Oda-Sch}. We include these computations here in some detail.
\subsection{Case of the Jacobi $\theta$-stable subalgebra}
\label{P_J}
We deal with representations of $\Sp(4,\mathbb{R})$ corresponding to the $\theta$-stable subalgebras $\mathfrak{q}_8$ and $\mathfrak{q}_9$.
The parabolic subgroup to which $\mathfrak{q}_8$ corresponds is the Jacobi parabolic $P_J$. Recall that
$\mathfrak{q}_8 = \mathfrak{l} \oplus \mathfrak{u}$, with $\mathfrak{l} = \langle Z,Z',P_{0+}, P_{0-} \rangle$, $\mathfrak{u} = \langle N_+,X_+,P_{1+} \rangle$ and $\lambda = 0$.
We choose a maximally split Cartan subgroup $H$ inside $L$. The Levi $L$ is isomorphic to ${\rm GL}(1,\mathbb{R}) \times {\rm SL}(2,\mathbb{R})$. The Lie algebra corresponding to $H=TA$ is $\langle Z, P_{0+}+P_{0-} \rangle$. Note that the Lie algebras of $T$ and $A$ are generated by $Z$ and $P_{0+}+P_{0-} = \begin{pmatrix}
0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 \\
0 & 0 & 0 & -1 \\
\end{pmatrix}$ respectively.
Now let $\mathrm{Cent}_G(A) = MA.$ Then $M$ is isomorphic to ${\rm SL}(2,\mathbb{R}) \times \{\pm 1\}.$ To compute the Langlands parameter for the representation $A_{\mathfrak{q}_8}$, we need a parabolic subgroup $P=MAN$ of $G = \Sp(4,\mathbb{R})$, a discrete series representation of $M$ and a character $\nu$ of $\mathfrak{a}$. For the parabolic, choose any parabolic subgroup of $G$ which has Levi factor $MA$; the Jacobi parabolic $P_J$ is one such subgroup. It corresponds to the subset $\Sigma=\{ 2e_2 \}$ of the base. Thus, the representation $A_{\mathfrak{q}_8}$ is obtained as the Langlands quotient of a representation which is induced from the Jacobi parabolic $P_J$.
The character on $\mathfrak{a}$ is obtained by restricting $\rho_L$ to $\mathfrak{a}$. Hence $$\nu = \rho_L\vert_{\mathfrak{a}} = \frac{1}{2}(0,2) = (0,1).$$
Now for the discrete series representation of $M$: The Harish-Chandra parameter for the representation of $M^{\circ} = {\rm SL}(2,\mathbb{R})$, the connected component of $M$, is given by $\rho(u) + \rho(\mathfrak{m} \cap \mathfrak{l})$ where $\rho$ is computed with respect to $\mathfrak{t}$. Observe that, $M \cap L = \{\pm 1 \}$ which implies that $\rho(\mathfrak{m} \cap \mathfrak{l}) = 0.$ We have $\mathfrak{u} = \langle N_+ , X_+ , P_{1+}\rangle.$ Thus $$\rho(u) = \frac{1}{2}((1+2+1),0) = (2,0).$$
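The value $\rho(u)=(2,0)$ is just the half-sum of the three roots spanning $\mathfrak{u}$, restricted to $\mathfrak{t}$; a quick check, with the root coordinates as in the enumeration of $\theta$-stable subalgebras (a convention we read off from the displayed subalgebras):

```python
from fractions import Fraction

# roots spanning u for q_8 (namely N+, X+, P1+), in (e1, e2) coordinates
u_roots = [(1, -1), (2, 0), (1, 1)]

# half-sum of the roots of u
rho_u = tuple(Fraction(sum(c), 2) for c in zip(*u_roots))
assert rho_u == (2, 0)   # matches rho(u) = (2, 0) above
```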
The only question that remains is whether the representation on $\{\pm 1 \} \subset M$ is the trivial one or the sign character. We compute this as follows:
The Lie algebra of $M$ is $\mathfrak{m} = \langle Z,P_{0+}+P_{0-},X_+,X_- \rangle.$ Then $M \cap K = \{\pm 1 \} \times {\rm SO}(2).$
The discrete series representation on $M \cap K$ is the representation with highest weight $2\rho(\mathfrak{u} \cap \mathfrak{p})\vert_{\mathfrak{t}}$, i.e. the weight of $\wedge^{\dim(\mathfrak{u} \cap \mathfrak{p})}(\mathfrak{u} \cap \mathfrak{p})$ restricted to $\mathfrak{t}$, which in this case is $(2+1)=3.$ Thus the character on $\{\pm 1\}$ is given by $\epsilon : -1 \mapsto -1.$ Note that this computation gives us the discrete series representation on the $\{\pm 1 \}$ as well as on the ${\rm SL}(2,\mathbb{R})$ factor of the Levi part.
Now we compute the Langlands parameter for $A_{\mathfrak{q}_8}.$ Note that since the representation $A_{\mathfrak{q}_8}$ is induced from the parabolic $P_J$, the image of $W_\mathbb{R}$ must lie inside the corresponding parabolic subgroup $P_J$ of ${\rm SO}(5,\mathbb{C})$.
The transfer of $A_{\mathfrak{q}_8}$ to ${\rm GL}(5,\mathbb{R})$ is the Langlands quotient of the following induced representation:
$${\rm Ind}_{P}^G(D_4 \otimes \chi_1 \epsilon \otimes \chi_{-1} \epsilon \otimes \epsilon)$$
where $P$ is the $(2,1,1,1)$ parabolic subgroup of ${\rm GL}(5,\mathbb{R})$ and $\chi_n(x) = x^n,$ since the Langlands parameter for $A_{\mathfrak{q}_8}$ is given by
$$z \mapsto \begin{pmatrix}
(z\bar{z}) & 0 & 0 & 0 & 0 \\
0 & (\frac{z}{\bar{z}})^2 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & (z\bar{z})^{-1} & 0 \\
0 & 0 & 0 & 0 & (\frac{z}{\bar{z}})^{-2}
\end{pmatrix}; \hskip 5mm j \mapsto \begin{pmatrix}
-1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 \\
0 & 0 & -1 & 0 & 0 \\
0 & 0 & 0 & -1 & 0 \\
0 & 1 & 0 & 0 & 0
\end{pmatrix}.$$
Observe that the Langlands quotient of ${\rm Ind}_{P}^G(D_4 \otimes \chi_1 \epsilon \otimes \chi_{-1} \epsilon \otimes \epsilon)$ is isomorphic to \newline ${\rm Ind}_P^G(D_4 \otimes \epsilon)$ where $P$ is the $(2,3)$ parabolic of ${\rm GL}(5,\mathbb{R})$, and $\epsilon$ is the sign representation of ${\rm GL}(3,\mathbb{R})$. This follows from the fact that for the Borel $B$ of ${\rm GL}(3,\mathbb{R})$, the Langlands quotient of ${\rm Ind}_B^{{\rm GL}(3,\mathbb{R})}(|\cdot| \otimes \epsilon \otimes |\cdot|^{-1})$, is $\epsilon$. We further note that
${\rm Ind}_P^G(D_4 \otimes \epsilon) \cong {\rm Ind}_P^G(D_4 \otimes 1) \otimes \epsilon$.
Thus, this is the twist of a unitary representation by the sign character and is therefore itself unitary. Hence we can appeal to Speh's classification to determine whether the above representation is cohomological or not.
Since the transferred representation is induced from the $(2,1,1,1)$ parabolic and we only have one factor of ${\rm GL}(2,\mathbb{R})$ in the inducing data, we consider the representation corresponding to the partition $5 = 3 + 2$ of ${\rm GL}(5,\mathbb{R})$ in terms of Speh's classification \cite{Sp2}.
For the partition $n = 5 = 3 + 2$, we have $n_0 = 3 , n_1 = 2, m_1 = 1$. The cohomological representation corresponding to this partition is obtained as a Langlands quotient of a representation induced from the $(2,1,1,1)$ parabolic. The discrete series representation on the ${\rm GL}(2,\mathbb{R})$ factor of the Levi is $D_k$ with $k = n - \sum\limits_{i \geq 2}n_i - m_1 = 4$, since $n_i = m_i = 0$ for $i > 1.$ Thus the representation occurring in Speh's classification is ${\rm Ind}_P^G(D_4 \otimes 1)$, whereas the transferred representation obtained from $A_{\mathfrak{q}_8}$ is its twist by $\epsilon$, which does not occur in the classification of Speh. Hence the transfer of $A_{\mathfrak{q}_8}$ is not a cohomological representation of ${\rm GL}(5,\mathbb{R})$.
A similar computation for the parabolic subalgebra $\mathfrak{q}_9$ shows that the representations $A_{\mathfrak{q}_8}$ and $A_{\mathfrak{q}_9}$ transfer to the same representation of ${\rm GL}(5,\mathbb{R})$. Thus, the transfer of $A_{\mathfrak{q}_8}$ and $A_{\mathfrak{q}_9}$ to representations of ${\rm GL}(5,\mathbb{R})$ are not cohomological.
\subsection{Case of Siegel $\theta$-stable parabolic subalgebra}
The last case left is the case when the $\theta$-stable parabolic subalgebra is $\mathfrak{q}_{10} = \mathfrak{l} \oplus \mathfrak{u},$ with $\mathfrak{l} = \langle Z,Z',P_{1+},P_{1-} \rangle$ and $\mathfrak{u} = \langle N_+, X_+,P_{0-} \rangle$. Let $\lambda = 0$.
We choose a maximally split Cartan subgroup $H$ inside $L$. Then $L$ is isomorphic to ${\rm GL}(1,\mathbb{R}) \times {\rm SL}(2,\mathbb{R})$. The Lie algebra corresponding to $H=TA$ is $\langle Z-Z', P_{1+}+P_{1-} \rangle$. Note that the Lie algebras of $T$ and $A$ are generated by $Z-Z'$ and $P_{1+}+P_{1-} = \begin{pmatrix}
0 & 1 & 0 & 0 \\
1 & 0 & 0 & 0 \\
0 & 0 & 0 & -1 \\
0 & 0 & -1 & 0 \\
\end{pmatrix}$ respectively.
Now let $\mathrm{Cent}_G(A) = MA.$ Then $M$ is isomorphic to ${\rm SL}(2,\mathbb{R}) \times \{\pm 1\}.$
To compute the Langlands parameter for the representation $A_{\mathfrak{q}_{10}}$, we need a parabolic subgroup $P=MAN$ of $G = \Sp(4,\mathbb{R})$, a discrete series representation on $M$ and a character $\nu$ of $\mathfrak{a}$. For the parabolic, choose any parabolic subgroup of $G$ which has Levi factor $MA$. The Siegel parabolic $P_S$ is such a parabolic. This parabolic subgroup corresponds to the subset $\Sigma=\{e_1 - e_2 \}$ of the base. The representation $A_{\mathfrak{q}_{10}}$ is obtained as the Langlands quotient of a representation induced from the Siegel parabolic. Now we compute the other two parameters. The character on $\mathfrak{a}$ is obtained by restricting $\rho_L$ to $\mathfrak{a}$. Thus
$$\nu = \rho_L\vert_{\mathfrak{a}} = \frac{1}{2}(2,2) = (1,1).$$
Now for the discrete series representation of $M$: The Harish-Chandra parameter for the representation of $M^{\circ} = {\rm SL}(2,\mathbb{R})$, the connected component of $M$, is given by $\rho(u) + \rho(\mathfrak{m} \cap \mathfrak{l})$ where $\rho$ is computed with respect to $\mathfrak{t}$. Observe that, $M \cap L = \{\pm 1 \}$ which implies that $\rho(\mathfrak{m} \cap \mathfrak{l}) = 0.$ We have $\mathfrak{u} = \langle N_+ , X_+ , P_{0-}\rangle.$ Thus $$\rho(u) = \frac{1}{2}(2+2+2,2+2+2) = (3,3).$$
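Here $\rho(u)$ is computed by restricting to the compact direction $Z-Z'$: each of the three roots of $\mathfrak{u}$ evaluates to $2$ there, and the half-sum gives the Harish-Chandra parameter $3$ of $D_3$ appearing below. A quick check, with the same (assumed) coordinate conventions as before:

```python
from fractions import Fraction

# roots spanning u for q_10 (namely N+, X+, P0-), in (e1, e2) coordinates
u_roots = [(1, -1), (2, 0), (0, -2)]

# each root evaluated on the direction t = R(Z - Z'), i.e. on (1, -1)
vals = [a - b for a, b in u_roots]
assert vals == [2, 2, 2]
assert Fraction(sum(vals), 2) == 3   # half-sum: the parameter of D_3
```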
The only question that remains is whether the representation on $\{\pm 1 \} \subset M$ is the trivial one or the sign character. We compute this as follows:
Note that $M \cap K = \{\pm 1 \} \times {\rm SO}(2).$
The discrete series representation on $M \cap K$ is the representation with highest weight $2\rho(\mathfrak{u} \cap \mathfrak{p})\vert_{\mathfrak{t}}$, i.e. the weight of $\wedge^{\dim(\mathfrak{u} \cap \mathfrak{p})}(\mathfrak{u} \cap \mathfrak{p})$ restricted to $\mathfrak{t}$, which in this case is $(2+2)=4.$ Thus the character on $\{\pm 1\}$ is the trivial character. Now we compute the Langlands parameter for $A_{\mathfrak{q}_{10}}$, the Langlands quotient of ${\rm Ind}_{P_S}^G(D_3|det|^{\frac{1}{2}}).$
Since the representation $A_{\mathfrak{q}_{10}}$ is induced from the parabolic $P_S$, the image of $W_\mathbb{R}$ must lie inside the Siegel parabolic $P_S$ of ${\rm SO}(5,\mathbb{C})$, corresponding to the subset $\Sigma=\{e_1-e_2\}$ of the base.
The Langlands parameter for $A_{\mathfrak{q}_{10}}$ is given by
$$z \mapsto \text{diag}(z^2\bar{z}^{-1},z^{-1}\bar{z}^{2},1,z\bar{z}^{-2},z^{-2}\bar{z});$$ and $$ j \mapsto \begin{pmatrix}
0 & -1 & 0 & 0 & 0 \\
1 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 & -1 \\
0 & 0 & 0 & 1 & 0
\end{pmatrix}.$$
Thus the transfer of $A_{\mathfrak{q}_{10}}$ to ${\rm GL}(5,\mathbb{R})$ is the Langlands quotient of the following induced representation:
$${\rm Ind}_{P}^G(D_3|det|^{\frac{1}{2}} \otimes 1 \otimes D_3|det|^{-\frac{1}{2}})$$
where $P$ is the $(2,1,2)$ parabolic subgroup of ${\rm GL}(5,\mathbb{R})$. We need to analyze whether this representation occurs in Speh's classification of unitary irreducible cohomological representations of ${\rm GL}(5,\mathbb{R})$.
We consider the partition $5=1+4$. Using the notation from Section \ref{Speh}, we have $n_0=1, n_1=4$ and $m_1=2$. The representation occurring in Speh's classification corresponding to this partition is ${\rm Ind}_{(1,4)}^{{\rm GL}(5,\mathbb{R})}(1 \otimes I(3))$.
Now we must compute the corresponding Langlands data for this representation. Appealing to Section \ref{Speh}, we note that for the character $\chi(3)$ on $T_2^0$ given by $\chi(3)(e^{i\theta_1},e^{i\theta_2})=e^{3i(\theta_1+\theta_2)}$ and $\chi(3)|_{A^2}= \exp(\frac{\rho_2}{2})=\frac{a_1}{a_2}$, $$I(3)=J(\chi(3)).$$
But as a representation of ${\rm GL}(4,\mathbb{R})$, $J(\chi(3))= {\rm Ind}(D_3|det|^{\frac{1}{2}} \otimes D_3|det|^{-\frac{1}{2}}).$ Thus we note that ${\rm Ind}_{(1,4)}^{{\rm GL}(5,\mathbb{R})}(1 \otimes I(3)) = {\rm Ind}_{P}^G(D_3|det|^{\frac{1}{2}} \otimes 1 \otimes D_3|det|^{-\frac{1}{2}})$. Hence, the transfer of $A_{\mathfrak{q}_{10}}$ occurs in the classification of Speh and is hence cohomological.
\subsection{Summary}
\label{Summary Triv-coeff}
Thus, to summarize we have:
\begin{thm}\label{sp4-main-result-triv-coeff}
Let $\pi$ be an irreducible unitary representation of $\Sp(4,\mathbb{R})$ such that $\pi$ has non-vanishing cohomology with trivial coefficients. Let $\iota(\pi)$ denote the transferred representation of $\pi$ to ${\rm GL}(5,\mathbb{R})$. Then $\iota(\pi)$ is cohomological with trivial coefficients if $\pi$ is one of the following:
\begin{enumerate}
\item $\pi$ is the trivial representation,
\item $\pi$ is a discrete series representation of $\Sp(4,\mathbb{R})$,
\item $\pi$ is induced from the Siegel parabolic.
\end{enumerate}
\end{thm}
\section[Non-trivial coefficients]{Cohomological representations with non-trivial coefficients}
\label{Transfer non-triv coeff}
In this section, we will let $\lambda=(\lambda_1,\lambda_2), \ \lambda_1 \geq \lambda_2 \geq 0$ be a non-zero highest weight of $\Sp(4,\mathbb{R})$. We split the analysis in the following cases:
\begin{itemize}
\item $\lambda_1 = \lambda_2 \neq 0$,
\item $\lambda_1 > \lambda_2 \neq 0$,
\item $\lambda_2 = 0$.
\end{itemize}
\subsection{$\lambda = (\lambda_1,\lambda_1)$, with
$\lambda \neq 0$}
In this case, we note that the $\theta$-stable parabolic subalgebras which are relevant are $\mathfrak{q}_2 \sim \mathfrak{q}_7,\mathfrak{q}_3,\mathfrak{q}_4, \mathfrak{q}_5 \sim \mathfrak{q}_6$ and $\mathfrak{q}_{10}$. Note that $A_{\mathfrak{q}_i}$ for $2 \leq i \leq 7$ are discrete series representations. Thus, from \cite{Ra-Sa}, we know that these transfer to cohomological representations of ${\rm GL}(5,\mathbb{R})$.
The remaining representations are those corresponding to the parabolic $\mathfrak{q}_{10}$. As we have already seen, these representations are obtained as the Langlands quotient of a representation which is induced from the Siegel parabolic of $G = \Sp(4,\mathbb{R})$. We compute the Langlands parameters for the representations $A_{\mathfrak{q}_{10}}(\lambda)$ as before. We note that the discrete series representation on $M_S$ is $D_{2\lambda_1 + 3}$. The character on $\mathfrak{a}$ does not change and is still given by $$\nu = \rho_L\vert_{\mathfrak{a}} = \frac{1}{2}(2,2) = (1,1).$$
Thus, the representation $A_{\mathfrak{q}_{10}}(\lambda)$ is the irreducible Langlands quotient of the induced representation ${\rm Ind}_{P_S}^G(D_{2\lambda_1+3}|det|^{\frac{1}{2}}).$
We note that the Langlands parameter of $A_{\mathfrak{q}_{10}}(\lambda)$ is given by
$$z \mapsto \text{diag}(z^{\lambda_1+2}\bar{z}^{-\lambda_1-1},z^{-\lambda_1-1}\bar{z}^{\lambda_1+2},1,z^{\lambda_1+1}\bar{z}^{-\lambda_1-2},z^{-\lambda_1-2}\bar{z}^{\lambda_1+1})$$ and $$ j \mapsto \begin{pmatrix}
0 & (-1)^{2\lambda_1+3} & 0 & 0 & 0 \\
1 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 & (-1)^{2\lambda_1+3} \\
0 & 0 & 0 & 1 & 0
\end{pmatrix}.$$
\noindent
Thus, the transfer of $A_{\mathfrak{q}_{10}}(\lambda)$ to ${\rm GL}(5,\mathbb{R})$ is obtained as a Langlands quotient of $${\rm Ind}_{(2,1,2)}^{{\rm GL}(5,\mathbb{R})}(D_{2\lambda_1+3}|det|^{\frac{1}{2}}\otimes 1 \otimes D_{2\lambda_1+3}|det|^{-\frac{1}{2}}).$$
The question whether this representation of ${\rm GL}(5,\mathbb{R})$ is cohomological does not seem to have an easy answer, since the main ingredient, Speh's classification of cohomological representations of ${\rm GL}(n,\mathbb{R})$, is not available for non-trivial coefficients.
The expectation is that this representation is cohomological.
\subsection{$\lambda = (\lambda_1,\lambda_2)$, with $\lambda_2 \neq 0$ and $\lambda_1 > \lambda_2$}
In this case, we note that the $\theta$-stable parabolic subalgebras which are relevant are $\mathfrak{q}_2 \sim \mathfrak{q}_7,\mathfrak{q}_3,\mathfrak{q}_4$ and $\mathfrak{q}_5 \sim \mathfrak{q}_6$. For these subalgebras the Levi parts $\mathfrak{l}$ are contained in $\mathfrak{k}$, and hence the representations $A_\mathfrak{q}$ are discrete series representations. From \cite{Ra-Sa}, we know that the transfer of these representations is cohomological.
Thus we have:
\begin{prop}
Let $\lambda=(\lambda_1,\lambda_2)$, $\lambda_1 > \lambda_2 \neq 0$. Then the transfer of $A_{\mathfrak{q}}(\lambda)$ to ${\rm GL}(5,\mathbb{R})$ is cohomological.
\end{prop}
\subsection{$\lambda = (\lambda_1,0)$}
The $\theta$-stable parabolic subalgebras which are relevant are $\mathfrak{q}_2 \sim \mathfrak{q}_7,\mathfrak{q}_3,\mathfrak{q}_4, \mathfrak{q}_5 \sim \mathfrak{q}_6, \mathfrak{q}_8$ and $\mathfrak{q}_9$. Out of these $6$ $\theta$-stable parabolic subalgebras, $\mathfrak{q}_2 \sim \mathfrak{q}_7,\mathfrak{q}_3,\mathfrak{q}_4$ and $\mathfrak{q}_5 \sim \mathfrak{q}_6$ correspond to the discrete series representations and we know that these transfer to cohomological representations of ${\rm GL}(5,\mathbb{R})$ from \cite{Ra-Sa}.
This leaves us with the representations $A_{\mathfrak{q}_8}(\lambda)$, $A_{\mathfrak{q}_9}(\lambda)$. Note that for the $\theta$-stable parabolic subalgebra $\mathfrak{q}_8$, $\lambda \vert_\mathfrak{t} = \lambda_1$ and $\lambda \vert_{\mathfrak{a}} = 0.$ These observations along with the calculations in Section \ref{P_J} imply that the Langlands parameter for the representation $A_{\mathfrak{q}_8}(\lambda)$ is given by:
$$z \mapsto \text{diag}(z\bar{z},(\frac{z}{\bar{z}})^{\frac{\lambda_1+2}{2}},1,(z\bar{z})^{-1},(\frac{z}{\bar{z}})^{-\frac{\lambda_1+2}{2}})$$
$$j \mapsto \begin{pmatrix}
-1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & (-1)^{\lambda_1+2} \\
0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & -1 & 0 \\
0 & 1 & 0 & 0 & 0
\end{pmatrix}.$$
Hence, we observe that the transfer of $A_{\mathfrak{q}_8}(\lambda)$ is obtained by taking the Langlands quotient of:
$${\rm Ind}_{P}^G(D_{\lambda_1+2} \otimes \chi_1 \epsilon \otimes \chi_{-1} \epsilon \otimes \epsilon),$$
where $P$ is the $(2,1,1,1)$-parabolic subgroup of ${\rm GL}(5,\mathbb{R})$, $\chi_n(x)=x^n$ and $\epsilon$ is the sign character on $\mathbb{R}^\times$.
A similar calculation as above shows that the transfer of $A_{\mathfrak{q}_9}(\lambda)$ is also the Langlands quotient of
$${\rm Ind}_{P}^G(D_{\lambda_1+2} \otimes \chi_1 \epsilon \otimes \chi_{-1} \epsilon \otimes \epsilon),$$ where $P$ is as above.
The question of whether this representation of ${\rm GL}(5,\mathbb{R})$ is cohomological does not seem to have an easy answer, since the main ingredient, namely Speh's classification of cohomological representations of ${\rm GL}(n,\mathbb{R})$ with non-trivial coefficients, is not available at the moment.
The expectation is that this representation is not cohomological.
\subsection{Summary}
\label{Summary non-triv coeff}
Finally, we summarize the results in tabular form. The table completely answers which unitary, irreducible cohomological representations of $G=\Sp(4,\mathbb{R})$ transfer to cohomological representations of ${\rm GL}(5,\mathbb{R})$ in the $\lambda=0$ case. In the non-trivial coefficients case, the question seems difficult at the moment, since no analogue of Speh's classification for cohomological representations with non-trivial coefficients seems to exist. One hopes to prove such a result and thereby obtain a complete answer for the case $G=\Sp(4,\mathbb{R}).$
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
Representation & Corresponding & $\lambda$ & Transfer \\ & Parabolic & & cohomological or not \\
\hline
$A_{\mathfrak{q}_1}$ & $B$ & $\lambda = 0$ & Cohomological\\
\hline
$A_{\mathfrak{q}_2} \cong A_{\mathfrak{q}_7},A_{\mathfrak{q}_3}$ & $B$ & Any $\lambda$ & Cohomological \\
$A_{\mathfrak{q}_4},A_{\mathfrak{q}_5} \cong A_{\mathfrak{q}_6}$ & & & \\
\hline
$A_{\mathfrak{q}_8}$ & $P_J$ & $\lambda=0$ & Not Cohomological \\
& & $\lambda=(\lambda_1,0)$ & Expected not to be cohomological \\
\hline
$A_{\mathfrak{q}_9}$ & $P_J$ & $\lambda=0$ & Not Cohomological \\
& & $\lambda=(\lambda_1,0)$ & Expected not to be cohomological \\
\hline
$A_{\mathfrak{q}_{10}}$ & $P_S$ & $\lambda = 0$ & Cohomological\\
& & $\lambda=(\lambda_1,\lambda_1)$ & Expected to be cohomological\\
\hline
\end{tabular}
\end{center}
\newpage
Considering the observations made above, we make the following conjecture:
\begin{conjecture}
Let $\pi$ be an irreducible unitary representation of $\Sp(4,\mathbb{R})$ such that $\pi$ has non-vanishing cohomology. Let $\iota(\pi)$ denote the transferred representation of $\pi$ to ${\rm GL}(5,\mathbb{R})$. Then $\iota(\pi)$ is cohomological if $\pi$ is one of the following:
\begin{enumerate}
\item $\pi$ is the trivial representation,
\item $\pi$ is a discrete series representation of $\Sp(4,\mathbb{R})$,
\item $\pi$ is induced from the Siegel parabolic.
\end{enumerate}
Further, if $\pi$ is cohomological with respect to the finite dimensional representation $M_\lambda$, then $\iota(\pi)$ is cohomological with respect to $\iota(M_\lambda)$.
\end{conjecture}
\bigskip
\bigskip
\section{Introduction}
\setcounter{equation}{0}
\label{sec:1}
We consider a magnetic field acting in the vertical direction and
only depending on time $t\geq 0$ and the first two components $\bx=(x_1,x_2)\in \RR^2$,
that is,
$$
\bB(t,\bx) \,\,=\,\, \frac{1}{\varepsilon } \, \left( \begin{array}{l}0\\0\\b(t,\bx)\end{array}\right),
$$
with $b(t,\bx)>0$, where $\varepsilon>0$ is a small parameter related to the ratio between the reciprocal Larmor frequency and the advection time scale. This magnetic field is
applied to confine a plasma constituted of a large number of charged particles, modelled by a distribution function $f^\varepsilon $ solution to the Vlasov equation coupled with the Poisson equation satisfied by the self-consistent electrical potential $\phi^\varepsilon $. Here we focus on the long time behavior of the plasma in a plane orthogonal to the external magnetic field. Therefore we consider the following form of the two dimensional Vlasov-Poisson system with an external strong magnetic field
\begin{equation}
\label{eq:vlasov2d}
\left\{
\begin{array}{l}
\displaystyle{\varepsilon\frac{\partial f^\varepsilon }{\partial t}\,+\,\mathbf{v}\cdot\Gradxp f^\varepsilon \,+\,\left(
\mathbf{E}^\varepsilon (t,\bx) \,-\, b(t,\bx)\frac{\bv^\perp}{\varepsilon } \right)\cdot\Gradvp f^\varepsilon
\,=\, 0,}
\\
\,
\\
\displaystyle{\bE^\varepsilon = - \Gradxp \phi^\varepsilon , \quad -\Delta_{\bx} \phi^\varepsilon =
\rho^\varepsilon ,\quad \rho^\varepsilon =\int_{\RR^2} f^\varepsilon d\bv,}
\end{array}\right.
\end{equation}
where we use the notation $\bv^\perp=(-v_2,v_1)$ and, for simplicity, set all immaterial physical constants to one. The factor $\varepsilon$ in
front of the time derivative of $f^\varepsilon $ reflects the fact that we want to follow
the solution over times long enough to observe non-trivial averaged evolutions.
We want to construct numerical solutions to the Vlasov-Poisson system \eqref{eq:vlasov2d} by particle methods (see \cite{birdsall}), which consist in
approximating the distribution function by a finite number of
macro-particles. The trajectories of these particles are determined from
the characteristic curves corresponding to the Vlasov equation
\begin{equation}
\label{traj:00}
\left\{
\begin{array}{l}
\ds{\varepsilon\frac{\dd\bX^\varepsilon }{\dd t} \,=\, \bV^\varepsilon ,}
\\
\,
\\
\ds{\varepsilon\frac{\dd\bV^\varepsilon }{\dd t} \,=\, -\frac{1}{\varepsilon} \,{b}(t,\bX^\varepsilon )\,(\bV^\varepsilon )^{\perp} \,+\, \bE^\varepsilon (t,\bX^\varepsilon ), }
\\
\,
\\
\bX^\varepsilon (t^0) = \bx^0, \, \bV^\varepsilon (t^0) = \bv^0,
\end{array}\right.
\end{equation}
where we use the conservation of $f^\varepsilon $ along the characteristic curves
$$
f^\varepsilon (t, \bX^\varepsilon (t),\bV^\varepsilon (t)) = f^\varepsilon (t^0, \bx^0, \bv^0)
$$
and the electric field is computed from a discretization of the Poisson equation
in (\ref{eq:vlasov2d}) on a mesh of the physical space.
Before describing and analyzing a class of numerical methods for the
Vlasov-Poisson system \eqref{eq:vlasov2d} in the presence of a strong external inhomogeneous magnetic field, we first briefly expound what may be expected from the continuous model in the limit $\varepsilon \to0$.
\subsection{Asymptotics with external electromagnetic fields}\label{s:external-asymptotics}
To gain some intuition, we first discuss the case when all electromagnetic fields $(\bE, b)$ are known. Conclusions of the present section may actually be completely justified analytically. Yet we do not pursue this line of exposition here, to keep the discussion as brief and tight as possible.
Explicitly, in the present section we consider given (independent of $\varepsilon $) smooth electromagnetic fields $(\bE, b)$
and we
assume that $b$ is bounded below away from zero, that is, we assume that there exists $b_0>0$ such that
\begin{equation}
\label{hyp:1}
b(t,\bx) \geq b_0 \quad \forall \,(t,\bx)\in \RR^+\times\RR^2.
\end{equation}
In the limit $\varepsilon \to0$ one expects oscillations occurring on typical time scales $O(1/\varepsilon ^2)$ to coexist with a slow dynamics evolving on a time scale $O(1)$. We sketch now how to identify a closed system describing at main order the slow evolution. To begin with, note that from the second line of system~\eqref{traj:00} it does follow that $\bV^\varepsilon $ oscillates at order $1/\varepsilon ^2$ thus remains bounded and converges weakly\footnote{Though we do not want to be too precise here, let us mention that in the present discussion \emph{weakly} and \emph{strongly} refer to the weak-* and strong topologies of $L^\infty$ and that the weak convergences that we encounter actually correspond to strong convergence in $W^{-1,\infty}$.} to zero. As we detail below, one may also combine both lines of the system to obtain
\begin{equation}\label{res:0bis}
\frac{\dd}{\dd t}\left( \bX^\varepsilon - \varepsilon \,\frac{(\bV^\varepsilon )^\perp}{b(t,\bX^\varepsilon )}\right) \,=\, \ \bF(t,\bX^\varepsilon )
\,-\,(\bV^\varepsilon )^\perp\left[
\,\dd_\bx \left(\frac1b\right)(t,\bX^\varepsilon )(\bV^\varepsilon )
\,+\,\varepsilon \frac{\d}{\d t}\left(\frac1b\right)(t,\bX^\varepsilon ) \,\right]
\end{equation}
where the force field $\bF$ is the classical electric drift, given by
$$
\bF(t,\bx) \,:=\, -\frac{1}{b(t,\bx)}\, \bE^{\perp}(t,\bx),\quad \forall \,(t,\bx)\in \RR^+\times\RR^2\,.
$$
Indeed
\begin{eqnarray*}
\,\frac{\dd}{\dd t} \bX^\varepsilon &=&
\frac1\varepsilon \bV^\varepsilon
\,=\,
\frac{\varepsilon }{b(t,\bX^\varepsilon )}\left(\frac{\dd\bV^\varepsilon }{\dd t}\right)^\perp+\,\bF(t,\bX^\varepsilon )\\
&=& \varepsilon \frac{\dd}{\dd t} \left(\frac{(\bV^\varepsilon )^\perp}{b(t,\bX^\varepsilon )}\right)+\,\bF(t,\bX^\varepsilon )
-\,\varepsilon (\bV^\varepsilon )^\perp\left(\frac{\d}{\d t}\left(\frac1b\right)(t,\bX^\varepsilon ) + \dd_\bx \left(\frac1b\right)(t,\bX^\varepsilon ) \,\left(\frac{\dd\bX^\varepsilon }{\dd t}\right)\right)\\
&=& \varepsilon \frac{\dd}{\dd t} \left(\frac{(\bV^\varepsilon )^\perp}{b(t,\bX^\varepsilon )}\right)+\,\bF(t,\bX^\varepsilon )
-\,(\bV^\varepsilon )^\perp\left(\varepsilon \frac{\d}{\d t}\left(\frac1b\right)(t,\bX^\varepsilon ) + \dd_\bx \left(\frac1b\right)(t,\bX^\varepsilon ) \,(\bV^\varepsilon )\right).
\end{eqnarray*}
This shows that $\bX^\varepsilon $ evolves slowly but, as such, does not provide a closed asymptotic evolution in the limit $\varepsilon \to0$. Indeed, except in the case when $b$ is constant, dealt with in \cite{FR16}, we also need to know what happens to expressions that are quadratic in $\bV^\varepsilon $, and this does not follow readily from the weak convergence of $\bV^\varepsilon $.
\begin{remark}\label{rk:homogeneous}
Incidentally note that, in contrast, when $b$ is constant, as in \cite{FR16}, the formulation of equation~\eqref{res:0bis} is already sufficient to identify a guiding center evolution and obtain that $\bX^\varepsilon $ converges to $\bY$ solving
\begin{equation}\label{gc:-1}
\frac{\dd\bY}{\dd t} \,=\, \bF(t,\bY)\,.
\end{equation}
\end{remark}
The missing piece of information may be intuited from
\begin{lemma}
\label{lmm:01}
Consider $\bX^\varepsilon =(x_1^\varepsilon ,x_2^\varepsilon )$ and
$\bV^\varepsilon =(v_1^\varepsilon ,v_2^\varepsilon )$ solving \eqref{traj:00}. Then we have
\begin{equation}
\frac{\thdd}{\thdd t} \left( \,\frac{1}{2}\,\|\bV^\varepsilon \|^2 \,-\, \varepsilon \,\bF(t,\bX^\varepsilon )\cdot
\bV^\varepsilon \,\right) \,\,=\, \, -\bV^\varepsilon \,\cdot\,\thdd_\bx \bF(t,\bX^\varepsilon )\,(\bV^\varepsilon )
\,\,-\,\, \varepsilon \,\bV^\varepsilon \cdot\frac{\d \bF}{\d t}(t,\bX^\varepsilon ),
\label{eq:modulus}
\end{equation}
and with $\bE=(E_1,E_2)$
\begin{equation}
\label{eq:VV}
\left\{
\begin{array}{l}
\ds\frac{1}{2}\,\frac{\thdd}{\thdd t} \left( \,|v^\varepsilon _1|^{2} - |v^\varepsilon _2 |^{ 2}\,\right)
\,=\,
\frac{b(t,\bX^\varepsilon )}{\varepsilon ^2} \,2v^\varepsilon _1 \,v^\varepsilon _2
\,+\,\frac{v^\varepsilon _1\,E_1(t,\bX^\varepsilon ) - v^\varepsilon _2\,E_2(t,\bX^\varepsilon )}{\varepsilon },
\\
\,
\\
\ds\frac12\frac{\thdd}{\thdd t} \left(\, 2v^\varepsilon _1\, v^\varepsilon _2 \;\right) \,=\,
-\, \frac{b(t,\bX^\varepsilon )}{\varepsilon ^2} \,\left(\,|v^\varepsilon _1|^{2}\,-\,|v^\varepsilon _2|^{2}\,\right)
\,+\,\frac{v_2^\varepsilon \,E_1(t,\bX^\varepsilon ) + v_1^\varepsilon \,E_2(t,\bX^\varepsilon )}{\varepsilon }\,.
\end{array}\right.
\end{equation}
\end{lemma}
\begin{proof}
First note that the second equation in \eqref{traj:00} may also be written as
$$
\frac{\bV^\varepsilon }{\varepsilon } \,=\, \bF(t,\bX^\varepsilon ) \,+\,\frac{\varepsilon }{b(t,\bX^\varepsilon )}\,\left(\frac{\dd\bV^\varepsilon }{\dd t}\right)^\perp\,.
$$
Therefore
\begin{eqnarray*}
\frac{1}{2}\,\frac{\dd}{\dd t} \|\bV^\varepsilon \|^2 &=&
\bV^\varepsilon \,\cdot \frac{\dd\bV^\varepsilon }{\dd t}
\,=\,\varepsilon \,\bF(t,\bX^\varepsilon ) \cdot
\frac{\dd\bV^\varepsilon }{\dd t}\\
&=& \varepsilon \frac{\dd}{\dd t} \left(\bF(t,\bX^\varepsilon )\cdot \bV^\varepsilon \right)
- \varepsilon \,\bV^\varepsilon \cdot \left(\frac{\d\bF}{\d t}(t,\bX^\varepsilon ) + \dd_\bx \bF(t,\bX^\varepsilon ) \, \frac{\dd\bX^\varepsilon }{\dd t} \right)\\
&=& -\bV^\varepsilon \cdot \dd_\bx \bF(t,\bX^\varepsilon ) \,(\bV^\varepsilon )\;+\,\varepsilon \left(\,\frac{\dd}{\dd t} \left[\bF(t,\bX^\varepsilon )\cdot \bV^\varepsilon \right] \,- \,\bV^\varepsilon \cdot \frac{\d\bF}{\d t}(t,\bX^\varepsilon )\,\right).
\end{eqnarray*}
Hence \eqref{eq:modulus}. Likewise we may obtain the first equation in \eqref{eq:VV} --- for $(|v^\varepsilon _1|^2 -|v^\varepsilon _2|^2)$ --- by multiplying the second equation of \eqref{traj:00} by $(v^\varepsilon _1,-v^\varepsilon _2)/\varepsilon $, and the equation for $v^\varepsilon _1\,v^\varepsilon _2$ by multiplying it by
$(v^\varepsilon _2,v^\varepsilon _1)/\varepsilon $.
\end{proof}
This crucial lemma suggests that
\begin{itemize}
\item the microscopic kinetic energy $\frac{1}{2}\,\|\bV^\varepsilon \|^2$ also evolves on a slow scale;
\item the two variables $2v^\varepsilon _1\,v^\varepsilon _2$ and $|v^\varepsilon _1|^{2}\,-\,|v^\varepsilon _2|^{2}$ oscillate at scale $1/\varepsilon ^2$ thus converge weakly to zero.
\end{itemize}
Indeed, the couple $\left(2v_1^\varepsilon \,v_2^\varepsilon \,,\,|v_1^\varepsilon |^{2}\,-\,|v_2^\varepsilon |^{2}\right)$ solves a forced spring-mass system with non-constant frequency of
oscillation of typical size $1/\varepsilon ^2$. Since the foregoing three variables generate all expressions quadratic in $\bV^\varepsilon $, this is indeed sufficient to conclude.
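For the record, introducing $u^\varepsilon :=\left(|v_1^\varepsilon |^{2}-|v_2^\varepsilon |^{2}\,,\,2v_1^\varepsilon \,v_2^\varepsilon \right)$, system~\eqref{eq:VV} may be written compactly as
$$
\frac{\thdd u^\varepsilon }{\thdd t}
\,=\,-\,\frac{2\,b(t,\bX^\varepsilon )}{\varepsilon ^2}\,(u^\varepsilon )^\perp
\,+\,O\left(\frac1\varepsilon \right),
$$
which makes the announced oscillator structure --- with space-time dependent frequency $2\,b(t,\bX^\varepsilon )/\varepsilon ^2$ --- explicit.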
From previous considerations it follows that when $(\bX^\varepsilon ,\bV^\varepsilon )$ solves \eqref{traj:00} the couple $(\bX^\varepsilon ,\frac{1}{2}\|\bV^\varepsilon \|^2)$ converges strongly to some $(\bY,g)$ as $\varepsilon \to0$. To identify an evolution for the limiting $(\bY,g)$, we write relevant quadratic terms of \eqref{res:0bis} and \eqref{eq:modulus} in terms of quantities of Lemma~\ref{lmm:01}. First
\begin{eqnarray*}
-(\bV^{\varepsilon })^\perp\,\dd_\bx\left(\frac1b\right)(t,\bX^\varepsilon )\,(\bV^\varepsilon )&=& -\frac{1}{2}
\|\bV^\varepsilon \|^2 \,\nabla_\bx^\perp \left(\frac1b\right)(t,\bX^\varepsilon ) \,\,-\,\, \frac{1}{2}
\left[ |v_1^{\varepsilon }|^2 -|v_2^{\varepsilon }|^2 \right]\,\left(\begin{array}{l} \d_{x_2}\\\d_{x_1}\end{array}\right) \left(\frac1b\right)(t,\bX^\varepsilon )
\\
&&+\,\,
\left[ v_1^{\varepsilon } \,v_2^{\varepsilon } \right]\,
\left(\begin{array}{l} \d_{x_1}\\-\d_{x_2}\end{array}\right)\left(\frac1b\right)(t,\bX^\varepsilon )
\end{eqnarray*}
converges weakly to
$$
-g\,\nabla_\bx^\perp \left(\frac1b\right)(t,\bY)\,=\,g\,\frac{\nabla_\bx^\perp b}{b^2}(t,\bY)
$$
as $\varepsilon \to0$. Thus, taking the limit $\varepsilon \to0$ in \eqref{res:0bis} gives
\begin{equation}
\frac{\dd\bY}{\dd t} \,=\, \bF(t,\bY) \,+\, g \,\frac{\nabla_\bx^\perp b}{b^2}(t,\bY).
\label{gc:1}
\end{equation}
Likewise, $\bV^\varepsilon \,\cdot\,\dd_\bx\bF(t,\bX^\varepsilon )\,(\bV^\varepsilon )$ converges weakly to $g\,\Div_\bx(\bF)(t,\bY)$ so that
\begin{equation}
\frac{\dd g}{\dd t} \,=\, -\Div_\bx(\bF)(t,\bY)\,g.
\label{gc:2}
\end{equation}
Gathering the latter results, we get that $(\bX^\varepsilon ,\frac{1}{2}\|\bV^\varepsilon \|^2)$ converges strongly to $(\bY,g)$ solving
\begin{equation}
\label{traj:limit}
\left\{
\begin{array}{l}
\ds{\frac{\dd\bY}{\dd t} \,=\, \bF(t,\bY) \,+\, g \,\frac{\nabla_\bx^\perp b}{b^2}(t,\bY) },
\\
\,
\\
\ds{\frac{\dd g}{\dd t} \,=\, -\Div_\bx (\bF)(t,\bY)\,g},
\\
\,
\\
\bY(t^0) = \bx^0, \quad g(t^0)\,=\,e^0,
\end{array}\right.
\end{equation}
where $e^0\,=\,\frac{1}{2}\|\bv^0\|^2$.
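To illustrate the limiting dynamics, the following sketch (illustrative only: the fields $b$ and $\bE$ below are ad hoc choices, not taken from this paper) integrates system \eqref{traj:limit} with a standard fourth-order Runge-Kutta method, in a case where $b$ is static and $\bE$ derives from a potential, and checks numerically the conservation of the adiabatic invariant $g/b(\bY)$ discussed in Remark~\ref{rk:adiabatic} below.

```python
# Illustrative only: integrate the limiting guiding-center system
#   dY/dt = F(t,Y) + g * grad^perp(b)/b^2,   dg/dt = -div(F) g,
# with ad hoc fields b(x) = 2 + 0.1 x1 (static) and E = -grad(|x|^2/2),
# so that d_t b + Rot(E) = 0 and g/b(Y) is an exact invariant.
def b(x1, x2):
    return 2.0 + 0.1 * x1          # bounded below, as required by (hyp:1)

def rhs(state):
    x1, x2, g = state
    bb = b(x1, x2)
    # E = (-x1, -x2), so F = -E^perp/b = (E2/b, -E1/b) = (-x2/b, x1/b)
    F1, F2 = -x2 / bb, x1 / bb
    # grad b = (0.1, 0), hence grad^perp b = (0, 0.1)
    dx1 = F1
    dx2 = F2 + g * 0.1 / bb**2
    # div F = -E^perp . grad(1/b) + Rot(E)/b = 0.1 x2 / b^2  (Rot E = 0)
    dg = -(0.1 * x2 / bb**2) * g
    return (dx1, dx2, dg)

def rk4_step(state, dt):
    k1 = rhs(state)
    k2 = rhs(tuple(s + 0.5 * dt * k for s, k in zip(state, k1)))
    k3 = rhs(tuple(s + 0.5 * dt * k for s, k in zip(state, k2)))
    k4 = rhs(tuple(s + dt * k for s, k in zip(state, k3)))
    return tuple(s + dt / 6.0 * (a + 2 * p + 2 * q + r)
                 for s, a, p, q, r in zip(state, k1, k2, k3, k4))

state = (1.0, 0.5, 0.3)                       # Y(0) = (1, 0.5), g(0) = 0.3
invariant0 = state[2] / b(state[0], state[1])
for _ in range(1000):                         # up to T = 1 with dt = 1e-3
    state = rk4_step(state, 1.0e-3)
invariant1 = state[2] / b(state[0], state[1])
```

Up to the fourth-order integration error, `invariant1` matches `invariant0`, in agreement with the discussion of the adiabatic invariant below.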
\begin{remark}\label{rk:adiabatic}
Observe that $\thDiv_\bx(\bF)=-\bE^\perp\cdot\nabla_\bx(1/b)+\thRot(\bE)/b$. In particular, as expected, the limiting system \eqref{traj:limit} contains the fact that when $b$ depends only on time and $\bE$ derives from a potential, as in the Vlasov-Poisson case, the microscopic kinetic energy is conserved. Likewise, system~\eqref{traj:limit} also implies that
$$
\frac{\dd}{\dd t}\left(\frac{g}{b(t,\bY)}\right)
\,=\,-\frac{g}{b(t,\bY)^2}\left(\d_tb+\thRot(\bE)\right)(t,\bY)
$$
so that for the limiting system the classical adiabatic invariant $g/b$ becomes an exact invariant when the Maxwell-Faraday equation
$$
\d_tb+\thRot(\bE)\,=\,0
$$
holds. The latter occurs in particular when $b$ depends only on the space variable and $\bE$ derives from a potential, as in the Vlasov-Poisson case. For consistency's sake note also that when $b$ is constant the microscopic kinetic energy and the adiabatic invariant essentially coincide and their conservation is equivalent to the irrotationality of $\bE$.
\end{remark}
\subsection{Formal asymptotic limit of the Vlasov-Poisson system}
We come back to the Vlasov-Poisson system \eqref{eq:vlasov2d}. Here one cannot remain completely at the characteristic level \eqref{traj:00} anymore. Moreover, whereas the arguments of the previous subsection could be turned into sound analytic arguments, to the best of our knowledge the present situation does not fall directly within the range of the available analyses of gyro-kinetic limits. We refer the reader to \cite{FS:00,GSR:99,SR:02,Cheverry,Miot,HerdaR} for a representative sample of such analytic techniques.
Nevertheless the previous subsection strongly suggests for $(f^\varepsilon ,\bE^\varepsilon )$ solving the
Vlasov-Poisson system \eqref{eq:vlasov2d} that in the limit $\varepsilon \rightarrow 0$, the electric field $\bE^\varepsilon $ and the following velocity-averaged version of $f^\varepsilon $
$$
f^\varepsilon \,:\,(t,\bx,e)\mapsto\frac{1}{2\pi}\int_0^{2\pi} f^\varepsilon (t,\bx, \sqrt{2e}(\cos(\theta),\sin(\theta)))\,\dd \theta
$$
converge to some $\bE$ and some\footnote{We use distinct notation of variables for limiting functions to be consistent with asymptotic analysis at the characteristic level. This is of course completely immaterial.} $f:(t,\by,g)\mapsto f(t,\by,g)$ solving the following system consisting in a transport equation supplemented with a Poisson equation,
\begin{equation}
\left\{
\begin{array}{l}
\ds \frac{\d f}{\d t}+\mathbf{U}\cdot\nabla_{\by} f+ u_g \frac{\d f}{\d g}=0,
\\ \, \\
\ds -\Delta_{\by}\phi=\rho\,,\quad
\rho=2\pi\,\int_{\RR^+} f \,\dd g,
\end{array}
\right.
\label{eq:gc}
\end{equation}
where the velocity field is given by
$$
{\bf U}(t,\by,g)\,=\,\bF(t,\by) \,+\, g \frac{\nabla_\bx^\perp b}{b^2}(t,\by)\,, \qquad
u_g = -\Div_\by(\bF)(t,\by) \,g\,,
$$
with $\bE = -\nabla_\by \phi$, $\bF=-\bE^\perp/b$.
Our goal is not to develop a thorough analysis of system~\eqref{eq:gc} but let us mention that it generates a reasonable dynamics whose study falls into the scope of classical techniques.
\begin{theorem}\label{th:existence}
Consider $b\in W^{1,\infty}(\RR^+\times\RR^2)$ satisfying \eqref{hyp:1}.\\
Assume $f^0 \in L^1\cap L^\infty (\RR^2\times
\RR^+)$, $f^0$ is nonnegative and
$$
\int_{\RR^2\times \RR^+} g\,f^0(\by,g)\,\,\thdd\by\,\thdd g < \infty.
$$
Then system~\eqref{eq:gc} possesses a nonnegative weak solution $f$ starting from $f^0$ at time $0$ such that both Lebesgue norms of $f$ and total energy are non-increasing in time, in particular,
$$
\|f(t,\cdot,\cdot) \|_{L^p} \,\leq\, \| f^0\|_{L^p}\, \quad \forall \,t\in \RR^+
$$
and
$$
\mathcal{E}(t) \,:=\, 2\pi\,\int_{\RR^2\times\RR^+} g\,f(t,\by, g)\,\thdd\by\,\thdd g
\,+\, \frac{1}{2}\int_{\RR^2} \|\bE(t,\by)\|^2 \thdd\by \,\leq \, \mathcal{E}(0)
\, \quad \forall \,t\in \RR^+.
$$
Furthermore, when $f^0\in\mathcal{C}^1_c(\RR^2\times \RR^+)$, the solution is unique and constant along the characteristic curves
$$
f(t,\bY(t),g(t)) \,=\, f(t^0, \bx^0, e^0) \quad \forall \,(t,t^0,\bx^0,e^0)\in \RR^+\times\RR^+\times\RR^2\times\RR^+
$$
where $(\bY,g)$ solves \eqref{traj:limit}.
\end{theorem}
The foregoing result is not expected to be optimal in any reasonable way and its proof is completely analogous to those for the Vlasov-Poisson system (\cite{Arsenev,DiPernaLions} for weak solutions, \cite{Okabe} for smooth solutions). One key observation that underpins the analysis is that the vector field $(\bU,u_g)$ is divergence-free since
$$
\Div_\by \bU \,+\, \frac{\d u_g}{\d g} \,=\,\Div_\by(\bF)\,-\,\Div_\by(\bF) = 0.
$$
Note also that for smooth solutions both Lebesgue norms and total energy are exactly preserved by the time evolution. \bigskip
We now come to our main concern in the present article and seek a numerical method that is able to capture these expected asymptotic properties, even when numerical discretization parameters are kept independent of $\varepsilon $, hence not adapted to the stiffness of the fast scales. Our objective enters the general framework of so-called Asymptotic Preserving (AP) schemes, first introduced and widely studied for dissipative systems as in \cite{jin:99, klar:98}. Yet, in contrast with collisional kinetic equations in hydrodynamic or
diffusion asymptotics, collisionless equations like the Vlasov-Poisson
system \eqref{eq:vlasov2d} involve fast time oscillations instead of fast time relaxation. In many respects this makes the identification of suitable schemes much more challenging.
In this context, among the few families of strategies available, most insist on providing also a good description of the fast oscillations. To describe fast oscillations without imposing severe restrictions on time steps one needs in some sense to double the time variables, thus to see $f$ as the trace at $(t,\tau)=(t,t/\varepsilon ^2)$ of a function of variables $(t,\tau,\bx,\bv)$. The corresponding strategies have proved to be very successful when magnetic fields are uniform.
One of the oldest of these strategies, developed by E. Fr\'enod, F. Salvarani and E. Sonnendr\"ucker in \cite{FSS:09}, is directly inspired by theoretical results on two-scale convergence and relies on the fact that at the limit $\varepsilon \to0$ the $\tau$-dependence may be explicitly filtered out. Its main drawback is probably that it computes only the leading order term in the limit $\varepsilon \to0$. In particular it is only available when $\varepsilon $ is very small.
This may be fixed by keeping, besides the stiff term to which a two-scale treatment is applied, a non-stiff part that is smaller in the limit $\varepsilon \to0$ but becomes important when $\varepsilon $ is not small. Such a decomposition may be obtained by using a micro-macro
approach as in \cite{CFHM:15} and some references therein. This does make it possible to switch from one regime to another without any special treatment of the transition between them, but results in relatively heavy schemes.
Another approach with similar advantages, developed in \cite{bibCLM} and \cite{FHLS:15}, consists in explicitly doubling the time variables and seeking higher-dimensional partial differential equations and boundary conditions in the variables $(t,\tau,\bx,\bv)$ that contain the original system on the $\varepsilon $-diagonal $(t,\tau)=(t,t/\varepsilon ^2)$. While the corresponding methods are extremely good at capturing oscillations, their design requires a deep \emph{a priori} understanding of the detailed structure of the oscillations.
Since there are dramatic changes in the structure of oscillations when magnetic fields are not uniform, it seems that the extension of any method capturing oscillations to realistic magnetic fields is either doomed to fail or at least to require a tremendous amount of work and complexity. Observe in particular that in inhomogeneous cases whereas at the limit $\varepsilon \to0$ the slow evolution still obeys a closed system independent of fast scales (as \eqref{eq:gc} above), fast oscillations \emph{do not} uncouple anymore since their oscillating frequencies depend now on the slow scales... In any case this extension would constitute a major break-through in the field.\smallskip
Our goal is somehow more modest as we only require that, in the limit $\varepsilon \to0$, our schemes capture accurately the non-stiff part of the evolution while allowing for coarse discretization parameters. Recently, we have indeed proposed a class of semi-implicit schemes which make it possible to capture the asymptotic limit of the two dimensional Vlasov-Poisson system with a \emph{uniform} magnetic field \cite{FR16}. We stress that in many respects those schemes are remarkably natural and simple.
Here, we show how our approach may be extended to some non-uniform cases, hence allowing direct simulations of system \eqref{eq:vlasov2d} with time steps large with respect to $\varepsilon $, while still capturing the slow coherent dynamics that emerges from strong oscillations. We develop numerical schemes that are able to deal with a wide range of values of $\varepsilon$ --- that is, AP schemes in the terminology mentioned above. Our schemes are consistent with the kinetic model for any positive value of $\varepsilon $, and degenerate into schemes consistent with the asymptotic model~\eqref{eq:gc} when $\varepsilon \rightarrow0$.
We stress that the extension carried out here is by no means a trivial continuation of \cite{FR16}. To understand the inherent difficulties, we now review the main ingredients of the asymptotic analysis discussed in Section~\ref{s:external-asymptotics}, those that we aim at incorporating at the discrete level. As a preliminary warning, it must be stressed however that once the time variable has been discretized the distinction between strong and weak convergences essentially disappears. This is immaterial to the treatment of homogeneous cases since then, as implicitly contained in Remark~\ref{rk:homogeneous}, there is only one key ingredient, the weak convergence of $\bV^\varepsilon /\varepsilon $ to $\bF(t,\bY)$, which may be safely replaced with a strong convergence at the discrete level. Here, beyond that requirement, we also need three pieces of information essentially contained in Lemma~\ref{lmm:01}: weak convergences of $v_1^\varepsilon \,v_2^\varepsilon $ and $|v_1^\varepsilon |^2-|v_2^\varepsilon |^2$ to zero and strong convergence of $\frac12(|v_1^\varepsilon |^2+|v_2^\varepsilon |^2)$ to $g$ (solving \eqref{gc:2}). Therefore we need to offer compatible counterparts \emph{at the discrete level} of the weak convergence of $\bV^\varepsilon $ to zero and the strong convergence of $\|\bV^\varepsilon \|^2$ to a non-trivial limit, a non-trivial task!
\section{A particle method for inhomogeneous strongly magnetized
plasmas}
\setcounter{equation}{0}
\label{sec:3}
The schemes we shall propose here belong to the family of Particle-In-Cell (PIC) methods. Before presenting our specific schemes, we review in a few words the basic ideas underpinning these methods. We refer the reader to \cite{birdsall} for a thorough discussion and other applications to plasma physics.
To keep notation as light as possible, we temporarily drop from the notation the dependence of solutions on $\varepsilon $ and on the discretization parameters. The starting point is the choice of approximating $f$, solving \eqref{eq:vlasov2d}, by a finite sum of smoothed Dirac masses. More explicitly one would like to compute
$$
f_N(t,\bx,\bv) \,:=\; \sum_{1\leq k\leq N } \omega_k \;\vp_\alpha (\bx-\bX_k(t)) \;\vp_\alpha (\bv-\bV_k(t))\,,
$$
where $\vp_\alpha = \alpha^{-d} \vp(\cdot / \alpha)$ is a particle shape function with radius proportional to $\alpha$ --- usually seen as a smooth approximation of the Dirac measure $\delta_0$ --- obtained by scaling a fixed compactly supported mollifier $\vp$, and the set $((\bX_k,\bV_k))_{1\leq k\leq N}$ represents the position in phase space
of $N$ macro-particles evolving along characteristic curves \eqref{traj:00} from initial data $(\bx_k^0, \bv_k^0)$, $1\leq k \leq N$. More explicitly
\begin{equation}
\label{traj:bis}
\left\{
\begin{array}{l}
\ds\varepsilon \frac{\dd\bX_k}{\dd t} \,=\, \bV_k,
\\
\,
\\
\ds\varepsilon \frac{\dd\bV_k}{\dd t} \,=\, -\frac{1}{\varepsilon }\,b(t,\bX_k)\,
\bV_k^\perp \,+\, \bE(t,\bX_k),
\\
\,
\\
\bX_k(0) = \bx^0_k, \quad\bV_k(0) = \bv^0_k,
\end{array}\right.
\end{equation}
where the electric field $\bE$ solves the Poisson equation \eqref{eq:vlasov2d}, or, more exactly in the end, it is computed from a solution of a discretization of the Poisson equation on a mesh of the physical space. Parameters are initially chosen to ensure that $f_N(0,\bx,\bv)$ is a good approximation of the (continuous) initial datum $f^0$. To increase the order of approximation of the Dirac masses, mollifiers are sometimes chosen with a prescribed number of vanishing moments. Common concrete choices include B-splines. See for instance \cite{Koumoutsakos.1997.jcp,Cottet.Koumoutsakos.2000.cup}.
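As an illustration of the deposition step implicit in the definition of $f_N$ --- none of the numerical values below come from this paper --- the following sketch spreads macro-particle weights onto a periodic Cartesian grid with a first-order B-spline (``cloud-in-cell'') shape function. Total charge is conserved exactly, since the four fractions attached to each particle sum to one.

```python
# Illustrative cloud-in-cell (first-order B-spline) charge deposition
# on a periodic nx-by-ny grid covering [0,Lx) x [0,Ly).
# Positions, weights and grid sizes are placeholder values.
def deposit(positions, weights, nx, ny, Lx, Ly):
    rho = [[0.0] * ny for _ in range(nx)]
    dx, dy = Lx / nx, Ly / ny
    for (x, y), w in zip(positions, weights):
        gx, gy = (x % Lx) / dx, (y % Ly) / dy   # cell coordinates
        i, j = int(gx), int(gy)
        fx, fy = gx - i, gy - j                 # offsets in [0,1)
        i, j = i % nx, j % ny
        ip, jp = (i + 1) % nx, (j + 1) % ny
        # hat-function weights: split w over the 4 nearest grid nodes
        rho[i][j]   += w * (1 - fx) * (1 - fy)
        rho[ip][j]  += w * fx * (1 - fy)
        rho[i][jp]  += w * (1 - fx) * fy
        rho[ip][jp] += w * fx * fy
    return rho   # per-node charge; divide by dx*dy for a density

rho = deposit([(0.25, 0.5), (1.7, 0.1)], [2.0, 3.0], 4, 4, 2.0, 2.0)
total = sum(sum(row) for row in rho)   # equals the sum of the weights
```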
The main remaining issue is then to design an accurate and stable approximation of the particle trajectories in phase space. Note moreover that we ultimately want those properties to be uniform with respect to $\varepsilon $ at least for the slow part of the evolution. As already mentioned, to achieve this goal, we need to reproduce at the discrete level both the (weak) convergence of each $\bV_k$ to zero and a non trivial slow dynamics of its microscopic energy $e_k=\tfrac{1}{2}\|\bV_k\|^2$. We resolve this difficulty by augmenting the dimension of the phase space by one, looking for unknowns $(\bX_k,\bw_k,e_k)$ starting from $(\bx^0_k,\bv^0_k,\tfrac12\|\bv_k^0\|^2)$ and providing a solution to \eqref{traj:bis} by setting $(\bX_k,\bV_k)=(\bX_k,\sqrt{2e_k}\ \bw_k/\|\bw_k\|)$. The simplest possible choice of such systems would be
\begin{equation}
\left\{
\begin{array}{l}
\ds\varepsilon \frac{\dd\bX_k}{\dd t} \,=\, \bw_k,
\\
\,
\\
\ds\varepsilon \frac{\dd e_k}{\dd t} \,=\, \bE(t,\bX_k)\cdot\bw_k,
\\
\,
\\
\ds\varepsilon \frac{\dd\bw_k}{\dd t} \,=\, -\frac{1}{\varepsilon }\,b(t,\bX_k)\,
\bw_k^\perp \,+\, \bE(t,\bX_k),
\\
\,
\\
\bX_k(0) = \bx^0_k, \quad e_k(0)\,=\, e_k^0,\quad \bw_k(0) = \bw^0_k.
\end{array}\right.
\label{traj:ter}
\end{equation}
with $(\bw_k^0,e_k^0)\,=\,(\bv^0_k,\tfrac12\|\bv_k^0\|^2)$. However this choice of augmented system is far from being uniquely determined. For instance one may add to the vector field defining System~\eqref{traj:ter} any function of $(t,\bX,e,\bw)$ that vanishes when $e=\tfrac12\|\bw\|^2$. In the following we shall rather work with
\begin{equation}
\left\{
\begin{array}{l}
\ds\varepsilon \frac{\dd\bX_k}{\dd t} \,=\, \bw_k,
\\
\,
\\
\ds\varepsilon \frac{\dd e_k}{\dd t} \,=\, \bE(t,\bX_k)\cdot\bw_k,
\\
\,
\\
\ds\varepsilon \frac{\dd\bw_k}{\dd t} \,=\, -\frac{1}{\varepsilon }\,b(t,\bX_k)\,
\bw_k^\perp \,+\, \bE(t,\bX_k)\,-\,\chi(e_k,\bw_k)\,\nabla_\bx (\ln(b))(t,\bX_k),
\\
\,
\\
\bX_k(0) = \bx^0_k, \quad e_k(0)\,=\, e_k^0,\quad \bw_k(0) = \bw^0_k
\end{array}\right.
\label{traj:qua}
\end{equation}
with $\chi$ chosen such that
\begin{equation}
\label{hyp:2}
\left(\chi\left(\tfrac{1}{2}\|\bw\|^2\,,\, \bw\right) =0,\,\, \forall \bw\in\RR^2\right)\quad{\rm and}
\quad \left(\lim_{\bw\rightarrow 0} \chi(e,\bw) = e, \,\, \forall e\in\RR\right)
\end{equation}
and
\begin{equation}
\label{hyp:2bis}
0 \leq \chi(e,\bw) \leq e, \quad \forall(e,\bw)\in\RR_+\times\RR^2\,.
\end{equation}
For concreteness, in the following, we actually choose $\chi$ as
$$
\chi(e,\bw) \,=\, \frac{e}{e \,+\, \|\bw\|^2/2 }\,\left( e \,-\, \frac{\|\bw\|^2}{2} \right)^+\,,\quad \forall(e,\bw)\in\RR\times\RR^2
$$
where $s^+=\max(0,s)$. The choice of System~\eqref{traj:qua} originates in the fact that it is relatively easy to discretize it so as to ensure that, in the limit $\varepsilon \to0$, $\varepsilon ^{-1}\bw_k$ becomes asymptotically close to
$$
\bF(t,\bX_k)\,-\,e_k\nabla_\bx^\perp \left(\frac1b\right)(t,\bX_k)
$$
(as it occurs for $\varepsilon ^{-1}\bV_k$, see Section~\ref{s:external-asymptotics}). Since we start with $e_k^0=\tfrac12\|\bw_k^0\|^2$ and the constraint is preserved by the evolution, the choice of System~\eqref{traj:ter}, of \eqref{traj:qua}, or of any similar system is perfectly immaterial as long as we do not discretize the time variable. However, their discretizations no longer preserve exactly the condition $e_k=\tfrac12\|\bw_k\|^2$ and thus lead to effectively distinct solutions.
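As an illustration, the particular choice of $\chi$ above is straightforward to implement and to check against conditions \eqref{hyp:2}--\eqref{hyp:2bis}. The following Python sketch (function and variable names are ours, not taken from any library) does so for NumPy vectors; the guard for $e=\bw=0$ is an implementation choice of ours.

```python
import numpy as np

def chi(e, w):
    """Correction coefficient chi(e, w) = e/(e + |w|^2/2) * (e - |w|^2/2)^+ .

    It vanishes on the constraint set e = |w|^2/2 and tends to e as w -> 0,
    as required by the consistency conditions on the augmented system.
    """
    ke = 0.5 * float(np.dot(w, w))          # |w|^2 / 2
    if e + ke <= 0.0:                       # guard the degenerate case e = w = 0
        return 0.0
    return e / (e + ke) * max(e - ke, 0.0)  # s^+ = max(0, s)
```

A quick check confirms that `chi` vanishes on the constraint set, reduces to $e$ at $\bw=0$, and stays within $[0,e]$ for nonnegative energies.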
Note that the addition of a new phase-space variable is much less invasive than doubling the time variables (though it is insufficient to allow for the description of fast oscillations on coarse grids) and it is also very cheap in the context of PIC methods. In particular, consistently with the former claim, once we have chosen an augmented system such as~\eqref{traj:qua}, we may discretize it by adapting almost readily the strategy developed in \cite{BFR:15, FR16}, based on semi-implicit solvers for stiff problems.
In the rest of this section, we focus on the discretization of System~\eqref{traj:qua} and propose several numerical schemes for that purpose. Yet, prior to that, we briefly summarize the above discussion and explain how this discretization will be used to compute solutions to \eqref{eq:vlasov2d}. Besides the already introduced parameters, fix a time step $\Delta t>0$ and for $n\geq0$ introduce the discrete times $t^n=n\,\Delta t$. Then we choose a discretization of System~\eqref{traj:qua} expected to provide, for any $k\in\{1,\ldots,N\}$, $(\bx^n_k,e^n_k,\bw^n_k)$, an approximation of $(\bX_k(t^n),e_k(t^n),\bw_k(t^n))$, where $(\bX_k,e_k,\bw_k)$ solves \eqref{traj:qua}. Of course the equations are not uncoupled, thus we compute simultaneously all the $(\bx^n_k,e^n_k,\bw^n_k)$, $k\in\{1,\ldots,N\}$, together with the corresponding electric field, obtained by solving, on a mesh of the physical space, a discretization of the Poisson equation whose macroscopic density corresponds to the distribution function $(f_{N,\alpha}^n)_{n\geq0}$, given at time $t^{n}$ by
$$
f_{N,\alpha}^{n}(\bx,\bv) \,:=\; \sum_{1\leq k \leq N} \omega_k
\;\vp_\alpha (\bx-\bx^n_k) \;\vp_\alpha (\bv-\bv^n_k)\,,
$$
where
$$
\bv^n_k \,\;:=\,\, \sqrt{2 \, e_k^n} \,\, \frac{\bw^n_k}{\|\bw_k^n\|}\,.
$$
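In practice this reconstruction is a one-liner. The sketch below (our naming) enforces $\tfrac12\|\bv^n_k\|^2=e^n_k$ exactly while keeping the direction of $\bw^n_k$; the guard for $\bw^n_k=0$, where the formula leaves the direction undefined, is our own convention.

```python
import numpy as np

def reconstruct_velocity(e, w):
    """Rebuild the velocity sample from (e, w): direction of w, kinetic energy e."""
    nw = np.linalg.norm(w)
    if nw == 0.0:                 # direction undefined; fall back to the zero vector
        return np.zeros_like(w)
    return np.sqrt(2.0 * max(e, 0.0)) * w / nw
```

For instance, `reconstruct_velocity(8.0, [3, 4])` returns a vector parallel to $(3,4)$ with kinetic energy exactly $8$.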
\subsection{A first-order semi-implicit scheme}
Since we only discuss possible discretizations of \eqref{traj:qua}, we shall omit from now on the index $k\in\{1,\ldots,N\}$ and consider the electric field as fixed, assuming smoothness of $(\bE,b)$ and
\eqref{hyp:1}.
We start with the simplest semi-implicit scheme for \eqref{traj:qua}, which is a combination of the backward and forward Euler schemes. For a fixed time step $\Delta t>0$ it is given by
\begin{equation}
\label{scheme:0}
\left\{
\begin{array}{l}
\ds\frac{\bx^{n+1} - \bx^n }{\Delta t} \,\,=\, \frac{\bw^{n+1}}{\varepsilon }\,,
\\
\,
\\
\ds\frac{e^{n+1} - e^n }{\Delta t} \,\,=\, \bE(t^n,\bx^n)\cdot \frac{\bw^{n+1}}{\varepsilon }\,,
\\
\,
\\
\ds\frac{\bw^{n+1} - \bw^n }{\Delta t} \,=\, \frac{1}{\varepsilon }\left(
{\bE(t^n,\bx^n)} \,-\, \chi(e^n,\bw^n)\,\nabla_\bx(\ln(b))(t^n,\bx^n) - b(t^n,\bx^n)\frac{(\bw^{n+1})^\perp }{\varepsilon } \right).
\end{array}\right.
\end{equation}
Notice that only the third equation, on $\bw^{n+1}$, is really implicit, and it only requires the resolution of a two-dimensional linear system. Then, once the value of $\bw^{n+1}$ has been computed, the first and second equations provide explicitly the values of $\bx^{n+1}$ and $e^{n+1}$.
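To make this structure explicit, here is a sketch of one step of \eqref{scheme:0} in Python (names and helpers are ours). Writing $J$ for the rotation by $+\pi/2$, the implicit relation reads $(I+\lambda J)\,\bw^{n+1}=\mathrm{rhs}$ with $\lambda=\Delta t\, b(t^n,\bx^n)/\varepsilon ^2$, and since $(I+\lambda J)^{-1}=(I-\lambda J)/(1+\lambda^2)$ the solve is closed-form.

```python
import numpy as np

def perp(w):
    # rotation by +pi/2: (w1, w2) -> (-w2, w1)
    return np.array([-w[1], w[0]])

def chi(e, w):
    # correction coefficient chosen in the text
    ke = 0.5 * float(np.dot(w, w))
    return e / (e + ke) * max(e - ke, 0.0) if e + ke > 0.0 else 0.0

def step_first_order(x, e, w, t, dt, eps, E, b, grad_log_b):
    """One step of the first-order semi-implicit scheme.

    Only w^{n+1} is implicit; since b is evaluated at (t^n, x^n), the
    implicit relation is a 2x2 linear system solved in closed form.
    """
    rhs = w + (dt / eps) * (E(t, x) - chi(e, w) * grad_log_b(t, x))
    lam = dt * b(t, x) / eps**2
    w_new = (rhs - lam * perp(rhs)) / (1.0 + lam**2)   # (I + lam J)^{-1} rhs
    x_new = x + (dt / eps) * w_new
    e_new = e + (dt / eps) * np.dot(E(t, x), w_new)
    return x_new, e_new, w_new
```

The residual of the implicit relation vanishes to machine precision, which is how the sketch can be validated against \eqref{scheme:0}.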
\begin{proposition}[Consistency in the limit $\varepsilon \rightarrow 0$ for a fixed $\Delta t$]
\label{prop:1}
Let us consider a time step $\Delta t>0$, a final time $T>0$ and
set $N_T=\lfloor T/\Delta t\rfloor$. Assume that $(\bx^n_\varepsilon ,\bw_\varepsilon ^n,e^n_\varepsilon )_{0\leq
n\leq N_T}$ is a sequence obtained by \eqref{scheme:0} and such that
\begin{itemize}
\item for all $1\leq n\leq N_T$, $\left(\bx^n_\varepsilon ,\varepsilon \bw^n_\varepsilon , e^n_\varepsilon \right)_{\varepsilon >0}$ is uniformly bounded with respect to $\varepsilon >0$;
\item $\left(\bx^0_\varepsilon ,\bw_\varepsilon ^0,e^0_\varepsilon \right)_{\varepsilon >0}$ converges in the limit $\varepsilon \rightarrow 0$ to some $(\by^0,\bw^0,g^0)$.
\end{itemize}
Then, for any $1\leq n\leq N_T$, $(\bx^n_\varepsilon ,e^n_\varepsilon )_{\varepsilon >0}$ converges to some $(\by^n,g^n)$ as $\varepsilon \rightarrow 0$ and the limiting sequence $(\by^n,g^n)_{1\leq n\leq N_T}$ solves
\begin{equation}
\label{sch:y0}
\left\{
\begin{array}{l}
\ds\frac{\by^{n+1} - \by^n}{\Delta t} \,=\, -\frac{1}{b(t^n,\by^n)} \Big(
\bE(t^n,\by^n) - g^n \,\nabla_\by (\ln(b))(t^n,\by^n)\Big)^\perp
\\
\,
\\
\ds\frac{g^{n+1} - g^n}{\Delta t} = g^n \,\frac{\nabla_\by^\perp b}{b^2}(t^n,\by^n) \cdot \bE(t^n,\by^n)\,
\end{array}\right.
\end{equation}
which provides a consistent first-order approximation with respect to $\Delta t$ of the
gyro-kinetic system~\eqref{traj:limit}.
\end{proposition}
Note that we actually apply this result with $\left(\bx^0_\varepsilon ,\bw_\varepsilon ^0,e^0_\varepsilon \right)_{\varepsilon >0}=(\bx^0,\bv^0,\tfrac12\|\bv^0\|^2)$ so that in this case $(\by^0,\bw^0,g^0)=(\bx^0,\bv^0,\tfrac12\|\bv^0\|^2)$. Furthermore, in this case, the limiting scheme \eqref{sch:y0} starts from initial data
$$
\by^1\,=\,\by^0\,+\,(\Delta t)\,\bF(0,\by^0)\,,\qquad\qquad
g^1\,=\,g^0\,.
$$
More generally, under the assumptions of Proposition~\ref{prop:1}, $(\by^1,g^1)$ differs from $(\by^0,g^0)$ by a first-order term but in general does not satisfy \eqref{sch:y0} with $n=0$. The scheme needs one time-step to reach the expected asymptotic regime in the limit $\varepsilon \to0$.
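For reference, the limiting scheme \eqref{sch:y0} itself is a plain forward Euler step on the drift system; it may be sketched as follows (Python, our naming), using the identity $\nabla_\by^\perp b/b^2 = (\nabla_\by(\ln b))^\perp/b$.

```python
import numpy as np

def perp(u):
    # rotation by +pi/2: (u1, u2) -> (-u2, u1)
    return np.array([-u[1], u[0]])

def step_limit(y, g, t, dt, E, b, grad_log_b):
    """Forward Euler step of the limiting drift system."""
    U = -perp(E(t, y) - g * grad_log_b(t, y)) / b(t, y)
    T = g * np.dot(perp(grad_log_b(t, y)) / b(t, y), E(t, y))
    return y + dt * U, g + dt * T
```

With $\bE=0$ the energy $g$ is exactly conserved and, for $b(\bx)=1+\tfrac12 x^2$, the drift is purely vertical, which gives a cheap consistency check.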
\begin{proof}
First, note that, for any $0\leq n\leq N_T$, since $(\bx^n_\varepsilon ,e^n_\varepsilon )_{\varepsilon >0}$ is bounded, $\bE(t^n,\bx^n_\varepsilon )$, $\nabla_\bx b(t^n,\bx^n_\varepsilon )$ and $\chi(e^n_\varepsilon ,\bw_\varepsilon ^n)$ are also bounded uniformly in $\varepsilon $. Therefore, combined with assumption \eqref{hyp:1} and boundedness of $\left(\varepsilon \bw^n_\varepsilon \right)_{\varepsilon >0}$, the third equation of \eqref{scheme:0}
written as
\begin{equation}
\label{flav:0}
\frac{(\bw^{n+1}_\varepsilon )^\perp}{\varepsilon }\,\,=\,\frac{1}{b(t^n,\bx_\varepsilon ^n)}\left(-\frac{\varepsilon \bw^{n+1}_\varepsilon - \varepsilon \bw^n_\varepsilon }{\Delta t}\,+\,
\bE(t^n,\bx^n_\varepsilon ) \,-\, \chi(e^n_\varepsilon ,\bw_\varepsilon ^n)\,\nabla_\bx (\ln(b))(t^n,\bx^n_\varepsilon )\right)
\end{equation}
shows that, for any $1\leq n\leq N_T$, $\left(\varepsilon ^{-1}\bw^n_\varepsilon \right)_{\varepsilon >0}$ is bounded and in particular $\left(\bw^n_\varepsilon \right)_{\varepsilon >0}$ converges to zero.
Moreover, for any $1\leq n\leq N_T$, since the sequence $(\bx^n_\varepsilon , e^n_\varepsilon )_{\varepsilon >0}$ is uniformly bounded with respect to
$\varepsilon >0$, we can extract a subsequence still abusively labeled by $\varepsilon $ and find some
$(\by^n, g^n)$ such that $(\bx^n_\varepsilon , e^n_\varepsilon ) \rightarrow (\by^n,g^n)$ as $\varepsilon $ goes to zero.
Therefore, by using the continuity of $(\bE,b,\nabla_\bx b,\chi)$, we first conclude that for any $1\leq n\leq N_T-1$, \eqref{flav:0} yields
$$
\varepsilon ^{-1}\,\bw^{n+1}_\varepsilon \,\rightarrow\,
-\frac{1}{b(t^n,\by^n)}\Big( \,\bE(t^n,\by^n) \,-\, g^n\,\nabla_\by
(\ln(b))(t^n,\by^n) \,\Big)^\perp, \,\,{\rm when}\,\, \varepsilon \rightarrow 0
$$
since $\chi(g^n,0)=g^n$. By substituting the foregoing limit in the first and second equations of \eqref{scheme:0} we obtain that the limit $(\by^n,g^n)_{1\leq n\leq N_T}$ satisfies \eqref{sch:y0}. Moreover the same argument shows that
\begin{equation}\label{sch:y0-init}
\left\{
\begin{array}{l}
\ds\by^1=\,\by^0\,+\,(\Delta t)\,\left(\bF(0,\by^0)
\,+\,\chi(g^0,\bw^0)\,\frac{\nabla_\by^\perp b}{b^2}(0,\by^0)\right)\,,\\
\ds g^1=\,g^0\,+\,(\Delta t)\,\chi(g^0,\bw^0)\,\bE(0,\by^0)\cdot\frac{\nabla_\by^\perp b}{b^2}(0,\by^0)\,.
\end{array}\right.
\end{equation}
Since the limit points $(\by^n,g^n)$ of the extracted subsequence are uniquely determined (by \eqref{sch:y0} and \eqref{sch:y0-init}), the whole sequence $(\bx^n_\varepsilon ,e^n_\varepsilon )_{\varepsilon >0}$ actually converges (for any $n$).
\end{proof}
\begin{remark}
The consistency provided by the latter result is far from being uniform with respect to the time step. However, though we refrain from doing so here in order to keep technicalities to a bare minimum, we expect that an analysis similar to the one carried out in \cite[Section~4]{FR16} could lead to uniform estimates, proving uniform stability and consistency with respect to both $\Delta t$ and $\varepsilon $.
\end{remark}
Of course, a first-order scheme may fail to be accurate enough to describe correctly the long-time behavior of the solution, but it has the advantage of simplicity. In the following we show how to generalize our approach to second- and third-order schemes.
\subsection{Second-order semi-implicit Runge-Kutta schemes}
Now, we come to second-order schemes with two stages. The scheme we consider is a combination of a Runge-Kutta method for the explicit part and of an $L$-stable second-order SDIRK method for the implicit part.
\begin{remark}
In our original treatment of the homogeneous case in \cite{FR16}, where we designed schemes to capture \eqref{gc:-1}, the discrete velocity $\bv^n_\varepsilon $ was damped to zero in the limit $\varepsilon \to0$, consistently with the weak convergence of $\bV^\varepsilon $ to zero but not with the conservation of $\tfrac12\|\bV^\varepsilon \|^2$. Therefore there was some interest there in tailoring schemes that are not too dissipative. Here, at the discrete level, the two distinct behaviors are encoded by two distinct variables $\bw^n_\varepsilon $ and $e^n_\varepsilon $, so that we may rightfully focus solely on implicit $L$-stable choices.
\end{remark}
To describe the scheme, we introduce $\gamma>0$, the smallest root of the polynomial $X^2 - 2X + 1/2$, {\it i.e.} $\gamma = 1 - 1/\sqrt{2}$. The scheme is then given by the
following two stages. First,
\begin{equation}
\label{scheme:3-1}
\left\{
\begin{array}{l}
\ds \bx^{(1)} \,=\, \bx^n \,+\, \frac{\gamma\Delta t}{\varepsilon }\,\bw^{(1)}\,,
\\
\,
\\
\ds e^{(1)} \,=\, e^n \,+\, \frac{\gamma\Delta t}{\varepsilon } \,S^{(1)}\,,
\\
\,
\\
\ds{\bw^{(1)} \,=\, \bw^n \,+\, \frac{\gamma\Delta t}{\varepsilon }\,\bF^{(1)},}
\end{array}\right.
\end{equation}
with
\begin{equation}
\label{F1}
\left\{
\begin{array}{l}
\ds \bF^{(1)} \,:=\, \bE(t^n,\bx^n) \,-\,
\chi(e^n,\bw^n)\,\nabla_\bx (\ln(b))(t^n,\bx^n) \,-\,b(t^n,\bx^n)\,\frac{(\bw^{(1)})^\perp}{\varepsilon }\,,\\[0.5em]
S^{(1)}\,:=\, \bE(t^n,\bx^n)\cdot \bw^{(1)}\,.
\end{array}\right.
\end{equation}
Before the second stage, we first introduce $\hat{t}^{(1)} \,=\, t^n + {\Delta t}/{(2\gamma)}$ and explicitly compute $(\hat{\bx}^{(1)},\hat{\bw}^{(1)},\hat{e}^{(1)})$ from
\begin{equation}
\label{tard2}
\left\{
\begin{array}{l}
\ds\hat{\bx}^{(1)} \,=\, \bx^{n}\,+\, \frac{\Delta t}{2\gamma\varepsilon }
\bw^{(1)}
\,=\, \bx^{n}\,+\, \frac{\bx^{(1)}-\bx^{n}}{2\gamma^2},\\ \, \\
\ds\hat{e}^{(1)} \,=\, e^{n}\,+\, \frac{\Delta t}{2\gamma\varepsilon }
\,S^{(1)}\,=\, e^{n}\,+\, \frac{e^{(1)}-e^{n}}{2\gamma^2},
\\ \,\\
\ds \hat{\bw}^{(1)} \,=\,
\bw^{n}\,+\, \frac{\Delta t}{2\gamma\varepsilon } \bF^{(1)}\,=\,
\bw^{n}\,+\, \frac{\bw^{(1)}-\bw^{n}}{2\gamma^2}\,.
\end{array}\right.
\end{equation}
Then the solution of the second stage $(\bx^{(2)},\bw^{(2)},e^{(2)})$ is given by
\begin{equation}
\label{scheme:3-2}
\left\{
\begin{array}{l}
\ds{\bx^{(2)} \,=\, \bx^{n} \,+\, \frac{(1-\gamma)\Delta t}{\varepsilon } \,\bw^{(1)} \,+\, \frac{\gamma\Delta t}{\varepsilon }\,\bw^{(2)},}
\\
\,
\\
\ds{e^{(2)} \,=\, e^{n} \,+\, \frac{(1-\gamma)\Delta t}{\varepsilon } \,S^{(1)} \,+\, \frac{\gamma\Delta t}{\varepsilon }\,S^{(2)},}
\\
\,
\\
\ds{\bw^{(2)} \,=\, \bw^{n} \,+\, \frac{(1-\gamma)\Delta t}{\varepsilon }
\,\bF^{(1)} \,+\, \frac{\gamma\Delta t}{\varepsilon } \,\bF^{(2)}},
\end{array}\right.
\end{equation}
with
\begin{equation}
\label{F2}
\left\{
\begin{array}{l}
\ds \bF^{(2)} :=\bE\left(\hat{t}^{(1)}, \hat{\bx}^{(1)}\right) -
\chi\left(\hat{e}^{(1)},\hat{\bw}^{(1)}\right)\nabla_\bx(\ln\left(b\right))\left(\hat{t}^{(1)}, \hat{\bx}^{(1)}\right)
-b\left(\hat{t}^{(1)}, \hat{\bx}^{(1)}\right) \frac{(\bw^{(2)})^\perp}{\varepsilon }\,,
\\
\;
\\
S^{(2)} := \bE(\hat{t}^{(1)},\hat{\bx}^{(1)})\cdot \bw^{(2)}\,.
\end{array}\right.
\end{equation}
Finally, the numerical solution at the next time step is defined by
\begin{equation}
\label{scheme:3-3}
\bx^{n+1} \,=\, \bx^{(2)},\qquad
e^{n+1} \,=\, e^{(2)},\qquad
\bw^{n+1} \,=\, \bw^{(2)}.
\end{equation}
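The two stages \eqref{scheme:3-1}--\eqref{scheme:3-3} may be sketched as follows (Python, our naming; the helper `solve_stage` performs the closed-form inversion of $I+\lambda J$, with $J$ the rotation by $+\pi/2$, which is the only implicit computation involved).

```python
import numpy as np

GAMMA = 1.0 - 1.0 / np.sqrt(2.0)  # smallest root of X^2 - 2X + 1/2

def perp(w):
    return np.array([-w[1], w[0]])

def chi(e, w):
    ke = 0.5 * float(np.dot(w, w))
    return e / (e + ke) * max(e - ke, 0.0) if e + ke > 0.0 else 0.0

def solve_stage(rhs, lam):
    # (I + lam*J)^{-1} = (I - lam*J) / (1 + lam^2), J the +pi/2 rotation
    return (rhs - lam * perp(rhs)) / (1.0 + lam**2)

def step_second_order(x, e, w, t, dt, eps, E, b, grad_log_b):
    g = GAMMA
    # --- first stage: implicit in w1 only ---
    G1 = E(t, x) - chi(e, w) * grad_log_b(t, x)
    w1 = solve_stage(w + (g * dt / eps) * G1, g * dt * b(t, x) / eps**2)
    F1 = G1 - b(t, x) * perp(w1) / eps
    S1 = np.dot(E(t, x), w1)
    x1 = x + (g * dt / eps) * w1
    e1 = e + (g * dt / eps) * S1
    # --- explicit intermediate (hat) values ---
    th = t + dt / (2.0 * g)
    xh = x + (x1 - x) / (2.0 * g**2)
    eh = e + (e1 - e) / (2.0 * g**2)
    wh = w + (w1 - w) / (2.0 * g**2)
    # --- second stage: implicit in w2 only ---
    G2 = E(th, xh) - chi(eh, wh) * grad_log_b(th, xh)
    rhs = w + ((1.0 - g) * dt / eps) * F1 + (g * dt / eps) * G2
    w2 = solve_stage(rhs, g * dt * b(th, xh) / eps**2)
    S2 = np.dot(E(th, xh), w2)
    x2 = x + ((1.0 - g) * dt / eps) * w1 + (g * dt / eps) * w2
    e2 = e + ((1.0 - g) * dt / eps) * S1 + (g * dt / eps) * S2
    return x2, e2, w2
```

With $\bE=0$ and $b$ constant the sketch reduces to the $L$-stable SDIRK method applied to $\varepsilon ^2\dot\bw=-b\,\bw^\perp$: the energy variable is conserved exactly and $\|\bw\|$ is not amplified.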
\begin{proposition}[Consistency in the
limit $\varepsilon \rightarrow 0$ for a fixed $\Delta t$]
\label{prop:3}
Let us consider a time step $\Delta t>0$, a final time $T>0$ and
set $N_T=\lfloor T/\Delta t\rfloor$.
Assume that $(\bx^n_\varepsilon ,\bw_\varepsilon ^n,e^n_\varepsilon )_{0\leq
n\leq N_T}$ is a sequence obtained by \eqref{scheme:3-1}-\eqref{scheme:3-3} and such that
\begin{itemize}
\item for all $1\leq n\leq N_T$, $\left(\bx^n_\varepsilon ,\varepsilon \bw^n_\varepsilon , e^n_\varepsilon \right)_{\varepsilon >0}$ is uniformly bounded with respect to $\varepsilon >0$;
\item $\left(\bx^0_\varepsilon ,\bw_\varepsilon ^0,e^0_\varepsilon \right)_{\varepsilon >0}$ converges in the limit $\varepsilon \rightarrow 0$ to some $(\by^0,\bw^0,g^0)$.
\end{itemize}
Then, for any $1\leq n\leq N_T$, $(\bx^n_\varepsilon ,e^n_\varepsilon )_{\varepsilon >0}$ converges to some $(\by^n,g^n)$ as $\varepsilon \rightarrow 0$ and the limiting sequence $(\by^n,g^n)_{0\leq n\leq N_T}$ solves
\begin{equation}
\label{sch:y2-bis}
\left\{
\begin{array}{l}
\ds \by^{n+1} = \ds \by^{n} + (1-\gamma)\,\Delta t \,\bU^{n} +
\gamma\,\Delta t\,\,\bU^{(1)}\,,
\\
\,
\\
\ds g^{n+1} = \ds g^{n} + (1-\gamma)\,\Delta t \, \,T^n \,+\, \gamma\,\Delta t\,T^{(1)} \,,
\end{array}\right.
\end{equation}
where
$$
\left\{
\begin{array}{l}
\ds \bU^{n} = -\frac{1}{b(t^n,\by^n)} \Big( \bE(t^n,\by^n) - g^n \,\nabla_\by (\ln(b))(t^n,\by^n)\Big)^\perp,
\\
\,
\\
\ds T^{n} = g^n \,\frac{\nabla_\by^\perp b}{b^2}(t^n,\by^n) \cdot \bE(t^n,\by^n)
\end{array}\right.
$$
and
$$
\left\{
\begin{array}{l}
\ds \bU^{(1)} = \ds -\frac{1}{b\left(\hat{t}^{(1)},\hat{\by}^{(1)}\right)} \Big(
\bE\left(\hat{t}^{(1)},\hat{\by}^{(1)}\right) - \hat{g}^{(1)}
\,\nabla_\by(\ln\left(b\right))\left(\hat{t}^{(1)},\hat{\by}^{(1)}\right)
\Big)^\perp,
\\
\,
\\
\ds T^{(1)} = \hat{g}^{(1)}\,\frac{\nabla_\by^\perp b}{b^2}
\left(\hat{t}^{(1)},\hat{\by}^{(1)}\right) \cdot \bE\left(\hat{t}^{(1)},\hat{\by}^{(1)}\right)\,,
\end{array}\right.
$$
with $\hat{t}^{(1)}=t^n+\Delta t/(2\gamma)$ and
$$
\left\{
\begin{array}{l}
\ds \hat{\by}^{(1)} = \by^{n} + \frac{\Delta t}{2\gamma} \,\bU^{n}\,,
\\
\,
\\
\ds \hat{g}^{(1)} = g^n \,+\, \frac{\Delta t}{2\gamma} \,T^n\,,
\end{array}\right.
$$
which provides a consistent second-order approximation of the gyro-kinetic system \eqref{traj:limit}.
\end{proposition}
\begin{proof}
We mainly follow the lines of the proof of Proposition~\ref{prop:1}. To make arguments more precise we mark with a suffix ${}_{n+1,\varepsilon }$ intermediate quantities involved in the step from $t^n$ to $t^{n+1}$.
To begin with, for $1\leq n\leq N_T$, the third equation of \eqref{scheme:3-1} implies that $(\varepsilon ^{-1} \bw^{(1)}_{n,\varepsilon })_\varepsilon $ is bounded; then the first and second equations of \eqref{tard2} show that $(\hat{\bx}^{(1)}_{n,\varepsilon },\hat{e}^{(1)}_{n,\varepsilon })_\varepsilon $ is also bounded, from which the third equation of \eqref{scheme:3-2} shows the boundedness of $(\varepsilon ^{-1} \bw^{(2)}_{n,\varepsilon })_\varepsilon $. Then from the third equation of \eqref{tard2} we derive that $(\hat{\bw}^{(1)}_{1,\varepsilon })_\varepsilon $ is bounded and that, for $2\leq n\leq N_T$, $(\varepsilon ^{-1}\hat{\bw}^{(1)}_{n,\varepsilon })_\varepsilon $ is bounded.
One may then extract converging subsequences from all bounded families. With obvious notation for limits this leads in particular along the chosen subsequence from the third equations of \eqref{scheme:3-1} and \eqref{scheme:3-2} to
$$
\varepsilon ^{-1}\,\bw^{(1)}_{n+1,\varepsilon } \,\stackrel{\varepsilon \to0}{\longrightarrow}\,
-\frac{1}{b(t^n,\by^n)}\Big( \,\bE(t^n,\by^n) \,-\, g^n\,\nabla_\by
(\ln(b))(t^n,\by^n) \,\Big)^\perp
$$
and
$$
\varepsilon ^{-1}\,\bw^{(2)}_{n+1,\varepsilon }\,\stackrel{\varepsilon \to0}{\longrightarrow}\, -\frac{1}{b(\hat{t}^{(1)}_{n+1},\hat{\by}^{(1)}_{n+1})}\Big( \,\bE\left(\hat{t}^{(1)}_{n+1},\hat{\by}^{(1)}_{n+1}\right) \,-\, \hat{g}^{(1)}_{n+1}\,\nabla_\by(\ln\left(b\right))(\hat{t}^{(1)}_{n+1},\hat{\by}^{(1)}_{n+1})\,\Big)^\perp
$$
when $1\leq n\leq N_T-1$, since $\bw^n_\varepsilon \to0$ and $\bw^{(1)}_{n+1,\varepsilon }\to0$. With this in hand, taking, along the same subsequence, the limit $\varepsilon \to0$ of the first and second equations of \eqref{tard2} and \eqref{scheme:3-2} provides the claimed scheme.
Moreover the same arguments provide explicit expressions of $(\by^1,g^1)$ in terms of $(\by^0,\bw^0,g^0,\Delta t)$. This yields uniqueness of limits independently of the chosen subsequence hence convergence of full families.
\end{proof}
\subsection{Third-order semi-implicit Runge-Kutta schemes}
Now we consider a third-order semi-implicit scheme. It is given by a four stages Runge-Kutta method and was introduced in the framework of hyperbolic systems with stiff source terms in \cite{BFR:15}.
First, we choose $\alpha=0.24169426078821$, $\eta= 0.12915286960590$ and set $\beta =
\alpha/4$, $\gamma=1/2-\alpha-\beta-\eta$. Then the first stage of the scheme is
\begin{equation}
\label{scheme:4-1}
\left\{
\begin{array}{l}
\ds{\bx^{(1)} \,=\, \bx^n \,+\, \frac{\alpha\Delta t}{\varepsilon }\,\bw^{(1)},}
\\
\,
\\
\ds{e^{(1)} \,=\, e^n \,+\, \frac{\alpha\Delta t}{\varepsilon }\, S^{(1)},}
\\
\,
\\
\ds{\bw^{(1)} \,=\, \bw^n \,+\, \frac{\alpha\Delta t}{\varepsilon } \,\bF^{(1)},}
\end{array}\right.
\end{equation}
with
\begin{equation}
\label{3F1}
\left\{
\begin{array}{l}
\ds \bF^{(1)} \,:=\, \bE(t^n,\bx^n) \,-\, \chi(e^n,\bw^n)\,\nabla_\bx (\ln(b))(t^n,\bx^n)
\,-\,b(t^n,\bx^n)\,\frac{(\bw^{(1)})^\perp}{\varepsilon }\,,
\\[0.5em]
S^{(1)}\,:=\;\bE(t^n,\bx^n)\cdot\bw^{(1)}\,.
\end{array}\right.
\end{equation}
The second stage is
\begin{equation}
\label{scheme:4-2}
\left\{
\begin{array}{l}
\ds{\bx^{(2)} \,=\, \bx^{n} \,-\, \frac{\alpha\Delta t}{\varepsilon } \,\bw^{(1)} \,+\, \frac{\alpha\Delta t}{\varepsilon }\,\bw^{(2)}\,,}
\\
\,
\\
\ds{e^{(2)} \,=\, e^{n} \,-\, \frac{\alpha\Delta t}{\varepsilon } \,S^{(1)}
\,+\, \frac{\alpha\Delta t}{\varepsilon }\, S^{(2)}\,,}
\\
\,
\\
\ds{\bw^{(2)} \,=\, \bw^{n} \,-\, \frac{\alpha\Delta t}{\varepsilon } \,\bF^{(1)} \,+\, \frac{\alpha\Delta t}{\varepsilon }\,\bF^{(2)}\,,}
\end{array}\right.
\end{equation}
with
\begin{equation}
\label{3F2}
\left\{
\begin{array}{l}
\ds \bF^{(2)} \,:=\, \bE(t^n,\bx^n) \,-\,
\chi(e^n,\bw^n)\,\nabla_\bx (\ln(b))(t^n,\bx^n)
\,-\,b(t^n,\bx^n)\,\frac{(\bw^{(2)})^\perp}{\varepsilon }\,,
\\
\,
\\
S^{(2)}\,:=\,\bE(t^n,\bx^n)\cdot\bw^{(2)}.
\end{array}\right.
\end{equation}
Then, for the third stage we first compute explicitly intermediate values
$$
\left\{
\begin{array}{l}
\ds{\hat{\bx}^{(2)} \,:=\, \bx^{n} + \frac{\Delta t}{\varepsilon }\,\bw^{(2)}
\,=\,\bx^n+\frac1\alpha\left(\bx^{(1)}-\bx^n+\bx^{(2)}-\bx^n\right),}
\\
\,
\\
\ds{\hat{e}^{(2)} \,:=\, e^{n} + \frac{\Delta t}{\varepsilon }\,S^{(2)}
\,=\,e^n+\frac1\alpha\left(e^{(1)}-e^n+e^{(2)}-e^n\right)\,,}
\\
\,
\\
\ds{\hat{\bw}^{(2)} \,:=\, \bw^{n} + \frac{\Delta t}{\varepsilon }\,\bF^{(2)}
\,=\,\bw^n+\frac1\alpha\left(\bw^{(1)}-\bw^n+\bw^{(2)}-\bw^n\right)}
\end{array}\right.
$$
and then construct the new stage
$\left(\bx^{(3)},e^{(3)},\bw^{(3)}\right)$
\begin{equation}
\label{scheme:4-3}
\left\{
\begin{array}{l}
\ds{\bx^{(3)} \,=\, \bx^{n} \,+\, \frac{(1-\alpha)\Delta t}{\varepsilon }\,\bw^{(2)} \,+\, \frac{\alpha\Delta t}{\varepsilon }\,\bw^{(3)},}
\\
\,
\\
\ds{e^{(3)} \,=\, e^{n} \,+\, \frac{(1-\alpha)\Delta t}{\varepsilon }\,S^{(2)} \,+\, \frac{\alpha\Delta t}{\varepsilon }\,S^{(3)},}
\\
\,
\\
\ds{\bw^{(3)} \,=\, \bw^{n} \,+\, \frac{(1-\alpha)\Delta
t}{\varepsilon }\,\bF^{(2)} \,+\, \frac{\alpha\Delta
t}{\varepsilon }\,\bF^{(3)},}
\end{array}\right.
\end{equation}
with
$$
\left\{
\begin{array}{l}
\ds{\bF^{(3)} \,:=\, \bE\left(t^{n+1},\hat{\bx}^{(2)}\right) \,-\, \chi\left(\hat{e}^{(2)},\hat{\bw}^{(2)}\right)\,\nabla_\bx (\ln\left(b\right))(t^{n+1},\hat{\bx}^{(2)}) \,-\,b\left(t^{n+1},\hat{\bx}^{(2)}\right)\,\frac{(\bw^{(3)})^\perp}{\varepsilon }\,,}
\\
\,
\\
\ds{S^{(3)} \,:=\, \bE\left(t^{n+1},\hat{\bx}^{(2)}\right)\cdot \bw^{(3)}\,.}
\end{array}\right.
$$
Finally, we use explicit intermediate values
$\left(\hat{\bx}^{(3)}, \hat{e}^{(3)},\hat{\bw}^{(3)}\right)$
$$
\left\{
\begin{array}{l}
\ds \hat{\bx}^{(3)} \,:=\, \bx^{n} \,+\, \frac{\Delta t}{4\varepsilon }
\left( \bw^{(2)} + \bw^{(3)} \right)
\,=\,\bx^n+\frac{1}{4\alpha}\left(\bx^{(3)}-\bx^n-\frac{1-2\alpha}{\alpha}(\bx^{(2)}-\bx^n+\bx^{(1)}-\bx^n)\right),
\\
\,
\\
\ds \hat{e}^{(3)} \,:=\, e^{n} \,+\, \frac{\Delta t}{4\varepsilon }
\left( S^{(2)} + S^{(3)} \right)
\,=\,e^n+\frac{1}{4\alpha}\left(e^{(3)}-e^n-\frac{1-2\alpha}{\alpha}(e^{(2)}-e^n+e^{(1)}-e^n)\right),
\\
\,
\\
\ds \hat{\bw}^{(3)} \,:=\, \bw^{n} \,+\, \frac{\Delta t}{4\varepsilon }
\left( \bF^{(2)} + \bF^{(3)} \right)
\,=\,\bw^n+\frac{1}{4\alpha}\left(\bw^{(3)}-\bw^n-\frac{1-2\alpha}{\alpha}(\bw^{(2)}-\bw^n+\bw^{(1)}-\bw^n)\right),
\end{array}\right.
$$
to carry out the fourth stage
\begin{equation}
\label{scheme:4-4}
\left\{
\begin{array}{l}
\ds\bx^{(4)} \,=\, \bx^{n} \,+\, \frac{\beta\Delta
t}{\varepsilon }\,\bw^{(1)}\,+\, \frac{\eta\Delta t}{\varepsilon }\,\bw^{(2)} \,+\, \frac{\gamma\Delta t}{\varepsilon }\,\bw^{(3)}\,+\, \frac{\alpha\Delta t}{\varepsilon }\,\bw^{(4)}\,,
\\
\,
\\
\ds e^{(4)} \,=\, e^{n} \,+\, \frac{\beta\Delta
t}{\varepsilon }\,S^{(1)}\,+\,\frac{\eta\Delta t}{\varepsilon }\,S^{(2)}\, +\, \frac{\gamma\Delta t}{\varepsilon }\,S^{(3)}\,+\, \frac{\alpha\Delta t}{\varepsilon }\,S^{(4)}\,,
\\
\,
\\
\ds\bw^{(4)} \, =\, \bw^{n} \,+\, \frac{\beta\Delta
t}{\varepsilon }\,\bF^{(1)}\,+\, \frac{\eta\Delta t}{\varepsilon }\,\bF^{(2)}\,+\,
\frac{\gamma\Delta t}{\varepsilon }\,\bF^{(3)}\,+\, \frac{\alpha\Delta
t}{\varepsilon }\,\bF^{(4)},
\end{array}\right.
\end{equation}
with
$$
\left\{
\begin{array}{l}
\ds \bF^{(4)} \,:=\, \bE\left(t^{n+1/2},\hat{\bx}^{(3)}\right) \,-\, \chi\left(\hat{e}^{(3)},\hat{\bw}^{(3)}\right)\,\nabla_\bx (\ln\left(b\right))\left(t^{n+1/2},\hat{\bx}^{(3)}\right) \,-\,b\left(t^{n+1/2},\hat{\bx}^{(3)}\right)\,\frac{(\bw^{(4)})^\perp}{\varepsilon },
\\
\,
\\
\ds S^{(4)} \,:=\,
\bw^{(4)}\cdot\bE\left({t}^{n+1/2},\hat{\bx}^{(3)}\right)\,,
\end{array}\right.
$$
where $t^{n+1/2}=t^n+\Delta t/2$. The numerical solution at the new
time step is finally given by
\begin{equation}
\label{scheme:4-5}
\left\{
\begin{array}{l}
\bx^{n+1} \,=\, \ds \bx^{n} \,+\, \frac{\Delta t}{6\varepsilon } \left( \bw^{(2)} \,+\,
\bw^{(3)} \,+\, 4\, \bw^{(4)} \right),
\\
\,
\\
e^{n+1} \,=\, \ds e^{n} \,+\, \frac{\Delta t}{6\varepsilon } \left( S^{(2)} \,+\,
S^{(3)} \,+\, 4\, S^{(4)} \right),
\\
\,
\\
\bw^{n+1} \,=\, \ds \bw^{n} \,+\, \frac{\Delta t}{6\varepsilon } \left( \bF^{(2)} \,+\,
\bF^{(3)} \,+\, 4 \,\bF^{(4)} \right).
\end{array}\right.
\end{equation}
We emphasize that in all our schemes, at each stage, the implicit computation only requires the resolution of a two-by-two linear system. Therefore the computational effort is of the same order as that of fully explicit schemes.
\begin{proposition}[Consistency in the
limit $\varepsilon \rightarrow 0$ for a fixed $\Delta t$]
\label{prop:5}
Let us consider a time step $\Delta t>0$, a final time $T>0$ and
set $N_T=\lfloor T/\Delta t\rfloor$. Assume that $(\bx^n_\varepsilon ,\bw^n_\varepsilon ,e^{n}_\varepsilon )_{0\leq
n\leq N_T}$ is a sequence obtained by \eqref{scheme:4-1}-\eqref{scheme:4-5} and such that
\begin{itemize}
\item for all $1\leq n\leq N_T$, $\left(\bx^n_\varepsilon ,\varepsilon \bw^n_\varepsilon , e^n_\varepsilon \right)_{\varepsilon >0}$ is uniformly bounded with respect to $\varepsilon >0$;
\item $\left(\bx^0_\varepsilon ,\bw_\varepsilon ^0,e^0_\varepsilon \right)_{\varepsilon >0}$ converges in the limit $\varepsilon \rightarrow 0$ to some $(\by^0,\bw^0,g^0)$.
\end{itemize}
Then, for any $1\leq n\leq N_T$, $(\bx^n_\varepsilon ,e^n_\varepsilon )_{\varepsilon >0}$ converges to some $(\by^n,g^n)$ as $\varepsilon \rightarrow 0$ and the limiting sequence $(\by^n,g^n)_{0\leq n\leq N_T}$ solves
\begin{equation}
\left\{
\begin{array}{l}
\ds\by^{n+1} = \by^{n} \,+\, \frac{\Delta t}{6} \,\left( \bU^{n} \,+\, \bU^{(1)} \,+\, 4\,\bU^{(2)}\right)\\\ds
g^{n+1} = g^{n} \,+\, \frac{\Delta t}{6} \,\left( T^{n} \,+\, T^{(1)} \,+\, 4\,T^{(2)}\right)
\label{sch:y40}
\end{array}\right.
\end{equation}
where
$$
\left\{
\begin{array}{l}
\ds \bU^{n} = -\frac{1}{b(t^n,\by^n)} \Big( \bE(t^n,\by^n) - g^n \,\nabla_\by(\ln(b))(t^n,\by^n)\Big)^\perp,
\\
\,
\\
\ds T^{n} = g^n \,\frac{\nabla_\by^\perp b}{b^2}(t^n,\by^n) \cdot \bE(t^n,\by^n),
\end{array}\right.
$$
whereas $\bU^{(1)}$ and $T^{(1)}$ are given by
$$
\left\{
\begin{array}{l}
\ds \bU^{(1)} = -\frac{1}{b(t^{n+1},\by^{(1)})} \Big( \bE(t^{n+1},\by^{(1)}) - g^{(1)} \,\nabla_\by(\ln(b))(t^{n+1},\by^{(1)})\Big)^\perp,
\\
\,
\\
\ds T^{(1)} = g^{(1)} \,\frac{\nabla_\by^\perp b}{b^2}(t^{n+1},\by^{(1)}) \cdot \bE(t^{n+1},\by^{(1)})
\end{array}\right.
$$
and $\bU^{(2)}$ and $T^{(2)}$
$$
\left\{
\begin{array}{l}
\ds \bU^{(2)} = -\frac{1}{b(t^{n+1/2},\by^{(2)})} \Big( \bE(t^{n+1/2},\by^{(2)}) - g^{(2)} \,\nabla_\by(\ln(b))(t^{n+1/2},\by^{(2)})\Big)^\perp,
\\
\,
\\
\ds T^{(2)} = g^{(2)} \,\frac{\nabla_\by^\perp b}{b^2}(t^{n+1/2},\by^{(2)}) \cdot \bE(t^{n+1/2},\by^{(2)})
\end{array}\right.
$$
with $t^{n+1/2}=t^n+\Delta t/2$,
$$
\by^{(1)}\, =\, \by^n \,+\, \Delta t\, \bU^n,\qquad g^{(1)} \,=\, g^n \,+\, \Delta t\, T^n\,,
$$
and
$$
\by^{(2)}\, =\, \by^n \,+\, \frac{\Delta t}{4}\,
\left(\bU^n+\bU^{(1)}\right),\qquad g^{(2)} \,=\, g^n \,+\,
\frac{\Delta t}{4}\, \left( T^n + T^{(1)}\right),
$$
which provides a consistent third-order approximation of the gyro-kinetic system \eqref{traj:limit}.
\end{proposition}
We omit the proof of Proposition~\ref{prop:5} as it is almost identical to that of Proposition~\ref{prop:3}.
\begin{remark}
For both our second- and third-order schemes, the limit obtained when $\varepsilon \to0$ fails to satisfy the first step of the limiting schemes, up to a first-order error. This is obviously a more serious problem than in the first-order case. However, though we do not pursue this line of investigation here, one should be able to fix this issue by replacing the very first step (from $t^0$ to $t^1$) of the original ($\varepsilon $-dependent) computation with a step where the third equations of the intermediate values are also treated implicitly.
\end{remark}
\section{Numerical simulations}
\label{sec:5}
\setcounter{equation}{0}
In this section, we provide examples of numerical computations to validate and compare
the different time discretization schemes introduced in the previous section.
We first consider the motion of a single particle under the effect of a given
electromagnetic field. This allows us to illustrate the ability of the semi-implicit schemes
to capture, in the limit $\varepsilon\rightarrow 0$, the drift velocities due to variations of the magnetic and electric fields, even with large time steps $\Delta t$.
Then we consider the Vlasov-Poisson system with an external non-uniform magnetic field. As already mentioned, we implement a classical particle-in-cell method, but with different time discretization techniques to compute the particle trajectories. In this case, the collection of charged particles moves collectively and gives rise to a self-consistent electric field, obtained by solving numerically the Poisson equation on a spatial grid.
\subsection{One single particle motion without electric field}
Before simulating at the statistical level, we investigate, on the motion of individual particles in a given magnetic field, the accuracy and stability properties with respect to $\varepsilon >0$ of the semi-implicit algorithms presented in Section~\ref{sec:3}.
Numerical experiments of the present subsection are run with a zero electric field $\bE=0$, and a time-independent external magnetic field corresponding to
$$
b\,:\quad \RR^2\to\RR\,,\qquad \bx=(x,y)\,\mapsto\,1\,+\,\alpha\,x^2
$$
with $\alpha=0.5$. Moreover we choose for all simulations the initial data as
$\bx^0=(5,4)$, $\bv^0=(5,6)$, whereas the final time is $T=2$. In this case,
the asymptotic drift velocity predicted by the limiting model~\eqref{traj:limit} is explicitly given by
$$
\bU(\bx,e) \,=\, \frac{2\,\alpha\,e}{(1+\alpha \,x^2)^2}
\left(\begin{array}{l} 0\\x\end{array}\right).
$$
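As a sanity check, this closed-form drift can be compared numerically with the generic expression $\bU=-(\bE-e\,\nabla_\bx(\ln b))^\perp/b$ specialized to $\bE=0$ and this particular $b$ (Python sketch, our naming):

```python
import numpy as np

alpha = 0.5

def b(x):
    return 1.0 + alpha * x[0]**2

def grad_log_b(x):
    # grad(ln b) = grad(b)/b with grad(b) = (2 alpha x, 0)
    return np.array([2.0 * alpha * x[0] / b(x), 0.0])

def perp(u):
    return np.array([-u[1], u[0]])

def drift(x, e):
    # generic drift with E = 0: U = -(0 - e grad ln b)^perp / b
    return -perp(-e * grad_log_b(x)) / b(x)

x0 = np.array([5.0, 4.0])
e0 = 0.5 * (5.0**2 + 6.0**2)   # preserved in time since E = 0
U_closed = (2.0 * alpha * e0 / b(x0)**2) * np.array([0.0, x0[0]])
```

Both expressions agree to machine precision, confirming the displayed formula for $\bU$.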
First, for comparison, we compute a reference solution $(\bX^\varepsilon ,\bw^\varepsilon ,e^\varepsilon )_{\varepsilon >0}$
to the initial problem \eqref{traj:ter} thanks to an explicit
fourth-order Runge-Kutta scheme used with a very small time step of
the order of $\varepsilon ^2$, and a reference solution $(\bY,g)$ to the (non-stiff)
asymptotic model \eqref{traj:limit} obtained when $\varepsilon 
\rightarrow 0$. Recall that the derivation of \eqref{traj:limit} also
shows weak convergence of $\varepsilon ^{-1}\bw^\varepsilon $ to $\bU^0=\bU(\bY,g)$ in
the limit $\varepsilon \to0$. Then we compute an approximate solution
$(\bX^\varepsilon _{\Delta t}, \bw^\varepsilon _{\Delta t}, e^\varepsilon _{\Delta t})$ using either \eqref{scheme:3-1}--\eqref{scheme:3-3} or \eqref{scheme:4-1}--\eqref{scheme:4-5}, and compare them to the reference solutions.
Our goal is to evaluate the accuracy of the numerical solution
$(\bX^\varepsilon _{\Delta t}, \bw^\varepsilon _{\Delta t}, e^\varepsilon _{\Delta t})$ for various regimes when both $\varepsilon $ and $\Delta t$ vary. Computed errors are measured in discrete $L^1$ norms. For instance, we set\footnote{When necessary and not too confusing, as here, we use the same piece of notation for continuous time-dependent functions and their discrete counterpart, obtained by restriction to discrete times.}
$$
\left\{
\begin{array}{ll}
\ds\|\bX^\varepsilon _{\Delta t} - \bX^\varepsilon \|\,:=\, \frac{\Delta t
}{T}\sum_{n=0}^{N_T} \|\bX^{\varepsilon ,n}_{\Delta t} - \bX^\varepsilon (t^n) \|\,,
\\[0.5em]
\ds\varepsilon ^{-1}\,\ds\|\bw^\varepsilon _{\Delta t} - \bw^\varepsilon \|\,:=\, \frac{\Delta t
}{\varepsilon \, T}\sum_{n=0}^{N_T} \|\bw^{\varepsilon ,n}_{\Delta t} - \bw^\varepsilon (t^n) \|\,.
\end{array}\right.
$$
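These discrete norms are computed directly from the stored trajectories; a minimal sketch (our naming, with trajectories stored as sequences of positions at the discrete times $t^n$) is:

```python
import numpy as np

def discrete_l1_error(traj_num, traj_ref, dt, T):
    """(dt/T) * sum_n || X^n_num - X^n_ref || over the stored discrete times."""
    return (dt / T) * sum(np.linalg.norm(np.asarray(a) - np.asarray(b))
                          for a, b in zip(traj_num, traj_ref))
```

The velocity error is obtained from the same routine applied to the $\bw$-trajectories and divided by $\varepsilon $.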
In Figure~\ref{fig0:1}, we present the numerical errors on the space and velocity\footnote{In graphics we use $\bv$ as a short-hand for $\varepsilon ^{-1}\bw$. This should not be confused with the actual reconstruction of the velocity variable used at the statistical level.} variables between the reference solution of the initial problem \eqref{traj:00} and the one obtained with the third-order scheme \eqref{scheme:4-1}-\eqref{scheme:4-5}. As expected, for a fixed time step $\Delta t$ (taken here between $0.0025$ and $0.01$), the scheme is quite stable even in the limit $\varepsilon \rightarrow 0$ and the error on the space variable, measured by $\|\bX^\varepsilon _{\Delta t} - \bX^\varepsilon \|$, is uniformly small and reaches its maximum in the intermediate regimes, $\varepsilon \in (0.05,\;0.5)$. In contrast, for a fixed time step, the error on the velocity variable, measured by $\varepsilon ^{-1}\,\|\bw^\varepsilon _{\Delta t} - \bw^\varepsilon \|$, is small for large values of $\varepsilon $, but gets very large when $\varepsilon \rightarrow 0$ since the scheme cannot follow the high-frequency time oscillations of order $\varepsilon ^{-2}$ when $\varepsilon \ll\sqrt{\Delta t}$.
\begin{figure}[ht!]
\begin{center}
\begin{tabular}{cc}
\includegraphics[width=8.cm]{test0_fig1.pdf} &
\includegraphics[width=8.cm]{test0_fig3.pdf}
\\
(a) & (b)
\end{tabular}
\caption{\label{fig0:1}
{\bf One single particle motion without electric field.} Numerical errors (a)
$\|\bX^\varepsilon _{\Delta t}-\bX^\varepsilon \|$, (b)
$\varepsilon ^{-1}\,\|\bw^\varepsilon _{\Delta t}-\bw^\varepsilon \|$ obtained for different time steps $\Delta t$ with third-order scheme \eqref{scheme:4-1}-\eqref{scheme:4-5}, plotted as functions of $\varepsilon $.}
\end{center}
\end{figure}
In Figure \ref{fig0:2}, for the same set of simulations, we plot errors with respect to the limiting system~\eqref{traj:limit}. This shows (see Figure \ref{fig0:2}~(b)) that, for a fixed $\Delta t$, we do nevertheless capture the correct drift velocities in the limit $\varepsilon \to0$. More generally, as expected, errors on both space and velocity variables with respect to the limiting asymptotic values get smaller and smaller as $\varepsilon \to0$.
\begin{figure}[ht!]
\begin{center}
\begin{tabular}{cc}
\includegraphics[width=8.cm]{test0_fig2.pdf} &
\includegraphics[width=8.cm]{test0_fig4.pdf}
\\
(a) & (b)
\end{tabular}
\caption{\label{fig0:2}
{\bf One single particle motion without electric field.} Numerical errors (a)
$\|\bX^\varepsilon _{\Delta t}-\bY\|$, (b)
$\|\varepsilon ^{-1}\,\bw^\varepsilon _{\Delta t}-\bU^0\|$ obtained for different time steps $\Delta t$ with third-order scheme \eqref{scheme:4-1}-\eqref{scheme:4-5}, plotted as functions of $\varepsilon $.}
\end{center}
\end{figure}
Finally, in Figure~\ref{fig0:3} we show similar quantities for the second-order scheme \eqref{scheme:3-1}-\eqref{scheme:3-3}, holding the time step fixed to $\Delta t = 0.01$. For comparison, we also plot results of the third-order scheme. Errors of the third-order scheme are smaller but the behaviors are qualitatively the same: uniform precision on the slow variable, convergence to asymptotic descriptions as $\varepsilon \to0$, and consistency of the velocity variable with the velocity of the true solution when $\varepsilon \gg\sqrt{\Delta t}$ and with the asymptotic drift when $\varepsilon \ll\sqrt{\Delta t}$.
\begin{center}
\begin{figure}[ht!]
\begin{tabular}{cc}
\includegraphics[width=8.cm]{test0_fig8.pdf} &
\includegraphics[width=8.cm]{test0_fig9.pdf}
\\
\includegraphics[width=8.cm]{test0_fig5.pdf} &
\includegraphics[width=8.cm]{test0_fig6.pdf}
\\
(a) & (b)
\end{tabular}
\caption{\label{fig0:3}
{\bf One single particle motion without electric field.} Numerical errors on (a)
$(\bX^\varepsilon _{\Delta t})_{\varepsilon >0}$ and (b) $(\varepsilon ^{-1}\,\bw^\varepsilon _{\Delta t})_{\varepsilon >0}$ obtained with second-order scheme \eqref{scheme:3-1}-\eqref{scheme:3-3} and third-order scheme \eqref{scheme:4-1}-\eqref{scheme:4-5}, plotted as functions of $\varepsilon $.}
\end{figure}
\end{center}
\subsection{One single particle motion with an electromagnetic field}
We now add to the otherwise unmodified previous setting a non-trivial electric field
$$
\bE\,:\quad \RR^2\to\RR^2\,,\qquad \bx=(x,y)\,\mapsto\,(0,-y)\,.
$$
In this case, in the limit $\varepsilon \to0$ the behavior of the microscopic kinetic energy remains non-trivial (in contrast with what happens in the previous subsection), so that we also measure errors on its evaluation. Moreover, the electric field also contributes to the asymptotic drift velocities, which are now given by
$$
\bU^0(\bx,e) \,=\, \frac{1}{(1+\alpha\,x^2)} \left(\begin{array}{l} -y \\\,\\
\ds\frac{2\,\alpha\,e\,x}{1+\alpha\,x^2}\end{array}\right),
$$
with $\alpha=1/2$. This simple geometric configuration allows us to clearly distinguish the contribution to the drift velocity due to the interaction of the electric field with the magnetic field from purely magnetic effects due to the gradient of the magnetic field.
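For reference, the drift formula translates directly into a small routine; the function below (illustrative) evaluates both components of $\bU^0$, the first stemming from the interaction of the electric field with the magnetic field and the second from the magnetic gradient.

```python
def drift_velocity(x, y, e, alpha=0.5):
    """Asymptotic drift U^0(bx, e) of the formula above, for bx = (x, y) and
    microscopic kinetic energy e; alpha = 1/2 in the experiments."""
    denom = 1.0 + alpha * x * x
    # first component: electric-field-induced drift; second: gradient drift
    return (-y / denom, 2.0 * alpha * e * x / (denom * denom))
```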
Quantitative error measurements are provided in Figure~\ref{fig1:1}. Qualitative features are completely analogous to those of the previous subsection, the microscopic kinetic energy being approximated with essentially the same accuracy as the other slow variable $\bx$.
\begin{center}
\begin{figure}[ht!]
\begin{tabular}{ccc}
\includegraphics[width=5.3cm]{test1_fig1.pdf} &
\includegraphics[width=5.3cm]{test1_fig3.pdf} &
\includegraphics[width=5.3cm]{test1_fig2.pdf}
\\
\includegraphics[width=5.3cm]{test1_fig7.pdf} &
\includegraphics[width=5.3cm]{test1_fig9.pdf} &
\includegraphics[width=5.3cm]{test1_fig8.pdf}
\\
(a) & (b) & (c)
\end{tabular}
\caption{\label{fig1:1}
{\bf One single particle motion with an electromagnetic field.} Numerical errors on (a)
$(\bX^\varepsilon _{\Delta t})_{\varepsilon >0}$, (b) $(e^\varepsilon _{\Delta
t})_{\varepsilon >0}$ and (c) $(\varepsilon ^{-1}\,\bw^\varepsilon _{\Delta t})_{\varepsilon >0}$ obtained with the second-order scheme
\eqref{scheme:3-1}-\eqref{scheme:3-3} and third-order scheme
\eqref{scheme:4-1}-\eqref{scheme:4-5}, plotted as functions of $\varepsilon $.}
\end{figure}
\end{center}
To visualize rather than quantify what happens in the limit $\varepsilon \to0$, we now also represent space trajectories corresponding to different values of $\varepsilon $, holding the time step fixed to $\Delta t=0.01$. On the reference solution one sees, as $\varepsilon \to0$, faster and faster oscillations --- of order $\varepsilon ^{-2}$ --- but with smaller and smaller amplitude --- of order $\varepsilon $ --- around guiding center trajectories, as predicted by\footnote{Note incidentally that this illustrates a classical abuse in terminology, used throughout the present paper. Variables $(\bX^\varepsilon ,e^\varepsilon )$ are not really slow but they are uniformly close to genuinely slow variables.} \eqref{traj:limit}. When $\varepsilon \gg0.1$ the approximate solution follows the reference solution closely, whereas when $\varepsilon \ll0.1$ the two are also almost indistinguishable, though the approximate solution does not reproduce the infinitely fast oscillations of the true solution. The main discrepancies are observed for intermediate values, here exemplified by the case $\varepsilon =0.2$, when the true solution is already too fast for the approximate solution but not yet close enough to the guiding center dynamics.
\begin{center}
\begin{figure}[ht!]
\begin{tabular}{cc}
\includegraphics[width=8.cm]{test1_fig10-bis.pdf} &
\includegraphics[width=8.cm]{test1_fig10.pdf}
\\
(a) & (b)
\\
\includegraphics[width=8.cm]{test1_fig11.pdf} &
\includegraphics[width=8.cm]{test1_fig12.pdf}
\\
(c) & (d)
\end{tabular}
\caption{\label{fig1:2}
{\bf One single particle motion with an electromagnetic field.} Space trajectories
$(\bX^\varepsilon _{\Delta t})_{\varepsilon >0}$ for (a) $\varepsilon =3$, (b) $\varepsilon =1$, (c)
$\varepsilon =0.2$ and (d) $\varepsilon =0.01$, computed with third-order scheme \eqref{scheme:4-1}-\eqref{scheme:4-5}.}
\end{figure}
\end{center}
In conclusion, these elementary numerical simulations confirm the ability of the semi-implicit discretization to capture the evolution of the ``slow'' variables $(\bX^\varepsilon ,e^\varepsilon )$ uniformly with respect to $\varepsilon $, by essentially transitioning automatically to the guiding center motion encoded by \eqref{traj:limit} when it is no longer able to follow the fast oscillations of the true evolution.
\subsection{The Vlasov-Poisson system}
We now consider the Vlasov-Poisson system \eqref{eq:vlasov2d} set in a disk $\Omega=D(0,6)$ centered at the origin and of radius $6$. Our simulations start from an initial distribution that is Maxwellian in velocity and whose macroscopic density is the sum of two Gaussians, explicitly
$$
f_0(\bx,\bv) \,=\, \frac{1}{16\pi^2} \left[
\exp\left(-\frac{\|\bx-\bx_0\|^2}{2}\right) + \exp\left(-\frac{\|\bx+\bx_0\|^2}{2}\right)\right]\, \exp\left(-\frac{\|\bv\|^2}{4}\right),
$$
with $\bx_0=(3/2,-3/2)$. The parameter $\varepsilon $ is set either to $\varepsilon =1$, leading to
a non-stiff problem or to $\varepsilon =0.05$, where the asymptotic regime is relevant. For both regimes we compute numerical solutions to the Vlasov-Poisson system \eqref{eq:vlasov2d} with the third-order scheme \eqref{scheme:4-1}-\eqref{scheme:4-5} and time step $\Delta t=0.1$. Note that heuristic arguments based on single-particle simulations suggest that when $\varepsilon =0.05$ this enforces a regime where we are close to the asymptotic limit and our schemes cannot capture fast oscillations of the true solution.
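Since $f_0$ is a product of Gaussians, initial PIC particles can be drawn by direct sampling; a minimal sketch (illustrative; the handling of the rare samples falling outside the disk $\Omega$ is omitted here) reads:

```python
import random

def sample_initial_particles(n, x0=(1.5, -1.5), seed=0):
    """Draw n particles from f_0: positions from an equal-weight mixture of
    two unit-variance Gaussians centred at x0 and -x0, velocities from the
    Maxwellian exp(-|v|^2/4), i.e. variance 2 per component."""
    rng = random.Random(seed)
    particles = []
    for _ in range(n):
        cx, cy = x0 if rng.random() < 0.5 else (-x0[0], -x0[1])
        pos = (rng.gauss(cx, 1.0), rng.gauss(cy, 1.0))
        vel = (rng.gauss(0.0, 2.0 ** 0.5), rng.gauss(0.0, 2.0 ** 0.5))
        particles.append((pos, vel))
    return particles
```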
We run two sets of numerical simulations with a time-independent inhomogeneous magnetic field
$$
b\,:\quad \RR^2\to\RR\,,\qquad \bx\,\mapsto\,\frac{10}{\sqrt{10^2-\|\bx\|^2}}
$$
which is radially increasing, with value one at the origin.
In Figure~\ref{fig2:0} we represent time evolutions of the total energy
$$
\mathcal{E}(t) \,:=\,
\iint_{\Omega\times\RR^2} f(t,\bx,\bv)\,\frac{\|\bv\|^2}{2}\,\dd\bx\,\dd \bv
\,+\,\frac{1}{2}\int_{\Omega} \|\bE(t,\bx) \|^2 \dd\bx
$$
and of the adiabatic invariant
$$
\mu(t)=\int_{\Omega}\int_{\RR^2}f(t,\bx,\bv)\,\frac{\|\bv\|^2}{2b(\bx)}
\,\dd\bx\,\dd\bv\,.
$$
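At the particle level both diagnostics reduce to weighted sums, the electrostatic part of the energy being supplied by the Poisson solver; schematically (the function names and data layout are illustrative):

```python
def b_field(x, y):
    """Inhomogeneous magnetic field of the experiments: b(x) = 10/sqrt(100 - |x|^2)."""
    return 10.0 / (100.0 - x * x - y * y) ** 0.5

def diagnostics(particles, weights, field_energy):
    """Particle approximations of the total energy E(t) and the adiabatic
    invariant mu(t); 'particles' holds ((x, y), (vx, vy)) pairs and
    'field_energy' is one half of the integral of |E|^2 from the field solver."""
    kinetic = sum(w * 0.5 * (vx * vx + vy * vy)
                  for ((x, y), (vx, vy)), w in zip(particles, weights))
    mu = sum(w * 0.5 * (vx * vx + vy * vy) / b_field(x, y)
             for ((x, y), (vx, vy)), w in zip(particles, weights))
    return kinetic + field_energy, mu
```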
As far as smooth solutions are concerned, the total energy is
preserved both by the original $\varepsilon $-dependent model and by the
asymptotic model \eqref{eq:gc}. One goal of our experimental
observations is to check that, even though our scheme dissipates part
of the velocity variable to reach the asymptotic regime corresponding
to \eqref{eq:gc}, it does respect this conservation. In contrast, an
essentially exact conservation of the adiabatic invariant is a sign
that we have reached the limiting asymptotic regime, since it does not
hold for the original model but does hold for the asymptotic
model~\eqref{eq:gc}, as $b$ is time-independent and $\bE$ is
curl-free. Observe that, since $b$ is not homogeneous, even in the
asymptotic regime the kinetic and potential parts of the total energy
are not preserved separately, but the total energy corresponding to
the Vlasov-Poisson system is still preserved. Figure~\ref{fig2:0}
shows that all these features are captured satisfactorily by our
scheme, even in long time evolutions with a large time step and
despite its dissipative implicit nature, both when $\varepsilon =1$
(Figure~\ref{fig2:0}, left column) and in the asymptotic regime when
$\varepsilon =0.05$ (Figure~\ref{fig2:0}, right column).
\begin{figure}[ht!]
\begin{center}
\begin{tabular}{cc}
\includegraphics[width=7.cm]{test2-2_fig1.pdf} &
\includegraphics[width=7.cm]{test2-1_fig1.pdf}
\\
\includegraphics[width=7.cm]{test2-2_fig2.pdf} &
\includegraphics[width=7.cm]{test2-1_fig2.pdf}
\\
(a) $\varepsilon =1$ &(b) $\varepsilon =0.05$
\end{tabular}
\caption{\label{fig2:0}
{\bf The Vlasov-Poisson system.} Time evolution of total energy and
adiabatic invariant with (a) $\varepsilon =1$ and (b) $\varepsilon =0.05$ obtained using
\eqref{scheme:4-1}-\eqref{scheme:4-5} with $\Delta t=0.1$.}
\end{center}
\end{figure}
In Figures~\ref{fig2:1} and \ref{fig2:2} we visualize the corresponding
dynamics by presenting some snapshots of the time evolution of the
macroscopic charge density. In the first case, with $\varepsilon =1$, the magnetic field is not sufficiently large to provide a good confinement of the macroscopic density, and some macroscopic oscillations occur, as already observed in the time evolution of macroscopic
quantities. On the other hand, our inhomogeneous case does exhibit some confinement as $\varepsilon \to0$, since the conservation of $e/b(\bx)$ offers (a weak form of) coercivity jointly in
$(\bx,e)$. Such a confinement is indeed observed when $\varepsilon =0.05$, jointly with the expected eventual merging of the two initial vortices in a relatively short time.
\begin{figure}[ht!]
\begin{center}
\begin{tabular}{cc}
\includegraphics[width=7.cm]{test2-2rho010.jpeg} &
\includegraphics[width=7.cm]{test2-2rho020.jpeg}
\\
$t=10$ & $t=20$
\\
\includegraphics[width=7.cm]{test2-2rho055.jpeg} &
\includegraphics[width=7.cm]{test2-2rho080.jpeg}
\\
$t=55$ & $t=80$
\end{tabular}
\caption{\label{fig2:1}
{\bf The Vlasov-Poisson system.} Snapshots of the time evolution of the macroscopic charge density $\rho$ when $\varepsilon =1$, obtained using
\eqref{scheme:4-1}-\eqref{scheme:4-5} with $\Delta t=0.1$.}
\end{center}
\end{figure}
\begin{figure}[ht!]
\begin{center}
\begin{tabular}{cc}
\includegraphics[width=7.cm]{test2-1rho010.jpeg} &
\includegraphics[width=7.cm]{test2-1rho020.jpeg}
\\
$t=10$ & $t=20$
\\
\includegraphics[width=7.cm]{test2-1rho055.jpeg} &
\includegraphics[width=7.cm]{test2-1rho080.jpeg}
\\
$t=55$ & $t=80$
\end{tabular}
\caption{\label{fig2:2}
{\bf The Vlasov-Poisson system.} Snapshots of the time evolution of the macroscopic charge density $\rho$ when $\varepsilon =0.05$, obtained using
\eqref{scheme:4-1}-\eqref{scheme:4-5} with $\Delta t=0.1$.}
\end{center}
\end{figure}
\section{Conclusion and perspectives}
\label{sec:6}
\setcounter{equation}{0}
In the present paper we have proposed a class of semi-implicit time discretization
techniques for particle-in-cell simulations. The main feature of our approach is to guarantee accuracy and stability on the slow-scale variables even when the amplitude of the magnetic field becomes large, thus allowing a capture of their correct long-time behavior, including cases with inhomogeneous magnetic fields and coarse time grids. Even in long-time simulations, the obtained numerical schemes provide an acceptable accuracy on physical invariants (total energy for any $\varepsilon $, adiabatic invariant when $\varepsilon \ll1$), whereas fast scales are automatically filtered when the time step is large compared to $\varepsilon ^2$.
As a theoretical validation we have proved that under some stability assumptions on numerical approximations, the slow part of the approximation converges when $\varepsilon \rightarrow 0$ to the solution of a limiting scheme for the asymptotic evolution, that preserves the initial order of accuracy. Yet a full proof of uniform accuracy and a classification of admissible schemes --- going beyond the exposition of a few possible schemes --- remain to be carried out.
From a practical point of view, the next natural step would be to consider the genuinely three-dimensional Vlasov-Poisson system. We point out, however, that despite recent progress \cite{Cheverry,PDFF}, the asymptotic limit $\varepsilon \to0$ is much less understood in this context, even at the continuous level with external electromagnetic fields.
\section*{Acknowledgements}
FF was supported by the EUROfusion Consortium and has received funding
from the Euratom research and training programme 2014-2018 under grant
agreement No 633053. The views and opinions expressed herein do not
necessarily reflect those of the European Commission.
LMR was supported in part by the ANR project BoND (ANR-13-BS01-0009-01). LMR also expresses his appreciation of the hospitality of IMT, Universit\'e Toulouse III, during part of the preparation of the present contribution.
\section{Introduction}\label{intro}
The task of supplying an answer to a question, given some background knowledge, is often considered fairly trivial from a human point of view, as long as the question is clear and the answer is known. The aim of an automated question answering system is to provide a single, unambiguous response to a natural language question, given a text collection as a knowledge source, within a certain amount of time. Since 1999, the Text Retrieval Conferences have included a task to evaluate such systems, based on a large pre-defined corpus (such as AQUAINT, containing around a million news articles in English) and a set of unseen questions.
Many information retrieval systems perform document retrieval, giving a list of potentially relevant documents when queried -- Google's and Yahoo!'s search products are examples of this type of application. Users formulate a query using a few keywords that represent the task they are trying to perform; for example, one might search for ``eiffel tower height" to determine how tall the Eiffel tower is. IR engines then return a set of references to potentially relevant documents.
In contrast, QA systems must return an exact answer to the question. They should be confident that the answer has been correctly selected; it is no longer down to the user to research a set of document references in order to discover the information themselves. Further, the system takes a natural language question as input, instead of a few user-selected key terms.
Once a QA system has been provided with a question, its processing steps can be described in three parts - Question Pre-Processing, Text Retrieval and Answer Extraction:
\paragraph{1. Question Pre-Processing} TREC questions are grouped into series which relate to a given target. For example, the target may be ``Hindenburg disaster" with questions such as ``What type of craft was the Hindenburg?" or ``How fast could it travel?". Questions may include pronouns referencing the target or even previous answers, and as such require processing before they are suitable for use.
\paragraph{2. Text Retrieval} An IR component will return a ranked set of texts, based on query terms. Attempting to understand and extract data from an entire corpus is too resource intensive, and so an IR engine defines a limited subset of the corpus that is likely to contain answers. The question should have been pre-processed correctly for a useful set of texts to be retrieved -- including anaphora resolution.
\paragraph{3. Answer Extraction (AE)} Given knowledge about the question and a set of texts, the AE system attempts to identify answers. It should be clear that only answers within texts returned by the IR component have any chance of being found.
\medskip
Reduced performance at any stage will have a knock-on effect, capping the performance of later stages. If questions are left unprocessed and full of pronouns (e.g., ``When did it sink?'') the IR component has very little chance of working correctly -- in this case, the desired action is to retrieve documents related to the Kursk submarine, which would be impossible.
IR performance with a search engine such as Lucene returns no useful documents for at least 35\% of all questions -- when looking at the top 20 returned texts. This caps the AE component at 65\% question ``coverage''. We will measure the performance of different IR component configurations, to rule out problems with a default Lucene setup.
For each question, answers are provided in the form of regular expressions that match answer text, and a list of documents containing these answers in a correct context. As references to correct documents are available, it is possible to explore a data-driven approach to query analysis. We determine which questions are hardest then concentrate on identifying helpful terms found in correct documents, with a view to building a system than can automatically extract these helpful terms from unseen questions and supporting corpus. The availability and usefulness of these terms will provide an estimate of performance for query expansion techniques.
There are at least two approaches which could make use of these term sets to perform query expansion. They may occur in terms selected for blind RF (non-blind RF is not applicable to the TREC QA task). It is also possible to build a catalogue of terms known to be useful according to certain question types, thus leading to a dictionary of (known useful) expansions that can be applied to previously unseen questions. We will evaluate and also test blind relevance feedback in IR for QA.
\section{Background and Related Work}
The performance of an IR system can be quantified in many ways. We choose and define measures pertinent to IR for QA. Work has been done on relevance feedback specific to IR for QA, where it has usually been found to be unhelpful. We outline the methods used in the past, extend them, and provide and test means of validating QA relevance feedback.
\subsection{Measuring QA Performance}
This paper uses two principle measures to describe the performance of the IR component. \emph{Coverage} is defined as the proportion of questions where at least one answer bearing text appears in the retrieved set. \emph{Redundancy} is the average number of answer bearing texts retrieved for each question~\cite{Roberts:Passage}.
Both these measures have a fixed limit $n$ on the number of texts retrieved by a search engine for a query. As redundancy counts the number of texts containing correct answers, and not instances of the answer itself, it can never be greater than the number of texts retrieved.
The TREC reference answers provide two ways of finding a correct text, with both a regular expression and a document ID. Lenient hits (retrievals of answer bearing documents) are those where the retrieved text matches the regular expression; strict hits occur when the document ID of the retrieved text matches that declared by TREC as correct \emph{and} the text matches the regular expression. Some documents will match the regular expression but not be deemed as containing a correct answer (this is common with numbers and dates~\cite{MIR:166}), in which case a lenient match is found, but not a strict one.
The answer lists as defined by TREC do not include every answer-bearing document -- only those returned by previous systems and marked as correct. Thus, false negatives are a risk, and strict measures place an approximate lower bound on the system's actual performance. Similarly, lenient matches can occur out of context, without a supporting document; performance based on lenient matches can be viewed as an approximate upper bound~\cite{lin2005brt}.
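Both measures, in their strict and lenient variants, follow mechanically from the TREC answer specifications; the sketch below (function names and data layout are illustrative, not those of the actual evaluation tool) shows the logic.

```python
import re

def evaluate_retrieval(runs, answers, strict=True):
    """Coverage and redundancy over a question set.

    runs: {qid: [(doc_id, text), ...]} -- top-n retrieved texts per question.
    answers: {qid: (regex, set_of_correct_doc_ids)} from the TREC judgements.
    A lenient hit only needs the regex to match; a strict hit also needs the
    document ID to be among those judged correct."""
    hits_per_q = []
    for qid, docs in runs.items():
        pattern, correct_ids = answers[qid]
        hits = 0
        for doc_id, text in docs:
            if re.search(pattern, text) and (not strict or doc_id in correct_ids):
                hits += 1
        hits_per_q.append(hits)
    coverage = sum(1 for h in hits_per_q if h > 0) / len(hits_per_q)
    redundancy = sum(hits_per_q) / len(hits_per_q)
    return coverage, redundancy
```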
\subsection{Relevance Feedback}
Relevance feedback is a widely explored technique for query expansion. It is often done using a specific measure to select terms using a limited set of ranked documents of size $r$; using a larger set will bring term distribution closer to values over the whole corpus, and away from ones in documents relevant to query terms. Techniques are used to identify phrases relevant to a query topic, in order to reduce noise (such as terms with a low corpus frequency that relate to only a single article) and query drift~\cite{roussinov,allan96incremental}.
In the context of QA, Pizzato~\shortcite{pizzato} employs blind RF using the AQUAINT corpus in an attempt to improve performance when answering factoid questions on personal names. This is a similar approach to some content in this paper, though limited to the study of named entities, and does not attempt to examine extensions from the existing answer data.
Monz~\shortcite{monz2003drq} finds a negative result when applying blind feedback for QA in TREC 9, 10 and 11, and a neutral result for TREC 7 and 8's ad hoc retrieval tasks. Monz's experiment, using $r=10$ and standard Rocchio term weighting, also found a further reduction in performance when $r$ was reduced (from 10 to 5). This is an isolated experiment using just one measure on a limited set of questions, with no use of the available answer texts.
Robertson~\shortcite{robertson92okapi} notes that there are issues when using a whole document for feedback, as opposed to just a single relevant passage; as mentioned in Section~\ref{method:irengines}, passage- and document-level retrieval sets must also be compared for their performance at providing feedback. Critically, we will survey the intersection between words known to be helpful and blind RF terms based on initial retrieval, thus showing exactly how likely an RF method is to succeed.
\section{Methodology}
We first investigated the possibility of an IR-component specific failure leading to impaired coverage by testing a variety of IR engines and configurations. Then, difficult questions were identified, using various performance thresholds. Next, answer bearing texts for these harder questions were checked for words that yielded a performance increase when used for query expansion.
After this, we evaluated how likely a RF-based approach was to succeed. Finally, blind RF was applied to the whole question set. IR performance was measured, and terms used for RF compared to those which had proven to be helpful as extension words.
\subsection{IR Engines}\label{method:irengines}
A QA framework \cite{Greenwood:Answerfinder} was originally used to construct a QA system based on running a default Lucene installation. As this only covers one IR engine in one configuration, it is prudent to examine alternatives. Other IR engines should be tested, using different configurations. The chosen additional engines were: Indri, based on the mature INQUERY engine and the Lemur toolkit~\cite{Lemur}; and Terrier, a newer engine designed to deal with corpora in the terabyte range and to back applications entered into TREC conferences~\cite{terrier}.
We also looked at both passage-level and document-level retrieval. Passages can be defined in a number of ways, such as a sentence, a sliding window of $k$ terms centred on the target term(s), parts of a document of fixed (and equal) lengths, or a paragraph. In this case, the documents in the AQUAINT corpus contain paragraph markers which were used as passage-level boundaries, thus making ``passage-level" and ``paragraph-level" equivalent in this paper. Passage-level retrieval may be preferable for AE, as the number of potential distracters is somewhat reduced when compared to document-level retrieval~\cite{Roberts:Passage}.
The initial IR component configuration was with Lucene indexing the AQUAINT corpus at passage-level, with a Porter stemmer~\cite{Porter:stemming} and an augmented version of the CACM~\cite{jones1976irt} stopword list.
Indri natively supports document-level indexing of TREC format corpora. Passage-level retrieval was done using the paragraph tags defined in the corpus as delimiters; this allows both passage- and document-level retrieval from the same index, according to the query.
All the IR engines were unified to use the Porter stemmer and the same CACM-derived stopword list.
The top $n$ documents for each question in the TREC2004, TREC2005 and TREC2006 sets were retrieved using every combination of engine, and configuration\footnote{Save Terrier / TREC2004 / passage-level retrieval; passage-level retrieval with Terrier was very slow using our configuration, and could not be reliably performed using the same Terrier instance as document-level retrieval.}. The questions and targets were processed to produce IR queries as per the default configuration for the QA framework. Examining the top 200 documents gave a good compromise between the time taken to run experiments (between 30 and 240 minutes each) and the amount one can mine into the data. Tabulated results are shown in Table~\ref{basepassagelevelperformance} and Table~\ref{basedocumentlevelperformance}. Queries have had anaphora resolution performed in the context of their series by the QA framework. AE components begin to fail due to excess noise when presented with over 20 texts, so this value is enough to encompass typical operating parameters and leave space for discovery~\cite{Greenwood:TREC2006}.
A failure analysis (FA) tool, an early version of which is described by~\cite{Sanka:Passage}, provided reporting and analysis of IR component performance. In this experiment, it provided a high-level comparison of all engines, measuring coverage and redundancy as the number of documents retrieved, $n$, varies. This is measured because a perfect engine will return the most useful documents first, followed by others; thus, coverage will be higher for such an engine at low values of $n$.
\begin{table}
\small
\begin{center}
\begin{tabular}{ | r | r || c | c | c | c | }
\hline
\multicolumn{2}{ | c | }{} &
\multicolumn{2}{ | c | }{Coverage} &
\multicolumn{2}{ | c | }{Redundancy} \\
\hline
& Year & Len. & Strict & Len. & Strict \\
\hline
\multirow{4}{*}{Lucene}
& 2004 & 0.686 & 0.636 & 2.884 & 1.624 \\
& 2005 & 0.703 & 0.566 & 2.780 & 1.155 \\
& 2006 & 0.665 & 0.568 & 2.417 & 1.181 \\
\hline
\multirow{4}{*}{Indri}
& 2004 & 0.690 & 0.554 & 3.849 & 1.527 \\
& 2005 & 0.694 & 0.512 & 3.908 & 1.056 \\
& 2006 & 0.691 & 0.552 & 3.373 & 1.152 \\
\hline
\multirow{4}{*}{Terrier}
& 2004 & - & - & - & - \\
& 2005 & - & - & - & - \\
& 2006 & 0.638 & 0.493 & 2.520 & 1.000 \\
\hline
\end{tabular}
\caption{Performance of Lucene, Indri and Terrier at paragraph level, over top 20 documents. This clearly shows the limitations of the engines.\label{basepassagelevelperformance}}
\end{center}
\end{table}
\normalsize
\begin{table}
\small
\begin{center}
\begin{tabular}{ | r | r || c | c | c | c | }
\hline
\multicolumn{2}{ | c | }{} &
\multicolumn{2}{ | c | }{Coverage} &
\multicolumn{2}{ | c | }{Redundancy} \\
\hline
& Year & Len. & Strict & Len. & Strict \\
\hline
\multirow{4}{*}{Indri}
& 2004 & 0.926 & 0.837 & 7.841 & 2.663 \\
& 2005 & 0.935 & 0.735 & 7.573 & 1.969 \\
& 2006 & 0.882 & 0.741 & 6.872 & 1.958 \\
\hline
\multirow{4}{*}{Terrier}
& 2004 & 0.919 & 0.806 & 7.186 & 2.380 \\
& 2005 & 0.928 & 0.766 & 7.620 & 2.130 \\
& 2006 & 0.983 & 0.783 & 6.339 & 2.067 \\
\hline
\end{tabular}
\caption{Performance of Indri and Terrier at document level IR over the AQUAINT corpus, with $n=20$\label{basedocumentlevelperformance}}
\end{center}
\end{table}
\normalsize
\subsection{Identification of Difficult Questions}
Once the performance of an IR configuration over a question set is known, it is possible to produce a simple report listing redundancy for each question. A performance reporting script accesses the FA tool's database and lists all the questions in a particular set with the strict and lenient redundancy for selected engines and configurations. Engines may use passage- or document-level configurations.
Data on the performance of the three engines is described in Table~\ref{basedocumentlevelperformance}. As can be seen, the coverage with passage-level retrieval (which was often favoured, as the AE component performs best with reduced amounts of text) languishes between 51\% and 71\%, depending on the measurement method. Failed anaphora resolution may contribute to this figure, though no deficiencies were found upon visual inspection.
Not all documents containing answers are noted, only those checked by the NIST judges~\cite{bilottiadvice}. Match judgements are incomplete, leading to the potential generation of false negatives, where a correct answer is found with complete supporting information, but as the information has not been manually flagged, the system will mark this as a failure. Assessment methods are fully detailed in Dang et al.~\shortcite{dang2006otq}. Factoid performance is still relatively poor, although as only 1.95 documents match per question, this may be an effect of such false negatives~\cite{Voorhees:TREC12}. Work has been done into creating synthetic corpora that include exhaustive answer sets~\cite{bilotti2004qet,tellex2003qep,lin2005brt}, but for the sake of consistency, and easy comparison with both parallel work and prior local results, the TREC judgements will be used to evaluate systems in this paper.
Mean redundancy is also calculated for a number of IR engines. Difficult questions were those for which no answer bearing texts were found by either strict or lenient matches in any of the top $n$ documents, using a variety of engines. As soon as one answer bearing document was found by an engine using any measure, that question was deemed \emph{non-difficult}. Questions with mean redundancy of zero are marked \emph{difficult}, and subjected to further analysis. Reducing the question set to just difficult questions produces a TREC-format file for re-testing the IR component.
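The selection rule itself is just a zero test across every engine, configuration and measure; schematically (illustrative data layout):

```python
def difficult_questions(redundancy_table):
    """Return the qids whose redundancy is zero for every engine/configuration
    under both strict and lenient matching.

    redundancy_table: {qid: [(strict_red, lenient_red), ...]} with one pair
    per engine/configuration."""
    return [qid for qid, scores in redundancy_table.items()
            if all(s == 0 and l == 0 for s, l in scores)]
```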
\subsection{Extension of Difficult Questions}
The documents deemed relevant by TREC must contain some useful text that can help IR engine performance. Such words should be revealed by a gain in redundancy when used to extend an initially difficult query, usually signified by a change from zero to a non-zero value (signifying that relevant documents have been found where none were before). In an attempt to identify where the useful text is, the relevant documents for each difficult question were retrieved, and passages matching the answer regular expression identified. A script is then used to build a list of terms from each passage, removing words in the question or its target, words that occur in the answer, and stopwords (based on both the indexing stopword list, and a set of stems common within the corpus). In later runs, numbers are also stripped out of the term list, as their value is just as often confusing as useful~\cite{MIR:166}. Of course, answer terms provide an obvious advantage that would not be reproducible for questions where the answer is unknown, and one of our goals is to help query expansion for unseen questions. This approach may provide insights that will enable appropriate query expansion where answers are not known.
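The term-filtering step can be summarised as follows (a simplified sketch; the actual script also removes stems common within the corpus and keeps inferable words such as titles):

```python
import re

def candidate_extensions(passages, question, target, answer_regex, stopwords,
                         strip_numbers=True):
    """Collect candidate extension words from answer-bearing passages,
    removing question/target words, answer words, stopwords and
    (optionally) bare numbers."""
    banned = set(w.lower() for w in question.split()) \
           | set(w.lower() for w in target.split()) | stopwords
    terms = set()
    for passage in passages:
        # only passages actually containing the answer are mined for terms
        if not re.search(answer_regex, passage):
            continue
        for word in re.findall(r"[A-Za-z0-9']+", passage):
            w = word.lower()
            if w in banned or re.search(answer_regex, word):
                continue
            if strip_numbers and w.isdigit():
                continue
            terms.add(w)
    return terms
```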
Performance has been measured with both the question followed by an extension (Q+E), as well as the question followed by the target and then extension candidates (Q+T+E). Runs were also executed with just Q and Q+T, to provide non-extended reference performance data points. Addition of the target often leads to gains in performance~\cite{arizonasu}, and may also aid in cases where anaphora resolution has failed.
Some words, such as titles, are retained: their relevance can be inferred from question or target terms, and they will not unfairly boost redundancy scores; for example, when searching for a ``Who'' question containing the word ``military'', one may want to preserve appellations such as ``Lt.'' or ``Col.'', even if this term appears in the answer.
This filtered list of extensions is then used to create a revised query file, containing the base question (with and without the target suffixed) as well as new questions created by appending a candidate extension word.
Results of retrievals with these new questions are loaded into the FA database and a report describing any performance changes is generated. The extension generation process also creates custom answer specifications, which replicate the information found in the answers defined by TREC.
This whole process can be repeated with varying question difficulty thresholds, as well as alternative $n$ values (typically from 5 to 100), different engines, and various question sets.
\subsection{Relevance Feedback Performance}\label{rfmethod}
Now that we can find the helpful extension words (HEWs) described earlier, we are equipped to evaluate query expansion methods. One simplistic approach could use blind RF to determine candidate extensions; it would be considered potentially successful if the suggested words were found in the set of HEWs for a query. For this, term frequencies can be measured given the top $r$ documents retrieved using anaphora-resolved query $Q$. After stopword and question word removal, frequent terms are appended to $Q$, which is then re-evaluated. This has been previously attempted for factoid questions~\cite{arizonasu} and with a limited range of $r$ values~\cite{monz2003drq} but not validated using a set of data-driven terms.
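A minimal TF-based blind RF term selector of this kind might look as follows (whitespace tokenization and raw term frequency are simplifying assumptions):

```python
from collections import Counter

def blind_rf_terms(retrieved_docs, question, k=5, stopwords=frozenset()):
    """Simple TF-based blind relevance feedback: rank terms from the
    top-r retrieved documents by raw frequency, after removing
    stopwords and question words, and return the k most frequent."""
    qwords = set(question.lower().split())
    counts = Counter()
    for doc in retrieved_docs:
        for tok in doc.lower().split():
            if tok not in qwords and tok not in stopwords:
                counts[tok] += 1
    return [term for term, _ in counts.most_common(k)]
```

The returned terms are appended to $Q$ before re-retrieval.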
We investigated how likely term frequency (TF) based RF is to discover HEWs. To do this, the proportion of HEWs that occurred in initially retrieved texts was measured, as well as the proportion of these texts containing at least one HEW. Also, to see how effective an expansion method is, suggested expansion terms can be checked against the HEW list.
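The three diagnostics just described can be computed with a small helper (hypothetical; tokenization by whitespace is an assumption):

```python
def hew_coverage_stats(irt_docs, hews, rf_terms):
    """Return three proportions:
       - fraction of HEWs occurring anywhere in the initially retrieved texts,
       - fraction of IRTs containing at least one HEW,
       - fraction of suggested RF terms that are HEWs."""
    hew_set = set(h.lower() for h in hews)
    doc_tokens = [set(d.lower().split()) for d in irt_docs]
    all_tokens = set().union(*doc_tokens) if doc_tokens else set()
    hew_in_irt = len(hew_set & all_tokens) / len(hew_set) if hew_set else 0.0
    irt_with_hew = (sum(1 for t in doc_tokens if t & hew_set) / len(doc_tokens)
                    if doc_tokens else 0.0)
    rf = set(t.lower() for t in rf_terms)
    rf_in_hew = len(rf & hew_set) / len(rf) if rf else 0.0
    return hew_in_irt, irt_with_hew, rf_in_hew
```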
We used both the top 5 and the top 50 documents in formulation of extension terms, with TF as a ranking measure; 50 is significantly larger than the optimal number of documents for AE (20), without overly diluting term frequencies.
Problems have been found with using entire documents for RF, as the topic may not be the same throughout the entire discourse~\cite{robertson92okapi}. Limiting the texts used for RF to paragraphs may reduce noise; both document- and paragraph-level terms should be checked.
\section{Results}
Once we have HEWs, we can determine if these are going to be of significant help when chosen as query extensions. We can also determine if a query expansion method is likely to be fruitful. Blind RF was applied, and assessed using the helpful words list, as well as RF's effect on coverage.
\begin{table}
\small
\begin{center}
\begin{tabular}{ | r || c | c | c | c | }
\hline
& \multicolumn{4}{ | c | }{Engine} \\
\hline
Year & \parbox[t][0.24in]{0.4in}{Lucene\\Para} & \parbox[t][0.24in]{0.4in}{Indri\\Para} & \parbox[t][0.24in]{0.4in}{Indri\\Doc} & \parbox[t][0.24in]{0.4in}{Terrier\\Doc} \\
\hline
2004 & 76 & 72 & 37 & 42 \\
2005 & 87 & 98 & 37 & 35 \\
2006 & 108 & 118 & 59 & 53 \\
\hline
\end{tabular}
\caption{Number of difficult questions, as defined by those which have zero redundancy over both strict and lenient measures, at $n=20$. Questions seem to get harder each year. Document retrieval yields fewer difficult questions, as more text is returned for potential matching.\label{difficultquestioncounts}}
\end{center}
\end{table}
\normalsize
\begin{table}
\small
\begin{center}
\begin{tabular}{ | l | c | c | c | }
\multicolumn{4}{ c }{Engine} \\
\hline
& Lucene & Indri & Terrier \\
\hline
Paragraph & 226 & 221 & - \\
Document & - & 121 & 109 \\
\hline
\end{tabular}
\caption{Number of difficult questions in the 2006 task, as defined above, this time with $n=5$. Questions become harder as fewer chances are given to provide relevant documents.\label{difficultquestioncounts2006n5}}
\end{center}
\end{table}
\normalsize
\subsection{Difficult Question Analysis}
\begin{table}
\small
\begin{center}
\begin{tabular}{ | r | r || c | c | }
\hline
\multicolumn{2}{ | c | }{\multirow{2}{*}{}}
& \multicolumn{2}{ | c | }{Match type} \\
\hline
& & Strict & Lenient \\
\hline
\multirow{3}{*}{Year}
& 2004 & 39 & 49 \\
& 2005 & 56 & 66 \\
& 2006 & 53 & 49 \\
\hline
\end{tabular}
\caption{Common difficult questions (over all three engines mentioned above) by year and match type; $n=20$.\label{dqstrictlenient}}
\end{center}
\end{table}
\normalsize
\begin{table}
\small
\begin{center}
\begin{tabular}{ | l | r | }
\hline
Difficult questions used & 118 \\
Variations tested & 6683 \\
Questions that benefited & 87 (74.4\%) \\
Helpful extension words (strict) & 4973 \\
Mean helpful words per question & 42.144 \\
Mean redundancy increase & 3.958 \\
\hline
\end{tabular}
\caption{Using Terrier Passage / strict matching, retrieving 20 docs, with TREC2006 questions / AQUAINT. Difficult questions are those where no strict matches are found in the top 20 IRT from just one engine.\label{dq2k6perf}}
\end{center}
\end{table}
\normalsize
The number of difficult questions found at $n=20$ is shown in Table~\ref{difficultquestioncounts}. Document-level retrieval gave many fewer difficult questions, as the amount of text retrieved gave a higher chance of finding lenient matches. A comparison of strict and lenient matching is in Table~\ref{dqstrictlenient}.
Extensions were then applied to difficult questions, with or without the target. The performance of these extensions is shown in Table~\ref{dq2k6perf}. Results show a significant proportion (74.4\%) of difficult questions can benefit from being extended with non-answer words found in answer bearing texts.
\begin{table}
\small
\begin{center}
\begin{tabular}{ | l || r | r | r |}
\hline
& 2004 & 2005 & 2006 \\
\hline
HEW found in IRT & 4.17\% & 18.58\% & 8.94\% \\
IRT containing HEW & 10.00\% & 33.33\% & 34.29\% \\
\hline
RF words in HEW & 1.25\% & 1.67\% & 5.71\% \\
\hline
\end{tabular}
\caption{``Helpful extension words": the set of extensions that, when added to the query, move redundancy above zero. $r=5$, $n=20$, using Indri at passage level.\label{hewrfintersection}}
\end{center}
\end{table}
\normalsize
\begin{table}
\small
\begin{center}
\begin{tabular}{ | c || l | l | l | l || l |}
\hline
& \multicolumn{4}{ | c | }{$r$} & \\
\cline{2-5}
& \multicolumn{2}{ | c | }{5} & \multicolumn{2}{ | c | }{50} & Baseline \\
\cline{1-5}
Rank & Doc & Para & Doc & Para & \\
\hline
5 & 0.253 & 0.251 & 0.240 & 0.179 & 0.312 \\
10 & 0.331 & 0.347 & 0.331 & 0.284 & 0.434 \\
20 & 0.438 & 0.444 & 0.438 & 0.398 & 0.553 \\
50 & 0.583 & 0.577 & 0.577 & 0.552 & 0.634 \\
\hline
\end{tabular}
\caption{Coverage (strict) using blind RF. Both document- and paragraph-level retrieval used to determine RF terms.\label{brf2k6}}
\end{center}
\end{table}
\normalsize
\subsection{Applying Relevance Feedback}
Identifying HEWs provides a set of words that are useful for evaluating potential expansion terms. Using simple TF based feedback (see Section~\ref{rfmethod}), 5 terms were chosen per query. These words had some intersection (see Table~\ref{hewrfintersection}) with the extension words set, indicating that this RF may lead to performance increases for previously unseen questions. Only a small number of the HEWs occur in the initially retrieved texts (IRTs), although a noticeable proportion of IRTs (up to 34.29\%) contain at least one HEW. However, these terms are probably not very frequent in the documents and unlikely to be selected with TF-based blind RF. The mean proportion of RF selected terms that were HEWs was only 2.88\%. Blind RF for question answering fails here due to this low proportion. Strict measures are used for evaluation as we are interested in finding documents which were not previously being retrieved rather than changes in the distribution of keywords in IRT.
Both document- and paragraph-based RF term selection were used, to explore the effect of noise on terms; document-based selection proved marginally superior. Choosing RF terms from a small set of documents ($r=5$) was found to be marginally better than choosing from a larger set ($r=50$). In support of the suggestion that RF would be unlikely to locate HEWs, applying blind RF consistently hampered overall coverage (Table~\ref{brf2k6}).
\section{Discussion}
\begin{table}
\small
\begin{center}
\begin{tabular}{ | l | r | }
\hline
\multicolumn{2}{ | l | }{\emph{Question:}} \\
\multicolumn{2}{ | l | }{Who was the nominal leader after the overthrow?} \\
\hline
\multicolumn{2}{ | l | }{\emph{Target:} Pakistani government overthrown in 1999} \\
\hline
Extension word & Redundancy \\
\hline
Kashmir & 4 \\
Pakistan & 4 \\
Islamabad & 2.5 \\
\hline
\hline
\multicolumn{2}{ | l | }{\emph{Question:} Where did he play in college?} \\
\hline
\multicolumn{2}{ | l | }{\emph{Target:} Warren Moon} \\
\hline
Extension word & Redundancy \\
\hline
NFL & 2.5 \\
football & 1 \\
\hline
\hline
\multicolumn{2}{ | l | }{\emph{Question:} Who have commanded the division?} \\
\hline
\multicolumn{2}{ | l | }{\emph{Target:} 82nd Airborne division} \\
\hline
Extension word & Redundancy \\
\hline
Gen & 3 \\
Col & 2 \\
decimated & 2 \\
officer & 1 \\
\hline
\end{tabular}
\end{center}
\caption{Queries with extensions, and their mean redundancy using Indri at document level with $n=20$. Without extensions, redundancy is zero.\label{pakistanextensions}}
\end{table}
\normalsize
HEWs are often found in answer bearing texts, though these are hard to identify through simple TF-based RF. A majority of difficult questions can be made accessible through addition of HEWs present in answer bearing texts, and work to determine a relationship between words found in initial retrieval and these HEWs can lead to coverage increases. HEWs also provide an effective means of evaluating other RF methods, which can be developed into a generic rapid testing tool for query expansion techniques. TF-based RF, while finding some HEWs, is not effective at discovering extensions, and reduces overall IR performance.
There was not a large performance change between engines and configurations. Strict paragraph-level coverage never topped 65\%, leaving a significant number of questions where no useful information could be provided for AE.
The original sets of difficult questions for individual engines were small -- often less than the 35\% suggested when looking at the coverage figures. Possible causes could include:
{\bf Difficult questions being defined as those for which average redundancy is zero:} This limit may be too low. To remedy this, we could increase the redundancy limit to specify an arbitrary number of difficult questions out of the whole set.\\
{\bf The use of both strict and lenient measures:} It is possible to get a lenient match (thus marking a question as non-difficult) when the answer text occurs out of context.
\medskip
Reducing $n$ from 20 to 5 (Table~\ref{difficultquestioncounts2006n5}) increased the number of difficult questions produced. From this we can hypothesise that although many search engines succeed in returning useful documents (where available), these documents are not concentrated at the top ranks, as they would be for a perfect engine (see Section~\ref{method:irengines}), but are instead distributed fairly evenly over the returned set.
The number of candidate extension words for queries (even after filtering) is often in the range of hundreds to thousands. Each of these words creates a separate query, and there are two variations, depending on whether the target is included in the search terms or not. Thus, a large number of extended queries need to be executed for each question run. Passage-level retrieval returns less text, which has two advantages: firstly, it reduces the scope for false positives in lenient matching; secondly, it is easier to scan result by eye and determine why the engine selected a result.
Proper nouns are often helpful as extensions. We noticed that these cropped up fairly regularly for some kinds of question (e.g. ``Who''). Especially useful were proper nouns associated with locations; for example, adding ``Pakistani'' to a query containing the word ``Pakistan'' lifted redundancy above zero for a question on President Musharraf, as in Table~\ref{pakistanextensions}. This reconfirms work done by Greenwood~\shortcite{Greenwood:pertainyms}.
\section{Conclusion and Future Work}
IR engines find some questions very difficult and consistently fail to retrieve useful texts even with high values of $n$. This behaviour is common over many engines. Paragraph level retrieval seems to give a better idea of which questions are hardest, although the possibility of false negatives is present from answer lists and anaphora resolution.
Relationships exist between query words and helpful words from answer documents (e.g. given a military leadership theme in a query, adding the term ``general'' or ``gen'' helps). Identification of HEWs has potential use in query expansion. They could be used to evaluate RF approaches, or associated with question words and used as extensions.
Previous work has ruled out relevance feedback in particular circumstances using a single ranking measure, though this has not been based on analysis of answer bearing texts. The presence of HEWs in IRT for difficult questions shows that guided RF may work, but this will be difficult to pursue. Blind RF based on term frequencies does not increase IR performance. However, there is an intersection between words in initially retrieved texts and words that data-driven analysis identifies as helpful, showing promise for alternative RF methods (e.g. based on TFIDF). These extension words form a basis for indicating the usefulness of RF and query expansion techniques.
In this paper, we have chosen to explore only one branch of query expansion. An alternative data-driven approach would be to build associations between recurrently useful terms given question content. Question texts could be stripped of stopwords and proper nouns, and a list of HEWs associated with each remaining term. To reduce noise, the number of times a particular extension has helped a word would be counted. Given sufficient sample data, this would provide a reference body of HEWs to be used as an aid to query expansion.
\bibliographystyle{coling}
\section{Introduction}
The consensus problem, in which we ask whether a population reaches
unanimity on one among different competing states (e.g., opinions),
and its mechanisms are of
interest in various disciplines including political science,
sociology, and statistical physics. In models of consensus formation,
it is usually assumed that each individual possesses one of the different
states that can flip over time. The flip rate depends on the
environment such as the number of the individuals that adopt a
different state. Statistical physicists have approached this problem
by analyzing a variety of models including the voter model, majority
rule models, the bounded confidence model, Axelrod's model, and the naming game
(see \cite{sociophys} for a review).
A major mechanism that would lead to consensus in a population is
preference for the majority.
Collective opinion formation under various majority voting rules has been
examined for mean-field populations
\cite{Chen1,Galam1986,Galam1999,Galam2002,Krapivsky,Lambiotte4,Mobilia} and in different types of
networks such as regular lattices \cite{Chen2,Chen1,deOliveira,Liggett,Tessone}, small-world networks
\cite{PPLi}, heterogeneous networks \cite{Lambiotte3}, and networks
with community structure \cite{Lambiotte1,Lambiotte2}.
The majority preference may
be identified with the aversion to the minority. When there are
only two states, they are equivalent
because one state is the majority if and only if the other state is the
minority. However, the two principles may be distinct when more than
two states are assumed \cite{Volovik2009EPL}. We are concerned with
this case in the present study.
Language competition is an example of consensus problems.
The model proposed by Abrams and Strogatz (AS model)
accounts for extinction of endangered
languages \cite{Abrams1}. The AS model implements
competition between two
languages for speakers in a population.
The dynamics of the model is based on majority preference, which can equally be regarded as minority aversion because there are just two competing languages.
Several authors found
that different languages can stably coexist in variants of the AS model.
Two languages can coexist by spatial segregation in a model in which
competition dynamics and spatial diffusion are combined \cite{Patriarca1,Patriarca2}. A Lotka--Volterra variant of the AS model also leads to coexistence \cite{Pinasco}. Introduction of bilingual individuals also enables
coexistence \cite{Mira1,Mira2}.
Castell\'o and colleagues investigated
variants of the AS model with bilingualism on various networks
\cite{Castello1,Castello2,Castello3,Castello4}.
Coexistence also occurs when
the attractiveness of languages is dynamically manipulated
\cite{Chapel} or when
bilingualism, horizontal (i.e., from adults to adults) and vertical
(i.e., from adults to children) transmission of languages, and
dynamics of the languages' attractiveness are combined
\cite{Minett,Wang}.
In the present work, we extend the AS model to the case of competition
among a general number of languages, denoted by $n$.
Our model is a mean-field model (i.e., without spatial or network structure), as is the original AS model.
Because the AS model has been used for modeling competition of other cultural
or social traits such as religion \cite{Abrams2},
opinion, and service sectors \cite{Li},
we use the term ``state'' instead of ``language'' in the
following. We show that the behavior of the
model is essentially different between $n=2$ and $n\ge 3$.
In particular, the coexistence of different states and the consensus can be
multistable, i.e., the coexistence and consensus equilibria are both stable, only when $n\ge 3$.
\section{Model}
\label{model}
We extend the AS model to the case of competition among $n (\ge2)$ states. The dynamics of the fraction of state $i$ ($1\le i\le n$) is given by
\begin{equation}
\frac{dx_i}{dt}=\sum_{j=1, j\neq i}^n x_jP_{ji}-x_i\sum_{j=1, j\neq i}^n P_{ij},
\label{dx/dt}
\end{equation}
where $x_i$ is the fraction of state $i$ in the population,
and $P_{ji}$ represents the transition rate from state $j$ to state $i$. Equation~\eqref{dx/dt} respects the conservation law $\sum_{i=1}^n x_i=1$. The transition rates of the original AS model (i.e., $n=2$) are given by
\begin{equation}
P_{ji}=cs_ix_i^a =cs_i(1-x_j)^a\quad \left((i,j)=(1,2), (2,1)\right),
\label{originalP}
\end{equation}
where $a>0$ controls the strength of the frequency-dependent state transition, $s_i>0$ is the attractiveness of state $i$, and $\sum_{i=1}^n s_i=1$ \cite{Abrams1}.
Because $c$ simply specifies the time scale of the dynamics,
we set $c=1$.
Equation~\eqref{originalP} allows two interpretations:
majority preference because
$P_{ji}=s_ix_i^a$ and minority aversion because
$P_{ji}=s_i(1-x_j)^a$.
The two principles lead to the same model when $n=2$.
However, the two principles are distinct when $n\ge 3$.
Therefore, we redefine $P_{ji}$ to allow for independent manipulation of
the two
factors. The transition rates of the extended model are defined by
\begin{equation}
P_{ji}=s_ix_i^\beta(1-x_j)^{a-\beta},
\label{P}
\end{equation}
where $\beta (\ge 0)$ and $a-\beta (\ge 0)$ represent the strength of the majority preference and the minority aversion, respectively. When $n=2$, the dynamics given by the substitution of
Eq.~\eqref{P} in Eq.~\eqref{dx/dt} becomes independent of the
$\beta$ value.
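For concreteness, the model defined by Eqs.~\eqref{dx/dt} and \eqref{P} can be integrated numerically. The following Python sketch uses forward Euler; the step size, duration, and the clipping/renormalization guard are our own choices, not part of the model.

```python
def rates(x, s, a, beta):
    """Transition rates P[j][i] = s_i x_i^beta (1 - x_j)^(a - beta)."""
    n = len(x)
    return [[0.0 if i == j else
             s[i] * x[i] ** beta * (1.0 - x[j]) ** (a - beta)
             for i in range(n)] for j in range(n)]

def rhs(x, s, a, beta):
    """Right-hand side of the rate equation: inflow minus outflow."""
    P = rates(x, s, a, beta)
    n = len(x)
    return [sum(x[j] * P[j][i] for j in range(n) if j != i)
            - x[i] * sum(P[i][j] for j in range(n) if j != i)
            for i in range(n)]

def integrate(x0, s, a, beta, dt=0.01, steps=20000):
    """Forward-Euler integration; renormalization enforces sum(x) = 1."""
    x = list(x0)
    for _ in range(steps):
        r = rhs(x, s, a, beta)
        x = [min(1.0, max(0.0, xi + dt * ri)) for xi, ri in zip(x, r)]
        tot = sum(x)
        x = [xi / tot for xi in x]
    return x
```

With $\beta=a$ this reduces to the pure majority-preference dynamics of Sect.~3.1, so for $a<1$ a trajectory from the barycenter should converge to the coexistence equilibrium of Eq.~\eqref{coexistence}.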
\section{Analysis}
\label{analysis}
\subsection{Case of majority preference (i.e., $\beta=a$)}\label{sub:majority preference}
In this section, we set
$\beta=a$ to analyze the case in which the
majority preference is present and the minority aversion is absent.
Substitution of Eq.~\eqref{P} in Eq.~\eqref{dx/dt} yields
\begin{eqnarray}
\frac{dx_i}{dt}&=&\sum_{j=1, j\neq i}^n x_js_ix_i^a - x_i\sum_{j=1, j\neq i}^n s_jx_j^a \nonumber\\
&=&\left(s_ix_i^{a-1}-\left<sx^{a-1}\right>\right)x_i,
\label{cp4m}
\end{eqnarray}
where $\left< \cdot \right>$ represents the average over the
population, i.e., the average of a state-dependent variable with weight $x_i$ ($1\le i\le n$).
Equation~\eqref{cp4m} is a replicator equation \cite{Hofbauer1998}
in which $s_ix_i^{a-1}$ and $\left<sx^{a-1}\right>=\sum_{\ell=1}^n s_{\ell}x_{\ell}^a$ play the role of the fitness for state $i$ and the average fitness in the population, respectively.
The dynamics given by Eq.~\eqref{cp4m} has
$n$ trivial equilibria corresponding to the consensus, i.e., the monopoly of a single state, and an interior equilibrium given by
\begin{equation}
x_i^*=\frac{s_i^{\frac{1}{1-a}}}{\sum_{\ell=1}^n s_{\ell}^{\frac{1}{1-a}}}\quad (1\le i\le n).
\label{coexistence}
\end{equation}
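One can check numerically that Eq.~\eqref{coexistence} zeroes the right-hand side of Eq.~\eqref{cp4m}: at this point the fitness $s_ix_i^{a-1}$ is the same for every state. A brief Python check (the values of $s_i$ and $a$ are illustrative):

```python
def coexistence(s, a):
    """Interior equilibrium: x_i* proportional to s_i^(1/(1-a))."""
    w = [si ** (1.0 / (1.0 - a)) for si in s]
    tot = sum(w)
    return [wi / tot for wi in w]

def rhs_cp4m(x, s, a):
    """Replicator right-hand side: (s_i x_i^(a-1) - <s x^(a-1)>) x_i."""
    avg = sum(sl * xl ** a for sl, xl in zip(s, x))
    return [(si * xi ** (a - 1.0) - avg) * xi for si, xi in zip(s, x)]
```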
$V(\bm{x})\equiv-\left<sx^{a-1}\right>$, where $\bm x = (x_1, \ldots, x_n)$,
is a Lyapunov function of the dynamics given by Eq.~\eqref{cp4m} because
\begin{eqnarray}
\frac{dV(\bm{x})}{dt}&=&
-\sum_{i=1}^n s_iax_i^{a-1}\frac{dx_i}{dt} \nonumber\\
&=& -a\sum_{i=1}^n s_i x_i^a\left( s_ix_i^{a-1}-\left< sx^{a-1}\right>\right) \nonumber\\
&=& -a\left(\left<(sx^{a-1})^2\right>-\left<sx^{a-1}\right>^2\right)
\le 0.
\label{dV/dt}
\end{eqnarray}
$V$ has a unique global extremum at
$\bm x^*$, which is minimum for $a<1$ and maximum for $a>1$ (Appendix~\ref{appendix:unimodal}). Therefore, the coexistence equilibrium given by
Eq.~\eqref{coexistence} is globally stable for $a<1$ and unstable for $a>1$.
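The Lyapunov property of Eq.~\eqref{dV/dt} is easy to observe numerically: along any trajectory with $a<1$, $V(\bm x)=-\sum_i s_ix_i^a$ is non-increasing. A minimal sketch (forward Euler with a small step; parameters illustrative):

```python
def rhs_cp4m(x, s, a):
    """Replicator right-hand side of the majority-preference dynamics."""
    avg = sum(sl * xl ** a for sl, xl in zip(s, x))
    return [(si * xi ** (a - 1.0) - avg) * xi for si, xi in zip(s, x)]

def lyapunov(x, s, a):
    """V(x) = -<s x^(a-1)> = -sum_i s_i x_i^a."""
    return -sum(si * xi ** a for si, xi in zip(s, x))

def v_along_trajectory(x0, s, a, dt=0.001, steps=5000):
    """Record V along a forward-Euler trajectory."""
    x, vs = list(x0), []
    for _ in range(steps):
        vs.append(lyapunov(x, s, a))
        r = rhs_cp4m(x, s, a)
        x = [xi + dt * ri for xi, ri in zip(x, r)]
    return vs
```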
Equation~\eqref{cp4m} also admits a unique equilibrium for each
subset of the $n$ states. When $n=3$, for example, the equilibrium
in which states 1 and 2, but not 3, coexist is given by
Eq.~\eqref{coexistence} for $i=1$ and 2, with the denominator replaced
by $s_1^{\frac{1}{1-a}} + s_2^{\frac{1}{1-a}}$, and $x_3=0$. In general,
there are $\left(n\atop n^{\prime} \right)$ equilibria
containing $n^{\prime}$ states. If $2\le n^{\prime}\le
n-1$, these equilibria are unstable.
For $a<1$, the instability immediately follows from the fact that $\bm x^*$ is the unique global minimum of $V(\bm x)$. For $a>1$, any equilibrium containing $n^{\prime}$ ($2\le n^{\prime}\le n-1$) states is unstable because it realizes the global maximum of the same Lyapunov function restricted to the simplex spanned by the $n^{\prime}$ states.
When $a=1$, we obtain $V(\bm{x})\equiv-\left<s\right> =
\sum_{i=1}^n s_i x_i$. Therefore, if $s_i>s_j$ ($j\neq i$), the
consensus of state $i$ is eventually reached. If
$s_i=s_{i^{\prime}}>s_j$ ($j\neq i, i^{\prime}$), for example, the
$n-2$ states corresponding to $s_j$ ($j\neq i, i^{\prime}$)
are eventually eliminated. The
dynamics then stops such that states $i$ and $i^{\prime}$ coexist. If all the
$s_i$ values are equal, any population is neutrally stable.
Figure~\ref{flow1} represents the dynamics in the two regimes
with $n=3$, which we obtained by numerically integrating
Eq.~\eqref{cp4m}. For $a<1$, a trajectory starting from anywhere
in the interior of
the phase space, i.e., $\bm x$ that satisfies
$x_1+x_2+x_3=1$, $x_1$, $x_2$, $x_3> 0$, asymptotically approaches
the coexistence equilibrium (Fig.~\ref{flow1}a).
It should be noted that a point in the triangle
in Fig.~\ref{flow1}a corresponds to
a configuration of the population, i.e., $\bm x$.
For example,
corner $e_i$ ($i=1$, 2, or 3) represents the consensus (i.e., $x_i=1$ and $x_j=0$ ($j\neq i$)), and
the normalized Euclidean distance from the point to the edge $e_2$--$e_3$ of the triangle
is equal to the $x_1$ value.
For $a>1$, a trajectory starting from the interior of the triangle
converges to one of the $n$ consensus equilibria,
depending on the initial condition (Fig. \ref{flow1}b).
In Fig.~\ref{fixed},
a bifurcation diagram in which we plot
$x_1^*$ against $a$ is shown for $s_1=0.40$, $s_2=0.35$, and $s_3=0.25$.
As $a$ approaches unity from
below, the stable coexistence equilibrium approaches the unstable
consensus equilibrium corresponding to the largest $s_i$ value ($e_1$
in Fig.~\ref{flow1}). At $a=1$, the two equilibria
collide, and an unstable coexistence equilibrium
simultaneously bifurcates from the consensus equilibrium corresponding to the smallest
$s_i$ value ($e_3$ in Fig.~\ref{flow1}).
\begin{figure}[tbph]
\begin{center}
\includegraphics[width=5cm]{figure1a.eps}
\hspace{3mm}
\includegraphics[width=5cm]{figure1b.eps}
\end{center}
\caption{Dynamics of the extended AS model when the majority preference is present and the minority aversion is absent.
We set $n=3$, $\beta=a$, $s_1=0.40$, $s_2=0.35$, and $s_3=0.25$.
(a) $a=0.5$ and (b) $a=1.4$. Solid and open circles represent stable and unstable equilibria, respectively.}
\label{flow1}
\end{figure}
\begin{figure}[tbph]
\begin{center}
\includegraphics[height=5.5cm]{figure2}
\end{center}
\caption{Bifurcation diagram for the extended AS model when the majority preference is present and the minority aversion is absent.
The parameter values are equal to those used in Fig.~\ref{flow1} except that we vary $a$.}
\label{fixed}
\end{figure}
\subsection{Case of minority aversion (i.e., $\beta=0$)}
\label{sub:minority aversion}
In this section, we set $\beta=0$ to
analyze the case in which the majority preference is absent and
the minority aversion is present.
Substitution of
Eq.~\eqref{P} in Eq.~\eqref{dx/dt} yields
\begin{eqnarray}
\frac{dx_i}{dt}&=&s_i\sum_{j=1}^n x_j(1-x_j)^a-(1-x_i)^ax_i \nonumber\\
&=&s_i\left<(1-x)^a\right>-(1-x_i)^ax_i.
\label{ca2m}
\end{eqnarray}
In contrast to the case of the majority preference (Sect.~\ref{sub:majority preference}), the simplex spanned by $n^{\prime}$ ($2\le n^{\prime}\le n-1$) states
is not invariant under the dynamics given by
Eq.~\eqref{ca2m}. Therefore, a state that once becomes extinct may reappear.
In this section, we numerically analyze Eq.~\eqref{ca2m} for $n=3$ and $n=4$.
For general $n$, we analytically examine the special case in which $s_i$ is independent of $i$ in Sect.~\ref{symmetric}.
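The numerical analysis of Eq.~\eqref{ca2m} can be reproduced with a simple Euler scheme. The sketch below is illustrative (step size and duration are our own choices); it also makes explicit that each consensus point is an exact fixed point of the dynamics.

```python
def rhs_minav(x, s, a):
    """Right-hand side of the pure minority-aversion dynamics (beta = 0):
    dx_i/dt = s_i <(1-x)^a> - (1-x_i)^a x_i."""
    avg = sum(xj * (1.0 - xj) ** a for xj in x)
    return [si * avg - (1.0 - xi) ** a * xi for si, xi in zip(s, x)]

def integrate_minav(x0, s, a, dt=0.01, steps=40000):
    """Forward-Euler integration with clipping and renormalization,
    which keeps (1 - x_j) nonnegative for the fractional power."""
    x = list(x0)
    for _ in range(steps):
        r = rhs_minav(x, s, a)
        x = [min(1.0, max(0.0, xi + dt * ri)) for xi, ri in zip(x, r)]
        tot = sum(x)
        x = [xi / tot for xi in x]
    return x
```

For $a=1.3$ and $s=(0.36, 0.33, 0.31)$ a trajectory started at the barycenter converges to the interior coexistence equilibrium, while the consensus equilibria are simultaneously stable, illustrating the multistability discussed below.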
For $n=3$, the dynamics for various values of $a$
is shown in Fig.~\ref{flow2}. We set $s_1=0.36$, $s_2=0.33$, and $s_3=0.31$.
When $a$ is small $(a<1)$, there is a unique globally stable coexistence
equilibrium in the interior (Fig.~\ref{flow2}a).
The three consensus equilibria $e_1$, $e_2$, and $e_3$ are unstable.
At $a=1$, $e_1$, $e_2$, and $e_3$ change the stability such that they are stable beyond $a=1$ (Appendix~\ref{appendix:consensus}).
Simultaneously, a saddle point bifurcates from each consensus equilibrium. The bifurcation occurs simultaneously for the three equilibria at $a=1$ irrespective of the values of $s_1$, $s_2$, and $s_3$.
Slightly beyond $a=1$, the three consensus equilibria and the interior
coexistence equilibrium are multistable
(see Fig.~\ref{flow2}b for the results
at $a=1.3$). As
$a$ increases, the attractive basin of the coexistence
equilibrium becomes small, and
that of each consensus equilibrium becomes large (see Fig.~\ref{flow2}c
for the results at $a=1.4$). At $a= a_{{\rm c}1}\approx 1.43$, the
coexistence equilibrium that is stable for $a<a_{\rm c1}$
and the unstable interior
equilibrium that bifurcates from $e_i$ at $a=1$, where $i$ corresponds to the largest $s_i$ value ($i=1$ in the present example),
collide. This is a
saddle-node bifurcation.
Numerically obtained
$a_{\rm c1}$ values are shown in Fig.~\ref{phase}a
for different values of $s_1$,
$s_2$, and $s_3$. A point in the triangle in the figure specifies
the values of $s_1$, $s_2$, and $s_3$ under the constraint
$s_1+s_2+s_3 = 1$, $s_i> 0$ ($1\le i\le 3$).
It seems that $a_{\rm c1}$ is the largest
when $s_i=1/3$ ($1\le i\le 3$).
Figure~\ref{phase} suggests that heterogeneity in $s_i$ makes
$a_{\rm c1}$ smaller and hence makes stable coexistence of the three states more difficult. When $s_i\approx 1$ and $s_j\approx 0$ ($j\neq i$), we obtain $a_{\rm c1}\approx 1$.
When $a$ is slightly larger than $a_{{\rm c}1}$, there are two saddle points in the interior. In this situation,
one of the three consensus equilibria, which depends on the initial condition, is eventually reached (Fig.~\ref{flow2}d).
However, the manner with which the triangular phase space
is divided into the three attractive basins is qualitatively different from that in the case of
the majority preference (Fig.~\ref{flow1}b). In particular, in the present case of the minority aversion, even if $x_1$ is initially equal to 0, the consensus of state 1 (i.e., $e_1$) can be reached. This behavior never occurs in the case of the majority preference and is less likely for a larger $a$ value
in the case of the minority aversion (Fig.~\ref{flow2}e).
The sizes of the attractive basins of the different equilibria
are plotted against $a$ in Fig.~\ref{basin}a.
Up to our numerical efforts with various initial conditions,
we did not find limit cycles.
A discrete jump in the basin size of the coexistence equilibrium
is observed at $a_{\rm c1}\approx$ 1.43, reminiscent of the saddle-node
bifurcation. Interestingly, the attractive basin of the consensus equilibrium $e_1$ is the largest just beyond $a_{\rm c1}$.
\begin{figure}[tbph]
\begin{center}
\includegraphics[width=5cm]{figure3a.eps}
\hspace{3mm}
\includegraphics[width=5cm]{figure3b.eps}
\hspace{3mm}
\includegraphics[width=5cm]{figure3c.eps}
\hspace{3mm}
\includegraphics[width=5cm]{figure3d.eps}
\hspace{3mm}
\includegraphics[width=5cm]{figure3e.eps}
\hspace{3mm}
\includegraphics[width=5cm]{figure3f.eps}
\hspace{3mm}
\includegraphics[width=5cm]{figure3g.eps}
\end{center}
\caption{Dynamics of the extended AS model when the majority preference is absent and the minority aversion is present. We set
$n=3$, $\beta=0$, $s_1=0.36$, $s_2=0.33$, and
$s_3=0.31$. (a) $a=0.9$,
(b) $a=1.3$, (c) $a=1.4$, (d) $a=1.5$, (e) $a=2.6$, (f) $a=2.9$, and (g)
$a=10.0$.}
\label{flow2}
\end{figure}
\begin{figure}[tbph]
\begin{center}
\includegraphics[height=5cm]{figure4a}
\includegraphics[height=5cm]{figure4b}
\end{center}
\caption{Dependence of (a) $a_{\rm c1}$ and (b) $a_{\rm c2}$
on $s_1$, $s_2$, and $s_3$ when the majority preference is absent and the minority aversion is present. A point in the triangle corresponds to
a triplet ($s_1$, $s_2$, $s_3$), where $s_1+s_2+s_3=1$ and $s_i\ge 0$ ($1\le i\le 3$).}
\label{phase}
\end{figure}
As $a$ increases further, the second saddle-node bifurcation occurs at $a=a_{\rm c2}\approx2.81$, where an unstable node and a saddle point coappear (Fig.~\ref{flow2}f). Logically, the sizes of the attractive basins could be discontinuous at $a=a_{\rm c2}$ because some initial conditions with small $x_1$ might be attracted to $e_1$ when $a$ is slightly smaller than $a_{\rm c2}$ and to $e_2$ or $e_3$ when $a$ is slightly larger than $a_{\rm c2}$. However, up to our numerical efforts, we did not observe the discontinuity, as implied by
Fig.~\ref{basin}a.
Numerically obtained
$a_{\rm c2}$ values are shown in Fig.~\ref{phase}b
for different values of $s_1$, $s_2$, and $s_3$.
Heterogeneity in $s_i$ makes $a_{\rm c2}$ larger.
In addition, $a_{\rm c2}$ is equal to $a_{\rm c1}$ when
$s_1=s_2=s_3=1/3$. In this symmetric case, the three saddle
points simultaneously collide with the stable star node at
$a=1$. Beyond $a=1$, the equilibrium that is the stable star node when $a<1$ loses
its stability to become an unstable star node.
The three saddle points move away from the unstable
star node as $a$ increases. This transition can be interpreted as
three simultaneously occurring transcritical bifurcations.
The unstable node that emerges at $a=a_{\rm c2}$ approaches $x_i^*=1/3$ ($1\le i\le 3$)
in the limit $a\rightarrow\infty$, as shown in
Appendix~\ref{appendix:limit}. The three saddle points
approach $(x_1, x_2, x_3)=(1/2, 1/2, 0), (1/2,
0, 1/2)$, and $(0, 1/2, 1/2)$, as shown in
Fig.~\ref{flow2}g. This is a trivial consequence of the
proof given in Appendix~\ref{appendix:limit}.
Therefore, the heterogeneity in $s_i$ plays no role in the limit
$a\to\infty$: the phase space is symmetrically divided into
the three attractive basins corresponding to $e_1$, $e_2$, and $e_3$.
For $n=4$, the relationship between $a$ and the
sizes of the attractive basins of the different equilibria
is shown in Fig.~\ref{basin}b.
The results are qualitatively the same as those for $n=3$
(Fig.~\ref{basin}a).
\begin{figure}[tbph]
\begin{center}
\includegraphics[height=5cm]{figure5a.eps}
\includegraphics[height=5cm]{figure5b.eps}
\end{center}
\caption{Sizes of the attractive basins for
different equilibria
when the majority preference is absent and the minority aversion is present.
The lines with legend ``State $i$'' represent the basin size for the consensus equilibrium of state $i$. The lines with legend ``Coexistence'' represent the basin size for the coexistence equilibrium.
(a) $n=3$, $\beta=0$, $s_1=0.36$,
$s_2=0.33$, and $s_3=0.31$. (b) $n=4$, $\beta=0$, $s_1=0.28$,
$s_2=0.26$, $s_3=0.24$, and $s_4=0.22$. We obtain $a_{{\rm c}1}\approx
1.43$ and $a_{{\rm c}2}\approx 2.81$ in (a) and $a_{{\rm c}1}\approx
1.91$ and $a_{{\rm c}2}\approx 3.29$ in (b). We calculate the sizes of the attractive basins as follows. First, we take the initial condition ($x_1$, $x_2$, $x_3$) $=$
($0.01i$, $0.01j$, $0.01k$), where $i,j,k\ge 1$, $i+j+k=100$, for $n=3$ and
($x_1$, $x_2$, $x_3$, $x_4$) = ($0.05i$, $0.05j$, $0.05k$, $0.05\ell$), where $i,j,k,\ell\ge 1$,
$i+j+k+\ell=20$, for $n=4$. Second, we run the dynamics starting from each initial condition until the trajectory converges. Third, we count the fraction of the initial conditions that converge to each stable equilibrium.}
\label{basin}
\end{figure}
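The basin-size computation described in the caption of Fig.~\ref{basin} can be sketched as follows. The transition-rate form $P_{ji}=s_i x_i^\beta (1-x_j)^{a-\beta}$ used below is our reading of Eq.~\eqref{P} (stated earlier in the paper and not restated in this section), so the sketch is illustrative rather than authoritative: each initial condition on a grid is integrated forward by Euler stepping and the attractor it reaches is labeled.

```python
def step(x, s, a, beta, dt=0.05):
    """One Euler step of dx_i/dt = sum_j [x_j P_ji - x_i P_ij],
    assuming (our reading of Eq. (P)) P_ji = s_i * x_i**beta * (1 - x_j)**(a - beta)."""
    n = len(x)
    dx = [0.0] * n
    for i in range(n):
        for j in range(n):
            if i != j:
                dx[i] += s[i] * x[j] * x[i] ** beta * (1 - x[j]) ** (a - beta)
                dx[i] -= s[j] * x[i] * x[j] ** beta * (1 - x[i]) ** (a - beta)
    # Clip to [0, 1]; the conservation law keeps the sum at 1 for interior points.
    return [max(0.0, min(1.0, x[i] + dt * dx[i])) for i in range(n)]

def attractor(x0, s, a, beta, steps=20000):
    """Integrate from x0 and label the attractor: 'state i' if the trajectory
    reaches the consensus equilibrium of state i, 'coexistence' otherwise."""
    x = list(x0)
    for _ in range(steps):
        x = step(x, s, a, beta)
    for i, xi in enumerate(x):
        if xi > 0.99:
            return "state %d" % (i + 1)
    return "coexistence"
```

Looping `attractor` over the grid of initial conditions $(0.01i, 0.01j, 0.01k)$ and counting the labels reproduces the basin-size fractions plotted in Fig.~\ref{basin}a.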
\subsection{Symmetric case}\label{symmetric}
In the previous sections,
we separately
considered the effect of the majority preference
(Sect.~\ref{sub:majority preference})
and
the minority aversion (Sect.~\ref{sub:minority aversion}). In this section, we
examine the extended AS model when the two effects are combined.
To gain analytical insight into the model,
we focus on the symmetric case
$s_i=s$ ($1\le i\le n$). Although normalization $\sum_{i=1}^n s_{i}=1$ leads to $s_i=1/n$, we set $s=1$ in this section to simplify the notation; $s$ just controls the time scale of the dynamics.
Then, Eqs.~\eqref{dx/dt} and \eqref{P} are reduced to
\begin{equation}
\frac{dx_i}{dt}=x_i^\beta\sum_{j=1, j\neq i}^n x_j(1-x_j)^{a-\beta}-x_i(1-x_i)^{a-\beta}\sum_{j=1, j\neq i}^n x_j^\beta.
\label{ndx/dt}
\end{equation}
Equation~\eqref{ndx/dt} implies that, regardless of the parameter values,
there exist $n$ trivial
consensus equilibria and symmetric coexistence equilibria of $n^{\prime}$
($2\le n^{\prime}\le n$) states given by
$x_i^*=1/n^{\prime}$, where $i$ varies over the $n^{\prime}$ surviving states arbitrarily selected from the $n$ states.
Owing to the conservation law $\sum_{i=1}^nx_i=1$, the dynamics are
($n-1$)-dimensional. The eigenvalues of the
Jacobian matrix of the dynamics at the coexistence equilibrium containing the
$n$ states
are ($n-1$)-fold and given by
$\left(\tfrac{1}{n}\right)^\beta\left(1-\tfrac{1}{n}\right)^{a-\beta-1}\left[(n-2)\beta+a-n+1\right]$, as shown in Appendix~\ref{appendix:coexistence}. Therefore,
the coexistence equilibrium is stable if and only if
\begin{equation}
(n-2)\beta+a-n+1<0.
\label{coexist}
\end{equation}
Similarly, we show in Appendix \ref{appendix:consensus}
that the consensus equilibria are stable if and only if
\begin{equation}
a>1.
\label{consensus}
\end{equation}
Coexistence equilibria of $n^{\prime}$ ($2\le n^{\prime}\le n-1$) states
are always unstable (Appendix \ref{appendix:coexistence}).
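As a quick consistency check on Eqs.~\eqref{coexist} and \eqref{consensus}, the sketch below (ours, for the symmetric case $s_i=s=1$ of Eq.~\eqref{ndx/dt}) evaluates the right-hand side of the dynamics at the uniform point and the ($n-1$)-fold eigenvalue given above:

```python
def rhs(x, a, beta):
    """Right-hand side of the symmetric dynamics, Eq. (ndx/dt), with s_i = 1."""
    n = len(x)
    return [
        x[i] ** beta * sum(x[j] * (1 - x[j]) ** (a - beta)
                           for j in range(n) if j != i)
        - x[i] * (1 - x[i]) ** (a - beta) * sum(x[j] ** beta
                                                for j in range(n) if j != i)
        for i in range(n)
    ]

def coexistence_eigenvalue(n, a, beta):
    """(n-1)-fold eigenvalue at the full coexistence equilibrium x_i = 1/n."""
    return (1 / n) ** beta * (1 - 1 / n) ** (a - beta - 1) * ((n - 2) * beta + a - n + 1)

def coexistence_stable(n, a, beta):
    """Stability condition of Eq. (coexist): (n-2)*beta + a - n + 1 < 0."""
    return (n - 2) * beta + a - n + 1 < 0
```

For $n=3$ and $\beta=0$, the coexistence equilibrium loses stability at $a=n-1=2$ while the consensus equilibria are stable for $a>1$, so both equilibrium types coexist for $1<a<2$, consistent with Fig.~\ref{para_space}.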
Figure~\ref{para_space} is the phase diagram of the model in which
the stable equilibria for given parameter values
are indicated. The thin solid and dashed lines separating two phases
are given by Eqs.~\eqref{coexist} and \eqref{consensus}, respectively.
A multistable parameter region exists when $n\ge 3$; Eq.~\eqref{coexist} is reduced to $a<1$ when $n=2$.
When $n\ge 3$, multistability occurs except in the case of the pure majority preference (i.e., $\beta=a$). The multistable parameter region enlarges as $n$ increases.
\begin{figure}[tbph]
\begin{center}
\includegraphics[width=8cm]{figure6.eps}
\end{center}
\caption{Phase diagram of the extended AS model when all the $s_i$ values are equal. The thin solid lines and the thin dashed line are given by
Eqs.~\eqref{coexist}
and \eqref{consensus}, respectively.}
\label{para_space}
\end{figure}
\section{Discussion}\label{discussion}
We analyzed an extended AS model with $n$ states.
We showed that the introduction of
the minority aversion as compared to the majority preference changes
the behavior of the model with $n\ge 3$ in two main aspects.
First, different states stably coexist up to a larger $a$ value with the minority aversion than with the majority preference. Nevertheless, it should be noted that
$a$ is the exponent associated with different quantities in the two cases (Eq.~\eqref{P}). Second,
the multistability of the consensus
equilibria and the coexistence equilibrium is
facilitated by the minority aversion and opposed by the majority preference.
We verified that the main results also hold true in the case of more general transition rates than Eq.~\eqref{P}, expressed as $P_{ji}=s_i^\gamma (1-s_j)^{1-\gamma}x_i^\beta(1-x_j)^{a-\beta}$
(Appendix~\ref{sec:with gamma}).
Volovik and colleagues examined mean-field dynamics of
a three-state opinion formation model
with minority aversion
\cite{Volovik2009EPL}. Coexistence of at least
two states occurs in their model even if the random choice term that the authors assumed, which is equivalent to diffusive coupling, is turned off. This is because,
in their model, only the most minor state shrinks in the number of individuals,
while the other two, major states are equally strong.
In our model, the most major and second major states have different strengths in attracting individuals, and the coexistence equilibrium is stable
only for small $a$.
Fu and Wang considered the combined effects of the majority preference and minority avoidance on coevolution of opinions and network structure
\cite{Fu}. They assumed that the majority preference is used for collective opinion formation and the minority avoidance guides network formation.
They showed that segregated groups, each composed of individuals with the same opinion, evolve when the minority avoidance is dominantly used.
The coexistence of multiple states owing to the segregation has also been shown in spatially extended AS models \cite{Patriarca1,Patriarca2}.
We showed that coexistence of different states is facilitated by the minority aversion, not by the majority preference, and that stable coexistence
occurs without segregation, or other spatial or network mechanisms.
Nowak and colleagues analyzed a replicator--mutator equation as a
model of language evolution \cite{Nowak}. They showed that coexistence of different grammars (i.e., states in our terminology) and a consensus-like configuration in which one grammar dominates the others are multistable when a learning parameter takes an intermediate value. If the learning is
accurate, the consensus-like configuration becomes monostable. Our model and theirs differ in at least two aspects.
First, the control parameter in our model is
$a$, that is, the combined strength of the majority preference and the minority aversion. Second, the stable coexistence in our model requires
some minority aversion, whereas their model
takes into account the majority preference but not the minority aversion.
\begin{acknowledgements}
We thank Kiyohito Nagano, Hisashi Ohtsuki, and Gouhei Tanaka for valuable discussions.
This research is supported by the Aihara Innovative Mathematical
Modelling Project, the Japan Society for the Promotion of Science
(JSPS) through the ``Funding Program for World-Leading Innovative R\&D
on Science and Technology (FIRST Program),'' initiated by the Council
for Science and Technology Policy (CSTP).
We also acknowledge financial support provided through Grants-in-Aid for Scientific Research (No. 23681033).
\end{acknowledgements}
\section*{Appendix}
\label{appendix}
\renewcommand{\thesubsection}{\Alph{subsection}}
\subsection{Superimposition Attack using $L_\infty$ Norm}
\label{SI3_Linfinity}
In this section we present the results of applying the Superimposition ($3\times$) attack with the $L_\infty$ norm to our model. In Table~\ref{tab:distribution_clean_vs_super3_Li} we similarly observe that Noisy Logit reduces the success rate of the attack on the individual networks and that Ensemble Voting allows accuracy to be further improved for both datasets.
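As a concrete illustration of how a superimposed adversarial input is assembled (a sketch of our understanding; the authoritative construction is described in the main text), the perturbations crafted against the individual networks are added to the clean image and the result is clipped back to the valid pixel range:

```python
import numpy as np

def superimpose(x, deltas, pixel_min=0.0, pixel_max=1.0):
    """Superimpose several per-network adversarial perturbations onto a
    clean input x, clipping the result to the valid pixel range."""
    x_adv = x + np.sum(deltas, axis=0)
    return np.clip(x_adv, pixel_min, pixel_max)
```

For the $3\times$ variant, `deltas` would hold the three smallest-norm perturbations found by the per-network attacks.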
\begin{table}[]
\centering
\begin{subtable}{1.0\linewidth}
\centering
\subcaption{Average accuracy}
\vspace{-0.2cm}
\resizebox{0.95\textwidth}{!}{%
\begin{tabular}{@{}lll@{}}
\toprule
\textbf{Single Network} & \textbf{Clean Accuracy} & \textbf{SI3 Attack Accuracy} \\ \midrule
MNIST Network & 95.60\% & 19.13\% \\
MNIST Network with Noisy Logit & 75.22\% & 49.07\% \\
CIFAR10 Network & 83.00\% & 68.53\% \\
CIFAR10 Network with Noisy Logit & 77.47\% & 68.22\% \\ \bottomrule
\end{tabular}%
}
\label{tab:accuracy_clean_vs_super3_Li}
\end{subtable}
\vspace{0.25cm}
\begin{subtable}{1.0\linewidth}
\centering
\subcaption{Classifications}
\vspace{-0.2cm}
\resizebox{0.95\textwidth}{!}{%
\begin{tabular}{@{}llll@{}}
\toprule
\textbf{Model} & \textbf{Correct} & \textbf{Target} & \textbf{Other} \\ \midrule
MNIST Ensemble & 18.89\% & 64.44\% & 16.67\% \\
MNIST Ensemble with Noisy Logit & 76.67\% & 12.22\% & 11.11\% \\
CIFAR10 Ensemble & 85.56\% & 1.11\% & 13.33\% \\
CIFAR10 Ensemble with Noisy Logit & 87.78\% & 1.11\% & 11.11\% \\ \bottomrule
\end{tabular}%
}
\label{tab:classification_clean_vs_super3_Li}
\end{subtable}
\vspace{0.25cm}
\begin{subtable}{1.0\linewidth}
\centering
\subcaption{Average perturbations}
\vspace{-0.2cm}
\resizebox{0.95\textwidth}{!}{%
\begin{tabular}{@{}llll@{}}
\toprule
\textbf{Model} & \textbf{Correct} & \textbf{Target} & \textbf{Other} \\ \midrule
MNIST Ensemble & 11.78\% & 39.37\% & 49.01\% \\
MNIST Ensemble with Noisy Logit & 10.08\% & 46.62\% & 28.07\% \\
CIFAR10 Ensemble & 5.42\% & 0.00\% & 8.09\% \\
CIFAR10 Ensemble with Noisy Logit & 5.41\% & 0.00\% & 4.99\% \\ \bottomrule
\end{tabular}%
}
\label{tab:perturb_clean_vs_super3_Li}
\end{subtable}
\vspace{0.3cm}
\caption{Distributions for Superimposition ($3\times$) of adversarial inputs. (a) average single network clean accuracy vs. attack accuracy; (b) breakdown of classifications; (c) average perturbations corresponding to classifications.}
\label{tab:distribution_clean_vs_super3_Li}
\end{table}
\subsection{A Second Look at Noisy Logit}
\label{detailed_noisylogit}
The results for the Random Single Network Attack in Section~\ref{SN_results} provided some insights into how Noisy Logit works to reduce transferability across different neural networks. However, in the case of the MNIST dataset, we observed that although Noisy Logit changes the distribution of perturbations in the adversarial examples, it appears to offer no benefit, since Ensemble Voting alone provides better accuracy rates. In this section, we look at the output of each individual network in isolation when it is targeted, to see whether applying Noisy Logit improves the robustness of a single network. In Fig.~\ref{fig:adversarial_examples_withandwithout_noisylogit}, we craft adversarial examples corresponding to a single sample and a single target on each of the 50 networks in the ensemble.
\begin{figure}[!t]
\centering
\begin{minipage}{0.47\textwidth}
\begin{subfigure}{0.47\textwidth}
\subcaption{\footnotesize MNIST No Noisy Logit}
\label{fig:mnist_input0_target0_all}
\vspace {-.05cm}
\includegraphics[width=1\textwidth]{Plots/mnist/input0_target0_all.png}
\end{subfigure}
\hfill
\begin{subfigure}{0.47\textwidth}
\subcaption{\footnotesize MNIST With Noisy Logit}
\label{fig:mnist_input0_target0_noisy}
\vspace {-.05cm}
\includegraphics[width=1\textwidth]{Plots/mnist/input0_target0_noisy.png}
\end{subfigure}
\end{minipage}
\vfill
\begin{minipage}{0.47\textwidth}
\begin{subfigure}{0.47\textwidth}
\subcaption{\footnotesize CIFAR10 No Noisy Logit}
\label{fig:CIFAR10_input0_target0_all}
\vspace {-.05cm}
\includegraphics[width=1\textwidth]{Plots/cifar/input0_target0_all.png}
\end{subfigure}
\hfill
\begin{subfigure}{0.47\textwidth}
\subcaption{\footnotesize CIFAR10 With Noisy Logit}
\label{fig:CIFAR10_input0_target0_noisy}
\vspace {-.05cm}
\includegraphics[width=1\textwidth]{Plots/cifar/input0_target0_noisy.png}
\end{subfigure}
\end{minipage}
\caption{Adversarial examples without and with Noisy Logit applied}
\label{fig:adversarial_examples_withandwithout_noisylogit}
\end{figure}
\begin{table}[]
\begin{subtable}{0.45\linewidth}
\centering
\subcaption{MNIST}
\vspace{-0.2cm}
\begin{tabular}{|l|l|l|l|l|}
\hline
7 & 7 & 7 & 5 & 0 \\ \hline
7 & 0 & 7 & 0 & 2 \\ \hline
7 & 7 & 7 & 7 & 7 \\ \hline
7 & 0 & 7 & 0 & 2 \\ \hline
7 & 7 & 3 & 2 & 2 \\ \hline
7 & 7 & 3 & 7 & 7 \\ \hline
2 & 7 & 7 & 7 & 3 \\ \hline
7 & 7 & 7 & 2 & 0 \\ \hline
3 & 7 & 5 & 0 & 7 \\ \hline
7 & 2 & 7 & 0 & 7 \\ \hline
\end{tabular}
\label{tab:class_input7target0_noisy_mnist}
\end{subtable}
\vspace{0.3cm}
\begin{subtable}{0.45\linewidth}
\centering
\subcaption{CIFAR10}
\vspace{-0.2cm}
\begin{tabular}{|l|l|l|l|l|}
\hline
8 & 8 & 1 & 8 & 8 \\ \hline
8 & 8 & 1 & 8 & 8 \\ \hline
8 & 8 & 1 & 8 & 8 \\ \hline
8 & 8 & 8 & 8 & 1 \\ \hline
8 & 1 & 8 & 8 & 8 \\ \hline
8 & 1 & 1 & 8 & 8 \\ \hline
8 & 8 & 8 & 8 & 0 \\ \hline
8 & 8 & 8 & 8 & 1 \\ \hline
1 & 1 & 8 & 8 & 8 \\ \hline
1 & 8 & 8 & 8 & 8 \\ \hline
\end{tabular}
\label{tab:class_input7target0_noisy_cifar}
\end{subtable}
\caption{Classifications of the 50 networks corresponding to Fig. \ref{fig:mnist_input0_target0_noisy} and Fig. \ref{fig:CIFAR10_input0_target0_noisy}, respectively. In (a), the numbered classifications correspond to the digits; in (b), 8 corresponds to ship, 1 corresponds to automobile and 0 corresponds to airplane.}
\label{tab:classification_input7target0_noisy}
\end{table}
In Fig.~\ref{fig:mnist_input0_target0_all} and Fig.~\ref{fig:mnist_input0_target0_noisy}, the sample input is the digit 7 and the target is the digit 0. Observe that the distribution of perturbations changes if we apply Noisy Logit: in Fig.~\ref{fig:mnist_input0_target0_noisy} we see more occurrences in the tails (i.e., very small or very large perturbations). In Fig.~\ref{fig:mnist_input0_target0_all}, each targeted network misclassifies its corresponding adversarial example as 0 (corresponding to a $100\%$ success rate of Carlini-Wagner on a single network), whereas in Fig.~\ref{fig:mnist_input0_target0_noisy} only 8 of the networks misclassify as 0, and 29 of the networks still correctly classify as 7, as shown in Table \ref{tab:class_input7target0_noisy_mnist}. Therefore, for an individual MNIST network with Noisy Logit applied, the success rate of a targeted Carlini-Wagner attack is low. Thus, a single MNIST network is more robust to adversarial examples if Noisy Logit is applied; however, the accuracy rate suffers because extra noise is added.
In Fig.~\ref{fig:CIFAR10_input0_target0_all} and Fig.~\ref{fig:CIFAR10_input0_target0_noisy}, the sample input is a ship and the target is an airplane. There is no noticeable difference in the distribution of perturbations whether or not Noisy Logit is applied. In Fig.~\ref{fig:CIFAR10_input0_target0_all}, again each targeted network misclassifies its corresponding adversarial example as the target (airplane), whereas in Fig.~\ref{fig:CIFAR10_input0_target0_noisy} only 1 of the networks misclassifies as airplane, and 38 of the networks still correctly classify as ship, as shown in Table \ref{tab:class_input7target0_noisy_cifar}. Therefore, the success rate of a targeted Carlini-Wagner attack on a single CIFAR10 network is very low, i.e., the robustness of a single CIFAR10 network is increased in the presence of Noisy Logit. Note also that the accuracy rate suffers only slightly from the extra noise.
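The per-network counts quoted above can be reproduced directly from Table~\ref{tab:class_input7target0_noisy_mnist} by tallying the 50 classifications; a small sketch:

```python
from collections import Counter

# Classifications of the 50 MNIST networks, read row by row from
# Table tab:class_input7target0_noisy_mnist; true label 7, target 0.
labels = [
    7, 7, 7, 5, 0,
    7, 0, 7, 0, 2,
    7, 7, 7, 7, 7,
    7, 0, 7, 0, 2,
    7, 7, 3, 2, 2,
    7, 7, 3, 7, 7,
    2, 7, 7, 7, 3,
    7, 7, 7, 2, 0,
    3, 7, 5, 0, 7,
    7, 2, 7, 0, 7,
]

votes = Counter(labels)
ensemble_prediction = votes.most_common(1)[0][0]
```

The tally gives 29 networks voting for the correct digit 7 and only 8 for the target 0, so Ensemble Voting returns the correct label.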
\subsection{Sample Results for Superimposition Attacks}
In Fig.~\ref{fig:mnist_sup2_simple_sample1} through Fig.~\ref{fig:cifar_sup3_noisy_sample2}, samples of the resulting images with adversarial perturbations are shown. The leftmost column displays the original images, the middle columns display the adversarial examples with the two or three smallest perturbations, and the last column shows the superimposition of the two or three adversarial examples. The rows correspond to the different targets used for the adversarial examples. Classifications of these images are provided in Tables \ref{tab:mnist_classifications_sample1} - \ref{tab:cifar_classifications_sample2}.
\begin{figure}[t]
\begin{minipage}{0.47\textwidth}
\begin{subfigure}{0.47\textwidth}
\subcaption{Without Noisy Logit ($2\times$)}
\vspace {-.05cm}
\includegraphics[width=1\textwidth]{Plots/mnist/sup2_simple_sample1.png}
\label{fig:mnist_sup2_simple_sample1}
\end{subfigure}
\hfill
\begin{subfigure}{0.47\textwidth}
\subcaption{With Noisy Logit ($2\times$)}
\vspace {-.05cm}
\includegraphics[width=1\textwidth]{Plots/mnist/sup2_noisy_sample1.png}
\label{fig:mnist_sup2_noisy_sample1}
\end{subfigure}
\end{minipage}
\begin{minipage}{0.47\textwidth}
\begin{subfigure}{0.47\textwidth}
\subcaption{Without Noisy Logit ($3\times$)}
\vspace {-.05cm}
\includegraphics[width=1\textwidth]{Plots/mnist/sup3_simple_sample1.png}
\label{fig:mnist_sup3_simple_sample1}
\end{subfigure}
\hfill
\begin{subfigure}{0.47\textwidth}
\subcaption{With Noisy Logit ($3\times$)}
\vspace {-.05cm}
\includegraphics[width=1\textwidth]{Plots/mnist/sup3_noisy_sample1.png}
\label{fig:mnist_sup3_noisy_sample1}
\end{subfigure}
\end{minipage}
\vspace {-.3cm}
\caption{Adversarial Images using Superimposition ($2\times$ and $3\times$) MNIST Sample 1}
\end{figure}
\begin{table}[]
\resizebox{0.44\textwidth}{!}{%
\begin{tabular}{@{}lllll@{}}
\toprule
Target & SI ($2\times$) & SI-NL ($2\times$) & SI ($3\times$) & SI-NL ($3\times$) \\ \midrule
0 & 7 & 7 & 0 & 7 \\
1 & 2 & 7 & 2 & 7 \\
2 & 7 & 7 & 2 & 7 \\
3 & 7 & 7 & 3 & 7 \\
4 & 4 & 7 & 4 & 7 \\
5 & 7 & 7 & 5 & 7 \\
6 & 6 & 7 & 6 & 7 \\
8 & 7 & 7 & 8 & 7 \\
9 & 9 & 7 & 9 & 7 \\ \bottomrule
\end{tabular}%
}
\caption{Classifications of MNIST Sample 1, corresponding to Fig. \ref{fig:mnist_sup2_simple_sample1}, \ref{fig:mnist_sup2_noisy_sample1}, \ref{fig:mnist_sup3_simple_sample1}, \ref{fig:mnist_sup3_noisy_sample1}, respectively.}
\label{tab:mnist_classifications_sample1}
\end{table}
\begin{figure}[!t]
\begin{minipage}{0.47\textwidth}
\begin{subfigure}{0.47\textwidth}
\subcaption{Without Noisy Logit ($2\times$)}
\vspace {-.05cm}
\includegraphics[width=1\textwidth]{Plots/mnist/sup2_simple_sample2.png}
\label{fig:mnist_sup2_simple_sample2}
\end{subfigure}
\hfill
\begin{subfigure}{0.47\textwidth}
\subcaption{With Noisy Logit ($2\times$)}
\vspace {-.05cm}
\includegraphics[width=1\textwidth]{Plots/mnist/sup2_noisy_sample2.png}
\label{fig:mnist_sup2_noisy_sample2}
\end{subfigure}
\end{minipage}
\begin{minipage}{0.47\textwidth}
\begin{subfigure}{0.47\textwidth}
\subcaption{Without Noisy Logit ($3\times$)}
\vspace {-.05cm}
\includegraphics[width=1\textwidth]{Plots/mnist/sup3_simple_sample2.png}
\label{fig:mnist_sup3_simple_sample2}
\end{subfigure}
\hfill
\begin{subfigure}{0.47\textwidth}
\subcaption{With Noisy Logit ($3\times$)}
\vspace {-.05cm}
\includegraphics[width=1\textwidth]{Plots/mnist/sup3_noisy_sample2.png}
\label{fig:mnist_sup3_noisy_sample2}
\end{subfigure}
\end{minipage}
\vspace {-.3cm}
\caption{Adversarial Images using Superimposition ($2\times$ and $3\times$) MNIST Sample 2}
\end{figure}
\begin{table}[]
\resizebox{0.44\textwidth}{!}{%
\begin{tabular}{@{}lllll@{}}
\toprule
Target & SI ($2\times$) & SI-NL ($2\times$) & SI ($3\times$) & SI-NL ($3\times$) \\ \midrule
0 & 2 & 2 & 2 & 2 \\
1 & 2 & 2 & 2 & 2 \\
3 & 2 & 2 & 3 & 2 \\
4 & 8 & 2 & 4 & 4 \\
5 & 2 & 2 & 5 & 2 \\
6 & 2 & 2 & 6 & 2 \\
7 & 7 & 8 & 7 & 3 \\
8 & 2 & 2 & 8 & 2 \\
9 & 8 & 2 & 9 & 9 \\ \bottomrule
\end{tabular}%
}
\caption{Classifications of MNIST Sample 2, corresponding to Fig. \ref{fig:mnist_sup2_simple_sample2}, \ref{fig:mnist_sup2_noisy_sample2}, \ref{fig:mnist_sup3_simple_sample2}, \ref{fig:mnist_sup3_noisy_sample2}, respectively.}
\label{tab:mnist_classifications_sample2}
\end{table}
\begin{figure}[!t]
\begin{minipage}{0.47\textwidth}
\begin{subfigure}{0.47\textwidth}
\subcaption{Without Noisy Logit ($2\times$)}
\vspace {-.05cm}
\includegraphics[width=1\textwidth]{Plots/cifar/sup2_simple_sample1.png}
\label{fig:cifar_sup2_simple_sample1}
\end{subfigure}
\hfill
\begin{subfigure}{0.47\textwidth}
\subcaption{With Noisy Logit ($2\times$)}
\vspace {-.05cm}
\includegraphics[width=1\textwidth]{Plots/cifar/sup2_noisy_sample1.png}
\label{fig:cifar_sup2_noisy_sample1}
\end{subfigure}
\end{minipage}
\begin{minipage}{0.47\textwidth}
\begin{subfigure}{0.47\textwidth}
\subcaption{Without Noisy Logit ($3\times$)}
\vspace {-.05cm}
\includegraphics[width=1\textwidth]{Plots/cifar/sup3_simple_sample1.png}
\label{fig:cifar_sup3_simple_sample1}
\end{subfigure}
\hfill
\begin{subfigure}{0.47\textwidth}
\subcaption{With Noisy Logit ($3\times$)}
\vspace {-.05cm}
\includegraphics[width=1\textwidth]{Plots/cifar/sup3_noisy_sample1.png}
\label{fig:cifar_sup3_noisy_sample1}
\end{subfigure}
\end{minipage}
\vspace {-.3cm}
\caption{Adversarial Images using Superimposition ($3\times$) CIFAR Sample 1}
\end{figure}
\begin{table}[]
\resizebox{0.44\textwidth}{!}{%
\begin{tabular}{@{}lllll@{}}
\toprule
Target & SI ($2\times$) & SI-NL ($2\times$) & SI ($3\times$) & SI-NL ($3\times$) \\ \midrule
0 & 3 & 3 & 3 & 3 \\
1 & 3 & 3 & 3 & 3 \\
2 & 3 & 3 & 3 & 3 \\
4 & 3 & 3 & 3 & 3 \\
5 & 3 & 3 & 3 & 3 \\
6 & 3 & 3 & 3 & 3 \\
7 & 3 & 3 & 3 & 3 \\
8 & 3 & 3 & 3 & 3 \\
9 & 3 & 3 & 3 & 3 \\ \bottomrule
\end{tabular}%
}
\caption{Classifications of CIFAR Sample 1, corresponding to Fig. \ref{fig:cifar_sup2_simple_sample1}, \ref{fig:cifar_sup2_noisy_sample1}, \ref{fig:cifar_sup3_simple_sample1}, \ref{fig:cifar_sup3_noisy_sample1}, respectively.}
\label{tab:cifar_classifications_sample1}
\end{table}
\begin{figure}[!t]
\begin{minipage}{0.47\textwidth}
\begin{subfigure}{0.47\textwidth}
\subcaption{Without Noisy Logit ($2\times$)}
\vspace {-.05cm}
\includegraphics[width=1\textwidth]{Plots/cifar/sup2_simple_sample2.png}
\label{fig:cifar_sup2_simple_sample2}
\end{subfigure}
\hfill
\begin{subfigure}{0.47\textwidth}
\subcaption{With Noisy Logit ($2\times$)}
\vspace {-.05cm}
\includegraphics[width=1\textwidth]{Plots/cifar/sup2_noisy_sample2.png}
\label{fig:cifar_sup2_noisy_sample2}
\end{subfigure}
\end{minipage}
\begin{minipage}{0.47\textwidth}
\begin{subfigure}{0.47\textwidth}
\subcaption{Without Noisy Logit ($3\times$)}
\vspace {-.05cm}
\includegraphics[width=1\textwidth]{Plots/cifar/sup3_simple_sample2.png}
\label{fig:cifar_sup3_simple_sample2}
\end{subfigure}
\hfill
\begin{subfigure}{0.47\textwidth}
\subcaption{With Noisy Logit ($3\times$)}
\vspace {-.05cm}
\includegraphics[width=1\textwidth]{Plots/cifar/sup3_noisy_sample2.png}
\label{fig:cifar_sup3_noisy_sample2}
\end{subfigure}
\end{minipage}
\vspace {-.3cm}
\caption{Adversarial Images using Superimposition ($2\times$ and $3\times$) CIFAR Sample 2}
\end{figure}
\begin{table}[]
\resizebox{0.44\textwidth}{!}{%
\begin{tabular}{@{}lllll@{}}
\toprule
Target & SI ($2\times$) & SI-NL ($2\times$) & SI ($3\times$) & SI-NL ($3\times$) \\ \midrule
0 & 8 & 8 & 8 & 8 \\
1 & 8 & 8 & 8 & 8 \\
2 & 8 & 8 & 8 & 8 \\
3 & 8 & 8 & 8 & 8 \\
4 & 8 & 8 & 8 & 8 \\
5 & 8 & 8 & 8 & 1 \\
6 & 8 & 8 & 8 & 8 \\
7 & 8 & 8 & 0 & 8 \\
9 & 1 & 8 & 1 & 8 \\ \bottomrule
\end{tabular}%
}
\caption{Classifications of CIFAR Sample 2, corresponding to Fig. \ref{fig:cifar_sup2_simple_sample2}, \ref{fig:cifar_sup2_noisy_sample2}, \ref{fig:cifar_sup3_simple_sample2}, \ref{fig:cifar_sup3_noisy_sample2}, respectively.}
\label{tab:cifar_classifications_sample2}
\end{table}
\section{Conclusion}
In this paper we introduced an approach to protect image classification networks from adversarial examples. The approach is composed of two mechanisms, Noisy Logit and Ensemble Voting, which were evaluated in Section \ref{testing_results}. We saw that Ensemble Voting improves accuracy over the base model, while Noisy Logit reduces transferability across different networks in classifying adversarial examples. Moreover, the approach combining the two mechanisms was shown to classify adversarial examples with accuracy comparable to that on genuine inputs for both MNIST and CIFAR10, and further improved accuracy was achieved with Rank Verification. Noisy Logit impedes the adversary's ability to accurately solve the optimization problem for crafting adversarial examples, since solving that problem requires access to layer outputs that have been tampered with by the Noisy Logit mechanism. Ensemble Voting is effective against white-box attacks and provides resilience as well as improved accuracy. Since Noisy Logit reduces accuracy in general, Ensemble Voting complements the approach by recovering the lost accuracy.
There are a number of future directions to extend the current work. First, one could consider Ensemble Voting using a collection of networks with very different architectures (rather than the ones considered here, which are identical up to the temperature constant). The work of \cite{9156305} demonstrated that some architectures are more robust to adversarial examples; we expect that our model would benefit from such architectures, since each individual network would be more robust and transferability could be reduced as well.
One can also study the relation between the number of networks used and the amount of noise added, since it is not obvious how these two quantities interact in the robustness guarantee. It would also be interesting to perform similar experiments on other datasets, since MNIST and CIFAR10 exhibit different transferability properties.
\section{Discussion}
The goals of our work might appear somewhat contradictory. On one hand, we provide a mechanism that protects against adversarial examples, and it relies on the fact that the transferability properties of adversarial examples between networks are not \emph{completely} known. On the other hand, we have shown how easy it is to craft an adversarial example by superimposition that can fool not only the original networks from which it was crafted but potentially other networks as well (that is, in the absence of a protection mechanism). We believe the key to finding an adversarial example that truly resists all protection mechanisms, including the one presented here, is to understand the transferability properties of such examples across different networks. Based on our testing results, transferability appears to depend strongly on the complexity of the classification task. The images in the CIFAR10 dataset have many more features than those in the MNIST dataset; although the CIFAR10 and MNIST networks were trained with similar model architectures, the MNIST networks are much more prone to transferability than the CIFAR10 networks. Also, a much smaller perturbation suffices to craft an adversarial example on a single CIFAR10 network than on an MNIST network, which could mean that in a CIFAR10 image there is a small subset of pixels that is most important to the model. When this small subset of pixels varies across different models, it becomes very unlikely that an adversarial example crafted with one model would succeed on a different model, unless there is some model-independent small subset of pixels that is universally important to every model.\\
We note that the transferability of neural networks was studied by Papernot et al. \citep{papernot2016transferability}, whose experiments showed that transferability from one neural network to another was as high as 38\% using the Fast Gradient Sign method \citep{goodfellow2014explaining}. In \citep{papernot2016transferability}, they did not consider an explicit algorithm for attacking an ensemble model. In our testing results in Tables~\ref{tab:accuracy_clean_vs_super2} and \ref{tab:classification_clean_vs_super2}, we showed that using a simple superimposition of two adversarial examples, the average accuracy on the MNIST networks was reduced from 96.80\% to 55.30\%, with an ensemble accuracy of 66.30\%. When three adversarial examples were used, the average accuracy was reduced from 95.60\% to 24.69\%, with an ensemble accuracy of 26.67\%, as shown in Tables \ref{tab:accuracy_clean_vs_super3} and \ref{tab:classification_clean_vs_super3}. Note that the transferability rate also depends on the classification task: the same tables show that identical experiments on the CIFAR10 dataset yielded a much lower transferability rate under similar superimposition attacks.
\section{Experimental Evaluation}
\label{testing_results}
In this section we assess the effectiveness of our approach by formulating two types of potential attacks on our model. In the first type, we consider an attack crafted for a randomly chosen network in the ensemble. Since the model architectures are so similar across the networks, transferability is possible: networks other than the chosen one might still classify incorrectly. In the second type, we consider superimpositions of adversarial examples to examine whether this could further increase transferability. For a chosen subset of the networks, each with a corresponding adversarial example, one could reasonably suspect that networks beyond the chosen subset could also classify incorrectly, as the superimposition may capture perturbations that are commonly effective on many other networks\footnote {Additional experiments plus the source code for the experiments can be found at: \url{https://tinyurl.com/y4qkabd4} .}.
\subsection{Test Setup}
\label{test_setup}
We conducted experiments on two datasets, MNIST \cite{lecun1998} and CIFAR10 \cite{krizhevsky2009learning}. For the following tests, the architecture of each network is the same as the one in \cite{carlini2017towards}, which we provide in Table~\ref{tab:ensemble_architecture}. We first trained an ensemble of networks $F^l$, each with temperature $T_l$, $l=1,2,\ldots,m$. We used $T_l=10\cdot l$ and $m=50$.
In all the experiments, the $L^2$ norm is used wherever a norm is needed, including the $L^2$ version of the Carlini-Wagner attack. For the models with Noisy Logit, we employ a
Gaussian noise function $g(0,\sigma^2)$ with $\sigma=0.5$ for MNIST and $\sigma=0.03$ for CIFAR10. We experimented with different values of the scale parameter and found that these values provide sufficient noise without losing too much accuracy.
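The two mechanisms can be sketched as follows. This is our own minimal illustration; we assume the Gaussian noise $g(0,\sigma^2)$ is applied to each network's logits (per the mechanism's name), and the per-network logit values here are placeholders:

```python
import numpy as np

def noisy_logit(logits, sigma, rng):
    """Noisy Logit: perturb a network's logits with Gaussian noise g(0, sigma^2)."""
    return logits + rng.normal(0.0, sigma, size=len(logits))

def ensemble_vote(all_logits, sigma=0.0, rng=None):
    """Ensemble Voting: each network casts the argmax of its (optionally
    noisy) logits; the class with the most votes wins."""
    rng = rng or np.random.default_rng(0)
    votes = np.zeros(len(all_logits[0]), dtype=int)
    for logits in all_logits:
        if sigma > 0.0:
            logits = noisy_logit(np.asarray(logits, dtype=float), sigma, rng)
        votes[int(np.argmax(logits))] += 1
    return int(np.argmax(votes))
```

With $\sigma=0.5$ for MNIST and $\sigma=0.03$ for CIFAR10, as listed in Table~\ref{tab:ensemble_parameter}.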
\begin{table}[t]
\centering
\begin{subtable}{.8\linewidth}
\subcaption{Model architectures}
\vspace {-.2cm}
\centering
\resizebox{0.88\textwidth}{!}{
\begin{tabular}{|l|l|l|}
\hline
\textbf{Layer} & \textbf{MNIST} & \textbf{CIFAR10} \\ \hline
Convolution + ReLU & $3\times3\times32$ & $3\times3\times64$ \\ \hline
Convolution + ReLU & $3\times3\times32$ & $3\times3\times64$ \\ \hline
Max Pooling & $2\times2$ & $2\times2$ \\ \hline
Convolution + ReLU & $3\times3\times32$ & $3\times3\times64$ \\ \hline
Convolution + ReLU & $3\times3\times32$ & $3\times3\times64$ \\ \hline
Max Pooling & $2\times2$ & $2\times2$ \\ \hline
Fully Connected + ReLU & $200$ & $256$ \\ \hline
Fully Connected + ReLU & $200$ & $256$ \\ \hline
Softmax & $10$ & $10$ \\ \hline
\end{tabular}%
}
\label{tab:ensemble_architecture}
\end{subtable}
\quad
\begin{subtable}{.8\linewidth}
\centering
\vspace{.2cm}
\subcaption{Parameters used}
\vspace {-.2cm}
\resizebox{0.86\textwidth}{!}{%
\begin{tabular}{|l|l|l|}
\hline
\textbf{Parameter} & \textbf{MNIST} & \textbf{CIFAR10} \\ \hline
Learning Rate & $0.01$ & $0.01$ \\ \hline
Decay & $1.00\times10^{-6}$ & $1.00\times10^{-6}$ \\ \hline
Momentum & $0.9$ & $0.9 $ \\ \hline
Dropout & $0.5$ & $0.5$ \\ \hline
Batch Size & $128$ & $128$ \\ \hline
Partitioned Training Set & Yes & No \\ \hline
Training Set Size (Per Network) & $1100$ & $45000$ \\ \hline
Validation Set Size & $5000$ & $5000$ \\ \hline
Epochs & $3000$ & $150$ \\ \hline
Gaussian Noise Sigma & $0.5$ & $0.03$ \\ \hline
\end{tabular}%
}
\label{tab:ensemble_parameter}
\end{subtable}
\vspace {-.2cm}
\caption{Setup for the ensemble of networks}
\end{table}
\noindent
\textbf{MNIST} We first partitioned the original dataset into 50 training subsets (1100 samples each) and one validation set (5000 samples). We trained 50 teachers individually on the partitioned subsets, each with a different temperature constant, for 3000 epochs. The average validation accuracy among the individual networks was approximately 94.03\%.
\input {Plots/mnist_normal/Figure_flip2count_adv2freq.tex}
\noindent
\textbf{CIFAR10} For CIFAR10 we used a single training set (45000 samples) and one
validation set (5000 samples). We do not use networks trained on partitioned datasets for CIFAR10, because we observed that the small training subsets resulted in low accuracy. We trained 50 teachers individually on the same training set, each with a different temperature constant, for 150 epochs. The average validation accuracy among the individual networks was approximately 72.42\%.
\begin{table}[h]
\centering
\begin{subtable}{1.0\linewidth}
\centering
\subcaption{Accuracy}
\vspace{-0.2cm}
\resizebox{0.95\textwidth}{!}{%
\begin{tabular}{@{}lll@{}}
\toprule
\textbf{Model} & \textbf{Clean Accuracy} & \textbf{SN Attack Accuracy}\\ \midrule
MNIST Ensemble & 100.000\% & 90.519\% \\
MNIST Ensemble with NL & 99.867\% & 85.970\% \\
MNIST Ensemble with NL+RV(0.05) & 100.000\% & 95.563\% \\
CIFAR10 Ensemble & 93.333\% & 89.304\% \\
CIFAR10 Ensemble with NL & 90.800\% & 88.030\% \\
CIFAR10 Ensemble with NL+RV(0.05) & 99.819\% & 96.643\% \\
\bottomrule
\end{tabular}%
}
\label{tab:clean_vs_adversarial_single}
\end{subtable}
\vspace{0.25cm}
\begin{subtable}{1.0\linewidth}
\centering
\subcaption{Adversarial classifications}
\vspace{-0.2cm}
\resizebox{0.95\textwidth}{!}
\begin{tabular}{@{}llll@{}}
\toprule
\textbf{Model} & \textbf{Correct} & \textbf{Target} & \textbf{Other} \\ \midrule
MNIST Ensemble & 90.519\% & 3.022\% & 6.459\% \\
MNIST Ensemble with NL & 85.970\% & 5.704\% & 8.326\% \\
CIFAR10 Ensemble & 89.304\% & 1.363\% & 9.333\% \\
CIFAR10 Ensemble with NL & 88.030\% & 1.659\% & 10.311\% \\ \bottomrule
\end{tabular}%
}
\label{tab:classification_adversarial_single}
\end{subtable}
\vspace{0.25cm}
\begin{subtable}{1.0\linewidth}
\centering
\subcaption{Average adversarial perturbations}
\vspace{-0.2cm}
\resizebox{0.95\textwidth}{!}{%
\begin{tabular}{@{}llll@{}}
\toprule
\textbf{Model} & \textbf{Correct} & \textbf{Target} & \textbf{Other} \\ \midrule
MNIST Ensemble & 15.956\% & 15.879\% & 22.567\% \\
MNIST Ensemble with NL & 14.646\% & 18.653\% & 23.276\% \\
CIFAR10 Ensemble & 4.161\% & 0.960\% & 4.692\% \\
CIFAR10 Ensemble with NL & 4.128\% & 0.759\% & 4.500\% \\ \bottomrule
\end{tabular}%
}
\label{tab:perturbation_adversarial_single}
\end{subtable}
\caption{Distributions for Single Network adversarial inputs. (a) clean accuracy vs. attack accuracy; (b) breakdown of classifications; (c) average perturbations corresponding to classifications.}
\label{tab:distribution_adversarial_single}
\end{table}
\subsection{Random Single Network Attack}
\label{SN_results}
In this section, we look at the possible outcomes of an adversarial example crafted to defeat a single network, to see how it can potentially transfer across the ensemble. Since an adversarial example crafted for one network can potentially fool a different network \cite{papernot2016transferability}, we expect transferability in our ensemble of networks especially given that they have very similar model architectures. Given some sample input $s$ and target label $t$, for each $F^l$ we craft an adversarial example $A(F^l,s,t)$ using the Carlini-Wagner attack, and look at: 1) how each $F^{l'}$ classifies $A(F^l,s,t)$; 2) how the ensemble classifies the example through voting with and without Noisy Logit; and 3) how Rank Verification (with 5\% significance level) can further improve accuracy.
We generate a set of 15 input samples, $s_k, k=1,...,15$. For each $s_k$ we craft an adversarial example $A(F^l,s_k,t_j)$ on network $F^l, l=1,...,50$, for targets $t_j, j=1,...,9$, for a total of $9\times50\times15=6750$ adversarial examples. We define the perturbation of an adversarial example as its normed difference with the original input over the norm of the original input, as follows:
{\small
\begin{equation}
\label{eqn:perturbation}
p(a;s) = \frac{||a-s||}{||s||},
\end{equation}
}
where $a$ is an adversarial example on the input $s$. We bucket the range of perturbations into 40 equally spaced bins, $x_b, b=1,...,40$. In the plots in this section, we aggregate by the perturbation bins and represent the bins by their mid-points on the x-axis.
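As a concrete illustration (not from the paper's released code), the relative perturbation of Eq. (\ref{eqn:perturbation}) and the 40-bin bucketing can be sketched as follows; the sample sizes and values are placeholders:

```python
import numpy as np

def perturbation(a, s):
    # Relative L2 perturbation p(a; s) = ||a - s|| / ||s||
    return np.linalg.norm(a - s) / np.linalg.norm(s)

def bin_perturbations(perts, n_bins=40):
    # Bucket perturbations into equally spaced bins; return the bin
    # mid-points (used on the x-axis) and the per-bin counts.
    edges = np.linspace(min(perts), max(perts), n_bins + 1)
    mids = 0.5 * (edges[:-1] + edges[1:])
    counts, _ = np.histogram(perts, bins=edges)
    return mids, counts

s = np.ones(784)              # clean input (e.g., a flattened MNIST image)
a = s + 0.1 * np.ones(784)    # hypothetical adversarial example
print(round(perturbation(a, s), 3))  # -> 0.1
```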
In Fig. \ref{fig:count_changed_target_mnist} and \ref{fig:count_changed_target_cifar}, we have the average counts of networks whose classifications change to the target of the adversarial example, from some other original classification, i.e., for bucket $x_b$, the values $y_b$ on the y-axis are:
{\small
\begin{equation}\label{eqn:count_changed_target}
\begin{aligned}
y_b = \frac{1}{|x_b|}\sum_{x \in x_b} |{} & \{F^l:F^l(a)\neq F^l(s),F^l(a)=t,\\
& p(a;s)=x,l=1,...,50\}|.
\end{aligned}
\end{equation}
}
In Fig. \ref{fig:count_changed_other_mnist} and \ref{fig:count_changed_other_cifar}, the counts are on the classifications that change to something other than the target, or:
{\small
\begin{equation}
\label{eqn:count_changed_other}
\begin{aligned}
y_b = \frac{1}{|x_b|}\sum_{x \in x_b} |{} & \{F^l:F^l(a)\neq F^l(s),F^l(a)\neq t,\\
& p(a;s)=x,l=1,...,50\}|.
\end{aligned}
\end{equation}
}
These counts are averaged by perturbation bin. The green and blue curves represent the results corresponding to adversarial examples crafted on networks without and with Noisy Logit applied, respectively.
In Fig. \ref{fig:freq_changed_target_mnist} and \ref{fig:freq_changed_target_cifar} we show the frequencies of aggregate outputs of the ensemble which changed to the target of the adversarial example, i.e.:
{\small
\begin{equation}
\label{eqn:freq_changed_target}
\begin{aligned}
y_b = \frac{1}{N}\sum_{x \in x_b} |{} & \{F^*(a):F^*(a)\neq F^*(s),F^*(a)=t,\\
& p(a;s)=x\}|,
\end{aligned}
\end{equation}
}
where $F^*(\cdot)$ represents the aggregate output by voting among the ensemble. Similarly, in Fig. \ref{fig:freq_changed_other_mnist} and \ref{fig:freq_changed_other_cifar} we show the frequencies corresponding to some label other than the target:
{\small
\begin{equation}
\label{eqn:freq_changed_other}
\begin{aligned}
y_b = \frac{1}{N}\sum_{x \in x_b} |{} & \{F^*(a):F^*(a)\neq F^*(s),F^*(a)\neq t,\\
& p(a;s)=x\}|.
\end{aligned}
\end{equation}
}
The frequencies are obtained by normalizing the total changed outputs by the total number of adversarial examples, which is $N=6750$ in this case.
In Fig. \ref{fig:perturbation_accuracy} we show the average accuracy of the ensemble by perturbation bin. From these plots, we observe that: 1) applying Noisy Logit clearly reduces the transferability rate for CIFAR10 networks, where in Fig. \ref{fig:count_changed_target_cifar} we notice a decrease in the number of networks in the ensemble whose classifications change to the target when Noisy Logit is applied, consistently so across perturbation bins; 2) applying Noisy Logit changes the distribution of perturbations for MNIST, where higher frequencies of larger perturbations are observed.
\input {Plots/cifar_normal/Figure_flip2count_adv2freq.tex}
\input {Plots/cifar_normal/Figure_adv2correct_mnistcifar.tex}
We summarize the results in Table \ref{tab:distribution_adversarial_single}. Observe that for CIFAR10 Ensemble with Noisy Logit, nearly identical accuracy to the original clean accuracy is achieved for adversarial examples that target any single network in the ensemble. We calculate accuracy as the ratio of the number of correct aggregate outputs over the total number (6750) of test attacks.
\input {Plots/mnist_normal/Figure_guarantee2adv.tex}
\input {Plots/cifar_normal/Figure_guarantee2adv.tex}
\subsection{Superimposition Attacks}
\label{SI_results}
In this section we consider superimposition attacks consisting of adversarial examples targeting two or three of the networks in the ensemble. Due to the large number of possible subsets of size two or three, we do not consider every such combination; instead, since the objective of crafting an adversarial example is to minimize the perturbation while causing a network to classify incorrectly, we consider a greedy-type superimposition in which the adversarial examples with the smallest perturbations are used. The setup is in Table \ref{tab:si_test_config}.
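The greedy superimposition described above can be sketched as follows. The helper names are ours, and the adversarial examples themselves would come from an attack such as Carlini-Wagner; this is an illustration, not the paper's implementation:

```python
import numpy as np

def greedy_pick(s, adv_examples, k):
    # Greedily pick the k adversarial examples with the smallest
    # perturbation ||a - s|| relative to the clean input s.
    ranked = sorted(adv_examples, key=lambda a: np.linalg.norm(a - s))
    return ranked[:k]

def superimpose(s, adv_examples, clip_min=0.0, clip_max=1.0):
    # Superimpose the perturbations of the chosen adversarial examples
    # onto the clean input, clipping back to the valid pixel range.
    delta = sum(a - s for a in adv_examples)
    return np.clip(s + delta, clip_min, clip_max)
```

For an SI2 attack one would call `superimpose(s, greedy_pick(s, adv_examples, 2))`, and analogously with `k=3` for SI3.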
\begin{table}[]
\resizebox{0.45\textwidth}{!}{%
\begin{tabular}{@{}llll@{}}
\toprule
\textbf{Attack} & \textbf{\# sample inputs} & \textbf{\# adversarial examples crafted} & \textbf{\# tests} \\ \midrule
SI2 & 30 & $30\times9\times50=13500$ & 180 \\
SI3 & 20 & $20\times9\times50=9000$ & 270 \\ \bottomrule
\end{tabular}
}
\caption{Setup for testing superimposition attacks.}
\label{tab:si_test_config}
\end{table}
\vspace{.2cm}
\noindent
\textbf{Superimposition of Two Adversarial Examples}
\noindent
In Table \ref{tab:accuracy_clean_vs_super2_SN} we report the average accuracy over the individual networks: the left column gives the accuracy on the original images, the right column on the adversarial examples. With a superimposition of only two images, the average accuracy of a single MNIST network is reduced from 96.80\% to 55.30\%, whereas for a CIFAR10 network the average accuracy is reduced from 79.53\% to 66.48\%, a much smaller drop. This again suggests that CIFAR10 networks are more robust to transferability. Observe that with Noisy Logit, the accuracy reduction is less pronounced.
\begin{table}[h]
\centering
\begin{subtable}{1.0\linewidth}
\centering
\subcaption{Average single network clean accuracy vs. attack accuracy}
\vspace{-0.2cm}
\resizebox{0.95\textwidth}{!}{%
\begin{tabular}{@{}lll@{}}
\toprule
\textbf{Network} & \textbf{Clean Accuracy} & \textbf{SI2 Attack Accuracy} \\ \midrule
MNIST Single & 96.800\% & 55.296\% \\
MNIST Single with NL & 90.044\% & 73.007\% \\
CIFAR10 Single & 79.533\% & 66.481\% \\
CIFAR10 Single with NL & 76.570\% & 72.259\% \\ \bottomrule
\end{tabular}
}
\label{tab:accuracy_clean_vs_super2_SN}
\end{subtable}
\vspace{0.25cm}
\begin{subtable}{1.0\linewidth}
\centering
\subcaption{Accuracy of the ensembles, with Noisy Logit (NL), and with NL + Rank Verification at 5\% significance level (NL+RV(0.05))}
\vspace{-0.2cm}
\resizebox{0.95\textwidth}{!}{%
\begin{tabular}{@{}lll@{}}
\toprule
\textbf{Model} & \textbf{Clean Accuracy} & \textbf{SI2 Attack Accuracy} \\ \midrule
MNIST Ensemble & 100.000\% & 66.296\% \\
MNIST Ensemble with NL & 100.000\% & 94.074\% \\
MNIST Ensemble with NL+RV(0.05) & 100.000\% & 99.099\% \\
CIFAR10 Ensemble & 90.000\% & 85.926\% \\
CIFAR10 Ensemble with NL & 87.778\% & 87.037\% \\
CIFAR10 Ensemble with NL+RV(0.05) & 92.035\% & 91.628\% \\ \bottomrule
\end{tabular}%
}
\label{tab:accuracy_clean_vs_super2}
\end{subtable}
\vspace{0.25cm}
\begin{subtable}{1.0\linewidth}
\centering
\subcaption{Breakdown of classifications}
\vspace{-0.2cm}
\resizebox{0.95\textwidth}{!}{%
\begin{tabular}{@{}llll@{}}
\toprule
\textbf{Model} & \textbf{Correct} & \textbf{Target} & \textbf{Other} \\ \midrule
MNIST Ensemble & 66.296\% & 13.333\% & 20.370\% \\
MNIST Ensemble with NL & 94.074\% & 0.370\% & 5.556\% \\
CIFAR10 Ensemble & 85.926\% & 1.111\% & 12.963\% \\
CIFAR10 Ensemble with NL & 87.037\% & 1.481\% & 11.481\% \\ \bottomrule
\end{tabular}
}
\label{tab:classification_clean_vs_super2}
\end{subtable}
\vspace{0.25cm}
\begin{subtable}{1.0\linewidth}
\centering
\subcaption{Average perturbations corresponding to classifications}
\vspace{-0.2cm}
\resizebox{0.95\textwidth}{!}{%
\begin{tabular}{@{}llll@{}}
\toprule
\textbf{Model} & \textbf{Correct} & \textbf{Target} & \textbf{Other} \\ \midrule
MNIST Ensemble & 14.628\% & 25.713\% & 24.323\% \\
MNIST Ensemble with NL & 9.564\% & 23.179\% & 24.446\% \\
CIFAR10 Ensemble & 3.685\% & 0.000\% & 4.782\% \\
CIFAR10 Ensemble with NL & 2.651\% & 0.000\% & 3.385\% \\ \bottomrule
\end{tabular}
}
\label{tab:perturb_clean_vs_super2}
\end{subtable}
\caption{Distributions for Superimposition ($2\times$) of adversarial inputs. }
\label{tab:distrib_clean_vs_super2}
\end{table}
Observe in Table \ref{tab:accuracy_clean_vs_super2} how much each model improves upon the single-network case in terms of clean and adversarial accuracy. Moreover, when both Noisy Logit and Rank Verification (at the 5\% significance level) are applied, the model achieves superior adversarial accuracy, even higher than the single-network clean accuracy.
We provide the distribution of perturbations corresponding to Table \ref{tab:classification_clean_vs_super2} in Table \ref{tab:perturb_clean_vs_super2}. Observe that the average perturbation for MNIST Ensemble with Noisy Logit is smaller than that without Noisy Logit, consistent with Fig. \ref{fig:perturb_distribution_correct_mnist}, where applying noise shifts the distribution of perturbations toward the tails.
\vspace{.2cm}
\noindent
\textbf{Superimposition of Three Adversarial Examples}\\
\noindent
For the superimposition of three adversarial examples (Table~\ref{tab:distribution_clean_vs_super3}), we observe results similar to the two-example case. We note that under this attack a plain ensemble performs no better than a single network, with only $22\%$ accuracy. However, with Noisy Logit and Rank Verification (at the 5\% significance level) applied, the model still achieves high accuracy, especially on CIFAR10.
\begin{table}[h]
\centering
\begin{subtable}{1.0\linewidth}
\centering
\subcaption{Average single network clean accuracy vs. attack accuracy}
\vspace{-0.2cm}
\resizebox{0.95\textwidth}{!}{%
\begin{tabular}{@{}lll@{}}
\toprule
\textbf{Network} & \textbf{Clean Accuracy} & \textbf{SI3 Attack Accuracy} \\ \midrule
MNIST Single & 95.500\% & 21.900\% \\
MNIST Single with NL & 88.878\% & 54.267\% \\
CIFAR10 Single & 84.800\% & 69.289\% \\
CIFAR10 Single with NL & 81.578\% & 71.089\% \\ \bottomrule
\end{tabular}
}
\label{tab:accuracy_avg_clean_vs_super3}
\end{subtable}
\vspace{0.25cm}
\begin{subtable}{1.0\linewidth}
\centering
\subcaption{Accuracy of the ensembles, with Noisy Logit (NL), and with NL + Rank Verification at 5\% significance level (NL+RV(0.05))}
\vspace{-0.2cm}
\resizebox{0.95\textwidth}{!}{%
\begin{tabular}{@{}lll@{}}
\toprule
\textbf{Model} & \textbf{Clean Accuracy} & \textbf{SI3 Attack Accuracy} \\ \midrule
MNIST Ensemble & 100.000\% & 22.222\% \\
MNIST Ensemble with NL & 100.000\% & 72.222\% \\
MNIST Ensemble with NL+RV(0.05) & 100.000\% & 83.471\% \\
CIFAR10 Ensemble & 95.000\% & 88.889\% \\
CIFAR10 Ensemble with NL & 92.778\% & 88.333\% \\
CIFAR10 Ensemble with NL+RV(0.05) & 100.000\% & 98.485\% \\ \bottomrule
\end{tabular}
}
\label{tab:accuracy_clean_vs_super3}
\end{subtable}
\vspace{0.25cm}
\begin{subtable}{1.0\linewidth}
\centering
\subcaption{Breakdown of classifications}
\vspace{-0.2cm}
\resizebox{0.95\textwidth}{!}{%
\begin{tabular}{@{}llll@{}}
\toprule
\textbf{Model} & \textbf{Correct} & \textbf{Target} & \textbf{Other} \\ \midrule
MNIST Ensemble & 22.222\% & 59.444\% & 18.333\% \\
MNIST Ensemble with NL & 72.222\% & 10.556\% & 17.222\% \\
CIFAR10 Ensemble & 88.889\% & 0.556\% & 10.556\% \\
CIFAR10 Ensemble with NL & 88.333\% & 1.111\% & 10.556\% \\ \bottomrule
\end{tabular}
}
\label{tab:classification_clean_vs_super3}
\end{subtable}
\vspace{0.25cm}
\begin{subtable}{1.0\linewidth}
\centering
\subcaption{Average perturbations corresponding to classifications}
\vspace{-0.2cm}
\resizebox{0.95\textwidth}{!}{%
\begin{tabular}{@{}llll@{}}
\toprule
\textbf{Model} & \textbf{Correct} & \textbf{Target} & \textbf{Other} \\ \midrule
MNIST Ensemble & 12.039\% & 29.658\% & 34.468\% \\
MNIST Ensemble with NL & 10.557\% & 31.702\% & 28.083\% \\
CIFAR10 Ensemble & 3.923\% & 0.000\% & 5.570\% \\
CIFAR10 Ensemble with NL & 3.744\% & 0.000\% & 4.545\% \\ \bottomrule
\end{tabular}
}
\label{tab:perturb_clean_vs_super3}
\end{subtable}
\caption{Distributions for Superimposition ($3\times$) of adversarial inputs. }
\label{tab:distribution_clean_vs_super3}
\end{table}
\vspace {.2cm}
\noindent
\textbf{Measuring Robustness}\\
We plot the robustness radius $R$ computed by our certification procedure against the minimum computed $L^2$ norm of the adversarial perturbation that altered the original output, for the inputs used in the experiments above. If no adversarial perturbation changes the output for a given input sample, there is no minimum distortion radius for that sample.
For samples that our procedure fails to certify (i.e., $\underline{p_A} \leq 0.5$), we mark them with a red x at 0.5. For samples that are certified but whose predictions fail Rank Verification with p-value $\geq 0.05$, we mark them with a yellow x at 0.05. In both cases we consider that the procedure should abstain from certification. We use $n=10000$ and $\alpha=0.05$ for both datasets; for certification we use $\sigma=0.5$ for MNIST and $\sigma=0.03$ for CIFAR10.
Note that although in general a larger value of $\sigma$ leads to a larger value of certified robust $R$, it might also increase the likelihood that the procedure fails to certify.
We show the certified radius $R$ for the Single Random Network (SRN) and Superimposition $2\times$ (SI2) attacks in Fig.~\ref{fig:guarantee_mnist} and \ref{fig:guarantee_cifar}. Observe that the certified radius is quite tight in some cases, where a slightly larger distortion already induces the model to change its prediction.
\section{Introduction}
\label{Introduction}
Deep neural networks (DNNs) are increasingly being adapted to perform a wide range of tasks, from navigation and personal recommendation systems for consumer use to larger-scale decision-making systems such as speech recognition and computer vision. However, the application of DNNs in safety-critical systems is hampered by their vulnerability to \emph{adversarial examples}, where an adversary uses carefully crafted small perturbations to force the DNN to make erroneous classifications in the inference phase. Since the first published attack by \cite{szegedy2013intriguing} there have been many attempts to eliminate such attacks or otherwise substantially increase the cost of crafting them \cite{goodfellow2014explaining,kurakin2016adversarial,papernot2016distillation}. However, many such defences were later defeated by more advanced attacks \cite{carlini2017towards}. Since then, there has been a shift in research focus from \emph{empirical} defences to measuring a DNN's robustness in terms of its resistance to adversarial examples \citep{bastani2016measuring,weng2018evaluating,hendrycks2019benchmarking}, and more recently to finding solutions that offer robustness guarantees \citep{lecuyer2019certified,cohen2019certified}.
In this paper, we provide an approach toward increasing the robustness of DNNs against adversarial examples. Our intuition for creating such a defence relies on the idea of training an ensemble of networks and using the aggregate outputs of the networks to decide on a final output. A similar voting mechanism proved successful for protecting data privacy via differential privacy in PATE~\citep{papernot2016semi}, where the training set containing sensitive data is partitioned into disjoint sets, each of which is used to train a \emph{teacher} network. The mechanism of PATE relies on partitioning and keeping the sensitive data secret to provide data privacy.
In our case, the intent of using an ensemble is to minimize the probability of a successful attack by increasing the amount of effort required, and potentially increasing the amount of perturbation added to an image, so that the attack becomes noticeable (i.e., fails). We provide empirical results to demonstrate its effectiveness and an analysis of its robustness properties.
\section{Robustness Guarantee}
It is important to understand how much perturbation the model is able to withstand without changing its prediction, as it gives us confidence about the model's {\emph{robustness}} to make predictions under adversarial attacks. We say a model is robust to perturbations of a certain size when it does not alter its original prediction on an input even after perturbations up to a certain size have been added to the input. Formally, let $F:D\rightarrow\mathcal{Y}$ denote a classifier, let $x \in D$ be any input. We say $F$ is robust at $x$ up to perturbations of size $R$ (under some norm function $||\cdot||$), if for any $\delta \in \Re^m$ such that $||\delta||\leq R$ and $x+\delta \in D$, we have
{\small
\begin{equation}
F(x+\delta)=F(x).
\end{equation}
}
In this section, we provide a Robustness Guarantee for our approach combining Ensemble Voting and Noisy Logit. We follow the analysis in \citep{cohen2019certified} and adapt their guarantee and certification procedure to our approach. Let $F^*$ denote the classifier whose output is given by the most frequent output among an ensemble of networks $\{F^l:l=1,...,m\}$, or
\begin{equation} \label{eqn:ensemble}
F^*(x)=\argmax_{y\in\mathcal{Y}} \sum_{l=1}^{m} \mathds{1}_{F^l(x)=y}.
\end{equation}
Let $\varepsilon \sim N(0,\sigma^2 I)$ and define
\begin{equation} \label{eqn:noisy_ensemble}
g(x)=F^*(x+\varepsilon).
\end{equation}
\textbf{Theorem 1} ($L^2$ norm): Let $F^*$ be as defined in equation (\ref{eqn:ensemble}) and $g$ be as defined in equation (\ref{eqn:noisy_ensemble}). If there exists $y_A \in \mathcal{Y}$ and $\underline{p_A}, \overline{p_B} \in [0,1]$ such that
\begin{equation}
\mathds{P}(F^*(x+\varepsilon)=y_A)\geq\underline{p_A}\geq\overline{p_B}\geq\max_{y\neq y_A} \mathds{P}(F^*(x+\varepsilon)=y),
\end{equation}
then $g(x+\delta)=y_A$ for all $||\delta||_2 \leq R$, where
\begin{equation}
R=\frac{\sigma}{2}(\Phi^{-1}(\underline{p_A})-\Phi^{-1}(\overline{p_B})).
\end{equation}
The proof of this result is almost identical to that in \citep{cohen2019certified}; in particular, in the proof of \citep{cohen2019certified} their network can be replaced by our ensemble $F^*$, and their smoothed classifier can be replaced by our noisy ensemble $g$. Note that a similar result for the $L^1$ norm also exists and was provided by \citep{teng2020ell}. Robustness in $L^1$ norm is naturally induced by Laplace noise just as robustness in $L^2$ norm is naturally induced by Gaussian noise.
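For intuition, the radius of Theorem 1 can be evaluated numerically using the inverse Gaussian CDF (SciPy's `norm.ppf`); the values below are illustrative, not taken from the experiments:

```python
from scipy.stats import norm

def certified_radius(p_a_lower, p_b_upper, sigma):
    # R = (sigma / 2) * (Phi^{-1}(p_A_lower) - Phi^{-1}(p_B_upper))
    return 0.5 * sigma * (norm.ppf(p_a_lower) - norm.ppf(p_b_upper))

# With p_B upper-bounded by 1 - p_A, this reduces to sigma * Phi^{-1}(p_A),
# the simplified form used later in the certification procedure.
sigma, p_a = 0.5, 0.9
r_two_sided = certified_radius(p_a, 1 - p_a, sigma)
r_one_sided = sigma * norm.ppf(p_a)
```

Note that as $\underline{p_A} \to 1$ (the ensemble is nearly unanimous under noise), $\Phi^{-1}(\underline{p_A}) \to \infty$ and the certified radius grows without bound, while $\underline{p_A} = 1/2$ certifies nothing.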
\subsection{Certification}
It is difficult to compute exactly the probability of $F^*$ at any input, since we must consider the joint distribution of the ensemble, which requires computing their correlations and the immense range of possible permutations of subsets among the ensemble. Computing the probability on a noisy input ($x+\varepsilon$) is an even more challenging task. We can, however, approximate the distribution of $F^*(x+\varepsilon)$ for a given input $x$ using Monte Carlo simulation, from which we can then compute the quantity $\underline{p_A}$ approximately as the lower confidence bound at a desired significance level $\alpha$. Specifically, suppose the computed output is $\hat{y_A}=F^*(x+\hat{\varepsilon})$; we can think of the output distribution as consisting of two outcomes: $\{\hat{y_A}\}$ and $\mathcal{Y} \setminus \{\hat{y_A}\}$. Thus, $\underline{p_A}$ can be viewed as a lower bound for the probability of success in a binomial distribution. Once we obtain $\underline{p_A}$, we can approximate $\overline{p_B}$ as $1-\underline{p_A}$ so long as $\underline{p_A} > 1/2$. This gives $R=\frac{\sigma}{2}(\Phi^{-1}(\underline{p_A})-\Phi^{-1}(1-\underline{p_A}))=\sigma\Phi^{-1}(\underline{p_A})$.
\begin{algorithm} [h]
\mbox{Certification Procedure}\\
\LinesNumbered
\textbf{Input:} $x$, $\sigma$, $n$, $\alpha$, $F^*$\;
\textbf{Output:} $y_A$, $p$, $R$ \;
$\hat{\varepsilon}$ $\leftarrow$ $N(0,\sigma^2)$ draw a noise sample\;
$\hat{y_A}$, $\hat{y_B}$ $\gets F^*(x+\hat{\varepsilon})$ top 2 classifications\;
$\hat{n_A}$, $\hat{n_B}$ $\gets$ vote counts corresponding to $\hat{y_A}$, $\hat{y_B}$\;
$p \gets$ \textit{BinomTest}$(\hat{n_A},\hat{n_A}+\hat{n_B},0.5)$\;
$A \gets \emptyset$\;
\For{$i:=1$ to $n$}{
$\varepsilon_i$ $\leftarrow$ $N(0,\sigma^2)$ draw noise sample\;
$A[i] \gets F^*(x+\varepsilon_i)$\;
}
$n_A \gets$ count of $\hat{y_A}$ in $A$\;
$\underline{p_A} \gets$ \textit{ConfIntLower}$(n_A,n,\alpha)$\;
if $\underline{p_A} \leq 0.5$ then $R = 0$;
else $R = \sigma\Phi^{-1}(\underline{p_A})$ \;
\Return($\hat{y_A}$, $p$), $R$
\label{certify}
\end{algorithm}
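A runnable sketch of the Certification Procedure follows. The rank-verification p-value is omitted for brevity, and `ensemble_predict` (standing in for $F^*$), the seed, and the Clopper-Pearson bound chosen for \textit{ConfIntLower} are our assumptions:

```python
import numpy as np
from scipy.stats import beta, norm

def conf_int_lower(k, n, alpha):
    # One-sided Clopper-Pearson lower confidence bound
    # for k successes in n trials, at significance level alpha.
    return beta.ppf(alpha, k, n - k + 1) if k > 0 else 0.0

def certify(x, sigma, n, alpha, ensemble_predict, seed=0):
    # ensemble_predict: F*(.), the majority-vote label of the ensemble.
    rng = np.random.default_rng(seed)
    noisy = lambda: ensemble_predict(x + rng.normal(0.0, sigma, size=x.shape))
    y_hat = noisy()                      # initial top prediction under noise
    votes = [noisy() for _ in range(n)]  # Monte Carlo draws of F*(x + eps)
    n_a = votes.count(y_hat)
    p_a_lower = conf_int_lower(n_a, n, alpha)
    # Abstain (R = 0) unless the lower bound clears 1/2.
    return y_hat, (sigma * norm.ppf(p_a_lower) if p_a_lower > 0.5 else 0.0)
```

In practice one would call `certify` with the paper's settings, e.g. $n=10000$, $\alpha=0.05$, and $\sigma$ matching the noise used at query time.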
Note that, strictly speaking, $p$ is not required by the Certification Procedure; however, it indicates the confidence of the prediction $\hat{y_A}$. In particular, when it is large (relative to some desired significance level), we interpret that the prediction might not be reliable or that the input might be easily compromised. Indeed, we observed that a larger value of $p$ often leads to a smaller value of $\underline{p_A}$.
\section{The Model}
\subsection{Noisy Logits}
Since Carlini-Wagner attacks require access to the logits in the solution to (\ref{adv_problem}), we can obscure the search for a solution by adding random noise to the logits. In fact, we can obscure any iterative scheme by adding random noise to any layer whose output is needed for crafting an attack. Note that if we add random noise directly to the original logits, an adversary might recover the original logits by making multiple queries and averaging the resulting \emph{noisy} logits. Instead, we apply random noise at query time to the input, then respond to the query with the logits of the perturbed input. Also, since the softmax function is monotonic, we must ensure the final output is a result of the \emph{noisy} logits; otherwise the genuine logits can be recovered by applying the inverse of the softmax function to the result at the output layer. Let $z_0$ be an input with $F(z_0)$ its output from the network $F$. Moreover, suppose $F$ has $n$ layers besides the input layer and for $1 \leq i \leq n$
{\small
\begin{equation}
z_i := F_i \circ F_{i-1} \circ \cdots \circ F_1(z_0),
\end{equation}
}
where $\circ$ denotes composition and
{\small
\begin{equation}
F(z_0) := z_n = F_n \circ F_{n-1} \circ \cdots \circ F_1(z_0).
\end{equation}
}
Then at the output layer $i$, the Noisy Logit mechanism will produce output
{\small
\begin{equation}
F_i(z_{i-1}') = F_i \circ F_{i-1} \circ \cdots \circ F_1(z_0'),
\end{equation}
}
where $z_0' = z_0 + g(\vec{a})$ and $g(\vec{a})$ is a random noise function with parameter(s) $\vec{a}$.
Note that by this procedure, naturally we can respond to queries at any layer with a noisy output, thus preventing an adversary from trying to reconstruct a genuine output at any layer (including the logits) by making queries to the network.
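A minimal sketch of this query-time mechanism, with Gaussian noise for $g(\vec{a})$ as in the experiments; the layer functions are placeholders for a trained network:

```python
import numpy as np

def noisy_query(layers, z0, sigma, i=None, seed=None):
    # Respond to a query for the output of layer i (default: the last
    # layer, i.e. the logits) computed on a *noisy* copy of the input,
    # so averaging repeated queries does not recover the genuine logits.
    rng = np.random.default_rng(seed)
    i = len(layers) if i is None else i
    z = z0 + rng.normal(0.0, sigma, size=z0.shape)  # z0' = z0 + g(a)
    for layer in layers[:i]:  # F_i o F_{i-1} o ... o F_1(z0')
        z = layer(z)
    return z
```

Because each query draws fresh noise on the input before any layer is evaluated, every layer's response, including the logits, is already noisy.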
We remark on the important distinction between noise injection at training time and at query time. Training a network with noise to over-fit the predictions to some neighbourhood of the input can induce vulnerability to \emph{invariance-based} adversarial examples \citep{jacobsen2019exploiting}, where the adversary inserts enough changes to an input that its label should change, yet the model still predicts the original label.
We opt to inject noise at query time so as to obscure iterative schemes for adversarial example constructions.
Noise injection at query time is also used by the authors in \cite{yang2019me}, but this noise is accompanied by noise injection at training time. In their approach, the goal of noise injection is to mask the adversarial perturbations. The masked inputs are then reconstructed to reveal close approximations to the original inputs where the perturbations have been effectively smoothed out. The network is then trained and tested on the reconstructed inputs. In contrast, we do not attempt to remove the adversarial perturbations nor the noise added. Inevitably, this added noise might cause a network to lose accuracy. To compensate for the potential loss in accuracy, we make use of ensemble networks as described below.
\subsection{Ensemble Voting}
The purpose of having an ensemble of networks is two-fold: 1) to provide resilience in the \emph{combined} network when some of the networks are under attack; 2) to improve accuracy in the \emph{combined} network over the individual networks.
If we require that every individual network successfully classify the input image, then, assuming independence of success probabilities across the networks, the probability of simultaneous success is the product of the success probabilities of the individual networks, which might fall below a desirable level of accuracy when the total number of networks is large, since we are multiplying a series of numbers less than 1. However, if we only require success in the largest subset of the networks, then, since there are many possible partitions into subsets when the number of networks is large, the success probability of the combined network as an aggregate can be much better than that of each individual network. Let $S:=\{F^1,F^2,...,F^m\}$ be a collection of $m$ networks, and let $\delta(S)$ be the set of all partitions of $S$. For each partition $h\in\delta(S)$, let $L_h$ denote the largest subset in $h$. Then, the probability of success by voting is $\sum_{h\in\delta(S)} P(L_h)$,
where $P(L_h)$ is the probability of simultaneous success in $L_h$. Note that when $m$ is large, the number of possible partitions is large, which means the success probability by voting can be high.
We note that although Carlini-Wagner attacks are able to defeat a network trained with any temperature constant, an attack crafted for a network trained with one temperature constant might not work on another trained with a different temperature, due to differences in the trained parameters. Thus we propose a mechanism where we train an ensemble of networks, each trained with a different temperature constant, where we respond to queries using the aggregate outputs of the ensemble of networks.
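The aggregate-output mechanism described above can be sketched as a simple majority vote over the per-temperature networks; the stub "networks" here are ours, standing in for the trained ensemble:

```python
from collections import Counter

def ensemble_vote(networks, x):
    # F*(x): the most frequent label among the ensemble members.
    votes = Counter(f(x) for f in networks)
    return votes.most_common(1)[0][0]

# e.g., three stub "networks" disagreeing on an input:
nets = [lambda x: 1, lambda x: 1, lambda x: 2]
print(ensemble_vote(nets, None))  # -> 1
```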
\subsection{Rank Verification}
In a system that employs voting as a mechanism, the final outputs can be somewhat noisy depending on the number of participants, whereby an addition of another participant can sometimes alter the final outcome. In our context, if two classes have similar levels of votes in the ensemble, the networks see the input as bearing resemblance to both classes, which could be an indication that the input could be easily compromised (if not already compromised). In this case, it makes sense to \emph{abstain} from making a prediction or warn the user of the potential risk. This intuition is supported by the findings in \cite{dogus2018intriguing}, where the authors demonstrated that adversarial accuracy is dictated by the distribution of differences between the values of the logits corresponding to the most likely and the second most likely classes. In particular, the success of many adversarial examples is due to such differences being small. \\
We follow the Rank Verification method proposed in \citep{hung2019rank}, which tests the hypothesis that the top two candidates are equally likely to be the "winner" in a voting system. Let $n_A$ and $n_B$ correspond to the vote counts of the top two classes $y_A$ and $y_B$, respectively, predicted by the ensemble for some given input $x$, where $n_A \geq n_B$. Let $p=\textit{BinomTest}(n_A,n_A+n_B,0.5)$
be the p-value obtained from the hypothesis test for $n_A$ observations of $y_A$ over $n_A+n_B$ trials, with hypothesized success probability equal to 0.5. If $p<\alpha_{RV}$, we reject the hypothesis at significance level $\alpha_{RV}$; i.e., $y_A$ is statistically the most likely winner among the outputs from the ensemble. If we cannot reject the hypothesis, we abstain from making a prediction at $x$ or issue a warning.
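A minimal sketch of this abstention rule, implemented with the Python standard library, might look as follows; here the test is one-sided for simplicity (an exact two-sided test, e.g.\ \texttt{scipy.stats.binomtest}, could be substituted):

```python
from math import comb

def binom_p_value(n_A: int, n_B: int) -> float:
    """One-sided p-value for observing at least n_A successes in
    n_A + n_B fair-coin trials (null hypothesis: the top two classes
    are equally likely to win)."""
    n = n_A + n_B
    return sum(comb(n, k) for k in range(n_A, n + 1)) / 2 ** n

def predict_or_abstain(n_A: int, n_B: int, alpha_RV: float = 0.05):
    """Return 0 (meaning: predict the leading class) if its lead is
    statistically significant, otherwise None, signalling abstention."""
    assert n_A >= n_B
    return 0 if binom_p_value(n_A, n_B) < alpha_RV else None
```

For instance, with 10 networks an 8--2 split gives $p\approx 0.055$, so the rule abstains at $\alpha_{RV}=0.05$, while a 9--1 split ($p\approx 0.011$) yields a prediction.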
\input{Sections/Introduction}
\input{Sections/RelatedWork}
\input{Sections/TheModel}
\input{Sections/RobustnessGuarantee}
\input{Sections/ExperimentalEvaluation}
\input{Sections/Conclusion}
\subsubsection*{Acknowledgements}
We thank the anonymous reviewers for their suggestions and feedback. Support from the Vector Institute and the Natural Sciences \& Engineering Research Council of Canada (NSERC) is gratefully acknowledged.
\IEEEPARstart{R}{econfigurable} intelligent surfaces (RISs) allow controlling the radio propagation environment, thus providing novel degrees of freedom for the design of wireless systems~\cite{liu2021reconfigurable, Prasad2021}. An RIS is a low-cost passive flat surface made of sub-wavelength refractive/reflective elements (atoms) that can add a tunable phase shift to the incident electromagnetic wave. The atoms are controlled using an embedded logic with a power consumption that is usually negligible and, all together, can redirect the planar or spherical wavefront hitting the surface in several ways. E.g., a diffuse scattering, an anomalous reflection, or a beam focused towards a specific point can be produced; in addition, a data message can be even superimposed on the redirected signal~\cite{Guo-2020, Dai-2021}. RISs have been used to boost the performance of wireless communication links~\cite{Geoffrey-Ye-2020, Pan-2021, Basar-2021}; they have been also proven effective in other contexts, including wireless power transfer~\cite{zhao2020wireless}, localization and mapping~\cite{2020-Wymeersch-Localization-and-Mapping, Alouini-Localization}, joint communication and sensing~\cite{joint_waveform}, and, more recently, target detection~\cite{Grossi2021ris, Aubry-2021, foundations}.
A major drawback of passive RISs is that the end-to-end indirect link (source $\rightarrow$ RIS $\rightarrow$ destination) presents a heavy product path-loss attenuation (also referred to as double fading attenuation), which may limit their usability, especially if a direct link is available~\cite{Najafi-2021}. In particular, recent studies suggest that such passive devices should be better placed close to the transmitter or the receiver~\cite{Dunna-2020, Basar-2021, foundations}. To improve the end-to-end power budget of an RIS-assisted communication system, the works in~\cite{Larsson-2021, ActiveRIS2021} have recently introduced the idea of using an \emph{active} RIS, wherein each reflecting element employs an active load to amplify the incident signal. A shortcoming is that the circuitry controlling such an active RIS introduces an internal source of noise and consumes a non-negligible amount of power; however, even after accounting for these effects in the overall link budget, the results in~\cite{Larsson-2021, ActiveRIS2021} indicate that the communication system can perform much better if aided by an active rather than a passive RIS.
\begin{figure}[t]
\centerline{\includegraphics[width=\columnwidth]{fig_1.pdf}}
\vspace{-5pt}\caption{Considered architecture composed of a radar, equipped with one transmit and two receive beams, aided by an active RIS; radar and RIS are widely-spaced with respect to the target.}
\label{fig_1}
\end{figure}
In this work, we propose to use an active RIS to improve the detection capability of a radar system. The main intuition is that an active RIS can offer a second look at a target illuminated by the radar, thus providing spatial (angular) diversity, \emph{and}, in addition, can compensate for the product path loss along the indirect target-RIS-radar path. Overall, the proposed RIS-assisted architecture may realize a sort of low-cost distributed radar system, as illustrated in Fig.~\ref{fig_1}; in particular, rather than having a second radar receiver equipped with a full radio-frequency processing chain and using a dedicated data link from each receiver to a common fusion center, an active RIS simply redirects the impinging signal towards a \emph{unique} destination, which collects both the direct and indirect target echoes through two dedicated spatial beams. More specifically, we make the following contributions. Upon adapting the signal model developed in~\cite{Larsson-2021, ActiveRIS2021} to the considered application, we propose to choose the number of RIS elements, their amplification gain, and the power split between the radar transmitter and the active RIS to maximize the detection probability for a fixed probability of false alarm. Since this problem is non-convex, we derive a suboptimal solution based upon an alternating maximization. The results show that the use of an active RIS can grant a large performance improvement compared to the cases where no RIS or a passive RIS is used.
The remainder of this work is organized as follows. In the next section, the system model is presented, while, in Sec.~\ref{sys_des_sec}, the system design is described. An illustrative example is provided in Sec.~\ref{num_res_sec}, and concluding remarks are given in Sec.~\ref{concl_sec}.
\section{System Model}
Consider a target detection problem, where the radar is assisted by an active RIS, as shown in Fig.~\ref{fig_1}. The radar emits an average power $P_r$ and is equipped with one transmit beam with gain $G^\text{tx}_{rt}$ pointing towards the prospective target and two receive beams, one with gain $G^\text{rx}_{rt}$ pointing towards the target and one with gain $G^\text{rx}_{rs}$ pointing towards the RIS. The radar radiates a passband waveform with bandwidth $W$, duration $T$, and carrier wavelength $\lambda$. The active RIS is composed of $L$ elements and is \emph{widely-spaced} from the radar, so as to get an independent look at the target; in particular, radar and RIS are in each other's far-field. The (complex) amplitude response of the $\ell$-th RIS element is denoted by $a_\ell \mathrm e^{\mathrm i\phi_\ell}$, where $a_\ell \in[1, a_\text{max}]$, with $a_\text{max}\geq 1$, and $\phi_\ell\in [0,2 \pi]$.
We assume that the target is in the far-field of both the radar and the RIS. We denote by $d_{rt}$, $d_{ts}$, and $d_{sr}$ the radar-target, target-RIS, and RIS-radar distances, respectively; also, we denote by $G_{st}$ and $G_{sr}$ the gain of each element of the RIS towards the target and the radar, respectively. Accordingly, we can define the following positive quantities
\begin{align}
\alpha_{rt} & = \sqrt{G^\text{tx}_{rt}/(4\pi)} / d_{rt}, & \alpha_{tr} &= \lambda \sqrt{G^\text{rx}_{rt}} /(4\pi d_{rt})\\
\alpha_{ts} & = \lambda \sqrt{G_{st}}/(4\pi d_{ts}), & \alpha_{sr} & = \lambda \sqrt{G_{sr} G^\text{rx}_{rs}} /(4\pi d_{sr})
\end{align}
which account for the link budget along each hop in Fig.~\ref{fig_1}.
After base-band conversion, matched-filtering with the transmit waveform, and sampling,\footnote{The signal from the beam pointing towards the target is sampled at the delay $2d_{rt}/c$, where $c$ is the speed of light; similarly, the signal from the beam pointing towards the RIS is sampled at the delay $(d_{rt}+d_{ts}+d_{sr})/c$.} if a target is present in the cell under test, the baseband signal received by the radar beam pointing toward the target can be written as
\begin{equation}
y_1 = \gamma_1 \alpha_1\sqrt{P_r} + w_1
\end{equation}
where $\gamma_1\in\mathbb C$ is the unknown target response towards the radar, $\alpha_1 = \alpha_{rt} \alpha_{tr} \sqrt{WT}$, and $w_1 \sim \mathcal{CN}(0, P_{w_1})$ is the noise, distributed as a complex circularly symmetric Gaussian random variable with variance $P_{w_1}$. Also, upon denoting by $\beta_{t\ell}$ and $\beta_{\ell r}$ the phase delays along the path linking the target and the $\ell$-th RIS element and the path linking the $\ell$-th RIS element and the radar, respectively, the baseband signal received by the radar beam pointing towards the RIS is
\begin{align}
y_2 & = \gamma_2 \alpha_2 \sqrt{P_r} \sum_{\ell=1}^L a_\ell \mathrm e^{\mathrm i(\beta_{t\ell}+\phi_\ell+\beta_{\ell r})}\notag\\
&\quad + \alpha_{sr} \sum_{\ell=1}^L a_\ell v_\ell \mathrm e^{\mathrm i(\phi_\ell+\beta_{\ell r})} + w_2 \label{eq_y2}
\end{align}
where $\gamma_2 \in \mathbb C$ is the unknown target response towards the RIS, $\alpha_2 = \alpha_{rt} \alpha_{ts} \alpha_{sr} \sqrt{WT}$, $v_\ell\sim \mathcal{CN}(0,P_v)$ is the \emph{dynamic noise} generated by the $\ell$-th RIS element~\cite{Larsson-2021, ActiveRIS2021}, and $w_2 \sim \mathcal{CN}(0, P_{w_2})$, independent of $w_1$, is the receive noise.
We assume that $\{v_\ell\}_{\ell=1}^L$ are independent, so that the distribution of $\sum_{\ell=1}^L a_\ell v_\ell \mathrm e^{\mathrm i(\phi_\ell+\beta_{\ell r})}$ is not influenced by $\{\phi_\ell\}_{\ell=1}^L$. Also, since $\beta_{t\ell}$ and $\beta_{\ell r}$ are known,\footnote{$\{\beta_{t\ell}\}_{\ell=1}^L$ and $\{\beta_{\ell r}\}_{\ell=1}^L$ are uniquely determined by the orientation of the RIS and by the mutual position of radar, RIS, and target; see also~\cite{Grossi2021ris}.} the RIS phases can be chosen as $\phi_\ell = -\beta_{t\ell} - \beta_{\ell r}$, so that all the signal terms in~\eqref{eq_y2} are \emph{phase aligned} (see also~\cite{Grossi2021ris}). In this case, Eq.~\eqref{eq_y2} becomes
\begin{equation}
y_2 = \gamma_2 \alpha_2 \sqrt{P_r} \sum_{\ell=1}^L a_\ell + z_2 \label{y_2_eq}
\end{equation}
where $z_2= w_2 + \alpha_{sr} \sum_{\ell=1}^L a_\ell v_\ell \mathrm e^{-\mathrm i\beta_{t\ell}}$ is distributed as $\mathcal{CN}(0, P_{w_2}+ \alpha^2_{sr}P_v\sum_{\ell=1}^L a_\ell^2)$.
Following~\cite{Larsson-2021, ActiveRIS2021}, we model the RIS power consumption as $P_s = L\rho_s + \eta_s^{-1} p_\text{out}$, where $\rho_s=P_c +P_{dc}$ is the power consumption needed to operate each reflecting element of the RIS, with $P_{c}$ the switch and control circuit power and $P_{dc}$ the DC biasing power consumption, $\eta_s$ is the amplifier efficiency, and $p_\text{out}$ is the output power. This latter term is in turn given by $(\alpha^2_{rts} \sigma^2_{\gamma_2} P_r + P_v)\sum_{\ell=1}^L a_\ell^2$, where $\sigma^2_{\gamma_2}$ is the mean square value of $\gamma_2$ and $\alpha_{rts}= \alpha_{rt} \alpha_{ts}$, so that
\begin{equation}
P_s = L\rho_s + \eta_s^{-1} (\alpha^2_{rts} \sigma^2_{\gamma_2} P_r + P_v)\sum_{\ell=1}^L a_\ell^2.
\end{equation}
\section{System design}\label{sys_des_sec}
It is not difficult to show that the generalized likelihood ratio test~\cite{Van_Trees_1} with respect to the unknown target responses $(\gamma_1, \gamma_2)$ based on the observations $(y_1, y_2)$ is
\begin{equation}
\frac{|y_1|^2}{P_{w_1}} + \frac{|y_2|^2}{P_{w_2}+ \alpha^2_{sr}P_v\sum_{\ell=1}^L a_\ell^2} \gtrless \gamma
\label{GLRT}
\end{equation}
and a target is declared if the detection threshold $\gamma$ is exceeded. The probability of false alarm is $\text{PFA}=(1+\gamma) \mathrm e^{-\gamma}$~\cite{Richards_2005}, and $\gamma$ is usually set to have a specified PFA level. As to the probability of detection (PD), we need to specify the joint distribution of the target responses: if $\gamma_1\sim\mathcal{CN}(0,\sigma^2_{\gamma_1})$ and $\gamma_2\sim\mathcal{CN}(0,\sigma^2_{\gamma_2})$ are independent, we have\footnote{This corresponds to Swerling's Case~1, and the test statistic in~\eqref{GLRT} is the sum of two independent exponential random variables with means $1+\text{SNR}_1$ and $1+\text{SNR}_2$, which follows a hypo-exponential density~\cite{Ross_2014}.}
\begin{equation}
\text{PD}= \frac{1+\text{SNR}_1}{\text{SNR}_1-\text{SNR}_2} \mathrm e^{-\frac{\gamma}{1+\text{SNR}_1}} - \frac{1+\text{SNR}_2}{\text{SNR}_1-\text{SNR}_2} \mathrm e^{-\frac{\gamma}{1+\text{SNR}_2}} \label{Pd}
\end{equation}
where
\begin{equation}
\text{SNR}_1 = \frac{\alpha_1^2 \sigma^2_{\gamma_1} P_r}{P_{w_1}}, \quad
\text{SNR}_2 = \frac{\alpha_2^2 \sigma^2_{\gamma_2} P_r \bigl(\sum_{\ell=1}^L a_\ell\bigr)^2}{P_{w_2}+ \alpha^2_{sr}P_v\sum_{\ell=1}^L a_\ell^2}\label{SNRs}
\end{equation}
are the SNRs on the two receive channels.
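Numerically, the threshold $\gamma$ for a prescribed PFA and the resulting PD in~\eqref{Pd} are easy to evaluate: since $(1+\gamma)\mathrm e^{-\gamma}$ is strictly decreasing for $\gamma>0$, $\gamma$ can be found by bisection. The sketch below uses illustrative SNR values (not taken from the paper) and does not handle the degenerate case $\text{SNR}_1=\text{SNR}_2$, for which~\eqref{Pd} must be replaced by its Erlang limit:

```python
from math import exp

def threshold_for_pfa(pfa: float) -> float:
    """Solve (1 + g) * exp(-g) = pfa for g > 0 by bisection;
    the left-hand side is strictly decreasing in g."""
    lo, hi = 0.0, 100.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if (1.0 + mid) * exp(-mid) > pfa:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def prob_detection(snr1: float, snr2: float, gamma: float) -> float:
    """PD formula from the text (hypo-exponential tail); assumes snr1 != snr2."""
    return ((1 + snr1) / (snr1 - snr2) * exp(-gamma / (1 + snr1))
            - (1 + snr2) / (snr1 - snr2) * exp(-gamma / (1 + snr2)))

gamma = threshold_for_pfa(1e-6)          # threshold for PFA = 1e-6
pd = prob_detection(10.0, 5.0, gamma)    # illustrative SNR values
```

As expected from the discussion in the text, the computed PD is increasing in both SNRs.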
Our goal here is to maximize PD for a fixed PFA. The available degrees of freedom for system optimization are the radar power $P_r$, the number $L$ of RIS elements, and the corresponding amplification factors $\bm a = (a_1 \cdots a_L)$; instead, the physical constraints are on the maximum number of RIS elements $L_\text{max}$, on the maximum amplification factor $a_\text{max}$, and on the overall power budget $\rho_r + \eta_r^{-1}P_r + P_s$, where $\rho_r$ is the circuit power required to operate the radar transmitter, and $\eta_r$ is the efficiency of the radar amplifier. Hence, the optimization problem tackled here is
\begin{equation} \label{opt_prob}
\begin{aligned}
\max_{L\in\mathbb N} \max_{\bm a \in \mathbb R^L, P_r \in\mathbb R} & \; \text{PD}\bigl(\text{SNR}_1(P_r), \text{SNR}_2(L, \bm a, P_r) \bigr)\\
\text{s.t.} & \; \rho_r +\eta_r^{-1} P_r + L\rho_s \\
&\; + \eta_s^{-1} ( \alpha^2_{rts} \sigma^2_{\gamma_2} P_r + P_v) \sum_{\ell=1}^L a_\ell^2 \leq P_\text{max} \\
& \; 1 \leq a_\ell \leq a_\text{max}, \quad \ell=1,\ldots, L \\
& \; 0 \leq L \leq L_\text{max}
\end{aligned}
\end{equation}
where the dependency of the objective function from the optimization variables has been made explicit, while $L=0$ simply means here that the RIS is not used.
The optimization problem in~\eqref{opt_prob} is quite difficult, and we resort to a block-coordinate ascent method, also known as alternating-maximization, where, at each iteration, the objective function is maximized over a \emph{block} of variables, while keeping the others fixed at their previous values. In particular, we consider two reduced-complexity sub-problems: the maximization over $P_r$ and the maximization over the pair $(\bm a,L)$. In the following, these two sub-problems are optimally solved, and a closed-form expression for their solutions is provided in Eqs.~\eqref{P_r_opt} and~\eqref{a_L_opt}. For the reader's sake, the complete routine is reported in Alg.~\ref{alg}.
\subsection{Maximization over the radar power}
Since the probability of detection in~\eqref{Pd} is increasing with the SNRs in~\eqref{SNRs}, that are in turn increasing with the radar transmit power, we should choose the largest value of $P_r$ satisfying the power constraint in~\eqref{opt_prob}, i.e.,
\begin{align}
P_r^{\star} = \frac{P_\text{max} - \rho_r -L\rho_s - \eta_s^{-1}P_v \sum_{\ell=1}^L a_\ell^2}{\eta_r^{-1}+ \eta_s^{-1} \alpha_{rts}^2 \sigma_{\gamma_2}^2 \sum_{\ell=1}^L a_\ell^2}. \label{P_r_opt}
\end{align}
\subsection{Maximization over the RIS parameters}
Since the probability of detection in~\eqref{Pd} is increasing with the SNRs in~\eqref{SNRs}, and since $\text{SNR}_1$ is independent of $(\bm a, L)$, the problem to be solved here is equivalent to
\begin{equation} \label{subprob_L_a}
\begin{aligned}
\max_{L\in\mathbb N} \max_{\bm a \in \mathbb R^L} & \; \text{SNR}_2(L, \bm a, P_r)\\
\text{s.t.} & \; \rho_s L + \zeta \sum_{\ell=1}^L a_\ell^2 \leq \kappa \\
& \; 1 \leq a_\ell \leq a_\text{max}, \quad \ell=1,\ldots, L \\
& \; 0 \leq L \leq L_\text{max}
\end{aligned}
\end{equation}
where $\kappa= P_\text{max}-\rho_r-\eta_r^{-1}P_r$, and $\zeta = \eta_s^{-1} ( \alpha^2_{rts} \sigma^2_{\gamma_2} P_r + P_v)$. We point out here that a similar problem has been tackled in~\cite{Larsson-2021}; however, differently from~\cite{Larsson-2021}, we are now considering additional constraints on $L$ and $a_\ell$, that will result into a slightly different solution, as briefly outlined next.
Notice first that the constraints in~\eqref{subprob_L_a} imply that we must necessarily have $L\leq \bar L$, where
\begin{equation}
\bar L = \min \left\{ L_\text{max}, \left\lfloor \frac{\kappa}{\rho_s+\zeta} \right\rfloor \right\}.
\end{equation}
At this point, from the expression of $\text{SNR}_2$ in~\eqref{SNRs}, it is not difficult to show that, for any $L\in\bigl\{0, 1, \ldots, \bar L \bigr\}$, the maximization over $\bm a$ gives $a_1 = \cdots = a_L = g(L)$, where
\begin{equation}
g(L) = \begin{cases}
\min\left\{ a_\text{max}, \sqrt{\frac{\kappa - \rho_s L}{\zeta L}} \right\}, & \text{if } L\geq 1\\
1, & \text{if } L=0.
\end{cases}\label{eq_a_L}
\end{equation}
Therefore, Problem~\eqref{subprob_L_a} reduces to
\begin{equation} \label{subprob_L}
\max_{L\in\mathbb N :\; 0 \leq L \leq \bar L} \;\;\frac{\alpha_2^2 \sigma^2_{\gamma_2} P_r L^2 g^2(L) }{P_{w_2}+ \alpha^2_{sr}P_v L g^2(L)} .
\end{equation}
\begin{algorithm}[t]
\caption{Block-coordinate ascent for Problem~\eqref{opt_prob} \label{alg}}
\begin{algorithmic}
\STATE choose a feasible triplet $(P_r, \bm a, L)$
\REPEAT
\STATE update $P_r$ with~\eqref{P_r_opt}
\STATE update $(\bm a, L)$ with~\eqref{a_L_opt}
\UNTIL convergence
\end{algorithmic}
\end{algorithm}
Let us now relax Problem~\eqref{subprob_L} by extending the search set to $\bigl\{L\in \mathbb R : 0 \leq L \leq \bar L\bigr\}$. Let $f(L)$ and $f'(L)$ denote the objective function of~\eqref{subprob_L} and its first order derivative, respectively, and define $L_1 = \kappa/(\rho_s + a^2_\text{max}\zeta)$.
Then, from~\eqref{eq_a_L}, we have that, if $0\leq L\leq L_1$,
\begin{equation}
f(L) = \frac{\alpha_2^2 \sigma^2_{\gamma_2} P_r a^2_\text{max}L^2}{P_{w_2}+ \alpha^2_{sr}P_v a^2_\text{max}L}
\end{equation}
and $f'(L)\geq 0$; if, instead, $L\geq L_1$,
\begin{equation}
f(L) = \frac{\alpha_2^2 \sigma^2_{\gamma_2} P_r L (\kappa - \rho_s L)}{P_{w_2} \zeta + \alpha^2_{sr} P_v (\kappa - \rho_s L)}
\end{equation}
and $f'(L)\geq 0$ for $L\leq L_2^-$ or $L\geq L_2^+$, and $f'(L) \leq 0$ for $L_2^-\leq L \leq L_2^+$, where
\begin{equation}
L_2^\pm =\frac{P_{w_2} \zeta + \alpha^2_{sr} P_v \kappa \pm \sqrt{\bigl(P_{w_2} \zeta+ \alpha^2_{sr} P_v \kappa\bigr) P_{w_2}\zeta}}{\alpha^2_{sr} P_v \rho_s}
\end{equation}
Notice now that $\max \bigl\{ L_2^-, \bar L\bigr\} \leq \kappa/\rho_s \leq L_2^+$. Therefore, if $L_1\leq L_2^-$, we have $f'(L)\geq 0$ for $L\leq L_2^-$, and $f'(L)\leq 0$ for $L_2^-\leq L \leq \kappa/\rho_s$; then, the solution to the relaxed problem is $L^\star_\text{rel}= \min \bigl\{ L_2^-, \bar L \bigr\}$. If, instead, $L_1\geq L_2^-$, we have $f'(L)\geq 0$ for $L\leq L_1$, and $f'(L)\leq 0$ for $L_1\leq L \leq \kappa/\rho_s$, so that the solution to the relaxed problem is $L^\star_\text{rel}= \min \bigl\{L_1, \bar L \bigr\}$. Thus
\begin{equation}
L^\star_\text{rel} = \min \bigl\{ \max \{ L_1, L_2^-\}, \bar L \bigr\}
\end{equation}
\begin{figure*}[t]
\centering
\centerline{\includegraphics[width=0.33\textwidth]{fig_2.pdf}
\includegraphics[width=0.33\textwidth]{fig_3.pdf}
\includegraphics[width=0.33\textwidth]{fig_4.pdf}}
\caption{Probability of detection (left plot), radar transmit power (middle plot), and RIS element number (right plot) versus $\text{SNR}_0$ for a radar operating alone (``No RIS''), aided by a passive RIS, or helped by an active RIS with a maximum amplification factor of $a^2_\text{max}=10,20,30,40$ dB.\label{fig_2}}
\end{figure*}
Finally, the solution to Problem~\eqref{subprob_L} is found by comparing $f\bigl(\lfloor L_\text{rel}^\star \rfloor \bigr)$ and $f\bigl(\lceil L_\text{rel}^\star \rceil\bigr)$; consequently, the solution to Problem~\eqref{subprob_L_a} is
\begin{subequations} \label{a_L_opt}
\begin{align}
L^\star &= \operatorname*{argmax}\limits_{L\in\{ \lfloor L_\text{rel}^\star \rfloor, \lceil L_\text{rel}^\star \rceil\}} \frac{\alpha_2^2 \sigma^2_{\gamma_2} P_r L^2 g^2(L) }{P_{w_2}+ \alpha^2_{sr}P_v L g^2(L)}\\
a_\ell^\star &= g(L^\star), \quad \ell=1,\ldots, L^\star.
\end{align}%
\end{subequations}
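The two closed-form updates~\eqref{P_r_opt} and~\eqref{a_L_opt} make Alg.~\ref{alg} straightforward to implement. The sketch below is a minimal transcription; all numerical constants are placeholders chosen only to exercise the updates (in particular, the propagation factors and the target strength are \emph{not} computed from the paper's geometry or Table~\ref{tab_1}):

```python
from math import floor, ceil, sqrt

# --- illustrative constants (placeholders, not the paper's exact values) ---
P_max, rho_r = 4.0, 2.0        # power budget [W], radar circuit power [W]
eta_r = eta_s = 0.8            # amplifier efficiencies
rho_s = 4.16e-4                # per-element RIS circuit power [W]
P_v, P_w2 = 4.0e-17, 1.6e-16   # RIS dynamic noise, receive noise [W]
a_max2 = 1.0e3                 # maximum power amplification a_max^2 (30 dB)
L_max = 2500
alpha_rts2 = 6.2e-13           # (alpha_rt * alpha_ts)^2, placeholder
alpha_sr2 = 3.2e-6             # alpha_sr^2, placeholder
alpha2_sq = 1.0e-14            # alpha_2^2, placeholder
sigma2_g2 = 1.0e-10            # mean-square target response toward the RIS

def snr2(L, a2, P_r):
    """SNR_2 with equal amplifications a_ell^2 = a2 for all elements."""
    if L == 0:
        return 0.0
    return (alpha2_sq * sigma2_g2 * P_r * L ** 2 * a2
            / (P_w2 + alpha_sr2 * P_v * L * a2))

def update_Pr(L, a2):
    """Closed-form radar-power update (P_r_opt): spend all leftover budget."""
    num = P_max - rho_r - L * rho_s - P_v / eta_s * L * a2
    den = 1.0 / eta_r + alpha_rts2 * sigma2_g2 / eta_s * L * a2
    return num / den

def update_a_L(P_r):
    """RIS update (a_L_opt): round the relaxed solution L_rel and compare."""
    kappa = P_max - rho_r - P_r / eta_r
    zeta = (alpha_rts2 * sigma2_g2 * P_r + P_v) / eta_s
    L_bar = min(L_max, floor(kappa / (rho_s + zeta)))
    L1 = kappa / (rho_s + a_max2 * zeta)
    c = P_w2 * zeta + alpha_sr2 * P_v * kappa
    L2m = (c - sqrt(c * P_w2 * zeta)) / (alpha_sr2 * P_v * rho_s)  # L_2^-
    L_rel = min(max(L1, L2m), L_bar)
    def g2(L):  # squared common amplification g(L)^2
        if L == 0:
            return 1.0
        return min(a_max2, (kappa - rho_s * L) / (zeta * L))
    cand = {max(0, floor(L_rel)), min(ceil(L_rel), L_bar)}
    L_star = max(cand, key=lambda L: snr2(L, g2(L), P_r))
    return L_star, g2(L_star)

# Alg. 1: start from a feasible triplet and alternate the two updates.
L, a2, P_r = 1000, 1.0, 1.0
for _ in range(20):
    P_r = update_Pr(L, a2)
    L, a2 = update_a_L(P_r)
```

Since each half-step maximizes PD over its own block of variables while keeping the iterate feasible, the objective is non-decreasing along the iterations, as guaranteed by the alternating-maximization scheme.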
\section{Simulation Results} \label{num_res_sec}
Here we consider an illustrative example to assess the performance improvement granted by an active RIS in the radar detection problem. With reference to Fig.~\ref{fig_1}, the radar is located at the origin of the coordinate system, while the target and the RIS lie at positions $(500,0)$ m and $(200,-200)$ m, respectively; the system parameters are listed in Table~\ref{tab_1}. For comparison, we also consider the cases where no RIS is used (therefore, $P_r=(P_\text{max}-\rho_r)\eta_r$) and where a passive RIS is adopted (therefore, $\rho_s=P_c$, $P_v=0$, $\eta_s=1$, and $a_\ell=1$ for any $\ell$) and optimized as in Problem~\eqref{opt_prob}, with power constraint $\rho_r +\eta_r^{-1} P_r + LP_c \leq P_\text{max}$. All the curves are reported versus $\text{SNR}_0=\alpha_1^2 \sigma^2_{\gamma_1} (P_\text{max}-\rho_r)\eta_r/P_{w_1}$ (i.e., the SNR in the No RIS case), where the target strength $\sigma^2_{\gamma_1}$ is varied, with $\sigma^2_{\gamma_2}=\sigma^2_{\gamma_1}$.
\begin{table}[t]
\centering
\caption{System parameters \label{tab_1}}
\begin{tabular}{lll}
\toprule
$P_\text{max}= 4$ W & $W=10$ MHz & $\eta_r = \eta_s = 0.8$ \\
$a^2_\text{max}\in\{10, 20, 30, 40\}$ dB & $T=0.5$ ms & $\rho_r=2$ W\\
$G_{st}= G_{sr} =3$ dB & $\lambda = 10$ cm & $P_c = -10$ dBm\\
$P_{w_1}= P_{w_2}= -128$ dBm & $\text{PFA} = 10^{-6}$ & $P_{dc}=- 5$ dBm \\
$G^\text{tx}_{rt} = G^\text{rx}_{rt} = G^\text{rx}_{rs} = 33$ dB & $L_\text{max} =2500$ & $P_v= -134$ dBm\\
\bottomrule
\end{tabular}
\end{table}
In the left plot of Fig.~\ref{fig_2}, the optimized PD is reported (solid lines). It is seen that the use of a passive RIS is advantageous (compared to the No RIS case) only for very large SNRs (corresponding in this example to PD values larger than 0.89), in accordance with the findings in~\cite{Grossi2021ris}. The active RIS with a maximum amplification factor of 10 dB only performs slightly better than the passive RIS and is still not competitive (compared to the No RIS case) over a large SNR range; on the contrary, the active RIS with a maximum amplification factor greater than or equal to 20 dB outperforms the passive RIS and the No RIS cases over the entire inspected range of SNRs; remarkably, if $a^2_\text{max}=40$ dB, a gain of as much as 9.2~dB at $\text{PD}=0.5$ can be achieved compared to the No RIS case. Notice that the proposed solution requires prior knowledge of $\sigma^2_{\gamma_2}$, which is not available in practice. A viable fix is to use a design value, say $\sigma^2_{\gamma_2,d}$, and accept some loss in case of mismatch. In this figure, we have also reported PD for such a mismatched design ($+$ marker), where $\sigma^2_{\gamma_2,d}$ is the one corresponding to $\text{PD}=0.5$. As can be seen, a negligible loss is incurred for $a^2_\text{max}= 20, 30, 40$ dB, showing that this design is robust to the uncertainty in the target strength.
In the middle and right plots of Fig.~\ref{fig_2}, instead, the optimal radar transmit power and number of RIS elements are shown, respectively. The optimal amplification factor is not reported since, for the considered system, $a_\ell = a_\text{max}$ for all $\ell$ and in all cases. It is seen that the passive RIS is activated only for $\text{SNR}_0\geq 20.5$ dB: in this latter case, all reflecting elements are used, with a power consumption of about $L_\text{max}P_c=0.25$ W. The active RIS with a maximum amplification factor of 10 dB behaves similarly to the passive RIS case: indeed here the signal amplification is still not sufficient to cope with the severe product path loss along the indirect target-RIS-radar hop. If instead the maximum amplification factor is set to 20 dB, the radar power consumption is $\rho_r+\eta_r^{-1}P_r\approx3$~W; interestingly, as $\text{SNR}_0$ increases, the power is slightly moved here from the RIS to the radar side, as the number of required RIS elements decreases. When $a^2_\text{max}$ is further increased, the number of RIS elements is progressively reduced for the same value of $\text{SNR}_0$ and, consequently, the power consumption of the RIS decreases: the intuition here is that, once the RIS amplification is already sufficiently large to well counteract the product path loss along the target-RIS-radar path, it becomes more advantageous to switch off more RIS elements and move the power to the radar transmitter to better illuminate the target.
\section{Conclusion}\label{concl_sec}
In this letter, we have presented a novel radar architecture, where the radar transmitter illuminates a prospective target, and the radar receiver collects the \emph{direct} echo and an \emph{indirect} echo bouncing on an active RIS widely-spaced from the radar. This system makes it possible to exploit the spatial (angular) diversity of the target response, and, once the RIS is properly sized and the available power properly split between the radar transmitter for target illumination and the RIS for path loss compensation, a significant improvement in the detection probability can be obtained compared to the case where the radar operates alone or with the help of a passive RIS. Future developments may include the use of multiple RISs, the extension to the case where the noise and/or clutter power is unknown and must be estimated (through, e.g., adaptive techniques~\cite{Guerci_2014, Liu_2022}), and the study of joint detection and estimation procedures.
Consider the problem of estimating the velocity field of oceanic currents by releasing into the water
a cloud of tracer particles and by sampling their distribution at a later time. The diffusion coefficient is assumed known, and the original cloud that is released at time $t=0$ consists of $N$ particles. These are expected to remain in suspension for some time while they diffuse and drift with the current.
At time $t=1$, their distribution is sampled again. In the meantime, some of the particles have sunk, so that the number of particles found is less than $N$.
Suppose this experiment is performed several times, treating the model originating from previous experiments as a ``prior''. Is it conceivable to ``improve'' a prior model in a rational way? More explicitly, by relying on a prior model and the new sampling result, is it possible to determine an updated model that represents the most probable way that the tracer cloud may have been transported?
At first sight, this problem appears to be of a different nature than those treated in the theory of Large Deviations \cite{varadhan1966asymptotic,varadhan1984large,DemZei09}, in that the sought path-space measure is not a probability measure per se. Nevertheless, in spite of the paucity of the available data, it is possible to solve this inverse problem by a natural embedding technique. A byproduct is a physically motivated framework to interpolate distributions of unequal mass (integrals). The blueprint for the rationale in our work has been provided by the celebrated duo of papers by E.\ Schr\"odinger in 1931/32 \cite{Sch31,Sch32} where he considered the problem of reconciling marginal distributions with a prior stochastic evolution.
The original Schr\"odinger Bridge Problem (SBP) asks for the most likely evolution of stochastic particles as they travel between marginal probability densities $\rho_0$ and $\rho_1$, specified at two points in time (taken as $t_0=0$ and $t_1=1$ without loss of generality), when these marginals fail to be consistent with a known {\em prior law}.
Interestingly, Schr\"odinger considered this abstract problem before a theory of continuous parameter stochastic processes was in place, and had only been preceded by Ludwig Boltzmann \cite{boltzmann1877uber}. Schr\"odinger attacks the problem very much in his countryman's style, through coarse graining, and applying the De Moivre-Stirling formula and Lagrange multipliers. In spite of the lack of proper tools (Sanov's theorem \cite{sanov1961probability}
would be published, in Russian, only in 1957), he arrives at the correct answer \cite{Sch31,Sch32}, that the most likely evolution is obtained by solving a certain two-point boundary value problem (Schr\"odinger system of equations). Important contributions to this theory are then provided by Fortet, Beurling and Jamison \cite{fortet1940resolution,beurling1960automorphism,Jam74,jamison1975markov}. It took more than half a century before F\"ollmer \cite{Fol88}, recovering Schr\"odinger's original motivation, properly cast the problem within the framework of Large-Deviations theory. The field has since seen several other significant contributions, a partial selection being \cite{zambrini1986stochastic,nagasawa1989transformations,wakolbinger1989simplified,Bla92,dawson1990schrodinger,nagasawa1990stochastic,Wak90,aebi1992large,Mik04,MikThi08,CheGeoPav14a,Leo12,Leo14,CheGeoPav14e,SIREV,Con19}. Here \cite{Wak90,Leo14,SIREV} are survey papers. Observe that, in view of Sanov's theorem \cite{DemZei09}, the SBP amounts to seeking a new probability law on the path space of the stochastic particles that is consistent with the given marginals and, at the same time, is the closest to the prior probability law in the relative entropy sense.
Schr\"odinger's Bridge Problem (SBP), as well as its zero-noise limit of Optimal Mass Transport (OMT), continue to impact a growing range of disciplines and applications. In this expanding mathematical landscape, the problem to account for variable mass along the transport path received attention from early on. It was chiefly motivated by the need to interpolate distributions of unequal mass for times series spectral analysis and image registration \cite{koehl2021physics}.
The viewpoint that is being pursued herein is closer in spirit to the original rationale of E.\ Schr\"odinger as we build on a Large Deviations formalism. To this end, we consider below a diffusion process with killing and seek the closest update of the corresponding law that is in agreement with the marginal data.
Thus, we ask for the most likely evolution of stochastic particles which are known to obey a given {\em prior law with potential for losses} (``killing rate'') while they transition between two marginal distributions $\rho_0$ and $\rho_1$ as before. The two distributions are not necessarily consistent with the prior law and neither is the loss of mass necessarily consistent with the prior killing rate.
In our formulation of the unbalanced Schr\"odinger Bridge Problem (uSBP), the marginals cannot be assumed to be probability distributions as their integrals differ due to losses. To this end, we embed the distributions into a frame that includes a coffin/extinction state, leading to a probability law on a continuum together with a discrete state. Thereupon, we find the updated law and killing rate that minimize the relative entropy to the prior with losses, and are consistent with the two marginals. In the special case when the marginals are already consistent with the prior, naturally, the solution coincides with the lossy prior, unlike what happens in other formulations of SBP with killing which are based on Feynman-Kac functionals \cite{nagasawa1990stochastic,wakolbinger1989simplified,Bla92,dawson1990schrodinger,aebi1992large,leonard2011stochastic,CheGeoPav14c,CheGeoPav17a}, and unbalanced transport \cite{chizat2018scaling,chizat2018unbalanced,chen2019interpolation,koehl2021physics}, as discussed in Section \ref{sec:killing}.
The structure of the paper is as follows. In Section \ref{sec:SBP} we revisit classical Schr\"odinger bridge problems. The main framework on Schr\"odinger bridges with unbalanced marginals is presented in Section \ref{sec:uSBP}. We also present a fluid dynamic formulation of the main framework in Section \ref{sec:fluid}. A comprehensive comparison between Schr\"odinger bridges with unbalanced marginals and existing results on Schr\"odinger bridges with killing is provided in Section \ref{sec:killing}. This is followed by a numerical example in Section \ref{sec:example} and a concluding remark in Section \ref{sec:conclusion}.
\section{Preliminaries on Schr\"odinger Bridge Problem}\label{sec:SBP}
We briefly review elements of the theory of Schr\"odinger Bridge Problem (SBP).
To this end, consider a diffusion process
\begin{equation}\label{eq:diffusion}
dX_t = b(t,X_t)dt + \sigma(t,X_t) dW_t,
\end{equation}
over the Euclidean space ${\mathbb R}^n$.
In Schr\"odinger's original thought experiment, a large number $N$ of trajectories over the time interval $[0,1]$ are independently sampled from \eqref{eq:diffusion} with probability distribution of $X_t$ at the initial time $t=0$ being $\rho_0$. The law of large numbers dictates that the terminal distribution at time $t=1$ must be
(approximately)
\begin{equation}\label{eq:largenumber}
\int_{{\mathbb R}^n} q(0,x,1,\cdot) \rho_0(x) dx,
\end{equation}
where $q(t,x,s,y),\, t<s$, denotes the transition probability kernel from state $x$ at time $t$ to state $y$ at time $s$.
Now suppose the observed marginal distribution at time $t=1$, denoted by $\rho_1$, is inconsistent with \eqref{eq:largenumber} and the prior kernel $q(0,x,1,y)$, that is,
\[
\rho_1(\cdot) \neq \int_{{\mathbb R}^n} q(0,x,1,\cdot) \rho_0(x) dx.
\]
Schr\"odinger's problem then seeks the most likely evolution that the particles may have taken between the specified marginals. That is, SBP seeks a suitable update of the law of the diffusion process that reconciles the two marginals $\rho_0,\rho_1$. In the sequel and for notational simplicity, we use the same symbol $\rho$ to denote both, the probability density, as well as the corresponding measure $d\rho= \rho dx$, depending on the context.
As first noted by F\"ollmer \cite{Fol88}, SBP can be more clearly expressed in the language of the theory of {\em large deviations} \cite{DemZei09}. Specifically, let $\Omega = C([0,1],{\mathbb R}^n)$ denote the space of continuous functions on $[0,1]$ with values in ${\mathbb R}^n$, and ${\mathcal P}(\Omega)$ denote the space of probability laws over $\Omega$. Given any two probability measures $P,Q$, the {\em relative entropy} of $P$ with respect to $Q$ is
\begin{equation}
H(P\mid Q) =
\begin{cases}
\int dP \log \frac{dP}{dQ} & \mbox{if}~~ P \ll Q
\\
+\infty & \mbox{otherwise}.
\end{cases}
\end{equation}
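For discrete distributions the relative entropy reduces to a finite sum, and the absolute-continuity requirement $P\ll Q$ becomes a support condition. A minimal Python sketch, purely for illustration (the function name is ours):

```python
import math

def relative_entropy(p, q):
    """H(P|Q) = sum_i p_i log(p_i / q_i) for discrete distributions;
    returns +inf unless P << Q (here: q_i = 0 forces p_i = 0)."""
    h = 0.0
    for pi, qi in zip(p, q):
        if pi == 0.0:
            continue            # 0 log 0 = 0 by convention
        if qi == 0.0:
            return math.inf     # P is not absolutely continuous w.r.t. Q
        h += pi * math.log(pi / qi)
    return h

print(relative_entropy([0.5, 0.5], [0.5, 0.5]))  # 0.0
print(relative_entropy([1.0, 0.0], [0.5, 0.5]))  # log 2, approx 0.693
```

Note that $H(P\mid Q)$ vanishes exactly when $P=Q$ on the common support, consistent with the role it plays as a likelihood penalty below.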
Now consider $N$ independent trajectories
$X_t^1, X_t^2,\ldots, X_t^N \in \Omega$
of a diffusion having law $R\in{\mathcal P}(\Omega)$, and let $L_N$ denote their empirical distribution.
Then, asymptotically as $N\to\infty$, Sanov's theorem\footnote{ Sanov's theorem holds when the process takes values in any Polish space.} gives the exponential rate of decay for the probability of occurrence of an empirical distribution that differs from the law $R$ \cite{DemZei09} as
\begin{equation}\label{eq:sanov}
{\rm Prob} (L_N \in A) \approx \exp (-N \inf_{P\in A} H(P\mid R)), ~~\forall A \subset {\mathcal P}(\Omega).
\end{equation}
Thus, Sanov's result expresses the likelihood of observing an empirical distribution approximated by $P$ in terms of the relative entropy $H(P\mid R)$. Thence, SBP can be formulated as follows:\\
\begin{problem}
Let $R\in{\mathcal P}(\Omega)$ be the probability measure on $\Omega$ induced by the prior process \eqref{eq:diffusion} with initial distribution $\rho_0$. Determine
\begin{equation}\label{eq:SBP}
P^\star := {\rm arg}\min_{P\in {\mathcal P}(\Omega)} \left\{H(P\mid R)~\mid~ P_0 = \rho_0, P_1 = \rho_1\right\},
\end{equation}
where $P_t$ denotes the marginal distribution of $P$ at time $t$ (i.e., the push forward $X_t\#P=P_t$).
\end{problem}
The entropy functional is strictly convex which ensures uniqueness of the minimizer (when it exists). Further, if $R_0=\rho_0$ as well as $R_1 =\rho_1$, the solution to SBP coincides (trivially) with the prior law $R$, i.e., $P^\star=R$, achieving the minimal value $H(P^\star\mid R)=0$. When
$R_1\neq \rho_1$, the SBP thus seeks an updated law $P^\star$ that is closest to the prior in the sense of relative entropy and restores consistency with the marginals (which fails for the prior $R$).
Next we briefly discuss the solution to SBP. For an in-depth exposition see \cite{Leo14} and the review articles \cite{chen2021optimal,SIREV}.
By disintegration of measure,
\begin{align*}
R(\cdot) &= \int_{{\mathbb R}^n\times {\mathbb R}^n} R^{xy}(\cdot) R_{01}(dxdy), \mbox{ and}\\
P(\cdot) &= \int_{{\mathbb R}^n\times {\mathbb R}^n} P^{xy}(\cdot) P_{01}(dxdy),
\end{align*}
where $R_{01}$ ($P_{01}$) denotes the joint marginal distribution of $R$ ($P$) of $X_t$ for $t\in\{0,1\}$ (i.e., $R_{01}=X_{01}\# R$, and similarly, for $P$),
and $R^{xy}$ ($P^{xy}$) denotes the measure induced by $R$ ($P$) conditioned on $(X_0 = x, X_1 = y)$.
It follows that
\begin{equation}\label{eq:Hdisintegration1}
H(P\mid R ) = H(P_{01} \mid R_{01}) + \int H(P^{xy} \mid R^{xy}) P_{01}(dxdy).
\end{equation}
Clearly, when $P^{xy} = R^{xy}$ for any $x,y\in {\mathbb R}^n$, the second term on the right assumes the minimal value $0$. An immediate consequence is the following {\em static} formulation of the SBP.\\
\begin{problem} Determine
\begin{equation}\label{eq:SSBP}
\pi^\star := {\rm arg}\min_{\pi\in {\mathcal P}({\mathbb R}^n\times {\mathbb R}^n)} \left\{H(\pi\mid R_{01})~\mid~ \pi_0 = \rho_0, \pi_1 = \rho_1\right\}.
\end{equation}
\end{problem}
To distinguish between the two formulations \eqref{eq:SBP} and \eqref{eq:SSBP}, we refer to \eqref{eq:SBP} as the {\em dynamic} SBP. The two formulations are equivalent in the sense that solving one provides a solution to the other, as noted next.\\
\begin{theorem}[\cite{Leo14}]\label{thm:staticdynamic1}
Suppose $P^\star$ is a solution to the dynamic SBP \eqref{eq:SBP}, then $P^\star_{01}$ solves the static SBP \eqref{eq:SSBP}. On the other hand, if $\pi^\star$ is a solution to \eqref{eq:SSBP}, then setting $P^\star=\int_{{\mathbb R}^n\times {\mathbb R}^n} R^{xy}(\cdot) \pi^\star(dxdy)$ solves \eqref{eq:SBP}, while $P^\star_{01}=\pi^\star$.
\end{theorem}
\begin{proof} It follows readily from \eqref{eq:Hdisintegration1}.
\end{proof}
A direct consequence of Theorem \ref{thm:staticdynamic1} is that the Radon-Nikodym ratios between solutions and priors for the two problems, the static \eqref{eq:SSBP} and the dynamic \eqref{eq:SBP}, coincide, namely,
\begin{equation}\label{eq:nikodym}
\frac{dP^\star}{dR} = \frac{d\pi^\star}{dR_{01}} (X_0,X_1).
\end{equation}
In fact, this ratio can be factored into two parts, one that depends only on $X_0$ and one that depends on $X_1$, as follows.\\
\begin{theorem}[\cite{Leo14}]\label{prop:solution1}
Assume that $R_{01} \ll R_0\otimes R_1$, and that there exists $\pi\in {\mathcal P}({\mathbb R}^n\times {\mathbb R}^n)$ such that $\pi_0 = \rho_0$, $\pi_1 = \rho_1$ (i.e., feasible), for which $H(\pi \mid R_{01})<+\infty$. Then the static problem \eqref{eq:SSBP} admits a unique solution $\pi^\star$ and there exist two measurable functions $f, g: {\mathcal X} \rightarrow {\mathbb R}_+$ such that
\begin{equation}\label{eq:optpi}
\pi^\star = f(X_0) g(X_1) R_{01}.
\end{equation}
The two factors $f, g$ are solutions to the Schr\"odinger system
\begin{subequations}\label{eq:SBsysC1}
\begin{eqnarray}
\frac{d\rho_0}{dR_0}(x) &=& f(x) R(g(X_1)\mid X_0 = x),
\\
\frac{d\rho_1}{dR_1}(y) &=& g(y) R(f(X_0) \mid X_1 = y).
\end{eqnarray}
\end{subequations}
Moreover, the unique solution to the dynamic problem \eqref{eq:SBP} is
\begin{equation}\label{eq:optP}
P^\star = f(X_0) g(X_1) R.
\end{equation}
\end{theorem}
The above theorem provides an abstract construction of the sought probability law(s) via the solution of the Schr\"odinger system \eqref{eq:SBsysC1}. The local characteristics and the modified stochastic differential equation for the process with law $P^\star$ follow. Computationally, these can be expressed most succinctly in terms of a pair of equations, one forward and one backward in time (identical in form to the prior Fokker-Planck equation and its adjoint), that are nonlinearly coupled through their boundary conditions. We explain this next.
First recall that the marginals $R_t(x)$ of the prior law $R$ for the diffusion \eqref{eq:diffusion} satisfy (weakly) the Fokker-Planck equation
\begin{equation}\label{eq:FK1}
\partial_t R_t + \nabla\cdot(b R_t) = \frac{1}{2} \sum_{i,j=1}^n \frac{\partial^2 (a_{ij} R_t)}{\partial x_i\partial x_j}.
\end{equation}
In what follows, $a(t,x) = \sigma(t,x)\sigma(t,x)'$ is assumed to be everywhere positive definite. Let the two end-point marginals be absolutely continuous with densities $\rho_0$ and $\rho_1$, respectively. The Schr\"odinger system \eqref{eq:SBsysC1} can be reparametrized in terms of
\begin{subequations}
\begin{align}
\hat\varphi(0,x)&:=f(x)R_0(x)\\
\varphi(1,y)&:=g(y),
\end{align}
\end{subequations}
and takes the form
\begin{subequations}\label{eq:SBsystem1}
\begin{eqnarray}\label{eq:SBsystema1}
\partial_t \hat\varphi &=& - \nabla\cdot(b \hat\varphi) +\frac{1}{2} \sum_{i,j=1}^n \frac{\partial^2 (a_{ij} \hat \varphi)}{\partial x_i\partial x_j}
\\\label{eq:SBsystemc1}
\partial_t\varphi &=& - b \cdot\nabla\varphi - \frac{1}{2} \sum_{i,j=1}^n a_{ij}\frac{\partial^2 \varphi}{\partial x_i\partial x_j}
\\\label{eq:SBsysteme1}
\rho_0 &=& \varphi(0,\cdot)\hat\varphi(0,\cdot)
\\\label{eq:SBsystemf1}
\rho_1 &=& \varphi(1,\cdot)\hat\varphi(1,\cdot).
\end{eqnarray}
\end{subequations}
\begin{theorem}\label{thm:SBEsys1} Let $R$ be the law of \eqref{eq:diffusion} with
$a(t,x) = \sigma(t,x)\sigma(t,x)'$ being positive definite for all $(t,x)\in{\mathbb R}\times {\mathbb R}^n$, and assume that $\rho_0,\rho_1$ are absolutely continuous with respect to the Lebesgue measure.
There exists a unique (up to a constant positive scaling) pair $(\hat\varphi(t,x), \varphi(t,x))$ of non-negative functions that satisfies the Schr\"odinger system \eqref{eq:SBsystem1}. Moreover, the law $P^\star$ for the dynamic problem \eqref{eq:SBP} is the law of the diffusion
\begin{equation}\label{eq:controldiffusion}
dX_t = (b(t,X_t) +a(t,X_t)\nabla\log\varphi(t,X_t))dt + \sigma(t,X_t) dW_t,
\end{equation}
with distribution of $X_0$ being $\rho_0$, and at any time $t\in[0,1]$, the marginal density for $P^\star$ satisfies the identity $p_t(x)=\varphi(t,x)\hat\varphi(t,x)$.
\end{theorem}
Existence and uniqueness of solutions for the Schr\"odinger system have been provided in various degrees of generality by Fortet, Beurling and Jamison \cite{fortet1940resolution,beurling1960automorphism,Jam74,jamison1975markov}.
For a detailed exposition of the theory of Schr\"odinger's problem we refer to Leonard \cite{Leo14}, in particular, \cite[Theorems 2.8, 2.9, 3.4]{Leo14}.
A more recent account, along with a proof based on the contractiveness of suitable maps in the Hilbert metric, was given in \cite{chen2016entropic}. In the present paper, we follow an approach similar to that of \cite{chen2016entropic}
when analyzing the more general Schr\"odinger system for diffusions with losses and, therefore, we sketch the key steps for this more general case. An added benefit of recasting the Schr\"odinger system as in
\eqref{eq:SBsystem1} is that it leads, after discretization, to an efficient algorithm for computing $(\hat\varphi(t,x), \varphi(t,x))$, and thereby $f,g$ as well as $P^\star$. The discretized version of the Schr\"odinger system \eqref{eq:SBsystem1} amounts to the celebrated Sinkhorn algorithm for matrix scaling \cite{SIREV}.
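For intuition, the Sinkhorn iteration can be sketched on a finite state space: given a strictly positive toy matrix standing in for the joint prior $R_{01}$ and two marginal vectors, alternately rescaling rows and columns produces discrete analogues of the factors $f,g$ in \eqref{eq:optpi}. A minimal Python sketch under these toy assumptions (all data are random placeholders):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
K = rng.random((n, n)) + 0.1          # toy positive stand-in for R_{01}
rho0 = rng.random(n); rho0 /= rho0.sum()
rho1 = rng.random(n); rho1 /= rho1.sum()

f = np.ones(n)
g = np.ones(n)
for _ in range(500):                  # Sinkhorn / Fortet iteration
    f = rho0 / (K @ g)                # enforce the initial marginal
    g = rho1 / (K.T @ f)              # enforce the terminal marginal

pi = f[:, None] * K * g[None, :]      # discrete analogue of f(x) g(y) R_{01}
assert np.allclose(pi.sum(axis=1), rho0)   # row marginal matches rho0
assert np.allclose(pi.sum(axis=0), rho1)   # column marginal matches rho1
```

Convergence is geometric for entrywise-positive kernels, which is the discrete counterpart of the Hilbert-metric contraction argument referenced above.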
\section{Unbalanced stochastic transport}\label{sec:uSBP}
We now analyze stochastic flows between unequal marginals following E.\ Schr\"odinger's original rationale, rooted in large deviations theory. To this end, we consider a diffusion process with killing and seek the closest update of the corresponding prior law that restores agreement with marginal data.
Once again consider the diffusion process \eqref{eq:diffusion} but, this time, with a nonnegative killing rate $V(t,x)$ (assumed continuous and not identically zero).
A thought experiment similar to Schr\"odinger's, calls for a large number $N$ of trajectories over a time interval $[0,\,1]$, that are independently sampled from \eqref{eq:diffusion} with initial probability distribution $\rho_0$, and a recorded empirical distribution for the surviving particles at time $t=1$ approximated by $\rho_1$, which is inconsistent with the prior law, that is,
\[
\rho_1(\cdot) \neq \int_{{\mathbb R}^n} q(0,x,1,\cdot) \rho_0(x) dx.
\]
The kernel $q(0,x,1,y)$ is no longer a probability kernel in that $\int_{{\mathbb R}^n} q(0,x,1,y) dy\neq 1$, in general, and thus, due to killing, neither $\int_{{\mathbb R}^n} q(0,x,1,\cdot) \rho_0(x) dx$ nor $\rho_1$ is necessarily a probability density. In particular,
$\int \rho_1(x)dx=N_s/N\le 1$, where $N_s$ denotes the number of surviving particles at time $t=1$. Just as in the standard SBP,
we consider continuous distributions, assuming that $N$ is large, and seek to identify the most likely behavior of the particles. By behavior we mean the most likely evolution of the particles along with the most likely times that the particles may have gotten killed (or, absorbed by an underlying medium).
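The thought experiment above can be mimicked numerically: simulate Euler--Maruyama paths of a toy diffusion, kill each particle over $[t,t+dt)$ with probability $V(X_t)\,dt$, and record the surviving fraction, which plays the role of $\int\rho_1 = N_s/N$. A minimal Python sketch in one dimension (the drift, diffusion coefficient, killing rate $V$, and initial law are all arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)
N, T, dt = 20000, 1.0, 1e-2
steps = int(T / dt)
sigma = 1.0
V = lambda x: 0.5 * x**2              # toy killing rate, larger away from 0

x = rng.normal(0.0, 0.3, size=N)      # samples from a toy rho_0
alive = np.ones(N, dtype=bool)
for _ in range(steps):
    killed = rng.random(N) < V(x) * dt    # kill with probability V(x) dt
    alive &= ~killed                      # dead particles stay dead
    x = x + sigma * np.sqrt(dt) * rng.normal(size=N)

survival = alive.mean()               # Monte Carlo estimate of N_s / N
print(f"surviving fraction: {survival:.3f}")
```

The surviving fraction is strictly below one, so the empirical terminal distribution is sub-probability, which is precisely the unbalanced situation treated in this section.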
As in the standard Schr\"odinger bridge, the problem arising from the above thought experiment can be formally stated using the theory of {\em large deviations} \cite{DemZei09}. However, in this case, the space of trajectories needs to be modified to accommodate for possible killing of particles. To this end, we augment the state space of the diffusion ${\mathbb R}^n$ with a ``coffin state'' $\mathfrak c$, resulting in the state space
\[
{\mathcal X}={\mathbb R}^n\cup \{\mathfrak c\}.
\]
Let $\mathbf{\Omega}=D([0,1],{\mathcal X})$ be the Skorokhod space over ${\mathcal X}$, that is, each element in $\mathbf{\Omega}$ is a c\`adl\`ag path over ${\mathcal X}$ \cite{Bil99}. Denote by ${\mathcal P}(\mathbf \Omega)$ and ${\mathcal P}({\mathcal X})$ the spaces of probability distributions over $\mathbf \Omega$ and ${\mathcal X}$, respectively. Each diffusion process $X_t$ ($t\in[0,1]$) on ${\mathbb R}^n$ with killing corresponds to a process ${\mathbf X}_t$ taking values in $\mathcal X$, and thereby, to a law in ${\mathcal P}(\mathbf \Omega)$.
Evidently,
${\mathcal X}$ is a Polish space. The space $\mathbf{\Omega}$ of c\`adl\`ag over ${\mathcal X}$ is thus, with the appropriate topology, also a Polish space \cite{Pol12,EthKur09}. Sanov's theorem applies to measures on Polish spaces and, therefore, the likelihood function is once again expressed in terms of the relative entropy between probability laws.
In our unbalanced SBP setting, the set of probability laws over path space ${\mathcal P}(\mathbf \Omega)$ that are in alignment with the observations is
\[
\{{\mathbf P}\in {\mathcal P}(\mathbf \Omega)\mid {\mathbf P}_0 = p_0,~{\mathbf P}_1 = p_1\},
\]
where $p_0, p_1$ are the natural augmentations of $\rho_0, \rho_1$ so that they belong to ${\mathcal P}({\mathcal X})$, respectively. Specifically, assuming that $\int_{\mathbb R^n} \rho_0(x)dx=1$ while $\int_{\mathbb R^n} \rho_1(x)dx\le 1$, we set
\begin{subequations}
\begin{equation}
p_0=(\rho_0(\cdot),0)
\end{equation}
and
\begin{equation}
p_1=(\rho_1(\cdot),1-\int_{\mathbb R^n} \rho_1(x)dx).
\end{equation}
\end{subequations}
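As a sanity check, this augmentation into probability vectors on ${\mathcal X}$ can be mimicked in a few lines of Python for discrete marginals (toy numbers; the helper name is ours):

```python
import numpy as np

def augment(rho0, rho1):
    """Embed unbalanced marginals into P(X), X = R^n union {coffin}:
    p0 carries no coffin mass; p1 absorbs the lost mass 1 - sum(rho1)."""
    p0 = np.append(rho0, 0.0)
    p1 = np.append(rho1, 1.0 - rho1.sum())
    return p0, p1

rho0 = np.array([0.2, 0.5, 0.3])      # probability vector, total mass 1
rho1 = np.array([0.1, 0.4, 0.2])      # survivors only, total mass 0.7
p0, p1 = augment(rho0, rho1)
assert abs(p0.sum() - 1.0) < 1e-12    # both augmented marginals
assert abs(p1.sum() - 1.0) < 1e-12    # are genuine probability vectors
```

Both augmented vectors are bona fide probability distributions, so the machinery of the balanced problem applies verbatim on ${\mathcal X}$.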
Thus, we arrive at the following recasting of uSBP as an ordinary SBP.\\
\begin{problem}[Unbalanced Schr\"odinger Bridge Problem (uSBP)] Determine
\begin{equation}\label{eq:USBP}
{\mathbf P}^\star := {\rm arg}\min_{{\mathbf P}\in {\mathcal P}(\mathbf \Omega)} \left\{H({\mathbf P}\mid {\mathbf R})~\mid~ {\mathbf P}_0 = p_0, {\mathbf P}_1 = p_1\right\}.
\end{equation}
\end{problem}
As before, verbatim, ${\mathbf R}(\cdot) = \int_{{\mathcal X}^2} {\mathbf R}^{xy}(\cdot) {\mathbf R}_{01}(dxdy)$ and
${\mathbf P}(\cdot) = \int_{{\mathcal X}^2} {\mathbf P}^{xy}(\cdot) {\mathbf P}_{01}(dxdy)$, where
now ${\mathbf R}_{01}$ (${\mathbf P}_{01}$) denotes the joint marginal distribution of ${\mathbf R}$ (${\mathbf P}$) over $({\mathbf X}_0,{\mathbf X}_1)$,
and ${\mathbf R}^{xy}$ (${\mathbf P}^{xy}$) denotes the law conditioned on ${\mathbf X}_0 = x\in \mathcal X$ and ${\mathbf X}_1 = y\in\mathcal X$.
As before, the relation to the static SBP emerges.\\
\begin{problem} Determine
\begin{equation}\label{eq:SUSBP}
{\boldsymbol\pi}^\star:={\rm arg}\min_{{\boldsymbol\pi}\in {\mathcal P}({\mathcal X}^2)} \left\{H({\boldsymbol\pi}\mid {\mathbf R}_{01})~\mid~ {\boldsymbol\pi}_0 = p_0, {\boldsymbol\pi}_1 = p_1\right\}.
\end{equation}
\end{problem}
The two formulations are once again equivalent, as it readily follows from the identity $H({\mathbf P}\mid {\mathbf R} ) = H({\mathbf P}_{01} \mid {\mathbf R}_{01}) + \int_{{\mathcal X}^2} H({\mathbf P}^{xy} \mid {\mathbf R}^{xy}) {\mathbf P}_{01}(dxdy)$.\\
\begin{theorem}\label{thm:staticdynamic}
Suppose ${\mathbf P}^\star$ solves the dynamic uSBP \eqref{eq:USBP}, then ${\mathbf P}^\star_{01}$ also solves the static uSBP \eqref{eq:SUSBP}. On the other hand, if ${\boldsymbol\pi}^\star$ solves \eqref{eq:SUSBP}, then setting ${\mathbf P}^\star=\int_{{\mathcal X}^2} {\mathbf R}^{xy}(\cdot) {\boldsymbol\pi}^\star(dxdy)$ solves \eqref{eq:USBP}, while ${\mathbf P}^\star_{01}={\boldsymbol\pi}^\star$.
\end{theorem}
The relation between the Radon-Nikodym ratios of solutions and priors for the two problems, analogous to \eqref{eq:nikodym}, applies here too, and the analogous expressions for the Schr\"odinger system in Theorem \ref{prop:solution1} follow as well. More explicitly, the solutions to \eqref{eq:USBP} and \eqref{eq:SUSBP} are of the form
\begin{equation}
{\mathbf P}^\star = f({\mathbf X}_0)g({\mathbf X}_1){\mathbf R}
\end{equation}
and
\begin{equation}
{\boldsymbol\pi}^\star = f({\mathbf X}_0)g({\mathbf X}_1){\mathbf R}_{01},
\end{equation}
respectively.
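The recasting of uSBP as an ordinary SBP on the augmented space lends itself to a finite-state illustration: append a coffin state to a toy prior kernel with losses, augment the marginals as $p_0=(\rho_0,0)$ and $p_1=(\rho_1,1-\int\rho_1)$, and run the standard Sinkhorn iteration on the augmented joint prior. A minimal Python sketch (all numbers are arbitrary toy choices):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4
# toy prior transition kernel with losses: each row leaks mass to a coffin
Q = rng.random((n, n)) + 0.1
death = 0.2 + 0.3 * rng.random(n)              # prior killing probability
Q = (1 - death)[:, None] * Q / Q.sum(axis=1, keepdims=True)

rho0 = rng.random(n); rho0 /= rho0.sum()       # initial marginal, mass 1
rho1 = rng.random(n); rho1 *= 0.6 / rho1.sum() # survivors only, mass 0.6

# augmented joint prior over (n diffusion states) + coffin
K = np.zeros((n + 1, n + 1))
K[:n, :n] = rho0[:, None] * Q                  # stand-in for R_{01} on R^n x R^n
K[:n, n] = rho0 * death                        # transitions into the coffin
K[n, n] = 1.0                                  # coffin is absorbing (row unused)
p0 = np.append(rho0, 0.0)                      # p_0 = (rho_0, 0)
p1 = np.append(rho1, 1.0 - rho1.sum())         # p_1 = (rho_1, 1 - mass(rho_1))

f, g = np.ones(n + 1), np.ones(n + 1)
for _ in range(2000):                          # plain Sinkhorn on the augmentation
    f = p0 / (K @ g)                           # f is 0 at the coffin (p0 has no mass there)
    g = p1 / (K.T @ f)

pi = f[:, None] * K * g[None, :]               # pi* = f(x) g(y) R_{01} on X^2
assert np.allclose(pi.sum(axis=1), p0, atol=1e-8)
assert np.allclose(pi.sum(axis=0), p1, atol=1e-8)
```

The coffin column of $\pi$ carries exactly the lost mass $1-\int\rho_1$, illustrating how the updated killing is inferred jointly with the updated transport.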
The divergence between the standard SBP and the present uSBP becomes noticeable when we seek explicit solutions via analogues of the system \eqref{eq:SBsystem1} and of the corresponding Fokker-Planck equation in Theorem \ref{thm:SBEsys1}, since now we also need to specify the update of the prior killing rate. We detail this next.
\subsection{Generalized Schr\"odinger system}
The Fokker-Planck equation for a diffusion \eqref{eq:diffusion} with killing rate $V(t,x)$
is
\begin{equation}\label{eq:FK}
\partial_t R_t+ \nabla\cdot(b R_t) + V R_t = \frac{1}{2} \sum_{i,j=1}^n \frac{\partial^2 (a_{ij} R_t)}{\partial x_i\partial x_j}.
\end{equation}
As before, $a(t,x) = \sigma(t,x)\sigma(t,x)'$ is assumed to be positive definite throughout. The corresponding Schr\"odinger system and its relation to the law ${\mathbf P}^\star$ can be expressed after reparametrizing the pair $(f,g)$ of functions on $\mathcal X$ as follows
\begin{subequations}\label{eq:newparameters}
\begin{eqnarray}
f(x){\mathbf R}_0(x) &=&
\begin{cases}
\hat\varphi(0,x) & \mbox{if}~x\in {\mathbb R}^n
\\
\hat\psi(0) & \mbox{if}~x=\mathfrak c,
\end{cases}
\\
g(y) &=&
\begin{cases}
\varphi(1,y) & \mbox{if}~y\in {\mathbb R}^n
\\
\psi(1) & \mbox{if}~y=\mathfrak c.
\end{cases}
\end{eqnarray}
\end{subequations}
Comparing with \eqref{eq:SBsystem1}, the Schr\"odinger system along with the non-linear coupling constraints now becomes
\begin{subequations}\label{eq:SBsystem}
\begin{eqnarray}\label{eq:SBsystema}
\partial_t \hat\varphi &=& - \nabla\cdot(b \hat\varphi)-V\hat\varphi +\frac{1}{2} \sum_{i,j=1}^n \frac{\partial^2 (a_{ij} \hat \varphi)}{\partial x_i\partial x_j}
\\\label{eq:SBsystemb}
\frac{d\hat\psi}{dt} &=& \int V\hat\varphi(t,x) dx
\\\label{eq:SBsystemc}
\partial_t\varphi &=& - b \cdot\nabla\varphi+V\varphi - \frac{1}{2} \sum_{i,j=1}^n a_{ij}\frac{\partial^2 \varphi}{\partial x_i\partial x_j} -V\psi
\\\label{eq:SBsystemd}
\frac{d\psi}{dt} &=& 0
\\\label{eq:SBsysteme}
\rho_0 &=& \varphi(0,\cdot)\hat\varphi(0,\cdot)
\\\label{eq:SBsystemf}
\rho_1 &=& \varphi(1,\cdot)\hat\varphi(1,\cdot)
\\\label{eq:SBsystemg}
\hat\psi(0) &=& 0
\\\label{eq:SBsystemh}
\psi(1)\hat\psi(1) &=& 1-\int \rho_1.
\end{eqnarray}
\end{subequations}
\begin{theorem}\label{thm:SBEsys}
Let $R$ be the law of a diffusion \eqref{eq:diffusion} with nontrivial killing rate $V(t,x)$ and
$a(t,x) = \sigma(t,x)\sigma(t,x)'$ being positive definite for all $(t,x)\in{\mathbb R}\times {\mathbb R}^n$, and assume that $\rho_0,\rho_1$ are absolutely continuous with respect to the Lebesgue measure.
There exists a unique (up to a constant positive scaling) $4$-tuple $(\hat\varphi(t,x), \hat \psi(t), \varphi(t,x), \psi(t))$ of non-negative functions that satisfies the Schr\"odinger system \eqref{eq:SBsystem}.
\end{theorem}
The proof of the theorem, given in Appendix \ref{sec:GSS}, is based on the contractiveness of an iterative scheme that alternates between evaluating
$(\hat\varphi(1,\cdot),\hat\psi(1))$ from $(\varphi(0,\cdot),\psi(0))$ using (\ref{eq:SBsysteme}), (\ref{eq:SBsystema}), (\ref{eq:SBsystemg}), (\ref{eq:SBsystemb}), and then evaluating $(\varphi(0,\cdot),\psi(0))$ in a follow-up cycle from $(\hat\varphi(1,\cdot),\hat\psi(1))$ by backward-in-time integration of the remaining equations. Specifically, we prove that the iteration
\begin{align}\label{eq:newmaps}
{\hat\varphi(1,\cdot)\choose{\hat\psi(1)}}
{\mapsto}
{\varphi(0,\cdot) \choose \psi(0)}
{\mapsto}
{\hat\varphi(1,\cdot)\choose \hat\psi(1)}_{\rm next}
\end{align}
is strictly contractive in the Hilbert metric.
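Recall that the Hilbert projective metric on the positive cone is $d_H(x,y)=\log\big(\max_i (x_i/y_i)/\min_i(x_i/y_i)\big)$, and by Birkhoff's theorem any entrywise-positive linear map is a strict contraction in this metric; this is the mechanism behind the scheme above. A small finite-dimensional Python illustration (random data, purely for intuition):

```python
import numpy as np

def hilbert_metric(x, y):
    """Hilbert projective metric between two vectors in the positive cone."""
    r = x / y
    return np.log(r.max() / r.min())

rng = np.random.default_rng(2)
A = rng.random((4, 4)) + 0.1           # entrywise-positive linear map
x = rng.random(4) + 0.1
y = rng.random(4) + 0.1

d_before = hilbert_metric(x, y)
d_after = hilbert_metric(A @ x, A @ y)
assert d_after < d_before              # strict contraction (Birkhoff)
```

Since $d_H$ is invariant under positive scaling, it is the natural metric for the pair $(\hat\varphi,\hat\psi)$, which is itself only determined up to a common positive factor.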
As in the ordinary SBP, the discretized Schr\"odinger system \eqref{eq:SBsystem} leads to an efficient algorithm to compute $(\hat\varphi(t,x), \hat \psi(t), \varphi(t,x), \psi(t))$, and thereby ${\mathbf P}^\star$ as well as the Fokker-Planck equation for the corresponding marginals, as explained next.
\subsection{Dynamic formulation}
In general, a multiplicative transformation such as ${\mathbf R}\to {\mathbf P}^\star=f({\mathbf X}_0)g({\mathbf X}_1){\mathbf R}$ preserves the Markovian character. Moreover, the generators of the respective semi-groups that relate in this way, herein $\L_t^{\mathbf R},\L_t^{{\mathbf P}^\star}$, can be evaluated from one another directly by utilizing the multiplicative factors and the so-called carr\'e du champ operator
\[
\Gamma_t(u,v):=\L_t(uv)-u\L_t(v)-v\L_t(u).
\]
Specifically (see \cite[Equation (3.6)]{Leo14}, and also \cite{leonard2011stochastic,revuz2013continuous}),
\begin{equation}\label{eq:3_6}
\L_t^{{\mathbf P}^\star} u (x)= \L_t^{{\mathbf R}}u (x) + \Gamma_t^{{\mathbf R}}(g_t,u)(x)/g_t(x),
\end{equation}
where
\begin{equation}
g_t(y) =
\begin{cases}
\varphi(t,y) & \mbox{if}~y\in {\mathbb R}^n
\\
\psi(t) & \mbox{if}~y=\mathfrak c.
\end{cases}
\end{equation}
In light of Theorem \ref{thm:SBEsys} we now establish an explicit characterization of the dynamic unbalanced Schr\"odinger bridge problem \eqref{eq:USBP}. We denote by $P_t$ the marginal of ${\mathbf X}_t$ restricted to the first component of $\mathcal X$, and by $q_t$ the probability of the coffin state. Thus, we use the vectorial notation
\[
{\mathbf P}_t=:(P_t,q_t).
\]
Accordingly, for the marginals ${\mathbf R}_t=(R_t,s_t)$ of the prior, $R_t$ satisfies the Fokker-Planck equation \eqref{eq:FK} while $s_t=1-\int_{\mathbb R^n}R_t(x)dx$. The solution ${\mathbf P}^\star$ to \eqref{eq:USBP} is then characterized by the following theorem.
\begin{theorem}
The solution ${\mathbf P}^\star$ to \eqref{eq:USBP} corresponds to a diffusion process
\begin{equation}\label{eq:controldiffusion}
dX_t = (b(t,X_t) +a(t,X_t)\nabla\log\varphi(t,X_t))dt + \sigma(t,X_t) dW_t
\end{equation}
with killing rate $\psi V/\varphi$, where $\varphi$ is obtained from the solution of the generalized Schr\"odinger system \eqref{eq:SBsystem}. Accordingly,
\begin{equation}\label{eq:newFP}
\partial_t P_t + \nabla \cdot((b+ a\nabla \log \varphi)P_t) = \frac{1}{2} \sum_{i,j=1}^n \frac{\partial^2 (a_{ij} P_t)}{\partial x_i\partial x_j} - \frac{\psi}{\varphi} V P_t.
\end{equation}
\end{theorem}
\begin{proof} The generator of ${\mathbf R}$ is of the form
\begin{equation}
\L^{\mathbf R}_t \;:\;
\left[\begin{matrix} \varphi \\ \psi \end{matrix} \right] \mapsto \left[\begin{matrix} b\cdot \nabla \varphi -V \varphi + \frac{1}{2} \sum_{i,j=1}^n a_{ij}\frac{\partial^2 \varphi}{\partial x_i\partial x_j}+V\psi \\ 0 \end{matrix} \right].
\end{equation}
The carr\'e du champ operator becomes
\begin{equation}
\Gamma^{\mathbf R}_t (\left[\begin{matrix} \varphi_1 \\ \psi_1 \end{matrix} \right], \left[\begin{matrix} \varphi_2 \\ \psi_2 \end{matrix} \right])
=\left[\begin{matrix} V\varphi_1\varphi_2 + a \nabla\varphi_1\cdot \nabla\varphi_2 + V \psi_1\psi_2 - V \psi_1 \varphi_2 -V \varphi_1 \psi_2 \\ 0 \end{matrix} \right].
\end{equation}
We readily obtain that
\begin{eqnarray}\label{eq:componentwise}
\L^{{\mathbf P}^\star}_t \left[\begin{matrix} v\\ \eta \end{matrix} \right] &=& \L^{\mathbf R}_t \left[\begin{matrix} v\\ \eta \end{matrix} \right] + \Gamma^{\mathbf R}_t (\left[\begin{matrix} \varphi \\ \psi \end{matrix} \right], \left[\begin{matrix} v \\ \eta \end{matrix} \right])/\left[\begin{matrix} \varphi \\ \psi \end{matrix} \right]
\\&=&\nonumber
\left[\begin{matrix} b\cdot \nabla v -V v + \frac{1}{2} \sum_{i,j=1}^n a_{ij}\frac{\partial^2 v}{\partial x_i\partial x_j}+V\eta + V v +a \nabla \log \varphi \cdot \nabla v + V \frac{\psi}{\varphi} \eta - V \frac{\psi}{\varphi} v- V\eta\\ 0 \end{matrix} \right]
\\&=&\nonumber
\left[\begin{matrix} (b+a \nabla \log \varphi) \cdot \nabla v- \frac{\psi}{\varphi} V v+ \frac{1}{2} \sum_{i,j=1}^n a_{ij}\frac{\partial^2 v}{\partial x_i\partial x_j} + \frac{\psi}{\varphi} V \eta \\ 0 \end{matrix} \right],
\end{eqnarray}
where the division in \eqref{eq:componentwise} is carried out componentwise.
The generator is that of the diffusion process \eqref{eq:controldiffusion} with killing rate $\psi V/\varphi$ over the extended state space ${\mathcal X}$. The Fokker-Planck equation \eqref{eq:newFP} can be obtained by taking the dual of $\L^{{\mathbf P}^\star}_t$.
\end{proof}
\begin{theorem}
The marginal distribution ${\mathbf P}_t^\star$ on the first component of $\mathcal X$ is $P_t=\varphi(t,\cdot) \hat\varphi(t,\cdot)$, and on the second component of $\mathcal X$ is $q_t = \psi(t)\hat\psi(t)$.
\end{theorem}
\begin{proof}
We verify that $P_t$ as above satisfies the Fokker-Planck equation associated with the diffusion \eqref{eq:controldiffusion} with killing rate $\psi V/\varphi$.
To this end, let
$P_t(\cdot):= \varphi(t,\cdot) \hat\varphi(t,\cdot)$. Then by \eqref{eq:SBsystema} and \eqref{eq:SBsystemc} we obtain
\begin{eqnarray*}
0&=&\partial_t P_t + \nabla \cdot((b+ a\nabla \log \varphi)P_t) - \frac{1}{2} \sum_{i,j=1}^n \frac{\partial^2 (a_{ij} P_t)}{\partial x_i\partial x_j} + \hat\varphi \psi V
\\&=& \partial_t P_t + \nabla \cdot((b+ a\nabla \log \varphi)P_t) - \frac{1}{2} \sum_{i,j=1}^n \frac{\partial^2 (a_{ij} P_t)}{\partial x_i\partial x_j} + \frac{\psi}{\varphi} V P_t,
\end{eqnarray*}
which is exactly the desired Fokker-Planck equation \eqref{eq:newFP}.
Similarly, by \eqref{eq:SBsystemb} and \eqref{eq:SBsystemd},
\[
\frac{dq_t}{dt} = \psi(t) \int V\hat\varphi(t,x)dx = \int \frac{\psi}{\varphi} V P_t dx,
\]
which is consistent with $P_t$ and the new killing rate ${\psi}V/{\varphi}$.
\end{proof}
\section{Fluid dynamic formulation}\label{sec:fluid}
The original Schr\"odinger bridge problem, when there is no killing, is known to be equivalent to the stochastic control problem of minimizing control energy subject to the marginal two end-point constraints \cite{CheGeoPav14e}, or equivalently, to a fluid dynamic formulation whereby the velocity field $u(t,\cdot)$ effecting the flow minimizes this action integral, namely,
\begin{subequations}\label{eq:fluid}
\begin{eqnarray}
\min_{P_t(\cdot), u(t,\cdot)} && \int_0^1 \int_{{\mathbb R}^n} \frac{1}{2} \|u(t,x)\|^2 P_t dx dt
\\&& \partial_t P_t+ \nabla\cdot((b+\sigma u) P_t) - \frac{1}{2} \sum_{i,j=1}^n \frac{\partial^2 (a_{ij} P_t)}{\partial x_i\partial x_j}=0
\\&& P_0 = \rho_0,\quad P_1=\rho_1.
\end{eqnarray}
\end{subequations}
The optimization takes place over the {\em feedback control policy}-{\em flow field} $u(t,x)$ together with the corresponding density flow $P_t(x)$. Below, in this section, we derive an analogous
formulation for the Schr\"odinger bridge problems with unbalanced marginals.
Along the flow, the killing rate may deviate from the prior $V$ and is to be determined. To quantify the deviation of the posterior killing rate from the prior, we introduce an entropic cost
inside the action integral that penalizes the ratio $\alpha(t,x)$ between the posterior and the prior killing rates. That is, $\alpha(t,x)\ge 0$ is an added optimization variable, with the posterior killing rate being $\alpha V$. To penalize differences between the posterior and the prior killing rates
we introduce the factor
\begin{align}\label{eq:entropiccost}
&\alpha \log \alpha -\alpha +1
\end{align}
inside the action integral, which is convex and achieves its minimal value $0$ at $\alpha =1$. This entropic cost has been used in \cite{Leo14,Leo16} to study the Schr\"odinger bridge problem over graphs. It is associated with the large deviation principle for continuous-time Markov chains with discrete state spaces. Combining this entropic cost term for the ratio of killing rates with \eqref{eq:fluid} we arrive at
\begin{subequations}\label{eq:fluidkilling}
\begin{eqnarray}\label{eq:fluidkillinga}
\min_{P, u,\alpha} && \int_0^1 \int_{{\mathbb R}^n} \left[\frac{1}{2} \|u(t,x)\|^2 P_t +(\alpha \log \alpha -\alpha +1)VP_t \right]dx dt
\\&& \partial_t P_t+ \nabla\cdot((b+\sigma u) P_t) +\alpha VP_t- \frac{1}{2} \sum_{i,j=1}^n \frac{\partial^2 (a_{ij} P_t)}{\partial x_i\partial x_j}=0 \label{eq:fluidkillingb}
\\&& P_0 = \rho_0,\quad P_1=\rho_1.\label{eq:fluidkillingc}
\end{eqnarray}
\end{subequations}
Note that the control strategy has now two components, a drift term $u(t,x)$ and a correcting term $\alpha(t,x)$ for the killing rate.\\
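The basic properties of the entropic penalty \eqref{eq:entropiccost} on the killing-rate ratio can be checked directly: it vanishes only at $\alpha=1$, is nonnegative, and is convex. A small Python verification (the function name is ours; we use the convention $0\log 0=0$):

```python
import math

def kill_cost(alpha):
    """Entropic penalty alpha*log(alpha) - alpha + 1 on the killing-rate
    ratio; the convention 0*log 0 = 0 gives cost 1 at alpha = 0."""
    if alpha == 0.0:
        return 1.0
    return alpha * math.log(alpha) - alpha + 1.0

assert kill_cost(1.0) == 0.0                   # no change in killing rate
assert all(kill_cost(a) > 0 for a in (0.0, 0.5, 2.0, 5.0))
# midpoint-convexity check on a grid (tiny tolerance for float error)
grid = [0.1 * k for k in range(1, 60)]
assert all(kill_cost(0.5 * (a + b)) <= 0.5 * (kill_cost(a) + kill_cost(b)) + 1e-12
           for a in grid for b in grid)
```

In particular the cost of suppressing killing entirely ($\alpha=0$) is finite, so the optimization \eqref{eq:fluidkilling} may trade off transport effort against modified losses.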
\begin{theorem}
Let $(\hat\varphi(t,x), \hat \psi(t), \varphi(t,x), \psi(t))$ be the solution to the Schr\"odinger system \eqref{eq:SBsystem}, then the solution to \eqref{eq:fluidkilling} is given by the choice
\begin{subequations}\label{eq:thm12}
\begin{eqnarray}
u^\star(t,x) &=& \sigma (t,x)'\nabla \log \varphi(t,x)
\\
\alpha^\star(t,x) &=& \frac{\psi(t)}{\varphi(t,x)}\\
P_t(x)&=&\varphi(t,x)\hat\varphi(t,x).
\end{eqnarray}
\end{subequations}
\end{theorem}
\begin{proof}
We verify that conditions \eqref{eq:thm12} ensure stationarity of the Lagrangian for \eqref{eq:fluidkilling}.
Introducing the Lagrange multiplier $\lambda(t,x)$, the Lagrangian for \eqref{eq:fluidkilling} is
\begin{align*}
{\mathcal L} = &\int_0^1 \int\left[\frac{1}{2} \|u\|^2 P_t +(\alpha \log \alpha -\alpha +1)VP_t \right.\\
&\left.+ \lambda \left(\partial_t P_t+ \nabla\cdot((b+\sigma u) P_t) +\alpha VP_t- \frac{1}{2} \sum_{i,j=1}^n \frac{\partial^2 (a_{ij} P_t)}{\partial x_i\partial x_j}\right)\right]dx dt.
\end{align*}
Applying integration by parts we obtain
\begin{align}\nonumber
{\mathcal L} =& \int_0^1 \int\left[\frac{1}{2} \|u\|^2 P_t +(\alpha \log \alpha -\alpha +1)VP_t - P_t\partial_t\lambda -\nabla\lambda\cdot (b+\sigma u) P_t +\alpha V\lambda P_t
\right.\\
&\left.- \frac{1}{2} \sum_{i,j=1}^n a_{ij} \frac{\partial^2 \lambda}{\partial x_i\partial x_j}P_t\right]dx dt
+ \int \lambda (1,x) P_1(x) dx - \int \lambda(0,x) P_0(x) dx. \label{eq:Lagrangian}
\end{align}
Minimizing the above over $u$ yields
\begin{subequations}\label{eq:optcontrol}
\begin{equation}
u^\star(t,x) = \sigma' \nabla \lambda.
\end{equation}
Similarly, minimization over $\alpha$ yields
\begin{equation}
\alpha^\star(t,x) = \exp (-\lambda).
\end{equation}
\end{subequations}
Substituting \eqref{eq:optcontrol} into \eqref{eq:Lagrangian} we obtain
\begin{align*}
{\mathcal L} =& \int_0^1\int P_t\left(-\frac{1}{2} a \nabla \lambda \cdot \nabla\lambda- b\cdot \nabla\lambda-\partial_t \lambda- \frac{1}{2} \sum_{i,j=1}^n a_{ij} \frac{\partial^2 \lambda}{\partial x_i\partial x_j} +V(1-\exp(-\lambda))\right) dxdt
\\&+\int \lambda (1,x) P_1(x) dx - \int \lambda(0,x) P_0(x) dx.
\end{align*}
The optimality condition
\begin{equation}\label{eq:lambda}
\partial_t \lambda+ b\cdot \nabla\lambda+ \frac{1}{2} \sum_{i,j=1}^n a_{ij} \frac{\partial^2 \lambda}{\partial x_i\partial x_j} +\frac{1}{2} a \nabla \lambda \cdot \nabla\lambda-V(1-\exp(-\lambda)) = 0
\end{equation}
follows. Now, let
\begin{equation}\label{eq:lambdaphi}
\lambda (t,x) = \log \frac{\varphi(t,x)}{\psi(t)},
\end{equation}
then \eqref{eq:lambda} becomes
\begin{equation}
\partial_t \varphi - \frac{d\psi}{dt} \frac{\varphi}{\psi} + b\cdot \nabla \varphi - V\varphi + \frac{1}{2} \sum_{i,j=1}^n a_{ij}\frac{\partial^2 \varphi}{\partial x_i\partial x_j} +V\psi =0,
\end{equation}
and by setting $d\psi/dt=0$, the above reduces to \eqref{eq:SBsystemc}. Finally, plugging \eqref{eq:lambdaphi} into \eqref{eq:optcontrol} yields \eqref{eq:thm12}.
\end{proof}
Substituting the optimal control \eqref{eq:thm12} into \eqref{eq:fluidkillingb} yields the closed-loop dynamics under the optimal control strategy. Clearly, it is the same as \eqref{eq:newFP} associated with the solution ${\mathbf P}^\star$ to the uSBP \eqref{eq:USBP}.
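For concreteness, since $\sigma u^\star=\sigma\sigma'\nabla\log\varphi=a\nabla\log\varphi$ and $\alpha^\star V=(\psi/\varphi)V$, a direct substitution of \eqref{eq:thm12} into \eqref{eq:fluidkillingb} gives

```latex
\partial_t P_t + \nabla\cdot\big((b + a\nabla\log\varphi) P_t\big)
  + \frac{\psi(t)}{\varphi(t,x)}\, V P_t
  - \frac{1}{2}\sum_{i,j=1}^n \frac{\partial^2 (a_{ij} P_t)}{\partial x_i\partial x_j} = 0,
```

so the optimal strategy corrects the prior in two ways: it adds the drift $a\nabla\log\varphi$ and it modulates the killing rate by the factor $\psi(t)/\varphi(t,x)$.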
\section{SBP over reweighted processes}\label{sec:killing}
Some early attempts to formulate the Schr\"odinger Bridge Problem for diffusions with losses date back to Nagasawa and Wakolbinger \cite{nagasawa1990stochastic,wakolbinger1989simplified}. These focused on processes that are suitably reweighted via a Feynman-Kac multiplicative functional to model losses. Earlier relevant work on Schr\"odinger Bridges over reweighted processes includes \cite{nagasawa1990stochastic,wakolbinger1989simplified,Bla92,dawson1990schrodinger,aebi1992large,leonard2011stochastic,CheGeoPav14c}. In particular, e.g., \cite[Section 8]{wakolbinger1989simplified}, and more recently, \cite{leonard2011stochastic} discuss Feynman-Kac reweighting of the prior measure $R$ into $f(X_0)\exp\left(-\int_{0}^1V(t,X_t)dt\right)g(X_1)R$. Such a process, with this special Radon-Nikodym derivative, is referred to as the $h$-transform of $R$.
To distinguish this prior work from our uSBP formulation, we refer to the earlier formulation as {\em SBP over reweighted processes}.
Let $\hat\rho_1$ be the normalized version of $\rho_1$, so that $\hat \rho_1$ is a probability distribution. The classical Schr\"odinger bridge problem over reweighted processes can then be formulated as
\begin{equation}\label{eq:SBPK}
\min_{P\in{\mathcal P}(\Omega)} \left\{H(P\mid \hat R )~\mid~ P_0 = \rho_0,\; P_1 = \hat\rho_1\right\},
\end{equation}
where
\begin{equation}
\hat R = \exp\left(-\int_{0}^1V(t,X_t)dt\right)R
\end{equation}
is the (unnormalized) distribution induced by the survival trajectories of the diffusion process \eqref{eq:diffusion} with killing rate $V$.
The solution to this problem reads
\begin{equation}
P^\star=f(X_0)g(X_1)\hat R=f(X_0)\exp\left(-\int_{0}^1V(t,X_t)dt\right)g(X_1)R
\end{equation}
where the two multipliers $f, g$ are chosen such that $P^\star$ satisfies the constraints $P_0 = \rho_0, P_1 = \hat \rho_1$.
These two multipliers can again be obtained by solving a Schr\"odinger system. More specifically, let $\varphi, \hat\varphi$ be the solution to
\begin{subequations}
\begin{eqnarray}
\partial_t \hat\varphi &=& - \nabla\cdot(b \hat\varphi) - V\hat\varphi+\frac{1}{2} \sum_{i,j=1}^n \frac{\partial^2 (a_{ij} \hat \varphi)}{\partial x_i\partial x_j}
\\
\partial_t\varphi &=& - b \cdot\nabla\varphi + V\varphi- \frac{1}{2} \sum_{i,j=1}^n a_{ij}\frac{\partial^2 \varphi}{\partial x_i\partial x_j}
\\
\rho_0 &=& \varphi(0,\cdot)\hat\varphi(0,\cdot)
\\
\hat \rho_1 &=& \varphi(1,\cdot)\hat\varphi(1,\cdot),
\end{eqnarray}
\end{subequations}
then $\varphi, \hat \varphi$ relate to $f,g$ as
\begin{subequations}
\begin{align}
\hat\varphi(0,x)&=f(x)\hat R_0(x)\\
\varphi(1,y)&=g(y).
\end{align}
\end{subequations}
Unlike the solution ${\mathbf P}^\star$ to the uSBP \eqref{eq:USBP}, the solution $P^\star$ to \eqref{eq:SBPK} is a probability measure over $\Omega = C([0,1], {\mathbb R}^n)$. Indeed, it is associated with the diffusion process
\[
dX_t = (b(t,X_t) +a(t,X_t)\nabla\log\varphi(t,X_t))dt + \sigma(t,X_t) dW_t
\]
without losses. Its marginal distribution equals $P_t = \varphi(t,\cdot) \hat\varphi(t,\cdot)$ and is a probability measure over ${\mathbb R}^n$ for all $t\in [0,\,1]$. We argue that the SBP over reweighted processes does not address Schr\"odinger's original problem as described in the thought experiment in Section \ref{sec:uSBP}. The prior $\hat R$ describes the distribution of the surviving trajectories, and the problem \eqref{eq:SBPK} can be interpreted as finding the most likely evolution of surviving trajectories that is compatible with the two marginals $\rho_0, \hat\rho_1$. However, the mechanism by which the particles that did not survive got killed is completely ignored in this formulation.
The importance of explicitly considering a possible update of the killing rate becomes salient when the end-point marginals are consistent with the prior law. Such a case highlights a dichotomy between our formulation of the uSBP and the rationale behind the SBP over reweighted processes.
To see this, consider a scenario where the two marginals are already consistent with the prior law, that is
\[
\rho_1(\cdot) = \int_{{\mathbb R}^n} q(0,x,1,\cdot) \rho_0(x) dx.
\]
One would expect the solution to be the prior $\hat R$, {\em since the prior is consistent} with the end-point marginals. {\em This is, however, not the case!} Indeed, $\hat R_0$ represents the distribution at $t=0$ of those particles that are destined to survive, and this differs from $\rho_0$, the distribution of all particles. Thus, $\hat R_0$ is not the solution to \eqref{eq:SBPK}.
One could attempt to modify Schr\"odinger's thought experiment by postulating that $\rho_0$ is precisely the distribution at $t=0$ of those particles that eventually survive. With this modification, it is easy to see that the prior $\hat R$ solves \eqref{eq:SBPK}. {\em This modification, however, is not physical}: It is not possible to measure at time $t=0$ the marginal of the survival trajectories!
Finally, we note that the Schr\"odinger bridge problem over reweighted processes has the following fluid dynamic (stochastic control) formulation
\begin{subequations}\label{eq:fluidV}
\begin{eqnarray}
\min_{P_t(\cdot), u(t,\cdot)} && \int_0^1 \int_{{\mathbb R}^n} \left[\frac{1}{2} \|u(t,x)\|^2+V(t,x)\right] P_t dx dt
\\&& \partial_t P_t+ \nabla\cdot((b+\sigma u) P_t) - \frac{1}{2} \sum_{i,j=1}^n \frac{\partial^2 (a_{ij} P_t)}{\partial x_i\partial x_j}=0
\\&& P_0 = \rho_0,\quad P_1=\hat\rho_1.
\end{eqnarray}
\end{subequations}
This stochastic control problem is over the diffusion process without losses
\[
dX_t = b(t,X_t)dt +\sigma(t,X_t)u(t,X_t)dt + \sigma(t,X_t) dW_t,
\]
and the control $u(t,x)$ only enters the system through the drift. The prior killing rate $V$ serves as a cost term.
This is substantially different from the control formulation \eqref{eq:fluidkilling} of the uSBP where the control has a drift term $u(t,x)$ and a correcting term $\alpha(t,x)$, and the killing rate $V$ appears in the dynamics instead of the cost function.
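In discrete form this contrast is easy to see: the SBP over reweighted processes reduces to a standard Sinkhorn scaling of the killed kernel alone, with no coffin state and no adjustment of the killing mechanism. The following is a minimal sketch under illustrative assumptions (the kernel, sizes, and marginals below are arbitrary choices of ours, not taken from the text).

```python
import numpy as np

# Standard Sinkhorn iteration for the SBP over reweighted processes:
# diagonal scaling of the (unnormalized) surviving kernel Qhat between
# rho_0 and the *normalized* terminal marginal rho_1_hat.
rng = np.random.default_rng(0)
m = 50
Qhat = 0.3 * rng.uniform(0.1, 1.0, (m, m))   # plays the role of exp(-int V) q
p0 = np.full(m, 1.0 / m)                     # rho_0 on the grid
p1 = rng.uniform(0.5, 1.5, m)
p1 /= p1.sum()                               # rho_1_hat: renormalized to mass 1

f = np.ones(m)
for _ in range(2000):
    g = p1 / (Qhat.T @ f)                    # enforce terminal marginal
    f = p0 / (Qhat @ g)                      # enforce initial marginal

Pi = f[:, None] * Qhat * g[None, :]          # posterior endpoint coupling
```

The coupling `Pi` is a bona fide probability matrix: both marginals are probability vectors, so the flow stays normalized at all times, in contrast to the uSBP where the total mass decays.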
\section{Numerical example}\label{sec:example}
We conclude by highlighting the uSBP formalism with an academic/numerical example. To this end, we consider the diffusion process
\[
dX_t= \sigma dW_t,
\]
with $X_t,W_t\in \mathbb R$ (i.e., in a $1$-dimensional state space), $\sigma=0.05$, and
killing rate
\[
V(t,x) \equiv 1.
\]
We work out the solution of the unbalanced Schr\"odinger bridge problem (uSBP) with initial marginal density
\[
\rho_0(x) =
\begin{cases}
{0.3-0.3\cos(3\pi x)} & \text{if}~ 0\le x<2/3\\
{2.4-2.4\cos(6\pi x-4\pi)} & \text{if}~ 2/3\le x\le 1,
\end{cases}
\]
and target marginal density
\[
\rho_1(x)= s \rho_0(1-x),
\]
where $s\le 1$ denotes the fraction of surviving particles.
Figures \ref{fig:SBP} and \ref{fig:muSBP} display the marginal flow of the uSBP for different values of $s$.
When $s<1$, only a portion of the particles survive until the end and many particles vanish along the way. Thus, the total mass of the particles is a decreasing function of time, as can be seen from Figure \ref{fig:uSBP}. Note that the terminal percentage of surviving particles is consistent with the chosen value for $s$, in each case.
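This example can be reproduced, at least qualitatively, with the discrete fixed-point iteration analyzed in Appendix \ref{sec:GSS}, augmenting the grid with a coffin state. The sketch below makes simplifying choices of ours, not taken from the text: a modest grid, a row-normalized Gaussian kernel (roughly a reflecting boundary), and $\sigma=0.2$ instead of $0.05$ for numerical robustness; since $V\equiv 1$, the probability of surviving the whole interval is $e^{-1}$ regardless of position.

```python
import numpy as np

# Hypothetical discretization of the 1-D example (our choices).
n, sigma, s = 120, 0.2, 0.6
x = np.linspace(0.0, 1.0, n)

rho0 = np.where(x < 2/3, 0.3 - 0.3*np.cos(3*np.pi*x),
                2.4 - 2.4*np.cos(6*np.pi*x - 4*np.pi))
p0 = rho0 / rho0.sum()                      # initial grid masses (total 1)
p1 = s * p0[::-1]                           # rho_1(x) = s * rho_0(1 - x)
c1 = 1.0 - p1.sum()                         # mass sent to the coffin state

G = np.exp(-(x[:, None] - x[None, :])**2 / (2 * sigma**2))
G /= G.sum(axis=1, keepdims=True)           # row-stochastic Gaussian kernel
surv = np.exp(-1.0)                         # V = 1 => survival prob e^{-1}
Q = surv * G                                # kernel of surviving transitions
r = (1.0 - surv) * np.ones(n)               # killing probability r(x_0)

# Fixed-point iteration on h = (phi_hat(1, .), psi_hat(1)).
h, hc = np.ones(n), 1.0
for _ in range(5000):
    u, uc = p1 / h, c1 / hc                 # inversion + marginal weights
    v = Q @ u + r * uc                      # phi(0, .)
    w = p0 / v                              # phi_hat(0,.) = rho_0 / phi(0,.)
    h, hc = Q.T @ w, r @ w                  # propagate forward (adjoint map)

P = w[:, None] * Q * u[None, :]             # posterior coupling (survivors)
killed = w * r * uc                         # killed mass per starting point
```

At the fixed point, the endpoint coupling of the survivors together with the killed mass recovers $\rho_0$, the surviving column marginal matches $\rho_1$, and the total surviving mass equals $s$.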
\begin{figure}\begin{center}
\includegraphics[width=0.40\textwidth]{marginalflows1.png}
\caption{Marginal flow of uSBP for $s=1$}
\label{fig:SBP}
\end{center}\end{figure}
\begin{figure}
\centering
\begin{subfigure}[b]{0.32\textwidth}
\centering
\includegraphics[width=\textwidth]{marginalflows8.png}
\caption{$s=0.8$}
\label{fig:ms8}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.32\textwidth}
\centering
\includegraphics[width=\textwidth]{marginalflows6.png}
\caption{$s=0.6$}
\label{fig:ms6}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.32\textwidth}
\centering
\includegraphics[width=\textwidth]{marginalflows4.png}
\caption{$s=0.4$}
\label{fig:ms4}
\end{subfigure}
\caption{Marginal flow of uSBP for $s=0.8, 0.6, 0.4$}
\label{fig:muSBP}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}[b]{0.32\textwidth}
\centering
\includegraphics[width=\textwidth]{survivals8.png}
\caption{$s=0.8$}
\label{fig:s8}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.32\textwidth}
\centering
\includegraphics[width=\textwidth]{survivals6.png}
\caption{$s=0.6$}
\label{fig:s6}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.32\textwidth}
\centering
\includegraphics[width=\textwidth]{survivals4.png}
\caption{$s=0.4$}
\label{fig:s4}
\end{subfigure}
\caption{Survival mass of uSBP for $s=0.8, 0.6, 0.4$}
\label{fig:uSBP}
\end{figure}
For comparison, we also display the solution to the SBP over reweighted processes in Figure \ref{fig:reweighted}. Note that its solution is independent of $s$. The solution describes the posterior distribution of the surviving particles only, and thus the marginal flow remains a probability measure at all times. In fact, it coincides with the uSBP solution for $s=1$.
\begin{figure}\begin{center}
\includegraphics[width=0.40\textwidth]{marginalflows1.png}
\caption{Marginal flow of SBP over reweighted processes}
\label{fig:reweighted}
\end{center}\end{figure}
\section{Concluding remarks}\label{sec:conclusion}
We introduced Schr\"odinger bridges between unbalanced marginals in the spirit of E.\ Schr\"odinger's original rationale (that led to the standard SBP), aimed to reconcile a given prior law, that now includes a killing rate, with marginal observations.
We formulated the problem as a maximum entropy problem over an augmented state space that includes a coffin state representing the state of vanishing particles. The solution is characterized by a Schr\"odinger-type system, different from the classical one, that yields a diffusion process whose drift as well as killing rate are suitably adjusted as compared to the prior. Just like the standard SBP, this new unbalanced Schr\"odinger bridge problem (uSBP) can be formulated as a stochastic control problem. Naturally, departing from the standard SBP, the control variable in this control formulation includes both the drift and the killing rate. We underscore an apparent dichotomy between our formulation of the uSBP and earlier work on SBP over reweighted processes with Feynman-Kac functionals. Though both pertain to SBPs for diffusions with losses, we argued that our uSBP is a natural formulation in the spirit of Schr\"odinger's original quest to reconcile probabilistic models with observations.
The nature of the zero-noise limit of the uSBP and its relation to a corresponding optimal transport flow between unbalanced marginals is left as a topic of future research.
\section{Appendix A: Hilbert's projective metric}\label{sec:appendix}
Herein we discuss Hilbert's projective metric and highlight some important contraction theorems due to Garrett Birkhoff and P.~J.\ Bushell \cite{birkhoff1957extensions,bushell1973projective,bushell1973hilbert} that we use in this work. A first application of the Birkhoff-Bushell contractive maps to scaling of nonnegative matrices, a topic closely connected to Schr\"odinger bridges, was presented in \cite{franklin1989scaling}. In \cite{georgiou2015positive}, it was shown that the Schr\"odinger bridge for Markov chains and quantum channels can be efficiently obtained from the fixed point of a map which contracts the Hilbert metric. We refer to \cite{chen2016entropic,SIREV} for more detailed information and further applications of this metric.
Below, following \cite{bushell1973hilbert}, we recall some basic concepts and results of this theory.\\
Let $\mathcal B$ be a real Banach space and let ${\mathcal K}$ be a closed solid cone in $\mathcal B $, i.e., ${\mathcal K}$ is closed with nonempty interior ${\mathcal K}_0$ and is such that ${\mathcal K}+{\mathcal K}\subseteq {\mathcal K}$, ${\mathcal K}\cap -{\mathcal K}=\{0\}$ as well as $\lambda {\mathcal K}\subseteq {\mathcal K}$ for all $\lambda\geq 0$. Define the partial order
\[
{\mathbf x}\preceq {\mathbf y} \Leftrightarrow {\mathbf y}-{\mathbf x}\in{\mathcal K},\quad {\mathbf x} \prec {\mathbf y} \Leftrightarrow {\mathbf y}-{\mathbf x}\in{\mathcal K}_0
\]
and for ${\mathbf x},{\mathbf y}\in{\mathcal K}^+:={\mathcal K}\backslash \{0\}$, define
\begin{eqnarray*}
M({\mathbf x},{\mathbf y})&:=&\inf\, \{\lambda\,\mid {\mathbf x}\preceq \lambda {\mathbf y}\}\\
m({\mathbf x},{\mathbf y})&:=&\sup \{\lambda \mid \lambda {\mathbf y}\preceq {\mathbf x} \}.
\end{eqnarray*}
Then, the Hilbert metric is defined on ${\mathcal K}^+$ by
\[
d_H({\mathbf x},{\mathbf y}):=\log\left(\frac{M({\mathbf x},{\mathbf y})}{m({\mathbf x},{\mathbf y})}\right).
\]
It is easily seen that $d_H(\cdot,\cdot)$ is symmetric, i.e., $d_H({\mathbf x},{\mathbf y})=d_H({\mathbf y},{\mathbf x})$, and invariant under scaling by positive constants, i.e., $d_H({\mathbf x},{\mathbf y})=d_H(\lambda {\mathbf x}, {\mathbf y})$ for any $\lambda>0$ and ${\mathbf x},{\mathbf y}\in{\mathcal K}_0$.
In particular, $d_H(\lambda {\mathbf x},{\mathbf x})=0$. It can also be shown that the triangle inequality holds and, therefore, $d_H(\cdot,\cdot)$ is a {\em projective} metric that represents distance
between rays.
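As a concrete illustration, on the positive orthant of ${\mathbb R}^n$ (the first example cone mentioned at the end of this appendix) one has $M({\mathbf x},{\mathbf y})=\max_i x_i/y_i$ and $m({\mathbf x},{\mathbf y})=\min_i x_i/y_i$, and the properties above can be checked numerically. The sketch below is ours; the sample vectors are arbitrary.

```python
import numpy as np

def hilbert_d(x, y):
    """Hilbert projective metric on the positive orthant:
    d_H(x, y) = log( max_i(x_i/y_i) / min_i(x_i/y_i) )."""
    r = x / y
    return np.log(r.max() / r.min())

rng = np.random.default_rng(0)
x, y, z = rng.uniform(0.1, 1.0, (3, 5))

# Symmetry and projective invariance under positive scaling.
assert np.isclose(hilbert_d(x, y), hilbert_d(y, x))
assert np.isclose(hilbert_d(3.7 * x, y), hilbert_d(x, y))
assert np.isclose(hilbert_d(2.0 * x, x), 0.0)

# Triangle inequality (holds since M(x,z) <= M(x,y) M(y,z), dually for m).
assert hilbert_d(x, z) <= hilbert_d(x, y) + hilbert_d(y, z) + 1e-12

# Entrywise inversion is an isometry: M(x, y) = 1 / m(1/x, 1/y).
assert np.isclose(hilbert_d(1.0 / x, 1.0 / y), hilbert_d(x, y))
```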
In our analysis we encounter two types of maps. We encounter inversion
\begin{align}\label{eq:isometry}
\mathcal E_{\rm inv}\,:\; {\mathbf x}\mapsto {\mathbf x}^{-1},
\end{align}
of elements ${\mathbf x}\in{\mathcal K}_0$, and also linear maps that are {\em positive}, namely,
\[
{\mathcal E}:{\mathcal K}^+\rightarrow{\mathcal K}^+.
\]
For both types of maps we are interested in
determining their {\em contraction ratio},
\begin{eqnarray*}
\kappa({\mathcal E}):=\inf\{\lambda \mid d_H({\mathcal E}({\mathbf x}),{\mathcal E}({\mathbf y}))\leq \lambda d_H({\mathbf x},{\mathbf y}),\forall {\mathbf x},{\mathbf y}\in
{\mathcal K}_0\}.
\end{eqnarray*}
It turns out that the former are isometries in the Hilbert metric whereas the latter are contractions. Thus, the composition of a combination of both types turns out to be a contraction.
That \eqref{eq:isometry} is an isometry, i.e., $\kappa(\mathcal E_{\rm inv})=1$, follows immediately from
\[
M({\mathbf x},{\mathbf y})=\frac{1}{m({\mathbf x}^{-1},{\mathbf y}^{-1})} \label{eq:isometry2}.
\]
Then, by G.\ Birkhoff's theorem \cite{birkhoff1957extensions,bushell1973hilbert}, any positive linear map $\mathcal E$
is contractive and the contraction ratio can be expressed in terms of the {\em projective diameter}
\begin{eqnarray*}
\Delta({\mathcal E}):=\sup\{d_H({\mathcal E}({\mathbf x}),{\mathcal E}({\mathbf y}))\mid {\mathbf x},{\mathbf y}\in {\mathcal K}_0\}
\end{eqnarray*}
of the range of $\mathcal E$.
Specifically, under these conditions, G.\ Birkhoff's theorem states that
\begin{equation}\label{condiam}
\kappa({\mathcal E})=\tanh(\frac{1}{4}\Delta({\mathcal E})).
\end{equation}
Thus, a positive linear map is strictly contractive if its projective diameter $\Delta({\mathcal E})$ is finite.
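Birkhoff's bound can be checked numerically on the positive orthant, where for an entrywise-positive matrix $A$ the projective diameter specializes to $\Delta(A)=\max_{i,j,k,l}\log\big(A_{ik}A_{jl}/(A_{jk}A_{il})\big)$. The sketch below is illustrative; the matrix and dimensions are arbitrary choices of ours.

```python
import numpy as np

def hilbert_d(x, y):
    # Hilbert projective metric on the positive orthant.
    r = x / y
    return np.log(r.max() / r.min())

rng = np.random.default_rng(1)
A = rng.uniform(0.5, 2.0, (6, 6))      # an entrywise-positive linear map

# Projective diameter: max over log( A[i,k]*A[j,l] / (A[j,k]*A[i,l]) ).
num = np.einsum('ik,jl->ijkl', A, A)
den = np.einsum('jk,il->ijkl', A, A)
delta = np.log((num / den).max())
kappa = np.tanh(delta / 4.0)           # Birkhoff's contraction ratio, < 1

# The map x -> A x contracts the Hilbert metric by at least kappa.
for _ in range(100):
    x, y = rng.uniform(0.1, 10.0, (2, 6))
    assert hilbert_d(A @ x, A @ y) <= kappa * hilbert_d(x, y) + 1e-10
```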
A further useful observation, which follows from the triangle inequality, is that for any ${\mathbf x}_0\in{\mathcal K}_0$,
\begin{align}\label{eq:useful}
\Delta({\mathcal E})&\leq 2\sup\{d_H(\mathcal E({\mathbf x}),{\mathbf x}_0) \mid {\mathbf x}\in\mathcal K_0\}.
\end{align}
This allows bounding $\Delta({\mathcal E})$ to ensure strict contraction for $\mathcal E$.
Important examples are provided by the positive orthant of ${\mathbb R}^n$, the cone of Hermitian, positive semidefinite matrices, spaces of bounded positive functions, and so on.
Notice that, in all cases, the boundary of the cone lies at an infinite distance from any interior point.
\section{Appendix B: Proof of Theorem \ref{thm:SBEsys} on the generalized Schr\"odinger system}\label{sec:GSS}
We herein establish existence and uniqueness of the solution (up to a constant positive scaling) for the system (\ref{eq:SBsystem}).
The steps mimic the analogous case for the SBP where the marginals are supported on a Euclidean space \cite{chen2016entropic}. The difference at present lies in that the support of the functions includes an added point that represents the coffin state.
We assume throughout that the marginal measures $\rho_0,\rho_1$ are absolutely continuous with respect to the Lebesgue measure, in that $\rho_0(dx)=\rho_0(x)dx$ and $\rho_1(dx)=\rho_1(x)dx$ for density functions $\rho_0,\rho_1$ with support $S_0,S_1\subseteq \mathbb R^n$, respectively, and that $\rho_0$ is a probability measure
while $\rho_1$ is a nonnegative measure
with $\int_{S_1}\rho_1(x)dx\le 1$.
The case $\int_{S_1}\rho_1(x)dx= 1$ reduces to the standard SBP and is easy to handle. Thus, without loss of generality, we assume
\[
\int_{S_1}\rho_1(x)dx< 1.
\]
As before, we let $q(0,x_0,t,x)$ for $0<t\le 1$ denote the fundamental solution of equation (\ref{eq:SBsystema}) and assume that
it is continuous and strictly positive on compact subsets. This is guaranteed by sufficient smoothness of the coefficients $b,V,a$, positivity of $V$ and positive definiteness on the whole domain of the matrix $a=(a_{ij})$.
Under these assumptions, we rewrite the Schr\"odinger system (\ref{eq:SBsystem}) as follows,
\begin{subequations}\label{eq:SBsystem2}
\begin{align}
\label{eq:SBsystem2a}
\hat{\varphi}(t,x)&= \int_{\mathbb R^n} q(0,x_0,t,x) \hat{\varphi}(0,x_0)dx_0,
\\\label{eq:SBsystem2b}
\hat\psi(1) &= \int_0^1\int_{\mathbb R^n} V(t,x)\hat\varphi(t,x) dxdt
\\\label{eq:SBsystem2c}
\varphi(0,x_0)&=\int_{\mathbb R^n} q(0,x_0,1,x_1)\varphi(1,x_1)dx_1+\int_0^1\int_{\mathbb R^n}q(0,x_0,t,x)V(t,x)\psi(t) dxdt
\end{align}
\begin{align}
\label{eq:SBsystem2d}
\psi(t) &= {\rm constant},\quad 0\le t\le 1,
\\\label{eq:SBsystem2e}
\rho_0(x_0) &= \varphi(0,x_0)\hat\varphi(0,x_0)
\\\label{eq:SBsystem2f}
\rho_1(x_1) &= \varphi(1,x_1)\hat\varphi(1,x_1)
\\\label{eq:SBsystem2g}
\psi(0)\hat\psi(0) &= 1-\int_{S_0}\rho_0=0
\\\label{eq:SBsystem2h}
\psi(1)\hat\psi(1) &= 1-\int_{S_1} \rho_1 =: c_1 > 0.
\end{align}
\end{subequations}
We consolidate the system of equations \eqref{eq:SBsystem2} into
\begin{subequations}\label{eq:SBsystem3}
\begin{align}\label{eq:SBsystem3a}
\hat{\varphi}(t,x)&= \int_{S_0} q(0,x_0,t,x) \rho_0(x_0) \frac{1}{\varphi(0,x_0)}dx_0
\\\nonumber
\hat\psi(1) &= \int_0^1\int_{\mathbb R^n} V(t,x)\hat\varphi(t,x) dxdt\\\label{eq:SBsystem3b}
&= \int_{S_0}\frac{1}{\varphi(0,x_0)}\rho_0(x_0)\underbrace{\int_0^1\int_{\mathbb R^n} q(0,x_0,t,x) V(t,x) dxdt}_{r(x_0)}dx_0
\\\label{eq:SBsystem3c}
\varphi(0,x_0)&=\int_{S_1} q(0,x_0,1,x_1)\rho_1(x_1)\frac{1}{\hat\varphi(1,x_1)}dx_1+ \frac{1}{\hat\psi(1)}c_1\underbrace{\int_0^1\int_{\mathbb R^n} q(0,x_0,t,x)V(t,x) dxdt}_{r(x_0)}\\\label{eq:SBsystem3d}
\psi(0) &=\psi(1)= \frac{1}{\hat\psi(1)}c_1.
\end{align}
\end{subequations}
These four equations, that encapsulate the Schr\"odinger system, suggest considering the composition of maps
\begin{align}\label{eq:newmaps}
{\hat\varphi(1,\cdot)\choose \hat\psi(1)}
\stackrel{\mathcal E_1}{\mapsto}
{\frac{1}{\hat\varphi(1,\cdot)}\choose \frac{1}{\hat\psi(1)}}
\stackrel{\mathcal E_2}{\mapsto}
{\varphi(0,\cdot) \choose \psi(0)}
\stackrel{\mathcal E_3}{\mapsto}
{\frac{1}{\varphi(0,\cdot)}\choose \frac{1}{\psi(0)}}
\stackrel{\mathcal E_4}{\mapsto}
{\hat\varphi(1,\cdot)\choose \hat\psi(1)}_{\rm next}
\end{align}
in order to analyze existence of solutions. Indeed, we utilize the theory of the Hilbert metric (outlined in Appendix \ref{sec:appendix}) to show that the composition is a strict contraction along rays, resulting in a unique fixed point.
To this end, we consider the Banach space $\mathcal B=\mathcal L^\infty(\mathcal X)$ of {\em real-valued} functions $h(\cdot)$ on
${\mathcal X} =S\cup \{\mathfrak c\}$, where $S\subset{\mathbb R}^n$ satisfies $S_0\cup S_1\subset S$.
For notational convenience we use the vectorial notation ${h(x)\choose h(\mathfrak c)}$ to specify the values of $h$ on the two constituents of its support, for $x\in{\mathbb R}^n$ and $\mathfrak c\in\{\mathfrak c\}$.
Thus, the norm of $h$ is
\[
\left\|h\right\|:=\max\{\|h|_S\|_\infty,|h(\mathfrak c)|\}.
\]
We consider the cone of positive functions
\[
{\mathcal K}=\{ h \in \mathcal B\mid h(\mathfrak c)\geq 0 \mbox{ and } h(x)\geq 0 \; a.e.\ x\in S\}
\]
and the corresponding partial order $h_1\preceq h_2 \Leftrightarrow h_2-h_1\in{\mathcal K}$ as usual.
We observe that ${\mathcal K}$ is closed and solid, with nonempty interior ${\mathcal K}_0$ consisting of the functions that are strictly positive a.e.; we also denote ${\mathcal K}^+:={\mathcal K}\backslash \{0\}$.
Note that in the on-going development, the components of functions $h\in\mathcal B$, that are (possibly time-dependent) functions on ${\mathbb R}^n$ and $\{\mathfrak c\}$, respectively, are differentiated as $\varphi,\psi$, or $\hat\varphi,\hat\psi$, respectively, e.g., ${h(x)\choose h(\mathfrak c)}={\varphi(t,x)\choose \psi(t)}$.
We proceed to consider the composition of maps in \eqref{eq:newmaps} and establish first the following weaker version of Theorem \ref{thm:SBEsys}:\\
\begin{theorem}\label{thm:SBEsys_weak} Assuming that the support sets $S_0,S_1$ of the two marginals $\rho_0,\rho_1$ of the uSBP are compact, the claim in Theorem \ref{thm:SBEsys} holds true.
\end{theorem}
Recall the notation $M(\cdot,\cdot),m(\cdot,\cdot),\kappa(\cdot)$ and $\Delta(\cdot)$ from Appendix \ref{sec:appendix}.
As noted in the appendix, since $M(h_1,h_2)=m(h_1^{-1},h_2^{-1})^{-1}$ for $h_1,h_2\in{\mathcal K}_0$,
both $\mathcal E_1$ and $\mathcal E_3$ are isometries. They are readily extended to isometries on $\mathcal K^+$ as well.
The map $\mathcal E_2$ is linear (homogeneous of degree $1$) and therefore, by Birkhoff's theorem given in the appendix, contractive on $\mathcal K^+$.
For the same reason, $\mathcal E_4$ is contractive.
Unfortunately, neither map is strictly contractive. To see this, note that, e.g., $\mathcal E_2 ({\star \choose 0})={\star \choose 0}$, where $\star$ denotes nonzero entries; hence certain elements on the boundary of $\mathcal K^+$ map onto the boundary and not into the interior.
In order to establish the theorem we proceed as follows. Let $z\in S_0$ be arbitrary but fixed. We modify equation \eqref{eq:SBsystem3d} of the Schr\"odinger system \eqref{eq:SBsystem3}, replacing it with
\begin{equation}\tag{\ref{eq:SBsystem3d}'} \label{eq:modification}
\tilde\psi(0)= \varphi(0,z) = \int_{S_1} q(0,z,1,x_1)\rho_1(x_1)\frac{1}{\hat\varphi(1,x_1)}dx_1+ \frac{1}{\hat\psi(1)}c_1 r(z),
\end{equation}
and, accordingly, replace $\mathcal E_2$ with a corresponding map that we refer to as $\mathcal E_2^\prime$. We then show the existence and uniqueness of the solution for the modified system. Interestingly, except for the last element of the $4$-tuple $(\hat\varphi(1,x), \hat \psi(1), \varphi(0,x), \psi(0))$, namely $\psi(0)$, the remaining entries dictate the sought solution of the original Schr\"odinger system \eqref{eq:SBsystem}. This last entry plays no role in the original Schr\"odinger system. In particular,
\begin{equation}\label{eq:equivalence}
\mathcal E_4\circ\mathcal E_3\circ\mathcal E_2\circ\mathcal E_1 = \mathcal E_4\circ\mathcal E_3\circ\mathcal E_2^\prime\circ\mathcal E_1=:{\mathcal C}.
\end{equation}
We now consider $\mathcal E_2^\prime : h\mapsto g$ and show that it is strictly contractive in the Hilbert metric.
From \eqref{eq:useful}, taking as ${\mathbf x}_0$ the function which is identically equal to $1$ on $S$ as well as on
$\{\mathfrak c\}$, we deduce that
\begin{align}\label{eq:deltabound}
\Delta(\mathcal E_2^\prime)\leq 2\sup \{\log\left(\frac{\max\{\sup_xg(x),g(\mathfrak c)\}}{\min\{\inf_xg(x),g(\mathfrak c)\}}\right)\mid g=\mathcal E_2^\prime(h) \mbox{ and }h\in {\mathcal K}_0\}.
\end{align}
Since $\rho_0,\rho_1$ are supported on compact sets $S_0,S_1$ of $\mathbb R^n$, respectively, we can choose $S$ to be compact as well. Since the kernel $q$ is positive and continuous, it is bounded from below and above on $S\times S$; that is, there exist $0<\alpha_1\le \beta_1<\infty$ such that
\begin{equation}\label{eq:ab}
\alpha_1\le q(0,x,1,y) \le \beta_1,
\end{equation}
for all $(x,y)\in S\times S$. Similarly, there exist $0<\alpha_2\le \beta_2<\infty$ such that
\begin{equation}\label{eq:ab2}
\alpha_2\le r(x) \le \beta_2
\end{equation}
for all $x\in S$.
Let $h(x)=\frac{1}{\hat\varphi(1,x)}$ and $h(\mathfrak c)=\frac{1}{\hat\psi(1)}$, then
\[
\alpha_1 \int_{S_1} \rho_1(x_1)h(x_1)dx_1\leq \int_{S_1} q(0,x_0,1,x_1)\rho_1(x_1)h(x_1)dx_1 \leq
\beta_1 \int_{S_1} \rho_1(x_1)h(x_1)dx_1, ~\forall x_0 \in S.
\]
It follows that, in view of \eqref{eq:SBsystem3c},
\[
\frac{\sup_xg(x)}{\inf_xg(x)}
\leq \frac{\max_{i\in\{1,2\}}\beta_i}{\min_{i\in\{1,2\}}\alpha_i} <\infty.
\]
Thanks to the modification \eqref{eq:modification}, $g(\mathfrak c) = g(z)$ and therefore
\[
\frac{\max\{\sup_xg(x),g(\mathfrak c)\}}{\min\{\inf_xg(x),g(\mathfrak c)\}}
\leq \frac{\max_{i\in\{1,2\}}\beta_i}{\min_{i\in\{1,2\}}\alpha_i} <\infty.
\]
Thus, from \eqref{eq:deltabound} and using Birkhoff's theorem \eqref{condiam},
\[
\kappa(\mathcal E_2^\prime)<1.
\]
As a consequence, the composition $\mathcal E_4\circ \mathcal E_3\circ \mathcal E_2^\prime\circ \mathcal E_1$ is strictly contractive, i.e.,
\[
\kappa(\mathcal E_4\circ\mathcal E_3\circ\mathcal E_2^\prime\circ\mathcal E_1)<1.
\]
It follows from \eqref{eq:equivalence} that
\[
\kappa({\mathcal C})= \kappa(\mathcal E_4\circ\mathcal E_3\circ\mathcal E_2\circ\mathcal E_1)<1.
\]
The above condition ensures that ${\mathcal C}$ has a unique fixed point in terms of the Hilbert metric \cite{chen2016entropic}. Since the Hilbert metric is a projective metric, uniqueness holds up to a constant scaling. Denote by $h$ the fixed point on the unit sphere $U$; then
\[
{\mathcal C}(h) = \lambda h
\]
for some positive number $\lambda$. We next show $\lambda=1$. To this end, we introduce a different factorization of ${\mathcal C}$ as
\[
{\mathcal C} = {\mathcal E}^\dagger \circ {\mathcal E}_{p_0} \circ {\mathcal E} \circ {\mathcal E}_{p_1},
\]
where
\begin{eqnarray*}
{\mathcal E}(u) & = & \left[\begin{matrix}
\int_{S_1} q(0,x,1,x_1)u(x_1)dx_1+ r(x)u(\mathfrak c)
\\
u(\mathfrak c)
\end{matrix}\right]
\\
{\mathcal E}_{p_0} (u) &=& \left[\begin{matrix} \frac{\rho_0(x)}{u(x)}\\0\end{matrix}\right]
\\
{\mathcal E}_{p_1}(u) &=& \left[\begin{matrix} \frac{\rho_1(x)}{u(x)}\\\frac{c_1}{u(\mathfrak c)}\end{matrix}\right],
\end{eqnarray*}
and ${\mathcal E}^\dagger$ is the adjoint operator of ${\mathcal E}$.
Clearly,
\[
\langle u, {\mathcal E}_{p_0}(u) \rangle = \langle {\mathcal E}_{p_1} (u), u\rangle = 1, ~\forall u\in {\mathcal K}_0.
\]
It follows that
\begin{eqnarray*}
1 &=& \langle {\mathcal E} \circ {\mathcal E}_{p_1}(h), {\mathcal E}_{p_0} \circ {\mathcal E}\circ{\mathcal E}_{p_1} (h)\rangle
\\
&=& \langle {\mathcal E}_{p_1} (h), {\mathcal E}^\dagger \circ {\mathcal E}_{p_0} \circ {\mathcal E}\circ{\mathcal E}_{p_1}(h)\rangle
\\
&=& \langle {\mathcal E}_{p_1} (h), {\mathcal C}(h)\rangle
\\
&=& \langle {\mathcal E}_{p_1} (h), \lambda h\rangle = \lambda.
\end{eqnarray*}
Once the fixed point $h$ is computed, the $4$-tuple $(\hat\varphi(1,x), \hat \psi(1), \varphi(0,x), \psi(0))$ can be recovered by
\[
\hat\varphi(1,x) = h(x),\; \hat\psi(1) = h(\mathfrak c),
\]
and
\[
\left[\begin{matrix} \varphi(0,\cdot)\\\psi(0)\end{matrix}\right]
=
{\mathcal E}_2\circ{\mathcal E}_1(h).
\]
The uniqueness of the $4$-tuple $(\hat\varphi(1,x), \hat \psi(1), \varphi(0,x), \psi(0))$ follows from the uniqueness of the fixed point $h$. This completes the proof of Theorem \ref{thm:SBEsys_weak}.
A standard argument \cite[Theorem 3.5]{chen2016entropic} extends the proof of Theorem \ref{thm:SBEsys} to the setting where $S_0, S_1$ are not necessarily compact.
{
\bibliographystyle{IEEEtran}
|
1,116,691,501,228 | arxiv |
\section{Introduction}
\label{sec:sn1572_hires:introduction}
\glspl*{snia}\ are of great interest. They represent some of the most extreme physical situations in stellar astronomy, control the chemical evolution of galaxies and the Universe at intermediate to late times by producing large amounts of iron-group elements, and are uniquely powerful cosmic distance probes. Despite their wide-ranging significance, however, fundamental uncertainties remain about the progenitors of these cataclysmic events.
There is general consensus that \glspl*{snia}\ are caused by the deflagration/detonation of a \gls{cowd} which is accreting material from a binary companion. Scenarios exist where the explosion can be initiated from a detonation on the surface of the star \citep{1995ApJ...452...62L, 2010A&A...514A..53F}, through runaway carbon burning in the white dwarf's interior, or through a cataclysmic merger of objects.
Observationally, two main models for this accretion process can be identified. The \gls{sds} sees the accretion process occurring through \gls{rlof} of a close nondegenerate companion (also known as a \gls{donor} star). This companion, which has undergone common-envelope evolution with the white dwarf, can be a helium, main-sequence, subgiant, or red giant star. In all cases the donor star should survive the explosion (except for possibly in the case of the helium-star donor; R.~Pakmor 2012, private communication) and remain visible post-explosion.
The second scenario is the dynamical merger of two white dwarfs, known as the \gls{dds}. In this scenario, the coevolution of two stars eventually leads to a close binary of two white dwarfs, which are able, through the emission of gravitational radiation, to merge over a wide range of times after the initial formation of the system. In most cases this would leave no remaining star \citep[e.g.,][]{2010Natur.463...61P}.
Both scenarios have support in observations and theory. The detection of circumstellar material around certain \glspl*{snia}\ \citep{2007Sci...317..924P,2009ApJ...702.1157S, 2011Sci...333..856S, 2012arXiv1203.2916F} provides evidence for the \gls{sds}. On the other hand, the lack of substantial hydrogen in the majority of other \glspl*{snia}\ \citep{2007ApJ...670.1275L} poses a challenge to the \gls{sds}.
\citet{2010ApJ...708.1025K} suggests that the interaction of the shock wave with the nondegenerate companion should result in a light excess at early times of an \gls*{snia}~light curve, which depends on the viewing angle and the companion radius. Such an excess has not yet been observed \citep{2010ApJ...722.1691H, 2011Ap&SS.tmp...40T, 2011arXiv1106.4008B,2011MNRAS.416.2607G}, which is at odds with \gls{redgiant} companions forming the majority of \glspl*{snia}. \citet{2011ApJ...730L..34J}, \citet{2011ApJ...738L...1D}, and \citet{2012ApJ...756L...4H,2012ApJ...744...69H}, however, have suggested a scenario where the white dwarf is spinning and thus can accrete above the Chandrasekhar limit. The explosion would only occur once the white dwarf had spun down sufficiently, which would give the red giant a chance to evolve and would not require the detection of the early excess in the light curve in a red giant progenitor scenario.
Population-synthesis calculations are challenging, with various authors obtaining different results for the same inputs. However, there is a general trend from these calculations that neither the single-degenerate nor the double-degenerate channel alone can provide enough systems to explain the observed \gls*{snia}\ rate \citep{2008ApJ...677L.109H,2009ApJ...699.2026R, 2010A&A...515A..89M,2010A&A...521A..85Y}. Several authors suggest that the population might comprise both single-degenerate and double-degenerate systems.
The physics of white-dwarf mergers is challenging to simulate numerically, but in the simplest calculations, these mergers lead to the formation of a neutron star via electron capture, rather than to a thermonuclear explosion \citep{1985A&A...150L..21S}. Recently, \citet{2010Natur.463...61P} have shown that for certain parameters (white-dwarf binaries with a mass ratio very close to 1) the merger may explain subluminous supernovae \citep[e.g., \sn{1991}{bg}; see][for a review]{1997ARA&A..35..309F}, although \citet{2011arXiv1101.5132D} note that the initial conditions of the system may change these conclusions.
SN~2011fe was detected only $\sim 11$\,hr after the explosion, and (with a distance of 6.4\,Mpc) is one of the closest \glspl*{snia}\ ever found \citep{2011Natur.480..344N}. \citet{2011Natur.480..344N} and \citet{2011arXiv1110.2538B} have not found any early-time light-curve excess predicted by \citet{2010ApJ...708.1025K} and thus rule out a red giant donor. Radio and X-ray observations by \citet{2011arXiv1109.2912H} show no strong signs of pre-explosion outflows, which again contradicts a red giant scenario for SN~2011fe. Additional radio measurements by \citet{2012arXiv1201.0994C} suggest a low density around SN~2011fe, which is at odds with many conventional single-degenerate scenarios. \citet{2011Natur.480..348L} have searched pre-explosion archival images and can also rule out luminous red giants and almost all helium stars as donors. In addition, \citet{2012ApJ...744L..17B} have used images believed to have been taken 4\,hr post-explosion and suggest the companion radius to be less than 0.1\,R$_\odot$. Most of these results are consistent with a main-sequence companion or white-dwarf companion.
Because it is very difficult to obtain robust constraints on the progenitor system in the immediate aftermath of a $10^{51}$\,erg explosion, an alternative is to study somewhat older and more nearby SNe that can be observed in great detail. \gls{rp04}\ have tried to directly detect donor stars in old and nearby \gls*{snia}\ remnants within the Milky Way. They have identified two historical Galactic \glspl{sn} well suited to this task --- \sn{1006}{}\ and \sn{1572}{} (Tycho's SN). Both remnants are young (1000 and 440\,yr old, respectively), nearby ($2.2\pm0.08\,\textrm{kpc}$, \citealt{2003ApJ...585..324W}; $2.8\pm0.8\,\textrm{kpc}$, \citealt{2004ApJ...612..357R}, respectively), almost certainly \glspl*{snia}\ from their observational signatures \citep{2006ApJ...645.1373B,2004ApJ...612..357R, 2008Natur.456..617K, 2008ApJ...681L..81R}, and not overwhelmed by Galactic extinction. In this paper, we will focus on \sn{1572}{}.
\gls{rp04}\ investigated most bright stars in the central region of \sn{1572}{}\ and found a star with an unusual spatial motion (\object[{[RCM2004] Tycho G}]{\hbox{Tycho-G}}\ by their nomenclature); they suggested this as a possible donor star for \sn{1572}{}. While the star has an unusual spatial motion compared to other stars in the field, its current location and proper motion place it a significant distance from the center of the supernova remnant (SNR) --- a feature that is difficult to explain if \object[{[RCM2004] Tycho G}]{\hbox{Tycho-G}}\ is connected to \snr{1572}{}.
One consequence of \gls{rlof} is a rotational velocity induced on the \gls{donor} star by tidal locking in the system. This results in an unusually large rotational velocity, related to the orbital velocity of the binary system, and it can be used to single out possible donor stars from nearby unrelated stars. \gls{wek09}\ investigated the rotation of \object[{[RCM2004] Tycho G}]{\hbox{Tycho-G}}\ but found no excess rotational velocity compared to a normal star. A comparison of \gls{wek09}'s measurements of \object[{[RCM2004] Tycho G}]{\hbox{Tycho-G}}, including a revised radial velocity $\glssymbol*{vrad}$, with Galactic kinematic models showed that it is statistically consistent with an unrelated thick/thin-disk star. However, \gls{wek09}\ noted an {\it a priori} unlikely donor-star scenario in which the star could have lost its rotational signature.
\gls{gh09}\ analyzed a spectrum of \object[{[RCM2004] Tycho G}]{\hbox{Tycho-G}}\ observed with the \gls{hires} instrument on the Keck-I 10\,m telescope, finding a $\glssymbol*{vrad}$ value consistent with \gls{wek09}'s revised $\glssymbol*{vrad}$. They also measured the stellar parameters and metallicity of \object[{[RCM2004] Tycho G}]{\hbox{Tycho-G}}, concluding that it has an unusually high nickel abundance, which they claim can be attributed to the accretion of ejecta material on the \gls{donor} star during the explosion.
In this paper we analyze \gls{hires} spectra of the six bright stars near the center of \snr{1572}{}. These spectra were taken as part of the same program that obtained the data used by \gls{gh09}, and we independently reanalyze the \object[{[RCM2004] Tycho G}]{\hbox{Tycho-G}}\ spectrum in our program. We describe the observational data and our data-reduction procedures in \S \ref{sec:sn1572_hires:observ-data-reduct}. Section \ref{sec:analysis} is divided into six subsections detailing the measurements of proper motion, radial velocity, rotation, stellar parameters, abundances, and distances, and we provide a detailed comparison between our measurements of \object[{[RCM2004] Tycho G}]{\hbox{Tycho-G}}\ and those of \gls{gh09}. In \S \ref{sec:sn1572_hires:discussion} we analyze the measurements of each star to investigate its potential association with \snr{1572}, and we present our conclusions in \S \ref{sec:sn1572_hires:conclusion}.
\section{Observations and Data Reduction}
\label{sec:sn1572_hires:observ-data-reduct}
We obtained spectra with the \gls{hires} spectrograph on the Keck-I telescope on Mauna Kea. The observations were made on 2006 September 10 and 2006 October 11 UT. Slits B5 and C1 (width 0.86\arcsec; B5 length 3.5\arcsec, C1 length 7.0\arcsec) were used, resulting in wavelength coverage of 3930--5330\,\AA, 5380--6920\,\AA, and 6980--8560\,\AA\ with $R = \lambda/\Delta\lambda \approx$ 50,000, providing us with the necessary spectral resolution and wavelength coverage to determine stellar parameters.
The spectra were reduced using the \gls{makee} package. All spectra were corrected to heliocentric velocities, using the \gls{makee} sky-line method. The spectra were not corrected for telluric absorption lines, but only regions known to be free from telluric contamination were used in the analysis to derive the stellar parameters. The final exposure times of the combined spectra for each candidate and the signal-to-noise ratio (S/N) at 5800--5900\,\AA\ are shown in Table~\ref{tab:candexpo}. Finally, we normalized the spectrum using the \gls{iraf} task \textsc{continuum}. We note that \object[{[RCM2004] Tycho C}]{\hbox{Tycho-C}}\ and \object[{[RCM2004] Tycho D}]{\hbox{Tycho-D}}\ were observed on the same slit (C1); they are separated by 2.1\arcsec, and the seeing was $\sim 0.8\arcsec$, with \object[{[RCM2004] Tycho C}]{\hbox{Tycho-C}}\ being roughly five times brighter than \object[{[RCM2004] Tycho D}]{\hbox{Tycho-D}}.
\begin{deluxetable}{lccccccc}
\tablecolumns{8}
\tablecaption{Observations \label{tab:candexpo}}
\tablehead{%
\colhead{Tycho} &
\colhead{$\alpha$(J2000)} &
\colhead{$\delta$(J2000)} &
\colhead{Date} &
\colhead{Slit} &
\colhead{$t_\textrm{exp}$} &
\colhead{$V$\tablenotemark{a}}&
\colhead{S/N\tablenotemark{b}} \\
\colhead{(Name)}&%
\colhead{(hh:mm:ss.ss)}&%
\colhead{(dd:mm:ss.ss)}&%
\colhead{(dd/mm/yy)}&%
\colhead{}&%
\colhead{(s)}&%
\colhead{(mag)}&%
\colhead{}%
\\
}
\startdata
A & 00:25:19.73 & +64:08:19.60 & 10/09/06 & B5& 900 &13.29&$ \sim 48$ \\
B & 00:25:19.95 & +64:08:17.11 & 10/09/06 & B5 & 1200 &15.41&$ \sim 45$ \\
C & 00:25:20.40 & +64:08:12.32 & 11/10/06 & C1 & 10,800&19.06\tablenotemark{c} & $ \sim 8$\\
D & 00:25:20.60 & +64:08:10.82 & 11/10/06 & C1 & 10,800 &20.70& $ \sim 3$ \\
E & 00:25:18.29 & +64:08:16.12 & 11/10/06 & C1 & 9000 &19.79& $\sim 9$\\
G & 00:25:23.58 & +64:08:02.06 & 10/09/06 \& 11/10/06 & B5\&C1 & 24,000 & 18.71& $\sim 25$ \\
\enddata
\tablenotetext{a}{Magnitudes from \gls{rp04}.}
\tablenotetext{b}{The S/N value was obtained by measuring the root-mean square of the pixels (each resolution element is sampled by 2 pixels) in continuum regions near 5800--5900\,\AA. For the purposes of measuring the stellar parameters, the spectrum was convolved and the S/N increased by a factor of 2.24.}
\tablenotetext{c}{\gls{rp04}\ notes that this is an unresolved pair, with a brighter bluer component ($V=19.38$ mag) and a fainter redder component ($V=20.53$ mag).}
\end{deluxetable}
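The factor of 2.24 in the table note matches the $\sqrt{N}$ gain expected from averaging $N \approx 5$ independent pixels, since $\sqrt{5} \approx 2.24$; this interpretation is our reading rather than a statement in the note, and the sketch below simply demonstrates the scaling on synthetic data.

```python
import numpy as np

rng = np.random.default_rng(42)

# Flat synthetic continuum: unit signal with 10% Gaussian noise per pixel
n_pix, noise = 100_000, 0.1
spec = 1.0 + rng.normal(0.0, noise, n_pix)

def snr(x):
    """Empirical signal-to-noise ratio of a flat spectrum."""
    return x.mean() / x.std()

# Average non-overlapping groups of 5 pixels: the noise drops by sqrt(5)
binned = spec.reshape(-1, 5).mean(axis=1)
gain = snr(binned) / snr(spec)              # ~ 2.24
```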
In addition, we obtained low-resolution spectra ($R \approx 1200$) of
\object[{[RCM2004] Tycho B}]{\hbox{Tycho-B}}\ with the dual-arm \gls{lris} mounted on the Keck-I telescope. The
data were taken on 2010 November 7 UT, using only the blue
arm with the 600/4000 grism and the 1\arcsec-wide slit. This resulted
in a wavelength coverage of 3200--5600\,\AA. These data
were taken to obtain a precise measurement of the surface gravity for \object[{[RCM2004] Tycho B}]{\hbox{Tycho-B}}\ using the size of the Balmer decrement \citep{2007PASP..119..605B}.
The spectrum of \object[{[RCM2004] Tycho B}]{\hbox{Tycho-B}}\ was reduced using standard techniques \citep[e.g.,][]{Foley03}. Routine CCD processing and spectrum extraction
were completed with \gls{iraf}, and the data were extracted with
the optimal algorithm of \citet{Horne86}. We obtained the wavelength
scale from low-order polynomial fits to the calibration-lamp spectra.
Small wavelength shifts were then applied to the data after measuring the offset by
cross-correlating a template sky to the night-sky lines that were
extracted with the star. Using our own \gls{idl} routines, we fit a
spectrophotometric standard-star spectrum to the data in order to flux
calibrate \object[{[RCM2004] Tycho B}]{\hbox{Tycho-B}}\ and remove telluric lines \citep{Horne86,Matheson00}.
\section{Analysis}
\label{sec:analysis}
\subsection{Astrometry}
\label{sec:propmot}
Proper motions can be used to identify potential donor stars because donor stars freely travel with their orbital velocity after the SN explosion disrupts the system. \gls{rp04}\ suggested \object[{[RCM2004] Tycho G}]{\hbox{Tycho-G}}\ as a possible donor due to its unusually high values for both the proper motion and radial velocity. For this work we measured proper motions for 201 stars within one arcminute of the remnant's center. We used archival {\it Hubble Space Telescope (HST)} images for three different epochs ({\it HST} Programs GO-9729 and GO-10098; November 2003, August 2004, May 2005), each consisting of three exposures (1\,s, 30\,s, and 1440\,s) with the F555W filter using the Advanced Camera for Surveys (ACS). The scale in each exposure is 50\,mas\,pixel$^{-1}$. This dataset results in a maximum baseline of 18 months.
We used an image from the middle epoch (2004) to establish a reference frame and oriented the pixel coordinate system with the equatorial system. We then applied a distortion correction for the F555W filter \citep{2006acs..rept....1A} and calculated transformations between all other images and the reference image. Next we used these transformations to calculate the position of all stars in the reference coordinate system, with the overall uncertainty of each position estimated.
Some faint stars were not detected in the shorter exposures and were thus excluded from proper-motion measurements. In total, 114 stars were used in the astrometric analysis.
For each star, we fit a linear regression for the stellar positions over time in the pixel coordinates (which were aligned with the equatorial system). The $x$ and $y$ data were treated as independent measurements, with separate regressions solved for each axis direction. Uncertainties were estimated using standard least-squares analysis and the individual uncertainty estimates of each object's positions.
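The per-axis fit described above reduces, for each star, to a weighted linear regression of position against epoch, with the slope giving the proper motion. A minimal sketch, using hypothetical epochs, offsets, and uncertainties rather than our measured positions, is:

```python
import numpy as np

def proper_motion(t, pos, sigma):
    """Weighted least-squares straight-line fit of position (mas)
    against epoch (yr); returns the slope (the proper motion along
    this axis, in mas/yr) and its formal 1-sigma uncertainty."""
    t = np.asarray(t, dtype=float)
    t = t - t.mean()                    # center epochs (slope unchanged)
    pos = np.asarray(pos, dtype=float)
    w = 1.0 / np.asarray(sigma, dtype=float) ** 2
    S, Sx, Sy = w.sum(), (w * t).sum(), (w * pos).sum()
    Sxx, Sxy = (w * t * t).sum(), (w * t * pos).sum()
    delta = S * Sxx - Sx ** 2
    mu = (S * Sxy - Sx * Sy) / delta
    mu_err = np.sqrt(S / delta)
    return mu, mu_err

# Three hypothetical epochs (Nov. 2003, Aug. 2004, May 2005) with x-axis
# offsets in mas and 0.5 mas positional uncertainties
epochs = [2003.85, 2004.60, 2005.35]
x_pos = [0.0, -1.8, -3.7]
x_err = [0.5, 0.5, 0.5]
mu_x, err_x = proper_motion(epochs, x_pos, x_err)   # ~ -2.47 mas/yr
```

The $y$-axis regression is identical, and the two slopes together give the proper-motion vector.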
There are three J2000 measurements of the geometric center of \sn{1572}{}\ from different datasets. \cite{1997ApJ...491..816R} used \gls{vla} data to measure the center as \rasc{00}{25}{14}{95}, \decl{+64}{08}{05.7}; \citet{2000ApJ...545L..53H} used \gls{rosat} data to measure \rasc{00}{25}{19}{0}, \decl{+64}{08}{10}{}; and \cite{2005ApJ...634..376W} used \gls{chandra} data to measure \rasc{00}{25}{19}{40}, \decl{+64}{08}{13.98}. We note that the X-ray centers agree rather well with a difference of less than 5\arcsec, but the radio center is located roughly 30\arcsec away from the X-ray centers. Thus, we believe the error in the geometric center to be rather large (of order 30\arcsec).
Table~\ref{tab:propmot} lists the proper motions and uncertainties of all stars mentioned by \gls{rp04}\ (19 stars) which were analyzed in this work, as well as the distance to the geometric X-ray\ center measured by \gls{chandra}.
We note that Tycho-2 has a relatively high proper motion, but its position in the year 1572 was 67.95\arcsec\ away from the remnant's center, and we thus exclude it as a viable candidate for the donor.
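The exclusion of Tycho-2 rests on back-extrapolating its proper motion over the $\sim$434\,yr since the explosion. A minimal sketch of that test, with hypothetical offsets and motion components rather than the measured vector, is:

```python
import numpy as np

def offset_in_1572(dx, dy, mu_x, mu_y, epoch=2006.0):
    """Back-extrapolate a star's offset from the remnant center
    (arcsec) to A.D. 1572, given its proper motion in mas/yr."""
    dt = epoch - 1572.0                     # ~434 yr baseline
    x0 = dx - mu_x * dt / 1000.0            # mas/yr * yr -> arcsec
    y0 = dy - mu_y * dt / 1000.0
    return np.hypot(x0, y0)

# Hypothetical star currently 46" from the center whose proper motion
# (75 mas/yr) carries it toward the center: in 1572 it sat ~79" away,
# far too distant to have been at the explosion site.
r_1572 = offset_in_1572(dx=46.0, dy=0.0, mu_x=-75.0, mu_y=0.0)
```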
In Figure \ref{fig:propmot_sn1572_hires}, we compare the distribution of proper motions of all measured stars to those of our candidates. All of our candidates are consistent with a normal proper-motion distribution.
\begin{deluxetable}{lccccccccc}
\tablecaption{Proper Motions of Candidates\label{tab:propmot}}
\tablecolumns{10}
\tablehead{
\colhead{Tycho} &
\colhead{$\alpha$(J2000)} &
\colhead{$\delta$(J2000)} &
\colhead{$\mu_\alpha$} &
\colhead{$\mu_\delta$} &
\colhead{$\Delta\mu_\alpha$} &
\colhead{$\Delta\mu_\delta$} &
\colhead{$\mu_l$}&
\colhead{$\mu_b$}&
\colhead{$r$}\\%
\colhead{(Name)}&%
\colhead{(hh:mm:ss.ss)}&%
\colhead{(dd:mm:ss.ss)}&%
\multicolumn{2}{c}{($\ensuremath{\textrm{mas~yr}^{-1}}$)}&
\multicolumn{2}{c}{($\ensuremath{\textrm{mas~yr}^{-1}}$)}&
\multicolumn{2}{c}{($\ensuremath{\textrm{mas~yr}^{-1}}$)}&
\colhead{(\arcsec)}\\
}
\startdata
B & 0:25:19.97 & 64:08:17.1 & -1.24 & 0.56 & 0.62 & 0.64 & -1.17 & 0.68 & 4.86\\
A & 0:25:19.73 & 64:08:19.8 & -0.09 & -0.89 & 1.17 & 0.90 & -0.18 & -0.88 & 6.21\\
A2 & 0:25:19.81 & 64:08:20.0 & -0.71 & -3.60 & 0.69 & 0.64 & -1.07 & -3.51 & 6.58\\
C & 0:25:20.38 & 64:08:12.2 & -0.21 & -2.52 & 0.65 & 0.65 & -0.46 & -2.48 & 6.66\\
E & 0:25:18.28 & 64:08:16.1 & 2.04 & 0.54 & 0.66 & 0.69 & 2.09 & 0.33 & 7.60\\
D & 0:25:20.62 & 64:08:10.8 & -1.12 & -1.99 & 1.01 & 0.86 & -1.32 & -1.87 & 8.60\\
1 & 0:25:16.66 & 64:08:12.5 & -2.27 & -1.37 & 1.60 & 1.15 & -2.40 & -1.14 & 18.00\\
F & 0:25:17.09 & 64:08:30.9 & -4.41 & 0.20 & 0.70 & 0.71 & -4.37 & 0.65 & 22.69\\
J & 0:25:15.08 & 64:08:05.9 & -2.40 & -0.25 & 0.62 & 0.62 & -2.42 & -0.00 & 29.44\\
G & 0:25:23.58 & 64:08:01.9 & -2.50 & -4.22 & 0.60 & 0.60 & -2.91 & -3.95 & 29.87\\
R & 0:25:15.51 & 64:08:35.4 & 0.28 & 0.24 & 0.89 & 0.80 & 0.30 & 0.21 & 33.23\\
N & 0:25:14.73 & 64:08:28.1 & 1.18 & 0.89 & 0.86 & 0.98 & 1.27 & 0.77 & 33.66\\
U & 0:25:19.24 & 64:07:37.9 & 0.01 & -3.04 & 0.73 & 0.75 & -0.30 & -3.03 & 36.06\\
Q & 0:25:14.81 & 64:08:34.2 & 1.45 & 3.07 & 0.64 & 0.72 & 1.75 & 2.91 & 36.19\\
T & 0:25:14.58 & 64:07:55.0 & -3.85 & 0.52 & 0.72 & 0.62 & -3.77 & 0.91 & 36.78\\
K & 0:25:23.89 & 64:08:39.3 & 0.18 & 0.17 & 0.73 & 0.69 & 0.20 & 0.15 & 38.73\\
L & 0:25:24.30 & 64:08:40.5 & 0.16 & -0.44 & 0.75 & 0.82 & 0.11 & -0.45 & 41.59\\
S & 0:25:13.78 & 64:08:34.4 & 4.16 & 0.58 & 0.83 & 0.84 & 4.20 & 0.15 & 42.09\\
2 & 0:25:22.44 & 64:07:32.4 & 74.85 & -4.43 & 0.82 & 0.83 & 74.05 & -11.94 & 46.09\\
\enddata
\end{deluxetable}
\begin{figure*}[ht!]
\centering
\includegraphics[width=\textwidth]{./propmot_distr.pdf}
\caption[Proper-motion measurement of stars in SNR~1572 using only {\it HST} images]{The contours show the distribution of proper motions (68\% and 95\% probability) for all stars measured toward the Tycho SNR --- excluding the named stars.
We show the location of the candidate stars and their uncertainties on top of this distribution. Tycho-2 (called HP-1 in \gls{wek09}) is not shown in this figure as it is an extreme outlier with $\mu_\alpha=75$\,\ensuremath{\textrm{mas~yr}^{-1}}\ and $\mu_\delta=-4.4$\,\ensuremath{\textrm{mas~yr}^{-1}}; it is also at a large distance from the remnant's geometric center (46\arcsec). We find, using the Besan\c{c}on model as a proxy, that the contamination of this sample with foreground objects (less than 2\,kpc) is less than 15\%.}
\label{fig:propmot_sn1572_hires}
\end{figure*}
\subsection{Radial Velocity}
\label{sec:radvel}
For the radial-velocity measurement we first obtained well-calibrated wavelength solutions for our spectra. \gls{makee} performs an initial wavelength calibration using arc-lamp exposures and then refines it by cross-correlating the night-sky lines for each observation and determining minor offsets. Both science objects and radial-velocity standards were reduced in the same fashion.
Each order of each star spectrum was then cross-correlated using the \gls{iraf} task \textsc{fxcor} \citep[][]{1979AJ.....84.1511T} with at least two radial-velocity standards (\object{HR6349}, \object{HR6970}, and \object{HR1283}) which had been observed on the same night. We measured the radial velocities of the standards and, comparing them to the canonical values \citep{1999ASPC..185..354S}, obtained a systematic error of $\sim 1\,\ensuremath{\textrm{km}~\textrm{s}^{-1}}$, which is negligible compared to the measured velocities.
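The cross-correlation step can be illustrated with a toy analogue of \textsc{fxcor}: on a logarithmic wavelength grid a Doppler shift becomes a constant pixel shift, so the lag of the correlation peak gives the relative velocity. The sketch below uses synthetic single-line spectra (not our data) and recovers an input shift of 87\,\ensuremath{\textrm{km}~\textrm{s}^{-1}}:

```python
import numpy as np

C_KMS = 299792.458  # speed of light in km/s

def xcorr_rv(wave, flux, twave, tflux, vmax=300.0):
    """Toy analogue of IRAF's fxcor: cross-correlate a spectrum against
    a velocity standard on a log-wavelength grid, where a Doppler shift
    is a constant pixel shift of c * d(ln lambda) per pixel."""
    lnw = np.linspace(np.log(wave[0]), np.log(wave[-1]), len(wave))
    dv = C_KMS * (lnw[1] - lnw[0])              # km/s per pixel
    f = np.interp(lnw, np.log(wave), flux) - 1.0
    g = np.interp(lnw, np.log(twave), tflux) - 1.0
    nmax = int(vmax / dv)
    lags = np.arange(-nmax, nmax + 1)
    cc = [np.sum(f[max(0, k):len(f) + min(0, k)] *
                 g[max(0, -k):len(g) - max(0, k)]) for k in lags]
    return lags[int(np.argmax(cc))] * dv

# Synthetic standard: one absorption line at 5500 A; the "object" is the
# same line redshifted by 87 km/s (a value like Tycho-G's velocity)
twave = np.linspace(5480.0, 5520.0, 4000)
tflux = 1.0 - 0.5 * np.exp(-0.5 * ((twave - 5500.0) / 0.1) ** 2)
center = 5500.0 * (1.0 + 87.0 / C_KMS)
oflux = 1.0 - 0.5 * np.exp(-0.5 * ((twave - center) / 0.1) ** 2)
rv = xcorr_rv(twave, oflux, twave, tflux)       # ~ +87 km/s
```

In practice \textsc{fxcor} also fits the correlation peak to sub-pixel precision; the integer-lag version above is accurate only to the velocity step of the grid.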
The radial velocity of \object[{[RCM2004] Tycho B}]{\hbox{Tycho-B}}\ was measured in the course of determining the stellar parameters of \object[{[RCM2004] Tycho B}]{\hbox{Tycho-B}}\ with the stellar parameter fitting package \gls{sfit}. The \gls{sfit} result consistently gives $v_\textrm{helio} = -52\,\ensuremath{\textrm{km}~\textrm{s}^{-1}}$ for different stellar parameters with an uncertainty of $\sim 2\,\ensuremath{\textrm{km}~\textrm{s}^{-1}}$.
In Table~\ref{tab:radvel} we list all of the radial velocities both in a heliocentric frame and a \gls{lsr} frame. We will be referring to the heliocentric measurements henceforth. The listed uncertainty is the standard deviation of the radial-velocity measurement of all orders added in quadrature to the error of the radial-velocity standards.
In Figure \ref{fig:dist_vr} we compare the radial velocity of our sample stars to radial velocities of stars in the direction of Tycho's SNR using the Besan\c{c}on Model \citep{2003A&A...409..523R}. The distances as well as their uncertainties are taken from \S \ref{sec:distance}. The candidates' radial velocities are all typical for their distances. Finally, we note that the measurement of \object[{[RCM2004] Tycho G}]{\hbox{Tycho-G}}\ is consistent with the results of \gls{wek09}\ and \gls{gh09}.
\begin{deluxetable}{lcccc}
\tablecaption{Radial Velocities of Candidates\label{tab:radvel}}
\tablecolumns{5}
\tablehead{
\colhead{Tycho} &
\colhead{Date} &
\colhead{$v_\textrm{helio}$} &
\colhead{$v_\textrm{LSR}$} &
\colhead{$\sigma$} \\%
\colhead{(Name)}&%
\colhead{(dd/mm/yy)}&%
\colhead{(\ensuremath{\textrm{km}~\textrm{s}^{-1}})}&%
\colhead{(\ensuremath{\textrm{km}~\textrm{s}^{-1}})}&%
\colhead{(\ensuremath{\textrm{km}~\textrm{s}^{-1}})}%
}
\startdata
\object[{[RCM2004] Tycho A}]{\hbox{Tycho-A}} & 09/09/06 & $-36.79$ & $-28.50$ & 0.23 \\
\object[{[RCM2004] Tycho B}]{\hbox{Tycho-B}} & 09/09/06 & $-52.70$ & $-44.41$ & $\sim 2$ \\
\object[{[RCM2004] Tycho C}]{\hbox{Tycho-C}} & 11/10/06 & $-58.78$ & $-50.49$ & 0.75 \\
\object[{[RCM2004] Tycho D}]{\hbox{Tycho-D}} & 11/10/06 & $-58.93$ & $-50.64$ & 0.78 \\
\object[{[RCM2004] Tycho E}]{\hbox{Tycho-E}}\tablenotemark{a} & 11/10/06 & $-64.20$ & $-55.91$ & 0.27 \\
\object[{[RCM2004] Tycho G}]{\hbox{Tycho-G}} & 09/09/06 & $-87.12$ & $-78.83$ & 0.25 \\
\object[{[RCM2004] Tycho G}]{\hbox{Tycho-G}} & 11/10/06 & $-87.51$ & $-79.22$ & 0.78
\enddata
\tablenotetext{a}{There seems to be a discrepancy between \gls{rp04}\ (who measured $v_\textrm{LSR} = -26$\,\ensuremath{\textrm{km}~\textrm{s}^{-1}}) and this work, which might be a hint of a binary system.}
\end{deluxetable}
\begin{figure*}[ht!]
\centering
\includegraphics[width=\textwidth]{./dist_vr.pdf}
\caption{The contours indicate $1\sigma$, $2\sigma$, and $3\sigma$ levels of the distance and radial velocity using the Besan\c{c}on Model \citep{2003A&A...409..523R} with $\sim$ 60,000 stars in the direction of SN~1572 (only including stars with $10 < V < 20$ mag and with a metallicity of [Fe/H] $> -1$ for the filled contours and [Fe/H] $>-0.2$ for the dashed contours). We have overplotted our candidate stars with error bars. Note that the uncertainties in distance are a marginalized approximation of the error; the full error surfaces can be seen in Figure~\ref{fig:mc_isochrone}. The vertical gray band shows the error range for the distance to SNR~1572.}
\label{fig:dist_vr}
\end{figure*}
\subsection{Rotational Velocity}
\label{sec:rotation}
We have measured projected rotational velocities ($\glssymbol*{vrot} \sin{i}$) of all stars except \object[{[RCM2004] Tycho B}]{\hbox{Tycho-B}}\ in the fashion described by \gls{wek09}. We selected several unblended and strong (but not saturated) \ion{Fe}{1} lines in the stellar spectra and co-added them after shifting them to a common wavelength and scaling them to the same \gls{ew}. This was done to improve the S/N for the faint stars as well as to provide consistency across all stars.
As a reference we created three synthetic spectra for each star (one broadened only with the instrumental profile, the others with the instrumental profile and a $\glssymbol*{vrot}\sin{i}$\ of 10 and 13\,\ensuremath{\textrm{km}~\textrm{s}^{-1}}, respectively) with the 2010 version of \gls{moog}, using our derived temperature, surface gravity, and metallicity. As input data to \gls{moog} we used the \citet{2004astro.ph..5087C} atmospheric models and a line list from \citet{1995KurCD..23.....K}. We then applied the same process of line selection and adding as for the lines in the observed spectra.
Figure \ref{fig:sn1572_hires:rotvel} shows the comparison between the synthetic spectra with different rotational velocities and the observed spectra. This comparison indicates that the stellar broadening (rotational, macroturbulence, etc.) is less than broadening due to the instrumental profile of 6\,\ensuremath{\textrm{km}~\textrm{s}^{-1}}\ for each star. We adopt 6\,\ensuremath{\textrm{km}~\textrm{s}^{-1}}\ as an upper limit to the rotation for all stars.
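The broadening being tested can be emulated by convolving the line depth with a classical rotational kernel (linear limb darkening) on top of the Gaussian instrumental profile. The sketch below uses illustrative values (6\,\ensuremath{\textrm{km}~\textrm{s}^{-1}}\ instrumental FWHM, $\glssymbol*{vrot}\sin{i} = 13$\,\ensuremath{\textrm{km}~\textrm{s}^{-1}}, limb-darkening coefficient 0.6), not our \gls{moog} setup:

```python
import numpy as np

def rot_kernel(dv, vsini, epsilon=0.6):
    """Classical rotational broadening kernel (linear limb darkening)
    sampled on a velocity grid with step dv (km/s)."""
    n = int(np.ceil(vsini / dv))
    vv = np.arange(-n, n + 1) * dv
    x2 = np.clip(1.0 - (vv / vsini) ** 2, 0.0, None)
    k = 2.0 * (1.0 - epsilon) * np.sqrt(x2) + 0.5 * np.pi * epsilon * x2
    return k / k.sum()

def fwhm(v, profile):
    """Full width at half the maximum line depth, in km/s."""
    depth = 1.0 - profile
    half = 0.5 * depth.max()
    above = v[depth >= half]
    return above[-1] - above[0]

# Line whose only broadening is a 6 km/s (FWHM) Gaussian instrumental profile
v = np.arange(-60.0, 60.0 + 0.25, 0.25)
sigma_inst = 6.0 / (2.0 * np.sqrt(2.0 * np.log(2.0)))
line = 1.0 - 0.6 * np.exp(-0.5 * (v / sigma_inst) ** 2)

# The same line additionally rotationally broadened with vsini = 13 km/s
rot_depth = np.convolve(1.0 - line, rot_kernel(0.25, 13.0), mode="same")
broadened = 1.0 - rot_depth
```

A 13\,\ensuremath{\textrm{km}~\textrm{s}^{-1}}\ rotation more than triples the line width in this toy example, which is why rotation at that level would be detectable despite the 6\,\ensuremath{\textrm{km}~\textrm{s}^{-1}}\ instrumental profile.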
\begin{figure*}[ht!]
\begin{tabular}{cc}
\includegraphics[width=0.45\textwidth, trim=130 30 60 0]{./ stara_rotation.pdf} &
\includegraphics[width=0.45\textwidth, trim=130 30 60 0]{./ starc_rotation.pdf} \\
\includegraphics[width=0.45\textwidth, trim=130 30 60 0]{./ stare_rotation.pdf} &
\includegraphics[width=0.45\textwidth, trim=130 30 60 0]{./ starg_rotation.pdf} \\
\end{tabular}
\caption[Rotation measurement for all candidate stars in SN~1572]{The figures show the combination of iron line profiles after normalization to the same EW and compare them to synthetic line profiles created by \glsentryname{moog}. We convolved the synthetic lines first with a rotational kernel with three different values for rotation and then with the instrumental profile. All stars show rotation less than 6\,\ensuremath{\textrm{km}~\textrm{s}^{-1}}, which is equal to the instrumental profile at this resolution. }
\label{fig:sn1572_hires:rotvel}
\end{figure*}
Because of its high temperature and rapid rotation, we fit the rotational velocity of \object[{[RCM2004] Tycho B}]{\hbox{Tycho-B}}\ with the program \gls{sfit} \citep[][described in \S \ref{sec:stellar-parameters}]{2001A&A...376..497J} as part of the overall fit for this star's stellar parameters. We find $\glssymbol*{vrot} \sin{i}=171^{+16}_{-33}$\,\ensuremath{\textrm{km}~\textrm{s}^{-1}}. While \object[{[RCM2004] Tycho B}]{\hbox{Tycho-B}}'s rotation is very high compared to that of the other candidate stars, such rapid rotation is not unusual for stars of this temperature and surface gravity. In summary, other than \object[{[RCM2004] Tycho B}]{\hbox{Tycho-B}}, none of the stars show rotation which is measurable at this resolution.
\subsection{Stellar Parameters and Chemical Abundances}
\label{sec:stellar-parameters}
The stellar parameters are presented in Table~\ref{tab:param} and were determined using
a traditional spectroscopic approach.
Because of its high temperature, the stellar parameters of \object[{[RCM2004] Tycho B}]{\hbox{Tycho-B}}\
were measured by direct comparison to models in a separate procedure described later in this subsection.
The first step in the spectroscopic analysis was to rectify the continuum. For each order, we
fit the continuum, by eye, using a low-order polynomial function within the \textsc{continuum}
task in \gls{iraf}. To help identify continuum regions in the program stars, we made use of the
Arcturus and solar spectra \citep{2000IAUJD...1E..26H}. The moderate
S/N required particular care: we avoided fitting the continuum to the
highest points, since at these values of the S/N such points are
likely noise rather than true continuum regions.
Next, equivalent widths (EWs) for a set of Fe and Ni
lines were measured using routines in \gls{iraf}.
The $\log\,{gf}$ values for the \ion{Fe}{1}\ lines were from
the laboratory measurements by the Oxford group
\citep[e.g., ][ and references therein]{1979MNRAS.186..633B,1979MNRAS.186..657B,1980MNRAS.191..445B,1986MNRAS.220..549B,1995A&A...296..217B} and the \ion{Fe}{2}\ lines were from the measurements by \citet{1991A&A...249..539B}.
For Ni, the $\log\,{gf}$ values were taken from the compilation
by \citet[][henceforth Reddy03]{2003MNRAS.340..304R}
and \citet[][henceforth RC02]{2002AJ....123.3277R}.
While these EW measurement
routines employ Gaussian fits in a semi-automated manner, we
emphasize that all EWs were visually checked on at least two occasions.
We also required that lines have an EW of at least 10\,m\AA\ to avoid measuring noise
and less than $\sim 150$\,m\AA\ to avoid saturated lines with non-Gaussian profiles
that may lie on the flat part of the curve-of-growth.
Table~\ref{tab:ew} shows the EWs measured for the program stars.
Missing values indicate that the line was not detected or that no reliable measurement
could be obtained.
In the following subsection, we consider in more detail the uncertainties that
arise from continuum placement and EW errors.
We used the 2011 version \citep{2011AJ....141..175S} of the
local thermodynamic equilibrium (LTE) stellar line analysis program
MOOG \citep{1973ApJ...184..839S}
and LTE model atmospheres from the \citet{2003IAUS..210P.A20C} grid to derive an
abundance for a given line. The effective temperature, \glssymbol*{teff}, was adjusted until the
abundances from \ion{Fe}{1}\ lines displayed no trend as a function of
lower excitation potential, $\chi$.
The surface gravity, $\log\,g$,
was adjusted until the abundances from \ion{Fe}{1}\ and \ion{Fe}{2}\ lines were
in agreement. The microturbulent velocity, $\xi_t$, was adjusted until there
was no trend between the abundances from the \ion{Fe}{1}\ lines and the reduced
EW, log\,(EW/$\lambda$). This process was iterated until self-consistent
stellar parameters were obtained for each star.
In our analysis, we explored stellar parameters at discrete values.
For effective temperature, we considered values at every 25\,K (e.g., 6000, 6025\,K, etc.);
for surface gravity, we considered values at every 0.05\,dex (e.g., 4.00, 4.05\,dex, etc.);
and for $\xi_t$, we considered values at every 0.01\,\ensuremath{\textrm{km}~\textrm{s}^{-1}}\ (e.g., 1.70,
1.71\,\ensuremath{\textrm{km}~\textrm{s}^{-1}}, etc.). We assumed that excitation equilibrium was satisfied when
the slope between $\log\,\epsilon$(\ion{Fe}{1}) and lower excitation potential ($\chi$) was $\le 0.004$ in absolute value.
We assumed that ionization equilibrium was achieved when $|\log\,{\epsilon}($\ion{Fe}{1}$) -
\log\,\epsilon($\ion{Fe}{2}$)| \le 0.02$\,dex. The microturbulent velocity was set when
the absolute slope between $\log\,\epsilon($\ion{Fe}{1}$)$ and reduced EW, log\,(EW/$\lambda$), was $\le 0.004$.
We found a unique solution for all program stars.
We estimate that the internal uncertainties are
typically \glssymbol*{teff}\ $\pm$ 100\,K, log\,$g$ $\pm$ 0.3\,dex, and $\xi_t$ $\pm$ 0.3 \ensuremath{\textrm{km}~\textrm{s}^{-1}}.
For further details regarding the derivation of stellar parameters, see \citet{2008ApJ...673..854Y}.
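The excitation-equilibrium step above amounts to adjusting \glssymbol*{teff}\ until the line-by-line \ion{Fe}{1}\ abundances show no trend with $\chi$. A schematic of that iteration on a toy abundance model (the $10^{-4}$\,dex\,K$^{-1}$\,eV$^{-1}$ sensitivity and the bisection search are invented for illustration, not taken from our analysis) is:

```python
import numpy as np

CHI = np.linspace(0.0, 5.0, 40)   # lower excitation potentials (eV)

def line_abundances(teff_trial, teff_true=5900.0, a_true=7.46):
    """Toy model: a wrong trial Teff tilts the line-by-line Fe I
    abundances against chi; the 1e-4 dex/(K eV) sensitivity is
    invented purely for illustration."""
    return a_true + 1.0e-4 * (teff_true - teff_trial) * CHI

def solve_teff(lo=4500.0, hi=7500.0, tol=0.004):
    """Bisect on Teff until the abundance-vs-chi slope is flat,
    mimicking the excitation-equilibrium criterion in the text."""
    while True:
        teff = 0.5 * (lo + hi)
        slope = np.polyfit(CHI, line_abundances(teff), 1)[0]
        if abs(slope) <= tol:
            return teff
        if slope > 0:        # abundances rise with chi: trial Teff too low
            lo = teff
        else:
            hi = teff

teff_best = solve_teff()      # lands within +/-40 K of the "true" 5900 K
```

The full procedure additionally iterates this step with the analogous conditions on $\log\,g$ (ionization equilibrium) and $\xi_t$ (no trend with reduced EW) until all three converge simultaneously.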
The final iron measurements are the average of \ion{Fe}{1}\ and \ion{Fe}{2}, weighted
by the number of lines measured for each species.
We adopted the solar abundances of \citet{2009ARA&A..47..481A}.
In addition, we measured element abundance ratios for Ni via EW analysis
and Li (only for \object[{[RCM2004] Tycho G}]{\hbox{Tycho-G}}) via spectrum synthesis (see Figure ~\ref{fig:li_synth}).
For the Li spectrum synthesis, we used
the \citet{2002MNRAS.335.1005R} line list in combination with
MOOG and the \citet{2003IAUS..210P.A20C} model atmospheres. A non-LTE (NLTE) analysis \citep{2009A&A...503..541L} of the Li abundances ($A$(Li)$_{\rm NLTE}$ = 2.45) yields nearly the same result as the LTE abundance ($A$(Li) = 2.46). Abundances are presented in Table~\ref{tab:parvar}.
Tycho-B's abundances are not presented in the table as they were measured in a different fashion.
In summary, the inferred metallicities show that all candidates are of roughly solar metallicity with the exception of the metal-poor \object[{[RCM2004] Tycho C}]{\hbox{Tycho-C}}. The range of metallicities spanned by the program stars is compatible with membership in the thin disk. Based on metallicity alone, we do not regard any of the program stars to be unusually metal-poor or metal-rich. Additionally, we have compared the [Ni/Fe] abundance ratio to a well-calibrated set of F- and G-dwarf abundances \citep{2005A&A...433..185B}, which we calibrated to the solar abundances of \citet{2009ARA&A..47..481A}. Figure~\ref{fig:bensby05} shows that all program stars are consistent with stars of similar metallicity. We do note that \object[{[RCM2004] Tycho C}]{\hbox{Tycho-C}}\ is a marginal outlier (perhaps $1\sigma$) with a low [Ni/Fe] abundance ratio, but do not regard this to be significant. To avoid selection effects we compared \object[{[RCM2004] Tycho C}]{\hbox{Tycho-C}}\ to a sample of giant stars \citep{2007AJ....133.2464L}, which gives a result similar to that of the comparison with \citet{2005A&A...433..185B}.
\begin{figure*}[ht!]
\centering
\includegraphics[width=1\textwidth, trim=0 0 0cm 0, clip]{./abund_bensby05.pdf}
\caption[Comparison of nickel and iron abundance measurements of stars in SN~1572]{The background gray error bars are F- and G-dwarf abundances from \citet{2005A&A...433..185B}. All candidate stars are consistent with that distribution. \object[{[RCM2004] Tycho C}]{\hbox{Tycho-C}}\ can be seen as an outlier, but it is a K-giant and its class is not represented in the underlying F- and G-dwarf distribution.}
\label{fig:bensby05}
\end{figure*}
Because \object[{[RCM2004] Tycho B}]{\hbox{Tycho-B}}\ has a temperature greater than 9000\,K and is rapidly rotating, the process described above cannot be used to measure its stellar parameters. Instead we used the program \gls{sfit} to match the \gls{hires} spectrum to a grid of model spectra. To determine the stellar parameters for \object[{[RCM2004] Tycho B}]{\hbox{Tycho-B}}\ we have used a model grid with $\glssymbol*{feh}= -1.0$, $8000 < \glssymbol*{teff} <$ 16,000\,K, and $2 < \log\,{g} < 7$. This low metallicity is suggested by the very weak \ion{Ca}{2}~K and \ion{Mg}{2} lines, but it is difficult to measure. We cannot measure helium directly in this spectrum and thus adopt $N$(He) = 0.1, as this is empirically a very common helium abundance in stars.
This analysis resulted in $\glssymbol*{teff} = 10,000^{+ 400}_{-200}\,\textrm{K}$, $\log\,{g} = 3.67$ with slope $\partial \log\,g/\partial \glssymbol*{teff} = 0.27/500\,\textrm{K}^{-1}$, and rotational velocity $\glssymbol*{vrot} \sin{i} = 171$\,\ensuremath{\textrm{km}~\textrm{s}^{-1}}\ with slope $\partial \glssymbol*{vrot} \sin{i} / \partial \glssymbol*{teff} = -41/500\,\ensuremath{\textrm{km}~\textrm{s}^{-1}}\,\textrm{K}^{-1}$. From qualitative analysis this object seems metal poor (e.g., in comparison to stars of similar stellar parameters but solar metallicity), but its high rotation and temperature make it hard to determine this parameter precisely. For the present, we assume [Fe/H] = $-1.0$ unless otherwise noted.
In addition, using the high-resolution spectrum, we measured the \glspl{ew} of several lines predicted to be strong in the \gls{vald}. The abundances were deduced from the \glspl{ew} using a model atmosphere having $T_{\rm eff} =$ 10,000\,K, $\log\,{g} = 3.67$, and [Fe/H] = $-1.0$ (see Table~\ref{tab:starb-abund}).
One caveat regarding these abundances is the use of \glspl{ew} from
single lines with large rotational broadening, since the effect of blending
with nearby weak lines cannot be taken into account. A second is that these
abundances invariably rely on the strongest lines, which are precisely those
most susceptible to departures from LTE.
Nevertheless, they do confirm the earlier impression that the star is
metal-poor, and justify the adoption of \glssymbol*{feh}\ = $-1.0 \pm 0.4$.
\begin{deluxetable}{lcccccc}
\tablecaption{Tycho-B abundances\label{tab:starb-abund}}
\tablecolumns{7}
\tablehead{
\colhead{Ion} &
\colhead{$\lambda$}&
\colhead{$W_\lambda$} &
\colhead{$\epsilon$}&
\colhead{$[X/H]$} &
\colhead{$\frac{\partial \epsilon}{\partial \log\,g}$} &
\colhead{$\frac{\partial \epsilon}{\partial \glssymbol*{teff}}$}\\
\colhead{designation} &
\colhead{(\AA)} &
\colhead{(\AA)} &
\colhead{(dex)} &
\colhead{(dex)}&
\colhead{} &
\colhead{(K$^{-1}$)}\\
}
\startdata
\ion{Mg}{2} & 4481.13+4481.33 & $220\pm15$ & $6.18\pm0.08$ & $-1.40$ & $0.08$ & $8\times10^{-5}$ \\
\ion{Si}{2} & 6347.1 & $140\pm5$ & $6.96\pm0.18$ & $-0.59$ & $-0.02$ & $1\times10^{-4}$\\
\ion{O}{1} & 7771.9+7774.2+7775.4 & $460\pm30$ & $8.43\pm0.10$ & $-0.58$ & $0.24$ & $-4\times10^{-5}$
\enddata
\end{deluxetable}
As a second approach to determining the stellar parameters of \object[{[RCM2004] Tycho B}]{\hbox{Tycho-B}}\ we used the low-resolution spectrum obtained with \gls{lris}. The observed wavelength range of \gls{lris} was chosen to be centered on the Balmer jump, as this feature is sensitive to the surface gravity \citep{2007PASP..119..605B}. We fitted the spectrum to a grid of model spectra \citep[]{2005A&A...442.1127M} using a spectrum-fitting tool described below. The final grid covered $\log\,{g}$ from 3.5 to 4.5 in steps of 0.5 and \gls{teff} from 9000 to 12,000\,K in steps of 500\,K. In addition, we expanded the grid by reddening the spectra with the \textsc{pysynphot}\footnote{The \textsc{pysynphot} package is a product of the Space Telescope Science Institute, which is operated by AURA for NASA.} package. We also added diffuse interstellar bands \citep{1937PASP...49..224B, 1966ZA.....64..512H, 1967IAUS...31...85H, 1975ApJ...196..129H, 1995ARA&A..33...19H, 1994dib..nasa...31H, 1994A&AS..106...39J, 1958ApJ...128...57W} to the synthetic spectra, scaled with reddening. The grid's $E(B-V)$ values ranged from 0.5 to 1.3 mag in steps of 0.2 mag. We assumed a rotation of 171\,\ensuremath{\textrm{km}~\textrm{s}^{-1}}\ in the grid (see \S \ref{sec:rotation}).
\begin{figure*}[ht!]
\includegraphics[width=1.\textwidth]{./starb_spec_comp.pdf}
\caption[Fit of low-resolution spectrum of Tycho-B]{The plot shows the normalized spectrum of \object[{[RCM2004] Tycho B}]{\hbox{Tycho-B}}\ together with the fit that excluded the spectral region 3800--4500\,\AA\ (Best Fit 1) and the fit that included this problematic region (Best Fit 2). The region is marked with a gray shade.}
\label{fig:starb_spec_comp}
\end{figure*}
We used $\chi^2$ as a figure of merit in our fitting procedure. To find the best fit for \object[{[RCM2004] Tycho B}]{\hbox{Tycho-B}}\ we used the \gls{migrad} algorithm provided by \gls{minuit} and linearly interpolated between the grid points using \textsc{LinearNDInterpolator} provided by the \gls{scipy} package. The fit of \object[{[RCM2004] Tycho B}]{\hbox{Tycho-B}}\ results in \glssymbol{teff} = 10,570\,K, $\log\,g = 4.05$, \glssymbol*{feh}\ = $-1.1$, and $E(B-V) = 0.85$\,mag. The model fits the observed spectrum poorly in the wavelength region 3800--4280\,\AA\ (see Figure \ref{fig:starb_spec_comp}). The adopted mixing-length parameter in one-dimensional (1D) model atmospheres, used to construct the spectral grid, influences the fluxes in that region and affects the hydrogen line profiles. \citet{2002A&A...392..619H} and others show that a mixing length of 0.5, rather than the 1.25 used in the Kurucz/Munari grid, better fits the violet fluxes and the H line profiles. Spectra using a mixing-length parameter of 0.5 are brighter in the ultraviolet, and the H$\delta$, H$\gamma$, and H$\beta$ profiles give the same \gls{teff} as the \gls{halpha} profiles. We have chosen, however, to fit the spectrum and ignore the problematic spectral region (3800--4280\,\AA) to avoid a systematic error. This yields $\glssymbol*{teff} = 10,722$\,K, $\log\,g=4.13$, $\glssymbol*{feh}\ = -1.1$, and $E(B-V) = 0.86$\,mag. The differences are indicative of the size of systematic errors in the model fits. We adopt the fit excluding the problematic wavelength region in the subsequent analysis. Exploring the complex search space, we estimate the uncertainties to be $\Delta\glssymbol*{teff} = 200$\,K, $\Delta\log\,g = 0.3$, and $\Delta\glssymbol*{feh} = 0.5$, and we note that the parameters are correlated.
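The grid-fitting machinery just described can be sketched as follows. For a self-contained example we replace the model spectra with a three-pixel toy function and \gls{migrad} with \textsc{scipy}'s Nelder--Mead minimizer, so everything here apart from \textsc{LinearNDInterpolator} itself is illustrative rather than the actual pipeline:

```python
import numpy as np
from scipy.interpolate import LinearNDInterpolator
from scipy.optimize import minimize

# Toy model grid over (Teff, logg); each node carries a 3-pixel "spectrum".
nodes = np.array([(t, g) for t in (9000.0, 10000.0, 11000.0, 12000.0)
                         for g in (3.5, 4.0, 4.5)])

def toy_model(t, g):
    # purely illustrative fluxes, linear in both parameters
    return np.array([1.0 + 1e-4 * (t - 10500.0), 0.5 + 0.1 * (g - 4.0), 0.8])

fluxes = np.array([toy_model(t, g) for t, g in nodes])

# linear interpolation between grid points, as in the text
interp = LinearNDInterpolator(nodes, fluxes)

obs = toy_model(10570.0, 4.05)          # pretend observation
err = np.full_like(obs, 0.01)

def chi2(p):
    model = interp(p[0], p[1])
    if np.any(np.isnan(model)):         # stepped outside the grid hull
        return 1e9
    return float(np.sum(((obs - model) / err) ** 2))

res = minimize(chi2, x0=[10000.0, 4.0], method="Nelder-Mead")
print(res.x)
```

Because the toy model is linear, the piecewise-linear interpolation is exact and the minimizer recovers the input parameters; with a real grid, the interpolation error contributes to the systematic uncertainties discussed above.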
\include{./param}
\include{./tab_parvar}
\subsection{Tycho-G: A Detailed Comparison with GH09}
\label{sec:tychog_comp}
GH09 suggested that \object[{[RCM2004] Tycho G}]{\hbox{Tycho-G}}\ is a plausible donor star, with the primary evidence
consisting of an unusually high Ni abundance and a high
space velocity (radial velocity and proper motion).
In this subsection, we focus on the Ni abundance, and we refer the reader to
\S 3.1, 3.2, and 4 on the proper motion and radial velocity.
The measured values are
[Ni/Fe] = 0.16 $\pm$ 0.04 and 0.07 $\pm$ 0.04 for GH09 and this study, respectively, from the same HIRES spectra.
The magnitude of the difference is 0.09 dex, and it is
significant at the $\sim 1.5\sigma$ level.
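The quoted significance follows from combining the two measurement uncertainties in quadrature; a minimal check of that arithmetic:

```python
import math

# [Ni/Fe] from GH09 and from this study, with their quoted uncertainties
gh09, this_study = (0.16, 0.04), (0.07, 0.04)

diff = gh09[0] - this_study[0]              # 0.09 dex
sigma = math.hypot(gh09[1], this_study[1])  # quadrature sum of the errors
significance = diff / sigma                 # ~1.6, i.e., the ~1.5 sigma quoted
print(f"{diff:.2f} dex difference = {significance:.1f} sigma")
```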
While our [Ni/Fe] ratio in \object[{[RCM2004] Tycho G}]{\hbox{Tycho-G}}\ is lower than that measured by
GH09, our value does not represent a substantial revision given the
measurement uncertainties involved. Nevertheless, our [Ni/Fe]
measurement and comparison with the literature do not support an unusually high
Ni abundance, and we conclude that \object[{[RCM2004] Tycho G}]{\hbox{Tycho-G}}\ does not show any obvious chemical signature
that one may seek to attribute to a supernova companion star.
In order to identify the origin of the difference in [Ni/Fe] ratios,
we now compare our stellar parameters and chemical abundances to those of GH09.
Both studies determined stellar parameters and chemical
abundances in a similar manner, from
a standard spectroscopic EW analysis using 1D LTE Kurucz model atmospheres and
the MOOG stellar line analysis software. Our analysis employed more recent
versions of both tools. The first test we can perform is to use the GH09 line list
and stellar parameters but with our tools --- namely,
the 2011 version of MOOG \citep{2011AJ....141..175S, 1973ApJ...184..839S}
and the \citet{2003IAUS..210P.A20C} model atmospheres.
Adopting this approach, we obtain
$\log\,\epsilon$(\ion{Fe}{1}) = 7.38 ($\sigma$ = 0.13),
$\log\,\epsilon$(\ion{Fe}{2}) = 7.42 ($\sigma$ = 0.10), and
$\log\,\epsilon$(\ion{Ni}{1}) = 6.33 ($\sigma$ = 0.19).
These values are in very good agreement with those of GH09, who obtained
$\log\,\epsilon$(\ion{Fe}{1}) = 7.42 ($\sigma$ = 0.12),
$\log\,\epsilon$(\ion{Fe}{2}) = 7.42 ($\sigma$ = 0.10), and
$\log\,\epsilon$(\ion{Ni}{1}) = 6.36 ($\sigma$ = 0.19).
Thus, we argue that any abundance differences (for Fe and Ni) between the two studies,
exceeding the $\sim$0.04 dex level, cannot be attributed to differences
in the model-atmosphere grid and/or line-analysis software.
Our stellar parameters (\glssymbol*{teff} = $5900 \pm 100$\,K, $\log\,g = 3.85
\pm 0.30$, [Fe/H] = $-0.05 \pm 0.09$) are in good agreement with
those of GH09 (\glssymbol*{teff} = $6000 \pm 100$\,K, $\log\,g = 4.00 \pm 0.30$,
[Fe/H] = $-0.13 \pm 0.13$). The second test we can perform is to
determine chemical abundances using (i) the GH09 stellar parameters but with
our line list and (ii) our stellar parameters and line list.
Comparing case (ii) with case (i), in the sense (ii) minus (i), we find
$\Delta\log\,\epsilon$(\ion{Fe}{1}) = 0.10,
$\Delta\log\,\epsilon$(\ion{Fe}{2}) = 0.02, and
$\Delta\log\,\epsilon$(\ion{Ni}{1}) = 0.08.
Adopting the same solar abundances and method for determining the
average [Fe/H] value (average of \ion{Fe}{1}\ and \ion{Fe}{2}\ weighted by the number of lines)
as in the present study, we find $\Delta$[Ni/Fe] = 0.00.
We argue that while there are abundance differences for
$\log\,\epsilon(X)$ at the $\sim 0.10$ dex level, the [Ni/Fe] ratio
remains unchanged, and therefore
any differences in the [Ni/Fe] ratio between the two studies
cannot be attributed to differences in the adopted stellar parameters.
The solar abundances for Fe and Ni differ between the two studies.
\gls{gh09}\ adopt 7.47 and 6.25 for Fe and Ni, respectively, while we
use 7.50 and 6.22 \citep[from ][]{2009ARA&A..47..481A}. Had we used the GH09 solar abundances,
we would have obtained a ratio [Ni/Fe] = 0.01. Therefore,
the different solar abundances adopted by the two studies only serve to
decrease the discrepancy in the [Ni/Fe] ratio --- that is, any difference
in [Ni/Fe] cannot be attributed to the solar abundances.
The next series of comparisons we can perform concern the line lists.
We measured Fe and Ni abundances using
the GH09 line list but with our stellar parameters and find
$\log\,\epsilon$(\ion{Fe}{1}) = 7.42 ($\sigma$ = 0.12),
$\log\,\epsilon$(\ion{Fe}{2}) = 7.42 ($\sigma$ = 0.10), and
$\log\,\epsilon$(\ion{Ni}{1}) = 6.36 ($\sigma$ = 0.19).
Table~\ref{tab:ni_comparison} gives a comparison of all tests performed.
Adopting the same approach as before regarding the solar abundances and
metallicity yields a ratio of [Ni/Fe] = 0.22, a value
that exceeds both our measurement and that of GH09.
We therefore speculate that the difference in [Ni/Fe] between the two
studies is driven primarily by differences in the line list.
In particular, we note that while the \ion{Fe}{1}\ and \ion{Fe}{2}\ abundances
are in fair agreement with our value and GH09, it is the Ni abundance,
$\log\,\epsilon$(Ni), that shows a large difference between the two studies:
6.16 $\pm$ 0.09 and 6.33 $\pm$ 0.10 for this study and GH09,
respectively. Although the magnitude of this difference may appear large,
0.17 dex, it is significant only at the $\sim 1.3\sigma$ level.
On comparing the line lists between the two studies, we find
3, 2, and 8 lines in common for \ion{Fe}{1}, \ion{Fe}{2}, and Ni, respectively.
For these three species, the $\log\,{gf}$ values are on the same scale with
differences (this study minus GH09) of $-$0.04 ($\sigma$ = 0.07),
$-$0.03 ($\sigma$ = 0.04), and $-$0.01 ($\sigma$ = 0.03)
for \ion{Fe}{1}, \ion{Fe}{2}, and Ni, respectively. Although the comparison sample is small,
there is no clear evidence for any large systematic difference in $\log\,{gf}$ values
that could explain the differing $\log\,\epsilon$(Ni) or [Ni/Fe] values.
For the lines in common, our EWs are, on average, lower than those of GH09
by 5.7\,m\AA\ ($\sigma$ = 8.0\,m\AA), 5.6\,m\AA\ ($\sigma$ = 5.4\,m\AA), and 12.7\,m\AA\
($\sigma$ = 6.9\,m\AA) for \ion{Fe}{1}, \ion{Fe}{2}, and Ni, respectively.
The most intriguing aspect of this comparison is that the Ni lines
show the greatest discrepancy.
In light of the EW differences for \ion{Fe}{1}\ and \ion{Fe}{2}, we may naively have
expected the Ni EWs to show an offset of $\sim$6\,m\AA\ rather than
a 12.7\,m\AA\ offset. Indeed, differences in the Ni EWs appear
to be the primary reason for the difference in the derived Ni abundances between the two studies.
In Figure~\ref{fig:ew_comp}, we plot our EWs and
the GH09 EWs, for the 8 Ni lines
in common. To estimate the uncertainties in our EWs,
we use the \citet{1988IAUS..132..345C} formula
which considers the measurement uncertainty due to the
line strength, S/N,
and spectral resolution. Uncertainty in the
continuum placement is {\it not} included in the \citet{1988IAUS..132..345C}
formula.
As noted in the
previous subsection, we regard continuum placement to be an additional
source of uncertainty in the EW measurements. To quantify this uncertainty,
we use the DAOSPEC program which fits the
continuum and measures EWs \citep{2008PASP..120.1332S}.
Using DAOSPEC, we remeasure the Ni EWs using four different continuum
fitting criteria: (i) adopting our continuum placement, and
using a (ii) third-order, (iii) fifth-order, and (iv) ninth-order polynomial to
refit our continuum-rectified
spectra. For a given line, we compute the dispersion in the EW measurements
from the four different methods for continuum fitting
and adopt this value as being representative of the EW uncertainties due to
continuum rectification. We then add this value, in quadrature, to the
\citet{1988IAUS..132..345C} uncertainty, noting that the latter dominates
the total EW error budget (see Table~\ref{tab:err_g}).
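The per-line error budget can be sketched as follows. The \citet{1988IAUS..132..345C} photon-noise term is commonly quoted as $\sigma_{\rm EW} \approx 1.6\,\sqrt{w\,\delta x}/(\mathrm{S/N})$, with FWHM $w$ and pixel size $\delta x$ (the prefactor varies slightly between formulations); the numerical values below are illustrative, not measured:

```python
import math

def cayrel_sigma_ew(fwhm_A, pixel_A, snr):
    # photon-noise EW uncertainty (Angstroms); continuum placement is
    # deliberately NOT included, as discussed in the text
    return 1.6 * math.sqrt(fwhm_A * pixel_A) / snr

def total_sigma_ew(sigma_photon, sigma_continuum):
    # add the continuum-rectification term in quadrature
    return math.hypot(sigma_photon, sigma_continuum)

# illustrative numbers: FWHM = 0.12 A, pixel = 0.03 A, S/N = 30 per pixel
s_phot = cayrel_sigma_ew(0.12, 0.03, 30) * 1000.0   # converted to mA
s_tot = total_sigma_ew(s_phot, 1.0)                 # 1.0 mA continuum term
print(f"photon: {s_phot:.1f} mA, total: {s_tot:.1f} mA")
```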
To establish whether these EW uncertainties are valid, we first identify the set of
Ni EWs which produce our mean [Ni/Fe] ratio. That is, every line in this
set of ``ideal'' EWs produces $\log\,\epsilon$(Ni) = 6.16, i.e.,
[Ni/Fe] = 0.07. We then added to each of these ideal EWs a
random number drawn from a normal distribution of width corresponding to our
estimate of the EW uncertainty. We repeated this process for each Ni line,
computed Ni abundances for this new set of lines, and measured the
abundance dispersion. We repeated this process for 1,000 new random samples.
The average dispersion in Ni abundance is 0.17 dex ($\sigma$ = 0.06 dex), and
this average value agrees well with our observed dispersion of 0.14 dex.
Therefore, we are confident that our EW measurement uncertainties are realistic,
since this Monte Carlo analysis verifies that these uncertainties
reproduce our observed abundance dispersion.
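This Monte Carlo exercise can be sketched as follows; the EW values, their uncertainties, and the weak-line mapping from EW perturbations to abundance shifts are hypothetical stand-ins for the actual line-by-line analysis:

```python
import numpy as np

rng = np.random.default_rng(42)

# hypothetical "ideal" EWs (mA) that each reproduce log eps(Ni) = 6.16,
# with hypothetical per-line EW uncertainties
ideal_ew = np.array([35.0, 48.0, 22.0, 60.0, 41.0, 30.0, 55.0, 27.0])
sigma_ew = np.array([4.0, 5.0, 3.5, 6.0, 4.5, 4.0, 5.5, 3.5])

def abundance(ew, ew0):
    # weak-line approximation: on the linear part of the curve of growth,
    # Delta log eps = log10(EW / EW0)
    return 6.16 + np.log10(ew / ew0)

disp = []
for _ in range(1000):
    ew = rng.normal(ideal_ew, sigma_ew)   # perturb each line's EW
    ew = np.clip(ew, 1.0, None)           # keep EWs physical
    disp.append(np.std(abundance(ew, ideal_ew)))
mean_disp = float(np.mean(disp))
print(f"mean abundance dispersion: {mean_disp:.2f} dex")
```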
An additional test is to measure EWs from our spectra for all Fe and Ni
lines measured by GH09. As with our EWs, all lines were manually checked.
For \ion{Fe}{1}, we measured 27 lines and found a mean difference (this study minus GH09)
of $-1.9 \pm 1.2$\,m\AA\ ($\sigma$ = 6.0\,m\AA).
For \ion{Fe}{2}, we measured 8 lines and found a mean difference
of $-4.6 \pm 2.8$\,m\AA\ ($\sigma$ = 7.8\,m\AA).
For Ni, we measured 18 lines and found a mean difference
of $-8.7 \pm 2.0$\,m\AA\ ($\sigma$ = 8.4\,m\AA).
This comparison confirms that our EWs are systematically lower than those of GH09
and that the Ni lines, in particular, show the largest discrepancy.
Indeed, the average difference in Ni EWs is
4 times larger than the average difference in \ion{Fe}{1}\ EWs.
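The quoted standard errors of the mean follow directly from the line-to-line scatter, and make the relative size of the Ni offset explicit:

```python
import math

# (mean difference, line-to-line sigma, number of lines), all in mA,
# from the EW comparison above (this study minus GH09)
species = {"Fe I": (-1.9, 6.0, 27), "Fe II": (-4.6, 7.8, 8), "Ni": (-8.7, 8.4, 18)}

result = {}
for name, (mean, sigma, n) in species.items():
    sem = sigma / math.sqrt(n)               # standard error of the mean
    result[name] = (sem, abs(mean) / sem)    # (sem, significance in sigma)
    print(f"{name}: {mean:+.1f} +/- {sem:.1f} mA ({abs(mean)/sem:.1f} sigma)")
```

The Ni offset is highly significant (above 4$\sigma$) while the \ion{Fe}{1}\ offset is marginal, which is why we regard the Ni EWs as the driver of the abundance difference.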
While continuum normalization could potentially explain these differences, these
Ni lines lie in spectral regions similar to those of the Fe lines, so we would
expect the differences in EWs for Fe and Ni to behave similarly.
We note in our line selection that we reject
5, 2, and 4 lines of \ion{Fe}{1}, \ion{Fe}{2}, and Ni (respectively) that were measured by GH09.
These lines were in our opinion blended and/or in regions where the local continuum
was poorly defined.
We return now to the eight Ni lines in common, noting that
(i) for 7 of the 8 lines, our EWs are smaller than those of GH09,
(ii) for 7 of the 8 lines, the difference in EWs exceeds 1$\sigma$, and
for all 7 lines, the difference shows the same ``sign,'' and
(iii) for 4 of the 8 lines, the difference in EWs exceeds 2$\sigma$, and
for all 4 lines, the difference shows the same ``sign.''
Finally, for the eight Ni lines in common with \gls{gh09}, we plot our normalized spectra
along with spectrum syntheses (see Figures~\ref{fig:ni_syntha} and \ref{fig:ni_synthb}). The main points to take from
these figures are the location of the continuum and how well the
spectrum syntheses fit the lines for the abundances we
measure. We note that our abundances were determined from
EW analysis rather than spectrum synthesis. Nevertheless, had we relied solely upon
spectrum synthesis, we would have obtained essentially identical results.
A systematic increase in $\log\,\epsilon$(Ni) of 0.17\,dex or in [Ni/Fe] of 0.09 dex,
as measured by GH09, is not supported by these spectrum syntheses.
The main conclusions we draw from this comparison are
(i) abundance differences between the two studies cannot be attributed
to the different versions of model atmospheres and spectrum synthesis software;
(ii) the [Ni/Fe] ratio remains unchanged when using our line list but with
either the GH09 stellar parameters or our stellar parameters;
(iii) differences in [Ni/Fe] cannot be attributed to the adopted solar abundances;
(iv) although the set of lines in common between the two analyses is small,
there are no large systematic differences in the $\log\,{gf}$ values that could
explain the discrepancy in Ni abundances;
(v) for \ion{Fe}{1}\ and \ion{Fe}{2}, our EWs are systematically lower than those of GH09 by $\sim 6$\,m\AA, and our Ni EWs are systematically lower by $\sim 12$\,m\AA; and
(vi) our EW uncertainties for Ni are consistent with the observed dispersion in Ni abundance.
As noted above, while our measured [Ni/Fe] value does not represent a substantial revision of the
GH09 value, our Ni abundance is not unusual with respect to field stars
at the same metallicity.
Nevertheless, we welcome further analyses of this star,
preferably conducted with higher-quality spectra.
\include{./tab_ni_comparison}
\include{./tab_ew}
\include{./tab_ew_g_err}
\begin{figure*}[t!]
\epsscale{0.8}
\vspace{5mm}
\includegraphics[width=\textwidth]{./NEWF1-crop.pdf}
\caption{Observed spectra of Tycho-G centered around the Li $\lambda$6707 line.
Synthetic spectra with different Li abundances are overplotted.
The thick red line represents the Li abundance corresponding to the
best-fitting value, and unsatisfactory fits ($\pm 0.15$\,dex) are
plotted as thin black lines.
\label{fig:li_synth}}
\end{figure*}
\begin{figure*}[t!]
\epsscale{0.8}
\vspace{5mm}
\includegraphics[width=\textwidth]{./NEWF2-crop.pdf}
\caption{EWs for the eight Ni lines in common between GH09 (open red squares)
and this study (filled black circles) for Tycho-G. Lines (a)--(h) are
5082.35\,\AA,
5088.54\,\AA,
6086.28\,\AA,
6175.37\,\AA,
6176.82\,\AA,
6643.64\,\AA,
7748.89\,\AA, and
7797.59\,\AA, respectively.
\label{fig:ew_comp}}
\end{figure*}
\begin{figure*}[ht!]
\epsscale{0.9}
\vspace{5mm}
\includegraphics[width=\textwidth]{./NEWF3-crop.pdf}
\caption{Observed spectra centered around five Ni lines in common with GH09
for Tycho-G.
Synthetic spectra with different Ni abundances are overplotted.
The thick red line represents the Ni abundance corresponding to the value
derived from EW analysis, and unsatisfactory fits ($\pm 0.3$ dex) are
plotted as thin black lines.
\label{fig:ni_syntha}}
\end{figure*}
\begin{figure*}[ht!]
\epsscale{0.9}
\vspace{5mm}
\includegraphics[width=\textwidth]{./NEWF4-crop.pdf}
\caption{Same as Figure \ref{fig:ni_syntha} but for
the remaining four Ni lines in common with GH09 (the upper-left line is also seen in the previous panel).
\label{fig:ni_synthb}}
\end{figure*}
\subsection{Distances}
\label{sec:distance}
\begin{figure*}[ht!]
\includegraphics[width=0.5\textwidth]{./tycho-a-panel.pdf}
\includegraphics[width=0.5\textwidth]{./tycho-b-panel.pdf}
\includegraphics[width=0.5\textwidth]{./tycho-c-panel.pdf}
\includegraphics[width=0.5\textwidth]{./tycho-e-panel.pdf}
\includegraphics[width=0.5\textwidth]{./tycho-g-panel.pdf}
\caption[Distance, extinction, and mass measurements in SN~1572]{The figures show error contours for distance, extinction, and mass of the candidates. In the distance plots we indicate the distance range of SNR~1572 with a gray shade. The lower right shows the optimal isochrone \citep{2004ApJ...612..168P} for the measured values of $T_{\rm eff}$ and $\log\,{g}$. }
\label{fig:mc_isochrone}
\end{figure*}
To measure the distance to the candidate stars we used colors and absolute magnitudes from isochrones by \citet{2004ApJ...612..168P}. We used the \gls{migrad} algorithm \citep{James:1975dr} to find close matches of the measured values to $\glssymbol*{teff}$--$\log\,g$ isochrones by varying the age of the isochrone. Subsequently we calculated $E(B-V)$ using the isochrone's color, and we extracted a mass from the isochrone. The results can be seen in Table~\ref{tab:iso_dist}. To estimate the uncertainties in all distances, reddenings, and masses, we employed the Monte Carlo
method with 10,000 samples of \gls{teff}, \gls{logg}, \gls{feh}, $B$ magnitude, and $V$ magnitude (see Figure \ref{fig:mc_isochrone}). Errors included in Table~\ref{tab:iso_dist} are the standard deviations of the Monte Carlo sample.
The data show that all stars are compatible with the distance of the remnant. This is not unexpected, as the uncertainties in the measured stellar parameters are relatively large.
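The Monte Carlo distance estimate can be sketched as follows; the photometry, absolute magnitude, reddening, and their uncertainties are hypothetical, and we assume a standard $R_V = 3.1$ extinction law:

```python
import numpy as np

def distance_kpc(V, M_V, ebv, r_v=3.1):
    # distance modulus with extinction: V - M_V = 5 log10(d/10 pc) + R_V E(B-V)
    mu = V - M_V - r_v * ebv
    return 10.0 ** (mu / 5.0 + 1.0) / 1000.0

rng = np.random.default_rng(1)
n = 10_000
# hypothetical star: apparent V, isochrone M_V, and reddening with errors
V = rng.normal(18.70, 0.05, n)
M_V = rng.normal(4.20, 0.40, n)
ebv = rng.normal(0.80, 0.10, n)

samples = distance_kpc(V, M_V, ebv)
d, sd = float(samples.mean()), float(samples.std())
print(f"d = {d:.1f} +/- {sd:.1f} kpc")
```

As in the table above, the uncertainty on the absolute magnitude dominates the distance error.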
\begin{deluxetable}{lcccccc}
\tablecaption{Distances, Ages, and Masses of Candidate Stars\label{tab:iso_dist}}
\tablecolumns{7}
\tablehead{
\colhead{Tycho} &
\colhead{Mass}&
\colhead{$\sigma_\textrm{Mass}$} &
\colhead{Age}&
\colhead{$\sigma_\textrm{Age}$} &
\colhead{$D$} &
\colhead{$\sigma_D$}\\
\colhead{(Name)} &
\colhead{($M/\hbox{$M_{\odot}$}$)} &
\colhead{($M/\hbox{$M_{\odot}$}$)} &
\colhead{(Gyr)} &
\colhead{(Gyr)}&
\colhead{(kpc)} &
\colhead{(kpc)}\\
}
\startdata
\object[{[RCM2004] Tycho A}]{\hbox{Tycho-A}} & 2.4 & 0.8 &0.7 & 2.3 & 1.4 & 0.8\\
\object[{[RCM2004] Tycho B}]{\hbox{Tycho-B}} & 1.8 & 0.4 &0.8 & 0.3 & 1.8 & 0.8\\
\object[{[RCM2004] Tycho C}]{\hbox{Tycho-C}} & 0.9 & 0.4 &10.0 & 3.4 & 5.5 & 3.5\\
\object[{[RCM2004] Tycho E}]{\hbox{Tycho-E}} & 1.7 & 0.4 &1.4 & 1.1 & 11.2 & 7.5\\
\object[{[RCM2004] Tycho G}]{\hbox{Tycho-G}} & 1.1 & 0.2 &5.7 & 2.1 & 3.7 & 1.5\\
\enddata
\end{deluxetable}
\section{Discussion}
\label{sec:sn1572_hires:discussion}
In our sample of six stars we find none that shows characteristics strongly indicating it to be the donor star of \sn{1572}{}. On the other hand, it is difficult to rule out any particular star absolutely if one is willing to invoke improbable post-explosion evolutionary scenarios.
Tycho-A is a metal-rich giant, and it seems likely to be a foreground star. Its principal redeeming features as a donor-star candidate are its location at the geometric center of the remnant and its relatively low surface gravity. \object[{[RCM2004] Tycho A}]{\hbox{Tycho-A}}\ exhibits very low spatial motion, which is consistent with a giant donor, although its lack of rotation conflicts with a donor-star scenario.
Taking all measurements into account, we regard \object[{[RCM2004] Tycho A}]{\hbox{Tycho-A}}\ to be a very weak candidate (although a wind accretion scenario might still work).
Tycho-B's high temperature, central position in the remnant, high rotational velocity, and unusual chemical abundances make it the most peculiar candidate near the remnant's center. Despite the {\it a posteriori} low probability of finding such a star there, \object[{[RCM2004] Tycho B}]{\hbox{Tycho-B}}'s high rotational velocity coupled with its low spatial velocity seems to be in conflict with any viable donor-star scenario.
These scenarios predict that the donor star will tidally couple to the white dwarf before explosion, causing the rotation and spatial motion to be correlated post explosion (as discussed in \gls{wek09}). The large rotation seen in \object[{[RCM2004] Tycho B}]{\hbox{Tycho-B}}\ should be accompanied by a large spatial motion, which is ruled out by the observations presented here, a problem we are unable to reconcile with \object[{[RCM2004] Tycho B}]{\hbox{Tycho-B}}\ being the donor star.
However, Tycho-B does show some unusual abundances, which we will scrutinize in future studies.
Tycho-C consists of two stars that are resolved only in {\it HST} images: a brighter, bluer component ($B = 21.28$, $V = 19.38$, $R = 18.10$ mag; \gls{rp04}) and a dimmer, redder component ($B = 22.91$, $V = 20.53$, $R = 19.23$ mag; \gls{rp04}). In our analysis we find a consistent solution for the spectrum and infer that it arises from the brighter, bluer component.
We find that \object[{[RCM2004] Tycho C}]{\hbox{Tycho-C}}\ is a metal-poor giant, probably located beyond the remnant. \object[{[RCM2004] Tycho C}]{\hbox{Tycho-C}}, similarly to \object[{[RCM2004] Tycho A}]{\hbox{Tycho-A}}, might be compatible with a giant-donor-star scenario. Its lack of rotation and of any kinematic signature, however, makes it an uncompelling candidate. The only information we have about the dimmer component is its proper motion, which is insignificant: $\mu_\alpha = 0.58 \pm 1.73$\,\ensuremath{\textrm{mas~yr}^{-1}}\ and $\mu_\delta = -0.29 \pm 1.21$\,\ensuremath{\textrm{mas~yr}^{-1}}.
Tycho-D is roughly a factor of 10 dimmer than the nearby star \object[{[RCM2004] Tycho C}]{\hbox{Tycho-C}}\ (separation $\approx 0.6$\arcsec). We could not measure reliable EWs for spectra with this S/N. Visual inspection of the star's spectral features shows it to be consistent with a cool star with low rotation. Its luminosity precludes it from being a relatively slowly rotating giant, and its slow rotation precludes it from being a subgiant or main-sequence donor star. All of this suggests that \object[{[RCM2004] Tycho D}]{\hbox{Tycho-D}}\ is an uncompelling donor candidate.
Tycho-E is the most distant star in this set (11.2\,kpc), although large uncertainties in the distance remain. It seems to be similar to \object[{[RCM2004] Tycho G}]{\hbox{Tycho-G}}\ in temperature, but appears to have a lower surface gravity. It is located 7\arcsec\ from the geometric center, but has no unusual stellar parameters or kinematics. \gls{gh09}\ have suggested this to be a double-lined binary, but we are unable to confirm this using Fourier cross-correlation techniques. \citet{2007PASJ...59..811I} have examined iron absorption lines imprinted on stellar spectra by the remnant and found \object[{[RCM2004] Tycho E}]{\hbox{Tycho-E}}\ to be unusual. They argue that a star in the background would show blueshifted and redshifted iron lines, whereas a star inside the remnant would show only blueshifted iron lines, and a foreground star would not show any iron features from the remnant. \citet{2007PASJ...59..811I} claim that \object[{[RCM2004] Tycho E}]{\hbox{Tycho-E}}\ shows only blueshifted lines, and thus suggest that it is inside the remnant. We believe, however, that \object[{[RCM2004] Tycho E}]{\hbox{Tycho-E}}\ is located far behind the remnant and suggest that a low column density on the receding side of the remnant could cause the lack of redshifted iron features. In summary, a lack of rotation, the absence of kinematic signatures, and an inconsistent distance make \object[{[RCM2004] Tycho E}]{\hbox{Tycho-E}}\ a very weak candidate.
Tycho-G is located 30\arcsec\ from the X-ray\ center, making it the most remote object from the center in this work (in the plane of the sky; for comparison a distance of 32.6\arcsec\ corresponds to 1000\,\ensuremath{\textrm{km}~\textrm{s}^{-1}}\ over 433 yr at the distance of 2.8\,kpc). This work confirms the radial velocity measured by \gls{gh09}\ and \gls{wek09}. Figure \ref{fig:dist_vr} shows the expected distribution of radial velocities from the Besan\c{c}on model of Galactic dynamics. \object[{[RCM2004] Tycho G}]{\hbox{Tycho-G}}\ lies well within the expected range of \gls{vrad} for stars with its stellar parameters and distance.
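The parenthetical comparison follows from the standard relation between proper motion and transverse velocity, $v_t = 4.74\,\mu d$ (with $\mu$ in arcsec per year and $d$ in pc):

```python
# transverse velocity implied by an angular offset accumulated over time:
# v_t [km/s] = 4.74 * mu [arcsec/yr] * d [pc]
def v_transverse(sep_arcsec, years, dist_pc):
    mu = sep_arcsec / years          # implied proper motion
    return 4.74 * mu * dist_pc

# 32.6 arcsec over 433 yr at 2.8 kpc, as quoted in the text
v = v_transverse(32.6, 433.0, 2800.0)
print(f"{v:.0f} km/s")   # ~1000 km/s
```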
In addition, this work has analyzed the proper motions of stars around the center of \sn{1572}{}. Figures \ref{fig:propmot_sn1572_hires} and \ref{fig:sn1572_hires:ppmxl_compare} show \object[{[RCM2004] Tycho G}]{\hbox{Tycho-G}}\ not to be significantly deviant from the distribution of proper motions in the \snr{1572} neighborhood. Figure~\ref{fig:propmot_sn1572_hires} shows \object[{[RCM2004] Tycho G}]{\hbox{Tycho-G}}\ to be a $2\sigma$ outlier, which implies that there should be about six stars in the {\it HST} sample sharing proper-motion features similar to those of \object[{[RCM2004] Tycho G}]{\hbox{Tycho-G}}; thus, its proper motion is by no means a unique trait. To further explore the proper-motion parameter space, we selected candidates within a $1^\circ$ radius around \sn{1572}{}\ from the proper-motion catalogue \gls{ppmxl} \citep{2010AJ....139.2440R}. To exclude the many foreground stars we introduced the additional selection criteria $R>16$ mag and $V-R < 1$ mag (for comparison, the Sun has a color of $V-R=1.3$ mag). We tested these criteria by applying them to the Besan\c{c}on Model, which resulted in 95\% of the selected stars being more distant than 2\,kpc. This shows that a high proper motion at great distances is not a unique feature, as there are more stars that share this trait (see Figure \ref{fig:sn1572_hires:ppmxl_compare}). In particular, stars in the thick disk have motions entirely consistent with Tycho-G (see contours in Figure \ref{fig:sn1572_hires:ppmxl_compare}, and Figure 10 in \gls{gh09}). Finally, the {\it HST} proper-motion measurements are challenging, and it is conceivable that there are systematic errors in our proper-motion measurements which are larger than our reported statistical errors. Such errors would tend to increase the chance of larger-than-actual proper-motion measurements.
Taken in total, while \object[{[RCM2004] Tycho G}]{\hbox{Tycho-G}}\ may have an unusual proper motion, the significance of this motion, even if current measurements are exactly correct, is not exceptional.
As described, the kinematic features of a donor star might easily be lost in the kinematic noise of the Galaxy. \gls{wek09}\ recommend using post-explosion stellar rotation as an additional possible signature of a donor star. This work suggests that \object[{[RCM2004] Tycho G}]{\hbox{Tycho-G}}\ has a rotation below the instrumental profile of 6\,\ensuremath{\textrm{km}~\textrm{s}^{-1}}, much less than expected for a donor star \citep[for an estimate, see][]{2009ApJ...701.1665K}. New results by \citet{2012ApJ...750..151P}, however, suggest that models accounting only for tidal coupling may overestimate the rotation, and thus \object[{[RCM2004] Tycho G}]{\hbox{Tycho-G}}'s low rotation might still be reconcilable with a donor-star model.
\begin{figure*}[ht!]
\centering
\includegraphics[width=\textwidth, trim=0 0 2cm 0, clip]{./ppmxl_besancon_plot.pdf}
\caption[Comparison between PPMXL catalog and the Besan\c{c}on Model]{The gray points are \glsentryname{ppmxl} stars within $1^\circ$ of \sn{1572}{}. \object[{[RCM2004] Tycho G}]{\hbox{Tycho-G}}\ has been marked with a gray cross. In addition, we show the distribution for the Besan\c{c}on Model with an area of 1 square degree (a third of the search area of the \gls{ppmxl} sample) around the remnant and a distance between 2.2\,kpc\ and 5.2\,kpc\ as the black contours ($1\sigma$, $2\sigma$, and $3\sigma$). We have constrained the model output to only show thick-disk stars with which \object[{[RCM2004] Tycho G}]{\hbox{Tycho-G}}\ seems to be consistent. }
\label{fig:sn1572_hires:ppmxl_compare}
\end{figure*}
We find \object[{[RCM2004] Tycho G}]{\hbox{Tycho-G}}\ to be a subgiant/main-sequence star with roughly solar temperature and metallicity.
\gls{gh09}\ measure a nickel enhancement, which they believe to originate in the contamination from the ejecta. We have conducted a detailed comparison with \gls{gh09}'s measurement in \S~\ref{sec:tychog_comp} and do not find \object[{[RCM2004] Tycho G}]{\hbox{Tycho-G}}\ to be an outlier as suggested by \gls{gh09}, but rather consistent with other stars of similar metallicity. In addition, our Li measurement is in agreement with that of \gls{gh09}\ (see Table~\ref{tab:parvar}). In contrast to the GH09 interpretation, this Li abundance is consistent with that of stars of similar parameters \citep{2010A&A...519A..87B}.
Finally, we have measured the distance to \object[{[RCM2004] Tycho G}]{\hbox{Tycho-G}}, showing it to be consistent with a background star. In addition, the radial-velocity signature matches that of background stars (see Figure \ref{fig:dist_vr}).
In summary, while \object[{[RCM2004] Tycho G}]{\hbox{Tycho-G}}\ may have unusual kinematics as indicated by its proper motion, the significance of this motion is not large when compared to a large sample of similar stars in the direction of the Tycho remnant. Furthermore, such a kinematic signature, if it were related to the binary orbital velocity, might predict rotation for \object[{[RCM2004] Tycho G}]{\hbox{Tycho-G}}, which we do not observe (modulo the caveats from \gls{wek09}\ \& \citealt{2012ApJ...750..151P}). Also, we have not found a reasonable explanation for \object[{[RCM2004] Tycho G}]{\hbox{Tycho-G}}'s large distance from the geometric center, and suggest that \object[{[RCM2004] Tycho G}]{\hbox{Tycho-G}}\ is unlikely to be related to the Tycho SNR.
\section{Conclusion}
\label{sec:sn1572_hires:conclusion}
This work did not detect an unambiguously identifiable donor-star candidate. Although \object[{[RCM2004] Tycho B}]{\hbox{Tycho-B}}\ shows some unusual features, there is currently no convincing explanation of all of its parameters within the donor-star scenario. We believe that our results provide evidence that the Tycho SNR does not have a main-sequence, subgiant, or red giant donor star. Some other possibilities remain. In the spin-down scenario, the companion star can become a helium white dwarf from a red giant donor, or a very low mass main-sequence star from a more massive main-sequence star. Such a compact companion can escape detection \citep{2011ApJ...738L...1D, 2011ApJ...730L..34J, 2012ApJ...756L...4H,2012ApJ...744...69H}. Another scenario is a helium donor, such as the so-called sub-\glsentryname{mchan} explosions discussed by \citet{1995ApJ...452...62L} and \citet{2010ApJ...714L..52S}. These progenitor systems might leave a very faint and fast-moving helium star, or no remnant at all (R.~Pakmor 2012, private communication). Such a progenitor would probably evade detection and would likely not leave traces such as circumstellar interaction with the remnant or early light-curve anomalies \citep{2010ApJ...708.1025K}. Deep multi-epoch wide-field optical images should catch any such star speeding away from the remnant's center, but such observations have not yet been taken. Finally, a double-degenerate progenitor in most cases leaves no compact remnant, which is consistent with our finding no donor star in \snr{1572}.
\sn{1006}{}\ and \sn{1604}{}\ (Kepler's \gls*{sn}) are two other \gls*{snia}\ remnants in the Milky Way. \sn{1006}{}\ is far from the Galactic plane and shows no signs of circumstellar interaction. Kerzendorf et al. (2012b) have studied this remnant and have not found any unusual star that can be explained with a donor-star scenario (consistent with this work). \snr{1604}, while far from the Galactic plane, shows circumstellar interaction with its remnant, and has all the indications of what might be expected from a single-degenerate scenario with an asymptotic giant branch donor \citep{2011arXiv1103.5487C}. Observations of these remnants will better establish if there is a continued pattern to the unusual stars in \gls*{snia}\ remnant centers, or whether the lack of viable donor stars persists in multiple systems.
\section{Acknowledgements}
B.~P.~Schmidt and W.~E.~Kerzendorf were supported by Schmidt's ARC Laureate Fellowship (FL0992131). A.~Gal-Yam acknowledges support by the Israeli Science Foundation. A.~V.~Filippenko is grateful for the support of the Christopher R. Redlich Fund, the TABASGO Foundation, and NSF grants AST-0908886 and AST-1211916; funding was also provided by NASA grants GO-10098, GO-12469, and AR-12623 from the Space Telescope Science Institute, which is operated by AURA, Inc., under NASA contract NAS 5-26555. R.~J.~Foley was supported by a Clay Fellowship.
We thank Christopher Onken and Jorge Melendez for helpful advice on the HIRES data reduction. We acknowledge useful discussions about differential rotation with Amanda Karakas and also thank Peter Wood for advising us on stellar evolution matters. R\"{u}diger Pakmor provided information on helium-star mergers. We thank Martin Asplund for providing us with Li NLTE corrections.
\bibliographystyle{apj}
\section{Introduction}
\addtocontents{toc}{\protect\setcounter{tocdepth}{1}}
Let $\Gamma$ be a finite simplicial graph and denote its vertex set and edge set by $\vv \Gamma$ and $\ee \Gamma$, respectively. The associated \textit{right-angled Artin group} (RAAG) $\raag \Gamma$ is the group defined by the following finite presentation
$$
\raag \Gamma=\big\langle \vv \Gamma \ \big\vert \ [v,w] \ \text{whenever} \ (v,w)\in \ee \Gamma\big\rangle.
$$
\noindent RAAGs have been a central object of study in geometric group theory because of the beautiful interplay between algebraic properties of the groups and combinatorial properties of the defining graphs, and also because they contain many interesting subgroups, such as the fundamental groups of many surfaces and $3$-manifolds, and more generally, specially cubulated groups; see \cite{HW08}.
The \textit{RAAG recognition problem} consists in deciding whether a given group is a RAAG.
Several authors have worked on this problem for various classes of groups,
for instance, the pure symmetric automorphism groups of RAAGs in \cite{CharneyRuaneStambaughVijayanTheAutomorphismgroupofagraphproductiwithnoSIL} and \cite{KobanPiggottTheBNSofthepuresymmetricautomorphismofRAAG}, the pure symmetric outer automorphism groups of RAAGs in \cite{DayWadeSubspaceArrangementBNSinvariantsandpuresymmetricOuterAutomorphismsofRAAGs}, and a certain class of subgroups of RAAGs and RACGs in \cite{DaniLevcovitzRightangledArtinsubgroupsofRAACsandRAAGs} and of mapping class groups in \cite{KoberdaRAAGsandaGeneralizedIsoProblemforFGsubgpsofMCGs}.
An analogous recognition problem for right-angled Coxeter groups has been considered in \cite{CEPR16}.
However, the RAAG recognition problem is not easy to answer in general, even when the given group shares some essential properties with RAAGs.
For example, the group $G$ with the following presentation
$$
G=\big\langle a,b,c,d,e \ \big\vert \ [a,b], [b,c], [c,d], [b^{-1}c,e] \big\rangle
$$
is finitely presented with only commutator relators; it is CAT$(0)$ and splits as a graph of free abelian groups. However, it is not a RAAG; see \cite[Example 2.8]{PapadimaSuciuAlgebraicinvariantsforBBGs}.
Even more is true: Bridson \cite{BridsonOntheRecognitionofRAAGs} showed that there is no algorithm to determine whether or not a group presented by commutators is a RAAG, answering a question by Day and Wade \cite[Question 1.2]{DayWadeSubspaceArrangementBNSinvariantsandpuresymmetricOuterAutomorphismsofRAAGs}.
In this article, we study the RAAG recognition problem for a class of normal subgroups of RAAGs, namely, the \textit{Bestvina--Brady groups} (BBGs).
Let $\chi \colon \raag \Gamma \to \mathbb Z$ be the homomorphism sending all the generators to $1$. The BBG defined on $\Gamma$ is the kernel of $\chi$ and is denoted by $\bbg\Gamma$.
For example, the group $G$ from above is the BBG defined on the \textit{trefoil graph} (see Figure~\ref{fig:trefoil}).
BBGs were introduced and studied in \cite{bestvinabradymorsetheoryandfinitenesspropertiesofgroups},
and they have become popular as a source of pathological examples in the study of finiteness properties and cohomology of groups.
For instance, some BBGs are finitely generated but not finitely presented; and there are some BBGs that are potential counterexamples to either the Eilenberg--Ganea conjecture or the Whitehead conjecture.
\begin{figure}[h]
\centering
\input{pictures/trefoil}
\caption{The trefoil graph}
\label{fig:trefoil}
\end{figure}
\clearpage
Inspired by the example of the group $G$ from above, we are interested in understanding how much a BBG can be similar to a RAAG without being a RAAG.
In particular, we are interested in a criterion that can be checked directly on the defining graph.
It is well-known that two RAAGs are isomorphic if and only if their defining graphs are isomorphic; see \cite{DromsIsomorphismsofGraphGroups}.
However, this is not the case for BBGs.
For instance, the BBG defined on a tree with $n$ vertices is always the free group of rank $n-1$.
Nevertheless, some features of BBGs can still be seen directly from the defining graphs.
For example, it was proved in \cite{bestvinabradymorsetheoryandfinitenesspropertiesofgroups} that $\bbg \Gamma$ is finitely generated if and only if $\Gamma$ is connected; and $\bbg \Gamma$ is finitely presented if and only if the flag complex $\flag \Gamma$ associated to $\Gamma$ is simply connected.
When a BBG is finitely generated, an explicit presentation was found by Dicks and Leary \cite{DicksLearypresentationsforsubgroupsofArtingroups}.
More properties that have been discussed from a graphical perspective include various cohomological invariants in \cite{PapadimaSuciuAlgebraicinvariantsforBBGs,DimacaPapadimaSuciuQuasiKahlerBBGs,PapadimaandSuciuBNSRinvariantsandHomologyJumpingLoci,LearySaadetogluTheCohomologyofBBGs}, Dehn functions in \cite{YCCIdentifyingDehnFunctionsofBBGfromtheirdefininggraphs}, and graph of groups decompositions in \cite{ChangJSJofBBGs,lorenzo,DR22}.
In this paper, we add to this list a solution to the RAAG recognition problem for BBGs whose associated flag complexes are $2$-dimensional (equivalently, the largest complete subgraphs of the defining graphs are triangles).
We note that it is natural to make two assumptions.
The first one is that $\Gamma$ is biconnected, that is, it has no cut vertices (otherwise, one can split $\bbg \Gamma$ as the free product of the BBGs on the biconnected components of $\Gamma$; see Corollary~\ref{cor:biconnected components}).
The second assumption is that the associated flag complex $\flag \Gamma$ is simply connected (otherwise, the group $\bbg \Gamma$ is not even finitely presented).
Our solution to the RAAG recognition problem in dimension 2 is in terms of the presence or absence of two particular types of subgraphs.
A \textit{tree 2-spanner} $T$ of the graph $\Gamma$ is a spanning tree such that for any two vertices $x$ and $y$, we have $d_T(x,y)\leq 2 d_\Gamma (x,y)$.
A \textit{crowned triangle} of the associated flag complex $\flag \Gamma$ is a triangle whose edges are not on the boundary of $\flag \Gamma$ (see \S\ref{section: BBGs on 2-dim flag complexes} for the precise definition). For instance, the central triangle of the trefoil graph in Figure~\ref{fig:trefoil} is a crowned triangle.
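Both conditions are finitely checkable on a given graph. The following Python sketch (our own ad hoc code, not part of the paper's formalism; the adjacency lists encode the trefoil graph of Figure~\ref{fig:trefoil}, with vertices $1,2,3$ spanning the central triangle and outer vertices $4,5,6$) verifies by brute force that the trefoil graph contains a crowned triangle and admits no tree $2$-spanner, consistent with the equivalences below.

```python
# Brute-force checks of the two graph conditions on the trefoil graph:
# vertices 1,2,3 span the central triangle; vertex 4 is joined to 1,2;
# vertex 5 to 2,3; vertex 6 to 3,1.
import math
from itertools import combinations

V = [1, 2, 3, 4, 5, 6]
E = {frozenset(e) for e in
     [(1, 2), (2, 3), (3, 1), (1, 4), (2, 4), (2, 5), (3, 5), (3, 6), (1, 6)]}

def triangles(edges):
    return [t for t in combinations(V, 3)
            if all(frozenset(p) in edges for p in combinations(t, 2))]

def crowned_triangles():
    # In a 2-dimensional flag complex an edge is on the boundary iff it lies
    # in exactly one triangle; a crowned triangle has no boundary edges.
    tris = triangles(E)
    in_tris = lambda p: sum(1 for t in tris if set(p) <= set(t))
    return [t for t in tris
            if all(in_tris(p) >= 2 for p in combinations(t, 2))]

def dist(edges, u, v):
    # Graph distance between u and v in the subgraph with the given edges.
    seen, frontier, d = {u}, {u}, 0
    while v not in seen:
        frontier = {y for x in frontier for y in V
                    if frozenset((x, y)) in edges and y not in seen}
        if not frontier:
            return math.inf
        seen |= frontier
        d += 1
    return d

def has_tree_2_spanner():
    # A subgraph with |V|-1 edges connecting all vertices is a spanning tree;
    # it is a tree 2-spanner iff every edge of the graph has its endpoints at
    # distance at most 2 in the tree.
    for T in combinations(sorted(E, key=sorted), len(V) - 1):
        T = set(T)
        if all(dist(T, V[0], v) < math.inf for v in V):
            if all(dist(T, *sorted(e)) <= 2 for e in E):
                return True
    return False
```

Running the two checks confirms that the central triangle is the unique crowned triangle and that none of the $\binom{9}{5}$ candidate edge subsets is a tree $2$-spanner.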
\begin{introtheorem}\label{main thm 2dim}
Let $\Gamma$ be a biconnected graph such that $\flag \Gamma$ is $2$-dimensional and simply connected. Then the following statements are equivalent.
\begin{enumerate}
\item $\Gamma$ admits a tree $2$-spanner.
\item $\flag \Gamma$ does not contain crowned triangles.
\item $\bbg \Gamma$ is a RAAG.
\item $\bbg \Gamma$ is an Artin group.
\end{enumerate}
\end{introtheorem}
Our proof of Theorem~\ref{main thm 2dim} relies on two conditions that are independent, in the sense that they work separately and regardless of the dimension of $\flag \Gamma$.
The first one is a sufficient condition for a BBG to be a RAAG that is based on the existence of a tree 2-spanner (see \S\ref{intro: condition RAAG}).
The second one is a sufficient condition for any finitely generated group not to be a RAAG that is based on certain properties of the \textit{Bieri--Neumann--Strebel invariant} (BNS-invariant) and may be of independent interest (see \S\ref{intro: condition not RAAG}).
We prove that these two conditions are equivalent when the flag complex $\flag \Gamma$ is 2-dimensional (see \S\ref{intro: 2-dim}).
This allows one to recover the fact that the group $G$ from above (that is, the BBG defined on the trefoil graph from Figure~\ref{fig:trefoil}) is not a RAAG. This was already known by the results of \cite{PapadimaSuciuAlgebraicinvariantsforBBGs} or \cite{DayWadeSubspaceArrangementBNSinvariantsandpuresymmetricOuterAutomorphismsofRAAGs}.
While the results in these two papers apply to groups that are more general than the group $G$, they do not address the case of a very minor modification of that example, such as the BBG defined on the graph in Figure~\ref{fig: extended PS no orientation}.
This BBG shares all the properties with the group $G$ described above. But again, it is not a RAAG by Theorem~\ref{main thm 2dim} since the defining graph contains a crowned triangle; see Example~\ref{ex:extended trefoil continued}.
The features of the BNS-invariant that we use to show that a BBG is not a RAAG turn out to imply that the BBG cannot even be a more general Artin group.
This relies on the theory of \textit{resonance varieties} developed by Papadima and Suciu in \cite{PapadimaSuciuAlgebraicinvariantsforRAAGs,PapadimaSuciuAlgebraicinvariantsforBBGs}.
Roughly speaking, we show that for the BBGs under consideration in this paper, the resonance varieties are the same as the complements of the BNS-invariants (see \S \ref{sec:resonance varieties}).
\begin{figure}[h]
\centering
\input{pictures/extended_PS_unoriented}
\caption{The extended trefoil graph. The BBG defined on it has the presentation $\big\langle a,b,c,d,e,f \ \big\vert \ [a,b], [b,c], [c,d], [b^{-1}c,e], [e,f] \big\rangle$.}
\label{fig: extended PS no orientation}
\end{figure}
\subsection{The condition to be a RAAG: tree 2-spanners}\label{intro: condition RAAG}
As observed in \cite[Corollary 2.3]{PapadimaSuciuAlgebraicinvariantsforBBGs}, when $\bbg \Gamma$ is finitely presented, any spanning tree $T$ of $\Gamma$ provides a finite presentation whose relators are commutators.
If $T$ is a tree $2$-spanner, then this presentation can actually be simplified to a standard RAAG presentation, and in particular, the group $\bbg \Gamma$ is a RAAG.
We can even identify the defining graph for this RAAG in terms of the \textit{dual graph} $T^\ast$ of $T$, that is, the graph whose vertices are edges of $T$, and two vertices are adjacent if and only if the corresponding edges of $T$ are contained in the same triangle of $\Gamma$.
Note that the following result does not have any assumption on the dimension of $\flag\Gamma$.
\begin{introtheorem}\label{main thm BBG=RAAG}
If $\Gamma$ admits a tree $2$-spanner $T$, then $\bbg \Gamma$ is a RAAG. More precisely, the Dicks--Leary presentation can be simplified to the standard RAAG presentation with generating set $\ee T$. Moreover, we have $\bbg \Gamma \cong \raag{T^\ast}$.
\end{introtheorem}
Here are two applications. The first one is that if $\Gamma$ admits a tree $2$-spanner, then $\flag \Gamma$ is contractible; see Corollary~\ref{cor:tree2spanner implies contractible}.
The second application is that for any graph $\Lambda$, the BBG defined on the cone over $\Lambda$ is isomorphic to $\raag \Lambda$, regardless of the structure of $\Lambda$; see Corollary~\ref{cor: cone graph gives an isomorphism between BBG and RAAG}.
This means that the class of BBGs contains the class of RAAGs. That is, every RAAG arises as the BBG defined on some graph.
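The cone construction can be made concrete on a minimal example (a sketch with our own vertex names, illustrating Corollary~\ref{cor: cone graph gives an isomorphism between BBG and RAAG}): for the cone over the path $a$--$b$--$c$, the star of the cone vertex $v$ is a tree $2$-spanner $T$, and the dual graph $T^\ast$ recovers the path itself.

```python
# The cone over the path a-b-c: cone vertex v joined to every path vertex.
# The star of v is a tree 2-spanner T (any two vertices are within T-distance
# 2 through v), and the dual graph T* is isomorphic to the path.
from itertools import combinations

path_V = ["a", "b", "c"]
path_E = {frozenset(p) for p in [("a", "b"), ("b", "c")]}
V = path_V + ["v"]
E = path_E | {frozenset(("v", x)) for x in path_V}

T = {frozenset(("v", x)) for x in path_V}  # star of the cone vertex

def is_triangle(x, y, z):
    return all(frozenset(p) in E for p in combinations((x, y, z), 2))

# T*: one vertex per edge of T; two edges of T are adjacent in T* iff they
# lie in a common triangle of the cone.
dual_V = sorted(tuple(sorted(e)) for e in T)
dual_E = {frozenset((e, f)) for e, f in combinations(dual_V, 2)
          if is_triangle(*(set(e) | set(f)))}

# The map sending the tree edge {v, x} to x identifies T* with the path.
drop_cone = lambda e: next(x for x in e if x != "v")
```

Under the map `drop_cone`, the edge set of $T^\ast$ coincides with that of the path, so $\bbg{\Gamma} \cong \raag{T^\ast} \cong \raag{\Lambda}$ in this example.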
As we have mentioned before, two RAAGs are isomorphic if and only if their defining graphs are isomorphic, but this is not true for BBGs.
However, when two graphs admit tree $2$-spanners, the associated BBGs and the dual graphs completely determine each other.
\begin{introcorollary}\label{intro: BBGs iso iff dual graphs iso}
Let $\Gamma$ and $\Lambda$ be two graphs admitting tree 2-spanners $T_\Gamma$ and $T_\Lambda$, respectively.
Then $\bbg \Gamma \cong \bbg \Lambda$ if and only if $T_\Gamma^\ast \cong T_\Lambda^\ast$.
\end{introcorollary}
This provides new examples of non-isomorphic graphs defining isomorphic BBGs; see Example~\ref{ex:iso bbg non iso graphs}.
On the other hand, when $\Gamma$ does not admit a tree $2$-spanner, the presentation for $\bbg \Gamma$ associated to any spanning tree is never a RAAG presentation.
However, there might be a RAAG presentation not induced by a spanning tree.
In order to obstruct this possibility, we need to look for invariants that do not depend on the choice of a generating set.
We will consider the BNS-invariant $\bns {\bbg \Gamma}$ of $\bbg \Gamma$.
\subsection{The BNS-invariants of BBGs from the defining graphs}\label{intro: BNS invariant}
The BNS-invariant $\bns G$ of a finitely generated group $G$ is a certain open subset of the character sphere $\chars G$, that is, the unit sphere in the space of group homomorphisms $\operatorname{Hom}(G,\mathbb R)$.
This invariant was introduced in \cite{bierineumannstrebelageometricinvariantofdiscretegroups} as a tool to study finiteness properties of normal subgroups of $G$ with abelian quotients, such as kernels of characters. In general, the BNS-invariants are hard to compute.
The BNS-invariants of RAAGs have been characterized in terms of the defining graphs by Meier and VanWyk in \cite{meierthebierineumannstrebelinvariantsforgraphgroups}.
The BNS-invariants of BBGs are less understood.
In \cite[Theorem 15.8]{PapadimaandSuciuBNSRinvariantsandHomologyJumpingLoci}, Papadima and Suciu gave a cohomological upper bound for the BNS-invariants of BBGs.
Recently, Kochloukova and Mendon\c{c}a have shown in \cite[Corollary 1.3]{kochloukovamendonontheBNSRsigmainvariantsoftheBBGs} how to reconstruct the BNS-invariant of a BBG from that of the ambient RAAG.
However, an explicit description of the BNS-invariant of a BBG in terms of its defining graph is still missing (recall that the correspondence between BBGs and graphs is not as explicit as in the case of RAAGs).
Since the vertices of $\Gamma$ are generators for $\raag \Gamma$, a convenient way to describe characters of $\raag \Gamma$ is via vertex-labellings.
Inspired by this, in the present paper, we encode characters of $\bbg \Gamma$ as edge-labellings.
This relies on the fact that the edges of $\Gamma$ form a generating set for $\bbg \Gamma$ (see \cite{DicksLearypresentationsforsubgroupsofArtingroups} and \S\ref{sec:coordinates}).
We obtain the following graphical criterion for a character of a BBG to belong to the BNS-invariant.
The condition appearing in the following statement involves the \textit{dead edge subgraph} $\deadedge \chi$ of a character $\chi$ of $\bbg \Gamma$, which is the graph consisting of edges on which $\chi$ vanishes.
This is reminiscent of the living subgraph criterion for RAAGs in \cite{meierthebierineumannstrebelinvariantsforgraphgroups}.
However, it turns out that the case of BBGs is better understood in terms of the dead edge subgraph (see Example~\ref{ex:no good living edge subgraph criterion}). An analogous dead subgraph criterion for RAAGs was considered in \cite{lorenzo}.
\begin{introtheorem}[Graphical criterion for the BNS-invariant of a BBG]\label{intro: graphical criterion}
Let $\Gamma$ be a biconnected graph with $\flag \Gamma$ simply connected.
Let $\chi\in\mathrm{Hom}(\bbg \Gamma,\mathbb R)$ be a non-zero character. Then $[\chi]\in \bns{\bbg \Gamma}$ if and only if $\deadedge \chi$ does not contain a full subgraph that separates $\Gamma$.
\end{introtheorem}
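When $\flag \Gamma$ is simply connected, a character of $\bbg \Gamma$ is determined by a labelling of oriented edges with $\chi(\bar e)=-\chi(e)$ and zero signed sum around each triangle (the relator of a $3$-cycle). The following sketch (our own ad hoc code and labelling) illustrates the criterion on the trefoil graph: the chosen character dies exactly on the central triangle, a full separating subgraph, so $[\chi]\notin\bns{\bbg\Gamma}$.

```python
# A worked instance of the graphical criterion on the trefoil graph.
# chi labels oriented edges, vanishes exactly on the central triangle 1,2,3,
# and satisfies the triangle relations; the dead edge subgraph is then a
# full separating subgraph.
from itertools import combinations

V = [1, 2, 3, 4, 5, 6]
E = {frozenset(e) for e in
     [(1, 2), (2, 3), (3, 1), (1, 4), (2, 4), (2, 5), (3, 5), (3, 6), (1, 6)]}

chi = {(1, 2): 0, (2, 3): 0, (3, 1): 0,
       (1, 4): 1, (2, 4): 1, (2, 5): 1, (3, 5): 1, (3, 6): 1, (1, 6): 1}
chi.update({(b, a): -t for (a, b), t in list(chi.items())})  # chi(e-bar) = -chi(e)

def triangle_sums_vanish():
    tris = [t for t in combinations(V, 3)
            if all(frozenset(p) in E for p in combinations(t, 2))]
    return all(chi[(a, b)] + chi[(b, c)] + chi[(c, a)] == 0 for a, b, c in tris)

dead_E = {e for e in E if chi[tuple(e)] == 0}  # edges of the dead edge subgraph
dead_V = {v for e in dead_E for v in e}

def components(vertices):
    vertices, comps = set(vertices), []
    while vertices:
        stack = [vertices.pop()]
        comp = set(stack)
        while stack:
            x = stack.pop()
            for y in list(vertices):
                if frozenset((x, y)) in E:
                    vertices.discard(y); comp.add(y); stack.append(y)
        comps.append(comp)
    return comps

def full_separating_subgraph_in_dead():
    # Search for a vertex set S spanning a full subgraph contained in the
    # dead edge subgraph whose removal disconnects the graph.
    for k in range(1, len(dead_V) + 1):
        for S in combinations(sorted(dead_V), k):
            induced = {e for e in E if set(e) <= set(S)}
            rest = [v for v in V if v not in S]
            if induced <= dead_E and len(components(rest)) > 1:
                return set(S)
    return None
```

The search finds the full subgraph on $\{1,2\}$, which separates the outer vertex $4$ from the rest; by the criterion above, this character does not lie in the BNS-invariant.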
Theorem~\ref{intro: graphical criterion} allows one to work explicitly in terms of graphs with labelled edges.
In particular, we show in Corollary~\ref{cor:equivalent biconnected} that the following are equivalent: the graph $\Gamma$ is biconnected,
the BNS-invariant $\bns{\bbg \Gamma}$ is nonempty, and $\bbg \Gamma$ \textit{algebraically fibers} (that is, it admits a homomorphism to $\mathbb Z$ with finitely generated kernel).
In the same spirit, we obtain the following graphical description for (the complement of) the BNS-invariants of BBGs.
Here, a \textit{missing subsphere} is a subsphere of the character sphere that is in the complement of the BNS-invariant (see \S\ref{sec:BNS stuff} for details).
\begin{introtheorem}[Graphical description of the BNS-invariant of a BBG]\label{intro:graphical description}
Let $\Gamma$ be a biconnected graph with $\flag \Gamma$ simply connected.
Then $\bnsc{\bbg \Gamma}$ is a union of missing subspheres corresponding to full separating subgraphs. More precisely,
\begin{enumerate}
\item $\bnsc{\bbg \Gamma}= \bigcup_\Lambda S_\Lambda$, where $\Lambda$ ranges over the minimal full separating subgraphs of $\Gamma$.
\item There is a bijection between maximal missing subspheres in $\bnsc{\bbg \Gamma}$ and minimal full separating subgraphs of $\Gamma$.
\end{enumerate}
\end{introtheorem}
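The minimal full separating subgraphs appearing in this description are easy to enumerate on small examples. The sketch below (our own code) computes them for the trefoil graph; the three answers are exactly the links of the outer vertices, giving three maximal missing subspheres in $\bnsc{\bbg\Gamma}$.

```python
# Enumerate the minimal full separating subgraphs of the trefoil graph.
# A full subgraph is determined by its vertex set S, and it separates iff
# the complement of S in the graph is disconnected.
from itertools import combinations

V = [1, 2, 3, 4, 5, 6]
E = {frozenset(e) for e in
     [(1, 2), (2, 3), (3, 1), (1, 4), (2, 4), (2, 5), (3, 5), (3, 6), (1, 6)]}

def connected(vertices):
    vertices = set(vertices)
    comp, stack = set(), [min(vertices)]
    while stack:
        x = stack.pop()
        comp.add(x)
        stack.extend(y for y in vertices - comp if frozenset((x, y)) in E)
    return comp == vertices

separating = [set(S) for k in range(1, len(V) - 1)
              for S in combinations(V, k)
              if not connected([v for v in V if v not in S])]

# Keep only the separating vertex sets that are minimal under inclusion.
minimal = [S for S in separating if not any(T < S for T in separating)]
```

For the trefoil graph this returns $\{1,2\}$, $\{1,3\}$, and $\{2,3\}$: the edges of the central triangle, each of which isolates one outer vertex.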
In particular, as observed in \cite[Corollary 1.4]{kochloukovamendonontheBNSRsigmainvariantsoftheBBGs}, the set $\bnsc{\bbg \Gamma}$ carries a natural structure of a rationally defined spherical polyhedron. A set of defining equations can be computed directly by looking at the minimal full separating subgraphs of $\Gamma$.
This is analogous to the case of RAAGs; see \cite{meierthebierineumannstrebelinvariantsforgraphgroups}.
As a corollary of our description, we can identify the complement of the BNS-invariant with the first resonance variety (see Proposition~\ref{prop:bns resonance}).
This improves the inclusion from \cite[Theorem 15.8]{PapadimaandSuciuBNSRinvariantsandHomologyJumpingLoci} to an equality.
Once again, this is analogous to the case of RAAGs; see \cite[Theorem 5.5]{PapadimaSuciuAlgebraicinvariantsforRAAGs}.
It should be noted that there are groups for which the inclusion is strict; see \cite{SU21}.
\subsection{The condition not to be a RAAG: redundant triangles}\label{intro: condition not RAAG}
The BNS-invariant of a RAAG or BBG is the complement of a certain arrangement of subspheres of the character sphere.
(Equivalently, one could consider the arrangement of linear subspaces given by the linear span of these subspheres.)
The structural properties of this arrangement do not depend on any particular presentation of the group, so this arrangement turns out to be a useful invariant.
In \S\ref{sec:IEP}, inspired by the work of \cite{KobanPiggottTheBNSofthepuresymmetricautomorphismofRAAG,DayWadeSubspaceArrangementBNSinvariantsandpuresymmetricOuterAutomorphismsofRAAGs}, we consider the question of whether the maximal members in this arrangement are ``in general position'', that is, whether they satisfy the inclusion-exclusion principle.
In \cite{KobanPiggottTheBNSofthepuresymmetricautomorphismofRAAG}, Koban and Piggott proved that the maximal members in the arrangement for a RAAG satisfy the inclusion-exclusion principle.
Day and Wade in \cite{DayWadeSubspaceArrangementBNSinvariantsandpuresymmetricOuterAutomorphismsofRAAGs} developed a homology theory to detect when an arrangement does not satisfy the inclusion-exclusion principle.
These results can be used together with our description of the BNS-invariants of BBGs to see that many BBGs are not RAAGs.
However, some BBGs elude Day--Wade's homology theory, such as the BBG defined on the graph in Figure~\ref{fig: extended PS no orientation}.
This motivated us to find an additional criterion to certify that a group $G$ is not a RAAG.
A more general result in Proposition~\ref{prop:criterion non RAAG} roughly says that if there are three maximal subspheres of $\bnsc{G}$ that are not ``in general position'', then $G$ is not a RAAG.
We are able to apply Proposition~\ref{prop:criterion non RAAG} to a wide class of BBGs.
This is based on the notion of a \textit{redundant triangle}.
Loosely speaking, a redundant triangle is a triangle in $ \Gamma$ such that the links of its vertices are separating subgraphs of $\Gamma$ that do not overlap too much (see \S\ref{sec:redundant triples BBGs} for the precise definition).
The presence of such a triangle provides a triple of missing subspheres (in the sense of our graphical description; see Theorem~\ref{intro:graphical description}) that does not satisfy the inclusion-exclusion principle.
\begin{introtheorem}\label{main thm BBGnotRAAG}
Let $\Gamma$ be a biconnected graph such that $\flag \Gamma$ is simply connected.
If $\Gamma$ has a redundant triangle, then $\bbg \Gamma$ is not a RAAG.
\end{introtheorem}
We emphasize that Theorem~\ref{main thm BBGnotRAAG} works without any assumptions on the dimension of $\flag \Gamma$.
On the other hand, the obstruction is $2$-dimensional, in the sense that it involves a triangle, regardless of the dimension of $\flag \Gamma$; see Example~\ref{ex:higher dimensional}.
\subsection{The 2-dimensional case: proof of Theorem~\ref{main thm 2dim}}\label{intro: 2-dim}
The two conditions described in \S\ref{intro: condition RAAG} and \S\ref{intro: condition not RAAG} are complementary when $\Gamma$ is biconnected and $\flag\Gamma$ is $2$-dimensional and simply connected.
This follows from some structural properties enjoyed by $2$-dimensional flag complexes.
In Proposition~\ref{prop: tree 2-spanner iff no crowned triangles}, we establish that $\Gamma$ admits a tree $2$-spanner if and only if $\flag \Gamma$ does not contain crowned triangles.
The ``if'' direction relies on a decomposition of $\flag \Gamma$ into certain elementary pieces, namely, the cones over certain 1-dimensional flag complexes.
It then follows from Theorem~\ref{main thm BBG=RAAG} that $\bbg \Gamma$ is a RAAG.
On the other hand, we show in Lemma~\ref{lem:crowned tri is redundant in dim 2} that every crowned triangle is redundant in dimension two.
It then follows from Theorem~\ref{main thm BBGnotRAAG} that $\bbg \Gamma$ is not a RAAG.
The theory of resonance varieties (see \S\ref{sec:resonance varieties}) allows us to conclude that $\bbg \Gamma$ cannot be a more general Artin group either.
Figure~\ref{fig:implications} illustrates the various implications.
The only implication we do not prove directly is that if $\bbg \Gamma$ is a RAAG, then $\Gamma$ has a tree $2$-spanner.
This implication follows from the other ones, and in particular, it means that one can write down the RAAG presentation for $\bbg \Gamma$ associated to the tree $2$-spanner.
This fact is a priori not obvious but quite satisfying.
For the sake of completeness, we note that Theorem~\ref{main thm 2dim} fails for higher-dimensional flag complexes; see Remark~\ref{rem: higher dimensional}.
\begin{figure}[h]
\centering
\begin{tikzpicture}[scale=1]
\node at (0,0) {$\bbg \Gamma$ is a RAAG};
\node at (0,-.5) {($\bbg \Gamma$ is an Artin group)};
\node at (-4,-2) {$\Gamma$ admits a};
\node at (-4,-2.5) {tree $2$-spanner};
\node at (4,-2) {$\flag \Gamma$ does not contain};
\node at (4,-2.5) {crowned triangles};
\draw [thick, <->] (-2.5,-2.25)--(2,-2.25);
\node at (0,-2.5) {Prop.~\ref{prop: tree 2-spanner iff no crowned triangles}};
\draw [thick, ->] (-4,-1.5)--(-1.5,0);
\node at (-3.5,-.5) {Thm.~\ref{main thm BBG=RAAG}};
\draw [thick, <->] (1.5,0)--(4,-1.5);
\node at (3.5,-.5) {Thm.~\ref{main thm BBGnotRAAG}};
\end{tikzpicture}
\caption{The implications in the proof of Theorem~\ref{main thm 2dim}}
\label{fig:implications}
\end{figure}
\subsection{Structure of the paper.}
The rest of the paper is organized as follows.
In \S\ref{section: preliminaries}, we fix some terminology and give some background on BBGs.
In \S\ref{section: BBGs that are RAAGs}, we study tree $2$-spanners and use them to provide a sufficient condition for a BBG to be a RAAG (Theorem~\ref{main thm BBG=RAAG}).
We also give many examples.
In \S\ref{section: BBGs that are not RAAGs}, we present a graphical criterion (Theorem~\ref{intro: graphical criterion}) and a graphical description (Theorem~\ref{intro:graphical description}) for the BNS-invariants of BBGs.
We use this to provide a sufficient condition for a BBG not to be a RAAG (Theorem~\ref{main thm BBGnotRAAG}).
This is based on a study of the inclusion-exclusion principle for the arrangements that define the complement of the BNS invariants.
We discuss the relation with resonance varieties in \S \ref{sec:resonance varieties}.
In \S\ref{section: BBGs on 2-dim flag complexes}, we provide a solution to the RAAG recognition problem for BBGs defined on $2$-dimensional flag complexes (Theorem~\ref{main thm 2dim}).
In the end, we include some observations about the higher dimensional case.
\bigskip
\noindent \textbf{Acknowledgements}
We thank Tullia Dymarz for bringing \cite{kochloukovamendonontheBNSRsigmainvariantsoftheBBGs} to our attention and Daniel C. Cohen, Pallavi Dani, Max Forester, Wolfgang Heil, Alexandru Suciu, and Matthew Zaremsky for helpful conversations.
We thank the School of Mathematics at the Georgia Institute of Technology for their hospitality during a visit in which part of this work was done.
The second author acknowledges support from the AMS and the Simons Foundation.
\section{Preliminaries}\label{section: preliminaries}
\addtocontents{toc}{\protect\setcounter{tocdepth}{2}}
\subsection{Notation and terminology}\label{subsection: notation and terminology}
In this paper, unless otherwise stated, a \textit{graph} $\Gamma$ is a finite $1$-dimensional simplicial complex, not necessarily connected.
We denote by $\vv \Gamma$ the set of its \textit{vertices} and by $\ee \Gamma$ the set of its \textit{edges}.
We do not fix any orientation on $\Gamma$, but we often need to work with \textit{oriented edges}. If $e$ is an oriented edge, then we denote its initial vertex and terminal vertex by $\iota e$ and $\tau e$, respectively; we denote by $\bar e$ the same edge with opposite orientation.
We always identify edges of $\Gamma$ with the unit interval and equip $\Gamma$ with the induced length metric.
A \textit{subgraph} of $\Gamma$ is a simplicial subcomplex, possibly not connected, possibly not full.
A \textit{path}, a \textit{cycle}, and a \textit{complete graph} on $n$ vertices are denoted by $P_n$, $C_n$, and $K_n$, respectively.
(Note that by definition, there is no repetition of edges in a path or cycle.)
A \textit{clique} of $\Gamma$ is a complete subgraph.
A \textit{tree} is a simply connected graph.
A \textit{spanning tree} of a graph $\Gamma$ is a subgraph $T\subseteq \Gamma$ such that $T$ is a tree and $\vv T = \vv \Gamma$.
The \emph{link} of a vertex $v \in \vv \Gamma$, denoted by $\lk{v,\Gamma}$, is the full subgraph induced by the vertices that are adjacent to $v$.
The \emph{star} of $v$ in $\Gamma$, denoted by $\st{v,\Gamma},$ is the full subgraph on $\lk{v,\Gamma}\cup\lbrace v\rbrace$.
More generally, let $\Lambda$ be a subgraph of $\Gamma$.
The \textit{link} of $\Lambda$ is the full subgraph $\lk{\Lambda,\Gamma}$ induced by vertices at distance $1$ from $\Lambda$.
The \emph{star} of $\Lambda$ in $\Gamma$, denoted by $\st{\Lambda,\Gamma},$ is the full subgraph on $\lk{\Lambda,\Gamma}\cup \vv \Lambda$.
The \emph{join} of two graphs $\Gamma_{1}$ and $\Gamma_{2}$, denoted by $\Gamma_{1}\ast\Gamma_{2}$, is the full graph on $V(\Gamma_{1})\cup V(\Gamma_{2})$ together with an edge joining each vertex in $V(\Gamma_{1})$ to each vertex in $V(\Gamma_{2})$.
A vertex in a graph that is adjacent to every other vertex is called a \emph{cone vertex}. A graph that has a cone vertex is called a \emph{cone graph}. In other words, a cone graph $\Gamma$ can be written as a join $\lbrace v\rbrace\ast\Gamma'$. In this case, we also say that $\Gamma$ is a \textit{cone over} $\Gamma'$.
The \textit{complement of $\Lambda$ in $\Gamma$} is the full subgraph $\Gamma\setminus \Lambda$ spanned by $\vv \Gamma \setminus \vv \Lambda$.
We say that $\Lambda$ is \textit{separating} if $\Gamma\setminus\Lambda$ is disconnected.
A \textit{cut vertex} of $\Gamma$ is a vertex that is separating as a subgraph.
A \textit{cut edge} of $\Gamma$ is an edge that is separating as a subgraph.
A graph is \textit{biconnected} if it has no cut vertices. If a graph is not biconnected, its \textit{biconnected components} are the maximal biconnected subgraphs.
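The notions of link, star, and cut vertex are all directly computable; the following helper sketch (our own code, evaluated on the trefoil graph from the introduction) makes them concrete.

```python
# Links, stars, and cut vertices, illustrated on the trefoil graph.
V = [1, 2, 3, 4, 5, 6]
E = {frozenset(e) for e in
     [(1, 2), (2, 3), (3, 1), (1, 4), (2, 4), (2, 5), (3, 5), (3, 6), (1, 6)]}

def lk(v):
    """Vertex set of the link of v: all vertices adjacent to v."""
    return {w for w in V if frozenset((v, w)) in E}

def st(v):
    """Vertex set of the star of v: the link together with v itself."""
    return lk(v) | {v}

def connected(vertices):
    vertices = set(vertices)
    comp, stack = set(), [min(vertices)]
    while stack:
        x = stack.pop()
        comp.add(x)
        stack.extend(y for y in vertices - comp if frozenset((x, y)) in E)
    return comp == vertices

def cut_vertices():
    # A vertex is a cut vertex iff deleting it disconnects the graph.
    return [v for v in V if not connected([w for w in V if w != v])]
```

On the trefoil graph, the link of the outer vertex $4$ is $\{1,2\}$, and there are no cut vertices, so the graph is biconnected.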
Given a graph $\Gamma$, the \textit{flag complex} $\flag \Gamma$ on $\Gamma$ is the simplicial complex obtained by gluing a $k$-simplex to $\Gamma$ for every collection of $k+1$ pairwise adjacent vertices of $\Gamma$ (for $k\geq 2$).
The \textit{dimension} of $\flag \Gamma$ is denoted by $\dim \flag \Gamma$ and defined to be the maximal dimension of a simplex in $\flag \Gamma$.
(If $\flag \Gamma$ is $1$-dimensional, then it coincides with $\Gamma$, and the following terminology agrees with the one introduced before.)
If $Z$ is a subcomplex of $\flag \Gamma$, the \textit{link} of $Z$ in $\flag \Gamma$, denoted by $\lk{Z,\flag \Gamma}$, is defined as the full subcomplex of $\flag \Gamma$ induced by the vertices at distance one from $Z$.
Similarly, the \textit{star} of $Z$ in $\flag \Gamma$, denoted by $\st{Z,\flag \Gamma}$, is defined as the full subcomplex induced by $\lk{Z,\flag \Gamma} \cup Z$.
\subsection{The Dicks--Leary presentation}\label{sec:DL presentation}
Let $\Gamma$ be a graph, and let $\raag \Gamma$ be the associated RAAG.
Let $\chi \colon \raag \Gamma \to \mathbb Z$ be the homomorphism sending all the generators to $1$. The \textit{Bestvina--Brady group} (BBG) on $\Gamma$, denoted by $\bbg\Gamma$, is defined to be the kernel of $\chi$.
When $\Gamma$ is connected, the group $\bbg \Gamma$ is finitely generated (see \cite{bestvinabradymorsetheoryandfinitenesspropertiesofgroups}) and has the following (infinite) presentation, called the \emph{Dicks--Leary presentation}.
\begin{theorem}\textnormal{(\cite[Theorem 1]{DicksLearypresentationsforsubgroupsofArtingroups})}\label{thm:DL presentation embedding}
Let $\Gamma$ be a graph. If $\Gamma$ is connected, then $\bbg \Gamma$ is generated by the set of oriented edges of $\Gamma$, and the relators are words of the form $e^{n}_{1}\dots e^{n}_{l}$ for each oriented cycle $(e_{1},\dots,e_{l})$, where $n,l\in\mathbb Z$, $n\neq 0$, and $l\geq2$.
Moreover, the group $\bbg \Gamma$ embeds in $\raag \Gamma$ via $e \mapsto \tau e(\iota e)^{-1}$ for each oriented edge $e$.
\end{theorem}
For some interesting classes of graphs, the Dicks--Leary presentation can be considerably simplified.
For instance, when the flag complex $\flag \Gamma$ on $\Gamma$ is simply connected, the group $\bbg \Gamma$ admits the following finite presentation.
\clearpage
\begin{corollary}\textnormal{(\cite[Corollary 3]{DicksLearypresentationsforsubgroupsofArtingroups})}\label{cor:DL presentation}
When the flag complex $\flag \Gamma$ on $\Gamma$ is simply connected, the group $\bbg \Gamma$ admits the following finite presentation: the generating set is the set of the oriented edges of $\Gamma$, and the relators are $e\bar{e}=1$ for every oriented edge $e$, and $e_{i}e_{j}e_{k}=1$ and $e_{k}e_{j}e_{i}=1$ whenever $(e_{i},e_{j},e_{k})$ form an oriented triangle; see Figure \ref{Oriented triangle}.
\end{corollary}
\begin{figure}[ht]
\begin{tikzpicture}[scale=0.5]
\draw [thick, middlearrow={stealth}] (4,3)--(0,0);
\draw [thick, middlearrow={stealth}] (0,0)--(6,0);
\draw [thick, middlearrow={stealth}] (6,0)--(4,3);
\node [left] at (2,2) {$e_{i}$};
\node at (3,-1) {$e_{j}$};
\node [right] at (5,2) {$e_{k}$};
\draw [fill] (0,0) circle (4pt);
\draw [fill] (6,0) circle (4pt);
\draw [fill] (4,3) circle (4pt);
\end{tikzpicture}
\caption{Oriented triangle.}
\label{Oriented triangle}
\end{figure}
\begin{remark}\label{rem:DL presentation commute}
In the notation of Corollary~\ref{cor:DL presentation}, it follows that $e_i$, $e_j$, and $e_k$ pairwise commute and generate a $\mathbb Z^2$ subgroup.
\end{remark}
\begin{example}\label{ex:bbg basic examples}
If $\Gamma$ is a tree on $n$ vertices, then $\bbg \Gamma$ is a free group of rank $n-1$.
If $\Gamma=K_n$ is a complete graph on $n$ vertices, then $\bbg \Gamma = \mathbb Z^{n-1}$.
\end{example}
Moreover, as observed by Papadima and Suciu, the edge set of a spanning tree is already enough to generate the whole group.
\begin{corollary}\textnormal{(\cite[Corollary 2.3]{PapadimaSuciuAlgebraicinvariantsforBBGs})}\label{cor:PS presentation}
Let $T$ be a spanning tree of $\Gamma$.
When the flag complex $\flag \Gamma$ on $\Gamma$ is simply connected, the group $\bbg \Gamma$ admits a finite presentation in which the generators are the edges of $T$, and the relators are commutators between words in generators.
\end{corollary}
\begin{remark}[Oriented vs unoriented edges]\label{rem:orientation}
The presentation from Corollary \ref{cor:DL presentation} is very symmetric but clearly redundant because each (unoriented) edge appears twice.
The orientation is just an accessory tool, and one can obtain a shorter presentation by choosing an arbitrary orientation for each edge $e$, dropping the relator $e\bar e$, and allowing inverses in the relators whenever needed.
For instance, this is what happens in Corollary~\ref{cor:PS presentation}.
Strictly speaking, each choice of orientation for the edges results in a slightly different presentation.
However, switching the orientation of an edge simply amounts to replacing a generator with its inverse.
Therefore, in the following sections, we will naively regard the generators in Corollary~\ref{cor:PS presentation} as being given by unoriented edges of $T$, and we will impose a specific orientation only when needed in a technical argument.
\end{remark}
\section{BBGs that are RAAGs}\label{section: BBGs that are RAAGs}
When $\Gamma$ is a tree or a complete graph, the group $\bbg \Gamma$ is a free group or a free abelian group, respectively. Hence, it is a RAAG (see Example~\ref{ex:bbg basic examples}).
In this section, we identify a wider class of graphs whose associated BBGs are RAAGs.
\subsection{Tree 2-spanners}\label{sec:tree 2-spanner}
Let $\Gamma$ be a connected graph. Recall from the introduction that a tree $2$-spanner of $\Gamma$ is a spanning tree $T$ of $\Gamma$ such that for all $x,y\in \vv T$, we have $d_T(x,y)\leq 2 d_\Gamma (x,y)$.
If $\Gamma$ is a tree, then $\Gamma$ is a tree $2$-spanner of itself.
Here we are interested in more general graphs that admit tree $2$-spanners.
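Since distances in $\Gamma$ are realized by concatenations of edges, the tree $2$-spanner condition only needs to be checked on adjacent pairs: $T$ is a tree $2$-spanner if and only if $d_T(x,y)\leq 2$ for every edge $(x,y)$ of $\Gamma$. The following sketch (illustrative only; the representation and names are ad hoc) verifies this via breadth-first search.

```python
# Illustrative sketch: verifying the tree 2-spanner condition.
# Graphs are dicts mapping vertices to neighbor sets; it suffices to check
# d_T(x, y) <= 2 for every edge (x, y) of the ambient graph.
from collections import deque

def distances(graph, source):
    """Breadth-first-search distances from source."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for w in graph[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return dist

def is_tree_2_spanner(tree, graph):
    """tree is assumed to be a spanning tree of graph."""
    for x in graph:
        dist = distances(tree, x)
        if any(dist[y] > 2 for y in graph[x]):
            return False
    return True

# The path 0 - 1 - 2 is a tree 2-spanner of the triangle on {0, 1, 2},
# while no spanning path of the square C4 is a tree 2-spanner.
triangle = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
path = {0: {1}, 1: {0, 2}, 2: {1}}
square = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
square_path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
```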
We start by proving some useful properties of tree $2$-spanners.
\begin{lemma}\label{tree 2spanner dicothomy}
Let $T$ be a tree $2$-spanner of $\Gamma$, and let $e\in \ee \Gamma$.
Then either $e\in \ee T$ or there is a unique triangle $(e,f,g)$ such that $f,g\in \ee T$.
\end{lemma}
\proof
Write $e=(x,y)$. Then $d_T(x,y)\leq 2 d_\Gamma (x,y)=2$.
If $e$ is not an edge of $T$, then $d_T(x,y)=2$. So, there must be some $z\in \vv T$ adjacent to both $x$ and $y$ in $\Gamma$ such that the edges $f=(x,z)$ and $g=(y,z)$ are in $T$. Obviously, the edges $e$, $f$, and $g$ form a triangle.
To see that such a triangle is unique, let $(e,f',g')$ be another triangle such that $f',g'\in \ee T$. Then $(f,g,f',g')$ is a cycle in the spanning tree $T$, which leads to a contradiction.
\endproof
\begin{lemma}\label{tree 2spanner triangle dicothomy}
Let $T$ be a tree $2$-spanner of $\Gamma$.
Then in every triangle of $\Gamma$, either no edge is in $T$ or two edges are in $T$.
\end{lemma}
\proof
Let $(e,f,g)$ be a triangle in $\Gamma$, and assume by contradiction that $e\in \ee T$ but $f,g\not \in \ee T$.
Then by Lemma \ref{tree 2spanner dicothomy}, the edges $f$ and $g$ are contained in uniquely determined triangles $(f,f_1,f_2)$ and $(g,g_1,g_2)$, respectively, with $f_1,f_2,g_1,g_2\in \ee T$.
Then $(e,f_1,f_2,g_1,g_2)$ is a loop in $T$, which is absurd since $T$ is a tree.
\endproof
\begin{lemma}\label{tree 2spanner tetrahedron}
Let $T$ be a tree $2$-spanner of $\Gamma$, and let $(e,f,g)$ be a triangle in $\Gamma$ with no edges from $T$.
Then there are edges $e',f',g' \in \ee T$ that together with $e,f,g$ form a $K_4$ in $\Gamma$.
\end{lemma}
\proof
By Lemma~\ref{tree 2spanner dicothomy},
there are uniquely determined triangles $(e,e_1,e_2)$, $(f,f_1,f_2)$, and $(g,g_1,g_2)$ such that $e_1,e_2, f_1,f_2,g_1,g_2 \in \ee T$.
Let $v_e$ be the common vertex of $e_1$ and $e_2$, and similarly define $v_f$ and $v_g$.
If at least two vertices among $v_e$, $v_f$, and $v_g$ are distinct, then concatenating the edges $e_1,e_2,f_1,f_2,g_1,g_2$ gives a non-trivial loop in $T$, which is absurd. Thus, we have $v_e=v_f=v_g$.
Therefore, there is a $K_{4}$ induced by the vertex $v_{e}$ and the triangle $(e,f,g)$.
\endproof
We establish the following result about the global structure of $\flag \Gamma$.
(We will prove in Corollary~\ref{cor:tree2spanner implies contractible} that if $\Gamma$ admits a tree 2-spanner, then $\flag \Gamma$ is even contractible.)
\begin{lemma}\label{tree 2-spanner implies simply connected}
If $\Gamma$ has a tree $2$-spanner, then $\flag \Gamma$ is simply connected.
\end{lemma}
\begin{proof}
It is enough to check that every cycle of $\Gamma$ bounds a disk in $\flag \Gamma$.
Let $T$ be a tree $2$-spanner of $\Gamma$, and let $C=(e_1,e_2,\dots,e_n)$ be a cycle of $\Gamma$.
If $n=3$, then by construction $C$ bounds a triangle in $\flag \Gamma$. So, we may assume $n\geq 4$.
If $C$ contains a pair of vertices connected by an edge not in $C$, then $C$ can be split into the concatenation of two shorter cycles, and we can conclude by induction on the length of $C$.
So, we may assume that $C$ contains no such pair of vertices, that is, the cycle $C$ is chordless. In particular, all edges of $C$ are distinct.
For each $e_i\in \ee C$, either $e_i\in \ee T$ or $e_i\not \in \ee T$.
In the second case, by Lemma~\ref{tree 2spanner dicothomy}, there are two edges $e_i^-$ and $e_i^+$ in $\ee T$ such that $(e_i,e_i^-,e_i^+)$ form a triangle.
We denote by $w_i$ the common vertex of $e_i^-$ and $e_i^+$. Note that $w_i \not \in \vv C$ and $e_i^-,e_i^+\not \in \ee C$ because $C$ is assumed to be chordless and of length $n\geq 4$.
Let $L$ be the loop obtained by the following surgery on $C$ (see Figure~\ref{fig:tree2spanner_simplyconnected}, left): for each edge $e_i$, if $e_i\in \ee T$, then keep it; otherwise, replace it with the concatenation of the two edges $e_i^-$ and $e_i^+$.
Then $L$ is a loop made of edges of $T$. Since $T$ is a tree, the loop $L$ is contractible. Thus, if we start from a vertex of $L$ and travel along the edges of $L$ back to the starting vertex, then we must travel along each edge an even number of times in opposite directions.
Since $e_i^-,e_i^+\not \in \ee C$, each edge of $C$ appears at most once in $L$.
So, if some edge of $C$ appears in $L$, then $L$ is not contractible.
This proves that $\ee C \cap \ee T = \varnothing$, and therefore, we have $L=(e_1^-,e_1^+,e_2^-,e_2^+,\dots, e_n^-,e_n^+)$.
Once again, since the edges of $L$ must appear an even number of times, the loop $L$ contains a repeated edge. That is, we have $e_i^+= e_{i+1}^-$ and $w_i=w_{i+1}$ for some $i$.
Deleting this backtracking, that is, the vertex of $C$ shared by $e_i$ and $e_{i+1}$ together with the repeated edge, we obtain a shorter loop in $T$, made of edges from $L$.
Iterating the process, we see that $w_1,\dots, w_n$ are all actually the same vertex, say $w$ (see Figure~\ref{fig:tree2spanner_simplyconnected}, right).
Notice that every vertex of $C$ is adjacent to $w$, so $C$ is entirely contained in $\st{w,\Gamma}$.
Therefore, the cycle $C$ bounds a triangulated disk in $\flag \Gamma$ as desired.
\end{proof}
\begin{figure}[ht!]
\centering
\input{pictures/tree2spanner_simply_connected}
\caption{The construction of the loop $L$ from the cycle $C$ (left), and its contraction to a cone vertex (right).}
\label{fig:tree2spanner_simplyconnected}
\end{figure}
In the next statement, we show that if $\Gamma$ has a tree 2-spanner, then $\bbg \Gamma$ is a RAAG.
Even more: the tree 2-spanner itself provides a RAAG presentation for $\bbg \Gamma$.
Let $T$ be a tree 2-spanner for $\Gamma$.
Recall from the introduction that the dual graph $T^\ast$ of $T$ is the graph whose vertices are edges of $T$, and two vertices are adjacent if and only if the corresponding edges of $T$ are contained in the same triangle of $\Gamma$.
Roughly speaking, the dual graph encodes the way in which $T$ sits inside $\Gamma$.
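For small examples, the dual graph can be assembled directly from the triangles of $\Gamma$. The sketch below (illustrative only; names are ad hoc, and vertices are assumed to be sortable) records an edge of $T^\ast$ for every triangle of $\Gamma$ containing two edges of $T$.

```python
# Illustrative sketch: the dual graph T* of a tree 2-spanner T inside Gamma.
# Vertices of T* are the edges of T (as frozensets); two are adjacent when
# the corresponding edges of T lie in a common triangle of Gamma.
from itertools import combinations

def edge_set(graph):
    return {frozenset((u, v)) for u in graph for v in graph[u]}

def dual_graph(tree, graph):
    tree_edges = edge_set(tree)
    dual = {e: set() for e in tree_edges}
    for x, y, z in combinations(sorted(graph), 3):
        if y in graph[x] and z in graph[x] and z in graph[y]:
            sides = [e for e in (frozenset((x, y)), frozenset((y, z)),
                                 frozenset((x, z))) if e in tree_edges]
            for e, f in combinations(sides, 2):
                dual[e].add(f)
                dual[f].add(e)
    return dual

# Example: for the triangle with tree 2-spanner the path 0 - 1 - 2,
# the dual graph is a single edge, matching the fact that the BBG of a
# triangle is Z^2, the RAAG on a single edge.
triangle = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
path = {0: {1}, 1: {0, 2}, 2: {1}}
```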
\begin{maintheoremc}{B}\label{containing a tree 2-spanner implies that BBG is a RAAG}
If $\Gamma$ admits a tree $2$-spanner $T$, then $\bbg \Gamma$ is a RAAG. More precisely, the Dicks--Leary presentation can be simplified to the standard RAAG presentation with generating set $\ee T$. Moreover, we have $\bbg \Gamma \cong \raag{T^\ast}$.
\end{maintheoremc}
\begin{proof}
Let $T$ be a tree $2$-spanner of $\Gamma$.
By Lemma~\ref{tree 2-spanner implies simply connected}, the flag complex $\flag \Gamma$ is simply connected.
By Corollary~\ref{cor:DL presentation}, the Dicks--Leary presentation for $\bbg \Gamma$ is finite. The generators are the oriented edges of $\Gamma$, and the relators correspond to the oriented triangles in $\Gamma$.
By Corollary~\ref{cor:PS presentation}, the presentation can be further simplified by discarding all edges not in $T$ to obtain a presentation that only involves commutators between words in the generators.
We explicitly note that to achieve this, one also needs to choose an arbitrary orientation for each edge of $T$ (compare Remark~\ref{rem:orientation}).
To ensure that the resulting presentation is a standard RAAG presentation, we need to check that it is enough to use relators that are commutators of edges of $T$ (as opposed to commutators of more general words).
In order to do this, we check what happens to the Dicks--Leary presentation from Corollary~\ref{cor:DL presentation} when we remove a generator corresponding to an edge that is not in $T$.
The relators involving such an edge correspond to the triangles of $\Gamma$ that contain it.
One of them is the special triangle from Lemma~\ref{tree 2spanner dicothomy}, and there might be others, corresponding to further triangles containing that edge.
Let $e\in \ee \Gamma \setminus \ee T$.
By Lemma~\ref{tree 2spanner dicothomy}, we know that there is a unique triangle $(e,f,g)$ with $f,g\in \ee T$.
Then $(e^{\varepsilon_1},f^{\varepsilon_2},g^{\varepsilon_3})$ is an oriented triangle (in the sense of Figure~\ref{Oriented triangle}) for some suitable $\varepsilon_j = \pm 1$, where the negative exponent stands for a reversal in the orientation.
When we drop $e$ from the generating set, the relations $e^{\varepsilon_1}f^{\varepsilon_2}g^{\varepsilon_3}=1=g^{\varepsilon_3}f^{\varepsilon_2}e^{\varepsilon_1}$ can be replaced by $f^{\varepsilon_2}g^{\varepsilon_3}=e^{-\varepsilon_1}=g^{\varepsilon_3}f^{\varepsilon_2}$, hence, by the commutator $[f^{\varepsilon_2},g^{\varepsilon_3}]$ (compare with Remark~\ref{rem:DL presentation commute}).
But such a commutator can always be replaced by $[f,g]$. This is completely insensitive to the chosen orientation. This shows that the relators of the presentation from Corollary~\ref{cor:DL presentation}, which arise from the triangles provided by Lemma~\ref{tree 2spanner dicothomy}, are turned into commutators between generators in the presentation from Corollary~\ref{cor:PS presentation}.
We need to check what happens to the other type of relators.
We now show that they follow from the former type of relators and hence can be dropped.
As before, let $e\in \ee \Gamma \setminus \ee T$, and let $(e,f,g)$ be the triangle from Lemma~\ref{tree 2spanner dicothomy} having $f,g\in \ee T$.
Let $(e,f',g')$ be another triangle containing $e$.
Since $e\not \in \ee T$ and $(e,f,g)$ is the unique triangle containing $e$ with two edges in $T$ (Lemma~\ref{tree 2spanner dicothomy}), it follows from Lemma~\ref{tree 2spanner triangle dicothomy} that $f',g'\not \in \ee T$.
Therefore, by Lemma~\ref{tree 2spanner tetrahedron}, there are $e'',f'',g'' \in \ee T$ that form a $K_4$ together with $e$, $f'$, and $g'$; see the left picture of Figure~\ref{fig:proof of a BBG with a tree 2-spanner is a RAAG}.
Up to relabelling, say that $e''$ is the edge of this $K_4$ that is disjoint from $e$.
Then $(e,f'',g'')$ is a triangle containing $e$ with $f'',g''\in \ee T$.
Again, since the triangle $(e,f,g)$ is unique, we have $\{f'',g''\}=\{f,g\}$.
In particular, the triangles $(e,f,g)$ and $(e,f',g')$ are part of a common $K_4$; see the right picture of Figure~\ref{fig:proof of a BBG with a tree 2-spanner is a RAAG}.
The edges of this $K_4$ that are in $T$ are precisely $e''$, $f$, and $g$, and any two of them commute by Remark~\ref{rem:DL presentation commute}.
So, the relator $ef'g'$ follows from the fact that $e$, $f'$, and $g'$ can be rewritten in terms of $f$, $g$, and $e''$.
In particular, this relator can be dropped.
Therefore, the Dicks--Leary presentation for $\bbg \Gamma$ can be simplified to a presentation in which the generating set is $\ee T$, and the relators are commutators $[e_{i},e_{j}]$, where $e_{i}$ and $e_{j}$ are in $\ee T$ and are contained in the same triangle of $\flag \Gamma$.
In particular, we have $\bbg \Gamma \cong \raag{T^\ast}$.
\end{proof}
\begin{figure}[ht!]
\centering
\input{pictures/proof_BBG_tree2-spanner_RAAG}
\caption{The graph on the left shows the triangle $(e,f,g)$ and a $K_{4}$ consisting of the edges $e$, $f'$, $g'$, $e''$, $f''$, and $g''$. The red edges are in $\ee T$. The graph on the right illustrates the uniqueness of the triangle $(e,f,g)$.}
\label{fig:proof of a BBG with a tree 2-spanner is a RAAG}
\end{figure}
\begin{remark}
It is natural to ask which graphs admit tree $2$-spanners.
The problem of determining whether a \emph{weighted} graph admits a tree $2$-spanner is NP-complete (see \cite{BernPhDThesis}).
However, if an unweighted graph admits a tree $2$-spanner, then one can be found in linear time (see \cite[Theorem 4.5]{CaiCorneilTreeSpanners}).
\end{remark}
As a consequence, we have the following criterion to detect whether two BBGs are isomorphic in terms of the defining graphs in the special case where they admit tree 2-spanners.
\begin{maincorollaryc}{1}\label{cor:isom of bbgs}
Let $\Gamma$ and $\Lambda$ be two graphs admitting tree 2-spanners $T_\Gamma$ and $T_\Lambda$, respectively.
Then $\bbg \Gamma \cong \bbg \Lambda$ if and only if $T_\Gamma^\ast \cong T_\Lambda^\ast$.
\end{maincorollaryc}
\begin{proof}
The result follows from Theorem~\hyperref[containing a tree 2-spanner implies that BBG is a RAAG]{B} and the fact that two RAAGs are isomorphic if and only if their defining graphs are isomorphic; see Droms \cite{DromsIsomorphismsofGraphGroups}.
\end{proof}
\begin{remark}\label{rmk: noniso graphs give iso BBGs}
In general, non-isomorphic graphs can define isomorphic BBGs.
For example, any two trees with $n$ vertices define the same BBG (the free group of rank $n-1$).
Notice that every tree is a tree 2-spanner of itself with a totally disconnected dual graph.
Even when $\Gamma$ admits a tree 2-spanner with a connected dual graph, the group $\bbg \Gamma$ does not determine $\Gamma$; see Example~\ref{ex:iso bbg non iso graphs}.
\end{remark}
\begin{example}\label{ex:iso bbg non iso graphs}
Denote the graphs in Figure~\ref{fig:isomorphic BBGs with nonisomorphic tree 2 spanners} by $\Gamma$ and $\Lambda$.
Let $T_\Gamma$ and $T_\Lambda$ be the tree $2$-spanners of $\Gamma$ and $\Lambda$, respectively, given by the red edges in the pictures.
One can see that $\Gamma \not\cong \Lambda$ as well as $T_\Gamma \not \cong T_\Lambda$.
However, the dual graphs $T_\Gamma^\ast$ and $T_\Lambda^\ast$ are isomorphic to the path on five vertices $P_5$. Thus, by Theorem~\hyperref[containing a tree 2-spanner implies that BBG is a RAAG]{B} and Corollary~\hyperref[cor:isom of bbgs]{1}, we have $\bbg \Gamma \cong \raag {P_5} \cong \bbg \Lambda$.
\begin{figure}[h]
\centering
\input{pictures/Example_Whitney_Twist}
\caption{Non-isomorphic biconnected graphs give isomorphic BBGs.}
\label{fig:isomorphic BBGs with nonisomorphic tree 2 spanners}
\end{figure}
\end{example}
The following graph-theoretic statements might be of independent interest.
The first one says that any two tree 2-spanners for a graph $\Gamma$ sit in the same way inside $\Gamma$ (even though they do not have to be isomorphic as trees; see Example~\ref{ex:iso bbg non iso graphs}).
The second one strengthens the conclusion of Lemma~\ref{tree 2-spanner implies simply connected}.
\begin{corollary}
If $T_1$ and $T_2$ are tree 2-spanners for $\Gamma$, then $T_1^\ast\cong T_2^\ast$.
\end{corollary}
\begin{proof}
Take $\Gamma=\Lambda$ in Corollary~\hyperref[cor:isom of bbgs]{1}.
\end{proof}
\begin{corollary}\label{cor:tree2spanner implies contractible}
If $\Gamma$ admits a tree $2$-spanner, then $\flag \Gamma$ is contractible.
\end{corollary}
\begin{proof}
By Theorem~\hyperref[containing a tree 2-spanner implies that BBG is a RAAG]{B}, the group $\bbg \Gamma$ is isomorphic to the RAAG $\raag \Lambda$ on some graph $\Lambda$.
The Salvetti complex associated to $\Lambda$ is a finite classifying space for $\raag \Lambda$, so the group $\bbg \Gamma \cong \raag \Lambda$ is of type $F$.
It follows from \cite{bestvinabradymorsetheoryandfinitenesspropertiesofgroups} that $\flag \Gamma$ is simply connected and acyclic.
By the Hurewicz Theorem, the homotopy group $\pi_k(\flag \Gamma)$ is trivial for $k\geq 1$. By the Whitehead Theorem, we can conclude that $\flag \Gamma$ is contractible.
\end{proof}
\subsection{Joins and 2-trees}
In this section, we describe some ways of constructing new graphs out of old ones in such a way that the BBG defined on the resulting graph is a RAAG.
\subsubsection{Joins}
Recall from Section~\ref{subsection: notation and terminology} the definition of the join of two graphs.
It corresponds to a direct product operation on the associated RAAGs. The following corollary can also be found in \cite[Example 2.5]{PapadimaSuciuAlgebraicinvariantsforBBGs} and \cite[Proposition 3.4]{YCCIdentifyingDehnFunctionsofBBGfromtheirdefininggraphs}.
\begin{corollary}\label{cor: cone graph gives an isomorphism between BBG and RAAG}
Let $\Lambda$ be a graph.
If $\Gamma=\lbrace v\rbrace\ast\Lambda$, then $\bbg \Gamma\cong \raag \Lambda$.
\end{corollary}
\begin{proof}
Since $v$ is a cone vertex, the edges that are incident to $v$ form a tree $2$-spanner $T$ of $\Gamma$.
By Theorem~\hyperref[containing a tree 2-spanner implies that BBG is a RAAG]{B}, we know that $\bbg \Gamma$ is a RAAG, namely, $\bbg \Gamma \cong \raag{T^\ast}$.
The result follows from the observation that $T^\ast \cong \Lambda$.
\end{proof}
For instance, if $\Gamma$ does not contain a full subgraph isomorphic to $C_4$ or $P_3$, then $\Gamma$ is a cone (see the first Lemma in \cite{DromsSubgroupsofGraphGroups}), and the previous corollary applies.
Actually, in this case every subgroup of $\raag \Gamma$ is known to be a RAAG by the main Theorem in \cite{DromsSubgroupsofGraphGroups}.
\begin{remark}
Corollary~\ref{cor: cone graph gives an isomorphism between BBG and RAAG} implies that the class of BBGs contains the class of RAAGs, that is, every RAAG arises as the BBG defined on some graph.
\end{remark}
\begin{remark}\label{rem: cone on a non-RAAG BBG graph is a RAAG}
Corollary~\ref{cor: cone graph gives an isomorphism between BBG and RAAG} indicates that the failure of $\bbg \Gamma$ to be a RAAG is not obviously detected by subgraphs in general.
Indeed, if $\Gamma$ is a cone over $\Lambda$, then $\bbg \Gamma$ is always a RAAG, regardless of whether $\bbg \Lambda$ is a RAAG or not.
\end{remark}
\begin{corollary}\label{cor: join graph gives a RAAG}
Let $\Lambda$ be a graph and $\Gamma'$ a cone graph.
If $\Gamma=\Gamma'\ast\Lambda$, then $\bbg\Gamma$ is a RAAG.
\end{corollary}
\begin{proof}
Since $\Gamma'$ is a cone graph, so is $\Gamma$. Therefore, the group $\bbg\Gamma$ is a RAAG by Corollary \ref{cor: cone graph gives an isomorphism between BBG and RAAG}.
\end{proof}
\begin{corollary}\label{cor:center implies BBG is RAAG}
If $\raag \Gamma$ has non-trivial center, then $\bbg \Gamma$ is a RAAG.
\end{corollary}
\begin{proof}
By \cite[The Centralizer Theorem]{ServatiusAutomorphismsofGraphGroups}, when $\raag \Gamma$ has non-trivial center, there is a complete subgraph $\Gamma' \subseteq \Gamma$ such that each vertex of $\Gamma'$ is adjacent to every other vertex of $\Gamma$. That is, the graph $\Gamma$ decomposes as $\Gamma=\Gamma'\ast \Lambda$, where $\vv \Lambda =\vv \Gamma \setminus \vv \Gamma'$. Since a complete graph is a cone graph, the result follows from Corollary~\ref{cor: join graph gives a RAAG}.
\end{proof}
\begin{remark}
BBGs defined on arbitrary graph joins are not necessarily isomorphic to RAAGs. For example, the cycle of length four $C_4$ is the join of two pairs of non-adjacent vertices. The associated RAAG is $\ff_2\times \ff_2$, and the associated BBG is not a RAAG because it is not even finitely presented (see \cite{bestvinabradymorsetheoryandfinitenesspropertiesofgroups}).
\end{remark}
\subsubsection{2-trees}\label{section: example 2-trees}
Roughly speaking, a \textit{$2$-tree} is a graph obtained by gluing triangles along edges in a tree-like fashion.
More formally, the class of $2$-trees is defined recursively as follows: the graph consisting of a single edge is a $2$-tree, and then a graph $\Gamma$ is a $2$-tree if it contains a vertex $v$ such that the neighborhood of $v$ in $\Gamma$ is an edge and the graph obtained by removing $v$ from $\Gamma$ is still a $2$-tree.
The trefoil graph from Figure~\ref{fig:trefoil} is an explicit example of a $2$-tree.
A general $2$-tree may not be a triangulation of a $2$-dimensional disk as it can have branchings; see Figure~\ref{fig: A 2-tree that is not a triangulation of a disk} for an example.
It is not hard to see that the flag complex on a $2$-tree is simply connected. So, the associated BBG is finitely presented and has only commutator relators.
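The recursive definition translates directly into a greedy recognition procedure: repeatedly peel off a vertex whose neighborhood induces a single edge, and accept if a single edge remains. The sketch below is illustrative only (names ad hoc); for $2$-trees this greedy peeling is safe, since removing any such vertex again yields a $2$-tree.

```python
# Illustrative sketch: recognizing 2-trees by greedily peeling vertices
# whose neighborhood induces a single edge, mirroring the recursion.

def is_2_tree(graph):
    g = {v: set(nbrs) for v, nbrs in graph.items()}
    while len(g) > 2:
        peelable = [v for v in g if len(g[v]) == 2
                    and tuple(g[v])[0] in g[tuple(g[v])[1]]]
        if not peelable:
            return False
        v = peelable[0]
        for u in g.pop(v):
            g[u].discard(v)
    # Base case: a single edge.
    return len(g) == 2 and all(len(nbrs) == 1 for nbrs in g.values())

# A triangle is a 2-tree; the square C4 is not.
triangle = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
square = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
```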
Cai showed that a $2$-tree contains no trefoil subgraphs if and only if it admits a tree $2$-spanner; see \cite[Proposition 3.2]{caionspanning2trees}. The next corollary follows from Cai's result and Theorem~\hyperref[containing a tree 2-spanner implies that BBG is a RAAG]{B}.
In Section~\ref{section: BBGs on 2-dim flag complexes}, we will prove a more general result that, in particular, implies the converse of the following statement.
\begin{corollary}\label{cor: trefoil-free 2-tree implies BBG=RAAG}
Let $\Gamma$ be a $2$-tree.
If $\Gamma$ is trefoil-free, then $\bbg \Gamma$ is a RAAG.
\end{corollary}
\begin{example}[A bouquet of triangles]\label{ex: a bouquet of triangles}
Let $\Gamma$ be a $2$-tree as shown in Figure~\ref{fig: A 2-tree that is not a triangulation of a disk}. Since $\Gamma$ does not contain trefoil subgraphs, the group $\bbg \Gamma$ is a RAAG by Corollary~\ref{cor: trefoil-free 2-tree implies BBG=RAAG}. The reader can check that the red edges form a tree $2$-spanner of $\Gamma$.
\end{example}
\begin{figure}[ht!]
\centering
\input{pictures/ex_bbg_on_a_bouquet_of_triangles}
\caption{A $2$-tree whose flag complex is not a triangulated disk.}
\label{fig: A 2-tree that is not a triangulation of a disk}
\end{figure}
\section{BBGs that are not RAAGs}\label{section: BBGs that are not RAAGs}
In the previous section, we provided a condition on $\Gamma$ ensuring that $\bbg \Gamma$ is a RAAG. In this section, we obtain a condition on $\Gamma$ guaranteeing that $\bbg \Gamma$ is not a RAAG.
The main technical tool consists of a description of the BNS-invariants of BBGs in terms of the defining graphs.
\subsection{BNS-invariants of finitely generated groups}\label{sec:BNS stuff}
Let $G$ be a finitely generated group. A \emph{character} of $G$ is a homomorphism $\chi\colon G\rightarrow\mathbb R$. Two characters $\chi_{1}$ and $\chi_{2}$ are equivalent, denoted by $\chi_{1}\sim\chi_{2}$, whenever $\chi_{1}=\lambda\chi_{2}$ for some positive real number $\lambda$. Denote by $[\chi]$ the equivalence class of $\chi$.
The set of equivalence classes of non-zero characters of $G$ is called the \emph{character sphere} of $G$:
$$
\chars G=\big\lbrace[\chi] \ \vert \ \chi\in\mathrm{Hom}(G,\mathbb R)\setminus\lbrace0\rbrace\big\rbrace.
$$
The character sphere naturally identifies with the unit sphere in $\mathrm{Hom}(G,\mathbb R)$ (with respect to some background inner product), so by abuse of notation, we will often write $\chars G \subseteq \mathrm{Hom}(G,\mathbb R)$.
A character $\chi\colon G \to \mathbb R$ is called \textit{integral}, \textit{rational}, or \textit{discrete} if its image is an infinite cyclic subgroup of $\mathbb Z$, $\mathbb Q$, or $\mathbb R$, respectively. In particular, an integral character is rational, a rational character is discrete, and the equivalence class of a discrete character always contains an integral representative.
Let $\mathcal{S}$ be a finite generating set for $G$, and let $\operatorname{Cay}(G,\mathcal{S})$ be the Cayley graph for $G$ with respect to $\mathcal S$.
Note that the elements of $G$ are identified with the vertex set of the Cayley graph.
For any character $\chi\colon G\rightarrow\mathbb R$, let $\operatorname{Cay}(G,\mathcal{S})_{\chi\geq0}$ be the full subgraph of $\operatorname{Cay}(G,\mathcal{S})$ spanned by $\{g\in G \mid \chi (g)\geq 0\}$.
Bieri, Neumann, and Strebel \cite{bierineumannstrebelageometricinvariantofdiscretegroups} introduced a geometric invariant of $G$, known as the \emph{BNS-invariant} $\bns G$ of $G$, which is defined as the following subset of $\chars G$:
$$
\bns G =\big\lbrace[\chi]\in \chars G \ \big\vert \ \mathrm{Cay}(G,\mathcal{S})_{\chi\geq0} \ \text{is connected}\big\rbrace.
$$
They also proved that the BNS-invariant of $G$ does not depend on the generating set $\mathcal S$.
The interest in $\bns G$ is due to the fact that it can detect finiteness properties of normal subgroups with abelian quotients, such as kernels of characters.
For instance, the following statement can be taken as an alternative definition of what it means for a discrete character to belong to $\bns G$ (see \cite[\S 4]{bierineumannstrebelageometricinvariantofdiscretegroups} or \cite[Corollary A4.3]{strebelnotesonthesigmainvariants}).
\begin{theorem}\label{thm:bns fg}
Let $\chi\colon G \to \mathbb R$ be a discrete character. Then $\ker (\chi) $ is finitely generated if and only if both $[\chi]$ and $[-\chi]$ are in $\bns G$.
\end{theorem}
As a major motivating example, when $G$ is the fundamental group of a compact $3$-manifold $M$, the BNS-invariant $\bns G$ describes all the possible ways in which $M$ fibers over the circle with fiber a compact surface (see \cite{ST62,TH86,bierineumannstrebelageometricinvariantofdiscretegroups}).
\begin{remark}\label{rmk:bns antipodal}
Each group $G$ of interest in this paper admits an automorphism that acts as the antipodal map $\chi \mapsto -\chi$ on $\mathrm{Hom}(G,\mathbb R)$.
In this case, the BNS-invariant $\bns G$ is invariant under the antipodal map. Therefore, its rational points correspond exactly to discrete characters with finitely generated kernels.
\end{remark}
\begin{remark}[The complement and the missing subspheres]\label{rmk:missing subspheres}
It is often the case that the BNS-invariant $\bns G$ is better described in terms of its complement in the character sphere $\chars G$.
Moreover, for many groups of interest, the complement of the BNS-invariant is often a union of subspheres (see \cite{meierthebierineumannstrebelinvariantsforgraphgroups,kochloukovamendonontheBNSRsigmainvariantsoftheBBGs,KO21,BG84,CL16} for examples).
In this paper, the \textit{complement} of $\bns G$ is by definition $\bnsc {G}=\chars G \setminus \bns G$.
A \textit{great subsphere} is defined as a subsphere of $\chars G$ of the form $S_W=\chars G \cap W$, where $W$ is a linear subspace of $\operatorname{Hom}(G,\mathbb R)$ going through the origin.
We say that a great subsphere $S_W$ is a \textit{missing subsphere} if $S_W\subseteq \bnsc G$.
The subspace $W$ is the linear span of $S_W$ and is called a \textit{missing subspace}.
\end{remark}
\subsubsection{The BNS-invariants of \texorpdfstring{RAAGs}{right-angled Artin groups}}\label{sec:bns for raags}
The BNS-invariants of RAAGs have a nice description given by Meier and VanWyk \cite{meierthebierineumannstrebelinvariantsforgraphgroups}. Let $\Gamma$ be a graph and $\chi\colon \raag \Gamma\rightarrow\mathbb R$ a character of $\raag \Gamma$. Define the \emph{living subgraph} $\living \chi$ of $\chi$ to be the full subgraph of $\Gamma$ on the vertices $v$ with $\chi(v)\neq0$ and the \emph{dead subgraph} $\dead \chi$ of $\chi$ to be the full subgraph of $\Gamma$ on the vertices $v$ with $\chi(v)=0$.
Note that $\living \chi$ and $\dead \chi$ are disjoint, and they do not necessarily cover $\Gamma$.
A subgraph $\Gamma'$ of $\Gamma$ is \emph{dominating} if every vertex of $\Gamma\setminus\Gamma'$ is adjacent to some vertex of $\Gamma'$.
\begin{theorem}\textnormal{(A graphical criterion for $\bns{\raag \Gamma}$, \cite[Theorem 4.1]{meierthebierineumannstrebelinvariantsforgraphgroups})}\label{BNS-invariant for raag}
Let $\chi\colon \raag \Gamma\rightarrow\mathbb R$ be a character. Then $[\chi]\in \bns{\raag \Gamma}$ if and only if $\living \chi$ is connected and dominating.
\end{theorem}
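Theorem~\ref{BNS-invariant for raag} makes membership in $\bns{\raag \Gamma}$ a finite check on the defining graph. The following minimal sketch (ours, not from \cite{meierthebierineumannstrebelinvariantsforgraphgroups}; the graph is encoded as an adjacency dictionary, and all names are hypothetical) implements the living subgraph criterion:

```python
# A sketch of the Meier--VanWyk criterion: [chi] lies in the
# BNS-invariant of the RAAG on Gamma iff the living subgraph of chi
# is connected and dominating.  All names here are our own.

def in_bns_raag(adj, chi):
    """adj : dict mapping each vertex to the set of adjacent vertices
    chi : dict mapping each vertex to its character value"""
    living = {v for v in adj if chi[v] != 0}
    if not living:
        return False  # the zero character is excluded
    # Connectivity of the full subgraph induced on `living`.
    stack, seen = [next(iter(living))], set()
    while stack:
        v = stack.pop()
        if v in seen:
            continue
        seen.add(v)
        stack.extend(w for w in adj[v] if w in living)
    if seen != living:
        return False
    # Domination: every dead vertex is adjacent to a living one.
    return all(adj[v] & living for v in adj if v not in living)

# Path a -- b -- c: the character vanishing on the middle vertex has
# disconnected living subgraph, so it is not in the BNS-invariant.
path = {"a": {"b"}, "b": {"a", "c"}, "c": {"b"}}
print(in_bns_raag(path, {"a": 1, "b": 0, "c": 1}))   # False
print(in_bns_raag(path, {"a": 1, "b": 1, "c": -1}))  # True
```

Note that, by Theorem~\ref{thm:bns fg}, for a discrete character such a check decides whether the kernel is finitely generated.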
By Theorem~\ref{thm:bns fg}, if $\chi$ is a discrete character, then $\living \chi$ detects whether $\ker (\chi)$ is finitely generated.
Indeed, in a RAAG, the map sending a generator to its inverse is a group automorphism. Hence, the set $\bns{\raag \Gamma}$ is invariant under the antipodal map $\chi\mapsto -\chi$ (see Remark~\ref{rmk:bns antipodal}).
We find it convenient to work with the following reformulation of the condition in Theorem~\ref{BNS-invariant for raag}.
It previously appeared inside the proof of \cite[Corollary 3.4]{lorenzo}.
We include a proof for completeness.
For the sake of clarity, in the following lemma, the graph $\Lambda$ is a subgraph of $\dead \chi$ that is separating as a subgraph of $\Gamma$, but it may not separate $\dead \chi$ (see Figure~\ref{fig:bad dead subgraph} for an example).
\begin{lemma}\label{lem:dead subgraph criterion for RAAGs}
Let $\chi\colon \raag \Gamma \rightarrow \mathbb R$ be a non-zero character. Then the following statements are equivalent.
\begin{enumerate}
\item \label{item:living} The living graph $\living \chi$ is either not connected or not dominating.
\item \label{item:dead} There exists a full subgraph $\Lambda \subseteq \Gamma$ such that $\Lambda$ separates $\Gamma$ and $\Lambda \subseteq \dead \chi$.
\end{enumerate}
\end{lemma}
\proof
We begin by proving that \eqref{item:living} implies \eqref{item:dead}.
If $\living \chi$ is not connected, then $\Lambda=\dead \chi$ separates $\Gamma$.
If $\living \chi$ is not dominating, then there is a vertex $v\in \vv \Gamma$ such that $\chi$ vanishes on $v$ and on all the vertices adjacent to $v$.
Since $\chi$ is non-zero, the vertex $v$ is not a cone vertex.
In particular, the graph $\Lambda = \lk{v,\Gamma}$ is a subgraph of $\dead \chi$. Moreover, the subgraph $\Lambda$ is a full separating subgraph of $\Gamma$, as desired.
To prove that \eqref{item:dead} implies \eqref{item:living}, assume that $\living \chi$ is connected and dominating.
Let $\Lambda \subseteq \dead \chi$ be a full subgraph of $\Gamma$, and let $u_1, u_2\in \vv \Gamma \setminus \vv \Lambda$.
We want to show that $u_1$ and $u_2$ can be connected in the complement of $\Lambda$. There are three cases.
Firstly, if $u_{1}$ and $u_{2}$ are vertices of $\living \chi$, then they are connected by a path entirely in $\living \chi$.
Secondly, if both $u_{1}$ and $u_{2}$ are vertices of $\dead \chi$, then they are adjacent to some vertices in $\living \chi$, say $v_{1}$ and $v_{2}$, respectively.
Then we can extend a path in $\living \chi$ between $v_{1}$ and $v_{2}$ to a path between $u_{1}$ and $u_{2}$ avoiding $\vv\Lambda$.
Finally, suppose that $u_{1}$ is a vertex of $\living \chi$ and $u_{2}$ is a vertex of $\dead \chi$. Then $u_{2}$ is adjacent to a vertex $v_{2}$ of $\living \chi$, and again we can extend a path in $\living \chi$ between $u_{1}$ and $v_{2}$ to a path between $u_{1}$ and $u_{2}$ avoiding $\vv \Lambda$. In all three cases, we have connected $u_1$ to $u_2$ with a path disjoint from $\Lambda$. This shows that $\Lambda$ is not separating, so no subgraph as in \eqref{item:dead} exists.
\endproof
Notice that a subgraph $\Lambda$ arising from Lemma~\ref{lem:dead subgraph criterion for RAAGs} may not be connected and may not be equal to $\dead \chi$. Also, it may not even be a union of connected components of $\dead \chi$.
This can happen, in particular, when looking for a minimal such $\Lambda$; see Figure~\ref{fig:bad dead subgraph}.
\begin{figure}[ht!]
\centering
\input{pictures/bad_dead_subgraph_tikz_version}
\caption{The subgraph $\Lambda$ in Lemma~\ref{lem:dead subgraph criterion for RAAGs} may not be a union of connected components of $\dead \chi$. Here, the graph $\Lambda$ is given by the two red vertices.}
\label{fig:bad dead subgraph}
\end{figure}
\begin{remark}\label{rem:complement BNS for RAAGs}
It follows from Theorem~\ref{BNS-invariant for raag} that $\bnsc{\raag \Gamma}$ is a rationally defined spherical polyhedron, given by a union of missing subspheres (see \cite[Theorem 5.1]{meierthebierineumannstrebelinvariantsforgraphgroups}).
Moreover, each (maximal) missing subsphere consists of characters that vanish on a (minimal) separating subgraph of $\Gamma$, thanks to Lemma~\ref{lem:dead subgraph criterion for RAAGs} (see also \cite[Proposition A4.14]{strebelnotesonthesigmainvariants}).
For example, the missing hyperspheres in $\bnsc{\raag\Gamma}$ are in bijective correspondence with the cut vertices of $\Gamma$.
We will further discuss the correspondence in Example \ref{ex: bns of raag on tree}.
\end{remark}
\subsubsection{The BNS-invariants of \texorpdfstring{BBGs}{Bestvina-Brady groups}}
As in the case of RAAGs, some elementary properties of the BNS-invariants of BBGs can be seen directly from the defining graph.
\begin{example}
The graph $\Gamma$ is complete if and only if $\bns{\bbg \Gamma}=\chars{\bbg \Gamma}$.
Indeed, in this case, the group $\bbg \Gamma$ is free abelian, and the BNS-invariant of a finitely generated abelian group is the whole character sphere.
\end{example}
\begin{example}\label{ex:cut vertex implies empty bns}
At the opposite extreme, if $\Gamma$ is connected and has a cut vertex, then $\bns{\bbg \Gamma} =\varnothing$ (see \cite[Corollary 15.10]{PapadimaandSuciuBNSRinvariantsandHomologyJumpingLoci}).
Conversely, if $\Gamma$ has no cut vertices and $\bbg \Gamma$ is finitely presented, then we will prove in Corollary~\ref{cor:bns for bbg non empty} that $\bns{\bbg \Gamma}\neq \varnothing$.
\end{example}
The following lemma shows that the BNS-invariant of a BBG is invariant under the antipodal map, as in the case of a RAAG.
\begin{lemma}\label{graph orientation reversing map is an isomorphism on BBG}
For all $\chi\colon\bbg \Gamma\to\mathbb R$, if $[\chi]\in\bns{\bbg \Gamma}$, then $[-\chi]\in\bns{\bbg \Gamma}$.
\end{lemma}
\begin{proof}
Choose an orientation for the edges of $\Gamma$, and let $f\colon\Gamma\to\Gamma$ be the map that reverses the orientation on each edge.
Then $f$ induces an automorphism $f_{\ast}\colon\bbg \Gamma\to\bbg \Gamma$ which sends every generator $e$ to its inverse $e^{-1}$.
Then the lemma follows from the fact that $-\chi = \chi \circ f_\ast$ (see Remark~\ref{rmk:bns antipodal}).
\end{proof}
Beyond these observations, not many explicit properties are known, and more refined tools are needed. We will use a recent result of Kochloukova and Mendon\c{c}a that relates the BNS-invariant of a BBG to that of the ambient RAAG.
The following statement is a particular case of \cite[Corollary 1.3]{kochloukovamendonontheBNSRsigmainvariantsoftheBBGs}.
\begin{proposition}\label{character in the BNS of BBG iff all the extensions are in the BNS of RAAG}
Let $\Gamma$ be a connected graph, and let $\chi\colon\bbg \Gamma \to \mathbb R$ be a character.
Then $[\chi]\in\bns{\bbg \Gamma}$ if and only if $[\hat{\chi}]\in\bns{\raag \Gamma}$ for every character $\hat{\chi}\colon\raag \Gamma \to \mathbb R$ that extends $\chi$.
\end{proposition}
Proposition~\ref{character in the BNS of BBG iff all the extensions are in the BNS of RAAG} allows one to recover the previous observations as well as more properties of the BNS-invariants of BBGs, which are reminiscent of those of RAAGs. For instance, the complement of the BNS-invariant of a BBG is a rationally defined spherical polyhedron (see \cite[Corollary 1.4]{kochloukovamendonontheBNSRsigmainvariantsoftheBBGs} and compare with Remark~\ref{rem:complement BNS for RAAGs}).
\subsubsection{Coordinates and labellings}\label{sec:coordinates}
Here, we want to describe a useful parametrization for $\operatorname{Hom}(\raag \Gamma, \mathbb R)$ and $\operatorname{Hom}(\bbg \Gamma, \mathbb R)$ in terms of labelled graphs.
This is based on the following elementary observation about a class of groups with a particular type of presentation that includes RAAGs and BBGs.
\begin{lemma}\label{lem:zero exp sum presentation}
Let $G$ be a group with a presentation $G=\langle \mathcal{S} | \mathcal{R}\rangle$ in which for each generator $s$ and each relator $r$, the exponent sum of $s$ in $r$ is zero.
Let $A$ be an abelian group.
Then there is a bijection between $\operatorname{Hom}(G,A)$ and $\{ f\colon \mathcal{S} \to A \}$.
\end{lemma}
\begin{proof}
Given a homomorphism $G\to A$, one obtains a function $\mathcal{S} \to A$ just by restriction.
Conversely, let $f \colon \mathcal{S} \to A$ be any function and let $\widetilde f \colon \ff (\mathcal{S}) \to A$ be the induced homomorphism on the free group on $\mathcal{S}$.
Let $r\in \ff (\mathcal{S})$ be a relator for $G$.
Since the exponent sum of each generator in $r$ is zero and $A$ is abelian, we have that $\widetilde{f}(r)$ is trivial in $A$.
Therefore, the homomorphism $\widetilde f\colon \ff (\mathcal{S}) \to A$ descends to a well-defined homomorphism $G\to A$.
\end{proof}
A typical example of a relator in which the exponent sum of every generator is zero is a commutator.
In particular, the standard presentation for a RAAG and the simplified Dicks--Leary presentation for a BBG in Corollary~\ref{cor:PS presentation} are presentations of this type.
We now show how Lemma~\ref{lem:zero exp sum presentation} can be used to introduce nice coordinates on $\operatorname{Hom}(G,\mathbb R)$ for these two classes of groups.
Let $\Gamma$ be a connected graph, and let $\vv \Gamma = \lbrace v_{1},\dots,v_{n}\rbrace$.
By Lemma~\ref{lem:zero exp sum presentation}, a homomorphism $\chi \colon \raag \Gamma \to \mathbb R$ is uniquely determined by its values on $\vv \Gamma$.
Therefore, we get a natural identification
$$ \mathrm{Hom}(\raag \Gamma,\mathbb R) \to \mathbb R^{|\vv \Gamma|}, \quad \chi \mapsto (\chi(v_1),\dots, \chi(v_n)).$$
In other words, a character $\chi\colon \raag \Gamma \to \mathbb R$ is the same as a labelling of $\vv \Gamma$ by real numbers.
A natural basis for $\mathrm{Hom}(\raag \Gamma,\mathbb R)$ is given by the characters $\chi_1,\dots,\chi_n$, where $\chi_i(v_j)=\delta_{ij}$.
For BBGs, a similar description is available in terms of edge labellings.
Unlike for RAAGs, not every assignment of real numbers to the edges of $\Gamma$ corresponds to a character.
Indeed, the labels along an oriented cycle must sum to zero.
So, assigning the labels on sufficiently many edges already determines the labels on the other ones.
To find a clean description, we assume that the flag complex $\flag \Gamma$ is simply connected, and we fix a spanning tree $T$ of $\Gamma$ with $\ee T=\lbrace e_{1},\dots,e_{m}\rbrace$.
By Corollary~\ref{cor:PS presentation}, we know that the Dicks--Leary presentation can be simplified so that the generating set is $\ee T$ and all relators are commutators.
By Lemma~\ref{lem:zero exp sum presentation}, we get an identification
$$ \mathrm{Hom}(\bbg \Gamma,\mathbb R) \to \mathbb R^{|\ee T|}, \quad \chi \mapsto (\chi(e_1),\dots, \chi(e_m)).$$
In other words, a character $\chi\colon \bbg \Gamma \to \mathbb R$ is encoded by a labelling of $\ee T$ by real numbers.
To obtain a basis for $\mathrm{Hom}(\bbg \Gamma,\mathbb R)$, one can take the characters $\chi_1,\dots,\chi_m$, where $\chi_i(e_j)=\delta_{ij}$, with respect to some arbitrary orientation of the edges of $T$ (compare Remark~\ref{rem:character on edges}).
\begin{remark}[Computing a character on an edge]\label{rem:character on edges}
Note that there is a slight abuse of notation: strictly speaking, in order to see an edge $e$ as an element of $\bbg \Gamma$, one needs to orient it.
So, it only makes sense to evaluate a character $\chi$ on an oriented edge (see Remark~\ref{rem:orientation}).
However, the value of $\chi(e)$ with respect to the two possible orientations just differs by a sign.
Indeed, if we change the orientation of an edge and the sign of the corresponding label, then we obtain a different description of the same character of $\bbg \Gamma$.
In particular, it makes sense to say that a character vanishes or not on an edge, regardless of orientation.
Moreover, given the symmetry of $\bns{\bbg \Gamma}$ under sign changes (see Remark~\ref{rmk:bns antipodal} and Lemma~\ref{graph orientation reversing map is an isomorphism on BBG}), it is still quite meaningful and useful to think of a character as an edge labelling for a spanning tree $T$.
\end{remark}
As a result of the previous discussions, we obtain the following lemma, which we record for future reference.
\begin{lemma}\label{lem:characters of f.p BBGs are given by assigning values on spanning trees}
Let $\Gamma$ be a graph with $\flag \Gamma$ simply connected.
Let $T$ be a spanning tree of $\Gamma$.
Fix an orientation for the edges of $T$.
Then the following statements hold.
\begin{enumerate}
\item A character $\chi\colon\bbg \Gamma\to\mathbb R$ is uniquely determined by its values on $\ee T$.
\item Any assignment $\ee T \to \mathbb R$ uniquely extends to a character $\chi\colon\bbg \Gamma\to\mathbb R$.
\end{enumerate}
\end{lemma}
We conclude this section with a description of a natural restriction map.
Recall from Theorem~\ref{thm:DL presentation embedding} that $\bbg \Gamma$ embeds in $\raag \Gamma$ via $e\mapsto \tau e(\iota e)^{-1}$ for each oriented edge $e$, where $\iota e$ and $\tau e$ respectively denote the initial vertex and terminal vertex of $e$.
We have an induced restriction map
$$ r\colon \operatorname{Hom}(\raag \Gamma,\mathbb R) \to \operatorname{Hom}(\bbg \Gamma,\mathbb R), \ \hat \chi \mapsto r\hat\chi,$$
where $(r\hat \chi) (e)=\hat \chi (\tau e) - \hat \chi (\iota e)$.
See Figure~\ref{fig:ex_bns_bbg_conditions_needed} for examples.
The next result follows from the Dicks--Leary presentation in Theorem~\ref{thm:DL presentation embedding}, so it holds without additional assumptions on $\Gamma$.
\begin{lemma}\label{lem:existence extension}
Let $\Gamma$ be a connected graph.
The restriction map $r\colon \operatorname{Hom}(\raag \Gamma,\mathbb R) \to \operatorname{Hom}(\bbg \Gamma,\mathbb R)$ is a linear surjection, and its kernel consists of the characters defined by the constant functions $\vv \Gamma \to \mathbb R$.
\end{lemma}
\begin{proof}
The map $r$ is clearly linear.
Let us prove that it is surjective.
Let $\chi\colon \bbg \Gamma \to \mathbb R$ be a character.
Define a character $\hat{\chi}\in\mathrm{Hom}(\raag \Gamma,\mathbb R)$ by prescribing its values on vertices as follows.
Fix some $v_0 \in \vv \Gamma$ and choose $ \hat\chi (v_0) \in \mathbb R$ arbitrarily.
Pick a vertex $v\in \vv \Gamma$ and choose an oriented path $p=e_{1}\dots e_{k}$ connecting $v_0$ to $v$.
Here, we mean that $p$ is oriented from $v_0$ to $v$ and that all the edges along $p$ are given the induced orientation. In particular, the edges of $p$ can be seen as elements of $\bbg \Gamma$.
Define the value at $v$ to be:
\begin{equation}\label{eq:extension}
\hat{\chi}(v)=\hat \chi (v_0)+\sum^{k}_{i=1}\chi(e_{i}).
\end{equation}
We claim that $\hat{\chi}\colon \vv \Gamma\to \mathbb R$ is well-defined.
Suppose that there is another oriented path $p'=e'_{1}\dots e'_{h}$
from $v_0$ to $v$.
Then the loop $p(p')^{-1}$ gives a relator in the Dicks--Leary presentation for $\bbg \Gamma$.
Thus, we have
$$\chi (e_{1}\dots e_{k} (e'_{1}\dots e'_{h})^{-1})=0. $$
In other words, we have
$$\sum^{k}_{i=1}\chi(e_{i}) = \sum^{h}_{j=1}\chi(e'_{j}).$$
Therefore, the value $\hat{\chi}(v)$ does not depend on the choice of a path from $v_0$ to $v$, as desired.
This shows that $\hat \chi\colon\raag \Gamma \to \mathbb R$ is a well-defined character.
A direct computation shows that for each oriented edge $e$, we have $\chi(e)=\hat \chi (\tau e)- \hat \chi (\iota e)$. That is, the character $\hat \chi$ is an extension of $\chi$ to $\raag \Gamma$.
To describe the kernel of $r$, note that if $\hat \chi$ is constant on $\vv \Gamma$, then for each oriented edge $e$ we have $(r\hat \chi)(e)=\hat \chi (\tau e) - \hat \chi (\iota e)=0$.
Conversely, let $\hat \chi \in \ker (r)$. It follows from \eqref{eq:extension} that $\hat \chi(v)=\hat \chi(w)$ for any choice of $v,w\in \vv \Gamma$.
\end{proof}
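The construction in the proof above, equation~\eqref{eq:extension}, can be carried out by a breadth-first traversal. The following sketch (ours; all names are hypothetical) propagates the edge labels of a character of $\bbg \Gamma$ to vertex values of an extension with $\hat{\chi}(v_0)=0$; well-definedness on cycles is exactly the content of the proof above:

```python
from collections import deque

def extend_to_raag(adj, edge_chi, v0):
    """Extend an edge labelling to a vertex labelling with value 0 at v0.

    adj      : dict mapping each vertex to the set of adjacent vertices
    edge_chi : dict mapping oriented edges (u, v) to chi(u, v);
               we assume edge_chi[(v, u)] == -edge_chi[(u, v)]
    """
    hat_chi = {v0: 0}
    queue = deque([v0])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in hat_chi:
                # Equation (eq:extension) along one more edge of a path.
                hat_chi[v] = hat_chi[u] + edge_chi[(u, v)]
                queue.append(v)
    return hat_chi

# Triangle u, v, w: the labels sum to zero along the oriented cycle,
# so the resulting vertex values do not depend on the path taken.
adj = {"u": {"v", "w"}, "v": {"u", "w"}, "w": {"u", "v"}}
edge_chi = {("u", "v"): 1, ("v", "u"): -1,
            ("v", "w"): 2, ("w", "v"): -2,
            ("u", "w"): 3, ("w", "u"): -3}
ext = extend_to_raag(adj, edge_chi, "u")
print(ext["u"], ext["v"], ext["w"])  # 0 1 3
```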
Note that the (non-zero) characters defined by the constant functions $\vv \Gamma \to \mathbb R$ all differ by a (multiplicative) constant.
In particular, they all have the same kernel, which is precisely $\bbg \Gamma$.
The restriction map $r$ has a natural linear section
$$ s\colon \operatorname{Hom}(\bbg \Gamma,\mathbb R) \to \operatorname{Hom}(\raag \Gamma,\mathbb R), \ \chi \mapsto s\chi,$$
defined as follows.
Let $\hat \chi$ be any extension of $\chi$ to $\raag \Gamma$.
Then define
$$ s\chi = \hat \chi - \Big( \frac{1}{|\vv \Gamma|} \sum_{v\in \vv \Gamma} \hat \chi (v) \Big) \mathbf{1},$$
where $\mathbf{1}$ denotes the character of $\raag \Gamma$ taking the value $1$ on every vertex.
Since any two extensions of $\chi$ differ by a constant character (Lemma~\ref{lem:existence extension}), the character $s\chi$ does not depend on the choice of $\hat \chi$, so $s$ is well-defined.
The image of $s$ is a hyperplane $W$ going through the origin that can be regarded as a copy of $\operatorname{Hom}(\bbg \Gamma,\mathbb R)$ inside $\operatorname{Hom}(\raag \Gamma,\mathbb R)$.
Recall that if $\vv \Gamma=\{v_1,\dots, v_n\}$, then $\operatorname{Hom}(\raag \Gamma,\mathbb R)$ carries a canonical basis given by the characters $\chi_1,\dots, \chi_n$ such that $\chi_i(v_j)=\delta_{ij}$.
Fix an inner product that makes this basis orthonormal.
Then $\ker (r) = \operatorname{span}(1,\dots,1)$, the hyperplane $W$ is the orthogonal complement of $\ker (r)$, and the restriction map (or rather the composition $s\circ r$) is given by the orthogonal projection onto $W$.
It is natural to ask how this behaves with respect to the BNS-invariants, that is, whether $r$ restricts to a map $\bns{\raag \Gamma} \to \bns{\bbg \Gamma}$.
In general, this is not the case. For instance, the set $\bns{\bbg \Gamma}$ could be empty even if $\bns{\raag \Gamma}$ is not; this happens, for example, when $\Gamma$ is connected with a cut vertex (see Example~\ref{ex:cut vertex implies empty bns}).
On the other hand, the restriction map $r$ maps each missing subspace of $\operatorname{Hom}(\raag \Gamma,\mathbb R)$ into one of the missing subspaces of $\operatorname{Hom}(\bbg \Gamma,\mathbb R)$ (compare Remark~\ref{rmk:missing subspheres} and Remark~\ref{rem:general position arrangements}).
Indeed, one way to reinterpret the content of Proposition~\ref{character in the BNS of BBG iff all the extensions are in the BNS of RAAG} (a particular case of \cite[Corollary 1.3]{kochloukovamendonontheBNSRsigmainvariantsoftheBBGs}) is to say that if $\chi\in \operatorname{Hom}(\bbg \Gamma, \mathbb R) \cong W$, then $[\chi]\in \bns{\bbg \Gamma}$ if and only if the line parallel to $\ker (r)$ passing through $\chi$ avoids all the missing subspaces of $\operatorname{Hom}(\raag \Gamma,\mathbb R)$.
\subsection{A graphical criterion for \texorpdfstring{$\bns{\bbg\Gamma}$}{the BNS-invariant of a BBG}}
In this subsection, we give a graphical criterion for the BNS-invariants of BBGs that is analogous to Theorem~\ref{BNS-invariant for raag}.
Let $\chi\in\mathrm{Hom}(\bbg \Gamma,\mathbb R)$ be a non-zero character. An edge $e\in \ee \Gamma$ is called a \emph{living edge} of $\chi$ if $\chi(e)\neq0$; it is called a \emph{dead edge} of $\chi$ if $\chi(e)=0$.
This is well-defined, regardless of orientation, as explained in Remark~\ref{rem:character on edges}.
We define the \emph{living edge subgraph}, denoted by $\livingedge \chi$, to be the subgraph of $\Gamma$ consisting of the living edges of $\chi$.
The \emph{dead edge subgraph} $\deadedge \chi$ is the subgraph of $\Gamma$ consisting of the dead edges of $\chi$.
We will also say that $\chi$ \textit{vanishes on} any subgraph of $\deadedge \chi$ because the associated labelling (in the sense of \S\ref{sec:coordinates}) is zero on each edge of $\deadedge \chi$.
Notice that $\livingedge \chi$ and $\deadedge \chi$ cover $\Gamma$ and share no edges, but they can intersect at vertices.
Also, note that $\livingedge \chi$ and $\deadedge \chi$ are not always full subgraphs and do not have isolated vertices.
Moreover, in general, the dead subgraph of an extension of $\chi$ is only a proper subgraph of $\deadedge \chi$. See Figure~\ref{fig:not full} for an example displaying all these behaviors.
\begin{figure}[h]
\centering
\input{pictures/living_edge_subgraph_dead_edge_subgraphs_not_full}
\caption{The dead edge subgraph $\deadedge \chi$ consists of a pair of opposite edges, and the living edge subgraph $\livingedge \chi$ consists of the remaining edges. Neither is a full subgraph. Moreover, the dead subgraph of any extension of $\chi$ is a proper subgraph of $\deadedge \chi$.}
\label{fig:not full}
\end{figure}
The next lemma establishes a relation between the dead edge subgraph of a character of a BBG and the dead subgraphs of its extensions to the ambient RAAG.
Note that the statement fails without the assumption that $\Lambda$ is connected (see the example in Figure~\ref{fig:not full} once again).
\begin{lemma}\label{a subgraph is edge dead iff there is an extension of characters on BBG to RAAG}
Let $\Gamma$ be a connected graph and let $\Lambda\subseteq\Gamma$ be a connected subgraph with at least one edge.
Let $\chi\in\mathrm{Hom}(\bbg \Gamma,\mathbb R)$ be a non-zero character.
Then $\Lambda\subseteq\deadedge \chi$ if and only if there is an extension $\hat{\chi}\in\mathrm{Hom}(\raag \Gamma,\mathbb R)$ of $\chi$ such that $\Lambda\subseteq\mathcal{D}(\hat{\chi})$.
\end{lemma}
\begin{proof}
Suppose $\Lambda\subseteq\deadedge \chi$.
By Lemma~\ref{lem:existence extension}, there exists an extension of $\chi$ to $\raag \Gamma$, unique up to additive constants.
In particular, if we fix a vertex $v_0\in \vv \Lambda$, then we can find an extension $\hat \chi \in \mathrm{Hom}(\raag \Gamma,\mathbb R)$ such that $\hat \chi (v_0)=0$.
Let $v\in \vv \Lambda$.
Since $\Lambda$ is connected, there is a path $p$ connecting $v_0$ to $v$ entirely contained in $\Lambda$.
Since $\hat \chi$ extends $\chi$ and $\chi$ vanishes on edges of $p$, a direct computation shows that $\hat \chi (v)=0$. Therefore, we have $\Lambda\subseteq\mathcal{D}(\hat{\chi})$.
For the other direction, let $\hat{\chi}\in\mathrm{Hom}(\raag \Gamma,\mathbb R)$ be an extension of $\chi$ such that $\Lambda\subseteq\mathcal{D}(\hat{\chi})$. For every oriented edge $e\in \ee \Lambda$ with $\iota e=v$ and $\tau e =w$, we have $\chi(e)=\hat{\chi}(wv^{-1})=\hat{\chi}(w)-\hat{\chi}(v)=0$. Thus, the edge $e$ is in $\deadedge \chi$. Hence, we have $\Lambda\subseteq\deadedge \chi$.
\end{proof}
The main reason to focus on the dead edge subgraph instead of the living edge subgraph is that it is not clear how to transfer connectivity conditions from $\living{\hat \chi}$ to $\livingedge \chi$.
On the other hand, the disconnecting features of $\dead{\hat \chi}$ do transfer to $\deadedge \chi$.
This is showcased by the following example.
\begin{example}\label{ex:no good living edge subgraph criterion}
Let $\Gamma$ be a cone over the path $P_5$ and consider a character $\hat{\chi}\colon\raag \Gamma\to\mathbb Z$ as shown in Figure \ref{fig:edge living subgraph is not enough to construct an extension character}.
The living subgraph $\mathcal{L}(\hat{\chi})$ is neither connected nor dominating.
It follows from Theorem~\ref{BNS-invariant for raag} that $[\hat \chi] \not \in \bns{\raag \Gamma}$, and therefore, the restriction $\chi=\hat{\chi}\vert_{\bbg \Gamma}\colon\bbg \Gamma\to\mathbb Z$ is not in $\bns{\bbg \Gamma}$ by Proposition~\ref{character in the BNS of BBG iff all the extensions are in the BNS of RAAG}.
However, the living edge subgraph $\livingedge \chi$ is connected and dominating.
On the other hand, note that $\dead{\hat \chi}$ contains a full subgraph that separates $\Gamma$ (compare with Lemma~\ref{lem:dead subgraph criterion for RAAGs}), and so does $\deadedge\chi$.
\end{example}
\begin{figure}[h]
\centering
\input{pictures/living_edge_subgraph_dead_edge_subgraphs_example}
\caption{The living subgraph $\mathcal{L}(\hat{\chi})$ consists of two red vertices. It is neither connected nor dominating. The living edge subgraph $\livingedge \chi$ (labelled by $\pm1$) is connected and dominating.}
\label{fig:edge living subgraph is not enough to construct an extension character}
\end{figure}
Our goal now is to show that the observations made in Example~\ref{ex:no good living edge subgraph criterion} about the dead edge subgraph hold in general.
We will need the following general topological facts that we collect for the convenience of the reader.
Here and in the following, ``minimality'' is always with respect to the inclusion of subgraphs.
More precisely, a ``minimal full separating subgraph'' is a ``full separating subgraph none of whose proper full subgraphs is separating''.
\begin{lemma}\label{lem:link connected}
Let $\Gamma$ be a biconnected graph with $\flag \Gamma$ simply connected.
\begin{enumerate}
\item \label{item:general MV} If $\Lambda\subseteq \Gamma$ is a connected full subgraph, then there is a bijection between the components of its complement and the components of its link.
\item \label{item:link of a vertex is connected} The link of every vertex is connected.
\item \label{item:minimal separating is connected} If $\Lambda \subseteq \Gamma$ is a minimal full separating subgraph, then $\Lambda$ is connected and not a single vertex.
\item \label{item:dimension at least 2} If $|\vv \Gamma| \geq 3$, then every edge is contained in at least one triangle. In particular, we have $\dim \flag \Gamma \geq 2$.
\end{enumerate}
\end{lemma}
\begin{proof}
Proof of \eqref{item:general MV}.
Let $A=\flag \Gamma\setminus \flag \Lambda$ (set-theoretic difference) and $B=\st{\flag \Lambda,\flag \Gamma}$.
Then $\flag \Gamma = A\cup B$, and $A\cap B$ deformation retracts onto $\lk{\flag \Lambda,\flag \Gamma}$.
The Mayer--Vietoris sequence for reduced homology associated to this decomposition of $\flag \Gamma$ provides the following exact sequence:
$$
\cdots
\to
H_1(\flag \Gamma)
\to
\widetilde H_0(\lk{\flag \Lambda,\flag \Gamma})
\to
\widetilde H_0(A) \oplus \widetilde H_0(B)
\to
\widetilde H_0(\flag \Gamma)
\to
0
$$
We have $H_1(\flag \Gamma)=0=\widetilde H_0(\flag \Gamma)$ since $\flag \Gamma$ is simply connected.
Moreover, since $\Lambda$ is connected, the subcomplex $B$ is connected, and therefore, we obtain $\widetilde H_0(B) = 0$.
This gives an isomorphism between $\widetilde H_0(\lk{\flag \Lambda,\flag \Gamma})$ and $\widetilde H_0(A)$, and hence the desired bijection between the components of the link and the components of the complement.
Proof of \eqref{item:link of a vertex is connected}.
Take $\Lambda=v$ to be a single vertex.
Since $\Gamma$ is biconnected, the vertex $v$ is not a cut vertex, so its complement is connected.
Then the conclusion follows from \eqref{item:general MV}.
Proof of \eqref{item:minimal separating is connected}.
Let $\Lambda$ be a minimal full separating subgraph.
Then we can find two subcomplexes $A$ and $B$ of $\flag \Gamma$ such that $A\cup B = \flag \Gamma$ and $A\cap B=\flag \Lambda$.
The Mayer--Vietoris sequence for reduced homology gives
$$
\cdots
\to
H_1(\flag \Gamma)
\to
\widetilde H_0(\flag \Lambda)
\to
\widetilde H_0(A) \oplus \widetilde H_0(B)
\to
\widetilde H_0(\flag \Gamma)
\to
0
$$
Arguing as in \eqref{item:general MV}, we have $H_1(\flag \Gamma)=0=\widetilde H_0(\flag \Gamma)$ since $\flag \Gamma$ is simply connected.
Therefore, we obtain $\widetilde H_0(\flag \Lambda) \cong \widetilde H_0(A) \oplus \widetilde H_0(B)$.
Suppose by contradiction that $\Lambda$ is disconnected.
Then at least one of $A$ or $B$ is disconnected.
Without loss of generality, say $A=A_1\cup A'$, with $A_1$ a connected component of $A$ and $A_1\cap A' = \varnothing$.
Let $B'=B\cup A'$ and let $\Lambda'$ be the subgraph such that $\flag {\Lambda '} = A_1\cap B'$. Then $\Lambda'$ is a proper full subgraph of $\Lambda$ which separates $\Gamma$, contradicting the minimality of $\Lambda$.
Finally, if by contradiction $\Lambda$ were a single vertex, then it would be a cut vertex.
But this is impossible because $\Gamma$ is biconnected.
Proof of \eqref{item:dimension at least 2}. Suppose by contradiction that there is an edge $e=(u,v)$ in $\flag \Gamma$ that is not contained in a triangle.
Since $\Gamma$ has at least three vertices and is connected, at least one endpoint of $e$, say $v$, is adjacent to some vertex other than $u$.
Since $e$ is not contained in a triangle, the vertex $u$ is an isolated vertex of $\lk{v,\Gamma}$.
Therefore, the subgraph $\lk{v,\Gamma}$ has at least two components, and hence, the vertex $v$ is a cut vertex of $\Gamma$ by \eqref{item:general MV}.
This contradicts the fact that $\Gamma$ is biconnected.
\end{proof}
We now give a graphical criterion for a character to belong to the BNS-invariant of a BBG that is analogous to the living subgraph criterion in \cite{meierthebierineumannstrebelinvariantsforgraphgroups}, or rather to the dead subgraph criterion Lemma~\ref{lem:dead subgraph criterion for RAAGs} (see also \cite[Corollary 3.4]{lorenzo}).
\begin{maintheoremc}{C}[Graphical criterion for the BNS-invariant of a BBG]\label{thm:graphical criterion for bns of bbg}
Let $\Gamma$ be a biconnected graph with $\flag \Gamma$ simply connected.
Let $\chi\in\mathrm{Hom}(\bbg \Gamma,\mathbb R)$ be a non-zero character. Then $[\chi]\in \bns{\bbg \Gamma}$ if and only if $\deadedge \chi$ does not contain a full subgraph that separates $\Gamma$.
\end{maintheoremc}
\begin{proof}
Let $[\chi]\in \bns{\bbg \Gamma}$. Suppose by contradiction that there is a full subgraph $\Lambda\subseteq\deadedge \chi$ that separates $\Gamma$.
Up to passing to a subgraph, we can assume that $\Lambda$ is a minimal full separating subgraph.
So, by \eqref{item:minimal separating is connected} in Lemma~\ref{lem:link connected}, we can assume that $\Lambda$ is connected.
By Lemma \ref{a subgraph is edge dead iff there is an extension of characters on BBG to RAAG}, there is an extension $\hat{\chi} \in \mathrm{Hom}(\raag \Gamma,\mathbb R)$ of $\chi$ such that $\Lambda\subseteq\mathcal{D}(\hat{\chi})$.
Since $\Lambda$ separates $\Gamma$, by Lemma~\ref{lem:dead subgraph criterion for RAAGs}, we have $[\hat{\chi}]\notin\bns{\raag \Gamma}$, and therefore $[\chi]\notin \bns{\bbg \Gamma}$ by Proposition~\ref{character in the BNS of BBG iff all the extensions are in the BNS of RAAG}. Hence, we reach a contradiction.
Conversely, assume $[\chi]\notin\bns{\bbg \Gamma}$.
Then by Proposition~\ref{character in the BNS of BBG iff all the extensions are in the BNS of RAAG}, there is an extension $\hat{\chi}\in\mathrm{Hom}(\raag \Gamma,\mathbb R)$ of $\chi$ such that $[\hat{\chi}]\notin\bns{\raag \Gamma}$.
So, the living subgraph $\living{\hat{\chi}}$ is either disconnected or not dominating.
Equivalently, by Lemma~\ref{lem:dead subgraph criterion for RAAGs}, the dead subgraph $\dead{\hat{\chi}}$ contains a full subgraph $\Lambda$ which separates $\Gamma$.
Note that every edge of $\Lambda$ is contained in $\deadedge \chi$: both endpoints of such an edge lie in $\dead{\hat \chi}$, so $\chi$ vanishes on it.
A priori, the subgraph $\Lambda$ could have some components consisting of isolated points.
Once again, passing to a subgraph, we can assume that $\Lambda$ is a minimal full separating subgraph. By
\eqref{item:minimal separating is connected} in Lemma~\ref{lem:link connected}, we can also assume that $\Lambda$ is connected and not reduced to a single vertex.
Therefore, we have $\Lambda \subseteq\deadedge \chi$. This completes the proof.
\end{proof}
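Theorem~\hyperref[thm:graphical criterion for bns of bbg]{C} turns membership in $\bns{\bbg \Gamma}$ into a finite (if exponential) search over vertex subsets. The following brute-force sketch (ours; it assumes $\Gamma$ is biconnected with $\flag \Gamma$ simply connected, and records a character up to sign as in Remark~\ref{rem:character on edges}) illustrates the criterion on two triangles glued along an edge:

```python
from itertools import combinations

def is_connected(adj, verts):
    """Connectivity of the full subgraph induced on `verts`."""
    verts = set(verts)
    if not verts:
        return True
    stack, seen = [next(iter(verts))], set()
    while stack:
        v = stack.pop()
        if v in seen:
            continue
        seen.add(v)
        stack.extend(w for w in adj[v] if w in verts)
    return seen == verts

def in_bns_bbg(adj, edge_chi):
    """[chi] is in the invariant iff no full subgraph contained in the
    dead edge subgraph separates Gamma (Theorem C).  Only the vanishing
    of chi on an edge matters, so edges are keyed by frozensets."""
    V = set(adj)
    for k in range(1, len(V) - 1):
        for S_tuple in combinations(sorted(V), k):
            S = set(S_tuple)
            edges = [frozenset({u, v})
                     for u, v in combinations(S_tuple, 2) if v in adj[u]]
            if any(edge_chi[e] != 0 for e in edges):
                continue  # the full subgraph on S has a living edge
            if not is_connected(adj, V - S):
                return False  # dead full subgraph separating Gamma
    return True

def from_potentials(adj, pot):
    # A character up to sign, as differences of vertex potentials.
    return {frozenset({u, v}): pot[v] - pot[u] for u in adj for v in adj[u]}

# Two triangles abc and bcd glued along the edge bc.
tri2 = {"a": {"b", "c"}, "b": {"a", "c", "d"},
        "c": {"a", "b", "d"}, "d": {"b", "c"}}
dead_bc = from_potentials(tri2, {"a": 0, "b": 1, "c": 1, "d": 2})
generic = from_potentials(tri2, {"a": 0, "b": 1, "c": 2, "d": 0})
print(in_bns_bbg(tri2, dead_bc))  # False
print(in_bns_bbg(tri2, generic))  # True
```

The first character vanishes exactly on the shared edge, so the full subgraph on its two endpoints is a separating subgraph inside the dead edge subgraph; the second character has no dead edges at all.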
We give two examples to illustrate that the hypotheses of Theorem~\hyperref[thm:graphical criterion for bns of bbg]{C} are both necessary.
Here, characters are represented by labels in the sense of \S\ref{sec:coordinates}.
\begin{example}[Simple connectedness is needed]\label{ex:simple connectedness is necessary}
Consider the cycle of length four $\Gamma=C_4$; see
the left-hand side of Figure~\ref{fig:ex_bns_bbg_conditions_needed}.
Then $\flag \Gamma$ is not simply connected. Note that in this case, the group $\bbg \Gamma$ is finitely generated but not finitely presented; see \cite{bestvinabradymorsetheoryandfinitenesspropertiesofgroups}.
Let $\hat\chi\in\mathrm{Hom}(\raag\Gamma,\mathbb R)$ be the character of $\raag \Gamma$ that sends two non-adjacent vertices to $0$ and the other two vertices to $1$.
Let $\chi=\hat\chi\vert_{\bbg \Gamma}\in\mathrm{Hom}(\bbg \Gamma,\mathbb R)$ be the restriction of $\hat\chi$ to $\bbg \Gamma$.
Then the dead edge subgraph $\deadedge\chi$ is empty. In particular, it does not contain any subgraph that separates $\Gamma$.
However, the living subgraph $\living{\hat \chi}$ consists of two opposite vertices, which is not connected. Thus, we have $[\hat\chi]\not \in \bns{\raag\Gamma}$. Hence, by Proposition~\ref{character in the BNS of BBG iff all the extensions are in the BNS of RAAG}, we obtain $[\chi] \not \in \bns{\bbg\Gamma}$.
\end{example}
\begin{figure}[h]
\centering
\input{pictures/ex_bns_bbg_square}
\caption{Theorem~\hyperref[thm:graphical criterion for bns of bbg]{C} does not hold on a graph with $\flag \Gamma$ not simply connected (left), nor on a graph with a cut vertex (right).}
\label{fig:ex_bns_bbg_conditions_needed}
\end{figure}
\begin{example}[Biconnectedness is needed]\label{ex:biconnectedness is necessary}
Let $\Gamma$ be the graph obtained by gluing two triangles at a vertex; see the right-hand side of Figure~\ref{fig:ex_bns_bbg_conditions_needed}.
Let $\hat\chi\in\mathrm{Hom}(\raag\Gamma,\mathbb R)$ be the character that sends the cut vertex to $0$ and all the other vertices to $1$.
Let $\chi=\hat\chi\vert_{\bbg \Gamma}\in\mathrm{Hom}(\bbg \Gamma,\mathbb R)$ be the restriction of $\hat{\chi}$ to $\bbg \Gamma$.
Then the dead edge subgraph $\deadedge\chi$ consists of the two edges that are not incident to the cut vertex. In particular, it does not contain any subgraph that separates $\Gamma$.
However, the living subgraph $\living{\hat \chi}$ is not connected (also notice that $\living{\hat \chi}=\deadedge \chi$). Thus, we have $[\hat\chi]\not \in \bns{\raag\Gamma}$. Hence, Proposition~\ref{character in the BNS of BBG iff all the extensions are in the BNS of RAAG} implies $[\chi] \not \in \bns{\bbg\Gamma}$.
\end{example}
As mentioned in Example~\ref{ex:cut vertex implies empty bns}, the graph $\Gamma$ in Example~\ref{ex:biconnectedness is necessary} has a cut vertex, and hence, the BNS-invariant $\bns{\bbg\Gamma}$ is empty; see \cite[Corollary 15.10]{PapadimaandSuciuBNSRinvariantsandHomologyJumpingLoci}.
As promised, we now show the following result.
\begin{corollary}\label{cor:bns for bbg non empty}
Let $\Gamma$ be a biconnected graph with $\flag \Gamma$ simply connected. Then $\bns{\bbg \Gamma} \neq \varnothing$.
\end{corollary}
\begin{proof}
Let $T$ be a spanning tree of $\Gamma$.
Assign an orientation to each edge of $T$ and write $\ee T = \lbrace e_1,\dots, e_m \rbrace$. Let
$$ \chi \colon \ee T \to \mathbb R, \ \ \chi (e_k)=10^k, \ \ k=1,\dots, m.$$
Then this defines a character thanks to Lemma~\ref{lem:characters of f.p BBGs are given by assigning values on spanning trees}.
We claim that $\chi$ does not vanish on any edge of $\Gamma$. Indeed, let $e\in \ee \Gamma$. The claim is clear if $e \in \ee T$. Suppose $e\not \in \ee T$, and let $(e_{i_1},\dots, e_{i_p})$ be the path in $T$ between the endpoints of $e$. Then $(e,e_{i_1},\dots, e_{i_p})$ is a cycle in $\Gamma$, and hence, the element $ee_{i_1}\dots e_{i_p}$ is a relator in $\bbg \Gamma$ by Theorem~\ref{thm:DL presentation embedding}. Therefore, we have
$$ 0 = \chi (e) + \chi (e_{i_1}) + \dots + \chi (e_{i_p}) = \chi (e) \pm 10^{i_1} \pm \dots \pm 10^{i_p},$$
where the signs are determined by the orientations of the corresponding edges.
The sum $\pm 10^{i_1} \pm \dots \pm 10^{i_p}$ is never zero: the exponents are pairwise distinct, so the term with the largest exponent dominates the sum of all the others in absolute value.
Thus, we have $\chi(e)\neq0$. This proves the claim.
It immediately follows that $\deadedge \chi =\varnothing$, and therefore, we have $[\chi] \in \bns{\bbg \Gamma}$ by Theorem~\hyperref[thm:graphical criterion for bns of bbg]{C}.
\end{proof}
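To make the construction in the proof concrete, here is a minimal worked instance (our illustrative choice of graph, orientations, and labels): let $\Gamma$ be a triangle with vertices $a,b,c$, and let $T$ be the spanning tree with oriented edges $e_1=(a,b)$ and $e_2=(b,c)$.

```latex
% Character on the spanning tree:
\chi(e_1) = 10, \qquad \chi(e_2) = 10^2.
% The non-tree edge e = (a,c) closes the cycle (e, e_1, e_2), which is a
% relator in the Dicks--Leary presentation, so (up to orientation signs)
0 = \chi(e) + \chi(e_1) + \chi(e_2)
\quad \Longrightarrow \quad
\chi(e) = -110 \neq 0.
```

Hence $\deadedge \chi = \varnothing$, and $[\chi]\in\bns{\bbg \Gamma}$, as guaranteed by the corollary.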
As a summary, we have the following corollary.
Most implications are well-known.
Our contribution is that \eqref{item:biconnected} implies \eqref{item:non empty bns}.
Recall that a finitely generated group $G$ \textit{algebraically fibers} if there is a surjective homomorphism $G\to\mathbb Z$ whose kernel is finitely generated.
\begin{corollary}\label{cor:equivalent biconnected}
Let $\Gamma$ be a connected graph such that $\flag \Gamma$ is simply connected. Then the following statements are equivalent.
\begin{enumerate}
\item \label{item:biconnected} $\Gamma$ is biconnected.
\item \label{item:non empty bns} $\bns{\bbg \Gamma} \neq \varnothing$.
\item \label{item:free split} $\bbg \Gamma$ does not split as a free product.
\item \label{item:one ended} $\bbg \Gamma$ is $1$-ended.
\item \label{item:alg fibration} $\bbg \Gamma$ algebraically fibers.
\end{enumerate}
\end{corollary}
\begin{proof}
The equivalence of \eqref{item:biconnected} and \eqref{item:non empty bns} is given by Corollary~\ref{cor:bns for bbg non empty} and \cite[Corollary 15.10]{PapadimaandSuciuBNSRinvariantsandHomologyJumpingLoci}.
Given that $\bbg \Gamma$ is torsion-free, the equivalence of \eqref{item:free split} and \eqref{item:one ended} is just Stallings' theorem about the ends of groups (see \cite[Theorem I.8.32]{BH99}).
The fact that
\eqref{item:non empty bns} implies
\eqref{item:free split}
is discussed in \cite[Example 3 in A2.1a]{strebelnotesonthesigmainvariants}.
The fact that \eqref{item:free split} implies \eqref{item:biconnected} can be seen directly from the Dicks--Leary presentation from Theorem~\ref{thm:DL presentation embedding}.
Finally, we show that \eqref{item:alg fibration} is equivalent to \eqref{item:non empty bns}.
It follows from Theorem~\ref{thm:bns fg} that $\bbg \Gamma$ algebraically fibers if and only if there exists a discrete character $\chi:\bbg \Gamma\to \mathbb R$ such that both $[\chi]$ and $[-\chi]$ are in $\bns{\bbg \Gamma}$.
This is actually equivalent to just requiring that $[\chi] \in \bns{\bbg \Gamma}$, because $\bns{\bbg \Gamma}$ is symmetric; see Lemma~\ref{graph orientation reversing map is an isomorphism on BBG}.
Note that the points of the character sphere $\chars{\bbg{\Gamma}}$ given by the equivalence classes of discrete characters are exactly the rational points.
In particular, since $\bns{\bbg \Gamma}$ is an open subset of the character sphere $\chars{\bbg{\Gamma}}$ (see Theorem A in \cite{bierineumannstrebelageometricinvariantofdiscretegroups}), it is non-empty if and only if it contains the equivalence class of a discrete character.
\end{proof}
We record the following consequence for future reference. It will reduce our discussion about the RAAG recognition problem to the case of biconnected graphs.
\begin{corollary}\label{cor:biconnected components}
Let $\Gamma$ be a connected graph with $\flag \Gamma$ simply connected, and let $\Gamma_1,\dots, \Gamma_n$ be its biconnected components.
Then $\bbg \Gamma$ is a RAAG if and only if $\bbg{\Gamma_i}$ is a RAAG for all $i=1,\dots,n$.
\end{corollary}
\begin{proof}
It is clear from the Dicks--Leary presentation that $\bbg \Gamma$ is the free product of the $\bbg{\Gamma_i}$.
Moreover, since $\Gamma_i$ is biconnected, each $\bbg{\Gamma_i}$ is freely indecomposable (see Corollary~\ref{cor:equivalent biconnected}).
If all the $\bbg{\Gamma_i}$ are RAAGs, then $\bbg \Gamma$ is a RAAG because the free product of RAAGs is a RAAG. This proves one implication.
For the converse implication, suppose that $\bbg \Gamma$ is a RAAG, say $\bbg \Gamma=\raag \Lambda$ for some graph $\Lambda$.
Let $\Lambda_1,\dots,\Lambda_m$ be the connected components of $\Lambda$.
Then $\bbg \Gamma=\raag \Lambda$ can also be written as the free product of the RAAGs $\raag{\Lambda_j}$, each of which is freely indecomposable.
It follows from the uniqueness of the decomposition into freely indecomposable factors (Grushko's theorem) that $m=n$, and for each $i$, there is some $j$ such that $\bbg{\Gamma_i} \cong \raag{\Lambda_j}$. In particular, each $\bbg{\Gamma_i}$ is a RAAG.
\end{proof}
\begin{remark}
We conclude this subsection by observing that when $\Gamma$ is a chordal graph, the statement in Theorem~\hyperref[thm:graphical criterion for bns of bbg]{C} can also be obtained as follows.
By \cite[\S 3.2]{lorenzo}, the group $\bbg \Gamma$ splits as a finite graph of groups.
More precisely, the vertex groups correspond to the BBGs on the maximal cliques of $\Gamma$, and the edge groups correspond to BBGs on the minimal separating subgraphs of $\Gamma$ (that are also cliques because $\Gamma$ is chordal).
In particular, all these groups are finitely generated free abelian groups. Hence, one can apply the results from \S 2 of \cite{CL16}.
\end{remark}
\subsection{A graphical description of \texorpdfstring{$\bns{\bbg\Gamma}$}{the BNS-invariant of a BBG}}\label{sec:graphical description}
We now provide a graphical description of $\bns{\bbg \Gamma}$, that is, a way to compute the BNS-invariant of $\bbg \Gamma$ in terms of subgraphs of $\Gamma$.
Recall from Remark~\ref{rem:complement BNS for RAAGs} that $\bnsc{\raag \Gamma}$ is given by an arrangement of missing subspheres parametrized by the separating subgraphs of $\Gamma$.
Thanks to \cite[Corollary 1.4]{kochloukovamendonontheBNSRsigmainvariantsoftheBBGs}, we know that
$\bnsc{\bbg \Gamma}$ is also an arrangement of missing subspheres.
Moreover, the restriction map $ r\colon \operatorname{Hom}(\raag \Gamma,\mathbb R) \to \operatorname{Hom}(\bbg \Gamma,\mathbb R)$ sends the missing subspheres of $\bnsc{\raag \Gamma}$ to those of $\bnsc{\bbg \Gamma}$ (see the discussion after Lemma~\ref{lem:existence extension}).
So, it makes sense to look for a description of the missing subspheres of $\bnsc{\bbg \Gamma}$ in terms of subgraphs of $\Gamma$, analogous to the one available for $\bnsc{\raag \Gamma}$.
However, recall from Example~\ref{ex:iso bbg non iso graphs} that $\bbg \Gamma$ does not completely determine $\Gamma$, so it is a priori not clear that $\bnsc{\bbg \Gamma}$ should admit such a description.
Moreover, the restriction map is not always well-behaved with respect to the vanishing behavior of characters, in the sense that the dead edge subgraph of a character can be strictly larger than the dead subgraph of any of its extensions; see Figure~\ref{fig:not full}.
To address this, we need a way to construct characters with prescribed vanishing behavior.
For any subgraph $\Lambda$ of $\Gamma$, we define the following linear subspace of $\operatorname{Hom}(\bbg \Gamma, \mathbb R)$
$$ W_\Lambda = \{ \chi \colon \bbg\Gamma \to \mathbb R \mid \chi(e)=0, \, \forall e\in \ee \Lambda \} = \{ \chi \colon \bbg\Gamma \to \mathbb R \mid \Lambda \subseteq \deadedge \chi \}$$
and the great subsphere $S_\Lambda$ given by the following intersection
$$ S_\Lambda = W_\Lambda \cap \chars{\bbg \Gamma}.$$
Note that if a character $\chi$ of $\bbg \Gamma$ vanishes on a spanning tree of $\Gamma$, then $\chi$ is trivial (see Lemma~\ref{lem:characters of f.p BBGs are given by assigning values on spanning trees}).
In other words, if $\Lambda$ is a spanning tree, then $W_\Lambda =0$ and $S_\Lambda=\varnothing$.
We look for a condition on $\Lambda$ such that $W_\Lambda\neq 0$ and $S_\Lambda\neq \varnothing$.
Notice that the following lemma applies as soon as $\vv \Lambda \neq \vv \Gamma$, and that if it applies to $\Lambda$, then it also applies to all of its subgraphs.
\begin{lemma}\label{lem:non-spanning implies critical}
Let $\Gamma$ be a graph with $\flag \Gamma$ simply connected, and let $\Lambda\subseteq \Gamma$ be a subgraph.
Assume that there is an edge $e_0 \in \ee \Gamma$ with at least one endpoint not in $\vv \Lambda$.
Then there exists a character $\chi\colon \bbg \Gamma \to \mathbb R$ such that $\chi(e_0)=1$ and $\chi(e)=0$ for all $e \in \ee \Lambda$.
In particular, we have $[\chi]\in S_\Lambda$.
\end{lemma}
\begin{proof}
Let $T_\Lambda$ be a spanning forest of $\Lambda$.
Observe $e_0\not \in \ee \Lambda$ by assumption.
Therefore, we can extend $T_\Lambda \cup \{e_0\}$ to a spanning tree $T$ of $\Gamma$.
Orient the edges of $T$ arbitrarily and label the edges of $T_\Lambda$ by $0$ and all the remaining edges of $T$ by $1$.
By Lemma~\ref{lem:characters of f.p BBGs are given by assigning values on spanning trees}, this defines a character $\chi\colon \bbg \Gamma \to \mathbb R$.
By construction, we have $\chi(e_0)=1$ and $\chi (e)=0$ for $e\in \ee{T_\Lambda}$.
Let $e \in \ee \Lambda \setminus \ee{T_\Lambda}$. Since $T_\Lambda$ is a spanning forest of $\Lambda$, there is a unique path $p$ in $T_\Lambda$ from $\tau e$ to $\iota e$. Then $ep$ is a cycle in $\Gamma$, and therefore, it is a relator in the Dicks--Leary presentation for $\bbg \Gamma$. Since $\chi$ vanishes on $p$, it must also vanish on $e$, as desired.
\end{proof}
\begin{remark}\label{rem:isolated vertices}
Notice that if two subgraphs $\Lambda$ and $\Lambda '$ have the same edge sets, then $W_\Lambda = W_{\Lambda '}$ because these subspaces only depend on the edge sets.
In particular, we have $S_\Lambda = S_{\Lambda '}$.
This is the reason why we use the strict inclusion $\subsetneq$ instead of the weak inclusion $\subseteq$ in the statement \eqref{item:inclusion reverse} of the following lemma.
\end{remark}
\begin{lemma}\label{lem: missing subspheres}
Let $\Gamma$ be a biconnected graph with $\flag \Gamma$ simply connected, and let $\Lambda$ and $\Lambda '$ be
full separating subgraphs.
Then we have the following statements.
\begin{enumerate}
\item \label{item:extensible is in complement}
$S_\Lambda$ is a missing subsphere, that is, we have $S_\Lambda \subseteq \bnsc{\bbg \Gamma}$.
\item \label{item:inclusion reverse} $\Lambda ' \subsetneq \Lambda$ if and only if $S_{\Lambda} \subsetneq S_{\Lambda '}$.
\end{enumerate}
\end{lemma}
\begin{proof}
Proof of \eqref{item:extensible is in complement}. If $[\chi] \in S_\Lambda$, then $\deadedge \chi$ contains $\Lambda$, which is a separating subgraph.
Then the statement follows from Theorem~\hyperref[thm:graphical criterion for bns of bbg]{C}.
Proof of \eqref{item:inclusion reverse}. The implication $\Lambda '\subsetneq \Lambda \Rightarrow S_{\Lambda} \subsetneq S_{\Lambda '}$ follows from the definitions.
For the reverse implication $ S_{\Lambda} \subsetneq S_{\Lambda '} \Rightarrow \Lambda '\subsetneq \Lambda$ we argue as follows.
The inclusion $S_\Lambda \subsetneq S_{\Lambda '}$ implies that a character vanishing on $\Lambda$ must also vanish on $\Lambda'$. We need to show that $\Lambda'$ is a proper subgraph of $\Lambda$.
By contradiction, suppose that $\Lambda '$ is not a subgraph of $\Lambda$.
Notice that if $\Lambda ' \setminus \Lambda$ consists of isolated vertices, then $S_\Lambda = S_{\Lambda '}$ (see Remark~\ref{rem:isolated vertices}).
So, we can assume that there is an edge $e_0 \in \ee{\Lambda'} \setminus \ee \Lambda$.
Since $\Lambda$ is full, the edge $e_0$ cannot have both endpoints in $\Lambda$.
By Lemma~\ref{lem:non-spanning implies critical}, there is a character $\chi\colon \bbg \Gamma \to \mathbb R$ with $\chi (e_0)=1$ and $\chi(e)=0 $ for all $e\in \ee \Lambda$.
This is a character that vanishes identically on $\Lambda$ but not on $\Lambda'$, which is absurd. Hence $\Lambda'$ is a subgraph of $\Lambda$, and the inclusion is proper: if $\Lambda' = \Lambda$, then $S_{\Lambda}=S_{\Lambda'}$, against the assumption.
\end{proof}
Recall that if $\Lambda$ is a separating subgraph, then $S_\Lambda\neq \varnothing$.
\begin{maintheoremc}{D}[Graphical description of the BNS-invariant of a BBG]\label{thm:graphical description for bns of bbg}
Let $\Gamma$ be a biconnected graph with $\flag \Gamma$ simply connected.
Then $\bnsc{\bbg \Gamma}$ is a union of missing subspheres corresponding to full separating subgraphs. More precisely,
\begin{enumerate}
\item \label{item:complement is union minimal spheres} $\bnsc{\bbg \Gamma}= \bigcup_\Lambda S_\Lambda$, where $\Lambda$ ranges over the minimal full separating subgraphs of $\Gamma$.
\item \label{item:correspondence spheres and separators} There is a bijection between maximal missing subspheres of $\bnsc{\bbg \Gamma}$ and minimal full separating subgraphs of $\Gamma$.
\end{enumerate}
\end{maintheoremc}
\begin{proof}
Proof of \eqref{item:complement is union minimal spheres}. We start by proving that $\bnsc{\bbg \Gamma}= \bigcup_\Lambda S_\Lambda$, where $\Lambda$ ranges over the full separating subgraphs of $\Gamma$.
If $\Lambda$ is a full separating subgraph, then we know that $S_\Lambda \subseteq \bnsc{\bbg\Gamma}$ by \eqref{item:extensible is in complement} in Lemma~\ref{lem: missing subspheres}.
So one inclusion is clear.
Vice versa, let $[\chi]\in \bnsc{\bbg \Gamma}$.
Then by Theorem~\hyperref[thm:graphical criterion for bns of bbg]{C} we have that $\deadedge \chi$ contains a full separating subgraph $\Lambda$.
In particular, the character $\chi$ vanishes on $\Lambda$, hence $ [\chi] \in S_\Lambda$. This proves the other inclusion.
To see that one can restrict to $\Lambda$ ranging over minimal full separating subgraphs, just observe that the latter correspond to maximal missing subspheres by \eqref{item:inclusion reverse} in Lemma~\ref{lem: missing subspheres}.
This completes the proof of \eqref{item:complement is union minimal spheres}.
Proof of \eqref{item:correspondence spheres and separators}.
By \eqref{item:complement is union minimal spheres}, we know that $\bnsc{\bbg \Gamma}$ is a union of maximal missing subspheres.
Notice that this is a finite union because $\Gamma$ has only finitely many subgraphs.
So, each maximal missing subsphere $S$ is of the form $S=S_\Lambda$ for $\Lambda$ a minimal full separating subgraph.
Vice versa, let $\Lambda$ be a minimal full separating subgraph.
We know from \eqref{item:extensible is in complement} in Lemma~\ref{lem: missing subspheres} that $S_\Lambda$ is a missing subsphere.
We claim that $S_\Lambda$ is a maximal missing subsphere in $\bnsc{\bbg \Gamma}$.
Let $S$ be a maximal missing subsphere in $\bnsc{\bbg \Gamma}$ such that $S_\Lambda \subseteq S$.
By the previous paragraph, we know that $S=S_{\Lambda '}$ for some minimal full separating subgraph $\Lambda '$.
If we had $S_\Lambda \subsetneq S= S_{\Lambda'}$, then by \eqref{item:inclusion reverse} in Lemma~\ref{lem: missing subspheres} it would follow that $\Lambda' \subsetneq \Lambda$.
But this would contradict the minimality of $\Lambda$.
Thus, we have $S_\Lambda=S_{\Lambda '}=S$. Hence, the missing subsphere $S_\Lambda$ is maximal.
\end{proof}
The following example establishes a correspondence between the cut edges in $\Gamma$ and the missing hyperspheres (the missing subspheres of codimension one) in $\bnsc{\bbg \Gamma}$.
It should be compared with the case of RAAGs, where the correspondence is between the cut vertices of $\Gamma$ and the missing hyperspheres in $\bnsc{\raag \Gamma}$ (compare Remark~\ref{rem:complement BNS for RAAGs} and Example~\ref{ex: bns of raag on tree}).
\clearpage
\begin{example}[Hyperspheres]\label{ex:bijection between cut edges and hyperplanes for BBGs}
Let $\Gamma$ be a biconnected graph with $\flag \Gamma$ simply connected.
Let $e$ be a cut edge of $\Gamma$.
Notice that $e$ is a minimal separating subgraph since $\Gamma$ is biconnected, and it is also clearly full.
So by Theorem~\hyperref[thm:graphical description for bns of bbg]{D} we know that $S_e$ is a maximal missing subsphere in $\bnsc{\bbg \Gamma}$.
We want to show that the subspace $W_e=\operatorname{span}(S_e)$ is a hyperplane.
To see this, let $T$ be a spanning tree of $\Gamma$ with $\ee T=\lbrace e_{1},\dots,e_{m}\rbrace$, and let $y_i$ be the coordinate dual to $e_i$ in the sense of \S\ref{sec:coordinates}.
This means that $y_i(\chi)=\chi(e_i)$ for all $\chi\in \operatorname{Hom}(\bbg \Gamma, \mathbb R)$.
Note that $W_{e_i}$ is the hyperplane given by the equation $y_{i}=0$.
If $e\in \ee T$, then $e=e_i$ for some $i=1,\dots,m$ and $W_e=W_{e_i}$ is a hyperplane.
If $e\not \in \ee T$, then there is a unique path $(e_{j_1},\dots,e_{j_p})$ in $T$ connecting the endpoints of $e$.
Since $(e_{j_1},\dots,e_{j_p},e)$ is a cycle in $\Gamma$, the word $e_{j_1}\dots e_{j_p}e$ is a relator in the Dicks--Leary presentation.
So, we have $\chi (e_{j_1}) + \dots +\chi (e_{j_p}) + \chi (e) =0$. Therefore, we obtain $\chi(e)=0$ if and only if $y_{j_{1}}(\chi) + \dots + y_{j_{p}}(\chi)=0$.
This means that $W_e$ is the hyperplane defined by the equation $y_{j_{1}} + \dots + y_{j_{p}}=0$.
Vice versa, let $S\subseteq \bnsc{\bbg \Gamma}$ be a hypersphere.
We claim that $S=S_e$ for some cut edge $e$.
To see this, let $[\chi]\in S$.
By Theorem~\hyperref[thm:graphical criterion for bns of bbg]{C} we know that $\deadedge \chi$ contains a full subgraph $\Lambda$ that separates $\Gamma$.
Since $\Gamma$ is biconnected, the subgraph $\Lambda$ must contain at least one edge.
In particular, the character $\chi$ vanishes on $\ee \Lambda$, and therefore, we have $[\chi] \in \bigcap_{e\in \ee \Lambda} S_e$.
This proves $S\subseteq \bigcap_{e\in \ee \Lambda} S_e$.
However, by the discussion above, we know that $S_e$ is a hypersphere. Since $S$ is also a hypersphere, the subgraph $\Lambda$ must consist of a single edge $e$ only.
In particular, it is a cut edge.
\end{example}
\begin{remark}\label{rem:general position arrangements}
The linear span of the arrangement of the missing subspheres of $\bnsc{G}$ gives rise to a subspace arrangement in $\operatorname{Hom}(G,\mathbb R)$.
The main difference between RAAGs and BBGs is that the arrangement for a RAAG is always ``in general position'' while the arrangement for a BBG is not. We will discuss the details in the next section.
\end{remark}
\subsection{The inclusion-exclusion principle}\label{sec:IEP}
Given a group $G$, one can consider the collection of maximal missing subspheres. That is, the maximal great subspheres of the character sphere $\chars G$ that are in the complement of the BNS-invariant $\bns G$ (see Remark~\ref{rmk:missing subspheres}).
Additionally, one can also consider the collection of maximal missing subspaces in $\operatorname{Hom}(G,\mathbb R)$, that is, the linear spans of the maximal missing subspheres.
This provides an arrangement of (great) subspheres in $\chars G$ and an arrangement of (linear) subspaces in $\operatorname{Hom}(G,\mathbb R)$ that can be used as invariants for $G$.
For instance, these arrangements completely determine the BNS-invariant when $G$ is a RAAG or a BBG (see Remark~\ref{rem:complement BNS for RAAGs} or Theorem~\hyperref[thm:graphical description for bns of bbg]{D} respectively).
Moreover, in the case of RAAGs, these arrangements satisfy a certain form of the inclusion-exclusion principle (see \S\ref{sec:IEP RAAG behavior}).
This fact can be used to detect when a group $G$ is not a RAAG.
We take this point of view from the work of Koban and Piggott in \cite{KobanPiggottTheBNSofthepuresymmetricautomorphismofRAAG} and Day and Wade in \cite{DayWadeSubspaceArrangementBNSinvariantsandpuresymmetricOuterAutomorphismsofRAAGs}.
The former focuses on the subsphere arrangement, while the latter focuses on the subspace arrangement.
In this section, we find it convenient to focus on the subspace arrangement.
Let $V$ be a real vector space. (The reader should think $V=\operatorname{Hom}(G,\mathbb R)$ for a group $G$.)
For convenience, we fix some background inner product on $V$. All arguments in the following are combinatorial and do not depend on the choice of inner product.
We say that a finite collection of linear subspaces $\lbrace W_{j}\rbrace_{j\in J}$ of $V$ satisfies the \textit{inclusion-exclusion principle} if the following equality holds:
\begin{equation}\label{eq:IEP subspaces}
\dim{\left(\sum_{j\in J} W_j\right)}
=
\sum^{|J|}_{k=1}(-1)^{k+1}\left(\sum_{\substack{I\subseteq J \\ |I|=k}}\dim\left(\bigcap_{j\in I}W_j\right)\right).
\end{equation}
Notice that if an arrangement satisfies \eqref{eq:IEP subspaces}, then any linearly equivalent arrangement also satisfies \eqref{eq:IEP subspaces}.
Here are two examples. The first is a RAAG, and the collection of maximal subspaces in the complement of its BNS-invariant satisfies the inclusion-exclusion principle. The second is a BBG, and the collection of maximal subspaces in the complement of its BNS-invariant does not satisfy the inclusion-exclusion principle.
Note that this BBG is known to be not isomorphic to any RAAG by \cite{PapadimaSuciuAlgebraicinvariantsforBBGs}.
\begin{example}[Trees]\label{ex: bns of raag on tree}
Let $\Gamma$ be a tree on $n$ vertices, and let $\lbrace v_{1},\dots,v_{m}\rbrace$ be the set of cut vertices of $\Gamma$.
Then it follows that $\bns{\raag \Gamma}$ is obtained from $\chars{\raag \Gamma}=S^{n-1}$ by removing the hyperspheres $S_i$ defined by $ x_i=0$ for $i=1,\dots, m$ (see \S\ref{sec:coordinates}).
The associated missing subspaces satisfy the inclusion-exclusion principle \eqref{eq:IEP subspaces}.
\end{example}
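For instance (a hypothetical smallest interesting case), let $\Gamma$ be the path on four vertices $v_1,\dots,v_4$, so that the cut vertices are $v_2$ and $v_3$ and the missing subspaces are $W_1=\{x_2=0\}$ and $W_2=\{x_3=0\}$ inside $\operatorname{Hom}(\raag \Gamma,\mathbb R)\cong\mathbb R^4$:

```latex
\dim (W_1 + W_2)
= 4
= \underbrace{3}_{\dim W_1} + \underbrace{3}_{\dim W_2}
- \underbrace{2}_{\dim (W_1 \cap W_2)},
```

which is exactly \eqref{eq:IEP subspaces} with $|J|=2$.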
\begin{example}[The trefoil]\label{ex: bns of PS}
Let $\Gamma$ be the (oriented) trefoil graph with a choice of a spanning tree $T$ whose edge set is $\ee T=\lbrace e_{1},e_{2},e_{3},e_{4},e_{5}\rbrace$; see Figure~\ref{fig: oriented trefoil graph with a spanning tree}.
We consider the three cut edges $e_1$, $e_2$, and $f$.
By Example~\ref{ex:bijection between cut edges and hyperplanes for BBGs}, we have that $S_{e_{1}}$, $S_{e_{2}}$, and $S_{f}$ are missing hyperspheres in $\bnsc{\bbg \Gamma}$.
By Theorem~\hyperref[thm:graphical description for bns of bbg]{D}, we have $\bnsc{\bbg \Gamma}=S_{e_{1}} \cup S_{e_{2}} \cup S_{f}$.
If $y_1,\dots, y_5$ are the dual coordinates on $\mathrm{Hom}(\bbg\Gamma,\mathbb R)\cong\mathbb R^{5}$ with respect to $T$ (in the sense of \S\ref{sec:coordinates}), then $S_{e_{1}}$, $S_{e_{2}}$, and $S_{f}$ are given by $y_{1}=0$, $y_{2}=0$, and $y_{1}-y_{2}=0$, respectively.
To see the latter, first note that we have a relator $e_{1}f=e_{2}=fe_{1}$ in the Dicks--Leary presentation.
Then for any character $\chi\in\mathrm{Hom}(\bbg\Gamma,\mathbb R)$, we have $\chi(e_{1})+\chi(f)=\chi(e_{2})$.
Thus, we obtain $\chi(f)=0$ if and only if $\chi(e_{1})=\chi(e_{2})$.
Therefore, the hypersphere $S_{f}$ is defined by $y_{1}=y_{2}$, that is, the equation $y_{1}-y_{2}=0$.
A direct computation shows that the associated missing subspaces do not satisfy the inclusion-exclusion principle \eqref{eq:IEP subspaces}.
\end{example}
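The direct computation can also be verified numerically. The following sketch (our own verification script, not part of the paper; all names are ours) compares the two sides of \eqref{eq:IEP subspaces} for the hyperplanes $y_1=0$, $y_2=0$, and $y_1-y_2=0$ in $\mathbb R^5$:

```python
import numpy as np

N = 5  # ambient dimension: Hom(BBG, R) for the trefoil

def sol_dim(rows):
    """Dimension of {y in R^N : A y = 0} for the constraint rows A."""
    return N - np.linalg.matrix_rank(np.array(rows, dtype=float))

def nullspace(rows):
    """Rows spanning the solution space of A y = 0, via SVD."""
    A = np.array(rows, dtype=float)
    _, s, vt = np.linalg.svd(A)
    rank = int((s > 1e-10).sum())
    return vt[rank:]

# Constraints cutting out the three missing hyperplanes W_{e_1}, W_{e_2}, W_f:
W1 = [[1, 0, 0, 0, 0]]   # y1 = 0
W2 = [[0, 1, 0, 0, 0]]   # y2 = 0
W3 = [[1, -1, 0, 0, 0]]  # y1 - y2 = 0

# Left-hand side: dim(W1 + W2 + W3), as the rank of stacked nullspace bases.
lhs = np.linalg.matrix_rank(np.vstack([nullspace(W) for W in (W1, W2, W3)]))

# Right-hand side: the inclusion-exclusion expression.
rhs = (sol_dim(W1) + sol_dim(W2) + sol_dim(W3)
       - sol_dim(W1 + W2) - sol_dim(W1 + W3) - sol_dim(W2 + W3)
       + sol_dim(W1 + W2 + W3))

print(lhs, rhs)  # 5 6 -- the inclusion-exclusion principle fails
```

The pairwise and triple intersections all coincide with the $3$-dimensional subspace $\{y_1=y_2=0\}$, which inflates the right-hand side to $12-9+3=6$, while the actual span is all of $\mathbb R^5$.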
\begin{figure}[h]
\centering
\input{pictures/Example_BBG_cut_edges_and_hyperplanes}
\caption{An oriented trefoil graph with a spanning tree.}
\label{fig: oriented trefoil graph with a spanning tree}
\end{figure}
It is natural to ask whether the phenomenon from Example~\ref{ex: bns of PS} is actually a general obstruction for a BBG to be a RAAG.
In \cite{DayWadeSubspaceArrangementBNSinvariantsandpuresymmetricOuterAutomorphismsofRAAGs}, Day and Wade developed a homology theory $H_\ast(\mathcal{V})$ for a subspace arrangement $\mathcal V$ in a vector space that is designed to measure the failure of the inclusion-exclusion principle for $\mathcal V$.
They proved that if $G$ is a RAAG, then
$ H_k(\mathcal V_G) = 0$ for all $k>0$, where $\mathcal V_G$ denotes the arrangement of maximal subspaces corresponding to the maximal missing spheres in $\bnsc G$; see \cite[Theorem B]{DayWadeSubspaceArrangementBNSinvariantsandpuresymmetricOuterAutomorphismsofRAAGs}.
\clearpage
Given our description of the BNS-invariant for BBGs from \S\ref{sec:coordinates} and Theorem~\hyperref[thm:graphical description for bns of bbg]{D}, we can determine that certain BBGs are not RAAGs.
For example, a direct computation shows that the group $G=\bbg \Gamma$ from Example~\ref{ex: bns of PS} has $H_1(\mathcal V_G)\neq 0$.
On the other hand, there are BBGs that cannot be distinguished from RAAGs in this way, as in the next example.
\begin{example}[The extended trefoil]\label{ex:extended trefoil}
Let $\Gamma$ be the trefoil graph with one extra triangle attached; see Figure~\ref{fig: A special but not extra-special triangulation}.
Imitating Example~\ref{ex: bns of PS}, we choose a spanning tree $T$ whose edge set is $\ee T=\lbrace e_{1},e_{2},e_{3},e_{4},e_{5},e_6\rbrace$.
By Theorem~\hyperref[thm:graphical description for bns of bbg]{D},
we have $\bnsc{\bbg \Gamma}=S_{e_{1}} \cup S_{e_{2}} \cup S_{f} \cup S_{e_{5}}$.
If $y_1,\dots, y_6$ are the dual coordinates on $\mathrm{Hom}(\bbg\Gamma,\mathbb R)\cong\mathbb R^{6}$ with respect to $T$ (in the sense of \S\ref{sec:coordinates}), then these missing hyperspheres are defined by the hyperplanes given by $y_{1}=0$, $y_{2}=0$, $y_{1}-y_{2}=0$, and $y_5=0$, respectively.
A direct computation shows that $H_k(\mathcal V_{\bbg \Gamma})=0$ for all $k\geq 0$, that is, these homology groups look like the homology groups for the arrangement associated to a RAAG.
However, we will show that this BBG is not a RAAG in Example~\ref{ex:extended trefoil continued}.
\end{example}
\begin{figure}[h]
\centering
\input{pictures/ex_bbg_on_PS_with_an_extra_triangle}
\caption{The extended trefoil: a new example of a BBG that is not a RAAG.}
\label{fig: A special but not extra-special triangulation}
\end{figure}
Our goal now is to obtain a criterion to detect when a BBG is not a RAAG that is still based on a certain failure of the inclusion-exclusion principle in the complement of the BNS-invariant.
The obstruction always involves only a collection of three subspaces, regardless of the complexity of the graph.
So, we find it convenient to introduce the following notation:
\begin{equation}\label{eq:def IEP}
\begin{split}
\iep{W_1}{W_2}{W_3} = & \dim W_1+ \dim W_2+ \dim W_3 \\
- & \dim (W_1\cap W_2) - \dim (W_1\cap W_3) - \dim (W_2\cap W_3) \\
+ & \dim (W_1\cap W_2\cap W_3).
\end{split}
\end{equation}
\subsubsection{RAAG behavior}\label{sec:IEP RAAG behavior}
The following lemma states that the arrangement defining the BNS-invariant of any RAAG satisfies the inclusion-exclusion principle.
This is due to the fact that in this case, the missing subspaces are effectively described in terms of sets of vertices of $\Gamma$ and the inclusion-exclusion principle holds for subsets of a given set.
The argument follows the proof of \cite[Lemma 5.3]{KobanPiggottTheBNSofthepuresymmetricautomorphismofRAAG}. We include a proof for completeness.
\begin{lemma}\label{lem:koban piggott}
Let $\Gamma$ be a connected graph.
Let $\{W_j\}_{j\in J}$ be a collection of maximal missing subspaces in $\operatorname{Hom}(\raag \Gamma,\mathbb R)$.
Then $\{W_j\}_{j\in J}$ satisfies \eqref{eq:IEP subspaces}.
In particular, when $J=\{1,2,3\}$ we have $\dim(W_1+W_2+W_3)=\iep{W_1}{W_2}{W_3}$.
\end{lemma}
\begin{proof}
Recall that the subspace $W_j$ corresponds to a minimal full separating subgraph $\Lambda_j$ of $\Gamma$ (see Remark~\ref{rem:complement BNS for RAAGs}).
Moreover, the dimension of $W_j$ is equal to the number of vertices in the complement $A_j=\Gamma \setminus \Lambda_j$ of $\Lambda_j$ (those vertices provide a basis for $W_j$, in the sense of \S\ref{sec:coordinates}).
It follows that
\begin{align*}
\dim{\left(\sum_{j\in J} W_j\right)}
&=\left| \bigcup_{j\in J} \vv{A_j} \right| \\
&=\sum^{|J|}_{k=1}(-1)^{k+1} \left( \sum\limits_{\substack{I\subseteq J \\ |I|=k}} \left| \bigcap_{j\in I} \vv{A_j} \right| \right) \\
&=\sum^{|J|}_{k=1}(-1)^{k+1}\left(\sum\limits_{\substack{I\subseteq J \\ |I|=k}}\dim\left(\bigcap_{j\in I}W_j\right)\right).
\end{align*}
This means precisely that $\{W_j\}_{j\in J}$ satisfies \eqref{eq:IEP subspaces}, as desired.
\end{proof}
\subsubsection{Non RAAG behavior}\label{sec:IEP NON-RAAG behavior}
We now want to identify a condition that is not compatible with the property established in \S\ref{sec:IEP RAAG behavior} for the arrangement associated to a RAAG.
More precisely, we look for a sharp lower bound for the term $\iep{W_1}{W_2}{W_3}$.
The key condition is the one in Lemma~\ref{lem:LA general}. It is inspired by \cite{DayWadeSubspaceArrangementBNSinvariantsandpuresymmetricOuterAutomorphismsofRAAGs}, and it could be interpreted in the setting of the homology theory introduced in that paper (see Remark~\ref{rem:DW dual class}).
For the reader's convenience, we provide a self-contained exposition.
Let $V$ be a real vector space of dimension $n$. Once again, the reader should think of the case $V=\operatorname{Hom}(G,\mathbb R)\cong\mathbb R^n$ for some group $G$ with $n$ generators.
We fix some inner product, an orthonormal basis $\{e_1,\dots,e_n\}$, and the corresponding coordinates $\{y_1,\dots,y_n\}$, that is, $y_i(e_j)=\delta_{ij}$.
Consider three subspaces of $V$ given by the following systems of equations:
\begin{equation}\label{eq: redundant}
\begin{split}
W_1 & = \{y_1=0, \ \sum_{i=1}^n \lambda^1_{ij}y_i=0 \textrm{ for } j=1,\dots,m_1\}, \\
W_2 & = \{y_2=0, \ \sum_{i=1}^n \lambda^2_{ij}y_i=0 \textrm{ for } j=1,\dots,m_2\}, \\
W_3 & = \{y_1-y_2=0, \ \sum_{i=1}^n \lambda^3_{ij}y_i=0 \textrm{ for } j=1,\dots,m_3\},
\end{split}
\end{equation}
where for $k\in\lbrace 1,2,3\rbrace$, we have $\lambda^k_{ij}\in \mathbb R$, and $m_k$ is a non-negative integer (possibly zero, in which case it is understood that the subspace is just given by the first equation, as in Example~\ref{ex: bns of PS}).
Without loss of generality, we assume that each set of equations is minimal. That is, we have $\dim {W_k}=n-(m_k+1)$.
We now proceed to compute the term $\iep{W_1}{W_2}{W_3}$ defined in \eqref{eq:def IEP}.
In the naive system of equations that defines the intersection $W_1\cap W_2\cap W_3$ (that is, the one obtained by putting all the equations together), there is an obvious linear relation among the equations $y_1=0, y_2=0$, and $y_1-y_2=0$.
This can cause the dimension of $W_1\cap W_2\cap W_3$ to be higher than expected.
From this perspective, one of the three equations is redundant.
We find it convenient to work with the orthogonal complements.
For $i,j\in \{1,2,3\}, i\neq j$,
consider the following natural maps:
\begin{equation}
\begin{split}
& I_{ij}:W_i^\perp \cap W_j^\perp \longrightarrow W_i^\perp \oplus W_j^\perp, \ I_{ij}(u)=(u,-u), \\
& F_{ij}:W_i^\perp \oplus W_j^\perp \longrightarrow W_i^\perp + W_j^\perp, \ F_{ij}(\zeta_i,\zeta_j)=\zeta_i+\zeta_j,\\
& J_{ij}:W_i^\perp \oplus W_j^\perp \longrightarrow W_1^\perp \oplus W_2^\perp \oplus W_3^\perp,
\end{split}
\end{equation}
where the last one is the natural inclusion (for example, $J_{12}(\zeta_1,\zeta_2)=(\zeta_1,\zeta_2,0)$).
These maps fit into the diagram in Figure~\ref{pic:diagram}, whose first row is exact. The exactness implies Grassmann's identity.
\begin{figure}[ht]
\centering
\include{pictures/linear_algebra}
\caption{The diagram for Lemma~\ref{lem:LA general}.}
\label{pic:diagram}
\end{figure}
Let $K_{ij}\subseteq W_1^\perp \oplus W_2^\perp \oplus W_3^\perp$ be the image of $J_{ij}\circ I_{ij}$.
By construction, we have $K_{ij}\cong (W_i+W_j)^\perp=W_i^\perp \cap W_j^\perp$.
Finally, consider the vector $\xi=(-e_1,e_2,e_1-e_2)\in W_1^\perp \oplus W_2^\perp \oplus W_3^\perp$.
We say that a triple of subspaces $\{W_1,W_2,W_3\}$ as above is a \textit{redundant triple of subspaces} if $\xi\not \in K_{12} + K_{23} + K_{13}$.
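For instance, let $n=3$ and consider $W_1=\{y_1=0\}$, $W_2=\{y_2=0\}$, and $W_3=\{y_1-y_2=0\}$ (so that $m_1=m_2=m_3=0$).
Then $W_1^\perp$, $W_2^\perp$, and $W_3^\perp$ are spanned by $e_1$, $e_2$, and $e_1-e_2$ respectively; these are pairwise linearly independent, so $K_{12}=K_{23}=K_{13}=0$, while $\xi \neq 0$.
Hence $\{W_1,W_2,W_3\}$ is a redundant triple of subspaces.
Note that in this case each pairwise intersection and the triple intersection have dimension one, so
$$\iep{W_1}{W_2}{W_3}=3\cdot 2-3\cdot 1+1=4=\dim(W_1+W_2+W_3)+1,$$
in accordance with Lemma~\ref{lem:LA general} below.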
\begin{remark}\label{rem:DW dual class}
Although we will not need it, we observe that the condition $\xi\not \in K_{12} + K_{23} + K_{13}$ described above can be interpreted in the sense of the subspace arrangement homology introduced in \cite{DayWadeSubspaceArrangementBNSinvariantsandpuresymmetricOuterAutomorphismsofRAAGs} as follows.
Consider the arrangement $\mathcal W^\perp$ given by the orthogonal complements $\{W_1^\perp,W_2^\perp,W_3^\perp\}$. Then $\{W_1,W_2,W_3\}$ is a redundant triple of subspaces precisely when $\xi$ defines a non-trivial class in $H_1(\mathcal W^\perp)$.
\end{remark}
\begin{lemma}\label{lem:LA general}
In the above notation, if $\{W_1,W_2,W_3\}$ is a redundant triple of subspaces, then it does not satisfy the inclusion-exclusion principle.
More precisely,
\begin{equation*}
\dim (W_1+W_2+W_3 )+1 \leq \iep{W_1}{W_2}{W_3}.
\end{equation*}
\end{lemma}
\begin{proof}
We will compute all the terms that appear in $\iep{W_1}{W_2}{W_3}$ (see \eqref{eq:def IEP}).
The exactness of the first row of the diagram in Figure~\ref{pic:diagram} yields that
\begin{equation*}
\dim ( W_i^\perp + W_j^\perp) = \dim ( W_i^\perp \oplus W_j^\perp) - \dim ( W_i^\perp \cap W_j^\perp) = 2+m_i+m_j - \dim K_{ij}.
\end{equation*}
It follows that
\begin{equation}\label{eq:appendix_intersection2}
\begin{split}
\dim( W_i\cap W_j) = & n- \dim ( (W_i\cap W_j)^\perp ) \\
= & n- \dim(W_i^\perp + W_j^\perp) \\
= & n - (2+m_i+m_j) +\dim K_{ij}.
\end{split}
\end{equation}
We deal with the triple intersection similarly.
Consider the map
\begin{equation*}
F:W_1^\perp \oplus W_2^\perp \oplus W_3^\perp \longrightarrow W_1^\perp + W_2^\perp + W_3^\perp, \ F(\zeta_1,\zeta_2,\zeta_3)=\zeta_1+\zeta_2+\zeta_3.
\end{equation*}
We have $\dim (W_1^\perp \oplus W_2^\perp \oplus W_3^\perp)= 3+m_1+m_2+m_3$.
Since $F$ is surjective, its codomain has dimension $3+m_1+m_2+m_3-\dim (\ker F) $. It follows that
\begin{equation}\label{eq:appendix_intersection3}
\begin{split}
\dim (W_1\cap W_2\cap W_3)
& = n- \dim ( (W_1\cap W_2\cap W_3)^\perp ) \\
& = n - \dim ( W_1^\perp + W_2^\perp + W_3^\perp) \\
& = n - (3+m_1+m_2+m_3) +\dim (\ker F).
\end{split}
\end{equation}
Using $\dim {W_k}=n-(m_k+1)$, \eqref{eq:appendix_intersection2} and \eqref{eq:appendix_intersection3}, we obtain:
\begin{equation}\label{eq:IEPnKerFij}
\iep{W_1}{W_2}{W_3}=n +\dim (\ker F) - \dim K_{12} - \dim K_{13} - \dim K_{23}.
\end{equation}
We now claim that $\dim(\ker F)\geq 1+\dim K_{12} + \dim K_{13} + \dim K_{23}$.
The vector $\xi=(-e_1,e_2,e_1-e_2)$ is in $\ker F$, and $K_{ij}$ is a subspace of $\ker F$ by definition.
A direct computation shows that $K_{ij} \cap K_{ik} =0$.
By assumption, we also have $\xi\notin K_{12}+K_{13}+K_{23}$.
Therefore, the direct sum $\operatorname{span}(\xi)\oplus K_{12} \oplus K_{13} \oplus K_{23}$ is a subspace of $\ker F$.
This proves the claim.
Then it follows from \eqref{eq:IEPnKerFij} that
\begin{equation*}
\begin{split}
\iep{W_1}{W_2}{W_3}
& = n +\dim (\ker F) - \dim K_{12} - \dim K_{13} - \dim K_{23} \\
& \geq n+1 \\
& \geq \dim ( W_1+W_2+W_3)+1.
\end{split}
\end{equation*}
This completes the proof.
\end{proof}
On the other hand, if $\{W_1,W_2,W_3\}$ is not a redundant triple of subspaces, then we have the dichotomy in the following statement. This criterion will be useful in the proof of Theorem~\hyperref[thm:redundant triple criterion]{E}.
\begin{lemma}\label{new linear algebra}
In the above notation, if $\{W_1,W_2,W_3\}$ is not a redundant triple of subspaces, then one of the following situations occurs:
\begin{enumerate}
\item \label{item:e1 e2 in all} either $e_1,e_2\in W_j^\perp$ for all $j=1,2,3$,
\item \label{item:engaged in all} or there exists some $i\geq 3$ such that $e_i \not \in W_j$ for all $j=1,2,3$.
\end{enumerate}
\end{lemma}
\begin{proof}
Recall that $K_{ij}$ is the image of the natural map $J_{ij}\circ I_{ij}:W_i^\perp \cap W_j^\perp \to W_1^\perp \oplus W_2^\perp \oplus W_3^\perp$ (see Figure~\ref{pic:diagram} at the beginning of \S\ref{sec:IEP NON-RAAG behavior}).
We have an induced map
$$
K:(W_1^\perp \cap W_2^\perp ) \oplus (W_2^\perp \cap W_3^\perp ) \oplus (W_1^\perp \cap W_3^\perp ) \to W_1^\perp \oplus W_2^\perp \oplus W_3^\perp,
$$
$$
K(a,b,c) = (a+c,-a+b,-b-c),
$$
whose image is precisely $K_{12} + K_{23} + K_{13}$.
Since $\{W_1,W_2,W_3\}$ is not a redundant triple of subspaces, we have $\xi=(-e_1,e_2,e_1-e_2)\in \operatorname{Im}(K)$.
This means that there exist $a=\sum_{i=1}^n a_ie_i\in W_1^\perp \cap W_2^\perp$, $b=\sum_{i=1}^n b_ie_i\in W_2^\perp \cap W_3^\perp$, and $c=\sum_{i=1}^n c_ie_i\in W_1^\perp \cap W_3^\perp$, such that $a+c=-e_1$, $-a+b=e_2$, and $-b-c=e_1-e_2$, where $a_i,b_i,c_i\in \mathbb R$.
A direct computation shows that $a$, $b$, and $c$ must satisfy the following relations:
\begin{equation}\label{eq:new linear relations}
a_1=b_1=-1-c_1, \ a_2=-c_2=b_2-1, \ \text{and} \ a_i=b_i=-c_i \ \text{for} \ i\geq 3.
\end{equation}
Note that if $ a_i$, $b_i$, and $c_i$ are equal to zero for all $i\geq 3$, then $a=a_1e_1+a_2e_2\in W_1^\perp \cap W_2^\perp$.
Since $e_1\in W_1^\perp$, we have $e_2\in W_1^\perp$.
Similar arguments show that $e_1$ and $e_2$ also belong to $W_2^\perp$ and $W_3^\perp$.
Therefore, we are in case \eqref{item:e1 e2 in all}.
If \eqref{item:e1 e2 in all} does not occur, then at least one of $a$, $b$, and $c$ has a non-zero coordinate along $e_i$ for some $i\geq 3$; by \eqref{eq:new linear relations}, we may assume $a_i\neq 0$. But $a_i\neq 0$ implies that $W_1^\perp$ and $e_i$ are not orthogonal, so we have $e_i\not \in W_1$.
Thanks to \eqref{eq:new linear relations},
we also know that $b$ and $c$ have non-zero coordinates along $e_i$. Then a similar argument shows that $e_i\not \in W_2, W_3$. Therefore, we are in case \eqref{item:engaged in all}.
\end{proof}
Finally, we obtain a criterion to certify that a group is not a RAAG.
\begin{proposition}\label{prop:criterion non RAAG}
Let $G$ be a finitely generated group.
Suppose that there exist three maximal missing
subspaces $W_1$, $W_2$, and $W_3$
in $\operatorname{Hom}(G, \mathbb R)$.
If they form a redundant triple of subspaces, then $G$ is not a RAAG.
\end{proposition}
\begin{proof}
Since $\{W_1,W_2,W_3\}$ is a redundant triple of subspaces, by Lemma~\ref{lem:LA general} we have that $\dim (W_1+W_2+W_3 ) +1 \leq \iep{W_1}{W_2}{W_3}.$
Assume by contradiction that $G$ is a RAAG.
Then by Lemma~\ref{lem:koban piggott} we have
$\dim (W_1+W_2+W_3 ) = \iep{W_1}{W_2}{W_3}$.
This leads to a contradiction.
\end{proof}
The fact that certain BBGs are not isomorphic to RAAGs can be obtained via the methods in \cite{PapadimaSuciuAlgebraicinvariantsforBBGs} or \cite{DayWadeSubspaceArrangementBNSinvariantsandpuresymmetricOuterAutomorphismsofRAAGs}, such as the BBG defined on the trefoil graph in Example~\ref{ex: bns of PS}.
Proposition~\ref{prop:criterion non RAAG} allows us to obtain new examples that were not covered by previous criteria, such as the BBG defined on the extended trefoil (see Examples~\ref{ex:extended trefoil} and \ref{ex:extended trefoil continued}).
\subsection{Redundant triples for BBGs}\label{sec:redundant triples BBGs}
The purpose of this section is to find a general graphical criterion to certify that a BBG is not a RAAG.
The idea is to start from a triangle in the flag complex $\flag \Gamma$ and find suitable subspaces of the links of its vertices that induce a redundant triple of subspaces in the complement of $\bns{\bbg \Gamma}$.
Let $\tau$ be a triangle in $\flag \Gamma$ with vertices $(v_1,v_2,v_3)$. Let $e_j$ be the edge opposite to $v_j$.
We say that $\tau$ is a \textit{redundant triangle} if for each $j=1,2,3$, there exists a subgraph $\Lambda_j\subseteq \lk{v_j,\Gamma}$ such that:
\begin{enumerate}
\item $e_j \in \ee{\Lambda_j}$;
\item $\Lambda_j$ is a minimal separating subgraph of $\Gamma$;
\item \label{item: omega} $\Lambda_1\cap \Lambda_2 \cap \Lambda_3$ is the empty subgraph.
\end{enumerate}
\begin{example}
The central triangle in the trefoil graph in Figure~\ref{fig:trefoil} is redundant.
However, if we consider the cone over the trefoil graph, then the central triangle in the base trefoil graph is not redundant. Redundant triangles can appear in higher-dimensional complexes; see Example~\ref{ex:higher dimensional}.
\end{example}
The purpose of this section is to prove the following theorem.
\begin{maintheoremc}{E}\label{thm:redundant triple criterion}
Let $\Gamma$ be a biconnected graph such that $\flag \Gamma$ is simply connected.
If $\Gamma$ has a redundant triangle, then $\bbg \Gamma$ is not a RAAG.
\end{maintheoremc}
We start by considering a redundant triangle $\tau$ with a choice of subgraph $\Lambda_j$ of the link $\lk{v_j,\Gamma}$ as in the above definition of redundant triangle.
We denote by $W_j=W_{\Lambda_j}$ the induced subspace of $V=\operatorname{Hom}(\bbg \Gamma,\mathbb R)$.
By Theorem~\hyperref[thm:graphical description for bns of bbg]{D}, we know that $W_j=W_{\Lambda_j}$ is a maximal subspace in the complement of $\bns{\bbg\Gamma}$.
We want to show that $\{W_1,W_2,W_3\}$ is a redundant triple of subspaces.
To do this, we will choose some suitable coordinates on $V$, that is, a suitable spanning tree for $\Gamma$.
Notice that different spanning trees correspond to different bases on $\operatorname{Hom}(\bbg \Gamma,\mathbb R)$. In particular, the linear isomorphism class of the arrangement of missing subspaces does not depend on these choices, and we can work with a convenient spanning tree.
To construct a desired spanning tree, we will need the following terminology.
Let $v\in\vv\Gamma$.
The \textit{spoke} of $v$ in $\Gamma$ is the subgraph $\spoke v$ consisting of the edges that contain $v$. Note that $\spoke v$ is a spanning tree of $\st{v}$.
Let $\Lambda$ be a subgraph of $\lk v$.
We define the \textit{relative star} of $v$ with respect to $\Lambda$ to be the full subgraph $\relstar v\Lambda$ of $\st{v}$ generated by $\{v\} \cup \vv \Lambda$.
We define the \textit{relative spoke} of $v$ with respect to $\Lambda$ to be the subgraph $\relspoke v\Lambda$ of $\spoke{v}$ consisting of the edges that connect $v$ to a vertex of $\Lambda$.
Note that $\relspoke v\Lambda$ is a spanning tree of $\relstar v\Lambda$.
We now construct a spanning tree $T$ for $\Gamma$ as follows.
\begin{itemize}
\item Let $T_3=\relspoke{v_3}{\Lambda_3}$.
Since we chose $\Lambda_3$ to contain $e_3$, we have $v_1,v_2\in \vv{\Lambda_3}$ and $e_1,e_2 \in \ee{T_3}$.
\item Let $Z_2=\relspoke{v_2}{\Lambda_2 \setminus \relstar{v_3}{\Lambda_3}}$ and let $T_2=T_3\cup Z_2$.
Notice that $T_2$ is a spanning tree of $\relstar{v_2}{\Lambda_2}\cup \relstar{v_3}{\Lambda_3}$.
\item Let $Z_1=\relspoke{v_1}{\Lambda_1 \setminus (\relstar{v_2}{\Lambda_2}\cup \relstar{v_3}{\Lambda_3})}$ and let $T_1=T_2\cup Z_1$.
Notice that $T_1$ is a spanning tree of $\relstar{v_1}{\Lambda_1}\cup \relstar{v_2}{\Lambda_2}\cup \relstar{v_3}{\Lambda_3}$.
\item Finally, extend $T_1$ to a spanning tree $T$ for $\Gamma$.
\end{itemize}
To fix notation, write $\ee T = \{f_1,f_2,\dots, f_n\}$.
Without loss of generality, say $f_1=e_1$ and $f_2=e_2$.
Fix an arbitrary orientation for the edges of $T$.
With respect to the associated system of coordinates, the subspaces $W_1$, $W_2$, and $W_3$ are given by equations of the form \eqref{eq: redundant}:
\begin{equation*}
W_1 = \{y_1=0, \dots\}, \ W_2 = \{y_2=0, \dots\}, \ \text{and} \ W_3 = \{y_1-y_2=0, \dots\}.
\end{equation*}
Recall that $\{\chi_f \mid f\in \ee T\}$ is a basis for $\operatorname{Hom}(\bbg \Gamma,\mathbb R)$, where $\chi_f:\bbg \Gamma \to \mathbb R$ is the character defined by $\chi_f(e)=1$ if $f=e$ and $\chi_f(e)=0$ if $f\neq e$.
We also fix a background inner product with respect to which $\{\chi_f \mid f\in \ee T\}$ is an orthonormal basis.
We now proceed to prove some technical lemmas that will be used to recognize the edges $f\in \ee T$ for which the associated character $\chi_f$ is in one of the subspaces $W_1$, $W_2$, and $W_3$. This is needed to use Lemma~\ref{new linear algebra}.
We start with the following general fact.
\begin{lemma}\label{lem: chi_f not in lk then f is in a spanning tree}
Let $v\in\Gamma$, and let $\Lambda$ be a subgraph of $\lk{v}$.
Let $T_\Lambda$ be a spanning tree of $\relstar v\Lambda$.
If $f\not \in \ee {T_\Lambda}$, then $\chi_f\in W_{\Lambda}$.
\end{lemma}
\begin{proof}
Suppose $f\notin \ee {T_\Lambda}$.
Then $\chi_f=0$ on $T_\Lambda$.
Since $T_\Lambda$ is a spanning tree of $\relstar v\Lambda$, the character $\chi_f$ vanishes on $\relstar v\Lambda$ by Lemma~\ref{lem:characters of f.p BBGs are given by assigning values on spanning trees}.
In particular, it vanishes on $\Lambda$, hence $\chi_f\in W_{\Lambda}$.
\end{proof}
We now proceed to use Lemma~\ref{lem: chi_f not in lk then f is in a spanning tree} for each $\Lambda_j$, with respect to a suitable choice of spanning tree for $\relstar{v_j}{\Lambda_j}$.
\begin{lemma}\label{lem:involve3}
Let $f\in \ee T$.
If $f \not \in \ee{T_3}$, then $\chi_f \in W_3$.
\end{lemma}
\begin{proof}
Since $T_3$ is a spanning tree of $\relstar{v_3}{\Lambda_3}$, the statement follows directly from Lemma~\ref{lem: chi_f not in lk then f is in a spanning tree}.
\end{proof}
\begin{lemma}\label{lem:involve2}
Let $f\in \ee T$.
If $f\not \in \ee{Z_2}$, $f\neq e_1$, and $f$ does not join $v_3$ to a vertex in $\Lambda_2\cap \Lambda_3$,
then $\chi_f \in W_2$.
\end{lemma}
\begin{proof}
We construct a spanning tree for $\relstar{v_2}{\Lambda_2}$ as follows.
First, note that $Z_2$ is a spanning tree of $\relstar{v_2}{\Lambda_2 \setminus \relstar{v_3}{\Lambda_3}}$ by construction.
If $u$ is a vertex in $\relstar{v_2}{\Lambda_2}$ but not in $Z_2$, then either $u=v_3$ or $u\in \vv{\Lambda_3}$.
Let $T_2'$ be the result of extending $Z_2$ with the edge $e_1=(v_2,v_3)$ and all the edges that join $v_3$ to the vertices in $\Lambda_2\cap \Lambda_3$.
This gives a spanning subgraph $T_2'$ of $\relstar{v_2}{\Lambda_2}$.
Note that $T_2'$ is a tree because it is a subgraph of $T$.
By the choice of $f$, we have $f\not \in \ee{T_2'}$. Then it follows from Lemma~\ref{lem: chi_f not in lk then f is in a spanning tree} that $\chi_f\in W_2$.
\end{proof}
\begin{lemma}\label{lem:involve1}
Let $f\in \ee T$.
If $f\not \in \ee{Z_1}$, $f\neq e_1,e_2$, and $f$ does not join $v_3$ to a vertex in $\Lambda_1\cap \Lambda_3$ nor $v_2$ to a vertex in $\Lambda_1\cap \Lambda_2$,
then $\chi_f \in W_1$.
\end{lemma}
\begin{proof}
We construct a spanning tree for $\relstar{v_1}{\Lambda_1}$ as follows.
First, note that $Z_1$ is a spanning tree for $\relstar{v_1}{\Lambda_1 \setminus (\relstar{v_2}{\Lambda_2}\cup \relstar{v_3}{\Lambda_3})}$ by construction.
If $u$ is a vertex in $\relstar{v_1}{\Lambda_1}$ but not in $Z_1$, then either $u=v_2$, $u=v_3$, $u\in \vv{\Lambda_2}$, or $u\in \vv{\Lambda_3}$.
Let $T_1'$ be the result of extending $Z_1$ with the edges $e_1=(v_2,v_3)$, $e_2=(v_1,v_3)$, all the edges that join $v_3$ to the vertices in $\Lambda_1\cap \Lambda_3$, and all the edges that join $v_2$ to the vertices in $\Lambda_1\cap \Lambda_2$.
This gives a spanning subgraph $T_1'$ of $\relstar{v_1}{\Lambda_1}$.
Note that $T_1'$ is a tree because it is a subgraph of $T$.
By the choice of $f$, we have $f\not \in \ee{T_1'}$. Then it follows from Lemma~\ref{lem: chi_f not in lk then f is in a spanning tree} that $\chi_f\in W_1$.
\end{proof}
\begin{lemma}\label{lem:involved in all only e1 e2}
Let $f\in \ee T$.
If $\chi_f\not \in W_j$ for all $j=1,2,3$, then $f=e_1$ or $f=e_2$.
\end{lemma}
\begin{proof}
By contradiction, suppose that there is an edge $f\neq e_1,e_2$ such that $\chi_f\not \in W_j$ for all $j=1,2,3$.
Since $\chi_f\not \in W_3$, we know that $f\in \ee{T_3}$ by Lemma~\ref{lem:involve3}.
In particular, we have $v_3\in \vv f$.
Since $f\neq e_1,e_2$, this implies that $v_1,v_2\not \in \vv f$, and in particular this means that $f\notin \ee{Z_1}, \ee{Z_2}$.
The assumption $\chi_f\not \in W_2$ implies that $f$ joins $v_3$ to a vertex in $\Lambda_2\cap \Lambda_3$, thanks to Lemma~\ref{lem:involve2}.
Similarly, the assumption $\chi_f\not \in W_1$ implies that $f$ joins $v_3$ to a vertex in $\Lambda_1 \cap \Lambda_3$, thanks to Lemma~\ref{lem:involve1}.
Therefore, we have obtained that $f$ connects $v_3$ to a vertex in $\Lambda_1 \cap \Lambda_2\cap \Lambda_3$. But this is absurd because this intersection is empty, by condition \eqref{item: omega} in the definition of redundant triangle.
\end{proof}
We are now ready for the proof of Theorem~\hyperref[thm:redundant triple criterion]{E}.
\begin{proof}[Proof of Theorem~{\hyperref[thm:redundant triple criterion]{E}}]
Recall by construction that the subspaces $W_1$, $W_2$, and $W_3$ are given by equations of the form \eqref{eq: redundant} with respect to the coordinates defined by the spanning tree $T$ constructed above.
Suppose by contradiction that $\{W_1,W_2,W_3\}$ is not a redundant triple of subspaces.
By Lemma~\ref{new linear algebra}, one of the following cases occurs:
\begin{enumerate}
\item \label{item:e1 e2 in all bbg} either $\chi_{e_1},\chi_{e_2}\in W_j^\perp$ for all $j=1,2,3$,
\item \label{item:engaged in all bbg} or there exists some $i\geq 3$ such that $\chi_{f_i} \not \in W_j$ for all $j=1,2,3$.
\end{enumerate}
We claim that neither of these two situations can occur in our setting.
To see that \eqref{item:e1 e2 in all bbg} does not occur, observe that $\chi_{e_1}+\chi_{e_2} \in W_3$ and $\chi_{e_1}+\chi_{e_2}$ is not orthogonal to $\chi_{e_1}$, so $\chi_{e_1}\not \in W_3^\perp$. The same is true for $\chi_{e_2}$.
On the other hand, \eqref{item:engaged in all bbg} does not occur by Lemma~\ref{lem:involved in all only e1 e2}.
We have reached a contradiction, so $\{W_1,W_2,W_3\}$ is a redundant triple of subspaces.
Then it follows from Proposition~\ref{prop:criterion non RAAG} that $\bbg\Gamma$ is not a RAAG.
\end{proof}
We will use Theorem~\hyperref[thm:redundant triple criterion]{E} in \S\ref{section: BBGs on 2-dim flag complexes} to prove that certain BBGs are not isomorphic to RAAGs (see Theorem~\hyperref[body main thm 2dim]{A} for the case in which $\flag \Gamma$ is 2-dimensional and Example~\ref{ex:higher dimensional} for a higher-dimensional example).
\subsection{Resonance varieties for BBGs}\label{sec:resonance varieties}
The goal of this section is to show that for a finitely presented BBG, the complement of its BNS-invariant coincides with the restriction of its first real resonance variety to the character sphere.
Let $A=H^*(G,\mathbb R)$ be the cohomology algebra of $G$ over $\mathbb R$.
For each $a\in A^1=H^1(G,\mathbb R)=\operatorname{Hom}(G,\mathbb R)$, we have $a^2=0$. So, we can define a cochain complex $(A,a)$
$$(A,a): A^0 \to A^1 \to A^2 \to \cdots,$$
where the coboundary map is given by right-multiplication by $a$.
The \textit{(first) resonance variety} is defined to be the set of points in $A^1$ at which the above cochain complex fails to be exact, that is,
$$ \mathcal R_1(G)=\{ a \in A^1 \mid H^1(A,a) \neq 0\}.$$
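For instance, if $G=F_n$ is a free group of rank $n\geq 2$, then $A^2=H^2(F_n,\mathbb R)=0$, so for every $a\in A^1$ we have $H^1(A,a)=A^1/\mathbb R a\neq 0$. Hence $\mathcal R_1(F_n)=A^1\cong \mathbb R^n$. This is consistent with the fact that $\bns{F_n}$ is empty, so that $\bnsc{F_n}$ is the whole character sphere (compare Proposition~\ref{prop:bns resonance} in the case where $\Gamma$ is a tree).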
In many cases of interest, the resonance variety $\mathcal R_1(G)$ is an affine algebraic subvariety of the vector space $A^1=H^1(G,\mathbb R)=\operatorname{Hom}(G,\mathbb R)$.
For $G$ a RAAG or a BBG, these varieties have been computed in \cite{PapadimaSuciuAlgebraicinvariantsforRAAGs} and \cite{PapadimaSuciuAlgebraicinvariantsforBBGs}, respectively.
These varieties turn out to be defined by linear equations; that is, they are subspace arrangements.
Following the notation in \cite{PapadimaSuciuAlgebraicinvariantsforBBGs}, let $\Gamma$ be a finite graph. For any $U\subseteq \vv \Gamma$, let $H_U$ be the set of characters $\chi:\raag\Gamma \to \mathbb R$ vanishing (at least) on all the vertices in the complement of $U$.
In our notation, this means that the subgraph induced by $\vv \Gamma \setminus U$ is contained in $\dead{\hat \chi}$.
Moreover, let $H'_U$ be the image of $H_U$ under the restriction map $ r\colon \operatorname{Hom}(\raag \Gamma,\mathbb R) \to \operatorname{Hom}(\bbg \Gamma,\mathbb R)$; see \S\ref{sec:coordinates}.
\begin{proposition}\label{prop:bns resonance}
Let $\Gamma$ be a biconnected graph with $\flag \Gamma$ simply connected.
Then $\bnsc{\bbg \Gamma} = \mathcal R_1(\bbg \Gamma) \cap \chars{\bbg \Gamma}$.
\end{proposition}
\proof
By \cite[Theorem 1.4]{PapadimaSuciuAlgebraicinvariantsforBBGs}, we have that $\mathcal R_1(\bbg \Gamma)$ is the union of the subspaces $H'_U$, where $U$ runs through the maximal collections of vertices that induce disconnected subgraphs.
Similarly, it follows from Theorem~\hyperref[thm:graphical description for bns of bbg]{D} that $\bnsc{\bbg \Gamma} $ is the union of the subspheres $S_\Lambda$, where $\Lambda$ runs through the minimal separating subgraphs.
Note that $U$ is a maximal collection of vertices inducing a disconnected subgraph precisely when the subgraph $\Lambda$ induced by $\vv \Gamma \setminus U$ is a minimal separating subgraph.
So, it is enough to show that for each such $U$, we have $ H'_U = W_\Lambda$, where $W_\Lambda$ is the linear span of the sphere $S_\Lambda$, as defined above in \S\ref{sec:graphical description}.
To show this equality, let $\chi:\bbg \Gamma \to \mathbb R$. Then $\chi \in H'_U$ if and only if there is an extension $\hat \chi$ of $\chi$ to $\raag \Gamma$ such that $\hat \chi \in H_U$.
This means that $\Lambda\subseteq \dead{\hat \chi}$.
Note that $\Lambda$ is connected by \eqref{item:minimal separating is connected} in Lemma~\ref{lem:link connected}.
So, by Lemma~\ref{a subgraph is edge dead iff there is an extension of characters on BBG to RAAG}, we have $\Lambda\subseteq \dead{\hat \chi}$ if and only if $\Lambda \subseteq \deadedge \chi$, which is equivalent to $\chi \in W_\Lambda$.
\endproof
\begin{remark}\label{rem:upper bound}
We note that in general one has the inclusion
$\bnsc{G} \subseteq \mathcal R_1(G) \cap \chars{G}$ thanks to \cite[Theorem 15.8]{PapadimaandSuciuBNSRinvariantsandHomologyJumpingLoci}.
However, the equality does not always hold; see \cite[\S 8]{SU21} for some examples, such as the Baumslag--Solitar group $\operatorname{BS}(1,2)$.
\end{remark}
We now recall a construction that reduces an Artin group to a RAAG (see \cite[\S 11.9]{oddconstruction} or \cite[\S 9]{PapadimaSuciuAlgebraicinvariantsforBBGs}).
Let $(\Gamma,m)$ be a weighted graph, where $m\colon \ee{\Gamma}\to\mathbb N$ is an assignment of positive integers on the edge set.
We denote by $\raag{\Gamma,m}$ the associated \textit{Artin group}.
When $m=2$ on every edge, we recover $\raag{\Gamma,m}=\raag \Gamma$.
The \textit{odd contraction} of $(\Gamma,m)$ is an unweighted graph $\widetilde{\Gamma}$ defined as follows. Let $\Gamma_{odd}$ be the graph whose vertex set is $\vv{\Gamma}$ and edge set is $\lbrace e\in\ee{\Gamma} \ \vert \ \text{$m(e)$ is an odd number}\rbrace$. The vertex set $\vv{\widetilde{\Gamma}}$ of $\widetilde{\Gamma}$ is the set of connected components of $\Gamma_{odd}$, and two vertices $C$ and $C'$ are connected by an edge if there exist adjacent vertices $v\in \vv C$ and $v'\in \vv C'$ in the original graph $\Gamma$.
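For instance, if $(\Gamma,m)$ consists of a single edge with label $3$, then $\Gamma_{odd}=\Gamma$ is connected, so $\widetilde{\Gamma}$ is a single vertex and $\raag{\widetilde{\Gamma}}\cong \mathbb Z$. This is consistent with the fact that the dihedral Artin group $\raag{\Gamma,m}=\langle a,b \mid aba=bab \rangle$ abelianizes to $\mathbb Z$.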
\begin{corollary}\label{cor:not Artin}
Let $\Gamma$ be a biconnected graph such that $\flag \Gamma$ is simply connected.
If $\Gamma$ has a redundant triangle, then $\bbg \Gamma$ is not an Artin group.
\end{corollary}
\begin{proof}
Let $\tau$ be a redundant triangle, with chosen minimal full separating subgraphs $\{\Lambda_1,\Lambda_2,\Lambda_3\}$.
Let $W_j=W_{\Lambda_j}$ be the subspace of $V=\operatorname{Hom}(\bbg\Gamma,\mathbb R)$ defined by $\Lambda_j$.
Arguing as in the proof of Theorem~\hyperref[thm:redundant triple criterion]{E}, we have that $W_j$ is a maximal missing subspace in the complement of $\bns{\bbg \Gamma}$, and that $\{W_1,W_2,W_3\}$ is a redundant triple of subspaces of $V$.
By Lemma~\ref{lem:LA general}, we have
$$
\dim (W_1+W_2+W_3 )+1 \leq \iep{W_1}{W_2}{W_3}.
$$
Now, assume by contradiction that $\bbg \Gamma$ is isomorphic to an Artin group $\raag {\Gamma ',m}$.
Let $\widetilde{\Gamma}'$ be the odd contraction of $(\Gamma ',m)$.
Then $\raag{\widetilde{\Gamma}'}$ is a RAAG.
Notice that $\raag {\Gamma ',m}$ and $\raag{\widetilde{\Gamma}'}$ have the same abelianization. Hence, the three spaces
$\operatorname{Hom}(\raag {\Gamma ',m},\mathbb R)$, $\operatorname{Hom}(\raag{\widetilde{\Gamma}'},\mathbb R)$, and $V$ can be identified with each other. In particular, the three character spheres $\chars{\raag{\Gamma',m}}$, $\chars{\raag{\widetilde{\Gamma}'}}$, and $\chars{\bbg \Gamma}$ can be identified as well.
Arguing as in \cite[Proposition 9.4]{PapadimaSuciuAlgebraicinvariantsforBBGs},
there is an ambient isomorphism of the resonance varieties $\mathcal R_1(\bbg \Gamma)\cong \mathcal R_1(\raag{ \Gamma',m})\cong \mathcal R_1(\raag{\widetilde{\Gamma}'})$, seen as subvarieties of $V$.
Since $\raag{\widetilde{\Gamma}'}$ is a RAAG, by \cite[Theorem 5.5]{PapadimaSuciuAlgebraicinvariantsforRAAGs} we have $\bnsc{\raag{\widetilde{\Gamma}'}} = \mathcal R_1(\raag{\widetilde{\Gamma}'}) \cap \chars{\raag{\widetilde{\Gamma}'}}$.
Similarly, since $\bbg \Gamma$ is a BBG, by Proposition~\ref{prop:bns resonance} we have
$\bnsc{\bbg \Gamma} = \mathcal R_1(\bbg \Gamma) \cap \chars{\bbg \Gamma}$.
It follows that we have an ambient isomorphism of the complements of the BNS-invariant $\bnsc{\bbg \Gamma}\cong \bnsc{\raag{\widetilde{\Gamma}'}}$ (seen as arrangements of subspheres in $\chars{\bbg \Gamma}$), as well as an ambient isomorphism of the associated arrangements of (linear) subspaces of $V$.
In particular, the arrangement of maximal missing subspaces of $\bbg \Gamma$ inside $V$ is ambient isomorphic to the arrangement of maximal missing subspaces of a RAAG.
Applying Lemma~\ref{lem:koban piggott} to the triple $\{W_1,W_2,W_3\}$ gives
$$\iep{W_1}{W_2}{W_3} = \dim (W_1+W_2+W_3 ).$$
This leads to a contradiction.
\end{proof}
\section{BBGs on 2-dimensional flag complexes}\label{section: BBGs on 2-dim flag complexes}
If $\flag \Gamma$ is a simply connected flag complex of dimension one, then $\Gamma$ is a tree. In this case, the group $\bbg \Gamma$ is a free group generated by all the edges of $\Gamma$, and in particular, it is a RAAG.
The goal of this section is to determine what happens in dimension two.
Namely, we will show that the BBG defined on a $2$-dimensional complex is a RAAG if and only if a certain poison subgraph is avoided.
We will discuss some higher-dimensional examples at the end; see Examples~\ref{ex:cone over PS} and \ref{ex:higher dimensional}.
Throughout this section, unless otherwise stated, we assume that $\Gamma$ is a biconnected graph such that $\flag \Gamma$ is 2-dimensional and simply connected.
Note that by Lemma~\ref{lem:link connected} this implies that $\flag \Gamma$ is homogeneous of dimension two.
We say that
\begin{itemize}
\item An edge $e$ is a \textit{boundary edge} if it is contained in exactly one triangle. The \textit{boundary} $\partial \flag \Gamma$ of $\flag \Gamma$ is the 1-dimensional subcomplex consisting of the boundary edges.
An edge $e$ is an \textit{interior edge} if $e\cap \partial \flag \Gamma = \varnothing$.
Equivalently, none of its vertices is on the boundary.
\item A \textit{boundary vertex} is a vertex contained in $\partial \flag \Gamma$. Equivalently, it is contained in at least one boundary edge.
A vertex $v$ is an \textit{interior vertex} if it is contained only in edges that are not boundary edges.
\item A triangle $\tau$ is an \textit{interior triangle} if $\tau \cap \partial \flag \Gamma = \varnothing$.
A triangle $\tau$ is called a \textit{crowned triangle} if none of its edges is on $\partial \flag \Gamma$.
This is weaker than being an interior triangle because a crowned triangle can have vertices on $\partial \flag \Gamma$.
If $\tau$ is a crowned triangle, then each of its edges is contained in at least one triangle different from $\tau$.
\end{itemize}
\begin{remark}
We will prove in Lemma~\ref{lem:crowned tri is redundant in dim 2} that in dimension two, a crowned triangle is redundant in the sense of \S\ref{sec:redundant triples BBGs}.
If $\partial \flag \Gamma$ is empty, then every triangle is crowned, simply because no edge can be a boundary edge.
Note that a vertex is either a boundary vertex or an interior vertex, but we might have edges which are neither boundary edges nor interior edges.
For example, the trefoil graph (see Figure~\ref{fig:trefoil}) has no interior edges, but only six of its nine edges are boundary edges. Moreover, it has no interior triangles, but it has one crowned triangle.
Notice that a crowned triangle is contained in a trefoil subgraph of $\Gamma$, but the trefoil subgraph is not necessarily a full subgraph of $\Gamma$; see Figure~\ref{fig: diamond and house}.
\end{remark}
\begin{figure}[h]
\centering
\input{pictures/diamond_house}
\caption{Graphs that contain crowned triangles, but the resulting trefoil subgraphs are not full subgraphs.}
\label{fig: diamond and house}
\end{figure}
\subsection{Complexes without crowned triangles}
The goal of this section is to provide a characterization of complexes without crowned triangles.
\begin{lemma}\label{lem: v is interior iff its link has no deg 1 vertices}
A vertex $v \in \vv \Gamma$ is an interior vertex if and only if for each vertex $w$ in $\lk{v,\Gamma}$, its degree in $\lk{v,\Gamma}$ is at least two.
\end{lemma}
\begin{proof}
First of all, notice that for a vertex $w$ in $\lk{v,\Gamma}$, its degree in $\lk{v,\Gamma}$ is equal to the number of triangles of $\flag \Gamma$ that contain the edge $(v,w)$.
Suppose that $v \in \vv \Gamma$ is an interior vertex, and let $w$ be a vertex of $\lk{v,\Gamma}$.
Since $v$ is interior, the edge $(v,w)$ is not a boundary edge, hence it is contained in at least two triangles.
Therefore, the vertex $w$ has degree at least two in
$\lk{v,\Gamma}$.
Conversely, suppose that every vertex of $\lk{v,\Gamma}$ has degree at least two in $\lk{v,\Gamma}$, and let $e=(v,w)$ be an edge containing $v$, where $w$ is some vertex in $\lk{v,\Gamma}$.
Since the degree of $w$ in $\lk{v,\Gamma}$ is at least two, the edge $e$ must be contained in at least two triangles.
Thus, the edge $e$ is not on $\partial \flag \Gamma$. Hence, the vertex $v$ is an interior vertex.
\end{proof}
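Since the degree of $w$ in $\lk{v,\Gamma}$ equals the number of common neighbours of $v$ and $w$, the lemma yields a one-line computational test for interiority. A hypothetical sketch in the same style of encoding (an adjacency dictionary; names ours):

```python
# Interior-vertex test via the lemma: v is interior iff every neighbour w of v
# has degree >= 2 in lk(v), i.e. v and w have at least two common neighbours.

def is_interior_vertex(adj, v):
    return all(len(adj[v] & adj[w]) >= 2 for w in adj[v])

# Cone over a 4-cycle: the apex 'c' is interior, the base vertices are not.
square_cone = {'c': {0, 1, 2, 3},
               0: {'c', 1, 3}, 1: {'c', 0, 2},
               2: {'c', 1, 3}, 3: {'c', 0, 2}}
print(is_interior_vertex(square_cone, 'c'))  # → True
print(is_interior_vertex(square_cone, 0))    # → False
```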
\begin{lemma}\label{lem: no crowned triangles implies no interior 1-2 simplex and at most 1 interior vertex}
Let $\Gamma$ be a biconnected graph such that $\flag \Gamma$ is $2$-dimensional and simply connected. If $\flag \Gamma$ has no crowned triangles, then $\flag \Gamma$ has no interior triangles, no interior edges, and has at most one interior vertex.
\end{lemma}
\begin{proof}
Since an interior triangle is automatically a crowned triangle, it is clear that $\flag \Gamma$ has no interior triangles.
For the second statement, assume that there is an interior edge $e=(u,v)$ of $\flag \Gamma$.
Since $e$ is an interior edge, it is contained in at least two triangles. Let $\tau$ be a triangle containing $e$. Let $w$ be the third vertex of $\tau$, and let $e_1$ and $e_2$ be the other two edges of $\tau$.
Since $u$ and $v$ are interior vertices, we have that $e_1$ and $e_2$ are not on $\partial \flag \Gamma$. So, no edge of $\tau$ is a boundary edge. That is, the triangle $\tau$ is a crowned triangle, a contradiction.
Finally, let $v$ be an interior vertex.
By definition, none of the edges containing $v$ is on $\partial \flag \Gamma$.
We claim that $\Gamma = \st{v,\Gamma}$, and in particular, there are no other interior vertices.
First, take a triangle $\tau$ containing $v$, then the two edges of $\tau$ that meet at $v$ are not on $\partial \flag \Gamma$.
The third edge of $\tau$ must be a boundary edge; otherwise, the triangle $\tau$ would be a crowned triangle.
This shows that all vertices in $\lk{v,\Gamma}$ are in $\partial \flag \Gamma$.
Now, assume by contradiction that there is a vertex $u$ at distance two from $v$.
Let $w$ be a vertex in $\lk{v,\Gamma}$ that is adjacent to $u$. Note that $w$ is a boundary vertex. Since $\lk{w,\Gamma}$ is connected by \eqref{item:link of a vertex is connected} in Lemma~\ref{lem:link connected}, there is a path $p$ in $\lk{w, \Gamma}$ from $u$ to a vertex $u'$ in $\lk{v,\Gamma}\cap\lk{w,\Gamma}$; see Figure~\ref{fig: proof of only one interior vertex}. Then the path $p$, together with the edges $(w,u)$ and $(w,u')$, bounds a triangulated disk in $\flag\Gamma$. Then the edge $(w,u')$ is contained in more than one triangle, and therefore, it is not a boundary edge, and the triangle formed by the vertices $v$, $w$, and $u'$ is a crowned triangle, a contradiction.
\end{proof}
\begin{figure}[h]
\centering
\input{pictures/proof_only_one_interior_vertex}
\caption{The path $p$ and the edges $(w,u)$ and $(w,u')$ bound a triangulated disk in $\flag \Gamma$. This implies that the edge $(w,u')$ is not a boundary edge, and the vertices $v$, $w$, and $u'$ form a crowned triangle.}
\label{fig: proof of only one interior vertex}
\end{figure}
Before we prove the next result, we introduce some terminology for graphs.
A graph $\Gamma$ is called an \emph{edge-bonding} of two graphs $\Gamma_1$ and $\Gamma_2$ if it is obtained by identifying two edges $e_1\in \ee {\Gamma_1}$ and $e_2\in \ee {\Gamma_2}$.
If $e$ denotes the image of $e_1$ and $e_2$ in $\Gamma$, we also write $\Gamma=\Gamma_1\cup_{e}\Gamma_2$ and say that $e$ is the \textit{bonding edge}.
\begin{remark}\label{rem:simultaneous edge bonding}
Since an edge-bonding involves identifying two edges from two different graphs, one can perform several edge-bondings of a collection of graphs simultaneously.
In particular, if one performs a sequence of edge-bondings, then the result can actually be obtained by a simultaneous edge-bonding.
We also note that there are two ways of identifying $e_1$ and $e_2$ that can result in two different graphs. However, this will not be relevant in the following.
\end{remark}
Our goal is to decompose a given graph as an edge-bonding of certain elementary pieces that we now define.
A \emph{fan} is a cone over a path.
Let $\Gamma_0$ be a connected graph having no vertices of degree $1$ and whose associated flag complex $\flag {\Gamma _{0}}$ is $1$-dimensional. Note that $\Gamma_0$ contains no triangles.
The cone over such a $\Gamma_0$ is called a \emph{simple cone}; see Figure~\ref{fig: example of simple cone} for an example.
\begin{figure}[h]
\centering
\input{pictures/example_simple_cone}
\caption{A simple cone.}
\label{fig: example of simple cone}
\end{figure}
\begin{remark}\label{rem: no further decomposition}
Fans and simple cones could be further decomposed via edge-bonding by disconnecting them along a cut edge.
For example, a fan can be decomposed into triangles.
However, we will not take this point of view. Instead, it will be convenient to decompose a graph into fans and simple cones and regard them as elementary pieces.
\end{remark}
It follows from Corollary~\ref{cor: cone graph gives an isomorphism between BBG and RAAG} that the BBG defined on a fan or simple cone is a RAAG.
Here are some further properties of fans and simple cones that follow directly from the definitions.
\begin{lemma}\label{lem:easy fans simple cones}
Let $\Gamma$ be a fan or a simple cone. The following statements hold.
\begin{enumerate}
\item The flag complex $\flag \Gamma$ is $2$-dimensional, simply connected, and contractible.
\item The flag complex $\flag \Gamma$ has no interior edges, no interior triangles, and no crowned triangles.
\item If $\Gamma = \{v\}\ast P$ is a fan over a path $P$ with endpoints $u$ and $w$, then $\partial \flag \Gamma = P \cup \{(v,u),(v,w)\}$, and there are no interior vertices.
\item If $\Gamma = \{v\}\ast \Gamma_0$ is a simple cone over $\Gamma_0$, then $\partial \flag \Gamma = \Gamma_0$, and the cone vertex $v$ is the only interior vertex.
\end{enumerate}
\end{lemma}
\begin{lemma}\label{lem: no crowned triangles implies edge-bonding of wheels and fans}
Let $\Gamma$ be a biconnected graph such that $\flag \Gamma$ is $2$-dimensional and simply connected. Suppose that $\flag \Gamma$ has no crowned triangles. Then $ \Gamma$ decomposes as edge-bondings of fans and simple cones.
\end{lemma}
\begin{proof}
We argue by induction on the number of cut edges of $\Gamma$. Suppose that $\Gamma$ has no cut edges. By Lemma~\ref{lem: no crowned triangles implies no interior 1-2 simplex and at most 1 interior vertex}, the complex $\flag\Gamma$ contains at most one interior vertex. We claim that if $\flag \Gamma$ contains no interior vertices, then $\Gamma$ is a fan. Let $v\in\vv \Gamma$. Since $v$ is a boundary vertex, its link has a vertex of degree one by Lemma~\ref{lem: v is interior iff its link has no deg 1 vertices}.
Moreover, since $\Gamma$ has no cut edges, the link of $v$ has no cut vertices.
Then $\lk{v,\Gamma}$ must be a single edge, and therefore, the graph $\Gamma$ is a triangle, which is a fan.
Thus, the claim is proved.
If $\flag \Gamma$ contains one interior vertex $u$, then $\Gamma = \st{u,\Gamma}$ as in the proof of Lemma~\ref{lem: no crowned triangles implies no interior 1-2 simplex and at most 1 interior vertex}. So, the graph $\Gamma$ is the cone over $\lk{u,\Gamma}$. Since $u$ is an interior vertex, its link has no degree one vertices. Note that the flag complex on $\lk{u,\Gamma}$ is $1$-dimensional; otherwise, the dimension of $\flag \Gamma$ would be greater than two. Thus, the graph $\Gamma=\st{u,\Gamma}$ is a simple cone. This proves the base case of induction.
Suppose that the conclusion holds for graphs having $n$ cut edges, $n\geq1$. Assume that $\Gamma$ has $n+1$ cut edges. Let $e$ be a cut edge of $\Gamma$. Cutting along $e$ gives some connected components $\Gamma_1,\dots,\Gamma_k$. Each of these components, as a full subgraph of $\Gamma$, satisfies all the assumptions of the lemma and has at most $n$ cut edges. By induction, the subgraphs $\Gamma_1,\dots,\Gamma_k$ are edge-bondings of fans and simple cones. Therefore, the graph $\Gamma$, as an edge-bonding of $\Gamma_1,\dots,\Gamma_k$, is also an edge-bonding of fans and simple cones.
\end{proof}
\begin{remark}
The decomposition in Lemma~\ref{lem: no crowned triangles implies edge-bonding of wheels and fans} is not unique (for instance, it is not maximal; see Remark~\ref{rem: no further decomposition}). We do not need this fact in this paper.
\end{remark}
We now proceed to study the ways in which one can perform edge-bondings of fans and simple cones.
Recall from \S\ref{sec:redundant triples BBGs} that the spoke of a vertex $v$ is the collection of edges containing $v$.
When $\Gamma$ is a fan, write $\Gamma=\lbrace v\rbrace\ast P_n$, where $P_n$ is the path on $n$ labelled vertices; see Figure~\ref{fig: peripheral edges}. We call the edges $(v,w_1)$ and $(v,w_n)$ \emph{peripheral edges}, and the edges $(w_1,w_2)$ and $(w_{n-1},w_n)$ are called \emph{modified-peripheral edges}. A \emph{peripheral triangle} is a triangle containing a peripheral edge and a modified-peripheral edge.
\begin{figure}[h]
\centering
\input{pictures/peripheral_edges}
\caption{The red edges are peripheral edges, and the green edges are modified-peripheral edges. The left-most and right-most triangles are peripheral triangles.}
\label{fig: peripheral edges}
\end{figure}
We say that an edge of a fan is \textit{good} if either it belongs to the spoke or it is a modified-peripheral edge.
Similarly, we say that an edge of a simple cone is \textit{good} if it belongs to the spoke.
We say an edge is \textit{bad} if it is not good.
Note that a bad edge is necessarily a boundary edge; see Lemma~\ref{lem:easy fans simple cones}.
We extend this definition to more general graphs as follows: let $\Gamma$ be a graph obtained via an edge-bonding on a collection of fans and simple cones, and let $e\in \ee \Gamma$ be a bonding edge of $\Gamma$.
We say that $e$ is \textit{good} if it is good in each fan component or simple cone component of $\Gamma$ that contains $e$.
We say that $e$ is \textit{bad} otherwise.
These concepts are motivated by the fact that forming edge-bondings along good edges does not create crowned triangles; see the following example.
\begin{example}\label{ex: good edge-bonding}
Let $\Gamma_1$ and $\Gamma_2$ be a fan and a simple cone, respectively.
If we form the edge-bonding of $\Gamma_1$ and $\Gamma_2$ by identifying a good edge in each of them, the resulting graph has no crowned triangles.
The situation is analogous if $\Gamma_1$ and $\Gamma_2$ are both fans or both simple cones.
\end{example}
\begin{lemma}\label{good edge-bonding must be along spokes or modified-peripheral edges}
Let $\Gamma=\Gamma_1\cup_{e}\Gamma_2$, where
$\Gamma_1$ is a fan or a simple cone, and $\Gamma_2$ is any graph obtained via edge-bonding of fans and simple cones.
If $e$ is a bad edge of $\Gamma_1$, then $\Gamma$ contains a crowned triangle.
\end{lemma}
\begin{proof}
If $e\in \ee {\Gamma_1}$ is bad, then it is in $\partial \flag {\Gamma_1}$. In particular, there is a unique triangle $\tau$ of $\Gamma_1$ containing $e$ (namely, the cone over $e$), and the other two edges of $\tau$ are not boundary edges (in the case of a fan, recall that a modified-peripheral edge is good).
When we form an edge-bonding along $e$, the edge $e$ is no longer a boundary edge in $\Gamma$, so $\tau$ becomes a crowned triangle in $\Gamma$.
\end{proof}
\begin{proposition}\label{prop: tree 2-spanner iff no crowned triangles}
Let $\Gamma$ be a biconnected graph such that $\flag \Gamma$ is $2$-dimensional and simply connected. Then $\Gamma$ admits a tree $2$-spanner if and only if $\flag \Gamma$ does not contain crowned triangles.
\end{proposition}
\begin{proof}
Let $T$ be a tree $2$-spanner of $\Gamma$. Suppose by contradiction that $\Gamma$ contains a crowned triangle $\tau$ whose edges are $e$, $f$, and $g$.
By Lemma~\ref{tree 2spanner triangle dicothomy}, either two of $e$, $f$, and $g$ are in $\ee T$ or none of them is in $\ee T$.
If $e$, $f$, and $g$ are not in $\ee T$, then by Lemma~\ref{tree 2spanner tetrahedron}, the graph $\Gamma$ contains a $K_4$.
This contradicts the fact that $\flag \Gamma$ is 2-dimensional.
Now consider the case that $e\notin \ee T$ and $f$ and $g$ are in $\ee T$.
Since $\tau$ is a crowned triangle, the edge $e$ is not on the boundary of $\flag \Gamma$, so there is another triangle $\tau'$ containing $e$ that is different from $\tau$.
Denote the other edges of $\tau'$ by $f'$ and $g'$.
Note that $f'$ and $g'$ cannot be in $\ee T$ by the uniqueness part of Lemma~\ref{tree 2spanner dicothomy}.
This means that none of the edges of $\tau'$ is in $\ee T$.
Again, by Lemma~\ref{tree 2spanner tetrahedron} we obtain a $K_4$, hence a contradiction.
Therefore, the graph $\Gamma$ has no crowned triangles.
Conversely, suppose that $\flag \Gamma$ has no crowned triangles.
By Lemma~\ref{lem: no crowned triangles implies edge-bonding of wheels and fans} the graph $\Gamma$ decomposes as edge-bondings of some fans and simple cones $\Gamma_1,\dots,\Gamma_{m}$.
Let $\lbrace e_1,\dots,e_n\rbrace$ be the set of bonding edges.
Note that by Remark~\ref{rem:simultaneous edge bonding} these edge-bonding operations can be performed simultaneously.
Since $\Gamma$ has no crowned triangles, by Lemma~\ref{good edge-bonding must be along spokes or modified-peripheral edges}, each of the edges in $\lbrace e_1,\dots,e_n\rbrace$ is good.
We now construct a tree $2$-spanner for $\Gamma$. We do this by constructing a tree $2$-spanner $T_i$ for each $\Gamma_i$ and then gluing them together.
For a simple cone component $\Gamma_i$, choose $T_i$ to be the spoke.
For a fan component $\Gamma_i$, write $\Gamma_i=\lbrace v_i\rbrace\ast P_{n_i}$ and order the vertices of $P_{n_i}$ as $w_1,\dots,w_{n_i}$.
Define $T_i$ to consist of the edges $(v_i,w_2),\dots,(v_i,w_{n_i-1})$, together with two more edges, one from each peripheral triangle, chosen as follows.
If the peripheral edge or the modified-peripheral edge in a peripheral triangle is involved in some edge-bondings, then choose that edge to be in $T_i$.
If none of them is involved in any edge-bonding, then choose either one of them.
Note that it is not possible that both the peripheral edge and the modified-peripheral edge of the same peripheral triangle are involved in edge-bondings; otherwise, the graph $\Gamma$ would contain a crowned triangle.
In all the cases, this provides a tree $2$-spanner $T_i$ in $\Gamma_i$.
Moreover, if $e$ is a bonding edge for $\Gamma$ that appears in a component $\Gamma_i$, then $e$ is in $T_i$.
It follows from \cite[Theorem 4.4]{CaiCorneilTreeSpanners} that $T=\bigcup^{m}_{i=1}T_i$ is a tree $2$-spanner of $\Gamma$.
\end{proof}
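The tree $2$-spanner condition in Proposition~\ref{prop: tree 2-spanner iff no crowned triangles} is also easy to verify directly: in a tree $T$, two vertices are at distance at most two exactly when they are adjacent in $T$ or share a $T$-neighbour. A hypothetical Python check (encoding and examples ours; it assumes the second argument already encodes a spanning tree of the first):

```python
# Verify the tree 2-spanner property: every edge (u, v) of the graph must have
# d_T(u, v) <= 2, i.e. (u, v) is in T or u and v share a neighbour in T.
# Assumes tree_adj encodes a spanning tree of graph_adj.

def is_tree_2_spanner(graph_adj, tree_adj):
    for u in graph_adj:
        for v in graph_adj[u]:
            if v in tree_adj[u]:
                continue                      # distance 1 in T
            if not (tree_adj[u] & tree_adj[v]):
                return False                  # distance > 2 in T
    return True

# A fan over the path w1-w2-w3 with cone vertex v: the spoke works...
fan = {'v': {1, 2, 3}, 1: {'v', 2}, 2: {'v', 1, 3}, 3: {'v', 2}}
spoke = {'v': {1, 2, 3}, 1: {'v'}, 2: {'v'}, 3: {'v'}}
print(is_tree_2_spanner(fan, spoke))       # → True

# ...while the spanning path v-w1-w2-w3 does not (v and w3 are at distance 3).
path_tree = {'v': {1}, 1: {'v', 2}, 2: {1, 3}, 3: {2}}
print(is_tree_2_spanner(fan, path_tree))   # → False
```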
\begin{remark}
When $\Gamma$ is a $2$-tree (recall from \S\ref{section: example 2-trees}), the flag complex $\flag \Gamma$ is a biconnected contractible 2-dimensional flag complex.
In \cite{caionspanning2trees}, Cai showed that a $2$-tree admits a tree $2$-spanner if and only if it does not contain a trefoil subgraph (see Figure~\ref{fig:trefoil}).
Proposition~\ref{prop: tree 2-spanner iff no crowned triangles} generalizes Cai's result to any biconnected and simply connected 2-dimensional flag complex.
Note that a trefoil subgraph in a $2$-tree is necessarily full, but this is not the case in general; see Figure~\ref{fig: diamond and house}.
\end{remark}
\subsection{The RAAG recognition problem in dimension 2}
In this section, we provide a complete answer to the RAAG recognition problem on $2$-dimensional complexes.
In other words, we completely characterize the graphs $\Gamma$ such that $\bbg \Gamma$ is a RAAG, under the assumption $\dim \flag \Gamma=2$.
Observe that a RAAG is always finitely presented (recall that all graphs are finite in our setting).
On the other hand, by \cite[Main Theorem (3)]{bestvinabradymorsetheoryandfinitenesspropertiesofgroups}, a BBG is finitely presented precisely when the defining flag complex is simply connected.
Therefore, we can assume that $\flag \Gamma$ is simply connected.
Moreover, by Corollary~\ref{cor:biconnected components} we can assume that $\Gamma$ is also biconnected.
Note that RAAGs are actually groups of type $F$, so one could even restrict to the case that $\flag \Gamma$ is contractible, thanks to \cite[Main Theorem]{bestvinabradymorsetheoryandfinitenesspropertiesofgroups}; compare this with Corollary~\ref{cor:tree2spanner implies contractible}. However, we do not need this fact.
We start by showing that in dimension two, any crowned triangle is redundant.
\begin{lemma}\label{lem:crowned tri is redundant in dim 2}
If $\dim (\flag \Gamma)=2$, then every crowned triangle is a redundant triangle.
\end{lemma}
\begin{proof}
Let $\tau$ be a crowned triangle with edges $e_1,e_2,e_3$ and vertices $v_1,v_2,v_3$, where $v_j$ is opposite to $e_j$.
Since $\tau$ is a crowned triangle, no edge $e_j$ is a boundary edge. Hence, there is another triangle $\tau_j$ adjacent to $\tau$ along $e_j$. Let $u_j$ be the vertex of $\tau_j$ not in $\tau$. If $u_j$ were adjacent to $v_j$, then we would have a $K_4$, which is impossible since $\dim \flag \Gamma = 2$.
Thus, the vertices $v_j$ and $u_j$ are not adjacent.
As a consequence, we can choose a full subgraph $\Lambda_j\subseteq \lk{v_j,\Gamma}$ that contains $e_j$ and is a minimal full separating subgraph of $\Gamma$.
Finally, note that the intersection $\Lambda_1\cap \Lambda_2\cap \Lambda_3$ cannot contain any vertex. Otherwise, we would see a $K_4$, which is against the assumption that $\flag \Gamma$ is 2-dimensional.
\end{proof}
\begin{maintheoremc}{A}\label{body main thm 2dim}
Let $\Gamma$ be a biconnected graph such that $\flag \Gamma$ is $2$-dimensional and simply connected. Then the following statements are equivalent.
\begin{enumerate}
\item \label{item: tree 2-spanner} $\Gamma$ admits a tree $2$-spanner.
\item \label{item: crowned triangles} $\flag \Gamma$ does not contain crowned triangles.
\item \label{item: BBG not RAAG} $\bbg \Gamma$ is a RAAG.
\item \label{item: BBG an Artin} $\bbg \Gamma$ is an Artin group.
\end{enumerate}
\end{maintheoremc}
\begin{proof}
The equivalence \eqref{item: tree 2-spanner} $\Leftrightarrow$ \eqref{item: crowned triangles} follows from Proposition~\ref{prop: tree 2-spanner iff no crowned triangles}.
Moreover, the implication \eqref{item: tree 2-spanner} $\Rightarrow$ \eqref{item: BBG not RAAG} is Theorem~\hyperref[containing a tree 2-spanner implies that BBG is a RAAG]{B}.
The implication \eqref{item: BBG not RAAG} $\Rightarrow$ \eqref{item: BBG an Artin} is obvious.
We prove the implication \eqref{item: BBG not RAAG} $\Rightarrow$ \eqref{item: crowned triangles} by contraposition, as follows.
Assume that $\flag \Gamma$ contains a crowned triangle $\tau$.
Then by Lemma~\ref{lem:crowned tri is redundant in dim 2} we know that $\tau$ is also a redundant triangle.
Then it follows from Theorem~\hyperref[thm:redundant triple criterion]{E} that $\bbg\Gamma$ is not a RAAG.
The implication
\eqref{item: BBG an Artin} $\Rightarrow$ \eqref{item: crowned triangles} is obtained in a similar way, using Corollary~\ref{cor:not Artin} instead of Theorem~\hyperref[thm:redundant triple criterion]{E}.
\end{proof}
Papadima and Suciu in \cite[Proposition 9.4]{PapadimaSuciuAlgebraicinvariantsforBBGs} showed that if $\flag \Gamma$ is a certain type of triangulation of the $2$-disk (which they call \textit{extra-special triangulation}), then $\bbg \Gamma$ is not a RAAG.
Those triangulations always contain a crowned triangle, so Theorem~\hyperref[body main thm 2dim]{A} recovers Papadima--Suciu's result and extends it to a wider class of graphs, such as arbitrary triangulations of disks (see Example~\ref{ex:extended trefoil continued}), or even flag complexes that are not triangulations of disks (see Example~\ref{ex: a bouquet of triangles}).
\begin{example}[The extended trefoil continued]\label{ex:extended trefoil continued}
Let $\Gamma$ be the graph in Figure~\ref{fig: A special but not extra-special triangulation}.
Since $\Gamma$ contains a crowned triangle, the group $\bbg \Gamma$ is not a RAAG by Theorem~\hyperref[body main thm 2dim]{A}.
Note that this fact does not follow from \cite{PapadimaSuciuAlgebraicinvariantsforBBGs}: the flag complex $\flag \Gamma$ is a triangulation of the disk but not an extra-special triangulation.
This fact also does not follow from \cite{DayWadeSubspaceArrangementBNSinvariantsandpuresymmetricOuterAutomorphismsofRAAGs}, because all the subspace arrangement homology groups vanish for this group $\bbg\Gamma$, that is, they look like those of a RAAG (as observed in Example~\ref{ex:extended trefoil}).
\end{example}
\begin{remark}\label{rem: higher dimensional}
The criterion for a BBG to be a RAAG from Theorem~\hyperref[containing a tree 2-spanner implies that BBG is a RAAG]{B} works in any dimension.
On the other hand, Theorem~\hyperref[body main thm 2dim]{A} fails for higher dimensional complexes.
Indeed, the mere existence of a crowned triangle is not very informative in higher dimensions; see Example~\ref{ex:cone over PS}.
However, the existence of a redundant triangle is an obstruction for a BBG to be a RAAG even in higher dimensional complexes; see Example~\ref{ex:higher dimensional}.
\end{remark}
\begin{example}[A crowned triangle in dimension three does not imply that the BBG is not a RAAG]\label{ex:cone over PS}
Let $\Gamma$ be the cone over the trefoil graph in Figure~\ref{fig:trefoil}.
Then $\flag \Gamma$ is $3$-dimensional and $\Gamma$ contains a crowned triangle (the one sitting in the trefoil graph).
However, this crowned triangle is not a redundant triangle, and the group $\bbg \Gamma$ is actually a RAAG by Corollary~\ref{cor: cone graph gives an isomorphism between BBG and RAAG}.
\end{example}
\begin{example}[A redundant triangle in dimension three implies that the BBG is not a RAAG]\label{ex:higher dimensional}
Consider the graph $\Gamma$ in Figure~\ref{fig: no crowned triangles and no tree 2-spanner}.
Then $\flag \Gamma$ is $3$-dimensional and every $3$-simplex has a $2$-face on $\partial \flag{\Gamma}$.
However, we can show that this $\bbg \Gamma$ is not a RAAG.
The triangle induced by the vertices $v_1$, $v_2$, and $v_3$ is a redundant triangle.
Indeed, the full subgraphs $\Lambda_1$, $\Lambda_2$, and $\Lambda_3$ induced by the sets of vertices $\lbrace u,v_2,v_3\rbrace$, $\lbrace u,v_1,v_3\rbrace$, and $\lbrace v_1,v_2,w\rbrace$, respectively, satisfy condition \eqref{item: omega} in the definition of redundant triangle.
Then it follows from Theorem~\hyperref[thm:redundant triple criterion]{E} that this $\bbg \Gamma$ is not a RAAG.
\end{example}
\begin{figure}[ht!]
\centering \input{pictures/ex_tetrahedron_with_3_simplices_and_one_extra}
\caption{A $3$-dimensional complex that contains a redundant triangle.}
\label{fig: no crowned triangles and no tree 2-spanner}
\end{figure}
\printbibliography
\end{document}
An exoplanet is, in general, a planet orbiting a star other than our Sun. The first confirmed discoveries of exoplanets were made in the early 1990s, opening up a field that is rapidly expanding, with several thousand confirmed exoplanets known today. These discoveries give us insight into planetary systems different from our own and introduce challenges to our understanding of how such systems form and evolve.
A variety of techniques are used to discover exoplanets. In this project, we concentrated on the transit method, which has been used to discover the most exoplanets to date --- namely, monitoring the brightness of the exoplanet system. Exoplanets are generally too close to their host stars to be seen as separate objects. The transit method tracks the brightness of the combined system (exoplanets and host star) with time, looking for the dimming caused when a planet passes in front of its star and blocks some of its light from reaching the Earth. The method tells us about the sizes of the planets and the inclination of their orbits about the host star relative to our line of sight.
In this paper we study transits of the exoplanet WASP-140b. This planet was discovered by Hellier {\em et al.} (2016); it has 2.4 Jupiter masses and orbits its V=11.1 K0 host star (coordinates $\alpha = 04^{h} 01^{m} 32.54^{s}$, $\delta = -20^{\circ}27' 03.9"$ J2000) once in roughly 2.24 days. Hellier {\em et al.} note a rotational modulation of the out-of-transit flux with an $\sim10.4$ day cycle, which they attribute to magnetic activity of the host. They note that the transit is grazing, leading to a higher uncertainty in the estimated radius of the planet ($1.44^{+0.42}_{-0.18}$ Jupiter radii).
We apply the {\sc exotic} model (Zellem {\em et al.}, 2020) to estimate basic parameters of the system, such as the time of mid-transit, the planetary radius relative to the host star, and the orbital radius. We compare and contrast these results with a simple transit model (Mandel \& Agol, 2002) that we implemented with a Bayesian optimizer, as well as with literature results. We were particularly interested in seeing whether there were deviations in the times of mid-transit compared to a fixed orbital period. The Transit Timing Variation (TTV) method is based on monitoring such changes in the timing of transits. The presence of non-transiting planets in the same system can be inferred from TTV measurements: their gravitational interaction will sometimes increase the orbital period of the transiting planet and at other times decrease it, depending on the relative positions of the planets, so the mid-transit times will vary from a fixed, regular cycle.
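To make the observed-minus-calculated (O$-$C) comparison behind the TTV method concrete, here is a minimal sketch. The reference epoch is the first mid-transit from Table~\ref{tab:exotic_wasp_140b_fits}; the period is an illustrative placeholder consistent with the $\sim$2.24-day value quoted above, not a fitted result from this work.

```python
# Minimal O-C sketch for transit timing (period value is illustrative).
T0 = 2458441.7633   # reference mid-transit (BJD), 18 Nov 2018 transit
P  = 2.2359835      # assumed fixed orbital period in days (placeholder)

def predicted_mid_transit(n):
    """Mid-transit time of orbit n under a strictly linear ephemeris."""
    return T0 + n * P

def o_minus_c(observed_bjd):
    """Timing residual relative to the linear ephemeris (nearest epoch)."""
    n = round((observed_bjd - T0) / P)
    return observed_bjd - predicted_mid_transit(n)

# The 22 Jan 2019 mid-transit from the same table gives a residual of roughly
# two minutes, i.e. consistent with a fixed period at this timing precision.
print(o_minus_c(2458506.6080))
```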
\begin{figure}[htb]
\centering
\begin{subfloat}[18 November 2018]{
\includegraphics[width=0.45\textwidth]{FinalLightCurve_WASP-140_b_2018-11-18.pdf}
\label{fig:1}}
\end{subfloat}\hfil
\begin{subfloat}[22 January 2019]{
\includegraphics[width=0.45\textwidth]{FinalLightCurve_WASP-140_b_2019-01-22.pdf}
\label{fig:2}}
\end{subfloat}\hfil
\medskip
\begin{subfloat}[11 October 2020]{
\includegraphics[width=0.45\textwidth]{FinalLightCurve_WASP-140_b_2020-10-11.pdf}
\label{fig:3}}
\end{subfloat}
\begin{subfloat}[20 October 2020]{
\includegraphics[width=0.45\textwidth]{FinalLightCurve_WASP-140_b_2020-10-20.pdf}
\label{fig:4}}
\end{subfloat}\hfil
\medskip
\begin{subfloat}[29 October 2020]{
\includegraphics[width=0.45\textwidth]{FinalLightCurve_WASP-140_b_2020-10-29.pdf}
\label{fig:5}}
\end{subfloat}\hfil
\begin{subfloat}[02 January 2021]{
\includegraphics[width=0.45\textwidth, height=0.34\linewidth]{FinalLightCurve_WASP-140_b_2021-01-02.pdf}
\label{fig:7}}
\end{subfloat}\hfil
\caption{Selected WASP-140b transit data collected by the MicroObservatory and models. MicroObservatory observations have no filter. The red lines show the expected variation based on the best fitting {\sc exotic} model for each transit. Not all transits are shown for reasons of space.
\label{fig:MObs_WASP_140_transits}}
\end{figure}
\begin{sidewaysfigure}[htb]
\centering
\begin{subfloat}[04 October 2019 ($w$)]{
\includegraphics[width=0.45\linewidth]{FinalLightCurve_WASP-140_b_04-10-2019.pdf}
\label{fig:8}}
\end{subfloat}\hfil
\begin{subfloat}[14 October 2020 ($i_p$)]{
\includegraphics[width=0.45\linewidth]{FinalLightCurve_WASP-140_b_2020-10-14.pdf}
\label{fig:9}}
\end{subfloat}
\medskip
\begin{subfloat}[24 October 2021 ($i_p$)]{
\includegraphics[width=0.45\linewidth]{FinalLightCurve_WASP-140_b_24-11-2021.pdf}
\label{fig:10}}
\end{subfloat}\hfil
\begin{subfloat}[28 December 2021 ($r_p$)]{
\includegraphics[width=0.45\linewidth]{FinalLightCurve_WASP-140_b_28-12-2021.pdf}
\label{fig:11}}
\end{subfloat}
\caption{WASP-140b transit data collected using the LCO. The filters used for the LCO observations are indicated in the appropriate sub-figure captions.
\label{fig:LCO_WASP_140_transits}}
\end{sidewaysfigure}
\begin{table}[t]
\caption{{\bf Fitted Parameters for WASP-140b} from the EXOTIC modelling. Mid-transit times are given in Barycentric Julian Dates (Barycentric Dynamical Time), the orbital semi-major axis ($a$) in terms of the stellar radius ($r_s$), and the planetary radius ($r_p$) relative to the stellar radius. {\sc exotic} outputs ${a}/{r_{s}}$, so a column giving the inverse is given for convenience when comparing with a later model and the literature. Uncertainties are $1\sigma$. `Quality' is a subjective assessment by the authors of the quality of the light curve. Exposure times for the LCOGT observations were 16.5 s for 4 October 2019, 100 s for 14 October 2020, 16.8 s for 24 October 2021, and 60 s for 28 December 2021.}
\centering
\hspace{-2.5cm}
\begin{tabular}{||l|l|l|l|l|l||}
\hline
Date & Mid-transit & ${a}/{r_{s}}$ & $r_{s}/a$ & $r_{p} / r_{s}$ & Quality \\
\hline
18 Nov 2018 & 2458441.7633 $\pm$ 0.0028 & 7.69 $\pm$ 0.30 & $0.130 \pm 0.005$ & 0.1786 $\pm$ 0.0099 & complete \\
22 Jan 2019 & 2458506.6080 $\pm$ 0.0026 & 7.63 $\pm$ 0.24 & $0.131 \pm 0.004$ & 0.179 $\pm$ 0.001 & partial \\
11 Oct 2020 & 2459134.9220 $\pm$ 0.0038 & 7.51 $\pm$ 0.52 & $0.133^{+0.010}_{-0.009}$ & 0.154 $\pm$ 0.024 & complete \\
20 Oct 2020 & 2459143.8611 $\pm$ 0.0020 & 8.40 $\pm$ 0.26 & $0.119 \pm 0.004$ & 0.178 $\pm$ 0.015 & complete \\
29 Oct 2020 & 2459152.8145 $\pm$ 0.0026 & 8.14 $\pm$ 0.33 & $0.123 \pm 0.005$ & 0.176 $\pm$ 0.016 & complete \\
15 Dec 2020 & 2459199.7704 $\pm$ 0.0083 & 7.29 $\pm$ 0.64 & $0.137^{+0.013}_{-0.011}$ & 0.119 $\pm$ 0.030 & partial \\
02 Jan 2021 & 2459217.6512 $\pm$ 0.0023 & 7.70 $\pm$ 0.24 & $0.130 \pm 0.004$ & 0.179 $\pm$ 0.012 & complete \\
04 Oct 2019 & 2458761.5091 $\pm$ 0.0004 & 7.631 $\pm$ 0.085 & $0.131 \pm 0.001$ & 0.1684 $\pm$ 0.005 & complete \\
14 Oct 2020 & 2459137.1516 $\pm$ 0.0023 & 7.20 $\pm$ 0.23 & $0.139^{+0.005}_{-0.004}$ & 0.1618 $\pm$ 0.0085 & complete \\
24 Oct 2021 & 2459512.8046 $\pm$ 0.0033 & 6.56 $\pm$ 0.19 & $0.152^{+0.005}_{-0.004}$ & 0.1678 $\pm$ 0.0059 & partial \\
28 Dec 2021 & 2459577.6402 $\pm$ 0.0015 & 6.486 $\pm$ 0.035 & $0.154 \pm 0.001$ & 0.1783 $\pm$ 0.0027 & partial \\
\hline
\end{tabular}
\label{tab:exotic_wasp_140b_fits}
\end{table}
\section{Data and Initial Processing}
The bulk of observations are 60-second, unfiltered exposures collected by a 6-inch aperture MicroObservatory (MObs; Sadler {\em et al.}, 2001) telescope located at Mount Hopkins (latitude $31.675^\circ$, longitude $-110.952^\circ$, 1,268m altitude above sea level) in Arizona, using a KAF-1403 ME CCD camera with a pixel scale of 5.2" per pixel and $2 \times 2$ binning to reduce noise. These data were analysed using {\sc exotic}, which is a {\sc python}-based tool developed by JPL's `Exowatch' program for reducing exoplanet transit data. This software can run on a variety of operating systems as well as via Google's online `Colaboratory'\footnote{For further details on this tool see: https://research.google.com/colaboratory/faq.html} tool. Technical details on {\sc exotic} can be found in Zellem~{\em et al.} (2020). Priors for Markov Chain Monte Carlo (MCMC) fitting by {\sc exotic} are automatically scraped from the NASA Exoplanet Archive (Akeson~{\em et al.}, 2013), while limb darkening parameters are generated by {\sc exofast} (Eastman~{\em et al.}, 2013). {\sc exotic} generates $1\sigma$ uncertainties based on the resulting posterior distributions.
Only dark images were available for the MObs observations, i.e., no flat field images were collected. The dark frames were collected at the beginning and end of each night of observation. As part of the analysis, {\sc exotic} applied the dark frames to the science data, and then performed differential aperture photometry. For each transit, the analyst supplied {\sc exotic} a list of comparison stars. {\sc exotic} performed a stability assessment of this candidate list, choosing the most stable star as the final comparison star. Relatively poor pointing accuracy of the telescope and drift in tracking throughout a transit could lead to selection of different final comparison stars across the transits. However, typically {\sc exotic} selected stars 108 or 112 from the AAVSO comparison star sequence for WASP-140. We plate-solved science frames for each transit to ensure correct selection of the exoplanet host star, using astrometry.net, together with confirmation using charts prepared using the online AAVSO finding chart tool.
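The differential aperture photometry step can be illustrated with a toy example (synthetic counts; names ours --- {\sc exotic}'s actual pipeline also handles calibration frames, aperture optimisation, and uncertainty propagation). Dividing the target's counts by a stable comparison star's counts removes brightness changes shared by both stars, such as variable atmospheric transparency, leaving the transit signal:

```python
# Toy differential photometry: target counts / comparison counts, normalised
# to the median, removes trends common to both stars (synthetic numbers).

def relative_flux(target_counts, comp_counts):
    rel = [t / c for t, c in zip(target_counts, comp_counts)]
    median = sorted(rel)[len(rel) // 2]
    return [r / median for r in rel]

# A ~3% transit dip in the target plus variable clouds dimming both stars:
target = [1000, 900, 970, 776, 1000]
comp   = [2000, 1800, 2000, 1600, 2000]
print(relative_flux(target, comp))  # → [1.0, 1.0, 0.97, 0.97, 1.0]
```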
\section{Analysis}
We analysed 22 MObs attempts to observe transits of WASP-140b, dating from 12 October 2016 to 24 October 2021. Only 7 resulted in successful measurements of transits (see Figure~\ref{fig:MObs_WASP_140_transits} for charts of representative transits), a success rate of 32\%. Clouds or incorrect pointing of the telescope accounted for the failed attempts. Table~\ref{tab:exotic_wasp_140b_fits} lists the key output from these fits using {\sc exotic}, namely the orbital semi-major axis $a$ (relative to the stellar radius $r_s$), the planetary radius ($r_p$), and the time of mid-transit (in BJD). The observations and fitted parameter values from {\sc exotic} have been uploaded to the AAVSO exoplanet database, under the usercode BTSB.
We also made use of the Las Cumbres Observatory Global Telescope network (LCOGT; Brown~{\em et al.}, 2013), first using archival transit data and also collecting photometry in the $r_p$ band on the night of 28 December 2021 using a telescope at the Cerro Tololo Inter-American Observatory. All the analysed LCOGT data were collected using 0.4 meter telescopes equipped with CCDs. We processed all these data using {\sc exotic}, following flat fielding, dark subtraction, and bias correction via the LCO {\sc banzai} pipeline.\footnote{See https://github.com/LCOGT/banzai for further information on this data pipeline.} Model fits to the transits are shown in Figure~\ref{fig:LCO_WASP_140_transits} and final parameter estimates are given in Table~\ref{tab:exotic_wasp_140b_fits}. We did not upload the LCOGT archival data or the model fits based on these to the AAVSO exoplanet database, given that we did not collect the data and did not wish to `make claim' to them over the original investigators.
\begin{figure}
\centerline{\includegraphics[height=8.5cm]{figures/Wasp_140b_residuals.png}}
\caption{Residuals from the linear regression fit of orbit number versus mid-transit time for WASP-140b. A linear model fitted to the residuals shows no statistically significant slope: the blue line is the mean regression slope, and the grey shaded zone is its $3 \sigma$ confidence interval. The error bars for the mid-transit timing estimates overlap with this interval, and with zero, indicating no statistically significant trends in the residuals. Transits were classified by eye into complete and incomplete transits, to see if data quality might obscure any trends (see Table~\ref{tab:exotic_wasp_140b_fits}). It does not.
\label{fig:wasp_140b_residuals}
}
\end{figure}
\subsection{Orbital Period}
The ephemeris of Hellier {\em et al.} (2016) was used to calculate the number of orbits made by WASP-140b about its host star since their starting epoch. These were then regressed against the mid-transit times given in Table~\ref{tab:exotic_wasp_140b_fits} using the `lm' function in R (R Core Team, 2021),\footnote{R is available from https://www.r-project.org} giving an orbital period of $2.235987 \pm 0.000008$ days and an epoch of $2456912.349 \pm 0.008$. These are in good agreement with the values of Hellier {\em et al.} (2016): $2.2359835 \pm 0.0000008$ days for the orbital period and $2456912.35105 \pm 0.00015$ for the epoch. Higher order polynomial fits did not yield additional statistically significant parameters. Inspection of the residuals (see Figure~\ref{fig:wasp_140b_residuals}) reveals no apparent variation in period. These results therefore do not indicate any significant transit timing variations (TTVs). As noted above, TTVs would indicate the presence of an additional planet in the WASP-140 system through its gravitational attraction periodically altering the orbital velocity of WASP-140b, which would lead to observed transits of WASP-140b being earlier or later than predicted by a linear ephemeris. Maciejewski (2022) also analysed Transiting Exoplanet Survey Satellite (TESS, Ricker~{\em et al.}, 2015) data for the system, searching unsuccessfully for TTVs and concluding that none are currently detectable, in agreement with the current study.
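The regression described above amounts to an ordinary least-squares fit of mid-transit time against orbit number. A minimal Python sketch is given below; the paper itself used R's `lm', so the function name and the synthetic check values are purely illustrative.

```python
import numpy as np

def fit_linear_ephemeris(orbit_numbers, mid_transit_times):
    """Least-squares fit of T_mid = T0 + n * P, returning the epoch T0,
    the orbital period P, and the residuals (the basis of a TTV search)."""
    n = np.asarray(orbit_numbers, dtype=float)
    t = np.asarray(mid_transit_times, dtype=float)
    # np.polyfit (degree 1) returns [slope, intercept] = [period, epoch]
    period, epoch = np.polyfit(n, t, 1)
    residuals = t - (epoch + period * n)
    return epoch, period, residuals

# Illustrative check with synthetic, noise-free transit times.
p_true, t0_true = 2.235987, 2456912.349
orbits = [0, 10, 100, 500, 1000]
times = [t0_true + n * p_true for n in orbits]
epoch, period, residuals = fit_linear_ephemeris(orbits, times)
```

With real data the residuals would carry the measurement noise, and any coherent structure in them (rather than scatter about zero) would be the TTV signal.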
\begin{sidewaysfigure}
\centerline{\includegraphics[height=0.54\textheight]{figures/wasp_140_b_mcmc.png}}
\caption{
Example MCMC results for the 04 October 2019 transit of WASP-140b. This represents 4,000 steps in the Markov chain, including the initial steps known as `burn-in'. These steps are excluded from the final results, being the consequence of starting the optimization in a lower probability set of parameters and moving towards the region of highest probability. Actual runs included 40,000 steps, which unfortunately `overloaded' the plotting software and are therefore not included here. `Ratio' is the ratio of the planetary radius to the stellar one, `orbital' is the ratio of stellar radius to the orbital semi-major axis, `u' is the linear limb darkening coefficient, `cos\_i' is the cosine of the inclination, `offset' an adjustment in phase, `L' an adjustment in flux, and `sigma' an estimate of the white noise in the data. The chart provides the distributions of each of these parameters on its diagonal as bar charts, correlations between the variables are given in the upper right, and scatter plots crossing each of the parameters in turn are given in the lower left. Each point in a scatter plot represents a step in the Markov chain. The bold lines are linear regressions to the data, corresponding to the correlation results.
\label{fig:14_oct_19_wasp_140_mcmc}
}
\end{sidewaysfigure}
\subsection{Transit Models}
While {\sc exotic} had already fitted the transits, we decided to build from `first principles' a simple transit model and couple this with optimization techniques, in order both to make a comparison and to explore including inclination as a free parameter. This was primarily a student project acting as an introduction to exoplanet research, so building our own model and coupling this with optimization was considered a good learning exercise. {\sc exotic} adopts its priors from the NASA Exoplanet Archive, hence it adopted the inclination from Hellier {\em et al.} (2016) as a fixed parameter. Given the grazing nature of this transit, fixing the inclination has a large effect on the derived parameter estimates. For optimization of our transit model, we used the Markov Chain Monte Carlo (MCMC) technique Hamiltonian Monte Carlo (HMC). MCMC allows construction of a Markov process whose stationary distribution is the same as our target distribution, through the generation of a `chain' of random samples from the process. Given a sufficient number of samples, such a chain comes close enough to the stationary distribution to provide a good approximation to the target distribution. This is known as convergence of the MCMC chain (see Sinharay, 2003), and allows exploration of the uncertainty in the parameter estimates --- explaining our interest in this technique. We implemented HMC using the {\em rstan}\footnote{Available from https://mc-stan.org/users/interfaces/rstan} implementation of Stan (Carpenter~{\em et al.}, 2017; Stan Development Team, 2016) inside the statistical programming language R. Uniform priors were used, reflecting minimal prior knowledge of the parameters.
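The chain, burn-in, and convergence ideas above can be illustrated with a far simpler MCMC variant than the Hamiltonian Monte Carlo we actually used: a random-walk Metropolis sampler. The Python sketch below is not our analysis code (that was written in Stan and R); it merely demonstrates a chain started at a low-probability point converging to a standard normal target.

```python
import math
import random

def metropolis(log_prob, x0, n_steps, step=1.0, seed=42):
    """Random-walk Metropolis sampler: a simpler MCMC variant than the
    HMC used in the paper, shown only to illustrate chains and burn-in."""
    rng = random.Random(seed)
    x, lp = x0, log_prob(x0)
    chain = []
    for _ in range(n_steps):
        prop = x + rng.gauss(0.0, step)
        lp_prop = log_prob(prop)
        # accept with probability min(1, exp(lp_prop - lp))
        if math.log(rng.random() + 1e-300) < lp_prop - lp:
            x, lp = prop, lp_prop
        chain.append(x)
    return chain

# Target: a standard normal. Start far from the mode (x0 = 5) and
# discard the first 1,000 steps as burn-in, as described in the text.
chain = metropolis(lambda x: -0.5 * x * x, x0=5.0, n_steps=20000)
posterior = chain[1000:]
mean = sum(posterior) / len(posterior)
```

After burn-in the sample mean and variance approximate the target's (0 and 1), which is exactly the sense in which the converged chain "provides a good approximation to the target distribution."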
To build this model we used some key parameters of the exoplanet and its host star:
\begin{itemize}
\item{$a$, $r_s$, and $r_p$ were as defined above, with the radii being in terms of $a$;}
\item{$u$ = linear limb darkening coefficient (see below for an explanation of this parameter);}
\item{$i$ = orbital inclination (in degrees). Ninety degrees means that the orbital plane is in the line of sight from the Earth;}
\item{{\em offset} = a parameter to adjust the reference point of phase axis;}
\item{$U$ = system brightness, used to adjust the reference point of flux axis. The out of transit flux should be approximately unity, i.e., the fluxes are normalized to the mean out of transit level.}
\end{itemize}
We first consider that $d$ is the center-to-center distance between the planet and the star. We can then calculate $z = \frac{d}{r_s}$, the normalised separation of the centers (of the exoplanet and its host star), and $p = \frac{r_p}{r_s}$, the ratio of the disk radii. This allows us to model a transit based on the equations of Mandel \& Agol (2002). These specify that for a uniform source, the ratio of obscured to unobscured flux is $F^e(p, z)=1-\lambda^e(p, z)$, where
\begin{equation}\label{eqn:mandel}
\lambda^{e}(p, z)=\left\{\begin{array}{ll}
0 & 1+p<z \\
\frac{1}{\pi}\left[p^{2} \kappa_{0}+\kappa_{1}-\sqrt{\frac{4 z^{2}-\left(1+z^{2}-p^{2}\right)^{2}}{4}}\right]
& |1-p|<z \leq 1+p \\
p^{2} & z \leq 1-p \\
1 & z \leq p-1,
\end{array}\right.
\end{equation}
where $\kappa_{1}=\cos ^{-1}\left[\left(1-p^{2}+z^{2}\right) / 2 z\right]$ and $\kappa_{0}=\cos ^{-1}\left[\left(p^{2}+z^{2}-1\right) / 2 p z\right]$.
This set of equations describes the flux of a planetary system in the following cases:
\begin{enumerate}
\item{When the planetary disk does not obscure any portion of the stellar disk. There will be no dimming of the combined light, and so the normalized flux would be 1.}
\item{When the planetary disk is completely in front of the stellar disk. In the case of a uniformly bright stellar disk, the dimming will scale by the obscured area -- which can be calculated by $\frac{r_{p}^2}{r_{s}^2}$ (or $p^2$).}
\item{The boundary case when the planetary disk is moving onto or off the stellar disk.}
\end{enumerate}
The fourth case in Equation \ref{eqn:mandel} corresponds to the unlikely situation in which the planet is as large as, or larger than, its host star.
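Equation \ref{eqn:mandel} transcribes directly into code. The sketch below is in Python for illustration (our actual model was written in Stan and R), with the function names being our own:

```python
import math

def lambda_e(p, z):
    """Fraction of stellar flux blocked by a planet of radius ratio p at
    normalised centre-to-centre separation z, for a uniform stellar disk
    (Mandel & Agol 2002, their Equation 1)."""
    z = abs(z)
    if z > 1.0 + p:                      # case 1: no overlap
        return 0.0
    if z <= 1.0 - p:                     # case 2: planet fully on the disk
        return p * p
    if z <= p - 1.0:                     # case 4: planet covers the star
        return 1.0
    # case 3: ingress/egress, |1 - p| < z <= 1 + p.
    # Clamp acos arguments and the sqrt argument against rounding error
    # at the contact points.
    kappa1 = math.acos(max(-1.0, min(1.0, (1.0 - p * p + z * z) / (2.0 * z))))
    kappa0 = math.acos(max(-1.0, min(1.0, (p * p + z * z - 1.0) / (2.0 * p * z))))
    area = math.sqrt(max(0.0, (4.0 * z * z - (1.0 + z * z - p * p) ** 2) / 4.0))
    return (p * p * kappa0 + kappa1 - area) / math.pi

def transit_flux(p, z):
    """Normalised flux F^e(p, z) = 1 - lambda^e(p, z)."""
    return 1.0 - lambda_e(p, z)
```

The three cases in the enumeration correspond to flux values of 1 (no overlap), $1 - p^2$ (full overlap on a uniform disk), and the ingress/egress expression in between; the function is continuous across the contact points.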
\begin{table}[bt]
\caption{{\bf MCMC results.} Only one of the LCOGT data sets gave a reliable solution. Results for three of the better MObs transits are shown, to demonstrate the lower confidence in the parameter estimates for such data sets (together with an implausibly large `planet'). Uncertainties are $1\sigma$. `Date' is the night of observation. }
\centering
\hspace{-2.5cm}
\begin{tabular}{||l|c|c|c|c|c|l||}
\hline
Date & ${r_p}/{r_s}$ & ${r_s}/a$ & $u$ & $\cos{i}$ & $\sigma$ & Observatory\\
\hline
04 October 2019 & $0.159 \pm 0.013$ & $0.109 \pm 0.007$ & $0.48 \pm 0.23$ & $0.086 \pm 0.013$ & $0.0036 \pm 0.0001$ & LCOGT \\
11 October 2020 & $0.35 \pm 0.23$ & $0.14 \pm 0.04$ & $0.55 \pm 0.30$ & $0.16 \pm 0.07$ & $0.010 \pm 0.001$ & MObs \\
20 October 2020 & $0.32 \pm 0.22$ & $0.10 \pm 0.02$ & $0.53 \pm 0.28$ & $0.11 \pm 0.05$ & $0.0058 \pm 0.0005$ & MObs \\
02 January 2021 & $0.33 \pm 0.20$ & $0.11 \pm 0.02$ & $0.58 \pm 0.28$ & $0.11 \pm 0.05$ & $0.0063 \pm 0.0005$ & MObs \\
\hline
\end{tabular}
\label{tab:mcmc_results}
\end{table}
Limb darkening refers to the phenomenon that the brightness of a star appears to decrease from the centre to the edge, or limb, of the observed disk. This occurs because a stellar atmosphere increases in temperature with depth. At the centre of a stellar disk an observer `sees' deeper and hotter layers that emit more light compared to at the limbs, where the upper and cooler layers are seen (which produce less light). The `small planet' approximation was used for the transit model, in that the limb darkening value corresponding to the centre of the planetary disk projected onto the stellar disk was uniformly applied across the stellar area obscured by the planet. We implemented linear limb darkening for the model to adjust the obscured flux values, i.e., a limb darkening model with only a single term.
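The linear law referred to above has a simple closed form, $I(\mu)/I(1) = 1 - u(1-\mu)$, with $\mu$ the cosine of the angle from disk centre, and a disk-averaged intensity of $1 - u/3$. A short illustrative Python sketch (the coefficient value is hypothetical, and the numerical integration is just a sanity check of the analytic average):

```python
def linear_limb_darkening(mu, u):
    """Linear limb darkening law: I(mu)/I(centre) = 1 - u * (1 - mu),
    where mu is the cosine of the angle from the centre of the stellar
    disk (mu = 1 at disk centre, mu = 0 at the limb)."""
    return 1.0 - u * (1.0 - mu)

# The disk-averaged intensity for this law is 1 - u/3: integrate
# 2 * mu * I(mu) over mu from 0 to 1 (midpoint rule as a check).
u = 0.6  # illustrative coefficient
n = 100000
avg = sum(2.0 * ((i + 0.5) / n) * linear_limb_darkening((i + 0.5) / n, u)
          for i in range(n)) / n
```

Under the small planet approximation described in the text, the obscured flux is weighted by $I(\mu)$ evaluated at the planet's projected centre, normalised by this disk average, rather than integrated over the occulted area.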
Only one of our data sets (LCOGT 04 October 2019) could be reliably fitted with this model, as it had a sufficient signal to noise ratio, a well-defined transit, and sufficient observations before and after the transit so that the out of transit flux levels were well constrained. Interestingly, we were not able to derive a determinate solution for one of the other LCOGT data sets, despite it appearing by eye to be a suitable transit. This would indicate that we have too many free parameters in the fit, a point we will come back to later in the paper. Table~\ref{tab:mcmc_results} presents results of this fitting and some example MObs fits. Clearly we were asking too much of the MObs data when we included inclination and limb darkening as free parameters, as we obtained physically unreasonable solutions for these data sets. {\sc exotic} is a better tool for these high noise data sets. The HMC fit to the LCOGT data is more reasonable.
\subsection{Comparison with the Literature}
Hellier {\em et al.} (2016) estimated $r_p / r_s$ as $ 0.166^{+0.059}_{-0.027}$, $\cos{i} = 0.117^{+0.013}_{-0.009}$, and ${r_s}/a = 0.125^{+0.030}_{-0.022}$. These figures are in good agreement with the HMC model fit based on the LCOGT data, except for $\cos{i}$: the HMC model corresponds to an inclination of $85.07 \pm 0.75$ degrees compared to Hellier {\em et al.'s} value of $83.3^{+0.5}_{-0.8}$ degrees, although this is within two standard deviations.
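The conversion from the fitted $\cos{i}$ in Table~\ref{tab:mcmc_results} to the quoted inclination, with its $1\sigma$ uncertainty propagated through the derivative of $\arccos$, can be checked directly:

```python
import math

# HMC fit to the 04 October 2019 LCOGT data (Table: cos i = 0.086 +/- 0.013)
cos_i, sigma_cos_i = 0.086, 0.013
inclination = math.degrees(math.acos(cos_i))
# d(arccos x)/dx = -1 / sqrt(1 - x^2), so the propagated 1-sigma error is:
sigma_inclination = math.degrees(sigma_cos_i / math.sqrt(1.0 - cos_i ** 2))
```

This reproduces the $85.07 \pm 0.75$ degrees quoted above.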
A comparison with the results from the {\sc exotic} model for the same data shows that the orbital radius from the HMC model is substantially larger (at $\sim 9.2$ times the stellar radius), as is the planetary radius ({\sc exotic's} $0.131 \pm 0.001 \: r_s$ compared to $0.159 \pm 0.013$). The lack of agreement is puzzling, given that both Hellier {\em et al.} and {\sc exotic} integrate the limb darkened fluxes obscured by the planetary disk, suggesting that the small planet approximation is not the primary cause of the difference.
\begin{figure}[!t]
\begin{subfloat}[TESS Sector 31 Light Curve]{
\includegraphics[width=0.45\textwidth]{figures/wasp_140_tess.png}
\label{fig:tess_1}}
\end{subfloat}\hfil
\begin{subfloat}[2020-Nov-08 Transit]{
\includegraphics[width=0.45\linewidth]{figures/wasp_140b_20_nov_2020.png}
\label{fig:tess_2}}
\end{subfloat}
\caption{The figure on the left (a) shows the non-normalized Pre-search Data Conditioning Simple Aperture Photometry (PDC\_SAP) generated by the TESS team, which has had long-term systematic trends removed and so provides better data quality than the simple aperture photometry (also available from MAST). Remaining variability is clearly visible, showing changes on timescales comparable to that between transits. Hellier {\em et al.} (2016) noted residual variation at a 5--9 milli-magnitude amplitude, a range consistent with the observed remaining variability. The figure on the right (b) shows one of these transits plus the optimal model generated by the HMC code. This transit is the second from the left in the data following the break in the middle of Figure \ref{fig:tess_1}.
\label{fig:tess_light_curves}}
\end{figure}
Davoudi {\em et al.} (2020) used {\sc exofast} (Eastman {\em et al.}, 2013) to model a clear filter 01 January 2017 transit data set of the system, finding the planet's radius to be $1.1990 \pm 0.0735$ times that of Jupiter, which is smaller than Hellier~{\em et al.}'s estimate of $1.44^{+0.42}_{-0.18} \: {\rm R_{J}}$ and this paper's of $1.38^{+0.18}_{-0.17} \: {\rm R_{J}}$ (although within the error ranges). No inclination or orbital radius data were supplied by Davoudi {\em et al.}, so a comparison is not possible.
Alexoudi (2022) applied the {\em emcee} Bayesian sampler (Foreman-Mackey {\em et al.}, 2013) to analyse 28 transits from 3 sectors\footnote{Sector 4 from 18 October 2018 to 15 November 2018, sector 5 from 15 November 2018 to 11 December 2018, and sector 31 from 21 October 2020 to 19 November 2020.} of data collected by the TESS space telescope. Alexoudi derived an inclination of $84.30 \pm 0.06$ degrees, $r_{s}/a = 0.1166 \pm 0.0008$, and $r_p / r_s = 0.1464 \pm 0.0010$. These values are similar to those of the current paper and Hellier {\em et al.}, but not within formal uncertainties. Alexoudi noted the differences with Hellier {\em et al.}, commenting that these could be due to the higher accuracy of the TESS data. As a check, we downloaded 2-minute cadence TESS data from MAST (see Figure \ref{fig:tess_1}) and applied the HMC model to a transit (centred on TBJD 2459161.75, see Figure \ref{fig:tess_2}). We found $r_{s}/a = 0.109 \pm 0.008$, $r_p/r_s = 0.163 \pm 0.016$, and $\cos{i} = 0.089 \pm 0.016$ ($\sim 84.87^{\circ}$). The linear limb darkening coefficient was poorly constrained ($0.48 \pm 0.29$). Our model resulted in a larger planetary radius than Alexoudi's, very close to the values derived from the LCOGT data.
\subsection{Recommendations}
Problems with the other data sets included a lack of sufficient pre-transit data, which prevented reliable estimates (e.g., for the 14 October 2020 data set), and variations in the out-of-transit flux levels, which prevented a reliable fit to the 28 December 2021 data set. The increased noise of the MObs data compared to the LCOGT data also led to less accurate parameter estimates, especially for the ratio of the planetary to stellar radii. It would be interesting to see whether additional data processing, such as the collection and use of flat fields, would help improve the quality of these data sets.
For transit fittings of this system, we recommend that the pre- and post-transit observations be roughly as long as the actual transit, particularly since the host star appears to be active (changing in flux levels) on a short time scale. For instance, the pre-transit flux levels appear to be greater than the post-transit levels for the 28 December 2021 data set, which is a complication for a simple model such as ours.
A further complication is the use of the small planet approximation for a high inclination orbit such as WASP-140b's; in later projects we intend to apply a graduated limb darkening adjustment to the obscured flux. There is a clear correlation between $u$ and both ${r_p}/{r_s}$ and ${r_s}/a$ (see Figure~\ref{fig:14_oct_19_wasp_140_mcmc}), and $u$ itself is poorly constrained in that figure. Fixing $u$ at a theoretically motivated value, rather than leaving it free, could therefore lead to tighter confidence intervals for these two parameters. See Banks \& Budding (1990) for further discussion of the information content of data and the question of over-parameterization. Finally, WASP-140b transits close to the stellar limb, where the limb darkening gradient is strongest, further supporting this conclusion.
The signal to noise ratio is clearly important for transit fitting, affecting the accuracy of our model's fits to the MObs data. Observations with the LCOGT (similar to those presented here) appear to have sufficient ``information content'' to support the HMC model, provided sufficient data about the shoulders of the eclipse are collected for accurate estimation of the out-of-transit flux level.
\section{Summary}
This paper presented MCMC modeling of transits of WASP-140b, collected using robotic telescopes of the MObs and LCOGT. These data included a transit in December 2021 collected by the authors. We coded a fitting function based on the equations of Mandel~\& Agol (2002) and coupled this with Bayesian optimization. Together with the {\sc exotic} analysis program, two MCMC-based optimization models have thus been applied to these transits, deriving estimates for the times of mid-transit as well as physical parameters of the system. Inspection of the mid-transit times revealed a linear ephemeris with no statistical evidence of transit timing variations, i.e., no evidence for the gravitational influence of a non-transiting planet on the orbit of WASP-140b.
Results from the two analysis programs ({\sc exotic} and HMC) were in good agreement, indicating the radius of WASP-140b to be $1.38^{+0.18}_{-0.17}$ Jupiter radii, with the planet orbiting its host star in $2.235987 \pm 0.000008$ days at an inclination of $85.07 \pm 0.75$ degrees. The derived parameters are in formal agreement with the discovery paper of Hellier {\em et al.} (2016), and somewhat larger than those of a recent independent study based on photometry by the TESS space telescope (Alexoudi, 2022).
We were probably too ambitious in applying a high parameter model such as our HMC model to an exoplanet with a high inclination orbit about a host star with rapidly changing flux levels, but that is part of the learning process. Application of techniques such as Gaussian Processes to model out the host star variations would be a good next step; this would allow multiple transits to be combined and binned together, increasing the signal to noise ratio and strengthening the information content of the data. We also plan to use our HMC model on simpler systems, such as Kepler-1,\footnote{See, e.g., Ng et al. (2021), who applied the Mandel~\& Agol (2002) models, MCMC, and Gaussian Processes to Kepler space telescope data of Kepler-1b and other systems.} which do not have such active host stars and whose orbital inclinations are closer to 90 degrees, so that the model's deficiencies are smaller and the correlation between limb darkening and inclination less confounding. Having made these comments, we still recommend programming a simple model such as that of Mandel \& Agol (2002) and coupling it with an optimizer as a useful learning exercise, and one that makes for a useful student project. Our points are rather to choose a quieter system than the one we did, and either to implement improved handling of limb darkening for highly tilted systems or to choose an exoplanet with an orbit closer to $90^{\circ}$ inclination that is also somewhat smaller relative to its host star (so that the small planet approximation is more valid). If investigation of TTVs is the primary goal of the project, then {\sc exotic} is an excellent tool for such work.
\newpage
\begin{acknowledgments}
This publication makes use of the EXOTIC data reduction package from Exoplanet Watch, a citizen science project managed by NASA’s Jet Propulsion Laboratory (JPL) on behalf of NASA’s Universe of Learning and which is supported by NASA under award number NNX16AC65A to the Space Telescope Science Institute. We are grateful for observing time on the Las Cumbres Observatory Global Telescope (LCOGT) Network, and to Rachel Zimmerman Brachman (JPL) for making available this opportunity. We thank the LCOGT for making available archival data. We also thank the Harvard-Smithsonian Institute for Astrophysics for the MicroObservatory data kindly made available by Frank Sienkiewicz. This research has made use of the NASA Exoplanet Archive, which is operated by the California Institute of Technology, under contract with the National Aeronautics and Space Administration under the Exoplanet Exploration Program. We thank the University of Queensland for collaboration software. This paper includes data collected by the TESS mission and obtained from the MAST data archive at the Space Telescope Science Institute (STScI). STScI is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5–26555. We thank the anonymous referee for their comments and guidance which improved the paper.
\end{acknowledgments}
\section{Introduction} \label{sec:intro}
An exoplanet is, in general, a planet orbiting a star other than our Sun. The first confirmed discoveries of exoplanets were made in the early 1990s, opening up a field that is rapidly expanding, with several thousand confirmed exoplanets known today, giving us insight into planetary systems different from our own and introducing challenges to our understanding of how such systems form and evolve.
A variety of techniques are used to discover exoplanets. In this project, we concentrated on the transit method, which has been used to discover the most exoplanets to date --- namely, monitoring the brightness of the exoplanet system. Exoplanets are generally too close to their host stars to be seen as separate objects. The transit method tracks the brightness of the combined system (exoplanet and host star) with time, looking for the dimming caused when the planet passes in front of its star and blocks some light from reaching the Earth. The method tells us about the size of the planets and the inclination of their orbits about the host star relative to our line of sight.
In this paper we study transits of the exoplanet WASP-140b. This planet was discovered by Hellier {\em et al.} (2016): a planet of 2.4 Jupiter masses orbiting its V=11.1 K0 host star (coordinates $\alpha = 04^{h} 01^{m} 32.54^{s}$, $\delta = -20^{\circ}27' 03.9"$ J2000) once in roughly 2.24 days. Hellier {\em et al.} note a rotational modulation of the out of transit flux with a $\sim10.4$ day cycle, which they attribute to magnetic activity of the host. They note that the transit is grazing, leading to a higher uncertainty in the estimated radius of the planet ($1.44^{+0.42}_{-0.18}$ Jupiter radii).
We apply the {\sc exotic} model (Zellem {\em et al.}, 2020) to estimate basic parameters of the system such as the time of mid-transit, the planetary radius relative to the host star, and the orbital radius. We compare and contrast these results with a simple transit model (Mandel \& Agol, 2002) that we implemented with a Bayesian optimizer, as well as with literature results. We were particularly interested in seeing if there were deviations in the times of mid-transit compared to a fixed orbital period. The Transit Timing Variation (TTV) method is based on monitoring such changes in the timing of transits. The presence of non-transiting planets (in the same system) can be inferred from TTV measurements: the gravitational interaction of these non-transiting planets will sometimes increase the orbital period of the transiting planet and at other times decrease it, depending on their relative positions, and so the mid-transit times will vary from a fixed, regular cycle.
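The TTV idea can be made concrete with a short sketch: given a linear ephemeris, the observed-minus-calculated (O$-$C) residual of each transit should scatter about zero unless a perturber is present. The Python below is purely illustrative (the helper name is ours); the ephemeris values are those of Hellier {\em et al.} (2016).

```python
def o_minus_c(t_obs, t0, period):
    """O-C residual (days) of an observed mid-transit time against a
    linear ephemeris T_n = t0 + n * period; n is the nearest integer
    orbit count."""
    n = round((t_obs - t0) / period)
    return t_obs - (t0 + n * period), n

# Hellier et al. (2016) ephemeris for WASP-140b (BJD, days).
T0, P = 2456912.35105, 2.2359835
```

A transit observed, say, 0.001 d late relative to the 100th predicted epoch would yield an O$-$C of $+0.001$ d; a periodic pattern in such residuals across many epochs is the TTV signature of an unseen companion.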
\begin{figure}[htb]
\centering
\begin{subfigure}{0.49\textwidth}
\includegraphics[width=\linewidth]{FinalLightCurve_WASP-140 b_2018-11-18.pdf}
\caption{18 November 2018}
\label{fig:1}
\end{subfigure}\hfil
\begin{subfigure}{0.49\textwidth}
\includegraphics[width=\linewidth]{FinalLightCurve_WASP-140 b_2019-01-22.pdf}
\caption{22 January 2019}
\label{fig:2}
\end{subfigure}\hfil
\medskip
\begin{subfigure}{0.49\textwidth}
\includegraphics[width=\linewidth]{FinalLightCurve_WASP-140 b_2020-10-11.pdf}
\caption{11 October 2020}
\label{fig:3}
\end{subfigure}
\begin{subfigure}{0.49\textwidth}
\includegraphics[width=\linewidth]{FinalLightCurve_WASP-140 b_2020-10-20.pdf}
\caption{20 October 2020}
\label{fig:4}
\end{subfigure}\hfil
\medskip
\begin{subfigure}{0.49\textwidth}
\includegraphics[width=\linewidth]{FinalLightCurve_WASP-140 b_2020-10-29.pdf}
\caption{29 October 2020}
\label{fig:5}
\end{subfigure}\hfil
\begin{subfigure}{0.49\textwidth}
\includegraphics[width=\linewidth, height=0.77\linewidth]{FinalLightCurve_WASP-140 b_2021-01-02.pdf}
\caption{02 January 2021}
\label{fig:7}
\end{subfigure}\hfil
\caption{Selected WASP-140b transit data collected by the MicroObservatory and models. MicroObservatory observations have no filter. The red lines show the expected variation based on the best fitting {\sc exotic} model for each transit. Not all transits are shown for reasons of space.
\label{fig:MObs_WASP_140_transits}}
\end{figure}
\begin{sidewaysfigure}[htb]
\centering
\begin{subfigure}{0.48\textwidth}
\includegraphics[width=\linewidth]{FinalLightCurve_WASP-140 b_04-10-2019.pdf}
\caption{04 October 2019 ($w$)}
\label{fig:8}
\end{subfigure}\hfil
\begin{subfigure}{0.48\textwidth}
\includegraphics[width=\linewidth]{FinalLightCurve_WASP-140 b_2020-10-14.pdf}
\caption{14 October 2020 ($i_p$)}
\label{fig:9}
\end{subfigure}
\medskip
\begin{subfigure}{0.48\textwidth}
\includegraphics[width=\linewidth]{FinalLightCurve_WASP-140 b_24-11-2021.pdf}
\caption{24 October 2021 ($i_p$)}
\label{fig:10}
\end{subfigure}\hfil
\begin{subfigure}{0.48\textwidth}
\includegraphics[width=\linewidth]{FinalLightCurve_WASP-140 b_28-12-2021.pdf}
\caption{28 December 2021 ($r_p$)}
\label{fig:11}
\end{subfigure}
\caption{WASP-140b transit data collected using the LCO. The filters used for the LCO observations are indicated in the appropriate sub-figure captions.
\label{fig:LCO_WASP_140_transits}}
\end{sidewaysfigure}
\begin{table}[t]
\caption{{\bf Fitted Parameters for WASP-140b} from the {\sc exotic} modelling. Mid-transit times are given in Barycentric Julian Dates (Barycentric Dynamical Time), the orbital semi-major axis ($a$) in terms of the stellar radius ($r_s$), and the planetary radius ($r_p$) relative to the stellar radius. {\sc exotic} outputs ${a}/{r_{s}}$, so a column giving the inverse is provided for convenience when comparing with a later model and the literature. Uncertainties are $1\sigma$. `Quality' is a subjective assessment by the authors of the quality of the light curve. Exposure times for the LCOGT observations were 16.5 s for 4 October 2019, 100 s for 14 October 2020, 16.8 s for 24 October 2021, and 60 s for 28 December 2021.}
\centering
\hspace{-2.5cm}
\begin{tabular}{||l|l|l|l|l|l||}
\hline
Date & Mid-transit & ${a}/{r_{s}}$ & $r_{s}/a$ & $r_{p} / r_{s}$ & Quality \\
\hline
18 Nov 2018 & 2458441.7633 $\pm$ 0.0028 & 7.69 $\pm$ 0.30 & $0.130 \pm 0.005$ & 0.1786 $\pm$ 0.0099 & complete \\
22 Jan 2019 & 2458506.6080 $\pm$ 0.0026 & 7.63 $\pm$ 0.24 & $0.131 \pm 0.004$ & 0.179 $\pm$ 0.001 & partial \\
11 Oct 2020 & 2459134.9220 $\pm$ 0.0038 & 7.51 $\pm$ 0.52 & $0.133^{+0.010}_{-0.009}$ & 0.154 $\pm$ 0.024 & complete \\
20 Oct 2020 & 2459143.8611 $\pm$ 0.0020 & 8.40 $\pm$ 0.26 & $0.119 \pm 0.004$ & 0.178 $\pm$ 0.015 & complete \\
29 Oct 2020 & 2459152.8145 $\pm$ 0.0026 & 8.14 $\pm$ 0.33 & $0.123 \pm 0.005$ & 0.176 $\pm$ 0.016 & complete \\
15 Dec 2020 & 2459199.7704 $\pm$ 0.0083 & 7.29 $\pm$ 0.64 & $0.137^{+0.013}_{-0.011}$ & 0.119 $\pm$ 0.030 & partial \\
02 Jan 2021 & 2459217.6512 $\pm$ 0.0023 & 7.70 $\pm$ 0.24 & $0.130 \pm 0.004$ & 0.179 $\pm$ 0.012 & complete \\
04 Oct 2019 & 2458761.5091 $\pm$ 0.0004 & 7.631 $\pm$ 0.085 & $0.131 \pm 0.001$ & 0.1684 $\pm$ 0.005 & complete \\
14 Oct 2020 & 2459137.1516 $\pm$ 0.0023 & 7.20 $\pm$ 0.23 & $0.139^{+0.005}_{-0.004}$ & 0.1618 $\pm$ 0.0085 & complete \\
24 Oct 2021 & 2459512.8046 $\pm$ 0.0033 & 6.56 $\pm$ 0.19 & $0.152^{+0.005}_{-0.004}$ & 0.1678 $\pm$ 0.0059 & partial \\
28 Dec 2021 & 2459577.6402 $\pm$ 0.0015 & 6.486 $\pm$ 0.035 & $0.154 \pm 0.001$ & 0.1783 $\pm$ 0.0027 & partial \\
\hline
\end{tabular}
\label{tab:exotic_wasp_140b_fits}
\end{table}
\section{Data and Initial Processing}
The bulk of observations are 60-second, unfiltered exposures collected by a 6-inch aperture MicroObservatory (MObs; Sadler {\em et al.}, 2001) telescope located at Mount Hopkins (latitude $31.675^\circ$, longitude $-110.952^\circ$, 1,268m altitude above sea level) in Arizona, using a KAF-1403 ME CCD camera with a pixel scale of 5.2" per pixel and $2 \times 2$ binning to reduce noise. These data were analysed using {\sc exotic}, which is a {\sc python}-based tool developed by JPL's `Exowatch' program for reducing exoplanet transit data. This software can run on a variety of operating systems as well as via Google's online `Colaboratory'\footnote{For further details on this tool see: https://research.google.com/colaboratory/faq.html} tool. Technical details on {\sc exotic} can be found in Zellem~{\em et al.} (2020). Priors for Markov Chain Monte Carlo (MCMC) fitting by {\sc exotic} are automatically scraped from the NASA Exoplanet Archive (Akeson~{\em et al.}, 2013), while limb darkening parameters are generated by {\sc exofast} (Eastman~{\em et al.}, 2013). {\sc exotic} generates $1\sigma$ uncertainties based on the resulting posterior distributions.
Only dark images were available for the MObs observations, i.e., no flat field images were collected. The dark frames were collected at the beginning and end of each night of observation. As part of the analysis, {\sc exotic} applied the dark frames to the science data, and then performed differential aperture photometry. For each transit, the analyst supplied {\sc exotic} a list of comparison stars. {\sc exotic} performed a stability assessment of this candidate list, choosing the most stable star as the final comparison star. Relatively poor pointing accuracy of the telescope and drift in tracking throughout a transit could lead to selection of different final comparison stars across the transits. However, typically {\sc exotic} selected stars 108 or 112 from the AAVSO comparison star sequence for WASP-140. We plate-solved science frames for each transit to ensure correct selection of the exoplanet host star, using astrometry.net, together with confirmation using charts prepared using the online AAVSO finding chart tool.
\section{Analysis}
We analysed 22 MObs attempts to observe transits of WASP-140b, dating from 12 October 2016 to 24 October 2021. Only 7 resulted in successful measurements of transits (see Figure~\ref{fig:MObs_WASP_140_transits} for charts of representative transits), a success rate of 32\%. Clouds or incorrect pointing of the telescope accounted for the failed attempts. Table~\ref{tab:exotic_wasp_140b_fits} lists the key output from these fits using {\sc exotic}, namely the orbital semi-major axis $a$ (relative to the stellar radius $r_s$), the planetary radius ($r_p$), and the time of mid-transit (in BJD). The observations and fitted parameter values from {\sc exotic} have been uploaded to the AAVSO exoplanet database, under the usercode BTSB.
We also made use of the Las Cumbres Observatory Global Telescope network (LCOGT; Brown~{\em et al.}, 2013), first using archival data of transits and also collecting $r'$-band photometry on the night of 28 December 2021 using a telescope at the Cerro Tololo Inter-American Observatory. All the analysed LCOGT data were collected using 0.4 meter telescopes equipped with CCDs. We processed all these data using {\sc exotic}, following flat fielding, dark subtraction, and bias correction via the LCO {\sc banzai} system.\footnote{See https://github.com/LCOGT/banzai for further information on this data pipeline.} Model fits to the transits are shown in Figure~\ref{fig:LCO_WASP_140_transits} and final parameter estimates are given in Table~\ref{tab:exotic_wasp_140b_fits}. We did not upload the LCOGT archival data or the model fits based on these to the AAVSO exoplanet database, given that we did not collect the data and did not wish to `make claim' to them over the original investigators.
\begin{figure}
\centerline{\includegraphics[height=8.5cm]{figures/Wasp_140b_residuals.png}}
\caption{Residuals from the linear regression fit of orbit number versus mid-transit time for WASP-140b. The blue line is a linear model fitted to the residuals; its slope is not statistically different from zero at the $3 \sigma$ level. The grey shaded zone is the $3 \sigma$ confidence interval for the regression. The error bars for the mid-transit timing estimates overlap with this interval, and with zero, indicating no statistically significant trends in the residuals. Transits were classified by eye into complete and incomplete transits, to see if data quality might obscure any trends (see Table~\ref{tab:exotic_wasp_140b_fits}). It does not.
\label{fig:wasp_140b_residuals}
}
\end{figure}
\subsection{Orbital Period}
The ephemeris of Hellier {\em et al.} (2016) was used to calculate the number of orbits made by WASP-140b about its host star since their starting epoch. These were then regressed against the mid-transit times given in Table~\ref{tab:exotic_wasp_140b_fits} using the `lm' function in R (R Core Team, 2021),\footnote{R is available from https://www.r-project.org} giving an orbital period of $2.235987 \pm 0.000008$ days and an epoch of $2456912.349 \pm 0.008$. These are in good agreement with the values of Hellier {\em et al.} (2016): $2.2359835 \pm 0.0000008$ days for the orbit and $2456912.35105 \pm 0.00015$ for the epoch. Higher order polynomial fits did not result in additional statistically significant parameters. Inspection of the residuals (see Figure~\ref{fig:wasp_140b_residuals}) reveals no apparent variation in period. These results therefore do not indicate any significant transit timing variations (TTVs). As noted above, TTVs would indicate the presence of an additional planet in the WASP-140 system through its gravitational attraction periodically altering the orbital velocity of WASP-140b. This would have led to observed transits (of WASP-140b) being earlier or later than predicted by a linear ephemeris. Maciejewski (2022) also analysed Transiting Exoplanet Survey Satellite (TESS, Ricker~{\em et al.}, 2015) data for the system, searching unsuccessfully for TTVs and concluding that there were none currently detectable, in agreement with the current study.
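The period determination described above is, at its core, a straight-line fit of mid-transit time against orbit number. A minimal sketch of that calculation follows (in Python rather than the R `lm' fit actually used; the epoch values and timing noise are illustrative assumptions, seeded with the published Hellier {\em et al.} ephemeris):

```python
import numpy as np

def fit_linear_ephemeris(epochs, t_mid):
    """Least-squares linear ephemeris: t_mid = t0 + period * epoch.

    Returns (period, t0) and their 1-sigma uncertainties from the
    scaled covariance matrix of the fit.
    """
    coeffs, cov = np.polyfit(epochs, t_mid, deg=1, cov=True)
    period, t0 = coeffs
    period_err, t0_err = np.sqrt(np.diag(cov))
    return (period, t0), (period_err, t0_err)

# Synthetic demonstration: mid-transit times generated from the
# Hellier et al. (2016) ephemeris plus 0.005 d of Gaussian timing noise.
rng = np.random.default_rng(42)
true_P, true_T0 = 2.2359835, 2456912.35105   # days, BJD
epochs = np.array([0, 150, 400, 650, 700, 800, 830])
t_mid = true_T0 + true_P * epochs + rng.normal(0.0, 0.005, epochs.size)

(P, T0), (P_err, T0_err) = fit_linear_ephemeris(epochs, t_mid)
```

A statistically significant trend in the residuals `t_mid - (T0 + P * epochs)` against epoch would have been the signature of TTVs.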
\begin{sidewaysfigure}
\centerline{\includegraphics[height=0.54\textheight]{figures/wasp_140_b_mcmc.png}}
\caption{
Example MCMC results for the 4 October 2019 transit of WASP-140b. This represents 4,000 steps in the Markov chain, including the initial steps known as `burn-in'. These steps are excluded from the final results, being a consequence of starting the optimization in a lower probability region of parameter space before movement to the global minimum. Actual runs included 40,000 steps, which unfortunately `overloaded' the plotting software and are therefore not included here. `Ratio' is the ratio of the planetary radius to the stellar one, `orbital' is the ratio of stellar radius to the orbital semi-major axis, `u' is the linear limb darkening coefficient, `cos\_i' is the cosine of the inclination, `offset' an adjustment in phase, `L' an adjustment in flux, and `sigma' an estimate of the white noise in the data. The chart provides the distributions of each of these parameters on its diagonal as bar charts, correlations between the variables are given in the upper right, and scatter plots crossing each of the parameters in turn are given in the lower left. Each point in a scatter plot represents a step in the Markov chain. The bold lines are linear regressions to the data, corresponding to the correlation results.
\label{fig:14_oct_19_wasp_140_mcmc}
}
\end{sidewaysfigure}
\subsection{Transit Models}
While {\sc exotic} had already fitted the transits, we decided to build from `first principles' a simple transit model and couple this with optimization techniques, in order both to make a comparison and to explore including inclination as a free parameter. This was primarily a student project acting as an introduction to exoplanet research, so building our own model and coupling this with optimization was considered a good learning exercise. {\sc exotic} adopts its priors from the NASA Exoplanet Archive, hence it adopted the inclination from Hellier {\em et al.} (2016) as a fixed parameter. Given the grazing nature of this transit, fixing the inclination has a large effect on the derived parameter estimates. For optimization of our transit model, we used the Markov Chain Monte Carlo (MCMC) technique Hamiltonian Monte Carlo (HMC). MCMC allows construction of a Markov process such that its stationary distribution is the same as our target distribution, through the generation of a `chain' of random samples from the process. Given a sufficient number of samples, such a chain comes close enough to the stationary distribution to provide a good approximation to the target distribution. This is known as convergence of the MCMC chain (see Sinharay, 2003), and allows exploration of the uncertainty in the parameter estimates --- explaining our interest in this technique. We implemented HMC using the {\em rstan}\footnote{Available from https://mc-stan.org/users/interfaces/rstan} implementation of Stan (Carpenter~{\em et al.}, 2017; Stan Development Team, 2016) inside the statistical programming language R. Uniform priors were used, reflecting minimal prior knowledge of the parameters.
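The convergence property described above can be demonstrated with the simplest MCMC variant, a random-walk Metropolis sampler; our actual fits used Stan's gradient-based HMC, but the stationary-distribution logic is the same. The Gaussian toy target, step size, and chain length below are illustrative assumptions:

```python
import numpy as np

def metropolis(log_prob, x0, n_steps, step=2.0, seed=0):
    """Random-walk Metropolis: propose x' = x + step*N(0,1) and accept
    with probability min(1, p(x')/p(x)). The chain's stationary
    distribution is the target defined by log_prob."""
    rng = np.random.default_rng(seed)
    x, lp = x0, log_prob(x0)
    chain = np.empty(n_steps)
    for i in range(n_steps):
        proposal = x + step * rng.normal()
        lp_prop = log_prob(proposal)
        if np.log(rng.random()) < lp_prop - lp:  # accept/reject step
            x, lp = proposal, lp_prop
        chain[i] = x
    return chain

# Toy target: a standard normal. Start far from the mode (x0 = 5) and
# discard the first 1,000 steps as burn-in, as described in the text.
chain = metropolis(lambda x: -0.5 * x**2, x0=5.0, n_steps=20000)
samples = chain[1000:]
```

After burn-in the sample mean and standard deviation approach the target's 0 and 1, which is the sense in which the chain "provides a good approximation to the target distribution".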
To build this model we used some key parameters of the exoplanet and its host star:
\begin{itemize}
\item{$a$, $r_s$, and $r_p$ were as defined above, with the radii being in terms of $a$;}
\item{$u$ = linear limb darkening coefficient (see below for an explanation of this parameter);}
\item{$i$ = orbital inclination (in degrees). Ninety degrees means that the orbital plane is in the line of sight from the Earth;}
\item{{\em offset} = a parameter to adjust the reference point of phase axis;}
\item{$U$ = system brightness, used to adjust the reference point of flux axis. The out of transit flux should be approximately unity, i.e., the fluxes are normalized to the mean out of transit level.}
\end{itemize}
We first define $d$ as the center-to-center distance between the planet and the star. We can then calculate $z = \frac{d}{r_s}$, the normalised separation of the centers (of the exoplanet and its host star), and $p = \frac{r_p}{r_s}$, the ratio of the disk radii. This allows us to model a transit based on the equations of Mandel \& Agol (2002). These specify that for a uniform source, the ratio of obscured to unobscured flux is $F^e(p, z)=1-\lambda^e(p, z)$, where
\begin{equation}\label{eqn:mandel}
\lambda^e(p, z)=\left\{\begin{array}{ll}
0 & 1+p<z \\
\frac{1}{\pi}\left[p^{2} \kappa_{0}+\kappa_{1}-\sqrt{\frac{4 z^{2}-\left(1+z^{2}-p^{2}\right)^{2}}{4}}\right]
& |1-p|<z \leq 1+p \\
p^{2} & z \leq 1-p \\
1 & z \leq p-1,
\end{array}\right.
\end{equation}
where $\kappa_{1}=\cos ^{-1}\left[\left(1-p^{2}+z^{2}\right) / 2 z\right]$ and $\kappa_{0}=\cos ^{-1}\left[\left(p^{2}+z^{2}-1\right) / 2 p z\right]$.
This set of equations describes the flux of the planetary system in the following cases:
\begin{enumerate}
\item{When the planetary disk does not obscure any portion of the stellar disk. There will be no dimming of the combined light, and so the normalized flux would be 1.}
\item{When the planetary disk is completely in front of the stellar disk. In the case of a uniformly bright stellar disk, the dimming will scale by the obscured area -- which can be calculated by $\frac{r_{p}^2}{r_{s}^2}$ (or $p^2$).}
\item{The boundary case when the planetary disk is moving onto or off the stellar disk.}
\end{enumerate}
The fourth case in Equation \ref{eqn:mandel} corresponds to the unlikely situation in which the planet is larger than (or equal in radius to) its host star.
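A direct transcription of Equation~\ref{eqn:mandel} into code might look as follows (a Python sketch; the function and variable names are our own). The model flux at each phase step then follows as $F = 1 - \lambda^e$:

```python
import numpy as np

def lambda_e(p, z):
    """Obscured flux fraction for a uniform stellar disk (Mandel & Agol 2002).

    p : planet-to-star radius ratio (r_p / r_s)
    z : centre-to-centre separation in units of the stellar radius
    """
    z = abs(z)
    if z >= 1.0 + p:                 # case 1: no overlap
        return 0.0
    if p >= 1.0 and z <= p - 1.0:    # case 4: planet covers the whole star
        return 1.0
    if z <= 1.0 - p:                 # case 2: planet fully on the disk
        return p ** 2
    # case 3: ingress/egress (partial overlap)
    kappa1 = np.arccos((1.0 - p**2 + z**2) / (2.0 * z))
    kappa0 = np.arccos((p**2 + z**2 - 1.0) / (2.0 * p * z))
    root = np.sqrt((4.0 * z**2 - (1.0 + z**2 - p**2) ** 2) / 4.0)
    return (p**2 * kappa0 + kappa1 - root) / np.pi
```

For WASP-140b-like parameters ($p \approx 0.16$) the uniform-disk transit depth is simply $p^2 \approx 2.6$ per cent, modified in practice by limb darkening.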
\begin{table}[bt]
\caption{{\bf MCMC results.} Only one of the LCOGT data sets gave a reliable solution. Results for three of the better MObs transits are shown, to demonstrate the lower confidence in the parameter estimates for such data sets (together with an implausibly large `planet'). Uncertainties are $1\sigma$. `Date' is the night of observation. }
\centering
\hspace{-2.5cm}
\begin{tabular}{||l|c|c|c|c|c|l||}
\hline
Date & ${r_p}/{r_s}$ & ${r_s}/a$ & $u$ & $\cos{i}$ & $\sigma$ & Observatory\\
\hline
04 October 2019 & $0.159 \pm 0.013$ & $0.109 \pm 0.007$ & $0.48 \pm 0.23$ & $0.086 \pm 0.013$ & $0.0036 \pm 0.0001$ & LCOGT \\
11 October 2020 & $0.35 \pm 0.23$ & $0.14 \pm 0.04$ & $0.55 \pm 0.30$ & $0.16 \pm 0.07$ & $0.010 \pm 0.001$ & MObs \\
20 October 2020 & $0.32 \pm 0.22$ & $0.10 \pm 0.02$ & $0.53 \pm 0.28$ & $0.11 \pm 0.05$ & $0.0058 \pm 0.0005$ & MObs \\
02 January 2021 & $0.33 \pm 0.20$ & $0.11 \pm 0.02$ & $0.58 \pm 0.28$ & $0.11 \pm 0.05$ & $0.0063 \pm 0.0005$ & MObs \\
\hline
\end{tabular}
\label{tab:mcmc_results}
\end{table}
Limb darkening refers to the phenomenon that the brightness of a star appears to decrease from the centre to the edge, or limb, of the observed disk. This occurs because a stellar atmosphere increases in temperature with depth. At the centre of the stellar disk an observer `sees' deeper and hotter layers that emit more light, while towards the limb only the upper, cooler layers, which produce less light, are seen. The `small planet' approximation was used for the transit model, in that the limb darkening value corresponding to the centre of the planetary disk projected onto the stellar disk was applied uniformly across the stellar area obscured by the planet. We implemented linear limb darkening, i.e., a limb darkening model with only a single term, to adjust the obscured flux values.
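As a sketch of the approach just described (in Python; the names are our own assumptions), the linear law scales the intensity under the planet's centre, and normalising by the disk-averaged intensity of the linear law, $1 - u/3$, gives the fractional flux drop in the small-planet approximation:

```python
import numpy as np

def linear_ld(mu, u):
    """Linear limb-darkening law: I(mu)/I(1) = 1 - u*(1 - mu),
    where mu is the cosine of the angle from the disk centre."""
    return 1.0 - u * (1.0 - mu)

def small_planet_depth(p, z, u):
    """Fractional flux drop when the planet (radius ratio p) is fully on
    the stellar disk at normalised separation z < 1, using the intensity
    at the planet's centre for the whole obscured area."""
    mu = np.sqrt(1.0 - z ** 2)
    mean_intensity = 1.0 - u / 3.0   # disk average of the linear law
    return p ** 2 * linear_ld(mu, u) / mean_intensity
```

With $u = 0$ this reduces to the uniform-disk depth $p^2$; for $u > 0$ the transit is deeper at disk centre and shallower near the limb, which matters for a grazing system like WASP-140b.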
Only one of our data sets (LCOGT 04 October 2019) could be reliably fitted with this model, as it had a sufficient signal to noise ratio, a well-defined transit, and sufficient observations before and after the transit so that the out of transit flux levels were well constrained. Interestingly, we were not able to derive a determinate solution for the 28 December 2021 data set, which by eye appears to be a suitable transit. This would indicate that we have too many free parameters in the fit, a point we will come back to later in the paper. Table~\ref{tab:mcmc_results} presents results of this fitting and some example MObs fits. Clearly we were asking too much of the MObs data when we included inclination and limb darkening as free parameters, as we have physically unreasonable solutions for these data sets. {\sc exotic} is a better tool for these high noise data sets. The HMC fit to the LCOGT data is more reasonable.
\subsection{Comparison with the Literature}
Hellier {\em et al.} (2016) estimated $r_p / r_s$ as $ 0.166^{+0.059}_{-0.027}$, $\cos{i} = 0.117^{+0.013}_{-0.009}$, and ${r_s}/a = 0.125^{+0.030}_{-0.022}$. These figures are in good agreement with the HMC model fit based on the LCOGT data, except for $\cos{i}$: the HMC model corresponds to an inclination of $85.07 \pm 0.75$ degrees, compared to Hellier {\em et al.'s} value of $83.3^{+0.5}_{-0.8}$ degrees. This is still within two standard deviations.
A comparison with the results from the {\sc exotic} model for the same data shows that the orbital radius from the HMC model is substantially larger (at $\sim 9.2$ times the stellar radius), as is the planetary radius ({\sc exotic}'s $0.131 \pm 0.001 \: r_s$ compared to $0.159 \pm 0.013$). The lack of agreement is puzzling, given that Hellier {\em et al.} and {\sc exotic} both integrate the limb darkened fluxes obscured by the planetary disk, suggesting that the small planet approximation is not the primary cause of the difference.
\begin{figure}[!t]
\begin{subfigure}{0.48\textwidth}
\includegraphics[width=\linewidth]{figures/wasp_140_tess.png}
\caption{TESS Sector 31 Light Curve}
\label{fig:tess_1}
\end{subfigure}\hfil
\begin{subfigure}{0.48\textwidth}
\includegraphics[width=\linewidth]{figures/wasp_140b_20_nov_2020.png}
\caption{2020-Nov-08 Transit}
\label{fig:tess_2}
\end{subfigure}
\caption{The figure on the left (a) shows the non-normalized Pre-search Data Conditioning Simple Aperture Photometry (PDC\_SAP) light curve generated by the TESS team, from which long-term systematic trends have been removed, providing better data quality than the simple aperture photometry (also available from MAST). Remaining variability is clearly visible, showing that these changes occur on timescales comparable to that between transits. Hellier {\em et al.} (2016) noted residual variation at a 5-9 milli-magnitude amplitude, consistent with the observed remaining variability. The figure on the right (b) shows one of these transits together with the optimal model generated by the HMC code. This transit is the second from the left in the data following the break in the middle of Figure \ref{fig:tess_1}.
\label{fig:tess_light_curves}}
\end{figure}
Davoudi {\em et al.} (2020) used {\sc exofast} (Eastman {\em et al.}, 2013) to model a clear-filter transit data set of the system from 01 January 2017, finding the planet's radius to be $1.1990 \pm 0.0735$ times that of Jupiter, which is smaller than Hellier~{\em et al.}'s estimate of $1.44^{+0.42}_{-0.18} \: {\rm R_{J}}$ and this paper's of $1.38^{+0.18}_{-0.17} \: {\rm R_{J}}$ (although within the error ranges). No inclination or orbital radius data were supplied by Davoudi {\em et al.}, so a comparison is not possible.
Alexoudi (2022) applied the {\em emcee} Bayesian sampler (Foreman-Mackey {\em et al.}, 2015) to analyse 28 transits from 3 sectors\footnote{Sector 4 from 18 October 2018 to 15 November 2018, sector 5 from 15 November 2018 to 11 December 2018, and sector 31 from 21 October 2020 to 19 November 2020.} of data collected by the TESS space telescope. Alexoudi derived an inclination of $84.30 \pm 0.06$ degrees, $r_{s}/a = 0.1166 \pm 0.0008$, and $r_p / r_s = 0.1464 \pm 0.0010$. These values are similar to those of the current paper and Hellier {\em et al.}, but not within formal uncertainties. Alexoudi noted the differences with Hellier {\em et al.}, commenting that these could be due to the higher accuracy of the TESS data. As a check, we downloaded 2-minute cadence TESS data from MAST (see Figure \ref{fig:tess_1}) and applied the HMC model to a transit (centred on TBJD 2459161.75, see Figure \ref{fig:tess_2}). We found $r_{s}/a = 0.109 \pm 0.008$, $r_p/r_s = 0.163 \pm 0.016$, and $\cos{i} = 0.089 \pm 0.016$ ($\sim 84.87^{\circ}$). The linear limb darkening coefficient was poorly constrained ($0.48 \pm 0.29$). Our model resulted in a larger planetary radius than Alexoudi's, very close to those derived from the LCOGT data.
\subsection{Recommendations}
Problems with the other data sets included a lack of sufficient pre-transit data, which prevented reliable estimates (e.g., for the 14 October 2020 data set), while variations in the out-of-transit flux levels prevented a reliable fit to the 28 December 2021 data set. The increased noise of the MObs data compared to the LCOGT data also led to less accurate parameter estimates, especially for the ratio of the planetary to stellar radii. It would be interesting to see whether additional data processing, such as the collection and use of flat fields, would help improve the quality of these data sets.
For transit fittings of this system, we recommend that the pre- and post- transit observations be roughly as long as the actual transit time period, particularly since the host star appears to be active (changing in flux levels) on a short time scale. For instance, the pre-transit flux levels appear to be greater than post-transit for the 28 December 2021 data set, and are a complication for a simple model such as ours.
A further complication is the use of the small planet approximation for a high inclination orbit such as WASP-140b's; in later projects we intend to apply a graduated limb darkening adjustment to the obscured flux. There is a clear correlation between $u$ and both ${r_p}/{r_s}$ and ${r_s}/a$ (see Figure~\ref{fig:14_oct_19_wasp_140_mcmc}), and $u$ itself is poorly constrained in that figure. This suggests that it could be better to fix $u$ at a theoretically motivated value rather than include it as a free parameter, which could lead to tighter confidence intervals for the other two parameters. See Banks \& Budding (1990) for further discussion of the information content of data and the question of over-parameterization. Finally, WASP-140b transits close to the stellar limb, where the limb darkening gradient is strongest, further supporting this conclusion.
The signal to noise ratio is clearly important for transit fitting, affecting the accuracy of our model's fits to the MObs data. Observations with the LCOGT (similar to those presented here) appear to have sufficient ``information content'' to support the HMC model, provided that sufficient data on the shoulders of the eclipse are collected for accurate estimation of the out-of-transit flux level.
\section{Summary}
This paper presented MCMC modeling of transits of WASP-140b, collected using robotic telescopes of the MObs and LCOGT networks. These data included a transit in December 2021 observed by the authors. We coded a fitting function based on the equations of Mandel~\& Agol (2002) and coupled this with Bayesian optimization. Together with the {\sc exotic} analysis program, two MCMC-based optimization models have therefore been applied to these transits, deriving estimates for the times of mid-transit as well as physical parameters of the system. Inspection of the mid-transit times revealed a constant period, with no statistical evidence from the data of transit timing variations, i.e., no evidence for the gravitational influence of a non-transiting planet on the orbit of WASP-140b.
Results from the two analysis programs ({\sc exotic} and HMC) were in good agreement, indicating the radius of WASP-140b to be $1.38^{+0.18}_{-0.17}$ Jupiter radii, with the planet orbiting its host star in $2.235987 \pm 0.000008$ days at an inclination of $85.07 \pm 0.75$ degrees. The derived parameters are in formal agreement with the discovery paper of Hellier {\em et al.} (2016), and somewhat larger than those of a recent independent study based on photometry by the TESS space telescope (Alexoudi, 2022).
We were probably too ambitious in selecting an exoplanet with a high inclination orbit about a host star with rapidly changing flux levels for a high parameter model such as our HMC model, but that is part of the learning process. Application of techniques such as Gaussian Processes to model out the host star variations would be a good next step; this would allow multiple transits to be combined and binned together, increasing the signal to noise ratio and strengthening the information content of the data. We also plan to use our HMC model on simpler systems, such as Kepler-1,\footnote{See, e.g., Ng et al. (2021) who applied the Mandel~\& Agol (2002) models, MCMC, and Gaussian Processes to Kepler space telescope data of Kepler-1b and other systems.} which do not have such active host stars and whose orbital inclinations are closer to 90 degrees, where the model's deficiencies will be smaller and the correlation between limb darkening and inclination less confounding. Having made these comments, we still recommend that programming a simple model such as that of Mandel \& Agol (2002) and coupling it with an optimizer is a useful learning exercise, and makes for a useful student project. Our points are rather to choose a quieter system than the one we did, and either to implement improved handling of limb darkening for highly tilted systems or to choose an exoplanet with an orbit closer to $90^{\circ}$ inclination as well as one somewhat smaller relative to its host star (so that the small planet approximation is more valid). If investigation of TTVs is the primary goal of the project, then {\sc exotic} is an excellent tool for such work.
\newpage
\begin{acknowledgments}
This publication makes use of the EXOTIC data reduction package from Exoplanet Watch, a citizen science project managed by NASA’s Jet Propulsion Laboratory (JPL) on behalf of NASA’s Universe of Learning and which is supported by NASA under award number NNX16AC65A to the Space Telescope Science Institute. We are grateful for observing time on the Las Cumbres Observatory Global Telescope (LCOGT) Network, and to Rachel Zimmerman Brachman (JPL) for making available this opportunity. We thank the LCOGT for making available archival data. We also thank the Harvard-Smithsonian Institute for Astrophysics for the MicroObservatory data kindly made available by Frank Sienkiewicz. This research has made use of the NASA Exoplanet Archive, which is operated by the California Institute of Technology, under contract with the National Aeronautics and Space Administration under the Exoplanet Exploration Program. We thank the University of Queensland for collaboration software. This paper includes data collected by the TESS mission and obtained from the MAST data archive at the Space Telescope Science Institute (STScI). STScI is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5–26555. We thank the anonymous referee for their comments and guidance which improved the paper.
\end{acknowledgments}
\section{Introduction}
Spontaneous emission of light from initially excited atoms became one of the cornerstones of our understanding of the interaction of light and matter, soon after the introduction of the ``photon''. It was introduced phenomenologically by Einstein \cite{Ein16} through his famous $A$-coefficient that gives the rate of spontaneous de-excitation of an excited atom. Later, spontaneous emission was understood through the theory of Wigner and Weisskopf \cite{Wei30} as the result of the perturbation of an atom through the vacuum-fluctuations of the electromagnetic field surrounding the atom. The infinitely large number of modes involved in the process leads to effectively irreversible behavior. Once this mechanism was understood, it became clear that the rate with which the excitation of an atom in a given state decays is not a natural constant for this atom, but can be influenced by its environment. By engineering the mode-structure of the electromagnetic environment of an atom, in particular through modifying the density of states of the field modes at the resonance frequency, spontaneous emission can be enhanced (in the case of an increased density of states), or reduced (in the opposite case), as first found by Purcell in the context of nuclear resonance \cite{Pur46}. This important insight is now routinely used in photonic crystals, where an electromagnetic band-structure can be designed at will and used for creating e.g.~a band gap around the resonance frequency, resulting in a greatly increased lifetime of an excited atom, inverted spin, exciton, or plasmonic excitation \cite{Byk72,Yab87,Joh87}. \\
Even earlier, Dicke studied spontaneous emission of several atoms in close vicinity of each other, and found that in such a case spontaneous emission becomes a cooperative effect in which the amplitudes of all atoms emitting simultaneously interfere. Depending on the initial collective internal state of the atoms, emission can be largely enhanced (superradiance), or reduced (subradiance) \cite{Dic54}. Superradiance developed into a large research field in its own right \cite{Bon71,Bon71b,Nar74,Gla76,Gro82,Dev96,Bra98,Bra98b,Che05,Akk08,Bra11,Wie11,Opp14,Wie15,Bha15}, culminating recently in matter-wave superradiance in cold atomic gases \cite{Ino99}. It was soon realized that dipole-dipole interactions between atoms can significantly alter these cooperative processes \cite{Cof78,Fri74, Fre86, Fre87, Fen13, Ric90, Fri72}, but can also be exploited for a variety of purposes, such as the (partial) trapping of light \cite{Bie12} or the implementation of quantum gates using the dipole blockade~\cite{Luk01}.
In this paper, we reveal yet a third mechanism by which spontaneous emission can be influenced: Collective emission can be largely ``quantum programmed'' by engineering the external quantum state of motion of the atoms. To this end, we derive a master equation that fully takes into account the quantum nature of the atomic motion and, when relevant, the indistinguishability of atoms. This is essential when the atoms form a Bose-Einstein condensate or are loaded in an optical lattice. For example, when two fermionic atoms are placed in the same potential well and motional state, one in the internal excited state and the other in the ground state, the Pauli exclusion principle forbids the main decay channel, and leads to an increased lifetime of the atomic excited state~(see e.g.\ \cite{San11}). Moreover, it has been known for a long time that the coherence of radiation scattering off atoms in a solid (e.g.\ in X-ray or neutron scattering) can be influenced through the thermal motion of the atoms. This results in the Debye-Waller factor \cite{Deb13,Wal23} that describes the reduction of visibility of interference maxima as a function of temperature. But while in a solid one has in general little influence on the state of motion of the atoms (apart from controlling the temperature of the lattice), a whole new world has opened up in the physics of ion-traps and cold atoms. There, the external motional state can now be very well controlled and engineered, to the extent that quantum gates coupling internal states of the atoms originally relied heavily on the use of precise states of this external ``quantum bus'' \cite{Cir95}, even though this requirement could be relaxed later \cite{Mol99}.
Thus, the quantum nature of the atomic motion appears to be an efficient way to influence the internal dynamics of atoms and its engineering has a wide range of potential applications~\cite{Lod04}. However, it turns out that most of the methods used to describe the internal dynamics of atoms including a quantum treatment of their motion are either restricted to the Lamb-Dicke regime \cite{Jav88,Bre95,Vog96,Lei03,Bra08,Pic10,Cer10} or do not account for both recoil and indistinguishability~\cite{Yin95,Dub96,Ber97,Mor99,Mcd07,Rog08}. Therefore, it appears worthwhile to develop a general theory of spontaneous emission of an ensemble of atoms valid for arbitrary quantum states of motion, which is the purpose of this paper. The master equation we derive constitutes a powerful tool to study the combined effects of the recoil and the indistinguishability of atoms on both their dissipative and conservative internal dynamics, even beyond the Lamb-Dicke regime. The dependence of the dipole-dipole interactions as well as the life-time under spontaneous emission on the motional state of the atoms might be observable in dense Rydberg gases, which are under intense current experimental and theoretical investigation \cite{Afr04,Rob04,Alt11,Pel14}.
The paper is organized as follows. In Section II, we present our model. In section III, we derive a general master equation for the internal dynamics of atoms valid for arbitrary motional states. In section IV, we provide general expressions for the dipole-dipole shifts and decay rates which determine the conservative and dissipative part of the master equation, and discuss the effects of the indistinguishability of atoms on these quantities. In section V, we calculate explicitly the decay rates and the dipole-dipole shifts for particularly relevant motional states (Gaussian states, Fock states and thermal states), both for distinguishable and indistinguishable atoms.
\section{Model and Hamiltonian}
We consider $N$ identical two-level atoms spontaneously emitting photons due to their interaction with the free electromagnetic field initially in vacuum, and treat their motion quantum-mechanically.
In the point of view of Power-Zienau-Wolley (multipolar coupling scheme \cite{Buh12, Pow59, Coh87}), the Hamiltonian describing the composite system is
\begin{equation}\label{tHamiltonian}
H= H_{A}+H_{F}+H_{AF},
\end{equation}
with $H_{A}$ the Hamiltonian of the atoms, $H_{F}$ the Hamiltonian of the free field, and $H_{AF}$ the interaction Hamiltonian responsible for emission/absorption of photons and field-mediated interactions between atoms.
In Eq.~(\ref{tHamiltonian}), the atomic Hamiltonian $H_{A}=H_{A}^{\mathrm{ex}}+H_{A}^{\mathrm{in}}+
H_{A}^{\mathrm{self}}$ consists of an external, an internal and a self-interaction part, respectively given by
\begin{align}\label{tHAex}
& H_{A}^{\mathrm{ex}} = \sum_{j=1}^N\left(\frac{\hat{\mathbf{p}}_{j}^2}{2M} + V(\hat{\mathbf{r}}_j) \right), \\
\label{tHAin}
& H_{A}^{\mathrm{in}} =\frac{\hbar\omega_0}{2}\sum_{j=1}^N
\sigma_z^{(j)}, \\
\label{tHAself}
& H_{A}^{\mathrm{self}} =\frac{1}{2 \epsilon_0} \int |\hat{\mathbf{P}}\big(\mathbf{r}\big)|^2 \,d\mathbf{r} .
\end{align}
The external part $H_{A}^{\mathrm{ex}}$ corresponds to the kinetic and potential energy of the atoms, with $\hat{\mathbf{r}}_j$ and $\hat{\mathbf{p}}_j$ the center-of-mass position and momentum operators of atom $j$ ($j=1,\ldots,N$) of mass $M$ and $V(\mathbf{r})$ the external potential experienced by the atoms~\cite{footnote1}. We include the spin degree of freedom in the internal state and consider an external potential which does not depend on the spin. This form of $H_{A}^{\mathrm{ex}}$ is quite general and can account for a wide range of experimental settings.
The internal part $H_{A}^{\mathrm{in}}$ of the atomic Hamiltonian corresponds to the internal energy of the atoms, with $\omega_0$ the atomic transition frequency and $\sigma_z^{(j)} =
|e_j\rangle \langle e_j|-|g_j \rangle \langle g_j|$ with $|g_j\rangle$ ($|e_j\rangle$) the lower (upper) level of atom $j$ of energy $-\hbar \omega_0/2$ ($\hbar \omega_0/2$). Finally, the self-interaction part $H_{A}^{\mathrm{self}}$ corresponds to the self-energy and contact interaction between atoms, with $\epsilon_0$ the permittivity of free space and $\hat{\mathbf{P}}\big(\mathbf{r}\big)$ the atomic polarization density, given in the dipole approximation by~\cite{Pow59}
\begin{equation}
\hat{\mathbf{P}}\big(\mathbf{r} \big) = \sum_{j = 1}^N \mathbf{D}_j \, \delta(\mathbf{r}-\hat{\mathbf{r}}_j)
\end{equation}
where $\mathbf{D}_j=\mathbf{d}_j\,\sigma_-^{(j)}+\mathbf{d}_j^*\,\sigma_+^{(j)}$ is the dipole operator for atom $j$, with dipole matrix element $\mathbf{d}_j=\bra{g_j}\mathbf{D}_j\ket{e_j}$, $\sigma_-^{(j)}=|g_j\rangle\langle e_j|$, $\sigma_+^{(j)}=|e_j\rangle\langle g_j|$ and $\delta$ is the Dirac delta distribution. We consider a polarized atomic sample in which all atoms share the same dipole moment, i.e.\ $\mathbf{d}_j=\mathbf{d}\;\forall\,j$. The dipole moment $\mathbf{d}$ can be decomposed in the spherical basis $\{\boldsymbol{\varepsilon}_0\equiv\mathbf{e}_z,\boldsymbol{\varepsilon}_\pm\equiv\mp (\mathbf{e}_x\pm i \mathbf{e}_y)/\sqrt{2}\}$ with $\{\mathbf{e}_x, \mathbf{e}_y,\mathbf{e}_z\}$ the Cartesian unit vectors and the $z$-axis taken as the quantization axis,
\begin{equation}\label{decompd}
\mathbf{d}=\sum_{q=0,\pm}d_q\,\boldsymbol{\varepsilon}_q.
\end{equation}
For a $\pi$ transition from the upper to the lower level, the only non-vanishing component in (\ref{decompd}) is $d_0$, whereas for a $\sigma^\pm$ transition, the only non-vanishing component is $d_\mp$.
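As a concrete check of the decomposition~(\ref{decompd}), the following Python sketch (with an arbitrary test vector; the numerical values are purely illustrative) constructs the spherical basis, extracts the components $d_q=\boldsymbol{\varepsilon}_q^*\boldsymbol{\cdot}\mathbf{d}$, and verifies the reconstruction as well as the identification of a $\sigma^+$ transition with the single component $d_{-}$:

```python
import numpy as np

# Spherical basis (quantization axis along z), as in Eq. (decompd):
# eps_0 = e_z, eps_+ = -(e_x + i e_y)/sqrt(2), eps_- = (e_x - i e_y)/sqrt(2).
eps = {
    0: np.array([0.0, 0.0, 1.0], dtype=complex),
    +1: np.array([-1.0, -1.0j, 0.0]) / np.sqrt(2),
    -1: np.array([1.0, -1.0j, 0.0]) / np.sqrt(2),
}

def spherical_components(d):
    """Components d_q = eps_q^* . d of a (complex) dipole vector d."""
    return {q: np.vdot(eps[q], d) for q in (0, +1, -1)}  # vdot conjugates eps_q

# Reconstruction d = sum_q d_q eps_q for an arbitrary complex test vector.
rng = np.random.default_rng(0)
d = rng.normal(size=3) + 1j * rng.normal(size=3)
comps = spherical_components(d)
d_rec = sum(comps[q] * eps[q] for q in comps)
assert np.allclose(d_rec, d)

# A sigma^+ transition (d proportional to eps_{-1}) has d_{-1} as its only
# nonvanishing component, as stated in the text.
comps_sigma = spherical_components(eps[-1])
assert np.isclose(comps_sigma[-1], 1.0)
assert np.isclose(abs(comps_sigma[0]), 0.0) and np.isclose(abs(comps_sigma[+1]), 0.0)
```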
In Eq.~(\ref{tHamiltonian}), the free field Hamiltonian $H_{F}$ reads
\begin{equation} \label{tHF}
H_{F} = \sum_{\mathbf{k}\boldsymbol{\varepsilon}} \hbar\omega_k\, a^\dagger_{\mathbf{k}\boldsymbol{\varepsilon}} a_{\mathbf{k}\boldsymbol{\varepsilon}},
\end{equation}
with $\omega_k=ck$, $k=|\mathbf{k}|$, $c$ the speed of light in vacuum and $a_{\mathbf{k}\boldsymbol{\varepsilon}}$
($a^\dagger_{\mathbf{k}\boldsymbol{\varepsilon}}$) the annihilation (creation) operator of a mode of the radiation field of wave vector $\mathbf{k}$ and polarization $\boldsymbol{\varepsilon}$. Note that in Eq.~(\ref{tHF}), we have dropped the zero-point energy of the radiation field, as it has no influence on the dynamics of the system.
In the dipole approximation (when the typical size of the atoms is much smaller than the wavelength of the emitted radiation) and the interaction picture with respect to $H_0\equiv H_{A}^{\mathrm{ex}}+H_{A}^{\mathrm{in}}+H_F$, the interaction Hamiltonian $H_{AF}(t)$ reads
\begin{equation}\label{tHAF}
H_{AF}(t)=-\sum_{j=1}^N \mathbf{D}_j(t)\boldsymbol{\cdot}\mathbf{E}\big (\hat{\mathbf{r}}_j(t),t \big)
\end{equation}
with the electric field operator
\begin{equation}\label{E}
\mathbf{E}(\mathbf{r},t)=i\sum_{\mathbf{k}\boldsymbol{\varepsilon}}{\cal E}_k\, \left(a_{\mathbf{k}\boldsymbol{\varepsilon}} \,\boldsymbol{\varepsilon}_\mathbf{k} \, e^{i(\mathbf{k}\boldsymbol{\cdot}\mathbf{r}-\omega_k t)} - \mathrm{h.c.}\right)
\end{equation}
where h.c.\ stands for Hermitian conjugate, ${\cal E}_k=\sqrt{\hbar
\omega_k/2\epsilon_0 L^3}$, $L^3$ is the electromagnetic mode quantization volume, $\boldsymbol{\varepsilon}_\mathbf{k}$
the normalized polarization vector, and
\begin{equation}\label{xt}
\hat{\mathbf{r}}_j(t)= e^{i H_A^\mathrm{ex} t/\hbar} \, \hat{\mathbf{r}}_j \,e^{- i H_A^\mathrm{ex} t/\hbar}.
\end{equation}
Performing the Schmidt decomposition of the dipole interaction Hamiltonian~(\ref{tHAF}), we get \cite{Bre06}
\begin{equation}\label{HIgen}
H_{AF}(t)=\sum_{j=1}^N\sum_{\omega=\pm\omega_0}e^{-i\omega t} A_{j}^{\mathrm{in}}(\omega)\otimes B_{j}(t),
\end{equation}
with the \emph{quantum jump operators}
\begin{equation} \label{Aj}
\begin{aligned}
& A^{\mathrm{in}}_{j}(\omega_0)= \sigma_-^{(j)},\\
& A^{\mathrm{in}}_{j}(-\omega_0) = \sigma_+^{(j)},
\end{aligned}
\end{equation}
and the \emph{bath operators}
\begin{equation} \label{Bj}
B_j(t)=-\mathbf{d}\boldsymbol{\cdot}\mathbf{E}\big( \hat{\mathbf{r}}_{j}(t),t\big)
\end{equation}
defined for any atom $j=1,\ldots,N$.
\section{General master equation for the internal dynamics}\label{sec.ME}
We are interested in the internal dynamics of the atoms only, since our aim is to quantify the effects of the quantization of the atomic motion on cooperative spontaneous emission. In this Section, we derive a Markovian master equation for the internal degrees of freedom from a microscopic approach~\cite{Bre06}. The derivation of a quantum optical master equation is commonly made for atoms at fixed positions. Here, we go beyond this approximation by treating the atomic position quantum mechanically.
The atomic internal degrees of freedom specify our system $S$, and all other degrees of freedom (atomic external and electromagnetic field degrees of freedom) specify the bath $B$ to which $S$ is coupled, as illustrated in Fig.~\ref{system_bath}.
\begin{figure}
\begin{center}
\includegraphics[width=8.5cm]{fig1.pdf}
\end{center}
\caption{(Color online) Decomposition of the global system into bath and system of interest. The system of interest is the internal part of the atoms described by the state $\rho_A^\mathrm{in}$. The bath corresponds to the atomic external degrees of freedom, described by the state $\rho_A^\mathrm{ex}$, and the electromagnetic field degrees of freedom, initially in the vacuum state $\ket{0}\bra{0}$.} \label{system_bath}
\end{figure}
\subsection{Microscopic derivation and general form of the master equation}
Our starting point is the Liouville-von Neumann evolution
equation
\begin{equation}\label{liou}
i\hbar\frac{d \rho(t)}{dt}=[H_A^{\mathrm{self}}(t) + H_{AF}(t),\rho(t)]
\end{equation}
for the global density matrix $\rho(t)$ in the interaction picture with respect to $H_0$. Time-integration of Eq.~(\ref{liou}) together with a Born series expansion to second order in $H_{AF}$ yields, after tracing over the bath degrees of freedom,
\begin{equation}\label{LvN}
\begin{aligned}
\frac{d
\rho_A^\mathrm{in}(t)}{dt} &= -\frac{i}{\hbar}{\rm Tr}_B\left([H_A^{\mathrm{self}}(t),\rho(t)]\right)-\frac{i}{\hbar}{\rm Tr}_B\left([H_{AF}(t),\rho(0)]\right) \\
&\hspace{0.4cm} -\frac{1}{\hbar^2}\int_0^t{\rm Tr}_B\left([H_{AF}(t),[H_{AF}(t'),\rho(t')]]\right)dt'
\end{aligned}
\end{equation}
where
\begin{equation}
\rho_A^\mathrm{in}(t)={\rm Tr}_B[\rho(t)]
\end{equation}
is the reduced density matrix of $S$ (in the interaction picture) describing the atomic internal dynamics.
\subsubsection{Born approximation}
We consider the weak coupling regime and resort to the Born approximation (see e.g.~\cite{Bre06}), which assumes the form
\begin{equation}\label{bornapp}
\rho(t) \approx \rho_A^\mathrm{in}(t) \otimes \rho_B,
\end{equation}
for the global density matrix to describe the time evolution of the system $S$ only. Here $\rho_B = \rho_A^\mathrm{ex} \otimes \rho_F$ is the bath density matrix with $\rho_A^\mathrm{ex}$ the motional density matrix and $\rho_F=\ket{0}\bra{0}$ the electromagnetic field density matrix which we take as the vacuum state~\cite{footnote1b}. The Born approximation excludes correlations between external and internal states. In this approximation, the bath is considered as stationary during the whole relaxation dynamics and the influence of the system on the bath is neglected. Accordingly, we consider in this work that the characteristic evolution time $\tau_M$ of the atomic motion is much larger than the relaxation time $\tau_R$ of the system. This condition is met in a wide range of experimental situations where atoms are optically or magnetically trapped. For example, the typical frequency $\Omega_M$ of a harmonic potential produced with visible light is in the range $1-10^{3}$~Hz, which leads to $\tau_M \sim 1/\Omega_M\gg\tau_R \sim 1/\gamma_0$ where $\gamma_0$ is the single-atom free spontaneous emission rate, of the order of $10^{9}$ Hz for optical transitions (i.e.~there are at least six orders of magnitude separation between $\tau_M$ and $\tau_R$).
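The six-orders-of-magnitude separation quoted above can be made explicit with a few lines of Python (the numbers below are the illustrative orders of magnitude from the text, not measured values):

```python
# Illustrative orders of magnitude only: optical-trap frequency vs. optical
# spontaneous emission rate, as quoted in the text.
Omega_M = 1e3        # trap frequency, upper end of the 1-10^3 Hz range
gamma_0 = 1e9        # typical spontaneous emission rate for optical transitions, s^-1

tau_M = 1 / Omega_M  # motional (bath) evolution time scale
tau_R = 1 / gamma_0  # internal relaxation time scale

# At least six orders of magnitude of separation between tau_M and tau_R.
assert tau_M / tau_R >= 1e6
```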
In Eq.~(\ref{LvN}), we can furthermore assume without loss of generality that the second term
on the right-hand side vanishes~\cite{footnote2},
which leads to
\begin{equation}\label{LvNB}
\begin{aligned}
\frac{d\rho_A^\mathrm{in}(t)}{dt} &= -\frac{i}{\hbar}\big[\langle H_A^{\mathrm{self}}(t)
\rangle_{\mathrm{ex}},\rho_A^\mathrm{in}(t)\big]
\\[5pt]
&\hspace{-0.5cm} -\frac{1}{\hbar^2} \int_0^t {\rm Tr}_B([H_{AF}(t),[H_{AF}(t'),\rho_A^\mathrm{in}(t')\otimes\rho_B]])\, dt',
\end{aligned}
\end{equation}
since $
{\rm Tr}_B\left([H_A^{\mathrm{self}}(t),\rho(t)]\right) = [\langle H_A^{\mathrm{self}}(t)
\rangle_{\mathrm{ex}},\rho_A^\mathrm{in}(t)]$, where $\langle \, \cdot \,
\rangle_{\mathrm{ex}}=\mathrm{Tr}(\,\cdot\,\rho_A^{\mathrm{ex}})$
stands for the expectation value over the atomic external degrees of
freedom.
\subsubsection{Markov approximation}
The next step is to perform the Markov approximation to eliminate memory effects and end up with a time-local master equation for $\rho_A^\mathrm{in}(t)$. This can be achieved by making the change of variable $t'\to t-t'$, extending the integration domain to infinity, and replacing $\rho_A^\mathrm{in}(t-t')$ by $\rho_A^\mathrm{in}(t)$ under the integral. This approximation is justified as long as the bath correlation time $\tau_B$ is much smaller than the typical relaxation time $\tau_R$ of the system. It is well established that the Markov approximation is an excellent approximation for describing the process of spontaneous emission of photons from atoms at fixed positions~\cite{Gar04}. We now show that this is also the case when the bath operators $B_j$ [Eq.~(\ref{Bj})] contain in addition the motional degrees of freedom. Inserting Eq.~(\ref{HIgen}) into Eq.~(\ref{LvNB}) yields
\begin{widetext}
\begin{equation}\label{LvNBM}
\frac{d\rho_A^\mathrm{in}(t)}{dt} =-\frac{i}{\hbar}\big[\langle H_A^{\mathrm{self}}(t)
\rangle_{\mathrm{ex}},\rho_A^\mathrm{in}(t)\big]\, + \sum_{i,j = 1}^{N} \sum_{\omega, \omega' \atop = \pm \omega_0} \Bigg[\Gamma_{ij}(\omega)\,e^{i (\omega'-\omega)t} \left( A^{\mathrm{in}}_j(\omega) \rho_A^\mathrm{in}(t) A_i^{\mathrm{in}\dagger}(\omega') - A_i^{\mathrm{in}\dagger}(\omega') A^{\mathrm{in}}_j(\omega)\rho_A^\mathrm{in}(t) \right) + \mathrm{h.c.} \Bigg],
\end{equation}
\end{widetext}
with the spectral correlation tensor
\begin{equation}\label{eq:corrtensor}
\Gamma_{ij}(\omega)=\frac{1}{\hbar^2}\int_0^\infty e^{i \omega t} \, \mathcal{C}_{ij}(t)\,dt,
\end{equation}
and the bath correlation function
\begin{equation}\label{eq:corr}
\mathcal{C}_{ij}(t) = \langle B_i^\dagger(t)B_j(0)\rangle_B
\end{equation}
where $B_j(t)$ is given by Eq.~(\ref{Bj}) and the expectation value is over the bath degrees of freedom. The bath correlation function $\mathcal{C}_{ij}(t)$ decays on a time scale $\tau_B$, which defines the bath correlation time. The standard case of atoms at fixed classical positions is obtained formally through the substitution $\hat{\mathbf{r}}_i(t) \to \mathbf{r}_i$ in Eq.~(\ref{Bj}). The correlation function then reduces to $\mathcal{C}_{ij}(t) =\langle \mathbf{E}(\mathbf{r}_{i},t)\boldsymbol{\cdot}\mathbf{d}^*\,\mathbf{E}(\mathbf{r}_{j},0)\boldsymbol{\cdot}\mathbf{d}\rangle$ for the electric field components along $\mathbf{d}$. The bath correlation time $\tau_B$ is smaller than an optical period, and thereby much smaller than the spontaneous emission time $\tau_R$, which justifies the Markov
approximation. This is true for both the diagonal ($i=j$) and off-diagonal ($i\ne j$) terms, for all positions $\mathbf{r}_{i}$ and
$\mathbf{r}_{j}$. One might wonder if the motional degrees of freedom induce correlations on a much longer time scale. The relevant bath correlation function $\mathcal{C}_{ij}(t) =\langle \mathbf{E}(\hat{\mathbf{r}}_{i}(t),t)\boldsymbol{\cdot}\mathbf{d}^*\,\mathbf{E}(\hat{\mathbf{r}}_{j}(0),0)\boldsymbol{\cdot}\mathbf{d}\rangle$ is still given by the correlation of the field components --- now taken in general at different positions, which are themselves subject to quantum fluctuations and dynamics. However, since the electric field correlations decay on a time scale $\tau_B$ regardless of the positions, we see that the motion of the atoms does not increase the bath correlation time, and the Markov approximation remains therefore justified.
\subsubsection{Rotating Wave Approximation}
We now resort to a rotating wave approximation (RWA) by keeping in Eq.~(\ref{LvNBM})
only the energy-conserving terms ($\omega' = \omega$). This ensures that the master equation preserves the positivity of $\rho_A^\mathrm{in}(t)$. The RWA is valid as long as the
relaxation time of the system, $\tau_R\sim 1/\gamma_0$, is much larger than the typical time
scale $\tau_S$ of its intrinsic evolution. Here the intrinsic evolution
corresponds to the internal dynamics of the atoms, hence $\tau_S\sim
1/\omega_0$. We thus have $\tau_S/\tau_R\sim \gamma_0/\omega_0\sim \alpha
(a_0/\lambda_0)^2$ with $\alpha$ the fine-structure constant, $a_0$ the Bohr radius and $\lambda_0$ the wavelength of the emitted radiation. In the optical domain, this
ratio is much smaller than one and the dipole approximation and RWA are entirely justified. Figure~\ref{time_scales} summarizes all the approximations performed in the derivation of the master equation in terms of the relevant characteristic time scales.
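The estimate $\tau_S/\tau_R\sim\alpha(a_0/\lambda_0)^2$ is easily evaluated numerically; the snippet below assumes an illustrative optical wavelength of $500$~nm:

```python
import math

alpha = 7.2973525693e-3   # fine-structure constant
a0 = 5.29177210903e-11    # Bohr radius, m
lambda_0 = 500e-9         # illustrative optical wavelength, m (assumption)

# tau_S / tau_R ~ gamma_0 / omega_0 ~ alpha (a0 / lambda_0)^2
ratio = alpha * (a0 / lambda_0) ** 2

# In the optical domain the ratio is tiny, so the dipole approximation and the
# RWA are well justified.
assert ratio < 1e-7
```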
\begin{figure}
\begin{center}
\includegraphics[width=8.5cm]{fig2.pdf}
\end{center}
\caption{Characteristic time scales corresponding to the evolution of the external dynamics ($\tau_M$), the internal dynamics ($\tau_R\sim 1/\gamma_0$ with $\gamma_0$ the free spontaneous emission rate), the isolated system dynamics ($\tau_S\sim 1/\omega_0$ with $\omega_0$ the atomic transition frequency), and the bath ($\tau_B<\tau_S$).} \label{time_scales}
\end{figure}
\subsubsection{Correlation functions}\label{seccorfun}
The bath correlation function $\mathcal{C}_{ij}(t)$ [Eq.~(\ref{eq:corr})] can be further specified by evaluating the expectation value of the electromagnetic field degrees of freedom. Since the electromagnetic field is initially in vacuum, only the $a_{\mathbf{k}\boldsymbol{\varepsilon}} a_{\mathbf{k}\boldsymbol{\varepsilon}}^\dagger$ term survives and the bath correlation function becomes
\begin{equation}\label{eq:corrdec}
\mathcal{C}_{ij}(t) = \frac{1}{L^3}\sum_{\mathbf{k}\boldsymbol{\varepsilon}} \mathcal{C}^\mathrm{em}_{\mathbf{k}\eps} (t) \, \mathcal{C}_{ij}^\mathrm{ex}(\mathbf{k},t)
\end{equation}
with
\begin{equation}\label{cijclas}
\mathcal{C}^\mathrm{em}_{\mathbf{k}\eps} (t) = \frac{\hbar \omega_k}{2\epsilon_0} \, |\boldsymbol{\varepsilon}_\mathbf{k}\boldsymbol{\cdot} \mathbf{d}|^2\, e^{- i\omega_k t}
\end{equation}
and the motional correlation function
\begin{equation}\label{eq:com}
\mathcal{C}_{ij}^\mathrm{ex}(\mathbf{k},t)= \big\langle e^{i\mathbf{k}\boldsymbol{\cdot} \hat{\mathbf{r}}_{i}(t)}e^{-i \mathbf{k}\boldsymbol{\cdot} \hat{\mathbf{r}}_{j}(0)}\big\rangle_{\mathrm{ex}}.
\end{equation}
The motional correlation function~(\ref{eq:com}) explicitly
depends on time. However, as explained above, the motion of the atoms
does not increase the bath correlation time. Moreover, since the
typical relaxation time of the internal dynamics, $\tau_R$, is much
smaller than the intrinsic evolution time associated with the atomic
motion, $\tau_M$, the latter is approximately frozen during the
emission of photons, so that $\mathcal{C}_{ij}^\mathrm{ex}(\mathbf{k},t)\approx
\mathcal{C}_{ij}^\mathrm{ex}(\mathbf{k}, 0) \equiv \mathcal{C}_{ij}^\mathrm{ex}(\mathbf{k})$ (see the
discussion after Eq.~\eqref{bornapp}, where we found that $\tau_M$ and
$\tau_R$ are separated by at least $6$ orders of magnitude in the
typical optical regime). The bath
correlation function then simplifies to
\begin{equation}
\label{eq:corr2}
\mathcal{C}_{ij}(t) \approx \frac{1}{L^3}\sum_{\mathbf{k}\boldsymbol{\varepsilon}} \mathcal{C}^\mathrm{em}_{\mathbf{k}\eps} (t) \, \mathcal{C}_{ij}^\mathrm{ex}(\mathbf{k})
\end{equation}
with
\begin{equation}\label{cijex}
\mathcal{C}_{ij}^\mathrm{ex}(\mathbf{k}) = \left\langle e^{i\mathbf{k}\boldsymbol{\cdot} \hat{\mathbf{r}}_{ij}}\right\rangle_{\mathrm{ex}} = \mathrm{Tr}_{ij}\left[ e^{i\mathbf{k}\boldsymbol{\cdot} \hat{\mathbf{r}}_{ij}} \rho_{ij}^\mathrm{ex} \right]
\end{equation}
with $\hat{\mathbf{r}}_{ij} = \hat{\mathbf{r}}_{i} - \hat{\mathbf{r}}_j$ and where the trace is now performed over the motional degrees of freedom of the atoms $i$ and $j$ with $\rho_{ij}^\mathrm{ex} $ their external reduced density matrix.
For classical atomic positions, $\hat{\mathbf{r}}_j$ can be replaced by $\mathbf{r}_j$ for all $j$ and the motional correlation function (\ref{cijex}) reduces to $\mathcal{C}_{ij}^\mathrm{ex}(\mathbf{k})=e^{i \mathbf{k}\boldsymbol{\cdot} \mathbf{r}_{ij}}$ with $\mathbf{r}_{ij} = \mathbf{r}_i - \mathbf{r}_j$ the vector connecting atoms $i$ and $j$, so that Eq.~(\ref{cijclas}) yields the Fourier components of the classical correlation function for the electromagnetic field. In contrast, when the atomic motion is quantized, the plane waves $e^{i \mathbf{k}\boldsymbol{\cdot} \mathbf{r}_{ij}}$ in the Fourier series (\ref{eq:corr2}) are replaced by $\langle e^{i\mathbf{k}\boldsymbol{\cdot} \hat{\mathbf{r}}_{ij}}\rangle_{\mathrm{ex}}$ to account for the fluctuations and correlations in the positions of atoms $i$ and $j$.
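For a concrete illustrative case, consider two distinguishable atoms in independent isotropic Gaussian wavepackets (e.g.\ harmonic-trap ground states) of position variance $\sigma^2$ per axis, centred at $\bar{\mathbf{r}}_i$ and $\bar{\mathbf{r}}_j$. Since $e^{i\mathbf{k}\boldsymbol{\cdot}\hat{\mathbf{r}}}$ is diagonal in position, Eq.~(\ref{cijex}) then reduces to the Gaussian characteristic function $e^{i\mathbf{k}\boldsymbol{\cdot}\bar{\mathbf{r}}_{ij}}e^{-k^2\sigma^2}$. The Python sketch below (assumed numerical parameters) checks this against direct sampling of the position distributions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed parameters: two distinguishable atoms in independent 3D Gaussian
# wavepackets of spread sigma, centred at rbar_i and rbar_j.
sigma = 0.2
rbar_i = np.array([0.0, 0.0, 0.0])
rbar_j = np.array([1.0, 0.5, -0.3])
k = np.array([2.0, -1.0, 0.5])

# Monte Carlo estimate of Eq. (cijex): average of exp(i k.(r_i - r_j)) over
# the position distributions (exp(i k.r) is diagonal in position).
n = 200_000
ri = rbar_i + sigma * rng.standard_normal((n, 3))
rj = rbar_j + sigma * rng.standard_normal((n, 3))
C_mc = np.mean(np.exp(1j * (ri - rj) @ k))

# Gaussian characteristic function: r_ij has per-axis variance 2 sigma^2, so
# C_ij(k) = exp(i k.rbar_ij) exp(-k^2 sigma^2).
C_exact = np.exp(1j * k @ (rbar_i - rbar_j) - (k @ k) * sigma**2)
assert abs(C_mc - C_exact) < 1e-2
```

The Gaussian factor $e^{-k^2\sigma^2}$ is a Debye-Waller-like suppression of the plane-wave correlation by the position fluctuations.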
\subsubsection{Standard form of the master equation}
Under the Born-Markov approximation and the RWA, Eq.~(\ref{LvNBM}) takes the Lindblad form
\begin{multline}\label{sf}
\frac{d\rho_A^\mathrm{in}(t)}{dt} =-\frac{i}{\hbar}\big[\langle H_A^{\mathrm{self}}
\rangle_{\mathrm{ex}},\rho_A^\mathrm{in}(t)\big] \\
- \frac{i}{\hbar}\left[H_\mathrm{\Omega} , \rho_A^\mathrm{in}(t) \right] + \mathcal{D}\left(\rho_A^\mathrm{in}(t)\right)
\end{multline}
with the level-shift Hamiltonian
\begin{equation}\label{hamilcons}
H_\mathrm{\Omega} = \sum_{i, j=1}^{N} \hbar \Omega_{ij} \, \sigma_+^{(i)}\sigma_-^{(j)}
\end{equation}
in terms of level shifts
\begin{equation}
\label{Omegaij}
\Omega_{ij} =\mathrm{Im}\left[\Gamma_{ij}(\omega_0)+\Gamma_{ij}(-\omega_0)\right],
\end{equation}
and the dissipator
\begin{equation}
\label{Dissip0}
\mathcal{D}\left(\cdot\right) = \sum_{i,j=1}^N\gamma_{ij}\left(\sigma_-^{(j)}\cdot\sigma_+^{(i)}-\frac{1}{2}\left\{\sigma_+^{(i)}\sigma_-^{(j)},\cdot\right\}\right)
\end{equation}
in terms of decay rates
\begin{equation}
\label{decaycoeff}
\gamma_{ij} = 2\,\mathrm{Re}\left[\Gamma_{ij}(\omega_0)\right].
\end{equation}
Note that in Eq.~(\ref{sf}), $H_A^\mathrm{self}$ no longer depends on time because of the approximation $\hat{\mathbf{r}}_j(t) \approx \hat{\mathbf{r}}_j$ performed above.
Equations~(\ref{hamilcons}) and (\ref{Dissip0}) describe respectively the conservative and dissipative dynamics of the atomic internal state caused by the interaction with the electromagnetic field. The level shifts $\Omega_{ij}$ and the decay rates $\gamma_{ij}$ are obtained from the imaginary and real parts of the spectral correlation tensor $\Gamma_{ij}$ [Eq.~(\ref{eq:corrtensor})]. In the following, we analyse more precisely the structure of these coefficients entering the master equation.
\subsection{Dissipative part}
An explicit expression for the decay rates $\gamma_{ij}$ [Eq.~(\ref{decaycoeff})] can be obtained by performing the time integration in Eq.~(\ref{eq:corrtensor}) together with Eq.~(\ref{eq:corr2}) for the bath correlation function, thereby yielding
\begin{equation}\label{gij2}
\gamma_{ij} = \frac{1}{L^3}\sum_{\mathbf{k}\boldsymbol{\varepsilon}} \gamma^\mathrm{em}_{\mathbf{k}\eps} \, \mathcal{C}_{ij}^\mathrm{ex}(\mathbf{k})
\end{equation}
with
\begin{equation}
\gamma^\mathrm{em}_{\mathbf{k}\eps} = \frac{\pi \omega_k}{\hbar \epsilon_0} \,|\boldsymbol{\varepsilon}_\mathbf{k}\boldsymbol{\cdot}\mathbf{d}|^2\,
\delta(\omega_k - \omega_0)
\end{equation}
the Fourier components of the decay rates for classical atomic positions. Equation~(\ref{gij2}) shows that the quantization of the atomic motion affects the decay rates through the weighting of these Fourier components by the motional correlation function (\ref{cijex}). In the limit of a continuum of modes, the sum over the wave vectors can be replaced by an integral (we use the standard spherical coordinates $(k,\theta,\varphi)$ with $d\Omega = \sin\theta \,d\theta\,d\varphi$),
\begin{equation}
\frac{1}{L^3}\sum_{\mathbf{k}}\to \int \frac{d\mathbf{k}}{(2\pi)^3}\equiv \frac{1}{(2\pi)^3c^3}\int_{0}^{+\infty}\omega^2\,d\omega\int d\Omega,\label{sumtoint}
\end{equation}
and Eq.~(\ref{gij2}) yields, after performing the $\omega$-integration,
\begin{equation}\label{gijgen}
\gamma_{ij} = \int \sum_{\boldsymbol{\varepsilon}} \gamma^\mathrm{em}_{\mathbf{k}_0\eps} \,\, \mathcal{C}_{ij}^\mathrm{ex}(\mathbf{k}_0) \,\frac{d\Omega}{(2\pi)^2}
\end{equation}
with $\mathbf{k}_0= k_0\,(\cos\varphi\sin\theta, \sin\varphi\sin\theta, \cos\theta)$, $k_0=\omega_0/c$,
\begin{equation}\label{gijclass}
\gamma^\mathrm{em}_{\mathbf{k}_0\eps} = \frac{3 \pi \gamma_0}{2} |\boldsymbol{\varepsilon}_{\mathbf{k}_0}\boldsymbol{\cdot}\mathbf{e}_\mathbf{d}|^2
\end{equation}
with $\mathbf{e}_\mathbf{d}=\mathbf{d}/d$, $d=|\mathbf{d}|$ and $\gamma_0$ the single-atom spontaneous emission rate
\begin{equation}\label{saper}
\gamma_0 = \frac{\omega_0^3 d^2}{3\pi\hbar\epsilon_0c^3}.
\end{equation}
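For orientation, Eq.~(\ref{saper}) can be evaluated with parameters of roughly the scale of an alkali $D$ line; the wavelength and dipole matrix element below are assumed round numbers, not values taken from the text:

```python
import math

# Physical constants (SI units).
hbar = 1.054571817e-34      # J s
eps0 = 8.8541878128e-12     # F/m
c = 2.99792458e8            # m/s

# Assumed illustrative transition parameters.
lambda_0 = 780e-9                    # transition wavelength, m
omega_0 = 2 * math.pi * c / lambda_0
d = 2.5e-29                          # dipole matrix element, C m (assumption)

# Single-atom spontaneous emission rate, Eq. (saper).
gamma_0 = omega_0**3 * d**2 / (3 * math.pi * hbar * eps0 * c**3)

# gamma_0 comes out in the few-times-10^7 s^-1 range (a linewidth of a few
# MHz), consistent with the order-of-magnitude estimates used in the text.
assert 1e7 < gamma_0 < 1e8
```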
For classical atomic positions, $\mathcal{C}_{ij}^{\mathrm{ex}}(\mathbf{k})= e^{i \mathbf{k} \boldsymbol{\cdot} \mathbf{r}_{ij}}$ and Eq.~(\ref{gijgen}) reduces to the classical form of the decay rates for atoms separated by a distance $r_{ij}=|\mathbf{r}_{ij}|$, $\gamma_{ij}=\gamma^{\mathrm{cl}}(\mathbf{r}_{ij})$ with~\cite{Ste64,Aga74}
\begin{equation}
\label{gammaijcl}
\gamma^{\mathrm{cl}}(\mathbf{r}_{ij}) =\frac{3 \gamma_0 }{2}\Bigg[ p_{ij} \,\frac{\sin \xi_{ij}}{\xi_{ij}} + q_{ij} \left( \frac{\cos \xi_{ij}}{\xi_{ij}^2} - \frac{\sin \xi_{ij}}{\xi_{ij}^3}\right)\Bigg],
\end{equation}
with $\xi_{ij}=k_0 r_{ij}$.
For a $\pi$ transition, the angular factors $p_{ij}$ and $q_{ij}$ are given by
\begin{equation}\label{pqpi}
p_{ij}=\sin^2 \alpha_{ij} ,\;\; q_{ij}= (1-3 \cos^2 \alpha_{ij}),
\end{equation}
and for a $\sigma^\pm$ transition by
\begin{equation}\label{pqsigma}
p_{ij}=\tfrac{1}{2}(1+\cos^2 \alpha_{ij}) ,\;\; q_{ij}= \tfrac{1}{2}(3 \cos^2 \alpha_{ij}-1)
\end{equation}
with $\alpha_{ij}=\arccos(\mathbf{r}_{ij}\boldsymbol{\cdot}\mathbf{e}_z/r_{ij})$ the angle between the quantization axis and the vector connecting atoms $i$ and $j$. Equation~(\ref{gijgen}) can also be written as
$\gamma_{ij}= \mathcal{F}_{\mathbf{0}}^{-1}\left[ \mathcal{F}_\mathbf{k}\left[ \gamma^{\mathrm{cl}}\right]\mathcal{C}_{ij}^\mathrm{ex}(\mathbf{k}) \right]$
which can be seen to be the convolution product $ \left(\gamma^{\mathrm{cl}} \, \star \, f_{ij} \right)(\mathbf{0})$ with $f_{ij}(\mathbf{r} ) = \mathcal{F}^{-1}_{\mathbf{r}}\left[\mathcal{C}_{ij}^\mathrm{ex}(\mathbf{k}) \right]$~\cite{footnote3}. Therefore, the decay rates take the alternative form
\begin{equation}\label{gammaijconv}
\begin{aligned}
\gamma_{ij} &= \int_{\mathbb{R}^3} \gamma^{\mathrm{cl}}(\mathbf{r}) \,\mathcal{F}^{-1}_{\mathbf{r}} \left[\mathcal{C}_{ij}^\mathrm{ex}(\mathbf{k})\right] d\mathbf{r}
\end{aligned}
\end{equation}
in terms of their classical expression~(\ref{gammaijcl}) and the inverse Fourier transform of the motional correlation function~(\ref{cijex}).
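The classical expression~(\ref{gammaijcl}) is straightforward to implement numerically. The sketch below (arbitrary illustrative angles) checks the small-separation limit $\gamma^{\mathrm{cl}}\to\gamma_0$ and the bound $|\gamma^{\mathrm{cl}}(\mathbf{r})|\leqslant\gamma_0$:

```python
import numpy as np

gamma_0 = 1.0  # work in units of the single-atom rate

def pq(alpha, transition="pi"):
    """Angular factors of Eqs. (pqpi)/(pqsigma); alpha is the angle between
    the quantization axis and the interatomic vector."""
    c2 = np.cos(alpha) ** 2
    if transition == "pi":
        return 1.0 - c2, 1.0 - 3.0 * c2
    return 0.5 * (1.0 + c2), 0.5 * (3.0 * c2 - 1.0)

def gamma_cl(xi, alpha, transition="pi"):
    """Classical decay rate of Eq. (gammaijcl), with xi = k0 * r_ij."""
    p, q = pq(alpha, transition)
    return 1.5 * gamma_0 * (p * np.sin(xi) / xi
                            + q * (np.cos(xi) / xi**2 - np.sin(xi) / xi**3))

# Small-separation limit: sin(xi)/xi -> 1 and the q-bracket -> -1/3, so
# gamma_cl -> (3/2) gamma_0 (p - q/3) = gamma_0 for both transition types.
for tr in ("pi", "sigma"):
    for alpha in (0.3, 1.0, 2.0):
        assert abs(gamma_cl(1e-4, alpha, tr) - gamma_0) < 1e-6

# |gamma_cl| <= gamma_0 everywhere (checked here on a grid), as used in the
# bound on gamma_ij derived below Eq. (gammaijconv).
xi = np.linspace(0.05, 30.0, 2000)
assert np.all(np.abs(gamma_cl(xi, 0.7, "pi")) <= gamma_0 + 1e-9)
```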
Two important features follow from Eq.~(\ref{gijgen}) (or equivalently from Eq.~(\ref{gammaijconv})). First, the diagonal decay rates $\gamma_{ii}$ are seen to coincide with those obtained in the classical case because $\mathcal{C}^\mathrm{ex}_{ii}(\mathbf{k})= 1$ for any motional state and wave vector $\mathbf{k}$. Hence, the dissipative internal dynamics of a single atom is not affected by its motional state when the electromagnetic field is initially in vacuum. Second, Eq.~(\ref{gijgen}) shows that as soon as the quantum nature of the atomic motion becomes appreciable, we have the additional
possibility of influencing the decay rates through engineering the motional
state of the atoms. The motional correlation function $\mathcal{C}_{ij}^\mathrm{ex}(\mathbf{k}_0)$ can be seen from
Eq.~(\ref{gijgen}) to play a similar role as mode-dependent modifications of the coupling constants, and can thus be expected to lead to similar effects as Purcell's enhancement or reduction of spontaneous emission~\cite{San11}.
It readily follows from Eq.~(\ref{gammaijconv}) that $\gamma_{ij}=\gamma_{ji}$ and $|\gamma_{ij}| \leqslant \gamma_0$. Indeed, the classical expression~(\ref{gammaijcl}) satisfies $|\gamma^\mathrm{cl}(\mathbf{r})| \leqslant \gamma^\mathrm{cl}(\mathbf{0}) = \gamma_0 \,\, \forall\, \mathbf{r}$, which implies
\begin{equation}
\begin{aligned}
|\gamma_{ij}| &\leqslant \gamma_0 \left| \int_{\mathbb{R}^3} \,\mathcal{F}^{-1}_{\mathbf{r}} \left[\mathcal{C}_{ij}^\mathrm{ex}(\mathbf{k})\right] d\mathbf{r} \right|= \gamma_0 \left| \mathcal{C}_{ij}^\mathrm{ex}(\mathbf{0}) \right|=\gamma_0
\end{aligned}
\end{equation}
since $\mathcal{C}_{ij}^\mathrm{ex}(\mathbf{0}) = \mathrm{Tr}(\rho_{ij}^\mathrm{ex}) = 1 $ for any $i,j$ due to normalization.
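When $\mathcal{F}^{-1}_{\mathbf{r}}[\mathcal{C}_{ij}^\mathrm{ex}(\mathbf{k})]$ is a proper probability density (e.g.\ for distinguishable atoms in independent Gaussian wavepackets), Eq.~(\ref{gammaijconv}) is simply the average of $\gamma^{\mathrm{cl}}$ over the distribution of interatomic separations and can be estimated by Monte Carlo sampling. The sketch below (assumed illustrative parameters, $\pi$ transition) recovers the fixed-position rate for narrow wavepackets and verifies the bound $|\gamma_{ij}|\leqslant\gamma_0$ for broad ones:

```python
import numpy as np

rng = np.random.default_rng(2)
gamma_0, k0 = 1.0, 1.0   # units: rates in gamma_0, lengths in 1/k0

def gamma_cl(r):
    """Classical decay rate, Eq. (gammaijcl), for a pi transition; r is an
    (n, 3) array of separation vectors, with z the quantization axis."""
    rn = np.linalg.norm(r, axis=-1)
    xi = k0 * rn
    c2 = (r[..., 2] / rn) ** 2                 # cos^2(alpha_ij)
    p, q = 1.0 - c2, 1.0 - 3.0 * c2
    return 1.5 * gamma_0 * (p * np.sin(xi) / xi
                            + q * (np.cos(xi) / xi**2 - np.sin(xi) / xi**3))

def averaged_rate(rbar, sigma, n=200_000):
    """gamma_ij of Eq. (gammaijconv) when F^{-1}[C_ij] is the probability
    density of r_i - r_j (independent Gaussian wavepackets of spread sigma,
    so r_ij has per-axis standard deviation sigma * sqrt(2))."""
    r = rbar + sigma * np.sqrt(2) * rng.standard_normal((n, 3))
    return gamma_cl(r).mean()

rbar = np.array([0.0, 0.0, 2.0])      # mean separation, units of 1/k0

# Narrow wavepackets reproduce the fixed-position (classical) rate...
g_narrow = averaged_rate(rbar, sigma=1e-3)
assert abs(g_narrow - gamma_cl(rbar[None, :])[0]) < 1e-2
# ...and for any spread the general bound |gamma_ij| <= gamma_0 holds.
g_broad = averaged_rate(rbar, sigma=0.5)
assert abs(g_broad) <= gamma_0
```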
\subsection{Conservative part}
An explicit expression for the level shifts $\Omega_{ij}$ [Eq.~(\ref{Omegaij})] can be obtained along the same lines as for the decay rates, and reads
\begin{equation}\label{Oijsum}
\Omega_{ij} = \frac{1}{L^3}\sum_{\mathbf{k}\boldsymbol{\varepsilon}} \Omega^\mathrm{em}_{\mathbf{k}\eps} \, \mathcal{C}_{ij}^\mathrm{ex}(\mathbf{k}) \,
\end{equation}
with
\begin{equation}
\Omega^\mathrm{em}_{\mathbf{k}\eps} = -\frac{1}{\hbar \epsilon_0}\,
\, \mathrm{v.p.}\left(\frac{\omega_k^2}{\omega^2_k-\omega^2_0}\right) |\boldsymbol{\varepsilon}_\mathbf{k}\boldsymbol{\cdot}\mathbf{d}|^2
\end{equation}
where v.p.\ stands for the Cauchy principal value~\cite{footnote4}.
In the limit of a continuum of modes, Eq.~(\ref{Oijsum}) becomes
\begin{equation} \label{omegaij}
\Omega_{ij} = \mathrm{v.p.}\int \sum_{\boldsymbol{\varepsilon}} \Omega^\mathrm{em}_{\mathbf{k}\eps} \,
\mathcal{C}_{ij}^\mathrm{ex}(\mathbf{k})\,\frac{d\mathbf{k}}{(2\pi)^3}
\end{equation}
with
\begin{equation}
\Omega^\mathrm{em}_{\mathbf{k}\eps} = - \frac{3 \pi \gamma_0}{k_0^3}\,
\frac{k^2}{k^2-k_0^2} \, |\boldsymbol{\varepsilon}_\mathbf{k}\boldsymbol{\cdot} \mathbf{d}|^2.
\end{equation}
As for the decay rates, the plane waves $e^{i \mathbf{k}\boldsymbol{\cdot} \mathbf{r}_{ij}}$ in the Fourier series for the level shifts $\Omega_{ij}$ are replaced by the motional correlation function (\ref{cijex}) taking into account the quantization of the atomic motion. The diagonal coefficients $\Omega_{ii}$ related to the Lamb shifts are not affected by the quantization of the motion since $\mathcal{C}_{ii}^\mathrm{ex}(\mathbf{k})=1$; they are all equal and can be discarded by means of a renormalization of the atomic transition frequency $\omega_0$. The off-diagonal shifts $\Omega_{ij}$ ($i\ne j$) contain divergent terms, that are already present without quantization of the atomic motion, i.e.~with classical atomic positions. However, these terms are exactly cancelled by other divergent terms appearing in the Hamiltonian $H_A^{\mathrm{self}}$~\cite{Aga74}. This cancellation still holds when the atomic motion is quantized, as we proceed to show. We start by rewriting the Hamiltonian $H_A^{\mathrm{self}}$ [Eq.~(\ref{tHAself})] using the expression of the Dirac delta distribution in integral form in momentum space,
\begin{equation}
\begin{aligned}
H_{A}^{\mathrm{self}} & = \frac{d^2}{2 \epsilon_0} \sum_{i,j=1}^N \sigma_x^{(i)} \sigma_x^{(j)}\\
& \;\;\iiint e^{i (\mathbf{k} -\mathbf{k}') \boldsymbol{\cdot} \mathbf{r}} e^{-i (\mathbf{k}\boldsymbol{\cdot} \hat{\mathbf{r}}_i - \mathbf{k}' \boldsymbol{\cdot} \hat{\mathbf{r}}_j)} d\mathbf{r} \,\frac{d\mathbf{k}}{(2\pi)^3}\, \frac{d\mathbf{k}'}{(2\pi)^3} .
\end{aligned}
\end{equation}
The integration over $\mathbf{r}$ yields a Dirac delta distribution $\delta(\mathbf{k}-\mathbf{k}')$, which eventually leads to the contact interaction Hamiltonian
\begin{equation}\label{Hselfcontact}
H_{A}^{\mathrm{self}} = \frac{d^2}{2 \epsilon_0} \sum_{i,j=1}^N \sigma_x^{(j)} \sigma_x^{(i)} \delta(\hat{\mathbf{r}}_i - \hat{\mathbf{r}}_j).
\end{equation}
By keeping only the energy-conserving terms (RWA) in Eq.~(\ref{Hselfcontact}), the expectation value $\langle H_A^{\mathrm{self}}
\rangle_{\mathrm{ex}}$ appearing in Eq.~(\ref{sf}) becomes
\begin{equation}
\begin{aligned}\label{selfh2}
\langle H_A^{\mathrm{self}}
\rangle_{\mathrm{ex}} &= \sum_{i \neq j}^N
\hbar \Omega_{ij}^{\mathrm{self}} \sigma_+^{(j)} \sigma_-^{(i)} + \sum_{i=1}^N
\hbar \Omega_{ii}^{\mathrm{self}} \,\mathbb{1}^{(i)}
\end{aligned}
\end{equation}
with $\mathbb{1}^{(i)}$ the internal identity operator for atom $i$ and
\begin{align}\label{omegaijself}
&\Omega_{ij}^\mathrm{self} = \frac{3\pi \gamma_0}{k_0^3} \int \mathcal{C}_{ij}^\mathrm{ex}(\mathbf{k})\,\frac{d\mathbf{k}}{(2\pi)^3} , \\
&\Omega_{ii}^\mathrm{self} = \frac{3\pi \gamma_0}{2 k_0^3} \int \frac{d\mathbf{k}}{(2\pi)^3}.
\end{align}
Since the divergent level-shift $\Omega_{ii}^\mathrm{self}$ in Eq.~(\ref{selfh2}) is proportional to the identity, it can be absorbed by means of a redefinition of the zero energy, so that $\langle H_A^{\mathrm{self}}
\rangle_{\mathrm{ex}}$ reduces to
\begin{equation}
\langle H_A^{\mathrm{self}}
\rangle_{\mathrm{ex}} = \sum_{i \neq j}^N
\hbar \Omega_{ij}^{\mathrm{self}} \sigma_+^{(j)} \sigma_-^{(i)}.
\end{equation}
We now split the level shifts $\Omega_{ij}$ [Eq.~(\ref{omegaij})] into~\cite{Aga74}
\begin{equation}
\Omega_{ij}= \Delta_{ij} - \Omega_{ij}^\mathrm{self}
\end{equation}
where $\Omega^{\mathrm{self}}_{ij}$ is given by Eq.~(\ref{omegaijself}) and $\Delta_{ij}$ is the dipole-dipole shift given by
\begin{equation}\label{deltaijgen}
\Delta_{ij} = \mathrm{v.p.}\int\sum_{\boldsymbol{\varepsilon}} \Delta^\mathrm{em}_{\mathbf{k}\eps} \, \mathcal{C}_{ij}^\mathrm{ex}(\mathbf{k})\,\frac{d\mathbf{k}}{(2\pi)^3}
\end{equation}
with
\begin{equation}\label{Dijclkc}
\Delta^\mathrm{em}_{\mathbf{k}\eps} = \frac{3\pi \gamma_0}{k_0^3} \left[ 1 - \frac{k^2}{k^2-k^2_0} |\boldsymbol{\varepsilon}_\mathbf{k}\boldsymbol{\cdot}
\mathbf{e}_\mathbf{d}|^2 \right].
\end{equation}
The Hamiltonian (\ref{hamilcons}) entering the master equation can then be decomposed as
\begin{equation}
H_\mathrm{\Omega} = \sum_{i \neq j}^{N} \hbar\Omega_{ij} \, \sigma_+^{(i)}\sigma_-^{(j)} \equiv H_\mathrm{\Delta} - \langle H_A^{\mathrm{self}}
\rangle_{\mathrm{ex}} \label{hdeltaij}
\end{equation}
with the dipole-dipole Hamiltonian
\begin{equation}\label{dipdipHa}
H_\mathrm{\Delta} = \sum_{i \neq j }^{N} \hbar \Delta_{ij} \, \sigma_+^{(i)}\sigma_-^{(j)},
\end{equation}
so that Eq.~(\ref{sf}) eventually reads
\begin{equation}\label{sfsch}
\begin{aligned}
\frac{d\rho_A^\mathrm{in}(t)}{dt} &= - \frac{i}{\hbar}\left[ H_\mathrm{\Delta}, \rho_A^\mathrm{in}(t) \right] + \mathcal{D}\left(\rho_A^\mathrm{in}(t)\right).
\end{aligned}
\end{equation}
Hence, $H_A^\mathrm{self}$ does not contribute to the dynamics, and $H_\mathrm{\Delta}$ is the proper form of the Hamiltonian to describe the conservative dynamics of the atomic system. It accounts for second order photon exchanges between pairs of atoms in different internal energy eigenstates~\cite{footnote5}.
For classical atomic positions, $\mathcal{C}_{ij}^{\mathrm{ex}}(\mathbf{k})= e^{i \mathbf{k} \boldsymbol{\cdot} \mathbf{r}_{ij}}$ and Eq.~(\ref{deltaijgen}) reduces to the retarded interaction energy (divided by $\hbar$) between two parallel dipoles located at fixed positions $\mathbf{r}_{i}$ and $\mathbf{r}_{j}$, i.e.\ $\Delta_{ij}=\Delta^{\mathrm{cl}}(\mathbf{r}_{ij})$ with~\cite{Ste64,Aga74}
\begin{equation}
\label{deltaijcl}
\Delta^{\mathrm{cl}}(\mathbf{r}_{ij}) =\frac{3 \gamma_0 }{4}\Bigg[ - p_{ij} \,\frac{\cos \xi_{ij}}{\xi_{ij}} + q_{ij} \left( \frac{\sin \xi_{ij}}{\xi_{ij}^2} + \frac{\cos \xi_{ij}}{\xi_{ij}^3}\right) \Bigg],
\end{equation}
$\xi_{ij}=k_0r_{ij}$ and where $p_{ij}$ and $q_{ij}$ are given by Eq.~(\ref{pqpi}) for a $\pi$ transition, and by Eq.~(\ref{pqsigma}) for a $\sigma^\pm$ transition. The sum over the polarizations of the Fourier components~(\ref{Dijclkc}) is thus equal to the Fourier transform of the retarded dipole-dipole interaction energy (divided by $\hbar$), $\Delta^{\mathrm{cl}}(\mathbf{r})$. Equation~(\ref{deltaijgen}) is the generalization of the dipole-dipole shifts (\ref{deltaijcl}) to account for quantum fluctuations and correlations in the atomic motion. Similarly to the decay rates, the dipole-dipole shifts can be written as
\begin{equation}\label{deltaijconv2}
\begin{aligned}
\Delta_{ij} &= \int_{\mathbb{R}^3} \Delta^{\mathrm{cl}}(\mathbf{r}) \,\mathcal{F}^{-1}_{\mathbf{r}} \left[\mathcal{C}_{ij}^\mathrm{ex}(\mathbf{k})\right] d\mathbf{r}.
\end{aligned}
\end{equation}
As an example, let us consider again the case of two atoms at classical positions $\mathbf{r}_i$ and $\mathbf{r}_j$. We then have $\mathcal{C}_{ij}^{\mathrm{ex}}(\mathbf{k})= e^{i \mathbf{k}\boldsymbol{\cdot} \mathbf{r}_{ij}}$ and Eq.~(\ref{deltaijconv2}) reduces to $\Delta_{ij}=\Delta^{\mathrm{cl}}(\mathbf{r}_{ij})$, as expected. However, in most cases, Eq.~(\ref{deltaijconv2}) yields an infinite result because the
$1/r^3$ divergence of $\Delta^{\mathrm{cl}}(\mathbf{r})$ at $r=0$ is not integrable in $\mathbb{R}^3$ and because $\mathcal{F}^{-1}_{\mathbf{r}} [\mathcal{C}_{ij}^\mathrm{ex}(\mathbf{k})]$ does not, in general, vanish at the origin. In order to treat dipole-dipole interactions, one must introduce a minimal distance, i.e.\ a cutoff, in the integral (\ref{deltaijconv2}). A natural cutoff would be of the order of the size of an atom, so as to remain compatible with the dipole approximation made in the derivation of the master equation. The effect of the cutoff will be discussed in detail in the following sections.
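As a minimal numerical sketch of the near-field divergence that motivates the cutoff, the classical shift of Eq.~(\ref{deltaijcl}) can be evaluated directly; here the angular factors $p_{ij}$ and $q_{ij}$ of Eqs.~(\ref{pqpi}) and (\ref{pqsigma}) are not reproduced, so they are kept as free parameters.

```python
import numpy as np

def delta_cl(xi, p, q, gamma0=1.0):
    """Classical dipole-dipole shift, Eq. (deltaijcl), in units of gamma0.

    xi = k0*r_ij; p and q are the angular factors of Eqs. (pqpi)/(pqsigma),
    kept here as free parameters since they are defined elsewhere."""
    return 0.75 * gamma0 * (-p * np.cos(xi) / xi
                            + q * (np.sin(xi) / xi**2 + np.cos(xi) / xi**3))

# Near field: the q*cos(xi)/xi^3 term dominates, so xi^3 * delta_cl tends to
# (3/4)*q*gamma0 as xi -> 0 -- the non-integrable 1/r^3 divergence discussed
# above.  In the far field the shift decays to zero as 1/xi.
```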
\section{General expressions of decay rates and dipole-dipole shifts}
\label{secdecay}
The master equation (\ref{sfsch}) is completely determined in terms of the motional correlation function~(\ref{cijex}) through the expressions of the decay rates $\gamma_{ij}$, given by Eq.~(\ref{gammaijconv}) and appearing in the dissipator (\ref{Dissip0}), and the dipole-dipole shifts $\Delta_{ij}$, given by Eq.~(\ref{deltaijconv2}) and appearing in the dipole-dipole Hamiltonian (\ref{dipdipHa}). All the effects related to recoil, quantum fluctuations of motion and indistinguishability are included in the motional correlation function $\mathcal{C}_{ij}^\mathrm{ex}(\mathbf{k})$. In this section, we provide general expressions for $\mathcal{C}_{ij}^\mathrm{ex}(\mathbf{k})$, $\gamma_{ij}$ and $\Delta_{ij}$ both for distinguishable and indistinguishable atoms for arbitrary motional states.
\subsection{Distinguishable atoms}
When $N$ distinguishable atoms are in the motional separable state $\ket{\phi_{1_{\ell}} \dotsc \phi_{N_{\ell}}}$ with a probability $p_\ell \geqslant 0$ ($\sum_\ell p_\ell = 1$), the global motional state is the statistical mixture
\begin{equation}\label{rhoAexdis}
\rho_A^{\mathrm{ex,sep}} = \sum_{\ell = 1}^{L} p_\ell \, \ket{\phi_{1_{\ell}} \dotsc \phi_{N_{\ell}}}\bra{\phi_{1_{\ell}} \dotsc \phi_{N_{\ell}}}.
\end{equation}
The single-atom motional states $\ket{\phi_{j_{\ell}}}$ ($j=1,\ldots,N$; $\ell=1,\ldots,L$) are normalized but are not necessarily orthogonal. The two-atom reduced density matrix $\rho_{ij}^\mathrm{ex}$ is obtained by tracing over the motional degrees of freedom of all atoms but $i$ and $j$, and reads
\begin{equation}\label{rhoijdis}
\rho_{ij}^{\mathrm{ex,sep}} = \sum_{\ell = 1}^{L}p_\ell \, \ket{\phi_{i_{\ell}} \phi_{j_{\ell}}} \bra{\phi_{i_{\ell}}\phi_{j_{\ell}}}.
\end{equation}
The motional correlation function (\ref{cijex}) is thus given, for distinguishable atoms (in the mixture (\ref{rhoAexdis})), by
\begin{equation}\label{cijexdis}
\mathcal{C}_{ij}^\mathrm{\,ex,sep}(\mathbf{k}) = \sum_{\ell = 1}^{L} p_\ell \, I_{i_{\ell} i_{\ell}}(\mathbf{k}) I_{j_{\ell} j_{\ell}}(-\mathbf{k})
\end{equation}
with the overlap integral
\begin{equation}\label{overlap}
\begin{aligned}
I_{\alpha \beta} (\mathbf{k})
&= \int_{\mathbb{R}^3} e^{i \mathbf{k} \boldsymbol{\cdot} \mathbf{r}} \, \phi_{\alpha}(\mathbf{r}) \, \phi^*_{\beta}(\mathbf{r})\,d\mathbf{r} \\
&= \int_{\mathbb{R}^3} \mathcal{F}_{\mathbf{k}' - \mathbf{k}}[{\phi_{\alpha}}] \, \mathcal{F}_{\mathbf{k}'}[\phi^{*}_{\beta}]\, d\mathbf{k}'
\end{aligned}
\end{equation}
defined for any pair of indices $\alpha\beta$. The overlap integral (\ref{overlap}) is equal to the overlap in momentum space between the state $\phi_{\beta}$ and the state $\phi_{\alpha}$ shifted by the momentum $\hbar \mathbf{k}$ of a photon of wave vector $\mathbf{k}$. The inverse Fourier transform of (\ref{cijexdis}) can be written
\begin{equation}\label{FinvCijex}
\mathcal{F}^{-1}_{\mathbf{r}} \left[\mathcal{C}_{ij}^\mathrm{ex}(\mathbf{k})\right] = \sum_{\ell = 1}^L p_\ell \int_{\mathbb{R}^3} |\phi_{i_\ell}(\mathbf{r}')|^2 \,|\phi_{j_\ell}(\mathbf{r}+\mathbf{r}')|^2 \, d\mathbf{r}'.
\end{equation}
On inserting Eq.~(\ref{FinvCijex}) into Eqs.~(\ref{gammaijconv}) and (\ref{deltaijconv2}), we obtain explicit expressions for the decay rates and the dipole-dipole shifts in terms of single-atom motional states
\begin{equation}\label{disEXCHANGE1}
\gamma_{ij}^{\mathrm{sep}} = \sum_{\ell = 1}^L p_\ell \iint_{\mathbb{R}^3\times \mathbb{R}^3} \gamma^{\mathrm{cl}}(\mathbf{r}-\mathbf{r}') \,|\phi_{i_\ell}(\mathbf{r})|^2 \,|\phi_{j_\ell}(\mathbf{r}')|^2 \,d\mathbf{r}\, d\mathbf{r}',
\end{equation}
\begin{equation}\label{disEXCHANGE2}
\Delta_{ij}^{\mathrm{sep}} = \sum_{\ell = 1}^L p_\ell \iint_{\mathbb{R}^3\times \mathbb{R}^3} \Delta^{\mathrm{cl}}(\mathbf{r}-\mathbf{r}') \,|\phi_{i_\ell}(\mathbf{r})|^2 \,|\phi_{j_\ell}(\mathbf{r}')|^2 \,d\mathbf{r}\, d\mathbf{r}',
\end{equation}
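The double convolution above lends itself to a direct numerical check. The sketch below evaluates a single term ($L=1$) of Eq.~(\ref{disEXCHANGE1}) for two Gaussian wave packets with motion quantized along the pair axis; since the radiative term of Eq.~(\ref{gammaijcl}) is not reproduced in this section, the standard Lehmberg form of $\gamma^{\mathrm{cl}}$ (consistent with the conventions of Eq.~(\ref{deltaijcl})) is assumed, with $p$ and $q$ as free angular parameters.

```python
import numpy as np
from scipy.integrate import dblquad

def gamma_cl(xi, p, q, gamma0=1.0):
    """Pairwise decay rate for classical positions (standard Lehmberg form,
    an assumption here, consistent with the conventions of Eq. (deltaijcl))."""
    if xi < 1e-3:  # series expansion avoids numerical cancellation at small xi
        return 1.5 * gamma0 * (p * (1 - xi**2 / 6) + q * (-1/3 + xi**2 / 30))
    return 1.5 * gamma0 * (p * np.sin(xi) / xi
                           + q * (np.cos(xi) / xi**2 - np.sin(xi) / xi**3))

def gamma_sep_1d(r12, ell0, p, q, k0=1.0):
    """Sketch of Eq. (disEXCHANGE1), single term L=1: two Gaussian wave
    packets of width ell0 centred at 0 and r12, motion quantized along the
    pair axis only."""
    g = lambda z, z0: np.exp(-(z - z0)**2 / (2 * ell0**2)) / (np.sqrt(2 * np.pi) * ell0)
    f = lambda zp, z: gamma_cl(k0 * abs(z - zp), p, q) * g(z, 0.0) * g(zp, r12)
    val, _ = dblquad(f, -8 * ell0, 8 * ell0,
                     lambda z: r12 - 8 * ell0, lambda z: r12 + 8 * ell0)
    return val
```

In the tight-confinement limit $\ell_0 \to 0$ the densities become delta-like and the integral collapses to $\gamma^{\mathrm{cl}}(r_{12})$, as the text anticipates.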
\subsection{Indistinguishable atoms}
For indistinguishable atoms in a statistical mixture $\rho_A$, each wave function of the mixture has to be either symmetric or antisymmetric under exchange of particles, depending on the quantum statistics of the atoms (bosonic or fermionic). Due to the Born approximation, the mixture contains a single term and the initial state has to be of the form $\rho_A(0)=\rho_A^\mathrm{in}\otimes \rho_A^\mathrm{ex}$. For clarity, we shall consider \emph{pure} product initial states, and restrict ourselves to states that are both individually either symmetric ($+$) or antisymmetric ($-$). The symmetrization (antisymmetrization) of the separable motional state $\ket{\phi_{1} \dotsc \phi_{N}}$ leads to the $N$-atom symmetric (antisymmetric) state
\begin{equation}
\begin{aligned}
\ket{\Phi_A^\mathrm{ex,\pm}}=\sqrt{\frac{n_{\phi_1}!\cdots n_{\phi_N}!}{N!}}\,\sum_{\pi} s_{\pm}^{\pi} \,
\ket{\phi_{\pi(1)}
\cdots \phi_{\pi(N)}}
\end{aligned}
\end{equation}
where $n_{\phi_j}$ is the number of atoms occupying the single-atom motional state $\ket{\phi_j}$, the sum runs over all permutations $\pi$ of the indices $\{1, \dotsc, N\}$, and the symbol $ s_{\pm}^{\pi} $ is defined as
\begin{equation}
s_{\pm}^{\pi} = \begin{cases} 1 & \mbox{if $+$}, \\
\mathrm{sign}(\pi) & \mbox{if $-$}, \end{cases}
\end{equation}
where $\mathrm{sign}(\pi)$ is the signature of the permutation $\pi$.
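The combinatorial bookkeeping of $s_{\pm}^{\pi}$ is elementary but easy to get wrong; a minimal sketch, computing the signature by inversion count, is:

```python
from itertools import permutations

def sign(perm):
    """Signature of a permutation given as a tuple, via its inversion count."""
    inv = sum(1 for a in range(len(perm)) for b in range(a + 1, len(perm))
              if perm[a] > perm[b])
    return -1 if inv % 2 else 1

def s_pm(perm, sym):
    """The symbol s_pm^pi: 1 for symmetric states ('+'), sign(pi) for
    antisymmetric ones ('-')."""
    return 1 if sym == '+' else sign(perm)
```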
The two-atom reduced density matrix, obtained by taking the partial trace of $\rho_A^{\mathrm{ex, \pm}}=\ket{\Phi_A^\mathrm{ex,\pm}}\bra{\Phi_A^\mathrm{ex,\pm}}$ over all atoms but $i$ and $j$, has the form
\begin{equation}\label{rhoijpm}
\begin{aligned}
\rho_{ij}^{\mathrm{ex, \pm}} &= \sum_{\pi, \pi'}\lambda_{ij}^{\pi\pi', \pm} \,\ket{\phi_{\pi(i)}\phi_{\pi(j)}} \bra{\phi_{\pi'(i)}\phi_{\pi'(j)}}
\end{aligned}
\end{equation}
with
\begin{equation}\label{pij}
\lambda_{ij}^{\pi\pi', \pm} = \frac{\displaystyle s_{\pm}^{\pi} \, s_{\pm}^{\pi'} \, \prod_{n = 1 \atop n\neq i,j}^N \langle\phi_{\pi'(n)}|\phi_{\pi(n)}\rangle}{\displaystyle \sum_{\tilde{\pi},\tilde{\pi}'} s_{\pm}^{\tilde{\pi}} \, s_{\pm}^{\tilde{\pi}'} \prod_{n = 1}^N\langle\phi_{\tilde{\pi}'(n)}|\phi_{\tilde{\pi}(n)}\rangle}.
\end{equation}
Inserting Eq.~(\ref{rhoijpm}) into (\ref{cijex}) eventually leads to the motional correlation function
\begin{equation}\label{cijexpm}
\begin{aligned}
\mathcal{C}_{ij}^\mathrm{\,ex, \pm}(\mathbf{k}) = \sum_{\pi, \pi'} \lambda_{ij}^{\pi\pi',\pm} \, I_{\pi(i)\pi'(i)}(\mathbf{k}) \, I_{\pi(j)\pi'(j)}(- \mathbf{k}).
\end{aligned}
\end{equation}
An important result is that $\mathcal{C}_{ij}^\mathrm{\,ex, \pm}$, and thus $\gamma_{ij}$ and $\Delta_{ij}$ [see Eqs.~(\ref{gijgen}) and (\ref{deltaijconv2})], do not depend on $i$ and $j$ for indistinguishable atoms, regardless of the average distance between atoms. This fact has far-reaching consequences on how the atomic system radiates, especially in the regime in which cooperative processes are enhanced, when atoms are located within a volume smaller than $\lambda_0^3$. For distinguishable atoms, cooperative emission (superradiance or subradiance) is strongly altered by the dephasing of the atomic dipoles as a consequence of dipole-dipole interactions, whereas for indistinguishable atoms no such dephasing occurs.
The reduced density matrix~(\ref{rhoijpm}) leads to decay rates and dipole-dipole shifts in terms of the following \emph{exchange integrals}
\begin{multline}
\gamma_{ij} = \sum_{\pi, \pi'} \lambda_{ij}^{\pi\pi',\pm} \iint_{\mathbb{R}^3\times \mathbb{R}^3} \gamma^{\mathrm{cl}}(\mathbf{r}-\mathbf{r}') \,\phi_{\pi(i)}(\mathbf{r}) \phi_{\pi'(i)}^*(\mathbf{r}) \\\label{indisEXCHANGE1}
\times \phi_{\pi(j)}(\mathbf{r}')\phi_{\pi'(j)}^*(\mathbf{r}')
\,d\mathbf{r}\, d\mathbf{r}',
\end{multline}
\begin{multline}
\Delta_{ij} = \sum_{\pi, \pi'} \lambda_{ij}^{\pi\pi',\pm} \iint_{\mathbb{R}^3\times \mathbb{R}^3} \Delta^{\mathrm{cl}}(\mathbf{r}-\mathbf{r}') \,\phi_{\pi(i)}(\mathbf{r}) \phi_{\pi'(i)}^*(\mathbf{r}) \\ \label{indisEXCHANGE2}
\times \phi_{\pi(j)}(\mathbf{r}')\phi_{\pi'(j)}^*(\mathbf{r}')
\,d\mathbf{r}\, d\mathbf{r}'.
\end{multline}
A particularly relevant situation in the context of cold-atom physics is when all atoms occupy the same motional state $\ket{\phi_0}$ and thus form a Bose-Einstein condensate, i.e.\ when the global motional state $\rho_A^\mathrm{ex}=(\ket{\phi_0}\bra{\phi_0})^{\otimes N}$ is symmetric and separable. The corresponding correlation function is given by Eq.~(\ref{cijexdis}) for $L = 1$ and can be simplified into
\begin{equation}
\mathcal{C}_{ij}^{\mathrm{ex},+}(\mathbf{k}) = I_{00}(\mathbf{k}) \, I_{00}(-\mathbf{k})=\big|\mathcal{F}_{\mathbf{k}}\left[ \,|\phi_0(\mathbf{r})|^2\, \right]\!\big|^2.
\end{equation}
The decay rates (\ref{indisEXCHANGE1}) and dipole-dipole shifts (\ref{indisEXCHANGE2}) read in this case
\begin{equation}\label{beEXCHANGE1}
\gamma_{ij} = \iint_{\mathbb{R}^3\times \mathbb{R}^3} \gamma^{\mathrm{cl}}(\mathbf{r}-\mathbf{r}') \:|\phi_0(\mathbf{r})|^2 \: |\phi_0(\mathbf{r}')|^2 \,d\mathbf{r}\, d\mathbf{r}',
\end{equation}
\begin{equation}\label{beEXCHANGE2}
\Delta_{ij} = \iint_{\mathbb{R}^3\times \mathbb{R}^3} \Delta^{\mathrm{cl}}(\mathbf{r}-\mathbf{r}') \:|\phi_0(\mathbf{r})|^2 \: |\phi_0(\mathbf{r}')|^2 \,d\mathbf{r}\, d\mathbf{r}'.
\end{equation}
\section{Decay rates and dipole-dipole shifts for particular motional states}
In this section, we determine \emph{explicit} expressions for the decay rates $\gamma_{ij}$ and the dipole-dipole shifts $\Delta_{ij}$ for different motional states of particular interest. We also discuss the effects of quantum statistics by considering both cases of distinguishable and indistinguishable atoms. For calculation purposes, it is convenient to work in the coordinate system $Ox'y'z'$ as depicted in Fig.~\ref{coordinate_system} with the $z'$-axis along the vector $\mathbf{r}'_{ij}\equiv \mathbf{r}_{ij}$ connecting the atoms $i$ and $j$, so that $\mathbf{k}'_0\boldsymbol{\cdot}\mathbf{r}'_{ij} = k_0 r_{ij} \cos \theta'$. This coordinate system results from a clockwise rotation of $Oxyz$ by an angle $\alpha_{ij}$ around the $y$ axis.
\begin{figure}
\begin{center}
\includegraphics[width=6.5cm]{fig3.pdf}
\end{center}
\caption{(Color online) Coordinate system $Oxyz$ where the $z$-direction corresponds to the quantization axis. The dipole moment for a $\pi$ transition is $\mathbf{d}_\pi=d_0\,\mathbf{e}_z$ with $d_0\in\mathbb{R}$, and for a $\sigma^{\pm}$ transition is $\mathbf{d}_{\sigma^{\pm}}=d_\mp\,\boldsymbol{\varepsilon}_\mp$ with $d_\mp\in\mathbb{C}$. The vector $\mathbf{r}_{ij}$ connecting the atoms $i$ and $j$ lies in the plane $y = 0$ and forms an angle $\alpha_{ij}$ with the $z$-axis. The primed coordinate system $Ox'y'z'$, equipped with spherical coordinates $(k',\theta',\varphi')$, is chosen so that $\mathbf{r}'_{ij}\equiv \mathbf{r}_{ij}$ lies along the $z'$ axis in order to facilitate the calculation of the correlation functions.} \label{coordinate_system}
\end{figure}
\subsection{Gaussian states}
Gaussian wave packets are of particular importance because they describe a broad class of states, such as the ground state of atoms trapped in a harmonic potential, realized e.g.\ in a non-interacting Bose-Einstein condensate at zero temperature, but also non-classical states such as squeezed vibrational states of ions in a harmonic trap \cite{Hep95}. We consider $N$ single-atom Gaussian wave packets
\begin{equation}\label{Gaussianstate}
\phi_j(\mathbf{r}') \equiv \langle \mathbf{r}' \ket{\phi_j} = \prod_{u = x',y',z'} \sqrt{\frac{1}{\sqrt{2\pi} \sigma_{u}}}\,e^{-(u-u_j)^2/4\sigma_{u}^2}
\end{equation}
for $j = 1, \dotsc, N$. The wave packets are centered around arbitrary positions $\mathbf{r}'_j = (x'_j, y'_j, z'_j)$ with widths $\sigma_{x'}$, $\sigma_{y'}$ and $\sigma_{z'}$ corresponding to the standard deviations along the three spatial directions, taken equal for all atoms. These states can be seen as the ground states of 3D-harmonically trapped atoms, with $\mathbf{r}'_j$ the position of the center of the trap, $\Omega_u=\hbar/(2M\sigma_{u}^2)$ its frequency along the $u$-direction ($u = x',y',z'$) and $M$ the atomic mass. Plugging Eq.~(\ref{Gaussianstate}) into (\ref{overlap}), we get for the overlap integral between any two Gaussian states $\ket{\phi_i}$ and $\ket{\phi_j}$
\begin{equation}
\begin{aligned}\label{overlapGS}
I_{ij}(\mathbf{k}) &= \prod_{u = x',y',z'} e^{-k_u\left[k_u \sigma_{u}^2 - i (u_i + u_j)\right]/2}\,e^{-(u_i-u_j)^2/8\sigma_{u}^2}.
\end{aligned}
\end{equation}
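The closed form (\ref{overlapGS}) can be checked against a direct evaluation of the overlap integral (\ref{overlap}); a one-dimensional sketch (one Cartesian factor of the product) is:

```python
import numpy as np
from scipy.integrate import quad

def overlap_numeric(k, zi, zj, sig):
    """Direct evaluation of the 1D overlap integral of Eq. (overlap) for two
    Gaussian packets of width sig centred at zi and zj."""
    phi = lambda z, z0: (2 * np.pi * sig**2)**(-0.25) * np.exp(-(z - z0)**2 / (4 * sig**2))
    re, _ = quad(lambda z: np.cos(k * z) * phi(z, zi) * phi(z, zj), -np.inf, np.inf)
    im, _ = quad(lambda z: np.sin(k * z) * phi(z, zi) * phi(z, zj), -np.inf, np.inf)
    return re + 1j * im

def overlap_closed(k, zi, zj, sig):
    """The corresponding 1D factor of the closed form, Eq. (overlapGS)."""
    return (np.exp(-k * (k * sig**2 - 1j * (zi + zj)) / 2)
            * np.exp(-(zi - zj)**2 / (8 * sig**2)))
```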
For simplicity, we now consider $\sigma_{x'} \to 0$ and $\sigma_{y'} \to 0$, so that the atomic motion is only quantized along the $z'$-direction, hence along $\mathbf{r}'_{ij}=(0,0,z'_{ij})$. This is the most interesting case of quantization along only one direction as it allows for the spatial overlap of atomic wave packets. This choice of coordinate system can always be made for $N=2$ atoms, and the results that we obtain can be transposed to more than two atoms as long as the atoms are aligned along the $z'$-direction. From now on, we denote by $\ell_0\equiv \sigma_{z'}$ the standard deviation of the Gaussian wave packet along this direction.
\subsubsection{Distinguishable atoms}
For two distinguishable atoms $i$ and $j$ in the states $\ket{\phi_{i}}$ and $\ket{\phi_{j}}$ respectively, Eq.~(\ref{overlapGS}) yields for the correlation function (\ref{cijexdis})
\begin{align}
\mathcal{C}_{ij}^{\mathrm{ex, sep}}(\mathbf{k}') &= I_{ii}(\mathbf{k}')\,I_{jj}(-\mathbf{k}') \nonumber\\
&= e^{-(k' \ell_0 \cos\theta')^2} e^{i k' r_{ij} \cos\theta'}. \label{corkmGSdis}
\end{align}
In the limit of tight confinement, $\ell_0 \to 0$, atoms are well localized and the correlation function reduces to its classical expression $e^{i \mathbf{k}' \boldsymbol{\cdot} \mathbf{r}'_{ij}}$. For any other value of $\ell_0$, the decay rates (\ref{gijgen}) resulting from the correlation function (\ref{corkmGSdis}) are obtained from the angular integral
\begin{equation}
\begin{aligned}\label{gijGSintegral}
\gamma_{ij}^\mathrm{sep} &= \frac{3 \gamma_0}{8\pi} \int \sum_{\boldsymbol{\varepsilon}'} |\boldsymbol{\varepsilon}'_{\mathbf{k}'_0}\boldsymbol{\cdot}\mathbf{e}'_{\mathbf{d}'}|^2 e^{-(k_0 \ell_0 \cos\theta')^2} e^{i k_0 r_{ij} \cos\theta'} d\Omega'
\end{aligned}
\end{equation}
with $d\Omega' = \sin\theta'd\theta'd\varphi'$ and where the sum over the polarizations yields a factor $\sum_{\boldsymbol{\varepsilon}'} |\boldsymbol{\varepsilon}'_{\mathbf{k}'_0}\boldsymbol{\cdot}\mathbf{e}'_{\mathbf{d}'}|^2 = 1 - \mu_{ij}$
with
\begin{equation}\label{polapi}
\mu_{ij} = \left(\cos\varphi' \sin \alpha_{ij} \sin \theta' + \cos \alpha_{ij} \cos \theta'\right)^2
\end{equation}
for a $\pi$ transition
and
\begin{equation}\label{polapm}
\begin{aligned}
\mu_{ij} = \frac{\left(\cos \theta' \sin \alpha_{ij} - \cos \alpha_{ij} \cos\varphi' \sin\theta' \right)^2 - \sin^2\theta' \sin^2\varphi'}{2}
\end{aligned}
\end{equation}
for a $\sigma^{\pm}$ transition. The integral can be evaluated analytically and provides us with the closed formula
\begin{widetext}
\begin{equation}
\label{gijgsdis}
\gamma^{\mathrm{sep}}_{ij}(\mathbf{r}_{ij},\ell_0) = \frac{3\gamma_0}{16 \eta_0^5} \bigg(\frac{\sqrt{\pi}}{6} e^{-\frac{\xi_{ij}^2}{4\eta_0^2}}\big[ 16 \eta_0^4 - q_{ij}\left(4 \eta_0^4 + 3 \xi_{ij}^2 - 6 \eta_0^2 \right) \big]\mathrm{Re}\left\{ \mathrm{erf}\left( \eta_0 + \frac{i \xi_{ij}}{2 \eta_0} \right) \right\} - q_{ij} \eta_0 e^{-\eta_0^2} \left[2 \eta_0^2 \cos \xi_{ij} - \xi_{ij}\sin \xi_{ij} \right] \bigg)
\end{equation}
\end{widetext}
where $\mathrm{erf}(z)=\frac{2}{\sqrt{\pi }}\int _0^ze^{-t^2}d t $ is the error function, $q_{ij}$ is the angular factor given by Eq.~(\ref{pqpi}) for a $\pi$ transition and by Eq.~(\ref{pqsigma}) for a $\sigma^{\pm}$ transition, and
\begin{equation}
\begin{aligned}\label{LDD}
&\xi_{ij} = k_0 r_{ij} = 2\pi \frac{r_{ij}}{\lambda_0}, \hspace{1cm} \eta_0 = k_0 \ell_0 = 2\pi \frac{\ell_0}{\lambda_0} .
\end{aligned}
\end{equation}
The parameter $\xi_{ij}$ quantifies the significance of atomic cooperative processes. The Lamb-Dicke parameter $\eta_0$ is a measure of the recoil experienced by an atom after emission (or absorption) of a photon of wavelength $\lambda_0$. Finally, the ratio $\eta_0/\xi_{ij}=\ell_0/r_{ij}$ is a quantifier of the overlap between atomic wave packets (see Fig.~\ref{fig:gaussians}). Equation~(\ref{gijgsdis}) is remarkable in that it is valid for any values of both $\xi_{ij}$ and $\eta_0$. It provides an accurate description of the combined effects of indeterminacy in atomic positions and recoil on the dissipative dynamics of atoms for any possible realization of the three characteristic lengths $r_{ij}$, $\ell_0$ and $\lambda_0$. In particular, it allows for a full description of recoil effects beyond the Lamb-Dicke regime, i.e.\ when $\eta_0 \gtrsim 1$. Table~\ref{table} summarizes several regimes that our theory covers as defined through the comparison of the adimensional parameters $\xi_{ij}$ and $\eta_0$.
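The closed formula (\ref{gijgsdis}) can be cross-checked numerically against the angular integral (\ref{gijGSintegral}). The sketch below does so for a $\pi$ transition with $\alpha_{ij}=\pi/2$, for which Eq.~(\ref{polapi}) gives $\mu_{ij}=\cos^2\varphi'\sin^2\theta'$ and the $\varphi'$ integration yields a factor $\pi(1+\cos^2\theta')$; the value $q_{ij}=1$ used for this geometry is an assumption, since Eq.~(\ref{pqpi}) is not reproduced here.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import erf  # scipy's erf accepts complex arguments

def gamma_sep_closed(xi, eta, q, gamma0=1.0):
    """Closed formula, Eq. (gijgsdis)."""
    R = np.real(erf(eta + 0.5j * xi / eta))
    term1 = (np.sqrt(np.pi) / 6) * np.exp(-xi**2 / (4 * eta**2)) \
            * (16 * eta**4 - q * (4 * eta**4 + 3 * xi**2 - 6 * eta**2)) * R
    term2 = q * eta * np.exp(-eta**2) * (2 * eta**2 * np.cos(xi) - xi * np.sin(xi))
    return 3 * gamma0 / (16 * eta**5) * (term1 - term2)

def gamma_sep_quad(xi, eta, gamma0=1.0):
    """Direct angular integral, Eq. (gijGSintegral), for a pi transition with
    alpha_ij = pi/2: after the phi' integration only a 1D integral over
    u = cos(theta') remains."""
    f = lambda u: (1 + u**2) * np.exp(-(eta * u)**2) * np.cos(xi * u)
    val, _ = quad(f, -1.0, 1.0)
    return 3 * gamma0 / 8 * val
```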
\begin{figure}
\begin{center}
\includegraphics[width=0.4\textwidth]{fig4.pdf}
\end{center}
\caption{(Color online) Schematic view of two atoms separated by a distance $r_{ij}$ and whose external states are described by Gaussian wave packets of width $\ell_0$. The dipole moments and dipole radiation patterns are illustrated in red and correspond to a $\pi$ transition with $\alpha_{ij}=\pi/2$. } \label{fig:gaussians}
\end{figure}
\begin{table}
\renewcommand{\arraystretch}{1.6}
\begin{center}
\begin{tabular}{|c|c|}
\hline
Regime & Relevant Phenomena \\
\hline
\hline
\, $1 \lesssim \eta_0 \ll \xi_{ij}$\, & recoil effects\\
\hline
\,$\eta_0 \lesssim \xi_{ij} \ll 1$\, & cooperative effects \\
\hline
\,$1 \ll \xi_{ij} \lesssim \eta_0$ \, & indistinguishability, recoil effects\\
\hline
\multirow{2}*{ \,$\xi_{ij} \ll 1 \lesssim \eta_0$\, } & cooperative effects, recoil effects,\\[-4pt]
& indistinguishability\ \, \\
\hline
\,$\xi_{ij} \lesssim \eta_0 \ll 1$\, & \, cooperative effects, indistinguishability\, \\
\hline
\end{tabular}
\end{center}
\caption{Regimes and relevant phenomena related to the different ranges of the adimensional parameters $\xi_{ij}=k_0 r_{ij}$ and $\eta_0=k_0 \ell_0$. Recoil effects result from photon emission processes and are significant when $\eta_0 \gg 1$. Cooperative effects reflect the fact that the atoms do not behave as independent emitters when $\xi_{ij} \ll 1$ (provided $\eta_0$ is not too large, see Fig.~\ref{fig:ggm}). Indistinguishability becomes significant as soon as the wave packets overlap (i.e.\ when $\xi_{ij} \lesssim \eta_0$).
}\label{table}
\end{table}
Let us consider two important limiting cases: I.\ when the distance between any two atoms is much larger than $\lambda_0$ ($\xi_{ij} \gg 1$: no cooperative effects in the case of classical positions), and II.\ when the distance between any two atoms is much smaller than $\lambda_0$ ($\xi_{ij} \ll 1$: superradiant regime). In regime I, Eq.~(\ref{gijgsdis}) reduces to
\begin{equation}
\gamma_{ij}^\mathrm{sep} \stackrel[\xi_{ij} \gg 1]{}{\simeq} \,\frac{3\gamma_0 }{2}\, p_{ij} \,\frac{\sin\xi_{ij}}{\xi_{ij}}\,e^{-\eta_0^2}\label{gijfar}
\end{equation}
with $p_{ij}$ the angular factor given by Eq.~(\ref{pqpi}) for a $\pi$ transition and by Eq.~(\ref{pqsigma}) for a $\sigma^\pm$ transition.
This result differs by a factor $e^{-\eta_0^2}$ from the classical result that is obtained for atoms at fixed positions (i.e.\ the radiative term of Eq.~(\ref{gammaijcl})). This factor arises from the quantization of the atomic motion and can be interpreted as a reduction of phase coherence in the cooperative emission due to the uncertainty in the atomic positions. It is reminiscent of the Debye-Waller factor $\exp\left(-k_B T k_0^2/3M\Omega^2\right)$ typical of neutron scattering, where the position of the atoms is smeared out due to their thermal motion~\cite{Deb13} (here $T$ is the temperature, $k_B$ the Boltzmann constant, $M$ the atomic mass, $\Omega$ the atomic oscillation frequency and $k_0$ the neutron wavenumber).
In the opposite regime ($\xi_{ij}\ll 1$) and for any Lamb-Dicke parameter $\eta_0$, Eq.~(\ref{gijgsdis}) reduces to
\begin{multline}
\label{gijgscsran}
\gamma^{\mathrm{sep}}_{ij} \stackrel[\xi_{ij} \ll 1]{}{\simeq} \gamma_0 \left[ \sqrt{\pi}\, \mathrm{erf}\left(\eta_0\right) \frac{ (8-2q_{ij}) \eta_0^2 + 3 q_{ij} }{16 \eta_0^3}\right.\\
\left. - \frac{3q_{ij}\, e^{-\eta_0^2} }{8 \eta_0^2}\right].
\end{multline}
In particular, in the Lamb-Dicke regime ($\eta_0\ll 1$), the decay rates decrease with $\eta_0$ as
\begin{equation}
\gamma_{ij}^{\mathrm{sep}}\stackrel[\eta_0 \ll 1]{}{\simeq} \gamma_0 \left( 1 - \frac{5+q_{ij}}{15} \eta_0^2 \right),
\end{equation}
while for large values of $\eta_0$, we have
\begin{equation}
\gamma_{ij}^{\mathrm{sep}}\stackrel[\eta_0 \gg 1]{}{\simeq} \gamma_0 \,\frac{\sqrt{\pi}(4-q_{ij})}{8\eta_0}.
\end{equation}
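The two limits above follow directly from Eq.~(\ref{gijgscsran}) and can be verified numerically; a minimal sketch is:

```python
import numpy as np
from scipy.special import erf

def gamma_sep_smallxi(eta, q, gamma0=1.0):
    """Decay rate in the superradiant regime xi_ij << 1, Eq. (gijgscsran)."""
    return gamma0 * (np.sqrt(np.pi) * erf(eta)
                     * ((8 - 2 * q) * eta**2 + 3 * q) / (16 * eta**3)
                     - 3 * q * np.exp(-eta**2) / (8 * eta**2))

# For eta << 1 this tends to gamma0*(1 - (5+q)/15 * eta^2), and for
# eta >> 1 it decays as gamma0*sqrt(pi)*(4-q)/(8*eta), as stated in the text.
```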
We now turn to the calculation of the dipole-dipole shifts. Equation (\ref{FinvCijex}) yields for the inverse Fourier transform of (\ref{corkmGSdis})
\begin{equation}\label{invFouTra}
\mathcal{F}^{-1}_{\mathbf{r}'} \left[\mathcal{C}_{ij}^\mathrm{ex,sep} \left(\mathbf{k}'\right)\right] = \frac{e^{- (z' + z'_{ij})^2/4\ell^2_0}}{2\sqrt{\pi}\,\ell_0} \delta(x') \delta(y')
\end{equation}
so that Eq.~(\ref{deltaijconv2}) yields
\begin{equation}\label{deltaij_gs}
\Delta_{ij}^\mathrm{sep}(\mathbf{r}_{ij},\ell_0) = \int_{-\infty}^{+\infty} e^{- (z' + z'_{ij})^2/4\ell_0^2} \: \Delta^\mathrm{cl}\left(0,0, z'\right) \,\frac{dz'}{2\sqrt{\pi}\,\ell_0}
\end{equation}
with $\Delta^\mathrm{cl}$ given by Eq.~(\ref{deltaijcl}). Equation~(\ref{deltaij_gs}) depends parametrically on the vector $\mathbf{r}'_{ij}=(0,0,z'_{ij})$ connecting the centers of the two Gaussian wave packets. The integral diverges unless a cutoff $\epsilon$ is introduced in order to exclude small values of $z'$ around $z'=0$. Therefore, we introduce the regularized dipole-dipole shifts
\begin{multline}\label{deltaijdisGS}
\Delta_{ij}^\mathrm{sep}(\mathbf{r}_{ij},\ell_0,\epsilon) = \left[\int_{-\infty}^{-\epsilon} + \int_{\epsilon}^{+\infty}\right] e^{- (z' +
z'_{ij})^2/4\ell_0^2} \, \\
\times \Delta^\mathrm{cl}\left(0,0, z'\right) \, \frac{dz'}{2\sqrt{\pi}\,\ell_0}.
\end{multline}
In Fig.~\ref{fig:dij_gs_eps}, we show the result of a numerical integration of (\ref{deltaijdisGS}) as a function of $\xi_{ij}$ for different cutoffs $\epsilon$. For $\xi_{ij}\gtrsim 10$, all curves are seen to collapse to a single curve displaying oscillations similar to those of the classical dipole-dipole shift but with a reduced amplitude (depending on the Lamb-Dicke parameter). The chosen cutoffs have no influence in this parameter range. For $\xi_{ij}\lesssim 10$, the curves corresponding to different cutoffs start to differ. Those with smaller values of $\epsilon$ diverge more rapidly as $\xi_{ij}$ decreases. However, the cutoff cannot be arbitrarily small since atoms are not point-like particles but have a finite spatial extent of the order of the Bohr radius $a_0$. For frequencies in the optical domain, this leads to the condition $k_0 \epsilon > k_0\, a_0 \sim 10^{-3}$.
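A numerical sketch of the regularized integral (\ref{deltaijdisGS}) (in units where $k_0=1$) illustrates the cutoff sensitivity described above; the angular factors $p=q=1$ used below correspond, under our assumed conventions, to a $\pi$ transition with $\alpha_{ij}=\pi/2$, since Eqs.~(\ref{pqpi}) and (\ref{pqsigma}) are not reproduced here.

```python
import numpy as np
from scipy.integrate import quad

def delta_cl(xi, p, q, gamma0=1.0):
    """Classical dipole-dipole shift, Eq. (deltaijcl)."""
    return 0.75 * gamma0 * (-p * np.cos(xi) / xi
                            + q * (np.sin(xi) / xi**2 + np.cos(xi) / xi**3))

def delta_sep_reg(xi_ij, eta, eps, p=1.0, q=1.0, gamma0=1.0):
    """Regularized shift, Eq. (deltaijdisGS), with k0 = 1: Gaussian-weighted
    average of Delta^cl over the pair axis, excluding |z'| < eps."""
    w = lambda z: np.exp(-(z + xi_ij)**2 / (4 * eta**2)) / (2 * np.sqrt(np.pi) * eta)
    f = lambda z: w(z) * delta_cl(abs(z), p, q, gamma0)
    cut = xi_ij + 12 * eta  # beyond this the Gaussian weight is negligible
    pieces = [(-cut, -1.0), (-1.0, -eps), (eps, 1.0), (1.0, cut)]
    return sum(quad(f, a, b, limit=400)[0] for a, b in pieces)
```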
\begin{figure}
\begin{center}
\includegraphics[width=0.45\textwidth]{fig5.pdf}
\caption{(Color online) Regularized dipole-dipole shifts $\Delta_{ij}^\mathrm{sep}$ as a function of $\xi_{ij}=k_0r_{ij}$ for atoms in Gaussian states (\ref{Gaussianstate}) with a Lamb-Dicke parameter $\eta_0 = 1$ and different cutoffs (solid lines from left to right): $k_0\epsilon = 10^{-1}$ (blue curve), $k_0\epsilon = 10^{-2}$ (green curve), $k_0\epsilon = 10^{-3}$ (orange curve), $k_0\epsilon = 10^{-4}$ (dark red curve). The red dashed curve corresponds to the classical dipole-dipole shift $\Delta^\mathrm{cl}$ given by Eq.~(\ref{deltaijcl}). The plots shown are for a $\pi$ transition with $\alpha_{ij}=\pi/2$.}
\label{fig:dij_gs_eps}
\end{center}
\end{figure}
\subsubsection{Indistinguishable atoms}
The motional correlation function for indistinguishable atoms in single-atom Gaussian states [see Eq.~(\ref{Gaussianstate})] is obtained by inserting (\ref{overlapGS}) into (\ref{cijexpm}).
The decay rates (\ref{gijgen}) and dipole-dipole shifts (\ref{deltaijconv2}) for indistinguishable atoms are thus given by
\begin{align}
\label{gijGGpm}
\gamma_{ij}^{\pm}(\{\mathbf{r}_{ij}\},\ell_0) &= \sum_{\pi,\pi'}w^{\pi\pi',\pm} \:\gamma^\mathrm{sep}_{ij}\big(\bar{\mathbf{r}}_{ij}^{\pi\pi'},\ell_0\big),\\
\label{deltaijpmGS}
\Delta_{ij}^{\mathrm{\pm}}(\{\mathbf{r}_{ij}\},\ell_0) &= \sum_{\pi, \pi'} \, w^{\pi\pi',\pm}\,\Delta_{ij}^\mathrm{sep}\big(\bar{\mathbf{r}}_{ij}^{\pi\pi'},\ell_0\big),
\end{align}
with
\begin{equation}\label{npmGSz}
w^{\pi\pi',\pm} = \frac{\displaystyle s_{\pm}^{\pi}\,s_{\pm}^{\pi'} \prod_{n = 1}^N e^{-\frac{z'^2_{\pi(n)\pi'(n)}}{8\ell_0^2}}}{\displaystyle
\sum_{\tilde{\pi},\tilde{\pi}'}s_{\pm}^{\tilde{\pi}}\,s_{\pm}^{\tilde{\pi}'} \prod_{n = 1}^N e^{-\frac{z'^2_{\tilde{\pi}(n)\tilde{\pi}'(n)}}{8\ell_0^2}}},
\end{equation}
where $\gamma^\mathrm{sep}_{ij}$ and $\Delta_{ij}^\mathrm{sep}$ are given by Eqs.~(\ref{gijgsdis}) and (\ref{deltaijdisGS}) respectively, and
\begin{equation}
\bar{\mathbf{r}}_{ij}^{\pi\pi'}=\frac{1}{2}\left(\mathbf{r}_{\pi(i)\pi(j)} + \mathbf{r}_{\pi'(i)\pi'(j)}\right).
\end{equation}
Equations (\ref{gijGGpm}) and (\ref{deltaijpmGS}) now depend on all $\mathbf{r}_{ij}$, but are equal for all $i$ and $j$ as a consequence of indistinguishability, as discussed in the previous section.
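The exchange weights $w^{\pi\pi',\pm}$ of Eq.~(\ref{npmGSz}) can be enumerated explicitly for small $N$; a minimal sketch for atoms at given axial positions is:

```python
import numpy as np
from itertools import permutations

def sign(perm):
    inv = sum(1 for a in range(len(perm)) for b in range(a + 1, len(perm))
              if perm[a] > perm[b])
    return -1 if inv % 2 else 1

def weights(z, ell0, sym):
    """Weights w^{pi pi', +/-} of Eq. (npmGSz) for atoms at axial positions
    z[0..N-1]; these enter Eqs. (gijGGpm) and (deltaijpmGS)."""
    N = len(z)
    s = (lambda p: 1) if sym == '+' else sign
    terms = {}
    for p in permutations(range(N)):
        for pp in permutations(range(N)):
            g = np.prod([np.exp(-(z[p[n]] - z[pp[n]])**2 / (8 * ell0**2))
                         for n in range(N)])
            terms[(p, pp)] = s(p) * s(pp) * g
    norm = sum(terms.values())  # denominator of Eq. (npmGSz)
    return {key: val / norm for key, val in terms.items()}
```

By construction the weights sum to one; for $N=2$ the off-diagonal ($\pi\neq\pi'$) weights carry the factor $e^{-z_{12}'^2/4\ell_0^2}$ measuring the wave-packet overlap, with a negative sign in the fermionic case.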
Figure~\ref{fig:ggm} displays the decay rates and regularized dipole-dipole shifts as a function of $\xi_{ij}$ and $\eta_0$, both for $(a)$ distinguishable and $(b),(c)$ indistinguishable atoms (corresponding to symmetric and antisymmetric wave functions respectively). The decay rates $\gamma_{ij}^\mathrm{sep}$ and $\gamma_{ij}^\pm$ are those given in Eqs.~(\ref{gijgsdis}) and (\ref{gijGGpm}), while the dipole-dipole shifts $\Delta_{ij}^\mathrm{sep}$ and $\Delta_{ij}^\pm$ are those given in Eqs.~(\ref{deltaijdisGS}) and (\ref{deltaijpmGS}). In the Lamb-Dicke regime ($\eta_0 \ll 1$), the quantum fluctuations of the atomic positions are small and the decay rates and dipole-dipole shifts only slightly depart from their classical values, Eqs.~(\ref{gammaijcl}) and (\ref{deltaijcl}). Beyond the Lamb-Dicke regime ($\eta_0 \gtrsim 1$), the decay rates and dipole-dipole shifts still display oscillations as a function of $\xi_{ij}$ but with a reduced amplitude. This reduction in amplitude is more and more pronounced as $\eta_0$ increases. Physically, this can be understood as the result of an average over the atomic positions at the scale of the atomic wave packets of the corresponding oscillating classical quantities (see Eqs.~(\ref{gammaijconv}) and (\ref{deltaijconv2})). For small interatomic distances in comparison to the extension of the wave packets ($\xi_{ij} \lesssim \eta_0$), the symmetry of the wave function has major effects on how fast the amplitude decreases with $\eta_0$. It is seen to decrease much faster for the antisymmetric wave function than for the symmetric one (see $(b)$ and $(c)$ in the middle panel). Symmetric and separable wave functions yield very close results because their two-atom reduced density matrices $\rho_{ij}^+$ and $\rho_{ij}^{\mathrm{sep}}$ are very close.
In particular, when $\xi_{ij} \to 0$, we have $\rho_{ij}^+ \to \rho_{ij}^\mathrm{sep}$ and $\gamma_{ij}^+\to\gamma_{ij}^\mathrm{sep}$ with $\gamma_{ij}^\mathrm{sep}$ given by Eq.~(\ref{gijgscsran}). On the contrary, the two-atom reduced density matrix $\rho_{ij}^-$ differs significantly because of the Pauli exclusion principle. When the overlap between atomic wave packets becomes negligible ($\xi_{ij} \gg \eta_0$), the decay rates and the dipole-dipole shifts are approximately equal for the symmetric, antisymmetric and separable wave functions, showing that atoms can be treated as distinguishable particles in this regime.
\begin{figure*}
\begin{center}
\includegraphics[width=\textwidth]{fig6.png}
\end{center}
\caption{(Color online) Off-diagonal decay rates (top) and regularized dipole-dipole shifts (bottom) for the configuration illustrated in Fig.~\ref{fig:gaussians} as a function of $\xi_{ij}=k_0 r_{ij}$ and $\eta_0=k_0\ell_0$ for $(a)$ distinguishable atoms [Eq.~(\ref{gijgsdis}) and (\ref{deltaijdisGS})] and $(b), (c)$ indistinguishable atoms [Eq.~(\ref{gijGGpm}) and (\ref{deltaijpmGS}) for $N=2$]. The plots shown are for a $\pi$ transition with $\alpha_{ij}=\pi/2$ and a cutoff $k_0\epsilon=0.01$. The black solid curves at the front of each plot (for $\eta_0=0$) are the classical decay rate (\ref{gammaijcl}) and dipole-dipole shift (\ref{deltaijcl}). The black solid curves on the left of each plot of the decay rates (for $\xi_{ij}=0$) correspond, in the cases $(a)$ and $(b)$, to Eq.~(\ref{gijgscsran}).} \label{fig:ggm}
\end{figure*}
\subsection{Harmonic oscillator eigenstates}
We now consider as single-atom motional states the vibrational states of harmonically trapped atoms centered around the positions $\mathbf{r}'_j$ ($j = 1, \dotsc, N$), hereafter referred to as Fock states. We denote them by $\ket{\phi_{(n,\mathbf{r}'_j)}}$ where $n = 0, 1, \dotsc$ stands for the number of vibrational excitations. Gaussian states are a particular case ($n = 0$) of this more general class of states. As previously, atoms are taken to be aligned along the $z'$-direction and their motion is quantized only along this direction.
In the position representation, the single-atom motional Fock states $|\phi_{(n,\mathbf{r}'_j)} \rangle $ with typical size $\ell_0$ along $z'$ are given by
\begin{equation}\label{fockstater}
\phi_{(n,\mathbf{r}'_j)}(\mathbf{r}') = \frac{e^{-\left(z' - z'_j\right)^2/4\ell_0^2}}{\left(2^n n!\right)^\frac{1}{2}\left(2\pi \ell^2_0\right)^\frac{1}{4}}\,H_n\left(\frac{z'-z'_j}{\sqrt{2}\ell_0}\right) \delta\left(x'-x'_j\right)\delta\left(y'-y'_j\right)
\end{equation}
where $H_n(z')$ is the Hermite polynomial of order $n$. The overlap integral (\ref{overlap}) between two Fock states at the same position $\mathbf{r}'$ reads~\cite{Win79}
\begin{multline}\label{overlapFS}
I_{(n_i,\mathbf{r}') (n_j,\mathbf{r}')}(\mathbf{k}') = e^{i \mathbf{k}' \boldsymbol{\cdot} \mathbf{r}'}\, e^{-k'^2_{z'} \ell_0^2/2} \\
\times \sqrt{\frac{n_{<}!}{(n_{<}+\Delta n)!}}\, \big(i k'_{z'} \ell_0\big)^{\Delta n}\,L^{\Delta n}_{n_{<}}\big(k_{z'}'^2 \ell_0^2\big)
\end{multline}
where $L^{\alpha}_{n}$ are the generalized Laguerre polynomials of degree $n$, $\Delta n=|n_i-n_j|$ and $n_{<}=\mathrm{min}\{n_i,n_j\}$.
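The overlap formula (\ref{overlapFS}) can be checked against a direct integration of Eq.~(\ref{overlap}) with harmonic-oscillator eigenfunctions; a sketch for wave packets centred at the origin is:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import eval_hermite, eval_genlaguerre, factorial

def phi_fock(n, z, ell0):
    """1D harmonic-oscillator eigenfunction of Eq. (fockstater), centred at 0,
    with ground-state width ell0."""
    norm = (2.0**n * factorial(n))**(-0.5) * (2 * np.pi * ell0**2)**(-0.25)
    return norm * eval_hermite(n, z / (np.sqrt(2) * ell0)) * np.exp(-z**2 / (4 * ell0**2))

def overlap_fock_closed(ni, nj, k, ell0):
    """Closed form, Eq. (overlapFS), at r' = 0."""
    dn, nlt = abs(ni - nj), min(ni, nj)
    return (np.exp(-k**2 * ell0**2 / 2)
            * np.sqrt(factorial(nlt) / factorial(nlt + dn))
            * (1j * k * ell0)**dn * eval_genlaguerre(nlt, dn, k**2 * ell0**2))

def overlap_fock_numeric(ni, nj, k, ell0):
    f = lambda z: phi_fock(ni, z, ell0) * phi_fock(nj, z, ell0)
    re, _ = quad(lambda z: np.cos(k * z) * f(z), -np.inf, np.inf)
    im, _ = quad(lambda z: np.sin(k * z) * f(z), -np.inf, np.inf)
    return re + 1j * im
```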
\subsubsection{Distinguishable atoms}
When atom $i$ is in the state $\ket{\phi_{(n_i,\mathbf{r}'_i)}}$ and atom $j$ in the state $\ket{\phi_{(n_j,\mathbf{r}'_j)}}$, according to Eq.~(\ref{overlapFS}), the correlation function (\ref{cijexdis}) reads
\begin{align}
\mathcal{C}_{ij}^{\mathrm{ex,sep}}(\mathbf{k}') &= I_{(n_i,\mathbf{r}'_i)(n_i,\mathbf{r}'_i)}(\mathbf{k}')\,I_{(n_j,\mathbf{r}'_j)(n_j,\mathbf{r}'_j)}(-\mathbf{k}') \nonumber\\[5pt]
&= e^{i \mathbf{k}'\boldsymbol{\cdot} \mathbf{r}'_{ij}}\, e^{- k'^2_{z'} \ell_0^2} \,L^{0}_{n_i}\big(k_{z'}'^2 \ell_0^2\big) \,L^{0}_{n_j}\big(k_{z'}'^2 \ell_0^2\big). \label{corkmFSdis}
\end{align}
The decay rates of distinguishable atoms with Fock states at arbitrary positions can be obtained by inserting Eq.~(\ref{corkmFSdis}) into Eq.~(\ref{gijgen}) and performing the angular integration. Simple analytical expressions can be obtained in the limit $\xi_{ij}\to 0$ (superradiant regime). To this end, we first express the product of Laguerre polynomials as a linear combination of these same polynomials,
\begin{equation}\label{laguerreproduct}
L_{n_i}^0(x) L_{n_j}^0(x)=\sum\limits_{\ell=|n_i-n_j|}^{n_i+n_j}c_{n_i,n_j,\ell} \,L_{\ell}^0(x)
\end{equation}
with
\begin{equation}\label{laguerreproductcoeff}
c_{n_i,n_j,\ell}=\left(-\frac{1}{2}\right)^p\sum_n\frac{2^{2n} (n_i+n_j-n)!}{(n_i-n)!(n_j-n)!(2n-p)!(p-n)!},
\end{equation}
where $p=n_i+n_j-\ell$ and the sum over $n$ runs over all integers such that the arguments of the factorials are non-negative \cite{Gil60}. By plugging $\mathcal{C}_{ij}^{\mathrm{ex,sep}}(\mathbf{k}_0')$ with $e^{i \mathbf{k}_0' \boldsymbol{\cdot}\mathbf{r}'_{ij}} \approx 1$ into Eq.~(\ref{gijgen}) and performing the integration over all directions, we get
\begin{widetext}
\begin{equation}\label{gij_fs_sol}
\gamma_{ij}^\mathrm{sep}(n_i, n_j,\ell_0) \stackrel[\xi_{ij} \ll 1]{}{\simeq} \frac{\gamma_0}{4} \sum\limits_{\ell=|n_i-n_j|}^{n_i+n_j}c_{n_i,n_j,\ell}
\Bigg[ q_{ij} \; {}_2F_2\left(\frac{3}{2},\ell+1;1,\frac{5}{2};-\eta_0^2 \right) \Bigg.
\Bigg. + (4 - q_{ij}) \; {}_2F_2\left(\frac{1}{2},\ell+1;1,\frac{3}{2};-\eta_0^2\right) \Bigg]
\end{equation}
\end{widetext}
with $q_{ij}$ given by Eq.~(\ref{pqpi}) for a $\pi$ transition and by Eq.~(\ref{pqsigma}) for a $\sigma^\pm$ transition, and ${}_pF_q(\mathbf{a};\mathbf{b};z)$ the generalized hypergeometric series~\cite{Abr70,footnote7}.
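Both the linearization (\ref{laguerreproduct}) and the closed form (\ref{gij_fs_sol}) lend themselves to a direct numerical check. The sketch below (standard-library Python; $\gamma_0$ is normalized to 1 and $q_{ij}$ is kept as a free parameter, since its value is fixed elsewhere by the transition type) sums the ${}_2F_2$ series term by term, which converges for all arguments.

```python
import math

def laguerre(n, x):
    """Laguerre polynomial L_n^0(x) via the three-term recurrence."""
    if n == 0:
        return 1.0
    lm1, l = 1.0, 1.0 - x
    for k in range(1, n):
        lm1, l = l, ((2*k + 1 - x)*l - k*lm1)/(k + 1)
    return l

def c_coeff(ni, nj, ell):
    """Linearization coefficient c_{ni,nj,ell} of Eq. (laguerreproductcoeff)."""
    p = ni + nj - ell
    total = 0.0
    for n in range(min(ni, nj, p) + 1):
        if 2*n - p < 0:
            continue  # factorial argument would be negative
        total += 2**(2*n)*math.factorial(ni + nj - n)/(
            math.factorial(ni - n)*math.factorial(nj - n)
            *math.factorial(2*n - p)*math.factorial(p - n))
    return (-0.5)**p*total

def hyp2f2(a1, a2, b1, b2, z, terms=200):
    """Generalized hypergeometric series 2F2, summed term by term (entire in z)."""
    s, term = 0.0, 1.0
    for k in range(terms):
        s += term
        term *= (a1 + k)*(a2 + k)/((b1 + k)*(b2 + k))*z/(k + 1)
    return s

def gamma_sep(ni, nj, eta0, q, gamma0=1.0):
    """Superradiant-regime decay rate of Eq. (gij_fs_sol)."""
    tot = 0.0
    for ell in range(abs(ni - nj), ni + nj + 1):
        tot += c_coeff(ni, nj, ell)*(
            q*hyp2f2(1.5, ell + 1, 1.0, 2.5, -eta0**2)
            + (4 - q)*hyp2f2(0.5, ell + 1, 1.0, 1.5, -eta0**2))
    return gamma0*tot/4
```

One can verify that the coefficients reproduce the product of Laguerre polynomials, and that for $\eta_0 \to 0$ the rate tends to $\gamma_0$ independently of $q_{ij}$, since both hypergeometric functions tend to 1 and the coefficients sum to $L_{n_i}(0)L_{n_j}(0)=1$.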
The decay rates (\ref{gij_fs_sol}) for atoms in the same Fock state are shown in Fig.~\ref{gammaij_fock} as a function of the Lamb-Dicke parameter for a $\pi$ transition with $\alpha_{ij} = \pi/2$ and for different excitation numbers $n=n_i=n_j$. At fixed Lamb-Dicke parameter, the decay rates decrease as the excitation number increases. For large Lamb-Dicke parameters, they fall off as a power law, as can be seen from the inset. Oscillations appear for excitation numbers $n>0$, which we attribute to the oscillations (in momentum space) of the motional wave packets.
\begin{figure}
\begin{center}
\includegraphics[width=0.47\textwidth]{fig7.pdf}
\end{center}
\caption{(Color online) Decay rates in the superradiant regime ($\xi_{ij}\ll 1$) as a function of the Lamb-Dicke parameter $\eta_0$ for atoms $i$ and $j$ initially in the same motional Fock state $|\phi_{(n, \mathbf{0})}\rangle$ centered around the origin with number of vibrational excitations $n=0$ (ground state - red curve), $n=1$ (green curve) and $n=10$ (blue curve). In this situation, $\gamma_{ij}^{\mathrm{sep}}$ and $\gamma_{ij}^{+}$ coincide. Inset: same figure in log-log scale showing the power-law decrease of $\gamma_{ij}^{\mathrm{sep}}$ as $1/\eta_0$. The plots shown are for a $\pi$ transition with $\alpha_{ij}=\pi/2$.} \label{gammaij_fock}
\end{figure}
\subsubsection{Indistinguishable atoms}
For indistinguishable atoms in Fock states, the correlation function is given by Eq.~(\ref{cijexpm}). The decay rates can be evaluated for arbitrary positions and Lamb-Dicke parameter by inserting (\ref{cijexpm}) into (\ref{gijgen}). In the limit $\xi_{ij} \to 0$, the vibrational states for different excitation numbers are orthogonal and the decay rates are given by
\begin{multline}
\label{FockN}
\gamma_{ij}^{\pm}(\{n_i\},\ell_0) \stackrel[\xi_{ij} \ll 1]{}{\simeq} \frac{1}{N!} \sum_{\pi,\pi'} s_{\pm}^{\pi}\,s_{\pm}^{\pi'} \,\sigma_{ij}^{\pi,\pi'} \\
\times \int \sum_{\boldsymbol{\varepsilon}} \gamma^\mathrm{em}_{\mathbf{k}_0\eps} \, I_{\pi(i)\pi'(i)}(\mathbf{k}_0) \, I_{\pi(j)\pi'(j)}(- \mathbf{k}_0) \,\frac{d\Omega}{(2\pi)^2},
\end{multline}
with $\gamma^\mathrm{em}_{\mathbf{k}_0\eps} $ and $I_{\alpha \beta}(\mathbf{k}_0)$ given by Eqs.~(\ref{gijclass}) and (\ref{overlapFS}) and
\begin{equation}
\sigma_{ij}^{\pi,\pi'} =\prod_{n = 1 \atop n \neq i,j}^N \!\delta_{\pi(n)\pi'(n)},
\end{equation}
where $\delta_{\pi(n)\pi'(n)}$ is the Kronecker symbol. For equal excitation numbers, the symmetric motional state $\rho_{ij}^+$ becomes separable in this regime and the symmetric decay rates $\gamma_{ij}^+$ tend to $\gamma_{ij}^\mathrm{sep}$ given by Eq.~(\ref{gij_fs_sol}).
\subsection{Thermal states in a harmonic trap}
We now consider as motional state the thermal state of atoms trapped in a harmonic potential of frequency $\Omega_{z'}= \hbar/(2M\ell_0^2)$ along the $z'$ direction. In this case, all atoms occupy the same motional mixed state~\cite{footnote8}
\begin{equation}\label{rhotherm}
\rho_{(\bar{n},\mathbf{0}')}=\sum_{n=0}^{+\infty}\frac{\bar{n}^n}{(1+\bar{n})^{n+1}}\,|\phi_{(n,\mathbf{0}')}\rangle\langle \phi_{(n,\mathbf{0}')}|
\end{equation}
where $\bar{n}=1/\big(e^{\hbar\Omega_{z'}/k_B T}-1\big)$ is the mean phonon number at temperature $T$.
The overlap
\begin{equation}
\label{overlapTS}
I_{(\bar{n},\mathbf{0}')(\bar{n},\mathbf{0}')}(\mathbf{k}') =\big\langle e^{i k'_{z'} \hat{z}'_j} \big \rangle
= {\rm Tr} \big( e^{i k'_{z'} \hat{z}'_j} \rho_{(\bar{n},\mathbf{0}')}\big)
\end{equation}
can be evaluated analytically by writing the position operator as $\hat{z}'_j = \ell_0(b_j + b_j^\dagger)$ with $b_j$ and $b_j^\dagger$ the annihilation and creation operators of a motional excitation for atom $j$. Upon using the identity $\big\langle \exp\big({\ell_0 (b_j^\dagger + b_j)}\big)\big\rangle = \exp\big({\ell_0^2\langle (b_j^\dagger + b_j)^2 \rangle/2}\big)$ where the expectation value is taken in a thermal state~\cite{Hua01}, we get
\begin{equation}
\label{overlapTS2}
I_{(\bar{n},\mathbf{0}')(\bar{n},\mathbf{0}')}(\mathbf{k}') = e^{-k_{z'}'^2 \ell_0^2\left(2\bar{n} + 1\right)/2}.
\end{equation}
The corresponding correlation function (\ref{cijexdis}) reads
\begin{align}
\mathcal{C}_{ij}^\mathrm{ex, sep}(\mathbf{k}')
= e^{-k_{z'}'^2 \ell_0^2 (2\bar{n}+1)}, \label{corkm}
\end{align}
and is of the same form as for Gaussian states centered around the origin (see Eq.~(\ref{corkmGSdis})), now with a width $\tilde{\ell}_0 = \ell_0 \sqrt{2\bar{n} +1}$ which depends on the temperature through $\bar{n}$. As a consequence, the decay rates and the dipole-dipole shifts for atoms in the same thermal state are given by Eqs.~(\ref{gijgsdis}) and (\ref{deltaijdisGS}) with $\eta_0$ replaced by $\tilde{\eta}_0 = \eta_0 \sqrt{2\bar{n} +1}$. The increase in Lamb-Dicke parameter from $\eta_0$ to $\tilde{\eta}_0$ comes from the Debye-Waller factor $e^{-k_0^2\langle \hat{z}'^2_j \rangle }$ where $\langle \hat{z}'^2_j \rangle =\ell_0^2 (2 \bar{n} + 1)$ is the mean square displacement of atom $j$.
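The temperature dependence of this rescaling is easy to quantify; the short sketch below (standard-library Python, with illustrative trap parameters) evaluates $\bar{n}$ and the effective Lamb-Dicke parameter $\tilde{\eta}_0$.

```python
import math

HBAR = 1.054571817e-34  # J s
KB = 1.380649e-23       # J / K

def nbar(omega, T):
    """Mean phonon number of a harmonic mode of angular frequency omega at temperature T."""
    if T == 0:
        return 0.0
    return 1.0/math.expm1(HBAR*omega/(KB*T))

def eta_eff(eta0, omega, T):
    """Thermally rescaled Lamb-Dicke parameter eta0*sqrt(2*nbar + 1)."""
    return eta0*math.sqrt(2*nbar(omega, T) + 1)
```

For example, for a trap with $\Omega_{z'}/2\pi = 100$~kHz at $T = 10~\upmu$K one finds $\bar{n} \approx 1.6$, so the effective Lamb-Dicke parameter grows by a factor of about 2; at $T=0$ the ground-state value $\eta_0$ is recovered.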
\section{Conclusions}
In this work, we derived a general master equation for the internal dynamics of atoms coupled to the electromagnetic field in vacuum, taking into account the quantization of their motion. Our master equation provides an accurate description of recoil effects, even beyond the Lamb-Dicke regime, and applies equally well to distinguishable and indistinguishable atoms. We obtained general expressions for the dipole-dipole shifts and the decay rates, which determine the conservative and dissipative atomic internal dynamics, in terms of their classical expressions and the motional correlation function defined for arbitrary motional states. We showed that the motional state allows one to engineer the dipole-dipole shifts and the decay rates, and can lead to a large modification compared to the classical value. In particular, we obtained analytical expressions for the decay rates for Gaussian states, harmonic oscillator eigenstates and thermal states, that are relevant in cold atom experiments.
\begin{acknowledgments}
FD would like to thank the F.R.S.-FNRS for financial support.
FD is a FRIA grant holder of the Fonds de la Recherche Scientifique-FNRS.
\end{acknowledgments}
\section{Introduction}
\noindent
A common goal in precision optics is to use feedback \cite{Bechhoefer2005Feedback} to stabilize (lock) the frequency of a laser to that of an external system such as a Fabry-Perot resonator \cite{Drever1983Laser} or atomic transition \cite{Dschao1980I2, Cerez1980He}. This can be used either to stabilize the laser itself or to monitor the dynamics of the external system. In all feedback schemes, it is desirable to achieve the largest possible closed-loop gain (i.e.~the degree to which noise can be suppressed) and a dynamic range (headroom) sufficient to compensate for the largest fluctuations.
Due to causality, the gain of a feedback loop is ultimately limited by the speed with which corrections can be applied (i.e.~the loop's bandwidth), which in turn is fundamentally limited by the delay of the signal propagating through the loop \cite{Bechhoefer2005Feedback}. In many situations, the achievable gain is practically limited by other nonidealities. For example, one means of tuning laser frequency is to mechanically stretch an optical path, but the bandwidth is then practically limited by the structure's mechanical resonances. For this reason, typical low-noise, mechanically tuned lasers (e.g.~commercial Nd:YAG) achieve control bandwidths limited to $\sim$\SI{100}{kHz}. Faster feedback can be achieved via electronic control of the laser's pump. Diode lasers, for example, readily achieve $\sim$5 MHz bandwidth using pump feedback to stabilize to an external cavity \cite{Schoof2001Reducing}, and as such this technique is routinely used as a first stage in reducing their comparatively large noise; the combined system, however, is then subject to the mechanical bandwidth of the external cavity. Cavity length stabilization has improved in recent years, achieving \SI{180}{kHz} using short-travel piezo actuation \cite{Briles2010Simple} and now up to $\sim$\SI{700}{kHz} with the incorporation of photothermal tuning \cite{Brachmann2016Photothermal}.
Instead of controlling the frequency of the light generated by a laser, one can also shift the frequency after emission. For visible wavelengths, this is often accomplished with an acousto-optical modulator (AOM), which can achieve $\sim$\SI{200}{kHz} feedback bandwidth \cite{Kessler2012A} and $\sim$MHz-scale headroom. At near-infrared (telecom) wavelengths, low-cost fiber modulators are more commonly employed. Using serrodyne techniques, wherein a voltage-controlled oscillator (VCO), nonlinear transmission line (NLTL), and electro-optical modulator (EOM) generate a saw-tooth phase that effectively shifts the carrier frequency \cite{Houtz2009Wideband, Kohlhaas2012Robust}, or single-sideband modulation (SSM), wherein a Mach-Zehnder interferometer shifts a small portion of the carrier \cite{Gatti2015Wide}, it is routine to achieve several-MHz feedback bandwidth and well over \SI{100}{MHz} headroom.
A second common goal in precision optics is to perform heterodyne readout \cite{Protopopov2009Laser}, wherein a weak ``probe'' beam is overlapped with a strong optical local oscillator (LO) detuned by an electronically-measurable frequency. When measured by a photodiode, the beating between these beams produces an amplified electronic signal from the probe with a spectrum shifted to the LO detuning, thereby providing access to the signal's amplitude and phase quadratures.
Here we present and characterize a simple, low-cost, high-bandwidth, post-emission laser locking technique with built-in heterodyne readout. In complement to serrodyne and SSM systems, this approach relies on a high-speed VCO and optical modulator to control the frequency, and can be implemented with any laser. In contrast to serrodyne systems, this does not require a precise saw-tooth or high-bandwidth EOM, and, similar to SSM, shifts only a fraction of the laser light. In contrast to both, the carrier is exploited as an optical LO for heterodyne readout, and since this follows the same optical path, no alignment or relative path stabilization is required. Using the test ports of our chosen electronics, we directly measure the frequency-dependence of the closed-loop gain, demonstrating a delay-limited feedback bandwidth of \SI{3.5}{MHz} and headroom exceeding \SI{500}{MHz} ($\sim$\SI{1}{GHz} should be possible with this VCO/EOM combination, at the expense of added amplitude noise). The measured gain matches a simple model based on ideal components, and from this we propose a modified setup that should realistically achieve a gain of $4\times 10^7$ at \SI{1}{kHz} (\SI{6.6}{MHz} bandwidth). Section \ref{sec:review} briefly reviews requisite concepts in laser feedback. Section \ref{sec:PDH} then introduces the ``Pound-Drever-Hall'' method for generating an error signal \cite{Pound1946Electronic,Drever1983Laser,Black2001An} (including a derivation of its dynamical response), along with a simple electronic modification enabling heterodyne readout. We then present the technical details of our ``proof-of-concept'' system in Sec.~\ref{sec:technique}, characterize its closed-loop performance in Sec.~\ref{sec:performance}, and conclude in Sec.~\ref{sec:discussion}.
\section{A Brief Review of Laser Feedback}\label{sec:review}
\noindent
All frequency stabilization schemes rely on (i) the generation of an ``error'' signal proportional to the detuning $\delta$ between the laser and the external system, and (ii) the processing and routing of this signal to a port capable of adjusting $\delta$ to compensate \cite{Bechhoefer2005Feedback}. \Figure{1}(a) shows a generic diagram of a ``typical'' feedback loop for locking a laser to a cavity resonance. Environmental noise (e.g.~vibrations, laser noise) introduces a nominal detuning $\delta_n$, which is subsequently converted to an optical signal by the cavity ``$C$'', translated into an electrical signal by a photodiode ``$D$'', and modified by assorted electronic components and amplifiers ``$-A$'', before being sent to a ``feedback'' port ``$F$'' to compensate. This correction is added to the environmental noise, resulting in a relationship for the \emph{actual} detuning $\delta = \delta_n-CDAF\delta$, where $C$, $D$, $-A$, and $F$ are the frequency-dependent complex gains (transfer functions) of the cavity, diode, electronics, and feedback port. Solving for $\delta$ yields
\begin{equation}\label{eq:feedback}
\delta = \frac{\delta_n}{1+CDAF}.
\end{equation}
This immediately highlights the central concerns for stabilization. First, the ``closed-loop gain'' $G\equiv CDAF$ should be made as large as possible to cancel the environmental noise. For $|G| \gg 1$, the overall phase $\phi_G$ does not matter, but if $G$ approaches $-1$ at some frequency, then the noise at that frequency is \emph{amplified}. This places unavoidable limits on $G$ for the following reasons: (i) any delay $t_d$ in the signal path multiplies $G$ by the phase factor $e^{-i\omega t_d}$, forcing $\phi_G=-\pi$ at finite frequencies, regardless of what electronics are chosen for $A$, (ii) stability requires that the magnitude of the gain at the lowest of these frequencies $\omega_{-\pi}$ be less than 1, and (iii) causality places an upper bound $|G| < \omega_{-\pi}^2/\omega^2$ on how much the gain can increase below this point \cite{Bechhoefer2005Feedback}. Since most noise occurs at low frequencies, it is therefore desirable to make $\omega_{-\pi}$ large, and to engineer a feedback circuit such that $G$ increases as rapidly as possible below that frequency.
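These constraints can be made concrete with a minimal numerical sketch of Eq.~(\ref{eq:feedback}), assuming an idealized integrator-type loop with unity-gain frequency $\omega_u$ and a pure delay $t_d$ (all values below are illustrative).

```python
import cmath, math

def loop_gain(omega, omega_u, t_d):
    """Idealized open-loop gain: an integrator with unity-gain frequency
    omega_u and a pure propagation delay t_d."""
    return (omega_u/(1j*omega))*cmath.exp(-1j*omega*t_d)

def suppression(omega, omega_u, t_d):
    """Residual noise fraction |delta/delta_n| = |1/(1 + G)| from Eq. (feedback)."""
    return abs(1/(1 + loop_gain(omega, omega_u, t_d)))

def omega_pi(t_d):
    """Frequency where this pure-integrator loop reaches a phase of -pi:
    -pi/2 (integrator) - omega*t_d (delay) = -pi."""
    return math.pi/(2*t_d)
```

For $t_d = 100$~ns, for example, this idealized loop reaches $-\pi$ near \SI{2.5}{MHz}; close to $\omega_{-\pi}$, where $G$ approaches $-1$, the loop amplifies rather than suppresses the noise, which is the ringing observed when the gain is turned up too far.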
\begin{figure}[!ht]
\centering
\includegraphics[width=0.98\columnwidth]{figure1}
\caption{Feedback stabilization. (a) Generic control loop for stabilizing a laser's detuning $\delta$ from the resonance of an optical cavity. Noise $\delta_n$ enters, is converted to an optical signal by the cavity (transfer function $C$), collected by a diode ($D$), manipulated by electronics ($-A$) and sent to a ``feedback'' port ($F$). (b) Practical implementation using Pound-Drever-Hall readout. Straight red lines represent optical paths, straight black lines represent electrical paths, and dashed gray lines show potential feedback paths. The laser is phase-modulated, lands on a beam splitter (BS) and interacts with the cavity, which converts phase to amplitude modulation. This is recorded with a photodiode and mixed (demodulated) with a local oscillator. Inset shows the resulting steady-state voltage $V_Y(\delta)$, with a red dot indicating a stable lock point. The manipulated signal can be fed back to (i) the cavity length or (ii) the laser frequency. Feeding back to (iii) the oscillator frequency only adjusts the sidebands.}
\label{fig1}
\end{figure}
A readout of $\delta$ (the error signal) can be obtained by several methods. A high-finesse optical cavity having length $L$, input mirror (power) transmission $T$ and reflection $R$, and power ringdown time $\tau$ has an overall field reflection coefficient (see Supplementary Material)
\begin{equation}\label{eq:cavity}
r(\delta)\approx \frac{c\tau T/L}{1+i2\tau\delta}-\sqrt{R}.
\end{equation}
The reflected power ($\propto$$|r|^2$) therefore follows a Lorentzian line shape, and on resonance ($\delta$=$0$), cannot on its own be used for feedback, since it does not provide information about the sign of $\delta$. One can of course generate a bipolar error signal by tuning the laser away from resonance \cite{Barger1973Frequency}, but this technique couples laser power fluctuations to detuning errors. However, the phase of $r(\delta)$ does vary linearly on resonance, and has been extracted via phase modulation \cite{Drever1983Laser}, heterodyne \cite{Protopopov2009Laser,Danilishin2012Quantum}, and homodyne \cite{Heurs2010Homodyne} schemes, wherein the mode of interest interferes with one or more reference beams having different frequency or phase. Other techniques employ a second cavity mode as a reference, for example a mode of different polarization \cite{Hansch1980Laser} or a higher order spatial mode \cite{Wieman1982Laser, Shaddock1999Frequency}. The ubiquitous and powerful ``Pound-Drever-Hall'' technique \cite{Drever1983Laser} is discussed in the following section.
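The sign ambiguity is immediate from Eq.~(\ref{eq:cavity}): $|r|^2$ is even in $\delta$ while $\operatorname{Im}[r]$ is odd. A minimal numerical check follows; the cavity numbers used in the test are illustrative, roughly impedance-matched values, not those of the experimental cavity.

```python
C_LIGHT = 2.99792458e8  # m/s

def reflection(delta, tau, length, T, R):
    """Field reflection coefficient r(delta) of Eq. (cavity)."""
    return (C_LIGHT*tau*T/length)/(1 + 2j*tau*delta) - R**0.5

# Illustrative, roughly impedance-matched numbers (not the experimental cavity):
TAU, LEN, T_IN = 3.3e-7, 0.05, 5e-4
R_IN = 1 - T_IN
```

The even reflected power carries no information on the sign of $\delta$, whereas the odd imaginary part (equivalently, the phase) does, which is precisely what the schemes below extract.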
\section{Modified Pound-Drever-Hall Readout and Dynamical Response}\label{sec:PDH}
\noindent
A common method for on-resonance laser stabilization is the ``Pound-Drever-Hall'' (PDH) technique \cite{Pound1946Electronic,Drever1983Laser}, a diagram of which is drawn in \Figure{1}(b). Stated briefly, this technique effectively amounts to dithering the laser frequency with an electro-optical modulator (EOM) and measuring the induced modulation in the reflected power to infer the \emph{slope} of $|r(\delta)|^2$ (or Im[$r$]) \cite{Black2001An}. The resulting error signal (inset blue curve, near red dot) can then be manipulated with electronics ($-A$) and fed back to either (i) the cavity length or (ii) the laser frequency, as described above. Feeding back to a voltage-controlled oscillator (VCO, iii) will not adjust the \emph{carrier} frequency (or $\delta$) in this configuration, but can be used to lock a \emph{sideband} to the cavity as discussed in Sec.~\ref{sec:technique}. An elegant, pedagogical derivation of the steady-state error signal ($V_Y$ in \Fig{1}(b)) from this system can be found in Ref.~\cite{Black2001An}. This accurately captures the system's ability to convert low-frequency detuning noise into an error signal, but breaks down when the detuning $\delta(t)$ contains frequencies comparable to the cavity's linewidth $1/\tau$. A straightforward means of deriving the \emph{dynamic} response \cite{Rakhmanov2002Dynamic} for small deviations $\delta$ about the lock point is to propagate a laser ``noise'' component through an EOM, cavity, diode, and demodulation (mixer) circuit in Fig.~\ref{fig1}(b) to extract a combined transfer function, as follows.
Suppose there exists a frequency noise component at frequency $\omega$ that is the real part of $\Omega(t)=\Omega_n e^{i\omega t},$ where $\Omega_n$ is a constant. This corresponds to phase modulation $\phi(t)=\phi_n \sin(\omega t)$, where $\phi_n = \Omega_n/\omega$. If this light is fed through a phase modulator (EOM) driven by voltage $V_\text{osc} = V_e \sin(\omega_e t)$, the field landing on the cavity is then
\begin{eqnarray}\label{eq:field}
E = E_l \cos\left(\omega_l t + \phi_e \sin\omega_e t + \phi_n \sin\omega t\right)
\end{eqnarray}
where $E_l$ is a constant amplitude and $\phi_e\propto V_e$ according to the efficiency of the EOM. Assuming all modulations are small ($\phi_e,\phi_n\ll 1$), Eq.~\ref{eq:field} can be written as the sum of a ``carrier'' at frequency $\omega_l$, 4 first-order sidebands ($\omega_l\pm \omega_e$ and $\omega_l\pm\omega$) and 8 second-order sidebands ($\omega_l\pm 2\omega_e$, $\omega_l\pm 2\omega$, $\omega_l \pm \omega_e \pm \omega$, and $\omega_l \pm \omega_e \mp \omega$). If we also assume the modulator frequency is large compared to the cavity linewidth and noise frequency ($\omega_e \gg 1/\tau,\omega$), and the \emph{carrier} is on resonance, only five beams ($\omega_l$, $\omega_l \pm \omega$, and $\omega_l \pm 2\omega$) acquire a significant change in magnitude and phase upon reflection, as per the cavity response (Eq.~\ref{eq:cavity}). When the 13 reflected beams land on a photodiode, they produce a time-averaged photocurrent $\propto$$\langle E^2\rangle$ containing all frequencies ($\ll \omega_l$) within the photodiode's bandwidth. If this signal is then mixed with the original oscillator voltage $V_\text{osc}$, the mixer output is proportional to $\langle E^2\rangle \sin(\omega_e t)$, and an appropriately chosen low-pass filter can then eliminate all terms except those having frequency near $\omega$. After some bookkeeping (see Supplementary Material), the transfer function for converting a frequency noise $\Omega$ to an error signal $V_Y$ is
\begin{equation}\label{eq:dynamic-PDH}
\frac{V_Y}{\Omega} \approx
-\frac{2\phi_e E_l^2\beta \tau^2}{1+2i\tau\omega}
\end{equation}
where the constant $\beta$ includes a combination of cavity constants and the conversion efficiencies of the diode and mixer.\footnote{Note the diode and mixers employed below have large bandwidths, and are assumed to have frequency-independent efficiencies for simplicity. This assumption is validated for our chosen components, as discussed below.} The interpretation of this result is straightforward. Assuming $\phi_n\ll 1$ restricts $V_Y(\delta)$ to the region of linear response (i.e.~near the red dot in Fig.~\ref{fig1}(b)). The resulting transfer function sensibly scales with the laser power and dither amplitude \cite{Drever1983Laser,Black2001An}, and the cavity's amplitude ringdown time $2\tau$ imposes a low-pass filter on the readout \cite{Rakhmanov2002Dynamic}.
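The single-pole character of Eq.~(\ref{eq:dynamic-PDH}) is worth checking explicitly; the sketch below normalizes the prefactor $2\phi_e E_l^2\beta\tau^2$ to unity, since $\beta$ is apparatus-dependent.

```python
def pdh_response(omega, tau):
    """Normalized frequency-noise-to-error-signal response of Eq. (dynamic-PDH),
    with the prefactor 2*phi_e*E_l^2*beta*tau^2 set to unity."""
    return -1.0/(1 + 2j*tau*omega)
```

The magnitude falls by $1/\sqrt{2}$ at $\omega = 1/2\tau$ (the inverse amplitude ringdown time) and rolls off as $1/\omega$ beyond it, so a fast cavity is required for a fast error signal.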
Equation \ref{eq:dynamic-PDH} also motivates the use of a ``proportional-integral'' (PI) amplifier for feedback electronics. A PI amplifier has a transfer function
\begin{equation}
\label{eq:PI-amp}
A_{PI} = G_0\frac{1+i\omega/\omega_{PI}}{1/g+i\omega/\omega_{PI}}
\end{equation}
where $G_0$ is an overall scaling factor, $\omega_{PI}$ is a ``PI corner'' frequency, above which the response changes from integrator-like to proportional, and $g$ is a gain limit at low frequencies. Often (especially while locked) the gain limit is removed ($1/g\rightarrow0$), in which case $A_{PI}\rightarrow G_{0}\left(1-i\frac{\omega_{PI}}{\omega}\right)$; when combined with the readout transfer function (Eq.~\ref{eq:dynamic-PDH}), the choice $\omega_{PI}=1/2\tau$ then results in a partial-loop transfer function
\begin{equation}\label{eq:nearly-closed-gain}
\frac{V_Y}{\Omega}A_{PI} = \frac{\phi_e E_l^2 \beta \tau^2 G_0}{i \tau\omega}
\end{equation}
The total system behaves like an integrator over all frequencies, with increasing gain at low frequencies. The overall phase is also far from $-\pi$, preventing the system's delay factor $e^{-i\omega t_d}$ from forcing the closed-loop gain below 1 at a low frequency. This also provides ``wiggle room'' for loop nonidealities such as indirectly driven resonances that can cause a temporary excursion in phase (see, e.g.,~Ref.~\cite{Briles2010Simple}). However, even if the bandwidth of the feedback port $F$ is effectively infinite and / or we have precisely compensated for all of its artifacts, the ultimate gain is limited by the signal delay $t_d$ -- in this case from the output of the EOM to the cavity, back to the diode, through the electronics, and through the feedback port -- which forces the closed-loop gain to be less than 1 at frequency $\omega_{-\pi} < \pi/4t_d$ for this choice of electronics.
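The pole-zero cancellation underlying Eq.~(\ref{eq:nearly-closed-gain}) can be verified numerically: with $\omega_{PI} = 1/2\tau$, the PI zero exactly cancels the cavity pole, leaving a pure $1/\omega$ response at a constant $-\pi/2$ phase. In the sketch below all overall prefactors ($G_0$, $\phi_e E_l^2\beta\tau^2$) are set to unity.

```python
import cmath, math

def pi_amp(omega, omega_pi):
    """Gain-unlimited PI response A = 1 - i*omega_pi/omega (overall scale G0 = 1)."""
    return 1 - 1j*omega_pi/omega

def partial_loop(omega, tau):
    """PI amplifier times the normalized cavity readout 1/(1 + 2i*tau*omega),
    with the PI corner placed at omega_pi = 1/(2*tau)."""
    return pi_amp(omega, 1/(2*tau))/(1 + 2j*tau*omega)
```

The product equals $-i/(2\tau\omega)$ at every frequency, so the combined system is an ideal integrator before the delay factor is included.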
As mentioned (and discussed below) it is also possible to lock the first-order sidebands ($\omega\pm\omega_e$) to the cavity. Following the same analysis for the case of either sideband resonant with the cavity produces a transfer function (see Supplementary Material)
\begin{equation}
\label{eq:dynamic-sideband}
\frac{V_{Y,\pm}}{\Omega} \approx
\frac{\phi_e E_l^2 \beta \tau^2}{1+2i\tau\omega}
\end{equation}
which is inverted and half as large as the carrier-resonant case (Eq.~\ref{eq:dynamic-PDH}), consistent with the slope of the steady state solution ($V_Y$ in \Fig{1}(b)) at $\delta=\pm\omega_e$ \cite{Drever1983Laser, Black2001An}.
Finally, similarly propagating an \emph{amplitude} noise component through this system (i.e., setting $E_l\rightarrow \left(1+\operatorname{Re}[\epsilon]\right) E_l$, where $\epsilon(t) = \epsilon_n e^{i\omega t}$ with constant $\epsilon_n\ll 1$) has no impact on $V_Y$ or $V_{Y,\pm}$. However, introducing a relative $\pi/2$ phase shift between the mixer's LO and signal ports provides a measurement of the other quadrature $V_X$, which carries amplitude information. When locked to either sideband, the amplitude quadrature transfer function is (see Supplementary Material)
\begin{equation}
\label{eq:dynamic-sideband-amp}
\frac{V_{X,\pm}}{\epsilon} \approx \mp \phi_e E_l^2 \beta \tau
\frac{1+i\tau\omega}{1+2i\tau\omega}
\end{equation}
for the upper or lower sidebands resonant with the cavity, respectively. We note that, in contrast to $V_{Y,\pm}$, the amplitude quadrature $V_{X,\pm}$ is influenced by the off-resonance sideband. Hence, adding a second phase-shifted mixer (or using an IQ mixer) enables heterodyne readout with no additional lasers, optical modulators, or alignment. Conveniently, the steady-state form of this quadrature, discussed below and shown in Fig.~\ref{fig2}, also provides a simple means of verifying which sideband is locked to the cavity (along with an independent estimate of how well it is locked).
\section{Apparatus for Sideband Locking with Heterodyne Readout}\label{sec:technique}
\begin{figure*}[htb]
\includegraphics[width=0.95\textwidth]{figure2}
\caption{Sideband locking with heterodyne readout. Many parts from Thorlabs (TL) and Minicircuits (MC). (a) A VCO (MC ZX95-1600W-S+) signal is split (MC ZX-10-2-20-S+) and amplified (MC ZX60-4016E-S+) feeding both an EOM (TL LN65-10-P-A-A-BNL-Kr with shortened output fiber) and, after delay, the electronic LO (``L'') ports of two mixers (MC ZFM-5X-S) for quadrature readout. A fiber laser (Koheras Adjustik E15) feeds a 14.2-dBm (26.3 mW) carrier through the EOM, producing a \SI{10.8}{dBm} carrier and \SI{-2.2}{dBm} (5\%) sidebands, tuned by a variable attenuator (MC ZX73-2500-S+, $\sim$\SI{15}{dB}) leading to the EOM. Once collimated (TL F260APC-1550), the beam passes through a beam splitter (BS, TL BS018 50:50), mode-matching lenses (\SI{-5}{cm} and \SI{10}{cm} focal length, shown in (b)), and steering mirrors (M) before landing on a cavity comprising a flat (Newport 10CM00SR.70F) and curved (Newport 10CV00SR.70F) supermirror, the second of whose position is swept by a piezo mirror mount (TL K1PZ). The transmitted beam is focused on a photodiode (PD, TL PDA10CF), while the reflected beam is rerouted by the BS, passes through a free-space isolator (TL IO-2.5-1550-VLP) to eliminate standing waves, and is focused upon a 2-GHz-bandwidth low-noise photodiode (PD, Femto HSA-X-S-2G-IN). The signal's low-frequency noise ($<20$ MHz) is eliminated with a high-pass (MC SHP-20+), before amplification (MC ZX60-P105LN+) and splitting by a $\pi/2$ splitter (MC ZX10Q-2-13-S+). The phase-shifted signals are fed to the mixer's RF (``R'') ports for demodulation to the IF (``I'') ports. The ``phase'' quadrature ($V_Y$) is fed into a PI amplifier ($-A$, New Focus LB1005) for feedback to the VCO. Inset shows the predicted steady-state voltage of the amplitude quadrature $V_X(\delta)$. (b) Photograph of optical setup. 
(c) Simultaneously acquired phase and amplitude quadratures from points (i) and (ii) in (a), respectively, for three different VCO control voltages: 0 V (lightest), 0.8 V, and 1.8 V (darkest). The cavity length is swept quickly to avoid run-to-run variations due to vibrations, and to show transient signals common to high-finesse cavities (see text).}
\label{fig2}
\end{figure*}
\noindent
\Figure{2}(a) shows our test setup for locking a first-order sideband (at $\omega_l\pm\omega_e$) to a cavity resonance. Sidebands are created with a fiber EOM driven at $\omega_e$ by a VCO with \SI{90}{MHz} modulation bandwidth and 0.65-\SI{1.75}{GHz} tuning range. Light from the EOM passes through a beam splitter (BS) and mode-matching optics (shown in (b)), reflects from the cavity, and is collected by a high-bandwidth photodiode. The resulting signal is filtered and amplified before passing through a power splitter that produces a phase shift of 0 and $\pi/2$ at its outputs. These two signals are separately mixed with that of the VCO to produce $V_X$ and $V_Y$. The VCO output is split prior to the EOM, delayed, and used as the electronic LO for both mixers. In order to maintain a fixed phase between the mixers' LO and signal ports over the full range of VCO frequencies $\omega_e$, the delay between the two signal paths must match. Any difference $\Delta t_d$ produces a relative phase $\omega_e \Delta t_d$ that must remain small compared to $\pi/2$ at the highest VCO frequency. Here this imposes that $\Delta t_d \ll \pi/(2 \omega_e) \sim 1$ ns, corresponding to a free-space path difference $\ll 30$ cm; this is mostly compensated for with a combination of cables and extension adapters (\Fig{2}(a)), with mm-scale fine tuning of the photodiode's position. The higher precision required for larger-$\omega_e$ systems can be easily implemented with the diode optics mounted on a translation stage.
\Figure{2}(b) shows a photograph of the optical path; the electronics are mounted on a nearby platform. The detuning $\delta$ between the laser and cavity can be widely adjusted with long-travel piezos in the second mirror mount (``Piezo M''). \Figure{2}(c) shows a diagnostic measurement of $V_Y(\delta)$ and $V_X(\delta)$ recorded during cavity length sweeps for a few values of $\omega_e$. Each sweep was performed ``quickly'' (16 ms over the full range) to reduce run-to-run variations from the ambient vibrations of the test cavity. The insensitivity of the quadrature readout to $\omega_e$ indicates the delay is matched (see Supplementary Material for a larger range). The cavity has a power ringdown time $\tau=1.2\pm0.1~\upmu\text{s}$ (finesse 4700$\pm$400), and so these fast sweeps produce a transient response \cite{Lawrence1999Dynamic} resulting in a measured $V_Y$ (top plot of (c)) that is consistently not symmetric about $V_Y=0$, and a measured $V_X$ (bottom plot of (c)) that deviates from a simple peak. This artifact can be highly misleading when tuning the relative delay, and so rather than trying to symmetrize $V_Y$, we recommend slowly modulating $\omega_e$ while quickly sweeping the cavity, and adjusting $\Delta t_d$ to produce a signal shape that does not vary with $\omega_e$.
The error signal $V_Y$ is then fed through a tunable PI amplifier having the transfer function of Eq.~\ref{eq:PI-amp}, with $\omega_{PI} = 110$ kHz and $g = 105 =\SI{40}{dB}$ (measured) before finally being fed back to the VCO. Due to the sidebands' opposed frequency response, one is always stabilized by this feedback and one is always destabilized; here we (arbitrarily) lock the upper sideband (verified by the negative value of $V_X$). Despite the open-air design and flagrant disregard for vibration isolation, this system readily locks and remains so indefinitely.\footnote{The lock is impervious to chair scoots, door slams, claps, and shrieks, but fails if the table surface is tapped with a wrench.}
\section{Performance}\label{sec:performance}
\noindent
Once locked, we increase the feedback gain $G_0$ until the system rings at $\omega_{-\pi}/2\pi\sim\SI{3}{MHz}$ for this implementation, indicating that the gain at $\omega_{-\pi}$ has exceeded unity. We then reduce $G_0$ until the remaining noise in $V_Y$ is minimized. The most sensitive estimate of $V_Y$ is achieved by referring the PI amplifier's output back to its input using its known (measured) transfer function; together with an independent measurement of the error signal slope on resonance $2\pi\times \partial_\delta V_Y = 388\pm40$ mV/MHz, we estimate that the stabilized RMS detuning noise $\delta_\text{RMS}/2\pi$ is below \SI{70}{Hz} (0.0005 cavity resonance full-widths). This is a factor of 3000 lower than the pre-stabilized value of $240$ kHz (1.6 full-widths, corresponding to \SI{0.3}{nm} RMS cavity length noise), as estimated directly from the PI output and the VCO specifications (\SI{52}{MHz/V}). \Figure{3}(a) shows the power spectral densities of these two inferred detuning signals. The square root of their ratio provides a basic estimate of the closed-loop gain magnitude $|G(\omega)| \sim 1000$ at $\omega/2\pi=\SI{1}{kHz}$.
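The conversion behind these numbers is straightforward; the following sketch uses the quoted slope and pre-stabilized detuning, with a hypothetical in-loop RMS error voltage (our assumption, chosen only to illustrate the bookkeeping):

```python
# V_rms_in below is a hypothetical in-loop RMS error voltage, invented for
# illustration; the slope and pre-stabilized detuning are the quoted values.
slope = 388e-3 / 1e6        # V/Hz  (388 mV per MHz of detuning)
V_rms_in = 27e-6            # V, hypothetical RMS error voltage at the PI input

delta_rms = V_rms_in / slope          # Hz, stabilized RMS detuning
delta_free = 240e3                    # Hz, quoted pre-stabilized RMS detuning
suppression = delta_free / delta_rms  # rough closed-loop suppression factor
```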
To directly measure $G(\omega)$, we inject a small amount of ``noise'' into the locked system and observe how it is suppressed. The PI amplifier provides a second (inverted) input, and an isolated input monitor for independently measuring the in-loop error signal. Using a lock-in amplifier, we apply an oscillatory signal $V_n$ of frequency $\omega$ to this input and record both quadratures of the error signal $V_Y$ at $\omega$ (correcting for the transfer functions between the input and error monitor, as well as the lock-in and its measurement cables). Using the same analysis of \Fig{1}(a) with $CDAF \rightarrow G$, $\delta_n\rightarrow V_n$, and $\delta\rightarrow V_Y$, we solve for the closed-loop gain $G=V_n/V_Y - 1$, which is plotted in \Fig{3}(b) (blue). Importantly, the observed gain smoothly decreases with $\omega$ (approximately as $1/\omega$), and the phase crosses $-\pi$ at $\omega/2\pi=\SI{3.5}{MHz}$, where $|G|<1$, consistent with the observed ringing frequency. The measurement noise increases at low frequencies due to the reduced signal at high gain. It is worth noting that, despite the addition of sidebands to the VCO output (at $\omega_e\pm \omega$), the measured transfer function of the EOM, cavity, diode, and mixer is identical to that of simple laser frequency noise (see Supplementary Material).
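The extraction of $G$ from the injected tone reduces to one complex division per frequency; a minimal sketch (the amplitudes below are invented):

```python
# The relation V_Y = V_n / (1 + G) follows from the loop algebra described
# in the text; the numbers here are invented for illustration.
def loop_gain(V_n, V_Y):
    """Closed-loop gain G = V_n / V_Y - 1 from complex lock-in amplitudes."""
    return V_n / V_Y - 1

# Example: a tone suppressed ~1000x and shifted by roughly -90 degrees.
G = loop_gain(1.0, 1e-3j)
```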
\begin{figure}[!ht]
\centering
\includegraphics[width=0.8\columnwidth]{figure3}
\caption{(a) Detuning noise power spectral density (PSD) before and after lock, recorded while locked. The pre-feedback noise (red) is inferred from the proportional-integral (PI) amplifier output and the VCO conversion factor 52 MHz/V, while the post-feedback noise (blue) is inferred from the PI output referred back to its input and the independently measured slope of the error function ($388\pm40$ mV/MHz) at the lock point. (b) Measured (blue) and modeled (red) closed-loop transfer function. The model includes the cavity (green, ring-down time $\tau=\SI{1.1}{\mu s}$), PI amplifier (yellow, $\omega_{PI}$=\SI{110}{kHz} and $g$=105), and a delay (brown, \SI{70}{ns}, of which \SI{52}{ns} is from the PI amplifier). Transfer functions of other components are assumed to be ``flat'' on this scale. The gray dashed line shows a closed-loop gain that could be achieved with optimizations: a reduction of the total delay to \SI{10}{ns} and replacement of the PI amplifier with two PI filters, one with $\omega_{PI}/2\pi=\SI{70}{kHz}$ and $1/g=0$, and the other with $\omega_{PI}/2\pi=\SI{15}{MHz}$ and $g=10^5$. }
\label{fig3}
\end{figure}
The red line in Fig.~\ref{fig3}(b) represents a simple model for $G(\omega)$ comprising the product of (i) the PI transfer function (Eq.~\ref{eq:PI-amp}) with measured $\omega_{PI}$=\SI{110}{kHz} and $g$=105, (ii) the cavity transfer function (Eq.~\ref{eq:dynamic-sideband}) with $\tau$=\SI{1.1}{\upmu\text{s}} (i.e.~one standard deviation below the measured value), (iii) a closed-loop delay $t_d=70$ ns, and (iv) an overall scaling factor chosen to match the measured $G(\omega)$. The yellow and green curves show the modeled PI and cavity transfer functions alone for reference, and the brown curve shows the phase contribution from the delay. The employed value of $t_d$ is consistent with the signal travel time of the loop, independently estimated to be approximately \SI{68}{ns} from the signal path of the lower VCO loop in Fig.~\ref{fig2}(a): a combined cable and component length of 127" traversed at $2/3$ the speed of light (\SI{16}{ns}) plus the measured internal delay of the PI amplifier (\SI{52}{ns}). The agreement between the model and measurement suggests that the chosen components exhibit no important nonidealities up to $\sim$\SI{10}{MHz}, and that the other components (the EOM, optics, diode, filters, mixers, amplifiers, attenuators, splitters, and connectors) can be assumed to have a flat response, adding a combined delay on the order of nanoseconds at most.
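The loop model can be sketched numerically as follows. Since Eqs.~\ref{eq:PI-amp} and \ref{eq:dynamic-sideband} appear earlier in the paper, the functional forms below (a gain-limited PI stage and a single-pole cavity response) are our assumptions; only the parameter values are the quoted ones.

```python
import cmath
import math

# Quoted parameters; the PI and cavity forms are assumed standard shapes.
f_PI = 110e3      # Hz, PI corner
g = 105.0         # DC gain limit
tau = 1.1e-6      # s, cavity power ringdown time
t_d = 70e-9       # s, closed-loop delay

def G(f):
    """Open-loop gain model: PI x cavity x delay (unnormalized)."""
    H_pi = (1 + f_PI / (1j * f)) / (1 + f_PI / (1j * f * g))
    H_cav = 1 / (1 + 2j * math.pi * f * tau)
    return H_pi * H_cav * cmath.exp(-2j * math.pi * f * t_d)

# Scan 1-10 MHz for the -pi phase crossing (detected as a branch-cut jump).
fs = [1e6 * 10 ** (k / 4000) for k in range(4001)]
f_cross = None
prev = cmath.phase(G(fs[0]))
for fv in fs[1:]:
    p = cmath.phase(G(fv))
    if p - prev > math.pi:    # phase wrapped through -pi
        f_cross = fv
        break
    prev = p

# Gain at 1 kHz, normalized so |G| = 1 at the -pi crossing.
gain_1k = abs(G(1e3)) / abs(G(f_cross))
```

With these assumed forms the crossing comes out near \SI{3.6}{MHz} and the normalised gain at \SI{1}{kHz} is of order $10^3$, in line with the measurements quoted above.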
The phase plot of \Fig{3}(b) highlights that the achieved bandwidth is limited primarily by the delay. Without it, the phase would remain above $-\pi/2$ to a significantly higher frequency, allowing for larger $G_0$. The PI amplifier accounts for 75\% of the delay, implying the greatest gains can be made by replacing it with a faster (albeit less flexible) integrated circuit. Modern amplifiers routinely achieve sub-nanosecond delays, and the requisite PI filters can be realized with passives (capacitors and resistors). It is also straightforward to reduce the optical and electronic lengths: using compact mode-matching optics and shorter cables alone can reduce the delay to $\sim$\SI{10}{ns}. Furthermore, replacing the existing PI filter with two -- one having $\omega_{PI}/2\pi = 70$ kHz and $1/g\rightarrow 0$ and the other having $\omega_{PI}/2\pi=15$ MHz and $g=10^5$ -- for example, would produce a bandwidth of \SI{6.6}{MHz} and (more importantly) a near-causality limited gain $|G(2\pi\times\SI{1}{kHz})|\sim 4\times 10^7$ (Fig.~\ref{fig3}(b), dashed line). This optimization will be the subject of future work.
To estimate the headroom, we change the cavity length $L$ while locked and monitor the output voltage of the PI amplifier; the system remains locked over the full $\sim$\SI{100}{MHz} tuning range presented in Fig.~\ref{fig2}(b), in this case limited by the cavity's small free spectral range: the lower sideband of an adjacent mode eventually becomes degenerate with the locked sideband, spoiling the error signal. Performing the same test on a 5-cm cavity, we find a headroom of \SI{550}{MHz}, limited instead by the maximum output voltage of the PI amplifier (10 V), which covers only half the tuning range of the VCO. A headroom exceeding \SI{1}{GHz} is in principle possible with these components, however, while more headroom is certainly useful for tracking large fluctuations, the frequency-dependencies of the VCO output, EOM, and other electronics will eventually couple these fluctuations to the amplitude of the probe and optical LO beams (see Supplementary Material). Further engineering effort is therefore best spent reducing the system's inherent noise.
\section{Summary}\label{sec:discussion}
\noindent
We have demonstrated a simple technique for locking a first order laser sideband to an optical cavity with a delay-limited feedback bandwidth of \SI{3.5}{MHz} with a single integrator, and a headroom exceeding \SI{500}{MHz}. We directly measured the closed-loop gain, finding excellent agreement with a model based on ideal components, and suggest simple modifications for realizing a gain exceeding $10^7$ at \SI{1}{kHz}. Finally, we note that, by implementing an appropriately weighted sum of $V_X$ and $V_Y$ (or otherwise shifting the relative phase of the mixers' electronic LO and signal ports), it should be possible to create an amplitude-insensitive locking point (i.e.~a zero crossing in the resulting error signal) at arbitrary detuning.
\section{Acknowledgments}
\noindent
We thank Erika Janitz, Maximilian Ruf, Alexandre Bourassa, Simon Bernard, Abeer Barasheed, and Vincent Dumont for helpful discussions. T.M. acknowledges support by a Swiss National Foundation Early Postdoc Mobility Fellowship. The authors also acknowledge computational support from Calcul Qu\'{e}bec and financial support from NSERC, FRQNT, the Alfred P. Sloan Foundation, CFI, INTRIQ, RQMP, CMC Microsystems, and the Centre for the Physics of Materials at McGill.
\section{Introduction}
Let $T\colon X\mapsto X$ be a measure preserving transformation of the standard Borel probability space $(X,\mathcal{B},\nu)$. The well-known Theorem of Birkhoff states that for any $f\in L^1(X,\mathcal{B},\nu)$, the limit
$$
\lim_{n\to\infty}\frac{1}{n}\sum_{k=0}^{n-1}f(T^kx)\text{ exists for $\nu$-almost every $x$.}
$$
Moreover, if $\nu$ is an ergodic measure with respect to $T$ then
$$
\lim_{n\to\infty}\frac{1}{n}\sum_{k=0}^{n-1}f(T^kx)=\int fd\nu\text{ for $\nu$-almost every $x$.}
$$
We call the ``time'' average $\frac{1}{n}\sum_{k=0}^{n-1}f(T^kx)$ the Birkhoff average. If $T$ is uniquely ergodic then for a continuous potential $f\in C(X)$, the limit $\lim_{n\to\infty}\frac{1}{n}\sum_{k=0}^{n-1}f(T^kx)$ exists for all $x\in X$ and converges to a constant. That is, there is no spectral behaviour. However, if $h_{\rm top}(T)>0$ then one can expect a huge variety of different possible limits. It is a natural question to ask how large the set of points in $X$ is for which the Birkhoff average converges to a pre-given value $\alpha$.
This question was answered by Takens and Verbitskiy \cite{TV}. Namely, let $X$ be a compact metric space, let $T\colon X\mapsto X$ be a continuous transformation, and let $\varphi\colon X\mapsto\R$ be a continuous potential. Then for an $\alpha\in\R$ Takens and Verbitskiy \cite{TV} showed that the topological entropy of the set
$$
E(\alpha)=\left\{x\in X:\lim_{n\to\infty}\frac{1}{n}\sum_{k=0}^{n-1}\varphi(T^kx)=\alpha\right\}
$$
equals the Legendre transform of the topological pressure, which is equal to the supremum of the entropy of all invariant and ergodic measures for which the ``space'' average (i.e. the integral of $\varphi$) equals $\alpha$. For further results on digit frequencies, see Barreira, Saussol and Schmeling \cite{BSS}.
In this paper, we are interested in the following generalisation of the problem, namely, is it possible to determine the spectrum for weighted Birkhoff averages? That is, let $\ww=\{w_k\}_{k\in\N}$ be a sequence of bounded reals and let $\varphi\colon X\mapsto\R^d$ be a continuous potential and let $\alpha\in\R^d$. Is it possible to determine
$$
h_{\rm top}\left(\left\{x\in X:\lim_{n\to\infty}\frac{1}{n}\sum_{k=0}^{n-1}w_k\varphi(T^kx)=\alpha\right\}\right)=?
$$
This problem is motivated by Sarnak's conjecture \cite{S}. Let us recall the definition of the M\"obius sequence, $\boldsymbol{\mu}\colon\N\mapsto\{-1,0,1\}$,
$$
\boldsymbol{\mu}(n)=\begin{cases} (-1)^k & \text{ if $n$ is a product of $k$ distinct primes,}\\
0 & \text{ if there exists $a\geq2$ such that $a^2\vert n$.}
\end{cases}
$$
Sarnak's conjecture \cite{S} claims that if $T\colon X\mapsto X$ is continuous over the compact metric space $X$ with topological entropy zero then for every $x\in X$ and every continuous potential $\varphi\colon X\mapsto\R$
$$
\lim_{n\to\infty}\frac{1}{n}\sum_{k=0}^{n-1}\boldsymbol{\mu}(k)\varphi(T^kx)=0.
$$
Even though Sarnak's conjecture has been verified for various special dynamical systems (e.g. rotations on the circle, automorphism of the torus with entropy zero etc.), it is still widely open in general. We refer to \cite{FKL2018} for a survey of many recent results on Sarnak conjecture.
El Abdalaoui, Ku\l aga-Przymus, Lema\'nczyk and de la Rue \cite{AKLd} showed a Birkhoff-type ergodic theorem with M\"obius weight.
\begin{thm*}
Let $T$ be an automorphism of a standard Borel probability space $(X,\mathcal{B},\nu)$ and let $f\in L^1(X,\mathcal{B},\nu)$. Then, for $\nu$-almost every $x\in X$, we have
$$
\lim_{n\to\infty}\frac{1}{n}\sum_{k=0}^{n-1}\boldsymbol{\mu}(k)f(T^{k}(x))=0.
$$
\end{thm*}
Fan \cite{F} proved a similar result for a more general family of sequences, such as those of Davenport type. Hence, the usual method of calculating the spectrum for weighted Birkhoff averages, that is, to show that it is equal to the supremum of the entropy of invariant measures, is not applicable. This paper is devoted to presenting a method which allows us to calculate the spectrum. In his recent preprint, Fan \cite{F2} studied the same question, but with entirely different methods. We will point out the main differences between our results and his.
\section{Results}
In the rest of the paper, we restrict our interest to the full shift space. That is, let $\mathcal{A}=\{1,\ldots,K\}$ be a finite alphabet, and let $\Sigma=\mathcal{A}^\N$. Let us denote the left-shift operator on $\Sigma$ by $\sigma$. Denote $\Sigma_n$ the set of $n$-length finite words. Moreover, denote $\Sigma_*$ the set of all finite prefixes of the infinite words in $\Sigma$. For an $\ii\in\Sigma_*$, denote $|\ii|$ the length of $\ii$ and let $[\ii]$ denote the corresponding cylinder set, that is, $[\ii]:=\{\jj\in\Sigma:\jj|_{|\ii|}=\ii\}$. We use $l(\cdot)$ to denote the level of a cylinder. The space $\Sigma$ is clearly metrisable with metric
\begin{equation}\label{eq:metric}
d(\ii,\jj)=e^{-\min\{n\geq0:i_n\neq j_n\}}.
\end{equation}
In some cases, we extend our interest to a special family of $\sigma$-invariant compact sets. Let $A$ be a $K\times K$ matrix with entries $0,1$, and we say that the set $\Sigma_A\subseteq\Sigma$ is a {\it subshift of finite type} if
$$
\Sigma_A=\{\ii=(i_0,i_1,\ldots)\in\mathcal{A}^\N:A_{i_k,i_{k+1}}=1\text{ for every }k=0,1,\ldots\}.
$$
We call the matrix $A$ the adjacency matrix. Let us denote the set of admissible words with length $n$ by $\Sigma_{A,n}$ and denote $\Sigma_{A,*}$ the set of all admissible words. Without loss of generality, we may assume that $\Sigma_{A,1}=\mathcal{A}$. Moreover, we say that $\Sigma_A$ is {\it aperiodic and irreducible} if there exists $r\geq1$ such that every entry of $A^r$ is strictly positive.
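The aperiodicity and irreducibility condition can be checked directly on the adjacency matrix; the following sketch (our illustration, using the standard golden-mean shift rather than an example from the paper) finds the least $r$ with $A^r>0$ entrywise:

```python
# Primitivity check for a 0-1 adjacency matrix: A is aperiodic and
# irreducible iff some power A^r has all entries strictly positive.
def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def is_primitive(A, r_max=None):
    """Return the least r with A^r > 0 entrywise, or None if no such r."""
    n = len(A)
    if r_max is None:
        r_max = n * n  # generous bound; Wielandt's bound is (n-1)^2 + 1
    P = A
    for r in range(1, r_max + 1):
        if all(P[i][j] > 0 for i in range(n) for j in range(n)):
            return r
        P = mat_mul(P, A)
    return None

golden_mean = [[1, 1], [1, 0]]   # forbids the word "22"
r = is_primitive(golden_mean)    # r = 2 for the golden-mean shift
```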
\subsection{Continuity of the entropy}
The first aspect of the study is the continuity of the entropy in a more general setting than weighted Birkhoff averages. That is, let $\Sigma_A$ be an aperiodic and irreducible subshift of finite type and let $\phi_i:\Sigma_A\to\R$ be a sequence of continuous potentials with uniformly decreasing variations, meaning
\[
\rho_n^{(1)} := \sup_i \sup_{\ii\in \Sigma_n} \sup_{\jj,\mathbf{k}\in[\ii]} |\phi_i(\jj)-\phi_i(\mathbf{k})|,
\]
is finite and converges to 0 as $n$ tends to $\infty$. For $\ii\in \Sigma_A$, let
$$
\overline{A}(\ii):= \limsup_{n\to\infty} \frac 1n \sum_{i=0}^{n-1} \phi_i(\sigma^i \ii),
$$
$$
\underline{A}(\ii):= \liminf_{n\to\infty} \frac 1n \sum_{i=0}^{n-1} \phi_i(\sigma^i \ii).
$$
Moreover, if the limit exists let
$$
A(\ii):= \lim_{n\to\infty} \frac 1n \sum_{i=0}^{n-1} \phi_i(\sigma^i \ii).
$$
Given $\alpha\leq\beta\in\R$, let
\[
L_A(\alpha,\beta)=\{\ii\in\Sigma_A: \underline{A}(\ii)=\alpha\text{ and }\overline{A}(\ii)=\beta \}.
\]
For short, let $L_A(\alpha):=L_A(\alpha,\alpha)$. Now we state our first main result.
\begin{thm}\label{thm:cont}
Let $\Sigma_A\subseteq \Sigma$ be an aperiodic and irreducible subshift of finite type. For every sequence $\phi_i\colon\Sigma_A\mapsto\R$ of potentials with uniformly decreasing variations, the function $\alpha\mapsto h_{\rm top}(L_A(\alpha))$ is continuous and concave over its domain, which is a (possibly empty) closed interval.
\end{thm}
In his recent preprint, Fan \cite{F2} gave upper and lower bounds for $h_{\rm top}(L_A(\alpha))$ in case of full shift by using a generalized topological pressure generated by the sequence $\phi_i$. If the pressure is sufficiently smooth then these bounds agree.
It is a natural question how large is the set of irregular points, that is, let
$$
D:=\left\{\ii\in\Sigma_A:\lim_{n\to\infty}\frac{1}{n}\sum_{k=0}^{n-1}\phi_k(\sigma^k\ii)\text{ does not exist }\right\}.
$$
\begin{thm} \label{thm:contgen}
Let $\Sigma_A\subseteq \Sigma$ be an aperiodic and irreducible subshift of finite type. Let $\phi_i\colon\Sigma_A\mapsto\R$ be a sequence of potentials with uniformly decreasing variations.
Assume that $A(\ii)$ takes at least two possible values, that is, the domain of the function $\alpha\mapsto h_{\rm top}(L_A(\alpha))$ is a nontrivial interval. Then
\[
h_{\rm top}(D) = h_{\rm top}(\Sigma_A).
\]
\end{thm}
\subsection{Random weights} Let us now extend our symbolic space $\Sigma=\mathcal{A}^\N$. Namely, let $\Lambda=\{1,\ldots,N\}$ be another finite alphabet, and let $\Omega=\Lambda^\N$ be the corresponding full shift space. Let us define the extended symbolic space $\Gamma:=\Omega\times\Sigma$. As an abuse of notation, we denote the left-shift operator on $\Omega$ and $\Gamma$ by $\sigma$ too. Adapting the notations for $\Omega$ and $\Gamma$, denote $\Omega_n$ and $\Gamma_n$ the sets of $n$-length finite words, and denote $\Omega_*$ and $\Gamma_*$ the sets of all finite words. The spaces $\Omega,\Sigma$ and $\Gamma$ are clearly metrisable with the same metric defined in \eqref{eq:metric}. For short, denote $\ii\wedge\jj=\min\{n\geq0:i_n\neq j_n\}$.
For an aperiodic and irreducible subshift of finite type $\Sigma_A\subseteq\Sigma$, the set $\Gamma_A=\Omega\times\Sigma_A$ is an aperiodic and irreducible subshift of finite type as well. Denote the set of finite admissible words by $\Gamma_{A,*}$ and those of length $n$ by $\Gamma_{A,n}$. Let $f\colon\Gamma_A\mapsto\R^d$ be a continuous potential. For a given sequence $\ww\in\Omega$ and $\alpha\in\R^d$ let
$$
E_\ww(\alpha):=\left\{\ii\in\Sigma_A:\lim_{n\to\infty}\frac{1}{n}\sum_{k=0}^{n-1}f(\sigma^k\ww,\sigma^k\ii)=\alpha\right\}.
$$
Our goal is to determine the topological entropy of $E_\ww(\alpha)$, at least for the case of typical $\ww\in\Omega$. In order to do so, we need to introduce further regularity properties on $f$ and on the choice of $\ww$.
We say that the potential $f\colon\Omega\times\Sigma_A\mapsto\R^d$ has {\it bounded variation} if
$$
\sum_{k=0}^\infty\max_{\substack{(\ww,\ii),(\zz,\jj)\in\Gamma_{A,*}:\\(\ww,\ii)\wedge(\zz,\jj)=k}}\|f(\ww,\ii)-f(\zz,\jj)\|<\infty.
$$
Let $\nu$ be a $\sigma$-invariant ergodic measure on $\Omega$. We say that $\nu$ is {\em quasi-Bernoulli} if there exists $C>0$ such that for every $\ww,\zz\in\Omega_*$ with $\ww\zz\in\Omega_*$
$$
C^{-1}\nu([\ww])\nu([\zz])\leq\nu([\ww\zz])\leq C\nu([\ww])\nu([\zz]).
$$
Denote $\Pi$ the natural projection $\Pi\colon\Omega\times\Sigma\mapsto\Omega$, that is, $\Pi(\ww,\ii)=\ww$. Denote $\mathcal{E}_\nu(\Gamma)$, $\mathcal{E}_\nu(\Gamma_A)$ the set of ergodic $\sigma$-invariant measures on $\Gamma$ and $\Gamma_A$ respectively, whose marginal is $\nu$, i.e., $\Pi_*\mu=\nu$. Denote $\mathcal{M}_\nu(\Gamma)$ and $\mathcal{M}_\nu(\Gamma_A)$ the set of $\sigma$-invariant measures on $\Gamma$ and $\Gamma_A$ with marginal $\nu$. Let
\begin{equation}\label{eq:defpa}
\mathcal{P}_A=\{\alpha\in\R^d:\text{ there exists }\mu\in\mathcal{M}_\nu(\Gamma_A)\text{ such that }\int f d\mu=\alpha\}.
\end{equation}
Denote the relative interior of $\mathcal{P}_A$ by $\mathcal{P}_A^o$.
Moreover, let us define the conditional pressure of a potential $f:\Gamma_A\mapsto\R$ by
\begin{equation}\label{eq:condpresdefdef}
P_{\nu}(f)=\lim_{n\to\infty}\frac{1}{n}\int\log\sum_{\ii\in\Sigma_n}\sup_{\jj\in[\ii]}e^{S_nf(\ww,\jj)}d\nu(\ww),
\end{equation}
where $S_nf=f+f\circ\sigma+\cdots+f\circ\sigma^{n-1}$ and $\log$ is taken in the base $e$. Throughout the paper, we will use the convention that $0\cdot\log0=0$. Now, we can formalise our second theorem.
\begin{thm}\label{thm:typmain} Let $\Sigma_A\subseteq\Sigma$ be an aperiodic and irreducible subshift of finite type, and let $\nu$ be a quasi-Bernoulli $\sigma$-invariant ergodic measure on $\Omega$. Moreover, let $f\colon\Omega\times\Sigma_A\mapsto\R^d$ be a continuous map with bounded variation. Then for every $\alpha\in\mathcal{P}^o_A$ and for $\nu$-almost every $\ww\in\Omega$,
\[
\begin{split}
h_{\rm top}(E_{\ww}(\alpha))&=\sup\{h_\mu:\mu\in\mathcal{E}_\nu(\Gamma_A)\text{ and }\int fd\mu=\alpha\}-h_\nu\\
&=\sup\{h_\mu:\mu\in\mathcal{M}_\nu(\Gamma_A)\text{ and }\int f d\mu=\alpha\}-h_\nu\\
&=\inf_{\p\in\R^d}P_\nu(\langle\p,f-\alpha\rangle).
\end{split}
\]
Furthermore, there exists $\alpha_0\in\R^d$ such that for $\nu$-almost every $\ww$,
\begin{equation}\label{eq:max}
h_{\rm top}(E_\ww(\alpha_0))=h_{\rm top}(\Sigma_A).
\end{equation}
\end{thm}
Combining Theorem~\ref{thm:cont} and Theorem~\ref{thm:typmain} we get the following result for real valued potentials.
\begin{thm}\label{cor:main}
Let $\Sigma_A\subseteq\Sigma$ be an aperiodic and irreducible subshift of finite type, and let $\nu$ be a quasi-Bernoulli $\sigma$-invariant ergodic measure on $\Omega$. Moreover, let $f\colon\Omega\times\Sigma_A\mapsto\R$ be a continuous map with bounded variation. Then for $\nu$-almost every $\ww\in\Omega$,
\[
\begin{split}
h_{\rm top}(E_{\ww}(\alpha))&=\sup\{h_\mu:\mu\in\mathcal{E}_\nu(\Gamma_A)\text{ and }\int f d\mu=\alpha\}-h_\nu\\
&=\sup\{h_\mu:\mu\in\mathcal{M}_\nu(\Gamma_A)\text{ and }\int f d\mu=\alpha\}-h_\nu\\
&=\inf_{p\in\R}\left(P_\nu(p\cdot f)-\alpha\cdot p\right)\text{ for every $\alpha\in\R$}.
\end{split}
\]
Moreover, for $\nu$-almost every $\ww$, the map $\alpha\mapsto h_{\rm top}(E_\ww(\alpha))$ is continuous and concave over its domain $\mathcal{P}_A$.
\end{thm}
We note that we define the supremum over an empty set as $-\infty$ and the topological entropy of an empty set as $-\infty$.
Fan~\cite{F2} proved similar results in his recent preprint. Namely, he showed a version of Theorem~\ref{cor:main} for full shifts with the choice $\phi_k(\ii)=w_k\varphi(\ii)$, where $(w_k)_k$ is an ergodic sequence of real random variables or deduced from a uniquely ergodic dynamical system, and $\varphi$ depends only on a finite number of coordinates. In these cases, he shows analyticity of the conditional topological pressure, while our result only gives differentiability.
\subsection{Potentials depending on the first coordinate} Now, we state the second version of our main theorem. Here we assume that $f\colon\Omega\times\Sigma\mapsto\R^d$ depends only on the first symbol, that is, $f(\ww,\ii)=f_{w_0,i_0}$. Then for a $\ww\in\Omega$,
$$
E_\ww(\alpha):=\left\{\ii\in\Sigma:\lim_{n\to\infty}\frac{1}{n}\sum_{k=0}^{n-1}f_{w_k,i_k}=\alpha\right\}.
$$
Let $\underline{q}=(q_1,\ldots,q_N)\in\mathcal{S}_N$ be a probability vector. We say that $\ww\in\Omega$ is {\it $\underline{q}$-frequency regular}, if
\begin{equation}\label{eq:freq}
\lim_{n\to\infty}\frac{\#\{k\in[0,n]\cap\mathbb{Z}:\omega_k=i\}}{n}=q_i\text{ for every }i=1,\ldots,N,
\end{equation}
where $\mathcal{S}_N$ denotes the $(N-1)$-dimensional simplex.
In this case, we choose $\nu$ to be the Bernoulli measure on $\Omega$. Then for a potential $f\colon\Gamma\mapsto\R$, the conditional pressure has the form
\begin{equation}\label{eq:simplepres}
P_{\underline{q}}(\langle\p,f-\alpha\rangle)=\sum_{j=1}^Nq_j\log\sum_{i=1}^Ke^{\langle\p,f_{j,i}-\alpha\rangle}.
\end{equation}
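Since \eqref{eq:simplepres} is a finite expression, it can be evaluated directly; a minimal numerical sketch with scalar $p$ (i.e.~$d=1$) and invented weights and potential values:

```python
import math

# q and f are invented illustration data: q_j are the symbol frequencies of
# the weight sequence, f[j][i] are the potential values f_{j,i}.
def conditional_pressure(q, f, p, alpha):
    """P_q(p (f - alpha)) = sum_j q_j log sum_i exp(p (f_{j,i} - alpha))."""
    return sum(qj * math.log(sum(math.exp(p * (fji - alpha)) for fji in row))
               for qj, row in zip(q, f))

q = [0.5, 0.5]
f = [[0.0, 1.0], [1.0, 0.0]]
P0 = conditional_pressure(q, f, 0.0, 0.5)   # at p = 0 this is log K = log 2
```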
Denote $\mathcal{B}_{\underline{q}}(\Gamma)$ the set of all Bernoulli measures on $\Gamma$ with marginal $\nu$. That is, let $(p_{j,i})_{j=1,i=1}^{N,K}\in\mathcal{B}_{\underline{q}}(\Gamma)\subset\mathcal{S}_{NK}$ such that $\sum_{i=1}^Kp_{j,i}=q_j$. Our third main result is as follows.
\begin{thm}\label{thm:goal} Let $\ww\in\{1,\ldots,N\}^\N$ be a $\underline{q}$-frequency regular sequence with frequencies $(q_1,\ldots,q_N)$. Then for every $\alpha\in\R$,
\[
\begin{split}
h_{\rm top}(E_\ww(\alpha))&=\sup_{(p_{j,i})\in\mathcal{B}_{\underline{q}}(\Gamma)}\left\{-\sum_{i,j}p_{j,i}\log p_{j,i}:\sum_{i,j}p_{j,i}f_{j,i}=\alpha\right\}+\sum_{i=1}^Nq_i\log q_i\\
&=\inf_{p\in\R}\left\{P_{\underline{q}}(p\cdot f)-p\alpha\right\}.
\end{split}
\]
\end{thm}
Fan \cite{F2} also gave a similar result in his recent preprint. Namely, Fan shows Theorem~\ref{thm:goal} under a weaker condition that $\varphi$ depends on finitely many coordinates but under the stronger assumption that it takes only values $\{-1,1\}$.
Now we state the corresponding version of Theorem~\ref{thm:contgen} for the frequency regular case. Similarly, let
$$
D_\ww=\left\{\ii\in\Sigma:\lim_{n\to\infty}\frac{1}{n}\sum_{k=0}^{n-1}f_{w_k,i_k}\text{ does not exist}\right\}.
$$
\begin{thm}\label{thm:irreg2} Let $\ww\in\{1,\ldots,N\}^\N$ be a $\underline{q}$-frequency regular sequence with frequencies $(q_1,\ldots,q_N)$. Suppose that $g_i=\sum_{j=1}^Nq_jf_{j,i}$ is not constant. Then
$$
h_{\rm top}(D_\ww)=\log K.
$$
\end{thm}
\subsection{Weighted Birkhoff averages with frequency regular weights} Now, we demonstrate our result on the spectrum of real valued potentials depending on the first coordinate and frequency regular weights. Here we assume again that the potentials are defined on the full shift spaces $\Sigma=\mathcal{A}^\N$ and $\Omega=\Lambda^\N$, and that the potential $\varphi\colon\Sigma\mapsto\R$ and the weight $\lambda\colon\Omega\mapsto\R$ depend only on the first symbol. Then for a $\ww\in\Omega$, let
$$
E_\ww(\alpha):=\left\{\ii\in\Sigma:\lim_{n\to\infty}\frac{1}{n}\sum_{k=0}^{n-1}\lambda_{w_k}\varphi_{i_k}=\alpha\right\}.
$$
Let $\underline{q}=(q_1,\ldots,q_N)\in\mathcal{S}_N$ be a probability vector, and let $\ww\in\Omega$ be an arbitrary $\underline{q}$-frequency regular sequence. Denote by $\varphi_{\max}=\max\{\varphi_i: 1\le i\le K \}$ and $\varphi_{\min}=\min\{\varphi_i: 1\le i\le K \}$. To avoid the trivial case, we assume $\varphi_{\max}\not=\varphi_{\min} $. Finally, let
\begin{equation}\label{eq:domain}
I=\left[\varphi_{\min}\sum_{\lambda_{j}>0}q_j\lambda_{j}+\varphi_{\max}\sum_{\lambda_{j}<0}q_j\lambda_{j}, \varphi_{\max}\sum_{\lambda_{j}>0}q_j\lambda_{j}+\varphi_{\min}\sum_{\lambda_{j}<0}q_j\lambda_{j}\right].
\end{equation}
Now we give a computable form of $h_{\rm top}(E_\ww(\alpha))$ in order to compute some examples.
\begin{thm}\label{thm: n=1}
Let $\ww\in\{1,\ldots,N\}^\N$ be a $\underline{q}$-frequency regular sequence with frequencies $(q_1,\ldots,q_N)$. Then for every $\alpha\in I$
\[
h_{\rm top}(E_\ww(\alpha))=
\sum_{j=1}^Nq_j\log\sum_{i=1}^Ke^{p(\lambda_{j}\varphi_i-\alpha)},
\]
where $p$ is the unique solution of the equation
\begin{equation}
\sum_{j=1}^{N} q_j\lambda_j \frac{\sum_{i=1}^{K} \varphi_ie^{p\lambda_j \varphi_i} }{\sum_{i=1}^{K}e^{p\lambda_j \varphi_i}}=\alpha.
\end{equation}
Moreover, if $\alpha\notin I$ then $\inf_{p}P_{\underline{q}}(p(\lambda\varphi-\alpha))=-\infty$; in particular, there is no $p^*\in\R$ at which the infimum is attained.
\end{thm}
\begin{proof}
Let $\alpha\in I$. For sake of simplicity, let
$P(p)=P_{\underline{q}}(p(\lambda\varphi-\alpha))$. It is easy to check by \eqref{eq:simplepres} that
$$
P(p)=\sum_{j=1}^{N}q_j\log \sum_{i=1}^{K}e^{p\lambda_j \varphi_i}-p\alpha.
$$
It follows that
$$
P'(p)=\sum_{j=1}^{N}q_j \lambda_j \frac{\sum_{i=1}^{K} \varphi_ie^{p\lambda_j \varphi_i} }{\sum_{i=1}^{K}e^{p\lambda_j \varphi_i}}-\alpha,
$$
and
$$
P''(p)=\sum_{j=1}^{N}q_j \lambda_j^2 \frac{(\sum_{i=1}^{K} \varphi_i^2e^{p\lambda_j \varphi_i})(\sum_{i=1}^{K}e^{p\lambda_j \varphi_i})-(\sum_{i=1}^{K} \varphi_ie^{p\lambda_j \varphi_i})^2 }{(\sum_{i=1}^{K}e^{p\lambda_j \varphi_i})^2}.
$$
Since $\varphi_{\max}\not=\varphi_{\min}$, by the Cauchy--Schwarz inequality we see that $P''(p)>0$ for all $p\in \R$. A simple computation shows that
$$
P'(-\infty)=\varphi_{\min}\sum_{\lambda_{j}>0}q_j\lambda_{j}+\varphi_{\max}\sum_{\lambda_{j}<0}q_j\lambda_{j}-\alpha<0,
$$
and
$$P'(+\infty)=\varphi_{\max}\sum_{\lambda_{j}>0}q_j\lambda_{j}+\varphi_{\min}\sum_{\lambda_{j}<0}q_j\lambda_{j}-\alpha>0.
$$
Thus $P'(p)=0$ has a unique solution, at which $P$ attains its minimum.
Now let $\alpha\notin I$. It is easy to calculate that
$$
P(-\infty)=\lim\limits_{p\to -\infty} p(\varphi_{\min}\sum_{\lambda_{j}>0}q_j\lambda_{j}+\varphi_{\max}\sum_{\lambda_{j}<0}q_j\lambda_{j}-\alpha),
$$
and
$$
P(+\infty)=\lim\limits_{p\to +\infty} p(\varphi_{\max}\sum_{\lambda_{j}>0}q_j\lambda_{j}+\varphi_{\min}\sum_{\lambda_{j}<0}q_j\lambda_{j}-\alpha).
$$
Thus $\inf_{p}P(p)=-\infty$.
\end{proof}
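Since $P''>0$, the critical point is unique and can be located by bisection on $P'$; the following numerical companion to the proof uses invented weights and potential values:

```python
import math

# Invented illustration data: lambda_j weight values and varphi_i potential values.
q   = [0.5, 0.5]
lam = [1.0, -1.0]
phi = [0.0, 1.0]

def Pprime(p, alpha):
    """P'(p) from the proof above (inner sums over the alphabet of phi)."""
    total = -alpha
    for qj, lj in zip(q, lam):
        w = [math.exp(p * lj * fi) for fi in phi]
        total += qj * lj * sum(fi * wi for fi, wi in zip(phi, w)) / sum(w)
    return total

def solve_p(alpha, lo=-50.0, hi=50.0, tol=1e-12):
    """Bisection for the unique root of P'(p) = 0 (P' is increasing)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if Pprime(mid, alpha) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def entropy(alpha):
    """h_top(E_w(alpha)) via the formula of the theorem."""
    p = solve_p(alpha)
    return sum(qj * math.log(sum(math.exp(p * (lj * fi - alpha)) for fi in phi))
               for qj, lj in zip(q, lam))

h0 = entropy(0.0)   # here alpha_0 = 0, so this equals log K = log 2
```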
\begin{ex}
Let us now consider again the M\"obius sequence with the potential $\varphi(\ii)=i_0$ for $\ii\in\Sigma=\{0,\ldots,N-1\}^\N$. The M\"obius function is frequency regular with
\[
\begin{split}
\lim_{n\to\infty}\frac{\#\{0\leq i\leq n-1:\boldsymbol{\mu}(i)=1\}}{n}=\lim_{n\to\infty}\frac{\#\{0\leq i\leq n-1:\boldsymbol{\mu}(i)=-1\}}{n}&=\frac{3}{\pi^2}\text{ and }\\
\lim_{n\to\infty}\frac{\#\{0\leq i\leq n-1:\boldsymbol{\mu}(i)=0\}}{n}&=1-\frac{6}{\pi^2},
\end{split}
\]
see for example \cite{CS}. Applying Theorem~\ref{thm: n=1} for $\varphi\colon\{0,\ldots,N-1\}^\N\mapsto\R$ with $\varphi(\ii)=i_0$, we get
$$
h_{\rm top}(E_{\boldsymbol{\mu}}(\alpha))=\left(1-\frac{6}{\pi^2}\right)\log(N)+\frac{6}{\pi^2}\log\left(\frac{e^{pN}-1}{e^p-1}\right)-\left((N-1)\frac{3}{\pi^2}+\alpha\right)p,
$$
where $p$ is the unique solution of
$$
\frac{(e^{(N+1)p}-1)(N-1)-(N+1)(e^{N p}-e^p)}{(e^{Np}-1)(e^p-1)}=\frac{\pi^2\alpha}{3},\text{ for }\alpha\in\left[\frac{-(N-1)3}{\pi^2},\frac{(N-1)3}{\pi^2}\right].
$$
\end{ex}
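As a numerical sanity check (ours, not the authors'), the closed-form expression above can be compared against a direct minimisation of the pressure for a small alphabet; the values of $N$ and $\alpha$ below are arbitrary:

```python
import math

# q encodes the quoted Mobius symbol frequencies; N and alpha are arbitrary.
N = 3
q = {1: 3 / math.pi**2, -1: 3 / math.pi**2, 0: 1 - 6 / math.pi**2}

def P(p, alpha):
    """P_q(p (lambda*phi - alpha)) for phi(i) = i on {0, ..., N-1}."""
    return sum(qj * math.log(sum(math.exp(p * (lam * i - alpha))
                                 for i in range(N)))
               for lam, qj in q.items())

def closed_form(p, alpha):
    return ((1 - 6 / math.pi**2) * math.log(N)
            + (6 / math.pi**2) * math.log((math.exp(p * N) - 1)
                                          / (math.exp(p) - 1))
            - ((N - 1) * 3 / math.pi**2 + alpha) * p)

alpha = 0.1
grid = [k * 1e-3 for k in range(-4000, 4001) if k != 0]  # avoid p = 0
p_star = min(grid, key=lambda p: P(p, alpha))
h = P(p_star, alpha)   # numerical value of the spectrum at alpha
```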
A corollary of the results above is that non-degenerate weights and potentials give a non-degenerate weighted spectrum.
\begin{cor}
Let $\ww\in\Omega$ be a frequency regular sequence with frequencies $(q_1,\ldots,q_N)$ with non-degenerate weights, i.e. $\sum_{j=1}^Nq_j|\lambda_j|>0$. Let $\varphi\colon\Sigma\mapsto\R$ be a potential depending only on the first coordinate. Then there exists $\alpha_0\in I$ such that $h_{\rm top}(E_\ww(\alpha_0))=\log K$. Moreover, the domain $I$ is a non-degenerate closed interval unless the potential $\varphi(\ii)=\varphi_{i_0}$ is constant. In particular, either the weighted Birkhoff average at every point exists and equals $\alpha_0$ or the set of points at which the weighted Birkhoff average does not exist has full topological entropy.
\end{cor}
\begin{proof}
The first assertion follows by Theorem~\ref{thm:goal} for $f_{j,i}=\lambda_j\varphi_i$ with the choice $p_{j,i}=\frac{q_j}{K}$ and $\alpha_0=\left(\sum_{i=1}^K\frac{\varphi_i}{K}\right)\left(\sum_{j=1}^Nq_j\lambda_j\right)$. Moreover, \eqref{eq:domain}, Theorem~\ref{thm: n=1} and the continuity of the spectrum gives the second claim by some algebraic manipulation. The proof can be finished by applying Theorem~\ref{thm:irreg2}.
\end{proof}
The difference between the usual Birkhoff averages and weighted Birkhoff averages is shown by the following example:
\begin{ex} \label{ex:alter}
On the full shift system $\Sigma=\{0,1\}^\N$ there exist a potential $\varphi\colon\Sigma\mapsto\R$ depending only on the first symbol and a bounded sequence of weights $\ww=(w_i)_i$ (which is not frequency regular) such that
\begin{itemize}
\item[--] there exists only one point $\alpha_0\in\R$ which is a possible value of the weighted Birkhoff average,
\item[--] $0<h_{\rm top} (E_\ww(\alpha_0)) < \log 2$.
At all the points in $\Sigma\setminus E_\ww(\alpha_0)$ the weighted Birkhoff average does not exist.
\end{itemize}
\end{ex}
In particular, in order to have a non-degenerate weighted spectrum, the frequency regularity of the weights is in some sense necessary. The proof of the example will be given in the last section.
\section{Preliminaries}
\subsection{Topological entropy} Let us recall here the definition of topological entropy on the shift space. Let $\Sigma=\mathcal{A}^\N$ be the symbolic space. Let $E\subset \Sigma$.
Define
$$\mathcal{H}^s_r(E):=\inf_{\alpha} \sum_{C\in \alpha} e^{-sl(C)}$$
where the infimum is taken over all covers $\alpha$ of $E$ consisting of cylinders of levels larger than $r$. Clearly, $\mathcal{H}^s_r(E)$ is increasing as a function of $r$. We define
$$\mathcal{H}^s(E):=\lim_{r\to \infty}\mathcal{H}^s_r(E)\in [0,+\infty].$$
The \textit{topological entropy} of $E$ is the critical value of $s$ at which $\mathcal{H}^s(E)$ jumps from $+\infty$ to $0$, that is,
$$
h_{\rm top}(E):=\inf\{ s\ge 0:\mathcal{H}^s(E)<+\infty \}.
$$
The upper bound of $h_{\rm top}(E)$ is given by
\begin{equation}\label{eq:topentbasic}
h_{\rm top} (E) \leq \liminf_{n\to\infty} \frac 1n \log \#\{\ii\in\Sigma_n:[\ii]\cap E\neq\emptyset\}.
\end{equation}
In fact, the reason is that we can always take a cover with cylinders of level $n$ when estimating $\mathcal{H}_n^s(E)$. If $E$ is a closed $\sigma$-invariant set, then equality holds. However, equality does not necessarily hold in general, because there might exist a better cover (in the sense that we could get a smaller value of $\sum_{C\in \alpha} e^{-sl(C)}$) than the covers consisting of cylinders of level $n$.
To get the lower bound, one has the following version of Frostman's Lemma.
\begin{lem}\label{lem:Forstman}
Let $E\subset \Sigma$. Suppose that there exist a probability measure $\mu$ on $E$ and a constant $c$ such that for every cylinder $C$, we have $\mu(C\cap E) \leq ce^{-sl(C)}$. Then $h_{\rm top} (E) \geq s$.
\end{lem}
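As a simple sanity check of these two bounds (a standard computation, included only for illustration), consider the full shift $E=\Sigma=\mathcal{A}^\N$ with $\#\mathcal{A}=K$. Covering $\Sigma$ by all $K^n$ cylinders of a fixed level $n>r$ gives $\mathcal{H}^s_r(\Sigma)\leq K^ne^{-sn}=e^{n(\log K-s)}$, which tends to $0$ as $n\to\infty$ for every $s>\log K$; hence $\mathcal{H}^s(\Sigma)=0$ and $h_{\rm top}(\Sigma)\leq\log K$. On the other hand, the uniform measure $\eta$ on $\Sigma$ satisfies $\eta(C)=K^{-l(C)}=e^{-l(C)\log K}$ for every cylinder $C$, so Lemma~\ref{lem:Forstman} yields $h_{\rm top}(\Sigma)\geq\log K$, and therefore $h_{\rm top}(\Sigma)=\log K$.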
\subsection{Pinsker's formula} Let us recall that $\Pi$ is the natural projection $\Pi\colon\Omega\times\Sigma\mapsto\Omega$, that is, $\Pi(\ww,\ii)=\ww$. Let $\mu$ be an ergodic $\sigma$-invariant measure on $\Gamma$. Clearly, if $\mu$ is $\sigma$-invariant and ergodic then $\Pi_*\mu$ is $\sigma$-invariant and ergodic on $\Omega$ too. By Shannon-McMillan-Breiman's Theorem,
\begin{equation}\label{eq:shannon}
\begin{split}
h_\mu&=\lim\frac{-1}{n}\log\mu[(\ww,\ii)|_n]\text{ for $\mu$-almost every $(\ww,\ii)$,}\\
h_{\Pi_*\mu}&=\lim\frac{-1}{n}\log\Pi_*\mu[\ww|_n]\text{ for $\Pi_*\mu$-almost every $\ww$.}
\end{split}
\end{equation}
Denote by $\xi$ the partition generated by the inverse branches $\Pi^{-1}(\ww)=\{\ww\}\times\Sigma=\xi(\ww)$. By Rohlin's Disintegration Theorem, there exists a family of probability measures $\{\mu_\ww^{\xi}\}$ such that
\begin{enumerate}
\item $\mu_\ww^{\xi}$ is supported on $\xi(\ww)$;
\item for every $A\in\mathcal{B}_\Gamma$, the map $\ww\mapsto\mu_\ww^{\xi}(A)$ is $\mathcal{B}_\Omega$-measurable;
\item $\mu=\int\mu_\ww^{\xi}d\Pi_*\mu(\ww)$.
\end{enumerate}
The family of measures $\{\mu_\ww^{\xi}\}$ is unique up to a set of zero measure. Let us define the conditional entropy of $\mu_\ww^\xi$ by
$$
h_{\mu}^\xi:=\int-\log\mu_{\Pi(\ww,\ii)}^\xi([i_0])d\mu(\ww,\ii).
$$
The following theorem is the corresponding version of Pinsker's formula \cite{R}, which we need in order to establish the relation between the conditional entropy and the entropy of the projection.
\begin{thm}[Pinsker's formula]\label{thm:entconv}
If $\mu$ is an ergodic $\sigma$-invariant measure then for $\Pi_*\mu$-almost every $\ww$, we have
\begin{equation}\label{eq:condshannon}
\lim_{n\to\infty}\frac{-1}{n}\log\mu_\ww^\xi([\ii|_n])=h_{\mu}^\xi\text{ for $\mu_\ww^\xi$-a.e. $\ii$.}
\end{equation}
Moreover,
$$
h_\mu=h_{\Pi_*\mu}+h_{\mu}^\xi.
$$
\end{thm}
For completeness, we give a proof here. Observe that the map $(\ww,\ii)\mapsto-\log\mu_{\Pi(\ww,\ii)}^\xi([i_0])$ is in $L^1(\Gamma,\mu)$. Indeed,
\[
\begin{split}
h_{\mu}^\xi&=\int-\log\mu_{\Pi(\ww,\ii)}^\xi([i_0])d\mu(\ww,\ii)\\
&=\int_0^\infty\mu(\{(\ww,\ii):-\log\mu_{\Pi(\ww,\ii)}^\xi([i_0])>x\})dx\\
&=\int_0^\infty\int\ind_{\{-\log\mu_{\Pi(\ww,\ii)}^\xi([i_0])>x\}}(\ww,\ii)d\mu(\ww,\ii)dx\\
&=\sum_{k\in\mathcal{A}}\int_0^\infty\int\ind_{\{-\log\mu_{\Pi(\ww,\ii)}^\xi([i_0])>x\}}(\ww,\ii)\mu^{\xi}_{\Pi(\ww,\ii)}([k])d\mu(\ww,\ii)dx\\
&\leq\sum_{k\in\mathcal{A}}\int_0^\infty\int e^{-x}d\mu(\ww,\ii)dx=K.\\
\end{split}
\]
Let us denote the partition with respect to cylinders on $\Gamma$ by $\mathfrak{P}$. Then clearly,
\begin{equation}\label{eq:unique}
\sigma_*\left(\mu_{\ww}^{\xi\vee\mathfrak{P}}\right)=\mu_{\sigma\ww}^\xi.
\end{equation}
Indeed, $\sigma_*\mu_{\ww}^{\xi\vee\mathfrak{P}}$ is supported on $\Pi^{-1}(\sigma\ww)$, and by the definition of conditional measures,
\[
\begin{split}
\int\sigma_*\left(\mu_{\ww}^{\xi\vee\mathfrak{P}}\right)d\Pi_*\mu(\ww)&=\sigma_*\int\left(\mu_{\ww}^{\xi\vee\mathfrak{P}}\right)d\Pi_*\mu(\ww)=\sigma_*\mu=\mu\\
&=\int\mu_{\ww}^{\xi}d\Pi_*\mu(\ww)=\int\mu_{\sigma\ww}^{\xi}d\Pi_*\mu(\ww).
\end{split}
\]
Thus, \eqref{eq:unique} follows by the uniqueness of the conditional measures.
\begin{proof}[Proof of Theorem~\ref{thm:entconv}]
Let us first show the first assertion of the theorem. By \eqref{eq:unique}, we have
\[
\begin{split}
\mu^\xi_{\Pi(\ww,\ii)}([\ii|_n])&=\mu_{\Pi(\ww,\ii)}^\xi([\ii|_1])\prod_{k=2}^{n}\dfrac{\mu_{\Pi(\ww,\ii)}^\xi([\ii|_{k}])}{\mu_{\Pi(\ww,\ii)}^\xi([\ii|_{k-1}])}\\
&=\mu_{\Pi(\ww,\ii)}^\xi([\ii|_1])\prod_{k=2}^{n}\mu_{\Pi(\ww,\ii)}^{\xi\vee\mathcal{P}_{k-1}}(\sigma^{-(k-1)}[\sigma^{k-1}\ii|_1])\\
&=\mu_{\Pi(\ww,\ii)}^\xi([\ii|_1])\prod_{k=2}^{n}\mu_{\Pi\circ\sigma^{k-1}(\ww,\ii)}^{\xi}([\sigma^{k-1}\ii|_1]).
\end{split}\]
Taking logarithm and applying Birkhoff's Ergodic Theorem, we get $\frac{-1}{n}\log\mu^\xi_{\Pi(\ww,\ii)}([\ii|_n])=\frac{1}{n}\sum_{k=0}^{n-1}-\log\mu_{\Pi\circ\sigma^{k}(\ww,\ii)}^{\xi}([\sigma^{k}\ii|_1])\to h_\mu^\xi$ for $\mu$-almost every $(\ww,\ii)$. Thus, \eqref{eq:condshannon} follows by Fubini's Theorem.
Now, we show that $h_\mu=h_{\Pi_*\mu}+h_{\mu}^\xi$. By Egorov's Theorem, for every $\varepsilon>0$ there exists $J_1\subset\Gamma$ such that $\mu(J_1)>1-\varepsilon$ and the convergences \eqref{eq:shannon} and \eqref{eq:condshannon} are uniform. That is, there exists $C>0$ such that for every $n\geq1$ and every $(\ww,\ii)\in J_1$
$$
C^{-1}e^{-h_{\Pi_*\mu}n}\leq\Pi_*\mu([\ww|_n])\leq Ce^{-h_{\Pi_*\mu}n}\text{ and }C^{-1}e^{-nh_\mu^\xi}\leq\mu_\ww^\xi([\ii|_n])\leq Ce^{-nh_\mu^\xi}.
$$
By Lebesgue's Density Theorem and Egorov's Theorem, there exists $J_2\subset J_1$ such that $\mu(J_2)>1-2\varepsilon$ and there exists $N\geq1$ such that for every $(\ww,\ii)\in J_2$ and $n\geq N$
$$
\mu(J_1\cap[(\ww,\ii)|_n])\geq\frac{1}{2}\mu([(\ww,\ii)|_n])\text{ and }\mu_\ww^\xi(J_1\cap[(\ww,\ii)|_n])\geq\frac{1}{2}\mu_\ww^\xi([(\ww,\ii)|_n]).
$$
Thus, for every $(\ww,\ii)\in J_2$ and every $n\geq N$
\[
\begin{split}
\mu([(\ww,\ii)|_n])&\leq2\mu(J_1\cap[(\ww,\ii)|_n])\\
&=2\int\mu_{\Pi(\ww,\ii)}^\xi(J_1\cap[(\ww,\ii)|_n])d\mu(\ww,\ii)\\
&=2\int_{\Pi^{-1}[\ww|_n]}\mu_{\Pi(\ww,\ii)}^\xi(J_1\cap[(\ww,\ii)|_n])d\mu(\ww,\ii)\\
&\leq 2\Pi_*\mu([\ww|_n])Ce^{-nh_\mu^\xi}\leq 2C^2e^{-n(h_{\Pi_*\mu}+h_\mu^\xi)}.
\end{split}
\]
On the other hand, for every $(\ww,\ii)\in J_2$
\[
\begin{split}
\mu([(\ww,\ii)|_n])&\geq\mu(J_1\cap[(\ww,\ii)|_n])\\
&=\int\mu_{\Pi(\ww,\ii)}^\xi(J_1\cap[(\ww,\ii)|_n])d\mu(\ww,\ii)\\
&=\int_{\Pi^{-1}[\ww|_n]}\mu_{\Pi(\ww,\ii)}^\xi(J_1\cap[(\ww,\ii)|_n])d\mu(\ww,\ii)\\
&\geq \frac{1}{2}\int_{\Pi^{-1}[\ww|_n]}\mu_{\Pi(\ww,\ii)}^\xi([(\ww,\ii)|_n])d\mu(\ww,\ii)\\
&\geq \frac{1}{2}\Pi_*\mu([\ww|_n])C^{-1}e^{-nh_\mu^\xi}\geq \frac{1}{2}C^{-2}e^{-n(h_{\Pi_*\mu}+h_\mu^\xi)}.
\end{split}
\]
Thus, the statement follows by Shannon-McMillan-Breiman's Theorem.
\end{proof}
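A quick consistency check for Pinsker's formula (a standard special case): let $\mu=\nu\times\eta$ be the product of a $\sigma$-invariant ergodic measure $\nu$ on $\Omega$ and the uniform Bernoulli measure $\eta$ on $\Sigma$. Then $\Pi_*\mu=\nu$ and $\mu_\ww^\xi=\eta$ for $\Pi_*\mu$-almost every $\ww$, so
$$
h_\mu^\xi=\int-\log\eta([i_0])\,d\mu(\ww,\ii)=\log K,
$$
and the formula reads $h_\mu=h_\nu+\log K$, as expected for the direct product with the full shift.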
\subsection{Slicing Theorem for entropy} The following proposition is the corresponding version of Marstrand's Generalised Slicing Theorem for topological entropy, which can be found in \cite[Theorem~3.3.1]{BP}.
\begin{prop}\label{prop:slicing}
Let $E\subset\Gamma$ and let $\nu$ be a measure on $\Omega$ such that there exists $c>0$ and $s>0$ such that for every $n\geq1$, $\nu([\ww|_n])\leq ce^{-ns}$. Then
$$
h_{\rm top}(E\cap\xi(\ww))\leq\max\{h_{\rm top}(E)-s,0\}\text{ for $\nu$-a.e. $\ww$.}
$$
\end{prop}
For completeness, we give a proof here.
\begin{proof}
Without loss of generality, we may assume that $h_{\rm top}(E)\geq s$, otherwise $\nu(\Pi(E))=0$ and the statement is trivial.
Let $\beta_n=h_{\rm top}(E)+1/n$ and $N\geq1$. Then there exists a cover $\alpha$ of $E$ with cylinders such that $l(C)>N$ for every $C\in\alpha$ and $\sum_{C\in\alpha}e^{-\beta_nl(C)}<1/N$. Moreover, let us define
$$
f(\ww,\ii)=\sum_{C\in\alpha}\ind_C(\ww,\ii)e^{-(\beta_n-s-\log K)l(C)}.
$$
Denote by $\eta$ the uniform measure on $\Sigma$. Then
\[
\begin{split}
\iint f(\ww,\ii)d\nu(\ww)d\eta(\ii)&=\sum_{C\in\alpha}\nu\times\eta(C)e^{-(\beta_n-s-\log K)l(C)}\\
&\leq\sum_{C\in\alpha}ce^{-sl(C)}K^{-l(C)}e^{-(\beta_n-s-\log K)l(C)}\\
&=c\sum_{C\in\alpha}e^{-\beta_nl(C)}<\frac{c}{N}.
\end{split}
\]
For every $C\in\alpha$ let us write $C=C_{1}\times C_2$, where $C_1$ and $C_2$ are the corresponding cylinders on $\Omega$ and $\Sigma$ respectively. Then
\[
\begin{split}
\iint f(\ww,\ii)d\nu(\ww)d\eta(\ii)&=\int\sum_{C\in\alpha}\ind_{C_1}(\ww)\ind_{C_2}(\ii)e^{-(\beta_n-s-\log K)l(C)}d\nu(\ww)d\eta(\ii)\\
&=\int \sum_{C\in\alpha} \ind_{C_1}(\ww)e^{-(\beta_n-s)l(C)}d\nu(\ww)\\
&\geq\int \mathcal{H}_N^{\beta_n-s}(E\cap\xi(\ww))d\nu(\ww).
\end{split}
\]
Taking $N\to\infty$ and applying Fatou's Lemma, we get\linebreak $\mathcal{H}^{\beta_n-s}(E\cap\xi(\ww))=0$ for $\nu$-almost every $\ww$, that is, $h_{\rm top}(E\cap\xi(\ww))\leq\beta_n-s$. Taking $n\to\infty$, we get the statement.
\end{proof}
\section{Continuity and concavity of the spectrum}
Let us recall the conditions and notations of Theorem~\ref{thm:cont}. That is, we assume that $\Sigma_A\subseteq\Sigma=\mathcal{A}^\N$ is an aperiodic and irreducible subshift of finite type. Moreover, let $\phi_i:\Sigma_A\to\R$ be a sequence of continuous potentials with uniformly decreasing variations.
For $\ii\in \Sigma_A$, let
$$
\overline{A}(\ii):= \limsup_{n\to\infty} \frac 1n \sum_{i=0}^{n-1} \phi_i(\sigma^i \ii)\text{ and }\underline{A}(\ii):= \liminf_{n\to\infty} \frac 1n \sum_{i=0}^{n-1} \phi_i(\sigma^i \ii).
$$
Given $\alpha\leq\beta\in\R$, let
\[
L_A(\alpha,\beta)=\{\ii\in\Sigma_A: \underline{A}(\ii)=\alpha\text{ and }\overline{A}(\ii)=\beta \}.
\]
For short, let $L_A(\alpha):=L_A(\alpha,\alpha)$. For $m,n\in \N$ with $n>m$, define
\[
B_m^n(\ii) := \sum_{i=m}^{n-1} \phi_i(\sigma^i \ii)
\]
and $A_m^n(\ii)=\frac{1}{n-m}B_m^n(\ii)$. Let
\[
\rho_n^{(2)} := \sup_{\ii\in \Sigma_{A,n}} \sup_{\jj,\mathbf{k}\in[\ii]} |A_0^n(\jj)-A_0^n(\mathbf{k})|.
\]
It is clear that
\begin{equation}
\rho_n^{(2)}\leq \frac 1n \sum_{i=1}^n \rho_i^{(1)}.
\end{equation}
Since $\rho_n^{(1)}$ converges to $0$ as $n$ tends to $\infty$, $\rho_n^{(2)}$ converges to $0$ as well.
\begin{lem}\label{lem:m n k}
Let $\varepsilon>0$ and $N\in \N$. Suppose that $|A_0^n(\ii)-\alpha|<\varepsilon$ for all $n>N$.
Then for $m,n > N$ we have
\[
|A_m^n(\ii)-\alpha| \leq \varepsilon\frac{n+m}{n-m}.
\]
\end{lem}
\begin{proof}
The statement follows simply from $(n-m) A_m^n(\ii) = n A_0^n(\ii) - mA_0^m(\ii).$
\end{proof}
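For the reader's convenience, the estimate behind this identity: subtracting $(n-m)\alpha$ from both sides gives $(n-m)(A_m^n(\ii)-\alpha)=n(A_0^n(\ii)-\alpha)-m(A_0^m(\ii)-\alpha)$, so the triangle inequality together with $|A_0^n(\ii)-\alpha|<\varepsilon$ and $|A_0^m(\ii)-\alpha|<\varepsilon$ yields
$$
|A_m^n(\ii)-\alpha|\leq\frac{n\varepsilon+m\varepsilon}{n-m}=\varepsilon\frac{n+m}{n-m}.
$$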
We recall that for an aperiodic and irreducible subshift of finite type $\Sigma_A$ there exists a constant $r$ such that for any two admissible words $\ii, \jj\in\Sigma_{A,*}$ there exists a word $\mathbf{k}$ of length $r$ such that the concatenation $\ii\mathbf{k}\jj$ is admissible; moreover, one can choose $\mathbf{k}$ depending only on the last symbol of $\ii$ and the first symbol of $\jj$. We fix such an $r$ for the rest of the section.
We will need the following technical lemma. Note that although $\phi_i$ is defined only on $\Sigma_A\subseteq\Sigma$, it can be naturally extended to $\Sigma$ in such a way that the sequence remains uniformly continuous. For instance, for every $\ii\in\Sigma$ let $n(\ii)=\sup\{n\geq0:\ii|_0^n\in\Sigma_{A,*}\}$, that is, $\ii|_0^{n(\ii)}$ is the longest admissible prefix of $\ii$, and let $\phi_i(\ii):=\max_{\jj\in[\ii|_0^{n(\ii)}]}\phi_i(\jj)$.
\begin{lem} \label{lem:techn}
Let $(q_j)_{j=1}^\infty$ be an increasing sequence of integers satisfying $q_j/j\to\infty$ and $q_{j+1}-q_j> 2r$. Let $\pi:\Sigma\to\Sigma$ be a map satisfying the following properties:
\begin{itemize}
\item[i)] if $\ii|_0^{n}=\jj|_0^{n}$ for $q_j<n\leq q_{j+1}$ then $(\pi\ii)|_{0}^{q_j}=(\pi\jj)|_0^{q_j}$,
\item[ii)] if $\ii_k\neq (\pi \ii)_k$ then $k\in \{q_j+1,\ldots, q_j+r\}$ for some $j$.
\end{itemize}
Then there exists a sequence $\rho^{(3)}_n\searrow 0$ such that for every $\ii\in \Sigma$ and for every $n$
\[
|A_0^n(\pi\ii)-A_0^n(\ii)| < \rho^{(3)}_n.
\]
Moreover, for every $X\subset \Sigma$
\[
h_{\rm top}(\pi(X))= h_{\rm top}(X).
\]
\end{lem}
\begin{proof}
Taking $j$ such that $q_j<n\leq q_{j+1}$ we get
\[
\begin{split}
|A_0^{n}(\pi \ii) - A_0^{n}(\ii)|& \leq \frac{(j+1)r}{n} \max_{i\geq0,\ii\in\mathcal{A}^\N}|\phi_i(\ii)| + \frac{1}{n}\sum_{i=1}^j \sum_{\ell=0}^{q_{i}-q_{i-1}-r}\rho^{(1)}_{\ell}+\frac{1}{n}\sum_{\ell=q_{j+1}-q_j-r-n}^{q_{j+1}-q_j-r}\rho_\ell^{(1)},\\
&\leq\frac{(j+1)r}{n} \max_{i\geq0,\ii\in\mathcal{A}^\N}|\phi_i(\ii)| + \frac{1}{n}\sum_{i=1}^j(q_{i}-q_{i-1}-r)\rho_{q_i-q_{i-1}-r}^{(2)} +\rho_n^{(2)}.
\end{split}
\]
Observe that $\frac{1}{n}\sum_{i=1}^j(q_{i}-q_{i-1}-r)\rho_{q_i-q_{i-1}-r}^{(2)}\to0$ as $n\to\infty$. Indeed, since $q_{j}-q_{j-1}-r\to\infty$ as $j\to\infty$, for every $\varepsilon>0$ there exists $J>0$ so that for every $i\geq J$ $\rho_{q_{i}-q_{i-1}-r}^{(2)}<\varepsilon$ and thus, $\frac{1}{n}\sum_{i=1}^j(q_{i}-q_{i-1}-r)\rho_{q_i-q_{i-1}-r}^{(2)}\leq \frac{q_j-q_{J-1}}{n}\varepsilon+\frac{q_J\rho_1^{(2)}}{n}$. This proves the first assertion.
To prove the second assertion, we need a lower and an upper bound. For the upper bound, notice that the image under $\pi$ of a cylinder whose level does not belong to any of the sets $\{q_j+1,\ldots, q_j+2r\}$ is contained in a cylinder of the same level. As for any set $X$ we can construct a family of covers realizing the topological entropy using only cylinders whose levels avoid the sets $\{q_j+1,\ldots, q_j+2r\}$, the images of those cylinders give us a family of covers of $\pi(X)$ realizing the same topological entropy.
For the lower bound, let $\mu$ be a probability measure supported on $X$ such that for every cylinder $C$ of level $l(C)=n$ we have
\[
\mu(C\cap X) \leq e^{-(h_{\rm top}(X)-\varepsilon)l(C)}.
\]
If $n$ does not belong to any of the sets $\{q_j+1,\ldots, q_j+2r\}$ but $q_j<n\leq q_{j+1}$, then for every cylinder $C'$ of level $n$ we have
\[
\pi_*(\mu)(C') \leq K^{(j+1)r} e^{-(h_{\rm top}(X)-\varepsilon)n}.
\]
Speaking intuitively but not quite precisely, the map $\pi$ acting on initial words of length $\leq q_{j+1}$ is at most $K^{(j+1)r}$-to-1. As $j=o(q_j)$, the factor $K^{(j+1)r}$ is subexponential in $q_j$, and thus, letting $\varepsilon\to0$, we get the lower bound from Lemma \ref{lem:Forstman}.
\end{proof}
The proof of Theorem~\ref{thm:cont} relies on the following technical proposition.
\begin{prop}\label{prop:technical}
Let $\varepsilon_n>0$ and $\alpha_n$ be sequences of reals such that\linebreak $\limsup_{n\to\infty}\alpha_n=\alpha_{\rm max}$, $\liminf_{n\to\infty}\alpha_n=\alpha_{\rm min}$ and $\lim_{n\to\infty}\varepsilon_n=0$. Moreover, assume that for every $n\geq1$ there exists a set $M_n\subset\Sigma_A$ and a positive integer $T_n>0$ such that for every $\ii\in M_n$ and $m\geq T_n$
$$
\left|\frac{1}{m}\sum_{k=0}^{m-1}\phi_k(\sigma^k\ii)-\alpha_n\right|<\varepsilon_n.
$$
Then $h_{\rm top}(L_A(\alpha_{\rm min},\alpha_{\rm max}))\geq\liminf_{n\to\infty}h_{\rm top}(M_n)$.
Moreover, in case $\lim_{n\to\infty}\alpha_n=\alpha$ then there exists a set $M\subset L_A(\alpha)$ such that
the convergence $A_0^n(\ii)\to\alpha$ is uniform on $M$ and $h_{\rm top}(M)\geq\limsup_{n\to\infty}h_{\rm top}(M_n)$.
\end{prop}
For a subset $M\subset \Sigma$, we denote by $M[a,b]=\{\ii\in \AA^{b-a+1}: \exists \jj\in M, \jj|_a^b=\ii \}$ the collection of $(b-a+1)$-words occurring in some element of $M$ starting at place $a$ and ending at place $b$. Moreover, we use the notation $Z_a^b(M)=\#M[a,b]$ for convenience. It is clear that for $a<b<c$, we have $Z_a^c(M) \leq Z_a^b(M) \cdot Z_b^c(M)$.
\begin{lem}\label{lem:1a}
Let $M$ be a set with $h_{\rm top}(M)>0$. Then for every $h<h_{\rm top}(M)$ there exists a sequence $(z_i)_{i\in \N}$ of $\N$ such that for every $z_i$ and for every $n>z_i$ we have $$\log Z_{z_i}^n(M) > (n-z_i)h.$$
\end{lem}
\begin{proof}
Indeed, if it failed then we would be able to find an increasing sequence $(n_i)_{i\in \N}$ in $\N$ such that $\log Z_{n_i}^{n_{i+1}}(M) \leq (n_{i+1}-n_i)h$, and by summing these up this would imply $\log Z_0^{n_i}(M) \leq (n_i-n_0)h+\log Z_0^{n_0}(M)$, hence $h_{\rm top}(M) \leq h$, which is a contradiction.
\end{proof}
\begin{proof}[Proof of Proposition~\ref{prop:technical}]
Let $M_k$ be the sequence of subsets and $T_k$ be as in the assumption. Moreover, let $0<\delta<\inf_k h_{\rm top}(M_k)$ be arbitrary but fixed. Then by Lemma~\ref{lem:1a}, for every $k\in\N$ there exists a sequence $(z_i^k)_{i\in\N}$ such that
\begin{equation}\label{eq:lowertop}
\log Z_{z_i^k}^n(M_k) > (n-z_i^k)(h_{\rm top}(M_k)-\delta)\text{ for every }n\geq z_{i}^k.
\end{equation}
We choose a subsequence $(N_k)_{k\in \N}$ of $\N$ satisfying the following properties:
\begin{itemize}
\item $N_0=0$, $N_k>T_{k-1}$;
\item $N_k\in (z_i^{k+1})_{i\in \N}$;
\item $\lim_{k\to\infty}\frac{N_{k+1}}{N_k}=\infty$;
\item $\log Z_0^n(M_k)\geq n(h_{\rm top}(M_k)-\delta)$ for all $n>N_k$;
\item $N_k\varepsilon_k\to\infty$.
\end{itemize}
Now, let us define a sequence $2\geq r_k>1$ and $m(k)\in\N$ such that
$$
r_k^{m(k)}=\frac{N_{k+1}}{N_{k}}\text{, }\lim_{k\to\infty}r_k=1\text{ and }\lim_{k\to\infty}(r_k-1)\varepsilon_k^{-1}=\infty.
$$
Define a sequence $(t_i^{k})_{i=0}^{m(k)}$ by $t_{i}^k=\lfloor (r_k)^iN_{k-1} \rfloor$ for $i=0,\ldots,m(k)$. It is easy to check that
$$r_k -\frac{1}{N_{k-1}}\leq\frac {t_{i+1}^k} {t_{i}^k}\le r_k+\frac{2}{N_{k-1}}\text{ for }1\le i\le m(k)-1.$$
Using $(t_i^{k})_{i=0}^{m(k)}$ as the endpoints, we set $S_1^k=[t_0^k, t_1^k), \ldots, S_{m(k)}^k=[t_{m(k)-1}^k, t_{m(k)}^k)$.
Finally, let
\begin{equation*}
\begin{split}
\widetilde{M}=\{&\ii\in \mathcal{A}^\N: \ii|_0^{N_1-1}\in M_1[0, N_1-1], \\
&\ii|_{t_i^k}^{t_{i+1}^k-1}\in M_k[t_i^k, t_{i+1}^k-1], \forall 0\le i\le m(k)-1, \forall k\ge 2 \}.
\end{split}
\end{equation*}
In other words, on positions $0,\ldots, N_1-1$ we can put any sequence that appears in $M_1$. For $k>1$, on positions in each $S_i^k$ we can put any sequence that can appear (on those positions) in $M_k$. Note that $\widetilde{M}$ is not necessarily a subset of $\Sigma_A$, since it might happen that these concatenations are forbidden. We will use this set to construct one with the properties claimed in the statement, but first we show that $\widetilde{M}$ is a prototype of our goal set. Namely, we show that the set $\widetilde{M}\subseteq\Sigma$ satisfies
\begin{enumerate}[(i)]
\item\label{it:inL} $\alpha_{\min}\leq\liminf_{n\to\infty}A_0^n(\ii)\leq \limsup_{n\to\infty}A_0^n(\ii)\leq\alpha_{\max}$ for every $\ii\in\widetilde{M}$,
\item\label{it:htop} $h_{\rm top}(\widetilde{M})\geq\liminf_{n\to\infty}h_{\rm top}(M_n)$.
\end{enumerate}
Consider $\ii\in \widetilde{M}$ and $n\in \N$. Take $k\in \N$ with $N_k\leq n < N_{k+1}$. Let $m$ be the largest number such that $n-t_m^{k+1}>0$. Then
$$
B_0^n(\ii)=B_0^{N_1-1}(\ii)+\sum_{j=2}^{k}\sum_{\ell=0}^{m(j)}B_{t_\ell^j}^{t_{\ell+1}^j}(\sigma^{t_\ell^j}\ii)+\sum_{\ell=0}^{m-1}B_{t_\ell^{k+1}}^{t_{\ell+1}^{k+1}}(\sigma^{t_\ell^{k+1}}\ii)+B_{t_m^{k+1}}^n(\sigma^{t_m^{k+1}}\ii).
$$
Observe that for every $t_{\ell}^j$ there exists a $\jj\in M_{j}$ such that for every $t_{\ell}^j\leq i<t_{\ell+1}^j$, $|\phi_i(\sigma^i\ii)-\phi_i(\sigma^i\jj)|\leq\mathrm{var}_{t_{\ell+1}^j-i}(\phi_i)$, and thus
$$
B_{t_\ell^j}^{t_{\ell+1}^j}(\sigma^{t_\ell^j}\ii)=\sum_{i=t_\ell^j}^{t_{\ell+1}^j-1}\phi_i(\sigma^i\ii)\leq\sum_{i=t_\ell^j}^{t_{\ell+1}^j-1}\phi_i(\sigma^i\jj)+\sum_{i=1}^{t_{\ell+1}^j-t_\ell^j}\mathrm{var}_{i}(\phi_i).
$$
Hence, by Lemma \ref{lem:m n k}
\begin{multline*}
B_{0}^n(\ii)\leq \alpha_1N_1+\sum_{j=1}^{k-1}\alpha_{j+1}(N_{j+1}-N_j)+(n-N_k)\alpha_{k+1}+\\
\varepsilon_1 N_1+\sum_{j=2}^{k}\sum_{\ell=0}^{m(j)}\varepsilon_{j+1}(t_{\ell+1}^j+t_\ell^j)+\sum_{\ell=0}^{m-1}\varepsilon_{k+1}(t_{\ell+1}^{k+1}+t_\ell^{k+1})+(n+t_m^{k+1})\varepsilon_{k+1}\\
+\sum_{j=1}^{k-1}\sum_{\ell=0}^{m(j)}\sum_{i=1}^{t_{\ell+1}^j-t_\ell^j}\mathrm{var}_{i}(\phi_i)+\sum_{\ell=0}^{m-1}\sum_{i=1}^{t_{\ell+1}^{k+1}-t_\ell^{k+1}}\mathrm{var}_{i}(\phi_i)+\sum_{i=0}^{n-t_{m}^{k+1}-1}\mathrm{var}_{n-t_{m}^{k+1}-i}(\phi_i).
\end{multline*}
Observe that
$$
\sum_{\ell=0}^{m(j)}\varepsilon_{j+1}(t_{\ell+1}^j+t_\ell^j)\leq\sum_{\ell=0}^{m(j)}\varepsilon_{j+1}r_j^\ell(r_j+1)N_{j}\leq\frac{2\varepsilon_{j+1}N_j(r_j^{m(j)+1}-1)}{r_j-1}\leq\frac{2\varepsilon_{j+1}r_jN_{j+1}}{r_j-1}.
$$
Hence,
$$
\sum_{j=2}^{k}\sum_{\ell=0}^{m(j)}\varepsilon_{j+1}(t_{\ell+1}^j+t_\ell^j)\leq\sum_{j=2}^k\frac{2\varepsilon_{j+1}r_j}{r_j-1}N_{j+1}
$$
and therefore,
$$
\frac{1}{n}\sum_{j=2}^{k}\sum_{\ell=0}^{m(j)}\varepsilon_{j+1}(t_{\ell+1}^j+t_\ell^j)\leq\frac{1}{n}\sum_{j=2}^k\frac{2\varepsilon_{j+1}r_j}{r_j-1}N_{j+1}=o(1).
$$
On the other hand, since $\mathrm{var}_i(\phi_i)\to0$ as $i\to\infty$, we get $\frac{1}{i}\sum_{j=1}^{i}\mathrm{var}_j(\phi_j)\to0$ as $i\to\infty$ and hence,
\begin{multline*}
\sum_{j=1}^{k-1}\sum_{\ell=0}^{m(j)}\sum_{i=1}^{t_{\ell+1}^j-t_\ell^j}\mathrm{var}_{i}(\phi_i)+\sum_{\ell=0}^{m-1}\sum_{i=1}^{t_{\ell+1}^{k+1}-t_\ell^{k+1}}\mathrm{var}_{i}(\phi_i)+\sum_{i=0}^{n-t_{m}^{k+1}-1}\mathrm{var}_{n-t_{m}^{k+1}-i}(\phi_i)\\
\leq\sum_{j=1}^{k-1}\sum_{\ell=0}^{m(j)}(t_{\ell+1}^j-t_\ell^j)o(1)+\sum_{\ell=0}^{m-1}(t_{\ell+1}^{k+1}-t_\ell^{k+1})o(1)+(n-t_{m}^{k+1}-1)o(1)\\
=n\cdot o(1).
\end{multline*}
The lower bound is similar, and thus we get
$$
A_0^n(\ii)=\frac{\alpha_kN_k+(n-N_k)\alpha_{k+1}}{n}+o(1).
$$
This implies \eqref{it:inL}. Moreover, if $\alpha_k\to\alpha$, this shows that the convergence $A_0^n\to\alpha$ is uniform on $\widetilde{M}$. So it only remains to show \eqref{it:htop}.
We pick an arbitrary $n$ with $t_{\ell}^k\le n<t_{\ell+1}^{k}$ for some $k\in \N$ and $0\le \ell\le m(k)-1$. By the definition of $\widetilde{M}$ and \eqref{eq:lowertop}, we have
\[
Z_{N_{k-1}}^{t_\ell^k}(\widetilde{M}) \geq Z_{N_{k-1}}^{t_\ell^k}(M_k) \geq \exp((t_\ell^k-N_{k-1})(h_{\rm top}(M_k)-\delta)).
\]
The last inequality is due to the fact that $N_{k-1}\in (z_i^k)_{i\in \N}$.
Similarly, we see that
$$
Z_{N_{i-1}}^{N_{i}}(\widetilde{M}) \geq \exp((N_{i}-N_{i-1})(h_{\rm top}(M_i)-\delta)),
$$
for all $i\le k$. Since $\widetilde{M}[N_{i-1}, N_{i}]$ and $\widetilde{M}[N_{j-1},N_{j}]$ are independent for $i\not=j$, we have that
\begin{multline}\label{eq:lowertop2}
Z_0^{t_\ell^k}(\widetilde{M}) = Z_{N_{k-1}}^{t_\ell^k}(\widetilde{M}) \cdot \prod_{i=1}^{k-1}Z_{N_{i-1}}^{N_{i}}(\widetilde{M})\\
\ge \exp\left((t_\ell^k-N_{k-1})(h_{\rm top}(M_k)-\delta)+\sum_{i=1}^{k-1}(h_{\rm top}(M_i)-\delta)(N_{i}-N_{i-1})\right)\\
\geq \exp\left(t_\ell^k(\liminf_{i\to\infty}h_{\rm top}(M_i)-o(1)-\delta)\right).
\end{multline}
We define a probability measure $\mu$ as follows. For any $\ii\in\Sigma_n$, let $k\in \N$ and $0\le \ell\le m(k)-1$ be the unique integers such that $t_\ell^k< n\leq t_{\ell+1}^k$, and let
$$
\mu([\ii])=\frac{\#\{A\in \widetilde{M}[0,t_{\ell+1}^k]:[\ii]\supset[A]\}}{Z_{0}^{t_{\ell+1}^k}(\widetilde{M})}.
$$
It is easy to see that $\mu$ is a well defined measure supported on $\widetilde{M}$. Indeed, if $|\ii|<t_{\ell+1}^k$ then
\[
\begin{split}
\sum_{j\in\mathcal{A}}\mu[\ii j]&=\sum_{j\in\mathcal{A}}\frac{\#\{A\in \widetilde{M}[0,t_{\ell+1}^k]:[\ii j]\supset[A]\}}{Z_{0}^{t_{\ell+1}^k}(\widetilde{M})}\\
&=\frac{\#\{A\in \widetilde{M}[0,t_{\ell+1}^k]:\text{ there exists $j\in\mathcal{A}$ such that }[\ii j]\supset[A]\}}{Z_{0}^{t_{\ell+1}^k}(\widetilde{M})}\\
&=\frac{\#\{A\in \widetilde{M}[0,t_{\ell+1}^k]:[\ii]\supset[A]\}}{Z_{0}^{t_{\ell+1}^k}(\widetilde{M})},
\end{split}
\]
and if $|\ii|=t_{\ell+1}^k$ then
\[
\begin{split}
\sum_{j\in\mathcal{A}}\mu([\ii j])&=\sum_{j\in\mathcal{A}}\frac{\#\{A\in \widetilde{M}[0,t_{\ell+2}^k]:[\ii j]\supset[A]\}}{Z_{0}^{t_{\ell+2}^k}(\widetilde{M})}\\
&=\frac{\#\{A\in \widetilde{M}[0,t_{\ell+2}^k]:\text{ there exists $j\in\mathcal{A}$ such that }[\ii j]\supset[A]\}}{Z_{0}^{t_{\ell+1}^k}(\widetilde{M})Z_{t_{\ell+1}^k}^{t_{\ell+2}^k}(\widetilde{M})}\\
&=\frac{ Z_{t_{\ell+1}^k}^{t_{\ell+2}^k}(\widetilde{M})\delta_{\ii\in \widetilde{M}[0,t_{\ell+1}^k]}}{Z_{0}^{t_{\ell+1}^k}(\widetilde{M})Z_{t_{\ell+1}^k}^{t_{\ell+2}^k}(\widetilde{M})},
\end{split}
\]
where, with a slight abuse of notation, we used $t_{m(k)+1}^k:=t_{1}^{k+1}$.
By \eqref{eq:lowertop2}, we have that for every $\ii\in \widetilde{M}$
\[
\begin{split}
\liminf_{n\to\infty} \frac{-\log\mu([\ii|_0^n])}{n}&\ge \liminf_{n\to\infty}\frac{t_\ell^k}{n}\left(\liminf_{i\to\infty}h_{\rm top}(M_i)-o(1)-\delta\right)\\
&\geq \liminf_{k\to\infty}r_k^{-1}\left(\liminf_{i\to\infty}h_{\rm top}(M_i)-o(1)-\delta\right)\\
&=\liminf_{i\to\infty}h_{\rm top}(M_i)-\delta.
\end{split}
\]
By Lemma \ref{lem:Forstman}, we get \eqref{it:htop}.
We are now almost done. We have constructed the set $\widetilde{M}$, which has almost all the demanded properties; the only one still missing is that $\widetilde{M}$ is not necessarily a subset of $\Sigma_A$. The last step is to find a map $\pi$ satisfying the assumptions of Lemma \ref{lem:techn} and such that $\pi(\widetilde{M})\subset \Sigma_A$. Observe that the assertion of Lemma \ref{lem:techn} will guarantee that the set $M=\pi(\widetilde{M})$ satisfies the assertion of Proposition \ref{prop:technical}.
This is easy enough to do. Our sequence $(q_j)$ will be the sequence $(t_i^k)_{i,k}$ (ignoring the initial finitely many terms, we can freely assume that $q_{j+1}-q_j>2r$). For every $j$ and every sequence $\ii\in \widetilde{M}$, the part $\ii|_{q_j}^{q_{j+1}}$ of the sequence comes from some $M_n\subset\Sigma_A$, thus it is an admissible word for our subshift of finite type. So, we only need to modify the sequence $\ii$ on positions $q_j+1,\ldots, q_j+r$ (for all $j$) to obtain a sequence contained in $\Sigma_A$; moreover, this modification will only depend on $\ii_1, \ldots, \ii_{q_j}$ and $\ii_{q_j+r+1},\ldots, \ii_{q_j+2r}$. This modification defines a map $\pi$ which satisfies the assumptions of Lemma \ref{lem:techn}, and we are done.
Finally, to obtain the second part of the assertion, let us consider the case when $\lim_{n\to\infty}\alpha_n=\alpha$. By taking a subsequence $n_k$ such that $\limsup_{n\to\infty}h_{\rm top}(M_n)=\lim_{k\to\infty}h_{\rm top}(M_{n_k})$, and applying the previous argument to the sequences $\{\alpha_{n_k}\}_k$, $\{\varepsilon_{n_k}\}_k$ and $\{M_{n_k}\}_k$, we get the claimed statement.
\end{proof}
\begin{cor}\label{cor:egorov}
If $L_A(\alpha)\neq\emptyset$ then for every $\delta>0$ there exists $\emptyset\neq M\subset L_A(\alpha)$ such that $h_{\rm top}(M)>h_{\rm top}(L_A(\alpha))-\delta$ and the convergence of $A_0^n(\ii)\to\alpha$ on $M$ is uniform.
\end{cor}
\begin{proof}
Let a sequence $\varepsilon_n\to0$ be arbitrary but fixed. Since $A_0^n(\ii)\to\alpha$ as $n\to\infty$ for every $\ii\in L_A(\alpha)$, there exists $N_n(\ii)$ such that for every $m\geq N_n(\ii)$, $|A_0^m(\ii)-\alpha|<\varepsilon_n$. For every $n\geq1$ and $T\geq1$, let
$$
M_{n,T}=\{\ii\in L_A(\alpha):N_n(\ii)\leq T\}.
$$
Since $L_A(\alpha)=\bigcup_{T=1}^\infty M_{n,T}$, we get that there exists a $T_n$ such that $h_{\rm top}(M_{n,T_n})>h_{\rm top}(L_A(\alpha))-\delta$.
By applying Proposition~\ref{prop:technical} for the sequence $\alpha_n\equiv\alpha$, $\varepsilon_n$ and $M_n:=M_{n,T_n}$, we get that there exists a set $M\subset L_A(\alpha)$ such that
$h_{\rm top}(M)\geq\limsup_{n\to\infty}h_{\rm top}(M_{n,T_n})\geq h_{\rm top}(L_A(\alpha))-\delta$, and the convergence is uniform on $M$.
\end{proof}
\begin{cor}\label{cor:lowersemi}
The map $\alpha\mapsto h_{\rm top}(L_A(\alpha))$ is upper semi-continuous.
\end{cor}
\begin{proof}
Let $\alpha_n\to \alpha$ be such that $L_A(\alpha_n)\neq\emptyset$ for every $n$. Then we can use Corollary \ref{cor:egorov} to find in each $L_A(\alpha_n)$ a subset $M_n$ of large entropy on which the Birkhoff averages converge uniformly, and then we apply Proposition \ref{prop:technical} to get the assertion.
\end{proof}
\begin{prop}\label{prop:concave}
The domain of $f\colon\alpha\mapsto h_{\rm top}(L_A(\alpha))$ is a (possibly empty) closed convex set and $f$ is a concave function.
\end{prop}
\begin{proof}
Let $\alpha, \alpha'$ be in the domain of $f$. Assuming $L_A(\alpha)$ and $L_A(\alpha')$ are nonempty, we want to prove that $L_A(p\alpha + (1-p)\alpha')$ is nonempty and that $f(p\alpha + (1-p)\alpha')\geq pf(\alpha) + (1-p)f(\alpha')$ for all $p\in (0,1)$. Pick an arbitrary $\varepsilon>0$.
By Corollary~\ref{cor:egorov}, there exist subsets $M(\alpha)\subset L_A(\alpha)$ and $M(\alpha')\subset L_A(\alpha')$ such that
\begin{itemize}
\item $h_{\rm top}(M(\alpha))> f(\alpha)-\varepsilon$ and $h_{\rm top}(M(\alpha'))> f(\alpha')-\varepsilon$;
\item there exists an increasing sequence $(N_k)_{k\in \N}$ such that for every $\ii\in M(\alpha)$, every $\ii'\in M(\alpha')$, every $k$ and every $n>N_k$ we have $|A_0^n(\ii)-\alpha| \leq 1/k$ and $|A_0^n(\ii')-\alpha'| \leq 1/k$.
\end{itemize}
We choose two sequences $(t_i)_{i\in \N}, (s_i)_{i\in \N}$ satisfying the following conditions.
\begin{itemize}
\item [(i)] $t_0=0$, $t_i\nearrow\infty$ and $t_{i+1}/t_i\searrow 1$.
\item [(ii)] $s_i\to\infty$.
\item [(iii)]$(t_{i+1}-t_i)$ is divisible by $s_i$ and $\frac{t_{i+1}-t_i}{s_i}\nearrow\infty$.
\item[(iv)] $\frac{2s_it_{i+1}}{n(t_{i+1}-t_i)}\to 0$ where $n$ is the largest number such that $N_n<t_i$.
\end{itemize}
For example, we can choose $t_{i+1}/t_i \sim 1+n^{-1/2}$ and $s_i \sim n^{1/3}$ where $n$ is the largest number such that $N_n<t_i$.
We divide each interval $[t_i, t_{i+1}-1]$ into $s_i$ equal subintervals, with endpoints $z_0^i=t_i, z_1^i=t_{i}+(t_{i+1}-t_i)/s_i,\ldots,z_{s_i}^i=t_{i+1}$.
We will construct a set $\widetilde{M}\subset\Sigma$ step by step as follows.
{\rm Step 0.} At positions $0,\ldots, t_1-1$ we can put anything.
{\rm Step i. $(i\ge 1)$} We put the $s_i$ numbers
\[
W_k^i := \log Z_{z_k^i}^{z_{k+1}^i}(M(\alpha)) - \log Z_{z_k^i}^{z_{k+1}^i}(M(\alpha')),\qquad k=0,1,\ldots, s_i-1
\]
in increasing order and choose the $\lfloor p s_i \rfloor$ largest ones. On the chosen intervals the sequences in $\widetilde{M}$ will be taken from $M(\alpha)[z_k^i, z_{k+1}^i]$, and on the intervals not chosen from $M(\alpha')[z_k^i, z_{k+1}^i]$.
It is enough to show that $\widetilde{M}\subset\Sigma$ has the following properties:
\medskip
{\rm Claim 1:} for $\ii\in \widetilde{M}$ we have
\[
A(\ii) = p\alpha + (1-p)\alpha'.
\]
\medskip
{\rm Claim 2:} $h_{\rm top}(\widetilde{M})\geq pf(\alpha) + (1-p)f(\alpha')$.
\medskip
Indeed, just like in the proof of Proposition~\ref{prop:technical}, we will prove that there exists a map $\pi\colon\Sigma\mapsto\Sigma$ such that $\pi(\widetilde{M})\subseteq\Sigma_A$ and the assumptions of Lemma~\ref{lem:techn} hold.
\begin{proof}[Proof of Claim 1]
As $t_{i+1}/t_i\to 1$, it is enough to check that $A_0^{t_i}(\ii)\to p\alpha + (1-p)\alpha'$ as $i$ tends to $\infty$. Pick $i$ and $n$ such that $N_{n}<t_i\le N_{n+1}$.
By Lemma \ref{lem:m n k}, we have
\begin{equation*}
\begin{split}
&|A_{t_i}^{t_{i+1}}(\ii)-\left(p\alpha + (1-p)\alpha'\right)|\\
=&|\sum_{k=0}^{s_i-1}\frac{1}{s_i} A_{z_k^i}^{z_{k+1}^i}(\ii)-\left(p\alpha + (1-p)\alpha'\right)|\\
\le &I_1^i + I_2^i+I_3^i,
\end{split}
\end{equation*}
where
$$
I_1^i=\sum_{k=0}^{s_i-1}\frac{1}{s_i} \rho^{(2)}_{\frac{t_{i+1}-t_i}{s_i}}=\rho^{(2)}_{\frac{t_{i+1}-t_i}{s_i}},
$$
$$
I_2^i=\sum_{k=0}^{s_i-1}\frac{1}{s_i} \cdot \frac{z_k^i+z_{k+1}^i}{n\left( \frac{t_{i+1}-t_i}{s_i}\right)}\le \frac{2s_it_{i+1}}{n(t_{i+1}-t_i)},
$$
and
$$
I_3^i=\left|\frac{1}{s_i}\left(\lfloor ps_i \rfloor \alpha +(s_i-\lfloor ps_i \rfloor) \alpha' \right)-\left(p\alpha + (1-p)\alpha'\right)\right|.
$$
By $(ii)$, it is easy to see that $I_3^i$ is convergent to $0$ as $i$ tends to $\infty$.
By $(iii)$ and the fact that $\rho_\ell^{(2)}\to 0$ as $\ell \to \infty$, we see that $I_1^i$ converges to $0$ as $i$ tends to $\infty$. By $(iv)$, $I_2^i$ converges to $0$ as $i$ tends to $\infty$. Thus we obtain that $A_{t_i}^{t_{i+1}}(\ii)$ converges to $p\alpha + (1-p)\alpha'$ as $i$ tends to $\infty$. Since $A_0^{t_i}(\ii)=\frac{1}{t_i}\sum_{j=0}^{i-1} (t_{j+1}-t_j) A_{t_j}^{t_{j+1}}(\ii)$, this completes the proof.
\end{proof}
\begin{proof}[Proof of Claim 2]
Observe that the constructions for different $j$ are completely independent of each other: whatever the initial $t_j$ symbols of $\ii\in \widetilde{M}$ are, we allow any admissible $t_{j+1}-t_j$ symbols to follow.
Thus we have
\begin{equation}\label{eq:11}
Z_0^{t_i}(\widetilde{M})=Z_0^{t_1}(\widetilde{M}) \cdot \prod_{k=1}^{i-1} Z_{t_k}^{t_{k+1}}(\widetilde{M})
\end{equation}
and
\begin{equation}\label{eq:12}
Z_{t_k}^{t_{k+1}}(\widetilde{M}) \geq \left(\prod_{\ell=0}^{s_k-1} Z_{z_\ell^k}^{z_{\ell+1}^k}(M(\alpha))\right)^{\lfloor ps_k\rfloor/s_k} \cdot \left(\prod_{\ell=0}^{s_k-1} Z_{z_\ell^k}^{z_{\ell+1}^k}(M(\alpha'))\right)^{1-\lfloor ps_k\rfloor/s_k},
\end{equation}
for $1\le k\le i-1$.
Moreover, we have
\begin{equation}\label{eq:13}
\prod_{\ell=0}^{s_k-1} Z_{z_\ell^k}^{z_{\ell+1}^k}(M(\alpha)) \geq Z_{t_k}^{t_{k+1}}(M(\alpha))
\end{equation}
and
\begin{equation}\label{eq:14}
\prod_{k=1}^{i-1} Z_{t_k}^{t_{k+1}}(M(\alpha)) \geq Z_{t_1}^{t_{i}}(M(\alpha)).
\end{equation}
The same holds for $\alpha'$.
We define a probability measure $\mu$ as follows: for $\ii\in\Sigma_n$, let $i$ be such that $t_{i-1}<n\leq t_{i}$ and set
$$
\mu([\ii])=\frac{\#\{A\in \widetilde{M}[0,t_i]:[\ii]\supset A \}}{Z_{0}^{t_i}(\widetilde{M})}.
$$
Similarly to the proof of Proposition~\ref{prop:technical}, $\mu$ is a well defined probability measure supported on $\widetilde{M}$. By \eqref{eq:11}, \eqref{eq:12}, \eqref{eq:13} and \eqref{eq:14}, as $t_{i+1}/t_i\to 1$, we have that
\begin{equation*}
\begin{split}
&\liminf_{n\to\infty} \frac{-\log\mu(C_n\cap \widetilde{M})}{n} \\
\ge & \liminf_{i\to\infty} \frac 1 {t_i} \left(p\log Z_0^{t_i}(M(\alpha))+(1-p)\log Z_0^{t_i}(M(\alpha')) \right)\\
\ge & pf(\alpha)+(1-p)f(\alpha'),
\end{split}
\end{equation*}
for any decreasing sequence $(C_n)_{n\in \N}$ of cylinders with $C_n\cap \widetilde{M}\not=\emptyset$.
By Lemma \ref{lem:Forstman}, this completes the proof.
\end{proof}
As in the proof of Proposition \ref{prop:technical}, we have now obtained a set $\widetilde{M}$ satisfying all the necessary properties except for one: it does not have to be contained in $\Sigma_A$. Again, we have the same solution to this problem: we will find a map $\pi$ satisfying the assumptions of Lemma \ref{lem:techn} such that $\pi(\widetilde{M})\subset \Sigma_A$. It is done in almost the same manner: we define $(q_j)_j=(z_k^i)_{k,i}$ and then we modify each sequence $\ii\in\widetilde{M}$ on the initial $r$ positions of every interval $(q_j, q_{j+1}]$.
This completes the proof.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:cont}]
Since any concave function is clearly lower semi-continuous, Proposition~\ref{prop:concave} together with Corollary~\ref{cor:lowersemi} implies the claim.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:contgen}]
There are two cases. Consider first the simple case:
$h_{\rm top}(L_A(\alpha_0))= h_{\rm top}(\Sigma_A)$.
Fix some $\varepsilon > 0$. We assume that the domain of the spectrum contains more than one point, hence
by Theorem \ref{thm:cont} we can find a value $\alpha_1$ such that
$h_{\rm top}(L_A(\alpha_1))> h_{\rm top}(L_A(\alpha_0))-\varepsilon$. By Corollary \ref{cor:egorov} we can find
a set $M_0$ such that $h_{\rm top} (M_0) > h_{\rm top} (L_A(\alpha_0))-\varepsilon$ and that
the convergence $A_0^n(\ii)\to \alpha_0$ is uniform in $M_0$ and we can find a set $M_1$ such that
$h_{\rm top} (M_1) > h_{\rm top} (L_A(\alpha_1))-\varepsilon$ and that the convergence
$A_0^n(\ii)\to \alpha_1$ is uniform in $M_1$. We then apply Proposition \ref{prop:technical} to the
sequence of sets $M_1, M_0, M_1, M_0, \ldots$, with $\alpha_n$ being $\alpha_0$ or $\alpha_1$
depending on $n$ being even or odd. We get
\[
h_{\rm top}(L_A(\alpha_0, \alpha_1))\geq h_{\rm top} (L_A(\alpha_0))-2\varepsilon.
\]
Naturally, $L_A(\alpha_0, \alpha_1)\subset D$, hence letting $\varepsilon$ tend to zero ends
the proof.
The complicated case is when $h_{\rm top}(L_A(\alpha_0))< h_{\rm top}(\Sigma_A)$. Note that
we can still freely assume that $h_{\rm top} (\Sigma_A\setminus D) = h_{\rm top}(\Sigma_A)$, otherwise
we would have $h_{\rm top}(D) = h_{\rm top}(\Sigma_A)$ immediately. We start with a simple observation.
\begin{lem} \label{lem:meeting}
There exists $\beta_0$ such that the sets $\{\ii\in \Sigma_A; A(\ii)< \beta_0\}$ and
$\{\ii\in\Sigma_A; A(\ii)>\beta_0\}$ are both of full entropy $h_{\rm top}(\Sigma_A)$.
\end{lem}
\begin{proof}
The function $\beta \to h_{\rm top} (\bigcup_{\alpha<\beta} L_A(\alpha))$ is nondecreasing and
left continuous, hence the set
$\{\beta: h_{\rm top} (\bigcup_{\alpha<\beta} L_A(\alpha)) = h_{\rm top}(\Sigma_A)\}$ is closed.
So is the set $\{\beta: h_{\rm top} (\bigcup_{\alpha>\beta} L_A(\alpha)) = h_{\rm top}(\Sigma_A)\}$,
for an analogous reason. Hence, the two sets must intersect -- otherwise there would be some $\beta$
which would belong to neither, and this is impossible because of
\[
\Sigma_A\setminus D = \bigcup_{\alpha<\beta} L_A(\alpha) \cup \bigcup_{\alpha>\beta} L_A(\alpha) \cup L_A(\beta)
\]
and all three sets on the right would have entropy strictly smaller than the one on the left.
\end{proof}
We fix $\varepsilon>0$. Using again the left-continuity of the function
$\beta \to h_{\rm top} (\bigcup_{\alpha<\beta} L_A(\alpha))$
we can find some $\beta_1<\beta_0$ such that
$ h_{\rm top} (\bigcup_{\alpha<\beta_1} L_A(\alpha)) > h_{\rm top} (\Sigma_A) - \varepsilon$.
Let $M_+ = \bigcup_{\alpha>\beta_0} L_A(\alpha)$ and $M_-= \bigcup_{\alpha<\beta_1} L_A(\alpha)$.
We now need a one-sided version of Proposition \ref{prop:technical}.
\begin{prop} \label{prop:technical2}
Let $\varepsilon_n>0$, $\varepsilon_n\to 0$.
Let $\alpha_n$ be a sequence such that $\alpha_{2k}\to \beta_0$ and $\alpha_{2k+1}\to \beta_1$.
Moreover, assume that for every $n\geq 1$ there exists a set $M_n\subset\Sigma_A$ and a positive
integer $T_n>0$ such that for every $\ii\in M_n$ and $m\geq T_n$ we have
$$
\frac{1}{m}\sum_{k=0}^{m-1}\phi_k(\sigma^k\ii)>\alpha_n - \varepsilon_n
$$
(if $n$ is even) or
$$
\frac{1}{m}\sum_{k=0}^{m-1}\phi_k(\sigma^k\ii)<\alpha_n + \varepsilon_n
$$
(if $n$ is odd).
Then we can find a set $M\subset \Sigma_A$ such that for $\ii\in M$ we have
$\underline{A}(\ii) \leq \beta_1$ and $\overline{A}(\ii) \geq \beta_0$ and that
$h_{\rm top} M \geq \liminf_{n\to\infty} h_{\rm top} M_n$.
\end{prop}
\begin{proof}
The proof is virtually identical with the proof of Proposition \ref{prop:technical}.
The construction and the calculation of entropy is the same, the only difference is that
when the sets $M_i$ give only one-sided bounds on the behavior of the Birkhoff sums,
we can only get a weaker statement about $\underline{A}$ and $\overline{A}$. We skip the details.
\end{proof}
We can now fix any sequence $\varepsilon_n\to 0$ and use the sets $M_-$ and $M_+$ defined above to construct sets $M_n$ satisfying the assumptions of
Proposition \ref{prop:technical2}, in such a way that
$h_{\rm top} M_{2k} > h_{\rm top} M_+ - \varepsilon$ and
$h_{\rm top} M_{2k+1} > h_{\rm top} M_- - \varepsilon$ (by choosing $T_{2k}$, resp. $T_{2k+1}$, large enough).
Using now Proposition \ref{prop:technical2} with those sets $M_n$ we construct a set $M$ which is by
construction contained in $D$, moreover $h_{\rm top} M > h_{\rm top}(\Sigma_A)- 2\varepsilon$.
Letting $\varepsilon$ tend to $0$, we finish the proof of this case.
\end{proof}
\begin{comment}
We finish this section by presenting the Example \ref{ex:alter}. Let $\Sigma_A$ be the full shift on two
symbols and define the potential by $\phi(\ii) = 3 \ii_0 - 1$. This is a piecewise constant potential
depending only on the first coordinate. Let $N_n$ be a very fast increasing sequence, with $N_n/N_{n+1}\to 0$.
Let $w_i=1$ for $N_n<i<2N_n$, otherwise $w_i=0$.
We see that
\[
\left|\frac 1 N_n \sum_{i=1}^{N_n} \phi(\sigma^i \ii)\right| \leq \frac {2N_{n-1}} {N_n} \to 0,
\]
hence the only possible value of the weighted Birkhoff average is zero. However,
\[
\sum_{i=1}^{2N_n} \phi(\sigma^i \ii) = \sum_{i=1}^{N_n} \phi(\sigma^i \ii) + \sum_{i=N_n+1}^{2N_n} \phi(\sigma^i \ii),
\]
and while the former sum is always of order $O(N_{n-1}) \ll N_n$, the latter sum is the sum of $N_n$ summands
taking either value $-1$ (if $\ii_i=0$) or $2$ (if $\ii_i=1$). Thus, the latter sum is $o(N_n)$ only when
between $N_n$ and $2N_n$ symbol 0 appears with frequency $2/3$ (approximately) and symbol 1 with frequency $1/3$.
It is not difficult to calculate that
\[
h_{\rm top} L_\ww(0) = \frac 16 \log 2 + \frac 12 \log 3 < \log 2.
\]
\end{comment}
\section{Typical weights}
First, we need to introduce some notation. Let $\Sigma_A$ be an aperiodic and irreducible subshift of finite type, $\Omega=\Lambda^\N$ and $\Gamma_A=\Omega\times\Sigma_A$. Let $f\colon\Gamma_A\mapsto\R$ be a continuous potential. Let us recall that $S_nf$ denotes the $n$th Birkhoff sum of $f$, that is, $S_nf=f+f\circ\sigma+\cdots+f\circ\sigma^{n-1}$. For every $\ww\in\Omega$ let
$$
Z_n(f,\ww)=\sum_{\ii\in\Sigma_{A,n}}\sup_{\jj\in[\ii]}e^{S_nf(\ww,\jj)},
$$
and we define the conditional pressure of $f$ on $\xi(\ww)$ by
\begin{equation}\label{eq:condpresdef}
P(f,\ww)=\limsup_{n\to\infty}\frac{1}{n}\log Z_n(f,\ww).
\end{equation}
The following theorem was shown by Ledrappier and Walters \cite{LW}. They proved a more general statement but we state here only the form which corresponds to our main setup.
\begin{thm}[Ledrappier, Walters]\label{thm:LW} Let $\nu$ be a $\sigma$-invariant measure on $\Omega$ and let $f\colon\Gamma_A\mapsto\R$ be a continuous potential. Then
$$
\sup\{h_\mu^\xi+\int fd\mu:\mu\in\mathcal{M}_\nu(\Gamma_A)\}=\int P(f,\ww)d\nu(\ww).
$$
\end{thm}
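As a simple sanity check, if the potential does not depend on the first coordinate, say $f(\ww,\jj)=g(\jj)$, then $Z_n(f,\ww)=\sum_{\ii\in\Sigma_{A,n}}\sup_{\jj\in[\ii]}e^{S_ng(\jj)}$ is independent of $\ww$, so
\[
P(f,\ww)=\limsup_{n\to\infty}\frac{1}{n}\log\sum_{\ii\in\Sigma_{A,n}}\sup_{\jj\in[\ii]}e^{S_ng(\jj)}
\]
is the classical topological pressure of $g$ on $\Sigma_A$, and Theorem~\ref{thm:LW} reduces to the usual variational principle for $g$.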
Unfortunately, this theorem by itself does not provide enough regularity in order to carry out the multifractal analysis of weighted Birkhoff averages. So we adapt the idea of Ledrappier and Walters \cite{LW}, combining it with the methods of Takens and Verbitskiy \cite{TV} and Heurteaux \cite{H}.
\subsection{Regularity of conditional pressure} In this part of the section, we study the regularity properties of the conditional pressure $P(f,\ww)$ under stronger assumptions than the setup of Ledrappier and Walters. Namely, we assume that $f$ has bounded variation, that is,
$$
\sum_{k=0}^\infty \max_{(\mathbf{x},\mathbf{k})\in\Gamma_{A,k}}\sup_{(\ww,\ii),(\zz,\jj)\in[(\mathbf{x},\mathbf{k})]}|f(\ww,\ii)-f(\zz,\jj)|<\infty.
$$
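For instance, every potential depending only on finitely many coordinates, that is, $f(\ww,\ii)=f(w_0,\ldots,w_{m-1};i_0,\ldots,i_{m-1})$ for some $m$, has bounded variation, since all terms of the series with $k\geq m$ vanish. More generally, if $f$ is H\"older continuous with respect to the metric $d\left((\ww,\ii),(\zz,\jj)\right)=2^{-|(\ww,\ii)\wedge(\zz,\jj)|}$, then the $k$th term of the series decays geometrically, and so $f$ has bounded variation.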
Moreover, we assume that the measure $\nu$ is quasi-Bernoulli. Note that for a quasi-Bernoulli measure $\nu$, the transformation $\sigma^m$ is ergodic for every $m\geq1$.
The following lemma follows from an easy calculation.
\begin{lem}\label{lem:const}
For every $\ww\in\Omega$,
$$
P(f,\ww)=P(f,\sigma\ww).
$$
Moreover, if $f_n\to g$ uniformly, then $P(f_n,\ww)\to P(g,\ww)$.
\end{lem}
\begin{proof}
Since $f\colon \Gamma_A\mapsto\R$ is continuous over a compact set, we get that $|f|$ is bounded by $C$. Hence,
\[
\begin{split}
\sum_{\ii\in\Sigma_{A,n+1}}\sup_{\jj\in[\ii]}e^{S_{n+1}f(\ww,\jj)}&=\sum_{\ii\in\Sigma_{A,n+1}}\sup_{\jj\in[\ii]}e^{S_nf(\sigma\ww,\sigma\jj)}e^{f(\ww,\jj)}\\
&\leq\sum_{\ii\in\Sigma_{A,n+1}}\sup_{\jj\in[\ii]}e^{S_nf(\sigma\ww,\sigma\jj)}e^C\\
&\leq Ke^C\sum_{\ii\in\Sigma_{A,n}}\sup_{\jj\in[\ii]}e^{S_nf(\sigma\ww,\jj)}.
\end{split}
\]
The direction $\sum_{\ii\in\Sigma_{A,n+1}}\sup_{\jj\in[\ii]}e^{S_{n+1}f(\ww,\jj)}\geq e^{-C}\sum_{\ii\in\Sigma_{A,n}}\sup_{\jj\in[\ii]}e^{S_nf(\sigma\ww,\jj)}$ is similar.
The second observation follows by the fact that if $\sup_{(\ww,\ii)\in\Gamma_A}|f(\ww,\ii)-g(\ww,\ii)|<\varepsilon$ then $|S_nf-S_ng|\leq \varepsilon n$.
\end{proof}
Since $\nu$ is ergodic, a simple corollary of Lemma~\ref{lem:const} is that we can define the conditional pressure with respect to $\nu$ by
\begin{equation}\label{eq:condpres2}
P_\nu(f):=\int P(f,\ww)d\nu(\ww)=P(f,\ww)\text{ for $\nu$-almost every }\ww.
\end{equation}
Here we abuse notation slightly, since $P_\nu(f)$ of \eqref{eq:condpres2} need not a priori equal the conditional pressure defined in \eqref{eq:condpresdef}. However, we will show below that the two quantities indeed coincide.
For short, for $\ww\in\Omega$ and $\ii\in\Sigma_{A,*}$ let
$$V(f,\ww,\ii):=\sup_{\jj\in[\ii]}e^{S_{|\ii|}f(\ww,\jj)},$$
and for $\ww\in\Omega_*$ let
$$
Y(f,\ww,\ii):=\sup_{\zz\in[\ww]}V(f,\zz,\ii)\text{ and }W(f,\ww):=\sup_{\zz\in[\ww]}Z_{|\ww|}(f,\zz).
$$
We also use the convention that $Z_{m}(f,\ww)=1$ for $m\leq0$.
Since $f$ has bounded variation, there exists a constant $C>0$ such that for every $n\geq1$ and every $(\ww,\ii),(\zz,\jj)\in\Gamma_A$ with $|(\ww,\ii)\wedge(\zz,\jj)|=n$
\begin{equation}\label{eq:bd1}
|S_nf(\ww,\ii)-S_nf(\zz,\jj)|<C.
\end{equation}
Thus, for every $\ww\in\Omega$ and every $\ii,\jj\in\Sigma_{A,*}$ with $\ii\jj\in\Sigma_{A,*}$,
\begin{equation}\label{eq:almostmulti}
V(f,\ww,\ii\jj)\leq V(f,\ww,\ii)V(f,\sigma^{|\ii|}\ww,\jj)\leq e^C\cdot V(f,\ww,\ii\jj).
\end{equation}
So clearly, for every $\ww\in\Omega$
\begin{equation}\label{eq:submulti}
Z_{n+m}(f,\ww)\leq Z_{n}(f,\ww)Z_m(f,\sigma^n\ww).
\end{equation}
On the other hand,
\begin{equation}\label{eq:superadditive}
\begin{split}
Z_{n}(f,\ww)Z_m(f,\sigma^n\ww)&\leq K^r e^{r|f|} Z_{n}(f,\ww)Z_{m-r}(f,\sigma^{n+r}\ww)\\
&\leq K^re^{2r|f|+2C} Z_{n+m}(f,\ww),
\end{split}
\end{equation}
where $r\geq1$ is such that $A^r$ is strictly positive.
Applying the bounded distortion again, we get for every $(\ww,\ii)\in\Gamma_{A,*}$, and every $\zz\in[\ww]$ that
\begin{equation}\label{eq:compareYV}
V(f,\zz,\ii)\leq Y(f,\ww,\ii)\leq e^C V(f,\zz,\ii)
\end{equation}
and therefore
\begin{equation}\label{eq:compareZW}
Z_{|\ww|}(f,\zz)\leq W(f,\ww)\leq e^C Z_{|\ww|}(f,\zz).
\end{equation}
By \eqref{eq:submulti} and Kingman's subadditive ergodic theorem, we have that for $\nu$-almost every $\ww\in\Omega$ the limit
$$
\lim_{n\to\infty}\frac{1}{n}\log Z_n(f,\ww)=P(f,\ww)=P_\nu(f)
$$
exists and
\begin{equation}\label{prop:calcpres}
\begin{split}
P_\nu(f)&=\lim_{n\to\infty}\frac{1}{n}\int\log Z_n(f,\ww)d\nu(\ww)\\
&=\lim_{n\to\infty}\frac{1}{n}\sum_{\ww\in\Omega_n}\nu([\ww])\log W(f,\ww),
\end{split}
\end{equation}
where in the last equation we used \eqref{eq:compareZW} too.
\begin{comment}
\begin{prop}\label{prop:calcpres}
Let $\nu$ be a $\sigma^m$-invariant ergodic measure on $\Omega$ for every $m\geq1$ and let $f$ be a potential on $\Gamma$ with bounded variation. Then there exists $C>0$ such that for every $n\geq1$
$$
C^{-1}e^{nP_\nu(f)}\leq e^{\int\log Z_n(f,\ww)d\nu(\ww)}\leq Ce^{nP_\nu(f)}.
$$
In particular,
\[
\begin{split}
P_\nu(f)&=\lim_{n\to\infty}\frac{1}{n}\int\log Z_n(f,\ww)d\nu(\ww)\\
&=\lim_{n\to\infty}\frac{1}{n}\sum_{\ww\in\Omega_n}\nu([\ww])\log W(f,\ww).
\end{split}
\]
\end{prop}
\begin{proof}
First, we show that for every $m\geq1$ there exists a sequence $\ell_k$ such that for every $\ww\in\Omega$
\begin{equation}\label{eq:subseq}
\lim_{k\to\infty}\frac{1}{m\ell_k}\log Z_{m\ell_k}(f,\ww)=\limsup_{n\to\infty}\frac{1}{n}\log Z_n(f,\ww).
\end{equation}
Fix $\ww\in\Omega$. Let $n_k$ be the sequence for which the $\limsup$ is achieved in \eqref{eq:condpresdef}. Without loss of generality we may assume that $n_{k+1}-n_k>m$ for every $k\geq1$. Let $\ell_k$ be the sequence such that $m\ell_k\leq n_k<(\ell_k+1)m$. Thus, $|\ell_k m-n_k|\leq m$. Then for every $k\geq1$ and every $\ii\in\Sigma$
\begin{equation}\label{eq:bounded}
|S_{m\ell_k}f(\ww,\ii)-S_{n_k}f(\ww,\ii)|\leq m\sup|f|.
\end{equation}
Thus, by \eqref{eq:bounded}
\[
\begin{split}
Z_{n_k}(\ww)=&\sum_{\ii\in\Sigma_{n_k}}V(f,\ww,\ii)\\
&\leq\sum_{\ii\in\Sigma_{n_k}}e^{m|f|}\sup_{\jj\in[\ii]}e^{S_{m\ell_k}f(\ww,\jj)}\\
&\leq\sum_{\ii\in\Sigma_{n_k}}e^{m|f|}\sup_{\jj\in[\ii|_{m\ell_k}]}e^{S_{m\ell_k}f(\ww,\jj)}\\
&=\sum_{\ii\in\Sigma_{m\ell_k}}K^{n_k-m\ell_k}e^{m|f|}V(f,\ww,\ii)\\
&\leq K^{m}e^{m|f|}Z_{m\ell_k}(\ww),\\
\end{split}
\]
and similarly by \eqref{eq:almostmulti}
\[
\begin{split}
Z_{n_k}(\ww)=&\sum_{\ii\in\Sigma_{n_k}}V(f,\ww,\ii)\\
&\geq\sum_{\ii_1\in\Sigma_{m\ell_k},\ii_2\in\Sigma_{n_k-m\ell_k}}C^{-1}V(f,\ww,\ii_1)V(f,\sigma^{m\ell_k}\ww,\ii_2)\\
&\geq\sum_{\ii_1\in\Sigma_{m\ell_k},\ii_2\in\Sigma_{n_k-m\ell_k}}C^{-1}e^{-m|f|}V(f,\ww,\ii_1)\\
&=\sum_{\ii\in\Sigma_{m\ell_k}}C^{-1}e^{-m|f|}K^{n_k-m\ell_k}V(f,\ww,\ii)\\
&\geq C^{-1}e^{-m|f|}Z_{m\ell_k}(\ww).
\end{split}
\]
This, clearly implies \eqref{eq:subseq}.
Bounded distortion \eqref{eq:almostmultiZn} implies that for every $n,m\geq1$ and every $\ww\in\Omega$
\begin{equation}\label{eq:tech}
Z_{n\cdot m}(f,\ww)\leq \prod_{k=0}^{n-1}Z_m(f,\sigma^{k m}\ww)\leq C^{n}Z_{n\cdot m}(f,\ww).
\end{equation}
Since $\nu$ is ergodic w.r.t. $\sigma^m$, by Birkhoff's ergodic theorem we get
$$
\lim_{n\to\infty}\frac{1}{n}\sum_{k=0}^{n-1}\log Z_m(f,\sigma^{k m}\ww)=\int Z_m(f,\ww)d\nu(\ww)\text{ for $\nu$-almost every }\ww.
$$
Thus, by \eqref{eq:tech} for $\nu$-almost every $\ww$ and using the sequence $\ell_k$ (which might depend on $\ww$) defined in \eqref{eq:subseq}
\[
\begin{split}
P_\nu(f)&=\lim_{k\to\infty}\frac{1}{m\ell_k}\log Z_{m\ell_k}(f,\ww)\leq\lim_{k\to\infty}\frac{1}{m\ell_k}\sum_{p=0}^{\ell_k-1}\log Z_m(f,\sigma^{pm}\ww)\\
&=\frac{1}{m}\int\log Z_m(f,\ww)d\nu(\ww)\leq \frac{2\log C}{m}+P_\nu(f).
\end{split}
\]
\end{proof}
\end{comment}
\begin{thm}\label{thm:equillibrium}
Let $\nu$ be an ergodic $\sigma$-invariant quasi-Bernoulli measure on $\Omega$ and let $f\colon\Gamma_A\mapsto\R$ be a continuous potential with bounded variation. Then there exist a unique ergodic $\sigma$-invariant measure $\mu$ and a constant $C>0$ such that for every $(\ww,\ii)\in\Gamma_{A,*}$
\begin{equation}\label{eq:comparemu}
C^{-1}\frac{Y(f,\ww,\ii)}{W_{|\ww|}(f,\ww)}\nu([\ww])\leq\mu([\ww,\ii])\leq C\frac{Y(f,\ww,\ii)}{W_{|\ww|}(f,\ww)}\nu([\ww]).
\end{equation}
In particular, $\Pi_*\mu=\nu$ and
$$
h_\mu^\xi+\int fd\mu=P_\nu(f).
$$
\end{thm}
\begin{proof}
Let $\zz$ be a generic point such that $\frac{1}{n}\sum_{k=0}^{n-1}\delta_{\sigma^k\zz}\to\nu$ as $n\to\infty$. Then let
$$
\eta_m=Z_m(f,\zz)^{-1}\sum_{\ii\in\Sigma_{A,m}}V(f,\zz,\ii)\delta_{(\zz,\ii\jj)},
$$
where $\jj\in\Sigma_A$ is arbitrary but fixed. Moreover, let
$$
\nu_{n}=\frac{1}{n}\sum_{k=0}^{n-1}\eta_{2n}\circ\sigma^{-k}.
$$
Let $\{n_j\}$ be a subsequence such that $\lim_{j\to\infty}\frac{1}{n_j}\log Z_{n_j}(f,\zz)=P_\nu(f)$ and $\nu_{n_j}\to\mu$. Clearly, $\mu$ is a $\sigma$-invariant measure on $\Gamma_A$.
Fix $(\ww,\ii)\in\Gamma_{A,*}$ with $|\ww|=|\ii|$. Choose $n$ sufficiently large such that $n>|\ww|=|\ii|$. Then by \eqref{eq:almostmulti} and \eqref{eq:superadditive} there exists $C'>0$ such that
\[
\begin{split}
\nu_{n}([\ww,\ii])&=\frac{1}{n}\sum_{k=0}^{n-1}\sum_{\substack{(\alpha,\beta)\in\Gamma_{A,k},(\gamma,\tau)\in\Gamma_{A,2n-|\ii|-k}:\\(\alpha\ww\gamma,\beta\ii\tau)\in\Gamma_{A,2n}
}}\eta_{2n}([(\alpha\ww\gamma,\beta\ii\tau)])\\
&=\frac{1}{n}\sum_{k=0}^{n-1}\sum_{\substack{\beta\in\Sigma_{A,k},\tau\in\Sigma_{A,2n-|\ii|-k}:\\ \beta\ii\tau\in\Sigma_{A,2n}}}\frac{V(f,\zz,\beta\ii\tau)}{Z_{2n}(\zz)}\ind_{[\ww]}(\sigma^k\zz)\\
&\leq\frac{C'}{n}\sum_{k=0}^{n-1}\sum_{\substack{\beta\in\Sigma_{A,k},\\ \tau\in\Sigma_{A,2n-|\ii|-k}}}\frac{V(f,\zz,\beta)V(f,\sigma^k\zz,\ii)V(f,\sigma^{|\ww|+k}\zz,\tau)}{Z_{k}(\zz)Z_{|\ii|}(\sigma^k\zz)Z_{2n-k-|\ii|}(\sigma^{|\ii|+k}\zz)}\ind_{[\ww]}(\sigma^k\zz)\\
&=\frac{C'}{n}\sum_{k=0}^{n-1}\frac{V(f,\sigma^k\zz,\ii)}{Z_{|\ii|}(\sigma^k\zz)}\ind_{[\ww]}(\sigma^k\zz).
\end{split}
\]
Thus, by Birkhoff's ergodic theorem
\[
\begin{split}
\mu([\ww,\ii])&=\lim_{j\to\infty}\nu_{n_j}([\ww,\jj])\\
&\leq C'\int\frac{V(f,\zz,\ii)}{Z_{|\ii|}(\zz)}\ind_{[\ww]}(\zz)d\nu(\zz)\\
&\leq C''\frac{Y(f,\ww,\ii)}{W_{|\ww|}(f,\ww)}\nu([\ww]),
\end{split}
\]
where we used \eqref{eq:compareYV} and \eqref{eq:compareZW}.
Now we show the other inequality. Similarly, by using \eqref{eq:almostmulti} and \eqref{eq:submulti}, we have
\[
\begin{split}
&\nu_{n}([\ww,\ii])\\
&\geq\frac{1}{n}\sum_{k=0}^{n-1}\sum_{\substack{\beta\in\Sigma_{A,k},\tau\in\Sigma_{A,2n-|\ii|-k}:\\ \beta\ii\tau\in\Sigma_{A,2n}}}\frac{V(f,\zz,\beta)V(f,\sigma^k\zz,\ii)V(f,\sigma^{|\ww|+k}\zz,\tau)}{Z_{k}(\zz)Z_{|\ii|}(\sigma^k\zz)Z_{2n-k-|\ii|}(\sigma^{|\ii|+k}\zz)}\ind_{[\ww]}(\sigma^k\zz)\\
&\geq\frac{e^{-2|f|r}}{n}\sum_{k=0}^{n-1}\sum_{\substack{\beta'\in\Sigma_{A,k-r}\\\tau'\in\Sigma_{A,2n-|\ii|-k-r}}}\frac{V(f,\zz,\beta')V(f,\sigma^k\zz,\ii)V(f,\sigma^{|\ww|+k+r}\zz,\tau')}{Z_{k}(\zz)Z_{|\ii|}(\sigma^k\zz)Z_{2n-k-|\ii|}(\sigma^{|\ii|+k}\zz)}\ind_{[\ww]}(\sigma^k\zz)\\
&\geq\frac{e^{-2|f|r}}{n}\sum_{k=0}^{n-1}\frac{V(f,\sigma^k\zz,\ii)}{Z_{|\ii|}(\sigma^k\zz)Z_r(\zz)Z_r(\sigma^{|\ii|+k}\zz)}\ind_{[\ww]}(\sigma^k\zz)\\
&\geq\frac{e^{-4|f|r}K^{-2r}}{n}\sum_{k=0}^{n-1}\frac{V(f,\sigma^k\zz,\ii)}{Z_{|\ii|}(\sigma^k\zz)}\ind_{[\ww]}(\sigma^k\zz)
\end{split}
\]
and thus, taking the subsequence $n_j$ and using \eqref{eq:compareYV} and \eqref{eq:compareZW}, we have
$$
\mu([\ww,\ii])\geq C'^{-1}\frac{Y(f,\ww,\ii)}{W_{|\ww|}(f,\ww)}\nu([\ww]).
$$
Now, since $\nu$ is quasi-Bernoulli, by \eqref{eq:almostmulti}-\eqref{eq:submulti} and \eqref{eq:compareYV}-\eqref{eq:compareZW} we have
\[
\begin{split}
\mu([(\ww\mathbf{x},\ii\jj)])&\geq C'^{-1}\frac{Y(f,\ww\mathbf{x},\ii\jj)}{W_{|\ww\mathbf{x}|}(f,\ww\mathbf{x})}\nu([\ww\mathbf{x}])\\
&\geq C'^{-2}\frac{Y(f,\ww,\ii)}{W_{|\ww|}(f,\ww)}\nu([\ww])\frac{Y(f,\mathbf{x},\jj)}{W_{|\mathbf{x}|}(f,\mathbf{x})}\nu([\mathbf{x}])\\
&\geq C'^{-4}\mu([(\ww,\ii)])\mu([(\mathbf{x},\jj)]).
\end{split}
\]
This implies that $\mu$ is ergodic. Since $\mu$ was an arbitrary accumulation point and two equivalent ergodic measures are equal, we get that $\mu$ is unique.
For every $\ww\in\Omega_*$, and every $\zz\in[\ww]$
\[
\begin{split}
\Pi_*\mu([\ww])&=\sum_{\ii\in\Sigma_{A,|\ww|}}\mu([\ww,\ii])\\
&\leq C\sum_{\ii\in\Sigma_{A,|\ww|}}\frac{Y(f,\ww,\ii)}{W_{|\ww|}(f,\ww)}\nu([\ww])\\
&\leq C^3\sum_{\ii\in\Sigma_{A,|\ww|}}\frac{V(f,\zz,\ii)}{Z_{|\ww|}(f,\zz)}\nu([\ww])\\
&= C^3\nu([\ww]).
\end{split}
\]
The other inequality, $\Pi_*\mu([\ww])\geq C^{-3}\nu([\ww])$, is similar. Since $\Pi_*\mu$ and $\nu$ are both ergodic and equivalent, we have $\Pi_*\mu=\nu$.
Finally, by \eqref{prop:calcpres}
\[
\begin{split}
h_\mu&=-\lim_{n\to\infty}\frac{1}{n}\sum_{(\ww,\ii)\in\Gamma_{A,n}}\mu([\ww,\ii])\log\mu([\ww,\ii])\\
&=\lim_{n\to\infty}\frac{-1}{n}\sum_{(\ww,\ii)\in\Gamma_{A,n}}\mu([\ww,\ii])\log\left(\frac{Y(f,\ww,\ii)}{W_{|\ww|}(f,\ww)}\nu([\ww])\right)\\
&=h_\nu-\int fd\mu+P_\nu(f).
\end{split}\]
By Theorem~\ref{thm:entconv}, $h_\mu^\xi=h_\mu-h_\nu$, which proves the statement.
\end{proof}
\begin{thm}\label{thm:differentiable}
Let $\nu$ be a $\sigma$-invariant ergodic quasi-Bernoulli measure on $\Omega$ and let $f,g\colon\Gamma_A\mapsto\R$ be continuous potentials with bounded variation. Then the function $p\colon t\mapsto P_\nu((1-t)g+tf)$ is differentiable at $t=0$. In particular,
$$
p'(0)=\int(f-g)d\mu_g.
$$
\end{thm}
\begin{proof}
It is clear by the bounded distortion \eqref{eq:bd1} that there exists a constant $C>0$ such that for every $t\in\R$ and every $(\ww,\ii)\in\Gamma_{A,*}$
$$
C^{-1}Y(tf+(1-t)g,\ww,\ii)\leq Y(f,\ww,\ii)^tY(g,\ww,\ii)^{1-t}\leq CY(tf+(1-t)g,\ww,\ii).
$$
Let $\mu_f$ and $\mu_g$ be the unique ergodic measures defined in Theorem~\ref{thm:equillibrium}. Then for every $t\in\R$ and every $(\ww,\ii)\in\Gamma_{A,*}$
\[
\begin{split}
C^{-2}\frac{Y(tf+(1-t)g,\ww,\ii)}{W(f,\ww)^tW(g,\ww)^{1-t}}\nu([\ww])&\leq\mu_f([\ww,\ii])^t\mu_g([\ww,\ii])^{1-t}\\
&\leq C^2\frac{Y(tf+(1-t)g,\ww,\ii)}{W(f,\ww)^tW(g,\ww)^{1-t}}\nu([\ww]).
\end{split}
\]
Hence,
\begin{multline*}
P_\nu((1-t)g+tf)\\
=(1-t)P_\nu(g)+tP_\nu(f)+\lim_{n\to\infty}\frac{1}{n}\sum_{\ww\in\Omega_n}\nu([\ww])\log\sum_{\ii\in\Sigma_{A,n}}\frac{\mu_f([\ww,\ii])^t\mu_g([\ww,\ii])^{1-t}}{\nu([\ww])}.
\end{multline*}
Thus, it is enough to show that
$$
H(t)=\lim_{n\to\infty}\frac{1}{n}\sum_{\ww\in\Omega_n}\nu([\ww])\log\sum_{\ii\in\Sigma_{A,n}}\frac{\mu_f([\ww,\ii])^t\mu_g([\ww,\ii])^{1-t}}{\nu([\ww])}
$$
is differentiable.
\medskip
{\rm Claim:}
There exists a constant $C>0$ such that the sequence
$$
\overline{H}_n(t)=\sum_{\ww\in\Omega_n}\nu([\ww])\log\sum_{\ii\in\Sigma_{A,n}}\frac{C\mu_f([\ww,\ii])^t\mu_g([\ww,\ii])^{1-t}}{\nu([\ww])}
$$
is subadditive, $\overline{H}_{n+m}(t)\leq \overline{H}_n(t)+\overline{H}_m(t)$, and
$$
\underline{H}_n(t)=\sum_{\ww\in\Omega_n}\nu([\ww])\log\sum_{\ii\in\Sigma_{A,n}}\frac{C^{-1}\mu_f([\ww,\ii])^t\mu_g([\ww,\ii])^{1-t}}{\nu([\ww])}
$$
is superadditive, $\underline{H}_{n+m}(t)\geq \underline{H}_n(t)+\underline{H}_m(t)$.
\medskip
\begin{proof}[Proof of the Claim]
By Theorem~\ref{thm:equillibrium} and equations \eqref{eq:almostmulti}-\eqref{eq:compareZW}, we have that the measures $\mu_f$ and $\mu_g$ are quasi-Bernoulli, and hence there exists a constant $C>0$ such that
\begin{multline*}
\sum_{\ii\jj\in\Sigma_{A,n+m}}\mu_f([\ww,\ii\jj])^t\mu_g([\ww,\ii\jj])^{1-t}\\
\leq C\sum_{\ii\jj\in\Sigma_{A,n+m}}\mu_f([\ww|_n,\ii])^t\mu_f([\sigma^n\ww,\jj])^t\mu_g([\ww|_n,\ii])^{1-t}\mu_g([\sigma^n\ww,\jj])^{1-t}\\
\leq C\sum_{\substack{\ii\in\Sigma_{A,n}\\\jj\in\Sigma_{A,m}}}\mu_f([\ww|_n,\ii])^t\mu_f([\sigma^n\ww,\jj])^t\mu_g([\ww|_n,\ii])^{1-t}\mu_g([\sigma^n\ww,\jj])^{1-t}.
\end{multline*}
On the other hand,
\begin{multline*}
\sum_{\ii\jj\in\Sigma_{A,n+m}}\mu_f([\ww,\ii\jj])^t\mu_g([\ww,\ii\jj])^{1-t}\\
\geq C^{-1}\sum_{\ii\jj\in\Sigma_{A,n+m}}\mu_f([\ww|_n,\ii])^t\mu_f([\sigma^n\ww,\jj])^t\mu_g([\ww|_n,\ii])^{1-t}\mu_g([\sigma^n\ww,\jj])^{1-t}\\
\geq C^{-1}C'\sum_{\substack{\ii\in\Sigma_{A,n-r}\\\jj\in\Sigma_{A,m-r}}}\mu_f([\ww|_{n-r},\ii])^t\mu_f([\sigma^{n+2r}\ww,\jj])^t\mu_g([\ww|_{n-r},\ii])^{1-t}\mu_g([\sigma^{n+2r}\ww,\jj])^{1-t}\\
\geq C^{-1}C'K^{-2r}\sum_{\substack{\ii\in\Sigma_{A,n}\\\jj\in\Sigma_{A,m}}}\mu_f([\ww|_{n},\ii])^t\mu_f([\sigma^{n}\ww,\jj])^t\mu_g([\ww|_{n},\ii])^{1-t}\mu_g([\sigma^{n}\ww,\jj])^{1-t}.
\end{multline*}
\end{proof}
Since $H(0)=0$ and $\overline{H}_n(t)$ is differentiable for every $n$, we get for every $n\geq 1$
\[
\begin{split}
\limsup_{t\to0}\frac{H(t)}{t}&\leq\limsup_{t\to0}\frac{\overline{H}_n(t)}{nt}\\
&=\left.\frac{1}{n}\sum_{\ww\in\Omega_n}\nu([\ww])\dfrac{\sum_{\ii\in\Sigma_{A,n}}\frac{\mu_f([\ww,\ii])^t\mu_g([\ww,\ii])^{1-t}(\log\mu_f([\ww,\ii])-\log\mu_g([\ww,\ii]))}{\nu([\ww])}}{\sum_{\ii\in\Sigma_{A,n}}\frac{\mu_f([\ww,\ii])^t\mu_g([\ww,\ii])^{1-t}}{\nu([\ww])}}\right|_{t=0}\\
&=\frac{1}{n}\sum_{\ww\in\Omega_n}\nu([\ww])\sum_{\ii\in\Sigma_{A,n}}\frac{\mu_g([\ww,\ii])(\log\mu_f([\ww,\ii])-\log\mu_g([\ww,\ii]))}{\nu([\ww])}\\
&=\frac{1}{n}\sum_{\substack{\ww\in\Omega_n\\ \ii\in\Sigma_{A,n}}}\mu_g([\ww,\ii])(\log\mu_f([\ww,\ii])-\log\mu_g([\ww,\ii]))\\
&\leq\frac{C}{n}+\frac{1}{n}\sum_{\substack{\ww\in\Omega_n\\ \ii\in\Sigma_{A,n}}}\mu_g([\ww,\ii])(\log\frac{Y(f,\ww,\ii)\nu([\ww])}{W(f,\ww)}-\log\mu_g([\ww,\ii]))\\
&\to \int fd\mu_g-h_\nu-P_\nu(f)+h_{\mu_g}\text{ as }n\to\infty,
\end{split}
\]
where we applied again Theorem~\ref{thm:equillibrium}. The other inequality,
$$
\liminf_{t\to0}\frac{H(t)}{t}\geq \int fd\mu_g-h_\nu-P_\nu(f)+h_{\mu_g}
$$
is similar. Hence,
$$
p'(0)=-P_\nu(g)+P_\nu(f)+\int fd\mu_g-h_\nu-P_\nu(f)+h_{\mu_g}=\int fd\mu_g-\int gd\mu_g.
$$
\end{proof}
\subsection{Weighted Birkhoff average}
For $\alpha,\p\in\R^d$, let us consider the potential $f_{\p}\colon\Gamma_A\mapsto\R$ defined as
$$
f_{\p}:=\langle \p,f-\alpha\rangle.
$$
First, we show the upper bound in Theorem~\ref{thm:typmain}.
\begin{lem}\label{lem:ub}
For every $\ww\in\Omega$ and $\alpha\in\R^d$
$$
h_{\rm top}(E_\ww(\alpha))\leq\inf_{\p\in\R^d}P(f_{\p},\ww).
$$
\end{lem}
\begin{proof}
The proof is standard, but we give it here for completeness.
Let $s>s_0>\inf_{\p\in\R^d}P(f_{\p},\ww)$. Hence, there exists $\p\in\R^d$ such that $s_0>P(f_{\p},\ww)$. Thus there exists $N'\geq1$ such that for every $n\geq N'$
$$
\sum_{\ii\in\Sigma_{A,n}}e^{\langle\p,S_nf-n\alpha\rangle}<e^{s_0n}.
$$
By definition,
\begin{equation}\label{eq:defE}
E_\ww(\alpha)=\bigcap_{M=1}^\infty\bigcup_{N=1}^\infty\bigcap_{n\geq N}\left\{\ii\in \Sigma_A:\left|\frac{1}{n}S_nf(\ww,\ii)-\alpha\right|<\frac{1}{M}\right\}.
\end{equation}
Since $f(\ww,\cdot)\colon \Sigma_A\mapsto\R^d$ is continuous over a compact set, we get that it is uniformly continuous. Thus, for every $M\geq1$ there exists $C>0$ such that for every $n\geq1$, $\ii\in\Sigma_{A,n}$ and every $\jj\in[\ii]$
$$
\left|S_nf(\ww,\jj)-\sup_{\jj'\in[\ii]}S_nf(\ww,\jj')\right|\leq\frac{Cn}{M}.
$$
Choose $M\geq1$ such that $|\p|\frac{1+C}{M}<(s-s_0)/2$. By \eqref{eq:defE}, we get that for every $N$ sufficiently large
\[
\begin{split}
\mathcal{H}_N^s(E_\ww(\alpha))&\leq\sum_{n=N}^\infty\sum_{\substack{\ii\in\Sigma_{A,n} \\ \left|\sup_{\jj\in[\ii]}S_nf(\ww,\jj)-n\alpha\right|<(1+C)n/M}}e^{-n s}\\
&\leq\sum_{n=N}^\infty e^{-n(s-s_0)/2}\sum_{\substack{\ii\in\Sigma_{A,n} \\ \left|\sup_{\jj\in[\ii]}S_nf(\ww,\jj)-n\alpha\right|<(1+C)n/M}}e^{-n s_0+\langle\p,S_nf-n\alpha\rangle}\\
&\leq\sum_{n=N}^\infty e^{-n(s-s_0)/2}\to0\text{ as }N\to\infty.
\end{split}
\]
\end{proof}
Recall that \begin{equation}\label{def:pa}
\mathcal{P}_A=\{\alpha\in\R^d:\text{ there exists }\mu\in\mathcal{M}_\nu(\Gamma_A)\text{ such that }\int fd\mu=\alpha\}.
\end{equation}
It is easy to see that $\mathcal{P}_A$ is a closed and convex set. Moreover, without loss of generality, we may assume that $\mathcal{P}_A$ has an interior point. Indeed, if $\mathcal{P}_A$ does not contain an interior point, then there exists a $d'$-dimensional affine subspace $V$ with $d'<d$ such that $\mathcal{P}_A\subset V$. By changing coordinates, we may assume that $f\colon\Gamma_A\mapsto\R^{d'}$. Also, for $\nu$-almost every $\ww$,
$$
\mathcal{P}_A=\{\alpha\in\R^d:\text{ there exists }\ii\in\Sigma_A\text{ such that }\lim_{n\to\infty}\frac{1}{n}S_nf(\ww,\ii)=\alpha\}.
$$
Indeed, take the sequence $\mu_n=\frac{1}{n}\sum_{k=0}^n\delta_{\sigma^k\ww,\sigma^k\ii}$ and let $\mu$ be an accumulation point of the sequence $\mu_n$ in the weak*-topology, say $\mu_{n_k}\to\mu$. Then $\int fd\mu=\lim_{k\to\infty}\int fd\mu_{n_k}=\alpha$ and, for every continuous $g\colon\Omega\mapsto\R$, $\int gd\Pi_*\mu=\lim_{k\to\infty}\int g\circ\Pi d\mu_{n_k}=\lim_{k\to\infty}\frac{1}{n_k} \sum_{\ell=0}^{n_k}g(\sigma^\ell\ww)=\int gd\nu$. Moreover, since $\sigma_*\mu_n=\mu_n-\frac{1}{n}\delta_{\ww,\ii}+\frac{1}{n}\delta_{\sigma^{n+1}\ww,\sigma^{n+1}\ii}$, we get that $\mu$ is $\sigma$-invariant.
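For example, if $\Lambda$ is a singleton (so that $\Omega$ plays no role), $\Sigma_A$ is the full shift on two symbols and $f(\ww,\ii)=i_0\in\{0,1\}$, then $\int fd\mu$ is the $\mu$-measure of the cylinder $[1]$, and the Bernoulli measures realize every value in $[0,1]$; hence $\mathcal{P}_A=[0,1]$ with interior $(0,1)$.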
Theorem~\ref{thm:equillibrium} implies that for every $\p\in\R^d$ there exists a $\sigma$-invariant ergodic measure $\mu_{\p}$ such that
\begin{equation}
P_\nu(f_{\p})=h_{\mu_{\p}}^\xi+\int f_{\p}d\mu_{\p}.
\end{equation}
\begin{lem}\label{lem:convex}
The conditional pressure $\p\mapsto P_\nu(f_{\p})$ is convex.
\end{lem}
\begin{proof}
Let $\beta_1,\beta_2>0$ be such that $\beta_1+\beta_2=1$ and let $\p_1,\p_2\in\R^d$. Then there exists a measure $\mu=\mu_{\beta_1\p_1+\beta_2\p_2}\in\mathcal{E}_\nu(\Gamma_A)$ such that
\[
\begin{split}
P_\nu(f_{\beta_1\p_1+\beta_2\p_2})&= h_\mu^\xi+\int f_{\beta_1\p_1+\beta_2\p_2}d\mu\\
&=\beta_1h_\mu^\xi+\beta_2h_\mu^\xi+\beta_1\int f_{\p_1}d\mu+\beta_2\int f_{\p_2}d\mu\\
&\leq \beta_1P_\nu(f_{\p_1})+\beta_2P_\nu(f_{\p_2}).
\end{split}
\]
\end{proof}
\begin{lem}
For every $\alpha\in\mathcal{P}^o_A$, there exists $\p^*\in\R^d$ such that $\inf_{\p}P_\nu(f_{\p})=P_\nu(f_{\p^*})$, where $\mathcal{P}^o_A$ denotes the interior of $\mathcal{P}_A$.
\end{lem}
\begin{proof}
Suppose that $\alpha\in\mathcal{P}^o_A$. Then there exists an $\eta>0$ such that for every $\p\in\R^d$ with $|\p|=1$ there exists $\mu\in\mathcal{M}_\nu(\Gamma_A)$ such that $\int fd\mu-\alpha=\eta\p$. Thus, for every $c>0$
$$
P_\nu(f_{c\p})\geq h_\mu^\xi+\int\langle c\underline{p},f-\alpha\rangle d\mu\geq c\eta|\underline{p}|^2=c\eta.
$$
Thus, $\lim_{|\p|\to\infty}P_\nu(f_{\p})=\infty$ and by the convexity of the conditional pressure (Lemma~\ref{lem:convex}), we get the statement.
\end{proof}
\begin{lem}\label{lem:int}
Let $\p^*\in\R^d$ be such that $\inf_{\p}P_\nu(f_{\p})=P_\nu(f_{\p^*})$ and let $\mu_{\p^*}$ be the conditional equilibrium defined in Theorem~\ref{thm:equillibrium}. Then
$$
\int \phi d\mu_{\p^*}=\alpha.
$$
\end{lem}
\begin{proof}
Let us argue by contradiction. Suppose that $\int \phi d\mu_{\p^*}\neq\alpha$. Let $\underline{q}=\frac{\int\phi d\mu_{\p^*}-\alpha}{|\int\phi d\mu_{\p^*}-\alpha|}$.
Observe that for any $\p_1,\p_2\in\R^d$ and $t\in\R$, $t f_{\p_1}+(1-t) f_{\p_2}=f_{t\p_1+(1-t)\p_2}$. Hence, by Theorem~\ref{thm:differentiable}, the function $p\colon t\mapsto P_\nu(f_{(1-t)\p^*+(\p^*+\underline{q})t})$ is differentiable at $t=0$, moreover,
$$
p'(0)=\int f_{\p^*+\underline{q}}-f_{\p^*}d\mu_{\p^*}.
$$
But $p$ has a minimum at $t=0$ so
$$
0= p'(0)=\int f_{\p^*+\underline{q}}-f_{\p^*}d\mu_{\p^*}=\langle\underline{q},\int\phi d\mu_{\p^*}-\alpha\rangle=\left|\int\phi d\mu_{\p^*}-\alpha\right|,
$$
which is a contradiction.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:typmain}]
It is enough to show that for every $\alpha\in\mathcal{P}^o_A$ and $\nu$-almost every $\ww$
$$
h_{\rm top}(E_\ww(\alpha))\geq h_{\mu_{\p^*}}-h_\nu,
$$
where $\mu_{\p^*}$ is the conditional equilibrium of $P_\nu(f_{\p^*})=\inf_{\p\in\R^d}P_\nu(f_\p)$ defined in Theorem~\ref{thm:equillibrium}. Indeed, Theorem~\ref{thm:entconv}, Lemma~\ref{lem:int} and Theorem~\ref{thm:equillibrium} imply that
$$
h_{\mu_{\p^*}}-h_\nu=h_{\mu_{\p^*}}^\xi=h_{\mu_{\p^*}}^\xi+\int f_{\p^*}d\mu_{\p^*}=P_\nu(f_{\p^*})=\inf_{\p\in\R^d}P_\nu(f_\p).
$$
The upper bound follows by equation \eqref{eq:condpres2} and Lemma~\ref{lem:ub}.
Let $\mu_\ww^\xi$ be the family of conditional measures with respect to the partition $\xi$ and $\mu_{\p^*}$ defined by Rohlin's Disintegration Theorem. By Theorem~\ref{thm:entconv},
$$
\lim_{n\to\infty}\frac{-1}{n}\log\mu_{\ww}^\xi([\ii|_n])=h_{\mu_{\p^*}}-h_\nu\text{ for $\mu_{\p^*}$-almost every $(\ww,\ii)\in\Gamma_A$.}
$$
By Egorov's Theorem, for every $\varepsilon>0$ there exists a set $J_1\subset\Gamma_A$ and a constant $C>0$ such that $\mu_{\p^*}(J_1)>1-\varepsilon$ and for every $(\ww,\ii)\in J_1$ and $n\geq1$
$$
\mu_{\ww}^\xi([\ii|_n])\leq Ce^{-n(h_{\mu_{\p^*}}-h_\nu-\varepsilon)}.
$$
Since $1-\varepsilon<\mu_{\p^*}(J_1)=\int\mu_\ww^\xi(J_1)d\nu(\ww)$, by Markov's inequality, we get that
$$
\nu(\{\ww\in\Omega: \mu_\ww^\xi(J_1\cap\xi(\ww))>1-\sqrt{\varepsilon}\})>1-\sqrt{\varepsilon}.
$$
By Birkhoff's Ergodic Theorem and Lemma~\ref{lem:int},
$$
\lim_{n\to\infty}\frac{1}{n}\sum_{k=0}^{n-1}f(\sigma^k\ww,\sigma^k\ii)=\alpha\text{ for $\mu_{\p^*}$-almost every $(\ww,\ii)\in\Gamma_A$.}
$$
Hence, there exists $J\subset\Omega$ with $\nu(J)>1-\sqrt{\varepsilon}$ such that for every $\ww\in J$, $\mu_\ww^\xi(E_\ww(\alpha)\cap J_1)>1-\sqrt{\varepsilon}$. Thus, by Lemma~\ref{lem:Forstman}, for every $\ww\in J$
$$
h_{\rm top}(E_\ww(\alpha))\geq h_{\rm top}(E_\ww(\alpha)\cap J_1)\geq h_{\mu_{\p^*}}-h_\nu-\varepsilon.
$$
Since $\varepsilon>0$ was arbitrary, the statement follows.
Finally, let $\widetilde{\mu}$ be the ergodic $\sigma$-invariant measure on $\Sigma_A$ such that $h_{\widetilde{\mu}}=h_{\rm top}(\Sigma_A)$. Then for $\alpha_0=\iint f(\ww,\ii)d\widetilde{\mu}(\ii)d\nu(\ww)$ we get $h_{\rm top}(E_\ww(\alpha_0))\geq h_{\rm top}(\Sigma_A)$ for $\nu$-almost every $\ww$.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{cor:main}]
Let $I$ be the domain of the map
$$
p\colon\alpha\mapsto\inf_{p\in\R}P_\nu(p\cdot (f-\alpha))=\inf_{p\in\R}\left(P_\nu(p\cdot f)-p\alpha\right).
$$
If $I$ is empty or a single point then there is nothing to prove, so we may assume that $I$ has non-empty interior. By Theorem~\ref{thm:differentiable}, the map $p\mapsto P_\nu(p\cdot f)$ is differentiable and by Lemma~\ref{lem:convex}, the derivative $p\mapsto P_\nu'(p\cdot f)=\int fd\mu_p$ is monotone increasing. Hence, $I=[\lim_{p\to-\infty}P_{\nu}'(pf),\lim_{p\to\infty}P_{\nu}'(pf)]$. Moreover, the map $\alpha\mapsto p(\alpha)$ is concave and continuous over $I$.
By Theorem~\ref{thm:typmain}, for every $\alpha\in I^o$ and $\nu$-almost every $\ww$, $h_{\rm top}(E_\ww(\alpha))=p(\alpha)$. Then by Fubini's Theorem, for $\nu$-almost every $\ww$ and Lebesgue almost every $\alpha\in I^o$, $h_{\rm top}(E_\ww(\alpha))=p(\alpha)$.
Using Theorem~\ref{thm:cont} with the choice $\phi_i(\ii):= f(\sigma^i\ww,\ii)$, the map $\alpha\mapsto h_{\rm top}(E_\ww(\alpha))$ is continuous for every $\ww\in\Omega$. This together with the continuity of the map $\alpha\mapsto p(\alpha)$ implies that $p(\alpha)\equiv h_{\rm top}(E_\ww(\alpha))$ over $I$ for $\nu$-almost every $\ww$.
\end{proof}
\begin{comment}
\begin{proof}[Proof of Theorem~\ref{thm:irreg1}]
First, let $\ww\in\Omega$ be an arbitrary sequence for which the assertion of Theorem~\ref{cor:main} holds, but fixed.
By Theorem~\ref{cor:main}, we have that the map $\alpha\mapsto h_{\rm top}(E_\ww(\alpha))$ is continuous and concave over its domain $I$ which is a closed interval. We know that $I$ is non-empty by the last assertion of Theorem~\ref{thm:typmain}, that is, there exists $\alpha_0\in I$ for which $h_{\rm top}(E_\ww(\alpha_0))=h_{\rm top}(\Sigma_A)$.
Suppose now that the continuous $g(\ii)=\int f(\ww,\ii)d\nu(\ww)$ is not constant. Then there exists $\sigma$-invariant measures $\mu_1,\mu_2$ on $\Sigma_A$ such that $\int gd\mu_1\neq\int gd\mu_2$. Since $\nu\times\mu_1$ and $\nu\times\mu_2$ are invariant measures on $\Omega\times\Sigma_A$ and $I=\mathcal{P}_A$ (see \eqref{eq:defpa}), we get $I$ contains at least two points. Since $I$ is a closed interval we get that there exists a sequence $\alpha_m\in I$ such that $\alpha_m\neq\alpha_0$ for all $m$ but $\alpha_m\to\alpha_0$ as $m\to\infty$. By continuity of the spectrum $h_{\rm top}(E_\ww(\alpha_m))\to h_{\rm top}(\Sigma_A)$.
By Corollary~\ref{cor:egorov}, for every $\delta>0$ and every $\alpha\in I$ there exists $M_\delta(\alpha)\subset E_\ww(\alpha)$ such that $h_{\rm top}(M_\delta(\alpha))\geq h_{\rm top}(E_\ww(\alpha))-\delta$ and the convergence $\frac{1}{n}S_nf(\ww,\ii)\to\alpha$ is uniform on $M_\delta(\alpha)$.
Let
$$
D_\ww(\alpha,\beta)=\left\{\ii\in\Sigma_A:\liminf_{n\to\infty}\frac{1}{n}S_nf(\ww,\ii)=\alpha,\ \limsup_{n\to\infty}\frac{1}{n}S_nf(\ww,\ii)=\beta\right\},
$$
and let $\beta_n^{(m)}=\begin{cases}
\alpha_0&\text{ if }n\text{ is odd,}\\
\alpha_m&\text{ if $n$ is even.}
\end{cases}$ By the uniform convergence, for any sequence $\varepsilon_n$ of positive reals such that $\varepsilon_n\to0$ and for each $n,m$ there exists $T_{n,m}$ such that for every $\ii\in M_{\delta}(\beta_n^{(m)})$ and every $k\geq T_{n,m}$,
$$
\left|\frac{1}{k}S_kf(\ww,\ii)-\beta_n^{(m)}\right|<\varepsilon_n.
$$
Thus, applying Proposition~\ref{prop:technical}, we get
$$
h_{\rm top}(D_\ww)\geq h_{\rm top}(D_\ww(\alpha_0,\alpha_m))\geq\min\{h_{\rm top}(E_\ww(\alpha_0)),h_{\rm top}(E_\ww(\alpha_m))\}-\delta.
$$
Now, taking $m\to\infty$ we get $h_{\rm top}(D_\ww)\geq h_{\rm top}(\Sigma_A)-\delta$. Since $\delta>0$ was arbitrary, the statement follows.
\end{proof}
\end{comment}
\section{Frequency regular sequences}
In the rest of the paper, we assume that $\Sigma_A=\Sigma$, that is, we work on the full shift. In this section, we establish the connection between $\nu$-typical and frequency regular sequences and prove Theorem~\ref{thm:goal}.
The proof of our main theorem relies on the following two lemmas.
\begin{lem}\label{lem:hold}
Let $\ww,\ww'\in\Omega$ be two $\underline{q}$-frequency regular sequences. Then there is a map $G_{\ww,\ww'}\colon\Sigma\mapsto\Sigma$ such that for every $\theta<1$ there exists $C>0$ such that for every $\ii,\jj\in\Sigma$
\begin{equation}\label{eq:hold}
d(G_{\ww,\ww'}(\ii),G_{\ww,\ww'}(\jj))\leq Cd(\ii,\jj)^{\theta}.
\end{equation}
Moreover, $G_{\ww,\ww'}\circ G_{\ww',\ww}(\ii)=\ii$.
\end{lem}
\begin{proof}
First, we define a permutation $\gamma$ of $\N$ as follows: if $w_k$ is the $n$th appearance of its symbol in $\ww$, then $\gamma(k)$ is the position of the $n$th appearance of the same symbol in $\ww'$.
More precisely, let
$$
M_{n,\lambda_i}(\ww)=\min\{k\geq1: \#\{1\leq j\leq k:w_j=\lambda_i\}=n\}
$$
and
$$
P_{k}(\ww)=\#\{1\leq i\leq k:w_i=w_k\}.
$$
Then
$$
\gamma(k)=M_{P_k(\ww),w_k}(\ww').
$$
By the definition of $\gamma(k)$, we have $w_k=w_{\gamma(k)}'$. Finally, we set the map
\begin{equation}\label{def:almlip}
G_{\ww,\ww'}(\ii):=(i_{\gamma(1)},i_{\gamma(2)},\ldots).
\end{equation} This clearly implies that
$G_{\ww,\ww'}\circ G_{\ww',\ww}$ is the identity map on $\Sigma$.
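To illustrate the definition, take the alternating sequences $\ww=(1,2,1,2,\ldots)$ and $\ww'=(2,1,2,1,\ldots)$. Then $\gamma(2k-1)=2k$ and $\gamma(2k)=2k-1$ for every $k\geq1$, and hence
$$
G_{\ww,\ww'}(\ii)=(i_2,i_1,i_4,i_3,\ldots).
$$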
Since $\ww,\ww'\in\Omega$ are frequency regular sequences, we have that $\lambda_i$ appears infinitely often in $\ww,\ww'$. Thus, for every $n\geq1$ we can define $m_n$ as the smallest positive integer such that $\{1,\ldots,n\}\subseteq\{\gamma(1),\ldots,\gamma(m_n)\}$. Hence, for every $n\geq1$
$$
\text{ if }d(\ii,\jj)\leq e^{-m_n-1}\text{ then }d(G_{\ww,\ww'}(\ii),G_{\ww,\ww'}(\jj))\leq e^{-n-1}.
$$
Thus, to prove \eqref{eq:hold}, it is enough to show that
\begin{equation}\label{eq:limneed}
\lim_{n\to\infty}\frac{m_n}{n}=1.
\end{equation}
Clearly $m_n\geq n$, so $\liminf_{n\to\infty}\frac{m_n}{n}\geq1$.
By the definition of $m_n$, for every $i=1,\ldots,N$,
\begin{equation}\label{eq:ineq1}
\#\{1\leq k\leq m_n:w_k=\lambda_i\}\geq\#\{1\leq k\leq n:w_k'=\lambda_i\},
\end{equation}
and there exists (at least one) $j=j(n)$ such that
\begin{equation}\label{eq:ineq2}
\#\{1\leq k\leq m_n:w_k=\lambda_j\}=\#\{1\leq k\leq n:w_k'=\lambda_j\}.
\end{equation}
By frequency regularity, for every $0<\varepsilon<\min_iq_i/2$ there exists $N\geq1$ such that for every $n\geq N$
$$
\left|\frac{\#\{1\leq k\leq n:w_k=\lambda_i\}}{n}-q_i\right|,\left|\frac{\#\{1\leq k\leq n:w_k'=\lambda_i\}}{n}-q_i\right|<\varepsilon.
$$
Hence, by \eqref{eq:ineq2} for every $n\geq N$
\[
\begin{split}
\frac{m_n}{n}(q_{j(n)}-\varepsilon)&\leq\frac{m_n}{n}\frac{\#\{1\leq k\leq m_n:w_k=\lambda_{j(n)}\}}{m_n}\\
&=\frac{\#\{1\leq k\leq n:w_k'=\lambda_{j(n)}\}}{n}\leq q_{j(n)}+\varepsilon.
\end{split}
\]
Thus, for every $n\geq N$,
$$
\frac{m_n}{n}\leq\frac{q_{j(n)}+\varepsilon}{q_{j(n)}-\varepsilon}\leq1+\frac{2\varepsilon}{q_{j(n)}-\varepsilon}\leq1+\frac{4\varepsilon}{\min_iq_i},
$$
and since $\varepsilon>0$ was arbitrary, \eqref{eq:limneed} follows.
\end{proof}
\begin{lem}\label{lem:equal}
For any two $\underline{q}$-frequency regular sequences $\ww,\ww'$,
\begin{equation}\label{eq:enteq}
h_{\rm top}(E_\ww(\alpha))=h_{\rm top}(E_{\ww'}(\alpha)).
\end{equation}
\end{lem}
\begin{proof}
Let $\ww,\ww'$ be $\underline{q}$-frequency regular sequences. Let $G_{\ww,\ww'}$ be the map defined in Lemma~\ref{lem:hold}. It is enough to show that
\begin{equation}\label{eq:cont}
G_{\ww,\ww'}(E_{\ww'}(\alpha))\subseteq E_{\ww}(\alpha).
\end{equation}
Indeed, by \eqref{eq:hold},
$$
h_{\rm top}(E_{\ww'}(\alpha))=h_{\rm top}\big(G_{\ww',\ww}\circ G_{\ww,\ww'}(E_{\ww'}(\alpha))\big)\leq h_{\rm top}\big(G_{\ww,\ww'}(E_{\ww'}(\alpha))\big)\leq h_{\rm top}(E_{\ww}(\alpha)).
$$
The other inequality follows by symmetry.
Let $\gamma\colon\N\mapsto\N$ be as in the proof of Lemma~\ref{lem:hold}. Let us define $p_n$ as the largest non-negative integer such that $\{1,\ldots,p_n\}\subseteq\{\gamma(1),\ldots,\gamma(n)\}$. In other words, $p_n=\min\{k\geq1:k\notin\{\gamma(1),\ldots,\gamma(n)\}\}-1$. Similarly to \eqref{eq:limneed} one can show that
\begin{equation}\label{eq:limneed2}
\lim_{n\to\infty}\frac{p_n}{n}=1.
\end{equation}
Let $\ii\in E_{\ww'}(\alpha)$. Then by \eqref{eq:limneed2}
\[
\begin{split}
\frac{1}{n}\sum_{k=0}^{n-1}f(\sigma^k\ww,\sigma^{k}G_{\ww,\ww'}(\ii))&=\frac{1}{n}\sum_{k=0}^{n-1}f_{w_k,i_{\gamma(k)}}\\
&=\frac{1}{n}\sum_{k=0}^{n-1}f_{w_{\gamma(k)}',i_{\gamma(k)}}\\
&=\frac{1}{n}\sum_{k=0}^{p_n-1}f_{w_k',i_{k}}+\frac{1}{n}\sum_{\substack{k=0 \\ \gamma(k)>p_n}}^{n-1}f_{w_{\gamma(k)}',i_{\gamma(k)}}\\
&\leq\frac{p_n}{n}\frac{1}{p_n}\sum_{k=0}^{p_n-1}f_{w_k',i_{k}}+\frac{n-p_n}{n}\max_{i,j}f_{i,j}\to\alpha\text{ as }n\to\infty.
\end{split}
\]
Similarly,
$$
\frac{1}{n}\sum_{k=0}^{n-1}f(\sigma^k\ww,\sigma^{k}G_{\ww,\ww'}(\ii))\geq\frac{p_n}{n}\frac{1}{p_n}\sum_{k=0}^{p_n-1}f_{w_k',i_{k}}+\frac{n-p_n}{n}\min_{i,j}f_{i,j}\to\alpha
$$
as $n\to\infty$. Hence, $G_{\ww,\ww'}(\ii)\in E_{\ww}(\alpha)$ which verifies \eqref{eq:cont}.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:goal}] Let $\nu$ be the Bernoulli measure associated to the weights $\underline{q}=(q_1,\ldots,q_N)$. Simple calculations show that the conditional pressure $P_\nu(\langle f-\alpha,\p\rangle)$ defined in \eqref{eq:condpresdefdef} equals $P_{\underline{q}}(\langle f-\alpha,\p\rangle)$ in \eqref{eq:simplepres}.
Hence, by applying Theorem~\ref{thm:typmain} we get that for every $\alpha$ and $\nu$-almost every $\ww$
\[
\begin{split}
h_{\rm top}(E_{\ww}(\alpha))&=\sup\{h_\mu:\mu\in\mathcal{E}_\nu(\Gamma)\text{ and }\sum_{i,j}^{K,N}f_{j,i}\mu([j,i])=\alpha\}-h_\nu\\
&=\inf_{\p\in\R^d}P_{\underline{q}}(\langle\p,f-\alpha\rangle).
\end{split}
\]
By convexity and coercivity of the conditional pressure, the infimum $\inf_{\p\in\R^d}P_{\underline{q}}(\langle\p,f-\alpha\rangle)$ is attained at some $\p^*$. By Theorem~\ref{thm:equillibrium}, we know that the measure $\mu_{\p^*}$ where the supremum is attained can be chosen such that
$$
C^{-1}\frac{Y(f_{\p^*},\ww,\ii)}{W_{|\ww|}(f_{\p^*},\ww)}\nu([\ww])\leq\mu_{\p^*}([\ww,\ii])\leq C\frac{Y(f_{\p^*},\ww,\ii)}{W_{|\ww|}(f_{\p^*},\ww)}\nu([\ww]),
$$
hold for some uniform constant $C>0$, where $f_{\p^*}(\ww,\ii)=\langle\p^*,\lambda_{w_0}\phi_{i_0}-\alpha\rangle$. However, in this case,
$$
\eta([\ww,\ii])=\frac{Y(f_{\p^*},\ww,\ii)}{W_{|\ww|}(f_{\p^*},\ww)}\nu([\ww])=\prod_{k=0}^{|\ww|-1}\frac{q_{w_k}e^{\langle\p^*,\lambda_{w_k}\phi_{i_k}-\alpha\rangle}}{\sum_{i=1}^Ke^{\langle\p^*,\lambda_{w_k}\phi_{i}-\alpha\rangle}}
$$
is clearly an ergodic Bernoulli measure on $\Gamma$. Since $\mu_{\p^*}$ is equivalent to $\eta$ and both are ergodic, we have $\eta=\mu_{\p^*}$. This shows that the supremum is attained at a Bernoulli measure.
Finally, since $\nu$-almost every sequence $\ww$ is $\underline{q}$-frequency regular, the statement follows by Lemma~\ref{lem:equal}.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:irreg2}]
Since the function $g(i)=\sum_{j=1}^Nq_jf_{j,i}$ is not constant by assumption, the possible values of $\alpha$, for which $\sum_{i,j}p_{j,i}f_{j,i}=\alpha$ and $\sum_ip_{j,i}=q_j$ form a nontrivial closed interval. Hence, the statement follows by Theorem~\ref{thm:contgen}.
\begin{comment}
Let $\nu$ be the Bernoulli measure associated to the weights $\underline{q}=(q_1,\ldots,q_N)$. Recall that
$$
D_\ww=\{\ii\in\Sigma:\lim_{n\to\infty}\frac{1}{n}\sum_{k=1}^nf_{w_k,i_k}\text{ does not exists}\}.
$$
By Theorem~\ref{thm:irreg1}, for $\nu$-almost every $\ww$ we have $h_{\rm top}(D_\ww)=\log K$. Thus, it is enough to show that for every $\underline{q}$-frequency regular sequences $\ww,\ww'$
\begin{equation}\label{eq:enteq2}
h_{\rm top}(D_\ww)=h_{\rm top}(D_{\ww'}).
\end{equation}
The proof essentially corresponds to the proof of Lemma~\ref{lem:equal}. Let $\gamma\colon\N\mapsto\N$ be as in the proof of Lemma~\ref{lem:hold}, and let $G_{\ww,\ww'}\colon\Sigma\mapsto\Sigma$ be the map defined in \eqref{def:almlip}. It is sufficient to show that
$$
G_{\ww,\ww'}(D_{\ww'})\subseteq D_\ww.
$$
Let $m_n$ as the smallest non-negative integer such that $\{1,\ldots,n\}\subseteq\{\gamma(1),\ldots,\gamma(m_n)\}$, and by \eqref{eq:limneed}, $\lim_{n\to\infty}\frac{m_n}{n}=1$.
Let $\ii\in D_{\ww'}$, then there exists $\alpha_1<\alpha_2$ and sequence $n_k, \ell_k$ such that
\[
\begin{split}
\liminf_{n\to\infty}\frac{1}{n}\sum_{k=1}^nf_{w_k',i_k}&=\lim_{k\to\infty}\frac{1}{n_k}\sum_{k=1}^{n_k}f_{w_k',i_k}=\alpha_1\text{ and}\\
\limsup_{n\to\infty}\frac{1}{n}\sum_{k=1}^nf_{w_k',i_k}&=\lim_{k\to\infty}\frac{1}{\ell_k}\sum_{k=1}^{\ell_k}f_{w_k',i_k}=\alpha_2.
\end{split}
\]
Thus,
\[
\begin{split}
\frac{1}{m_{n_k}}\sum_{k=0}^{m_{n_k}-1}f(\sigma^k\ww,\sigma^{k}G_{\ww,\ww'}(\ii))&=\frac{1}{m_{n_k}}\sum_{k=0}^{m_{n_k}-1}f_{w_k,i_{\gamma(k)}}\\
&=\frac{1}{m_{n_k}}\sum_{k=0}^{m_{n_k}-1}f_{w_{\gamma(k)}',i_{\gamma(k)}}\\
&=\frac{1}{m_{n_k}}\sum_{k=0}^{n_k-1}f_{w_k',i_{k}}+\frac{1}{m_{n_k}}\sum_{\substack{k=0 \\ \gamma(k)>n_k}}^{m_{n_k}-1}f_{w_{\gamma(k)}',i_{\gamma(k)}}\\
&\leq\frac{n_k}{m_{n_k}}\frac{1}{n_k}\sum_{k=0}^{n_k-1}f_{w_k',i_{k}}+\frac{m_{n_k}-n_k}{m_{n_k}}\max_{i,j}f_{i,j}\to\alpha_1\text{ as }k\to\infty.
\end{split}
\]
Similar argument shows that
$$
\liminf_{k\to\infty}\frac{1}{m_{\ell_k}}\sum_{k=0}^{m_{\ell_k}-1}f(\sigma^k\ww,\sigma^{k}G_{\ww,\ww'}(\ii))\geq\alpha_2,
$$
and so the statement follows.
\end{comment}
\end{proof}
Now we finish the paper by showing the necessity of the frequency regularity condition to have a non-degenerate spectrum. Example~\ref{ex:alter} follows from the next example.
\begin{ex}\label{ex:degen}
There exists a sequence $\ww\in\{0,1\}^\N$, which is not frequency regular, such that the following holds: for every continuous potential $\varphi\colon\{0,1\}^\N\mapsto\R$ there exists at most one $\alpha_0\in\R$ for which $E_\ww(\alpha_0)\neq\emptyset$.
If $\varphi$ depends only on the first symbol then $E_\ww(\alpha_0)\neq\emptyset$ if and only if $\varphi_0\varphi_1\leq0$; moreover, if additionally $\varphi_0\neq-\varphi_1$ then $h_{\rm top}(E_\ww(\alpha_0))<\log 2$.
\end{ex}
\begin{proof}[Proof of Example~\ref{ex:degen}]
First, let us define the sequence $\ww\in\{0,1\}^\N$. Let $\{M_n\}_{n=0}^\infty$ be a rapidly increasing sequence, that is, suppose that $2M_{n}<M_{n+1}$ for every $n\geq0$ and $\lim_{n\to\infty}\frac{\sum_{j=1}^nM_j}{M_{n+1}}=0$ (for instance, $M_n=2^{2^{n+1}}$). Let $\ww:=(w_0,w_1,\ldots)$, where
$$
w_k=\begin{cases}
0 & \text{if }2M_{n-1}< k\leq M_n,\\
1 & \text{if }M_n<k\leq 2M_n.
\end{cases}
$$
Clearly, $\ww$ is not frequency regular. Moreover, since for every $\ii\in\Sigma$
$$
\left|\frac{1}{M_n}\sum_{k=0}^{M_n}w_k\varphi(\sigma^k\ii)\right|\leq\frac{\max_{\ii\in\Sigma}|\varphi(\ii)|\sum_{\ell=0}^{n-1}M_\ell}{M_n}\to0\text{ as }n\to\infty,
$$
we get that $E_\ww(\alpha)=\emptyset$ for every $\alpha\neq0$. On the other hand, if $\min_{\ii\in\Sigma}\varphi(\ii)>0$ then
$$
\frac{1}{2M_n}\sum_{k=0}^{2M_n}w_k\varphi(\sigma^k\ii)\geq\frac{\min_{\ii\in\Sigma}\varphi(\ii)\sum_{\ell=0}^{n}M_\ell}{2M_n}\to\frac{\min_{\ii\in\Sigma}\varphi(\ii)}{2}\text{ as }n\to\infty,
$$
so $E_\ww(0)=\emptyset$ as well. Similarly, $E_\ww(0)=\emptyset$ also in the case if $\max_{\ii\in\Sigma}\varphi(\ii)<0$.
Now, suppose that $\varphi(\ii)=\varphi_{i_0}$. Using the previous calculations if $\varphi_0\varphi_1>0$ then $E_\ww(\alpha)=\emptyset$ for every $\alpha\in\R$. So we may assume that $\varphi_0\varphi_1\leq0$. Additionally, suppose that $\varphi_0\neq-\varphi_{1}$.
For every $m$, let $n_m$ be such that $M_{n_m}< m\leq M_{n_m+1}$. After some algebraic manipulation we get that
$$A_m(\ii)=\frac{1}{m}\sum_{k=0}^mw_k\varphi_{i_k}=\frac{M_{n_m}\sum\limits_{k=0}^{M_{n_m}}w_k\varphi_{i_k}}{mM_{n_m}}+\frac{\sum\limits_{k=M_{n_m}+1}^{\min\{m,2M_{n_m}\}}\varphi_{i_k}}{m}.
$$ Since $\frac{\sum\limits_{k=0}^{M_{n_m}}w_k\varphi_{i_k}}{M_{n_m}}\to0$ as $m\to\infty$ and $\frac{M_{n_m}}{m}$ is bounded, we get $A_m(\ii)\to0$ if and only if
$$
\frac{\#\{M_{n_m}<k\leq \min\{m,2M_{n_m}\}:i_k=0\}\varphi_0+\#\{M_{n_m}<k\leq \min\{m,2M_{n_m}\}:i_k=1\}\varphi_1}{m}\to0.
$$
In particular, $A_m(\ii)\to0$ implies that
\begin{equation}\label{eq:thiis}
\frac{\#\{M_{n}<k\leq 2M_{n}:i_k=0\}}{M_n}\to\frac{\varphi_1}{\varphi_1-\varphi_0}\text{ as }n\to\infty.
\end{equation}
Denote by $F$ the set of all $\ii\in\Sigma$ which satisfy \eqref{eq:thiis}. Then $h_{\rm top}(E_\ww(0))\leq h_{\rm top}(F)$.
For short, let $p=\frac{\varphi_1}{\varphi_1-\varphi_0}$. It is well known that for every $\varepsilon>0$ there exists $L\geq1$ such that for every $n\geq L$
$$
\#\left\{\ii\in\{0,1\}^n:\left|\frac{\#\{0<k\leq n:i_k=0\}}{n}-p\right|<\varepsilon\right\}\leq e^{(-p\log p-(1-p)\log(1-p)+\varepsilon)n}
$$ and by \eqref{eq:topentbasic},
\[
\begin{split}
h_{\rm top}(F)&\leq\liminf_{n\to\infty}\frac{1}{2M_n}\log\#\left\{\ii\in\Sigma_{2M_n}:F\cap[\ii]\neq\emptyset\right\}\\
&\leq\lim_{n\to\infty}\frac{1}{2M_n}\log\prod_{k=1}^n2^{M_{k}-2M_{k-1}}e^{(-p\log p-(1-p)\log(1-p)+\varepsilon)M_{k}}\\
&=\frac{\log2-p\log p-(1-p)\log(1-p)+\varepsilon}{2}.
\end{split}
\]
Since $\varepsilon>0$ was arbitrary and by assumption $p\neq1/2$, we get $h_{\rm top}(F)<\log2$, which completes the proof.
\end{proof}
\section{Introduction}
The filtering algorithms inside constraint programming solvers (\cite{choco,oscar,jacop,gecode} etc.) are mainly tested using test suites implemented manually.
Creating such unit tests is a significant workload for the developers and is also error prone.
The most elementary yet important test to achieve for a constraint is that no feasible solution is removed.
One can always implement a checker verifying the feasibility of the constraint when all the variables are bound. By comparing the number of solutions generated with both the checker and the tested filtering algorithm, one can be confident that no solution is removed.
This procedure can be repeated for many (small) instances (possibly randomly generated).
Alternatively, one can compare with a decomposition of the constraint into (more) elementary ones. This latter approach can improve the coverage of the test suite.
Those unit tests verifying the non removal of feasible solutions do not verify other properties of constraints generally more difficult to test.
For instance, the domain-consistency property is rarely tested outside some hard-coded small test examples.
We introduce CPChecker as a tool to ease the solver developer’s life by automating the testing of properties of filtering algorithms. For instance, \textit{algorithm A should filter more than algorithm B} or \textit{algorithm A should achieve arc or bound consistency}, etc.
The tool does not ensure that the tested filtering is free of bugs, as it is impossible to test all possible input domains, but it can reveal the presence of one if a test fails.
The large variety of pseudo-randomly generated input domains should give the user confidence that the tool will detect most bugs.
Many constraint implementations are stateful and maintain some reversible data structures. Indeed, global constraints' filtering algorithms often maintain an internal state in order to be more efficient than their decomposition.
This reversible state is also a frequent source of bugs.
CPChecker includes the trail-based operations when testing constraints such that any bug due to the state management of the constraint also has a high chance to be detected.
CPChecker is generic and can be interfaced with any JVM trailed based solvers.
When a test fails, CPChecker is able to generate detailed explanations by producing minimal domain examples on which the user's filtering is incorrect.
\paragraph{Related work}
In \cite{meier1995debugging,coffrin17,lazaar2012cp}, the authors introduce tools to debug models. Research has also been done to help programmers debug constraint programming code \cite{StoreInspection}.
To the best of our knowledge, these tools, unlike CPChecker, do not focus on the filtering properties of individual constraints.
In the next sections, we first detail how to test static filtering algorithms before explaining the testing of stateful filtering algorithms for trailed based solvers. Finally we introduce how CPChecker can be integrated into a test suite.
\section{Testing Static Filtering Algorithms}
CPChecker is able to test any static filtering algorithm acting over integer domains.
Therefore, the user needs to implement a function taking some domains (array of set of ints) as input and returning the filtered domains\footnote{Most of the code fragments presented are in Scala for the sake of conciseness but the library is compatible with any JVM-based language.}:
\begin{lstlisting}[language=scala,basicstyle=\small]
abstract class Filter {
def filter(variables: Array[Set[Int]]): Array[Set[Int]]
}
\end{lstlisting}
CPChecker also needs a trusted filtering algorithm serving as reference with the same signature.
The least effort for a user is to implement a checker for the constraint in the form of a predicate
that specifies the semantics of the constraint.
For instance a checker for the constraint $\sum_i x_i=15$ can be defined as
\begin{lstlisting}[language=scala]
def sumChecker(x: Array[Int]): Boolean = x.sum == 15
\end{lstlisting}
One can create with CPChecker an Arc/Bound-Z/Bound-D/Range Consistent filtering algorithm by providing in argument to the corresponding constructor the implementation of the checker. For instance
\begin{lstlisting}[language=scala,basicstyle=\small]
class ArcFiltering(checker: Array[Int] => Boolean) extends Filter
val trustedArcSumFiltering = new ArcFiltering(sumChecker)
\end{lstlisting}
This class implements the \texttt{filter} function as a trusted filtering algorithm reaching arc consistency by 1) computing the Cartesian product of the domains, 2) filtering out the non-solutions with the checker and 3) creating the filtered domains as the union of the remaining values.
Similar filtering algorithms (Bound-Z, Bound-D and Range) have been implemented from a checker.
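This brute-force construction can be sketched outside any solver. The following stand-alone version is purely illustrative (CPChecker's actual implementation is in Scala; the class and method names below are assumptions, not CPChecker's API):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;
import java.util.TreeSet;
import java.util.function.Predicate;

public class BruteForceArc {
    // Arc-consistent filtering obtained from a checker alone:
    // 1) enumerate the Cartesian product of the domains,
    // 2) keep only the tuples accepted by the checker,
    // 3) return, for each variable, the union of the values kept.
    public static List<Set<Integer>> filter(List<Set<Integer>> domains, Predicate<int[]> checker) {
        List<Set<Integer>> filtered = new ArrayList<>();
        for (int i = 0; i < domains.size(); i++) filtered.add(new TreeSet<>());
        enumerate(domains, checker, new int[domains.size()], 0, filtered);
        return filtered;
    }

    private static void enumerate(List<Set<Integer>> domains, Predicate<int[]> checker,
                                  int[] tuple, int depth, List<Set<Integer>> filtered) {
        if (depth == tuple.length) {
            if (checker.test(tuple))                    // the tuple is a solution
                for (int j = 0; j < tuple.length; j++) filtered.get(j).add(tuple[j]);
            return;
        }
        for (int v : domains.get(depth)) {              // try every value of the current variable
            tuple[depth] = v;
            enumerate(domains, checker, tuple, depth + 1, filtered);
        }
    }
}
```

For instance, with an \textit{AllDifferent} checker and the domains $\{1\},\{1,2\},\{1,2,3\}$, this returns $\{1\},\{2\},\{3\}$.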
Finally, the \texttt{check} and \texttt{stronger} functions permit checking, respectively, that two filtering algorithms filter the same way or that the tested filtering is stronger than the trusted one.
\begin{lstlisting}[language=scala,basicstyle=\small,breaklines=true]
def check/stronger(trustedFiltering: Filter, testedFiltering: Filter) : Boolean
\end{lstlisting}
The testing involves the following steps:
\begin{enumerate}
\item Random Domains generation \footnote{A seed can be set to reproduce the same tests.}.
\item Application of the tested and trusted filtering algorithms (from CPChecker's filterings or another trusted one) to these random domains.
\item Comparison of the domains returned by the two filtering algorithms.
\end{enumerate}
This process is repeated by default 100 times although all the parameters can be overridden for the creation of random domains, number of tests, etc.
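These three steps can be sketched as a plain loop. The helper below is illustrative only (names, domain sizes and value ranges are assumptions, not CPChecker's actual code); fixing the seed makes a failing run reproducible, mirroring CPChecker's seed option:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;
import java.util.Set;
import java.util.TreeSet;
import java.util.function.UnaryOperator;

public class CheckLoop {
    // 1) generate random domains, 2) run both filterings, 3) compare the results.
    // The filterings are assumed not to mutate their input.
    public static boolean check(UnaryOperator<List<Set<Integer>>> trusted,
                                UnaryOperator<List<Set<Integer>>> tested,
                                int nbTests, long seed) {
        Random rnd = new Random(seed);                  // fixed seed for reproducibility
        for (int t = 0; t < nbTests; t++) {
            List<Set<Integer>> domains = new ArrayList<>();
            int nVars = 2 + rnd.nextInt(4);             // between 2 and 5 variables
            for (int i = 0; i < nVars; i++) {
                Set<Integer> d = new TreeSet<>();
                int size = 1 + rnd.nextInt(4);          // non-empty domain of 1 to 4 values
                while (d.size() < size) d.add(rnd.nextInt(11) - 5);  // values in [-5, 5]
                domains.add(d);
            }
            if (!trusted.apply(domains).equals(tested.apply(domains)))
                return false;                           // a mismatch reveals a bug
        }
        return true;
    }
}
```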
\subsection{Generation of Random Test Instances}\label{section:gen}
In order to test a filtering implementation, CPChecker relies on a \textit{property-based testing} library called \textit{ScalaCheck}\cite{scalaCheck}\footnote{Similar libraries exist for most programming languages, all inspired by QuickCheck for Haskell.}.
This library includes support for the creation of random generators and for launching multiple test cases given those.
CPChecker also relies on \textit{ScalaCheck}'s ability to shrink a failing instance, in order to report a smaller test instance over which the error occurs.
\subsection{Example}
Here is an example of testing with CPChecker the arc-consistent \textit{AllDifferent} constraint in the \textit{\textbf{OscaR}} \cite{oscar} solver:
\begin{lstlisting}[language=scala, basicstyle=\small]
object ACAllDiffTest extends App {
def allDiffChecker(x: Array[Int]): Boolean = x.toSet.size == x.length
val trustedACAllDiff: Filter = new ArcFiltering(allDiffChecker)
val oscarACAllDiff: Filter = new Filter {
override def filter(variables: Array[Set[Int]]): Array[Set[Int]] = {
val cp: CPSolver = CPSolver()
val vars = variables.map(x => CPIntVar(x)(cp))
val constraint = new AllDiffAC(vars)
try {
cp.post(constraint)
} catch {
case _: Inconsistency => throw new NoSolutionException
}
vars.map(x => x.toArray.toSet)
}
}
check(trustedACAllDiff, oscarACAllDiff)
}
\end{lstlisting}
The trusted filtering algorithm is created thanks to the \texttt{ArcFiltering} class at line 3. The checker for AllDifferent simply verifies that the union of the values in the array has a cardinality equal to the size of the array, as defined at line 2.
The tested filtering implements the \texttt{filter} function using \textit{\textbf{OscaR}}'s filtering. It first transforms the variables into \textit{\textbf{OscaR}}'s variables (line 7) then creates the constraint over them (line 8). It is then posted to the solver which filters the domains until fix-point before returning them.
\section{Testing stateful constraints}
Incremental filtering algorithms usually maintain some form of state in the constraints.
It can for instance be reversible data-structures for trailed-based solvers.
CPChecker allows testing a stateful filtering algorithm during a search while checking the state restoration.
In terms of implementation, the incremental \texttt{check} and \texttt{stronger} functions compare \texttt{FilterWithState} objects that must implement two functions. The \textit{setup} function reaches the fix-point while setting up the solver used for the search. The \textit{branchAndFilter} function applies a branching operation on the current state of the solver and reaches a new fix-point for the constraint.
The branching operations represent standard branching constraints such as $=,\neq,<,>$ as well as the \texttt{push}/\texttt{pop} operations on the trail that implement the backtracking mechanism (see \cite{minicp} for further details on this mechanism).
\begin{lstlisting}[language=scala, basicstyle=\small]
abstract class FilterWithState {
def setup(variables: Array[Set[Int]]): Array[Set[Int]]
def branchAndFilter(branching: BranchOp): Array[Set[Int]]
}
\end{lstlisting}
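To make the \texttt{push}/\texttt{pop} semantics concrete, here is a minimal stand-alone sketch of a trail; real trailed solvers restore their state incrementally rather than by copying full snapshots, and the names below are illustrative, not CPChecker's API:

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class Trail {
    // A trail of snapshots: push saves the current domains, pop restores the last saved state.
    private final Deque<int[][]> stack = new ArrayDeque<>();
    private int[][] domains;                    // domains[i] holds the values of variable i

    public Trail(int[][] initial) { domains = initial; }

    public void push() { stack.push(deepCopy(domains)); }

    public void pop() { domains = stack.pop(); }

    public int[][] current() { return domains; }

    public void restrict(int var, int value) {  // branching operation x_var = value
        domains[var] = new int[]{value};
    }

    private static int[][] deepCopy(int[][] d) {
        int[][] copy = new int[d.length][];
        for (int i = 0; i < d.length; i++) copy[i] = d[i].clone();
        return copy;
    }
}
```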
The process of testing an incremental/stateful filtering algorithm is divided into four consecutive steps :
\begin{enumerate}
\item Domains generation
\item Application of the \textit{setup} function of the tested and trusted filtering algorithms.
\item Comparing the filtered domains returned at step 2.
\item Execution of a fixed number of dives, as explained next, based on the application of the \textit{branchAndFilter} function.
\end{enumerate}
\subsection{Dives}
A dive is performed by successively interleaving a push of the state and a domain restriction operation. When a leaf is reached (no or one solution remaining), the dive is finished and a random number of states are popped to start a new dive, as detailed in Algorithm~\ref{algo:dives}.
\begin{algorithm}
\SetKwProg{Dives}{Dives}{}{}
\Dives{(root, trail, nbDives)}{
dives $\leftarrow$ 0\\
currentDomains $\leftarrow$ root \\
\While{dives $<$ nbDives}{
\While{!currentDomains.isLeaf}{
trail.push(currentDomains)\\
restriction $\leftarrow$ new RandomRestrictDomain(currentDomains)\\
currentDomains $\leftarrow$ branchAndFilter(currentDomains, restriction)\\
}
dives $\leftarrow$ dives + 1 \\
\For{i $\leftarrow$ 1 to Random(1,trail.size-1)}{
trail.pop()
}
}
}
\caption{Algorithm performing dives}
\label{algo:dives}
\end{algorithm}
\newpage
\subsection{Illustration over an Example}
The next example illustrates the use of CPChecker to test \textit{\textbf{OscaR}}~\cite{oscar}'s filtering for the constraint $\sum_i{x_i}=15$, which should reach Bound-Z consistency.
\begin{lstlisting}[language=scala, basicstyle=\small]
object SumBCIncrTest extends App {
def sumChecker(x: Array[Int]): Boolean = x.sum == 15
val trusted = new IncrementalFiltering(new BoundZFiltering(sumChecker))
val tested = new FilterWithState {
val cp: CPSolver = CPSolver()
var currentVars: Array[CPIntVar] = _
override def branchAndFilter(branching: BranchOp): Array[Set[Int]] ={
branching match {
case _: Push => cp.pushState()
case _: Pop => cp.pop()
case r: RestrictDomain => try {
r.op match {
case "=" => cp.post(currentVars(r.index) === r.constant)
...}
} catch {
case _: Exception => throw new NoSolutionException
}
}
currentVars.map(x => x.toArray.toSet)
}
override def setup(variables: Array[Set[Int]]): Array[Set[Int]] = {
currentVars = variables.map(x => CPIntVar(x))
try {
solver.post(sum(currentVars) === 15)
} catch {
case _: Exception => throw new NoSolutionException
}
currentVars.map(x => x.toArray.toSet)
}
}
check(trusted, tested)
}
\end{lstlisting}
In this example, two \texttt{FilterWithState} objects are compared with the \texttt{check} function.
In CPChecker, the \texttt{IncrementalFiltering} class implements the \\ \texttt{FilterWithState} abstract class for any \texttt{Filter} object. Therefore, the \\ \texttt{IncrementalFiltering} created with a \texttt{BoundZFiltering} object is used as the trusted filtering (line 4), which itself relies on the very simple \texttt{sumChecker} function provided by the user and assumed to be bug-free.
\section{Custom Assertions}
To ease integration into a JUnit-like test suite, CPChecker provides custom assertions extending the \textit{AssertJ}\cite{assertJ} library. The classes \texttt{FilterAssert} and \\ \texttt{FilterWithStateAssert} follow the conventions of the library, with the \texttt{filterAs} and \texttt{weakerThan} functions testing a filtering algorithm in the same way as the \texttt{check} and \texttt{stronger} functions, respectively. An example of an assertion is:
\begin{lstlisting}[language=scala, basicstyle=\small]
assertThat(tested).filterAs(trusted1).weakerThan(trusted2)
\end{lstlisting}
\section{Source Code}
CPChecker's source code is publicly available in a \textit{GitHub} repository\footnotemark. This repository also contains several examples of usage of CPChecker with both \textit{Scala} and \textit{Java} solvers, namely \textit{\textbf{OscaR}}\cite{oscar}, \textit{\textbf{Choco}}\cite{choco} and \textit{\textbf{Jacop}}\cite{jacop}.
From those examples, \textit{CPChecker} detected that the arc consistent filtering of the \textit{Global Cardinality} constraint of \textit{\textbf{OscaR}} was not arc consistent for all the variables (namely, the cardinality variables).
This shows the genericity of \textit{CPChecker} and that it can be useful for testing and debugging filtering algorithms with only a small workload for the user.
Further details on the architecture and implementation of CPChecker can be found in the Master's thesis document available at the GitHub repository\footnotemark[\value{footnote}].\footnotetext{https://github.com/vrombouts/Generic-checker-for-CP-Solver-s-constraints}
\section{Conclusion and Future Work}
This article presented CPChecker, a tool to test filtering algorithms implemented in any JVM-based programming language. Filtering algorithms are tested over randomly generated domains, which is effective at finding unexpected bugs. Principally written in \textit{Scala}, CPChecker can be used to test both simple and stateful filtering algorithms.
It also provides its own assertion system for direct integration into test suites.
As future work, we would like to integrate into CPChecker properties of scheduling filtering algorithms \cite{baptiste2001}
such as edge-finder, not-first/not-last, time-table consistency, energy filtering, etc.,
in order to test the most recent implementations of scheduling algorithms \cite{gay2015time,dejemeppe2015unary,fahimi2014linear,vilim2007global,vilim2011timetable,tesch2016nearly}.
\input{bib.tex}
\end{document}
\section{Introduction}
Recently, the filter-and-forward (FF) relaying scheme has gained
interest from the research community as an alternative
relaying strategy due to its capability of improving performance
over simple amplify-and-forward (AF) relays while retaining low complexity compared with other
relaying strategies such as decode-and-forward (DF) and
compress-and-forward (CF) schemes
\cite{ElGamal&Mohseni&Zahedi:06IT, DelCoso&Ibars:09WC,
Chen10:SP,Liang:11WCOM, SungKim:11arXiv,
KimSungLee:12SP,Dong:13VT}. It is shown that the FF scheme can
outperform the AF scheme considerably. However, most of the works
regarding the FF relay scheme were conducted for single-carrier
systems
\cite{ElGamal&Mohseni&Zahedi:06IT,Chen10:SP,SungKim:11arXiv,
KimSungLee:12SP}. Recently, Kim {\it et al.} considered the FF
relay design for single-input and single-output (SISO) OFDM
systems \cite{DGKim:12APSIPA,Dong:13VT}, but their result based on
worst subcarrier signal-to-noise ratio (SNR) maximization or
direct rate maximization is not easily extended to the MIMO case,
since SNR is not clearly defined for MIMO channels; furthermore,
in the MIMO case the design of the MIMO precoder at the source and
the MIMO decoder at the destination should be considered jointly
with the FF relay design. Thus, although there exists vast
literature regarding the relay design for MIMO-OFDM systems in the
case that the relay performs OFDM processing \footnote{In this case, each subcarrier channel is independent and we only need to consider a single flat MIMO channel.}
\cite{Hammerstr:06ConfCommun,Ng:07JSAC,Dong:10ICASSP,Dang:10WC,Fang&Hua&Koshy:06SAM,Simoens&Medina&Vidal&Coso:09SP},
not many results are available for the FF relay design for
MIMO-OFDM transmission, which is the current industry standard for
the physical layer of many commercial wireless communication
systems.
In this paper, we consider the FF relay design for MIMO-OFDM
systems. In the MIMO case, the FF relay should not be designed
in isolation from the MIMO precoder and decoder at the
source and the destination. Thus, we consider the problem of joint
design of the linear MIMO transceiver at the source and the
destination and the FF relay at the relay. As mentioned, in the
MIMO case, it is not easy to use SNR as the design metric as in
the SISO case \cite{Dong:13VT}. Thus, we approach the design
problem based on the tractable criterion of
minimization of weighted sum MSE first
and then consider the rate-maximizing design problem based on the
equivalence relationship between rate maximization and weighted
MSE minimization with a properly chosen weight matrix \cite{Sampath:01COM, Guo:05INF,
Palomar:06Inf,Cioffi:08WCOM,Luo:11SP}. We tackle the complicated
joint design problems by using alternating optimization, which
enables us to exploit the existing results for the MIMO precoder
and decoder design when all channel information is given. The
proposed alternating optimization is based on the iteration
between optimal design of the FF relay for a given set of MIMO
precoder and decoder and optimal design of the MIMO precoder and
decoder for a given FF relay filter. While the linear MIMO
transceiver design for a given FF relay filter can be addressed
by existing results e.g. \cite{Sampath:01COM}, the problem of
optimal design of the FF relay for a given MIMO transceiver is
newly formulated based on the block circulant matrix theorem and
reparameterization. It is shown that the FF relay design problem
for a given MIMO transceiver reduces to a quadratically
constrained quadratic program (QCQP) problem and a solution to
this QCQP problem is proposed based on conversion to a
semi-definite program (SDP). Numerical results show the
effectiveness of the proposed FF relay design and significant
performance improvement by FF relays over widely-considered simple
AF relays, and suggest that it is worth considering the FF
relaying scheme for MIMO-OFDM systems over the AF scheme at the
cost of a certain increase in complexity.
\subsection{Notation and Organization}
\vspace{-0.5em}
In this paper, we will make use of standard notational
conventions. Vectors and matrices are written in boldface with
matrices in capitals. All vectors are column vectors. For a
matrix ${\bf X}$, ${\bf X}^*$, ${\bf X}^T$, ${\bf X}^H$, $\mbox{tr}({\bf X})$,
and ${\bf X}(i,j)$ indicate the complex conjugate, transpose,
conjugate transpose, trace, and $(i,j)$-element of ${\bf X}$,
respectively. ${\bf X} \succeq 0$ and ${\bf X} \succ 0$ mean that ${\bf X}$
is positive semi-definite and that ${\bf X}$ is strictly positive
definite, respectively. ${\bf I}_n$ stands for the identity matrix of
size $n$ (the subscript is omitted
when unnecessary), ${\mathbf {I}}_{m \times n}$ denotes the first $m\times n$ submatrix of ${\bf I}$, and ${\mathbf {0}}_{m \times n}$ denotes an $m
\times n$ matrix of all zero elements (the subscript is omitted
when unnecessary).
The notation $\text{blkToeplitz}(\overline{{\bf F}},N)$ indicates an $N A \times
(N+L_f-1) B$ block Toeplitz matrix with $N$ row blocks and $[
\overline{{\bf F}}, \bf{0}, \cdots, \bf{0} ] $ as its first row block,
where $\overline{{\bf F}} = [ {\bf F}_0, {\bf F}_1, \cdots, {\bf F}_{L_f-1}] $ is a
row block composed of $A \times B$ matrices $\{{\bf F}_k\}$;
$\text{diag}({\bf X}_1, {\bf X}_2, \cdots, {\bf X}_n)$ means a (block) diagonal
matrix with diagonal entries ${\bf X}_1, {\bf X}_2, \cdots, {\bf X}_n$. The
notation ${\bf x}\sim {\cal{CN}}(\hbox{\boldmath$\mu$\unboldmath},\hbox{$\bf \Sigma$})$ means that ${\bf x}$
is complex circularly-symmetric Gaussian distributed with mean
vector $\hbox{\boldmath$\mu$\unboldmath}$ and covariance matrix $\hbox{$\bf \Sigma$}$. $\Ebb\{\cdot\}$
denotes the expectation. $\iota=\sqrt{-1}$.
The remainder of this paper is organized as follows. The system
model is described in Section \ref{sec:systemmodel}. In Section
\ref{sec:ProblemFormulation}, the joint transceiver and FF relay
design problems for minimizing the weighted sum MSE and for
maximizing the data rate are formulated and solved by using convex
optimization theory and existing results. The performance of the
proposed design methods is investigated in Section
\ref{sec:numericalresults}, followed by the conclusion in Section
\ref{sec:conclusion}.
\vspace{-0.8em}
\section{System Model}
\label{sec:systemmodel}
We consider a point-to-point MIMO-OFDM system with a
relay, as shown in Fig.
\ref{fig:system}, where the source has $N_t$ transmit antennas,
the relay has $M_r$ receive antennas and $M_t$ transmit antennas,
and the destination has $N_r$ receive antennas. The source and the
destination employ MIMO-OFDM modulation and demodulation with $N$
subcarriers, respectively, as in a conventional MIMO-OFDM system.
However, we assume that the relay is a full-duplex\footnote{In the
case of half-duplex, the problem can be formulated similarly.} FF
relay equipped with a bank of $M_t M_r$ finite impulse response
(FIR) filters with order $L_r$, i.e., the relay performs FIR
filtering on the incoming signals received at the $M_r$ receive
antennas at the chip rate\footnote{The FIR filtering is assumed to
be performed at the baseband. Thus, up and down converters are
necessary for FF operation and one common local oscillator (LO) at
the relay is sufficient.} of the OFDM modulation and transmits the
filtered signals instantaneously through the $M_t$ transmit
antennas to the destination without OFDM processing. Thus, the FF
relay can be regarded as an extension of an amplify-and-forward
(AF) relay and as an additional frequency-selective fading channel
between the source and the destination. We assume that there is no
direct link between the source and the destination and that the
source-to-relay (SR) and relay-to-destination (RD) channels are
multi-tap filters with finite impulse responses and their state
information is known to the system.
\begin{figure}[htbp]
\begin{psfrags}
\psfrag{s0}{\scriptsize${\bf s}_0$} %
\psfrag{s1}{\scriptsize${\bf s}_1$} %
\psfrag{sn}{\scriptsize${\bf s}_{n}$} %
\psfrag{v0}{\scriptsize${\bf V}_0$} %
\psfrag{v1}{\scriptsize${\bf V}_1$} %
\psfrag{vn}{\scriptsize${\bf V}_{n}$} %
\psfrag{x0}{\scriptsize${\bf x}_0$} %
\psfrag{x1}{\scriptsize${\bf x}_1$} %
\psfrag{xn}{\scriptsize${\bf x}_{n}$} %
\psfrag{a0}{\scriptsize${\bf A}_0$} %
\psfrag{a1}{\scriptsize${\bf A}_1$} %
\psfrag{ann}{\scriptsize${\bf A}_{n}$} %
\psfrag{s00}{\scriptsize$\hat{{\bf s}}_0$} %
\psfrag{s11}{\scriptsize$\hat{{\bf s}}_1$} %
\psfrag{snn}{\scriptsize$\hat{{\bf s}}_{n}$} %
\psfrag{idft}{\scriptsize IDFT} %
\psfrag{dft}{\scriptsize DFT} %
\psfrag{ps}{\scriptsize P/S} %
\psfrag{an}{\scriptsize \&} %
\psfrag{cp}{\scriptsize CP} %
\psfrag{sp}{\scriptsize S/P} %
\psfrag{cpr}{\scriptsize CPR}%
\psfrag{ff}{\scriptsize FF relay} %
\psfrag{fir}{\scriptsize FIR filter} %
\psfrag{vd}{{\scriptsize$\vdots$}} %
\psfrag{Nt}{{\scriptsize$N_t$}} %
\psfrag{Mr}{{\scriptsize$M_r$}} %
\psfrag{Mt}{{\scriptsize$M_t$}} %
\psfrag{Nr}{{\scriptsize$N_r$}} %
\centerline{ \scalefig{0.95} \epsfbox{figures/system.eps} }
\captionsetup{justification=centering} \caption{System model}
\label{fig:system}
\end{psfrags}
\end{figure}
The considered baseband system model is described in detail as
follows. At the source, a block of $N$ input data vectors of size
$\Gamma \times 1$, denoted as $\{{\bf s}_n = [s_n[1], s_n[2], \cdots,
s_n[\Gamma]]^T$, $n = 0, 1, \cdots, N-1$\}, is processed for one
OFDM symbol time. Here, ${\bf s}_n$ is the input data vector for the
effective parallel flat MIMO channel at the $n$-th subcarrier provided by MIMO-OFDM
processing and $\Gamma \le \min(N_t, M_r, M_t, N_r)$ is the number
of data streams for the effective flat MIMO channel at each
subcarrier. We assume that each data symbol is a zero-mean
independent complex Gaussian random variable with unit variance,
i.e., $s_n[k] ~{\sim}~ {\cal{CN}}(0, 1)$ for $k= 1, 2, \cdots,
\Gamma$ and $n = 0, 1, \cdots, N-1$. Let the concatenated data
vector be denoted by ${\bf s} = [ {\bf s}_{N-1}^T, {\bf s}_{N-2}^T, \cdots,
{\bf s}_0^T ]^T$. Although MIMO precoding can be applied to the
concatenated vector ${\bf s}$, such processing is complexity-wise
inefficient and thus we assume that MIMO precoding is applied to
the effective flat MIMO channel of each subcarrier separately, as
in most practical MIMO-OFDM systems, with a precoding matrix
${\bf V}_n$ for the $n$-th subcarrier MIMO channel. The MIMO precoded
$N$ symbols for each transmit antenna are collected and processed
by inverse discrete Fourier transform (IDFT). By concatenating all
IDFT symbols for all transmit antennas, we have the overall
time-domain signal vector ${\bf x}$, given by
\begin{equation}
{\bf x} = ( {\bf W}_{N} \otimes {\bf I}_{N_t}){\bf V}{\bf s}
\end{equation}
where
\begin{eqnarray}
{\bf V} &=& \text{diag}({\bf V}_{N-1}, {\bf V}_{N-2}, \cdots, {\bf V}_0)\\
{\bf W}_{N} (k+1,l+1) &=& \frac{1}{\sqrt{N}} e^{\iota \frac{2\pi
kl}{N} }, ~~~k,l=0,1,\cdots, N-1,
\end{eqnarray}
and a cyclic prefix is attached to ${\bf x}$ before transmission. The
cyclic-prefix-attached signal vector ${\bf x}_{cp}$ can be expressed as
\begin{equation} \label{eq:xbfcp}
{\bf x}_{cp} = \underbrace{\left (\left [
\begin{array}{c}
{\bf I}_{N} \\
{\bf I}_{N_{cp}}~~{\bf{0}} \\
\end{array}
\right ] \otimes {\bf I}_{N_t} \right )}_{\stackrel{\Delta}{=}~{\bf T}_{cp}} {\bf x},
\end{equation}
where $N_{cp}$ is the cyclic prefix length, and ${\bf{0}}$ in
\eqref{eq:xbfcp} is an $N_{cp} \times (N - N_{cp})$ all-zero
matrix. We assume that the length of the overall FIR channel
between the source and the destination is not larger than that of
the OFDM cyclic prefix, i.e., $N_{cp} \ge L_f + L_r +L_g -3$,
where $L_f$, $L_r$, and $L_g$ denote the SR channel length, the
FIR filter order at the relay, and the RD channel length,
respectively.
The transmitted signal
${\bf x}_{cp}$ passes through the SR channel, the relay FIR filter,
and the RD channel; is corrupted by white Gaussian noise; and is
received at the destination. Then, the transmitted signal vector
at the relay and the received signal vector at the destination are
respectively given by
\begin{equation} \label{eq:relaypower1}
{\bf y}_t = {\bf R}{\bf F} {\bf x}_{cp} + {\bf R}{\bf n}_r~~ \text{and} ~~ {\bf y}_d =
{\bf G}{\bf R}{\bf F}{\bf x}_{cp} + {\bf G}{\bf R}{\bf n}_r + {\bf n}_{d},
\end{equation}
where {\small
\begin{align}
&{\bf y}_d = \left [{\bf y}_{d,N-1}^T, {\bf y}_{d,N-2}^T, \cdots, {\bf y}_{d,0}^T \right]^T,\\
&{\bf y}_t = \left [{\bf y}_{t,N-1}^T, {\bf y}_{t,N-2}^T, \cdots, {\bf y}_{t,0}^T, {\bf y}_{t,-1}^T, \cdots ,{\bf y}_{t,-L_g+1}^T \right]^T,\\
&{\bf x}_{cp} = \left [ {\bf x}_{N-1}^T, {\bf x}_{N-2}^T, \cdots, {\bf x}_{0}^T, {\bf x}_{-1}^T,
\cdots,{\bf x}_{-L_g-L_r-L_f+3}^T \right]^T, \\
&{\bf n}_r = \left [ {\bf n}_{r,N-1}^T, {\bf n}_{r,N-2}^T, \cdots, {\bf n}_{r,0}^T, {\bf n}_{r,-1}^T, \cdots, {\bf n}_{r,-L_g-L_r+2}^T \right]^T,\\
&{\bf n}_d = \left [ {\bf n}_{d,N-1}^T, {\bf n}_{d,N-2}^T, \cdots, {\bf n}_{d,0}^T \right]^T,\\
&{\bf G} = \text{blkToeplitz}(\overline{{\bf G}},N),~{\bf R} =
\text{blkToeplitz}(\overline{{\bf R}},N+L_g-1),~{\bf F} = \text{blkToeplitz}(\overline{{\bf F}},N+L_g+L_r-2),\label{eq:firstrowblock22}\\
&\overline{{\bf G}} = [ {\bf G}_0, {\bf G}_1, \cdots, {\bf G}_{L_g-1} ],~\overline{{\bf R}}
= [ {\bf R}_0, {\bf R}_1, \cdots, {\bf R}_{L_r-1} ],~\overline{{\bf F}} = [ {\bf F}_0,
{\bf F}_1, \cdots, {\bf F}_{L_f-1} ].\label{eq:firstrowblock}
\end{align}} \noindent
Here, ${\bf y}_{d,k}$ and ${\bf n}_{d,k}$ are $N_r \times 1$ vectors;
${\bf y}_{t,k}$ is an $M_t \times 1$ vector; ${\bf x}_k$ is an $N_t \times 1$ vector; ${\bf n}_{r,k}$ is an $M_r \times 1$ vector;
${\bf G}_k$ is an $N_r \times M_t$ matrix; ${\bf R}_k$ is an $M_t \times
M_r$ matrix; and ${\bf F}_k$ is an $M_r \times N_t$ matrix. The
entries of the noise vectors, ${\bf n}_{r,k}$ and ${\bf n}_{d,k}$, are
independently and identically distributed (i.i.d.) Gaussian with
${\bf n}_{r,k}[i] \stackrel{i.i.d.}{\sim} {\cal{CN}}(0, \sigma_r^2)$
and ${\bf n}_{d,k}[i] \stackrel{i.i.d.}{\sim} {\cal{CN}}(0,
\sigma_d^2)$. Then, the (cyclic-prefix portion removed) $N$-point
vector DFT of the received vector at the destination is given by
\begin{align}
& {\bf y} = ( {\bf W}_{N}^H \otimes {\bf I}_{N_r}) {\bf G} {\bf R}{\bf F}{\bf x}_{cp} + ( {\bf W}_{N}^H \otimes {\bf I}_{N_r}){\bf G}{\bf R}{\bf n}_r + ({\bf W}_{N}^H \otimes {\bf I}_{N_r}){\bf n}_d, \nonumber\\
& ~~= ( {\bf W}_{N}^H \otimes {\bf I}_{N_r}) {\bf G} {\bf R}{\bf F}{\bf T}_{cp} ( {\bf W}_{N} \otimes {\bf I}_{N_t}) {\bf V}{\bf s} + ( {\bf W}_{N}^H \otimes {\bf I}_{N_r}){\bf G}{\bf R}{\bf n}_r + ({\bf W}_{N}^H \otimes {\bf I}_{N_r}){\bf n}_d, \nonumber\\
&~~= ( {\bf W}_{N}^H \otimes {\bf I}_{N_r}) {\bf H}_c ( {\bf W}_{N} \otimes {\bf I}_{N_t}) {\bf V}{\bf s} + ( {\bf W}_{N}^H \otimes {\bf I}_{N_r}){\bf G}{\bf R}{\bf n}_r + ({\bf W}_{N}^H \otimes {\bf I}_{N_r}){\bf n}_d, \label{eq:circulant} \\
&~~= {\bf D} {\bf V}{\bf s} + ( {\bf W}_{N}^H \otimes {\bf I}_{N_r}){\bf G}{\bf R}{\bf n}_r + ({\bf W}_{N}^H \otimes {\bf I}_{N_r}){\bf n}_d, \label{eq:diagonal}
\end{align}
where ${\bf y} = [ {\bf y}_{N-1}^T, {\bf y}_{N-2}^T, \cdots, {\bf y}_{0}^T
]^T$, ${\bf y}_n$ is an $N_r \times 1$ received signal vector at the
$n$-th subcarrier, ${\bf W}_{N}^H $ is the normalized DFT matrix of
size $N$, ${\bf H}_c$ is an $N N_r \times N N_t$ block circulant
matrix generated from the block Toeplitz overall channel matrix
${\bf G} {\bf R}{\bf F}$ from the source to the destination, and ${\bf D} = (
{\bf W}_{N}^H \otimes {\bf I}_{N_r}) {\bf H}_c ( {\bf W}_{N} \otimes
{\bf I}_{N_t})$ is a block diagonal matrix generated by the block
circulant matrix theorem described in the next section. The $n$-th
subcarrier output of the $N$-point vector DFT is processed by a
linear receiver filter ${\bf U}_n$ of size $\Gamma \times N_r$ to
yield an estimate of ${\bf s}_n$. The overall receiver processing for
all the subcarrier channels can be expressed as
\begin{align}
& \hat{{\bf s}} = {\bf U}{\bf D} {\bf V}{\bf s} + {\bf U}( {\bf W}_{N}^H \otimes
{\bf I}_{N_r}){\bf G}{\bf R}{\bf n}_r + {\bf U}({\bf W}_{N}^H \otimes
{\bf I}_{N_r}){\bf n}_d,
\end{align}
where ${\bf U} = \text{diag}({\bf U}_{N-1}, {\bf U}_{N-2}, \cdots, {\bf U}_0)$.
\subsection{Derivation of the Subcarrier Channel and Mean Square Error}
To facilitate the optimization problem formulation in the next
section, we need to derive an explicit expression for the received
signal vector ${\bf y}_n$, $n = 0, 1, \cdots, N-1$, at the $n$-th
subcarrier.
\vspace{0.5em}
\begin{lemma} \label{lem:circulantM}
If ${\bf H}_c$ is a block circulant matrix with ${\bf K} = [ {\bf H}_0,
{\bf H}_1, \cdots, {\bf H}_{N-1} ]$ as its first row block, then
it is block-diagonalizable as
\begin{equation*}
\mbox{$ \bf \Lambda $}_b = ({\bf W}^H_N \otimes {\bf I}_{N_r})~ {\bf H}_c~ ({\bf W}_N \otimes {\bf I}_{N_t})
\end{equation*}
where $\mbox{$ \bf \Lambda $}_b $ is a block diagonal matrix defined as
\begin{equation*}
\mbox{$ \bf \Lambda $}_b = \left [
\begin{array}{ccc}
{\bf K}(\sqrt{N} {\bf w}_{N-1}^H \otimes {\bf I}_{N_t})^T & & 0 \\
& \ddots & \\
0 & & {\bf K}(\sqrt{N} {\bf w}_{0}^H \otimes {\bf I}_{N_t})^T
\end{array}
\right ]
\end{equation*}
with $\sqrt{N}{\bf w}_k^H$ denoting the $-(k-N)$-th row of the DFT
matrix $\sqrt{N}{\bf W}^H_N$, and
\begin{equation*}
{\bf K}(\sqrt{N}{\bf w}_k^H \otimes {\bf I}_{N_t})^T = \sum_{n=0}^{N-1}{\bf H}_n~ e^{-\iota 2\pi \frac{n(N-k-1)}{N}}.
\end{equation*}
\end{lemma}
\vspace{0.5em} \textit{Proof}: In \cite{Gray:06Toeplitzbook}, it
is shown that a circulant matrix can be diagonalized by a DFT
matrix. This can easily be extended to the block circulant case.
$\hfill{\square}$
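For illustration, consider the scalar case $N_t = N_r = 1$ with $N = 2$. Then
\begin{equation*}
{\bf H}_c = \left [
\begin{array}{cc}
h_0 & h_1 \\
h_1 & h_0
\end{array}
\right ], ~~~
{\bf W}_2 = \frac{1}{\sqrt{2}}\left [
\begin{array}{cc}
1 & 1 \\
1 & -1
\end{array}
\right ],
\end{equation*}
and a direct computation yields
\begin{equation*}
{\bf W}_2^H {\bf H}_c {\bf W}_2 = \left [
\begin{array}{cc}
h_0 + h_1 & 0 \\
0 & h_0 - h_1
\end{array}
\right ],
\end{equation*}
whose diagonal entries coincide with $\sum_{n=0}^{1} h_n e^{-\iota 2\pi n (N-k-1)/N}$ for $k=1$ and $k=0$, respectively, as stated in Lemma \ref{lem:circulantM}.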
By Lemma \ref{lem:circulantM}, to derive the diagonal blocks of ${\bf D}$ in
\eqref{eq:diagonal}, we only need to know the first row block of
${\bf H}_c$ in \eqref{eq:circulant}. Let the first row block of the
RD channel matrix ${\bf G}$ be denoted by an $N_r \times M_t(N+L_g
-1)$ matrix $\widetilde{{\bf G}} = [ {\bf G}_0, {\bf G}_1, \cdots, {\bf G}_{L_g-1},
\bf{0}, \cdots, \bf{0}]$. Then, the first row block of the
effective channel filtering matrix ${\bf G}{\bf R}{\bf F}$ is given by
$\widetilde{{\bf G}} {\bf R} {\bf F}$. Note that the cyclic prefix adding and
removing operations make ${\bf G}{\bf R}{\bf F}$ into the block circulant
matrix ${\bf H}_c$ by truncating out the blocks of ${\bf G}{\bf R}{\bf F}$
outside the first $N \times N$ blocks and by moving the lower
$(L_g + L_r +L_f -3) \times (L_g + L_r +L_f -3)$ blocks of the
truncated part to the lower left of the untruncated $N \times N$
block matrix, where each block is an $N_r \times N_t$ matrix.
Therefore, the first row block $\widetilde{{\bf H}}_c$ of ${\bf H}_c$ is
simply the first $N$ blocks of $\widetilde{{\bf G}} {\bf R} {\bf F}$, given by
\begin{equation} \label{eq:tHderivationI}
\widetilde{{\bf H}}_c = \widetilde{{\bf G}} {\bf R} {\bf F} {\bf T} ~~ \text{and}~~ {{\bf T}} = {\left [
\begin{array}{c}
{\bf I}_{N N_t}\\
{\bf{0}}_{(L_f+L_r+L_g-3) N_t \times N N_t}
\end{array}
\right ] }
\end{equation}
where ${\bf T}$ is a truncation matrix that removes all blocks of
$\widetilde{{\bf G}} {\bf R} {\bf F}$ except the first $N$
column blocks. By using the first row block $\widetilde{{\bf H}}_c$ and
Lemma \ref{lem:circulantM}, we obtain the diagonal blocks of ${\bf D}$ as
\begin{equation} \label{eq:tHderivationII}
{\bf D} = \text{diag}(\widetilde{{\bf H}}_c(\sqrt{N} {\bf w}_{N-1}^H \otimes
{\bf I}_{N_t})^T, \widetilde{{\bf H}}_c(\sqrt{N} {\bf w}_{N-2}^H \otimes
{\bf I}_{N_t})^T, \cdots, \widetilde{{\bf H}}_c(\sqrt{N} {\bf w}_{0}^H \otimes
{\bf I}_{N_t})^T).
\end{equation}
Based on \eqref{eq:tHderivationI} and \eqref{eq:tHderivationII},
the received signal vector on the $n$-th subcarrier at the
destination is expressed as
\begin{align}
{\bf y}_n &= \sqrt{N}\widetilde{{\bf G}}{\bf R}{\bf F}{\bf T}\mathcal{W}_{t,n}^T {\bf V}_n{\bf s}_n + \mathcal{W}_{r,n}{\bf G}{\bf R}{\bf n}_{r,n} + \mathcal{W}_{r,n}{\bf n}_{d,n}, \\
&= \hat{{\bf y}}_{n} + {\bf z}_n,
\end{align}
where $\mathcal{W}_{t,n} = {\bf w}_n^H\otimes {\bf I}_{N_t}$,
$\mathcal{W}_{r,n} = {\bf w}_n^H\otimes {\bf I}_{N_r}$, $~\hat{{\bf y}}_{n}
= \sqrt{N}\widetilde{{\bf G}}{\bf R}{\bf F}{\bf T}\mathcal{W}_{t,n}^T {\bf V}_n{\bf s}_n$,
and ${\bf z}_n = \mathcal{W}_{r,n}{\bf G}{\bf R}{\bf n}_{r,n} +
\mathcal{W}_{r,n}{\bf n}_{d,n} $. This received signal vector
${\bf y}_n$ is filtered by the receive filter ${\bf U}_n$ and its output
is given by
\begin{equation} \label{eq:systemmodel1}
\hat{{\bf s}}_n =
\sqrt{N}{\bf U}_n\widetilde{{\bf G}}{\bf R}{\bf F}{\bf T}\mathcal{W}_{t,n}^T
{\bf V}_n{\bf s}_n + {\bf U}_n\mathcal{W}_{r,n}{\bf G}{\bf R}{\bf n}_{r,n} +
{\bf U}_n\mathcal{W}_{r,n}{\bf n}_{d,n}.
\end{equation}
Finally, the weighted MSE between ${\bf s}_n$ and $\hat{{\bf s}}_n$ is
given by
\begin{align}
\mbox{${\mbox{tr}}$}( \hbox{$\bf \Theta$}_n {\cal{M}}_n )
&= \mbox{${\mbox{tr}}$}\left( \hbox{$\bf \Theta$}_n\Ebb\left\{(\hat{{\bf s}}_n-{\bf s}_n)(\hat{{\bf s}}_n-{\bf s}_n)^H \right\} \right), \nonumber\\
&= \mbox{${\mbox{tr}}$}\left( \hbox{$\bf \Theta$}_n\Ebb\left\{({\bf U}_n{\bf y}_n-{\bf s}_n)({\bf U}_n{\bf y}_n-{\bf s}_n)^H \right\} \right), \nonumber\\
&= \mbox{${\mbox{tr}}$} \left( \hbox{$\bf \Theta$}_n \left(\Ebb \{
{\bf U}_n{\bf y}_n{\bf y}_n^H{\bf U}_n^H \} -\Ebb\{{\bf s}_n{\bf y}_n^H{\bf U}_n^H\}
-\Ebb\{{\bf U}_n{\bf y}_n{\bf s}_n^H\}
+\Ebb\{{\bf s}_n{\bf s}_n^H\} \right) \right), \nonumber \\
&= \mbox{${\mbox{tr}}$} \left( \hbox{$\bf \Theta$}_n \Ebb \{{\bf U}_n\hat{{\bf y}}_n\hat{{\bf y}}_n^H{\bf U}_n^H \} \right )+ \mbox{${\mbox{tr}}$} \left( \hbox{$\bf \Theta$}_n \Ebb \{ {\bf U}_n{\bf z}_n{\bf z}_n^H{\bf U}_n^H \} \right) -\mbox{${\mbox{tr}}$} \left( \hbox{$\bf \Theta$}_n \Ebb\{{\bf s}_n\hat{{\bf y}}^H{\bf U}_n^H\}\right) \nonumber \\
&~~~- \mbox{${\mbox{tr}}$} \left( \hbox{$\bf \Theta$}_n
\Ebb\{{\bf U}_n\hat{{\bf y}}_n{\bf s}_n^H\}\right) +\mbox{${\mbox{tr}}$} \left( \hbox{$\bf \Theta$}_n
\Ebb\{{\bf s}_n{\bf s}_n^H\} \right), \label{eq:MSE}
\end{align}
where
${\cal{M}}_n\stackrel{\Delta}{=}\Ebb\left\{(\hat{{\bf s}}_n-{\bf s}_n)(\hat{{\bf s}}_n-{\bf s}_n)^H
\right\}$ is the MSE matrix at the $n$-th subcarrier and
$\hbox{$\bf \Theta$}_n$ is a $\Gamma\times \Gamma$ diagonal positive definite
weight matrix.
\section{Problem Formulation and Proposed Design Method} \label{sec:ProblemFormulation}
In this section, we consider optimal design of the FIR MIMO relay
filter $\{{\bf R}_0,{\bf R}_1,\cdots,{\bf R}_{L_r-1}\}$ and the linear
precoders and decoders $\{{\bf V}_n,{\bf U}_n, n=0,1,\cdots,N-1\}$.
Among several optimality criteria, we first consider the
minimization of the weighted sum mean-square-error (MSE) for given
weight matrices, and then consider the rate maximization via the
weighted sum MSE minimization based on the fact that the rate
maximization for MIMO channels is equivalent to the weighted MSE
minimization with properly chosen weight matrices $\{\hbox{$\bf \Theta$}_n\}$
\cite{Sampath:01COM}. (Here, the summation is across the
subcarrier channels.) The first problem is formally stated as
follows.
\vspace{0.3em}
\begin{problem} For given weight matrices $\{\hbox{$\bf \Theta$}_n\}$,
SR channel ${\bf F}$, RD channel ${\bf G}$, FF relay filter order $L_r$,
maximum source transmit power $P_{s,max}$, and maximum relay
transmit power $P_{r,max}$, optimize the transmit filter
${\bf V}=\text{diag}({\bf V}_0,\cdots,{\bf V}_{N-1})$, the relay filter
$\overline{{\bf R}}$, and the receive filter
${\bf U}=\text{diag}({\bf U}_0,\cdots,{\bf U}_{N-1})$ in order to minimize the
weighted sum MSE:
\begin{eqnarray}
\underset{{\bf V}, \overline{{\bf R}}, {\bf U}} {\min} && \sum_{n = 0}^{N-1}
\mbox{${\mbox{tr}}$}(\hbox{$\bf \Theta$}_n{\cal{M}}_n) ~~\mbox{s.t.} ~~ \mbox{${\mbox{tr}}$}({\bf V}\Vbf^H) \leq
P_{s,\max} ~~\text{and}~~ \mbox{${\mbox{tr}}$}({\bf y}_t {\bf y}_t^H) \leq P_{r,\max}.
\end{eqnarray}
\end{problem}
\vspace{0.3em}
Note that Problem 1 is a complicated non-convex optimization
problem, which does not yield an easy solution. To circumvent the
difficulty in joint optimization, we approach the problem based on
alternating optimization. That is, we first optimize the relay
filter for given transmit and receive filters under the power
constraints. Then, with the obtained relay filter we optimize the
transmit and receive filters. Problem 1 is solved in this
alternating fashion until the iteration converges. A solution to
each step is provided in the following subsections.
\subsection{Relay Filter Optimization}
Whereas the linear precoder ${\bf V}_n$ and decoder ${\bf U}_n$ are
applied to each subcarrier channel separately, the relay filter
affects all the subcarrier channels simultaneously since the FF
relay does not perform OFDM processing. Here we consider the relay
filter optimization for given transmit and receive filters, and
the problem is formulated as follows.
\vspace{0.3cm} {\it Problem 1-1:} For given weight matrices
$\{\hbox{$\bf \Theta$}_n\}$, SR channel ${\bf F}$, RD channel ${\bf G}$, FF relay
filter order $L_r$, transmit filter ${\bf V}$, receive filter
${\bf U}$, and maximum relay transmit power $P_{r,max}$, optimize the
relay filter $\overline{{\bf R}}$ in order to minimize the weighted sum
MSE:
\begin{equation} \label{eq:problem1d1}
\underset{\overline{{\bf R}} } {\min} \sum_{n = 0}^{N-1}
\mbox{${\mbox{tr}}$}(\hbox{$\bf \Theta$}_n{\cal{M}}_n) ~~\mbox{s.t.} ~~ \mbox{${\mbox{tr}}$}({\bf y}_t {\bf y}_t^H) \leq
P_{r,\max}.
\end{equation}
\vspace{0.3em}
To solve Problem 1-1, we first need to express each term in \eqref{eq:problem1d1} as a function of the design variable $\overline{{\bf R}}$.
Note that the relay block-Toeplitz filtering matrix ${\bf R}$ is
redundant since the true design variable $\overline{{\bf R}}$ is embedded
in the block Toeplitz structure of ${\bf R}$. (See
\eqref{eq:firstrowblock22}.) Hence, taking ${\bf R}$ as the design
variable directly is inefficient and we need reparameterization of
the weighted MSE in terms of $\overline{{\bf R}}$. This is possible
through successive manipulation of the terms constructing the
weighted MSE shown in \eqref{eq:MSE}. First, using similar techniques to those used in \cite{Dong:13VT}, we can express the
first term of \eqref{eq:MSE} in terms of $\overline{{\bf R}}$ as follows:
\begin{align}
&\mbox{${\mbox{tr}}$}(\hbox{$\bf \Theta$}_n\Ebb\{{\bf U}_n \hat{{\bf y}}_{n}\hat{{\bf y}}_{n}^H{\bf U}_n^H\}) \nonumber\\
&= N \mbox{${\mbox{tr}}$} ( \hbox{$\bf \Theta$}_n\Ebb\{ {\bf U}_n\widetilde{{\bf G}}{\bf R}{\bf F}{\bf T}
\mathcal{W}_{t,n}^T
{\bf V}_n{\bf s}_n{\bf s}_n^H{\bf V}_n^H\mathcal{W}_{t,n}^*{\bf T}^H{\bf F}^H{\bf R}^H\widetilde{{\bf G}}^H{\bf U}_n^H
\} ), \nonumber\\
&\stackrel{(a)}{=} N\mbox{tr} ({\bf V}_n^H\mathcal{W}_{t,n}^*{\bf T}^H{\bf F}^H{\bf R}^H
\widetilde{{\bf G}}^H{\bf U}_n^H \hbox{$\bf \Theta$}_n{\bf U}_n\widetilde{{\bf G}}
{\bf R}{\bf F}{\bf T}\mathcal{W}_{t,n}^T{\bf V}_n\Ebb\left\{{\bf s}_n{\bf s}_n^H \right\} ),\nonumber\\
&= N\mbox{tr} ({\bf V}_n^H\mathcal{W}_{t,n}^*{\bf T}^H{\bf F}^H{\bf R}^H
\widetilde{{\bf G}}^H{\bf U}_n^H \hbox{$\bf \Theta$}_n{\bf U}_n\widetilde{{\bf G}}
{\bf R}{\bf F}{\bf T}\mathcal{W}_{t,n}^T{\bf V}_n ),\nonumber\\
&= N ~ \mbox{tr} \big( \hbox{$\bf \Theta$}_n^{1/2}{\bf U}_n\widetilde{{\bf G}} {\bf R}
\underbrace{{\bf F}{\bf T}\mathcal{W}_{t,n}^T{\bf V}_n{\bf V}_n^H\mathcal{W}_{t,n}^*{\bf T}^H{\bf F}^H}_{=:{\bf K}_n}
{\bf R}^H\widetilde{{\bf G}}^H{\bf U}_n^H \hbox{$\bf \Theta$}_n^{1/2}\big),\nonumber\\
&\stackrel{(b)}{=} N \left[\mbox{vec}({\bf R}^T\widetilde{{\bf G}}^T{\bf U}_n^T
\hbox{$\bf \Theta$}^{1/2}_n)\right]^T
\overline{{\bf K}}_n \left[\mbox{vec}({\bf R}^T\widetilde{{\bf G}}^T{\bf U}_n^T\hbox{$\bf \Theta$}^{1/2}_n)\right]^*, \nonumber\\
&\stackrel{(c)}{=}
N \left[\mbox{vec}({\bf R}^T)\right]^T (\hbox{$\bf \Theta$}^{1/2}_n{\bf U}_n\widetilde{{\bf G}}\otimes {\bf I}_Q)^T
\overline{{\bf K}}_n (\hbox{$\bf \Theta$}^{1/2}_n{\bf U}_n\widetilde{{\bf G}}\otimes {\bf I}_Q)^* \left[\mbox{vec}({\bf R}^T)\right]^*, \nonumber\\
&\stackrel{(d)}{=}
N {\bf r}^T {{\bf E}}_1 (\hbox{$\bf \Theta$}^{1/2}_n{\bf U}_n\widetilde{{\bf G}}\otimes {\bf I}_Q)^T
\overline{{\bf K}}_n (\hbox{$\bf \Theta$}^{1/2}_n{\bf U}_n\widetilde{{\bf G}}\otimes {\bf I}_Q)^* {\bf E}_1^H{\bf r}^*, \nonumber\\
&= {\bf r}^H {\bf Q}_{1,n} {\bf r}, \label{eq:MSE1stterm}
\end{align}
where
\begin{eqnarray}
\overline{{\bf K}}_n = { {\bf I}_{\Gamma}\otimes{\bf K}_n }; ~~ {\bf I}_Q = {\bf I}_{(N+L_r+L_g-2)M_r}; ~~{\bf r} = \text{vec}(\overline{{\bf R}}^T); ~~~~~\nonumber\\
{\bf Q}_{1,n} = N {{\bf E}}_1^* (\hbox{$\bf \Theta$}^{1/2}_n{\bf U}_n\widetilde{{\bf G}}\otimes
{\bf I}_Q)^H
\overline{{\bf K}}_n^* (\hbox{$\bf \Theta$}^{1/2}_n{\bf U}_n\widetilde{{\bf G}}\otimes {\bf I}_Q) {\bf E}_1^T;~~~~ \nonumber
\end{eqnarray}
and ${\bf E}_1$ is defined in Appendix \ref{sec:appendix1}.
Here, (a) holds due to $\mbox{${\mbox{tr}}$}({\bf U}{\bf B}{\bf C}) =
\mbox{${\mbox{tr}}$}({\bf C}{\bf U}{\bf B})$; (b) holds due to $\mbox{${\mbox{tr}}$}({\bf X} {\bf K}_n {\bf X}^H) =
\text{vec}({\bf X}^T)^T \overline{{\bf K}}_n \text{vec}({\bf X}^T)^*$; (c) holds due to the
Kronecker product identity,
$\mbox{vec}({\bf I}{\bf B}{\bf C})=({\bf C}^T\otimes {\bf I})\mbox{vec}({\bf B})$;
and (d) is obtained because ${\bf R} = \text{blkToeplitz}(\overline{{\bf R}},N+L_g-1)$
and $\mbox{vec}({{\bf R}}^T) = {\bf E}_1^T{\bf r}$.
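As a concrete illustration of the block-Toeplitz structure of ${\bf R}$, the following NumPy sketch builds a banded block-Toeplitz (block convolution) matrix of $T$ block rows from a set of FIR taps. The function name \texttt{blk\_toeplitz} and the tap-placement convention are illustrative assumptions, not the paper's exact \texttt{blkToeplitz} definition.

```python
import numpy as np

def blk_toeplitz(taps, T):
    """Banded block-Toeplitz (FIR convolution) matrix from filter taps.

    taps : list of L arrays, each p x q (relay filter taps R_0..R_{L-1})
    T    : number of block rows (N + L_g - 1 in the paper's notation)
    Returns a (T*p) x ((T+L-1)*q) matrix with block (i, j) = taps[i-j+L-1].
    This tap placement is one common convention and is an assumption here.
    """
    L = len(taps)
    p, q = taps[0].shape
    R = np.zeros((T * p, (T + L - 1) * q), dtype=complex)
    for i in range(T):
        for j in range(T + L - 1):
            k = i - j + L - 1
            if 0 <= k < L:
                R[i*p:(i+1)*p, j*q:(j+1)*q] = taps[k]
    return R

rng = np.random.default_rng(0)
taps = [rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
        for _ in range(3)]                 # L_r = 3 taps of size 2 x 2
R = blk_toeplitz(taps, T=4)
print(R.shape)  # (8, 12)
```

Stacking the taps as $\overline{\bf R} = [{\bf R}_0~{\bf R}_1~\cdots~{\bf R}_{L_r-1}]$ then recovers ${\bf r} = \mbox{vec}(\overline{\bf R}^T)$ of length $M_t L_r M_r$, the actual design variable.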
In a similar way, the remaining terms of \eqref{eq:MSE} and the relay power constraint can also be represented as functions of the design variable
${\bf r}$. That is, the second term of \eqref{eq:MSE} can be
rewritten as
\begin{align}
&\mbox{${\mbox{tr}}$}( \hbox{$\bf \Theta$}_n\Ebb\left\{{\bf U}_n{\bf z}_n{\bf z}_n^H{\bf U}_n^H\right\} ) \nonumber \\
&= \mbox{${\mbox{tr}}$}\left( \hbox{$\bf \Theta$}_n\Ebb\left\{
{\bf U}_n\mathcal{W}_{r,n}{\bf G}{\bf R}{\bf n}_{r,n}
{\bf n}_{r,n}^H {\bf R}^H{\bf G}^H\mathcal{W}_{r,n}^H{\bf U}_n^H \right\}
+ \hbox{$\bf \Theta$}_n\Ebb\left\{ {\bf U}_n\mathcal{W}_{r,n}{\bf n}_{d,n} {\bf n}_{d,n}^H\mathcal{W}_{r,n}^H{\bf U}_n^H \right\} \right), \nonumber \\
&=
\mbox{${\mbox{tr}}$}({\bf R}^H{\bf G}^H\mathcal{W}_{r,n}^H{\bf U}_n^H\hbox{$\bf \Theta$}_n{\bf U}_n\mathcal{W}_{r,n}{\bf G}
{\bf R} \Ebb\left\{{\bf n}_{r,n}{\bf n}_{r,n}^H \right\} ) +\mbox{${\mbox{tr}}$}(\hbox{$\bf \Theta$}_n{\bf U}_n\mathcal{W}_{r,n}\Ebb\left\{{\bf n}_{d,n}^H{\bf n}_{d,n}\right\}\mathcal{W}_{r,n}^H{\bf U}_n^H),
\nonumber\\
&= \sigma_r^2 \mbox{tr} ({\bf R}^H
\underbrace{{\bf G}^H\mathcal{W}_{r,n}^H{\bf U}_n^H\hbox{$\bf \Theta$}_n{\bf U}_n\mathcal{W}_{r,n}{\bf G}}_{=:{\bf M}_n} {\bf R} )
+\sigma_d^2\mbox{tr}(\hbox{$\bf \Theta$}_n{\bf U}_n\mathcal{W}_{r,n}\mathcal{W}_{r,n}^H{\bf U}_n^H), \nonumber\\
&= \sigma_r^2 \mbox{tr} ({\bf R}^H {\bf M}_n {\bf R} )
+\sigma_d^2\mbox{tr}(\hbox{$\bf \Theta$}_n{\bf U}_n({\bf w}_n^H\otimes{\bf I}_{N_r})({\bf w}_n\otimes{\bf I}_{N_r}){\bf U}_n^H), \nonumber\\
&\stackrel{(a)}{=} \sigma_r^2 \text{vec}({\bf R})^H \overline{{\bf M}}_n
\text{vec}({\bf R})
+\sigma_d^2\mbox{tr}(\hbox{$\bf \Theta$}_n{\bf U}_n({\bf w}_n^H{\bf w}_n\otimes{\bf I}_{N_r}){\bf U}_n^H), \nonumber \\
&\stackrel{(b)}{=} \sigma_r^2 {\bf r}^H{\bf E}_2 \overline{{\bf M}}_n {\bf E}_2^H {\bf r}
+\sigma_d^2\mbox{tr}(\hbox{$\bf \Theta$}_n{\bf U}_n{\bf U}_n^H), \nonumber \\
&= {\bf r}^H{\bf Q}_{2,n} {\bf r}
+ c_n, \label{eq:MSEsecondterm}
\end{align}
where
\begin{equation*}
\overline{{\bf M}}_n= {\bf I}_{(N+L_g+L_r-2)M_r} \otimes {\bf M}_n,~~ {\bf Q}_{2,n}
= \sigma_r^2 {\bf E}_2 \overline{{\bf M}}_n {\bf E}_2^H, ~~ c_n =
\sigma_d^2\mbox{tr}(\hbox{$\bf \Theta$}_n{\bf U}_n{\bf U}_n^H),
\end{equation*}
and ${\bf E}_2$ is defined in Appendix \ref{sec:appendix1}. Here, (a) follows from the Kronecker product identity
$({\bf U}{\bf B} \otimes {\bf C}{\bf D}) = ({\bf U} \otimes {\bf C})({\bf B} \otimes
{\bf D})$, and (b) is obtained due to $\text{vec}({\bf R})^H = {\bf r}^H{\bf E}_2$.
The third term of \eqref{eq:MSE} can be
rewritten as
\begin{align}
\mbox{${\mbox{tr}}$} \left( \hbox{$\bf \Theta$}_n \Ebb\{{\bf s}_n\hat{{\bf y}}_n^H{\bf U}_n^H\}\right)
&= \sqrt{N}\mbox{tr} \left(\hbox{$\bf \Theta$}_n \Ebb\left\{
{\bf s}_n{\bf s}_n^H \right\}{\bf V}_n^H\mathcal{W}_{t,n}^*{\bf T}^H {\bf F}^H {\bf R}^H \widetilde{{\bf G}}^H {\bf U}_n^H \right), \nonumber \\
&= \sqrt{N}\mbox{tr} \left(\hbox{$\bf \Theta$}_n {\bf V}_n^H\mathcal{W}_{t,n}^*{\bf T}^H {\bf F}^H {\bf R}^H \widetilde{{\bf G}}^H {\bf U}_n^H \right), \nonumber \\
&= \sqrt{N}\mbox{tr} \left({\bf R}^H \widetilde{{\bf G}}^H {\bf U}_n^H\hbox{$\bf \Theta$}_n {\bf V}_n^H\mathcal{W}_{t,n}^*{\bf T}^H {\bf F}^H \right), \nonumber \\
&= \sqrt{N} \text{vec}({\bf R})^H \text{vec}(\widetilde{{\bf G}}^H {\bf U}_n^H\hbox{$\bf \Theta$}_n {\bf V}_n^H\mathcal{W}_{t,n}^*{\bf T}^H {\bf F}^H), \nonumber \\
&= \sqrt{N} {\bf r}^H {\bf E}_2 \text{vec}(\widetilde{{\bf G}}^H {\bf U}_n^H\hbox{$\bf \Theta$}_n {\bf V}_n^H\mathcal{W}_{t,n}^*{\bf T}^H {\bf F}^H), \nonumber \\
&= {\bf r}^H {\bf q}_n, \label{eq:MSEthirdterm}
\end{align}
where ${\bf q}_n = \sqrt{N}{\bf E}_2 \text{vec}(\widetilde{{\bf G}}^H {\bf U}_n^H\hbox{$\bf \Theta$}_n
{\bf V}_n^H\mathcal{W}_{t,n}^*{\bf T}^H {\bf F}^H)$. Finally, the relay
transmit power can be rewritten as
\begin{align}
&\Ebb\{\mbox{${\mbox{tr}}$}({\bf y}_{t}{\bf y}_{t}^H )\} \nonumber \\
&= \mbox{${\mbox{tr}}$}\left( {\bf R} {\bf F}{\bf T}_{cp}( {\bf W}_{N} \otimes {\bf I}_{N_t}) {\bf V}\Ebb\{{\bf s} {\bf s}^H \} {\bf V}^H ( {\bf W}_{N}^H \otimes {\bf I}_{N_t}) {{\bf T}_{cp}}^H {\bf F}^H {\bf R}^H \right) +\mbox{${\mbox{tr}}$} \left( {\bf R} \Ebb\{{\bf n}_{r}{\bf n}_r^H\}{\bf R}^H \right), \nonumber \\
&= \mbox{${\mbox{tr}}$}\left( {\bf R} {\bf F}{\bf T}_{cp}( {\bf W}_{N} \otimes {\bf I}_{N_t}) {\bf V} {\bf V}^H ( {\bf W}_{N}^H \otimes {\bf I}_{N_t}) {{\bf T}_{cp}}^H {\bf F}^H {\bf R}^H \right) + \sigma^2_r\mbox{${\mbox{tr}}$} \left( {\bf R} {\bf R}^H \right), \nonumber \nonumber\\
&= \mbox{tr} \left( {\bf R} \underbrace{( {\bf F}{\bf T}_{cp}( {\bf W}_{N} \otimes {\bf I}_{N_t}) {\bf V} {\bf V}^H ( {\bf W}_{N}^H \otimes {\bf I}_{N_t}) {{\bf T}_{cp}}^H {\bf F}^H+\sigma^2_r{\bf I})}_{{\bf \Pi}} {\bf R}^H \right), \nonumber\\
&= \mbox{vec}({\bf R}^T)^T \overline{{\bf \Pi}} \mbox{vec}({\bf R}^T)^*, \nonumber\\
&= {\bf r}^H\widetilde{{\bf \Pi}} {\bf r}, \label{eq:relaypower2}
\end{align}
where $\overline{{\bf \Pi}} = {\bf I}_{(N+L_g -1)M_t} \otimes {\bf \Pi}$ and $\widetilde{{\bf \Pi}} = {\bf E}_1^* \overline{{\bf \Pi}}^* {\bf E}_1^T$.
Based on \eqref{eq:MSE1stterm}, \eqref{eq:MSEsecondterm},
\eqref{eq:MSEthirdterm}, and \eqref{eq:relaypower2}, the
weighted MSE for the $n$-th subcarrier channel is expressed as
\begin{equation} \label{eq:MSE2}
\mbox{${\mbox{tr}}$}( \hbox{$\bf \Theta$}_n {\cal{M}}_n ) = {\bf r}^H {\bf Q}_n {\bf r} - {\bf r}^H {\bf q}_n - {\bf q}_n^H{\bf r} + z_n
\end{equation}
where $ {\bf Q}_n = {\bf Q}_{1,n} + {\bf Q}_{2,n}$ and $z_n = c_n +
\mbox{${\mbox{tr}}$}(\hbox{$\bf \Theta$}_n)$, and Problem 1-1 is reformulated as
\begin{eqnarray} \label{eq:QCQP}
\underset{{\bf r}}{\min} && {\bf r}^H {\bf Q} {\bf r} - {\bf r}^H {\bf q} - {\bf q}^H{\bf r} + z \nonumber \\
\mbox{s.t.} && {\bf r}^H\widetilde{{\bf \Pi}} {\bf r} \leq P_{r,\max}.
\end{eqnarray}
where $ {\bf Q} = \sum_{n=1}^N {\bf Q}_n $, $ {\bf q} = \sum_{n=1}^N {\bf q}_n $, and $ z = \sum_{n=1}^N z_n $.
The key point of the derivation of \eqref{eq:QCQP} is that Problem 1-1 reduces to a {\em quadratically constrained quadratic programming
(QCQP) problem} with a single constraint. Although QCQP is
NP-hard in general, it has been well studied in the
case where the number of constraints is small. Using the results of
\cite{Huang:Math07} and \cite{Wenbao:Math07}, we obtain an optimal
solution to Problem 1-1 as follows. Let $\overline{{\bf r}}/{t} = {\bf r}$,
where $t \in {\cal{C}}$, and $\widetilde{{\bf r}} = [ \overline{{\bf r}}^T, t ]^T
\in {\cal{C}}^{(M_t L_r M_r + 1) \times 1}$. Then, we rewrite
\eqref{eq:QCQP} equivalently as
\begin{eqnarray}
\underset{\widetilde{{\bf r}}}{\min} && \widetilde{{\bf r}}^H {\bf B}_1 \widetilde{{\bf r}} \nonumber \\
\mbox{s.t.} && \widetilde{{\bf r}}^H {\bf B}_2 \widetilde{{\bf r}} \le 0
\end{eqnarray}
where
\begin{equation*}
{\bf B}_1 =\left[
\begin{array}{cc}
{\bf Q}& -{\bf q}\\
-{\bf q}^H & z
\end{array}
\right]
~~ \text{and}~~
{\bf B}_2 =\left[
\begin{array}{cc}
\widetilde{{\bf \Pi}}& \bf{0}\\
\bf{0} & - P_{r,max}
\end{array}
\right].
\end{equation*}
By defining ${\cal{R}} := \widetilde{{\bf r}}\widetilde{{\bf r}}^H$ and removing the
rank-one constraint $\text{rank}( {\cal{R}} ) =1 $, we obtain the
following convex optimization problem:
\begin{eqnarray} \label{problem:SDP}
\underset{{\cal{R}}}{\min} && \mbox{${\mbox{tr}}$}({\bf B}_1 {\cal{R}}) \nonumber \\
\mbox{s.t.} && \mbox{${\mbox{tr}}$}({\bf B}_2 {\cal{R}}) \le 0
\end{eqnarray}
which is a semi-definite program (SDP) and can be solved
efficiently by using the standard interior point method for convex
optimization \cite{Boyd:04cvxbook, Helmberg:02SDP,
Sturm:99OptSeduMi, Boyd:CVX}. With an additional constraint
$\text{rank}( {\cal{R}} ) =1 $, the problem \eqref{problem:SDP} is
equivalent to Problem 1-1. That is, if the optimal solution of
\eqref{problem:SDP} has rank one, then it is also the optimal
solution of Problem 1-1. However, there is no guarantee that an
algorithm for solving the problem \eqref{problem:SDP} yields a
rank-one solution. In such a case, a rank-one solution from
$\cal{R}$ can always be obtained by using the rank-one
decomposition procedure \cite{Wenbao:Math07}.
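The homogenization step can be sanity-checked numerically: for $\widetilde{\bf r} = [{\bf r}^T, 1]^T$ (i.e., $t=1$), the quadratic forms with ${\bf B}_1$ and ${\bf B}_2$ reproduce the QCQP objective and constraint exactly. The sketch below uses random positive semi-definite stand-ins for ${\bf Q}$ and $\widetilde{\bf \Pi}$; it is an illustration of the construction, not of the full SDP solve.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
Q = A @ A.conj().T                      # PSD stand-in for Q
q = rng.standard_normal(n) + 1j * rng.standard_normal(n)
z = 2.0
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
Pi = B @ B.conj().T                     # PSD stand-in for Pi-tilde
P_max = 10.0

# Homogenized matrices B1 and B2 as in the text.
B1 = np.block([[Q, -q[:, None]], [-q[None, :].conj(), np.array([[z]])]])
B2 = np.block([[Pi, np.zeros((n, 1))],
               [np.zeros((1, n)), np.array([[-P_max]])]])

r = rng.standard_normal(n) + 1j * rng.standard_normal(n)
rt = np.concatenate([r, [1.0]])         # homogenized variable with t = 1

obj_qcqp = (r.conj() @ Q @ r - r.conj() @ q - q.conj() @ r + z).real
obj_hom = (rt.conj() @ B1 @ rt).real
con_qcqp = (r.conj() @ Pi @ r).real - P_max
con_hom = (rt.conj() @ B2 @ rt).real
print(np.allclose(obj_qcqp, obj_hom), np.allclose(con_qcqp, con_hom))  # True True
```

The SDP relaxation \eqref{problem:SDP} itself can then be handed to any standard conic solver operating on ${\cal R} = \widetilde{\bf r}\widetilde{\bf r}^H$ with the rank constraint dropped.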
\subsection{Transmit and receive filter optimization}
Now consider the joint design of the transmit and receive filters
$\{({\bf V}_n,{\bf U}_n), n=0,1,\cdots,N-1\}$ for a given relay FIR
filter. Note that when the transmit power $P_{n,max} ~(\ge
\mbox{tr}({\bf V}_n{\bf V}_n^H))$ for each $n$ and the relay filter are
given, the problem
simply reduces to $N$ independent problems of designing the
transmit filter ${\bf V}_n$ and the receive filter ${\bf U}_n$ for the
$n$-th subcarrier MIMO channel for $n = 0, \cdots, N-1$, as in typical MIMO-OFDM systems. This is
because we get an independent MIMO channel per subcarrier owing to
MIMO-OFDM processing. However, we have an additional freedom to
distribute the total source transmit power $P_{s,\max}$ to $N$
subcarriers such that $P_{s,\max}=\sum_{n=0}^{N-1} P_{n,\max}$,
and should take this overall power allocation into consideration. So, we solve this problem by
separating the power allocation problem out and applying the
existing result \cite{Sampath:01COM} to this problem. First,
consider the transmit and receive filter design problem when the
transmit power $P_{n,max}$ for each $n$ and the relay filter are
given:
\vspace{0.3cm} {\it Problem 1-2:} For given weight matrices
$\{\hbox{$\bf \Theta$}_n\}$, maximum per-subcarrier transmit power
$P_{n,max}$ for $n= 0, 1, \cdots, N-1$, SR channel ${\bf F}$, RD
channel ${\bf G}$, relay filtering matrix ${\bf R}$, jointly optimize
$({\bf V}_n,{\bf U}_n)$ in order to minimize the weighted MSE at the
$n$-th subcarrier MIMO channel:
\begin{eqnarray}
\underset{ {\bf V}_n, {\bf U}_n } {\min} && \mbox{${\mbox{tr}}$}(\hbox{$\bf \Theta$}_n{\cal{M}}_n)
~~\mbox{s.t.} ~~ \mbox{${\mbox{tr}}$}({\bf V}_n {\bf V}_n^H) \leq P_{n,\max} , ~~
\text{for}~ n = 0, 1, \cdots, N-1.
\end{eqnarray}
Problem 1-2 has already been solved and the optimal transceiver
structure for Problem 1-2 is available in \cite{Sampath:01COM}
and \cite{Sampath:Conf99}. It is shown in \cite{Sampath:01COM} that
the optimal transmit filter and receive filter diagonalize the
MIMO channel into eigen-subchannels for any weight matrix. Lemma
1 and Theorem 1 of \cite{Sampath:01COM} provide the optimal
transmit filter ${\bf V}_n$ and receive filter ${\bf U}_n$, and the
solution can be expressed as ${\bf V}_n = \widetilde{{\bf V}}_n\widetilde{{\bf P}}_n$,
where $\widetilde{{\bf V}}_n^H\widetilde{{\bf V}}_n={\bf I}_{\Gamma}$ and $\widetilde{{\bf P}}_n$
is a diagonal matrix with nonnegative entries s.t.
$\mbox{tr}(\widetilde{{\bf P}}_n^2)=P_{n,\max}$ determining the transmit
power of each of $\Gamma$ data streams of the $n$-th subcarrier
MIMO channel. (Please refer to \cite{Sampath:01COM}.)
Note that the solution to Problem 1-2 only optimizes the power
allocation within multiple data streams for each subcarrier when
the transmit power is allocated to each subcarrier. Now, consider
the problem of total source power allocation $P_{s,\max}$ to
subcarrier channels. Here, we exploit the {\em diagonalizing}
property \cite{Sampath:01COM} of the solution to Problem 1-2, take the direction information only for the transmit filter from the solution to Problem 1-2, and apply alternating
optimization. That is, when the relay filtering matrix ${\bf R}$ from
Problem 1-1 and the normalized transmit filters $\{\widetilde{{\bf V}}_n\}$
and the receive filters $\{{\bf U}_n\}$ from Problem 1-2 are given,
each subcarrier MIMO channel is diagonalized into
eigen-subchannels. Thus, the effective parallel MIMO channel
\eqref{eq:systemmodel1} for the $n$-th subcarrier is rewritten as
\begin{align}
\hat{{\bf s}}_n &= \sqrt{N}{\bf U}_n\widetilde{{\bf G}}{\bf R}{\bf F}{\bf T}\mathcal{W}_{t,n}^T {\bf V}_n{\bf s}_n + {\bf U}_n\mathcal{W}_{r,n}{\bf G}{\bf R}{\bf n}_{r,n} + {\bf U}_n\mathcal{W}_{r,n}{\bf n}_{d,n} \nonumber \\
&= \sqrt{N}{\bf U}_n\widetilde{{\bf G}}{\bf R}{\bf F}{\bf T}\mathcal{W}_{t,n}^T \widetilde{{\bf V}}_n \widetilde{{\bf P}}_n{\bf s}_n + {\bf U}_n\mathcal{W}_{r,n}{\bf G}{\bf R}{\bf n}_{r,n} + {\bf U}_n\mathcal{W}_{r,n}{\bf n}_{d,n}, ~~~ \label{eq:subcarrierChannel} \\
&= {\bf D}_n \widetilde{{\bf P}}_n{\bf s}_n + {\bf U}_n\mathcal{W}_{r,n}{\bf G}{\bf R}{\bf n}_{r,n} + {\bf U}_n\mathcal{W}_{r,n}{\bf n}_{d,n}
\end{align}
where ${\bf D}_n = \text{diag}(d_n[1], d_n[2], \cdots, d_n[\Gamma])$ is
obtained from the optimal transceiver $(\widetilde{{\bf V}}_n,{\bf U}_n)$ of
Problem 1-2 with each $d_n[k]$ being a non-negative value
\cite{Sampath:01COM}, and $\widetilde{{\bf P}}_n = \text{diag}( p_n[1], p_n[2],
\cdots, p_n[\Gamma])$. Therefore, we obtain $N \Gamma$ parallel
eigen-subchannels for the overall MIMO-OFDM system as
\begin{equation}
\hat{s}_n[k] = d_n[k] p_n[k] s_n[k] + n_n[k],~~\text{for}~n = 0, 1, \cdots, N-1 ~\text{and}~ k = 1, 2, \cdots,
\Gamma,
\end{equation}
where $n_n[k] = {\bf U}_{n,k}^H \mathcal{W}_{r,n}({\bf G}{\bf R}{\bf n}_{r,n} +{\bf n}_{d,n}) $ and ${\bf U}_{n,k}^H$ is the $k$-th row of ${\bf U}_n$.
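The diagonalization into eigen-subchannels can be illustrated with a small SVD-based transceiver sketch: choosing the transmit and receive filters from the singular vectors of the effective channel yields a diagonal ${\bf D}_n$ with non-negative entries. The matrix sizes here are arbitrary stand-ins.

```python
import numpy as np

rng = np.random.default_rng(3)
Nr, Nt, Gamma = 2, 2, 2
H = rng.standard_normal((Nr, Nt)) + 1j * rng.standard_normal((Nr, Nt))

# SVD-based transceiver: V-tilde at the source, U at the destination.
Uh, s, Vh = np.linalg.svd(H)
Vt = Vh.conj().T[:, :Gamma]     # normalized transmit filter (V^H V = I)
U = Uh.conj().T[:Gamma, :]      # receive filter

D = U @ H @ Vt                  # effective channel after filtering
off_diag = D - np.diag(np.diag(D))
print(np.allclose(off_diag, 0))  # True: Gamma parallel eigen-subchannels
```

The diagonal entries of ${\bf D}$ are the singular values, i.e., the non-negative gains $d_n[k]$ of the parallel scalar channels.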
The total power $P_{s,\max}$ should now be optimally allocated to
these $N\Gamma$ parallel channels to minimize the weighted sum
MSE, where the weighted sum MSE of $N\Gamma$ parallel eigen-subchannels
is derived as
\begin{equation}
\sum_{n=0}^{N-1}\sum_{k=1}^\Gamma \theta_{nk}\Ebb \{ |\hat{s}_n[k] - s_n[k] |^2 \} = \sum_{n=0}^{N-1}\sum_{k=1}^\Gamma \theta_{nk} (d_n[k]^2 p_n[k]^2 - 2 d_n[k] p_n[k] + c_n[k] )
\end{equation}
where $c_n[k] = \sigma_r^2{\bf U}_{n,k}^H \mathcal{W}_{r,n}{\bf G}{\bf R}\Rbf^H{\bf G}^H \mathcal{W}_{r,n}^H {\bf U}_{n,k} + \sigma_d^2 {\bf U}_{n,k}^H
{\bf U}_{n,k}+1$, and $\theta_{nk}$ is properly derived from
$\hbox{$\bf \Theta$}_n$. Thus, the problem of overall source power allocation to
minimize the weighted sum MSE subject to the source power constraint
is stated as follows.
\vspace{0.3cm} {\it Problem 1-3:} For any given weight matrices
$\{\hbox{$\bf \Theta$}_n\}$, SR channel ${\bf F}$, RD channel ${\bf G}$, relay
filtering matrix ${\bf R}$, maximum source power $P_{s,max} =
\sum_{n=0}^{N-1} P_{n,max}$,
normalized transmit filters $\{\widetilde{{\bf V}}_n\}$, and receive filters
$\{{\bf U}_n\}$,
\begin{eqnarray} \label{problem:sourcePA}
\underset{p_n[k]} {\min} && \sum_{n=0}^{N-1}\sum_{k=1}^\Gamma
\theta_{nk} (d_n[k]^2 p_n[k]^2 - 2 d_n[k] p_n[k] + c_n[k] )
~~\mbox{s.t.} ~~ \sum_{n=0}^{N-1}\sum_{k=1}^\Gamma p_n[k]^2 =
P_{s,max}.
\end{eqnarray}
Note that Problem 1-3 is a convex optimization problem with
respect to $p_n[k]$. The optimal solution to Problem 1-3 is given
in the following proposition:
\vspace{0.3em}
\begin{proposition}
The optimal solution to Problem 1-3 is given by
\begin{equation}
p_n[k] = \left (\frac{\theta_{nk}d_n[k]}{\theta_{nk}d_n[k]^2 +
\mu} \right)_+~~\text{s.t.}~~ \sum_{n=0}^{N-1}\sum_{k=1}^\Gamma
\left (\frac{\theta_{nk}d_n[k]}{\theta_{nk}d_n[k]^2 + \mu} \right
)^2 = P_{s,max}.
\end{equation}
\end{proposition}
{\it{Proof}}: See Appendix \ref{sec:appendix2}.
\vspace{0.3em}
The solution in Proposition 1 allocates power roughly in inverse
proportion to the effective channel gain $d_n[k]$ in most cases,
similarly to the method in \cite{Sampath:Conf99}.
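In practice, the dual variable $\mu$ in Proposition 1 can be found by bisection, since the total allocated power is continuous and monotonically decreasing in $\mu$ on the feasible interval. Below is a minimal sketch; the function name \texttt{prop1\_power\_alloc} and the bracketing strategy are our own choices.

```python
import numpy as np

def prop1_power_alloc(theta, d, P, iters=200):
    """Bisection on the dual variable mu for Proposition 1:
    p = max(theta*d / (theta*d**2 + mu), 0) with sum(p**2) = P.
    theta, d : positive weights and effective channel gains (1-D arrays).
    """
    def total(mu):
        p = np.maximum(theta * d / (theta * d**2 + mu), 0.0)
        return np.sum(p**2)
    lo = -np.min(theta * d**2) + 1e-12   # total -> infinity as mu -> lo
    hi = 1.0
    while total(hi) > P:                 # total -> 0 as mu -> infinity
        hi *= 2.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if total(mid) > P else (lo, mid)
    mu = 0.5 * (lo + hi)
    return np.maximum(theta * d / (theta * d**2 + mu), 0.0)

theta = np.array([1.0, 2.0, 0.5, 1.5])   # hypothetical weights theta_{nk}
d = np.array([1.2, 0.3, 0.8, 2.0])       # hypothetical gains d_n[k]
p = prop1_power_alloc(theta, d, P=5.0)
print(np.sum(p**2))  # ~ 5.0, the source power budget
```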
Now summarizing the results, we propose our method to design the
linear transceiver at the source and the destination and the FF
relay filter jointly to minimize the weighted sum MSE, based on
alternating optimization solving Problem 1-1, Problem 1-2, and
Problem 1-3 iteratively.
\begin{algorithm} Given parameters: $\{\hbox{$\bf \Theta$}_n\}$, ${\bf F}$, ${\bf G}$, $L_r$, $P_{s,max}$, and $P_{r,max}$ \\
Step 1: Initialize $\{\widetilde{{\bf P}}_n\}$, $\{\widetilde{{\bf V}}_n\}$, and
$\{{\bf U}_n\}$ for $n=0, 1, \cdots, N-1$. For example, $p_n[k] = \frac{P_{s,max}}{N\Gamma}$, $\widetilde{{\bf V}}_n = {\bf I}_{N_t \times \Gamma}$, and ${\bf U}_n = {\bf I}_{\Gamma\times N_r}$.\\
Step 2: Solve Problem 1-1 and obtain ${\bf R}$. \\
Step 3: Solve Problem 1-2 and obtain $\{\widetilde{{\bf V}}_n,{\bf U}_n\}$. \\
Step 4: Solve Problem 1-3 and obtain $\{\widetilde{{\bf P}}_n\}$. \\
Step 5: Go to Step 2 and repeat until the change in the weighted sum MSE falls within a given tolerance.\\
\end{algorithm}
\vspace{-0.7cm} \noindent The weighted sum MSE is a function of
${\bf R}$ and $\{\widetilde{{\bf V}}_n, {\bf U}_n, \widetilde{{\bf P}}_n\}$ denoted by
${\cal{M}}({\bf R}, \widetilde{{\bf V}}_n, {\bf U}_n, \widetilde{{\bf P}}_n)$. Let
${\bf X}^{(i)}$ denote the solution at the $i$-th step. Then, it
is easy to see that ${\cal{M}}({\bf R}^{(0)}, \widetilde{{\bf V}}_n^{(0)},
{\bf U}_n^{(0)}, \widetilde{{\bf P}}_n^{(0)})$ $\ge {\cal{M}}({\bf R}^{(1)},
\widetilde{{\bf V}}_n^{(0)}, {\bf U}_n^{(0)}, \widetilde{{\bf P}}_n^{(0)}) \ge
{\cal{M}}({\bf R}^{(1)}, \widetilde{{\bf V}}_n^{(2)}, {\bf U}_n^{(2)},
\widetilde{{\bf P}}_n^{(0)}) \ge {\cal{M}}({\bf R}^{(1)}, \widetilde{{\bf V}}_n^{(2)},
{\bf U}_n^{(2)}, \widetilde{{\bf P}}_n^{(3)}) \ge \cdots \ge 0$ because the
optimal solution is obtained at each step and the possible
solution set of the current step includes the solution of the
previous step. In this way, the proposed algorithm converges by
the monotone convergence theorem although it yields a suboptimal
solution and the initialization of the algorithm affects its
performance.
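The monotonicity argument above does not depend on the specific problem: whenever each step exactly minimizes the objective over its own block of variables, the objective values form a non-increasing sequence bounded below. The following toy sketch illustrates this with exact coordinate minimization of a strictly convex quadratic, used here only as a stand-in for the weighted sum MSE.

```python
import numpy as np

def f(x, y):
    # Strictly convex surrogate objective (stand-in for the weighted sum MSE).
    return (x - 1)**2 + (y - 2)**2 + x * y

x, y = 5.0, -3.0
vals = [f(x, y)]
for _ in range(20):
    x = 1.0 - y / 2.0        # exact minimization over x with y fixed
    vals.append(f(x, y))
    y = 2.0 - x / 2.0        # exact minimization over y with x fixed
    vals.append(f(x, y))

diffs = np.diff(vals)
print(np.all(diffs <= 1e-12))  # True: monotone non-increasing sequence
```

As in the text, monotonicity plus boundedness below gives convergence of the objective values, though not necessarily to the global optimum.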
\subsection{Rate maximization}
Now we consider the problem of rate maximization. In general,
the rate maximization problem is not equivalent to the MSE
minimization problem. However, they are closely related to each
other. The relationship has been studied in \cite{Sampath:01COM,
Guo:05INF, Palomar:06Inf}. By using the relationship, the rate
maximization problem for MIMO broadcast channels and MIMO
interference-broadcast channels has recently been considered in
\cite{Cioffi:08WCOM} and \cite{Luo:11SP}. In the case of the joint
design of the FF relay at the relay and the linear transceiver at
the source and the destination, the result regarding the weighted
sum MSE minimization in the previous subsection can be modified
and used to maximize the sum rate based on the existing
relationship between the weighed MSE and the rate. It was shown in
\cite{Sampath:01COM} that the rate maximization for the $n$-th
subcarrier MIMO channel \eqref{eq:subcarrierChannel} is equivalent
to the weighted MSE minimization when the weight matrix
$\hbox{$\bf \Theta$}_n$ is set as a diagonal matrix composed of the
eigenvalues of ${\bf H}^H \hbox{$\bf \Sigma$}_{n}^{-1}{\bf H}$, where ${\bf H}=
\sqrt{N}\widetilde{{\bf G}}{\bf R}{\bf F}{\bf T}\mathcal{W}_{t,n}^T $ is the effective MIMO
channel matrix and $\hbox{$\bf \Sigma$}_{n}$ is the effective noise
covariance matrix of the $n$-th subcarrier MIMO channel
\eqref{eq:subcarrierChannel}. (See Lemma 3 of
\cite{Sampath:01COM}.) Exploiting this result, we propose our
algorithm to design the linear transceiver and the relay filter to
maximize the sum rate below.
\vspace{0.3em}
\begin{algorithm} Given parameters: ${\bf F}$, ${\bf G}$, $L_r$, $P_{s,max}$, and $P_{r,max}$ \\
Step 1: Initialize $\{\hbox{$\bf \Theta$}_n\}$, $\{\widetilde{{\bf P}}_n\}$,
$\{\widetilde{{\bf V}}_n\}$, and
$\{{\bf U}_n\}$ for $n=0, 1, \cdots, N-1$. For example, $\hbox{$\bf \Theta$}_n = {\bf I}$, $p_n[k] = \frac{P_{s,max}}{N\Gamma}$, $\widetilde{{\bf V}}_n = {\bf I}_{N_t \times \Gamma}$, and ${\bf U}_n = {\bf I}_{\Gamma\times N_r}$.\\
Step 2: Solve Problem 1-1 and obtain ${\bf R}$. \\
Step 3: Solve Problem 1-2 and obtain $\{\widetilde{{\bf V}}_n,{\bf U}_n,\hbox{$\bf \Theta$}_n\}$.\footnote{When ${\bf R}$ is given, all the parallel subcarrier MIMO channels are determined and a solution $\{\widetilde{{\bf V}}_n,{\bf U}_n,\hbox{$\bf \Theta$}_n\}$ is given by Lemma 1 and Theorem 1 of \cite{Sampath:01COM}.} \\
Step 4: Compute $\{\widetilde{{\bf P}}_n\}$ for the $N\Gamma$ parallel scalar channels obtained from Step 3 by water-filling. \\
Step 5: Go to Step 2 and repeat until the change in the weighted sum MSE falls within a given tolerance.\\
\end{algorithm}
\vspace{0.3em}
\noindent Note that the weight matrices $\{\hbox{$\bf \Theta$}_n\}$ in
Algorithm 2 are updated in each iteration so that the weighted MSE
minimization is equivalent to the rate maximization for an updated
relay filter, whereas the weight matrices are fixed over
iterations in Algorithm 1.
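Step 4 of Algorithm 2 is standard water-filling over the $N\Gamma$ parallel scalar channels. A minimal bisection-based sketch is given below; the gains \texttt{g} stand in for the per-subchannel SNR coefficients (e.g., $d_n[k]^2$ over the effective noise power), and the specific values are hypothetical.

```python
import numpy as np

def water_filling(g, P, iters=200):
    """Classical water-filling: powers q_i = (w - 1/g_i)_+ with sum q_i = P."""
    g = np.asarray(g, dtype=float)
    lo, hi = 0.0, P + np.max(1.0 / g)    # bracket for the water level w
    for _ in range(iters):
        w = 0.5 * (lo + hi)
        q = np.maximum(w - 1.0 / g, 0.0)
        lo, hi = (w, hi) if np.sum(q) < P else (lo, w)
    return np.maximum(0.5 * (lo + hi) - 1.0 / g, 0.0)

g = np.array([4.0, 1.0, 0.25])           # hypothetical per-subchannel gains
q = water_filling(g, P=2.0)
print(q, np.sum(q))                      # weakest channel gets no power
```

Note that the weakest subchannel is switched off entirely when the water level falls below its inverse gain, which is the usual behavior of rate-optimal allocation.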
Now consider the complexity of the proposed algorithms. Note that
solving Problem 1-2 involves $N$ separate small MIMO systems of
size $N_r \times N_t$, and the solution to Problem 1-3 (Algorithm
1) and the water-filling power allocation solution (Algorithm 2)
are explicitly given. Thus, the main complexity of the proposed
algorithms lies in solving Problem 1-1 that requires solving an
SDP problem of size $M_tM_rL_g$. Due to the existence of fast
approximate algorithms for solving SDP problems
\cite{Arora:Conf05, Arora:12TOC}, the proposed algorithm is
implementable if the number of iterations required for convergence is
not too large, as will be seen in Fig. \ref{fig:Converge}. For other
practical issues such as channel estimation and self-interference
caused by full-duplex operation, please see \cite{Dong:13VT}.
\section{Numerical results} \label{sec:numericalresults}
In this section, we provide some numerical results to evaluate the
performance of the proposed FF relay design in Section \ref{sec:ProblemFormulation}. Throughout the simulation, we fixed the number of OFDM subcarriers
as $N=16$ with a minimal cyclic prefix covering the overall
FIR channel length in each simulation case. In all cases, each
channel tap coefficient of the SR and RD channel matrices,
${\bf F}_k$ and ${\bf G}_k$, was generated i.i.d. according to a
Rayleigh fading model, i.e., ${\bf F}_k(i,j) \stackrel{i.i.d.}{\sim}
{\cal{CN}}(0, \sigma_f^2)$ and ${\bf G}_k(i,j)
\stackrel{i.i.d.}{\sim} {\cal{CN}}(0, \sigma_g^2)$, where
$\sigma_f = \sigma_g = 1$. The SR channel length and the RD
channel length were set as $L_f = L_g = 3$, and $N_t = M_r = M_t
= N_r = 2$. The relay and the destination had the same noise
power $\sigma_r^2 = \sigma_d^2 = 1$, and the source transmit power
was 20 dB higher than the noise power, i.e., {$P_{s,max}=100$.
(From here on, all dB power values are relative to
$\sigma_r^2=\sigma_d^2=1$.)}
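The simulated channel taps described above can be generated as follows. The per-real-dimension scaling $\sigma/\sqrt{2}$ is our assumption for producing unit-variance $\mathcal{CN}(0,\sigma^2)$ entries, and the helper name \texttt{cn\_taps} is illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
Mt = Mr = Nt = Nr = 2
Lf = Lg = 3
sigma_f = sigma_g = 1.0

def cn_taps(rows, cols, L, sigma, rng):
    # Each entry ~ CN(0, sigma^2): i.i.d. real/imag parts of variance sigma^2/2.
    return [(rng.standard_normal((rows, cols))
             + 1j * rng.standard_normal((rows, cols))) * sigma / np.sqrt(2.0)
            for _ in range(L)]

F_taps = cn_taps(Mr, Nt, Lf, sigma_f, rng)   # source-to-relay taps F_k
G_taps = cn_taps(Nr, Mt, Lg, sigma_g, rng)   # relay-to-destination taps G_k
print(len(F_taps), F_taps[0].shape)  # 3 (2, 2)
```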
\begin{figure}[htbp] \centerline{
\begin{psfrags}
\scalefig{0.6}\epsfbox{figures/Sum_MSE.eps}
\end{psfrags}
}
\captionsetup{justification=centering} \caption{Sum MSE versus FF relay transmit
power.} \label{fig:Sum_MSE}
\end{figure}
We first evaluated the MSE performance of the proposed FF relay
design
method, Algorithm 1, to minimize the sum MSE subject to a source
power constraint and a relay
power constraint. Figs. \ref{fig:Sum_MSE} and \ref{fig:Sum_MSE2} show the resulting sum MSE over all subcarriers.
For the curves in the figures, 200 channels were randomly realized
with $L_f=L_g=3$ and each plotted value is the average over the
200 channel realizations. As expected, it is seen in Figs.
\ref{fig:Sum_MSE} and \ref{fig:Sum_MSE2} that the performance of
the FF relay improves as the FF relay filter length increases, and the FF relay significantly outperforms the
simple AF relay ($L_r = 1$). It
is also seen that most of the gain is achieved by
only a few filter taps for the FF relay.
\begin{figure}[htbp] \centerline{
\begin{psfrags}
\scalefig{0.6}\epsfbox{figures/Sum_MSE2.eps}
\end{psfrags}
} \captionsetup{justification=centering} \caption{Sum MSE versus relay filter
length.} \label{fig:Sum_MSE2}
\end{figure}
\begin{figure}[htbp] \centerline{
\begin{psfrags}
\scalefig{0.6}\epsfbox{figures/BER.eps}
\end{psfrags}
} \captionsetup{justification=centering} \caption{Overall BER versus FF relay
transmit power.} \label{fig:BER}
\end{figure}
Next, we investigated the BER performance corresponding to Fig.
\ref{fig:Sum_MSE}. Here, we assumed uncoded QPSK modulation for
each subcarrier channel. From the result of Fig.
\ref{fig:Sum_MSE}, we obtained the SNR of each subcarrier channel
of the total $N=16$ subcarrier channels for the designed FF relay
filter, transmit filter, receive filter, and source power
allocation. We then computed the BER of each subcarrier from its
SNR and averaged all the subcarrier
channel BERs to obtain the overall BER, and the result is shown in
Fig. \ref{fig:BER}. It is seen in Fig. \ref{fig:BER} that the FF
relay significantly improves the BER performance over the AF
relay. Next, we tested the convergence property of the proposed
algorithm, and Fig. \ref{fig:Converge} shows the result. It is
seen that the proposed algorithm converges with a few iterations.
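The per-subcarrier BER averaging described above can be sketched as follows, assuming Gray-mapped uncoded QPSK so that the bit error rate at per-symbol SNR $\gamma_s$ is $Q(\sqrt{\gamma_s})$ with $Q(x) = \tfrac{1}{2}\,\mathrm{erfc}(x/\sqrt{2})$. The listed SNR values are hypothetical placeholders for the designed system's per-subcarrier SNRs.

```python
import numpy as np
from math import erfc, sqrt

def qpsk_ber(snr):
    """Uncoded Gray-mapped QPSK BER at per-symbol SNR `snr`:
    BER = Q(sqrt(snr)), with Q(x) = erfc(x / sqrt(2)) / 2."""
    return 0.5 * erfc(sqrt(snr / 2.0))

snrs = np.array([10.0, 50.0, 100.0, 5.0])   # hypothetical per-subcarrier SNRs
overall_ber = np.mean([qpsk_ber(s) for s in snrs])
print(overall_ber)
```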
Finally, we examined the rate performance of the proposed
rate-targeting design method, Algorithm 2. (Rate maximization
may be the ultimate goal of design in many cases.)
Fig. \ref{fig:rate} shows the result. Again, for the figure 200
channels were randomly realized with $L_f=L_g=3$ and each plotted
value is the average over the 200 channel realizations, and the
sum rate is the sum over the total subcarrier channels. It is
shown in Fig. \ref{fig:rate} that the FF relay improves the rate
performance as the FF relay filter length increases, and the
improvement gap shows that it is worth considering FF relays over
simple AF relays even though FF relays require more processing
than AF relays.
\begin{figure}[htbp] \centerline{
\begin{psfrags}
\scalefig{0.6}\epsfbox{figures/Converge.eps}
\end{psfrags}
} \captionsetup{justification=centering} \caption{Sum MSE versus the number of
iterations.} \label{fig:Converge}
\end{figure}
\begin{figure}[htbp] \centerline{
\begin{psfrags}
\scalefig{0.6}\epsfbox{figures/rate.eps}
\end{psfrags}
} \captionsetup{justification=centering} \caption{Sum rate versus FF relay
transmit power.} \label{fig:rate}
\end{figure}
\section{Conclusion} \label{sec:conclusion}
In this paper, we have considered the joint design of the linear
transceiver and the FF relay for MIMO-OFDM systems for weighted
sum MSE minimization and sum rate maximization, and have proposed
algorithms for this purpose based on alternating optimization that
iterates between optimal design of the FF relay for a MIMO
transceiver at the source and the destination and optimal design
of the MIMO transceiver for a given FF relay filter. We have shown
that the FF relay design problem for a given MIMO transceiver
reduces to a quadratically constrained quadratic program (QCQP)
and have proposed a solution to this QCQP problem based on
conversion to a semi-definite program (SDP). We have provided some
numerical results to evaluate the performance gain of the FF
relaying scheme over the simple AF scheme for MIMO-OFDM systems.
Numerical results show the effectiveness of the proposed FF relay
design and suggest that it is worth considering the FF relaying
scheme over the widely-considered simple AF scheme for MIMO-OFDM
systems.
\newpage
\appendices
\section{${\bf E}_1$ and ${\bf E}_2$ matrices} \label{sec:appendix1}
${\bf E}_1$ and ${\bf E}_2$ are ${M_tL_rM_r ~\times~ M_t(N+L_r+L_g-2)(N+L_g-1) M_r}$ matrices and defined as follows:
{\small
\begin{eqnarray}
&&{\bf E}_1= \nonumber \\
&&
\left[
\underbrace{\left|
\begin{array}{c} {\bf I}\\ \bf{0}\\ \vdots\\ \vdots\\ \bf{0}\\ \end{array}
\underbrace{\begin{array}{c} \bf{0}\\ \vdots\\ \vdots\\ \vdots\\ \bf{0}\\ \end{array} }_{N+L_g-2}
\left|
\begin{array}{c} \bf{0}\\ {\bf I}\\ \bf{0}\\ \vdots\\ \bf{0}\\ \end{array}
\underbrace{\begin{array}{c} \bf{0}\\ \vdots\\ \vdots\\ \vdots\\ \bf{0}\\ \end{array} }_{N+L_g-2}
\right|
\begin{array}{c} \cdots\\ \end{array}
\left|
\begin{array}{c} \bf{0}\\ \vdots\\ \vdots\\ \bf{0}\\ {\bf I}\\ \end{array}
\underbrace{\begin{array}{c} \bf{0}\\ \vdots\\ \vdots\\ \vdots\\ \bf{0}\\ \end{array} }_{N+L_g-2}
\right.
\right|}_{M_t(N+L_r+L_g-2)}
\right.
\underbrace{\left|
\underbrace{\begin{array}{c} \bf{0}\\ \vdots\\ \vdots\\ \vdots\\ \bf{0}\\ \end{array} }_{1}
\begin{array}{c} {\bf I}\\ \bf{0}\\ \vdots\\ \vdots\\ \bf{0}\\ \end{array}
\underbrace{\begin{array}{c} \bf{0}\\ \vdots\\ \vdots\\ \vdots\\ \bf{0}\\ \end{array} }_{N+L_g-3}
\left|
\underbrace{\begin{array}{c} \bf{0}\\ \vdots\\ \vdots\\ \vdots\\ \bf{0}\\ \end{array} }_{1}
\begin{array}{c} \bf{0}\\ {\bf I}\\ \bf{0}\\ \vdots\\ \bf{0}\\ \end{array}
\underbrace{\begin{array}{c} \bf{0}\\ \vdots\\ \vdots\\ \vdots\\ \bf{0}\\ \end{array} }_{N+L_g-3}
\right|
\begin{array}{c} \cdots\\ \end{array}
\left|
\underbrace{\begin{array}{c} \bf{0}\\ \vdots\\ \vdots\\ \vdots\\ \bf{0}\\ \end{array} }_{1}
\begin{array}{c} \bf{0}\\ \vdots\\ \vdots\\ \bf{0}\\ {\bf I}\\ \end{array}
\underbrace{\begin{array}{c} \bf{0}\\ \vdots\\ \vdots\\ \vdots\\ \bf{0}\\ \end{array} }_{N+L_g-3}
\right.
\right|}_{M_t(N+L_r+L_g-2)}
\nonumber \\ &&
\begin{array}{c} \cdots\\ \end{array} {
\left.\underbrace{\left|
\underbrace{\begin{array}{c} \bf{0}\\ \vdots\\ \vdots\\ \vdots\\ \bf{0}\\ \end{array} }_{N+L_g-2}
\begin{array}{c} {\bf I}\\ \bf{0}\\ \vdots\\ \vdots\\ \bf{0}\\ \end{array}
\left|
\underbrace{\begin{array}{c} \bf{0}\\ \vdots\\ \vdots\\ \vdots\\ \bf{0}\\ \end{array} }_{N+L_g-2}
\begin{array}{c} \bf{0}\\ {\bf I}\\ \bf{0}\\ \vdots\\ \bf{0}\\ \end{array}
\right|
\begin{array}{c} \cdots\\ \end{array}
\left|
\underbrace{\begin{array}{c} \bf{0}\\ \vdots\\ \vdots\\ \vdots\\ \bf{0}\\ \end{array} }_{N+L_g-2}
\begin{array}{c} \bf{0}\\ \vdots\\ \vdots\\ \bf{0}\\ {\bf I}\\ \end{array}
\right.
\right|}_{M_t(N+L_r+L_g-2)} \right] \otimes{\bf I}_{M_r} }
\end{eqnarray}
}
where ${\bf I}={\bf I}_{L_r}$.
\newpage
{\small
\begin{eqnarray}
&&{\bf E}_2 = \nonumber \\
&&\left[
\begin{array}{|c|}
{\cal E}_1\\ \bf{0}\\ \vdots\\ \vdots\\ \bf{0}\\ \hline
\vdots\\ \hline
{\cal E}_{M_t}\\ \bf{0}\\ \vdots\\ \vdots\\ \bf{0}\\
\end{array}
\begin{array}{c|}
{\cal E}_{M_t+1}\\ {\cal E}_1\\ \bf{0}\\ \vdots\\ \bf{0}\\ \hline
\vdots\\ \hline
{\cal E}_{M_t+M_t}\\ {\cal E}_{M_t}\\ \bf{0}\\ \vdots\\ \bf{0}\\
\end{array}
\begin{array}{c} \cdots\\ \end{array}
\underbrace{
\begin{array}{|c|}
{\cal E}_{(L_r-1)M_t+1}\\ \vdots\\ \vdots\\ \vdots\\ {\cal E}_1\\ \hline
\vdots\\ \hline
{\cal E}_{(L_r-1)M_t+M_t}\\ \vdots\\ \vdots\\ \vdots\\ {\cal E}_{M_t}\\
\end{array}
}_{\mbox{$L_r$-th block}}
\begin{array}{c|}
{\cal E}_{(L_r)M_t+1}\\ \vdots\\ \vdots\\ \vdots\\ {\cal E}_{M_t+1}\\ \hline
\vdots\\ \hline
{\cal E}_{(L_r)M_t+M_t}\\ \vdots\\ \vdots\\ \vdots\\ {\cal E}_{M_t+M_t}\\
\end{array}
\begin{array}{c} \cdots\\ \end{array}
\underbrace{
\begin{array}{|c|}
{\cal E}_{(N+L_g-2)M_t+1}\\ \vdots\\ \vdots\\ \vdots\\ {\cal E}_{(N+L_g-L_r-1)M_t+1}\\ \hline
\vdots\\ \hline
{\cal E}_{(N+L_g-2)M_t+M_t}\\ \vdots\\ \vdots\\ \vdots\\ {\cal E}_{(N+L_g-L_r-1)M_t+M_t}\\
\end{array}
}_{\mbox{$(N+L_g-1)$-th block}}
\underbrace{
\begin{array}{c|}
\bf{0}\\ {\cal E}_{(N+L_g-2)M_t+1}\\ \vdots\\ \vdots\\ {\cal E}_{(N+L_g-L_r)M_t+1}\\ \hline
\vdots\\ \hline
\bf{0}\\ {\cal E}_{(N+L_g-2)M_t+M_t}\\ \vdots\\ \vdots\\ {\cal E}_{(N+L_g-L_r)M_t+M_t}\\
\end{array}
}_{\mbox{$(N+L_g)$-th block}}\right. \nonumber\\
&& \left.
\begin{array}{c} \cdots\\ \end{array}
\underbrace{
\begin{array}{|c|}
\bf{0}\\ \vdots\\ \vdots\\ \bf{0}\\ {\cal E}_{(N+L_g-2)M_t+1}\\ \hline
\vdots\\ \hline
\bf{0}\\ \vdots\\ \vdots\\ \bf{0}\\ {\cal E}_{(N+L_g-2)M_t+M_t}\\
\end{array}
}_{\mbox{$(N+L_g+L_r-2)$-th block}}
\right] ~~\text{and}~~
{\cal E}_k=
\left[
\begin{array}{c}
e_k^T\\ e_{(N+L_g-1)M_t+k}^T\\ e_{2(N+L_g-1)M_t+k}^T\\ \vdots\\ e_{(M_r-1)(N+L_g-1)M_t+k}^T\\
\end{array}
\right]
\end{eqnarray}
}
where $e_i^T$ is the $i$-th row of ${\bf I}_{(N+L_g-1)M_tM_r}$.
\section{} \label{sec:appendix2}
{\it{Proof of Proposition 1 }}
The Lagrangian of \eqref{problem:sourcePA} is given by
\begin{align}
{\cal{L}}(p_n[k], \mu) &= \sum_{n=0}^{N-1}\sum_{k=1}^\Gamma \theta_{nk} (d_n[k]^2 p_n[k]^2 - 2 d_n[k] p_n[k] + c_n[k] ) + \mu (\sum_{n=0}^{N-1}\sum_{k=1}^\Gamma p_n[k]^2 -P_{s,max}) \nonumber \\
& ~~ - \sum_{n=0}^{N-1}\sum_{k=1}^\Gamma\lambda_{n,k} p_n[k]
\end{align}
where $\mu \in {\cal{R}}$ and $\lambda_{n,k} \ge 0$ are dual variables associated with the source power constraint and the nonnegativity of the power, respectively.
\noindent Then, the following KKT conditions are necessary and sufficient for optimality because the problem \eqref{problem:sourcePA} is a
convex optimization problem:
\begin{align}
&p_n[k] \ge 0,~~ \sum_{n=0}^{N-1}\sum_{k=1}^B p_n[k]^2 -P_{s,max} = 0, \label{eq:primal} \\
&\mu \in {\cal{R}},~~ \lambda_{n,k} \ge 0, \label{eq:dual}\\
& \lambda_{n,k} p_n[k] = 0 \label{eq:complementaryslackness}\\
& \nabla_{p_n[k]} {\cal{L}} = 2\theta_{nk}d_n[k]^2 p_n[k] - 2 \theta_{nk}d_n[k] + 2\mu p_n[k] - \lambda_{n,k} = 0 \label{eq:stationary}
\end{align}
for $ n = 0, 1, \cdots, N-1 $ and $k=1, \cdots, B$.
\noindent The gradient \eqref{eq:stationary} can be rewritten as $\lambda_{n,k} = 2(\theta_{nk}d_n[k]^2 + \mu )p_n[k] - 2\theta_{nk} d_n[k] $. Plugging this into \eqref{eq:dual} and \eqref{eq:complementaryslackness}, we get
\begin{align}
& \mu p_n[k] \ge \theta_{nk}d_n[k] - \theta_{nk}d_n[k]^2 p_n[k] \label{eq:condition1} \\
& ((\theta_{nk}d_n[k]^2 + \mu )p_n[k] - \theta_{nk}d_n[k]) p_n[k] = 0 \label{eq:condition2}
\end{align}
Let us consider the case $p_n[k] = 0$. Then \eqref{eq:condition1} reduces to $0 \ge \theta_{nk} d_n[k]$, which is satisfied only if $d_n[k] = 0$ because $d_n[k] \ge 0$.
If $p_n[k] > 0$, $p_n[k] = \left (\frac{\theta_{nk}d_n[k]}{\theta_{nk}d_n[k]^2 + \mu} \right)$ by the complementary slackness \eqref{eq:condition2}. This also satisfies
\eqref{eq:condition1}. Therefore, we get the desired result satisfying the primal constraints \eqref{eq:primal}.
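Indeed, substituting the candidate $p_n[k] = \frac{\theta_{nk}d_n[k]}{\theta_{nk}d_n[k]^2 + \mu}$ into \eqref{eq:condition1} shows that it holds with equality:
\begin{align}
\mu \frac{\theta_{nk}d_n[k]}{\theta_{nk}d_n[k]^2 + \mu} = \theta_{nk}d_n[k]\left( 1 - \frac{\theta_{nk}d_n[k]^2}{\theta_{nk}d_n[k]^2 + \mu}\right) = \theta_{nk}d_n[k] \frac{\mu}{\theta_{nk}d_n[k]^2 + \mu}.
\end{align}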
~~~~~~$\hfill{\square}$
\newpage
\vspace{10cm}
\section{Introduction}
According to well-verified observations, the universe is currently exhibiting
an accelerated expansion. Although the simplest explanation would be the
cosmological constant \cite{Weinberg:1988cp}, the possible dynamical features
require more radical modifications. Hence, one can either alter the
universe content, by introducing new, exotic forms, collectively called
\textquotedblleft dark energy\textquotedblright\ \cite{Copeland:2006wr,Cai:2009zp}, e.g., quintom models \cite{Cai:2009zp,Dutta:2009yb,Guo:2004fq,Zhao:2006mp,Lazkoz:2006pa,Lazkoz:2007mx,MohseniSadjadi:2006hb,Setare:2008pz,Setare:2008dw,Saridakis:2009ej,Qiu:2010ux,Leon:2012vt,Leon:2018lnd,Paliathanasis:2018vru}, which generalize phantom fields \cite{Singh:2003vx,Sami:2003xv,Andrianov:2005tm,Elizalde:2008yf,Sadatian:2008sv}, or modify the gravitational sector adding
new degrees of freedom, like in $f$-theories,
\cite{DeFelice:2010aj,Nojiri:2005jg}, in Lovelock gravity
\cite{Lovelock:1971yv,Deruelle:1989fj}, in scalar field theories like the
Galileon theory
\cite{Nicolis:2008in,Deffayet:2009mn,Leon:2012mt,DeArcia:2015ztd,Dimakis:2017kwx,Giacomini:2017yuk,DeArcia:2018pjp};
or in the Lorentz-violating Ho\v{r}ava-Lifshitz gravity
\cite{Horava:2009uw,Leon:2009rc,Leon:2019mbo}, and many others.
A very interesting theory of gravitational modification is the Einstein-aether
theory
\cite{Jacobson:2000xp,Eling:2004dk,Carroll:2004ai,Kanno:2006ty,Zlosnik:2006zu,Donnelly:2010cr,Carruthers:2010ii,Jacobson:2010mx,Jacobson,Garfinkle:2011iw,Barrow:2012qy,Sandin:2012gq,Alhulaimi:2013sha,Coley:2015qqa,Latta:2016jix,Alhulaimi:2017ocb,VanDenHoogen:2018anx,Coley:2019tyx,Leon:2019jnu}.
It corresponds to the class of Lorentz-violating theories of gravity, where
one considers the existence of a unit vector, the aether, which is everywhere
non-zero in any solution. The aether spontaneously breaks the boost sector of
the Lorentz symmetry by selecting a preferred frame at each point in
spacetime while maintaining local rotational symmetry. The action for
Einstein-aether theory is the most general generally covariant functional of
the spacetime metric $g_{ab}$ and aether field $u^{a}$ involving no more than
two derivatives, excluding total derivatives
\cite{Jacobson,Carroll:2004ai,Garfinkle:2011iw}. Exact solutions and
qualitative analysis of Einstein-aether were presented elsewhere, e.g., in
\cite{Barrow:2012qy,Sandin:2012gq,Alhulaimi:2013sha,Coley:2015qqa,Latta:2016jix,Alhulaimi:2017ocb,VanDenHoogen:2018anx,Coley:2019tyx,Leon:2019jnu}.
In \cite{Kanno:2006ty} the impact of Lorentz violation on the inflationary
scenario was explored. More precisely, homogeneous but anisotropic solutions
were studied in the presence of a positive cosmological constant,
with a Bianchi type I (Kasner-like) symmetry with three orthogonal principal
directions of expansion, and with the aether tilted in one of the principal
directions. In this model the inflationary stage is divided into two parts;
the Lorentz violating stage and the standard slow-roll stage. In the first
stage the universe expands as an exact de Sitter spacetime, although the
inflaton field is rolling down the potential. Interestingly, exact Lorentz
violating inflationary solutions can be found in the absence of an inflaton
potential. To linear order in the anisotropy, the system relaxes exponentially
to the isotropic, de Sitter solution. This approach was a special case of the
perturbative treatments used in \cite{Carruthers:2010ii}, where large
deviations from isotropy were investigated while maintaining homogeneity. It
was found that, for generic values of the coupling constants, the aether and
metric isotropize if the initial aether hyperbolic
boost angle and its time derivative in units of the cosmological constant are
less than order $\mathcal{O}(1)$. For larger angles or larger angle derivatives, the
behavior is strongly dependent on the values of the coupling constants.
In general, there is a runaway behavior in which the anisotropy increases with
time, and/or singularities occur. In \cite{Donnelly:2010cr} the Einstein-aether
theory was studied with a scalar inflaton coupled bilinearly to the expansion
of the aether. The conditions were determined for linearized stability,
positive energy, and vanishing of
preferred-frame post-Newtonian parameters, and examined whether all of these
restrictions can be simultaneously satisfied. In a homogeneous and isotropic
cosmology, the inflaton-aether expansion coupling leads to a driving force on
the inflaton that is proportional to the Hubble parameter. This force affects
the slow-roll dynamics, but still allows a graceful exit of inflation.
Einstein-aether theory have been applied also in various anisotropic and
inhomogeneous models with many interesting results. In \cite{Coley:2015qqa}
spherically symmetric cosmological models were studied in Einstein-aether
theory with a non-comoving perfect fluid source using a 1+3 frame formalism,
in the context of inhomogeneous cosmological models. Adopting the comoving
aether gauge, the evolution equations are derived in normalized variables in
order to provide numerical computations and to study the local stability of the
equilibrium points of the resulting dynamical system. Special emphasis was
made on spatially homogeneous Kantowski-Sachs models, see also
\cite{Latta:2016jix,Alhulaimi:2017ocb,VanDenHoogen:2018anx}.
In \cite{Alhulaimi:2017ocb} the dynamics of spatially homogeneous (SH)
Einstein-aether cosmological models with a scalar field with a self-interacting
generalized harmonic potential was studied, in which the scalar field is coupled to both the aether field
expansion and shear scalars. The stability analysis indicated that there exists a range of values of the
parameters where the late-time attractor corresponds to an accelerated
expansion phase. The analysis considers spatial curvature and anisotropic
perturbations. On the other hand, static anisotropic
models for a mixture of a necessarily non-tilted perfect fluid with a
barotropic equation of state (linear and/or polytropic equations of state) and a self-interacting scalar field were studied in
\cite{Coley:2015qqa,Coley:2019tyx,Leon:2019jnu}. In \cite{Roumeliotis:2019tvu} the
solution space of the field equations in the Einstein-aether theory was
presented for the case of a vacuum Bianchi Type V space-time. In this model
the reduced equations do not always admit a solution. Whenever a solution does
exist, its physical interpretation was examined through the analysis of the behavior of the Ricci and/or
Kretschmann scalar, as well as with the identification of the effective energy
momentum tensor in terms of a perfect fluid. There are cases in which no
singularities appears and in other cases the effective fluid is isotropic. Friedmann--Lema\^{\i}tre--Robertson--Walker metric (FLRW) and
a Locally Rotationally Symmetric (LRS) Bianchi Type III space-time were studied in \cite{Roumeliotis:2018ook}.
It was examined whether the reduced equations do have a solution, and it was found that
there are portions of the initial parameter space for which no solution is
admitted by the reduced equations.
In \cite{Paliathanasis:2020bgs} an Einstein-aether scalar field cosmological
model is considered where the aether and the scalar field are interacting
through two different interactions proposed in the literature by Kanno and
Soda \cite{Kanno:2006ty} and by Donnelly et al. \cite{Donnelly:2010cr}. An
extended dynamical systems analysis of the cosmological evolution was
provided. The reduced Lagrangians deduced from the full action are, in
general, correctly describing the dynamics whenever solutions do exist.
Furthermore, the cosmological evolution of the field equations in the context
of Einstein-aether cosmology by including a scalar field in a spatially flat
FLRW spacetime was studied in \cite{Paliathanasis:2019pcl} by using dynamical system tools. The
analysis was separated into two cases: a pressureless fluid source is included
or it is absent. The limit of general relativity is fully recovered, while the
dynamical system admits de Sitter solutions which can describe the past
inflationary era and the future late-time attractor. Results for generic
scalar field potentials were presented, while some numerical behaviors were
given for specific potential forms.
The plan of the paper is as follows.
In Section \ref{sec2}, we present the cosmological model under consideration
which is that of Einstein-aether gravity with a scalar field coupled to the aether through an effective coupling $B\left( \phi\right) $ defined by $B\left( \phi\right)
=\beta_{1}\left( \phi\right) +3\beta_{2}\left( \phi\right) +\beta
_{3}\left( \phi\right) -1$ in terms of the aether parameters $\beta_{1},\ldots,\beta_{4}$
\cite{Kanno:2006ty}. As far as the physical space is concerned, we consider
that it is described by the spatially flat FLRW metric. For the latter
cosmological model we present the field equations and we give the
minisuperspace description of the theory as well.
In Section \ref{sec3}, we determine exact solutions of the field equations of physical interest.
Specifically we find the scalar field potential such
that the scale factor of the FLRW metric describes either the de Sitter
universe or a scaling solution. In addition, we study the stability of these
solutions by calculating their first order perturbations around the
exact solutions and analyzing their evolution.
The main results of our analysis are presented in Section \ref{sec4}.
We assume the presence of a dust fluid in the cosmological model. We determine
the functional forms of the scalar field potential such that the field
equations are Liouville--integrable, with at least the existence of a second
conservation law, quadratic in the momentum. We find five families of power-law potentials,
for which we present the analytic solutions as functions
in closed form or in algebraic form by solving the Hamilton-Jacobi equations
and reducing the dimensionality of the field equations. By using the results
of Section \ref{sec3}, we infer the asymptotic behavior of the cosmological
solutions, since we can relate the dominant terms of the scalar field
potential with the exact solutions presented in Section \ref{sec3}. Recall that a system of polynomial differential equations is said to be Liouville--integrable, if it has first integrals given by elementary functions or integrals of elementary functions, that is, functions expressed in terms of combinations of exponential functions, trigonometric functions, logarithmic functions or polynomial functions (see, e.g., \cite{Iacono:2014uga}, in the context of Tolman-Oppenheimer-Volkoff approach for a relativistic star model
with the isothermal equation of state $p_{m}=\rho_{m}/n$; which is Liouville--integrable if and only if $n\in\{-1, -3, -5, -6\}$).
In Appendix \ref{appa}, we present the five Liouville integrable scalar field
potentials where the additional matter source in the cosmological fluid is
an ideal gas with equation of state $p_{m}=\left( \gamma-1\right) \rho_{m}$.
Note that when $\gamma=\frac{2}{3}$, our results describe the case of a
non-spatially flat FLRW spacetime.
Finally, in Section \ref{sec5}, we summarize
the results and we draw our conclusions.
\section{Einstein-aether Scalar field Cosmology}
\label{sec2}
We consider the Einstein-aether scalar field theory with Action Integral \cite{Kanno:2006ty}:
\begin{equation}
S=\int dx^{4}\sqrt{-g}\left( \frac{R}{2}-\frac{1}{2}g^{\mu\nu}\phi_{;\mu}\phi_{;\nu}-V\left( \phi\right) \right) -S_{\text{Aether}}, \label{ac.01}
\end{equation}
where $S_{\text{Aether}}$ describes the terms of the aether field $u^{\mu}$ as
follows
\begin{align}
S_{\text{Aether}} & =\int dx^{4}\sqrt{-g}\left( \beta_{1}\left( \phi\right)
u^{\nu;\mu}u_{\nu;\mu}+\beta_{2}\left( \phi\right) u^{\nu;\mu}u_{\mu;\nu
}\right) +\nonumber\\
& +\int dx^{4}\sqrt{-g}\left( \beta_{3}\left( \phi\right) \left(
g^{\mu\nu}u_{\mu;\nu}\right) ^{2}+\beta_{4}\left( \phi\right) u^{\mu}u^{\nu}u^{\alpha}{}_{;\mu}u_{\alpha;\nu}-\lambda\left( u^{\mu}u_{\mu}+1\right) \right) .
\label{ac.02}
\end{align}
Coefficients $\beta_{1},~\beta_{2},~\beta_{3}~$and $\beta_{4}$ define the
coupling between the aether field and the gravitational field. In
Einstein-aether theory the coefficients are constants, though in this model
the coefficients $\beta_{1},~\beta_{2},~\beta_{3}~$and $\beta_{4}$ define a
coupling between the aether field $u^{\mu}$ and the scalar field
$\phi\left( x^{\mu}\right)$, by promoting them to functions of $\phi$. Additionally, the function $\lambda$ is a Lagrange multiplier which
ensures the unitarity, $u^{\mu}u_{\mu}+1=0$, of the aether field $u^{\mu}$.
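For the comoving aether considered below, $u^{\mu}=\frac{1}{N}\delta_{t}^{\mu}$, this constraint is satisfied identically, since
\begin{equation}
u^{\mu}u_{\mu}=g_{tt}u^{t}u^{t}=-N^{2}\frac{1}{N^{2}}=-1 .
\end{equation}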
On large scales the universe is assumed to be isotropic and homogeneous,
described by the spatially flat FLRW metric, with line element
\begin{equation}
ds^{2}=-N^{2}\left( t\right) dt^{2}+a^{2}\left( t\right) \left(
dx^{2}+dy^{2}+dz^{2}\right) , \label{ac.03}
\end{equation}
where $a\left( t\right) $ is the scale factor, $N\left( t\right) $ is the
lapse function while the Hubble function is defined as $H\left( t\right)
=\frac{1}{N}\frac{\dot{a}}{a}$, where a dot denotes the total derivative with
respect to the variable $t$.
For the aether field $u^{\mu}=\frac{1}{N}\delta_{t}^{\mu}$, and the line
element (\ref{ac.03}), the Action Integral (\ref{ac.01}) is simplified as
follows \cite{Kanno:2006ty}
\begin{equation}
S=\int dx^{4}\sqrt{-g}L\left( N,a,\dot{a},\phi,\dot{\phi}\right),
\label{ac.04}
\end{equation}
where~$L\left( N,a,\dot{a},\phi,\dot{\phi}\right) $ is the point-like
Lagrangian
\begin{equation}
L\left( N,a,\dot{a},\phi,\dot{\phi}\right) =\frac{1}{N}\left( -3B\left(
\phi\right) a\dot{a}^{2}+\frac{1}{2}a^{3}\dot{\phi}^{2}-N^{2}a^{3}V\left(
\phi\right) \right) , \label{ac.05}
\end{equation}
while function $B\left( \phi\right) $ is defined as $B\left( \phi\right)
=\beta_{1}\left( \phi\right) +3\beta_{2}\left( \phi\right) +\beta
_{3}\left( \phi\right) -1$, and we have assumed that the scalar field $\phi$
inherits the symmetries of the spacetime such that $\phi=\phi\left( t\right)
$.
Variation with respect to the variables $a$ and $\phi$ of the Action Integral
(\ref{ac.04}) gives the second-order field equations
\begin{equation}
\left( 2a\ddot{a}-\frac{2}{N}a\dot{a}\dot{N}\right) B\left( \phi\right)
+2aB_{,\phi}\dot{a}\dot{\phi}+B\left( \phi\right) \dot{a}^{2}+\frac{1}{2}a^{2}\dot{\phi}^{2}-N^{2}a^{2}V\left( \phi\right) =0, \label{ac.06}
\end{equation}
\begin{equation}
\ddot{\phi}+3\frac{\dot{a}}{a}\dot{\phi}-\frac{\dot{N}}{N}\dot{\phi}+\frac{3}{a^{2}}B_{,\phi}\dot{a}^{2}+N^{2}V_{,\phi}=0. \label{ac.07}
\end{equation}
Equation (\ref{ac.07}) is the modified Klein-Gordon equation for the scalar
field $\phi$, while equation (\ref{ac.06}) is the modified second Friedmann
equation. Moreover, variation of (\ref{ac.04}) with respect to the variable $N$
produces the modified first Friedmann equation, that is, the constraint
equation
\begin{equation}
-\frac{3}{N^{2}}B\left( \phi\right) a\dot{a}^{2}+\frac{1}{2N^{2}}a^{3}\dot{\phi}^{2}+a^{3}V\left( \phi\right) =0. \label{ac.08}
\end{equation}
The field equations (\ref{ac.06}), (\ref{ac.07}) and (\ref{ac.08}) can be
written as follows
\begin{equation}
3H^{2}=k_\text{eff}\rho_\text{eff}, \label{ac.09}
\end{equation}
\begin{equation}
-\left( 2\dot{H}+3H^{2}\right) =k_\text{eff}p_\text{eff}, \label{ac.10}
\end{equation}
and
\begin{equation}
\ddot{\phi}+3H\dot{\phi}+3B_{,\phi}H^{2}+V_{,\phi}=0, \label{ac.11}
\end{equation}
where $k_\text{eff}=\frac{1}{B\left( \phi\right) }$, and $\rho_\text{eff}$ and
$p_\text{eff}$ describe the energy density and the pressure of the effective fluid,
defined as
\begin{align}
\rho_\text{eff} & =\frac{1}{2}\dot{\phi}^{2}+V\left( \phi\right),
\label{ac.12}\\
p_\text{eff} & =\left( 2B_{,\phi}H\dot{\phi}+\frac{1}{2}\dot{\phi}^{2}-V\left(
\phi\right) \right). \label{ac.13}
\end{align}
We observe that there are similarities with scalar-tensor theories, as
mentioned in \cite{Kanno:2006ty}; indeed, $k_\text{eff}$ is not a constant but
changes in time. However, the two theories are different. The effective
$k_\text{eff}$ is the only contribution of the aether field in the first Friedmann
equation, because the effective energy density $\rho_\text{eff}$ is that of the
scalar field, i.e. $\rho_\text{eff}=\rho_{\phi}$. On the other hand, from the second
Friedmann equation we see that the term $2B_{,\phi}H\dot{\phi}$ modifies the
effective pressure from that of the scalar field, that is,
$p_\text{eff}=p_{\phi}+2B_{,\phi}H\dot{\phi}$. Consequently, the parameter of the
effective equation of state is defined as
\begin{equation}
w_\text{eff}=w_{\phi}+\frac{2B_{,\phi}H\dot{\phi}}{\rho_\text{eff}}. \label{ac.14}
\end{equation}
As far as equation (\ref{ac.11}) is concerned, this reads
\begin{equation}
\dot{\rho}_{\phi}+3H\left( \rho_{\phi}+p_{\phi}\right) =-3B_{,\phi}H^{2}\dot{\phi}, \label{ac.15}
\end{equation}
which resembles particle creation, bulk viscosity or varying vacuum
theories \cite{par1,par2,par3,par4,par5,par6,par7}. Positive values of
$B_{,\phi}$ indicate particle annihilation, while negative values of $B_{,\phi}$
indicate particle creation.
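The energy exchange in (\ref{ac.15}) follows directly from the Klein-Gordon equation (\ref{ac.11}): differentiating $\rho_{\phi}=\frac{1}{2}\dot{\phi}^{2}+V\left( \phi\right) $ gives
\begin{equation}
\dot{\rho}_{\phi}=\dot{\phi}\ddot{\phi}+V_{,\phi}\dot{\phi}=\dot{\phi}\left( -3H\dot{\phi}-3B_{,\phi}H^{2}-V_{,\phi}\right) +V_{,\phi}\dot{\phi}=-3H\dot{\phi}^{2}-3B_{,\phi}H^{2}\dot{\phi},
\end{equation}
while $\rho_{\phi}+p_{\phi}=\dot{\phi}^{2}$ for the scalar-field part of (\ref{ac.12}), (\ref{ac.13}).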
In the presence of an additional fluid source, such as that of an ideal gas
with $p_{m}=\left( \gamma-1\right) \rho_{m}$, which we assume is not
interacting with the scalar field or with the aether field, the effective
energy density and pressure terms are modified as
\begin{align}
\rho_\text{eff} & =\frac{1}{2}\dot{\phi}^{2}+V\left( \phi\right) +\rho
_{m},\label{ac.16}\\
p_\text{eff} & =\left( 2B_{,\phi}H\dot{\phi}+\frac{1}{2}\dot{\phi}^{2}-V\left(
\phi\right) \right) +\left( \gamma-1\right) \rho_{m}, \label{ac.17}
\end{align}
with the additional conservation equation
\begin{equation}
\dot{\rho}_{m}+3\gamma H\rho_{m}=0, \label{ac.18}
\end{equation}
from which we infer $\rho_{m}=\rho_{m0}a^{-3\gamma}$, where $\rho_{m0}$ is an
integration constant. In the following Section, we assume that the additional
matter source is that of a dust fluid, that is, $\gamma=1$ and
$\rho_{m}=\rho_{m0}a^{-3}$.
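Indeed, equation (\ref{ac.18}) is separable and integrates immediately,
\begin{equation}
\frac{d\rho_{m}}{\rho_{m}}=-3\gamma\frac{da}{a}~\Rightarrow~\rho_{m}=\rho_{m0}a^{-3\gamma}.
\end{equation}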
\section{Exact solutions}
\label{sec3}
In this section we present some exact solutions of the field equations.
In particular we determine the functional forms of the potential $V\left(
\phi\right)$ and the function $\phi\left(t\right)$ by imposing the
requirement that the de Sitter solution $a\left(t\right) =a_{0}e^{H_{0}t}$
and the scaling solution $a\left( t\right) =a_{0}t^{p}$ are special solutions
of the field equations.
Recall that we have assumed that $N\left(t\right)=1$. In addition we
assume that there is no contribution to the cosmological fluid from the
ideal gas, i.e. $\rho_{m0}=0$. Because there are only two independent
equations for three unknown functions, namely $\phi\left( t\right),~V\left( t\right)$ and $B\left(t\right)$, we proceed further by
fixing the exact form of $B\left(\phi\left(t\right)\right)$. In particular we select $B\left(\phi\left(t\right)\right)=6B_{0}\phi^{2}$.
\subsection{De\ Sitter solution}
The exponential scale factor $a\left( t\right) =a_{0}e^{\frac{H_{0}}{\sqrt{B_{0}}}t}$ solves the
field equations (\ref{ac.06})-(\ref{ac.08}) if and only if \cite{Kanno:2006ty}
\begin{equation}
\phi\left( t\right) =\phi_{0}e^{-24\sqrt{B_{0}}H_{0}t}~,~V\left( t\right)
=18\left( 1-16B_{0}\right) \phi_{0}^{2}H_{0}^{2}e^{-48\sqrt{B_{0}}H_{0}t},
\end{equation}
that is
\begin{equation}
V\left( \phi\left( t\right) \right) =18\left( 1-16B_{0}\right) H_{0}^{2}\phi^{2}.
\end{equation}
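As a consistency check, substituting $B\left( \phi\right) =6B_{0}\phi^{2}$, $\dot{\phi}=-24\sqrt{B_{0}}H_{0}\phi$ and the expansion rate $H=\frac{H_{0}}{\sqrt{B_{0}}}$ (used also in the stability analysis below) into the constraint $3B\left( \phi\right) H^{2}=\frac{1}{2}\dot{\phi}^{2}+V\left( \phi\right) $, which follows from (\ref{ac.09}), gives
\begin{equation}
18H_{0}^{2}\phi^{2}=288B_{0}H_{0}^{2}\phi^{2}+18\left( 1-16B_{0}\right) H_{0}^{2}\phi^{2},
\end{equation}
which holds identically; the Klein-Gordon equation (\ref{ac.11}) is verified in the same way.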
In order to determine the stability of the de Sitter solution we substitute
$a\left( t\right) =a_{0}e^{\frac{H_{0}}{\sqrt{B_{0}}}t}+\varepsilon\delta
a\left( t\right) $ and $\phi\left( t\right) =\phi_{0}e^{-24\sqrt{B_{0}}H_{0}t}+\varepsilon\delta\phi\left( t\right) $ in the field equations and
we linearize around $\varepsilon\rightarrow0$. We end up with the linearized
system
\begin{align}
\delta\ddot{a}+\left( \frac{1}{B_{0}}-48\right) \sqrt{B_{0}}H_{0}\delta\dot{a}-2\left( \frac{1}{B_{0}}-24\right) H_{0}^{2}\delta a & =0,\\
\delta\ddot{\phi}+3\frac{H_{0}}{\sqrt{B_{0}}}\delta\dot{\phi}+72\left(
1-8B_{0}\right) H_{0}^{2}\delta\phi & =0,
\end{align}
with closed-form solution
\begin{align}
\delta a & =\delta a_{0}e^{\frac{H_{0}}{\sqrt{B_{0}}}t}+\delta
a_{1}e^{-\frac{2}{\sqrt{B_{0}}}\left( 1-24B_{0}\right) H_{0}t},\\
\delta\phi & =\delta\phi_{0}e^{-24\sqrt{B_{0}}H_{0}t}+\delta\phi_{1}e^{-\frac{3}{\sqrt{B_{0}}}\left( 1-8B_{0}\right) H_{0}t}.
\end{align}
Therefore we conclude that the expanding de Sitter universe is stable when
$0<B_{0}<\frac{1}{24}$, while when $H_{0}<0$ the exact solution is stable when
$B_{0}>\frac{1}{24}$.
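The exponents in $\delta\phi$ can be verified directly: the roots $r_{1}=-24\sqrt{B_{0}}H_{0}$ and $r_{2}=-\frac{3}{\sqrt{B_{0}}}\left( 1-8B_{0}\right) H_{0}$ satisfy
\begin{equation}
r_{1}+r_{2}=-\frac{H_{0}}{\sqrt{B_{0}}}\left( 24B_{0}+3-24B_{0}\right) =-\frac{3H_{0}}{\sqrt{B_{0}}}~,~r_{1}r_{2}=72\left( 1-8B_{0}\right) H_{0}^{2},
\end{equation}
reproducing the friction and mass coefficients of the linearized equation for $\delta\phi$.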
\subsection{Scaling solution}
In a similar way, we find that the scaling solution $a\left( t\right)
=a_{0}t^{\frac{p}{\sqrt{B_{0}}}}$ satisfies the field equations (\ref{ac.06})-(\ref{ac.08}) when \cite{Kanno:2006ty}
\begin{equation}
\phi\left( t\right) =\phi_{0}t^{-12p-2\sqrt{3p\left( 1+12p\right) }}~,~V\left( t\right) =\frac{6p\phi_{0}^{2}}{B_{0}}\left( 3\left( 1-8B_{0}\right) p-B_{0}-4\sqrt{3p\left( 1+12p\right) }\right) t^{-2\left( 1+12p\right) -4\sqrt{3p\left( 1+12p\right) }},
\end{equation}
that is
\begin{equation}
V\left( \phi\left( t\right) \right) =\frac{6p\phi_{0}^{2-\sqrt{\frac{1+12p}{3p}}}}{B_{0}}\left( 3\left( 1-8B_{0}\right) p-B_{0}-4\sqrt{3p\left( 1+12p\right) }\right) \phi^{\sqrt{\frac{1+12p}{3p}}}.
\end{equation}
We take linear perturbations around the exact solution as before, and for the
perturbations we find $\delta a\simeq t^{R}~,~\delta\phi\simeq t^{S}$, where
\begin{align}
R & =1+2\left( 12-\frac{1}{B_{0}}\right) p+4\sqrt{3p\left( 1+12p\right)
},\\
S & =1+3\left( 4-\frac{1}{B_{0}}\right) p+2\sqrt{3p\left( 1+12p\right)
},
\end{align}
from which we infer that the scaling solution is an attractor when
$0<B_{0}<\frac{1}{24}$, for $p>\frac{B_{0}+2\sqrt{6}B_{0}^{3/2}}{\left(
1-24B_{0}\right) }$.
We proceed with the presentation of the analytic solutions.
\section{Analytic Solutions}
\label{sec4}
For the lapse function $N=1$, and when dust fluid is included in the model,
the point-like Lagrangian (\ref{ac.05}) is written as
\begin{equation}
L\left( a,\dot{a},\phi,\dot{\phi}\right) =-3B\left( \phi\right) a\dot
{a}^{2}+\frac{1}{2}a^{3}\dot{\phi}^{2}-a^{3}V\left( \phi\right) -\rho_{m0},
\label{ac.19}
\end{equation}
which describes the motion of a particle in a two-dimensional space, where now
the constraint equation (\ref{ac.09}) corresponds to the Hamiltonian
conservation law for the Lagrangian (\ref{ac.19}) with value $\rho_{m0}$. The
equations of motion depend on two unknown functions, $B\left( \phi\right) $
and $V\left( \phi\right) $. The function $V\left( \phi\right) $ is a potential
term, while the function $B\left( \phi\right) $ defines the geometry of the
two-dimensional space where the motion of the point-like particle occurs.
The authors of \cite{Kanno:2006ty} considered the function $B\left(\phi\right)$
in the particular form $B\left( \phi\right) =6B_{0}\phi^{2}$, and, in our work,
this specific function will be selected as well. The reason is that
$B\left(\phi\right)=6B_{0}\phi^{2}$ simplifies the dynamics such that the
minisuperspace defined by the kinetic part of the Lagrangian (\ref{ac.19}) is
a maximally symmetric two-dimensional space with zero curvature, that is, a
two-dimensional flat space. Therefore, the field equations describe a typical
dynamical system of Classical Mechanics.
As we commented before, we follow~\cite{Kanno:2006ty} and we set $V\left( \phi\right) =V_{0}\phi^{2}$. For these specific functional forms of $B\left( \phi\right) $ and
$V\left( \phi\right) $, the field equations are written as
\begin{equation}
18B_{0}\phi^{2}H^{2}-\frac{1}{2}\dot{\phi}^{2}-V_{0}\phi^{2}-\rho_{m0}a^{-3}=0, \label{ac.20}
\end{equation}
\begin{equation}
6B_{0}\phi^{2}\left( 2\dot{H}+3H^{2}\right) +\frac{1}{2}\dot{\phi}^{2}+24B_{0}\phi\dot{\phi}H-V_{0}\phi^{2}=0, \label{ac.21}
\end{equation}
\begin{equation}
\ddot{\phi}+3\dot{\phi}H+36B_{0}\phi H^{2}+2V_{0}\phi=0. \label{ac.22}
\end{equation}
We define the canonical variables
\begin{equation}
a=x^{\frac{1}{3-12\sqrt{B_{0}}}}y^{\frac{1}{3+12\sqrt{B_{0}}}}~,~\phi
=x^{\frac{2\left( 4B_{0}+\sqrt{B_{0}}\right) }{16B_{0}-1}}y^{\frac{2\left(
4B_{0}-\sqrt{B_{0}}\right) }{16B_{0}-1}}~,~B_{0}\neq\frac{1}{16}
\label{ac.23}
\end{equation}
such that equations (\ref{ac.20}), (\ref{ac.21}), (\ref{ac.22}) are written in
simpler expressions as follows
\begin{equation}
\frac{8B_{0}}{16B_{0}-1}\dot{x}\dot{y}+V_{0}xy+\rho_{m0}=0, \label{ac.24}
\end{equation}
\begin{equation}
\ddot{x}+2V_{0}\left( 1-\frac{1}{16B_{0}}\right) x=0, \label{ac.25}
\end{equation}
\begin{equation}
\ddot{y}+2V_{0}\left( 1-\frac{1}{16B_{0}}\right) y=0, \label{ac.26}
\end{equation}
from which we derive the analytic solution in closed form
\begin{equation}
x\left( t\right) =x_{0}\sinh\left( \omega t+x_{1}\right) ,~y\left(
t\right) =y_{0}\sinh\left( \omega t+y_{1}\right) , \label{ac.27}
\end{equation}
with constraint $\rho_{m0}=-V_{0}x_{0}y_{0}\cosh\left( x_{1}-y_{1}\right) $,
and $\omega^{2}=V_{0}\left( \frac{1}{8B_{0}}-2\right) $. Because $V_{0}$ is
positive, we conclude that when $B_{0}>\frac{1}{16}$ we have a bounced
universe, while when $0<B_{0}<\frac{1}{16}$ the scale factor $a\left( t\right) $
is described by hyperbolic functions.
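The value of $\omega^{2}$ follows by substituting $x\left( t\right) =x_{0}\sinh\left( \omega t+x_{1}\right) $, for which $\ddot{x}=\omega^{2}x$, into equation (\ref{ac.25}):
\begin{equation}
\omega^{2}=-2V_{0}\left( 1-\frac{1}{16B_{0}}\right) =V_{0}\left( \frac{1}{8B_{0}}-2\right) .
\end{equation}
For $B_{0}>\frac{1}{16}$ one has $\omega^{2}<0$, and the hyperbolic functions become trigonometric, which explains the bounce.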
Consider now the special case where $x_{1}=y_{1}=0$; then we find the scale
factor
\begin{equation}
a\left( t\right) =a_{0}\left( \sinh\left( \omega t\right) \right)
^{\frac{2}{3-48B_{0}}}.
\end{equation}
From the latter scale factor we see that the $\Lambda$-cosmology is not
recovered, since $\frac{2}{3-48B_{0}}=\frac{2}{3}$ gives $B_{0}=0$, which is
excluded. However, when $B_{0}=\frac{3\varepsilon}{16\left( 3\varepsilon
+2\right) }$ it follows that $\frac{2}{3-48B_{0}}=\frac{2}{3}+\varepsilon$;
hence, for small values of $\varepsilon$, i.e. $B_{0}\simeq\frac{3}{32}\varepsilon$,
the exact solution is $a\left( t\right) =a_{0}\left( \sinh\left( \omega
t\right) \right) ^{\frac{2}{3}+\varepsilon}$, from which we calculate the
Hubble function
\begin{equation}
H\left( a\right) =\frac{\omega}{3}\left( 2+3\varepsilon\right)
\sqrt{1+\left( \frac{a_{0}}{a}\right) ^{\frac{6}{2+3\varepsilon}}},
\end{equation}
which is the solution of General Relativity for an ideal gas with a
cosmological constant term. Recall that the field $\phi$ contributes to the
cosmological fluid, while the function $B\left(\phi\right)$ affects the total
fluid source.
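In the limit $\varepsilon\rightarrow0$, that is for $a\left( t\right) =a_{0}\left( \sinh\left( \omega t\right) \right) ^{\frac{2}{3}}$, the latter expression reduces to the familiar form
\begin{equation}
H=\frac{2\omega}{3}\coth\left( \omega t\right) ~,~3H^{2}=\frac{4\omega^{2}}{3}\left( 1+\left( \frac{a_{0}}{a}\right) ^{3}\right) ,
\end{equation}
i.e. a Friedmann equation for dust with an effective cosmological constant $\frac{4\omega^{2}}{3}$.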
When $B_{0}=\frac{1}{16}$, we introduce the new canonical variables
\begin{equation}
a=u^{\frac{1}{6}}e^{\frac{v}{2}}~,~\phi=u^{\frac{1}{4}}e^{-\frac{3}{4}v},
\label{ac.28}
\end{equation}
where the field equations take the form
\begin{equation}
\frac{3}{8}\dot{u}\dot{v}-V_{0}u-\rho_{m0}=0, \label{ac.29}
\end{equation}
\begin{equation}
\ddot{u}=0,~\ddot{v}=\frac{8}{3}V_{0}, \label{ac.30}
\end{equation}
with solution $u\left( t\right) =u_{1}t+u_{0}~,~v\left( t\right) =\frac{4}{3}V_{0}t^{2}+v_{1}t+v_{0}$, with constraint equation $\rho_{m0}+V_{0}u_{0}-\frac{3}{8}u_{1}v_{1}=0$, from which, for $u_{0}=0$, we find the scale factor $a\left( t\right) =a_{0}t^{\frac{1}{6}}e^{\frac{2V_{0}}{3}t^{2}+\frac{v_{1}}{2}t}$.
When the scalar field is massless, i.e. $V\left( \phi\right) =0$, the field
equations are reduced, and the generic solution can be easily constructed
from equations (\ref{ac.25}), (\ref{ac.26}) for $V_{0}=0$ and the
transformation rule (\ref{ac.23}).
These are not the only functional forms of the potential $V\left( \phi\right)$
for which we can write the analytic solutions of the field equations. Some
power-law functions $V\left(\phi\right)$ and their analytic solutions are
presented in what follows. Specifically, the potentials for which we shall
present the analytic solution of the field equations are
\begin{equation}
V_{A}\left( \phi\right) =V_{0}\phi^{2}+V_{1}\phi^{\frac{1}{2\sqrt{B_{0}}}}~,
\label{ac.31}
\end{equation}
\begin{equation}
V_{B}\left( \phi\right) =V_{0}\phi^{2}+V_{1}\phi^{-\frac{1}{2\sqrt{B_{0}}}},
\label{ac.32}
\end{equation}
\begin{equation}
V_{C}\left( \phi\right) =V_{0}\phi^{2}+V_{1}\phi^{-2+\frac{1}{4B_{0}}},
\label{ac.33}
\end{equation}
\begin{equation}
V_{D}\left( \phi\right) =V_{0}\phi^{\frac{1}{2\sqrt{B_{0}}}}+V_{1}\phi^{-1+\frac{3}{4\sqrt{B_{0}}}}, \label{ac.34}
\end{equation}
\begin{equation}
V_{E}\left( \phi\right) =V_{0}\phi^{-\frac{1}{2\sqrt{B_{0}}}}+V_{1}\phi^{-1-\frac{3}{4\sqrt{B_{0}}}}. \label{ac.35}
\end{equation}
Potentials (\ref{ac.31})-(\ref{ac.35}) have a common feature: they are
Liouville--integrable, and for each of them the field equations admit an
additional conservation law, quadratic in the momentum, different for each
potential. The potential $V\left( \phi\right) =V_{0}\phi^{2}$ is also
superintegrable. Other superintegrable potentials are $\bar{V}_{A}\left( \phi\right) =V_{1}\phi^{\frac{1}{2\sqrt{B_{0}}}}$ and $\bar{V}_{B}\left( \phi\right)
=V_{1}\phi^{-\frac{1}{2\sqrt{B_{0}}}}$. Integrable cosmological models in modified
theories of gravity have been widely studied in the literature, and have drawn
the attention of cosmologists and mathematicians alike. The main reason is
that analytic solutions can be used as toy models in order to understand the
main properties of a proposed cosmological theory
\cite{ns01,ns02,ns03,ns04,ns05,ns06,ns07,ns08,ns09,ns10,ns11}.
In order to describe Nature, we need a large number of free parameters or
boundary conditions, which can make a purely numerical treatment unreliable:
because the equations which describe a specific theory are in general
nonlinear, numerical solutions may be sensitive to small changes of the
initial conditions. Consequently, we resort to analytic techniques in order to
understand the generic properties of a proposed physical theory. Hence, the
knowledge of the existence and the determination of analytic or exact
solutions for a given dynamical system is important for the detailed study and
understanding of the corresponding physical theory.
According to the results of the previous Section, we observe that, depending on
which term of the potential dominates, the behavior of the scale factor will
be close either to that of the scaling solution or to that of an exponential solution. For instance,
consider the potential $V_{B}\left(\phi\right)$. For large values of
$\phi$, it follows that $V_{B}\left(\phi\right) \simeq\phi^{2}$, from which we
infer that the solution approaches the de Sitter universe; on the other hand, for
small values of $\phi$, $V_{B}\left( \phi\right) \simeq\phi^{-\frac
{1}{2\sqrt{B_{0}}}}$, from which we infer that the scale factor behaves like
that of the scaling solution. As far as the potentials $V_{D}\left(\phi\right)
,~V_{E}\left(\phi\right) $ are concerned, we remark that they have two
terms, both of which describe only scaling solutions.
The method that we apply in order to determine the analytic solutions is based
on canonical coordinates, as described in the example $V\left( \phi\right)
=V_{0}\phi^{2}$. \ In the following, we present the analytic solutions.
\subsection{Analytic solution for potential $V_{A}\left( \phi\right) $}
For the potential $V_{A}\left( \phi\right) ,$ and for $B_{0}\neq\frac{1}{16}$, we find the generic solution in the canonical coordinates $\left\{
x,y\right\} $ defined by expression (\ref{ac.23})
\begin{align}
y\left( t\right) & =y_{1}e^{\omega t}+y_{2}e^{-\omega t},\label{ac.36}\\
x\left( t\right) & =x_{1}e^{\omega t}+x_{2}e^{-\omega t}+x_{sp}\left(
t\right), \label{ac.37}
\end{align}
where
\begin{equation}
x_{sp}=-\frac{y_{2}V_{1}}{8\sqrt{B_{0}}V_{0}}\left( y_{2}e^{\omega t}\right)
^{\left( 1-\frac{2}{1+4\sqrt{B_{0}}}\right) }\left( _{2}F_{1}\left(
\alpha,\beta,\gamma,\zeta\left( t\right) \right) +4\sqrt{B_{0}}~_{2}F_{1}\left( \alpha^{\prime},\beta^{\prime},\gamma^{\prime},\zeta\left(
t\right) \right) \right), \label{ac.38}
\end{equation}
where $_{2}F_{1}\left( \alpha,\beta,\gamma,\zeta\left( t\right) \right) $
denotes the hypergeometric function represented by the hypergeometric series,
with $\alpha=1-\frac{1}{1+4\sqrt{B_{0}}},~\beta=1-\frac{2}{1+4\sqrt{B_{0}}},~\gamma=1+\alpha,~\zeta\left( t\right) =-\frac{y_{1}}{y_{2}}e^{\omega t}$
and $\alpha^{\prime}=\beta,~\beta^{\prime}=\alpha-1,~\gamma^{\prime}=\alpha$, where
$\omega^{2}=V_{0}\left( \frac{1}{8B_{0}}-2\right) $ and constraint~$\rho
_{m0}+2\left( y_{1}x_{2}+y_{2}x_{1}\right) =0$.
In the special case where $y_{2}=0$, the exact solution simplifies to
\begin{align}
y\left( t\right) & =y_{1}e^{\omega t},\label{ac.39}\\
x\left( t\right) & =x_{1}e^{\omega t}+x_{2}e^{-\omega t}-\frac
{1+4\sqrt{B_{0}}}{8\sqrt{B_{0}}V_{0}}V_{1}\left( y_{1}e^{\omega t}\right)
^{\left( 1-\frac{2}{1+4\sqrt{B_{0}}}\right) }. \label{ac.40}
\end{align}
For the latter two solutions, namely (\ref{ac.36}), (\ref{ac.37}) and
(\ref{ac.39}), (\ref{ac.40}), when $B_{0}<\frac{1}{16},~$that is, $1-\frac
{2}{1+4\sqrt{B_{0}}}<0$, for large time the dominant term is $e^{\omega t}$,
which means that the scale factor for large values of $t$,
approaches that of the de Sitter universe $a\left(t\right)=a_{0}e^{H_{0}t}$.
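As a quick sanity check of this sign condition (a sketch, not part of the derivation; the sample values of $B_{0}$ match those used in the figure below):

```python
import math

def special_exponent(B0):
    # Exponent 1 - 2/(1 + 4*sqrt(B0)) appearing in the particular
    # (V1-dependent) terms of the solutions above.
    return 1.0 - 2.0 / (1.0 + 4.0 * math.sqrt(B0))

# For B0 < 1/16 the exponent is negative, so the V1-dependent term is
# subdominant to e^{omega t} at late times ...
for B0 in (1.0 / 60, 1.0 / 50, 1.0 / 40):
    assert special_exponent(B0) < 0.0

# ... and it vanishes exactly at B0 = 1/16, where V_A collapses to phi^2.
assert abs(special_exponent(1.0 / 16)) < 1e-12
```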
For the latter exact solution (\ref{ac.39}), (\ref{ac.40}), in Fig. \ref{fig1}
we present the qualitative behaviour of the Hubble function $H\left(
z\right) $ and of the parameter for the equation of state for the effective
fluid $w\left( z\right) $ in terms of the redshift $1+z=\frac{1}{a}$.
\begin{figure}[ptb]
\centering\includegraphics[width=0.8\textwidth]{pota1} \newline
\caption{Qualitative evolution for the Hubble function $H\left( z\right) $
and for the equation of state parameter $w\left( z\right) $ for the
total fluid for the analytic solution (\ref{ac.39}), (\ref{ac.40}). For the
figures we considered, $\left( x_{1},x_{2},y_{1},V_{0}\right) =\left(
0.1,-0.1,1,0.5\right) $. Left figures are for $V_{1}=-0.1$ and different
values of $B_{0}=\left\{ \frac{1}{60},\frac{1}{50},\frac{1}{40}\right\} $.
Right figures are for $B_{0}=\frac{1}{40}$ and different values of
$V_{1}=\left\{ -0.01,-0.05,-0.1\right\} $. From the plots we observe that
the cosmological fluid passes the phantom divide line while the de Sitter
universe is a future attractor.
\label{fig1}}
\end{figure}
Now, if we assume that $V_{0}=0$, then for the super-integrable potential
$\bar{V}_{A}\left( \phi\right) $ we find the exact solution
\begin{align}
y\left( t\right) & =y_{1}t+y_{2},\label{ac.41}\\
x\left( t\right) & =x_{1}t+x_{2}+\frac{\left( 1+4\sqrt{B_{0}}\right)
^{2}}{8B_{0}y_{1}^{2}}V_{1}\left( 1-4\sqrt{B_{0}}\right) \left(
y_{1}t+y_{2}\right) ^{1+\frac{2}{1+4\sqrt{B_{0}}}}, \label{ac.42}
\end{align}
with constraint equation $\rho_{m0}-\frac{8B_{0}}{1-16B_{0}}y_{1}x_{1}=0.$ For
solution (\ref{ac.41}), (\ref{ac.42}) for large values of time the scale
factor has a power-law behaviour $a\left( t\right) =a_{0}t^{p}$, where
$p=p\left( B_{0}\right) $.
When $B_{0}=\frac{1}{16}$, potential $V_{A}\left( \phi\right) $ reads
$V_{A}\left( \phi\right) =\left( V_{0}+V_{1}\right) \phi^{2}$, which was
the one studied before. Hence, we continue by presenting the analytic solution
for potential $V_{B}\left( \phi\right) $.
\subsection{Potential $V_{B}\left( \phi\right) $}
For the second potential of our consideration, namely potential $V_{B}\left(
\phi\right) $, and for $B_{0}\neq\frac{1}{16}$, we find the generic analytic
solution written in the closed-form expression
\begin{align}
x\left( t\right) & =x_{1}e^{\omega t}+x_{2}e^{-\omega t},\label{ac.43}\\
y\left( t\right) & =y_{1}e^{\omega t}+y_{2}e^{-\omega t}+y_{sp}\left(
t\right), \label{ac.44}
\end{align}
in which
\begin{equation}
y_{sp}\left( t\right) =\frac{\left( 4\sqrt{B_{0}}-1\right) ^{2}}{64B_{0}^{3/2}\omega^{2}x_{2}}V_{1}e^{\omega t}\left( \frac{\left(
1+\frac{x_{1}}{x_{2}}e^{2\omega t}\right) }{x_{1}e^{\omega t}+x_{2}e^{-\omega
t}}\right) ^{\frac{2}{4\sqrt{B_{0}}-1}}\left( 4\sqrt{B_{0}}~_{2}F_{1}\left(
\bar{\alpha},\bar{\beta},\bar{\gamma},\bar{\zeta}\left( t\right) \right)
-{}_{2}F_{1}\left( \bar{\alpha}^{\prime},\bar{\beta}^{\prime},\bar{\gamma}^{\prime
},\bar{\zeta}\left( t\right) \right) \right) , \label{ac.45}
\end{equation}
in which $\bar{\zeta}\left( t\right) =-\frac{x_{1}}{x_{2}}e^{2\omega t}
,~\bar{\alpha}=1-\frac{2}{1-4\sqrt{B_{0}}},~\bar{\beta}=\frac{\bar{\alpha}+1}
{2},~\bar{\gamma}=4\sqrt{B_{0}}\bar{\beta},~\bar{\alpha}^{\prime}=1+\bar
{\beta},~\bar{\beta}^{\prime}=\bar{\alpha}$ and $\bar{\gamma}^{\prime}=1+\bar{\alpha}$.
Furthermore, from the constraint equation we obtain the algebraic condition
$\rho_{m0}+2\left( y_{1}x_{2}+y_{2}x_{1}\right) =0$.
If $x_{2}=0$, the closed-form solution is
\begin{align}
x\left( t\right) & =x_{1}e^{\omega t},\label{ac.46}\\
y\left( t\right) & =y_{1}e^{\omega t}+y_{2}e^{-\omega t}+\frac{\left(
4\sqrt{B_{0}}-1\right) ^{3}}{64B_{0}^{3/2}\omega^{2}}V_{1}\left(
x_{1}e^{\omega t}\right) ^{-1+\frac{2}{1-4\sqrt{B_{0}}}}, \label{ac.47}
\end{align}
from which we infer, similarly as for the potential $V_{A}\left( \phi\right)$, that
the scale factor for large values of $t$ is approximated by that of the de
Sitter universe.
When $V_{0}=0$, the analytic solution is found to be
\begin{align}
x\left( t\right) & =x_{1}t+x_{2},\label{ac.48}\\
y\left( t\right) & =y_{1}t+y_{2}+\frac{\left( 4\sqrt{B_{0}}-1\right)
^{3}V_{1}}{\left( 4\sqrt{B_{0}}-3\right) 8B_{0}x_{1}^{2}}\left(
x_{1}t+x_{2}\right) ^{1+\frac{2}{1-4\sqrt{B_{0}}}}, \label{ac.49}
\end{align}
with constraint condition~$\rho_{m0}-\frac{8B_{0}}{1-16B_{0}}y_{1}x_{1}=0$,
while when $x_{1}=0$ the generic analytic solution is
\begin{align}
x\left( t\right) & =x_{2},\label{ac.50}\\
y\left( t\right) & =y_{1}t+y_{2}+\frac{\left( 4\sqrt{B_{0}}-1\right)
}{8B_{0}}V_{1}\left( x_{2}\right) ^{-1+\frac{2}{1-4\sqrt{B_{0}}}}t^{2}.
\label{ac.51}
\end{align}
The latter solutions are physically accepted if and only if $\rho_{m0}=0$, that
is, there is no contribution from the dust fluid to the cosmological fluid.
Consequently from (\ref{ac.23}) we infer that the scale factor for the
solutions with $V_{0}=0$ have a power-law expression.
For $B_{0}=\frac{1}{16}$, we work with the variables $\left\{ u,v\right\} $
which are defined by expression (\ref{ac.28}). Hence, the field equations are
reduced to the following system
\begin{align}
\ddot{u}-8V_{1}e^{3v} & =0,\label{ac.52}\\
\ddot{v}-\frac{8}{3}V_{0} & =0, \label{ac.53}
\end{align}
from where it follows the generic solution
\begin{align}
v\left( t\right) & =\frac{4}{3}V_{0}t^{2}+v_{1}t+v_{2},\label{ac.54}\\
u\left( t\right) & =u_{1}t+u_{2}+\frac{e^{3v\left( t\right) }V_{1}\left( \left( 8tV_{0}+3V_{1}\right) D_{+}\left( \frac{8V_{0}t+3V_{1}}{4\sqrt{V_{0}}}\right) -2\sqrt{V_{0}}\right) }{2V_{0}}, \label{ac.55}
\end{align}
where $D_{+}\left( t\right) $ is the Dawson function defined as
$D_{+}\left( t\right) =e^{-t^{2}}\int_{0}^{t}e^{r^{2}}dr$, while from the
constraint equation it follows $\rho_{m0}=\frac{3}{8}v_{1}u_{1}$. Last, but not
least, when $V_{0}=0$, i.e. $V_{B}\left( \phi\right) =V_{1}\phi^{-\frac
{1}{2\sqrt{B_{0}}}}$ the analytic solution is
\begin{align}
v\left( t\right) & =v_{1}t+v_{2},\label{ac.56}\\
u\left( t\right) & =u_{1}t+u_{2}+\frac{8V_{1}}{9v_{1}^{2}}e^{3\left(
v_{1}t+v_{2}\right) }. \label{ac.57}
\end{align}
From solution (\ref{ac.56}), (\ref{ac.57}) we calculate
\begin{equation}
a\left( t\right) =a_{0}\left( e^{\frac{12}{5}v_{1}t}+a_{1}te^{-\frac{3}{5}v_{1}t}\right) ^{\frac{5}{12}}, \label{ac.58}
\end{equation}
which gives
\begin{equation}
H\left( t\right) =v_{1}+\frac{5a_{1}\left( 1-3tv_{1}\right) }{12\left(
e^{3v_{1}t}+a_{1}t\right) }, \label{ac.59}
\end{equation}
which means that for large values of $t$ and for positive $v_{1}$, the
solution behaves like that of the de\ Sitter universe.
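This late-time behaviour can be verified numerically by differentiating $\ln a$ of the scale factor above with finite differences; the parameter values below are illustrative, not taken from the paper:

```python
import math

# Illustrative parameters for a(t) = a0*(e^{(12/5)v1 t} + a1 t e^{-(3/5)v1 t})^{5/12}.
a0, a1, v1 = 1.0, 0.2, 0.5

def ln_a(t):
    return math.log(a0) + (5.0 / 12.0) * math.log(
        math.exp(12.0 / 5.0 * v1 * t) + a1 * t * math.exp(-3.0 / 5.0 * v1 * t)
    )

def hubble(t, h=1e-5):
    # H = d(ln a)/dt via a central finite difference.
    return (ln_a(t + h) - ln_a(t - h)) / (2.0 * h)

# Early on H still differs appreciably from v1 ...
assert abs(hubble(1.0) - v1) > 1e-3
# ... but at late times it converges to v1, i.e. a de Sitter phase.
assert abs(hubble(30.0) - v1) < 1e-6
```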
For the scale factor (\ref{ac.58}) we present in Fig. \ref{fig2} the qualitative behavior of the
Hubble function $H\left(z\right)$, as well as the behavior of the equation of
state parameter $w\left(z\right)$ of the effective fluid in terms of the redshift
$1+z=\frac{1}{a}$.
\begin{figure}[ptb]
\centering\includegraphics[width=0.8\textwidth]{potb1} \newline
\caption{Qualitative evolution for the Hubble function $H\left( z\right) $
and for the equation of state parameter $w\left( z\right) $ for the
total fluid for the scale factor (\ref{ac.58}) which holds for $B_{0}=\frac
{1}{16}$. For the figures we considered \thinspace$a_{0}=1$. Left figures are
for $V_{1}=0.2$ and different values of $a_{1}=\left\{ -0.1,0.5,1\right\} $.
Right figures are for $a_{1}=0.1$ and different values of $V_{1}=\left\{
0.2,0.4,0.5\right\} $.
\label{fig2}}
\end{figure}
\subsection{Potential $V_{C}\left( \phi\right) $}
For the potential $V_{C}\left( \phi\right)$ we choose the
coordinates
\begin{align}
a & =r^{\frac{2}{3\left( 1-16B_{0}\right) }}\left( \cosh\theta
+\sinh\theta\right) ^{\frac{1}{3-12\sqrt{B_{0}}}}\left( \cosh\theta
-\sinh\theta\right) ^{\frac{1}{3+12\sqrt{B_{0}}}}~,\label{ac.60}\\
~\phi & =r^{1+\frac{1}{16B_{0}-1}}\left( \cosh\theta+\sinh\theta\right)
^{\frac{2\left( 4B_{0}+\sqrt{B_{0}}\right) }{16B_{0}-1}}\left( \cosh
\theta-\sinh\theta\right) ^{\frac{2\left( 4B_{0}-\sqrt{B_{0}}\right)
}{16B_{0}-1}}~,~B_{0}\neq\frac{1}{16}. \label{ac.61}
\end{align}
In the new coordinates the point-like Lagrangian reads
\begin{equation}
L_{C}\left( r,\dot{r},\theta,\dot{\theta}\right) =\frac{8B_{0}}{1-16B_{0
}\left( -\dot{r}^{2}+r^{2}\dot{\theta}^{2}\right) -V_{0}r^{2}-V_{1}\frac{e^{-\frac{\theta}{\sqrt{B_{0}}}}}{r^{2}}-\rho_{m0}. \label{ac.62}
\end{equation}
We readily observe that this is the Ermakov--Pinney system defined in a
two-dimensional space of Lorentzian signature.
The Hamiltonian function is written as follows
\begin{equation}
\frac{1}{2}\left( -\dot{r}^{2}+r^{2}\dot{\theta}^{2}\right) -\bar{V}_{0}r^{2}+\bar{V}_{1}\frac{e^{-\frac{\theta}{\sqrt{B_{0}}}}}{r^{2}}+\bar{\rho
}_{m0}=0, \label{ac.63}
\end{equation}
where $\left( V_{0},V_{1},\rho_{m0}\right) =\frac{4B_{0}}{1-16B_{0}}\left(
-\bar{V}_{0},\bar{V}_{1},\bar{\rho}_{m0}\right) .$ By using the momenta
$p_{r}=\frac{\partial L}{\partial\dot{r}}$ and $p_{\theta}=\frac{\partial
L}{\partial\dot{\theta}}$, the constraint equation becomes
\begin{equation}
-\frac{1}{2}p_{r}^{2}+\bar{V}_{0}r^{2}+\bar{\rho}_{m0}+\frac{p_{\theta}^{2}+2\bar{V}_{1}e^{-\frac{\theta}{\sqrt{B_{0}}}}}{2r^{2}}=0, \label{ac.64}
\end{equation}
from which we infer the two first-order ordinary differential equations
\begin{align}
\frac{1}{2}p_{\theta}^{2}+\bar{V}_{1}e^{-\frac{\theta}{\sqrt{B_{0}}}} &
=J_{0},\label{ac.65}\\
-\frac{1}{2}p_{r}^{2}+\bar{V}_{0}r^{2}+\bar{\rho}_{m0}+\frac{J_{0}}{r^{2}} &
=0. \label{ac.66}
\end{align}
Thus the generic solution is
\begin{equation}
r^{2}\left( t\right) =r_{1}e^{2\sqrt{2V_{0}}t}+r_{2}e^{-2\sqrt{2V_{0}}t}+r_{3}, \label{ac.67}
\end{equation}
with constraints $V_{0}r_{3}^{2}-\left( 4r_{1}r_{2}V_{0}-J_{0}V_{1}\right) =0$
and $\rho_{m0}=2\left\vert V_{0}r_{3}\right\vert $, while $\theta\left(
t\right) $ is given by the first-order ordinary differential equation
\begin{equation}
\frac{1}{2}r^{2}\dot{\theta}^{2}+V_{1}e^{-\frac{\theta}{\sqrt{B_{0}}}}-J_{0}=0. \label{ac.68}
\end{equation}
When $V_{0}=0$, the exact solution is
\begin{equation}
r\left( t\right) =r_{1}\left( t-t_{0}\right) ^{2}+r_{3}\left(
t-t_{0}\right) , \label{ac.69}
\end{equation}
with constraints $8V_{1}J_{0}=-r_{3}^{2}$ and $2\rho_{m0}+r_{1}=0$, while
$\theta\left( t\right) $ is given again by equation (\ref{ac.68}). Remark
that for $B_{0}=\frac{1}{16}$ it follows $V_{C}\left( \phi\right) =\left(
V_{0}+V_{1}\right) \phi^{2}$.
\subsection{Potential $V_{D}\left( \phi\right) $}
For potential $V_{D}\left( \phi\right) $, in the canonical coordinates
$\left\{ x,y\right\} $ the constraint equation, i.e. the Hamiltonian of the
dynamical system is written as
\begin{equation}
p_{x}p_{y}-\bar{V}_{0}y^{\frac{2}{1+4\sqrt{B_{0}}}}-\bar{V}_{1}\frac
{y^{-\frac{1}{2}+\frac{3}{1+4\sqrt{B_{0}}}}}{\sqrt{x}}-\bar{\rho}_{m0}=0,
\label{ac.70}
\end{equation}
where $\left\{ p_{x},p_{y}\right\} =\left\{ \dot{y},\dot{x}\right\} $ and
$\left( \bar{V}_{0},\bar{V}_{1},\bar{\rho}_{m0}\right) =\left( \frac
{1}{8B_{0}}-2\right) \left( V_{0},V_{1},\rho_{m0}\right) $. The dynamical
system admits the additional conservation law
\begin{equation}
xp_{x}^{2}-yp_{x}p_{y}-\bar{V}_{1}\frac{y^{-\frac{1}{2}+\frac{3}{1+4\sqrt{B_{0}}}}}{\sqrt{x}}+\frac{2\bar{V}_{0}}{3+4\sqrt{B_{0}}}y^{1+\frac{2}{1+4\sqrt{B_{0}}}}=I_{0}. \label{ac.71}
\end{equation}
The Action, which follows as a solution of the Hamilton--Jacobi equation, is
calculated as
\begin{equation}
S_{D}\left( x,y\right) =2\sqrt{x\left( \bar{\rho}_{m0}y+I_{0}\right)
+\frac{1+4\sqrt{B_{0}}}{3+4\sqrt{B_{0}}}\bar{V}_{0}xy^{1+\frac{2}{1+4\sqrt{B_{0}}}}}+\bar{V}_{1}\bigint\frac{y^{-\frac{-5+4\sqrt{B_{0}}}{2\left(
1+4\sqrt{B_{0}}\right) }}}{\sqrt{\left( \bar{\rho}_{m0}y+I_{0}\right)
+\frac{1+4\sqrt{B_{0}}}{3+4\sqrt{B_{0}}}\bar{V}_{0}y^{1+\frac{2}{1+4\sqrt{B_{0}}}}}}dy, \label{ac.72}
\end{equation}
such that the analytic solution of the field equations is given by the
following system of two first-order ordinary differential equations
\begin{equation}
\dot{x}=\frac{\partial S_{D}}{\partial y}~,~\dot{y}=\frac{\partial S_{D
}{\partial x}, \label{ac.73}
\end{equation}
where in the special case $\bar{\rho}_{m0}=0,~\bar{V}_{0}=0$ they become
\begin{equation}
\dot{x}=\frac{\bar{V}_{1}}{\sqrt{I_{0}}}y^{-\frac{-5+4\sqrt{B_{0}}}{2\left(
1+4\sqrt{B_{0}}\right) }}~,~\dot{y}=\sqrt{\frac{I_{0}}{x}}, \label{ac.74}
\end{equation}
or equivalently
\begin{equation}
x^{\frac{3}{2}}=\frac{2I_{0}\bar{V}_{1}\left( 1+4\sqrt{B_{0}}\right)
}{7+4\sqrt{B_{0}}}y^{1-\frac{-5+4\sqrt{B_{0}}}{2\left( 1+4\sqrt{B_{0}}\right) }}+c_{0},
\end{equation}
or
\begin{equation}
I_{0}^{3/2}\ddot{y}+\bar{V}_{1}y^{-\frac{-5+4\sqrt{B_{0}}}{2\left(
1+4\sqrt{B_{0}}\right) }}\dot{y}^{3}=0. \label{ac.75}
\end{equation}
A special solution of the latter equation is the power-law expression $y\simeq
t^{p},~p=\frac{2}{3}\frac{\left( 1+4\sqrt{B_{0}}\right) }{3+4\sqrt{B_{0}}}$,
which leads to a power-law scale factor. We remark that when $B_{0}=\frac{1}{16}$, for the potential $V_{D}\left(
\phi\right) $ it follows that $V_{D}\left( \phi\right) =\left( V_{0}+V_{1}\right) \phi^{2}$.
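The exponent $p$ of this power-law solution follows from balancing the powers of $t$ of the two terms of the last differential equation above, which fixes $p=1/(q+2)$ with $q$ the exponent of $y$. A short numeric check of this balance (the sample values of $B_{0}$ are illustrative):

```python
import math

def balance_exponent(B0):
    """Return (q, p from the balance, p in closed form) for the ODE
    I0^{3/2} y'' + V1bar * y^q * (y')^3 = 0 under the ansatz y ~ t^p."""
    s = math.sqrt(B0)
    q = -(-5.0 + 4.0 * s) / (2.0 * (1.0 + 4.0 * s))  # exponent of y
    # Matching powers of t:  p - 2 = q*p + 3*(p - 1)  =>  p = 1/(q + 2).
    p_balance = 1.0 / (q + 2.0)
    p_closed = (2.0 / 3.0) * (1.0 + 4.0 * s) / (3.0 + 4.0 * s)
    return q, p_balance, p_closed

for B0 in (1.0 / 60, 1.0 / 40, 1.0 / 20):
    q, p_b, p_c = balance_exponent(B0)
    # the closed form reproduces the balance value ...
    assert abs(p_b - p_c) < 1e-12
    # ... and both terms of the ODE then carry the same power of t
    assert abs((p_b - 2.0) - (q * p_b + 3.0 * (p_b - 1.0))) < 1e-12
```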
\subsection{Potential $V_{E}\left( \phi\right) $}
The procedure that we follow to write the analytic solution for potential
$V_{E}\left( \phi\right) $ is based on the derivation of the Action by
solving the Hamilton-Jacobi equation, as we did for potential $V_{D}\left(
\phi\right) $.
In the canonical coordinates $\left\{ x,y\right\} $ the constraint equation
reads
\begin{equation}
p_{x}p_{y}-\bar{V}_{1}\frac{x^{-\frac{1}{2}+\frac{3}{1-4\sqrt{B_{0}}}}}{\sqrt{y}}-\bar{V}_{0}x^{\frac{2}{1-4\sqrt{B_{0}}}}-\bar{\rho}_{m0}=0,
\label{ac.76}
\end{equation}
where $\left( \bar{V}_{0},\bar{V}_{1},\bar{\rho}_{m0}\right) =\left(
\frac{1}{8B_{0}}-2\right) \left( V_{0},V_{1},\rho_{m0}\right) $. The
quadratic conservation law admitted by the field equations is
\begin{equation}
yp_{y}^{2}-xp_{x}p_{y}+\bar{V}_{1}\frac{x^{\frac{1}{2}+\frac{3}{1-4\sqrt
{B_{0}}}}}{\sqrt{y}}+\frac{2\bar{V}_{0}}{3-4\sqrt{B_{0}}}x^{1+\frac
{2}{1-4\sqrt{B_{0}}}}=I_{0}. \label{ac.77}
\end{equation}
Consequently, the Action is calculated as
\begin{equation}
S_{E}\left( x,y\right) =\sqrt{y\left( \bar{\rho}_{m0}x+I_{0}\right)
+\frac{1-4\sqrt{B_{0}}}{3-4\sqrt{B_{0}}}x^{\frac{3-4\sqrt{B_{0}}}{1-4\sqrt{B_{0}}}}}+\bar{V}_{1}\bigint\frac{x^{-\frac{5+4\sqrt{B_{0}}}{2\left(
1-4\sqrt{B_{0}}\right) }}}{\sqrt{\left( \bar{\rho}_{m0}x+I_{0}\right)
+\frac{1-4\sqrt{B_{0}}}{3-4\sqrt{B_{0}}}x^{\frac{3-4\sqrt{B_{0}}}{1-4\sqrt{B_{0}}}}}}dx, \label{ac.78}
\end{equation}
where the reduced equations are
\begin{equation}
\dot{x}=\frac{\partial S_{E}}{\partial y}~,~\dot{y}=\frac{\partial S_{E
}{\partial x}. \label{ac.79}
\end{equation}
The latter solution has similarities with that of potential
$V_{D}\left( \phi\right) $, but for a different value of the constant $B_{0}$,
specifically under the formal replacement $B_{0}\rightarrow i^{4}B_{0}$.
For $B_{0}=\frac{1}{16}$, in the canonical coordinates $\left\{ u,v\right\} $
the constraint equation becomes
\begin{equation}
\frac{3}{8}p_{u}p_{v}-V_{0}e^{3v}-V_{1}\frac{e^{\frac{9}{2}v}}{\sqrt{u}}-\rho_{m0}=0, \label{ac.80}
\end{equation}
while the quadratic conservation law reads
\begin{equation}
up_{u}^{2}-vp_{u}p_{v}+\frac{8}{3}V_{1}\frac{ve^{\frac{9}{2}v}}{\sqrt{u
}+\frac{8}{3}e^{3v}\left( v-\frac{1}{3}\right) -I_{0}=0, \label{ac.81}
\end{equation}
in which $\left\{ \dot{u},\dot{v}\right\} =\left\{ p_{v},p_{u}\right\} $.
Hence, the Action is derived as
\begin{equation}
S_{E}\left( u,v\right) =-\frac{2}{3}\sqrt{u\left( 24u\rho_{m0}+9I_{0}+8V_{0}e^{3v}\right) }-8V_{1}\bigint\frac{e^{\frac{9}{2}v}}{\sqrt{\left(
24u\rho_{m0}+9I_{0}+8V_{0}e^{3v}\right) }}dv. \label{ac.82}
\end{equation}
In the special case where $V_{0}=0$ and $\rho_{m0}=0$, the field equations reduce
to the simple form
\begin{equation}
\dot{u}=\frac{8}{3}\frac{V_{1}}{\sqrt{I_{0}}}e^{\frac{9}{2}v}~,~\dot{v}=\sqrt{\frac{I_{0}}{u}}, \label{ac.83}
\end{equation}
that is $u^{\frac{3}{2}}=\frac{1}{12}\frac{V_{1}}{I_{0}}e^{\frac{9}{2}v}$ or
equivalently
\begin{equation}
\ddot{v}+\frac{4}{3}\frac{V_{1}}{\left( I_{0}\right) ^{\frac{3}{2}}}e^{\frac{9}{2}v}\dot{v}^{3}=0, \label{ac.84}
\end{equation}
which is an integrable differential equation and admits the special solution
$e^{v}\simeq t^{\frac{2}{9}}.$ Finally, from (\ref{ac.28}) and (\ref{ac.83})
the scale factor is found to be a power-law function.
\section{Conclusions}
\label{sec5}
In this work we have considered a Lorentz--violating scalar field cosmological model
in a spatially flat FLRW background space. Specifically, we have included an aether field in the Einstein-Hilbert action leading to the
Einstein-aether theory, where the aether field is coupled to the scalar field through the aether parameters as proposed by Kanno et al. \cite{Kanno:2006ty}.
The resulting field equations are of second order, as expected for such theories, and they can be produced by a point-like Lagrangian. There are similarities with
scalar-tensor theories, although the two are quite different theories.
We have focused on the construction of scalar field potentials for which the field
equations are Liouville--integrable, which means that the field equations can be
solved by quadratures. Consequently, we investigated the functional forms of
the scalar field potential for which the field equations admit conservation laws quadratic in the
momenta. By using the second conservation law we were able to write the analytic solution of the field equations for these specific scalar
field potentials and, whenever it was feasible, we have expressed the scale factor and the scalar field in closed-form functions.
We have found five families of scalar field potentials which are Liouville--integrable
and admit conservation laws quadratic in the momenta; they are of the form
$V\left( \phi\right) =V_{0}\phi^{p}+V_{1}\phi^{r}$, where $p,r$ are
constants. Depending on the dominant term of the potential, the analytic solution for
the scale factor behaves like a power-law function or like an exponential function,
which describes the de Sitter universe when the dominant power has the value
two. We remark that we have selected a specific interaction function between
the aether and the scalar fields. The interaction function that we selected also has
a geometric origin, since for that function, in the minisuperspace
description, the dynamical variables of the field equations evolve in a
two-dimensional space of maximal symmetry; in particular, in a two-dimensional
flat space of Lorentzian signature. That is also a condition that we have
assumed in order for the field equations to admit conservation laws quadratic in
the momenta.
For some of the closed-form solutions that we have found, we have studied the
qualitative behavior of the Hubble factor, and we have presented the evolution
of the equation of state parameter of the effective fluid in terms of the redshift. From
this analysis we found that the effective fluid can cross the phantom divide line and
behave like a quintom field \cite{Dutta:2009yb,Guo:2004fq,Zhao:2006mp,Lazkoz:2006pa,Lazkoz:2007mx,MohseniSadjadi:2006hb,Setare:2008pz,Setare:2008dw,Saridakis:2009ej,Cai:2009zp,Qiu:2010ux,Leon:2012vt,Leon:2018lnd,Paliathanasis:2018vru} or like a phantom field \cite{Singh:2003vx,Sami:2003xv,Andrianov:2005tm,Elizalde:2008yf,Sadatian:2008sv}. However, the final attractor for these solutions is the de Sitter universe. More analysis should be done on these models in
order to establish their physical viability, specifically whether they can be confronted with
cosmological observations. However, such an analysis extends beyond the scope of this
work and will be published elsewhere.
\begin{acknowledgments}
G. L. was funded by Agencia Nacional de Investigaci\'{o}n y Desarrollo--ANID
through the program FONDECYT Iniciaci\'{o}n grant no. 11180126.
Additionally, G. L. acknowledges the financial support of Vicerrector\'{\i}a de Investigaci\'{o}n y Desarrollo Tecnol\'{o}gico at
Universidad Catolica del Norte.
\end{acknowledgments}
\section{INTRODUCTION}
Connected and autonomous vehicles have emerged as an extensive and promising research area over the past two decades~\cite{qayyum2020securing}. As a closely related topic, vehicular platooning earns its reputation by providing driving/passenger comfort, improved energy efficiency, pollution reduction, as well as increased traffic throughput. The platooning concept involves a group of vehicles travelling in a tightly coupled manner from an origin to a destination as a single unit. A platoon member receives other vehicles' dynamics and maneuver-related information via a vehicle-to-vehicle (V2V) communication network to compute control commands accordingly and maintain platoon stability, i.e. to maintain a narrow inter-vehicle distance and relative velocity.
However, such V2V communication implementations also expose novel attack vectors, which increase security vulnerabilities and highlight vehicle platoons as an appealing target for cyber-physical attacks. Adversaries could inject multiple falsified vehicle nodes into the platoon remotely, which allows them to publish carefully crafted beacon messages to gain the privilege of the road or to cause traffic congestion and even serious collisions~\cite{boeira2017effects}. \textit{There is an urgent need to address the safety risks caused by such communication-based attacks}.
This paper focuses on a longitudinal control system for vehicular platooning, characterised by two upper controllers, which provide individual vehicle stability and/or string stability. We propose
a novel dual-mode control system reconfiguration scheme that involves a switching process between the two controllers. To ensure stable switching, we firstly provide conditions on the controller gains that are sufficient to guarantee global uniform exponential stability (GUES). Secondly, a minimum dwell time constraint for the string stable controller is provided, which means this controller is required to be activated at least for this amount of time to mitigate the defects brought by the other controller in terms of string stability. Thirdly, a security game with imperfect information is then constructed and solved to guide the switching process based on Nash equilibrium solutions, which creates a hybrid combination of game theory and control theory. This game models intelligent and stealthy attacks from rational adversaries, who attempt to bypass detection using their knowledge about the system. It takes the limitations of existing anomaly detection approaches into account while modeling the interactions between attacker and defender.
Fourthly, a dedicated switching surface is also introduced to capture practical constraints. Lastly, the effectiveness of the proposed approach is shown with some numerical and simulation examples.
The contributions of this paper include:
\begin{itemize}
\item We present a dual-mode control system reconfiguration scheme to mitigate communication-based attacks, along with a sufficient condition in terms of controller gains to ensure stability of the switched system.
\item We provide a lower bound on the dwell time of the string-stable controller to ensure string stability of the switched system after an attack is mitigated.
\item We develop a unique approach that uses game theory to guide a switched system for communication-based attack mitigation purposes. Our security game formulation captures imperfect detectors as a chance node in our security game structure, and takes detection errors (e.g., false alarms and misses) into account.
\item The results are illustrated using sophisticated, system-level simulations.
\end{itemize}
\vspace{-0.1cm}
The rest of the paper is organised as follows.
Section~\ref{Background} formulates the considered platoon framework. Section~\ref{Attack Model} presents the attack model. The proposed control system reconfiguration scheme is discussed in Section~\ref{DefenseFramework} along with the derived stability conditions. Game theoretic analysis is performed in Section~\ref{Game}. Section~\ref{conclusion} outlines some concluding remarks and future work.
\subsection{Literature Review} \label{sec:litreview}
Sumra \emph{et al.}~\cite{sumra2015attacks} provide a comprehensive survey of the attacks on major security goals, i.e., confidentiality, integrity and availability. Malicious attackers can breach privacy by attempting an eavesdropping attack to steal and misuse confidential information \cite{wiedersheim2010privacy}. The use of vehicular botnets to attempt a denial-of-service (DoS) or distributed denial-of-service (DDoS) attack may cause serious network congestion or disruption \cite{zhang2020distributed, feng2020dynamic}. The attacker may disrupt the platoon operation by hijacking the sensors to conduct a replay attack, with the aim of feeding the control system with outdated signals generated previously by the system~\cite{merco2018replay}.
Several attempts have been made to detect communication-based attacks. A sliding mode observer is proposed for cyber-attack detection and estimation in the case of event-triggered communication~\cite{keijzer2019sliding}.
There is also a growing body of literature that recognises the effectiveness of machine learning based intrusion detection systems applied to vehicular platooning~\cite{dadras2018identification, yang2019tree, alotibi2020anomaly, sunstrategic}. Even though such studies aim to maximise detection performance, inevitable false alarm and miss rates are problematic for real-world applications. Moreover, attack mitigation still remains an open and active research area.
Intelligent adversaries may also use their knowledge about system vulnerabilities to perform stealthy attacks that maximise their effect and minimise the detection rate.
There has been an increasing interest in cybersecurity analysis from a game-theoretic viewpoint \cite{alpcan2010network}. Extensive research has been carried out to examine problems of allocating limited defense resources (e.g. energy \cite{sedjelmaci2016lightweight}, manpower \cite{fang2016deploying}, communication bandwidth \cite{subba2018game1}) against adversaries in a network. In addition, several studies~\cite{zohdy2012game,dextreit2013game,marden2015game,junhui2013power} have also applied game theory to the design of control systems. For instance, the authors in \cite{dextreit2013game} model the interaction of the driver and the powertrain of an electric vehicle as a non-cooperative game and construct a controller based on a feedback Stackelberg equilibrium. Game theory is also applied to coordinate autonomous vehicles at an uncontrolled intersection to minimize the total delay~\cite{zohdy2012game}. The switched system concept provides a systems-oriented alternative pathway to mitigate attack effects and enhance safety in such adversarial environments. A specific class of switched systems is defined as a family of linear time-invariant systems whose parameters vary within a single finite set according to a switching signal or switching surface. There is a large body of literature on observer design for switched systems with unknown inputs \cite{yang2017simultaneous, zammali2020interval, van2014hybrid} and attempts to stabilise the system under such situations \cite{lee2006optimal, yang2016finite, sanchez2019practical}. The present work builds on this existing literature and introduces a novel dual-mode control system reconfiguration scheme defined as a switched system consisting of a communication-based cooperative controller and a sensor-based local controller. This article presents a novel and systematic approach to a game-theory-powered switching system as a local online defense strategy.
\section{BACKGROUND}\label{Background}
\subsection{Autonomous Vehicle Platoon Model}
We consider a hierarchical longitudinal control structure~\cite{rajamani2011vehicle}, with an upper level controller and a lower level controller as shown in Fig.~\ref{fig:HighLevelControlStuct}. The upper level controller uses information on other vehicles acquired via sensors or communication and internal vehicle dynamics to compute the desired acceleration $a_{dir}$ for each vehicle. The lower level controller generates actuator inputs (e.g. throttle and/or brake commands) to track the desired acceleration.
\begin{figure}[tp]
\centering
\includegraphics[width=\linewidth]{Figures/StructureOfLongitudinalControlSystem.pdf}
\caption{Structure of the longitudinal control system from the perspective of a vehicle in a platoon.}
\label{fig:HighLevelControlStuct}
\end{figure}
In more detail, consider a platoon with $2\leq N< \infty$ vehicles. The $i^{th}$ vehicle's state is defined as $[x_i(t), v_i(t)]^T$, where $x_i(t)\in \mathbb{R}$ and $v_i(t)\in \mathbb{R}$ are the position and velocity of vehicle $i$. For the present work, we focus on the upper level controller in which each vehicle follows second-order dynamics
\begin{equation} \label{eq:UpperController}
\begin{split}
\dot x_i(t) &= v_i(t),\\
\dot v_i(t) &= u_i(t),
\end{split}
\end{equation}
where $u_i(t)$ is the control input (acceleration) to the system. To simplify the notation, time $t$ is omitted thereafter. The control policies considered in our defense framework can take either of the following two forms:\\
\noindent \textbf{1)} \textbf{Cooperative Adaptive Cruise Control (CACC) } \\
The control input in CACC is given by
\begin{equation}\label{CACC}
u_i = \sum_{j\in \mathpzc{N}_{i}} \alpha_{ij}(x_i-x_j+L_{ij}) +\sum_{j\in \mathpzc{N}_{i}}\beta_{ij} (v_i-v_j)+\sum_{j\in \mathpzc{N}_{i}} \gamma_{ij} a_j ,
\end{equation}
where the set $\mathpzc{N}_{i}$ contains vehicles that communicate with vehicle $i$ (i.e., vehicle $i$'s \textit{neighbors}) and $a_j$ is the acceleration of vehicle $j$. Here $\alpha_{ij}\in \mathbb{R}$, $\beta_{ij}\in \mathbb{R}$ and $\gamma_{ij}\in \mathbb{R}$ are controller gains. In this way, the $i^{th}$ vehicle adjusts the desired acceleration in order to coordinate its speed with its \textit{neighbors} and to maintain its position relative to each \textit{neighbor} around a desired (or target) inter-vehicle distance $L_{ij}$.
We choose the desired distance $L_{ij} = L\Delta_{i,j}$, where
$\Delta_{i,j}\in \mathbb{N}$ is the number of vehicles (hops) between vehicle $i$ and $j$, and $L$ is constant and uniform for all vehicles of the platoon.
Note that other vehicles' dynamics information, the position, speed and acceleration tuple $(x_j,v_j,a_j)$, is acquired via wireless communication through a V2V communication network, which can be implemented for example in the form of a 5G or vehicular ad-hoc network. \\
\noindent \textbf{2)} \textbf{Adaptive Cruise Control (ACC)}\\
In this controller, the control input is given by
\begin{equation}\label{ACC]}
u_i =\sum_{j\in \mathpzc{N}_{i}} \alpha_{ij}(x_i-x_j+L_{ij}) +\sum_{j\in \mathpzc{N}_{i}}\beta_{ij} (v_i-v_j) ,
\end{equation}
where the set $\mathpzc{N}_{i}$ contains the \textit{neighbors} of vehicle $i$ that are detectable by on-board range sensors. As above, $\alpha_{ij}\in \mathbb{R}$ and $\beta_{ij}\in \mathbb{R}$ are control gains. The ACC control policy uses only the relative position and velocity as feedback in order to generate the desired acceleration that maintains a preset inter-vehicle distance $L_{ij}$ and relative velocity. In comparison to CACC, the other vehicles' dynamics information is obtained only from sensor measurements, which we assume in this paper to be reliable, unlike communicated messages.
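As a concrete illustration of the two control laws above, the following Python sketch evaluates the CACC and ACC inputs for one vehicle under a predecessor-leader topology. The gains, positions and velocities are illustrative assumptions, not values from this paper.

```python
# Sketch of the CACC and ACC upper-level control laws, for a
# predecessor-leader topology. All gains, positions and velocities are
# illustrative assumptions, not values taken from this paper.

def cacc_input(x_i, v_i, neighbors, gains, L):
    """neighbors: {j: (x_j, v_j, a_j, hops)}; gains: {j: (alpha, beta, gamma)}."""
    u = 0.0
    for j, (x_j, v_j, a_j, hops) in neighbors.items():
        alpha, beta, gamma = gains[j]
        L_ij = L * hops  # desired spacing L_ij = L * Delta_ij
        u += alpha * (x_i - x_j + L_ij) + beta * (v_i - v_j) + gamma * a_j
    return u

def acc_input(x_i, v_i, predecessor, alpha, beta, L):
    """ACC uses only radar-measured position/velocity of the predecessor."""
    x_j, v_j = predecessor
    return alpha * (x_i - x_j + L) + beta * (v_i - v_j)

# Vehicle 3 exactly at its desired spacing behind the leader (2 hops) and
# its predecessor (1 hop), all at the same speed: the commanded input is zero.
L = 10.0
neighbors = {1: (40.0, 20.0, 0.0, 2), 2: (30.0, 20.0, 0.0, 1)}
gains = {1: (-0.5, -1.0, 0.5), 2: (-0.5, -1.0, 0.5)}
u_eq = cacc_input(20.0, 20.0, neighbors, gains, L)  # -> 0.0
```

With negative position/velocity gains, a vehicle that is too close to its predecessor receives a negative (braking) input, as expected.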
In general, the double-integrator feedback system \eqref{eq:UpperController} for vehicle $i$ can be represented as
\begin{equation}\label{eq:MatrixForm}
\dot z = Az + B R,
\end{equation}
where $z = [x_i(t), v_i(t)]^T$ is the state vector
for vehicle $i$ (abusing notation for simplicity) and
$$ R= [[x_{j}-L_{ij}] [v_{j}] [a_j]]^T, \ \ \forall j\in \mathpzc{N}_{i},$$
is an external input vector that consists of the other vehicles' dynamics, where $[\cdot]$ represents a row vector of appropriate size. The matrix $A$ takes one of the following forms, depending on whether the CACC \eqref{CACC]} or the ACC \eqref{ACC]} control policy is in place
\begin{subequations}
\begin{equation}
\label{A:CACC}
A_{CACC} = \left[
\begin{array}{cc}
0 & 1 \\
k_1 & k_2 \\
\end{array}
\right]= \left[
\begin{array}{cc}
0 & 1 \\
\sum_{j\in \mathpzc{N}_{i}} \alpha_{ij} & \sum_{j\in \mathpzc{N}_{i}}\beta_{ij} \\
\end{array}
\right],
\end{equation}
\begin{equation}
\label{A:ACC}
A_{ACC} = \left[
\begin{array}{cc}
0 & 1 \\
k_3 & k_4 \\
\end{array}
\right]= \left[
\begin{array}{cc}
0 & 1 \\
\sum_{j\in \mathpzc{N}_{i}} \alpha_{ij} & \sum_{j\in \mathpzc{N}_{i}}\beta_{ij} \\
\end{array}
\right],
\end{equation}
\end{subequations}
where the $k$'s are the corresponding combinations of the $\alpha$ and $\beta$ parameters as in \eqref{CACC]} and \eqref{ACC]}. Similarly, the matrix $B$ takes the following forms
\begin{subequations}
\begin{equation}
B_{CACC} =
\begin{bmatrix}
[0] & [0] & [0]\\
[-\alpha_{ij}] & [-\beta_{ij}] & [\gamma_{ij}] \\
\end{bmatrix}, \ \ \forall j\in \mathpzc{N}_{i},
\end{equation}
\begin{equation}
B_{ACC} = \begin{bmatrix}
\begin{array}{ccc}
[0] & [0] & [0]\\
[-\alpha_{ij}] & [-\beta_{ij}]&[0]\\
\end{array}
\end{bmatrix},\ \ \forall j\in \mathpzc{N}_{i},
\end{equation}
\end{subequations}
where $[\cdot]$ represents a row vector of appropriate size. \\[-0.75em]
\noindent\textbf{Assumptions.} To simplify our analysis, we adopt a specific CACC setup as in \cite{segata2014plexe}, which is based on the \textit{predecessor-leader following} information flow topology \cite{zheng2015stability}. In particular, each vehicle receives communicated position, velocity and acceleration information from only the platoon leader and its immediately preceding vehicle, which equivalently means $\mathpzc{N}_{i} = \{1,\,i-1\}$ in \eqref{CACC]}.
We also assume each vehicle in the platoon is equipped only with a front-facing radar sensor, which measures the position and velocity of its predecessor (i.e., $\mathpzc{N}_{i} = \{i-1\}$ in \eqref{ACC]}). The platoon leader is assumed to be driven by a human driver who is not affected by communication attacks or sensor noise. Communication noise and sensor noise are ignored due to their low impact on system safety compared to intentional attacks.
\subsection{Basic Platoon Stability Analysis}\label{BasicStability}
In platoon operation with a constant spacing policy, two fundamental specifications, namely individual vehicle stability and string stability, must be satisfied to achieve long-term stable operation.
\begin{definition}[Individual Vehicle Stability \cite{rajamani2011vehicle}]
Let's define the spacing error for the $i^{th}$ vehicle to be $\epsilon_i = x_i-x_{i-1}+L$. The system \eqref{eq:MatrixForm} is said to have individual vehicle stability if the following condition holds:
\begin{equation}\label{indStableCond}
\ddot x_{i-1} \rightarrow 0 \; \Rightarrow \; \epsilon_i\rightarrow 0,
\;\; i = 2,\dots,N .
\end{equation}
\end{definition}
This definition essentially means that a vehicle's spacing error converges asymptotically to zero whenever the preceding vehicle operates at a constant velocity. Since (\ref{eq:MatrixForm}) is a linear time-invariant (LTI) system, this stability can be achieved according to Lemma~\ref{individualStabilityLemma}.
\begin{lemma}\label{individualStabilityLemma}
If conditions \eqref{CACCinvCod} hold for the CACC control system, then the system achieves bounded-input-bounded-output (BIBO) stability, and therefore individual vehicle stability is guaranteed.
\begin{subequations}\label{CACCinvCod}
\begin{equation}
k_{1}<0,
\end{equation}
\begin{equation}
k_{2}\leq -2 \sqrt{-k_{1}}.
\end{equation}
\end{subequations}
\end{lemma}
\begin{proof}
The proof is a standard BIBO argument. An LTI continuous-time system with state representation ($A,B,C$) is BIBO stable if and only if the eigenvalues of $A$ lie in the open left-half complex plane; in other words, $A$ is a Hurwitz matrix \cite{chen1984linear}.
\end{proof}
Similarly, the following conditions on the control gains must hold to achieve individual vehicle stability for the ACC control policy
\begin{subequations}\label{ACCinvCod}
\begin{equation}
k_{3}<0,
\end{equation}
\begin{equation}
k_{4}\leq -2 \sqrt{-k_{3}}.
\end{equation}
\end{subequations}
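The sign conditions above can be sanity-checked numerically (this is an illustration, not a proof). The sketch below computes the eigenvalues of the closed-loop matrix $A=\begin{bmatrix}0 & 1\\ k & k'\end{bmatrix}$ in pure Python; the gain values in the usage notes are assumed.

```python
# Numerical sanity check of the individual-stability sign conditions:
# A = [[0, 1], [k_pos, k_vel]] is Hurwitz when k_pos < 0 and k_vel < 0.
import cmath

def closed_loop_eigs(k_pos, k_vel):
    # roots of the characteristic polynomial s^2 - k_vel*s - k_pos
    disc = cmath.sqrt(k_vel ** 2 + 4.0 * k_pos)
    return (k_vel + disc) / 2.0, (k_vel - disc) / 2.0

def is_hurwitz(k_pos, k_vel):
    return all(s.real < 0.0 for s in closed_loop_eigs(k_pos, k_vel))
```

For instance, $k_1=-1$, $k_2=-2$ satisfies $k_2\leq -2\sqrt{-k_1}$ and yields a repeated eigenvalue at $-1$, while a positive velocity gain or a positive position gain destroys stability.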
\begin{remark}
Even if each system is analytically proven to be stable, the bounded solution resulting from bounded disturbances may violate practical constraints, leading to collisions. More details will be discussed in later sections.
\end{remark}
If the preceding vehicle is not operating at constant velocity (i.e., accelerating or braking), the spacing error $\epsilon_i$
is expected to be nonzero. Therefore it is important to make sure spacing errors are guaranteed not to amplify as they propagate down to the tail of the platoon.
\begin{definition}[String Stability \cite{rajamani2011vehicle}]\label{def:stringStablity}
Let's define the spacing error for the $i^{th}$ vehicle to be $\epsilon_i = x_i-x_{i-1}+L$. The system \eqref{eq:MatrixForm} is said to have string stability if the following condition holds:
\begin{equation}\label{stringStability}
\|\epsilon_i\|_\infty\leq \|\epsilon_{i-1}\|_\infty,
\;\; i = 2,\dots,N .
\end{equation}
\end{definition}
\begin{theorem}[\cite{rajamani2011vehicle}]
Let the spacing errors of consecutive vehicles be related by the transfer function $\hat{H}(s) = \frac{\epsilon_i}{\epsilon_{i-1}}$. The string stability condition \eqref{stringStability} holds, if
\begin{equation*}
\|\hat{H}(s)\|_\infty\leq 1,
\end{equation*}
\begin{equation*}
\hat{h}(t) > 0,
\end{equation*}
where $\hat{h}(t)$ is the impulse response of $\hat{H}(s)$. \end{theorem}
The string stability proofs for both systems are omitted due to space limitations. In summary, the ACC controller can achieve individual vehicle stability via proper controller tuning but fails to ensure string stability under a constant spacing policy, whereas the CACC controller achieves both when vehicle-to-vehicle communication is established.
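As an illustration of why predecessor-only control with a constant spacing policy cannot be string stable, consider the classical spacing-error transfer function $\hat H(s) = (k_v s + k_p)/(s^2 + k_v s + k_p)$ for predecessor-only feedback (see, e.g., \cite{rajamani2011vehicle}); its peak gain exceeds 1 for any positive $k_p$, $k_v$. The sketch below estimates $\|\hat H\|_\infty$ by a coarse frequency grid search; the gains in the usage note are assumed values.

```python
# Grid-search estimate of the H-infinity norm of the classical spacing-error
# transfer function H(s) = (k_v*s + k_p)/(s^2 + k_v*s + k_p) for
# predecessor-only, constant-spacing control. Gains are assumed.

def hinf_norm(k_p, k_v, n=20000, w_max=10.0):
    peak = 0.0
    for i in range(1, n + 1):
        s = 1j * (w_max * i / n)          # evaluate along the imaginary axis
        h = (k_v * s + k_p) / (s * s + k_v * s + k_p)
        peak = max(peak, abs(h))
    return peak
```

With $k_p=0.25$, $k_v=1$ the estimated peak gain is about $1.15 > 1$, so spacing errors amplify as they propagate down the platoon, violating Definition~2.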
\section{ATTACK MODEL} \label{Attack Model}
We consider a particular type of communication attack, namely a \textit{message falsification} attack as described in \cite{qayyum2020securing}. By continuously monitoring the communication network, the adversary may change the content of received messages and subsequently insert them back into the network. This type of attack can destabilise the vehicle platoon or even cause collisions.
Let $\mathcal{U}$ represent the set of vehicles that are affected by the attack. The state of an affected vehicle evolves as
\begin{equation} \label{eq:CACC_withAttack}
\begin{split}
\dot x_i(t) &= v_i(t),\\
\dot v_i(t) &= u_i(t)+\xi(t),\ \ i\in \mathcal{U},
\end{split}
\end{equation}
where $\xi(t)$ represents the intentional modification of communicated messages. Unlike a noise term that only moderately degrades system performance, an adversary could target specific platoon members and carry out stealthy, adaptive and aggressive attacks, compromising platoon safety.
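The effect of the attack term can be seen with a short forward-Euler sketch of the attacked dynamics: a constant falsification bias $\xi$ acts like an unmodeled acceleration, so the victim's velocity drifts linearly even with a zero nominal input. All numeric values here are assumed for illustration.

```python
# Forward-Euler sketch of the attacked double-integrator dynamics: a constant
# falsification bias xi acts like an unmodeled acceleration on the victim.
def simulate_attacked(xi, u=0.0, dt=0.01, T=5.0, x0=0.0, v0=0.0):
    x, v = x0, v0
    for _ in range(int(T / dt)):
        x += dt * v          # position update
        v += dt * (u + xi)   # velocity drifts under the bias
    return x, v
```

A bias of $\xi = 1\,\mathrm{m/s^2}$ sustained for 5\,s already adds about 5\,m/s of velocity error and over 12\,m of position error, which is why such attacks threaten spacing safety.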
\begin{assumption}\label{def:CommAttacknotAffectSensor}
Communication-related cyber-attacks do not alter the physical properties of individual vehicles, which means the integrity of the vehicle sensors and controllers are protected.
\end{assumption}
Assumption~\ref{def:CommAttacknotAffectSensor} clarifies our focus on communication-based attacks in this paper.
In comparison to attacks on actuators and sensors, \textit{message falsification} attacks do not directly alter the targeted physical systems but achieve malicious outcomes by modifying system inputs. Although the ACC controller shares a similar feedback structure (i.e., $A$ matrix structure) with CACC, the two controllers obtain other vehicles' dynamics information through different mechanisms (ACC relies on on-board sensors). Its immunity to \textit{message falsification} attacks, together with the fact that the physical system remains reliable under such attacks, makes the ACC controller an appropriate supplementary controller in an adversarial environment. However, \textit{recall that ACC cannot guarantee string stability, which limits its suitability for long-term deployment in a platoon}.
\section{CONTROLLER SWITCHING FOR ATTACK MITIGATION}\label{DefenseFramework}
In this section, we first present the overall structure of the dual-mode control system reconfiguration scheme and then derive conditions on the controller gains that lead to global uniform exponential stability (GUES) and string stability for the proposed system.
\begin{figure}[tb]
\centering
\includegraphics[width=0.9\columnwidth]{Figures/DuelSwitchingModule.pdf}
\caption{Structure of Upper Controller.}
\label{fig:UpperControllerStructure}
\end{figure}
\subsection{A Security Game-based Switched System}
Given the benefits and limitations of the ACC and CACC controllers discussed in Section~\ref{Attack Model}, we suggest using ACC as a secondary controller operating as a back-up whenever the communication network is deemed suspicious. In this way, the advantages of both controllers are retained while minimising the effects of cyber-attacks.
\begin{figure}[tb]
\centering
\includegraphics[width=0.8\columnwidth]{Figures/switchedsys1.pdf}
\caption{State space representation of the switched system.}
\label{fig:switchedspace}
\end{figure}
The overall structure of the improved upper controller is shown in Fig.~\ref{fig:UpperControllerStructure}, which delivers insight into attack detection and mitigation. The two controllers form a switched system whose switching decision comes from a novel game-theoretic analysis of the interactions between the attacker and the defender. Details of the game structure are discussed in Section~\ref{Game}. A dedicated state constraint further enhances safety against bounded but aggressive message modifications, for which the solution remains bounded but violates the practical constraint ($\epsilon_{max}$) that represents vehicles nearly crashing. The resulting state space is visualised in Fig.~\ref{fig:switchedspace}.
Formally, the switched system combined with the attack disturbance can be written in matrix form
\begin{subequations}\label{switchedSys}
\begin{equation}
\begin{split}
\dot z = A_p z +B_pR+M_p\mathbf{\xi}, \ \ p\in \mathcal{P},
\end{split}
\end{equation}
\begin{equation}
p = \begin{cases}
ACC, & \text{if $|x_i-x_{i-1}+L| \geq \epsilon_{max}$},\\
\sigma(t),& \text{otherwise},
\end{cases}
\end{equation}
\end{subequations}
where $\mathcal{P} = \{CACC, ACC\}$ is the index set of available subsystems (two control systems in our case), $\sigma(t): [0,\infty) \rightarrow \mathcal{P}$ is a piecewise-constant, right-continuous function generated by the security game that specifies the index of the active system at each time, $\mathbf{\xi}$ represents the attack effects, and $\{(A_p,B_p,M_p)\}_{p\in \mathcal{P}}$ is a set of state matrix triples with suitable dimensions for the different control systems. The state constraint $|x_i-x_{i-1}+L| < \epsilon_{max}$ is the aforementioned switching surface, which defines the collision avoidance boundary in Figs.~\ref{fig:UpperControllerStructure} and~\ref{fig:switchedspace}. Note that $M_{ACC}=0$ by Assumption~\ref{def:CommAttacknotAffectSensor}.
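The intended behaviour of the switched system can be sketched with a minimal two-vehicle simulation: the follower runs CACC, whose communicated input carries a falsification bias, until the spacing error reaches the safety surface, after which the attack-free ACC subsystem is latched and the error decays. The gains, bias and $\epsilon_{max}$ below are assumed values, not taken from this paper.

```python
# Minimal simulation sketch of the switched system: CACC (bias-corrupted)
# until the spacing error hits the safety surface eps_max, then ACC.
def simulate_switched(xi=4.0, eps_max=2.0, L=10.0, dt=0.01, T=30.0):
    x_lead, v_lead = 0.0, 20.0
    x, v = -L, 20.0                  # follower starts at the desired spacing
    alpha, beta = -1.0, -2.0         # stabilising gains (k < 0)
    mode, worst_err = "CACC", 0.0
    for _ in range(int(T / dt)):
        err = x - x_lead + L
        if abs(err) >= eps_max:
            mode = "ACC"             # state constraint forces sensor-only control
        u = alpha * err + beta * (v - v_lead)
        if mode == "CACC":
            u += xi                  # falsified communication affects CACC only
        x += dt * v
        v += dt * u
        x_lead += dt * v_lead
        worst_err = max(worst_err, abs(err))
    return mode, worst_err, abs(x - x_lead + L)
```

Without the switch, the bias would settle the spacing error at $\xi/|\alpha| = 4$\,m; with the safety surface, the error only slightly overshoots $\epsilon_{max}$ before ACC drives it back to zero.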
\subsection{Stability Analysis}
Stability analysis of such a switched system is necessary: even if both controllers satisfy the individual vehicle stability condition~\eqref{indStableCond}, unconstrained switching may destabilize such a switched system \cite{liberzon2003switching}.
\begin{definition}[Global Uniform Exponential Stability \cite{liberzon2003switching}]
A switched system has global uniform exponential stability (GUES) if there exists a positive constant $\delta>0$ such that for all switching signals $\sigma$ the solutions of (\ref{switchedSys}) with initial state $|z(0)|\leq \delta$ satisfy
\begin{equation}\label{switchedSys_stableCond}
|z(t)|\leq c|z(0)|e^{-\lambda t}, \ \ \forall t\geq 0,
\end{equation} for some $c,\lambda > 0$.
\end{definition}
\begin{theorem}
If conditions (\ref{commonV}) are satisfied, then all sub-systems of the platoon in (\ref{switchedSys}) share a radially unbounded common Lyapunov function, and therefore the switched system has global uniform exponential stability.
\end{theorem}
\begin{proof}
The proof is based on the results in Chapter 2 of \cite{liberzon2003switching}. It is natural to consider quadratic common Lyapunov functions of the form \eqref{quadraticLya} for switched linear systems,
\begin{equation}\label{quadraticLya}
V(z) = z^T P z, \ \ \ \ P=P^T>0.
\end{equation}
Assuming that \eqref{CACCinvCod} and \eqref{ACCinvCod} are satisfied, we have to find a positive definite symmetric matrix $P$ such that the inequality \eqref{Lyaineqs} is fulfilled
\begin{equation}\label{Lyaineqs}
A_p^TP+PA_p<0,\ \ \ \forall p\in \mathcal{P}.
\end{equation}
The inequality $M < N$ for two symmetric matrices $M$ and $N$ means that the matrix $M-N$ is negative definite. Note that if a matrix is positive (negative) definite, all its eigenvalues are positive (negative).
For the switched system consisting of $A_{CACC}$ \eqref{A:CACC} and $A_{ACC}$ \eqref{A:ACC}, $P=\left[
\begin{array}{cc}
p_{11} & p_{12} \\
p_{12} & p_{22} \\
\end{array}
\right]$ has to satisfy \eqref{commonV} to guarantee GUES.
\begin{subequations}\label{commonV}
\begin{align}
p_{11}>0, & \;\; p_{12}>0, \\
p_{22}&>\frac{p_{12}^2}{p_{11}},\\
k_1<0, & \;\; k_{3}<0,
\end{align}
\\[-2.75em]
\begin{align}
k_2&>\frac{-2 p_{12} \sqrt{k_1-\frac{k_{1} p_{11} p_{22}}{p_{12}^2}}+k_{1} p_{22}-p_{11}}{p_{12}},\\
k_{2}&<\frac{2 p_{12} \sqrt{k_{1}-\frac{k_{1} p_{11} p_{22}}{p_{12}^2}}+k_{1} p_{22}-p_{11}}{p_{12}},\\
k_{4}&>\frac{-2 p_{12} \sqrt{k_{3}-\frac{k_{3} p_{11} p_{22}}{p_{12}^2}}+k_{3} p_{22}-p_{11}}{p_{12}},\\
k_{4}&<\frac{2 p_{12} \sqrt{k_{3}-\frac{k_{3} p_{11} p_{22}}{p_{12}^2}}+k_{3} p_{22}-p_{11}}{p_{12}}.
\end{align}
\end{subequations}
\end{proof}
\begin{remark}
The conditions in \eqref{commonV} are sufficient but not necessary for GUES. Common Lyapunov functions other than the quadratic form \eqref{quadraticLya} may exist and may lead to other sufficient conditions.
\end{remark}
String stability of vehicular platooning is important for long-term stable platoon operation. However, as discussed in Section~\ref{BasicStability}, the sensor-based ACC control policy fails to guarantee this property. Due to erroneous detection results (e.g., false alarms), the switching signal $\sigma(\cdot)$ generated from the security game may initiate the control system reconfiguration process even if the vehicle is not exposed to an attack. Therefore, we present a lower bound on the dwell time $\tau_n$ of the string-stable controller CACC that retains platoon string stability of the switched system in an attack-free environment. This constraint is updated dynamically at the beginning of each interval on which $\sigma=CACC$, based on the current system states.
\begin{theorem}\label{StringinBenignEnv}
Consider a dual-mode control system reconfiguration scheme that consists of a control policy (e.g., CACC) which guarantees platoon string stability and a control policy (e.g., ACC) which cannot guarantee string stability. Assume CACC is globally exponentially string stable with a Lyapunov function $V$ satisfying
\begin{equation}\label{eq:radiallyUnbounded}
a\abs{z}^2\leq V(z)\leq b\abs{z}^2,
\end{equation}
and \begin{equation}\label{eq:negDefinite}
\dot V(z)\leq -c\abs{z}^2,
\end{equation}
for some positive constants $a$, $b$ and $c$.
Suppose the switching signal $\sigma$ alternately chooses CACC on $[t_n, t_{n+1})$ and ACC on $[t_{n+1},t_{n+2})$, where $n\in\mathbb{Z}_{\geq 0}$, repeating infinitely many times. The switched system \eqref{switchedSys} guarantees platoon string stability if the dwell time $\tau_n$ of the stable system CACC satisfies
\begin{equation}\label{eq:switchingConstraint}
\tau_n > \frac{1}{\lambda}\log \abs{z(t_n)},
\end{equation}
where $\lambda = \frac{c}{2b}$.
\end{theorem}
\begin{proof}
Combining \eqref{eq:radiallyUnbounded} and \eqref{eq:negDefinite}, we have
\begin{equation*}
\dot V(z)\leq -2\lambda V(z),
\end{equation*} where $\lambda = \frac{c}{2b}$.
This leads to the inequality
\begin{equation}\label{eq:LyaChangedForm}
V(z(t_n+\tau_n))\leq e^{-2\lambda \tau_n}V(z(t_n)),
\end{equation} provided that $\sigma(t) = CACC$ for $t\in [t_n, t_n+\tau_n)$.
\noindent
Suppose $\delta = \|\epsilon_{i-1}\|_\infty -\|\epsilon_i\|_\infty$ is the worst-case disturbance amplification as the disturbance propagates down to the tail of the platoon, with $\|\epsilon_i\|_\infty$ as in Definition~\ref{def:stringStablity}.
To compute an explicit lower bound on $\tau_n$ to guarantee string stability for the switched system, it is sufficient to ensure that
\begin{equation}\label{eq:essenseLyaCond}
V(z(t_n))>V(z(t_{n+2})).
\end{equation}
Note that $V(z(t_n+\tau_n)) = V(z(t_{n+1}))$ and $V(z(t_{n+1})+\delta) = V(z(t_{n+2}))$ because the Lyapunov function is continuous.
Therefore, it is equivalent to satisfy
\begin{equation}\label{eq:interStep1}
V(z(t_n))-V(z(t_n+\tau_n))> V(z(t_{n+1})+\delta)-V(z(t_{n+1})).
\end{equation}
From \eqref{eq:LyaChangedForm}, we have
\begin{equation*}
V(z(t_n))-V(z(t_n+\tau_n)) \geq V(z(t_n))-e^{-2\lambda \tau_n}V(z(t_n)).
\end{equation*}
Then, \eqref{eq:interStep1} will hold, if we have
\begin{equation*}
V(z(t_n))+V(z(t_{n+1}))>e^{-2\lambda \tau_n}V(z(t_n))+V(z(t_{n+1})+\delta).
\end{equation*}
By virtue of \eqref{eq:radiallyUnbounded}, we have
\begin{subequations}
\begin{equation*}
b\abs{z(t_n)}^2+b\abs{z(t_{n+1})}^2 \geq V(z(t_n))+V(z(t_{n+1})).
\end{equation*}
\begin{equation*}
\scalebox{0.9}{$e^{-2\lambda \tau_n}V(z(t_n))+V(z(t_{n+1})+\delta) \geq a e^{-2\lambda \tau_n} \abs{z(t_n)}^2+a\abs{z(t_{n+1})+\delta}^2$}.
\end{equation*}
\end{subequations}
Then, all we need to have is
\begin{equation*}
b\abs{z(t_n)}^2+b\abs{z(t_{n+1})}^2 >a e^{-2\lambda \tau_n} \abs{z(t_n)}^2,
\end{equation*} which can be rewritten as
\begin{equation*}
\tau_n > \frac{1}{2\lambda}\log{\frac{a\abs{z(t_n)}^2}{b(\abs{z(t_{n+1})}^2+\abs{z(t_n)}^2)}}.
\end{equation*}
This immediately yields the following lower bound on dwell time in
CACC:
\begin{equation*}
\tau_n > \frac{1}{\lambda}\log \abs{z(t_n)}.
\end{equation*}
\end{proof}
Note that the assumption of exponential stability may be justified in certain cases, as discussed in \cite{feng2019string}. Estimates other than the quadratic ones in \eqref{eq:radiallyUnbounded} and \eqref{eq:negDefinite} can be used instead; in essence, all we need is for \eqref{eq:essenseLyaCond} to hold at all switching times. We observe that the further the system state deviates from the equilibrium point, the larger $\tau_n$ should be, meaning that the CACC controller should be activated for a longer time to compensate for the negative effects of ACC on string stability. Moreover, if CACC can be tuned to converge faster, i.e., if $c$ is large, then $\tau_n$ can be smaller.
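The dwell-time rule can be packaged as a small online helper. The Lyapunov constants $b$ and $c$ below are assumed, since no numeric values are fixed in the text; for states with norm at most 1 the logarithm is non-positive and the bound is vacuous, so it is clamped at zero.

```python
# Helper implementing the dwell-time lower bound: tau_n must exceed
# (1/lambda) * log|z(t_n)| with lambda = c/(2b). Constants b, c are assumed.
import math

def min_dwell_time(z_norm, b=2.0, c=1.0):
    lam = c / (2.0 * b)
    # bound is vacuous (non-positive) for |z| <= 1, so clamp at zero
    return math.log(z_norm) / lam if z_norm > 1.0 else 0.0
```

As the discussion above suggests, the bound grows with the state deviation $|z(t_n)|$ and shrinks when CACC converges faster (larger $c$ relative to $b$).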
\begin{remark}
The switched system may not guarantee string stability when the communication-based controller CACC is compromised in an adversarial environment. In this case, the proposed switched system can be seen as an emergency measure to prevent a collision. Providing a switching function that also guarantees string stability for this case is a research direction for future work.
\end{remark}
\subsection{Numerical Example}
Suppose the state matrices \eqref{A:CACC} and \eqref{A:ACC} of the two control systems take the following control gains:
\begin{subequations}
\begin{equation*}
A_{CACC} = \left[
\begin{array}{cc}
0 & 1 \\
k_1 & k_2 \\
\end{array}
\right] = \left[
\begin{array}{cc}
0 & 1 \\
-1.58 & -2.51 \\
\end{array}
\right],
\end{equation*}
\begin{equation*}
A_{ACC\ } = \left[
\begin{array}{cc}
0 & 1 \\
k_3 & k_4 \\
\end{array}
\right] = \left[
\begin{array}{cc}
0 & 1 \\
-0.25 & -1 \\
\end{array}
\right]. \end{equation*}
\end{subequations}
A suitable symmetric positive definite matrix that satisfies \eqref{Lyaineqs} is
\begin{equation*}
P = \left[
\begin{array}{cc}
1 & 0.154297 \\
0.154297 & 1.57813 \\
\end{array}
\right].
\end{equation*}
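This claim can be verified numerically. The pure-Python sketch below checks, for the gains and $P$ above, that $P$ is positive definite and that $A_p^TP+PA_p$ is negative definite for both subsystems, using the standard 2$\times$2 criterion (leading entry negative, determinant positive).

```python
# Pure-Python verification of the numerical example: P is positive definite
# and A_p^T P + P A_p is negative definite for both subsystems.
def lyap_residual(k_pos, k_vel, p11, p12, p22):
    # Q = A^T P + P A for A = [[0, 1], [k_pos, k_vel]], P = [[p11,p12],[p12,p22]]
    q11 = 2.0 * k_pos * p12
    q12 = k_pos * p22 + p11 + k_vel * p12
    q22 = 2.0 * (p12 + k_vel * p22)
    return q11, q12, q22

def is_neg_definite(q11, q12, q22):
    # 2x2 symmetric matrix: negative definite iff q11 < 0 and det(Q) > 0
    return q11 < 0.0 and q11 * q22 - q12 * q12 > 0.0

p11, p12, p22 = 1.0, 0.154297, 1.57813
cacc_ok = is_neg_definite(*lyap_residual(-1.58, -2.51, p11, p12, p22))
acc_ok = is_neg_definite(*lyap_residual(-0.25, -1.0, p11, p12, p22))  # both True
```

Both checks pass, confirming that this $P$ is a common quadratic Lyapunov matrix for the example gains.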
\section{SECURITY GAME FORMULATION} \label{Game}
We propose a non-cooperative cybersecurity defense game played between the \textit{Attacker}, the anomaly detector and the \textit{Defender} to guide the controller switching process in an online fashion. By \textit{Attacker} and \textit{Defender} we mean the malicious adversary and the unit that generates switching signals, respectively. Note that both players' actions and the detection reports are represented as edges, and the resulting states as nodes, in the game tree in Fig.~\ref{fig:extensiveGame}. The game begins with the Attacker choosing whether or not to attack the vehicle platoon, represented by the leftmost branches in red. If the Attacker chooses to attack, it performs the \textit{message falsification} attack modelled by \eqref{eq:CACC_withAttack}.
The notion of ``detector'' is kept very general so that our game structure is applicable to various types of anomaly detection approaches, with the understanding that the design process of any detector cannot perfectly anticipate all complex real-world situations and attack models, leading to a certain probability of error.
\begin{definition}[Chance node]
A chance node can be seen as a fictitious player who performs actions according to a probability distribution.
\end{definition}
We use chance nodes to model the uncertainty about detection results, as highlighted in green in Fig.~\ref{fig:extensiveGame}. In a real-world deployment, these prior beliefs should be continually updated based on detection results.
\begin{definition}[Information set]
An information set of player $i$ is a collection of player $i$'s nodes among which $i$ cannot distinguish.
\end{definition}
Once the Defender obtains the detection results, it is unclear whether an actual attack has been performed or not. This unique situation is modeled by \textit{information sets} shown as dashed lines in the figure. There are two information sets for the Defender based on the detection reports: one indicating an attack and the other for no attack. This means the Defender must consider the consequences of both an actual attack having occurred, and no attack having occurred, when an attack has been reported by the detector.
Lastly, after considering a rational Attacker's action and the chances of detection errors, the Defender decides whether to downgrade the CACC controller to the ACC controller or remain with the CACC controller, for example, in the case when an attack is reported with low but non-zero probability. The dual-mode control system reconfiguration scheme leverages the Defender's decisions in the game as the switching signal $\sigma$ of the switched control system. As shown in green in Fig.~\ref{fig:switchedspace}, the solution of the game activates one of the subsystems (i.e. ACC or CACC) which in turn generates new state values for another game before the next switching is required.
Formally, we model the game with
\begin{itemize}
\item Attacker's action space $\mathpzc{A}^A:=\ $\{$a$: engage an attack; $na$: do not attack\}
\item Chance nodes (anomaly detector) $C:=\ $\{$r$: report an attack; $nr$: do not report an attack\}
\item Defender's action space $\mathpzc{A}^D:=\ $\{$d$: switch to ACC; $nd$: remain with CACC\}
\end{itemize}
The strategy profile is modeled as $\langle a, c, d \rangle$ for $a\in \mathpzc{A}^A$, $c\in C$ and $d\in \mathpzc{A}^D$.
The utility values for the Attacker and the Defender are denoted as $[(R_1^A, R_1^D), \dots, (R_8^A, R_8^D)]$, which can be chosen to reflect specific vehicle platoon security trade-offs and risks.
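To make the Defender's reasoning concrete, the following is a hedged illustration (not the extensive-form equilibrium computation used in this paper): form the Bayes posterior of an attack given the detector report, then compare the expected utilities of downgrading versus staying. The prior, detection rates and utility values below are assumed for illustration only.

```python
# Hedged Bayesian sketch of the Defender's reaction to a detector report.
# Prior, detection rates (tpr/fpr) and utilities are illustrative assumptions.
def posterior_attack(prior, report, tpr=0.7, fpr=0.1):
    p_r_a = tpr if report else 1.0 - tpr      # P(report | attack)
    p_r_na = fpr if report else 1.0 - fpr     # P(report | no attack)
    return p_r_a * prior / (p_r_a * prior + p_r_na * (1.0 - prior))

def defender_best_response(prior, report, u_d_attack=-1.0, u_d_clean=-2.0,
                           u_nd_attack=-10.0, u_nd_clean=0.0):
    p = posterior_attack(prior, report)
    eu_d = p * u_d_attack + (1.0 - p) * u_d_clean     # d: downgrade to ACC
    eu_nd = p * u_nd_attack + (1.0 - p) * u_nd_clean  # nd: stay on CACC
    return ("d" if eu_d > eu_nd else "nd"), p
```

Under these assumed numbers, a high attack prior combined with a positive report makes downgrading the clear best response, while a low prior and a negative report favour staying on CACC; the mixed equilibrium computed in the example below refines this logic by also modelling a strategic Attacker.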
\subsection{Numerical Example}
\begin{figure}[bp]
\centering
\includegraphics[width=\linewidth]{Figures/CDC_gameTree.pdf}
\caption{An example game model: the attacker's actions are in red, detection results are in green, and defender's controller switching decisions are in blue.}
\label{fig:extensiveGame}
\end{figure}
One instance of the game is shown graphically in Fig.~\ref{fig:extensiveGame}. The utilities of each strategy profile are highlighted at the leaves of the game tree, in red for the Attacker and blue for the Defender. The example anomaly detector correctly reports benign data $90\%$ of the time when there is no attack and raises false alarms the remaining $10\%$ of the time; when an attack has truly occurred, it correctly reports the attack $70\%$ of the time and misses it the remaining $30\%$ of the time. Note that these values should be calibrated to the deployed anomaly detector in each problem setting.
We use the popular open-source game solver \textit{Gambit} \cite{mckelvey2006gambit} to find Nash equilibrium solutions. A unique mixed-strategy equilibrium is numerically computed; the probability distribution over each player's actions is shown under the edges in the figure. For example, with the above utilities the Attacker chooses to attack with probability $82.98\%$, and even when the detector reports no attack, the Defender still downgrades to the sensor-based ACC controller with probability $13.04\%$ to account for the high attack intention and the imperfect detection results.
\begin{figure}[t]
\centering
\begin{subfigure}[b]{0.45\linewidth}
\centering
\includegraphics[width=\linewidth]{Figures/WebtosWithoutDefense.png}
\caption{Without attack mitigation}
\label{fig:noD}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.45\linewidth}
\centering
\includegraphics[width=\linewidth]{Figures/WebtosWithDefense.png}
\caption{With attack mitigation}
\label{fig:withD}
\end{subfigure}
\hfill
\caption{Simulation results of a vehicle platoon of size $4$ under message falsification attack, showing a crash without attack mitigation (a) and crash prevention (b).}
\label{fig:simulation}
\end{figure}
\subsection{Simulation Example}
The effectiveness of the proposed dual-mode control system reconfiguration scheme can be further demonstrated by comparing simulation results as in Fig.~\ref{fig:simulation}. The simulations are conducted in a sophisticated simulator \textit{Webots} \cite{Webots}, which supports realistic simulation of traffic flow, the ambient environmental impact and the vehicle's physical properties (e.g., motor torques, body mass, suspension etc.). In the simulation, the communication channel of the blue \textit{BMW X5} vehicle is compromised. If there is no attack mitigation, malicious message modification would cause the vehicle to accelerate and collide with its predecessor as shown in Fig.~\ref{fig:noD}. With the proposed approach, the attack effects could be significantly reduced. Instead of a collision, the adversary could at most cause the vehicle to reduce the inter-vehicle distance. After reaching the minimum possible inter-vehicle distance as seen in Fig.~\ref{fig:withD}, the scheme is able to guide the vehicle to regain its desired inter-vehicle distance and relative velocity.
\begin{remark}
The Nash equilibrium solution is highly dependent on the utility values of both players and the detection probabilities of the detector. Therefore, it is critical that these values properly reflect the trade-offs and risks in terms of platoon security. A detailed design of a machine learning based anomaly detector and a set of utility functions that fulfills these requirements is ongoing research.
\end{remark}
\section{CONCLUSION}\label{conclusion}
Although V2V communication empowers vehicle platoons with improved operational stability, the resulting high level of connectivity and openness may attract communication-based cyber-physical attacks. Consequently, we have investigated a controller reconfiguration scheme to mitigate attack effects and thereby enhance system safety in an adversarial environment. This paper constitutes a first attempt to use security games to guide the switching process in the context of switched systems, investigating the interactions between an intelligent attacker and a defender in possession of an imperfect detector. Two common controllers for autonomous vehicle platooning, CACC and ACC, have been carefully analysed. A sliding surface based on a prefixed state constraint highlights the inadequacy of common stability definitions
in this context and further guarantees system safety. A minimum dwell time constraint is derived to ensure string stability of the switched system in a benign environment. While the presented approach
is effective for attack mitigation in short-term operation, we cannot ensure string stability in an adversarial environment, which would possibly require
novel control designs beyond CACC or ACC.
Hence, an interesting open problem motivated by our work is to ascertain whether a switching signal can achieve string stability under communication-based attacks (e.g., by finding probabilistic string stability guarantees based on the characteristics of the imperfect anomaly detector). Moreover, additional simulation studies that take the low-level controller and vehicle models into account could further support the usefulness and reliability of the approach.
\section{ACKNOWLEDGMENTS}
We gratefully acknowledge support from the DSTG Next Generation Technology Fund and CSIRO Data61 CRP on `Adversarial Machine Learning for Cyber', and CSIRO Data61 PhD scholarship.
\bibliographystyle{ieeetr}
|
1,116,691,501,238 | arxiv | \section{Introduction}
Data poisoning \cite{biggio2012poisoning,jagielski2018manipulating,shafahi2018poison,steinhardt2017certified,zhu2019transferable}
is a training-time attack where the attacker is assumed to have access to the training data on which the victim will train the model.
The attacker can modify the training data
in a manner that the model trained on this poisoned data performs as the attacker desires.
The data-hungry nature of modern machine learning methods makes them vulnerable to poisoning attacks. Attackers can place poisoned data online and wait for it to be scraped by victims trying to increase the size of their training sets.
Another easy target for data poisoning is data collection by crowdsourcing, where malicious users can corrupt the data they contribute. In most cases, an attacker can modify only certain parts of the training data, such as the features or labels of a specific class, or a small subset of the data from all classes.
In this work, we assume the attacker wants to affect the performance of the victim's models on a target class and modifies only the features of the points belonging to that class (without affecting the labels). To evade detection, the attacker is constrained to only add imperceptibly small perturbations to the points of the target class.
Many previous works \cite{munoz2017towards,shafahi2018poison,huang2020metapoison,koh2017understanding,zhu2019transferable,chen2017targeted,ji2017backdoor,turner2018clean} have shown the effectiveness of poisoning in affecting the accuracy of models trained on poisoned data compared to the accuracy achievable by training with clean data. In most works, the victim is assumed to use standard training by minimizing the empirical loss on the poisoned data to train the models and thus the attack is optimized to hurt the accuracy of standard training.
However, recent research on test-time evasion attacks \cite{carlini2017adversarial,athalye2018obfuscated,uesato2018adversarial,bulusu2020anomalous} suggests that models trained with standard training are not robust to adversarial examples, making the assumption that the victim relies on standard training to obtain models for deployment questionable.
\begin{figure*}[tb]
\centering{\includegraphics[width=0.99\textwidth]{images/overview_c.pdf}}
\caption{Overview of our poisoning against certified defenses (PACD) attack, which generates poisoned data to reduce the certified robustness of the victim's model trained with methods such as Gaussian data augmentation (GA)~\cite{cohen2019certified}, SmoothAdv~\cite{salman2019provably} and MACER~\cite{zhai2020macer} on a target class.
}
\label{fig:overview}
\end{figure*}
Thus, in a realistic scenario, where the aim of the victim is to deploy the model, it is better to assume that the victim will rely on training procedures that yield classifiers which are provably robust to test-time attacks.
Several recent works have proposed methods for training certifiably robust models whose predictions are guaranteed to be constant in a neighbourhood of a point. However, many of these methods \cite{raghunathan2018semidefinite,gowal2018effectiveness,huang2019achieving,xu2020automatic} do not scale to deep neural networks or large datasets, due to their high complexity. Moreover, the effect of training data quality on the performance of these certified defenses at test time remains largely unexplored.
Recently, randomized smoothing (RS) based certification methods \cite{lecuyer2019certified,li2019certified,cohen2019certified} were shown to be scalable to deep neural networks and high dimensional datasets enabling researchers to propose training procedures \cite{salman2019provably,zhai2020macer} that lead to models with high certified robustness.
Thus, we assume that a victim will rely on RS based certification methods to measure the certified robustness and use RS based training procedures to train the models.
The fact that a victim can train with a method that improves certified adversarial robustness is an immediate challenge for current poisoning attacks which optimize the poison data to affect the accuracy of models trained with standard training.
Table~\ref{Table:difficulty_of_poisoning} shows that poisons optimized against standard training can significantly reduce the accuracy of the victim's model (left to right) when the victim also uses standard training (1st and 5th row). However, this poison data is rendered ineffective when the victim uses a certifiably robust training method such as \cite{cohen2019certified,salman2019provably,zhai2020macer}.
\emph{Are certified defenses robust to data poisoning?}
We study this question and demonstrate that data poisoning is a serious concern even for certified defenses.
\begin{table}
\caption{Failure of traditional data poisoning attacks optimized against standard training in affecting the test accuracy (of the target class) of models trained with certifiably robust training procedures. Details of the experiment are presented in Appendix~\ref{app:standard_acc}. Certifiably robust training methods \cite{cohen2019certified,salman2019provably,zhai2020macer} are trained with $\sigma$ = 0.25 and the accuracy of their base classifiers is reported.
}
\label{Table:difficulty_of_poisoning}
\centering
\small
\resizebox{0.9\columnwidth}{!}{
\begin{tabular}{c|c|c|c}
\toprule
&\multirow{1}{*}{\makecell{Training method }} & \multicolumn{1}{c|}{\makecell{Model trained \\ on clean data}} & \multicolumn{1}{c}{\makecell{Model trained \\ on poison data}} \\
\midrule
\multirow{4}{*}{\STAB{\rotatebox[origin=c]{90}{MNIST}}} &
Standard & 99.28$\pm$0.01 & 60.08$\pm$12.6 \\
&\makecell{GA\cite{cohen2019certified}} & 98.99$\pm$0.14 & 98.31$\pm$1.65\\
&SmoothAdv \cite{salman2019provably}& 99.18$\pm$0.23 & 99.31$\pm$0.29\\
&MACER \cite{zhai2020macer}& 99.21$\pm$0.56 & 98.31$\pm$0.58\\
\midrule
\midrule
\multirow{4}{*}{\STAB{\rotatebox[origin=c]{90}{CIFAR10}}} &
Standard & 92.71$\pm$1.31 & 0.36$\pm$0.37 \\
&\makecell{GA \cite{cohen2019certified}} & 88.84$\pm$2.39 & 88.38$\pm$2.13\\
&SmoothAdv \cite{salman2019provably} & 79.48$\pm$2.69 & 74.95$\pm$3.45\\
&MACER \cite{zhai2020macer}& 87.12$\pm$1.17 & 88.54$\pm$4.52\\
\bottomrule
\end{tabular}
}
\end{table}
We propose a novel data poisoning attack that can significantly compromise the certified robustness guarantees achievable from training with robust training procedures. We formulate the Poisoning Against Certified Defenses (PACD) attack as a constrained bilevel optimization problem and theoretically analyze its solution for the case when the victim uses linear classifiers. Our theoretical analysis and empirical results suggest that the decision boundary of the smoothed classifier (used for RS) learned from the poisoned data is significantly different from the one learned from clean data, thereby causing a reduction in certified radius.
Our bilevel optimization based attack formulation is general since it can generate poisoned data against a model trained with any certifiably robust training method (lower-level problem) and certified with any certification procedure (upper-level problem). Fig.~\ref{fig:overview} shows the overview of the proposed PACD attack.
Unlike previous poisoning attacks that aim to reduce the accuracy of the models on a small subset of data, our attack can reduce the certified radius of an entire target class.
The poison points generated by our attack have clean labels and imperceptible distortion making them difficult to detect.
The poison data remains effective when the victim trains the models from scratch or uses data augmentation or weight regularization during training. Moreover, the attack points generated against a certified defense are transferable to models trained with other RS based certified defenses and to models with different architectures.
This highlights the importance of training-data quality and curation for obtaining meaningful gains from certified defenses at test time, a factor not considered by current certified defense research.
Our main contributions are as follows:
\begin{itemize}
\item We study the problem of using data poisoning attacks to affect the robustness guarantees of classifiers trained using certified defense methods. To the best of our knowledge, this is the first clean label poisoning attack that significantly reduces the certified robustness guarantees of the models trained on the poisoned dataset.
\item We propose a bilevel optimization based attack which can generate poison data against several robust training and certification methods. We specifically use the attack to highlight the vulnerability of randomized smoothing based certified defenses to data poisoning.
\item We demonstrate the effectiveness of our attack in reducing the certifiable robustness obtained using randomized smoothing on models trained with state-of-the-art certified defenses \cite{cohen2019certified,salman2019provably,zhai2020macer}. Our attack reduces the ACR of the target class by more than 30\%.
\end{itemize}
\vspace{-0.3cm}
\section{Background and related work}
{\bf Randomized smoothing:}
The RS procedure \cite{cohen2019certified} uses a smoothed version of the original classifier $f:\mathbb{R}^d \xrightarrow{} \mathcal{Y}$ and certifies the adversarial robustness of the new classifier. The smoothed classifier, $g(x) = \arg \max_c \mathbb{P}_{\eta \sim \mathcal{N}(0, \sigma^2I)}(f(x+\eta)=c)$, assigns $x$ the class whose decision region $\{x' \in \mathbb{R}^d: f(x') = c\}$ has the largest measure under the distribution $\mathcal{N}(x, \sigma^2I)$, where $\sigma$ is used for smoothing. Suppose that while classifying points drawn from $\mathcal{N}(x, \sigma^2I)$, the original classifier $f$ returns the class $c_A$ with probability $p_A = \mathbb{P}(f(x + \eta) = c_A)$, and the ``runner-up'' class $c_B$ is returned with probability $p_B = \max_{c \neq c_A} \mathbb{P}(f(x + \eta) = c)$; then the prediction of the point $x$ under the smoothed classifier $g$ is robust within the radius $r(g;\sigma) = \frac{\sigma}{2}(\Phi^{-1}(p_A) - \Phi^{-1}(p_B)),$ where $\Phi^{-1}$ is the inverse CDF of the standard normal distribution. In practice, Monte Carlo sampling is used to estimate a lower bound on $p_A$ and an upper bound on $p_B$, as it is difficult to estimate the actual values of $p_A$ and $p_B$.
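The certification step described above can be sketched in a few lines. The following is a simplified Monte Carlo estimate that, for brevity, uses plain empirical frequencies in place of the one-sided confidence bounds of the actual CERTIFY procedure of \cite{cohen2019certified}; the threshold classifier at the end is an illustrative assumption, not from any experiment.

```python
from statistics import NormalDist
import random

def certify_smoothed(f, x, sigma, n_samples=1000, seed=0):
    # Monte Carlo estimate of the smoothed prediction and certified radius.
    # The actual CERTIFY procedure replaces the plain frequencies below with
    # one-sided confidence bounds; we use empirical estimates for brevity.
    rng = random.Random(seed)
    counts = {}
    for _ in range(n_samples):
        noisy = [xi + rng.gauss(0.0, sigma) for xi in x]
        c = f(noisy)
        counts[c] = counts.get(c, 0) + 1
    ranked = sorted(counts.items(), key=lambda kv: -kv[1])
    c_a, n_a = ranked[0]
    n_b = ranked[1][1] if len(ranked) > 1 else 0
    p_a = min(n_a / n_samples, 1.0 - 1e-6)   # clamp away from 0 and 1 so
    p_b = max(n_b / n_samples, 1e-6)         # that the inverse CDF is finite
    if p_a <= 0.5:
        return None, 0.0                     # abstain: no confident winner
    inv = NormalDist().inv_cdf
    return c_a, 0.5 * sigma * (inv(p_a) - inv(p_b))

# A 1-D threshold classifier: a point far from the boundary at 0 gets a
# large certified radius under smoothing with sigma = 0.25.
label, radius = certify_smoothed(lambda v: int(v[0] >= 0), [1.0], 0.25)
```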
Since standard training of the base classifier does not achieve high robustness guarantees, \cite{cohen2019certified} proposed to use GA based training in which the base classifier is trained on Gaussian noise corruptions of the clean data. Recent works \cite{zhai2020macer,salman2019provably} showed that the certified robustness guarantees of RS can be boosted by using different training procedures. In particular, \cite{salman2019provably} proposed to train the base classifier using adversarial training where the adversarial examples are generated against the smoothed classifier.
Although effective at increasing the certified radius, the method can be slow to train due to the requirement of generating adversarial examples against the smoothed classifier at every step. Another recent work \cite{zhai2020macer} proposed a different training procedure which is significantly faster to train and relies on directly maximizing the certified radius for achieving high robustness guarantees. Due to their effectiveness in improving the certified robustness guarantees of machine learning models, we craft poison data against these methods. A recent attack method \cite{ghiasi2020breaking} showed that it is possible to fool a robust classifier into mislabeling an input and producing an incorrect certificate using a perturbation with large $\ell_p$ norm at test time.
Our work is different since we focus on train-time attacks against certified defenses using imperceptibly small perturbations to the poison data.
{\bf Bilevel optimization:}
A bilevel optimization problem has the form $\min_{u \in \mathcal{U}} \xi(u,v^*)\;\mathrm{s.t.}\;v^* = \arg\min_{v\in \mathcal{V}(u)}\;\zeta(u,v)$, where the upper-level problem is a minimization problem with $v$ constrained to be the optimal solution to the lower-level problem (see \cite{bard2013practical}). Our data poisoning attack is a constrained bilevel optimization problem. Although general bilevel problems are difficult to solve, under some simplifying assumptions their solution can be obtained using gradient based methods. Several methods for solving bilevel problems in machine learning have been proposed previously \cite{domke2012generic,pedregosa2016hyperparameter,franceschi2017forward,maclaurin2015gradient,shaban2018truncated,mehra2019penalty} (See Appendix~\ref{app:approxgrad} for an overview). We use the method based on approximating the hypergradient by approximately solving a linear system (ApproxGrad Alg.~\ref{alg:approxgrad} in Appendix~\ref{app:approxgrad}) in this work.
Previous works \cite{mei2015using,munoz2017towards,mehra2019penalty,huang2020metapoison,carnerero2020regularisation} have shown the effectiveness of solving bilevel optimization problem for data poisoning to affect the accuracy of models trained with standard training. Our work on the other hand proposes a bilevel optimization based formulation to generate a data poisoning attack against RS based certified defenses and shows its effectiveness against state-of-the-art robust training methods.
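As a concrete illustration of the hypergradient that ApproxGrad approximates, consider a scalar bilevel problem where the implicit function theorem gives everything in closed form, $\mathrm{d}v^*/\mathrm{d}u = -[\nabla^2_{vv}\zeta]^{-1}\nabla^2_{vu}\zeta$. The problem and step size below are chosen for clarity only and are not taken from any experiment.

```python
def hypergradient(u, a=2.0):
    # Bilevel toy: upper xi(v) = 0.5*(v - 1)^2,
    # lower zeta(u, v) = 0.5*(v - a*u)^2  =>  v*(u) = a*u.
    # Implicit function theorem:
    #   dv*/du = -(d2zeta/dv2)^{-1} * d2zeta/dvdu = -(1)^{-1} * (-a) = a
    v_star = a * u
    dxi_dv = v_star - 1.0
    dv_du = a
    return dxi_dv * dv_du            # dxi/du = a * (a*u - 1)

# Gradient descent on u drives the lower-level solution v*(u) toward the
# upper-level target 1, i.e. u converges to 1/a = 0.5.
u = 0.0
for _ in range(50):
    u -= 0.2 * hypergradient(u)
```

In the full attack, $u$ is the poison data, $v$ the network weights, and the inverse Hessian-vector product is approximated by solving a linear system, as in ApproxGrad.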
\section{Poisoning against certified defenses}
Here we present the bilevel formulation of our PACD attack for generating poisoned data to compromise the certified robustness guarantees of the models trained using certified defenses. Specifically, we discuss how to generate poison data against GA \cite{cohen2019certified}, SmoothAdv \cite{salman2019provably} and MACER \cite{zhai2020macer} and affect the certified robustness guarantees obtained using RS.
\subsection{General attack formulation}
Let $\mathcal{D^\mathrm{clean}} = \{(x_i^\mathrm{clean}, y_i^\mathrm{clean})\}_{i=1}^{N_{\mathrm{clean}}}$
be the clean, unalterable portion of the training set. Let $u=\{u_1,...,u_n\}$ denote the attacker's poisoning data which is added to the clean data.
For a clean-label attack, we require that each poison example $u_i$ has a limited perturbation, for example, $\|u_i - x_i^\mathrm{base}\| = \|\delta_i\| \leq \epsilon$, from the base data $x_i^\mathrm{base}$ and has the same label $y_i^\mathrm{base}$, for $i=1,...,n$. Thus $\mathcal{D^\mathrm{poison}} = \{(u_i, y_i^\mathrm{base})\}_{i=1}^{N_{\mathrm{poison}}}$.
The goal of the attacker is to find $u$ such that when the victim uses $\mathcal{D^\mathrm{clean}} \bigcup \mathcal{D^\mathrm{poison}}$ to train a classifier,
the certified robustness guarantees of the model on the target class ($\mathcal{D^\mathrm{val}} = \{(x_i^\mathrm{val},y_i^\mathrm{val})\}_{i=1}^{N_\mathrm{val}}$) are significantly diminished compared to a classifier trained on clean data.
The attack can be formulated as follows:
\begin{equation}
\begin{split}
&\min_{u \in \mathcal{U}}\;\; \mathcal{R}(\mathcal{D}^\mathrm{val}; \theta^\ast) \\
\mathrm{s.t.}\;\; \theta^* = \arg&\min_{\theta}\; \mathcal{L}_\mathrm{robust}(\mathcal{D^\mathrm{clean}} \bigcup \mathcal{D^\mathrm{poison}}; \theta).
\end{split}
\label{eq:bilevel_simple}
\end{equation}
The upper-level cost $\mathcal{R}$ denotes a certified robustness metric such as the certified radius from RS.
The goal of the upper-level problem is to compromise the certified robustness guarantees of the model trained on validation data $\mathcal{D}^\mathrm{val}$.
The solution to the lower-level problem $\theta^\ast$ are the parameters of the machine learning model learnt from $\mathcal{D^\mathrm{clean}} \bigcup \mathcal{D^\mathrm{poison}}$ using a robust training method with loss function $\mathcal{L}_\mathrm{robust}$.
The fact that any training method that achieves high certified robustness to test-time attacks and any certification procedure can be incorporated into this formulation by changing the lower- and upper-level problems, respectively, makes the attack formulation broadly applicable.
Recent works \cite{cohen2019certified,salman2019provably,zhai2020macer} have shown RS based methods to be effective at certifying and producing robust classifiers.
The scalability of these methods to large datasets and deep models make them useful for real-world application.
Thus, we focus on using our poisoning attack against these methods.
\subsection{Poison randomized smoothing based defenses}
For an input at test time, RS produces a prediction from the smoothed classifier $g$ and a radius in which this prediction remains constant.
Since the certified radius of a ``hard'' smoothed classifier $g$ is non-differentiable, it cannot be directly incorporated in the upper-level of the attack formulation Eq.~(\ref{eq:bilevel_simple}).
To overcome this challenge, we use the ``soft'' smoothed classifier $\tilde{g}$ as an approximation. A similar technique has been used in \cite{salman2019provably, zhai2020macer}.
Let $z_{\theta}:X\xrightarrow{}\mathcal{P}(K)$ be a classifier whose last layer is softmax with parameters $\theta$ and $\sigma > 0$ is the noise used for smoothing, then soft smoothed classifier $\Tilde{g}_{\theta}$ of $z_{\theta}$ is
\(\Tilde{g}_{\theta}(x) = \arg \max_{c \in Y} \mathbb{E}_{\eta \sim \mathcal{N}(0,\sigma^2I)}[z^c_{\theta}(x + \eta)].\)
It was shown in \cite{zhai2020macer} that if the ground truth of an input $x$ is $y$ and $\Tilde{g}_{\theta}$ classifies $x$ correctly
then $\tilde{g}_{\theta}$ is provably robust at $x$, with the certified radius $\tilde{r}(\tilde{g}_{\theta}; x, y, \sigma) = \frac{\sigma}{2}[\Phi^{-1}(\mathbb{E}_{\eta}[z^y_{\theta}(x + \eta)]) - \Phi^{-1}(\max_{y' \neq y}\mathbb{E}_{\eta}[z^{y'}_{\theta}(x + \eta)])].$
Assuming $\tilde{r}(\tilde{g}_{\theta}; x, y, \sigma) = 0$ when $x$ is misclassified, the ACR is $\tilde{R}(\tilde{g}_{\theta}; \mathcal{D}, \sigma) = \frac{1}{|\mathcal{D}|}\sum_{(x,y) \in \mathcal{D}} \tilde{r}(\tilde{g}_{\theta}; x, y, \sigma).$
Since $\tilde{R}$ is differentiable we can use it in the upper-level of Eq.~(\ref{eq:bilevel_simple}). The lower-level problem can be any robust training procedure and we focus on using \cite{cohen2019certified,zhai2020macer,salman2019provably} in this work.
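The soft radius $\tilde{r}$ is straightforward to compute once the noise-averaged softmax outputs are available. A minimal sketch (the expectation $\mathbb{E}_{\eta}[z_{\theta}(x+\eta)]$ is assumed to be already estimated, e.g. by averaging $k$ noisy forward passes):

```python
from statistics import NormalDist

def soft_certified_radius(z_bar, y, sigma):
    # z_bar[c] approximates E_eta[z_theta^c(x + eta)].  The radius is 0 when
    # the soft smoothed classifier misclassifies x, otherwise
    # r = sigma/2 * (Phi^-1(z_bar[y]) - Phi^-1(max_{y' != y} z_bar[y'])).
    if max(range(len(z_bar)), key=lambda c: z_bar[c]) != y:
        return 0.0
    runner_up = max(p for c, p in enumerate(z_bar) if c != y)
    inv = NormalDist().inv_cdf
    return 0.5 * sigma * (inv(z_bar[y]) - inv(runner_up))
```

Averaging this quantity over $\mathcal{D}^\mathrm{val}$ gives the differentiable ACR surrogate $\tilde{R}$ minimized in the upper level.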
{\bf Poisoning against GA \cite{cohen2019certified}.}
We start by showing how to generate poison data against models trained with GA \cite{cohen2019certified}, which was shown to yield higher certified robustness compared to models trained with standard training. In this method the classifier $f_{\theta}$ is obtained by optimizing the loss function $\mathcal{L}_\mathrm{GaussAug}(\mathcal{D};\theta,\sigma) = \frac{1}{|\mathcal{D}|}\sum_{(x_i,y_i) \in \mathcal{D}} l_{ce}(x_i+\eta,y_i;\theta)$, where $l_{ce}$ is the cross-entropy loss and $\eta\sim \mathcal{N}(0,\sigma^2 I)$. To control the perturbation added to the poison data we use the $\ell_\infty$-norm here, but other norms can also be used. The bilevel formulation to generate poison data to reduce the certified robustness guarantees obtained using RS for a classifier trained with GA is as follows.
\begin{equation}
\begin{split}
&\min_{u}\;\; \tilde{R}(\tilde{g}_{\theta^*}; \mathcal{D}^\mathrm{val}, \sigma) \\
&\mathrm{s.t.}\;\; \|\delta_i\|_{\infty} \leq \epsilon,\;\; i=1,...,n,\;\;\mathrm{and} \\
\theta^* = \arg&\min_{\theta}\; \mathcal{L}_\mathrm{GaussAug}(\mathcal{D^\mathrm{clean}} \bigcup \mathcal{D^\mathrm{poison}}; \theta,\sigma).
\end{split}
\label{Eq:Bilevel}
\end{equation}
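For concreteness, the lower-level objective $\mathcal{L}_\mathrm{GaussAug}$ amounts to ordinary cross-entropy on noise-perturbed copies of the training points. A minimal sketch with a pluggable model (`model(x)` is assumed to return per-class probabilities; one noise draw per point is used for brevity):

```python
import math
import random

def gauss_aug_loss(model, data, sigma, seed=0):
    # Average cross-entropy of the base classifier on inputs perturbed
    # with isotropic Gaussian noise of scale sigma (one draw per point).
    rng = random.Random(seed)
    total = 0.0
    for x, y in data:
        noisy = [xi + rng.gauss(0.0, sigma) for xi in x]
        probs = model(noisy)
        total += -math.log(max(probs[y], 1e-12))
    return total / len(data)
```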
\begin{figure*}[tb]
\centering{\includegraphics[width=0.99\textwidth]{images/poisoning_two_cases_cropped.pdf}}
\caption{Analytical solutions of problem (\ref{eq:bilevel_linear}) with linear classifiers. The poison distribution $(P^{-}_\mathrm{poison})$ can change the decision boundary (broken line) and reduce the ACR of the clean distribution $(P^{-})$ in two ways (Cases 1 and 2). Perturbation is exaggerated for illustration.}
\label{fig:linear}
\end{figure*}
\begin{algorithm
\caption{Poisoning GA based certified defense \cite{cohen2019certified}
}
\label{alg:main}
\textbf{Input}: $\mathcal{D}^\mathrm{clean}, \mathcal{D}^\mathrm{base}, \mathcal{D}^\mathrm{val}, \mathrm{perturbation \;strength}\; \epsilon, \\ \mathrm{noise \;level}\; \sigma, \mathrm{number\; of\; noise\; samples}\; k, \\ \mathrm{inverse\; temperature}\;\alpha, \mathrm{total\; epochs}\; P,\\ \mathrm{lower-level\; epochs}\; T_1, \mathrm{epochs\; for \;linear\; system}\;T_2$ \\
\textbf{Output}: $\mathcal{D}^\mathrm{poison}$
\begin{algorithmic}
\STATE{$\mathcal{D}^\mathrm{poison} := \mathcal{D}^\mathrm{base}$}
\FOR{$p=0,\;\cdots\;,P\textrm{-}1$}
\STATE{Sample a mini-batch $(x^\mathrm{clean}, y^\mathrm{clean}) \;\sim\; \mathcal{D}^\mathrm{clean}$}
\STATE{Sample a mini-batch of $n$ points $(x^\mathrm{val}, y^\mathrm{val}) \;\sim\; \mathcal{D}^\mathrm{val}$}
\STATE{Sample a mini-batch $(x^\mathrm{poison}, y^\mathrm{poison}) \;\sim\;\mathcal{D}^\mathrm{poison}$}
\STATE{Pick the base samples for poison data $(x^\mathrm{base}, y^\mathrm{base})$}
\STATE{For each $x^\mathrm{val}_{i}$, sample $k$ i.i.d. Gaussian samples}
\STATE{$x^\mathrm{val}_{i_1}, \cdots , x^\mathrm{val}_{i_k} \sim \mathcal{N}(x^\mathrm{val}_{i}, \sigma^2I)$}
\STATE{Compute $\tilde{z}_{\theta}(x^\mathrm{val}_i) \xleftarrow{} \frac{1}{k} \sum^k_{j=1} \alpha z_{\theta}(x^\mathrm{val}_{i_j})$ for each $i$}
\STATE{$\mathcal{G_{\theta}} := \{(x^\mathrm{val}_i, y^\mathrm{val}_i): y^\mathrm{val}_i = \arg \max_{c \in \mathcal{Y}}\; \tilde{z}^c_{\theta}(x^\mathrm{val}_i)\}$}
\STATE{}
\STATE{For each $(x_i, y_i) \in \mathcal{G_{\theta}}$, compute $\tilde{y}_i$}
\STATE{$\tilde{y}_i \xleftarrow{} \arg \max_{c \in \mathcal{Y}\textbackslash\{y_i\}} \tilde{z}^c_{\theta}(x_i)$}
\STATE{For each $(x_i, y_i) \in \mathcal{G_{\theta}}$, compute $\tilde{r}(x_i, y_i)$}
\STATE{$\tilde{r}(x_i, y_i) = \frac{\sigma}{2}(\Phi^{-1}(\tilde{z}^{y_i}_{\theta}(x_i)) - \Phi^{-1}(\tilde{z}^{\tilde{y}_i}_{\theta}(x_i)))$}
\STATE{}
\STATE{$\xi := \frac{1}{n}\sum_{(x_i, y_i) \in \mathcal{G_{\theta}}}\tilde{r}(x_i, y_i)$}
\STATE{$\zeta := \mathcal{L}_\mathrm{GaussAug}((x^\mathrm{clean}, y^\mathrm{clean}) \bigcup (x^\mathrm{poison}, y^\mathrm{poison}), \sigma)$}
\STATE{$(x^\mathrm{poison}, y^\mathrm{poison}):=$ApproxGrad$(\xi, \zeta, 1, T_1, T_2,\epsilon,x^{base})$}
\STATE{Update $\mathcal{D}^\mathrm{poison}$ with $(x^\mathrm{poison}, y^\mathrm{poison})$}
\ENDFOR
\end{algorithmic}
\end{algorithm}
{\bf Poisoning against MACER \cite{zhai2020macer}.} Another recent work proposed a method for robust training by maximizing the certified radius (MACER). Their approach uses a loss function which is a combination of the classification loss and the robustness loss of the soft smoothed classifier $\tilde{g}_\theta$. In particular, the loss of the smoothed classifier on a point $(x,y)$ is given by $l_{macer}(\tilde{g}_\theta; x, y) = -\mathrm{log} \;\hat{z}^y_{\theta}(x) + \frac{\lambda \sigma}{2} \mathrm{max}\{\gamma - \tilde{\xi}_{\theta}(x, y), 0\}\cdot\mathbf{1}_{\{\tilde{g}_{\theta}(x)=y\}}$, where $\eta_1, ..., \eta_k$ are $k$ i.i.d. samples from $\mathcal{N}(0, \sigma^2\mathbf{I})$, $\hat{z}^y_{\theta}(x) = \frac{1}{k}\Sigma^k_{j=1} z^y_{\theta}(x + \eta_j)$ is the empirical expectation of $z_{\theta}(x + \eta)$, $\tilde{\xi}_{\theta}(x, y) = \Phi^{-1}(\hat{z}^y_{\theta}(x)) - \Phi^{-1}(\max_{y' \neq y} \hat{z}^{y'}_{\theta}(x))$, $\gamma$ is the hinge factor, and $\lambda$ balances the accuracy and robustness trade-off. Using this we can define $\mathcal{L}_\mathrm{macer}(\mathcal{D};\theta,\sigma) = \frac{1}{|\mathcal{D}|}\sum_{(x_i,y_i) \in \mathcal{D}} l_{macer}(\tilde{g}_{\theta};x_i,y_i)$. To generate poison data that reduces the robustness guarantees of a classifier trained with MACER we use the loss $\mathcal{L}_\mathrm{macer}(\mathcal{D};\theta,\sigma)$ in the lower-level problem in Eq.~(\ref{Eq:Bilevel}).
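The per-point MACER loss above can be sketched directly from the noise-averaged softmax $\hat{z}_{\theta}(x)$ (assumed precomputed); the default values of $\lambda$ and $\gamma$ below are placeholders, not the settings used in the experiments.

```python
from statistics import NormalDist
import math

def macer_point_loss(z_bar, y, sigma, lam=4.0, gamma=8.0):
    # Cross-entropy plus a hinge on the Phi^-1-space margin xi, applied
    # only when the smoothed prediction is correct.
    ce = -math.log(max(z_bar[y], 1e-12))
    if max(range(len(z_bar)), key=lambda c: z_bar[c]) != y:
        return ce
    inv = NormalDist().inv_cdf
    runner_up = max(p for c, p in enumerate(z_bar) if c != y)
    xi = inv(z_bar[y]) - inv(runner_up)
    return ce + 0.5 * lam * sigma * max(gamma - xi, 0.0)
```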
{\bf Poisoning against SmoothAdv \cite{salman2019provably}.} It was shown that the certified robustness guarantees obtained from RS can be improved by training the classifiers using adversarial training with adversarial examples generated against the smooth classifier. In particular the classifier trained with SmoothAdv optimizes the following objective for a point $(x,y)$.
$\min_{\theta}\max_{\|x' - x\|_2 \leq \alpha} -\mathrm{log} \frac{1}{k}\Sigma^k_{j=1} z^y_{\theta}(x' + \eta_j)$, where $\eta_1, ..., \eta_k$ are $k$ i.i.d. samples from $\mathcal{N}(0, \sigma^2\mathbf{I})$ and $\alpha$ is the permissible $\ell_2$ distortion to $x$. To generate poisoning data against SmoothAdv we must use this objective as the lower-level problem in Eq.~(\ref{Eq:Bilevel}). To make this problem easier for bilevel solvers, we use an approximation to the min-max problem. We first compute the adversarial example
$x' = \arg\max_{\|x' - x\|_2 \leq \alpha} -\mathrm{log} \frac{1}{k}\Sigma^k_{j=1} z^y_{\theta}(x' + \eta_j)$ using PGD attack on the points in $\mathcal{D^\mathrm{clean}} \bigcup \mathcal{D^\mathrm{poison}}$ and then use these examples as our new dataset to train the model parameters in the lower-level as in Eq.~(\ref{Eq:Bilevel}). Specifically, the lower-level problem in Eq.~(\ref{Eq:Bilevel}) becomes $\arg\min_{\theta}\; \mathcal{L}_\mathrm{GaussAug}(\mathcal{D}^\mathrm{clean}_{adv} \bigcup \mathcal{D}^\mathrm{poison}_{adv}; \theta,\sigma)$ where $\mathcal{D}_{adv}$ denotes the adversarial examples generated against $\tilde{g}_{\theta}$. We update $\mathcal{D}_{adv}$ in every step of the bilevel optimization.
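The inner maximization can be illustrated on a toy problem. The sketch below uses a 1-D logistic base classifier, a fixed set of noise draws, and finite-difference gradients, all of which are illustrative assumptions; the actual SmoothAdv attack backpropagates through the network and operates on images.

```python
import math
import random

def smoothadv_inner(x, sigma, alpha, k=64, steps=50, lr=0.1, seed=0):
    # Maximize -log((1/k) sum_j z^y(x' + eta_j)) over ||x' - x|| <= alpha
    # for a 1-D logistic base classifier z(x) = sigmoid(x), true class +1.
    rng = random.Random(seed)
    noise = [rng.gauss(0.0, sigma) for _ in range(k)]   # fixed across steps
    def p_true(xp):
        return sum(1.0 / (1.0 + math.exp(-(xp + e))) for e in noise) / k
    xp = x
    for _ in range(steps):
        h = 1e-4                                        # finite-difference step
        g = -(math.log(p_true(xp + h)) - math.log(p_true(xp - h))) / (2 * h)
        xp += lr * g                                    # ascend the loss
        xp = x + max(-alpha, min(alpha, xp - x))        # project onto the ball
    return xp

# Starting from x = 1.0, the attack pushes x' toward the decision boundary
# at 0 until it hits the norm constraint, ending at x - alpha = 0.5.
x_adv = smoothadv_inner(1.0, sigma=0.25, alpha=0.5)
```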
\vspace{-0.1cm}
\subsection{Generation and evaluation of poisoning attack}
In this work, we focus on creating a poisoned set to compromise the certified adversarial robustness guarantees of all points in a target class. We initialize the poison data with clean data from the target class (i.e., base data) and optimize the perturbation to be added to each point by solving the bilevel problem in Eq.~(\ref{Eq:Bilevel}) for the attack against GA based training. We use a small value of $\epsilon$ to ensure the perturbations added are imperceptible and the poison points have clean labels when inspected visually (see Fig.~\ref{fig:attack_egs} in the Appendix). The bilevel optimization is solved using the ApproxGrad algorithm (Alg.~\ref{alg:approxgrad} in Appendix~\ref{app:approxgrad}). The full attack algorithm for generating poison data against GA \cite{cohen2019certified} is shown in Alg.~\ref{alg:main}. Attacks against other methods are generated similarly by replacing the lower-level objective ($\zeta$ in Alg.~\ref{alg:main}) with the appropriate loss function for MACER~\cite{zhai2020macer} and SmoothAdv~\cite{salman2019provably}. We evaluate the effect of poisoning by training the models from scratch using GA, MACER and SmoothAdv on their respective poisoned sets and report ACR and approximate certified accuracy (points with certified $\ell_2$ radius greater than zero) on the clean test points from the target class. Previous works \cite{huang2020metapoison, shafahi2018poison} have shown the effectiveness of poisoning by lowering the accuracy on specific target points from the test set. Our attack is also effective, under a similar setting, at reducing the certified radius for target points (Appendix~\ref{app:partial_poisoning}).
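A natural way to enforce the $\|\delta_i\|_\infty \leq \epsilon$ clean-label constraint after each poison update is an $\ell_\infty$ clip around the base image intersected with the valid pixel range (a minimal sketch; the exact mechanism inside ApproxGrad may differ, and variable names are illustrative):

```python
def project_poison(u, base, eps, lo=0.0, hi=1.0):
    # Clip each coordinate to the intersection of the l_inf ball of
    # radius eps around the base image and the valid pixel range [lo, hi].
    return [min(max(ui, bi - eps, lo), bi + eps, hi)
            for ui, bi in zip(u, base)]
```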
\vspace{-0.1cm}
\subsection{Analysis of poisoning with linear classifiers}
To gain a deeper insight into the effect of poisoning, we analyze the
analytical solution of our bilevel problem for the case of linear classifiers trained with GA.
Suppose we have a one-dimensional two-class problem and the attacker's goal is to poison the distribution of the \emph{negative} class $P^{-}$ so that the ACR ($\tilde{R}$) of the poisoned model on the test points of the \emph{negative} class is reduced.
Let $\epsilon$ be the maximum permissible perturbation that can be added by the attacker to the points of the class $P^{-}$.
We do not assume any specific distributions for $P^{+}$ and $P^{-}$ here, but only that $\sum_i x_i^{-}<\sum_i x_i^{+}$ without loss of generality. Here $x_i^{+}$ and $x_i^{-}$ refer to the training points of the positive and the negative class, respectively.
A linear classifier in one-dimension is either $f(x)=1\;\mathrm{iff}\;x \geq t$ or $f(x)=-1\;\mathrm{iff}\;x \leq t$ parameterized by the threshold $t$.
For linear classifiers,
the smoothed classifier $g$ is the same as the unsmoothed classifier $f$ and the certified radius for a point is the distance to the decision boundary \cite{cohen2019certified}.
To make the problem analytically tractable, we use the squared-loss at the lower-level i.e., $f(x) = wx + b$ and $l(x,y;f)=(f(x) - y)^2$.
The bilevel problem for poisoning is as follows
\begin{equation}
\begin{split}
&\min_{u}\;\; \mathbb{E}_{P^{-}}[\max(\mathrm{sign}(w^\ast)(-b^\ast/w^\ast-x),0)]\\
&\mathrm{s.t.}\;\; -\epsilon \leq u_i - x_i^{-} \leq \epsilon, \;\; \mathrm{for \;\; i = 1, ..., n}\\
w^\ast,&b^\ast = \arg\min_{w, b}\; \frac{1}{2n}\big[\sum_{i=1}^{n}l(x_i^{+},1) + \sum_{i=1}^{n}l(u_i,-1)\big].
\end{split}
\label{eq:bilevel_linear}
\end{equation}
\begin{theorem}
\label{thm:linear}
If the perturbation is large enough, i.e., $\epsilon \geq \frac{\sum_i x_i^{+} - \sum_i x_i^{-}}{n}$, then there are two locally optimal solutions to (\ref{eq:bilevel_linear}), namely $u_i = x_i^{-} - \epsilon$ (Case 1) and $u_i = x_i^{-} + \epsilon$ (Case 2) for $i=1,...,n$.
Otherwise, there is a unique globally optimal solution $u_i = x_i^{-} - \epsilon$ (Case 1) for $i=1,...,n$.
\end{theorem}
Thus, optimal poisoning is achieved by shifting the points of the $P^{-}$ class either towards the left or the right by the maximum amount $\epsilon$ (Fig.~\ref{fig:linear} and Appendix~\ref{app:isotropic_gaussian}). Moreover, the effect of poisoning an $\alpha$ fraction of points from the $P^{-}$ class with maximum permissible perturbation $\Tilde{\epsilon}$ is the same as that of poisoning all points of the $P^{-}$ class with $\epsilon = \alpha \Tilde{\epsilon}$ (Corollary~\ref{cor:partial_poisoning} in Appendix~\ref{app:proofs}).
Although a direct analysis is intractable for non-linear cases, we empirically observed that our attack moved the decision boundary of neural networks closer to the points of the target class as measured by the mean distance of points to the decision boundary of the smoothed classifier (Sec.~\ref{sec:exp_emp_robustness}).
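Case 1 of Theorem 1 can be verified numerically on a symmetric toy problem with the squared loss of Eq.~(\ref{eq:bilevel_linear}): shifting the negative training points by $-\epsilon$ moves the least-squares decision boundary toward the clean negative test points and lowers their ACR. The numbers below are illustrative.

```python
def fit_linear(xs, ys):
    # Closed-form least-squares fit f(x) = w*x + b in one dimension.
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    w = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
    return w, ybar - w * xbar

def acr_negative(test_neg, w, b):
    # Mean distance to the boundary t = -b/w over the negative test
    # points (radius 0 for misclassified points); assumes w > 0.
    t = -b / w
    return sum(max(t - x, 0.0) for x in test_neg) / len(test_neg)

x_pos, x_neg, eps = [2.0, 3.0, 4.0], [-2.0, -3.0, -4.0], 1.0
labels = [1.0] * 3 + [-1.0] * 3
w0, b0 = fit_linear(x_pos + x_neg, labels)         # clean fit: boundary t = 0
poison = [x - eps for x in x_neg]                  # Case 1 shift
w1, b1 = fit_linear(x_pos + poison, labels)        # poisoned: boundary t = -0.5
acr_clean = acr_negative(x_neg, w0, b0)            # 3.0
acr_poisoned = acr_negative(x_neg, w1, b1)         # 2.5
```

Counterintuitively, moving the poison points away from the boundary drags the least-squares boundary toward the clean negative points, which is exactly the radius-shrinking mechanism of Case 1.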
\vspace{-0.1cm}
\section{Experiments}\label{sec:experiments}
In this section we present the results of our PACD\footnote{The code is available at \url{https://github.com/akshaymehra24/poisoning_certified_defenses}} attack on poisoning deep neural networks trained using methods that make the model certifiably robust to test-time attacks. All the results presented here are averaged over models trained with five random initialization.
We report the average certified radius (ACR) as the average of the certified radii obtained from the RS based certification procedure of \cite{cohen2019certified} for correctly classified points. The certified radius is zero for misclassified and abstained points.
The approximate certified accuracy (ACA) is the fraction of points correctly classified by the smoothed classifier ($\ell_2$ radius greater than zero). All results are reported over 500 randomly sampled images from the target classes.
We use the same value of $\sigma$ for smoothing during attack, retraining and evaluation. We compare our results to watermarking \cite{shafahi2018poison}
which has been used previously for clean-label attacks (opacity 0.1 followed by clipping to make the $\ell_{\infty}$ distortion equal to $\epsilon$),
and show that poison data generated using the bilevel optimization is significantly better at reducing the average certified radius.
\begin{table}[tb]
\caption{Decrease in certified radius and certified accuracy of models trained with Gaussian augmentation \cite{cohen2019certified} on poison data compared to those of models trained on clean and watermarked data.
}
\label{Table:cohen_attack}
\centering
\small
\resizebox{0.95\columnwidth}{!}{
\begin{tabular}{c|c|c|cc}
\toprule
& \multirow{2}{*}{$\sigma$} & \multirow{2}{*}{Data} & \multicolumn{2}{c}{\makecell{Certified robustness of target class}}\\
& & & ACR & ACA(\%) \\
\midrule
\multirow{9}{*}{\STAB{\rotatebox[origin=c]{90}{MNIST}}} &
\multirow{3}{*}{0.25} & Clean & 0.896$\pm$0.01 & 98.92$\pm$0.32 \\
& & Watermarked & 0.908$\pm$0.01 & 99.24$\pm$0.29 \\
& & Poisoned & {\bf0.325}$\pm$0.10 & {\bf71.96}$\pm$8.28 \\
\cmidrule{2-5}
&\multirow{3}{*}{0.5} & Clean & 1.481$\pm$0.02 & 99.16$\pm$0.34\\
& & Watermarked & 1.514$\pm$0.06 & 99.12$\pm$0.47\\
& & Poisoned & {\bf0.733}$\pm$0.10 & {\bf90.68}$\pm$3.37 \\
\cmidrule{2-5}
&\multirow{3}{*}{0.75} & Clean & 1.549$\pm$0.11 & 98.48$\pm$0.35\\
& & Watermarked & 1.566$\pm$0.06 & 98.36$\pm$0.39\\
& & Poisoned & {\bf0.698}$\pm$0.13 & {\bf84.92}$\pm$5.14 \\
\midrule
\midrule
\multirow{6}{*}{\STAB{\rotatebox[origin=c]{90}{CIFAR10}}} &
\multirow{3}{*}{0.25} & Clean & 0.521$\pm$0.05 & 85.76$\pm$3.31 \\
& & Watermarked & 0.470$\pm$0.01 & 83.22$\pm$1.41 \\
& & Poisoned & {\bf0.059}$\pm$0.02 & {\bf26.84}$\pm$6.04 \\
\cmidrule{2-5}
& \multirow{3}{*}{0.5} & Clean & 0.634$\pm$0.04 & 75.04$\pm$1.65\\
& & Watermarked & 0.611$\pm$0.18 & 74.01$\pm$9.22\\
& & Poisoned & {\bf0.221}$\pm$0.04 & {\bf42.28}$\pm$6.01 \\
\bottomrule
\end{tabular}
}
\end{table}
We use our attack to poison the MNIST and CIFAR10 datasets and use ApproxGrad to solve the bilevel optimization. The time complexity of ApproxGrad is $O(VT)$, where $V$ is the number of parameters in the machine learning model and $T$ is the number of lower-level updates. For datasets like ImageNet, where the optimization must be performed over a very large number of batches, obtaining the solution to bilevel problems becomes computationally hard. Due to this bottleneck we leave the problem of poisoning ImageNet for future work. For the experiments with MNIST we randomly selected the digit 8 and for CIFAR10 the class ``Ship'' as the target class for the attacker.
The attack results for other target classes are similar and are presented in the Appendix~\ref{app:additional_experiments}.
To ensure that the attack points satisfy the clean label constraint, the maximum permissible $\ell_{\infty}$ distortion is bounded by $\epsilon = 0.1$ for MNIST and $\epsilon = 0.03$ for CIFAR10 which is similar to the value used to generate imperceptible adversarial examples in previous works \cite{madry2017towards,goodfellow2014explaining}. We used convolutional neural networks for our experiments on MNIST and Resnet-20 model for our experiments with CIFAR10. Model architectures, hyperparameters, generated attack examples (Fig.~\ref{fig:attack_egs} in Appendix), and additional results on transferability of our poisoned samples to models with different architectures
are presented in Appendix \ref{app:additional_experiments}.
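A minimal sketch (names are ours) of how the clean-label constraint can be enforced: project each poison point onto the $\ell_{\infty}$ ball of radius $\epsilon$ around its clean original, then back into the valid pixel range:

```python
import numpy as np

def project_poison(x_poison, x_clean, eps):
    """Project poisoned points onto the l_inf ball of radius eps
    around the clean points, then into the valid pixel range [0, 1]."""
    delta = np.clip(x_poison - x_clean, -eps, eps)  # bound the distortion
    return np.clip(x_clean + delta, 0.0, 1.0)       # keep valid pixels
```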
Since models trained with standard training do not achieve a high certified radius \cite{cohen2019certified}, we considered poisoning models trained with methods that improve the certified robustness guarantees against test-time attacks.
For comparison, ACR on the target class ``Ship'' with Resnet-20 trained with standard training on clean CIFAR10 dataset is close to zero whereas for the same model trained with GA ($\sigma = 0.25$) ACR is close to 0.5.
Finally, we show that our attack can withstand the use of weight regularization, which has been shown to be effective at mitigating the effect of poisoning attacks \cite{carnerero2020regularisation}. The results of this experiment are presented in Appendix~\ref{app:weight_reg}.
\begin{table}[tb]
\caption{Decrease in certified radius and certified accuracy of models trained with MACER \cite{zhai2020macer} on poison data compared to those of models trained on clean and watermarked data.
}
\label{Table:macer_attack}
\centering
\small
\resizebox{0.95\columnwidth}{!}{
\begin{tabular}{c|c|c|cc}
\toprule
&\multirow{2}{*}{$\sigma$} & \multirow{2}{*}{Data} &
\multicolumn{2}{c}{\makecell{Certified robustness of target class}}\\
&&
& ACR & ACA(\%) \\
\midrule
\multirow{9}{*}{\STAB{\rotatebox[origin=c]{90}{MNIST}}} &
\multirow{3}{*}{0.25} & Clean & 0.915$\pm$0.01 & 99.64$\pm$0.21\\
& & Watermarked & 0.894$\pm$0.01 & 98.84$\pm$0.53\\
& & Poisoned & {\bf0.431}$\pm$0.13 & {\bf79.81}$\pm$9.26 \\
\cmidrule{2-5}
&\multirow{3}{*}{0.5} & Clean & 1.484$\pm$0.11 & 98.56$\pm$0.41\\
& & Watermarked & 1.475$\pm$0.08 & 98.68$\pm$0.39\\
& & Poisoned & {\bf0.685}$\pm$0.16 & {\bf84.36}$\pm$6.17 \\
\cmidrule{2-5}
&\multirow{3}{*}{0.75} & Clean & 1.353$\pm$0.13 & 93.81$\pm$2.08\\
& & Watermarked & 1.415$\pm$0.11 & 94.52$\pm$1.58\\
& & Poisoned & {\bf1.008}$\pm$0.19 & {\bf88.41}$\pm$4.64 \\
\midrule
\midrule
\multirow{6}{*}{\STAB{\rotatebox[origin=c]{90}{CIFAR10}}} &
\multirow{3}{*}{0.25} & Clean & 0.593$\pm$0.05 & 83.84$\pm$2.26 \\
& & Watermarked & 0.486$\pm$0.04 & 77.01$\pm$0.21 \\
& & Poisoned & {\bf0.379}$\pm$0.11 & {\bf72.41}$\pm$9.79\\
\cmidrule{2-5}
& \multirow{3}{*}{0.5} & Clean & 0.759$\pm$0.11 & 72.92$\pm$5.06\\
& & Watermarked & 0.811$\pm$0.10 & 75.66$\pm$2.99\\
& & Poisoned & {\bf0.521}$\pm$0.11 & {\bf65.24}$\pm$6.55 \\
\bottomrule
\end{tabular}
}
\end{table}
\vspace{-0.1cm}
\subsection{Poisoning Gaussian data augmentation \cite{cohen2019certified}}
Here we show the effectiveness of our attack at compromising the certified robustness guarantees obtained with RS on a model trained using GA. The results of the attack, presented in Table~\ref{Table:cohen_attack}, show a significant decrease in the ACR and the certified accuracy of the target class for the model trained on poisoned data compared to the models trained on clean and watermarked data. Since the certified radius and certified accuracy are correlated, our poisoning attack, which targets the reduction of the certified radius (upper-level problem in Eq.~(\ref{Eq:Bilevel})), also causes a decrease in the certified accuracy. The significant degradation of the average certified radius from 0.52 to 0.06 on CIFAR10 with imperceptibly distorted poison data shows the extreme vulnerability of GA to poisoning.
\vspace{-0.1cm}
\subsection{Poisoning MACER \cite{zhai2020macer}}
Here we use the bilevel formulation in Eq.~(\ref{Eq:Bilevel}) with the $\mathcal{L}_\mathrm{macer}$ loss in the lower-level and generate poison data to reduce the certification guarantees of models trained with MACER. The poison data is generated with $k=2$, where $k$ is the number of noisy images used per training point, to ease the bilevel optimization. However, during retraining $k=16$ is used, similar to the value used in the original work \cite{zhai2020macer}.
The ACR obtained using MACER is higher than that achievable using Gaussian augmentation based training, consistent with \cite{zhai2020macer}. However, our data poisoning attack is still able to reduce the average certified radius of the method by more than 30\% (Table~\ref{Table:macer_attack}), even though the attack is evaluated against a much stronger defense ($k=16$ for retraining compared to $k=2$ for poisoning) than the one the poison data was optimized against. This shows that using a larger number of noisy samples ($k$) cannot eliminate the effect of the attack, emphasizing the importance of the threat posed by data poisoning.
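For illustration (our own helper, not MACER's implementation), drawing the $k$ Gaussian-noise-augmented copies of a training point, e.g. $k=2$ while optimizing the poison data and $k=16$ during retraining, looks like:

```python
import numpy as np

def gaussian_copies(x, k, sigma, seed=0):
    """Draw k Gaussian-noise-augmented copies of a training point x,
    as used in randomized-smoothing based training procedures."""
    rng = np.random.default_rng(seed)
    noise = sigma * rng.standard_normal((k,) + x.shape)
    return x[None, ...] + noise  # shape (k, *x.shape)
```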
\begin{table}[tb]
\caption{Decrease in certified radius and certified accuracy of models trained with SmoothAdv \cite{salman2019provably} on poison data compared to those of models trained on clean and watermarked data.
}
\label{Table:smoothadv_attack}
\centering
\small
\resizebox{0.95\columnwidth}{!}{
\begin{tabular}{c|c|c|cc}
\toprule
&\multirow{2}{*}{$\sigma$} & \multirow{2}{*}{Data} &
\multicolumn{2}{c}{\makecell{Certified robustness of target class}}\\
&&
& ACR & ACA(\%) \\
\midrule
\multirow{9}{*}{\STAB{\rotatebox[origin=c]{90}{MNIST}}} &
\multirow{3}{*}{0.25} & Clean & 0.896$\pm$0.01 & 99.16$\pm$0.45 \\
& & Watermarked & 0.906$\pm$0.01 & 99.28$\pm$0.16 \\
& & Poisoned & {\bf0.672}$\pm$0.04 & {\bf93.21}$\pm$1.92 \\
\cmidrule{2-5}
& \multirow{3}{*}{0.5} & Clean & 1.408$\pm$0.05 & 99.21$\pm$0.25\\
& & Watermarked & 1.401$\pm$0.02 & 98.01$\pm$0.18\\
& & Poisoned & {\bf1.037}$\pm$0.06 & {\bf93.81}$\pm$1.31 \\
\cmidrule{2-5}
& \multirow{3}{*}{0.75} & Clean & 1.262$\pm$0.05 & 95.68$\pm$0.47\\
& & Watermarked & 1.433$\pm$0.03 & 97.21$\pm$0.13\\
& & Poisoned & {\bf0.924}$\pm$0.06 & {\bf88.88}$\pm$0.18 \\
\midrule
\midrule
\multirow{6}{*}{\STAB{\rotatebox[origin=c]{90}{CIFAR10}}} &
\multirow{3}{*}{0.25} & Clean & 0.504$\pm$0.02 & 78.76$\pm$0.81 \\
& & Watermarked & 0.441$\pm$0.02 & 70.16$\pm$2.12 \\
& & Poisoned & {\bf0.271}$\pm$0.02 & {\bf55.78}$\pm$0.96 \\
\cmidrule{2-5}
&\multirow{3}{*}{0.5} & Clean & 0.479$\pm$0.07 & 65.84$\pm$4.81\\
& & Watermarked & 0.473$\pm$0.02 & 62.51$\pm$2.12\\
& & Poisoned & {\bf0.277}$\pm$0.02 & {\bf49.11}$\pm$3.19 \\
\bottomrule
\end{tabular}
}
\end{table}
\begin{figure}[tb]
\centering
\includegraphics[width=0.6\columnwidth]{images/teaser_c.pdf}
\includegraphics[width=0.6\columnwidth]{images/attack_egs_c.pdf}
\caption{(Upper) ACR degrades more if larger perturbations are permitted to create poison data. But larger perturbations make the poison points visibly distorted, making them easier to detect by inspection (Lower).
Poison data are generated with $\epsilon \in \{0,0.1,0.2,0.3\}$. We have used $\epsilon = 0.1$ for our attacks.}
\label{fig:acr_vs_epsilon}
\end{figure}
\begin{figure*}[tb]
\centering{
\subfigure[Average certified radius of digit 8 in MNIST]{\includegraphics[width=0.775\columnwidth]{images/acr_mnist_c.pdf}}
\hspace{0.15in}
\subfigure[Approximate certified accuracy of digit 8 in MNIST]{\includegraphics[width=0.775\columnwidth]{images/aca_mnist_c.pdf}}
\subfigure[Average certified radius of ``Ship'' in CIFAR10]{\includegraphics[width=0.775\columnwidth]{images/acr_cifar10_c.pdf}}
\hspace{0.15in}
\subfigure[Approximate certified accuracy of ``Ship'' in CIFAR10]{\includegraphics[width=0.775\columnwidth]{images/aca_cifar10_c.pdf}}
\includegraphics[width=0.9\textwidth]{images/legend_c.pdf}
}
\caption{Successful transferability of our poisoned data when the victim uses a training procedure different from the one used by the attacker to optimize the poison data. $\sigma = 0.5$ and $\sigma = 0.25$ are used for smoothing during training and evaluation for MNIST and CIFAR10, respectively.
}
\label{fig:transfer}
\end{figure*}
\begin{table}[tb]
\caption{Decrease in the mean $\ell_2$-distortion needed to generate adversarial examples using PGD attack against the smooth classifier. This shows that the decision boundary of the smooth classifier is closer to the clean test points of the target class after poisoning.
}
\label{Table:empirical_robustness}
\centering
\small
\resizebox{0.75\columnwidth}{!}{
\begin{tabular}{c|c|c|c}
\toprule
& $\sigma$ & Clean data & Poisoned data\\
\midrule
\multirow{3}{*}{MNIST} & 0.25 & 3.271$\pm$0.10 & {\bf1.339}$\pm$0.16 \\
& 0.5 & 3.637$\pm$0.15 & {\bf2.170}$\pm$0.09 \\
& 0.75 & 3.961$\pm$0.18 & {\bf2.213}$\pm$0.31 \\
\midrule
\multirow{2}{*}{CIFAR10} &0.25 & 1.754$\pm$0.17 & {\bf0.132}$\pm$0.04 \\
&0.5 & 1.996$\pm$0.09 & {\bf0.367}$\pm$0.06 \\
\bottomrule
\end{tabular}
}
\end{table}
\subsection{Poisoning SmoothAdv \cite{salman2019provably}}
Here we present the results of our attack on models trained with SmoothAdv. To yield models with high certified robustness, this training method requires the model parameters to be optimized using adversarial training of the smoothed classifier. We used a 2-step PGD attack to obtain an adversarial example for each point in the batch and a single noisy instance of the adversarial example during adversarial training. Although using a larger $k$ improves the certified robustness guarantees, we used $k=1$ to save the computational time required for adversarial training. For bilevel training, we followed a similar procedure to generate adversarial examples for clean and poison data. The adversarial examples are then used as the data for the lower-level problem of Eq.~(\ref{Eq:Bilevel}) to perform GA training for optimizing the network parameters. The batch of adversarial examples is recomputed against the updated model after each step of bilevel training. Note that this is an approximation to the actual solution of the minimax problem that must be solved in the lower-level to generate poison data against SmoothAdv. However, the effectiveness of the attack (results in Table~\ref{Table:smoothadv_attack}) suggests that our approximation works well in practice and that the certified robustness guarantees achieved by SmoothAdv can be degraded by poisoning.
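As an illustrative sketch of the inner attack step (ours; the actual procedure applies PGD to the smoothed classifier's loss over noisy samples), a 2-step $\ell_2$ PGD update with projection back onto the $\epsilon$-ball around the clean input can be written as:

```python
import numpy as np

def pgd_l2(x, grad_fn, eps=1.0, steps=2):
    """Generic l_2 PGD sketch: ascend along the normalized loss gradient,
    then project back onto the l_2 ball of radius eps around the clean
    input x. grad_fn(x_adv) stands in for dLoss/dx of the (smoothed)
    classifier's loss."""
    x_adv = x.copy()
    alpha = 2.0 * eps / steps                 # common step-size heuristic
    for _ in range(steps):
        g = grad_fn(x_adv)
        g = g / (np.linalg.norm(g) + 1e-12)   # normalized ascent direction
        x_adv = x_adv + alpha * g
        delta = x_adv - x
        norm = np.linalg.norm(delta)
        if norm > eps:                         # l_2 projection
            x_adv = x + delta * (eps / norm)
    return x_adv
```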
\subsection{Effect of the imperceptibility constraint}
Here we evaluate the effect of using different values of the perturbation strength $\epsilon$, which controls the maximum permissible distortion in Eq.~(\ref{Eq:Bilevel}). We use $\sigma = 0.25$ for smoothing and GA-based training to generate and evaluate the attack. The results are summarized in Fig.~\ref{fig:acr_vs_epsilon}, which shows that the ACR of the target class decreases as $\epsilon$ increases, rendering the certification guarantees useless. This is expected since a larger $\epsilon$ creates a larger distribution shift between the target-class data in the training and test sets. However, a larger permissible distortion also makes the attack easier to detect by inspection, which is undesirable for an attacker who wants to evade detection.
\subsection{Transferability of poisoned data}
Here we report the performance of the models trained on the poison data using different training procedures than the one assumed by the attacker for crafting poison data. We used $k=1$ and 2 steps of PGD attack to generate adversarial examples for SmoothAdv and $k=16$ for MACER during retraining.
The poison data generated against MACER was optimized using $k=2$.
The results are summarized in Fig.~\ref{fig:transfer}, which show that poisoned data optimized against any of the robust training procedures causes a significant reduction in the certified robustness of models trained with a different training method. Interestingly, poisoned data optimized against GA is extremely effective against the other methods, considering that it is the simplest of the three. The successful transferability of the poisoned data
across different training methods shows how brittle these methods can be when faced with a poisoned dataset.
We observe a similar success in transferability of the poison data to models with different architectures (Appendix~\ref{app:transfer_architecture}).
\vspace{-0.2cm}
\subsection{Empirical robustness of poisoned classifiers}\label{sec:exp_emp_robustness}
\vspace{-0.1cm}
Finally, we report the empirical robustness of the smoothed classifier where the base classifier is trained on clean and poisoned data using GA.
The poisoned data is generated against GA training in the lower-level, as in Eq.~(\ref{Eq:Bilevel}). In Table~\ref{Table:empirical_robustness} we report the mean $\ell_2$-distortion required to generate an adversarial example with the PGD attack \cite{salman2019provably} against the smoothed classifier, computed over 200 and 100 randomly sampled test points of the target class from MNIST and CIFAR10, respectively.
We observe that our poisoning leads to a decrease in the empirical robustness of the smoothed classifier on clean test data.
This supports our hypothesis that the decision boundary of the smoothed classifier must change in order to reduce the certified radius of nonlinear classifiers, just as for linear classifiers (Fig.~\ref{fig:linear}).
\vspace{-0.1cm}
\section{Conclusion}
\vspace{-0.15cm}
Certified robustness has emerged as a gold standard to gauge with certainty the susceptibility of machine learning models to test-time attacks.
In this work, we showed that these guarantees can be rendered ineffective by our bilevel optimization based data poisoning attack that adds imperceptible perturbations to the points of the target class.
Unlike previous data poisoning attacks, our attack can reduce the ACR of an entire target class and is effective even against models trained using methods that have been shown to improve certified robustness. Our results suggest that data quality is a crucial factor in achieving high certified robustness guarantees, but one that is overlooked by current approaches.
\vspace{-0.15cm}
\section{Acknowledgments}
\vspace{-0.15cm}
This work was supported by the NSF EPSCoR-Louisiana Materials Design Alliance (LAMDA) program \#OIA-1946231 and by LLNL Laboratory Directed Research and Development project 20-ER-014 (LLNL-CONF-817233).
This work was performed under the auspices of the U.S. Department of Energy by the Lawrence Livermore National Laboratory under Contract No. DE-AC52-07NA27344, Lawrence Livermore National Security, LLC.\footnote{The views and opinions of the authors do not necessarily reflect those of the U.S. government or Lawrence Livermore National Security, LLC neither of whom nor any of their employees make any endorsements, express or implied warranties or representations or assume any legal liability or responsibility for the accuracy, completeness, or usefulness of the information contained herein.}
\clearpage
{\small
\bibliographystyle{ieee_fullname}
\section{Introduction}
Data poisoning \cite{biggio2012poisoning,jagielski2018manipulating,shafahi2018poison,steinhardt2017certified,zhu2019transferable}
is a training-time attack where the attacker is assumed to have access to the training data on which the victim will train the model.
The attacker can modify the training data
such that the model trained on this poisoned data performs as the attacker desires.
The data-hungry nature of modern machine learning methods makes them vulnerable to poisoning attacks. Attackers can place poisoned data online and wait for it to be scraped by victims trying to increase the size of their training sets.
Another easy target for data poisoning is data collection by crowdsourcing, where malicious users can corrupt the data they contribute. In most cases, an attacker can modify only certain parts of the training data, such as changing the features or labels of a specific class or modifying a small subset of the data from all classes.
In this work, we assume the attacker wants to affect the performance of the victim's models on a target class and modifies only the features of the points belonging to that class (without affecting the labels). To evade detection, the attacker is constrained to only add imperceptibly small perturbations to the points of the target class.
Many previous works \cite{munoz2017towards,shafahi2018poison,huang2020metapoison,koh2017understanding,zhu2019transferable,chen2017targeted,ji2017backdoor,turner2018clean} have shown the effectiveness of poisoning in affecting the accuracy of models trained on poisoned data compared to the accuracy achievable by training with clean data. In most works, the victim is assumed to use standard training by minimizing the empirical loss on the poisoned data to train the models and thus the attack is optimized to hurt the accuracy of standard training.
However, recent research on test-time evasion attacks \cite{carlini2017adversarial,athalye2018obfuscated,uesato2018adversarial,bulusu2020anomalous} suggests that models trained with standard training are not robust to adversarial examples, making the assumption of victim relying on standard training to train the models for deployment questionable.
\begin{figure*}[tb]
\centering{\includegraphics[width=0.99\textwidth]{images/overview_c.pdf}}
\caption{Overview of our poisoning against certified defenses (PACD) attack, which generates poisoned data to reduce the certified robustness of the victim's model trained with methods such as Gaussian data augmentation (GA) \cite{cohen2019certified}, SmoothAdv \cite{salman2019provably}, and MACER \cite{zhai2020macer} on a target class.
}
\label{fig:overview}
\end{figure*}
Thus, in a realistic scenario where the aim of the victim is to deploy the model, it is better to assume that the victim will rely on training procedures that yield classifiers which are provably robust to test-time attacks.
Several recent works have proposed methods for training certifiably robust models whose predictions are guaranteed to be constant in a neighbourhood of a point. However, many of these methods \cite{raghunathan2018semidefinite,gowal2018effectiveness,huang2019achieving,xu2020automatic} do not scale to deep neural networks or large datasets, due to their high complexity. Moreover, the effect of training data quality on the performance of these certified defenses at test time remains largely unexplored.
Recently, randomized smoothing (RS) based certification methods \cite{lecuyer2019certified,li2019certified,cohen2019certified} were shown to be scalable to deep neural networks and high dimensional datasets enabling researchers to propose training procedures \cite{salman2019provably,zhai2020macer} that lead to models with high certified robustness.
Thus, we assume that a victim will rely on RS based certification methods to measure the certified robustness and use RS based training procedures to train the models.
The fact that a victim can train with a method that improves certified adversarial robustness is an immediate challenge for current poisoning attacks which optimize the poison data to affect the accuracy of models trained with standard training.
Table~\ref{Table:difficulty_of_poisoning} shows that poisons optimized against standard training can significantly reduce the accuracy of the victim's model (left to right) when the victim also uses standard training (1st and 5th row). However, this poison data is rendered ineffective when the victim uses a certifiably robust training method such as \cite{cohen2019certified,salman2019provably,zhai2020macer}.
\emph{Are certified defenses robust to data poisoning?}
We study this question and demonstrate that data poisoning is a serious concern even for certified defenses.
\begin{table}
\caption{Failure of traditional data poisoning attacks optimized against standard training in affecting the test accuracy (of the target class) of models trained with certifiably robust training procedures. Details of the experiment are presented in Appendix~\ref{app:standard_acc}. The certifiably robust training methods \cite{cohen2019certified,salman2019provably,zhai2020macer} are trained with $\sigma = 0.25$ and the accuracies of their base classifiers are reported.
}
\label{Table:difficulty_of_poisoning}
\centering
\small
\resizebox{0.9\columnwidth}{!}{
\begin{tabular}{c|c|c|c}
\toprule
&\multirow{1}{*}{\makecell{Training method }} & \multicolumn{1}{c|}{\makecell{Model trained \\ on clean data}} & \multicolumn{1}{c}{\makecell{Model trained \\ on poison data}} \\
\midrule
\multirow{4}{*}{\STAB{\rotatebox[origin=c]{90}{MNIST}}} &
Standard & 99.28$\pm$0.01 & 60.08$\pm$12.6 \\
&\makecell{GA\cite{cohen2019certified}} & 98.99$\pm$0.14 & 98.31$\pm$1.65\\
&SmoothAdv \cite{salman2019provably}& 99.18$\pm$0.23 & 99.31$\pm$0.29\\
&MACER \cite{zhai2020macer}& 99.21$\pm$0.56 & 98.31$\pm$0.58\\
\midrule
\midrule
\multirow{4}{*}{\STAB{\rotatebox[origin=c]{90}{CIFAR10}}} &
Standard & 92.71$\pm$1.31 & 0.36$\pm$0.37 \\
&\makecell{GA \cite{cohen2019certified}} & 88.84$\pm$2.39 & 88.38$\pm$2.13\\
&SmoothAdv \cite{salman2019provably} & 79.48$\pm$2.69 & 74.95$\pm$3.45\\
&MACER \cite{zhai2020macer}& 87.12$\pm$1.17 & 88.54$\pm$4.52\\
\bottomrule
\end{tabular}
}
\end{table}
We propose a novel data poisoning attack that can significantly compromise the certified robustness guarantees achievable from training with robust training procedures. We formulate the Poisoning Against Certified Defenses (PACD) attack as a constrained bilevel optimization problem and theoretically analyze its solution for the case when the victim uses linear classifiers. Our theoretical analysis and empirical results suggest that the decision boundary of the smoothed classifier (used for RS) learned from
the poisoned data is significantly different from the one learned from clean data, thereby causing a reduction in certified radius.
Our bilevel optimization based attack formulation is general since it can generate poisoned data against a model trained with any certifiably robust training method (lower-level problem) and certified with any certification procedure (upper-level problem). Fig.~\ref{fig:overview} shows the overview of the proposed PACD attack.
Unlike previous poisoning attacks that aim to reduce the accuracy of the models on a small subset of data, our attack can reduce the certified radius of an entire target class.
The poison points generated by our attack have clean labels and imperceptible distortion making them difficult to detect.
The poison data remains effective when the victim trains the models from scratch or uses data augmentation or weight regularization during training. Moreover, the attack points generated against a certified defense are transferable to models trained with other RS based certified defenses and to models with different architectures.
This highlights the importance of training-data quality and curation for obtaining meaningful gains from certified defenses at test time, a factor not considered by current certified defense research.
Our main contributions are as follows
\begin{itemize}
\item We study the problem of using data poisoning attacks to affect the robustness guarantees of classifiers trained using certified defense methods. To the best of our knowledge, this is the first clean label poisoning attack that significantly reduces the certified robustness guarantees of the models trained on the poisoned dataset.
\item We propose a bilevel optimization based attack which can generate poison data against several robust training and certification methods. We specifically use the attack to highlight the vulnerability of randomized smoothing based certified defenses to data poisoning.
\item We demonstrate the effectiveness of our attack in reducing the certifiable robustness obtained using randomized smoothing on models trained with state-of-the-art certified defenses \cite{cohen2019certified,salman2019provably,zhai2020macer}. Our attack reduces the ACR of the target class by more than 30\%.
\end{itemize}
\vspace{-0.3cm}
\section{Background and related work}
{\bf Randomized smoothing:}
The RS procedure \cite{cohen2019certified} uses a smoothed version of the original classifier $f:\mathbb{R}^d \rightarrow \mathcal{Y}$ and certifies the adversarial robustness of the new classifier. The smoothed classifier, $g(x) = \arg \max_c \mathbb{P}_{\eta \sim \mathcal{N}(0, \sigma^2I)}(f(x+\eta)=c)$, assigns $x$ the class whose decision region $\{x' \in \mathbb{R}^d: f(x') = c\}$ has the largest measure under the distribution $\mathcal{N}(x, \sigma^2I)$, where $\sigma$ is the smoothing parameter. Suppose that while classifying points drawn from $\mathcal{N}(x, \sigma^2I)$, the original classifier $f$ returns the class $c_A$ with probability $p_A = \mathbb{P}(f(x + \eta) = c_A)$ and the ``runner-up'' class $c_B$ with probability $p_B = \max_{c \neq c_A} \mathbb{P}(f(x + \eta) = c)$. Then the prediction of the point $x$ under the smoothed classifier $g$ is robust within the radius $r(g;\sigma) = \frac{\sigma}{2}(\Phi^{-1}(p_A) - \Phi^{-1}(p_B)),$ where $\Phi^{-1}$ is the inverse CDF of the standard Normal distribution. In practice, Monte Carlo sampling is used to estimate a lower bound on $p_A$ and an upper bound on $p_B$, since it is difficult to compute their exact values.
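As a small illustration (our own sketch, using exact probabilities rather than the Monte Carlo bounds), the certified radius formula can be evaluated directly:

```python
from statistics import NormalDist  # stdlib normal CDF / inverse CDF

def certified_radius(p_a, p_b, sigma):
    """Certified l2 radius r(g; sigma) = (sigma/2) * (Phi^{-1}(p_A) - Phi^{-1}(p_B))
    of the smoothed classifier at a point, given the top-class probability p_A,
    the runner-up probability p_B, and the smoothing level sigma."""
    if p_a <= p_b:  # no certificate possible; treat as abstain (radius 0)
        return 0.0
    phi_inv = NormalDist().inv_cdf
    return 0.5 * sigma * (phi_inv(p_a) - phi_inv(p_b))
```

For instance, with $p_A = \Phi(1)$ and $p_B = \Phi(-1)$ the radius is exactly $\sigma$, matching the symmetry of the formula.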
Since standard training of the base classifier does not achieve high robustness guarantees, \cite{cohen2019certified} proposed to use GA based training in which the base classifier is trained on Gaussian noise corruptions of the clean data. Recent works \cite{zhai2020macer,salman2019provably} showed that the certified robustness guarantees of RS can be boosted by using different training procedures. In particular, \cite{salman2019provably} proposed to train the base classifier using adversarial training where the adversarial examples are generated against the smoothed classifier.
Although effective at increasing the certified radius, the method can be slow to train due to the requirement of generating adversarial examples against the smoothed classifier at every step. Another recent work \cite{zhai2020macer} proposed a different training procedure that is significantly faster and relies on directly maximizing the certified radius to achieve high robustness guarantees. Due to their effectiveness in improving the certified robustness guarantees of machine learning models, we craft poison data against these methods. A recent attack method \cite{ghiasi2020breaking} showed that it is possible, at test time, to fool a robust classifier into mislabeling an input and issuing an incorrect certificate using perturbations with large $\ell_p$ norm.
Our work is different since we focus on train-time attacks against certified defenses using imperceptibly small perturbations to the poison data.
{\bf Bilevel optimization:}
A bilevel optimization problem has the form $\min_{u \in \mathcal{U}} \xi(u,v^*)\;\mathrm{s.t.}\;v^* = \arg\min_{v\in \mathcal{V}(u)}\;\zeta(u,v)$, where the upper-level problem is a minimization problem with $v$ constrained to be the optimal solution to the lower-level problem (see \cite{bard2013practical}). Our data poisoning attack is a constrained bilevel optimization problem. Although general bilevel problems are difficult to solve, under some simplifying assumptions their solution can be obtained using gradient based methods. Several methods for solving bilevel problems in machine learning have been proposed previously \cite{domke2012generic,pedregosa2016hyperparameter,franceschi2017forward,maclaurin2015gradient,shaban2018truncated,mehra2019penalty} (See Appendix~\ref{app:approxgrad} for an overview). We use the method based on approximating the hypergradient by approximately solving a linear system (ApproxGrad Alg.~\ref{alg:approxgrad} in Appendix~\ref{app:approxgrad}) in this work.
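To illustrate the hypergradient that such solvers approximate, the toy sketch below applies the implicit-function-theorem formula $\mathrm{d}\xi/\mathrm{d}u = \xi_u - \zeta_{uv}\,\zeta_{vv}^{-1}\,\xi_v$ to a scalar bilevel problem where everything is analytic. ApproxGrad approximates the $\zeta_{vv}^{-1}\xi_v$ term by a few iterations on a linear system; here the solve is exact since all quantities are scalars, so this is an illustration of the principle, not of the algorithm itself.

```python
def hypergradient(u, v_star, xi_u, xi_v, zeta_vv, zeta_uv):
    """Implicit-function-theorem hypergradient for a scalar bilevel problem:
    d xi / d u = xi_u - zeta_uv * zeta_vv^{-1} * xi_v, evaluated at v*(u)."""
    q = xi_v(u, v_star) / zeta_vv(u, v_star)   # the (scalar) linear-system solve
    return xi_u(u, v_star) - zeta_uv(u, v_star) * q

# toy problem: lower level zeta = 0.5*(v - u)^2, so v*(u) = u;
# upper level xi = 0.5*v^2, hence d xi / d u = u and the optimum is u = 0
xi_u = lambda u, v: 0.0        # d xi / d u
xi_v = lambda u, v: v          # d xi / d v
zeta_vv = lambda u, v: 1.0     # d^2 zeta / d v^2
zeta_uv = lambda u, v: -1.0    # d^2 zeta / d u d v

u = 3.0
for _ in range(30):            # gradient descent on the upper-level cost
    u -= 0.5 * hypergradient(u, u, xi_u, xi_v, zeta_vv, zeta_uv)
```

The iterate halves at every step and converges to the bilevel optimum $u=0$.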
Previous works \cite{mei2015using,munoz2017towards,mehra2019penalty,huang2020metapoison,carnerero2020regularisation} have shown the effectiveness of solving bilevel optimization problem for data poisoning to affect the accuracy of models trained with standard training. Our work on the other hand proposes a bilevel optimization based formulation to generate a data poisoning attack against RS based certified defenses and shows its effectiveness against state-of-the-art robust training methods.
\section{Poisoning against certified defenses}
Here we present the bilevel formulation of our PACD attack for generating poisoned data to compromise the certified robustness guarantees of the models trained using certified defenses. Specifically, we discuss how to generate poison data against GA \cite{cohen2019certified}, SmoothAdv \cite{salman2019provably} and MACER \cite{zhai2020macer} and affect the certified robustness guarantees obtained using RS.
\subsection{General attack formulation}
Let $\mathcal{D^\mathrm{clean}} = \{(x_i^\mathrm{clean}, y_i^\mathrm{clean})\}_{i=1}^{N_{\mathrm{clean}}}$
be the clean, unalterable portion of the training set. Let $u=\{u_1,...,u_n\}$ denote the attacker's poisoning data which is added to the clean data.
For clean-label attack, we require that each poison example $u_i$ has a limited perturbation, for example, $\|u_i - x_i^\mathrm{base}\| =\|\delta_i\| \leq \epsilon$ from the base data $x_i^\mathrm{base}$ and has the same label $y_i^\mathrm{base}$, for $i=1,...,n$. Thus $\mathcal{D^\mathrm{poison}} = \{(u_i, y_i^\mathrm{base})\}_{i=1}^{N_{\mathrm{poison}}}$.
The goal of the attacker is to find $u$ such that when the victim uses $\mathcal{D^\mathrm{clean}} \bigcup \mathcal{D^\mathrm{poison}}$ to train a classifier,
the certified robustness guarantees of the model on the target class ($\mathcal{D^\mathrm{val}} = \{(x_i^\mathrm{val},y_i^\mathrm{val})\}_{i=1}^{N_\mathrm{val}}$) are significantly diminished compared to a classifier trained on clean data.
The attack can be formulated as follows:
\begin{equation}
\begin{split}
&\min_{u \in \mathcal{U}}\;\; \mathcal{R}(\mathcal{D}^\mathrm{val}; \theta^\ast) \\
\mathrm{s.t.}\;\; \theta^* = \arg&\min_{\theta}\; \mathcal{L}_\mathrm{robust}(\mathcal{D^\mathrm{clean}} \bigcup \mathcal{D^\mathrm{poison}}; \theta).
\end{split}
\label{eq:bilevel_simple}
\end{equation}
The upper-level cost $\mathcal{R}$ denotes a certified robustness metric such as the certified radius from RS.
The goal of the upper-level problem is to compromise the certified robustness guarantees of the trained model, evaluated on the validation data $\mathcal{D}^\mathrm{val}$.
The solution to the lower-level problem $\theta^\ast$ are the parameters of the machine learning model learnt from $\mathcal{D^\mathrm{clean}} \bigcup \mathcal{D^\mathrm{poison}}$ using a robust training method with loss function $\mathcal{L}_\mathrm{robust}$.
Any training method that achieves high certified robustness to test-time attacks, and any certification procedure, can be incorporated into this formulation by changing the lower- and upper-level problems, respectively; this makes the attack formulation broadly applicable.
Recent works \cite{cohen2019certified,salman2019provably,zhai2020macer} have shown RS based methods to be effective at certifying and producing robust classifiers.
The scalability of these methods to large datasets and deep models make them useful for real-world application.
Thus, we focus on using our poisoning attack against these methods.
\subsection{Poison randomized smoothing based defenses}
For an input at test time, RS produces a prediction from the smoothed classifier $g$ and a radius in which this prediction remains constant.
Since the certified radius of a ``hard'' smooth classifier $g$ is non-differentiable, it cannot be directly incorporated in the upper-level of the attack formulation Eq.~(\ref{eq:bilevel_simple}).
To overcome this challenge, we use the ``soft'' smooth classifier $\tilde{g}$ as an approximation; a similar technique has been used in \cite{salman2019provably, zhai2020macer}.
Let $z_{\theta}:X\xrightarrow{}\mathcal{P}(K)$ be a classifier whose last layer is softmax with parameters $\theta$, and let $\sigma > 0$ be the noise level used for smoothing; then the soft smoothed classifier $\Tilde{g}_{\theta}$ of $z_{\theta}$ is
\(\Tilde{g}_{\theta}(x) = \arg \max_{c \in Y} \mathbb{E}_{\eta \sim \mathcal{N}(0,\sigma^2I)}[z^c_{\theta}(x + \eta)].\)
It was shown in \cite{zhai2020macer} that if the ground truth of an input $x$ is $y$ and $\Tilde{g}_{\theta}$ classifies $x$ correctly
then $\tilde{g}_{\theta}$ is provably robust at $x$, with the certified radius $\tilde{r}(\tilde{g}_{\theta}; x, y, \sigma) = \frac{\sigma}{2}[\Phi^{-1}(\mathbb{E}_{\eta}[z^y_{\theta}(x + \eta)]) - \Phi^{-1}(\max_{y' \neq y}\mathbb{E}_{\eta}[z^{y'}_{\theta}(x + \eta)])].$
Assuming $\tilde{r}(\tilde{g}_{\theta}; x, y, \sigma) = 0$ when $x$ is misclassified, the ACR is $\tilde{R}(\tilde{g}_{\theta}; \mathcal{D}, \sigma) = \frac{1}{|\mathcal{D}|}\sum_{(x,y) \in \mathcal{D}} \tilde{r}(\tilde{g}_{\theta}; x, y, \sigma).$
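The soft radius $\tilde{r}$ and the ACR $\tilde{R}$ are straightforward to compute once the noise-averaged softmax outputs are available; a minimal sketch follows, where hypothetical probability vectors stand in for $\mathbb{E}_{\eta}[z_{\theta}(x + \eta)]$.

```python
from statistics import NormalDist

Phi_inv = NormalDist().inv_cdf  # inverse CDF of the standard Normal

def soft_radius(mean_probs, y, sigma):
    """Soft certified radius from a noise-averaged softmax vector that
    stands in for E_eta[z_theta(x + eta)]; 0 when x is misclassified."""
    p_y = mean_probs[y]
    p_runner_up = max(p for c, p in enumerate(mean_probs) if c != y)
    if p_y <= p_runner_up:
        return 0.0
    return 0.5 * sigma * (Phi_inv(p_y) - Phi_inv(p_runner_up))

def acr(batch, sigma):
    """Average certified radius over (mean_probs, label) pairs."""
    return sum(soft_radius(p, y, sigma) for p, y in batch) / len(batch)
```

For example, with $\sigma = 0.5$ a correctly classified point with averaged probabilities $(0.7, 0.2, 0.1)$ gets radius $\tfrac{0.5}{2}(\Phi^{-1}(0.7)-\Phi^{-1}(0.2))\approx 0.34$, while a misclassified point contributes $0$ to the ACR.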
Since $\tilde{R}$ is differentiable we can use it in the upper-level of Eq.~(\ref{eq:bilevel_simple}). The lower-level problem can be any robust training procedure and we focus on using \cite{cohen2019certified,zhai2020macer,salman2019provably} in this work.
{\bf Poisoning against GA \cite{cohen2019certified}.}
We start by showing how to generate poison data against models trained with GA \cite{cohen2019certified}, which was shown to yield higher certified robustness compared to models trained with standard training. In this method the classifier $f_{\theta}$ is obtained by optimizing the loss function $\mathcal{L}_\mathrm{GaussAug}(\mathcal{D};\theta,\sigma) = \frac{1}{|\mathcal{D}|}\sum_{(x_i,y_i) \in \mathcal{D}} l_{ce}(x_i+\eta,y_i;\theta)$, where $l_{ce}$ is the cross entropy loss and $\eta\sim \mathcal{N}(0,\sigma^2 I)$. To control the perturbation added to the poison data we use the $\ell_\infty$-norm here, but other norms can also be used. The bilevel formulation to generate poison data to reduce the certified robustness guarantees obtained using RS for a classifier trained with GA is as follows.
\begin{equation}
\begin{split}
&\min_{u}\;\; \tilde{R}(\tilde{g}_{\theta^*}; \mathcal{D}^\mathrm{val}, \sigma) \\
&\mathrm{s.t.}\;\; \|\delta_i\|_{\infty} \leq \epsilon,\;\; i=1,...,n,\;\;\mathrm{and} \\
\theta^* = \arg&\min_{\theta}\; \mathcal{L}_\mathrm{GaussAug}(\mathcal{D^\mathrm{clean}} \bigcup \mathcal{D^\mathrm{poison}}; \theta,\sigma).
\end{split}
\label{Eq:Bilevel}
\end{equation}
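For concreteness, a toy sketch of $\mathcal{L}_\mathrm{GaussAug}$ for a hypothetical 1-D logistic model with one noise draw per point; the model and data are illustrative stand-ins, not the networks used in our experiments.

```python
import math
import random

def gauss_aug_loss(w, b, data, sigma, rng):
    """L_GaussAug for a toy 1-D logistic model: cross-entropy evaluated on
    one Gaussian-corrupted copy of each training point (binary labels in
    {0, 1} here instead of class indices)."""
    total = 0.0
    for x, y in data:
        x_noisy = x + rng.gauss(0.0, sigma)          # eta ~ N(0, sigma^2)
        p = 1.0 / (1.0 + math.exp(-(w * x_noisy + b)))
        p = min(max(p, 1e-12), 1.0 - 1e-12)          # numerical safety
        total -= y * math.log(p) + (1 - y) * math.log(1.0 - p)
    return total / len(data)

data = [(-1.0, 0), (1.0, 1)]
good = gauss_aug_loss(5.0, 0.0, data, 0.25, random.Random(0))   # separating model
bad = gauss_aug_loss(-5.0, 0.0, data, 0.25, random.Random(0))   # flipped model
```

With identical noise draws, the separating model incurs a much smaller augmented loss than the flipped one, as expected.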
\begin{figure*}[tb]
\centering{\includegraphics[width=0.99\textwidth]{images/poisoning_two_cases_cropped.pdf}}
\caption{Analytical solutions of problem (\ref{eq:bilevel_linear}) with linear classifiers. The poison distribution $(P^{-}_\mathrm{poison})$ can change the decision boundary (broken line) and reduce the ACR of the clean distribution $(P^{-})$ in two ways (Cases 1 and 2). Perturbation is exaggerated for illustration.}
\label{fig:linear}
\end{figure*}
\begin{algorithm
\caption{Poisoning GA based certified defense \cite{cohen2019certified}
}
\label{alg:main}
\textbf{Input}: $\mathcal{D}^\mathrm{clean}, \mathcal{D}^\mathrm{base}, \mathcal{D}^\mathrm{val}, \mathrm{perturbation \;strength}\; \epsilon, \\ \mathrm{noise \;level}\; \sigma, \mathrm{number\; of\; noise\; samples}\; k, \\ \mathrm{inverse\; temperature}\;\alpha, \mathrm{total\; epochs}\; P,\\ \mathrm{lower-level\; epochs}\; T_1, \mathrm{epochs\; for \;linear\; system}\;T_2$ \\
\textbf{Output}: $\mathcal{D}^\mathrm{poison}$
\begin{algorithmic}
\STATE{$\mathcal{D}^\mathrm{poison} := \mathcal{D}^\mathrm{base}$}
\FOR{$p=0,\;\cdots\;,P\textrm{-}1$}
\STATE{Sample a mini-batch $(x^\mathrm{clean}, y^\mathrm{clean}) \;\sim\; \mathcal{D}^\mathrm{clean}$}
\STATE{Sample a mini-batch of $n$ points $(x^\mathrm{val}, y^\mathrm{val}) \;\sim\; \mathcal{D}^\mathrm{val}$}
\STATE{Sample a mini-batch $(x^\mathrm{poison}, y^\mathrm{poison}) \;\sim\;\mathcal{D}^\mathrm{poison}$}
\STATE{Pick the base samples for poison data $(x^\mathrm{base}, y^\mathrm{base})$}
\STATE{For each $x^\mathrm{val}_{i}$, sample $k$ i.i.d. Gaussian samples}
\STATE{$x^\mathrm{val}_{i_1}, \cdots , x^\mathrm{val}_{i_k} \sim \mathcal{N}(x^\mathrm{val}_{i}, \sigma^2I)$}
\STATE{Compute $\tilde{z}_{\theta}(x^\mathrm{val}_i) \xleftarrow{} \frac{1}{k} \sum^k_{j=1} \alpha z_{\theta}(x^\mathrm{val}_{i_j})$ for each $i$}
\STATE{$\mathcal{G_{\theta}} := \{(x^\mathrm{val}_i, y^\mathrm{val}_i): y^\mathrm{val}_i = \arg \max_{c \in \mathcal{Y}}\; \tilde{z}^c_{\theta}(x^\mathrm{val}_i)\}$}
\STATE{}
\STATE{For each $(x_i, y_i) \in \mathcal{G_{\theta}}$, compute $\tilde{y}_i$}
\STATE{$\tilde{y}_i \xleftarrow{} \arg \max_{c \in \mathcal{Y}\textbackslash\{y_i\}} \tilde{z}^c_{\theta}(x_i)$}
\STATE{For each $(x_i, y_i) \in \mathcal{G_{\theta}}$, compute $\tilde{r}(x_i, y_i)$}
\STATE{$\tilde{r}(x_i, y_i) = \frac{\sigma}{2}(\Phi^{-1}(\tilde{z}^{y_i}_{\theta}(x_i)) - \Phi^{-1}(\tilde{z}^{\tilde{y}_i}_{\theta}(x_i)))$}
\STATE{}
\STATE{$\xi := \frac{1}{n}\sum_{(x_i, y_i) \in \mathcal{G_{\theta}}}\tilde{r}(x_i, y_i)$}
\STATE{$\zeta := \mathcal{L}_\mathrm{GaussAug}((x^\mathrm{clean}, y^\mathrm{clean}) \bigcup (x^\mathrm{poison}, y^\mathrm{poison}), \sigma)$}
\STATE{$(x^\mathrm{poison}, y^\mathrm{poison}):=$ApproxGrad$(\xi, \zeta, 1, T_1, T_2,\epsilon,x^\mathrm{base})$}
\STATE{Update $\mathcal{D}^\mathrm{poison}$ with $(x^\mathrm{poison}, y^\mathrm{poison})$}
\ENDFOR
\end{algorithmic}
\end{algorithm}
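The clean-label constraint ($\|\delta_i\|_\infty \leq \epsilon$ plus a valid pixel range), which ApproxGrad enforces after each poison update via $\epsilon$ and $x^\mathrm{base}$, amounts to a simple projection; a minimal sketch, with the function name and flat-list image representation being illustrative.

```python
def project_poison(u, x_base, eps, lo=0.0, hi=1.0):
    """Project a (flattened) poison image onto the l_inf eps-ball around
    its base image, then into the valid pixel range, enforcing the
    clean-label constraint ||u - x_base||_inf <= eps."""
    out = []
    for ui, bi in zip(u, x_base):
        ui = min(max(ui, bi - eps), bi + eps)  # l_inf ball around the base
        out.append(min(max(ui, lo), hi))       # valid pixel range
    return out
```

Applied coordinate-wise after every poison-update step, this keeps the perturbations imperceptible regardless of how aggressive the hypergradient step was.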
{\bf Poisoning against MACER \cite{zhai2020macer}.} Another recent work proposed a method for robust training by maximizing the certified radius (MACER). Their approach uses a loss function which is a combination of the classification loss and the robustness loss of the soft smoothed classifier $\tilde{g}_\theta$. In particular, the loss of the smoothed classifier on a point $(x,y)$ is given by $l_{macer}(\tilde{g}_\theta; x, y) = -\mathrm{log} \;\hat{z}^y_{\theta}(x) + \frac{\lambda \sigma}{2} \mathrm{max}\{\gamma - \tilde{\xi}_{\theta}(x, y), 0\}\cdot\mathbf{1}_{\{\tilde{g}_{\theta}(x)=y\}}$, where $\eta_1, ..., \eta_k$ are $k$ i.i.d. samples from $\mathcal{N}(0, \sigma^2\mathbf{I})$, $\hat{z}^y_{\theta}(x) = \frac{1}{k}\Sigma^k_{j=1} z^y_{\theta}(x + \eta_j)$ is the empirical expectation of $z_{\theta}(x + \eta)$, $\tilde{\xi}_{\theta}(x, y) = \Phi^{-1}(\hat{z}^y_{\theta}(x)) - \Phi^{-1}(\mathrm{max}_{y' \neq y} \hat{z}^{y'}_{\theta}(x))$, $\gamma$ is the hinge factor, and $\lambda$ balances the accuracy and robustness trade-off. Using this we can define $\mathcal{L}_\mathrm{macer}(\mathcal{D};\theta,\sigma) = \frac{1}{|\mathcal{D}|}\sum_{(x_i,y_i) \in \mathcal{D}} l_{macer}(\tilde{g}_{\theta};x_i,y_i)$. To generate poison data that reduces the robustness guarantees of a classifier trained with MACER, we use the loss $\mathcal{L}_\mathrm{macer}(\mathcal{D};\theta,\sigma)$ in the lower-level problem in Eq.~(\ref{Eq:Bilevel}).
{\bf Poisoning against SmoothAdv \cite{salman2019provably}.} It was shown that the certified robustness guarantees obtained from RS can be improved by training the classifier using adversarial training, with adversarial examples generated against the smooth classifier. In particular, the classifier trained with SmoothAdv optimizes the following objective for a point $(x,y)$:
$\min_{\theta}\max_{\|x' - x\|_2 \leq \alpha} -\mathrm{log} \frac{1}{k}\Sigma^k_{j=1} z^y_{\theta}(x' + \eta_j)$, where $\eta_1, ..., \eta_k$ are $k$ i.i.d. samples from $\mathcal{N}(0, \sigma^2\mathbf{I})$ and $\alpha$ is the permissible $\ell_2$ distortion to $x$. To generate poison data against SmoothAdv we use this objective as the lower-level problem in Eq.~(\ref{Eq:Bilevel}). To make the problem tractable for bilevel solvers, we use an approximation to the min-max problem: we first compute the adversarial example
$x' = \arg\max_{\|x' - x\|_2 \leq \alpha} -\mathrm{log} \frac{1}{k}\Sigma^k_{j=1} z^y_{\theta}(x' + \eta_j)$ using PGD attack on the points in $\mathcal{D^\mathrm{clean}} \bigcup \mathcal{D^\mathrm{poison}}$ and then use these examples as our new dataset to train the model parameters in the lower-level as in Eq.~(\ref{Eq:Bilevel}). Specifically, the lower-level problem in Eq.~(\ref{Eq:Bilevel}) becomes $\arg\min_{\theta}\; \mathcal{L}_\mathrm{GaussAug}(\mathcal{D}^\mathrm{clean}_{adv} \bigcup \mathcal{D}^\mathrm{poison}_{adv}; \theta,\sigma)$ where $\mathcal{D}_{adv}$ denotes the adversarial examples generated against $\tilde{g}_{\theta}$. We update $\mathcal{D}_{adv}$ in every step of the bilevel optimization.
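The inner maximization can be approximated with standard $\ell_2$-projected gradient ascent; a generic sketch follows, in which the gradient callback is a hypothetical stand-in for backpropagation through the smoothed classifier's loss.

```python
import math

def l2_pgd(x, grad_fn, alpha, step, n_steps):
    """l2-constrained PGD: take normalized ascent steps on the loss, then
    project back onto the alpha-ball around the clean point x.
    grad_fn(x') returns d loss / d x' (a list of floats)."""
    xp = list(x)
    for _ in range(n_steps):
        g = grad_fn(xp)
        gn = math.sqrt(sum(gi * gi for gi in g)) or 1.0
        xp = [xi + step * gi / gn for xi, gi in zip(xp, g)]   # normalized ascent
        d = [a - b for a, b in zip(xp, x)]
        dn = math.sqrt(sum(di * di for di in d))
        if dn > alpha:                                        # project to alpha-ball
            xp = [b + alpha * di / dn for b, di in zip(x, d)]
    return xp

# toy analytic loss 0.5*||x'||^2, so the gradient is x' itself
x_adv = l2_pgd([1.0, 0.0], lambda v: list(v), alpha=0.5, step=0.25, n_steps=10)
```

On this toy loss the iterate climbs radially outward and settles on the boundary of the $\alpha$-ball, as the projection dictates.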
\vspace{-0.1cm}
\subsection{Generation and evaluation of poisoning attack}
In this work, we focus on creating a poisoned set to compromise the certified adversarial robustness guarantees of all points in a target class. We initialize the poison data with clean data from the target class (i.e., base data) and optimize the perturbation to be added to each point by solving the bilevel problem in Eq.~(\ref{Eq:Bilevel}) for an attack against GA based training. We use a small value of $\epsilon$ to ensure the perturbations added are imperceptible and the poison points have clean labels when inspected visually (See Fig.~\ref{fig:attack_egs} in the Appendix). The bilevel optimization is solved using the ApproxGrad algorithm (Alg.~\ref{alg:approxgrad} in Appendix~\ref{app:approxgrad}). The full attack algorithm for generating poison data against GA \cite{cohen2019certified} is shown in Alg.~\ref{alg:main}. Attacks against other methods are generated similarly by replacing the lower-level objective ($\zeta$ in Alg.~\ref{alg:main}) with the appropriate loss function for MACER~\cite{zhai2020macer} and SmoothAdv~\cite{salman2019provably}. We evaluate the effect of poisoning by training the models from scratch using GA, MACER and SmoothAdv on their respective poisoned sets and report ACR and approximate certified accuracy (points with certified $\ell_2$ radius greater than zero) on the clean test points from the target class. Previous works \cite{huang2020metapoison, shafahi2018poison} have shown the effectiveness of poisoning by lowering the accuracy on specific target points from the test set. Our attack is also effective, under a similar setting, at reducing the certified radius for target points (Appendix~\ref{app:partial_poisoning}).
\vspace{-0.1cm}
\subsection{Analysis of poisoning with linear classifiers}
To gain a deeper insight into the effect of poisoning, we analyze the
analytical solution of our bilevel problem for the case of linear classifiers trained with GA.
Suppose we have a one-dimensional two-class problem and the attacker's goal is to poison the distribution of the \emph{negative} class $P^{-}$ so that the ACR ($\tilde{R}$) of the poisoned model on the test points of the \emph{negative} class is reduced.
Let $\epsilon$ be the maximum permissible perturbation that can be added by the attacker to the points of the class $P^{-}$.
We do not assume any specific distributions for $P^{+}$ and $P^{-}$ here, but only that $\sum_i x_i^{-}<\sum_i x_i^{+}$ without loss of generality. Here $x_i^{+}$ and $x_i^{-}$ refer to the training points of the positive and the negative class, respectively.
A linear classifier in one-dimension is either $f(x)=1\;\mathrm{iff}\;x \geq t$ or $f(x)=-1\;\mathrm{iff}\;x \leq t$ parameterized by the threshold $t$.
For linear classifiers,
the smoothed classifier $g$ is the same as the unsmoothed classifier $f$ and the certified radius for a point is the distance to the decision boundary \cite{cohen2019certified}.
To make the problem analytically tractable, we use the squared-loss at the lower-level i.e., $f(x) = wx + b$ and $l(x,y;f)=(f(x) - y)^2$.
The bilevel problem for poisoning is as follows
\begin{equation}
\begin{split}
&\min_{u}\;\; \mathbb{E}_{P^{-}}[\max(\mathrm{sign}(w^\ast)(-b^\ast/w^\ast-x),0)]\\
&\mathrm{s.t.}\;\; -\epsilon \leq u_i - x_i^{-} \leq \epsilon, \;\; \mathrm{for \;\; i = 1, ..., n}\\
w^\ast,&b^\ast = \arg\min_{w, b}\; \frac{1}{2n}\big[\sum_{i=1}^{n}l(x_i^{+},1) + \sum_{i=1}^{n}l(u_i,-1)\big].
\end{split}
\label{eq:bilevel_linear}
\end{equation}
\begin{theorem}
\label{thm:linear}
If the perturbation is large enough, i.e., $\epsilon \geq \frac{\sum_i x_i^{+} - \sum_i x_i^{-}}{n}$ then there are two locally optimal solutions to (\ref{eq:bilevel_linear}) which are $u_i = x_i^{-} - \epsilon$ (Case 1) and $u_i = x_i^{-} + \epsilon$ (Case 2) for $i=1,...,n$.
Otherwise, there is a unique globally optimal solution $u_i = x_i^{-} - \epsilon$ (Case 1) for $i=1,...,n$.
\end{theorem}
Thus, optimal poisoning is achieved by shifting points of the $P^{-}$ class either towards left or right by the maximum amount $\epsilon$ (Fig.~\ref{fig:linear} and Appendix~\ref{app:isotropic_gaussian}). Moreover, the effect of poisoning an $\alpha$ fraction of points from the $P^{-}$ class with maximum permissible perturbation $\Tilde{\epsilon}$ is same as that of poisoning all points of $P^{-}$ class with $\epsilon = \alpha \Tilde{\epsilon}$ (Corollary~\ref{cor:partial_poisoning} in Appendix~\ref{app:proofs}).
Although a direct analysis is intractable for non-linear cases, we empirically observed that our attack moved the decision boundary of neural networks closer to the points of the target class as measured by the mean distance of points to the decision boundary of the smoothed classifier (Sec.~\ref{sec:exp_emp_robustness}).
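Theorem~\ref{thm:linear} can be checked numerically. The sketch below, on hypothetical toy data, fits the least-squares linear classifier in closed form and compares the ACR of the clean negative points under the two shifts; here $\epsilon$ is below the threshold $\frac{\sum_i x_i^{+} - \sum_i x_i^{-}}{n}$, so only the leftward shift (Case 1) should reduce the ACR.

```python
def fit_ls(xs, ys):
    """Closed-form least-squares fit of f(x) = w*x + b to labels in {+1, -1}."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    w = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return w, my - w * mx

def acr_neg(neg, w, b):
    """Mean distance of correctly classified negative points to the boundary
    t = -b/w; for linear classifiers this equals the certified radius of the
    smoothed classifier."""
    t = -b / w
    return sum(max(t - x, 0.0) for x in neg) / len(neg)

pos, neg, eps = [1.0, 2.0, 3.0], [-3.0, -2.0, -1.0], 0.5
ys = [1.0] * 3 + [-1.0] * 3
base = acr_neg(neg, *fit_ls(pos + neg, ys))                        # no poisoning
case1 = acr_neg(neg, *fit_ls(pos + [x - eps for x in neg], ys))    # shift left
case2 = acr_neg(neg, *fit_ls(pos + [x + eps for x in neg], ys))    # shift right
```

Since $\epsilon = 0.5$ is below the threshold (here $4$), the leftward shift drags the boundary toward the clean negatives and lowers their ACR, while the rightward shift does not, matching the unique global optimum of Case 1.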
\vspace{-0.1cm}
\section{Experiments}\label{sec:experiments}
In this section we present the results of our PACD\footnote{The code is available at \url{https://github.com/akshaymehra24/poisoning_certified_defenses}} attack for poisoning deep neural networks trained using methods that make the model certifiably robust to test-time attacks. All the results presented here are averaged over models trained with five random initializations.
We report the average certified radius (ACR) as the average of the certified radius obtained from the RS based certification procedure of \cite{cohen2019certified} for correctly classified points. Certified radius is zero for misclassified and abstained points.
The approximate certified accuracy (ACA) is the fraction of points correctly classified by the smoothed classifier ($\ell_2$ radius greater than zero). All results are reported over 500 randomly sampled images from the target classes.
We use the same value of $\sigma$ for smoothing during attack, retraining and evaluation. We compare our results to watermarking \cite{shafahi2018poison}
which has been used previously for clean label attacks (opacity 0.1 followed by clipping to make $\ell_{\infty}$ distortion equal to $\epsilon$),
and show that poison data generated using the bilevel optimization is significantly better at reducing the average certified radius.
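The two reported metrics can be computed directly from the per-point certified radii returned by the certification procedure (with radius $0$ for misclassified and abstained points); a minimal sketch:

```python
def summarize(radii):
    """ACR and approximate certified accuracy (ACA, in percent) from
    per-point certified radii, where a radius of 0 encodes a
    misclassified or abstained point."""
    acr = sum(radii) / len(radii)
    aca = 100.0 * sum(1 for r in radii if r > 0) / len(radii)
    return acr, aca
```

For instance, radii $(0.5, 0, 1.0, 0.5)$ give an ACR of $0.5$ and an ACA of $75\%$.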
\begin{table}[tb]
\caption{Decrease in certified radius and certified accuracy of models trained with Gaussian augmentation \cite{cohen2019certified} on poison data compared to those of models trained on clean and watermarked data.
}
\label{Table:cohen_attack}
\centering
\small
\resizebox{0.95\columnwidth}{!}{
\begin{tabular}{c|c|c|cc}
\toprule
& \multirow{2}{*}{$\sigma$} & \multirow{2}{*}{Data} & \multicolumn{2}{c}{\makecell{Certified robustness of target class}}\\
& & & ACR & ACA(\%) \\
\midrule
\multirow{9}{*}{\STAB{\rotatebox[origin=c]{90}{MNIST}}} &
\multirow{3}{*}{0.25} & Clean & 0.896$\pm$0.01 & 98.92$\pm$0.32 \\
& & Watermarked & 0.908$\pm$0.01 & 99.24$\pm$0.29 \\
& & Poisoned & {\bf0.325}$\pm$0.10 & {\bf71.96}$\pm$8.28 \\
\cmidrule{2-5}
&\multirow{3}{*}{0.5} & Clean & 1.481$\pm$0.02 & 99.16$\pm$0.34\\
& & Watermarked & 1.514$\pm$0.06 & 99.12$\pm$0.47\\
& & Poisoned & {\bf0.733}$\pm$0.10 & {\bf90.68}$\pm$3.37 \\
\cmidrule{2-5}
&\multirow{3}{*}{0.75} & Clean & 1.549$\pm$0.11 & 98.48$\pm$0.35\\
& & Watermarked & 1.566$\pm$0.06 & 98.36$\pm$0.39\\
& & Poisoned & {\bf0.698}$\pm$0.13 & {\bf84.92}$\pm$5.14 \\
\midrule
\midrule
\multirow{6}{*}{\STAB{\rotatebox[origin=c]{90}{CIFAR10}}} &
\multirow{3}{*}{0.25} & Clean & 0.521$\pm$0.05 & 85.76$\pm$3.31 \\
& & Watermarked & 0.470$\pm$0.01 & 83.22$\pm$1.41 \\
& & Poisoned & {\bf0.059}$\pm$0.02 & {\bf26.84}$\pm$6.04 \\
\cmidrule{2-5}
& \multirow{3}{*}{0.5} & Clean & 0.634$\pm$0.04 & 75.04$\pm$1.65\\
& & Watermarked & 0.611$\pm$0.18 & 74.01$\pm$9.22\\
& & Poisoned & {\bf0.221}$\pm$0.04 & {\bf42.28}$\pm$6.01 \\
\bottomrule
\end{tabular}
}
\end{table}
We use our attack to poison the MNIST and CIFAR10 datasets and use ApproxGrad to solve the bilevel optimization. The time complexity of ApproxGrad is $O(VT)$, where $V$ is the number of parameters in the machine learning model and $T$ is the number of lower-level updates. For datasets like ImageNet, where the optimization must be performed over a very large number of batches, obtaining the solution to bilevel problems becomes computationally hard. Due to this bottleneck we leave the problem of poisoning ImageNet for future work. For the experiments with MNIST we randomly selected the digit 8, and for CIFAR10 the class ``Ship'', as the target class for the attacker.
The attack results for other target classes are similar and are presented in the Appendix~\ref{app:additional_experiments}.
To ensure that the attack points satisfy the clean label constraint, the maximum permissible $\ell_{\infty}$ distortion is bounded by $\epsilon = 0.1$ for MNIST and $\epsilon = 0.03$ for CIFAR10 which is similar to the value used to generate imperceptible adversarial examples in previous works \cite{madry2017towards,goodfellow2014explaining}. We used convolutional neural networks for our experiments on MNIST and Resnet-20 model for our experiments with CIFAR10. Model architectures, hyperparameters, generated attack examples (Fig.~\ref{fig:attack_egs} in Appendix), and additional results on transferability of our poisoned samples to models with different architectures
are presented in Appendix \ref{app:additional_experiments}.
Since models trained with standard training do not achieve high certified radius \cite{cohen2019certified}, we considered poisoning models trained with methods that improve the certified robustness guarantees to test time attacks.
For comparison, ACR on the target class ``Ship'' with Resnet-20 trained with standard training on clean CIFAR10 dataset is close to zero whereas for the same model trained with GA ($\sigma = 0.25$) ACR is close to 0.5.
Finally, we show that our attack can withstand the use of weight regularization, which has been shown to be effective at mitigating the effect of poisoning attacks \cite{carnerero2020regularisation}. The results of this experiment are presented in Appendix~\ref{app:weight_reg}.
\begin{table}[tb]
\caption{Decrease in certified radius and certified accuracy of models trained with MACER \cite{zhai2020macer} on poison data compared to those of models trained on clean and watermarked data.
}
\label{Table:macer_attack}
\centering
\small
\resizebox{0.95\columnwidth}{!}{
\begin{tabular}{c|c|c|cc}
\toprule
&\multirow{2}{*}{$\sigma$} & \multirow{2}{*}{Data} &
\multicolumn{2}{c}{\makecell{Certified robustness of target class}}\\
&&
& ACR & ACA(\%) \\
\midrule
\multirow{9}{*}{\STAB{\rotatebox[origin=c]{90}{MNIST}}} &
\multirow{3}{*}{0.25} & Clean & 0.915$\pm$0.01 & 99.64$\pm$0.21\\
& & Watermarked & 0.894$\pm$0.01 & 98.84$\pm$0.53\\
& & Poisoned & {\bf0.431}$\pm$0.13 & {\bf79.81}$\pm$9.26 \\
\cmidrule{2-5}
&\multirow{3}{*}{0.5} & Clean & 1.484$\pm$0.11 & 98.56$\pm$0.41\\
& & Watermarked & 1.475$\pm$0.08 & 98.68$\pm$0.39\\
& & Poisoned & {\bf0.685}$\pm$0.16 & {\bf84.36}$\pm$6.17 \\
\cmidrule{2-5}
&\multirow{3}{*}{0.75} & Clean & 1.353$\pm$0.13 & 93.81$\pm$2.08\\
& & Watermarked & 1.415$\pm$0.11 & 94.52$\pm$1.58\\
& & Poisoned & {\bf1.008}$\pm$0.19 & {\bf88.41}$\pm$4.64 \\
\midrule
\midrule
\multirow{6}{*}{\STAB{\rotatebox[origin=c]{90}{CIFAR10}}} &
\multirow{3}{*}{0.25} & Clean & 0.593$\pm$0.05 & 83.84$\pm$2.26 \\
& & Watermarked & 0.486$\pm$0.04 & 77.01$\pm$0.21 \\
& & Poisoned & {\bf0.379}$\pm$0.11 & {\bf72.41}$\pm$9.79\\
\cmidrule{2-5}
& \multirow{3}{*}{0.5} & Clean & 0.759$\pm$0.11 & 72.92$\pm$5.06\\
& & Watermarked & 0.811$\pm$0.10 & 75.66$\pm$2.99\\
& & Poisoned & {\bf0.521}$\pm$0.11 & {\bf65.24}$\pm$6.55 \\
\bottomrule
\end{tabular}
}
\end{table}
\vspace{-0.1cm}
\subsection{Poisoning Gaussian data augmentation \cite{cohen2019certified}}
Here we show the effectiveness of our attack at compromising the certified robustness guarantees obtained with RS for a model trained using GA. The results of the attack, presented in Table~\ref{Table:cohen_attack}, show a significant decrease in the ACR and the certified accuracy of the target class for the model trained on poisoned data, compared to models trained on clean and watermarked data. Since the certified radius and certified accuracy are correlated, our poisoning attack, which targets the reduction of the certified radius (upper-level problem in Eq.~(\ref{Eq:Bilevel})), also causes a decrease in the certified accuracy. The significant degradation of the average certified radius from 0.52 to 0.06 on CIFAR10, achieved with imperceptibly distorted poison data, shows the extreme vulnerability of GA to poisoning.
\vspace{-0.1cm}
\subsection{Poisoning MACER \cite{zhai2020macer}}
Here we use the bilevel formulation in Eq.~(\ref{Eq:Bilevel}) with the $\mathcal{L}_\mathrm{macer}$ loss in the lower-level and generate poison data to reduce the certification guarantees of models trained with MACER. The poison data is generated with $k=2$, where $k$ is the number of noisy images used per training point; the small $k$ eases the bilevel optimization. However, during retraining $k=16$ is used, which is similar to the value used in the original work \cite{zhai2020macer}.
The ACR obtained using MACER is higher than that achievable using Gaussian augmentation based training consistent with \cite{zhai2020macer}. However, our data poisoning attack is still able to reduce the average certified radius of the method by more than 30\% (Table~\ref{Table:macer_attack}) even though the attack is evaluated against a much stronger defense ($k=16$ for retraining compared to $k=2$ for poisoning) than what the poison data was optimized against. This shows that the use of a larger number of noisy samples ($k$) cannot eliminate the effect of the attack, emphasising the importance of the threat posed by data poisoning.
\begin{table}[tb]
\caption{
Decrease in certified radius and certified accuracy of models trained with SmoothAdv \cite{salman2019provably} on poison data compared to those of models trained on clean and watermarked data.
}
\label{Table:smoothadv_attack}
\centering
\small
\resizebox{0.95\columnwidth}{!}{
\begin{tabular}{c|c|c|cc}
\toprule
&\multirow{2}{*}{$\sigma$} & \multirow{2}{*}{Data} &
\multicolumn{2}{c}{\makecell{Certified robustness of target class}}\\
&&
& ACR & ACA(\%) \\
\midrule
\multirow{9}{*}{\STAB{\rotatebox[origin=c]{90}{MNIST}}} &
\multirow{3}{*}{0.25} & Clean & 0.896$\pm$0.01 & 99.16$\pm$0.45 \\
& & Watermarked & 0.906$\pm$0.01 & 99.28$\pm$0.16 \\
& & Poisoned & {\bf0.672}$\pm$0.04 & {\bf93.21}$\pm$1.92 \\
\cmidrule{2-5}
& \multirow{3}{*}{0.5} & Clean & 1.408$\pm$0.05 & 99.21$\pm$0.25\\
& & Watermarked & 1.401$\pm$0.02 & 98.01$\pm$0.18\\
& & Poisoned & {\bf1.037}$\pm$0.06 & {\bf93.81}$\pm$1.31 \\
\cmidrule{2-5}
& \multirow{3}{*}{0.75} & Clean & 1.262$\pm$0.05 & 95.68$\pm$0.47\\
& & Watermarked & 1.433$\pm$0.03 & 97.21$\pm$0.13\\
& & Poisoned & {\bf0.924}$\pm$0.06 & {\bf88.88}$\pm$0.18 \\
\midrule
\midrule
\multirow{6}{*}{\STAB{\rotatebox[origin=c]{90}{CIFAR10}}} &
\multirow{3}{*}{0.25} & Clean & 0.504$\pm$0.02 & 78.76$\pm$0.81 \\
& & Watermarked & 0.441$\pm$0.02 & 70.16$\pm$2.12 \\
& & Poisoned & {\bf0.271}$\pm$0.02 & {\bf55.78}$\pm$0.96 \\
\cmidrule{2-5}
&\multirow{3}{*}{0.5} & Clean & 0.479$\pm$0.07 & 65.84$\pm$4.81\\
& & Watermarked & 0.473$\pm$0.02 & 62.51$\pm$2.12\\
& & Poisoned & {\bf0.277}$\pm$0.02 & {\bf49.11}$\pm$3.19 \\
\bottomrule
\end{tabular}
}
\end{table}
\begin{figure}[tb]
\centering
\includegraphics[width=0.6\columnwidth]{images/teaser_c.pdf}
\includegraphics[width=0.6\columnwidth]{images/attack_egs_c.pdf}
\caption{
(Upper) ACR degrades more if larger perturbations are permitted to create poison data. But larger perturbation makes the poison points visibly distorted making them easier to detect with inspection (Lower).
Poison data are generated with $\epsilon \in \{0,0.1,0.2,0.3\}$. We have used $\epsilon = 0.1$ for our attacks.}
\label{fig:acr_vs_epsilon}
\end{figure}
\begin{figure*}[tb]
\centering{
\subfigure[Average certified radius of digit 8 in MNIST]{\includegraphics[width=0.775\columnwidth]{images/acr_mnist_c.pdf}}
\hspace{0.15in}
\subfigure[Approximate certified accuracy of digit 8 in MNIST]{\includegraphics[width=0.775\columnwidth]{images/aca_mnist_c.pdf}}
\subfigure[Average certified radius of ``Ship'' in CIFAR10]{\includegraphics[width=0.775\columnwidth]{images/acr_cifar10_c.pdf}}
\hspace{0.15in}
\subfigure[Approximate certified accuracy of ``Ship'' in CIFAR10]{\includegraphics[width=0.775\columnwidth]{images/aca_cifar10_c.pdf}}
\includegraphics[width=0.9\textwidth]{images/legend_c.pdf}
}
\caption{Successful transferability of our poisoned data when the victim uses a training procedure different from the one used by the attacker to optimize the poison data. $\sigma = 0.5$ and $\sigma = 0.25$ are used for smoothing during training and evaluation for MNIST and CIFAR10, respectively.
}
\label{fig:transfer}
\end{figure*}
\begin{table}[tb]
\caption{Decrease in the mean $\ell_2$-distortion needed to generate adversarial examples using PGD attack against the smooth classifier. This shows that the decision boundary of the smooth classifier is closer to the clean test points of the target class after poisoning.
}
\label{Table:empirical_robustness}
\centering
\small
\resizebox{0.75\columnwidth}{!}{
\begin{tabular}{c|c|c|c}
\toprule
& $\sigma$ & Clean data & Poisoned data\\
\midrule
\multirow{3}{*}{MNIST} & 0.25 & 3.271$\pm$0.10 & {\bf1.339}$\pm$0.16 \\
& 0.5 & 3.637$\pm$0.15 & {\bf2.170}$\pm$0.09 \\
& 0.75 & 3.961$\pm$0.18 & {\bf2.213}$\pm$0.31 \\
\midrule
\multirow{2}{*}{CIFAR10} &0.25 & 1.754$\pm$0.17 & {\bf0.132}$\pm$0.04 \\
&0.5 & 1.996$\pm$0.09 & {\bf0.367}$\pm$0.06 \\
\bottomrule
\end{tabular}
}
\end{table}
\subsection{Poisoning SmoothAdv \cite{salman2019provably}}
Here we present the results of our attack on models trained with SmoothAdv. To yield a model with high certified robustness, this training method requires the model parameters to be optimized using adversarial training of the smoothed classifier. We used a 2-step PGD attack to obtain an adversarial example for each point in the batch, and a single noisy instance of the adversarial example during adversarial training. Although using a larger $k$ improves the certified robustness guarantees, we used $k=1$ to save the computational time required for adversarial training. For bilevel training, we followed a similar procedure to generate adversarial examples for the clean and poison data. The adversarial examples are then used as the data for the lower-level problem of Eq.~(\ref{Eq:Bilevel}) to do GA training for optimizing the network parameters. The batch of adversarial examples is recomputed against the updated model after each step of bilevel training. Note that this is an approximation to the actual solution of the minimax problem that has to be solved in the lower-level for generating poison data against SmoothAdv. However, the effectiveness of the attack (results in Table~\ref{Table:smoothadv_attack}) suggests that our approximation works well in practice and that the certified robustness guarantees achieved with SmoothAdv can be degraded by poisoning.
\subsection{Effect of the imperceptibility constraint}
Here we evaluate the effect of different values of the perturbation strength $\epsilon$, which controls the maximum permissible distortion in Eq.~(\ref{Eq:Bilevel}). We use $\sigma = 0.25$ for smoothing and GA-based training to generate and evaluate the attack. The results are summarized in Fig.~\ref{fig:acr_vs_epsilon}: the ACR of the target class decreases as $\epsilon$ increases, rendering the certification guarantees useless. This is expected, since a larger $\epsilon$ creates a larger distribution shift between the target-class data in the training and test sets. However, a larger permissible distortion also makes the attack easier to detect by inspection, which is undesirable for an attacker who wants to evade detection.
\subsection{Transferability of poisoned data}
Here we report the performance of models trained on the poison data using training procedures different from the one assumed by the attacker when crafting the poison data. We used $k=1$ and 2 steps of the PGD attack to generate adversarial examples for SmoothAdv, and $k=16$ for MACER during retraining.
The poison data generated against MACER was optimized using $k=2$.
The results are summarized in Fig.~\ref{fig:transfer}, which shows that poisoned data optimized against any of the robust training procedures causes a significant reduction in the certified robustness of models trained with a different training method. Interestingly, poisoned data optimized against GA is extremely effective against the other methods, even though it is the simplest of the three. The successful transferability of the poisoned data
across different training methods shows how brittle these methods can be when faced with a poisoned dataset.
We observe a similar success in transferability of the poison data to models with different architectures (Appendix~\ref{app:transfer_architecture}).
\vspace{-0.2cm}
\subsection{Empirical robustness of poisoned classifiers}\label{sec:exp_emp_robustness}
\vspace{-0.1cm}
Finally, we report the empirical robustness of the smoothed classifier whose base classifier is trained on clean and on poisoned data using GA.
The poisoned data is generated against GA training in the lower level, as in Eq.~(\ref{Eq:Bilevel}). In Table~\ref{Table:empirical_robustness} we report the mean $\ell_2$-distortion required to generate an adversarial example using the PGD attack \cite{salman2019provably} against the smoothed classifier, computed over 200 and 100 randomly sampled test points of the target class from MNIST and CIFAR10, respectively.
We observe that our poisoning leads to a decrease in the empirical robustness of the smoothed classifier on clean test data.
This supports our hypothesis that, as for linear classifiers (Fig.~\ref{fig:linear}), the decision boundary of the smoothed classifier must move closer to the target class in order to reduce the certified radius of nonlinear classifiers.
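The linear-classifier intuition behind this hypothesis can be made concrete in a few lines: for $\mathrm{sign}(w\cdot x+b)$, the minimal $\ell_2$-distortion of a point equals its distance to the hyperplane, so shifting the boundary toward the target class shrinks the mean distortion, qualitatively matching Table~\ref{Table:empirical_robustness}. The numpy sketch below uses made-up data, not the paper's models.

```python
import numpy as np

rng = np.random.default_rng(1)

def mean_l2_distortion(X, w, b):
    """For a linear classifier sign(w.x + b), the smallest l2 perturbation
    flipping a point x is its distance |w.x + b| / ||w|| to the boundary."""
    return float(np.abs(X @ w + b).mean() / np.linalg.norm(w))

# points of a hypothetical target class, clustered away from the boundary
X = rng.normal(loc=2.0, scale=0.3, size=(500, 2))
w, b = np.array([1.0, 1.0]), 0.0

clean_model = mean_l2_distortion(X, w, b)           # boundary through the origin
poisoned_model = mean_l2_distortion(X, w, b - 3.0)  # boundary shifted toward the class
```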
\vspace{-0.1cm}
\section{Conclusion}
\vspace{-0.15cm}
Certified robustness has emerged as a gold standard to gauge with certainty the susceptibility of machine learning models to test-time attacks.
In this work, we showed that these guarantees can be rendered ineffective by our bilevel optimization based data poisoning attack that adds imperceptible perturbations to the points of the target class.
Unlike previous data poisoning attacks, our attack can reduce the ACR of an entire target class and is even effective against models trained using training methods that have been shown to improve certified robustness. Our results suggest that data quality is a crucial factor in achieving high certified robustness guarantees, one that is overlooked by current approaches.
\vspace{-0.15cm}
\section{Acknowledgments}
\vspace{-0.15cm}
This work was supported by the NSF EPSCoR-Louisiana Materials Design Alliance (LAMDA) program \#OIA-1946231 and by LLNL Laboratory Directed Research and Development project 20-ER-014 (LLNL-CONF-817233).
This work was performed under the auspices of the U.S. Department of Energy by the Lawrence Livermore National Laboratory under Contract No. DE-AC52-07NA27344, Lawrence Livermore National Security, LLC.\footnote{The views and opinions of the authors do not necessarily reflect those of the U.S. government or Lawrence Livermore National Security, LLC neither of whom nor any of their employees make any endorsements, express or implied warranties or representations or assume any legal liability or responsibility for the accuracy, completeness, or usefulness of the information contained herein.}
\clearpage
{\small
\bibliographystyle{ieee_fullname}
}
The primitive equations are a model for oceanic and atmospheric dynamics and are derived from the Navier-Stokes equations by assuming a hydrostatic balance for the pressure term, see
\cite{Lionsetall1992, Lionsetall1992_b, Lionsetall1993}. These equations are known to be globally and strongly well-posed in the three dimensional setting for arbitrarily large data belonging
to $H^1$ by the celebrated result of Cao and Titi \cite{CaoTiti2007}. The latter considers the case of Neumann boundary conditions, and this result also holds true for the case of mixed
Dirichlet and Neumann boundary conditions, again for data in $H^1$, as shown by Kukavica and Ziane \cite{Ziane2007}.
Several approaches have been developed in recent years aiming to extend the above two results to the case of rough initial data. One approach is based on the theory of
weak solutions, see e.g. \cite{LiTiti2015, TachimMedjo2010, Kukavicaetall2014, Ziane2009}. Although the existence of weak solutions to the primitive equations for initial data in $L^2$
has been known since the pioneering work by Lions, Temam and Wang \cite{Lionsetall1992}, their uniqueness remains an open problem to this day. Li and Titi \cite{LiTiti2015} proved uniqueness of weak solutions
assuming that the initial data are small $L^\infty$-perturbations of continuous data or data belonging to $\{v\in L^6 \colon \partial_z v\in L^2\}$, where $z$ denotes the vertical variable.
By a weak-strong uniqueness argument, these unique weak solutions regularize and even become strong solutions. For a survey of known results, see also \cite{LiTiti2016}.
A different approach to the primitive equations is based on a semilinear evolution equation for the hydrostatic Stokes operator within the $L^p$-setting, see \cite{HieberKashiwabara2015}.
There, the existence of a unique, global, strong solution to the primitive equations for initial data belonging to $H^{2/p,p}$ was proved for the case of mixed Dirichlet-Neumann boundary conditions.
This approach was transferred in \cite{NeumannNeumann, GigaGriesHieberHusseinKashiwabara2017} to the case of pure Neumann boundary conditions, and global,
strong well-posedness of the primitive equations was obtained for data $a$ of the form
$a=a_1 + a_2$, where $a_1\in C(\overline{G};L^1(-h,0))$ and $a_2\in L^{\infty}(G;L^1(-h,0))$ with $a_2$ being small. These spaces are scaling invariant and represent the anisotropic character of the
primitive equations.
Note that the choice of boundary conditions has a severe impact on the linearized primitive equations. In the setting of layer domains, i.e.,
$\Omega= G\times (-h,0)\subset \mathbb{R}^3$ with $G= (0,1)^2$ and $h>0$, this is illustrated best by the hydrostatic Stokes operator $A_\os$. The latter can be represented formally by the differential expression
\begin{align}\label{eq:Ap}
\Ae v = \Delta v + \frac{1}{h}\nabla_H (-\Delta_H)^{-1}\text{div}_H \Big(\restr{\partial_z v}{z=-h}\Big),
\end{align}
restricted to hydrostatically solenoidal vector fields, where for $z=-h$ Dirichlet and for $z=0$ Neumann boundary conditions are imposed and periodicity is assumed horizontally, see \cite{GGHHK17} for details. In particular,
in the case of pure Neumann boundary conditions, the hydrostatic Stokes operator reduces to the Laplacian, i.e. $A_\os v =\Delta v$.
It is the aim of this article to study properties of the hydrostatic Stokes semigroup and of terms of the form $\nabla e^{tA_\os} \mathbb{P}$ on spaces of bounded functions.
These properties then yield the global, strong well-posedness of the primitive equations in the case of mixed Dirichlet-Neumann boundary conditions. More precisely, we prove global, strong well-posedness of the primitive
equations for initial data of the form
$$
a=a_1 + a_2, \quad a_1\in C(\overline{G};L^p(-h,0)), \quad \mbox{and} \quad a_2\in L^{\infty}(G;L^p(-h,0)) \quad \mbox{for} \quad p>3,
$$
where $a_1$ is periodic in the horizontal variables and $a_2$ is sufficiently small. Our strategy is to introduce a reference solution for the smoothened part of the initial data and to combine this
with an evolution equation approach for the remaining rough part.
The main difficulty when dealing with the primitive equations on spaces of bounded functions is that the hydrostatic Helmholtz projection $\mathbb{P}$ {\em fails to be bounded} with respect to
the $L^\infty$-norm. This is similar to the case of the classical Stokes semigroup, for which $L^\infty$-theory was developed in \cite{AG13} and \cite{AGH15}.
In Sections 6 and 7 we prove that the combination of the three main players, $\nabla$, $\mathbb{P}$, $e^{tA_\os}$, nevertheless give rise to bounded operators on
$L^\infty_HL^p_z(\Omega)$, which in addition satisfy typical global, second order parabolic decay estimates of the form
\begin{align*}
t^{1/2} \lVert \partial_i e^{tA_\os}\mathbb{P}f \rVert_{L^\infty_H L^p_z(\Omega)} &\le C e^{\beta t} \lVert f \rVert_{L^\infty_H L^p_z (\Omega)}, \\
t^{1/2} \lVert e^{tA_\os}\mathbb{P}\partial_j f \rVert_{L^\infty_H L^p_z(\Omega)}& \le C e^{\beta t}\lVert f\rVert_{L^\infty_H L^p_z (\Omega)}, \\
t\lVert \partial_i e^{tA_\os}\mathbb{P} \partial_j f\rVert_{L^\infty_H L^p_z(\Omega)} &\le C e^{\beta t}\lVert f\rVert_{L^\infty_H L^p_z(\Omega)},
\end{align*}
for $t>0$, where $\partial_i, \partial_j\in \{\partial_x,\partial_y,\partial_z\}$.
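These estimates are modelled on the corresponding heat-kernel bound: writing $G_t$ for the Gaussian kernel on $\mathbb{R}^3$ and using Young's inequality for the mixed norm (cf. Lemma~\ref{YoungAnisotropic} below), one has

```latex
\begin{align*}
\lVert \nabla e^{t\Delta} f\rVert_{L^\infty_H L^p_z(\mathbb{R}^3)}
= \lVert (\nabla G_t)\ast f\rVert_{L^\infty_H L^p_z(\mathbb{R}^3)}
\le \lVert \nabla G_t\rVert_{L^1(\mathbb{R}^3)}\,
\lVert f\rVert_{L^\infty_H L^p_z(\mathbb{R}^3)}
= C\, t^{-1/2}\, \lVert f\rVert_{L^\infty_H L^p_z(\mathbb{R}^3)}.
\end{align*}
```

The difficulty in our setting is to retain this behaviour in the presence of the boundary conditions and of the nonlocal projection term in \eqref{eq:Ap}.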
Note that the choice of the boundary conditions greatly affects the difficulty of proving these estimates.
For the case of mixed Dirichlet-Neumann boundary conditions, our approach relies on the representation \eqref{eq:Ap} of the linearized problem.
The constraint $p>3$ arises from embedding properties for the reference solution and estimates for the linearized problem in $L^{\infty}(G;L^p(-h,0))$.
Our approach is based on an iteration scheme, which is inspired by the classical schemes to the Navier-Stokes equations.
Here, the iterative construction of a unique, local solution relies on $L^\infty_HL^p_z(\Omega)$-estimates for the crucial terms of the
form $e^{tA_\os}\mathbb{P}\div (u \otimes v)$, where $u=(v,w)$ is the full velocity and $v$ its horizontal component.
Let us note that the above linear estimates are of independent interest for further considerations.
The use of a reference solution allows us to obtain the smallness condition on the $L^\infty_H L^p_z$-perturbation $a_2$ of $a_1$ by means of an absolute constant, whereas for Neumann boundary conditions
one needs $a_2$ to be small compared to $a_1$, cf.\@ \cite{NeumannNeumann}. Similarly, Li and Titi assume in \cite{LiTiti2015} that $a_2$ is small compared to the $L^4$-norm of $a_1$.
Comparing our result with the one by Li and Titi in \cite{LiTiti2015}, which has been obtained for Neumann boundary conditions, we observe that the initial data allowed in our approach are of anisotropic
nature and require no conditions on the derivatives of the initial data, such as e.g. $\partial_z v \in L^2$ as in \cite{LiTiti2015}.
This article is structured as follows: In Section~\ref{sec:pre} we collect preliminary facts and fix the notation. In Section~\ref{sec:main} we state our main results
concerning the global strong well-posedness of the primitive equations for rough data and the crucial estimates for the linearized problem. The proof of our main results starts with a discussion of
anisotropic $L^p$-spaces in Section~\ref{sec:spaces}, which is followed in Section~\ref{sec:laplace} by estimates for the Laplacian in anisotropic spaces.
The subsequent Sections~\ref{sec:stokesEasy} and \ref{sec:stokesHard} are devoted to the development of an $L^{\infty}(G;L^p(-h,0))$-theory for the hydrostatic Stokes equations and its
associated resolvent problem. Finally, in Section~\ref{sec:proofs} we present our iteration scheme yielding the global, strong well-posedness of the primitive equations for rough initial data.
\section{Preliminaries}\label{sec:pre}
Let $\Omega=G\times(-h,0)$ where $G=(0,1)^2$. We consider the primitive equations on $\Omega$ given by
%
\begin{align}\label{eq:PrimitiveEquations}
\begin{array}{rll}
\partial_t v-\Delta v+(u\cdot \nabla)v+\nabla_H \pi
&=0 & \text{ on } \Omega\times(0,\infty),
\\
\partial_z \pi&=0 & \text{ on } \Omega\times (0,\infty),
\\
\text{div}_H \overline{v}&=0 & \text { on } G\times(0,\infty),
\\
v(0)&=a & \text { on } \Omega,
\end{array}
\end{align}
%
using the notation $\div_H v = \partial_x v_1+\partial_y v_2$ and $\nabla_H \pi = (\partial_x \pi, \partial_y \pi)^T$. Here $\overline{v}=\frac{1}{h}\int_{-h}^0 v(\cdot,z)\,dz$ denotes the vertical average, $\pi\colon G\to \mathbb{R}$ the surface pressure, and $u=(v,w)$ the velocity field with horizontal component $v:\Omega\to \mathbb{R}^2$ and vertical component $w:\Omega\to \mathbb{R}$, where $w=w(v)$ is given by the relation
%
\begin{align}\label{eq:WRelation}
w(x,y,z)=-\int_{-h}^z \text{div}_H v(x,y,r)\,dr.
\end{align}
%
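Note that \eqref{eq:WRelation}, with the integral starting at the bottom $z=-h$, makes the full velocity automatically divergence free, and the kinematic condition $w=0$ at the surface $z=0$ is precisely the constraint $\text{div}_H \overline{v}=0$:

```latex
\begin{align*}
\text{div}\, u
=\text{div}_H v+\partial_z w
=\text{div}_H v-\text{div}_H v
=0,
\qquad
\restr{w}{z=-h}=0,
\qquad
\restr{w}{z=0}=-h\,\text{div}_H \overline{v}.
\end{align*}
```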
This is supplemented by mixed Dirichlet and Neumann boundary conditions
%
\begin{align}\label{eq:bc}
\partial_z v =0 \text{ on } \Gamma_u\times(0,\infty), \quad
\pi, v \text { periodic } \text{ on } \Gamma_l\times(0,\infty), \quad
v =0 \text{ on } \Gamma_b\times(0,\infty),
\end{align}
%
where the boundary is divided into $\Gamma_u=G\times \{0\}$, $\Gamma_l=\partial G\times [-h,0]$ and $\Gamma_b=G\times\{-h\}$.
In the following we will be dealing with anisotropic $L^p$-spaces on cylindrical sets of the type $U= \Omega$ or $U=\mathbb{R}^2 \times \mathbb{R}$. More precisely, if $U=U'\times U_3\subset \mathbb{R}^2\times \mathbb{R}$ is a product of measurable sets and $q,p\in[1,\infty]$ we define
%
\begin{align*}
L^q_H L^p_z(U)
:=L^q(U';L^p(U_3))
:=\{f:U\to \mathbb{K} \text{ measurable}, \lVert f\rVert_{L^q_H L^p_z(U)}<\infty\},
\end{align*}
%
for $\mathbb{K}\in\{\mathbb{R},\mathbb{C}\}$ with norm
%
\begin{align*}
\lVert f\rVert_{L^q_H L^p_z(U)}:=\begin{cases}
& \left(\int_{U'} \lVert f(x',\cdot)\rVert^q_{L^p(U_3)}\,dx'\right)^{1/q},
\quad q\in [1,\infty),
\\
&
\text{ess sup}_{x'\in U'} \lVert f(x',\cdot)\rVert_{L^p(U_3)}, \quad q=\infty.
\end{cases}
\end{align*}
%
Endowed with this norm, $L^q_H L^p_z(U)$ is a Banach space for all $p,q\in [1,\infty]$.
We will denote the $W^{k,p}$-closure of $C_{\text{per}}^\infty(\overline{\Omega})$ by $W^{k,p}_{\text{per}}(\Omega)$,
where $C_{\text{per}}^\infty(\overline{\Omega})$ denotes the space of smooth functions $v$ on $\overline{\Omega}$ such that $\partial^\alpha_x v$ and $\partial^\alpha_y v$ are periodic on $\Gamma_l$ with
period $1$ in the variables $x$ and $y$ for all $\alpha \in \mathbb{N}$, but not necessarily periodic with respect to the vertical direction $z$. Moreover, by $C^{m,\alpha}(\overline{\Omega})$, $C^{m,\alpha}(\overline{G})$ we denote the spaces of
$m$-times differentiable functions with H\"older-continuous derivatives of exponents $\alpha\in (0,1)$ and the subspaces of functions periodic on $\Gamma_l$ and $\partial G$ will be denoted
by $C^{m,\alpha}_{\text{per}}(\overline{\Omega})$ and $C^{m,\alpha}_{\text{per}}(\overline{G})$, respectively. For a Banach space $E$ we denote by $C_{\text{per}}([0,1]^2;E)$ the set of
continuous functions $f:[0,1]^2\to E$ such that $f(0,y)=f(1,y)$ and $f(x,0)=f(x,1)$ for all $x,y\in [0,1]$.
In order to include the condition $\div_H \overline{v}=0$ one defines the \textit{hydrostatic Helmholtz projection} $\mathbb{P}$ as in \cite{HieberKashiwabara2015, GGHHK17} using the two-dimensional Helmholtz projection $Q$ with periodic boundary conditions given by $Qg=g-\nabla_H \pi$ for $g:G\to \mathbb{R}^2$ solving $\Delta_H \pi = \text{div}_H g$ for $\pi$ periodic on $\partial G$,
where $\Delta_H g=\partial_x^2 g+\partial_y^2 g$. The hydrostatic Helmholtz projection is then defined as
\begin{align*}
\mathbb{P} f=f-(1-Q)\overline{f}= f + \frac{1}{h}\nabla_H (-\Delta_H)^{-1}\text{div}_H\overline{f} = f-\nabla_H \pi.
\end{align*}
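Since $\overline{f}$ is independent of $z$, one has $\overline{\overline{f}}=\overline{f}$, and therefore

```latex
\begin{align*}
\overline{\mathbb{P} f}
=\overline{f}-(1-Q)\overline{f}
=Q\overline{f},
\qquad\text{and hence}\qquad
\text{div}_H\,\overline{\mathbb{P} f}
=\text{div}_H\, Q\overline{f}=0 ,
\end{align*}
```

so $\mathbb{P}f$ indeed satisfies the hydrostatic divergence constraint; moreover $\mathbb{P}^2=\mathbb{P}$, since $(1-Q)Q=0$.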
%
The range of $\mathbb{P}\colon L^p(\Omega)^2\rightarrow L^p(\Omega)^2$, $p\in (1,\infty)$, is denoted by $L^p_{\overline{\sigma}}(\Omega)$ and is given by
%
\[
\overline{\{v\in C_{\text{per}}^\infty(\overline{\Omega})^2: \text{div}_H \overline{v}=0\}}^{\norm{\cdot}_{L^p(\Omega)}}.
\]
%
Further characterizations of $L^p_{\overline{\sigma}}(\Omega)$ are given in \cite[Proposition 4.3]{HieberKashiwabara2015}.
Since $\mathbb{P}$ fails to be bounded on $L^\infty(\Omega)^2$ it is not evident which space
is a suitable substitute for $L^p_{\overline{\sigma}}(\Omega)$ in the case $p=\infty$. In this article, we will be considering the spaces
\begin{align}\label{eq:Xpspaces}
X:=C_{\text{per}}([0,1]^2;L^p(-h,0))^2
\quad \hbox{and} \quad
X_\os:=X\cap L^p_{\overline{\sigma}}(\Omega), \quad p\in (1,\infty).
\end{align}
The linearization of equation \eqref{eq:PrimitiveEquations}, called the \textit{hydrostatic Stokes equation}, is given by
%
\begin{align}\label{eq:hydrostaticStokes}
\partial_t v - \Delta v + \nabla_H \pi =f, \quad \div_H \overline{v}=0, \quad v(0)=a
\end{align}
%
and subject to boundary conditions \eqref{eq:bc}. The dynamics of this evolution equation is governed by the hydrostatic Stokes operator, and its $X_\os$-realization $A_\os$
is given by
\begin{align*}
A_\os v := \Ae v, \quad D(A_\os)
=\{
v\in W^{2,p}_{\text{per}}(\Omega)^2\cap X_\os:
\restr{\partial_z v}{\Gamma_u}=0,
\restr{v}{\Gamma_b}=0,
\Ae v\in X_\os
\},
\end{align*}
where $\Ae v$ is defined by \eqref{eq:Ap}. It will be proved that $A_\os$ generates a strongly continuous, analytic semigroup $e^{tA_\os}$ on $X_\os$.
Information on the linear theory in $L^p_{\overline{\sigma}}(\Omega)$ for $p\in (1,\infty)$ can be found in \cite{GGHHK17}.
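Let us record why the projected equation contains no pressure term: $\nabla_H\pi$ is independent of $z$, so $\overline{\nabla_H\pi}=\nabla_H\pi$, and $Q$ annihilates horizontal gradients, whence

```latex
\begin{align*}
\mathbb{P}\nabla_H \pi
=\nabla_H\pi-(1-Q)\overline{\nabla_H\pi}
=Q\nabla_H\pi
=0,
\end{align*}
```

so that \eqref{eq:hydrostaticStokes} takes the abstract form $\partial_t v-A_\os v=\mathbb{P}f$, $v(0)=a$, on $X_\os$.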
\section{Main results}\label{sec:main}
Our first main result concerns the global well-posedness of the primitive equations for \textit{arbitrarily large} initial data in $X_\os$, while the second result extends this situation to the
case of small perturbations in $L^\infty_H L^p_z(\Omega)$. Here, a \textit{strong solution} means -- as in \cite{HieberKashiwabara2015} -- a solution $v$ to the primitive equations satisfying
\begin{align}\label{eq:StrongSolution}
v\in C^1((0,\infty);L^p(\Omega))^2\cap C((0,\infty);W^{2,p}(\Omega))^2.
\end{align}
Our third main result concerns $L^\infty_HL^p_z$-estimates for the hydrostatic Stokes semigroup. These estimates are essential for proving the above two results on the non-linear problem. They are also of
independent interest.
\begin{theorem}\label{MainTheorem}
Let $p\in (3,\infty)$. Then for all $a\in X_\os$ there exists a unique, global, strong solution $v$ to the primitive equations \eqref{eq:PrimitiveEquations} with $v(0)=a$ satisfying
\[
v\in C([0,\infty);X_\os),
\quad
t^{1/2}\nabla v\in L^\infty((0,\infty);X),
\quad
\limsup_{t\to 0+}t^{1/2}\lVert \nabla v(t) \rVert_{L^\infty_H L^p_z(\Omega)}=0.
\]
The corresponding pressure satisfies
%
\[
\pi \in C((0,\infty);C^{1,\alpha}([0,1]^2)),
\quad \alpha\in (0,1-3/p)
\]
%
and is unique up to an additive constant.
\end{theorem}
\begin{theorem}\label{PerturbationTheorem}
Let $p\in (3,\infty)$. Then there exists a constant $C_0>0$ such that if $a=a_1+a_2$ with $a_1\in X_\os$ and $a_2\in L^\infty_H L^p_z(\Omega)^2\cap L^p_{\overline{\sigma}}(\Omega)$ with
%
\[
\lVert a_2\rVert_{L^\infty_H L^p_z(\Omega)}\le C_0,
\]
%
then there exists a unique, global, strong solution $v$ to the primitive equations \eqref{eq:PrimitiveEquations} with $v(0)=a$ satisfying
%
\[
v\in C([0,\infty);L^p_{\overline{\sigma}}(\Omega))
\cap L^{\infty}((0,\infty); L^\infty_H L^p_z(\Omega))^2
\]
%
as well as
%
\[
t^{1/2}\nabla v\in L^\infty((0,\infty);X),
\quad
\limsup_{t\to 0+}t^{1/2}\lVert \nabla v \rVert_{L^\infty_H L^p_z(\Omega)}
\le C \lVert a_2\rVert_{L^\infty_H L^p_z},
\]
%
where $C>0$ does not depend on the data, and the pressure has the same regularity as in Theorem~\ref{MainTheorem}.
\end{theorem}
Taking advantage of the regularization of solutions for $t>0$ one passes into the setting discussed in \cite{HieberKashiwabara2015} and \cite{GigaGriesHieberHusseinKashiwabara2017}, and thus we
obtain the following corollary.
\begin{corollary}\label{cor:smoothness}
For $t>0$ the solution $v, \pi$ in Theorem~\ref{MainTheorem} and Theorem~\ref{PerturbationTheorem} are real analytic in time and space, and the velocity $v$ decays exponentially as $t \to \infty$.
\end{corollary}
Our main result on the hydrostatic semigroup acting on $X_\os$ reads as follows.
\begin{theorem}\label{thm:semigroup}
Let $p\in (3,\infty)$. Then the following assertions hold true: \\
a) $A_\os$ is the generator of a strongly continuous, analytic and exponentially stable semigroup $e^{tA_\os}$ on $X_\os$ of angle $\pi/2$. \\
b) There exist constants $C>0$, $\beta>0$ such that for $\partial_i, \partial_j\in \{\partial_x,\partial_y,\partial_z\}$
\begin{align*}
\tag{i} t^{1/2}\lVert \partial_j e^{tA_\os} f\rVert_{L^\infty_H L^p_z(\Omega)}
&\le C e^{\beta t}\lVert f\rVert_{L^\infty_H L^p_z(\Omega)}, \quad t>0, \, f \in X_\os,
\\
\tag{ii} t^{1/2}\lVert e^{tA_\os}\mathbb{P} \partial_j f\rVert_{L^\infty_H L^p_z(\Omega)}
&\le C e^{\beta t}\lVert f\rVert_{L^\infty_H L^p_z(\Omega)}, \quad t>0, \, f \in X_\os,\\
\tag{iii} t\lVert \partial_i e^{tA_\os}\mathbb{P} \partial_j f\rVert_{L^\infty_H L^p_z(\Omega)}
&\le C e^{\beta t}\lVert f\rVert_{L^\infty_H L^p_z(\Omega)}, \quad t>0, \, f \in X_\os;
\end{align*}
c) For all $f\in X_\os$
\begin{align*}
\lim_{t \to 0+} t^{1/2}\lVert \nabla e^{tA_\os}f\rVert_{L^\infty_H L^p_z(\Omega)}=0.
\end{align*}
\end{theorem}
%
\begin{remarks}\label{rem:main}
a) We note that if, in the situation of Theorem~\ref{PerturbationTheorem}, the initial data do not belong to $X$, i.e. if $a_2\neq 0$, then the solution fails to be continuous at $t=0$ with respect
to the $L^\infty_H L^p_z$-norm. \\
b) The condition $p>3$ is due to the embeddings
%
\[
v_{\text{ref}}(t_0)
\in B^{2-2/q}_{pq}(\Omega)^2
\hookrightarrow C^1(\overline{\Omega})^2 \quad \hbox{and} \quad W^{2,p}(\Omega) \hookrightarrow C^{1,\alpha}(\overline{\Omega})
\quad \hbox{for } p\in (3,\infty),
\]
cf. \cite[Section 3.3.1]{Triebel}. Since the two-dimensional Helmholtz projection $Q$ fails to be bounded with respect to the $L^\infty$-norm, we instead estimate it in spaces of H{\"o}lder continuous functions $C^{0,\alpha}_{\text{per}}([0,1]^2)=C^{0,\alpha}(\mathbb{T}^2)$ for $\alpha \in (0,1)$ where $\mathbb{T}^2$ denotes the two-dimensional torus.
In fact $Q$ is bounded with respect to the $C^{0,\alpha}$-norm. This follows by the theory of Fourier multipliers on Besov spaces, compare e.g. \cite[Theorem 6.2]{Amann1997} for the whole space, and the periodic case follows using periodic extension.
\\
c) In Theorem~\ref{thm:semigroup} one can even consider $f\in L^{\infty}_HL^p_z(\Omega)^2$ for $p\in (3,\infty)$. Then the corresponding semigroup is still analytic, but it fails to be strongly continuous. The estimates $(i)-(iii)$ still hold, whereas property $c)$ in Theorem~\ref{thm:semigroup} has to be replaced by
\begin{align*}
\limsup_{t \to 0+} t^{1/2}\lVert \nabla e^{tA_\os}v\rVert_{L^\infty_H L^p_z(\Omega)} \leq C \lVert v\rVert_{L^\infty_H L^p_z(\Omega)}
\end{align*}
for some $C>0$, where with a slight abuse of notation $e^{tA_\os}$ denotes also the hydrostatic Stokes semigroup on $L^\infty_H L^p_z(\Omega)$.
\\
d) Some words about our strategy for proving the global well-posedness results are in order:
\begin{itemize}
\item[(i)] We will first construct a local, mild solution to the problem \eqref{eq:PrimitiveEquations}, i.e. a function
satisfying the relation
\begin{align}\label{eq:mildsolution}
v(t)=e^{tA_\os}a+\int_0^t e^{(t-s)A_\os}\mathbb{P} F(v(s))\,ds,
\quad t\in (0,T)
\end{align}
for some $T>0$, where $F(v)=-(u\cdot\nabla) v$. We will then show that $v$ regularizes for $t_0>0$ and using the result of \cite[Theorem 6.1]{HieberKashiwabara2015} or
\cite[Theorem 3.1]{GigaGriesHieberHusseinKashiwabara2017}, we may
take $v(t_0)$ as a new initial value to extend the mild solution to a global, strong solution
on $(t_0,\infty)$, and then on $(0,\infty)$ by uniqueness. The additional regularity as $t\to 0+$ results from the construction of the mild solutions.
\item[(ii)] In order to construct a mild solution
we decompose $a=a_{\text{ref}}+a_0$ such
that $a_{\text{ref}}$ is sufficiently smooth and $a_0$ can be taken to be arbitrarily small.
\item[(iii)]Using previously established results concerning the existence of solutions to the primitive equations for {\em smooth} data, we obtain a reference solution $v_{\text{ref}}$ and
construct then $V:=v-v_{\text{ref}}$ via an iteration scheme using $L^\infty$-type estimates for terms of the form $\nabla e^{tA_\os}\mathbb{P}$ given in Theorem~\ref{thm:semigroup}.
\end{itemize}
\end{remarks}
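The fixed-point construction behind the mild solution \eqref{eq:mildsolution} can be caricatured numerically: for the scalar toy problem $v'=-v+v^2$, $v(0)=a$, the mild formulation is a contraction for small data, and the Picard iterates converge to the exact solution. The following sketch is our own illustration and plays no role in the proofs.

```python
import numpy as np

# Toy scalar analogue of the mild solution formula:
#   v' = -v + v^2, v(0) = a,  i.e.  v(t) = e^{-t} a + int_0^t e^{-(t-s)} v(s)^2 ds,
# solved by Picard iteration on a time grid with trapezoidal quadrature.
a, T, n = 0.1, 1.0, 400
t = np.linspace(0.0, T, n + 1)

def trapezoid(y, x):
    """Composite trapezoidal rule."""
    dx = np.diff(x)
    return float(np.sum(dx * (y[1:] + y[:-1]) / 2.0))

def picard_step(v):
    """One application of the mild-solution map to the iterate v."""
    out = np.exp(-t) * a
    for i in range(1, n + 1):
        integrand = np.exp(-(t[i] - t[: i + 1])) * v[: i + 1] ** 2
        out[i] += trapezoid(integrand, t[: i + 1])
    return out

v = np.exp(-t) * a                  # zeroth iterate: the semigroup part alone
for _ in range(8):
    v_new = picard_step(v)
    diff = float(np.max(np.abs(v_new - v)))   # successive-iterate distance
    v = v_new

exact = a * np.exp(-t) / (a * np.exp(-t) + 1.0 - a)   # closed-form solution
```

For small $a$ the map is a contraction, so `diff` decays geometrically and the limit agrees with the closed-form solution up to quadrature error.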
\section{Properties of anisotropic spaces}\label{sec:spaces}
In this section, we will discuss properties of anisotropic $L^q$-$L^p$-spaces. We will write $C(U';L^p(U_3))$ for the set of continuous $L^p(U_3)$-valued functions on $U'$ and likewise
%
\[
L^q(U';C(U_3))
:=\{ f\in L^q_H L^\infty_z(U):
f(x',\cdot)\in C(U_3) \text{ for almost all } x'\in U'
\},
\]
%
and $C_c(U';L^p(U_3))$ and $L^q(U';C_c(U_3))$ for the subsets of functions with compact support in horizontal and vertical variables, respectively. For $p,q\in[1,\infty)$ the space
$C^\infty_c(U)$ is dense in these spaces as well as in $L^q_H L^p_z(U)$, and furthermore we have
%
\begin{align*}
\overline{C^\infty_c(\mathbb{R}^3)}^{\norm{\cdot}_{L^\infty_H L^p_z}}
= C_0(\mathbb{R}^2;L^p(\mathbb{R})),
\quad
\overline{C^\infty_c(\mathbb{R}^3)}^{\norm{\cdot}_{L^q_H L^\infty_z}}
= L^q(\mathbb{R}^2;C_0(\mathbb{R})),
\end{align*}
%
as well as
%
\begin{align*}
\overline{C_{\text{per}}^\infty(\overline{\Omega})^2}
^{\norm{\cdot}_{L^\infty_H L^p_z}}
= X,
\quad
\overline{C_{\text{per}}^\infty(\overline{\Omega})}
^{\norm{\cdot}_{L^q_H L^\infty_z}}
= L^q(G;C[-h,0]).
\end{align*}
%
Observe that even $C^\infty_{\text{per}}([0,1]^2;C^\infty_c(-h,0))^2$ is dense in $X$ and $L^q_H L^p_z(\Omega)^2$. If $p=q=\infty$, then
%
\[
\overline{C^\infty_c(\mathbb{R}^3)}^{\norm{\cdot}_{L^\infty_H L^\infty_z}}
= C_0(\mathbb{R}^3),
\quad \overline{C_{\text{per}}^\infty(\overline{\Omega})}^{\norm{\cdot}_{L^\infty_H L^\infty_z}}
= C_\text{per}([0,1]^2;C[-h,0]).
\]
%
Here $C_0(\mathbb{R}^d)$ denotes the set of continuous functions vanishing at infinity.
These density results follow from the fact that if $E$ is a Banach space over $\mathbb{K}\in \{\mathbb{R},\mathbb{C}\}$, then the linear space generated by elementary tensor functions $f\otimes e$ for
measurable $f:U'\to \mathbb{K}$ and $e\in E$ is dense in $L^q(U';E)$ for $q\in [1,\infty)$, since it contains the simple $E$-valued functions. It is also dense in $C_0(U';E)$, if one only
considers continuous functions $f$, due to a generalization of the Stone-Weierstrass theorem, see e.g. \cite{Khan}.
In the case that $U\subset \mathbb{R}^3$ is bounded, we also have
%
\[
L^{q_1}_H L^{p_1}_z(U)\hookrightarrow L^{q_2}_H L^{p_2}_z(U)
\]
%
whenever $q_1\ge q_2$ and $p_1\ge p_2$. See \cite[Section 5]{HieberKashiwabara2015} for more details.
Another important property of the $L^q_H L^p_z$-norm is its behaviour under operations like multiplication and convolution. For the former one, we obviously obtain
%
\[
\lVert fg\rVert_{L^q_H L^p_z(U)}
\le \lVert f\rVert_{L^{q_1}_H L^{p_1}_z(U)} \lVert g\rVert_{L^{q_2}_H L^{p_2}_z(U)}
\]
%
whenever $1/p_1+1/p_2=1/p$ and $1/q_1+1/q_2=1/q$. For the latter one, the following variant of Young's inequality holds true.
\begin{lemma}[{\cite[Theorem 3.1]{GreySinnamon}}]\label{YoungAnisotropic}
Let $f\in L^q_H L^p_z (\mathbb{R}^3)$ for $p,q\in [1,\infty]$ and $g\in L^1(\mathbb{R}^3)$. Then
%
\begin{align*}
\lVert g\ast f\rVert_{L^q_H L^p_z (\mathbb{R}^3)}
\le \lVert g\rVert_{L^1(\mathbb{R}^3)}
\lVert f\rVert_{L^q_H L^p_z (\mathbb{R}^3)}.
\end{align*}
%
\end{lemma}
\begin{comment}
\textcolor{red}{As the proof is given in the reference we could cut it here and save almost an entire page}
\begin{proof}
For $q=\infty$ and $p\in[1,\infty)$ we have with $x=(x',x_3)\in \mathbb{R}^2 \times \mathbb{R}$ that
%
\begin{align*}
\lvert g\ast f(x',x_3)\rvert
&\le \int_{\mathbb{R}^3}\lvert f(x-y)\rvert\cdot \lvert g(y)\rvert\,dy
\\
&\le \int_{\mathbb{R}^3}
\biggl[\lvert f(x-y)\rvert^p\cdot \lvert g(y)\rvert\biggr]^{1/p}
\lvert g(y)\rvert^{1-1/p}\,dy
\\
&\le\left(
\int_{\mathbb{R}^3}\lvert f(x'-y',x_3-y_3)\rvert^p\cdot \lvert g(y',y_3)\rvert\,dy
\right)^{1/p}
\left(\int_{\mathbb{R}^3}\lvert g(y)\rvert\,dy\right)^{1-1/p}
\end{align*}
%
and therefore
%
\begin{align*}
\int_{\mathbb{R}}\lvert g\ast f(x',x_3)\rvert^p\,dx_3
&\le \lVert g\rVert^{p-1}_{L^1(\mathbb{R}^3)}
\int_{\mathbb{R}^3} \left(
\int_{\mathbb{R}} \lvert f(x'-y',x_3-y_3)\rvert^p\,dx_3
\right) \lvert g(y',y_3)\rvert\,dy
\\
&\le \lVert g\rVert^{p}_{L^1(\mathbb{R}^3)}
\underset{x'\in \mathbb{R}^2}{\text{ess sup}}\int_{\mathbb{R}}\lvert f(x',x_3)\rvert^p\,dx_3
\end{align*}
%
which yields the desired estimate.
In the case $q\in[1,\infty)$ and $p\in[1,\infty)$ we first assume that $f$ and $g$ are non-negative and observe that applying the Minkowski inequality yields
%
\begin{align*}
\left(
\int_\mathbb{R} (g\ast f)(x',x_3)^p\,dx_3
\right)^{1/p}
&=\left(
\int_\mathbb{R}
\left(
\int_{\mathbb{R}^2}
\left(
\int_{\mathbb{R}}f(x'-y',x_3-y_3)g(y',y_3)\,dy_3
\right)
\,dy'
\right)^p
dx_3
\right)^{1/p}
\\
&\le \int_{\mathbb{R}^2}
\left(
\int_\mathbb{R}
\left(
\int_\mathbb{R} f(x'-y',x_3-y_3)g(y',y_3)\,dy_3
\right)^p
\,dx_3
\right)^{1/p}
\,dy'
\end{align*}
%
and applying Young's inequality in the last variable yields
%
\begin{align*}
\left(
\int_\mathbb{R} \left[
\int_\mathbb{R} f(x'-y',x_3-y_3)g(y',y_3)\,dy_3
\right]^p\,dx_3
\right)^{1/p}
\le
\left(
\int_\mathbb{R} f(x'-y',x_3)^p\,dx_3
\right)^{1/p}
\int_\mathbb{R} g(y',y_3)\,dy_3
\end{align*}
%
so by taking these together we obtain
%
\begin{align*}
\left(
\int_\mathbb{R} (g\ast f)(x',x_3)^p\,dx_3
\right)^{1/p}
\le \int_{\mathbb{R}^2}\left(
\int_\mathbb{R} f(x'-y',x_3)^p\,dx_3
\right)^{1/p}
\int_\mathbb{R} g(y',y_3)\,dy_3\,dy'.
\end{align*}
%
We now set
%
\[
F(x'):=\lVert f(x',\cdot)\rVert_{L^p(\mathbb{R})},
\quad
G(x'):=\lVert g(x',\cdot)\rVert_{L^1(\mathbb{R})},
\quad
\psi(x'):=\lVert (g\ast f)(x',\cdot)\rVert_{L^p(\mathbb{R})}
\]
%
to obtain $\psi(x')\le (G\ast F)(x')$ and thus $\lVert \psi\rVert_{L^q(\mathbb{R}^2)}\le \lVert G\ast F\rVert_{L^q(\mathbb{R}^2)}$. Applying Young's inequality then yields the desired estimate.
The step to general $f$ and $g$ is then done via an approximation argument. The case $q=1$ and $p=\infty$ is similar, and the case $p=q=\infty$ is included in the regular version of Young's inequality.
\end{proof}
\end{comment}
\section{Linear estimates for the Laplace operator}\label{sec:laplace}
In this section we establish resolvent and semigroup estimates for Laplace operators with a focus on anisotropic $L^q_HL^p_z$-spaces, where $p,q\in [1,\infty]$.
First, we consider the resolvent problem for the
Laplacian on the full space for
$$\lambda\in \Sigma_\theta=\{
\lambda\in\mathbb{C}\setminus\{0\}:\lvert\text{arg}(\lambda)\rvert<\theta\}, \quad \theta\in (0,\pi),$$
i.e.
\begin{align}\label{eq:LaplaceResolventFullspace}
\Delta v - \lambda v=f \text{ on } \mathbb{R}^3, \quad f\in C^\infty_c(\mathbb{R}^3),
\end{align}
%
and for $\partial_j\in\{\partial_x,\partial_y,\partial_z\}$
%
\begin{align}\label{eq:LaplaceResolventDerivativeFullspace}
\Delta w - \lambda w=\partial_j f \text{ on } \mathbb{R}^3, \quad f\in C^\infty_c(\mathbb{R}^3).
\end{align}
%
It is well known that the solution to problem \eqref{eq:LaplaceResolventFullspace} is given by the convolution $v=K_\lambda \ast f$ and the one to problem \eqref{eq:LaplaceResolventDerivativeFullspace} by
$w=\partial_j K_\lambda\ast f$, where $K_\lambda$ is explicitly given by
%
\begin{align*}
K_\lambda(x)
=\frac{1}{4\pi}
\frac{
e^{
-\lambda^{1/2}\lvert x\rvert
}
}{\lvert x\rvert},
\quad x\in\mathbb{R}^3\setminus\{0\}.
\end{align*}
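Here $\lambda^{1/2}$ denotes the principal branch of the square root, so that for $\lambda\in\Sigma_\theta$ with $\theta\in(0,\pi)$
%
\[
\operatorname{Re}\bigl(\lambda^{1/2}\bigr)
=\lvert \lambda\rvert^{1/2}\cos\bigl(\text{arg}(\lambda)/2\bigr)
\ge \lvert \lambda\rvert^{1/2}\cos(\theta/2)
>0,
\]
%
which yields the exponential decay of $K_\lambda$ exploited below.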
%
Using this representation one easily obtains the following uniform $L^1(\mathbb{R}^3)$-estimates.
\begin{lemma}
For all $\theta\in(0,\pi)$ there exists $C_\theta>0$ such that for all $\lambda \in \Sigma_{\theta}$ one has
%
\[
\lvert \lambda \rvert \cdot \lVert K_\lambda\rVert_{L^1(\mathbb{R}^3)}
+\lvert \lambda \rvert^{1/2} \lVert \nabla K_\lambda\rVert_{L^1(\mathbb{R}^3)}
\le C_\theta.
\]
%
\end{lemma}
\begin{proof}
Set $\psi:=\text{arg}(\lambda)\in (-\theta,\theta)$. Since $K_\lambda$ is radially symmetric we use spherical coordinates to obtain
%
\[
\int_{\mathbb{R}^3} \lvert K_\lambda(x)\rvert\,dx
=\int_0^\infty r e^{-\lvert \lambda\rvert^{1/2}\cos(\psi/2)r}
\,dr
\]
%
as well as
%
\[
\int_{\mathbb{R}^3} \lvert \nabla K_\lambda(x)\rvert\,dx
\le
\int_0^\infty
(1+\lvert \lambda\rvert^{1/2}r)e^{-\lvert \lambda\rvert^{1/2}\cos(\psi/2)r}
\,dr.
\]
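Both integrals are elementary: abbreviating $a:=\lvert\lambda\rvert^{1/2}\cos(\psi/2)>0$, one computes
%
\[
\int_0^\infty r e^{-ar}\,dr=\frac{1}{a^2},
\qquad
\int_0^\infty \bigl(1+\lvert\lambda\rvert^{1/2} r\bigr) e^{-ar}\,dr
=\frac{1}{a}+\frac{\lvert\lambda\rvert^{1/2}}{a^2}.
\]
%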
%
So, $\lvert \lambda \rvert \cdot \lVert K_\lambda\rVert_{L^1(\mathbb{R}^3)} =\sec(\psi/2)^2$ and
$\lvert \lambda\rvert^{1/2} \lVert \nabla K_\lambda \rVert_{L^1(\mathbb{R}^3)}
\le \sec(\psi/2)+\sec(\psi/2)^2$, and thus
we obtain the desired result.
%
%
\end{proof}
From this and Young's inequality for convolutions in anisotropic spaces, cf. Lemma~\ref{YoungAnisotropic}, one immediately obtains suitable $L^q_H L^p_z$-estimates for the resolvent problems \eqref{eq:LaplaceResolventFullspace} and \eqref{eq:LaplaceResolventDerivativeFullspace} for $q,p\in [1,\infty]$.
\begin{corollary}\label{LemmaLaplaceResolventFullspace}
Let $\lambda \in \Sigma_{\theta}$ for some $\theta\in(0,\pi)$. Assume one of the following cases:
\begin{itemize}
\item[(i)] $p,q\in [1,\infty)$ and $f\in L^q_H L^p_z (\mathbb{R}^3)$, or
\item[(ii)] $p\in [1,\infty)$, $q=\infty$, and $f\in L^q_H L^p_z (\mathbb{R}^3)$ with compact support in horizontal direction, or
\item[(iii)] $p=\infty$, $q\in [1,\infty)$, and $f\in L^q_H L^p_z (\mathbb{R}^3)$ with compact support in vertical direction.
\end{itemize}
Then the functions
\begin{align*}
v=K_\lambda\ast f \quad \hbox{ and } \quad w=\partial_j K_\lambda\ast f
\end{align*}
are the unique solutions to the problems \eqref{eq:LaplaceResolventFullspace} and \eqref{eq:LaplaceResolventDerivativeFullspace} in $L^q_HL^p_z(\mathbb{R}^3)$, respectively, and
there exists a constant $C_\theta>0$ such that
%
\begin{align}\label{eq:AnisotropicResolventFullspace}
\lvert \lambda\rvert \cdot \lVert v\rVert_{L^q_H L^p_z (\mathbb{R}^3)}
+\lvert \lambda\rvert^{1/2} \lVert \nabla v\rVert_{L^q_H L^p_z (\mathbb{R}^3)}
+\lVert \Delta v\rVert_{L^q_H L^p_z (\mathbb{R}^3)}
\le C_\theta \lVert f\rVert_{L^q_H L^p_z (\mathbb{R}^3)},\\
%
\lvert \lambda\rvert^{1/2} \lVert w\rVert_{L^q_H L^p_z (\mathbb{R}^3)}
\le C_\theta \lVert f\rVert_{L^q_H L^p_z (\mathbb{R}^3)}.\label{eq:AnisotropicResolventDerivativeFullspace}
\end{align}
\end{corollary}
\begin{remark}
In the case $q,p\in[1,\infty)$ we have that $C^\infty_c(\mathbb{R}^3)$ is dense in $L^q_H L^p_z(\mathbb{R}^3)$, so we may assume that $f$ is essentially bounded and has compact support, yielding $\partial_i (K_\lambda \ast f)=(\partial_i K_\lambda)\ast f$. In the cases where $q$ and/or $p$ is infinite we add this as an assumption.
\end{remark}
We now investigate for the Laplacian on $\Omega$ with boundary conditions \eqref{eq:bc} the resolvent problems
%
\begin{align}\label{eq:LaplaceResolventOmega}
\lambda v-\Delta v=f \text{ on } \Omega,
\end{align}
%
and for $\partial_i \in\{\partial_x,\partial_y,\partial_z\}$
%
\begin{align}\label{eq:LaplaceResolventDerivativeOmega}
\lambda w-\Delta w=\partial_i f \text{ on } \Omega.
\end{align}
\begin{lemma}\label{LemmaAnisotropicLaplaceResolventOmega}
Let $\theta\in (0,\pi)$ and
$f\in L^q_H L^p_z(\Omega)$ for $q\in [1,\infty], p\in [1,\infty)$.
Then there exists $\lambda_0>0$ such that for $\lambda \in \Sigma_{\theta}$ with $\lvert \lambda\rvert\ge \lambda_0$ the problems \eqref{eq:LaplaceResolventOmega} and \eqref{eq:LaplaceResolventDerivativeOmega} have unique solutions $v\in L^q_H L^p_z(\Omega)$ and $w\in L^q_H L^p_z(\Omega)$, respectively, and there exists a constant $C_{\theta}>0$
such that %
\begin{align}\label{eq:AnisotropicResolventOmega}
\lvert \lambda \rvert \cdot \lVert v\rVert_{L^q_H L^p_z(\Omega)}
+ \lvert \lambda \rvert^{1/2} \lVert \nabla v\rVert_{L^q_H L^p_z(\Omega)}
+ \lVert \Delta v\rVert_{L^q_H L^p_z(\Omega)}
\le C_{\theta} \lVert f\rVert_{L^q_H L^p_z(\Omega)}, \\
\label{eq:AnisotropicResolventDerivativeOmega}
\lvert \lambda \rvert^{1/2} \lVert w\rVert_{L^q_H L^p_z(\Omega)}
\le C_{\theta} \lVert f\rVert_{L^q_H L^p_z(\Omega)}.
\end{align}
%
In particular, for $q=\infty$ and $p\in (2,\infty)$ one can choose $\lambda_0=0$.
\end{lemma}
To prove this lemma, we will need some facts concerning isotropic $L^p$-spaces.
So, for $p\in (1,\infty)$ denote by $\Delta_p$ the Laplace operator on $L^p(\Omega)$ defined by
\begin{align*}
\Delta_p v=\Delta v, \quad D(\Delta_p)=\{
v\in W^{2,p}_{\text{per}}(\Omega):
\restr{\partial_z v}{\Gamma_u}=0,
\restr{v}{\Gamma_b}=0
\}.
\end{align*}
%
One has $\rho(-\Delta_p)\subset \mathbb{C}\setminus [\delta,\infty)$, for some $\delta>0$, i.e. $0\in \rho(-\Delta_p)$, cf. \cite[Remark 8.23]{Nau2012}, and the resolvent satisfies for some $C_{\theta,p}>0$ the estimate
\begin{align}\label{eq:LpResolventEstimateLaplace}
\lvert \lambda \rvert \cdot \lVert (\Delta_p - \lambda)^{-1}f\rVert_{L^p(\Omega)}
+ \Vert \Delta_p(\Delta_p - \lambda)^{-1}f\rVert_{L^p(\Omega)}
\le C_{\theta,p}\lVert f\rVert_{L^p(\Omega)}, \quad f\in L^p(\Omega),
\end{align}
%
where $\lambda\in \Sigma_\theta$, $\theta\in(0,\pi)$.
%
%
Furthermore, $-\Delta_p$ possesses a bounded $\mathcal{H}^\infty$-calculus of angle $0$, see e.g. \cite{Nau2012},
and therefore
%
\begin{align}\label{eq:Domains}
D((-\Delta_p)^\vartheta)=[L^p(\Omega),D(\Delta_p)]_\vartheta\subset W^{2\vartheta,p}(\Omega), \quad \vartheta\in [0,1],
\end{align}
%
where $[\cdot,\cdot]$ denotes the complex interpolation functor. In particular $\partial_j(-\Delta_p)^{-1/2}$ is bounded on $L^p(\Omega)$ for $\partial_j\in\{\partial_x,\partial_y,\partial_z\}$ and by taking adjoints the same holds true for the closure of $(-\Delta_p)^{-1/2}\partial_j$.
This yields the estimates
%
\begin{align}\label{eq:LpEstimateDerivativesLaplace}
\begin{split}
\lvert \lambda \rvert^{1/2}\lVert \partial_j(\Delta_p-\lambda)^{-1} f\rVert_{L^p(\Omega)}
+\lvert \lambda \rvert^{1/2}\lVert
(\Delta_p-\lambda)^{-1}\partial_j f\rVert_{L^p(\Omega)}
& \le C_{\theta,p} \lVert f\rVert_{L^p(\Omega)}, \quad f\in L^p(\Omega), \\
\lVert \partial_j (\Delta_p-\lambda)^{-1}\partial_i f\rVert_{L^p(\Omega)}
& \le C_{\theta,p}\lVert f\rVert_{L^p(\Omega)}, \quad f\in L^p(\Omega)
\end{split}
\end{align}
for $\lambda\in \Sigma_\theta$, $\theta\in(0,\pi)$, and some $C_{\theta,p}>0$.
\begin{proof}[Proof of Lemma~\ref{LemmaAnisotropicLaplaceResolventOmega}]
First, we apply the following density arguments:
\begin{itemize}
\item[(i)] For $q,p\in [1,\infty)$ and $f\in L^q_H L^p_z(\Omega)$ we assume that $f\in C^\infty_{\text{per}}([0,1]^2;C^\infty_c(-h,0))$ since \\ $C^\infty_{\text{per}}([0,1]^2;C^\infty_c(-h,0))$ is a dense subspace of $L^q_H L^p_z(\Omega)$.
\item[(ii)] For $q=\infty$ and $f\in L^\infty(G;L^p(-h,0))$, we assume that $f\in L^\infty(G;C^\infty_c(-h,0))$, as the latter space is dense in $L^\infty_H L^p_z(\Omega)$.
\end{itemize}
In particular, in either case we may assume that $f=0$ on $\Gamma_u\cup \Gamma_b$ and $f\in L^\infty(\Omega)$.
The existence of a unique solution to the problems \eqref{eq:LaplaceResolventOmega} and \eqref{eq:LaplaceResolventDerivativeOmega} in $L^q_H L^p_z(\Omega)$ for such smooth $f$ follows from the properties of the mappings $(\lambda-\Delta)^{-1}$ and $(\lambda-\Delta)^{-1}\partial_i$ in $L^r(\Omega)$ for $\lambda \in \Sigma_{\theta}$
since
$$v\in W^{2,r}(\Omega)\hookrightarrow L^\infty(\Omega)\hookrightarrow L^q_H L^p_z(\Omega) \quad \hbox {and} \quad w\in W^{1,r}(\Omega)\hookrightarrow L^\infty(\Omega)\hookrightarrow L^q_H L^p_z(\Omega), \quad r >3.$$
It therefore suffices to prove the estimates \eqref{eq:AnisotropicResolventOmega} and \eqref{eq:AnisotropicResolventDerivativeOmega}. This is done in the following by localizing the results of Lemma~\ref{LemmaLaplaceResolventFullspace}.
For this purpose we first make use of the extension operator
%
\[
E=E^\text{even,odd}_{z}\circ E^{\text{per}}_{H}
\]
%
where $E^{\text{per}}_{H}$ is the periodic extension operator from $G$ to $\mathbb{R}^2$ in horizontal direction and $E_z^{\text{even,odd}}$ extends from $(-h,0)$ to $(-2h,h)$ in vertical direction via even and odd reflection at the top and bottom parts of the boundary, respectively.
Second, we utilize a family of cut-off-functions $\chi_r\in C^\infty_c(\mathbb{R}^3)$ for $r\in (0,\infty)$ of the form $\chi_r(x,y,z)=\varphi_r(x,y)\psi_r(z)$ where $\varphi_r \in C_c^\infty(\mathbb{R}^2)$ and $\psi_r\in C_c^\infty(\mathbb{R})$ satisfy
%
\begin{align*}
\begin{array}{rlrl}
\varphi_r \equiv 1 & \text{ on } [-1/4,5/4]^2, &
\varphi_r \equiv 0 & \text{ on } \left((-\infty,-r-1/4]\cup [5/4+r,\infty)\right)^2,
\\
\psi_r \equiv 1 & \text{ on } [-5h/4,h/4], &
\psi_r \equiv 0 & \text{ on } (-\infty,-r-5h/4]\cup [h/4+r,\infty),
\end{array}
\end{align*}
%
and there is a constant $M>0$ independent of $r$ such that
%
\[
\lVert \varphi_r\rVert_\infty +\lVert \psi_r\rVert_\infty
+ r \left(
\lVert \nabla_H \varphi_r \rVert_\infty +\lVert \partial_z \psi_r \rVert_\infty
\right)
+r^2 \left(
\lVert \Delta_H \varphi_r\rVert_\infty +\lVert \partial_z^2\psi_r\rVert_\infty
\right)
\le M.
\]
%
Here, we consider $0<4r<3\min\{1,h\}$, which implies that $\varphi_r$ and $\psi_r$ are supported in $(-1,2)^2$ and $(-2h,h)$, respectively.
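One concrete way to obtain such a family (a sketch; any choice with the stated properties works) is to mollify indicator functions. With a standard mollifier $\rho_\varepsilon(t)=\varepsilon^{-1}\rho(t/\varepsilon)$, where $\rho\in C^\infty_c(-1,1)$, $\rho\ge 0$, $\int_\mathbb{R}\rho=1$, one may set
%
\[
\varphi_r(x,y):=\phi_r(x)\phi_r(y),
\quad
\phi_r:=\mathds{1}_{[-1/4-r/2,\,5/4+r/2]}\ast \rho_{r/4},
\quad
\psi_r:=\mathds{1}_{[-5h/4-r/2,\,h/4+r/2]}\ast \rho_{r/4}.
\]
%
Then $\lVert \phi_r^{(k)}\rVert_\infty\le \lVert \rho^{(k)}_{r/4}\rVert_{L^1(\mathbb{R})}\le C_\rho\, r^{-k}$ for $k\in\{0,1,2\}$, and likewise for $\psi_r$, so the constant $M$ can indeed be chosen independently of $r$.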
We now define an extension of $v$ from $\Omega$ onto the whole space $\mathbb{R}^3$ via
%
\[
u(x,y,z)=\chi_r(x,y,z) (Ev)(x,y,z)
\]
%
for a suitable value of $r$ which we will specify later on.
If $v$ solves problem \eqref{eq:LaplaceResolventOmega} then $u$ solves the problem
%
\[
\lambda u - \Delta u = F \text{ on } \mathbb{R}^3, \quad
F:=\chi_r Ef-2(\nabla \chi_r) \cdot E(\nabla v)-(\Delta \chi_r) Ev.
\]
%
Here we made use of the fact that $E$ commutes with derivatives of $v$.
Note that not only does $F$ have compact support, but we also have $F\in L^\infty(\mathbb{R}^3)$ since we may assume that $f\in L^\infty(\Omega)$ and $v\in W^{1,\infty}(\Omega)$ by the above approximation argument.
Thus we may now apply Lemma~\ref{LemmaLaplaceResolventFullspace}, and estimate \eqref{eq:AnisotropicResolventFullspace} yields
%
\[
\lvert \lambda \rvert\cdot \lVert u\rVert_{L^q_HL^p_z(\mathbb{R}^3)}
+\lvert \lambda \rvert^{1/2} \lVert \nabla u\rVert_{L^q_HL^p_z(\mathbb{R}^3)}
\le C_\theta \lVert F\rVert_{L^q_HL^p_z(\mathbb{R}^3)}.
\]
%
To estimate $F$ we use that $\chi_r$ is supported on $(-1,2)^2\times(-2h,h)$, and therefore
%
\begin{align*}
\lVert \chi_r Ef\rVert_{L^q_HL^p_z(\mathbb{R}^3)}
&\le 27 M^2 \lVert f\rVert_{L^q_HL^p_z(\Omega)} ,
\\
\lVert (\nabla\chi_r) \cdot E(\nabla v)\rVert_{L^q_HL^p_z(\mathbb{R}^3)}
&\le 27 M^2r^{-1}\lVert \nabla v\rVert_{L^q_HL^p_z(\Omega)},
\\
\lVert (\Delta\chi_r) Ev\rVert_{L^q_HL^p_z(\mathbb{R}^3)}
&\le 27 M^2r^{-2}\lVert v\rVert_{L^q_HL^p_z(\Omega)}.
\end{align*}
%
Next, we set $r=\eta \lvert \lambda\rvert^{-1/2}$ to obtain
%
\begin{align*}
\lVert F\rVert_{L^q_HL^p_z(\mathbb{R}^3)}
&\le 27 M^2 \left(
\lVert f\rVert_{L^q_HL^p_z(\Omega)}
+2\eta^{-1}\lvert \lambda\rvert^{1/2}\lVert \nabla v\rVert_{L^q_HL^p_z(\Omega)}
+\eta^{-2} \lvert \lambda\rvert\cdot\lVert v\rVert_{L^q_HL^p_z(\Omega)}
\right).
\end{align*}
%
Now assume that $\eta>0$ is sufficiently large such that $54 C_\theta M^2\eta^{-1}<1/2$ and $27C_\theta M^2 \eta^{-2}<1/2$, and then that $\lambda_0>0$ is large enough such that $4\eta \lambda_0^{-1/2}<3\min\{1,h\}$. This and the fact that $u$ is an extension of $v$ then yields
%
\[
\lvert \lambda \rvert\cdot \lVert v\rVert_{L^q_HL^p_z(\Omega)}
+\lvert \lambda \rvert^{1/2} \lVert \nabla v\rVert_{L^q_HL^p_z(\Omega)}
\le 54 C_\theta M^2 \lVert f\rVert_{L^q_HL^p_z(\Omega)} \quad \hbox{for} \quad \lvert \lambda\rvert\ge \lambda_0.
\]
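In more detail, the last step is the usual absorption argument: abbreviating
$X:=\lvert \lambda \rvert\cdot \lVert v\rVert_{L^q_HL^p_z(\Omega)}
+\lvert \lambda \rvert^{1/2} \lVert \nabla v\rVert_{L^q_HL^p_z(\Omega)}$,
the choice of $\eta$ together with $\chi_r\equiv 1$ on $\overline{\Omega}$ gives
%
\[
X\le C_\theta \lVert F\rVert_{L^q_HL^p_z(\mathbb{R}^3)}
\le 27 C_\theta M^2 \lVert f\rVert_{L^q_HL^p_z(\Omega)}
+\tfrac{1}{2}X,
\]
%
and absorbing $\tfrac{1}{2}X$ into the left-hand side yields the stated constant $54 C_\theta M^2$.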
%
In the case $q=\infty$, $p\in (2,\infty)$ we obtain the estimate for the full range of $\lambda\in \Sigma_\theta$ by setting $\lambda_1:=\frac{\lambda_0}{\lvert \lambda \rvert}\lambda$ for $0<\lvert \lambda\rvert <\lambda_0$. Then $f\in L^\infty_H L^p_z(\Omega)\hookrightarrow L^p(\Omega)$ yields
%
\[
\lvert \lambda \rvert \cdot \lVert v\rVert_{L^p(\Omega)}
+\lvert \lambda \rvert^{1/2} \lVert \nabla v \rVert_{L^p(\Omega)}
+\lVert \Delta v\rVert_{L^p(\Omega)}
\le C_{\theta,p} \lVert f\rVert_{L^p(\Omega)}
\]
%
by \eqref{eq:LpResolventEstimateLaplace} and since $\lambda_1 v-\Delta v=f+(\lambda_1-\lambda)v$ we obtain
\[
\lvert \lambda_1 \rvert \cdot \lVert v\rVert_{L^\infty_H L^p_z(\Omega)}
+\lvert \lambda_1 \rvert^{1/2} \lVert \nabla v \rVert_{L^\infty_H L^p_z(\Omega)}
+\lVert \Delta v\rVert_{L^\infty_H L^p_z(\Omega)}
\le C_{\theta,p} \left(
\lVert f\rVert_{L^\infty_H L^p_z(\Omega)}
+\lvert \lambda_1-\lambda\rvert \cdot \lVert v \rVert_{L^\infty_H L^p_z(\Omega)}
\right)
\]
%
where we can further estimate $\lvert \lambda_1-\lambda\rvert<\lambda_0$, and
$p\in (1,\infty)$ yields
%
\[
\lVert v \rVert_{L^\infty_H L^p_z(\Omega)}
\le C_p \lVert v\rVert_{W^{2,p}_H L^p_z(\Omega)}
\le C_p \lVert v\rVert_{W^{2,p}(\Omega)}
\le C_p \lVert \Delta v\rVert_{L^p(\Omega)}
\le C_p\lVert f\rVert_{L^p(\Omega)}
\le C_p \lVert f\rVert_{L^\infty_H L^p_z(\Omega)}
\]
%
where we used $W^{2,p}(G)\hookrightarrow L^{\infty}(G)$ and that $\Delta_p$ is invertible on $L^p(\Omega)$. Since $\lvert \lambda_1\rvert=\lambda_0>\lvert \lambda\rvert$, this yields the desired result for the full range of $\lambda\in \Sigma_\theta$, $\theta\in (0,\pi)$.
If $v$ instead solves problem \eqref{eq:LaplaceResolventDerivativeOmega} with
$\partial_i=\partial_z$ then $u$ solves the problem
%
\[
\lambda u-\Delta u=G \text{ on } \mathbb{R}^3, \quad
G:=\chi_r E(\partial_z f)-2(\nabla \chi_r) \cdot E(\nabla v)-(\Delta\chi_r)Ev.
\]
%
We rewrite
%
\[
-2(\nabla\chi_r) \cdot E(\nabla v)-(\Delta\chi_r)Ev
=-2 \text{div} (\nabla\chi_r Ev)+(\Delta\chi_r) Ev,
\quad
\chi_r E(\partial_z f)=\partial_z (\chi_r s Ef)-(\partial_z\chi_r) s Ef
\]
%
where
%
\[
s(z)=\begin{cases}
1, & z\in (-2h,0),
\\
-1, & z\in (0,h).
\end{cases}
\]
%
Here, by the density argument above, we may assume $f=0$ on $\Gamma_u\cup \Gamma_b$.
This yields $u=u_1+u_2$ where
%
\begin{align*}
\begin{array}{rlll}
\lambda u_1-\Delta u_1&=\partial_z G_1 +\text{div}_H G_2 &\text{ on } \mathbb{R}^3, &
G_1:=\chi_r s Ef, \quad G_2:=-2 (\nabla \chi_r) Ev,
\\
\lambda u_2-\Delta u_2&=G_3 &\text{ on } \mathbb{R}^3, &
G_3:=-(\partial_z\chi_r) s Ef+(\Delta\chi_r) Ev.
\end{array}
\end{align*}
%
Since $G_i$, $i\in \{1,2,3\}$, are bounded and have compact support, we may apply Lemma~\ref{LemmaLaplaceResolventFullspace} to obtain the estimate
%
\[
\lvert \lambda \rvert^{1/2} \lVert u\rVert_{L^q_H L^p_z(\mathbb{R}^3)}
\le C_\theta \left(
\lVert G_1\rVert_{L^q_H L^p_z(\mathbb{R}^3)}
+\lVert G_2\rVert_{L^q_H L^p_z(\mathbb{R}^3)}
+\lvert \lambda \rvert^{-1/2} \lVert G_3\rVert_{L^q_H L^p_z(\mathbb{R}^3)}
\right).
\]
%
Proceeding as above we obtain
%
\begin{align*}
\lVert G_1\rVert_{L^q_H L^p_z(\mathbb{R}^3)}
&\le 27 M^2\lVert f\rVert_{L^q_H L^p_z(\Omega)},
\\
\lVert G_2\rVert_{L^q_H L^p_z(\mathbb{R}^3)}
&\le 54 M^2 \eta^{-1} \lvert\lambda\rvert^{1/2}
\lVert v\rVert_{L^q_H L^p_z(\Omega)},
\\
\lVert G_3\rVert_{L^q_H L^p_z(\mathbb{R}^3)}
&\le 27 M^2\eta^{-1}\lvert \lambda\rvert^{1/2}
\lVert f\rVert_{L^q_H L^p_z(\Omega)}
+27 M^2 \eta^{-2}\lvert \lambda\rvert\cdot
\lVert v\rVert_{L^q_H L^p_z(\Omega)}.
\end{align*}
%
The above assumptions on $\eta$ and $\lambda_0$ then yield the desired result for $\lvert \lambda\rvert>\lambda_0$. The case $\partial_i\in \{\partial_x,\partial_y\}$ is analogous, where for $f\in L^\infty(G;C^\infty_c(-h,0))$ horizontal derivatives are understood in the sense of distributions; otherwise derivatives can be treated using smooth approximations as above.
For the case $q=\infty$ and $p\in (2,\infty)$, to extend this estimate to the full range of $\lambda\in \Sigma_\theta$ one proceeds as above to obtain
\[
\lvert \lambda\rvert^{1/2}\cdot \lVert v\rVert_{L^p(\Omega)}
+ \lVert \nabla v\rVert_{L^p(\Omega)}
\le C_{\theta,p} \lVert f\rVert_{L^p(\Omega)}
\]
%
from \eqref{eq:LpEstimateDerivativesLaplace},
as well as
%
\[
\lvert \lambda_1\rvert^{1/2}\cdot \lVert v\rVert_{L^\infty_H L^p_z(\Omega)}
+ \lVert \nabla v\rVert_{L^\infty_H L^p_z(\Omega)}
\le C_{\theta}\left(
\lVert f\rVert_{L^\infty_H L^p_z(\Omega)}
+ \lvert \lambda_1\rvert^{-1/2} \lvert \lambda_1-\lambda\rvert \cdot
\lVert v\rVert_{L^\infty_H L^p_z(\Omega)}
\right).
\]
%
Since we have $\lvert \lambda_1\rvert^{-1/2} \lvert \lambda_1-\lambda\rvert\le \lambda_0^{1/2}$ and $p\in (2,\infty)$ this yields
%
\[
\lVert v\rVert_{L^\infty_H L^p_z(\Omega)}
\le C_p \lVert v\rVert_{W^{1,p}_H L^p_z(\Omega)}
\le C_p \lVert v\rVert_{W^{1,p}(\Omega)}
\le C_p \lVert \nabla v \rVert_{L^p(\Omega)}
\le C_p \lVert f\rVert_{L^p(\Omega)}
\le C_p \lVert f\rVert_{L^\infty_H L^p_z(\Omega)},
\]
%
where we used the embedding $W^{1,p}(G)\hookrightarrow L^{\infty}(G)$ and
the Poincar{\'e} inequality $\lVert v\rVert_{L^p(\Omega)} \leq C_p \lVert \nabla v\rVert_{L^p(\Omega)}$ for $v$ with $\restr{v}{\Gamma_b}=0$.
\end{proof}
\begin{remark}\label{rem:AnisotropicLaplaceResolventOmega}
The results of Lemma~\ref{LemmaAnisotropicLaplaceResolventOmega} also hold true if the condition $\restr{\partial_z v}{\Gamma_u}=0$ is replaced by $\restr{v}{\Gamma_u}=0$ or if $L^q_HL^p_z(\Omega)$ is replaced by
$C_{\text{per}}([0,1]^2;L^p(-h,0))$. For pure Dirichlet boundary conditions one extends by an odd reflection at both $z=0$ and $z=-h$, replacing $E^\text{even,odd}_{z}$ by $E^\text{odd,odd}_{z}$ and setting $s(z)\equiv 1$ in the proof.
\end{remark}
Since $\Omega=G\times(-h,0)$ is a cylindrical domain, the semigroup generated by the Laplacian with the above boundary conditions satisfies
%
\[
e^{t\Delta}(f\otimes g)
=e^{t\Delta_H}f\otimes e^{t\Delta_z}g, \quad
f:G\to \mathbb{R}^2, \quad g:(-h,0)\to \mathbb{R},
\]
%
where $(f\otimes g)(x,y,z):=f(x,y)g(z)$ is an elementary tensor, $\Delta_H:=\partial_x^2+\partial_y^2$ is the Laplacian on $G$ with periodic boundary conditions and
$\Delta_z$ is defined by
%
\[
\Delta_z v := \partial_z^2 v, \quad D(\Delta_z)=\{ f\in W^{2,p}(-h,0):f(-h)=\partial_z f(0)=0 \}.
\]
We now investigate these operators separately, starting with the vertical one, cf. \cite{DenkHieberPruess2003, Nau2012}.
\begin{lemma}\label{LemmaVerticalSemigroup}
Let $p\in(1,\infty)$. Then the operator $\Delta_z$
%
%
generates a strongly continuous, exponentially stable, analytic semigroup on $L^p(-h,0)$.
%
\end{lemma}
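For orientation (a sketch, not needed in what follows): the boundary conditions $f(-h)=\partial_z f(0)=0$ select the eigenpairs
%
\[
\Delta_z e_k=-\mu_k^2\, e_k,
\qquad
e_k(z)=\cos(\mu_k z),
\qquad
\mu_k=\Bigl(k+\tfrac{1}{2}\Bigr)\frac{\pi}{h},
\qquad k\in\mathbb{N}_0,
\]
%
so the spectrum of $\Delta_z$ is contained in $(-\infty,-\pi^2/(4h^2)]$, which is consistent with the exponential stability.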
\begin{lemma}\label{LemmaHorizontalSemigroup}
Let $\theta\in(0,\pi/2)$. Then there exists a constant $C_\theta>0$ such that for all $\tau\in\Sigma_\theta$ we have
%
\begin{align*}
\lvert \tau\rvert^{1/2}\lVert \nabla_H e^{\tau\Delta_H}Q f\rVert_{L^\infty(G)}
\le C_\theta \lVert f\rVert_{L^\infty(G)}, \quad f\in L^\infty(G).
\end{align*}
%
\end{lemma}
\begin{remark}
\label{rem:HelmholtzLinfty}
Note that although the two-dimensional Helmholtz projector with periodic boundary conditions $Q$ is unbounded on $L^{\infty}(G)$, the composition $\nabla_H e^{\tau\Delta_H}Q$ defines a bounded operator for $\tau\in \Sigma_\theta$.
\end{remark}
\begin{proof}[Proof of Lemma~\ref{LemmaHorizontalSemigroup}]
Let $Q_{\mathbb{R}^2}$ and $Q$ be the Helmholtz projection on $\mathbb{R}^2$ and $\mathbb{T}^2$, respectively, and $E_H^{\text{per}}$ be the periodic extension operator from $G$ onto $\mathbb{R}^2$. Then $E^{\text{per}}_H Q f=Q_{\mathbb{R}^2} E^{\text{per}}_H f$ for all $f:G\to \mathbb{R}^2$ and
%
\begin{align*}
E_H^{\text{per}}\lvert \tau\rvert^{1/2} \nabla_H e^{\tau\Delta_H}Q f
=\lvert \tau\rvert^{1/2} \nabla_H e^{\tau\Delta_H}E_H^{\text{per}}Q f
=\lvert \tau\rvert^{1/2} \nabla_H e^{\tau\Delta_H}Q_{\mathbb{R}^2} E_H^{\text{per}} f.
\end{align*}
%
Since $\lVert E_H^{\text{per}}f\rVert_{L^\infty(\mathbb{R}^2)}=\lVert f\rVert_{L^\infty(G)}$ it therefore suffices to consider the operator $\Delta_H$ on the full space $\mathbb{R}^2$.
Recall that $\mathds{1}-Q_{\mathbb{R}^2}$ is given by $(R_jR_k)_{1\le j,k\le 2}$ where $R_j$ is the Riesz transform in the $j$-th direction.
We therefore investigate the family of Fourier multipliers
%
\[
m_{\tau,j,k,l}(\xi)=
\begin{cases}
&\lvert \tau \rvert^{1/2}\xi_l \left(
\delta_{j,k}-\frac{\xi_j\xi_k}{\lvert \xi\rvert^2}
\right)
e^{-\tau\lvert \xi\rvert^2},
\quad \xi\in\mathbb{R}^2\setminus\{0\},
\\
&0, \quad \xi=0,
\end{cases} \quad \hbox{for} \quad 1\le j,k,l \le 2.
\]
%
Using the invariance under rescaling and replacing $\xi$ with $\lvert \tau\rvert^{-1/2}\xi$, we may assume that $\tau=e^{i\psi}$ where $\lvert \psi\rvert < \theta$.
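Indeed, substituting $\xi\mapsto \lvert\tau\rvert^{-1/2}\xi$ and writing $\tau=\lvert\tau\rvert e^{i\psi}$ gives
%
\[
m_{\tau,j,k,l}\bigl(\lvert\tau\rvert^{-1/2}\xi\bigr)
=\xi_l\left(
\delta_{j,k}-\frac{\xi_j\xi_k}{\lvert\xi\rvert^2}
\right)
e^{-e^{i\psi}\lvert\xi\rvert^2}
=m_{e^{i\psi},j,k,l}(\xi),
\quad \xi\in\mathbb{R}^2\setminus\{0\},
\]
%
while dilations do not change the relevant norm: $\lVert \mathcal{F}^{-1}[m(a\,\cdot)]\rVert_{L^1(\mathbb{R}^2)}=\lVert \mathcal{F}^{-1}m\rVert_{L^1(\mathbb{R}^2)}$ for every $a>0$.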
We show that for each of these symbols we have $m=\hat{g}$ for some $g\in L^1(\mathbb{R}^2)$ such that $\lVert g\rVert_{L^1(\mathbb{R}^2)}\le C_\theta$. The desired estimate then follows from Young's inequality.
Since this family of symbols belongs to $C(\mathbb{R}^2)\cap C^\infty(\mathbb{R}^2\setminus\{0\})$ we verify the Mikhlin condition
%
\begin{align}\label{eq:Mikhlin}
\max_{\lvert \alpha\rvert\le 2}\sup_ {\xi\in\mathbb{R}^2\setminus\{0\}}
\lvert \xi\rvert^{\lvert\alpha\rvert+\delta}
\lvert D^\alpha m(\xi)\rvert<M<\infty,
\end{align}
%
for some $\delta>0$.
Elementary calculations using the homogeneity of the first factor show that for an arbitrary multi-index $\alpha\in\mathbb{N}^2$ we have
%
\[
\sup_ {\xi\in\mathbb{R}^2\setminus\{0\}}
\lvert \xi\rvert^{\lvert \alpha\rvert}
\left\lvert D^\alpha \frac{\xi_j \xi_k}{\lvert \xi\rvert^2}\right\rvert
<M_\alpha
<\infty,
\quad
\sup_ {\xi\in\mathbb{R}^2\setminus\{0\}}
\lvert \xi\rvert^{\lvert \alpha\rvert+\delta}
\left\lvert D^\alpha \xi_l e^{-e^{i\psi}\lvert \xi\rvert^2}\right\rvert
<M_{\alpha,\delta,\psi}
\le M_{\alpha,\delta,\theta}
<\infty
\]
%
for $\delta\in(0,1)$ which together with the product rule yield that \eqref{eq:Mikhlin} is satisfied. Analogously we verify the condition
%
\begin{align}\label{eq:NotMikhlin}
\lvert \xi\rvert^{\lvert \alpha\rvert}\lvert D^\alpha m(\xi)\rvert
\le C_\alpha \lvert \xi\rvert,
\quad \lvert \xi\rvert\le 1, \xi\neq 0
\end{align}
%
for $0<\lvert \alpha\rvert\le 2$ by noting that
%
\begin{align*}
\lvert \xi\rvert^{\lvert \alpha\rvert}
\left\lvert D^\alpha \frac{\xi_j \xi_k \xi_l}{\lvert \xi\rvert^2}\right\rvert
\le C_\alpha \lvert \xi\rvert, \quad \lvert \xi\rvert\le 1, \xi\neq 0
\end{align*}
%
and
%
\begin{align*}
\lvert \xi\rvert^{\lvert \alpha\rvert} \left\lvert D^\alpha
e^{-e^{i\psi}\lvert \xi\rvert^2}\right\rvert
+\lvert \xi\rvert^{\lvert \alpha\rvert} \left\lvert D^\alpha \xi_l
e^{-e^{i\psi}\lvert \xi\rvert^2}\right\rvert
\le C_{\alpha,\delta,\psi}\le C_{\alpha,\delta,\theta},
\quad \lvert \xi\rvert\le 1, \xi\neq 0.
\end{align*}
%
We now split the symbol into $m=\varphi m+(1-\varphi)m$ where $\varphi\in C^\infty_c(\mathbb{R}^2)$ is a cut-off function satisfying $\varphi(\xi)=1$ for $\lvert \xi\rvert\le 2$. Applying \cite[Lemma 8.2.3 and 8.2.4]{ArendtBattyHieberNeubrander} to the terms $(1-\varphi)m$ and $\varphi m$ respectively then yields the desired results.
\end{proof}
\section{Linear estimates for the hydrostatic Stokes operator: part 1}\label{sec:stokesEasy}
Key elements in the proof of our global existence results are the estimates for the hydrostatic Stokes semigroup in $X_\os$.
To this end, we prove first estimates in the larger space $X$, where we make use of representation \eqref{eq:Ap}. We thus define the operator $A$ by
\begin{align*}
A v := \Ae v, \quad D(A)
=\{
v\in W^{2,p}_{\text{per}}(\Omega)^2\cap X\colon
\restr{\partial_z v}{\Gamma_u}=0,
\restr{v}{\Gamma_b}=0,
\Ae v\in X
\}.
\end{align*}
It is the aim of this section to prove the following claim.
\begin{claim}\label{SemigroupEstimates}
Let $p\in (3,\infty)$. Then
\\
a) $A$ is the generator of a strongly continuous, analytic semigroup on $X$.
%
%
\\
b) There exist constants $C>0$, $\beta\in\mathbb{R}$ such that for $\partial_i\in \{\partial_x,\partial_y, \partial_z\}$, $t>0$ and $f\in X$ one has that
\begin{align*}
\tag{i} t^{1/2}\lVert \partial_i e^{tA} f\rVert_{L^\infty_H L^p_z(\Omega)}
\le C e^{\beta t}\lVert f\rVert_{L^\infty_H L^p_z(\Omega)}, \quad t>0, \, f \in X,
\end{align*}
for $\partial_j\in \{\partial_x,\partial_y\}$
\begin{align*}
\tag{ii} t^{1/2}\lVert \partial_j e^{tA}\mathbb{P} f\rVert_{L^\infty_H L^p_z(\Omega)}
\le C e^{\beta t}\lVert f\rVert_{L^\infty_H L^p_z(\Omega)}, \quad t>0, \, f \in X, \\
\tag{iii} t^{1/2}\lVert e^{tA}\mathbb{P} \partial_j f\rVert_{L^\infty_H L^p_z(\Omega)}
\le C e^{\beta t}\lVert f\rVert_{L^\infty_H L^p_z(\Omega)}, \quad t>0, \, f \in X, \\
\tag{iv} t\lVert \partial_i e^{tA}\mathbb{P} \partial_j f\rVert_{L^\infty_H L^p_z(\Omega)}
\le C e^{\beta t}\lVert f\rVert_{L^\infty_H L^p_z(\Omega)}, \quad t>0, \, f \in X.
\end{align*}
c) $X_\os$ is an invariant subspace of $A$, and its restriction is $A_\os$. The semigroup $e^{tA}$ restricts to an exponentially stable, strongly continuous, analytic semigroup of angle $\pi/2$ on $X_\os$.
\\
d)
Furthermore, for all $v\in X_\os$
%
\begin{align*}
\lim_{t \to 0+} t^{1/2}\lVert \nabla e^{tA}v\rVert_{L^\infty_H L^p_z(\Omega)}=0.
\end{align*}
%
\end{claim}
\begin{comment}
\begin{remark} \label{rem:thm62}
\begin{itemize}
\item[(a)] In order to fix the notation, $(b)$ implies that there exists $C>0$, $\beta\in \mathbb{R}$ such that
\begin{align} \label{eq:semigroup}
\lVert e^{tA}f\rVert_{L^\infty_H L^p_z(\Omega)}
&\le C e^{\beta t}\lVert f\rVert_{L^\infty_H L^p_z(\Omega)}.
\end{align}
\item[(b)] In Theorem~\ref{SemigroupEstimatesHard} below, the corresponding estimates are proven for $\partial_j=\partial_z$. As the proof of these estimates is far more involved it is presented in the following section.
\item[(c)] In Claim~\ref{SemigroupEstimates} and Theorem~\ref{SemigroupEstimatesHard} one can even consider $f\in L^{\infty}_HL^p_z(\Omega)^2$ for $p\in (3,\infty)$. Then the corresponding semigroups are still analytic, but they fail to be strongly continuous. The estimates $(i)-(iv)$ still hold, whereas property $(e)$ has to be replaced by
\begin{align*}
\limsup_{t \to 0+} t^{1/2}\lVert \nabla e^{tA}v\rVert_{L^\infty_H L^p_z(\Omega)} \leq C \lVert v\rVert_{L^\infty_H L^p_z(\Omega)}
\end{align*}
for some $C>0$.
\item[(d)] Constraints on the range of the admissible values for $p$ arise in a number of instances.
%
In the following $A$ is studied via perturbation arguments that treat $Bv:=\frac{1}{h}(1-Q)\restr{\partial_z v}{\Gamma_b}$ as term of lower order. Since the two-dimensional Helmholtz projection $Q$ fails to be bounded with respect to the $L^\infty$-norm, we instead estimate it in spaces of H{\"o}lder continuous functions $C^{0,\alpha}_{\text{per}}([0,1]^2)=C^{0,\alpha}(\mathbb{T}^2)$ for $\alpha \in (0,1)$ where $\mathbb{T}^2$ denotes the two-dimensional torus. The fact that $Q=1+\nabla_H (-\Delta_H)^{-1}\text{div}_H$ is bounded with respect to the $C^{0,\alpha}$-norm is well known by the theory of Fourier multipliers on Besov spaces, compare e.g. \cite[Theorem 6.2]{Amann1997} for the whole space, and the periodic case follows using periodic extension.
%
The condition $p>3$ then arises since the proof makes use of the embedding $W^{2,p}(\Omega)\hookrightarrow C^{1,\alpha}(\overline{\Omega})$, compare \cite[Section 3.3.1]{Triebel}.
\end{itemize}
\end{remark}
\end{comment}
%
%
%
%
In order to solve equation \eqref{eq:hydrostaticStokes} in $X_\os$, we collect first several facts concerning the corresponding theory in $L^p_{\overline{\sigma}}(\Omega)$. To this end,
let $p\in (1,\infty)$, and define $A_{p,\os}\colon D(A_{p,\os}) \rightarrow L^p_{\overline{\sigma}}(\Omega)$ by
\begin{align*}
A_{p,\os} v :=\mathbb{P}\Delta v, \quad D(A_{p,\os})=\{
v\in W^{2,p}_{\text{per}}(\Omega)^2:
\text{div}_H \overline{v}=0,
\restr{\partial_z v}{\Gamma_u}=0,
\restr{v}{\Gamma_b}=0
\}.
\end{align*}
Consider furthermore $A_{p}\colon D(A_p)\to L^p(\Omega)^2$ defined by
\begin{align*}
A_{p} v:=\Delta_p v+B v, \quad D(A_p):=D(\Delta_p)^2,
\quad
Bv:=\frac{1}{h}(1-Q)\restr{\partial_z v}{\Gamma_b},
\end{align*}
%
where $\Delta_p$ denotes the Laplacian in $L^p(\Omega)^2$ as in the last section. By \cite{GGHHK17}, the operator $A_{p}$ is an extension of $A_{p,\os}$. The idea is that the pressure term may be recovered by applying the vertical average and horizontal divergence to \eqref{eq:hydrostaticStokes}, yielding
%
\begin{align}\label{eq:LinearPressureWeak}
\Delta_H \pi = \text{div}_H \overline{f}
-\text{div}_H\frac{1}{h}\restr{\partial_z v}{\Gamma_b},
\end{align}
%
or equivalently, since $1-Q$ agrees with $\nabla_H \Delta_H^{-1}\text{div}_H$, one has $\nabla_H \pi=(1-Q)\overline{f}-Bv$.
%
%
%
\begin{comment}
Therefore, the resolvent problem $\lambda v- A_{p,\os} v = \mathbb{P} f$ is equivalent to the equation $\lambda v - \Delta v +\nabla_H \pi = f$ with the pressure term as in \eqref{eq:LinearPressureWeak}
satisfying the estimate
\begin{align}\label{eq:PressureEstimate}
\lVert \nabla_H \pi\rVert_{L^p(G)}\le C_{\theta,p}\lVert f\rVert_{L^p(\Omega)}.
\end{align}
Note, that on the larger domain $D(A_p)$ one generally has $\mathbb{P}\Delta v\neq \Delta v+Bv$.
\begin{lemma}[cf. \cite{GGHHK17}] \label{lem:Ap}
For $p\in (1,\infty)$, the following assertions hold true:
\begin{itemize}
\item[(a)] $A_{p}$ is the generator of a strongly continuous, analytic semigroup on $L^p(\Omega)^2$
\item[(b)] $L^p_{\overline{\sigma}}(\Omega)$ is an invariant subspace of $A_{p}$, and its restriction $A_{p,\os}$ generates an exponentially stable, strongly continuous, analytic semigroup on $L^p_{\overline{\sigma}}(\Omega)$ of angle $\pi/2$. \end{itemize}
\end{lemma}
To fix notation, this means that for $\theta \in (0,\pi)$ there exists a
constant $C_{\theta,p}$ such that
\begin{align}\label{eq:LpResolventEstimate}
\lvert \lambda \rvert \cdot \lVert (\lambda-A_{p,\os})^{-1}f\rVert_{L^p(\Omega)}
+ \Vert A_{p,\os}(\lambda-A_{p,\os})^{-1}f\rVert_{L^p(\Omega)}
\le C_{\theta,p}\lVert f\rVert_{L^p(\Omega)}
\end{align}
for all $f\in L^p_{\overline{\sigma}}(\Omega)$ and all $\lambda\in\Sigma_{\theta}$. In addition, we have
\begin{align}\label{eq:LpEstimateDerivatives}
\lvert \lambda \rvert^{1/2}\lVert \partial_i(\lambda-A_{p,\os})^{-1} f\rVert_{L^p(\Omega)}
+\lvert \lambda \rvert^{1/2}\lVert
(\lambda-A_{p,\os})^{-1}\partial_i f\rVert_{L^p(\Omega)}
\le C_{\theta,p} \lVert f\rVert_{L^p(\Omega)}
\end{align}
%
for the same range of parameters as in \eqref{eq:LpResolventEstimate} and for $\partial_i\in\{\partial_x,\partial_y,\partial_z\}$. Furthermore, the operators
\[
\partial_i (-A_{p,\os})^{-1/2}, \quad (-A_{p,\os})^{-1/2}\partial_i
\]
are bounded from $L^p_{\overline{\sigma}}(\Omega)$ to $L^p(\Omega)^2$, compare \eqref{eq:Domains}.
\end{comment}
Note that the following inclusions hold
\begin{align}\label{eq:domaininclusion}
A \subset A_{p} \quad \hbox{and} \quad A_\os \subset A_{p,\os},
\end{align}
and that $e^{tA_{p,\os}}$, $e^{tA_p}$, $e^{tA}$ and $e^{tA_\os}$ are consistent semigroups.
\begin{proof}[Proof of Claim~\ref{SemigroupEstimates}]
Let $\lambda_0>0$ with $\lambda_0\in \rho(A_p)$, $\theta\in (0,\pi/2)$, and
%
\[
\lambda \in
\Sigma_{\theta+\pi/2} \cap B_{\lambda_0}(0)^c\subset \rho(A_p).
\]
%
By \eqref{eq:domaininclusion} it follows that $\lambda-A$ is injective for $\lambda \in \rho(A_p)$ and likewise $\lambda-A_\os$ is injective for $\lambda \in \rho(A_{p,\os})$.
Since $X\hookrightarrow L^p(\Omega)^2$ the existence of a unique $v\in D(A_p)$ for $p\in (1,\infty)$ follows from the $L^p$-theory for $A_p$, cf. \cite{GGHHK17},
and since $W^{2,p}_{\text{per}}(\Omega)^2 \hookrightarrow X$ for $p\in (3/2,\infty)$ it follows that $v\in D(A)$. Since
$(A_p-\lambda)^{-1}$ further leaves $L^p_{\overline{\sigma}}(\Omega)$ invariant, $f\in X_\os$ implies $v\in D(A_\os)$.
Hence,
%
\begin{align}\label{eq: resolvent inclusions}
\rho(A_{p})\subset \rho(A) \quad \text{ and } \quad
\rho(A_{p,\os})\subset \rho(A_\os).
\end{align}
%
In particular the resolvent sets are non-empty and thus the operators are closed.
Since the semigroup estimates follow from resolvent estimates by arguments involving the inverse Laplace transform, it now remains to prove suitable resolvent estimates in $X$. To this end we observe first that $v=(\lambda-A)^{-1}f$ is equivalent to
\begin{align}\label{eq:repv}
v=(\lambda-\Delta_p)^{-1}(f+Bv),
\end{align}
and second, using the fact that $Q$ is continuous on $C^{0,\alpha}_{\text{per}}([0,1]^2)$ for $\alpha\in (0,1)$, that
%
\begin{align*}
\lVert Bv\rVert _{L^\infty_H L^p_z(\Omega)}
\le h^{1/p} \lVert Bv\rVert _{L^\infty(\Omega)}
\le h^{1/p} \lVert Bv\rVert_{C^{0,\alpha}([0,1]^2)}
\le C \lVert \restr{\partial_z v}{\Gamma_b}\rVert_{C^{0,\alpha}([0,1]^2)}
\le C \lVert v\rVert_{C^{1,\alpha}(\overline{\Omega})}.
\end{align*}
%
Assuming $p\in (3,\infty)$ we have $W^{2,p}(\Omega)\hookrightarrow C^{1,\alpha}(\overline{\Omega})$ for some $\alpha=\alpha_p\in (0,1-3/p)$.
Using the resolvent estimate for $A_{p}$ in $L^p(\Omega)^2$ we obtain
%
\[
\lVert v\rVert_{C^{1,\alpha}(\overline{\Omega})}
\le C_p \lVert v\rVert_{W^{2,p}(\Omega)}
\le C_p \left(
\lVert v\rVert_{L^p(\Omega)}+\lVert A v\rVert_{L^p(\Omega)}
\right)
\le C_p (1+\lvert \lambda\rvert^{-1}) \lVert f\rVert_{L^p(\Omega)}.
\]
%
This and $\lvert \lambda\rvert>\lambda_0$ yield $\lVert Bv\rVert _{L^\infty_H L^p_z(\Omega)}\le C_p (1+\lambda_0^{-1})\lVert f\rVert_{L^\infty_H L^p_z(\Omega)}$.
So, using Lemma~\ref{LemmaAnisotropicLaplaceResolventOmega} we obtain
%
\begin{align} \label{eq1: conclusion of resolvent estimate}
\lvert \lambda\rvert\cdot \lVert v\rVert_{L^\infty_H L^p_z(\Omega)}
+\lvert \lambda\rvert^{1/2} \lVert \nabla v\rVert_{L^\infty_H L^p_z(\Omega)}
+ \lVert A v\rVert_{L^\infty_H L^p_z(\Omega)}
&\le C_{\theta,p,\lambda_0}\lVert f\rVert_{L^\infty_H L^p_z(\Omega)},
\end{align}
%
where we used that for $\lambda$ as above and $p\in (3,\infty)$ one has
%
\begin{align*}
\lVert A v\rVert_{L^\infty_H L^p_z(\Omega)}
\le \lVert \Delta v\rVert_{L^\infty_H L^p_z(\Omega)}
+\lVert Bv\rVert_{L^\infty_H L^p_z(\Omega)}
\le C_{\theta,p,\lambda_0}\lVert f\rVert_{L^\infty_H L^p_z(\Omega)}.
\end{align*}
%
Note that if one instead considers $f\in X_\os$, then $\lambda_0>0$ can be taken to be arbitrarily small and $\theta$ arbitrarily close to $\pi/2$ by \cite[Theorem 3.1]{GGHHK17}.
Since $0\in \rho(A_{p,\os})\subset \rho(A_\os)$, compare \cite[Theorem 3.1]{HieberKashiwabara2015}
and \eqref{eq: resolvent inclusions}
it follows that the spectral bound
%
\[
\beta:=\sup\{
\text{Re}(\lambda): \lambda\in \sigma(A_\os)
\}
\]
%
is negative implying
exponential decay,
and estimate \eqref{eq1: conclusion of resolvent estimate} is valid for all $\lambda\in\Sigma_\theta$, $\theta\in(0,\pi)$ and $f\in X_\os$.
To verify that $D(A)$ and $D(A_\os)$ are dense in $X$ and $X_\os$ respectively, observe that the space
%
\[
C^\infty_{\text{per}}([0,1]^2;C^\infty_c((-h,0)))^2
\]
%
is contained in $D(A)$ and dense in $X$, so the semigroup generated by $A$ is strongly continuous on $X$. Since it leaves $L^p_{\overline{\sigma}}(\Omega)$ invariant, the restriction of the semigroup on $X\cap L^p_{\overline{\sigma}}(\Omega)=X_\os$ is strongly continuous as well and generated by the restriction of $A$ onto $D(A)\cap L^p_{\overline{\sigma}}(\Omega)=D(A_\os)$, i.e. $A_\os$, which is therefore densely defined on $X_\os$. Thus we have proven $a)$, $c)$ and estimate $(i)$ in $b)$.
To prove the remaining semigroup estimates in $b)$ we consider the corresponding resolvent estimates.
Since $X\hookrightarrow L^p(\Omega)^2$ and $\mathbb{P}$ is bounded on $L^p(\Omega)^2$ the existence of
%
\[
v:=(\lambda-A_{p,\os})^{-1}\mathbb{P} f\in D(A_{p,\os})
\hookrightarrow W^{2,p}_{\text{per}}(\Omega)^2
\hookrightarrow X
\]
%
for $f\in X$ follows from
the $L^p$-theory for $A_{p,\os}$,
and it suffices to extend
the $L^p$-estimate
\begin{align}\label{eq:LpEstimateDerivatives}
\lvert \lambda \rvert^{1/2}\lVert \partial_i(\lambda-A_{p,\os})^{-1} f\rVert_{L^p(\Omega)}
+\lvert \lambda \rvert^{1/2}\lVert
(\lambda-A_{p,\os})^{-1}\partial_i f\rVert_{L^p(\Omega)}
\le C_{\theta,p} \lVert f\rVert_{L^p(\Omega)}, \quad f\in L^p_{\overline{\sigma}}(\Omega),
\end{align}
where $\partial_i\in\{\partial_x,\partial_y\}$,
$\theta \in (0,\pi)$, $C_{\theta,p}>0$,
to $X$, i.e. to prove the estimate
%
\begin{align} \label{eq: resolvent estimate for horizontal derivative}
\lvert \lambda \rvert^{1/2} \lVert \nabla_H v\rVert_{L^\infty_H L^p_z(\Omega)}
\le C_{\theta,p}\lVert f\rVert_{L^\infty_H L^p_z(\Omega)},
\quad \lambda \in \Sigma_\theta.
\end{align}
%
Recall that $\mathbb{P} f=f-(1-Q)\overline{f}=\tilde{f}+Q\overline{f}$,
%
and that if $f\in X$ then $\overline{f}\in C_{\text{per}}([0,1]^2)^2$ satisfies $\lVert \overline{f}\rVert_{\infty}\le C \lVert f\rVert_{L^\infty_H L^p_z(\Omega)}$ for any $p\in[1,\infty]$.
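Here the bound on $\overline{f}$ is a direct consequence of H{\"o}lder's inequality applied to the vertical average: for $x'\in [0,1]^2$ one has
%
\begin{align*}
\lvert \overline{f}(x')\rvert
=\Big\lvert \frac{1}{h}\int_{-h}^{0} f(x',z)\,dz\Big\rvert
\le \frac{1}{h}\,h^{1-1/p}\,\lVert f(x',\cdot)\rVert_{L^p(-h,0)}
\le h^{-1/p}\,\lVert f\rVert_{L^\infty_H L^p_z(\Omega)},
\end{align*}
%
so one may take $C=h^{-1/p}$, interpreted as $C=1$ for $p=\infty$.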
Using \eqref{eq:repv} we rewrite
%
\[
v=(\lambda-A_\os)^{-1}\mathbb{P} f=(\lambda-\Delta)^{-1}(\tilde{f}+Bv+Q\overline{f}),
\]
%
and since the term $\tilde{f}+Bv$ can be dealt with as before, it suffices to show the estimate
%
\begin{align}
\lvert \lambda\rvert^{1/2} \lVert
\nabla_H (\lambda-\Delta)^{-1}Q\overline{f}
\rVert_{L^\infty_H L^p_z(\Omega)}
\le C_\theta \lVert \overline{f}\rVert_{L^\infty(G)}.
\end{align}
%
Since $Q\overline{f}$ does not depend on $z$ we can write $Q\overline{f}=Q\overline{f}\otimes 1$, and so for $\lambda=\lvert \lambda\rvert e^{i\psi}$ with $\psi\in(-\pi/2+\varepsilon,\pi/2-\varepsilon)$ for small $\varepsilon>0$ we have
%
\begin{align*}
\lvert \lambda\rvert^{1/2}\nabla_H(\lambda-\Delta)^{-1}
\left(Q\overline{f}\otimes 1\right)
=\lvert \lambda\rvert^{1/2}\int_0^\infty e^{-\lambda t}
\left(
\nabla_H e^{t\Delta_H}Q\overline{f}\otimes e^{t\Delta_z}1
\right)
\,dt,
\end{align*}
%
where $e^{t\Delta_z}$ denotes the semigroup from Lemma~\ref{LemmaVerticalSemigroup}. Applying the estimates in Lemma~\ref{LemmaHorizontalSemigroup} and \ref{LemmaVerticalSemigroup} yields
\begin{align*}
\lvert \lambda\rvert^{1/2}\lVert
\nabla_H(\lambda-\Delta)^{-1}\left(Q\overline{f}\otimes 1\right)
\rVert_{L^\infty_H L^p_z(\Omega)}
&\le
\lvert \lambda\rvert^{1/2}\int_0^\infty \lvert e^{-\lambda t}\rvert
\lVert \nabla_H e^{t\Delta_H}Q\overline{f}\rVert_{L^\infty(G)}
\lVert e^{t\Delta_z}1\rVert_{L^p(-h,0)}
\,dt
\\
&\le C \lvert \lambda\rvert^{1/2}
\left(
\int_0^\infty e^{-\lvert\lambda\rvert\cos(\psi) t}t^{-1/2}\,dt
\right)
\lVert \overline{f}\rVert_{L^\infty(G)}
\\
&\le C \frac{\sqrt{\pi}}{\sqrt{\cos(\pi/2-\varepsilon)}}
\lVert \overline{f}\rVert_{L^\infty(G)}.
\end{align*}
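The last step relies on the substitution $s=\lvert\lambda\rvert \cos(\psi)\,t$ and the identity $\Gamma(1/2)=\sqrt{\pi}$, which give
%
\begin{align*}
\lvert \lambda\rvert^{1/2}\int_0^\infty e^{-\lvert\lambda\rvert\cos(\psi)t}\,t^{-1/2}\,dt
=\lvert \lambda\rvert^{1/2}\,
\frac{\Gamma(1/2)}{\left(\lvert\lambda\rvert\cos(\psi)\right)^{1/2}}
=\frac{\sqrt{\pi}}{\sqrt{\cos(\psi)}}
\le \frac{\sqrt{\pi}}{\sqrt{\cos(\pi/2-\varepsilon)}}
\end{align*}
%
for all $\psi\in(-\pi/2+\varepsilon,\pi/2-\varepsilon)$.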
%
To include the full range of angles $\psi$ one simply replaces $\Delta_H$ and $\Delta_z$ with $e^{i\theta}\Delta_H$ and $e^{i\theta}\Delta_z$ respectively where $\theta\in(-\pi/2,\pi/2)$ is a suitable angle.
Since an elementary calculation shows that $\nabla_H$ commutes with $A$ and $\mathbb{P}$ we obtain
%
\[
\partial_i (\lambda-A)^{-1} f
=(\lambda-A)^{-1}\partial_i f, \quad
\partial_i (\lambda-A)^{-1}\mathbb{P} f
=(\lambda-A)^{-1}\mathbb{P}\partial_i f
\]
%
for horizontal derivatives $\partial_i\in\{\partial_x,\partial_y\}$ and $f\in C^\infty_{\text{per}}([0,1]^2;C^\infty_c((-h,0)))^2$. Note that for any $v\in W^{2,p}_{\text{per}}(\Omega)$ the horizontal derivatives $\partial_x v$ and $\partial_y v$ are periodic on $\Gamma_l$ as well. This yields suitable estimates for the right-hand sides.
To verify $d)$, we first make use of the density of the domains of the generators. So, let $\varepsilon>0$ and $v'\in D(A_\os)$ such that $\lVert v-v'\rVert_{L^\infty_H L^p_z(\Omega)}<\varepsilon/(2C_0)$. By $b)$ $(i)$ we have
\[
t^{1/2}\lVert \nabla e^{tA} v\rVert_{L^\infty_H L^p_z(\Omega)}
\le C_0 \lVert v\rVert_{L^\infty_H L^p_z(\Omega)}
\]
for all $v\in X$ and $t>0$. Then
\[
t^{1/2}\lVert \nabla e^{tA}v\rVert_{L^\infty_H L^p_z(\Omega)}
\le \frac{\varepsilon}{2}+t^{1/2}\lVert \nabla e^{tA}v'\rVert_{L^\infty_H L^p_z(\Omega)}
\]
and we can further estimate
\[
\lVert \nabla e^{tA}v'\rVert_{L^\infty_H L^p_z(\Omega)}
\le h^{1/p}\lVert e^{tA}v'\rVert_{C^1(\overline{\Omega})}
\le C_p \lVert e^{tA}v'\rVert_{D(A_{p,\os})}.
\]
This and the invertibility of $A_{p,\os}$ on $L^p_{\overline{\sigma}}(\Omega)$ yield
\begin{align*}
t^{1/2}\lVert \nabla e^{t A_{p,\os}}v'\rVert_{L^p_{\overline{\sigma}}(\Omega)}
\le C_p t^{1/2} \lVert A_{p,\os} e^{tA_{p,\os}}v'\rVert_{L^p_{\overline{\sigma}}(\Omega)}
=C_p t^{1/2}\lVert e^{tA_{p,\os}}A_{p,\os} v'\rVert_{L^p_{\overline{\sigma}}(\Omega)} \leq C_p t^{1/2}\lVert A_{p,\os} v'\rVert_{L^p_{\overline{\sigma}}(\Omega)}
\end{align*}
and since $A_{p,\os} v'\in L^p_{\overline{\sigma}}(\Omega)$ the claim follows.
\end{proof}
\section{Linear estimates for the hydrostatic Stokes operator: part 2}\label{sec:stokesHard}
This section is devoted to proving the estimates of Claim~\ref{SemigroupEstimates} in the case of vertical derivatives, i.e.
that the estimates (ii), (iii) and (iv) in Claim~\ref{SemigroupEstimates} remain valid for $\partial_j=\partial_z$.
\begin{claim}\label{SemigroupEstimatesHard}
Under the assumptions of Claim~\ref{SemigroupEstimates}
there exist constants $C>0$ and $\beta\in \mathbb{R}$ such that
%
\begin{align}\label{eq:FirstVerticalEstimate}
t^{1/2}\lVert \partial_z e^{tA}\mathbb{P}f\rVert_{L^\infty_H L^p_z(\Omega)}
&\le C e^{\beta t}\lVert f\rVert_{L^\infty_H L^p_z (\Omega)}, \\
\label{eq:SecondVerticalEstimate}
t^{1/2}\lVert e^{tA}\mathbb{P}\partial_zf\rVert_{L^\infty_H L^p_z(\Omega)}
&\le C e^{\beta t}\lVert f\rVert_{L^\infty_H L^p_z (\Omega)}, \\
\label{eq:ThirdVerticalEstimate}
t\lVert \partial_i e^{tA}\mathbb{P} \partial_j f\rVert_{L^\infty_H L^p_z(\Omega)}
&\le C e^{\beta t}\lVert f\rVert_{L^\infty_H L^p_z(\Omega)},
\end{align}
where $\partial_i, \partial_j\in \{\partial_x,\partial_y,\partial_z\}$,
for all $t>0$ and $f\in X$.
\end{claim}
As in the last section, these semigroup estimates follow from suitable resolvent estimates and
standard arguments involving the inverse Laplace transform.
Before investigating the estimate for $\partial_z (\lambda-A)^{-1}\mathbb{P}$ we present an anisotropic version of an interpolation inequality.
We use the notation $(x,y,z) =: (x', z)$ and let $B(x_0'; r) = \{ x' \in \mathbb R^2 : |x' - x_0'| < r \}$ denote a disk in $\mathbb R^2$.
\begin{lemma} \label{lem: anisotropic interpolation inequality}
Let $p \in (2, \infty)$, $q \in [1,\infty]$, $r>0$, and $x_0' \in \mathbb R^2$.
Then, for $v \in W^{1,p}(B(x_0'; r); L^q_z)$, $L^q_z=L^q(-h,0)$ we have
%
\begin{equation*}
\|v\|_{L^\infty(B(x_0'; r); L^q_z)}
\le C r^{-2/p} (
\|v\|_{L^p(B(x_0'; r); L^q_z)} + r\|\nabla_Hv\|_{L^p(B(x_0'; r); L^q_z)}
),
\end{equation*}
%
where the constant $C = C_{\Omega, p, q}>0$ is independent of $r$ and $x_0'$.
\end{lemma}
\begin{proof}
We put $w(x') := (\int_{-h}^0 |v(x', z)|^q \, dz)^{1/q}$ and apply a two-dimensional interpolation inequality, compare \cite[Lemma 3.1.4]{Lunardi1995}, to obtain
%
\begin{equation} \label{eq1: prf of anisotropic interpolation inequality}
\|w\|_{L^\infty(B(x_0'; r))}
\le C r^{-2/p} (
\|w\|_{L^p(B(x_0'; r))} + r\|\nabla_Hw\|_{L^p(B(x_0'; r))}
).
\end{equation}
%
One sees that $\|w\|_{L^p(B(x_0'; r))} = \|v\|_{L^p(B(x_0'; r); L^q_z)}$.
To estimate the second term we compute $\partial_i w$ for $\partial_i\in\{\partial_x,\partial_y\}$ as follows:
\begin{align*}
\partial_i w(x')
= \left( \int_{-h}^0 |v(x', z)|^q \, dz \right)^{1/q - 1}
\int_{-h}^0 |v(x', z)|^{q - 2} (\partial_i v(x', z) \cdot v(x', z)) \, dz.
\end{align*}
%
Using H{\"o}lder's inquality we obtain
%
\begin{align*}
|\partial_i w(x')|
\le \left( \int_{-h}^0 |v(x', z)|^q \, dz \right)^{1/q-1}
\int_{-h}^0 |v(x', z)|^{q - 1} |\partial_i v(x', z)| \, dz
\le \left( \int_{-h}^0 |\partial_i v(x', z)|^q \, dz \right)^{1/q}
\end{align*}
%
and substituting this into \eqref{eq1: prf of anisotropic interpolation inequality} proves the estimate for $q<\infty$.
The case $q = \infty$ follows directly from \eqref{eq1: prf of anisotropic interpolation inequality}.
\end{proof}
It is well known that $1-Q=-\nabla_H (-\Delta_H)^{-1} \mathrm{div}_H=\nabla_H \Delta_H^{-1} \mathrm{div}_H$ with periodic boundary conditions is a singular integral operator which fails to be bounded in $L^\infty(G)^2$.
However, if one allows for a logarithmic (and therefore divergent) factor, some $L^\infty$-type estimates are still available.
In this spirit we give a local $L^p$-estimate for the operator $\nabla_H (-\Delta_H)^{-1} \mathrm{div}_H$ corresponding to the scale of the $L^\infty$-norm.
\begin{proposition} \label{prop: nearly-optimal estimate for laplacian}
Let $p \in (1,\infty)$, $x_0' \in G$. Then there exists $r_0>0$ such that for all $r\in (0,r_0)$ the weak solution of
%
\begin{equation} \label{eq: Laplace equation}
\Delta_H \pi = \mathrm{div}_H F
\quad\text{in}\quad G, \qquad
\pi|_{\partial G}: \text{ periodic},
\qquad \int_G \pi \, dx' = 0,
\end{equation}
%
for $F \in L^\infty(G)^2$ satisfies
%
\begin{equation*}
\|\nabla_H \pi\|_{L^p(B(x_0'; r))}
\le C r^{2/p} (1 + |\log r|) \|F\|_{L^\infty(G)}.
\end{equation*}
%
Here the constant $C = C_{G, p}>0$ is independent of $x_0'$ and $r$.
\end{proposition}
\begin{proof}
By applying a periodic extension we may assume that \eqref{eq: Laplace equation} holds in a larger square $G' := (-2, 3)^2$. We choose $r_0<1/8$ to obtain $B(x_0'; 4r_0) \subset (-1/2,3/2)^2$ and utilize two cut-off functions $\omega,\theta \in C^\infty_c(\mathbb R^2)$, $\theta=\theta_r$, satisfying the following properties:
%
\begin{align*}
\begin{array}{lll}
\omega\equiv1 \text{ on } [-1, 2]^2, &
\mathrm{supp}\,(\omega) \subset G', &
\|\nabla_H^k \omega\|_{L^\infty(\mathbb R^2)}
\le C,
\\
\theta\equiv1 \text{ on } B(x_0'; 2r), &
\mathrm{supp}\,(\theta) \subset B(x_0'; 4r), &
\|\nabla_H^k \theta\|_{L^\infty(\mathbb R^2)}
\le Cr^{-k}
\end{array}
\end{align*}
%
for $k=0,1,2$; compare the proof of Lemma~\ref{LemmaAnisotropicLaplaceResolventOmega}.
%
From \eqref{eq: Laplace equation} we see that $\omega \pi$ satisfies
%
\begin{equation*}
\Delta_H (\omega \pi)
= \mathrm{div}_H (\omega F)
- \nabla_H\omega\cdot F
+ 2\mathrm{div}_H\big( (\nabla_H\omega)\pi \big)
- (\Delta_H\omega)\pi
\quad\text{in}\quad \mathbb R^2.
\end{equation*}
%
Then, letting $\Psi(x', y') := \frac1{2\pi} \log|x' - y'|$ be the Green's function for the Laplacian in $\mathbb R^2$, we obtain
%
\begin{align*}
(\omega \pi)(x')
= - \int_{\mathbb R^2} (\nabla_{y'} \Psi)(x', y') \cdot
\big[
\omega F + 2(\nabla_{y'}\omega)\pi
\big](y') \, dy'
- \int_{\mathbb R^2} \Psi(x', y') \big[
(\nabla_H\omega)\cdot F + (\Delta_H\omega)\pi
\big](y') \, dy'.
\end{align*}
%
Therefore, for $x' \in B(x_0'; r)$ we have the representation
%
\begin{align*}
\nabla_H \pi(x')
= & - \int_{\mathbb R^2} (\nabla_{x'}\nabla_{y'} \Psi)(x', y') \big[
\omega F + 2(\nabla_{y'}\omega)\pi
\big](y') \, dy'
- \int_{\mathbb R^2} (\nabla_{x'}\Psi)(x', y') \big[
(\nabla_H\omega)\cdot F + (\Delta_H\omega)\pi
\big](y') \, dy'
\\
= & - \int_{\mathbb R^2} (\nabla_{x'}\nabla_{y'} \Psi)(x', y') \big[
\theta F + \omega(1 - \theta) F + 2(\nabla_{y'}\omega)\pi
\big](y') \, dy'
\\
& - \int_{\mathbb R^2} (\nabla_{x'}\Psi)(x', y') \big[
(\nabla_H\omega)\cdot F + (\Delta_H\omega)\pi
\big](y') \, dy'
\\
=: &\,\Pi_1(x') + \Pi_2(x') + \Pi_3(x') + \Pi_4(x') + \Pi_5(x')
\end{align*}
%
where in the second step we used $\omega \theta=\theta$.
We derive $L^p(B(x_0'; r))$-estimates for each of the above terms as follows:
By the Calder\'on--Zygmund inequality we have
\begin{equation*}
\|\Pi_1\|_{L^p(\mathbb R^2)}
\le C\|\theta F\|_{L^p(\mathbb R^2)}
\le C \|\theta\|_{L^p(\mathbb R^2)} \|F\|_{L^\infty(G')}
\le C r^{2/p} \|F\|_{L^\infty(G)}.
\end{equation*}
%
For the second term note that we have $|\nabla_{x'}\nabla_{y'} \Psi(x', y')| \le C|x' - y'|^{-2}$ and
%
\[
\mathrm{supp}\,(\omega(1-\theta))
=\mathrm{supp}\,(\omega-\theta)
\subset \mathrm{supp}(\omega)\setminus B(x'_0;2r)
\]
%
yields $\mathrm{supp}\,(\omega(1-\theta)) \subset \{r \le |x'-y'| \le 4\}$ and therefore
%
\begin{align*}
\|\Pi_2\|_{L^p(B(x_0'; r))}
&\le \lVert 1\rVert_{L^p(B(x'_0;r))}
\left(
\sup_{x' \in B(x_0'; r)} \int_{r \le |x' - y'|\le 4} C|x' - y'|^{-2} \, dy'
\right)
\|\omega(1 - \theta) F\|_{L^\infty(G')}
\\
&\le C r^{2/p} (1 + |\log r|) \|F\|_{L^\infty(G)}.
\end{align*}
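Here the logarithmic factor stems from the annulus integral, which is evaluated in polar coordinates as
%
\begin{align*}
\sup_{x' \in B(x_0'; r)}
\int_{r \le |x' - y'|\le 4} \lvert x' - y'\rvert^{-2} \, dy'
=2\pi\int_r^4 s^{-1}\,ds
=2\pi \log(4/r)
\le C(1+\lvert \log r\rvert)
\end{align*}
%
for $r\in(0,r_0)$.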
%
The condition $\mathrm{supp}\,(\nabla_H\omega)\subset G'\setminus [-1,2]^2$ yields
\begin{align*}
\|\Pi_3\|_{L^p(B(x_0'; r))}
&\le \lVert 1\rVert_{L^p(B(x'_0;r))}
\left(
\sup_{1/2 \le |x' - y'| \le 3} C|x' - y'|^{-2}
\right)
\|2 (\nabla_H\omega) \pi\|_{L^1(G')}
\le C r^{2/p} \|\pi\|_{L^1(G')}.
\end{align*}
It follows from Poincar\'e's inequality and the $L^2$-theory for
\eqref{eq: Laplace equation} that
\begin{equation*}
\|\pi\|_{L^1(G')}
\le C\|\pi\|_{L^2(G)}
\le C\|\nabla_H\pi\|_{L^2(G)}
\le C\|F\|_{L^2(G)}
\le C\|F\|_{L^\infty(G)}
\end{equation*}
and therefore $\|\Pi_3\|_{L^p(B(x_0'; r))} \le C r^{2/p} \|F\|_{L^\infty(G)}$.
%
Similarly to $\Pi_3$, we have
\begin{align*}
\|\Pi_4 + \Pi_5\|_{L^p(B(x_0'; r))}
&\le \lVert 1\rVert_{L^p(B(x'_0;r))} \Big(
\sup_{1/2 \le |x' - y'| \le 3} C|x' - y'|^{-1}
\Big)
(\|F\|_{L^1(G)} + \|\pi\|_{L^1(G)})
\\
&\le C r^{2/p} \|F\|_{L^\infty(G)}.
\end{align*}
Combining these estimates yields the desired estimate.
\end{proof}
\begin{remark}
Note that the Calder\'on--Zygmund inequality we have used to estimate $\Pi_1$ does not hold for $p\in \{1,\infty\}$, whereas the arguments of Section~\ref{sec:stokesEasy} can be adapted to cover the case $p=\infty$.
\end{remark}
We now turn to proving the estimate
$\lvert \lambda\rvert^{1/2}\lVert \partial_z (\lambda-A)^{-1}\mathbb{P}f\rVert_{L^\infty_H L^p_z(\Omega)}
\le C_{\theta, p, \lambda_0} \lVert f\rVert_{L^\infty_H L^p_z(\Omega)}$
for $\lambda\in\Sigma_{\theta}$, $|\lambda|>\lambda_0$ and for $\theta\in(0,\pi)$, $p>3$.
For this purpose we observe that the solution $v$ to the resolvent problem
%
\begin{align*}
\lambda v-Av=\mathbb{P} f
\quad \text{on} \quad \Omega
\end{align*}
%
with boundary conditions \eqref{eq:bc} is decomposed as $v = v_1 + v_2$, where $(v_1, \pi_1)$ and $(v_2, \pi_2)$ solve
%
\begin{equation} \label{eq: problem for v1}
\lambda v_1 - \Delta v_1 + \nabla_H \pi_1
= f \text{ on }\Omega,
\quad
\Delta_H\, \pi_1 = -h^{-1} \mathrm{div}_H(\partial_z v|_{\Gamma_b})
\text{ on } G,
\end{equation}
%
and
%
\begin{equation} \label{eq: problem for v2}
\lambda v_2 - \Delta v_2 + \nabla_H \pi_2
= 0 \text{ on } \Omega,
\quad
\Delta_H\, \pi_2 = \mathrm{div}_H\, \bar f
\text{ on } G,
\end{equation}
respectively, both equipped with the boundary conditions \eqref{eq:bc} and periodic boundary conditions for $\pi_i$ on $\partial G$; then $\pi:=\pi_1+\pi_2$ satisfies \eqref{eq:LinearPressureWeak}.
Since \eqref{eq: problem for v1} is equivalent to $v_1=(\lambda-\Delta)^{-1}(f+Bv)$ we obtain
%
\begin{align}\label{eq:voneestimate}
|\lambda|^{1/2} \|\partial_z v_1\|_{L^\infty_H L^p_z(\Omega)}
\le |\lambda|^{1/2} \|\nabla v_1\|_{L^\infty_H L^p_z(\Omega)}
\le C_{\theta, p,\lambda_0} \|f\|_{L^\infty_H L^p_z(\Omega)}
\end{align}
%
for $\lvert \lambda\rvert>\lambda_0$ by the same argument used to derive \eqref{eq1: conclusion of resolvent estimate}.
This, $\nabla_H v_2=\nabla_H v-\nabla_H v_1$, and estimate \eqref{eq: resolvent estimate for horizontal derivative} yield
%
\begin{align}\label{eq: horizontal estimate for v two}
\lvert \lambda \rvert^{1/2} \lVert \nabla_H v_2\rVert_{L^\infty_H L^p_z(\Omega)}
\le C_{\theta,p,\lambda_0}\lVert f\rVert_{L^\infty_H L^p_z(\Omega)},
\quad \lambda \in \Sigma_\theta.
\end{align}
%
In order to prove estimate~\eqref{eq:FirstVerticalEstimate} it thus remains to establish the following.
\begin{proposition}\label{vtwoestimate}
Let $p \in (3,\infty)$ and $\theta\in (0,\pi)$. Then there exist constants $\lambda_0>0$ and $C_{\theta,p,\lambda_0}>0$ such that for all $\lambda \in \Sigma_\theta$ with $\lvert \lambda\rvert>\lambda_0$ and $f \in X$ the solution $v_2$ of \eqref{eq: problem for v2} satisfies
%
\begin{equation*}
|\lambda|^{1/2} \|\partial_z v_2\|_{L^\infty_H L^p_z(\Omega)}
\le C_{\theta,p,\lambda_0} \|f\|_{L^\infty_H L^p_z(\Omega)}.
\end{equation*}
%
\end{proposition}
\begin{remark}\label{FullRangeRemark}
The estimate
%
\[
\lvert \lambda \rvert^{1/2}
\lVert \partial_z (\lambda-A)^{-1}\mathbb{P}f\rVert_{L^\infty_H L^p_z(\Omega)}
\le C_{\theta,p}\lVert f\rVert_{L^\infty_H L^p_z(\Omega)},
\quad f\in X
\]
%
actually holds for the full range of $\lambda\in\Sigma_\theta$, $\theta\in (0,\pi)$, i.e. one can take $\lambda_0=0$. This is obtained by using that $\mathbb{P}f\in L^p_{\overline{\sigma}}(\Omega)$ yields $v:=(\lambda-A)^{-1}\mathbb{P}f\in D(A_{p,\os})$ and therefore
%
\[
\lVert v\rVert_{L^\infty_H L^p_z(\Omega)}
\le C_p \lVert v\rVert_{W^{2,p}(\Omega)}
\le C_p \lVert Av\rVert_{L^p_{\overline{\sigma}}(\Omega)}
\le C_p \lVert \mathbb{P}f\rVert_{L^p_{\overline{\sigma}}(\Omega)}
\le C_p\lVert f\rVert_{L^p_{\overline{\sigma}}(\Omega)}
\le C_p \lVert f\rVert_{L^\infty_H L^p_z(\Omega)},
\]
%
so the same argument as in the proof of Lemma~\ref{LemmaAnisotropicLaplaceResolventOmega} applies.
\end{remark}
\begin{proof}[Proof of Proposition \ref{vtwoestimate}]
We will simply write $(v,\pi)$ instead of $(v_2,\pi_2)$ for the solution of \eqref{eq: problem for v2}.
By applying a periodic extension in the horizontal variables we may assume that \eqref{eq: problem for v2} holds in a larger domain, allowing us to replace $\Omega$ and $G$ by $\Omega' := G'\times(-h,0)$ and $G' := (-2,3)^2$ respectively. We decompose the boundary of $\Omega'$ into $\Gamma_u'=G'\times \{0\}$, $\Gamma_l' := \partial G'\times[-h,0]$ and $\Gamma'_b=G'\times\{-h\}$.
For simplicity we continue to denote the periodic extensions of $v$, $\pi$ and $f$ in the same manner.
Let $\eta>1$ be a parameter to be fixed later, and let $\lambda_0$ be a positive number such that
%
\begin{align}\label{eq: definition of r zero}
r_0:=\eta\, \lambda_0^{-1/2} < \min\{1/8, h/4\}.
\end{align}
%
We fix an arbitrary $\lambda\in\Sigma_\theta$, $|\lambda|>\lambda_0$, put $r:=\eta|\lambda|^{-1/2}<r_0$, and introduce two cut-off functions $\alpha=\alpha_r$, $\beta=\beta_r$, satisfying
%
\begin{align*}
&\alpha \in C^\infty([-h, 0]), \quad
\alpha\equiv0 \text{ on } [-h, -h + r ],
\quad \alpha\equiv1 \text{ on } [-h+2r, 0], \quad
|\partial_z^k\alpha(z)| \le Cr^{-k},
\\
&\beta \in C^\infty([-h, 0]), \quad
\beta\equiv1 \text{ on } [-h, -h+2r], \quad
\beta\equiv0 \text{ on } [-h+3r, 0], \quad
|\partial_z^k\beta(z)| \le Cr^{-k}
\end{align*}
%
for $k=0,1,2$, compare the proof of Lemma~\ref{LemmaAnisotropicLaplaceResolventOmega}.
%
We then split the estimate for $\partial_zv$ into the ``upper'' and ``lower'' parts in $\Omega$ as
%
\begin{equation} \label{eq: decomposition to alpha and beta}
\|\partial_z v\|_{L^\infty_H L^p_z(\Omega)}
\le \|\partial_z (\alpha v)\|_{L^\infty_H L^p_z(\Omega)}
+ \|\partial_z (\beta v)\|_{L^\infty_H L^p_z(\Omega)}.
\end{equation}
%
\textbf{Step 1.}
Let us first focus on $\partial_z(\alpha v)$.
By Lemma~\ref{lem: anisotropic interpolation inequality} with radius $\lvert \lambda\rvert^{-1/2}$ and $p=q$ we have
%
\begin{align}\label{eq: alpha estimate step one}
\begin{split}
|\lambda|^{1/2} \|\partial_z(\alpha v)\|_{L^\infty_H L^p_z(\Omega)}
\le C_p |\lambda|^{1/p} \sup_{x_0' \in G} \biggl(
|\lambda|^{1/2}
\|\partial_z (\alpha v)\|_{L^p(C(x_0'; |\lambda|^{-1/2}))}
+ \|\nabla_H\partial_z (\alpha v)\|_{L^p(C(x_0'; |\lambda|^{-1/2}))}
\biggr),
\end{split}
\end{align}
%
where $C(x_0'; |\lambda|^{-1/2})$ denotes the cylinder $B(x_0'; |\lambda|^{-1/2})\times(-h,0)$ and we used that
%
\[
\lVert f\rVert_{L^\infty_H L^p_z(\Omega)}=\sup_{x_0'\in G}\lVert f\rVert_{L^\infty(B(x_0'; R);L^p_z)}, \quad R>0.
\]
%
In the following we fix an arbitrary $x_0' \in G$ and introduce a cut-off function $\theta = \theta_r \in C^\infty_c(\mathbb R^2)$ such that
%
\begin{equation*}
\theta\equiv1 \text{ in } \overline{B(x_0'; |\lambda|^{-1/2})}, \quad
\mathrm{supp}\,\theta \subset B(x_0'; r), \quad
\|\nabla_H^k \theta\|_{L^\infty(\mathbb R^2)}
\le Cr^{-k}
\end{equation*}
%
for $k=0,1,2$. Then $\theta\alpha v$ solves
%
\begin{align*}
\lambda(\theta\alpha v) - \Delta(\theta\alpha v)
&= - \theta\alpha\nabla_H\pi
- 2\nabla(\theta\alpha)\cdot\nabla v
- (\Delta(\theta\alpha)) v \quad\text{on}\quad \Omega',
\\
\partial_z(\theta\alpha v)|_{\Gamma_u' \cup \Gamma_b'} &= 0, \quad
\theta\alpha v \text{ periodic on }\Gamma_l'.
\end{align*}
%
We further differentiate this equation with respect to $z$ to obtain
%
\begin{align*}
\lambda(\theta \partial_z(\alpha v) ) - \Delta(\theta \partial_z(\alpha v))
= F_1+\partial_z F_2 \quad\text{on}\quad \Omega',
\quad
\restr{\theta\partial_z(\alpha v)}{\Gamma'_u \cup \Gamma'_b} = 0,\quad
\theta\partial_z(\alpha v) \text{ periodic on }\Gamma_l'.
\end{align*}
%
where
%
\begin{align*}
F_1&:=-\theta (\partial_z\alpha) (\nabla_H\pi)
-(\Delta_H\theta)(\partial_z\alpha)v
-(\Delta_H\theta)\alpha(\partial_z v),
\\
F_2&:=-2(\nabla_H\theta)\alpha\cdot(\nabla_Hv)
-2\theta(\partial_z\alpha)(\partial_zv)
- \theta(\partial_z^2\alpha)v.
\end{align*}
%
By \eqref{eq:AnisotropicResolventOmega} and \eqref{eq:AnisotropicResolventDerivativeOmega} for $\Omega'$ in the case $q=p$, we obtain the estimate
%
\begin{align}\label{eq: alpha estimate step two}
|\lambda|^{1/2} \|\partial_z (\theta \alpha v)\|_{L^p(\Omega')}
+ \|\nabla\partial_z (\theta \alpha v)\|_{L^p(\Omega')}
\le C_{\theta} \left(
|\lambda|^{-1/2}\lVert F_1\rVert_{L^p(\Omega')}
+ \lVert F_2\rVert_{L^p(\Omega')}
\right),
\end{align}
and since $\theta \equiv 1$ on $C(x'_0;\lvert \lambda \rvert^{-1/2})\subset \Omega'$ by \eqref{eq: definition of r zero}, we further have
%
\begin{align}\label{eq: alpha estimate step three}
\begin{split}
\|\partial_z (\alpha v)\|_{L^p(C(x_0'; |\lambda|^{-1/2}))}
&\le \|\partial_z (\theta \alpha v)\|_{L^p(\Omega')},
\\
\|\nabla\partial_z (\alpha v)\|_{L^p(C(x_0'; |\lambda|^{-1/2}))}
&\le \|\nabla\partial_z (\theta \alpha v)\|_{L^p(\Omega')}.
\end{split}
\end{align}
Let us estimate each term on the right-hand side of \eqref{eq: alpha estimate step two} as follows:
Denoting $\lVert \cdot \rVert_{L^p_H}:=\lVert \cdot \rVert_{L^p(B(x'_0;r))}$ and $\lVert \cdot \rVert_{L^p_z}:=\lVert \cdot \rVert_{L^p(-h,0)}$, we first observe that the cut-off functions satisfy
%
\[
\lVert \theta\rVert_{L^p_H}
\le C r^{2/p},
\quad
\lVert \nabla_H \theta\rVert_{L^p_H}
\le C r^{2/p-1},
\quad
\lVert \Delta_H \theta\rVert_{L^p_H}
\le C r^{2/p-2}
\]
%
as well as
%
\[
\|\partial_z\alpha\|_{L^p_z}
\le C r^{1/p-1},
\quad
\|\partial_z^2\alpha\|_{L^p_z}
\le C r^{1/p-2}.
\]
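These norms are computed directly from the pointwise bounds on the cut-off functions and the size of their supports; for instance, since $\partial_z\alpha$ is supported in $[-h+r,-h+2r]$ and $\lvert\partial_z\alpha\rvert\le Cr^{-1}$,
%
\begin{align*}
\lVert \partial_z\alpha\rVert_{L^p_z}^p
=\int_{-h+r}^{-h+2r}\lvert \partial_z\alpha(z)\rvert^p\,dz
\le C^p r^{-p}\cdot r
=C^p r^{1-p},
\end{align*}
%
that is, $\lVert \partial_z\alpha\rVert_{L^p_z}\le C r^{1/p-1}$; the remaining estimates follow in the same way.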
%
By Proposition~\ref{prop: nearly-optimal estimate for laplacian} we then have
\begin{equation*}
\|\theta (\partial_z\alpha) (\nabla_H\pi)\|_{L^p(\Omega')}
\le \lVert \theta\rVert_\infty
\|\partial_z\alpha\|_{L^p_z}
\|\nabla_H\pi\|_{L^p_H}
\le C_p r^{3/p-1}(1 + |\log r|) \|f\|_{L^\infty_H L^p_z(\Omega)}.
\end{equation*}
We further have the Poincar\'e inequality
%
\begin{align}\label{eq: vertical poincare}
\|f\|_{L^\infty(G'; L^p(-h, -h + d))}
\le d\|\partial_z f\|_{L^\infty_H L^p_z},
\quad 0\le d\le h, \quad \restr{f}{\Gamma'_b} = 0
\end{align}
%
and hence using H\"older's inequality yields
%
\begin{equation*}
\|(\Delta_H\theta)(\partial_z\alpha)v\|_{L^p(\Omega')}
\le
\|\Delta_H\theta\|_{L^p_H}
\|\partial_z\alpha\|_{\infty}
\|v\|_{L^\infty(G'; L^p(-h,-h+2r))}
\le Cr^{2/p-2}\|\partial_zv\|_{L^\infty_H L^p_z(\Omega)}.
\end{equation*}
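For completeness we note that \eqref{eq: vertical poincare} follows from the fundamental theorem of calculus: since $\restr{f}{\Gamma'_b}=0$, H{\"o}lder's inequality yields for $z\in(-h,-h+d)$
%
\begin{align*}
\lvert f(x',z)\rvert
=\Big\lvert \int_{-h}^{z}\partial_z f(x',\zeta)\,d\zeta\Big\rvert
\le (z+h)^{1-1/p}\,\lVert \partial_z f(x',\cdot)\rVert_{L^p_z}
\le d^{1-1/p}\,\lVert \partial_z f(x',\cdot)\rVert_{L^p_z},
\end{align*}
%
and taking the $L^p$-norm in $z$ over $(-h,-h+d)$ contributes the remaining factor $d^{1/p}$.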
For the third term in $F_1$ we simply have
%
\begin{equation*}
\|(\Delta_H\theta)\alpha(\partial_zv)\|_{L^p(\Omega')}
\le \lVert \Delta_H \theta\rVert_{L^p_H}
\lVert \alpha\rVert_\infty
\|\partial_zv\|_{L^\infty_H L^p_z(\Omega)}
\le Cr^{2/p-2} \|\partial_zv\|_{L^\infty_H L^p_z(\Omega)}.
\end{equation*}
The first term in $F_2$ is estimated via \eqref{eq: horizontal estimate for v two}, yielding
%
\begin{equation*}
\|(\nabla_H\theta) \alpha (\nabla_Hv)\|_{L^p(\Omega')}
\le \lVert \nabla_H \theta\rVert_{L^p_H}
\lVert \alpha\rVert_\infty
\lVert \nabla_Hv\rVert_{L^\infty_H L^p_z(\Omega)}
\le C_{\theta,p,\lambda_0} r^{2/p-1} |\lambda|^{-1/2} \|f\|_{L^\infty_H L^p_z(\Omega)},
\end{equation*}
whereas for the second term in $F_2$ we simply have
%
\begin{equation*}
\|\theta (\partial_z\alpha) (\partial_zv)\|_{L^p(\Omega')}
\le \lVert \theta\rVert_\infty \lVert \partial_z \alpha\rVert_\infty
\lVert \partial_z v\rVert_{L^\infty_H L^p_z(\Omega)}
\le Cr^{2/p-1} \|\partial_zv\|_{L^\infty_H L^p_z(\Omega)},
\end{equation*}
%
and by the Poincar\'e inequality \eqref{eq: vertical poincare} we estimate the last term by
%
\begin{equation*}
\|\theta (\partial_z^2\alpha) v\|_{L^p(\Omega')}
\le \lVert \theta\rVert_{L^p_H}\lVert \partial^2_z \alpha\rVert_\infty
\lVert v\rVert_{L^\infty(G'; L^p(-h,-h+2r))}
\le Cr^{2/p-1} \|\partial_zv\|_{L^\infty_H L^p_z(\Omega)}.
\end{equation*}
%
Collecting the above estimates, using \eqref{eq: alpha estimate step one}, \eqref{eq: alpha estimate step two} and \eqref{eq: alpha estimate step three}, as well as $r=\eta \lvert \lambda\rvert^{-1/2}$, we obtain that
%
\begin{align} \label{eq: estimate for alpha v}
\begin{split}
|\lambda|^{1/2} \|\partial_z(\alpha v)\|_{L^\infty_HL^p_z(\Omega)}
&\le C_{\theta,p,\lambda_0} \left(
\eta^{2/p-2}+\eta^{3/p-2}\lvert \lambda \rvert^{-1/2p}
+\eta^{2/p-1}r^{1/p}\lvert \log(r)\rvert
\right)
\lVert f\rVert_{L^\infty_H L^p_z(\Omega)}
\\&+ C_{\theta,p} (\eta^{2/p-1} + \eta^{2/p-2}) |\lambda|^{1/2}
\|\partial_zv\|_{L^\infty_HL^p_z(\Omega)}
\\
&\le C_{\theta,p,\lambda_0} \eta^{2/p-1}
\big(1 + r^{1/p} |\log r| \big) \|f\|_{L^\infty_HL^p_z(\Omega)}
\\&+ C_{\theta,p} (\eta^{2/p-1} + \eta^{2/p-2}) |\lambda|^{1/2}
\|\partial_zv\|_{L^\infty_HL^p_z(\Omega)}.
\end{split}
\end{align}
\textbf{Step 2:} Now we shall estimate $\partial_z(\beta v)$.
We apply Lemma~\ref{lem: anisotropic interpolation inequality} as in the previous step to obtain
%
\begin{align}\label{eq: beta estimate step one}
|\lambda|^{1/2} \|\partial_z(\beta v)\|_{L^\infty_H L^p_z(\Omega)}
&\le C_p |\lambda|^{1/p} \sup_{x_0' \in G} \left(
|\lambda|^{1/2} \|\partial_z (\beta v)\|_{L^p(C(x_0'; |\lambda|^{-1/2}))}
+ \|\nabla_H\partial_z (\beta v)\|_{L^p(C(x_0'; |\lambda|^{-1/2}))}
\right).
\end{align}
%
In the following we fix an arbitrary point $x_0' \in G$.
With the same cut-off function $\theta \in C^\infty_c(\mathbb R^2)$ as in Step 1, we find that $\theta\beta v$ solves
%
\begin{align*}
\lambda(\theta\beta v) - \Delta(\theta\beta v)
= F_3 \quad\text{in}\quad \Omega',
\quad
\partial_z(\theta\beta v)|_{\Gamma_u'} = 0, \quad
\restr{\theta\beta v}{\Gamma_b'} = 0, \quad
\theta\beta v \text{ periodic on }\Gamma_l'
\end{align*}
%
where
%
\[
F_3:=
-\theta\beta(\nabla_H\pi)
-2(\nabla_H \theta)\beta\cdot (\nabla_H v)
-2\theta(\partial_z \beta)(\partial_z v)
-(\Delta_H \theta)\beta v
-2\theta(\partial_z^2\beta)v.
\]
%
We apply estimate \eqref{eq:AnisotropicResolventOmega} on $\Omega'$ with $q=p$ to obtain
%
\begin{align}\label{eq: beta estimate step two}
|\lambda|^{1/2} \|\nabla (\theta \beta v)\|_{L^p(\Omega')}
+ \|\Delta (\theta \beta v)\|_{L^p(\Omega')}
\le C_\theta \lVert F_3\rVert_{L^p(\Omega')}
\end{align}
where we further have, compare \eqref{eq: alpha estimate step three}, that
%
\begin{align}\label{eq: beta estimate step three}
\begin{split}
\lVert \partial_z (\beta v)\rVert_{L^p(C(x'_0;\lvert \lambda \rvert^{-1/2}))}
&\le \lVert \nabla (\theta\beta v)\rVert_{L^p(\Omega')},
\\
\lVert \nabla_H \partial_z (\beta v)\rVert_{L^p(C(x'_0;\lvert \lambda \rvert^{-1/2}))}
&\le \lVert \nabla_H \partial_z (\theta \beta v)\rVert_{L^p(\Omega')}
\le \lVert \theta \beta v \rVert_{W^{2,p}(\Omega')}
\le C_p \lVert \Delta (\theta \beta v)\rVert_{L^p(\Omega')}
\end{split}
\end{align}
%
by the invertibility of the Laplace operator with mixed Neumann and Dirichlet boundary conditions, compare Section \ref{sec:laplace}.
We now estimate the right-hand side of \eqref{eq: beta estimate step two} as follows:
Note that $\beta$ satisfies the estimates
%
\[
\lVert \beta\rVert_{L^p_z}\le C r^{1/p},
\quad
\lVert \partial_z \beta\rVert_{L^p_z}\le C r^{1/p-1},
\quad
\lVert \partial_z^2 \beta\rVert_{L^p_z}\le C r^{1/p-2}
\]
%
since $\supp (\beta)\subset [-h,-h+3r]$.
It follows from Proposition~\ref{prop: nearly-optimal estimate for laplacian} that
%
\begin{align*}
\|\theta \beta (\nabla_H\pi)\|_{L^p(\Omega')}
\le \lVert \theta\rVert_\infty
\|\beta\|_{L^p_z}
\|\nabla_H\pi\|_{L^p_H}
\le C_p r^{3/p} (1 + |\log r|) \|f\|_{L^\infty_HL^p_z(\Omega)}.
\end{align*}
The estimate \eqref{eq: horizontal estimate for v two} implies that
%
\begin{align*}
\|(\nabla_H\theta)\beta\cdot (\nabla_H v)\|_{L^p(\Omega')}
&\le \|\nabla_H\theta\|_{L^p_H}
\lVert \beta\rVert_\infty
\|\nabla_H v\|_{L^\infty_HL^p_z(\Omega)}
\le C_{\theta,p,\lambda_0}r^{2/p-1} |\lambda|^{-1/2}
\|f\|_{L^\infty_HL^p_z(\Omega)},
\end{align*}
and for the term containing vertical derivatives we have
%
\begin{equation*}
\|\theta(\partial_z\beta) (\partial_zv)\|_{L^p(\Omega')}
\le \|\theta\|_{L^p_H}
\|\partial_z\beta\|_\infty
\|\partial_zv\|_{L^\infty_HL^p_z(\Omega)}
\le Cr^{2/p-1} \|\partial_zv\|_{L^\infty_HL^p_z(\Omega)}.
\end{equation*}
%
By the Poincar\'e inequality \eqref{eq: vertical poincare} we have
%
\begin{align*}
\|(\Delta_H\theta) \beta v\|_{L^p(\Omega')}
\le \|\Delta_H\theta\|_{L^p_H}
\lVert \beta\rVert_\infty
\|v\|_{L^\infty(G'; L^p(-h, -h + 3r))}
\le Cr^{2/p-1} \|\partial_zv\|_{L^\infty_HL^p_z(\Omega)}
\end{align*}
as well as
%
\begin{equation*}
\|\theta(\partial_z^2\beta) \, v\|_{L^p(\Omega')}
\le \lVert \theta\rVert_{L^p_H}
\lVert \partial_z^2\beta \rVert_\infty
\lVert v\rVert_{L^\infty(G';L^p(-h,-h+3r))}
\le Cr^{2/p-1} \|\partial_zv\|_{L^\infty_HL^p_z(\Omega)}.
\end{equation*}
%
%
Combining the above estimates with \eqref{eq: beta estimate step one}, \eqref{eq: beta estimate step two} and \eqref{eq: beta estimate step three} as well as $r=\eta \lvert \lambda \rvert^{-1/2}$ then yields
%
\begin{align} \label{eq: estimate for beta v}
\begin{split}
|\lambda|^{1/2} \|\partial_z(\beta v)\|_{L^\infty_H L^p_z(\Omega)}
&\le C_{\theta,p,\lambda_0}
\left(
\eta^{2/p-1}
+\eta^{3/p} |\lambda|^{-1/2p}
\big( 1 + |\log(\eta|\lambda|^{-1/2})| \big)
\right)\|f\|_{L^\infty_H L^p_z(\Omega)}
\\&+ C_{\theta,p}\eta^{2/p-1} |\lambda|^{1/2} \|\partial_zv\|_{L^\infty_H L^p_z(\Omega)}.
\end{split}
\end{align}
We now substitute \eqref{eq: estimate for alpha v} and \eqref{eq: estimate for beta v} into \eqref{eq: decomposition to alpha and beta}. Since none of the constants $C>0$ depends on the parameter $\eta>0$, we can choose $\eta$ sufficiently large and, similarly to the proof of Lemma~\ref{LemmaAnisotropicLaplaceResolventOmega}, absorb the terms involving $\partial_z v$ to obtain
%
\begin{equation*}
|\lambda|^{1/2} \|\partial_z v\|_{L^\infty_HL^p_z(\Omega)}
\le C_{\theta,p,\lambda_0}
\left(
\eta^{2/p-1}(1+r^{1/p}\lvert \log(r)\rvert)
+\eta^{3/p}\lvert \lambda \rvert^{-1/2p}(1+\lvert \log(\lvert \lambda \rvert)\rvert)
\right)
\|f\|_{L^\infty_HL^p_z(\Omega)} .
\end{equation*}
%
Since
%
\[
\sup_{0<r<r_0}r^{1/p}\lvert \log(r)\rvert <\infty,
\quad
\sup_{\lvert \lambda\rvert>\lambda_0}
\lvert \lambda\rvert^{-1/2p}(1+\lvert \log(\lvert \lambda\rvert)\rvert)
<\infty,
\]
%
for any $r_0,\lambda_0>0$ and $p\in (1,\infty)$, this implies the desired estimate
$|\lambda|^{1/2} \|\partial_z v\|_{L^\infty_HL^p_z(\Omega)}
\le C \|f\|_{L^\infty_HL^p_z(\Omega)}$ for $|\lambda| \ge \lambda_0$.
\end{proof}
We now turn to the problem
%
\begin{align}\label{eq:HydrostaticStokesResolventDerivative}
\lambda v-Av
=\mathbb{P}\partial_z f
\text{ on } \Omega
\end{align}
%
with boundary conditions \eqref{eq:bc} for $f\in X$. Since
%
\begin{align}\label{eq:FixedByProjection}
\mathbb{P}\partial_z f=\partial_z f-(1-Q)\overline{\partial_z f}=\partial_z f,
\end{align}
%
whenever $f=0$ on $\Gamma_u\cup \Gamma_b$, and since $C^\infty_{\text{per}}([0,1]^2;C^\infty_c (-h,0))^2$ is dense in $X$, we may assume without loss of generality that \eqref{eq:FixedByProjection} holds.
Moreover, in view of periodic extension we may assume that \eqref{eq:HydrostaticStokesResolventDerivative} holds in a larger domain $\Omega' := G'\times(-h,0)$, $G' := (-2,3)^2$. Since the problem is well-posed in $L^p_{\overline{\sigma}}(\Omega)$ by \eqref{eq:LpEstimateDerivatives}, estimate~\eqref{eq:SecondVerticalEstimate} then follows from the following:
\begin{proposition} \label{thm: 2nd vertical derivative estimate}
Let $p\in(2, \infty)$ and $\theta\in(0, \pi)$.
Then there exist constants $\lambda_0>0$ and $C_{\theta,p,\lambda_0}>0$ such that for all $\lambda\in\Sigma_\theta$ with $\lvert \lambda \rvert>\lambda_0$ and $f \in X$ the solution to the problem \eqref{eq:HydrostaticStokesResolventDerivative} satisfies
%
\begin{equation*}
\lvert \lambda\rvert^{1/2}\lVert v\rVert_{L^\infty_HL^p_z(\Omega)}
\le C_{\theta, p,\lambda_0} \lVert f\rVert_{L^\infty_HL^p_z(\Omega)}.
\end{equation*}
%
\end{proposition}
To prove this estimate,
we adopt a duality argument combined with
the use of a regularized delta function, a methodology known from
$L^\infty$-type error analysis of the finite element method, cf. \cite{RaSc82}.
We first introduce some notation. Using periodicity, one sees that for any $\varepsilon\in (0,1)$ we have $B(x_0',\varepsilon)\subset G'$ for $x_0'\in G$ and
%
\begin{equation*}
\|v\|_{L^\infty_HL^p_z(\Omega)}^p
= \sup_{x_0' \in G} \sup_{x' \in B(x'_0; \varepsilon)}
\int_{-h}^0 |v(x', z)|^p \, dz,
\end{equation*}
%
where by $B(x'_0;\varepsilon)$ we continue to denote a disk in $\mathbb{R}^2$, compare Lemma~\ref{lem: anisotropic interpolation inequality}.
In the following we fix arbitrary $x_0' \in G$, $x' \in B(x_0'; \varepsilon)$ and choose $\varepsilon = |\lambda|^{-\frac{p}{2(p-2)}}$ for $\lambda$ as above.
Letting $\delta\ge 0$ be a smooth nonnegative function in the variables $(x,y)=:x'$ such that $\mathrm{supp}\,\delta \subset B(0; 1)$ and $\int_{\mathbb R^2} \delta \, dx' = 1$, we introduce a rescaled function as
%
\begin{align}\label{eq:DeltaRescaling}
\delta_\varepsilon(x')
:= \frac1{\varepsilon^2} \delta \left( \frac{x'}\varepsilon \right ),
\qquad
\delta_{\varepsilon, x_0'}(x')
:= \delta_\varepsilon(x' - x_0').
\end{align}
%
We then obtain
%
\begin{equation} \label{eq1: proof of 2nd vertical estimate}
\int_{-h}^0 |v(x', z)|^p \, dz
= \int_{-h}^0\int_{G'}
\big( |v(x', z)|^p - |v(y', z)|^p \big) \delta_{\varepsilon,x'_0}(y')
\, dy'dz
+ (v, \delta_{\varepsilon, x_0'}|v|^{p-2}v^*)_{\Omega'}
=: I_1(x') + I_2,
\end{equation}
%
where $v^*$ means the complex conjugate of $v$ and $(\cdot, \cdot)_{\Omega'}$ denotes the inner product on $L^2(\Omega')^2$. In the following we estimate the two terms on the right-hand side separately, beginning with $I_1$.
\begin{lemma} \label{lem1: proof of 2nd vertical estimate}
Under the assumptions of Proposition~\ref{thm: 2nd vertical derivative estimate} we have for all
$x'_0\in G$ and $x'\in B(x'_0;\varepsilon)$, $\varepsilon = |\lambda|^{-\frac{p}{2(p-2)}}$, that
%
\[
|I_1(x')|=\left\vert \int_{\Omega'}
\big( |v(x', z)|^p - |v(y', z)|^p \big) \delta_{\varepsilon,x'_0}(y')
\, dy'dz \right\vert
\le C_{\theta,p}|\lambda|^{-1/2} \|f\|_{L^\infty_H L^p_z(\Omega)} \|v\|_{L^\infty_H L^p_z(\Omega)}^{p-1}.
\]
%
\end{lemma}
\begin{proof}
Since $\int_{\mathbb{R}^2}\delta_{\varepsilon,x'_0}(y')\,dy'=1$ and
$\mathrm{supp}\, \delta_{\varepsilon,x_0'} \subset B(x_0'; \varepsilon)$ we obtain
%
\begin{align*}
|I_1(x')|
&\le \sup_{y' \in B(x_0'; \varepsilon)} \int_{-h}^0
\big| |v(x', z)|^p - |v(y', z)|^p \big| \, dz
\\
&\le C \sup_{y' \in B(x_0'; \varepsilon)} \int_{-h}^0
(|v(x', z)|^{p-1} + |v(y', z)|^{p-1}) \big| v(x', z) - v(y', z) \big|
\, dz,
\end{align*}
%
where we have used the elementary inequality
%
\begin{equation*}
|a^p - b^p|
\le p \max\{a, b\}^{p-1} |a - b|
\le p (a+b)^{p-1} |a - b|
\le p 2^{p-2} (a^{p-1}+b^{p-1}) |a - b|
\end{equation*}
%
for all $a,b\ge 0$, where we used that $p\in [2,\infty)$ implies that $x\mapsto x^{p-1}$ is a convex function.
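Since the constant $p\,2^{p-2}$ in this chain is easy to get wrong, the following throwaway script (ours, not part of the argument) confirms the inequality numerically on random samples:

```python
import random

def elementary_bound(p, a, b):
    # right-hand side of |a^p - b^p| <= p * 2^(p-2) * (a^(p-1) + b^(p-1)) * |a - b|
    return p * 2 ** (p - 2) * (a ** (p - 1) + b ** (p - 1)) * abs(a - b)

random.seed(0)
for p in (2.0, 2.5, 4.0, 7.3):
    for _ in range(10_000):
        a, b = random.uniform(0.0, 10.0), random.uniform(0.0, 10.0)
        assert abs(a ** p - b ** p) <= elementary_bound(p, a, b) + 1e-9
```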
%
H\"older's inequality then implies that
%
\begin{equation*}
\int_{-h}^0
(|v(x', z)|^{p-1} + |v(y', z)|^{p-1}) \big| v(x', z) - v(y', z) \big|
\, dz
\le (\|v(x')\|_{L^p_z}^{p-1} + \|v(y')\|_{L^p_z}^{p-1})
\|v(x') - v(y')\|_{L^p_z}.
\end{equation*}
%
%
Hence we have
%
\begin{align*}
\sup_{x'\in B(x'_0;\varepsilon)}|I_1(x')|
&\le C \sup_{y'\in B(x_0'; \varepsilon)} \|v(y')\|_{L^p_z}^{p-1}
\sup_{y'\in B(x_0'; \varepsilon)} \|v(x') - v(y')\|_{L^p_z}
\le C \|v\|_{L^\infty_HL^p_z}^{p-1}
\varepsilon^\alpha
\|v\|_{C^\alpha_HL^p_z(\Omega)},
\end{align*}
%
where $\alpha := 1-2/p>0$ and
$\lVert \cdot \rVert_{C^\alpha_HL^p_z(\Omega)}$ denotes the norm of the space of $L^p(-h,0)$-valued H\"older continuous functions of exponent $\alpha$ on $\overline{G}$.
The choice $\varepsilon = |\lambda|^{-\frac{p}{2(p-2)}}$ then yields
$\varepsilon^\alpha = |\lambda|^{-\frac{p}{2(p-2)}\cdot\frac{p-2}{p}} = |\lambda|^{-1/2}$.
We now use the Sobolev embedding $W^{1,p}(G) \hookrightarrow C^\alpha(\overline{G})$ to obtain the estimate $\|v\|_{C^\alpha_HL^p_z} \le C\|v\|_{W^{1,p}(\Omega)}$.
In addition, the Poincar\'e inequality yields
%
\[
\lVert v \rVert_{W^{1,p}(\Omega)}
\le C_p \lVert \nabla v\rVert_{L^p(\Omega)}
= C_p\lVert \nabla (\lambda-A_{p,\os})^{-1}\partial_z f\rVert_{L^p(\Omega)}
\le C_{\theta,p} \lVert f\rVert_{L^p(\Omega)}
\le C_{\theta,p} \lVert f\rVert_{L^\infty_H L^p_z(\Omega)},
\]
%
where we used that $\nabla (-A_{p,\os})^{-1/2}$, $A_{p,\os}(\lambda-A_{p,\os})^{-1}$ and $(-A_{p,\os})^{-1/2}\partial_z$ are (uniformly) bounded on $L^p_{\overline{\sigma}}(\Omega)$ for $\lambda\in\Sigma_\theta$
by \cite{GGHHK17}.
Combining these results then gives the desired estimate.
\end{proof}
%
In order to estimate $I_2$ we perform a duality argument.
For this purpose we introduce an auxiliary problem corresponding to \eqref{eq:HydrostaticStokesResolventDerivative} as follows:
%
\begin{equation} \label{eq: dual problem}
\begin{aligned}
\lambda^* w - \Delta w + \nabla_H \Pi
&= \delta_{\varepsilon,x_0'} |v|^{p-2}v^* \quad\text{in}\quad \Omega',
\\
\partial_z \Pi &= 0 \quad\text{in}\quad \Omega',
\\
\mathrm{div}_H\, \overline{w} &= 0 \quad\text{in}\quad G',
\\
\partial_zw|_{\Gamma_u'} = 0, \quad w|_{\Gamma_b'} &= 0, \quad
w, \Pi \text{ periodic on } \Gamma_l',
\end{aligned}
\end{equation}
%
where the superscript $*$ denotes the complex conjugate as before.
%
We establish an $L^1_HL^q_z$-estimate to this problem, where $q := p/(p-1)$ is the dual index of $p$.
%
\begin{proposition} \label{prop: L1Lq estimate for dual problem}
Let $p\in(2,\infty)$, $1/p+1/q=1$ and $\theta\in(0,\pi)$.
Then there exist a sufficiently large
$\lambda_0>0$ and a constant $C_{\theta,p}>0$ such that the solution of \eqref{eq: dual problem} satisfies
%
\begin{equation*}
|\lambda|^{1/2}\|\partial_zw\|_{L^1_HL^q_z(\Omega')}
\le C_{\theta,p} \left(
1+|\lambda|^{-1/2q}\varepsilon^{2/s-2}
\right)
\|v\|_{L^\infty_H L^p_z(\Omega)}^{p-1},
\end{equation*}
%
for all $\varepsilon\in(0,1)$, $s \in (1, q]$, $x_0' \in G$, $\lambda\in\Sigma_\theta$, $|\lambda|>\lambda_0$, and $v \in X$.
\end{proposition}
\begin{remark}
If one even has $p\in (3,\infty)$ then this result can be extended to the full range of $\lambda \in \Sigma_\theta$ by a similar argument as in the proof of Lemma~\ref{LemmaAnisotropicLaplaceResolventOmega}, compare Remark~\ref{FullRangeRemark}.
\end{remark}
For simplicity, we write $L^p_HL^q_z$ to refer to $L^p_H L^q_z(\Omega')=L^p(G'; L^q(-h, 0))$ when there is no ambiguity.
First we introduce the following result.
\begin{lemma} \label{lem: Ls estimate of F}
Let $\varepsilon\in(0,1)$, $x_0' \in G$, $p\in(1,\infty)$, $1/p+1/q=1$ and $v \in X$ be arbitrary. Then, for $\delta_{\varepsilon,x'_0}$ defined as in \eqref{eq:DeltaRescaling} and
$s \in [1,q]$ we have
%
\begin{equation*}
\|\delta_{\varepsilon,x_0'} |v|^{p-2}v^*\|_{L^s(\Omega')}
\le C\|\delta_{\varepsilon,x_0'} |v|^{p-2}v^*\|_{L^s_HL^q_z}
\le C\varepsilon^{2/s-2} \|v\|_{L^\infty_HL^p_z(\Omega)}^{p-1}
\end{equation*}
%
for a constant $C>0$ not depending on $\varepsilon$, $x'_0$ and $v$.
\end{lemma}
\begin{proof}
We set $F := \delta_{\varepsilon,x_0'} |v|^{p-2}v^*$.
%
Noting that $|F|^q = \delta_{\varepsilon, x_0'}^q |v|^p$ and that $\delta_{\varepsilon,x_0'}$ is independent of $z$, we obtain
%
\begin{align*}
\|F\|_{L^s_HL^q_z}
&= \left[
\int_{G'} \left(
\int_{-h}^0 \delta_\varepsilon(x' - x_0')^q |v(x', z)|^p\,dz
\right)^{s/q}\,dx'
\right]^{1/s}
\\
&\le \left(
\int_{G'} \delta_\varepsilon(x' - x_0')^s\,dx'
\right)^{1/s}
\left[
\sup_{x'\in G'} \left( \int_{-h}^0 |v(x', z)|^p\, dz \right)^{1/p}
\right]^{p/q}
\\
&\le C\varepsilon^{2/s - 2} \|v\|_{L^\infty_HL^p_z(\Omega)}^{p-1},
\end{align*}
%
where we used the periodicity of $v$ in the last step. This completes the proof.
\end{proof}
\begin{proof}[Proof of Proposition~\ref{prop: L1Lq estimate for dual problem}]
We set $r := \eta|\lambda|^{-1/2}$, where $\eta>0$ is a large number to be fixed later and $\lvert \lambda\rvert>\lambda_0$, where $\lambda_0>0$ is sufficiently large such that $\eta \lambda_0^{-1/2}<1$.
%
We introduce two cut-off functions $\alpha=\alpha_r$, $\beta=\beta_r$ in the vertical direction as follows:
\begin{align*}
&\alpha \in C^\infty([-h, 0]), \quad
\alpha\equiv0 \text{ in } [-h, -h + r ], \quad
\alpha\equiv1 \text{ in } [-h+2r, 0], \quad
|\partial_z^k\alpha(z)| \le Cr^{-k},
\\
&\beta \in C^\infty([-h, 0]), \quad
\beta\equiv1 \text{ in } [-h, -h+2r], \quad
\beta\equiv0 \text{ in } [-h+3r, 0], \quad
|\partial_z^k\beta(z)| \le Cr^{-k}
\end{align*}
%
for $k=0,1,2$.
%
Then we may split the estimate for $\partial_zw$ into the ``upper'' and ``lower'' parts in $\Omega'$ as
%
\begin{equation} \label{eq: decomposition to alpha and beta for w}
\|\partial_z w\|_{L^1_HL^q_z}
\le \|\partial_z (\alpha w)\|_{L^1_HL^q_z}
+ \|\partial_z (\beta w)\|_{L^1_HL^q_z}.
\end{equation}
%
\textbf{Step 1.} We consider $\alpha w$, which satisfies
%
\begin{align*}
\lambda^* \alpha w - \Delta(\alpha w)
&= \alpha F
- \alpha(\nabla_H \Pi)
- 2(\partial_z\alpha) (\partial_z w)
- (\partial_z^2\alpha) w,
\\
\partial_z(\alpha w) &= 0 \quad\text{on}\quad \Gamma_u'\cup\Gamma_b',
\qquad
\alpha w \text{ periodic on }\Gamma_l'
\end{align*}
%
where $F:= \delta_{\varepsilon,x_0'} |v|^{p-2}v^*$ as in the proof of Lemma~\ref{lem: Ls estimate of F}.
Differentiating this with respect to $z$ yields
%
\begin{align*}
\lambda^*\partial_z(\alpha w) - \Delta(\partial_z(\alpha w))
&= \partial_z\left[
\alpha F
- 2(\partial_z\alpha) (\partial_z w)
- (\partial_z^2\alpha) w
\right]
- (\partial_z\alpha) (\nabla_H \Pi) \quad\text{in}\quad \Omega',
\\
\partial_z(\alpha w) &= 0 \quad\text{on}\quad \Gamma_u'\cup\Gamma_b',
\qquad
\partial_z(\alpha w) \text{ periodic on }\Gamma_l'.
\end{align*}
%
Applying Lemma~\ref{LemmaAnisotropicLaplaceResolventOmega} in $L^1_HL^q_z(\Omega')$ we obtain
%
\begin{align*}
|\lambda|^{1/2} \|\partial_z(\alpha w)\|_{L^1_HL^q_z}
\le &C\left(
\|\alpha F\|_{L^1_HL^q_z}
+ \|(\partial_z\alpha) (\partial_zw)\|_{L^1_HL^q_z}
+ \|(\partial_z^2\alpha) w\|_{L^1_HL^q_z}
\right)
\\+ &C|\lambda|^{-1/2} \|(\partial_z\alpha) (\nabla_H \Pi)\|_{L^1_HL^q_z}.
\end{align*}
%
We now estimate each term on the right-hand side.
By Lemma~\ref{lem: Ls estimate of F} with $s = 1$ we have
%
\begin{equation*}
\|\alpha F\|_{L^1_HL^q_z}
\le \lVert \alpha\rVert_\infty \|F\|_{L^1_HL^q_z}
\le C \|v\|_{L^\infty_HL^p_z(\Omega)}^{p-1}.
\end{equation*}
%
Using the estimate on derivatives of $\alpha$ we obtain
%
\begin{equation*}
\|(\partial_z\alpha) (\partial_zw)\|_{L^1_HL^q_z}
\le Cr^{-1} \|\partial_zw\|_{L^1_HL^q_z},
\end{equation*}
%
and by the Poincar\'e inequality we have
%
\begin{equation*}
\|(\partial_z^2\alpha) w\|_{L^1_HL^q_z}
\le C r^{-2} \|w\|_{L^1_HL^q_z(G'\times (-h,-h+2r))}
\le Cr^{-1} \|\partial_zw\|_{L^1_HL^q_z}.
\end{equation*}
%
Using $L^s(G')\hookrightarrow L^1(G')$ as well as the estimate
on the pressure term, cf. \cite[Theorem~3.1]{HieberKashiwabara2015},
in $L^s(\Omega)$ for $s\in (1,q]$, we obtain
%
\begin{align*}
\|(\partial_z\alpha) (\nabla_H \Pi)\|_{L^1_HL^q_z}
\le C\|\partial_z\alpha\|_{L^q_z} \|\nabla_H\Pi\|_{L^s(G')}
\le Cr^{1/q - 1} \|F\|_{L^s(\Omega')}
\le Cr^{1/q - 1} \varepsilon^{2/s-2}
\|v\|_{L^\infty_HL^p_z(\Omega)}^{p-1}.
\end{align*}
%
%
%
Collecting the above estimates and plugging in $r=\eta\lvert \lambda\rvert^{-1/2}$ yields
%
\begin{equation} \label{eq: alpha w}
|\lambda|^{1/2} \|\partial_z(\alpha w)\|_{L^1_HL^q_z}
\le C(1 + \eta^{1/q-1} |\lambda|^{-1/2q} \varepsilon^{2/s-2})
\|v\|_{L^\infty_HL^p_z(\Omega)}^{p-1}
+ C \eta^{-1} |\lambda|^{1/2}
\|\partial_z w\|_{L^1_HL^q_z}.
\end{equation}
%
\textbf{Step 2.} We consider $\beta w$, which satisfies
%
\begin{align*}
\lambda^*\beta w - \Delta(\beta w)
&= \beta F
- \beta \nabla_H\Pi
- 2(\partial_z\beta) (\partial_z w)
- (\partial_z^2\beta) w
\quad\text{in}\quad \Omega',
\\
\partial_z(\beta w) &= 0 \text{ on } \Gamma_u', \quad
\beta w = 0 \text{ on } \Gamma_b',
\quad \partial_z(\beta w) \text{ periodic on }\Gamma_l'.
\end{align*}
%
Applying Lemma~\ref{LemmaAnisotropicLaplaceResolventOmega} in $L^1_HL^q_z$ we obtain
%
\begin{equation*}
|\lambda|^{1/2} \|\partial_z(\beta w)\|_{L^1_HL^q_z}
\le C(\|\beta F\|_{L^1_HL^q_z}
+ \|(\partial_z\beta) (\partial_zw)\|_{L^1_HL^q_z}
+ \|(\partial_z^2\beta) w\|_{L^1_HL^q_z}
+ \|\beta (\nabla_H\Pi)\|_{L^1_HL^q_z}).
\end{equation*}
%
A calculation similar to Step 1 then gives
%
\begin{equation} \label{eq: beta w}
|\lambda|^{1/2} \|\partial_z(\beta w)\|_{L^1_HL^q_z}
\le C(1 + \eta^{1/q} |\lambda|^{-1/2q} \varepsilon^{2/s-2}) \|v\|_{L^\infty_HL^p_z}^{p-1}
+ C\eta^{-1} |\lambda|^{1/2} \|\partial_z w\|_{L^1_HL^q_z}.
\end{equation}
%
Substituting \eqref{eq: alpha w} and \eqref{eq: beta w} into \eqref{eq: decomposition to alpha and beta for w} and choosing sufficiently large $\eta$ enable us to absorb the term $\lvert \lambda\rvert^{1/2}\lVert \partial_z w\rVert_{L^1_H L^q_z}$ from the right-hand side, which leads to
%
\begin{equation*}
|\lambda|^{1/2} \|\partial_zw\|_{L^1_HL^q_z}
\le C(1 + |\lambda|^{-1/2q} \varepsilon^{2/s-2})
\|v\|_{L^\infty_HL^p_z(\Omega)}^{p-1}.
\end{equation*}
%
This completes the proof.
\end{proof}
With the preparations above, we are now in the position to prove Proposition~\ref{thm: 2nd vertical derivative estimate}.
\begin{proof}[Proof of Proposition~\ref{thm: 2nd vertical derivative estimate}]
By \eqref{eq1: proof of 2nd vertical estimate} and Lemma~\ref{lem1: proof of 2nd vertical estimate} we have
%
\begin{equation} \label{eq1: proof of 2nd vertical derivative estimate}
\|v\|_{L^\infty_H L^p_z(\Omega)}^p
\le C|\lambda|^{-1/2} \|f\|_{L^\infty_H L^p_z(\Omega)} \|v\|_{L^\infty_H L^p_z(\Omega)}^{p-1}
+ I_2,
\end{equation}
%
with $I_2$ as defined in \eqref{eq1: proof of 2nd vertical estimate}.
Substituting \eqref{eq: dual problem} and integrating by parts, we find that
%
\begin{align*}
I_2
&= (v, \delta_{\varepsilon, x_0'} |v|^{p-2}v^*)_{\Omega'}
= (v, \lambda^* w - \Delta w + \nabla_H \Pi)_{\Omega'}
= (\lambda v - \Delta v + \nabla_H\pi, w)_{\Omega'}
= (\partial_z f, w)_{\Omega'} \\
&= -(f, \partial_z w)_{\Omega'},
\end{align*}
%
where we have used that
$(v,\nabla_H \Pi)_{\Omega'}=0=(\nabla_H \pi,w)_{\Omega'}$ since $\text{div}_H \overline{v}=0=\text{div}_H \overline{w}$ for the third and
$f|_{\Gamma_u\cup\Gamma_b} = 0$ for the last equality.
Using $1/p+1/q=1$ and applying Proposition~\ref{prop: L1Lq estimate for dual problem} we obtain
%
\begin{align*}
|I_2|
\le \|f\|_{L^\infty_H L^p_z(\Omega)}
\|\partial_z w\|_{L^1_H L^q_z}
\le C |\lambda|^{-1/2} \|f\|_{L^\infty_H L^p_z(\Omega)}
\|v\|_{L^\infty_H L^p_z(\Omega)}^{p-1}
\left(
1+\lvert \lambda\rvert^{-1/2q} \varepsilon^{2/s-2}
\right)
.
\end{align*}
%
We set $\varepsilon = |\lambda|^{ -\frac{p}{2(p-2)} }$ for $\lvert \lambda\rvert>1$ and
$s = \min\{\frac{4p}{3p+2}, \frac{p}{p-1}\} \in (1, q]$. This yields
%
\[
-\frac1{2q} + \left(1 - \frac1s\right)\frac{p}{p-2}
=
-\frac12 +\frac1{2p}
+\left(1 - \frac1s\right)\frac{p}{p-2}
\le
-\frac14 + \frac1{2p}
< 0
\]
%
which implies that $1+\lvert \lambda\rvert^{-1/2q} \varepsilon^{2/s-2}\le 2$ for $\lvert \lambda\rvert>1$ and therefore
%
\begin{equation} \label{eq2: proof of 2nd vertical derivative estimate}
|I_2| \le C|\lambda|^{-1/2} \|f\|_{L^\infty_H L^p_z(\Omega)} \|v\|_{L^\infty_H L^p_z(\Omega)}^{p-1},
\quad \lvert \lambda\rvert>1.
\end{equation}
%
The desired estimate then follows from \eqref{eq1: proof of 2nd vertical derivative estimate} and \eqref{eq2: proof of 2nd vertical derivative estimate} after dividing by $\|v\|_{L^\infty_H L^p_z(\Omega)}^{p-1}$.
\end{proof}
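As an independent check of the exponent bookkeeping in the proof above, the following throwaway script (function name and setup are ours) verifies with exact rational arithmetic that the chosen $s$ lies in $(1,q]$ and makes the exponent of $\lvert\lambda\rvert$ negative for all tested $p>2$:

```python
from fractions import Fraction

def exponent(p):
    # exponent of |lambda| in |lambda|^{-1/(2q)} * eps^{2/s - 2} for
    # eps = |lambda|^{-p/(2(p-2))}, q = p/(p-1), s = min(4p/(3p+2), q)
    p = Fraction(p)
    q = p / (p - 1)
    s = min(4 * p / (3 * p + 2), q)
    assert 1 < s <= q  # s lies in (1, q], as required by the proposition
    return -1 / (2 * q) + (1 - 1 / s) * p / (p - 2)

# negative for every tested p > 2, matching the bound -1/4 + 1/(2p)
assert all(exponent(p) < 0 for p in (Fraction(5, 2), 3, 4, 6, 10, 1000))
assert exponent(3) == Fraction(-1, 12)
```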
\begin{proof}[Proof of Claim~\ref{SemigroupEstimatesHard}]
Estimate \eqref{eq:FirstVerticalEstimate} now follows from \eqref{eq:voneestimate} and Proposition~\ref{vtwoestimate}, whereas estimate \eqref{eq:SecondVerticalEstimate} follows from
Proposition~\ref{thm: 2nd vertical derivative estimate}. Estimate \eqref{eq:ThirdVerticalEstimate} follows from \eqref{eq:FirstVerticalEstimate}, \eqref{eq:SecondVerticalEstimate} and Claim~\ref{SemigroupEstimates}.
\end{proof}
\section{Proof of the main results}\label{sec:proofs}
Theorem~\ref{thm:semigroup} is a direct consequence of Claims~\ref{SemigroupEstimates} and~\ref{SemigroupEstimatesHard}.
For the non-linear problem in the space $X$ we will make use of the following estimates.
\begin{lemma}\label{NonlinearEstimates}
Let $p>3$. Then there exists a constant $C>0$ such that for all $t>0$ and $v_i\in X_\os$ satisfying $\nabla v_i\in X$ and $\restr{v_i}{\Gamma_b}=0$ with $u_i=(v_i,w_i)$ as in \eqref{eq:WRelation} for $i=1,2$ we have
%
\begin{align}
\tag{i}
\lVert e^{tA}\mathbb{P}(u_1\cdot \nabla)v_2\rVert_{L^\infty_H L^p_z}
&\le C t^{-1/2}\lVert \nabla v_1\rVert_{L^\infty_H L^p_z} \lVert v_2\rVert_{L^\infty_H L^p_z},
\\
\tag{ii}
\lVert \nabla e^{tA}\mathbb{P}(u_1\cdot \nabla)v_2\rVert_{L^\infty_H L^p_z}
&\le C t^{-1/2} \lVert \nabla v_1\rVert_{L^\infty_H L^p_z} \lVert \nabla v_2\rVert_{L^\infty_H L^p_z},
\\
\tag{iii}
\lVert \nabla e^{tA}\mathbb{P}(u_1\cdot \nabla)v_2\rVert_{L^\infty_H L^p_z}
&\le C t^{-1} \lVert \nabla v_1\rVert_{L^\infty_H L^p_z} \lVert v_2\rVert_{L^\infty_H L^p_z},
\end{align}
%
as well as
%
\begin{align}
\tag{iv}
\lVert e^{tA}\mathbb{P} (u_1\cdot \nabla)v_2\rVert_{L^\infty_H L^p_z}
&\le C \left(
t^{-1/2}\lVert \nabla v_i\rVert_{L^\infty_H L^p_z} \lVert v_j \rVert_{L^\infty_H L^p_z}
+ \lVert \nabla v_1\rVert_{L^\infty_H L^p_z} \lVert \nabla v_2\rVert_{L^\infty_H L^p_z}
\right)
\end{align}
%
where $\{i,j\}=\{1,2\}$.
\end{lemma}
\begin{proof}
We begin by noting that
%
\begin{align*}
\lVert (u_1 \cdot\nabla) v_2\rVert_{L^\infty_H L^p_z}
&\le \left(
\lVert v_1 \rVert_{L^\infty(\Omega)}+\lVert w_1\rVert_{L^\infty(\Omega)}
\right)
\lVert \nabla v_2\rVert_{L^\infty_H L^p_z}.
\end{align*}
%
So, using Sobolev embeddings, the Poincar{\'e} inequality and $X\hookrightarrow L^p(\Omega)^2$ we obtain
%
\[
\lVert v_i \rVert_{L^\infty(\Omega)}
\le C \lVert v_i\rVert_{W^{1,p}(\Omega)}
\le C \lVert \nabla v_i\rVert_{L^p(\Omega)}
\le C \lVert \nabla v_i\rVert_{L^\infty_H L^p_z}.
\]
%
Similarly one has
%
\[
\lVert w_i\rVert_{L^\infty(\Omega)}
\le C\lVert \text{div}_H v_i \rVert_{L^\infty_H L^p_z}
\le C\lVert \nabla v_i \rVert_{L^\infty_H L^p_z}.
\]
%
This allows us to obtain (ii) via Claims~\ref{SemigroupEstimates} and~\ref{SemigroupEstimatesHard} as well as
%
\begin{align*}
\lVert \nabla e^{tA}\mathbb{P}(u_1\cdot \nabla)v_2\rVert_{L^\infty_H L^p_z}
\le C t^{-1/2} \lVert (u_1 \cdot\nabla) v_2\rVert_{L^\infty_H L^p_z}
\le C t^{-1/2} \lVert \nabla v_1\rVert_{L^\infty_H L^p_z} \lVert \nabla v_2\rVert_{L^\infty_H L^p_z}.
\end{align*}
%
To prove (i) we proceed analogously as above to obtain
%
\[
\lVert v_1\otimes v_2 \rVert_{L^\infty_H L^p_z}
\le C \lVert \nabla v_i\rVert_{L^\infty_H L^p_z} \lVert v_j\rVert_{L^\infty_H L^p_z},
\quad
\lVert w_1 v_2 \rVert_{L^\infty_H L^p_z}
\le C \lVert \nabla v_1\rVert_{L^\infty_H L^p_z} \lVert v_2\rVert_{L^\infty_H L^p_z}
\]
%
where $\{i,j\}=\{1,2\}$ and since $\text{div}\, u_i=0$ we can write
%
\[
(u_1\cdot \nabla)v_2
=\nabla \cdot(u_1\otimes v_2)
=\nabla_H\cdot (v_1\otimes v_2)
+ \partial_z (w_1 v_2)
\]
%
which allows us to apply Claims~\ref{SemigroupEstimates} and~\ref{SemigroupEstimatesHard}, yielding
\begin{align*}
\lVert e^{tA}\mathbb{P}(u_1\cdot \nabla)v_2\rVert_{L^\infty_H L^p_z}
&= \lVert e^{tA}\mathbb{P}\nabla \cdot (u_1\otimes v_2)\rVert_{L^\infty_H L^p_z}
\\
&\le \lVert e^{tA}\mathbb{P}\nabla_H \cdot (v_1\otimes v_2)\rVert_{L^\infty_H L^p_z}
+ \lVert e^{tA}\mathbb{P}\partial_z (w_1 v_2)\rVert_{L^\infty_H L^p_z}
\\
&\le C t^{-1/2}\left(
\lVert v_1\otimes v_2\rVert_{L^\infty_H L^p_z}+\lVert w_1 v_2\rVert_{L^\infty_H L^p_z}
\right)
\\
&\le C t^{-1/2}\lVert \nabla v_1\rVert_{L^\infty_H L^p_z}\lVert v_2\rVert_{L^\infty_H L^p_z},
\end{align*}
%
and estimate (iii) is obtained analogously via
%
\begin{align*}
\lVert \nabla e^{tA}\mathbb{P}(u_1\cdot \nabla)v_2\rVert_{L^\infty_H L^p_z}
\le C t^{-1}\left(
\lVert v_1\otimes v_2\rVert_{L^\infty_H L^p_z}+\lVert w_1 v_2\rVert_{L^\infty_H L^p_z}
\right)
\le C t^{-1}\lVert \nabla v_1\rVert_{L^\infty_H L^p_z}\lVert v_2\rVert_{L^\infty_H L^p_z}.
\end{align*}
To prove (iv) we observe that $w_i=0$ on $\Gamma_u\cup \Gamma_b$ implies that
%
\[
\mathbb{P} \partial_z (w_1 v_2)
=\partial_z (w_1 v_2)
=-(\text{div}_H v_1)v_2+w_1\partial_z v_2
\]
%
and the right-hand side is further estimated via
%
\[
\lVert (\text{div}_H v_1)v_2\rVert_{L^\infty_H L^p_z}
\le C\lVert \nabla v_1\rVert_{L^\infty_H L^p_z} \lVert v_2\rVert_{L^\infty(\Omega)}
\le C \lVert \nabla v_1\rVert_{L^\infty_H L^p_z} \lVert \nabla v_2 \rVert_{L^\infty_H L^p_z},
\]
%
and
%
\[
\lVert w_1\partial_z v_2\rVert_{L^\infty_H L^p_z}
\le \lVert w_1 \rVert_{L^\infty(\Omega)}\lVert \partial_z v_2\rVert_{L^\infty_H L^p_z}
\le C\lVert \nabla v_1\rVert_{L^\infty_H L^p_z}\lVert \nabla v_2\rVert_{L^\infty_H L^p_z}.
\]
%
Applying Claim~\ref{SemigroupEstimates} then yields that for $\{i,j\}=\{1,2\}$ we have
\begin{align*}
\lVert e^{tA}\mathbb{P} \nabla_H \cdot (v_1\otimes v_2)\rVert_{L^\infty_H L^p_z}
\le C t^{-1/2}\lVert v_1\otimes v_2\rVert_{L^\infty_H L^p_z}
\le C t^{-1/2} \lVert \nabla v_i\rVert_{L^\infty_H L^p_z} \lVert v_j\rVert_{L^\infty_H L^p_z},
\end{align*}
%
as well as
%
\begin{align*}
\lVert e^{tA}\mathbb{P} \partial_z (w_1 v_2)\rVert_{L^\infty_H L^p_z}
&\le C \lVert \nabla v_1\rVert_{L^\infty_H L^p_z} \lVert \nabla v_2\rVert_{L^\infty_H L^p_z}
\end{align*}
%
which implies (iv) and completes the proof.
\end{proof}
It has been proven in \cite{GGHHK17} that the operator $A_{p,\os}$ possesses maximal $L^q$-regularity. In \cite{GigaGriesHieberHusseinKashiwabara2017} the authors applied this to develop a solution theory for initial data
%
\[
a\in X_{\gamma}:=(L^p_{\overline{\sigma}}(\Omega),D(A_p))_{1-1/q,q}\subset B_{pq}^{2-2/q}(\Omega)^2\cap L^p_{\overline{\sigma}}(\Omega)
\]
%
where $p,q\in (1,\infty)$ satisfy $1/p + 1/q \le 1$. In particular, one has the following result.
\begin{lemma}\label{SmoothData}
Let $a\in X_\gamma$. Then there exists a unique strong solution to the primitive equations \eqref{eq:PrimitiveEquations} with boundary conditions \eqref{eq:bc} satisfying
%
\[
v\in C([0,\infty);X_\gamma).
\]
%
\end{lemma}
This enables a key step in the proof of our main result as it guarantees the existence of smooth reference solutions $v_{\text{ref}}$ to the primitive equations given sufficiently smooth reference data $a_{\text{ref}}$.
In order to construct $v$ as a solution to problem \eqref{eq:PrimitiveEquations} with initial data $a$ we construct $V:=v-v_{\text{ref}}$ by an iterative method using initial data $a_0:=a-a_{\text{ref}}$.
Before we do so, we establish an auxiliary lemma.
\begin{lemma}\label{RecursiveLemma}
Let $(a_n)_{n\in\mathbb{N}}$ be a sequence of positive real numbers such that
%
\[
a_{m+1}\le a_0+c_1 a_m^2+c_2 a_m \quad \text{ for all } m\in \mathbb{N}
\]
%
and constants $c_1>0$ and $c_2\in (0,1)$ such that $4c_1 a_0<(1-c_2)^2$. Then $a_m<\frac{2}{1-c_2}a_0$ for all $m\in\mathbb{N}$.
\end{lemma}
\begin{proof}
Let $x_0$ be the smallest solution to the equation $x=a_0+c_1 x^2+c_2x$. Then
\[
0<x_0=\frac{(1-c_2)-\sqrt{(1-c_2)^2-4c_1a_0}}{2c_1}
=\frac{1}{2c_1}\frac{4c_1 a_0}{(1-c_2)+\sqrt{(1-c_2)^2-4c_1 a_0}}
<\frac{2}{1-c_2}a_0,
\]
%
and since $p(x)=a_0+c_1 x^2+c_2 x$ is an increasing function on $[0,\infty)$ it follows that $p(x)\le x_0$ for $x\in[0,x_0]$. The condition $c_2\in(0,1)$ further yields
%
\[
(1-c_2)+\sqrt{(1-c_2)^2-4c_1a_0}<2
\]
%
from which it follows that $a_0<x_0$; the claim then follows by induction, since $a_m\le x_0$ implies $a_{m+1}\le p(a_m)\le p(x_0)=x_0<\frac{2}{1-c_2}a_0$.
\end{proof}
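The lemma admits a quick numerical sanity check. The following sketch (the values of $a_0$, $c_1$, $c_2$ are our own test choices, not taken from the text) iterates the worst case, in which the recursive inequality holds with equality, and verifies the claimed uniform bound:

```python
import math

# Numerical sanity check of the recursion bound (a0, c1, c2 are our own
# test choices satisfying the hypothesis 4*c1*a0 < (1 - c2)**2).
a0, c1, c2 = 0.05, 1.0, 0.5
assert 4 * c1 * a0 < (1 - c2) ** 2

bound = 2 * a0 / (1 - c2)        # claimed uniform bound: a_m < 2*a0/(1-c2)
# smallest fixed point of x = a0 + c1*x^2 + c2*x, as in the proof
x0 = ((1 - c2) - math.sqrt((1 - c2) ** 2 - 4 * c1 * a0)) / (2 * c1)

a = a0
for m in range(1000):
    a = a0 + c1 * a ** 2 + c2 * a  # worst case: the inequality with equality
    assert a < bound

print(a, x0, bound)  # the iterates converge to the fixed point x0 < bound
```

The iterates increase monotonically to the fixed point $x_0\approx 0.138$, which indeed stays below the bound $2a_0/(1-c_2)=0.2$.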
We now prove our main result.
\begin{proof}[Proof of Theorem~\ref{MainTheorem}]
\textbf{Step 1:} \textit{Decomposition of data.}
Given an initial value $a\in X_\os$ we will split it into a smooth part $a_{\text{ref}}$ and a small rough part $a_0$, where $a=a_{\text{ref}}+a_0$, as follows:
Since $A_\os$ is densely defined on $X_\os$, we can take $a_{\text{ref}}\in D(A_\os)$ such that
$a_0:=a-a_{\text{ref}}$ is arbitrarily small in $X_\os$.
Now let $q\in (1,\infty)$ be such that $1/q+1/p\le 1$ and $2/q+3/p<1$. The latter condition on $q$ then yields the embedding $X_\gamma \hookrightarrow C^1(\overline{\Omega})^2$.
Due to $D(A_\os)\subset D(A_{p,\os})\subset X_{\gamma}$ it follows from Lemma \ref{SmoothData} that taking $a_{\text{ref}}$ as initial data of the primitive equations, there exists a function $v_{\text{ref}}\in C([0,\infty);X_\gamma)$ solving the primitive equations with initial data $v_{\text{ref}}(0)=a_{\text{ref}}$.
\textbf{Step 2:} \textit{Estimates for the construction of a local solution.}
We will show that there exists a constant $C_0>0$ such that if
$a_0 \in X_\os$ satisfies $\lVert a_0\rVert_{L^\infty_H L^p_z}<C_0$ then there exists a time $T>0$ and a unique function
%
\[
V\in\mathcal{S}(T):=\{
V\in C([0,T];X_\os):\lVert \nabla V(t)\rVert_{L^\infty_H L^p_z}
= o(t^{-1/2}) \text{ as } t\to 0+
\},
\]
%
where
%
\[
\lVert V\rVert_{\mathcal{S}(T)}
=\max\left\{
\sup_{0<t<T}\lVert V(t)\rVert_{L^\infty_H L^p_z},
\sup_{0<t<T}t^{1/2}\lVert \nabla V(t)\rVert_{L^\infty_H L^p_z}
\right\}
\]
%
such that $v=v_{\text{ref}}+V$ solves problem \eqref{eq:PrimitiveEquations} on
$(0,T)$ with initial value $v(0)=a$. In order to construct $V$ we define the iterative sequence of functions $(V_m)_{m\in\mathbb{N}}$ via
%
\begin{align}
V_0(t)=e^{tA}a_0, \quad V_{m+1}(t)=e^{tA}a_0+\int_0^t e^{(t-s)A}F_m(s)\,ds
\end{align}
%
where
%
\[
F_m:=-\mathbb{P} \left(
(U_m\cdot\nabla)V_m
+(U_m\cdot\nabla)v_{\text{ref}}
+(u_{\text{ref}}\cdot\nabla)V_m
\right)
\]
%
and $U_m=(V_m,W_m)$, $u_{\text{ref}}=(v_{\text{ref}},w_{\text{ref}})$ with the vertical component $w$ given by the horizontal component $v$ via the relation \eqref{eq:WRelation}.
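The structure of this Duhamel iteration can be illustrated on a drastically simplified scalar caricature. The sketch below (our own toy ODE $u'=-u+u^2$, not the primitive equations; all names are illustrative) runs the analogous mild-formulation iteration and checks convergence to the known closed-form solution for small data:

```python
import math

# Toy illustration (our own scalar caricature, not the primitive equations):
# solve u' = -u + u^2, u(0) = a, through the mild-formulation iteration
#   u_{m+1}(t) = e^{-t} a + int_0^t e^{-(t-s)} u_m(s)^2 ds,
# which mirrors V_{m+1}(t) = e^{tA} a_0 + int_0^t e^{(t-s)A} F_m(s) ds.
a, T, n = 0.1, 1.0, 200               # small datum, horizon, grid resolution
ts = [T * k / n for k in range(n + 1)]

def iterate(u):
    new = []
    for i, t in enumerate(ts):
        integral = 0.0
        for j in range(i):             # left Riemann sum for the Duhamel term
            s, ds = ts[j], T / n
            integral += math.exp(-(t - s)) * u[j] ** 2 * ds
        new.append(math.exp(-t) * a + integral)
    return new

u = [math.exp(-t) * a for t in ts]     # u_0(t) = e^{-t} a
for _ in range(20):
    u = iterate(u)

# closed-form solution of this Bernoulli equation, for comparison
exact = [a * math.exp(-t) / (1 - a + a * math.exp(-t)) for t in ts]
err = max(abs(x - y) for x, y in zip(u, exact))
print(err)   # small: for small a the iterates converge to the solution
```

For small initial data the quadratic term acts as a contraction and the fixed-point iteration converges, which is the mechanism exploited in Step 2 above.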
We will now estimate this sequence in $\mathcal{S}(T)$ for some value $T>0$ to be fixed later on.
Since $\mathbb{P} a_0=a_0$ we have
%
\[
\lVert V_0\rVert_{\mathcal{S}(T)}\le C \lVert a_0\rVert_{L^\infty_H L^p_z},
\quad T\in (0,\infty)
\]
%
by Claim~\ref{SemigroupEstimates}.
For $m\ge 1$ we will first consider the gradient estimates. We have already estimated the term $\nabla e^{tA}a_0$, whereas for the convolution integrals we have
%
\begin{align*}
\left\lVert \int_0^{t/2}\nabla e^{(t-s)A}\mathbb{P} \left(
(U_m(s)\cdot \nabla) V_m(s)
\right)\,ds \right\rVert_{L^\infty_H L^p_z}
&\le C \left(\int_0^{t/2}(t-s)^{-1}s^{-1/2}\,ds\right) K_m(t) H_m(t)
\\
&= C t^{-1/2} K_m(t) H_m(t)
\end{align*}
%
by Lemma~\ref{NonlinearEstimates} (iii) where
%
\begin{align*}
K_m(t):=\sup_{0<s<t} s^{1/2}\lVert \nabla V_m(s)\rVert_{L^\infty_H L^p_z},
\quad
H_m(t):=\sup_{0<s<t} \lVert V_m(s)\rVert_{L^\infty_H L^p_z}
\end{align*}
%
and via Lemma~\ref{NonlinearEstimates} (ii) we obtain
%
\begin{align*}
\left\lVert \int_{t/2}^t\nabla e^{(t-s)A}\mathbb{P} \left(
(U_m(s)\cdot \nabla) V_m(s)
\right)\,ds\right\rVert_{L^\infty_H L^p_z}
&\le C \left(
\int_{t/2}^t(t-s)^{-1/2}s^{-1}\,ds
\right)K_m(t)^2
\\
&\le C t^{-1/2} K_m(t)^2.
\end{align*}
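The two singular integrals above scale exactly like $t^{-1/2}$, as the substitution $s=tu$ shows; the following numerical sketch (our own illustration, not part of the argument) confirms that $t^{1/2}$ times each integral is independent of $t$:

```python
import math

# Check (our own numerical illustration) that both singular integrals scale
# like t^{-1/2}: substituting s = t*u gives
#   int_0^{t/2} (t-s)^{-1} s^{-1/2} ds = t^{-1/2} int_0^{1/2} (1-u)^{-1} u^{-1/2} du,
#   int_{t/2}^t (t-s)^{-1/2} s^{-1} ds = t^{-1/2} int_{1/2}^1 (1-u)^{-1/2} u^{-1} du.

def quad(f, lo, hi, n=100000):     # midpoint rule (tolerates endpoint singularities)
    h = (hi - lo) / n
    return h * sum(f(lo + (k + 0.5) * h) for k in range(n))

def I1(t):
    return quad(lambda s: s ** -0.5 / (t - s), 0.0, t / 2)

def I2(t):
    return quad(lambda s: (t - s) ** -0.5 / s, t / 2, t)

for t in (0.5, 1.0, 2.0, 4.0):
    print(t, I1(t) * t ** 0.5, I2(t) * t ** 0.5)  # both products are t-independent
```

Both rescaled integrals reproduce the same constant $\int_0^{1/2}(1-u)^{-1}u^{-1/2}\,du=\int_{1/2}^1(1-u)^{-1/2}u^{-1}\,du=\log(3+2\sqrt{2})\approx 1.76$ for every $t$.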
%
Finally applying Lemma~\ref{NonlinearEstimates} (ii) to the two remaining mixed terms yields
%
\begin{align*}
\left\lVert \int_0^t\nabla e^{(t-s)A}\mathbb{P} \left(
(U_m(s)\cdot \nabla) v_{\text{ref}}(s)
\right)\,ds\right\rVert_{L^\infty_H L^p_z}
&\le C \left(
\int_0^t (t-s)^{-1/2}s^{-1/2}\,ds
\right) \sup_{0<s<t}\lVert \nabla v_{\text{ref}}(s)\rVert_{L^\infty_H L^p_z} K_m(t)
\\
&=C \sup_{0<s<t}\lVert \nabla v_{\text{ref}}(s)\rVert_{L^\infty_H L^p_z} K_m(t),
\\
\left\lVert \int_0^t\nabla e^{(t-s)A}\mathbb{P} \left(
(u_{\text{ref}}(s)\cdot \nabla) V_m(s)
\right)\,ds\right\rVert_{L^\infty_H L^p_z}
&\le C \sup_{0<s<t}\lVert \nabla v_{\text{ref}}(s)\rVert_{L^\infty_H L^p_z} K_m(t).
\end{align*}
%
We set $R:=\sup_{0\le t\le T_0} \lVert \nabla v_{\text{ref}}(t)\rVert_{L^\infty_H L^p_z}$, where $T_0>0$ will be fixed below, and note that $0<R<\infty$ by Lemma~\ref{SmoothData}, since $v_{\text{ref}}\in C([0,\infty);X_{\gamma})$ and $2/q+3/p<1$ implies that $X_{\gamma}\subset B^{2-2/q}_{pq}(\Omega)^2\hookrightarrow C^1(\overline{\Omega})^2$ via embedding theory, cf. \cite[Section 3.3.1]{Triebel}.
Taking these estimates together yields
%
\begin{align}\label{eq:EstimateWithGradient}
t^{1/2}\lVert \nabla V_{m+1}(t)\rVert_{L^\infty_H L^p_z}
&\le C_1\biggl( \lVert a_0\rVert_{L^\infty_H L^p_z}
+ K_m(t) H_m(t)
+ K_m(t)^2
+ R t^{1/2} K_m(t)\biggr).
\end{align}
%
To estimate $\lVert V_{m+1}(t)\rVert_{L^\infty_H L^p_z}$ we apply Lemma~\ref{NonlinearEstimates} (i) to obtain
%
\begin{align*}
\left\lVert \int_0^t e^{(t-s)A}\mathbb{P} \left(
(U_m(s)\cdot\nabla)V_m(s)
\right)\,ds \right\rVert_{L^\infty_H L^p_z}
&\le C \left(\int_0^t (t-s)^{-1/2}s^{-1/2}\,ds\right) K_m(t) H_m(t)
\\
&= C K_m(t) H_m(t)
\end{align*}
%
whereas for the mixed terms Lemma~\ref{NonlinearEstimates} (iv) yields
%
\begin{align*}
\lVert e^{(t-s)A}\mathbb{P} (U_m(s)\cdot\nabla)v_{\text{ref}}(s)\rVert_{L^\infty_H L^p_z}
\le &C \biggl(
(t-s)^{-1/2}\lVert \nabla v_{\text{ref}}(s)\rVert_{L^\infty_H L^p_z}
\lVert V_m(s)\rVert_{L^\infty_H L^p_z}
\\&+\lVert \nabla V_m(s)\rVert_{L^\infty_H L^p_z} \lVert \nabla v_{\text{ref}}(s)\rVert_{L^\infty_H L^p_z}
\biggr)
\end{align*}
%
and therefore
%
\begin{align*}
\left\lVert
\int_0^t e^{(t-s)A}\mathbb{P} ( (U_m(s)\cdot\nabla)v_{\text{ref}}(s))\,ds
\right\rVert_{L^\infty_H L^p_z}
&\le C \left(
\int_0^t (t-s)^{-1/2}\,ds
\right) R H_m(t)
+ C \left( \int_0^t s^{-1/2}\,ds\right) R K_m(t)
\\&= C R t^{1/2} (H_m(t)+K_m(t)),
\end{align*}
%
and the other mixed term can be treated analogously due to the symmetry of the right-hand side in (iv).
Taking these estimates together yields
%
\begin{align}\label{eq:EstimateWithoutGradient}
\lVert V_{m+1}(t)\rVert_{L^\infty_H L^p_z}
&\le C_1 \biggl(\lVert a_0\rVert_{L^\infty_H L^p_z}
+ K_m(t) H_m(t)
+ t^{1/2}H_m(t)
+ t^{1/2}K_m(t)\biggr).
\end{align}
%
Since the right-hand sides of \eqref{eq:EstimateWithGradient} and \eqref{eq:EstimateWithoutGradient} are increasing functions we obtain for $t>0$ that
%
\begin{align}\label{Recursive}
\begin{split}
K_{m+1}(t)&\le C_1\biggl(
\lVert a_0\rVert_{L^\infty_H L^p_z}
+ K_m(t)H_m(t)
+ K_m(t)^2
+ R t^{1/2}K_m(t)
\biggr),
\\
H_{m+1}(t)&\le C_1\biggl(
\lVert a_0\rVert_{L^\infty_H L^p_z}
+ K_m(t)H_m(t)
+ R t^{1/2}H_m(t)
+ R t^{1/2}K_m(t)\biggr).
\end{split}
\end{align}
%
Now let $T\in (0,T_0)$ where $T_0>0$ is chosen in such a way that
%
\[
8C_1 R T_0^{1/2}<1.
\]
%
Then for all $0<t\le T<T_0$ we have
%
\[
\lVert V_{m+1}\rVert_{\mathcal{S}(t)}
\le C_1\lVert a_0\rVert_{L^\infty_H L^p_z}
+2C_1 \lVert V_{m}\rVert_{\mathcal{S}(t)}^2
+\frac{1}{4}\lVert V_{m}\rVert_{\mathcal{S}(t)}.
\]
%
By Lemma~\ref{RecursiveLemma} it follows that if $8C_1^2\lVert a_0\rVert_{L^\infty_H L^p_z}<(1-1/4)^2$, then for all $m\in \mathbb{N}$ we have
%
\begin{align}\label{eq: Vm uniformly bounded}
\lVert V_{m}\rVert_{\mathcal{S}(t)}
\le \frac{8}{3}C_1\lVert a_0\rVert_{L^\infty_H L^p_z},
\quad
t\in (0,T].
\end{align}
%
The property $\lim_{t \to 0+}t^{1/2}\lVert \nabla V_m(t)\rVert_{L^\infty_H L^p_z}=0$ is then easily obtained via induction and Claim~\ref{SemigroupEstimates} (e).
\textbf{Step 3:} \textit{Convergence.}
We now show that $(V_m)_{m\in\mathbb{N}}$ is a Cauchy sequence in $\mathcal{S}(T)$ if $\lVert a_0\rVert_{L^\infty_H L^p_z}$ is sufficiently small. For this purpose we consider the new sequence
\[
\tilde{V}_m:=V_{m+1}-V_m,\quad m\ge 0.
\]
%
Using the previous estimates we already know that $\lVert \tilde{V}_0\rVert_{\mathcal{S}(T)}<\infty$.
To estimate this sequence further we use
%
\[
F_m-F_{m-1}=-\mathbb{P}\left(
\left( \tilde{U}_{m-1}\cdot \nabla\right)V_m
+\left( U_{m-1}\cdot \nabla\right)\tilde{V}_{m-1}
+\left( \tilde{U}_{m-1}\cdot \nabla\right)v_{\text{ref}}
+\left( u_{\text{ref}}\cdot \nabla\right)\tilde{V}_{m-1}
\right)
\]
%
and proceed as above to obtain
%
\begin{align}
\begin{split}
t^{1/2}\lVert \nabla \tilde{V}_m(t)\rVert_{L^\infty_H L^p_z}
\le C_2\biggl(&
2 H_m(t) \tilde{K}_{m-1}(t)
+2 \tilde{H}_{m-1}(t) K_{m-1}(t)
+K_m(t)\tilde{K}_{m-1}(t)
\\&+K_{m-1}(t)\tilde{K}_{m-1}(t)
+2 R t^{1/2} \tilde{K}_{m-1}(t)
\biggr)
\end{split}
\end{align}
%
as well as
%
\begin{align}
\lVert \tilde{V}_m(t)\rVert_{L^\infty_H L^p_z}
&\le C_2 \biggl(
\tilde{K}_{m-1}(t)\left[H_m(t)+H_{m-1}(t)\right]
+ Rt^{1/2}\left[\tilde{K}_{m-1}(t)+\tilde{H}_{m-1}(t)\right]
\biggr),
\end{align}
%
where
%
\[
\tilde{K}_m(t):=\sup_{0<s<t}s^{1/2}\lVert \nabla \tilde{V}_m(s)\rVert_{L^\infty_H L^p_z},
\quad
\tilde{H}_m(t):=\sup_{0<s<t}\lVert \tilde{V}_m(s)\rVert_{L^\infty_H L^p_z}.
\]
%
By \eqref{eq: Vm uniformly bounded} it follows that if
%
\[
\max\{2 R T_0^{1/2},16 C_1\lVert a_0\rVert_{L^\infty_H L^p_z}\}<\frac{1}{4C_2},
\]
%
then for $m\ge 1$ and $0<t\le T<T_0$ we have
%
\[
\lVert \tilde{V}_m\rVert_{\mathcal{S}(t)}
\le C_2\biggl(16C_1 \lVert a_0\rVert_{L^\infty_H L^p_z}+2Rt^{1/2}\biggr)
\lVert \tilde{V}_{m-1}\rVert_{\mathcal{S}(t)}
<\frac{1}{2}\lVert \tilde{V}_{m-1}\rVert_{\mathcal{S}(t)}.
\]
%
Therefore, since $\mathcal{S}(T)$ is a Banach space, $(V_m)_{m\in\mathbb{N}}$ converges in $\mathcal{S}(T)$. We denote the limit by $V$ and see that it satisfies
%
\begin{align}\label{eq: mild solution V}
V(t)=e^{tA}a_0-\int_0^t e^{(t-s)A}\mathbb{P}\biggl(
(U(s)\cdot\nabla)V(s)
+(U(s)\cdot\nabla)v_{\text{ref}}(s)
+(u_{\text{ref}}(s)\cdot \nabla)V(s)
\biggr)\,ds
\end{align}
%
for $t\in (0,T)$ and thus $v:=V+v_{\text{ref}}$ is a solution to the primitive equations \eqref{eq:PrimitiveEquations}.
\textbf{Step 4:} \textit{Extending to a global solution.}
Using $V\in \mathcal{S}(T)$, the embedding $L^\infty_H L^p_z(\Omega) \hookrightarrow L^p(\Omega)$, as well as the semigroup estimates
%
\begin{align*}
t^{\vartheta}\lVert e^{tA}\mathbb{P} f\rVert_{D((-A_{p,\os})^{\vartheta})}
\le C \lVert f\rVert_{L^p(\Omega)}, \quad
t^{1/2}\lVert e^{tA}\mathbb{P} \nabla\cdot f\rVert_{L^p(\Omega)}
\le C \lVert f\rVert_{L^p(\Omega)},
\quad t>0,
\quad \vartheta\in [0,1]
\end{align*}
%
compare \cite[Lemma 4.6]{HieberKashiwabara2015} and \cite[Theorem 3.7]{GGHHK17}, one easily obtains that $V(t_0)\in D((-A_{p,\os})^{\vartheta})$ for $t_0>0$, and thus $v(t_0)\in D((-A_{p,\os})^{1/p})$ as well, so $v$ can be extended to a global solution that is strong on $(t_0,\infty)$.
\textbf{Step 5:} \textit{Uniqueness.}
To see that $v$ is a unique solution and thus strong on $(0,t_0)$ as well, we consider $v^{(1)}$ and $v^{(2)}$ both to be solutions in the sense of Theorem~\ref{MainTheorem} with initial value $a$ and set
%
\[
t^*:=\inf\{t\in[0,\infty):v^{(1)}(t)\neq v^{(2)}(t)\}.
\]
%
Assume that $t^*\in (0,\infty)$. Then using continuity of the solutions
%
\[
a^*:=v^{(1)}(t^*)=v^{(2)}(t^*)=a^*_{\text{ref}}+a_0^*
\]
%
where $a_0^*\in X_\os$ is sufficiently small and $a^*_{\text{ref}}\in D(A_\os)$. Let $v^*_{\text{ref}}$ be the reference solution to the initial data $a^*_{\text{ref}}$ and
%
\[
V^{(i)}(t):=v^{(i)}(t^*+t)-v^*_{\text{ref}}(t),
\quad i=1,2.
\]
%
Then $V^{(1)},V^{(2)}\in \mathcal{S}(T^*)$ both satisfy the condition \eqref{eq: mild solution V} for arbitrary $t\in (0,T^*)$, $T^*\in (0,\infty)$. We set
$\tilde{V}:=V^{(1)}-V^{(2)}$ and observe that proceeding analogously as before one obtains
%
\begin{align*}
\tilde{H}(t)
&\le C_3\biggl(
t^{1/2}(\tilde{H}(t)
+\tilde{K}(t))+H^{(1)}(t) \tilde{K}(t)
+K^{(2)}(t) \tilde{H}(t)
\biggr),
\\
\tilde{K}(t)
&\le C_3\biggl(
2t^{1/2}\tilde{K}(t)
+H^{(1)}(t)\tilde{K}(t)
+K^{(2)}(t)\tilde{H}(t)
+K^{(1)}(t)\tilde{K}(t)
+K^{(2)}(t)\tilde{K}(t)
\biggr),
\end{align*}
%
where $\tilde{H}, H^{(i)}, \tilde{K}, K^{(i)}$ are defined analogously to above. This yields
\begin{align}\label{eq:UniquenessEstimate}
\lVert \tilde{V}\rVert_{\mathcal{S}(t)}
\le C_3 \left(
t^{1/2}+H^{(1)}(t)+H^{(2)}(t)+K^{(1)}(t)+K^{(2)}(t)
\right)
\lVert \tilde{V}\rVert_{\mathcal{S}(t)},
\quad t\in (0,T^*).
\end{align}
%
By taking $T^*>0$ small the terms $(T^*)^{1/2}$ and $K^{(1)}(T^*),K^{(2)}(T^*)$ can be made arbitrarily small due to $\lVert \nabla V^{(i)}(t)\rVert_{L^\infty_H L^p_z}=o(t^{-1/2})$, which in the case $t^*=0$ follows from the regularity of $v$, and in the case $t^*>0$ from $\nabla v(t^*)\in L^\infty_H L^p_z(\Omega)^2$.
As for $H^{(1)}$ and $H^{(2)}$, using the same arguments that derived \eqref{eq:EstimateWithoutGradient} one obtains for $t\in (0,T^*)$ that
%
\begin{align}\label{eq:Second Uniqueness Estimate}
\begin{split}
H^{(i)}(t)
& \le C_1 \left(
\lVert a^*_0\rVert_{L^\infty_H L^p_z}
+ K^{(i)}(t) H^{(i)}(t)
+ R^* t^{1/2} H^{(i)}(t)
+ R^* t^{1/2} K^{(i)}(t)
\right),
\end{split}
\end{align}
%
where $R^*:=\sup_{0\le t\le T^*} \lVert \nabla v^*_{\text{ref}}(t)\rVert_{L^\infty_H L^p_z}$.
Now, we choose $T\in (0,T^*)$ so small that
%
\begin{align*}
K^{(i)}(T) H^{(i)}(T^*)
+ R^* T^{1/2} H^{(i)}(T^*)
+ R^* T^{1/2} K^{(i)}(T^*)
\le \lVert a^*_0\rVert_{L^\infty_H L^p_z}.
\end{align*}
%
Now, taking $\lVert a^*_0\rVert_{L^\infty_H L^p_z}$ to be sufficiently small and using that the constants $C_i>0$, $i=1,2,3$, are independent of $\lVert a^*_0\rVert_{L^\infty_H L^p_z}$, we obtain that the pre-factor in \eqref{eq:UniquenessEstimate} is smaller than $1$.
Hence, it follows that $\lVert \tilde{V}\rVert_{\mathcal{S}(t)}=0$ for $t\in (0,T)$ and thus $v^{(1)}=v^{(2)}$ on $[0,t^*+T)$ which is a contradiction.
\textbf{Step 6:} \textit{Additional regularity.}
By \cite[Theorem 6.1]{HieberKashiwabara2015} we thus have
%
\[
v\in C^1((0,\infty);L^p_{\overline{\sigma}}(\Omega))\cap C((0,\infty);W^{2,p}(\Omega))^2,
\quad
\pi\in C((0,\infty);W^{1,p}(G)).
\]
%
The additional regularity $v\in C([0,\infty);X_\os)$ follows from the strong continuity of the semigroup on $X_\os$.
For the pressure we have $\pi(t) \in W^{1,p}(G)\hookrightarrow C^{0,\alpha}([0,1]^2)$ for $\alpha \in (0,1-2/p)$. To obtain the regularity of $\nabla_H \pi$, observe that
%
\[
\nabla_H \pi
=-Bv
-(1-\mathbb{P})(u\cdot \nabla)v
=-Bv
-(1-Q)\overline{(u\cdot \nabla)v}
\]
%
where we used that $(1-\mathbb{P})f=(1-Q)\overline{f}$. In the proof of Claim~\ref{SemigroupEstimates} we have already proven that $Bv(t)\in C^{0,\alpha}([0,1]^2)$ for $\alpha \in (0,1-3/p)$ if $v(t)\in W^{2,p}(\Omega)^2$. Likewise, since $1-Q$ is continuous on $C^{0,\alpha}_{\text{per}}([0,1]^2)^2$ and $v\in C((0,\infty);W^{2,p}(\Omega))^2$, we obtain that the remaining terms belong to $C((0,\infty);C^{0,\alpha}([0,1]^2))^2$.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{PerturbationTheorem}]
Here, we make use of the fact that the relevant estimates in Claim~\ref{SemigroupEstimates} and Claim~\ref{SemigroupEstimatesHard} can also be applied in $L^{\infty}_HL^p_z(\Omega)^2$, compare Remark~\ref{rem:main}~$(c)$.
Let $a=a_1+a_2$ be as in Theorem~\ref{PerturbationTheorem}. Next, we introduce a decomposition setting $a_0:=a_2+(a_1-a_{\text{ref}})$ where
\begin{align*}
a_{\text{ref}} \in D(A_\os),
\quad
a_1 \in X_\os
\quad \hbox{and} \quad
a_2\in L^{\infty}_HL^p_z(\Omega)^2\cap L^p_{\overline{\sigma}}(\Omega),
\end{align*}
where $a_{\text{ref}}$ is such that $a_0$ satisfies the smallness condition of Theorem~\ref{MainTheorem}.
Then the same iteration scheme as in the previous proof can be used to construct $V$ for the initial value $a_0$ and, in turn, $v$ to the initial value $a$.
The property
%
\[
v\in C([0,\infty);L^p_{\overline{\sigma}}(\Omega))
\cap L^{\infty}((0,T); L^\infty_H L^p_z(\Omega))^2
\]
%
follows from the boundedness and exponential stability of the semigroup on $L^p_{\overline{\sigma}}(\Omega)$ and $L^\infty_H L^p_z(\Omega)^2\cap L^p_{\overline{\sigma}}(\Omega)$, as well as the strong continuity on $L^p_{\overline{\sigma}}(\Omega)$.
Since the solution regularizes at $t_0>0$, compare Step 4 in the previous proof, we further obtain
$v\in C((0,\infty);X_\os)$ from the strong continuity on $X_\os$.
The condition
%
\begin{align}\label{eq: lim sup gradient}
\limsup_{t\to 0+}t^{1/2}\lVert \nabla v \rVert_{L^\infty_H L^p_z(\Omega)}
\le C \lVert a_2\rVert_{L^\infty_H L^p_z},
\end{align}
%
is verified as follows.
Since Claim~\ref{SemigroupEstimates} yields
%
\[
\limsup_{t\to 0+}t^{1/2}\lVert \nabla e^{tA}(a_1-a_{\text{ref}})\rVert_{L^\infty_H L^p_z}
=0, \quad
t^{1/2}\lVert \nabla e^{tA}a_2\rVert_{L^\infty_H L^p_z}
\le C_4 \lVert a_2\rVert_{L^\infty_H L^p_z},
\quad t>0,
\]
%
one obtains
%
\[
\limsup_{t\to 0+} t^{1/2}\lVert \nabla V_0(t)\rVert_{L^\infty_H L^p_z}
\le C_4 \lVert a_2\rVert_{L^\infty_H L^p_z}.
\]
%
We now prove
$\limsup_{t\to 0+}t^{1/2}\lVert \nabla V_{m}(t)\rVert_{L^\infty_H L^p_z}
\le 2 C_4 \lVert a_2\rVert_{L^\infty_H L^p_z}$ by induction. Assuming the claim holds for $m\in \mathbb{N}$ we obtain
%
\begin{align*}
\limsup_{t\to 0+}t^{1/2}\lVert \nabla V_{m+1}(t)\rVert_{L^\infty_H L^p_z}
\le \left(
1
+2C_1 \lVert a_0\rVert_{L^\infty_H L^p_z}
+4C_4 \lVert a_2\rVert_{L^\infty_H L^p_z}
\right)
C_4 \lVert a_2\rVert_{L^\infty_H L^p_z}
\end{align*}
%
in the same manner as \eqref{eq:EstimateWithGradient}.
Assuming that $\lVert a_0\rVert_{L^\infty_H L^p_z}<\frac{1}{4C_1}$ and $\lVert a_2\rVert_{L^\infty_H L^p_z}<\frac{1}{8C_4}$ it follows that the claim holds for all $m\in\mathbb{N}$ and by taking the limit the same estimate holds for $V$. Using $v_{\text{ref}}\in C([0,\infty);C^1(\overline{\Omega})^2)$, we obtain that $v=V+v_{\text{ref}}$ satisfies \eqref{eq: lim sup gradient}.
To prove uniqueness we make the following modifications. If $v^{(1)}$ and $v^{(2)}$ are both solutions in the sense of Theorem~\ref{PerturbationTheorem}, we again define $t^*:=\inf\{t\in [0,\infty):v^{(1)}(t)\neq v^{(2)}(t)\}$.
In the case $t^*>0$ we have $a^*=v^{(1)}(t^*)=v^{(2)}(t^*)\in D((-A_{p,\os})^{\vartheta})$ for any $\vartheta \in[0,1]$, compare Step 4 of the proof of Theorem~\ref{MainTheorem}. Choosing $2\vartheta-3/p>0$ we have that $ D((-A_{p,\os})^{\vartheta})\hookrightarrow X_\os$ and thus we can decompose $a^*=a^*_{\text{ref}}+a^*_0$ as before and the same argument applies.
If we instead have $t^*=0$ we continue to use the decomposition $a=a_{\text{ref}}+a_0$ where $a_0=a_2+(a_1-a_{\text{ref}})$. In this case we have $\limsup_{t\to 0+} K^{(i)}(t)\le C \lVert a_2\rVert_{L^\infty_H L^p_z}$ for an absolute constant $C>0$ and thus the quantities on the right-hand side of \eqref{eq:UniquenessEstimate} can again be taken to be sufficiently small, where on the right-hand side of \eqref{eq:Second Uniqueness Estimate} one has $\lVert a_0\rVert_{L^\infty_H L^p_z}$ instead of $\lVert a^*_0\rVert_{L^\infty_H L^p_z}$, which again yields uniqueness.
This completes the proof.
\end{proof}
\section{Introduction}
Sachdev--Ye--Kitaev (SYK) model was proposed by Kitaev~\cite{Kitaev-talks} as a generalization of Sahdev--Ye model~\cite{Sachdev-Ye, Sachdev-1006} and first was extensively studied in~\cite{Polchinski, Maldacena-SYK, Kitaev, Jevicki-1, Jevicki-2}. Ever since it has received a great attention from the high energy and condensed matter physics community.
The success of SYK model is due to its remarkable properties. First, this model is exactly solvable in the large $N$ and IR limit. Second, in this limit the model acquires conformal symmetry and the effective action can be approximated by the Schwarzian one. Third, the leading correction to the out-of-time ordered four-point correlation function exponentially grows with time, with the exponent of this growth saturating the universal bound~\cite{MSS}. This behavior is very unusual; moreover, it coincides with the behavior of similar functions on a black hole background. Finally, SYK model is closely related to 2D dilaton gravity which describes excitations above the near horizon extremal black hole~\cite{Almheiri, Maldacena-JT, Jensen, Engelsoy}. Together these properties make SYK model an excellent toy model for many physical phenomena, including quantum chaos~\cite{Kitaev-talks, MSS}, information scrambling~\cite{Sekino, Susskind, Lashkari}, traversable wormholes~\cite{Maldacena-1704, Maldacena-1804, Maldacena-1807, Maldacena-1912} and strange metals~\cite{Hartnoll, Song, Sachdev}.
In this review we give a pedagogical introduction to SYK model and 2D dilaton gravity. We follow the original papers~\cite{Polchinski, Maldacena-SYK, Kitaev, Jevicki-1, Jevicki-2, MSS, Almheiri, Maldacena-JT, Jensen, Engelsoy} and try to be as specific as possible, i.e. we do our best to reveal every detail and loophole in the discussion. We believe this makes the discussion clear and self-consistent. Due to this reason we also expect the review to be suitable even for a reader who is not familiar with the phenomena under consideration.
The paper is organized as follows. In section~\ref{sec:chaos} we briefly discuss quantum chaos and scrambling, the phenomena that are related to the quantum black hole dynamics and motivate the study of SYK model and 2D dilaton gravity. In particular, we introduce out-of-time ordered correlation functions (OTOCs), which are the main tool for studying these phenomena. This section is relatively sketchy, because for brevity we postpone the discussion of specific examples to the following sections.
In sections~\ref{sec:basics} and~\ref{sec:treatment} we give a comprehensive review of SYK model. We broadly discuss large $N$ diagrammatics, emergence of conformal symmetry in the IR limit, effective and Schwarzian action, exact two-point and four-point functions. Some technical details are discussed in appendices. Also we briefly review recent results in the topic.
In section~\ref{sec:JT} we attempt to give an equally comprehensive review of 2D dilaton gravity (or Jackiw--Teitelboim gravity). We show that this theory describes excitations above the near horizon extremal black hole, explain that this theory effectively reduces to the one-dimensional theory with Schwarzian action, calculate four-point functions of the matter fields living in the corresponding space.
Finally, in section~\ref{sec:examples}, instead of a conclusion, we briefly review the most notable examples of chaotic behavior. Among them are SYK model and 2D dilaton gravity (we briefly recall the main properties of these models), SYK-like tensor models, the BTZ black hole, $CFT_2$ with large central charge and Hermitian matrix quantum field theory with quartic self-interaction.
\section{Motivation}
\label{sec:chaos}
In this section we discuss the main motivation for studying SYK model and 2D dilaton gravity, which is based on the connection to quantum chaos (subsection~\ref{sec:classical}) and scrambling (subsection~\ref{sec:scramblers}). It is believed that these phenomena are related to the black hole information paradox~\cite{Sekino, Susskind}, so they have received a lot of attention.
Here we qualitatively show that both of these phenomena rely on the exponential growth of OTOCs, which were first calculated in~\cite{Larkin} and popularized by~\cite{Almheiri-1304, Shenker-1306, MSS}. Therefore, systems with such a behavior of the correlators are of particular interest. SYK model and 2D dilaton gravity are exactly such type of systems. In section~\ref{sec:examples} we also briefly review other chaotic systems.
Note that this section may seem relatively sketchy, because we do not discuss the limits of applicability of the statements being formulated and do not provide any specific examples. Such examples will be broadly discussed in the following sections. In fact, part of the original motivation to study SYK model was exactly to find a convenient example for which the statements of this section can be verified in a controlled way~\cite{Kitaev-talks}.
\subsection{Quantum chaos}
\label{sec:classical}
In this subsection we discuss a putative connection between some specific correlation functions and classical chaos~\cite{Kitaev-talks,MSS}.
First of all, let us remind what the classical chaos is. Consider a classical system with the following equation of motion:
\beq \label{eq:chaos-1}
\dot{X}^i(t) = F^i\left[\mathbf{X}(t)\right], \quad i=1 \ldots N, \eeq
where $\mathbf{X}$ is a vector in the $N$-dimensional phase space, $\mathbf{F}$ is a smooth vector function and $\dot X^i \equiv d X^i/dt$. Let us introduce the norm on the phase space, $\| \cdot \|$, and expand the function $\mathbf{F}$ near a point $\mathbf{X}_0$:
\beq \label{eq:chaos-2}
\delta \dot{X}^i = A^i_j \delta X^j + B^i (\delta \mathbf{X}), \quad i=1 \ldots N, \eeq
where $\delta X^i \equiv X^i - X^i_0$, $A^i_j \equiv \left( \frac{\partial F^i}{\partial X^j} \right)_{\delta\mathbf{X} = \mathbf{0}}$ and $\mathbf{B}$ is an analytic function such that $\left\| \mathbf{B}(\delta \mathbf{X}) \right\| / \left\| \delta \mathbf{X} \right\| \rightarrow 0$ as $\left\| \delta \mathbf{X} \right\| \rightarrow 0$. The solution of the linearized equation (i.e. the equation with $\mathbf{B}$ omitted) is straightforward:
\beq \delta \mathbf{X}(t) = \sum_{j = 1}^N c_j \mathbf{h}_j e^{\lambda_j t}, \eeq
where $\lambda_j$ and $\mathbf{h}_j$ are eigenvalues and eigenvectors of the matrix $\mathbf{A}$ (for simplicity we assume that all eigenspaces are one-dimensional), $c_j$ are integration constants that correspond to the initial condition $\delta \mathbf{X}(t = 0) = \delta \mathbf{X}_0$. It is easy to see that for long evolution times but small $\delta \mathbf{X}_0$, such that the condition $\left\| \mathbf{B}(\delta \mathbf{X}) \right\| \ll \| \mathbf{A} \delta \mathbf{X} \|$ is always satisfied, the norm of the final deviation vector grows exponentially:
\beq \| \delta \mathbf{X} (t) \| \sim \| \delta \mathbf{X}_0 \| e^{\lambda_{max} t}, \eeq
where $\lambda_{max}$ is the biggest eigenvalue of $\mathbf{A}$. If this eigenvalue is positive, phase space trajectories rapidly diverge, i.e. a small perturbation in the initial conditions leads to a significant change in the future behavior of the system (at least for some set of initial conditions). Such sensitivity to initial conditions is sometimes called the ``butterfly effect'' or ``classical chaos''.
In general, eigenvalues and eigenvectors depend on the point $\mathbf{X}_0$ and the definition of norm $\| \cdot \|$. However, the maximal eigenvalue, which is also referred to as the maximal Lyapunov exponent, can be considered as the general property of the system:
\beq \label{eq:chaos-3}
\lambda_{max} \equiv \lim_{t \rightarrow \infty} \lim_{\| \delta \mathbf{X}(0) \| \rightarrow 0} \sup \left( \frac{1}{t} \log \frac{ \| \delta \mathbf{X}(t) \|}{ \| \delta \mathbf{X}(0) \| } \right). \eeq
This definition can be applied both to linearized~\eqref{eq:chaos-2} and general systems~\eqref{eq:chaos-1}. Since the exponent~\eqref{eq:chaos-3} does not depend on the definition of the norm~\cite{Gur-Ari,Eichhorn}, we can choose it as $\| \mathbf{X} \| = \sum_{i=1}^N |X^i|$. Then the sensitivity to initial conditions can be reformulated as follows:
\beq \label{eq:chaos-4}
\left| \frac{\partial X^i(t)}{\partial X^j(0)} \right| \approx \left| \frac{\delta X^i(t)}{\delta X^j(0)} \right| \sim e^{\lambda t}, \eeq
for some components $X^i$ and $X^j$ of the vector $\mathbf{X}(t)$, which describes the phase trajectory. The first, approximate, equality holds for small $\delta \mathbf{X}$.
Then let us consider a larger system whose configuration space coincides with the phase space of the initial system: $q^i = X^i$, $i=1 \ldots N$. Here $q^i$ are generalized coordinates, corresponding generalized momenta are denoted as $p^i$. Introducing the Poisson bracket $\{ \cdot, \cdot \}_{PB}$, we can rewrite the property~\eqref{eq:chaos-4} in a form suitable for quantum generalizations:
\beq \left| \{ q^i(t), p^j(0) \}_{PB} \right| = \left| \sum_{k = 1}^N \frac{\partial q^i(t)}{\partial q^k(0)} \frac{\partial p^j(0)}{\partial p^k(0)} - \frac{\partial q^i(t)}{\partial p^k(0)} \frac{\partial p^j(0)}{\partial q^k(0)} \right| = \left| \frac{\partial q^i(t)}{\partial q^j(0)} \right| \sim e^{\lambda t}. \eeq
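The growth of the derivative $\partial q(t)/\partial q(0)$ is conveniently checked on a discrete-time example. The sketch below uses the Chirikov standard map (our own choice, not discussed in the text) and iterates the tangent map alongside the orbit; for kicking strength $K=5$ the derivative grows roughly like $e^{\lambda n}$ with $\lambda\approx\ln(K/2)$:

```python
import math

# Discrete-time illustration (our own example): for the Chirikov standard map
#   p_{n+1} = p_n + K sin(q_n),  q_{n+1} = q_n + p_{n+1},
# the derivative |d q_n / d q_0|, obtained by iterating the tangent map,
# grows like exp(lambda * n); for large K one expects lambda ~ ln(K/2).
K = 5.0
q, p = 0.3, 0.2                  # generic initial point
dq, dp = 1.0, 0.0                # tangent vector d(q, p)/d q_0 at n = 0

n_steps = 200
for _ in range(n_steps):
    # tangent (Jacobian) map, obtained by differentiating the update rule
    dp = dp + K * math.cos(q) * dq
    dq = dq + dp
    # the map itself
    p = p + K * math.sin(q)
    q = q + p

lam_est = math.log(math.hypot(dq, dp)) / n_steps
print(lam_est, math.log(K / 2))  # estimate vs. the rough prediction ln(K/2)
```

This is the discrete analog of the statement $|\partial q^i(t)/\partial q^j(0)| \sim e^{\lambda t}$ above: the tangent map plays the role of the Jacobian of the flow.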
So far we have considered classical mechanics. Now let us proceed to the quantum mechanical situation. We remind that in the semiclassical limit the Poisson bracket coincides with the commutator of the corresponding operators:
\beq \label{eq:chaos-5}
\left\{ q^i(t), p^j(0) \right\}_{PB} \sim -\frac{i}{\hbar} \left[ \hat{q}^i(t), \hat{p}^j(0) \right], \quad \text{as} \quad \hbar \rightarrow 0. \eeq
Note that the position and momentum operators act at different moments of time, so the expression~\eqref{eq:chaos-5} is not trivial.
This correspondence allows one to extend the concept of classical chaos and maximal Lyapunov exponent to arbitrary quantum systems~\cite{Wijn, Fine, Kitaev-talks, Aleiner-96}. Roughly speaking, we want to derive a quantity that correctly captures the sensitivity of the quantum system to a change in initial conditions and reproduces the exponential growth~\eqref{eq:chaos-4} in the limit $\hbar \rightarrow 0$ if the system is chaotic. The simplest expression of this kind is the following amplitude:
\beq \label{eq:chaos-6}
A_{in-out} = \langle out | \left[ q^i(t), p^j(0) \right] | in \rangle, \eeq
where $| in \rangle$ and $| out \rangle$ are initial and final wave-functions of the system. Unfortunately, this expression has two drawbacks. First, due to the dependence on the specific states the quantity~\eqref{eq:chaos-6} varies significantly for the same system. Second, in quantum field theory one usually considers the analog of~\eqref{eq:chaos-6} for the vacuum state or thermal ensemble, for which two-point functions exponentially decay rather than grow (in quantum mechanics correlation functions decay or grow algebraically). Thus, we need to eliminate the dependence on $| in \rangle$ and $| out \rangle$.
In order to do this we sum over final states and average over a suitable initial ensemble, e.g. over the thermal one:
\beq \label{eq:chaos-7}
C(t) = \sum_n \sum_{out} \frac{1}{Z} e^{-\beta E_n} \langle n | \left[ q^i(t), p^j(0) \right]^\dagger | out \rangle \langle out | \left[ q^i(t), p^j(0) \right] | n \rangle = -\langle \left[ q^i(t), p^j(0) \right]^2 \rangle_\beta, \eeq
where $\beta$ is the inverse temperature, $E_n$ is the energy of the $n$-th energy level, $Z = \sum_n e^{-\beta E_n}$ is the partition function, $\langle \cdots \rangle_\beta$ denotes the averaging over the thermal ensemble. Such an average was first considered in the classical paper~\cite{Larkin}.
On the one hand, due to~\eqref{eq:chaos-5} we expect that this quantity exponentially grows: $C(t) \sim \hbar^2 e^{2 \lambda t}$. On the other hand, the semiclassical approximation is applicable only for small enough times, $t < t_* \sim \frac{1}{\lambda} \log \frac{1}{\hbar}$, where $t_*$ is called the ``Ehrenfest time''~\cite{Aleiner-96, Silvestrov, Berman, Zaslavsky}. Note that $t_* \rightarrow \infty$ as $\hbar \rightarrow 0$. One expects that for larger times the correlator $C(t)$ approaches some constant value~\cite{MSS,Almheiri-1304}.
The quantity~\eqref{eq:chaos-7} can be easily generalized to an arbitrary quantum system with a large number of degrees of freedom, $N \gg 1$:
\beq \label{eq:chaos-8}
C(t) = -\langle \left[V(t), W(0) \right]^2 \rangle_\beta, \eeq
where $V$ and $W$ are Hermitian operators each of which has vanishing one-point function ($\langle V \rangle_\beta = \langle W \rangle_\beta = 0$) and corresponds to $\mathcal{O}(1)$ degrees of freedom\footnote{E.g. in the case of SYK model such operators are Majorana fermions: $V(t) = \chi_i(t)$, $W(0) = \chi_j(0)$.}. We call the system chaotic if the quantity~\eqref{eq:chaos-8} grows exponentially for \textit{all possible pairs}\footnote{In integrable systems the function $C(t)$ can grow for some, but not all pairs of operators, e.g. see~\cite{Roberts-1412}.} of operators $V$ and $W$ with mentioned properties. The maximal exponent of this growth is referred to as ``quantum Lyapunov exponent''. The time $t_*$ at which $C(t)$ saturates is referred to as ``scrambling time'', which is an analog of the Ehrenfest time. We will discuss the motivation for this terminology in more detail in section~\ref{sec:scramblers}.
Note that in practice the correlator~\eqref{eq:chaos-8} should be regularized because it contains the product of operators at coincident times. A common approach is to uniformly smear the thermal distribution between the two commutators (which is equivalent to smearing of operators in the imaginary time):
\beq \label{eq:chaos-9}
C(t) = -\text{tr}\left( \rho^{\frac{1}{2}} \left[V(t), W(0) \right] \rho^{\frac{1}{2}} \left[V(t), W(0) \right] \right), \eeq
where $\rho = \frac{1}{Z} e^{-\beta H}$ is the density matrix. Of course, one can also consider other types of smearing, but this one has the most natural physical interpretation, see~\cite{Romero-Bermudez} for a more detailed discussion. Therefore, in this paper we are interested in correlators of the form~\eqref{eq:chaos-9}. In the main body of this paper we will see how such an expression arises naturally.
Let us expand the commutators in~\eqref{eq:chaos-9} and rewrite $C(t)$ as the sum of four four-point correlation functions:
\beq \begin{aligned}
C(t) &= 2 \text{tr}\left( V(t) \rho^{\frac{1}{2}} V(t) W \rho^{\frac{1}{2}} W \right) - \text{tr}\left( \rho^{\frac{1}{2}} V(t) W \rho^{\frac{1}{2}} V(t) W \right) - \text{tr}\left( \rho^{\frac{1}{2}} W V(t) \rho^{\frac{1}{2}} W V(t) \right) = \\
&= 2 \times \text{TOC}(t) - \text{OTOC}\left(t - \frac{i\beta}{4}\right) - \text{OTOC}\left(t + \frac{i\beta}{4}\right),
\end{aligned} \eeq
where we denote $W = W(0)$ for short and introduce the time-ordered correlator (TOC) and the out-of-time-ordered correlator (OTOC):
\beq \text{TOC}(t) \equiv \text{tr}\left( V(t) \rho^{\frac{1}{2}} V(t) W \rho^{\frac{1}{2}} W \right), \quad \text{OTOC}(t) \equiv \text{tr}\left( \rho^{\frac{1}{4}} V(t) \rho^{\frac{1}{4}} W \rho^{\frac{1}{4}} V(t) \rho^{\frac{1}{4}} W \right). \eeq
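Since this decomposition is a purely algebraic identity, it can be checked numerically for a finite-dimensional toy system. The sketch below is illustrative only: the Hamiltonian and the Hermitian operators $V$, $W$ are drawn at random, and $\rho^{1/2}$ and $V(t)$ are built from the eigendecomposition of $H$.

```python
import numpy as np

rng = np.random.default_rng(0)
d, beta, t = 8, 1.3, 0.7          # toy Hilbert-space dimension and parameters (illustrative)

def rand_herm(d):
    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (A + A.conj().T) / 2

H, V, W = rand_herm(d), rand_herm(d), rand_herm(d)
E, U = np.linalg.eigh(H)

def fH(f):
    # matrix function of H through its eigendecomposition
    return (U * f(E)) @ U.conj().T

Z = np.exp(-beta * E).sum()
rho_half = fH(lambda e: np.exp(-beta * e / 2)) / np.sqrt(Z)   # rho^{1/2}
Vt = fH(lambda e: np.exp(1j * e * t)) @ V @ fH(lambda e: np.exp(-1j * e * t))

# regularized square of the commutator
comm = Vt @ W - W @ Vt
C = -np.trace(rho_half @ comm @ rho_half @ comm)

TOC1 = np.trace(Vt @ rho_half @ Vt @ W @ rho_half @ W)
TOC2 = np.trace(rho_half @ W @ Vt @ rho_half @ Vt @ W)        # = conj(TOC1)
OTOC_p = np.trace(rho_half @ Vt @ W @ rho_half @ Vt @ W)      # OTOC(t + i beta/4)
OTOC_m = np.trace(rho_half @ W @ Vt @ rho_half @ W @ Vt)      # OTOC(t - i beta/4)

assert np.allclose(C, TOC1 + TOC2 - OTOC_p - OTOC_m)
assert np.allclose(TOC2, TOC1.conjugate())
```

The check also shows that the two time-ordered terms are complex conjugates of each other, so their sum is $2\,\mathrm{Re}\,\text{TOC}(t)$, and that the regularized $C(t)$ itself is real and non-negative.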
There are two important time scales for $C(t)$. The first is the dissipation time $t_d$, at which two-point correlation functions decay exponentially: $\langle V(t) V \rangle_\beta \sim \langle W(t) W \rangle_\beta \sim \langle V(t) W \rangle_\beta \sim e^{-t/t_d}$. Typically $t_d \sim \beta$. At this time scale both TOC and OTOC are approximately equal to the product of two disconnected two-point functions, so the commutator $C(t)$ is close to zero~\cite{MSS,Polyakov,Makeenko}:
\beq \text{TOC}(t) \approx \text{OTOC}(t) \approx \langle V V \rangle_\beta \langle W W \rangle_\beta + \mathcal{O}\left(e^{-t/t_d}\right) + \mathcal{O}\left(\frac{1}{N}\right), \eeq
where we denoted $\langle V V \rangle_\beta = \left\langle V\left(-i\beta/2\right) V \right\rangle_\beta = \text{tr}\left( \rho^{\frac{1}{2}} V \rho^{\frac{1}{2}} V \right)$ for short. We remind that we work in the large $N$ limit, so the number $\frac{1}{N}$ plays the role of Planck's constant $\hbar$.
The second time scale is the scrambling time $t_*$. Typically $t_*$ is parametrically larger than $t_d$, namely $t_* \sim \beta \log N$. If the system is chaotic, then well after the dissipation time and well before the scrambling time $C(t)$ grows exponentially and OTOC rapidly decays:
\beq \label{eq:chaos-10}
C(t) \sim \frac{1}{N} e^{\kappa t}, \quad \text{OTOC}(t) \sim \langle V V \rangle_\beta \langle W W \rangle_\beta - \frac{A}{N} e^{\kappa t}, \eeq
where $A$ is some numerical coefficient. At later times $C(t)$ saturates and OTOC approaches zero. Since TOC is approximately constant at such times, the growth of $C(t)$ and the decay of OTOC are equivalent descriptions of the same behavior.
Thus, such behavior of OTOC and of the function $C(t)$ can be considered an indicator of quantum chaos. In particular, it allows one to extract the quantum Lyapunov exponent $\kappa$, which is expected to coincide with the classical exponent~\eqref{eq:chaos-3} in the semiclassical limit.
However, we would like to emphasize two important points regarding OTOCs and quantum chaos. First, one should keep in mind that the argumentation of this subsection is quite naive, and in fact the connection between the exponential growth of $C(t)$ and classical chaos is questionable. There is evidence both in favor of this interpretation~\cite{Cotler-1704} and against it~\cite{Hashimoto, Xu}. For this reason the notions of ``scrambling'' (exponential growth of OTOC) and ``chaos'' (exponential growth of the average distance between phase trajectories) should be distinguished, although they are often treated as the same.
Second, OTOCs are not the only possible measure of chaos; in fact, there have been several attempts to extend the concept of classical chaos to quantum systems. The most notable alternative approach\footnote{In fact, this idea is old and well developed enough to be included in textbooks on chaos, e.g. see~\cite{Haake,Ott, Stockmann}.} to quantum chaos is based on the level statistics at small energy separations: if these statistics agree with Random Matrix Theory, one can consider the system chaotic~\cite{Gharibyan, Haake, Ott, Stockmann}. This approach is also closely related to the Eigenstate Thermalization Hypothesis~\cite{Deutsch, Srednicki, DAlessio}, which states that under some assumptions any local operator in an isolated quantum system eventually approaches its thermal form:
\beq V_{ij} = \langle E_i | V | E_j \rangle = \overline{V}(E) \delta_{ij} + e^{-\frac{1}{2} S(E)} f(E, \omega) R_{ij}, \eeq
where $| E_i \rangle$ is the state with the energy $E_i$, $S(E) = -\text{tr}\left( \rho \log \rho\right)$, $\overline{V}(E) = \text{tr} \left(\rho V\right)$, thermal density matrix $\rho$ is fixed by $E = \text{tr}\left(\rho H\right)$, $f(E, \omega) = f(E, -\omega)$ is a smooth real function and $R_{ij}$ is a Hermitian random matrix with zero mean and unit variance. It is still not known whether this old approach is related to OTOCs or not, although there is some evidence in favor of this~\cite{Foini, Murthy, Parker, Avdoshkin, Huang-2}. In particular, it was shown that SYK model and 2D $CFT$ with large central charge under some assumptions behave like a random-matrix theory~\cite{Sonner, Vielma, Anous}, whereas correlation functions in these models have the form~\eqref{eq:chaos-10}.
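The level-statistics approach mentioned above is easy to illustrate numerically with the standard ratio of consecutive level spacings, $\tilde{r}_n = \min(s_n, s_{n+1})/\max(s_n, s_{n+1})$, whose mean is $\approx 0.5307$ for the Gaussian orthogonal ensemble and $2\log 2 - 1 \approx 0.386$ for uncorrelated (Poisson) levels. A minimal sketch (the matrix size and the random seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(42)

def mean_r(levels):
    # ratio of adjacent level spacings; insensitive to the local density of states
    s = np.diff(np.sort(levels))
    r = np.minimum(s[:-1], s[1:]) / np.maximum(s[:-1], s[1:])
    return r.mean()

d = 2000
A = rng.normal(size=(d, d))
goe = np.linalg.eigvalsh((A + A.T) / 2)     # GOE spectrum (chaotic benchmark)
poisson = rng.uniform(size=d)               # uncorrelated levels (integrable benchmark)

r_goe, r_poisson = mean_r(goe), mean_r(poisson)
assert abs(r_goe - 0.5307) < 0.02                   # RMT value
assert abs(r_poisson - (2 * np.log(2) - 1)) < 0.02  # Poisson value
```

The spacing ratio is convenient precisely because it requires no unfolding of the spectrum, unlike the raw level-spacing distribution.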
\subsection{Fast scramblers}
\label{sec:scramblers}
The original motivation for studying OTOCs was based on the fast scrambling conjecture, which was proposed in~\cite{Sekino,Susskind}, proved in~\cite{Lashkari} and adapted for correlators in~\cite{MSS}. In this subsection we briefly review this conjecture. Note that this subsection may seem relatively vague if the reader does not have a specific example in mind; such examples are discussed in the following sections.
First of all, consider a complex quantum system with a large number of degrees of freedom $N$, prepare a pure state $| \Psi \rangle$ and let this state freely evolve under the action of a unitary operator $U$. Due to the Eigenstate Thermalization Hypothesis one expects that after a long enough time the system thermalizes, although its state remains pure. By this we mean that the density matrix of every small subsystem (with number of degrees of freedom $m < N/2$) is close to the thermal density matrix, or, equivalently, the entanglement entropy\footnote{\label{foot:S} We remind that the entanglement entropy of subsystem $L$ is defined as $S_L = -\text{tr}_L \big(\rho_L \log \rho_L\big)$, where the trace is taken over the Hilbert space of $L$, $\rho_L = \text{tr}_R| \Psi \rangle \langle \Psi |$ and $R$ is the complement of $L$.} of every small subsystem is close to the maximal value~\cite{Page,Nishioka}. Roughly speaking, by this time the information about the initial state has been smeared throughout the system, so one needs to measure $\mathcal{O}(N)$ degrees of freedom to restore it. For this reason such a system was proposed to be called ``scrambled''~\cite{Sekino}.
Then let us perturb a small number of degrees of freedom in a scrambled system and again let the system evolve freely. We expect that after some time the information about the perturbation is also smeared across all degrees of freedom, and the system returns to a scrambled state. This time is referred to as the ``scrambling time''.
The fast scrambling conjecture~\cite{Sekino,Susskind,Lashkari} states that the scrambling time of any system cannot be less than $t_*^{min} \sim \beta \log N$. Moreover, the bound is saturated for black holes (if they satisfy all the explicit and implicit assumptions of the conjecture), which makes them ``the fastest scramblers in nature by a wide margin''\footnote{For finite-dimensional systems the bound can be tightened: $t_*^{min} \sim \beta N^{\frac{2}{d}}$, where $d$ is the dimensionality of the system~\cite{Sekino}. For instance, in 3D $t_*^{min} \sim \beta N^{\frac{2}{3}}$. Thus, from this point of view black holes seem to be infinite-dimensional systems.}. Later it was argued that Rindler and de Sitter spaces also saturate this bound~\cite{Susskind}, but subsequent direct calculations did not confirm\footnote{The original argumentation of~\cite{Susskind} was based on the fact that the clock close to the event horizon goes as $e^{2 \pi t/\beta}$, where $t$ is the asymptotic observer's time. However, later it was shown that this is not enough. This is a good reminder that it is important to clarify all the assumptions under which a hypothesis is formulated.} this conjecture~\cite{Anninos, Aalsma}. This conjecture has important implications for information cloning and the black hole information paradox~\cite{Hayden,Almheiri-1212,Mathur}.
To estimate the scrambling time, one needs to find how quickly a small perturbation spreads over the entire system. In some special cases this process can be studied directly~\cite{Roberts-1802,Qi}, but much more often one needs to rely on indirect signs of scrambling. In essence, there are two such indicators.
One way to capture the rate of scrambling is to prepare a thermofield double (TFD) state, which describes two identical thermal subsystems:
\beq \label{eq:TFD}
| TFD \rangle = \frac{1}{\sqrt{Z}} \sum_n e^{-\frac{1}{2} \beta E_n} | n \rangle_L \otimes | n \rangle_R, \quad \text{so that} \quad \rho_L = \rho_R = \frac{1}{Z} \sum_n e^{-\beta E_n} | n \rangle \langle n |, \eeq
perturb one subsystem by a local operator and check how the mutual information, $I_{LR} = S_L + S_R - S_{L \cup R}$, evolves in time (see footnote~\ref{foot:S} for the definition of $S$). Usually the subsystems are called ``left'' (L) and ``right'' (R), which explains the subscripts of $S$. Before the perturbation both subsystems are highly correlated, so the mutual information is non-zero. However, gradually the perturbation grows and affects more and more degrees of freedom. For instance, for a local operator $V$ and a generic Hamiltonian $H$ with local interactions, the $k$-th term in the expansion of the evolved operator $V(t) = e^{i H t} V e^{-i H t}$ is a $k$-fold nested commutator, which can generate a product of $\mathcal{O}(k)$ local operators:
\beq V(t) = V + i t [H, V] + \frac{(i t)^2}{2!} \left[ H, [H, V] \right] + \cdots + \frac{(i t)^k}{k!} \left[ H, \left[H, \cdots [H, V] \right] \right] + \cdots. \eeq
Thus, one expects that eventually the perturbation spreads throughout the entire system, the entanglement between the L and R subsystems disappears and the mutual information becomes close to zero. Therefore, the moment $t_*$ at which $I_{LR}(t_*) \approx 0$ can be considered as an estimate of the scrambling time. An example of such a calculation can be found e.g. in~\cite{Shenker-1306,Roberts-1412,Hartman,Asplund,Arefeva}. In particular, this calculation reproduces the conjectured bound $t_* \sim \beta \log N$ for black holes~\cite{Shenker-1306,Roberts-1412}.
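As a small consistency check of the definition~\eqref{eq:TFD}, one can verify numerically that tracing out either side of $| TFD \rangle$ yields the thermal density matrix. In the sketch below (a randomly drawn Hermitian $H$ of illustrative size) the state is stored as a $d \times d$ matrix $M$ with $\psi = \mathrm{vec}(M)$, so that $\rho_L = M M^\dagger$:

```python
import numpy as np

rng = np.random.default_rng(1)
d, beta = 6, 0.9                  # illustrative dimension and inverse temperature

A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
H = (A + A.conj().T) / 2
E, U = np.linalg.eigh(H)
Z = np.exp(-beta * E).sum()

# |TFD> stored as a d x d matrix M (psi = vec(M)):
# M_{ij} = (1/sqrt(Z)) sum_n e^{-beta E_n / 2} <i|n> <j|n>
M = (U * np.exp(-beta * E / 2)) @ U.T / np.sqrt(Z)

rho_L = M @ M.conj().T            # trace over the right factor
rho_R = M.T @ M.conj()            # trace over the left factor
rho_thermal = (U * np.exp(-beta * E)) @ U.conj().T / Z

assert np.allclose(rho_L, rho_thermal)
assert np.allclose(rho_R, rho_thermal)
assert np.isclose(np.vdot(M, M).real, 1.0)   # <TFD|TFD> = 1
```

Storing the state as a matrix makes the partial traces one-line matrix products, which is convenient when one later perturbs one side by a local operator.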
Another way to evaluate $t_*$ is based on the calculation of the out-of-time-ordered correlators introduced in the previous subsection. Let us qualitatively explain why such correlators are sensitive to scrambling. As was noticed in~\cite{Almheiri-1304,Shenker-1306,Roberts-1412,MSS}, OTOC can be rewritten as a two-sided correlation function in a perturbed thermofield double state:
\beq \label{eq:scramblers-1}
\text{OTOC}(t) = \left\langle V\Big(t - \frac{i\beta}{4}\Big) W\Big(0\Big) V\Big(t + \frac{i\beta}{4}\Big) W\Big(\frac{i\beta}{2}\Big) \right\rangle_\beta = \langle \psi | W_L W_R | \psi \rangle, \eeq
where $V$ and $W$ are local Hermitian operators, $W_L = W^\dagger \otimes 1$ acts on the left subsystem, $W_R = 1 \otimes W$ acts on the right subsystem and the perturbed state is as follows:
\beq \label{eq:scramblers-2}
| \psi \rangle = V_L\Big(t + \frac{i \beta}{4}\Big) | TFD \rangle = \frac{1}{\sqrt{Z}} \sum_{mn} e^{-\frac{\beta}{4} (E_m + E_n)} V(t)_{nm} | m \rangle_L \otimes | n \rangle_R. \eeq
At small times the operator $V$ affects only $\mathcal{O}(1)$ degrees of freedom and cannot significantly change the global pattern of correlations, so the perturbed state is close to the pure $| TFD \rangle$. Thus, the left and right subsystems are highly entangled and the correlator is large, i.e. $\text{OTOC}(t) \approx \langle V V \rangle_\beta \langle W W \rangle_\beta$. However, over time the perturbation spreads to other degrees of freedom and destroys the fragile pattern of correlations, so eventually OTOC decays to zero. In this setting, the scrambling time is the time at which OTOC saturates: $\text{OTOC}(t_*) \approx 0$ or $C(t_*) \approx 2 \langle V V \rangle_\beta \langle W W \rangle_\beta$.
What is interesting here is the rate at which OTOC approaches zero. On general grounds, one expects that in the large $N$ limit and for small evolution times the first correction to OTOC is of the order of $\mathcal{O}\left(\frac{1}{N}\right)$:
\beq \frac{\text{OTOC}(t)}{\langle V V \rangle_\beta \langle W W \rangle_\beta} = 1 - \frac{A}{N} f(t) + \mathcal{O}\left(\frac{1}{N^2}\right), \eeq
where $A$ is some positive $\mathcal{O}(1)$ numerical factor and $f(t)$ is some monotonically growing function. Extending this approximation to large times, one can qualitatively estimate the scrambling time as $t_* \sim f^{-1}\left(N/A\right)$, where $f^{-1}$ is the inverse of $f$, $f \circ f^{-1} = f^{-1} \circ f = \mathrm{id}$. At the same time, the fast scrambling conjecture states that $t_* \gtrsim \beta \log N$. Therefore, the function $f$ cannot grow faster than exponentially in time, $f(t) \lesssim e^{\kappa t}$. The exponent of this growth is also bounded, $\kappa \le \frac{B}{\beta}$, where $B$ is a universal positive $\mathcal{O}(1)$ numerical constant. This analog of the fast scrambling conjecture for OTOCs was proven in~\cite{MSS} and called the ``bound on chaos''\footnote{In fact, for gravitational scattering of massive particles with spin $J > 2$ one expects that $\kappa \sim \frac{2 \pi}{\beta} (J - 1)$. However, it was argued that such processes violate causality and unitarity~\cite{MSS, Zhiboedov}.}:
\beq \frac{d}{dt} \Big[ \langle V V \rangle_\beta \langle W W \rangle_\beta - \text{OTOC}(t) \Big] \le \frac{2 \pi}{\beta} \Big[ \langle V V \rangle_\beta \langle W W \rangle_\beta - \text{OTOC}(t) \Big], \quad \text{i.e.} \quad \kappa \le \frac{2 \pi}{\beta}. \eeq
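To make explicit how this differential inequality translates into the exponential bound, denote $F(t) \equiv \langle V V \rangle_\beta \langle W W \rangle_\beta - \text{OTOC}(t)$; then a standard Gr\"onwall-type argument gives

```latex
\beq
\frac{d}{dt}\left[ e^{-\frac{2 \pi}{\beta} t} F(t) \right]
= e^{-\frac{2 \pi}{\beta} t} \left[ \frac{d F(t)}{dt} - \frac{2 \pi}{\beta} F(t) \right] \le 0,
\quad \text{hence} \quad
F(t) \le F(t_0) \, e^{\frac{2 \pi}{\beta} (t - t_0)} \quad \text{for} \quad t \ge t_0,
\eeq
```

so any exponential growth of $C(t)$ is bounded by $\kappa \le \frac{2\pi}{\beta}$, and setting $F(t_0) \sim \frac{1}{N}$ reproduces the estimate $t_* \gtrsim \frac{\beta}{2\pi} \log N$.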
Note that for systems that saturate the bound on $f$, the number $\kappa$ can be considered as an analog of classical Lyapunov exponent from subsection~\ref{sec:classical}.
Furthermore, OTOC is a very convenient measure of the spatial growth of operators. In $(d>1)$-dimensional chaotic systems (i.e. systems with $f(t) \sim e^{\kappa t}$) the exponential growth in time is typically supplemented~\cite{Roberts-1603} by a coordinate-dependent factor: $f(t) \sim e^{\kappa (t - |x|/v_B)}$, where $|x|$ is the distance to the initial perturbation caused by the operator $V$ and $v_B$ is some positive constant. It is easy to see that OTOC significantly deviates from its initial value only inside the ball of radius $r < v_B t$. This ball can be interpreted as the area affected by the perturbation, i.e. the ``size'' of the operator $V$. For this reason the constant $v_B$ is called the ``butterfly velocity''. The discussion and examples of spatial operator growth can be found e.g. in~\cite{Roberts-1603,Roberts-1409,Shenker-1306,Hosur,Nahum,Mezei}.
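The light-cone structure behind the butterfly velocity can already be seen in a small lattice toy model. The sketch below is purely illustrative (the chain length, couplings and evolution time are arbitrary choices): it evolves $Z$ on the first site of a chaotic Ising chain in the Heisenberg picture and measures the normalized commutator norm with $Z_r$ at increasing distance $r$; the commutator is $\mathcal{O}(1)$ only inside the light cone and is strongly suppressed outside it.

```python
import numpy as np
from functools import reduce

L, t = 7, 0.6                      # chain length and evolution time (illustrative)
g, h = 1.05, 0.5                   # transverse/longitudinal fields (chaotic point)

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])

def site_op(op, r):
    # operator acting as `op` on site r and trivially elsewhere
    return reduce(np.kron, [op if k == r else I2 for k in range(L)])

H = sum(site_op(Z, k) @ site_op(Z, k + 1) for k in range(L - 1))
H = H + sum(g * site_op(X, k) + h * site_op(Z, k) for k in range(L))

E, U = np.linalg.eigh(H)
Ut = (U * np.exp(-1j * E * t)) @ U.conj().T       # e^{-iHt}
Vt = Ut.conj().T @ site_op(Z, 0) @ Ut             # V(t) = e^{iHt} Z_0 e^{-iHt}

def comm_norm(r):
    Wr = site_op(Z, r)
    c = Vt @ Wr - Wr @ Vt
    return np.linalg.norm(c) / np.linalg.norm(Wr)

norms = [comm_norm(r) for r in range(1, L)]
# far outside the light cone the commutator is strongly suppressed
assert norms[-1] < 0.05 * norms[0]
```

At early times the commutator at distance $r$ first appears at order $t^r$ in the expansion of $V(t)$, which is the discrete analog of the $e^{\kappa(t - |x|/v_B)}$ profile quoted above.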
Of course, compared to mutual information, OTOCs are a very crude measure of scrambling. In particular, $I_{LR}$ drops to zero almost immediately after $t_*$, whereas OTOCs at such times merely start to decay~\cite{Shenker-1306}. However, in practice it is much easier to calculate correlation functions than mutual information, which makes OTOCs a very popular tool. To date, OTOCs have been calculated in a large variety of models, including the BTZ black hole~\cite{Shenker-1306, Shenker-1312, Roberts-1409, Shenker-1412}, 2D $CFT$~\cite{Roberts-1412, Turiaci, Fitzpatrick}, de Sitter space~\cite{Anninos, Aalsma}, SYK model~\cite{Maldacena-SYK, Kitaev, Jevicki-1, Jevicki-2, Kitaev-talks, Polchinski} and its analogs~\cite{SUSY-SYK, Fu, Gross-1610, Gu}, 2D dilaton gravity~\cite{Maldacena-JT, Jensen}, matrix models~\cite{Stanford-1512, Gur-Ari}, and of course in plenty of quantum many-body systems~\cite{Aleiner, Wijn, Fine, Yao, Huang, Swingle, Shen, Dora, Lin, Bohrdt, Patel-1, Patel-2, vonKeyserlingk, Hosur, Nahum, Mezei}. In the following sections we will take a closer look at the two most notable examples: SYK model (sections~\ref{sec:basics} and~\ref{sec:treatment}) and 2D dilaton gravity (section~\ref{sec:JT}).
Finally, let us emphasize that the arguments of~\cite{Sekino, Susskind, Lashkari, MSS} apply only to near-equilibrium situations (e.g. a large, semiclassical black hole or an eternal black hole in $AdS$ space), assuming that a small perturbation induced by the operator $V$ cannot significantly change the initial state. Usually OTOCs are also calculated for such situations. Due to this assumption one can use the equilibrium (Matsubara) diagrammatic technique and apply the analytic continuation procedure to correlation functions. However, this intuition does not work if the perturbation is large or the system is far from equilibrium (e.g. for small black holes). In this case one needs to use the non-equilibrium (Schwinger--Keldysh) diagrammatic technique and take into account that the state of the system can evolve in time~\cite{Arseev, Kamenev}. An example of such a calculation for black holes and de Sitter space can be found in~\cite{Maldacena-1912, Krotov, Akhmedov, Akhmedov-H, Akhmedov-1701, Akhmedov-1901}; a generalization of the non-equilibrium technique to OTOCs can be found in~\cite{Aleiner, Haehl}. However, it is still unknown whether the arguments of~\cite{Sekino, Susskind, Lashkari, MSS} can be extended to non-equilibrium systems or not.
\section{SYK basics}
\label{sec:basics}
SYK model is one of the most notable models of quantum chaos and holography. Due to its remarkable properties it is an excellent toy model for many physical phenomena, including traversable wormholes~\cite{Maldacena-1704, Maldacena-1804, Maldacena-1807, Maldacena-1912} and strange metals~\cite{Hartnoll, Song, Sachdev}. For this reason we review this model in great detail.
This section is mostly based on the pioneering papers~\cite{Polchinski, Maldacena-SYK, Kitaev} and talks by Kitaev~\cite{Kitaev-talks}. The reviews~\cite{Sarosi, Rosenhaus-1807} also come in handy. For simplicity we consider the model with a four-fermion interaction vertex ($q = 4$), which is the simplest non-trivial and non-degenerate case. The generalization to other cases ($q\ge2$) is straightforward and can be found in the mentioned papers.
In this section we discuss the basic properties of SYK model: large $N$ diagrammatics, the emergence of conformal symmetry in the IR limit, the effective and Schwarzian actions. The calculation of the four-point function is placed in a separate section (section~\ref{sec:treatment}) due to its length.
\subsection{Main definitions}
\label{sec:SYK-defs}
SYK model is the quantum mechanics of $N \gg 1$ Majorana fermions with all-to-all random couplings:
\beq
\label{eq:SYK-action}
I_{SYK} = \int d\tau \left[ \frac{1}{2} \sum_{i=1}^N \chi_i(\tau) \dot{\chi}_i(\tau) - \frac{1}{4!} \sum_{i,j,k,l=1}^N J_{ijkl} \, \chi_i(\tau) \chi_j(\tau) \chi_k(\tau) \chi_l(\tau) \right],
\eeq
where $\dot{\chi}_i = d \chi_i/d\tau$. Let us clarify the notations. The letter~$\tau$ denotes Euclidean time, which is related to Lorentzian time~$t$ by the Wick rotation: $\tau = i t$. In this section we work in Euclidean time unless stated otherwise. The operators~$\chi_i$ are Hermitian, $\chi_i = \chi_i^\dagger$, and obey the usual anticommutation relations:
\beq
\label{eq:anticommutator}
\left\{ \chi_i, \chi_j \right\} = \delta_{ij}, \quad i, j = 1 \ldots N.
\eeq
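For small $N$ this algebra can be realized explicitly by a standard Jordan--Wigner-type construction on $N/2$ qubits (a minimal illustration, not tied to any specific choice made in this paper). The sketch below also checks that $\text{tr}(\chi_i \chi_j)/\dim = \frac{1}{2}\delta_{ij}$, which is the value of the free thermal propagator for $0 < \tau < \beta$:

```python
import numpy as np
from functools import reduce
from itertools import product

# Pauli matrices
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def majoranas(N):
    # N Majorana fermions on N//2 qubits (Jordan-Wigner),
    # normalized so that {chi_i, chi_j} = delta_ij
    nq = N // 2
    chis = []
    for k in range(nq):
        left, right = [Z] * k, [I2] * (nq - k - 1)
        chis.append(reduce(np.kron, left + [X] + right) / np.sqrt(2))
        chis.append(reduce(np.kron, left + [Y] + right) / np.sqrt(2))
    return chis

N = 6
chi = majoranas(N)
dim = 2 ** (N // 2)
for i, j in product(range(N), repeat=2):
    anti = chi[i] @ chi[j] + chi[j] @ chi[i]
    assert np.allclose(anti, (i == j) * np.eye(dim))
    # free thermal two-point function for 0 < tau < beta (H = 0, rho = 1/dim):
    assert np.isclose(np.trace(chi[i] @ chi[j]) / dim, (i == j) * 0.5)
```

In particular, each $\chi_i$ squares to $\frac{1}{2}$, which is why the fermions are dimensionless in one dimension.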
One can find more information about representations of the one-dimensional Clifford algebra in appendix~\ref{sec:majorana}. Note that in the one-dimensional case Majorana fermions are dimensionless. The couplings $J_{ijkl}$ are distributed randomly and independently, i.e. according to the Gaussian distribution\footnote{A generalization to non-Gaussian distributions can be found in~\cite{Krajewski}.} with the following probability density function:
\beq
\label{eq:PDF}
P(J_{ijkl}) \propto \exp\left( -\frac{N^3 J_{ijkl}^2}{12 J^2} \right) \quad \text{for every} \quad J_{ijkl}.
\eeq
We emphasize that the summation over $i$, $j$, $k$ and $l$ is not assumed. This distribution leads to several important properties. First, it fixes the average and average square of couplings:
\beq
\label{eq:J1}
\overline{J_{ijkl}} = 0, \quad \overline{J_{ijkl}^2} = \frac{3! J^2}{N^3},
\eeq
where $J$ is a constant with the dimension of mass. Second, the even moments of the couplings split into the sum of all possible products of second moments (average squares), i.e. there is a Wick-type decomposition for the average of an even number of couplings. For instance,
\beq
\label{eq:J2}
\begin{aligned}
\overline{J_{i_1 i_2 i_3 i_4} J_{j_1 j_2 j_3 j_4} J_{k_1 k_2 k_3 k_4} J_{l_1 l_2 l_3 l_4}} &= \overline{J_{i_1 i_2 i_3 i_4} J_{j_1 j_2 j_3 j_4}} \; \overline{J_{k_1 k_2 k_3 k_4} J_{l_1 l_2 l_3 l_4}} + \overline{J_{i_1 i_2 i_3 i_4} J_{k_1 k_2 k_3 k_4}} \; \overline{J_{j_1 j_2 j_3 j_4} J_{l_1 l_2 l_3 l_4}} + \\ &+ \overline{J_{i_1 i_2 i_3 i_4} J_{l_1 l_2 l_3 l_4}} \overline{J_{j_1 j_2 j_3 j_4} J_{k_1 k_2 k_3 k_4}}.
\end{aligned} \eeq
Roughly speaking, to perform such an averaging one should create many copies of the system with randomly chosen couplings\footnote{In fact, if one is interested only in extensive quantities, such as energy or entropy, for large $N$ it is sufficient to consider only one specific realization with randomly distributed couplings. Roughly speaking, the large $N$ system can be divided into a large number of large subsystems that self-average such quantities.}, calculate the expressions in question and average the final results\footnote{One can also consider a generalization of the model with dynamical couplings. In particular, large $N$ fermionic tensor models reproduce all main properties of SYK model without the trick with disorder average. For a review see subsubsection~\ref{sec:tensor} and papers~\cite{Klebanov-1, Klebanov-2, Klebanov-3, Witten-1610, Gurau-1611, Nishinaka}.}. The reasons why one requires properties~\eqref{eq:J1} and~\eqref{eq:J2} will become clear in the next subsection.
Note that the anticommutation relations~\eqref{eq:anticommutator} imply the antisymmetry of the couplings:
\beq J_{ijkl} = \text{sgn} \sigma J_{\sigma(i) \sigma(j) \sigma(k) \sigma(l)}, \quad \text{where} \quad \sigma: i \rightarrow \sigma(i), \quad i =1 \ldots N. \eeq
First, this reduces the number of independent non-zero components of $J_{ijkl}$ to $\frac{N!}{4!(N-4)!}$. Second, this allows one to define the disorder average of two arbitrary couplings:
\beq
\label{eq:J3}
\overline{J_{i_1 i_2 i_3 i_4} J_{j_1 j_2 j_3 j_4}} = \frac{3! J^2}{N^3} \sum_{\sigma} \text{sgn} \sigma \delta_{i_1 \sigma(j_1)} \delta_{i_2 \sigma(j_2)} \delta_{i_3 \sigma(j_3)} \delta_{i_4 \sigma(j_4)},
\eeq
where the sum is performed over all possible permutations of indices. Essentially, this sum just checks whether indices of $J_{i_1 i_2 i_3 i_4}$ and $J_{j_1 j_2 j_3 j_4}$ coincide or not.
An important special case for the applications below is that of three coincident indices:
\beq
\label{eq:J4}
\sum_{k,l,m = 1}^N \overline{J_{iklm} J_{jklm}} = \frac{3! J^2}{N^3} \sum_{k,l,m=1}^N \delta_{ij} \delta_{kk} \delta_{ll} \delta_{mm} + \cdots = \frac{3! J^2}{N^3} \left( N^3 \delta_{ij} + \mathcal{O}(N^2) \right) = 3! J^2 \delta_{ij} + \mathcal{O}\left(\frac{1}{N}\right).
\eeq
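The estimate~\eqref{eq:J4} is straightforward to verify by direct sampling. In the sketch below (the values of $N$, $J$ and the number of samples are chosen small purely for speed) the independent components with $i<j<k<l$ are drawn from a Gaussian with variance $3! J^2/N^3$ and antisymmetrized; at finite $N$ the diagonal of $\sum_{klm} \overline{J_{iklm} J_{jklm}}$ carries the exact combinatorial factor $(N-1)(N-2)(N-3)/N^3 \rightarrow 1$:

```python
import numpy as np
from itertools import combinations, permutations

rng = np.random.default_rng(3)
N, J, n_samples = 16, 1.0, 20

def parity(p):
    # sign of a permutation of 4 elements via inversion counting
    inv = sum(p[a] > p[b] for a in range(4) for b in range(a + 1, 4))
    return -1 if inv % 2 else 1

perms = [(p, parity(p)) for p in permutations(range(4))]
sigma = np.sqrt(6.0 * J**2 / N**3)     # variance 3! J^2 / N^3 per independent component

M = np.zeros((N, N))
for _ in range(n_samples):
    Jt = np.zeros((N, N, N, N))
    for idx in combinations(range(N), 4):
        v = rng.normal(0.0, sigma)
        for p, s in perms:             # fully antisymmetrized couplings
            Jt[idx[p[0]], idx[p[1]], idx[p[2]], idx[p[3]]] = s * v
    M += np.einsum('iklm,jklm->ij', Jt, Jt) / n_samples

expected = 6.0 * J**2 * (N - 1) * (N - 2) * (N - 3) / N**3   # -> 3! J^2 at large N
assert np.allclose(np.diag(M), expected, rtol=0.2)
offdiag = M - np.diag(np.diag(M))
assert np.max(np.abs(offdiag)) < 0.2 * expected
```

The off-diagonal entries vanish only after disorder averaging, while the diagonal is already self-averaging in a single realization, in line with the footnote about extensive quantities above.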
Let us also specify the interval where the Euclidean time $\tau$ runs. In this paper we consider two closely related cases: Euclidean line $\tau_{line} \in (-\infty, \infty)$ and Euclidean circle: $\tau_{circle} \in \left[-\frac{\beta}{2}, \frac{\beta}{2}\right)$, $\tau + \beta \sim \tau$. The first case describes the zero-temperature quantum mechanics, whereas the second case corresponds to the thermal state with the inverse temperature $\beta = \frac{1}{T}$. Below we will use the following map to change between the Euclidean line and circle:
\beq
\label{eq:circle-line}
\tau_{line} = \tan \frac{\pi \tau_{circle}}{\beta}. \eeq
Note that this mapping function is real and monotonic, i.e. it preserves the order of times: $d\tau_{line}/d\tau_{circle} > 0$.
Finally, note that in the free theory the Hamiltonian is zero, $H_0(\tau) = 0$. Hence, operators are constant even in Heisenberg picture: $\chi_i(\tau) = e^{\tau H_0} \chi_i(0) e^{-\tau H_0} = \chi_i(0)$. Therefore, one can use anticommutation relations~\eqref{eq:anticommutator} to find the two-point correlation functions in the zero-temperature free theory:
\beq
\label{eq:2p-free-0}
\langle 0 | \mathcal{T} \chi_i(\tau) \chi_j(0) | 0 \rangle \equiv \theta(\tau) \langle 0 | \chi_i \chi_j | 0 \rangle - \theta(-\tau) \langle 0 | \chi_j \chi_i | 0 \rangle = \frac{1}{2} \text{sgn} \tau \delta_{ij},
\eeq
and finite-temperature free theory:
\beq \langle \mathcal{T} \chi_i(\tau) \chi_j(0) \rangle_\beta = \frac{1}{2} \text{sgn} \left( \sin \frac{\pi \tau}{\beta} \right) \delta_{ij}. \eeq
Here $| 0 \rangle$ denotes the vacuum state in the free theory and $\langle \cdots \rangle_\beta$ denotes the averaging over the thermal distribution together with the quantum averaging:
\beq \langle \cdots \rangle_\beta \equiv \frac{\text{tr} \left[ e^{-\beta H} \cdots \right]}{\text{tr} \left[ e^{-\beta H} \right]}. \eeq
A more accurate derivation of the propagators can be found in appendix~\ref{sec:majorana}.
Note that the thermal fermion propagator is antiperiodic due to the anticommutation relations~\eqref{eq:anticommutator}. For instance, for $-\beta < \tau < 0$:
\beq \text{tr} \left[ e^{-\beta H} \chi(\tau + \beta) \chi(0) \right] = \text{tr} \left[ \chi(\tau) e^{-\beta H} \chi(0) \right] = \text{tr} \left[ e^{-\beta H} \chi(0) \chi(\tau) \right] = - \text{tr} \left[ e^{-\beta H} \mathcal{T} \chi(\tau) \chi(0) \right]. \eeq
Finally, it is convenient to define the averaged correlation functions:
\begin{align}
G_0(\tau) &\equiv \frac{1}{N} \sum_{i=1}^N \langle \mathcal{T} \chi_i(\tau) \chi_i(0) \rangle = \frac{1}{2} \text{sgn} \tau, \label{eq:bare-0} \\
G_0^\beta(\tau) &\equiv \frac{1}{N} \sum_{i=1}^N \langle \mathcal{T} \chi_i(\tau) \chi_i(0) \rangle_\beta = \frac{1}{2} \text{sgn} \left( \sin \frac{\pi \tau}{\beta} \right). \label{eq:bare-t}
\end{align}
Note that for $\tau \in \left[ -\frac{\beta}{2}, \frac{\beta}{2} \right)$ the finite-temperature propagator~\eqref{eq:bare-t} coincides with the zero-temperature propagator~\eqref{eq:bare-0}. Also note that any fermion Green function is antisymmetric: $G(\tau) = - G(-\tau)$.
\subsection{Two-point function and diagrammatics}
\label{sec:SYK-diagrams}
Let us turn on the interaction term:
\beq H(\tau) = \frac{1}{4!} \sum_{i,j,k,l} J_{ijkl} \chi_i(\tau) \chi_j(\tau) \chi_k(\tau) \chi_l(\tau), \eeq
and calculate the disorder-averaged loop corrections to the free propagators. For greater clarity, we temporarily return to Lorentzian time, expand the evolution operators and calculate the first few orders in $J$. The evolution operator is given by the following expression:
\beq U(t_1, t_2) \equiv \mathcal{T}\text{exp} \left[-i \int_{t_2}^{t_1} dt H(t)\right] = 1 - i \int_{t_2}^{t_1} dt H(t) - \int_{t_2}^{t_1} dt \int_{t_2}^t dt' H(t) H(t') + \cdots. \eeq
The exact propagator $G(t)$ can be transformed to the following form:
\beq G(t) \delta_{ij} = \left\langle \mathcal{T} U^\dagger(t, -\infty) \chi_i(t) U(t, 0) \chi_j(0) U(0, -\infty) \right\rangle = \frac{\langle \mathcal{T} \chi_i(t) \chi_j(0) U(+\infty, -\infty) \rangle}{\langle U(+\infty, -\infty) \rangle}. \eeq
Here we use the unitarity of $U(t_1, t_2)$ and suppose that the vacuum state is not disturbed by adiabatically switching the interaction term on and off~\cite{Peskin,Akhmedov}. Note that we do not need to use the interaction picture, since $H_0 = 0$. Now let us expand this expression and average it over the disorder:
\beq
\label{eq:2p-1}
\begin{aligned}
G(t) \delta_{ab} &= \Big\langle \mathcal{T} \Big[ \chi_a(t) \chi_b(0) - \frac{i}{4!} \sum_{i,j,k,l} \overline{J_{ijkl}} \int_{-\infty}^{+\infty} dt' \chi_a(t) \chi_b(0) \chi'_i \chi'_j \chi'_k \chi'_l - \\ &- \frac{1}{2} \frac{1}{(4!)^2} \sum_{i,j,k,l,p,q,r,s}\overline{J_{ijkl} J_{pqrs}} \int_{-\infty}^{+\infty} dt' \int_{-\infty}^{+\infty} dt'' \chi_a(t) \chi_b(0) \chi'_i \chi'_j \chi'_k \chi'_l \chi''_p \chi''_q \chi''_r \chi''_s + \mathcal{O}(J^3) \Big] \Big\rangle,
\end{aligned} \eeq
where we denoted $\chi_i(t') \equiv \chi'_i$ and $\chi_i(t'') \equiv \chi''_i$ for short. We also used that in the large $N$ limit the averaging over the disorder in the numerator and denominator in~\eqref{eq:2p-1} can be done independently.
Now one sees that the rules~\eqref{eq:J1},~\eqref{eq:J2} and~\eqref{eq:J3} single out a very special type of vacuum expectation values. First, the disconnected part of the averages factorizes as usual. Second, odd orders in $J_{ijkl}$ die out after the disorder averaging. Third, the connected part of the expression~\eqref{eq:2p-1} reduces to the following expression:
\beq \label{eq:2p-3} \begin{aligned}
G(t) - G_0(t) &= \frac{2 \cdot 4 \cdot 4!}{2 (4!)^2} \frac{1}{N} \sum_{i,j,k,m,n}\overline{J_{ikmn} J_{jkmn}} \delta_{ij} \int dt' dt'' G_0(t-t') G_0(t'-t'')^3 G_0(t'') + \mathcal{O}(J^4) = \\ &= J^2 \int dt' dt'' G_0(t-t') G(t'-t'')^3 G(t'') + \mathcal{O}\left(\frac{J^2}{N}\right) + \mathcal{O}(J^4).
\end{aligned} \eeq
Here we have applied Wick's theorem for the vacuum expectation values, contracted couplings with the Kronecker deltas which come from the free propagators~\eqref{eq:2p-free-0}, used the antisymmetry of $J_{ijkl}$ to find the numerical coefficient\footnote{All possible contractions give~$4\cdot4\cdot3\cdot2$ and the symmetry under the change~$t'\leftrightarrow t''$ gives~$2$.} and used the relation~\eqref{eq:J4} to single out the leading order in~$N$. The expression~\eqref{eq:2p-3} can be schematically represented by the so-called melonic diagram (Fig.~\ref{fig:melonic-1}). The other second-order diagram (Fig.~\ref{fig:melonic-2}) is identically zero, because it contains couplings with coincident indices.
\begin{figure}[t]
\begin{center}
\begin{minipage}[t]{0.45\linewidth}
\includegraphics[width=1\linewidth]{SYK-1.png}
\caption{Melonic diagram}
\label{fig:melonic-1}
\end{minipage}
\hfill
\begin{minipage}[t]{0.45\linewidth}
\includegraphics[width=1\linewidth]{SYK-2.png}
\caption{Double tadpole diagram that is identically zero}
\label{fig:melonic-2}
\end{minipage}
\end{center}
\end{figure}
Using Wick's theorem and the relation~\eqref{eq:J3} one can write down higher order corrections, which correspond to higher-order diagrams (Fig.~\ref{fig:melonic-3}). Each diagram is proportional to a certain power of~$J$ and of~$N$. The power of~$J$ simply equals the number of vertices of the diagram (each vertex contributes a factor of~$J$). The power of~$N$ has no simple connection to the shape of the diagram. However, it is easy to see that the only diagrams which survive in the limit $N \rightarrow \infty$ are melonic diagrams, because the expression~\eqref{eq:J4} is the only one of the order~$J^2 N^0$. Roughly speaking, the Kronecker deltas in~\eqref{eq:J4} are contracted directly, whereas the Kronecker deltas in other averages are contracted through the other deltas. The longer the ``path'' along which the indices are contracted via Kronecker symbols, the lower the power of~$N$.
\begin{figure}[t]
\center{\includegraphics[scale=0.4]{SYK-diagrams.png}}\caption{Second-order (a) and fourth-order (b--m) corrections to the propagator. The only diagrams that survive in the limit $N \rightarrow \infty$ are (a), (b) and (e).} \label{fig:melonic-3}
\end{figure}
For instance, compare the double melon (Fig.~\ref{fig:melonic-3}b or Fig.~\ref{fig:melonic-3}e) with a non-melonic diagram (e.g. Fig.~\ref{fig:melonic-3}h). The double melonic diagram contains the following disorder average:
\beq \frac{J^4}{N^6} \sum \overline{J_{iklm} J_{nklm}} \; \overline{J_{npqr} J_{jpqr}} \propto \frac{J^4}{N^6} \sum \delta_{in} \delta_{kk} \delta_{ll} \delta_{mm} \delta_{jn} \delta_{pp} \delta_{qq} \delta_{rr} + \cdots = J^4 + \mathcal{O} \left(\frac{J^4}{N}\right). \eeq
Obviously, the contraction of six Kronecker deltas of the form $\delta_{nn}$ gives~$N^6$, so that the overall order of the diagram is~$J^4 N^0$. At the same time, the diagram depicted in Fig.~\ref{fig:melonic-3}h contains a slightly modified average:
\beq \frac{J^4}{N^6} \sum \overline{J_{iklm} J_{jqrm}} \; \overline{J_{krnp} J_{qlnp}} \propto \frac{J^4}{N^6} \sum \delta_{ij} \delta_{kq} \delta_{lr} \delta_{mm} \delta_{kq} \delta_{lr} \delta_{nn} \delta_{pp} + \cdots = \mathcal{O} \left(\frac{J^4}{N}\right). \eeq
Here the power~$N^5$ comes from the contraction of $\delta_{mm}$, $\delta_{nn}$, $\delta_{pp}$, $\delta_{kq} \delta_{kq}$ and $\delta_{lr} \delta_{lr}$. One can see that two ``paths'' of contraction lengthened and one ``path'' shortened, which reduces the power of~$N$ by one. The other possible products of Kronecker deltas, which follow from~\eqref{eq:J3}, give even longer ``paths'' of contraction.
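This index bookkeeping is easy to check directly. The sketch below (our own illustration; `numpy` is used only to carry out the index sums) evaluates the two contraction patterns above for external indices $i=j$ and confirms the $N^0$ versus $N^{-1}$ scaling:

```python
import numpy as np

def melonic_weight(N, i=0, j=0):
    """Double-melon pattern: delta_{in} delta_{jn} times six closed traces, over N^6."""
    d = np.eye(N)
    internal = d[i] @ d[j]            # sum_n delta_{in} delta_{jn} = delta_{ij}
    return internal * d.trace() ** 6 / N ** 6

def nonmelonic_weight(N, i=0, j=0):
    """Non-melonic pattern: two contraction 'paths' of length two appear."""
    d = np.eye(N)
    pairs = np.sum(d * d)             # sum_{k,q} delta_{kq} delta_{kq} = N
    return d[i, j] * pairs * pairs * d.trace() ** 3 / N ** 6

for N in (4, 8, 16):
    assert abs(melonic_weight(N) - 1.0) < 1e-12          # order N^0
    assert abs(nonmelonic_weight(N) - 1.0 / N) < 1e-12   # suppressed by 1/N
```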
Thus, the only diagrams that survive in the limit $N \rightarrow \infty$ are the melonic ones (Fig.~\ref{fig:melonic-3}a,~\ref{fig:melonic-3}b and~\ref{fig:melonic-3}e). Moreover, one need not worry about the signs and numerical coefficients in front of such diagrams, because all melons come with the same numerical coefficient. In fact, the correction (Fig.~\ref{fig:melonic-1}) can be thought of as a single block that can be inserted into any tree-level line, including the lines of the block itself.
Recently the dominance of melonic diagrams has also been rigorously proven, e.g. on the basis of combinatorial analysis~\cite{Bonzom} and generalizations of the model~\cite{Gurau-1702}. We will not discuss such proofs here.
Note that in this subsection we worked in the zero-temperature limit, $\beta = \infty$, i.e. calculated the vacuum expectation values. However, the obtained results are easily generalized to the finite-temperature case, because the averaging over the disorder does not depend on the temperature and always singles out melonic diagrams. It does not matter whether the Feynman or the Matsubara technique is used; the prefactors of the diagrams are the same.
\subsection{Dyson--Schwinger equation and IR limit}
\label{sec:DS}
Using the results of the previous section one can straightforwardly write down the Dyson--Schwinger (DS) equation in the limit~$N \rightarrow \infty$:
\beq
\label{eq:DS-1}
\begin{aligned}
G(\tau_1, \tau_2) &= G_0(\tau_1, \tau_2) + \int d\tau_3 d\tau_4 G_0(\tau_1, \tau_3) \Sigma(\tau_3, \tau_4) G(\tau_4, \tau_2), \\
\Sigma(\tau_1, \tau_2) &\equiv J^2 G(\tau_1, \tau_2)^3.
\end{aligned} \eeq
This equation sums up only the melonic diagrams, which dominate in the limit in question. Here we turned back to Euclidean time and took into account that corrections to each propagator of the melon grow endlessly upwards (as in Fig.~\ref{fig:melonic-3}e) and to the right (as in Fig.~\ref{fig:melonic-3}b), i.e. the corresponding tree-level propagators are replaced with the exact ones (Fig.~\ref{fig:Dyson}). This equation (with the appropriate limits of the integration over $\tau$) holds both for zero- and finite-temperature propagators. Due to translational invariance the exact propagator depends only on the time difference: $G(\tau_1, \tau_2) = G(\tau_1 - \tau_2)$, $\Sigma(\tau_1, \tau_2) = \Sigma(\tau_1 - \tau_2)$. Hence, we can Fourier transform the equation~\eqref{eq:DS-1}:
\begin{figure}[t]
\center{\includegraphics[scale=0.3]{SYK-Dyson.png}}\caption{Dyson--Schwinger equation which sums up melonic diagrams. Thin lines correspond to tree-level propagators, thick lines correspond to exact ones.} \label{fig:Dyson}
\end{figure}
\beq
\label{eq:DS-2}
G^{-1}(\omega) = -i \omega - \Sigma(\omega),
\eeq
where we used the explicit form of the tree-level propagator:
\beq G_0(\omega) \equiv \int_{-\infty}^\infty d\tau e^{i \omega \tau} \frac{1}{2} \text{sgn} \tau = \frac{i}{\omega + i 0}, \quad \text{i.e.} \quad G_0^{-1}(\omega) = - i \omega. \eeq
The equation~\eqref{eq:DS-1} can be solved numerically. However, in the limit of low frequencies, $\omega \ll J$ (i.e. $J \tau \gg 1$), and strong coupling, $\beta J \gg 1$, one can also find its analytical solution. Let us first consider the zero-temperature case $\beta = \infty$. On dimensional grounds, we expect that in the limit under consideration the exact propagator decays as $G(\tau) \sim \tau^{-\frac{1}{2}}$. Hence, the left-hand side of the equation~\eqref{eq:DS-1} is negligible and the equation reduces to the following form (the result below shows that this assumption is self-consistent):
\beq \label{eq:DS-3}
0 = G_0(\tau_1, \tau_2) + \int_{-\infty}^\infty d\tau_3 \int_{-\infty}^{\infty} d\tau_4 G_0(\tau_1, \tau_3) \Sigma(\tau_3, \tau_4) G(\tau_4, \tau_2), \eeq
hence,
\beq \label{eq:DS-9}
\int d\tau \Sigma(\tau_1, \tau) G(\tau, \tau_2) = -\delta(\tau_1 - \tau_2). \eeq
To obtain the second identity we have differentiated~\eqref{eq:DS-3} with respect to $\tau_1$, used the relation $\partial_{\tau_1} G_0(\tau_1, \tau_2) = \delta(\tau_1 - \tau_2)$ and then taken the integral over $\tau_3$. Obviously, the same equation arises when one drops the inverse tree-level propagator in~\eqref{eq:DS-2}:
\beq
\label{eq:DS-8}
G^{-1}(\omega) \approx -\Sigma(\omega), \quad \text{or} \quad \Sigma(\omega) G(\omega) \approx -1.
\eeq
This is just the Fourier transform of the equation~\eqref{eq:DS-9}. Note that in the limit in question the DS equation~\eqref{eq:DS-3} is invariant under reparametrizations of time, $\tau \rightarrow f(\tau)$, $ f'(\tau) > 0$:
\beq
\label{eq:DS-4}
\begin{aligned}
G(\tau_1, \tau_2) &\rightarrow G\left[f(\tau_1), f(\tau_2)\right] f'(\tau_1)^\Delta f'(\tau_2)^\Delta, \\
\Sigma(\tau_1, \tau_2) &\rightarrow \Sigma\left[f(\tau_1), f(\tau_2)\right] f'(\tau_1)^{3\Delta} f'(\tau_2)^{3\Delta},
\end{aligned} \eeq
where $\Delta = \frac{1}{4}$. In fact,
\beq
\label{eq:DS-5}
\int d f(\tau) \Sigma\left[f(\tau'), f(\tau)\right] G\left[f(\tau),f(\tau'')\right] = \frac{\int d\tau \Sigma(\tau', \tau) G(\tau, \tau'')}{f'(\tau')^{\frac{1}{4}} f'(\tau'')^{\frac{3}{4}}} = \frac{-\delta(\tau' - \tau'')}{f'(\tau')} = -\delta\left[f(\tau') - f(\tau'')\right].
\eeq
We emphasize that these reparametrizations should respect the orientation of the Euclidean circle: otherwise, the last equality in~\eqref{eq:DS-5} does not hold.
Thus, we obtain that in the IR limit fermions acquire an anomalous conformal dimension\footnote{In general, in the model with $q$-fermion interaction term fermions acquire a conformal dimension $\Delta = \frac{1}{q}$.} $\Delta = \frac{1}{4}$, which hints at the following ansatz to solve the DS equation:
\beq G_c(\tau_1, \tau_2) = B \frac{\text{sgn} \tau_{12}}{|J \tau_{12}|^{2\Delta}}, \eeq
where $\tau_{12} \equiv \tau_1 - \tau_2$ and $B$ is a numerical constant to be determined. The letter ``c'' stands for ``conformal''. Keeping in mind the following integral, which reduces to the gamma-function after a $\frac{\pi}{2}$ rotation in the complex plane:
\beq \int_{-\infty}^\infty d\tau e^{i \omega \tau} \frac{\text{sgn}\tau}{|\tau|^{2D}} = 2 i \Gamma(1 - 2D) \cos(\pi D) |\omega|^{2D - 1} \text{sgn}\omega, \eeq
we confirm that our ansatz does solve the equation~\eqref{eq:DS-3}, and find the numerical factor~$B$:
\beq
\label{eq:DS-6}
G_c(\tau) = \frac{1}{(4\pi)^{\frac{1}{4}}} \frac{\text{sgn} \tau}{|J \tau|^{2\Delta}}. \eeq
Note that this solution decays as $J (\tau_1 - \tau_2) \rightarrow \infty$, which confirms the self-consistency of the approximation in which the equation~\eqref{eq:DS-3} was obtained. This solution was originally found by Sachdev and Ye in the system of randomly coupled spins~\cite{Sachdev-Ye}.
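The numerical factor in~\eqref{eq:DS-6} follows from requiring $\Sigma(\omega) G(\omega) = -1$: denoting the prefactor of the Fourier integral above by $c(D) \equiv 2\Gamma(1-2D)\cos(\pi D)$, the ansatz gives $-c(\Delta)\, c(3\Delta)\, B^4 = -1$ (the powers of $J$ and $|\omega|$ cancel). A short numerical check (plain Python; the helper name `c` is ours):

```python
import math

def c(D):
    """Prefactor of the Fourier transform of sgn(tau) / |tau|^(2D)."""
    return 2.0 * math.gamma(1.0 - 2.0 * D) * math.cos(math.pi * D)

# Sigma(w) G(w) = -1 fixes B^4 c(1/4) c(3/4) = 1; the product equals 4 pi
B = (c(0.25) * c(0.75)) ** -0.25
assert abs(B - (4.0 * math.pi) ** -0.25) < 1e-12   # B = (4 pi)^(-1/4)
```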
Finally, reparametrization invariance~\eqref{eq:DS-4} allows one to find the finite-temperature exact propagator without solving the corresponding DS equation~\cite{Parcollet}. In fact, zero- and finite-temperature propagators are connected by the map~\eqref{eq:circle-line}, which does satisfy the condition $f'(\tau) > 0$. Therefore, we can simply use this map in the expression~\eqref{eq:DS-6}:
\beq
\label{eq:DS-7}
G_c^\beta(\tau) = \frac{\pi^{\frac{1}{4}}}{\sqrt{2 \beta J}} \frac{\text{sgn} \left(\sin \frac{\pi \tau}{\beta} \right)}{|\sin \frac{\pi \tau}{\beta}|^{2\Delta}}, \quad \tau \in \left[-\frac{\beta}{2}, \frac{\beta}{2} \right). \eeq
Here we substituted the correct $\text{sgn}$ function from subsection~\ref{sec:SYK-defs}. However, note that $\text{sgn} \left( \sin\frac{\pi \tau}{\beta} \right) = \text{sgn} \left( \tan\frac{\pi \tau}{\beta} \right) = \text{sgn}(\tau)$ for $\tau \in \left[-\frac{\beta}{2}, \frac{\beta}{2}\right)$. Also note that in the limit $\tau \ll \beta$ expressions~\eqref{eq:DS-6} and~\eqref{eq:DS-7} coincide.
Recall that $G_c(\tau)$ and $G_c^\beta(\tau)$ are approximately equal to the exact propagators $G(\tau)$ and $G^\beta(\tau)$ only for relatively large times $\tau \gg 1/J$. At the same time, in the UV limit ($\tau \ll 1/J$) the exact propagators are approximately equal to the bare ones, $G_0(\tau)$ and $G_0^\beta(\tau)$ respectively. In the intermediate region $G(\tau)$ and $G^\beta(\tau)$ interpolate between these functions (e.g. see Fig.~\ref{fig:G-numerical}).
\begin{figure}[t]
\center{\includegraphics[scale=0.25]{SYK-G-numerical.png}}\caption{A reprint of numerical solutions to the large $N$ Dyson--Schwinger equation~\eqref{eq:DS-1} obtained in~\cite{Maldacena-SYK} for $\beta J = 10$ and $\beta J = 50$. The exact solution is shown in solid lines, conformal approximation in dash-dotted lines and conformal approximation plus the first correction (which breaks the reparametrization invariance) in dashed lines. For convenience the variable $\theta = \frac{2 \pi \tau}{\beta}$ is introduced.} \label{fig:G-numerical}
\end{figure}
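Curves like those in Fig.~\ref{fig:G-numerical} can be reproduced with a simple weighted fixed-point iteration of the DS equation~\eqref{eq:DS-2} in Matsubara frequency space, alternating between $\Sigma(\tau) = J^2 G(\tau)^3$ and $G(\omega) = 1/(-i\omega - \Sigma(\omega))$. The sketch below is our own discretization (the grid size, mixing weight and $\beta J = 10$ are illustrative); the phase factors implement the fermionic frequencies $\omega_n = \pi(2n+1)/\beta$ in terms of ordinary FFTs:

```python
import numpy as np

def solve_syk_ds(beta=10.0, J=1.0, M=2048, mix=0.5, tol=1e-8, max_iter=5000):
    """Weighted iteration of G(w) = 1/(-i w - Sigma(w)), Sigma(tau) = J^2 G(tau)^3."""
    n = np.arange(M)
    omega = np.pi * (2.0 * n - M + 1.0) / beta          # fermionic Matsubara frequencies
    phase = (-1.0) ** n * np.exp(-1j * np.pi * n / M)   # FFT phases for antiperiodicity
    G_w = 1.0 / (-1j * omega)                           # free propagator as the seed
    err = np.inf
    for _ in range(max_iter):
        G_tau = phase * np.fft.fft(G_w) / beta          # G(tau_k), tau_k = k beta / M
        Sigma_w = beta * np.fft.ifft(np.conj(phase) * J**2 * G_tau**3)
        G_new = 1.0 / (-1j * omega - Sigma_w)
        err = np.max(np.abs(G_new - G_w))
        G_w = (1.0 - mix) * G_w + mix * G_new
        if err < tol:
            break
    G_tau = (phase * np.fft.fft(G_w) / beta).real
    return G_tau, err

G_tau, err = solve_syk_ds()
```

The resulting $G(\tau)$ starts near the free value $\frac{1}{2}$ at small $\tau$ and decays towards the conformal form in the interior of the interval $(0, \beta)$.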
After the analytic continuation of~\eqref{eq:DS-7} to the Lorentzian time $t = -i \tau$ one obtains the following two-point function\footnote{Note that in the Lorentzian signature one should specify the propagator (i.e. the ordering of the operators inside the correlation function) using an $i \epsilon$ prescription~\cite{Maldacena-SYK, Peskin}. The analytical behavior of the different propagators is different, but the overall exponential factor is the same.}:
\beq G_c^\beta(t) = \frac{\pi^{\frac{1}{4}}}{\sqrt{2 \beta J}} \frac{1}{|\sinh \frac{\pi t}{\beta}|^{2\Delta}} \propto e^{-\frac{2 \pi \Delta}{\beta} t}, \quad \text{as} \quad t \gg \frac{1}{J}. \eeq
This function becomes exponentially small after the time $t_d = \frac{\beta}{2 \pi \Delta} \sim \beta$, which is usually called the dissipation time. This is quite an unusual behavior for a 1D system, but recall that we consider the large $N$ limit. In fact, it was shown in~\cite{Lunkin} that the exponential decay is replaced by the correct power-like one, $G_c^\beta(t) \sim (t/t_M)^{-3/2}$, for times larger than $t_M \sim N/J$. We will return to this expression when we discuss four-point functions (Sec.~\ref{sec:treatment}).
\subsection{Effective action}
\label{sec:effective}
In this subsection we derive the effective action and DS equations~\eqref{eq:DS-1} directly from the path integral. Here we assume the Gaussian distribution for coupling constants $J_{ijkl}$, which gives the following averaging rule:
\beq \overline{f(J_{ijkl})} \equiv \int \mathcal{D} J_{ijkl} f\left( J_{ijkl} \right), \quad \text{where} \quad \mathcal{D} J_{ijkl} \equiv \exp\left[ - \frac{N^3}{12 J^2} \sum_{i<j<k<l} J_{ijkl}^2 \right] \prod_{i<j<k<l} \sqrt{\frac{N^3}{3! J^2}} \frac{dJ_{ijkl}}{\sqrt{2\pi}}. \eeq
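This measure corresponds to independent Gaussian couplings with zero mean and variance $\overline{J_{ijkl}^2} = \frac{3! J^2}{N^3}$ for $i<j<k<l$. The disorder average performed below then reduces to the elementary Gaussian identity $\overline{e^{a J_{ijkl}}} = e^{a^2 \overline{J_{ijkl}^2}/2}$, which a quick Monte Carlo estimate confirms (the sample size, seed and parameter values here are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
N, J, a = 6, 1.0, 0.7
var = 6.0 * J**2 / N**3                 # variance 3! J^2 / N^3 of a single coupling
samples = rng.normal(0.0, np.sqrt(var), size=10**7)
mc = np.mean(np.exp(a * samples))       # Monte Carlo estimate of mean(exp(a J_ijkl))
exact = np.exp(a**2 * var / 2.0)
assert abs(mc - exact) / exact < 1e-3   # agrees with the Gaussian identity
```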
There are two physically distinct ways to realize the disorder average. First, one can average the partition function itself, i.e. find $\overline{Z}$. Second, one can average the free energy using the so-called replica trick:
\beq
\label{eq:effective-1}
\beta \overline{F} \equiv -\overline{\log Z} = -\lim_{M \rightarrow 0} \partial_M \overline{Z^M}.
\eeq
In this approach one introduces $M$ copies of the system ($\chi_i \rightarrow \chi_i^\alpha$, $i=1 \ldots N$, $\alpha = 1 \ldots M$), calculates the extended partition function $Z^M$, averages over the disorder, analytically continues to non-integer $M$ and takes the formal limit~\eqref{eq:effective-1}. If one wants to find the free energy, entropy and other thermodynamic functions, which are in some sense directly observable quantities, one should consider the second average.
However, in the SYK model both methods of averaging give the same effective action~\cite{Sarosi,Jevicki-1,Sachdev}, because the replica-nondiagonal contributions to the replica action are suppressed by higher powers of $\frac{1}{N}$, and the replica partition function simply splits into the product of $M$ naively-averaged partition functions: $\overline{Z^M} = \left(\overline{Z}\right)^M + \mathcal{O}\left(\frac{1}{N}\right)$. One can find the details of the replica calculation in~\cite{Kitaev,Jevicki-1,Bagrets-1607, Arefeva-1811}. Thus, for simplicity we consider the disorder average of the partition function itself:
\beq \label{eq:effective-0}
\begin{aligned}
\overline{Z} &= \int \mathcal{D} J_{ijkl} \mathcal{D} \chi_i \exp \left[ \int d\tau \left( \frac{1}{2} \sum_{i=1}^N \chi_i \partial \chi_i - \frac{1}{4!} \sum_{i,j,k,l=1}^N J_{ijkl} \, \chi_i \chi_j \chi_k \chi_l \right) \right] = \\
& = \int \mathcal{D} \chi_i \exp\left[ \frac{1}{2} \sum_i \int d\tau \chi_i \partial \chi_i + \frac{3! J^2}{2 N^3} \frac{1}{4!} \sum_{i,j,k,l} \left( \int d\tau \chi_i \chi_j \chi_k \chi_l \right)^2 \right] = \\ &= \int \mathcal{D} \chi_i \exp\Bigg[ \frac{1}{2} \sum_i \int d\tau \chi_i \partial \chi_i + \frac{N J^2}{8} \int d\tau d\tau'\Bigg(\frac{1}{N} \sum_i \chi_i(\tau) \chi_i(\tau')\Bigg)^4 \Bigg] = \\ &= \int \mathcal{D} \chi_i \exp\left[ \int d\tau d\tau' \left( \frac{N}{2} G_0^{-1}(\tau, \tau') \Xi(\tau, \tau') + \frac{N J^2}{8} \Xi(\tau, \tau')^4 \right) \right].
\end{aligned} \eeq
Here we performed the Gaussian integration over $J_{ijkl}$ and reorganized the integrals over $d\tau$ and the sum over the fermion indices. For convenience we also introduced the inverse tree-level propagator $G_0^{-1}(\tau, \tau')$ and the mean field variable $\Xi(\tau, \tau')$:
\beq \label{eq:effective-4}
G_0^{-1}(\tau, \tau') = \delta(\tau - \tau') \partial_{\tau}, \qquad \Xi(\tau, \tau') = \frac{1}{N} \sum_{i=1}^N \chi_i(\tau) \chi_i(\tau'). \eeq
Then we formally apply the following identity:
\beq f(\Xi) = \int dx f(x) \delta(x-\Xi) = \frac{N}{2\pi} \int dx dy f(x) e^{i N (x - \Xi) y}, \eeq
for the functional variables
\beq \label{eq:effective-v}
x = G(\tau, \tau'), \quad y = i \Sigma(\tau, \tau'), \eeq
with the following normalization condition:
\beq \int \mathcal{D} G \mathcal{D} \Sigma \exp \left[ -\frac{N}{2} \int d\tau d\tau' \Sigma(\tau, \tau') G(\tau, \tau') \right] = 1, \eeq
to the function
\beq \begin{aligned}
\exp&\left( \frac{N J^2}{8} \int d\tau d\tau' \Xi(\tau, \tau')^4 \right) = \\ &= \int \mathcal{D} G \mathcal{D} \Sigma \exp \left\{ \frac{N}{2} \int d\tau d\tau' \left[ \frac{J^2}{4} G(\tau, \tau')^4 - \Sigma(\tau, \tau') \Big( G(\tau, \tau') - \Xi(\tau, \tau') \Big) \right] \right\}.
\end{aligned} \eeq
In this way we reorganize the nonlinear term $\Xi(\tau, \tau')^4$ in~\eqref{eq:effective-0}:
\beq \begin{aligned}
\overline{Z} &= \int \mathcal{D} G \mathcal{D} \Sigma \int \mathcal{D} \chi_i \exp \left\{ \frac{N}{2} \int d\tau d\tau' \left[ \left( G_0^{-1}(\tau, \tau') + \Sigma(\tau, \tau') \right) \Xi(\tau, \tau') + \frac{J^2}{4} G(\tau, \tau')^4 - \Sigma(\tau, \tau') G(\tau, \tau') \right] \right\} \\ &= \int \mathcal{D} G \mathcal{D} \Sigma \int \mathcal{D} \chi_i \exp\Bigg[ \frac{1}{2} \sum_i \int d\tau d\tau' \, \chi_i(\tau) \Big( \delta(\tau - \tau') \partial_\tau + \Sigma(\tau, \tau') \Big) \chi_i(\tau') + \\ &\phantom{= \int \mathcal{D} G \mathcal{D} \Sigma \int \mathcal{D} \chi_i \exp\Bigg[}+ \frac{N}{2} \int d\tau d\tau' \left( \frac{J^2}{4} G(\tau, \tau')^4 - \Sigma(\tau, \tau') G(\tau, \tau') \right) \Bigg].
\end{aligned} \eeq
In the last line we substituted the explicit form of the inverse tree-level propagator and mean field variable~\eqref{eq:effective-4}. Finally, after the integration over $\chi_i(\tau)$ we obtain the effective action:
\begin{align}
\label{eq:effective-2}
\overline{Z} &= \int \mathcal{D} G \mathcal{D} \Sigma \, e^{-I_{eff}[G, \Sigma]}, \\
\label{eq:effective-3}
\frac{I_{eff}}{N} &= -\frac{1}{2} \log \det \Big( -\delta(\tau-\tau')\partial_\tau - \Sigma(\tau,\tau')\Big) + \frac{1}{2} \int d\tau d\tau' \left( \Sigma(\tau, \tau') G(\tau, \tau') - \frac{J^2}{4} G(\tau, \tau')^4 \right).
\end{align}
This effective action clearly reproduces the DS equation~\eqref{eq:DS-1} after the variations over $G$ and $\Sigma$. Indeed, variation wrt $G$ gives the expression for the self-energy, whereas variation wrt $\Sigma$ gives the equation itself\footnote{In the second line we used that $G_0^{-1}(\tau', \tau) = G_0^{-1}(\tau,\tau')$ and $\Sigma(\tau', \tau) = - \Sigma(\tau,\tau')$.}:
\beq \begin{aligned}
\delta_{\Sigma} I_{eff} &= -\frac{1}{2} \text{tr} \log \left( 1 - \left(-\partial_\tau - \Sigma\right)^{-1} \delta \Sigma \right) + \frac{1}{2} \int d\tau d\tau' G(\tau, \tau') \delta \Sigma(\tau, \tau') = \\ &= \frac{1}{2} \int d\tau d\tau' \left[ G(\tau, \tau') - \left[G_0^{-1}(\tau, \tau') - \Sigma(\tau, \tau')\right]^{-1} \right] \delta \Sigma(\tau, \tau'), \quad \text{hence}, \quad G^{-1} = G_0^{-1} - \Sigma.
\end{aligned} \eeq
Practically, this means that we need not rigorously justify the calculations performed above, because the only property which we require from the effective action is that it reproduce the correct DS equation. Once we find such an action, it entirely defines the theory in the limit $N \rightarrow \infty$. In principle, we could just guess the action~\eqref{eq:effective-3} from the equation~\eqref{eq:DS-1}.
We emphasize that the solution of the DS equation~\eqref{eq:DS-1} is a true saddle point of the effective action~\eqref{eq:effective-3}, i.e. it is a maximum with respect to $G$ and a minimum with respect to $\Sigma$. This is due to the specific choice of the integration variable $y$, which is purely imaginary~\eqref{eq:effective-v}. Such a saddle point should be treated with caution. However, numerical calculations show that the solution of the DS equation does converge to this point~\cite{Maldacena-SYK, Jevicki-1, Jevicki-2, Arefeva-1811, Wang}.
Note that the functional integration over one-dimensional Majorana fermions is ill-defined, because such fermions can be described by neither ordinary nor Grassmann numbers. In practice one should rewrite the Majorana fermions in terms of ordinary Dirac fermions and reduce the integral~\eqref{eq:effective-0} to an integral over Grassmann variables. For the details of this calculation see appendix~\ref{sec:majorana-integral}.
Also note that the number $\frac{1}{N}$ plays the role of Planck's constant $\hbar$ in the functional integral~\eqref{eq:effective-2}, i.e. the limit $N \rightarrow \infty$ is equivalent to the classical limit $\hbar \rightarrow 0$.
Finally, the effective action~\eqref{eq:effective-3} allows one to calculate the entropy and free energy of the system, which determine its thermodynamic properties~\cite{Maldacena-SYK,Jevicki-2,Cotler}:
\beq \label{eq:effective-5}
\beta F = \beta E_0 + N \left[ -S_0 - \frac{2 \pi^2 C}{\beta J} + \mathcal{O}\left(\frac{1}{(\beta J)^2}\right) \right] + \frac{3}{2} \log(\beta J) + \text{const} + \mathcal{O}\left(\frac{1}{N}\right), \eeq
where $E_0$ is the ground state energy, $S_0 \approx 0.232$ is the low temperature entropy per site and $C$ is a numerical coefficient, the origin of which will be explained below. Note that the entropy of the system is large ($S \sim N$) even at low temperatures, which is not a common property. This is due to a specific form of the density of states, which resembles the random matrix semicircle and smoothly goes to zero at low energies (Fig.~\ref{fig:DOS}). In other words, even near the ground state the density of states is large ($\rho \sim e^{S_0 N}$) and energy gaps are small ($\sim e^{-S_0 N}$).
\begin{figure}[t]
\center{\includegraphics[scale=0.2]{DOS.png}}\caption{A reprint of the energy spectrum numerically calculated in~\cite{Maldacena-SYK} for a single realization of the couplings in the model~\eqref{eq:SYK-action} with $N=32$ fermions.} \label{fig:DOS}
\end{figure}
\subsection{Schwarzian action}
\label{sec:Schwarzian}
As we have seen in subsection~\ref{sec:DS}, the presence of the inverse tree-level propagator in~\eqref{eq:DS-1} breaks the reparametrization invariance~\eqref{eq:DS-4} of the DS equation. In this subsection we study this breaking more carefully. First, let us make the change $\Sigma \rightarrow \Sigma - G_0^{-1}$ in the effective action~\eqref{eq:effective-3} and separate the conformally-invariant and non-invariant parts, $I_{eff} = I_{CFT} + I_S$:
\begin{align}
\label{eq:Sch-CFT} \frac{I_{CFT}}{N} &= -\frac{1}{2} \log \det \Big(- \Sigma(\tau,\tau')\Big) + \frac{1}{2} \int d\tau d\tau' \left( \Sigma(\tau, \tau') G(\tau, \tau') - \frac{J^2}{4} G(\tau, \tau')^4 \right), \\
\label{eq:Sch-S} \frac{I_S}{N} &= -\frac{1}{2} \int d\tau d\tau'G_0^{-1}(\tau, \tau') G(\tau, \tau').
\end{align}
Now it is easy to see that the conformal part~$I_{CFT}$ reproduces the DS equation~\eqref{eq:DS-3} or~\eqref{eq:DS-8}, which is invariant wrt reparametrizations $\tau \rightarrow f(\tau)$, $f'(\tau) > 0$. Furthermore, the delta-function in $G_0^{-1}(\tau, \tau')$ picks up small time differences $|\tau - \tau'| \ll J^{-1}$, therefore it can be neglected in the IR limit. Hence, conformal invariance emerges in the deep IR limit and disappears when one moves away from it.
However, one cannot simply throw away the non-invariant part of the effective action, because it contains essential information about the theory. In order to see this, let us consider fluctuations of the effective action~\eqref{eq:effective-3} near the saddle point $(\tilde{G}, \tilde{\Sigma})$. We emphasize that $\tilde{G} \ne G_c$; $G_c$ is only the IR limit of $\tilde{G}$. It is convenient to parametrize the fluctuations\footnote{Note that the measure of the functional integration does not change if we choose fluctuations in this form. } in the form $G = \tilde{G} + \frac{\delta G}{|\tilde{G}|}$, $\Sigma = \tilde{\Sigma} + |\tilde{G}| \delta \Sigma$:
\beq \begin{aligned}
\frac{I_{eff}}{N} &\approx \frac{1}{4} \int d\tau_1 d\tau_2 d\tau_3 d\tau_4 \, \delta\Sigma(\tau_1, \tau_2) \left(\big|\tilde{G}(\tau_1, \tau_2)\big| \tilde{G}(\tau_1, \tau_3) \tilde{G}(\tau_2, \tau_4) \big|\tilde{G}(\tau_3, \tau_4)\big| \right) \delta\Sigma(\tau_3, \tau_4) + \\ &+ \frac{1}{2} \int d\tau_1 d\tau_2 \left( \delta G(\tau_1, \tau_2) \delta \Sigma(\tau_1, \tau_2) - \frac{3 J^2}{2} \delta G(\tau_1, \tau_2)^2 \right) \equiv \\ &\equiv -\frac{1}{12 J^2} \langle \delta \Sigma | K | \delta \Sigma \rangle + \frac{1}{2} \langle \delta G | \delta \Sigma \rangle - \frac{3 J^2}{4} \langle \delta G | \delta G \rangle.
\end{aligned} \eeq
Here $K$ is the operator that acts on the space of antisymmetric two-point functions (and generates ladder diagrams, as we will see in Sec.~\ref{sec:4p-CFT}). The integral kernel of this operator looks as follows:
\beq \label{eq:Sch-K} \begin{gathered}
K(\tau_1, \tau_2, \tau_3, \tau_4) \equiv -3 J^2 \big|\tilde{G}(\tau_1, \tau_2)\big| \tilde{G}(\tau_1, \tau_3) \tilde{G}(\tau_2, \tau_4) \big|\tilde{G}(\tau_3, \tau_4)\big|, \\
K | A \rangle = \int d\tau_3 d\tau_4 \, K(\tau_1, \tau_2, \tau_3, \tau_4) A(\tau_3, \tau_4).
\end{gathered} \eeq
It is straightforward to see that this kernel is antisymmetric under the changes $\tau_1 \leftrightarrow \tau_2$ and $\tau_3 \leftrightarrow \tau_4$ but symmetric under the change $(\tau_1, \tau_2) \leftrightarrow (\tau_3, \tau_4)$ (recall that $G(\tau_2, \tau_1) = - G(\tau_1, \tau_2)$). We also introduce the identity operator~\cite{Kitaev, Kitaev-reps}:
\beq \label{eq:Sch-I} \begin{aligned}
I(\tau_1, \tau_2, \tau_3, \tau_4) &\equiv \frac{1}{2} \left[ \delta(\tau_1 - \tau_3) \delta(\tau_2 - \tau_4) - \delta(\tau_1 - \tau_4) \delta(\tau_2 - \tau_3) \right], \\ I | A \rangle &= | A \rangle,
\end{aligned} \eeq
and the inner product of two-point functions:
\beq \label{eq:inner} \langle A | B \rangle \equiv \int d\tau_1 d\tau_2 A^*(\tau_1, \tau_2) B(\tau_1, \tau_2). \eeq
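On a discrete time grid the delta functions in~\eqref{eq:Sch-I} become Kronecker symbols, and $I$ simply antisymmetrizes its argument; it therefore acts as the identity on antisymmetric two-point functions, as a short `numpy` check illustrates (the grid size is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.normal(size=(16, 16))
A = M - M.T                   # antisymmetric two-point function A(tau_1, tau_2)
IA = 0.5 * (A - A.T)          # discrete action of I(tau_1, tau_2, tau_3, tau_4)
assert np.allclose(IA, A)     # I|A> = |A> on the antisymmetric subspace
```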
Recall that $\Sigma$ is a Lagrange multiplier, i.e. it does not appear in physical quantities. Hence, we can just integrate its fluctuations out of the functional integral with the action~\eqref{eq:effective-3} to obtain, in the semiclassical approximation:
\beq \label{eq:Sch-2}
\frac{I_{eff}[\delta G]}{N} = -\log \int \mathcal{D} \delta \Sigma \, e^{- I_{eff}[\delta G,\delta \Sigma]} \simeq \frac{3 J^2}{4} \big\langle \delta G \big| (K^{-1} - I) \big| \delta G \big\rangle. \eeq
Let us check what happens with the action~\eqref{eq:Sch-2} in the conformal (IR) limit. Naively, one might think that the non-invariant part of the action is negligible in this limit, i.e. that the action~\eqref{eq:effective-3} approximately equals~\eqref{eq:Sch-CFT}. This would mean that the conformally invariant propagator replaces the exact saddle point, $\tilde{G} \approx G_c$. The fluctuations of the effective action in this limit are as follows:
\beq \label{eq:Sch-7}
\frac{I_{eff}[\delta G]}{N} \approx \frac{I_{CFT}[\delta G]}{N} \approx \frac{3 J^2}{4} \langle \delta G | K_c^{-1} - I | \delta G \rangle, \eeq
where the operator $K_c$ has the form~\eqref{eq:Sch-K} with the functions $G_c$ instead of $\tilde{G}$. Unfortunately, such a naively truncated effective action does not appropriately treat all fluctuations around the saddle point. Indeed, let us consider fluctuations $\delta G$ that preserve the conformal symmetry~\eqref{eq:DS-4}. In this case $G = G_c + \frac{\delta G}{|G_c|}$ and $\Sigma = J^2 G_c^3 + 3 J^2 |G_c| \delta G$ solve the conformal Dyson--Schwinger equation~\eqref{eq:DS-9}:
\beq \int d\tau_4 \Big(\Sigma_c(\tau_3, \tau_4) + 3 J^2 |G_c(\tau_3, \tau_4)| \delta G(\tau_3, \tau_4) \Big) \left(G_c(\tau_4, \tau_2) + \frac{\delta G(\tau_4, \tau_2)}{|G_c(\tau_4, \tau_2)|} \right) = -\delta(\tau_3 - \tau_2), \eeq
Subtracting the DS equation for the conformal functions $G_c$ and $\Sigma_c$, multiplying by $G_c(\tau_3, \tau_1)$ and integrating over $\tau_3$, we obtain the following identity:
\beq \int d\tau_3 d\tau_4 \left( \frac{\delta G(\tau_4, \tau_2)}{|G_c(\tau_4, \tau_2)|} \Sigma_c(\tau_3, \tau_4) G_c(\tau_3, \tau_1) + 3 J^2 G_c(\tau_1, \tau_3) G_c(\tau_2, \tau_4) |G_c(\tau_3, \tau_4)| \delta G(\tau_3, \tau_4) \right) = 0, \eeq
which straightforwardly reduces to
\beq \left( I - K_c \right) \delta G = 0. \eeq
Thus, on such fluctuations the conformally-invariant action~\eqref{eq:Sch-7} or~\eqref{eq:Sch-CFT} is zero, i.e. the non-invariant part~\eqref{eq:Sch-S} cannot be omitted. Therefore, we have to move away from the IR limit and estimate how the action~\eqref{eq:Sch-S} changes under the conformal transformations~\eqref{eq:DS-4}.
Let us first consider the zero-temperature case ($\beta = \infty$). As the first approximation, we expand the conformal propagator:
\beq
G_c(\tau_1, \tau_2) \rightarrow G_c\left[f(\tau_1), f(\tau_2)\right] \approx \frac{\text{sgn} (\tau_1 - \tau_2)}{(4\pi)^{\frac{1}{4}} J^{2\Delta}} \frac{f'(\tau_1)^\Delta f'(\tau_2)^\Delta}{\left| f(\tau_1) - f(\tau_2)\right|^{2\Delta}}, \eeq
near $\tau = \frac{\tau_1 + \tau_2}{2}$ into the powers of $\tau_{12} = \tau_1 - \tau_2$:
\beq
\label{eq:Sch-3}
G(\tau_1, \tau_2) = G_c(\tau_1, \tau_2) \left( 1 + \frac{\Delta}{6} \tau_{12}^2 \text{Sch}\left[f(\tau), \tau\right] + \mathcal{O} (\tau_{12}^3) \right), \quad \text{where} \quad \text{Sch}\left(f(\tau),\tau\right) \equiv \frac{f'''}{f'} - \frac{3}{2} \left(\frac{f''}{f'}\right)^2. \eeq
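The coefficient $\frac{\Delta}{6}$ in this expansion can be verified numerically, e.g. for $f(\tau) = \tan\tau$, where $\text{Sch}(\tan\tau, \tau) = 2$, so that $(G/G_c - 1)/\tau_{12}^2$ should approach $\frac{\Delta}{6}\cdot 2 = \frac{1}{12}$ for $\Delta = \frac{1}{4}$ at any $\tau$. A plain-Python sketch (the sample point and step are arbitrary):

```python
import math

Delta, t, eps = 0.25, 0.3, 5e-3
f = math.tan
fp = lambda x: 1.0 / math.cos(x) ** 2          # f'(x) for f = tan
t1, t2 = t + eps / 2, t - eps / 2
# ratio = G / G_c = f'(t1)^Delta f'(t2)^Delta (tau_12 / (f(t1) - f(t2)))^(2 Delta)
ratio = (fp(t1) * fp(t2)) ** Delta * (eps / (f(t1) - f(t2))) ** (2 * Delta)
coeff = (ratio - 1.0) / eps ** 2
assert abs(coeff - 1.0 / 12.0) < 1e-3          # Delta / 6 * Sch(tan t, t) = 1/12
```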
We perform this expansion because the delta-function from $G_0^{-1}(\tau_1, \tau_2)$ in~\eqref{eq:Sch-S} picks up values around $\tau_{12} \approx 0$; we will use this property below. Then we subtract the untransformed part from~\eqref{eq:Sch-3} and substitute the final result into the action~\eqref{eq:Sch-S} to obtain:
\beq \label{eq:Sch-6} \begin{aligned}
\frac{I_S}{N} = -\frac{1}{2} \langle G_0^{-1} | \delta G \rangle &= -\frac{1}{2} \int d\tau d\tau_{12} \, G_0^{-1}(\tau_{12}) \tilde{G}(\tau_{12}) \left[ \frac{\Delta}{6} \tau_{12}^2 \text{Sch}\left[f(\tau), \tau\right] + \mathcal{O} (\tau_{12}^3) \right] \approx \\ &\approx -\frac{\Delta}{12} \int d\tau_{12} \delta(\tau_{12}) \partial_{\tau_{12}} \left( \tau_{12}^2 \tilde{G}(\tau_{12}) \right) \int d\tau \, \text{Sch}\left[ f(\tau), \tau \right] = \\ &= -\frac{1}{J} \underbrace{\frac{\Delta}{12} \int d\eta \delta(\eta) \partial_\eta \left( \eta^2 \tilde{G}(\eta) \right)}_C \int d\tau \, \text{Sch}\left[ f(\tau), \tau \right],
\end{aligned} \eeq
where we have changed to the dimensionless variable $\eta = J \tau_{12}$. Now it is easy to see that the integral over $d\eta$ is undefined:
\beq \label{eq:Sch-C} C = \frac{\Delta}{12} \int d\eta \delta(\eta) \left[ (\eta^2 g(\eta))' \text{sgn}\eta + \eta^2 g(\eta) \delta(\eta) \right] = \frac{\Delta}{12} \int d\eta g(\eta) \eta^2 \delta(\eta)^2 = \frac{\Delta}{12} \delta(0) \cdot g(0) \cdot 0^2 = 0 \cdot \infty, \eeq
where we singled out the relevant part of the saddle point value, $\tilde{G}(\eta) = g(\eta) \text{sgn}\eta$.
There is no simple way to resolve this uncertainty, because we cannot analytically find the function $g(\eta)$ for all times. However, this problem can be solved by smearing the delta-function (i.e. by replacing the term $G_0^{-1}$ with another suitable source which is large at small times, $\eta \ll 1$) and introducing gentle UV and IR cut-offs in the integral~\eqref{eq:Sch-C}. This was done in~\cite{Kitaev}.
The other way is to calculate the leading non-conformal corrections to the eigenfunctions and eigenvalues of the operator $K$, substitute them into the action~\eqref{eq:Sch-2} and directly evaluate $I_S = I_{eff} - I_{CFT} \approx \delta I_{CFT}$. This calculation was performed in~\cite{Maldacena-SYK,Jevicki-2}. Both these methods lead to the action of the form~\eqref{eq:Sch-6} with the coefficient $C \approx 0.48 \times \frac{\Delta}{12} > 0$. In summary, for the zero-temperature theory we obtain:
\beq
\label{eq:Sch-4}
\frac{I_S}{N} \approx -\frac{C}{J} \int_{-\infty}^\infty \text{Sch}\left[f(\tau), \tau\right] d\tau. \eeq
As usual, one can change to the finite-temperature version of~\eqref{eq:Sch-4} using the map~\eqref{eq:circle-line}:
\beq
\label{eq:Sch-5}
\frac{I_S}{N} = -\frac{C}{J} \int_{-\frac{\beta}{2}}^{\frac{\beta}{2}}\text{Sch}\left[\tan\frac{\pi \varphi(\tau)}{\beta}, \tau \right] d\tau. \eeq
In this case the saddle point values of the effective action are parametrized by the function $\varphi(\tau)$, which maps the time circle to itself and preserves its orientation. Note that the coefficient $C$ is exactly the coefficient in the thermodynamic identity~\eqref{eq:effective-5}. This is because the low energy dynamics of SYK model is determined by the Schwarzian action.
Note that conformal invariance does not completely disappear when one moves away from the IR limit. Indeed, the exact propagators and the effective action must be invariant under the transformations from the $SL(2,\mathbb{R})$ group: these transformations are the rotations of the time circle (or of the time line in the limit $\beta \rightarrow \infty$) and do not correspond to any physical degrees of freedom. Both the action~\eqref{eq:Sch-2} and the Schwarzian action~\eqref{eq:Sch-5} vanish on the reparametrizations from the $SL(2,\mathbb{R})$ group.
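Indeed, the Schwarzian derivative vanishes identically on fractional-linear maps, which is easy to confirm with finite differences (the step size and the sample map below are arbitrary):

```python
import math

def sch(f, t, h=1e-3):
    """Schwarzian derivative f'''/f' - (3/2)(f''/f')^2 via central differences."""
    f1 = (f(t + h) - f(t - h)) / (2 * h)
    f2 = (f(t + h) - 2 * f(t) + f(t - h)) / h ** 2
    f3 = (f(t + 2 * h) - 2 * f(t + h) + 2 * f(t - h) - f(t - 2 * h)) / (2 * h ** 3)
    return f3 / f1 - 1.5 * (f2 / f1) ** 2

mobius = lambda t: (2.0 * t + 1.0) / (t + 3.0)   # a sample fractional-linear map
assert abs(sch(mobius, 0.5)) < 1e-3              # Sch vanishes on such maps
assert abs(sch(math.tan, 0.3) - 2.0) < 1e-2      # Sch(tan t, t) = 2
assert abs(sch(lambda t: mobius(math.tan(t)), 0.3) - 2.0) < 1e-2  # invariance of Sch
```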
Thus, the apparent conformal symmetry of the IR theory is actually broken down to the symmetry with respect to $SL(2,\mathbb{R})$ transformations. The dynamics of the pseudo-Goldstone boson associated with this broken symmetry (the so-called ``soft mode'') is approximately described by the Schwarzian action~\eqref{eq:Sch-5}.
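These statements are easy to check with a computer algebra system. The following sketch (assuming \texttt{sympy} is available) verifies that $\text{Sch}\left[\tan\frac{\tau}{2}, \tau\right] = \frac{1}{2}$ and that the Schwarzian derivative is blind to $SL(2,\mathbb{R})$ transformations of its first argument:

```python
import sympy as sp

tau = sp.symbols('tau')

def sch(g, t):
    """Schwarzian derivative Sch[g, t] = g'''/g' - (3/2)(g''/g')**2."""
    gp = sp.diff(g, t)
    return sp.diff(g, t, 3)/gp - sp.Rational(3, 2)*(sp.diff(g, t, 2)/gp)**2

f = sp.tan(tau/2)          # the finite-temperature saddle (beta = 2*pi)
g = (2*f + 3)/(f + 2)      # an SL(2,R) image of f, det = 2*2 - 3*1 = 1

# Sch[tan(tau/2), tau] = 1/2, and the SL(2,R) map does not change it
assert sp.simplify(sch(f, tau) - sp.Rational(1, 2)) == 0
assert sp.simplify(sch(g, tau) - sch(f, tau)) == 0
```

This is precisely why the $SL(2,\mathbb{R})$ modes are zero modes of the Schwarzian action.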
\section{SYK spectrum and four-point functions}
\label{sec:treatment}
This section has two main purposes. First, using a simple example we show how to calculate quantum corrections (which are suppressed by powers of $\frac{1}{N}$) to many-point correlation functions. For this reason we keep as many details of the calculation as possible. Second, we show that the correction to the OTOC grows exponentially with time, with the main growing contribution provided by the Schwarzian action. This is one of the most striking properties of SYK, since this growth saturates the ``bound on chaos'' and coincides with the behavior of similar correlators calculated on a black hole background (see subsection~\ref{sec:scramblers} and paper~\cite{MSS}). This section is mostly based on the pioneering papers~\cite{Maldacena-SYK,Kitaev,Polchinski}. A generalization to $n$-point functions with arbitrary $n$ can be found in~\cite{Gross-1710}.
Let us consider the following four-point correlation function:
\beq \label{eq:4p-1} \begin{aligned}
&\frac{1}{N^2} \sum_{i,j=1}^N \left\langle \mathcal{T} \chi_i(\tau_1) \chi_i(\tau_2) \chi_j(\tau_3) \chi_j(\tau_4) \right\rangle = \\ &=\frac{1}{Z} \int \mathcal{D} G \mathcal{D} \Sigma \left[ G(\tau_1, \tau_2) G(\tau_3, \tau_4) + \frac{1}{N} \left( G(\tau_1, \tau_4) G(\tau_2, \tau_3) - G(\tau_1, \tau_3) G(\tau_2, \tau_4) \right) \right] e^{- I_{eff}[G, \Sigma]},
\end{aligned} \eeq
where we have used the approach of subsection~\ref{sec:effective} to transform from the functional integrals over $\mathcal{D} \chi_i$ on the LHS to those over $\mathcal{D} G$ and $\mathcal{D} \Sigma$ on the RHS. The letter $Z$ denotes the partition function~\eqref{eq:effective-2}. As usual, we work in the limit $J\tau \gg 1$, $N \gg 1$ and keep the leading quantum correction ($\sim \frac{1}{N}$) to the classical expression:
\beq \mathcal{F}(\tau_1, \tau_2, \tau_3, \tau_4) \equiv \frac{1}{N^2} \sum_{i,j=1}^N \left\langle \mathcal{T} \chi_i(\tau_1) \chi_i(\tau_2) \chi_j(\tau_3) \chi_j(\tau_4) \right\rangle - \tilde{G}(\tau_1, \tau_2) \tilde{G}(\tau_3, \tau_4), \eeq
where $\tilde{G}$ denotes the saddle point value of the effective action~\eqref{eq:effective-3}, which in the IR limit approximately equals the conformal propagator~\eqref{eq:DS-7}. For clarity we consider the theory at finite temperature, i.e. $\tau_{1,2,3,4} \in \left[-\frac{\beta}{2}, \frac{\beta}{2}\right)$.
Without loss of generality we restrict ourselves to the region $\tau_1 > \tau_2$, $\tau_3 > \tau_4$ and $\tau_1 > \tau_3$. First, the function $\mathcal{F}(\tau_1, \tau_2, \tau_3, \tau_4)$ does not depend on the choice of coordinates on the time circle, i.e. it does not change under cyclic permutations of its arguments. Second, this function is antisymmetric under the exchanges $\tau_1 \leftrightarrow \tau_2$ and $\tau_3 \leftrightarrow \tau_4$ and symmetric under the simultaneous exchange $(\tau_1, \tau_2) \leftrightarrow (\tau_3, \tau_4)$, which follows from the anticommutation relations of the $\chi_i$'s. Together these two symmetries allow one to recover the behavior of this function in the regions with other orderings of $\tau_{1,2,3,4}$.
As we have shown in subsection~\ref{sec:Schwarzian}, it is convenient to separate conformally invariant and non-invariant fluctuations near the saddle point value $\tilde{G}$. We denote these fluctuations as $\delta G^\parallel$ and $\delta G^\perp$ respectively. Unlike in subsection~\ref{sec:Schwarzian}, in this section we do not divide the fluctuations by $\tilde{G}$. That is, the fluctuations $\delta G^\parallel$ are defined in such a way that the function $G_c + \delta G^\parallel$ solves the conformal DS equation~\eqref{eq:DS-9}, and the subspace of non-invariant fluctuations $\delta G^\perp$ is the orthogonal complement of the subspace of conformally invariant fluctuations. Note that due to the symmetry~\eqref{eq:DS-4} all conformal fluctuations can be parametrized by a function $\varphi(\tau)$, which maps the time circle into itself:
\beq \delta G_\varphi^\parallel(\tau_1, \tau_2) = G_c^\beta\left[\varphi(\tau_1), \varphi(\tau_2)\right] - G_c^\beta(\tau_1, \tau_2), \quad \text{for some reparametrization} \quad \tau \rightarrow \varphi(\tau). \eeq
In these notations the functional integral for the four-point function looks as follows:
\beq \begin{aligned}
\mathcal{F} &\approx \mathcal{F}_0 + \frac{1}{Z} \int \mathcal{D} \delta G^\parallel \mathcal{D} \delta G^\perp \mathcal{D} \Sigma \left( \delta G^\parallel(\tau_1, \tau_2) + \delta G^\perp(\tau_1, \tau_2) \right) \left( \delta G^\parallel(\tau_3, \tau_4) + \delta G^\perp(\tau_3, \tau_4) \right) e^{-I_{CFT} - I_S} = \\
&= \mathcal{F}_0 + \mathcal{F}_S + \mathcal{F}_{CFT} + \mathcal{O}\left(\frac{1}{N^2}\right),
\end{aligned} \eeq
where we expanded the integrand near the saddle point and introduced the following expectation values:
\begin{align}
\label{eq:4p-4} \mathcal{F}_0 &\equiv \frac{1}{N} \left( \tilde{G}(\tau_1, \tau_4) \tilde{G}(\tau_2, \tau_3) - \tilde{G}(\tau_1, \tau_3) \tilde{G}(\tau_2, \tau_4) \right), \\
\label{eq:4p-2} \mathcal{F}_S &\equiv \left\langle \delta G^\parallel(\tau_1, \tau_2) \delta G^\parallel(\tau_3, \tau_4) \right\rangle_S = \frac{\int \mathcal{D} \varphi \, \delta G_\varphi^\parallel(\tau_1, \tau_2) \delta G_\varphi^\parallel(\tau_3, \tau_4) e^{-I_S[\varphi]}}{\int \mathcal{D} \varphi \, e^{-I_S[\varphi]}}, \\
\label{eq:4p-3} \mathcal{F}_{CFT} &\equiv \left\langle \delta G^\perp(\tau_1, \tau_2) \delta G^\perp(\tau_3, \tau_4) \right\rangle_{CFT} = \frac{\int \mathcal{D} \delta G^\perp \, \delta G^\perp(\tau_1, \tau_2) \delta G^\perp(\tau_3, \tau_4) e^{-I_{eff}[\delta G^\perp]}}{\int \mathcal{D} \delta G^\perp \, e^{-I_{eff}[\delta G^\perp]}}.
\end{align}
We will clarify the meaning of these notations in subsections~\ref{sec:4p-S} and~\ref{sec:4p-CFT}. To obtain the average~\eqref{eq:4p-2}, we use the fact that the Jacobian
\beq J = \left[\frac{\mathcal{D} G_\varphi^\parallel}{\mathcal{D} \varphi}\right]_{\varphi(\tau) = \frac{2 \pi \tau}{\beta}} \eeq
is constant and non-zero, because for reparametrizations which are infinitesimally close to the identity, $\varphi(\tau) = \frac{2 \pi \tau}{\beta} + \delta \varphi(\tau)$, the fluctuations $\delta G_\varphi^\parallel$ depend only on $\delta \varphi$ (see eq.~\eqref{eq:4p-S-1}). The integral $\int \mathcal{D} \delta G^\perp \mathcal{D} \Sigma \, e^{-I_{CFT}}$ in the numerator and denominator of~\eqref{eq:4p-2} is also constant and non-zero. For the average~\eqref{eq:4p-3} we repeat the argument around formula~\eqref{eq:Sch-2} and use the action $I_{eff}$ evaluated on the conformal functions $\tilde{G} = G_c^\beta$. Recall that for the conformally invariant fluctuations $\left(I - K \right) \delta G^\parallel = 0$; hence, $I_{eff}[\delta G^\perp + \delta G^\parallel] = I_{eff}[\delta G^\perp]$.
For convenience in this section we rescale the fields and map the finite-temperature time circle into the unit circle by the following transformation:
\beq \label{eq:4p-5}
\begin{gathered}
\tau \rightarrow \frac{2 \pi \tau}{\beta}, \quad \chi_i \rightarrow \left( \frac{\beta J}{2 \pi}\right)^\Delta \chi_i, \\
G(\tau, \tau') \rightarrow \left( \frac{\beta J}{2 \pi}\right)^{2 \Delta} G(\tau, \tau'), \quad \Sigma(\tau, \tau') \rightarrow \frac{1}{J^2} \left( \frac{\beta J}{2 \pi}\right)^{6 \Delta} \Sigma(\tau, \tau').
\end{gathered} \eeq
In this case the Schwarzian and conformally-invariant actions acquire the following form:
\beq \begin{aligned}
\frac{I_{CFT}}{N} &= -\frac{1}{2} \log\det\Big(-\Sigma(\tau, \tau')\Big) + \frac{1}{2} \int_{-\pi}^\pi d\tau \int_{-\pi}^\pi d\tau' \left( \Sigma(\tau, \tau') G(\tau, \tau') - \frac{1}{4} G(\tau, \tau')^4 \right), \\
\frac{I_S}{N} &= -\frac{2 \pi C}{\beta J} \int_{-\pi}^\pi \text{Sch}\left(\tan\frac{\varphi(\tau)}{2},\tau\right) d\tau.
\end{aligned} \eeq
Thus, all the prefactors and their dependence on $N$, $J$ and $\beta$ become explicit. Both the conformally invariant and the non-invariant parts contribute at order $\mathcal{O}\left(\frac{1}{N}\right)$, because both actions $I_S$ and $I_{CFT}$ are proportional to $N$. However, at strong coupling $\beta J \gg 1$ the leading contribution to the correlation function comes from the Schwarzian action, owing to the additional small factor $\frac{1}{\beta J}$ in front of it. Roughly speaking, due to this small factor the soft mode fluctuations are the easiest to excite. We calculate this contribution in subsection~\ref{sec:4p-S} and compare it with the contribution from the conformal part in subsection~\ref{sec:4p-CFT}.
\subsection{Soft mode contribution}
\label{sec:4p-S}
In this subsection we review the argument of~\cite{Kitaev} to estimate the correlator~\eqref{eq:4p-2} in the limit $1 \ll \tau J < \beta J \ll N$. In this limit the fluctuations are small, so we use the Gaussian approximation for the functional integrals. Note that this limit excludes the zero-temperature case; in fact, we have to work at small but non-zero temperatures: $\frac{J}{N} \ll T \ll J$.
Consider conformally-invariant fluctuations of the saddle point value $\tilde{G} \approx G_c^\beta$. For the infinitesimal transformations $\delta \varphi(\tau) \equiv \varphi(\tau) - \tau$ the fluctuations look as follows:
\beq
\label{eq:4p-S-1}
\begin{aligned}
\delta G_\varphi^\parallel(\tau_1, \tau_2) &= G_c^\beta\left[\varphi(\tau_1), \varphi(\tau_2)\right] - G_c^\beta(\tau_1, \tau_2) = \\
&= \left[ \delta \varphi(\tau_1) \partial_{\tau_1} + \frac{1}{4} \delta \varphi'(\tau_1) + \delta \varphi(\tau_2) \partial_{\tau_2} + \frac{1}{4} \delta \varphi'(\tau_2) \right] G_c^\beta(\tau_1, \tau_2) = \\
&= \frac{1}{4} \left[ \delta \varphi'(\tau_1) + \delta \varphi'(\tau_2) - \frac{\delta \varphi(\tau_1) - \delta \varphi(\tau_2)}{\tan \frac{\tau_1 - \tau_2}{2}}\right] G_c^\beta(\tau_1, \tau_2).
\end{aligned} \eeq
To obtain the last line we have used the expression~\eqref{eq:DS-7}.
Let us expand the function $\delta \varphi$ in Fourier modes:
\beq \delta \varphi(\tau) = \sum_{m \in \mathbb{Z}} (\delta \varphi)_m e^{i m \tau}, \eeq
and rewrite the expression~\eqref{eq:4p-S-1} as:
\beq \frac{\delta G_\varphi^\parallel(\tau_1, \tau_2)}{G_c^\beta(\tau_1, \tau_2)} = -\frac{i}{2} \sum_{m \in \mathbb{Z}} e^{i m \frac{\tau_1 + \tau_2}{2}} \left[ \frac{\sin\left(\frac{m \tau_{12}}{2}\right)}{\tan \frac{\tau_{12}}{2}} - m \cos\left(\frac{m \tau_{12}}{2}\right) \right] (\delta \varphi)_m, \eeq
where $\tau_{12} \equiv \tau_1 - \tau_2$. Then we use the following integral:
\beq \int_{\tau_2}^{\tau_1} \frac{s_{10} s_{02}}{s_{12}} e^{i m \tau_0} \frac{d \tau_0}{2\pi} = \frac{1}{\pi} \frac{1}{m (m^2 - 1)} e^{i m \frac{\tau_1 + \tau_2}{2}} \left[ \frac{\sin\left(\frac{m \tau_{12}}{2}\right)}{\tan \frac{\tau_{12}}{2}} - m \cos\left(\frac{m \tau_{12}}{2}\right) \right], \eeq
which allows one to write:
\beq \label{eq:4p-S-3}
\frac{\delta G_\varphi^\parallel(\tau_1, \tau_2)}{G_c^\beta(\tau_1, \tau_2)} =
\frac{- i \pi}{2} \sum_{m \in \mathbb{Z}} \int_{\tau_2}^{\tau_1} \frac{s_{10} s_{02}}{s_{12}} m (m^2 - 1) (\delta \varphi)_m e^{i m \tau_0} \frac{d \tau_0}{2\pi}, \eeq
where we have denoted $s_{12} \equiv 2 \sin \frac{\tau_1 - \tau_2}{2}$ and assumed that $2\pi > \tau_1 - \tau_2 > 0$. Finally, we introduce the $SL(2, \mathbb{R})$-invariant observable:
\beq \label{eq:4p-S-2}
O(\tau) = \text{Sch}\left(\tan\frac{\varphi(\tau)}{2},\tau\right) = \frac{1}{2} + \delta \varphi' + \delta \varphi''' + \frac{1}{2} (\delta \varphi')^2 - (\delta \varphi') (\delta \varphi''') - \frac{3}{2} (\delta \varphi'')^2 + \mathcal{O}(\delta \varphi^3), \eeq
take the Fourier transform of the non-invariant part:
\beq \delta O(\tau) \equiv O(\tau) - \frac{1}{2} = -i \sum_{m \in \mathbb{Z}} m (m^2 - 1) (\delta \varphi)_m e^{i m \tau} + \mathcal{O}\left(\delta\varphi^2\right), \eeq
and compare this expression to the expression~\eqref{eq:4p-S-3}. As a result, we obtain the following integral representation for the variation of the variable $G$:
\beq \frac{\delta G_\varphi^\parallel(\tau_1, \tau_2)}{G_c^\beta(\tau_1, \tau_2)} = \frac{\pi}{2} \int_{\tau_2}^{\tau_1} \frac{s_{10} s_{02}}{s_{12}} \delta O(\tau_0) \frac{d \tau_0}{2\pi}. \eeq
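As a cross-check, this integral representation can be verified numerically for a single Fourier mode $\delta \varphi = \sin(m\tau)$, comparing the linearized fluctuation~\eqref{eq:4p-S-1} with the integral of the linearized $\delta O = \delta \varphi' + \delta \varphi'''$. A minimal sketch (assuming \texttt{numpy} is available; the times $\tau_{1,2}$ are arbitrary sample points):

```python
import numpy as np

m = 3                                  # single Fourier mode, delta phi = sin(m*tau)
dphi  = lambda t: np.sin(m*t)
dphi1 = lambda t: m*np.cos(m*t)        # delta phi'
dphi3 = lambda t: -m**3*np.cos(m*t)    # delta phi'''
s = lambda a, b: 2*np.sin((a - b)/2)   # s_ab = 2 sin((tau_a - tau_b)/2)

tau1, tau2 = 2.0, 0.3                  # any 0 < tau2 < tau1 < 2*pi

# linearized conformal fluctuation, eq. (4p-S-1)
lhs = 0.25*(dphi1(tau1) + dphi1(tau2)
            - (dphi(tau1) - dphi(tau2))/np.tan((tau1 - tau2)/2))

# integral representation with the linearized delta O = delta phi' + delta phi'''
t0 = np.linspace(tau2, tau1, 200001)
y = s(tau1, t0)*s(t0, tau2)/s(tau1, tau2)*(dphi1(t0) + dphi3(t0))
rhs = (np.pi/2)*np.sum((y[1:] + y[:-1])/2*np.diff(t0))/(2*np.pi)

assert abs(lhs - rhs) < 1e-6
```

Both sides are linear in $\delta \varphi$, so no small expansion parameter is needed here.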
Using this representation we can rewrite the correlator~\eqref{eq:4p-2} as:
\beq \label{eq:4p-S-5} \begin{aligned}
\frac{\mathcal{F}_S(\tau_1, \tau_2, \tau_3, \tau_4)}{G_c^\beta(\tau_1, \tau_2) G_c^\beta(\tau_3, \tau_4)} &= \frac{\left\langle \delta G_\varphi^\parallel(\tau_1, \tau_2) \delta G_\varphi^\parallel(\tau_3, \tau_4) \right\rangle_S}{G_c^\beta(\tau_1, \tau_2) G_c^\beta(\tau_3, \tau_4)} = \\ &= \frac{\pi^2}{4} \int_{\tau_2}^{\tau_1} \frac{d\tau_5}{2\pi} \int_{\tau_4}^{\tau_3} \frac{d\tau_6}{2\pi} \langle \delta O(\tau_5) \delta O(\tau_6) \rangle_S \frac{s_{15} s_{52}}{s_{12}} \frac{s_{36} s_{64}}{s_{34}}.
\end{aligned} \eeq
Recall that we have restricted ourselves to the region $\tau_1 > \tau_2$, $\tau_3 > \tau_4$ and $\tau_1 > \tau_3$.
Let us estimate the correlation function of two $\delta O$'s in the Gaussian approximation. Using the expansion~\eqref{eq:4p-S-2} we find the Schwarzian action~\eqref{eq:Sch-5} up to boundary and $\mathcal{O}(\delta \varphi^3)$ terms:
\beq \frac{I_S}{N} = -\frac{2 \pi C}{\beta J} \int_{-\pi}^\pi \left[ \frac{1}{2} + \frac{(\delta \varphi')^2 - (\delta \varphi'')^2 }{2} \right] d\tau = -\frac{\pi C}{\beta J} + \frac{\pi C}{\beta J} \sum_{m \in \mathbb{Z}} m^2 (m^2 - 1) (\delta \varphi)_m (\delta \varphi)_{-m}. \eeq
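The expansion~\eqref{eq:4p-S-2} that underlies this quadratic action can itself be verified with a computer algebra system; the following sketch (assuming \texttt{sympy}) checks it for the concrete test perturbation $\delta \varphi = a \sin(2\tau)$:

```python
import sympy as sp

tau, a = sp.symbols('tau a')       # a is a formal smallness parameter
dphi = a*sp.sin(2*tau)             # concrete test perturbation

def sch(g, t):
    """Schwarzian derivative Sch[g, t]."""
    gp = sp.diff(g, t)
    return sp.diff(g, t, 3)/gp - sp.Rational(3, 2)*(sp.diff(g, t, 2)/gp)**2

exact = sch(sp.tan((tau + dphi)/2), tau)
series = sp.series(exact, a, 0, 3).removeO()   # expand through O(a^2)

# claimed expansion: 1/2 + dphi' + dphi''' + (dphi')^2/2 - dphi'*dphi''' - (3/2)(dphi'')^2
d1, d2, d3 = [sp.diff(dphi, tau, k) for k in (1, 2, 3)]
claimed = (sp.Rational(1, 2) + d1 + d3
           + d1**2/2 - d1*d3 - sp.Rational(3, 2)*d2**2)

err = sp.simplify(series - claimed)
assert abs(complex(err.subs({tau: 0.37, a: 0.1}).evalf())) < 1e-9
```

On the circle the linear terms and the cross term integrate by parts into $\frac{(\delta\varphi')^2 - (\delta\varphi'')^2}{2}$, which is exactly what enters the action above.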
Therefore, in the Gaussian approximation the correlation function of two $\delta \varphi$'s looks as follows:
\beq \label{eq:4p-S-6}
\langle (\delta \varphi)_m (\delta \varphi)_n \rangle_S = \frac{1}{2 \pi C} \frac{\beta J}{N} \frac{\delta_{m, -n}}{m^2 (m^2 - 1)}, \quad \text{where} \quad m, n \ne -1, 0, 1. \eeq
Note that the modes with $m = -1, 0, 1$ are $SL(2, \mathbb{R})$ generators, i.e. they correspond to non-physical degrees of freedom and drop out of all physical observables. These are the zero modes of the Schwarzian action, which we mentioned at the end of subsection~\ref{sec:Schwarzian}.
Using the identity~\eqref{eq:4p-S-6} we find the correlation function of two $\delta O$'s:
\beq \label{eq:4p-S-7} \begin{aligned}
\langle \delta O(\tau_5) \delta O(\tau_6) \rangle_S &= -\sum_{m,n \in \mathbb{Z}} m (m^2 - 1) n (n^2 - 1) \langle (\delta \varphi)_m (\delta \varphi)_n \rangle_S e^{i m \tau_5 + i n \tau_6} = \\ &= \frac{1}{2 \pi C} \frac{\beta J}{N} \sum_{m \ne 0} (m^2 - 1) e^{i m (\tau_5 - \tau_6)} = \frac{1}{2 \pi C} \frac{\beta J}{N} \Big[ 1 - 2 \pi \delta(\tau_{56}) - 2 \pi \delta''(\tau_{56}) \Big],
\end{aligned} \eeq
where we have used that $\delta(\tau) = \frac{1}{2\pi} \sum_{m \in \mathbb{Z}} e^{i m \tau}$. Note that the delta-functions in~\eqref{eq:4p-S-7} vanish if the integration intervals over $d\tau_5$ and $d\tau_6$ do not overlap. Therefore, it is convenient to consider two different orderings separately:
\beq \begin{aligned}
\text{OPE:} \quad 2 \pi > \tau_1 > \tau_2 > \tau_3 > \tau_4 > 0, \\
\text{OTO:} \quad 2 \pi > \tau_1 > \tau_3 > \tau_2 > \tau_4 > 0.
\end{aligned} \eeq
The abbreviation OPE stands for ``operator product expansion'', which is applicable for the corresponding time ordering (see~\cite{Maldacena-SYK,Sarosi,Gross} and subsubsection~\ref{sec:OPE} for the details). The abbreviation OTO stands for ``out of time ordered'', for obvious reasons.
For the OPE ordering the integrals over $d\tau_5$ and $d\tau_6$ decouple, and the result of the integration in~\eqref{eq:4p-S-5} reduces to:
\beq \label{eq:4p-S-8}
\frac{\mathcal{F}_S(\tau_1, \tau_2, \tau_3, \tau_4)}{G_c^\beta(\tau_1, \tau_2) G_c^\beta(\tau_3, \tau_4)} = \frac{1}{8 \pi C} \frac{\beta J}{N} \left( \frac{\tau_{12}}{2 \tan \frac{\tau_{12}}{2}} - 1 \right) \left( \frac{\tau_{34}}{2 \tan \frac{\tau_{34}}{2}} - 1 \right). \eeq
In fact, this correlator describes the fluctuations of the total energy in the thermal ensemble, so it could be expected to factorize. A more detailed explanation can be found in appendix~\ref{sec:energy-fluctuations} and paper~\cite{Maldacena-SYK}.
For the OTO ordering we obtain\footnote{A useful relation is $\partial_{\tau_{56}}^2 = \frac{1}{4} \partial_{\tau_5}^2 + \frac{1}{4} \partial_{\tau_6}^2 - \frac{1}{2} \partial_{\tau_5} \partial_{\tau_6}$.} the contribution~\eqref{eq:4p-S-8} plus an additional term due to the delta-functions in~\eqref{eq:4p-S-7}:
\beq \label{eq:4p-S-9} \begin{aligned}
\frac{\mathcal{F}_S(\tau_1, \tau_2, \tau_3, \tau_4)}{G_c^\beta(\tau_1, \tau_2) G_c^\beta(\tau_3, \tau_4)} = \frac{1}{8 \pi C} \frac{\beta J}{N} \Bigg[ &-\frac{3 \pi}{8} \frac{\sin\left(\Delta \tau\right)}{\sin\left(\frac{\tau_{12}}{2}\right) \sin\left(\frac{\tau_{34}}{2}\right)} + \frac{\pi}{16} \frac{\sin\left(\Delta \tau - \tau_{12} \right)}{\sin\left(\frac{\tau_{12}}{2}\right) \sin\left(\frac{\tau_{34}}{2}\right)} + \frac{\pi}{16} \frac{\sin\left(\Delta \tau - \tau_{34} \right)}{\sin\left(\frac{\tau_{12}}{2}\right) \sin\left(\frac{\tau_{34}}{2}\right)} - \\ &-\frac{\pi}{8} \frac{ 2 \Delta \tau - \tau_{12} - \tau_{34} }{\tan\left(\frac{\tau_{12}}{2}\right) \tan\left(\frac{\tau_{34}}{2}\right)} + \frac{3 \pi}{8} \frac{1}{\tan\left(\frac{\tau_{12}}{2}\right)} + \frac{3 \pi}{8} \frac{1}{\tan\left(\frac{\tau_{34}}{2}\right)} + \\ &+ \left( \frac{\tau_{12}}{2 \tan \frac{\tau_{12}}{2}} - 1 \right) \left( \frac{\tau_{34}}{2 \tan \frac{\tau_{34}}{2}} - 1 \right) \Bigg],
\end{aligned} \eeq
where we have introduced the time $\Delta \tau \equiv \frac{\tau_1 + \tau_2}{2} - \frac{\tau_3 + \tau_4}{2}$. It is convenient to take $\tau_1 - \tau_2 = \pi$ and $\tau_3 - \tau_4 = \pi$, because in this case the expression for the correlator~\eqref{eq:4p-S-9} significantly simplifies to:
\beq \label{eq:4p-S-10}
\frac{\mathcal{F}_S\left(\tau_1, \tau_2, \tau_3, \tau_4 \right)}{G_c^\beta\left(\frac{\beta}{2}\right) G_c^\beta\left(\frac{\beta}{2}\right)} = \frac{1}{8 \pi C} \frac{\beta J}{N} \left[1 - \frac{\pi}{2} \sin\left(\frac{2 \pi \Delta \tau}{\beta}\right) \right]. \eeq
Here we have restored $\beta$, i.e. mapped the unit circle back to the $\beta$-circle using~\eqref{eq:4p-5}.
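Both~\eqref{eq:4p-S-8} and~\eqref{eq:4p-S-10} can be cross-checked by summing the soft mode series directly, with no distributional identities. Combining the mode expansion of $\frac{\delta G_\varphi^\parallel}{G_c^\beta}$ with the propagator~\eqref{eq:4p-S-6}, one finds $\frac{\mathcal{F}_S}{G_c^\beta G_c^\beta} = \frac{1}{2} \sum_{m \geq 2} A_m(\tau_{12}) A_m(\tau_{34}) \frac{\cos(m \Delta\tau)}{m^2(m^2-1)}$ in units where $\frac{\beta J}{2\pi C N} = 1$, with $A_m(\tau) \equiv \frac{\sin(m\tau/2)}{\tan(\tau/2)} - m \cos(m\tau/2)$. A numerical sketch on the unit time circle (assuming \texttt{numpy}):

```python
import numpy as np

def A(m, t):
    """Mode function from the Fourier expansion of dG/G."""
    return np.sin(m*t/2)/np.tan(t/2) - m*np.cos(m*t/2)

def F_over_GG(t12, t34, dT, M=200000):
    """Soft mode series in units beta*J/(2*pi*C*N) = 1; m = -1, 0, 1 excluded."""
    m = np.arange(2.0, M)
    return 0.5*np.sum(A(m, t12)*A(m, t34)*np.cos(m*dT)/(m**2*(m**2 - 1)))

# OPE ordering (non-overlapping intervals): factorized answer of eq. (4p-S-8)
t12 = t34 = 1.5
ope = 0.25*(t12/(2*np.tan(t12/2)) - 1)*(t34/(2*np.tan(t34/2)) - 1)
assert abs(F_over_GG(t12, t34, dT=3.0) - ope) < 1e-4

# OTO ordering at t12 = t34 = pi: answer of eq. (4p-S-10)
dT = 0.9
oto = 0.25*(1 - np.pi/2*np.sin(dT))
assert abs(F_over_GG(np.pi, np.pi, dT) - oto) < 1e-4
```

The same truncated series reproduces both orderings; only the relative position of the two time intervals changes.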
To understand the physical relevance of the obtained result, let us analytically continue the four-point function to Lorentzian time and check the behavior of the correlator at large values of $t = -i \Delta \tau \gg J^{-1}$. A particularly important case is $\tau_1 = \frac{\beta}{4} + i t$, $\tau_2 = - \frac{\beta}{4} + i t$, $\tau_3 = 0$, $\tau_4 = -\frac{\beta}{2}$, which describes the regularized out-of-time-ordered correlation function (OTOC):
\beq \label{eq:OTOC} \begin{aligned}
\text{OTOC}(t) &\equiv \frac{1}{N^2} \sum_{i,j=1}^N \text{tr} \left[ \rho^{\frac{1}{4}} \chi_i(t) \rho^{\frac{1}{4}} \chi_j(0) \rho^{\frac{1}{4}} \chi_i(t) \rho^{\frac{1}{4}} \chi_j(0) \right] = \\ &= \tilde{G}\left(\frac{\beta}{2}\right) \tilde{G}\left(\frac{\beta}{2}\right) + \mathcal{F}\left( \frac{\beta}{4} + i t, -\frac{\beta}{4} + i t, 0, - \frac{\beta}{2} \right) = \\ &= \tilde{G} \tilde{G} + \mathcal{F}_S + \mathcal{F}_{CFT} + \mathcal{F}_0 + \mathcal{O}\left(\frac{1}{N^2}\right),
\end{aligned} \eeq
where we have defined the density matrix as $\rho \equiv \frac{1}{Z} e^{-\beta H}$. For brevity we omit the arguments of the four-point functions in the last line. At $t=0$ this choice corresponds to the OTO region, so the correlator is given by the analytic continuation of~\eqref{eq:4p-S-10} to non-zero real $t$. Now it is straightforward to see that to leading order the corrected OTOC rapidly decays:
\beq \label{eq:OTOC-S} \begin{aligned}
\text{OTOC}(t) &\approx \tilde{G}\left(\frac{\beta}{2}\right) \tilde{G}\left(\frac{\beta}{2}\right) + \mathcal{F}_S\left( \frac{\beta}{4} + i t, -\frac{\beta}{4} + i t, 0, - \frac{\beta}{2} \right) \approx \\ &\approx \frac{\sqrt{\pi}}{2 \beta J} \left[ 1 + \frac{1}{8 \pi C} \frac{\beta J}{N} \left( 1 - \frac{\pi}{2} \cos\left(\frac{2 \pi i t}{\beta}\right)\right) \right] \approx \\
&\approx \frac{\sqrt{\pi}}{2 \beta J}\left[1 - \frac{\Delta^2}{2 C} \frac{\beta J}{N} e^{\frac{2 \pi}{\beta} t} \right], \quad \text{for} \quad \beta \ll t \ll \beta \log \frac{N}{\beta J}.
\end{aligned} \eeq
Here we restored the conformal dimension $\Delta = \frac{1}{4}$ and substituted the approximate saddle point value $\tilde{G} \approx G_c^\beta$. However, for larger times the Gaussian approximation breaks down and one has to take into account corrections to this result. In general, one expects that the decay saturates due to the contribution of multiple parallel ladders (see subsection~\ref{sec:4p-CFT}), but we will not discuss this point here.
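The growth rate in~\eqref{eq:OTOC-S} can be made explicit by stripping the prefactors and extracting the instantaneous exponent numerically; a sketch (assuming \texttt{numpy}):

```python
import numpy as np

beta = 1.0
t = np.linspace(2*beta, 4*beta, 401)

# growing part of F_S from eq. (4p-S-10) continued to Lorentzian time;
# the overall prefactor (beta*J)/(8*pi*C*N) is dropped
F = 1 - np.pi/2*np.cosh(2*np.pi*t/beta)

lam = np.gradient(np.log(-F), t)            # instantaneous growth exponent
assert abs(lam[200] - 2*np.pi/beta) < 1e-3  # saturates lambda_L = 2*pi/beta
```

For $t \gg \beta$ the exponent approaches the maximal value $\lambda_L = \frac{2\pi}{\beta}$ allowed by the bound on chaos.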
Note that the contribution of the soft mode to the regularized time-ordered correlation function (TOC) does not change with $t$:
\beq \label{eq:TOC} \begin{aligned}
\text{TOC}(t) &\equiv \frac{1}{N^2} \sum_{i,j=1}^N \text{tr} \left[ \chi_i(t) \rho^{\frac{1}{2}} \chi_i(t) \chi_j(0) \rho^{\frac{1}{2}} \chi_j(0) \right] = \\ &= \tilde{G}\left(\frac{\beta}{2}\right) \tilde{G}\left(\frac{\beta}{2}\right) + \mathcal{F}\left( \frac{\beta}{2} + i t, i t, 0, - \frac{\beta}{2} \right) \approx \\ &\approx \tilde{G}\left(\frac{\beta}{2}\right) \tilde{G}\left(\frac{\beta}{2}\right) + \mathcal{F}_S\left( \frac{\beta}{2} + i t, i t, 0, - \frac{\beta}{2} \right) \approx \frac{\sqrt{\pi}}{2 \beta J} + \frac{\text{const}}{N}.
\end{aligned} \eeq
Finally, one should also take into account the $\mathcal{F}_0$ and $\mathcal{F}_{CFT}$ corrections to the connected four-point function, which are also of order $\mathcal{O}\left(\frac{1}{N}\right)$. However, at the end of subsection~\ref{sec:DS} we have shown that two-point correlation functions decay exponentially at large Lorentzian times, $t \gg \beta$. Thus, for such times the contribution of $\mathcal{F}_0$ to the OTOC and TOC also decays exponentially and therefore can be neglected. The contribution of $\mathcal{F}_{CFT}$ will be discussed in the next subsection.
\subsection{Conformal action contribution}
\label{sec:4p-CFT}
In this subsection we estimate the conformal contribution to the four-point correlation function, which is given by~\eqref{eq:4p-3}. As usual, we work in the IR and large $N$ limit. Recall that in this limit the theory is conformally invariant in the sense of~\eqref{eq:DS-4}, so we can freely change between the zero-temperature and finite-temperature cases using the map~\eqref{eq:circle-line}. For this reason, in most of this subsection we work with zero-temperature functions.
At the same time, the integrands in both the numerator and the denominator of~\eqref{eq:4p-3} are invariant with respect to arbitrary reparametrizations, and $I_S$ is non-zero for all but $SL(2,\mathbb{R})$ reparametrizations. One can integrate such reparametrizations out and obtain a non-zero constant that cancels when one calculates correlation functions. Therefore, the full reparametrization symmetry of the four-point function is effectively broken down to $SL(2,\mathbb{R})$.
Taking the integral over the fluctuations\footnote{Recall that in this section we parametrize fluctuations as $G = \tilde{G} + \delta G$, while in subsection~\ref{sec:Schwarzian} we used the notation $G=\tilde{G} + \frac{\delta G}{|\tilde{G}|}$.} of the variable $G$ in the functional integral~\eqref{eq:4p-3} with the effective action~\eqref{eq:Sch-2}, we obtain:
\beq \label{eq:4p-CFT-1}
\mathcal{F}_{CFT} = \frac{2}{3 J^2 N} \frac{(K_c^{-1} - I)^{-1} I}{| G_c(\tau_1, \tau_2) G_c(\tau_3, \tau_4) |} = \frac{2}{3 J^2 N} \frac{(I - K_c)^{-1} K_c I}{| G_c(\tau_1, \tau_2) G_c(\tau_3, \tau_4) |}.
\eeq
Here $K_c$ denotes the conformal kernel that is defined by~\eqref{eq:Sch-K} with conformal two-point functions $\tilde{G} = G_c$. From~\eqref{eq:Sch-K},~\eqref{eq:Sch-I} and~\eqref{eq:4p-4} it follows that:
\beq K_c I = \frac{3 J^2 N}{2} \mathcal{F}_0(\tau_1, \tau_2, \tau_3, \tau_4) |G_c(\tau_1, \tau_2) G_c(\tau_3, \tau_4) |. \eeq
Now it is easy to see that $\mathcal{F}_{CFT}$ is simply the sum of all possible ladder diagrams from Fig.~\ref{fig:ladder}:
\begin{figure}[t]
\center{\includegraphics[scale=0.3]{SYK-ladders.png}}\caption{Sum of the ladder diagrams which contribute to $\mathcal{F}_{CFT}$} \label{fig:ladder}
\end{figure}\beq \mathcal{F}_{CFT} = \sum_{n=0}^\infty \mathcal{F}_n = (I - K_c)^{-1} \mathcal{F}_0, \eeq
where $\mathcal{F}_n \equiv K_c^n \mathcal{F}_0$ corresponds to the ladder diagram with $n$ rungs. Indeed, one can check that in the diagrammatic technique introduced in Sec.~\ref{sec:SYK-diagrams}, ladder diagrams as in Fig.~\ref{fig:ladder} are the only contributions to the four-point correlation functions of order $\frac{1}{N}$.
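The resummation $(I - K_c)^{-1} \mathcal{F}_0 = \sum_n K_c^n \mathcal{F}_0$ has the same structure as an ordinary geometric series of any operator with spectral radius below one. A toy illustration with a random matrix in place of $K_c$ (assuming \texttt{numpy}):

```python
import numpy as np

rng = np.random.default_rng(0)
K = rng.normal(size=(6, 6))
K *= 0.5/np.max(np.abs(np.linalg.eigvals(K)))  # spectral radius 1/2: series converges
F0 = rng.normal(size=6)

# sum of "ladders" F_n = K^n F_0 versus the resummed answer (I - K)^{-1} F_0
ladders = sum(np.linalg.matrix_power(K, n) @ F0 for n in range(120))
resummed = np.linalg.solve(np.eye(6) - K, F0)
assert np.allclose(ladders, resummed)
```

The subtlety in SYK is precisely that $K_c$ has a unit eigenvalue, so the resummation diverges on the corresponding subspace, as discussed below.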
Note that the kernel $K_c$ which we use is conjugate to the natural kernel that follows from the diagrams in Fig.~\ref{fig:ladder} by a power of the propagator:
\beq K_c(\tau_1, \tau_2, \tau_3, \tau_4) = | G_c(\tau_1, \tau_2) | K_{diagram}(\tau_1, \tau_2, \tau_3, \tau_4) | G_c(\tau_3, \tau_4) |^{-1}. \eeq
We performed this conjugation to make the symmetry $(\tau_1, \tau_2) \leftrightarrow (\tau_3, \tau_4)$ explicit. It is straightforward to check that under reparametrizations $\tau \rightarrow f(\tau)$, $f'(\tau) > 0$, the operator $K_c$ transforms as a four-point function of fields with conformal dimension $\Delta = \frac{1}{2}$.
Note that the diagrammatics with conformal two-point functions naively leads to a divergent expression, because in the conformal limit the operator $K$ has a unit eigenvalue: $(I - K_c) \delta G = 0$ (see Sec.~\ref{sec:Schwarzian}). In subsection~\ref{sec:4p-S} we treated this divergence directly, moving away from the IR limit and considering non-conformal corrections to the effective action. An alternative approach is to calculate the leading correction to the unit eigenvalue~\cite{Maldacena-SYK}.
To calculate the expression~\eqref{eq:4p-CFT-1} we need to determine a complete set\footnote{I.e. a set such that $I = \sum_h \frac{1}{\langle \Psi_h | \Psi_h \rangle} | \Psi_h \rangle \langle \Psi_h |$.} of antisymmetric eigenfunctions $\Psi_h(\tau_1, \tau_2)$, find the eigenvalues $K_c \Psi_h = k(h) \Psi_h$ and calculate the following sum:
\beq \label{eq:4p-CFT-2}
(I - K_c)^{-1} K_c I = \sum_{k(h) \ne 1} \frac{k(h)}{1 - k(h)} \frac{1}{\langle \Psi_h | \Psi_h \rangle} |\Psi_h \rangle \langle \Psi_h |, \eeq
where $h$ in this expression is an abstract label that enumerates eigenvalues and eigenfunctions (this label will be specified below). In other words, we need to find the spectrum of the conformal kernel $K_c$. Recall that we have to exclude the unit eigenvalue subspace, because the effective action~\eqref{eq:Sch-2} vanishes on this subspace, i.e. the dominant contribution to the full four-point function is given by~\eqref{eq:4p-2}.
\subsubsection{$SL(2,\mathbb{R})$ generators and Casimir}
It is difficult to solve the integral equation $K_c \Psi_h = k(h) \Psi_h$ directly. Fortunately, the $SL(2,\mathbb{R})$ invariance significantly simplifies this task. This invariance implies that $K_c$ commutes with the Casimir $C$ of the $SL(2,\mathbb{R})$ group --- therefore, the eigenfunctions of $K_c$ and $C$ can be chosen to coincide. This allows one to find eigenfunctions and eigenvalues separately. First, one solves the simpler equation\footnote{It is convenient but not necessary to choose the eigenvalue of the Casimir as $h(h-1)$.} $C \Psi_h = h(h-1) \Psi_h$, and then determines the eigenvalues $k(h)$ for the known functions $\Psi_h$.
The $SL(2,\mathbb{R})$ algebra can be represented by the following generators:
\beq \label{eq:casimir-1}
L_0^\tau = - \tau \partial_\tau - \Delta, \quad L_{-1}^\tau = \partial_\tau, \quad L_1^\tau = \tau^2 \partial_\tau + 2 \Delta \tau. \eeq
It is straightforward to check that these operators obey the proper commutation relations:
\beq \left[ L_m^\tau, L_n^\tau \right] = (m-n) L_{m+n}^\tau \quad \text{for} \quad m,n = -1, 0, 1. \eeq
Note that in this definition an operator with conformal dimension $\Delta$ is annihilated by the generator $L_0^\tau$.
Note that in the case $\Delta = \frac{1}{2}$ these generators should commute with the kernel $K_c$:
\beq \label{eq:casimir-2}\begin{aligned}
\left \langle \left(L_m^{\tau_1} + L_m^{\tau_2}\right) K_c(\tau_1, \tau_2, \tau_3, \tau_4) | \Psi_h(\tau_3, \tau_4) \right \rangle &= \left \langle K_c(\tau_1, \tau_2, \tau_3, \tau_4) \left(L_m^{\tau_3} + L_m^{\tau_4} \right) | \Psi_h(\tau_3, \tau_4) \right\rangle + \\ &+ 2 \int_{-\infty}^\infty d\tau_4 \left[ \tau_3^{m+1} K_c(\tau_1, \tau_2, \tau_3, \tau_4) \Psi_h(\tau_3, \tau_4) \right]_{\tau_3 = -\infty}^{\tau_3 = \infty},
\end{aligned} \eeq
where $\langle \cdot | \cdot \rangle$ denotes the inner product~\eqref{eq:inner}. This condition implies that $SL(2,\mathbb{R})$ generators are zero modes of the operator $K_c$. To ensure this commutation relation, the term in the second line must vanish for all basis functions $\Psi_h$ and all generators. Below we will see that this condition imposes an important restriction on the functions $\Psi_h$.
Finally, using the generators~\eqref{eq:casimir-1} we build the Casimir operator:
\beq \begin{aligned}
C &= \left( L_0^{\tau_1} + L_0^{\tau_2} \right)^2 - \frac{1}{2} \left( L_{-1}^{\tau_1} + L_{-1}^{\tau_2} \right) \left( L_1^{\tau_1} + L_1^{\tau_2} \right) - \frac{1}{2} \left( L_1^{\tau_1} + L_1^{\tau_2} \right) \left( L_{-1}^{\tau_1} + L_{-1}^{\tau_2} \right) = \\ &= 2 \left(\Delta^2 - \Delta\right) + 2 L_0^{\tau_1} L_0^{\tau_2} - L_{-1}^{\tau_1} L_1^{\tau_2} - L_1^{\tau_1} L_{-1}^{\tau_2}.
\end{aligned} \eeq
\subsubsection{Eigenfunctions and eigenvalues}
Let us solve the equation $C \Psi_h = h(h-1) \Psi_h$. Substituting the generators~\eqref{eq:casimir-1} and $\Delta = \frac{1}{2}$, we obtain the following differential equation:
\beq \label{eq:casimir-3}
\left[ - \left(\tau_1 - \tau_2\right)^2 \partial_{\tau_1} \partial_{\tau_2} + \left(\tau_1 - \tau_2\right) \left(\partial_{\tau_1} - \partial_{\tau_2}\right) \right] \Psi_h(\tau_1, \tau_2) = h(h-1) \Psi_h(\tau_1, \tau_2). \eeq
We propose the following ansatz to solve this equation:
\beq \label{eq:eigen-0}
\Psi_{h\omega}(\tau_1, \tau_2) = \frac{\text{sgn} (\tau_1 - \tau_2)}{\sqrt{|\tau_1 - \tau_2|}} \psi_{h}\left(\frac{\left|\omega (\tau_1 - \tau_2)\right|}{2}\right) e^{- i \omega \frac{\tau_1 + \tau_2}{2}} . \eeq
This ansatz is inspired by the following properties of the Casimir operator and the function $\Psi_h$. First, $\Psi_h$ is an antisymmetric function with conformal weight $\Delta = \frac{1}{2}$, so we expect the factor $\frac{\text{sgn}(\tau_1 - \tau_2)}{\sqrt{|\tau_1 - \tau_2|}}$. Second, the structure of the equation~\eqref{eq:casimir-3} suggests that it is convenient to use the variables $\tau \equiv \tau_1 - \tau_2$ and $T \equiv \frac{1}{2}(\tau_1 + \tau_2)$ rather than $\tau_1$ and $\tau_2$. Third, the result of the action of the Casimir operator~\eqref{eq:casimir-3} on~\eqref{eq:eigen-0} does not depend on $\omega$, and, finally, $\psi_h$ solves the Bessel equation:
\beq \label{eq:eigen-1}
\left[ x^2 \partial_x^2 + x \partial_x + \left( x^2 - h (h-1) - \frac{1}{4} \right) \right] \psi_{h}(x) = 0, \quad \text{where} \quad x \equiv \frac{|\omega \tau|}{2}. \eeq
This means that for each $h$ one has an infinite set of eigenfunctions parametrized by the frequency $\omega$. In the zero-temperature case the frequency is continuous ($\omega \in \mathbb{R}$); in the finite-temperature case it is discrete ($\omega = \frac{\pi}{\beta}(2n+1)$, $n\in\mathbb{Z}$). This also implies that in the decomposition~\eqref{eq:4p-CFT-2} one has to sum over the set $\Psi_{h\omega}$ instead of the set $\Psi_h$:
\beq \label{eq:4p-CFT-4}
(I-K_c)^{-1}K_c I = \sum_{k(h) \ne 1} \sum_\omega \frac{k(h,\omega)}{1 - k(h,\omega)} \frac{1}{\langle \Psi_{h\omega} | \Psi_{h\omega} \rangle} |\Psi_{h\omega} \rangle \langle \Psi_{h\omega} |. \eeq
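One can verify by direct differentiation that the ansatz~\eqref{eq:eigen-0} with $\psi_h$ a Bessel function indeed satisfies the Casimir equation~\eqref{eq:casimir-3}. A numerical sketch (assuming \texttt{sympy}), restricted to $\tau_1 > \tau_2$ and $\omega > 0$ so that the sign and absolute-value factors drop out:

```python
import sympy as sp

t1, t2 = sp.symbols('tau1 tau2', positive=True)
h, w = sp.Rational(17, 10), sp.Rational(13, 10)   # sample values of h and omega

tau = t1 - t2
# ansatz (eigen-0) with psi_h = J_{h - 1/2}, a solution of the Bessel equation
Psi = (sp.besselj(h - sp.Rational(1, 2), w*tau/2)/sp.sqrt(tau)
       * sp.exp(-sp.I*w*(t1 + t2)/2))

# Casimir operator of eq. (casimir-3) minus the claimed eigenvalue h(h-1)
residual = (-tau**2*sp.diff(Psi, t1, t2)
            + tau*(sp.diff(Psi, t1) - sp.diff(Psi, t2))
            - h*(h - 1)*Psi)

val = residual.subs({t1: sp.Rational(21, 10), t2: sp.Rational(2, 5)}).evalf()
assert abs(complex(val)) < 1e-9
```

The same check works for the second solution $J_{\frac{1}{2}-h}$, since the equation is invariant under $h \rightarrow 1-h$.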
The general solution of the equation~\eqref{eq:eigen-1} is the sum of Bessel functions:
\beq \psi_{h}(x) = -A_h J_{h - \frac{1}{2}} \left(x\right) - B_h Y_{h - \frac{1}{2}} \left(x\right) = \frac{B_{1-h}}{\cos(\pi h)} J_{h-\frac{1}{2}} \left(x\right) - \frac{B_h}{\cos(\pi h)} J_{\frac{1}{2}-h} \left(x\right). \eeq
Here $B_h$ is some function of $h$. To obtain the second equality we required that $\Psi_{1-h} = \Psi_h$, because the equation~\eqref{eq:eigen-1} is invariant under the change $h \rightarrow 1 - h$. We have also used the following relation between Bessel functions of the first and second kinds:
\beq Y_\alpha(x) = \frac{J_\alpha(x) \cos(\pi \alpha) - J_{-\alpha}(x)}{\sin(\pi \alpha)}. \eeq
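This connection formula can be spot-checked numerically with SciPy's Bessel routines (a purely illustrative check, not part of the derivation; the non-integer orders and arguments below are arbitrary):

```python
# Numerical spot-check of Y_a(x) = (J_a(x) cos(pi a) - J_{-a}(x)) / sin(pi a)
# for non-integer order a (the right-hand side is singular at integer a).
import numpy as np
from scipy.special import jv, yv

for a in [0.25, 0.7, 1.3, 2.6]:
    for x in [0.5, 1.0, 3.0, 10.0]:
        lhs = yv(a, x)
        rhs = (jv(a, x) * np.cos(np.pi * a) - jv(-a, x)) / np.sin(np.pi * a)
        assert abs(lhs - rhs) < 1e-10 * max(1.0, abs(lhs))
```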
Then we recall that the kernel $K_c$ must commute with the $SL(2,\mathbb{R})$ generators. This implies that the term in the second line of~\eqref{eq:casimir-2} should be identically zero for $m=-1,0,1$ and all $h$. For $m=-1,0$ this condition is always satisfied, so it does not restrict anything. Indeed, the expression inside the square brackets is proportional to $|\tau_3|^{m-2}$ as $\tau_3 \rightarrow \pm \infty$, i.e. in the case $m=-1,0$ the integrand is identically zero. However, in the case $m=1$ this condition imposes an additional restriction on the coefficients $B_h$:
\beq \begin{aligned}
&\int_{-\infty}^\infty d\tau_4 \left[ \tau_3^2 K(\tau_1, \tau_2, \tau_3, \tau_4) \psi_{h \omega}\left(\frac{\left|\omega ( \tau_3 - \tau_4)\right|}{2}\right) \cos\left(\frac{\omega(\tau_3 + \tau_4)}{2}\right) \right]_{\tau_3 = -\infty}^{\tau_3 = \infty} = \\
&= - 3 \sqrt{\pi} J^2 \int_{-\infty}^\infty d\tau_4 \frac{\text{sgn}(\tau_2 - \tau_4)}{\sqrt{|\tau_1 - \tau_2| |\tau_2 - \tau_4|}} \sin\frac{\omega \tau_4}{2} \left[ \frac{B_h}{\cos(\pi h)} \cos\frac{\pi h}{2} - \frac{B_{1-h}}{\cos(\pi h)} \sin\frac{\pi h}{2} \right] = 0,
\end{aligned} \eeq
hence,
\beq \frac{B_h}{B_{1-h}} = \tan\frac{\pi h}{2}. \eeq
Thus, the eigenfunctions have the following form (up to a numerical factor to be fixed below):
\beq \label{eq:eigen-2}
\Psi_{h\omega}(\tau_1, \tau_2) = \frac{\text{sgn}\tau}{\sqrt{|\tau|}} e^{-i \omega T} \left[ \frac{\cos\frac{\pi h}{2}}{\cos(\pi h)} J_{h-\frac{1}{2}} \left(\frac{|\omega \tau|}{2}\right) - \frac{\sin\frac{\pi h}{2}}{\cos(\pi h)} J_{\frac{1}{2}-h} \left(\frac{|\omega \tau|}{2}\right)\right]. \eeq
Integrating the function~\eqref{eq:eigen-2} with the kernel $K_c$ as in~\eqref{eq:Sch-K}, one finds the corresponding eigenvalue, $K_c \Psi_{h\omega} = k(h,\omega) \Psi_{h\omega}$:
\beq \label{eq:eigen-4}
k(h,\omega) = -\frac{3}{2} \frac{\tan\left[\frac{\pi}{2} \left(h - \frac{1}{2}\right)\right]}{h - \frac{1}{2}}. \eeq
This calculation is cumbersome but straightforward, so we do not reproduce it here. A detailed calculation\footnote{The authors of~\cite{Polchinski} use a different kernel and obtain slightly different eigenfunctions, but the integral for the eigenvalue coincides with our case.} can be found in appendices C and D of~\cite{Polchinski}.
Note that the eigenvalue~\eqref{eq:eigen-4} does not depend on the frequency $\omega$ due to the conformal invariance of the kernel. However, it does depend on the frequency when one moves away from the IR limit. In the paper~\cite{Maldacena-SYK} this dependence was established and used to calculate the leading non-conformal correction to the four-point correlation functions. The result of this calculation coincides with the result of subsection~\ref{sec:4p-S}.
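Although we do not reproduce the integral here, the final expression~\eqref{eq:eigen-4} is easy to probe numerically. The following sketch (illustrative only) checks that $h=2$ is a unit eigenvalue and that $k(h)$ respects the $h \rightarrow 1-h$ symmetry of the eigenfunctions:

```python
# k(h) = -(3/2) * tan(pi/2 * (h - 1/2)) / (h - 1/2), the conformal-kernel eigenvalue.
import numpy as np

def k(h):
    return -1.5 * np.tan(np.pi / 2 * (h - 0.5)) / (h - 0.5)

# h = 2 solves k(h) = 1: the zero mode of I - K_c (the soft mode)
assert abs(k(2.0) - 1.0) < 1e-12
# symmetry k(h) = k(1 - h), inherited from Psi_h = Psi_{1-h}
for h in [0.3, 0.8, 1.7, 3.2]:
    assert abs(k(h) - k(1.0 - h)) < 1e-9
```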
\subsubsection{The complete set of eigenfunctions}
Eigenfunctions of a Hermitian operator form a complete set (e.g. see~\cite{Reed}). Keeping this fact in mind, we demand the hermiticity of the Casimir operator with respect to the inner product~\eqref{eq:inner}:
\beq \label{eq:set-1}
\langle C \Psi_{h\omega}(\tau_1, \tau_2) | \Psi_{h'\omega'}(\tau_1, \tau_2) \rangle = \langle \Psi_{h\omega}(\tau_1, \tau_2) | C \Psi_{h'\omega'}(\tau_1, \tau_2) \rangle. \eeq
On the one hand, the hermiticity means that the eigenvalue of the Casimir operator is real:
\beq \text{Im}\left[h(h-1)\right]=0, \quad \text{i.e.} \quad \text{Im}[h] \left(2 \text{Re}[h] - 1\right) = 0. \eeq
In other words, the variable $h$ is either purely real or has a fixed real part: $h = \frac{1}{2} + i s$, $s \in \mathbb{R}$, $s > 0$ (without the last inequality the eigenfunctions would be counted twice: $\Psi_{\frac{1}{2} +is,\omega} = \Psi_{\frac{1}{2}-is,\omega}$). On the other hand, identity~\eqref{eq:set-1} implies that the corresponding boundary term vanishes for arbitrary $\omega$, $\omega'$ and $h$, $h'$ from the spectrum:
\beq \begin{aligned}
\langle C \Psi_{h\omega}(\tau_1, \tau_2) | \Psi_{h'\omega'}(\tau_1, \tau_2) \rangle &- \langle \Psi_{h\omega}(\tau_1, \tau_2) | C \Psi_{h'\omega'}(\tau_1, \tau_2) \rangle = \\ &=\frac{8 \pi \delta(\omega-\omega')}{\omega} \Big[ x \Big(\psi_{h'}(x) \partial_x \psi_{h}^*(x) - \psi_{h}^*(x) \partial_x \psi_{h'}(x) \Big) \Big]_{x = 0}^{x = \infty} = 0.
\end{aligned} \eeq
Here we substituted the ansatz~\eqref{eq:eigen-0} and denoted $x=\frac{|\omega \tau|}{2}$. Substituting the asymptotics of the Bessel functions~\cite{Gradshteyn} we find that:
\beq \lim_{x \rightarrow \infty} \Big[ x \Big(\psi_{h'}(x) \partial_x \psi_{h}^*(x) - \psi_{h}^*(x) \partial_x \psi_{h'}(x) \Big) \Big] = 0 \eeq
for arbitrary $h$ and $h'$, and
\small \beq \begin{aligned}
\lim_{x \rightarrow 0} &\Big[ x \Big(\psi_{h'}(x) \partial_x \psi_{h}^*(x) - \psi_{h}^*(x) \partial_x \psi_{h'}(x) \Big) \Big] = \lim_{x \rightarrow 0} \frac{\cos\frac{\pi h^*}{2} \cos\frac{\pi h'}{2}}{\cos(\pi h^*) \cos(\pi h')} \times \\
\times &\Bigg[ \frac{h^* - h'}{\Gamma\left(h^*+\frac{1}{2}\right) \Gamma\left(h'+\frac{1}{2}\right)} \left( \frac{x}{2} \right)^{h^* + h' - 1} - \tan\frac{\pi h^*}{2} \tan\frac{\pi h'}{2} \frac{h^* - h'}{\Gamma\left(\frac{3}{2} - h^*\right) \Gamma\left(\frac{3}{2} - h'\right)}\left( \frac{x}{2} \right)^{1 - h^* - h'} + \\ &+ \tan\frac{\pi h^*}{2} \frac{h^* + h' - 1}{\Gamma\left(\frac{3}{2} - h^*\right) \Gamma\left(h'+\frac{1}{2}\right)} \left( \frac{x}{2} \right)^{h' - h^*} - \tan\frac{\pi h'}{2} \frac{h^* + h' - 1}{\Gamma\left(h^*+\frac{1}{2}\right) \Gamma\left(\frac{3}{2} - h'\right)} \left( \frac{x}{2} \right)^{h^* + h' - 1}\Bigg] = 0
\end{aligned} \eeq \normalsize
for values of the form $h = \frac{1}{2} + is$, $s \in \mathbb{R}$, $s>0$ (in this case one obtains an oscillating expression) or $h = 2 n$, $n = 1,2,3,\cdots$ (in this case the divergent terms are multiplied by zeros). We conclude that together these two sets form a complete set in the space of antisymmetric two-point functions.
Let us find the normalization in the decomposition~\eqref{eq:4p-CFT-4}, i.e. calculate the inner product of two eigenfunctions:
\beq \begin{aligned}
\langle \Psi_{h\omega}(\tau_1,\tau_2) | \Psi_{h'\omega'}(\tau_1,\tau_2)\rangle = 2 \int_{-\infty}^\infty dT \int_0^\infty \frac{d\tau}{\tau} \psi_h^*\left(\frac{|\omega| \tau}{2}\right) \psi_{h'}\left(\frac{|\omega'| \tau}{2}\right) e^{i (\omega - \omega')T} = \\ = 4 \pi \delta(\omega - \omega') \int_0^\infty \frac{d\tau}{\tau} \left[ \frac{\sin\frac{\pi h}{2}}{\cos(\pi h)} J_{\frac{1}{2}-h} \left(\frac{\omega \tau}{2}\right) - \frac{\cos\frac{\pi h}{2}}{\cos(\pi h)} J_{h-\frac{1}{2}} \left(\frac{\omega \tau}{2}\right) \right]^* \times \\ \times \left[ \frac{\sin\frac{\pi h'}{2}}{\cos(\pi h')} J_{\frac{1}{2}-h'} \left(\frac{\omega \tau}{2}\right) - \frac{\cos\frac{\pi h'}{2}}{\cos(\pi h')} J_{h'-\frac{1}{2}} \left(\frac{\omega \tau}{2}\right) \right].
\end{aligned} \eeq
For the discrete set this integral gives the Kronecker delta:
\beq \langle \Psi_{h\omega}(\tau_1,\tau_2) | \Psi_{h'\omega'}(\tau_1,\tau_2)\rangle = \frac{2 \pi^2}{2h - 1} \delta_{h h'} \cdot 2\pi\delta(\omega - \omega'), \eeq
and for the continuum set it gives the Dirac delta\footnote{We introduce UV cut-off $\epsilon \rightarrow 0$ and use that $\lim_{\epsilon \rightarrow 0} \frac{2}{p-s} \sin\left(\frac{s-p}{2}\log\frac{\epsilon}{2}\right) = \pi \delta(s-p)$. More details can be found in~\cite{Polchinski}.}:
\beq \langle \Psi_{h\omega}(\tau_1,\tau_2) | \Psi_{h'\omega'}(\tau_1,\tau_2)\rangle = \frac{2 \pi \tan(\pi h)}{2h - 1} 2 \pi \delta(h - h') \cdot 2\pi\delta(\omega - \omega'). \eeq
Furthermore, the identity operator~\eqref{eq:Sch-I} on the space of antisymmetric two-point functions can be represented as the following decomposition:
\beq \begin{aligned}
I(\tau_1, \tau_2, \tau_3, \tau_4) = \frac{1}{2} \int_{-\infty}^\infty \frac{d\omega}{2\pi} \Bigg[ &\int_0^\infty \frac{ds}{2\pi} \frac{2h - 1}{\pi \tan(\pi h)} \Psi_{h\omega}(\tau_1, \tau_2) \Psi_{h\omega}^*(\tau_3, \tau_4)\Big|_{h = \frac{1}{2} + is} + \\ + &\sum_{n=1}^\infty \frac{2h-1}{\pi^2} \Psi_{h\omega}(\tau_1, \tau_2) \Psi_{h\omega}^*(\tau_3, \tau_4)\Big|_{h = 2n} \Bigg].
\end{aligned} \eeq
Substituting the ansatz~\eqref{eq:eigen-0} and integrating over the frequencies, we obtain a decomposition which explicitly looks like a conformal four-point function of fields with $\Delta=1$:
\beq \label{eq:set-2}
I(\tau_1, \tau_2, \tau_3, \tau_4) = \frac{1}{2} \frac{\text{sgn}(\tau_{12}) \text{sgn}(\tau_{34})}{|\tau_{12}| |\tau_{34}|} \left[ \int_0^\infty \frac{ds}{2\pi} \frac{2h - 1}{\pi \tan(\pi h)} \Psi_h (\chi) \Big|_{h = \frac{1}{2} + is} + \sum_{n=1}^\infty \frac{2h-1}{\pi^2} \Psi_h (\chi) \Big|_{h=2n} \right], \eeq
where we have denoted $\tau_{12} \equiv \tau_1 - \tau_2$, introduced the $SL(2,\mathbb{R})$-invariant cross-ratio:
\beq \label{eq:cross}
\chi \equiv \frac{\tau_{12} \tau_{34}}{\tau_{13} \tau_{24}}, \eeq
and defined the function $\Psi_h(\chi)$:
\beq \Psi_h(\chi) \equiv \begin{cases}
\frac{\Gamma\left(\frac{h}{2}\right) \Gamma\left(\frac{1-h}{2}\right)}{\sqrt{\pi}} \phantom{.}_2 F_1\left[\frac{h}{2}, \frac{1-h}{2}, \frac{1}{2}, \left(\frac{2-\chi}{\chi}\right)^2\right], \quad &\text{if} \quad \chi > 1, \\
\frac{\cos^2\left(\frac{\pi h}{2}\right)}{\cos(\pi h)} \frac{\Gamma(h)^2}{\Gamma(2h)} \chi^h \phantom{.}_2 F_1\left(h, h, 2h, \chi \right) + (h \rightarrow 1-h), \quad &\text{if} \quad 0 < \chi < 1.
\end{cases} \eeq
Here $\phantom{.}_2 F_1(\cdots)$ is the hypergeometric function. For the details on this calculation see appendix~\ref{sec:product-integral} and papers~\cite{Polchinski,Maldacena-SYK}.
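As a numerical illustration of the $0<\chi<1$ branch (assuming SciPy's \texttt{hyp2f1}; a generic non-half-integer $h$ is chosen so that all prefactors are finite), one can check the small-$\chi$ behavior $\Psi_h(\chi) \approx c_h \chi^h + c_{1-h}\chi^{1-h}$ and the manifest $h \rightarrow 1-h$ symmetry:

```python
# Psi_h(chi) on 0 < chi < 1, second branch of the definition above.
import numpy as np
from scipy.special import gamma, hyp2f1

def coeff(h):
    return np.cos(np.pi * h / 2)**2 / np.cos(np.pi * h) * gamma(h)**2 / gamma(2 * h)

def Psi(h, chi):
    return (coeff(h) * chi**h * hyp2f1(h, h, 2 * h, chi)
            + coeff(1 - h) * chi**(1 - h) * hyp2f1(1 - h, 1 - h, 2 - 2 * h, chi))

h, chi = 0.3, 1e-5
# for small chi both 2F1 factors tend to 1, so Psi reduces to a sum of two powers
leading = coeff(h) * chi**h + coeff(1 - h) * chi**(1 - h)
assert abs(Psi(h, chi) / leading - 1) < 1e-4
# symmetry Psi_h = Psi_{1-h}
assert abs(Psi(h, chi) - Psi(1 - h, chi)) < 1e-12
```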
Finally, the decomposition~\eqref{eq:set-2} can be rewritten as a single contour integral:
\beq \label{eq:set-3}
I(\tau_1, \tau_2, \tau_3, \tau_4) = \frac{1}{2} \frac{\text{sgn}(\tau_{12}) \text{sgn}(\tau_{34})}{\tau_{12} \tau_{34}} \int_\mathcal{C} \frac{dh}{2 \pi i} \frac{h - \frac{1}{2}}{\pi \tan\left(\frac{\pi h}{2}\right)} \Psi_h(\chi), \eeq
where the contour $\mathcal{C}$ is defined in the following way:
\beq \label{eq:set-4}
\int_\mathcal{C} \frac{dh}{2 \pi i} \equiv \int_{\frac{1}{2}-i\infty}^{\frac{1}{2}+i\infty} \frac{dh}{2\pi i} + \sum_{n=1}^\infty \text{Res}_{h=2n}. \eeq
In order to rewrite the integral over $ds$ we used the symmetry of the integrand under the change $h \rightarrow 1-h$ and the following identity:
\beq \frac{2}{\tan(\pi h)} = \frac{1}{\tan\frac{\pi h}{2}} - \frac{1}{\tan\frac{\pi(1-h)}{2}}. \eeq
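This cotangent splitting can be verified numerically, including at complex $h$ on the contour $h = \frac{1}{2} + is$ (the sample points below are arbitrary):

```python
# Check 2/tan(pi h) = 1/tan(pi h/2) - 1/tan(pi (1-h)/2) for generic real and complex h.
import numpy as np

for h in [0.3, 1.7, 0.5 + 2.0j, 0.5 - 0.7j]:
    lhs = 2 / np.tan(np.pi * h)
    rhs = 1 / np.tan(np.pi * h / 2) - 1 / np.tan(np.pi * (1 - h) / 2)
    assert abs(lhs - rhs) < 1e-10 * max(1.0, abs(lhs))
```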
Of course, the decomposition for the identity operator can also be deduced from the representation theory of the $SL(2,\mathbb{R})$ group. One can find more details on this method in~\cite{Kitaev-reps}.
\subsubsection{Four-point function and OPE}
\label{sec:OPE}
To find the conformal contribution to the four-point function, we substitute the eigenvalues and the decomposition of the identity operator~\eqref{eq:set-3} into eq.~\eqref{eq:4p-CFT-1}:
\beq \label{eq:4p-fin-1}
\mathcal{F}_{CFT}(\tau_1, \tau_2, \tau_3, \tau_4) = \frac{\sqrt{4\pi}}{3 N} \frac{\text{sgn}(\tau_{12})}{|J\tau_{12}|^{2\Delta}} \frac{\text{sgn}(\tau_{34})}{|J\tau_{34}|^{2\Delta}} \mathcal{F}_{CFT}(\chi), \eeq
where $\Delta = \frac{1}{4}$ and we have introduced the $SL(2,\mathbb{R})$-invariant function $\mathcal{F}_{CFT}(\chi)$:
\beq \label{eq:4p-fin-2}
\mathcal{F}_{CFT}(\chi) = \int_\mathcal{C} \frac{dh}{2 \pi i} \frac{k(h)}{1 - k(h)} \frac{h - \frac{1}{2}}{\pi \tan\left(\frac{\pi h}{2}\right)} \Psi_h(\chi) \Big|_{h\ne2}. \eeq
In the finite temperature case the expression~\eqref{eq:4p-fin-1} transforms to:
\beq \label{eq:4p-fin-3}
\mathcal{F}_{CFT}(\tau_1, \tau_2, \tau_3, \tau_4) = \frac{\sqrt{4\pi}}{3 N} \frac{1}{\beta J} \frac{\text{sgn}\left(\sin\frac{\pi \tau_{12}}{\beta}\right)}{\big|\sin\frac{\pi \tau_{12}}{\beta}\big|^{2\Delta}} \frac{\text{sgn}\left(\sin\frac{\pi \tau_{34}}{\beta}\right)}{\big|\sin\frac{\pi \tau_{34}}{\beta}\big|^{2\Delta}} \mathcal{F}_{CFT}(\tilde{\chi}), \quad \tilde{\chi} = \frac{\sin\frac{\pi \tau_{12}}{\beta} \sin\frac{\pi \tau_{34}}{\beta}}{\sin\frac{\pi \tau_{13}}{\beta} \sin\frac{\pi \tau_{24}}{\beta}}. \eeq
In~\eqref{eq:set-4} we have to exclude the value $h=2$ because it corresponds to the zero mode of the operator $K_c$, i.e. to the soft mode discussed in subsection~\ref{sec:4p-S}. However, $h=2$ is not the only solution to the equation $k(h) = 1$ with $k(h)$ from~\eqref{eq:eigen-4}. In fact, this equation has infinitely many real solutions of the form $h_m = 2\Delta + 2 m + 1 + \epsilon_m$, where $\epsilon_m$ tends to zero for large integer $m$ as:
\beq \epsilon_m \approx \frac{3}{2 \pi m}, \quad \text{for} \quad m \gg 1. \eeq
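These roots are easy to obtain numerically; the following sketch (using SciPy's \texttt{brentq}; the small bracket offsets are chosen by hand to avoid the poles of the tangent) finds the first few $h_m$ and checks the quoted asymptotics of $\epsilon_m$:

```python
# Roots of k(h) = 1 beyond h = 2; the m-th root sits inside (2m + 3/2, 2m + 5/2).
import numpy as np
from scipy.optimize import brentq

def f(h):
    return -1.5 * np.tan(np.pi / 2 * (h - 0.5)) / (h - 0.5) - 1.0

roots = [brentq(f, 2 * m + 1.5 + 1e-8, 2 * m + 2.5 - 1e-8) for m in range(1, 30)]
eps = [r - (2 * m + 1.5) for m, r in zip(range(1, 30), roots)]

assert 3.75 < roots[0] < 3.80                      # h_1 is close to 3.77
assert abs(eps[-1] * 2 * np.pi * 29 / 3 - 1) < 0.05  # eps_m -> 3/(2 pi m)
assert all(e1 > e2 for e1, e2 in zip(eps, eps[1:]))  # eps_m decreases monotonically
```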
These solutions do not belong to the spectrum of the operator $K_c$, but they correspond to the simple poles of the function $\frac{k(h)}{1 - k(h)} \frac{h-\frac{1}{2}}{\pi \tan\left(\frac{\pi h}{2}\right)} \Psi_h(\chi)$. Hence, we can push the contour $\mathcal{C}$ to the right (Fig.~\ref{fig:pole-1}) and obtain a different decomposition for the function $\mathcal{F}_{CFT}$:
\begin{figure}[t]
\center{\includegraphics[scale=0.3]{Pole-1.png}}\caption{To the left: contour $\mathcal{C}$ from the sum~\eqref{eq:4p-fin-2}. To the right: contour from the sum~\eqref{eq:4p-fin-OPE}. Dots denote poles that correspond to the solutions of $\tan\frac{\pi h}{2} = 0$, crosses denote poles that correspond to the solutions of $k(h) = 1$. Note the double pole at $h=2$.} \label{fig:pole-1}
\end{figure}
\beq \label{eq:4p-fin-OPE}
\mathcal{F}_{CFT}(\chi) = \sum_{m=0}^\infty \text{Res}_{h=h_m} \left[ \frac{k(h)}{1 - k(h)} \frac{h-\frac{1}{2}}{\pi \tan\left(\frac{\pi h}{2}\right)} \Psi_h(\chi) \right], \eeq
where $h_0 = 2$ and $h_m$ for $m>0$ have the form mentioned above. However, the contribution from the $h_0 = 2$ pole cancels if one moves away from the IR limit and considers the corrections to $k(h, \omega)$ near $h_0 = 2$ (we will not discuss how this happens; for the details see~\cite{Maldacena-SYK}). Thus, for $\chi < 1$ this expansion reproduces the four-point function OPE~\cite{Maldacena-SYK,Sarosi}:
\beq \mathcal{F}_{CFT}(\chi) = \sum_{m=1}^\infty c_m^2 \chi^{h_m} \phantom{.}_2 F_1\left(h_m, h_m, 2h_m, \chi \right) = \vcenter{\hbox{\includegraphics[scale=0.15]{block.png}} }, \eeq
where $h_m$ are the conformal weights of the corresponding intermediate operators and the coefficients $c_m$ are found from the decomposition of~\eqref{eq:set-3} around $\chi = 0$. The asymptotic behavior of the conformal weights shows that the operators of the OPE are built from two fermion fields, $2m+1$ derivatives and an anomalous part that corresponds to the interactions:
\beq \mathcal{O}_m = \sum_{i=1}^N \sum_{k=0}^{2m+1} d_{mk} \partial_\tau^k \chi_i \partial_\tau^{2m+1-k} \chi_i, \eeq
where $d_{mk}$ are some numerical coefficients. The explicit form of the operators $\mathcal{O}_m$ can be found in~\cite{Gross}.
\subsubsection{OTOC and TOC}
In this subsubsection we estimate the conformal contributions to OTOC~\eqref{eq:OTOC}, which corresponds to the function $\mathcal{F}\left(\frac{\beta}{4}+it, -\frac{\beta}{4}+it,0,-\frac{\beta}{2}\right)$, and TOC~\eqref{eq:TOC}, which corresponds to the function $\mathcal{F}\left(\frac{\beta}{2}+it, it,0,-\frac{\beta}{2}\right)$. At tree level both of these correlators behave as:
\beq \text{OTOC}(t) = \text{TOC}(t) = \tilde{G}\left(\frac{\beta}{2}\right) \tilde{G}\left(\frac{\beta}{2}\right) \approx \frac{\sqrt{\pi}}{2 \beta J} + \mathcal{O}\left(\frac{1}{N}\right), \eeq
in the limit $t \rightarrow \infty$. In subsection~\ref{sec:4p-S} we estimated the leading $\mathcal{O}\left(\frac{1}{N}\right)$ corrections to these correlators, which are generated by the so-called soft mode. Here we find the subleading corrections that have the same order in $\frac{1}{N}$ but are suppressed by the small factor $\frac{1}{\beta J}$. We denote such corrections as $\delta \text{OTOC}(t)$ and $\delta \text{TOC}(t)$.
In the limit $t \rightarrow \infty$ the choices of times for both OTOC and TOC give small cross-ratios~\eqref{eq:cross}, $\chi \rightarrow 0$. However, in the limit $t \rightarrow 0$ the times of the OTOC correspond to the cross-ratio $\chi \rightarrow 2 - \frac{4 \pi i t}{\beta}$, whereas the times of the TOC correspond to $\chi \rightarrow 1 - \frac{\pi^2 t^2}{\beta^2}$. Hence, for the OTOC we need to analytically continue the $\chi > 1$ version of the expression~\eqref{eq:4p-fin-2} to small imaginary cross-ratios $\chi \sim -4 i e^{-\frac{2 \pi t}{\beta}}$:
\small \beq \delta \text{OTOC}(t) = \frac{\sqrt{4\pi}}{3 N} \frac{1}{\beta J} \int_\mathcal{C} \frac{dh}{2 \pi i} \frac{k(h)}{1 - k(h)} \frac{h - \frac{1}{2}}{\pi \tan\left(\frac{\pi h}{2}\right)} \frac{\Gamma\left(\frac{h}{2}\right) \Gamma\left(\frac{1-h}{2}\right)}{\sqrt{\pi}} \phantom{.}_2 F_1\left[\frac{h}{2}, \frac{1-h}{2}, \frac{1}{2}, \left(\frac{2-\chi}{\chi}\right)^2\right]_{h\ne2}. \eeq \normalsize
For the TOC we just need to take the limit $\chi \sim 4 e^{-\frac{2 \pi t}{\beta}} \rightarrow 0$:
\small \beq \delta \text{TOC}(t) = \frac{\sqrt{4\pi}}{3 N} \frac{1}{\beta J} \int_\mathcal{C} \frac{dh}{2 \pi i} \frac{k(h)}{1 - k(h)} \frac{h - \frac{1}{2}}{\pi \tan\left(\frac{\pi h}{2}\right)} \left[\frac{\cos^2\left(\frac{\pi h}{2}\right)}{\cos(\pi h)} \frac{\Gamma(h)^2}{\Gamma(2h)} \chi^h \phantom{.}_2 F_1\left(h, h, 2h, \chi \right) + (h \rightarrow 1-h)\right]_{h\ne2}. \eeq \normalsize
To evaluate the integral along the contour $\mathcal{C}$ we use the following trick. First of all, we define the function $k_R(h)$:
\beq k_R(h) \equiv \frac{\cos\left[\frac{\pi (2h-1)}{4}\right]}{\cos\left[\frac{\pi (2h+1)}{4}\right]} k(h), \eeq
which has two useful properties. On the one hand, for any real even $h$ this function coincides with the eigenvalue $k(h)$, so we can substitute $k(h) \rightarrow k_R(h)$ into the discrete sum in~\eqref{eq:4p-fin-2}. On the other hand, $k_R(h) = 1$ at a unique point of the complex plane, $h=2$. Hence, we can freely\footnote{$\Psi_h(\chi)$ has simple poles at the points $h=1,3,5,\cdots$, but these poles are cancelled by zeroes of $\left[\tan\left(\frac{\pi h}{2}\right)\right]^{-1}$.} pull the contour that circles $h=2,4,6,\cdots$ back to the line $h = \frac{1}{2} + is$ (Fig.~\ref{fig:pole-2}). After this operation we get a single integral over the line plus the pole at $h=2$:
\begin{figure}[t]
\center{\includegraphics[scale=0.3]{Pole-2.png}}\caption{To the left: sum over the poles into~\eqref{eq:4p-fin-2} with $k(h) \rightarrow k_R(h)$. To the right: result of pushing the contour to the left. Note that $k_R(h) = 1$ only for $h =2$, so we do not get contributions like in the right part of Fig.~\ref{fig:pole-1}.} \label{fig:pole-2}
\end{figure}
\beq \label{eq:4p-fin-4} \begin{aligned}
\mathcal{F}_{CFT}(\chi) &= \int_{-\infty}^\infty \frac{ds}{2\pi} \left[ \frac{k(h)}{1-k(h)} - \frac{k_R(h)}{1-k_R(h)}\right] \frac{h-\frac{1}{2}}{\pi \tan\left(\frac{\pi h}{2}\right)} \Psi_h(\chi) - \\ &- \text{Res}_{h=2} \left[ \frac{k_R(h)}{1-k_R(h)} \frac{h-\frac{1}{2}}{\pi \tan\left(\frac{\pi h}{2}\right)} \Psi_h(\chi) \right].
\end{aligned} \eeq
The integral in the first line rapidly converges and does not grow in the limit $\chi \rightarrow 0$, because in this limit the function $\Psi_{\frac{1}{2} + is}(\chi) \sim \chi^\frac{1}{2}$ (this is true for both OTOC and TOC cases). Therefore, for our purposes this integral can be neglected.
At the same time, the pole in the second line does give a growing contribution to OTOC. Moreover, this is a double pole, which gives the combination of the function $\Psi_h$ and its derivative $\partial_h \Psi_h$. This combination grows faster than exponentially:
\beq \label{eq:OTOC-CFT}
\delta \text{OTOC}(t) \approx \frac{C_1}{\beta J N} e^{\frac{2 \pi t}{\beta}} + \frac{C_2}{\beta J N} \frac{2 \pi t}{\beta} e^{\frac{2 \pi t}{\beta}}, \eeq
where $C_1$ and $C_2$ are some positive numerical constants. However, this does not mean that the bound of~\cite{MSS} is violated in the SYK model. Indeed, the contribution~\eqref{eq:OTOC-CFT} is multiplied by the small factor $\frac{1}{\beta J}$. Hence, the contribution of the conformal part is smaller than the contribution of the soft mode, at least until the growth of the OTOC saturates. This means that the expression~\eqref{eq:OTOC-CFT} can be interpreted in terms of the leading correction to the Lyapunov exponent~\cite{Maldacena-SYK,Kitaev,Sarosi}:
\beq \label{eq:4p-fin-5}
\text{OTOC}(t) \rightarrow \text{OTOC}(t) + \delta \text{OTOC}(t) \approx \frac{\sqrt{\pi}}{2 \beta J} \left[ 1 - \text{const} \frac{\beta J}{N} e^{\kappa t} \right], \quad \text{for} \quad \beta \ll t \ll \beta \log\frac{N}{\beta J}, \eeq
where ``const'' is a positive $\mathcal{O}(1)$ numerical factor and $\kappa$ is the corrected Lyapunov exponent; both factor and exponent are equal to the corresponding values from~\eqref{eq:OTOC-S} with $\mathcal{O}\left(\frac{1}{\beta J}\right)$ corrections:
\beq \label{eq:4p-fin-6}
\kappa \approx \frac{2 \pi}{\beta} \left( 1 - \frac{6.05}{\beta J} + \cdots \right). \eeq
One can also check that the pole in the second line of~\eqref{eq:4p-fin-4} does not give any contributions to the TOC that grow with time. This contribution is of the order $\mathcal{O}\left(\frac{1}{N}\right)$, i.e. the approximate identity for the whole TOC is the same as in~\eqref{eq:TOC}.
We emphasize that OTOC rapidly decays only well before the scrambling time $t_* \sim \beta \log \frac{N}{\beta J}$. At greater times our approximations do not work, and other types of diagrams (e.g. multiple parallel ladders) also generate significant corrections to~\eqref{eq:OTOC-S}. So one expects that the rate of the decay slows down before OTOC eventually saturates~\cite{Sarosi,MSS,Maldacena-SYK,Kitaev}. This conjecture was confirmed in~\cite{Bagrets-1702}, where SYK OTOCs were evaluated for arbitrary times. Namely, a new time scale $t_M = \frac{N \log N}{64 \sqrt{\pi} J}$ was established, after which the exponential decay of OTOCs is replaced by a power law: $\text{OTOC}(t) \sim (t/t_M')^{-6}$, where $t_M' = t_M$ if $\beta \ll t_M$ and $t_M' = \beta^{-1}$ in the opposite case.
Let us also emphasize again that identities~\eqref{eq:4p-fin-5} and~\eqref{eq:4p-fin-6} were obtained in the limit $1 \ll \tau J < \beta J \ll N$, i.e. only for small but non-zero temperature. However, recently it was argued that in the large $q$ limit, where $q$ is the number of fermions in the interaction vertex, similar identities hold for arbitrary temperatures and couplings~\cite{Streicher,Choi}.
\section{2D dilaton gravity}
\label{sec:JT}
The other remarkable theory which exhibits chaotic behavior is two-dimensional dilaton gravity coupled to matter. Correlation functions of the boundary operators corresponding to bulk matter fields in this model behave similarly to the correlation functions of the fermion fields in the SYK model. However, we emphasize that the behavior of these models coincides only in the low energy limit. Two-dimensional dilaton gravity has been extensively studied in~\cite{Almheiri, Maldacena-JT, Jensen, Engelsoy}. In this section we review it.
\subsection{Dilaton gravity as the near-horizon limit of extremal black hole}
\label{sec:extremal}
First of all, let us show that in the near-horizon limit the space-time of a 4D extremal black hole factorizes into the product of 2D anti-de Sitter space and a 2D sphere. The metric and the electromagnetic field of the charged Reissner--Nordstr{\"o}m black hole are as follows:
\beq \label{eq:RN} \begin{aligned}
ds^2 &= -\frac{(r-r_+)(r-r_-)}{r^2} dt^2 + \frac{r^2}{(r-r_+)(r-r_-)} dr^2 + r^2 d\Omega^2, \\
r_\pm &= Q l_p + E l_p^2 \pm \sqrt{2 Q E l_p^3 + E^2 l_p^4}, \\
F_{rt} &= \frac{Q}{r^2}.
\end{aligned} \eeq
Here $M$ is the mass and $Q$ is the electric charge of the black hole, and $d\Omega^2$ is the metric on the two-sphere of unit radius. Also $l_p = \sqrt{G}$ is the Planck length ($G$ is the usual 4D Newton constant), and the excitation energy above extremality is $E = M - \frac{Q}{l_p}$. Obviously, for $E=0$ the horizons $r_+$ and $r_-$ coincide and the black hole becomes extremal. Note that in this case $M$ and $Q$ are not independent, so the Planck length is the only dimensionful parameter of the extremal black hole.
In order to take the near-horizon limit of the extremal black hole, we introduce the variable $z$:
\beq z \equiv \frac{Q^2 l_p^2}{r - r_+}, \eeq
and take the limit $r \rightarrow r_+$, $l_p \rightarrow 0$ while keeping $z = \text{const}$. This is the simplest combination of $r-r_+$ and $l_p$ with the dimensionality of length which does not vanish in the limit $r \rightarrow r_+$ (we introduce the factor $Q^2$ for convenience). It is straightforward to see that the metric~\eqref{eq:RN} factorizes into the sum of $AdS_2$ and $S_2$ in the limit in question:
\beq \label{eq:extremal-0}
ds^2 \approx Q^2 l_p^2 \left( \frac{-dt^2 + dz^2}{z^2} + d\Omega^2 \right). \eeq
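This limit can be made concrete with a small numerical check: plugging $r = r_+ + Q^2 l_p^2/z$ into the extremal ($E=0$) metric, every component should approach its $AdS_2 \times S_2$ value as $l_p \rightarrow 0$ at fixed $z$ (an illustrative consistency check; the values of $Q$ and $z$ are arbitrary):

```python
# Near-horizon limit of the extremal Reissner-Nordstrom metric: the ratio of
# each metric component to its AdS2 x S2 counterpart tends to 1 as lp -> 0.
import numpy as np

Q, z = 3.0, 1.7
for lp in [1e-2, 1e-4, 1e-6]:
    rp = Q * lp                   # extremal horizon: r_+ = r_- = Q lp
    r = rp + Q**2 * lp**2 / z     # fixed z while r -> r_+
    f = (r - rp)**2 / r**2        # -g_tt for E = 0
    ads = Q**2 * lp**2 / z**2     # target AdS2 factor Q^2 lp^2 / z^2
    gtt = f / ads                          # dt^2 coefficient
    grr = (ads**2 / f) / ads               # dr^2 coefficient, with dr = -(Q^2 lp^2 / z^2) dz
    ang = r**2 / (Q**2 * lp**2)            # sphere radius squared over Q^2 lp^2
    for ratio in (gtt, grr, ang):
        assert abs(ratio - 1) < 10 * Q * lp / z   # deviations vanish linearly in lp
```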
Now let us show that a certain type of excitations above the near-horizon extremal black hole~\eqref{eq:extremal-0} is described by two-dimensional dilaton gravity~\cite{Sarosi, Nayak, Grumiller-1, Thomi}. Namely, we consider a static, spherically symmetric ansatz for the metric:
\beq ds^2 = h_{ij}(x^0,x^1) dx^i dx^j + \Phi^2(x^0,x^1) d\Omega^2, \eeq
where $i,j=0,1$, $x^0=t$, $x^1=r$, and $h_{ij}$ and $\Phi$ are some functions to be determined. The determinant of the metric ($g = \det g_{\mu\nu}$), the Ricci scalar ($R_g$) and the square of the electromagnetic tensor ($F_{\mu\nu}^2$) are as follows:
\beq \begin{aligned}
\sqrt{-g} &= \sqrt{-h} \cdot \Phi^2 \sin\theta, \\
R_g &= R_h + \frac{2}{\Phi^2} - 4 \nabla^2 \log\Phi - 6 h^{mn} \nabla_m \log\Phi \nabla_n\log\Phi, \\
F_{\mu\nu}^2 &= \frac{2 Q^2}{\Phi^4},
\end{aligned} \eeq
where $\nabla_k$ denotes the covariant derivative with respect to the metric $h_{ij}$. In the second line we used that the unit sphere has constant curvature $R_{(\theta,\phi)} = 2$. Substituting these formulae into the Einstein--Hilbert action:
\beq I = -\frac{1}{16 \pi l_p^2} \int d^4x \sqrt{-g} \left[ R_h - \frac{l_p^2}{4} F_{\mu\nu}^2 \right], \eeq
using Stokes' theorem (we assume that corresponding boundary terms at flat space-time infinity vanish) and integrating over the angular degrees of freedom, we obtain the following two-dimensional theory\footnote{Of course, one can also consider other theories of 2D dilaton gravity, e.g. theories with a different type of the potential. A comprehensive review of such theories can be found in~\cite{Grumiller-1}.}:
\beq I = -\frac{1}{4 l_p^2} \int d^2x\sqrt{-h} \left[ \Phi^2 R_h + 2 (\nabla \Phi)^2 + 2 - \frac{2 Q^2 l_p^2}{\Phi^2} \right]. \eeq
The field $\Phi$ is usually referred to as the dilaton field. Note that the Weyl transformation shifts the potential and the coefficient in front of the kinetic term:
\beq h_{ij} \rightarrow h_{ij} \Phi^{-\frac{\lambda}{2}} \quad \text{leads to} \quad 2 \rightarrow 2 - \lambda, \quad 2 - \frac{2 Q^2 l_p^2}{\Phi^2} \rightarrow \Phi^{-\frac{\lambda}{2}} \left( 2 - \frac{2 Q^2 l_p^2}{\Phi^2} \right), \eeq
so we can get rid of the kinetic term for the field $\Phi$:
\beq \label{eq:extremal-1}
I = -\frac{1}{4 l_p^2} \int d^2x\sqrt{-h} \left[ \Phi^2 R_h + 2 - \frac{2 Q^2 l_p^2}{\Phi^2} \right]. \eeq
Since now the dilaton is non-dynamical, the extremum of this action is achieved at some constant value $\Phi_0$ which determines the curvature of the spacetime. Moreover, the curvature is always negative, i.e. the extremum corresponds to the $AdS_2$ space:
\beq \delta_\Phi I = 0 \quad \text{implies} \quad R_h = -\frac{2 Q^2 l_p^2}{\Phi_0^4} = - \frac{2}{L^2}, \eeq
where we have defined the radius of the $AdS_2$ as $L = \frac{\Phi_0^2}{|Q| l_p}$. Substituting $L^2 \approx Q^2 l_p^2$ from~\eqref{eq:extremal-0}, one can estimate the critical value of the dilaton field: $\Phi_0 \approx |Q| l_p$. As expected, in the leading order this theory reproduces the near-horizon limit of the extremal black hole with the gravitational radius $r_\pm \approx \Phi_0$. Let us consider excitations above extremality, which in this picture correspond to small deformations of the dilaton field:
\beq \label{eq:extremal-3}
\Phi^2 = \Phi_0^2 + \phi(x,t), \quad \phi(x,t) \ll \Phi_0^2, \eeq
and expand the action~\eqref{eq:extremal-1} up to the second order in $\frac{\phi}{\Phi_0^2}$:
\beq \label{eq:extremal-2} \begin{aligned}
I \approx &-\frac{1}{2 l_p^2} \int d^2 x \sqrt{-h} - \frac{ \Phi_0^2}{4 l_p^2} \left[ \int d^2x \sqrt{-h} \left( R_h + \frac{2}{L^2} \right) + 2 \int_{bdy} \mathcal{K} \right] - \\ &- \frac{1}{4 l_p^2} \left[ \int d^2x \sqrt{-h} \, \phi \left( R_h + \frac{2}{L^2} \right) + 2 \int_{bdy} \phi_b \mathcal{K} \right].
\end{aligned} \eeq
Here we have restored the appropriate boundary terms at the $AdS_2$ boundary\footnote{Note that this is not the same as flat space-time boundary of the 4D theory.} to make the minimal action finite (we will check this below) and introduced the trace of the extrinsic curvature:
\beq \label{eq:extremal-4}
\mathcal{K} = -\frac{h_{ab} T^a T^c \nabla_c n^b}{h_{ab} T^a T^b}, \eeq
where $T^a$ and $n^a$ are tangent and unit normal vectors to the boundary curve\footnote{In higher dimensional case boundary surface has two tangent vectors $T_1^a$ and $T_2^a$, so this expression must be modified to:
\beqs \mathcal{K} = -\frac{h_{ab} T_1^a T_2^c \nabla_c n^b}{h_{ab} T_1^a T_2^b}. \eeqs} of $AdS_2$. Also we have denoted for short $\phi \big|_{bdy} = \phi_b$.
The first term in~\eqref{eq:extremal-2} is proportional to the volume of the $AdS_2$ space, which is infinite but constant. The second term is the ordinary 2D Einstein gravity. This expression is purely topological, i.e. it just gives the Euler characteristic of the manifold due to the Gauss--Bonnet theorem. Hence, neither of these terms affects the equations of motion.
At the same time, the last term in the sum~\eqref{eq:extremal-2} does describe a non-trivial dynamics of the remaining fields. The corresponding action:
\beq \label{eq:JT}
I_{JT} = -\frac{1}{16 \pi G} \left[ \int d^2x \sqrt{-h} \, \phi \left(R_h + \frac{2}{L^2} \right) + 2 \int_{bdy} \phi_b \mathcal{K} \right], \eeq
is usually referred to as Jackiw--Teitelboim 2D gravity theory~\cite{Jackiw, Teitelboim}. Note that we have rescaled the Newton constant. Also note that $\phi$ and $G^{-1}$ always come together and form a dimensionless combination, so it is convenient to define dimensionless dilaton and Newton constant. In the following sections we will study the dynamical implications of the action~\eqref{eq:JT} more thoroughly.
A more detailed derivation of the theory~\eqref{eq:JT} from the near-horizon limit of extremal black hole can be found e.g. in~\cite{Thomi, Nayak}. Also note that this theory can be obtained by a reduction of some other higher-dimensional models~\cite{Kolekar, Grumiller-1}.
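As a minimal consistency check of the constant-dilaton saddle point derived above, one can verify by finite differences that the variation of the local potential $\Phi^2 R_h + 2 - 2Q^2 l_p^2/\Phi^2$ vanishes at $R_h = -2Q^2 l_p^2/\Phi_0^4$ (the numerical values below are arbitrary):

```python
# dV/dPhi = 2 Phi R + 4 Q^2 lp^2 / Phi^3 vanishes when R = -2 Q^2 lp^2 / Phi0^4,
# i.e. R = -2/L^2 with L = Phi0^2 / (|Q| lp).
import numpy as np

Q, lp, Phi0 = 5.0, 0.01, 1.3
R = -2 * Q**2 * lp**2 / Phi0**4      # claimed saddle-point curvature

def V(Phi):                          # local part of the action density
    return Phi**2 * R + 2 - 2 * Q**2 * lp**2 / Phi**2

eps = 1e-6
dV = (V(Phi0 + eps) - V(Phi0 - eps)) / (2 * eps)   # central-difference derivative
assert abs(dV) < 1e-8                # Phi0 extremizes the potential

L = Phi0**2 / (abs(Q) * lp)          # AdS2 radius
assert abs(R + 2 / L**2) < 1e-12
```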
\subsection{Pure $AdS_2$ and its symmetries}
\label{sec:AdS-basics}
Before discussing the Jackiw--Teitelboim theory, let us first consider pure $AdS_2$ space to set up the notation and reveal some useful properties of this space.
First of all, it is convenient to set the radius of the space to unity, $L=1$, because it can be easily restored on dimensional grounds. Below we will consider such space if it is not stated otherwise.
Second, we will work in Euclidean signature. On the one hand, it is natural from the holographic point of view, because eventually we are interested in correlation functions of operators in the dual boundary theory (see subsection~\ref{sec:AdS-4p}): similarly to the SYK case (section~\ref{sec:treatment}), we evaluate some type of correlation functions in the Euclidean signature and then analytically continue them to Lorentzian times (see~\cite{Akhmedov-1802} for the discussion of the analytical continuation of $AdS$ correlation functions).
On the other hand, in Euclidean signature $AdS_2$ is just the hyperbolic disk (Lobachevsky space\footnote{We do not distinguish between the upper half-plane and unit disk because they can be mapped into each other by the M{\"o}bius transformation: $w \rightarrow \frac{iw+1}{w+i}$, where $w = t + iz$. The metrics on the plane and the disk are related by the same transformation. In particular, curves of constant $t$ and $z$ on the Fig.~\ref{fig:AdS-1} should be interpreted as the mappings of the corresponding curves on the hyperbolic plane.}), which is fully covered by the Poincar{\'e} and Rindler coordinates (see figure~\ref{fig:AdS-1}):
\begin{figure}[t]
\begin{center}
\begin{minipage}[t]{0.45\linewidth}
\includegraphics[width=1\linewidth]{AdS-1.png}
\caption{Curves of constant $t$, $z$, $\varphi$ and $\rho$. Arrows show the direction in which the complementary coordinate increases.}
\label{fig:AdS-1}
\end{minipage}
\hfill
\begin{minipage}[t]{0.45\linewidth}
\includegraphics[width=1\linewidth]{AdS-2.png}
\caption{Cutoff of the $AdS_2$ space}
\label{fig:AdS-2}
\end{minipage}
\end{center}
\end{figure}
\beq \begin{aligned}
ds^2 &= \frac{dt^2 + dz^2}{z^2}, \quad &\text{(Poincar{\'e}}) \\
ds^2 &= d\rho^2 + \sinh^2\rho d\varphi^2. \quad &\text{(Rindler)}
\end{aligned} \eeq
One can change between these coordinates using the following identities:
\beq \label{eq:AdS-2}
\tanh \frac{\rho}{2} \cos \varphi = - \frac{2t}{t^2 + (z+1)^2}, \quad \tanh \frac{\rho}{2} \sin \varphi = \frac{t^2 + z^2 - 1}{t^2 + (z+1)^2}. \eeq
Note that $t$ runs from $-\infty$ to $\infty$ and $\varphi$ runs from $-\pi$ to $\pi$ (in fact, this coordinate is periodic: $\varphi \sim \varphi + 2\pi$). Also note that in Lorentzian signature Poincar{\'e} coordinates ($ds^2 = \frac{-d\hat{t}^2 + dz^2}{z^2}$) cover only half of the spacetime and Rindler coordinates ($ds^2 = d\rho^2 - \sinh^2\rho d\hat{\varphi}^2$) cover an even smaller region (see e.g.~\cite{Maldacena-JT,Spradlin}).
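The isometry claimed in the footnote can be verified explicitly. The following SymPy snippet (an illustrative consistency check, not part of the original derivation) confirms that the M{\"o}bius map $w \rightarrow \frac{iw+1}{w+i}$ pulls the hyperbolic disk metric back to the Poincar{\'e} half-plane metric:

```python
# Consistency check (not from the text): the Mobius map w -> (i*w + 1)/(w + i)
# is an isometry between the Poincare half-plane metric (dt^2 + dz^2)/z^2
# and the hyperbolic disk metric 4*|dw'|^2 / (1 - |w'|^2)^2.
import sympy as sp

t = sp.Symbol('t', real=True)
z = sp.Symbol('z', positive=True)
w = t + sp.I * z
wp = (sp.I * w + 1) / (w + sp.I)

# holomorphic map, so dw'/dw equals the partial derivative over t
dwp = sp.diff(wp, t)
abs_dwp2 = sp.simplify(sp.expand_complex(dwp * sp.conjugate(dwp)))
abs_wp2 = sp.simplify(sp.expand_complex(wp * sp.conjugate(wp)))

# pullback of the disk metric must reproduce the half-plane metric 1/z^2
pullback = sp.simplify(4 * abs_dwp2 / (1 - abs_wp2)**2)
assert sp.simplify(pullback - 1 / z**2) == 0
```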
Finally, in practice one should cut off $AdS_2$ space at some curve that is close to the boundary (Fig.~\ref{fig:AdS-2}). Otherwise, the volume of the space and the length of the boundary-boundary geodesics are infinite. This cutoff corresponds to the UV cutoff in the putative dual boundary theory. To implement such a cutoff we fix the boundary value of the metric:
\beq ds \big|_{bdy} = \sqrt{\frac{ds^2}{d\tau^2}} d\tau = \sqrt{\frac{(t')^2 + (z')^2}{z^2}} d\tau = \frac{d\tau}{\epsilon}, \eeq
which implies that the boundary curve has large proper length:
\beq S = \int ds = \int_0^\beta \frac{d\tau}{\epsilon} = \frac{\beta}{\epsilon} \rightarrow \infty, \eeq
where the boundary time runs in the interval $\tau \in [0, \beta)$ and the prime denotes the derivative with respect to $\tau$. The limit $S \rightarrow \infty$ corresponds to $\epsilon \rightarrow 0$. Note that in this limit the coordinates of the curve are not independent:
\beq \label{eq:AdS-3}
\frac{(t')^2 + (z')^2}{z^2} = \frac{1}{\epsilon^2}, \quad \text{hence}, \quad z(\tau) = \epsilon t'(\tau) + \mathcal{O}(\epsilon^3). \eeq
Thus, the function $t(\tau)$ unambiguously determines the boundary curve.
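As a sanity check of~\eqref{eq:AdS-3}, one can verify in SymPy that the ansatz $z(\tau) = \epsilon t'(\tau)$ solves the boundary condition up to higher orders in $\epsilon$. The sample reparametrization $t = \tan(\tau/2)$ below is an illustrative choice, not fixed by the text:

```python
# Sanity check of eq. (AdS-3): with z(tau) = eps*t'(tau) the boundary condition
# (t'^2 + z'^2)/z^2 = 1/eps^2 holds up to O(1) terms. The reparametrization
# t(tau) = tan(tau/2) is an illustrative sample choice.
import sympy as sp

tau = sp.Symbol('tau', positive=True)
eps = sp.Symbol('epsilon', positive=True)

t = sp.tan(tau / 2)
tp = sp.diff(t, tau)
z = eps * tp
zp = sp.diff(z, tau)

mismatch = (tp**2 + zp**2) / z**2 - 1 / eps**2
# the would-be 1/eps^2 divergence cancels, so the mismatch stays finite
assert sp.limit(mismatch * eps**2, eps, 0) == 0
```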
Since the interior of the space is the same for all boundary curves, the geometry of the clipped space is determined by the shape of the boundary curve, i.e. by the single function $t(\tau)$. However, we remind that Euclidean $AdS_2$ space is invariant under the transformations from the isometry group $SO(2,1) \simeq SL(2,\mathbb{R})/\mathbb{Z}_2$, i.e. under translations and rotations. Hence, the functions $t(\tau)$ and $\tilde{t}(\tau)$, which are related by such a transformation:
\beq \label{eq:isometry}
t(\tau) \rightarrow \tilde{t}(\tau) = \frac{a t(\tau) + b}{c t(\tau) + d}, \quad \text{where} \quad ad - bc = 1 \quad \text{and} \quad a,b,c,d \in \mathbb{R}, \eeq
describe the same geometry. This statement is obvious if we rewrite the Poincar{\'e} metric in terms of complex coordinates, $w = t + i z$. The transformations that map the upper half plane into itself are as follows:
\beq w \rightarrow \frac{aw + b}{cw + d}, \quad \text{where} \quad ad - bc = 1 \quad \text{and} \quad a,b,c,d \in \mathbb{R}, \eeq
which gives~\eqref{eq:isometry} in the limit $\epsilon \rightarrow 0$.
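A quick symbolic check (illustrative, not part of the original text) confirms that real M{\"o}bius transformations with unit determinant indeed preserve the upper half-plane metric:

```python
# Check that a real Mobius map w -> (a*w + b)/(c*w + d) with a*d - b*c = 1
# preserves the upper half-plane metric: |dw'/dw|^2 / Im(w')^2 = 1/z^2.
import sympy as sp

t = sp.Symbol('t', real=True)
z = sp.Symbol('z', positive=True)
b, c = sp.symbols('b c', real=True)
d = sp.Symbol('d', positive=True)   # d > 0 assumed so that a is well defined
a = (1 + b * c) / d                 # enforces the unit-determinant condition

w = t + sp.I * z
wp = (a * w + b) / (c * w + d)

im_wp = sp.simplify(sp.im(sp.expand_complex(wp)))
dwp = sp.diff(wp, t)                # holomorphic derivative dw'/dw
abs_dwp2 = sp.simplify(sp.expand_complex(dwp * sp.conjugate(dwp)))

assert sp.simplify(abs_dwp2 / im_wp**2 - 1 / z**2) == 0
```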
\subsection{Schwarzian theory}
\label{sec:AdS-Sch}
In this section we consider the Jackiw--Teitelboim theory~\eqref{eq:JT} on the clipped Poincar{\'e} disk and show that it effectively reduces to the one-dimensional theory with Schwarzian action. First, let us consider the bulk part of the action~\eqref{eq:JT}:
\beq I_{bulk} = -\frac{1}{16 \pi G} \int d^2x \sqrt{h} \, \phi \left(R_h + 2 \right). \eeq
The equation of motion for the dilaton establishes the constraint $R_h + 2 = 0$, i.e. it simply says that the metric is that of $AdS_2$. This is true even if we add matter fields, because they are not directly coupled to the dilaton. The equations of motion for the metric are as follows:
\beq T_{ij}^\phi \equiv \frac{1}{8 \pi G} \left( \nabla_i \nabla_j \phi - h_{ij} \nabla^2 \phi + h_{ij} \phi \right) = 0, \eeq
which determines the behavior of the dilaton field:
\beq \phi = \frac{a + b t + c (t^2 + z^2)}{z}, \eeq
where $a,b,c$ are integration constants. Note that near the boundary the dilaton blows up:
\beq \label{eq:Dilaton}
\phi \big|_{bdy} \approx \frac{1}{\epsilon} \frac{a + b t(\tau) + c t^2(\tau)}{t'(\tau)} \equiv \frac{\phi_r(\tau)}{\epsilon}, \eeq
where we have used~\eqref{eq:AdS-3} and for convenience defined the ``renormalized'' boundary field $\phi_r(\tau)$. However, we assume that this large field is still smaller than the extremal value, $\frac{\phi_r}{\epsilon} \ll \Phi_0^2 \approx Q^2 l_p^2$ due to~\eqref{eq:extremal-3}.
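One can verify directly that this profile solves the dilaton equations of motion. The SymPy check below (a consistency check, not from the original text) evaluates $\nabla_i \nabla_j \phi - h_{ij} \nabla^2 \phi + h_{ij} \phi$ component by component, with the Christoffel symbols of the Poincar{\'e} metric hard-coded:

```python
# Check that phi = (a + b*t + c*(t^2 + z^2))/z solves
# nabla_i nabla_j phi - h_ij nabla^2 phi + h_ij phi = 0 in the Poincare
# metric h_ij = delta_ij / z^2 (nonzero Christoffels:
# Gamma^t_{tz} = -1/z, Gamma^z_{tt} = 1/z, Gamma^z_{zz} = -1/z).
import sympy as sp

t, z, a, b, c = sp.symbols('t z a b c', real=True)
phi = (a + b * t + c * (t**2 + z**2)) / z

phi_t, phi_z = sp.diff(phi, t), sp.diff(phi, z)
H_tt = sp.diff(phi, t, 2) - phi_z / z       # covariant Hessian components
H_tz = sp.diff(phi, t, z) + phi_t / z
H_zz = sp.diff(phi, z, 2) + phi_z / z
lap = z**2 * (H_tt + H_zz)                  # h^{ij} nabla_i nabla_j phi

h = 1 / z**2                                # h_tt = h_zz
assert sp.simplify(H_tt - h * lap + h * phi) == 0
assert sp.simplify(H_tz) == 0
assert sp.simplify(H_zz - h * lap + h * phi) == 0
```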
Now let us evaluate the boundary term. The tangent and normal vectors to the curve $\left( t(\tau), z(\tau) \right)$ in the Poincar{\'e} metric are $T^a = \bem t' \\ z' \eem$ and $n^a = \frac{z}{\sqrt{(t')^2 + (z')^2}} \bem -z' \\ t' \eem$, respectively. Hence, using~\eqref{eq:extremal-4} and~\eqref{eq:AdS-3} one readily obtains the trace of the extrinsic curvature:
\beq \label{eq:K}
\mathcal{K} = \frac{t'\left(t'^2+z'^2+z z''\right)-zz't''}{\left(t'^2 + z'^2\right)^{3/2}} = 1+ \epsilon^2 \text{Sch}\left[t(\tau),\tau\right] + \mathcal{O}(\epsilon^4). \eeq
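The $\epsilon$-expansion of $\mathcal{K}$ can be confirmed symbolically. In the snippet below (an illustrative check) we substitute $z = \epsilon t'$, $z' = \epsilon t''$, $z'' = \epsilon t'''$ and treat the derivatives of $t(\tau)$ as independent symbols $a, b, c$ (with $t' = a > 0$ assumed for simplicity):

```python
# Symbolic check of the expansion of the extrinsic curvature K.
# Substitute z = eps*t', z' = eps*t'', z'' = eps*t''' and treat the
# derivatives of t(tau) as independent symbols: t' = a (> 0), t'' = b, t''' = c.
import sympy as sp

eps = sp.Symbol('epsilon', positive=True)
a = sp.Symbol('a', positive=True)
b, c = sp.symbols('b c', real=True)

tp, tpp = a, b
z, zp, zpp = eps * a, eps * b, eps * c

K = (tp * (tp**2 + zp**2 + z * zpp) - z * zp * tpp) \
    / (tp**2 + zp**2)**sp.Rational(3, 2)
sch = c / a - sp.Rational(3, 2) * (b / a)**2   # Sch[t, tau] in these symbols

expansion = sp.series(K, eps, 0, 4).removeO()
assert sp.simplify(expansion - (1 + eps**2 * sch)) == 0
```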
Substituting~\eqref{eq:Dilaton} and~\eqref{eq:K} into the boundary part of the action~\eqref{eq:JT} and changing to the integration over the time on the boundary, we obtain the following action:
\beq I_{JT}^{min} = 0 + I_{bdy} = -\frac{1}{8 \pi G} \int_{bdy} ds \frac{\phi_r(\tau)}{\epsilon} \mathcal{K} = - \frac{1}{8 \pi G} \int_0^\beta \frac{d\tau}{\epsilon} \frac{\phi_r(\tau)}{\epsilon} \Big[ 1 + \epsilon^2 \text{Sch}\left[ t(\tau), \tau \right] + \mathcal{O} (\epsilon^4) \Big]. \eeq
The divergent term (of the order of $\mathcal{O}(\epsilon^{-2})$) contributes to the ground state energy of the theory and should be treated using the holographic renormalizations~\cite{Skenderis, deBoer, Akhmedov-RG}. This method as applied to 2D dilaton gravity was extensively studied in~\cite{Grumiller-2, Grumiller-3, Cvetic, Jensen}. Here we just assume that the divergent term can be omitted\footnote{We emphasize that the only safe way to get the correct action and observables is honest holographic renormalization, because the mentioned crude method is sometimes misleading~\cite{Grumiller-2, Davis}. However, for the theory~\eqref{eq:JT} this crude method gives the correct result. A thorough discussion of boundary conditions, boundary counterterms and derivation of the Schwarzian action in 2D dilaton gravity can be found in~\cite{Grumiller-2, Grumiller-3, Gonzalez, Grumiller-4, Grumiller-5}.}. Thus, in the leading order in $\epsilon$ we obtain the following action:
\beq I_{JT}^{min} \approx -\frac{1}{8 \pi G} \int_0^\beta d\tau \phi_r(\tau) \text{Sch}\left[ t(\tau), \tau \right]. \eeq
It is straightforward to check that the variation of this action over $t(\tau)$ reproduces the relation~\eqref{eq:Dilaton}.
Moreover, the time dependence of $\phi_r(\tau)$ can be removed by rescaling the boundary time. In order to do this we define a new coordinate $\tilde{\tau}$ such that $d\tilde{\tau} = \frac{\bar{\phi}_r d\tau}{\phi_r^2(\tau)}$, where $\bar{\phi}_r$ is some positive dimensionless constant (we remind that we consider dimensionless dilaton and Newton constant), and use the composition law for the Schwarzian\footnote{$\text{Sch}\left[f\left(g(\tau)\right), \tau\right] = (g')^2 \text{Sch}\left[f(g), g\right] + \text{Sch}[g,\tau]$.}:
\beq \label{eq:Sch-ac}
I_{bdy} \approx -\frac{\bar{\phi}_r}{8 \pi G} \int_0^{\tilde{\beta}} d\tilde{\tau} \text{Sch}\left[ t(\tilde{\tau}), \tilde{\tau}\right]. \eeq
The integral of the second term, $\phi_r \text{Sch}\left[\tilde{\tau},\tau\right] = -2 \phi_r''$, is zero due to the periodicity $\phi_r'(\tau+\beta) = \phi_r'(\tau)$ (the boundary curve is smooth and closed). So in what follows we consider constant boundary values of the dilaton without loss of generality.
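The composition law quoted in the footnote is easy to verify on a concrete pair of functions; the choices $f(x) = x^3$ and $g(\tau) = e^\tau$ below are arbitrary illustrative samples:

```python
# Check of the Schwarzian composition law
#   Sch[f(g(tau)), tau] = (g')^2 * Sch[f, g] + Sch[g, tau]
# on the sample pair f(x) = x**3, g(tau) = exp(tau) (arbitrary choices).
import sympy as sp

tau, x = sp.symbols('tau x', positive=True)

def sch(f, var):
    # Schwarzian derivative: f'''/f' - (3/2) * (f''/f')**2
    fp = sp.diff(f, var)
    return sp.diff(f, var, 3) / fp - sp.Rational(3, 2) * (sp.diff(f, var, 2) / fp)**2

f, g = x**3, sp.exp(tau)

lhs = sch(f.subs(x, g), tau)
rhs = sp.diff(g, tau)**2 * sch(f, x).subs(x, g) + sch(g, tau)
assert sp.simplify(lhs - rhs) == 0
```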
It is also convenient to change to the Rindler coordinates using the map $t(\tau) = \tan\frac{\varphi(\tau)}{2}$, which follows from the near-boundary limit ($z \rightarrow 0$) of the identities~\eqref{eq:AdS-2}:
\beq \text{Sch}\left[t, \tau\right] = \text{Sch}\left[\varphi, \tau\right] + \frac{(\varphi')^2}{2}. \eeq
Varying the corresponding action wrt $\varphi$ we obtain the following equation of motion:
\beq \label{eq:Sch-eom}
\frac{\text{Sch}\left[\varphi,\tau\right]'}{\varphi'} - \varphi'' = 0, \eeq
which has a linear in time solution:
\beq \label{eq:Sch-sln}
\varphi(\tau) = \frac{2 \pi \tau}{\beta}. \eeq
We choose the coefficient of the linear dependence in such a way that the Rindler time is periodic with the period $2 \pi$, $\varphi \sim \varphi + 2\pi$. This solution can be associated with the boundary theory at the inverse temperature $\beta$. In what follows we will consider excitations over this solution. For convenience we set $\beta = 2 \pi$.
Note that the equation~\eqref{eq:Sch-eom} is a fourth-order non-linear differential equation that potentially has many sophisticated solutions. We do not know all of them. As a consequence, we cannot explicitly check whether the solution~\eqref{eq:Sch-sln} is the true minimum of the action~\eqref{eq:Sch-ac} or not. However, we expect the latter to be true on physical grounds.
Finally, let us consider fluctuations of the boundary curve near the minimal solution~\eqref{eq:Sch-sln}:
\beq \label{eq:Sch-flu}
\varphi(\tau) \approx \tau + \delta\varphi(\tau). \eeq
As in the SYK model (see subsection~\ref{sec:4p-S}) we find the effective action for such fluctuations:
\beq \label{eq:Sch-app}
I_S = -\frac{\bar{\phi}_r}{16 \pi G} \int_0^{2\pi} d\tau \left[ (\delta\varphi')^2 - (\delta\varphi'')^2 \right] + \mathcal{O}\left(\delta\varphi^3\right), \eeq
and estimate their correlation function (compare with~\eqref{eq:4p-S-6}):
\beq \label{eq:Sch-G}
\langle \delta\varphi(\tau) \delta\varphi(0)\rangle_S \approx \frac{4 G}{\bar{\phi}_r} \sum_{m\ne-1,0,1} \frac{e^{i m \tau}}{m^2 (m^2 - 1)} = \frac{4 G}{\bar{\phi}_r} \left[ - \frac{\left(|\tau|-\pi\right)^2}{2} + \left(|\tau|-\pi\right)\sin|\tau| + 1 + \frac{\pi^2}{6} + \frac{5}{2} \cos|\tau| \right]. \eeq
Note that we have excluded the modes that correspond to the $SL(2,\mathbb{R})$ translations and rotations, because they are not dynamical. We will need this expression to evaluate the corrections to the correlators in the boundary theory (subsection~\ref{sec:AdS-4p}).
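The resummation of the Fourier series in~\eqref{eq:Sch-G} can be checked numerically (up to the overall factor $4G/\bar{\phi}_r$); the truncation order below is an arbitrary choice:

```python
# Numerical check of the resummed mode expansion in eq. (Sch-G),
# up to the overall factor 4G/phi_r. The series converges as 1/m^4.
import math

def closed_form(tau):
    a = abs(tau)
    return (-(a - math.pi)**2 / 2 + (a - math.pi) * math.sin(a)
            + 1 + math.pi**2 / 6 + 2.5 * math.cos(a))

def mode_sum(tau, cutoff=4000):
    # sum over m != -1, 0, 1; the +/- m terms pair into 2*cos(m*tau)
    return sum(2 * math.cos(m * tau) / (m**2 * (m**2 - 1))
               for m in range(2, cutoff + 1))

for tau in (0.0, 0.5, 1.0, 2.5):
    assert abs(closed_form(tau) - mode_sum(tau)) < 1e-9
```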
\subsection{Matter fields}
\label{sec:AdS-matter}
Let us add matter fields to the theory~\eqref{eq:JT}. The simplest action would be:
\beq I_{m} = \frac{1}{2} \int d^2x \sqrt{h} \left[ h^{ab} \partial_a \xi \partial_b \xi + m^2 \xi^2 \right]. \eeq
The solution to the corresponding equation of motion which is finite in the bulk but divergent in the limit $z \rightarrow 0$ is as follows:
\beq \label{eq:matter-1}
\xi(t,z) = z^{1-\Delta} \xi_r(t) + \cdots, \quad \text{where} \quad \Delta = \frac{1}{2} + \sqrt{\frac{1}{4} + m^2}, \eeq
and $\xi_r(t)$ is the boundary value of the field $\xi(t,z)$; the function $\xi_r(t)$ unambiguously determines the field $\xi(t,z)$ if it is finite in the bulk. We have denoted the subleading contribution in the limit $z \rightarrow 0$ as ``$\cdots$''. According to the $AdS/CFT$ prescription~\cite{Maldacena-AdS,Gubser,Aharony,Witten-AdS}, the function $\xi_r(t)$ can be interpreted as the source for the operator with the conformal dimension $\Delta$. Hence, the effective theory for matter fields which propagate in $AdS_2$\footnote{We remind that matter fields do not affect the constraint $R_h + 2 =0$, see the beginning of the subsection~\ref{sec:AdS-Sch}.} and satisfy boundary conditions~\eqref{eq:matter-1} is as follows (for the derivation see e.g.~\cite{Freedman}):
\beq \label{eq:matter-2}
I_{m-bdy} = - D \int dt dt' \frac{\xi_r(t) \xi_r(t')}{|t - t'|^{2\Delta}}, \quad \text{where} \quad D = \frac{\left(\Delta - \frac{1}{2}\right) \Gamma(\Delta)}{\sqrt{\pi} \Gamma\left(\Delta - \frac{1}{2} \right)}. \eeq
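The exponent in~\eqref{eq:matter-1} can be cross-checked against the indicial equation of the wave operator: for $\xi = z^\alpha f(t)$ and $\Box = z^2(\partial_t^2 + \partial_z^2)$, the leading small-$z$ term of $(-\Box + m^2)\xi$ is proportional to $m^2 - \alpha(\alpha - 1)$, which vanishes for both falloffs:

```python
# Indicial-equation check for the falloff in eq. (matter-1): for
# xi = z**alpha * f(t) the leading small-z term of (-Box + m^2) xi is
# proportional to m^2 - alpha*(alpha - 1), which vanishes for both roots.
import sympy as sp

m = sp.Symbol('m', positive=True)
Delta = sp.Rational(1, 2) + sp.sqrt(sp.Rational(1, 4) + m**2)

for alpha in (1 - Delta, Delta):   # source and response falloffs
    assert sp.simplify(alpha * (alpha - 1) - m**2) == 0
```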
This action implicitly depends on the form of the boundary curve. In order to reveal this dependence we use~\eqref{eq:AdS-3} and rewrite the boundary condition in terms of the time on the boundary:
\beq \xi(t,z) \approx z^{1-\Delta} \xi_r(t) = \epsilon^{1-\Delta} \left[t'(\tau)\right]^{1-\Delta} \xi_r\left[t(\tau)\right] = \epsilon^{1-\Delta} \xi_r(\tau), \eeq
where we have introduced the ``renormalized'' field $\xi_r(\tau) \equiv \left[t'(\tau)\right]^{1-\Delta} \xi_r\left[t(\tau)\right]$. Substituting this definition into the action~\eqref{eq:matter-2} we obtain:
\beq I_{m-bdy} = - D \int d\tau d\tau' \left[ \frac{t'(\tau) t'(\tau')}{\left( t(\tau) - t(\tau') \right)^2} \right]^{\Delta} \xi_r(\tau) \xi_r(\tau'). \eeq
Thus, in the quasiclassical limit $G \rightarrow 0$ the boundary partition function with the source $\xi_r(\tau)$ looks as follows:
\beq \label{eq:matter-3}
Z\left[\xi_r(\tau)\right] = e^{ - I_0 - I_{Sch} - I_{m-bdy}}, \eeq
where $I_0$ denotes the ground state free energy. This term is naively divergent (in particular, it includes the divergent term which we have obtained in subsection~\ref{sec:AdS-Sch}), so it should be renormalized~\cite{Sarosi,Cvetic,Jensen}. However, it does not depend on the shape of the boundary and we just omit it in what follows.
Moreover, in the limit $G \rightarrow 0$ the contribution of the matter term is negligible (at least if $\Delta$ grows slower than $G^{-2/3}$, see~\cite{Sarosi,Maldacena-JT}), so the partition function~\eqref{eq:matter-3} is saturated at the extremum of the Schwarzian action. This limit corresponds to the large $N$ limit in the dual boundary CFT. Hence, the two-point correlation function of the operators in the dual theory in the leading order is as follows:
\beq \label{eq:AdS-2p}
\langle V(\tau) V(\tau') \rangle = \frac{1}{Z\left[\xi_r\right]} \frac{\partial^2 Z\left[ \xi_r \right]}{\partial \xi_r(\tau) \partial \xi_r(\tau')} \Bigg|_{\xi_r = 0} = \left[ \frac{t'(\tau) t'(\tau')}{\left( t(\tau) - t(\tau') \right)^2} \right]^{\Delta} = \frac{1}{\left(2 \sin\frac{\tau - \tau'}{2}\right)^{2\Delta}}, \eeq
where we substituted the saddle point solution~\eqref{eq:Sch-sln} and set $\beta = 2\pi$. Here the operator $V(\tau)$ is conjugate to $\xi_r(\tau)$ according to the $AdS/CFT$ dictionary. Of course, this argument also holds for higher-point correlation functions.
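The last equality in~\eqref{eq:AdS-2p} follows from a trigonometric identity, verified here in SymPy (shown for a single power, i.e. before raising to $\Delta$):

```python
# Check of the last equality in eq. (AdS-2p): on the saddle t(tau) = tan(tau/2)
# the bilocal factor collapses to 1/(2*sin((tau1 - tau2)/2))^2
# (a single power; raising to Delta is immediate).
import sympy as sp

tau1, tau2, x = sp.symbols('tau1 tau2 x', real=True)
t = sp.tan(x / 2)
tp = sp.diff(t, x)

ratio = (tp.subs(x, tau1) * tp.subs(x, tau2)
         / (t.subs(x, tau1) - t.subs(x, tau2))**2)
target = 1 / (2 * sp.sin((tau1 - tau2) / 2))**2
assert sp.simplify(ratio - target) == 0
```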
There are two possible types of corrections to this expression. The first one is the corrections due to interactions in the bulk, including the interaction between matter fields and the backreaction on the shape of the boundary. The second one is ``quantum gravity'' loop corrections due to the fluctuations of $t(\tau)$ and $\xi(t,z)$ near their classical values (we remind that for finite $G$ the right hand side of~\eqref{eq:matter-3} is the functional integral over the bulk fields). In the limit $G \rightarrow 0$ the leading corrections come from the fluctuations of the boundary shape~\eqref{eq:Sch-flu}. In the next subsection we evaluate the contribution of such fluctuations to four-point correlation functions.
\subsection{Four-point correlation function, TOC and OTOC}
\label{sec:AdS-4p}
Following~\cite{Maldacena-JT,Jensen}, in this subsection we evaluate the first ``quantum gravity'' correction to the four-point function in the ``nearly $AdS_2$'' theory. The calculations here are very similar to those that we have already discussed for the SYK model in section~\ref{sec:treatment}. As in the SYK model, it is convenient to define the connected part of the four-point function:
\beq \label{eq:AdS-4p-0}
\mathcal{F}(\tau_1, \tau_2, \tau_3, \tau_4) \equiv \left\langle V(\tau_1) V(\tau_2) W(\tau_3) W(\tau_4) \right\rangle - \left\langle V(\tau_1) V(\tau_2) \right\rangle \left\langle W(\tau_3) W(\tau_4) \right\rangle. \eeq
For simplicity we consider operators $V$ and $W$ that have the same conformal dimension~$\Delta$ and are dual to different free fields in the bulk. First, this allows us to avoid cross-channels. Second, two-point correlation functions of such operators rapidly decay under the evolution in the Lorentzian time: $\langle V(\tau_1 + it) W(\tau_2) \rangle \sim e^{- \frac{2 \pi \Delta}{\beta} t} \approx 0$ for $ t \gg \beta$ (here we restored $\beta$ in~\eqref{eq:AdS-2p}). We will need this property to evaluate OTOC and TOC.
Let us find the first order in $G$ correction to the function $\mathcal{F}$. To do this, we consider small fluctuations\footnote{Due to the action~\eqref{eq:Sch-app} such fluctuations are of the order $\delta \varphi \sim \sqrt{G/\bar{\phi}_r}$.} on top of the ``classical'' boundary curve:
\beq t(\tau) = \tan \frac{\varphi(\tau)}{2} = \tan\frac{\tau + \delta \varphi(\tau)}{2}, \eeq
and expand the two-point function~\eqref{eq:AdS-2p} to the third order in $\delta \varphi$:
\beq \left[ \frac{t'(\tau) t'(\tau')}{\left( t(\tau) - t(\tau') \right)^2} \right]^{\Delta} = \frac{1}{\left(2 \sin\frac{\tau - \tau'}{2}\right)^{2\Delta}} \left[ 1 + \mathcal{B}(\tau, \tau') + \mathcal{C}(\tau, \tau') + \mathcal{O}\left(\delta \varphi^3\right) \right], \eeq
where
\beq \begin{aligned}
\mathcal{B}(\tau_1, \tau_2) &= \Delta \left( \delta \varphi'(\tau_1) + \delta \varphi'(\tau_2) - \frac{\delta \varphi(\tau_1) - \delta \varphi(\tau_2)}{\tan \frac{\tau_{12}}{2}} \right), \\
\mathcal{C}(\tau_1, \tau_2) &= \frac{\Delta}{\left( 2 \sin \frac{\tau_{12}}{2} \right)^2} \Big[ \left(1 + \Delta + \Delta \cos \tau_{12} \right) \left( \delta \varphi(\tau_1) - \delta \varphi(\tau_2) \right)^2 + \\ &\phantom{=\frac{\Delta}{\left( 2 \sin \frac{\tau_{12}}{2} \right)^2} \Big[}+ 2 \Delta \sin \tau_{12} \left( \delta \varphi(\tau_1) - \delta \varphi(\tau_2) \right) \left( \delta \varphi'(\tau_1) + \delta \varphi'(\tau_2) \right) - \\ &\phantom{=\frac{\Delta}{\left( 2 \sin \frac{\tau_{12}}{2} \right)^2} \Big[}- \left( \cos \tau_{12} - 1 \right) \left( \Delta \left( \delta \varphi'(\tau_1) + \delta \varphi'(\tau_2) \right)^2 - \delta \varphi'(\tau_1)^2 - \delta \varphi'(\tau_2)^2 \right) \Big].
\end{aligned} \eeq
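The linear term $\mathcal{B}$ can be verified with SymPy. Writing $\delta\varphi(\tau_i) = \epsilon u_i$ and $\delta\varphi'(\tau_i) = \epsilon v_i$ and expanding the logarithm of the bilocal (whose $\mathcal{O}(\epsilon)$ coefficient coincides with that of the bilocal itself), one recovers the expression above:

```python
# Check of the first-order term B(tau1, tau2): write dphi(tau_i) = eps*u_i,
# dphi'(tau_i) = eps*v_i; the O(eps) coefficient of the bilocal equals the
# O(eps) coefficient of its logarithm, which is easier to expand.
import sympy as sp

eps = sp.Symbol('epsilon', positive=True)
Delta = sp.Symbol('Delta', positive=True)
t12 = sp.Symbol('tau12', real=True)
u1, u2, v1, v2 = sp.symbols('u1 u2 v1 v2', real=True)

logG = Delta * (sp.log((1 + eps * v1) * (1 + eps * v2))
                - sp.log((2 * sp.sin((t12 + eps * (u1 - u2)) / 2))**2)
                + sp.log((2 * sp.sin(t12 / 2))**2))

B = Delta * (v1 + v2 - (u1 - u2) / sp.tan(t12 / 2))
first = sp.series(logG, eps, 0, 2).removeO().coeff(eps, 1)
assert sp.simplify(first - B) == 0
```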
Here we denoted $\tau_{12} = \tau_1 - \tau_2$. Using this expansion we average the generating functional~\eqref{eq:matter-3} over the fluctuations of the boundary shape and find the effective action:
\beq \begin{aligned}
-I_{eff}[\xi_V, \xi_W] &= \log\left\langle e^{-I_{m-bdy}[\xi_V] - I_{m-bdy}[\xi_W]}\right\rangle_S = \\ &= D \int d\tau_1 d\tau_2 \Big[ 1 + \left\langle \mathcal{C}(\tau_1, \tau_2) \right\rangle_S \Big] \frac{\xi_V(\tau_1) \xi_V(\tau_2) + (\xi_V \leftrightarrow \xi_W)}{\left(2 \sin\frac{\tau_1 - \tau_2}{2}\right)^{2\Delta}} + \\ &+ \frac{D^2}{2} \int d\tau_1 d\tau_2 d\tau_3 d\tau_4 \left\langle \mathcal{B}(\tau_1, \tau_2) \mathcal{B}(\tau_3, \tau_4) \right\rangle_S \frac{\xi_V(\tau_1) \xi_V(\tau_2) \xi_V(\tau_3) \xi_V(\tau_4) + (\xi_V \leftrightarrow \xi_W)}{\left(2 \sin\frac{\tau_1 - \tau_2}{2}\right)^{2\Delta} \left(2 \sin\frac{\tau_3 - \tau_4}{2}\right)^{2\Delta}} + \\ &+ D^2 \int d\tau_1 d\tau_2 d\tau_3 d\tau_4 \left\langle \mathcal{B}(\tau_1, \tau_2) \mathcal{B}(\tau_3, \tau_4) \right\rangle_S \frac{\xi_V(\tau_1) \xi_V(\tau_2) \xi_W(\tau_3) \xi_W(\tau_4)}{\left(2 \sin\frac{\tau_1 - \tau_2}{2}\right)^{2\Delta} \left(2 \sin\frac{\tau_3 - \tau_4}{2}\right)^{2\Delta}} + \mathcal{O}(G^2),
\end{aligned} \eeq
where the sources $\xi_V$, $\xi_W$ are dual to the operators $V$, $W$ correspondingly and $\langle \cdots \rangle_S$ denotes the averaging over the linearized Schwarzian action~\eqref{eq:Sch-app}:
\beq \langle O \rangle_S \equiv \frac{\int \mathcal{D} \delta \varphi \, O e^{-I_{Sch}[\delta \varphi]}}{\int \mathcal{D} \delta \varphi \, e^{-I_{Sch}[\delta \varphi]}}. \eeq
Note that $\langle \mathcal{B}(\tau_1, \tau_2)\rangle_S = 0$, because $\mathcal{B}$ is linear in $\delta \varphi$. Differentiating the effective generating functional, we find the connected four-point function:
\beq \mathcal{F}(\tau_1, \tau_2, \tau_3, \tau_4) = \frac{\partial^4 e^{-I_{eff}[\xi_V, \xi_W]}}{\partial \xi_V(\tau_1) \partial \xi_V(\tau_2) \partial \xi_W(\tau_3) \partial \xi_W(\tau_4)} \Big|_{\xi_V = 0, \; \xi_W = 0} = \frac{\langle \mathcal{B}(\tau_1, \tau_2) \mathcal{B}(\tau_3, \tau_4) \rangle_S}{\left(2 \sin\frac{\tau_1 - \tau_2}{2}\right)^{2\Delta} \left(2 \sin\frac{\tau_3 - \tau_4}{2}\right)^{2\Delta}}. \eeq
Thus, we need to calculate the expectation value of the product of two $\mathcal{B}$s. Using the propagator~\eqref{eq:Sch-G} and taking into account that
\beq \langle \delta \varphi'(\tau_1) \delta \varphi(\tau_2) \rangle_S = \text{sgn} \left( \tau_1 - \tau_2 \right) \langle \delta \varphi(\tau_1) \delta \varphi(\tau_2) \rangle'_S, \quad \langle \delta \varphi'(\tau_1) \delta \varphi'(\tau_2) \rangle_S = \langle \delta \varphi(\tau_1) \delta \varphi(\tau_2) \rangle''_S, \eeq
we find that this average significantly depends on the order of the Euclidean times due to sign factors. As in the SYK model, there are two essentially different orderings (expressions for other orderings follow from the symmetries of $\mathcal{F}$ discussed in section~\ref{sec:treatment}):
\beq \begin{aligned}
\text{OPE:} \quad 2 \pi > \tau_1 > \tau_2 > \tau_3 > \tau_4 > 0, \\
\text{OTO:} \quad 2 \pi > \tau_1 > \tau_3 > \tau_2 > \tau_4 > 0.
\end{aligned} \eeq
For the first type of ordering the connected four-point function is as follows:
\beq \label{eq:AdS-4p-1}
\frac{\mathcal{F}(\tau_1, \tau_2, \tau_3, \tau_4)}{G(\tau_1, \tau_2) G(\tau_3, \tau_4)} = \frac{16 G \Delta^2}{\bar{\phi}_r} \left( \frac{\tau_{12}}{2 \tan \frac{\tau_{12}}{2}} - 1 \right) \left( \frac{\tau_{34}}{2 \tan \frac{\tau_{34}}{2}} - 1 \right) + \mathcal{O}(G^2). \eeq
Here $G(\tau_1, \tau_2)$ denotes the tree-level two-point function~\eqref{eq:AdS-2p} of the operators $V$ and $W$. For the second type of ordering the expression for the connected four-point function is more cumbersome:
\beq \label{eq:AdS-4p-2}
\begin{aligned}
\frac{\mathcal{F}(\tau_1, \tau_2, \tau_3, \tau_4)}{G(\tau_1, \tau_2) G(\tau_3, \tau_4)} &= \frac{16 G \Delta^2}{\bar{\phi}_r} \left( \frac{\tau_{12}}{2 \tan \frac{\tau_{12}}{2}} - 1 \right) \left( \frac{\tau_{34}}{2 \tan \frac{\tau_{34}}{2}} - 1 \right) + \\ &+ \frac{8 \pi G \Delta^2}{\bar{\phi}_r} \left( \frac{\sin\frac{\tau_{12} + \tau_{34}}{2} - \sin\frac{\tau_{13}+\tau_{24}}{2}}{\sin\frac{\tau_{12}}{2} \sin\frac{\tau_{34}}{2}} + \frac{\tau_{23}}{\tan\frac{\tau_{12}}{2} \tan\frac{\tau_{34}}{2}} \right) + \mathcal{O}(G^2).
\end{aligned} \eeq
In a similar way we also find the $\mathcal{O}(G)$ correction to the two-point functions:
\beq \label{eq:AdS-4p-3}
\begin{aligned}
\frac{\langle V(\tau_1) V(\tau_2) \rangle}{G(\tau_1, \tau_2)} = 1 &+ \frac{G \Delta}{\bar{\phi}_r} \frac{1}{\left(\sin \frac{\tau_{12}}{2}\right)^2} \Big[ 2 + 4 \Delta + \tau_{12} (\tau_{12} - 2 \pi) (\Delta + 1) + \\ &+ \left( \Delta \tau_{12} (\tau_{12} - 2 \pi) - 4 \Delta - 2 \right) \cos \tau_{12} + 2 ( \pi - \tau_{12}) (2 \Delta + 1) \sin \tau_{12} \Big] + \mathcal{O}(G^2).
\end{aligned} \eeq
The correction to the $\langle W W \rangle$ correlator is the same.
Finally, we restore $\beta$, substitute appropriate Euclidean times into the correlator~\eqref{eq:AdS-4p-0} and analytically continue~\eqref{eq:AdS-4p-1} and~\eqref{eq:AdS-4p-2} to non-zero Lorentzian times to obtain TOC and OTOC. For OTOC we consider the following set of complex times:
\beq \tau_1 = \frac{\beta}{4} + it, \quad \tau_2 = -\frac{\beta}{4} + it, \quad \tau_3 = 0, \quad \tau_4 = -\frac{\beta}{2}, \eeq
In the pure imaginary case ($t = 0$) this choice corresponds to the OTO ordering, so we need to analytically continue~\eqref{eq:AdS-4p-2}:
\beq \begin{aligned}
\text{OTOC}(t) &\equiv \text{tr}\left[\rho^{\frac{1}{4}} V(t) \rho^{\frac{1}{4}} W(0) \rho^{\frac{1}{4}} V(t) \rho^{\frac{1}{4}} W(0) \right] = \\ &= \mathcal{F}\left(\frac{\beta}{4} + it, -\frac{\beta}{4} + it, 0, -\frac{\beta}{2} \right) + \left\langle V\left(\frac{\beta}{2}\right) V\left(0 \right) \right\rangle \left\langle W\left(\frac{\beta}{2} \right) W\left(0 \right) \right\rangle \approx \\ &\approx \left( \frac{\pi}{\beta} \right)^{4 \Delta} \left[ 1 - 2 \Delta^2 \frac{\beta G}{\bar{\phi}_r} e^{\frac{2 \pi t}{\beta}} \right], \quad \text{for} \quad \beta \ll t \ll \beta \log \frac{\bar{\phi}_r}{\beta G}.
\end{aligned} \eeq
Here $\rho$ denotes the density matrix in the corresponding boundary CFT. Note that we neglect the $\mathcal{O}(G)$ contributions from~\eqref{eq:AdS-4p-2} and~\eqref{eq:AdS-4p-3} which do not grow with $t$. We demand $t \gg \beta$ to exclude a possible contribution from ``mixed'' correlators of the form $\langle V W \rangle$, which decay at such times. Also note that the Gaussian approximation that we used to obtain this result works well only for relatively small times, i.e. until the decay of the OTOC saturates. For larger times this correlator should be calculated more carefully.
For TOC we consider a different set of times:
\beq \tau_1 = \frac{\beta}{2} + it, \quad \tau_2 = it, \quad \tau_3 = 0, \quad \tau_4 = -\frac{\beta}{2}, \eeq
which corresponds to the OPE ordering at the beginning of the Lorentzian time evolution, $t = 0$. Thus we analytically continue the correlator~\eqref{eq:AdS-4p-1} and obtain the following expression:
\beq \text{TOC}(t) \equiv \text{tr}\left[ V(t) \rho^{\frac{1}{2}} V(t) W(0) \rho^{\frac{1}{2}} W(0) \right] \approx \left( \frac{\pi}{\beta} \right)^{4\Delta} \left[ 1 + \text{const} \frac{G}{\bar{\phi}_r} \right], \eeq
which weakly deviates from the tree-level value even for large evolution times.
\section{Examples of chaotic behavior (instead of conclusion)}
\label{sec:examples}
Instead of a conclusion, let us briefly review the most notable examples of chaotic systems, i.e. models with exponentially growing $C(t)$ and rapidly decaying OTOC. All these models are considered in the quasiclassical limit (large $N$ or small $G$ limit) and somehow model all-to-all couplings; furthermore, only small fluctuations above the equilibrium state are considered. Hence, the calculation of the correlation functions is similar in all cases. In particular, in these models the leading contribution to OTOC is ensured by ladder diagrams.
\subsection{SYK model / 2D dilaton gravity}
First of all, let us briefly recall the main properties of SYK model. This is a quantum mechanical model of $N \gg 1$ Majorana fermions with all-to-all couplings $J_{ijkl}$, which are distributed randomly and independently, i.e. according to the Gaussian distribution with the average square $\overline{J_{ijkl}^2} = \frac{3! J^2}{N^3}$ (no sum assumed). Such a choice of couplings allows one to introduce a kind of $\frac{1}{N}$ expansion for the disorder averaged correlation functions. In particular, disorder averaged corrections to two-point and four-point functions are determined by the so-called ``melonic'' (Fig.~\ref{fig:melonic-1}) and ``ladder'' (Fig.~\ref{fig:ladder}) diagrams.
Using this diagrammatic technique, one finds that in the limit $1 \ll \beta J \ll N$, which corresponds to a small but non-zero temperature ($T = 1/\beta$), the exact two-point correlation function exponentially decays in Lorentzian time:
\beq G_c^\beta(t) \approx \frac{\pi^{\frac{1}{4}}}{\sqrt{2 \beta J}} \frac{\text{sgn}(t)}{\big| \sinh\frac{\pi t}{\beta} \big|^{\frac{1}{2}}} \sim e^{-t/t_d}, \quad \text{for} \quad t \gg t_d = \frac{2 \beta}{\pi}, \eeq
the time-ordered correlator is approximately equal to the product of two-point functions:
\beq \text{TOC}(t) \approx G_c^\beta\left(-\frac{i \beta}{2}\right) G_c^\beta\left(-\frac{i \beta}{2}\right) \approx \frac{\sqrt{\pi}}{2 \beta J}, \quad \text{for} \quad t \gg t_d, \eeq
and the out-of-time-ordered correlator rapidly decays:
\beq \text{OTOC}(t) \approx \frac{\sqrt{\pi}}{2 \beta J}\left[1 - \frac{\Delta^2}{2 C} \frac{\beta J}{N} e^{\kappa t} \right], \quad \text{for} \quad t_d \ll t \ll t_* = \beta \log\frac{N}{\beta J}, \eeq
where $C$ is some positive numerical constant, $\Delta = \frac{1}{4}$ is the effective conformal dimension of the fermions and $\kappa \approx \frac{2 \pi}{\beta} \left(1 - \frac{6.05}{\beta J} + \cdots \right)$ is the Lyapunov exponent. Thus, the expectation value of the square of the commutator grows exponentially:
\beq \begin{aligned}
C(t) &= 2 \times \text{TOC}(t) - \text{OTOC}\left(t - \frac{i \beta}{4} \right) - \text{OTOC}\left(t + \frac{i \beta}{4} \right) \approx \\ &\approx \frac{\text{const}}{N} 2 \cos\left(\frac{\beta \kappa}{4}\right) e^{\kappa t} \approx \frac{\text{const}}{N} \frac{6 \pi}{\beta J} e^{\kappa t}.
\end{aligned} \eeq
Note that the prefactor of the growing exponent is non-zero because $\kappa$ is not exactly equal to the maximal value $\frac{2 \pi}{\beta}$.
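The algebra behind this combination is a one-liner in SymPy: with $\text{TOC} = P$ and $\text{OTOC}(t) = P\left(1 - A e^{\kappa t}\right)$ (schematic placeholders for the prefactors quoted above), the shifted sum indeed produces the $2\cos(\beta\kappa/4)$ prefactor:

```python
# Algebra of the commutator combination: with TOC = P and
# OTOC(t) = P*(1 - A*exp(kappa*t)) (schematic placeholder prefactors),
# C(t) = 2*TOC - OTOC(t - i*beta/4) - OTOC(t + i*beta/4)
# grows as 2*A*P*cos(beta*kappa/4)*exp(kappa*t).
import sympy as sp

t, beta, kappa, A, P = sp.symbols('t beta kappa A P', positive=True)

def otoc(x):
    return P * (1 - A * sp.exp(kappa * x))

C = 2 * P - otoc(t - sp.I * beta / 4) - otoc(t + sp.I * beta / 4)
expected = 2 * A * P * sp.cos(kappa * beta / 4) * sp.exp(kappa * t)
assert sp.simplify(sp.expand_complex(C - expected)) == 0
```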
One can find the detailed derivation of these identities in sections~\ref{sec:basics} and~\ref{sec:treatment} of the present paper, in the papers~\cite{Kitaev, Polchinski, Maldacena-SYK, Sarosi, Jevicki-1, Jevicki-2, Rosenhaus-1807} and in the talks~\cite{Kitaev-talks}.
It is worth stressing that a purely bosonic analog of SYK model:
\beq I = \int d\tau \left[ \frac{1}{2} \sum_{i=1}^N \left(\frac{d\phi^i}{d\tau}\right)^2 + \sum_{i,j,k,l=1}^N J_{ijkl} \phi^i \phi^j \phi^k \phi^l \right], \eeq
is not self-consistent; in particular, it has no reasonable exact solution~\cite{SUSY-SYK}. At the same time, supersymmetric analogs of SYK model are well defined~\cite{SUSY-SYK,Fu}.
SYK model is also closely related to Jackiw--Teitelboim (JT) gravity, i.e. two-dimensional ``near-$AdS_2$'' gravity with dilaton~\cite{Almheiri,Maldacena-JT,Jensen}. It can be shown that this theory is effectively one-dimensional, since its dynamics is determined by the shape of the boundary curve. Furthermore, in the IR limit the effective action of this theory exactly coincides with the effective action of SYK model. In both cases this action appears due to the symmetry wrt $SL(2,\mathbb{R})$ transformations. Therefore it is not surprising that in the semiclassical limit the behavior of correlation functions in JT gravity is similar to that of the corresponding quantities in SYK model:
\begin{align}
G(t) &\approx \left(\frac{\pi}{\beta \sinh\frac{\pi t}{\beta}}\right)^{2 \Delta} \sim e^{-t/t_d}, \quad &&\text{for} \quad t \gg t_d = \frac{\beta}{2 \pi \Delta}, \\
\text{TOC}(t) &\approx \left( \frac{\pi}{\beta} \right)^{4 \Delta}, \quad &&\text{for} \quad t \gg t_d , \\
\text{OTOC}(t) &\approx \left( \frac{\pi}{\beta} \right)^{4 \Delta} \left[ 1 - 2 \Delta^2 \frac{\beta G}{\bar{\phi}_r} e^{\kappa t} \right], \quad &&\text{for} \quad t_d \ll t \ll t_* = \beta \log \frac{\bar{\phi}_r}{\beta G},
\end{align}
where $\Delta$ is the conformal dimension of the operators dual to free matter fields in the bulk, $G$ is the 2D Newton constant, $\bar{\phi}_r$ is the boundary value of the dilaton and $\kappa \approx \frac{2 \pi}{\beta}$ is the Lyapunov exponent.
The details on the derivation of the correlation functions and other properties of 2D dilaton gravity can be found in section~\ref{sec:JT}, papers~\cite{Maldacena-JT,Jensen,Almheiri,Engelsoy,Sarosi} and talks~\cite{Kitaev-talks}.
Note that JT gravity can be derived as a near-horizon limit of an extremal black hole~\cite{Nayak,Kolekar}, and $AdS_2$ space exhibits the same causal properties as higher-dimensional $AdS$ black holes. This opens a way to use JT gravity and SYK model as toy models of many complex black hole phenomena, e.g. of traversable wormholes~\cite{Maldacena-1704, Maldacena-1804, Maldacena-1807, Maldacena-1912}.
However, it is worth stressing that JT gravity incorporates only the lowest-energy features of the SYK model (which are described by the Schwarzian action) and hence cannot be considered a complete gravity dual of this model. In fact, at the present moment such a dual is far from being known. The main problem is that the complete gravity dual should reproduce the non-local action~\eqref{eq:effective-3} that describes the dynamics of the bilinear fields $G$ and $\Sigma$. This requires coupling the theory to an infinite number of massive bulk fields (each with $\mathcal{O}(1)$ mass), but it is not known how to do this. A more detailed discussion of the putative SYK gravity dual can be found in~\cite{Sarosi,Gross,Gross-1710}.
\subsection{Generalizations of SYK model}
\label{sec:tensor}
All the remarkable properties of the SYK model, including solvability in the large $N$ limit, emergence of conformal symmetry in the IR and saturation of the ``bound on chaos'', are based on the averaging of correlation functions over the quenched disorder, i.e. over random realizations of the coupling constants. This means that the SYK model is not really a quantum mechanical model; in particular, one cannot find a unitary operator that generates time evolution in this model. Thus, generalizations of the SYK model, which mimic it in the large $N$ limit without quenched disorder, are of great interest. Here we present three examples of such models.
The first example is Gurau--Witten model proposed in~\cite{Witten-1610,Gurau-1611}:
\beq \label{eq:GW}
I_{GW} = \int_0^\beta d\tau \left[\frac{1}{2} \sum_{c=0}^3 \left(\sum_{\mathbf{a}^{c}} \chi_{\mathbf{a}^{c}}^{c} \frac{d}{d\tau} \chi_{\mathbf{a}^{c}}^{c}\right) + \frac{J}{N^{3/2}} \sum_{\mathbf{a}^{0} \mathbf{a}^{1} \mathbf{a}^{2} \mathbf{a}^{3}} \chi_{\mathbf{a}^{0}}^{0} \chi_{\mathbf{a}^{1}}^{1} \chi_{\mathbf{a}^{2}}^{2} \chi_{\mathbf{a}^{3}}^{3} \prod_{c_{1}<c_{2}} \delta_{a^{c_1 c_2} a^{c_2 c_1}}\right], \eeq
where $\chi^c$ are real fermionic fields and $\tau$ is Euclidean time. For every color $c$, the field $\chi^c$ lives in a vector representation of $O(N)^3$, i.e. it is a rank-three tensor with indices $\textbf{a}^c =\left \{ a^{c d}, d \ne c \right\}$, each of which runs in the range $1 \ldots N$. The full symmetry group of the model is $O(N)^6$. For simplicity we present only the model with a four-fermion vertex; general expressions can be found in~\cite{Witten-1610,Gurau-1611}.
The second example is the uncolored fermionic tensor model, or Klebanov--Tarnopolsky model~\cite{Klebanov-1,Klebanov-2,Klebanov-3}:
\beq \label{eq:KT}
I_{KT} = \int_0^\beta d\tau \left[ \frac{i}{2} \sum_{abc} \chi^{abc} \frac{d}{d\tau} \chi^{abc} - \frac{g}{4} \sum_{a_1 a_2 b_1 b_2 c_1 c_2} \chi^{a_1 b_1 c_1} \chi^{a_1 b_2 c_2} \chi^{a_2 b_1 c_2} \chi^{a_2 b_2 c_1} \right], \eeq
where $\chi^{abc}$ is a rank-three fermionic tensor whose indices $a,b,c$ are indistinguishable and run in the range $1 \ldots N$. The full symmetry group of the model is $O(N)^3$.
The third example mimics the SYK model by replacing the random couplings $J_{ijkl}$ with a light bosonic tensor field~\cite{Nishinaka}:
\beq \label{eq:NT}
I_{NT} = \int_0^\beta d\tau \sum_{i<j<k<l} \frac{1}{2 \epsilon} \left[ \left( \frac{d \phi_{ijkl}}{d\tau}\right)^2 + m^2 \left(\phi_{ijkl}\right)^2 \right] + I_{SYK}, \eeq
where $\epsilon = \frac{3!}{\pi} \frac{m J^2}{N^3}$, $m \beta \ll 1$ and $I_{SYK}$ is the standard SYK action~\eqref{eq:SYK-action} with $J_{ijkl} = \phi_{ijkl}$.
We will not review models~\eqref{eq:GW},~\eqref{eq:KT} and~\eqref{eq:NT} in detail; the only important point for us is that they reproduce the SYK diagrammatics in the large $N$ limit. The derivation of this and other remarkable properties of SYK-like tensor models can be found in~\cite{Witten-1610, Gurau-1611, Klebanov-1, Klebanov-2, Klebanov-3, Nishinaka, Klebanov-4, Pakrouski, Popov, Gaitan, Choudhury, Bulycheva, Krishnan, Bonzom-tensor, Giombi, Ferrari-1, Ferrari-2, Azeyanagi-1, Azeyanagi-2}. Therefore, one can expect that these models are described by the same effective action and have the same properties as the SYK model.
Another notable extension of the SYK model is the complex SYK model~\cite{Sachdev, Gu-1910, Bulycheva-1706}:
\beq I_{CSYK} = \int_0^\beta d\tau \left[ \sum_{i=1}^N \chi_i^\dagger(\tau) \dot{\chi}_i(\tau) - \sum_{j_1 < j_2, k_1 < k_2} J_{j_1 j_2, k_1 k_2} \mathcal{A} \left\{ \chi_{j_1}^{\dagger} \chi_{j_2}^{\dagger} \chi_{k_1} \chi_{k_2} \right\} \right], \eeq
where $\mathcal{A}\{\cdots\}$ denotes the antisymmetrized product of operators and randomly distributed couplings $J_{j_1 j_2, k_1 k_2}$ have zero mean and variance $\overline{\left| J_{j_1 j_2, k_1 k_2}\right|^2} = \frac{2 J^2}{N^3}$. This theory has both $SL(2, \mathbb{R})$ and $U(1)$ symmetry. Similarly to its real predecessor, in the IR limit complex SYK is described by the Schwarzian action with an additional term corresponding to the $U(1)$ mode. A thorough discussion of this model and its applications can be found in~\cite{Sachdev, Gu-1910, Bulycheva-1706, Davison}.
\subsection{$CFT_2$ with large central charge / shock waves in $AdS_3$}
BTZ black hole and 2D $CFT$ with large central charge were among the first systems where OTOCs were calculated~\cite{Shenker-1306, Shenker-1312, Roberts-1409, Shenker-1412, Roberts-1412}. Let us briefly review the main ideas of this calculation.
First of all, in subsection~\ref{sec:scramblers} we noticed that the OTOC of local operators $V$ and $W$ can be represented as a two-sided correlation function in a perturbed thermofield double state, see formulae~\eqref{eq:scramblers-1} and~\eqref{eq:scramblers-2}. If the left and right systems are $CFT$s with $AdS$ duals, then the pure state~\eqref{eq:TFD} is dual to an eternal $AdS$ Schwarzschild black hole with inverse temperature $\beta$~\cite{Maldacena-TFD}. In particular, if both systems are 2D $CFT$s, $| TFD \rangle$ describes a BTZ black hole. In this picture the operator $V_L(t)$ acting on the pure state $| TFD \rangle$ is dual to a particle injected near the left boundary at the moment $t$ in the past. According to the holographic dictionary~\cite{Maldacena-AdS,Gubser,Aharony,Witten-AdS}, the mass of the particle is $m_V = \frac{\Delta_V}{2 L}$, where $L$ is the radius of $AdS$ space and $\Delta_V$ is the conformal dimension of $V$ (we assume that $\Delta_V \gg 1$). In general, such a perturbation distorts the geometry of the space. Hence, one needs to estimate this distortion in order to evaluate the two-sided correlator and OTOC.
Without going into details, one obtains that the distorted geometry is described by a so-called shock wave~\cite{Cornalba,Shenker-1306,Shenker-1312}. In a nutshell, this solution is obtained by gluing the metrics of the initial black hole (of mass $M$) and the black hole that swallowed the injected particle (of mass $M + m_V$) in such a way that the time at the boundary flows continuously and the radius of the unit circle is continuous across the gluing surface. For small masses of the injected particle, $m_V \ll M$, the metric of the shock wave is as follows:
\beq ds^2 = -\frac{4 L^2}{(1 + UV)^2} dU dV + R^2 \left( \frac{1 - UV}{1 + UV} \right)^2 d\phi^2 + \frac{4 L^2}{(1 + UV)^2} \frac{m_V}{4 M} e^{\frac{R t}{L^2}} \delta(U) dU^2, \eeq
where $U = u$, $V = v + \frac{m_V}{4 M} e^{R t/L^2} \theta(u)$, $u$ and $v$ are standard Kruskal coordinates and $R$ is the radius of the black hole. In this metric the geodesic distance between two points close to the left and right boundaries is:
\beq \frac{d}{L} \approx 2 \log \frac{2 r}{R} + 2 \log \left[ \cosh \frac{R (t_R - t_L)}{2 L^2} + \frac{m_V}{8 M} e^{\frac{R t}{L^2} - \frac{R (t_R + t_L)}{2 L^2}} \right], \eeq
where $t_L$, $t_R$ are the time coordinates and $r$ is the radial coordinate of the left and right end points of the geodesic. For simplicity we assume that the angular coordinates of the end points coincide. Subtracting the divergent contribution and setting $t_L = t_R = 0$, one obtains the following two-sided correlation function in the semiclassical limit ($G \rightarrow 0$):
\beq \label{eq:shock-1} \begin{aligned}
\text{OTOC}(t) &\approx \Big\langle TFD \Big| V_L^\dagger(t) W_L(0) W_R(0) V_L(t) \Big| TFD \Big\rangle \sim e^{-m_W d} \sim \\ &\sim \left[ 1 + \frac{m_V}{8 M} e^{\frac{R t}{L^2}} \right]^{-2 L m_W} \sim \left[1 + C_1 \frac{m_V L}{S} e^{\frac{2 \pi t}{\beta}} \right]^{-2 L m_W}, \quad \text{for} \quad t \ll t_* = \frac{\beta}{2 \pi} \log S,
\end{aligned} \eeq
where $m_W = \frac{\Delta_W}{2 L}$, $\Delta_W \gg 1$ is the conformal dimension of $W$ and $C_1$ is a positive numerical constant. Here we have used identities for the temperature $\beta = \frac{2 \pi L^2}{R}$, mass $M = \frac{R^2}{8 G L^2}$ and entropy $S = \frac{\pi R}{2 G}$ of BTZ black hole. Also we assumed that the black hole is large, $R \sim L$, so that $S \sim \frac{R^2}{G L}$ and $C_1 = \mathcal{O}(1)$. A detailed derivation of~\eqref{eq:shock-1} and the related discussion can be found in~\cite{Shenker-1306,Roberts-1409,Shenker-1412}.
Finally, under these assumptions one can obtain the correlation function in the boundary $CFT$ with large central charge $c = \frac{3 L}{2 G}$:
\beq \label{eq:shock-2}
\text{OTOC}(t) \sim \left[ 1 + C_2 \frac{\Delta_V}{c} e^{\frac{2 \pi t}{\beta}} \right]^{-\Delta_W}, \quad \text{for} \quad t \ll t_* \sim \frac{\beta}{2 \pi} \log c, \eeq
where $C_2$ is another positive $\mathcal{O}(1)$ numerical constant. One can also obtain this answer without holography, considering different analytical continuations of the Euclidean four-point function and using Virasoro conformal block of the identity operator~\cite{Roberts-1412,Fitzpatrick,Turiaci}.
Note that both the black hole entropy and the central charge measure the number of degrees of freedom of the corresponding system; hence, for both~\eqref{eq:shock-1} and~\eqref{eq:shock-2} the scrambling time is $t_* \sim \beta \log N$. This saturates the bound of the fast scrambling conjecture. The Lyapunov exponent $\kappa = \frac{2 \pi}{\beta}$ also saturates the corresponding bound. However, we remind the reader that~\eqref{eq:shock-1} reproduces only the leading contribution in the limit $G \rightarrow 0$, while the complete answer captures quantum corrections too. As was shown in~\cite{Shenker-1412}, such corrections increase the scrambling time and reduce the growth rate of OTOCs.
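As a simple sanity check of the scaling in~\eqref{eq:shock-2} (an illustrative sketch; the parameter values below are assumptions), one can verify that the OTOC stays near its initial value for $t \ll t_*$ and is strongly suppressed around $t_* \sim \frac{\beta}{2\pi}\log c$:

```python
import math

# Illustrative parameters (assumptions for this sketch)
beta, c = 1.0, 1.0e6          # inverse temperature and central charge
Delta_V, Delta_W = 10.0, 10.0 # conformal dimensions (both >> 1)
C2 = 1.0                      # the O(1) constant, set to one here

def otoc(t):
    """Normalized OTOC of eq. (shock-2); equals ~1 at early times."""
    growth = C2 * (Delta_V / c) * math.exp(2.0 * math.pi * t / beta)
    return (1.0 + growth) ** (-Delta_W)

t_star = (beta / (2.0 * math.pi)) * math.log(c)

early, late = otoc(0.1 * t_star), otoc(t_star)
print(early, late)  # early is close to 1, late is exponentially small
```

The crossover happens when the exponentially growing term becomes $\mathcal{O}(1)$, i.e. after a time logarithmic in the central charge, in agreement with the fast scrambling behavior discussed above.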
\subsection{Large $N$ Hermitian matrix $\Phi^4$ model}
A remarkable example of chaotic, but not maximally chaotic, model is the large $N$ matrix scalar quantum field theory with quartic self-interaction, which was considered in~\cite{Stanford-1512}:
\beq \label{eq:weak-0}
I = \int d^4 x \frac{1}{2} \text{tr} \Big[ \left( \partial_ \mu \Phi \right)^2 - m^2 \Phi^2 - g^2 \Phi^4 \Big], \eeq
where $\Phi$ is a Hermitian $N \times N$ matrix. The 't Hooft coupling is $\lambda = g^2 N \ll 1$. Summing the leading contributions in the limit $N \rightarrow \infty$, $g \rightarrow 0$, $\lambda = \text{const}$ and integrating over the spatial coordinates, one obtains an integro-differential equation for the averaged square of the commutator:
\beq \label{eq:weak-1}
\frac{d}{dt} C(t) = M \circ C(t), \eeq
where $M$ is some integral operator specified in~\cite{Stanford-1512} and
\beq \label{eq:weak-2}
C(t) = \frac{1}{N^4} \sum_{abcd} \int d^3\textbf{x} \, \text{tr}\left( \rho^{\frac{1}{2}} \left[\Phi_{ab}(x), \Phi_{cd}(0)\right] \rho^{\frac{1}{2}} \left[\Phi_{ab}(x), \Phi_{cd}(0)\right] \right). \eeq
As in the conformal part of the SYK four-point function (subsection~\ref{sec:4p-CFT}), the leading contribution to~\eqref{eq:weak-2} is provided by ladder diagrams, with the operator $M$ adding an extra rung to the ladder. The largest eigenvalue of equation~\eqref{eq:weak-1} is nothing but the Lyapunov exponent $\kappa$ that determines the growth rate of $C(t) \sim e^{\kappa t}$. Numerically diagonalizing~\eqref{eq:weak-1}, one can show that for small inverse temperatures, $m \beta \ll 1$, the exponent is as follows:
\beq \label{eq:weak-3}
\kappa \approx 0.025 \frac{\lambda^2}{\beta^2 m}. \eeq
In the case of zero bare mass, $m = 0$, one should substitute into~\eqref{eq:weak-3} the thermal mass $m_{th}^2 = \frac{2 \lambda}{3 \beta^2}$ generated by one-loop corrections to two-point functions:
\beq \kappa \approx 0.025 \frac{\lambda^2}{\beta^2 m_{th}} \approx 0.031 \frac{\lambda^{3/2}}{\beta}. \eeq
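The numerical coefficient in the massless case follows directly from substituting $m_{th}$ into~\eqref{eq:weak-3}; a quick sketch of the arithmetic:

```python
import math

def thermal_mass(lam, beta):
    """One-loop thermal mass: m_th = sqrt(2*lam/3) / beta."""
    return math.sqrt(2.0 * lam / 3.0) / beta

def lyapunov(lam, beta, m):
    """kappa ≈ 0.025 lam^2 / (beta^2 m), valid for m*beta << 1."""
    return 0.025 * lam**2 / (beta**2 * m)

lam, beta = 0.1, 1.0  # illustrative weak coupling (assumption)
kappa = lyapunov(lam, beta, thermal_mass(lam, beta))
coeff = kappa * beta / lam**1.5
print(coeff)  # ≈ 0.031, reproducing the massless-case prefactor
```

Note that for weak coupling this exponent is parametrically smaller than the chaos bound $2\pi/\beta$, confirming that the model is chaotic but not maximally chaotic.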
There is also another way to find the Lyapunov exponent~\eqref{eq:weak-3} which relies on an analogy between epidemic growth and scrambling. Let us consider the theory~\eqref{eq:weak-0} as a gas of $N^2$ interacting particles. The one-particle distribution function $f(t,\mathbf{p})$ of this gas satisfies (in the leading order) the linearized Boltzmann equation:
\beq \label{eq:weak-4}
\frac{\partial}{\partial t} f(t,\mathbf{p}) = \int \frac{d^3\mathbf{q}}{(2\pi)^3} \frac{1}{2 E_\mathbf{q}} \Big[ R^\wedge(\mathbf{p},\mathbf{q}) - R^\vee(\mathbf{p},\mathbf{q}) \Big] f(t,\mathbf{q}), \eeq
where $E_\mathbf{p} = \sqrt{m^2 + \mathbf{p}^2}$, $\mathbf{p}$ is the three-dimensional momentum, and the functions $R^\wedge(\mathbf{p}, \mathbf{q})$ and $R^\vee(\mathbf{p}, \mathbf{q})$ measure the increase and decrease of the particle density in the phase space cell $\mathbf{p}$ associated with the phase space cell $\mathbf{q}$. Note that the loss of particles is caused by two distinct processes: annihilation and outflow of particles to other cells. These processes are described by the functions $2 \Gamma_\mathbf{p} \delta(\mathbf{p} - \mathbf{q})$ and $R^\vee(\mathbf{p}, \mathbf{q}) - 2 \Gamma_\mathbf{p} \delta(\mathbf{p} - \mathbf{q})$, respectively. The gain is only due to the inflow from other cells. For simplicity we assume that the system is spatially homogeneous.
Now let us use this qualitative model to estimate how quickly a local perturbation spreads throughout the system (i.e. estimate how quickly the system scrambles). Imagine that we injected into the system a contagious particle which infects other particles when they collide. In the early stages of the epidemic the rate of its growth is determined by the gross flow passing through the phase space cell, i.e. by the sum of inflow and outflow:
\beq \label{eq:weak-5}
\frac{\partial}{\partial t} f_{OTOC}(t,\mathbf{p}) = \int \frac{d^3\mathbf{q}}{(2\pi)^3} \frac{1}{2 E_\mathbf{q}} \frac{\sinh\frac{\beta E_\mathbf{q}}{2}}{\sinh\frac{\beta E_\mathbf{p}}{2}} \Big[ R^\wedge(\mathbf{p},\mathbf{q}) + R^\vee(\mathbf{p},\mathbf{q}) - 4 \Gamma_\mathbf{p} \delta(\mathbf{p} - \mathbf{q}) \Big] f_{OTOC}(t,\mathbf{q}). \eeq
To obtain this equation we changed the sign of the outflow term in~\eqref{eq:weak-4} and divided the function $f(t,\mathbf{p})$ by $\sinh\frac{\beta E_\mathbf{p}}{2}$. The function $f_{OTOC}(t,\mathbf{p})$ measures the infected particle density. If this qualitative picture is applicable to the system~\eqref{eq:weak-0} and infected particles are analogs of particles affected by a perturbation, then the epidemic growth is equivalent to scrambling. Hence, one expects that the growth rate of $f_{OTOC}(t,\mathbf{p})$ coincides with the growth rate of $C(t)$.
Indeed, it was shown in~\cite{Aleiner,Grozdanov} that equation~\eqref{eq:weak-5} can be deduced from the IR limit of Bethe-Salpeter equation for OTOC (in this limit Bethe-Salpeter equations decouple). Therefore, one can evaluate the Lyapunov exponent by diagonalizing~\eqref{eq:weak-5} instead of~\eqref{eq:weak-1}. In particular, this method reproduces the result~\eqref{eq:weak-3} in the limit $N \gg 1$, $m \beta \ll 1$. Note that this approach also can be applied to other weakly coupled systems.
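The growth rate extracted from equations like~\eqref{eq:weak-1} or~\eqref{eq:weak-5} is just the largest eigenvalue of the (discretized) kernel. A minimal sketch of this diagonalization step using power iteration on a toy matrix (the matrix below is an arbitrary stand-in, not the actual collision kernel of the theory):

```python
def power_iteration(M, steps=100):
    """Estimate the largest eigenvalue of a matrix by repeated application."""
    n = len(M)
    v = [1.0] * n
    est = 0.0
    for _ in range(steps):
        w = [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
        est = max(abs(x) for x in w)   # infinity-norm growth factor
        v = [x / est for x in w]       # renormalize to avoid overflow
    return est

# Toy symmetric "kernel" with known eigenvalues 3 and 1
M = [[2.0, 1.0],
     [1.0, 2.0]]
kappa_est = power_iteration(M)
print(kappa_est)  # → 3.0, playing the role of the Lyapunov exponent
```

In practice one discretizes the momentum integral in~\eqref{eq:weak-5} into such a matrix and the same iteration yields the exponent~\eqref{eq:weak-3}.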
\section*{Acknowledgements}
The authors would like to thank F.~K.~Popov, A.~Milekhin, A.~Yu.~Morozov, V.~A.~Rubakov, P.~I.~Arseev, V.~V.~Losyakov, U.~Moschella, A.~S.~Gorsky, A.~Dymarsky, D.~Grumiller, D.~A.~Galante, L.~A.~Akopyan, E.~N.~Lanina, R.~O.~Sharipov and E.~S.~Trunina for useful comments and discussions. We would especially like to thank E.~T.~Akhmedov for sharing his ideas and for support throughout the work. We would also like to thank Hermann Nicolai and Stefan Theisen for their hospitality at the Albert Einstein Institute, Golm, where part of the work on this project was done. This work was supported by a grant from the Foundation for the Advancement of Theoretical Physics and Mathematics ``BASIS''.
\section{Characterizing Deep Learning Personalized Recommendation Models}
\label{sec:char}
This section describes the general architecture of DL-based recommendation
models with prominent sparse embedding features and their performance bottlenecks.
As a case study, we conduct a thorough characterization of the recently-released Deep Learning Recommendation Model (DLRM) benchmark~\cite{DLRM}.
The characterization---latency breakdown, roofline analysis, bandwidth analysis, and memory locality---illustrates the unique memory requirements and access behavior of production-scale recommendation models and justifies the proposed near-memory accelerator architecture.
\vspace{-0.1cm}
\subsection{Overview of Personalized Recommendation Models}
\label{sec:recModels}
\vspace{-0.1cm}
Personalized recommendation is the task of recommending content to users based on their preferences and previous interactions.
For instance, in video ranking (e.g., Netflix, YouTube), a small number of videos, out of potentially millions, must be recommended to each user.
Thus, delivering accurate recommendations in a timely and efficient manner is important.
Most modern recommendation models have an extremely large feature set to capture a range of user behavior and preferences.
These features are typically separated out into dense and sparse features.
While dense features (i.e., vectors, matrices) are processed by typical DNN layers (e.g., FC, CNN, RNN), sparse features are processed by indexing large embedding tables.
A general model architecture of DL-based recommendation systems is captured in Figure~\ref{fig:sparsenn}. A few examples are listed with their specific model parameters~\cite{DLRM,fox,youtube} in Figure~\ref{fig:sparsenn}(b).
Similar mixture of dense and sparse features are broadly observable across many alternative recommendation models~\cite{alibabaRec,wideanddeep,mtwnd,DLRM,fox,youtube}.
Embedding table lookup and pooling operations provide an abstract representation of sparse features learned during training and are central to DL-based recommendation models.
Embedding tables are organized as a set of potentially millions of vectors.
Generally, embedding table operations exhibit a Gather-Reduce pattern; the specific element-wise reduction operation varies between models.
For example, Caffe2~\cite{caffe2} comprises a family of embedding operators, prefixed by \textit{SparseLengths} (e.g., SparseLengthsWeightedSum8BitsRowwise), that perform a similar Gather-Reduce embedding operation with quantized, weighted summation.
The SLS operator primitive is widely employed by other production-scale recommendation applications (e.g. YouTube~\cite{youtube} and Fox~\cite{fox}).
Our work aims to alleviate this performance bottleneck and improve system throughput by devising a novel NMP solution to offload the SLS-family embedding operations thus covering a broad class of recommendation systems.
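A minimal reference sketch of the SLS Gather-Reduce semantics (a simplified stand-in illustrating the operator's behavior, not Caffe2's actual implementation):

```python
def sparse_lengths_weighted_sum(table, indices, lengths, weights=None):
    """Batched embedding Gather-Reduce.

    table:   list of embedding rows (each a list of floats)
    indices: flat list of sparse row IDs for the whole batch
    lengths: number of lookups per pooling (one entry per batch element)
    weights: optional per-lookup scaling factors
    """
    if weights is None:
        weights = [1.0] * len(indices)
    out, pos = [], 0
    for n in lengths:                      # one pooling per batch element
        pooled = [0.0] * len(table[0])
        for k in range(pos, pos + n):      # gather n rows and reduce
            row, w = table[indices[k]], weights[k]
            pooled = [p + w * r for p, r in zip(pooled, row)]
        out.append(pooled)
        pos += n
    return out

emb = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
# Batch of two poolings: rows {0, 2} summed, then row {1} alone
print(sparse_lengths_weighted_sum(emb, [0, 2, 1], [2, 1]))
# → [[6.0, 8.0], [3.0, 4.0]]
```

Only the small pooled outputs cross the memory interface under an NMP offload, while the row gathers stay local to memory, which is the key saving exploited in this work.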
\begin{figure}[t!]
\centering
\includegraphics[width=\columnwidth]{rm_v2.pdf}
\vspace{-0.7cm}
\caption{
(a) Simplified model-architecture reflecting production-scale recommendation models;
(b) Parameters of representative recommendation models.}
\label{fig:sparsenn}
\vspace{-0.6cm}
\end{figure}
\if 0
\begin{table}[t!]
\caption{Model parameters of representative recommendation models}
\label{tab:rec_models}
\centering
\begin{tabular}{|c|c|c|c|c|c|}
\hline
& \# Embs & Emb size & Pooling & Batch size & \# FC \\ \hline
RM1-small & 8 & 1M & 10-80 & 1-256 & 6 \\ \hline
RM1-large & 12 & 1M & 10-80 & 1-256 & 6 \\ \hline
RM2-small & 24 & 1M & 10-80 & 1-256 & 6 \\ \hline
RM2-large & 64 & 1M & 10-80 & 1-256 & 6 \\ \hline
Fox & 2 & $\sim$Millions & $\sim$50 & - & 4 \\ \hline
Youtube & 2 & $\sim$Millions & $\sim$50 & - & 3 \\ \hline
\end{tabular}
\end{table}
\begin{table}[t!]
\caption{SparseLengths-Family Inference Operators in Caffe2}
\label{tab:caffe2_sls_op}
\centering
\begin{tabular}{|c|c|}
\hline
Embedding Gather-Reduce Operators & Data Type \\ \hline
SparseLengthsMean & FP-32 \\ \hline
SparseLengthsMean8BitsRowwise & Int-8 \\ \hline
SparseLengthsMeanFused8BitRowwise & Int-8 \\ \hline
SparseLengthsWeightedMean8BitsRowwise & Int-8 \\ \hline
SparseLengthsSum & FP-32 \\ \hline
SparseLengthsSum8BitsRowwise & Int-8 \\ \hline
SparseLengthsSumFused8BitRowwise & Int-8 \\ \hline
SparseLengthsWeightedSum & FP-32 \\ \hline
SparseLengthsWeightedSum8BitsRowwise & Int-8 \\ \hline
SparseLengthsWeightedSumFused8BitRowwise & Int-8 \\ \hline
\end{tabular}
\end{table}
\fi
\vspace{-0.1cm}
\subsection{A Case Study---Facebook's DLRM Benchmark}
\vspace{-0.1cm}
To demonstrate the advantages of near-memory processing for at-scale personalized recommendation models, we study Facebook's deep learning recommendation models (DLRMs)~\cite{DLRM}.
Dense features are initially processed by the BottomFC operators, while sparse input features are processed through the embedding table lookups.
The outputs of these operators are combined and processed by TopFC, producing a prediction of the click-through rate of the user-item pair.
This paper focuses on performance acceleration strategies for four recommendation models representing two canonical classes of the models, RM1 and RM2~\cite{arxiv-gupta-19}.
These two model classes account for a significant share of machine learning execution cycles at Facebook's production datacenters: over 30\% for RM1 and over 25\% for RM2~\cite{sigarch-blog}.
The parameters to configure them are shown in Figure~\ref{fig:sparsenn}(b).
The notable distinguishing factor across these configurations is the number of the embedding tables.
RM1 is a comparatively smaller model with few embedding tables;
RM2 has tens of embedding tables.
In production environments, recommendation models employ three levels of parallelism, shown in Figure~\ref{fig:model_parallelism}, to achieve high throughput under strict latency constraints~\cite{arxiv-gupta-19}.
Model-level parallelism grows by increasing the number of concurrent model inferences ($m$) on a single machine, operator-level parallelism adds parallel threads ($n$) per model, and data-level parallelism is scaled by increasing the batch size.
An SLS operator performs a batch of pooling operations; one pooling operation performs the summation for a set of vectors.
The inputs to SLS, for one batch of embedding lookups, include an indices vector containing sparse IDs and, optionally, a weight vector.
\vspace{-0.1cm}
\subsection{Operator Bottleneck Study}
\vspace{-0.1cm}
We observe that \emph{the SLS-family of operators is the largest contributor to latency} in recommendation models, especially as batch size (i.e., data-level parallelism) increases.
Figure~\ref{fig:model_latency_breakdown} depicts the execution time breakdown per operator with the majority of the time spent executing FC and SLS Caffe2 operators~\cite{arxiv-gupta-19}.
With a batch size of 8, SLS accounts for 37.2\% and 50.6\% of the total model execution time of RM1-small and RM1-large, respectively.
Whereas for larger models represented by RM2-small and RM2-large, a more significant portion of the execution time goes into SLS (73.5\%, 68.9\%).
Furthermore, the fraction of time spent on the embedding table operations increases with higher batch-size --- 37.2\% to 61.1\% and 50.6\% to 71.3\% for RM1-small and RM1-large respectively.
Note that the execution time of RM2-large is 3.6$\times$ higher than that of RM1-large because RM2 comprises a larger number of parallel embedding tables.
Generally, embedding table sizes are expected to increase further for models used in industry~\cite{tensorDIMM}.
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{model_parallelism.pdf}
\vspace{-0.7cm}
\caption{Model-, operator- and data-level parallelism in production system.}
\label{fig:model_parallelism}
\vspace{-0.3cm}
\end{figure}
\begin{figure}[t!]
\centering
\includegraphics[width=\columnwidth]{model_latency_breakdown_v6.pdf}
\vspace{-0.8cm}
\caption{Inference latency and breakdown across models (RM1-small, RM1-large, RM2-small, RM2-large) with varying batch sizes (8, 64, 128, 256).}
\label{fig:model_latency_breakdown}
\vspace{-0.5cm}
\end{figure}
\vspace{-0.1cm}
\subsection{Roofline Analysis}
\vspace{-0.1cm}
\if 0
\begin{figure}[t!]
\centering
\includegraphics[width=\columnwidth]{figs/sec3/2-RM1-RM2-roofline.pdf}
\vspace{-0.6cm}
\caption{Roofline of multi-threaded RM1-large and RM2-large with varying batch size (1, 8, 16, 32, 64, 128, 256). Darker color indicates larger batch.
}
\label{fig:RM2_roofline}
\vspace{-0.4cm}
\end{figure}
\fi
Applying the roofline model~\cite{roofline}, we find \emph{recommendation models lie in the memory bandwidth-constrained region, close to the theoretical roofline performance bound}.
We construct a roofline describing the theoretical limits of the test system described in Section~\ref{sec:method}. We use Intel's Memory Latency Checker (MLC)\footnote{Intel MLC~\cite{MLC} measures the bandwidth from the processor by creating threads that traverse a large memory region in random or sequential stride as fast as possible.} to derive the memory bound. We derive the compute bound by sweeping the number of fused multiply-add (FMA) units in the processor and the operating frequency of the CPU (Turbo mode enabled).
Figure~\ref{fig:RM2_roofline} presents the roofline data points for the models, RM1 and RM2, as well as their corresponding FC and SLS operators separately. We sweep batch size from 1 to 256 with darker colors indicating a larger batch size. We observe that the SLS operator has low compute but higher memory requirements; the FC portion of the model has higher compute needs; and the combined model is in between.
SLS has low and fixed operational intensity across batch sizes, as it performs vector lookups and element-wise summation.
FC's operational intensity increases with batch size, as all requests in the batch share the same FC weights, increasing FC data reuse.
With increasing batch size, the FC operator moves from the region under the memory-bound roofline to the compute-bound region.
For the full model, we find RM1 and RM2 in the memory bound region, as the operational intensity is dominated by the high percentage of SLS operations.
It also reveals that, with increasing batch size, the performance of SLS, as well as that of the RM1 and RM2 models, approaches the theoretical performance bound of the system.
More importantly, \textit{our roofline analysis suggests that the performance of the recommendation model is within 35.1\% of the theoretical performance bound and there is little room for further improvement without increasing system memory bandwidth.}
By performing the embedding lookups and pooling operations before crossing the pin-limited memory interface, near-memory processing can exploit the higher internal bandwidth of the memory system,
thus effectively lifting the roofline and fundamentally improving the memory bandwidth-constrained performance bound.
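The roofline bound itself is simple to sketch (the bandwidth figure is the 4-channel DDR4-2400 number used in the text; the compute peak and operational intensities below are illustrative assumptions):

```python
def attainable_gflops(oi, peak_gflops, bw_gbs):
    """Roofline model: attainable perf = min(compute peak, bandwidth * OI)."""
    return min(peak_gflops, bw_gbs * oi)

BW_GBS = 76.8          # 4-channel DDR4-2400 peak bandwidth (from the text)
PEAK_GFLOPS = 1000.0   # illustrative compute roof (assumption)

# SLS-like operator: low, batch-independent operational intensity (FLOP/byte)
perf_sls = attainable_gflops(0.25, PEAK_GFLOPS, BW_GBS)  # bandwidth-bound

# FC-like operator: operational intensity grows with batch size, since the
# weights are reused across the batch, so it eventually hits the compute roof
for batch in (1, 64, 256):
    oi_fc = 0.5 * batch   # toy scaling from weight reuse (assumption)
    print(batch, attainable_gflops(oi_fc, PEAK_GFLOPS, BW_GBS))
```

Under this model, the only way to speed up the bandwidth-bound SLS points is to raise the memory roof `BW_GBS`, which is precisely what near-memory processing does by exploiting internal bandwidth.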
\begin{figure}[t!]
\begin{minipage}[t]{0.44\linewidth}
\includegraphics[width=\linewidth]{roofline.pdf}
\vspace{-0.7cm}
\caption{Roofline of multi-threaded RM1-large and RM2-large sweeping batch size (1-256). Darker color indicates larger batch.}
\label{fig:RM2_roofline}
\end{minipage}%
\hfill%
\begin{minipage}[t]{0.54\linewidth}
\includegraphics[width=\linewidth]{bandwidth.pdf}
\vspace{-0.7cm}
\caption{Memory bandwidth saturation with increasing number of parallel SLS threads and batch sizes.}
\label{fig:bandwidth}
\end{minipage}
\vspace{-0.5cm}
\end{figure}
\vspace{-0.1cm}
\subsection{Memory Bandwidth of Production Configurations}
\vspace{-0.1cm}
\emph{Executing embedding operations on real systems can saturate memory bandwidth at high model-, operator- and data-level parallelism.}
Figure~\ref{fig:bandwidth} depicts the memory bandwidth consumption as we increase the number of parallel SLS threads for different batch sizes (blue curves). The green horizontal line represents the ideal peak bandwidth (76.8 GB/s, 4-channel, DDR4-2400) and the red curve is an empirical upper bound measured with Intel MLC~\cite{MLC}.
We observe that memory bandwidth can be easily saturated by embedding operations especially as batch size and the number of threads increase.
In this case, the memory bandwidth saturation point occurs at (batch size = 256, 30 threads), where more than 67.4\% of the available bandwidth is consumed by SLS.
In practice, a higher level of bandwidth saturation beyond this point becomes undesirable as memory latency starts to increase significantly~\cite{intel_optane}.
\textit{What is needed is a system that can perform the Gather-Reduce operation near memory such that only the final output from the pooling returns to the CPU.}
\vspace{-0.1cm}
\subsection{Embedding Table Locality Analysis}
\label{sec:locality}
\vspace{-0.1cm}
Prior work~\cite{arxiv-gupta-19} has assumed that embedding table lookups are always random; however, we show that, \emph{for traces from production traffic, there exists a modest level of locality, mostly due to temporal reuse.}
While recommendation models are limited by memory performance generally, we wanted to study the memory locality to see if caching can improve performance.
We evaluate both a random trace and embedding table (T1-T8) lookup traces from production workloads used by Eisenman et al.~\cite{eisenman2018bandana}.
In production systems, one recommendation model contains tens of embedding tables and multiple models are co-located on a single machine.
To mimic the cache behavior of a production system, we simulate the cache hit rate for multiple embedding tables co-located on one machine.
In Figure~\ref{fig:sls_lru_locality}(a), Comb-8 means that 8 embedding tables are running on the machine and the T1-T8 traces (each for a single embedding table) are interleaved for the 8 embedding tables.
For Comb-16, Comb-32 and Comb-64 we multiply the 8 embedding tables 2, 4, and 8 times on the same machine, which also approximates larger models with 16, 32 and 64 embedding tables.
We use the LRU cache replacement policy and 4-way set associative cache. We assume each embedding table is stored in a contiguous logical address space and randomly mapped to free physical pages.
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{lru_locality.pdf}
\vspace{-0.7cm}
\caption{(a) Temporal data locality sweeping cache capacity 8-64MB with fixed cacheline size of 64B;
(b) Spatial data locality sweeping cacheline size 64-512B with fixed cache capacity 16MB.
}
\label{fig:sls_lru_locality}
\vspace{-0.5cm}
\end{figure}
To estimate the amount of temporal locality present, we sweep the cache capacity between 8-64MB with fixed cacheline size of 64B.
In Figure~\ref{fig:sls_lru_locality}(a), the random trace has a low hit rate of $<$5\%, representing the worst-case locality. We see that the hit rate of the combined production traces is much higher than random, between 20\% and 60\%.
More importantly, hit rate increases as cache size increases. In Section~\ref{sec:co-opt}, we will show how optimizations to {\textit{RecNMP}~} can take advantage of this locality through table-aware packet scheduling and software locality hints from batch profiling.
Spatial locality can be estimated by sweeping the cacheline size of 64-512B with a fixed cache capacity of 16MB. Figure~\ref{fig:sls_lru_locality}(b) illustrates this sweep for the Comb-8.
We observe that, as the cacheline size increases, the hit rate in fact decreases. In order to isolate the effect of increased conflict misses, we run the same experiment on a fully-associative cache and observe similar trends of decreasing hit rate.
Thus, we conclude that embedding table lookup operations have little spatial locality.
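The locality experiments above can be reproduced in miniature with a simple hit-rate model (a sketch of the simulation methodology, not the authors' actual simulator; the toy trace below is an assumption):

```python
from collections import OrderedDict

class SetAssocLRUCache:
    """Minimal 4-way set-associative LRU cache model (hit-rate only)."""
    def __init__(self, capacity_bytes, line_bytes=64, ways=4):
        self.line = line_bytes
        self.ways = ways
        self.num_sets = capacity_bytes // (line_bytes * ways)
        self.sets = [OrderedDict() for _ in range(self.num_sets)]
        self.hits = self.accesses = 0

    def access(self, addr):
        self.accesses += 1
        line = addr // self.line
        s = self.sets[line % self.num_sets]
        if line in s:
            self.hits += 1
            s.move_to_end(line)        # refresh LRU position
        else:
            if len(s) >= self.ways:
                s.popitem(last=False)  # evict the least-recently-used way
            s[line] = True

    def hit_rate(self):
        return self.hits / self.accesses

# Toy trace: repeated lookups of a small hot set of embedding rows.
# After the 100 cold misses of the first pass, every access hits.
cache = SetAssocLRUCache(capacity_bytes=1 << 20)
hot_rows = [i * 256 for i in range(100)]   # 100 hot row addresses
for _ in range(100):
    for addr in hot_rows:
        cache.access(addr)
print(cache.hit_rate())   # → 0.99
```

Replaying real embedding traces through such a model while sweeping `capacity_bytes` and `line_bytes` yields the temporal- and spatial-locality trends of Figure~\ref{fig:sls_lru_locality}.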
\section{Conclusion}
\vspace{-0.1cm}
We propose \textit{RecNMP}---a practical and scalable near-memory solution for personalized recommendation.
We perform a systematic characterization of production-relevant recommendation models and reveal their performance bottlenecks.
A light-weight, commodity DRAM compliant design, {\textit{RecNMP}~} maximally exploits rank-level parallelism and temporal locality of production embedding traces to achieve up to $9.8\times$ performance improvement of sparse embedding operation (carried out by the SLS-family operators).
Offloading SLS also offers alleviated cache contention for the non-SLS operators that remain in the CPU, resulting in up to 30\% latency reduction for co-located FC operators.
Overall, our system-level evaluation demonstrates that {\textit{RecNMP}~} offers up to $4.2\times$ throughput improvement and 45.8\% memory energy saving with representative production-relevant model configurations.
\section{{RecNMP~} System Design}
\label{sec:design}
\vspace{-0.1cm}
Considering the unique memory-bounded characteristics and the sparse and irregular access pattern of personalized recommendation, we propose \textit{RecNMP}---a practical and lightweight near-memory processing solution to accelerate the dominant embedding operations.
It is designed to maximize DRAM rank-level parallelism by computing directly and locally on data fetched from concurrently activated ranks.
First, we employ a minimalist style hardware architecture and embed specialized logic units and a rank-level cache to only support the SLS-family inference operators instead of general-purpose computation.
The modified hardware is limited to the buffer chip within a DIMM without requiring any changes to commodity DRAM devices.
Next, the sparse, irregular nature of embedding lookups exerts a high demand on command/address (C/A) bandwidth.
This is addressed by sending a compressed instruction format over the standard memory interface, conforming to the standard DRAM physical pin-outs and timing constraints. Other proposed NMP solutions have employed special NMP instructions without addressing the C/A bandwidth limitation imposed by irregular, low-spatial-locality memory access patterns~\cite{chameleon, tensorDIMM}.
We also present a hardware/software (HW/SW) interface for host-NMP coordination by adopting a heterogeneous computing programming model, similar to OpenCL~\cite{opencl}.
Finally, we explore several HW/SW co-optimization techniques--\textit{memory-side caching}, \textit{table-aware scheduling} and \textit{hot entry profiling}--that provide additional performance gains. These approaches leverage our observations from the workload characterization in the previous section.
\vspace{-0.2cm}
\subsection{Hardware Architecture}
\vspace{-0.1cm}
\textbf{System overview.}
{\textit{RecNMP}~} resides in the buffer chip on the DIMM.
The buffer chip bridges the memory channel interface from the host and the standard DRAM device interface, using data and C/A pins, as illustrated in Figure~\ref{fig:system_overview}(a).
Each buffer chip contains a {RecNMP~} processing unit (PU) made up of a DIMM-NMP module and multiple rank-NMP modules.
This approach is non-intrusive and scalable, as larger memory capacity can be provided by populating a single memory channel with multiple RecNMP-equipped DIMMs.
Multiple DDR4 channels can also be utilized with software coordination.
The host-side memory controller communicates with a {RecNMP~} PU by sending customized compressed-format NMP instructions (NMP-Inst) through the conventional memory channel interface; the PU returns the accumulated embedding pooling results (DIMM.Sum) to the host.
Regular DDR4-compatible C/A and data signals (DDR.C/A and DDR.DQ) are decoded by the {RecNMP~} PU from the NMP-Insts and then sent to all DRAM devices across all parallel ranks in a DIMM.
By placing the logic at rank-level, {\textit{RecNMP}~} is able to issue concurrent requests to the parallel ranks and utilize, for SLS-family operators, the higher internal bandwidth present under one memory channel.
Its effective bandwidth thus aggregates across all concurrently activated ranks. For example, in Figure~\ref{fig:system_overview}(a), a memory configuration of 4 DIMMs$\times$2 ranks per DIMM could achieve $8\times$ higher internal bandwidth.
The DIMM-NMP module first receives a NMP-Inst through DIMM interface and then forwards it to the corresponding rank-NMP module based on the rank address.
The rank-NMPs decode and execute the NMP-Inst to perform the local computation of the embedding vectors concurrently.
We do not confine an SLS operation to a single rank but support aggregation across ranks within the PU. This simplifies the memory layout and increases bandwidth.
DIMM-NMP performs the remaining element-wise accumulation of the partial sum vectors (PSum) from parallel ranks to arrive at the final result (DIMM.Sum). In the same fashion, Psums could be accumulated across multiple {RecNMP~} PUs with software coordination.
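The final reduction step can be expressed as a short behavioral sketch: the DIMM-NMP accumulates per-rank Psum vectors element-wise into DIMM.Sum. The sequential loop below is only a model; the hardware uses a log-depth adder tree, and the function name is ours.

```python
def dimm_sum(rank_psums):
    """Element-wise reduction of per-rank partial sums (Psum).

    rank_psums: list of equal-length Psum vectors, one per rank-NMP.
    Returns the final DIMM.Sum vector sent back to the host.
    """
    assert rank_psums and all(len(v) == len(rank_psums[0]) for v in rank_psums)
    total = [0.0] * len(rank_psums[0])
    for psum in rank_psums:  # hardware: a log2(#ranks)-deep adder tree
        total = [a + b for a, b in zip(total, psum)]
    return total
```

The same element-wise accumulation, applied across {RecNMP~} PUs under software coordination, yields cross-DIMM pooling results.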
We next detail the design of the DIMM-NMP and rank-NMP modules. While they reside on the same buffer chip, keeping them as separate logical modules makes it easy to scale to DIMMs with a different number of ranks.
\begin{figure*}[t!]
\centering
\includegraphics[width=\textwidth]{hardware-architecture-v2.pdf}
\vspace{-0.8cm}
\caption{(a) Overview of the {\textit{RecNMP}~} architecture; (b) DIMM-NMP; (c) Rank-NMP; (d) NMP instruction format.}
\label{fig:system_overview}
\vspace{-0.6cm}
\end{figure*}
\textbf{DIMM-NMP Module.}
To dispatch the NMP-Inst received from the DIMM interface, the DIMM-NMP module employs DDR PHY and protocol engine similar to the design of a conventional DIMM buffer chip relaying the DRAM C/A and DQ signals from and to the host-side memory controller.
The instruction is multiplexed to the corresponding ranks based on the Rank-ID as shown in Figure~\ref{fig:system_overview}(b).
DIMM-NMP buffers the Psum vectors accumulated by each rank-NMP in its local registers and performs final summation using an adder tree before sending the final result back to the host via the standard DIMM interface.
Depending on the memory system configuration, the number of ranks within a DIMM can vary, changing the number of inputs to the adder tree.
\textbf{Rank-NMP Module.}
{\textit{RecNMP}~} uses the internal bandwidth on a DIMM to increase the effective bandwidth of embedding table operations, thus the majority of the logic is replicated for each rank. Three crucial functions are performed by the rank-NMP module---translating the NMP-Inst into low-level DDR C/A commands, managing {\it memory-side caching} and local computation of SLS-family operators.
As illustrated in Figure~\ref{fig:system_overview}(c), the NMP-Inst is decoded to control signals and register inputs.
To address C/A bus limitations, all of the DDR commands for a single SLS vector are embedded in one NMP-Inst. Three fields in NMP-Inst (Figure~\ref{fig:system_overview}(d))---DDR cmd (the presence/absence of \{ACT, RD, PRE\} with bit 1/0), vector size (vsize), and DRAM address (Daddr)---determine the DDR command sequence and the burst length. These are fed to the local command decoder (Rank.CmdDecoder) to generate standard DDR-style ACT/RD/PRE commands to communicate with DRAM devices.
These DDR cmd tags are set at runtime by the host-side memory controller based on the relative physical addresses of consecutive embedding accesses. This keeps the CmdDecoder in the rank-NMP lightweight, as the host-side memory controller has already performed the heavy-lifting tasks of request reordering, arbitration, and clock and refresh signal generation.
If a 128B vector (vsize=2) requires ACT/PRE from a row buffer miss, the command sequence to DRAM devices for the NMP-Inst is \{PRE, ACT Row, RD Col, RD Col+8\} decoded from \{ACT, RD, PRE\} and vsize tags.
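This decode step can be modeled behaviorally. The sketch below (function and field names are illustrative assumptions, not the RTL) expands the compressed \{ACT, RD, PRE\} bits and the vsize tag into the DDR command sequence described above:

```python
def decode_nmp_inst(ddr_cmd, vsize, row, col):
    """Expand the compressed DDR cmd bits + vsize into a DDR command list.

    ddr_cmd: dict of boolean ACT/RD/PRE flags from the 3-bit DDR cmd field.
    vsize:   number of 64B bursts spanned by the embedding vector.
    row/col: row and starting column carried by the DRAM address (Daddr).
    """
    seq = []
    if ddr_cmd["PRE"]:
        seq.append("PRE")                # close the previously open row
    if ddr_cmd["ACT"]:
        seq.append(f"ACT Row{row}")      # activate the target row
    if ddr_cmd["RD"]:
        for i in range(vsize):           # one RD per 64B burst, columns 8 apart
            seq.append(f"RD Col{col + 8 * i}")
    return seq
```

For the 128B row-buffer-miss example in the text (all three flags set, vsize=2), the model emits \{PRE, ACT Row, RD Col, RD Col+8\}; a row-buffer hit (only RD set) emits the RD sequence alone.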
Our locality analysis in Section~\ref{sec:char} shows modest temporal locality within some embedding tables, as vectors are reused.
The operands of each SLS-family operator vary, so caching the final results in the DIMM or CPU would be ineffective.
We incorporate a memory-side cache (RankCache) in each rank-NMP module to exploit the embedding vectors reuse.
The RankCache in {\textit{RecNMP}~} takes hints from the LocalityBit in the NMP-Inst to determine whether an embedding vector should be cached or bypassed.
The detailed method to generate the LocalityBit hint through hot entry profiling will be explained in Section~\ref{sec:co-opt}.
Entries in RankCache are tagged by the DRAM address field (Daddr).
If the LocalityBit in the NMP-Inst indicates low locality,
the memory request bypasses the RankCache and is forwarded to Rank.CmdDecoder to initiate a DRAM read. Embedding tables are read-only during inference, so this optimization does not impact correctness.
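A behavioral sketch of this hint-driven lookup path follows (class, method, and callback names are ours, not the hardware's): a hit is served from the RankCache, a miss goes to DRAM, and only requests whose LocalityBit is set are filled into the cache.

```python
from collections import OrderedDict

class RankCache:
    """Sketch of the memory-side RankCache with LocalityBit-driven bypass."""
    def __init__(self, capacity_lines):
        self.capacity = capacity_lines
        self.lines = OrderedDict()        # Daddr -> vector, in LRU order

    def read(self, daddr, locality_bit, dram_read):
        if daddr in self.lines:           # hit: serve from cache
            self.lines.move_to_end(daddr)
            return self.lines[daddr]
        vec = dram_read(daddr)            # miss or bypass: DRAM read
        if locality_bit:                  # cache only hinted (hot) entries
            if len(self.lines) >= self.capacity:
                self.lines.popitem(last=False)
            self.lines[daddr] = vec
        return vec
```

Because embedding tables are read-only during inference, the bypass path needs no coherence handling; low-locality requests simply never pollute the cache.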
The datapath in the rank-NMP module supports a range of SLS-family operators.
The embedding vectors returned by the RankCache or DRAM devices are loaded to the input embedding vector registers.
For weighted sum computation, the weight registers are populated by the weight fields from the NMP-Inst.
For quantized operators such as the SLS-8bits operator, the dequantized parameters $Scalar$ and $Bias$ are stored with the embedding vectors and can be fetched from memory to load to the Scalar and Bias registers.
The Weight and Scalar/Bias registers default to 1 and 1/0, respectively, during execution of non-weighted and non-quantized SLS operators.
The PsumTag decoded from the NMP-Inst is used to identify the embedding vectors belonging to the same pooling operations, as multiple poolings in one batch for one embedding table could be served in parallel.
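Putting the datapath together, a behavioral model of one rank-NMP executing a packet of NMP-Insts might look as follows. The `fetch` callback and all field names are illustrative assumptions; in hardware, vectors arrive from the RankCache or DRAM and the registers are loaded as described above.

```python
def rank_nmp_sls(insts, fetch, weighted=False, quantized=False):
    """Sketch of the rank-NMP SLS datapath with PsumTag grouping.

    insts: list of dicts with Daddr, PsumTag, and an optional Weight field.
    fetch: returns (vector, scalar, bias) for a Daddr; scalar/bias are the
           dequantization parameters stored alongside quantized embeddings.
    Returns {PsumTag: Psum vector}, one partial sum per pooling.
    """
    psums = {}
    for inst in insts:
        vec, scalar, bias = fetch(inst["Daddr"])
        if not quantized:
            scalar, bias = 1.0, 0.0                    # bypass dequantization
        w = inst.get("Weight", 1.0) if weighted else 1.0
        deq = [w * (scalar * x + bias) for x in vec]   # dequantize + weight
        tag = inst["PsumTag"]
        acc = psums.setdefault(tag, [0.0] * len(deq))
        psums[tag] = [a + d for a, d in zip(acc, deq)]
    return psums
```

Keying the accumulators on PsumTag is what lets multiple poolings of one batch proceed in parallel within a single packet.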
The controller counter, vector size register,
and final sum registers in both the DIMM-NMP and rank-NMP modules are memory-mapped, easily accessible and configurable by the host CPU.
\vspace{-0.1cm}
\subsection{C/A Bandwidth Expansion}
\label{sec:mem_config}
\vspace{-0.1cm}
Although the theoretical aggregated internal bandwidth of {\textit{RecNMP}~} scales linearly with the number of ranks per channel, in practice, the number of concurrently activated ranks is limited by the C/A bandwidth.
Due to frequent row buffer misses/conflicts from low spatial locality, accessing the embedding table entries in memory requires a large number of ACT and PRE commands.
The reason is that the probability of accessing two embedding vectors in the same row is quite low, as spatial locality exists only within the contiguous DRAM data burst of a single embedding vector.
In production, embedding vector size ranges from 64B to 256B with low spatial locality, resulting in consecutive row buffer hits in the narrow range of 0 to 3.
To fully understand the C/A bandwidth limitation, we analyze the worst-case scenario when the embedding vector size is 64B.
A typical timing diagram is presented in Figure~\ref{fig:timing}(a). It shows an ideal sequence of bank-interleaved DRAM reads that could achieve one consecutive data burst.
In this burst mode, the \emph{ACT} command first sets the row address.
Then the \emph{RD} command is sent accompanied by the column address.
After $t_{RL}$ DRAM cycles, the first set of two 64-bit data (DQ0 and DQ1) appear on the data bus.
The burst mode lasts for 4 DRAM cycles (burst length = 8) and transmits a total of 64B on the DQ pins at both rising and falling edges of the clock signal.
Modern memory systems employ bank interleaving, therefore in the next burst cycle (4 DRAM cycles), data from a different bank can be accessed in a sequential manner.
In this ideal bank-interleaving case, every 64B data transfer takes 4 DRAM cycles and requires 3 DDR commands (ACT/RD/PRE) to be sent over the DIMM C/A interface, consuming 75\% of the C/A bandwidth.
Activating more than one bank concurrently would require issuing more DDR commands, thus completely exhausting the available C/A bandwidth of conventional memory interface.
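The 75\% figure follows directly from the command and cycle counts. A one-line check, assuming one command slot per DRAM cycle on the C/A bus (the helper name is ours):

```python
def ca_utilization(cmds_per_transfer, burst_cycles, cmd_slots_per_cycle=1):
    """Fraction of C/A command slots consumed per data burst window."""
    return cmds_per_transfer / (burst_cycles * cmd_slots_per_cycle)

# Ideal bank interleaving: 3 commands (ACT/RD/PRE) per 4-cycle 64B burst
# uses 3/4 of the C/A slots; a second concurrently active bank would need
# 6 commands in the same window and overflow the bus.
```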
\begin{figure}[t!]
\centering
\includegraphics[width=\columnwidth]{ddr_timing.pdf}
\vspace{-0.7cm}
\caption{Timing diagram of (a) ideal DRAM bank interleaving read operations; (b) The proposed {\textit{RecNMP}~} concurrent rank activation.
}
\label{fig:timing}
\vspace{-0.6cm}
\end{figure}
To overcome C/A bandwidth limitation, we propose a customized NMP-Inst with a compressed format of DDR commands to be transmitted from memory controller to {RecNMP~} PUs.
Figure~\ref{fig:timing}(b) illustrates the timing diagram of interleaving NMP-Inst to a 4 DIMMs $\times$ 2 Ranks per DIMM memory configuration.
Eight NMP-Insts can be transferred between the memory controller and DIMM interfaces in 4 DRAM data burst cycles at double data rate.
In the low-spatial-locality case (64B embedding vectors and one NMP-Inst per vector) with ideal bank interleaving, we could potentially activate 8 parallel ranks to perform 8${\times}$64B lookups concurrently in 4 DRAM data burst cycles.
Although customized instructions have been proposed before~\cite{nda-kim, chameleon, tensorDIMM}, our solution is the first one to directly deal with the C/A bandwidth limitation using DDR command compression that enables up to $8\times$ bandwidth expansion for small-sized embedding vectors (i.e. 64B) with low spatial locality.
Higher expansion ratio can be achieved with larger vector size.
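One way to express this scaling argument (our reading of the expansion claim, not a formula from the design): each 4-cycle burst window carries 8 NMP-Insts, each encoding a whole vector's DDR command sequence, whereas the baseline C/A bus sustains only a single 64B lookup per window.

```python
def nmp_bandwidth_expansion(ranks, vector_bytes, insts_per_window=8):
    """Lookup throughput per burst window relative to the single 64B lookup
    the baseline C/A interface can command in the same window.

    Concurrent lookups are capped by both the available ranks and the
    NMP-Inst slots per window; larger vectors amortize one instruction
    over more 64B bursts, raising the ratio further.
    """
    lookups_per_window = min(ranks, insts_per_window)
    return lookups_per_window * (vector_bytes // 64)
```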
\vspace{-0.25cm}
\subsection{Programming Model and Execution Flow}
\vspace{-0.1cm}
Like previous NMP designs~\cite{chameleon,pim-NN-training}, {\textit{RecNMP}~} adopts a heterogeneous computing programming model (e.g. OpenCL),
where the application is divided into host calls running on the CPU and NMP kernels being offloaded to RecNMP PUs.
NMP kernels are compiled into packets of NMP-Insts and transmitted to each memory channel over the DIMM interface to {\textit{RecNMP}~} PUs.
Results of NMP kernels are then transmitted back to the host CPU.
In Figure~\ref{fig:system_overview}(d), each 79-bit NMP-Inst contains fields associated with the parameters of an embedding operation, a locality hint bit (LocalityBit), and a pooling tag (PsumTag), passed across the HW/SW interface.
The proposed NMP-Inst format can fit within the standard 84-pin C/A and DQ interface.
Using a simple SLS function call in Figure~\ref{fig:offload_flow}(a) as an example, we walk through the execution flow of the proposed {\textit{RecNMP}~} programming model.
First, memory is allocated for SLS input and output data, and is marked up as either Host (cacheable) or NMP (non-cacheable) regions to simplify memory coherence between the host and \textit{RecNMP}.
Variables containing host-visible data, such as the two arrays \emph{Indices} and \emph{Lengths}, are initialized and loaded by the host and are cacheable in the host CPU's cache hierarchy.
The embedding table (Emb) in memory is initialized by the host as a host non-cacheable NMP region using a non-temporal hint (NTA)~\cite{nta}.
\begin{figure}[t!]
\centering
\includegraphics[width=\columnwidth]{programming_model-v1.pdf}
\vspace{-0.7cm}
\caption{(a) {\textit{RecNMP}~} SLS example code; (b) NMP packet; (c) NMP kernel offloading; (d) NMP-enabled memory controller.}
\label{fig:offload_flow}
\vspace{-0.7cm}
\end{figure}
Next, the code segment
marked as a NMP kernel is compiled to packets of NMP-Insts (Figure~\ref{fig:offload_flow}(b)).
A single SLS NMP kernel containing one batch of embedding poolings can be split into multiple NMP packets, with each packet having one or more pooling operations.
The NMP-Insts belonging to different embedding poolings in one NMP packet are tagged by PsumTag, and the maximum number of poolings in one packet is determined by the number of bits of the PsumTag.
We use a 4-bit PsumTag in our design.
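The packetization step can be sketched as follows. The splitting policy shown (flush a packet whenever the PsumTag space wraps) is one simple possibility; function and field names are illustrative.

```python
def split_into_packets(batch_poolings, psumtag_bits=4):
    """Split one batch of embedding poolings into NMP packets.

    batch_poolings: list of poolings, each a list of NMP-Inst dicts.
    A packet holds at most 2**psumtag_bits distinct poolings; each
    pooling's NMP-Insts are tagged with its own PsumTag.
    """
    max_pools = 2 ** psumtag_bits
    packets, current = [], []
    for pool_id, pooling in enumerate(batch_poolings):
        tag = pool_id % max_pools
        if current and tag == 0:          # PsumTag space exhausted: new packet
            packets.append(current)
            current = []
        for inst in pooling:
            current.append(dict(inst, PsumTag=tag))
    if current:
        packets.append(current)
    return packets
```

With the 4-bit PsumTag of our design, a packet carries at most 16 poolings; a batch of 17 poolings therefore spills into a second packet.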
At runtime, the NMP kernel is launched by the host with special hardware/driver support to handle NMP packet offloading; access to the memory management unit (MMU) to request memory for NMP operations; and the virtual memory system for logical-to-physical addresses translation (Figure~\ref{fig:offload_flow}(c)).
The offloaded NMP packets bypass L1/L2 and eventually arrive at the host-side memory controller with an NMP extension.
To prevent the default FR-FCFS policy from scheduling the NMP packets out of order, the NMP extension of the memory controller includes extra scheduling and arbitration logic.
As illustrated in Figure~\ref{fig:offload_flow}(d), the memory controller with the NMP extension receives concurrent NMP packets from parallel execution of multiple host cores, which are stored in a queue.
Once scheduled, each NMP packet is decoded into queued NMP-Insts.
Physical-to-DRAM address mapping is then performed and a FR-FCFS scheduler reorders the NMP-Insts within a packet only and not between packets.
Instead of sending direct DDR commands, ACT/RD/PRE actions are compressed into the 3-bit DDR\_cmd field in the NMP-Inst.
The host-side memory controller also calculates the correct accumulation counter value to configure the memory-mapped control registers in the {\textit{RecNMP}~} PU.
Finally, after the completion of all the counter-controlled local computation inside the {\textit{RecNMP}~} PU for one NMP packet, the final summed result is transmitted over the DIMM interface and returned to the \emph{Output} cacheable memory region visible to the CPU.
\vspace{-0.2cm}
\subsection{HW/SW Co-optimization}
\label{sec:co-opt}
\vspace{-0.1cm}
Our locality analysis of production recommendation traffic in Section~\ref{sec:locality} illustrates intrinsic temporal reuse opportunities in embedding table lookups.
We propose memory-side caches (RankCache) inside rank-NMP modules.
To extract more performance from memory-side caching, we explore two additional HW/SW co-optimization techniques.
This locality-aware optimization results in 33.7\% memory latency improvement and 45.8\% memory access energy saving (detailed performance benefits are presented in Section~\ref{sec:eval}).
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{intra_mini_batch_opt_1-v1.pdf}
\vspace{-0.7cm}
\caption{
NMP packet scheduling scheme that prioritizes a batch of requests to a single table.}
\label{fig:intra_mini_batch_opt}
\vspace{-0.5cm}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{opt_hit_rate.pdf}
\vspace{-0.7cm}
\caption{Hit rate of 1MB cache without optimization, with table-aware packet scheduling optimization, with both table-aware packet scheduling and hot entry profiling optimization, and ideal case without interference.
}
\label{fig:opt_hit_rate}
\vspace{-0.5cm}
\end{figure}
First, to preserve the intrinsic locality from embedding lookups residing in one table, we propose to prioritize scheduling NMP packets from a single batch of requests to the same embedding table together -- {\it table-aware packet scheduling}.
In production workloads, the memory controller receives NMP packets from parallel SLS threads with equal scheduling priority.
The intra-embedding table temporal locality is not easily retained because of the interference from lookup operations of multiple embedding tables.
This locality can be further degraded when multiple recommendation models are co-located.
Therefore, as illustrated in Figure~\ref{fig:intra_mini_batch_opt}, we propose an optimized table-aware NMP packet scheduling strategy to exploit the intrinsic temporal locality within a batch of requests by ordering packets from the same embedding table in one batch first, allowing the embedding vectors to be fetched together, thereby retaining the temporal locality.
Since SLS operators running in parallel threads access separate embedding tables, the mechanics of our implementation follow the thread-level memory scheduler~\cite{mem-schedule}.
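A minimal model of this policy is a stable sort of the packet queue on (batch, table), so packets destined for the same table in the same batch issue back-to-back while arrival order within each group is preserved. Field names are illustrative.

```python
def table_aware_schedule(packet_queue):
    """Reorder NMP packets so that packets from the same (batch, table)
    pair are scheduled together, retaining intra-table temporal locality.
    Python's sort is stable, so arrival order survives within a group.
    """
    return sorted(packet_queue, key=lambda p: (p["batch_id"], p["table_id"]))
```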
Next, we propose another optimization technique -- {\it hot entry profiling}, built on top of the observation that a small subset of embedding entries exhibit relatively higher reuse characteristics.
We profile the vector of indices used for embedding table lookup in an NMP kernel and mark the entries with high locality by explicitly annotating NMP-Insts with a \emph{LocalityBit}.
NMP-Inst with LocalityBit set will be cached in the RankCache; otherwise, the request will bypass the RankCache.
This hot entry profiling step can be performed before model inference, prior to issuing SLS requests, and costs $<$2\% of the total end-to-end execution time.
We profile the indices of each incoming batch of embedding lookups and set LocalityBit if the vectors are accessed $>t$ times within the batch.
Infrequent ($<t$ times) vectors will bypass the RankCache and are read directly from the DRAM devices.
We sweep the threshold $t$ and pick the value with the highest cache hit rate to use in our simulation.
This hot entry profiling optimization reduces cache contention and evictions caused by the less-frequent entries in the RankCache.
Figure~\ref{fig:opt_hit_rate} depicts the hit rate improvement when the different optimizations are applied.
Comb-8 indicates the overall hit rate at model level of 8 embedding tables (T1-T8).
To gain more insights, we investigate the hit rate of embedding tables (T1 to T8) in Comb-8.
The ideal bar indicates the theoretical hit rate with an infinitely sized cache.
With the proposed co-optimization, the measured hit rate closely approaches the ideal case across the individual embedding tables, even for the trace with limited locality (T8), illustrating the proposed technique can effectively retain embedding vectors with high likelihood of reuse in RankCache.
\vspace{-0.2cm}
\section{Evaluation Results} \label{sec:eval}
\vspace{-0.05cm}
This section presents a quantitative evaluation of {\textit{RecNMP}~} and shows it accelerates end-to-end personalized recommendation inference by up to $4.2\times$.
We first present the latency improvement of the offloaded SLS operators on a baseline system before analyzing different optimizations, including data placement with page coloring, memory-side caching, table-aware packet scheduling and hot-entry profiling. We compare {\textit{RecNMP}~} with the state-of-the-art NMP systems TensorDIMM and Chameleon~\cite{tensorDIMM,chameleon}.
We also analyze the effect of {\textit{RecNMP}~} on co-located FC operators.
Finally, an end-to-end evaluation of throughput improvement and energy savings at the model level and the area/power overhead is presented.
\vspace{-0.3cm}
\subsection{SLS Operator Speedup}
\vspace{-0cm}
\begin{figure}[t!]
\centering
\includegraphics[width=\columnwidth]{sls_speedup_no_cache.pdf}
\vspace{-0.7cm}
\caption{(a) Normalized latency of RecNMP-base to the baseline DRAM with different memory configuration (DIMM x Rank) and NMP packet size; (b) Distribution of rank-level load imbalance for 2-, 4-, and 8-rank systems.}
\label{fig:sls_speedup_no_cache}
\vspace{-0.6cm}
\end{figure}
In theory, because {\textit{RecNMP}~} exploits rank-level parallelism, speedup will scale linearly with the number of ranks and number of DIMMs in a system.
Therefore, we choose four memory channel configurations (\# of DIMMs $\times$ \# of ranks per DIMM), namely $1\times2$, $1\times4$, $2\times2$, and $4\times2$, to demonstrate a range of system implementations.
\textbf{Basic {\textit{RecNMP}~} design without RankCache.}
We start by evaluating {\textit{RecNMP}~} without a RankCache (RecNMP-base).
In addition to varying the DIMM/rank configuration, we sweep the number of poolings in one NMP packet, where one pooling, in DLRM, is the sum of 80 embedding vectors.
In Figure~\ref{fig:sls_speedup_no_cache}(a), we find that 1) SLS latency indeed scales linearly as we increase the number of active ranks in a channel; and 2) latency also decreases when there are more pooling operations in an NMP packet.
The variation we observe, as well as the gap between the actual and theoretical speedup ($2\times$ for 2-rank, $4\times$ for 4-rank, and $8\times$ for 8-rank systems), is caused by the uneven distribution of embedding lookups across the ranks.
Since the ranks operate in parallel, the latency of the SLS operation is determined by the slowest rank, i.e., the rank that performs the most embedding lookups.
Figure~\ref{fig:sls_speedup_no_cache}(b) shows the statistical distribution of the fraction of work performed by the slowest rank.
When the NMP packet has fewer NMP-Insts, the workload distributes more unevenly, resulting in a longer tail that degrades average speedup.
To address the load imbalance, we experiment with software methods to allocate an entire embedding table to the same rank.
One software approach to perform such data layout optimization is page coloring~\cite{page-coloring}.
As indicated in Figure~\ref{fig:sls_speedup_no_cache}(a), page coloring could achieve 1.96$\times$, 3.83$\times$ and 7.35$\times$ speedup in 2-rank, 4-rank and 8-rank systems compared with the DRAM baseline.
The specific page coloring mechanism can be implemented in the operating system by assigning a fixed color to the page frames used by an individual embedding table. The virtual memory system would need to be aware of the DRAM configuration to allocate pages of the same color to physical addresses that map to the same rank.
This data layout optimization can lead to near-ideal speedup, but it requires maintaining high model- and task-level parallelism such that multiple NMP packets from different SLS operators can be issued simultaneously to all the available ranks.
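A sketch of such a page-coloring allocator follows, assuming the virtual memory system exposes which physical pages map to which rank (function names and the per-rank free lists are illustrative): every page of a given table receives the same color, i.e., lands in the same rank.

```python
def color_of_table(table_id, num_ranks):
    """Assign each embedding table a fixed page color == its target rank."""
    return table_id % num_ranks

def allocate_page(table_id, free_pages_by_rank, num_ranks):
    """Hand out a physical page whose rank bits match the table's color,
    so an entire embedding table stays resident in one rank."""
    rank = color_of_table(table_id, num_ranks)
    return free_pages_by_rank[rank].pop()
```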
\textbf{{\textit{RecNMP}~} with RankCache and co-optimization.}
Memory-side caching at the rank-level with table-aware packet scheduling and hot entry profiling is one of the key features of \textit{RecNMP}; these optimizations are described in Section~\ref{sec:co-opt}.
Figure~\ref{fig:sls_speedup_cache_opt}(a) depicts the performance benefits (i.e. latency reduction) enabled by applying different optimization techniques: 1) adding a RankCache, 2) scheduling accesses to the same table together, 3) adding a cachability hint bit from software.
Using a configuration with 8-ranks 8 poolings per packet, we observe 14.2\% latency improvement by adding a 128KB RankCache and an additional 15.4\% improvement by prioritizing the scheduling of NMP packets from the same table and batch.
In the final combined optimization, \emph{schedule + profile}, we pass a cacheability hint after profiling the indices in the batch, which reduces cache contention and allows low-locality requests not marked for caching to bypass the RankCache,
delivering another 7.4\% improvement.
The total memory latency speedup achieved by offloading SLS to an optimized design (RecNMP-opt) is $9.8\times$.
In Figure~\ref{fig:sls_speedup_cache_opt}(b), we sweep RankCache capacity from 8KB to 1MB and display how cache size affects the normalized latency and cache hit rate.
When RankCache is small (e.g. 8KB), the low cache hit rate (e.g. 24.9\%) leads to high DRAM access latency.
The performance reaches the optimal design point at 128KB.
Further increases of cache size yield marginal hit-rate improvement, since the hit rate already approaches the compulsory-miss limit of the trace.
Yet it incurs longer cache access latency and degrades overall performance.
\begin{figure}[t!]
\centering
\includegraphics[width=\columnwidth]{sls_speedup_cache_opt_v2.pdf}
\vspace{-0.7cm}
\caption{(a) Normalized latency of RecNMP-cache and RecNMP-opt with schedule and hot-entry profile optimization to the baseline DRAM system; (b) Cache size sweep effects in RecNMP-opt.}
\label{fig:sls_speedup_cache_opt}
\vspace{-0.5cm}
\end{figure}
\textbf{Performance comparison.}
We compare {\textit{RecNMP}~} with state-of-the-art NMP designs such as Chameleon~\cite{chameleon} and TensorDIMM~\cite{tensorDIMM}.
Both are DIMM-based near-memory processing solutions.
TensorDIMM scales the embedding operation performance linearly with the number of parallel DIMMs.
Since non-SLS operators are accelerated by GPUs in TensorDIMM, which is orthogonal to near-memory acceleration techniques, we only compare its memory latency speedup with \textit{RecNMP}.
Chameleon does not directly support embedding operations.
We estimate the performance of Chameleon by simulating the temporally and spatially multiplexed C/A and DQ timing of its NDA accelerators.
In Figure~\ref{fig:sls_speedup_tensordimm}, as {\textit{RecNMP}~} exploits rank-level parallelism, its performance scales when either the number of DIMMs and ranks increase, whereas Chameleon and TensorDIMM only scale by increasing the number of DIMMs.
This is evident as we sweep the memory channel configuration.
When we increase the number of ranks per DIMM, {\textit{RecNMP}~} delivers 3.3-6.4$\times$ and 2.4-4.8$\times$ better performance than Chameleon and TensorDIMM, respectively.
\begin{figure}[t!]
\centering
\includegraphics[width=\columnwidth]{speedup_recnmp_tensordimm.pdf}
\vspace{-0.7cm}
\caption{Comparison between the Host baseline, RecNMP-opt, TensorDIMM~\cite{tensorDIMM} and Chameleon~\cite{chameleon} with both random and production traces.}
\label{fig:sls_speedup_tensordimm}
\vspace{-0.6cm}
\end{figure}
It is also worth noting that {\textit{RecNMP}~} has performance advantages ($1.9\times$ and $1.4\times$) even in configurations with one rank per DIMM, thanks to the memory-side caching, table-aware packet scheduling, and hot-entry profiling optimization techniques.
Neither Chameleon nor TensorDIMM includes a memory-side cache to explicitly take advantage of the available locality in the memory access patterns, hence their performance, with respect to memory latency, is agnostic to traces with different amounts of data reuse.
In contrast, {\textit{RecNMP}~} design can extract 40\% more performance (shown as shaded) from production traces when compared to fully random traces.
\vspace{-0.2cm}
\subsection{FC Operator Speedup}
\label{sec:FC-exp}
\vspace{-0.1cm}
Although {\textit{RecNMP}~} is designed to accelerate the execution of SLS operators, it can also improve FC performance by alleviating cache contention caused by model co-location. As the degree of data-level parallelism increases, the FC weights brought into the cache hierarchy have higher reuse, normally resulting in fewer cache misses.
However, when co-located with other models, reusable FC data are often evicted early from the cache by SLS data, causing performance degradation.
Figure~\ref{fig:fc_contention} shows the degree of performance degradation on the co-located FC operations.
The amount of performance degradation experienced by the FC layers varies by the FC sizes, the degree of co-location, and the pooling values.
When examining the FC performance in baseline systems,
we observe worsening FC performance with larger FC weights at higher co-location degrees and higher pooling values.
{\textit{RecNMP}~} effectively reduces this cache contention pressure; we show the base {\textit{RecNMP}~} design, but RecNMP-opt impacts FC performance equally, as it offloads the same SLS computation.
This beneficial effect ranging from 12\% to 30\% is more pronounced for larger FCs whose weight parameters exceed the capacity of the L2 cache and reside mainly inside the LLC cache.
For smaller FCs whose working set fits inside the L2 cache (e.g. all BottomFC and RM1's TopFC), the relative improvement is comparatively lower ($\sim4\%$).
\begin{figure}[t!]
\centering
\includegraphics[width=\columnwidth]{fc_contention_v5.pdf}
\vspace{-0.8cm}
\caption{Effect of model co-location on latency of (a) TopFC in RM2-small model; (b) TopFC in RM2-large model. }
\label{fig:fc_contention}
\vspace{-0.3cm}
\end{figure}
\vspace{-0.2cm}
\subsection{End-to-end Model Speedup}
\vspace{-0.1cm}
\textbf{Throughput improvement.} To estimate the improvement of end-to-end recommendation inference latency, we calculate the total speedup by weighting the speedup of both SLS and non-SLS operators.
We measure model-level speedup across all four representative model configurations, shown in Figure~\ref{fig:model_speedup}(a).
Not surprisingly, the model that spends the most time running SLS operators (RM2-large) receives the highest speedup.
In Figure~\ref{fig:model_speedup}(b), the performance improvement obtained by {\textit{RecNMP}~} varies with batch size.
In general, the model-level speedup increases with a larger batch size, as the proportion of time spent in accelerated SLS operators grows.
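The weighting of operator-level speedups can be sketched as an Amdahl-style calculation; the helper below is illustrative, and the 80/20 time split in the example is a hypothetical configuration, not a measured one:

```python
def end_to_end_speedup(t_sls, t_other, s_sls, s_other=1.0):
    """Weighted model-level speedup: each operator class contributes
    its baseline time divided by its operator-level speedup."""
    baseline = t_sls + t_other
    accelerated = t_sls / s_sls + t_other / s_other
    return baseline / accelerated

# A model spending 80% of its time in SLS, with a 9.8x SLS speedup:
print(round(end_to_end_speedup(0.8, 0.2, 9.8), 2))  # -> 3.55
```

This also explains why SLS-heavy models and larger batch sizes see larger end-to-end gains: both raise the `t_sls` fraction.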
\begin{figure}[t!]
\centering
\includegraphics[width=\columnwidth]{model_speedup_sensitivity_v9.pdf}
\vspace{-0.7cm}
\caption{(a) Single end-to-end speedup of recommendation inference with 2-rank, 4-rank and 8-rank {\textit{RecNMP}~} systems; (b) Single model speedup with different batch size; (c) Host and RecNMP-opt co-located model latency-throughput tradeoff.}
\label{fig:model_speedup}
\vspace{-0.6cm}
\end{figure}
Figure~\ref{fig:model_speedup}(c) looks at the overall effect of increasing co-location in the presence of random or production traces for both the CPU baseline and our proposed {\textit{RecNMP}~} solution.
Co-location generally increases the system throughput at the cost of degrading latency.
Compared to random traces, the locality present in production traces improves performance.
However, this locality performance ``bonus'' wears off as the level of model co-location increases due to the cache interference from the growing number of embedding tables in multiple models.
Applying {\textit{RecNMP}~} in an 8-rank system results in 2.8-3.5$\times$ and 3.2-4.0$\times$ end-to-end speedup of RM1-large and RM2-small, respectively, as the number of co-located models increases, because the fraction of time spent in SLS rises.
The improvement of both latency and throughput enabled by {\textit{RecNMP}~} is clearly observed compared to the baseline system.
\textbf{Memory energy savings.} Compared with the baseline DRAM system, {\textit{RecNMP}~} provides 45.8\% memory energy savings.
{\textit{RecNMP}~} saves energy by reducing data movement between the processor and memory, performing local accumulation near the DRAM devices, and by cutting leakage energy through reduced latency.
In addition, by incorporating memory-side caching and applying co-optimization techniques to improve RankCache hit rate, {\textit{RecNMP}~} achieves extra energy savings by reducing the number of DRAM accesses.
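To illustrate where the data-movement savings come from, the back-of-the-envelope sketch below uses the energy parameters from Table~\ref{tab:sys_config}. The pooling size and vector width are hypothetical, and leakage and caching effects are ignored, so the saving it reports is illustrative and differs from the measured 45.8\%:

```python
# Energy parameters taken from Table (tab:sys_config); workload sizes below
# (40 lookups of 128B vectors, 40 activates) are illustrative assumptions.
E_ACT = 2.1e-9    # J per DRAM activate
E_RDWR = 14e-12   # J per bit read/written at the DRAM device
E_IO = 22e-12     # J per bit moved over the off-chip interface

def sls_energy(num_lookups, vector_bytes, activates, offload=False):
    """Rough memory energy of one pooling; with near-memory offload only
    the pooled result (one vector) crosses the off-chip interface."""
    bits_read = num_lookups * vector_bytes * 8
    io_bits = vector_bytes * 8 if offload else bits_read
    return activates * E_ACT + bits_read * E_RDWR + io_bits * E_IO

base = sls_energy(40, 128, 40)
nmp = sls_energy(40, 128, 40, offload=True)
print(f"off-chip I/O saving: {1 - nmp / base:.1%}")
```

The sketch only captures the reduced off-chip traffic; the additional savings from RankCache hits (fewer DRAM accesses) would lower the `E_ACT` and `E_RDWR` terms as well.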
\textbf{Area/power overhead.}
We estimate {\textit{RecNMP}~} design overhead assuming 250MHz clock frequency and 40nm CMOS technology.
The area and power numbers are derived from Synopsys Design Compiler (DC) for the arithmetic and control logic and Cacti~\cite{cacti} for SRAM memory (i.e. RankCache).
Table~\ref{tab:design_overhead} summarizes the overhead of each {RecNMP~} processing unit for both the basic configuration without RankCache and the optimized configuration with RankCache.
\begin{table}[]
\caption{Summary of {\textit{RecNMP}~} Design Overhead}
\label{tab:design_overhead}
\centering
\begin{tabular}{|c|c|c|c|}
\hline
\multirow{2}{*}{} & \multicolumn{2}{c|}{RecNMP PU} & \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Chameleon~\cite{chameleon}\\ (8 CGRA\\ accelerators)\end{tabular}} \\ \cline{2-3}
& \begin{tabular}[c]{@{}c@{}}RecNMP-base\\ w/o RankCache\end{tabular} & \begin{tabular}[c]{@{}c@{}}RecNMP-opt\\ with RankCache\end{tabular} & \\ \hline
Area (mm$^2$) & 0.34 & 0.54 & 8.34 \\ \hline
Power (mW) & 151.3 & 184.2 & 3138.6-3251.8 \\ \hline
\end{tabular}
\vspace{-0.5cm}
\end{table}
Compared with Chameleon, which embeds 8 CGRA cores per DIMM, our {RecNMP~} PU consumes a fraction of the area (4.1\%, 6.5\% for RecNMP-base and RecNMP-opt) and power (4.6-5.9\%).
When scaling {RecNMP~} PUs to multiple ranks in the DIMM, the total area and power will grow linearly, but it also translates to linearly-scaled embedding speedup.
Given that a single DIMM consumes 13W~\cite{tensorDIMM} and a typical buffer chip takes up 100mm$^2$~\cite{buffer-chip}, {\textit{RecNMP}~} incurs small area/power overhead that can easily be accommodated without requiring any changes to the DRAM devices.
\section{Experimental Methodology}
\label{sec:method}
\vspace{-0.1cm}
Our experimental setup combines real-system evaluations with cycle-level memory simulations, as presented in Figure \ref{fig:simulation_setup}.
For real-system evaluations, we run production-scale recommendation models on server-class CPUs found in the data center.
This allows us to measure the impact of accelerating embedding operations as well as the side-effect of improved memory performance of FC operations on end-to-end models.
Cycle-level memory simulations allow us to evaluate the design tradeoffs when DRAM systems are augmented with \textit{RecNMP}. Table~\ref{tab:sys_config} summarizes the parameters and configurations used in the experiments.
We ran experiments on an 18-core Intel Skylake with DDR4 memory. The DRAM simulation used standard DDR4 timing from a Micron datasheet~\cite{ddr4-datasheet}.
\if 0
\textbf{Recommendation workloads.}
We configure the DLRM~\cite{DLRM} benchmark to a set of models that are representative of production-scale use cases~\cite{arxiv-gupta-19} and summarized in Table~\ref{tab:rec_models}.
The four distinct recommendation models--- RM1-small, RM1-large, RM2-small, and RM2-large---are able to cover a wide design space.
RM1-small and RM2-small represent the state-of-the-art personalized recommendation models, whereas RM1-large and RM2-large represent recommendation models with larger resource requirements.
Generally, RM1 has smaller and fewer number of embedding tables than that of RM2.
RM1 and RM2 have a BottomFC layer of a similar size.
But the size of the TopFC layer is determined by the total number of sparse and dense features from the embedding lookups and BottomFC.
The size of the FC weights in RM1-small and RM1-large is approximately 0.3MB while those weight sizes of RM2-small and RM2-large are 1.2MB and 3.2MB, respectively.
\fi
\textbf{Real-system evaluation.}
We configured the DLRM benchmark with the same model parameters and traces as in Figure~\ref{fig:sparsenn}(b) and Section~\ref{sec:char}.
The workload characterization (Section~\ref{sec:char}) and real-system experiments (Section~\ref{sec:eval}) are performed on single-socket Intel Skylake servers, with specifications listed in Table~\ref{tab:sys_config}.
\if 0
To achieve high performance and saturate memory bandwidth we exploit a combination of data-level, model-level, and task-level parallelism.
Data parallelism is determined by the batch-size.
For model parallelism, we increase the number of parallel threads for each model to achieve higher inter-operator parallelism.
For task parallelism, we co-locate multiple models on a single machine with the dedicated cores for each models.
Co-locating multiple inferences on a single machine stresses the capacity of the shared LLC, which could cause latency degradation. The cache contention effect for model co-location will be studied in more details using the experiments in Section~\ref{sec:FC-exp}.
\fi
\begin{figure}[t!]
\centering
\includegraphics[width=\columnwidth]{simulation_setup_v3.pdf}
\vspace{-0.7cm}
\caption{{\textit{RecNMP}~} experimental methodology.}
\label{fig:simulation_setup}
\vspace{-0.5cm}
\end{figure}
\begin{table}[t!]
\caption{System Parameters and Configurations}
\vspace{-0.2cm}
\label{tab:sys_config}
\centering
\begin{tabular}{l|c|l|c}
\hline\hline
\multicolumn{4}{c}{\textbf{Real-system Configurations}}\\
\hline
Processor & 18 cores, 1.6 GHz & L1I/D & 32 KB\\
\hline
L2 cache & 1 MB & LLC & 24.75 MB \\
\hline
DRAM & \multicolumn{3}{c}{\begin{tabular}[c]{@{}c@{}}DDR4-2400MHz 8Gb $\times$8, 64 GB, \\ 4 Channels $\times$ 1 DIMM $\times$ 2 Ranks, FR-FCFS\\ 32-entry RD/WR queue, Open policy,\\ Intel Skylake address mapping~\cite{skladdr}\end{tabular}} \\
\hline\hline
\multicolumn{4}{c}{\textbf{DRAM Timing Parameters}} \\
\hline
\multicolumn{4}{c}{\begin{tabular}[c]{@{}c@{}} tRC=55, tRCD=16, tCL=16, tRP=16, tBL=4\\ tCCD\_S=4, tCCD\_L=6, tRRD\_S=4, tRRD\_L=6, tFAW=26\end{tabular}}\\
\hline\hline
\multicolumn{4}{c}{\textbf{Latency/Energy Parameters}} \\
\hline
\multicolumn{4}{c}{\begin{tabular}[c]{@{}c@{}} DDR Activate = 2.1nJ, DDR RD/WR = 14pJ/b, Off-chip IO = 22pJ/b\\
RankCache RD/WR = 1 cycle, 50pJ/access,\\ FP32 adder = 3 cycles, 7.89pJ/Op, FP32 mult = 4 cycles, 25.2pJ/Op\\\end{tabular}}\\
\hline\hline
\end{tabular}
\vspace{-0.7cm}
\end{table}
\textbf{Cycle-level memory simulation.}
We build the {\textit{RecNMP}~} cycle-level simulation framework with four main components: (1) physical addresses mapping module; (2) packet generator; (3) locality-aware optimizer; and (4) a cycle-accurate model of a {RecNMP~} PU consisting of DRAM devices, RankCache, arithmetic and control logic.
We use Ramulator \cite{ramulator} to conduct cycle-level evaluations of DDR4 devices. On top of Ramulator, we build a cycle-accurate LRU cache simulator for RankCache and model of the 4-stage pipeline in the rank-NMP module.
Cacti~\cite{cacti} is used to estimate the access latency and area/energy of RankCache.
The hardware implementation used to estimate the latency, area and power of the arithmetic logic is built from Synopsys Design Compiler with a commercial 40nm technology library.
To estimate the DIMM energy, we use Cacti-3DD~\cite{cacti-3dd} for DRAM devices and Cacti-IO~\cite{cacti-io} for off-chip I/O at the DIMM level.
During simulation we emulate the scheduling packet generation steps taken by the software stack and the memory controller. First, we apply a standard page mapping method~\cite{page_map} to generate the physical addresses from a trace of embedding lookups by assuming the OS randomly selects free physical pages for each logical page frame.
This physical address trace is fed to Ramulator to estimate baseline memory latency.
For {\textit{RecNMP}~} workloads, the packet generator divides the physical address trace into packets of NMP-Insts that are sent to the cycle-accurate model.
Next, when evaluating systems with HW/SW co-optimizations, the locality-aware optimizer performs table-aware packet scheduling and hot-entry profiling and decides the sequence of NMP-Insts. {\textit{RecNMP}~} activates all memory ranks in parallel, and traditional DRAM bank interleaving is also used.
For each NMP packet, performance is determined by the slowest rank that receives the heaviest memory request load.
Rank-NMP and DIMM-NMP logic units are pipelined to hide the latency of memory read operations.
The total latency of {\textit{RecNMP}~} includes extra DRAM cycles during initialization to configure the accumulation counter and the vector size register and a cycle in the final stage to transfer the sum to the host. The latency, in DRAM cycles, of the major components including RankCache, rank-NMP logic performing weighted-partial sum and final sum are in Table~\ref{tab:sys_config}.
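The page-mapping step described above can be sketched as follows; the page size and the size of the free-frame pool are illustrative assumptions, not the values used in the actual framework:

```python
import random

PAGE_SIZE = 4096  # illustrative 4KB OS page

def map_trace(virtual_addrs, seed=0):
    """Translate a virtual-address trace to physical addresses, emulating
    an OS that assigns a random free physical frame to each new page."""
    rng = random.Random(seed)
    free_frames = list(range(1 << 20))  # hypothetical pool of free frames
    rng.shuffle(free_frames)
    page_table = {}
    phys = []
    for va in virtual_addrs:
        vpn, offset = divmod(va, PAGE_SIZE)
        if vpn not in page_table:       # first touch: grab a random frame
            page_table[vpn] = free_frames.pop()
        phys.append(page_table[vpn] * PAGE_SIZE + offset)
    return phys

trace = [0x1000, 0x1040, 0x2000, 0x1080]
print([hex(a) for a in map_trace(trace)])
```

Accesses within one logical page keep their offsets (preserving intra-page spatial locality), while distinct pages land on unrelated physical frames, which is what makes the resulting trace realistic input for the memory simulator.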
\section{Introduction}
\label{sec:intro}
\vspace{-0.1cm}
Personalized recommendation is a fundamental building block of
many internet services used by search engines, social networks,
online retail, and content streaming~\cite{GCP,Covington:2016,Walmart_AI,Amazon_Personalize}.
Today's personalized recommendation systems leverage
deep learning to maximize accuracy and deliver the best user experience~\cite{hazelwood2018applied,Zhou:2018,Guo:2018,Cheng:2016,DLRM}.
The underlying deep learning models
now consume the majority of the datacenter cycles spent on AI~\cite{arxiv-gupta-19,sigarch-blog}.
For example, recent analysis reveals that the top recommendation models collectively contribute to more than 72\% of all AI inference cycles across Facebook's production datacenters~\cite{sigarch-blog}.
Despite the large computational demand and production impact,
relatively little research has been conducted to optimize deep learning (DL)-based recommendation.
Most research efforts within the architecture community have
focused on accelerating the compute-intensive, highly-regular computational patterns found in
fully-connected (FC), convolution (CNN), and recurrent (RNN)
neural networks~\cite{cnvlutin,ISAAC,EIE,Minerva,Eyeriss,NeuroCube,Cambricon,TPU_short,ScaleDeep,SCNN,MacCNNEfficiency,Scalpel,OptLPSGD,DNNmemristorreliable,GANaccelerator,CompressingDMA,InSituAI,Ganax,snapea,UCNN,OutlierDNN,PredictionCNN,Bitfusion,Gist,DarkPruning,InputSim}.
Unlike CNNs and RNNs, recommendation models exhibit low compute-intensity and little to no regularity.
Existing acceleration techniques either do not apply
or offer small improvements at best, as they tend to exploit regular, reusable dataflow patterns and assume high spatial locality, neither of which addresses the main performance bottleneck of recommendation models~\cite{arxiv-gupta-19}.
Given the volume of personalized inferences and their rapid growth rate occurring in the data center,
an analogous effort to improve performance of these models would have substantial impact.
\begin{figure}[t!]
\centering
\includegraphics[width=\columnwidth]{motivation.pdf}
\vspace{-0.7cm}
\caption{(a) Compute and memory footprint of common deep learning operators, sweeping batch size;
(b) Roofline lifting effect and the operator-level (FC, SLS) and end-to-end model (RM) speedup enabled by \textit{RecNMP}.}
\label{fig:motiv}
\vspace{-0.6cm}
\end{figure}
To suggest personalized contents to individual users, recommendation models are generally structured to take advantage of both continuous (dense) and categorical (sparse) features.
The latter are captured by large embedding tables with sparse lookup and pooling operations.
These embedding operations dominate the run-time of
recommendation models and are markedly distinct from other layer types.
A quantitative comparison of the raw compute and memory access requirements is
shown in Figure~\ref{fig:motiv}(a).
Sparse embedding operations, represented by SparseLengthsSum (SLS), consist of a small sparse lookup into a large embedding table
followed by a reduction of the embedding entries (i.e., pooling).
They present two unique challenges:
First, while the sparse lookup working set is comparatively small (MBs), the irregular nature of the table indices exhibits poor predictability, rendering typical prefetching and dataflow optimization techniques ineffective.
Second, the embedding tables are on the order of tens to hundreds of GBs,
overwhelming on-chip memory resources.
Furthermore, the circular points in Figure~\ref{fig:motiv}(b) show the operational
intensity of SLS is orders of magnitude less than FC layers.
Low intensity limits the potential of custom hardware
including the specialized datapaths and on-chip memories used in CNN/RNN accelerators.
The result is a fundamental memory bottleneck that cannot be overcome
with standard
caching (e.g., tiling~\cite{tile}),
algorithmic (e.g., input batching),
or hardware acceleration techniques.
This paper proposes \textit{RecNMP}---a near-memory processing solution
to accelerate the embedding operations for DL-based recommendation.
{\textit{RecNMP}~} is a lightweight DIMM-based system built on top of existing standard DRAM technology.
We focus on DIMM-based near-memory processing~\cite{nda-kim,chameleon,tensorDIMM}
instead of resorting to specialized 2.5D/3D integration processes (e.g. HBM)~\cite{graphpim,pim-enabled,NeuroCube}. The DIMM form factor with commodity DDR4
devices can support the 100GB+ capacities necessary for production-scale recommendation models with low cost.
By eliminating the off-chip memory bottleneck and exposing higher internal bandwidth
we find that {\textit{RecNMP}~} provides significant opportunity to improve performance and efficiency by lifting the roofline by 8$\times$ for the bandwidth-constrained region
(Figure~\ref{fig:motiv}(b)), enabling optimization opportunity not feasible with existing
systems.
We have performed a detailed characterization of recommendation models using open-source, production-scale DLRM benchmark~\cite{DLRM, arxiv-gupta-19} as a case study.
This analysis quantifies the potential benefits of near-memory processing in accelerating recommendation models and builds the intuition for co-designing the NMP hardware with the algorithmic properties of recommendation.
Specifically, it highlights the opportunity for the {\textit{RecNMP}~} architecture in which
bandwidth-intensive embedding table operations are performed in the memory and compute-intensive
FC operators are performed on the CPU (or potentially on an accelerator).
The proposed {\textit{RecNMP}~} design exploits DIMM- and rank-level parallelism in DRAM memory systems. {\textit{RecNMP}~} performs local lookup and pooling functions near memory, supporting a range of sparse embedding inference operators that follow the general Gather-Reduce execution pattern.
In contrast to a general-purpose NMP architecture, we make a judicious design choice to implement selected lightweight functional units with small memory-side caches to limit the area overhead and power consumption. We combine this light-weight hardware with software optimizations including table-aware packet scheduling and hot entry profiling.
Compared to previous work whose performance evaluation is solely based on randomly-generated embedding accesses~\cite{tensorDIMM}, our characterization and experimental methodology is modeled after representative production configurations and is evaluated using real production embedding table traces.
Overall, {\textit{RecNMP}~} leads to significant embedding access latency reduction ($9.8\times$) and improves end-to-end recommendation inference performance ($4.2\times$) as illustrated in Figure~\ref{fig:motiv}(b).
Our work makes the following research contributions:
\begin{itemize}
\setlength{\itemsep}{0pt}
\setlength{\parskip}{0pt}
\item Our in-depth workload characterization shows that production recommendation models are constrained by memory bandwidth.
Our locality analysis using production embedding table traces reveals distinctive spatial and temporal reuse patterns and motivates a custom-designed NMP approach for recommendation acceleration.
\item We propose \textit{RecNMP}, a lightweight DDR4-compatible near-memory processing architecture. {\textit{RecNMP}~} accelerates the execution of a broad class of recommendation models and exhibits 9.8$\times$ memory latency speedup and 45.9\% memory energy savings.
Overall, {\textit{RecNMP}~} achieves 4.2$\times$ end-to-end throughput improvement.
\item We examine \textit{hardware-software co-optimization} techniques (memory-side caching, table-aware packet scheduling, and hot entry profiling) to enhance {\textit{RecNMP}~} performance, and customized NMP instruction with $8\times$ DRAM command/address bandwidth expansion.
\item A \textit{production-aware evaluation framework} is developed to take into account common data-center practices and representative production configuration, such as model co-location and load balancing.
\end{itemize}
\section{Related Work}
\vspace{-0.1cm}
\textbf{Performance characterization of recommendation models.}
Recent publications have discussed the importance and scale of personalized recommendation models in the data center~\cite{DLRM, arxiv-gupta-19, mlperf, alibabaRec,sigarch-blog}.
Compared to CNNs, RNNs, and FCs~\cite{Minerva, EIE, Eyeriss,mlperf, fathom}, the analysis demonstrates how recommendation models have unique storage, memory bandwidth, and compute requirements.
For instance, \cite{arxiv-gupta-19} illustrates how Facebook's personalized recommendation models are dominated by embedding table operations.
To the best of our knowledge, {\textit{RecNMP}~} is the first to perform locality study using production-scale models with representative embedding traces.
\textbf{DRAM-based near-memory and near-data acceleration.}
Many prior works explore near-memory processing using 3D/2.5D-stacked DRAM technology (e.g. HMC/HBM)~\cite{nda-kim,graphpim,NeuroCube,3d1,3d2,3d3,3d4,3d5,3d6,3d7,3d8}. Due to their limited memory capacity ($16-32$GB) and high cost of ownership, these schemes are not suitable for large-scale deployment of recommendation models (10s to 100s of GBs) in production environment.
Chameleon~\cite{chameleon} introduces a practical approach to performing near-memory processing by integrating CGRA-type accelerators inside the data buffer devices in a commodity LRDIMM~\cite{chameleon}.
Unlike Chameleon's DIMM-level acceleration, {\textit{RecNMP}~} exploits rank-level parallelism with higher speedup potential.
{\textit{RecNMP}~} also employs a lightweight NMP design tailored to sparse embedding operators with much lower area and power overheads than CGRAs.
\textbf{System optimization for memory-constrained learning models.}
Sparse embedding representations have been commonly employed to augment deep neural network (DNN) models with external memory to memorize previous history.
Eisenman et al. explore the use of NVMs for large embedding storage~\cite{eisenman2018bandana}.
Although the proposed techniques result in $2-3\times$ improvement of effective NVM read bandwidth ($2.3GB/s$), it remains far below typical DRAM bandwidth ($76.8GB/s$) and cannot fundamentally address the memory bandwidth bottleneck in recommendation models.
MnnFast targets optimization for memory-augmented neural network and proposes a dedicated embedding cache to eliminate the cache contention between embedding and inference operations~\cite{mnnfast}.
However, these techniques do not directly apply to personalized recommendation, which involves order-of-magnitude larger embedding tables.
TensorDIMM~\cite{tensorDIMM} proposes a custom DIMM module enhanced with near-memory processing cores for embedding and tensor operations in deep learning.
The address mapping scheme in TensorDIMM interleaves consecutive 64B within each embedding vector across the DIMM modules.
Its performance thus scales at the DIMM level and relies on the inherent high spatial locality of large embedding vectors, making the approach inapplicable to small vectors (e.g. 64B).
Given the same memory configuration, our design can outperform TensorDIMM in memory latency speedup by extracting additional performance gains from rank-level parallelism and memory-side caching optimizations.
The introduction of a customized compressed NMP instruction in {\textit{RecNMP}~} also fundamentally addresses the C/A bandwidth constraints, without the restrictions on small embedding vectors as imposed by TensorDIMM.
\section{Introduction}
\begin{figure}[hbt]
\centering
\includegraphics[width=\columnwidth]{event_building_upgrade.png}
\caption{The architecture of the upgraded \ac{LHCb} readout system.\label{daq_system}}
\end{figure}
The \ac{LHCb} experiment\ \cite{LHCb} will receive a substantial upgrade\ \cite{tdr_upgrade} during the \ac{LS2} of the \ac{LHC}. One of the major changes will be the installation of a completely new DAQ system without any low-level hardware trigger, providing a higher trigger yield at the luminosity foreseen after \ac{LS2}. To implement a trigger-less readout, the full bandwidth of \mbox{$\sim$32 Tb/s} produced by the detector must be forwarded by the event building network. To achieve this total throughput, we are targeting a system composed of $\sim$500 nodes interconnected using \mbox{100 Gb/s} networking technology, as shown in \figurename\ \ref{daq_system}.
Designing and building a system of this complexity requires extensive planning and testing; for this reason we developed \ac{DAQPIPE}. This software generates real event building traffic and can be configured in multiple ways in order to experiment with different network configurations and technologies. Testing the scalability of the system with \ac{DAQPIPE} alone requires access to \ac{HPC} clusters equipped with \mbox{100 Gb/s}-capable interconnection networks. Because of the relatively small number of suitable systems available in the world, the waiting time can be very long and the network configuration may be suboptimal for event building tests.
In this work, we present a low level simulation model that can be used, in parallel with tests on real systems, to speed up the process of designing the event building network for a trigger-less readout system.
\section{LHC\textup{b} Event Builder Architecture}
In this section, we briefly describe the DAQ architecture of the \ac{LHCb} experiment for Run-3 of the \ac{LHC}. Because a full overview is out of the scope of this paper, we focus on the network side of the system; a comprehensive description can be found in the \ac{TDR}\ \cite{tdr_upgrade}.
\subsection{Event building architecture}
\begin{figure}[hbt]
\centering
\includegraphics[width=\columnwidth]{eb-schema.pdf}
\caption{Event building architecture. The different arrows represent the multiple fragments gathered by the \acs{BU} while the black ones the control messages to and from the \acs{EM}.\label{eb_schema}}
\end{figure}
The \ac{LHCb} event building is composed of three main logical units:
\begin{itemize}
\item \textbf{\ac{BU}} receives and aggregates the fragments into full events
\item \textbf{\ac{RU}} collects the fragments from the DAQ board and sends them to the \acp{BU}
\item \textbf{\ac{EM}} assigns which event is built on which \ac{BU}
\end{itemize}
As depicted in \figurename\ \ref{eb_schema}, a \ac{BU} and a \ac{RU} are aggregated into one single node, generating a 'folded' event builder. Because the data traffic always flows from the \acp{RU} to the \acp{BU}, this architecture fully exploits the full-duplex nature of the network and halves the number of physical machines needed in the final system compared to a one-directional event builder.
In collective communication terms, the traffic pattern of a folded event builder can be described as an all-to-all exchange with a different data size for every fragment.
In order to reduce the network congestion, generated by an all-to-all personalized exchange, we use the \textit{linear shifting} traffic scheduling technique, which can be explained as follows:
\begin{itemize}
\item We divide the all-to-all exchange into $N$ phases, where $N$ is the total number of nodes
\item In every phase every node sends data to one destination and receives from one source
\item During phase $n$ node $i$ sends to node $ (n+i)\%N $\footnote{The $\%$ symbol indicates the modulo operation}
\end{itemize}
If the aforementioned conditions are respected for all the phases, we have a linear shifting schedule. In a real-world scenario, a mechanism for defining phases and synchronizing all the nodes must be provided.
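The scheduling rule above can be written down in a few lines; this is a sketch of the rule itself, not of the synchronization mechanism:

```python
def linear_shift_schedule(num_nodes):
    """Destination of every node in every phase: in phase n,
    node i sends to (n + i) % N and receives from (i - n) % N."""
    return [[(n + i) % num_nodes for i in range(num_nodes)]
            for n in range(num_nodes)]

for phase, dests in enumerate(linear_shift_schedule(4)):
    print(phase, dests)
```

Each phase is a cyclic permutation of the node ids, which is exactly the property that guarantees one sender and one receiver per node per phase.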
\subsection{Event building network}
\begin{figure}[hbt]
\centering
\includegraphics[width=\columnwidth]{fat-tree.pdf}
\caption{Fat-tree network build using switches with a radix of four. The two switches in the upper part are called spine switches, while the four in the lower part are called leaf switches.\label{fat-tree}}
\end{figure}
From the networking point of view, the event building traffic tends to create congestion and high link utilization among all the nodes, therefore the selected network topology has to be non-blocking and provide full bisection bandwidth.
For the implementation of the \ac{LHCb} event building network, we decided to use a folded Clos network like the one depicted in \figurename\ \ref{fat-tree}, often referred to as a fat-tree\footnotemark. We selected this particular topology because it fulfils the aforementioned requirements, it is widely adopted, and it is supported by switch vendors. In particular, the OpenSM subnet manager used in InfiniBand-based networks provides optimized routing for fat-tree topologies\ \cite{ib_routing}. This algorithm uses a constant one-to-one correspondence between the spine switch selected and the switch port used by the destination node. This particular routing algorithm provides a conflict-free path for all the packets that follow a perfect linear shift.
\footnotetext{From a rigorous point of view the network topology shown in \figurename\ \ref{fat-tree} is a folded Clos network; nevertheless, in the industry and data center world it is frequently referred to as a fat-tree. Even if the network topologies are not exactly the same, from this point on we will use the industry-standard name 'fat-tree' instead of 'folded Clos'.}
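The conflict-free property can be checked numerically. The sketch below models a hypothetical two-level fat-tree in which node id = leaf*m + port and the routing always selects the spine equal to the destination's leaf-switch port, then counts uplink collisions within one linear-shift phase:

```python
from collections import Counter

def uplink_conflicts(m, k, phase):
    """Spine-uplink collisions in one linear-shift phase of a 2-level
    fat-tree with k leaf switches of m down-ports each (N = m*k nodes)."""
    N = m * k
    used = Counter()
    for src in range(N):
        dst = (phase + src) % N
        if src // m == dst // m:
            continue  # intra-leaf traffic never reaches the spines
        used[(src // m, dst % m)] += 1  # uplink (source leaf, spine = dst port)
    return sum(c - 1 for c in used.values() if c > 1)

# every phase of the linear shift is collision-free on the uplinks
print(all(uplink_conflicts(4, 3, p) == 0 for p in range(12)))  # prints True
```

Intuitively, the m sources sharing a leaf switch target m consecutive destination ids in any phase, so their destination ports (and hence the selected spines) are all distinct modulo m.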
\subsection{Event building benchmark: \ac{DAQPIPE}}
\ac{DAQPIPE}\ \cite{daqpipev1_adam}\cite{daqpipev1_daniel}\cite{daqpipev1_flavio} is a small benchmark application to test network fabrics for the future
\ac{LHCb} upgrade. It emulates an event builder based on a local area network and it supports multiple network technologies through different communication libraries like: MPI, LIBFABRIC, VERBS and PSM2.
\ac{DAQPIPE} can be used either in a PUSH or PULL schema and it supports different traffic shaping strategies to reduce network congestion. Technologies and protocols can be mixed in a plug-and-play way.
The software provides an implementation of all the logical blocks required by the \ac{LHCb} event building and emulates reading data from a real DAQ board connected to the detector. All the fragments of the same emulated event are then sent through the network using the desired communication library and protocol, and then aggregated into the \ac{BU} selected by the \ac{EM}.
In order to take advantage of the available bandwidth and reduce the CPU overhead, \ac{DAQPIPE} sends multiple fragments of multiple events in parallel. The number of fragments in flight and the number of events processed in parallel can be tuned via two parameters:
\begin{itemize}
\item \textbf{Credits:} number of events processed in parallel by the \ac{BU}
\item \textbf{Parallel sends:} number of fragments of the same event in flight
\end{itemize}
In order to reduce the traffic congestion \ac{DAQPIPE} provides a barrel shift-like traffic shaping, without enforcing strong synchronization among the nodes\footnote{There is a version of DAQPIPE with enforced timing but it will not be considered for the purpose of this paper}.
\section{Simulation Model}
The simulation model we developed is implemented using the \ac{OMNeT++} framework\ \cite{omnetpp}; this discrete event simulator primarily targets network simulations and offers multiple tools that can be used to accomplish different tasks: from describing the network topology to gathering advanced statistics from the simulated design. In order to simulate the \ac{LHCb} DAQ system, we mainly need two components: an accurate description of the network and a precise modelling of the DAQ traffic.
Mellanox Technologies has already contributed an \ac{OMNeT++}-based InfiniBand \ac{FLIT}-level simulation model. This model supports: link-level flow control, static lookup-table-based routing, arbitration between multiple \acp{VL}\footnote{A \ac{VL} is the InfiniBand implementation of a Virtual Channel\ \cite{Duato_book} - i.e. a set of multiple flow control independent channels multiplexed on to the same physical one -}, packet generation and fragmentation, and packet arbitration; however, it is no longer updated and does not support the \mbox{100 Gb/s} flavour of InfiniBand (i.e. EDR). Therefore we decided to expand the library's capabilities to fulfil our requirements and to make it as accurate as possible. In order to obtain realistic model behaviour, we performed a fine tuning of the parameters using information collected from real hardware available in our test laboratory. In particular, we focused on: buffer sizes, network latency, link flow control, packet arbitration, and the latency and jitter of our entire software stack, including PCIe communication overheads.
\subsection{Modules Description}
\begin{figure}[hbt]
\centering
\subfloat[Switch port implementation]{
\includegraphics[width=0.439\columnwidth]{switch-port.png}
}
\subfloat[Host implementation]{
\includegraphics[width=0.45\columnwidth]{node.png}
}
\caption{Internal structure of a switch port and an host\label{modules}}
\end{figure}
\ac{OMNeT++} uses \textit{modules} as fundamental building blocks, hereinafter we provide a brief description of the main ones implemented:
\begin{itemize}
\item \textbf{IBOutBuf:} buffer for outgoing \acp{FLIT}
\item \textbf{IBInBuf:} buffer for incoming \acp{FLIT}
\item \textbf{IBVLArb:} it implements arbitration among the different \acp{VL}
\item \textbf{PktFwdIfc:} it provides destination ports to packets according to the static routing table
\item \textbf{SwitchPort:} it combines input and output buffers with the \ac{VL} arbitration logic
\item \textbf{IBApp:} it generates messages according to the selected traffic pattern
\item \textbf{IBWorkQueue:} queue for the different messages coming from one or more applications
\item \textbf{IBGenerator:} it arbitrates all the work queues and generates the packets and the \acp{FLIT} accordingly
\item \textbf{IBSink:} it receives the packets and notifies the IBApp module
\end{itemize}
\figurename\ \ref{modules} depicts how modules can be interconnected to build more complex units.
\subsection{Topologies}
In order to implement network topologies, \ac{OMNeT++} provides the \ac{NED} language which can be used to generate hierarchical and parametric networks. By using this powerful and flexible tool we implemented a parametric description of a fat-tree network. In view of analysing and comparing against real data collected on \ac{HPC} clusters we also implemented a Python script that generates \ac{NED} code by parsing the subnet manager information of the real cluster topology. In this way, we can study ideal topologies and compare them against real world systems with small imperfections like: missing nodes, swapped cables and suboptimal routing.
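The parametric fat-tree generation described above could be sketched along the following lines; note that this is an illustrative reconstruction, not the actual script, and the module names (\texttt{Switch}, \texttt{HCA}, \texttt{EdrLink}) and wiring scheme are assumptions.

```python
# Hypothetical sketch of generating OMNeT++ NED code for a two-level
# fat-tree (spine/leaf). All module and channel names are illustrative.
def fat_tree_ned(num_spines, num_leaves, hosts_per_leaf):
    lines = ["network FatTree {", "    submodules:"]
    lines.append(f"        spine[{num_spines}]: Switch;")
    lines.append(f"        leaf[{num_leaves}]: Switch;")
    lines.append(f"        host[{num_leaves * hosts_per_leaf}]: HCA;")
    lines.append("    connections:")
    # full bipartite spine-leaf wiring
    for l in range(num_leaves):
        for s in range(num_spines):
            lines.append(f"        leaf[{l}].port++ <--> EdrLink <--> spine[{s}].port++;")
    # attach every host to its leaf switch
    for h in range(num_leaves * hosts_per_leaf):
        lines.append(f"        host[{h}].port <--> EdrLink <--> leaf[{h // hosts_per_leaf}].port++;")
    lines.append("}")
    return "\n".join(lines)

print(fat_tree_ned(2, 4, 2))
```

A generator of this kind makes it easy to emit either an ideal topology or, by parsing the subnet manager dump of a real cluster, the same topology with its missing nodes and swapped cables.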
\subsection{Traffic injectors}
Accurate traffic modelling is a key component for obtaining precise and realistic network simulations; in particular, in this work we used both synthetic and real application traffic. Our main target is to simulate the event building system of the \ac{LHCb} experiment, therefore a particular effort was put into an accurate replication of the \ac{DAQPIPE} traffic. Moreover we implemented two linear shifters with different phase definitions.
A list and a brief description of the traffic injectors implemented follows:
\begin{itemize}
\item \textbf{Fixed-size linear shifter:} it shifts destination after a fixed-size injection.
\item \textbf{Time-window linear shifter:} it shifts destination after a fixed time interval. This injector uses a fixed grace period to absorb jitter, during this period the nodes are not allowed to send data, resulting in increased stability at the expense of a lower theoretical throughput.
\item \textbf{\ac{DAQPIPE}:} an injector that replicates the real \ac{DAQPIPE} traffic. This traffic generator allows the user to change all the relevant parameters as in the real software.
\end{itemize}
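The linear-shifter schedule can be illustrated with a minimal sketch; the phase definition below (per-sender offset equal to the node index) is an assumption chosen so that, in the jitter-free case, every round forms a permutation of destinations.

```python
# Illustrative sketch of a linear-shifter destination schedule (not the
# actual injector code): every sender walks through all destinations,
# shifting after each fixed-size injection (or fixed time window).
def linear_shifter_schedule(num_nodes, sender, num_rounds):
    """Destination of `sender` in each round; the per-sender phase
    offset keeps all senders targeting distinct destinations."""
    return [(sender + r) % num_nodes for r in range(1, num_rounds + 1)]

# In any round the destinations of all senders form a permutation,
# so no receiver is oversubscribed in the ideal, jitter-free case.
round_one = {linear_shifter_schedule(8, s, 1)[0] for s in range(8)}
print(sorted(round_one))  # all 8 destinations appear exactly once
```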
\section{Parameter Tuning}
The simulation model has several different parameters that need to be tuned and optimized to replicate the behaviour of real InfiniBand systems. For our event building studies we are interested in \mbox{100 Gb/s} networking solutions, therefore we tuned the model to replicate InfiniBand EDR hardware. In particular we use a Mellanox SB7700 EDR switch and Mellanox ConnectX-5 \acp{HCA}.
Most of the basic parameters can be extracted from the InfiniBand architecture specification\ \cite{ib_spec}, e.g.: real bandwidth, header overhead, encoding overhead, link flow control behaviour, etc. Advanced and hardware specific ones can be estimated by performing real measurements and reverse engineering on the actual hardware.
Crucial values for our simulations are: switch buffer size, link layer latency and PCIe latency.
\subsection{Switch buffer estimation}
\begin{figure}[hbt]
\centering
\includegraphics[width=0.65\columnwidth]{congestion.png}
\caption{Setup used to generate congestion and estimate the switch buffer size. host0 sends at full speed data to host2, at the same time host1 sends packets of different sizes to create controlled congestion.\label{congestion}}
\end{figure}
In order to measure the switch buffer size we can use two different techniques\ \cite{ib_pack_anal}: analysing the link level flow control packets or generating congestion and monitoring the congestion indicator\footnote{The congestion indicator is the PortXmitWait counter which indicates the time, expressed in clock ticks, that a given port has been idling because of insufficient credits on the receiving buffer} on the various ports.
Decoding the information from the flow control packets produces a more accurate measure, but it requires a low-level InfiniBand protocol analyser. Because there are no EDR capable protocol analysers available on the market, we decided to use the second strategy and estimated the amount of buffering available in every switch port by measuring the performance counters.
The setup used is depicted in \figurename\ \ref{congestion} and the procedure used to create congestion is the following:
\begin{itemize}
\item host0 sends continuously to host2 at full speed
\item host1 sends to host2 packets of increasing size at regular intervals, to create congestion
\item by reading the PortXmitWait counter and knowing the packet size we can estimate the buffer size of the switch
\end{itemize}
Following this procedure we estimated a buffer size of \mbox{64 KiB} per port per \ac{VL} with 4 \acp{VL} enabled.
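The estimation logic behind this procedure can be sketched as follows; the probe sizes and counter values below are made up for illustration and are not the measured data.

```python
# Hedged sketch of the buffer-size estimation: the smallest congestion
# packet size for which the PortXmitWait counter starts increasing gives
# an upper bound on the per-VL buffer. All values are illustrative.
def estimate_buffer(probe_results):
    """probe_results: list of (packet_size_bytes, xmit_wait_ticks),
    ordered by increasing packet size."""
    for size, wait_ticks in probe_results:
        if wait_ticks > 0:   # sender stalled: the receive buffer overflowed
            return size      # the buffer is at most this large
    return None              # never congested within the probed range

probes = [(16 * 1024, 0), (32 * 1024, 0), (64 * 1024, 0), (80 * 1024, 412)]
print(estimate_buffer(probes))  # -> 81920, consistent with ~64 KiB per VL
```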
\subsection{Link layer latency estimation}
In order to measure the link layer latency without using external protocol analysers, we decided to use the hardware timestamping feature of the IEEE 1588-2008 standard - i.e. \ac{PTP} - implemented in the Mellanox \acp{HCA}.
The path latency measured using \ac{PTP} yielded an estimate of \mbox{170 ns} full delay between two hosts directly connected with a direct attached copper cable.
\subsection{PCIe latency modelling}
\begin{figure}[hbt]
\centering
\includegraphics[width=\columnwidth]{pcie_lat.pdf}
\caption{Application and PCIe latency of an InfiniBand EDR \ac{HCA}\label{lat}}
\end{figure}
The final piece in our model tuning is a realistic model of the PCIe and InfiniBand software stack latency; because of the non-real-time nature of modern computing systems and software, we decided to perform real world latency measurements and replicate this behaviour in our simulation model.
The latency has been measured using the \textit{ib\_write\_lat} benchmark and subtracting the link layer latency; therefore this measurement includes all the time needed by the hardware and software chain to make a packet available to the link layer.
\figurename\ \ref{lat} shows the histogram of the latency measurements; the simulation model draws random numbers from this distribution to replicate the latency and jitter of the real system.
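Replaying a measured distribution in this way can be sketched as inverse-transform sampling over the histogram bins; the bin edges and relative frequencies below are invented for illustration and are not the measured EDR values.

```python
# Sketch of replaying measured latency/jitter in simulation by sampling
# from an empirical histogram. Bin values are illustrative assumptions.
import random
import bisect
import itertools

bins_ns = [900, 1000, 1100, 1300, 2000]    # latency bin representatives
weights = [0.10, 0.55, 0.25, 0.08, 0.02]   # measured relative frequency
cdf = list(itertools.accumulate(weights))  # cumulative distribution

def sample_latency(rng):
    """Draw one latency value from the empirical distribution."""
    u = rng.random() * cdf[-1]
    return bins_ns[bisect.bisect_left(cdf, u)]

rng = random.Random(42)
samples = [sample_latency(rng) for _ in range(10000)]
```

In a full model one would interpolate within bins rather than return bin representatives, but the inverse-transform idea is the same.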
\section{Results}
In this section we present some results obtained by simulating \ac{DAQPIPE} with the aforementioned simulation model. In particular we provide a comparison between the simulation and real data and a comparison of two different network topologies.
\begin{figure}[hbt]
\centering
\includegraphics[width=\columnwidth]{real_vs_sim.pdf}
\caption{Comparison between real and simulated \ac{DAQPIPE} on a real \ac{HPC} cluster topology of 64 nodes\label{real_sim}}
\end{figure}
\figurename\ \ref{real_sim} shows a comparison between the simulated and the real \ac{DAQPIPE} for different values of the \textit{credits} and \textit{parallel sends} parameters.
The real data are collected on an \ac{HPC} cluster of 64 nodes interconnected via a fat-tree-like network with: missing nodes, swapped cables and non-ideal routing. The simulation uses a replica of the same topology and the same routing of the real system.
From this plot we can confirm that the simulation can replicate the trend and the absolute value of the measurements performed on the real system.
\begin{figure}[hbt]
\centering
\includegraphics[width=\columnwidth]{clean_vs_cluter_2.pdf}
\caption{Comparison between simulated \ac{DAQPIPE} on a real \ac{HPC} cluster topology of 64 nodes and on a fat tree of 72 nodes\label{clean_cluster}}
\end{figure}
In \figurename\ \ref{clean_cluster} we present a comparison of the performance of the simulated \ac{DAQPIPE} on two different topologies: a clean fat tree of 72 nodes and the real \ac{HPC} cluster of 64 nodes.
As we can see from the plot, the performance loss is highly dependent on the parameters and can be as high as \mbox{40 Gb/s}; nevertheless the bandwidth drop for the best configuration is \mbox{$\sim$5 Gb/s}.
We can conclude that a non ideal topology affects the performance of \ac{DAQPIPE} and makes it more unstable; the performance drop can vary significantly and is highly influenced by the configuration parameters and the topology itself.
\section{Conclusions and Future Work}
We have implemented an accurate low level model of our event building traffic based on the InfiniBand EDR fabric. We have tuned the model to achieve realistic results.
We have validated our model against real data obtained on an \ac{HPC} cluster and we measured the impact of non ideal fat-tree topologies.
We will run an extensive simulation campaign to evaluate the scalability of the system up to the required size of \mbox{$\sim$500} nodes.
\printbibliography
\end{document}
\section*{Acknowledgements}
We thank Giacomo Bighin and Luca Salasnich for stimulating discussions. We also thank Stefano Giorgini and Tomoki Ozawa for useful comments. This work has been supported by the QUIC grant of the European Horizon2020 FET program and by Provincia Autonoma di Trento.
\par
\section{Introduction} \label{sec:intro}
\begin{figure}[t]
\setlength{\fboxrule}{1pt}
\centering
\begin{subfigure}[c]{0.9\linewidth}
\includegraphics[width=\linewidth]{img/teaser_img}
\caption{Reference Image}
\vspace{1mm}
\end{subfigure}
\begin{subfigure}[c]{0.9\linewidth}
\begin{overpic}[width=\linewidth]{img/teaser_weights}
\put(1,25){\fcolorbox{black}{cyan}{\rule{3pt}{0pt}\rule{0pt}{3pt}} \small Forward}
\put(1,20){\fcolorbox{black}{magenta}{\rule{3pt}{0pt}\rule{0pt}{3pt}} \small Backward}
\end{overpic}
\caption{Fusion Weights}
\vspace{1mm}
\end{subfigure}
\begin{subfigure}[c]{0.9\linewidth}
\begin{overpic}[width=\linewidth]{img/teaser_dual_overlay}
\put(1,25){\small SF outliers: 16.62 \% (occ: 66.46 \%)}
\end{overpic}
\caption{Dual Frame Result from \cite{saxena2019pwoc}}
\vspace{1mm}
\end{subfigure}
\begin{subfigure}[c]{0.9\linewidth}
\begin{overpic}[width=\linewidth]{img/teaser_ours_overlay}
\put(1,25){\small SF outliers: 8.97 \% (occ: 8.75 \%)}
\end{overpic}
\caption{Our Fusion Result}
\end{subfigure}
\caption{Our deep temporal fusion (DTF) refines an initial dual-frame estimate by combination with an inverted backward scene flow. The fusion is realized as a pixel-wise weighted averaging and thus yields (soft) occlusion maps. This way, the initial results are significantly outperformed, especially in the difficult occluded areas.}
\label{fig:teaser}
\end{figure}
The estimation of motion is important in many applications such as autonomous or assisted driving, robot navigation, and others.
A representation of motion in 2D image space (optical flow) is only a proxy for real world motion in the 3D world.
Scene flow is the estimation of 3D geometry and 3D motion and as such a much richer and realistic representation.
However, due to its higher complexity and its requirements on sensors, it is less often applied.
Since the beginnings of scene flow estimation, major progress has been achieved.
Most recently, data-driven deep learning approaches have pushed the limits of scene flow estimation even further \cite{aleotti2020dwarf,jiang2019sense,ma2019drisf,saxena2019pwoc,yang2020upgrading}.
These approaches achieve state-of-the-art results at run times close to real time.
Yet, none of these deep learning methods utilizes a multi-frame setup which was shown to improve over a conceptually similar dual-frame approach for heuristic algorithms \cite{neoral2017object,schuster2020sffpp,taniai2017fsf,vogel2015PRSM}.
Many of these traditional, heuristic approaches use the additional information from multiple views as a kind of regularization during matching, making them more complex and reliant on specific, simplified motion models (\eg a constant motion assumption).
At the same time, all previous approaches (even multi-frame based) perform considerably worse in occluded areas (cf. \cref{tab:kitti}), which suggests that there is a lot of unused potential in multi-frame scene flow estimation.
More generic concepts for learning-based multi-frame settings were proposed in the context of optical flow \cite{liu2019selflow,maurer2018proflow,neoral2018continual,ren2019fusion}.
But these methods do not model the underlying issue of occlusions at all, or tackle the estimation of occlusions by bi-directional flow estimation (twice as much effort).
In our work, we present the first deep fusion strategy for scene flow which uses a trainable, flexible motion model that exploits the geometric 3D information for self-supervised estimation of occlusions during temporal fusion (see \cref{fig:framework}).
Our framework overcomes some issues of previous work by the following contributions:
\begin{enumerate}[itemsep=2pt,topsep=4pt,leftmargin=*]
\item It introduces a dedicated sub-network to temporally invert motion in the opposite direction of the target flow using a learned, flexible model of motion.
\item It combines an initial estimate of forward scene flow with the inverted backward scene flow using a weighted average which results in the estimation of occlusions without explicit supervision.
\item This way, the fused results show superior performance over the underlying dual-frame scene flow algorithms, especially in occluded areas.
\end{enumerate}
Additionally, our framework can be used together with any auxiliary scene flow estimator.
\section{Related Work} \label{sec:related}
\paragraph*{Scene Flow Estimation.}
The history of scene flow estimation began with early variational methods inspired by optical flow estimation \cite{huguet2007variational,vedula1999three}.
Many variants were presented for different sensor setups like RGBD \cite{herbst2013rgbd,jaimez2015primal}.
But all those methods are subjected to the requirements of the variational framework (small motions, good initialization) or of the hardware (\eg indoor environment for active depth cameras).
Within the classical sensor setup of stereo cameras, a big step forward was achieved by the introduction of the piece-wise rigid scene model \cite{behl2017bounding,lv2016CSF,menze2015object,vogel2013PRSF,vogel2015PRSM}.
However, these heuristic approaches presume local planarity and rigidity and lead to considerably long computation times.
A boost in run time was achieved with the introduction of the first deep learning algorithms due to the massive parallelization on GPUs.
At the same time, many of the newly proposed deep neural networks reached state-of-the-art results despite the lack of realistic, labeled training data \cite{aleotti2020dwarf,jiang2019sense,ma2019drisf,saxena2019pwoc,yang2020upgrading}.
Yet, no existing deep learning architecture for scene flow estimation makes use of the multi-frame nature of image sequences, which naturally exist in realistic applications.
Our approach fills this gap with a trainable, generic multi-frame solution for scene flow estimation.
Classical, heuristic approaches have shown that the transition from a single temporal frame pair to two (or more) is expected to improve the results \cite{neoral2017object,schuster2020sffpp,taniai2017fsf,vogel2015PRSM}.
However, all of these methods model the temporal relation of neighboring time frames as constant motion.
Our proposed framework distills a generic motion model from data.
\paragraph*{Deep Multi-Frame Models for Optical Flow.}
For optical flow there exists some previous work on deep multi-frame neural networks.
MFF \cite{ren2019fusion} computes forward flow for two consecutive time steps together with a backward flow for the central frame.
The backward flow is used to warp the previous forward motion towards the reference frame realizing a constant motion assumption.
A fusion network then combines the initial forward prediction and the warped one.
Occlusions are not modeled explicitly here.
ContinualFlow \cite{neoral2018continual} uses previous flow estimates as additional input during the estimation for the current time step.
Here, occlusions are learned as attention maps in a self-supervised manner similar to MaskFlownet \cite{zhao2020maskflownet} or PWOC-3D \cite{saxena2019pwoc}, but based on a cost volume instead of image features.
ProFlow \cite{maurer2018proflow} proposes an online inverter for motion that is trained for every frame on the fly.
In our work, we adopt this idea to avoid warping, but we only train a single inverter once to further avoid the re-training on every sample and the explicit estimation of occlusions at an early stage.
In SelFlow \cite{liu2019selflow} as in ProFlow also, occlusions are detected by a forward-backward consistency check.
SelFlow uses the additional multi-frame information by constructing cost volumes for forward and backward direction which are then used for the flow estimation.
Our work gets rid of any consistency checks, avoids warping to shift the handling of occlusions to a later stage, and learns a dedicated universal model for the inversion of motion.
Contrary to all mentioned cases, we propose a deep multi-frame model for the more complex problem of scene flow estimation.
\begin{figure*}[t]
\centering
\includegraphics[width=1\linewidth]{img/framework}
\caption{Overview of our proposed framework for deep temporal fusion with our trainable motion model.}
\label{fig:framework}
\end{figure*}
\section{Deep Multi-Frame Scene Flow} \label{sec:method}
Consider a stream of stereo image pairs $I_t^l$ and $I_t^r$ for left and right camera at a given time $t$.
For our framework, we tackle the problem of scene flow estimation with respect to a reference view (left at time $t$) into the future (time $t+1$).
While dual-frame solutions only consider the four images at these two time steps, a multi-frame method incorporates information from at least one additional time (usually $t-1$ to avoid delay in the prediction and account for the symmetry in motion).
Our framework builds on this exact setup using three stereo pairs at time $t-1$, $t$, and $t+1$.
The idea is outlined in \cref{fig:framework} and can be summarized as follows.
We use an arbitrary auxiliary model for scene flow estimation to predict forward ($t \rightarrow t+1$) and backward ($t \rightarrow t-1$) scene flow with respect to our reference view. This avoids any form of warping and thus postpones the problem of occlusions.
Then, we learn a motion model that transforms the backward estimate into a forward motion.
Finally, a temporal fusion module combines the forward and transformed backward estimate to obtain a refined result.
For the fusion, we use a strategy of weighted averages.
This implicitly yields soft occlusion maps for the two motion directions without explicit supervision on occlusions.
The underlying dual-frame model that we mainly use is PWOC-3D \cite{saxena2019pwoc} due to its simple training schedule compared to other approaches.
However, in our experiments (\cref{sec:experiments:dual}) we show that our framework is not limited to this model.
The novel sub-networks for motion inversion and fusion are presented in more detail in the next sections.
\subsection{Temporal Scene Flow Inversion} \label{sec:method:inverter}
Instead of a constant motion assumption, which is often applied in previous work, we create and train a compact neural network that utilizes a learned motion model to temporally invert scene flow.
Our architecture is inspired by the inversion module of \cite{maurer2018proflow} but we make it deeper since for our framework we want to learn a generic model that can invert motion for arbitrary sequences without the need of re-training on every frame.
In detail, the inversion sub-network consists of 4 convolutional layers with kernel size $3 \times 3$ and a fifth one with a $7 \times 7$ kernel and output feature dimensions of $16, 16, 16, 16, 4$ respectively.
The last layer is activated linearly.
Similarly to \cite{maurer2018proflow}, we equip our inverter with a mechanism for spatial variance by concatenating the input scene flow with normalized ($[-1, 1]$) spatial image coordinates of x- and y-direction.
This way and together with the depth information from the backward scene flow, the inversion network is able to operate fully in (hypothetical) 3D space.
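The assembly of the inverter input can be sketched as follows; the $H \times W \times 4$ channel layout of the scene flow tensor is an assumption made for illustration.

```python
import numpy as np

# Sketch of assembling the inverter input: the 4 backward scene-flow
# channels are concatenated with normalized x/y coordinate grids in
# [-1, 1], which provide the spatial variance described in the text.
def inverter_input(backward_sf):
    h, w, _ = backward_sf.shape
    ys, xs = np.meshgrid(np.linspace(-1, 1, h), np.linspace(-1, 1, w),
                         indexing="ij")
    return np.concatenate([backward_sf, xs[..., None], ys[..., None]],
                          axis=-1)

inp = inverter_input(np.zeros((4, 8, 4)))
print(inp.shape)  # (4, 8, 6)
```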
For a qualitative impression of our inverter, \cref{fig:inverter} visualizes the results for a validation sample.
\subsection{Deep Forward-Backward Fusion} \label{sec:method:merger}
After the prediction of scene flow in the forward and backward direction (using the same reference frame) and inverting the backward estimate, we can merge the forward and inverted backward prediction.
The refined results can potentially overcome errors in difficult regions of occlusion or out-of-view motion, because occlusions occur rarely across multiple views \cite{schuster2020sffpp}.
Our fusion strategy follows a weighted average approach, where a fusion module predicts pixel-wise weights (that sum up to one) for the combination of the original forward estimate and the inverted backward scene flow.
Interestingly, these weights correspond to (soft) occlusion masks, revealing the main reason why the inverted backward motion should be preferred over a forward dual-frame estimate (cf. \cref{fig:teaser,fig:framework}).
While the direct prediction of a refined (or residual) scene flow during fusion is also possible, this would neither model the underlying issue nor produce occlusion masks.
For our fusion module, we adopt the architecture of the context network of PWC-Net \cite{sun2018pwc} and PWOC-3D \cite{saxena2019pwoc}.
It consists of seven convolutional layers with a kernel size of $3 \times 3$, output depth of $32, 64, 128, 128, 64, 32, 2$, and dilation rates of $1, 2, 4, 8, 16, 1, 1$ respectively.
The last layer predicts pseudo probabilities in a one-hot encoding for the forward and inverted backward scene flow which are used for weighted averaging after a softmax activation.
As input for this module, we concatenate the forward and inverted backward estimate.
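The weighted-average fusion step itself reduces to a per-pixel softmax over two logits; the following minimal sketch (with an assumed $H \times W \times 4$ scene flow layout) illustrates how the forward weight doubles as a soft occlusion map.

```python
import numpy as np

# Minimal sketch of the fusion step: two per-pixel logits are softmax-
# normalized into weights that sum to one and blend the forward and the
# inverted-backward scene flow. Shapes are illustrative assumptions.
def fuse(sf_forward, sf_inv_backward, logits):
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    w = e / e.sum(axis=-1, keepdims=True)          # (H, W, 2), sums to 1
    fused = w[..., 0:1] * sf_forward + w[..., 1:2] * sf_inv_backward
    return fused, w                                # w[..., 0]: soft occ map

sf_fw = np.ones((2, 2, 4))
sf_bw = 3 * np.ones((2, 2, 4))
fused, w = fuse(sf_fw, sf_bw, np.zeros((2, 2, 2)))  # equal weights
print(fused[0, 0, 0])  # 2.0
```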
Described above is a simple baseline for temporal fusion of scene flow (\textit{basic}).
Within the experiments in \cref{sec:experiments:ablation} we will compare different variants of our fusion module.
Though the network can detect occlusions based on the depth (disparity) and motion of neighboring pixels, it cannot estimate out-of-view motion without knowing where the field of view ends.
This information could be guessed from the padding during convolution, however for more explicit modeling we again feed additional spatial information to the module, as with the inverter. We denote this variant as \textit{spatial}.
Another variant is again motivated by the issue of occlusion.
Since in multiple views different parts of a reference image are occluded, we argue that the predicted occlusion masks (fusion weights) should differ for the different components of the scene flow, \eg between left and right view of a stereo camera, there are no occlusions due to motion.
Therefore this variant is predicting a separate occlusion map for each channel of our scene flow representation (in image space) and is depicted as \textit{4ch} since it predicts fusion weights for four scene flow channels (two for optical flow and two for initial and future disparities).
Lastly, we combine both strategies and name the combination \textit{spatial-4ch}.
In \cref{fig:framework,fig:teaser}, the occlusion maps (fusion weights) for the \textit{basic} variant are shown for the sake of clarity and space.
\begin{figure}
\centering
\begin{subfigure}[c]{0.45\linewidth}
\includegraphics[width=\linewidth]{img/ft3d_bw_flow}
\caption{Backward Optical Flow}
\vspace{1.5mm}
\end{subfigure}
\begin{subfigure}[c]{0.45\linewidth}
\includegraphics[width=\linewidth]{img/ft3d_bw_disp1}
\caption{Disparity at $t-1$}
\vspace{1.5mm}
\end{subfigure}\\%
\begin{subfigure}[c]{0.45\linewidth}
\includegraphics[width=\linewidth]{img/ft3d_inv_flow}
\caption{Inverted Optical Flow}
\vspace{1.5mm}
\end{subfigure}
\begin{subfigure}[c]{0.45\linewidth}
\includegraphics[width=\linewidth]{img/ft3d_inv_disp1}
\caption{Inverted Disparity at $t+1$}
\vspace{1.5mm}
\end{subfigure}\\%
\begin{subfigure}[c]{0.45\linewidth}
\includegraphics[width=\linewidth]{img/ft3d_fw_flow}
\caption{Forward Optical Flow}
\end{subfigure}
\begin{subfigure}[c]{0.45\linewidth}
\includegraphics[width=\linewidth]{img/ft3d_fw_disp1}
\caption{Disparity at $t+1$}
\end{subfigure}
\caption{An example of the learned inversion of motion on data of FlyingThings3D \cite{mayer2016large}. The left and right columns show the optical flow and disparity at $t+1$ components of the scene flow. The first and last rows give the ground truth in backward and forward direction respectively. The center row presents the results of our generic motion inverter.}
\label{fig:inverter}
\end{figure}
\section{Experiments} \label{sec:experiments}
Our experiments and results are split into three sets with the following main intentions.
First of all, we validate that the overall framework improves over the initial dual-frame estimates of different auxiliary scene flow models.
Secondly, we compare our work to existing multi-frame scene flow algorithms using the official public KITTI benchmark \cite{geiger2012kitti,menze2015object}.
Lastly, our goal is to validate each step of our framework separately by means of an extensive ablation study.
As metric, the common KITTI outlier rate is used, which classifies per-pixel estimates as outliers if they deviate more than 3 px and 5~\% from the ground truth.
This metric is computed for the different components of our scene flow, \ie initial disparity (\textit{D1}), next disparity (\textit{D2}), optical flow (\textit{OF}), or for the entire scene flow (\textit{SF}) if either of the three previous components is classified as an outlier.
All outlier rates are averaged over all valid ground truth pixels of the respective data split.
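The outlier criterion can be made concrete with a small sketch; the array shapes and values are illustrative, not evaluation data.

```python
import numpy as np

# Sketch of the KITTI outlier rate: a pixel is an outlier if its
# endpoint error exceeds both 3 px and 5 % of the ground-truth
# magnitude; the rate averages over valid ground-truth pixels.
def outlier_rate(err, gt_mag, valid):
    out = (err > 3.0) & (err > 0.05 * gt_mag)
    return out[valid].mean()

err    = np.array([1.0, 4.0, 4.0])
gt_mag = np.array([10.0, 100.0, 20.0])
valid  = np.array([True, True, True])
print(outlier_rate(err, gt_mag, valid))  # only the last pixel is an outlier
```

Note that the second pixel is not an outlier: its 4 px error exceeds 3 px but stays below 5 % of its 100 px ground-truth magnitude.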
\begin{table*}[t]
\centering
\caption{Comparison of our multi-frame fusion approach to the dual-frame results of the underlying auxiliary scene flow estimator for the entire image (\textit{all}) and occluded areas only ($\mathit{occ} \in \mathit{all} \setminus \mathit{noc}$) on our KITTI validation split. The last column gives the maximum relative improvement of DTF over the respective dual-frame baseline.}
\label{tab:dual}
\begin{tabular}{cc||cccc|cccc|c}
\multirow{2}{*}{\begin{tabular}{c}Scene Flow\\Estimator\end{tabular}} & \multirow{2}{*}{Setup} & \multicolumn{4}{c|}{all} & \multicolumn{4}{c|}{occ} & \multirow{2}{*}{\begin{tabular}{c}max. rel.\\Improv.\end{tabular}}\\
& & D1 & D2 & OF & SF & D1 & D2 & OF & SF\Bstrut\\
\hline
\hline
\multirow{2}{*}{SENSE \cite{jiang2019sense}} & Dual & 0.97 & 2.22 & 3.00 & 4.04 & 2.08 & 8.23 & 7.19 & 11.84&\Tstrut\\
& \textbf{Ours} & \cellcolor{limegreen!0}0.97 & \cellcolor{limegreen!50}1.66 & \cellcolor{red!1}3.01 & \cellcolor{limegreen!42}3.52 & \cellcolor{limegreen!4}2.05 & \cellcolor{limegreen!50}4.81 & \cellcolor{red!0}7.21 & \cellcolor{limegreen!50}8.57 & 41.6 \%\Bstrut\\
\hline
\multirow{2}{*}{OE \cite{yang2020upgrading}} & Dual & 1.11 & 2.58 & 5.56 & 6.61 & 2.53 & 7.34 & 15.06 & 17.73&\Tstrut\\
& \textbf{Ours} & \cellcolor{red!3}1.12 & \cellcolor{limegreen!15}2.46 & \cellcolor{limegreen!5}5.46 & \cellcolor{limegreen!11}6.39 & \cellcolor{red!1}2.54 & \cellcolor{limegreen!16}6.97 & \cellcolor{limegreen!10}14.57 & \cellcolor{limegreen!16}16.86 & 5.0 \%\Bstrut\\
\hline
\multirow{2}{*}{DWARF \cite{aleotti2020dwarf}} & Dual & 2.35 & 3.49 & 7.07 & 8.16 & 3.94 & 7.59 & 17.70 & 19.63&\Tstrut\\
& \textbf{Ours} & \cellcolor{limegreen!50}1.17 & \cellcolor{limegreen!50}2.63 & \cellcolor{limegreen!50}5.64 & \cellcolor{limegreen!50}6.75 & \cellcolor{limegreen!50}2.82 & \cellcolor{limegreen!2}7.54 & \cellcolor{limegreen!50}14.90 & \cellcolor{limegreen!30}17.82 & 50.2 \%\Bstrut\\
\hline
\multirow{2}{*}{PWOC-3D \cite{saxena2019pwoc}} & Dual & 4.65 & 6.72 & 11.50 & 13.64 & 8.02 & 15.20 & 29.17 & 32.15&\Tstrut\\
& \textbf{Ours} & \cellcolor{limegreen!50}3.34 & \cellcolor{limegreen!50}4.85 & \cellcolor{limegreen!50}8.22 & \cellcolor{limegreen!50}9.70 & \cellcolor{limegreen!50}5.63 & \cellcolor{limegreen!50}10.10 & \cellcolor{limegreen!50}18.68 & \cellcolor{limegreen!50}21.24 & 36.0 \%\Bstrut\\
\hline
\multirow{2}{*}{SFF \cite{schuster2018sceneflowfields}} & Dual & 6.61 & 10.28 & 12.39 & 15.76 & 9.94 & 19.57 & 26.08 & 30.74&\Tstrut\\
& \textbf{Ours} & \cellcolor{limegreen!28}6.04 & \cellcolor{limegreen!40}9.03 & \cellcolor{limegreen!25}11.43 & \cellcolor{limegreen!30}14.30 & \cellcolor{limegreen!39}8.77 & \cellcolor{limegreen!50}15.91 & \cellcolor{limegreen!41}22.85 & \cellcolor{limegreen!48}26.25 & 18.7 \%\Bstrut\\
\end{tabular}
\end{table*}
\begin{figure*}[t]
\centering
\begin{subfigure}[c]{0.3\linewidth}
\includegraphics[width=\linewidth]{img/backward_img}
\caption{Reference Image}
\vspace{1.5mm}
\end{subfigure}
\begin{subfigure}[c]{0.3\linewidth}
\includegraphics[width=\linewidth]{img/backward_gt_dil}
\caption{\textbf{Forward} Ground Truth (enhanced)}
\vspace{1.5mm}
\end{subfigure}\\%
\begin{subfigure}[c]{0.3\linewidth}
\includegraphics[width=\linewidth]{img/backward_sense}
\caption{SENSE \cite{jiang2019sense}}
\vspace{1.5mm}
\end{subfigure}
\begin{subfigure}[c]{0.3\linewidth}
\includegraphics[width=\linewidth]{img/backward_oe}
\caption{OpticalExpansion \cite{yang2020upgrading}}
\vspace{1.5mm}
\end{subfigure}
\begin{subfigure}[c]{0.3\linewidth}
\includegraphics[width=\linewidth]{img/backward_dwarf}
\caption{DWARF \cite{aleotti2020dwarf}}
\vspace{1.5mm}
\end{subfigure}\\%
\begin{subfigure}[c]{0.3\linewidth}
\includegraphics[width=\linewidth]{img/backward_pwoc_orig}
\caption{PWOC-3D \cite{saxena2019pwoc}, Original}
\end{subfigure}
\begin{subfigure}[c]{0.3\linewidth}
\includegraphics[width=\linewidth]{img/backward_pwoc_retrained}
\caption{PWOC-3D \cite{saxena2019pwoc}, Re-trained}
\label{fig:backward:retrained}
\end{subfigure}
\begin{subfigure}[c]{0.3\linewidth}
\includegraphics[width=\linewidth]{img/backward_sff}
\caption{SFF \cite{schuster2018sceneflowfields}}
\end{subfigure}
\caption{Visualization of the \textbf{backward} optical flow for different scene flow estimators. Most auxiliary estimators used in our experiments have difficulties with backward motion because they do not perform actual matching but rather rely on the image information of the reference frame alone, especially for street surfaces. Significant improvements are noticeable once the backward branch gets trained end-to-end in our framework (\subref{fig:backward:retrained}), even though backward ground truth is not available.}
\label{fig:backward}
\end{figure*}
\subsection{Data Sets and Training} \label{sec:experiments:details}
\paragraph*{Data Sets.}
For most of our experiments, the well-known KITTI data set is used \cite{geiger2012kitti,menze2015object}.
However, it is limited in size and thus inappropriate for the training of deep neural networks.
Despite some success on unsupervised scene flow estimation \cite{hur2020selfmono} or knowledge distillation from teacher networks \cite{aleotti2020dwarf,jiang2019sense}, transfer learning by pre-training and fine-tuning is the most common strategy to overcome this issue \cite{mayer2018what,saxena2019pwoc,sun2018models,sun2018pwc}.
The one large-scale data set which provides labeled data for scene flow is FlyingThings3D (FT3D) \cite{mayer2016large}.
In this work, it is also used for pre-training of some parts of the pipeline.
For validation, we split 20 sequences from the KITTI \textit{training} subset as in \cite{saxena2019pwoc} and the last 50 sequences from each subset \textit{A}, \textit{B}, and \textit{C} of the FlyingThings3D \textit{train} set.
\paragraph*{Training and Implementation Details.}
Where required, the auxiliary scene flow estimators are initialized with the published pre-trained weights.
We use the rich ground truth of FlyingThings3D \cite{mayer2016large} to separately pre-train the inverter on forward and backward ground truth motion with an L2-loss for 40 epochs with a batch size of 4 and an initial learning rate of $1\times10^{-4}$ that we decrease to $5\times10^{-5}$ and $1\times10^{-5}$ after 20 and 30 epochs respectively.
The rest of our pipeline is initialized from scratch.
Afterwards, we fine-tune our fusion pipeline on KITTI \cite{menze2015object} for 100 epochs. The learning rate for fine-tuning starts at $5\times10^{-5}$ and is again reduced after 75 epochs to $1\times10^{-5}$.
Due to memory limitations, we use a batch size of 1 whenever the entire pipeline is used for training.
Unless mentioned otherwise, Leaky-ReLU \cite{maas2013leakyrelu} with a leak factor of $0.1$ is used after each convolution.
For all training stages, we use the ADAM optimizer \cite{kingma2015adam} with its default parameters.
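The piecewise-constant learning-rate schedules described above can be sketched as plain Python functions; this is an illustrative summary only (the function names are ours), with the epoch boundaries and rates taken from the text.

```python
def pretrain_lr(epoch):
    """Learning rate for inverter pre-training on FlyingThings3D (40 epochs)."""
    if epoch < 20:
        return 1e-4
    elif epoch < 30:
        return 5e-5
    return 1e-5


def finetune_lr(epoch):
    """Learning rate for fine-tuning the full pipeline on KITTI (100 epochs)."""
    return 5e-5 if epoch < 75 else 1e-5
```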
Our robust loss function for the 4-dimensional scene flow in image space is similar to the one in \cite{saxena2019pwoc,sun2018pwc} and defined by
\begin{equation}
\mathcal{L} = \frac{1}{N_{gt}} \cdot \sum_{\mathbf{x} \in gt} \left( \vert s(\mathbf{x}) - \hat{s}(\mathbf{x}) \vert_1 + \epsilon \right)^{0.4}.
\end{equation}
Here $s$ and $\hat{s}$ are the estimated and ground truth scene flow fields, $N_{gt}$ is the number of pixels with valid ground truth, $\vert \cdot \vert_1$ is the L$_1$-norm, $\epsilon=0.01$ is a small constant for numerical stability, and the power of $0.4$ gives less weight to strong outliers.
For the entire pipeline, we impose this loss on the forward estimate, the inverted backward scene flow, and the final fusion:
\begin{equation}
\mathcal{L}_{total} = \mathcal{L}_{fw} + \mathcal{L}_{inv} + \mathcal{L}_{fused}
\end{equation}
This multi-stage loss prevents the fusion from collapsing to one branch during training, a state from which it could not recover because the other branch would no longer receive any updates.
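For illustration, the robust per-pixel loss can be written in a few lines of NumPy. This is a sketch of the equation above, not our actual training code; the array shapes are an assumption for the example.

```python
import numpy as np


def robust_loss(s, s_hat, mask, eps=0.01, q=0.4):
    """Robust loss over the 4-dimensional scene flow in image space.

    s, s_hat: arrays of shape (H, W, 4) holding the estimated and ground
    truth scene flow; mask: boolean (H, W) array marking the N_gt pixels
    with valid ground truth.
    """
    l1 = np.abs(s - s_hat).sum(axis=-1)   # L1-norm over the 4 channels
    per_pixel = (l1 + eps) ** q           # power of 0.4 dampens strong outliers
    return per_pixel[mask].mean()         # average over the N_gt valid pixels
```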
\begin{figure*}[t]
\centering
\includegraphics[width=1\linewidth]{img/comparison}
\caption{Visual comparison of our deep multi-frame fusion framework to the auxiliary dual-frame model PWOC-3D \cite{saxena2019pwoc}. Scene flow results are shown by optical flow and disparity at $t+1$. The error maps indicate scene flow outliers in magenta and inliers in green. Notice the improvements in occluded areas (\eg in front of and around vehicles) or the out-of-view occlusions due to ego-motion (\eg the close-by part of the guardrail in the first example and the lower image corners).}
\label{fig:comparison}
\end{figure*}
\subsection{Comparison to the Auxiliary Estimators} \label{sec:experiments:dual}
In \cref{tab:dual} we validate that our deep temporal fusion framework surpasses a diverse set of underlying dual-frame estimators in terms of scene flow outliers.
Especially in the difficult areas of occlusion, our approach achieves significantly better results, reducing the scene flow outlier rate by up to {\raise.17ex\hbox{$\scriptstyle\sim$}} 30~\%.
The fusion also improves the scene flow estimates in non-occluded areas, resulting in an overall improvement over \textit{all} image areas.
For OpticalExpansion (OE) \cite{yang2020upgrading}, the relative improvement is smaller than for the other auxiliary estimators.
This has two reasons. First, some scene flow algorithms are heavily biased towards forward motions (cf. \cref{fig:backward}) and therefore provide much less reliable information for fusion in the backward branch. Second, the motion-in-depth estimate of OE depends heavily on the optical flow estimate, which amplifies the previous limitation and extends it to the complete scene flow estimate in backward direction.
The first reason additionally motivates an end-to-end training of the fusion framework together with the auxiliary estimator.
This is performed for PWOC-3D \cite{saxena2019pwoc} because it is the easiest to train.
The other auxiliary estimators are used as off-the-shelf replacements with the officially provided pre-trained weights.
Our framework is even able to improve non-learning-based results from SFF \cite{schuster2018sceneflowfields}, with a noticeable margin of more than 10~\% in occluded areas.
Here, we attribute the smaller relative improvement to the ego-motion model applied in SFF, which is able to estimate out-of-view background motion in forward direction more reliably.
A visual comparison between PWOC-3D and its multi-frame extension by our framework is given in \cref{fig:comparison}.
\begin{table*}
\centering
\caption{Results of the KITTI scene flow benchmark for all multi-frame approaches. We also provide results for the auxiliary scene flow methods used in our pipeline and conceptual dual-frame counterparts for other multi-frame methods, where existent. Scene flow outlier rates (\textit{SF}) are presented for foreground (\textit{fg}), background (\textit{bg}), and all regions, as well as for non-occluded areas (\textit{noc}), occluded areas only (\textit{occ}, details in the text), and the union (\textit{all}).}
\label{tab:kitti}
\begin{tabular}{c|c||ccc|ccc|ccc|c}
& \multirow{3}{*}{Method} & \multicolumn{9}{c|}{SF Outliers [\%]} & \multirow{3}{*}{\begin{tabular}{c}Run\\Time\\{[}s{]}\end{tabular}}\\
& & \multicolumn{3}{c|}{occ} & \multicolumn{3}{c|}{noc} & \multicolumn{3}{c|}{all} &\\
& & bg & fg & all & bg & fg & all & bg & fg & all &\Bstrut\\
\hline
\hline
\multirow{6}{*}{\rotatebox[origin=c]{90}{multi-frame}} & PRSM \cite{vogel2015PRSM} & \textbf{12.36} & 37.65 & \textbf{15.74} & 5.54 & 17.65 & 7.71 & \textbf{6.61} & 20.79 & \textbf{8.97} & 300\Tstrut\\
& DTF+SENSE (\textbf{Ours}) & 16.37 & \textbf{37.49} & 19.65 & 6.69 & \textbf{9.72} & \textbf{7.23} & 8.21 & \textbf{14.08} & 9.18 & 0.76\\
& OSF+TC \cite{neoral2017object} & 15.46 & 43.98 & 19.49 & \textbf{5.52} & 15.57 & 7.32 & 7.08 & 20.03 & 9.23 & 3000\\
& SFF++ \cite{schuster2020sffpp} & 26.40 & 48.36 & 30.91 & 9.84 & 21.04 & 11.55 & 12.44 & 25.33 & 14.59 & 78\\
& DTF+PWOC (\textbf{Ours}) & 31.91 & 51.14 & 34.29 & 8.79 & 21.01 & 10.98 & 12.42 & 25.74 & 14.64 & \textbf{0.38}\\
& FSF+MS \cite{taniai2017fsf} & 21.59 & 65.48 & 27.63 & 9.23 & 28.03 & 12.60 & 11.17 & 33.91 & 14.96 & 2.7\Bstrut\\
\hline
\multirow{5}{*}{\rotatebox[origin=c]{90}{dual-frame}} & SENSE \cite{jiang2019sense} & 17.22 & 44.86 & 21.63 & 6.71 & 10.02 & 7.30 & 8.36 & 15.49 & 9.55 & 0.32\Tstrut\\
& OSF \cite{menze2015object} & 15.01 & 47.98 & 19.41 & 5.52 & 22.31 & 8.52 & 7.01 & 26.34 & 10.23 & 3000\\
& PWOC-3D \cite{saxena2019pwoc} & 41.20 & 47.52 & 41.62 & 9.29 & 18.03 & 10.86 & 14.30 & 22.66 & 15.69 & 0.13\\
& SFF \cite{schuster2018sceneflowfields} & 25.58 & 63.26 & 30.76 & 10.04 & 26.51 & 12.99 & 12.48 & 32.28 & 15.78 & 65\\
& PRSF \cite{vogel2013PRSF} & 41.09 & 58.82 & 42.80 & 8.35 & 26.08 & 11.53 & 13.49 & 31.22 & 16.44 & 150\\
\end{tabular}
\end{table*}
\begin{table*}
\centering
\caption{Evaluation of intermediate results in our pipeline on our KITTI validation split. For this experiment, PWOC-3D \cite{saxena2019pwoc} is the auxiliary estimator and is trained end-to-end. The inversion module is separately evaluated on FlyingThings3D.}
\label{tab:ablation}
\begin{tabular}{l||cccc|cccc}
\multirow{2}{*}{Output} & \multicolumn{4}{c|}{all} & \multicolumn{4}{c}{occ}\\
& D1 & D2 & OF & SF & D1 & D2 & OF & SF\Bstrut\\
\hline
\hline
forward (fw) & 3.47 & 5.83 & 8.95 & 10.76 & 5.89 & 14.39 & 23.17 & 26.93\Tstrut\\
inverted backward (bw-inv) & 4.15 & 6.00 & 20.34 & 22.14 & 6.64 & 9.92 & 31.74 & 33.81\Bstrut\\
\hline
constant linear inversion (FT3D) & -- & \textbf{1.27} & 47.16 & 47.18 & -- & -- & -- & --\Tstrut\\
our inverter (FT3D) & 2.19 & 3.25 & \textbf{41.98} & \textbf{42.34} & -- & -- & -- & --\Bstrut\\
\hline
fw + bw-inv + oracle & 2.63 & 3.91 & 6.25 & 7.51 & 4.53 & 8.40 & 16.39 & 18.43\Tstrut\Bstrut\\
\hline
fw + bw-inv + fusion-basic & \textbf{3.22} & 4.90 & 9.01 & 10.48 & 4.88 & 10.23 & 19.27 & 21.66\Tstrut\\
fw + bw-inv + fusion-spatial & 3.48 & 5.51 & 8.85 & 10.55 & 6.13 & 13.66 & 22.23 & 25.40 \\
fw + bw-inv + fusion-4ch & 3.34 & 4.85 & \textbf{8.22} & \textbf{9.70} & 5.63 & 10.10 & 18.68 & 21.24 \\
fw + bw-inv + fusion-spatial-4ch & 3.43 & \textbf{4.84} & 8.67 & 10.19 & \textbf{5.45} & \textbf{9.25} & \textbf{18.46} & \textbf{20.82} \\
\end{tabular}
\end{table*}
\subsection{Comparison to State-of-the-Art} \label{sec:experiments:sota}
To check the generalization of our model to unseen data, we submit results obtained with our deep multi-frame model to the KITTI online benchmark.
The results for all multi-frame methods and related dual-frame baselines are presented in \cref{tab:kitti}.
Due to the limited number of training samples on KITTI, some over-fitting can be observed when comparing the numbers to the results on our validation split.
However, improvements over the underlying dual-frame models (SENSE and PWOC-3D) are still evident, again with margins of {\raise.17ex\hbox{$\scriptstyle\sim$}} 15 - 20~\% in occluded areas.
Since KITTI evaluates the submitted results only for non-occluded (\textit{noc}) and all valid pixels, the results for occluded areas (\textit{occ}) are reconstructed from the available data.
To this end, we compute the ratio of non-occluded image areas on the KITTI \textit{training} set (84.3 \%), and use this distribution to estimate results for only occluded areas for the KITTI \textit{testing} set based on the benchmark results for non-occluded (\textit{noc}) and \textit{all} areas according to the following formula:
\begin{equation}
{occ}_r = \frac{{all}_r - {noc}_r \cdot 0.843}{0.157}
\end{equation}
for the regions $r \in \lbrace bg, fg, all \rbrace$.
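The reconstruction can be reproduced with a small helper function (the function name is ours):

```python
def occ_outliers(all_r, noc_r, noc_ratio=0.843):
    """Reconstruct outlier rates for occluded areas from 'all' and 'noc' results.

    Solves all = noc * noc_ratio + occ * (1 - noc_ratio) for occ, using the
    84.3 % non-occluded ratio measured on the KITTI training set.
    """
    return (all_r - noc_r * noc_ratio) / (1.0 - noc_ratio)
```

For example, PWOC-3D \cite{saxena2019pwoc} scores 15.69~\% (\textit{all}) and 10.86~\% (\textit{noc}) on the benchmark, from which the 41.62~\% (\textit{occ}) entry in \cref{tab:kitti} follows.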
This strategy reveals that even for the top performing multi-frame methods, moving vehicles which leave the field of view are the most challenging areas.
In these regions (\textit{occ-fg}), our fusion approach achieves top performance.
It furthermore performs significantly better in foreground regions than the other multi-frame methods.
Lastly, we highlight that, since ours is the first deep method for multi-view scene flow estimation, our run time is close to real time and thus 2 to 5 orders of magnitude shorter than that of most other multi-view methods.
The inversion and fusion without auxiliary scene flow estimation takes 0.12 seconds.
We use a Nvidia RTX 2080 Ti for inference.
\subsection{Ablation Study} \label{sec:experiments:ablation}
For completeness, each part of our framework is evaluated separately in \cref{tab:ablation}.
The first two rows show the results for the forward prediction and the inverted backward scene flow after end-to-end training.
We can see that within our multi-view training, the plain forward prediction is already improved over the dual-frame baseline (cf. \cref{tab:dual}).
Further, the results of the backward branch after inversion indicate that the motion inversion of optical flow is a bottleneck.
Yet, for occluded areas the inversion already outperforms the forward prediction in terms of change of disparity, validating its importance.
Both of these observations are confirmed by an evaluation of the inverter alone on FlyingThings3D data \cite{mayer2016large}, as shown in the fourth row of \cref{tab:ablation} (cf. \cref{fig:inverter}), compared to a na\"ive constant linear motion assumption in 2D.
That is, optical flow and change of disparity are multiplied by $-1$.
Our learned motion model outperforms the constant motion model in terms of optical flow.
Still, one might doubt whether the quality of the inversion is good enough to improve the forward prediction.
Therefore, we compute an \textit{oracle} fusion using the ground truth to select the better estimate from the forward and inverted backward branch.
This experiment produces a theoretical bound for our fusion module and makes apparent that the inverted backward scene flow contains a lot of valuable information.
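Conceptually, the oracle fusion is a per-pixel selection based on the ground truth error. The following NumPy sketch illustrates the idea; the concrete error metric (the Euclidean norm of the scene flow residual) is an assumption for this illustration.

```python
import numpy as np


def oracle_fusion(fw, bw_inv, gt):
    """Select per pixel the branch (forward or inverted backward) closer to gt.

    All arrays have shape (H, W, 4). Returns the fused scene flow field.
    """
    err_fw = np.linalg.norm(fw - gt, axis=-1)
    err_bw = np.linalg.norm(bw_inv - gt, axis=-1)
    pick_fw = (err_fw <= err_bw)[..., None]   # broadcast mask over channels
    return np.where(pick_fw, fw, bw_inv)
```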
Within the last four rows of \cref{tab:ablation} we compare the different variants of our fusion module as described in \cref{sec:method:merger}.
The results in occluded areas reveal that all variants including the \textit{basic} one effectively tackle the problem of occlusion.
Among all variants, the \textit{spatial} version performs worst unless combined with the \textit{4ch} variant.
However, we observed stronger over-fitting for this model, which has the most representational power (and the highest number of parameters).
As a result, over the entire image area, the fusion module using four weight channels performs the best.
Worth highlighting is that our fusion results in occluded areas almost reach the level of the oracle prediction.
\section{Conclusion} \label{sec:conclusion}
In this work, we have presented a straightforward integration of multiple frames to improve scene flow estimates for a wide range of dual-frame algorithms.
Significant improvements could be achieved by inverting the backward motion of the reference view and fusing it with an initial forward estimate.
Moreover, our fusion strategy of weighted averages yields additional estimates of occlusion maps without the need for bi-directional consistency checks.
The experiments reveal that the inversion of optical flow is a limiting factor of the proposed approach. For future work, we therefore plan to equip the motion inverter with more domain knowledge to overcome this limitation, and to apply end-to-end training with other, more complex auxiliary estimators.
\section*{Acknowledgement}
This work was partially funded by the BMW Group and partially by the Federal Ministry of Education and Research Germany under the project VIDETE (01IW18002).
{\small
\bibliographystyle{ieee_fullname}
In this article, we introduce Liesel,\footnote{\url{https://liesel-project.org}} a probabilistic programming framework for the development and estimation of a broad range of Bayesian models in Python. The framework, named after a fountain in its birth city G\"{o}ttingen, Germany, allows the user to represent statistical models as directed acyclic graphs (DAGs) and to implement tailor-made Markov chain Monte Carlo (MCMC) algorithms. Liesel provides many default components for these tasks, which are easy to extend and liberate the researcher from the time-consuming duty of re-implementing the basic components of their models and inference algorithms, giving them the opportunity to focus on the novel aspects of their research. This way, Liesel meets the requirements of many computational statisticians working on new methods or extensions of existing ones. Currently, the framework is particularly useful for developing semi-parametric regression models, since it includes all components required for this model class, but it can easily be extended beyond these models.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figures/workflow}
\caption{The standard workflow using the Liesel framework. When working with semi-parametric regression models, the first step is usually to create a Liesel model with the help of RLiesel. Then, the model graph is manipulated to accommodate newly developed features, and finally, Goose is used to develop an MCMC algorithm combining standard components like NUTS with user-defined kernels if required. The framework is, however, very flexible. The Liesel-Model library is not limited to semi-parametric regression models but can handle any Bayesian network expressed as a DAG. Goose communicates with the model via an interface which is also available for PyMC models or even self-written, JAX-compatible model representations.}
\label{fig:liesel-workflow}
\end{figure}
The Liesel framework consists of three main components: Goose, an MCMC library, Liesel-Model, a class system for representing statistical models as DAGs, and RLiesel, an R interface to conveniently set up semi-parametric regression models. The components and their relationships are illustrated in Figure~\ref{fig:liesel-workflow}. A standard workflow with Liesel involves the following steps: First, a semi-parametric regression model is configured with RLiesel, returning a Liesel-Model object. Second, the Liesel-Model object is modified for the research question at hand if required. Third, MCMC estimation is performed with Goose, potentially with different sampling schemes. In the end, Goose's utility functions can be used for model and estimation diagnostics.
Before we proceed to describe the three components of Liesel in more detail, we would like to point out that Liesel is by no means limited to semi-parametric regression models. In fact, the Liesel-Model library can be used to represent any model falling into the category of Bayesian networks, including, for example, regression models, spatial models, change-point models, Gaussian process models or Bayesian neural networks. For this rich model class, which may involve discrete model parameters, there is, to the best of our knowledge, no one-size-fits-all MCMC algorithm. For this reason, Goose encourages the researcher to use their expertise to design an optimal sampling scheme for their specific problem by providing a set of building blocks, which can be used to extend and replace standard MCMC algorithms. Moreover, Goose is not limited to the Liesel-Model library. As indicated in Figure~\ref{fig:liesel-workflow}, the Liesel framework is designed to be modular, which allows Goose to be agnostic about the concrete model implementation. Goose can also be used to estimate PyMC models or user-defined, JAX-compatible model implementations.
\subsection{Software components}
\paragraph{Liesel-Model} The model building library of Liesel (called Liesel-Model in this article to distinguish it from the Liesel probabilistic programming framework as a whole) facilitates the development of complex statistical models allowing the user to represent them as directed acyclic graphs (DAGs). DAGs are easy to reason about and to manipulate. In Liesel, each node of a DAG represents either data or a computation. The edges indicate data flow or, put differently, how the value of a node depends on the other nodes. Hence, the relationship between the model parameters and the conditional distributions of the model naturally translates to a DAG.
Liesel provides methods to alter, remove or replace subgraphs of a model. This way, the user can extend or modify a given model, for example, a semi-parametric regression model created with RLiesel. More specifically, a prior in the model hierarchy can be replaced by updating the corresponding subgraph. This feature makes Liesel especially well-suited for the development of new statistical models, and in combination with RLiesel, it can simplify research on semi-parametric regression models significantly.
\paragraph{Goose} Liesel's MCMC library is called Goose. To perform MCMC estimation, one needs to construct a Markov chain with an equilibrium distribution that matches the target distribution, i.e.~the posterior distribution. The chain is simulated for a certain number of iterations, and the draws from the chain are used to approximate the posterior distribution. While a valid MCMC algorithm is mathematically guaranteed to converge to the posterior distribution, the convergence can be slow in practice. For this reason, most MCMC algorithms need to be tuned, i.e.~they need to learn some hyperparameters during a warmup phase to work efficiently.
Goose supports the user in building an MCMC algorithm for their estimation target by offering a broad range of well-tested kernels that can be combined in flexible ways to construct problem-specific MCMC algorithms. In this context, a kernel is an algorithm to transition a part of the parameter vector to a new state within an MCMC iteration. Most kernels in Goose also implement an automatic tuning procedure, which guarantees a high computational efficiency without requiring a manual adjustment of the kernel hyperparameters. The user can combine standard kernels like the No-U-Turn Sampler (NUTS) provided by Liesel with self-implemented ones, e.g.~specific Gibbs updates. Of course, Goose also supports using a single kernel like NUTS for the full parameter vector as in Stan.
\paragraph{RLiesel} The RLiesel package for R is built on top of the Liesel-Model library. It can be used to configure semi-parametric regression models with the convenient R formula notation. The models are represented as DAGs using the Liesel node and model classes and can be manipulated to incorporate new developments, e.g.~new predictor components or prior hierarchies. Finally, the user can take advantage of a default sampler setup or build a custom MCMC algorithm for their model using Goose. RLiesel is based on the \texttt{reticulate} package, which allows for a seamless integration of Python and R. With RLiesel, we strive to make Liesel accessible to the statistics community, where R is the predominant language, and to allow for the integration of Liesel with many popular R-based post-sampling utilities.
RLiesel does not only demonstrate how Liesel can be used to implement complex statistical models, but it can also serve as a solid basis for further methodological research on the popular model class of semi-parametric regression. Semi-parametric regression has received a lot of attention among applied statisticians in recent years and is closely related to the concepts of structured additive distributional regression \citep{Klein2015Multivariate} and generalized additive models for location, scale and shape \citep[GAMLSS, ][]{Rigby2005}. These models allow the researcher to explore complex relationships between explanatory and response variables including linear, non-linear, random and spatial effects. Many of them are also multi-predictor models, where different features of the response distribution such as variance, skewness or kurtosis can be related to covariate information. Due to its generality, semi-parametric regression can be understood as an ``umbrella'' model class comprising many interesting models, which pose a broad range of statistical and computational challenges. RLiesel and Liesel allow the statistician to address these issues with a set of well-tested building blocks, an intuitive graph-based model representation and API, and a modular library for MCMC inference. This is particularly important due to the complexity of the model class, which would make an implementation from scratch a very time-consuming task.
\subsection{Related software}
Most statistical software packages for Bayesian inference can be classified into software for a specific model class on the one hand and general probabilistic programming languages (PPLs) on the other hand. Liesel and RLiesel try to cover a middle ground between these two approaches: RLiesel facilitates the definition of semi-parametric models, while Liesel-Model and Goose are capable of expressing and estimating a broad range of statistical models. Hence, Liesel has similar capabilities as general-purpose PPLs like Stan \citep{SDT2022}, JAGS \citep{Plummer2022}, NIMBLE \citep[the successor to BUGS,][]{deValpine2017} or PyMC \citep{Salvatier2016}. Unlike these software projects, however, Liesel features a graph representation allowing for the modification of the model before estimation. Furthermore, with Liesel, users have full control of the estimation algorithm. Stan and JAGS provide only very limited options to customize the MCMC algorithm. In Stan, NUTS or HMC can be used, or alternatively a mean-field variational inference method. Certain parameters of the samplers, e.g.~the initial step size or the target acceptance rate, can be configured. However, block-based sampling is not possible and user-implemented samplers cannot be integrated. Moreover, discrete parameters cannot be modeled with Stan, since it relies on gradient-based samplers.
Compared to Stan, NIMBLE allows for a more detailed configuration of the MCMC algorithm. For example, the default samplers can be reordered or replaced, even with user-defined samplers. In contrast to Liesel, NIMBLE misses capabilities for automatic differentiation and consequently does not provide any gradient-based samplers. Moreover, NIMBLE restricts the compilation of user-defined functions to a subset of the R programming language, which makes third-party libraries difficult to use, while Liesel can wrap code of other JAX-based libraries. PyMC also offers some options to customize the MCMC algorithm but does not go as far as Liesel, and similar to other general-purpose PPLs, does not feature a mutable model object.
For complex models or large datasets, general-purpose PPLs may be slow or unable to sample the model at all. In these situations, model-specific software remains important, and modeling frameworks with customizable MCMC algorithms like Liesel or PyMC may serve as a basis for the implementation of model-specific solutions.
Its flexible model building library sets Liesel apart from other more specialized software. Similar to \texttt{brms} \citep{Buerkner2017}, which provides an interface for various types of multi-level models in Stan, RLiesel provides an interface for semi-parametric regression models in Liesel. RLiesel's features are comparable to other software in the field like \texttt{mgcv} \citep{Wood2022}, \texttt{gamlss} \citep{Stasinopoulos2017}, \texttt{GJRM} \citep{Marra2022}, BayesX \citep{Brezger2005} and \texttt{bamlss} \citep{Umlauf2021}. Its approach is different, however, in that the intermediate graph-based model representation can be modified and extended, allowing for the implementation of new models that are derived from a base model. BayesX was one of the first software packages for fast MCMC inference in semi-parametric regression models with spatial covariate effects. The software cannot be extended easily, however, restricting the user to the pre-defined predictor components (i.e.~linear, non-linear, spatial covariate effects, etc.). \texttt{bamlss} is another Bayesian software that allows the user to define their own predictor components, which need to be linear in a basis expansion of the covariates, and the corresponding regression coefficients need to follow a (potentially degenerate) multivariate normal prior. In that regard, the model graph of Liesel is more expressive and more flexible. The inference procedure in \texttt{bamlss} can be configured with the \texttt{optimizer} and \texttt{sampler} arguments, but a comprehensive collection of MCMC kernels as in Goose is missing. Automatic differentiation and high-performance computing hardware are also not supported in \texttt{bamlss}. Finally, the packages \texttt{mgcv} and \texttt{GJRM} are not primarily focused on Bayesian inference, although \texttt{mgcv} offers an interface to JAGS using the \texttt{jagam()} function. 
In contrast to Liesel, both packages have an exclusive focus on semi-parametric regression using basis function approaches.
\subsection{Technology stack}
Liesel uses a modern machine learning technology stack for the efficient implementation of the model graph and the MCMC kernels. In particular, Liesel depends on the Python packages NumPy \citep{Harris2020}, JAX \citep{Bradbury2022}, BlackJAX \citep{Lao2022} and TensorFlow Probability \citep{Dillon2017}. JAX, a library for scientific computing with support for automatic differentiation (AD) and just-in-time (JIT) compilation, is of particular importance for Liesel, since its features enable the implementation of computationally efficient inference algorithms. For example, when using reverse-mode AD, the value and the gradient of the log-posterior of a model can both be evaluated in the same amount of time -- up to a constant. Furthermore, JAX supports using CUDA-enabled graphics cards for its computations, and running them on even more powerful tensor processing units (TPUs) or networks of those.
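As a minimal illustration of this property, \texttt{jax.value\_and\_grad} evaluates a function and its full gradient in a single reverse-mode pass, and \texttt{jax.jit} compiles the combined function for repeated use. The toy log-density below is ours and merely stands in for a model's log-posterior.

```python
import jax
import jax.numpy as jnp


def log_prob(theta):
    """Toy log-posterior: an unnormalized standard normal in two dimensions."""
    return -0.5 * jnp.sum(theta ** 2)


# One reverse-mode pass yields both the value and the full gradient.
value_and_grad = jax.jit(jax.value_and_grad(log_prob))

value, grad = value_and_grad(jnp.array([1.0, 2.0]))
```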
Liesel runs on Linux, macOS and with some limitations on Windows,\footnote{JAX, one of Liesel's dependencies, does not provide official builds for Windows. However, JAX can either be built by the user or run using the Windows Subsystem for Linux (WSL).} and can be used on laptops, desktop computers and servers. Liesel's development is hosted on GitHub,\footnote{\url{https://github.com/liesel-devs/liesel}} where bugs can be reported and new features can be requested. The latest release of Liesel, 0.1.3 at the time of writing, is also available on the Python Package Index (PyPI).
The remainder of this article is organized as follows: In Section~\ref{sec:liesel-model}, the Liesel-Model library is discussed. Section~\ref{sec:goose} describes Liesel's MCMC library Goose, its main design goals, and the interfaces that allow the user to implement their own MCMC kernels and warmup schemes. RLiesel, the R interface for semi-parametric and distributional regression is covered in Section~\ref{sec:rliesel} together with some theoretical background on these model classes. Finally, Section~\ref{sec:case-study} describes a case study showing how the components of the Liesel framework can be used together to evaluate different MCMC algorithms on a semi-parametric regression model. The article concludes with a discussion in Section~\ref{sec:discussion}.
\section{Liesel: Developing probabilistic graphical models}
\label{sec:liesel-model}
\emph{\textbf{Please note:} The model building library of Liesel is going to receive a major update in version 0.2, which we plan to release in fall 2022. The arXiv preprint will be updated after the release to reflect the changes in version 0.2. For this reason, we focus on the abstract concepts and do not present any code examples in the current version of this section.}
The model building library of Liesel allows the user to express a broad range of (typically Bayesian) statistical models as probabilistic graphical models (PGMs). Particular attention is paid to the representation of semi-parametric regression models, which are described in Section~\ref{sec:rliesel}, and for which a number of convenience functions are provided. In general, however, almost any statistical model can be expressed with Liesel. The PGM representation allows for a convenient factorization of the log-probability of the model (or the unnormalized log-posterior in a Bayesian context). It is also the basis for the user interface that can be used to update the nodes in a natural way and to modify the structure of the graph (e.g.~by adding or removing nodes or edges).
\subsection{Probabilistic graphical models and directed acyclic graphs}
A PGM uses a graph to express the conditional dependence and independence between a set of random variables. For Bayesian models, one typically relies on directed acyclic graphs (DAGs) to represent hierarchical structures without any loops or circular dependencies, permitting the factorization of the joint probability into a product of conditional probabilities. More precisely, if $M = (X, E)$ is a DAG with nodes $x \in X$ representing random variables and edges $e \in E$ representing conditional dependencies between them, the joint probability of $M$ can be written as
$$\prod_{x \in X} p\bigl(x \mid {\operatorname{Inputs}(x)}\bigr),$$
i.e.~the product of the probabilities of the individual nodes conditional on their inputs (or parents). The inputs of a node $x \in X$ are all nodes $x' \in X$ for which $x$ and $x'$ are not conditionally independent given the other nodes of the model.
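As a generic, library-independent illustration of this factorization (plain Python, not the Liesel API): for a two-node DAG with $x \sim N(0, 1)$ and $y \mid x \sim N(x, 1)$, the joint log-probability is the sum of the conditional log-probabilities of the nodes.

```python
import math


def normal_logpdf(x, loc, scale):
    """Log-density of a univariate normal distribution."""
    z = (x - loc) / scale
    return -0.5 * z * z - math.log(scale) - 0.5 * math.log(2.0 * math.pi)


def model_logprob(x, y):
    """Joint log-probability of the two-node DAG x -> y."""
    log_p_x = normal_logpdf(x, 0.0, 1.0)        # root node, no inputs
    log_p_y_given_x = normal_logpdf(y, x, 1.0)  # child node, input: x
    return log_p_x + log_p_y_given_x            # product of densities -> sum of logs
```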
\subsection{Nodes and models in Liesel}
Liesel uses Python classes to implement and enrich the mathematical concept of a node in a PGM. A node has two important properties: a value and a log-probability, which is the evaluation of the log-probability density or mass function of the node at its value. To keep both properties in sync, i.e.~to avoid an inconsistent state, the node class comes with methods for setting its value and updating its state. The model class, on the other hand, represents a PGM and can hold a number of nodes. It provides methods for the evaluation of the model log-probability and for updating the nodes in a topological order. The model graph can also be visualized conveniently.
The nodes are able to cache their value and log-probability, meaning that the model graph is stateful. The results of expensive mathematical operations can be stored directly in the graph, enabling performance improvements for MCMC sampling, especially if multiple parameter blocks are used. If required, the user can implement new types of nodes and models due to the modular and extensible design of Liesel. More details on the key features of the nodes and models are provided in the following paragraphs.
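The interplay of a node's value and its cached log-probability can be sketched with a minimal, hypothetical node class (not Liesel's actual implementation):

```python
# Minimal sketch of a node that keeps its value and cached log-probability
# in sync by invalidating the cache on every value change.
import math

class Node:
    def __init__(self, value, log_prob_fn=None):
        self.value = value
        self._log_prob_fn = log_prob_fn  # None for nodes without a distribution
        self._log_prob = None            # cached log-probability

    def set_value(self, value):
        self.value = value
        self._log_prob = None  # invalidate the cache instead of going stale

    @property
    def log_prob(self):
        if self._log_prob_fn is None:
            return 0.0  # convention: nodes without a distribution contribute zero
        if self._log_prob is None:  # recompute only if the cache is invalid
            self._log_prob = self._log_prob_fn(self.value)
        return self._log_prob

mu = Node(0.0, lambda v: -0.5 * (v**2 + math.log(2 * math.pi)))  # standard normal
mu.set_value(1.0)
hyper = Node(2.0)  # e.g.~a fixed hyperparameter without a distribution
```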
\begin{figure}
\centering
\includegraphics[width=0.6\linewidth]{figures/node-types}
\caption{The nodes of a Liesel model can be strong (blue) or weak (orange), and can have a probability distribution (double border) or not (single border). Weak nodes are functions of their inputs and can always be recomputed from the strong nodes of the model. Nodes with a distribution have a log-probability that is part of the model log-probability. For a graphical representation of a concrete semi-parametric regression model, see Figure~\ref{fig:dist-reg}.}
\label{fig:node-types}
\end{figure}
\paragraph{Nodes} Liesel extends the concept of a node in a PGM, where nodes are used to represent random variables, and adds a distinction between so-called ``strong'' and ``weak'' nodes. Strong nodes have a value that is either fixed or set by an inference algorithm such as a sampler or optimizer. With some rare exceptions, the random variables of a model are strong nodes and can represent observed data (e.g.~the response of a regression model) or a model parameter (in a Bayesian context). Conversely, not all strong nodes are random variables. Hyperparameters or design matrices are examples of strong nodes without an associated probability distribution.
In contrast, weak nodes represent functions of their inputs. These functions are usually deterministic and describe the mappings between the random variables of a model and their probability distributions. Weak nodes can also represent pseudo-random functions, in which case, however, they require the state of the PRNG (stored in a strong node) as one of their inputs. The weak nodes can always be recomputed from the strong nodes, and hence, the state of a model is uniquely defined by the strong nodes. Weak nodes can be used to cache the results of expensive computations, because their value only needs to be updated when their inputs have changed. Node subclasses can implement weak nodes representing commonly used functions. By default, Liesel comes with a number of weak nodes facilitating the development of semi-parametric regression models.
If a node has a probability distribution, its log-probability is the evaluation of its probability mass or density function at its current value. For convenience, the log-probability of a node without a distribution is defined to be zero. Summing up the node log-probabilities gives the model log-probability, which can be interpreted as the unnormalized log-posterior in a Bayesian context. The log-posterior can be decomposed into the log-likelihood (considering only the observed nodes) and the log-prior (considering only the parameter nodes).
Liesel supports probability distributions that follow the class interface from TensorFlow Probability (TFP). Thus, all distributions from TFP can be used with Liesel and new ones can be implemented. One feature of TFP that is particularly useful for Bayesian statistics is the possibility to transform distributions with bijectors. When defining a transformed distribution, TFP automatically adjusts the log-probability with the log-determinant of the Jacobian of the bijector. For an overview of the different node types -- strong and weak, with and without a probability distribution -- see Figure~\ref{fig:node-types}.
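The Jacobian adjustment that TFP performs automatically can be spelled out by hand for the exponential bijector, which turns a standard normal into a log-normal distribution (a self-contained illustration, independent of TFP):

```python
# If Y = exp(X) with X standard normal, the change-of-variables formula gives
# log p_Y(y) = log p_X(log y) - log|d exp(x)/dx| = log p_X(log y) - log y.
import math

def normal_logpdf(x):
    return -0.5 * (x**2 + math.log(2 * math.pi))

def lognormal_logpdf(y):
    x = math.log(y)
    return normal_logpdf(x) - math.log(y)  # log-det Jacobian of exp at x is x = log y
```

In TFP, the same density would be obtained by wrapping a normal distribution in a `TransformedDistribution` with the `Exp` bijector.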
Finally, we provide a concrete example and describe which node types would be used to represent a generalized linear model (GLM) in Liesel: The response vector $\yvec$ and the design matrix $\Xmat$ of a GLM are the observed data and would be two strong nodes. While the design matrix is fixed, the response is assumed to follow a probability distribution from the exponential family such as a Poisson or gamma distribution. The vector of regression coefficients $\betavec$ is the only model parameter and would be another strong node. In a Bayesian context, the regression coefficients are assigned a prior distribution, whose hyperparameters would again be strong nodes. In contrast, the linear predictor $\etavec = \Xmat\betavec$ would be a weak node representing a simple matrix-vector product. The expected value of the response $\muvec = h(\etavec)$ is the element-wise evaluation of the response (or inverse link) function $h$ at the linear predictor $\etavec$ and would be encoded in a separate weak node.
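The graph of this GLM can be sketched in plain Python (a conceptual illustration of the node types, not Liesel's API), here with a Poisson response and a log link:

```python
# Strong nodes hold data and parameters; weak nodes are deterministic
# functions of their inputs.
import math

X = [[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]]   # strong node: fixed design matrix
y = [1, 2, 5]                               # strong node: observed counts
beta = [0.1, 0.8]                           # strong node: model parameter

# weak node: linear predictor eta = X @ beta
eta = [sum(x_ij * b_j for x_ij, b_j in zip(row, beta)) for row in X]

# weak node: expected response mu = h(eta), with h = exp for the log link
mu = [math.exp(e) for e in eta]

# Poisson log-likelihood contributed by the response node
log_lik = sum(yi * math.log(mi) - mi - math.lgamma(yi + 1) for yi, mi in zip(y, mu))
```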
\paragraph{Models} A Liesel model is a collection of nodes with properties for the model log-probability, the log-likelihood and the log-prior. Upon initialization, the model computes and stores a topological order of the nodes, which is required for updating the model. The API allows the user to extract and set the state of the model, that is, the values and log-probabilities of the nodes. If some of the nodes have a random seed as an input, the model can manage the PRNG state by splitting and distributing a JAX PRNG key.
The key feature of the model is its update mechanism, which also supports partial updates. If the value of a strong node is modified, its outputs (i.e.~nodes that have the modified node as one of their inputs) are recursively flagged as outdated. By calling the update method on the outdated nodes in a topological order, a consistent state can be restored. This is exactly how the update mechanism of the model works. For situations when only a subset of the nodes is of interest and a full update of the model graph is unnecessary, a partial update can be triggered through the model by specifying the target nodes of the update.
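The flagging and partial-update logic can be illustrated with a toy graph (hypothetical node names, not Liesel's code):

```python
# When a strong node changes, its outputs are flagged as outdated
# recursively; updating the outdated nodes in topological order restores
# a consistent state.
graph = {            # node -> list of outputs
    "beta": ["eta"],
    "X": ["eta"],
    "eta": ["mu"],
    "mu": [],
}

outdated = set()

def flag_outdated(node):
    for out in graph[node]:
        if out not in outdated:
            outdated.add(out)
            flag_outdated(out)

flag_outdated("beta")  # pretend the value of beta was modified

topological_order = ["X", "beta", "eta", "mu"]
update_order = [n for n in topological_order if n in outdated]
```

Only `eta` and `mu` need to be recomputed; `X` is untouched, which is the source of the caching benefits described above.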
The nodes and the model in Liesel follow a stateful, object-oriented approach, which is incompatible with JAX's requirement for pure, stateless functions. To take full advantage of JAX's and Goose's features for JIT compilation, the computations need to be separated from the state of the model. For this purpose, Liesel provides helpers to extract pure functions from the model, which can be used to compute the log-probability and to update the state. These functions are also used in the model interface that can connect the model with Goose.
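The idea of extracting a pure function from a stateful model can be sketched as follows (a toy model, not Liesel's actual helpers):

```python
# A pure log-probability function is extracted from a stateful model, so
# it can be JIT-compiled and used by a sampler without mutating the model.
class ToyModel:
    def __init__(self, y):
        self.state = {"mu": 0.0}
        self.y = y

    def extract_log_prob_fn(self):
        y = self.y  # the data is baked into the returned function

        def log_prob(state):  # depends only on its argument: pure
            return -0.5 * sum((yi - state["mu"]) ** 2 for yi in y)

        return log_prob

model = ToyModel([1.0, 2.0])
log_prob = model.extract_log_prob_fn()
value = log_prob({"mu": 1.5})  # evaluating does not touch model.state
```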
\subsection{Benefits of using Liesel}
Goose, the MCMC library that comes with Liesel, can be used independently of the model building library. When using Goose, the user can decide whether their model is best represented with Liesel, PyMC or a self-written log-probability function. Comparing these different approaches, we see the following particular benefits of using Liesel:
\begin{description}
\item[Caching] Weak nodes can be used to cache the results of expensive computations. This feature is particularly useful for efficient MCMC sampling with multiple parameter blocks, as supported by Goose. Using weak nodes as a cache, the results from the other branches of a tree-like model graph can be recycled when updating the branches individually. Further performance improvements can be achieved with Liesel's partial updates of the model graph, allowing the user to compute only those quantities that are relevant for a given operation.
\item[Graph manipulations] The graph of a Liesel model can be modified, allowing for a workflow with a base model, which can be customized to implement new variants of the model. This approach is most convenient if the base model is a semi-parametric regression model that can be configured with RLiesel (Section~\ref{sec:rliesel}). RLiesel provides many model components for semi-parametric regression, e.g.~different spline bases, penalties and response distributions.
\item[Hackability] Liesel tries to get out of the way of the user who is extending a model or implementing a new one. The design of the node and model classes is simple and follows the principle of least astonishment. When in doubt, less surprising behavior is favored over more convenience. New operations for a model can be implemented as weak nodes using JAX, which provides a familiar, NumPy-like user interface.
\item[Visualization] The graph of a Liesel model is composed of statistically meaningful nodes with values and log-probabilities. It is a wrapper around the computational graph of the model and can be plotted using the functions provided by Liesel. The visualization of the model graph can be useful for various purposes, including debugging or strengthening the intuition about the underlying statistical model.
\end{description}
\section{Goose: A toolbox for modular MCMC algorithms}
\label{sec:goose}
The Liesel framework includes a library named Goose for tailoring MCMC algorithms to specific estimation problems. Goose provides the means for statisticians to develop their own MCMC algorithms that fit the models they are working on better than generic samplers. Goose assists the statistician in three ways: First, by using Goose, they are freed from tedious bookkeeping tasks like storing the sample chains, managing the PRNG state or parallelizing the code to run multiple chains. Second, Goose provides the building blocks of an MCMC algorithm called kernels. A kernel is an algorithm that transitions the parameter vector or (in a blocked sampling scheme) a part of it within an MCMC iteration. Kernels can also define warmup procedures allowing them to learn their hyperparameters and thus removing the need to set them by hand. Third, a well-defined interface allows the combination of user-implemented problem-specific kernels with the default kernels in case the kernels that are shipped with Goose are not sufficient for the estimation problem.
All in all, Goose enables users to construct entirely new algorithms but also to use existing building blocks and combine them in new ways to match the estimation problem at hand. Statisticians using Goose can focus on how one MCMC transition should be performed. In this section, we introduce Goose in detail and our key design choices. Some implementation details are also discussed.
\subsection{The primary design goals}
The general goal of providing a modular framework for MCMC inference for statistical models can further be broken down into the following more specific design goals:
\begin{itemize}
\item Goose should free the user from monotonous tasks that are repeatedly encountered when implementing MCMC algorithms. Among these are storing the intermediate states, multi-chain management, tracking errors and debug messages, and calling tuning algorithms at the right time.
\item Goose should allow the user to decide how to transition the model parameters from one MCMC iteration to the next. In Goose, we do that by letting the user combine multiple transition kernels. Each kernel moves a part of the parameter vector (or, if only one kernel is used, the entire parameter vector) using a valid MCMC transition.
\item Goose should have a mechanism to tune the transition kernels automatically during a warmup phase, thereby sparing the user from tuning the kernel hyperparameters by hand.
\item The user should have full control over the combined MCMC algorithm. That means, in particular, that all defaults must be changeable, but even more importantly, Goose must allow the implementation of user components. Therefore, the framework should be based on a collection of modular components with well-documented interfaces. The user should be able to compose and extend the components in a flexible, yet straightforward way.
\item Goose must support continuous and discrete model parameters.
\item Liesel models should be first-class citizens and easy to set up with Goose. However, Goose should be a general MCMC framework that can be used with any JAX model, e.g.~a PyMC model or a hand-coded model by the user.
\item Goose strives to be convenient to use and fast. To achieve these goals, Goose provides pre-implemented components of popular MCMC algorithms like HMC and NUTS. Furthermore, Goose makes heavy use of JAX's capabilities for automatic differentiation (sparing the user the implementation of derivatives) and just-in-time compilation (speeding up the repeated evaluation of the log-probability of the model). For this reason, the models and the components of the MCMC algorithms need to be expressed in JAX.
\item Whenever possible, Goose should wrap well-tested MCMC kernels from other libraries such as the NUTS and HMC kernels from BlackJAX. This way, we can avoid re-implementing complex algorithms, which would be unnecessarily error-prone, while extending the user base of existing projects like BlackJAX.
\end{itemize}
However, there are also aspects that are outside the scope of Goose. For instance, Goose does not check the mathematical correctness of the sampling schemes. It is up to the user to design a valid MCMC algorithm. The results from Goose should generally be reproducible on the same system. However, reproducibility between different hardware cannot be guaranteed due to small differences in the floating point arithmetic. These differences may add up to observable differences during many MCMC iterations using modern MCMC algorithms.\footnote{Exact reproducibility is limited for many modern computational tools. See for example Stan's reference manual (\url{https://mc-stan.org/docs/reference-manual/reproducibility.html}) or the corresponding section in Liesel's tutorial book (\url{https://liesel-devs.github.io/liesel-tutorials/reproducibility.html}).}
\subsection{Main components of Goose}
Goose is composed of many classes and interfaces. Its design boils down to a few central pieces that users must understand in order to create MCMC algorithms with Goose in a few steps. A deeper understanding is only required to write extensions. The most important building blocks and their relationships are illustrated in Figure~\ref{fig:goose}. We describe their roles here. Note that we sometimes refer to the model parameters as the ``position''.
\begin{figure}
\centering
\includegraphics[width=0.6\linewidth]{figures/goose}
\caption{Entity-relationship diagram of Goose's main components. Only the most important classes, fields and methods are shown here.}
\label{fig:goose}
\end{figure}
\begin{description}
\item[Engine] The \texttt{Engine} class is the central part of Goose and acts as a coordinating entity, hiding a big part of the complexity from the user. In particular, after the user has decided how the transitions should be executed, it makes sure that the right functions and methods are called at the right time guaranteeing that the transitions of the position happen as requested. Moreover, the engine keeps track of the sampling history and advances through the different sampling phases (e.g.~the warmup and posterior phase). It also coordinates the PRNG state and provides the top-level user interface.
\item[Kernel] A kernel object performs a single MCMC transition, i.e.~an update of the position or some elements of the position. The update must be a valid MCMC transition, for example based on the Metropolis-Hastings algorithm. The \texttt{Kernel} interface describes how the engine can interact with the kernels. The user can either use pre-implemented kernels or implement new kernel classes adhering to the kernel interface.
\item[Epoch] An epoch is a series of MCMC iterations. The \texttt{EpochConfig} class describes an epoch. Epochs are used to communicate to the kernels which phase of the MCMC algorithm they are in and which operations they are allowed to perform in this phase. More specifically, we divide the sampling process into a warmup and a posterior phase. Samples from the posterior phase are expected to be valid MCMC samples. In contrast, during the warmup phase, the chain may not yet have converged, and during the so-called adaptation epochs, the Markov property may be violated. This way, we allow the kernels to learn their hyperparameters during the adaptation epochs in the warmup phase. If done right, this can spare the user the manual tuning of the kernel hyperparameters, and it can lead to more efficient sampling in the posterior phase.
The simplest setup would contain only two epochs: a burn-in epoch (part of the warmup phase) and a posterior epoch (part of the posterior phase), each containing multiple MCMC iterations. On the other hand, a more complex setup can include multiple adaptation epochs in the warmup phase.
\item[ModelInterface] The \texttt{ModelInterface} describes how the components of Goose can communicate with the model. Most importantly, it describes how the unnormalized log-posterior can be evaluated for a given position. By defining the model interface as an abstraction layer, Goose can easily be used with different model backends.
\end{description}
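A minimal epoch sequence might look like this (hypothetical field names, not Goose's actual \texttt{EpochConfig} class):

```python
# A warmup phase with an adaptation and a burn-in epoch, followed by a
# posterior epoch.
from dataclasses import dataclass

@dataclass
class EpochConfig:
    type: str        # "adaptation", "burnin" or "posterior"
    duration: int    # number of MCMC iterations in this epoch

epochs = [
    EpochConfig("adaptation", 500),  # kernels may tune their hyperparameters
    EpochConfig("burnin", 500),      # Markov property holds, not yet converged
    EpochConfig("posterior", 1000),  # valid MCMC samples
]

warmup_iterations = sum(e.duration for e in epochs if e.type != "posterior")
```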
To set up an MCMC algorithm, the user needs to combine the different components of Goose into one valid engine object that handles the communication between them. However, the constructor of the engine is quite complex. To ease the creation of an engine object, the \texttt{EngineBuilder} class can be used. It provides a step-by-step interface for the configuration of an engine.
Using the engine builder, Goose leaves the user with only a few tasks to set up an MCMC sampler. These are: (i) Select the appropriate kernels such that every part of the position is moved and add the kernels to the builder. (ii) Supply the builder with an instance of a model interface so that the engine knows how to communicate with the model. (iii) Set the initial values for the position. (iv) Define a sequence of epochs with the desired warmup scheme and the right number of posterior samples. Goose provides a helper function for this task. (v) Additionally, the user must initialize the random number generator and decide how many chains should be run in parallel. Afterwards, the engine is ready to be used for sampling.
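The builder pattern behind these five steps can be mimicked with a toy class (hypothetical method names, not Goose's actual \texttt{EngineBuilder} API):

```python
# Toy step-by-step builder: each setter corresponds to one of the tasks
# (i)-(v) and returns self to allow chaining.
class ToyEngineBuilder:
    def __init__(self):
        self.kernels, self.epochs = [], []
        self.model_interface = None
        self.position = None
        self.seed, self.num_chains = None, 1

    def add_kernel(self, kernel):              # (i) kernels covering the position
        self.kernels.append(kernel)
        return self

    def set_model(self, interface):            # (ii) model interface
        self.model_interface = interface
        return self

    def set_initial_values(self, position):    # (iii) initial position
        self.position = position
        return self

    def set_epochs(self, epochs):              # (iv) warmup scheme + posterior
        self.epochs = epochs
        return self

    def set_seed(self, seed, num_chains=4):    # (v) PRNG and parallel chains
        self.seed, self.num_chains = seed, num_chains
        return self

    def build(self):
        assert self.kernels and self.model_interface and self.position
        return {"kernels": self.kernels, "epochs": self.epochs,
                "position": self.position, "seed": self.seed,
                "num_chains": self.num_chains}

engine = (
    ToyEngineBuilder()
    .add_kernel("rw-kernel")
    .set_model("model-interface")
    .set_initial_values({"beta": [0.0, 0.0]})
    .set_epochs(["burnin", "posterior"])
    .set_seed(42)
    .build()
)
```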
\subsection{Some implementation details}
To enable a deeper understanding of Goose, we describe how the sampling is performed on an implementation level. We explain in detail how the engine communicates with the kernels and provide an overview of the sequence of these interactions. A simplified sequence diagram of the sampling process is shown in Figure~\ref{fig:engine}. Before the sampling is started with the method \texttt{sample\_all\_epochs()}, the user has to create an engine object as described above. That means a sequence of kernels and a sequence of epochs must be defined, the engine must be connected to the model via the model interface, and the initial position must be set. In the following, we assume that only one kernel is used. However, the extension to multiple kernels is straightforward and described later.
\begin{figure}
\centering
\includegraphics[height=18cm]{figures/engine}
\caption{Sequence diagram of the communication between the engine and a kernel. For simplification, we show only one kernel here. However, the extension to multiple kernels is natural by calling the kernel methods in a sequence, which can be achieved by wrapping the kernels in a \texttt{KernelSequence} object. The engine provides additional methods to run the epochs one by one and to append epochs, which are not shown here. These methods allow for an interactive use of the engine, while the diagram illustrates a ``one-shot'' run of an already configured engine.}
\label{fig:engine}
\end{figure}
The sampling process is divided into multiple phases, which we call ``epochs''. Each epoch has a duration, i.e.~the number of MCMC iterations that are performed in the epoch, and a type. At the beginning of each epoch, the kernel method \texttt{start\_epoch()} is called informing the kernel about the new epoch and allowing it to modify the kernel state. The kernel state is a data structure used to store parameters defining the behavior of the kernel. It may be modified during the warmup. The scale of the proposal distribution (also known as the step size) of a random walk kernel serves as an example in this section. The kernel state can also include a cache required to calculate the actual parameters that affect the transitions. Allowing the kernel to change its state at the beginning of an epoch enables it to prepare for the subsequent operations.
Afterwards, the control is handed back to the engine, and it calls the kernel method \texttt{transition()} for each MCMC iteration in the current epoch. The transition method is supposed to move the position and return the new position together with additional information (which would typically include whether the position changed, how large the acceptance probability was, whether an error occurred, etc.) to the engine. The engine takes care of storing the position and the additional information. Note that during the warmup phase, the kernels are allowed to change their state in the transition method, which allows for on-the-fly tuning of kernel parameters and updates of the cache. This is required, for example, for the dual averaging algorithm \citep[Section 3.2]{Nesterov2009, Hoffman2014} or for Welford's online algorithm for calculating the empirical variance of an element of the position \citep{Welford1962}.
Once all transitions defined in the current epoch have been carried out, the kernel method \texttt{end\_epoch()} is called. Again, the kernel can change its state and prepare for the following tuning. To invoke the tuning, the kernel method \texttt{tune()} is called if the current epoch is an adaptation epoch. In the adaptive random walk kernel, this method would be the place to calculate the new step size based on Welford's algorithm and update it in the kernel state. The kernel is allowed to request the history of the positions visited in the current epoch. Having the history available facilitates the implementation of certain tuning algorithms.
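The lifecycle described above (accumulate statistics during the transitions, then derive new hyperparameters in \texttt{tune()}) can be sketched for a hypothetical random walk kernel using Welford's algorithm; this is a conceptual illustration, not Goose's implementation:

```python
# Welford (1962): numerically stable online mean and variance.
class WelfordState:
    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0

def welford_update(state, x):
    state.n += 1
    delta = x - state.mean
    state.mean += delta / state.n
    state.m2 += delta * (x - state.mean)

def welford_variance(state):
    return state.m2 / (state.n - 1) if state.n > 1 else 0.0

kernel_state = {"step_size": 1.0, "welford": WelfordState()}

# start_epoch(): reset the accumulator for the new adaptation epoch
kernel_state["welford"] = WelfordState()

# transition() is called once per iteration; here we only show the
# bookkeeping part that feeds the accumulator with the visited positions
for position in [0.2, -0.1, 0.4, 0.3, -0.2]:
    welford_update(kernel_state["welford"], position)

# tune(): derive the new proposal scale from the estimated posterior scale
kernel_state["step_size"] = welford_variance(kernel_state["welford"]) ** 0.5
```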
The outlined process is repeated for each epoch. As soon as the first epoch of the posterior phase is encountered, the kernel method \texttt{end\_warmup()} is called before the call to \texttt{start\_epoch()}. It informs the kernel that the warmup phase is over, and subsequent to this call, the kernel must respect the Markov property.
Finally, the user can request the sampling results from the engine and inspect them. The results do not only contain the chain of the visited positions but also meta-information and an error log (e.g.~an error is reported if the log-posterior evaluates to $-\infty$). Liesel also provides some utilities for the inspection of the chains.
A more interactive approach is also possible. The user can always add more epochs to continue sampling. One restriction is that Goose does not allow posterior epochs to be followed by epochs of any other type. The interactive approach is facilitated by the engine methods \texttt{append\_epoch()} and \texttt{sample\_next\_epoch()}. The user can run a few warmup epochs, inspect the chains, decide if they have reached the typical set and converged, add more warmup epochs if necessary, or move on to the posterior epoch otherwise.
Everything that has been said so far can easily be generalized to multiple kernels. In that case, each method call is carried out in a loop over the sequence of kernels defined by the user. Note that the kernels cannot share their state.
If users want to work with custom MCMC transition or tuning methods or extend Goose's collection of kernels, they have to implement a new class that is required to follow the \texttt{KernelInterface}. The two most important methods to do so are \texttt{transition()} and \texttt{tune()}. We describe them in more detail and also provide more information on the implementation of the engine, which is useful to understand the requirements for the kernel methods.
\paragraph{The engine.}
As described above, the engine orchestrates the sampling process and provides the top-level user interface. It also hides some complexity that arises from using JAX and JIT-compiled functions. Using JAX comes with many benefits, e.g.~automatic differentiation (AD) and just-in-time (JIT) compilation. Furthermore, JAX programs can be executed on high-performance devices like GPUs and TPUs. For efficient sampling, the engine automatically groups multiple MCMC iterations into one compilation unit and uses JAX's \texttt{jit()} function to compile them together. Thus, the MCMC iterations are performed together on the computing device without the need for communication with the host. This ensures a better performance, especially if the computing device is not the CPU.
One drawback, or rather one limitation, is the requirement of ``pureness''\footnote{A pure function is a function whose value depends solely on the values of its arguments and which furthermore has no side effects. In JAX and Goose, the concept of pureness is a bit weaker. A function may depend on variables in the environment. However, the values of those variables are then compiled into the function, and therefore, the behavior of the function does not change if the variables are updated later. Consequently, the compiled function is pure.} for functions to be compiled with JAX. Pureness is not necessarily a disadvantage, because pure functions are easier to reason about for humans and for the compiler. This can result in faster execution times compared to non-pure functions.
Goose needs to guarantee that the compiled functions are pure. This implies that the engine must manage the PRNG state -- we use JAX's splittable Threefry counter-based PRNG -- as well as the kernel states. Goose requires all kernel methods called within the compiled functions (e.g.~\texttt{transition()} and \texttt{tune()}) to be pure, meaning that the kernels cannot store values changing over time in fields but must pass them back to the engine via a \texttt{KernelState} object, and receive them again from the engine together with the PRNG state for the next transition.
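The pureness requirement can be illustrated with a toy transition function that threads the PRNG state and the kernel state explicitly (plain Python stand-ins for JAX's splittable keys, not Goose's code):

```python
# A pure transition: no fields are mutated; the updated kernel state is
# returned to the engine instead of being stored on the kernel object.
import random

def transition(prng_key, kernel_state, position):
    rng = random.Random(prng_key)  # stand-in for deriving a stream from a key
    proposal = position + kernel_state["step_size"] * rng.gauss(0.0, 1.0)
    new_kernel_state = {**kernel_state,
                        "n_proposals": kernel_state["n_proposals"] + 1}
    return new_kernel_state, proposal

state = {"step_size": 0.5, "n_proposals": 0}
state1, pos1 = transition(123, state, 0.0)
state2, pos2 = transition(123, state, 0.0)  # same inputs yield same outputs
```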
\paragraph{The transition method.}
The two most important methods every kernel needs to implement are the \texttt{transition()} and the \texttt{tune()} method. These methods are called by the engine and need to be pure and jittable.
The purpose of the transition method is to move the position or parts of it using a valid MCMC step, e.g.~a Metropolis-Hastings algorithm. The position is a subset of the model state. Through the standardized model interface, the kernel can extract the position from the model state.
The signature of the \texttt{transition()} method is as follows:
\begin{lstlisting}
Py> class Kernel:
+ # ...
+
+ def transition(
+ self,
+ prng_key: KeyArray,
+ kernel_state: KernelState,
+ model_state: ModelState,
+ epoch: EpochState,
+ ) -> TransitionResult[KernelState, TransitionInfo]:
+ # ...
+
+ # ...
\end{lstlisting}
Since the \texttt{transition()} method must be pure and MCMC transitions generally involve the generation of random numbers, the state of the PRNG needs to be provided as an argument. In addition, the \texttt{transition()} method receives the kernel state, the model state and the epoch state as arguments, and returns a \texttt{TransitionResult} object, which wraps the new kernel state, the new model state and some meta-information about the transition, e.g.~an error code or the acceptance probability (in a \texttt{TransitionInfo} object). An error code of zero indicates that the transition did not produce an error.
All inputs and outputs must be valid ``pytrees'' (i.e.~arrays or nested lists, tuples or dicts of arrays). The structure of these objects, e.g.~the shape of the arrays in the kernel state, must not change between transitions. This allows the kernels to have specialized \texttt{KernelState} and \texttt{TransitionInfo} classes.
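What ``same structure'' means for a pytree can be sketched with a small helper that reduces a nested container to its layout (a simplified stand-in for JAX's tree-structure utilities):

```python
# Two pytrees have the same structure if their container layout matches,
# regardless of the leaf values.
def structure(tree):
    if isinstance(tree, dict):
        return {k: structure(v) for k, v in sorted(tree.items())}
    if isinstance(tree, (list, tuple)):
        return [structure(v) for v in tree]
    return "leaf"  # arrays and scalars are leaves

old_state = {"step_size": 0.1, "welford": [0, 0.0, 0.0]}
new_state = {"step_size": 0.2, "welford": [5, 0.12, 0.268]}
```

The two kernel states differ in their values but share one structure, which is what allows them to be passed through the same compiled function.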
\paragraph{Tuning a kernel.}
The sampling process can be divided into epochs of four types: fast and slow adaptation epochs, burn-in epochs and posterior epochs. The adaptation and burn-in epochs are so-called warmup epochs. During the adaptation epochs, the kernels are allowed to learn their hyperparameters from the history. Samples from the adaptation epochs are usually invalid as MCMC samples, because the Markov property of the chain is violated. In contrast, during a burn-in epoch, the kernels should no longer adjust their hyperparameters and the Markov property should be respected, but the chain may still require some more time to converge. Finally, when reaching the first posterior epoch, the chain should have converged, all transitions should be valid, e.g.~there should be no divergent transitions, and hence, the samples should approximate the target distribution appropriately.
The kernel method \texttt{tune()} is supposed to update the kernel hyperparameters at the end of an adaptation epoch. The method receives the PRNG state, the model state, the kernel state, the epoch state and optionally the ``history'', i.e.~the samples from the previous epoch, as arguments. It returns a \texttt{TuningResult} object that wraps the new kernel state and some meta-information about the tuning process, e.g.~an error code. As for the transition, the \texttt{TuningInfo} class can be kernel-specific but must be a valid pytree.
The signature of the \texttt{tune()} method is as follows:
\begin{lstlisting}
Py> class Kernel:
+ # ...
+
+ def tune(
+ self,
+ prng_key: KeyArray,
+ kernel_state: KernelState,
+ model_state: ModelState,
+ epoch: EpochState,
+ history: Position | None,
+ ) -> TuningResult[KernelState, TuningInfo]:
+ # ...
+
+ # ...
\end{lstlisting}
\paragraph{Debugging.}
The engine can be configured to store more information about the sampling process, e.g.~for debugging purposes. The extra information can include the log-posterior, log-likelihood, log-prior or any other quantity that can be computed from the model state by a quantity generator. Debugging is further facilitated with the option to store the kernel states for each iteration. Moreover, the engine can store information about the transitions and the tuning such as the acceptance probabilities or the proposals. In any case, the \texttt{transition()} and \texttt{tune()} methods of the kernels need to return an error code and inform the engine about non-fatal errors and warnings. The engine keeps a log and warns the user about potential problems. Goose's diagnostic tools can further aid the detection of potential sampling issues.
\subsection{Standard kernels in Goose}
Goose provides several kernels that can be used directly with many models. We discuss some of them here:
\begin{description}
\item[RandomWalkKernel] The RandomWalkKernel implements a Gaussian proposal distribution and a Metropolis-Hastings acceptance step. The kernel is self-tuning and uses the dual averaging algorithm to adjust the step size (i.e.~to scale the proposal distribution) during fast and slow adaptation epochs, such that a user-defined target acceptance rate, 0.234 by default \citep{Gelman1997}, is reached.
\item[HMCKernel and NUTSKernel] The HMCKernel and NUTSKernel use the gradient of the log-posterior to generate MCMC chains with a low autocorrelation. The implementation of the \texttt{transition()} method is based on BlackJAX's implementations of the HMC \citep{Neal2011} and NUTS \citep{Hoffman2014, Lao2020, Phan2019} algorithms. Both kernels are able to tune the step size during fast and slow adaptation epochs using the dual averaging algorithm. After slow adaptation epochs, the mass vector or matrix of the momentum is adjusted based on the empirical variance-covariance of the samples from the previous epoch.
\item[IWLSKernel] The IWLSKernel is named after the method proposed by \citet{Gamerman1997}, which is often used for Bayesian distributional regression models \citep{Brezger2005}. However, Liesel's implementation is also inspired by the roughly equivalent Metropolis-adjusted Langevin algorithm (MALA) with the Riemann metric \citep{Girolami2011}. This approach allows us to add a step size parameter in a straightforward way, which can then be tuned with the dual averaging algorithm during fast and slow adaptation epochs. More precisely, the IWLSKernel employs a Metropolis-Hastings correction and a Gaussian proposal density, where the mean vector $\muvec$ and the covariance matrix $\Sigmamat$ depend on the gradient (score) and the Hessian (Hess) of the log-posterior, i.e.
$$\muvec = \thetavec + \nicefrac{s^2}{2} \operatorname{Hess}(\thetavec)^{-1} \operatorname{score}(\thetavec), \qquad \Sigmamat = s^2 \operatorname{Hess}(\thetavec)^{-1},$$
where $s$ denotes the step size and $\thetavec$ the position vector. The factor $\nicefrac{1}{2}$ that is multiplied with $s^2$ in the mean vector comes from the Langevin diffusion, which is the basis of the MALA algorithm.
\item[GibbsKernel] The GibbsKernel can wrap a user-defined function generating samples from a full conditional into a Goose-compatible kernel. With a Gibbs sampler, no tuning is necessary or possible, and therefore, the GibbsKernel has a trivial \texttt{tune()} method returning an empty kernel state.
\item[MHKernel] Similar to the GibbsKernel, the MHKernel implements a Metropolis-Hastings sampler as a wrapper around a user-defined function generating proposals based on the current state. If the proposal distribution is asymmetric, the function must also return the Metropolis-Hastings correction factor. An optional step size argument is also provided, which is tuned with the dual averaging algorithm if used.
\end{description}
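To illustrate the IWLS proposal moments given above, the following NumPy sketch computes $\muvec$ and $\Sigmamat$ from user-supplied score and curvature functions; in Goose these derivatives are obtained with JAX's automatic differentiation. The function name and the toy target are illustrative, not part of Goose's API, and we read $\operatorname{Hess}$ as the negative Hessian (the observed Fisher information, cf.~Section~\ref{sec:lidar-schemes}), so that the covariance matrix is positive definite.

```python
import numpy as np

def iwls_moments(theta, score, neg_hessian, s):
    """Mean and covariance of the Gaussian IWLS/MALA proposal:
    mu = theta + s^2/2 * H^{-1} score(theta),  Sigma = s^2 * H^{-1},
    where H is the negative Hessian (observed Fisher information) of
    the log-posterior, so that Sigma is positive definite."""
    H_inv = np.linalg.inv(neg_hessian(theta))
    mu = theta + 0.5 * s**2 * H_inv @ score(theta)
    return mu, s**2 * H_inv

# toy target: a standard bivariate normal, so score(t) = -t and H = I;
# the proposal mean then shrinks theta towards the mode at zero
mu, cov = iwls_moments(np.ones(2), lambda t: -t, lambda t: np.eye(2), s=1.0)
```

For the standard normal target, the proposal mean is $(1 - \nicefrac{s^2}{2})\thetavec$, i.e.~a Langevin-type drift towards the mode.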
\subsection{Beyond pre-implemented kernels}
The default Goose kernels are sufficient to estimate many statistical models with MCMC. However, Goose was specifically designed for cases when specialized kernels are needed. In these situations, new kernel classes adhering to the kernel interface can be implemented. The developer does not need to start from scratch, however. Goose comes with some building blocks that facilitate the implementation of new kernel classes. For example, if a kernel should support dual averaging, Goose can extend the kernel state with the necessary fields. It also comes with functions to calculate the error sum and to adjust the step size. A mixin for Metropolis-Hastings kernels is provided as well.
\section{RLiesel: An R interface for semi-parametric regression}
\label{sec:rliesel}
In this section, we discuss semi-parametric and distributional regression, the model classes Liesel offers first-class support for, before introducing RLiesel, an R interface that assists the user with the configuration of these regression models in Liesel. We also describe a natural workflow for RLiesel using R Markdown and Quarto.
\subsection{Semi-parametric regression}
\label{sec:semi-par}
Semi-parametric regression models combine parametric (usually linear) and non-parametric (usually spline-based) covariate effects. The standard semi-parametric regression model is given by
\begin{equation}
y_i = \beta_0 + \xvec_{i1}'\betavec_1 + f_{2}(\xvec_{i2}, \betavec_2) + \dots + f_{L}(\xvec_{iL}, \betavec_L) + \varepsilon_i, \qquad \varepsilon_i \overset{\text{i.i.d.}}{\sim} \mathcal{N}(0, \sigma^2),
\label{eq:semi-par}
\end{equation}
where the response $y_i$ is modeled as a function of the covariates $\xvec_{i1}$ with parametric effects and the covariates $\xvec_{il}$ with the non-parametric effects $f_{l}(\xvec_{il}, \betavec_l)$ for $l = 2, \dots, L$. The regression coefficients are the intercept $\beta_0$, the slope coefficients $\betavec_1$ and the spline coefficients $\betavec_l$. Fitting the model requires the estimation of the regression coefficients and the variance of the additive Gaussian error term $\varepsilon_i$.
One typical example of a non-parametric covariate effect is the B-spline $f(\xvec_i, \betavec) = \boldsymbol{b}(x_i)'\betavec$, where $\boldsymbol{b}(x_i)$ is the vector of B-spline basis functions for a fixed set of knots evaluated at $x_i$. For better readability, the index $l$ is omitted in the remainder of this section. The given B-spline representation is linear in the spline coefficients $\betavec$, allowing for a straightforward evaluation of the log-likelihood and the use of efficient estimation techniques. To avoid overfitting, certain smoothness properties can be encouraged through regularization, giving rise to the concept of penalized B-splines, also known as P-splines \citep{Eilers1996, Lang2004}.
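To make the basis construction concrete, the following sketch evaluates a cubic B-spline basis with the Cox--de Boor recursion; the knot vector and the basis dimension are illustrative and not tied to the defaults of any particular package.

```python
import numpy as np

def bspline_basis(x, knots, degree=3):
    """Evaluate all B-spline basis functions at the points x via the
    Cox-de Boor recursion. Assumes a strictly increasing knot vector
    (e.g. equidistant, as for P-splines), so no zero denominators occur.
    Returns a design matrix of shape (len(x), len(knots) - degree - 1)."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    t = np.asarray(knots, dtype=float)
    # degree 0: indicator functions of the knot spans [t_i, t_{i+1})
    B = np.array([(t[i] <= x) & (x < t[i + 1]) for i in range(len(t) - 1)],
                 dtype=float)
    for d in range(1, degree + 1):
        B = np.array([
            (x - t[i]) / (t[i + d] - t[i]) * B[i]
            + (t[i + d + 1] - x) / (t[i + d + 1] - t[i + 1]) * B[i + 1]
            for i in range(len(t) - d - 1)
        ])
    return B.T

# equidistant knots extending the unit interval -> 13 cubic basis functions
knots = np.linspace(-0.3, 1.3, 17)
B = bspline_basis(np.linspace(0.0, 0.99, 50), knots)
# on [0, 1) the basis functions are non-negative and sum to one
```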
In Bayesian statistics, regularization is achieved through informative priors, such as the multivariate normal distribution with the density
\begin{equation}
p(\betavec \mid \tau^2) \propto \left(\frac{1}{\tau^2}\right)^{\rk(\Kmat)/2} \exp\left(-\frac{1}{2\tau^2} \betavec'\Kmat\betavec\right),
\label{eq:mvn-prior}
\end{equation}
where $\tau^2$ is the variance (or inverse smoothing) parameter, and $\Kmat$ is a (potentially rank-deficient) penalty matrix. For P-splines with equidistant knots, it is common to penalize the second differences of the spline coefficients using the penalty matrix $\Kmat = \Dmat_2'\Dmat_2$, where $\Dmat_2$ is the second-order difference matrix such that $\Dmat_2\betavec = \Delta^2\betavec$. In this case, the penalty matrix is in fact rank-deficient, implying that additional constraints, usually a sum-to-zero constraint, are required for the identification of the spline coefficients.
The hyperprior on the variance parameter $\tau^2$ is typically weakly informative with support on the non-negative real line. \citet{Lang2004} suggest using the conjugate inverse gamma prior with the hyperparameters $a = b = 0.01$ (or some other small number), allowing us to draw directly from the full conditional. However, priors like the half-Cauchy distribution or half-normal distribution might have better statistical properties in practice \citep{Gelman2006, Klein2016Priors}.
The concept of semi-parametric regression also encompasses other effect types that can be expressed as the inner product of a vector of basis function evaluations and a vector of regression coefficients, e.g.~random effects for clustered data or spatial effects. The structure of the penalty matrix $\Kmat$ in the multivariate normal prior~\eqref{eq:mvn-prior} depends on the desired effect type. For a random effect, we have $\Kmat = \Imat$, for an (intrinsic) Gaussian Markov random field, $\Kmat$ arises from the neighborhood structure \citep{Rue2005}, and for more general spatial effects, Vecchia approximations can be used to construct $\Kmat$ \citep{Katzfuss2021}. Note that the linear effect $\xvec_i'\betavec$ also fits into this framework by setting $\Kmat = \Zeromat$, reducing the multivariate normal prior~\eqref{eq:mvn-prior} to a flat prior. Consequently, parametric and non-parametric covariate effects can be treated the same way in this framework, and are generically referred to as predictor components or smooth terms. Semi-parametric regression is sometimes (perhaps more accurately, but also more verbosely) called structured additive regression. Consult \citet[Chapters~8 and~9]{Fahrmeir2013} for more information on predictor components and structured additive regression.
\subsection{Distributional regression}
Semi-parametric or structured additive regression predictors are often used in the context of distributional regression. These models are also known as generalized additive models for location, scale and shape (GAMLSS) and combine multiple regression predictors for different response parameters, that is,
\begin{equation}
p(y_i \mid \xvec_i, \betavec) = p(y_i \mid \theta_1(\xvec_{i1}, \betavec_1), \dots, \theta_K(\xvec_{iK}, \betavec_K)),
\label{eq:dist-reg}
\end{equation}
where the response $y_i$ follows a probability distribution with the parameters $\theta_k$ for $k = 1, \dots, K$, each of which is modeled as a function of the covariates $\xvec_{ik}$ and the regression coefficients $\betavec_k$. In contrast to generalized linear models (GLMs), the response distribution is not limited to the exponential family but can be of any parametric type, including for example non-negative continuous distributions like the Weibull or Pareto distribution. Distributional regression models for count data can take zero-inflation and overdispersion into account \citep{Klein2015Count}, while fractional responses (i.e.~single or multiple percentages) can be analyzed with the beta or Dirichlet distribution \citep{Klein2015Multivariate}. With mixed discrete-continuous distributions, we can add points with a non-zero probability mass to the support of a continuous response distribution. Finally, the distributional regression framework allows us to study multivariate response vectors using either conventional multivariate distributions \citep{Michaelis2018} or copulas to describe complex dependence structures with arbitrary marginal distributions \citep{Klein2016Copula}.
In distributional regression, each parameter of the response distribution is modeled with a semi-parametric regression predictor $\eta_{ik}$ (just as the one in Model~\eqref{eq:semi-par} in the previous section) and a response (or inverse link) function $h_k$, such that
\begin{equation}
\theta_{k}(\xvec_{ik}, \betavec_k) = h_k(\eta_{ik}) = h_k(\beta_{k0} + \xvec_{ik1}'\betavec_{k1} + f_{k2}(\xvec_{ik2}, \betavec_{k2}) + \dots + f_{kL_k}(\xvec_{ikL_k}, \betavec_{kL_k})).
\label{eq:dist-par}
\end{equation}
The response function $h_k$ is a one-to-one mapping of the predictor $\eta_{ik}$ from the real line to the appropriate parameter space. For positive-valued response parameters, the exponential function is typically used as a response function, and for parameters on the unit interval, the logistic function is a common choice.
The distributional regression model~\eqref{eq:dist-reg} with the semi-parametric predictor~\eqref{eq:dist-par} is a Bayesian hierarchical model, where the posterior factorizes as $p\bigl(\bigcup_{k,l} \{\betavec_{kl}, \tau^2_{kl}\} \mid \bigcup_i \{y_i\}\bigr) \propto \prod_i p\bigl(y_i \mid \bigcup_{k,l} \{\betavec_{kl}\}\bigr) \cdot \prod_{k,l} \bigl(p(\betavec_{kl} \mid \tau^2_{kl}) \cdot p(\tau^2_{kl})\bigr)$. The model graph is a DAG with a tree-like structure, making it a good fit for software like Liesel, PyMC or Stan.
\subsection{DAG representations of semi-parametric regression models}
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{figures/dist-reg}
\caption{One possible DAG representation of the semi-parametric distributional regression model~\eqref{eq:dist-reg}. The different node types are described in Figure~\ref{fig:node-types}: Strong nodes are blue, weak nodes are orange. Nodes with double borders have a probability distribution, and oblique nodes are model parameters. Plate notation is used to indicate the range of the variable indices.}
\label{fig:dist-reg}
\end{figure}
One possible DAG representation of the semi-parametric distributional regression model is shown in Figure~\ref{fig:dist-reg}. The strong node $\alphavec_{kl}$ denotes the fixed hyperparameters of the prior of the variance parameter $\tau^2_{kl}$. Typically, $\alphavec_{kl} = (a_{kl}, b_{kl})' = (0.01, 0.01)'$ in the case of an inverse gamma prior. The choice of the weak nodes is essentially arbitrary: The nodes $f_{ikl}$, $\eta_{ik}$ and $\theta_{ik}$ could also be merged into a single weak node. In Liesel, we encourage a structure of the model graph that resembles the mathematical formulation of the semi-parametric distributional regression model in Equation~\eqref{eq:dist-reg} and \eqref{eq:dist-par}. This allows us to provide a number of pre-defined nodes for the components of the model class, which can be combined by the user in different ways.
The DAG representation can also be modified to improve the computational efficiency of the model. In the DAG as shown in Figure~\ref{fig:dist-reg}, the evaluation of the log-probability of $\betavec_{kl}$, i.e.~the evaluation of the multivariate normal prior~\eqref{eq:mvn-prior}, requires computing the rank of the penalty matrix $\Kmat_{kl}$. Given that the penalty matrix is usually a fixed hyperparameter, it is wasteful to repeat this expensive operation every time $\betavec_{kl}$ or $\tau^2_{kl}$ is updated. The performance of the model can be improved by adding a strong node with the pre-computed rank of $\Kmat_{kl}$. This node can then be used as an input for the probability distribution of $\betavec_{kl}$, hence avoiding the repeated computation of the matrix rank.
\subsection{Setting up semi-parametric regression models with RLiesel}
RLiesel is an R interface for Liesel, which can be used to configure semi-parametric distributional regression models. It is implemented as a thin wrapper around the \texttt{mgcv} package \citep{Wood2022}. The entry point to the package is the \texttt{liesel()} function, which requires the user to pass in the response data and distribution, and the predictors as arguments. The predictors are specified as R formulas with the extensions from \texttt{mgcv} to define non-parametric predictor components. They are passed on to the \texttt{gam()} function from \texttt{mgcv}, which initializes the design and penalty matrices. Finally, the Liesel model graph is built and filled with the data from \texttt{mgcv}. A concrete example of how a model can be specified in RLiesel is given in the case study in Section~\ref{sec:case-study}.
\texttt{mgcv} is the state-of-the-art package for semi-parametric regression in R. It is extremely powerful, supports many different response distributions and predictor components, and is installed with R by default. Other notable features of \texttt{mgcv} are the automatic smoothness selection \citep{Wood2004} and various multivariate smooth terms. To the best of our knowledge, no package with a comparable set of features exists in Python. Most newer R packages in the domain of semi-parametric regression modeling depend on \texttt{mgcv} in one way or another. With our implementation of RLiesel, we follow the same approach and leverage the features of \texttt{mgcv} for the use with JAX and Liesel, avoiding the need to re-implement all predictor components in Python.
RLiesel configures the model graph, but does not automatically run an estimation procedure. Goose can be used for MCMC-based estimation, but needs to be configured in Python. For a seamless integration of RLiesel and Goose, we recommend Quarto \citep{Scheidegger2022} and \texttt{reticulate} \citep{Ushey2022}. Quarto allows the user to write and render dynamic documents in Markdown with embedded R and Python code cells, and using \texttt{reticulate}, objects can be shared between the R and Python processes at runtime. With this setup, the model can be configured using RLiesel in an R code cell, then exchanged with the Python process, before an MCMC algorithm is developed in another code cell. Finally, the estimation results can be visualized either in Python or R, depending on the user's preferences.
\section{Case study: Comparing different sampling schemes}
\label{sec:case-study}
In this case study, we show how RLiesel and Goose can be used to set up and compare different sampling schemes on a simple semi-parametric distributional regression model. Often, a one-size-fits-all MCMC algorithm does not work well for a specific model. In these cases, one can try to reparametrize the model to improve the performance of the MCMC algorithm, or alternatively, one can try to develop a more suitable sampling scheme. The second approach is the particular strength of Liesel and Goose. Goose facilitates building custom samplers for specific estimation problems, allowing the user to combine different pre-defined and self-written kernels.
We use a dataset of LIDAR measurements, which was collected to determine the mercury concentration in the atmosphere, to evaluate the performance of five sampling schemes combining IWLS, Gibbs, NUTS and HMC kernels in different parameter blocks. For a detailed description of the experiment, see \citet{Holst1996}. The LIDAR device emitted laser light at two different wavelengths, and the log-ratio between the signals (the amount of reflected light, $y_i$) was recorded for each range (the distance the light traveled, $x_i$). The data is shown in Figure~\ref{fig:lidar-splines} together with an estimate of the mean function. The derivative of the mean function is proportional to the desired estimate of the mercury concentration.
\begin{figure}
\centering
\includegraphics[width=0.7\linewidth]{lidar/lidar_files/figure-html/splines-1}
\caption{The log-ratio of the LIDAR signals for each range on top of an MCMC sample of 4000 estimated mean functions (left) and 4000 estimated standard deviation functions (right). The red lines mark the posterior mean; the sample was obtained with the IWLS-Gibbs scheme described in Section~\ref{sec:lidar-schemes}.}
\label{fig:lidar-splines}
\end{figure}
\subsection{Gaussian location-scale regression in RLiesel}
From Figure~\ref{fig:lidar-splines}, the non-linearity and heteroscedasticity of the LIDAR measurements becomes apparent. The semi-parametric Gaussian location-scale regression model
\begin{equation}
y_i \sim \mathcal{N}\bigl(\beta_0 + f(x_i),\, \exp(\gamma_0 + g(x_i))^2\bigr)
\label{eq:lidar-model}
\end{equation}
is able to accommodate these properties of the data. Here, $\beta_0$ and $\gamma_0$ are the intercepts, and $f(x_i)$ and $g(x_i)$ are P-splines as described in Section~\ref{sec:semi-par}. For the P-splines, we use a cubic B-spline basis and a second-order difference penalty on the regression coefficients. The model belongs to the distributional regression framework as defined in Equation~\eqref{eq:dist-reg}, using a Gaussian response distribution and a log-link for the standard deviation.
With RLiesel, we can set up Model~\eqref{eq:lidar-model} as follows:
\begin{lstlisting}
R> library(SemiPar)
R> data(lidar)
R>
R> library(rliesel)
R> use_liesel_venv()
R>
R> model <- liesel(
+ response = lidar$logratio,
+ distribution = "Normal",
+ predictors = list(
+ loc = predictor(~s(range, bs = "ps"), inverse_link = "Identity"),
+ scale = predictor(~s(range, bs = "ps"), inverse_link = "Exp")
+ ),
+ data = lidar
+ )
\end{lstlisting}
The response variable and distribution, and the semi-parametric regression predictors are passed as arguments to the \texttt{liesel()} function. The predictors are specified as one-sided R formulas, where we can use the \texttt{s()} function from the \texttt{mgcv} package to define spline-based predictor components with the multivariate normal prior~\eqref{eq:mvn-prior}. The argument \texttt{bs = "ps"} indicates that we are using a P-spline. As Liesel depends on TensorFlow Probability (TFP) to represent probability distributions, we need to use the same class and parameter names. Here, the argument \texttt{distribution = "Normal"} refers to the class of the same name in TFP, which has the parameters \texttt{loc} and \texttt{scale} for the mean and the standard deviation of the normal distribution.
\subsection{Sampling schemes with different kernels in Goose}
\label{sec:lidar-schemes}
For the LIDAR model, we are using the IWLS-within-Gibbs sampling scheme as a benchmark. This scheme is provided as the default in RLiesel and has been well established in the literature on semi-parametric distributional regression for several years \citep{Klein2015Count}. It combines one IWLS kernel for the regression coefficients~$\betavec$ with one Gibbs kernel for the smoothing parameter $\tau^2$ of each predictor component. Thus, in complex models with many predictor components, it results in a high number of parameter blocks, and sometimes in MCMC chains with a high autocorrelation. Furthermore, the use of the observed Fisher information in the IWLS kernel can cause numerical instabilities. Software packages like BayesX and \texttt{bamlss} replace the observed with the expected Fisher information whenever possible to mitigate these problems, but this workaround is model-specific and not possible with automatic differentiation.
Given the shortcomings of the IWLS-within-Gibbs scheme, it is interesting to compare its performance with gradient-based MCMC methods such as HMC or NUTS, which do not require second derivatives. Relying only on the gradient, these kernels make it computationally feasible -- also in complex models -- to update large parameter blocks or the entire parameter vector. HMC and NUTS have been popularized with software like Stan \citep{SDT2022} and PyMC \citep{Salvatier2016}, and are known to work well in many applications \citep[Chapter 30]{MacKay2003}. In the LIDAR model, the smoothing parameters $\tau^2_f$ and $\tau^2_g$ need to be log-transformed if sampled with HMC or NUTS to guarantee an unconstrained parameter space. The configuration of all five sampling schemes is described in Table~\ref{tab:lidar-schemes}.
\begin{table}[!ht]
\renewcommand{\arraystretch}{1.1}
\newcommand{\cellcolor[HTML]{efa9b5}IWLS}{\cellcolor[HTML]{efa9b5}IWLS}
\newcommand{\cellcolor[HTML]{b6eaae}Gibbs}{\cellcolor[HTML]{b6eaae}Gibbs}
\newcommand{\cellcolor[HTML]{a3d4f5}NUTS}{\cellcolor[HTML]{a3d4f5}NUTS}
\newcommand{\cellcolor[HTML]{d1eafa}HMC}{\cellcolor[HTML]{d1eafa}HMC}
\centering
\caption{The sampling schemes for the LIDAR model. The IWLS kernel was used with the observed Fisher information as a metric (obtained through automatic differentiation). The NUTS kernel was configured with a maximum tree depth of 10 and a diagonal metric (tuned based on the empirical variances of the warmup samples). The HMC kernel was used with 64 integration steps and a diagonal metric. A smaller number of integration steps would have resulted in an insufficient exploration of the posterior distribution. The step size of the IWLS, NUTS and HMC kernels was calibrated with the dual averaging algorithm during the warmup epochs.
}
\label{tab:lidar-schemes}
\begin{tabular}{>{\bfseries}l|c|c|c|c|c|c}
& $\beta_0$ & $\betavec_f$ & $\tau^2_f$ or $\log(\tau^2_f)$ & $\gamma_0$ & $\gammavec_g$ & $\tau^2_g$ or $\log(\tau^2_g)$ \\
\hline
IWLS-Gibbs & \cellcolor[HTML]{efa9b5}IWLS & \cellcolor[HTML]{efa9b5}IWLS & \cellcolor[HTML]{b6eaae}Gibbs & \cellcolor[HTML]{efa9b5}IWLS & \cellcolor[HTML]{efa9b5}IWLS & \cellcolor[HTML]{b6eaae}Gibbs \\
\hline
NUTS-Gibbs & \cellcolor[HTML]{a3d4f5}NUTS & \cellcolor[HTML]{a3d4f5}NUTS & \cellcolor[HTML]{b6eaae}Gibbs & \cellcolor[HTML]{a3d4f5}NUTS & \cellcolor[HTML]{a3d4f5}NUTS & \cellcolor[HTML]{b6eaae}Gibbs \\
\hline
NUTS1 & \multicolumn{6}{c}{\cellcolor[HTML]{a3d4f5}NUTS} \\
\hline
NUTS2 & \multicolumn{3}{c|}{\cellcolor[HTML]{a3d4f5}NUTS} & \multicolumn{3}{c}{\cellcolor[HTML]{a3d4f5}NUTS} \\
\hline
HMC2 & \multicolumn{3}{c|}{\cellcolor[HTML]{d1eafa}HMC} & \multicolumn{3}{c}{\cellcolor[HTML]{d1eafa}HMC} \\
\hline
\end{tabular}
\end{table}
Setting up sampling schemes and parameter blocks is straightforward with Goose. To facilitate the configuration of an MCMC engine, a builder class can be used. Through the builder, kernels can be assigned to one or more parameters, the model and initial values can be set, as well as the number of MCMC iterations. Finally, the engine can be built and run. The following code snippet illustrates the procedure for the NUTS2 scheme, but the setup of the other schemes works analogously:
\begin{lstlisting}
Py> builder = gs.EngineBuilder(seed=1337, num_chains=4)
Py>
Py> k1 = ["loc_p0_beta", "loc_np0_beta", "loc_np0_tau2_transformed"]
Py> k2 = ["scale_p0_beta", "scale_np0_beta", "scale_np0_tau2_transformed"]
Py> builder.add_kernel(gs.NUTSKernel(k1))
Py> builder.add_kernel(gs.NUTSKernel(k2))
Py>
Py> builder.set_model(lsl.GooseModel(model))
Py> builder.set_initial_values(model.state)
Py>
Py> builder.set_duration(warmup_duration=1000, posterior_duration=1000)
Py>
Py> engine = builder.build()
Py> engine.sample_all_epochs()
\end{lstlisting}
\subsection{Run time and effective sample size}
All sampling schemes from Table~\ref{tab:lidar-schemes} converged to the same posterior distribution shown in Figure~\ref{fig:lidar-splines}, so we can focus on comparing their efficiency rather than the parameter estimates. The MCMC algorithms were compiled and run on an Intel i7-1185G7 CPU with 8 cores and 3 GHz. The compilation was generally much more expensive than the generation of one chain with 1000 warmup and 1000 posterior iterations (Figure~\ref{fig:lidar-timings}). The IWLS-Gibbs and NUTS-Gibbs schemes were particularly slow to compile, presumably because combining two types of kernels means more work for the compiler, while the sampling schemes involving one or two NUTS kernels took the most time to run.
The reason for the performance issues with NUTS was that the maximum tree depth of 10 was reached in about 90\% of the posterior iterations for the NUTS1 scheme, and in 75\% for NUTS2. The problem did not occur with the NUTS-Gibbs scheme, where we split the regression coefficients $\betavec$ and the smoothing parameters $\tau^2$ into separate blocks. We tried to improve the performance of the NUTS1 and NUTS2 schemes with a non-centered parameterization as recommended by \citet[User's Guide, Section~25.7]{SDT2022} by diagonalizing the penalty matrices of the P-splines as described by \citet[Section~5.4]{Wood2017}, but did not achieve an efficiency improvement. Other reparametrizations or the use of a Riemann metric \citep{Girolami2011} might help to speed up the NUTS kernels, but we did not explore these options further in this case study.
\begin{figure}
\centering
\includegraphics[width=0.65\linewidth]{lidar/lidar_files/figure-html/timings-1}
\caption{The compile and run time of the sampling schemes. The timings are obtained on an Intel i7-1185G7 CPU with 8 cores and 3 GHz for one MCMC chain with 1000 warmup and 1000 posterior iterations. The IWLS-Gibbs and NUTS-Gibbs schemes are most expensive to compile (because they combine two types of kernels), while the NUTS1 and NUTS2 schemes are most expensive to run (due to the high tree depth).}
\label{fig:lidar-timings}
\end{figure}
The efficiency of an MCMC algorithm cannot be assessed based on the run time alone, but the quality of the samples needs to be taken into account as well. We use the effective sample size \citep[ESS,][]{Gelman2013} for this purpose. The ESS estimates the size an independent sample would need to have to contain the same amount of information as the correlated MCMC sample. An MCMC chain with a high autocorrelation generally has a low ESS. For the LIDAR model, the NUTS-Gibbs scheme has the highest ESS with a median of 318.67 per 1000 iterations, and the HMC2 scheme has the lowest ESS with a median of 25.56 (Table~\ref{tab:lidar-ess}). The table also shows the ESS per second, which takes both the quality of the samples and the run time into account. By that measure, the two schemes involving a Gibbs kernel perform best, with a median of 869.05 for NUTS-Gibbs and 325.21 for IWLS-Gibbs.
\begin{table}[!ht]
\centering
\caption{The bulk ESS and bulk ESS per second of the sampling schemes. 30 MCMC chains are generated per scheme, and the summary statistics are computed by pooling all 22 parameters of the LIDAR model. The ESS per second is computed based on the run time of the posterior iterations, not taking the compilation and the warmup iterations into account. The NUTS-Gibbs scheme is the most efficient, both in terms of ESS and ESS per second.}
\label{tab:lidar-ess}
\begin{tabular}{l|>{\bfseries}l|rr>{\bfseries}rrr}
\toprule
\multicolumn{1}{l}{} & \multicolumn{1}{l}{} & 5\% & 25\% & Median & 75\% & 95\% \\
\midrule
\multirow{5}{*}{\rotatebox[origin=c]{90}{Bulk ESS}} & IWLS-Gibbs & 33.53 & 70.01 & 91.17 & 114.61 & 269.58\\
& NUTS-Gibbs & 122.97 & 205.34 & 318.67 & 482.70 & 939.08\\
& NUTS1 & 7.80 & 46.30 & 92.78 & 347.65 & 945.62\\
& NUTS2 & 25.90 & 72.04 & 140.14 & 456.46 & 866.49\\
& HMC2 & 1.62 & 6.81 & 25.56 & 291.10 & 1249.13\\
\midrule
\multirow{5}{*}{\rotatebox[origin=c]{90}{Bulk ESS/s}} & IWLS-Gibbs & 119.61 & 249.72 & 325.21 & 408.80 & 961.55\\
& NUTS-Gibbs & 335.37 & 560.01 & 869.05 & 1316.41 & 2561.02\\
& NUTS1 & 2.88 & 17.11 & 34.30 & 128.52 & 349.57\\
& NUTS2 & 17.96 & 49.95 & 97.16 & 316.48 & 600.77\\
& HMC2 & 7.56 & 31.88 & 119.55 & 1361.79 & 5843.52\\
\bottomrule
\end{tabular}
\end{table}
\section{Discussion}
\label{sec:discussion}
In this article, we introduced the probabilistic programming framework Liesel, which allows the user to express Bayesian models as directed acyclic graphs and to build custom MCMC algorithms. With our software, established MCMC algorithms can be combined in new ways, and the user can implement problem-specific kernels and warmup schemes. Goose, Liesel's MCMC library, is independent of Liesel's graph-based model representation and can also be used with other JAX-compatible software, for example PyMC or user-defined log-posterior functions.
Models expressed in Liesel can be modified through a programmer-friendly API. A base model can be generated with RLiesel, a tool to configure semi-parametric regression models, and new ideas can be explored with little effort by modifying the base model. Using state-of-the-art technology like just-in-time compilation, automatic differentiation and cluster computing, which is possible with JAX, Liesel allows for a fast development and testing cycle in Python while maintaining good computational performance.
The development of Liesel will be continued in the coming years. Liesel uses many libraries that are under active development and whose API changes must be reflected in our software. We also plan to integrate new features and other enhancements of these libraries into Liesel. Based on JAX's experimental module for sparse linear algebra, for example, we will improve the performance of different models using efficient decomposition algorithms for matrices with band structures or more general sparsity patterns.
The next major update of the software, Liesel 0.2, is planned for fall 2022. It will feature an improved model representation, making manipulations and extensions of the model graph easier and safer. In the new version, the graph of the statistical variables in the model will be built on top of a graph of computational nodes. This approach will result in an interface that is more convenient in standard use cases and more ``hackable'' in advanced use cases. The new interface aims to be simple and transparent with a small number of classes that do not surprise the developer with any ``magic'' behavior.
Liesel will also be extended with more model components and new MCMC kernels. The new building blocks in the modeling library will facilitate the rapid development of new types of models, thus speeding up research. In particular, RLiesel will be extended with the functionality to build non-linear models that overcome the typical additive predictor structure of semi-parametric regression, or models that involve covariates that are themselves assigned a model specification such as measurement error models or more general structural equation models. These extensions will also serve as a demonstration of the functionality and flexibility that Liesel offers for the development of Bayesian (regression) models.
Liesel's technology stack facilitates the implementation of gradient-based methods. Having automatic differentiation available will allow us to use general optimization algorithms to implement variational inference methods. Stochastic gradient MCMC (SG-MCMC) is a relatively new class of Monte Carlo algorithms that scale well to large datasets. Compared to traditional MCMC, these algorithms reduce the computational costs by using subsamples of the original dataset, while maintaining a high accuracy of the parameter estimates. Tools like Stan, PyMC and NIMBLE that enabled the broad success of Bayesian methods in many application areas are still missing SG-MCMC methods, although the first steps have been made (e.g.~in the R package \texttt{sgmcmc}). We plan to implement SG-MCMC kernels and non-traditional tuning methods for SG-MCMC in Liesel in the near future.
\bibliographystyle{plainnat}
\section{Introduction}
{\bf 1.1.}\,
In his {\it Principia} \cite{N}, Isaac Newton studied the following optimization problem. A solid body moves with constant velocity in a sparse medium. Collisions of the medium particles with the body are perfectly elastic. The absolute temperature of the medium is zero, so the particles are initially at rest. The medium is extremely rarefied, so mutual interactions of the particles are neglected. The body-particle collisions create a drag force acting on the body. This force is usually called {\it resistance}.
The problem is: given a certain class of bodies, find the body in this class with the smallest resistance. Newton considered the class of convex bodies that are rotationally symmetric with respect to a straight line parallel to the direction of motion and have fixed length along this direction and fixed maximal width.
In modern terms the problem can be formulated as follows. Let a reference system $x_1,\, x_2,\, z$ be attached to the body, with the $z$-axis coinciding with the symmetry axis of the body. We assume that the particles move upward along the $z$-axis. Let the lower part of the body's surface be the graph of a convex radially symmetric function $z = u(x_1, x_2) = \vphi(\sqrt{x_1^2 + x_2^2})$, $x_1^2 + x_2^2 \le L^2$; then the resistance equals
$$
2\pi \rho v^2 \int_0^L \frac{1}{1 + \vphi'(r)^2}\, r\, dr,
$$
where the constants $\rho$ and $v$ mean, respectively, the density of the medium and the scalar velocity of the body. The problem is to minimize
the resistance in the class of convex monotone increasing functions $\vphi : [0,\, L] \to \RRR$ satisfying $0 \le \vphi \le M$. Here $M$ and $L$ are the parameters of the problem: $M$ is the length of the body and $2L$ is its maximal width.
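To get a feeling for the functional, compare two simple non-optimal profiles (this elementary computation is given only for illustration). For the flat disc $\vphi \equiv 0$ the resistance equals $\pi \rho v^2 L^2$, while for the cone $\vphi(r) = Mr/L$ one has $\vphi' \equiv M/L$ and
$$
2\pi \rho v^2 \int_0^L \frac{r\, dr}{1 + M^2/L^2} = \pi \rho v^2\, \frac{L^4}{L^2 + M^2};
$$
in particular, for $M = L$ the cone experiences exactly half the resistance of the disc, and Newton's optimal profile performs better still.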
Newton gave a geometric description of the solution to the problem. Typically, the optimal function bounds a convex body that looks like a truncated cone with slightly inflated lateral boundary. An optimal body, corresponding to the case when the length is equal to the maximal width, is shown in Fig.~\ref{figNewton}.
\begin{figure}[h]
\centering
\hspace*{6mm}
\rotatedown{
\includegraphics[scale=0.45]{sol3DV1Up321final.eps}
}
\caption{A solution to the rotationally symmetric Newton problem.}
\label{figNewton}
\end{figure}
Starting from the important paper by Buttazzo and Kawohl \cite{BK}, the problem of minimal resistance has been studied in various classes of (generally) nonsymmetric and/or (generally) nonconvex bodies. The problem for nonconvex bodies is by now well understood \cite{CL1,CL2,Canadian,AP,SIREV,Nonl2016}. Generalizations of the problem to the case of rotating bodies have been studied \cite{SIMArough,ARMA,PTG,OMT}, and connections with the phenomena of invisibility, retro-reflection, and Magnus effect in geometric optics and mechanics have been established \cite{invisibility,invisN,camouflage,tube,retro,PTG}. The methods of billiards, Kakeya needle problem, optimal mass transport have been used in these studies. A detailed exposition of results obtained in this area can be found in \cite{bookP}.
The most direct generalization of the original Newton's problem concerns finding the optimal convex (not necessarily symmetric) shape. More precisely, the problem is to minimize
\beq\label{func N}
\int\!\!\!\int_\Om \frac{1}{1 + |\nabla u(x_1,x_2)|^2}\, dx_1 dx_2
\eeq
in the class of convex functions
$$
\CCC_M = \{ u : \Om \to \RRR :\, 0 \le u \le M,\ u \, \text{is convex} \}.
$$
Here $\Om \subset \RRR^2$ is a compact convex set with nonempty interior, and $M > 0$ is the parameter of the problem.
Surprisingly enough, this problem is still poorly understood. It is known that there exists at least one solution \cite{M,BFK}. Let $u$ be a solution; then $u\rfloor_{\pl\Om} = M$ \cite{boundary} and at any regular point $x = (x_1, x_2)$ of $u$ we have either $|\nabla u(x)| \ge 1$, or $|\nabla u(x)| = 0$ \cite{BFK}. Moreover, if the zero level set $L = \{ x : u(x) = 0 \}$ has nonempty interior then we have $\lim_{\stackrel{x \to \bar x}{x \not\in L}} |\nabla u(x)| = 1$ for almost all $\bar x \in \pl L$ \cite{ridge}. If $u$ is $C^2$ in an open set $\UUU \subset \Om$, then the second derivative $u''(x)$ has a zero eigenvalue for all $x \in \UUU$, and therefore, graph$\big( u\rfloor_\UUU \big) = \{ (x, u(x)) : x \in \UUU \}$ is a developable surface \cite{BrFK}. A more detailed review of results concerning the convex problem can be found in \cite{sliding}.
The last property is of special interest. Indeed, the numerical results stated in \cite{W} seem to indicate that the regular points of a solution $u$ form several smooth curves on $\Om$ and $u$ is $C^2$ outside these curves. If this is true then, cutting off the graph of $u$ along the singular curves, one can flatten the resulting pieces on a plane without distortion.
Unfortunately, it is not known whether these a priori assumptions on the structure of the singular set and on the $C^2$ smoothness are correct. It is not even known whether the domain of $C^2$ smoothness of $u$ is nonempty.
{\bf 1.2.}\,
The aim of the present paper is to relax the $C^2$ condition to the $C^1$ one. This paper can be considered as a continuation of \cite{sliding}. The main difference between the results is that in \cite{sliding} the {\it existence} of a solution possessing a certain property is guaranteed, while in this paper it is assured that the property holds for {\it any} solution. It took about 25 years to pass from $C^2$ to $C^1$, but in our opinion, the method used here and in \cite{sliding} may be more important than the result obtained. The method is called {\it nose stretching} and represents a small variation of a convex body\footnote{A {\it convex body} is a compact convex set with nonempty interior.} performed in the following way. Having a convex body $C$, we take a point $O$ or two points $A$ and $B$ outside $C$ near its boundary, define $\tilde C = \text{{\conv}}(C \cup \{ O \})$ (in \cite{sliding}) or $\tilde C = \text{{\conv}}(C \cup \{ A, B \})$ (in this paper), and then define a continuous 1-parameter family of convex bodies joining $C$ and $\tilde C$.
We believe that it is fruitful to study the minimal problem in a more general form:
\begin{quote}
Minimize the functional
\beq\label{func general}
F(u) = \int_\Om f(\nabla u(x))\, dx
\eeq
in the class $\CCC_M$, where $f : \RRR^2 \to \RRR$ is a continuous function.\footnote{Since the set of regular points of $u$ is a full-measure subset of $\Om$ and the vector function $x \mapsto \nabla u(x)$ is measurable, $F(u)$ is well defined.}
\end{quote}
Taking $f(\xi) = 1/(1 + |\xi|^2)$, one obtains the functional \eqref{func N}.
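For completeness, let us recall the elementary computation behind Newton's $f$; it is stated here only as a reminder. If the flow particles move vertically and hit the graph of $u$ at a point where the outward unit normal makes the angle $\theta$ with the downward vertical, then the elastic reflection transfers to the body a vertical momentum proportional to $\cos^2\theta$, while the mass flux through a surface element coincides with the flux through its horizontal projection $dx_1 dx_2$. Since
$$
\cos^2\theta = \frac{1}{1 + |\nabla u(x_1,x_2)|^2},
$$
the vertical component of the total drag force equals, up to the constant factor $2\rho v^2$, the integral \eqref{func N}.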
Let us also mention an equivalent setting of the problem first proposed in \cite{BG}. The graph of $u$ is the lower part of the boundary of the convex body
\begin{equation*}\label{C_u}
C = C_M(u) = \{ (x,z) :\, x \in \Om,\ u(x) \le z \le M \},
\end{equation*}
and the functional in \eqref{func general} can be represented in the form $F(u) = \FFF(C_M(u))$, where
\beq\label{func body}
\FFF(C) = \int_{\pl C} g(n_\xi)\, d\xi.
\eeq
Here $n_\xi$ is the outward normal to the convex body $C$ at the point $\xi \in \pl C$, and for $n = (n_1, n_2, n_3) \in S^2$,
$$
g(n) = \left\{ \begin{array}{ll}
f\big( \frac{n_1}{|n_3|},\, \frac{n_2}{|n_3|} \big) |n_3|, & \text{if} \ \, n_3 < 0; \\
0 & \text{if} \ \, n_3 \ge 0
\end{array} \right.
$$
(see \cite{sliding}). In particular, if $f(\xi) = 1/(1 + |\xi|^2)$, then for $n_3 < 0$ one has $f\big( \frac{n_1}{|n_3|},\, \frac{n_2}{|n_3|} \big) = \frac{n_3^2}{n_1^2 + n_2^2 + n_3^2} = n_3^2$, and therefore $g(n) = \big( (-n_3)_+ \big)^3$, where $( \cdot )_+$ means the positive part of a real number, $z_+ = \max \{ z, 0 \}$.
Correspondingly, the minimization problem for $F$ in \eqref{func general} can be stated in the equivalent form:
\begin{quote}
Minimize $\FFF(C)$ in \eqref{func body} in the class of convex bodies $\{ C :\, C_1 \subset C \subset C_2 \}$, where $C_2$ is the cylinder with the base $\Om$ and height $M$,\, $C_2 = \Om \times [0,\, M]$, and $C_1$ is its upper end, $C_1 = \Om \times \{ M \}$.
\end{quote}
\begin{zam}
This problem admits a natural mechanical interpretation. Imagine a body moving in a highly rarefied medium where Newton's assumptions are not satisfied: the absolute temperature is nonzero and/or the body-particle reflections are not elastic. In this case the resistance (the projection of the drag force on the direction of motion) is given by the functional \eqref{func general} (or, equivalently, by \eqref{func body}), where the function $f$ (or $g$) can be determined if we know the law of body-particle reflection, the temperature, and the composition of the medium.
\end{zam}
The $C^2$ result in the paper \cite{BrFK} was formulated and proved for Newton's case $f(\xi) = 1/(1 + |\xi|^2)$. Its natural generalization provided in \cite{sliding} reads as follows.
\begin{theorem}\label{tpropo1} (Theorem 1 in \cite{sliding}).
Let $f$ be a $C^2$ function and the second derivative $f''$ have at least one negative eigenvalue for all values of the argument. Assume that $u$ minimizes functional \eqref{func general} in the class $\CCC_M$ and $u$ is $C^2$ in $\UUU$, where $\UUU \subset \Om$ is an open set. Then $\det u''(x) = 0$ for all $x \in \UUU$.
\end{theorem}
The $C^1$ result in \cite{sliding} is as follows.
\begin{theorem}\label{tpropo2} (Corollary 3 in \cite{sliding}).
Let $f$ be a bounded continuous function. Then there exists a function $u$ minimizing functional \eqref{func general} in $\CCC_M$ that possesses the following property. If $u$ is $C^1$ in $\UUU$ and $u > 0$ in $\UUU$, where $\UUU \subset \Om$ is an open set, then {\rm graph}$\big( u\rfloor_\UUU \big)$ does not contain extreme points of the epigraph of $u$.
\end{theorem}
\begin{zam}
The words {\rm ``there exists a solution''} cannot be replaced with {\rm ``for any solution''} in Theorem \ref{tpropo2}. Actually, if the function $f$ is locally affine, there may exist a solution $u$ and an open set $\UUU$ such that $u$ is $C^1$ and positive in $\UUU$, and {\rm graph}$\big( u\rfloor_\UUU \big)$ contains extreme points. Consider two examples.
1. $f$ is piecewise affine, $f(\xi) = (-\langle a, \xi \rangle + b)_+$ for a certain vector $a = (a_1, a_2)$ and $b > 0$. Here and in what follows, $\langle \cdot \,, \cdot \rangle$ means scalar product. This function corresponds to the mechanical model when the particles of the incident flow with equal velocity $(a_1, a_2, b)$ get stuck on the body's surface after the collision. In this case any function $u \in \CCC_M$ satisfying $\langle a, \nabla u \rangle < b$ is a solution.
2. $f \ge 0$, and $f(\xi) = 0$ when $|\xi| \le r$, where $r > 0$. Then any function $u \in \CCC_M$ with $|\nabla u| \le r$ is a solution.
Note that in both examples the set $\{ \xi : \det f''(\xi) = 0 \}$ is large: it is the complement of a line, $\RRR^2 \setminus \{ \langle a, \xi \rangle = b \}$, in the first example, and it contains the open ball $\{ |\xi| < r \}$ in the second one.
\end{zam}
Let {\rm epi}$(u) = \{ (x,z) : x \in \Om,\, z \ge u(x) \} \subset \RRR^3$ denote the epigraph of $u$. For a convex set $C \subset \RRR^3$, a point $\xi \in \pl C$ is called {\it regular} if there is a single plane of support to $C$ at $\xi$, and {\it singular} otherwise. The set of singular points of $\pl C$ is denoted as sing$(C)$. The set of extreme points of $C$ is denoted as ext$(C)$. A point $x$ in the interior of $\Om$ is called a {\it regular point} of $u$ if there exists $\nabla u(x)$, and a {\it singular point} otherwise. The set of singular points of $u$ is denoted as sing$(u)$. We have $\text{\rm graph}\big( u\rfloor_{\text{\rm sing}(u)} \big) \subset \text{\rm sing}\, (\text{\rm epi}(u))$.
Later on in the text we will use the following notation. Let $G$ be a Borel subset of epi$(u)$ and pr$_x(G)$ be its orthogonal projection on the $x$-plane; then by definition
$$
F(G) = \int_{\text{pr}_x(G)} f(\nabla u(x)) dx.
$$
It is easy to check that $F(G)$ is well defined; that is, if $G \subset \text{epi}(u_1)$ and $G \subset \text{epi}(u_2)$ for two convex functions $u_1$ and $u_2$, then $\int_{\text{pr}_x(G)} f(\nabla u_1(x)) dx = \int_{\text{pr}_x(G)} f(\nabla u_2(x)) dx$. One easily sees that if $G_2$ is homothetic to $G_1$ with ratio $r$, that is, $G_2 = rG_1 + v$ with $v \in \RRR^3$, then $F(G_2) = r^2 F(G_1).$
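For the reader's convenience, here is a verification of the last scaling property. Write $v = (v_x, v_z)$ with $v_x \in \RRR^2$. If $G_1 \subset \text{epi}(u_1)$, then $G_2 = rG_1 + v$ is contained in the epigraph of the convex function $u_2(x') = r\, u_1\big( (x' - v_x)/r \big) + v_z$, and by the well-posedness just mentioned, $F(G_2)$ can be computed with this $u_2$. Since $\nabla u_2(x') = \nabla u_1\big( (x' - v_x)/r \big)$ and $\text{pr}_x(G_2) = r\, \text{pr}_x(G_1) + v_x$, the substitution $x' = rx + v_x$,\ $dx' = r^2 dx$, gives
$$
F(G_2) = \int_{\text{pr}_x(G_2)} f(\nabla u_2(x'))\, dx' = r^2 \int_{\text{pr}_x(G_1)} f(\nabla u_1(x))\, dx = r^2 F(G_1).
$$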
The main result of this paper is the following theorem.
\begin{theorem}\label{cor}
Let $f$ be a $C^2$ function, and let $\{ \xi : \det f''(\xi) = 0 \}$ be a closed nowhere dense set in $\RRR^2$. If $u$ minimizes functional \eqref{func general} in the class $\CCC_M$, then
$$
\text{\rm ext} (\text{\rm epi}(u)) \subset \overline{\text{\rm sing} (\text{\rm epi}(u))}.
$$
Here and in what follows, bar means closure.
\end{theorem}
It follows from this theorem that an optimal function $u$ is uniquely defined by the set of singular points of epi$(u)$.
Recall that $C(u) = C_M(u) = \{ (x,z) : x \in \Om, \ u(x) \le z \le M \} = \text{epi}(u) \cap \{ z \le M \}$. Using Minkowski's theorem, one obtains the following corollary from Theorem \ref{cor}.
\begin{corollary}\label{cor1}
Under the conditions of Theorem \ref{cor}, if $u$ minimizes functional \eqref{func general} in the class $\CCC_M$, then $C(u)$ is the closure of the convex hull of {\rm sing}$(C(u))$,
$$
C(u) = \overline{\text{\rm \conv}(\text{\rm sing}\,(C(u)))}.
$$
\end{corollary}
Theorem \ref{cor} and Corollary \ref{cor1} are applicable to the classical case $f(\xi) = 1/(1 + |\xi|^2)$, since $f$ is $C^2$ and the set $\{ \xi : \det f''(\xi) = 0 \}$ is the circle $|\xi| = 1/\sqrt 3$ and is therefore closed and nowhere dense.
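Let us include, for the reader's convenience, the computation behind the last claim. Writing $f(\xi) = h(\rho)$ with $h(\rho) = 1/(1 + \rho^2)$ and $\rho = |\xi|$, the eigenvalues of $f''(\xi)$ for $\xi \ne 0$ are $h''(\rho)$ (in the radial direction) and $h'(\rho)/\rho$ (in the tangential direction). One computes
$$
h'(\rho) = \frac{-2\rho}{(1+\rho^2)^2}, \qquad h''(\rho) = \frac{6\rho^2 - 2}{(1+\rho^2)^3},
$$
so the tangential eigenvalue $-2/(1+\rho^2)^2$ is negative for all $\xi \ne 0$, while the radial eigenvalue vanishes exactly at $\rho = 1/\sqrt 3$; besides, $f''(0) = -2\,\mathrm{Id}$. In particular, $f''(\xi)$ has a negative eigenvalue for every $\xi$, so in the classical case hypothesis (iii) of Theorem \ref{t3} holds automatically.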
We will derive Theorem \ref{cor} from a result of the paper \cite{LP3} and the following Theorem \ref{t3} and Lemma \ref{lt1}.
\begin{theorem}\label{t3}
Let $f : \RRR^2 \to \RRR$ be a $C^2$ function.
Assume that a convex function $u : \Om \to \RRR$, an open set $\UUU \subset \Om$, and a point $\check x \in \UUU$ are such that
(i) $u$ is $C^1$ in $\UUU$;
(ii) $(\check x, u(\check x))$ is an extreme point of epi$(u)$;
(iii) $f''(\nabla u(\check x))$ has a negative eigenvalue. \\
Then for any $\ve > 0$ there is a convex function $\uu$ on $\Om$ such that\,
{\rm (a)} $\uu\rfloor_{\Om\setminus\UUU} = u\rfloor_{\Om\setminus\UUU}$,
{\rm (b)}~$|u - \uu| < \ve$, and
{\rm (c)}~$F(\uu) < F(u)$.
\end{theorem}
\begin{lemma}\label{lt1}
Consider a convex set $C \subset \RRR^3$ with nonempty interior. For $r \in \pl C$ denote by $n_r$ the outward normal to $C$ at $r$. Consider an open (in the relative topology) set $\UUU \subset \pl C$ and suppose that all points of $\UUU$ are regular. Let a set $\EEE \subset S^2$ contain no open (in the relative topology) sets, and assume that each point $r \in \UUU$ with $n_r \not\in \EEE$ is not an extreme point of $C$. Then ${\UUU}$ does not contain extreme points of $C$.
\end{lemma}
The proofs of Theorem \ref{t3} and Lemma \ref{lt1} are given in Section \ref{sec theorem} and Section \ref{sec technical}, respectively.
\begin{proof}[Proof of Theorem \ref{cor}]
Here and in what follows, $\stackrel{\circ}{A}$ means the interior of a set $A$.
The set $\Om$ is the disjoint union of three sets,
$$
\Om = \Om_+ \sqcup \Om_- \sqcup \Om_0,
$$
where $\Om_+$ is the set of points $x$ in $\stackrel{\circ}{\Om}$ such that $f''(\nabla u(x))$ is positive definite, $\Om_-$ is the set of points $x$ in $\stackrel{\circ}{\Om}$ such that one of the eigenvalues of $f''(\nabla u(x))$ is negative and the other one is nonzero, and $\Om_0$ is the set of points $x$ such that either $\det f''(\nabla u(x)) = 0$ or $x \in \pl\Om$. The sets $\Om_+$ and $\Om_-$ are open, and the set $\Om_0$ is closed.
It is proved in (\cite{LP3}, claim 1 of Theorem 1) that if two different convex functions $u_1$ and $u_2$ are defined on a bounded set $\Upsilon$, coincide on $\pl\Upsilon$, and satisfy $u_2 \le u_1$, then
$$
\int_{\Upsilon} f(\nabla u_1(x))\, dx < \int_{\Upsilon} f(\nabla u_2(x))\, dx.
$$
Denote by $\hat u$ the function on $\Om$ such that its epigraph coincides with {\conv}$\big( \text{epi}\big( u\big\rfloor_{\Om\setminus\Om_+} \big) \big)$. Of course, $\hat u \ge u$. Since $\hat u$ coincides with $u$ outside $\Om_+$, we have
$$
F(\hat u) - F(u) = \int_{\Om_+} f(\nabla\hat u(x))\, dx - \int_{\Om_+} f(\nabla u(x))\, dx.
$$
Applying the result of \cite{LP3} to the functions $u_1 = \hat u\rfloor_{\Om_+}$ and $u_2 = u\rfloor_{\Om_+}$ and to the set $\Upsilon = \Om_+$, we conclude that $F(\hat u) \le F(u)$, and the equality here takes place only if $\hat u = u$. Since the function $u$ is optimal, $\hat u = u$. It follows that all points $(x, u(x))$, $x \in \Om_+$, are not extreme.
Let us show that the points $(x, u(x))$, $x \in \Om_- \setminus \overline{\text{sing}(u)}$, are not extreme. Denote $C = \text{epi}(u)$,\, $\EEE = \{ (0,0,-1) \}$,\, $U = \Om_- \setminus \overline{\text{sing}(u)}$, and assume that a point $\check r = (\check x, u(\check x)) \in \text{graph}(u\rfloor_U)$, with $n_{\check{r}} \not\in \EEE$ (that is, $\nabla u(\check x) \ne 0$), is an extreme point of $C$. Take an open set $U'$ containing $\check x$ and contained in $U$ such that for some $\ve > 0$,\, $\ve < u < M - \ve$ in $U'$. By Theorem \ref{t3}, there exists a convex function $\uu$ on $\Om$ that coincides with $u$ outside $U'$, satisfies $u-\ve < \uu < u + \ve$ in $U'$ (and therefore belongs to $\CCC_M$), and is such that $F(\uu) < F(u)$. This contradicts the optimality of $u$.
This contradiction implies that if $r = (x, u(x)) \in \text{graph}(u\rfloor_U)$ and $n_{r} \not\in \EEE$, then $r$ is not an extreme point of $C$. Applying Lemma \ref{lt1}, one concludes that $\text{graph}(u\rfloor_U)$ does not contain extreme points of $C$.
It follows that the set $\{ (x, u(x)) :\, x \in (\Om_+ \cup \Om_-) \setminus \overline{\text{sing}(u)} \}$ does not contain extreme points.
Now apply Lemma \ref{lt1} to the sets $C = \text{epi}(u)$,\, $\UUU = \pl C \setminus \overline{\text{\rm sing} (C)}$, and
$$
\EEE = \Big\{ \frac{(\xi, -1)}{\sqrt{|\xi|^2 + 1}} :\ \det f''(\xi) = 0 \Big\} \cup \big\{ (\xi,0) :\, |\xi| = 1 \big\} \subset S^2.
$$
The set $\UUU$ is open in the relative topology of $\pl C$, and all points of $\UUU$ are regular. The set $\EEE$ does not contain open sets in the relative topology of $S^2$.
If $r = (x, z) \in \UUU$ and $n_r \not\in \EEE$, then $x \not\in \pl\Om$ (and therefore $z = u(x)$), $x$ is not contained in $\overline{\text{sing}(u)}$, and $\det f''(\nabla u(x)) \ne 0$; that is, $x \not\in \overline{\text{sing}(u)} \cup \Om_0$. Using that $(\Om_+ \cup \Om_-) \setminus \overline{\text{sing}(u)} = \Om \setminus \big( \overline{\text{sing}(u)} \cup \Om_0 \big)$, we have $x \in (\Om_+ \cup \Om_-) \setminus \overline{\text{sing}(u)}$, and therefore, $r$ is not an extreme point of $C$. By Lemma \ref{lt1}, $\UUU$ does not contain extreme points of $C = \text{epi}(u)$, that is, $\text{\rm ext} (\text{\rm epi}(u)) \subset \overline{\text{\rm sing} (\text{\rm epi}(u))}.$ Theorem \ref{cor} is proved.
\end{proof}
\begin{zam}
We see that the set $\{ \det f'' = 0 \}$ is an important characteristic of the problem. If this set (which is of course closed) is nowhere dense then, according to Theorem \ref{cor}, each solution is uniquely defined by the set of its singular points. If, on the contrary, the set $\{ \det f'' = 0 \}$ has nonempty interior, then this assertion may not be true and, moreover, the solution set may be highly degenerate.
Assume now that $u$ is a solution, $\UUU \subset \Om$ is an open and simply connected set, the contour $\nabla u\rfloor_{\pl\UUU}$ bounds an open set $\VVV$, and $\det f'' \ne 0$ in $\VVV$. It follows that $f$ is either strictly convex or strictly concave on $\VVV$ and, according to \cite{LP3}, $u$ can be reconstructed by the boundary values $u\rfloor_{\Om \setminus \UUU}$. In the former case, the epigraph of $u$ is the convex hull of the epigraph of $u\rfloor_{\Om \setminus \UUU}$. In the latter case, the epigraph of $u$ is the intersection of the closed half-spaces bounded below by tangent planes to graph$\big( u\rfloor_{\Om \setminus \UUU} \big)$ at regular points of $u$.
\end{zam}
\begin{zam}
Theorem \ref{cor} implies that the graph of a solution $u$ minus the closure of the set of its singular points is a developable surface.
Still, nothing is known about the set of singular points. We cannot even guarantee that it does not coincide with $\Om$.
\end{zam}
We state the following
\vspace{2mm}
{\bf Conjecture.} {\it Let $u$ solve problem \eqref{func general} in $\CCC_M$ for a certain $M > 0$. Then the set of singular points of $u$ is a closed nowhere dense subset of $\Om$.}
\section{Proof of Theorem \ref{t3}}\label{sec theorem}
In order to illustrate the idea of the proof, let us first solve the following toy problem in the 2D case.
\begin{propo}\label{light}
Consider the functional $F(u) = \int_\Om f(u'(x)) dx$ defined on the class of convex functions $u : \Om \to \RRR$, where $\Om = [a,\, b]$ is a compact segment and $f : \RRR \to \RRR$ is a $C^2$ function.
Assume that a convex function $u$ and an interval $\UUU = [a_1,\, b_1] \subset \Om$ are such that for all $x \in \UUU$,
(i) $u$ is regular at $x$;
(ii) $(x, u(x))$ is an extreme point of epi$(u)$;
(iii) $f''(u'(x)) \ne 0$. \\
Then for any $\ve > 0$ there is a convex function $\uu$ on $\Om$ such that\,
{\rm (a)} $\uu = u$ outside $\UUU$ and $\uu \ne u$ in $[a_1+\ve,\, b_1-\ve]$,
{\rm (b)}~$|u - \uu| < \ve$, and
{\rm (c)}~$F(\uu) < F(u)$.
\end{propo}
The hypotheses (i), (ii), (iii) of the proposition of course imply that the derivative $u'$ exists and is strictly monotone increasing in $\UUU$.
The proof given below is not the easiest one, but the underlying idea of small variation admits a generalization to the 3D case.
However, the 3D case is much more complicated and includes much more technical details, as will be seen later.
\begin{proof}
Take a point $O$ below the graph of $u$ and denote $C = \text{epi}(u)$ and $\tilde C = \conv(C \cup \{ O \})$. Choose $O$ so that both tangent lines through $O$ to $C$ intersect $C$ at points with abscissas in $(a_1,\, a_1 + \ve)$ and $(b_1 - \ve,\, b_1)$ (and therefore, both points lie in graph$(u\rfloor_\UUU)$). Define the family of convex sets
$$
C_s =
\left\{
\begin{array}{ll}
(1-s)C + s\tilde C, & \text{if } 0 \le s \le 1;\\
C \cap \big[ (1-s)C + sO \big], & \text{if } s < 0,
\end{array}\right.
$$
and let $u^{(s)}$ be the function such that epi$(u^{(s)}) = C_s$. In particular, one has $u^{(0)} = u$. Of course, all functions $u^{(s)}$, $s \le 1$ are convex and are defined on $\Om$.
Denote by $A_0$ and $B_0$ the points at which the two tangent lines through $O$ touch $C$; see Fig.~\ref{fig 2D}. Denote the resistance of a curve $\gam$ by $F(\gam)$; in particular, the resistance of the arc $A_0 B_0$ is denoted as $F(A_0 B_0)$, and the resistances of the line segments $OA_0$ and $OB_0$ are $F(OA_0)$ and $F(OB_0)$. It is easy to see that the following relations are true:
\beq\label{f''positive}
\text{if} \ \, f''(u'(x)) > 0 \ \ \text{in} \ \, \UUU \quad \text{then} \ \, F(OA_0) + F(OB_0) - F(A_0 B_0) > 0;
\eeq
\beq\label{f''negative}
\text{if} \ \, f''(u'(x)) < 0 \ \ \text{in} \ \, \UUU \quad \text{then} \ \, F(OA_0) + F(OB_0) - F(A_0 B_0) < 0.
\eeq
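For the reader's convenience, here is the computation behind \eqref{f''positive} and \eqref{f''negative}. Let the arc $A_0 B_0$ be the graph of $u$ over $[\alpha,\, \beta]$ and let $x_O$ be the abscissa of $O$; then the broken line $A_0 O B_0$ is the graph of a piecewise affine function with slope $k_- = u'(\alpha)$ on $[\alpha,\, x_O]$ and slope $k_+ = u'(\beta)$ on $[x_O,\, \beta]$. Since both graphs have the same endpoints, $\int_\alpha^\beta u'(x)\, dx = k_-(x_O - \alpha) + k_+(\beta - x_O)$. Writing each value $u'(x) \in [k_-,\, k_+]$ as a convex combination of $k_-$ and $k_+$ and integrating, one obtains
$$
F(A_0 B_0) = \int_\alpha^\beta f(u'(x))\, dx < f(k_-)(x_O - \alpha) + f(k_+)(\beta - x_O) = F(OA_0) + F(OB_0)
$$
when $f$ is strictly convex on $[k_-,\, k_+]$, and the opposite inequality when $f$ is strictly concave there; these two cases correspond to $f''(u'(x)) > 0$ and $f''(u'(x)) < 0$ in $\UUU$, respectively.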
\begin{figure}[h]
\centering
\includegraphics[scale=0.3]{2dim}
\caption{The set $C_s$ is shown light gray (a) for $0 < s < 1$ and (b) for $s < 0$.}
\label{fig 2D}
\end{figure}
The graph of $u$ is the union of curves $\bar A A_0$,\, $A_0 B_0$, and $B_0 \bar B$, therefore
$$
F(u) = F(\bar A A_0) + F(A_0 B_0) + F(B_0 \bar B).
$$
For $0 \le s \le 1$ the graph of $u^{(s)}$ is the union of curves $\bar A A_0$ and $B_0 \bar B$, line segments $A_0 A_s$ and $B_0 B_s$, and the curve $A_s B_s$; see Fig.~\ref{fig 2D}\,(a). The arc $A_s B_s$ is homothetic to $A_0 B_0$ with the ratio $1-s$ and the center at $O$, therefore $F(A_s B_s) = (1-s) F(A_0 B_0)$. The length of the segment $A_0 A_s$ is $s$ times the length of the segment $OA_0$, hence $F(A_0 A_s) = sF(OA_0)$. Similarly, $F(B_0 B_s) = sF(OB_0)$. Thus,
$$
F(u^{(s)}) = F(\bar A A_0) + F(A_0 A_s) + F(A_s B_s) + F(B_s B_0) + F(B_0 \bar B)
$$ $$
= F(\bar A A_0) + F(\bar B B_0) + (1-s) F(A_0 B_0) + sF(OA_0) + sF(OB_0).
$$
It follows that for $0 \le s \le 1$, $F(u^{(s)})$ is linear in $s$.
For $s < 0$ the graph of $u^{(s)}$ is the union of curves $\bar A A'$,\, $A' B'$, and $B' \bar B$; see Fig.~\ref{fig 2D}\,(b). Correspondingly, the resistance is
$$
F(u^{(s)}) = F(\bar A A') + F(A' B') + F(B' \bar B)
$$ $$
= F(\bar A A_0) + F(\bar B B_0) + F(A_s B_s) - F(A_0 A_s) - F(B_0 B_s)
$$ $$
+ [F(A_0 A_s) - F(A_0 A') - F(A' A_s)] + [F(B_0 B_s) - F(B_0 B') - F(B' B_s)].
$$
Since $F(A_0 A_s) - F(A_0 A') - F(A' A_s)$ and $F(B_0 B_s) - F(B_0 B') - F(B' B_s)$ are $o(s)$ as $s \to 0^-$, we obtain
$$
F(u^{(s)}) = F(\bar A A_0) + F(\bar B B_0) + (1-s) F(A_0 B_0) + sF(OA_0) + sF(OB_0) + o(s) \ \ \text{as} \ \, s \to 0^-.
$$
It follows that there exists the derivative of $F(u^{(s)})$ at $s = 0$,
$$
\frac{d}{ds}\bigg\rfloor_{s=0} F(u^{(s)}) = F(OA_0) + F(OB_0) - F(A_0 B_0).
$$
According to formulas \eqref{f''positive} and \eqref{f''negative}, this derivative is nonzero, and therefore, $F(u^{(s)}) < F(u)$ for $s$ in a sufficiently small one-sided (left or right) neighborhood of $s = 0$.
\end{proof}
Let us now proceed to the proof of Theorem \ref{t3}.
Later on in the proof, we write $C$ for epi$(u)$.
Choose the orthogonal coordinates $x_1,\, x_2$ in such a way that $(1,0)$ is an eigenvector of $f''(\nabla u(\check x))$ corresponding to a negative eigenvalue. We have
$$
\frac{d^2}{dt^2}\Big\rfloor_{t=0} f(\nabla u(\check x) + (t,0)) < 0.
$$
Since $f$ is $C^2$ and $\nabla u$ is continuous in $\UUU \ni \check x$, the function $\frac{d^2}{dt^2} f(\nabla u(x) + (t,0))$ is continuous in $x$ and $t$ for $x$ sufficiently close to $\check x$ and $t$ sufficiently close to 0, hence this function is negative for $|x - \check x| < \om,\, |t| < \om$, for $\om > 0$ sufficiently small, and therefore, for $|x - \check x| < \om$ the function of one variable $t \mapsto f(\nabla u(x) + (t,0))$, $t \in (-\om,\, \om)$ is strictly concave.
Choose $r$ sufficiently small, so that
$$ r < \text{dist}(\check x, \pl\UUU);
$$
\beq\label{(beta)}
\begin{array}{l}
\text{If $|x - \check x| < r,\, |x' - \check x| < r$, and $\nabla u(x) - \nabla u(x')$ is parallel to the $x_1$-axis},\\
\text{then the restriction } \text{of $f$ on the line segment $[\nabla u(x),\, \nabla u(x')]$}\\
\text{is a strictly concave function of one variable.}
\end{array}
\eeq
Denote $D_r = \{ (x,z) :\, |x - \check x| < r,\, |z - u(\check x)| < r \}$. Since the point $(\check x, u(\check x))$ is extreme, it is not contained in $\text{{\conv}}(C \setminus D_r)$. Draw a plane $\Pi$ strictly separating $(\check x, u(\check x))$ and $\text{{\conv}}(C \setminus D_r)$; that is, the point $(\check x, u(\check x))$ and the convex body $\text{{\conv}}(C \setminus D_r)$ are contained in different open half-spaces bounded by $\Pi$ (see Fig.~\ref{fig extreme}).
The plane $\Pi$ divides the surface $\pl C$ into two parts: the upper one, $\pl_+ C$, containing $\pl C \setminus D_r$, and the lower one, $\pl_- C$, containing $(\check x, u(\check x))$. Draw all planes of support through the points of $\pl_+ C$ and denote by $C'$ the intersection of half-spaces that are bounded by these planes and contain $C$. The body $C$ is contained in $C'$, but does not coincide with it. Indeed, let $\Pi'$ be the plane of support to $C'$ parallel to $\Pi$ and below $\Pi$.
Each point $Q$ in $\Pi' \cap C'$ is a singular point of $\pl C'$, since there are at least two planes of support through $Q$: $\Pi'$ and a plane tangent to $C$ at a point of $\pl C \cap \Pi$. Therefore, $Q$ does not belong to $C$.
Take a point $(x^0, u(x^0))$ in $\pl C \setminus \pl C'$.
\begin{figure}[h]
\centering
\includegraphics[scale=0.3]{Extr}
\caption{All points of $\pl C = \text{graph}(u)$ situated below the plane $\Pi$ are regular. The dashed broken line through $Q$ indicates a part of $\pl C'$.}
\label{fig extreme}
\end{figure}
Denote $b = u_{x_2}(x^0)$ and consider the functions $\mathbf{u}(x) = u(x) - bx_2$ and $\mathbf{f}(\xi) = f(\xi + (0,b))$. We have $\mathbf{u}_{x_2}(x^0) = 0$. One easily sees that the functions $\mathbf{u}$, $\mathbf{f}$, the set $\UUU$, and the point $\check x$ satisfy the hypotheses (i), (ii), and (iii) of Theorem \ref{t3}. (Note in passing that graph$(\mathbf{u})$ and epi$(\mathbf{u})$ are the images of graph$(u)$ and $C$, respectively, under the map $(x_1, x_2, z) \mapsto (x_1, x_2, z - bx_2)$.)
It suffices to prove the statement of Theorem \ref{t3} for $\mathbf{u}$ and $\mathbf{f}$; that is, fix $\ve > 0$ and find a function $\widetilde{\mathbf{u}}$ satisfying conditions (a), (b), (c) indicated there. Then, taking $\widetilde{u}(x) = \widetilde{\mathbf{u}}(x) + bx_2$ and using that $\int_\UUU \mathbf{f}(\nabla \widetilde{\mathbf{u}}(x)) dx = \int_\UUU f(\nabla\widetilde{u}(x)) dx$ and $\int_\UUU \mathbf{f}(\nabla {\mathbf{u}}(x)) dx = \int_\UUU f(\nabla u(x)) dx$, one concludes that the statement of Theorem \ref{t3} is also true for the original functions $u$ and $f$.
In order to simplify the notation, later on in the proof of Theorem \ref{t3} we will write $u$ and $f$ in place of $\mathbf{u}$ and $\mathbf{f}$ and use that ${u}_{x_2}(x^0) = 0$. We will also use the notation $C = \text{epi}(u)$ and write $Q$ in place of the image of $Q$ under the map $(x_1, x_2, z) \mapsto (x_1, x_2, z - bx_2)$.
Consider the auxiliary function of one variable
$$
w(x_1) = \inf_{x_2} u(x_1,x_2).
$$
We see that $w$ is convex, $w(x^0_1) = u(x^0)$, and the epigraph of $w$ coincides with the image of $C$ under the map $\pi: (x_1, x_2, z) \mapsto (x_1, z)$, that is, epi$(w) = \pi(C)$.
Take a vertical interval $J$ outside $C$ with the endpoints $(x^0, u(x^0))$ and $Q^0 = (x^0, z^0)$,\ $z^0 < u(x^0)$, so that $Q^0$ lies in the interior of \conv$(C \cup \{ Q \})$.
Draw two support lines from an arbitrary point of the interval $\pi(J) = \big( (x_1^0, z^0),\ (x_1^0, u(x^0)) \big)$ to the convex set $\pi(C)$ in the $(x_1,z)$-plane. Since $\pl(\pi(C))$ contains at most countably many line segments, for all points of $\pi(J)$, except possibly countably many, the intersection of each support line with $\pi(C)$ is a single point.
Replacing if necessary the point $(x_1^0, z^0)$ with an interior point of $\pi(J)$, without loss of generality we assume that
\beq\label{(del)}
\text{the intersection of each line of support to} \ \, \pi(C)\ \, \text{through}\ \, (x_1^0, z^0) \ \, \text{with} \ \, \pi(C) \ \, \text{is a single point}
\eeq
and, additionally,
$$
u(x^0) - \ve < z^0 < u(x^0).
$$
Choose $\del > 0$ sufficiently small, so that the horizontal segment $I = [A, B]$ with the endpoints
$$
A = (x^0, z^0) - (0, \del, 0) \qquad \text{and} \qquad B = (x^0, z^0) + (0, \del, 0)
$$
is contained in the interior of {\conv}$(C \cup \{ Q \})$ and
$$
z^0 > u(x_1^0, x_2^0 + t) - \ve \qquad \text{for all} \quad |t| < \del.
$$
Define
$$\tilde C = \text{{\conv}}(C \cup I)$$
and define the function $\widetilde{u}$ by the condition that $\tilde C$ is the epigraph of $\widetilde{u}$. We have $\widetilde{u} \le u$. Additionally, $\pl C \setminus \pl\tilde C \subset \text{graph}\big( u\rfloor_\UUU \big)$, and therefore,
\vspace{1mm}
(a) $\widetilde{u}\rfloor_{\Om\setminus\UUU} = u\rfloor_{\Om\setminus\UUU}$.
\vspace{1mm}
Each point of $\pl\tilde C$ is contained either in $\pl C$, or in a segment joining a point of $I$ with a point of $\pl C \cap \pl\tilde C$. It follows that for any $x \in \Om$, either $\widetilde{u}(x) = u(x)$ or there exist $x^1 \in \Om$ and $x^2 = x^0 + (0,t),\, |t| \le \del$ such that the point $(x, \widetilde{u}(x))$ lies on the segment joining the points $(x^1, u(x^1))$ and $(x^2, z^0)$, that is, $x = \lam x^1 + (1-\lam) x^2$ and $\widetilde{u}(x) = \lam u(x^1) + (1-\lam) z^0$ for some $0 \le \lam < 1$.
Taking into account that $u$ is convex and $z^0 > u(x^2) - \ve$, one obtains
$$
\widetilde{u}(x) > \lam u(x^1) + (1-\lam) (u(x^2) - \ve) \ge u(x) - (1-\lam) \ve.
$$
Thus, the following property is proved.
\vspace{1mm}
(b) $0 \le u - \widetilde{u} < \ve$.
\vspace{1mm}
There are two planes of support to $\pl\tilde C$ through each interior point of $I$, and this pair of planes does not depend on the choice of the point. Let them be designated as $\Pi_-$ and $\Pi_+$. The planes are of the form
\beq\label{Piplusminus}
\Pi_- :\, z - z^0 = \xi_- (x_1 - x_1^0) \quad \text{and} \quad \Pi_+ :\, z - z^0 = \xi_+ (x_1 - x_1^0),
\eeq
where $\xi_- < \xi_+$ are some real values. Condition \eqref{(del)} means that the intersection of each of these planes with $C$ is a line segment parallel to the $x_2$-axis (possibly degenerating to a point). Let them be denoted as
$$
I_\pm = \Pi_\pm \cap C = \big( x_1^\pm,\ x_2^\pm + [-a_\pm,\, a_\pm],\ z^0 + \xi_\pm (x_1^\pm - x_1^0) \big).
$$
Correspondingly, the intersection of each plane $\Pi_\pm$ with the graph of $\widetilde{u}$ is the graph of an affine function defined on a trapezoid. For future use we denote these functions by $z_-(x)$ and $z_+(x)$ and provide their analytic description,
\beq\label{2affine}
z_-(x) - z^0 = \xi_- (x_1 - x_1^0), \ (x_1, x_2) \in T_- \ \ \text{and} \ \ z_+(x) - z^0 = \xi_+ (x_1 - x_1^0), \ (x_1, x_2) \in T_+,
\eeq
where $T_\pm$ is the trapezoid with the sides $S_0 = \big( x_1^0,\, x_2^0 + [-\del,\, \del] \big)$ and $S_\pm = \big( x_1^\pm,\, x_2^\pm + [-a_\pm,\, a_\pm] \big)$, with $x_1^- < x_1^0 < x_1^+$ and $a_\pm \ge 0$; see Fig.~\ref{fig trapezoid}.
\begin{figure}[h]
\centering
\includegraphics[scale=0.2]{Trape}
\caption{The trapezoids $T_+$ and $T_-$.}
\label{fig trapezoid}
\end{figure}
All points of $S_\pm$ are regular and for any $x \in S_\pm$, $\nabla u(x) = (\xi_\pm, 0)$, and additionally,
$$
w'(x_1^\pm) = \xi_\pm \quad \text{and} \quad w(x_1^\pm) = z^0 + \xi_\pm (x_1^\pm - x_1^0).
$$
By condition \eqref{(beta)}, the restriction of $f$ on the line segment $[\xi_-,\, \xi_+] \times \{ 0 \}$ is strictly concave, that is,
\beq
f(\xi, 0), \ \xi \in [\xi_-,\, \xi_+] \quad \text{is a strictly concave function of one variable.}
\eeq
Using that for $x \in [x_1^-,\, x_1^+]$,
$$
f(w'(x), 0)\ =\ f\Big( \frac{\xi_+ - w'(x)}{\xi_+ - \xi_-}\, \xi_- + \frac{w'(x) - \xi_-}{\xi_+ - \xi_-}\, \xi_+, \ 0 \Big)
$$ $$
>\ \frac{\xi_+ - w'(x)}{\xi_+ - \xi_-}\, f(\xi_-, 0)\ +\ \frac{w'(x) - \xi_-}{\xi_+ - \xi_-}\, f(\xi_+, 0)
$$
and that
$$
\int_{x_1^-}^{x_1^+} w'(x)\, dx = w(x_1^+) - w(x_1^-) = \xi_+ (x_1^+ - x_1^0) - \xi_- (x_1^- - x_1^0),
$$
one obtains
\beq\label{bigineq}
\begin{split}
\int_{x_1^-}^{x_1^+} f(w'(x), 0)\, dx\ &>\ \frac{\xi_+(x_1^+ - x_1^-) - \int_{x_1^-}^{x_1^+} w'(x)\, dx}{\xi_+ - \xi_-}\ f(\xi_-, 0)\\
+\ \frac{\int_{x_1^-}^{x_1^+} w'(x)\, dx - \xi_-(x_1^+ - x_1^-)}{\xi_+ - \xi_-}\ f(\xi_+, 0) \ &=\ f(\xi_-, 0) (x_1^0 - x_1^-) + f(\xi_+, 0) (x_1^+ - x_1^0).
\end{split}
\eeq
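For the reader's convenience, here is a check of the coefficient of $f(\xi_-, 0)$ in the last equality of \eqref{bigineq}:
$$
\frac{\xi_+ (x_1^+ - x_1^-) - \int_{x_1^-}^{x_1^+} w'(x)\, dx}{\xi_+ - \xi_-}
= \frac{\xi_+ (x_1^0 - x_1^-) + \xi_- (x_1^- - x_1^0)}{\xi_+ - \xi_-}
= \frac{(\xi_+ - \xi_-)(x_1^0 - x_1^-)}{\xi_+ - \xi_-} = x_1^0 - x_1^-;
$$
the coefficient of $f(\xi_+, 0)$ is computed similarly and equals $x_1^+ - x_1^0$.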
\vspace{2mm}
Define the family of convex sets $C_s,\, s \le 1$ by
$$
C_s =
\left\{
\begin{array}{ll}
(1-s)C + s\tilde C, & \text{if } 0 \le s \le 1;\\
C \cap \big[ (1-s)C + sA \big] \cap \big[ (1-s)C + sB \big], & \text{if } s < 0.
\end{array}\right.
$$
In particular, $C_0 = C$ and $C_1 = \tilde C$.
The following technical lemma will be proved in Section \ref{sec technical}.
\begin{lemma}\label{lt2}
$C_s$ is the epigraph of a convex function $u^{(s)}$ defined on $\Om$. There is a value $s_0 < 0$ such that for $s_0 \le s \le 1$,\, $u^{(s)}$ satisfies statements (a) and (b) of Theorem \ref{t3}.
\end{lemma}
Let us now study the values of the functional $F(u^{(s)})$ for $0 \le s \le 1$. We shall prove that $F(u^{(s)})$ is a polynomial of the $2^{\text{nd}}$ degree in $s$ and determine its coefficients.
Denote by $\Pi^n$,\, $\tilde\Pi^n$, and $\Pi^n_s$ the planes of support to $C$,\, $\tilde C$, and $C_s$, respectively, with the outward normal $n \in S^2$. In particular, $\Pi^n_0 = \Pi^n$ and $\Pi^n_1 = \tilde\Pi^n$. The following equations hold,
\beq\label{pies1}
\Pi^n_s = (1-s) \Pi^n + s \tilde\Pi^n,
\eeq
\beq\label{pies2}
\Pi^n_s \cap C_s = (1-s) (\Pi^n \cap C) + s (\tilde\Pi^n \cap \tilde C).
\eeq
We restrict ourselves to proving equation \eqref{pies2}, leaving equation \eqref{pies1} to the reader. We have
$$
\Pi^n \cap C = \{ r \in C :\, \langle r, n \rangle = \max_{\rho \in C} \langle \rho, n \rangle \}, \qquad \tilde\Pi^n \cap \tilde C = \{ r \in \tilde C :\, \langle r, n \rangle = \max_{\rho \in \tilde C} \langle \rho, n \rangle \},
$$
and since
$$
\max_{\rho \in C_s} \langle \rho, n \rangle = \max_{\rho_1 \in C,\, \rho_2 \in \tilde C} \langle (1-s)\rho_1 + s\rho_2,\, n \rangle = (1-s)\max_{\rho_1 \in C} \langle \rho_1, n \rangle + s\max_{\rho_2 \in \tilde C} \langle \rho_2, n \rangle,
$$
one comes to \eqref{pies2}:
$$
\Pi_s^n \cap C_s = \left\{ r = (1-s) r_1 + s r_2 :\ r_1 \in C,\ r_2 \in \tilde C, \
\langle r_1, n \rangle = \max_{\rho_1 \in C} \langle \rho_1, n \rangle,\, \right. $$
$$ \left.
\langle r_2, n \rangle = \max_{\rho_2 \in \tilde C} \langle \rho_2, n \rangle \right\}
= (1-s) \big(\Pi^n \cap C\big) + s \big(\tilde\Pi^n \cap \tilde C\big).$$
For each $n$, the plane $\tilde\Pi^n$ may intersect or not intersect $\pl C$, and may intersect or not intersect $I$. If $\tilde\Pi^n$ intersects $I$, the following three cases are possible: $\tilde\Pi^n \cap I = A$,\, $\tilde\Pi^n \cap I = B$,\, $\tilde\Pi^n \cap I = I$. Additionally, $\tilde\Pi^n$ always intersects either $\pl C$ or $I$. Thus, there may be 7 cases:
\vspace{1mm}
$\tilde\Pi^n \cap \pl C \ne \emptyset$ and $\tilde\Pi^n \cap I = \emptyset$;
\vspace{1mm}
$\tilde\Pi^n \cap \pl C \ne \emptyset$ and $\tilde\Pi^n \cap I = A$; \qquad $\tilde\Pi^n \cap \pl C = \emptyset$ and $\tilde\Pi^n \cap I = A$;
\vspace{1mm}
$\tilde\Pi^n \cap \pl C \ne \emptyset$ and $\tilde\Pi^n \cap I = B$; \qquad $\tilde\Pi^n \cap \pl C = \emptyset$ and $\tilde\Pi^n \cap I = B$;
\vspace{1mm}
$\tilde\Pi^n \cap \pl C \ne \emptyset$ and $\tilde\Pi^n \cap I = I$; \qquad $\tilde\Pi^n \cap \pl C = \emptyset$ and $\tilde\Pi^n \cap I = I$.
\vspace{1mm}
\hspace*{-6mm}Correspondingly, $S^2$ is the disjoint union of 7 sets,
$$
S^2 = \AAA_0 \sqcup \AAA_1^A \sqcup \AAA_1^B \sqcup \AAA_2^A \sqcup \AAA_2^B \sqcup \AAA_3 \sqcup \AAA_4,
$$
where $\AAA_0 = \{ n :\, \tilde\Pi^n \cap \pl C \ne \emptyset$ and $\tilde\Pi^n \cap I = \emptyset \}$;
$\AAA_1^A = \{ n :\, \tilde\Pi^n \cap \pl C = \emptyset$ and $\tilde\Pi^n \cap I = A \}$;
$\AAA_2^A = \{ n :\, \tilde\Pi^n \cap \pl C \ne \emptyset$ and $\tilde\Pi^n \cap I = A \}$;
$\AAA_1^B$ and $\AAA_2^B$ are defined in a similar way; \qquad
$\AAA_1 = \AAA_1^A \sqcup \AAA_1^B$; \, $\AAA_2 = \AAA_2^A \sqcup \AAA_2^B$;
$\AAA_3 = \{ n :\, \tilde\Pi^n \cap \pl C = \emptyset$ and $\tilde\Pi^n \cap I = I \}$;
$\AAA_4 = \{ n :\, \tilde\Pi^n \cap \pl C \ne \emptyset$ and $\tilde\Pi^n \cap I = I \}$.\\
The boundary of each of the bodies $C$, $\tilde C$, $C_s$ can be represented as the union,
$$
\pl C_s = \cup_{i=0}^4 \pl_i C_s, \quad
\pl_j C = \pl_j^A C \cup \pl_j^B C, \quad \pl_j \tilde C = \pl_j^A \tilde C \cup \pl_j^B \tilde C, \ \, j = 1,\, 2,
$$
where
$$
\pl_i C_s = \Big( \cup_{n\in\AAA_i} \Pi^n_s \Big) \cap C_s, \ \, i = 0,\ldots,4, \quad
\pl_j^A C_s = \Big( \cup_{n\in\AAA_j^A} \Pi^n_s \Big) \cap C_s, \ \, j = 1,\, 2,
$$
and a similar representation holds for $\pl_j^B C_s$.
We have $\pl_i C_0 = \pl_i C$ and $\pl_i C_1 = \pl_i \tilde C,\, i = 0,\ldots,4$. It follows from equation \eqref{pies2} that
\beq\label{pliCs}
\pl_i C_s = (1-s) \pl_i C + s \pl_i \tilde C, \ i = 0,\ldots,4, \quad \pl_j^A C_s = (1-s) \pl_j^A C + s \pl_j^A \tilde C, \ j = 1,\, 2,
\eeq
and the same is true for $\pl_j^B C_s$.
\begin{propo}\label{pro}
(a) If a point $P$ belongs to $\pl_1^A C \cup \pl_3 C$ then the interval $(A,\, P)$ does not intersect $C$.
(b) If an interval $(A,\, P)$ intersects $\pl_0 C$ then $P \not\in C$.
(c) If an interval $(A,\, P)$ intersects $\pl_2 C$ then either $P \not\in C$, or $P \in \pl_2 C$.
(d) For $0 \le s < 1$, the sets $\pl_0 C_s,\, \pl_1^A C_s,\, \pl_1^B C_s,\, \pl_2^A C_s,\, \pl_2^B C_s,\, \pl_3 C_s,\, \pl_4 C_s$ are disjoint.
\end{propo}
\begin{proof}
The proofs of statements (a), (b), and (c) are straightforward and left to the reader.
The sets $\pl_i C$, $i \ne 0$ belong to the graph of $u\rfloor_\UUU$, and therefore, do not contain singular points. If a point $x \in \pl C_s$, $0 < s < 1$, is singular and admits the decomposition $x = (1-s) x_1 + s x_2$ with $x_1 \in \pl C$, $x_2 \in \pl\tilde C$, then $x_1$ is a singular point of $\pl C$ and $x_2$ is a singular point of $\pl\tilde C$. It follows that the sets $\pl_i C_s$, $i \ne 0$ for $0 \le s < 1$ also do not contain singular points.
Let us show that the sets in each pair $\pl_i^* C_s$,\, $\pl_j^* C_s$, where each of the superscripts ``$*$'' is either removed or replaced by $A$ or $B$, are disjoint. Assume the contrary, that is, there exists a point $x \in \pl_i^* C_s \cap \pl_j^* C_s$; then $x$ is a singular point of $\pl C_s$. However, at least one of the subscripts $i,\, j$ is nonzero, and therefore, at least one of the sets $\pl_i C_s$,\, $\pl_j C_s$ does not contain singular points. The obtained contradiction proves statement (d).
\end{proof}
Taking $s = 1$, one sees that some of the sets $\pl_i \tilde C$,\, $i = 0,\ldots,4$ intersect. In particular, $\pl_4 \tilde C$ is the union of graphs of two affine functions $z_\pm(x)$ given by \eqref{2affine}, and $\pl_3 \tilde C = I$ is contained in $\pl_4 \tilde C$.
Fig.~\ref{fig section} shows the section of $C$ by a plane through $AB$ that coincides with neither of the planes $z - z^0 = \xi_\pm (x_1 - x_1^0)$. The plane is defined either by $z - z^0 = \xi (x_1 - x_1^0)$ with $\xi < \xi_-$ or $\xi > \xi_+$, or by $x_1 - x_1^0 = 0$.
\begin{figure}[h]
\centering
\includegraphics[scale=0.2]{Intersect}
\caption{The section of $C$ by a plane through $AB$ is shown. The section of the surface $\pl_0 C$ (not shown completely) is represented by the lines $A' \widehat{A}$ and $B'' \widehat{B}$. The sections of the surfaces $\pl_1^A C$ and $\pl_1^B C$ are the lines $A' \widetilde{A}$ and $B' \widetilde{B}$, respectively. The sections of $\pl_2^A C$ and $\pl_2^B C$ are, respectively, the point $A'$ and the closed line segment $B' B''$. The section of the surface $\pl_3 C$ is the closed line segment $\widetilde{A} \widetilde{B}$. The intersection of the plane with $\pl_4 C$ is empty.}
\label{fig section}
\end{figure}
In Fig.~\ref{fig section k<1}, the section of $C_s$ by the same plane is shown.
\begin{figure}[h]
\centering
\includegraphics[scale=0.2]{SecLesOne}
\caption{The section of $C_s$ by a plane is shown, $s = 1/2$. The section of the surface $\pl_0 C_s$ coincides with the section of $\pl_0 C$. The closed line segments $A' {A}'_s$ and $B'' {B}'_s$ are, respectively, the sections of $\pl_2^A C_s$ and $\pl_2^B C_s$. The two curves and a closed line segment forming together the arc ${A}'_s {B}'_s$ represent the sections of $\pl_1^A C_s$,\, $\pl_1^B C_s$, and $\pl_3 C_s$.}
\label{fig section k<1}
\end{figure}
The section of $C_s$ by the plane $z - z^0 = \xi_+ (x_1 - x_1^0)$ is shown in Fig.~\ref{fig touch}. (The section by the plane $z - z^0 = \xi_- (x_1 - x_1^0)$ looks similar to it.) The plane is tangent to both sets $C$ and $C_s$. The intersection of the plane with $C$ is a line segment, and the intersection with $C_s$ is a trapezoid (they may degenerate to a point and a triangle, respectively).
\begin{figure}[h]
\centering
\includegraphics[scale=0.11]{To}
\caption{The section of $C_s$ with $0 < s < 1$ by the plane $z - z^0 = \xi_+ (x_1 - x_1^0)$ is shown. The intersection of the plane with $C$ is the segment $I_+ = \pl_4^+ C$, and the intersection with $C_s$ is the trapezoid $\pl_4^+ C_s$. }
\label{fig touch}
\end{figure}
Later on we will consider sections of the sets $C_s$ by planes through the segment $I$. With this in mind, denote by $\Pi[x_1]$ the plane through the line containing
$AB$ and the line $(x_1, \RRR, w(x_1))$,\, $x_1 \in [x_1^-,\, x_1^+]$. In particular, the planes $\Pi[x_1^\pm]$ coincide, correspondingly, with the planes $\Pi_\pm$ defined by \eqref{Piplusminus}.
We are going to determine the resistance produced by each of the sets $\pl_i C_s$,\, $0 \le s < 1$,\, $i = 0,\ldots,4$. Let us consider them separately.
\vspace{2mm}
{\bf [0]} If $n \in \AAA_0$ then $\tilde\Pi^n = \Pi^n = \Pi^n_s$ for all $0 \le s \le 1$.
The sets $\pl_0 C_s$ for all $0 \le s \le 1$ coincide with $\pl_0 C$. The corresponding resistance does not depend on $s$,
$$F(\pl_0 C_s) = F(\pl_0 C) = a_0.
$$
\vspace{1mm}
{\bf [1]} If $n \in \AAA_1^A$ then $\tilde\Pi^n \cap \tilde C = A$, and if $n \in \AAA_1^B$ then $\tilde\Pi^n \cap \tilde C = B$, hence $\pl_1^A \tilde C = A$ and $\pl_1^B \tilde C = B$. The corresponding sets $\pl_1^A C_s$ and $\pl_1^B C_s$ are disjoint, and by \eqref{pliCs},
$$
\pl_1^A C_s = (1-s) \pl_1^A C + s A, \qquad \pl_1^B C_s = (1-s) \pl_1^B C + s B.
$$
Thus, $\pl_1^A C_s$ and $\pl_1^B C_s$ are homothetic, respectively, to $\pl_1^A C$ and $\pl_1^B C$ with the ratio $1-s$ and with the centers at $A$ and $B$, and therefore, $F(\pl_1^A C_s) = (1-s)^2 F(\pl_1^A C)$ and $F(\pl_1^B C_s) = (1-s)^2 F(\pl_1^B C)$. Denoting $a_1 = 2F(\pl_1 C) = 2(F(\pl_1^A C) + F(\pl_1^B C))$, we obtain
$$
F(\pl_1 C_s) = F(\pl_1^A C_s) + F(\pl_1^B C_s) = \frac{a_1}{2} (1-s)^2.
$$
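Here we used the scaling property of the resistance under homotheties: a homothety with center $P$ and ratio $1-s$ leaves the gradient of the function describing a piece $\mathcal{G}$ of the graph unchanged, while the area of the projection of the piece on the $x$-plane is multiplied by $(1-s)^2$; hence
$$
F\big( (1-s)\, \mathcal{G} + s P \big) = (1-s)^2\, F(\mathcal{G}).
$$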
\vspace{1mm}
{\bf [2]} Now consider $\AAA_2 = \AAA_2^{A} \cup \AAA_2^{B}$. The corresponding sets $\pl_2^A \tilde C$ and $\pl_2^B \tilde C$ are disjoint. Indeed, assume the contrary and take a point $P$ in $\pl_2^A \tilde C \cap \pl_2^B \tilde C$. Then there are two vectors $n_1 \in \AAA_2^{A}$ and $n_2 \in \AAA_2^{B}$ such that the plane $\tilde\Pi^{n_1}$ contains $P$ and $A$ and does not contain $B$, and the plane $\tilde\Pi^{n_2}$ contains $P$ and $B$ and does not contain $A$. It follows that $P$ is a singular point of $\pl\tilde C$.
The point $P$ can be represented as a convex combination of three points $A,\, B$, and $P' \in \pl C$. The point $P'$ is a regular point of $\pl C$, hence $P \ne P'$. The point $P$ cannot lie on the side $AP'$, since otherwise the plane of support through $P$ and $B$ contains $A$. The same is true for the side $BP'$. Of course, $P$ does not lie on the side $AB$. Finally, all interior points of the triangle $ABP'$ are regular points of $\pl\tilde C$, and therefore, cannot coincide with $P$. We come to a contradiction.
The set $\pl_2^A \tilde C$ is the union of line segments with one endpoint at $A$. Each crossing plane $\Pi[x_1]$,\, $x_1 \in (x_1^-,\, x_1^+)$ contains such a segment; let it be designated as $[A,\, A''(x_1)]$. It is the union $[A,\, A''(x_1)] = [A,\, A'(x_1)) \cup [A'(x_1),\, A''(x_1)]$, where $[A,\, A'(x_1)) \cap C = \emptyset$ and $[A'(x_1),\, A''(x_1)] \subset C$. The set $\pl_2^A C$ is then the union of the corresponding segments $[A'(x_1),\, A''(x_1)]$, and $\pl_2^A C_s$ is the union of the segments $[A_s(x_1),\, A''(x_1)]$, where $A_s(x_1) = (1-s) A'(x_1) + s A$.
Note that the segment $[A_s(x_1),\, A''(x_1)]$ can be represented as the disjoint union
$$
[A''(x_1),\, A_s(x_1)] = [A''(x_1),\, A'(x_1)] \sqcup
\big[ (A'(x_1),\, A] \setminus \big( (1-s) (A'(x_1),\, A] + s A \big) \big].
$$
Thus, $\pl_2^A C_s$ is the disjoint union of $\pl_2^A C$ and the set-theoretic difference of the set $\pl_2^A \tilde C \setminus \pl_2^A C$ and the set homothetic to it with ratio $1-s$ and the center $A$,
$$
\pl_2^A C_s = \pl_2^A C \sqcup
\Big[ \big( \pl_2^A \tilde C \setminus \pl_2^A C \big) \setminus \big( (1-s) (\pl_2^A \tilde C \setminus \pl_2^A C) + s A \big) \Big].
$$
Hence
$$
F(\pl_2^A C_s) = F(\pl_2^A C) + (1 - (1-s)^2) F(\pl_2^A \tilde C \setminus \pl_2^A C)
= F(\pl_2^A \tilde C) - (1-s)^2 (F(\pl_2^A \tilde C) - F(\pl_2^A C)),
$$
and a similar relation holds for $F(\pl_2^B C_s)$. Thus, denoting ${a_2} = 2( F(\pl_2^A \tilde C) - F(\pl_2^A C) ) + 2( F(\pl_2^B \tilde C) - F(\pl_2^B C) ) = 2( F(\pl_2 \tilde C) - F(\pl_2 C) )$ and $b_2 = F(\pl_2^A \tilde C) + F(\pl_2^B \tilde C) = F(\pl_2 \tilde C)$, we obtain
$$
F(\pl_2 C_s) = b_2 - \frac{a_2}{2} (1-s)^2.
$$
\vspace{1mm}
{\bf [3]} We have $\pl_3 \tilde C = I$ and $\AAA_3 = \{ n : \tilde\Pi^n \cap \tilde C = I \}$. The set $\AAA_3$ is the smaller arc of the great circle in the plane $n_2 = 0$ with the endpoints $n_\pm = (\xi_\pm, 0, -1)/\sqrt{1 + \xi_\pm^2}$.
The image of the set $\pl_3 C = \big( \cup_{n \in \AAA_3} \Pi^n \big) \cap C$ under the map $(x_1, x_2, z) \mapsto (x_1, z)$ is the graph of the restriction of the function $w$ on the interval $(x_1^-,\, x_1^+)$;
recall that $w(x_1) = \inf_{x_2} u(x_1,x_2)$.
The intersection of the set $\pl_3 C$ with each plane $\Pi[x_1]$ is a line segment parallel to $I$ (possibly degenerating to a point); let it be the segment $[A(x_1),\, B(x_1)]$ co-directional with $[A,\, B]$. Correspondingly, the set $\pl_3 C$ is the union of these segments,
$$
\pl_3 C = \cup_{x_1 \in (x_1^-,\, x_1^+)} [A(x_1),\, B(x_1)].
$$
We denote by $L(x_1)$ the length of $ [A(x_1),\, B(x_1)]$.
Denote $A_s(x_1) = (1-s) A(x_1) + s A$ and $B_s(x_1) = (1-s) B(x_1) + s B$. In particular, $A_0(x_1) = A(x_1)$ and $B_0(x_1) = B(x_1)$. The length of $[ A_s(x_1),\, B_s(x_1) ]$ equals $(1-s) L(x_1) + s |I|$. Recall that $|I| = 2\del$. We have
$$
\pl_3 C_s = (1-s) \pl_3 C + s I = \cup_{x_1 \in (x_1^-,\, x_1^+)} [ A_s(x_1),\, B_s(x_1) ].
$$
The image of $\pl_3 C_s$ under the map $(x_1, x_2, z) \mapsto (x_1, z)$ is the set $(1-s) \text{graph}\big( w\rfloor_{(x_1^-,\, x_1^+)} \big) + s (x_1^0, z^0)$; that is, it is the homothety of $\text{graph}\big( w\rfloor_{(x_1^-,\, x_1^+)} \big)$ with the center $(x_1^0, z^0)$ and ratio $1-s$.
The gradient of the function $u^{(s)}$ at a point of $\pl_3 C_s$ lying on the segment $[A_s(x_1),\, B_s(x_1)]$ equals $(w'(x_1), 0)$; hence, assuming without loss of generality that $x_1^0 = 0$,
$$
F(\pl_3 C_s) = \int_{(1-s)x_1^-}^{(1-s)x_1^+} f(w'(t/(1-s)), 0)\, \big[(1-s) L(t/(1-s)) + s |I| \big]\, dt = a_3 s(1-s) + \frac{b_3}{2}\, (1-s)^2,
$$
\beq\label{a3}
\text{where} \ \, a_3 = 2\del \int_{x_1^-}^{x_1^+} f(w'(x), 0)\, dx \ \, \text{and} \ \, b_3 = 2\int_{x_1^-}^{x_1^+} L(x_1) f(w'(x_1), 0)\, dx_1.
\eeq
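The last equality is obtained by the substitution $t = (1-s)x$ (here the normalization $x_1^0 = 0$, which can always be achieved by a horizontal translation, is assumed); recalling that $|I| = 2\del$, the integral becomes
$$
(1-s) \int_{x_1^-}^{x_1^+} f(w'(x), 0)\, \big[ (1-s) L(x) + 2\del s \big]\, dx
= 2\del\, s(1-s) \int_{x_1^-}^{x_1^+} f(w'(x), 0)\, dx + (1-s)^2 \int_{x_1^-}^{x_1^+} L(x)\, f(w'(x), 0)\, dx,
$$
which equals $a_3 s(1-s) + \frac{b_3}{2}\, (1-s)^2$ by \eqref{a3}.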
\vspace{2mm}
{\bf [4]} $\AAA_4$ is the set of two vectors, $\AAA_4 = \{ n_-, n_+ \}$; recall that $n_\pm = (\xi_\pm, 0, -1)/\sqrt{1 + \xi_\pm^2}$.
The sets $\tilde\Pi^{n_-} \cap \tilde C = \pl_4^- \tilde C$ and $\tilde\Pi^{n_+} \cap \tilde C = \pl_4^+ \tilde C$ are graphs of the affine functions $z_\pm(x)$ defined by \eqref{2affine}, and $\pl_4^- \tilde C \cup \pl_4^+ \tilde C = \pl_4 \tilde C$. The planes $\Pi^{n_\pm} = \tilde\Pi^{n_\pm}$ coincide with the planes $\Pi_\pm$ defined by \eqref{Piplusminus}, and $\pl_4 C = I_- \cup I_+$.
Further, $\pl_4 C_s = \pl_4^- C_{s} \cup \pl_4^+ C_{s}$, where each set $\pl_4^\pm C_{s}$ is the graph of the restriction of the function $z = z_\pm(x)$ on the smaller trapezoid with a base $(x_1^\pm,\ x_2^\pm + [-a_\pm, a_\pm])$ and the lateral sides contained in the lateral sides of $T_\pm$, and with the height $s |x_1^0 - x_1^\pm|$; see Fig.~\ref{fig trapezoid}. The length of the other base is $2(1-s)a_\pm + 2s\del$. The slope of the planar set $\pl_4^\pm C_{s}$ is $\xi_\pm$, and the area of its projection on the $x$-plane equals
$$
\Del_4^\pm = s |x_1^0 - x_1^\pm| \big[ (2-s) a_\pm + s \del \big] = (1 - (1-s)^2) a_\pm |x_1^0 - x_1^\pm| + s^2 \del |x_1^0 - x_1^\pm|.
$$
Thus,
$$
F(\pl_4 C_s) = F(\pl_4^- C_{s}) +F(\pl_4^+ C_{s}) = \frac{a_4}{2} s^2 + b_4(1 - (1-s)^2),
$$
where
\beq\label{a4}
a_4 = 2\del \big( f(\xi_-, 0) (x_1^0 - x_1^-) + f(\xi_+, 0) (x_1^+ - x_1^0) \big), \ \ b_4 = f(\xi_-, 0) (x_1^0 - x_1^-) a_- + f(\xi_+, 0) (x_1^+ - x_1^0) a_+.
\eeq
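Indeed, since each set $\pl_4^\pm C_s$ is planar with slope $\xi_\pm$, its resistance equals $f(\xi_\pm, 0)$ times the projected area $\Del_4^\pm$, and
$$
f(\xi_-, 0)\, \Del_4^- + f(\xi_+, 0)\, \Del_4^+
= \big( 1 - (1-s)^2 \big) \big[ f(\xi_-, 0) (x_1^0 - x_1^-)\, a_- + f(\xi_+, 0) (x_1^+ - x_1^0)\, a_+ \big]
$$
$$
+\ s^2 \del\, \big[ f(\xi_-, 0) (x_1^0 - x_1^-) + f(\xi_+, 0) (x_1^+ - x_1^0) \big]
= b_4 \big( 1 - (1-s)^2 \big) + \frac{a_4}{2}\, s^2,
$$
in accordance with \eqref{a4}.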
\vspace{2mm}
Thus, for $0 \le s < 1$ we have
$$
F(u^{(s)}) = F(\pl_0 C_s) + F(\pl_1 C_s) + F(\pl_2 C_s) + F(\pl_3 C_s) + F(\pl_4 C_s)
$$
\beq\label{F kless1}
= c_0 + \frac{c_1}{2} (1-s)^2 + a_3 s(1-s) + \frac{a_4}{2} s^2,
\eeq
where $c_0 = a_0 + b_2 + b_4$ and $c_1 = a_1 - a_2 + b_3 - 2b_4$. Hence the derivative of $F(u^{(s)})$ for $0 < s < 1$ equals
$$
\frac{d}{ds} F(u^{(s)}) = c_1 (s - 1) + a_3(1 - 2s) + a_4 s
$$
and the right derivative at $s=0$ is
$$
\frac{d}{ds}\bigg\rfloor_{s=0^+} F(u^{(s)}) = a_3 - c_1.
$$
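For completeness, \eqref{F kless1} is the result of summing the five contributions found in {\bf [0]}--{\bf [4]}:
$$
F(u^{(s)}) = a_0 + \frac{a_1}{2} (1-s)^2 + b_2 - \frac{a_2}{2} (1-s)^2 + a_3 s(1-s) + \frac{b_3}{2} (1-s)^2 + \frac{a_4}{2} s^2 + b_4 \big( 1 - (1-s)^2 \big);
$$
collecting the constant terms and the terms with $(1-s)^2$ gives $c_0 = a_0 + b_2 + b_4$ and $c_1 = a_1 - a_2 + b_3 - 2b_4$.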
The following lemma serves to determine the left derivative of $F(u^{(s)})$ at $s=0$.
\begin{lemma}\label{ls<0}
$$
F(u^{(s)}) = c_0 + \frac{c_1}{2} + a_3 s - c_1 s + o(s) \quad \text{\rm as} \ s \to 0^-.
$$
\end{lemma}
Before proving this lemma, let us finish the proof of Theorem \ref{t3}.
It follows from Lemma \ref{ls<0} that the derivative of $F(u^{(s)})$ at $s = 0$ exists, and
\beq\label{derivative k=1}
\frac{d}{ds} F(u^{(s)})\Big\rfloor_{s=0} = a_3 - c_1.
\eeq
Now consider two cases. If $\frac{d}{ds} F(u^{(s)})\rfloor_{s=0} \ne 0$ then for $s$ sufficiently small (either positive or negative), the function $u_1 = u^{(s)}$ satisfies statement (c) of Theorem \ref{t3}. Statements (a) and (b) are guaranteed by Lemma \ref{lt2}, and so, Theorem \ref{t3} is proved.
If $\frac{d}{ds} F(u^{(s)})\rfloor_{s=0} = 0$ then by \eqref{derivative k=1} we have $a_3 = c_1$. From inequality \eqref{bigineq} and the definition of $a_3$ and $a_4$ in \eqref{a3} and \eqref{a4} one obtains $a_3 > a_4$. Now since
$$
F(u) = c_0 + \frac{c_1}{2} = c_0 + \frac{a_3}{2}, \qquad \lim_{s \to 1^-} F(u^{(s)}) = c_0 + \frac{a_4}{2} < c_0 + \frac{a_3}{2},
$$
we conclude that for $s$ sufficiently close to 1, $u^{(s)}$ satisfies statement (c) of Theorem \ref{t3}. Again, Lemma \ref{lt2} guarantees statements (a) and (b), and Theorem \ref{t3} is completely proved.
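The inequality $a_3 > a_4$ used above is precisely \eqref{bigineq} multiplied by $2\del$: by \eqref{a3} and \eqref{a4},
$$
a_3 = 2\del \int_{x_1^-}^{x_1^+} f(w'(x), 0)\, dx
> 2\del\, \big( f(\xi_-, 0) (x_1^0 - x_1^-) + f(\xi_+, 0) (x_1^+ - x_1^0) \big) = a_4.
$$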
Let us now prove Lemma \ref{ls<0}.
Denote
$$
\del_0 C = \pl_0 C \cup \pl_2 C \cup \pl_4 C, \quad \del^A C = \pl_1^A C \cup \pl_3 C, \ \ \text{and} \ \ \del^B C = \pl_1^B C.
$$
The proof of Lemma \ref{ls<0} is based on several propositions, which are proved in Section \ref{sec technical}.
\begin{propo}\label{propo3}
For $s < 0$,\, $\pl C_s$ is the union of three sets,
$$
G_1^s = \del_0 C \cap \big[ (1-s) {C} + sA \big] \cap \big[ (1-s) {C} + sB \big],
$$
$$
G_2^s = C \cap \big[ (1-s) \del^A C + sA \big] \cap \big[ (1-s)C + sB \big],
$$
$$
G_3^s = C \cap \big[ (1-s) {C} + sA \big] \cap \big[ (1-s) \del^B C + sB \big],
$$
and $F(u^{(s)})$ is the sum of resistances of the corresponding sets,
$$ F(u^{(s)}) = F(G_1^s) + F(G_2^s) + F(G_3^s). $$
\end{propo}
\begin{figure}[h]
\centering
\includegraphics[scale=0.25]{secABg1}
\caption{The sections of $C$ and $C_s$ with $s < 0$ by a plane through $AB$ are shown. The (lower part of the) section of $C$ is shown in light gray. The section of $C_s$ is bounded below by the union of the arcs $A'_s{A}_s$ and $B'_s{B}_s$. The arc $A'_s{A}_s$ is a part of the section of the surface $(1-s) \pl C + s A$, and the arc $B'_s{B}_s$ is a part of the section of the surface $(1-s) \pl C + s B$.}
\label{fig section AB}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[scale=0.25]{secGR}
\caption{The sections of $C$ and $C_s$ with $s < 0$ by a plane through $AB$ are shown.}
\label{fig seckgr1}
\end{figure}
The sections of the sets $G_1^s$,\, $G_2^s$, and $G_3^s$ by a plane through $AB$ are shown in Fig.~\ref{fig section AB}. They are, respectively, the union of the curves $\widehat A A'''$ and $\widehat B B'''$; the curve $A''' D$; and the curve $B''' D$.
Using the set-theoretic identity for three abstract sets $\mathcal{A},\, \mathcal{B}$, and $\mathcal{C}$,
$$
\mathcal{A} \cap \mathcal{B} \cap \mathcal{C} =
\mathcal{A} \setminus \Big( \big(\mathcal{A} \setminus \mathcal{B}\big) \cup \big(\mathcal{A} \setminus \mathcal{C}\big) \Big),
$$
together with the additivity of the functional $F$, which yields
$$
F(\mathcal{A} \cap \mathcal{B} \cap \mathcal{C}) =
F(\mathcal{A}) - F(\mathcal{A} \setminus \mathcal{B}) - F(\mathcal{A} \setminus \mathcal{C})
+ F\big( \mathcal{A} \setminus (\mathcal{B} \cup \mathcal{C}) \big),
$$
one can represent $G_1^s$,\, $G_2^s$, and $G_3^s$ as follows.
\beq\label{G1s}
G_1^s = \del_0 C \setminus \Big( \big( \del_0 C \setminus \big[ (1-s)C + sA \big] \big)\, \cup\, \big( \del_0 C \setminus \big[ (1-s)C + sB \big] \big) \Big);
\eeq
the sections of the sets $\del_0 C \setminus \big[ (1-s)C + sA \big]$ and $\del_0 C \setminus \big[ (1-s)C + sB \big]$ in Fig.~\ref{fig section AB} are, respectively, the curves $A'A'''$ and $B'B'''$.
\beq\label{G2s}
G_2^s = \big[ (1-s) \del^A C + sA \big] \setminus \Big( \big( \big[ (1-s) \del^A C + sA \big] \setminus C \big)\,
\cup\, \big( \big[ (1-s) \del^A C + sA \big] \setminus \big[ (1-s)C + sB \big] \big) \Big);
\eeq
the sections of $\big[ (1-s) \del^AC + sA \big] \setminus C$ and $\big[ (1-s) \del^AC + sA \big] \setminus \big[ (1-s)C + sB \big]$ are, respectively, the curves $A_s'A'''$ and $A_s D$.
\beq\label{G3s}
G_3^s = \big[ (1-s) \del^B C + sB \big] \setminus \Big( \big( \big[ (1-s) \del^B C + sB \big] \setminus C \big)\,
\cup\, \big( \big[ (1-s) \del^B C + sB \big] \setminus \big[ (1-s)C + sA \big] \big) \Big);
\eeq
the sections of $\big[ (1-s) \del^B C + sB \big] \setminus C$ and $\big[ (1-s) \del^B C + sB \big] \setminus \big[ (1-s)C + sA \big]$ are, respectively, the curves $B_s'B'''$ and $B_s D$.
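The formula for $F(\mathcal{A} \cap \mathcal{B} \cap \mathcal{C})$ follows from the finite additivity of $F$ and the elementary identities
$$
(\mathcal{A} \setminus \mathcal{B}) \cup (\mathcal{A} \setminus \mathcal{C}) = \mathcal{A} \setminus (\mathcal{B} \cap \mathcal{C}), \qquad
(\mathcal{A} \setminus \mathcal{B}) \cap (\mathcal{A} \setminus \mathcal{C}) = \mathcal{A} \setminus (\mathcal{B} \cup \mathcal{C}),
$$
which give $F\big( \mathcal{A} \setminus (\mathcal{B} \cap \mathcal{C}) \big) = F(\mathcal{A} \setminus \mathcal{B}) + F(\mathcal{A} \setminus \mathcal{C}) - F\big( \mathcal{A} \setminus (\mathcal{B} \cup \mathcal{C}) \big)$.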
As a consequence of formulas \eqref{G1s}, \eqref{G2s}, and \eqref{G3s} one obtains the following formulas for the resistance,
$$
F(G_1^s) = F(\del_0 C) - F\big(\del_0 C \setminus \big[ (1-s)C + sA \big] \big) - F\big( \del_0 C \setminus \big[ (1-s)C + sB \big] \big)
$$
\beq\label{FG1s}
+ F\big( \del_0 C \setminus \big[ (1-s)C + s(A \cup B) \big] \big);
\eeq
$$
F(G_2^s) = F\big( (1-s) \del^A C + sA \big) - F\big( \big[ (1-s) \del^A C + sA \big] \setminus C \big)
- F\big( \big[ (1-s) \del^A C + sA \big] \setminus \big[ (1-s)C + sB \big] \big) $$
\beq\label{FG2s}
+ F\big( \big[ (1-s) \del^A C + sA \big] \setminus \big( C \cup \big[ (1-s)C + sB \big] \big) \big);
\eeq
$$
F(G_3^s) = F\big( (1-s) \del^B C + sB \big) - F\big( \big[ (1-s) \del^B C + sB \big] \setminus C \big)
- F\big( \big[ (1-s) \del^B C + sB \big] \setminus \big[ (1-s)C + sA \big] \big) $$
\beq\label{FG3s}
+ F\big( \big[ (1-s) \del^B C + sB \big] \setminus \big( C \cup \big[ (1-s)C + sA \big] \big) \big).
\eeq
The last terms in equations \eqref{FG1s}, \eqref{FG2s}, and \eqref{FG3s} are difficult to estimate. In order to get rid of them, we introduce the sets $\VVV_c$ defined by the inequality
$$
z - z^0 \ge \max \Big\{ \frac{w(x_1^- + |c|) - z^0}{(x_1^- + |c|) - x_1^0}\, (x_1 - x_1^0),\ \ \frac{w(x_1^+ - |c|) - z^0}{(x_1^+ - |c|) - x_1^0}\, (x_1 - x_1^0) \Big\}.
$$
These sets form a nested family of dihedral angles with the edge $AB$. In particular, $\VVV_0$ contains $C$, and its faces are tangent to $C$. The smaller $|c|$, the closer $\VVV_c$ is to $\VVV_0$. In what follows we take $c = s$.
\begin{propo}\label{propo4} For $-s > 0$ sufficiently small we have
$$
\del_0 C \cap \VVV_{\als} \subset (1-s)C + s(A \cup B),
$$
$$
\big[ (1-s) \del^A C + sA \big] \cap \VVV_{\als} \subset C \cup \big[ (1-s)C + sB \big],
$$
$$
\big[ (1-s) \del^B C + sB \big] \cap \VVV_{\als} \subset C \cup \big[ (1-s)C + sA \big].
$$
\end{propo}
As a result, the formula for the resistance takes the form
\begin{equation*}
\begin{split}
F(u^{(s)}) &= F\big( G_1^s \cap \VVV_{\als} \big) + F\big( G_2^s \cap \VVV_{\als} \big) + F\big( G_3^s \cap \VVV_{\als} \big) \\
&+ F\big( G_1^s \setminus \VVV_{\als} \big) + F\big( G_2^s \setminus \VVV_{\als} \big) + F\big( G_3^s \setminus \VVV_{\als} \big)
\end{split}
\end{equation*}
$$
= F(\del_0 C \cap \VVV_{\als}) - F\big(\del_0 C \cap \VVV_{\als} \setminus \big[ (1-s)C + sA \big] \big)
- F\big( \del_0 C \cap \VVV_{\als} \setminus \big[ (1-s)C + sB \big] \big)
$$
\begin{equation*}
\begin{split}
+ F\big( \big[ (1-s) \del^A C + sA \big] \cap \VVV_{\als} \big) &- F\big( \big[ (1-s) \del^A C + sA \big] \cap \VVV_{\als} \setminus C \big)\\
&- F\big( \big[ (1-s) \del^A C + sA \big] \cap \VVV_{\als} \setminus \big[ (1-s)C + sB \big] \big)
\end{split}
\end{equation*}
\begin{equation*}
\begin{split}
+ F\big( \big[ (1-s) \del^B C + sB \big] \cap \VVV_{\als} \big) &- F\big( \big[ (1-s) \del^B C + sB \big] \cap \VVV_{\als} \setminus C \big)\\
&- F\big( \big[ (1-s) \del^B C + sB \big] \cap \VVV_{\als} \setminus \big[ (1-s)C + sA \big] \big)\\
+ F\big( \pl C \setminus \VVV_{\als} \big).
\end{split}
\end{equation*}
\begin{propo}\label{propo5.0}
We have
$$
F(\del_0 C \cap \VVV_{\als}) = F(\del_0 C) + o(|s|) = a_0 + b_2 - \frac{a_2}{2} + o(|s|) \ \ \text{\rm as} \, \ s \to 0^-,
$$ $$
F\big( \big[ (1-s) \del^A C + sA \big] \cap \VVV_{\als} \big) = (1-s)^2 F(\del^A C) + o(|s|),
$$ $$
F\big( \big[ (1-s) \del^B C + sB \big] \cap \VVV_{\als} \big) = (1-s)^2 F(\del^B C) + o(|s|),
$$
and therefore,
$$
F\big( \big[ (1-s) \del^A C + sA \big] \cap \VVV_{\als} \big) + F\big( \big[ (1-s) \del^B C + sB \big] \cap \VVV_{\als} \big)
$$ $$
= (1-s)^2 \big( F(\pl_1 C) + F(\pl_3 C) \big) + o(|s|) = \frac{a_1 +b_3}{2} - (a_1 + b_3) s + o(|s|).
$$
\end{propo}
\begin{propo}\label{propo5}
$
F\big(\del_0 C \setminus \big[ (1-s)C + sA \big] \big) + F\big( \big[ (1-s) \del^A C + sA \big] \setminus C \big)
= -2s \big( F(\pl_2^A \tilde C) - F(\pl_2^A C) \big) + o(|s|),
$
$$
F\big(\del_0 C \setminus \big[ (1-s)C + sB \big] \big) + F\big( \big[ (1-s) \del^B C + sB \big] \setminus C \big)
= -2s \big( F(\pl_2^B \tilde C) - F(\pl_2^B C) \big) + o(|s|),
$$
and therefore,
$$
F\big(\del_0 C \setminus \big[ (1-s)C + sA \big] \big) + F\big( \big[ (1-s) \del^A C + sA \big] \setminus C \big) $$
$$ + F\big(\del_0 C \setminus \big[ (1-s)C + sB \big] \big) + F\big( \big[ (1-s) \del^B C + sB \big] \setminus C \big)
$$ $$
= -2s \big( F(\pl_2 \tilde C) - F(\pl_2 C) \big) + o(|s|) = -a_2 s + o(|s|),
$$ $$
F\big( \big[ (1-s) \del^A C + sA \big] \cap \VVV_{\als} \setminus \big[ (1-s)C + sB \big] \big) +
F\big( \big[ (1-s) \del^B C + sB \big] \cap \VVV_{\als} \setminus \big[ (1-s)C + sA \big] \big)
$$ $$
= -a_3 s + o(|s|).
$$
\end{propo}
\begin{propo}\label{propo5.1}
$$
F\big( \pl C \setminus \VVV_{\als} \big) = 2b_4 s + o(|s|).
$$
\end{propo}
Summing up the quantities obtained in Propositions \ref{propo5.0}, \ref{propo5}, and \ref{propo5.1}, one comes to the statement of Lemma \ref{ls<0}.
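Indeed, summing the asymptotics,
$$
F(u^{(s)}) = \Big( a_0 + b_2 - \frac{a_2}{2} \Big) + \frac{a_1 + b_3}{2} - (a_1 + b_3)\, s + a_2 s + a_3 s + 2 b_4 s + o(|s|)
= c_0 + \frac{c_1}{2} + (a_3 - c_1)\, s + o(|s|),
$$
since $c_0 = a_0 + b_2 + b_4$ and $c_1 = a_1 - a_2 + b_3 - 2b_4$.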
\section{Proofs of technical lemmas}\label{sec technical}
\subsection{Proof of Lemma \ref{lt1}}
Assume the contrary, that is, a point $r \in \UUU$ is extreme. (It follows of course that $n_r \in \EEE$.) Choose $\ve > 0$ sufficiently small, so that the $\ve$-neighborhood of $r$ (let it be denoted by $B_\ve(r)$) contains only regular points of $\pl C$. Since $r$ is extreme, it is not contained in {\conv}$(C \setminus B_\ve(r))$. Draw a plane $\Pi$ strictly separating {\conv}$(C \setminus B_\ve(r))$ and $r$.
Let $C_+$ be the part of $C$ cut off by $\Pi$ and containing $r$. That is, $C_+$ is the intersection of $C$ with the open half-space bounded by $\Pi$ and containing $r$. Let $r' \in \pl C$ be a point in $C_+$ such that the tangent plane at it is parallel to $\Pi$. Take an open set $\OOO \subset S^2$ containing $n_{r'}$ and such that all points $r$ with the outward normal $n_r$ in $\OOO$ lie in $C_+$. Take $\xi$ in the set $\OOO \setminus \EEE$, which by the hypothesis of the lemma is not empty.
The set $\{ r : n_r = \xi \}$ is convex, possibly degenerating to a line segment or a point. Let $\check r$ be an extreme point of it (if the set is a segment or a point then $\check r$ coincides with an endpoint of the segment or with the point, respectively). Then $\check{r}$ is an extreme point of $C$, and $n_{\check r} \not\in \EEE$, in contradiction with the hypothesis of Lemma \ref{lt1}.
\subsection{Proof of Lemma \ref{lt2}}
For $0 \le s \le 1$ we have $C \subset C_s \subset \tilde C$, hence $C_s$ is the epigraph of a convex function $u^{(s)}$ defined on $\Om$ and satisfying $\tilde u \le u^{(s)} \le u$. Since the function $\tilde u$ satisfies statements (a) and (b), so does the function $u^{(s)}$.
Let now $s < 0$. The point $A' = A + (0, 0, \ve)$ is contained in $C$. The set $(1-s) C + s A'$ is the homothety of $C$ with the center $A' \in C$ and ratio $1-s > 1$, and therefore, contains $C$. Hence the set $(1-s)C + sA = (1-s)C + sA' + (0, 0, -s\ve)$ contains $C + (0, 0, -s\ve)$. The same argument holds for $(1-s)C + sB$. It follows that $C + (0, 0, -s\ve) \subset C_s \subset C$, and therefore, $C_s$ is the epigraph of a convex function $u^{(s)}$ defined on $\Om$ and satisfying $u \le u^{(s)} \le u - s\ve$. For $-1 < s < 0$, $u \le u^{(s)} < u + \ve$, and so, statement (b) is true.
For any $x \in \Om \setminus \UUU$, the line segment joining $(x, u(x))$ and $A$ is the disjoint union of two nondegenerate segments. One of them, let it be $\lam_1(x)$, is a closed segment with one endpoint at $(x, u(x))$, and is contained in $C$; let its length be $|\lam_1(x)| = l_1(x)$. The other one is a semi-open segment with one endpoint at $A$; let its length be $l_2(x)$. The function $l_1(x)$ is continuous, positive, and defined on the compact set $\Om \setminus \UUU$; hence it is bounded below by a positive constant $c_1$. The function $l_2(x)$ is bounded above by a constant $c_2$.
Let $-c_1/c_2 < s < 0$; then for all $x \in \Om \setminus \UUU$, the point $(x, u(x))$ is contained in the segment $(1-s) \lam_1(x) + s A$, and therefore, in $(1-s)C + sA$. A similar argument is valid also for $(1-s)C + sB$; hence one can choose $s_0 < 0$ so as for $s_0 < s < 0$ and for all $x \in \Om \setminus \UUU$, the point $(x, u(x))$ is contained in $[(1-s)C + sA] \cap [(1-s)C + sB]$. Since by definition $(x, u(x)) \in \pl C$, one concludes that this point belongs to $\pl C_s$, and therefore, $u(x) = u^{(s)}(x)$. We have proved that for $s_0 < s < 0$, $u\rfloor_{\Om\setminus\UUU} = u^{(s)}\rfloor_{\Om\setminus\UUU}$, and so, statement (a) is true.
\subsection{Lemma \ref{l convex}}
Later on we will need the following lemma.
\begin{lemma}\label{l convex}
Let $C_1$ and $C_2$ be two convex sets with nonempty interior in $\RRR^3$, and let $B_1 \subset \pl C_1$ and $B_2 \subset \pl C_2$ be two Borel sets such that the sets of outward normals to $C_1$ at points of $B_1$ and to $C_2$ at points of $B_2$ are disjoint. Then the projection of $B_1 \cap B_2$ on the $x$-plane has zero Lebesgue measure.
\end{lemma}
\begin{proof}
We have
$B_1 \cap B_2 \subset \pl C_1 \cap \pl C_2 \subset \pl(C_1 \cap C_2).$ Each point $r \in B_1 \cap B_2$ is a singular point of $\pl(C_1 \cap C_2).$ Indeed, if $r$ is a singular point of $\pl C_1$ or $\pl C_2$, this is true. If, otherwise, it is a regular point of both $\pl C_1$ and $\pl C_2$, and $n_1$ and $n_2$ are the corresponding outward normals, then according to the hypothesis of the lemma $n_1 \ne n_2$, and therefore, $r$ is singular. The set of singular points of $\pl(C_1 \cap C_2)$ has zero 2-dimensional Lebesgue measure, and therefore, its projection on the $x$-plane has zero measure.
\end{proof}
\subsection{Proof of Proposition \ref{propo3}}
For $s < 0$,\, $\pl C_s$ is the disjoint union of three sets,
$$
\pl C \cap \big[ (1-s)C + sA \big] \cap \big[ (1-s)C + sB \big],
$$ $$
C \cap \pl\big[ (1-s)C + sA \big] \cap \big[ (1-s)C + sB \big],
$$ $$
C \cap \big[ (1-s)C + sA \big] \cap \pl\big[ (1-s)C + sB \big],
$$
which contain the sets in $\langle 1 \rangle$,\, $\langle 2 \rangle$,\, $\langle 3 \rangle$. Therefore the union of the sets $G_1^s,\, G_2^s,\, G_3^s$ is contained in $\pl C_s$. It remains to prove the reverse inclusion.
The boundary $\pl C$ is the disjoint union $\pl C = \del_0 C \sqcup \del^A C \sqcup \del^B C$.
Additionally, $\del^A C \cap \big[ (1-s)C + sA \big] = \emptyset$
(otherwise the open segment joining $A$ and a point of $\del^A C$ would contain a point of $C$, which contradicts statement (a) of Proposition \ref{pro}),
and similarly, $\del^B C \cap \big[ (1-s)C + sB \big] = \emptyset$.
It follows that
\beq\label{total0}
\pl C \cap \big[ (1-s)C + sA \big] \cap \big[ (1-s)C + sB \big] = \del_0 C \cap \big[ (1-s)C + sA \big] \cap \big[ (1-s)C + sB \big].
\eeq
We have $\pl\big[ (1-s)C + sA \big] = (1-s) \pl C + sA$. Let $x \in [(1-s) \del_0 C + sA] \cap C$; then we have $x = (1-s) x_1 + sA \in C$ for some $x_1 \in \del_0 C$.
It follows from statements (b) and (c) of Proposition \ref{pro} that both $x$ and $x_1$ lie in $\pl_2 C$,
and therefore, $x \in \del_0 C \cap \big[ (1-s)C + sA \big]$. Hence
\beq\label{star}
[(1-s) \del_0 C + sA] \cap C \subset \del_0 C \cap \big[ (1-s)C + sA \big].
\eeq
Similarly, replacing $A$ with $B$ in \eqref{star}, one gets
\beq\label{starB}
[(1-s) \del_0 C + sB] \cap C \subset \del_0 C \cap \big[ (1-s)C + sB \big].
\eeq
Let us now prove that
\beq\label{star2}
\big[ (1-s) \del ^B C + sA \big] \cap \big[ (1-s)C + sB \big] = \emptyset.
\eeq
Indeed, otherwise there exist two points $x_1 \in \del^B C$ and $x_2 \in C$ such that $(1-s) x_1 + sA = (1-s) x_2 + sB$. Let the outward normal to the plane tangent to $C$ at $x_1$ be denoted by $n_{x_1}$. The plane of support to $\tilde C$ with the outward normal $n_{x_1}$ contains $B$ and does not contain $A$, hence the vectors $n_{x_1}$ and $\overrightarrow{BA}$ form obtuse angle, that is, $\langle n_{x_1},\, A-B \rangle < 0$. Since $(1-s) (x_2 - x_1) = s(A-B)$ and $s<0$, we have $\langle n_{x_1},\, x_2 - x_1 \rangle > 0$. This means that $x_2$ is contained in the open subspace bounded by the tangent plane to $C$ at $x_1$ that does not contain points of $C$, that is, $x_2 \not\in C$, in contradiction with our assumption.
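The sign computation in the last step can be written out explicitly: since $x_2 - x_1 = \frac{s}{1-s}\,(A-B)$ with $s < 0$ and $1-s > 0$, one has
$$
\langle n_{x_1},\, x_2 - x_1 \rangle = \frac{s}{1-s}\, \langle n_{x_1},\, A-B \rangle > 0,
$$
being the product of two negative factors.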
From \eqref{star} and \eqref{star2} we obtain
\beq\label{total1}
\begin{split}
C \cap \big[ (1-s) \pl C + sA \big] \cap \big[ (1-s)C + sB \big] =
\Big( C \cap \big[ (1-s) \del_0 C + sA \big] \cap \big[ (1-s)C + sB \big] \Big) \cup \\
\Big( C \cap \big[ (1-s) \del^B C + sA \big] \cap \big[ (1-s)C + sB \big] \Big) \cup
\Big( C \cap \big[ (1-s) \del^A C + sA \big] \cap \big[ (1-s)C + sB \big] \Big)
\subset \\
\Big( \del_0 C \cap \big[ (1-s)C + sA \big] \cap \big[ (1-s)C + sB \big] \Big) \cup
\Big( C \cap \big[ (1-s) \del^A C + sA \big] \cap \big[ (1-s)C + sB \big] \Big).
\end{split}
\eeq
Further, we have $\pl\big[ (1-s)C + sB \big] = (1-s) \pl C + sB$. Let us prove that
\beq\label{star4}
\big[ (1-s)C + sA \big] \cap \big[ (1-s) \del ^A C + sB \big] \subset \big[ (1-s) \del^A C + sA \big] \cap \big[ (1-s)C + sB \big]
\eeq
Take a point $x \in \big[ (1-s)C + sA \big] \cap \big[ (1-s) \del ^A C + sB \big]$; there exist $x_1 \in \del^A C$ and $x_2 \in C$ such that $x = (1-s) x_1 + sB = (1-s) x_2 + sA$. The plane of support to $\tilde C$ with the outward normal $n_{x_1}$ contains $A$ (and may or may not contain $B$), hence $\langle n_{x_1},\, B-A \rangle \le 0$, and using that $(1-s) (x_2 - x_1) = s(B-A)$, we get $\langle n_{x_1},\, x_2 - x_1 \rangle \ge 0$. Since $x_2 \in C$, we conclude that $\langle n_{x_1},\, x_2 - x_1 \rangle = 0$. Hence the tangent planes to $C$ at $x_1$ and $x_2$ coincide, and the plane of support to $\tilde C$ with the same outward normal contains $I$. It follows that $x_1$ lies in $\pl_3 C$, and therefore, $x_2$ also lies in $\pl_3 C$, and $x \in \big[ (1-s) \pl_3 C + sA \big] \cap \big[ (1-s) \pl_3 C + sB \big] \subset \big[ (1-s) \del^A C + sA \big] \cap \big[ (1-s)C + sB \big]$. Formula \eqref{star4} is proved.
Thus, using \eqref{starB} and \eqref{star4}, one can write
\beq\label{total2}
\begin{split}
\big[ (1-s)C + sA \big] \cap \big[ (1-s) \pl C + sB \big] \cap C = \Big( \big[ (1-s)C + sA \big] \cap \big[ (1-s) \del_0 C + sB \big] \cap C \Big) \cup \\
\Big( \big[ (1-s)C + sA \big] \cap \big[ (1-s) \del^A C + sB \big] \cap C \Big) \cup
\Big( \big[ (1-s)C + sA \big] \cap \big[ (1-s) \del^B C + sB \big] \cap C \Big) \subset \\
\Big( \del_0 C \cap \big[ (1-s)C + sA \big] \cap \big[ (1-s)C + sB \big] \Big) \cup
\Big( \big[ (1-s)C + sA \big] \cap \big[ (1-s) \del^B C + sB \big] \cap C \Big).
\end{split}
\eeq
It follows from \eqref{total0}, \eqref{total1}, and \eqref{total2} that $\pl C_s$ is the union of three sets,
$$
\del_0 C \cap \big[ (1-s)C + sA \big] \cap \big[ (1-s)C + sB \big] = G_1^s,
$$ $$
C \cap \big[ (1-s) \del^AC + sA \big] \cap \big[ (1-s)C + sB \big] = G_2^s,
$$ $$
C \cap \big[ (1-s)C + sA \big] \cap \big[ (1-s) \del^B C + sB \big] = G_3^s.
$$
Of course, $F(u^{(s)}) = F(\pl C_s)$. In order to prove that $F(\pl C_s) = F(G_1^s) + F(G_2^s) + F(G_3^s)$, it suffices to show that the intersections $G_1^s \cap G_2^s$,\, $G_2^s \cap G_3^s$,\, $G_1^s \cap G_3^s$ have zero measure. We have
$$
G_1^s \cap G_2^s \subset \del_0 C \cap \big[ (1-s) \del^AC + sA \big],
$$ $$
G_2^s \cap G_3^s \subset \big[ (1-s) \del^AC + sA \big] \cap \big[ (1-s) \del^B C + sB \big],
$$ $$
G_1^s \cap G_3^s \subset \del_0 C \cap \big[ (1-s) \del^B C + sB \big].
$$
The sets of outward normals to the convex surfaces $\del_0 C$,\, $\big[ (1-s) \del^AC + sA \big]$, and $\big[ (1-s) \del^B C + sB \big]$ are disjoint, hence by Lemma \ref{l convex}, the projections of the pairwise intersections of these surfaces on the $x$-plane have zero measure. This finishes the proof of Proposition \ref{propo3}.
\subsection{Proof of Proposition \ref{propo5.0}}
Calculate the resistances of the sets
$$
\del_0 C \cap \VVV_{\als}, \quad [(1-s) \del^A C + sA] \cap \VVV_{\als}, \ \ \text{and} \ \ [(1-s) \del^B C + sA] \cap \VVV_{\als}.
$$
Note that the sets $\VVV_{\als}$ and $\pl_4 C$ are disjoint, therefore
$$
F(\del_0 C \cap \VVV_{\als}) = F(\pl_0 C \cap \VVV_{\als}) + F(\pl_2 C \cap \VVV_{\als}) = a_0 + b_2 - \frac{a_2}{2} + o(1) \quad \text{as} \ s \to 0^-,
$$
$$
F\big( [(1-s) \del^A C + sA] \cap \VVV_{\als} \big) + F\big( [(1-s) \del^B C + sA] \cap \VVV_{\als} \big)
$$
$$
= F\big( [(1-s) \pl_1^A C + sA] \cap \VVV_{\als} \big) + F\big( [(1-s) \pl_1^B C + sA] \cap \VVV_{\als} \big) + F\big( [(1-s) \pl_3 C + sA] \cap \VVV_{\als} \big)
$$
$$
= \frac{a_1}{2} (1-s)^2 + \frac{b_3}{2} (1-s)^2 + o(1) \quad \text{as} \ s \to 0^-.
$$
\subsection{Proof of Proposition \ref{propo5}}
Note that $\MMM \mapsto (1-s) \MMM + s A$ defines the homothety of the set $\MMM$ with the ratio $1-s$ and the center at $A$. Its inverse is the homothety with the ratio $\frac{1}{1-s}$ and the same center, $\MMM \mapsto \frac{1}{1-s}\, \MMM + \frac{-s}{1-s} A$. The dihedral angles $\VVV_c$ are invariant under both homotheties; therefore the image of the set $([ (1-s) \del^A C + sA ] \cap \VVV_c) \setminus C$ under the inverse homothety is $[\del^A C \cap \VVV_c] \setminus \big[ \frac{1}{1-s}\, C + \frac{-s}{1-s} A \big]$, and
$$
F \big( ([ (1-s) \del^A C + sA ] \cap \VVV_c) \setminus C \big) =
(1-s)^2 F \big( [\del^A C \cap \VVV_c] \setminus \big[ \frac{1}{1-s}\, C + \frac{-s}{1-s} A \big] \big).
$$
Take a point $P$ from $\del^A C \cap \VVV_c$ and draw the ray $AP$. The intersection of this ray with $C$ is a (non-degenerate) segment $[P, Q]$. The condition that $P$ is contained in $[\del^A C \cap \VVV_c] \setminus \big[ \frac{1}{1-s}\, C + \frac{-s}{1-s} A \big]$ is as follows,
\beq\label{condition}
\frac{|PQ|}{|AP|} < -s.
\eeq
Take a point $Q$ from $\del_0 C \cap \VVV_c$ and draw the ray $AQ$. If this ray intersects the interior of $C$ then its intersection with $C$ is a (non-degenerate) segment $[P, Q]$. The condition that $Q$ is contained in $[\del_0 C \cap \VVV_c] \setminus [(1-s)C + sA]$ coincides with \eqref{condition}.
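The ratio condition \eqref{condition} can be sanity-checked numerically in a one-dimensional reduction along the ray: the following Python sketch (illustrative only) places $A$ at the origin, takes the intersection of the ray with the body to be an interval $[p, q]$ (so $P = p$, $Q = q$), and verifies that $P \notin \frac{1}{1-s}\, C + \frac{-s}{1-s}\, A$ exactly when $|PQ|/|AP| < -s$.

```python
# 1-D sanity check of the membership condition |PQ|/|AP| < -s.
# Reduction (illustrative): A = 0, the ray is the positive axis, and the
# intersection of the ray with the convex body C is the interval [p, q],
# so P = p is the near endpoint and Q = q the far one.  The inverse
# homothety maps C to C/(1-s) + (-s/(1-s))*A = [p/(1-s), q/(1-s)].
import random

def outside_shrunk_body(p, q, s):
    """True if P = p is NOT in [p/(1-s), q/(1-s)] (case A = 0, s < 0)."""
    lo, hi = p / (1 - s), q / (1 - s)
    return not (lo <= p <= hi)

def ratio_condition(p, q, s):
    """The condition |PQ|/|AP| < -s from the text."""
    return (q - p) / p < -s

random.seed(0)
for _ in range(10_000):
    p = random.uniform(0.1, 5.0)        # distance |AP|
    q = p + random.uniform(1e-6, 5.0)   # far endpoint, |PQ| = q - p
    s = -random.uniform(1e-6, 0.99)
    assert outside_shrunk_body(p, q, s) == ratio_condition(p, q, s)
print("agreement on 10000 random cases")
```

The two predicates agree because $p > q/(1-s)$ is algebraically equivalent to $(q-p)/p < -s$ when $p > 0$ and $s < 0$.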
Consider the sections of $C$ by vertical planes through $A$. Let $\Pi_\vphi$ denote one such plane forming the angle $\vphi$ with the $x_1$-axis in the counterclockwise direction. The intersection of $\Pi_\vphi$ with the union of sets $\del_0 C \setminus [(1-s) \stackrel{\circ}{C} + sA]$ and $\del^A C \setminus \big[ \frac{1}{1-s}\, C + \frac{-s}{1-s} A \big]$ is an arc of the curve $\pl C \cap \Pi_\vphi$, whose endpoints $A_1^s = A_1^s(\vphi)$ and $A_2^s = A_2^s(\vphi)$ are collinear with the point $A$ and satisfy $|A_1^s A_2^s| = -s|AA_1^s|$; see Fig.~\ref{fig sec}. The point $A' = A'(\vphi)$ is the lower point of the intersection of the curve $\pl C \cap \Pi_\vphi$ with the support line to $C$ from $A$. The vertical projections of $A_i^s$, $A'$, and $A$ on the $x$-plane are denoted, respectively, by $\widehat{A}_i^s = \widehat{A}_i^s(\vphi)$,\, $\widehat{A}'$, and $\widehat{A}$,\, $i = 1,\, 2$. Note that according to the definition, $\widehat{A} = (x_1^0, x_2^0 - \del)$.
\begin{figure}[h]
\centering
\includegraphics[scale=0.22]{seco}
\caption{A section of $C$ by a plane through $A$.}
\label{fig sec}
\end{figure}
The intersection of $\Pi_\vphi$ with one of the faces of the dihedral angle $\VVV_{\als}$, $z - z^0 = \frac{w(x_1^- + s) - z^0}{(x_1^- + s) - x_1^0}\, (x_1 - x_1^0)$ or $z - z^0 = \frac{w(x_1^+ - s) - z^0}{(x_1^+ - s) - x_1^0}\, (x_1 - x_1^0)$, is shown dashed in Fig.~\ref{fig sec}. The measure of the set of angles $\vphi$ for which this line is situated between the lines $AA'$ and $AA_1^s$ is $O(s)$ as $s \to 0^-$.
The maximal difference between the slope at $A'$ and the slope at a point of the arc $A_1^s A_2^s$ is $o(1)$. Here and in what follows, it is understood that the estimates $o(1)$ and $O(s)$ are uniform in $\vphi$ as $s \to 0^-$. Correspondingly, the maximal difference between $\nabla u(\widehat{A}'(\vphi))$ and $\nabla u(x)$, where $x$ lies on $[\widehat{A},\, \widehat{A}_2^s(\vphi)]$, is $O(s)$ as $s \to 0^-$, and the values $\nabla u(\widehat{A}'(\vphi))$ and $\nabla u(x)$,\, $x \in [\widehat{A},\, \widehat{A}_2^s(\vphi)]$ are uniformly bounded.
Since $\widehat{A}'$ lies in $[\widehat{A}_1^s,\, \widehat{A}_2^s]$, we have $|\widehat{A}_1^s \widehat{A}_2^s| = -s |\widehat{A} \widehat{A}'| (1 + o(1))$. The value $F(\pl_2^A \tilde C) - F(\pl_2^A C)$ is the integral of $f(\nabla u(\widehat{A}'(\vphi)))$ over the union of segments $[\widehat{A}(\vphi),\, \widehat{A}'(\vphi)]$ for those $\vphi$ such that $A'(\vphi)$ lies above the rays of intersection of $\Pi_\vphi$ with the faces of $\VVV_{\als}$.
The area of the union of segments $[\widehat{A}_1^s(\vphi),\, \widehat{A}_2^s(\vphi)]$ is $-2s(1 + o(1))$ times the area of the union of segments $[\widehat{A}(\vphi),\, \widehat{A}'(\vphi)]$. Besides, the area of the union of segments $[\widehat{A}_1^s(\vphi),\, \widehat{A}_2^s(\vphi)]$, for $\vphi$ such that the ray of intersection of $\Pi_\vphi$ with a face of $\VVV_{\als}$ lies between the rays $\widehat{A} \widehat{A}'$ and $\widehat{A} \widehat{A}_2^s$, is $o(s)$. The integrand in the functional equals $f(\nabla u(\widehat{A}'(\vphi))) + o(1)$. It follows that
$$
F\big( [\del_0 C \cap \VVV_{\als}] \setminus [(1-s) \stackrel{\circ}{C} + sA] \big) +
(1-s)^2 F \big( [\del^A C \cap \VVV_{\als}] \setminus \big[ \frac{1}{1-s}\, C + \frac{-s}{1-s} A \big] \big)
$$ $$
= -2s \big( F(\pl_2^A \tilde C) - F(\pl_2^A C) \big) + o(s) \quad \text{as} \ \ s \to 0^-.
$$
Similarly, replacing $A$ with $B$, we have
$$
F\big( [\del_0 C \cap \VVV_{\als}] \setminus [(1-s) \stackrel{\circ}{C} + sB] \big) +
(1-s)^2 F \big( [\del^B C \cap \VVV_{\als}] \setminus \big[ \frac{1}{1-s}\, C + \frac{-s}{1-s} B \big] \big)
$$ $$
= -2s \big( F(\pl_2^B \tilde C) - F(\pl_2^B C) \big) + o(s) \quad \text{as} \ \ s \to 0^-.
$$
The sum of these values equals $-2s \big( F(\pl_2 \tilde C) - F(\pl_2 C) \big) + o(s) = -a_2 s + o(s)$.
\vspace{2mm}
Now consider the sets
$$
([(1-s) \del^A C + sA] \cap \VVV_{\als}) \setminus [ (1-s)C + sB ] \quad \text{and} \quad ([(1-s) \del^B C + sB] \cap \VVV_{\als}) \setminus [(1-s) C + sA].
$$
The former one can be represented as
\beq\label{1st set}
([(1-s) \del^A C + sA] \cap \VVV_{\als}) \setminus [ (1-s) \stackrel{\circ}{C} + sA + s(B-A) ],
\eeq
and the translation of the latter one by the vector $s(A-B)$ is
\beq\label{2nd set}
([(1-s) \del^B C + sA] \cap \VVV_{\als}) \setminus [(1-s) C + sA - s(B-A)].
\eeq
These sets do not intersect.
Let $P \in [(1-s) \del^A C + sA] \cap \VVV_{\als}$. The intersection of the ray $P + \lam(B - A),\, \lam \ge 0$ with the body $(1-s) C + sA$ is a closed segment $PQ$. The point $P$ lies in the former set \eqref{1st set} if and only if $|PQ| \le -s |I|$.
Let now $Q \in [(1-s) \del^B C + sA] \cap \VVV_{\als}$. The intersection of the ray $Q - \lam(B - A),\, \lam \ge 0$ with $(1-s) C + sA$ is a closed segment $PQ$. The point $Q$ lies in the latter set \eqref{2nd set} if and only if $|PQ| < -s |I|$.
The union of the sets \eqref{1st set} and \eqref{2nd set} is contained in the graph of a convex function. Its gradient at a point $(x_1,x_2)$ is $w'(x_1) + o(1),\, s \to 0^-$. The projection of this union of sets on the $x$-plane is a set contained in the strip $x_1^- + s < x_1 < x_1^+ - s$, and the intersection of this set with each line $x_1 = c \in (x_1^- + s,\, x_1^+ - s)$ is a segment with the length $-s |I| = -2\del s$. Therefore, the resistance of the union of the sets \eqref{1st set} and \eqref{2nd set} is
$$
-2\del s \int_{x_1^- + s}^{x_1^+ - s} f(w'(x) + o(1), 0)\, dx = -a_3 s + o(s) \quad \text{as} \ s \to 0^-.
$$
\subsection{Proof of Proposition \ref{propo5.1}}
Consider the set $\pl C \setminus \VVV_{\als}$.
Introduce the notation
$$
S(s) = \left\{ x: \ z - z^0 < \max \Big\{ \frac{w(x_1^- + s) - z^0}{(x_1^- + s) - x_1^0},\ \frac{w(x_1^+ - s) - z^0}{(x_1^+ - s) - x_1^0} \Big\}\, (x_1 - x_1^0) \right\}.
$$
This set is the disjoint union of two convex sets, $S(s) = S_+(s) \sqcup S_-(s)$, where each set $S_\pm(s)$ contains the segment $S_\pm = (x_1^\pm, x_2^\pm + [-a_\pm,\, a_\pm])$. Each set $S_\pm(s)$ is contained in the $\al_M(s)$-neighborhood of $S_\pm$ and contains the $\al_m(s)$-neighborhood of $S_\pm$, where $\al_m(s) \le \al_M(s)$,\, $\al_m(s)/|s| \to \infty$ and $\al_M(s) \to 0$ as $s \to 0^-$. The orthogonal projection of $\pl C_s \setminus \VVV_{\als}$ on the $x$-plane, $\text{pr}(\pl C_s \setminus \VVV_{\als})$, is the disjoint union of two sets that are the intersections
$$
S_\pm(s) \cap \left[ (1-s) S_\pm(s) + s A \right] \cap \left[ (1-s) S_\pm(s) + s B \right].
$$
One easily sees that the area of $\text{pr}(\pl C_s \setminus \VVV_{\als})$ equals the area of $S(s)$ minus the value
$$
|s| \left[ 2a_+ (x_1^+ - x_1^0) + 2a_- (x_1^0 - x_1^-) \right] + o(s) \quad \text{as } \, s \to 0^-.
$$
The set $\pl C_s \setminus \VVV_{\als}$ is the graph of the function $u^{(s)}(x)$, which is the maximum of three functions: $u(x)$ and the two functions $s z^0 + (1-s) u\big( \frac{1}{1-s}({x - s x_0 \mp s (0,\del)}) \big)$ (one for each choice of sign), restricted to the set $\text{pr}(\pl C_s \setminus \VVV_{\als})$. Thus, on this set either $\nabla u^{(s)}(x) = \nabla u(x)$, or $\nabla u^{(s)}(x) = \nabla u\big( \frac{1}{1-s}({x-s x_0 \mp s (0,\del)}) \big)$. It follows that $\nabla u^{(s)}(x) = \nabla u(x + O(s)) = \nabla u(x) + O(s)$, where $O(s)$ is uniform over all $x \in \text{pr}(\pl C_s \setminus \VVV_{\als})$. Hence
$$
F(\pl C_s \setminus \VVV_{\als}) = F(\pl C \setminus \VVV_{\als}) + s \left[2a_+ (x_1^+ - x_1^0) f(\xi_+, 0) + 2a_- (x_1^0 - x_1^-) f(\xi_-, 0) \right] + o(s)
$$ $$
= \frac{a_4}{2} s^2 + b_4 (1 - (1-s)^2) + o(s) = 2b_4 s + o(s), \quad s \to 0^-.
$$
Summing up all the obtained values, one obtains
$$
F(u^{(s)}) = \big( a_0 + b_2 - \frac{a_2}{2} \big) + \big( \frac{a_1}{2} - a_1 s + \frac{b_3}{2} - b_3 s \big) - (-a_2 s) - (-a_3 s) + o(s)
$$
$$ = c_0 + \frac{c_1}{2} (1 - 2s) + a_3 s + o(s) \quad \text{as} \ s \to 0^-.$$
\section*{Acknowledgements}
This work is supported by CIDMA through FCT (Funda\c{c}\~ao para a Ci\^encia e a Tecnologia), reference UIDB/04106/2020.
\section{Introduction}\label{Sec:Introduction}
The acquisition of downlink channel state information (CSI) in frequency division duplex (FDD) massive multi-input multi-output (MIMO) systems has been a long-standing problem for the mobile communication industry \cite{Araujo2016,Elijah2016,Fan2017}. Without reciprocity between uplink and downlink, the downlink CSI has to be obtained through downlink training and feedback, causing a large amount of overhead. Recently, studies have suggested utilizing the spatial reciprocity \cite{Hugl2002} to reduce the cost of downlink CSI acquisition. Uplink and downlink channels have similar spatial characteristics given that they share the same space and scatterers. Therefore, part of the downlink CSI can be derived from the uplink CSI.
\subsection{Related work}
Many related works have been developed to estimate or reconstruct the FDD massive MIMO downlink channels under different channel models. For clustering channels, where a continuous spatial region has distinct power, the correlation matrix is introduced to describe the power distribution of the channel in the spatial domain \cite{Xie2018,Haghighatshoar2018,Khalilsarai2018}. The downlink correlation matrix can be derived from the uplink one, and only the downlink instantaneous CSI needs to be estimated in the downlink. For limited scattering channels, the angle and delay of each propagation path are common to uplink and downlink, and only the downlink gains need to be estimated in the downlink \cite{Zhang2018,Han2019TWC,Han2019TCOM,Han2019JSTSP}. These spatial reciprocity-based methods effectively ease the burden of downlink training and feedback and have great potential for future use. However, spatial reciprocity does not imply that the downlink CSI can be completely derived from the uplink; some overhead of downlink training and feedback is still required.
In recent years, the rapid development of deep learning techniques stimulates their wide applications to various areas, including localization \cite{Ihsan2018} and FDD downlink channel estimation or prediction \cite{Alrabeiah2019,Arnold2019,Safari2018,Wang2019UL,Yang2019,Liu2019,Dong2018}. Most of these methods are based on an assumption that a mapping function exists between the uplink and the downlink channels, which can be conveniently learned by deep networks instead of traditional algorithms. With the mapping function, the downlink channel matrix can be directly predicted from the uplink channel matrix \cite{Alrabeiah2019,Arnold2019,Safari2018,Wang2019UL,Yang2019}.
The channel matrix of the massive MIMO multicarrier system can also be illustrated as an image. Treating the channel as an image is an interesting strategy \cite{Wen2018,Wang2019Deep}, which enables the application of advanced deep learning-based image processing methods. For the downlink channel prediction problem, the uplink and downlink channel images are stacked together into a large image \cite{Safari2018}. With the uplink channel, the base station (BS) draws one half of the image. The other half of the image, which is currently blank, is the downlink channel to be predicted. The downlink channel prediction method works as a painter to complete the other half of the image through image processing methods, such as generative adversarial networks. The methods in \cite{Alrabeiah2019,Arnold2019,Safari2018,Wang2019UL,Yang2019} do not require feedback, thereby attracting the interest of the industry. However, the assumption of a mapping function between the uplink and the downlink channels is invalid in complicated multipath propagation scenarios \cite{Han2019TWC}.
To address this problem, the downlink channel is estimated with minor feedback in \cite{Liu2019}. The downlink channel matrix obtained at the user side is initially encoded, sent to the BS, and then decoded with the aid of the uplink channel matrix. Besides, the downlink subchannel on subarray $\mathcal{A}$ (denoted by ${\bf H}_\mathcal{A}$) is correlated with that on subarray $\mathcal{B}$ (denoted by ${\bf H}_\mathcal{B}$) when spatial stationarity exists. If ${\bf H}_\mathcal{B}$ is obtained, then ${\bf H}_\mathcal{A}$ can be learned from ${\bf H}_\mathcal{B}$, with the cost of downlink training and feedback overhead to acquire ${\bf H}_\mathcal{B}$ \cite{Dong2018}. These methods are applicable in practice. However, they ignore the channel model and directly predict the channel matrix. Under multipath propagation conditions, the accuracy of estimation suffers if the compression rate is low or the scale of subarray $\mathcal{B}$ is much smaller than that of subarray $\mathcal{A}$. On the contrary, scaling up the compression rate or the size of subarray $\mathcal{B}$ further increases the overhead. This contradiction limits the performance of these data-driven methods. Therefore, referring to the channel model is necessary to increase the efficiency of deep learning-based channel estimation.
In massive MIMO systems, where the scale of the antenna array is extremely large, the signal reflected by a scatterer does not necessarily arrive at the entire array, and the channel begins to show spatial non-stationarity \cite{Carvalho2019,Ali2019,Amiri2018}. Non-stationarity is a distinct feature of future massive MIMO systems, where the array may be widely spread on the wall of a building. The non-stationary channel is more complicated than the stationary one because the visibility region of each scatterer, that is, the part of the array that can receive signals from the scatterer, should also be considered in the channel model. Thus, estimating the downlink channel of an FDD non-stationary massive MIMO system is challenging. The methods in \cite{Xie2018,Haghighatshoar2018,Khalilsarai2018,Zhang2018,Han2019TWC,Han2019TCOM,Alrabeiah2019,Arnold2019,Safari2018,Wang2019UL,Yang2019,Liu2019,Dong2018} are designed for FDD stationary massive MIMO systems. The study of the estimation of FDD non-stationary massive MIMO downlink channels is limited.
\subsection{Contribution of this paper}
We focus on the downlink channel reconstruction of FDD non-stationary massive MIMO systems. In accordance with the multipath channel model, the downlink channel can be reconstructed by the downlink gains, angles, delays, and visibility regions of the propagation paths. We acquire the frequency-independent parameters, including the angles, delays, and visibility regions, from the uplink and then estimate the downlink gains from the downlink given the spatial reciprocity.
If we apply iteration-based algorithms to estimate the frequency-independent parameters, then the complexity of the algorithm increases explosively. To tackle this problem, we propose a model-driven deep learning-based downlink channel reconstruction scheme, which has the following advantages.
\subsubsection{Power of using You Only Look Once (YOLO)}
YOLO, a fast object detection neural network that detects all the objects by looking at the image only once, can effectively tackle the problem of explosive complexity. We introduce YOLO to detect each path with greatly reduced processing time compared with iteration-based algorithms \cite{Han2019TWC}. With the bounding boxes designed in this study, the frequency-independent parameters of each path can be conveniently obtained.
\subsubsection{Efficiency of model-driven deep learning}
We do not follow the data-driven methods to learn the downlink channel matrix; instead, we learn the parameters of the paths in the channel. Driven by the channel model, the number of coefficients to be estimated is much smaller than in the data-driven methods. Accordingly, the downlink training and feedback overhead is greatly reduced, and the accuracy of the reconstruction is guaranteed.
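A back-of-envelope comparison illustrates the point. The system sizes below are illustrative stand-ins, not values from this paper: a data-driven method must describe the full $M \times N$ complex channel matrix, whereas the model-driven scheme needs only a handful of real parameters per path.

```python
# Coefficient count: full channel matrix vs. per-path model parameters.
# M, N, L are made-up illustrative sizes, not values from the paper.
M, N, L = 256, 512, 10          # antennas, subcarriers, paths (assumed)

full_matrix_coeffs = 2 * M * N  # real + imaginary part of every entry
# Per path: complex gain (2 reals), angle (1), delay (1), and the
# visibility region described by its first and last subarray (2).
model_coeffs = L * (2 + 1 + 1 + 2)

print(full_matrix_coeffs, model_coeffs)   # 262144 vs 60
print(f"compression ~ {full_matrix_coeffs / model_coeffs:.0f}x")
```

Even with generous per-path bookkeeping, the parameter count is several orders of magnitude below the full-matrix description, which is what allows the training and feedback overhead to shrink accordingly.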
\subsubsection{Ability to identify visibility regions}
The visibility region of each scatterer consists of one or several subarrays that receive the signal reflected by the scatterer. Two algorithms are proposed to identify the visibility regions in different approaches. Either approach achieves a success ratio of more than 98\%.
\subsubsection{Refinement of estimates}
A low-complexity refinement module is introduced to refine the estimates of angles and delays. After refinement, the normalized mean square error (NMSE) of uplink channel reconstruction is reduced from $-8$ dB to $-28$ dB at signal-to-noise ratio (SNR) $=0$ dB.
\subsubsection{Applicability to stationary cases}
The proposed scheme also works in the reduced FDD stationary massive MIMO systems, on which most existing works focus, and the NMSE performance is close to that of the algorithm-based reconstruction. On the basis of the stationarity of each subarray, an alternative scheme for non-stationary systems is further formulated and evaluated. Nevertheless, the proposed scheme is proven to be more efficient than the alternative one.
In the following section, we initially introduce the system model. The rationale of deep learning-based parameter estimation and the working steps of the proposed scheme are provided in Sections \ref{Sec:YOLO} and \ref{Sec:scheme}, respectively. Section \ref{Sec:simulations} evaluates the scheme and Section \ref{Sec:conclusion} concludes the paper.
\emph{Notations}---We denote scalars by letters in normal fonts and use uppercase and lowercase boldface letters to represent matrices and vectors, respectively. The superscripts $(\cdot)^*$, $(\cdot)^{T}$, and $(\cdot)^{H}$ indicate conjugate, transpose, and conjugate transpose, respectively. $\mathbb{E}\{\cdot\}$ denotes the expectation with respect to the random variables inside the brackets. $\odot$ and $\otimes$ denote the Hadamard and Kronecker products, respectively. We also denote the absolute value and modulus operations by $\left| \cdot \right|$ and $\left\| \cdot \right\|$ and use $\left\lfloor \cdot \right\rfloor$ and $\left\lceil \cdot \right\rceil$ to round a decimal number to its nearest lower and higher integers, respectively. $[{\bf A}]_{i,:}$, $[{\bf A}]_{:,j}$, and $[{\bf A}]_{i,j}$ represent the $i$th row, the $j$th column, and the $(i,j)$th entry of matrix $\bf A$.
\section{System Model}\label{Sec:SystemModel}
In a cell of the FDD massive MIMO system, the BS is located at the cell center and equipped with an $M$-element uniform linear array (ULA), where $M$ is large. The distance between two adjacent ULA elements is $d$. Single-antenna users are randomly distributed in the cell. The reconstruction of each user channel is conducted independently; therefore, we focus on a single user.
The system works in the FDD duplexing mode. The uplink and downlink carrier frequencies are $f^{\rm ul}$ and $f^{\rm dl}$, respectively. The uplink and downlink carrier wavelengths are approximately equal and unified as $\lambda$ given $|f^{\rm ul}-f^{\rm dl}|\ll f^{\rm ul}$, $f^{\rm dl}$. Orthogonal frequency division multiplexing (OFDM) is applied. Each band has $N$ subcarriers with spacing $\Delta f$ between two adjacent subcarriers.
\subsection{Non-stationarity}
\begin{figure}
\centering
\includegraphics[scale=0.55]{NonStationary.pdf}
\caption{Spatial non-stationarity. Path 1 arrives at subarrays 1 and 2, whereas path 2 arrives at subarray $S$.} \label{Fig:NonStationary}
\end{figure}
The channel between the BS and the user comprises $L$ paths, corresponding to $L$ scatterers. For the line-of-sight path, the scatterer is the user antenna itself. The ULA at BS experiences spatial non-stationarity due to the large aperture of the array. Signals reflected by a scatterer may arrive at the entire ULA or a part of the ULA, as shown in Fig.~\ref{Fig:NonStationary}.
The ULA is uniformly segmented into $S$ subarrays, each with $M/S$ elements.
The set of adjacent subarrays that can see scatterer $l$ is defined as the visibility region of scatterer $l$, denoted as follows:
\begin{equation}\label{Eq:VRscatterer}
\Phi_l = \{ s_{l,{\rm start}},s_{l,{\rm start}}+1,\ldots,s_{l,{\rm end}}-1,s_{l,{\rm end}}\},
\end{equation}
where $s_{l,{\rm start}}$ and $s_{l,{\rm end}}$ are the first and last subarrays that can receive signals reflected from scatterer $l$, respectively, satisfying $1\le s_{l,{\rm start}}\le s_{l,{\rm end}} \le S$.
Similarly, the visibility region of subarray $s$ includes the scatterers that can see subarray $s$, denoted as follows:
\begin{equation}\label{Eq:VRsubarray}
\Psi_s = \{ l_{s,1},l_{s,2},\ldots,l_{s,L_s} \},
\end{equation}
where $1\le l_{s,i} \le L$ holds for $i=1,\ldots,L_s$ and $L_s$ is the number of scatterers that can reflect signals to subarray $s$, satisfying $0\le L_s\le L$.
The example of Fig.~\ref{Fig:NonStationary} is considered to illustrate the visibility regions. For the scatterers, $\Phi_1=\{1,2\}$ and $\Phi_2=\{S\}$, and for the subarrays, $\Psi_1=\Psi_2=\{1\}$ and $\Psi_S=\{2\}$.
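The relation between the two kinds of visibility regions can be sketched in a few lines of Python; the snippet rebuilds the example of Fig.~\ref{Fig:NonStationary}, with an illustrative value of $S$ (not from the paper).

```python
# Visibility region of scatterer l: a contiguous set of subarrays
# {s_start, ..., s_end}.  Visibility region of subarray s: the set of
# scatterers that can see it, derived from the scatterer-side regions.
S = 8  # number of subarrays (illustrative choice)

def phi(s_start, s_end):
    """Visibility region of a scatterer as a set of adjacent subarrays."""
    return set(range(s_start, s_end + 1))

# Example of the figure: path 1 is seen by subarrays 1 and 2,
# path 2 only by the last subarray S.
Phi = {1: phi(1, 2), 2: phi(S, S)}

# Psi_s collects every scatterer l whose region Phi_l contains s.
Psi = {s: {l for l, region in Phi.items() if s in region}
       for s in range(1, S + 1)}

assert Psi[1] == {1} and Psi[2] == {1} and Psi[S] == {2}
assert all(Psi[s] == set() for s in range(3, S))
print(Psi)
```

The two descriptions carry the same information; either can be computed from the other, which is convenient when an identification algorithm outputs only one of them.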
\subsection{Spatial reciprocity}
Although the uplink and downlink channels are in different frequency bands, they share the space and the scatterers. On the basis of this spatial reciprocity, the delay and angle of the $l$th path, as well as the visibility region of the $l$th scatterer, are frequency-independent and identical in uplink and downlink. We denote $\tau_l$ and $\theta_l$ as the delay and angle of the $l$th path, respectively, satisfying\footnote{In practical systems, $\tau_l$ should be not greater than the cyclic-prefix length. Here, we relax this restriction by assuming that the cyclic-prefix length is equal to the symbol length.} $0\le\tau_l\le {1}/{\Delta f}$ and $0\le\theta_l\le 2\pi$. The frequency-independent parameters are $\tau_l$, $\theta_l$, and $\Phi_l$, where $l=1,\ldots,L$.
When reflection or scattering occurs, the phase shift amount differs in the uplink and downlink due to different carrier frequencies. Consequently, the complex gains are frequency-dependent and different in uplink and downlink. We denote $g^{\rm ul}_l$ and $g^{\rm dl}_l$ as the uplink and downlink complex gains of the $l$th path, respectively, which are different from each other.
\subsection{Channel model}
In the baseband, the frequency of the first subcarrier of the downlink OFDM module is regarded as 0 Hz, and that of the uplink OFDM module is $f^{\rm ul}-f^{\rm dl}$. The non-stationary downlink channel between the BS and the user across all antennas and subcarriers is modeled as
\begin{equation}\label{Eq:DLchannel}
{\bf H}^{\rm dl} = \sum_{l=1}^{L} g^{\rm dl}_l \left({\bf a}(\Theta_l) \odot {\bf p}(\Phi_l)\right) {\bf q}^T(\Gamma_l),
\end{equation}
where ${\bf H}^{\rm dl}\in\mathbb{C}^{M \times N}$ is in the antenna-subcarrier domain,
\begin{equation}\label{Eq:GammaTheta}
\Theta_l=\frac{d}{\lambda}\sin\theta_l, \quad \Gamma_l=\Delta f \tau_l,
\end{equation}
simplify the expressions and are frequency-independent, satisfying $0\le\Theta_l\le 1$ and $0\le\Gamma_l\le 1$,
\begin{equation}\label{Eq:avec}
{\bf a}(\Theta)=\left[1, e^{j2\pi\Theta}, \ldots, e^{j2\pi(M-1)\Theta}\right]^T
\end{equation}
is the steering vector of the ULA, and ${\bf p}(\Phi) \in \mathbb{Z}^{M\times 1}$ selects the ULA elements that belong to the subarrays in $\Phi$, whose $m$th entry is
\begin{equation}\label{Eq:pvec}
\left[{\bf p}(\Phi)\right]_m =
\begin{cases}
1, & \text{if $\lceil \frac{mS}{M}\rceil \in \Phi$,} \\
0, & \text{else,}
\end{cases}
\end{equation}
and
\begin{equation}\label{Eq:qvec}
{\bf q}(\Gamma)=\left[1, e^{j2\pi\Gamma}, \ldots, e^{j2\pi(N-1)\Gamma}\right]^T
\end{equation}
is the phase shift vector across the OFDM subcarriers.
Given the spatial reciprocity, the uplink baseband channel is expressed as
\begin{equation}\label{Eq:ULchanneltmp}
{\bf H}^{\rm ul} = \sum_{l=1}^{L} g^{\rm ul}_l e^{j2\pi\left(f^{\rm ul}-f^{\rm dl}\right)\tau_l} \left({\bf a}(\Theta_l) \odot {\bf p}(\Phi_l)\right) {\bf q}^T(\Gamma_l),
\end{equation}
where ${\bf H}^{\rm ul}\in\mathbb{C}^{M \times N}$ is in the antenna subcarrier domain. We further define
\begin{equation}\label{Eq:ULegain}
\alpha_l = g^{\rm ul}_l e^{j2\pi\left(f^{\rm ul}-f^{\rm dl}\right)\tau_l}
\end{equation}
as the effective uplink gain of the $l$th path and simplify the uplink channel model \eqref{Eq:ULchanneltmp} as
\begin{equation}\label{Eq:ULchannel}
{\bf H}^{\rm ul} = \sum_{l=1}^{L} \alpha_l \left({\bf a}(\Theta_l) \odot {\bf p}(\Phi_l)\right) {\bf q}^T(\Gamma_l).
\end{equation}
\section{Acquire model parameters through learning}\label{Sec:YOLO}
We focus on the reconstruction of the downlink channel ${\bf H}^{\rm dl}$, which is a fundamental requirement to harvest the spatial multiplexing gain of FDD massive MIMO downlink. Given the channel model \eqref{Eq:DLchannel}, we can reconstruct the downlink channel with the model parameters, i.e., $\Theta_l$, $\Gamma_l$, $\Phi_l$, and $g^{\rm dl}_l$ of each path. Thereafter, the acquisition of these model parameters becomes the primary task of downlink channel reconstruction. Notably, the number of paths (i.e., $L$) is also unknown.
\begin{figure*}
\centering
\includegraphics[scale=1]{NOMPprocess.pdf}
\caption{Residues of pilots after each iteration of the NOMP algorithm, where the horizontal and vertical axes represent delay and angle, respectively. Only one path is detected at each iteration. If three paths exist, then the NOMP algorithm requires three iterations to find all the paths.}\label{Fig:NOMPprocess}
\end{figure*}
On the basis of the spatial reciprocity, the model parameters are divided into two categories, that is, the frequency-independent parameters (i.e., $\Theta_l$, $\Gamma_l$, and $\Phi_l$) and the frequency-dependent parameters (i.e., $g^{\rm dl}_l$). We estimate the frequency-independent parameters in the uplink and acquire the frequency-dependent parameters through downlink training and feedback \cite{Han2019TWC}. This method greatly relaxes the overhead requirement on the downlink training and reduces the feedback amount from $MN$ to $L$ complex numbers compared with traditional linear channel estimation methods, such as least squares (LS) and linear minimum mean square error (LMMSE) estimators, which can also be regarded as data-driven methods.
The frequency-independent parameters are estimated during the uplink sounding phase.
The uplink all-one pilots received by the BS across all antennas and subcarriers are expressed as
\begin{equation}\label{Eq:ULpilots}
{\bf Y}^{\rm ul} = \sqrt{P^{\rm ul}} \sum_{l=1}^{L} \alpha_l \left({\bf a}(\Theta_l) \odot {\bf p}(\Phi_l)\right) {\bf q}^T(\Gamma_l) + {\bf Z}^{\rm ul},
\end{equation}
where ${\bf Y}^{\rm ul}\in\mathbb{C}^{M\times N}$ is in the antenna subcarrier domain, $P^{\rm ul}$ is the transmit power of the user, and ${\bf Z}^{\rm ul}\in\mathbb{C}^{M\times N}$ is the uplink complex Gaussian noise whose elements are independent and identically distributed (i.i.d.) with zero mean and unit variance. ${\bf Y}^{\rm ul}$ is a noisy mixture composed of the pilot components that travel along the $L$ paths and the additive Gaussian noise. We aim to extract $\Theta_l$, $\Gamma_l$, and $\Phi_l$ from the noisy mixture ${\bf Y}^{\rm ul}$.
This section formulates two key problems that lie in the extraction of these frequency-independent parameters by reviewing the authors' previous work \cite{Han2019TWC} for FDD stationary massive MIMO systems, and then introduces deep learning to tackle these problems.
\subsection{Problem formulation}
The Newton orthogonal matching pursuit (NOMP) algorithm \cite{Mamandipoor2016} is adopted in \cite{Han2019TWC} to estimate $\Theta_l$, $\Gamma_l$, and $\alpha_l$ from ${\bf Y}^{\rm ul}$ successively. In the $l$th iteration, NOMP estimates $\Theta_l$ and $\Gamma_l$ of the $l$th path and removes the pilot component along this path from ${\bf Y}^{\rm ul}$. The residues of ${\bf Y}^{\rm ul}$ after each iteration are illustrated in Fig.~\ref{Fig:NOMPprocess}, where $M=N=64$ and $L=3$. Figs.~\ref{Fig:NOMPprocess}(a) and (b) show that the NOMP algorithm can recognize only the pilot component with the largest power. Subsequently, this strongest component is removed and the updated residue contains two components. After three iterations, all the components are removed, that is, all the paths are detected, thereby leaving only the noise in the residue, as illustrated in Fig.~\ref{Fig:NOMPprocess}(d). The iteration-based algorithm requires $L$ rounds of detection to recognize all the paths. The complexity of the NOMP algorithm is $\mathcal{O}(LMN\log(MN))$. If $M$, $N$, and $L$ grow large, then the processing time is considerably long. To avoid the latency caused by using high-complexity algorithms, we raise the first question as follows:
\begin{itemize}
\item {\bf Q1}: Can we rapidly recognize the angles and delays of all the paths?
\end{itemize}
The scheme proposed in \cite{Han2019TWC} was designed for spatially stationary systems and cannot identify $\Phi_l$, which is also frequency-independent in non-stationary systems. One solution is to estimate $\Phi_l$ together with $\Theta_l$ and $\Gamma_l$ at the $l$th iteration of the NOMP algorithm. With more parameters to be estimated, the computational complexity of the updated algorithm increases further. All possible solutions of $\Phi_l$ must be tested exhaustively, which reduces to searching over $s_{l,{\rm start}}$ and $s_{l,{\rm end}}$. The complexity of the updated algorithm is $1+2+\cdots+S=(S^2+S)/2$ times that of the NOMP algorithm, thereby resulting in a prohibitively long processing time, which is unacceptable in practice. Thus, we raise the second question as follows:
\begin{itemize}
\item {\bf Q2}: How to efficiently identify the visibility regions?
\end{itemize}
\subsection{Sparse image of uplink pilots}
\begin{figure*}
\centering
\includegraphics[scale=0.6]{CoordinateSystems.pdf}
\caption{(a) Image of uplink pilots and the coordinate system of image. (b) Sinc-function pattern of a column of ${\bar{\bf Y}}^{\rm ul}$ and the width of a dark spot (suppose only one path exists). (c) Coordinate system of network and the bounding box. } \label{Fig:CoordinateSystems}
\end{figure*}
When observing Fig.~\ref{Fig:NOMPprocess}(a), which shows significant sparsity, we can rapidly determine the three paths in the channel. This process is fast, requiring neither iterations nor residue figures, and it can be imitated by artificial intelligence. Therefore, prior to answering the two questions, we initially investigate the sparse image of uplink pilots.
In massive MIMO OFDM systems, $L\ll MN$ typically holds. After transforming ${\bf Y}^{\rm ul}$ from the antenna subcarrier domain to the angular temporal domain, the pilots show sparsity. The angular and temporal transformation matrices are defined as
\begin{equation}\label{Eq:Uatrans}
{\bf U}_{\rm a}=\left[{\bf a}(0), {\bf a}\left(-\frac{1}{\gamma_{\rm a} M}\right), \ldots, {\bf a}\left(-\frac{\gamma_{\rm a}M-1}{\gamma_{\rm a}M}\right)\right]
\end{equation}
and
\begin{equation}\label{Eq:Uttrans}
{\bf U}_{\rm t}=\left[{\bf q}(0), {\bf q}\left(-\frac{1}{\gamma_{\rm t} N}\right), \ldots, {\bf q}\left(-\frac{\gamma_{\rm t}N-1}{\gamma_{\rm t}N}\right)\right]
\end{equation}
respectively, where $\gamma_{\rm a}$ and $\gamma_{\rm t}$ are oversampling rates.
Thereafter, the uplink received pilots in the angular temporal domain are calculated as
\begin{equation}\label{Eq:ULtildePilot}
{\bar{\bf Y}}^{\rm ul} = {\bf U}^H_{\rm a}{\bf Y}^{\rm ul}{\bf U}_{\rm t},
\end{equation}
where ${\bar{\bf Y}}^{\rm ul}\in\mathbb{C}^{\gamma_{\rm a}M\times\gamma_{\rm t}N}$ is a sparse matrix.
We normalize the modulus of each entry of ${\bar{\bf Y}}^{\rm ul}$ and obtain a new real-valued matrix ${\tilde{\bf Y}}^{\rm ul}$, whose $(m,n)$th entry is
\begin{equation}\label{Eq:ULbarPilot}
[{\tilde{\bf Y}}^{\rm ul}]_{m,n} = \frac{\eta|[{\bar{\bf Y}}^{\rm ul}]_{m,n}|}{\max_{i=1,\ldots,\gamma_{\rm t}N,k=1,\ldots,\gamma_{\rm a}M}|[{\bar{\bf Y}}^{\rm ul}]_{i,k}|}.
\end{equation}
The maximal entry of ${\tilde{\bf Y}}^{\rm ul}$ is thus normalized to $\eta$. This normalization avoids the wide color range of the images in an extremely high SNR regime.
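The image generation pipeline of \eqref{Eq:Uatrans}--\eqref{Eq:ULbarPilot} can be sketched as follows. This NumPy snippet assumes illustrative sizes and a single noiseless path placed at $(\Theta,\Gamma)=(0.5,0.5)$ so that the peak location is unambiguous; it is a sketch, not the MATLAB implementation used in the paper.

```python
import numpy as np

M, N, ga, gt, eta = 32, 32, 2, 2, 1.0   # ga, gt: oversampling rates
a = lambda Th: np.exp(2j*np.pi*Th*np.arange(M))
q = lambda Ga: np.exp(2j*np.pi*Ga*np.arange(N))

# oversampled transformation matrices, Eqs. (Uatrans) and (Uttrans)
Ua = np.stack([a(-i/(ga*M)) for i in range(ga*M)], axis=1)
Ut = np.stack([q(-i/(gt*N)) for i in range(gt*N)], axis=1)

Y = np.outer(a(0.5), q(0.5))            # one noiseless path at (0.5, 0.5)
Ybar = Ua.conj().T @ Y @ Ut             # angular temporal domain, Eq. (ULtildePilot)
Ytil = eta * np.abs(Ybar) / np.abs(Ybar).max()   # normalized image, Eq. (ULbarPilot)
row, col = np.unravel_index(np.argmax(Ytil), Ytil.shape)
```

The resulting $\gamma_{\rm a}M\times\gamma_{\rm t}N$ matrix is sparse: a single dominant spot with sinc-shaped tails, matching the cross-style pattern described below.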
The image of ${\tilde{\bf Y}}^{\rm ul}$ is drawn by MATLAB, as exemplified in Fig.~\ref{Fig:CoordinateSystems}(a), where $M=64$, $N=32$, $S=4$, and $L=2$. In the image, the horizontal axis represents delay (i.e., $\Gamma$) ranging from 0 to 1, and the vertical axis represents angle (i.e., $\Theta$) ranging from 0 to 1. In the coordinate system of the image, the upper left vertex is the origin (0,0).
The image has $L$ cross-style patterns, each corresponding to a path. The darkness of the cross-style pattern is determined by the gain of the path. Each cross-style pattern is composed of a strong dark spot at the center and four dotted tails that stretch upward, downward, leftward, and rightward. Each dark spot has a semi-square or semi-rectangular shape and holds the following two properties.
\begin{property}\label{Theo:lightSpot1}
In the coordinate system of the image, the coordinates of the center of the dark spot are exactly the delay and angle of the $l$th path, i.e., $(\Gamma_l,\Theta_l)$.
\end{property}
\begin{proof}
Refer to Appendix A.
\end{proof}
\begin{property}\label{Theo:lightSpot2}
The width $w_l$ and height $h_l$ of the $l$th dark spot in Fig.~\ref{Fig:CoordinateSystems}(a) are given as follows:
\begin{equation}\label{Eq:WidthHeight}
w_l = \frac{2}{N}, \quad h_l = \frac{2S}{\left( s_{l,{\rm end}}-s_{l,{\rm start}}+1 \right)M}.
\end{equation}
\end{property}
\begin{proof}
Refer to Appendix B.
\end{proof}
The proofs show that the cross-style pattern results from the sinc-function pattern of $\bar{\bf Y}^{\rm ul}$. We extract one column of $\bar{\bf Y}^{\rm ul}$ and illustrate it in Fig.~\ref{Fig:CoordinateSystems}(b). The sinc-function pattern in the angular domain is exactly the array pattern of the ULA.
The two properties indicate that the information on $\Gamma_l$, $\Theta_l$, and $\Phi_l$ is directly illustrated in the image of uplink pilots. By observing the dark spots in the image, we can easily obtain these frequency-independent parameters.
\subsection{Power of YOLO network}
With the two properties, we regard the dark spots as the objects and tackle the problems in Section \ref{Sec:YOLO}.B with a powerful neural network for object detection, that is, YOLO.
\emph{Fast}:
As the name suggests, YOLO can find all objects that the network knows in an image by only observing the image once. According to \cite{Redmon2015}, YOLO can process 45 large images in a second, thereby demonstrating its rapid processing ability.
\emph{Ability to bound objects}:
YOLO can position the objects and estimate the size of each object by observing the bounding boxes that frame the objects. If the bounding boxes can be learned to exactly bound the dark spots, then we can answer the two questions as follows.
\begin{itemize}
\item {\bf Answer to Q1}: $\Gamma_l$ and $\Theta_l$ can be rapidly estimated by calculating the center of the $l$th bounding box.
\item {\bf Answer to Q2}: The size of $\Phi_l$ can be estimated by observing the height of the $l$th bounding box, thereby simplifying the identification of $\Phi_l$.
\end{itemize}
YOLO has advanced to version 3 \cite{Redmon2018}, which retains a comprehensive network structure and greatly enhances the successful detection ratio of small objects. Therefore, this version is adopted in this study. We maintain the original structure and the input and output settings of the YOLO network to the greatest extent. However, we perform the necessary modifications to satisfy the requirement of parameter estimation.
The image in Fig.~\ref{Fig:CoordinateSystems}(a) illustrates the input of the YOLO network. Only a small amount of data is needed to train the network because all the input images of YOLO have strong similarities. YOLO has its own coordinate system, where the top left vertex of an input image is regarded as the origin (0,0), as shown in Fig.~\ref{Fig:CoordinateSystems}(c). The $x$ and $y$ axes stretch rightward and downward, respectively. These settings coincide with the coordinate system of the image of uplink pilots. Each axis in the coordinate system of the network ranges from 0 to 938,\footnote{938 is the double of the resolution of the network.} and the coordinates take integer values.
Here, the network outputs $5{\hat L}$ parameters after processing the image, where ${\hat L}$ denotes the number of detected paths and is an estimate of $L$. Five parameters are provided to describe the $l$th detected path, which are denoted as
\begin{equation}\label{Eq:YOLOoutput}
\left\{C_{l},x_{l,\min},y_{l,\min},x_{l,\max},y_{l,\max}\right\},
\end{equation}
where $C_{l}$ indicates the confidence level of the detection of the $l$th path, satisfying $0 < C_{l}\le 1$. When $C_{l}$ grows large, the probability of a successful detection increases. Generally, if $C_{l}$ is less than 0.5, then the $l$th detected path may be fake. False alarms generally happen in a low SNR regime, where the noise is falsely identified as a path. Only one class of object (the path) should be recognized; thus, the class indicator in the original network is no longer provided in the output.
Specifically, $(x_{l,\min},y_{l,\min})$ and $(x_{l,\max},y_{l,\max})$ are the coordinates of the top left and the bottom right vertices of the bounding box, respectively, thereby satisfying $0\le x_{l,\min},y_{l,\min} < 938$ and $0<x_{l,\max},y_{l,\max}\le938$, as shown in Fig.~\ref{Fig:CoordinateSystems}(c).
As suggested, the bounding box should exactly bound the dark spot. Therefore, when generating the labels of training data, we set
\begin{equation}\label{Eq:boxCenter}
\begin{aligned}
x_{l,\min} &= \left\lceil 938\left(\Gamma_{l}-\frac{w_l}{2}\right)\right\rceil, y_{l,\min} = \left\lceil 938\left(\Theta_{l}-\frac{h_l}{2}\right)\right\rceil,\\
x_{l,\max} &= \left\lceil 938\left(\Gamma_{l}+\frac{w_l}{2}\right)\right\rceil, y_{l,\max} = \left\lceil 938\left(\Theta_{l}+\frac{h_l}{2}\right)\right\rceil.
\end{aligned}
\end{equation}
Under this setting of coordinates, $\Gamma_l$, $\Theta_l$, and $\Phi_l$ can be estimated efficiently.
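The label generation rule can be sketched as follows. This Python snippet assumes the image convention stated above (the $x$ axis carries delay $\Gamma$ and the $y$ axis carries angle $\Theta$) and uses Property 2 for the box size; the dimensions are illustrative.

```python
import numpy as np

M, N, S, RES = 64, 32, 4, 938           # RES: network coordinate range
def box_label(Theta, Gamma, size):
    """Bounding-box label of a path seen by `size` adjacent subarrays."""
    w = 2.0 / N                         # spot width, Property 2
    h = 2.0 * S / (size * M)            # spot height, Property 2
    xmin = int(np.ceil(RES * (Gamma - w/2)))   # x axis carries delay
    ymin = int(np.ceil(RES * (Theta - h/2)))   # y axis carries angle
    xmax = int(np.ceil(RES * (Gamma + w/2)))
    ymax = int(np.ceil(RES * (Theta + h/2)))
    return xmin, ymin, xmax, ymax
```

The ceiling operation rounds the box vertices onto the integer network grid; the induced on-grid error is at most one coordinate unit, i.e., $1/938$.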
\subsection{YOLO-based parameter estimation}
Based on the Answer to Q1, $\Theta_l$ and $\Gamma_l$ are derived from the center of the bounding box, i.e.,
\begin{equation}\label{Eq:YOLOestTheta}
{\tilde\Theta}_{l} = \frac{y_{l,\min}+y_{l,\max}}{2\times 938}
\end{equation}
and
\begin{equation}\label{Eq:YOLOestGamma}
{\tilde\Gamma}_{l} = \frac{x_{l,\min}+x_{l,\max}}{2\times 938},
\end{equation}
where $\tilde\Theta_l$ and $\tilde\Gamma_l$ are the coarse estimates of $\Theta_l$ and $\Gamma_l$, respectively.
According to Property \ref{Theo:lightSpot2}, the size of $\Phi_l$ determines the height of the $l$th bounding box, which is calculated as
\begin{equation}\label{Eq:heightYOLOout}
\hat h_l = \frac{y_{l,\max}-y_{l,\min}}{938}.
\end{equation}
For a scatterer that can see $s$ subarrays, the height of the bounding box should be equal to
\begin{equation}\label{Eq:heightHs}
H^{(s)} = \frac{2S}{sM},
\end{equation}
where $s=1,\ldots,S$.
We estimate the size of $\Phi_l$ by exhaustively searching $H^{(1)},\ldots,H^{(S)}$ for the one that has the closest value to $\hat h_l$, i.e.,
\begin{equation}\label{Eq:Sl}
S_l = \arg\min_{s=1,\ldots,S} |\hat h_l-H^{(s)}|.
\end{equation}
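The decoding steps \eqref{Eq:YOLOestTheta}--\eqref{Eq:Sl} can be collected into one routine. The following sketch (with illustrative dimensions) converts a single bounding box into the coarse estimates $\tilde\Theta_l$ and $\tilde\Gamma_l$ and the visibility-region size $S_l$.

```python
import numpy as np

M, N, S, RES = 64, 32, 4, 938
def decode_box(xmin, ymin, xmax, ymax):
    """Coarse (Theta, Gamma) and visibility-region size from one box."""
    Theta = (ymin + ymax) / (2.0 * RES)          # Eq. (YOLOestTheta)
    Gamma = (xmin + xmax) / (2.0 * RES)          # Eq. (YOLOestGamma)
    h_hat = (ymax - ymin) / float(RES)           # box height, Eq. (heightYOLOout)
    H = 2.0 * S / (np.arange(1, S + 1) * M)      # candidate heights, Eq. (heightHs)
    S_l = int(np.argmin(np.abs(h_hat - H))) + 1  # nearest height, Eq. (Sl)
    return Theta, Gamma, S_l
```

For instance, a box of height $\approx 938\cdot 2S/(2M)$ decodes to $S_l=2$, i.e., the path is seen by two adjacent subarrays.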
The identification of $\Phi_l$ is greatly simplified with the knowledge of $S_l$ because we only need to identify the first and last of the $S_l$ adjacent subarrays. Two pointers, denoted as $i_{l,{\rm start}}$ and $i_{l,{\rm end}}$, indicate $s_{l,{\rm start}}$ and $s_{l,{\rm end}}$, respectively, as shown in Fig.~\ref{Fig:Pointers}. We initially set $i_{l,{\rm start}}=1$ and $i_{l,{\rm end}}=S$. We can determine the $S_l$ subarrays by moving the two pointers for a total of $S-S_l$ steps.
We decide how to move the pointers by observing the projection power. If a subarray can see a scatterer, then the uplink pilots received on this subarray exhibit distinct projection power along the corresponding path. The projection power from path $l$ to the uplink pilots on subarray $s$ is defined as
\begin{equation}\label{Eq:ProjectPower}
P_{l,s} = \left|\left({\bf a}({\tilde\Theta}_l)\odot{\bf p}(\{s\})\right)^H{\bf Y}^{\rm ul}{\bf q}^*({\tilde\Gamma}_l)\right|^2.
\end{equation}
The following property provides an approximation of $P_{l,s}$.
\begin{property}\label{Theo:PsApprox}
If ${\tilde\Theta}_l\approx\Theta_l$, ${\tilde\Gamma}_l\approx\Gamma_l$, and the size of each subarray is large, then $P_{l,s}$ can be approximated by
\begin{equation}\label{Eq:PsApprox}
P_{l,s} \approx
\begin{cases}
P^{\rm ul}|\alpha_l|^2 {M^2N^2}/{S^2} + {MN}/{S}, & \text{if $s \in \Phi_l$,} \\
{MN}/{S}, & \text{else.}
\end{cases}
\end{equation}
\end{property}
\begin{proof}
See Appendix C.
\end{proof}
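Property \ref{Theo:PsApprox} can be checked numerically. In the sketch below (illustrative parameters; the true $\Theta_l$ and $\Gamma_l$ are used in place of the coarse estimates), a single path visible only to subarrays $\{2,3\}$ yields projection power close to $P^{\rm ul}|\alpha_l|^2M^2N^2/S^2$ on the visible subarrays and orders of magnitude less on the others.

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, S = 64, 64, 4
P_ul, alpha = 1.0, 1.0 + 0j
Theta, Gamma, Phi = 0.3, 0.6, {2, 3}    # one path seen by subarrays {2, 3}
a = lambda Th: np.exp(2j*np.pi*Th*np.arange(M))
q = lambda Ga: np.exp(2j*np.pi*Ga*np.arange(N))
def p(Ph):
    m = np.arange(1, M + 1)
    return np.isin(np.ceil(m*S/M).astype(int), sorted(Ph)).astype(float)

Z = (rng.standard_normal((M, N)) + 1j*rng.standard_normal((M, N))) / np.sqrt(2)
Y = np.sqrt(P_ul) * alpha * np.outer(a(Theta)*p(Phi), q(Gamma)) + Z

def proj_power(s):                      # Eq. (ProjectPower), exact (Theta, Gamma)
    return np.abs((a(Theta)*p({s})).conj() @ Y @ q(Gamma).conj())**2

P = np.array([proj_power(s) for s in range(1, S + 1)])
```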
\begin{figure}
\centering
\includegraphics[scale=0.55]{Pointers.pdf}
\caption{Pointers are used to identify the non-stationarity. The gray blocks represent the subarrays in $\Phi_l$.} \label{Fig:Pointers}
\end{figure}
According to Property \ref{Theo:PsApprox}, $P_{l,s_1}\approx P_{l,s_2}$ holds for subarrays $s_1,s_2\in\Phi_l$. Meanwhile, $P_{l,s_1}\gg P_{l,s_2}$ holds for subarrays $s_1\in\Phi_l$ and $s_2\notin\Phi_l$. That is, the subarrays in $\Phi_l$ have similar values of projection power on path $l$, and these values are much larger than those of the subarrays that are not in $\Phi_l$.
For the two pointers, if $P_{l,i_{l,{\rm start}}} \ge P_{l,i_{l,{\rm end}}}$, then the probability that $i_{l,{\rm end}}\notin\Phi_l$ is high. We move the pointer $i_{l,{\rm end}}$ backward by one step, i.e., $i_{l,{\rm end}} = i_{l,{\rm end}}-1$. Otherwise, we move the pointer $i_{l,{\rm start}}$ forward by one step, i.e., $i_{l,{\rm start}} = i_{l,{\rm start}}+1$. We continue to move the pointers until $i_{l,{\rm end}}-i_{l,{\rm start}}=S_l-1$. Thereafter, we set
\begin{equation}\label{Eq:sstartendEst1}
{\hat s}_{l,{\rm start}}=i_{l,{\rm start}}, {\hat s}_{l,{\rm end}}=i_{l,{\rm end}},
\end{equation}
and obtain the estimate of $\Phi_l$ as follows:
\begin{equation}\label{Eq:PhiEst}
\hat\Phi_l = \{ {\hat s}_{l,{\rm start}}, {\hat s}_{l,{\rm start}}+1, \ldots, {\hat s}_{l,{\rm end}}\}.
\end{equation}
The non-stationarity identification algorithm that utilizes $S_l$ is named the bounding box-based algorithm.
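The two-pointer rule described above can be sketched compactly. The function below takes the projection powers $P_{l,1},\ldots,P_{l,S}$ and the size $S_l$ from the bounding-box height, and moves the pointers for $S-S_l$ steps; subarray indices are 1-based as in the text.

```python
def identify_phi_box(proj, S_l):
    """Bounding box-based identification: proj[s-1] = P_{l,s}."""
    i_start, i_end = 1, len(proj)       # initialize pointers to 1 and S
    while i_end - i_start > S_l - 1:    # S - S_l pointer moves in total
        if proj[i_start - 1] >= proj[i_end - 1]:
            i_end -= 1                  # i_end is likely outside Phi_l
        else:
            i_start += 1                # i_start is likely outside Phi_l
    return set(range(i_start, i_end + 1))   # Eq. (PhiEst)
```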
\begin{figure*}
\centering
\includegraphics[scale=0.88]{SchemeModules.pdf}
\caption{Modules of the proposed downlink channel reconstruction scheme. The modules in gray are based on deep learning. The symbols above the arrow are the outputs of the left module of the arrow.} \label{Fig:SchemeModules}
\end{figure*}
\section{Downlink channel reconstruction scheme}\label{Sec:scheme}
On the basis of the channel model and the power of YOLO, we propose a model-driven deep learning-based scheme to reconstruct the non-stationary downlink channel.
Fig.~\ref{Fig:SchemeModules} illustrates the diagram of the proposed non-stationary downlink channel reconstruction scheme. The scheme functions successively in the following five modules.
Module 1: The \emph{angle and delay detector} at the BS obtains coarse estimates $\tilde\Theta_l$ and $\tilde\Gamma_l$ by applying \eqref{Eq:YOLOestTheta} and \eqref{Eq:YOLOestGamma}.
Module 2: The \emph{non-stationarity identifier} at the BS obtains $\hat\Phi_l$ through the bounding box-based algorithm or the projection power-based algorithm described in the following subsection, using $\tilde\Theta_l$, $\tilde\Gamma_l$, and ${\bf Y}^{\rm ul}$.
Module 3: The \emph{angle and delay refiner} at the BS obtains $\hat\Theta_l$ and $\hat\Gamma_l$, which are the refined estimates of $\Theta_l$ and $\Gamma_l$, respectively, by utilizing $\tilde\Theta_l$, $\tilde\Gamma_l$, $\hat\Phi_l$, and ${\bf Y}^{\rm ul}$.
Module 4: The \emph{downlink gain estimator} at the user obtains ${\hat g}^{\rm dl}_l$, which is the estimate of ${g}^{\rm dl}_l$, by utilizing the downlink pilots and $\hat\Phi_l$, $\hat\Theta_l$, and $\hat\Gamma_l$, and sends ${\hat g}^{\rm dl}_l$ to the BS.
Module 5: The \emph{downlink channel reconstructor} at the BS reconstructs the downlink channel by applying $\hat\Phi_l$, $\hat\Theta_l$, $\hat\Gamma_l$, and ${\hat g}^{\rm dl}_l$ to \eqref{Eq:DLchannel} as follows:
\begin{equation}\label{Eq:DLchannelrec}
{\hat{\bf H}}^{\rm dl} = \sum_{l=1}^{\hat L} {\hat g}^{\rm dl}_l \left({\bf a}(\hat\Theta_l) \odot {\bf p}(\hat\Phi_l)\right) {\bf q}^T(\hat\Gamma_l).
\end{equation}
The proposed scheme has low overhead and low complexity. In comparison with \cite{Han2019TWC}, the present work can rapidly identify the non-stationarity aside from detecting delays and angles. In the following subsections, modules 2--4 are described in detail, and the reduced case in the stationary scenario is further discussed.
\subsection{Non-stationarity identifier}
The bounding box-based algorithm utilizes the deep learning results but is sensitive to the accuracy of $y_{l,\max}$ and $y_{l,\min}$, especially when the size of $\Phi_l$ is smaller than $S$ and the power of this path is much smaller than the largest power of a path in the channel (i.e., $|\alpha_l|\ll\max_k|\alpha_k|$). To enhance the accuracy of the non-stationarity identifier, we further propose a projection power-based algorithm.
This algorithm is also based on Property \ref{Theo:PsApprox}, but identifies $\Phi_l$ by comparing the projection power from path $l$ to the uplink pilots on each subarray. We initially determine the subarray with the maximal projection power on path $l$,
\begin{equation}\label{Eq:barsl}
\bar s_l = \arg\max_{s=1,\ldots,S} P_{l,s}.
\end{equation}
Subsequently, we find the subarrays whose projection power is similar to $P_{l,\bar s_l}$.
We still introduce two pointers and initialize them as $j_{l,{\rm start}}=1$ and $j_{l,{\rm end}}=S$. We move the pointer $j_{l,{\rm start}}$ forward until $P_{l,j_{l,{\rm start}}} \ge \delta P_{l,\bar s_l}$, where $0<\delta<1$. We set $\delta \in [0.1,0.5]$, considering the estimation errors of ${\tilde\Theta}_l$ and ${\tilde\Gamma}_l$ and the existence of noise. Afterward, we move the pointer $j_{l,{\rm end}}$ backward until $P_{l,j_{l,{\rm end}}} \ge \delta P_{l,\bar s_l}$. Finally, the estimated indices of the first and last subarrays in $\Phi_l$ are
\begin{equation}\label{Eq:sstartendEst2}
{\hat s}_{l,{\rm start}}=j_{l,{\rm start}}, {\hat s}_{l,{\rm end}}=j_{l,{\rm end}},
\end{equation}
and $\hat\Phi_l$ is derived by applying \eqref{Eq:sstartendEst2} to \eqref{Eq:PhiEst}.
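The projection power-based algorithm admits an equally short sketch. The function below assumes 1-based subarray indices and an illustrative default $\delta=0.3$ within the recommended range $[0.1,0.5]$.

```python
def identify_phi_power(proj, delta=0.3):
    """Projection power-based identification: proj[s-1] = P_{l,s}."""
    S = len(proj)
    s_bar = max(range(S), key=lambda s: proj[s])   # Eq. (barsl), 0-based
    thr = delta * proj[s_bar]                      # power threshold
    j_start, j_end = 0, S - 1
    while proj[j_start] < thr:
        j_start += 1                               # move j_start forward
    while proj[j_end] < thr:
        j_end -= 1                                 # move j_end backward
    return set(range(j_start + 1, j_end + 2))      # 1-based, Eq. (PhiEst)
```

Unlike the bounding box-based variant, this version does not rely on the estimated size $S_l$ and is therefore robust to errors in the box height.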
\subsection{Angle and delay refiner}
With ${\tilde\Theta}_l$, ${\tilde\Gamma}_l$, and $\hat\Phi_l$, the angle and delay refiner calculates the refined estimates of angles and delays. Prior to describing the refinement method, we first explain the reasons for introducing this module.
\subsubsection{Reasons for introducing the refiner}
The angle and delay refiner is introduced because the accuracy of ${\tilde\Theta}_l$ and ${\tilde\Gamma}_l$ is limited by the following factors of the YOLO-based detection.
\emph{Image resolution}:
Each image is generated from a finite-dimensional angular temporal domain pilot matrix. The values of $\gamma_{\rm a}M$ and $\gamma_{\rm t}N$ are large but not infinite, thereby resulting in an on-grid effect. Consequently, the coordinates of the $l$th dark spot center are close to but not equal to $(\Gamma_l,\Theta_l)$. One solution is to increase $\gamma_{\rm a}$ and $\gamma_{\rm t}$. However, increasing the oversampling rates multiplies the complexity and extends the running time of generating the images.
\emph{Network resolution}:
The maximal coordinates in the coordinate system of the network are (938,938). For any input image, the network initially rescales the image to $938\times 938$. That is, a $938\times 938$ dimensional matrix is processed by the network instead of the original $\gamma_{\rm a}M\times\gamma_{\rm t}N$ dimensional matrix. Once $938<\gamma_{\rm a}M$ or $938<\gamma_{\rm t}N$ holds, the resolution is decreased.
\emph{Integer labels}:
We set the coordinates of the bounding boxes as integers to maintain the settings of the original YOLO network and guarantee the accuracy of detection. Using integer coordinates also results in the on-grid effect.
\emph{Detection error}:
Although the network is well trained, detection errors are inevitable. The bounding box may deviate from the ideal one. The minimum deviation is one coordinate unit, which corresponds to an error of ${1}/{938}$. Moreover, false alarms and missed detections may occur in a low SNR regime. Therefore, the network detection error is the most critical factor that harms the accuracy.
Consequently, the accuracy of ${\tilde\Theta}_l$ and ${\tilde\Gamma}_l$ is limited by these factors. Especially when $M$ and $N$ are large, a small error in angle or delay results in a sharp degradation of the channel reconstruction accuracy. Therefore, further processing of these coarse estimates is necessary.
\subsubsection{Refining the estimates}
The inputs of the angle and delay refiner are
\begin{equation}\label{Eq:RefineInput}
\left\{{\bf Y}^{\rm ul}, {\tilde\Theta}_1, {\tilde\Gamma}_1, {\hat\Phi}_1, \ldots, {\tilde\Theta}_{\hat L}, {\tilde\Gamma}_{\hat L}, {\hat\Phi}_{\hat L} \right\}.
\end{equation}
The outputs are the refined angles and delays, as follows:
\begin{equation}\label{Eq:RefineOutput}
\left\{{\hat\Theta}_1, {\hat\Gamma}_1, \ldots, {\hat\Theta}_{\hat L}, {\hat\Gamma}_{\hat L} \right\}.
\end{equation}
Recalling the NOMP algorithm, within each iteration, NOMP refines all the extracted paths through the Newton refinement method, which can effectively refine the estimates of delays and angles toward their real values. However, the original Newton refinement method is designed for stationary systems. Here, we adjust the method to fit the non-stationary cases.
The Newton method refines the paths one by one in decreasing order of the path power to guarantee the effectiveness of refinement. We initially calculate the coarse estimates of uplink effective gains of these $\hat L$ paths by
\begin{equation}\label{Eq:YOLOestULGain}
\left[{\tilde\alpha}_1, \ldots, {\tilde\alpha}_{\hat L} \right]^T = \left( {\bf A}^{{\rm ul}H} {\bf A}^{\rm ul} \right)^{-1} {\bf A}^{{\rm ul}H} {\bf y}^{\rm ul},
\end{equation}
where ${\bf A}^{\rm ul}\in\mathbb{C}^{MN\times \hat L}$, the $l$th column of ${\bf A}^{\rm ul}$ is
\begin{equation}\label{Eq:YOLOestULmtxA}
[{\bf A}^{\rm ul}]_{:,l} = {\bf q}({\tilde\Gamma}_{l})\otimes \left({\bf a}({\tilde\Theta}_{l})\odot {\bf p}({\hat\Phi}_{l})\right),
\end{equation}
and ${\bf y}^{\rm ul}$ is obtained by stacking all the columns of ${\bf Y}^{\rm ul}$ into a vector. Thereafter, we sort these paths in decreasing order of $\|{\tilde\alpha}_l {\bf p}(\hat\Phi_l)\|^2$. To simplify the expression, we retain the notation of the coarse estimates in \eqref{Eq:RefineInput}, which now satisfy $\|{\tilde\alpha}_1 {\bf p}(\hat\Phi_1)\|^2 \ge \ldots \ge \|{\tilde\alpha}_{\hat L} {\bf p}(\hat\Phi_{\hat L})\|^2$. The residue is calculated as
\begin{equation}\label{Eq:ULpilotsRes}
{\bf Y}^{\rm ul}_{{\rm res}} = {\bf Y}^{\rm ul} - \sum_{l=1}^{\hat L} \sqrt{P^{\rm ul}}{\tilde\alpha}_l \left({\bf a}({\tilde\Theta}_{l})\odot {\bf p} ({\hat\Phi}_{l})\right) {\bf q}^T({\tilde\Gamma}_{l}).
\end{equation}
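The least squares step \eqref{Eq:YOLOestULGain}, the power-based sorting, and the residue \eqref{Eq:ULpilotsRes} can be sketched as follows. The snippet assumes noiseless pilots, $P^{\rm ul}=1$, and two hypothetical paths; the columns of ${\bf A}^{\rm ul}$ follow the Kronecker structure of \eqref{Eq:YOLOestULmtxA}.

```python
import numpy as np

M, N, S = 16, 8, 4
P_ul = 1.0
a = lambda Th: np.exp(2j*np.pi*Th*np.arange(M))
q = lambda Ga: np.exp(2j*np.pi*Ga*np.arange(N))
def p(Ph):
    m = np.arange(1, M + 1)
    return np.isin(np.ceil(m*S/M).astype(int), sorted(Ph)).astype(float)

est = [(0.2, 0.4, {1, 2, 3, 4}), (0.7, 0.1, {1, 2})]  # (Theta, Gamma, Phi)
alpha_true = np.array([1.0 + 0.5j, 0.4 - 0.3j])
Y = sum(np.sqrt(P_ul) * g * np.outer(a(Th)*p(Ph), q(Ga))
        for g, (Th, Ga, Ph) in zip(alpha_true, est))   # noiseless pilots

# LS gain estimate: columns per Eq. (YOLOestULmtxA), solve Eq. (YOLOestULGain)
A = np.stack([np.kron(q(Ga), a(Th)*p(Ph)) for Th, Ga, Ph in est], axis=1)
y = Y.flatten(order='F')                # stack the columns of Y
alpha = np.linalg.lstsq(A, y / np.sqrt(P_ul), rcond=None)[0]

Y_res = Y - sum(np.sqrt(P_ul) * g * np.outer(a(Th)*p(Ph), q(Ga))
                for g, (Th, Ga, Ph) in zip(alpha, est))  # Eq. (ULpilotsRes)
order = np.argsort([-abs(g)**2 * p(Ph).sum()
                    for g, (_, _, Ph) in zip(alpha, est)])  # power sorting
```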
The Newton method refines the angles and delays by minimizing the residue power.
We describe the Newton method by taking the first path as an example. We initially define
\begin{equation}\label{Eq:ULpilotsNT1}
{\bf Y}^{\rm ul}_{{\rm res},+1} = {\bf Y}^{\rm ul}_{{\rm res}} + \sqrt{P^{\rm ul}} {\tilde\alpha}_1 \left({\bf a}({\tilde\Theta}_{1})\odot {\bf p} ({\hat\Phi}_{1})\right) {\bf q}^T({\tilde\Gamma}_{1}).
\end{equation}
Only the uplink pilots on the subarrays in ${\hat\Phi}_{1}$ are utilized in the refinement of ${\tilde\Theta}_1$ and ${\tilde\Gamma}_1$. The refined estimates obtained by the Newton method, that is, ${\hat{\alpha}}_1$, ${\hat{\Theta}}_1$, and ${\hat{\Gamma}}_1$, can achieve the minimum residue power, i.e.,
\begin{equation}\label{Eq:ULpilotsNT2}
\begin{aligned}
&({\hat{\alpha}}_1, {\hat{\Theta}}_1, {\hat{\Gamma}}_1) \\= &\arg\min_{\alpha,\Theta,\Gamma} \left\|\left[{\bf Y}^{\rm ul}_{{\rm res},+1} - \sqrt{P^{\rm ul}} {\alpha}{\bf a}({\Theta}){\bf q}^T({\Gamma})\right]_{{\bf r}({\hat\Phi}_{1}),:} \right\|^2_F,
\end{aligned}
\end{equation}
where the row-selection vector is defined as
\begin{equation}\label{Eq:rPhi}
{\bf r}(\Phi) = \left[ \frac{M}{S}(s_{\rm start}-1)+1,\ldots, \frac{M}{S}s_{\rm end}\right],
\end{equation}
and $s_{\rm start}$ and $s_{\rm end}$ represent the indices of the first and last subarrays in $\Phi$, respectively.
The derivations of ${\hat{\alpha}}_1$, ${\hat{\Theta}}_1$, and ${\hat{\Gamma}}_1$ are the same as the original Newton refinement method in \cite{Han2019TWC}; thus, they are omitted here. Having refined the estimates of the first path, we update the residue by
\begin{equation}\label{Eq:ResUpdate}
{\bf Y}^{\rm ul}_{{\rm res}} = {\bf Y}^{\rm ul}_{{\rm res},+1} - \sqrt{P^{\rm ul}}{\hat\alpha}_1 \left({\bf a}({\hat\Theta}_{1})\odot {\bf p} ({\hat\Phi}_{1})\right) {\bf q}^T({\hat\Gamma}_{1}).
\end{equation}
Thereafter, the estimates of the other paths are refined following a similar approach starting from \eqref{Eq:ULpilotsNT1}. The angle and delay refiner repeats the above refinement procedure for $R_c$ rounds. The refinement has a low complexity of $\mathcal{O}(R_c \hat LMN)$.
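To illustrate the effect of refinement without reproducing the Newton derivations of \cite{Han2019TWC}, the sketch below replaces the Newton step with a local grid search around the coarse estimate: it maximizes the power captured by a candidate $(\Theta,\Gamma)$ on the rows ${\bf r}(\hat\Phi_1)$, which is equivalent to minimizing the residue power in \eqref{Eq:ULpilotsNT2}. All values are illustrative, and the true path sits at $(\Theta,\Gamma)=(0.3,0.5)$.

```python
import numpy as np

M, N, S = 32, 16, 4
a = lambda Th: np.exp(2j*np.pi*Th*np.arange(M))
q = lambda Ga: np.exp(2j*np.pi*Ga*np.arange(N))

rows = np.arange(16)                    # r(Phi_1) for Phi_1 = {1, 2}
Y1 = np.outer(a(0.3)[rows], q(0.5))     # Y_res,+1 restricted to r(Phi_1)
Theta0, Gamma0 = 0.303, 0.497           # coarse (on-grid) estimates

def captured_power(Th, Ga):             # power explained by a candidate path
    return np.abs(a(Th)[rows].conj() @ Y1 @ q(Ga).conj())**2

# local grid refinement around the coarse estimate (stand-in for Newton)
offsets = np.linspace(-0.01, 0.01, 81)
Th_ref, Ga_ref = max(((Theta0 + dt, Gamma0 + dg)
                      for dt in offsets for dg in offsets),
                     key=lambda c: captured_power(*c))
```

The refined pair moves from the on-grid coarse estimate back to the true off-grid $(\Theta_1,\Gamma_1)$; the Newton method achieves the same effect with far fewer objective evaluations.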
After the refiner completes its work, all the frequency-independent parameters are acquired by the BS. The BS then reconstructs the uplink channel by
\begin{equation}\label{Eq:ULrecChannel}
{\hat{\bf H}}^{\rm ul} = \sum_{l=1}^{\hat L} {\hat\alpha}_l \left({\bf a}({\hat\Theta}_{l})\odot {\bf p} ({\hat\Phi}_{l})\right) {\bf q}^T({\hat\Gamma}_{l}).
\end{equation}
\subsection{Downlink gain estimator}
The estimated frequency-independent parameters, including $\hat\Theta_l$, $\hat\Gamma_l$, and $\hat\Phi_l$, are sent to the downlink gain estimator. This module functions at the user equipment. As suggested in \cite{Han2019TWC}, the downlink pilots are beamformed along the angles of the paths to enhance the received power at the user equipment and improve the estimation accuracy of the downlink gains. Thus, the all-one downlink pilots occupy $\hat L$ OFDM symbols. The pilots received by the user on OFDM symbol $t$ are expressed as
\begin{equation}\label{Eq:DLpilots}
{\bf y}^{\rm dl}_{t} = \sum_{l=1}^{L} \sqrt{P^{\rm dl}} g^{\rm dl}_{l} {\bf q}(\Gamma_{l}) \left({\bf a}(\Theta_l)\odot{\bf p}(\Phi_l)\right)^T {\bf b}_t +{\bf z}^{\rm dl}_{t},
\end{equation}
where $t = 1,\ldots,\hat L$, ${\bf y}^{\rm dl}_{t}\in\mathbb{C}^{N\times 1}$, $P^{\rm dl}$ is the transmit power of the BS,
\begin{equation}\label{Eq:DLpilotsBF}
{\bf b}_t = \sqrt{\frac{S}{(\hat s_{t,{\rm end}}-\hat s_{t,{\rm start}}+1)M}}{\bf a}^*(\hat\Theta_t)\odot{\bf p}(\hat\Phi_t)
\end{equation}
is the beamforming vector for the downlink pilots on OFDM symbol $t$, and ${\bf z}^{\rm dl}_{t}\in\mathbb{C}^{N\times 1}$ is the downlink complex Gaussian noise whose elements are i.i.d. with zero mean and unit variance. We have $\|{\bf b}_t\|^2=1$ due to the power constraint. The design in \eqref{Eq:DLpilotsBF} indicates that on OFDM symbol $t$, the transmitted power is allocated only to the subarrays in $\hat\Phi_t$.
Afterwards, $\hat\Theta_l$, $\hat\Gamma_l$, and $\hat\Phi_l$ are applied in \eqref{Eq:DLpilots} to replace $\Theta_l$, $\Gamma_l$, and $\Phi_l$, respectively. The downlink gains of the $\hat L$ paths are estimated by
\begin{equation}\label{Eq:estDLGain}
\left[{\hat g}^{\rm dl}_1, \ldots, {\hat g}^{\rm dl}_{\hat L} \right]^T = \left( {\bf A}^{{\rm dl}H} {\bf A}^{\rm dl} \right)^{-1} {\bf A}^{{\rm dl}H} {\bf y}^{\rm dl},
\end{equation}
where
\begin{equation}\label{Eq:estDLmtxA}
{\bf A}^{\rm dl} = \left[ {\bf A}^{{\rm dl}T}_1,\ldots,{\bf A}^{{\rm dl}T}_{\hat L} \right]^T,
\end{equation}
the $l$th column of the $t$th submatrix ${\bf A}^{\rm dl}_t \in \mathbb{C}^{N\times \hat L}$ is expressed as
\begin{equation}\label{Eq:estDLmtxAt}
[{\bf A}^{\rm dl}_t]_{:,l} = {\bf q}(\hat\Gamma_l) \left({\bf a}(\hat\Theta_l)\odot{\bf p}(\hat\Phi_l)\right)^T {\bf b}_t,
\end{equation}
and ${\bf y}^{\rm dl}$ is obtained by stacking ${\bf y}^{\rm dl}_{1},\ldots,{\bf y}^{\rm dl}_{\hat L}$ into a vector.
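The estimator in \eqref{Eq:estDLGain} is an ordinary least-squares fit of the stacked pilots. A minimal numerical sketch (using a synthetic ${\bf A}^{\rm dl}$ rather than the construction in \eqref{Eq:estDLmtxAt}) is:

```python
import numpy as np

def estimate_dl_gains(A_dl, y_dl):
    # LS solution (A^H A)^{-1} A^H y of Eq. (estDLGain);
    # lstsq solves it without forming the inverse explicitly.
    gains, *_ = np.linalg.lstsq(A_dl, y_dl, rcond=None)
    return gains
```

In the noiseless case the estimator recovers the gains exactly, provided the $\hat L$ columns of ${\bf A}^{\rm dl}$ are linearly independent.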
The estimated downlink gains are fed back to the BS. Finally, the proposed scheme is completed with ${\hat{\bf H}}^{\rm dl}$ being reconstructed by the downlink channel reconstructor at the BS.
\begin{figure*}
\centering
\includegraphics[scale=0.82]{YOLOtest.pdf}
\caption{YOLO detection results of the images of uplink pilots when $S=4$, (a) $M=N=32$, SNR$=0$dB, (b) $M=N=32$, SNR$=10$dB, (c) $M=N=128$, SNR$=0$dB, and (d) $M=N=128$, SNR$=10$dB. The values upon the bounding boxes are the confidences of detection.} \label{Fig:YOLOtest}
\end{figure*}
\subsection{Discussions}
The proposed scheme can efficiently reconstruct the FDD non-stationary massive MIMO downlink channel by identifying the mapping between scatterers and subarrays. Moreover, it reduces naturally to stationary systems, and this reduction in turn suggests an alternative scheme for reconstructing the non-stationary downlink channel.
\subsubsection{Reducing to stationary cases}
In stationary massive MIMO systems, each scatterer can see all the subarrays. Or equivalently, the ULA is segmented into only $S=1$ subarray, and $\Phi_1=\ldots=\Phi_{L}=\{1\}$ holds. Under this condition, the following changes occur for the proposed scheme.
First, in the image of uplink pilots, when $M$ and $N$ are fixed, all the dark spots have the same shape and a uniform height and width. With equally sized objects, the bounding boxes can frame the dark spots more accurately.
Second, the outputs of the non-stationarity identifier become $\hat\Phi_1=\ldots=\hat\Phi_{\hat L}=\{1\}$ with probability 1.
Third, the angle and delay refiner and the downlink gain estimator are reduced to the versions for stationary systems.
Therefore, the proposed downlink channel reconstruction scheme also works in the widely studied FDD stationary massive MIMO systems, and even works better there than in non-stationary systems with the same array scale.
\subsubsection{Alternative scheme for non-stationary systems}
The applicability of the proposed scheme in stationary systems inspires us with an alternative scheme to reconstruct the downlink non-stationary channel. It is known that a subarray is the smallest unit to describe the non-stationarity. If one subarray is considered individually, then stationarity exists in the subsystem formed by this subarray. We can apply the proposed scheme in each subsystem individually. Under this condition, the following changes should be performed.
First, the uplink pilots are divided by ${\bf Y}^{\rm ul} = \left[ {\bf Y}^{{\rm ul}T}_1,\ldots,{\bf Y}^{{\rm ul}T}_S \right]^T$, where ${\bf Y}^{{\rm ul}}_s \in\mathbb{C}^{M/S\times N}$.
Second, for subsystem $s$, the image is generated from ${\bf Y}^{{\rm ul}}_s$. The refined angles and delays of the paths that subarray $s$ can see are $\hat\Theta_{s,l}$ and $\hat\Gamma_{s,l}$, respectively, where $l=1,\ldots,\hat L_s$, and $\hat L_s$ is an estimate of $L_s$.
Third, a total of $\sum_{s=1}^{S}\hat L_s$ paths are estimated by the alternative scheme, requiring $\sum_{s=1}^{S}\hat L_s$ OFDM symbols for downlink pilots. Afterwards, the downlink gains, denoted as $\hat g^{\rm dl}_{s,l}$, are sent back to the BS, where $l=1,\ldots,\hat L_s, s=1,\ldots,S$.
Fourth, the stationary downlink channel in subsystem $s$ is reconstructed using $\hat\Theta_{s,l}$, $\hat\Gamma_{s,l}$, and $\hat g^{\rm dl}_{s,l}$, where $l=1,\ldots,\hat L_s$. Thereafter, the large-scale non-stationary channel is obtained by stacking all the stationary downlink channels together.
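The first step above amounts to a row-block split of the uplink pilot matrix. A minimal sketch (assuming the antennas of each subarray occupy contiguous rows) is:

```python
import numpy as np

def split_uplink_pilots(Y_ul, S):
    # Divide the M x N pilot matrix into S subsystems of M/S consecutive rows each,
    # i.e., Y_ul = [Y_1^T, ..., Y_S^T]^T.
    M = Y_ul.shape[0]
    assert M % S == 0, "M must be divisible by S"
    step = M // S
    return [Y_ul[s * step:(s + 1) * step, :] for s in range(S)]
```

Each block is then processed as an independent stationary subsystem, at the cost of $\sum_s \hat L_s$ total estimated paths.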
The alternative scheme requires a large amount of downlink training and feedback overhead because $\sum_{s=1}^{S}\hat L_s>L$. Moreover, when $\Psi_1=\Psi_2$ holds, the alternative scheme cannot identify this equivalence. The estimation accuracy of angles and delays further degrades when using a reduced number of antennas. Therefore, the proposed scheme is more efficient than the alternative.
\section{Numerical results}\label{Sec:simulations}
In this section, we evaluate the performance of the proposed non-stationary downlink channel reconstruction scheme. In the FDD system, $f^{\rm ul}=2.58$ GHz, and $f^{\rm dl}=2.64$ GHz. The OFDM subcarrier spacing is $\Delta f=15$ kHz. The number of paths $L$ is uniformly distributed in $[1,10]$. For the $l$th path, $\Theta_l$ and $\Gamma_l$ are uniformly distributed in $[0,1)$. The effective uplink gain satisfies $\alpha_l = \beta_l e^{j\phi^{\rm ul}_l}$, where $\beta_l$ is uniformly distributed in $[0.5,1]$, and $\phi^{\rm ul}_l$ is uniformly distributed in $[0,2\pi)$. The downlink gain is $g^{\rm dl}_l = \beta_l e^{j\phi^{\rm dl}_l}$, where $\phi^{\rm dl}_l$ is independent of $\phi^{\rm ul}_l$ and follows the same distribution.
YOLO is implemented on a computer with one Nvidia GeForce GTX 1080 Ti GPU. The Keras deep learning library running on top of TensorFlow is used. In the training phase, we generate 3,000 groups of data under each set of $M$, $N$, and $S$. Each group of training data consists of an image of uplink pilots and a label vector, which is denoted as
\begin{equation}\label{Eq:YOLOlabel}
\left\{0, x_{l,\min},y_{l,\min},x_{l,\max},y_{l,\max} \right\},
\end{equation}
where the first parameter (0) indicates the object class. We set $\gamma_{\rm a}=\gamma_{\rm t}=16$ and $\eta=255$. $L$ is randomly and uniformly distributed in $[1,10]$. SNR ranges from 0 dB to 10 dB. The number of epochs is 300, and the batch size is 4. The training and testing data are generated by following the same procedure described in Section \ref{Sec:YOLO} and are not biased from each other. Therefore, overfitting issues do not arise.
\subsection{Evaluation of deep learning-based estimation}
We initially test the performance of the angle and delay detector, especially the detection accuracy of the YOLO network. For any input image, the ratio of successful object detection increases with object size. Thus, we start from the large dark spot cases, where $M=N=32$ and $S=4$. The channel is composed of two paths, satisfying $\Theta_1=0.6195$, $\Gamma_1=0.4102$, and $\Phi_1=\{3,4\}$ for path 1 and $\Theta_2=0.8099$, $\Gamma_2=0.2909$, and $\Phi_2=\{1,2\}$ for path 2. Fig.~\ref{Fig:YOLOtest}(a) illustrates the detection result under the condition of SNR $=0$ dB. The figure shows that the network can successfully recognize the actual dark spots from the noisy image with confidence levels of 0.97 and 0.95. The coarse estimates of angles and delays are $\tilde\Theta_1=0.5840$, $\tilde\Gamma_1=0.3865$, and $\tilde\Theta_2=0.7625$, $\tilde\Gamma_2=0.2735$, which are close to the actual values. However, in a low SNR regime, the noise is distinct in the image and appears similar to the dark spots, thereby resulting in a false alarm with a confidence level of 0.58. Fig.~\ref{Fig:YOLOtest}(b) shows the detection result when the SNR is increased to 10 dB. The cross-style patterns can be clearly observed from the image, and the network detects the dark spots with confidence levels of 0.98 and 0.97. The confidences increase with SNR, and the false alarm is avoided. However, the coarse estimates of angles and delays are $\tilde\Theta_1=0.5835$, $\tilde\Gamma_1=0.3860$, and $\tilde\Theta_2=0.7625$, $\tilde\Gamma_2=0.2720$, whose accuracy is not improved accordingly.
\begin{figure}
\centering
\includegraphics[scale=0.55]{IdentifierSuccess.pdf}
\caption{Successful ratios of the algorithms in a non-stationarity identifier.} \label{Fig:IdentifierSuccess}
\end{figure}
Thereafter, the detection accuracy is tested under the small dark spot condition, which corresponds to the actual massive MIMO setting. We set $M=N=128$ and $S=4$. Figs.~\ref{Fig:YOLOtest}(c) and (d) illustrate the results at SNR $=0$ and $10$ dB, respectively. In massive MIMO systems, the channel becomes sparse, and the noise power is no longer comparable with that of the dark spots even in a low SNR regime. The images of uplink pilots thus appear nearly the same at SNR $=0$ and $10$ dB, and Figs.~\ref{Fig:YOLOtest}(c) and (d) show similar detection results. The sizes of the bounding boxes are much smaller than those in Figs.~\ref{Fig:YOLOtest}(a) and (b), yielding accurate coarse estimates of angles and delays. We take the first path, with a confidence of 1.00, as an example. The actual values are $\Theta_1=0.3229$ and $\Gamma_1=0.1848$. The coarse estimates are $\tilde\Theta_1=0.3015$ and $\tilde\Gamma_1=0.1730$ at SNR$=0$ dB. The accuracy is enhanced compared with that of $M=N=32$. However, the large-aperture array is sensitive to angle errors. Thus, the refinement of angles and delays is essential.
\begin{figure}
\centering
\includegraphics[scale=0.55]{ULMSE.pdf}
\caption{Effectiveness of the angle and delay refiner.} \label{Fig:ULMSE}
\end{figure}
We evaluate the non-stationarity identifier by examining the success ratio of visibility region identification. For path $l$, the visibility region is successfully identified if $\hat\Phi_l=\Phi_l$. The successful identification ratios of the bounding box-based and the projection power-based algorithms are illustrated in Fig.~\ref{Fig:IdentifierSuccess}, where $M=N=128$, $S=4$, $L=10$, and $\delta=0.2$. Both algorithms identify the visibility regions with a probability higher than 0.98. As expected, the accuracy of the bounding box-based algorithm is sensitive to the detection errors of bounding boxes, whereas the projection power-based algorithm is more robust and therefore achieves a higher successful identification ratio. In the following simulations, we adopt the projection power-based algorithm in the non-stationarity identifier.
\subsection{Evaluation of the refinement}
We examine the effectiveness of the angle and delay refiner by testing the NMSE performance of the reconstructed uplink channel. The refiner works for $R_c=3$ rounds. We introduce two widely used channel estimation algorithms, i.e., LS and LMMSE, as benchmarks. Notably, LS and LMMSE are not realized through deep learning and do not involve network training. The non-stationary massive MIMO setting is still considered, where $M=N=128$ and $S=4$. The NMSE is calculated by averaging, over the $N$ subcarriers, the NMSE of the reconstructed or estimated uplink channel across all antennas on each subcarrier as
\begin{equation}\label{Eq:NMSE}
{\rm NMSE} = \mathbb{E}\left\{\frac{1}{N} \sum_{n=1}^N\frac{\| [\hat{\bf H}^{\rm ul}]_{:,n}-[{\bf H}^{\rm ul}]_{:,n} \|^2}{\| [{\bf H}^{\rm ul}]_{:,n} \|^2} \right\}.
\end{equation}
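A direct implementation of \eqref{Eq:NMSE} for one channel realization is shown below; the outer expectation is taken by Monte-Carlo averaging over realizations.

```python
import numpy as np

def nmse(H_hat, H):
    # Column-wise squared error, normalized per column and averaged over
    # the N columns, as in Eq. (NMSE).
    num = np.sum(np.abs(H_hat - H) ** 2, axis=0)
    den = np.sum(np.abs(H) ** 2, axis=0)
    return float(np.mean(num / den))
```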
Fig.~\ref{Fig:ULMSE} illustrates the results. In non-stationary systems, the LS algorithm performs worse than in stationary systems because some subarrays may not see any path, and the channel across these subarrays is zero. However, the LS algorithm still yields a nonzero estimated channel there, which is pure noise. The LMMSE algorithm suppresses the noise through multiplying by the covariance matrix of the channel. Thus, the LMMSE algorithm provides accurate channel estimates even in non-stationary systems. For the proposed reconstruction scheme, if we directly apply the coarse estimates in the uplink channel model, then the NMSE of the reconstructed channel is poor. Fig.~\ref{Fig:YOLOtest} shows that even though the coarse estimates of the angles and delays are close to their actual values, their estimation errors are unacceptable in massive MIMO systems, where a small angle offset dramatically impacts the channel reconstruction accuracy. Moreover, the accuracy of the reconstruction without refinement remains at $10^{-0.8}$ (i.e., $-8$ dB) and cannot be improved with increasing SNR because the bounding boxes remain unchanged [Figs.~\ref{Fig:YOLOtest}(c) and (d)]. Fortunately, the angle and delay refiner significantly improves the NMSE of the reconstructed uplink channel, for example, to $10^{-2.8}$ (i.e., $-28$ dB) at SNR $=0$ dB. Moreover, the NMSE further decreases with increasing SNR, demonstrating the effectiveness of the refiner.
\begin{figure}
\centering
\includegraphics[scale=0.54]{StationaryCompare.pdf}
\caption{NMSEs of downlink channel estimation schemes under stationarity.} \label{Fig:YOLOvsNOMP}
\end{figure}
\subsection{Comparison of the proposed and the alternative schemes}
We further evaluate the proposed downlink channel reconstruction scheme when reduced to stationary conditions. The NOMP, LMMSE, beam tracking (BT), and compressed sensing (CS)-based downlink channel estimation schemes are introduced as benchmarks. The latter three schemes do not utilize spatial reciprocity and rely solely on downlink training and feedback \cite{Rao2014,Gao2015}. Here, we consider the upper bound cases of the BT and CS-based schemes. The BT-based scheme adopts full-set discrete Fourier transform beams at the BS and therefore requires $M$ orthogonal downlink pilots. Subsequently, the user feeds back the received pilots that occupy more than 99\% of the total received power, together with their beam indices. The CS-based scheme also uses $M$ orthogonal downlink pilots to distinguish different BS antennas and adopts the OMP algorithm to estimate the downlink gains on the extracted orthogonal paths, which are then fed back to the BS. The feedback amounts of the two schemes are larger than that of the proposed scheme because of the on-grid effect. In the stationary system, $S=1$, $L\in[1,10]$, and we set $M=N=32$ and $M=N=128$, respectively. Fig.~\ref{Fig:YOLOvsNOMP} presents the NMSEs of the reconstructed or estimated downlink channels. Under the same system settings, the proposed scheme achieves nearly the same accuracy as the NOMP-based scheme with greatly reduced time consumption: when $M=N=128$ and $L=10$, NOMP consumes more than 5 minutes, whereas YOLO takes less than 2 seconds to determine all the paths. Moreover, despite its much lower downlink training and feedback overhead, the proposed scheme still significantly outperforms the LMMSE, BT, and CS-based schemes. Therefore, the proposed deep learning-based scheme is more efficient than the existing algorithm-based schemes. In addition, the accuracy improves with the values of $M$ and $N$.
When SNR $=10$ dB, the NMSE approximates $10^{-3}$ and $10^{-4}$ under the conditions of $M=N=32$ and $M=N=128$, respectively, serving as the upper and lower NMSE reference bounds of the proposed non-stationary channel reconstruction scheme.
Finally, we compare the proposed scheme with the alternative scheme under non-stationary conditions of $S=4$, $M=N=128$, and $L\in[1,10]$. From the image drawn from ${\tilde{\bf Y}}^{\rm ul}$, YOLO detects all the paths without misses or false alarms, demonstrating the accuracy of the proposed scheme. The total number of paths estimated by the alternative scheme exceeds both the number of actual paths and the number estimated by the proposed scheme, even though all the actual paths are accurately estimated. In a subsystem generated by a single subarray, when the SNR is low, the noise seriously disturbs the detection of YOLO, thereby causing an extremely high false alarm rate, as illustrated in Fig.~\ref{Fig:SchemesL}. Most of the paths estimated by the alternative scheme are fake paths. With increasing SNR, the probability of false alarm decreases. Nevertheless, the overhead amount still quadruples that of the proposed scheme at SNR $=10$ dB.
\begin{figure}
\centering
\includegraphics[scale=0.55]{SchemesL.pdf}
\caption{Number of paths estimated by the proposed and alternative schemes.} \label{Fig:SchemesL}
\end{figure}
For the alternative scheme, with fewer measurements in each subsystem, the estimation accuracy of the angles and delays of actual paths is lower than that of the proposed scheme. Therefore, the NMSE of the uplink channel reconstructed by the alternative scheme is worse than that of the proposed scheme, as shown by the curve labeled ``UL alternative'' in Fig.~\ref{Fig:SchemesNMSE}. Moreover, the large number of fake paths enables the alternative scheme to outperform the LMMSE method by integrating these paths together, thereby compensating for the estimation error of the actual paths and achieving a high global accuracy. When reconstructing the downlink channel, the downlink training and feedback overhead incurred by the downlink gain estimation module of the alternative scheme is much larger than that of the proposed scheme. The alternative scheme has a good NMSE performance in reconstructing the downlink channel; nevertheless, its NMSE is still inferior to that of the proposed scheme. The proposed scheme harvests the multi-subarray gain and achieves almost equivalent NMSE performance in reconstructing the uplink and downlink channels, demonstrating the accuracy of the frequency-independent parameter estimation. Moreover, the NMSE performance of the two schemes under the non-stationary condition lies between that under the stationary conditions of $M=N=32$ and $M=N=128$. This phenomenon is in accordance with the NMSE bounds assumed in Fig.~\ref{Fig:YOLOvsNOMP}.
\begin{figure}
\centering
\includegraphics[scale=0.55]{SchemesNMSE.pdf}
\caption{NMSEs of the proposed and the alternative schemes.} \label{Fig:SchemesNMSE}
\end{figure}
We further compare the performance of the two schemes in practical systems. The spectral efficiency in the downlink is evaluated under the condition of maximal ratio transmitting. The signal received by the user can be expressed as
\begin{equation}\label{Eq:DLsignal}
{\bf r} = \sqrt{P} {\rm diag}\left\{{\bf H}^{\rm dl} \frac{{\hat{\bf H}}^{{\rm dl}H}}{\|{\hat{\bf H}}^{{\rm dl}H}\|}\right\} {\bf x} + {\bf z}^{\rm dl},
\end{equation}
where ${\bf r}\in\mathbb{C}^{N\times 1}$ with the $n$th entry as the received signal on the $n$th subcarrier, ${\bf x}\in\mathbb{C}^{N\times 1}$ is the transmitted signal across all subcarriers, satisfying $\mathbb{E}\{{\bf x}{\bf x}^H\} = {\bf I}$, and ${\bf z}^{\rm dl}\in\mathbb{C}^{N\times 1}$ is the noise whose elements are i.i.d. with zero mean and unit variance. When perfect downlink CSI is available at the user, the spectral efficiency can be calculated as
\begin{equation}\label{Eq:rate}
{\rm SE} = \mathbb{E}\left\{\frac{1}{N}\sum_{n=1}^N \log_2\left(1+\left|\left[{\bf H}^{\rm dl} \frac{{\hat{\bf H}}^{{\rm dl}H}}{\|{\hat{\bf H}}^{{\rm dl}H}\|}\right]_{n,n}\right|^2\right) \right\}.
\end{equation}
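For a single channel realization, \eqref{Eq:rate} can be evaluated as below; the expectation is again taken by Monte-Carlo averaging. Here ${\bf H}^{\rm dl}$ is taken as $N\times M$ so that the diagonal in \eqref{Eq:DLsignal} is indexed by subcarrier, consistent with the received-signal model.

```python
import numpy as np

def spectral_efficiency(H_dl, H_hat_dl):
    # MRT precoder built from the reconstructed channel,
    # normalized by its Frobenius norm as in Eq. (DLsignal).
    W = H_hat_dl.conj().T / np.linalg.norm(H_hat_dl)
    eff = np.diag(H_dl @ W)  # effective gain on each of the N subcarriers
    return float(np.mean(np.log2(1.0 + np.abs(eff) ** 2)))
```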
Fig.~\ref{Fig:SchemesSE} illustrates the Monte-Carlo results of the spectral efficiency. When the refiner is applied, the proposed scheme, the alternative scheme, and the LMMSE method have nearly the same spectral efficiency because their NMSEs are lower than $10^{-2}$. LMMSE requires 128 OFDM symbols for downlink training and feeds back $128\times 128$ complex numbers, whereas the proposed scheme only costs 1 to 10 OFDM symbols for downlink training and feeds back 1 to 10 complex numbers. If the refiner is absent, then the proposed scheme achieves relatively lower spectral efficiency than the alternative scheme because of the smaller number of estimated paths, as well as the greatly reduced amount of downlink training and feedback overhead. On the other hand, although the NMSE performance is poor without the refinement module, the spectral efficiency is not severely impacted. Therefore, directly applying the learning-based parameter estimates is acceptable in single-user systems.
\section{Conclusion}\label{Sec:conclusion}
This study considered the FDD non-stationary massive MIMO system and proposed a deep learning-based scheme to reconstruct the downlink channel. Two key problems, namely the processing time and the non-stationarity identification, were successfully tackled by YOLO. The proposed downlink channel reconstruction scheme was designed to function in five modules built on the power of YOLO. The visibility regions were detected by the non-stationarity identifier, and the estimation accuracy of angles and delays was improved by the angle and delay refiner. Moreover, the reduced case for stationary systems was discussed, and an alternative scheme for non-stationary systems was further analyzed. The numerical results verified the efficiency of the proposed scheme and demonstrated that its NMSE was comparable to that of the NOMP-based scheme in FDD stationary massive MIMO systems at a greatly reduced processing time.
\begin{figure}
\centering
\includegraphics[scale=0.55]{SchemesSE.pdf}
\caption{Spectral efficiency of the proposed and the alternative schemes.} \label{Fig:SchemesSE}
\end{figure}
\section{Appendix}\label{Sec:appendix}
\subsection{Proof of Property \ref{Theo:lightSpot1}}
The image of uplink pilots is generated from ${\tilde{\bf Y}}^{\rm ul}$, which is further obtained from ${\bar{\bf Y}}^{\rm ul}$. The $(m,n)$th entry of ${\bar{\bf Y}}^{\rm ul}$ is expressed as
\begin{equation}\label{Eq:appendix11}
[{\bar{\bf Y}}^{\rm ul}]_{m,n} = \sum_{l=1}^{L} \alpha_l \kappa_{{\rm a},l} \kappa_{{\rm t},l},
\end{equation}
where
\begin{equation}\label{Eq:appendix12}
\kappa_{{\rm a},l} = {\bf a}^H \left(-\frac{m}{\gamma_{\rm a}M}\right) \left({\bf a}(\Theta_l) \odot {\bf p}(\Phi_l) \right),
\end{equation}
and
\begin{equation}\label{Eq:appendix13}
\kappa_{{\rm t},l} = {{\bf q}^T(\Gamma_l){\bf q} \left(-\frac{n}{\gamma_{\rm t}N}\right)}.
\end{equation}
We initially derive the expression of $\kappa_{{\rm a},l}$. In accordance with \eqref{Eq:avec} and \eqref{Eq:pvec}, \eqref{Eq:appendix12} can be expressed by
\begin{equation}\label{Eq:appendix14}
\kappa_{{\rm a},l} = e^{j2\pi m_{l,{\rm start}} \left(\Theta_l+\frac{m}{\gamma_{\rm a}M}\right)} +\cdots+ e^{j2\pi m_{l,{\rm end}}\left(\Theta_l+\frac{m}{\gamma_{\rm a}M}\right)},
\end{equation}
where $m_{l,{\rm start}} = (s_{l,{\rm start}}-1)M/S+1$ and $m_{l,{\rm end}} = s_{l,{\rm end}}M/S$. Utilizing the feature of geometric progression, we can further express \eqref{Eq:appendix14} by
\begin{equation}\label{Eq:appendix15}
\begin{aligned}
&\kappa_{{\rm a},l} = \\
&\frac{1-e^{j2\pi\left(m_{l,{\rm end}}-m_{l,{\rm start}}+1\right)\left(\Theta_l+\frac{m}{\gamma_{\rm a}M}\right)}} {1-e^{j2\pi\left(\Theta_l+\frac{m}{\gamma_{\rm a}M}\right)}} e^{j2\pi m_{l,{\rm start}}\left(\Theta_l+\frac{m}{\gamma_{\rm a}M}\right)}.
\end{aligned}
\end{equation}
In addition,
\begin{equation}\label{Eq:appendix16}
{1-e^{j2\pi\left(\Theta_l+\frac{m}{\gamma_{\rm a}M}\right)}} = -2j\, e^{j\pi\left(\Theta_l+\frac{m}{\gamma_{\rm a}M}\right)} \sin\left(\pi\left(\Theta_l+\frac{m}{\gamma_{\rm a}M}\right) \right).
\end{equation}
Thereafter, we can obtain the module of $\kappa_{{\rm a},l}$ as
\begin{equation}\label{Eq:appendix17}
|\kappa_{{\rm a},l}| = \frac{\sin\left(\pi\left(m_{l,{\rm end}}-m_{l,{\rm start}}+1\right)\left(\Theta_l+\frac{m}{\gamma_{\rm a}M}\right) \right)} {\sin\left(\pi\left(\Theta_l+\frac{m}{\gamma_{\rm a}M}\right) \right)},
\end{equation}
which is the sinc function shown in Fig.~\ref{Fig:CoordinateSystems}(b). In accordance with \eqref{Eq:appendix17}, $|\kappa_{{\rm a},l}|$ achieves its maximal value, i.e., $m_{l,{\rm end}}-m_{l,{\rm start}}+1$, when $\Theta_l+{m}/({\gamma_{\rm a}M})=0$. The center of the $l$th dark spot has the maximal value. Thus, the vertical coordinate of the dark spot center is
\begin{equation}\label{Eq:appendix18}
y=-\frac{m}{\gamma_{\rm a}M}=\Theta_l.
\end{equation}
Similarly, we calculate the module of $\kappa_{{\rm t},l}$ as
\begin{equation}\label{Eq:appendix19}
|\kappa_{{\rm t},l}| = \frac{\sin\left(\pi N \left(\Gamma_l-\frac{n}{\gamma_{\rm t}N} \right)\right)} {\sin\left(\pi \left(\Gamma_l-\frac{n}{\gamma_{\rm t}N} \right)\right)}.
\end{equation}
In accordance with \eqref{Eq:appendix19}, $\kappa_{{\rm t},l}$ achieves its maximal value, i.e., $N$, when $\Gamma_l-{n}/({\gamma_{\rm t}N})=0$. Thus, the horizontal coordinate of the dark spot center of the $l$th cross-style pattern is
\begin{equation}\label{Eq:appendix20}
x=\frac{n}{\gamma_{\rm t}N}=\Gamma_l.
\end{equation}
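The magnitude in \eqref{Eq:appendix17} is a Dirichlet (aliased sinc) kernel. A quick numerical check of the peak location and peak value, writing $K=m_{l,{\rm end}}-m_{l,{\rm start}}+1$ for the number of contributing antennas, is:

```python
import numpy as np

def dirichlet_mag(x, K):
    # |sin(pi K x) / sin(pi x)|: equals K at x = 0 and vanishes at
    # x = q/K for nonzero integers q, matching Eq. (appendix17).
    x = np.asarray(x, dtype=float)
    with np.errstate(divide="ignore", invalid="ignore"):
        ratio = np.abs(np.sin(np.pi * K * x) / np.sin(np.pi * x))
    return np.where(np.isclose(np.sin(np.pi * x), 0.0), float(K), ratio)
```

The maximum at $x=0$ reproduces the dark spot center of \eqref{Eq:appendix18}, and the zeros at $x=q/K$ give the white points used in the proof of Property \ref{Theo:lightSpot2}.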
\subsection{Proof of Property \ref{Theo:lightSpot2}}
Given that $m_{l,{\rm end}}-m_{l,{\rm start}}+1 = (s_{l,{\rm end}}-s_{l,{\rm start}}+1)M/S$, \eqref{Eq:appendix17} can be further expressed by
\begin{equation}\label{Eq:appendix21}
|\kappa_{{\rm a},l}| = \frac{\sin\left(\pi\frac{M}{S}\left(s_{l,{\rm end}}-s_{l,{\rm start}}+1\right)\left(\Theta_l+\frac{m}{\gamma_{\rm a}M}\right) \right)} {\sin\left(\pi\left(\Theta_l+\frac{m}{\gamma_{\rm a}M}\right) \right)}.
\end{equation}
\eqref{Eq:appendix21} shows that $\kappa_{{\rm a},l}$ achieves its minimum value, i.e., 0, when
\begin{equation}\label{Eq:appendix22}
\frac{M}{S}\left(s_{l,{\rm end}}-s_{l,{\rm start}}+1\right)\left(\Theta_l+\frac{m}{\gamma_{\rm a}M}\right) = q,
\end{equation}
where $q$ is a nonzero integer. The vertical coordinates of the white points along the vertical central line of the dark spot are
\begin{equation}\label{Eq:appendix23}
y = -\frac{m}{\gamma_{\rm a}M} = \Theta_l+\frac{qh_l}{2},
\end{equation}
and $h_l$ is defined in \eqref{Eq:WidthHeight}. The vertical coordinate of the dark spot is $\Theta_l$; thus, the half-height of the dark spot is
\begin{equation}\label{Eq:appendix24}
\left|\left(\Theta_l-\frac{h_l}{2}\right)-\Theta_l\right|=\frac{h_l}{2}.
\end{equation}
Similarly, \eqref{Eq:appendix19} shows that $\kappa_{{\rm t},l}$ achieves its minimum value, i.e., 0, when
\begin{equation}\label{Eq:appendix25}
\Gamma_l-\frac{n}{{\gamma_{\rm t}N}} = \frac{qw_l}{2}.
\end{equation}
Accordingly, the half-width of the dark spot is $w_l/2$.
\subsection{Proof of Property \ref{Theo:PsApprox}}
If ${\tilde\Theta}_l\approx\Theta_l$ and ${\tilde\Gamma}_l\approx\Gamma_l$, then $P_{l,s}$ is approximated by $P_{l,s}\approx P^{\rm ul}\sum_{k=1}^L {\eta_{1,k}} +\eta_2$, where
\begin{equation}\label{Eq:appendix31}
\begin{aligned}
&\eta_{1,k} = \\ &\left|\alpha_k \left({\bf a}(\Theta_l)\odot{\bf p}(\{s\})\right)^H \left({\bf a}(\Theta_k)\odot{\bf p}(\Phi_k)\right){\bf q}^T(\Gamma_k) {\bf q}^*({\Gamma}_l)\right|^2,
\end{aligned}
\end{equation}
and
\begin{equation}\label{Eq:appendix32}
\eta_{2} = \left|\left({\bf a}(\Theta_l)\odot{\bf p}(\{s\})\right)^H {\bf Z}^{\rm ul} {\bf q}^*({\Gamma}_l)\right|^2.
\end{equation}
If $M/S$ is large, then $P_{l,s}\approx\mathbb{E}\{P_{l,s}\}$,
\begin{equation}\label{Eq:appendix33}
\mathbb{E}\{\eta_{1,k}\} =
\begin{cases}
P^{\rm ul}|\alpha_l|^2{M^2N^2}/{S^2}, & \text{if $k=l$ and $s\in\Phi_l$}, \\
0, & \text{otherwise},
\end{cases}
\end{equation}
and $\mathbb{E}\{\eta_{2}\} \approx {MN}/{S}$ hold. Finally, we obtain \eqref{Eq:PsApprox} by applying \eqref{Eq:appendix32} and \eqref{Eq:appendix33}.
\section{Introduction}
This paper is a follow-up and update of our previous papers \cite{Bailey:2015tba, Bailey:2015frw}.
In the standard model, the indirect CP violation parameter of the
neutral kaon system $\varepsilon_{K}$ is
\begin{align}
\label{eq:epsK_def}
\varepsilon_{K}
& \equiv \frac{\mathcal{A}(K_L \to \pi\pi(I=0))}
{\mathcal{A}(K_S \to \pi\pi(I=0))}
\nonumber \\
& = e^{i\theta} \sqrt{2}\sin{\theta}
\Big( C_{\varepsilon} \hat{B}_{K} X_\text{SD}
+ \frac{ \xi_{0} }{ \sqrt{2} } + \xi_\text{LD} \Big)
+ \mathcal{O}(\omega\varepsilon^\prime)
+ \mathcal{O}(\xi_0 \Gamma_2/\Gamma_1) \,,
\end{align}
where $C_{\varepsilon}$ is a well-known coupling, and $X_\text{SD}$
is the short distance contribution from the box diagrams.
Master formulas for $C_{\varepsilon}$, $X_\text{SD}$, $\xi_0$, and
$\xi_\text{LD}$ are given in Ref.~\cite{Bailey:2015tba}.
Since Lattice 2015, there have been major updates of lattice QCD
inputs such as $V_{cb}$, $\hat{B}_{K}$, $\xi_0$, and $\xi_2$.
Hence, it is time to update the current status of $\varepsilon_{K}$.
\section{Input parameter $|V_{cb}|$}
\input{table_Vcb}
Let us begin with $V_{cb}$.
In Table \ref{tab:Vcb-Vub}, we summarize updated results for $|V_{cb}|$
and $|V_{ub}|$.
In Ref.~\cite{DeTar:2015orc},
DeTar has collected the results for the $\bar{B}\to D\ell\bar{\nu}$
decay mode at non-zero recoil from both lattice QCD \cite{
Lattice:2015rga, Na:2015kha} and the experiments of Babar \cite{
Aubert:2008yv} and Belle \cite{ Glattauer:2015yag} to make a
combined fit of all of them.
This result corresponds to the green band in Fig.~\ref{fig:Vcb-Vub}.
We combine the results of Refs.~\cite{DeTar:2015orc} ($\bar{B}\to
D\ell\bar{\nu}$) and \cite{Bailey2014:PhysRevD.89.114504} ($\bar{B}\to
D^*\ell\bar{\nu}$) to obtain the uncorrelated weighted average, which
corresponds to the ``ex-combined'' result in Table \ref{tab:Vcb-Vub}.
This value is shown as an orange circle in Fig.~\ref{fig:Vcb-Vub}.
The black cross represents results of inclusive $|V_{cb}|$ and $|V_{ub}|$.
The inclusive results are about $3\sigma$ away from
those of the exclusive decays as well as
the LHCb results of $|V_{ub}/V_{cb}|$
(the magenta band in Fig.~\ref{fig:Vcb-Vub}).
\section{Input parameter $\xi_0$}
There are two independent methods to determine $\xi_0$ in lattice QCD:
One is the indirect method, and the other is the direct method.
The parameter $\xi_0$ is connected with $\varepsilon'/\varepsilon$ and $\xi_2$ as follows,
\begin{align}
&\xi_0 = \frac{\Im A_0}{\Re A_0}, \qquad
\xi_2 = \frac{\Im A_2}{\Re A_2}, \qquad
\Re \left(\frac{\varepsilon'}{\varepsilon} \right) =
\frac{\omega}{\sqrt{2} |\varepsilon_K|} (\xi_2 - \xi_0) \,.
\label{eq:e'/e:xi0}
\end{align}
In the indirect method, we determine $\xi_0$ from the experimental
values of $\Re(\varepsilon'/\varepsilon)$, $\varepsilon_K$, $\omega$, and the lattice QCD input
$\xi_2$ using Eq.~\eqref{eq:e'/e:xi0}.
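The indirect determination amounts to inverting the last relation in Eq.~\eqref{eq:e'/e:xi0}. A minimal numerical sketch (the inputs below are illustrative placeholders, not the values adopted in this analysis) is:

```python
import math

def xi0_indirect(xi2, re_eps_ratio, abs_eps_K, omega):
    # Invert Re(eps'/eps) = omega / (sqrt(2) |eps_K|) * (xi2 - xi0) for xi0.
    return xi2 - math.sqrt(2.0) * abs_eps_K * re_eps_ratio / omega
```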
Recently, RBC-UKQCD reported new results for $\xi_2$ in Ref.~\cite{
Blum:2015ywa}.
The results for $\xi_0$ using the indirect method are summarized
in Table \ref{tab:in-LD}.
\input{table_xi0}
Recently, RBC-UKQCD also reported new lattice QCD results for $\Im A_0$
calculated using domain wall fermions \cite{ Bai:2015nea}.
Using the experimental value of $\Re A_0$, we can determine $\xi_0$
directly from $\Im A_0$.
RBC-UKQCD also reported the S-wave $\pi-\pi$ (I=0) scattering
phase shift $\delta_0 = 23.8(49)(12)^\circ$ \cite{ Bai:2015nea}.
This value is $3.0\sigma$ lower than the conventional determination
of $\delta_0$ in Refs.~\cite{GarciaMartin:2011cn} (KPY-2011) and
\cite{ Colangelo:2001df, Colangelo2016:MITP} (CGL-2001).
The values for $\delta_0$ are summarized in Table \ref{tab:d0}.
In Fig.~\ref{fig:d0-exp}, we show the results of KPY-2011.
They used a singly subtracted Roy-like equation to do the
interpolation around $\sqrt{s} = m_K$ (kaon mass).
Their fitting to the experimental data works well from the threshold
to $\sqrt{s} = 800\mathop{\rm MeV}\nolimits$.
\input{table_d0}
In Fig.~\ref{fig:d0-rbc}, we show the fitting results of both
KPY-2011 and CGL-2001 as well as the RBC-UKQCD result.
There is essentially no difference between KPY-2011 and CGL-2001
in the region near $\sqrt{s} = m_K$.
Here, we observe the $3.0\sigma$ gap between RBC-UKQCD and
KPY-2011.
In contrast, in the case of $\delta_2$ (S-wave, $I=2$), there is no
difference between RBC-UKQCD and KPY-2011 within statistical
uncertainty.
\input{fig_d0}
Considering all aspects, we conclude that the direct calculation
of $\Im A_0$ and $\xi_0$ by RBC-UKQCD in Ref.~\cite{ Bai:2015nea}
may have unresolved issues.
Hence, we use the indirect method to determine $\xi_0$ in this paper.
Regarding $\xi_\text{LD}$, the long distance effect in
the dispersive part, there has been an on-going attempt to calculate
it on the lattice \cite{ Christ:2014qwa}.
However, this attempt \cite{Bai2016:Latt} remains, at present, an
exploratory study rather than a precision measurement.
Hence, we use the rough estimate of $\xi_\text{LD}$ in Ref.~\cite{
Christ:2014qwa} in this paper, which is given in Table
\ref{tab:in-LD}.
\section{Input parameter $\hat{B}_{K}$}
In Table \ref{tab:in-BK}, we present results for $\hat{B}_{K}$ calculated in
lattice QCD with $N_f=2+1$ flavors.
Here, FLAG-2016 represents the global average over the results of
BMW-2011 \cite{ Durr2011:PhysLettB.705.477}, Laiho-2011 \cite{
Laiho:2011np}, RBC-UK-2016 \cite{ Blum:2014tka}, and SWME-2016
\cite{ Jang:2015sla}, which is reported in Ref.~\cite{ Aoki:2016frl}.
SWME-2014 represents the $\hat{B}_{K}$ result reported in Ref.~\cite{
Bae2014:prd.89.074504}.
RBC-UK-2016 represents that reported in Ref.~\cite{ Blum:2014tka}.
The results of SWME-2016 are obtained using fitting based on staggered
chiral perturbation theory (SChPT) in the infinite volume limit, while
those of SWME-2014 are obtained using fitting based on SChPT with
finite volume corrections included at the NLO level.
In this paper, we use the FLAG-2016 result of $\hat{B}_{K}$.
\section{Other input parameters}
For the Wolfenstein parameters $\lambda$, $\bar{\rho}$, and
$\bar{\eta}$, both CKMfitter and UTfit updated their results in
Refs.~\cite{ Charles:2015gya, UTfit2016:web}, while the angle-only-fit
has not been updated since 2015.
The results are summarized in Table \ref{tab:in-wolf}.
For the QCD corrections $\eta_{cc}$, $\eta_{ct}$, and
$\eta_{tt}$, we use the same values as in Ref.~\cite{ Bailey:2015tba},
which are given in Table \ref{tab:in-eta}.
The other input parameters are the same as in Ref.~\cite{
Bailey:2015tba} except for the charm quark mass $m_c(m_c)$; they are
summarized in Table \ref{tab:in-extra}.
For the charm quark mass, we use the HPQCD results of $m_c(m_c)$
reported in Ref.~\cite{ Chakraborty:2014aca}.
\input{table_in_par}
\section{Results for $\varepsilon_{K}$ with lattice QCD inputs}
In Fig.~\ref{fig:eps-flag}, we show the results for $\varepsilon_{K}$ evaluated
directly from the standard model with the lattice QCD inputs described
in the previous sections.
In Fig.~\ref{fig:eps-flag-ex}, the blue curve represents the
theoretical evaluation of $\varepsilon_{K}$ with the FLAG $\hat{B}_{K}$, AOF for the
Wolfenstein parameters, and exclusive $V_{cb}$ that corresponds to
ex-combined in Table \ref{tab:Vcb-Vub}.
Here the red curve represents the experimental value of $\varepsilon_{K}$.
In Fig.~\ref{fig:eps-flag-in}, the blue curve represents the same as
in Fig.~\ref{fig:eps-flag-ex} except that the inclusive $V_{cb}$
in Table \ref{tab:Vcb-Vub} is used.
Our preliminary results are, in units of $1.0\times 10^{-3}$,
\begin{align}
|\varepsilon_{K}| &= 1.69 \pm 0.17 && \text{for exclusive $V_{cb}$ (lattice QCD)} \, ,
\\
|\varepsilon_{K}| &= 2.10 \pm 0.21 && \text{for inclusive $V_{cb}$ (QCD sum rules)} \, ,
\\
|\varepsilon_{K}| &= 2.228 \pm 0.011 && \text{(experimental value)} \, .
This indicates that there is $3.2\sigma$ tension in the exclusive
$V_{cb}$ channel (lattice QCD) and no tension in the inclusive
$V_{cb}$ channel (heavy quark expansion; QCD sum rules).
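The quoted tensions follow from dividing the theory--experiment difference by the errors combined in quadrature; a minimal sketch (our own helper, using the numbers above):

```python
import math

# sigma-distance between theory and experiment, errors in quadrature
def tension(th, th_err, ex, ex_err):
    return abs(ex - th) / math.sqrt(th_err ** 2 + ex_err ** 2)

sigma_excl = tension(1.69, 0.17, 2.228, 0.011)   # exclusive V_cb
sigma_incl = tension(2.10, 0.21, 2.228, 0.011)   # inclusive V_cb
print(f"exclusive: {sigma_excl:.1f} sigma, inclusive: {sigma_incl:.1f} sigma")
```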
\input{fig_eps_flag}
\acknowledgments
We thank R.~Van de Water for helpful discussion on $V_{cb}$.
The research of W.~Lee is supported by the Creative Research
Initiatives Program (No.~20160004939) of the NRF grant funded by the
Korean government (MEST).
J.A.B. is supported by the Basic Science Research Program of the
National Research Foundation of Korea (NRF) funded by the Ministry of
Education (No.~2015024974).
W.~Lee would like to acknowledge the support from the KISTI
supercomputing center through the strategic support program for the
supercomputing application research (No.~KSC-2014-G3-003).
Computations were carried out on the DAVID GPU clusters at Seoul
National University.
\section{Introduction}
Given a tractable and often normalized base distribution $\pi_0(z)$ and unnormalized target $\tilde{\pi}_1(z)$, many statistical methods require a path $\gamma: [0, 1] \to \mathcal{P}$, where $\mathcal{P}$ is a family of unnormalized density functions with $\gamma(0) = \pi_0(z)$ and $\gamma(1) = \tilde{\pi}_1(z)$.
For example, marginal likelihood estimation methods such as \gls{TI} \citep{ogata1989monte} or \gls{AIS} \citep{neal2001annealed}
and \gls{MCMC} methods such as parallel tempering \citep{earl2005parallel} and \gls{SMC} \citep{del2006sequential} typically use the geometric path with mixing parameter $\beta$,
\begin{align}
\tilde{\pi}_{\beta}(z) = \exp\left\{(1-\beta)\log \pi_0(z) + \beta\log \tilde{\pi}_1(z)\right\}.\label{eq:geo_path}
\end{align}
In the Bayesian context, $\pi_0(z)$ and $\pi_1(z)$ can represent the prior and posterior distribution, respectively, in which case the geometric path amounts to tempering the likelihood term \citep{friel2008marginal, nguyenEfficientSequentialMonteCarlo2015a}.
Previous work has demonstrated that theoretical or empirical improvements upon the geometric path can be achieved, but the applicability of these methods remains limited in practice due to restrictive assumptions on the parametric form of the endpoint distributions. \citet{gelman1998simulating} derive an optimal path in distribution space, but it is intractable to implement beyond toy examples. The moment-averaging path of \citet{grosse2013annealing} demonstrates performance gains for partition function estimation in Restricted Boltzmann Machines, but is only applicable for endpoint distributions which come from an exponential family. \citet{thang} proposed a path based on $\alpha$-divergence minimization using an iterative projection scheme from \citet{minka2005divergence}, which is also reliant on exponential family assumptions.
In this work, we propose $q$-paths, which can be constructed between arbitrary endpoint distributions and admit a closed form that can be used directly for \gls{MCMC} sampling
\begin{align}
\tilde{\pi}_{\beta,q}(z) &= \bigg[ (1-\beta) \, \pi_0(z)^{1-q} + \beta \, \tilde{\pi}_1(z)^{1-q} \bigg]^{\frac{1}{1-q}}\label{eq:qpath_mix_form}
\end{align}
Our $q$-paths adapt the $\alpha$-integration of \citet{amari2007integration} to the problem of annealing between two unnormalized densities, with our notation $q$ intended to highlight connections with the deformed logarithm and exponential functions from nonextensive thermodynamics \citep{tsallis2009introduction, naudts2011generalised}. $q$-paths may be viewed as taking the generalized mean \citep{kolmogorov1930, de2016mean} of the endpoint densities according to a mixing parameter $\beta$ and monotonic transformation function $\ln_q(u) = \frac{1}{1-q}( u^{1-q} - 1)$. As $q \rightarrow 1$, we recover the natural logarithm and geometric mean in \cref{eq:geo_path}, while the arithmetic mean corresponds to $q=0$.
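A minimal numerical sketch of \cref{eq:qpath_mix_form} (function and variable names are ours, for illustration): evaluating the power mean in log-space avoids underflow when the endpoint densities differ by many orders of magnitude, and $q \to 1$ recovers the geometric path.

```python
import numpy as np

def log_qpath(log_p0, log_p1, beta, q):
    """Log-density of the q-path intermediate (Eq. qpath_mix_form),
    computed in log-space for numerical stability; q = 1 is the
    geometric path."""
    if q == 1.0:
        return (1.0 - beta) * log_p0 + beta * log_p1
    a = np.log1p(-beta) + (1.0 - q) * log_p0
    b = np.log(beta) + (1.0 - q) * log_p1
    m = np.maximum(a, b)                      # stable log-sum-exp
    return (m + np.log(np.exp(a - m) + np.exp(b - m))) / (1.0 - q)

# sanity check: q -> 1 approaches the geometric path, even when the
# endpoint densities differ by many orders of magnitude
log_p0 = np.array([-1.0, -50.0])
log_p1 = np.array([-40.0, -2.0])
geo  = log_qpath(log_p0, log_p1, beta=0.3, q=1.0)
near = log_qpath(log_p0, log_p1, beta=0.3, q=1.0 - 1e-6)
```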
As previous analysis of the geometric path revolves around the exponential family of distributions \citep{grosse2013annealing, brekelmans2020tvo, brekelmans2020lref}, we show in Sec. \ref{sec:path_exp_fam} that our proposed paths have an interpretation as a $q$-exponential family of density functions
\begin{align}
\tilde{\pi}_{\beta,q}(z) = \pi_0(z) \, \exp_q \left\{ \beta \cdot \ln_q \frac{\tilde{\pi}_1(z)}{ \pi_0(z)} \right\}. \label{eq:qpath_exp_form}
\end{align}
\citet{grosse2013annealing} show that intermediate distributions along the geometric and moment-averaged paths correspond to the solution of a weighted forward or reverse \textsc{kl} divergence minimization objective, respectively. In Sec. \ref{sec:vrep_breg}, we generalize these variational representations to $q$-paths, showing that $\tilde{\pi}_{\beta,q}(z)$ minimizes the expected $\alpha$-divergence to the endpoints for an appropriate mapping between $q$ and $\alpha$.
Finally, we highlight several implementation considerations in Sec. \ref{sec:experiments}, observing that $q = 1-\delta$ for small $\delta$ appears most useful both for qualitative mixing behavior and numerical stability. We provide a simple heuristic for setting an appropriate value of $q$, and find that $q$-paths can yield empirical gains for Bayesian inference using \gls{SMC} and marginal likelihood estimation for generative models using \gls{AIS}.
\section{$q$-Likelihood Ratio Exponential Families}\label{sec:path_exp_fam}
Similarly to \cref{eq:lkd_ratio_fam}, we relate $\tilde{\pi}_{\beta, q}$ to a $q$-exponential family with a single sufficient statistic and natural parameter $\beta$
\begin{align}
\tilde{\pi}_{\beta, q}(z) &= \bigg[(1 - \beta) \pi_0(z)^{1 - q} + \beta \tilde{\pi}_1(z)^{1 - q}\bigg]^\frac{1}{1 - q}\\
&= \bigg[\pi_0(z)^{1 - q} + \hl{\beta}\big(\tilde{\pi}_1(z)^{1 - q} - \hl{\pi_0(z)^{1 - q}}\big)\bigg]^\frac{1}{1 - q}\\
&= \hl{\pi_0(z)} \left[\hl{1} + \beta\left(\left(\frac{\tilde{\pi}_1(z)}{\hl{\pi_0(z)}}\right)^{1 - q} - \hl{1}\right)\right]^\frac{1}{1 - q}\\
&= \pi_0(z) \left[1 + \hl{(1-q)} \, \beta \, \hl{\ln_q}\left( \frac{\tilde{\pi}_1(z)}{\pi_0(z)}\right)\right]^\frac{1}{1 - q}\\
&= \pi_0(z) \, \hl{\exp_q} \left\{ \beta \cdot \ln_q\left(\frac{\tilde{\pi}_1(z)}{\pi_0(z)}\right)\right\}. \label{eq:qexp_form}
\end{align}
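The equivalence of \cref{eq:qpath_mix_form} and \cref{eq:qexp_form} can also be checked numerically on arbitrary positive densities (a quick sketch with our own helper functions):

```python
import numpy as np

# ln_q and exp_q deformed functions; as q -> 1 they reduce to log / exp
def ln_q(u, q):
    return (u ** (1 - q) - 1) / (1 - q)

def exp_q(u, q):
    return (1 + (1 - q) * u) ** (1 / (1 - q))

# check Eq. (qpath_mix_form) == Eq. (qexp_form) on random positive values
rng = np.random.default_rng(0)
p0 = rng.uniform(0.1, 2.0, 100)
p1 = rng.uniform(0.1, 2.0, 100)
beta, q = 0.4, 0.7
mix  = ((1 - beta) * p0 ** (1 - q) + beta * p1 ** (1 - q)) ** (1 / (1 - q))
qexp = p0 * exp_q(beta * ln_q(p1 / p0, q), q)
```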
To mirror the likelihood ratio exponential family interpretation of the geometric path in \cref{eq:lkd_ratio_fam}, we introduce a normalization constant $Z_q(\beta)$ and write the normalized $q$-path distribution as
\begin{align}
\pi_{\beta,q}(z) &= \frac{1}{Z_q(\beta)} \, \pi_0(z)\exp_q \left\{\beta \cdot T(z) \right\} \\
Z_q(\beta) &:= \int \tilde{\pi}_{\beta, q}(z) \, dz \, , \quad T(z) := \ln_q \frac{\tilde{\pi}_1(z)}{\pi_0(z)}
\end{align}
which recovers \cref{eq:lkd_ratio_fam} as $q \to 1$.
Note that we normalize using $Z_q(\beta)$ instead of subtracting a $\psi_q(\beta)$ term inside the $\exp_q$ as in the standard definition of a parametric $q$-exponential family \citep{naudts2009q, naudts2011generalised, amari2011q}
\begin{align}
\pi_{\theta,q}(z) &= g(z) \, \exp_q \big\{ \theta \cdot \phi_q(z) - \psi_q(\theta ) \big\}, \label{eq:qexp_fam1}
\end{align}
where we use $\phi_q(z)$ to indicate a general sufficient statistic vector which may differ from ${T(z) = \ln_{q} \tilde{\pi}_1(z) / \pi_0(z)}$ above.
While $\log Z(\beta) = \psi(\beta)$ for $q=1$, translating between these normalization constants for $q \neq 1$ requires a non-linear transformation of the parameters. This delicate issue of normalization has been noted in \citep{matsuzoe2019normalization,suyari2020advantages,naudts2011generalised}, and we give a detailed discussion in App. \ref{app:normalization}.
In App. \ref{sec:student}, we use the $\psi_q(\theta)$ normalization constant to derive an analogue of the moment-averaging path between parametric $q$-exponential family endpoints.
\paragraph{$q$-Paths for Parametric Endpoints}
The geometric path has a particularly simple form when annealing between exponential family endpoint distributions
\begin{align}
\theta_{\beta} = (1-\beta) \, \theta_0 + \beta\, \theta_1 \label{eq:geo_path_nat_params} \, .
\end{align}
In \cref{app:same_family}, we verify \cref{eq:geo_path_nat_params} and show that the same result holds for $q$-paths between endpoint distributions within the same $q$-exponential family.
Intuitively, for the (generalized) exponential family distribution in \cref{eq:qexp_fam1}, we can write the unnormalized density ratio $\ln_q \tilde{\pi}_{\theta}(z) / g(z) = \theta \cdot \phi(z)$ as a linear function of the parameters $\theta$. Thus, the $q$-path generalized mean over density functions with $h_q(\tilde{\pi}_{\theta_i}) = \ln_q \tilde{\pi}_{\theta_i}(z)$ will translate to an arithmetic mean in the parameter space with $h_1(\theta_i) = \theta_i$.
\section{Comparison of Exponential Family and $q$-Exponential Family}\label{app:breg_table}
\begin{table}[h]
\centering
\caption{Comparison of the exponential family ($q=1$) and the $q$-exponential family.\label{tab:properties}}
\begin{tabular}{p{2cm} p{6.1cm} p{6.5cm}}
\cmidrule(lr){2-2} \cmidrule(l){3-3} \\[.25ex]
& Exponential Family ($q=1$) & $q$-Exponential Family \\ \midrule
Definition & $\pi_{\beta}(z) = \pi_0(z) \, \exp \{ \beta \cdot \phi(z) - \psi(\beta) \}$ & $\pi_{\lambda}(z) = \pi_0(z) \, \exp_q \{ \lambda \cdot \phi(z) - \psi_q(\lambda) \} $ \\[2ex]
Free Energy & $\psi(\beta) = \ln Z_{\beta} $ & $\psi_q(\lambda)$ \\[2ex]
Escort Dist. & $\Pi_{\beta, 1} = \pi_0(z)^{1-1} \, \pi_{\beta}(z)^1 = \pi_{\beta}(z) $ & $ \Pi_{\lambda,q} = \dfrac{1}{z_{q}(\lambda)} \pi_0(z)^{1-q} \, \pi_{\lambda,q}(z)^q $ \\[2ex]
Dual Params & $\dfrac{\partial \psi(\beta)}{\partial \beta^j} = \eta^j = \mathbb{E}_{\pi_{\beta}}[\phi_j(z)]$ \, (standard) & $\dfrac{\partial \psi_q(\lambda)}{\partial \lambda^j} =\eta^j_q =\mathbb{E}_{\Pi_{\lambda,q}}[\phi_j(z)] $ \, (escort) \\[2ex]
Bregman Div. & $D_{\psi}[\beta^{\prime} : \beta ] = D_{KL}[ \pi_{\beta} || \pi_{\beta^{\prime}} ] $ & $D_{\psi_q}[\lambda^{\prime} : \lambda ] = \mathbb{E}_{\Pi_{\lambda, q} }[ \ln_q \dfrac{\pi_{\lambda,q}}{\pi_0} - \ln_q \dfrac{\pi_{\lambda^{\prime},q}}{\pi_0}] $ \\[2ex]
& $\phantom{D_{\psi}[\beta^{\prime} : \beta ]} = \mathbb{E}_{\pi_{\beta}} [ \ln \pi_{\beta} - \ln \pi_{\beta^{\prime}} ]$ & \\[2ex]
Conjugate $\psi^{*}$ & $\psi^{*}(\eta) = D_{\psi}[\pi_0 || \pi_{\beta_{\eta}}] =D_{KL}[\pi_{\beta_{\eta}}||\pi_0 ] $ & $\psi_q^{*}(\eta) = D_{\psi_q}[\pi_0 ||\pi_{\lambda_{\eta},q}] = \mathbb{E}_{\Pi_{\lambda, q} }[ \ln_q \dfrac{\pi_{\lambda_{\eta},q}}{\pi_0} ] $\\[2ex]
& & $\phantom{\psi_q^{*}(\eta) = } \hfill = \dfrac{1}{1-q}\bigg(\dfrac{1}{z_q(\lambda)} -1 \bigg) $ \\[3.5ex] \\
Neg. Entropy & $\psi^{*}(\eta) = \mathbb{E}_{\pi_{\beta_{\eta}}}[ \log \pi_{\beta_{\eta}} ] $\footnotesize \quad (if $\pi_0 =$uniform)& $\psi_q^{*}(\eta) = \mathbb{E}_{\Pi_{\lambda, q}}[ \ln_q \pi_{\lambda_{\eta},q} ] $ \footnotesize \quad (if $\pi_0 = $uniform) \\[3ex]
\small Dual Definition & $\pi_{\beta}(z) = \pi_0(z) \exp \{ \beta \, ( \phi(z) - \eta ) + \psi^{*}(\eta) \}$ & $\pi_{\lambda}(z) = \pi_0(z)\exp_q \{ \lambda \, ( \phi(z) - \eta_q ) + \psi_q(\lambda) \}$ \\
\bottomrule
\end{tabular}
\end{table}
\section{Abstract Mean is Invariant to Affine Transformations} \label{app:any_h}
In this section, we show that the abstract mean induced by $h_{q}(u)$ is invariant to affine transformations of $h_q$. That is, for any choice of $a \neq 0$ and $b$,
\begin{align}
h_{q}(u) =
\begin{cases}
a \cdot u^{1-q} + b \hfill & q \neq 1 \\
\log u \hfill & q = 1
\end{cases} \label{eq:alpha_abstract2}
\end{align}
yields the same expression for the abstract mean $\mu_{h_{q}}$. First, we note the expression for the inverse $h^{-1}_{q}(u)$ at $q \neq 1$
\begin{align}
h^{-1}_{q}(u) = \left(\frac{u - b}{a}\right)^{\frac{1}{1-q}}.
\end{align}
Recalling that $\sum_i w_i = 1$, the abstract mean then becomes
\begin{align}
\mu_{h_{q}}(\{w_i\}, \{u_i\}) &= h_{q}^{-1}\left(\sum_i w_i h_{q}(u_i) \right) \\
&= h_{q}^{-1}\left(a \left(\sum_i w_iu_i^{1-q}\right) + b \right) \\
&=\bigg(\sum_i w_i u_i^{1-q} \bigg)^{\frac{1}{1-q}}
\end{align}
which is independent of both $a$ and $b$.
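This invariance is easily verified numerically (a small sketch with our own helper function):

```python
import numpy as np

# abstract mean with h(u) = a * u^(1-q) + b; the output should not
# depend on the affine constants a and b
def power_mean(w, u, q, a=1.0, b=0.0):
    hbar = np.sum(w * (a * u ** (1 - q) + b))
    return ((hbar - b) / a) ** (1 / (1 - q))

w = np.array([0.3, 0.7])
u = np.array([2.0, 5.0])
m_plain  = power_mean(w, u, q=0.5)
m_affine = power_mean(w, u, q=0.5, a=3.7, b=-1.2)
```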
\section{Normalization in q-Exponential Families}\label{app:normalization}
The $q$-exponential family can also be written using the $q$-free energy $\psi_q(\theta)$ for normalization \cite{amari2011q, naudts2011generalised},
\begin{align}
\pi_{\theta,q}(z) &= \pi_0(z) \, \exp_q \big\{ \theta \cdot \phi(z) - \psi_q(\theta ) \big\} \, . \label{eq:qexp_fam_qf}
\end{align}
However, since $\exp_q \{ x + y \} = \exp_q \{ y \} \cdot \exp_q \{ \frac{x}{1 + (1-q) y} \}$ (see \cite{suyari2020advantages} or App. \ref{app:q_sum_product} below) instead of $\exp \{ x + y \} = \exp \{ x \} \cdot \exp \{ y \} $ for the standard exponential, we can not easily move between these ways of writing the $q$-family \cite{matsuzoe2019normalization}.
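The $\exp_q$ sum identity above can be spot-checked numerically, e.g.:

```python
# numerical spot check of exp_q{x + y} = exp_q{y} * exp_q{x / (1 + (1-q) y)}
q = 0.6

def exp_q(u):
    return (1 + (1 - q) * u) ** (1 / (1 - q))

# x and y chosen so every exp_q argument stays in the valid domain
x, y = 0.8, -0.3
lhs = exp_q(x + y)
rhs = exp_q(y) * exp_q(x / (1 + (1 - q) * y))
```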
Mirroring the derivations on pg.~108 of \citet{naudts2011generalised}, we can rewrite \eqref{eq:qexp_fam_qf} using the above identity for $\exp_q \{ x+y\}$, as
\begin{align}
\pi^{(q)}_{\theta}(z) &= \pi_0(z) \, \exp_q \{ \theta \cdot \phi(z) - \psi_q(\theta) \} \label{eq:normalization1} \\
&= \pi_0(z) \, \exp_q \{ - \psi_q(\theta) \} \exp_q \big\{ \frac{\theta \cdot \phi(z)}{1+(1-q)(-\psi_q(\theta))} \big \} \label{eq:normalization2}
\end{align}
Our goal is to express $\pi^{(q)}_{\theta}(z)$ using a normalization constant $Z^{(q)}_\beta$ instead of the $q$-free energy $\psi_q(\theta)$. While the exponential family allows us to freely move between $\psi(\theta)$ and $\log Z_{\theta}$, we must adjust the natural parameters (from $\theta$ to $\beta$) in the $q$-exponential case. Defining
\begin{align}
\beta &= \frac{\theta}{1+(1-q)(-\psi_q(\theta))} \\
Z^{(q)}_\beta &= \frac{1}{\exp_q \{-\psi_q(\theta) \}}
\end{align}
we can obtain a new parameterization of the $q$-exponential family, using parameters $\beta$ and multiplicative normalization constant $Z^{(q)}_\beta$,
\begin{align}
\pi_{\beta,q}(z) &= \frac{1}{Z^{(q)}_\beta} \pi_0(z) \, \exp_{q} \{ \beta \cdot \phi(z) \} \\
&= \pi_0(z) \, \exp_q \big\{ \theta \cdot \phi(z) - \psi_q(\theta ) \big\} = \pi^{(q)}_{\theta}(z) \, .
\end{align}
See \citet{matsuzoe2019normalization}, \citet{suyari2020advantages}, and \citet{naudts2011generalised} for more detailed discussion of normalization in deformed exponential families.
\section{Minimizing $\alpha$-divergences}\label{app:alpha_integration}
\citet{amari2007integration} shows that the $\alpha$ power mean $\pi^{(\alpha)}_{\beta}$ minimizes the expected divergence to a single distribution, for \textit{normalized} measures and $\alpha = 2q-1$. We repeat similar derivations for the case of unnormalized endpoints $\{\tilde{\pi}_i\}$ and $\tilde{r}(z)$ and show
\begin{align}
\tilde{\pi}_{\beta, q} = \argmin \limits_{\tilde{r}(z)}(1-\beta)& D_{\alpha}[\tilde{\pi}_{0}(z)||\tilde{r}(z)] +\beta D_{\alpha}[\tilde{\pi}_{1}(z)||\tilde{r}(z)],
\end{align}
for $\alpha = 2q-1$.
\begin{proof}
Defining $w_0 = (1 - \beta)$ and $w_1 = \beta$, we consider minimizing the functional
\begin{align}
r^*(z) &=\argmin_{\tilde{r}(z)} J[r(z)]
=\argmin_{\tilde{r}(z)} \left(\sum_{i=0}^{N=1} w_i D_\alpha(\tilde{\pi}_i(z)||\tilde{r}(z)) \right)\label{eq:amari_lagrange}
\end{align}
\cref{eq:amari_lagrange} can be minimized using the Euler-Lagrange equations or using the identity
\begin{align}
\frac{\delta f(x)}{\delta f(x')} = \delta(x - x')\label{eq:delta_eq}
\end{align}
from \cite{meng2004introduction}. We compute the functional derivative of $J[r(z)]$ using \eqref{eq:delta_eq}, set to zero and solve for $r$:
\begin{align}
\frac{\delta J[r(z')]}{\delta r(z)} &= \frac{\delta}{\delta r(z)} \left(\sum_{i=0}^{N=1} w_i \left(\frac{1}{q} \int\tilde{\pi}_i(z')dz' + \frac{1}{1-q} \int\tilde{r}(z')dz' - \frac{1}{q(1-q)} \int {\tilde{\pi}_i(z')}^{1-q}r(z')^{q} dz' \right) \right)\\
&= \left(\sum_{i=0}^{N=1} w_i \left(\frac{1}{1-q} \int \hl{\frac{\delta \tilde{r}(z')}{\delta r(z)}} dz' - \frac{1}{q(1-q)} \int {\tilde{\pi}_i(z')}^{1-q} \cdot \hl{q}\cdot r(z')^{\hl{q-1}} \hl{\frac{\delta \tilde{r}(z')}{\delta r(z)}}dz' \right) \right)\\
&= \left(\sum_{i=0}^{N=1} w_i \left(\frac{1}{1-q} \int \hl{\delta(z - z')} dz' - \frac{1}{1-q} \int {\tilde{\pi}_i(z')}^{1-q} \cdot r(z')^{\hl{q-1}} \hl{\delta(z - z')}dz' \right) \right)\\
0 &= \frac{1}{1-q}\sum_{i=0}^{N=1} w_i \left(1 - {\tilde{\pi}_i(z)}^{1-q} \cdot r(z)^{q-1} \right) \\
\sum_{i=0}^{N=1} w_i &= \sum_{i=0}^{N=1} w_i {\tilde{\pi}_i(z)}^{1-q} \cdot r(z)^{q-1} \\
1 &= \sum_{i=0}^{N=1} w_i {\tilde{\pi}_i(z)}^{1-q} \cdot r(z)^{q-1} \\
r(z)^{1-q} &= \sum_{i=0}^{N=1} w_i {\tilde{\pi}_i(z)}^{1-q} \\
r(z) &= \left[(1-\beta) {\tilde{\pi}_0(z)}^{1-q} + \beta{\tilde{\pi}_1(z)}^{1-q}\right]^{\frac{1}{1-q}} = \tilde{\pi}_{\beta,q}(z)
\end{align}
\end{proof}
This result is similar to a general result about Bregman divergences in \citet{Banerjee2005} Prop.~1, although $D_{\alpha}$ is not a Bregman divergence over normalized distributions.
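The proposition can also be sanity-checked on a discrete grid (our own construction, with illustrative unnormalized Gaussian endpoints): the $q$-path density should attain a lower weighted $\alpha$-divergence $J$ than nearby perturbations of it.

```python
import numpy as np

q, beta = 0.5, 0.3
z = np.linspace(-5, 5, 201)
dz = z[1] - z[0]
p0 = np.exp(-0.5 * (z + 1) ** 2)           # unnormalized endpoint densities
p1 = 2.0 * np.exp(-0.5 * (z - 1) ** 2)

def D_alpha(p, r):
    # alpha-divergence between unnormalized measures, with alpha = 2q - 1
    return np.sum(p / q + r / (1 - q)
                  - p ** (1 - q) * r ** q / (q * (1 - q))) * dz

def J(r):
    return (1 - beta) * D_alpha(p0, r) + beta * D_alpha(p1, r)

r_star = ((1 - beta) * p0 ** (1 - q) + beta * p1 ** (1 - q)) ** (1 / (1 - q))
J_star = J(r_star)
J_pert = min(J(1.05 * r_star), J(0.95 * r_star), J(r_star + 0.01))
```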
\subsection{Arithmetic Mean ($q=0$)}\label{app:mixture_path}
For normalized distributions, we note that the moment-averaging path from \citet{grosse2013annealing} is not a special case of the $\alpha$-integration \cite{amari2007integration}. While both minimize a convex combination of reverse \textsc{kl} divergences, \citet{grosse2013annealing} minimize within the constrained space of exponential families,
while \citet{amari2007integration} optimizes over all normalized distributions.
More formally, consider minimizing the functional
\begin{align}
J[r] &= (1-\beta)D_{\mathrm{KL}}[\pi_{0}(z)||r(z)] +\beta D_{\mathrm{KL}}[\pi_{1}(z)||r(z)] \\
&= (1 - \beta)\int \pi_0(z) \log \frac{\pi_0(z)}{r(z)} dz + \beta \int \pi_1(z) \log \frac{\pi_1(z)}{r(z)} dz \\
&= \text{const} - \int \big[(1 - \beta) \pi_0(z) + \beta \pi_1(z) \big] \cdot \log r(z) dz \label{eq:functional}
\end{align}
We will show how \citet{grosse2013annealing} and \citet{amari2007integration} minimize \eqref{eq:functional}.
\paragraph{Solution within Exponential Family}
\citet{grosse2013annealing} constrains $r(z) = \frac{1}{Z(\theta)} h(z) \exp (\theta^T g(z))$ to be a (minimal) exponential family model and minimizes \eqref{eq:functional} w.r.t.\ $r$'s natural parameters $\theta$ (cf. \cite{grosse2013annealing} Appendix 2.2):
\begin{align}
\theta^*_i &= \argmin_\theta J(\theta) \\
&= \argmin_\theta \left(- \int \big[(1 - \beta) \pi_0(z) + \beta \pi_1(z) \big] \left[ \log h(z) + \theta^T g(z) - \log Z(\theta) \right] dz \right)\\
&= \argmin_\theta \left(\log Z(\theta) - \int \big[(1 - \beta) \pi_0(z) + \beta \pi_1(z) \big] \theta^T g(z) dz + \text{const}\right)
\end{align}
where the last line follows because $\pi_0(z)$ and $\pi_1(z)$ are assumed to be correctly normalized. Then to arrive at the moment averaging path, we compute the partials $\frac{\partial J(\theta)}{\partial \theta_i}$ and set them to zero:
\begin{align}
\frac{\partial J(\theta)}{\partial \theta_i} &= \E_{r}[g_i(z)] - (1 - \beta)\E_{\pi_0}[g_i(z)] - \beta \E_{\pi_1}[g_i(z)] = 0 \\
\E_{r}[g_i(z)] &= (1 - \beta)\E_{\pi_0}[g_i(z)] + \beta \E_{\pi_1}[g_i(z)]
\end{align}
where we have used the exponential family identity $\frac{\partial \log Z(\theta)}{\partial \theta_i} = \E_{r_{\theta}}[g_i(z)]$ in the first line.
\paragraph{General Solution}
Instead of optimizing in the space of minimal exponential families, \citet{amari2007integration} adds a Lagrange multiplier to \eqref{eq:functional} and optimizes $r$ directly (cf. \cite{amari2007integration} eq. 5.1 - 5.12)
\begin{align}
r^* &= \argmin_{r} J'[r] \\
&= \argmin_{r} J[r] + \lambda \left(1 - \int r(z) dz\right) \label{eq:lagrange}
\end{align}
We compute the functional derivative of $J'[r]$ using \eqref{eq:delta_eq} and solve for $r$:
\begin{align}
\frac{\delta J'[r]}{\delta r(z)}=&- \int \big[(1 - \beta) \pi_0(z') + \beta \pi_1(z') \big] \frac{1}{r(z')} \frac{\delta r(z')}{\delta r(z)} dz' - \lambda \int \frac{\delta r(z')}{\delta r(z)} dz' \\
=&- \int \big[(1 - \beta) \pi_0(z') + \beta \pi_1(z') \big] \frac{1}{r(z')} \delta(z - z') dz' - \lambda \int \delta(z - z') dz' \\
=&- \big[(1 - \beta)\pi_0(z) + \beta \pi_1(z)\big] \frac{1}{r(z)} - \lambda = 0
\end{align}
Therefore
\begin{align}
r(z) \propto \big[(1 - \beta)\pi_0(z) + \beta \pi_1(z)\big],
\end{align}
which corresponds to our $q$-path at $q=0$, or $\alpha = -1$ in \citet{amari2007integration}. Thus, while both \citet{amari2007integration} and \citet{grosse2013annealing} start with the same objective, they arrive at different optima because they optimize over different spaces.
\section{$q$-Exponential Families and Escort Moment-Averaging Path}\label{sec:student}\label{sec:parametric} \label{app:parametric}
In this section, we provide examples of parametric $q$-exponential family distributions and additional analysis for the special case of annealing between endpoints within the same parametric family. After reviewing the $q$-Gaussian and Student-$t$ distributions as standard examples of the $q$-exponential family, we present the \textit{escort}-moments path, which is analogous to \citet{grosse2013annealing} and relies on the dual parameters of the $q$-family. We experimentally evaluate these paths in toy examples in Fig. \ref{fig:toy_exp}, but note that the applicability of the escort-moments path is limited in practice.
\subsection{Examples of Parametric $q$-Exponential Family Distributions}
\paragraph{$q$-Gaussian and Student-$t$}
The $q$-Gaussian distribution appears throughout nonextensive thermodynamics \citep{naudts2009q, naudts2011generalised, tsallis2009introduction}, and corresponds to simply taking the $\exp_q$ of the familiar first and second moment sufficient statistics. In what follows, we ignore the case of $q<1$ since the $q$-Gaussian has restricted support based on the value of $q$. For $q > 1$, the $q$-Gaussian matches the Student-$t$ distribution, whose degrees of freedom parameter $\nu$ specifies the order of the $q$-exponential and introduces heavy tailed behavior.
The Student-$t$ distribution appears in hypothesis testing with finite samples, under the assumption that the sample mean follows a Gaussian distribution. In particular, the degrees of freedom parameter $\nu = n-1$ can be shown to correspond to an order of the ${q}$-exponential family with $\nu = (3-{q}) / ({q}-1)$ (in 1-d), so that the choice of ${q}$ is linked to the amount of data observed.
We can first write the multivariate Student-$t$ density, specified by a mean vector $\mu$, covariance ${\mathbf{\Sigma}}$, and degrees of freedom parameter $\nu$, in $d$ dimensions, as
\begin{align}
t_{\nu}(x | {\mathbf{\mu}}, {\mathbf{\Sigma}}) = \frac{1}{Z(\nu, {\mathbf{\Sigma}})} \big[ 1 + \frac{1}{\nu} (x - {\mathbf{\mu}})^T {\mathbf{\Sigma}}^{-1} (x-{\mathbf{\mu}}) \big]^{-\big(\frac{\nu + d }{2}\big)} \label{eq:student}
\end{align}
where $Z(\nu, {\mathbf{\Sigma}}) = \Gamma(\frac{\nu+d}{2})/\Gamma(\frac{\nu}{2}) \cdot |{\mathbf{\Sigma}}|^{-1/2} \nu^{-\frac{d}{2}} \pi^{-\frac{d}{2}}$. Note that $\nu > 0$, so that we only have positive values raised to the $-(\nu+d)/2$ power, and the density is defined on the real line.
The power function in \eqref{eq:student} is already reminiscent of the ${q}$-exponential, while we have first and second moment sufficient statistics as in the Gaussian case. We can solve for the exponent, or order parameter $q$, that corresponds to $-(\nu+d)/2$ using $-\big(\frac{\nu + d }{2}\big) = \frac{1}{1-{q}}$. This results in the relations
\begin{align}
\nu = \frac{d - d {q} +2}{{q} - 1} \qquad \text{or} \qquad {q} = \frac{\nu+d+2}{\nu+d}
\end{align}
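These relations can be spot-checked numerically (an illustrative roundtrip, here with $d = 1$):

```python
# roundtrip check of the nu <-> q relations, and of the exponent
# identity -(nu + d)/2 = 1/(1 - q)
d = 1

def q_of_nu(nu):
    return (nu + d + 2) / (nu + d)

def nu_of_q(q):
    return (d - d * q + 2) / (q - 1)

nu = 5.0
q = q_of_nu(nu)
exponent_gap = abs(-(nu + d) / 2 - 1 / (1 - q))
nu_back = nu_of_q(q)
one_d_gap = abs((3 - q) / (q - 1) - nu)   # 1-d relation quoted in the text
```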
We can also rewrite the $\nu^{-1} \, (x - {\mathbf{\mu}})^T {\mathbf{\Sigma}}^{-1} (x-{\mathbf{\mu}})$ term using natural parameters corresponding to the $\{x, x^2\}$ sufficient statistics, as in the Gaussian case (see, e.g., Matsuzoe and Wada (2015), Example 4).
Note that the Student-$t$ distribution has heavier tails than a standard Gaussian, and reduces to a multivariate Gaussian as ${q} \rightarrow 1$ and $\exp_{{q}}(u) \rightarrow \exp(u)$. This corresponds to observing $n\rightarrow \infty$ samples, so that the sample mean and variance approach the ground truth \citep{murphy2007conjugate}.
\paragraph{Pareto Distribution}
The $q$-exponential family can also be used for modeling the \textit{tail} behavior of a distribution \citep{bercher2008new, vehtari2015pareto}, or, in other words, the probability of $p(x)$ restricted to $X > x_{\text{min}} $ and normalized.
For example, the generalized Pareto distribution is defined via the tail function
\begin{align}
P(X > x) = \begin{cases} \big(1 + \xi \frac{x-x_{\text{min}} }{\sigma} \big)^{-\frac{1}{\xi}} \quad \xi \neq 0 \\ \exp\{- \frac{x-x_{\text{min}} }{\sigma}\} \qquad \xi =0
\end{cases}
\end{align}
When $\xi \geq 0$, the domain is restricted to $x \geq x_{\text{min}}$, whereas when $\xi < 0$, the support is between $x_{\text{min}} \leq x \leq x_{\text{min}}-\frac{\sigma}{\xi}$. Writing the CDF as $1- P(X>x)$ and differentiating leads to
\begin{align}
p(x) = \frac{1}{\sigma}\big[1 + \xi \cdot \frac{x-x_{\text{min}}}{\sigma} \big]^{-\frac{1}{\xi}-1}
\end{align}
Solving $-\frac{1}{\xi}-1 = \frac{1}{1-q}$ in the exponent, we obtain $q = \frac{2\xi + 1}{\xi+1}$ or $\xi = \frac{q-1}{2-q}$ .
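A quick roundtrip check of this correspondence (solving the exponent relation $-1/\xi - 1 = 1/(1-q)$ for $\xi$ gives $\xi = (q-1)/(2-q)$):

```python
# roundtrip check of the xi <-> q correspondence for the Pareto tail
def q_of_xi(xi):
    return (2 * xi + 1) / (xi + 1)

def xi_of_q(q):
    return (q - 1) / (2 - q)

xi = 0.5
q = q_of_xi(xi)
exponent_gap = abs((-1 / xi - 1) - 1 / (1 - q))  # both sides of the relation
xi_back = xi_of_q(q)
```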
\subsection{$q$-Paths between Endpoints in a Parametric Family}\label{app:same_family}
If the two endpoints $\pi_0, \tilde{\pi}_1$ are within a $q$-exponential family, we can show that each intermediate distribution along the $q$-path of the same order is also within this $q$-family. However, we cannot make such statements for general endpoint distributions, members of different $q$-exponential families, or $q$-paths which do not match the index of the endpoint $q$-parametric families.
\paragraph{Exponential Family Case}
We assume potentially vector valued parameters $\theta = \{ \theta_i \}_{i=1}^N$ with multiple sufficient statistics $\phi(z) = \{ \phi_i(z) \}_{i=1}^N$, with $\theta \cdot \phi(z) = \sum_{i=1}^N \theta_i \phi_i(z)$.
For a common base measure $g(z)$, let $\pi_0(z) = g(z) \, \exp\{ \theta_0 \cdot \phi(z) \}$ and $\tilde{\pi}_1(z) = g(z) \, \exp \{ \theta_1 \cdot \phi(z) \}$. Taking the geometric mixture,
\begin{align}
\tilde{\pi}_\beta(z) &= \exp \big\{ (1-\beta) \, \log \pi_0(z) + \beta \, \log \tilde{\pi}_1(z) \big\} \\
&= \exp \big \{ \log g(z) + (1-\beta) \, \theta_0 \cdot \phi(z) + \beta \, \theta_1 \phi(z) \big \} \\
&= g(z) \exp \big \{ \big( (1-\beta) \, \theta_0 + \beta \, \theta_1 \big) \cdot \phi(z) \big \}
\end{align}
which, after normalization, will be a member of the exponential family with natural parameter $(1-\beta) \, \theta_0 + \beta \, \theta_1$.
\paragraph{$q$-Exponential Family Case} For a common base measure $g(z)$, let $\pi_0(z) = g(z) \, \exp_q \{ \theta_0 \cdot \phi(z) \}$ and $\tilde{\pi}_1(z) = g(z) \, \exp_q \{ \theta_1 \cdot \phi(z) \}$. The $q$-path intermediate density becomes
\begin{align}
\tilde{\pi}^{(q)}_\beta(z) &= \big[ (1-\beta) \, \pi_0(z)^{1-q} + \beta \, \tilde{\pi}_1(z)^{1-q} \big]^{\frac{1}{1-q}} \\
&= \big[ (1-\beta) \, g(z)^{1-q} \, \exp_q \{ \theta_0 \cdot \phi(z) \}^{1-q} + \beta \, g(z)^{1-q} \,\exp_q \{ \theta_1 \cdot \phi(z) \} ^{1-q} \big]^{\frac{1}{1-q}} \\
&= \bigg[ g(z)^{1-q} \bigg ( (1-\beta) \, \, [1 + (1-q)( \theta_0 \cdot \phi(z))]^{\frac{1}{1-q}(1-q)} + \beta \, [1 + (1-q)( \theta_1 \cdot \phi(z))]^{\frac{1}{1-q} (1-q)} \bigg) \bigg]^{\frac{1}{1-q}} \nonumber \\
&= g(z) \bigg[ 1 + (1-q) \bigg( \big((1-\beta) \, \theta_0 + \beta \, \theta_1 \big) \cdot \phi(z) \bigg) \bigg]^{\frac{1}{1-q}} \\
&= g(z) \exp_q \big\{ \big((1-\beta) \, \theta_0 + \beta \, \theta_1 \big) \cdot \phi(z) \big\}
\end{align}
which has the form of an unnormalized $q$-exponential family density with parameter $(1-\beta) \, \theta_0 + \beta \, \theta_1$.
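This parameter-averaging property can be verified numerically on a grid (a minimal sketch with an arbitrary base measure and sufficient statistic of our own choosing):

```python
import numpy as np

# Grid check: the q-path between two q-exponential densities with common
# base g(z) equals the q-exponential density with averaged parameters.
q, beta = 1.5, 0.4
z = np.linspace(-2, 2, 101)
g = np.exp(-0.1 * z ** 2)                # common base measure
phi = -z ** 2                            # sufficient statistic
th0, th1 = 0.3, 0.8

def exp_q(u):
    # guard for the q-exponential cutoff (not triggered in this example)
    return np.maximum(1 + (1 - q) * u, 0.0) ** (1 / (1 - q))

p0 = g * exp_q(th0 * phi)
p1 = g * exp_q(th1 * phi)
qpath = ((1 - beta) * p0 ** (1 - q) + beta * p1 ** (1 - q)) ** (1 / (1 - q))
avg   = g * exp_q(((1 - beta) * th0 + beta * th1) * phi)
```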
\paragraph{Annealing between Student-$t$ Distributions}\label{app:student1d}
In \cref{fig:student_path}, we consider annealing between two 1-dimensional Student-$t$ distributions. We set $q=2$, which corresponds to $\nu = 1$ with $\nu = (3-q) / (q-1)$, and use the same mean and variance as the Gaussian example in Fig. \ref{fig:alpha_path}, with $\pi_0(z) = t_{\nu=1}( -4, 3)$ and $\pi_1(z) = t_{\nu=1}( 4, 1)$.
For this special case of both endpoint distributions within a parametric family, we can ensure that the $q=2$ path stays within the $q$-exponential family of Student-$t$ distributions, just as the $q=1$ path stayed within the Gaussian family in Fig. \ref{fig:alpha_path}.
Comparing the $q=0.5$ and $q=0.9$ paths in the Gaussian case (Fig. \ref{fig:q_path}) with the $q=1.0$ and $q=1.5$ paths for the Student-$t$ family of order $q=2$, we observe that mixing behavior appears to depend on the relation between the $q$-path parameter and the order of the $q$-exponential family of the endpoints. For our experiments in the main text, we did not find benefit from increasing $q$ beyond 1. However, the toy example above indicates that $q>1$ may be useful in some settings, for example those involving heavier-tailed distributions.
As $q \rightarrow \infty$, so that $1-q \rightarrow -\infty$, the power mean \eqref{eq:abstract_mean} approaches the $\min$ operation. In the Gaussian case in \cref{fig:q_path}, we see that, even at $q=2$, intermediate densities for all $\beta$ appear to concentrate in regions of low density under both $\pi_0$ and $\pi_1$. However, for the heavier-tailed Student-$t$ distributions, we must raise the $q$-path parameter significantly to observe similar behavior.
\begin{figure}[t]
\centering
\includegraphics[trim={0 0 0 0 },clip,width=0.99\textwidth]{sections/figs/student_t.pdf}
\caption{Intermediate densities between Student-$t$ distributions, $t_{\nu = 1}(-4, 3)$ and $t_{\nu = 1}(4,1)$, for various $q$-paths and 10 equally spaced $\beta$.
Note that $\nu=1$ corresponds to $q=2$, so that the $q=2$ path stays within the $q$-exponential family.}
\label{fig:alpha_path2}
\label{fig:student_path}
\vspace*{-.15cm}
\end{figure}
\subsection{Moment-Averaged Path as a Generalized Mean}\label{app:moments_as_generalized_mean}
While our $q$-paths can take arbitrary unnormalized density functions ${\textbf{u} = \big( \tilde{\pi}_0(z), \tilde{\pi}_1(z) \big)}$ as input arguments for the generalized mean, we can reinterpret the moment-averaging path as a generalized mean over the natural parameters ${\textbf{u} = \big( \theta_0, \theta_1 \big)}$. We contrast the difficulty of inverting the function $h(\theta)$ for the moments path (which involves the Legendre transform), against the simple form of the geometric or $q$-paths as arithmetic means in the parameter space $\theta$ as in \cref{app:same_family}.
The moment-averaged path is defined using a convex combination of the dual parameter vectors \citep{grosse2013annealing}, for the restricted case where $\pi_0(z)$ and $\pi_1(z)$ are members of the same exponential family, with parameters $\theta_0$ and $\theta_1$
\begin{align}
\eta(\theta_{\beta}) = (1-\beta) \, \eta(\theta_0) + \beta \, \eta(\theta_1) \, . \label{eq:moments_path}
\end{align}
To solve for the corresponding natural parameters, we calculate the Legendre transform, or a function inversion $\eta^{-1}$.
\begin{align}
\theta_{\beta} = \eta^{-1}\big((1-\beta) \, \eta(\theta_0) + \beta \, \eta(\theta_1)\big) \label{eq:moments_path_theta} \,.
\end{align}
Comparing to the form of \cref{eq:abstract_mean}, we can interpret the moment-averaging path as a generalized mean, with the natural parameters $\bf{u} = (\theta_0, \theta_1)$ as inputs and the sufficient statistic function as the transformation $h(\theta) = \eta(\theta)$, although calculating the inverse is difficult in practice.
This observation highlights the convenience of working with generalized means in unnormalized density function space as in $q$-paths. When constructing paths from generalized means in parameter space $\theta$, one may have to calculate normalization constants or consider the entire domain of the density function. By contrast, the expression for $q$-paths in \cref{eq:qpath_mix_form} only involves inverting a scalar function at each point $z$ in the input sample space.
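As a numerical illustration (a sketch with our own choice of endpoints, not taken from the paper), the inversion $\eta^{-1}$ in \cref{eq:moments_path_theta} happens to be available in closed form for 1-d Gaussians, though in general it requires a numerical solve:

```python
import numpy as np

# 1-d Gaussian as an exponential family: phi(z) = (z, z^2), theta_2 < 0
def eta(theta):
    t1, t2 = theta
    mu, var = -t1 / (2 * t2), -1.0 / (2 * t2)
    return np.array([mu, var + mu ** 2])      # dual params (E[z], E[z^2])

def eta_inv(e):
    mu, var = e[0], e[1] - e[0] ** 2          # closed-form inverse, Gaussian case
    return np.array([mu / var, -1.0 / (2 * var)])

theta0 = eta_inv(np.array([-4.0, 3.0 + 16.0]))   # N(-4, 3)
theta1 = eta_inv(np.array([4.0, 1.0 + 16.0]))    # N(4, 1)

beta = 0.5
eta_beta = (1 - beta) * eta(theta0) + beta * eta(theta1)  # mix dual parameters
theta_beta = eta_inv(eta_beta)                            # map back to natural params
assert np.allclose(eta(theta_beta), eta_beta)
assert np.allclose(eta_beta, [0.0, 18.0])
```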
\subsection{Escort Moment-Averaged Path}
While exponential families are ubiquitous throughout machine learning, whether via common parametric distributions such as Gaussians or energy-based models such as (Restricted) Boltzmann Machines, models involving the $q$-exponential function have received comparatively little attention in machine learning.
Nevertheless, we derive an analogue of the moment-averaged path for endpoint distributions within the same $q$-exponential family, with several parametric examples in App. \ref{app:parametric}. We begin by recalling the definition,
\begin{align}
\pi_{\theta,q}(z) &= g(z) \, \exp_q \big\{ \theta \cdot \phi_q(z) - \psi_q(\theta ) \big\}, \label{eq:qexp_fam}
\end{align}
where $g(z)$ indicates a base distribution and $\psi_q(\theta)$ denotes the $q$-free energy, which is convex as a function of the parameter $\theta$ \citep{amari2011q}.
As in the case of the exponential family, differentiating the $q$-free energy yields a dual parameterization of the $q$-exponential family \citep{amari2011q}. However, the standard expectation is now replaced with the \textit{escort} expectation \citep{naudts2011generalised}
\begin{align}
\eta_q(\theta) = \nabla_\theta \psi_q(\theta) &= \int \frac{\tilde{\pi}_\theta^{(q)}(z)^{q}}{\int \tilde{\pi}_\theta^{(q)}(z^{\prime})^{q} \, dz^{\prime}} \cdot \phi(z) dz \\
&:= \mathbb{E}_{\Pi_q(\theta)}[ \phi(z)]
\end{align}
where $\Pi_q(\theta) \propto \tilde{\pi}_{\theta,q}(z)^{q}$ is the escort distribution for a given $\tilde{\pi}_{\theta,q}$ in a parametric $q$-exponential family. This reduces to the standard expectation for $q=1$ as in \cref{eq:dpsi_dtheta}.
We propose the escort moment-averaging path for endpoints within a $q$-exponential family, using linear mixing in the dual parameters. Letting the function $\eta_{\Pi_q}(\theta)$ output the escort expected sufficient statistics for a $q$-exponential family distribution with parameter $\theta$,
\begin{align}
\eta_{\Pi_q}(\theta_{\beta}) = (1-\beta) \, \eta_{\Pi_q}(\theta_0) + \beta \, \eta_{\Pi_q}(\theta_1)
\end{align}
To provide a concrete example of the escort moment-averaging path in Fig. \ref{fig:student_path}, we consider the Student-$t$ distribution, which uses the same first- and second-order sufficient statistics as a Gaussian distribution and a degrees of freedom parameter $\nu$ that specifies the order of the $q$-exponential function for $q \geq 1$. This parameter induces heavier tails than a standard Gaussian, which appears as a special case as $q \rightarrow 1$ and $\exp_{q}(u) \rightarrow \exp(u)$.
In Fig. \ref{fig:student_path}, we observe that the escort moments path spreads probability mass more widely than the $q$-path, which matches the observations of \citet{grosse2013annealing} in comparing the moment-averaging path to the geometric path for exponential family endpoints.
Note that the $q$-path remains within the $q$-exponential family as shown in \cref{app:same_family}.
We proceed to derive a closed form expression for the parameters of intermediate distributions along the escort moment-averaged path between Student-$t$ endpoints.
\begin{figure*}[t]
\centering
\includegraphics[trim={0 0 0 0 },clip,width=0.99\textwidth]{sections/figs/qmoments_qpath.pdf}
\caption{We visualize the escort-moments path for Student-$t$ endpoints with $t_{\nu}(-4, 3)$ and $t_{\nu}(4,1)$ for various $\nu = (3-q)/(q-1)$. We compare the corresponding $q$-path, whose intermediate densities remain within the $q$-exponential family, to the escort-moments path (\cref{eq:escort}). Note that $q=1.01$ closely resembles the moment-averaged path of \citet{grosse2013annealing}.}
\label{fig:alpha_path}
\label{fig:student_path}
\vspace*{-.15cm}
\end{figure*}
\subsection{Escort Moment-Averaged Path with Student-$t$ Endpoints}\label{app:escort_student}
For the case of the Student-$t$ distribution with degrees of freedom parameter $\nu$, the escort distribution is \textit{also} a Student-$t$ distribution, but with $\nu^\prime = \nu + 2$ and a rescaling of the covariance matrix, $\frac{1}{Z_{\Pi}(\Sigma)} t_{\nu}(z ; \mu, \Sigma)^{q} = t_{\nu+2}(z ; \mu, \frac{\nu}{\nu+2} \Sigma)$
(Tanaka 2010, Matsuzoe 2017).
Finding the escort moment-averaged path thus becomes a moment matching problem over Student-$t$ distributions with a different $\nu$. We seek to find $\pi_\beta(z) = t_{\nu}(z ;\mu_{\beta}, \Sigma_{\beta})$ such that the expected sufficient statistics, under the escort distribution $\Pi_\beta(z) = t_{\nu+2}(z ; \mu_\beta, \frac{\nu}{\nu+2} \Sigma_\beta)$, are equal to
\begin{align}
\mathbb{E}_{\Pi_\beta}\left[ z \right] &= (1-\beta) \, \mathbb{E}_{\Pi_0}\left[ z \right] + \beta \, \mathbb{E}_{\Pi_1}\left[ z \right] \\
\mathbb{E}_{\Pi_\beta}\left[ z z^T \right] &= (1-\beta) \, \mathbb{E}_{\Pi_0}\left[ z z^T \right] + \beta \, \mathbb{E}_{\Pi_1}\left[ z z^T \right]
\end{align}
where optimization is over the parameters of the distribution $t_{\nu}(z ; \mu_{\beta}, \Sigma_{\beta})$. Note that $\mathbb{E}_{\Pi_\beta}\left[ z \right] = \mu_{\beta}$ since the mean is unchanged for the escort distribution, whereas $\mathbb{E}_{\Pi_\beta}\left[ z z^T \right] = \Sigma_{\Pi_{\beta}} + \mu_{\Pi_{\beta}} \mu_{\Pi_{\beta}}^T = \frac{\nu}{\nu+2} \Sigma_{\beta} + \mu_{\beta} \mu_{\beta}^T$.
Following similar derivations as in \citet{grosse2013annealing} Sec. 4 using the escort expressions, we have
\begin{align}
\mu_{\beta}= \mu_{\Pi_{\beta}} &= (1-\beta) \mu_0 + \beta \mu_1 \label{eq:escort}\\
\Sigma_{\beta} = \frac{\nu+2}{\nu} \Sigma_{\Pi_{\beta}} &= (1-\beta) \Sigma_0 + \beta \Sigma_1 + \frac{\nu+2}{\nu} \beta (1-\beta) (\mu_1 - \mu_0)(\mu_1-\mu_0)^T \nonumber
\end{align}
which implies that the escort moment-averaged distribution has the form $t_{\nu}(z; \mu_{\beta}, \Sigma_{\beta})$, with the same degrees of freedom $\nu$ as in the original $q$-exponential family.
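The closed form above can be sanity-checked numerically. The sketch below (illustrative 1-d parameter values, not from the paper) verifies that the escort first and second moments of $t_{\nu}(z; \mu_\beta, \Sigma_\beta)$ match the convex combination of the endpoint escort moments:

```python
import numpy as np

nu, beta = 1.0, 0.3
mu0, sig0 = -4.0, 3.0          # parameters of the t_nu endpoints (1-d)
mu1, sig1 = 4.0, 1.0

# closed-form escort moment-averaged parameters from the equations above
mu_b = (1 - beta) * mu0 + beta * mu1
sig_b = ((1 - beta) * sig0 + beta * sig1
         + (nu + 2) / nu * beta * (1 - beta) * (mu1 - mu0) ** 2)

# escort of t_nu(mu, Sigma) is t_{nu+2}(mu, nu/(nu+2) Sigma), so its
# second moment is nu/(nu+2) * Sigma + mu^2
m2 = lambda mu, sig: nu / (nu + 2) * sig + mu ** 2

# escort moment-matching conditions hold for (mu_b, sig_b)
assert np.isclose(m2(mu_b, sig_b),
                  (1 - beta) * m2(mu0, sig0) + beta * m2(mu1, sig1))
```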
\section{Additional Experiments for Parametric Endpoint Distributions}
In these experiments, we consider using \gls{AIS} to estimate the partition function ratio for well-separated 1-d Gaussian ($q=1$) and Student-$t$ ($q>1$) endpoint distributions. Our goal is to compare the performance of the moment-averaging or escort moment-averaging paths, which are limited to the case of parametric endpoint distributions, with the more general $q$-paths.
\paragraph{Gaussian}
To compare $q$-paths against the moment-averaging path \citep{grosse2013annealing}, we anneal between $\pi_0=\mathcal{N}(-4,3)$ and $\pi_1 = \mathcal{N}(4,1)$. Similarly, we anneal between $\pi_0=t_{\nu=1}(-4, 3)$ and $\pi_1 = t_{\nu=1}(4, 1)$, where $\nu = 1$ corresponds to $q=2$, to compare against the escort moment-averaged path in \cref{sec:student}. For all experiments, we use parallel runs of \gls{HMC} \citep{neal2011mcmc} to obtain 2.5k independent samples from $\tilde{\pi}_{\beta, q}(z)$ using $K$ linearly spaced $\beta_t$ between $\beta_0=0$ and $\beta_K=1$. We perform a grid search over 20 log-spaced $\delta \in [10^{-5}, 10^{-1}]$ and report the best $q = 1 - \delta$.
Results are shown in \cref{fig:toy_exp}, where we observe $q$-paths outperform the geometric path in both cases, as well as the moment and $q$-moments paths which have closed-form expressions and exact samples. In App. \ref{app:student1d}, we provide additional analysis for annealing between two Student-$t$ distributions.
\paragraph{Student-$t$}
Since the Student-$t$ family generalizes the Gaussian distribution to $q \neq 1$, we can run a similar experiment annealing between two Student-$t$ distributions. We set $q=2$, which corresponds to $\nu = 1$ with $\nu = (3-q) / (q-1)$, and use the same mean and variance as the Gaussian example in Fig. \ref{fig:alpha_path} or Student-$t$ example in Fig. \ref{fig:student_path} with $\pi_0(z) = t_{\nu=1}( -4, 3)$ and $\pi_1(z) = t_{\nu=1}( 4, 1)$.
In \cref{fig:toy_exp}, we compare the escort moment-averaging path with $q=2$ to the geometric path and various $q$-paths. As shown in \cref{app:same_family}, the $q$-path with $q=2$ stays within the $q$-exponential family. The escort moment-averaging path does not outperform $q$-paths, which may be surprising since it appears to achieve interesting mass-covering behavior in Fig. \ref{fig:student_path}. As in the Gaussian case, we see that $q$-paths with $q \neq 2$ can achieve improvements even when the endpoint Student-$t$ distributions use $q=2$.
\begin{figure*}[h]
\centering
\subfigure[$\mathcal{N}(-4,3) \rightarrow \mathcal{N}(4,1)$]{\includegraphics[trim={0 0cm 0 0cm},clip,width=0.42\textwidth]{sections/figs/toy/gaus.pdf}}
\subfigure[$t_{v=1}(-4, 3) \rightarrow t_{v=1}(4,1)$]{\includegraphics[trim={0 0cm 0 0cm},clip,width=0.42\textwidth]{sections/figs/toy/stud.pdf}}
\vspace*{-.25cm}
\caption{\acs{BDMC} gaps for various paths on toy models. $q$-Paths outperform the moments and escort-moments paths, both of which make use of parametric endpoint assumptions. Best $q$ out of 20 shown.}
\label{fig:toy_exp}
\end{figure*}
\section{Experimental Details and Results}\label{sec:experiment_details}
\begin{algorithm}[h]
\caption{ESS Heuristic for $q$-Paths}
\begin{spacing}{1.2}
\begin{algorithmic}[1] \label{alg:ess_heuristic}
\STATE {\bfseries Input:} Set of log weights $\{\log w_i\}_{i=1}^S$, random restarts $M$, sample variance $\sigma$
\STATE {\bfseries Output:} $q, \beta$ which minimize the ESS criterion from \citet{chopin2020introduction}.
\STATE Initialize $\delta_{0} = \max_i|\log w_i|$ and $\mathcal{L}_{\text{best}} = \infty $\\
\FOR{$j$ from $1$ to $M$}
\STATE Initialize $\beta_{0} = 1$, $q_0 = 1 - \rho^{-1}$ with $\rho \sim \mathcal{N}(\rho_0, \sigma)$
\STATE Solve $\beta^*, q^* = \argmin_{\beta, q} \mathcal{L}(\beta, q)$, initialized at $(\beta_0, q_0)$, with $\mathcal{L}$ defined in \cref{eq:ess_loss}, using coordinate descent.
\IF{$\mathcal{L}(\beta^*, q^*) < \mathcal{L}_{\text{best}}$}
\STATE Set $q_{\text{best}} \leftarrow q^*$, $\beta_{\text{best}} \leftarrow \beta^*$, $\mathcal{L}_{\text{best}} \leftarrow \mathcal{L}(\beta^*, q^*)$
\ENDIF
\ENDFOR{}
\STATE {\bfseries return} $q_{\text{best}}, \beta_{\text{best}}$
\end{algorithmic}
\end{spacing}
\end{algorithm}
\subsection{Sequential Monte Carlo}\label{app:smc_exp_details}
We follow the experimental setup from Ch. 17.3 of \citet{chopin2020introduction} using the preprocessed Pima Indians diabetes ($N=768, D=9$) and Sonar datasets ($N=208, D=61$) available at \url{https://particles-sequential-monte-carlo-in-python.readthedocs.io/en/latest/datasets.html}. The model is specified as:
\begin{align}
p(w_j) &= \mathcal{N}(0, 5^2) \quad \quad
p(y_i|x_i, w) = \text{Bern}(p_i = \text{sigmoid}(x_i^Tw)) \\
p(\theta) &= \prod_{j=1}^Dp(w_j) \quad \quad
p(\mathcal{D}, \theta) = p(\theta)\prod_{i=1}^Np(y_i|x_i, w).
\end{align}
In \cref{alg:ess_heuristic} we use $M=100$ restarts and compute $\rho$ in $\log_{10}$ space with a sample variance $\sigma = 0.1$ (i.e., $q = 1 - 10^{-\rho}$ for $\rho \sim \mathcal{N}(\log_{10}{(\rho_0)}, 0.1)$). For coordinate descent, we use the modified Powell algorithm available in the SciPy Python library.
\subsection{Evaluating generative models using AIS}\label{app:ais_exp_details}
\begin{table}[!ht]
\centering
\caption{Settings for training and evaluating a \gls{VAE} generative model trained with \gls{TVO} on the Omniglot dataset.}
\begin{tabular}{c|c}
\textbf{Configuration} & \textbf{Value} \\ \midrule
training examples & 24,345 \\
simulated examples & 2,500 \\
real test examples & 8,070 \\ \midrule
epochs & 5000 \\
number of importance samples & 50 \\
number of TVO partitions & 100 \\
TVO partition schedule & log uniform ($\beta_1=0.025$)\\
decoder & {[}50, 200, 200, 784{]} \\
encoder & {[}784, 200, 200, 50{]} \\
batch size & 100 \\
activation function & tanh \\ \bottomrule
\end{tabular}
\end{table}
\begin{figure}[t]
\centering
\includegraphics[width=0.8\textwidth]{sections/figs/bdmc/bdmc_omniglot_bounds.pdf}
\caption{Stochastic lower and upper bounds produced by forward and reverse Hamiltonian AIS runs, for various numbers of annealing distributions ($K$) and $q$-values. Best viewed in colour. \label{fig:bdmc_omniglot_bounds}}
\end{figure}
\section{Comparison of Exponential Family and $q$-Exponential Family}\label{app:breg_table}
\begin{table}[h]
\centering
\caption{Comparison of quantities in the exponential family ($q=1$) and the $q$-exponential family.\label{tab:properties}}
\begin{tabular}{p{2cm} p{6.1cm} p{6.5cm}}
\cmidrule(lr){2-2} \cmidrule(l){3-3} \\[.25ex]
& Exponential Family ($q=1$) & $q$-Exponential Family \\ \midrule
Definition & $\pi_{\beta}(z) = \pi_0(z) \, \exp \{ \beta \cdot \phi(z) - \psi(\beta) \}$ & $\pi_{\lambda}(z) = \pi_0(z) \, \exp_q \{ \lambda \cdot \phi(z) - \psi_q(\lambda) \} $ \\[2ex]
Free Energy & $\psi(\beta) = \ln Z_{\beta} $ & $\psi_q(\lambda)$ \\[2ex]
Escort Dist. & $\Pi_{\beta, 1} = \pi_0(z)^{1-1} \, \pi_{\beta}(z)^1 = \pi_{\beta}(z) $ & $ \Pi_{\lambda,q} = \dfrac{1}{z_{q}(\lambda)} \pi_0(z)^{1-q} \, \pi_{\lambda,q}(z)^q $ \\[2ex]
Dual Params & $\dfrac{\partial \psi(\beta)}{\partial \beta^j} = \eta^j = \mathbb{E}_{\pi_{\beta}}[\phi_j(z)]$ \, (standard) & $\dfrac{\partial \psi_q(\lambda)}{\partial \lambda^j} =\eta^j_q =\mathbb{E}_{\Pi_{\lambda,q}}[\phi_j(z)] $ \, (escort) \\[2ex]
Bregman Div. & $D_{\psi}[\beta^{\prime} : \beta ] = D_{KL}[ \pi_{\beta} || \pi_{\beta^{\prime}} ] $ & $D_{\psi_q}[\lambda^{\prime} : \lambda ] = \mathbb{E}_{\Pi_{\lambda, q} }[ \ln_q \dfrac{\pi_{\lambda,q}}{\pi_0} - \ln_q \dfrac{\pi_{\lambda^{\prime},q}}{\pi_0}] $ \\[2ex]
& $\phantom{D_{\psi}[\beta^{\prime} : \beta ]} = \mathbb{E}_{\pi_{\beta}} [ \ln \pi_{\beta} - \ln \pi_{\beta^{\prime}} ]$ & \\[2ex]
Conjugate $\psi^{*}$ & $\psi^{*}(\eta) = D_{\psi}[\pi_0 || \pi_{\beta_{\eta}}] =D_{KL}[\pi_{\beta_{\eta}}||\pi_0 ] $ & $\psi_q^{*}(\eta) = D_{\psi_q}[\pi_0 ||\pi_{\lambda_{\eta},q}] = \mathbb{E}_{\Pi_{\lambda, q} }[ \ln_q \dfrac{\pi_{\lambda_{\eta},q}}{\pi_0} ] $\\[2ex]
& & $\phantom{\psi_q^{*}(\eta) = } \hfill = \dfrac{1}{1-q}\bigg(\dfrac{1}{z_q(\lambda)} -1 \bigg) $ \\[3.5ex] \\
Neg. Entropy & $\psi^{*}(\eta) = \mathbb{E}_{\pi_{\beta_{\eta}}}[ \log \pi_{\beta_{\eta}} ] $\footnotesize \quad (if $\pi_0 =$uniform)& $\psi_q^{*}(\eta) = \mathbb{E}_{\Pi_{\lambda, q}}[ \ln_q \pi_{\lambda_{\eta},q} ] $ \footnotesize \quad (if $\pi_0 = $uniform) \\[3ex]
\small Dual Definition & $\pi_{\beta}(z) = \pi_0(z) \exp \{ \beta \, ( \phi(z) - \eta ) + \psi^{*}(\eta) \}$ & $\pi_{\lambda}(z) = \pi_0(z)\exp_q \{ \lambda \, ( \phi(z) - \eta_q ) + \psi_q(\lambda) \}$ \\
\bottomrule
\end{tabular}
\end{table}
\section{Abstract Mean is Invariant to Affine Transformations} \label{app:any_h}
In this section, we show that the abstract mean is invariant to affine transformations of $h_{q}(u)$. That is, for any choice of $a \neq 0$ and $b$,
\begin{align}
h_{q}(u) =
\begin{cases}
a \cdot u^{1-q} + b \hfill & q \neq 1 \\
\log u \hfill & q = 1
\end{cases} \label{eq:alpha_abstract2}
\end{align}
yields the same expression for the abstract mean $\mu_{h_{q}}$. First, we note the expression for the inverse $h^{-1}_{q}(u)$ at $q \neq 1$
\begin{align}
h^{-1}_{q}(u) = \left(\frac{u - b}{a}\right)^{\frac{1}{1-q}}.
\end{align}
Recalling that $\sum_i w_i = 1$, the abstract mean then becomes
\begin{align}
\mu_{h_{q}}(\{w_i\}, \{u_i\}) &= h_{q}^{-1}\left(\sum_i w_i h_{q}(u_i) \right) \\
&= h_{q}^{-1}\left(a \left(\sum_i w_iu_i^{1-q}\right) + b \right) \\
&=\bigg(\sum_i w_i u_i^{1-q} \bigg)^{\frac{1}{1-q}}
\end{align}
which is independent of both $a$ and $b$.
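This invariance is easy to confirm numerically (a sketch with illustrative weights and inputs, not from the paper):

```python
import numpy as np

def abstract_mean(w, u, a, b, q):
    # h(u) = a * u^{1-q} + b,  h^{-1}(y) = ((y - b) / a)^{1/(1-q)}
    h = lambda u: a * u ** (1 - q) + b
    h_inv = lambda y: ((y - b) / a) ** (1 / (1 - q))
    return h_inv(np.sum(w * h(u)))

w = np.array([0.3, 0.7])       # weights summing to 1
u = np.array([2.0, 5.0])       # positive inputs
q = 0.5
ref = np.sum(w * u ** (1 - q)) ** (1 / (1 - q))   # the a = 1, b = 0 case

# any affine transformation of h yields the same mean
for a, b in [(1.0, 0.0), (2.5, -1.0), (0.1, 7.0)]:
    assert np.isclose(abstract_mean(w, u, a, b, q), ref)
```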
\section{Derivations of the $q$-Path}\label{app:q_path}
\subsection{Deformed Log Mixture}\label{app:ln_qmix}
In this section, we show that the unnormalized $\ln_q$ mixture
\begin{align}
\ln_q \tilde{\pi}^{(q)}_{\beta}(z) &= (1-\beta) \ln_q \pi_0(z) + \beta \ln_q \tilde{\pi}_1(z)
\end{align}
reduces to the form of the $q$-path intermediate distribution in \eqref{eq:qpath_mix_form} and \eqref{eq:alpha_mix}. Taking $\exp_q$ of both sides,
\begin{align*}
\tilde{\pi}^{(q)}_{\beta}(z) &= \exp_q \left\{ (1-\beta) \ln_q \pi_0(z) + \beta \ln_q \tilde{\pi}_1(z) \right\} \\
&= \left[ 1 + (1-q) \left( \ln_q \pi_0(z) + \beta \left( \ln_q \tilde{\pi}_1(z) - \ln_q \pi_0(z) \right)\right) \right]_{+}^{\frac{1}{1-q}} \\
&= \left[ 1 + (1-q) \frac{1}{1-q} \left( \pi_0(z)^{1-q} - 1 + \beta \big( \tilde{\pi}_1(z)^{1-q} - 1 - \pi_0(z)^{1-q} + 1 \big)\right) \right]_{+}^{\frac{1}{1-q}} \\
&= \left[ 1 + \pi_0(z)^{1-q} - 1 + \beta \bigg( \tilde{\pi}_1(z)^{1-q} - \pi_0(z)^{1-q} \bigg) \right]_{+}^{\frac{1}{1-q}} \\
&= \left[ \pi_0(z)^{1-q} + \beta \, \tilde{\pi}_1(z)^{1-q} - \beta \, \pi_0(z)^{1-q} \right]_{+}^{\frac{1}{1-q}} \\
&= \left[ (1-\beta ) \, \pi_0(z)^{1-q} + \beta \, \tilde{\pi}_1(z)^{1-q} \, \right]_{+}^{\frac{1}{1-q}}
\end{align*}
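The equivalence between the $\ln_q$ mixture and the power-mean form can be checked numerically on a grid of sample points (a sketch with unnormalized Gaussian endpoints chosen for illustration):

```python
import numpy as np

def ln_q(u, q):
    return (u ** (1.0 - q) - 1.0) / (1.0 - q)

def exp_q(u, q):
    return np.maximum(1.0 + (1.0 - q) * u, 0.0) ** (1.0 / (1.0 - q))

q, beta = 0.5, 0.4
z = np.linspace(-8.0, 8.0, 201)
pi0 = np.exp(-0.5 * (z + 4.0) ** 2 / 3.0)   # unnormalized N(-4, 3)
pi1 = np.exp(-0.5 * (z - 4.0) ** 2)         # unnormalized N(4, 1)

# exp_q of the ln_q mixture ...
lnq_mix = exp_q((1 - beta) * ln_q(pi0, q) + beta * ln_q(pi1, q), q)
# ... equals the power mean of the endpoint densities
power_mean = ((1 - beta) * pi0 ** (1 - q) + beta * pi1 ** (1 - q)) ** (1 / (1 - q))
assert np.allclose(lnq_mix, power_mean)
```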
\subsection{$q$-Exponential Family}\label{app:qexp_derivation}
Here, we show that the unnormalized $q$-path reduces to a form of the $q$-exponential family
\begin{align}
\tilde{\pi}_{\beta, q}(z) &= \bigg[(1 - \beta) \pi_0(z)^{1 - q} + \beta \tilde{\pi}_1(z)^{1 - q}\bigg]^\frac{1}{1 - q}\\
&= \bigg[\pi_0(z)^{1 - q} + \beta\big(\tilde{\pi}_1(z)^{1 - q} - \pi_0(z)^{1 - q}\big)\bigg]^\frac{1}{1 - q}\\
&= \pi_0(z) \left[1 + \beta\left(\left(\frac{\tilde{\pi}_1(z)}{\pi_0(z)}\right)^{1 - q} - 1\right)\right]^\frac{1}{1 - q}\\
&= \pi_0(z) \left[1 + (1-q) \, \beta \, \ln_q\left( \frac{\tilde{\pi}_1(z)}{\pi_0(z)}\right)\right]^\frac{1}{1 - q}\\
&= \pi_0(z) \exp_q \left\{ \beta \cdot \ln_q\left(\frac{\tilde{\pi}_1(z)}{\pi_0(z)}\right)\right\}.
\end{align}
Defining $\phi(z) = \ln_{q} \frac{\tilde{\pi}_1(z)}{\pi_0(z)}$ and introducing a multiplicative normalization factor $Z_q(\beta)$, we arrive at
\begin{align}
\pi^{(q)}_{\beta}(z) = \frac{1}{Z_q(\beta)} \, \pi_0(z)\exp_q \left\{\beta \cdot \phi(z) \right\} \qquad
Z_q(\beta) &:= \int \pi_0(z) \exp_q \left\{ \beta \cdot \phi(z)\right\} \, dz .\label{eq:lnq_fam_form2}
\end{align}
\subsection{Normalization in q-Exponential Families}\label{app:normalization}
The $q$-exponential family can also be written using the $q$-free energy $\psi_q(\theta)$ for normalization \cite{amari2011q, naudts2011generalised},
\begin{align}
\pi_{\theta}^{(q)}(z) &= \pi_0(z) \, \exp_q \big\{ \theta \cdot \phi(z) - \psi_q(\theta ) \big\} \, . \label{eq:qexp_fam_qf}
\end{align}
However, since $\exp_q \{ x + y \} = \exp_q \{ y \} \cdot \exp_q \{ \frac{x}{1 + (1-q) y} \}$ (see \cite{suyari2020advantages} or App. \ref{app:q_sum_product} below) instead of $\exp \{ x + y \} = \exp \{ x \} \cdot \exp \{ y \} $ for the standard exponential, we cannot easily move between these ways of writing the $q$-family \cite{matsuzoe2019normalization}.
Mirroring the derivations of \citet{naudts2011generalised} pg. 108, we can rewrite \eqref{eq:qexp_fam_qf} using the above identity for $\exp_q \{ x+y\}$, as
\begin{align}
\pi^{(q)}_{\theta}(z) &= \pi_0(z) \, \exp_q \{ \theta \cdot \phi(z) - \psi_q(\theta) \} \label{eq:normalization1} \\
&= \pi_0(z) \, \exp_q \{ - \psi_q(\theta) \} \exp_q \big\{ \frac{\theta \cdot \phi(z)}{1+(1-q)(-\psi_q(\theta))} \big \} \label{eq:normalization2}
\end{align}
Our goal is to express $\pi^{(q)}_{\theta}(z)$ using a normalization constant $Z^{(q)}_\beta$ instead of the $q$-free energy $\psi_q(\theta)$. While the exponential family allows us to freely move between $\psi(\theta)$ and $\log Z_{\theta}$, we must adjust the natural parameters (from $\theta$ to $\beta$) in the $q$-exponential case. Defining
\begin{align}
\beta &= \frac{\theta}{1+(1-q)(-\psi_q(\theta))} \\
Z^{(q)}_\beta &= \frac{1}{\exp_q \{-\psi_q(\theta) \}}
\end{align}
we can obtain a new parameterization of the $q$-exponential family, using parameters $\beta$ and multiplicative normalization constant $Z^{(q)}_\beta$,
\begin{align}
\pi^{(q)}_{\beta}(z) &= \frac{1}{Z^{(q)}_\beta} \pi_0(z) \, \exp_{q} \{ \beta \cdot \phi(z) \} \\
&= \pi_0(z) \, \exp_q \big\{ \theta \cdot \phi(z) - \psi_q(\theta ) \big\} = \pi^{(q)}_{\theta}(z) \, .
\end{align}
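The reparameterization can be verified numerically for a concrete 1-d example (our own illustrative choice of base $\pi_0 = \mathcal{N}(0,1)$, statistic $\phi(z) = z^2$, and $q=1.5$, not from the paper); we solve for $\psi_q(\theta)$ by requiring the density to normalize and check that both parameterizations give the same density:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

q, theta = 1.5, -0.5

def exp_q(u):
    return max(1.0 + (1.0 - q) * u, 0.0) ** (1.0 / (1.0 - q))

pi_base = lambda z: np.exp(-0.5 * z ** 2) / np.sqrt(2 * np.pi)  # base pi_0 = N(0, 1)
phi = lambda z: z ** 2                                          # sufficient statistic

# solve for the q-free energy psi_q(theta) so that pi_theta integrates to 1
mass = lambda psi: quad(lambda z: pi_base(z) * exp_q(theta * phi(z) - psi),
                        -np.inf, np.inf)[0] - 1.0
psi_q = brentq(mass, -1.0, 0.0)

# reparameterize: beta = theta / (1 + (1-q)(-psi_q)),  Z = 1 / exp_q(-psi_q)
beta = theta / (1.0 + (1.0 - q) * (-psi_q))
Z = 1.0 / exp_q(-psi_q)

# the two parameterizations agree pointwise
for z in [-2.0, -0.3, 0.0, 1.7]:
    assert np.isclose(pi_base(z) * exp_q(beta * phi(z)) / Z,
                      pi_base(z) * exp_q(theta * phi(z) - psi_q))

# and the new parameterization is correctly normalized
total = quad(lambda z: pi_base(z) * exp_q(beta * phi(z)) / Z, -np.inf, np.inf)[0]
assert np.isclose(total, 1.0, atol=1e-6)
```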
See \citet{matsuzoe2019normalization}, \citet{suyari2020advantages}, and \citet{naudts2011generalised} for more detailed discussion of normalization in deformed exponential families.
\section{Minimizing $\alpha$-divergences}\label{app:alpha_integration}
\citet{amari2007integration} shows that the $\alpha$ power mean $\pi^{(\alpha)}_{\beta}$ minimizes the expected divergence to a single distribution, for \textit{normalized} measures and $\alpha = 2q-1$. We repeat similar derivations but for the case of unnormalized endpoints $\{\tilde{\pi}_i\}$ and $\tilde{r}(z)$
\begin{align}
\tilde{\pi}_{\alpha}(z) &= \argmin \limits_{\tilde{r}(z)} \sum \limits_{i=1}^{N} w_i \, D_{\alpha}[\tilde{\pi}_i (z) : \tilde{r}(z) ] \label{eq:amari_vrep} \\
\text{where} \quad \tilde{\pi}_{\alpha}(z) &= \big( \sum \limits_{i=1}^{N} w_i \, \tilde{\pi}_i (z)^{\frac{1-\alpha}{2}} \big)^{\frac{2}{1-\alpha}} \label{eq:alpha_mix_app}
\end{align}
\begin{proof}
\begin{align}
\frac{d}{d\tilde{r}} \sum \limits_{i=1}^{N} w_i \, D_{\alpha}[\tilde{\pi}_i (z) : \tilde{r}(z) ] &= \frac{d}{d\tilde{r}} \frac{4}{1-\alpha^2} \sum \limits_{i=1}^{N} w_i \big( -\int \tilde{\pi}_i(z)^{\frac{1-\alpha}{2}}\, \tilde{r}(z)^{\frac{1+\alpha}{2}} dz + \frac{1+\alpha}{2} \int \tilde{r}(z) dz \big) \\
0 &=\frac{4}{1-\alpha^2} \big( -{\frac{1+\alpha}{2}} \sum \limits_{i=1}^{N} w_i \, \tilde{\pi}_i(z)^{\frac{1-\alpha}{2}}\, \tilde{r}(z)^{\frac{1+\alpha}{2}-1} + \frac{1+\alpha}{2} \big) \\
- \frac{2}{1-\alpha} &= -\frac{2}{1-\alpha} \sum \limits_{i=1}^{N} w_i \, \tilde{\pi}_i(z)^{\frac{1-\alpha}{2}} \tilde{r}(z)^{-\frac{1-\alpha}{2}} \\
\tilde{r}(z)^{\frac{1-\alpha}{2}} &= \sum \limits_{i=1}^{N} w_i \, \tilde{\pi}_i(z)^{\frac{1-\alpha}{2}} \\
\tilde{r}(z) &= \bigg(\sum \limits_{i=1}^{N} w_i \, \tilde{\pi}_i(z)^{\frac{1-\alpha}{2}} \bigg)^{\frac{2}{1-\alpha}}
\end{align}
\end{proof}
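Since the objective decomposes pointwise in $z$, the stationarity condition can be checked by a 1-d numerical minimization at individual points (a sketch with illustrative weights and density values; here $\alpha=0$, i.e. $q=1/2$):

```python
import numpy as np
from scipy.optimize import minimize_scalar

alpha, w = 0.0, np.array([0.4, 0.6])
a = (1 - alpha) / 2                     # exponent shorthand (1-alpha)/2

def objective(r, p):
    # pointwise integrand of sum_i w_i D_alpha[p_i : r] for unnormalized measures
    return 4 / (1 - alpha ** 2) * np.sum(
        w * (a * p + (1 - a) * r - p ** a * r ** (1 - a)))

for p in [np.array([0.2, 1.3]), np.array([2.0, 0.05])]:
    r_star = np.sum(w * p ** a) ** (1 / a)          # alpha-power-mean formula
    res = minimize_scalar(lambda r: objective(r, p),
                          bounds=(1e-6, 10.0), method="bounded")
    # the numerical minimizer matches the closed-form power mean
    assert np.isclose(res.x, r_star, rtol=1e-3)
```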
This result is similar to a general result about Bregman divergences in \citet{Banerjee2005} Prop. 1, although $D_{\alpha}$ is not a Bregman divergence over normalized distributions.
\subsection{Arithmetic Mean ($q=0$)}\label{app:mixture_path}
For normalized distributions, we note that the moment-averaging path from \citet{grosse2013annealing} is \textit{not} a special case of the $\alpha$-integration \cite{amari2007integration}. While both minimize a convex combination of reverse \textsc{kl} divergences, \citet{grosse2013annealing} minimize within the constrained space of exponential families,
while \citet{amari2007integration} optimizes over \textit{all} normalized distributions.
More formally, consider minimizing the functional
\begin{align}
J[r] &= (1 - \beta)\int \pi_0(z) \log \frac{\pi_0(z)}{r(z)} dz + \beta \int \pi_1(z) \log \frac{\pi_1(z)}{r(z)} dz \\
&= \text{const} - \int \left[(1 - \beta) \pi_0(z) + \beta \pi_1(z) \right] \log r(z) dz \label{eq:functional}
\end{align}
We will show how \citet{grosse2013annealing} and \citet{amari2007integration} minimize \eqref{eq:functional}.
\paragraph{Solution within Exponential Family}
\citet{grosse2013annealing} constrains $r(z) = \frac{1}{Z(\theta)} h(z) \exp (\theta^T g(z))$ to be a (minimal) exponential family model and minimizes \eqref{eq:functional} w.r.t.\ $r$'s natural parameters $\theta$ (cf. \cite{grosse2013annealing} Appendix 2.2):
\begin{align}
\theta^*_i &= \argmin_\theta J(\theta) \\
&= \argmin_\theta \left(- \int \left[(1 - \beta) \pi_0(z) + \beta \pi_1(z) \right] \left[ \log h(z) + \theta^T g(z) - \log Z(\theta) \right] dz \right)\\
&= \argmin_\theta \left(\log Z(\theta) - \int \left[(1 - \beta) \pi_0(z) + \beta \pi_1(z) \right] \theta^T g(z) dz + \text{const}\right)
\end{align}
where the last line follows because $\pi_0(z)$ and $\pi_1(z)$ are assumed to be correctly normalized. Then to arrive at the moment averaging path, we compute the partials $\frac{\partial J(\theta)}{\partial \theta_i}$ and set to zero:
\begin{align}
\frac{\partial J(\theta)}{\partial \theta_i} &= \E_{r}[g_i(z)] - (1 - \beta)\E_{\pi_0}[g_i(z)] - \beta \E_{\pi_1}[g_i(z)] = 0 \\
\E_{r}[g_i(z)] &= (1 - \beta)\E_{\pi_0}[g_i(z)] + \beta \E_{\pi_1}[g_i(z)]
\end{align}
where we have used the exponential family identity $\frac{\partial \log Z(\theta)}{\partial \theta_i} = \E_{r_{\theta}}[g_i(z)]$ in the first line.
\paragraph{General Solution}
Instead of optimizing in the space of minimal exponential families, \citet{amari2007integration} adds a Lagrange multiplier to \eqref{eq:functional} and optimizes $r$ directly (cf. \cite{amari2007integration} eq. 5.1 - 5.12)
\begin{align}
r^* &= \argmin_{r} J'[r] \\
&= \argmin_{r} J[r] + \lambda \left(1 - \int r(z) dz\right) \label{eq:lagrange}
\end{align}
\cref{eq:lagrange} can be minimized using the Euler-Lagrange equations or using the identity
\begin{align}
\frac{\delta f(x)}{\delta f(x')} = \delta(x - x')\label{eq:delta_eq}
\end{align}
from \cite{meng2004introduction}. We compute the functional derivative of $J'[r]$ using \eqref{eq:delta_eq} and solve for $r$:
\begin{align}
\frac{\delta J'[r]}{\delta r(z)}=&- \int \big[(1 - \beta) \pi_0(z') + \beta \pi_1(z') \big] \frac{1}{r(z')} \frac{\delta r(z')}{\delta r(z)} dz' - \lambda \int \frac{\delta r(z')}{\delta r(z)} dz' \\
=&- \int \big[(1 - \beta) \pi_0(z') + \beta \pi_1(z') \big] \frac{1}{r(z')} \delta(z - z') dz' - \lambda \int \delta(z - z') dz' \\
=&- \big[(1 - \beta)\pi_0(z) + \beta \pi_1(z)\big] \frac{1}{r(z)} - \lambda = 0
\end{align}
Therefore
\begin{align}
r(z) \propto \big[(1-\beta)\pi_0(z) + \beta \pi_1(z)\big],
\end{align}
which corresponds to our $q$-path at $q=0$, or $\alpha = -1$ in \citet{amari2007integration}. Thus, while both \citet{amari2007integration} and \citet{grosse2013annealing} start with the same objective, they arrive at different optima because they optimize over different spaces.
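Numerically, the fact that the normalized mixture minimizes $J$ over all normalized distributions can be spot-checked on a discrete state space, where the objective reduces to a cross-entropy against the mixture. A small sketch (random categorical endpoints; NumPy assumed):

```python
import numpy as np

rng = np.random.default_rng(0)
K, beta = 50, 0.4
pi0 = rng.random(K); pi0 /= pi0.sum()
pi1 = rng.random(K); pi1 /= pi1.sum()
mix = (1 - beta) * pi0 + beta * pi1        # candidate optimum r*; already normalized

def J(r):
    """Discrete analogue of the objective: -sum_z [(1-b) pi0 + b pi1] log r(z)."""
    return -np.sum(mix * np.log(r))

# Any normalized perturbation of r* should increase J; the gap is KL(mix || r) >= 0.
all_worse = True
for _ in range(200):
    r = np.clip(mix + 0.01 * rng.standard_normal(K), 1e-12, None)
    r /= r.sum()
    if J(r) < J(mix):
        all_worse = False
```

Every perturbed candidate scores strictly worse, consistent with the mixture being the unconstrained optimum.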
\section{Sum and Product Identities for $q$-Exponentials}\label{app:q_sum_product}
In this section, we prove two lemmas which are useful for manipulating expressions involving $q$-exponentials, for example in moving between \cref{eq:normalization1} and \cref{eq:normalization2} in either direction.
\begin{lemma}
Sum identity
\begin{align}
\exp_q\left(\sum_{n=1}^N x_n\right) = \prod_{n=1}^{N} \exp_q \left(\frac{x_n}{1 + (1 - q)\sum_{i=1}^{n-1}x_i} \right)\label{eq:q_exp_sum}
\end{align}
\label{lemma:q_exp_sum}
\end{lemma}
\begin{lemma}
Product identity
\begin{align}
\prod_{n=1}^N \exp_q(x_n) = \exp_q\left(\sum_{n=1}^{N}x_n \cdot \prod_{i=1}^{n-1} \left(1 + (1 - q)x_i\right)\right)\label{eq:q_exp_prod}
\end{align}
\label{lemma:q_exp_prod}
\end{lemma}
\subsection{Proof of \cref{lemma:q_exp_sum}}
\begin{proof}
We prove by induction. The base case ($N=1$) is satisfied using the convention $\sum_{i=a}^bx_i = 0$ if $b < a$ so that the denominator on the \textsc{rhs} of \cref{eq:q_exp_sum} is $1$. Assuming \cref{eq:q_exp_sum} holds for $N$,
\begin{align}
\exp_q\left(\sum_{n=1}^{N+1} x_n\right) &= \left[ 1 + (1-q) \sum_{n=1}^{N+1} x_n \right]_{+}^{1/(1-q)} \\
&= \left[ 1 + (1-q) \left(\sum_{n=1}^{N} x_n\right) + (1-q)x_{N+1} \right]_{+}^{1/(1-q)} \\
&= \left[\left( 1 + (1-q) \sum_{n=1}^{N} x_n \right) \left(1 + (1-q)\frac{x_{N+1}}{1 + (1-q) \sum_{n=1}^{N} x_n}\right) \right]_{+}^{1/(1-q)} \\
&= \exp_q\left(\sum_{n=1}^N x_n\right) \exp_q \left(\frac{x_{N+1}}{1 + (1-q) \sum_{n=1}^{N} x_n} \right)\\
&= \prod_{n=1}^{N+1} \exp_q \left(\frac{x_n}{1 + (1-q)\sum_{i=1}^{n-1}x_i} \right) \text{(using the inductive hypothesis)}
\end{align}
\end{proof}
\subsection{Proof of \cref{lemma:q_exp_prod}}
\begin{proof}
We prove by induction. The base case ($N=1$) is satisfied using the convention $\prod_{i=a}^bx_i = 1$ if $b < a$. Assuming \cref{eq:q_exp_prod} holds for $N$, we will show the $N+1$ case. To simplify notation, we define $y_N:=\sum_{n=1}^{N}x_n \cdot \prod_{i=1}^{n-1} \left(1 + (1 - q)x_i\right)$. Then,
\begin{align}
\prod_{n=1}^{N+1} \exp_q(x_n) &= \exp_q(x_{1})\left(\prod_{n=2}^{N+1}\exp_q(x_n)\right)\\
&= \exp_q(x_{0})\left(\prod_{n=1}^{N}\exp_q(x_n)\right) & \hspace*{-.5cm} \text{(reindex $n \to n - 1)$} \nonumber \\
&=\exp_q(x_{0})\exp_q(y_N) & \hspace*{-.5cm} \text{(inductive hypothesis)} \nonumber \\
&= \bigg[\left(1 + (1-q) \cdot x_{0}\right)\left(1 + (1-q) \cdot y_N\right) \bigg]_{+}^{1/(1-q)}\\
&= \bigg[1 + (1-q) \cdot x_{0} + \big(1 + (1-q) \cdot x_{0} \big)(1-q) \cdot y_N \bigg]_{+}^{1/(1-q)}\\
&= \bigg[1 + (1-q) \bigg(x_{0} + \big(1 + (1-q) \cdot x_{0} \big)y_N\bigg) \bigg]_{+}^{1/(1-q)}\\
&= \exp_q \left(x_{0} + \big(1 + (1-q) \cdot x_{0} \big)y_N\right)
\end{align}
Next we use the definition of $y_N$ and rearrange
\begin{align}
&= \exp_q \left(x_{0} + \big(1 + (1-q) \cdot x_{0} \big)\left(x_1 + x_2(1 + (1-q) \cdot x_1) + \dots + x_N \cdot \prod_{i=1}^{N-1}(1 + (1-q) \cdot x_i)\right)\right) \nonumber\\
&= \exp_q\left(\sum_{n=0}^{N}x_n \cdot \prod_{i=0}^{n-1} \left(1 + (1-q) x_i\right)\right).
\end{align}
Then reindexing $n \to n + 1$ establishes
\begin{align}
\prod_{n=1}^{N+1} \exp_q(x_n) = \exp_q\left(\sum_{n=1}^{N+1}x_n \cdot \prod_{i=1}^{n-1} \left(1 + (1-q)x_i\right)\right).
\end{align}
\end{proof}
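Both identities are straightforward to check numerically. The sketch below (plain Python; arbitrary small positive inputs so that the $[\cdot]_+$ truncation is never active) evaluates each side of \cref{eq:q_exp_sum} and \cref{eq:q_exp_prod}:

```python
import math
import random

def exp_q(x, q):
    """q-exponential: [1 + (1-q) x]_+^{1/(1-q)} for q != 1."""
    return max(1 + (1 - q) * x, 0.0) ** (1 / (1 - q))

random.seed(0)
q = 1.5
xs = [random.uniform(0.0, 0.3) for _ in range(5)]

# Lemma 1 (sum identity): exp_q(sum x_n) as a product of rescaled factors.
lhs_sum = exp_q(sum(xs), q)
rhs_sum = 1.0
for n, x in enumerate(xs):
    rhs_sum *= exp_q(x / (1 + (1 - q) * sum(xs[:n])), q)

# Lemma 2 (product identity): prod exp_q(x_n) as a single q-exponential.
lhs_prod = 1.0
for x in xs:
    lhs_prod *= exp_q(x, q)
arg = sum(x * math.prod(1 + (1 - q) * xi for xi in xs[:n]) for n, x in enumerate(xs))
rhs_prod = exp_q(arg, q)
```

Both sides agree to floating-point precision for any inputs that keep the bracket positive.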
\section{\citet{zhang2013nonparametric} Geodesics }
This section closely follows \citet{zhang2013nonparametric} Sec. 2.2-2.3.
We will seek to show that our $q$-path
\begin{align}
\ln_q \big(\tilde{\pi}_{t, q}(z)\big) = (1-t) \ln_q \big(\tilde{\pi}_{0}(z)\big) + t \, \ln_q \big(\tilde{\pi}_{1}(z) \big)
\end{align}
forms a geodesic for the affine connection induced by the $\alpha$-divergence $D_{\alpha}$ over unnormalized measures.
Consider the $\rho, \tau$ embeddings
\begin{align}
\rho(\tilde{\pi}(z)) = {\frac{2}{1-\alpha}} \big( \tilde{\pi}(z)^{\frac{1-\alpha}{2}} - 1 \big) \qquad \tau(\tilde{\pi}) = {\frac{2}{1+\alpha}} \big( \tilde{\pi}(z)^{\frac{1+\alpha}{2}} - 1 \big)
\end{align}
which are conjugate with respect to the function
\begin{align}
f(\rho) = {\frac{2}{1+\alpha}} \big( 1 + {\frac{1-\alpha}{2}} \cdot \rho \big)^{\frac{2}{1-\alpha}}
\end{align}
where conjugacy means that $\tau(\tilde{\pi}(z)) = f^{\prime}\big(\rho(\tilde{\pi}(z))\big)$.
This induces a family of $\alpha$-divergences $D^{(\alpha)}_{f,\rho}$, where $\alpha$ is intended to reflect \textit{referential duality}. Our use of $\alpha$ in the $\rho, \tau$ embedding functions, or $q$ in the $\ln_q$ path, rather reflects \textit{representational duality} in terms of the $\alpha$-representation (e.g. \citet{amari2010divergencefunctions}).
We obtain the flat divergence, which integrates an expression of Bregman form
\begin{align}
D^{(1)}_{f, \rho}[\tilde{\pi}_a:\tilde{\pi}_b] = \int f\big(\rho(\tilde{\pi}_a(z)) \big) - f\big(\rho(\tilde{\pi}_b(z)) \big) - \big( \rho(\tilde{\pi}_a(z)) - \rho(\tilde{\pi}_b(z)) \big) \cdot \tau(\tilde{\pi}_b) \, dz
\end{align}
which can be shown to correspond to Amari's $\alpha$ divergence for the choice of $f, \rho$ above.
\begin{align}
D^{(1)}_{f, \rho}[\tilde{\pi}_a:\tilde{\pi}_b] &= D_{\alpha}[\tilde{\pi}_a(z):\tilde{\pi}_b(z) ] \\
&= \frac{4}{(1-\alpha^2)} \bigg( \frac{1-\alpha}{2} \int \tilde{\pi}_a(z) \,dz + \frac{1+\alpha}{2} \int \tilde{\pi}_b(z) \,dz -\int \tilde{\pi}_a(z)^{\frac{1-\alpha}{2}} \, \tilde{\pi}_b(z)^{\frac{1+\alpha}{2}} dz \bigg)
\end{align}
As shown in Proposition 2 and Corollary 3 of \citet{zhang2013nonparametric}, we can define an information manifold $(\mathcal{M}_+, g_{ij}^{(D^{(1)}_{f,\rho})}, \nabla^{(1)}, \nabla^{(-1)})$. See \citet{zhang2013nonparametric} for additional details. We consider only the affine connection $\nabla^{(1)}$ in what follows, which is a special case of the $\alpha$-connection for the above manifold
\begin{align}
\nabla^{(\alpha)}_{\dot{\gamma}} \dot{\gamma} = (d_{\dot{\gamma}} \dot{\gamma})_{\gamma_t} + \big(\dot{\gamma}\big)^{2} \cdot \frac{d}{d\gamma} \big( {\frac{1+\alpha}{2}} \log \rho^{\prime}(\gamma(t)) + {\frac{1-\alpha}{2}} \log \tau^{\prime}(\gamma(t)) \big) \label{eq:affine_alpha}
\end{align}
\subsection{$q$-Path as Geodesic}
Our $q$-path distributions $\tilde{\pi}_{t, q}$ along a curve parameterized by $t$ are defined using the power mean with function $\rho(u)$, and are thus linear in the $\rho$ representation
\begin{align}
\rho\big( \gamma(t)\big) = \rho\big(\tilde{\pi}_{t, q}(z)\big) = (1-t) \rho\big(\tilde{\pi}_{0}(z)\big) + t \, \rho\big(\tilde{\pi}_{1}(z) \big)
\end{align}
where $\rho(u) = \ln_q(u)$ for $\alpha = 2q-1$.
We seek to show that our $q$-path is a geodesic with respect to the affine connection $\nabla^{(1)}$, or an autoparallel path. Let $\gamma_t = \gamma(t) = \tilde{\pi}_{t,q}(z)$ indicate the $q$-path distribution with mixing parameter $t$, with $\dot{\gamma} = d \gamma_t / dt$.
Plugging $\alpha = 1$ into \eqref{eq:affine_alpha}, and with $ (d_{\dot{\gamma}} \dot{\gamma})_{\gamma_t} = \frac{d \dot{\gamma}}{d\gamma_t} \cdot \dot{\gamma}$, we seek to show that $\nabla^{(1)}_{\dot{\gamma}} \dot{\gamma} = 0$, or
\begin{align}
\nabla^{(1)}_{\dot{\gamma}} \dot{\gamma} = \frac{d \dot{\gamma}}{d\gamma} \cdot \dot{\gamma} + \big(\dot{\gamma}\big)^{2} \cdot \frac{d}{d\gamma} \big( \log \rho^{\prime}(\gamma(t)) \big) = 0
\end{align}
We will first derive several useful quantities. Since it is easier to differentiate $\rho(\gamma(t)) = (1-t) \rho(\tilde{\pi}_0) + t \, \rho(\tilde{\pi}_1)$ than $\gamma(t)$ itself, we note that $\frac{d}{dt} \rho\big( \gamma(t) \big) = \frac{d \rho(\gamma_t) }{d\gamma(t)} \frac{d\gamma(t)}{dt}$. We can then rewrite the desired $\frac{d\gamma(t)}{dt}$ as
\begin{align}
\dot{\gamma} = \frac{d\gamma(t)}{dt} &= \bigg( \frac{d \rho(\gamma_t) }{d\gamma(t)} \bigg)^{-1} \, \frac{d \rho(\gamma_t) }{dt} = \gamma_t^{{\frac{1+\alpha}{2}}} \cdot \big( \rho(\tilde{\pi}_1) - \rho(\tilde{\pi}_0) \big) \\[1.4ex]
\implies \dot{\gamma} &=\gamma_t \cdot \big( \rho(\tilde{\pi}_1) - \rho(\tilde{\pi}_0) \big) \quad (\text{for $\alpha = 1$}) \label{eq:dgamma_dt}
\end{align}
since
\begin{align}
\frac{d \rho(\gamma_t) }{d\gamma_t} &= \frac{d}{d \gamma_t} {\frac{2}{1-\alpha}} \big( \gamma_t^{\frac{1-\alpha}{2}} - 1 \big) = \gamma_t^{{\frac{1-\alpha}{2}} - 1} = \gamma_t^{-\frac{1+\alpha}{2}} \\
\text{and } \quad \frac{d \rho(\gamma_t) }{dt} &= \frac{d}{dt} \big( (1-t) \rho(\tilde{\pi}_0) + t \, \rho(\tilde{\pi}_1) \big) = \rho(\tilde{\pi}_1) - \rho(\tilde{\pi}_0)
\end{align}
We can now differentiate the $\frac{d}{d\gamma} \big( \log \rho^{\prime}(\gamma(t)) \big) $ term in the affine connection. From above, $d \rho(\gamma_t) / d\gamma(t) = \gamma_t^{\frac{-(1+\alpha)}{2}}$, so that
\begin{align}
\frac{d}{d\gamma} \big( \log \frac{d}{d\gamma} \rho(\gamma(t)) \big) &= \frac{d}{d\gamma} \log \big( \gamma_t^{\frac{-(1+\alpha)}{2}} \big) = -\frac{(1+\alpha)}{2} \cdot \frac{d}{d\gamma} \log \gamma_t \\
&\phantom{\frac{d}{d\gamma} \log \big( \gamma_t^{\frac{-(1+\alpha)}{2}} \big)} = -\frac{(1+\alpha)}{2} \gamma_t^{-1} \\
\implies \frac{d}{d\gamma} \big( \log \rho^{\prime}(\gamma(t)) \big) &= -\gamma_t^{-1} \quad (\text{for $\alpha=1$}) \label{eq:db_dgamma}
\end{align}
Putting it all together, and with dot products turning into scalar products for our one-dimensional curve $t$, we have
\begin{align}
\nabla^{(1)}_{\dot{\gamma}} \dot{\gamma} &= \frac{d \dot{\gamma}}{d\gamma_t} \cdot \dot{\gamma} + \big(\dot{\gamma}\big)^{2} \frac{d}{d\gamma} \big( \log \rho^{\prime}(\gamma(t)) \big) \\
&= \frac{d }{d\gamma_t} \bigg( \gamma_t \cdot \big( \rho(\tilde{\pi}_1) - \rho(\tilde{\pi}_0) \big) \bigg) \cdot \dot{\gamma} - (\dot{\gamma})^{2} \gamma_t^{-1}
\qquad \quad \text{(using \eqref{eq:dgamma_dt} and \eqref{eq:db_dgamma})} \\
&= \dot{\gamma} \cdot \big( \rho(\tilde{\pi}_1) - \rho(\tilde{\pi}_0) \big) - (\dot{\gamma})^{2} \cdot {\dot{\gamma}}^{-1} \cdot \big( \rho(\tilde{\pi}_1) - \rho(\tilde{\pi}_0) \big) \qquad \text{(rearranging \eqref{eq:dgamma_dt})} \\
&= \dot{\gamma} \cdot \big( \rho(\tilde{\pi}_1) - \rho(\tilde{\pi}_0) \big) - \dot{\gamma} \cdot \big( \rho(\tilde{\pi}_1) - \rho(\tilde{\pi}_0) \big) \\
&=0
\end{align}
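The key identity \eqref{eq:dgamma_dt} can also be checked by finite differences. In the $\alpha \to 1$ limit the embedding $\rho(u)$ reduces to $\ln u$ (up to an affine constant) and the path is the geometric mixture; a minimal sketch with arbitrary endpoint values $a, b$ at a fixed $z$:

```python
import math

a, b = 0.7, 2.5            # endpoint (unnormalized) density values at a fixed z
rho = math.log             # alpha -> 1 limit of the rho embedding

def gamma(t):
    """Path linear in the rho representation: rho(gamma(t)) = (1-t) rho(a) + t rho(b)."""
    return math.exp((1 - t) * rho(a) + t * rho(b))

t, h = 0.3, 1e-6
fd_deriv = (gamma(t + h) - gamma(t - h)) / (2 * h)   # central finite difference
analytic = gamma(t) * (rho(b) - rho(a))              # eq. (dgamma_dt) at alpha = 1
```

The two derivatives agree to finite-difference accuracy.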
\clearpage
\begin{figure}[t]
\centering
\subfigure[$q=0$]{\includegraphics[trim={0 1.2cm 0 1cm},clip,width=0.24\textwidth]{sections/figs/gif/ridge_01.pdf}}
\subfigure[$q=0.5$]{\includegraphics[trim={0 1.2cm 0 1cm},clip,width=0.24\textwidth]{sections/figs/gif/ridge_09.pdf}}
\subfigure[$q=0.9$]{\includegraphics[trim={0 1.2cm 0 1cm},clip,width=0.24\textwidth]{sections/figs/gif/ridge_18.pdf}}
\subfigure[$q=1$]{\includegraphics[trim={0 1.2cm 0 1cm},clip,width=0.24\textwidth]{sections/figs/gif/ridge_20.pdf}}
\subfigure[$q=1.1$]{\includegraphics[trim={0 1.2cm 0 1cm},clip,width=0.23\textwidth]{sections/app/draft_figs/ridge_12_redo.pdf}}
\subfigure[$q=1.5$]{\includegraphics[trim={0 1.2cm 0 1cm},clip,width=0.23\textwidth]{sections/app/draft_figs/ridge_2_redo.pdf}}
\subfigure[$q=2$]{\includegraphics[trim={0 1.2cm 0 1cm},clip,width=0.23\textwidth]{sections/app/draft_figs/ridge_3_redo.pdf}}%
\subfigure[$q=3$]{\includegraphics[trim={0 1.2cm 0 1cm},clip,width=0.23\textwidth]{sections/app/draft_figs/ridge_4_redo.pdf}}
\caption{Intermediate densities between $\mathcal{N}(-4, 3)$ and $\mathcal{N}(4,1)$ for various $q$-paths and 10 equally spaced $\beta$. The path approaches a mixture of Gaussians with weight $\beta$ at $q=0$. For the geometric mixture ($q=1$), intermediate $\pi_{\beta}$ stay within the exponential family since both $\pi_0$, $\pi_T$ are Gaussian.}
\label{fig:alpha_path}
\vspace*{-.15cm}
\end{figure}
\begin{figure}[t]
\centering
\subfigure[$q=0$]{\includegraphics[trim={0 1.2cm 0 1cm},clip,width=0.24\textwidth]{sections/app/draft_figs/ridge_alpha-1_student_df2_2.pdf}}
\subfigure[$q=0.5$]{\includegraphics[trim={0 1.2cm 0 1cm},clip,width=0.24\textwidth]{sections/app/draft_figs/ridge_alpha05_student_df2_2.pdf}}
\subfigure[$q=0.95$]{\includegraphics[trim={0 1.2cm 0 1cm},clip,width=0.24\textwidth]{sections/app/draft_figs/ridge_alpha95_student_df2_2.pdf}}
\subfigure[$q=1$]{\includegraphics[trim={0 1.2cm 0 1cm},clip,width=0.24\textwidth]{sections/app/draft_figs/ridge_alpha1_student_df2_2.pdf}}
\subfigure[$q=1.1$]{\includegraphics[trim={0 1.2cm 0 1cm},clip,width=0.23\textwidth]{sections/app/draft_figs/ridge_alpha11_student_df2_2.pdf}}
\subfigure[$q=1.5$]{\includegraphics[trim={0 1.2cm 0 1cm},clip,width=0.23\textwidth]{sections/app/draft_figs/ridge_alpha15_student_df2_2.pdf}}
\subfigure[$q=2$]{\includegraphics[trim={0 1.2cm 0 1cm},clip,width=0.23\textwidth]{sections/app/draft_figs/ridge_alpha2_student_df2_2.pdf}}%
\subfigure[$q=3$]{\includegraphics[trim={0 1.2cm 0 1cm},clip,width=0.23\textwidth]{sections/app/draft_figs/ridge_alpha3_student_df2_2.pdf}}
\caption{Intermediate densities between $\text{Student}(\text{df}=2, -4, 3)$ and $\text{Student}(\text{df}=2, 4, 1)$ for various $q$-paths and 10 equally spaced $\beta$.}
\label{fig:q_path}
\vspace*{-.15cm}
\end{figure}
\begin{figure}[t]
\centering
\subfigure[$q=0$]{\includegraphics[trim={0 1.2cm 0 1cm},clip,width=0.24\textwidth]{sections/app/draft_figs/ridge_alpha-1_student_df2_2.pdf}}
\subfigure[$q=0.5$]{\includegraphics[trim={0 1.2cm 0 1cm},clip,width=0.24\textwidth]{sections/app/draft_figs/ridge_alpha05_student_df2_2.pdf}}
\subfigure[$q=0.95$]{\includegraphics[trim={0 1.2cm 0 1cm},clip,width=0.24\textwidth]{sections/app/draft_figs/ridge_alpha95_student_df2_2.pdf}}
\subfigure[$q=1$]{\includegraphics[trim={0 1.2cm 0 1cm},clip,width=0.24\textwidth]{sections/app/draft_figs/ridge_alpha1_student_df2_2.pdf}}
\subfigure[$q=1.1$]{\includegraphics[trim={0 1.2cm 0 1cm},clip,width=0.23\textwidth]{sections/app/draft_figs/ridge_alpha11_student_df2_2.pdf}}
\subfigure[$q=1.5$]{\includegraphics[trim={0 1.2cm 0 1cm},clip,width=0.23\textwidth]{sections/app/draft_figs/ridge_alpha15_student_df2_2.pdf}}
\subfigure[$q=2$]{\includegraphics[trim={0 1.2cm 0 1cm},clip,width=0.23\textwidth]{sections/app/draft_figs/ridge_alpha2_student_df2_2.pdf}}%
\subfigure[$q=3$]{\includegraphics[trim={0 1.2cm 0 1cm},clip,width=0.23\textwidth]{sections/app/draft_figs/ridge_alpha3_student_df2_2.pdf}}
\caption{Tail behavior for densities between $\text{Student}(\text{df}=2, -4, 3)$ and $\text{Student}(\text{df}=2, 4, 1)$ for various $q$-paths and 10 equally spaced $\beta$.}
\label{fig:q_path_tails}
\vspace*{-.15cm}
\end{figure}
\section{Annealing between Student-$t$ Distributions}\label{app:additional}
\subsection{Student-$t$ Distributions and $q$-Exponential Family}
The Student-$t$ distribution appears in hypothesis testing with finite samples, under the assumption that the sample mean follows a Gaussian distribution. In particular, the degrees of freedom parameter $\nu = n-1$ can be shown to correspond to an order of the ${q}$-exponential family with $\nu = (3-{q}) / ({q}-1)$ (in 1-d), so that the choice of ${q}$ is linked to the amount of data observed.
We can first write the multivariate Student-$t$ density, specified by a mean vector $\mu$, covariance ${\mathbf{\Sigma}}$, and degrees of freedom parameter $\nu$, in $d$ dimensions, as
\begin{align}
t_{\nu}(x | {\mathbf{\mu}}, {\mathbf{\Sigma}}) = \frac{1}{Z(\nu, {\mathbf{\Sigma}})} \big[ 1 + \frac{1}{\nu} (x - {\mathbf{\mu}})^T {\mathbf{\Sigma}}^{-1} (x-{\mathbf{\mu}}) \big]^{-\big(\frac{\nu + d }{2}\big)} \label{eq:student}
\end{align}
where $Z(\nu, {\mathbf{\Sigma}}) = \Gamma(\frac{\nu}{2})/\Gamma(\frac{\nu+d}{2}) \cdot |{\mathbf{\Sigma}}|^{1/2} \nu^{\frac{d}{2}} \pi^{\frac{d}{2}}$. Note that $\nu > 0$, so that we only have positive values raised to the $-(\nu+d)/2$ power, and the density is defined on all of $\mathbb{R}^d$.
The power function in \eqref{eq:student} is already reminiscent of the ${q}$-exponential, while we have first and second moment sufficient statistics as in the Gaussian case. We can solve for the exponent, or order parameter $q$, that corresponds to $-(\nu+d)/2$ using $-\big(\frac{\nu + d }{2}\big) = \frac{1}{1-{q}}$. This results in the relations
\begin{align}
\nu = \frac{d - d {q} +2}{{q} - 1} \qquad \text{or} \qquad {q} = \frac{\nu+d+2}{\nu+d}
\end{align}
We can also rewrite the $\nu^{-1} \, (x - {\mathbf{\mu}})^T {\mathbf{\Sigma}}^{-1} (x-{\mathbf{\mu}}) $ using natural parameters corresponding to $\{x, x^2\}$ sufficient statistics as in the Gaussian case (see, e.g. \citet{matsuzoe2015deformed} Example 4).
Note that the Student-$t$ distribution has heavier tails than a standard Gaussian, and reduces to a multivariate Gaussian as ${q} \rightarrow 1$ and $\exp_{{q}}(u) \rightarrow \exp(u)$. This corresponds to observing $n\rightarrow \infty$ samples, so that the sample mean and variance approach the ground truth \cite{murphy2007conjugate}.
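These relations are easy to exercise numerically. The sketch below (NumPy assumed; $\nu = 2$, $d = 1$ chosen arbitrarily) checks the $\nu \leftrightarrow q$ conversions, the 1-d special case $\nu = (3-q)/(q-1)$, and that the 1-d density with the stated normalizer integrates to one:

```python
import math
import numpy as np

def q_from_nu(nu, d=1):
    """Order q = (nu + d + 2)/(nu + d)."""
    return (nu + d + 2) / (nu + d)

def nu_from_q(q, d=1):
    """Inverse relation nu = (d - d q + 2)/(q - 1)."""
    return (d - d * q + 2) / (q - 1)

nu, d = 2.0, 1
q = q_from_nu(nu, d)                 # = 5/3 for nu = 2, d = 1
nu_back = nu_from_q(q, d)
one_d_special = (3 - q) / (q - 1)    # 1-d formula, should recover nu

# 1-d Student-t: (1/Z) [1 + x^2/nu]^{-(nu+1)/2}, Z = Gamma(nu/2) sqrt(nu pi) / Gamma((nu+1)/2)
Z = math.gamma(nu / 2) * math.sqrt(nu * math.pi) / math.gamma((nu + 1) / 2)
x = np.linspace(-2000.0, 2000.0, 4_000_001)   # wide grid for the heavy tails
pdf = (1 + x**2 / nu) ** (-(nu + 1) / 2) / Z
total = np.sum(pdf) * (x[1] - x[0])
```

The wide integration range is needed because the $\nu = 2$ tails decay only as $x^{-3}$.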
\begin{figure}[t]
\centering
\subfigure[$q=0$] {\includegraphics[trim={0 1.2cm 0 1cm},clip,width=0.19\textwidth]{sections/figs/gif/ridge_01.pdf}}
\subfigure[$q=0.5$]{\includegraphics[trim={0 1.2cm 0 1cm},clip,width=0.19\textwidth]{sections/figs/gif/ridge_09.pdf}}
\subfigure[$q=0.9$]{\includegraphics[trim={0 1.2cm 0 1cm},clip,width=0.2\textwidth]{sections/figs/gif/ridge_18.pdf}}
\subfigure[$q=1$]{\includegraphics[trim={0 1.2cm 0 1cm},clip,width=0.19\textwidth]{sections/figs/gif/ridge_20.pdf}}
\subfigure[$q=2$]{\includegraphics[trim={0 .1cm 0.25cm 0 },clip,width=0.19\textwidth]{sections/figs/gif/gaussian_2.pdf}}
\caption{Intermediate densities between $\mathcal{N}(-4, 3)$ and $\mathcal{N}(4,1)$ for various $q$-paths and 10 equally spaced $\beta$. The path approaches a mixture of Gaussians with weight $\beta$ at $q=0$. For the geometric mixture ($q=1$), intermediate $\pi_{\beta}$ stay within the exponential family since both $\pi_0$, $\pi_T$ are Gaussian.}
\label{fig:alpha_path22}
\label{fig:gaussian_path}
\vspace*{-.15cm}
\end{figure}%
\begin{figure}[t]
\centering
\includegraphics[trim={0 0 0 0 },clip,width=0.99\textwidth]{sections/figs/student_t.pdf}
\caption{Intermediate densities between Student-$t$ distributions, $t_{\nu = 1}(-4, 3)$ and $t_{\nu = 1}(4,1)$, for various $q$-paths and 10 equally spaced $\beta$.
Note that $\nu=1$ corresponds to $q=2$, so that the $q=2$ path stays within the $q$-exponential family.\\[3ex]
We provide code to reproduce experiments at \url{https://github.com/vmasrani/q_paths}.
}
\label{fig:student_path}
\vspace*{-.15cm}
\end{figure}
\subsection{Annealing between 1-d Student-$t$ Distributions}
Since the Student-$t$ family generalizes the Gaussian distribution to $q \neq 1$, we can run a similar experiment annealing between two Student-$t$ distributions. We set $q=2$, which corresponds to $\nu = 1$ with $\nu = (3-{q}) / ({q}-1)$, and use the same mean and variance as the Gaussian example in Fig. \ref{fig:alpha_path22}, with $\pi_0(z) = t_{\nu=1}( -4, 3)$ and $\pi_1(z) = t_{\nu=1}( 4, 1)$.
We visualize the results in Fig. \ref{fig:student_path}. For this special case of both endpoint distributions within a parametric family, we can ensure that the $q=2$ path stays within the $q$-exponential family of Student-$t$ distributions. We make a similar observation for the Gaussian case and $q=1$ in Fig. \ref{fig:gaussian_path}. Comparing the $q=0.5$ and $q=0.9$ Gaussian path with the $q=1.0$ and $q=1.5$ path, we observe that mixing behavior appears to depend on the relation between the $q$-path parameter and the order of the $q$-exponential family of the endpoints.
As $q \rightarrow \infty$, the power mean \eqref{eq:abstract_mean} approaches the $\min$ operation as $1-q \rightarrow -\infty$. In the Gaussian case, we see that, even at $q=2$, intermediate densities for all $\beta$ appear to concentrate in regions of low density under both $\pi_0$ and $\pi_T$. However, for the heavier-tailed Student-$t$ distributions, we must raise the $q$-path parameter significantly to observe similar behavior.
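The intermediate densities plotted above can be reproduced with a few lines. A minimal sketch (NumPy assumed; grid and endpoint parameters follow Fig.~\ref{fig:gaussian_path}) of the unnormalized power-mean path:

```python
import numpy as np

def q_path(p0, p1, beta, q):
    """Unnormalized power-mean intermediate [(1-b) p0^(1-q) + b p1^(1-q)]^(1/(1-q))."""
    if q == 1.0:
        return p0 ** (1 - beta) * p1 ** beta       # geometric path (q -> 1 limit)
    return ((1 - beta) * p0 ** (1 - q) + beta * p1 ** (1 - q)) ** (1 / (1 - q))

z = np.linspace(-10.0, 10.0, 1001)
p0 = np.exp(-(z + 4) ** 2 / (2 * 3.0)) / np.sqrt(2 * np.pi * 3.0)   # N(-4, 3)
p1 = np.exp(-(z - 4) ** 2 / 2.0) / np.sqrt(2 * np.pi)               # N(4, 1)
beta = 0.5

mixture   = q_path(p0, p1, beta, q=0.0)     # arithmetic mixture at q = 0
geometric = q_path(p0, p1, beta, q=1.0)
near_one  = q_path(p0, p1, beta, q=0.999)   # q -> 1 approaches the geometric path
```

At $q=0$ the path is exactly the $\beta$-weighted mixture, and as $q \to 1$ it converges to the geometric path wherever the densities are non-negligible.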
\subsection{Endpoints within a Parametric Family}
If the two endpoints $\pi_0, \tilde{\pi}_1$ are within a $q$-exponential family, we can show that each intermediate distribution along the $q$-path of the same order is also within this $q$-family. However, we cannot make such statements for general endpoint distributions or members of different $q$-exponential families.
\paragraph{Exponential Family Case}
We assume potentially vector-valued parameters $\theta = \{ \theta_i \}_{i=1}^N$ with multiple sufficient statistics $\phi(z) = \{ \phi_i(z) \}_{i=1}^N$, so that $\theta \cdot \phi(z) = \sum_{i=1}^N \theta_i \phi_i(z)$.
For a common base measure $g(z)$, let $\pi_0(z) = g(z) \, \exp\{ \theta_0 \cdot \phi(z) \}$ and $\tilde{\pi}_1(z) = g(z) \, \exp \{ \theta_1 \cdot \phi(z) \}$. Taking the geometric mixture,
\begin{align}
\tilde{\pi}_\beta(z) &= \exp \big\{ (1-\beta) \, \log \pi_0(z) + \beta \, \log \tilde{\pi}_1(z) \big\} \\
&= \exp \big \{ \log g(z) + (1-\beta) \, \theta_0 \cdot \phi(z) + \beta \, \theta_1 \cdot \phi(z) \big \} \\
&= g(z) \exp \big \{ \big( (1-\beta) \, \theta_0 + \beta \, \theta_1 \big) \cdot \phi(z) \big \}
\end{align}
which, after normalization, will be a member of the exponential family with natural parameter $(1-\beta) \, \theta_0 + \beta \, \theta_1$.
\paragraph{$q$-Exponential Family Case} For a common base measure $g(z)$, let $\pi_0(z) = g(z) \, \exp_q \{ \theta_0 \cdot \phi(z) \}$ and $\tilde{\pi}_1(z) = g(z) \, \exp_q \{ \theta_1 \cdot \phi(z) \}$. The $q$-path intermediate density becomes
\begin{align}
\tilde{\pi}^{(q)}_\beta(z) &= \big[ (1-\beta) \, \pi_0(z)^{1-q} + \beta \, \tilde{\pi}_1(z)^{1-q} \big]^{\frac{1}{1-q}} \\
&= \big[ (1-\beta) \, g(z)^{1-q} \, \exp_q \{ \theta_0 \cdot \phi(z) \}^{1-q} + \beta \, g(z)^{1-q} \,\exp_q \{ \theta_1 \cdot \phi(z) \} ^{1-q} \big]^{\frac{1}{1-q}} \\
&= \bigg[ g(z)^{1-q} \bigg ( (1-\beta) \, [1 + (1-q)( \theta_0 \cdot \phi(z))]^{\frac{1-q}{1-q}} + \beta \, [1 + (1-q)( \theta_1 \cdot \phi(z))]^{\frac{1-q}{1-q}} \bigg) \bigg]^{\frac{1}{1-q}} \nonumber \\
&= g(z) \bigg[ 1 + (1-q) \bigg( \big((1-\beta) \, \theta_0 + \beta \, \theta_1 \big) \cdot \phi(z) \bigg) \bigg]^{\frac{1}{1-q}} \\
&= g(z) \exp_q \big\{ \big((1-\beta) \, \theta_0 + \beta \, \theta_1 \big) \cdot \phi(z) \big\}
\end{align}
which has the form of an unnormalized $q$-exponential family density with parameter $(1-\beta) \, \theta_0 + \beta \, \theta_1$.
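The closure property can be confirmed numerically for a toy one-dimensional $q$-family. The sketch below (NumPy assumed; base measure, sufficient statistic, and natural parameters are arbitrary choices that keep the $[\cdot]_+$ truncation inactive):

```python
import numpy as np

def exp_q(u, q):
    """q-exponential [1 + (1-q) u]_+^{1/(1-q)}."""
    return np.maximum(1 + (1 - q) * u, 0.0) ** (1 / (1 - q))

q, beta = 1.5, 0.3
z = np.linspace(-3.0, 3.0, 601)
g = np.exp(-z**2)            # shared base measure g(z)
phi = z**2                   # single sufficient statistic phi(z)
th0, th1 = -0.4, -0.9        # natural parameters (negative keeps the bracket positive)

pi0 = g * exp_q(th0 * phi, q)
pi1 = g * exp_q(th1 * phi, q)

# power-mean intermediate vs. q-family member with interpolated natural parameter
lhs = ((1 - beta) * pi0 ** (1 - q) + beta * pi1 ** (1 - q)) ** (1 / (1 - q))
rhs = g * exp_q(((1 - beta) * th0 + beta * th1) * phi, q)
```

The two expressions agree pointwise, matching the algebraic derivation above.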
\section{Derivation of Path $q$-Family}\label{app:q_path}
With normalization, the $q$-exponential family is usually written as
\begin{align}
\pi_{t,q}(z) &= \pi_0(z) \, \exp_q \big\{ \eta \cdot \phi(z) - \psi_q(\eta) \big\} \label{eq:lnq_fam_form1}
\end{align}
which is suggestive of the convex conjugate for $\psi_q$ via
\begin{align}
\ln_q \frac{\pi_{t,q}(z) }{\pi_0(z)} = \eta \cdot \phi(z) - \psi_q(\eta)
\end{align}
We now derive the $q$-exponential family form from the $\alpha$-mixture between $\pi_0$ and $\pi_1$ (with $\alpha$ playing the role of the path parameter $q$ below), where we will obtain sufficient statistic $\phi(z) = \ln_{\alpha} \frac{\pi_1(z)}{\pi_0(z)}$.
\begin{proof}
\begin{align}
\pi_{\alpha, \beta}(z) &\propto \bigg[(1 - \beta) \pi_0^{1 - \alpha} + \beta\pi_1^{1 - \alpha}\bigg]^\frac{1}{1 - \alpha}\\
&= \bigg[\pi_0^{1 - \alpha} + \beta\big(\pi_1^{1 - \alpha} - \pi_0^{1 - \alpha}\big)\bigg]^\frac{1}{1 - \alpha}\\
&= \pi_0 \left[1 + \beta\left(\left(\frac{\pi_1}{\pi_0}\right)^{1 - \alpha} - 1\right)\right]^\frac{1}{1 - \alpha}\\
&= \pi_0 \left[1 + (1-\alpha)\beta\ln_\alpha\left(\frac{\pi_1}{\pi_0}\right)\right]^\frac{1}{1 - \alpha}\\
&= \pi_0 \exp_\alpha\left[\beta\ln_\alpha\left(\frac{\pi_1}{\pi_0}\right)\right].
\end{align}
Defining $\phi(z) = \ln_{\alpha} \frac{\pi_1(z)}{\pi_0(z)}$ and normalizing, we arrive at
\begin{align}
\pi_{\alpha, \beta}(z) = \frac{\pi_0}{Z_\alpha(\beta)}\exp_\alpha\left[\beta \phi(z) \right] \quad
Z_\alpha(\beta) &:= \int dz \pi_0 \exp_\alpha\left[\beta \phi(z) \right].\label{eq:lnq_fam_form2}
\end{align}
To get \cref{eq:lnq_fam_form2} into the standard $q$-exponential family form, we define
\begin{align}
\psi_\alpha := \ln_\alpha^* \left(Z_\alpha(\beta)\right) \label{eq:psi_alpha}
\end{align}
and rewrite \cref{eq:lnq_fam_form2} using \cref{eq:q_exp_prod}
\begin{align}
\pi_{\alpha, \beta}(z) &= \frac{\pi_0}{Z_\alpha(\beta)}\exp_\alpha\left[\beta \phi(z) \right] \\
&= \frac{\pi_0}{\exp^*_\alpha\left(\psi_\alpha\right)}\exp_\alpha\left(\beta \phi(z) \right) & \text{Using \cref{eq:psi_alpha}}\\
&= \pi_0 \exp_\alpha\left(-\psi_\alpha\right) \exp_\alpha\left(\beta \phi(z)\right) & \text{Using $\exp^*_\alpha(u) = 1/\exp_\alpha(-u)$}\\
&= \pi_0 \exp_\alpha\left(-\psi_\alpha + \bigg(1 - (1 - \alpha) \psi_\alpha\bigg)\beta\phi(z)\right) & \text{Using \cref{eq:q_exp_prod}}\\
&= \pi_0 \exp_\alpha\left(-\psi_\alpha + \eta \phi(z)\right)
\end{align}
where we have defined
\begin{align}
\eta := \beta \cdot \big(1 - (1 - \alpha) \psi_\alpha\big).
\end{align}
\end{proof}
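Since every step above is an exact identity, the reparameterization can be verified pointwise on a grid. A sketch with $\alpha = 0.5$ and Gaussian endpoints (NumPy assumed; the value of $Z_\alpha(\beta)$ enters only through $\psi_\alpha$, so a simple Riemann-sum integral suffices):

```python
import numpy as np

alpha, beta = 0.5, 0.4

def exp_a(u):
    """alpha-exponential [1 + (1-alpha) u]_+^{1/(1-alpha)}."""
    return np.maximum(1 + (1 - alpha) * u, 0.0) ** (1 / (1 - alpha))

def ln_a(u):
    """alpha-logarithm (u^{1-alpha} - 1)/(1 - alpha)."""
    return (u ** (1 - alpha) - 1) / (1 - alpha)

z = np.linspace(-12.0, 12.0, 24_001)
pi0 = np.exp(-(z + 1) ** 2 / 2) / np.sqrt(2 * np.pi)   # N(-1, 1)
pi1 = np.exp(-(z - 1) ** 2 / 2) / np.sqrt(2 * np.pi)   # N(+1, 1)
phi = ln_a(pi1 / pi0)

Z = np.sum(pi0 * exp_a(beta * phi)) * (z[1] - z[0])    # Z_alpha(beta)
psi = -ln_a(1 / Z)                                     # so that exp_a(-psi) = 1/Z
eta = beta * (1 - (1 - alpha) * psi)

lhs = pi0 * exp_a(beta * phi) / Z                      # normalized path member
rhs = pi0 * exp_a(-psi + eta * phi)                    # standard-form parameterization
```

The agreement holds for any $Z > 0$, since the rescaling of $\beta$ into $\eta$ exactly compensates for pulling $\exp_\alpha(-\psi_\alpha)$ inside the $\alpha$-exponential.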
\section{Alternative Derivation}
Note that $\exp_q \{ x + y \} = \exp_q \{ y \} \cdot \exp_q \big\{ \frac{x}{1 + (1-q) y} \big\}$ by \cref{lemma:q_exp_sum}, instead of $\exp \{ x + y \} = \exp \{ x \} \cdot \exp \{ y \} $ for the standard exponential. This implies that
\begin{align}
\pi_{q,\beta} = \pi_0 \, \exp_q \{ \beta \cdot \phi(z) - \psi_q \} &= \pi_0 \, \exp_q\{ -\psi_q \} \exp_q \{ \frac{\beta}{1+(1-q)(-\psi_q)} \cdot \phi(z) \}
\end{align}
Our goal is to express $\pi_{q,\beta}(z)$ using a normalization constant $Z$ instead of the $q$-free energy $\psi_q(\beta)$. While the exponential family allows us to freely move between $\log Z_{\beta}$
and $\psi(\beta)$, in the $q$-exponential case, we must adjust the parameters. Defining
\begin{align}
\eta &= \frac{\beta}{1+(1-q)(-\psi_q)} \\
Z_q(\eta) &= \frac{1}{\exp_q \{-\psi_q \}} := \exp_{q}^{*}\{ -\psi_q\}
\end{align}
we obtain a new parameterization of the $q$-exponential family, using parameters $\eta$ and multiplicative normalization constant $Z_q(\eta)$, as
\begin{align}
\pi_{q,\beta} = \pi_{q,\eta} &= \frac{1}{Z_q(\eta)} \exp_{q} \{ \eta \cdot \phi(z) \}
\end{align}
See \citet{matsuzoe2019normalization} for more detailed discussion of normalization in deformed exponential families.
\section{\citet{zhang2013nonparametric} Geodesics }
This section closely follows \citet{zhang2013nonparametric} Sec. 2.2-2.3.
We will seek to show that our $q$-path
\begin{align}
\ln_q \big(\tilde{\pi}_{t, q}(z)\big) = (1-t) \ln_q \big(\tilde{\pi}_{0}(z)\big) + t \, \ln_q \big(\tilde{\pi}_{1}(z) \big)
\end{align}
forms a geodesic for the affine connection induced by the $\alpha$-divergence $D_{\alpha}$ over unnormalized measures.
Consider the $\rho, \tau$ embeddings
\begin{align}
\rho(\tilde{\pi}(z)) = {\frac{2}{1-\alpha}} \big( \tilde{\pi}(z)^{\frac{1-\alpha}{2}} - 1 \big) \qquad \tau(\tilde{\pi}) = {\frac{2}{1+\alpha}} \big( \tilde{\pi}(z)^{\frac{1+\alpha}{2}} - 1 \big)
\end{align}
which are conjugate with respect to the function
\begin{align}
f(\rho) = {\frac{2}{1+\alpha}} \big( 1 + {\frac{1-\alpha}{2}} \cdot \rho)^{\frac{2}{1-\alpha}}
\end{align}
where conjugacy means that $\tau(\tilde{\pi}(z)) = f^{\prime}\big(\rho(\tilde{\pi}(z))\big)$.
This induces a family of $\alpha$-divergences $D^{(\alpha)}_{f,\rho}$, where $\alpha$ is intended to reflect \textit{referential duality}. Our use of $\alpha$ in the $\rho, \tau$ embedding functions, or $q$ in the $\ln_q$ path, rather reflects \textit{representational duality} in terms of the $\alpha$-representation (e.g. \citet{amari2010divergencefunctions}).
We obtain the flat divergence, which integrates an expression of Bregman form
\begin{align}
D^{(1)}_{f, \rho}[\tilde{\pi}_a:\tilde{\pi}_b] = \int f\big(\rho(\tilde{\pi}_a(z)) \big) - f\big(\rho(\tilde{\pi}_b(z)) \big) + \big( \rho(\tilde{\pi}_a(z)) - \rho(\tilde{\pi}_b(z)) \big) \cdot \tau(\tilde{\pi}_b) dz
\end{align}
which can be shown to correspond to Amari's $\alpha$ divergence for the choice of $f, \rho$ above.
\begin{align}
D^{(1)}_{f, \rho}[\tilde{\pi}_a:\tilde{\pi}_b] &= D_{\alpha}[\tilde{\pi}_a(z):\tilde{\pi}_b(z) ] \\
&= \frac{4}{(1-\alpha^2)} \bigg( \frac{1-\alpha}{2} \int \tilde{\pi}_a(z) \,dz + \frac{1+\alpha}{2} \int \tilde{\pi}_b(z) \,dz -\int \tilde{\pi}_a(z)^{\frac{1-\alpha}{2}} \, \tilde{\pi}_b(z)^{\frac{1+\alpha}{2}} dz \bigg)
\end{align}
As shown in Proposition 2 and Corollary 3 of \citet{zhang2013nonparametric}, we can define an information manifold $(\mathcal{M}_+, g_{ij}^{(D^{(1)}_{f,\rho})}, \nabla^{(1)}, \nabla^{(-1)})$. See \citet{zhang2013nonparametric} for additional details. We consider only the affine connection $\nabla^{(1)}$ in what follows, which is a special case of the $\alpha$-connection for the above manifold
\begin{align}
\nabla^{(\alpha)}_{\dot{\gamma}} \dot{\gamma} = (d_{\dot{\gamma}} \dot{\gamma})_{\gamma_t} + \frac{d}{d\gamma} \big( {\frac{1+\alpha}{2}} \log \rho^{\prime}(\gamma(t)) + {\frac{1-\alpha}{2}} \log \tau^{\prime}(\gamma(t)) \big) \label{eq:affine_alpha}
\end{align}
\subsection{$q$-Path as Geodesic}
Our $q$-path distributions $\tilde{\pi}_{t, q}$, along a curve parameterized by $t$, are defined using the power mean with function $\rho(u)$, and are thus linear in the $\rho$ representation
\begin{align}
\rho\big( \gamma(t)\big) = \rho\big(\tilde{\pi}_{t, q}(z)\big) = (1-t) \rho\big(\tilde{\pi}_{0}(z)\big) + t \, \rho\big(\tilde{\pi}_{1}(z) \big)
\end{align}
where $\rho(u) = \ln_q(u)$ for $\alpha = 2q-1$.
We seek to show that our $q$-path is a geodesic with respect to the affine connection $\nabla^{(1)}$, or an autoparallel path. Let $\gamma_t = \gamma(t) = \tilde{\pi}_{t,q}(z)$ indicate the $q$-path distribution with mixing parameter $t$, with $\dot{\gamma} = d \gamma_t / dt$.
Plugging $\alpha = 1$ into \eqref{eq:affine_alpha}, and with $ (d_{\dot{\gamma}} \dot{\gamma})_{\gamma_t} = \frac{d \dot{\gamma}(t)}{d\gamma_t} \cdot \dot{\gamma}$ , we seek to show that $\nabla^{(1)}_{\dot{\gamma}} \dot{\gamma} = 0$, where
\begin{align*}
\nabla^{(1)}_{\dot{\gamma}} \dot{\gamma} = \frac{d \dot{\gamma}}{d\gamma} \cdot \dot{\gamma} + \big(\dot{\gamma}\big)^{2} \cdot \frac{d}{d\gamma} \big( \log \rho^{\prime}(\gamma(t)) \big)
\end{align*}
We will first derive several useful quantities. Since it is easier to differentiate $\rho(\gamma(t)) = (1-t) \rho(\pi_0) + t \, \rho(\tilde{\pi}_1)$ with respect to $\gamma$, we note that $\frac{d}{dt} \rho\big( \gamma(t) \big) = \frac{d \rho(\gamma) }{d\gamma(t)} \frac{d\gamma(t)}{dt}$. We can then rewrite the desired $\frac{d\gamma(t)}{dt}$ as
\begin{align}
\dot{\gamma} = \frac{d\gamma(t)}{dt} &= \bigg( \frac{d \rho(\gamma_t) }{d\gamma(t)} \bigg)^{-1} \, \frac{d \rho(\gamma_t) }{dt} = \gamma_t^{\frac{1+\alpha}{2}} \cdot \big( \rho(\tilde{\pi}_1) - \rho(\pi_0) \big) \\[1.4ex]
\implies \dot{\gamma} &=\gamma_t \cdot \big( \rho(\tilde{\pi}_1) - \rho(\pi_0) \big) \quad (\text{for $\alpha = 1$}) \label{eq:dgamma_dt}
\end{align}
since
\begin{align}
\frac{d \rho(\gamma_t) }{d\gamma_t} &= \frac{d}{d \gamma_t} {\frac{2}{1-\alpha}} \big( \gamma_t^{\frac{1-\alpha}{2}} - 1 \big) = \gamma_t^{{\frac{1-\alpha}{2}} - 1} = \gamma_t^{\frac{-(1+\alpha)}{2}} \\
\text{and } \quad \frac{d \rho(\gamma_t) }{dt} &= \frac{d}{dt} \big( (1-t) \rho(\pi_0) + t \, \rho(\tilde{\pi}_1) \big) = \rho(\tilde{\pi}_1) - \rho(\pi_0)
\end{align}
We can now differentiate the $\frac{d}{d\gamma} \big( \log \rho^{\prime}(\gamma(t)) \big) $ term in the affine connection. From above, $d \rho(\gamma_t) / d\gamma(t) = \gamma_t^{\frac{-(1+\alpha)}{2}}$, so that
\begin{align}
\frac{d}{d\gamma} \big( \log \frac{d}{d\gamma} \rho(\gamma(t)) \big) &= \frac{d}{d\gamma} \log \big( \gamma_t^{\frac{-(1+\alpha)}{2}} \big) = -\frac{(1+\alpha)}{2} \cdot \frac{d}{d\gamma} \log \gamma_t \\
&\phantom{\frac{d}{d\gamma} \log \big( \gamma_t^{\frac{-(1+\alpha)}{2}} \big)} = -\frac{(1+\alpha)}{2} \gamma_t^{-1} \\
\implies \frac{d}{d\gamma} \big( \log \rho^{\prime}(\gamma(t)) \big) &= -\gamma_t^{-1} \quad (\text{for $\alpha=1$}) \label{eq:db_dgamma}
\end{align}
Putting it all together, and with dot products turning into scalar products for our one-dimensional curve $t$, we have
\begin{align}
\nabla^{(1)}_{\dot{\gamma}} \dot{\gamma} &= \frac{d \dot{\gamma}}{d\gamma_t} \cdot \dot{\gamma} + \big(\dot{\gamma}\big)^{2} \frac{d}{d\gamma} \big( \log \rho^{\prime}(\gamma(t)) \big) \\
&= \frac{d }{d\gamma_t} \bigg( \gamma_t \cdot \big( \rho(\tilde{\pi}_1) - \rho(\pi_0) \big) \bigg) \cdot \dot{\gamma} - (\dot{\gamma})^{2} \gamma_t^{-1}
\qquad \quad \text{(using \eqref{eq:dgamma_dt} and \eqref{eq:db_dgamma})} \\
&= \dot{\gamma} \cdot \big( \rho(\tilde{\pi}_1) - \rho(\pi_0) \big) - (\dot{\gamma})^{2} \cdot {\dot{\gamma}}^{-1} \cdot \big( \rho(\tilde{\pi}_1) - \rho(\pi_0) \big) \qquad \text{(rearranging \eqref{eq:dgamma_dt})} \\
&= \dot{\gamma} \cdot \big( \rho(\tilde{\pi}_1) - \rho(\pi_0) \big) - \dot{\gamma} \cdot \big( \rho(\tilde{\pi}_1) - \rho(\pi_0) \big) \\
&=0
\end{align}
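As an illustrative numerical check of the cancellation above (not part of the original derivation), the autoparallel condition can be verified by finite differences. For $\rho = \ln_q$ we have $\rho^{\prime}(u) = u^{-q}$, so $\frac{d}{d\gamma} \log \rho^{\prime}(\gamma) = -q/\gamma$ and the condition reads $\ddot{\gamma} - q \, \dot{\gamma}^2 / \gamma = 0$ pointwise along the $q$-path; $q = 1$ recovers the $\alpha = 1$ computation above. A minimal NumPy sketch with hypothetical endpoint density values at a fixed $z$:

```python
import numpy as np

pi_0, pi_1 = 0.3, 1.7  # endpoint density values at a fixed z (hypothetical numbers)

def ln_q(u, q):
    return np.log(u) if q == 1 else (u ** (1 - q) - 1) / (1 - q)

def exp_q(u, q):
    return np.exp(u) if q == 1 else (1 + (1 - q) * u) ** (1 / (1 - q))

def gamma(t, q):
    # q-path: linear interpolation in the rho = ln_q representation
    return exp_q((1 - t) * ln_q(pi_0, q) + t * ln_q(pi_1, q), q)

t, h = 0.4, 1e-4
for q in [1.0, 0.5, 2.0]:
    g = gamma(t, q)
    gd = (gamma(t + h, q) - gamma(t - h, q)) / (2 * h)      # central difference
    gdd = (gamma(t + h, q) - 2 * g + gamma(t - h, q)) / h ** 2
    residual = gdd - q * gd ** 2 / g                        # autoparallel condition
    assert abs(residual) < 1e-5, (q, residual)
```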
\clearpage
\begin{figure}[t]
\centering
\subfigure[$q=0$]{\includegraphics[trim={0 1.2cm 0 1cm},clip,width=0.24\textwidth]{sections/figs/gif/ridge_01.pdf}}
\subfigure[$q=0.5$]{\includegraphics[trim={0 1.2cm 0 1cm},clip,width=0.24\textwidth]{sections/figs/gif/ridge_09.pdf}}
\subfigure[$q=0.9$]{\includegraphics[trim={0 1.2cm 0 1cm},clip,width=0.24\textwidth]{sections/figs/gif/ridge_18.pdf}}
\subfigure[$q=1$]{\includegraphics[trim={0 1.2cm 0 1cm},clip,width=0.24\textwidth]{sections/figs/gif/ridge_20.pdf}}
\subfigure[$q=1.1$]{\includegraphics[trim={0 1.2cm 0 1cm},clip,width=0.23\textwidth]{sections/app/draft_figs/ridge_12_redo.pdf}}
\subfigure[$q=1.5$]{\includegraphics[trim={0 1.2cm 0 1cm},clip,width=0.23\textwidth]{sections/app/draft_figs/ridge_2_redo.pdf}}
\subfigure[$q=2$]{\includegraphics[trim={0 1.2cm 0 1cm},clip,width=0.23\textwidth]{sections/app/draft_figs/ridge_3_redo.pdf}}%
\subfigure[$q=3$]{\includegraphics[trim={0 1.2cm 0 1cm},clip,width=0.23\textwidth]{sections/app/draft_figs/ridge_4_redo.pdf}}
\caption{Intermediate densities between $\mathcal{N}(-4, 3)$ and $\mathcal{N}(4,1)$ for various $q$-paths and 10 equally spaced $\beta$. The path approaches a mixture of Gaussians with weight $\beta$ at $q=0$. For the geometric mixture ($q=1$), intermediate $\pi_{\beta}$ stay within the exponential family since both $\pi_0$, $\pi_T$ are Gaussian.}
\label{fig:alpha_path}
\vspace*{-.15cm}
\end{figure}
\begin{figure}[t]
\centering
\subfigure[$q=0$]{\includegraphics[trim={0 1.2cm 0 1cm},clip,width=0.24\textwidth]{sections/app/draft_figs/ridge_alpha-1_student_df2_2.pdf}}
\subfigure[$q=0.5$]{\includegraphics[trim={0 1.2cm 0 1cm},clip,width=0.24\textwidth]{sections/app/draft_figs/ridge_alpha05_student_df2_2.pdf}}
\subfigure[$q=0.95$]{\includegraphics[trim={0 1.2cm 0 1cm},clip,width=0.24\textwidth]{sections/app/draft_figs/ridge_alpha95_student_df2_2.pdf}}
\subfigure[$q=1$]{\includegraphics[trim={0 1.2cm 0 1cm},clip,width=0.24\textwidth]{sections/app/draft_figs/ridge_alpha1_student_df2_2.pdf}}
\subfigure[$q=1.1$]{\includegraphics[trim={0 1.2cm 0 1cm},clip,width=0.23\textwidth]{sections/app/draft_figs/ridge_alpha11_student_df2_2.pdf}}
\subfigure[$q=1.5$]{\includegraphics[trim={0 1.2cm 0 1cm},clip,width=0.23\textwidth]{sections/app/draft_figs/ridge_alpha15_student_df2_2.pdf}}
\subfigure[$q=2$]{\includegraphics[trim={0 1.2cm 0 1cm},clip,width=0.23\textwidth]{sections/app/draft_figs/ridge_alpha2_student_df2_2.pdf}}%
\subfigure[$q=3$]{\includegraphics[trim={0 1.2cm 0 1cm},clip,width=0.23\textwidth]{sections/app/draft_figs/ridge_alpha3_student_df2_2.pdf}}
\caption{Intermediate densities between $\text{Student}(\text{df}=2, -4, 3)$ and $\text{Student}(\text{df}=2, 4, 1)$ for various $q$-paths and 10 equally spaced $\beta$.}
\label{fig:q_path}
\vspace*{-.15cm}
\end{figure}
\begin{figure}[t]
\centering
\subfigure[$q=0$]{\includegraphics[trim={0 1.2cm 0 1cm},clip,width=0.24\textwidth]{sections/app/draft_figs/ridge_alpha-1_student_df2_2.pdf}}
\subfigure[$q=0.5$]{\includegraphics[trim={0 1.2cm 0 1cm},clip,width=0.24\textwidth]{sections/app/draft_figs/ridge_alpha05_student_df2_2.pdf}}
\subfigure[$q=0.95$]{\includegraphics[trim={0 1.2cm 0 1cm},clip,width=0.24\textwidth]{sections/app/draft_figs/ridge_alpha95_student_df2_2.pdf}}
\subfigure[$q=1$]{\includegraphics[trim={0 1.2cm 0 1cm},clip,width=0.24\textwidth]{sections/app/draft_figs/ridge_alpha1_student_df2_2.pdf}}
\subfigure[$q=1.1$]{\includegraphics[trim={0 1.2cm 0 1cm},clip,width=0.23\textwidth]{sections/app/draft_figs/ridge_alpha11_student_df2_2.pdf}}
\subfigure[$q=1.5$]{\includegraphics[trim={0 1.2cm 0 1cm},clip,width=0.23\textwidth]{sections/app/draft_figs/ridge_alpha15_student_df2_2.pdf}}
\subfigure[$q=2$]{\includegraphics[trim={0 1.2cm 0 1cm},clip,width=0.23\textwidth]{sections/app/draft_figs/ridge_alpha2_student_df2_2.pdf}}%
\subfigure[$q=3$]{\includegraphics[trim={0 1.2cm 0 1cm},clip,width=0.23\textwidth]{sections/app/draft_figs/ridge_alpha3_student_df2_2.pdf}}
\caption{Tail behavior for densities between $\text{Student}(\text{df}=2, -4, 3)$ and $\text{Student}(\text{df}=2, 4, 1)$ for various $q$-paths and 10 equally spaced $\beta$.}
\label{fig:q_path_tails}
\vspace*{-.15cm}
\end{figure}
\section{Sum and Product Identities for $q$-Exponentials}\label{app:q_sum_product}
In this section, we prove two lemmas which are useful for manipulating expressions involving $q$-exponentials, for example when moving between \cref{eq:normalization1} and \cref{eq:normalization2} in either direction.
\begin{lemma}
Sum identity
\begin{align}
\exp_q\left(\sum_{n=1}^N x_n\right) = \prod_{n=1}^{N} \exp_q \left(\frac{x_n}{1 + (1 - q)\sum_{i=1}^{n-1}x_i} \right)\label{eq:q_exp_sum}
\end{align}
\label{lemma:q_exp_sum}
\end{lemma}
\begin{lemma}
Product identity
\begin{align}
\prod_{n=1}^N \exp_q(x_n) = \exp_q\left(\sum_{n=1}^{N}x_n \cdot \prod_{i=1}^{n-1} \left(1 + (1 - q)x_i\right)\right)\label{eq:q_exp_prod}
\end{align}
\label{lemma:q_exp_prod}
\end{lemma}
\subsection{Proof of \cref{lemma:q_exp_sum}}
\begin{proof}
We prove by induction. The base case ($N=1$) is satisfied using the convention $\sum_{i=a}^bx_i = 0$ if $b < a$ so that the denominator on the \textsc{rhs} of \cref{eq:q_exp_sum} is $1$. Assuming \cref{eq:q_exp_sum} holds for $N$,
\begin{align}
\exp_q\left(\sum_{n=1}^{N+1} x_n\right) &= \left[ 1 + (1-q) \sum_{n=1}^{N+1} x_n \right]_{+}^{1/(1-q)} \\
&= \left[ 1 + (1-q) \left(\sum_{n=1}^{N} x_n\right) + (1-q)x_{N+1} \right]_{+}^{1/(1-q)} \\
&= \left[\left( 1 + (1-q) \sum_{n=1}^{N} x_n \right) \left(1 + (1-q)\frac{x_{N+1}}{1 + (1-q) \sum_{n=1}^{N} x_n}\right) \right]_{+}^{1/(1-q)} \\
&= \exp_q\left(\sum_{n=1}^N x_n\right) \exp_q \left(\frac{x_{N+1}}{1 + (1-q) \sum_{n=1}^{N} x_n} \right)\\
&= \prod_{n=1}^{N+1} \exp_q \left(\frac{x_n}{1 + (1-q)\sum_{i=1}^{n-1}x_i} \right) \text{(using the inductive hypothesis)}
\end{align}
\end{proof}
\subsection{Proof of \cref{lemma:q_exp_prod}}
\begin{proof}
We prove by induction. The base case ($N=1$) is satisfied using the convention $\prod_{i=a}^bx_i = 1$ if $b < a$. Assuming \cref{eq:q_exp_prod} holds for $N$, we will show the $N+1$ case. To simplify notation we define $y_N:=\sum_{n=1}^{N}x_n \cdot \prod_{i=1}^{n-1} \left(1 + (1 - q)x_i\right)$. Then,
\begin{align}
\prod_{n=1}^{N+1} \exp_q(x_n) &= \exp_q(x_{1})\left(\prod_{n=2}^{N+1}\exp_q(x_n)\right)\\
&= \exp_q(x_{0})\left(\prod_{n=1}^{N}\exp_q(x_n)\right) & \hspace*{-.5cm} \text{(reindex $n \to n - 1$)} \nonumber \\
&=\exp_q(x_{0})\exp_q(y_N) & \hspace*{-.5cm} \text{(inductive hypothesis)} \nonumber \\
&= \bigg[\left(1 + (1-q) \cdot x_{0}\right)\left(1 + (1-q) \cdot y_N\right) \bigg]_{+}^{1/(1-q)}\\
&= \bigg[1 + (1-q) \cdot x_{0} + \big(1 + (1-q) \cdot x_{0} \big)(1-q) \cdot y_N \bigg]_{+}^{1/(1-q)}\\
&= \bigg[1 + (1-q) \bigg(x_{0} + \big(1 + (1-q) \cdot x_{0} \big)y_N\bigg) \bigg]_{+}^{1/(1-q)}\\
&= \exp_q \left(x_{0} + \big(1 + (1-q) \cdot x_{0} \big)y_N\right)
\end{align}
Next we use the definition of $y_N$ and rearrange
\begin{align}
&= \exp_q \left(x_{0} + \big(1 + (1-q) \cdot x_{0} \big)\left(x_1 + x_2(1 + (1-q) \cdot x_1) + ... + x_N \cdot \prod_{i=1}^{N-1}(1 + (1-q) \cdot x_i)\right)\right) \nonumber\\
&= \exp_q\left(\sum_{n=0}^{N}x_n \cdot \prod_{i=1}^{n-1} \left(1 + (1-q) x_i\right)\right).
\end{align}
Then reindexing $n \to n + 1$ establishes
\begin{align}
\prod_{n=1}^{N+1} \exp_q(x_n) = \exp_q\left(\sum_{n=1}^{N+1}x_n \cdot \prod_{i=1}^{n-1} \left(1 + (1-q)x_i\right)\right).
\end{align}
\end{proof}
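Both identities are straightforward to verify numerically; a small NumPy sketch with randomly drawn inputs, kept small so that every $1 + (1-q)(\cdot)$ term stays positive:

```python
import numpy as np

def exp_q(u, q):
    return np.maximum(1 + (1 - q) * u, 0.0) ** (1.0 / (1 - q))

rng = np.random.default_rng(0)
q = 1.5
x = rng.uniform(-0.2, 0.2, size=5)  # small values keep 1 + (1-q)(.) positive

# Sum identity: exp_q(sum x_n) = prod_n exp_q(x_n / (1 + (1-q) sum_{i<n} x_i))
lhs = exp_q(x.sum(), q)
rhs = np.prod([exp_q(x[n] / (1 + (1 - q) * x[:n].sum()), q) for n in range(len(x))])
assert np.isclose(lhs, rhs)

# Product identity: prod_n exp_q(x_n) = exp_q(sum_n x_n prod_{i<n}(1 + (1-q) x_i))
lhs2 = np.prod(exp_q(x, q))
rhs2 = exp_q(sum(x[n] * np.prod(1 + (1 - q) * x[:n]) for n in range(len(x))), q)
assert np.isclose(lhs2, rhs2)
```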
\section{Annealing between Student-$t$ Distributions}\label{app:additional}
\paragraph{Student-$t$ Distribution}
The Student-$t$ distribution appears in hypothesis testing with finite samples, under the assumption that the sample mean follows a Gaussian distribution. In particular, the degrees of freedom parameter $\nu = n-1$ can be shown to correspond to an order of the $q$-exponential family with $\nu = (3-q) / (q-1)$ (in 1-d), so that the choice of $q$ is linked to the amount of data observed.
We can first write the multivariate Student-$t$ density, specified by a mean vector $\mu$, covariance $\Sigma$, and degrees of freedom parameter $\nu$, in $d$ dimensions, as
\begin{align}
t_{\nu}(x | \mu, \Sigma) = \frac{1}{Z(\nu, \Sigma)} \big[ 1 + \frac{1}{\nu} (x - \mu)^T \Sigma^{-1} (x-\mu) \big]^{-\big(\frac{\nu + d }{2}\big)} \label{eq:student}
\end{align}
where $Z(\nu, \Sigma) = \Gamma(\frac{\nu+d}{2})/\Gamma(\frac{\nu}{2}) \cdot |\Sigma|^{-1/2} \nu^{-\frac{d}{2}} \pi^{-\frac{d}{2}}$. Note that $\nu > 0$, so that we only have positive values raised to the $-(\nu+d)/2$ power, and the density is defined on the real line.
The power function in \eqref{eq:student} is already reminiscent of the $q$-exponential, while we have first and second moment sufficient statistics as in the Gaussian case. Solving for the exponent, or order of $\exp_{q}$, that corresponds to $-(\nu+d)/2$, we obtain
\begin{align}
-\big(\frac{\nu + d }{2}\big) = \frac{1}{1-q} \quad \implies \quad q = \frac{\nu+d+2}{\nu+d} \quad \text{or} \quad \nu = \frac{d - d q +2}{q - 1}
\end{align}
We can also rewrite the $\nu^{-1} \, (x - \mu)^T \Sigma^{-1} (x-\mu) $ using natural parameters corresponding to $\{x, x^2\}$ sufficient statistics as in the Gaussian case (see, e.g. \citet{matsuzoe2015deformed} Example 4).
Note that the Student-$t$ distribution has heavier tails than a standard Gaussian, and reduces to a multivariate Gaussian as $q \rightarrow 1$ and $\exp_{q}(u) \rightarrow \exp(u)$. This corresponds to observing $n\rightarrow \infty$ samples, so that the sample mean and variance approach the ground truth.
As shown in \citet{gelman2013bayesian} (Sec.~3.3, 3.6) or \citet{murphy2007conjugate} (Sec.~5, 9), the Student-$t$ distribution with $\nu = n - d + 1$ degrees of freedom arises as the posterior predictive for either the unknown mean $\mu$ or a new data observation $x$, after observing $n$ samples from a $d$-dimensional Gaussian and assuming an inverse Wishart prior for the covariance.
\begin{align}
p(\mu| x_{\leq n}, \kappa_n, \mu_n, S_n) = t_{\nu_n - d + 1} \big(\mu_n, \frac{S_n}{\kappa_n (\nu_n - d + 1) }\big)
\end{align}
where $\mu_n$ is the current estimate of the sample mean. The predictive for a new observation $x_{n+1}$ is similar, but with the variance scaled by $\frac{S_n \cdot (\kappa_n+1)}{\kappa_n (\nu_n - d + 1) }$. Conjugate updates are straightforward for all parameters, with $\kappa_n$ and $\nu_n$ tracking the number of observations, and $\mu_n$ and $S_n$ tracking the sufficient statistics of the observed samples.
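The $\nu \leftrightarrow q$ correspondence can be spot-checked numerically against a standard Student-$t$ implementation; the sketch below assumes SciPy's loc/scale parameterization, which matches \eqref{eq:student} with $d = 1$ and $\Sigma = \sigma^2$:

```python
import numpy as np
from scipy.stats import t as student_t

def exp_q(u, q):
    return np.maximum(1 + (1 - q) * u, 0.0) ** (1.0 / (1 - q))

nu, mu, sigma = 2.0, -4.0, np.sqrt(3.0)
d = 1
q = (nu + d + 2) / (nu + d)                # order of the q-exponential family
assert np.isclose((3 - q) / (q - 1), nu)   # 1-d special case quoted in the text

x = np.linspace(-10, 10, 101)
pdf = student_t.pdf(x, df=nu, loc=mu, scale=sigma)

# Unnormalized q-exponential form: exp_q of a negative quadratic, since
# 1 + (1-q) u = 1 + r^2 / nu for u = -((nu + d)/(2 nu)) r^2, r = (x - mu)/sigma
u = -((nu + d) / (2 * nu)) * ((x - mu) / sigma) ** 2
ratio = pdf / exp_q(u, q)                  # should equal the constant 1/Z(nu, sigma)
assert np.allclose(ratio, ratio[0])
```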
\paragraph{Pareto Distributions: Modeling Tail Events and Power Law Behavior}
The $q$-exponential family can also be used for modeling the \textit{tail} behavior of a distribution \citep{bercher2008new, bercher2008tsallis, vehtari2015pareto}, or, in other words, the probability of $p(x)$ restricted to $X > x_{\text{min}} $ and normalized.
For example, the generalized Pareto distribution is defined via the tail function
\begin{align}
P(X > x) = \begin{cases} \big(1 + \xi \frac{x-x_{\text{min}} }{\sigma} \big)^{-\frac{1}{\xi}} \quad \xi \neq 0 \\ \exp\{- \frac{x-x_{\text{min}} }{\sigma}\} \qquad \xi =0
\end{cases}
\end{align}
When $\xi \geq 0$, the domain is restricted to $x \geq x_{\text{min}}$, whereas when $\xi < 0$, the support is between $x_{\text{min}} \leq x \leq x_{\text{min}}-\frac{\sigma}{\xi}$. Writing the CDF as $1- P(X>x)$ and differentiating leads to
\begin{align}
p(x) = \frac{1}{\sigma}\big[1 + \xi \cdot \frac{x-x_{\text{min}}}{\sigma} \big]^{-\frac{1}{\xi}-1}
\end{align}
Solving $-\frac{1}{\xi}-1 = \frac{1}{1-q}$ in the exponent, we obtain $q = \frac{2\xi + 1}{\xi+1}$ or $\xi = \frac{q-1}{2-q}$.
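This relation can likewise be checked against a standard generalized Pareto implementation; the sketch below assumes SciPy's genpareto convention ($c = \xi$, scale $= \sigma$, loc $= x_{\text{min}}$) and uses the inverse relation $\xi = (q-1)/(2-q)$:

```python
import numpy as np
from scipy.stats import genpareto

def exp_q(u, q):
    return np.maximum(1 + (1 - q) * u, 0.0) ** (1.0 / (1 - q))

xi, sigma = 0.4, 2.0
q = (2 * xi + 1) / (xi + 1)
assert np.isclose((q - 1) / (2 - q), xi)   # inverse relation

y = np.linspace(0.0, 10.0, 101)            # y = x - x_min
pdf = genpareto.pdf(y, c=xi, scale=sigma)

# p(x) = (1/sigma) exp_q(-(1 + xi) y / sigma), since
# 1 + (1-q)(-(1+xi) y / sigma) = 1 + xi y / sigma with this q
assert np.allclose(pdf, exp_q(-(1 + xi) * y / sigma, q) / sigma)
```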
\begin{figure}[t]
\centering
\subfigure[$q=0$] {\includegraphics[trim={0 1.2cm 0 1cm},clip,width=0.19\textwidth]{sections/figs/gif/ridge_01.pdf}}
\subfigure[$q=0.5$]{\includegraphics[trim={0 1.2cm 0 1cm},clip,width=0.19\textwidth]{sections/figs/gif/ridge_09.pdf}}
\subfigure[$q=0.9$]{\includegraphics[trim={0 1.2cm 0 1cm},clip,width=0.2\textwidth]{sections/figs/gif/ridge_18.pdf}}
\subfigure[$q=1$]{\includegraphics[trim={0 1.2cm 0 1cm},clip,width=0.19\textwidth]{sections/figs/gif/ridge_20.pdf}}
\subfigure[$q=2$]{\includegraphics[trim={0 .1cm 0.25cm 0 },clip,width=0.19\textwidth]{sections/figs/gif/gaussian_2.pdf}}
\caption{Intermediate densities between $\mathcal{N}(-4, 3)$ and $\mathcal{N}(4,1)$ for various $q$-paths and 10 equally spaced $\beta$. The path approaches a mixture of Gaussians with weight $\beta$ at $q=0$. For the geometric mixture ($q=1$), intermediate $\pi_{\beta}$ stay within the exponential family since both $\pi_0$, $\pi_T$ are Gaussian.}
\label{fig:alpha_path22}
\label{fig:gaussian_path}
\vspace*{-.15cm}
\end{figure}%
\begin{figure}[t]
\centering
\includegraphics[trim={0 0 0 0 },clip,width=0.99\textwidth]{sections/figs/student_t.pdf}
\caption{Intermediate densities between Student-$t$ distributions, $t_{\nu = 1}(-4, 3)$ and $t_{\nu = 1}(4,1)$, for various $q$-paths and 10 equally spaced $\beta$.
Note that $\nu=1$ corresponds to $q=2$ (in 1-d), so that the $q=2$ path stays within the $q$-exponential family.\\[3ex]
We provide code to reproduce experiments at \url{https://github.com/vmasrani/q\_paths}.
}
\label{fig:student_path}
\vspace*{-.15cm}
\end{figure}
\subsection{Annealing between 1-d Student-$t$ Distributions}
Since the Student-$t$ family generalizes the Gaussian distribution to $q \neq 1$, we can run a similar experiment annealing between two Student-$t$ distributions. We set $q=2$, which corresponds to $\nu = 1$ with $\nu = (3-{q}) / ({q}-1)$, and use the same mean and variance as the Gaussian example in Fig. \ref{fig:alpha_path22}, with $\pi_0(z) = t_{\nu=1}( -4, 3)$ and $\pi_1(z) = t_{\nu=1}( 4, 1)$.
We visualize the results in Fig. \ref{fig:student_path}. For this special case of both endpoint distributions within a parametric family, we can ensure that the $q=2$ path stays within the $q$-exponential family of Student-$t$ distributions. We make a similar observation for the Gaussian case and $q=1$ in Fig. \ref{fig:gaussian_path}. Comparing the $q=0.5$ and $q=0.9$ Gaussian path with the $q=1.0$ and $q=1.5$ path, we observe that mixing behavior appears to depend on the relation between the $q$-path parameter and the order of the $q$-exponential family of the endpoints.
As $q \rightarrow \infty$, the power mean \eqref{eq:abstract_mean} approaches the $\min$ operation since $1-q \rightarrow -\infty$. In the Gaussian case, we see that, even at $q=2$, intermediate densities for all $\beta$ appear to concentrate in regions of low density under both $\pi_0$ and $\pi_T$. However, for the heavier-tailed Student-$t$ distributions, we must raise the $q$-path parameter significantly to observe similar behavior.
\subsection{Endpoints within a Parametric Family}
If the two endpoints $\pi_0, \tilde{\pi}_1$ are within a $q$-exponential family, we can show that each intermediate distribution along the $q$-path of the same order is also within this $q$-family. However, we cannot make such statements for general endpoint distributions or members of different $q$-exponential families.
\paragraph{Exponential Family Case}
We assume potentially vector-valued parameters $\theta = \{ \theta_i\}_{i=1}^N$ with multiple sufficient statistics $\phi(z) = \{ \phi_i(z) \}_{i=1}^N$, with $\theta \cdot \phi(z) = \sum_{i=1}^N \theta_i \phi_i(z)$.
For a common base measure $g(z)$, let $\pi_0(z) = g(z) \, \exp\{ \theta_0 \cdot \phi(z) \}$ and $\tilde{\pi}_1(z) = g(z) \, \exp \{ \theta_1 \cdot \phi(z) \}$. Taking the geometric mixture,
\begin{align}
\tilde{\pi}_\beta(z) &= \exp \big\{ (1-\beta) \, \log \pi_0(z) + \beta \, \log \tilde{\pi}_1(z) \big\} \\
&= \exp \big \{ \log g(z) + (1-\beta) \, \theta_0 \cdot \phi(z) + \beta \, \theta_1 \cdot \phi(z) \big \} \\
&= g(z) \exp \big \{ \big( (1-\beta) \, \theta_0 + \beta \, \theta_1 \big) \cdot \phi(z) \big \}
\end{align}
which, after normalization, will be a member of the exponential family with natural parameter $(1-\beta) \, \theta_0 + \beta \, \theta_1$.
\paragraph{$q$-Exponential Family Case} For a common base measure $g(z)$, let $\pi_0(z) = g(z) \, \exp_q \{ \theta_0 \cdot \phi(z) \}$ and $\tilde{\pi}_1(z) = g(z) \, \exp_q \{ \theta_1 \cdot \phi(z) \}$. The $q$-path intermediate density becomes
\begin{align}
\tilde{\pi}^{(q)}_\beta(z) &= \big[ (1-\beta) \, \pi_0(z)^{1-q} + \beta \, \tilde{\pi}_1(z)^{1-q} \big]^{\frac{1}{1-q}} \\
&= \big[ (1-\beta) \, g(z)^{1-q} \, \exp_q \{ \theta_0 \cdot \phi(z) \}^{1-q} + \beta \, g(z)^{1-q} \,\exp_q \{ \theta_1 \cdot \phi(z) \} ^{1-q} \big]^{\frac{1}{1-q}} \\
&= \bigg[ g(z)^{1-q} \bigg ( (1-\beta) \, \, \big[1 + (1-q)( \theta_0 \cdot \phi(z))\big]^{\frac{1}{1-q}(1-q)} + \beta \, \big[1 + (1-q)( \theta_1 \cdot \phi(z))\big]^{\frac{1}{1-q} (1-q)} \bigg) \bigg]^{\frac{1}{1-q}} \nonumber \\
&= g(z) \bigg[ 1 + (1-q) \bigg( \big((1-\beta) \, \theta_0 + \beta \, \theta_1 \big) \cdot \phi(z) \bigg) \bigg]^{\frac{1}{1-q}} \\
&= g(z) \exp_q \big\{ \big((1-\beta) \, \theta_0 + \beta \, \theta_1 \big) \cdot \phi(z) \big\}
\end{align}
which has the form of an unnormalized $q$-exponential family density with parameter $(1-\beta) \, \theta_0 + \beta \, \theta_1$.
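This closure property is easy to confirm numerically. A minimal sketch with a hypothetical one-dimensional sufficient statistic $\phi(z) = -z^2$ and unit base measure $g(z) = 1$:

```python
import numpy as np

def exp_q(u, q):
    return np.maximum(1 + (1 - q) * u, 0.0) ** (1.0 / (1 - q))

q, beta = 1.5, 0.3
z = np.linspace(-3, 3, 101)
phi = -z ** 2                    # hypothetical single sufficient statistic
theta_0, theta_1 = 0.5, 1.2
g = np.ones_like(z)              # common base measure

pi_0 = g * exp_q(theta_0 * phi, q)
pi_1 = g * exp_q(theta_1 * phi, q)

# q-path power mean of two members of the same q-family...
path = ((1 - beta) * pi_0 ** (1 - q) + beta * pi_1 ** (1 - q)) ** (1 / (1 - q))
# ...equals the member with the interpolated natural parameter
member = g * exp_q(((1 - beta) * theta_0 + beta * theta_1) * phi, q)
assert np.allclose(path, member)
```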
\section{Derivation of Path $q$-Family}\label{app:q_path}
With normalization, the $q$-exponential family is usually written as
\begin{align}
\pi_{t,q}(z) &= \pi_0(z) \, \exp_q \big\{ \eta \cdot \phi(z) - \psi_q(\eta) \big\} \label{eq:lnq_fam_form1}
\end{align}
which is suggestive of the convex conjugate for $\psi_q$ via
\begin{align}
\ln_q \frac{\pi_{t,q}(z) }{\pi_0(z)} = \eta \cdot \phi(z) - \psi_q(\eta)
\end{align}
We now derive the $q$-exponential family from the $\alpha$-mixture between $\pi_0$ and $\pi_1$, where we will obtain the sufficient statistic $\phi(z) = \ln_{\alpha} \frac{\pi_1(z)}{\pi_0(z)}$.
\begin{proof}
\begin{align}
\pi_{\alpha, \beta}(z) &\propto \bigg[(1 - \beta) \pi_0^{1 - \alpha} + \beta\pi_1^{1 - \alpha}\bigg]^\frac{1}{1 - \alpha}\\
&= \bigg[\pi_0^{1 - \alpha} + \beta\big(\pi_1^{1 - \alpha} - \pi_0^{1 - \alpha}\big)\bigg]^\frac{1}{1 - \alpha}\\
&= \pi_0 \left[1 + \beta\left(\left(\frac{\pi_1}{\pi_0}\right)^{1 - \alpha} - 1\right)\right]^\frac{1}{1 - \alpha}\\
&= \pi_0 \left[1 + (1-\alpha)\beta\ln_\alpha\left(\frac{\pi_1}{\pi_0}\right)\right]^\frac{1}{1 - \alpha}\\
&= \pi_0 \exp_\alpha\left[\beta\ln_\alpha\left(\frac{\pi_1}{\pi_0}\right)\right].
\end{align}
Defining $\phi(z) = \ln_{\alpha} \frac{\pi_1(z)}{\pi_0(z)}$ and normalizing, we arrive at
\begin{align}
\pi_{\alpha, \beta}(z) = \frac{\pi_0}{Z_\alpha(\beta)}\exp_\alpha\left[\beta \phi(z) \right] \quad
Z_\alpha(\beta) &:= \int dz \pi_0 \exp_\alpha\left[\beta \phi(z) \right].\label{eq:lnq_fam_form2}
\end{align}
To get \cref{eq:lnq_fam_form2} into the standard $q$-exponential family form, we define
\begin{align}
\psi_\alpha := \ln_\alpha^* \left(Z_\alpha(\beta)\right) \label{eq:psi_alpha}
\end{align}
and rewrite \cref{eq:lnq_fam_form2} using \cref{eq:q_exp_prod}
\begin{align}
\pi_{\alpha, \beta}(z) &= \frac{\pi_0}{Z_\alpha(\beta)}\exp_\alpha\left[\beta \phi(z) \right] \\
&= \frac{\pi_0}{\exp^*_\alpha\left(\psi_\alpha\right)}\exp_\alpha\left(\beta \phi(z) \right) & \text{Using \cref{eq:psi_alpha}}\\
&= \pi_0 \exp_\alpha\left(-\psi_\alpha\right) \exp_\alpha\left(\beta \phi(z)\right) & \text{Using $\exp^*_\alpha(u) = 1/\exp_\alpha(-u)$}\\
&= \pi_0 \exp_\alpha\left(-\psi_\alpha + \bigg(1 - (1 - \alpha) \psi_\alpha\bigg)\beta\phi(z)\right) & \text{Using \cref{eq:q_exp_prod}}\\
&= \pi_0 \exp_\alpha\left(-\psi_\alpha + \eta \phi(z)\right)
\end{align}
where we have defined
\begin{align}
\eta := \beta \cdot \big(1 - (1 - \alpha) \psi_\alpha\big).
\end{align}
\end{proof}
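The chain of equalities in the proof can be spot-checked numerically; the sketch below uses hypothetical unnormalized Gaussian endpoints and compares the power-mean form with $\pi_0 \exp_\alpha\left[\beta\ln_\alpha(\pi_1/\pi_0)\right]$:

```python
import numpy as np

def exp_q(u, q):
    return np.maximum(1 + (1 - q) * u, 0.0) ** (1.0 / (1 - q))

def ln_q(u, q):
    return (u ** (1 - q) - 1) / (1 - q)

alpha, beta = 1.5, 0.4
z = np.linspace(-4, 4, 101)
pi_0 = np.exp(-0.5 * z ** 2)                    # unnormalized endpoint densities
pi_1 = np.exp(-0.5 * (z - 1) ** 2 / 0.5)

mix = ((1 - beta) * pi_0 ** (1 - alpha) + beta * pi_1 ** (1 - alpha)) ** (1 / (1 - alpha))
fam = pi_0 * exp_q(beta * ln_q(pi_1 / pi_0, alpha), alpha)
assert np.allclose(mix, fam)
```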
\section{Alternative Derivation}
Note that $\exp_q \{ x + y \} = \exp_q \{ y \} \cdot \exp_q \big\{ \frac{x}{1 + (1-q) y} \big\}$ by \cref{lemma:q_exp_sum}, instead of $\exp \{ x + y \} = \exp \{ x \} \cdot \exp \{ y \} $ for the standard exponential. This implies that
\begin{align}
\pi_{q,\beta} = \pi_0 \, \exp_q \{ \beta \cdot \phi(z) - \psi_q \} &= \pi_0 \, \exp_q\{ -\psi_q \} \exp_q \{ \frac{\beta}{1+(1-q)(-\psi_q)} \cdot \phi(z) \}
\end{align}
Our goal is to express $\pi_{q,\beta}(z)$ using a normalization constant $Z$ instead of the $q$-free energy $\psi_q(\beta)$. While the exponential family allows us to freely move between $\log Z_{\beta}$
and $\psi(\beta)$, in the $q$-exponential case, we must adjust the parameters. Defining
\begin{align}
\eta &= \frac{\beta}{1+(1-q)(-\psi_q)} \\
Z_q(\eta) &= \frac{1}{\exp_q \{-\psi_q \}} := \exp_{q}^{*}\{ -\psi_q\}
\end{align}
we obtain a new parameterization of the $q$-exponential family, using parameters $\eta$ and multiplicative normalization constant $Z_q(\eta)$, as
\begin{align}
\pi_{q,\beta} = \pi_{q,\eta} &= \frac{1}{Z_q(\eta)} \exp_{q} \{ \eta \cdot \phi(z) \}
\end{align}
See \citet{matsuzoe2019normalization} for more detailed discussion of normalization in deformed exponential families.
\section{q-Paths from Power Means} \label{sec:q_paths}
$q$-paths are derived using a generalized notion of the mean due to \citet{kolmogorov1930}. For any monotonic function $h(u)$, we define the \textit{generalized mean}
\begin{align}
\mu_{h}({\bf{u, w}} ) = h^{-1} \left(\sum_{i=1}^N w_i \cdot h(u_i)\right), \label{eq:abstract_mean}
\end{align}
where $\mu_{h}$ outputs a scalar given a normalized measure ${{\bf{w}} = (w_1, ..., w_N)}$ (with $\sum_{i=1}^N w_i = 1$) over a set of input elements ${\bf{u}} = (u_1, ..., u_N)$ \citep{de2016mean}.\footnote{The generalized mean is also referred to as the \textit{abstract}, \textit{quasi-arithmetic}, or \textit{Kolmogorov-Nagumo} mean in the literature.}
The generalized mean can be thought of as first applying a nonlinear transformation function to each input, applying the desired weights in the transformed space, and finally mapping back to the distribution space.
The geometric and arithmetic means are \textit{homogeneous}, that is, they have the linear scale-free property ${\mu_{h}(c \cdot {\bf{u}}, {\bf{w}}) = c \cdot \mu_{h}({\bf{u, w}})}$. \citet{hardy1953} shows the unique class of functions $h(u)$ that yield means with the homogeneity property are of the form
\begin{align}
h_{q}(u) =
\begin{cases}
a \cdot u^{1-q} + b & q \neq 1 \\
\log u \hfill & q = 1
\end{cases} \label{eq:alpha_abstract}
\end{align}
for any $a$ and $b$. Setting ${a = b = 1 / (1-q)}$, we can recognize $h_q(u)$ as the deformed logarithm $\ln_q(u)$ from \cref{eq:lnq}.
We refer to generalized means using the class of functions $h_q(u)$ as \textit{power means}, and show in App. \ref{app:any_h} that for any choice of $a$ and $b$,
\begin{align}
\mu_{h_q}({\bf{u, w}}) = \left[\sum_{i=1}^N w_i \cdot u_i^{1-q} \right]^{\frac{1}{1-q}}. \label{eq:powermean}
\end{align}
Notable examples include the arithmetic mean at $q = 0$, geometric mean as $q \rightarrow 1$, and the $\min$ or $\max$ operation as $q \rightarrow \pm \infty$. For $q = \frac{1+\alpha}{2}$, $a = \frac{1}{1-q}$, and $b =0$, the function $h_{q}(u)$ matches the $\alpha$-representation in information geometry \citep{amari2016information}, and the resulting power mean over normalized probability distributions as input $\textbf{u}$ is known as the $\alpha$-integration \citep{amari2007integration}.
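These special cases can be illustrated with a direct implementation of the power mean (a sketch, not the released experiment code; the $q \to 1$ geometric-mean limit is handled as a separate branch):

```python
import numpy as np

def power_mean(u, w, q):
    """Power mean mu_{h_q}(u, w) = [sum_i w_i u_i^{1-q}]^{1/(1-q)},
    with the geometric mean as the q -> 1 limit."""
    u, w = np.asarray(u, dtype=float), np.asarray(w, dtype=float)
    if q == 1:
        return float(np.exp(np.sum(w * np.log(u))))
    return float(np.sum(w * u ** (1 - q)) ** (1 / (1 - q)))

u, w = [2.0, 8.0], [0.5, 0.5]
arith = power_mean(u, w, q=0)            # arithmetic mean: 5.0
geom  = power_mean(u, w, q=1)            # geometric mean: sqrt(2 * 8) = 4.0
near1 = power_mean(u, w, q=1 - 1e-6)     # q -> 1 agrees with geometric mean
large = power_mean(u, w, q=50)           # large q approaches min(u) = 2.0
assert np.isclose(arith, 5.0) and np.isclose(geom, 4.0)
assert np.isclose(near1, geom, rtol=1e-4) and np.isclose(large, 2.0, atol=0.1)
```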
For annealing between unnormalized density functions, we propose the $q$-path of intermediate $\tilde{\pi}_{\beta,q}(z)$ based on the power mean. Observing that the geometric mixture path in \cref{eq:geo_path} takes the form of a generalized mean for $h(u) = \ln(u)$, we choose the deformed logarithm
\begin{align}
h_q(u) := \ln_q(u) \quad \quad h_q^{-1}(u) = \exp_q(u),
\end{align}
as the transformation function for the power mean.
This choice will facilitate our parallel discussion of geometric and $q$-paths in terms of generalized logarithms and exponentials in \cref{sec:path_exp_fam}.
Using ${{\bf{u}} = (\pi_0, \tilde{\pi}_1)}$ as the input elements and ${{\bf{w}} = (1-\beta, \beta)}$ as the mixing weights in \cref{eq:powermean}, we obtain a simple, closed form expression for the $q$-path intermediate densities
\begin{align}
\tilde{\pi}_{\beta,q} (z)
&= \bigg[(1-\beta) \, \pi_0 (z)^{1-q} + \beta \, \tilde{\pi}_1 (z)^{1-q} \bigg]^{\frac{1}{1-q}} \label{eq:qpath_mix_form22}
\end{align}
Crucially, \cref{eq:qpath_mix_form22} can be directly used as an energy function in \gls{MCMC} sampling methods such as \gls{HMC} \citep{neal2011mcmc}, and our $q$-paths do not require additional assumptions on the endpoint distributions.
Finally, to compare against the geometric path, we write the $q$-path in terms of the generalized mean in \cref{eq:abstract_mean}
\begin{align}
\tilde{\pi}_{\beta,q} &= \exp_q\bigg\{(1-\beta) \, \ln_q \pi_0 (z) + \beta \, \ln_q \tilde{\pi}_1 (z)\bigg\} \, , \label{eq:lnq_mixture}
\end{align}
from which we can see that $\tilde{\pi}_{\beta,q}$ recovers the geometric path in \cref{eq:geo_path} as $q \to 1$, $\ln_q(u) \rightarrow \log(u)$, and $\exp_q(u) \rightarrow \exp(u)$. Taking the deformed logarithm of both sides also yields an interpretation of the geometric or $q$-paths as $\ln$ or $\ln_q$-mixtures of density functions, respectively.
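Both limiting cases, the geometric path as $q \to 1$ and the arithmetic mixture at $q = 0$, can be confirmed numerically; a minimal sketch with hypothetical Gaussian endpoint densities evaluated on a grid:

```python
import numpy as np

def q_path(pi_0, pi_1, beta, q):
    if q == 1:
        return pi_0 ** (1 - beta) * pi_1 ** beta        # geometric path
    return ((1 - beta) * pi_0 ** (1 - q) + beta * pi_1 ** (1 - q)) ** (1 / (1 - q))

z = np.linspace(-5, 5, 201)
pi_0 = np.exp(-0.5 * (z + 1) ** 2)
pi_1 = np.exp(-0.5 * (z - 1) ** 2)
beta = 0.3

# q -> 1 recovers the geometric path
geo_limit = q_path(pi_0, pi_1, beta, q=1 - 1e-7)
assert np.allclose(geo_limit, q_path(pi_0, pi_1, beta, q=1), rtol=1e-3)

# q = 0 is the arithmetic mixture
assert np.allclose(q_path(pi_0, pi_1, beta, q=0), (1 - beta) * pi_0 + beta * pi_1)
```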
\section{Related Work}\label{sec:related}
In \cref{sec:path_exp_fam} and \cref{app:parametric}, we discuss connections between $q$-paths and the $q$-exponential family. Examples of parametric $q$-exponential families include the Student-$t$ distribution, which has the same first- and second-moment sufficient statistics as the Gaussian and a degrees of freedom parameter $\nu$ that specifies a value of $q > 1$. This induces heavier tails than the standard Gaussian and leads to conjugate Bayesian interpretations in hypothesis testing with finite samples \citep{murphy2007conjugate, gelman2013bayesian}.
The generalized Pareto distribution is another member of the $q$-exponential family, and has been used for modeling heavy-tail behavior \citep{pickands1975statistical, bercher2008new, tsallis2009introduction}, smoothing outliers for importance sampling estimators \citep{vehtari2015pareto}, or evaluating variational inference \citep{yao2018yes}.
$q$-logarithms and exponentials have also appeared in methods for classification \citep{ding2011t, amid2019two}, robust hypothesis testing \citep{qin2017robust}, mixture modeling \citep{qin2013maximum}, variational inference \citep{ding2011t, kobayashi2020q}, and expectation propagation \citep{futami2017expectation, minka2004power}.
In \cref{sec:vrep_breg}, we showed that each $q$-path density $\tilde{\pi}_{\beta,q}(z)$ specifies the minimizing argument for a variational objective in \cref{eq:vrep_exp} or \cref{eq:vrep_alpha}. The value of the objective in \cref{eq:vrep_exp} is a mixture of \textsc{kl} divergences, and can be interpreted as a generalized Jensen-Shannon divergence \citep{Nielsen_2019, nielsen2021variational} or Bregman information \citep{Banerjee2005}. \citet{deasy2021constraining} explore this mixture of divergences as a regularizer in variational inference, while \citet{brekelmans2020lref} provide additional analysis for the case of $q=1$.
\section{Experiments}\label{sec:experiments}
Code for all experiments is available at \url{https://github.com/vmasrani/qpaths_uai_2021}.
\subsection{Sequential Monte Carlo in Bayesian Inference}\label{sec:experiments_smc}
\begin{figure}[t]
\begin{minipage}{.95\columnwidth}
\vspace*{-.1cm}
\centering
\includegraphics[width=0.9\textwidth]{sections/figs/smc/qbest_qmin.pdf}
\vspace*{-0.1cm}
\captionof{figure}{Evaluating the choice of $q$ for \gls{SMC}. Since the scale of the likelihood $\tilde{\pi}_1$ depends on the number of data examples, we expect the numerical stability of $q$-paths to vary by $N$. While the minimum $q$ yielding a stable estimator (orange) increases with $N$,
the best performing $q$-path (blue) is still $q=1-\delta$ for small $\delta >0$.
}\label{fig:varying_n}
\end{minipage}
\end{figure}
In this section, we use \gls{SMC} to sample posterior parameters $\pi_1(\theta) = p(\theta|\mathcal{D}) \propto p(\theta) \prod_{n=1}^N p(x_n | \theta)$ and estimate the log marginal likelihood $\log p(\mathcal{D})= \log \int p(\theta)p(\mathcal{D}|\theta) d\theta$ in Bayesian logistic regression models on the ``tall'' Pima Indians diabetes dataset ($N=768, D=8$) and ``wide'' Sonar dataset ($N=208, D=61$) (see \cref{sec:experiment_details}). Ground truth $\log p(\mathcal{D})$ is computed using 50k samples and 20 move steps, and for all runs we use 10k samples and plot median error across ten seeds. Grid search shows best of 20 runs, where we sweep over 20 log-spaced $\delta \in [10^{-5}, 10^{-1}]$.
We explore the use of $q$-paths in both the non-adaptive case, with a fixed linear $\beta$ schedule with $K=10$ intermediate distributions, and the adaptive case, where the next value of $\beta_{t+1}$ is chosen to yield an \gls{ESS} of $N/2$ \citep{chopin2020introduction}.
For the non-adaptive case, we find in \cref{fig:smc_sampling} that $q \in [0.9954, 0.9983]$ can achieve more accurate marginal likelihood estimates than the geometric path with fewer movement steps and drastically reduced variance. In \cref{tab:pima_table} we see that $q$-paths achieve gains over the geometric path in both the linear and adaptive setting across both datasets.
\begin{figure*}[!t]
\begin{minipage}{.49\textwidth}
\vspace*{-.15cm}
\centering
\includegraphics[width=0.99\columnwidth]{sections/figs/bdmc/bdmc_omniglot_bounds_real.pdf}
\vspace*{-.05cm}
\captionof{subfigure}{Estimating $\log p(x)$ on real data using \gls{AIS}.}\label{fig:bdmc_omniglot_real}
\end{minipage}
\begin{minipage}{.49\textwidth}
\vspace*{-.1cm}
\subfigure{\includegraphics[width=0.99\columnwidth]{sections/figs/bdmc/bdmc_omniglot_gap.pdf}}
\vspace*{-.25cm}
\captionof{subfigure}{\acs{BDMC} Gap on simulated data.}\label{fig:bdmc_omniglot_gap}
\end{minipage}
\vspace*{-.15cm}
\caption{Evaluating Generative Models using \gls{AIS} with $q$-paths on Omniglot dataset. Best viewed in color.}
\label{fig:bdmc_omniglot_result}
\end{figure*}
\paragraph{Numerical Stability and Implementation}
To implement $q$-paths in practice, we begin by taking the log of the expression in \cref{eq:qexp_form}, which is well-defined because the unnormalized density $\tilde{\pi}_{\beta, q}(z)$ is non-negative.
\begin{align}
&\log \tilde{\pi}_{\beta, q}(z) = \nonumber \\
&\log \pi_0(z) + \frac{1}{1 - q} \log \left[1 + (1-q) \cdot \beta \cdot \ln_q\left( \frac{\tilde{\pi}_1(z)}{\pi_0(z)}\right)\right] . \label{eq:q_energy}
\end{align}
We focus attention on the $\ln_q \tilde{\pi}_1(z)/\pi_0(z)$ term, which is potentially unstable for $q\neq 1$ since it takes importance weights $w =\tilde{\pi}_1(z)/\pi_0(z)$ as input.
Since we are usually given log weights in practice, we consider the identity mapping $w = \exp (\log w)$ and reparameterize ${q = 1 - \frac{1}{\rho}}$ to obtain
\begin{align}
\ln_q\left(\exp \log w\right) &= \frac{1}{1-q}\left[ \left(\exp \log w \right)^{1-q} - 1\right]\\
&= \rho\left[ \left(\exp \log w \right)^{\frac{1}{\rho}} - 1\right]\\
&= \rho\left[\exp \{ \frac{1}{\rho} \log w \} - 1\right] \, .
\end{align}
This suggests $q$ should be chosen such that the exponential does not overflow or underflow, which can be accomplished by setting $\rho$ on the order of
\begin{align}
\rho = \max_i |\log w_i| \, , \label{eq:rho_choice}
\end{align}
where $i$ indexes a set of particles $\{z_i\}$.
This choice is reminiscent of the log-sum-exp trick and ensures $|\frac{1}{\rho} \log w| \le 1$.
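A sketch of this reparameterization (an illustrative implementation, not the released code; `np.expm1` evaluates $e^x - 1$ accurately for small arguments):

```python
import numpy as np

def lnq_from_logw(log_w, rho):
    # ln_q(w) with q = 1 - 1/rho, computed from log-weights:
    # ln_q(w) = rho * (w^{1/rho} - 1) = rho * expm1(log_w / rho)
    return rho * np.expm1(log_w / rho)

log_w = np.array([-800.0, -50.0, 0.0, 50.0, 800.0])
rho = np.max(np.abs(log_w))          # rho choice above: |log_w / rho| <= 1
stable = lnq_from_logw(log_w, rho)
assert np.all(np.isfinite(stable))

# The naive route w = exp(log_w) overflows before ln_q is even applied:
with np.errstate(over="ignore"):
    naive_w = np.exp(log_w)
assert np.isinf(naive_w[-1])
```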
In \cref{fig:varying_n}, we explore the impact of changing the scale of $\log w$ on the numerical stability of $q$-paths. For the case of inferring global model parameters over $N$ i.i.d. data points $p(\mathcal{D}) = \prod_{n=1}^N p(x_n)$, we can see that
the scale of the unnormalized densities $\tilde{\pi}_1(\theta, \mathcal{D}) = p(\theta) \prod_{n=1}^N p(x_n | \theta)$ differs based on the number of datapoints, where increasing $N$ decreases the magnitude of $\log w = \log \tilde{\pi}_1(\theta, \mathcal{D})$ with $\tilde{\pi}_0(\theta) = p(\theta)$.
We randomly subsample $N$ data points for conditioning our model, and observe the effect on both the best-performing $q$ and the numerical stability of \gls{SMC} with $q$-paths. The minimum value of $q$ for which we can obtain stable estimators rises as the number of datapoints $N$ increases and the scale of $\tilde{\pi}_1(\theta, \mathcal{D})$ becomes smaller.
\paragraph{Sensitivity to $q$}
While setting $\rho$ on the order of $\max_i |\log w_i|$ ensures numeric stability, \cref{fig:varying_n} indicates that numerical stability may not be sufficient for achieving strong performance in \gls{SMC}. In fact, $q$-paths with values just less than $1$ consistently perform best across all values of $N$.
To understand this observation, recall the example in \cref{fig:q_path_separated} where the initial and target distributions are well-separated and even the $q=0.98$ path begins to resemble a mixture distribution. This is clearly undesirable for path sampling techniques, where the goal is to bridge between base and target densities with distributions that are easier to sample.
\paragraph{Heuristic for Choosing $q$}
Motivated by the observations above and the desire to avoid grid search, we provide a rough heuristic to find a $q$ which is well-suited to a given estimation problem.
Taking inspiration from the \gls{ESS} criterion used to select $\beta_{t+1}$ in our \gls{SMC} experiments above \citep{chopin2020introduction}, we select $q$ to obtain a target value of \gls{ESS} for the first intermediate $\beta_1$
\begin{align}
\mathcal{L}(\beta_1, q) &= ||\text{ESS}(\beta_1, q) - \text{ESS}_\text{target} ||_2^2 \label{eq:ess_loss} \\
\text{ESS}(\beta, q) &= \frac{\big(\sum_{i} w_i(\beta, q)\big)^2 }{\sum_i w_i\big(\beta, q\big)^2} \, \, \text{with} \, \, w_i(\beta, q) = \frac{\tilde{\pi}_{\beta, q}(z_i)}{\pi_0(z_i)}. \nonumber
\end{align}
As in the case of the adaptive $\beta$ scheduling heuristic for \gls{SMC}, we set the target $\text{ESS}_\text{target} = N/2$ to ensure adequate sampling diversity \citep{jasraInferenceLevyDrivenStochastic2011, schaferSequentialMonteCarlo2013,buchholzAdaptiveTuningHamiltonian2021, chopin2020introduction}.
For fixed scheduling, the value of $\beta_1$ may be known and thus we can easily select $q$ to obtain the target value $\text{ESS}(\beta_1, q) \approx \text{ESS}_\text{target}$. However, in adaptive scheduling, $\beta_1$ is not known and the objective $\mathcal{L}(\beta_1, q)$ is non-convex in $\beta_1, q$. In \cref{app:ais_exp_details}, we provide a coordinate descent algorithm to find local optima using random initializations around an initial $q = 1- \frac{1}{\rho}$ for $\rho$ as in \cref{eq:rho_choice}, with results in \cref{tab:pima_table}.
Note that this heuristic sets $q$ based on a set of initial $z_i \sim \pi_0(z)$, and thus does not consider information about the \gls{MCMC} sampling used to transform and improve samples.
Nevertheless, in \cref{tab:pima_table} we observe that $q$-paths initialized by this heuristic can outperform the geometric path on benchmark \gls{SMC} binary regression tasks.
Comparison with grid search results indicate that further performance gains might be achieved with an improved heuristic.
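The criterion in \cref{eq:ess_loss} only requires evaluating $q$-path importance weights at samples from $\pi_0$; a rough numerical sketch (the Gaussian endpoints and helper names are illustrative assumptions of the sketch, not the authors' implementation):

```python
import numpy as np

def ess(log_pi0_z, log_pi1_z, beta, q):
    # w_i = pi_{beta,q}(z_i) / pi_0(z_i), computed in log space for q < 1.
    c = 1.0 - q
    a = np.log1p(-beta) + c * log_pi0_z
    b = np.log(beta) + c * log_pi1_z
    m = np.maximum(a, b)
    log_path = (m + np.log(np.exp(a - m) + np.exp(b - m))) / c
    log_w = log_path - log_pi0_z
    w = np.exp(log_w - log_w.max())
    return w.sum() ** 2 / (w ** 2).sum()

z = np.random.default_rng(0).normal(-4.0, 1.0, size=1000)   # z_i ~ pi_0
lp0 = -0.5 * (z + 4.0) ** 2
lp1 = -0.5 * (z - 4.0) ** 2                                  # unnormalized target
# Lowering q raises the ESS at a fixed beta, so q can be tuned to hit N/2.
assert ess(lp0, lp1, beta=0.5, q=0.5) > ess(lp0, lp1, beta=0.5, q=0.999)
```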
\subsection{Evaluating generative models using AIS} \label{sec:experiments_ais}
\gls{AIS} with geometric paths is often considered the gold-standard for evaluating decoder-based generative models \citep{wuetal17}. In this section, we evaluate whether $q$-paths can improve marginal likelihood estimation for a \gls{VAE} trained using the \gls{TVO} \citep{masrani2019thermodynamic} on the Omniglot dataset.
First, we use \gls{AIS} to evaluate the trained generative model on the true test set, with a Gaussian prior $\pi_0(z) = p(z)$ as the base distribution and true posterior $\pi_1(z) = p(z|x) \propto p(x,z)$ as the target. Intermediate distributions then become $\tilde{\pi}_{\beta}(z) = p(z)p(x|z)^{\beta}$. We report stochastic lower bound estimates \citep{grosse2015sandwiching} of $\mathbb{E}_{p_{\text{data}}(x)} \log p(x)$ in \cref{fig:bdmc_omniglot_real}, where we have plotted the negative likelihood bound so that lower is better. Even for a large number of intermediate distributions, we find that $q \in [0.992, 0.998]$ can outperform the geometric path.
When exact posterior samples are available, we can use a reverse \gls{AIS} chain from the target density to the base to obtain a stochastic {\it upper bound} on the $\log$ marginal likelihood \citep{grosse2015sandwiching}. While such samples are not available on the real data, we can use simulated data drawn from the model using ancestral sampling $x, z \sim p(z)p(x|z)$ as the dataset, and interpret $z$ as a posterior sample. We use the \gls{BDMC} gap, or difference between the stochastic lower and upper bounds obtained from forward and reverse chains on simulated data, to evaluate the quality of the \gls{AIS} procedure.
In \cref{fig:bdmc_omniglot_result}, we report the average \gls{BDMC} gap on $2500$ simulated data examples, and observe that $q$-paths with $q=0.994$ or $q=0.996$ consistently outperform the geometric path as we vary the number of intermediate distributions $K$.
\section{Background}
\subsection{Geometric annealing Path}
\label{sec:geo_path}
The geometric mixture path is the most ubiquitous method for specifying a set of intermediate distributions between a tractable base distribution $\pi_0$ and unnormalized target $\tilde{\pi}_1$,
\begin{align}
\pi_{\beta} (z) &= \frac{\pi_0(z)^{1-\beta} \, \tilde{\pi}_1(z)^{\beta}}{Z(\beta)}, \quad \text{where} \label{eq:geopath_} \\
Z(\beta) &= \int \pi_0(z)^{1-\beta} \, \tilde{\pi}_1(z)^{\beta} dz.\label{eq:geopath0}
\end{align}
The geometric path may also be written as an exponential family of distributions, with natural parameter $\beta$ and sufficient statistic $T(z) = \log \tilde{\pi}_1(z) / \pi_0(z)$ corresponding to the log importance ratio.
We follow \citet{grunwald2007minimum, brekelmans2020tvo, brekelmans2020lref} in referring to this as a \textit{likelihood ratio exponential family}, with
\begin{align}
\pi_{\beta}(z)&= \pi_0(z) \exp \bigg\{ \, \beta \cdot \log \frac{\tilde{\pi}_1(z)}{\pi_0(z)} \, - \psi( \beta) \bigg\}\label{eq:lkd_ratio_fam}\\
\psi(\beta) &:= \log Z(\beta) = \log \int \pi_0(z)^{1-\beta} \tilde{\pi}_1(z)^{\beta} dz. \label{eq:lkd_ratio_partition}
\end{align}
It is often more convenient to work with \cref{eq:lkd_ratio_fam}, because one gains access to known exponential family properties that are not apparent from \cref{eq:geopath_} \citep{grosse2013annealing, brekelmans2020tvo,brekelmans2020lref}.
In \cref{sec:path_exp_fam} we provide an analogous interpretation for $q$-paths in terms of $q$-exponential families.
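Since $Z(\beta) = \mathbb{E}_{\pi_0}[(\tilde{\pi}_1/\pi_0)^{\beta}]$, the log partition function $\psi(\beta)$ in \cref{eq:lkd_ratio_partition} can be estimated by simple importance sampling from $\pi_0$. A quick sanity check against a closed-form Gaussian case (the particular endpoints are illustrative assumptions):

```python
import numpy as np

def psi_mc(beta, mu, n=100_000, seed=0):
    # psi(beta) = log E_{pi_0}[exp(beta * T(z))] with pi_0 = N(0, 1) and
    # unnormalized target pi1_tilde(z) = exp(-(z - mu)^2 / 2).
    z = np.random.default_rng(seed).normal(size=n)
    T = -0.5 * (z - mu) ** 2 + 0.5 * z ** 2 + 0.5 * np.log(2 * np.pi)
    a = beta * T
    m = a.max()
    return m + np.log(np.mean(np.exp(a - m)))   # log-mean-exp

beta, mu = 0.5, 1.0
# Closed form for this pair: psi(beta) = (beta/2) log(2 pi) - beta(1-beta) mu^2 / 2.
psi_exact = 0.5 * beta * np.log(2 * np.pi) - 0.5 * beta * (1 - beta) * mu ** 2
assert abs(psi_mc(beta, mu) - psi_exact) < 0.05
```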
\subsection{Moment Averaging Path}
Previous work \citep{grosse2013annealing} considers alternative annealing paths in the restricted setting where $\pi_0(z)$ and $\pi_1(z)$ are members of the same exponential family, with parameters $\theta_0$ and $\theta_1$ respectively. Writing the base measure as $g(z)$ and sufficient statistics as $\phi(z)$,
\begin{align}
\pi_{\theta}(z) = g(z) \exp \{ \theta \cdot \phi(z) - \psi(\theta) \} \, . \label{eq:std_exp_fam}
\end{align}
\citet{grosse2013annealing} propose the \textit{moment-averaged} path based on the dual or `moment' parameters of the exponential family, which correspond to the expected sufficient statistics
\begin{align}
\eta(\theta) = \frac{d \psi(\theta)}{d \theta} = \langle \mathbb{E}_{\pi_{\theta}}\left[ \phi_j(z) \right] \rangle_{j=1}^N \, , \label{eq:dpsi_dtheta}
\end{align}
with $\langle \cdot \rangle$ indicating vector notation and $\psi(\theta)$ denoting the log partition function of \cref{eq:std_exp_fam}.
In minimal exponential families, the sufficient statistic function $\eta(\theta)$ is a bijective mapping
between a natural parameter vector and dual parameter vector \citep{wainwrightjordan}.
The moment-averaged path is defined using a convex combination of the dual parameter vectors \citep{grosse2013annealing}
\begin{align}
\eta(\theta_{\beta}) = (1-\beta) \, \eta(\theta_0) + \beta \, \eta(\theta_1) \, . \label{eq:moments_path}
\end{align}
To solve for the corresponding natural parameters, we calculate the Legendre transform, or function inversion $\eta^{-1}$,
\begin{align}
\theta_{\beta} = \eta^{-1}\big((1-\beta) \, \eta(\theta_0) + \beta \, \eta(\theta_1)\big) \label{eq:moments_path_theta} \,.
\end{align}
This inverse mapping is often not available in closed form and can itself be a difficult estimation problem \citep{wainwrightjordan, grosse2013annealing}, which limits the applicability of the moment-averaged path in practice.
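For univariate Gaussians the inverse mapping $\eta^{-1}$ \emph{is} available in closed form, which makes a small worked example possible (an illustrative sketch, not the authors' code):

```python
import numpy as np

def moment_avg_gaussian(beta, mu0, var0, mu1, var1):
    # Dual parameters are the expected sufficient statistics (E[z], E[z^2]);
    # averaging them and inverting eta gives the moment-averaged Gaussian.
    m1 = (1 - beta) * mu0 + beta * mu1
    m2 = (1 - beta) * (var0 + mu0 ** 2) + beta * (var1 + mu1 ** 2)
    return m1, m2 - m1 ** 2          # (mean, variance)

# Endpoints N(-4, 3) and N(4, 1), as in the moments-vs-mixture figure:
mu, var = moment_avg_gaussian(0.5, -4.0, 3.0, 4.0, 1.0)
assert mu == 0.0
# The path inflates the variance well beyond either endpoint:
assert np.isclose(var, 18.0)
```

The large midpoint variance illustrates how moment averaging spreads mass broadly between well-separated endpoints.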
\subsection{q-Deformed Logarithm / Exponential}\label{sec:q_definition}
\label{sec:qdeformed}
While the standard exponential arises in statistical mechanics via the Boltzmann-Gibbs distribution, \citet{tsallis1988possible} proposed a generalized exponential which has formed the basis of nonextensive thermodynamics and found wide application in the study of complex systems \citep{gell2004nonextensive, tsallis2009introduction}.
Consider modifying the integral representation of the natural logarithm $\ln u := \int_1^u \frac{1}{x}dx$ using an arbitrary power function
\begin{align}
\ln_q u = \int_1^u \frac{1}{x^q}dx.\label{eq:lnq_int}
\end{align}
Solving \cref{eq:lnq_int} yields the definition of the $q$-logarithm
\begin{align}
\ln_q(u) := \frac{1}{1-q} \left( u^{1-q} - 1 \right) \label{eq:lnq} \, .
\end{align}
We define the $q$-exponential as the inverse of $q$-logarithm $\exp_q(u) := \ln_q^{-1}(u) $
\begin{align}
\exp_q(u) = \big[ 1 + (1-q) \, u \big]_{+}^{\frac{1}{1-q}} \, , \label{eq:expq}
\end{align}
where $[x]_{+}= \max\{0, x\}= \textsc{relu}(x)$ ensures that $\exp_q(u)$ is non-negative and fractional powers can be taken for $q<1$, and thus restricts the domain where $\exp_q(u)$ takes nonzero values to $u > -1/(1-q)$. We omit this notation in subsequent derivations because our $q$-paths in \cref{eq:qpath_mix_form} take non-negative densities as arguments for the $1/(1-q)$ power.
Note also that both the $q$-log and $q$-exponential recover the standard logarithm and exponential function in the limit,
\begin{align*}
&\lim_{q \rightarrow 1} \ln_q(u) & &&&& & \lim_{q \rightarrow 1} \exp_q(u)\\
=&\lim_{q \rightarrow 1} \frac{\frac{d}{dq} (u^{1-q} -1)}{\frac{d}{dq} (1-q)} & &&&& =&\lim_{q \rightarrow 1}\left[1 + (1 - q) \cdot u \right]^{\frac{1}{1-q}}\\
=& \frac{ - \log u \cdot u^{1-q}}{-1} \bigg|_{q=1} & &&&& =&\lim_{n \rightarrow \infty}\left[1 + \frac{u}{n}\right]^{n}\\
=&\log (u) & &&&& :=&\exp(u).
\end{align*}
In \cref{sec:path_exp_fam} we use this property to show $q$-paths recover the geometric path as $q \to 1$.
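These definitions are straightforward to implement; a minimal sketch verifying the inverse pair and the $q \to 1$ limit numerically:

```python
import numpy as np

def ln_q(u, q):
    return np.log(u) if q == 1.0 else (u ** (1 - q) - 1) / (1 - q)

def exp_q(u, q):
    if q == 1.0:
        return np.exp(u)
    return np.maximum(0.0, 1 + (1 - q) * u) ** (1 / (1 - q))

u = np.array([0.1, 1.0, 5.0])
for q in (0.5, 0.9, 2.0):
    assert np.allclose(exp_q(ln_q(u, q), q), u)          # inverse pair
# q -> 1 recovers the standard logarithm and exponential:
assert np.allclose(ln_q(u, 1 - 1e-9), np.log(u), atol=1e-5)
assert np.allclose(exp_q(u, 1 - 1e-9), np.exp(u), rtol=1e-5)
```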
\section{Conclusion}
In this work, we proposed $q$-paths as a generalization of the geometric mixture path which can be constructed between arbitrary endpoint distributions and admits a closed form energy function. We provided a $q$-likelihood ratio exponential family interpretation of our paths, and derived a variational representation of $q$-path intermediate densities as minimizing the expected $\alpha$-divergence to the endpoints.
Finally, we observed empirical gains in \gls{SMC} and \gls{AIS} sampling using $q$-paths with $q = 1-\delta$ for small $\delta$.
Future work might consider more involved heuristics for choosing $q$, such as running truncated, parallel sampling chains, to capture the interplay between choices of $\beta, q,$ and sampling method.
Applying $q$-paths in settings such as sampling with \gls{PT} or variational inference using the \gls{TVO}, remain interesting questions for future work.
\section{Variational Representations} \label{sec:vrep_breg}
\citet{grosse2013annealing} observe that intermediate distributions along the geometric path can be viewed as the solution to a weighted \textsc{kl} divergence minimization
\begin{align}
&\pi_{\beta} = \argmin\limits_{r}(1-\beta)D_{\mathrm{KL}}[r\|\pi_{0}] +\beta D_{\mathrm{KL}}[r\|\pi_{1}] \label{eq:vrep_exp}
\end{align}
where the optimization is over arbitrary distributions $r(z)$.
When the endpoints come from an exponential family of distributions and the optimization is limited to only this parametric family $\mathcal{P}_{e}$, \citet{grosse2013annealing} find that the moment-averaged path is the solution to a \textsc{kl} divergence minimization with the order of the arguments reversed
\begin{align}
&\pi_{\eta}=\argmin\limits_{r \in \mathcal{P}_{e}} (1-\beta)D_{\mathrm{KL}}[\pi_{0}\|r]+\beta D_{\mathrm{KL}}[\pi_{1}\|r]\label{eq:vrep_moments}.
\end{align}
In App. \ref{app:alpha_integration}, we follow similar derivations as \citet{amari2007integration} to show that the $q$-path density $\tilde{\pi}_{\beta,q}$ minimizes the $\alpha$-divergence to the endpoints
\begin{align}
\tilde{\pi}_{\beta, q}=\argmin\limits_{\tilde{r}}(1- \beta)& D_{\alpha}[\tilde{\pi}_{0}||\tilde{r}] +\beta D_{\alpha}[\tilde{\pi}_{1}||\tilde{r}]\label{eq:vrep_alpha}
\end{align}
where the optimization is over arbitrary measures $\tilde{r}(z)$. Amari's $\alpha$-divergence over unnormalized measures, for $\alpha = 2q-1$ \citep[Ch.~4]{amari2016information}, is defined as
\begin{align}
D_{\alpha}[\tilde{r}:\tilde{p} ]& = \small \frac{4}{(1-\alpha^2)} \bigg( \frac{1-\alpha}{2} \int \tilde{r}(z)dz \label{eq:alpha_div}\\
&\phantom{=}+ \frac{1+\alpha}{2} \int \tilde{p}(z) dz -\int \tilde{r}(z)^{\frac{1-\alpha}{2}} \, \tilde{p}(z)^{\frac{1+\alpha}{2}} dz \bigg) \nonumber
\end{align}
The $\alpha$-divergence variational representation in \cref{eq:vrep_alpha} generalizes \cref{eq:vrep_exp}, since the \textsc{kl} divergence $D_{\mathrm{KL}}[\tilde{r}||\tilde{p}]$ is recovered (with the order of arguments reversed)\footnote{The \textsc{kl} divergence extended to unnormalized measures is defined $D_{KL}[\tilde{q}:\tilde{p} ] = \int \tilde{q}(z) \log \frac{\tilde{q}(z)}{\tilde{p}(z)} dz - \int \tilde{q}(z) dz + \int \tilde{p}(z) dz$.} as $q \rightarrow 1$.
However, while the $\alpha$-divergence tends to $D_{\mathrm{KL}}[\tilde{p}||\tilde{r}]$ as ${q \to 0}$, \cref{eq:vrep_alpha} \textit{does not} generalize \cref{eq:vrep_moments} since the optimization in \cref{eq:vrep_moments} is restricted to the parametric family $\mathcal{P}_{e}$.
For the case of arbitrary endpoints, the \textit{mixture} distribution rather than the moment-averaging distribution minimizes the reverse \textsc{kl} divergence in \cref{eq:vrep_moments}, producing different paths as seen in \cref{fig:moments_vs_mixture}. We discuss this distinction in greater detail in \cref{app:mixture_path} and \cref{app:moments_as_generalized_mean}.
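This claim is easy to verify numerically on discrete distributions: the weighted reverse \textsc{kl} objective of \cref{eq:vrep_moments}, when optimized over \emph{all} distributions $r$, is minimized by the mixture (a small illustrative check; the random distributions and perturbation scheme are assumptions of the sketch):

```python
import numpy as np

def kl(p, r):
    # KL divergence between discrete distributions with full support
    return float(np.sum(p * np.log(p / r)))

rng = np.random.default_rng(1)
p0, p1, beta = rng.dirichlet(np.ones(5)), rng.dirichlet(np.ones(5)), 0.3
mix = (1 - beta) * p0 + beta * p1

def objective(r):
    return (1 - beta) * kl(p0, r) + beta * kl(p1, r)

base = objective(mix)
for _ in range(100):
    d = rng.normal(size=5)
    d -= d.mean()                      # perturbation that stays on the simplex
    r = mix + 1e-3 * d
    if np.all(r > 0):
        assert objective(r) >= base - 1e-12
```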
\begin{figure}
\centering
\subfigure[\text{Moment-Avg}]{\includegraphics[trim={0 0 0 0},clip, scale=.25]{sections/figs/moments.pdf}}
\subfigure[$q=0$]{\includegraphics[trim={0 0 0 0},clip, scale =.25]{sections/figs/q0_path.pdf}}
\caption{Moment-averaging path and $q=0$ mixture path between $\mathcal{N}(-4, 3)$ and $\mathcal{N}(4,1)$. See \cref{sec:vrep_breg}, \cref{app:mixture_path}, and \cref{app:moments_as_generalized_mean} for discussion.}
\label{fig:moments_vs_mixture}
\end{figure}
\subsubsection*{References}}
\usepackage{mathtools}
\usepackage{booktabs}
\usepackage{tikz}
\newcommand{\swap}[3][-]{#3#1#2}
\title{Instructions for Authors: Title in Title Case}
\author[1]{\href{mailto:<[email protected]>?Subject=Your UAI 2021 paper}{Jane~J.~von~O'L\'opez}{}}
\author[1]{Harry~Q.~Bovik}
\author[1,2]{Further~Coauthor}
\author[3]{Further~Coauthor}
\author[1]{Further~Coauthor}
\author[3]{Further~Coauthor}
\author[3,1]{Further~Coauthor}
\affil[1]{%
Computer Science Dept.\\
Cranberry University\\
Pittsburgh, Pennsylvania, USA
}
\affil[2]{%
Second Affiliation\\
Address\\
…
}
\affil[3]{%
Another Affiliation\\
Address\\
…
}
\begin{document}
\maketitle
\begin{abstract}
This is the abstract for this article.
It should give a self-contained single-paragraph summary of the article's contents, including context, results, and conclusions.
Avoid citations; but if you do, you must give essentially the whole reference.
For example: This whole paper is devoted to praising É. Š. Åland von Vèreweg's most recent book (“Utopia's government formation problems during the last millenium”, Springevier Publishers, 2016).
Also, do not put mathematical notation and abbreviations in your abstract; be descriptive.
So not “we solve \(x^2+A xy+y^2\), where \(A\) is an RV”, but “we solve quadratic equations in two unknowns in which a single coefficient is a random variable”.
The reason is that mathematical notation will not display correctly when the abstract is reused on the proceedings website, for example, and that one should not assume the abstract's reader knows the abbreviation.
Of course the same remarks hold for your paper's title.
\end{abstract}
\section{Introduction}\label{sec:intro}
UAI 2021 papers have to be prepared using \LaTeX.
To start writing your paper, copy \texttt{uai2021-template.tex} and replace title, authorship, and content with your own.
The UAI 2021 paper style is based on a custom \textsf{uai2021} class.
The class file sets the page geometry and visual style.\footnote{%
The class uses the packages \textsf{adjustbox}, \textsf{environ}, \textsf{letltxmacro}, \textsf{geometry}, \textsf{footmisc}, \textsf{caption}, \textsf{textcase}, \textsf{titlesec}, \textsf{titling}, \textsf{authblk}, \textsf{enumitem}, \textsf{microtype}, \textsf{lastpage}, and \textsf{kvoptions}.
}
The class file also loads basic text fonts.\footnote{%
Fonts loaded are \textsf{times} (roman), \textsf{helvet} (sanserif), \textsf{courier} (fixed-width), and \textsf{textcomp} (common symbols).
}
\emph{You may not modify the geometry or style in any way, for example, to squeeze out a little bit of extra space.}
(Also do not use \verb|\vspace| for this.)
Feel free to use convenience functionality of loaded packages such as \textsf{enumitem}.
The class enables hyperlinking by loading the \textsf{hyperref} package.
You are free to load any packages available in \TeX{Live}~2020 that are compatible with the UAI class.\footnote{In case this template or your submission does not compile, always first make sure your \TeX\ installation is up-to-date.}
(Mik\TeX{} and Mac\TeX{} generally contain the same packages.)
Do not load conflicting packages—you will get an error message—, as this complicates creating the proceedings.
Please avoid using obsolete commands, such as \verb|\rm|, and obsolete packages, such as \textsf{epsfig}.\footnote{%
See \url{https://ctan.org/pkg/l2tabu}.
}
\swap[ ]{in the header of your source file.}{Feel free to include your own macros}
\section{General Formatting Instructions}
As a general rule: \emph{follow the template}.
\subsection{Authorship}
Reviewing is double-blind.
However, you can already fill in your author names and affiliations in the \verb|\author| block in the preamble following the example of the template because the class will remove it as long as the option \textsf{accepted} is not passed to the class.
Nevertheless, make sure any other information in the paper does not disclose your identity, for example URLs to supplementary material.
\subsection{Sectioning}
Three numbered sectioning commands are provided: \verb|\section|, \verb|\subsection|, and \verb|\subsubsection|.
Please respect their order, so do not put a \verb|\subsubsection| directly beneath a \verb|\section|.
One unnumbered sectioning command is provided, \verb|\paragraph|.
It can be used directly below any numbered section level.
Do not use any other sectioning commands.
\subsubsection{Typing the Section Titles}
The \verb|\section| and \verb|\subsection| titles are uppercased by the class.
Please type them in title case.
(This is used in the PDF bookmarks.)
Please also write the \verb|\subsubsection| titles in title case.
\paragraph{What is title case?}
\href{https://en.wikipedia.org/wiki/Title_case}{Wikipedia} explains:
\begin{quote}
Title case or headline case is a style of capitalization used for rendering the titles of published works or works of art in English.
When using title case, all words are capitalized except for ‘minor’ words (typically articles, short prepositions, and some conjunctions) unless they are the first or last word of the title.
\end{quote}
\subsection{References, Citations, Footnotes}\label{sec:etc}
\subsubsection{Cross-Referencing}
Always use \verb|\label| and \verb|\ref|—or a command with a similar effect—when cross-referencing.
For example, this subsection is Section~\ref{sec:etc}.
\subsubsection{Citations}
Citations should include the author's last name and year.
They should be part of the sentence.
An example parenthetical citation: “Good introductions to the topic are available \citep{latexcompanion}.”
An example textual citation: “\citet{einstein} discusses electrodynamics of moving bodies.”
Do not use a parenthetical citation where a textual one is appropriate.
An example of what \emph{not} to do: “\citep{einstein} discusses electrodynamics of moving bodies.”
We strongly advise to use reference list software such as Bib\TeX{} and a citation package such as \textsf{natbib}.
The reference style you use should be compatible with the author-year citations.
Both the citation style and reference style used should be consistent.
For the original submission, take care not to reveal the authors' identity through the manner in which one's own previous work is cited.
For example, writing
“I discussed electrodynamics of moving bodies before \citep{einstein}.” would be inappropriate, as it reveals the author's identity.
Instead, write “\citet{einstein} discussed electrodynamics of moving bodies.”
\subsubsection{Footnotes}
You can include footnotes in your text.\footnote{
Use footnotes sparingly, as they can be distracting, having readers skip back and forth between the main text and the foot of the page.
}
The footnote mark should follow the fragment to which it refers, so a footnote\footnote{
A footnote is material put at the foot of a page.
}
for a word has a footnote mark attached to that word and a footnote for a phrase or sentence has a footnote mark attached to the closing punctuation.
\section{Math}\label{sec:math}
The class file does not load any math support package like \textsf{amsmath}\footnote{%
See the \textsf{amsmath} documentation at \url{https://ctan.org/pkg/amsmath} for further details.
}.
We advise using the \textsf{mathtools}\footnote{%
See the \textsf{mathtools} documentation at \url{https://ctan.org/pkg/mathtools} for further details.
}
package, which extends \textsf{amsmath} with fixes and even more useful commands.
Feel free to load other support packages for symbols, theorems, etc.
Use the \textsf{amsmath} environments for displayed equations.
So, specifically, use the \texttt{equation} environment instead of \verb|$$...$$| and the \texttt{align} environment instead of \texttt{eqnarray}.\footnote{For reasons why you should not use the obsolete \texttt{eqnarray} environment, see Lars Madsen, \textit{Avoid eqnarray!} TUGboat 33(1):21--25, 2012.}
An \texttt{equation}:
\begin{equation}\label{eq:example}
0 = 1 - 1.
\end{equation}
Two \texttt{align}'ed equations:
\begin{align*}
1 + 2 &= 3,\\
1 - 2 &= -1.
\end{align*}
Equations can also be put inline, of course.
For example, Equation~\eqref{eq:example}: \(0=1+1\).
(Notice that both inline and displayed math are part of the sentence, so punctuation should be added to displayed math.)
The \textsf{amsmath} and \textsf{mathtools} packages provide a lot of nice functionality, such as many common math operators, e.g., \(\sin\) and \(\max\), and also commands for defining new ones.
\section{Floats}\label{sec:floats}
Floats, such as figures, tables and algorithms, are moving objects and are supposed to float to the nearest convenient location.
Please do not force them to go in the middle of a paragraph.
They must respect the column width.
Two-column floats are possible.
They appear at the top of the next page, so strategic placement may be necessary.
For an example, see Figure~\ref{fig:tikz}.
They may not enter the margins.
\begin{figure*}
\centering
\begin{tikzpicture}[xscale=1.5]
\coordinate (origin);
\draw[->] (origin) -- +(1cm,0) node[below] {$x$};
\draw[->] (origin) -- +(0,1cm) node[left] {$y$};
\fill[gray] (45:1cm) circle[radius=.2cm];
\end{tikzpicture}
\caption{A Nice Filled Ellipse with a Pair of Coordinate Axes.}\label{fig:tikz}
\end{figure*}
All material in floats should be legible and of good quality.
So avoid very small or large text and pixelated or fuzzy lines.
\subsection{Figures}\label{sec:figures}
Figures should go in the \texttt{figure} environment and be centered therein.
The caption should go below the figure.
Use \verb|\includegraphics| for external graphics files but omit the file extension.
Supported formats are \textsf{pdf} (preferred for vector drawings and diagrams), \textsf{png} (preferred for screenshots), and \textsf{jpeg} (preferred for photographs).
Do not use \verb|\epsfig| or \verb|\psfig|.
If you want to scale the image, it is better to use a fraction of the line width rather than an explicit length.
For example, see Figure~\ref{fig:toronto}.
\begin{figure}
\centering
\includegraphics[width=0.7\linewidth,page=3]{toronto}
\caption{A View of a Nice City.}\label{fig:toronto}
\end{figure}
Do not use \verb|\graphicspath|.
If the images are contained in a subdirectory, specify this when you include the image, for example \verb|\includegraphics{figures/mypic}|.
\subsection{Tables}\label{sec:tables}
Tables should go in the \texttt{table} environment and be centered therein.
The caption should go above the table and be in title caps.
For an example, see Table~\ref{tab:data}.
\begin{table}
\centering
\caption{An Interesting Table.}\label{tab:data}
\begin{tabular}{rl}
\toprule
\bfseries Dataset & \bfseries Result\\
\midrule
Data1 & 0.12345\\
Data2 & 0.67890\\
Data3 & 0.54321\\
Data4 & 0.09876\\
\bottomrule
\end{tabular}
\end{table}
\subsection{Algorithms}\label{sec:algorithms}
You can load your favorite algorithm package, such as \textsf{algorithm2e}\footnote{See the \textsf{algorithm2e} documentation at \url{https://ctan.org/pkg/algorithm2e}.}.
Use the environment defined in the package to create a centered float with an algorithm inside.
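For instance, a minimal algorithm float in this style might look as follows. This is only a sketch: it assumes the package is loaded as \verb|\usepackage[ruled,vlined]{algorithm2e}|, and the macros (\verb|\KwIn|, \verb|\ForEach|, and so on) are those of \textsf{algorithm2e}; adapt them if you use another package.

```latex
\begin{algorithm}
\caption{An Example Algorithm.}\label{alg:example}
\KwIn{a dataset $D$}
\KwOut{an accumulated result $r$}
$r \leftarrow 0$\;
\ForEach{$d \in D$}{
  $r \leftarrow r + f(d)$\;
}
\Return{$r$}\;
\end{algorithm}
```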
\section{Back Matter}
There are some final, special sections that come at the back of the paper, in the following order:
\begin{itemize}
\item Author Contributions
\item Acknowledgements
\item References
\end{itemize}
They all use an unnumbered \verb|\subsubsection|.
For the first two, special environments are provided.
(These sections are automatically removed for the anonymous submission version of your paper.)
The third is the ‘References’ section.
(See below.)
(This ‘Back Matter’ section itself should not be included in your paper.)
\begin{contributions}
Briefly list author contributions.
This is a nice way of making clear who did what and to give proper credit.
H.~Q.~Bovik conceived the idea and wrote the paper.
Coauthor One created the code.
Coauthor Two created the figures.
\end{contributions}
\begin{acknowledgements}
Briefly acknowledge people and organizations here.
\emph{All} acknowledgements go in this section.
\end{acknowledgements}
\section{Introduction}
Transport networks occur in a vast variety of natural systems. From river basins to leaf venation, we observe a spectrum of solutions to the problem of optimal distribution through a landscape, given a set of constraints on the construction of links. Previous work on treelike networks originates in the study of river basins, which can be modeled as convergent random walks, or equivalently as flow through a linear slope field with additive noise. This approach, the Scheidegger Model \cite{ScheiPaper}, does not incorporate the effects of convergent and divergent topologies, which occur both in geomorphology (e.g. endorheic basins) and in biologically relevant systems. The vasculature of the retina, for instance, may be viewed as divergent for the arteries, which originate at the center, and convergent for the veins, which terminate there. Another example of this morphology occurs in the functional units of organs, such as the liver, which have distinct zones where the arteries originate and the veins terminate, respectively. Convergence and divergence in river networks originate in the presence of curvature in the underlying topography. Thus, to begin to understand the morphologies of transport networks found in these biologically-relevant examples, we investigate the effects of curving the manifold in which we embed the Scheidegger Model, from a plane into a cone \cite{BohnMag,RodriguezIturbe}.
The study of river networks depends upon the ability to impose a hierarchy on the streams, dividing the main channel from its tributaries. Ordering by magnitude, where each node is labeled with the number of drains, becomes rapidly intractable, due to the exponential growth of binary trees. Strahler stream ordering assigns all tips a value of one. At every junction, the resulting stream takes the value of the larger of its two inputs, unless they are the same, in which case it is augmented by one \cite{Strahler}. Binary trees grow exponentially (the number of nodes within a distance $r$ of the center goes as $2^r$) and cannot be embedded linearly into Euclidean space, which grows only algebraically (the volume of space goes as $r^d$, where $d$ is the number of spatial dimensions). Strahler stream ordering, which requires pairs of order $n-1$ streams to create one of order $n$, is inherently logarithmic and able to ``fit'' a binary tree into Euclidean space \cite{DoddsRothman}. Horton first examined the scaling of the length of, number of, and area drained by a link in a river network with its Strahler order. His work and subsequent examinations of examples around the world showed that these three quantities nearly always scale in exponential fashion, defining an empirical range of length, bifurcation, and area ratios respectively \cite{Horton, MaritanPRE}. These scaling relationships originate in the logarithmic nature of Strahler stream ordering. The addition of curvature, which changes the scaling of area with radius away from $a\sim r^2$, affects the ability to accommodate a binary tree. Spaces with negative intrinsic curvature grow exponentially and have no need for logarithmic embedding. There is thus an underlying connection between Horton's scaling laws and the curvature of the embedding topology.
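The ordering rule above is compact enough to state in code. The following sketch is our own illustration (not part of the cited formulations): a binary drainage tree is represented as nested `(left, right)` pairs, and a stream tip as `None`.

```python
def strahler(tree):
    """Strahler order of a binary tree given as nested (left, right) pairs.

    A stream tip (None) has order 1. At a junction, the stream takes the
    larger of its two input orders, unless they are equal, in which case
    the order is augmented by one.
    """
    if tree is None:  # a stream tip
        return 1
    left, right = tree
    a, b = strahler(left), strahler(right)
    return max(a, b) if a != b else a + 1
```

For example, joining two first-order tips yields order 2, while adding a single tip to that stream leaves the order unchanged, reflecting the logarithmic growth discussed above.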
Numerous other scaling relationships (usually power law) have been observed in river networks (see table in \cite{DoddsRothman}). Notably, it has also been found that $l\sim a^h$, with $l$ the mainstream length and $a$ the basin area, a relationship known as Hack's law \cite{DoddsRothman}. By relating basin morphology to a correlate of flow volume (the area drained), Hack's law provides a quantification of basin shape, useful for vascular and other distribution trees \cite{RodriguezIturbe, DoddsRothman}.
Scheidegger (1967) proposed a simplified model of river networks, in order to better understand the origin of these scaling laws. Consider a hexagonal grid of points, tilted out of the plane. Each point drains downhill and moves with equal probability to either the right or the left. Basins are collections of convergent random walks. The Hack exponent is 2/3: a basin is the area between two random walks (the neighboring streams). Their separation is itself a random walk; the area between two such random walks should scale as $l^{3/2}$ \cite{ScheiPaper, Takayasu, Dhar}. The Scheidegger model may also be created by adding a slope field plus random noise to a lattice and requiring downhill flow from every point \cite{ScheiBook, DoddsRothman}. This form is both a simplification of the physics of streamflow and amenable to embedding in convergent and divergent topologies. Adding a radial instead of a unidirectional slope embeds the model in a cone \footnote{While all the intrinsic curvature in a cone is found in the tip, its effects are felt throughout the cone}. A negative radial slope models the (temporary) divergent flow off of a mountaintop\footnote{Permanent channelization only occurs where the Laplacian of the slope field is negative; while divergent river networks occur, they are fugitive}, while a positive value represents the flow into an endorheic basin or a deep valley. The effects of curvature have been investigated previously for their effects on channelization and stream head formation, but not for their effects on network morphology \cite{ScheiBook, smith1972stability, izumi2000linear}.
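The scaling argument can be checked directly by Monte Carlo. The sketch below is our own illustration, not the original computation: the separation of the two bounding streams is modeled as a single lazy random walk started at one, and the enclosed area is accumulated until the streams merge.

```python
import random

def basin_until_merge(rng, max_steps=100_000):
    """Length and area of one basin bounded by two random-walk streams.

    The separation of the two bounding walks is itself a random walk; the
    basin area is that separation summed over the basin's length.
    """
    sep, area, steps = 1, 0, 0
    while sep > 0 and steps < max_steps:
        area += sep
        sep += rng.choice((-1, 0, 1))  # lazy walk for the separation
        steps += 1
    return steps, area
```

Averaged over many such basins, the area grows as $l^{3/2}$, i.e. a Hack exponent of $h = 2/3$.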
\section{Simulation Procedure}
Our procedure was to generate a grid of $5r_0^2$ random points in an annular region of outer radius $r_0$ and inner radius $r_0/5$ with connectivities defined through the Delaunay triangulation. To eliminate edge effects, points were placed in the square region of width $2r_0$ centered on the origin and then removed if they fell outside the desired annulus. Each point was assigned a height equal to its Euclidean distance from the origin times the radial slope of the simulation, $m$, plus frozen white noise, $\eta$, giving: $z= mr+\eta$.
Each point was then connected to its neighbor with the lowest value of $z$. If all neighbors had a larger $z$ value, then the point was assumed to be the mouth of a river network. These points almost always occurred only on the edge of the region, except in the flattest cases, see below. From the resulting connectivity matrix, we could recursively calculate the magnitude and Strahler order numbers at each point in the grid. Additionally, all points that flowed into a single mouth were collected as a basin, allowing us to collect volume, length, and order statistics.
The following results are robust to lattice geometry. Unless otherwise noted, all simulations were conducted with the following parameters: a radius of 67 units, a radial slope of -1 or 1, and a noise strength of 0.1.
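The procedure above can be sketched in a few lines. The following illustration is ours and deliberately simplified: it uses a regular square grid with 8-neighbour connectivity in place of the Delaunay triangulation, and all names are our own.

```python
import math
import random

def build_network(n, m, noise=0.1, seed=0):
    """Drain the noisy radial slope field z = m*r + eta on an n x n grid.

    Each site flows to its lowest neighbour; a site with no lower
    neighbour is a basin mouth. Returns a dict mapping every site to the
    mouth of its basin.
    """
    rng = random.Random(seed)
    c = (n - 1) / 2.0
    z = {(i, j): m * math.hypot(i - c, j - c) + noise * rng.random()
         for i in range(n) for j in range(n)}

    def downhill(p):
        i, j = p
        nbrs = [(i + di, j + dj)
                for di in (-1, 0, 1) for dj in (-1, 0, 1)
                if (di, dj) != (0, 0) and (i + di, j + dj) in z]
        q = min(nbrs, key=z.get)
        return q if z[q] < z[p] else None  # None: p is a basin mouth

    mouth = {}

    def find_mouth(p):
        if p not in mouth:
            q = downhill(p)
            mouth[p] = p if q is None else find_mouth(q)
        return mouth[p]

    for p in z:
        find_mouth(p)
    return mouth
```

Even this toy version reproduces the qualitative difference reported below: a divergent field ($m<0$) fragments the grid into many more basins than a convergent one ($m>0$).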
\section{Results and Discussion}
At positive and negative values of the radial slope, $m$, two distinct phases emerge. In the case of a convergent network, $m>0$, there are relatively few, but large basins, whereas for a divergent network, $m<0$, there are many more, considerably smaller and narrower basins. See Figure \ref{Networks}, where each basin is represented by a single color. The basins in the divergent case all appear to share the same characteristic, leaflike shape: narrow at the top, widening in the center, and tapering towards the mouth, reminiscent of tilings of the Poincar\'e disk \cite{Coxeter, CircleLimits}.
\begin{figure}
\includegraphics{Fig1PRL.png}
\caption{(Color Online) Simulated Networks for convergent ($m=1$) and divergent ($m=-1$) embedding topologies on the top and bottom, respectively. Basins, defined as collections of points with the same outlet, are colored the same. Note the differences in shape and size of basins. In the convergent case, most basins take the form of large sectors of the annular region, whereas in the divergent case there are basins at all length scales, with a characteristic leaflike shape (e.g. the heavy-outlined basins).\label{Networks}}
\end{figure}
Iterating simulations from $m=-1.5$ to $m=1.5$, to investigate the crossover region, we found a sharp increase in the average number of basins as we approached $m=0$, with the power-law scaling indicative of a singularity, $N_{basins}\sim m^\gamma$ (see Figure \ref{PhaseTrans}). Additional properties, such as the average basin length, exhibit a zero, $\bar{l}\sim m^\delta$, as $m$ vanishes, again with power-law scaling. These results lead us to conclude that there is a phase transition at zero radial slope for our modified Scheidegger Model.
\begin{figure}
\includegraphics{Fig2PRL.png}
\caption{(Color Online) Phase Transition. As $m\rightarrow 0$ from both sides, the number of basins approaches a singularity and the average basin length falls to zero, both in power-law fashion, with exponents $\gamma=-0.96\pm0.05$ and $\delta=0.94\pm0.07$, respectively. Results here come from averaging 40 simulations at $r=67$. \label{PhaseTrans}}
\end{figure}
We used finite-size scaling analysis to determine the critical exponent(s). Conducting simulations at exponentially distributed network radii from 4 to 100, we determined a value of 1 for the finite-size scaling exponent, as predicted by the conventional theory of finite-size effects \cite{Goldenfeld}. From these results, we found a value of $-0.96\pm0.05$ for $\gamma$, the critical exponent for $N_{basins}$. Extrapolating to the thermodynamic limit, we can estimate $N_{basins}$ in the divergent case as $3r/2$ and, in the convergent case, $r/2$. Hence, the ratio of basins to lattice points vanishes.
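The exponents quoted above are slopes of straight-line fits in log-log coordinates. A minimal least-squares helper (our own, purely illustrative) is:

```python
import math

def powerlaw_exponent(xs, ys):
    """Least-squares slope of log(y) against log(x): the exponent k in y ~ x**k."""
    lx = [math.log(x) for x in xs]
    ly = [math.log(y) for y in ys]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    num = sum((a - mx) * (b - my) for a, b in zip(lx, ly))
    den = sum((a - mx) ** 2 for a in lx)
    return num / den
```

Applied to $(|m|, N_{basins})$ pairs on either side of the transition, this recovers the exponent $\gamma$; applied to $(r, N_{basins})$ at fixed $m$, it recovers the finite-size scaling exponent.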
Beyond the average number and size of basins, we examine the distribution of basin sizes and shapes. We took the cases $m=\pm1$ at a simulation radius of 67 and aggregated 40 simulation runs. Given the wide dispersal in basin sizes, we binned the data logarithmically and examined the logarithm of the number of counts. In the convergent case, there was a sharp enhancement of basins at the largest size, but little other structure. In the divergent case, the data fell onto a line of slope $\approx0.25$, indicating a scale-free distribution with exponent $\approx-0.75$ (see Fig.~\ref{Hack}, upper panel), quantifying the self-similarity of basin shapes seen in Figure \ref{Networks}. Examining $\ln a$ vs. $\ln l$ in the convergent case, we observe a knee in the distribution, as finite-size effects greatly limited mainstream length, see Figure \ref{Hack}, lower panel. The largest basins were three decades greater in area and two in length than the smallest, matching the Scheidegger prediction of $h\approx2/3$. In the divergent case, finite-size effects were generally much less important. We found a (nearly) linear relationship between $a$ and $l$ ($h\approx1$). Since basins on a divergent cone tend to be narrow and elongated with little branching, this value is not unexpected. The Hack exponent provides a useful, and easily calculated, metric for the differentiation of convergent and divergent basins.
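The logarithmic binning used here can be sketched as follows (an illustrative helper of our own; counts are divided by the bin width so that a scale-free distribution appears as a straight line on log-log axes):

```python
import math

def log_binned_counts(sizes, base=2.0):
    """Histogram of positive basin sizes in logarithmically spaced bins.

    Returns (bin_centers, densities): the geometric centers of the
    occupied bins and the counts per unit size.
    """
    bins = {}
    for s in sizes:
        k = int(math.floor(math.log(s, base)))  # bin index: base**k <= s < base**(k+1)
        bins[k] = bins.get(k, 0) + 1
    centers, densities = [], []
    for k in sorted(bins):
        lo, hi = base ** k, base ** (k + 1)
        centers.append(math.sqrt(lo * hi))     # geometric bin center
        densities.append(bins[k] / (hi - lo))  # count per unit size
    return centers, densities
```

A least-squares fit of log density against log center then yields the distribution's exponent.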
\begin{figure}
\includegraphics{Fig3PRL.png}
\caption{(Color Online) Properties of convergent (blue) and divergent basins (red). On the top, the frequency of basin sizes is plotted logarithmically. The convergent basins, in blue (dark gray), display an enhancement at large sizes, whereas there is a scale-free distribution of basins in the divergent case, in red (light gray). On the bottom, the Hack's law relationship is plotted, $\ln a $ vs $ \ln l$; data points are x's and a linear regression is overlaid. Note the differing slopes: for the convergent case it is approximately 2/3, the Scheidegger value, while for the divergent case it is nearly unity.\label{Hack}}
\end{figure}
The maximum Strahler order observed in networks we could reliably simulate was found to be 6, and to occur only in convergent topologies. With such a low maximum order, it is impossible to properly define and calculate the bifurcation and other Hortonian ratios \cite{DoddsRothman}. We examined instead the distribution of Strahler numbers and found no difference in the PDF of Strahler numbers for 5th and 6th order basins between divergent and convergent networks. While the size and shape of basins may vary wildly due to the embedding manifold, the hierarchy of the links within each basin does not change.
\section{Conclusions and Applications}
When one embeds the Scheidegger model in a curved manifold, it is not the stream ordering properties that change, but rather the way that the manifold is tiled by the river basins. We have observed severe changes in the number and length of basins and in the Hack exponent, but not the distribution of Strahler orders. The same within-basin ordering principles yield grossly different bulk morphologies at the level of basins. In essence, the curving of the embedding manifold affects the shapes of the basins themselves, while the area within each basin may be treated as locally flat.
We observe two distinct phases that can only be transformed into one another after passage through a singularity, where the average number of basins blows up to order $N$ from order $\sqrt{N}$ in convergent and divergent embedding topologies. Given the minimal specifications of our implementation of the Scheidegger model, a noisy set of heights, z, and flow between neighbors chosen by $\Delta z$, we may expect that the general features are reflected in natural systems. Specifically, we predict that convergent and divergent networks will exhibit different morphologies, quantified by the Hack exponent.
A potential system for testing our model is vascular networks. As mentioned previously, divergent topography does not lead to permanent channelization, preventing us from comparing the flow off of mountaintops to that into endorheic basins. Two-dimensional vascular networks are typified by leaf venation; that system, however, does not exhibit a difference between convergence and divergence: flow both to and from the stem is carried in the same veins. Nearly two-dimensional systems may be found in certain tissues, such as the retina. At the smallest level, however, vascular networks become loopy meshes. At larger scales, these networks are treelike. We hypothesize a morphological difference between arterial and venous trees. Assuming a uniform drainage density, here a constant level of blood demand throughout the tissue in question, we may compare the length of these trees with the cross-sectional area of the terminal arteries and veins, as it correlates with flow volume \cite{Gray, Murray1, Murray2}.
Given a large enough sample of arteries and veins, the distribution of ``basin" sizes, which in vasculature would correspond with vessel cross-sectional area, could be investigated. We predict a scale-free distribution of terminal artery sizes and a distribution peaked at the system size for veins. Given the simplicity and generality of the assumptions of our model, the absence of these properties would warrant careful study of the mechanisms shaping the growth and form of vasculature.
While this system does not pose a perfect test of our model, as it exists in three dimensions and, at its lowest level (the capillaries), is full of loops, naive models of angiogenesis, in which vasculature growth depends on the concentration of VEGF or other chemical signals, map directly onto our version of the Scheidegger model \cite{GlazierVasc, GlazierVascTumor}. The concentration of a signaling molecule originates at a certain point (or area), providing a radial slope due to diffusion, and Poisson noise provides the fluctuations that impart randomness to the gradient.
While nearly all vascular networks exist in three dimensions, we do not expect the dramatic differences between convergent and divergent networks to disappear with the addition of another Euclidean dimension. As binary trees grow exponentially, the difference between two and three dimensions, both of which grow only algebraically, should be immaterial. The differences between convergence, where streams are forced together, and divergence, where the additional space allows a scale-free distribution of basin sizes, should only be amplified. At vanishing $m$, $N_{basins}$ will still be of order $r^2$, preserving the phase transition. A proper understanding of distribution networks, then, must account for the severe differences between convergence and divergence, or explain their absence, given that they arise from the most basic theoretical assumptions.
\begin{acknowledgments}
JO and MM would like to thank Eleni Katifori, Alex Petroff, and Carl Modes for helpful discussions throughout. Supported in part by the National Science Foundation under grant NSF PHY-1058899
\end{acknowledgments}
\section{Appendix}
\section{Introduction}
Linear colliders offer unique opportunities to study high energy
photon-photon collisions obtained using the process of Compton
backscattering of laser light off electron beams from the linear
collider \cite{Telnov97}. This option is included now in conceptual
design reports of the NLC, JLC and TESLA/SBLC projects of $e^+e^-$
linear collider \cite{NLC,CDR}. The expected physics at the Photon
Linear Collider (PLC) is very rich and complementary to that in
$e^+e^-$ collisions. In particular, the PLC will be an especially attractive
tool for probing the electroweak symmetry breaking sector via
precision measurements of anomalous $W$ self couplings.
In this paper a short survey of the most important processes of electroweak
gauge boson production in photon-photon collisions is given.
\section{$\gamma\g\to W^+W^-$ cross sections and quantum ${\cal O}(\alpha)$
corrections}
The reaction $\gamma\g\to W^+W^-$ would be the dominant source of
$W^+W^-$ pairs at future linear colliders, provided that the photon-photon
collider option is realized. The Born cross section of $W^+W^-$
pair production in photon-photon collisions in the scattering angle
interval $10^\circ < \theta^\pm < 170^\circ$ is 61~pb at
$\sqrt{s_{\gamma\g}}=500$~GeV and 37~pb at 1~TeV. Corresponding cross
sections of $W^+W^-$ pair production in $e^+e^-$ collisions are an
order of magnitude smaller: 6.6~pb at 500~GeV and 2.5~pb at 1~TeV.
With more than a million $WW$ pairs per year, a photon-photon collider
can really be considered a $W$-factory and an ideal place to
conduct precision tests on the anomalous triple and quartic couplings
of the $W$ bosons.
With the natural order of magnitude on anomalous couplings one needs
to know the ${\cal SM}$ cross sections with a precision better than
1\% to extract these small numbers. From a theoretical point of view
this calls for a very careful analysis of at least ${\cal O}(\alpha)$
corrections to the cross section of $W^+W^-$ pair production in $\gamma\g$
collisions, which were recently calculated including virtual
corrections \cite{DennerDittmaierSchusterAAWW} and including complete
${\cal O}(\alpha)$ corrections taking into account both virtual
one-loop corrections and real photon and $Z$-boson emission
\cite{JikiaAAWW}.
\begin{figure}
\setlength{\unitlength}{1in}
\begin{picture}(6,3.5)
\put(-.15,0){\epsfig{file=tot_pp.eps,width=3.2in,height=3.5in}}
\put(2.85,0){\epsfig{file=tot_pm.eps,width=3.2in,height=3.5in}}
\end{picture}
\fcaption{Total cross sections of $WW(\gamma)$ production for various
polarizations. Born and corrected cross sections are shown. The
curves nearest to the helicity notations represent the corrected cross
sections.}
\end{figure}
Figure~1 shows total cross section of $WW$ pair production summed over
$WW$ and $WW\gamma$ final states and integrated over $W^\pm$ scattering
angles in the interval $10^\circ<\theta^\pm<170^\circ$ as a function
of energy for various polarizations \cite{JikiaAAWW}. The bulk of the
cross section originates from transverse $W_TW_T$ pair production.
Transverse $W$'s are produced predominantly in the forward/backward
direction and the helicity conserving amplitudes are dominating. Cross
sections integrated over the whole phase space are non-decreasing with
energy. For a finite angular cutoff they do decrease as $1/s$, but
still they are much larger than suppressed cross sections. For the
dominating $++++$, $+-+-$, $+--+$ helicity configurations corrections
are negative and they rise with energy ranging from $-3\%$ at 500~GeV
to $-25\%$ at 2~TeV.
\begin{table}
\tcaption{Total unpolarized Born cross sections and relative
corrections for various intervals of $W^\pm$ scattering
angles. Corrections originating from real hard photon
($\omega_\gamma>k_c=0.1$~GeV) and $Z$-boson emission as well as IR-finite
sum of soft photon and virtual boson contributions, fermion virtual
corrections and total corrections are given separately.}
\begin{center}
\begin{center}
$\sqrt{s} = 300$~GeV
\end{center}
\begin{tabular}{|c|c|c|c|c|c|c|}\hline\hline
$\theta_{W^\pm}$, ${}^\circ$ & $\sigma^{Born}$, $pb$ & $\delta^{hard}$, \% &
$\delta^{Z}$, \% & $\delta^{soft+bose}$, \% & $\delta^{fermi}$, \%
&$\delta^{tot}$, \% \\ \hline
$ 0^\circ < \theta < 180^\circ$ & 70.22 & 4.15
& 2.64$\cdot 10^{-2}$& $-$7.09 & 0.327 & $-$1.37
\\
$ 10^\circ < \theta < 170^\circ$ & 64.46 & 4.11
& 2.74$\cdot 10^{-2}$& $-$7.31 & 0.257 & $-$1.59
\\
$ 30^\circ < \theta < 150^\circ$ & 38.15 & 4.09
& 3.27$\cdot 10^{-2}$& $-$8.62 & $-$0.123 & $-$2.67
\\
$ 60^\circ < \theta < 120^\circ$ & 12.96 & 4.02
& 2.94$\cdot 10^{-2}$& $-$10.7 & $-$0.415 & $-$3.75
\\
\hline\hline
\end{tabular}
\vspace{.5cm}
\begin{center}
$\sqrt{s} = 500$~GeV
\end{center}
\begin{tabular}{|c|c|c|c|c|c|c|}\hline\hline
$\theta_{W^\pm}$, ${}^\circ$ & $\sigma^{Born}$, $pb$ & $\delta^{hard}$, \% &
$\delta^{Z}$, \% & $\delta^{soft+bose}$, \% & $\delta^{fermi}$, \%
&$\delta^{tot}$, \% \\ \hline
$ 0^\circ < \theta < 180^\circ$ & 77.50 & 7.96 & 0.468
& $-$10.1 & 9.04$\cdot 10^{-2}$& $-$1.63
\\
$ 10^\circ < \theta < 170^\circ$ & 60.71 & 7.89 & 0.541
& $-$10.7 & $-$0.242 & $-$2.52
\\
$ 30^\circ < \theta < 150^\circ$ & 21.85 & 8.05 & 0.817
& $-$13.0 & $-$1.34 & $-$5.50
\\
$ 60^\circ < \theta < 120^\circ$ & 5.681 & 8.02 & 0.789
& $-$14.8 & $-$2.13 & $-$8.12
\\
\hline\hline
\end{tabular}
\vspace{.5cm}
\begin{center}
$\sqrt{s} = 1000$~GeV
\end{center}
\begin{tabular}{|c|c|c|c|c|c|c|}\hline\hline
$\theta_{W^\pm}$, ${}^\circ$ & $\sigma^{Born}$, $pb$ & $\delta^{hard}$, \% &
$\delta^{Z}$, \% & $\delta^{soft+bose}$, \% & $\delta^{fermi}$, \%
&$\delta^{tot}$, \% \\ \hline
$ 0^\circ < \theta < 180^\circ$ & 79.99 & 13.3 & 1.55
& $-$18.7 & $-$5.51$\cdot 10^{-2}$& $-$3.89
\\
$ 10^\circ < \theta < 170^\circ$ & 37.04 & 13.4 & 2.39
& $-$22.6 & $-$1.28 & $-$8.10
\\
$ 30^\circ < \theta < 150^\circ$ & 6.924 & 14.2 & 3.96
& $-$32.1 & $-$3.80 & $-$17.8
\\
$ 60^\circ < \theta < 120^\circ$ & 1.542 & 14.2 & 3.88
& $-$37.1 & $-$5.13 & $-$24.1
\\
\hline\hline
\end{tabular}
\vspace{.5cm}
\begin{center}
$\sqrt{s} = 2000$~GeV
\end{center}
\begin{tabular}{|c|c|c|c|c|c|c|}\hline\hline
$\theta_{W^\pm}$, ${}^\circ$ & $\sigma^{Born}$, $pb$ & $\delta^{hard}$, \% &
$\delta^{Z}$, \% & $\delta^{soft+bose}$, \% & $\delta^{fermi}$, \%
&$\delta^{tot}$, \% \\ \hline
$ 0^\circ < \theta < 180^\circ$ & 80.53 & 19.0 & 2.91
& $-$27.2 & $-$7.45$\cdot 10^{-2}$& $-$5.33
\\
$ 10^\circ < \theta < 170^\circ$ & 14.14 & 20.1 & 6.38
& $-$41.6 & $-$2.99 & $-$18.1
\\
$ 30^\circ < \theta < 150^\circ$ & 1.848 & 21.5 & 9.77
& $-$60.1 & $-$6.54 & $-$35.4
\\
$ 60^\circ < \theta < 120^\circ$ & 0.3936 & 21.6 & 9.60
& $-$67.6 & $-$8.04 & $-$44.5
\\
\hline\hline
\end{tabular}
\end{center}
\end{table}
In Table~1 Born cross sections and relative corrections are given for
several intervals of $W^\pm$ scattering angles \cite{JikiaAAWW}. At
high energies large cancellations occur between negative virtual
corrections and positive corrections corresponding to real photon or
$Z$-boson emission. Consequently, although the correction originating
from the $WWZ$ production is completely negligible at
$\sqrt{s_{\gamma\g}}=0.3$~TeV, it is of the same order of magnitude as
hard photon correction at 2~TeV. Although at $300\div 500$~GeV
corrections are quite small ranging from $-1.3\%$ to $-8\%$, depending
on angular cuts, at TeV energies the value of radiative corrections in
the central region of $W^+W^-$ production becomes quite large, so that
corrections in the region $60^\circ < \theta < 120^\circ$ are $6\div
8$ times larger than the corrections to the total cross section at
$1\div 2$~TeV. They range from $-24\%$ to $-45\%$. Thus if precision
measurements are to be made at TeV energy, more careful theoretical
analysis could be needed in order to reliably predict the value of the
cross section in the central region where the value of the cross
section is the most sensitive to the $W$ anomalous couplings.
\section{$\gamma\g\to ZZ$ production}
$Z$-pair production in photon-photon collisions plays a special role
due to the possibility of observing the Higgs signal in \mbox{$\gamma\gamma$}\ collisions
for Higgs bosons heavier than $2M_Z$ in the $ZZ$ decay mode
\cite{GunionHaber,BBC} if one of the $Z$'s is required to decay to
$l^+l^-$ to suppress the huge tree-level $\gamma\gamma\to W^+W^-$
continuum background. However, even though there is no tree-level $ZZ$
continuum background, such a background due to the reaction
$\gamma\gamma\to ZZ$ does arise at the one-loop level in the
electroweak theory
\cite{JikiaAAZZ,BergerAAZZ,DicusKaoAAZZ,VeltmanAAZZ} which makes the
Higgs observation in the $ZZ$ mode impossible for $m_h\mathrel{\raise.3ex\hbox{$>$\kern-.75em\lower1ex\hbox{$\sim$}}} (350\div
400)$~GeV. It was found that for $185\mathrel{\raise.3ex\hbox{$<$\kern-.75em\lower1ex\hbox{$\sim$}}} m_h\mathrel{\raise.3ex\hbox{$<$\kern-.75em\lower1ex\hbox{$\sim$}}} 300$~GeV the $ZZ$
mode will provide a 10-20\% determination of the quantity
$\Gamma(h\to\gamma\gamma)\cdot BR(h\to ZZ)$.
\begin{figure}
\setlength{\unitlength}{1in}
\begin{picture}(6,3.5)
\put(-.15,0){\epsfig{file=z2a.eps,width=3.2in,height=3.5in}}
\put(2.85,0){\epsfig{file=z4b.eps,width=3.2in,height=3.5in}}
\end{picture}
\fcaption{(a) Cross section of the $ZZ$ pair production in polarized
$\gamma\g$ collisions versus c.m.s. energy of the $e^+e^-$ collisions
computed taking into account photon spectrum of the backscattered
laser beams. Both $Z$-bosons have $|\cos \theta_Z|<\cos
30^{\mbox{o}}$. Curves for $Z_LZ_L$ (solid line), $Z_TZ_T$ (dashed
line) and $Z_LZ_T$ (dotted line) production are shown. Different
curves for longitudinal $Z_LZ_L$ pair production correspond to Higgs
boson masses of 300, 500, 800, 1000~GeV and infinity.\\
(b) The invariant mass, $M_{ZZ}$, distribution of $Z$-bosons for
$\gamma\g\to ZZ$ in photon-photon collisions at
$\sqrt{s_{e^+e^-}}=500$~GeV and $m_H=250$, 300 and 350~GeV. Curves for
$Z_LZ_L$ (dotted line), $Z_TZ_T$ (dashed line) and $Z_LZ_T$ (long
dashed line) production are shown in addition to the sum over all
polarizations of the $Z$-boson (solid line).}
\end{figure}
In Fig.~2 the cross section of the $ZZ$ pair production and invariant
mass distribution at the PLC are shown \cite{JikiaAAZZ}. With the
polarizations of the initial electron (positron) and laser beams shown,
the photon-photon energy spectrum peaks just below the highest allowed
photon-photon energy and colliding photons are produced mainly with
equal mean helicities $\langle\xi_1\xi_2\rangle\sim 1$. As for the case of $W$
pair production, at high energies the cross section is dominated by
the transversely polarized $Z_TZ_T$ pair production. As already
mentioned, while clear Higgs boson peaks are observable at
$\sqrt{s_{e^+e^-}}=500$ GeV for $m_H=250$ and 300 GeV in Fig.~2b, a
background from transverse $Z_TZ_T$ pair production makes the
observation of heavier Higgs signal quite problematic.
\begin{figure}
\setlength{\unitlength}{1in}
\begin{picture}(6,3.5)
\put(-.15,0){\epsfig{file=wwzzpp.eps,width=3.2in,height=3.5in}}
\put(2.85,0){\epsfig{file=wwzzpm.eps,width=3.2in,height=3.5in}}
\end{picture}
\fcaption{Comparison between the cross sections for $m_H=100$~GeV and
$m_H=\infty$ for equal and opposite helicities of the initial
photons. For the reaction $\gamma\g\to WWWW$ the following cross sections
are shown: total cross section (solid line), the $TTTT+TTTL$ cross
sections (dotted line), the sum of cross sections with at least two
longitudinal final $W$'s (dashed line). For the reaction $\gamma\g\to
WWZZ$ corresponding cross sections are denoted by solid, dotted and
dash-dotted lines.}
\end{figure}
\section{$W^+W^-\to W^+W^-$, $ZZ$ scattering}
At center-of-mass energy above 1~TeV the effective $W$ luminosity
becomes substantial enough to allow for the study of $W^+W^-\to
W^+W^-$, $ZZ$ scattering in the reactions $\mbox{$\gamma\gamma\,$}\to WWWW$, $WWZZ$,
when each incoming photon turns into a virtual $WW$ pair, followed by
the scattering of one $W$ from each such pair to form $WW$ or
$ZZ$.
It was found \cite{JikiaWWWW,CheungWWWW} that a signal of SM
Higgs boson with $m_h$ up to 700~GeV (1~TeV) could be probed in these
processes at 1.5~TeV (2~TeV) PLC, assuming integrated luminosity of
200~fb$^{-1}$ (300~fb$^{-1}$). However even larger luminosity is
needed in order to extract the signal of enhanced $W_LW_L$ production
in models of electroweak symmetry breaking without Higgs boson
\cite{JikiaWWWW}. The main problem is again the large background from
transverse $W_TW_TW_TW_T$, $W_TW_TZ_TZ_T$ production.
Event rates as well as signal/background ratio and the statistical
significance corresponding to various values of the Higgs boson mass
and cosine of the dead cone angle $z_0$ are given in Table~2 for total
energies of 1.5 and 2~TeV \cite{JikiaWWWW}. The value of integrated
luminosity of 200~fb$^{-1}$ is assumed and branching ratio of 50\% for
hadronic decays of $WW$, $ZZ$ pairs is included. At $\sqrt{s}=1.5$~TeV
we require that the invariant mass $M_{34}$ of central pair $WW$, $ZZ$
lie in the interval 400~GeV$<M_{34}<$~600~GeV for $m_H=500$~GeV and
500~GeV$<M_{34}<$~800~GeV for $m_H=700$~GeV. For $m_H=1$~TeV and
$\sqrt{s}=2$~TeV, 450~GeV$<M_{34}<$~1.1~TeV.
\begin{table}[htb]
\tcaption{Event rates for signal ($S$) and background ($B$) summed
over $WWWW$ and $WWZZ$ final states as well as signal/background ratio
and statistical significance. }
\begin{center}
\begin{tabular}{|c|c|cccc||cccc|} \hline\hline
\multicolumn{2}{|c|}{}
&\multicolumn{4}{|c||}{$z_0=\cos(10^\circ)$}
&\multicolumn{4}{|c|}{$z_0=\cos(5^\circ)$}\\ \hline
$\sqrt{s_{e^+e^-}}$, TeV & $m_H$, GeV & $S$ & $B$ & $S/B$ & $S/\sqrt{B}$
& $S$ & $B$ & $S/B$ & $S/\sqrt{B}$ \\ \hline
1.5 & 500 & 84 & 34 & 2.5 & 14 & 218 & 56 & 3.9 & 29 \\
& 700 & 24 & 23 & 1.0 & 5.0 & 53 & 37 & 1.4 & 8.7 \\ \hline
2 & 1000 & 14 & 21 & 0.67 & 3.0 & 74 & 59 & 1.3 & 9.6 \\ \hline\hline
\end{tabular}
\end{center}
\end{table}
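The derived columns of Table~2 follow directly from the listed event counts; a quick arithmetic check (a Python sketch outside the paper's own analysis) for the $m_H=500$~GeV rows at $\sqrt{s}=1.5$~TeV:

```python
import math

# Signal/background event counts from Table 2, m_H = 500 GeV, sqrt(s) = 1.5 TeV,
# for the two dead-cone angles z0 = cos(10 deg) and z0 = cos(5 deg).
rows = [
    {"S": 84, "B": 34},    # z0 = cos(10 deg)
    {"S": 218, "B": 56},   # z0 = cos(5 deg)
]

for r in rows:
    r["S/B"] = r["S"] / r["B"]                   # signal/background ratio
    r["S/sqrt(B)"] = r["S"] / math.sqrt(r["B"])  # statistical significance

# Reproduces the tabulated 2.5 / 14 and 3.9 / 29.
```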
As one can see from Fig.~3, the contributions from final states with two
longitudinal weak bosons, $TTLL$ and $TLTL$, which are sensitive to the
heavy Higgs boson, are about an order of magnitude smaller than that for
$TTTT$ production \cite{JikiaWWWW}. So, for the total cross sections
one should expect a signal-to-background ratio of about 10$\%$.
\section{$\gamma\g\to \gamma\g$, $\gamma Z$}
Neutral gauge boson $\gamma\g$, $\gamma Z$, $ZZ$ pair production processes in
photon--photon fusion are of special interest because these
processes are absent at the classical level and are generated only at the
one-loop level by quantum corrections. The collision of high
energy, high intensity photon beams at the Photon Collider would
provide novel opportunities for such processes. The distinctive
feature of the electroweak vector boson loops contribution is that it
leads to the differential cross sections behaving as $d\sigma/dt\propto
1/t^2$ in the high energy limit and, hence, to total cross sections
that do not decrease with energy.
The total cross sections of $\gamma\g$, $\gamma Z$ pair production are shown
in Figs.~4 and 5, respectively. The $W$ loop contribution dominates at
photon-photon collision energies above 250~GeV.
\begin{figure}
\setlength{\unitlength}{1in}
\begin{picture}(6,3.5)
\put(-.15,0){\epsfig{file=a1.eps,width=3.2in,height=3.5in}}
\put(2.85,0){\epsfig{file=a2.eps,width=3.2in,height=3.5in}}
\end{picture}
\fcaption{Total cross section of photon-photon scattering in
monochromatic photon-photon collisions versus $\gamma\g$ c.m. energy for
different helicities of the incoming photons. Total cross section
(solid line) as well as $W$ boson loop contribution (dashed line) and
fermion loop contribution (dotted line) are shown.}
\end{figure}
\begin{figure}
\setlength{\unitlength}{1in}
\begin{picture}(6,3.5)
\put(-.15,0){\epsfig{file=sigpp.eps,width=3.2in,height=3.5in}}
\put(2.85,0){\epsfig{file=sigpm.eps,width=3.2in,height=3.5in}}
\end{picture}
\fcaption{Total cross section of $\gamma Z$ pair production in
monochromatic photon-photon collisions versus $\gamma\g$ c.m. energy for
different helicities of the incoming photons and final $Z$
boson. Total cross section (solid line) as well as $W$ boson loop
contribution (dashed line) and fermion loop contribution (dotted line)
are shown.}
\end{figure}
In fact, the measurement of $\gamma\g\to\gamma Z$ cross section is a
measurement of $Z\gamma\g\gamma$ coupling, which could be also measured in
three photon $Z$ decay. However, as is well known, the decay of the $Z$
boson into three photons via both fermion and $W$ boson loops in the SM
has too small a branching ratio (of the order of $3\cdot 10^{-10}$
\cite{zaaa}) to be observed at LEP experiments. On the other hand,
at PLC the $\gamma Z$ final state, which should be background free,
has the largest observable rate (if no light Higgs boson is present)
in comparison to $\gamma\gamma$ and $ZZ$ ({\it e.g.}, about three hundred
$\gamma Z$ pairs per year can be produced at the Photon Collider
realized at the 500~GeV electron linear collider). So, even the unique
Standard Model $\gamma\g\gamma Z$ vertex can be measured in photon-photon
collisions. Numerical values of the total cross section (with the
angular cuts imposed) are given in Table~3 \cite{AAAZ}.
\begin{table}[htb]
\tcaption{Total cross sections for $\gamma Z$ pair production at PLC (angular cuts imposed)}
\begin{center}
\begin{tabular}{|c||c|c|c|}\hline\hline
$\sqrt{s_{e^+e^-}}$ & 300 GeV & 500 GeV & 1 TeV \\ \hline
$\sigma(\gamma\gamma\to\gamma Z_T)$ & 9.3 fb & 32 fb & 53 fb \\
$\sigma(\gamma\gamma\to\gamma Z_L)$ & 0.28 fb & 0.51 fb & 0.39 fb \\
\hline\hline
\end{tabular}
\end{center}
\end{table}
\newpage
\section{Experimental Setup}
\vspace{-0.5em}
The setup used to demonstrate SWIFT is shown in Fig. 2(a). A pair of commercial Oclaro (now Lumentum) digital-supermode distributed Bragg reflector (DS-DBR) lasers were driven by 250 MS/s arbitrary waveform generators (AWGs) with 125~MHz bandwidth. Detailed IV measurements were used to map supplied voltage to desired current. Each laser was connected to a commercial InPhenix SOA, supporting 69~nm of bandwidth with typical characteristics of 7~dB noise figure, 20~dB gain, and 10~dBm saturation power. Each SOA was driven with a 45~mA current source modulated by a 12~GS/s AWG with $\pm$0.5~V output and amplified to $\pm$4~V using an electrical amplifier. All four optical devices were held at 25$^\circ$C using temperature controllers. The SOAs were coupled together and passed to a digital coherent receiver (50 GS/s, 22~GHz bandwidth) and a digital sampling oscilloscope (50 GS/s, 30~GHz bandwidth), which provided optimisation feedback to the DS-DBR lasers and to the SOAs respectively.
\begin{figure*}[!t]
\centering
\label{fig:exp_results_1}
\setlength\tabcolsep{0pt}
\renewcommand{\arraystretch}{0}
\begin{tabular}{ccc}
\begin{overpic}[scale=0.45]{Figures/setup_small_labels.png}
\put(-5,55){(a)}
\end{overpic}
&
\begin{overpic}[scale=0.14]{Figures/fo.png}
\put(12,50){(b)}
\end{overpic}
&
\begin{overpic}[scale=0.14]{Figures/dsdbr_cdf.png}
\put(12,50){(c)}
\end{overpic}
\end{tabular}
\vspace{-1em}
\caption{(a) Experimental setup of time-multiplexed SWIFT tuneable lasers (TL) gated by SOAs. (b) TL frequency offset (FO) of worst-case current swing w/ \& w/o optimiser. (c) CDF of all worst-case laser switch combinations w/ \& w/o optimiser.}
\vspace{-2em}
\end{figure*}
\section{Introduction}
\vspace{-.5em}
The most common data centre network (DCN) packet length is $<$256 bytes, which translates to 20~ns slots in 100G links \cite{clark2018}. Optical circuit switching (OCS) aims to transform DCNs but needs to operate at packet speed and granularity \cite{benjamin2020}.
Recent breakthroughs have brought OCS closer to reality. A hardware-based OCS scheduling algorithm has demonstrated synchronous scheduling of up to 32,768 nodes within 2.3~ns \cite{benjamin2020}. A clock phase caching method has enabled clock and data recovery in less than 625 ps, supporting 10,000 remote nodes \cite{clark2018}. Yet, energy-efficient, sub-ns, many-channel optical switching remains a challenge.
Wideband fast tuneable lasers have demonstrated switching on ns timescales \cite{simsarian2006,gerard2020}, and as low as 500~ps but over limited bandwidths \cite{ueda2019}. Static laser diodes (LDs) gated by semiconductor optical amplifiers (SOAs) have achieved 912 ps 10-90\% rise-fall times with $\sim$2~ns settling time ($\pm5\%$ of the target value) \cite{shi2019}; however, the power consumption and device count limit the scalability of this approach. A similar method used an optical comb where each wavelength was filtered then gated by an SOA \cite{lange2020}; the power consumption and device count therefore also increase linearly with number of channels, limiting scalability (see Fig.~1(b)).
In this paper, we introduce SWIFT: a modular system with Scalable Wideband Interconnects for Fast Tuning. SWIFT combines pairs of optimised widely tuneable lasers (TLs), multiplexing
their wavelength reconfiguration on packet timescales. The lasers are gated by pairs of fast switching SOAs, resulting in wideband, sub-ns switching. The modular design of SWIFT (Fig. 1(a)) shows that just two lasers and two SOAs cover each optical transmission band. SWIFT power consumption is, therefore, practically independent of channel count; Fig.~1(b) shows that SWIFT becomes more power efficient than alternative sub-ns switching sources
beyond 8$\times$50~GHz spaced channels.
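The scaling argument can be sketched numerically; the per-device powers below are placeholder assumptions (arbitrary units), not the measured values behind Fig.~1(b):

```python
import math

P_LASER, P_SOA = 1.0, 0.5   # placeholder per-device powers, arbitrary units

def swift_power(n_channels, channels_per_band=122):
    # A pair of tuneable lasers plus a pair of gating SOAs covers a whole
    # transmission band, so power grows with bands, not with channel count.
    bands = math.ceil(n_channels / channels_per_band)
    return bands * (2 * P_LASER + 2 * P_SOA)

def ld_array_power(n_channels):
    # One static laser diode plus one gating SOA per channel: linear growth.
    return n_channels * (P_LASER + P_SOA)

# Under these placeholder numbers the per-channel design overtakes SWIFT
# after only a few channels; the measured crossover in Fig. 1(b) is at 8.
```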
\begin{figure*}[!b]
\centering
\label{fig:principle}
\setlength\tabcolsep{0pt}
\renewcommand{\arraystretch}{0}
\begin{tabular}{cccc}
\hspace{1mm}%
\includegraphics[scale=0.54]{Figures/swift_modules.png}
&
\hspace{0mm}%
\includegraphics[scale=0.19]{Figures/power_consumption_annotated.png}
&
\hspace{1mm}%
\includegraphics[scale=0.5]{Figures/pulse_reduced.png}
&
\hspace{3mm}%
\includegraphics[scale=0.11]{Figures/switch_time_vs_num_wavelengths.png}
\\
(a)&(b)&(c)&(d)
\end{tabular}
\vspace{-1em}
\caption{
(a) Modular SWIFT architecture across S-, C- and L-bands.
(b) Power consumption comparison of laser switch designs vs. no. of channels, using data reported in \cite{benjamin2020arXiv}.
(c) PULSE DCN architecture with SWIFT modules (in red). (d) Comparison of switching times (reported rise (solid) and estimated settling (faded)) against no of channels for different switch systems.
}
\vspace{-2em}
\end{figure*}
The SWIFT modules can be deployed as transmitters in DCN architectures such as PULSE \cite{benjamin2020arXiv}, as shown in Fig.~1(c). In this architecture, each node has $x$ SWIFT transmitters (highlighted in red), each local pod has $N$ nodes, and $x^2$ star couplers enable there to be $x$ source and $x$ destination pods. Thus, PULSE network's number of end-points scales with $N \times x$, where $N$ is limited by the number of wavelength channels. The large number of channels supported by SWIFT therefore allows for significant scalability in the PULSE DCN \cite{benjamin2020}.
The concept of time-multiplexed, fast tuneable lasers was proposed in \cite{benjamin2020arXiv,ryan2008}, but faced the challenge of optimising multiple lasers and SOAs for reliable fast tuning. SWIFT overcomes this by applying artificial intelligence (AI) techniques to the devices, enabling autonomous optimisation. This has allowed us to demonstrate, for the first time, a time-multiplexed, gated laser tuning system that can tune over 6.05~THz of bandwidth and consistently switch in 547~ps or better to support 20~ns timeslots. SWIFT outperforms other fast switching systems in terms of rise time, settling time and channel count, as shown in Fig.~1(d).
\section{Results and Discussion}
\vspace{-0.5em}
\subsection{Regression optimised laser switching}
Fast wavelength switching can be achieved by applying `pre-emphasis' to the drive sections of an integrated semiconductor laser. Until recently, pre-emphasis values had to be carefully tuned by hand for select samples and then extrapolated \cite{simsarian2006}.
Here, we apply a linear regression optimiser to automatically calculate the pre-emphasis values for reliable fast tuning. We measured the output of the DS-DBR laser during a switching event using the coherent receiver, then used the instantaneous frequency response as the error term within a linear regression optimiser to iteratively update the applied pre-emphasis values \cite{gerard2020}. Fig 2(b) shows an example of the laser's switching response before and after application of the optimiser.
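The optimiser loop can be sketched as follows, with a toy first-order lag standing in for the true DS-DBR frequency response; the model, gain and iteration count are illustrative assumptions, not the experimental values:

```python
import numpy as np

ALPHA, MU, STEPS = 0.2, 0.5, 200    # toy plant pole, update gain, iterations
target = np.ones(50)                # desired post-switch frequency (a.u.)
drive = np.ones(50)                 # start from a plain, un-pre-emphasised step

def simulate(drive):
    # Toy laser: instantaneous frequency lags the drive (first-order response).
    f, out = 0.0, []
    for d in drive:
        f += ALPHA * (d - f)
        out.append(f)
    return np.array(out)

initial_error = np.abs(simulate(drive) - target).max()
for _ in range(STEPS):
    # Measured frequency error drives the update: boost the drive (i.e. add
    # pre-emphasis) wherever the frequency is still lagging the target.
    drive += MU * (target - simulate(drive))
final_error = np.abs(simulate(drive) - target).max()
```

After convergence the drive carries a large leading overshoot followed by the steady-state level, i.e. the familiar pre-emphasis shape that used to be tuned by hand.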
We applied this optimiser to 21 of the 122 supported channels, testing the extremes of lasing frequency and drive current, covering 462 any-to-any switching events across 6.05~THz (1524.11-1572.48~nm). Fig. 2(c) shows the cumulative distribution of the time taken to reach $\pm$5~GHz of the target wavelength. We measure a worst case switch time of 14.7~ns, and a worst case frequency offset after 20~ns of $-$4.5~GHz. This indicates that SWIFT is potentially suitable for burst mode coherent detection, as 28~GBd dual-polarisation quadrature phase shift keying is tolerant of frequency offsets up to $\pm$7~GHz \cite{simsarian2014}.
\vspace{-0.5em}
\subsection{Particle swarm optimised SOA switching}
\vspace{-0.3em}
SOA driving signals must also be optimised to approach their theoretical rise/fall times of $\sim100$ ps.
Previous optimisation attempts did not consider settling times nor the ability to automate the optimisation of driving conditions for 1,000s of different SOAs in real DCNs \cite{gallep2002, figueiredo2015}.
To solve this, particle swarm optimisation (PSO), a population-based metaheuristic that optimises continuous nonlinear functions by combining swarm theory with evolutionary programming, was used in this work to optimise the SOA driving signals. PSO has previously been applied to proportional-integral-derivative (PID) tuning in control theory \cite{kusuma2016}, but has not yet been used as an autonomous method for optical switch control. In the optimisation, $n = 160$ particles (driving signals) were initialised in an $m = 240$-dimensional search space, where $m$ is the number of points in the signal, and iteratively `flown' through the space by evaluating each particle's position with a fitness function $f$, defined as the mean squared error between the drive signals' corresponding optical outputs (recorded on the oscilloscope) and an ideal target `set point' (SP) with zero overshoot, settling time and rise time. As shown in Fig. 3(a) and (b), the
$\pm$ 5\% settling time (effective switching time) of the SOA was reduced from 3.72~ns (when driven by a simple square driving signal) to 547~ps, with the 10-90\% rise time also reduced from 697~ps to 454~ps. The PSO routine required no prior knowledge of the SOA, therefore provides a flexible, automated and scalable method for optimising SOA gating.
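A stripped-down version of this optimisation can be sketched with a toy first-order SOA model in place of the oscilloscope feedback, and deliberately small swarm and signal sizes (all parameter values here are illustrative, not the experimental ones):

```python
import numpy as np

rng = np.random.default_rng(0)
ALPHA, M = 0.3, 40                  # toy SOA pole; points per drive signal

def soa_response(drive):
    # Toy SOA: optical output is a first-order lag of the drive current.
    y, out = 0.0, []
    for d in drive:
        y += ALPHA * (d - y)
        out.append(y)
    return np.array(out)

target = np.ones(M)                 # ideal set point: instant, ripple-free gate
def fitness(drive):
    return np.mean((soa_response(drive) - target) ** 2)

N, ITERS = 30, 200                  # particles, iterations (160/240 in the text)
pos = rng.uniform(0.0, 2.0, (N, M)) # each particle is a candidate drive signal
pos[0] = np.ones(M)                 # seed the swarm with the plain square drive
vel = np.zeros((N, M))
pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(ITERS):
    r1, r2 = rng.random((N, M)), rng.random((N, M))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    f = np.array([fitness(p) for p in pos])
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[pbest_f.argmin()].copy()

square_f, best_f = fitness(np.ones(M)), pbest_f.min()
```

The swarm discovers drive shapes that beat the plain square wave, mirroring how the experimental PSO reduced the settling time without any prior model of the device.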
\begin{figure*}[!t]
\centering
\label{fig:exp_results_2}
\setlength\tabcolsep{0pt}
\renewcommand{\arraystretch}{0}
\begin{tabular}{ccc}
\begin{overpic}[scale=0.15]{Figures/soa_outputs_annotated.png}
\put(15,52){(a)}
\end{overpic}
&
\begin{overpic}[scale=0.15]{Figures/soa_outputs_zoomed_annotated.png}
\put(12,52){(b)}
\end{overpic}
&
\begin{overpic}[scale=0.03]{Figures/1572p48_1524p11_1565p5_1530p72_2_channel_gating.png}
\put(5,40){(c)}
\end{overpic}
\\
\begin{overpic}[scale=0.22]{Figures/transitions.png}
\put(3,45){(d)}
\end{overpic}
&
\begin{overpic}[scale=0.15]{Figures/osa_outputs_all_pairs.png}
\put(12,50){(e)}
\end{overpic}
&
\begin{overpic}[scale=0.037]{Figures/burst_fo_data.png}
\put(-1,45){(f)}
\end{overpic}
\\
\end{tabular}
\vspace{-1em}
\caption{SOA outputs showing (a) step \& (b) PSO rise \& settling times, (c) SWIFT system output, (d) $\lambda$-to-$\lambda$ 90-90\% switching times, (e) optical spectrum of the 21 worst-case channels, (f) frequency offset (FO) of DSDBR (top) \& SWIFT (bottom).}
\vspace{-2em}
\end{figure*}
\vspace{-0.3em}
\vspace{-0.5em}
\subsection{SWIFT module demonstration}
\vspace{-0.3em}
After optimisation, the DS-DBR lasers were driven with 12.5~MHz pre-emphasised square waves, resulting in $\leq$40~ns bursts on each wavelength. The lasers were driven 20~ns out of phase, so that one lased while the other reconfigured. The SOAs were driven by 25~MHz PSO-optimised signals, resulting in 20~ns gates, and aligned to block the first 15~ns and last 5~ns of each laser burst, yielding four wavelength bursts of 20~ns each (see Fig.~2(a)). Fig. 3(c) shows the oscilloscope output for the most difficult switching instance, where DS-DBR laser 1 switched from 1572.48~nm to 1524.11~nm, incurring a large rear current swing of 45~mA. The oscilloscope shows a flat intensity response across each wavelength for 20~ns bursts, thereby providing twice
the granularity reported in \cite{shi2019}. Packet-to-packet power variations are due to slight variations in laser wavelength power; these can be addressed by applying slot specific SOA drive currents (not possible in our setup). Measuring switch time by the 90-90\% transition time, we report switch times for the four transitions of 771, 812, 521, and 792 ps, respectively. These are shown in Fig.~3(d).
Furthermore, Fig.~3(f) shows the coherent receiver output of the four wavelength slots with and without gating. The observed frequency ripples are a result of the low sample rate of our 250~MS/s AWG that introduce Fourier components to the driving square wave; these can be easily suppressed by using a higher sample rate. Despite this, each slot stays within 5~GHz of its target.
We repeated this process for each of the channels under test. Fig. 3(e) shows the optical spectrum for all channels, all undergoing gated switching. We measured a worst case value for the side mode suppression ratio of 35~dB, optical power output of 0.8~dBm for a single wavelength (at 1572.48~nm) and corresponding extinction ratio of 22~dB. The fully time-multiplexed optical output power of SWIFT was $>$6~dBm.
This represents the largest number of sub-ns switching channels from a single sub-system ever reported, supporting 122$\times{}$50~GHz spaced channels.
\textbf{In conclusion}, we propose a scalable, low power, tuneable wavelength subsystem capable of sub-ns switching. Using pairs of time-multiplexed tuneable lasers, gated by SOAs, we have experimentally demonstrated switching times of less than 900~ps for 122$\times$50~GHz channels.
Reliable and fast tuning was achieved for each laser and SOA using regression and particle swarm optimisation AI techniques. This enables automated, device-specific optimisation and represents a critically important technology in OCS architectures, potentially transforming DCN architectures.
\vspace{-0.2em}
\begin{center}\begin{small}\it{This work is supported by EPSRC (EP/R035342/1), IPES CDT, iCASE and Microsoft Research.}\end{small}
\end{center}
\vspace{-1em}
\section{Introduction}\label{sec:intro}
There has been an emerging trend in representation learning that learns to disentangle, from an image, latent codes accounting for the various dimensions of the input, e.g., illumination, pose, and attributes~\cite{DBLP:BaoCWLH17,DBLP:ShuYHSSS17,disentangled:cvpr17}.
Yet one of the preliminary forms of this problem -- to decompose an image into its intrinsic \emph{albedo} and \emph{shading} -- has drawn less attention. Solutions to the intrinsic image decomposition problem would enable material editing, provide cues for depth estimation, and provide a computational explanation to the long-standing lightness constancy problem in perception. However, even with exciting progress (e.g.~\cite{Chen:iccv13,DBLP:KimPSL16}), this problem remains a challenging task that calls for continuing effort.
Part of the difficulty arises from the under-determinedness of this problem. Based on prior knowledge of albedo and shading, the Retinex algorithm constrains the decomposition into a thresholding problem in the gradient domain. This model is practical, but would fail to handle complex material or geometry that has sharp edges or casts shadows under strong point sources. Another part of the difficulty lies in the complexity of the forward image generation process -- a process that transforms scene material, geometry and illumination into a 2D image via the dynamics of optical interactions and projection. Intrinsic image decomposition is partly trying to \emph{invert} this process.
\begin{figure}
\centering
\begin{overpic}[width=1.0\linewidth,clip,trim=0 850 1450 0]{illustration2.pdf}\end{overpic}\\
\caption {Given an input image, our \emph{lapPyrNet} jointly produces Laplacian pyramid components that collapse into the target albedo and shading images in high quality. Our network features a multi-channel architecture that treats intrinsic image decomposition as image-to-image transformation in separate frequency bands in the scale space.}\label{fig:illustration}\vspace{-4mm}
\end{figure}
In this work, we treat the intrinsic image decomposition process in an image-to-image transformation framework, using a deep neural network as a function approximator to learn the mapping relations.
While models of similar ideas have been proposed (e.g.~\cite{narihira2015direct,lettry2016darn}),
our model explores the \emph{scale space} of the network input and output, and simplifies the transformation as a whole by horizontally expanding the function approximation pipeline into a parallel set of sub-band transformations.
The contribution of this work is in developing a scale-space decomposition network for intrinsic image generation.
We do this by reusing the classical Gaussian and Laplacian pyramid structure with learnable down/up samplers. The result is a multi-branch network that produces a level-of-detail decomposition of the output albedo and shading; each decomposition component is predicted by one sub-network, and the components are aggregated to match the target (Figure~\ref{fig:illustration}). We propose novel loss functions that respect the distinctive properties of albedo and shading for edge preservation and smoothness, respectively.
We further implement a data augmentation scheme to fight against the scarcity of labeled data -- that is, we take inspiration from \emph{breeder learning}~\cite{nair2008analysis}, and use a preliminarily trained network to generate predictions from unlabeled images, and a \emph{synthesis} procedure to perturb and generate new data with exact ground truth labels for iterative model refinement. This data augmentation scheme is applicable to other network training that learns to invert a generative process.
We have evaluated our model on the MPI-Sintel dataset~\cite{butler2012naturalistic} and the MIT intrinsic image dataset~\cite{Grosse:2009}. Experimental results demonstrate the effectiveness of the proposed model and our network engineering components. Our final model achieves state-of-the-art performance with a significant margin over previous methods in a variety of evaluation metrics.
\section{Related work}\label{sec:related}
\noindent
{\bf Intrinsic images: }
A series of solutions have been seen since Barrow and Tenenbaum first proposed this problem~\cite{tenenbaum2000global}, for example, the Retinex method~\cite{land1971lightness,grosse09intrinsic}, learning based method using local texture and color cues~\cite{Tappen:2006:pami}, and joint optimization using data-driven priors~\cite{Barron:2012A}. With the advent of deep neural networks, solution to this problem has shifted to a pure data-driven, end-to-end training with various forms of feed forward convolutional neural networks. Direct Intrinsics~\cite{narihira2015direct} is a successful early example of this type, using a (back then seemingly bold) multi-layer CNN architecture to transform an image directly into shading and albedo. Successive models include the work of Kim et al.~\cite{DBLP:KimPSL16} that predicts depth and the other intrinsic components together with a joint convolutional network that has shared intermediate layers and a joint CRF loss, and the DARN~\cite{lettry2016darn} network that incorporates a discriminator network and the adversarial training scheme to enhance the performance of a ``generator'' network that produces the decomposition results.
\noindent
{\bf Image scale space and pyramid structures:}
The investigation of image scale space has as long a history in vision as that of intrinsic image decomposition.
The studies of Koenderink~\cite{koenderink1984} in the 1980s reveal a diffusion process that ``explicitly defines the deep structure of (an) image'', relating to the DOG structure revealed in even earlier studies~\cite{Marr1980}. Around the same time, Burt and Adelson proposed the Laplacian pyramid structure that decomposes an image into a hierarchical Level-Of-Detail (LOD) representation using successive Gaussian filtering and the DOG operator~\cite{BurtAdelson:1983}. Scale space decomposition also widely exists in other fields of study, such as 3D graphics (e.g.~\cite{Guskov:1999}) and numerical computing (e.g.~\cite{multigrid:wesseling04}).
Deep convolutional networks provide a natural hierarchical feature pyramid for multi-scale information processing.
The feature pyramid network (FPN) makes predictions from multi-level feature maps for object detection with top-down communication~\cite{lin2016feature}.
Pinheiro et al.~\cite{pinheiro2015learning} propose a two-way hierarchical feature aggregation network for object segmentation.
The work of Ghiasi et al.~\cite{ghiasi2016laplacian} produces segmentation score maps with spatial-semantic trade-offs from different network layers, and aggregates them into a final segmentation map by pyramid collapsing.
The work of Lai et al.~\cite{lai2017deep} utilizes a similarly deeply stacked network and feature maps to generate image detail map of multi-scales for image super-resolution. Notably, all of the above work utilizes hierarchical features from a CNN network for multi-scale processing.
In generative modeling, a Laplacian pyramid inspired GAN network is proposed by Denton et al.~\cite{denton2015deep} that learns generative modules in a Laplacian pyramid structure for image generation.
\noindent
{\bf Image-to-image transformation: }
There is a variety of vision tasks that can be formulated as image-to-image transformation problem. Intrinsic image decomposition is one such example. Isola et al.~\cite{isola2016image} recently introduced an image-to-image translation network for several other tasks, including image colorization, sketch-to-image, and image-to-map generation. In this work, Isola et al. model the image-to-image transformation as a conditional generative process and use an adversarial loss for network training.
Note that a set of other vision tasks, such as dense pixel labeling (e.g. object segmentation~\cite{IslamCVPR17}), depth estimation from a single image~\cite{XuCVPR17depth}, and the recent label-to-image synthesis network (\cite{ChenK17aa}, also in~\cite{isola2016image}) can also be framed as the image-to-image transformation problem, that is, to map pixels to pixels. Instead of hand engineering the mapping process for each task individually, we engineered a generic, extensible network architecture that is tangential to the work of Isola et al.~\cite{isola2016image} and is distinguished by exploiting scale-space decomposition of the input/output transformation of this problem.
\section{Method}
Let us first consider the transformation of an input image $I$ to an output image $A$ as a complex, highly nonlinear, and pixel-wise nonlocal mapping function $I \rightarrow f(I)$. It has been well demonstrated that deep convolutional neural networks are a general and practical parametrization and optimization framework for a variety of such mapping relations (from image classification to image-to-language translation). Now, let us consider how to adapt the network architecture to the \emph{image-to-image} transformation problem, in which the input and output are both images that have a natural Level-Of-Detail (LOD) pyramid structure, and that the mapping function linking the input to the output may also have a multi-channel decomposition based on the pyramid hierarchy.
In the next section (\ref{sec:reformation}) we are going to describe our model reformation process from a ResNet architecture that exploits this property to our final multi-channel hierarchical network architecture.
We write the Gaussian pyramid of an image $I$ as $[I_0, I_1,..., I_K]$, where $I_0 = I$ and $K$ is the total number of layers. We denote the $k$-th Laplacian pyramid layer by $\mathcal{L}_{k}(I) = I_k - u(I_{k+1})$ where $u$ is the up-sample operator. By definition, the Laplacian pyramid expansion of the image is $I = [\mathcal{L}_0(I), \mathcal{L}_1(I),..., \mathcal{L}_{K-1}(I), I_K]$, where $\mathcal{L}_0(I)$ is the detail layer of the original resolution and $I_K$ is the lowest resolution layer defined in the Gaussian pyramid.
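The expansion and its exact invertibility can be sketched as follows; fixed $2\times$ box-filter samplers stand in for the learnable down/up samplers used in the network:

```python
import numpy as np

def down(img):
    # 2x box-filter down-sampling (a stand-in for the learned down-sampler).
    return 0.25 * (img[::2, ::2] + img[1::2, ::2] + img[::2, 1::2] + img[1::2, 1::2])

def up(img):
    # Nearest-neighbour 2x up-sampling (a stand-in for the learned up-sampler).
    return np.kron(img, np.ones((2, 2)))

def laplacian_pyramid(img, K):
    levels, cur = [], img
    for _ in range(K):
        nxt = down(cur)
        levels.append(cur - up(nxt))   # L_k(I) = I_k - u(I_{k+1})
        cur = nxt
    levels.append(cur)                 # lowest-resolution Gaussian layer I_K
    return levels

def collapse(levels):
    cur = levels[-1]
    for lap in reversed(levels[:-1]):
        cur = up(cur) + lap            # aggregate detail layers back in
    return cur

img = np.random.default_rng(1).random((32, 32))
rec = collapse(laplacian_pyramid(img, 3))   # reconstruction is exact
```

Whatever the choice of samplers, the telescoping construction guarantees that collapsing the pyramid recovers the input exactly, which is what lets each sub-network target one frequency band independently.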
\begin{figure}
\centering
\begin{overpic}[width=1.0\linewidth,clip,trim=15 0 142 0]{reformation.pdf}\end{overpic}\\
\caption {Network architecture reformation (see section~\ref{sec:reformation}).}\label{fig:reformation}
\end{figure}
\subsection{Network Architecture and Reformation}~\label{sec:reformation}
First, let us use a simplified network of two blocks ($L$ and $H$) to model the mapping $I \rightarrow f(I)$: $L$ for the mapping of the low frequency band, and $H$ handles the mapping in the high frequency band and whatever residuals that are left out by $L$. With the skip connection and summation of the output of $L$ to the output of $H$, this network (Figure~\ref{fig:reformation}-a) is an instantiation of the ResNet architecture~\cite{he2016deep}.
Next, by applying Laplacian pyramid expansion on the output, we can split the loss for (a) into two components: the output of $L$ is restrained to fit the low-frequency Gaussian component, and that of $H$ to fit the Laplacian detail component separately (Figure~\ref{fig:reformation}-b). This reformed network is equivalent to (a) but with tighter constraints.
A critical transition is from (b) to (c) -- it turns out to be possible to re-wire the two stacked blocks into two parallel branches, by connecting the output of $L$ to that of $H$ with summation, and adjusting the loss on $H$ accordingly. The resulting network structure (c) is equivalent to (b) -- they represent two equivalent forms of the Laplacian decomposition equation, i.e., moving the residual component from the left-hand side to the right-hand side and changing the sign. The loss of $L$ in (c) remains as a regularizer; our experiments find it optional and, in fact, a barrier to numerical performance.
The network structure (c) is the building block for our final extended model.
The final extended model is illustrated in Figure~\ref{fig:reformation}-d, for which we introduce multiple sub-network blocks $H_0,H_1,...H_{K-1}$ for the high frequency bands and one subnetwork block $L_K$ for the low frequency, in analogy to the Laplacian pyramid decomposition structure: the inputs to the network blocks are down-sampled in cascade, and outputs of the network blocks are up-sampled and aggregated from left to right to form the target output. All of the parameters of the down-sample and up-sample operators (the gray-shaded trapezoids in Figure~\ref{fig:reformation}) are learned in network. All of the network blocks share the same architectural topology, which we refer as ``residual blocks'' and describe in detail in section~\ref{sec:residual_block}.
\iffalse
\noindent
{\bf Discussion:} Note that our final network (d) in a non-obvious derivation from a ResNet counterpart, with a significant distinction that the sub-network blocks are not stacked up in sequence but wired \emph{in parallel}. The fact that this flattened parallel network structure relates to a deeply stacked network with skip connection may be worth of further investigation, for which we have not reached any conclusion. Besides, this parallel structure resembles that of a multi-branch network~\cite{xie2016aggregated}, but here each branch is a rather complex network module (see section~\ref{sec:residual_block}) instead of a few light-weighted convolutional filters as in \cite{xie2016aggregated}. Therefore, our network instantiation can be seen as a mixture of the designing principles of the multi-branch network~\cite{xie2016aggregated}, the Inception network~\cite{szegedy2015going}, and the Network-in-Network architecture~\cite{lin2013network}.
\fi
\subsection{Residual Block}\label{sec:residual_block}
The residual blocks are end-to-end convolutional subnetworks that share the same topology, and transform the input at different scales into the corresponding Laplacian pyramid components. Each residual block consists of 6 sequentially concatenated Conv(3x3)-ELU-Conv(3x3)-ELU sub-structures (Figure~\ref{fig:residualblock} (a-b)). Because we are predicting per-pixel values from an input image, no fully connected layers are used. We adopt the skip connection scheme that is popular in recent research (e.g.~\cite{he2016deep}, \cite{lin2016feature}), including some variant of the DenseNet architecture by Huang et al.~\cite{huang2016densely}. Specifically, in each sub-structure, the output of the last Conv is element-wise accumulated with a skip connection, and the result is the input to the last ELU unit. The intermediate layers have 32 feature channels, and the output is a 3-channel image or residual image. A 1x1 Conv is added to the skip connection path of the first and last layer for dimension expansion/reduction to match the output of the residual path (Figure~\ref{fig:residualblock} (c)).
Instead of ReLU and Batch Normalization, we use Exponential Linear Units (ELU) as our activation function~\cite{Djork2016Fast}, because ELU can generate negative activation values when $x<0$ and has zero-mean activations, both of which improve robustness to noise and convergence in training when our network becomes deeper. Besides, we removed the BN layer because it can be partially replaced by ELU, which is roughly 2x faster and more memory efficient.
\begin{figure}
\centering
\begin{overpic}[width=1.0\linewidth,clip]{block.pdf}\end{overpic}\\
\caption{Illustration of our Residual Block}\label{fig:residualblock}
\end{figure}
\subsection{Loss Function}
The loss function is defined as follows:
\begin{align}
\mathcal{L} = \lambda_d\mathcal{L}_{data} + \lambda_p\mathcal{L}_{percep} + \lambda_t\mathcal{L}_{tv}
\end{align}
which contains a \emph{Data} loss, a feature-based \emph{Perceptual} loss, and a \emph{Total Variation} loss as regularization. The hyper-parameters are empirically set as $\lambda_d = 1.0$, $\lambda_p = 0.5$, and $\lambda_t = 10^{-4}$.
\noindent{\bf Data loss:}
The data loss defines pixel-level similarity between the predicted image and the ground truth. Instead of using the pixel-wise MSE, we employ the following \textit{joint bilateral filtering} (also known as \textit{cross bilateral filtering}~\cite{eisemann2004flash,petschnigg2004digital}) loss, combined with the constraint that the product of the predicted albedo and shading should match the input:\\
\begin{align}
&\mathcal{L}_{data} = \sum\limits_{\mathcal{C} \in \{A,S\}}\frac{1}{N_p} \sum\limits_{p\in \mathcal{C}}\vert\vert{\mathcal{J}_p} - \mathcal{C}_p\vert\vert_2^2 +\vert\vert \widetilde{A}\!*\!\widetilde{S}\!-\!{I}\vert\vert_2^2\\
&\mathcal{J}_p = \frac{1}{\mathcal{W}_p}\sum\limits_{q\in \mathcal{N}(p)}G_{\sigma_s}(\vert\vert p-q\vert\vert)G_{\sigma_r}(\vert \mathcal{C}_p - \mathcal{C}_q \vert)\widetilde{\mathcal{C}}_q\\
&\mathcal{W}_p= \sum\limits_{q\in \mathcal{N}(p)}G_{\sigma_s}(\vert\vert p-q\vert\vert)G_{\sigma_r}(\vert{\mathcal{C}}_p - \mathcal{C}_q \vert)
\end{align}
The cross bilateral filtering loss ensures smoothness of the output albedo and shading, and preserves sharp edges for albedo and strong cast shadows in shading (e.g. Figure~\ref{fig:illustration}). In contrast, the alternative MSE loss tends to produce blurry edges across boundaries in the output, as also seen in \cite{narihira2015direct} and \cite{lettry2016darn} (see Figure~\ref{fig:comparison}). Here $\sigma_s$ = 1.0, $\sigma_r$ uses the adaptive bilateral filtering mechanism, and $G_{\sigma_s}$ and $G_{\sigma_r}$ are the spatial and range Gaussian kernels, both with neighborhood size 5x5.
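The filtered prediction $\mathcal{J}_p$ can be sketched as follows. This is a minimal illustrative NumPy version, not the authors' implementation: the ground-truth channel serves as the range guide, $\sigma_s=1$ and a 5x5 neighborhood match the values stated above, while the fixed $\sigma_r$ (in place of the adaptive mechanism) is an assumption:

```python
import numpy as np

def joint_bilateral(pred, guide, sigma_s=1.0, sigma_r=0.1, radius=2):
    """J_p = (1/W_p) sum_{q in N(p)} G_s(||p-q||) G_r(|C_p - C_q|) pred_q,
    with the ground-truth channel C as the range guide (5x5 neighborhood)."""
    H, W = guide.shape
    J = np.zeros_like(pred, dtype=float)
    for i in range(H):
        for j in range(W):
            w_sum = acc = 0.0
            for di in range(-radius, radius + 1):
                for dj in range(-radius, radius + 1):
                    qi, qj = i + di, j + dj
                    if 0 <= qi < H and 0 <= qj < W:
                        gs = np.exp(-(di * di + dj * dj) / (2 * sigma_s ** 2))
                        gr = np.exp(-(guide[i, j] - guide[qi, qj]) ** 2
                                    / (2 * sigma_r ** 2))
                        w_sum += gs * gr
                        acc += gs * gr * pred[qi, qj]
            J[i, j] = acc / w_sum
    return J

rng = np.random.default_rng(1)
guide = rng.random((8, 8))   # ground-truth channel C (toy 8x8 image)
pred = rng.random((8, 8))    # network prediction C~
data_term = np.mean((joint_bilateral(pred, guide) - guide) ** 2)
```

Because the weights are normalized by $\mathcal{W}_p$, a constant prediction passes through the filter unchanged, which is a convenient sanity check.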
\noindent{\bf Perceptual loss:}
High-level semantic structures should be preserved in the transformation process as well, so a CNN-feature based perceptual loss~\cite{johnson2016perceptual,dosovitskiy2016generating} is used. We make use of the standard VGG-19~\cite{Simonyan2014Very} network to extract semantic information from neuron activations. Our perceptual loss is defined as follows:
\begin{align}
\mathcal{L}_{percep} = \sum\limits_{\mathcal{C} \in \{A,S\}}\sum\limits_{l} \frac{1}{F_lH_lW_l}\vert\vert \Phi_l(\widetilde{\mathcal{C}}) - \Phi_l(\mathcal{C}) \vert\vert_2^2
\end{align}
where $\Phi_l(\mathcal{C})$ denotes the network activations of $\mathcal{C}$ at the $l$-th layer, of size $F_l\times H_l\times W_l$, and $l = relu1\_2, relu2\_2, relu3\_4$ and $relu4\_4$ are the VGG-19 network layers before pooling.
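The per-layer normalization can be sketched framework-agnostically. In the paper the feature maps come from the listed VGG-19 layers; here they are stubbed with random arrays, which is an assumption made purely for illustration:

```python
import numpy as np

def perceptual_loss(feats_pred, feats_gt):
    """Sum over layers l of ||Phi_l(pred) - Phi_l(gt)||_2^2 / (F_l * H_l * W_l)."""
    loss = 0.0
    for fp, fg in zip(feats_pred, feats_gt):
        loss += np.sum((fp - fg) ** 2) / fp.size  # fp.size = F_l * H_l * W_l
    return loss

# Stub features standing in for VGG-19 activations at relu1_2 ... relu4_4.
rng = np.random.default_rng(2)
feats_gt = [rng.standard_normal((8, 16, 16)), rng.standard_normal((16, 8, 8))]
feats_pred = [f + 0.1 * rng.standard_normal(f.shape) for f in feats_gt]
lp = perceptual_loss(feats_pred, feats_gt)
```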
\noindent{\bf Total Variation loss:}
Lastly, we use a total variation term to impose smoothness of the output results.
\begin{align}
\mathcal{L}_{tv} = \sum\limits_{\mathcal{C} \in \{A,S\}}\sum\limits_{i,j}\vert\widetilde{\mathcal{C}}_{i+1,j}-\widetilde{\mathcal{C}}_{i,j}\vert + \vert\widetilde{\mathcal{C}}_{i,j+1}-\widetilde{\mathcal{C}}_{i,j}\vert
\end{align}
where $i$ and $j$ are image row and column indices.
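For a single channel, the anisotropic total variation term above amounts to summing absolute neighbor differences, as in this small sketch (illustrative only):

```python
import numpy as np

def total_variation(img):
    """Anisotropic TV: sum of |C[i+1,j]-C[i,j]| + |C[i,j+1]-C[i,j]|."""
    dv = np.abs(img[1:, :] - img[:-1, :]).sum()  # vertical differences
    dh = np.abs(img[:, 1:] - img[:, :-1]).sum()  # horizontal differences
    return float(dv + dh)
```

A constant image has zero TV, while a single vertical step of height 1 across two rows contributes 2, matching the definition term by term.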
Our final model is trained with the above loss on the output of $H_0$ combined with all outputs from lower level branches (Figure~\ref{fig:reformation}-(d)). This constrains all network channels simultaneously, and gradients can back-propagate and dispatch more flexibly. Another training scheme, as mentioned in section~\ref{sec:reformation}, is to train the network from left to right in an \emph{incremental} manner ($L_{K}$, $H_{K-1}$, $H_{K-2}$, ...), each time with the loss defined on the corresponding Gaussian pyramid level, e.g. $loss(A_K, \widetilde{A}_{K})$, $loss(A_{K-1}, \widetilde{A}_{K-1})$, ..., $loss(A_{0}, \widetilde{A}_{0})$ for the albedo network. This incremental training constrains the network to output a near-perfect Gaussian pyramid, so that each sub-network $H_{i}, i=K-1,...,0$ outputs the expected Laplacian detail layer. Figure~\ref{fig:illustration} shows intermediate outputs of the network trained in this scheme for illustration. Unless stated otherwise, the quantitative results are obtained using the simultaneous training scheme.
\subsection{Self-Augmented Training}
In this section, we describe a data augmentation strategy that incorporates unlabeled images to self-augment our network training process. We draw inspiration from the work of \textit{breeder learning}~\cite{nair2008analysis}. The idea is to employ a forward generative model to generate new training pairs by perturbing the parameters produced by the model to be augmented. This mechanism bears the spirit of bootstrapping to some extent and turns out to be quite effective. For example, Li et al.~\cite{li2017modeling} recently applied this strategy to an appearance modeling network by generating training images from the model's predicted reflectance of unlabeled images.
We start with a preliminary network trained with a moderately sized dataset that has ground-truth albedo and shading. We then apply the network to a set of new images and obtain the estimated albedo $\tilde{A}$ and shading $\tilde{S}$. With a straightforward synthesis procedure, we can generate a new image from the estimations. Note that by our loss definitions, $\tilde{A}$ and $\tilde{S}$ are not hard constrained to exactly match the input image (as in \cite{lettry2016darn}), so the new synthesized images will deviate from the original ones.
To introduce further perturbation in the augmented dataset, we additionally apply an \textit{Adaptive Manifold Filtering} (AMF, ~\cite{Gastal2012Adaptive}) operation to $\tilde{A}$ and $\tilde{S}$ and use the filtered results to synthesize new data (see Figure~\ref{fig:refine}). The AMF filtering operator suppresses noise or unwanted details in $\tilde{A}$ and $\tilde{S}$ that may come from the input images or produced by the premature network, and serves to ``regularize'' the manifold of the new synthesized images and their ground-truth label space so that the network is not misled to overfit capricious details in the self-augmented training process.
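The synthesis loop above can be sketched as follows. This is an illustrative stand-in, not the authors' pipeline: the box blur replaces the AMF operator, and the dummy predictor replaces the premature network, both assumptions for the example:

```python
import numpy as np

def box_blur(x, r=1):
    """Stand-in for Adaptive Manifold Filtering: a simple edge-padded box blur."""
    p = np.pad(x, r, mode='edge')
    out = np.empty_like(x, dtype=float)
    H, W = x.shape
    for i in range(H):
        for j in range(W):
            out[i, j] = p[i:i + 2 * r + 1, j:j + 2 * r + 1].mean()
    return out

def self_augment(unlabeled, predict):
    """Run the premature model on unlabeled images, regularize its albedo and
    shading estimates, and synthesize new (input, albedo, shading) triples."""
    pairs = []
    for I in unlabeled:
        A, S = predict(I)                    # premature network's estimates
        A_f, S_f = box_blur(A), box_blur(S)  # suppress capricious details
        pairs.append((A_f * S_f, A_f, S_f))  # synthesized input obeys A*S = I
    return pairs

# Dummy predictor in place of the trained network (assumed for illustration).
predict = lambda I: (np.sqrt(I), np.sqrt(I))
rng = np.random.default_rng(3)
pairs = self_augment([rng.random((6, 6)) for _ in range(2)], predict)
```

By construction, every synthesized pair satisfies the $\mathcal{A}\times\mathcal{S}=\mathcal{I}$ constraint exactly, even when the original predictions did not.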
\begin{figure}
\centering
\begin{overpic}[width=1.0\linewidth,clip]{refine.pdf}\end{overpic}\\
\caption {Our data augmentation process uses a preliminarily trained model to produce estimations for unlabeled data, and uses the estimation results to \emph{synthesize} new data for self-augmented training.}\label{fig:refine}\vspace{-5mm}
\end{figure}
\section{Evaluation}
In this section we describe the evaluation of our model on the MPI-Sintel dataset and the MIT Intrinsic Images dataset, and show results in Table~\ref{tab:sintel_image_split}-\ref{tab:mit_scenes} and Figure~\ref{fig:comparison_MIT}-\ref{fig:comparison}.
\subsection{Experiment Setup}
\noindent
{\bf Dataset and Metrics}
The MPI-Sintel dataset~\cite{butler2012naturalistic} is composed of 18 scene-level computer generated image sequences, 17 of which contain 50 images of the scene and one contains 40 images. We follow~\cite{narihira2015direct,lettry2016darn} and use the $ResynthSintel$ version in our experiment because the data satisfies the $\mathcal{A} \times \mathcal{S} = \mathcal{I}$ constraint. Two types of train/test split (\textit{scene split} and \textit{image split}) are used for head-to-head comparison with previous work. The \textit{scene split} splits the dataset at scene level, taking half of the scenes for training and the remaining scenes for testing. The \textit{image split} randomly picks half of the images for training/testing without considering their scene category. The original version of the MIT Intrinsic dataset~\cite{Grosse:2009} has 20 object-level images taken in a laboratory environment, each with 11 different lighting conditions. We use the same strategy as \cite{BarronTPAMI2015} to split the data for direct comparison.
\noindent Evaluations are based on the following metrics:\vspace{-2mm}
\begin{description}
\item[si-MSE] scale-invariant mean squared error (si-MSE) defines the pixel-wise MSE up to a free scaling factor (see \cite{BarronTPAMI2015}).
\item[si-LMSE] scale-invariant local mean squared error (si-LMSE) measures the averaged si-MSE over local window patches as the window slides over the image with a stride. The window size is usually set to 10\% of the image size along the larger dimension, and the stride is half of the window size:
\[\begin{aligned}
& \text{si-LMSE}(\mathcal{C}_{gt},\widetilde{\mathcal{C}}) =\frac{1}{N_{\mathcal{W}}} \sum_{\omega\in\mathcal{W}}\text{si-MSE}(\mathcal{C}_{gt}^{\omega},\widetilde{\mathcal{C}}^{\omega})
\end{aligned}\]
\item[LMSE] The LMSE measure is the ``normalized'' si-LMSE measure on albedo and shading together. We use this metric on the MIT Intrinsic Images dataset. Local window size for si-LMSE is set to 20 (as in \cite{grosse09intrinsic}):
\[
\begin{aligned}
& \text{LMSE} = \frac{1}{2}\frac{\text{si-LMSE}(\mathcal{S}_{gt},\widetilde{\mathcal{S}})}{\text{si-LMSE}(\mathcal{S}_{gt},0)} + \frac{1}{2}\frac{\text{si-LMSE}(\mathcal{A}_{gt},\widetilde{\mathcal{A}})}{\text{si-LMSE}(\mathcal{A}_{gt},0)}
\end{aligned}
\]
\item[DSSIM] Structural dissimilarity is quantified by the \textit{structural dissimilarity index measure}, defined as $\frac{1-\text{SSIM}}{2}$ (see \cite{zhou:ssim04} for the SSIM definition).
\end{description}
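The scale-invariant metrics above admit a closed form, since the optimal global rescaling $\alpha$ minimizing $\|\alpha\widetilde{\mathcal{C}}-\mathcal{C}\|^2$ is $\langle\widetilde{\mathcal{C}},\mathcal{C}\rangle/\|\widetilde{\mathcal{C}}\|^2$. A minimal sketch (illustrative, with an assumed window/stride interface):

```python
import numpy as np

def si_mse(gt, pred):
    """Scale-invariant MSE: MSE after the optimal global rescaling of the
    prediction, alpha = <pred, gt> / <pred, pred> (closed-form minimizer)."""
    alpha = (pred * gt).sum() / max((pred * pred).sum(), 1e-12)
    return float(np.mean((alpha * pred - gt) ** 2))

def si_lmse(gt, pred, win, stride):
    """Averaged si-MSE over sliding local windows."""
    vals = []
    H, W = gt.shape
    for i in range(0, H - win + 1, stride):
        for j in range(0, W - win + 1, stride):
            vals.append(si_mse(gt[i:i + win, j:j + win],
                               pred[i:i + win, j:j + win]))
    return float(np.mean(vals))
```

Both metrics vanish for predictions that differ from the ground truth only by a global scale, which is exactly the invariance they are designed to provide.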
\noindent
{\bf Implementation Details}
We implemented our model in the PyTorch framework with mini-batch size 8.
In training, we obtain the input image by randomly cropping patches of size 256 $\times$ 256 after scaling by a random factor in [0.8, 1.2], and use random horizontal flipping with probability 0.5. We empirically construct 4 levels of pyramids and initialize all the weights with the strategy of~\cite{he2015delving}. We adopt the Adam~\cite{kingma2014adam} optimization method with a learning rate starting at $10^{-4}$ and decreasing to $10^{-6}$. We use 2x the size of the training data as the size of the augmentation data in both experiments.
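The random scale/crop/flip preprocessing can be sketched as below. This is an illustrative NumPy stand-in for the actual PyTorch data pipeline; the nearest-neighbor resampling and single-channel input are assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)

def augment(img, out=256):
    """Random scale in [0.8, 1.2] (nearest-neighbor), random crop to out x out,
    and a horizontal flip with probability 0.5."""
    s = rng.uniform(0.8, 1.2)
    H, W = img.shape
    # Guarantee the scaled image is at least crop-sized.
    sh, sw = max(int(H * s), out), max(int(W * s), out)
    ys = np.arange(sh) * H // sh          # nearest-neighbor index maps
    xs = np.arange(sw) * W // sw
    scaled = img[ys][:, xs]
    i = rng.integers(0, sh - out + 1)
    j = rng.integers(0, sw - out + 1)
    patch = scaled[i:i + out, j:j + out]
    if rng.random() < 0.5:
        patch = patch[:, ::-1]            # horizontal flip
    return patch

patch = augment(rng.random((300, 400)))
```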
\begin{table*}[htb]
\centering
\begin{tabular}{cccccccccc}
\hline
\multirow{2}{*}{Sintel \textit{image split}}&
\multicolumn{3}{c}{si-MSE}&\multicolumn{3}{c}{si-LMSE}&\multicolumn{3}{c}{DSSIM}\cr\cline{2-10}
&A&S&avg&A&S&avg&A&S&avg\cr\hline\hline
Baseline: Constant Shading&5.31&4.88&5.10&3.26&2.84&3.05&21.40&20.60&21.00\cr
Baseline: Constant Albedo&3.69&3.78&3.74&2.40&3.03&2.72&22.80&18.70&20.75\cr
Color Retinex~\cite{grosse09intrinsic} &6.06&7.27&6.67&3.66&4.19&3.93&22.70&24.00&23.35\cr
Lee et al.~\cite{Lee:2012:EII} &4.63&5.07&4.85&2.24&1.92&2.08&19.90&17.70&18.80\cr
Barron \& Malik~\cite{BarronTPAMI2015} &4.20&4.36&4.28&2.98&2.64&2.81&21.00&20.60&20.80\cr
Chen and Koltun~\cite{Chen:iccv13}&3.07&2.77&2.92&1.85&1.90&1.88&19.60&16.50&18.05\cr
Direct Intrinsic~\cite{DirectIntrinsic:2015}&1.00&0.92&0.96&0.83&0.85&0.84&20.14&15.05&17.60\cr
DARN~\cite{DARN16}&1.24&1.28&1.26&0.69&0.70&0.70&12.63&12.13&12.38\cr
Kim et al.~\cite{DBLP:KimPSL16}&0.7&0.9&0.7&0.6&0.7&0.7&9.2&10.1&9.7\cr
Fan et al.~\cite{DBLP:FanWHC17}&0.67&0.60&0.63&0.41&0.42&0.41&10.50&7.83&9.16\cr\hline\hline
Ours Sequential &0.83&0.74&0.79&0.58&0.54&0.56&7.61&7.91&7.76\cr
Ours Hierarchical & 0.81 & 0.78 & 0.79 & 0.58 & 0.58 & 0.58 & 8.18 & 7.16 & 7.62 \cr
Ours w/o Pyramid&0.92&1.37&1.15&0.65&1.15&0.90&8.44&10.96&9.70\cr
Ours w/ MSE loss&0.72&0.62&0.67&0.62&0.46&0.50&7.98&6.37&7.18\cr
Ours w/ `FPN' input &0.73&0.60&0.67&0.49&0.43&0.46&6.84&6.76&6.80\cr\hline
Ours Final*&0.66&0.60&0.63&0.44&0.42&0.43&6.56&6.37&6.47\cr
Ours Final+DA&\bf{0.61}&\bf{0.57}&\bf{0.59}&\bf{0.41}&\bf{0.39}&\bf{0.40}&\bf{5.86}&\bf{5.97}&\bf{5.92}\cr\hline
\end{tabular}
\caption{Quantitative Evaluation ($\times$100) on the MPI-Sintel \textit{image split}}\label{tab:sintel_image_split}
\end{table*}
\begin{table*}[htb]
\centering
\begin{tabular}{cccccccccc}
\hline
\multirow{2}{*}{Sintel \textit{scene split}}&
\multicolumn{3}{c}{si-MSE}&\multicolumn{3}{c}{si-LMSE}&\multicolumn{3}{c}{DSSIM}\cr\cline{2-10}
&A&S&avg&A&S&avg&A&S&avg\cr\hline\hline
Direct Intrinsic~\cite{DirectIntrinsic:2015}&2.01&2.24&2.13&1.31&1.48&1.39&20.73&15.94&18.33\cr
DARN~\cite{DARN16}&1.77&1.84&1.81&0.98&0.95&0.97&14.21&14.05&14.13\cr
Fan et al.~\cite{DBLP:FanWHC17}&1.81&1.75&1.78&1.22&1.18&1.20&16.74&13.82&15.28\cr\hline\hline
Ours Sequential & 1.61 & 1.56 & 1.58 & 1.05 & 1.11 & 1.08 & 10.24 & 11.90 & 11.07 \cr
Ours Hierarchical&1.59&1.51&1.55&0.98&1.01&0.99&8.70&9.55&9.13\cr
Ours w/o Pyramid&1.82&2.01&1.92&1.01&1.39&1.20&14.43&14.27&14.35\cr
Ours w/ MSE loss &1.47&1.44&1.46&0.92&0.95&0.93&9.48&10.97&10.23\cr
Ours w/ `FPN' input &1.46&1.40&1.43&0.96&0.97&0.97&8.50&9.30&8.90\cr\hline
Ours Final* &1.38&1.38&1.38&0.92&0.93&0.92&8.46&9.26&8.86\cr
Ours Final+DA &\bf{1.33}&\bf{1.36}&\bf{1.35}&\bf{0.82}&\bf{0.89}&\bf{0.85}&\bf{7.70}&\bf{8.66}&\bf{8.18}\cr\hline
\end{tabular}
\caption{Quantitative Evaluation ($\times$100) on the MPI-Sintel \textit{scene split}}\label{tab:sintel_scene_split}
\end{table*}
\iffalse
\begin{figure}
\centering
\includegraphics[width=\linewidth]{hierarchical_optimization.png}
\caption{hierarchical optimization in each level}\label{fig:hierarchical}
\end{figure}
\fi
\subsection{Evaluation on MPI-Sintel Dataset}
The evaluation results on the MPI-Sintel dataset are shown in Table~\ref{tab:sintel_image_split}-\ref{tab:sintel_scene_split} and Figure~\ref{fig:comparison}. Again, our model produces favorable results over previous methods, especially in the \emph{scene split}, where the network is less prone to ``overfitting'' the test data.
\noindent
{\bf Comparison with Previous Work:}
We first compare our model with a series of previous methods, including the two naive baselines \emph{Constant Shading} and \emph{Constant Albedo}, a few traditional methods (\cite{grosse09intrinsic,Lee:2012:EII,Chen:iccv13,BarronTPAMI2015}), and recent neural-network-based models (\cite{DirectIntrinsic:2015,DARN16,DBLP:KimPSL16,DBLP:FanWHC17}). The results show that our models with and without data augmentation both yield new state-of-the-art performance across all three metrics.
We do want to point out that the quantitative results of all methods (including ours) on the Sintel \emph{image split} may be misleading to some extent. This is because the image sequences of the same scene category in the Sintel dataset are very similar to each other, so by splitting the data at image level (images of the same scene type may appear in both train and test sets), a network over-fit on the training set will still appear to ``perform'' well on the test set. The \emph{scene split} does not have this problem. An interesting observation is that the margin of our results over previous ones is larger in the \emph{scene split} (Table~\ref{tab:sintel_scene_split}) than in the \emph{image split} (Table~\ref{tab:sintel_image_split}). Even though we hold a fairly moderate margin on the \emph{image split}, the margin on the \emph{scene split} is up to $25\%$ in si-MSE and $43\%$ in DSSIM, showing that our network generalizes significantly better on this more challenging data split.
\noindent
{\bf From Sequential to Parallel Architecture:}
An important network architecture reformation we described in section~\ref{sec:reformation} is from the sequential structure to the multi-branch parallel structure (Figure~\ref{fig:reformation}-(a) to (c)). This reformation flattens a deeply stacked network into a set of parallel channels, thereby alleviating the issues of gradient back-propagation. The row ({\bf Ours Sequential}) displays the result of the sequential architecture (a) in Figure~\ref{fig:reformation}. It shows that this architecture produces performance comparable to previous works, but suboptimal relative to our final model, especially in the DSSIM metric (7.76 and 11.07 versus 6.47 and 8.86).
\noindent
{\bf Hierarchical Optimization \textit{vs} Joint Optimization:}
Another architectural optimization in our work is removing the constraint (loss) at each Laplacian pyramid level (Figure~\ref{fig:reformation}-(c)) and simultaneously training all the network channels with a single loss constraint (Figure~\ref{fig:reformation}-(d)). We call the optimization scheme in the latter case \emph{joint optimization}, and that of the former \emph{hierarchical optimization}. A figure in the supplemental material explains the hierarchical optimization in more detail. Tables~\ref{tab:sintel_image_split}-\ref{tab:sintel_scene_split} show a $10\%-15\%$ improvement by the joint optimization scheme across all metrics.
\noindent
{\bf Self-Comparison on other Factors:}
We also run a set of controlled self-comparisons with respect to other factors, including the \emph{pyramid structure}, \emph{loss function}, \emph{alternative network input}, and \emph{data augmentation}. \\%Qualitative results of self-comparisons are included in the supplemental material.\\
\noindent
{\bf \textit{Pyramid structure:}}
The row ({\bf Ours w/o Pyramid}) displays the result of a single-channel network, i.e., we use a single residual block to produce the output from the input directly, without the multi-band decomposition structure. The results in Table~\ref{tab:sintel_image_split} and Table~\ref{tab:sintel_scene_split} show that our counterpart model with the pyramid structure improves by more than \emph{30\%} over this controlled setting with the feature turned off. Note that the network complexity grows sub-linearly, up to a constant factor, as the number of pyramid layers increases.\\
\noindent
{\bf \textit{Loss function:}}
The row ({\bf Ours w/ MSE loss}) displays the result of replacing our loss function with the classical MSE loss. The quantitative error with the MSE loss does not degrade by a large factor in the scale-invariant MSE metrics. However, qualitative results in the supplemental material reveal that the MSE loss produces results with blurry edges. The structure-based metric (DSSIM) also shows a clearer margin (from 10.23 to 8.86 in the scene split) between the MSE loss and ours.\\%The numbers of quantitative results ({\bf Ours w/o Elaborate loss}) are close to our model ({\bf Ours Final*}) in Table~\ref{tab:sintel_image_split} and Table~\ref{tab:sintel_scene_split}, while Figures in supplementary materials indicated that our loss function make a great progress in visual quality.\\
\noindent
{\bf \textit{CNN features as input:}}
We further investigate the effect of using Gaussian pyramid image components as the input of our network, as most existing multi-scale deep networks (e.g. \cite{lin2016feature,pinheiro2015learning,ghiasi2016laplacian,lai2017deep}) use multi-scale features produced by a CNN. The row ({\bf Ours w/ `FPN' input}) shows the result of taking CNN features as input, following exactly the FPN network~\cite{lin2016feature}. The comparison shows that our final model holds a slight advantage, while the high-level features of a CNN still preserve much of the semantic information necessary for our \emph{pixel-to-pixel} transformation network.\\
\noindent
{\bf \textit{Data augmentation:}}
The last row in Tables~\ref{tab:sintel_image_split}-\ref{tab:sintel_scene_split} shows the effect of our data augmentation. We obtain a set of cartoon clips crawled from the Web that share similar properties with the MPI dataset (see an example in Figure~\ref{fig:refine}). The size of the augmentation data is set to 2 times the size of the labeled training data. Further increasing the augmentation data size did not produce significant improvement in our experiments.
\subsection{Evaluation on MIT Intrinsic Images Dataset}
We also evaluated the performance of our model against a set of previous methods
on the {MIT Intrinsic Images dataset}. The results are shown in Table~\ref{tab:mit_scenes} and Figure~\ref{fig:comparison_MIT}.
In this set of experiments, we conducted data augmentation in two different setups: {\bf Ours + DA} and {\bf Ours + $\text{DA}^{+}$}. The difference lies in the \emph{data} we take for the augmentation. {\bf Ours + DA} is the ordinary setting, where the augmenting data is searched from the web using a set of object category names similar to those the dataset provides. In {\bf Ours + $\text{DA}^{+}$}, instead, we generate the augmenting dataset from the same set of objects (depth and reflectance) of the MIT Intrinsic Images dataset under new illumination conditions (spherical harmonic illuminations from \cite{Barron:2012B} and the rendering method of \cite{Ramamoorthi:2001:SH}). This creates a dataset that highly resembles the original dataset and is practically impossible to acquire in real cases. In other words, it sets a ceiling for the quality of augmentation data. The results in Table~\ref{tab:mit_scenes} show that both augmentation setups are effective, and the latter gives a clue to the limit we can reach with the data augmentation scheme we introduced for this task.
\begin{figure}
\centering
\includegraphics[width=\linewidth,clip,trim=0 180 1200 0]{MIT_Comparison2.pdf}
\caption{Qualitative results on the MIT Intrinsic dataset examples. Top three rows are albedo; the bottom three rows are shading.}\label{fig:comparison_MIT}\vspace{0mm}
\end{figure}
\begin{table}[t]
\resizebox{\linewidth}{!}{
\begin{tabular}{ccccc}
\hline
\multirow{2}{*}{\bf{MIT Intrinsic Data}}&
\multicolumn{3}{c}{si-MSE}&\multicolumn{1}{c}{LMSE}\cr\cline{2-5}
&Albedo&Shading&Average&Total\cr\hline\hline
Zhou \etal~\cite{zhou2015learning}&0.0252&0.0229&0.0240&0.0319\cr
Barron \etal~\cite{BarronTPAMI2015}&0.0064&0.0098&0.0081&0.0125\cr
Shi \etal~\cite{shi2016learning}&0.0216&0.0135&0.0175&0.0271\cr
Direct Intrinsic~\cite{DirectIntrinsic:2015}&0.0207&0.0124&0.0165&0.0239\cr
Fan \etal~\cite{DBLP:FanWHC17}&0.0127&0.0085&0.0106&0.0200\cr\hline
Ours*&0.0089&0.0073&0.0081&0.0141\cr
Ours + DA &0.0085&0.0064&0.0075&0.0133\cr\hline
Ours + $\text{DA}^{+}$ &0.0074&0.0061&0.0068&0.0121\cr\hline
\end{tabular}
}
\caption{Evaluation on the {\bf MIT Intrinsic Images dataset}. }\label{tab:mit_scenes}
\end{table}
\begin{figure*}
\centering
\vspace{-8mm}
\includegraphics[width=0.82\textwidth,clip,trim=0 50 0 25]{Comparison.pdf}
\caption{Qualitative results on four examples of the MPI-Sintel benchmark dataset and comparison to previous methods (results are excerpted from the respective papers at limited resolution). Notice that our decomposition results exhibit good edge-preserving properties and are visually close to the ground truth. }\label{fig:comparison}
\end{figure*}
\section{Conclusion}
We have introduced a Laplacian pyramid inspired neural network architecture for intrinsic image decomposition. The network models the problem as an image-to-image transformation and expands the input and output in their scale space. We have conducted experiments on the MPI-Sintel and MIT datasets and produced state-of-the-art quantitative results and good qualitative results. For future work, we expect the proposed network architecture to be tested and refined on other image-to-image transformation problems, e.g., pixel labeling or depth regression.
\vspace{3mm}
\noindent{\bf Acknowledgment}
We thank all the anonymous reviewers. This work is supported in part by National Key R$\&$D Program of China (No. 2017YFB1002703), by NSFC (No. 61602406), by ZJNSF (No. Q15F020006), and by a special fund from the Alibaba - ZJU Joint Institute of Frontier Technologies.
{\small
\bibliographystyle{ieee}
\section{Introduction}
Ultra-short laser pump-probe techniques in condensed-matter systems have opened the possibility
to generate correlated nonequilibrium phases, such as photo-induced metallic states in Mott insulators
\cite{Iwai03}, and to study their dynamics on femtosecond timescales.
On a fundamental level, seeing how correlations evolve {\em in time} can shed new light on many-body
effects which have been investigated for decades under equilibrium conditions. A paradigm example is the
formation of polaronic quasiparticles, i.e., the self-trapping of an electron in a lattice distortion, or ``phonon cloud''.
This phenomenon was predicted in the early days of quantum mechanics \cite{Landau1933} and has been
thoroughly investigated for a large class of systems \cite{Devreese2009, AlexandrovBook, FehskeBook},
more recently also for ultra-cold gases and trapped ions \cite{Bruderer2007, Stojanovic2012}.
In nonequilibrium, however, many questions related to the dynamics of systems with strong
electron-phonon coupling remain only partially understood.
Signatures of strong electron-phonon coupling and polaronic effects in photo-excited systems have been found
for the self-trapping of excitons \cite{Tomimoto1998, Dexheimer2000, Sugita2001, Morissey2010}, in Mott
insulators \cite{Dean2011,Novelli2014}, and organic materials \cite{Morrissey2013, kaiser2014optical, Mitrano2014}.
A direct observation of the self-localization process was achieved by two-photon photoemission for electrons in
surface states which couple to adsorbate layers \cite{Ge1998, Ge2000, Miller2002, Gahl2002}.
While polaronic effects can be visible already within one phonon period after photo-excitation, it is not entirely clear
how, and how fast, the actual polaron ground state can be reached.
The presence of non-equilibrated polarons, on the other hand, would determine
carrier mobilities in transient metallic states and can thus be of importance also for potential technological applications
like ultra-fast switches. It is therefore important to pinpoint
signatures of excited polarons, to understand
their properties, and whether these can be controlled, e.g., by the photo-excitation process.
These questions have motivated considerable effort to understand the nonequilibrium polaron dynamics
from a theoretical perspective. A large body of work has been performed for the Holstein model \cite{Holstein1959a},
which describes tight-binding electrons coupled to an optical phonon with frequency $\omega_0$. The physical
picture for the polaron formation process which has emerged from these studies
suggests
two important bottlenecks: For
large $\omega_0$, one finds long-lived beating between well-separated polaron sub-bands in the many-body
spectrum \cite{Ku2007, Fehske2011}, while in the opposite and experimentally very relevant adiabatic regime,
in which $\omega_0$ is smaller than the electron hopping, a semiclassical argument \cite{Emin1976, Kabanov1993}
predicts an energy barrier between delocalized and localized states. In the present work we solve the model exactly
in the large coordination limit to see how relaxation of high energy electrons by emission of phonons, strongly damped
coherent oscillations, long-lived delocalized states, and trapping in excited polaron states come together in
particular in the adiabatic limit
and how they are
reflected in characteristic signatures of the photoemission spectrum.
Even for a single electron (the relevant limit to describe diluted polarons), the Holstein model is difficult to solve
in nonequilibrium, because established approaches like Quantum Monte Carlo \cite{Prokofev1998}
cannot be used. Variants of exact diagonalization \cite{Ku2007, Fehske2011, DeFilippis2012, DeFilippis2012b,
Vidmar2011, Matsueda2012, Golez2012}
provide an accurate and versatile description of the electron-phonon dynamics in many regimes, but they
rely on a cutoff of the phonon Hilbert space and become challenging in the adiabatic regime in which the
phonon cloud involves a large number of oscillator quanta. Our work is based
on nonequilibrium dynamical mean-field theory (DMFT) \cite{Aoki2014}, which is exact in the limit of
large lattice coordination numbers \cite{Metzner1989}.
In DMFT, a lattice model is mapped onto a single impurity coupled to a self-consistent
bath. While the real-time dynamics of this impurity problem can usually be solved only approximately (see, e.g.,
Ref.~\cite{Werner2013phonon} for the Holstein model), the limit of low electron density in the Holstein model
provides a remarkable exception. In equilibrium, the DMFT equations for this case can be written exactly in terms
of a continued fraction for the electron Green's function \cite{Ciuchi1997}. Technically, this solution is similar to earlier
diagrammatic approaches \cite{Sumi1974, Cini1988}, and also to the momentum averaged technique
\cite{Goodvin2006, Berciu2006}, which have provided a solution throughout all regimes of the single-electron
Holstein model in equilibrium (adiabatic and non-adiabatic, weak and strong coupling). These diagrammatic techniques
rely on a momentum-independent self-energy which is approximate in finite dimensions, but shows good agreement
with Monte Carlo particularly in the strong-coupling regime \cite{Goodvin2011}. Here we generalize the exact DMFT
solution of Ref.~\cite{Ciuchi1997} to the case of nonequilibrium DMFT.
\section{Model and Methods}
The Holstein model \cite{Holstein1959a} is defined by the Hamiltonian
\begin{align}
\label{eq:Hol1}
& H
=
-J\sum_{\langle ij \rangle}(c_{i}^{\dagger}c_{j}+ h.c.)
+\sum_i H_\text{loc}^{(i)},
\\
\label{eq:Hol2}
&H_\text{loc}^{(i)}
=
\omega_{0} b_{i}^{\dagger}b_{i}
+g n_{i}(b_{i}+b_{i}^{\dagger}) + \epsilon_f n_i.
\end{align}
The first term in Eq.~\eqref{eq:Hol1} describes tight-binding electrons with nearest-neighbor hopping $J$ on a lattice;
$c_{i}^{\dagger}$ and $c_{i}$ are the creation and annihilation operator of an electron on lattice site $i$,
respectively. The local part \eqref{eq:Hol2} of the Hamiltonian represents one harmonic oscillator
with frequency $\omega_0$ at each lattice site, i.e., a dispersion-less optical phonon mode,
whose coordinate $X_i=(b_i^\dagger+b_i)/\sqrt{2}$ is linearly coupled to
the electron occupancy $n_i= c_i^\dagger c_i$; $\epsilon_f$ defines the zero of the energy.
We focus on the dilute limit, where correlations between electrons are negligible,
so that expectation values of observables are proportional to the density $n_{el}=\langle c_i^\dagger c_i\rangle$ and
can be obtained from the solution of the model with only one electron. The hopping $J$ is taken as the unit of energy,
and times are measured in terms of $1/J$. The results are obtained for a Bethe lattice with a semi-elliptic
density of states $D(\epsilon) = \sqrt{4-\epsilon^2}/(2\pi)$.
To get an understanding of polaron formation in the Holstein model,
the limit of isolated lattice sites (atomic limit) is a convenient starting point \cite{MahanBook}.
In this limit, the presence of an electron on the site shifts the equilibrium position of the oscillator:
omitting site-indices for convenience, the local part of the Hamiltonian can be rewritten as
\begin{equation}
\label{hloc-shift}
H_{\mathrm{loc}} = \frac{\omega_{0}}{2}\big[(X+X_0)^2 + P^2\big] + (\epsilon_f-E_P)\,\hat n,
\end{equation}
where $X=(b^\dagger+b)/\sqrt{2}$ and $P=i(b^\dagger-b)/\sqrt{2}$ are coordinate and momentum
of the oscillator, respectively, $X_0 = \sqrt{2}g/\omega_0 \hat n$, and
\begin{align}
E_P=\frac{g^2}{\omega_0},
\end{align}
is the lowering of the ground state energy which defines the bare {\em polaron binding energy}.
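The shift by $E_P$ can be verified numerically by diagonalizing the one-electron local Hamiltonian $\omega_0 b^\dagger b + g(b+b^\dagger) + \epsilon_f$ in a truncated boson basis; the parameter values below are arbitrary illustrative choices, not those used in the paper:

```python
import numpy as np

# Atomic-limit check: the one-electron ground-state energy of
# H_loc = w0 * b^dag b + g * (b + b^dag) + eps_f
# should equal eps_f - E_P, with E_P = g^2 / w0.
w0, g, eps_f = 0.5, 1.0, 0.0   # illustrative parameters (g/w0 = 2)
nmax = 60                      # boson cutoff; ample for displacement g/w0 = 2
b = np.diag(np.sqrt(np.arange(1, nmax)), k=1)   # annihilation operator
H_loc = w0 * (b.T @ b) + g * (b + b.T) + eps_f * np.eye(nmax)
E0 = np.linalg.eigvalsh(H_loc)[0]               # lowest eigenvalue
E_P = g ** 2 / w0                               # polaron binding energy
```

The truncation error is negligible here because the displaced ground state is a coherent state with mean phonon number $g^2/\omega_0^2=4$, far below the cutoff.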
In the lattice model, the energy ratio $E_P/J$ distinguishes the regimes of weak-coupling ($E_P\ll J$)
and strong coupling ($E_P \gg J$). For strong coupling, self-localized electron
states at energy $E=-E_P$ at different sites are coupled by the hopping and form a band of delocalized polaronic states;
its bandwidth is reduced with respect to the free bandwidth by the Franck-Condon factor, which takes into account the
coherent motion of the lattice distortion with the electron, i.e., the overlap $|\langle 0|e^{iPX_0}|0\rangle|^2$ between
the ground states $|0\rangle$ and $e^{iPX_0}|0\rangle$ of the oscillator and the displaced oscillator \eqref{hloc-shift} respectively.
A second important scale for the Holstein model is the ratio $\alpha=\omega_{0}/J$, which distinguishes the adiabatic
behavior ($\alpha \lesssim 1$), in which the phonon is slow compared to the electron, from the non-adiabatic behavior
($\alpha \gtrsim 1$). In the adiabatic strong-coupling regime, the number of oscillator quanta in the phonon cloud
proliferates (in the atomic limit, $\langle b^\dagger b \rangle=g^2/\omega_0^2=E_P/\omega_0$), which makes the
dynamics in this regime qualitatively distinct from the non-adiabatic regime.
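The atomic-limit statements above are easy to verify numerically. The sketch below (Python with numpy, a minimal illustration rather than part of the actual calculation) diagonalizes the occupied-site Hamiltonian $\omega_0 b^\dagger b + g(b+b^\dagger)$ in a truncated Fock space and recovers the ground-state energy $-E_P$ and the mean phonon number $g^2/\omega_0^2$ of the polaron cloud:

```python
import numpy as np

# Exact diagonalization of the occupied site, H = w0 * b^dag b + g * (b + b^dag),
# in a truncated Fock space of N levels. Parameters as in the non-adiabatic
# examples below; the truncation N = 80 is an illustrative choice.
w0, g, N = 1.0, 1.5, 80
b = np.diag(np.sqrt(np.arange(1.0, N)), k=1)    # phonon annihilation operator
H = w0 * b.T @ b + g * (b + b.T)
E, V = np.linalg.eigh(H)
psi0 = V[:, 0]                                  # polaron (shifted-oscillator) state
E_P = g**2 / w0                                 # bare polaron binding energy
nph = psi0 @ (b.T @ b) @ psi0                   # mean phonon number in the cloud
# E[0] = -E_P and nph = g^2/w0^2, as stated for the shifted oscillator.
```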
To study polaron formation in time, we start the simulations from an initial state in which electrons and lattice are decoupled, and the mean kinetic energy of the electron is comparable to the free bandwidth, whereas the lattice temperature
$T_\text{latt}$ is low ($T_\text{latt} < J,\omega_0$). This initial state may be taken as a simple model for the situation
immediately after electrons have been promoted into an empty conduction band by photo-excitation from a valence band,
because the process of rapid inter-band excitation leaves the lattice unaffected to a good approximation.
The precise form of the initial electron energy distribution is not important for the subsequent dynamics
as long as it is broad on the
scale of the bandwidth,
and we take it to be a hot electron distribution with electron
temperature $T_\text{el}^*\sim 1-10 \,J$.
To monitor the dynamics of the model we compute the time-resolved photoemission spectrum, which can be obtained
from the electronic Green's function. In the low density limit, the relevant propagators
for adding an electron ($\widetilde G^>$) and removing an electron ($\widetilde G^<$) are given by
\begin{align}
\label{ggtrtilde}
\widetilde G^>(t,t')
&=
\frac{-i}{Z_{0}}\mathrm{Tr}_{N=0}[e^{-\beta H}c_i(t)c_i^\dagger(t')],
\\
\label{glestilde}
\widetilde G^<(t,t')
&=
\frac{i}{Z_{1}n_{el}}\mathrm{Tr}_{N=1}[e^{-\beta H}c_i^\dagger(t')c_i(t)],
\end{align}
where $\text{Tr}_{N=n}$ is the trace over the $n$-electron sector, and $Z_n=\mathrm{Tr}_{N=n}[\,e^{-\beta H}]$.
(Note that we have assumed translational invariance and normalized $\widetilde G^<$ by the electron density $n_{el}$,
so that $\widetilde G^<(t,t)=i$.) The photoemission spectrum as a function of probe time $t$ and energy $\omega$ is obtained
from $\widetilde G^<$ by partial Fourier-transform and convolution with the envelope $S(t)$ of the probe pulse
\cite{FreericksKrishnamurthyPruschke2009},
\begin{equation}
\label{trpes}
I(\omega,t)=
\int
\frac{dt_1dt_2}{2\pi i}
\,S(t_1)S(t_2)\,e^{i\omega(t_1-t_2)} \widetilde G^{<}(t+t_1,t+t_2).
\end{equation}
In equilibrium, $\widetilde G^{<}(t,t')$ is translationally invariant in time, so that $I(\omega)$ is given by the convolution
\begin{align}
\label{eqpes}
I(\omega) = \int d\omega' A^{<}(\omega-\omega') |\tilde S(\omega')|^2,
\end{align}
of the power spectrum
$|\tilde S(\omega)|^2=|\int \!dt\, e^{i\omega t }S(t)|^2/2\pi$
of the probe pulse with the occupied density of states, $A^{<}(\omega) =
(1/2\pi i)\int dt \, e^{i\omega t} \widetilde G^{<}(t,0)$. In addition to the photoemission spectrum, we will compute time-local
observables, i.e., the kinetic energy per site, $E_\text{kin}(t) = -J\sum_{\langle ij\rangle } \langle c_i^\dagger c_j \rangle/Ln_{el}$,
as well as the average number of oscillation quanta in the phonon cloud (i.e., at a site occupied by an electron),
$N_{ph}(t)= \langle n_i b_i^{\dagger}b_i\rangle/n_{el}$ (the expectation values are translationally invariant
and normalized by the electron density).
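For illustration, the windowed Fourier transform of Eq.~\eqref{trpes} can be sketched as follows (Python with numpy). As a toy input, which is an assumption and not the DMFT Green's function, we use a single occupied level at energy $\omega_*$, for which the spectrum must be a non-negative peak at $\omega_*$ broadened by the probe-pulse energy resolution $\sim 1/\delta$:

```python
import numpy as np

# Windowed Fourier transform of Eq. (trpes) with a Gaussian probe envelope.
# Toy input (an assumption, not the DMFT result): a single occupied level at
# w_star, i.e. Gtilde^<(t+t1, t+t2) = i * exp(-i * w_star * (t1 - t2)).
w_star, delta = -1.3, 3.0
t = np.linspace(-15.0, 15.0, 401)               # relative probe times t1, t2
dt = t[1] - t[0]
S = np.exp(-t**2 / (2.0 * delta**2))            # probe envelope S(t)

t1, t2 = np.meshgrid(t, t, indexing="ij")
W = S[:, None] * S[None, :] * 1j * np.exp(-1j * w_star * (t1 - t2))

wgrid = np.linspace(-4.0, 2.0, 241)
I = np.array([(W * np.exp(1j * w * (t1 - t2))).sum() for w in wgrid])
I = (I * dt**2 / (2j * np.pi)).real

# I(w) is non-negative and peaks at w_star, with energy resolution ~ 1/delta.
```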
We compute the dynamics of the Holstein model using the nonequilibrium generalization of DMFT \cite{Aoki2014}.
In the limit of low density, the solution can be made exact, yielding both Green's functions \eqref{ggtrtilde} and
\eqref{glestilde}. In equilibrium \cite{Ciuchi1997}, computing the propagator $\widetilde G^>$ is sufficient, because
$\widetilde G^>$ and $\widetilde G^<$ are related by a fluctuation dissipation relation. For the nonequilibrium case,
we thus have to reformulate the equations of Ref.~\cite{Ciuchi1997} in real-time and provide additional equations
for $\widetilde G^<$ (or equivalently, one set of equations on the Keldysh contour). The resulting equations are
Volterra integral equations whose numerical solution is controlled by the maximum number $N_\text{max}$ of
phonons on each site;
however, the computational effort increases only linearly with $N_\text{max}$,
so that we can obtain converged
results
with $N_\text{max}=50$ for several tens of hopping times.
To keep the presentation concise, the detailed
formalism is explained in Appendix \ref{ap1}.
\section{Results}
\begin{figure}[tbp]
\centerline{ \includegraphics[width=0.99\columnwidth]{figure01_1.eps}}
\centerline{ \includegraphics[width=0.99\columnwidth]{figure01_2.eps}}
\caption{
Relaxation in the weak-coupling regime. {\bf a)} Time-evolution of the kinetic energy for
three values of the coupling ($T_{\mathrm{latt}}=0.1$, $T_\text{el}^*=10$, $\omega_{0}=1$). The inset
shows the power-law behavior of $dE_{kin}/dt$ for $g=0.4$; the red line is the data, the
dashed black line is a power law $\sim1/t^3$. {\bf b)} Time-evolution of the
average phonon number for the same parameters. The horizontal dashed lines indicate
the corresponding values of $N_{ph}$ in thermal equilibrium at $T=T_\text{latt}$. {\bf c)}
and {\bf d)} Time-resolved
photoemission spectrum $I(\omega,t)$
for $g=0.2$. The spectrum is obtained from Eq.~\eqref{trpes},
using a Gaussian probe pulse $S(t)\propto\exp(-t^{2}/2\delta^{2})$ with duration $\delta=3$.}
\label{fig:Ekin_weak}
\end{figure}
\subsection{Weak coupling regime}
The weak-coupling regime is rather well described by rate equations (see below),
which can capture the cooling of the initial hot electron state by emission of phonons.
Nevertheless it is instructive to
look at the corresponding DMFT solution, to contrast it with the
strong-coupling behavior discussed below. Figure~\ref{fig:Ekin_weak}a
and b show the relaxation of the kinetic energy and the phonon number $N_{ph}$ for various coupling strengths $g$.
After a short transient, the time-evolution of both quantities follows a monotonic relaxation, which becomes faster
with increasing coupling strength. Similarly, the relaxation can be seen in the time-resolved photoemission spectrum
(Fig.~\ref{fig:Ekin_weak}c). At early times, the occupied density of states reflects the initial hot electron state and is smeared
over the full band.
(In the uncorrelated equilibrium state, the occupied density of states is
$A^<(\omega) \propto D(\omega)e^{-\omega/{T_\text{el}^*}}$.)
Subsequently, electrons reduce their kinetic energy by the emission of phonons, and
spectral weight is concentrated closer to the lower band edge.
For weak electron-phonon coupling, relaxation phenomena at long times are captured by
a kinetic equation \cite{Mahan1987}, which is also in agreement with exact diagonalization studies
\cite{Ku2007, Golez2012}. For low lattice temperature ($T_\text{latt}\ll \omega_0$), an electron with
band energy $\epsilon$
can only emit phonons, at a rate
determined by the coupling $g$ and the density of (final)
states,
\begin{equation}
\label{trelax}
\frac{1}{\tau(\epsilon)} = g^2 D(\epsilon-\omega_0).
\end{equation}
This result is obtained from Fermi's golden rule, or equivalently,
the imaginary part of the equilibrium self-energy $\text{Im} \Sigma(\epsilon+i0)$.
The $g^2$-dependence of the relaxation time is indeed confirmed by the DMFT results when one fits
the time-dependence of the photoemission spectrum $I(\omega,t)$ in a certain energy window with a simple
exponential function $A \exp(-t/\tau) +C$ (this will be analyzed further below, see the curve $1/\tau$ in
Fig.~\ref{fig:dist_strong1}d). Furthermore, from Eq.~\eqref{trelax} one sees that a thermal equilibrium state
can never be reached, because the density of states vanishes if the final energy $\epsilon-\omega_0$ is below
the lower band edge. This phase-space effect can be seen explicitly in our data: At long times,
the time-resolved photoemission spectrum remains shifted with respect to the spectrum of the equilibrium state
at temperature $T=T_\text{latt}$ (see dotted horizontal lines in Fig.~\ref{fig:Ekin_weak}d).
Finally, we note that due to the energy-dependent relaxation time, the long-time asymptotic behavior of averaged
quantities is not necessarily exponential. This can be seen for the kinetic energy: For a density of states
$D(\epsilon) \propto \sqrt{\epsilon-E_0}$ with a van-Hove singularity at the lower band edge $E_0$ (as for a
three-dimensional lattice, or the semi-elliptic density of states used here), the rate Eq.~\eqref{trelax}
implies a power-law long-time asymptotic behavior of $E_\text{kin}$
with $d E_\text{kin}/dt \sim t^{-3}$. (For a
one-dimensional density of states, one would expect an exponential decay \cite{Golez2012}.) This behavior
is
observed
in the numerical data (see Fig.~\ref{fig:Ekin_weak}a, inset), which is a nice confirmation of the
rate equation analysis.
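The rate-equation picture can be made concrete with a toy integration of the emission-only master equation implied by Eq.~\eqref{trelax} (an illustrative sketch, not the DMFT solver; the coupling $g=1$ is chosen only so that the relaxation completes on a short time grid):

```python
import numpy as np

# Emission-only rate equation implied by Eq. (trelax): the density n(eps, t)
# loses weight at rate g^2 * D(eps - w0) and gains it from eps + w0.
# g = 1 is an illustrative choice that makes the relaxation fast.
g, w0, T_el = 1.0, 1.0, 10.0
eps = np.linspace(-2.0, 2.0, 401)
de = eps[1] - eps[0]
shift = int(round(w0 / de))                     # one phonon emission in grid steps

def dos(e):
    return np.sqrt(np.clip(4.0 - e**2, 0.0, None)) / (2.0 * np.pi)

D, D_out = dos(eps), dos(eps - w0)
n = dos(eps) * np.exp(-eps / T_el)              # hot initial distribution
n /= n.sum() * de

dt = 0.05
for _ in range(8000):                           # integrate to t = 400
    gain = np.zeros_like(n)
    gain[:-shift] = g**2 * D[:-shift] * n[shift:]
    n += dt * (gain - g**2 * D_out * n)

# Emission is blocked once eps - w0 falls below the band edge: the weight piles
# up within w0 of the band bottom, and the thermal state is never reached.
weight_bottom = n[eps <= -2.0 + w0 + 1e-9].sum() * de
```

The total weight is conserved by construction, while essentially all of it ends up within $\omega_0$ of the lower band edge, illustrating the phase-space bottleneck discussed above.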
\begin{figure}[tbp]
\centering
\includegraphics[width=\columnwidth]{figure02.eps}
\caption{Relaxation of $E_{kin}$ and $N_{ph}$ at strong and intermediate coupling. {\bf a)} and {\bf c)}
Non-adiabatic regime ($\omega_0=1$), for $E_P=1$ ($g=1$) and $E_P=2.25$ ($g=1.5$),
and $T_\text{latt}=0.2$ and $T_\text{el}^*=10$. {\bf b)} and {\bf d)} Adiabatic regime ($\omega_0=0.2$), for
$E_P=1.25$ ($g=0.5$) and $E_P=1.8$ ($g=0.6$), and initial conditions $T_\text{el}^*=1$ and $T_\text{el}^*=2$.
Horizontal dashed lines indicate expectation values of the respective quantities in equilibrium at $T=T_\text{latt}$.
}
\label{fig:Ekin_strong1}
\end{figure}
\subsection{Strong coupling regime: Overview}
In the remainder of this paper we focus on the intermediate and strong coupling regime, where small polarons
are formed in equilibrium. Figure~\ref{fig:Ekin_strong1} shows the relaxation of $E_{kin}$ and $N_{ph}$ for
couplings $E_P\approx1$ to $E_P\approx2$, and phonon frequencies $\omega_0=0.2$ and $\omega_0=1$
in the adiabatic and non-adiabatic regime, respectively. The sudden coupling of the electron
and phonons leads to coherent oscillations, which are more pronounced
for large $\omega_0$. Furthermore, the absolute value of the kinetic energy becomes smaller with increasing $g$,
indicating a stronger localization of the carriers, and $N_{ph}$ shows a pronounced enhancement of the phonon cloud.
These effects provide a first glance at the crossover from intermediate to strong coupling.
A further analysis of the photoemission spectrum (Fig.~\ref{fig:PES_strong1}) will show that
the observed dynamics results from a mixture of two different relaxation paths, involving either
delocalized or localized states.
\begin{figure}[tbp]
\centering
\includegraphics[width=\columnwidth]{figure03_1.eps}
\includegraphics[width=\columnwidth]{figure03_2.eps}
\caption{Time-resolved photoemission spectrum $I(\omega,t)$ at strong coupling.
{\bf a)} and {\bf b)} Adiabatic regime: $\omega_0=0.2$, $g=0.66$ ($E_P=2.18$), $T_\text{latt}=0.1$, $T_\text{el}^*=10$.
The spectrum is computed from Eq.~\eqref{trpes} with a Gaussian probe pulse $S(t)\propto\exp(-t^2/2\delta ^2)$
and a probe pulse duration $\delta=3$, smaller than the oscillation period $2\pi/\omega_0$. The right panel
{\bf b)} shows the spectrum at selected times, and a comparison to the equilibrium spectrum at $T=T_\text{latt}$
(black dashed line); the energy zero $\epsilon_f$ is fixed such that $\omega=0$ is the lower edge of the
free band. {\bf c)} and {\bf d)} Similar to upper panels, for a comparable value of the polaron binding $E_P$ in
the non-adiabatic regime: $\omega_0=1$, $g=1.5$ ($E_P=2.25$), $T_\text{latt}=0.1$, $T_\text{el}^*=10$. Probe pulse duration $\delta=1$.}
\label{fig:PES_strong1}
\end{figure}
In the adiabatic case, $\omega_0=0.2$ (Figs.~\ref{fig:PES_strong1}a and b), we can distinguish several characteristic
features in the photo\-emission spectrum: (i) A rapid decay of the weight at high energies ($\omega \gtrsim 1$, $t\lesssim 20$),
starting from the broad distribution of the initial hot electron state. (ii) Buildup of spectral weight far below the lower edge
of the free band (around $\omega=-3$) within less than one period $2\pi/\omega_0$, and a beating of weight between
this region and $\omega\approx 0$ at the frequency $\omega_0$. Finally, (iii), even though the oscillations are
damped, the spectrum is still different from the spectrum in the thermal state at temperature $T=T_\text{latt}$ (dashed line in panel b),
and displays two peaks instead of a single polaron band. In contrast to the weak-coupling case, the differences between transient
and equilibrium spectra occur on energy scales considerably larger than $\omega_0$.
Spectra for the non-adiabatic regime ($\omega_0=1$) are shown in Figs.~\ref{fig:PES_strong1}c and d:
Coherent oscillations are reflected in a rigid-like shift of the occupied density of states, and a two-peak
structure of the transient state is not observed.
To develop a physical understanding of these observations, we will perform an analysis in two directions: a comparison
to the spectrum of an isolated site will allow us to single out characteristic spectral signatures of (excited) polaron states
and show how they reflect the structure of the phonon cloud, and a momentum-resolved spectrum will distinguish
contributions from polarons and delocalized electrons.
\begin{figure}[tbp]
\centering
\includegraphics[width=\columnwidth]{figure04_1.eps}
\includegraphics[width=\columnwidth]{figure04_2.eps}
\caption{Photoemission spectrum for the atomic limit. {\bf a)} and {\bf b)}
Time-{\em in}dependent spectra,
assuming the initial polaron is in the ground state ($m=0$, see Eq.~\eqref{alesequiatom}) of the
displaced
oscillator \eqref{hloc-shift},
or in an excited state $m=1,2$ [c.f.~Eq.~\eqref{laguerre}]. Parameters are like in Fig.~\ref{fig:PES_strong1}: $\omega_0=0.2$, $g=0.66$,
probe pulse duration $\delta=3$ for panel {\bf a)} and $\omega_0=1$, $g=1.5$, probe pulse duration $\delta=1$
for panel {\bf b)}. Blue solid line is the spectrum taken from Fig.~\ref{fig:PES_strong1}.
Note that the energy zero $\epsilon_f$ for the spectra in the atomic limit is adapted to account
for the difference between the polaron binding energy in the lattice and at the isolated site.
{\bf c)}
and {\bf d)} Photoemission spectrum after a sudden switch-on of the coupling $g$ [obtained from Eqs.~\eqref{trpes}
and \eqref{gquench}], for the same parameters as {\bf a)} and {\bf b)}, respectively.}
\label{fig:alimit}
\end{figure}
\begin{figure*}[tbp]
\centering
\includegraphics[width=\columnwidth]{figure05_1.eps}
\includegraphics[width=\columnwidth]{figure05_2.eps}
\caption{Phonon number distribution and polaron crossover. {\bf a)} to {\bf c)}
$P_\text{ph}(n)$ at $\omega_{0}=0.2$ for different couplings and times ($T_{\mathrm{latt}}=0.1$, $T_\text{el}^*=10$).
The dashed black line corresponds to the equilibrium state at temperature $T_\text{latt}$.
{\bf d)} The position of the maxima in $P_\text{ph}(n)$ for equilibrium ($n_{eq}$, orange filled circles) and
at time $t=40$
(blue filled circles, see right vertical axis).
Open symbols show the ratio $\Delta \omega/\omega_0$
at the same time, where $\Delta \omega$ is the splitting of the two peaks in the photoemission spectrum.
Dashed lines labelled
$m=0,1,2,3$ show the position of the maximum of the distribution functions of the
displaced oscillator in its $m$th eigenstate [c.f.~Eq.~\eqref{laguerre} with $\gamma=g/\omega_0$; the
maximum with the largest $n$ is shown]. The red curve with square symbols
(left vertical axis)
shows the relaxation time $1/\tau$ of the high-energy part of the photoemission spectrum
(see main text).}
\label{fig:dist_strong1}
\end{figure*}
\subsection{Atomic limit and spectroscopic signatures of excited polarons}
In the atomic limit, the Holstein model can
be solved analytically, both in and out of equilibrium, using a Lang-Firsov
transformation \cite{MahanBook} or its time-dependent generalization \cite{Werner2013phonon}. Details of the solution are
summarized in Appendix \ref{ap2}. In the ground state, the
polaron
corresponds to the displaced oscillator
[Eq.~\eqref{hloc-shift} with $n=1$], and the occupied density of states is given by a set of delta-peaks,
\begin{align}
\label{alesequiatom}
A^<(\omega) = \sum_{n=0}^\infty P(n) \,\delta(\omega + E_P + n\omega_0),
\end{align}
where the weights $P(n)$ are given by the phonon number distribution in the polaron state. This result has
an intuitive understanding: photoemission removes an electron from the bound state at energy $-E_P$
and transfers the oscillator into its excited state $|n\rangle$ with a probability which is
given by the overlap of $|n\rangle$ and the oscillator state $|\psi\rangle$ {\em before} removing the electron,
$|\langle n |\psi\rangle|^2=P(n)$.
At zero temperature, $|\psi\rangle = e^{iX_0 P} | 0\rangle$ is the ground state of the oscillator \eqref{hloc-shift}
with $X_0=\sqrt{2}g/\omega_0$, and $P(n)=e^{-\gamma^2}\gamma^{2n}/n!$ is a Poisson distribution with
mean $\gamma^2 = g^2/\omega_0^2$. The corresponding photoemission spectrum, Eq.~\eqref{eqpes},
already matches the lattice result quite accurately for the parameters of Fig.~\ref{fig:PES_strong1},
as shown by the curves
labelled $m=0$
in Figs.~\ref{fig:alimit}a and b.
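For reference, the $m=0$ atomic-limit spectrum used in this comparison can be generated directly from the Poisson weights (Python sketch; the peaks are placed at $\omega=-(E_P+n\omega_0)$, i.e., with energies measured as in the figures, and the probe enters through the Gaussian power spectrum of Eq.~\eqref{eqpes}):

```python
import numpy as np

# Atomic-limit occupied spectrum of the ground-state polaron (m = 0): Poisson
# weights P(n) on peaks spaced by w0, convolved with the Gaussian probe power
# spectrum |S(w)|^2 ~ exp(-delta^2 w^2). Peaks are placed at w = -(E_P + n*w0).
w0, g, delta = 0.2, 0.66, 3.0
E_P, gam2 = g**2 / w0, (g / w0)**2
nmax = 120
k = np.arange(nmax)
log_fact = np.cumsum(np.log(np.maximum(k, 1)))   # log(k!)
P = np.exp(-gam2 + k * np.log(gam2) - log_fact)  # Poisson weights, mean gam2

w = np.linspace(-14.0, 2.0, 3201)
I = sum(P[n] * np.exp(-delta**2 * (w + E_P + n * w0)**2) for n in range(nmax))
mean_w = np.trapz(w * I, w) / np.trapz(I, w)
# The first moment is -(E_P + gam2*w0) = -2*E_P, independent of the probe width.
```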
It is thus worthwhile to take the isolated site also as a starting point to analyze the peculiar double-peak
spectra of the non-thermal transient state at $\omega_0=0.2$ after the dephasing of the oscillations.
(The dephasing of the oscillations is studied in more detail in Sec.~\ref{sec-oscillations} below.)
At first sight,
one may assume that
a peak in $I(\omega,t)$ which is shifted by several multiples of $\omega_0$
with respect to the ground state polaron implies a highly excited state. We will now argue, however, that
the two-peak structure of the spectrum in the adiabatic case can be taken as the characteristic signature
of a low lying excited polaron state. For this purpose we compute the photoemission spectrum for an
isolated site, assuming that the displaced oscillator is initially in its $m$th excited eigenstate. In this
case Eq.~\eqref{alesequiatom} still holds, with
the phonon excitation energy $n\omega_0$
in the delta function replaced by $(n-m)\omega_0$. The phonon distribution function of the excited state,
$P_{m}(n) \equiv | \langle m |e^{iPX_0}| n\rangle |^2 $, is given by
\begin{align}
\label{laguerre}
P_{m}(n+m)
=
P_0(n) \frac{n! m!}{(n+m)!} L_{m}^{(n)}(\gamma^2)^2,
\end{align}
where $P_0(n) = e^{-\gamma^2} \gamma^{2n}/n!$ is the Poisson distribution of the ground state
($\gamma=g/\omega_0$), and $L_{m}^{(n)}(x)$ is a generalized Laguerre polynomial (see Appendix \ref{ap2}).
In particular, we have $L_{1}^{(n)}(x) = n+1 - x$, i.e., the distribution function $P_1(n)$ is suppressed
at $n\approx\gamma^2$ (close to the maximum of $P_0$), which implies a double peak.
In general the $m$th polynomial has $m$ zeros, reflecting the probability distribution
function of the oscillator coordinate. A comparison of these excited state spectra with the time-dependent
spectra of the lattice model
shows that the splitting of the two peaks in $I(\omega,t)$ (Fig.~\ref{fig:alimit}a) or the width of the distribution
(Fig.~\ref{fig:alimit}b) after the decay of the oscillations
agrees well with a low-lying excited polaron state ($m=0,1,2$) being reached.
The main difference to the lattice result is a strong enhancement of the peak around $\omega=0$
in the adiabatic case, which will be analyzed in Sec.~\ref{sec:delocaloized} below.
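The distributions $P_m(n)$ entering this comparison are straightforward to evaluate. The sketch below (Python with numpy, using the standard three-term recurrence for generalized Laguerre polynomials and the symmetric form of Eq.~\eqref{laguerre} for arguments below $m$) checks their normalization and the suppression of $P_1$ near $n\approx\gamma^2$:

```python
import numpy as np

# Phonon number distribution P_m(N) = |<m| e^{iPX_0} |N>|^2 of the m-th
# displaced-oscillator eigenstate, via Eq. (laguerre) and its symmetric
# counterpart for N < m; L_k^{(a)} from the standard three-term recurrence.
def laguerre(k, a, x):
    if k == 0:
        return 1.0
    lm1, l = 1.0, 1.0 + a - x
    for j in range(1, k):
        lm1, l = l, ((2 * j + 1 + a - x) * l - (j + a) * lm1) / (j + 1)
    return l

def P_m(m, N, gam2):
    lo, hi = min(m, N), max(m, N)
    k = hi - lo
    # weight = exp(-gam2) * gam2^k * lo!/hi! * [L_lo^{(k)}(gam2)]^2, in log space
    logw = -gam2 + k * np.log(gam2) - np.log(np.arange(lo + 1, hi + 1)).sum()
    L = laguerre(lo, k, gam2)
    return np.exp(logw) * L * L

gam2 = (0.66 / 0.2)**2                           # gamma^2 for the Fig. parameters
# P_1(N) is suppressed near N = gam2 (zero of L_1), producing the double peak.
```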
Because in the atomic limit the photoemission spectrum reflects the number distribution function in the phonon cloud,
it is interesting to analyze $P(n)$ directly in the lattice model and see whether a similar relation can be established.
The phonon-number distribution in the lattice, which is defined by the translation-invariant correlation function
\begin{equation}
\label{dist_ph}
P_{\mathrm{ph}}(n,t)=\frac{1}{Ln_{el}}\sum\limits_{i}\langle n_{i}\delta_{b_{i}^{\dagger}b_{i},n}(t)\rangle,
\end{equation}
is plotted in Fig.~\ref{fig:dist_strong1} for various coupling strengths in the adiabatic limit. Initially (at time zero, not shown), the
distribution is a Boltzmann distribution $P_{\mathrm{ph}}(n,0)\propto e^{-n\omega_0/T_\text{latt}}$. In the equilibrium state at coupling $g$
(dashed lines), the formation of a polaron is indicated by a peak at finite $n=n_{eq}$, which approaches the Poisson result
$n_{eq}=g^2/\omega_{0}^2=E_P/\omega_0$ for large $g$, see Fig.~\ref{fig:dist_strong1}d.
The real-time data (solid lines in Fig.~\ref{fig:dist_strong1}a-c) show an initial increase of phonon numbers (phonon states up to $n=50$
must be kept to simulate the dynamics in this regime). For the weaker coupling case (Fig.~\ref{fig:dist_strong1}a), $P_{\mathrm{ph}}(n,t)$
then evolves towards the equilibrium distribution. For couplings beyond a crossover scale $g\approx0.58$ ($g^2=0.336$, $E_P=1.68$),
where the polaron peak forms in equilibrium, a maximum
$n^*$ which is shifted with respect to $n_{eq}$ appears in addition to the zero-centered distribution
(Figs.~\ref{fig:dist_strong1}b,c).
Comparison of $n^*$ with the position of the maximum of the distribution of the excited polaron
states [Eq.~\eqref{laguerre}] for $m=0,2,3$ also confirms the previous finding that the polaron
is transferred into a low-lying excited state.
A similar
characterization of excited polaron states
by their number distribution has also been discussed for an isolated Holstein impurity \cite{Fehske2011}.
The relation \eqref{alesequiatom} in the atomic limit would imply that the separation of the two maxima
$n=n^*$ and $n=0$ in $P_\text{ph}$ is related to the separation $\Delta \omega$ of two peaks in the
photoemission spectrum $I(\omega,t)$ by $\Delta \omega /\omega_0 = n^*$ (up to the energy resolution
of the probe pulse). This relation
indeed holds quite accurately in the lattice, see Fig.~\ref{fig:dist_strong1}d:
the position $n^*$ of the maximum depends on coupling and time,
but at large time ($t=40$) it quite accurately matches the value $\Delta \omega (t)/\omega_0$ (open and filled
blue circles in Fig.~\ref{fig:dist_strong1}d).
Hence the photoemission spectrum is a good measure for the phonon cloud also in the lattice model.
In particular we note that in the adiabatic case excited polarons appear generically for couplings
beyond the crossover scale for polaron formation in equilibrium, and since the splitting $\Delta \omega$ is
of the order of $E_P$ rather than the small scale $\omega_0$, this feature could be taken to monitor
the polaron crossover in experiment.
On the other hand, it is interesting to see that no signature of the crossover is seen in the behavior of
high-energy electrons. For this we integrate the spectrum $I(\omega,t)$ over the high-energy part
($2\le \omega \le 6$ in Fig.~\ref{fig:PES_strong1}a) and fit the result with an exponential function
$A \exp(-t/\tau) +C$. The relaxation rate $1/\tau$ is a smooth and almost linear function of
$g^2$ over the whole crossover regime (see red line in Fig.~\ref{fig:dist_strong1}d).
\begin{figure}[tbp]
\centering
\includegraphics[width=0.99\columnwidth]{figure06.eps}
\caption{Momentum-resolved photoemission spectrum $I(\bm k,\omega,t)$ for two different times
as a function of the electron dispersion $\epsilon_{\bm k}$, in the adiabatic case (same parameters as
Fig.~\ref{fig:PES_strong1}a and b).
Dotted lines show the location of the maximum intensity as a function of $\omega$.
The inset in {\bf a)} shows the adiabatic potential for $g=0.66$ and $\omega_0=0.2$
(see text).
}
\label{fig:erpes}
\end{figure}
\subsection{Disentangling free and bound states}
\label{sec:delocaloized}
We now focus on the marked asymmetry of the two peaks in the transient spectra, Fig.~\ref{fig:PES_strong1}b.
Because the peak at higher energy also roughly coincides with the energy of the lower band edge in the free band,
one may assume that the additional weight of the peak at higher energy is due to a contribution from delocalized
states.
To confirm this picture, we look at the momentum-resolved photoemission spectrum
$I(\bm k,\omega,t)$, to show that the asymmetric contribution is localized in $\bm k$.
$I(\bm k,\omega,t)$ is obtained from Eq.~\eqref{trpes} by replacing the local Green's function with the momentum-resolved Green's
function
$\widetilde G_{\bm k}^<(t,t') = i \text{Tr}_{N=1} [e^{-\beta H}c_{\bm k}^\dagger(t') c_{\bm k}(t) ]/Z_1 n_{el}$.
With a momentum-independent
self-energy, dependence on $\bm k$ appears only via the electron dispersion $\epsilon_{\bm k}$, which extends
from $-2$ to $2$ for the semi-elliptic density of states.
The local spectrum is simply $I(\omega,t)=\int d\epsilon \,D(\epsilon) I(\epsilon,\omega,t)$.
In Fig.~\ref{fig:erpes}, $I(\epsilon_{\bm k},\omega,t)$ is plotted for two different times.
At early times one observes a single maximum $\omega_1(\epsilon_{\bm k})$ in $I(\epsilon_{\bm k},\omega,t)$
for each $\epsilon_{\bm k}$ (see white dotted line in Fig.~\ref{fig:erpes}a). The linear relation
$\omega_1 \sim \epsilon_{\bm k}$ still reflects the behavior of free electrons. At later times, a
flat band with two maxima $\omega_1(\epsilon_{\bm k})$ and $\omega_2(\epsilon_{\bm k})$
appears which reflects the polaron states (white dotted lines in Fig.~\ref{fig:erpes}b). The ratio of the
two maxima, $I(\epsilon,\omega_1(\epsilon),t)/I(\epsilon,\omega_2(\epsilon),t)$, is
however strongly enhanced at $\epsilon=-2$; it is $25.06$, $0.92$, and $0.597$ for
$\epsilon=-2$, $0$, $2$, respectively. This confirms that
the asymmetry of the two peaks in the ${\bm k}$-integrated spectrum $I(\omega,t)$ indeed
comes mainly from the region $\epsilon_{\bm k}=-2$, and thus may be assigned to an
additional contribution from delocalized states, which could not be disentangled from
the upper polaron peak by the energy-resolved spectrum alone.
The presence of metastable delocalized states has long been predicted from semiclassical arguments
\cite{Emin1976, Kabanov1993} from the existence of a potential energy barrier between delocalized
and polaron states in the adiabatic potential $V_{ad}(x)$. In high dimensions \cite{Ciuchi1997}, the
latter is given by the sum of the classical energy cost $\omega_0 x^2/2$ for displacing the oscillator at
one lattice site, and the corresponding lowering of the ground state due to the impurity with
potential $\sqrt{2}gx$. Since the electronic ground state energy
is not lowered as long as the impurity potential is too weak to split off a bound state
below the band edge, there is always an energy cost for creating small distortions,
and thus an energy barrier for bringing the system into a self-trapped state. In infinite dimensions,
$V_{ad}(x)$ can be computed analytically \cite{Ciuchi1997}. At weak coupling, $V_{ad}$ deviates only slightly
from the zero-centered harmonic oscillator.
A second minimum in $V_{ad}$ appears for $E_{P}>1.28\equiv E_{P}^{(1)}$, and
becomes the global minimum for $E_{P}>1.68\equiv E_{P}^{(2)}$, see the inset of Fig.~\ref{fig:erpes}a. Note
that the scale $E_{P}^{(2)}$ is nicely in agreement with the crossover scale $g=0.58$ in Fig.~\ref{fig:dist_strong1},
beyond which we observe the formation of excited polarons. The
global
minimum describes the ground
state properties of the localized state, and the local minimum at $x=0$ corresponds to a delocalized state
in the semiclassical picture.
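This construction of $V_{ad}(x)$ is simple to reproduce numerically. The sketch below (Python with numpy) assumes the standard Bethe-lattice impurity result that a local potential $v=\sqrt{2}gx$ splits off a bound state at energy $v+1/v$ below the band for $v<-1$; measuring energies from the band bottom $-2$, it locates the minima of $V_{ad}$ and reproduces the thresholds $E_P^{(1)}$ and $E_P^{(2)}$ quoted above:

```python
import numpy as np

# Adiabatic potential V_ad(x) in infinite dimensions, measured from the band
# bottom -2: elastic energy w0*x^2/2 plus the impurity ground-state lowering.
# Assumption (standard Bethe-lattice impurity result): a local potential
# v = sqrt(2)*g*x splits off a bound state at energy v + 1/v for v < -1.
def V_ad(x, g, w0):
    v = np.sqrt(2.0) * g * x
    vs = np.where(v < -1.0, v, -1.0)            # avoids division warnings
    lowering = np.where(v < -1.0, vs + 1.0 / vs + 2.0, 0.0)
    return 0.5 * w0 * x**2 + lowering

def minima(g, w0, x):
    V = V_ad(x, g, w0)
    i = np.where((V[1:-1] < V[:-2]) & (V[1:-1] <= V[2:]))[0] + 1
    return x[i], V[i]

w0 = 0.2
x = np.linspace(-8.0, 2.0, 8001)
for E_P in (1.2, 1.5, 1.8):
    xm, Vm = minima(np.sqrt(E_P * w0), w0, x)
    # E_P = 1.2: only the delocalized minimum at x = 0;
    # E_P = 1.5: a metastable localized minimum appears (V_ad > 0);
    # E_P = 1.8: the localized minimum is the global one (V_ad < 0).
```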
\subsection{Coherent oscillations}
\label{sec-oscillations}
In this section we will finally discuss the initial coherent oscillations which follow the coupling
of the electrons to the lattice and the resulting sudden displacement of the oscillator zero.
In the non-adiabatic regime, oscillations are reflected in a rigid-like shift of the band
(Fig.~\ref{fig:PES_strong1}c). One can see that this is the behavior expected for a
single oscillator:
In the atomic limit, the Green's function for a sudden switch-on of the coupling
can be obtained exactly; it is
related to the time-translationally invariant equilibrium one
[$\widetilde G^<_{eq}(t)=i\int d\omega\, e^{-i\omega t} A^<(\omega)$, with Eq.~\eqref{alesequiatom}]
by a simple time-dependent factor (see Appendix \ref{ap2}),
\begin{align}
\label{gquench}
\widetilde G^<(t,t')
&=
\widetilde G^<_{eq}(t-t') Q(t)Q^*(t'),\\
Q(t) &= \exp[2i g^2/\omega_0^2 \sin(\omega_0t)].
\end{align}
In the photoemission spectrum, Eq.~\eqref{trpes}, the oscillating factor $Q(t)$ roughly acts like a shift of
the probing frequency $\omega$ by $2E_P \cos(\omega_0t)$ when the probe pulse is shorter than
$2\pi/\omega_0$, so that one can linearize $\sin(\omega_0(t+t_1))\approx\sin(\omega_0t)+t_1\omega_0\cos(\omega_0t)$
in $Q(t)$. The resulting photo\-emission spectrum is shown in Figs.~\ref{fig:alimit}c and d.
(Longer pulses, which average over many cycles, would lead to time-independent bands split by $\omega_0$.)
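The linearization used here is easy to check numerically: the phase of $Q(t)$ is $\phi(t)=2(g^2/\omega_0^2)\sin(\omega_0 t)$, and its instantaneous frequency $d\phi/dt$ is exactly the oscillating shift $2E_P\cos(\omega_0 t)$ (a short Python sketch):

```python
import numpy as np

# The phase of Q(t) is phi(t) = 2*(g/w0)^2 * sin(w0*t); its instantaneous
# frequency d(phi)/dt = 2*E_P*cos(w0*t) is the oscillating spectral shift.
w0, g = 1.0, 1.5
E_P = g**2 / w0
t = np.linspace(0.0, 4.0 * np.pi / w0, 4001)
phi = 2.0 * (g / w0)**2 * np.sin(w0 * t)
inst_freq = np.gradient(phi, t)                 # numerical derivative of the phase
# inst_freq tracks 2*E_P*cos(w0*t); amplitude 2*E_P = 4.5 for these parameters.
```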
From the comparison of Fig.~\ref{fig:PES_strong1} with Figs.~\ref{fig:alimit}c and d it is apparent
that only in the non-adiabatic regime does the lattice result reflect the coherent oscillations
found in the atomic limit. This shows a qualitative difference between the two
regimes. In the adiabatic regime, the same bare polaron binding $E_P$ corresponds to a
larger number of phonon energy quanta. An electron can thus easily emit several phonons
to neighboring sites, so that vibrational dephasing occurs already on the timescale of one
phonon-period. In the non-adiabatic regime, in contrast, the total excitation energy corresponds
to very few oscillator quanta right from the beginning, so that emission of phonons is restricted
by phase space effects and the system remains in long-lived beating oscillations, which is in
agreement with results from
exact diagonalization
\cite{Ku2007,Fehske2011}.
\section{Conclusion}
In conclusion, we have obtained the numerically exact solution of the single-electron
Holstein model within nonequilibrium DMFT. The results provide a comprehensive
picture of how an excited ``hot'' electron distribution relaxes due to the coupling to optical phonons,
both at weak and strong coupling, and in the adiabatic and non-adiabatic
regimes. Most important are the results for small phonon frequencies (adiabatic regime) and
strong coupling, where polaronic states are expected in equilibrium. After a quick dephasing
of initial coherent oscillations, the system reaches a state in which excited polarons coexist
with metastable delocalized states. While we cannot resolve the final relaxation to the ground
state (the time range of our simulations extends to several phonon periods), the observed
transient features are expected to be important for a photo-induced metallic state at
strong electron-phonon coupling. (In fact, in real systems the lifetime of the entire photo-induced
state may be shorter than the final equilibration time.)
Moreover, we discuss how the photoemission spectrum reflects properties of the phonon cloud
and can thus be used to characterize the transient state: Excited polarons lead to a characteristic
double-peak structure of the almost flat (i.e., weakly momentum dependent) polaron band.
Delocalized states, on the other hand, can be identified because their distribution is peaked in
momentum space. Nonequilibrium polarons and metastable delocalized states appear beyond
a well-defined polaron crossover scale. At the same time, no signature of the crossover is seen
in the relaxation behavior of high-energy electrons. This suggests that the high-energy
relaxation rates can be used in experiment to estimate the coupling by an
analysis in terms of Fermi's golden rule \cite{Sentef2013} even in the regime
where small polarons are formed.
As far as a comparison is possible, our results are in qualitative agreement with earlier predictions,
and with results for low-dimensional systems: A beating between excited polaron states in
the non-adiabatic case is in agreement with exact diagonalization results for one dimension
\cite{Ku2007, Fehske2011}. The dynamics of the strong-coupling adiabatic regime is the most difficult
to describe in a quantum mechanical lattice calculation. A barrier for relaxation from delocalized
states to self-trapped states was predicted by semiclassical arguments \cite{Emin1976, Kabanov1993},
and it is in agreement with the occurrence of a level anti-crossing between localized and
delocalized ground states in the energy spectrum \cite{Ku2007}.
Even though the simple Holstein model is not directly applicable to many experiments, the
coexistence of long-lived polarons and metastable delocalized states may be qualitatively
correct for systems which at the moment do not allow for a simple modeling. In fact, the
coexistence of a Drude peak and polaronic features in photo-excited states has been
observed in optical experiments on TaS$_2$ \cite{Dean2011}. If delocalized states are stabilized
by an energy barrier, this suggests unique ways to control the properties of photo-excited
states: The number of mobile carriers may be modified by a second pulse that helps to bring
electrons over the barrier, either by field-localization of the electrons, which can transiently
increase the electron-lattice effects \cite{Werner2014}, or by exciting the delocalized carriers.
In this way the carrier mobility could be {\em lowered} by a
pulse, allowing for a controlled switch-on of a metallic state (by photo-exciting carriers), followed
by a {\em switch-off} (by localizing carriers). Such possibilities will be investigated in future work.
From a technical perspective, we note that the structure of the DMFT equations in equilibrium (a
continued fraction) is similar to the momentum averaged technique \cite{Berciu2006}. Hence
the Green's function formalism presented in our work can be directly applied to extend the
latter approach to nonequilibrium, which would be a promising way to study the time-resolved
optical conductivity of the transient state in finite dimensions \cite{Goodvin2011}.
\acknowledgments
We thank J. Bon\v{c}a, U. Bovensiepen, D. Gole\v{z}, Z.~Lenar\v{c}i\v{c}, P. Prelov\'sek, Ph.~Werner, and L.~Vidmar
for fruitful discussions. ME acknowledges the Aspen Center for Physics and the NSF Grant No.~1066293 for
hospitality during writing of the manuscript. The calculations were run in part on the supercomputer HLRN
of the North-German Supercomputing Alliance.
\section{Introduction}
Fix an orientable surface $\Sigma$. The goal of this paper is to quantify the extent to
which algebraically complicated elements of $\pi_1(\Sigma)$ must exhibit topological complexity.
We begin with some definitions. Let $c : S^1 \rightarrow \Sigma$ be a closed curve. We define
the {\em self-intersection number} of $c$, denoted $i(c)$, to be the minimum over all curves $c'$ which are freely homotopic
to $c$ of the quantity
$$\frac{1}{2}|\{\text{$(x,y)$ $|$ $x,y \in S^1$, $x \neq y$, $c'(x)=c'(y)$}\}|.$$
The factor $1/2$ appears because each self-intersection is counted twice. Also, recall that if $G$ is a
group, then the {\em lower central series} of $G$ is the inductively defined
sequence
$$\LCS{1}{G} = G \quad \text{and} \quad \LCS{k+1}{G} = [\LCS{k}{G},G].$$
If $\pi_1(\Sigma)$ is nonabelian, then it is easy
to see that for $k \geq 1$, there exist $x \in \LCS{k}{\pi_1(\Sigma)}$ with $i(x)$ arbitrarily large.
However, a consequence of Theorems \ref{theorem:lcsbdry} and \ref{theorem:lcsgeneral} below is that
there do not exist nontrivial $x \in \LCS{k}{\pi_1(\Sigma)}$ with $i(x)$ arbitrarily small.
To state these theorems, we define
$$\LCSNorm{k}{\Sigma} = \Min\{\text{$i(x)$ $|$ $x \in \LCS{k}{\pi_1(\Sigma)}$, $x \neq 1$}\}.$$
Our first result is the following.
\begin{theorem}
\label{theorem:lcsbdry}
Let $\Sigma_{g,b}$ be an orientable genus $g$ surface with $b \geq 1$ boundary components. Assume that
$\pi_1(\Sigma_{g,b})$ is nonabelian. Then for all $k \geq 1$ we have
$$\LCSNorm{k}{\Sigma_{g,b}} \geq \frac{k}{4g+b-1} - 1.$$
\end{theorem}
\noindent
Theorem \ref{theorem:lcsbdry} will be proven in \S \ref{section:lcsbdry}.
The key to our proof of Theorem \ref{theorem:lcsbdry} is the following result, which is proven in \S
\ref{section:fox}. If $G$ is a group and $S \subset G$, then
for $x \in \langle S \rangle$ we will denote by $\Length{S}{x}$ the length of the
shortest word in $S \cup S^{-1}$ which equals $x$.
\begin{theorem}
\label{theorem:fox}
Let $F(S)$ be the free group on a set $S$ with $|S| > 1$ and let $k \geq 1$. Then for all
non-trivial $w \in \LCS{k}{F(S)}$ we have $k \leq \Length{S}{w}$.
\end{theorem}
\noindent
This improves upon work of Fox, who in \cite[Lemma 4.2]{FoxFree} proved a result
that implies that $\Length{S}{w} \geq \frac{1}{2} k$.
\begin{remark}
If we could extend Theorem \ref{theorem:fox} to fundamental groups of closed surfaces, then
we could also extend Theorem \ref{theorem:lcsbdry} to closed surfaces.
\end{remark}
\begin{remark}
We conjecture that Theorem \ref{theorem:fox} is not sharp. Indeed,
we suspect that the length of the shortest word in the $k^{\Th}$ term of the lower central series
of a nonabelian free group is quadratic in $k$. As evidence, in the proofs of
the upper bounds of Theorems \ref{theorem:lcsgeneral} and \ref{theorem:dergeneral} below
we will construct elements lying in the $k^{\Th}$ term of the lower central series
of a rank $2$ free group whose word length is quadratic in $k$. If this conjecture
were true, then we could replace the lower bound in Theorem \ref{theorem:lcsbdry} with
a function which is quadratic in $k$.
\end{remark}
For general surfaces (not necessarily compact or of finite type), we prove the following.
\begin{theorem}
\label{theorem:lcsgeneral}
Let $\Sigma$ be an orientable surface with $\pi_1(\Sigma)$ nonabelian. Then for $k \geq 1$ we have
$$\log_8 (k) - 1 \leq \LCSNorm{k}{\Sigma} \leq 8 k^4.$$
\end{theorem}
\noindent
The proof of the lower bound in Theorem \ref{theorem:lcsgeneral} is
in \S \ref{section:lcsgenerallower} and the proof of the upper bound
is in \S \ref{section:upperbounds}.
\begin{remark}
Although the lower bound in Theorem \ref{theorem:lcsgeneral} is weaker than the lower bound in Theorem
\ref{theorem:lcsbdry} in terms of the order of $k$, it is uniform over all surfaces.
\end{remark}
Recall that a group $G$ is {\em residually nilpotent} if $\cap_{k=1}^{\infty} \LCS{k}{G} = 1$. Our proof of Theorem
\ref{theorem:fox} is an elaboration of a proof due to Fox \cite{FoxFree} of a theorem of Magnus \cite{MagnusFree}
that says that free groups are residually nilpotent. Conversely, an immediate consequence of Theorem
\ref{theorem:lcsgeneral} (which does not use Theorem \ref{theorem:fox}) is the following theorem, which for surface
groups is due independently to Baumslag \cite{BaumslagSurfaces} and Frederick \cite{FrederickSurfaces}.
\begin{corollary}
\label{corollary:residuallynilpotent}
Free groups and fundamental groups of closed surfaces are both residually nilpotent.
\end{corollary}
\noindent
Our proof of Theorem \ref{theorem:lcsgeneral} (and hence of Corollary \ref{corollary:residuallynilpotent})
shares some ideas with Hempel's beautiful short proof \cite{Hempel} of the residual finiteness of free
groups and surface groups.
The final result of this paper gives an analogue of Theorem \ref{theorem:lcsgeneral} for the derived
series. Recall that if $G$ is a group, then the {\em derived series} of $G$ is
the inductively defined sequence
$$\DER{1}{G} = G \quad \text{and} \quad \DER{k+1}{G} = [\DER{k}{G},\DER{k}{G}].$$
Setting
$$\DERNorm{k}{\Sigma} = \Min\{\text{$i(x)$ $|$ $x \in \DERR{k}{\pi_1(\Sigma)}$, $x \neq 1$}\},$$
our result is as follows.
\begin{theorem}
\label{theorem:dergeneral}
Let $\Sigma$ be an orientable surface with $\pi_1(\Sigma)$ nonabelian. Then for $k \geq 3$ we have
$$2^{\lceil k/2 \rceil - 2} \leq \DERNorm{k}{\Sigma} \leq 2^{4k-5}.$$
\end{theorem}
\noindent
The lower bound in Theorem \ref{theorem:dergeneral} is proven in \S \ref{section:dergenerallower}
and the upper bound is proven in \S \ref{section:upperbounds}. Our proof of the lower
bound in Theorem \ref{theorem:dergeneral} is inspired by an unpublished note of
Reznikov \cite{Reznikov}, which outlines an argument giving a linear
lower bound on $\DERNorm{k}{\Sigma}$ for $\Sigma$ closed. We remark that though \cite{Reznikov}
seems to claim that it is dealing with the lower central series, both its definitions
and its arguments make it clear that the author intends to discuss the derived series.
\begin{remark}
In our definitions above, for $x \in \pi_1(\Sigma,\ast)$ the number $i(x)$ depends only on the free homotopy class of
$x$. If we required that our homotopies fix $\ast$ and $\ast \in \Interior(\Sigma)$, then $i(x)$ would be unchanged.
If instead $\ast \in \partial \Sigma$, then $i(x)$ might differ. However, since the lower central series and derived
series are normal, requiring the homotopies to fix the basepoint would not change $\LCSNorm{k}{\Sigma}$ or
$\DERNorm{k}{\Sigma}$.
\end{remark}
\begin{acknowledgements}
We would like to thank
Khalid Bou-rabee, Nathan Broaddus, Matthew Day, Thomas Koberda, and Ben McReynolds
for useful conversations and suggestions. We would especially like to thank Benson
Farb for sharing \cite{Reznikov} with us and asking whether bounds of the sort we
prove might hold.
\end{acknowledgements}
\section{Lower bounds}
\label{section:lowerbounds}
In this section, we prove the lower bounds in Theorems \ref{theorem:lcsbdry}, \ref{theorem:lcsgeneral},
and \ref{theorem:dergeneral}.
\subsection{Lower central series, compact surfaces with boundary}
\label{section:lcsbdry}
We begin with Theorem \ref{theorem:lcsbdry}.
\Figure{figure:lcspics}{LCSPics}{a. An immersed curve $f$ whose singularities consist of $i(f)=5$ isolated
double points.
\CaptionSpace b. The maximal tree $T$
\CaptionSpace c. The $2$-disc $D$
\CaptionSpace d. Result of contracting $D$}
\begin{proof}[{Proof of Theorem \ref{theorem:lcsbdry}}]
Let $f : S^1 \rightarrow \Interior(\Sigma_{g,b})$ be an immersion whose singularities consist of $i(f)$ isolated double
points (see Figure \ref{figure:lcspics}.a).
Assume that $f$ is freely homotopic to a nontrivial element of $\LCS{k}{\pi_1(\Sigma_{g,b})}$.
Our goal is to show that $i(f) \geq \frac{k}{4g+b-1} - 1$.
The first step is to ``comb'' the double points to a single point on the surface. The immersion $f$
factors through an embedding of a graph whose vertices correspond to the singularities of $f$. More precisely,
there is a $4$-regular graph $G$ with $i(f)$ vertices, an embedding $\tilde{f} : G \rightarrow \Interior(\Sigma_{g,b})$, and
an immersion $c : S^1 \rightarrow G$ with $f = \tilde{f} \circ c$ so that the inverse image under $c$ of
the interior of every edge of $G$ is connected.
Let $T$ be a maximal
tree in $G$. Hence $\tilde{f}(T)$ is an embedded tree in $\Interior(\Sigma_{g,b})$ (see
Figure \ref{figure:lcspics}.b). Any sufficiently small closed neighborhood $D$
of $\tilde{f}(T)$ satisfies the following two properties (see Figure \ref{figure:lcspics}.c).
\begin{itemize}
\item $D$ is homeomorphic to a closed $2$-disc.
\item For all edges $e$ of $G$ that do not lie in $T$,
the set $\tilde{f}(e) \cap D$ has exactly two connected components.
\end{itemize}
It is easy to see that there is a map
$r : \Sigma_{g,b} \rightarrow \Sigma_{g,b}$ so that $r$ is homotopic
to the identity, so that $r|_{\Sigma_{g,b} \setminus D}$ is injective, and so
that $r(D) = \ast$ for some point $\ast \in \Interior(\Sigma_{g,b})$. Let $D' = \tilde{f}^{-1}(D)$. By
construction, $D'$ is a closed regular neighborhood of $T$ in $G$.
Set $G' = G / D'$, so $G'$ is a wedge of circles, and let $c' : S^1 \rightarrow G'$
be the composition
of $c$ with the projection $G \rightarrow G/D'$. There is then an embedding
$\tilde{f}' : G' \rightarrow \Interior(\Sigma_{g,b})$
so that $\tilde{f}' \circ c' = r \circ f$ (see Figure \ref{figure:lcspics}.d).
Let $w \in \pi_1(\Sigma_{g,b},\ast)$ be the based curve corresponding to $\tilde{f}' \circ c'$. Since $\tilde{f}' \circ c'$
is freely homotopic to $f$, we have $w \in \LCS{k}{\pi_1(\Sigma_{g,b},\ast)}$. Let
$S \subset \pi_1(\Sigma_{g,b},\ast)$ be a maximal collection of elements satisfying the following three properties.
\begin{itemize}
\item For each circle $L$ in $G'$ so that $\tilde{f}'|_{L}$ is not null-homotopic, there
exists some $x \in S$ so that $\tilde{f}'|_{L} = x^{\pm 1}$.
\item For $x,y \in S$, if $x = y^{\pm 1}$ then $x=y$.
\item The curves in $S$ can be realized simultaneously by simple closed curves that only intersect at $\ast$.
\end{itemize}
Since $G$ is a $4$-regular graph with $i(f)$ vertices, it has $2 i(f)$ edges.
Also, the maximal tree $T$ has $i(f)$ vertices and hence $i(f)-1$ edges. We conclude that
$G'$ is a wedge of $2 i(f) - (i(f)-1) = i(f)+1$ circles, so $\Length{S}{w} \leq i(f)+1$.
We will confuse the set of homotopy classes $S$ with the corresponding set of simple closed curves that only
intersect at $\ast$. Via an Euler characteristic calculation, we see that cutting $\Sigma_{g,b}$ along the curves in
$S$ yields $b$ annuli and $4g+b-2$ triangles. By gluing the triangles together in an appropriate manner (as
in the standard combinatorial proof of the classification of surfaces; see \cite[Chapter 1]{Massey}), we identify
$\Sigma_{g,b}$ with a $(4g+b)$-sided polygon $P$ with $4g$ sides identified in pairs, all vertices identified, and
annuli glued to the $b$ unpaired sides. Each of the curves in $S$ is identified with either a side of $P$ or an arc
in $P$ joining two vertices.
In particular, $S$ contains a free generating set $S'$ for $\pi_1(\Sigma_{g,b},\ast)$ consisting of the following curves.
\begin{itemize}
\item A curve corresponding to one edge from each of the pairs in the $4g$ paired edges in $P$.
\item A curve corresponding to all but one of the $b$ unpaired edges in $P$.
\end{itemize}
Observe that every element of $S$ can be written as a word of length at most $4g+b-1$ in $S'$, so $\Length{S'}{w} \leq
(4g+b-1) \Length{S}{w}$. Theorem \ref{theorem:fox} says that $k \leq \Length{S'}{w}$, so we conclude that
$$k \leq \Length{S'}{w} \leq (4g+b-1) \Length{S}{w} \leq (4g+b-1) (i(f)+1).$$
Rearranging this inequality gives the desired conclusion.
\end{proof}
\subsection{Some preliminary lemmas}
We now prove two lemmas that are needed in the proofs of Theorems \ref{theorem:lcsgeneral} and
\ref{theorem:dergeneral}.
\begin{lemma}
\label{lemma:coveringlemma}
Let $\Sigma$ be a compact orientable surface with $\pi_1(\Sigma)$ non-abelian and let
$f : S^1 \rightarrow \Sigma$ be a non-nullhomotopic closed curve. Then there exists
a degree $8$ normal cover $\widetilde{\Sigma} \rightarrow \Sigma$ so that one of the following holds.
\begin{itemize}
\item $f$ does not lift to a closed curve on $\widetilde{\Sigma}$.
\item $f$ lifts to a closed curve $\tilde{f} : S^1 \rightarrow \widetilde{\Sigma}$ with
$i(\tilde{f}) < i(f)$.
\end{itemize}
\end{lemma}
\begin{remark}
Since the cover in the conclusion of Lemma \ref{lemma:coveringlemma} is normal, $f$ lifts to a closed curve if and
only if any curve freely homotopic to $f$ lifts to a closed curve.
\end{remark}
\begin{proof}[{Proof of Lemma \ref{lemma:coveringlemma}}]
By the remark following the lemma, we may assume without loss of generality
that $f$ is an immersion whose singularities consist of $i(f)$ isolated double points. There are two cases.
\BeginCases
\begin{case}
$f$ is simple.
\end{case}
We must construct a degree $8$ normal cover to which $f$ does not lift to a closed curve. In other words,
choosing $\ast \in f(S^1)$ and letting $x \in \pi_1(\Sigma,\ast)$ be the based curve corresponding to $f$,
we must find a finite group $H$ with $|H|=8$ and a surjection $\psi : \pi_1(\Sigma,\ast) \rightarrow H$
with $x \notin \Ker(\psi)$.
If $f$ is not nullhomologous and if $\phi : \pi_1(\Sigma,\ast) \rightarrow \HH_1(\Sigma;\Z)$ is the
abelianization map, then $\phi(x)$ is a primitive vector. There is therefore a surjection
$\phi' : \HH_1(\Sigma;\Z) \rightarrow \Z / 8\Z$ so that $\phi'(\phi(x)) \neq 0$. We conclude that
we can use $H = \Z / 8 \Z$ and $\psi = \phi' \circ \phi$.
Assume now that $f$ is nullhomologous. Letting $g$ be the genus and $b$ the number of boundary
components of $\Sigma$, it follows that
there is a generating set $S=\{\alpha_1,\beta_1,\ldots,\alpha_g,\beta_g,x_1,\ldots,x_b\}$
for $\pi_1(\Sigma,\ast)$ so that
$$\pi_1(\Sigma,\ast) = \langle \text{$\alpha_1,\beta_1,\ldots,\alpha_g,\beta_g,x_1,\ldots,x_b$ $|$ $[\alpha_1,\beta_1] \cdots [\alpha_g,\beta_g] = x_1 \cdots x_b$} \rangle$$
and so that $x = [\alpha_1,\beta_1] \cdots [\alpha_{g'},\beta_{g'}]$ for some $g' \leq g$. Let $H$ be the
dihedral group of order $8$, so
$$H = \langle \text{$\sigma,r$ $|$ $\sigma^2=1$, $r^4=1$, $\sigma r \sigma = r^{-1}$} \rangle.$$
We define a surjection $\psi : \pi_1(\Sigma,\ast) \rightarrow H$ in the following way. If $b = 0$,
then $g' < g$ and we define $\psi(\alpha_1) = \psi(\alpha_g) = \sigma$,
$\psi(\beta_1) = \psi(\beta_g) = r \sigma$, and $\psi(s) = 1$ for all $s \in S$ with
$s \notin \{\alpha_1,\beta_1,\alpha_g,\beta_g\}$.
It is easy to check that the surface group relation is satisfied and that the resulting homomorphism $\psi$
is a surjection. If $b > 0$,
then $\pi_1(\Sigma,\ast)$ is free on $S \setminus \{x_b\}$. We define $\psi(\alpha_1) = \sigma$,
$\psi(\beta_1) = r \sigma$, and $\psi(s) = 1$ for all $s \in S \setminus \{x_b\}$ with
$s \notin \{\alpha_1,\beta_1\}$. Trivially
$\psi$ extends to a surjection. In either case, we have $\psi(x) = [\sigma, r \sigma] \neq 1$, as desired.
\begin{case}
$f$ is not simple.
\end{case}
\Figure{figure:lcspics2}{LCSPics2}{
\CaptionSpace a. A nonsimple closed curve $f$ like in Step 2 of the proof of Lemma \ref{lemma:coveringlemma}. The
simple closed subcurve $f'$ is in bold.
\CaptionSpace b. An example of a subcurve $f'$ that is nullhomotopic.
\CaptionSpace c. We reduce the number of self-intersections of $f$.}
Let $A$ be the set of nontrivial proper subarcs of $S^1$ whose endpoints are mapped by $f$ to the same point of
$\Sigma$. By assumption $A$ is finite and nonempty. Partially order the elements of $A$ by inclusion and let
$\alpha$ be a minimal element with endpoints $a_1$ and $a_2$. Since $\alpha \in A$, the map $f|_{\alpha} : \alpha
\rightarrow \Sigma$ factors through a map $f' : S^1 \rightarrow \Sigma$, and from the minimality of
$\alpha$ we deduce that $f'$ is a simple closed curve (see Figure \ref{figure:lcspics2}.a). In addition, $f'$ is not
nullhomotopic, since if $f'$ were nullhomotopic then we could homotope $f$ so as to decrease its number of
self-intersections (see Figures \ref{figure:lcspics2}.b--c).
By Case 1, there is a degree $8$ normal cover $\widetilde{\Sigma} \rightarrow \Sigma$ to which $f'$ does not
lift to a closed curve. If $f$ does not lift to a closed curve on $\widetilde{\Sigma}$, then we are done. Assume,
therefore, that $f$ can be lifted to a closed curve $\tilde{f} : S^1 \rightarrow \widetilde{\Sigma}$. Define
\begin{align*}
D(f) = \{\text{$(x,y)$ $|$ $x,y \in S^1$, $x \neq y$, $f(x)=f(y)$}\},\\
D(\tilde{f}) = \{\text{$(x,y)$ $|$ $x,y \in S^1$, $x \neq y$, $\tilde{f}(x)=\tilde{f}(y)$}\}.
\end{align*}
We clearly have $D(\tilde{f}) \subset D(f)$. Moreover, by construction $(a_1,a_2) \notin D(\tilde{f})$. We conclude
that $\tilde{f}$ has fewer self-intersections than $f$, so $i(\tilde{f}) < i(f)$, as desired.
\end{proof}
We will also need the following simple lemma, which allows us to deduce results about noncompact surfaces
from results about compact surfaces.
\begin{lemma}
\label{lemma:reducetocompact}
Let $\Sigma$ be an oriented surface with $\pi_1(\Sigma)$ nonabelian. Also, let $f: S^1 \rightarrow \Sigma$
be a non-nullhomotopic closed curve which is freely homotopic to an element of $\LCS{k}{\pi_1(\Sigma)}$ for some
$k \geq 1$. Then there is a compact surface $\Sigma'$ with $\pi_1(\Sigma')$ nonabelian and an
embedding $i : \Sigma' \hookrightarrow \Sigma$ satisfying the following properties.
\begin{itemize}
\item There is a map $f' : S^1 \rightarrow \Sigma'$ so that $f = i \circ f'$.
\item The curve $f'$ is freely homotopic to an element of $\LCS{k}{\pi_1(\Sigma')}$.
\end{itemize}
\end{lemma}
\begin{proof}
Any iterated commutator only involves a finite number of curves and any homotopy stays within a compact subset of
$\Sigma$.
\end{proof}
\subsection{Lower central series, general surfaces}
\label{section:lcsgenerallower}
We now prove the lower bound in Theorem \ref{theorem:lcsgeneral}. The proof will require the following
lemma.
\begin{lemma}
\label{lemma:grouptheory} Fix $p,n,m \geq 1$ with $p$ prime, and let $G_0 \rhd G_1 \rhd \cdots \rhd G_n$ be a
subnormal sequence of groups so that for $1 \leq i \leq n$ we have $[G_{i-1}:G_i] = p^{m}$. Then there exists some
group $H$ so that $H < G_n$, so that $H \lhd G_0$, and so that $[G_0 : H] = p^{N}$ for some $1 \leq N \leq m
\frac{p^{mn} - 1}{p^m - 1}$.
\end{lemma}
\noindent For the proof of Lemma \ref{lemma:grouptheory}, we will need the following.
\begin{lemma}
\label{lemma:grouptheory2}
Fix $p,r,s \geq 1$ with $p$ prime, and let $A \rhd B \rhd C$ be groups so that
$[A:B] = p^{r}$ and $[B:C] = p^{s}$. Then there exists a group $D$ so that $D < C$, so that
$D \lhd A$, and so that $[A:D] = p^N$ for some $1 \leq N \leq p^r s + r$.
\end{lemma}
\begin{proof}
Define $D = \bigcap_{a \in A} a^{-1} C a$. Clearly we have $D < C$ and $D \lhd A$, so we must only
prove the indicated result about $[A:D]$. Let $T = \{a_1,\ldots,a_{p^{r}}\}$ be a complete set of coset
representatives for $B$ in $A$ with $a_1 = 1$. Hence we have $D = \bigcap_{j=1}^{p^r} a_j^{-1} C a_j$. For
$1 \leq i \leq p^{r}$, define $C_i = \bigcap_{j=1}^{i} a_j^{-1} C a_j$. We thus have
$$A \rhd B \rhd C = C_1 \rhd C_2 \rhd \cdots \rhd C_{p^{r}} = D.$$
We claim that for $1 < i \leq p^{r}$ we have $[C_{i-1}:C_{i}] = p^{k_i}$ for some $0 \leq k_i \leq s$. Indeed, we have
$$C_{i-1} / C_{i} = C_{i-1} / (a_{i}^{-1} C a_i \cap C_{i-1}) \cong (C_{i-1} \cdot (a_i^{-1} C a_i)) / a_i^{-1} C a_i < B / a_i^{-1} C a_i.$$
Since $[B:a_i^{-1} C a_i] = [B:C] = p^s$, the claim follows. We conclude that
\begin{align*}
[A:D] &= [A:B][B:C][C_1:C_2] \cdots [C_{p^r-1}:C_{p^r}] = p^r p^s p^{k_2} \cdots p^{k_{p^r}} \leq p^r (p^s)^{p^r},
\end{align*}
as desired.
\end{proof}
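The normal core construction in the proof can be tried on a small concrete example. The sketch below (Python with SymPy; the choice of groups is ours, not from the paper) takes $A$ to be the dihedral group of order $8$, $B = \langle \rho \rangle$ and $C = \langle \rho^2 \rangle$ for $\rho$ the rotation, so $p = 2$ and $r = s = 1$ in the notation of Lemma \ref{lemma:grouptheory2}, and computes $D = \bigcap_{a \in A} a^{-1} C a$ by brute force:

```python
from sympy.combinatorics.named_groups import DihedralGroup
from sympy.combinatorics.perm_groups import PermutationGroup

A = DihedralGroup(4)                      # dihedral group of order 8
rot = A.generators[0]                     # the rotation, of order 4
B = PermutationGroup([rot])               # [A:B] = 2**1, so r = 1
C = PermutationGroup([rot**2])            # [B:C] = 2**1, so s = 1

# D = intersection of all conjugates of C: the largest normal subgroup
# of A contained in C.  Brute force is fine since |A| = 8.
D = [x for x in A.elements
     if all(C.contains(a * x * ~a) for a in A.elements)]

index = A.order() // len(D)
assert index == 4                         # [A:D] = 2**2, so N = 2 <= p**r * s + r = 3
```

Here $\rho^2$ is central, so $D = C$ and $N = 2$, within the bound $N \leq p^r s + r = 3$ of the lemma.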
\begin{proof}[{Proof of Lemma \ref{lemma:grouptheory}}]
The proof will be by induction on $n$. The base case $n=1$ is trivial. Now assume that $n > 1$ and that the lemma
is true for all smaller $n$. Applying the inductive hypothesis to the sequence $G_1 \rhd \cdots \rhd G_n$, we obtain
a group $H'$ so that $H' < G_n$, so that $H' \lhd G_1$, and so that $[G_1:H'] = p^{N'}$ with $N' \leq m
\frac{p^{m(n-1)} - 1}{p^m - 1}$. We can therefore apply Lemma \ref{lemma:grouptheory2} to the sequence $G_0 \rhd G_1
\rhd H'$ and obtain a group $H$ so that $H < H' < G_n$, so that $H \lhd G_0$, and so that $[G_0 : H] = p^N$ for some
$N$ that satisfies
$$N \leq p^{m} N' + m \leq m \frac{p^{mn}- p^m}{p^m - 1} + m = m
\frac{p^{mn} - 1}{p^m - 1},$$ as desired.
\end{proof}
We will also need the following standard property of $p$-groups. Recall that a group $G$ is
{\it at most $n$-step nilpotent} if $\LCS{n}{G} = 1$.
\begin{lemma}[{\cite[Theorem 5.33]{RotmanBook}}]
\label{lemma:pgroups}
Let $p$ be a prime and let $G$ be a group with $|G|=p^n$ for some $n \in \N$. Then
$G$ is at most $n$-step nilpotent.
\end{lemma}
We can now prove the lower bound in Theorem \ref{theorem:lcsgeneral}.
\begin{proof}[{Proof of Theorem \ref{theorem:lcsgeneral}, lower bound}]
Let $f : S^1 \rightarrow \Sigma$ be an immersion whose singularities consist of $i(f)$ isolated double
points. Assume that $f$ is freely homotopic to a nontrivial element of $\LCS{k}{\pi_1(\Sigma)}$.
Our goal is to show that $i(f) \geq \log_8 (k) - 1$; i.e.\ that $k \leq 8^{i(f)+1}$.
By Lemma \ref{lemma:reducetocompact}, we may assume that $\Sigma$ is compact. Choose a basepoint
$\ast \in f(S^1)$ and let $x \in \pi_1(\Sigma,\ast)$ be the based curve corresponding to $f$.
Applying Lemma \ref{lemma:coveringlemma} repeatedly, we obtain a subnormal sequence
$$\pi_1(\Sigma,\ast) = G_0 \rhd G_1 \rhd \cdots \rhd G_n$$
with $n \leq i(f) + 1$ so that $x \notin G_n$ and so that $[G_{i-1}:G_i] = 2^{3}$
for $1 \leq i \leq n$. Applying Lemma \ref{lemma:grouptheory},
we obtain a group $H$ so that $H < G_n$, so that $H \lhd \pi_1(\Sigma,\ast)$, and so that
$[\pi_1(\Sigma,\ast):H] = 2^N$ for some
$$N \leq 3 \frac{2^{3n} - 1}{2^3 - 1} \leq 8^n \leq 8^{i(f)+1}.$$
By Lemma \ref{lemma:pgroups}, we deduce that $\pi_1(\Sigma,\ast) / H$ is at most $8^{i(f)+1}$-step nilpotent. In
other words,
$$\LCS{8^{i(f)+1}}{\pi_1(\Sigma,\ast)} < H.$$
Since $H$ is a normal subgroup of $\pi_1(\Sigma,\ast)$ and $f$ is freely homotopic to $x \notin H$, it
follows that $f$ is not freely homotopic to any element of $H$. We conclude that $k \leq 8^{i(f)+1}$, as
desired.
\end{proof}
\subsection{Derived series}
\label{section:dergenerallower}
We now prove the lower bound in Theorem \ref{theorem:dergeneral}. The
proof will require the following lemma.
\begin{lemma}
\label{lemma:simpleclosedcurves}
Let $\Sigma$ be an orientable surface (not necessarily compact) with $\pi_1(\Sigma)$ nonabelian.
Also, let $f : S^1 \rightarrow \Sigma$ be a non-nullhomotopic simple closed
curve. Then $f$ is not freely homotopic to any element of $\LCS{3}{\pi_1(\Sigma)}$.
\end{lemma}
\begin{proof}
By Lemma \ref{lemma:reducetocompact}, we may assume that $\Sigma$ is compact.
Assume that $f$ is freely homotopic to $x \in \LCS{3}{\pi_1(\Sigma)}$. Since $f$ is simple,
Lemma \ref{lemma:coveringlemma} implies that there is a finite group $H$ with $|H| = 2^3$ and a surjection
$\psi : \pi_1(\Sigma) \rightarrow H$ so that $\psi(x) \neq 1$. Lemma \ref{lemma:pgroups} says that $H$ is at
most $3$-step nilpotent, so $\LCS{3}{\pi_1(\Sigma)} \subset \Ker(\psi)$, a contradiction.
\end{proof}
We will also need the following standard lemma.
\begin{lemma}[{\cite[Exercise 5.50]{RotmanBook}}]
\label{lemma:threesubgroups}
If $G$ is a group, then for all $k \geq 1$ we have $\DER{k}{G} < \LCS{2^{k-1}}{G}$.
\end{lemma}
We can now prove the lower bound in Theorem \ref{theorem:dergeneral}.
\begin{proof}[{Proof of Theorem \ref{theorem:dergeneral}, lower bound}]
We will prove that $2^{\lceil k/2 \rceil - 2} \leq \DERNorm{k}{\Sigma}$ for $k \geq 3$ by induction on $k$. The base
cases $k=3$ and $k=4$ follow from Lemma \ref{lemma:simpleclosedcurves} combined with Lemma
\ref{lemma:threesubgroups}. Now assume that $k > 4$ and that the result is true for all smaller $k$. It is enough
to prove that $\DERNorm{k}{\Sigma} \geq 2 \cdot \DERNorm{k-2}{\Sigma}$. Consider an immersion $f : S^1 \rightarrow \Sigma$
whose singularities consist of $i(f)$ isolated double points. Assume that $i(f) < 2 \cdot \DERNorm{k-2}{\Sigma}$.
Our goal is to show that $f$ is not freely homotopic to any element of $\DERR{k}{\pi_1(\Sigma)}$.
Let $\pi : \widetilde{\Sigma} \rightarrow \Sigma$ be the normal covering corresponding to the subgroup
$\DERR{k-2}{\pi_1(\Sigma)}$. If $f$ does not lift to a closed curve in $\widetilde{\Sigma}$, then $f$ is not freely
homotopic to any element of $\DERR{k-2}{\pi_1(\Sigma)}$, and thus is certainly not freely
homotopic to any element of $\DERR{k}{\pi_1(\Sigma)}$.
Assume, therefore, that there is a lift $\tilde{f} : S^1 \rightarrow \widetilde{\Sigma}$ of $f$. We claim that
$\tilde{f}$ is a simple closed curve. Indeed, define
\begin{align*}
D(f) = \{\text{$(x,y)$ $|$ $x,y \in S^1$, $x \neq y$, $f(x)=f(y)$}\},\\
D(\tilde{f}) = \{\text{$(x,y)$ $|$ $x,y \in S^1$, $x \neq y$, $\tilde{f}(x)=\tilde{f}(y)$}\}.
\end{align*}
Clearly $D(\tilde{f}) \subset D(f)$, and we want to prove that $D(\tilde{f}) = \emptyset$. Consider any $(x,y) \in
D(f)$. The points $x$ and $y$ divide $S^1$ into two arcs $\alpha$ and $\alpha'$, and the restrictions of $f$ to both
$\alpha$ and $\alpha'$ are closed curves. The number of self-intersections of one of $f|_{\alpha}$ and
$f|_{\alpha'}$ (say $f|_{\alpha}$) is less than half of the number of self-intersections of $f$. Hence the
closed curve defined by $f|_{\alpha}$ has fewer than $\DERNorm{k-2}{\Sigma}$ self-intersections, so it is not
freely homotopic to any element of $\DERR{k-2}{\pi_1(\Sigma)}$. We conclude that $\tilde{f}|_{\alpha}$ is not a
closed curve, so $(x,y) \notin D(\tilde{f})$, as desired.
Observe now that by Lemmas \ref{lemma:simpleclosedcurves} and \ref{lemma:threesubgroups}, the curve
$\tilde{f}$ is not freely homotopic to any element of $\DERR{3}{\pi_1(\widetilde{\Sigma})}$. Since
$$\DERR{3}{\pi_1(\widetilde{\Sigma})} = \DERR{3}{\DERR{k-2}{\pi_1(\Sigma)}} = \DERR{k}{\pi_1(\Sigma)},$$
we conclude that $f$ is not freely homotopic to any element of $\DERR{k}{\pi_1(\Sigma)}$, as desired.
\end{proof}
\section{Upper bounds}
\label{section:upperbounds}
We now prove the upper bounds in Theorems \ref{theorem:lcsgeneral} and \ref{theorem:dergeneral}. We will need two
lemmas.
\begin{lemma}
\label{lemma:wordlength}
Let $(\Sigma,\ast)$ be a based surface and let $S \subset \pi_1(\Sigma,\ast)$ be a set
consisting of elements that can be realized simultaneously by simple closed curves that only intersect at $\ast$. Then
for all $x \in \langle S \rangle \subset \pi_1(\Sigma,\ast)$, we have $i(x) \leq \binom{\Length{S}{x}}{2}$.
\end{lemma}
\begin{proof}
We can assume that $\ast \in \Interior(\Sigma)$. Set $n = \Length{S}{x}$ and write $x = s_1 \cdots s_n$ with $s_i \in
S \cup S^{-1}$ for $1 \leq i \leq n$. For $1 \leq i \leq n$, we can choose embeddings $f_i : S^1 \rightarrow \Sigma$
so that $f_i$ represents $s_i$. Moreover, we can choose the $f_i$ so that for $1 \leq i < j \leq n$ we have
$f_i(S^1) \cap f_j(S^1) = \{\ast\}$. Let $D \subset \Sigma$ be a closed embedded $2$-disc with $\ast \in D$ so that for
all $1 \leq i \leq n$ the intersection $f_i(S^1) \cap D$ is a connected arc. Parametrize $D$ so that $D$ is the unit
disc in $\R^2$ and $\ast = (0,0)$. For $1 \leq i \leq n$, let $f'_i : [0,1] \rightarrow \Sigma$ be a parametrization
of the oriented arc $f_i(S^1) \setminus \Interior(D)$. Observe that for $1 \leq i < j \leq n$ we have $f'_i([0,1])
\cap f'_j([0,1]) = \emptyset$.
We can now construct a curve $f : S^1 \rightarrow \Sigma$ that is freely homotopic to $x$ in the following
way. The curve $f$ first traverses $f'_1$, then goes along a straight line in $D$ from $f'_1(1)$ to $f'_2(0)$, then
traverses $f'_2$, then goes along a straight line in $D$ from $f'_2(1)$ to $f'_3(0)$, then traverses $f'_3$, etc. The
curve $f$ ends with a straight line in $D$ from $f'_n(1)$ to $f'_1(0)$. Clearly $f$ is freely homotopic to $x$. Moreover,
all self-intersections of $f$ must occur in $D$. Since $f(S^1) \cap D$ consists of $n$ straight lines and any two of these
lines can intersect at most once, we conclude that $f$ has at most $\binom{n}{2}$ self-intersections, as desired.
\end{proof}
\begin{lemma}
\label{lemma:freegroup}
Let $S = \{a_1,a_2\}$ and let $F_S$ be the free group on $S$. Then for all $k \geq 1$ there
exists some $w \in F_S$ with $w \neq 1$ so that $\Length{S}{w} \leq 4^{k-1}$ and $w \in \DER{k}{F_S}$.
\end{lemma}
\begin{proof}
Define elements $x_k$ and $y_k$ inductively as follows.
\begin{align*}
x_1 = a_1 \quad &\text{and} \quad y_1 = a_2,\\
x_k = [x_{k-1},y_{k-1}] \quad &\text{and} \quad y_k=[x_{k-1},y_{k-1}^{-1}].
\end{align*}
Clearly $\Length{S}{x_k} \leq 4^{k-1}$ and $x_k \in \DER{k}{F_S}$ for $k \geq 1$. We
must therefore only prove that $x_k \neq 1$ for $k \geq 1$. In fact, we will
prove by induction on $k$ that $x_k$ and $y_k$ generate a rank $2$ free subgroup of $F_S$ for $k \geq 1$.
The base case $k=1$ is trivial. Now assume that $k > 1$ and that $x_{k-1}$ and $y_{k-1}$ generate
a rank $2$ free subgroup. Since neither $x_k$ nor $y_k$ is trivial, they must generate either a rank
$2$ or rank $1$ free subgroup. But since $x_{k-1}$ and $y_{k-1}$ generate a rank $2$ free subgroup, we have
$$[x_k,y_k] = [[x_{k-1},y_{k-1}],[x_{k-1},y_{k-1}^{-1}]] \neq 1,$$
so we conclude that $x_k$ and $y_k$ cannot generate a rank $1$ subgroup.
\end{proof}
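The recursion above is easily checked by machine for small $k$; the following sketch (an illustration only, not part of the argument) computes the freely reduced form of $x_k$ and verifies both $x_k \neq 1$ and the bound $\Length{S}{x_k} \leq 4^{k-1}$ for $k \leq 5$. Words are encoded as lists of (generator, exponent sign) pairs.

```python
def free_reduce(word):
    # Stack-based free reduction: cancel adjacent inverse letter pairs.
    out = []
    for g, e in word:
        if out and out[-1] == (g, -e):
            out.pop()
        else:
            out.append((g, e))
    return out

def inv(word):
    # Inverse of a word: reverse the letters and flip each exponent.
    return [(g, -e) for g, e in reversed(word)]

def comm(x, y):
    # Commutator [x, y] = x y x^{-1} y^{-1}, freely reduced.
    return free_reduce(x + y + inv(x) + inv(y))

# x_1 = a_1 and y_1 = a_2; then x_k = [x_{k-1}, y_{k-1}],
# y_k = [x_{k-1}, y_{k-1}^{-1}] as in the proof above.
x, y = [("a1", 1)], [("a2", 1)]
for k in range(2, 6):
    x, y = comm(x, y), comm(x, inv(y))
    assert x, "x_k must not reduce to the identity"
    assert len(x) <= 4 ** (k - 1)
```

Since free reduction computes the normal form in the free group, a nonempty reduced word certifies $x_k \neq 1$.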
We can now prove the upper bounds in Theorems \ref{theorem:lcsgeneral} and \ref{theorem:dergeneral}.
\begin{proof}[{Proof of Theorem \ref{theorem:dergeneral}, upper bound}]
We wish to prove that $\DERNorm{k}{\Sigma} \leq 2^{4k-5}$ for $k \geq 3$. In fact, this inequality holds
for $k \geq 1$ (the assumption that $k \geq 3$ is necessary only in the lower bound), so fix
$k \geq 1$. We claim that there
exists some $a_1,a_2 \in \pi_1(\Sigma,\ast)$ that generate a rank $2$ free subgroup of $\pi_1(\Sigma,\ast)$ and
can be realized simultaneously by simple closed curves that only intersect at $\ast$. If $\Sigma$ is compact, then this
is trivial. Otherwise, $\pi_1(\Sigma,\ast)$ must be a nonabelian free group (see, e.g., \cite[\S 44A]{AhlforsRiemann}),
so we can find $a_1',a_2' \in \pi_1(\Sigma,\ast)$
that generate a rank $2$ free subgroup. As in the proof of Theorem \ref{theorem:lcsbdry}, we can
``comb'' the intersections and self-intersections of $a_1'$ and $a_2'$ to $\ast$ and find a set $S' \subset \pi_1(\Sigma,\ast)$
of elements that can be realized simultaneously by simple
closed curves that only intersect at $\ast$ so that both $a_1'$ and $a_2'$ can be expressed as products
of elements of $S' \cup (S')^{-1}$. There must then exist $a_1,a_2 \in S'$ that generate a rank $2$ free
subgroup, as desired.
Set $S = \{a_1,a_2\}$. By Lemma \ref{lemma:freegroup},
there is some $w \in \langle S \rangle$ so that $\Length{S}{w} \leq 4^{k-1}$ and so that $w \in
\DERR{k}{\pi_1(\Sigma)}$. By Lemma \ref{lemma:wordlength}, we deduce that
$$i(w) \leq \binom{\Length{S}{w}}{2} \leq \frac{4^{k-1}(4^{k-1}-1)}{2} \leq \frac{1}{2} 4^{2k-2} = 2^{4k-5},$$
so $\DERNorm{k}{\Sigma} \leq 2^{4k-5}$, as desired.
\end{proof}
\begin{proof}[{Proof of Theorem \ref{theorem:lcsgeneral}, upper bound}]
Fix $k \geq 1$. We can then find an integer $l$ so that $\log_2(k) \leq l-1 \leq \log_2(k)+1$. The upper bound of
Theorem \ref{theorem:dergeneral} (which as we observed above holds for $k \geq 1$) implies that we can find $x \in
\DERR{l}{\pi_1(\Sigma)}$ so that
$$i(x) \leq 2^{4l-5} \leq 2^{4 (\log_2(k)+2)-5} = 8k^4.$$
By Lemma \ref{lemma:threesubgroups}, we have $x \in \LCS{k}{\pi_1(\Sigma)}$, so we conclude that $\LCSNorm{k}{\Sigma}
\leq 8k^4$, as desired.
\end{proof}
\section{Word length in the lower central series}
\label{section:fox}
In this section, we will prove Theorem \ref{theorem:fox}. As was indicated in the introduction,
this proof is inspired by an argument of Fox \cite[Lemma 4.2]{FoxFree}. Our main tool
will be the Fox free differential calculus, so we begin by recalling a number of basic
facts about this calculus. A good reference is \cite{FoxFree}.
Let $F$ be the free group on a set $S$ and let $\varepsilon : \Z F \rightarrow \Z$ be
the augmentation map; i.e.\ the unique linear map with $\varepsilon(g) = 1$ for all $g \in F$.
\begin{definition}
A {\em free derivative} is a linear map $D : \Z F \rightarrow \Z F$ so that
$D(xy) = (D(x))\varepsilon(y) + x D(y)$ for all $x,y \in \Z F$.
\end{definition}
An easy induction establishes that if $D$ is a free derivative, then for $v_1,\ldots,v_k \in \Z F$
we have
\begin{equation}
\label{eqn:productrule}
D(v_1 \cdots v_k) = \sum_{i=1}^k (v_1 \cdots v_{i-1})(D(v_i))\varepsilon(v_{i+1}) \cdots \varepsilon(v_{k}).
\end{equation}
A consequence of \eqref{eqn:productrule} is that for $g \in F$, we have
\begin{equation}
\label{eqn:inverserule}
D(g^{-1}) = - g^{-1} D(g).
\end{equation}
The basic existence result for free derivatives is the following.
\begin{lemma}[{\cite[\S 2]{FoxFree}}]
\label{lemma:freederivexis}
For every $s \in S$, there is a unique free derivative $D_s$ satisfying $D_s(s) = 1$ and $D_s(s') = 0$
for $s' \in S$ with $s' \neq s$.
\end{lemma}
\noindent
By \eqref{eqn:productrule} and \eqref{eqn:inverserule}, we have
\begin{equation}
\label{eqn:exprule}
\varepsilon(D_s(s^k)) = k
\end{equation}
for all $s \in S$ and $k \in \Z$.
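\noindent
For instance (this example is only for illustration), combining \eqref{eqn:productrule} and \eqref{eqn:inverserule} gives
$$D_s(s^2) = 1 + s \quad \text{and} \quad D_s(s^{-2}) = -s^{-1} - s^{-2},$$
whose augmentations are $2$ and $-2$, in accordance with \eqref{eqn:exprule}.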
For $k \geq 1$ and $s_1,\ldots,s_k \in S$, we will call
the product $D_{s_1} \cdots D_{s_k}$ a {\em free derivative of order $k$}.
The basic fact connecting the Fox free differential calculus to
the lower central series of $F$ is the following easy lemma.
\begin{lemma}[{\cite[3.1]{ChenFoxLyndon}}]
\label{lemma:freelcs}
For $k \geq 2$ and $g \in \LCS{k}{F}$, we have $\varepsilon(D(g)) = 0$ for all free derivatives $D$
of order less than or equal to $k-1$.
\end{lemma}
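\noindent
To illustrate Lemma \ref{lemma:freelcs} (this example is not used later), take $S = \{a,b\}$ and $w = aba^{-1}b^{-1} \in \LCS{2}{F}$. Using \eqref{eqn:productrule} and \eqref{eqn:inverserule}, we compute
$$D_a(w) = 1 - aba^{-1}, \quad \text{so} \quad \varepsilon(D_a(w)) = 0,$$
as the lemma predicts, while the order $2$ free derivative
$$D_b D_a(w) = -D_b(aba^{-1}) = -a$$
has augmentation $-1 \neq 0$; by Lemma \ref{lemma:freelcs}, this shows that $w \notin \LCS{3}{F}$.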
We can now prove Theorem \ref{theorem:fox}.
\begin{proof}[{Proof of Theorem \ref{theorem:fox}}]
Consider $w \in \LCS{k}{F(S)}$ so that $w \neq 1$. Our goal is to show that $k \leq \Length{S}{w}$. We will
produce a free derivative $D$ whose order is at most $\Length{S}{w}$ so that $\varepsilon(D(w)) \neq 0$.
By Lemma \ref{lemma:freelcs}, it will follow that
$$w \notin \LCS{1+\Length{S}{w}}{F(S)},$$
and hence that $k \leq \Length{S}{w}$.
Write $w = u_1 \cdots u_n$ with $u_i = s_i^{m_i}$ for some $s_i \in S$ and $m_i \in \Z \setminus \{0\}$ for $1 \leq i \leq n$.
Choose this expression so that $s_i \neq s_{i+1}$ for $1 \leq i < n$. We thus have
$n \leq \Length{S}{w}$. Define $D = D_{s_1} \cdots D_{s_n}$.
We must show that $\varepsilon(D(w)) \neq 0$. In fact, we will show that for all $1 \leq j \leq n$ we have
\begin{align}
&D_{s_{j}} D_{s_{j+1}} \cdots D_{s_{n}}(w) \label{eqn:biggoal}\\
&\quad\quad = \sum_{1 \leq i_{j} < i_{j+1} < \cdots < i_n \leq n} (u_1 \cdots u_{i_j-1})(D_{s_j}(u_{i_j})) \varepsilon(D_{s_{j+1}}(u_{i_{j+1}})) \cdots \varepsilon(D_{s_n}(u_{i_n})). \notag
\end{align}
In particular, the case $j=1$ will yield
$$D(w) = D_{s_1}(u_1) \varepsilon(D_{s_2}(u_2)) \cdots \varepsilon(D_{s_n}(u_n)).$$
Using \eqref{eqn:exprule}, we will then be able to deduce that
$$\varepsilon(D(w)) = \varepsilon(D_{s_1}(u_1)) \cdots \varepsilon(D_{s_n}(u_n)) = m_1 \cdots m_n \neq 0,$$
as desired.
The proof of \eqref{eqn:biggoal} will be by induction on $n-j$. The base case $n-j=0$ follows from
\eqref{eqn:productrule} and the fact that $\varepsilon(u_i) = 1$ for all $1 \leq i \leq n$. Now assume
that $n-j > 0$ and that \eqref{eqn:biggoal} holds for all smaller $n-j$. Since $s_i \neq s_{i+1}$ for
$1 \leq i < n$, we have $s_j \neq s_{j+1}$, and hence $D_{s_j} D_{s_{j+1}}(u_i) = 0$ for all $1 \leq i \leq n$: indeed, $D_{s_{j+1}}(u_i)$ vanishes unless $s_i = s_{j+1}$, in which case it is a $\Z$-linear combination of powers of $s_{j+1}$, each of which is annihilated by $D_{s_j}$. Using this together with \eqref{eqn:productrule}, our inductive
hypothesis, and the fact that $\varepsilon(u_i) = 1$ for all $1 \leq i \leq n$, we obtain
\begin{align*}
&D_{s_{j}} D_{s_{j+1}} \cdots D_{s_{n}}(w) \\
&\quad\quad = D_{s_j}(\sum_{1 \leq i_{j+1} < \cdots < i_n \leq n} (u_1 \cdots u_{i_{j+1}-1})(D_{s_{j+1}}(u_{i_{j+1}})) \varepsilon(D_{s_{j+2}}(u_{i_{j+2}})) \cdots \varepsilon(D_{s_n}(u_{i_n}))) \\
&\quad\quad = \sum_{1 \leq i_{j+1} < \cdots < i_n \leq n} (\sum_{i=1}^{i_{j+1}-1} (u_1 \cdots u_{i-1})(D_{s_j}(u_i)) \varepsilon(D_{s_{j+1}}(u_{i_{j+1}})) \cdots \varepsilon(D_{s_n}(u_{i_n}))) \\
&\quad\quad = \sum_{1 \leq i_{j} < i_{j+1} < \cdots < i_n \leq n} (u_1 \cdots u_{i_j-1})(D_{s_j}(u_{i_j})) \varepsilon(D_{s_{j+1}}(u_{i_{j+1}})) \cdots \varepsilon(D_{s_n}(u_{i_n})),
\end{align*}
and we are done.
\end{proof}
\section{Introduction}
The aim of this note is to extend results on a model of a chemically reacting, heat-conducting compressible gaseous mixture based on the model considered e.g. in \cite{Gi}. Results dealing with steady solutions for this model appeared recently in \cite{GPZ} and \cite{PiPo} for the case of Dirichlet boundary conditions; see also \cite{Za}, which can actually be considered the first result in this direction (dealing, however, with a slightly simplified model).
The common feature of these three papers is the fact that the
weak solutions (and also variational entropy solutions in \cite{PiPo}) were obtained for any relatively rough data, without any assumption on their size or on the distance to a known (possibly regular) solution.
This paper is devoted to the proof of existence of weak and variational entropy solutions to the model introduced in \cite{GPZ}. Due to the slip boundary conditions for the velocity, we are able to extend the range of parameters for which the weak solutions exist. This corresponds to a fact which has been observed several times for the compressible Navier--Stokes and Navier--Stokes--Fourier systems: the slip boundary conditions allow for better density (and sometimes also velocity) estimates, which leads to stronger results in this case; see e.g. \cite{MuPo1}, \cite{PoMu}, \cite{JN}, \cite{MuPo2}, \cite{JNP} or also \cite{MPZ_Handbook}.
In what follows, we use standard notation for the Lebesgue, Sobolev and other standard function spaces as well as for norms in these spaces. The scalar valued functions will be denoted by the standard font (e.g., $\varrho$ and $\vartheta$ for the density and temperature, respectively), the vector valued functions will be printed in bold face (e.g., $\vc{u}$ for the velocity) and the tensor valued functions using a special font (e.g., $\tn{S}$ for the viscous part of the stress tensor). The generic constants are denoted by $C$ and their value may change from line to line or even in the same formula.
\subsection{The model}
We consider the following system of partial differential equations
\begin{equation}\label{1}
\begin{array}{c}
{\rm div}\, (\varrho \vc{u}) = 0,\\
{\rm div}\, (\varrho \vc{u} \otimes \vc{u}) - {\rm div}\, \tn{S} + \nabla \pi =\varrho \vc{f},\\
{\rm div}\, (\varrho E\vc{u} )+{\rm div}\,(\pi\vc{u}) +{\rm div}\,\bf{Q}- {\rm div}\, (\tn{S}\vc{u})=\varrho\vc{f}\cdot\vc{u},\\
{\rm div}\, (\varrho Y_k \vc{u})+ {\rm div}\, {\vc F}_{k} = m_k\omega_{k},\quad k\in \{1,\ldots,n\},
\end{array}
\end{equation}
where the unknown quantities are the total density $\varrho$, the velocity field $\vc{u}$, the temperature $\vartheta$ (appearing in (\ref{1}) implicitly, see below) and the mass fractions $Y_k = \varrho_k/\varrho$, where $\{\varrho_k\}_{k=1}^n$ are the densities of the constituents. As $\sumkN \varrho_k = \varrho$, we have $\sumkN Y_k =1$. The other functions, i.e. the stress tensor $\tn{S}$, the pressure $\pi$, the total energy $E$, the heat flux $\bf{Q}$, the diffusion fluxes ${\vc F}_k$ and the molar production rates $\omega_k$ are given functions of these unknowns and will be introduced below. Furthermore, $\vc{f}$ is the given field of external forces (e.g., the gravity force) and $m_k$ denotes the molar mass of the $k$th constituent, $k=1,2,\dots, n$.
System (\ref{1}) is completed by the boundary conditions on $\partial \Omega$
\begin{equation} \label{2}
\begin{array}{rcl}
{\vc F}_{k}\cdot\vc{n}&=&0, \\
-\vc{Q}\cdot\vc{n}+L(\vartheta-\vartheta_{0})&=&0,
\end{array}
\end{equation}
and for the velocity we assume the Navier boundary condition (the slip b.c.)
\begin{equation} \label{4}
\vc{u} \cdot \vc{n} = 0, \qquad (\tn{S} \vc{n} + f \vc{u})\times \vc{n} = \vc{0}.
\end{equation}
Above, the boundary condition for the temperature means that the heat flux through the boundary is proportional to the difference between the temperature inside and outside. The coefficient $f$ (assumed to be constant in what follows) denotes the friction.
We also prescribe the total mass
\begin{equation} \label{5}
\intO {\varrho} = M>0.
\end{equation}
\subsubsection{The stress tensor and the pressure}
We assume the stress tensor $\tn{S}$ to be a given linear function of the symmetric part of the velocity gradient
\begin{equation} \label{6}
{\tn S} = {\tn S}(\vartheta, \widetilde{\tn{D}}(\vc{u}))= \mu\Big[\nabla \vc{u}+(\nabla \vc{u})^{T}-\frac{2}{3}{\rm div}\, \vc{u} \tn{I}\Big]+\nu({\rm div}\, \vc{u})\tn{I},
\end{equation}
where $\widetilde{\tn{D}}(\vc{u}) = \frac 12 (\nabla \vc{u}+(\nabla \vc{u})^{T})$, and the coefficients $\mu=\mu(\vartheta)>0$ (Lipschitz continuous in $\mathbb{R}^+$) and $\nu=\nu(\vartheta)\geq 0$ (continuous in $\mathbb{R}^+$) are the shear and bulk viscosity coefficients, respectively. We assume
\begin{equation} \label{7}
\Un{\mu}(1+\vartheta)\leq\mu(\vartheta)\leq\Ov{\mu}(1+\vartheta),\quad 0\leq\nu(\vartheta)\leq\Ov{\nu}(1+\vartheta)
\end{equation}
for positive constants $\underline{\mu},\overline{\mu},\overline{\nu}$.
Furthermore, $\tn{I}$ is the identity matrix.
The pressure is taken in the form
\begin{equation}\label{8}
\pi =\pi(\varrho,\vartheta)=\pi_{c}(\varrho)+\pi_{m}(\varrho,\vartheta)
\end{equation}
where the cold pressure is assumed in the form
\begin{equation} \label{9}
\pi_{c}=\varrho^{\gamma}, \quad \gamma>1.
\end{equation}
A more general pressure form can be assumed, as in the case of the steady compressible Navier--Stokes--Fourier system, see e.g. \cite{NoPo_JDE} or \cite{FPT} in the context of the chemically reacting flows. We, however, prefer to keep its form as simple as possible.
The molecular pressure $\pi_{m}$, according to the Boyle law, satisfies
\begin{equation}\label{9a}
\pi_{m}=\pi_m(\varrho,\vartheta) = \sumkN p_k(\varrho,\vartheta) = \sumkN\frac{\varrho Y_k}{m_k}\vartheta,
\end{equation}
where, for simplicity, the gas constant is taken to be equal to one.
\subsubsection{The energy and the heat flux}
The specific total energy $E$ is a sum of the specific kinetic and specific internal energies (we denote $\vec Y = (Y_1,Y_2,\dots,Y_n)$)
\begin{equation} \label{10}
E= E(\varrho,\vc{u},\vartheta,\vec Y)=\frac{1}{2}|\vc{u}|^{2}+e(\varrho,\vartheta,\vec Y).
\end{equation}
Due to the form of the pressure the internal energy consists of two components
\begin{equation} \label{11}
e=e_{c}(\varrho)+e_{m}(\vartheta,\vec Y),
\end{equation}
where the cold energy $e_c$ and the molecular internal energy $e_m$ are given by
\begin{equation} \label{12}
e_{c}=\frac{1}{\gamma-1}\varrho^{\gamma-1},\qquad\qquad e_m= \sumkN Y_ke_k=\vartheta\sumkN c_{vk}Y_k.
\end{equation}
Above, $c_{vk}$ are the constant-volume specific heats and can be different for different species. They are related to the
constant-pressure specific heats by
\begin{equation}\label{13}
c_{pk}=c_{vk}+\frac {1}{m_k}
\end{equation}
and both $c_{vk}$ and $c_{pk}$ are assumed to be constant.
The heat flux $\bf{Q}$ consists of two terms
\begin{equation}\label{13a}
\vc{Q}=\sumkN h_k {\vc F}_{k}+\vc{q},
\end{equation}
where the first term represents transfer of energy due to the species molecular diffusion (and $h_k$, defined below, are the enthalpies) and the second one the Fourier law
\begin{equation}\label{13b}
\vc{q}=-\kappa\nabla\vartheta.
\end{equation}
The coefficient $\kappa=\kappa(\vartheta)$ is the thermal conductivity coefficient and we assume
\begin{equation} \label{13c}
\underline{\kappa}(1+\vartheta^{m})\leq\kappa(\vartheta)\leq\Ov{\kappa}(1+\vartheta^{m})
\end{equation}
for some constants $m,\underline{\kappa},\overline{\kappa}>0$.
\subsubsection{Diffusion flux and species production rates}
The form of the diffusion flux is the most important part modeling the interaction between the species. Following \cite{Gi} we assume that
\begin{equation} \label{14}
{\vc F}_k = -\sumlN C_{kl} \vc{d}_l, \quad k=1,2,\dots, n,
\end{equation}
where
\begin{equation}
\label{15}
\vc{d}_k = \nabla \Big(\frac{p_k}{\pi_m}\Big) + \Big( \frac{p_k}{\pi_m} - \frac{\varrho_k}{\varrho}\Big) \nabla \log \pi_m
=\frac{\nabla p_k}{\pi_m}-Y_k\frac{\nabla \pi_m}{\pi_m}.
\end{equation}
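For the reader's convenience, we note that the second equality in (\ref{15}) is a direct computation using $\varrho_k/\varrho = Y_k$:
$$\nabla \Big(\frac{p_k}{\pi_m}\Big) + \Big( \frac{p_k}{\pi_m} - Y_k\Big)\frac{\nabla \pi_m}{\pi_m} = \frac{\nabla p_k}{\pi_m} - \frac{p_k\nabla \pi_m}{\pi_m^2} + \frac{p_k\nabla \pi_m}{\pi_m^2} - Y_k\frac{\nabla \pi_m}{\pi_m} = \frac{\nabla p_k}{\pi_m}-Y_k\frac{\nabla \pi_m}{\pi_m}.$$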
Furthermore, we introduce another matrix $\tn{D}$ via
$$
C_{kl} = Y_k D_{kl},
$$
where the diffusion matrix $\tn{D}=\tn{D}(\vartheta,\vec{Y})$ has the following properties
\begin{equation}\label{pr15}
\begin{gathered}
\tn{D}=\tn{D}^T,\quad
N(\tn{D})=\mathbb{R} \vec{Y},\quad
R(\tn{D})={\vec{U}}^{\bot},\\
\tn{D} \quad\text{ is positive semidefinite over } \mathbb{R}^n,
\end{gathered}
\end{equation}
with $\vec{Y}=(Y_1,\ldots,Y_n)^T>0$ and $\vec{U} = (1,\dots, 1)^T$.
Above, $N(\tn{D})$ denotes the nullspace of the matrix $\tn{D}$, $R(\tn{D})$ denotes its range, and ${\vec{U}}^{\bot}$ denotes the orthogonal complement of $\vec{U}$. Moreover, the matrix $\tn{D}$ is positive definite over $\vec{U}^{\bot}$ and there exists $\delta>0$ such that
\begin{equation} \label{16}
\delta \langle \tn{Y}^{-1}\vec{x},\vec{x}\rangle \leq \langle \tn{D}\vec{x},\vec{x}\rangle \quad \forall \vec{x} \in \vec{U}^{\bot},
\end{equation}
where $\tn{Y}= {\rm diag}\, (Y_1,\dots,Y_n)$ and $\langle \cdot,\cdot\rangle$ denotes the scalar product in $\mathbb{R}^n$.
Furthermore, $D_{ij}$ are differentiable functions of $\vartheta,Y_1,\ldots,Y_n$ for any $i,j\in\{1,\ldots,n\}$ such that
\begin{equation} \label{1.21a}
|Y_iD_{ij}(\vartheta,\vec{Y})| \leq C(\vec{Y}) (1+\vartheta^a)
\end{equation}
for some $a\geq 0$, and $C(\vec{Y})$ is bounded in $[0,1]^n$. Finally,
\begin{equation} \label{17}
\sumkN {\vc F}_k=\vc{0}.
\end{equation}
Note that we can also consider the Fick law in the form
\begin{equation} \label{1.22a}
{\vc F}_k = -D(\vartheta,\vec Y) \nabla Y_k, \quad k=1,2,\dots, n,
\end{equation}
and the function $D(\cdot,\cdot)$ is a differentiable function fulfilling a similar estimate as (\ref{1.21a}), i.e.
\begin{equation} \label{1.22b}
0<D_0 \leq D(\vartheta,\vec Y) \leq C(\vec Y) (1+\vartheta^a)
\end{equation}
with $a\geq 0$ and $C(\cdot)$ bounded in $[0,1]^n$. Condition (\ref{17}) is indeed fulfilled provided $\sumkN Y_k =1$.
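Indeed, since $D(\vartheta,\vec Y)$ is a scalar and
$$\sumkN \nabla Y_k = \nabla\Big(\sumkN Y_k\Big) = \nabla 1 = \vc{0},$$
the fluxes given by (\ref{1.22a}) satisfy $\sumkN {\vc F}_k = \vc{0}$ pointwise.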
Concerning the species production rates, we assume that $\{\omega_k\}_{k=1}^n$ are differentiable functions of $\varrho,\vartheta,\vec{Y}$ which are bounded, and
such that
\begin{equation} \label{18}
\omega_k \geq - CY_k^r \quad \textrm{for some} \quad C,r>0,
\end{equation}
which means that a species cannot decrease faster
than proportionally to some positive power of its fraction (a possible natural choice is $r=1$). Moreover, this condition clearly implies the compatibility condition $\omega_k \geq 0$ if $Y_k=0$. Furthermore,
\begin{equation} \label{19}
\sumkN m_k \omega_k =0.
\end{equation}
Note that e.g. in \cite{FPT}, instead of $m_k \omega_k$ it is assumed that the species production rate is modeled as $\varrho m_k\omega_k$. We may easily treat here this version and in a sense (see comments to the entropy inequality below) it is in fact simpler.
\subsubsection{Entropy and other thermodynamic potentials; entropy production rate}
Since our main thermodynamic quantities are the internal energy and the pressure, the other thermodynamic potentials are assumed to be given functions of them. In what follows, we assume the thermodynamics connected with a mixture of ideal gases with addition of the cold pressure term and the corresponding term in the internal energy.
The specific enthalpy (of each constituent) has the form
\begin{equation} \label{20}
h_k = c_{pk} \vartheta, \qquad h = \sumkN Y_k h_k,
\end{equation}
where $c_{pk}$ fulfills (\ref{13}). The specific entropy
\begin{equation} \label{21}
s_k = c_{vk} \log \vartheta - \frac{1}{m_k} \log \Big(\frac{\varrho Y_k}{m_k}\Big), \qquad s= \sumkN Y_k s_k,
\end{equation}
and the Gibbs function (Gibbs free energy)
\begin{equation} \label{22}
g_k = h_k -\vartheta s_k, \qquad g = \sumkN Y_k g_k.
\end{equation}
Moreover, the Gibbs formula has the form
\begin{equation}\label{23}
\vartheta {\vc{D}} s={\vc{D}} e+\pi{\vc{D}}\left({\frac {1}{\varrho}}\right)-\sumkN g_{k}{\vc{D}} Y_{k}.
\end{equation}
Using (\ref{23}) it is possible to derive an equation for the specific entropy $s$
\begin{equation}\label{24}
{\rm div}\,(\varrho s\vc{u})+{\rm div}\,\left( \frac{\vc{Q}}{\vartheta}-\sumkN \frac{g_{k}}{\vartheta}{\vc F}_{k}\right)=\sigma,
\end{equation}
where the entropy production rate
\begin{equation} \label{25}
\sigma=\frac{{\tn S}:\nabla\vc{u}}{\vartheta}-{\frac{\vc{Q}\cdot\nabla\vartheta}{\vartheta^{2}}}-\sumkN{\vc F}_{k}\cdot\nabla\left({\frac{g_{k}} {\vartheta}}\right)-\frac{\sumkN m_k g_{k}\omega_{k}}{\vartheta}.
\end{equation}
Note that the entropy production rate can be expressed in the form
\begin{equation} \label{26}
\sigma=\frac{{\tn S}:\nabla\vc{u}}{\vartheta}+{\frac{\kappa |\nabla\vartheta|^2}{\vartheta^{2}}}-\sumkN\frac{{\vc F}_{k}}{m_k}\cdot\nabla\log p_k-\frac{\sumkN m_k g_{k}\omega_{k}}{\vartheta}.
\end{equation}
Then we easily see that the first two terms are non-negative due to the form of the stress tensor and the positivity of $\kappa$. Moreover, we assume that
\begin{equation} \label{26a}
\sumkN m_k g_k \omega_k \leq 0,
\end{equation}
which implies that also the fourth term is non-negative. Finally,
$$
\begin{aligned}
-\sumkN\frac{{\vc F}_{k}}{m_k}\cdot\nabla\log p_k &= \sum_{k,l=1}^n \frac{Y_k D_{kl}}{m_k} \Big[\nabla \Big(\frac{p_l}{\pi_m}\Big) + \Big(\frac{p_l}{\pi_m}-\frac{\varrho_l}{\varrho}\Big)\nabla \log \pi_m \Big] \cdot \nabla \log p_k \\
&= \sum_{k,l=1}^n \frac{Y_k D_{kl}}{m_k} \Big[ \frac{\nabla p_l}{\pi_m} -\frac{\varrho_l}{\varrho} \frac{\nabla \pi_m}{\pi_m} \Big] \cdot \frac{\nabla p_k}{p_k}\\
&= \frac{\pi_m}{\varrho\vartheta}\sum_{k,l=1}^n D_{kl} \Big[ \frac{\nabla p_l}{\pi_m} -Y_l \frac{\nabla \pi_m}{\pi_m} \Big] \cdot \Big[ \frac{\nabla p_k}{\pi_m} -Y_k \frac{\nabla \pi_m}{\pi_m} \Big]\\
+ \frac{\pi_m}{\varrho\vartheta}& \sum_{k,l=1}^n D_{kl} \Big[ \frac{\nabla p_l}{\pi_m} -Y_l \frac{\nabla \pi_m}{\pi_m} \Big] \cdot Y_k \frac{\nabla \pi_m}{\pi_m} \geq 0
\end{aligned}
$$
due to the properties of the matrix $\tn{D}$, as $\sumkN {\vc F}_k = \vc{0}$ implies
\begin{equation} \label{27}
\sum_{k=1}^n Y_k D_{kl} = 0 \quad \forall l=1,2,\dots, n.
\end{equation}
Notice in particular that the Fick law (\ref{1.22a}) is not a special case of (\ref{14})--(\ref{pr15}).
\subsection{Formulation of the problem}
We now formulate the problem we treat in this paper.
\subsubsection{Non-diagonal diffusion matrix}
Unfortunately, the general non-diagonal form of the diffusion flux is too complex to be considered in the full generality. In particular, the above deduced lower bound of the corresponding term in the entropy production rate does not allow us to control the gradient of the mass fractions. A certain attempt has been done in the evolutionary case, see \cite{MPZ}, however, it leads to the necessity to control the (total) density gradient, which can be obtained for the fluids with density dependent viscosities satisfying the Bresch--Desjardins identity. The same idea does not work in the steady problem and therefore we must restrict ourselves (cf. \cite{GPZ} or \cite{PiPo}) to the case when all molar masses are comparable. We therefore assume that $m_1=m_2=\dots=m_n$ and without loss of generality we set this common value to be equal to one.
Then $\pi_m = \sumkN \varrho_k \vartheta = \varrho\vartheta$, $\frac{\nabla p_l}{\pi_m} -Y_l \frac{\nabla \pi_m}{\pi_m} = \nabla Y_l + Y_l \frac{\nabla (\varrho\vartheta)}{\varrho\vartheta} - Y_l \frac{\nabla (\varrho\vartheta)}{\varrho\vartheta}= \nabla Y_l$.
Therefore
$$
{\vc F}_k=-\sum_{l=1}^n Y_k D_{kl}\nabla Y_l
$$
and
$$
-\sumkN\frac{{\vc F}_{k}}{m_k}\cdot\nabla\log p_k = -\sumkN{\vc F}_{k}\cdot\Big(\frac{\nabla Y_k}{Y_k}+\frac{\nabla(\varrho\vartheta)}{\varrho\vartheta}\Big)= \sum_{k,l=1}^n D_{kl} \nabla Y_l \cdot \nabla Y_k \geq c |\nabla \vec{Y}|^2,
$$
provided $\vec{Y} \geq 0$ and $\sumkN Y_k =1$. It is precisely this estimate that allows us to obtain the existence of a solution in this case. We may therefore consider
\bigskip
\noindent {\bf Problem P} (Non-diagonal diffusion matrix)
We consider system (\ref{1}) with boundary conditions (\ref{2}), (\ref{4}), given total mass (\ref{5}), and (\ref{6})--(\ref{17}), (\ref{18})--(\ref{25}) with equal molar masses $m_1=m_2=\dots=m_n =1$.
\bigskip
\subsubsection{Fick's law}
We obtain a similar estimate in the case of the Fick law (\ref{1.22a}). Assuming again that all molar masses are the same (and, for notational simplicity, equal to 1), we have
\begin{multline*}
-\sumkN\frac{{\vc F}_{k}}{m_k}\cdot\nabla\log p_k = -\sumkN{\vc F}_{k}\cdot\nabla\log p_k \\ =\sumkN \frac{D(\vartheta,\vec Y)}{Y_k} \nabla Y_k\cdot \nabla Y_k + \sumkN D(\vartheta,\vec Y) \nabla Y_k \cdot \frac{\nabla (\varrho\vartheta)}{\varrho \vartheta} \geq D_0 |\nabla \vec Y|^2.
\end{multline*}
Note that for equal molar masses the Fick law behaves in exactly the same way as Problem P, and therefore we do not consider it separately.
\section{Definitions of solutions. Existence results}
Problem P with the Dirichlet boundary conditions for the velocity has been studied in \cite{GPZ} and \cite{PiPo}; in analogy to that case, we present the definitions of weak and variational entropy solutions in the spirit of the paper \cite{JNP}. We introduce
$$
C^1_{\vc{n}} (\Omega) = \{\vc{w} \in C^1(\Ov{\Omega}); \vc{w} \cdot \vc{n}=0 \text{ on } \partial \Omega\}.
$$
\begin{definition}\label{d1}
We say the set of functions $(\varrho,\vc{u},\vartheta, \vec{Y})$ is a weak solution to system
(\ref{1}) with boundary conditions (\ref{2}), (\ref{4}), given total mass (\ref{5}), and (\ref{6})--(\ref{17}), (\ref{18})--(\ref{25}) with equal molar masses $m_1=m_2=\dots=m_n =1$
provided
\begin{itemize}
\item
$\varrho \geq 0$ a.e. in $\Omega$, $\varrho \in L^{6\gamma/5}(\Omega)$, $\int_{\Omega} \varrho{\, \rm d}x=M$
\item
$\vc{u} \in W^{1,2}(\Omega)$, $\vc{u}\cdot \vc{n} = 0$ a.e. on $\partial \Omega$, $\varrho |\vc{u}|$ and $\varrho |\vc{u}|^2 \in L^{\frac{6}{5}}(\Omega)$
\item
$\vartheta \in W^{1,2}(\Omega) \cap L^{3m}(\Omega)$, $\varrho \vartheta, \varrho\vartheta|\vc{u}|, {\tn S}\vc{u}, \kappa|\nabla \vartheta| \in L^1(\Omega)$
\item
$\vec{Y}\in W^{1,2}(\Omega)$, $Y_k \geq 0$ a.e. in $\Omega$, $\sumkN Y_k = 1$ a.e. in $\Omega$, ${\vc F}_k\cdot \vc{n}|_{\partial \Omega}=0$
\end{itemize}
and the following integral equalities hold\\
$\bullet$ the weak formulation of the continuity equation
\begin{equation}\label{weak_cont}
\intO{\varrho \vc{u}\cdot\nabla\psi} = 0
\end{equation}
holds for any test function $\psi\in C^{1}(\Ov{\Omega})$;\\
$\bullet$ the weak formulation of the momentum equation
\begin{equation} \label{weak_mom}
-\intO{\big(\varrho\left(\vc{u}\otimes\vc{u}\right):\nabla\vcg{\varphi}-\tn{S}:\nabla \vcg{\varphi}\big)}+ f\intpO{\vc{u} \cdot \vcg{\varphi}}-\intO {\pi {\rm div}\,\vcg{\varphi}}=\intO{\varrho\vc{f}\cdot\vcg{\varphi}}
\end{equation}
holds for any test function $\vcg{\varphi}\in C^{1}_{\vc{n}}(\Omega)$;\\
$\bullet$ the weak formulation of the species equations
\begin{equation}\label{weak_spe}
-\intO{ Y_{k}\varrho\vc{u}\cdot\nabla\psi}-\intO{{\vc F}_k\cdot\nabla\psi}=\intO{\omega_{k}\psi}
\end{equation}
holds for any test function $\psi\in C^{1}(\Ov{\Omega})$ and for all $k=1,\ldots,n$;\\
$\bullet$ the weak formulation of the total energy balance
\begin{equation} \label{weak_ene}
\begin{aligned}
&-\intO{\left(\frac{1}{2}\varrho|\vc{u}|^{2}+\varrho e\right)\vc{u}\cdot\nabla\psi}+\intO{\kappa\nabla\vartheta\cdot\nabla\psi}
-\intO{\left(\sumkN{h_{k}{\vc F}_{k}}\right)\cdot\nabla\psi}\\
&=\intO{\varrho\vc{f}\cdot\vc{u}\psi}-\intO{\lr{\tn{S}\vc{u}}\cdot\nabla\psi}+\intO{\pi\vc{u}\cdot\nabla\psi}\\
&-\intpO{L(\vartheta-\vartheta_{0})\psi} - f\intpO{|\vc{u}|^2 \psi}
\end{aligned}
\end{equation}
holds for any test function $\psi\in C^{1}(\Ov{\Omega})$.
\end{definition}
Indeed, the total energy balance, which contains a term behaving as $\varrho|\vc{u}|^3$, limits the range of $\gamma$ and $m$ for which we are able to prove existence of a weak solution. Following a similar situation for the compressible Navier--Stokes--Fourier system (both steady and evolutionary, see \cite{FeNo} or \cite{MPZ_Handbook}), we introduce another type of solution, where the total energy balance is replaced by the entropy inequality.
\begin{definition} \label{d2}
We say the set of functions $(\varrho,\vc{u},\vartheta, \vec{Y})$ is a variational entropy solution to problem
(\ref{1}) with boundary conditions (\ref{2}), (\ref{4}), given total mass (\ref{5}), and (\ref{6})--(\ref{17}), (\ref{18})--(\ref{25}) with equal molar masses $m_1=m_2=\dots=m_n =1$
provided
\begin{itemize}
\item
$\varrho \geq 0$ a.e. in $\Omega$, $\varrho \in L^{s\gamma}(\Omega)$ for some $s>1$, $\int_{\Omega} \varrho{\, \rm d}x=M$
\item
$\vc{u} \in W^{1,2}(\Omega)$, $\vc{u}\cdot \vc{n} = 0$ a.e. on $\partial \Omega$, $\varrho \vc{u} \in L^{\frac{6}{5}}(\Omega)$
\item
$\vartheta \in W^{1,r}(\Omega) \cap L^{3m}(\Omega)$, $r>1$, $\varrho \vartheta, \tn {S}:\frac{\nabla \vc{u}}{\vartheta}, \kappa\frac{|\nabla \vartheta|^2}{\vartheta^2}, \kappa\frac{\nabla \vartheta}{\vartheta} \in L^1(\Omega)$,
$\frac{1}{\vartheta} \in L^1(\partial \Omega)$
\item
$\vec{Y}\in W^{1,2}(\Omega)$, $Y_k \geq 0$ a.e. in $\Omega$, $\sumkN Y_k = 1$ a.e. in $\Omega$, ${\vc F}_k\cdot \vc{n}|_{\partial \Omega}=0$
\end{itemize}
satisfy equations (\ref{weak_cont})--(\ref{weak_spe}),
the following entropy inequality
\begin{multline} \label{entropy_ineq}
\int_{\Omega} \frac{ \tn {S} : \nabla \vc{u}}{\vartheta}\psi {\, \rm d}x
+\int_{\Omega} \kappa\frac{|\nabla \vartheta|^2}{\vartheta^2}\psi {\, \rm d}x
-\int_{\Omega}\sumkN \omega_k (c_{pk}-c_{vk} \log \vartheta + \log Y_k)\psi{\, \rm d}x\\
+\int_{\Omega} \psi \sum_{k,l=1}^n D_{kl}\nabla Y_k \cdot \nabla Y_l {\, \rm d}x
+\intpO{\frac{L}{\vartheta}\vartheta_0\psi} \leq
\int_{\Omega} \frac{\kappa \nabla \vartheta \cdot \nabla \psi}{\vartheta} {\, \rm d}x
-\int_{\Omega} \varrho s \vc{u} \cdot \nabla \psi {\, \rm d}x\\
-\int_{\Omega} \log \vartheta \Big(\sumkN \vc{F}_k c_{vk}\Big) \cdot \nabla \psi {\, \rm d}x
+\int_{\Omega} \Big(\sumkN \vc{F}_k \log Y_k\Big) \cdot\nabla \psi {\, \rm d}x
+ \intpO{L\psi}
\end{multline}
for all non-negative $\psi \in C^1(\overline{\Omega})$
and the global total energy balance (i.e. (\ref{weak_ene}) with $\psi \equiv 1$)
\begin{equation} \label{glob_ene}
f\intpO{|\vc{u}|^2}+ \intpO{L(\vartheta-\vartheta_0)} = \intO{\varrho \vc{f}\cdot \vc{u}}.
\end{equation}
\end{definition}
Formally, the entropy inequality (\ref{entropy_ineq}) is nothing but a weak formulation of the entropy equation (\ref{24}). However, some modifications are required. First of all, we are not able to keep the equality: due to the technique used to prove existence of such solutions, there are several terms in which we cannot pass to the limit directly, and we have to apply weak lower semicontinuity instead. Note further that (\ref{entropy_ineq}) does not contain all terms from (\ref{24}); some of them are missing. These terms are formally equal to zero due to the assumptions that $\omega_k$ and ${\vc F}_k$ sum up to zero. We removed them from the formulation of the entropy inequality because we cannot exclude the situation that $\varrho=0$ on some large portion of $\Omega$ (with positive Lebesgue measure), in which case $\log \varrho$ is not well defined there. Nevertheless, the variational entropy solution still has the property that any sufficiently smooth variational entropy solution in the sense above is a classical solution to our problem, provided the density is strictly positive in $\Omega$. Replacing the source terms in the species balance equations by $\varrho \omega_k$, we do not even face this problem.
We are now in position to formulate our main result.
\begin{theorem} \label{t1}
Let $\gamma > 1$, $M>0$, $m > \max\{\frac 23, \frac{2}{3(\gamma-1)}\}$, $a < \frac{3m}{2}$, $\vartheta_0 \in L^1(\partial \Omega)$, $\vartheta_0 \geq K_0 >0$ a.e. on $\partial \Omega$. Let $\Omega$ be a domain of class $C^2$ which is not axially symmetric. Then there exists at least one variational entropy solution to Problem P in the sense of Definition \ref{d2}. Moreover, $(\varrho,\vc{u})$ is a renormalized solution to the continuity equation.
In addition, if $m > 1$, $\gamma > \frac 54$, $a< \frac{3m-2}{2}$, then the solution is a weak solution in the sense of Definition \ref{d1}.
If $\Omega$ is axially symmetric, let $f>0$. Then there exists at least one variational entropy solution
to Problem P. In addition, if $\gamma > \frac 54$, $m > 1$, $m> \frac{16\gamma}{15\gamma -16}$ (if $\gamma \in (\frac 54,\frac 43])$ or $m> \frac{18-6\gamma}{9\gamma -7}$ (if $\gamma \in (\frac 43,\frac 53))$ then the solution is a weak solution.
\end{theorem}
\begin{remark} \label{r2}
Recall that the pair $(\varrho,\vc{u})$ is a renormalized solution to the continuity equation provided $\vc{u} \in W^{1,2}(\Omega)$, $\varrho \in L^{\frac 65}(\Omega)$ and for any $b \in C^{1}(0,\infty)\cap C([0,\infty))$, $b'(z) = 0$ for $z\geq M$ for some $M>0$
$$
\int_{\Omega} \Big(b(\varrho) {\rm div}\, \psi + (b(\varrho)-b'(\varrho)\varrho){\rm div}\, \vc{u} \psi\Big) {\, \rm d}x =0
$$
for all $\psi \in C^1(\Ov{\Omega})$.
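Let us also recall where this identity formally comes from: for smooth $(\varrho,\vc{u})$ with ${\rm div}\,(\varrho\vc{u})=0$ in $\Omega$ and $\vc{u}\cdot\vc{n}=0$ on $\partial\Omega$, the chain rule yields
$$
{\rm div}\,\big(b(\varrho)\vc{u}\big) = b'(\varrho)\nabla\varrho\cdot\vc{u} + b(\varrho)\,{\rm div}\,\vc{u} = \big(b(\varrho)-b'(\varrho)\varrho\big)\,{\rm div}\,\vc{u},
$$
as $\nabla\varrho\cdot\vc{u} = -\varrho\,{\rm div}\,\vc{u}$; multiplying by $\psi$ and integrating by parts (no boundary term appears due to $\vc{u}\cdot\vc{n}=0$) gives precisely the identity above.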
\end{remark}
\section{Proof of the existence results}
As explained above, it is enough to prove Theorem \ref{t1} for the case of a generally nondiagonal diffusion matrix. Indeed, for the Fick law the proof could be simplified due to the special structure of the diffusion flux, but we prefer to keep the unified approach and only indicate the one place which is slightly different. The result is exactly the same as in Theorem \ref{t1}.
\begin{proof} { (of Theorem \ref{t1}).}
First, we define for positive parameters $\delta > \varepsilon > \lambda > \eta >0$ the following approximations of several quantities appearing in the formulation of Problem P. We start with
\begin{equation} \label{101}
{\vc J}_k =-\sumlN Y_k Y_l\widehat{ D}_{kl}(\vartheta,\vec{Y})\nabla Y_l/Y_l
-\big(\varepsilon(\varrho+1) Y_k+\lambda\big)\nabla Y_k/Y_k,
\end{equation}
with
\begin{equation} \label{102}
\widehat {D}_{kl}(\vartheta,\vec{Y}) = \frac{1}{(\sigma_Y+\varepsilon)^r} D_{kl}(\vartheta,\vec{Y})
\end{equation}
for suitably chosen $r\geq 0$,
where $\sigma_Y=\sumkN Y_k$. The reason for this notation is that, before we let $\lambda \to 0^+$, it is not clear whether $\sigma_Y=1$; we only know that $Y_k \geq 0$.
For the case of the Fick law this regularization can be simplified. However, in order to keep the unified approach,
we only slightly modify this step. Instead of (\ref{101}) we set
\begin{equation} \label{101a}
{\vc J}_k =-\widehat{D}(\vartheta,\vec{Y})\nabla Y_k
-\big(\varepsilon(\varrho+1) Y_k+\lambda\big)\nabla Y_k/Y_k,
\end{equation}
where $\widehat D$ is defined similarly to $\widehat{D}_{kl}$ in (\ref{102}).
Furthermore, we introduce a regularization of the stress tensor
\begin{equation}\label{103}
\tn{S}_{\eta}={\frac{\mu_{\eta}(\vartheta)}{1+\eta\vartheta}}\left[\nabla \vc{u}+(\nabla\vc{u})^T-\frac{2}{3}{\rm div}\, \vc{u} \, \tn{I}\right]+{\frac{\nu_\eta(\vartheta)}{1+\eta\vartheta}}\lr{{\rm div}\, \vc{u}}\tn{I},
\end{equation}
where $\mu_{\eta},\nu_{\eta}$ are standard mollifications of the viscosity functions.
Next,
\begin{equation} \label{104}
\kappa_{\delta,\eta} = \kappa^\eta + \delta \vartheta^B + \delta \vartheta^{-1}
\end{equation}
is a regularization of the heat conductivity coefficient, where the sufficiently large exponent $B>0$ will
be determined later, and $\kappa^\eta$ is the standard mollification of the heat conductivity.
We take the following approximation of the specific entropy
\begin{equation} \label{105}
s_k^\lambda=c_{vk}\log \vartheta - \log Y_k - \log(\varrho+\sqrt{\lambda}),
\end{equation}
and, similarly
\begin{equation} \label{106}
g_k^\lambda = c_{pk} \vartheta - \vartheta s_k^\lambda, \quad s^\lambda=\sumkN Y_k s_k^\lambda.
\end{equation}
In what follows, we present only the main steps of the existence proof, always pointing out the specific paper where more details can be found.
\medskip
\noindent {\it Step I: Formulation of the approximate problem.}
We additionally consider one more parameter, $N\in \mathbb{N}$, denoting the dimension of the Galerkin approximation for the velocity.
Let $\{\vc{w}_n\}_{n=1}^{\infty}$ be an orthogonal basis of $W^{1,2}(\Omega)$ such that $\vc{w}_n\cdot \vc{n}=0$ on $\partial \Omega$ and
$\vc{w}_i \in W^{2,q}(\Omega)$ for all $q<\infty$ (we can take, for example, eigenfunctions of the Lam\'e
system with slip boundary conditions).
We look for
$(\varrho_{N,\eta,\lambda,\varepsilon,\delta},\vc{u}_{N,\eta,\lambda,\varepsilon,\delta},\vec{Y}_{N,\eta,\lambda,\varepsilon,\delta},\vartheta_{N,\eta,\lambda,\varepsilon,\delta})$
(from now on we skip the indices) such that\\
$\bullet$ the approximate continuity equation
\begin{equation} \label{108}
\begin{aligned}
\varepsilon\varrho+{\rm div}\, (\varrho \vc{u}) &= \varepsilon\Delta\varrho+\varepsilon \Ov{\varrho},\\
\nabla\varrho\cdot\vc{n}| _{\partial\Omega}&=0,
\end{aligned}
\end{equation}
where $\bar \varrho = \frac{M}{|\Omega|}$,
is satisfied pointwise\\
$\bullet$ the Galerkin approximation for the momentum equation (note that the convective term reduces to the standard form provided ${\rm div}\,(\varrho\vc{u})=0$, even in the weak sense)
\begin{multline}\label{109}
\intOB{\frac{1}{2}\varrho\vc{u}\cdot\nabla\vc{u}\cdot\vc{w}-\frac{1}{2}\varrho\left(\vc{u}\otimes\vc{u}\right):\nabla\vc{w}+\tn{S}_{\eta}:\nabla\vc{w}}\\+f \intpO{\vc{u} \cdot \vc{w}}-\intO{(\pi+\delta\varrho^{\beta}+\delta\varrho^{2}){\rm div}\,\vc{w}}=\intO{\varrho \vc{f}\cdot\vc{w}}
\end{multline}
is satisfied for each test function $\vc{w}\in X_{N}$, where $\vc{u} \in X_N$,
$X_N={\rm span}\{\vc{w}_i\}_{i=1}^N$, and $\beta>0$ is large enough\\
$\bullet$ the approximate species mass balance equations
\begin{equation} \label{110}
\begin{array}{c}
{\rm div}\, \vc{J}_k=\omega_{k}+\varepsilon \Ov{\varrho}_k-\varepsilon Y_k\varrho-{\rm div}\,(Y_k\varrho\vc{u} )+\varepsilon{\rm div}\,(Y_k\nabla\varrho) -\sqrt{\lambda} \log Y_k , \\
\vc{J}_k \cdot \vc{n}| _{\partial\Omega} = 0
\end{array}
\end{equation}
are satisfied pointwise,
where $\sumkN \bar \varrho_k=\bar \varrho$; for example, we may take $\bar \varrho_k=\frac{\bar \varrho}{n}$
\\
$\bullet$ the approximate internal energy balance
\begin{equation} \label{111}
\begin{aligned}
-{\rm div}\,\left(\kappa_{\delta,\eta}{\frac{\varepsilon+\vartheta}{\vartheta}}\nabla \vartheta\right)
= &-{\rm div}\,(\varrho e\vc{u})-\pi{\rm div}\,\vc{u} +{\frac{\delta}{\vartheta}}
+\tn{S}_{\eta}:\nabla{\vc{u}} \\
&+\delta\varepsilon(\beta\varrho^{\beta-2}+2)|\nabla\varrho|^{2}-{\rm div}\,\left(\vartheta\sumkN c_{vk} \vc{J}_k\right)
\end{aligned}
\end{equation}
with the boundary condition
\begin{equation}\label{112}
\kappa_{\delta,\eta} \frac{\varepsilon+\vartheta}{\vartheta}\nabla\vartheta\cdot\vc{n}| _{\partial\Omega}+(L+\delta\vartheta^{B-1})(\vartheta-\vartheta_{0}^\eta)+\varepsilon \log\vartheta +\lambda \vartheta^{\frac B2} \log \vartheta=0
\end{equation}
is satisfied pointwise, where $\vartheta_0^\eta$ is a smooth, strictly positive approximation of $\vartheta_0$
and $\kappa_{\delta,\eta}$ is as above.
Next, we write down the entropy equality for the approximate system. Note that it is not an additional assumption but a consequence of the approximate relations above; its form can be deduced (see \cite{PiPo} for more details in the case of the Dirichlet boundary conditions) under regularity assumptions corresponding to the regularity of solutions to the approximate problem stated above.
\medskip
\noindent {\it Step II: Solvability of the approximate system.}
Following \cite{GPZ} and \cite{PiPo} we can prove
\begin{proposition}\label{p1}
Let $\delta$, $\varepsilon$, $\lambda$ and $\eta$ be positive numbers and $N$ be a positive integer.
Under the assumptions of Theorem \ref{t1}
there exists a solution to system (\ref{108}--\ref{112}) such that
$\varrho\in W^{2,q}(\Omega)$ $\forall q<\infty$, $\varrho\geq0$ in $\Omega$, $\intO{\varrho}=M$, $\vc{u}\in X_N$, $\vec{Y}\in W^{1,2}(\Omega)$ with $\log Y_k \in W^{2,q}(\Omega)$ $\forall q<\infty$, $Y_k>0$ a.e. in $\Omega$ and $\vartheta\in W^{2,q}(\Omega)$ $\forall q<\infty$, $\vartheta\geq C(N)>0$.
Moreover, this solution satisfies the entropy equation
\begin{multline} \label{113}
\int_{\Omega} \frac{\psi \tn {S}_\eta : \nabla \vc{u}}{\vartheta}{\, \rm d}x
+\int_\Omega \kappa_{\delta,\eta}\frac{(\varepsilon+\vartheta)}{\vartheta}\frac{|\nabla \vartheta|^2}{\vartheta^2}\psi {\, \rm d}x \\
-\int_{\Omega}\omega_k (c_{pk}-c_{vk} \log \vartheta + \log Y_k)\psi{\, \rm d}x
+\int_{\Omega}\frac{\delta \psi}{\vartheta^2}{\, \rm d}x \\
-\int_{\Omega} \psi \sumkN \widehat{\vc F}_k \cdot \nabla \log Y_k {\, \rm d}x
+ \int_{\partial \Omega} \frac{\psi}{\vartheta}(L+\delta \vartheta^{B-1})\vartheta_0^\eta \, {\rm d}S \\
+\int_{\Omega}\frac{\delta \varepsilon (\beta \varrho^{\beta-2}+2)|\nabla \varrho|^2\psi}{\vartheta}{\, \rm d}x
+\int_{\Omega}\psi\sumkN(\varepsilon(\varrho+1)Y_k+\lambda)\Big|\frac{\nabla Y_k}{Y_k}\Big|^2{\, \rm d}x \\
=\int_{\Omega} \frac{\kappa_{\delta,\eta}(\varepsilon+\vartheta)\nabla \vartheta \cdot \nabla \psi}{\vartheta^2} {\, \rm d}x
-\int_{\Omega} \varrho s^\lambda \vc{u} \cdot \nabla \psi {\, \rm d}x \\
- \int_{\Omega} \sumkN (c_{vk}\log \vartheta -\log Y_k) \widehat{\vc F}_k \cdot \nabla \psi {\, \rm d}x - \varepsilon \int_{\Omega} \psi \sumkN Y_k c_{pk} (\Delta \varrho + \bar\varrho - \varrho) {\, \rm d}x\\
+ \int_{\Omega} \psi \varrho \vc{u} \cdot \Big(\big(\sumkN Y_k\big) \nabla \log (\varrho+\sqrt{\lambda}) - \nabla \log \varrho\Big){\, \rm d}x \\
+ \int_{\partial \Omega} \frac{\psi}{\vartheta}\big( (L+\delta\vartheta^{B-1})\vartheta + \varepsilon \log \vartheta + \lambda \vartheta^{B/2}\log \vartheta\big)\, {\rm d}S \\
-\varepsilon \int_{\Omega} \sumkN Y_k \nabla \varrho \cdot \nabla \Big(\frac{g_k^\lambda \psi}{\vartheta}\Big) {\, \rm d}x
-\sqrt{\lambda} \int_{\Omega} \Big(\sumkN g_k^\lambda \log Y_k\Big) \frac{\psi}{\vartheta}{\, \rm d}x \\
+\int_{\Omega}\varepsilon(\Delta \varrho+\bar \varrho-\varrho)(\varrho^{\gamma-1}+e+\vartheta)\frac{\psi}{\vartheta}{\, \rm d}x\\
+\varepsilon \int_{\Omega} \sumkN (\bar \varrho_k - Y_k \varrho) \frac{g_k^\lambda \psi}{\vartheta}{\, \rm d}x
-\int_{\Omega}\sumkN (\varepsilon(\varrho+1)Y_k+\lambda)\frac{\nabla Y_k}{Y_k}\cdot\nabla\psi{\, \rm d}x\\
+\int_{\Omega}\sumkN\big(\varepsilon(\varrho+1)Y_k+\lambda\big)s_k^\lambda\frac{\nabla Y_k}{Y_k}\cdot \nabla\psi{\, \rm d}x \\
-\int_{\Omega}\psi\sumkN(\varepsilon(\varrho+1)Y_k+\lambda)\frac{\nabla Y_k}{Y_k}\cdot \nabla\log(\varrho+\sqrt{\lambda}){\, \rm d}x
\end{multline}
and the following
estimate
\begin{multline} \label{114}
\sqrt{\lambda}\sumkN \Big(\|Y_k\|_{1,2}+\Big\|\frac{\nabla Y_k}{Y_k}\Big\|_2+\lambda^{-1/4}\|\log Y_k\|_2\Big) + \sumkN \Big\|\frac{|\nabla Y_k|^2}{Y_k} \Big\|_1
+\|\nabla \vartheta^{B/2}\|_2 \\ + \Big\|\frac{\nabla \vartheta}{\vartheta^{2}}\Big\|_2
+ \Big\|\frac{\nabla \varrho}{\sqrt{\varrho+\sqrt{\lambda}}}\Big\|_2
+\|\vartheta^{-2}\|_1+\|\vartheta\|_{B,\partial \Omega}+\Big\|\frac{\log \vartheta}{\vartheta}\Big\|_{1,\partial \Omega}
+\|\nabla^2 \varrho\|_2 \\ + \|\vc{u}\|_{1,2} + \|\nabla \varrho\|_6 \leq C,
\end{multline}
where $C$ is independent of $N$, $\eta$ and $\lambda$.
\end{proposition}
Note that if $\Omega$ is axially symmetric (and $f$ is thus positive), the estimate of $\|\vc{u}\|_2$ (or, more precisely, of $\|\vc{u}\|_{2,\partial \Omega}$) must be deduced from the momentum equation.
\noindent We now let subsequently $N\to +\infty$, $\eta \to 0^+$, $\lambda \to 0^+$, $\varepsilon \to 0^+$, and $\delta \to 0^+$.
\medskip
\noindent {\it Step III: Limit passage $N \to +\infty$.}
Using the bounds from Proposition \ref{p1}, the weak lower semicontinuity of several terms in the entropy inequality, and the fact that for both the Galerkin approximation and the limit momentum balance we can use the corresponding velocity as a test function (i.e., we have the energy equality in both cases), we may let $N\to +\infty$ in the system (\ref{108})--(\ref{113}) above. Note that instead of the entropy equality we only obtain the entropy inequality.
\medskip
\noindent {\it Step IV: Limit passage $\eta \to 0^+$.}
As we cannot ensure strong convergence of the quadratic term on the right-hand side of the approximate internal energy balance,
before starting with the limit passage we must replace it by the approximate total energy balance, i.e., we add the kinetic energy balance to the limit version of (\ref{111}). We get
\begin{multline} \label{115}
-\intO{\Big[\varrho e+ \frac 12 \varrho |\vc{u}|^2 + (\pi+ \delta \varrho^\beta + \delta \varrho^2)\Big]\vc{u}\cdot \nabla \psi}
\\
-\intO{\Big(\tn{S}_\eta \vc{u} \cdot \nabla \psi + \delta \vartheta^{-1} \psi\Big)}
+\intO{\kappa_{\delta,\eta}{\frac{\varepsilon+\vartheta}{\vartheta}}\nabla\vartheta\cdot\nabla \psi} \\
+ \int_{\partial \Omega}\big[(L+\delta \vartheta^{B-1})(\vartheta-\vartheta_0^\eta) +\varepsilon \log \vartheta + \lambda \vartheta^{\frac B2} \log\vartheta\big] \psi \, {\rm d}S + f\intpO{|\vc{u}|^2\psi}\\
+ \sum_{k=1}^n c_{vk} \intO{\Big[\vartheta \sum_{l=1}^n Y_k\widehat{D}_{kl}\nabla Y_l \cdot \nabla \psi + \vartheta (\varepsilon(\varrho+1)Y_k +\lambda) \frac{\nabla Y_k}{Y_k}\cdot \nabla \psi\Big]} \\
= \intO{\varrho \vc{f} \cdot \vc{u} \psi}
+ \frac{\delta}{\beta-1} \intO{(\varepsilon \beta \Ov{\varrho} \varrho^{\beta-1}\psi + \varrho^\beta \vc{u} \cdot \nabla \psi - \varepsilon \beta \varrho^\beta \psi)} \\
+\delta \intO{(2\varepsilon \Ov{\varrho} \varrho\psi + \varrho^2 \vc{u} \cdot \nabla \psi - 2\varepsilon \varrho^2 \psi)}.
\end{multline}
Recalling that the bounds in (\ref{114}) are independent of $\eta$, it is not difficult to see that we may now let $\eta \to 0^+$ and pass to the limit in our system of equations. Now, if $\Omega$ is axially symmetric, we read the estimate of $\|\vc{u}\|_{2,\partial \Omega}$ from the total energy balance with $\psi$ constant. The same holds also for all subsequent limit passages.
\medskip
\noindent {\it Step V: Limit passage $\lambda \to 0^+$.}
Recall that at this moment it is not yet true that $\varrho = \sumkN \varrho_k$. However, we have at least
(see (6.12) in \cite{GPZ})
\begin{equation} \label{116}
\Big\|\sumkN \nabla Y_k\Big\|_2 + \Big\|\Big(\sumkN Y_k\Big)-1\Big\|_6 \leq C(\lambda) \sim \sqrt{\lambda}\to 0 \quad \textrm{as} \quad \lambda \to 0^+.
\end{equation}
This bound, together with (\ref{114}), implies
\begin{equation} \label{116a}
\sumkN \|\nabla Y_k\|_{\frac{12}{7}} \leq C
\end{equation}
with $C$ independent of $\lambda$. We may therefore let $\lambda \to 0^+$ and pass to the limit in our problem (for the details see \cite{PiPo}). Due to (\ref{116}) we see that after the limit passage we have
$$
\sumkN Y_k =1, \qquad \text{ i.e. } \sumkN \varrho_k = \varrho.
$$
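Let us note where the exponent $\frac{12}{7}$ in (\ref{116a}) comes from: since $Y_k \geq 0$, the bound (\ref{116}) gives $\|Y_k\|_6 \leq \|\sumlN Y_l\|_6 \leq C$, hence $\|\sqrt{Y_k}\|_{12}\leq C$, while (\ref{114}) bounds $\frac{\nabla Y_k}{\sqrt{Y_k}}$ in $L^2(\Omega)$. H\"older's inequality with $\frac{1}{12}+\frac 12 = \frac {7}{12}$ then yields
$$
\|\nabla Y_k\|_{\frac{12}{7}} \leq \big\|\sqrt{Y_k}\big\|_{12}\,\Big\|\frac{\nabla Y_k}{\sqrt{Y_k}}\Big\|_{2} \leq C.
$$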
\medskip
\noindent {\it Step VI: Limit passage $\varepsilon \to 0^+$.}
The last two limit passages are nowadays standard in the theory of compressible Navier--Stokes--Fourier system. First, to let $\varepsilon \to 0^+$, we need additional estimates of the total density. We may use the technique of Bogovskii type estimates (see e.g. \cite{NoSt} for more details) to get for $\beta \gg 1$
\begin{equation} \label{117}
\|\varrho\|_{\frac 53\beta} \leq C(\delta).
\end{equation}
This can be achieved by testing the approximate momentum equation on the level $\varepsilon >0$ by solution to
$$
\begin{aligned}
{\rm div}\, \vcg{\varphi} &= \varrho^{\frac 23 \beta} - \frac{1}{|\Omega|}\intO{\varrho^{\frac 23 \beta}}, \\
\vcg{\varphi} & = \vc{0} \quad \mbox { on } \partial \Omega.
\end{aligned}
$$
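Let us briefly sketch this computation (see \cite{NoSt} for full details): testing (\ref{119}) with $\vcg{\varphi}$, the artificial pressure term yields the coercive quantity
$$
\delta\intO{\varrho^{\beta}\,{\rm div}\,\vcg{\varphi}} = \delta\intO{\varrho^{\beta+\frac 23\beta}} - \frac{\delta}{|\Omega|}\intO{\varrho^{\beta}}\intO{\varrho^{\frac 23\beta}},
$$
and $\beta + \frac 23\beta = \frac 53\beta$, which explains the exponent in (\ref{117}); the remaining terms are of lower order for $\beta$ large, since the Bogovskii operator provides the bound $\|\vcg{\varphi}\|_{1,q}\leq C\|\varrho^{\frac 23\beta}\|_q$, $1<q<\infty$.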
However, this estimate alone does not imply compactness of the density sequence and further work must be done: we have to combine the {\it effective viscous flux identity} and the {\it renormalized continuity equation}, see \cite{MPZ} or \cite{NoSt} for more details. After letting $\varepsilon \to 0^+$ we have \\
$\bullet$ the continuity equation
\begin{equation} \label{118}
\int_\Omega \varrho \vc{u}\cdot \nabla \psi {\, \rm d}x= 0
\end{equation}
for all $\psi \in C^1(\Ov{\Omega})$ \\
$\bullet$ the weak formulation of the approximate momentum equation
\begin{equation} \label{119}
\begin{array}{c}
\displaystyle
\int_\Omega\Big(-\varrho\left(\vc{u}\otimes\vc{u}\right):\nabla\vcg{\varphi}+\tn{S}:\nabla\vcg{\varphi}\Big){\, \rm d}x+ f\intpO{\vc{u} \cdot \vcg{\varphi}}\\
\displaystyle
-\int_\Omega (\pi+\delta\varrho^{\beta}+\delta\varrho^{2}){\rm div}\,\vcg{\varphi} {\, \rm d}x
=\int_\Omega\varrho \vc{f}\cdot\vcg{\varphi} {\, \rm d}x
\end{array}
\end{equation}
for all $\vcg{\varphi}\in C^1_{\vc{n}}(\Omega)$\\
$\bullet$ the weak formulation of the approximate species balance equations
\begin{equation}\label{120}
\begin{array}{c}
\displaystyle \int_\Omega \Big( -Y_{k}\varrho\vc{u}\cdot \nabla \psi +\sum_{l=1}^n Y_k D_{kl}\nabla Y_l \cdot \nabla \psi\Big){\, \rm d}x = \intO{\omega_{k} \psi}
\end{array}
\end{equation}
for all $\psi \in C^1(\Ov{\Omega})$ ($k=1,2,\dots,n$)\\
$\bullet$ the weak formulation of the approximate total energy equation
\begin{equation} \label{121}
\begin{array}{c}
\displaystyle -\intO{\Big[\varrho e+ \frac 12 \varrho |\vc{u}|^2 + (\pi + \delta \varrho^\beta + \delta \varrho^2)\Big]\vc{u}\cdot \nabla \psi}
\\
\displaystyle -\intO{\Big(\tn{S} \vc{u} \cdot \nabla \psi + \delta \vartheta^{-1} \psi\Big)}
+\intO{\kappa_{\delta}\nabla\vartheta\cdot\nabla \psi} + f \intpO{|\vc{u}|^2 \psi}\\
\displaystyle + \int_{\partial \Omega}\big[(L+\delta \vartheta^{B-1})(\vartheta-\vartheta_0)\big] \psi \, {\rm d}S
\displaystyle + \intO{\vartheta\sum_{k,l=1}^n c_{vk} Y_k D_{kl}\nabla Y_l \cdot \nabla \psi} \\
\displaystyle = \intO{\varrho \vc{f} \cdot \vc{u} \psi} + \frac{\delta}{\beta-1} \intO{\varrho^\beta \vc{u} \cdot \nabla \psi}
\displaystyle+\delta \intO{\varrho^2 \vc{u} \cdot \nabla \psi}
\end{array}
\end{equation}
for all $\psi \in C^1(\Ov{\Omega})$ \\
$\bullet$ the weak formulation of the entropy inequality
\begin{multline} \label{122}
\int_{\Omega} \frac{\psi \tn{S} : \nabla \vc{u}}{\vartheta}{\, \rm d}x
+\int_\Omega \kappa_{\delta}\frac{|\nabla \vartheta|^2}{\vartheta^2}\psi {\, \rm d}x
-\int_{\Omega}\sumkN \omega_k (c_{pk}-c_{vk} \log \vartheta + \log Y_k)\psi {\, \rm d}x\\
+\int_{\Omega}\frac{\delta \psi}{\vartheta^2}{\, \rm d}x + \int_{\Omega} \psi \sumkN \sumlN D_{kl}\nabla Y_l \cdot \nabla Y_k{\, \rm d}x
+ \int_{\partial \Omega} \frac{\psi}{\vartheta}(L+\delta \vartheta^{B-1})\vartheta_0 \, {\rm d}S\\
\leq \int_\Omega \frac{\kappa_{\delta}\nabla \vartheta \cdot \nabla \psi}{\vartheta} {\, \rm d}x
-\int_{\Omega} \varrho s \vc{u} \cdot \nabla \psi {\, \rm d}x \\
-\int_{\Omega} \sumkN (c_{vk} \log \vartheta-\log Y_k) \vc{F}_k \cdot\nabla \psi {\, \rm d}x
+ \int_{\partial \Omega} (L+\delta\vartheta^{B-1}) \psi \, {\rm d}S
\end{multline}
for all $\psi \in C^1(\Ov{\Omega})$, nonnegative.
More details can be found in \cite{PiPo}.
\medskip
\noindent {\it Step VII: Estimates independent of $\delta$.} We denote the solution corresponding to $\delta >0$ as $(\vr_\delta,\vc{u}_\delta,\vartheta_\delta,\vec{Y}_{\delta})$.
The entropy inequality and the total energy balance, both with a constant test function, yield the following estimates for $\Omega$ not axially symmetric
\begin{equation} \label{123}
\|\vartheta_\delta\|_{1,\partial \Omega} + \delta \|\vartheta_\delta^B\|_{1,\partial \Omega} \leq C\Big(1+ \Big|\intO{\vr_\delta\vc{u}_\delta\cdot \vc{f}}\Big| + \delta \|\vartheta_\delta^{-1}\|_1\Big).
\end{equation}
\begin{multline} \label{124}
\|\nabla \vec{Y}_\delta\|_2^2 + \|\nabla \vartheta_\delta^{\frac m2}\|_2^2 + \|\vc{u}_\delta\|_{1,2}^2 + \|\vartheta_\delta^{-1}\|_{1,\partial \Omega} \\ + \delta \big(\|\nabla \vartheta_\delta^{\frac B2}\|_2^2 + \|\nabla \vartheta_\delta^{-\frac 12}\|_2^2 + \|\vartheta_\delta^{-2}\|_1 +\|\vartheta_\delta^{B-2}\|_{1,\partial \Omega}\big) \leq C(1+ \delta \|\vartheta_\delta^{B-1}\|_{1,\partial \Omega}).
\end{multline}
If $\Omega$ is axially symmetric (and $f>0$), then we must add to the left-hand side of (\ref{123}) $\|\vc{u}_\delta\|_{2,\partial \Omega}^2$ and replace in the left-hand side of (\ref{124}) $\|\vc{u}_\delta\|_{1,2}^2$ by $$
\intO{\frac{\tn{S}(\vartheta_\delta,\widetilde{\tn{D}}(\vc{u}_\delta)):\nabla \vc{u}_\delta}{\vartheta_\delta}}.
$$
Recall also that we know $0\leq (Y_k)_\delta\leq 1$, $k=1,2,\dots, n$.
It is not difficult to bound the $\delta$-dependent terms on the right-hand sides to get for $\Omega$ not axially symmetric
\begin{equation}\label{125}
\begin{aligned}
&\|\nabla \vec{Y}_\delta\|_2+\|\vec{Y}_\delta\|_{\infty} + \|\nabla \vartheta_\delta^{\frac m2}\|_2 + \|\vc{u}_\delta\|_{1,2} + \|\vartheta_\delta^{-1}\|_{1,\partial \Omega} \\
&+ \delta (\|\nabla \vartheta_\delta^{\frac B2}\|_2^2 + \|\nabla \vartheta_\delta^{-\frac 12}\|_2^2 + \|\vartheta_\delta^{-2}\|_1 +\|\vartheta_\delta^{B-2}\|_{1,\partial \Omega}) \leq C
\end{aligned}
\end{equation}
and
\begin{equation}\label {vt_3m_0}
\|\vartheta_\delta\|_{3m} \leq C\Big(1+ \Big|\int_{\Omega}\vr_\delta\vc{u}_\delta\cdot\vc{f}{\, \rm d}x\Big| \Big),
\end{equation}
while for $\Omega$ axially symmetric we remove from the left-hand side of (\ref{125}) $\|\vc{u}_\delta\|_{1,2}$ and add to the left-hand side of (\ref{vt_3m_0}) $\|\vc{u}_\delta\|_{1,2}^2$.
The main issue are now the density estimates. We aim at obtaining
\begin{equation} \label{136a}
\sup_{x_0 \in \Ov{\Omega}}\int_{\Omega}\frac{\pi(\vr_\delta,\vartheta_\delta)+ (1-\alpha)\vr_\delta |\vc{u}_\delta|^2}{|x-x_0|^\alpha}{\, \rm d}x \leq C,
\end{equation}
for some $\alpha>0$ as large as possible. We distinguish three cases. If $x_0$ is sufficiently far from $\partial \Omega$, then it is possible to use as test function in (\ref{119})
\begin{equation} \label{129}
\vcg{\varphi}(x) = \frac{x-x_0}{|x-x_0|^\alpha} \tau^2,
\end{equation}
where $\tau$ is a suitable cut-off function. Next, we study the remaining two cases, i.e. $x_0\in \partial \Omega$ and $x_0 \in \Omega$, but close to $\partial \Omega$.
To simplify the presentation, let us assume that we deal with a part of the boundary of $\Omega$ which is flat and described by $x_3 = 0$, i.e. $z(x')=0$, $x' \in {\mathcal O}\subset \mathbb{R}^2$, with the normal vector $\vc{n} = (0,0,-1)$ and the tangent vectors $\vcg{\tau}_1 =(1,0,0)$, $\vcg{\tau}_2=(0,1,0)$. The general case can be treated using the standard technique of flattening the boundary, see e.g. \cite{JNP}. Consider first that $x_0$ lies on the boundary of $\Omega$, i.e. $(x_0)_3 = 0$. Then it is possible to use as the test function in the approximate momentum equation
$$
\vc{w}(x) = \vc{v}(x-x_0),
$$
where
$$
\vc{v}(x) =
\frac{1}{|x|^\alpha}(x_1,x_2,x_3) = (x\cdot \vcg{\tau}_1)\vcg{\tau}_1 + (x\cdot \vcg{\tau}_2)\vcg{\tau}_2 + ((0,0,x_3-z(x'))\cdot \vc{n})\vc{n}, \quad x_3\geq 0.
$$
Note that if $(x_0)_3=0$, we get precisely what we need, i.e. estimate (\ref{136a}) (but with $\sup_{x_0 \in \partial \Omega}$ instead of $\sup_{x_0 \in \Ov{\Omega}}$).
However, if $x_0$ is close to the boundary but not on it, i.e. $(x_0)_3>0$ small, we lose control of some terms for $0<x_3<(x_0)_3$. In this case, as for the Dirichlet boundary conditions, we must modify the test functions. We first consider
$$
\vc{v}^1(x) = \left\{
\begin{array}{ll}
\frac{1}{|x-x_0|^\alpha}\big((x-x_0)_1,(x-x_0)_2,(x-x_0)_3\big) , & x_3 \geq \frac{(x_0)_3}{2}, \\[8pt]
\frac{1}{|x-x_0|^\alpha}\Big((x-x_0)_1,(x-x_0)_2,4(x-x_0)_3 \frac{x_3^2}{|(x-x_0)_3|^2}\Big), & 0<x_3 < \frac{(x_0)_3}{2}.
\end{array}
\right.
$$
Nonetheless, using $\vc{v}^1$ as a test function we would still miss control of some terms coming from the convective term, more precisely of those which contain at least one velocity component $u_3$; this happens, however, only close to the boundary, i.e. for $x_3 < (x_0)_3/2$. Hence we further consider
$$
\vc{v}^2(x) = \left\{
\begin{array}{ll}
\displaystyle \frac{(0,0,x_3)}{(x_3+ |x-x_0| |\ln |x-x_0||^{-1})^\alpha} , & |x-x_0| \leq 1/K, \\[8pt]
\displaystyle \frac{(0,0,x_3)}{(x_3+ 1/K |\ln K|^{-1})^\alpha} , & |x-x_0| > 1/K
\end{array}
\right.
$$
for $K$ sufficiently large (but fixed, independently of the distance of $x_0$ from $\partial \Omega$). Note that both functions have zero normal trace, belong to $W^{1,q}(\Omega;\mathbb{R}^3)$ and their norms are bounded uniformly (with respect to the distance of $x_0$ from $\partial \Omega$) provided $1\leq q<\frac 3\alpha$. Thus we finally use as the test function in the approximate momentum balance
\begin{equation} \label{140}
\vcg{\varphi} = \vc{v}^1(x) + K_1 \vc{v}^2(x)
\end{equation}
with $K_1$ suitably chosen (large). Note that $K$ and $K_1$ are chosen in such a way that the unpleasant terms coming from each of the two functions are controlled by those terms from the other one which provide a positive contribution. This is possible due to the fact that the unpleasant terms from $\vc{v}^2$ are multiplied by $|\ln|x-x_0||^{-1} \leq |\ln K|^{-1}\ll 1$.
We can therefore verify that
\begin{multline}\label{141}
\sup_{x_0 \in \Ov{\Omega}} \intO{ \frac{p(\vr_\delta,\vartheta_\delta) +\delta(\vr_\delta^\beta + \vr_\delta^2)+ (1-\alpha)\vr_\delta |\vc{u}_\delta|^2}{|x-x_0|^\alpha}} \\
\leq C (1+ \delta \|\vr_\delta\|_\beta^\beta + \|p(\vr_\delta,\vartheta_\delta)\|_1 + (1+ \|\vartheta_\delta\|_{3m})\|\vc{u}_\delta\|_{1,2} + \|\vr_\delta|\vc{u}_\delta|^2 \|_1),
\end{multline}
provided $0<\alpha < \min\{1, \frac{3m-2}{2m}\}$, $m>\frac 23$,
and, moreover, the test function (\ref{140}) belongs to $W^{1,p}(\Omega;\mathbb{R}^3)$ for $1\leq p < \frac 3\alpha$ with the norm bounded independently of the distance of $x_0$ from $\partial \Omega$.
We exploit the estimates in the following way. We define now for $1\leq a\leq \gamma$ and $0<b<1$
\begin{equation} \label{II3.11}
\mathcal{B} = \intO{\big(\vr_\delta^a |\vc{u}_\delta|^2 + \vr_\delta^b |\vc{u}_\delta|^{2b+2}\big)}.
\end{equation}
Then we have
\begin{equation} \label{II3.12}
\|\vr_\delta \vc{u}_\delta\|_1 \leq C \mathcal{B}^{\frac{a-b}{2(ab+a-2b)}},
\end{equation}
and for $1<s< \frac{1}{2-a}$ (if $a<2$), $0<(s-1)\frac{a}{a-1} <b<1$
\begin{equation} \label{II3.13}
\|\vr_\delta |\vc{u}_\delta|^2\|_s \leq C \mathcal{B}^{\frac{a-b/s}{ab+a-2b}}.
\end{equation}
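Let us indicate how, e.g., (\ref{II3.12}) is obtained. By H\"older's inequality, writing
$$
\vr_\delta |\vc{u}_\delta| = \big(\vr_\delta^a |\vc{u}_\delta|^2\big)^{x}\big(\vr_\delta^b |\vc{u}_\delta|^{2b+2}\big)^{y}\vr_\delta^{1-x-y}, \qquad x = \frac{1-b}{2(ab+a-2b)}, \quad y = \frac{a-1}{2(ab+a-2b)},
$$
and using $\intO{\vr_\delta} = M$, we get $\|\vr_\delta\vc{u}_\delta\|_1 \leq \mathcal{B}^{x+y} M^{1-x-y}$ with $x+y = \frac{a-b}{2(ab+a-2b)}$; estimate (\ref{II3.13}) follows similarly.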
Next, we use the Bogovskii-type estimate and get for $1<s<\frac 1{2-a}$ (if $a<2$), $0< (s-1)\frac{a}{a-1} <b<1$, $s\leq \frac{6m}{3m+2}$, $m> \frac 23$
\begin{equation} \label{II3.13a}
\intO{\big(\vr_\delta^{s\gamma} + \vr_\delta^{(s-1)\gamma} p(\vr_\delta, \vartheta_\delta) + (\vr_\delta |\vc{u}_\delta|^2)^s + \delta \vr_\delta^{\beta + (s-1)\gamma}\big)} \leq C(1+ \mathcal{B}^{\frac{sa-b}{ab+a-2b}}).
\end{equation}
We distinguish two cases. First, for $m\geq 2$ the only restriction on $\alpha$ is actually $\alpha <1$. In the other case, if $m \in (\frac 23, 2)$, we have the restriction $\alpha < \frac{3m-2}{2m}$.
Therefore, if $m \geq 2$, we set $a=\gamma$ and using Fatou's lemma and H\"older inequality we show for $b \in ((s-1)\frac{\gamma}{\gamma-1},1)$, $1<s<\frac{2}{2-\gamma}$ and $s \leq \frac{6m}{3m+2}$
\begin{equation} \label{II3.28a}
\begin{array}{c}
\displaystyle
\sup_{x_0 \in \Ov{\Omega}} \intO{\frac{p(\vr_\delta,\vartheta_\delta) + (\vr_\delta |\vc{u}_\delta|^2)^b}{|x-x_0|}} \\
\displaystyle \leq C \big(1+ \delta \|\vr_\delta\|_\beta^\beta + \|p(\vr_\delta,\vartheta_\delta)\|_1 + (1+ \|\vartheta_\delta\|_{3m})\|\vc{u}_\delta\|_{1,2} + \|\vr_\delta|\vc{u}_\delta|^2 \|_1\big).
\end{array}
\end{equation}
If $m<2$, we keep $1\leq a<\gamma$ and using H\"older inequality we end up with
\begin{equation} \label{II3.33}
\begin{array}{c}
\displaystyle \sup_{x_0 \in \Ov{\Omega}}\intO{\frac{\vr_\delta^a + (\vr_\delta |\vc{u}_\delta|^2)^b}{|x-x_0|}} \\
\displaystyle \leq C \big(1+ \delta \|\vr_\delta\|_\beta^\beta + \|p(\vr_\delta,\vartheta_\delta)\|_1 + (1+ \|\vartheta_\delta\|_{3m})\|\vc{u}_\delta\|_{1,2} + \|\vr_\delta|\vc{u}_\delta|^2 \|_1\big)^{\frac a \gamma} \\
\displaystyle+ C \big(1+ \delta \|\vr_\delta\|_\beta^\beta + \|p(\vr_\delta,\vartheta_\delta)\|_1 + (1+ \|\vartheta_\delta\|_{3m})\|\vc{u}_\delta\|_{1,2} + \|\vr_\delta|\vc{u}_\delta|^2 \|_1\big)^{b}
\end{array}
\end{equation}
which holds for $b \in ((s-1)\frac{\gamma}{\gamma-1},1)$, $1<s<\frac{2}{2-\gamma}$, $\alpha > \max\{\frac{3a-2\gamma}{a}, \frac{3b-2}{b}\}$.
Let us consider now
\begin{equation} \label{II3.34}
\begin{array}{c}
\displaystyle
-\Delta h = \vr_\delta^a + \vr_\delta^b |\vc{u}_\delta|^{2b} - \frac{1}{|\Omega|} \intO{(\vr_\delta^a + \vr_\delta^b |\vc{u}_\delta|^{2b})}, \\
\displaystyle
\frac{\partial h}{\partial \vc{n}}|_{\partial \Omega} = 0.
\end{array}
\end{equation}
It is well-known that the unique strong solution admits the following representation
\begin{equation} \label{II3.35}
h(x) = \int_\Omega G(x,y) (\vr_\delta^a + \vr_\delta^b|\vc{u}_\delta|^{2b}) \, \mbox{d} y - \frac{1}{|\Omega|} \int_\Omega G(x,y) \, \mbox{d} y \intO{(\vr_\delta^a + \vr_\delta^b |\vc{u}_\delta|^{2b})};
\end{equation}
since $G(x,y)\leq C |x-y|^{-1}$, we get due to estimates above
\begin{equation} \label{II3.35a}
\|h\|_\infty \leq C(1+ \mathcal{B}^{\frac{\gamma -b/s}{b\gamma + \gamma -2b}}),
\end{equation}
provided
\begin{equation} \label{II3.35b}
1<s< \frac{1}{2-\gamma}, \quad 0<(s-1)\frac{\gamma}{\gamma-1} <b<1, \quad s\leq \frac{6m}{3m+2}, \quad m\geq 2,
\end{equation}
and
\begin{equation} \label{II3.35c}
\|h\|_\infty \leq C(1+ \mathcal{B}^{\frac{a -b/s}{ab + a -2b} \frac a \gamma} + \mathcal{B}^{\frac{a -b/s}{ab + a -2b} b}),
\end{equation}
provided
\begin{equation} \label{II3.35d}
\begin{array}{c}
\displaystyle 1<s< \frac{1}{2-a}, \quad 0<(s-1)\frac{a}{a-1} <b<1, \quad s\leq \frac{6m}{3m+2}, \\[8pt]
\displaystyle \alpha > \frac{3a-2\gamma}{a}, \quad \alpha > \frac{3b-2}{b}, \quad \alpha < \frac{3m-2}{2m}, \quad \frac 23 < m < 2.
\end{array}
\end{equation}
Now, from \eqref{II3.11} and \eqref{II3.34}, we have
\begin{equation} \label{II3.36}
\begin{array}{c}
\displaystyle \mathcal{B} = \intO{-\Delta h\, |\vc{u}_\delta|^2} + \frac{1}{|\Omega|} \intO{|\vc{u}_\delta|^2}\intO{(\vr_\delta^a + \vr_\delta^b |\vc{u}_\delta|^{2b})} \\[8pt]
\leq 2\|\nabla \vc{u}_\delta\|_2 D^{\frac 12} + C(\varepsilon) \|\vc{u}_\delta\|_{1,2}^2 (1+ \mathcal{B}^{\Gamma+\varepsilon})
\end{array}
\end{equation}
for any $\varepsilon >0$, where
$$ \Gamma =\left\{
\begin{array}{ll}
\displaystyle \frac{\gamma-b}{b\gamma +\gamma -2b} \quad &\mbox{ if } m \geq 2\\
\displaystyle \max\Big\{\frac{a-b}{ab+a-2b} \frac{a}{\gamma}, \frac{a-b}{ab+a-2b} b\Big\} \quad &\mbox{ if } \frac 23 <m<2,
\end{array}\right.
$$
and
\begin{multline*}
\displaystyle D = \intO{|\nabla h \otimes \vc{u}_\delta|^2} -\intO{h \Delta h |\vc{u}_\delta|^2} - \intO{h \nabla h \cdot \nabla \vc{u}_\delta \cdot \vc{u}_\delta} \\
\leq \|h\|_\infty (\mathcal{B} + C(\varepsilon)\|\vc{u}_\delta\|_{1,2}^2 \mathcal{B}^{\Gamma +\varepsilon} +\|\nabla \vc{u}_\delta\|_2 D^{\frac 12}).
\end{multline*}
Therefore, due to \eqref{II3.35a},
\begin{equation} \label{II3.40}
\begin{array}{c}
\mathcal{B} \leq C \big(1+ \mathcal{B}^{\frac{\gamma-b/s}{b\gamma + \gamma-2b}}\big) \quad \mbox{ if } m\geq 2, \\
\mathcal{B} \leq C\big(1+ \mathcal{B}^{\frac {a-b/s}{ab+a-2b}\frac{a}{\gamma}} + \mathcal{B}^{\frac {a-b/s}{ab+a-2b} b}\big) \quad \mbox {if } \frac 23 <m<2.
\end{array}
\end{equation}
Checking carefully all the conditions above, we conclude that for $\Omega$ not axially symmetric,
$\gamma >1$, $m>\frac{2}{4\gamma-3}$ and $m>\frac 23$, as well as for $\Omega$ axially symmetric ($f>0$), $\gamma >1$, $m>\frac{6-2\gamma}{3\gamma-1}$ and $m>\frac 23$,
there exists $s>1$ such that
\begin{equation} \label{142}
\begin{array}{lcr}
\sup_{\delta>0} \|\vr_\delta\|_{\gamma s} &<& \infty, \\
\sup_{\delta>0} \|\vr_\delta\vc{u}_\delta \|_{s} &<& \infty, \\
\sup_{\delta>0} \|\vr_\delta|\vc{u}_\delta|^2\|_{s} &<& \infty, \\
\sup_{\delta>0} \|\vc{u}_\delta\|_{1,2} &<& \infty, \\
\sup_{\delta>0} \|\vartheta_\delta\|_{3m} &<& \infty, \\
\sup_{\delta>0} \|\vartheta_\delta^{m/2}\|_{1,2} &<& \infty, \\
\sup_{\delta>0} \delta \|\vr_\delta^{\beta + (s-1)\gamma}\|_{1} &<& \infty.
\end{array}
\end{equation}
Moreover, we can take $s>\frac 65$ provided
$\gamma >\frac 54$, $m>\max \{1,\frac{2\gamma +10}{17\gamma -15}\}$ (if $\Omega$ is not axially symmetric), or $\gamma >\frac 54$, $m> \frac{16\gamma}{15\gamma -16}$ (if $\gamma \in (\frac 54,\frac 43])$ or $m> \frac{18-6\gamma}{9\gamma -7}$ (if $\gamma \in (\frac 43,\frac 53))$ (if $\Omega$ is axially symmetric).
\medskip
\noindent{\it Step VIII: Compactness of the density sequence.}
Here, the standard tools from the theory of compressible Navier--Stokes equations can be applied. We have namely the {\it effective viscous flux identity}
\begin{equation} \label{134}
\begin{array}{c}
\displaystyle
\overline{p(\varrho,\vartheta) T_k(\varrho)} - \Big(\frac 43 \mu(\vartheta) + \nu(\vartheta)\Big) \overline{T_k(\varrho)\, {\rm div}\,\vc{u}} \\
\displaystyle = \overline{p(\varrho,\vartheta)} \,\, \overline{T_k(\varrho)} - \Big(\frac 43 \mu(\vartheta) + \nu(\vartheta)\Big) \overline{T_k(\varrho)}\, {\rm div}\,\vc{u},
\end{array}
\end{equation}
where
$$
T_k(z) = k T\Big(\frac{z}{k}\Big), \qquad
T(z) = \left\{ \begin{array}{ll}
z & \mbox{ for } 0\leq z\leq 1, \\
2 & \mbox{ for } z\geq 3,
\end{array}
\right. \qquad T \mbox{ concave on } (0,\infty).
$$
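For later use, we record an elementary consequence of this definition (a standard estimate, sketched here for convenience): since $0\leq T_k(z)\leq z$ and $T_k(z)=z$ for $0\leq z\leq k$,
$$
0 \leq z - T_k(z) \leq z\,\mathbf{1}_{\{z\geq k\}}, \qquad \mbox{whence} \qquad
\|\varrho - T_k(\varrho)\|_1 \leq \int_{\{\varrho \geq k\}} \varrho {\, \rm d}x \leq k^{1-\gamma s}\,\|\varrho\|_{\gamma s}^{\gamma s} \to 0 \quad \mbox{as } k\to\infty;
$$
the same bound holds for $\vr_\delta$, uniformly in $\delta$, by virtue of (\ref{142}).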
Next, for the {\it oscillation defect measure}
\begin{equation} \label{135}
\mbox{{\bf osc}}_{\mathbf q} [\vr_\delta\to\varrho](Q) = \sup_{k>1} \Big(\limsup_{\delta \to 0^+} \int_Q |T_k(\vr_\delta)-T_k(\varrho)|^q {\, \rm d}x\Big)
\end{equation}
we can show that
there exists $q>2$ such that
\begin{equation} \label{136}
\mbox{{\bf osc}}_{\mathbf{q}} [\vr_\delta\to\varrho](\Omega) < \infty,
\end{equation}
provided $m>\max\{\frac{2}{3(\gamma-1)},\frac 23\}$. This implies the validity of the renormalized continuity equation even in the case when the density sequence is not bounded in $L^2(\Omega)$. Then it is not difficult to conclude that
$$
\lim_{k \to \infty} \limsup_{\delta \to 0^+} \intO{|T_k(\vr_\delta)
-T_k(\varrho)|^q} = 0
$$
with $q$ as in the estimate of the oscillation defect measure. Hence
$$
\|\vr_\delta-\varrho\|_1 \leq \|\vr_\delta-T_k(\vr_\delta)\|_1 + \|T_k(\vr_\delta) -T_k(\varrho)\|_1 + \|T_k(\varrho) -\varrho\|_1,
$$
which yields strong convergence of the density in $L^1(\Omega)$, and also in $L^p(\Omega)$
for $1 \leq p < s\gamma$. Note that if $s>\frac 65$ we may pass to the limit in the total energy balance, while if only $s>1$, we may pass to the limit merely in the entropy inequality and in the total energy balance with a constant test function. This finishes the proof of Theorem \ref{t1}.
\end{proof}
\bibliographystyle{amsalpha}
\section{Introduction}
Feedback from active galactic nuclei (AGN) is considered to be one of the important processes in regulating galaxy growth over cosmic time.
Several research groups have implemented AGN feedback in cosmological hydrodynamic simulations \citep[e.g.][]{DiMatteo05, Booth09}; however, they all have to assume a subgrid model for the accretion onto the SMBH and the associated feedback, with uncertain parameters, due to the limited resolution of cosmological simulations.
Here we take a different approach and perform simulations of gas accretion onto a SMBH with the GADGET-3 SPH code \citep[originally described by][]{Springel05e} on small scales of $r<200$\,pc. Our long-term goal is to develop a better model of AGN feedback for large-scale cosmological simulations based on the results of these small-scale BH accretion simulations.
In this article, we report our initial results of adiabatic Bondi accretion simulations, as well as those with radiative cooling and heating by X-rays from the central SMBH \citep{Barai11}.
We run a large set of simulations of spherically symmetric Bondi accretion with different resolution, volume, and initial conditions.
The Bondi problem is ideal for testing any hydrodynamic code, since it has a semi-analytic solution.
\section{Simulations of Bondi Accretion onto a SMBH}
\begin{figure}
\centering
\includegraphics[width=0.85 \linewidth]{Nagamine_fig1.eps}
\caption{Radial profile snapshot of gas particles in a run with X-ray heating and cooling, showing radial velocity, density, temperature and Mach number. The solid line is the median value, the dashed lines are the 95 percentile, and the dotted lines are the min/max values. The red curve in each panel shows the free-fall scaling, and the blue curve shows the corresponding ZEUS simulation results: $v_{\rm ZEUS}$ and $v_{\rm ff}$ are indistinguishable, $\rho_{\rm ZEUS}$ is at a constant offset from $\rho_{\rm ff}$ because of different normalization. The green curve in the bottom-left panel indicates the free-fall temperature with only adiabatic processes. Figure taken from \citet{Barai11}.
}
\label{fig:profile}
\end{figure}
The central SMBH has a mass of $M_{\rm BH} = 10^8 M_\odot$, and the initial conditions are generated using $\gamma=1.01$, $\rho_\infty = 10^{-19}$\,g/cm$^3$, and $T_{\rm init}=T_\infty = 10^7$\,K. The corresponding Bondi radius is $R_{\rm B} = GM_{\rm BH}/c_{s,\infty}^2 = 3.0$\,pc, the Bondi time is $t_B = R_B / c_s = 7.9\times 10^3$\,yrs, and the sonic radius is $R_s = 1.5$\,pc. The total initial mass of the gas sphere is in the range of $10^5 - 10^7 M_\odot$ depending on the setup, and the particle count was varied between $64^3$ and $256^3$ for resolution tests.
The inner boundary radius is $r_{\rm in} = 0.1$\,pc from the central SMBH, and we regard a particle as accreted onto the SMBH once it crosses $r_{\rm in}$. The outer boundary radius was varied between $r_{\rm out} = 5 - 200$\,pc. For the full list of runs and parameters, see Table~1 of \citet{Barai11}.
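These characteristic scales can be checked with a few lines of arithmetic; the mean molecular weight $\mu$ is not quoted above and is our assumption ($\mu \approx 0.6$, appropriate for ionized gas, reproduces the quoted numbers):

```python
import math

# Physical constants (cgs)
G = 6.674e-8        # gravitational constant
k_B = 1.381e-16     # Boltzmann constant
m_H = 1.673e-24     # hydrogen mass
M_sun = 1.989e33
pc = 3.086e18
yr = 3.156e7

def bondi_scales(M_bh=1e8 * M_sun, T_inf=1e7, gamma=1.01, mu=0.6):
    """Characteristic scales of the Bondi problem.

    mu (mean molecular weight) is an assumption here -- it is not
    quoted in the text; mu ~ 0.6 reproduces the quoted numbers.
    """
    cs = math.sqrt(gamma * k_B * T_inf / (mu * m_H))  # sound speed at infinity
    R_B = G * M_bh / cs ** 2                          # Bondi radius
    t_B = R_B / cs                                    # Bondi time
    R_s = R_B / 2.0                                   # sonic radius (gamma -> 1 limit)
    return R_B / pc, t_B / yr, R_s / pc

R_B_pc, t_B_yr, R_s_pc = bondi_scales()
print(f"R_B = {R_B_pc:.1f} pc, t_B = {t_B_yr:.1e} yr, R_s = {R_s_pc:.1f} pc")
```

With these inputs the script returns $R_{\rm B}\approx 3.1$\,pc, $t_B \approx 8\times 10^3$\,yr and $R_s \approx 1.5$\,pc, in line with the values quoted above.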
First we find that the adiabatic simulations reproduce the expected Bondi solution very well for a limited time.
Since the gas particles near the outer boundary escape due to adiabatic expansion, after a certain time the mass accretion rate starts to decrease from the Bondi rate.
This problem stems from the difficulty of setting a boundary condition in the SPH method; it does not arise with mesh codes, where one can simply fix the outer boundary at $\rho_\infty$ and $T_\infty$.
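For reference, the Bondi rate against which the measured accretion rate is compared can be evaluated directly; the eigenvalue $\lambda \approx 1.12$ ($e^{3/2}/4$ in the isothermal limit, appropriate for $\gamma = 1.01$) and $\mu = 0.6$ are our assumptions:

```python
import math

# Physical constants (cgs); parameter values from the adiabatic runs.
G, k_B, m_H = 6.674e-8, 1.381e-16, 1.673e-24
M_sun, yr = 1.989e33, 3.156e7

M_bh, rho_inf, T_inf, gamma, mu = 1e8 * M_sun, 1e-19, 1e7, 1.01, 0.6

cs = math.sqrt(gamma * k_B * T_inf / (mu * m_H))  # sound speed at infinity
lam = 1.12    # non-dimensional eigenvalue; ~e^{3/2}/4 in the isothermal limit
mdot_bondi = 4.0 * math.pi * lam * (G * M_bh) ** 2 * rho_inf / cs ** 3  # g/s

print(f"Bondi rate: {mdot_bondi * yr / M_sun:.0f} M_sun/yr")
```

At this (deliberately high) $\rho_\infty$ the rate is of order $10^2\,M_\odot$/yr, which would consume the $10^5-10^7\,M_\odot$ gas sphere within $10^3-10^5$\,yr, consistent with the Bondi solution being reproduced only for a limited time.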
In the second set of runs with radiative cooling and heating by X-rays, we follow the approximate treatment of \citet{Blondin94} and \citet{Proga07}, assuming that the 10 keV bremsstrahlung radiation from the central SMBH is illuminating the optically thin gas.
In Figure~\ref{fig:profile}, we show the particle radial profile for various quantities in a run with $L_X=0.01 L_{\rm Edd}$, where $L_{\rm Edd}$ is the Eddington luminosity.
We find that the simulation roughly follows the Bondi flow solution even with the X-ray heating and cooling, except at $r<0.5$\,pc, where we see spurious heating due to the artificial viscosity (AV) of the SPH particles. We have confirmed that this effect is due to AV by turning it on and off. With no AV, the overheating does not occur, but the gas particles overshoot the SMBH due to the lack of viscosity and many are not accreted. The mass inflow rate at $r_{\rm in}$ in this run is enhanced over the Bondi accretion rate by a factor of a few due to cooling.
\section{Non-spherical Fragmentation due to Thermal Instability, and Outflows due to X-ray Thermal Feedback}
To study the effect of X-ray thermal feedback, we gradually increase $L_X/L_{\rm Edd}$ from $5\times 10^{-5}$ to $5\times 10^{-2}$.
In these runs, the initial condition contains 12.7 million particles for a gas sphere of $9.77\times 10^6 M_\odot$ (a gas-particle mass of $0.791 M_\odot$), an adiabatic index $\gamma = 5/3$, $r_{\rm out}=200$\,pc, $\rho_\infty = 10^{-23}$\,g/cm$^3$, and $T_\infty=10^5$\,K.
The Bondi radius for these parameters is 183.9\,pc; most of our computational volume therefore lies within the Bondi radius, and we fully resolve the flow inside it, with a minimum smoothing length of $\sim 0.1$\,pc.
In general we find that, with increasing $L_X$, the mass accretion rate at $r_{\rm in}$ decreases, and the outflow rate at $r_{\rm out}$ increases.
When $L_X$ becomes $\ge 0.01 L_{\rm Edd}$, we start to see an interesting transition from inflow to outflow, and non-spherical fragmentation of the gas into a multiphase medium takes place due to thermal instability.
The filamentary cold gas continues to flow in, and the hot gas tries to escape through the channels between cold filaments.
Examination of the flow motion shows that the filaments get stretched and fragment, and the `clouds' merge.
Such fragmentation of accreting gas can assist in the formation of clouds around AGN, induce star-formation, and contribute to the observed variability of narrow-line regions.
Figure~\ref{fig:rhotemp} shows an example of such multiphase structure in a run with $L_X = 0.01 L_{\rm Edd}$, when the outflow is still not very prominent and the hot gas is still being accreted to SMBH.
Studies with 1D \& 2D ZEUS simulations show that this fragmentation does not occur in a spherically symmetric Bondi flow, unless some perturbation is introduced by hand. The SPH simulations inherently contain tiny fluctuations in the density field due to the algorithmic nature of the method, and these can be amplified through thermal instability.
As $L_X$ is increased to $>$\,$0.01 L_{\rm Edd}$, the outflow starts to dominate over the inflow, and the hot gas escapes through the channels between cold filaments, as shown in the left panel of Figure~\ref{fig:highLx}. We find that the transition from inflow to outflow occurs in-between $L_X/L_{\rm Edd} = 0.01-0.02$, but note that this transition luminosity would depend on the value of $\rho_\infty$. In other words, the more relevant parameter for the transition is the range of photoionization parameter $\xi \propto L_X / \rho$ for the unstable branch of the $T-\xi$ equilibrium curve.
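The order of magnitude of $\xi$ in these runs can be estimated directly from $\xi = L_X/(n r^2)$; the pure-hydrogen number density and the sample radii below are illustrative assumptions:

```python
# Rough scales for the photoionization parameter xi = L_X / (n r^2).
# The pure-hydrogen number density and the radii are illustrative
# assumptions, chosen only to indicate the order of magnitude.
m_H, pc = 1.673e-24, 3.086e18

L_edd = 1.26e38 * 1e8        # Eddington luminosity of a 1e8 M_sun BH, erg/s
L_x = 0.01 * L_edd
rho_inf = 1e-23              # g/cm^3, as in these runs
n = rho_inf / m_H            # number density, pure-hydrogen assumption

for r_pc in (10.0, 100.0):
    r = r_pc * pc
    xi = L_x / (n * r ** 2)
    print(f"r = {r_pc:5.0f} pc : xi ~ {xi:.3g} erg cm/s")
```

For $L_X = 0.01 L_{\rm Edd}$, $\xi$ ranges from $\sim 10^2$ at $r \sim 100$\,pc to $\sim 10^4$ at $r\sim 10$\,pc, illustrating why the transition luminosity depends on $\rho_\infty$ through $\xi$.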
With a high enough $L_X$ ($= 0.05 L_{\rm Edd}$), a strong outflow is produced due to strong thermal feedback, and the mass outflow rate at $r=r_{\rm out}$ increases dramatically. A jet-like plume of hot and buoyant gas escapes from the central region to the outer boundary, as shown in the right panel of Figure~\ref{fig:highLx}.
These results on the non-spherical flow have been reported in the second paper of this series \citep{Barai12}, where we examined the $T-\xi$ equilibrium curve in detail and determined the unstable $\xi$-range.
\begin{figure}
\centering
\includegraphics[width=0.45 \linewidth]{Nagamine_fig2a.eps}
\includegraphics[width=0.45 \linewidth]{Nagamine_fig2b.eps}\\
\includegraphics[width=0.9 \linewidth]{Nagamine_fig2c.eps}\\
\caption{Density and temperature cross-sections in a run with $L_X / L_{\rm Edd} = 0.01$.
The top two panels show the inner $\pm 30$\,pc, and the bottom two the inner $\pm 4$\,pc of the $[x - y]$ plane through $z = 0$.
Cold, dense filamentary structure has developed due to thermal instability, which is falling into the SMBH rapidly.
The hot gas is trying to escape, but a strong outflow has not developed yet due to low $L_X$.
}
\label{fig:rhotemp}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.48 \linewidth]{Nagamine_fig3a.eps}
\includegraphics[width=0.48 \linewidth]{Nagamine_fig3b.eps}
\caption{Temperature cross-section in runs with $L_X / L_{\rm Edd} = 0.02$ (left) and 0.05 (right).
The left panel shows the inner $\pm 100$\,pc, and the right panel shows the entire computational volume of $\pm 200$\,pc.
In the left panel, the cold, dense, filamentary structure has developed due to thermal instability, which is falling into the SMBH.
The hot gas tries to escape through the channels in-between the cold filaments. In the right panel, the outflow dominates over the entire computational volume, and the hot plume of gas is escaping from the center to the outer boundary.
}
\label{fig:highLx}
\end{figure}
\section{Conclusions}
We find that the GADGET-3 SPH code can reproduce the spherically symmetric Bondi accretion flow properly, with two main limitations: 1) the gas particles escape from the outer boundary due to adiabatic expansion; 2) spurious heating is observed near the inner boundary around the SMBH due to the artificial viscosity of SPH.
We also examined the impact of radiative cooling and heating due to X-rays. The accretion flow roughly follows the Bondi solution when the central X-ray luminosity is relatively low, but the mass accretion rate at $r_{\rm in}$ is enhanced by a factor of a few over the Bondi accretion rate due to cooling.
The simulation starts to exhibit non-spherical fragmentation due to thermal instability once $L_X$ exceeds $\simeq 0.01 L_{\rm Edd}$.
At $L_X=0.02 L_{\rm Edd}$, the hot gas escapes in-between the cold filaments.
When $L_X$ is further increased to $0.05 L_{\rm Edd}$, the outflow completely dominates over the inflow, and most of the gas escapes from the computational volume. A jet-like outflow feature is also observed at this $L_X$, which selects a preferential direction for hot gas to escape rapidly.
Such non-spherical features of accreting gas can assist in the formation of clouds around AGN, induce star-formation, and contribute to the observed variability of narrow-line regions.
In the future, we plan to implement rotation, radiation pressure, and different initial geometry of gas distribution.
We will measure the efficiencies of thermal, radiative, and kinetic feedback, and compare them with those measured by \citet{Kurosawa09b} and those used in the cosmological simulations.
\acknowledgements We are grateful to V. Springel for allowing us to use the GADGET-3 code.
This work is supported in part by the NSF grant AST-0807491, National Aeronautics and Space Administration under Grant/Cooperative Agreement No. NNX08AE57A issued by the Nevada NASA EPSCoR program, and the President's Infrastructure Award from UNLV. DP also acknowledges the UNLV sabbatical assistance.
This research is also supported by the NSF through the TeraGrid resources provided by the Texas Advanced Computing Center. Some numerical simulations and analyses have been performed on the UNLV Cosmology Cluster.
\section{Introduction}
\label{intro}
The Transiting Exoplanet Survey Satellite (TESS) is a NASA Astrophysics Explorer mission \citep{2014SPIE.9143E..20R}, scheduled for launch at the end of 2017 and with a nominal mission duration of 2 years. TESS may be seen as the successor to the NASA \textit{Kepler}\xspace mission \citep[][]{2010Sci...327..977B}, and will, like \textit{Kepler}\xspace, search for exoplanets using the transit method --- here, a planet is identified from the dimming produced when it passes in front of its host star. Different from \textit{Kepler}\xspace, TESS will focus on the nearest and brightest stars in the sky, allowing for detailed follow-up observations, and will over its nominal mission cover nearly the full sky.
The primary science goal of \textit{Kepler}\xspace was to determine the frequency of Earth-like planets in and near the habitable zone of solar-type stars \citep[][]{2010Sci...327..977B}; TESS will instead focus on finding exoplanets smaller than Neptune where a detailed characterization is possible from follow-up observations.
With the advent of the space-based missions CoRoT \citep[][]{2002ESASP.485...17B} and \textit{Kepler}\xspace, the field of asteroseismology has flourished over the last decade \citep[][]{2015EPJWC.10100001G}. The reason for this advancement is that the photometric requirements needed for detecting transiting exoplanets coincide with those needed for asteroseismology, to wit, photometric observations of long duration and high precision. This synergy was realized early on for both the CoRoT and \textit{Kepler}\xspace missions, and led for \textit{Kepler}\xspace to the formation of the \textit{Kepler}\xspace Asteroseismic Investigation (KAI). Via the \textit{Kepler}\xspace Asteroseismic Science Consortium \citep[KASC;][]{2010AN....331..966K} this provided direct access to the data from \textit{Kepler}\xspace and helped to organize the work within the broad asteroseismic community.
Building on the success of KASC, the asteroseismic studies in TESS will be organized in the TESS Asteroseismic Science Consortium \citep[TASC;][]{tasc_doc}.
In the following we will focus on the preparation of data from TESS for the sake of asteroseismology.
\section{The TESS mission}
\label{tess}
Over its nominal mission TESS will observe the full sky, starting in the southern hemisphere. The total field of view (FOV) of the four cameras of TESS (each with 4 CCDs) will cover a rectangular slab of the sky spanning $24^{\circ}\times 96^{\circ}$, starting from an ecliptic latitude of ${\sim}6^{\circ}$. A given $24^{\circ}\times 96^{\circ}$ field will be observed for ${\sim}27$ days, corresponding to two orbits of the TESS spacecraft in its highly elliptical 13.7-day lunar-resonance orbit --- we refer to such a field as an observing `Sector'. Given the observing strategy adopted in TESS, some regions will be observed for longer than ${\sim}27$ days. Most notable are the regions within $12^{\circ}$ of the ecliptic poles, which will be observed continuously; these are the so-called continuous viewing zones (CVZs).
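A toy model illustrates how this observing strategy translates into time coverage; the great-circle-strip geometry below ignores chip gaps and real pointing details and is purely illustrative:

```python
import math

def sectors_observed(ecl_lon_deg, ecl_lat_deg, n_sectors=13,
                     strip_half_width_deg=12.0, lat_min_deg=6.0):
    """Toy model of TESS sector coverage in one ecliptic hemisphere.

    A star is counted as observed in a sector if it lies inside that
    sector's 24-degree-wide great-circle camera strip; stars within
    12 degrees of the ecliptic pole are always observed (the CVZ).
    Chip gaps and real pointing details are ignored.
    """
    beta = math.radians(ecl_lat_deg)
    count = 0
    for i in range(n_sectors):
        if ecl_lat_deg >= 90.0 - strip_half_width_deg:
            count += 1                       # continuous viewing zone
            continue
        lam_c = i * 360.0 / n_sectors        # sector central longitude
        dlam = (ecl_lon_deg - lam_c + 180.0) % 360.0 - 180.0
        if ecl_lat_deg < lat_min_deg or abs(dlam) > 90.0:
            continue
        # angular distance from the sector's central meridian
        d = math.degrees(math.asin(abs(math.cos(beta)
                                       * math.sin(math.radians(dlam)))))
        if d <= strip_half_width_deg:
            count += 1
    return count

print(sectors_observed(0.0, 89.0))   # near-pole star: all 13 sectors
print(sectors_observed(0.0, 10.0))   # low-latitude star: 1 sector in this model
```

In this simple model a star near the ecliptic pole is seen in all 13 sectors of its hemisphere (${\sim}1$ yr of coverage), while a low-latitude star is seen in at most one or two ${\sim}27$-day sectors.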
Observing cadences will come at 20 and 120 seconds, and full-frame-images (FFIs) will be obtained every 30 minutes.
Over the course of the nominal 2 year mission the number of stars observed in 20-sec and 120-sec cadences will exceed $200{,}000$, and data for ${>}20{,}000{,}000$ stars are predicted from the 30-min FFIs. The pixels in TESS are, with a size of $21.1''$, significantly larger than those of \textit{Kepler}\xspace, which measured $3.98''$. However, the pixel response function in TESS is very similar to that of \textit{Kepler}\xspace, with ${\sim}50\%$ of light contained within 1 pixel, and ${\sim}90\%$ contained within $4\times 4$ pixels. The band-pass of TESS, roughly spanning the interval from $600-1000$ nm and centred on the $I_C$ band, is redder than that of \textit{Kepler}\xspace which was centred on the $R_C$ band (see \fref{fig-0}). At short wavelengths the TESS spectral response function is dominated by a long-pass filter transmission, and by the CCD quantum efficiency at long wavelengths.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{MikkelNLund_Fig1}
\caption{Spectral response functions $\mathcal{S}_{\lambda}$ for \textit{Kepler}\xspace \citep{VCleve} and TESS \citep{2014SPIE.9143E..20R}, normalised to a maximum of 1. Shown are also the standard Johnson-Cousins $UBVR_CI_C$ photometric systems from \cite[][]{1990PASP..102.1181B}, normalised to maximum values of 0.6.}
\label{fig-0}
\end{figure}
Considering the number of stars observed and the larger number of pixels on average devoted to each of these (${\sim}100$ pixels vs. ${\sim}32$ in \textit{Kepler}\xspace), the data rate for TESS from 120-sec cadence data will be a factor of ${\sim}13$ that of \textit{Kepler}\xspace. If FFIs are included the data rate rises to a factor of ${\sim}25$ that of \textit{Kepler}\xspace (Jenkins et al., in prep.).
Data will be down-linked every $13.7$-days when the TESS spacecraft reaches the perigee of its orbit. Here data will be transferred from TESS to the Deep Space Network (DSN), which will act as the relay for the TESS observations.
\section{The TESS Asteroseismic Investigation}
\label{tai}
As mentioned in \sref{intro}, the \textit{Kepler}\xspace Asteroseismic Investigation (KAI) was organized within the broad international community in the KASC.
Building on this, the TESS Asteroseismic Investigation (TAI) will be organized in the TESS Asteroseismic Science Consortium \citep[TASC;][]{tasc_doc}.
Like KASC, the investigations within TASC will be divided between a number of Working Groups (WGs), each of which deals with the utilization of data for a specific
group of objects. Each WG will have two co-chairs who will have the overall responsibility for the running of the WG, and these will be members of the TASC steering committee (SC).
The TASC-SC, including also the TASC Board, is responsible for the overall running of TASC and will report to the TESS team on issues pertaining to target selection.
TASC will furthermore organize workshops aiming at target selection, science collaboration and data analysis.
Data and communication platforms for the WGs will be facilitated for TASC via the TESS Asteroseismic Science Operations Center (TASOC)\footnote{\url{tasoc.dk}}, hosted at the Stellar Astrophysics Centre (SAC) at Aarhus University, Denmark. TASOC will furthermore provide long-term storage of all data products.
By and large, TASOC will copy the facilities of the \textit{Kepler}\xspace Asteroseismic Science Operations Centre (KASOC)\footnote{\url{kasoc.phys.au.dk}}.
Membership of TASC is open and any member of TASC can apply to become a member of a given WG.
The WG-0 ``\textit{TASOC – Basic photometric algorithms and calibration of time / TASC data products}'' will, as the name suggests, be responsible for maintaining the TASOC portal and the timely provision of data products for the whole of TASC. In \sref{sec-1} below we outline the different main tasks and responsibilities of WG-0.
\section{WG-0 tasks}
\label{sec-1}
WG-0 will have the overall responsibility for delivering analysis-ready data for asteroseismology to TASC in a timely fashion.
For each 27-day pointing, ${\sim}750$ targets at 120-sec cadence, and ${\sim}60$ targets at a 20-sec cadence, will be available for asteroseismology.
WG-0 is, however, committed to the preparation of data for all targets with 120-sec and 20-sec cadences, not only those designated for asteroseismology.
Additionally, WG-0 will analyse the 30-min FFIs in order to facilitate the detection of oscillations in red giants, SPBs, RR Lyraes, $\beta$ Cep stars, Cepheids, etc., and will also produce light curves for eclipsing binaries. To produce optimally prepared data for the many different types of studies conducted within TASC, WG-0 will maintain close collaborations with the other WGs of TASC.
The TESS Science Processing Operations Center (SPOC) will process all 120-sec targets in the same manner as done by the Science Operations Center (SOC) for \textit{Kepler}\xspace. This includes, for instance, the calibration of pixels, extraction of photometry and astrometry, definition of optimal pixel masks for aperture photometry, correction for systematic errors, etc. --- \ie, an end-to-end analysis.
For FFIs, SPOC is only committed to calibrating and archiving the pixels, while no corrections will be done at all for 20-sec data products (see \sref{sec-1.1}).
Data products from both WG-0 and SPOC will be modelled after those from \textit{Kepler}\xspace (Jon Jenkins, private comm.).
\subsection{20-sec-specific data correction}
\label{sec-1.1}
The 20-sec cadence data have been included amongst the cadences employed by TESS primarily for the sake of asteroseismology. The 20-sec cadence will be especially useful for studies of high-frequency oscillators, such as white dwarfs and some main-sequence solar-like oscillators.
Because this sampling has been introduced for asteroseismology, only fully raw data will be delivered by the TESS team.
WG-0 will then be responsible for the full calibration and analysis of these data, including basic corrections for 2D black levels; detector gain/linearity; smear; flat-fielding; and the removal of cosmic rays.
\subsubsection{Cosmic rays}
\label{sec-1.1.1}
For 120-sec data and the 30-min FFIs, cosmic-ray (CR) signals will be mitigated on-board before the cadences are created from the 2-sec integrations in TESS.
The idea for this mitigation is, at the time of writing, to identify outliers in the 2-sec light curves of individual pixels. If a given pixel is found to be affected by CRs, the identified 2-sec samplings are removed before the data are co-added to the 120-sec and 30-min cadences.
Given that the 20-sec data will consist of only 10 such 2-sec integrations, it has been decided that removing the CRs from the co-added data on the ground is preferable. For every 20-sec cadence there is a ${\sim}1.7\%$ chance per pixel of a CR hit.
WG-0 will before launch need to identify suitable methods for such a correction.
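One simple candidate approach --- our sketch, not an adopted WG-0 algorithm --- is to flag positive outliers of each pixel's 20-sec time series against a running median:

```python
import numpy as np

def flag_cosmic_rays(flux, window=5, nsigma=5.0):
    """Flag CR-like positive outliers in one pixel's 20-sec time series.

    A running median serves as the baseline; samples more than `nsigma`
    robust sigmas above it are flagged. Illustration only.
    """
    n = len(flux)
    half = window // 2
    baseline = np.array([np.median(flux[max(0, i - half):min(n, i + half + 1)])
                         for i in range(n)])
    resid = flux - baseline
    sigma = 1.4826 * np.median(np.abs(resid - np.median(resid)))  # MAD-based std
    return resid > nsigma * max(sigma, 1e-12)

rng = np.random.default_rng(1)
flux = 1000.0 + rng.normal(0.0, 3.0, 500)    # quiet pixel, Gaussian noise
flux[[50, 200]] += 400.0                     # two injected cosmic-ray hits
hits = np.where(flag_cosmic_rays(flux))[0]
print(hits)                                  # indices of the injected hits
```

Because a single CR in TESS can leave a trail across many pixels, any per-pixel flagging like this would have to be combined with spatial information from neighbouring pixels.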
It is worth noting that CRs in TESS will impact the photometry in a manner quite different from that in \textit{Kepler}\xspace, because of the difference in pixel geometry between the two missions. In TESS, the pixels have a width of $\rm 15\mu m$ and a depth of $\rm 100\mu m$, whereas \textit{Kepler}\xspace used pixels with a width of $\rm 27\mu m$ and a depth of $\rm 15\mu m$. The reason for this choice is the desire for a high spectral response at long wavelengths (\fref{fig-0}), which requires significantly deeper pixels due to the quantum efficiency of the detector material.
The deeper pixels, however, mean that the cross-section of the detector for an incoming CR is much larger than in \textit{Kepler}\xspace. \fref{fig-1} shows a simulated pixel field at two different 20-sec cadences, where one (right panel) is affected by a CR. Whereas such an event in \textit{Kepler}\xspace would likely only have affected a single pixel, in TESS it can produce a trail which impacts many pixels.
\begin{figure}
\centering
\includegraphics[trim={4cm 1.7cm 4.5cm 0},clip, width=0.45\columnwidth]{MikkelNLund_Fig2}
\hfill
\includegraphics[trim={4cm 1.7cm 4.5cm 0},clip, width=0.45\columnwidth]{MikkelNLund_Fig3}
\caption{Potential effect of CRs in TESS. Shown are two 20-sec cadences of the same simulated pixel-field, one of which (right) are affected by a CR producing a trail impacting many pixels.}
\label{fig-1}
\end{figure}
\subsection{Sky backgrounds}
\label{sec-1.2}
For 20-sec data and FFIs WG-0 will need to estimate sky-background (SB) levels.
The non-instrumental SB is mainly composed of the contribution from the diffuse background of unresolved stars and galaxies and the sky glow from Zodiacal light, which depends especially on ecliptic latitude \citep[see, \eg,][]{2015ApJ...809...77S}. Before launch, WG-0 will work towards a proper and robust estimation of the SB for the highly diverse fields covered by TESS, going from near-ecliptic to polar and from very sparse to very dense (including regions containing stellar clusters, see \fref{fig-2}).
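As a baseline for such work, a simple iterative sigma-clipped estimate of a flat sky level might look as follows (a deliberately naive sketch; real TESS fields will require a spatially varying background model):

```python
import numpy as np

def estimate_background(image, nsigma=3.0, max_iter=10):
    """Iterative sigma-clipped median estimate of a flat sky level.

    Pixels more than `nsigma` standard deviations from the current
    median are rejected until the mask converges. Illustration only;
    real TESS fields need a spatially varying background model.
    """
    data = np.asarray(image, dtype=float).ravel()
    mask = np.ones(data.size, dtype=bool)
    for _ in range(max_iter):
        med = np.median(data[mask])
        std = np.std(data[mask])
        new_mask = np.abs(data - med) < nsigma * std
        if new_mask.sum() == mask.sum():
            break
        mask = new_mask
    return np.median(data[mask])

rng = np.random.default_rng(2)
img = rng.normal(100.0, 5.0, (64, 64))       # flat sky plus noise
img[10:14, 10:14] += 5000.0                  # one bright contaminating star
bg = estimate_background(img)
print(f"estimated sky level: {bg:.1f}")      # close to the true value of 100
```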
\subsection{Extracting photometry}
\label{sec-1.3}
WG-0 is committed to extracting light curves for all possible sources in the 20-sec, 120-sec, and 30-min FFI data.
As mentioned in \sref{intro}, this will over the course of the nominal 2-year mission amount to ${>}200{,}000$ stars from 20-sec and 120-sec cadences, and ${>}20{,}000{,}000$ stars from the 30-min FFIs.
This number of targets, coupled with the requirement of a timely processing, means that the pipeline constructed for this task will need to be both fast and robust.
The pipeline will also have to be flexible in terms of its ability to process very diverse fields, including dense fields close to the ecliptic, nebulous regions with high contamination from the SB, and open as well as globular clusters (\fref{fig-2}). It will be especially interesting in the pre-flight tests (\sref{sec-2}) to see what can be expected for studies of star clusters given the relatively large TESS pixels.
Many methods exist for extracting photometry from CCD images, including aperture, point-spread-function (PSF), and so-called optimal photometry \citep[][]{1987PASP...99..191S,1989PASP..101..616H,1998MNRAS.296..339N,2005PASP..117.1113M}. Some of these have already been adapted, or extended upon, for the \textit{Kepler}\xspace and K2 missions \citep[][]{2010ApJ...713L..97B,2015ApJ...806...30L,2015MNRAS.447.2880A,2016MNRAS.456.1137L,2016MNRAS.tmp.1279N}.
Each of the methods have their pros and cons --- aperture photometry is by far the simplest and fastest method, but deciding the optimal size and shape of the aperture is not always straightforward, and it is far from optimal for dense and crowded regions; optimal photometry can provide a more accurate extraction, but it is slower, requires knowledge (albeit not particularly accurate) of the PSF, and is still not optimal for dense and crowded regions; PSF photometry is optimal for dense and crowded regions, but requires accurate knowledge of the PSF and is again slower than aperture photometry. Concerning the PSF, it is worth noting that the TESS PSF will include both off-axis aberrations and chromatic aberrations arising both from the refractive elements of the TESS camera and from the deep-depletion CCDs, absorbing redder photons deeper in the silicon.
All these aspects of the different possible methods must be considered in a final pipeline --- ideally, each method should be thoroughly tested on realistic simulated data, considering here also the hardware requirements that will be needed to keep up with the high data rates of TESS.
In the end, light curves may well have to be extracted with a range of different methods, depending on the type or crowding of the field under study. Another option might be to run several methods for all fields, with the optimum choice of extracted photometry being made only after the fact.
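Of these, simple aperture photometry is easily sketched; the Gaussian PSF and all numbers below are hypothetical stand-ins, not the real TESS PSF:

```python
import numpy as np

def aperture_photometry(image, x0, y0, radius, sky):
    """Sum of sky-subtracted flux inside a circular aperture.

    Pixels are in or out based on their centers (no partial-pixel
    weighting); a minimal sketch of the simplest method above.
    """
    yy, xx = np.indices(image.shape)
    in_ap = (xx - x0) ** 2 + (yy - y0) ** 2 <= radius ** 2
    return float(np.sum(image[in_ap] - sky))

# A synthetic star: Gaussian PSF (a stand-in for the real TESS PSF)
yy, xx = np.indices((32, 32))
star = 500.0 * np.exp(-((xx - 16) ** 2 + (yy - 16) ** 2) / (2 * 1.5 ** 2))
img = star + 10.0                            # flat sky level of 10 counts
flux = aperture_photometry(img, 16, 16, 6.0, sky=10.0)
print(f"{flux:.0f}")   # close to the total Gaussian flux 2*pi*500*1.5**2 ~ 7069
```

Already here the trade-offs above are visible: the recovered flux depends on the aperture radius and on the accuracy of the sky estimate, and nearby stars falling inside the aperture would contaminate it.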
\subsection{Light curve preparation}
\label{sec-1.4}
Following the extraction of raw light curves from pixel data, WG-0 will for each star produce an analysis-ready light curve for asteroseismology, corrected for any instrumental features.
From \textit{Kepler}\xspace we know that instrumental features can come in many forms \citep[][]{2010ApJ...713L.120J,2011MNRAS.414L...6G,2014MNRAS.445.2698H}: jumps from drops in pixel sensitivity, or from differences in sensitivities between the CCDs that a given star might land on (such shifts in CCD position happened every Quarter in \textit{Kepler}\xspace, and will also occur with TESS for stars with observing durations exceeding the ${\sim}27$ days of an observing Sector); secular changes from variations in focus (\eg from a change in solar heating of the spacecraft), or drifts either in pointing or from differential velocity aberration; abrupt changes after safe-mode events or data down-links (which will happen every 13.7 days with TESS); and transient events such as the Argabrightening events found in \textit{Kepler}\xspace \citep[][]{2011SPIE.8151E..17W}, CRs, or momentum dumps in the reaction wheels orienting the spacecraft.
Currently, we can only speculate about the instrumental features that will be found in TESS, but it is near certain that some features will be found.
The instrumental features that might be found cannot simply be rectified in the same manner for all types of stars under study by TASC (including solar-like oscillators, RR Lyraes, white dwarfs, eclipsing binaries, etc.).
When observing a given star, the observed signal will be a mix of physical and instrumental contributions. Given that the time scales, amplitudes, and phase stability of the physical component will depend on the type of star observed, and thus also on its overlap with the instrumental signals, the method for isolating the instrumental contribution and preserving the astrophysical signal will in effect also depend on the stellar type.
The idea in WG-0 is to build on the collective knowledge of the community by bringing together people with expertise on the data preparation for different stellar types \citep[see, \eg,][]{2011MNRAS.414L...6G,2011MNRAS.411..878K,2012PASP..124..963K,2013EPSC....8..599D,2014MNRAS.445.2698H,2016AJ....151...68K}. Many methods for rectifying light curves for analysis were developed during the \textit{Kepler}\xspace mission, and more recently for the re-purposed K2 mission \citep[see, \eg,][]{2014PASP..126..398H,2014PASP..126..948V,2015ApJ...806...30L,2016MNRAS.455L..36P,2016van.cleve.pasp,2016MNRAS.459.2408A}.
WG-0 will develop a data-correction pipeline that adopts a star-based approach to the mitigation of instrumental effects; this will build on pipelines developed during the \textit{Kepler}\xspace mission for specific types of stars. For the pipeline it is worth keeping the high data rate of TESS in mind --- not only should the pipeline be robust and able to handle a diverse range of stellar types, it should also be fast enough to allow for a timely facilitation of processed data.
Several versions of light curves will be available via TASOC for a given star, including a raw uncorrected light curve; a `standard' light curve where the correction method adopted is the same for all stars; and a star-type customized light curve (based on the inputs and request of the TASC community).
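As a minimal illustration of this star-based philosophy, a naive running-median detrend --- with the window length as the stellar-type-dependent tuning knob --- could look like this (a sketch only, not the WG-0 pipeline):

```python
import numpy as np

def detrend(time, flux, window_days=5.0):
    """Divide out slow trends with a running median.

    Variability on timescales much longer than `window_days` is treated
    as instrumental; a deliberately naive sketch -- in practice the
    window must be matched to the stellar type, as discussed above.
    """
    rel = np.empty_like(flux)
    for i, t in enumerate(time):
        sel = np.abs(time - t) <= window_days / 2.0
        rel[i] = flux[i] / np.median(flux[sel])
    return rel                                     # relative flux, trend removed

t = np.arange(0.0, 27.0, 0.5 / 24.0)               # one 27-d Sector, 30-min cadence
osc = 1.0 + 1e-3 * np.sin(2 * np.pi * t / 0.5)     # a fast stellar signal
trend = 1.0 + 0.02 * (t / 27.0) ** 2               # slow instrumental drift
rel = detrend(t, osc * trend)
```

A short-window filter like this would preserve the fast oscillation while removing the drift, but the same window applied to a slowly varying star (\eg a long-period Cepheid) would destroy the astrophysical signal --- which is exactly why the correction must be star-type dependent.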
\subsection{Absolute timing}
\label{sec-1.5}
The TESS on-board clock should be accurate and stable to better than ${\sim}5$ msec. To obtain a similar accuracy on the time stamps in Barycentric Julian Days (BJD) in the Earth frame, the correction for the light travel time between the spacecraft and the DSN should be accurate to the same level. This will be achieved by knowing the 3D position of the TESS spacecraft in space to a high level of accuracy (1500 km, corresponding to a light travel time of 5 msec).
However, delays may occur in the ground system (\eg after data down-links or safe-mode events) that cannot be accounted for without an independent assessment of any temporal shifts.
For the sake of ground-based follow-up observations, \eg of transiting exoplanet hosts, it is naturally worth knowing the absolute time stamps of the data.
Requirements on the accuracy of the absolute timing comes also from asteroseismology \citep[][]{tasc_time}:
\begin{itemize}
\item[$\circ$] To reach the highest possible photometric quality from 120-sec observations, and the photon noise limit for the brightest stars, the absolute timing needs to be accurate and stable to better than 5 msec.
\item[$\circ$] To reach the theoretical accuracy of high-amplitude coherent oscillations one needs the time at which each exposure is obtained to be very accurate over the period of a ($27$-day) observing Sector. For coherent pulsation modes this requires that the length of exposure is accurate over an observing Sector to better than 5 msec.
\item[$\circ$] To allow comparisons between ground-based observations with those from TESS, one needs to be able to estimate the absolute time of a given photometric data point and establish a stable reference (e.g. central time of a given observation). For coherent pulsation modes the absolute time (in HJD/BJD) should be known to better than 0.5 sec; for solar-like oscillations the required accuracy is better than 1 sec over a ${\sim}10$ day period.
\end{itemize}
For the calculations leading to these estimates see \cite[][]{tasc_time}.
The TESS team will make the corrections based on calculated light travel times; WG-0 is then committed to making independent checks of the absolute time stamps.
The regular calibrations will be achieved by performing contemporaneous TESS and ground-based observations of several objects whose photometry varies rapidly in time, such as bright, deep, detached eclipsing binaries.
The absolute time shift, if any, can then be determined by cross-correlating the contemporaneous time series. The ideal objects for these checks will be found in the CVZs of TESS.
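The cross-correlation step can be sketched in a few lines of numpy. The eclipse-like dip and the 0.05-day offset below are invented purely for illustration; a real calibration would of course use actual contemporaneous light curves:

```python
import numpy as np

def estimate_time_shift(t, flux_a, flux_b):
    """Estimate the lag (in units of t) by which the evenly sampled
    series flux_a trails flux_b, from the peak of their cross-correlation."""
    a = flux_a - flux_a.mean()
    b = flux_b - flux_b.mean()
    cc = np.correlate(a, b, mode="full")
    lag = int(np.argmax(cc)) - (len(b) - 1)
    return lag * (t[1] - t[0])

# Toy eclipse observed from the ground and from "space", with the
# space-based series offset by 0.05 days.
t = np.arange(0.0, 10.0, 0.01)
dip = lambda t0: 1.0 - 0.3 * np.exp(-0.5 * ((t - t0) / 0.2) ** 2)
ground, space = dip(4.0), dip(4.05)
shift = estimate_time_shift(t, space, ground)  # recovers ~ +0.05
```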
The work on the absolute timing issue will be handled by a dedicated sub-group of WG-0.
As the checks of absolute times should be done regularly, and possibly after any data down-link or safe-mode event, the sub-group will have to be able to respond and obtain ground-based data on short notice.
WG-0 will here depend on members of the TASC community with access to ground-based facilities.
\subsection{Stellar classification}
\label{sec-1.6}
An additional sub-group will be formed under WG-0 to perform the classification of stars observed with TESS.
The classification is important to select the proper course of action in rectifying a given light curve for asteroseismic studies (\sref{sec-1.4}).
WG-0 will study how best to classify stars from the raw photometric data from TESS --- this will be achieved using techniques from machine learning \citep[see, \eg,][]{2009A&A...506..519D,2016MNRAS.456.2260A,2016MNRAS.459.3721B,2016MNRAS.457.3119D}, which will be tested on simulated TESS data before launch.
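As a toy illustration of the idea (the two features, the class names, and the nearest-centroid rule below are our own simplifications for this example; the actual pipeline will use richer feature sets and more sophisticated classifiers):

```python
import numpy as np

def fit_centroids(features, labels):
    """One centroid per class from labelled training features."""
    classes = sorted(set(labels))
    labels = np.array(labels)
    centroids = np.array([features[labels == c].mean(axis=0) for c in classes])
    return classes, centroids

def predict(classes, centroids, features):
    """Assign each feature vector to the class of the nearest centroid."""
    d = np.linalg.norm(features[:, None, :] - centroids[None, :, :], axis=2)
    return [classes[i] for i in d.argmin(axis=1)]

# Fake training set: (log variance, log dominant frequency) per star.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal([0.0, 2.0], 0.3, size=(50, 2)),   # "solar-like"
               rng.normal([2.0, 0.0], 0.3, size=(50, 2))])  # "classical"
y = ["solar-like"] * 50 + ["classical"] * 50
classes, centroids = fit_centroids(X, y)
pred = predict(classes, centroids, X)
```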
\begin{figure*}[!tp]
\centering
\includegraphics[width=0.44\textwidth]{MikkelNLund_Fig4}
\hfill
\includegraphics[width=0.44\textwidth]{MikkelNLund_Fig5}
\caption{Simulated pixels fields from \texttt{SPyFFI} of the Large Magellanic Cloud (LMC; left) and the globular cluster $\omega$ Centauri (NGC 5139; right).}
\label{fig-2}
\end{figure*}
\section{Pre-flight tests}
\label{sec-2}
In order for WG-0 to be able to construct a data processing pipeline that is ready when the first data from TESS are received, numerous tests will be conducted on simulated data (\sref{sec-2.1}).
\subsection{Pixel-data simulation}
\label{sec-2.1}
Pre-flight analysis will be performed on simulated TESS pixel data made using the ``Spiffy Python for Full Frame Images'' (\texttt{SPyFFI}) simulator.
The simulator was created at the Massachusetts Institute of Technology (MIT) by Zachory K. Berta-Thompson (private comm.).
As the name suggests, \texttt{SPyFFI} is a Python-based code for simulating TESS pixel data, including FFIs.
To simulate a given field, \texttt{SPyFFI} uses a user-specified input catalogue with stellar positions and magnitudes. The UCAC4 \citep[][]{2013AJ....145...44Z} catalogue is currently used, but eventually the TESS Input Catalog \citep[TIC;][]{TIC} will be adopted. \texttt{SPyFFI} includes realistic models for the TESS pixel response, differential velocity aberration, cosmic rays, spacecraft jitter, focus changes, and sky backgrounds (and the parameters of all of these contributions can be adjusted to test methods from best- to worst-case scenarios).
\fref{fig-2} gives examples of two simulated TESS pixel fields, one of the Large Magellanic Cloud (LMC) and one of the $\omega$ Centauri globular cluster.
\texttt{SPyFFI} furthermore has the option of assigning a simulated light curve to a given star in a given field.
These light curves can include transits, eclipses, spot modulations, and/or oscillations.
The light curves with solar-like oscillations and granulation signals are produced using the asteroFLAG simulator \cite[][]{2008AN....329..549C}; light curves for classical oscillators have been constructed
with frequencies, phases, and amplitudes from such stars observed by \textit{Kepler}\xspace (Vichi Antoci and Steven Kawaler, private comm.).
\subsection{T'DA workshop series}
\label{sec-2.2}
To address the issues of TESS data preparation for asteroseismology, WG-0 is organizing the workshop series ``TESS Data for Asteroseismology'' (T'DA).
The idea is to bring together people from the broad community, who either have expertise from missions such as \textit{Kepler}\xspace or CoRoT, or who are students planning to work on data analysis issues.
The T'DA series is planned to include, at least, workshops dedicated to (1) extracting light curves from pixel data; (2) correcting light curves for the optimal output from asteroseismic analysis; and (3) stellar classification.
The first workshop (T'DA1), entitled ``From Pixels to Light Curves'', will be held at the University of Birmingham, UK, from 31st Oct. to 2nd Nov. 2016.
\subsection*{Acknowledgements}
{\small The authors would like to thank the organisers of the ``Seismology of the Sun and the Distant Stars 2016 --- Using Today’s Successes to Prepare the Future'' (SpaceTK16) conference, a joint TASC2-KASC9 Workshop -- SPACEINN \& HELAS8 Conference, where MNL presented a talk on the contents of these proceedings.
Funding for the Stellar Astrophysics Centre (SAC) is provided by The Danish National Research Foundation (Grant DNRF106). MNL acknowledges the support of The Danish Council for Independent Research | Natural Science (Grant DFF-4181-00415). WJC acknowledges the support of the UK Science and Technology Facilities Council (STFC).
}
\section{Introduction}
The Bombieri-Vinogradov Theorem states that for any $A>0$ there is a $B=B(A)$ such that
\begin{equation}
\sum_{q\le x^{1/2}/(\log{x})^B}\sup_{(a,q)=1}\Bigl|\pi(x;q,a)-\frac{\pi(x)}{\phi(q)}\Bigr|\ll_A \frac{x}{(\log{x})^A},
\label{eq:BV}
\end{equation}
thereby showing equidistribution of primes up to $x$ in arithmetic progressions on average over moduli $q$ a bit smaller than $x^{1/2}$. For the purpose of many applications in analytic number theory this serves as an adequate substitute for the Generalized Riemann Hypothesis.
One useful technical feature of \eqref{eq:BV} is that it is completely uniform over the residue classes which appear. In particular, for $q$ outside of some small bad set of moduli, \textit{every} residue class $\Mod{q}$ contains roughly the expected number of primes. There are $\exp(x^{1/2+o(1)})$ possible collections of different residue classes $a\Mod{q}$ with $q\le x^{1/2}/(\log{x})^B$, and all of these are considered in \eqref{eq:BV}.
It is expected that one should be able to extend the range of moduli in \eqref{eq:BV} to a summation over all $q\le x^{1-\epsilon}$ (this is the Elliott-Halberstam Conjecture - see \cite{ElliottHalberstam}), but simply extending the summation to moduli larger than $x^{1/2}$ remains an important open problem.
Some important progress was made in a series of works by Bombieri, Fouvry, Friedlander and Iwaniec \cite{BFI1,BFI2,BFI3,Fouvry,Fouvry2,FouvryIwaniec,FouvryIwaniec2}, who produced variants of \eqref{eq:BV} which held for moduli $q$ of size $x^{1/2+\delta}$ (for some small $\delta>0$) at the cost of imposing some additional restrictions. One limitation of these results was that the estimates put significant restrictions on the residue classes $a\Mod{q}$ which appeared. Any method exploiting bounds for sums of Kloosterman sums via the spectral theory of automorphic forms \cite{DeshouillersIwaniec} necessarily introduces a dependence on the residue class appearing, and this essentially restricts one to only considering the same residue class $a\ll x^\epsilon$ for all moduli $q$. In such works there are therefore only $O(x^{1/2+\delta+\epsilon})$ collections of residue classes under consideration. This limitation on uniformity of residue classes was the key reason that these works were not applicable to the work of Goldston-Pintz-Y\i ld\i r\i m \cite{GPY}, which would produce bounded gaps between primes if one could obtain a suitable variant of \eqref{eq:BV} for moduli of size $x^{1/2+\delta}$.
The key technical innovation in the breakthrough work of Zhang \cite{Zhang} on bounded gaps between primes was a variant of \eqref{eq:BV} for smooth moduli which was more uniform with respect to the residue classes considered. Zhang's work took a fixed polynomial $f$, moduli $q$ of size $x^{1/2+\delta}$ having no prime factors bigger than $x^{\delta/2}$, and allowed one to consider all residue classes $a\Mod{q}$ with $f(a)\equiv 0\Mod{q}$. This estimate was sufficiently uniform to combine with the work of Goldston-Pintz-Y\i ld\i r\i m to give bounded gaps between primes. An important technical feature enabling this uniformity was that rather than relying on estimates from the spectral theory of automorphic forms, Zhang ultimately relied only on exponential sum estimates coming from algebraic geometry, which have the benefit of being much more uniform with respect to the residue classes.
Zhang's work was refined further by the Polymath project \cite{Polymath}, who showed that a variant of his methods allowed one to produce an estimate where the residue class $a$ was the same for all moduli, but otherwise the estimate was completely uniform. Specifically, they showed that for suitably small $\delta>0$
\begin{equation}
\sup_{a\in\mathbb{Z}}\sum_{\substack{q\le x^{1/2+\delta}\\ (q,a)=1\\ p|q\Rightarrow p\le x^{\delta} }}\Bigl|\pi(x;q,a)-\frac{\pi(x)}{\phi(q)}\Bigr|\ll_{A} \frac{x}{(\log{x})^A}.
\label{eq:Polymath}
\end{equation}
Such an estimate considers $\exp(x^{\delta+o(1)})$ different residue classes in total, which is fewer than in the Bombieri-Vinogradov Theorem \eqref{eq:BV}, but considerably more than in the other results on primes in arithmetic progressions to moduli beyond $x^{1/2}$.
The aim of this paper is to produce variants of \eqref{eq:BV} with moduli of size $x^{1/2+\delta}$ with a similar quality of uniformity with respect to the residue classes under consideration as the original Bombieri-Vinogradov Theorem. As with many of the previous works, our methods require us to restrict ourselves to moduli $q$ which have factors of a convenient size.
Our first estimate allows us to consider $q\sim x^{1/2+\delta}$ with complete uniformity, provided we restrict ourselves to $q$ with a factor of size close to $x^{1/10}$, and we satisfy ourselves with having a weaker error term.
\begin{thrm}[Uniform equidistribution of primes with weak error term]\label{thrm:WeakEquidistribution}
Let $C>0$ be a sufficiently large absolute constant and $\delta>0$. Let $Q_1\le x^{1/10-3\delta}/(\log{x})^C$ and $Q_2\le x^{4/10+4\delta}(\log{x})^C$. Then we have
\[
\sum_{Q_1\le q_1\le 2 Q_1}\sum_{Q_2\le q_2\le 2Q_2}\sup_{(a,q_1q_2)=1}\Bigl|\pi(x;q_1q_2,a)-\frac{\pi(x)}{\phi(q_1q_2)}\Bigr|\ll_C\delta \pi(x)+\frac{x(\log\log{x})^2}{(\log{x})^2}.
\]
\end{thrm}
Since $\pi(x;q,a)\ll \pi(x)/\phi(q)$ for $q\le x^{1-\epsilon}$ by the Brun-Titchmarsh Theorem, the trivial bound for the quantity considered in Theorem \ref{thrm:WeakEquidistribution} is $\pi(x)$, and so we are only winning a factor $O(\delta)$ over the trivial bound. In particular, Theorem \ref{thrm:WeakEquidistribution} has no content unless $\delta$ is sufficiently small. Theorem \ref{thrm:WeakEquidistribution} gives a version of a theorem of Bombieri-Friedlander-Iwaniec \cite{BFI3} which is now completely uniform with respect to residue classes (whereas previously the estimate was restricted to a single fixed integer $a$ of size $O(1)$), but with the constraint that the moduli have a factor of size close to $x^{1/10}$. By way of comparison, there are $\exp(x^{1/2+\delta+o(1)})$ collections of residue classes under consideration, which is more than any of the previous results, and comparable to \eqref{eq:BV} extended to moduli of size $x^{1/2+\delta}$.
Our second estimate gives a good error term, with more flexible constraints on the moduli, at the cost of weakening the level of uniformity in the residue classes slightly and requiring that the moduli split into 3 factors.
\begin{thrm}[Almost uniform equidistribution for primes]\label{thrm:AlmostUniform}
Let $0<\delta<1/1000$, $A>0$, and $Q_1,Q_2,Q_3\ge 1$ satisfy $Q_1Q_2Q_3=x^{1/2+\delta}$ and
\[
x^{40\delta}<Q_2<x^{1/20-7\delta},\qquad \frac{x^{1/10+12\delta}}{Q_2}<Q_3<\frac{x^{1/10-4\delta}}{Q_2^{3/5}}.
\]
Then we have
\[
\sum_{q_1\le Q_1}\sum_{q_2\le Q_2}\sup_{(b,q_1q_2)=1}\sum_{q_3\le Q_3}\sup_{\substack{(a,q_1q_2q_3)=1\\ a\equiv b\Mod{q_1q_2}}}\Bigl|\pi(x;q_1q_2q_3,a)-\frac{\pi(x)}{\phi(q_1q_2q_3)}\Bigr|\ll_{A,\delta}\frac{x}{(\log{x})^A}.
\]
\end{thrm}
In Theorem \ref{thrm:AlmostUniform} the residue class $a\Mod{q_1q_2q_3}$ which is considered is only allowed to lie in a residue class $b\Mod{q_1q_2}$ which doesn't depend on $q_3$, but otherwise is completely uniform. However, as with Theorem \ref{thrm:WeakEquidistribution} there are $\exp(x^{1/2+\delta+o(1)})$ collections of residue classes under consideration, and now we obtain an estimate with a good error term. An immediate consequence of Theorem \ref{thrm:AlmostUniform} is an extension of \eqref{eq:Polymath} to a wider collection of moduli.
Our final estimate gives uniform equidistribution with a good error term for a minorant for the indicator function of the primes.
\begin{thrm}[Uniform equidistribution of a minorant]\label{thrm:Minorant}
Let $\delta>0$ be sufficiently small. Then there is a function $\rho:\mathbb{N}\rightarrow \mathbb{R}$ satisfying the following conditions:
\begin{enumerate}
\item $\rho(n)$ is a minorant for the primes:
\[
\rho(n)\le \begin{cases}
1,\qquad &n\text{ is prime},\\
0,&\text{otherwise}.
\end{cases}
\]
\item $\rho(n)$ is close to the indicator function of the primes:
\[
\sum_{n\le x}\rho(n)\ge \frac{\pi(x)}{8}.
\]
\item $\rho(n)$ is equidistributed in arithmetic progressions to large moduli:
For any $Q_1\in [x^{2/5+5\delta},x^{3/7}]$, $Q_2=x^{1/2+\delta}/Q_1$ and $A>0$ we have
\[
\sum_{q_1\le Q_1}\sum_{q_2\le Q_2}\sup_{(a,q_1q_2)=1}\Bigl|\sum_{\substack{n\le x\\ n\equiv a\Mod{q_1q_2}}}\rho(n)-\frac{1}{\phi(q_1q_2)}\sum_{\substack{n\le x\\ (n,q_1q_2)=1}}\rho(n)\Bigr|\ll_{\delta,A} \frac{x}{(\log{x})^A}.
\]
\end{enumerate}
\end{thrm}
Although Theorem \ref{thrm:Minorant} has a more technical formulation, we expect it to be more applicable in practice. For many problems concerning the primes one is often ultimately interested in showing a lower bound for the number of primes in a set, and so it suffices to work with a suitable minorant throughout the argument. The conditions on $Q_1,Q_2$ could certainly be relaxed quite significantly - we have made no attempt to numerically optimize the constants involved. Similarly, we haven't given an explicit quantification on what `sufficiently small' requires, but an explicit numerical upper bound could be given with a bit more effort. As with previous estimates, $\exp(x^{1/2+\delta+o(1)})$ collections of residue classes are considered in Theorem \ref{thrm:Minorant}.
One consequence of Theorem \ref{thrm:Minorant} is a uniform lower bound for the number of primes in arithmetic progressions to moduli $qr\le x^{1/2+\delta}$, provided $qr$ avoids a small bad set $\mathcal{B}$ and $r\le x^{1/10-3\delta}$.
\begin{crllry}[Primes in all progressions for almost-all moduli]\label{crllry:LowerBound}
Let $\delta>0$ be sufficiently small, $A>0$ and $x>x_0(\delta,A)$ be sufficiently large in terms of $\delta$ and $A$. Then there is a set $\mathcal{B}\subseteq [1,x^{1/2+\delta}]$ with $\#\mathcal{B}\le x^{1/2+\delta}/(\log{x})^A$ such that if $q\le x^{1/2+\delta}$ has a divisor in $[x^{2/5+\delta},x^{3/7}]$ and $q\notin \mathcal{B}$, then for every $a$ coprime to $q$ we have
\[
\pi(x;q,a)\asymp \frac{\pi(x)}{\phi(q)}.
\]
\end{crllry}
In particular, Corollary \ref{crllry:LowerBound} shows that for almost all pairs $q,r$ with $q\in[x^{2/5+\delta}, x^{3/7}]$ and $r\le x^{1/2+\delta}/q$, \textit{every} primitive residue class contains at least one prime.
\begin{rmk}
The implied constants in Theorems \ref{thrm:WeakEquidistribution}-\ref{thrm:Minorant} are ineffective due to issues regarding a possible Siegel zero, but Theorem \ref{thrm:WeakEquidistribution} could be made effective with explicit constants with a little more care.
\end{rmk}
\begin{rmk}
The error terms in Theorems \ref{thrm:AlmostUniform} and Theorem \ref{thrm:Minorant} could be upgraded to $O(x^{1-\epsilon})$ and made effective if some small set of bad moduli were excluded.
\end{rmk}
\section{Outline}\label{sec:Outline}
First we sketch some of the key new ideas in our work. As with previous results, we perform a combinatorial decomposition of the primes (such as Heath-Brown's identity) to reduce the problem to estimating certain convolutions in arithmetic progressions. In particular, it suffices to get estimates of the shape
\[
\sum_{q\sim Q}\sum_{r\sim R}c_{q,r}\sum_{n\sim N}\alpha_n\sum_{m\sim M}\beta_m\Bigl(\mathbf{1}_{n m\equiv a_{q,r}\Mod{qr}}-\frac{\mathbf{1}_{(n m,qr)=1}}{\phi(qr)}\Bigr)\ll_A \frac{x}{(\log{x})^A},
\]
for suitable 1-bounded coefficients $c_{q,r}$, $\alpha_n,\beta_m$ and integers $(a_{q,r},qr)=1$ for certain ranges of $N,M,Q,R$ with $NM\asymp x$ and $QR=x^{1/2+\delta}$. Applying Cauchy-Schwarz in the $m,q$ variables, expanding the square and Fourier-completing the resulting sum reduces this to estimating sums like
\[
\sum_{q\sim Q}\sum_{r_1,r_2\sim R}c_{q,r_1}\overline{c_{q,r_2}}\hspace{-0.5cm}\sum_{\substack{n_1,n_2\sim N\\ n_1\overline{a_{q,r_1}}\equiv n_2\overline{a_{q,r_2}}\Mod{q}}}\hspace{-0.5cm}\alpha_{n_1}\overline{\alpha_{n_2}}\sum_{1\le h\le H}e\Bigl(\frac{h a_{q,r_1}\overline{n_1 r_2}}{q r_1}\Bigr)e\Bigl(\frac{h a_{q,r_2}\overline{n_2 q r_1}}{r_2}\Bigr).
\]
In the work of Zhang and Polymath, $a_{q,r}=a$ was independent of $q,r$, so the congruence on $n_1,n_2$ simplified to $n_1\equiv n_2\Mod{q}$. This then enabled one to let $n_2=n_1+k q$, apply Cauchy-Schwarz in the $n_1,k,q$ variables (or $n_1,k,q,r_1$), resulting in an exponential sum over $n_1$ with smooth coefficients to modulus $O(QR^4)$. The Weil bound then gives a saving for this sum provided $Q^{1/2}R^2<N$.
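Schematically, the gain in this final step comes from completion of sums together with the Weil bound: for the smoothed sum over $n_1$ to modulus $d=O(QR^4)$ one has (heuristically, suppressing the smooth weight)

```latex
\[
\sum_{n\sim N}e\Bigl(\frac{c\,\overline{n}}{d}\Bigr)\ll \Bigl(1+\frac{N}{d}\Bigr)d^{1/2+o(1)},
\qquad d\ll QR^4,
\]
```

which improves on the trivial bound $N$ precisely in the stated range where $N$ is a bit larger than $Q^{1/2}R^2$.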
In our situation we cannot simplify the congruence in this way since there is a dependence between $n_1,n_2,q,r_1,r_2$ via the $a_{q,r}$ factors, and so we require a different approach. Somewhat inspired by transference arguments from additive combinatorics (see \cite{Green,GreenTao}), our aim is to use Cauchy-Schwarz repeatedly to systematically replace the unknown coefficients $\alpha_n$ with smooth coefficients. We note that in our situation we need to obtain good power savings to make up for the fact that the trivial bound is now larger than our desired bound by a factor of $H$ (which one should think of as a small power of $x$), and so we are in a situation which is rather different to that of dense variables. In particular, we need to ensure that there is enough `entropy' in the terms that we square at each stage so as to ensure that the diagonal contributions give an adequate saving, which restricts possible manoeuvres we can make. Moreover, to maintain control over our summation we need to keep the $q$ variable always on the outside, and we couple the variables $n_i$ with $r_i$.
If we can find a means to apply Cauchy-Schwarz in some order to smooth all occurrences of $\alpha_n$, then we might hope to end up with a sum of exponential sums which look like (a smoothed version of)
\[
\sum_{\substack{n_1,\dots ,n_j\sim N\\ n_1\overline{a_{q,r_1}}\equiv n_2\overline{a_{q,r_2}}\equiv \dots \equiv n_j\overline{a_{q,r_j}}\Mod{q} }}e\Bigl(\frac{c_0\overline{n_1}}{q}\Bigr)\prod_{i=1}^j e\Bigl(\frac{c_i \overline{n_i}}{r_i}\Bigr),
\]
for some constants $c_0,c_1,\dots,c_j$ (depending on $q,r_1,\dots,r_j$). Fourier completing each summation in turn transforms this to (something like)
\[
\frac{N^j}{Q^j R^j}\sum_{\substack{\ell_1,\dots,\ell_j\ll QR/N}}S\Bigl(c_0,\sum_{i=1}^j a_{q,r_i}\overline{r_i}\ell_i;q\Bigr)\prod_{i=1}^jS(c_i,\ell_i;r_i),
\]
where $S(m,n;c)$ is the standard Kloosterman sum. If $N<QR$ the Weil bound gives a bound $Q^{1/2}R^{j/2}$ for our sum, which is a power-saving over the trivial bound $N^j/Q^{j-1}$ if $N$ is a bit larger than $Q^{1-1/(2j)}R^{1/2}$ and wins more than a factor $H^j$ if $Q$ is a large power of $R$. Provided we can do such a reduction (and can adequately handle all diagonal-type behavior) then this enables us to obtain an estimate of the desired type, at least for some ranges of $N,M,Q,R$.
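The Weil bound invoked here gives $|S(m,n;p)|\le 2\sqrt{p}$ for $p$ prime with $(mn,p)=1$, and is easy to check numerically for small moduli. A sketch in plain Python (for illustration only), computing $S(m,n;c)$ directly from the definition:

```python
import cmath
from math import gcd, sqrt

def kloosterman(m, n, c):
    """Standard Kloosterman sum S(m,n;c) = sum over x mod c with (x,c)=1
    of e((m*x + n*xbar)/c), where xbar is the inverse of x mod c."""
    total = 0j
    for x in range(1, c):
        if gcd(x, c) == 1:
            xbar = pow(x, -1, c)  # modular inverse (Python >= 3.8)
            total += cmath.exp(2j * cmath.pi * (m * x + n * xbar) / c)
    return total

# Weil's bound: |S(m,n;p)| <= 2*sqrt(p) for prime p not dividing m*n.
S = kloosterman(1, 1, 101)
```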
Our main estimate follows precisely this approach, first smoothing the $n_1$ variable, then smoothing the $n_2,n_2'$ variables and producing a sum of the above type with $j=4$. This ultimately gives a satisfactory estimate in the range
\[
Qx^{2\delta+\epsilon}<N<x^{1/2-3\delta-\epsilon},
\]
provided $Q>x^{2/5+4\delta+\epsilon}$. In particular, if $Q\approx x^{2/5}$ and $\delta,\epsilon\approx 0$ this almost covers the entire range $[x^{2/5},x^{1/2}]$, and so by symmetry we could essentially estimate any convolution involving a factor of length $N\in [x^{2/5},x^{3/5}]$.
If we genuinely had this full range, then this would cover all terms appearing in the Heath-Brown identity except for those involving $1,2$ or $3$ long smooth components. Terms with 1 or 2 long smooth components are easy to deal with thanks to known (uniform) results about the divisor function $d_2$ in arithmetic progressions. Thus we are left to estimate the terms with 3 long smooth components, and one rough component of length at most $x^{1/10}$. This requires an estimate of the form
\[
\sum_{q\sim Q}\sum_{r\sim R}c_{q,r}\sum_{m\sim M}\beta_m\sum_{\substack{n_1\sim N_1\\ n_2\sim N_2\\ n_3\sim N_3} }\Bigl(\mathbf{1}_{n m\equiv a_{q,r}\Mod{qr}}-\frac{\mathbf{1}_{(n m,qr)=1}}{\phi(qr)}\Bigr)\ll \frac{x}{(\log{x})^A},
\]
where we have written $n=n_1n_2n_3$ and $MN_1N_2N_3\asymp x$. By building on the work of Friedlander-Iwaniec \cite{FIDivisor}, Heath-Brown \cite{HBDivisor} and Polymath \cite{Polymath}, relying on estimates coming from Deligne's work \cite{Deligne1,Deligne2}, we are able to establish such an estimate provided $R>M x^{O(\delta)}$ and $Q>M^2 x^{O(\delta)}$. In the case when $\delta \approx 0$, $R\approx x^{1/10}$, $Q\approx x^{2/5}$ this almost covers all such terms. The slight failure to cover some of these terms presents an issue for Theorem \ref{thrm:WeakEquidistribution}, but we can use the fact that almost all $q$ have a small factor $\in[x^{100\delta}(\log{x})^{100C},x^{1/100}]$ to circumvent this.
Even in the situation $R\approx x^{1/10}$, $Q\approx x^{2/5}$, we still cannot quite handle all the terms which appear in the Heath-Brown identity. The key terms we cannot handle are convolutions of 5 terms each of length $x^{1/5+O(\delta)}$, or convolutions of 4 terms each of length $x^{1/4+O(\delta)}$. Since there are only a very small number of such terms when $\delta$ is small, slightly refined estimates of this type suffice for the purposes of Theorem \ref{thrm:WeakEquidistribution} and Theorem \ref{thrm:Minorant} using sieve methods.
By adapting the `de-amplifying' technique used in \cite{May1}, we are able to refine our original Type II estimate if we assume stronger divisibility conditions on the moduli. By introducing a congruence constraint (similar to the $q$-analogue of the van der Corput method \cite{GrahamRingrose,HBVanDerCorput}) we are able to reduce the modulus of the final exponential sums appearing, at the cost of worsening the contribution from various diagonal terms. The upshot of this is that we are able to handle the terms with 5 factors of length $x^{1/5+O(\delta)}$ provided the moduli have three conveniently sized factors.
Unfortunately we are still not able to handle the terms with four factors each of length $x^{1/4+O(\delta)}$. To get around this issue we impose some slight constraints on the residue classes $a_{q_1,q_2,q_3}\Mod{q_1q_2q_3}$ which appear, namely that $a_{q_1,q_2,q_3}\Mod{q_1q_2}$ is independent of $q_3$ (but $a_{q_1,q_2,q_3}\Mod{q_3}$ can be arbitrary). In this case we are able to adapt the method of Zhang which produces satisfactory estimates with $N=x^{1/2+O(\delta)}$, and this then enables us to handle all convolution types, giving Theorem \ref{thrm:AlmostUniform}.
\section{Acknowledgements}
I would like to thank John Friedlander, Ben Green, Henryk Iwaniec and Kyle Pratt for useful discussions and suggestions. JM is supported by a Royal Society Wolfson Merit Award, and this project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 851318).
\section{Notation}
We use the Vinogradov $\ll$ and $\gg$ asymptotic notation, and the big oh $O(\cdot)$ and $o(\cdot)$ notation. $f\asymp g$ denotes that both $f\ll g$ and $g\ll f$ hold. Dependence on a parameter will be denoted by a subscript. Throughout the paper $x$ will be a large parameter, and all asymptotics should be thought of as $x\rightarrow\infty$.
Throughout the paper, $\epsilon$ will be a single fixed small real number; $\epsilon=10^{-100}$ would probably suffice. We will let $\psi:\mathbb{R}\rightarrow\mathbb{R}$ denote a fixed smooth function supported on $[1/2,5/2]$ which is equal to $1$ on the interval $[1,2]$ and satisfies $\|\psi^{(j)}\|_\infty\ll (4^j j!)^2$ for all $j\ge 0$. (See \cite[Page 368, Corollary]{BFI2} for the construction of such a function.) Any bounds in our asymptotic notation will be allowed to depend on $\epsilon$ and $\psi$.
The letter $p$ will be reserved to denote a prime number. We use $\phi$ to denote the Euler totient function, $e(x):=e^{2\pi i x}$ the complex exponential, $\tau_k(n)$ the $k$-fold divisor function, $\mu(n)$ the M\"obius function. We let $P^-(n)$, $P^+(n)$ denote the smallest and largest prime factors of $n$ respectively, and $\hat{f}$ denote the Fourier transform of $f$ over $\mathbb{R}$ - i.e. $\hat{f}(\xi)=\int_{-\infty}^{\infty}f(t)e(-\xi t)dt$. Summations are assumed to be over all positive integers unless noted otherwise. We use the notation $n\sim N$ to denote the condition $N<n\le 2N$. We use $\mathbf{1}$ to denote the indicator function of a statement. For example,
\[
\mathbf{1}_{n\equiv a\Mod{q}}=\begin{cases}1,\qquad &\text{if }n\equiv a\Mod{q},\\
0,&\text{otherwise}.
\end{cases}
\]
We will use $(a,b)$ to denote $\gcd(a,b)$ when it does not conflict with notation for ordered pairs. For $(n,q)=1$, we will use $\overline{n}$ to denote the inverse of the integer $n$ modulo $q$; the modulus will be clear from the context. For example, we may write $e(a\overline{n}/q)$ - here $\overline{n}$ is interpreted as the integer $m\in \{0,\dots,q-1\}$ such that $m n\equiv 1\Mod{q}$. Occasionally we will also use $\overline{\lambda}$ to denote complex conjugation; the distinction of the usage should be clear from the context.
\begin{dfntn}[Siegel-Walfisz condition]
We say that a complex sequence $\alpha_n$ satisfies the \textbf{Siegel-Walfisz condition} if for every $d\ge 1$, $q\ge 1$ and $(a,q)=1$ and every $A>1$ we have
\begin{equation}
\Bigl|\sum_{\substack{n\sim N\\ n\equiv a\Mod{q}\\ (n,d)=1}}\alpha_n-\frac{1}{\phi(q)}\sum_{\substack{n\sim N\\ (n,d q)=1}}\alpha_n\Bigr|\ll_A \frac{N\tau(d)^{O(1)}}{(\log{N})^A}.
\label{eq:SiegelWalfisz}
\end{equation}
\end{dfntn}
We note that $\alpha_n$ certainly satisfies the Siegel-Walfisz condition if $\alpha_n=1$, if $\alpha_n=\mu(n)$ or if $\alpha_n$ is the indicator function of the primes.
\section{Main Propositions}
As mentioned in the introduction, to prove Theorems \ref{thrm:WeakEquidistribution}-\ref{thrm:Minorant} we follow the standard approach of reducing the task of counting primes to that of estimating various bilinear quantities with essentially arbitrary coefficients - `Type II' estimates. (The `Type I' estimates of this paper will essentially just be the trivial estimate for integers in an arithmetic progression.) Since these estimates can be of independent interest and are potentially useful for other applications, we first give our main propositions here, and then deduce Theorems \ref{thrm:WeakEquidistribution}-\ref{thrm:Minorant} from them. The bulk of the paper is then spent establishing each of these propositions in turn.
The main new proposition is the following result, which we will establish later in Section \ref{sec:MainProp}.
\begin{prpstn}[Type II estimate]\label{prpstn:MainProp}
Let $A>0$ and $C=C(A)$ be sufficiently large in terms of $A$. Let $QR=x^{1/2+\delta}$ and $NM\asymp x$ satisfy
\begin{align*}
x^{6\delta }(\log{x})^{C}\le R\le \frac{x^{1/10-3\delta}}{(\log{x})^C},\qquad
Q x^{2\delta}(\log{x})^C\le N\le \frac{x^{1/2-3\delta}}{(\log{x})^C}.
\end{align*}
Let $\alpha_n,\beta_m$ be complex sequences with $|\alpha_n|\le \tau(n)^A$, $|\beta_m|\le \tau(m)^A$ and $\alpha_n$ satisfying the Siegel-Walfisz condition \eqref{eq:SiegelWalfisz}. Then we have
\[
\sum_{q\sim Q}\sum_{r\sim R}\sup_{(a,qr)=1}\Bigl|\sum_{m\sim M}\sum_{n\sim N}\alpha_n\beta_m\Bigl(\mathbf{1}_{n m\equiv a\Mod{q r}}-\frac{\mathbf{1}_{(nm,qr)=1}}{\phi(q r)}\Bigr)\Bigr|\ll_{A} \frac{x}{(\log{x})^A}.
\]
\end{prpstn}
Proposition \ref{prpstn:MainProp} (and the subsequent propositions in this section) does not require that $\delta>0$ (although the result follows from the Bombieri-Vinogradov Theorem for $\delta\le -C\log\log{x}/\log{x}$). We have chosen this formulation to emphasize the fact that we are interested in the regime when $Q R$ is close to $x^{1/2}$. An alternative formulation of the constraints is given by
\begin{align*}
R^5 Q^6\le \frac{x^3}{(\log{x})^C},\quad Q^3 R^4\le \frac{x^{8/5}}{(\log{x})^{C}},\quad
\frac{Q^3 R^2(\log{x})^C}{x}\le N\le \frac{x^2}{Q^3 R^3(\log{x})^C}.
\end{align*}
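The equivalence of the two formulations follows on substituting $QR=x^{1/2+\delta}$; for instance, for the constraints on $R$:

```latex
\begin{align*}
R^5Q^6=\frac{(QR)^6}{R}=\frac{x^{3+6\delta}}{R}\le \frac{x^3}{(\log{x})^C}
&\iff R\ge x^{6\delta}(\log{x})^C,\\
Q^3R^4=(QR)^3R=x^{3/2+3\delta}R\le \frac{x^{8/5}}{(\log{x})^C}
&\iff R\le \frac{x^{1/10-3\delta}}{(\log{x})^C},
\end{align*}
```

and likewise $Q^3R^2/x=Qx^{2\delta}$ and $x^2/(Q^3R^3)=x^{1/2-3\delta}$ recover the bounds on $N$.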
As mentioned in the outline, when $R\approx x^{1/10}$ and $\delta$ is small, Proposition \ref{prpstn:MainProp} covers arbitrary convolutions with one factor of length $N\in [x^{2/5+\epsilon},x^{1/2-\epsilon}]$. To extend the range of applicability to $N\approx x^{2/5-\epsilon}$ and to reduce the requirements on the sizes of $R$ and $Q$ we also have the following technical variant of Proposition \ref{prpstn:MainProp}, which we will prove in Section \ref{sec:SecondProp}.
\begin{prpstn}[Type II estimate near $x^{2/5}$]\label{prpstn:SecondProp}
Let $A>0$. Let $\alpha_n,\beta_m$ be complex sequences with $|\alpha_n|\le \tau(n)^A$, $|\beta_m|\le \tau(m)^A$ and with $\alpha_n$ satisfying the Siegel-Walfisz condition \eqref{eq:SiegelWalfisz}. Let $Q_1Q_2Q_3=x^{1/2+\delta}$ and $NM\asymp x$ with
\[
Q_2 x^{6\delta+5\epsilon}\le Q_3\le \frac{x^{1/10-3\delta-5\epsilon}}{Q_2^{3/5}},
\]
and
\[
\max\Bigl(Q_1 x^{2\delta+5\epsilon},\, Q_2Q_3 x^{1/4+13\delta/2+5\epsilon}\Bigr)<N<\frac{x^{1/2-3\delta-5\epsilon}}{Q_2}.
\]
Then we have that
\begin{align*}
\sum_{q_1\sim Q_1}\sum_{q_2\sim Q_2}\sum_{q_3\sim Q_3}\sup_{(a,q_1q_2q_3)=1}\Bigl|\sum_{n\sim N}\alpha_n \sum_{m\sim M}\beta_m \Bigl(\mathbf{1}_{nm\equiv a\Mod{q_1q_2q_3}}-\frac{\mathbf{1}_{(nm,q_1q_2q_3)=1}}{\phi(q_1q_2q_3)}\Bigr)\Bigr|\\
\ll_A \frac{x}{(\log{x})^A}.
\end{align*}
\end{prpstn}
For example, if $\delta>0$ is small, $Q_1\approx x^{2/5-1/1000+\delta}$, $Q_2\approx x^{5/1000}$, $Q_3\approx x^{1/10-4/1000}$, then the inequalities on $Q_1,Q_2,Q_3$ are satisfied and Proposition \ref{prpstn:SecondProp} covers the range $N\in[x^{2/5-1/2000},x^{2/5+1/100}]$, and so extends Proposition \ref{prpstn:MainProp} to $N\le x^{2/5}$. Overcoming this $x^{2/5}$ barrier is vital for the proof of Theorem \ref{thrm:AlmostUniform}.
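To verify the exponents in this example: $Q_1Q_2Q_3\approx x^{2/5-1/1000+\delta+5/1000+1/10-4/1000}=x^{1/2+\delta}$, and for $3\delta+5\epsilon<1/2000$ we have
\[
Q_2x^{6\delta+5\epsilon}\approx x^{1/200+6\delta+5\epsilon}<Q_3\approx x^{96/1000}<x^{97/1000-3\delta-5\epsilon}\approx\frac{x^{1/10-3\delta-5\epsilon}}{Q_2^{3/5}},
\]
while the range for $N$ has lower endpoint
\[
\max\Bigl(Q_1x^{2\delta+5\epsilon},\,Q_2Q_3x^{1/4+13\delta/2+5\epsilon}\Bigr)\approx\max\Bigl(x^{399/1000+3\delta+5\epsilon},\,x^{351/1000+13\delta/2+5\epsilon}\Bigr)<x^{2/5-1/2000}
\]
and upper endpoint $x^{1/2-3\delta-5\epsilon}/Q_2\approx x^{99/200-3\delta-5\epsilon}>x^{2/5+1/100}$.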
Neither Proposition \ref{prpstn:MainProp} nor Proposition \ref{prpstn:SecondProp} can handle `balanced' convolutions with $N,M\approx x^{1/2}$. Unfortunately we are not able to produce an estimate which is completely uniform for such terms, which is why Theorem \ref{thrm:WeakEquidistribution} and Theorem \ref{thrm:AlmostUniform} fail to give a full extension of the Bombieri-Vinogradov Theorem to moduli $q\sim x^{1/2+\delta}$ with suitable factorization properties. To handle such terms we resort to imposing some restrictions on our residue classes, which then enables us to adapt the ideas underlying a key estimate of Zhang \cite{Zhang} to this setting. This is our third proposition, which we will establish in Section \ref{sec:Zhang}.
\begin{prpstn}[Type II estimate near $x^{1/2}$]\label{prpstn:Zhang}
Let $A>0$ and let $\alpha_n,\beta_m$ be complex sequences with $|\alpha_n|,|\beta_m|\le \tau(n)^A$ and with $\alpha_n$ satisfying the Siegel-Walfisz condition \eqref{eq:SiegelWalfisz}. Let $Q_1Q_2=x^{1/2+\delta}$ and $NM\asymp x$ with
\begin{align*}
Q_1^7 Q_2^{12}&< x^{4-10\epsilon},\qquad
x^{2\delta+\epsilon} Q_1< N < \frac{x^{1-\epsilon}}{Q_1}.
\end{align*}
Then we have that
\begin{align*}
\sum_{q_1\sim Q_1}\sup_{(b,q_1)=1}\sum_{q_2\sim Q_2}\sup_{\substack{(a,q_1q_2)=1\\ a\equiv b\Mod{q_1}}}\Bigl|\sum_{n\sim N}\alpha_n \sum_{m\sim M}\beta_m \Bigl(\mathbf{1}_{n m\equiv a\Mod{q_1q_2}}-\frac{\mathbf{1}_{(n m,q_1q_2)=1}}{\phi(q_1q_2)}\Bigr)\Bigr|\\
\ll_A \frac{x}{(\log{x})^A}.
\end{align*}
\end{prpstn}
Each of Propositions \ref{prpstn:MainProp}-\ref{prpstn:Zhang} applies to essentially arbitrary coefficients $\alpha_n,\beta_m$, but fails to handle terms when $N\approx x^{1/3}$, $M\approx x^{2/3}$. For the purposes of estimating primes, however, Type II estimates such as Proposition \ref{prpstn:MainProp} allow us to reduce to the situation where we can assume various convolution factors are smooth functions. To cover the remaining cases for primes we require estimates with one small arbitrary factor and three smooth factors, a problem closely related to the distribution of the ternary divisor function in arithmetic progressions. This leads us to our final proposition, which is based on ideas in \cite{Polymath}, and will be proven in Section \ref{sec:Triple}.
\begin{prpstn}\label{prpstn:Triple}
Let $A>0$. Let $QR=x^{1/2+\delta}$ and $N_1 N_2 N_3 M\asymp x$ with
\[
M<\min\Bigl(\frac{R}{x^{4\delta}(\log{x})^C},\frac{Q^{1/2}}{x^{2\delta}(\log{x})^C}\Bigr)
\]
for some constant $C=C(A)$ sufficiently large in terms of $A$. Let $\alpha_m$ be a complex sequence with $|\alpha_m|\le \tau(m)^A$. Let $\psi_1,\psi_2,\psi_3$ be smooth functions supported on $[1,2]$ with $\|\psi^{(j)}_1\|_\infty,\|\psi^{(j)}_2\|_\infty,\|\psi^{(j)}_3\|_\infty \ll ((j+1)\log{x})^{A j}$ for all $j\ge 0$. Let $\Delta_\mathscr{K}(a;qr)=\Delta_\mathscr{K}(a;qr;n_1,n_2,n_3,m)$ be given by
\[
\Delta_\mathscr{K}(a;qr):=\psi_1\Bigl(\frac{n_1}{N_1}\Bigr)\psi_2\Bigl(\frac{n_2}{N_2}\Bigr)\psi_3\Bigl(\frac{n_3}{N_3}\Bigr)\Bigl(\mathbf{1}_{n_1n_2n_3 m\equiv a\Mod{qr}}-\frac{\mathbf{1}_{(n_1n_2n_3m,q r)=1}}{\phi(qr)}\Bigr).
\]
Then we have
\[
\sum_{q\sim Q}\sum_{r\sim R}\sup_{(a,qr)=1}\Bigl|\sum_{m\sim M}\alpha_m\sum_{n_1\sim N_1}\sum_{n_2\sim N_2}\sum_{n_3\sim N_3}\Delta_{\mathscr{K}}(a;qr)\Bigr|\ll_A \frac{x}{(\log{x})^A}.
\]
\end{prpstn}
For the purposes of Theorem \ref{thrm:WeakEquidistribution} it is vital that we are able to handle $M\approx x^{1/10}$ when $R\approx x^{1/10}$, $Q\approx x^{2/5}$, and so our estimate is only just sufficient for this purpose. Unlike the earlier propositions (which ultimately only rely on the Weil bound for Kloosterman sums), Proposition \ref{prpstn:Triple} relies on Deligne's work \cite{Deligne1,Deligne2} to handle certain multidimensional exponential sums (correlations of hyper-Kloosterman sums with an additive twist).
An immediate consequence of Proposition \ref{prpstn:Triple} is the following corollary on the exponent of distribution of the ternary divisor function.
\begin{crllry}
Let $A>0$ and $C=C(A)$ sufficiently large in terms of $A$. Let $Q,R$ satisfy
\[
Q^4 R^3+ Q^3 R^4<\frac{x^2}{(\log{x})^C}.
\]
Then we have that
\[
\sum_{q\le Q} \sum_{r\le R}\sup_{(a,qr)=1}\Bigl|\sum_{\substack{n\le x\\ n\equiv a\Mod{q r}}}\tau_3(n)-\frac{1}{\phi(q r)}\sum_{\substack{n\le x\\ (n,qr)=1}}\tau_3(n)\Bigr|\ll \frac{x}{(\log{x})^A}.
\]
\end{crllry}
This improves Heath-Brown's result \cite{HBDivisor} on the range of equidistribution on average for $\tau_3(n)$ provided we restrict to moduli that have a factor in $[x^{2/21},x^{3/7}]$, and extends the result of Fouvry-Kowalski-Michel \cite{FKMDivisor} to larger moduli with additional uniformity in the residue classes provided the moduli have a factor in $[x^{2/17},x^{7/17}]$.
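To make the comparison explicit: writing $qr=x^{\sigma}$ with $r=x^{\rho}$, the hypothesis of the corollary amounts (up to powers of $\log{x}$) to
\[
4\sigma-\rho<2\quad\text{and}\quad 3\sigma+\rho<2,\qquad\text{i.e.}\qquad 4\sigma-2<\rho<2-3\sigma,
\]
which is non-empty precisely when $\sigma<4/7$. Taking $\sigma=11/21$ gives the factor range $[x^{2/21},x^{3/7}]$, while taking $\sigma=9/17$ gives $[x^{2/17},x^{7/17}]$.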
\section{Preparatory lemmas}
Before embarking on the deduction of Theorems \ref{thrm:WeakEquidistribution}-\ref{thrm:Minorant}, we first collect some basic lemmas and some consequences of our main propositions.
\begin{lmm}[Heath-Brown identity \cite{HBVaughan}]\label{lmm:HeathBrown}
Let $k\ge 1$ and $n\le 2x$. Then we have
\[
\Lambda(n)=\sum_{j=1}^k (-1)^j \binom{k}{j}\sum_{\substack{n=m_1\cdots m_jn_1\cdots n_{j}\\ m_1,\,\dots,\,m_j\le 2x^{1/k}}}\mu(m_1)\cdots \mu(m_j)\log{n_{1}}.
\]
\end{lmm}
\begin{proof}
This is \cite[Lemma 7.8]{May1}.
\end{proof}
\begin{lmm}[Reduction to fundamental lemma type condition]\label{lmm:Buchstab}
Let $y\ge 1$ and $z_1\ge z_2$. Then there are 1-bounded sequences $\alpha_d$, $\beta_{d}$ supported on $P^-(d)\ge z_2$ depending only on $d,z_1,z_2$ such that
\[
\mathbf{1}_{P^-(n)>z_1}=\sum_{\substack{m d=n\\ d\le y}}\alpha_d\mathbf{1}_{P^-(m)> z_2}+\sum_{\substack{n=p d m\\ d\le y<d p\\ z_2<p\le z_1\\ P^-(d)\ge p}}\beta_{d}\mathbf{1}_{P^-(m)> p}.
\]
\end{lmm}
\begin{proof}
This is \cite[Lemma 9.2]{May1}.
\end{proof}
\begin{lmm}[Smooth partition of unity]\label{lmm:Partition}
Let $C\ge 3$. There exist smooth non-negative functions $\widetilde{\psi}_1,\dots,\widetilde{\psi}_J$ with $J\le (\log{x})^C+2$ such that
\begin{enumerate}
\item $\|\widetilde{\psi}_i^{(j)}\|_\infty\ll_{C} ((j+1)\log{x})^{j C}$ for each $1\le i\le J$ and each $j\ge 0$.
\item We have that
\[
\sum_{j=1}^J \widetilde{\psi}_j(t)=\begin{cases}
0,\qquad &\text{if }t\le 1-1/(\log{x})^C,\\
O(1), &\text{if }1-1/(\log{x})^C\le t\le 1,\\
1,&\text{if }1\le t\le 2,\\
O(1), &\text{if }2\le t\le 2+1/(\log{x})^C,\\
0, &\text{if }2+1/(\log{x})^C\le t.\\
\end{cases}
\]
\end{enumerate}
\end{lmm}
\begin{proof}
This is \cite[Lemma 18.1]{May1}.
\end{proof}
\begin{lmm}[Double divisor function estimate]\label{lmm:DoubleDivisor}
Let $A>0$ and let $\mathcal{I}_1,\mathcal{I}_2\subseteq [1,x]$ be intervals. Then we have uniformly for $q\le x^{2/3-\epsilon}$ and $(a,q)=1$
\[
\sum_{\substack{n_1n_2\sim x\\ n_1\in\mathcal{I}_1 \\ n_2\in\mathcal{I}_2\\ n_1n_2\equiv a\Mod{q} }}1=\frac{1}{\phi(q)}\sum_{\substack{n_1n_2\sim x\\ n_1\in\mathcal{I}_1 \\ n_2\in\mathcal{I}_2\\ (n_1n_2,q)=1 }}1+O_A\Bigl(\frac{x}{q(\log{x})^A}\Bigr).
\]
\end{lmm}
\begin{proof}
We first take a suitable smooth approximation to the indicator functions of the intervals $\mathcal{I}_1,\mathcal{I}_2$ using Lemma \ref{lmm:Partition}, and then apply \cite[Lemma 6.1]{May2}.
\end{proof}
\begin{lmm}[Asymptotics for rough numbers]\label{lmm:Buchstab2}
Let $x^\epsilon \le z\le x$. Then we have
\[
\sum_{n<x}\mathbf{1}_{P^-(n)\ge z}=\frac{(1+o(1))x}{\log{x}}\omega\Bigl(\frac{\log{x}}{\log{z}}\Bigr),
\]
where $\omega(u)$ is the continuous, piecewise smooth function defined for $u\ge 1$ by the delay differential equation
\begin{align*}
\omega(u)=\frac{1}{u}\text{ for }1\le u\le 2, \qquad
\frac{\partial}{\partial u}(u \omega(u) )=\omega(u-1)\text{ for }2\le u.
\end{align*}
\end{lmm}
\begin{proof}
This is \cite[Lemma 12.1]{Opera}.
\end{proof}
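For example, on the range $2\le u\le 3$ the delay differential equation integrates explicitly: $\frac{\partial}{\partial u}(u\omega(u))=\omega(u-1)=1/(u-1)$, so
\[
\omega(u)=\frac{1+\log(u-1)}{u}\qquad\text{for }2\le u\le 3.
\]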
\begin{lmm}[Upper bound sieve]\label{lmm:Sieve}
There exists a sequence $\lambda_d^+$ supported on $d\le x^\epsilon$ such that $|\lambda_d^+|\le 1$ and
\begin{align*}
\sum_{d|n}\lambda_d^+&\ge \begin{cases}
1,\qquad &P^-(n)\ge x^\epsilon,\\
0,&\text{otherwise},
\end{cases}\\
\sum_{d\le x^\epsilon}\frac{\lambda_d^+}{d}&\ll \frac{1}{\log{x}}.
\end{align*}
\end{lmm}
\begin{proof}
This follows from \cite[Lemma 6.3]{IwaniecKowalski}, for example.
\end{proof}
\begin{lmm}[Separation of variables]\label{lmm:Separation}
Let $N_1,\dots,N_r\ge 1$ with $N_1\cdots N_r\asymp x$. Let $\alpha_{n_1,\dots,n_r}$ be a 1-bounded non-negative real sequence.
Suppose that for all intervals $\mathcal{I}_1,\dots,\mathcal{I}_r$ with $\mathcal{I}_i\subseteq[N_i,2N_i]$ and every $A>0$
\[
\sum_{q\sim Q}\sup_{(a,q)=1}\Bigl|\sum_{\substack{n_1,\dots,n_r\\ n_i\in\mathcal{I}_i\forall i}}\alpha_{n_1,\dots,n_r}\Bigl(\mathbf{1}_{n_1\cdots n_r\equiv a\Mod{q}}-\frac{\mathbf{1}_{(n_1\cdots n_r,q)=1}}{\phi(q)}\Bigr)\Bigr|\ll_A \frac{x}{(\log{x})^A}.
\]
Then for every $A>0$ we have
\[
\sum_{q\sim Q}\sup_{(a,q)=1}\Bigl|\mathop{\sideset{}{^*}\sum}_{\substack{n_1,\dots,n_r\\ n_i\sim N_i\forall i}}\alpha_{n_1,\dots,n_r}\Bigl(\mathbf{1}_{n_1\cdots n_r\equiv a\Mod{q}}-\frac{\mathbf{1}_{(n_1\cdots n_r,q)=1}}{\phi(q)}\Bigr)\Bigr|\ll_{A,r} \frac{x}{(\log{x})^A},
\]
where by $\mathop{\sideset{}{^*}\sum}$ we indicate that the summation is restricted to $O(1)$ conditions of the form $n_1^{\alpha_1}\cdots n_r^{\alpha_r}\le B$ for some quantities $\alpha_1,\dots,\alpha_r,B$. The implied constant may depend on the $\alpha_i$.
\end{lmm}
\begin{proof}
This is a subdivision argument. If $N_j\ll (\log{x})^{O(1)}$ then we consider each value of $n_j$ individually. Thus it suffices to consider the case when $N_i>(\log{x})^C$ for all $i$ for a suitably large constant $C$. Let $J=\lfloor (\log{x})^C\rfloor$. We partition the interval $[N_i,2N_i)$ into $J$ disjoint subintervals $\mathcal{I}_{i,j}=[N_i(1+(j-1)/J),N_i(1+j/J))$ for $j\in\{1,\dots,J\}$. We do this for each $i\in\{1,\dots,r\}$, so there are $J^r\ll (\log{x})^{Cr}$ such subintervals in total. We call an $r$-tuple $(j_1,\dots,j_r)\in\{1,\dots,J\}^r$ \textit{exceptional} if there exist $\mathbf{a},\mathbf{b}\in\mathcal{I}_{1,j_1}\times\mathcal{I}_{2,j_2}\times\dots\times\mathcal{I}_{r,j_r}$ such that one of the conditions of the summation holds for $\mathbf{a}$ but not for $\mathbf{b}$; that is, if $a_1^{\alpha_1}\cdots a_r^{\alpha_r}\le B< b_1^{\alpha_1}\cdots b_r^{\alpha_r}$. Since any $n_i\in\mathcal{I}_{i,j_i}$ satisfies $n_i=N_{i}(1+j_i/J+O(1/\log^C{x}))$, we see that if $(j_1,\dots,j_r)$ is exceptional then for some suitable $(\alpha_1,\dots,\alpha_r,B)$
\[
N_{1}^{\alpha_1}\cdots N_{r}^{\alpha_r}\Bigl(1+\frac{j_1}{J}\Bigr)^{\alpha_1}\cdots\Bigl(1+\frac{j_r}{J}\Bigr)^{\alpha_r}\Bigl(1+O_{\alpha_1,\dots,\alpha_r}\Bigl(\frac{1}{(\log{x})^C}\Bigr)\Bigr)=B.
\]
There are $O_{\alpha_1,\dots,\alpha_r}(J^{r-1})$ possible such exceptional tuples $(j_1,\dots,j_r)$. (Any such constraint cannot have all $\alpha_i=0$, and if $\alpha_\ell\ne0$ then there are $O_{\alpha_\ell}(1)$ choices of $j_\ell$ for each choice of the other $j_i$.) Since there are $O(1)$ such constraints, there are $O(J^{r-1})$ exceptional tuples in total (with the implied constant depending on all the $\alpha_i$). We call a tuple $(j_1,\dots,j_r)$ \textit{good} if for all $\mathbf{a}\in\mathcal{I}_{1,j_1}\times\mathcal{I}_{2,j_2}\times\dots\times\mathcal{I}_{r,j_r}$ we have $a_1^{\alpha_1}\cdots a_r^{\alpha_r}\le B$. Since $\alpha_{n_1,\dots,n_r}\ge 0$, we see that
\begin{align}
\mathop{\sideset{}{^*}\sum}_{\substack{n_1,\dots,n_r\\ n_i\sim N_i\forall i}}&\alpha_{n_1,\dots,n_r}\Bigl(\mathbf{1}_{n_1\cdots n_r\equiv a\Mod{q}}-\frac{\mathbf{1}_{(n_1\cdots n_r,q)=1}}{\phi(q)}\Bigr)\nonumber\\
&\ge \sum_{(j_1,\dots,j_r)\text{ good}}\Bigl(\mathop{\sum}_{\substack{n_1,\dots,n_r\\ n_i\in \mathcal{I}_{i,j_i}\forall i}}\alpha_{n_1,\dots,n_r}\Bigl(\mathbf{1}_{n_1\cdots n_r\equiv a\Mod{q}}-\frac{\mathbf{1}_{(n_1\cdots n_r,q)=1}}{\phi(q)}\Bigr)\Bigr)\nonumber\\
&-\sum_{(j_1,\dots,j_r)\text{ exceptional}}\Bigl(\mathop{\sum}_{\substack{n_1,\dots,n_r\\ n_i\in \mathcal{I}_{i,j_i}\forall i}}\alpha_{n_1,\dots,n_r}\frac{\mathbf{1}_{(n_1\cdots n_r,q)=1}}{\phi(q)}\Bigr).\label{eq:SubdivisionLower}
\end{align}
By the assumption of the lemma, the contribution from good tuples when summed over $q\sim Q$ (with absolute values) is small. By trivial estimation we also have
\begin{align*}
\sum_{\substack{(j_1,\dots,j_r)\\\text{ exceptional}}}\Bigl(\sum_{\substack{n_1,\dots,n_r\\ n_i\in \mathcal{I}_{i,j_i}\forall i}}\alpha_{n_1,\dots,n_r}\frac{\mathbf{1}_{(n_1\cdots n_r,q)=1}}{\phi(q)}\Bigr)&\ll \frac{1}{\phi(q)}J^{r-1} \sup_{j_1,\dots,j_r}\prod_{i=1}^r\#\mathcal{I}_{i,j_i}\\
&\ll_r \frac{x}{\phi(q)(\log{x})^C}.
\end{align*}
Thus \eqref{eq:SubdivisionLower} gives a suitable lower bound. By upper bounding the main summation in an analogous manner we obtain a suitable upper bound. This gives the result.
\end{proof}
\begin{lmm}[Terms which can be handled trivially]\label{lmm:Trivial}
Let $A,C>0$ and $\delta\in[0,1/1000]$. Let $\lambda_d^+$ be the upper bound sieve weights of Lemma \ref{lmm:Sieve}. Let
\begin{align*}
\mathcal{B}_1&:=\Bigl[\frac{x^{2/5}}{4} ,x^{2/5+6\delta}(\log{x})^{2C}\Bigr]\cup\Bigl[\frac{x^{1/2-3\delta}}{(\log{x})^C},2x^{1/2}\Bigr],\\
\mathcal{B}_2&:=[x^{3/7},x^{3/7+8\delta}]\cup[x^{1/2-4\delta},x^{1/2+4\delta}]\cup[x^{4/7-8\delta},x^{4/7}].
\end{align*}
Let $\mathcal{B}\in\{\mathcal{B}_1,\mathcal{B}_2\}$ and set
\begin{align*}
\rho_\mathcal{B}(n)&:=\sum_{\substack{m_1 m_2=n\\ m_1\in\mathcal{B}}}\Bigl(\sum_{\substack{d_1|m_1\\ d_1<x^{\epsilon} }}\lambda_{d_1}^+\Bigr)\Bigl(\sum_{\substack{d_2|m_2\\ d_2<x^\epsilon }}\lambda_{d_2}^+\Bigr).
\end{align*}
Then we have that:
\begin{enumerate}
\item $\rho_\mathcal{B}(n)$ is equidistributed in arithmetic progressions: For $Q<x^{3/5}$ we have
\[
\sum_{q\sim Q}\sup_{(a,q)=(b,q)=1}\Bigl|\sum_{\substack{n\sim x\\ n\equiv a\Mod{q}}}\rho_\mathcal{B}(n)-\sum_{\substack{n\sim x \\ n\equiv b\Mod{q}}}\rho_\mathcal{B}(n)\Bigr|\ll_{A,C}\frac{x}{(\log{x})^A}.
\]
\item $\rho_\mathcal{B}(n)$ is an upper bound for terms with a subproduct in $\mathcal{B}$: For $n\sim x$
\[
\sum_{\substack{n=m_1\dots m_j n_1\dots n_j\\ P^-(n)\ge x^\epsilon \\ \prod_{i\in\mathcal{I}_1}m_i\prod_{i\in\mathcal{I}_2}n_i\in\mathcal{B}\text{ some }\mathcal{I}_1,\mathcal{I}_2\subseteq\{1,\dots,j\}}}1\ll \rho_\mathcal{B}(n).
\]
\item $\rho_\mathcal{B}(n)$ has small average:
\[
\sum_{n\sim x}\rho_\mathcal{B}(n)\ll_C\delta \pi(x)+\frac{x\log\log{x}}{(\log{x})^2}.
\]
\end{enumerate}
\end{lmm}
\begin{proof}
The second claim follows immediately from Lemma \ref{lmm:Sieve} and the fact that $P^-(n)\ge x^\epsilon$ implies that $j\ll 1$ so there are $O(1)$ choices of $\mathcal{I}_1,\mathcal{I}_2$ which occur each with multiplicity $O(1)$. The third claim similarly follows from the fact that $\rho_{\mathcal{B}}(n)$ is a sieve upper bound. If $\mathcal{B}=\mathcal{B}_1$:
\begin{align*}
\sum_{n\sim x}\rho_{\mathcal{B}}(n)&=\sum_{d_1,d_2<x^\epsilon}\lambda^+_{d_1}\lambda_{d_2}^+\sum_{\substack{n_1'n_2'\sim x/(d_1d_2)\\ d_1n_1'\in\mathcal{B}}}1\\
&=x\sum_{d_1,d_2<x^\epsilon}\frac{\lambda^+_{d_1}\lambda_{d_2}^+}{d_1d_2}\sum_{d_1n_1'\in\mathcal{B} }\frac{1}{n_1'}+O(x^{2\epsilon}\#\mathcal{B})\\
&=x\sum_{d_1,d_2<x^\epsilon}\frac{\lambda^+_{d_1}\lambda_{d_2}^+}{d_1d_2}\Bigl(\log(8x^{9\delta}(\log{x})^{3C})+O(x^{-1/5})\Bigr)+O(x^{1/2+2\epsilon})\\
&=x\Bigl(9\delta\log{x}+3C\log\log{x}+\log{8}\Bigr)\Bigl(\sum_{d<x^\epsilon}\frac{\lambda_d^+}{d}\Bigr)^2+O(x^{1-\epsilon})\\
&\ll _C\frac{\delta x}{\log{x}}+\frac{x\log\log{x}}{(\log{x})^2}.
\end{align*}
Here we used Lemma \ref{lmm:Sieve} in the final line. The argument for $\mathcal{B}=\mathcal{B}_2$ is entirely analogous. Thus we are left to establish the first claim. Substituting the definition of $\rho_\mathcal{B}$, we see that
\begin{align*}
\sum_{\substack{n\sim x\\ n\equiv a\Mod{q}}}\rho_{\mathcal{B}}(n)&=\sum_{\substack{d_1<x^{\epsilon} }}\lambda_{d_1}^+\sum_{\substack{d_2<x^\epsilon }}\lambda_{d_2}^+\sum_{\substack{n_1'n_2'\sim x/(d_1 d_2) \\ d_1n_1'\in\mathcal{B}\\ d_1d_2n_1'n_2'\equiv a\Mod{q} }}1.
\end{align*}
By Lemma \ref{lmm:DoubleDivisor}, we have that for $(d_1d_2,q)=1$ and $q\le (x/d_1d_2)^{2/3-\epsilon}$
\[
\sum_{\substack{n_1'n_2'\sim x/(d_1 d_2) \\ d_1n_1'\in\mathcal{B}\\ d_1d_2n_1'n_2'\equiv a\Mod{q} }}1=\frac{1}{\phi(q)}\sum_{\substack{n_1'n_2'\sim x/(d_1 d_2) \\ d_1n_1'\in\mathcal{B}\\ (n_1'n_2',q)=1 }}1+O_A\Bigl(\frac{x}{q(\log{x})^A}\Bigr).
\]
Thus we see that, uniformly for $q\le x^{3/5}$,
\[
\sup_{(a,q)=(b,q)=1}\Bigl|\sum_{\substack{n\sim x\\ n\equiv a\Mod{q}}}\rho_{\mathcal{B}}(n)-\sum_{\substack{n\sim x \\ n\equiv b\Mod{q}}}\rho_{\mathcal{B}}(n)\Bigr|\ll_A\frac{x}{q(\log{x})^A}.
\]
Summing over $q\sim Q$ gives the result.
\end{proof}
\begin{lmm}[Type II terms]\label{lmm:TypeII}
Let $A>0$, $C=C(A)$ sufficiently large in terms of $A$ and $Q_1=x^{1/10-3\delta}(\log{x})^{-C}$, $Q_2=x^{2/5+4\delta}(\log{x})^C$. Let
\begin{align*}
\mathcal{G}&:=\Bigl[x^{2/5+6\delta}(\log{x})^{2C},\frac{x^{1/2-3\delta}}{(\log{x})^C}\Bigr],\\
\rho_\mathcal{G}(n)&:=\sum_{j=1}^5 \frac{(-1)^j \binom{5}{j}}{\log{x}}\hspace{-1cm}\sum_{\substack{n_1\cdots n_j m_1\cdots m_j=n \\ P^-(n)\ge x^\epsilon \\ m_1,\dots,m_j \le 2x^{1/5}\\ \prod_{i\in\mathcal{I}_1}n_i\prod_{i\in\mathcal{I}_2}m_i\in\mathcal{G}\text{ some }\mathcal{I}_1,\mathcal{I}_2\subseteq \{1,\dots,j\}}}\hspace{-1cm}\mu(m_1)\cdots \mu(m_j)\log{n_1}.
\end{align*}
Then we have that
\[
\sum_{q_1\sim Q_1}\sum_{q_2\sim Q_2}\sup_{(a,q_1 q_2)=1}\Bigl|\sum_{\substack{n\sim x\\ n\equiv a\Mod{q_1 q_2}}}\rho_{\mathcal{G}}(n)-\frac{1}{\phi(q_1q_2)}\sum_{\substack{n\sim x\\ (n,q_1q_2)=1}}\rho_{\mathcal{G}}(n)\Bigr|\ll_A \frac{x}{(\log{x})^A}.
\]
\end{lmm}
\begin{proof}[Proof assuming Proposition \ref{prpstn:MainProp}]
We use inclusion-exclusion to rewrite the condition that there exists a subproduct lying in $\mathcal{G}$ as a linear combination of terms where some fixed subproducts lie in $\mathcal{G}$. We separately consider all possible combinations of signs of the $\mu$ functions (so the terms are all positive or all negative), and then use Lemma \ref{lmm:Separation} to remove the dependencies from the conditions $n_1\cdots n_jm_1\cdots m_j\sim x$ and that some subproducts lie in $\mathcal{G}$ by splitting the summation into short intervals. Finally, by grouping variables suitably we can apply Proposition \ref{prpstn:MainProp}, which gives the result.
\end{proof}
\begin{lmm}\label{lmm:Triple}
Let $A>0$ and $C=C(A)$ be sufficiently large in terms of $A$. Let $x^{\epsilon}\le N_1\le N_2\le N_3\le x^{2/5}$ and $1\le M\le x^{2/5}/N_3$ satisfy $MN_1N_2N_3\asymp x$. Let $\mathcal{I}_1\subseteq[N_1,2N_1],\,\mathcal{I}_2\subseteq[N_2,2N_2],\,\mathcal{I}_3\subseteq[N_3,2N_3]$ be intervals and $\alpha_m$ be a 1-bounded complex sequence. Let
\[
\Delta(a;q):=\sum_{m\sim M}\alpha_m\mathop{\sum_{n_1\in\mathcal{I}_1}\sum_{n_2\in\mathcal{I}_2}\sum_{ n_3\in\mathcal{I}_3}}\limits_{P^-(n_1n_2n_3)\ge x^\epsilon}\Bigl(\mathbf{1}_{m n_1 n_2 n_3\equiv a\Mod{q}}-\frac{\mathbf{1}_{(m n_1n_2n_3,q)=1}}{\phi(q)}\Bigr).
\]
Let $Q_1,Q_2,Q_3$ satisfy $Q_1Q_2Q_3=x^{1/2+\delta}$, $Q_1=x^{1/10-3\delta}(\log{x})^{-C}$ and
\[
Q_2\in [x^{20\delta} (\log{x})^{5C},x^{1/100}].
\]
Then we have that
\[
\sum_{q_1\le Q_1}\sum_{q_2\le Q_2}\sum_{q_3\le Q_3}\sup_{(a,q_1q_2q_3)=1}|\Delta(a;q_1q_2q_3)|\ll_A\frac{x}{(\log{x})^A}.
\]
\end{lmm}
\begin{proof}[Proof assuming Proposition \ref{prpstn:MainProp} and Proposition \ref{prpstn:Triple}]
By Lemma \ref{lmm:Buchstab}, letting $y_1:=x^{2/5+6\delta}(\log{x})^{2C}/(MN_3)\ge 1$, for some 1-bounded $\alpha'_d,\beta_d$ we have
\begin{equation}
\mathbf{1}_{P^-(n_1)\ge x^\epsilon}=\sum_{\substack{n_1=d_1n_1'\\ d_1\le y_1}}\alpha'_{d_1}+\sum_{\substack{n_1=d_1p_1p_2 n_1'\\ d_1\le y_1\le d_1p_1\\ P^-(d_1),p_2\ge p_1\\ p_1\le x^\epsilon}}\beta_{d_1}\mathbf{1}_{P^-(n_1')\ge p_2\text{ or }n_1'=1}.
\label{eq:RoughDecomp}
\end{equation}
(Here we wrote $p_2=P^-(n_1/(d_1p_1))$.) If $y_1<d_1 p_1\le y_1x^\epsilon$ and $m\sim M$ then $m n_3 d_1 p_1 \in [Q_2Q_3 x^{2\delta}(\log{x})^C,x^{3/7}]$, and all the terms in the second summation give rise to a product which lies in our Type II range, and so can be handled satisfactorily. Explicitly, let $\Delta'(a;q)$ be given by
\begin{align*}
\Delta'(a;q)&:=\sum_{m\sim M}\alpha_m\sum_{\substack{d_1p_1p_2n_1'\in\mathcal{I}_1 \\ d_1\le y_1\le d_1p_1\\ P^-( d_1),p_2\ge p_1\\ p_1\le x^\epsilon}}\beta_{d_1}\mathbf{1}_{P^-(n_1')\ge p_2\text{ or }n_1'=1}\mathop{\sum_{n_2\in\mathcal{I}_2}\sum_{ n_3\in\mathcal{I}_3}}\limits_{P^-(n_2n_3)\ge x^\epsilon}\\
&\qquad \times\Bigl(\mathbf{1}_{m d_1 p_1 p_2 n_1' n_2 n_3\equiv a\Mod{q}}-\frac{\mathbf{1}_{(m d_1 p_1 p_2 n_1'n_2n_3,q)=1}}{\phi(q)}\Bigr).
\end{align*}
By Lemma \ref{lmm:Separation} (considering positive and negative real and imaginary parts of $\alpha_{m}\beta_{d_1}$ separately), and Lemma \ref{lmm:TypeII} (grouping $m,d_1,p_1,n_3$ together, $p_2,n_1',n_2$ together and $q_2,q_3$ together), we see that
\[
\sum_{q_1\le Q_1}\sum_{q_2\le Q_2}\sum_{q_3\le Q_3}\sup_{(a,q_1q_2q_3)=1}|\Delta'(a;q_1q_2q_3)|\ll_A\frac{x}{(\log{x})^A},
\]
and so the second term in \eqref{eq:RoughDecomp} contributes negligibly. Thus we just need to consider the first term of \eqref{eq:RoughDecomp}. Therefore, using Lemma \ref{lmm:Separation} again, it suffices to show that
\[
\sum_{q_1\le Q_1}\sum_{q_2\le Q_2}\sum_{q_3\le Q_3}\sup_{(a,q_1q_2q_3)=1}|\Delta''(a;q_1q_2q_3)|\ll_A\frac{x}{(\log{x})^A},
\]
where (letting $m'=d_1m$)
\[
\Delta''(a;q):=\sum_{m'\sim M'}\alpha''_{m'}\sum_{n_1'\in\mathcal{I}_1'}\mathop{\sum_{n_2\in\mathcal{I}_2'}\sum_{ n_3\in\mathcal{I}_3'}}\limits_{P^-(n_2n_3)\ge x^\epsilon}\Bigl(\mathbf{1}_{m' n_1' n_2 n_3\equiv a\Mod{q}}-\frac{\mathbf{1}_{(m' n_1'n_2n_3,q)=1}}{\phi(q)}\Bigr)
\]
for some intervals $\mathcal{I}_1'\subseteq [N_1',2N_1']$ and $\mathcal{I}_2'\subseteq [N_2,2N_2]$, $\mathcal{I}_3'\subseteq[N_3,2N_3]$ where $N_1'\ll N_1$, $M'\ll x^{2/5+14\delta}(\log{x})^{2C}/N_3$ with $M' N_1'\asymp MN_1$, and for some $1$-bounded complex function $\alpha_m''$. By repeating this argument for $n_2,n_3$ in place of $n_1$ it suffices to show that
\[
\sum_{q_1\le Q_1}\sum_{q_2\le Q_2}\sum_{q_3\le Q_3}\sup_{(a,q_1q_2q_3)=1}|\Delta'''(a;q_1q_2q_3)|\ll_A\frac{x}{(\log{x})^A},
\]
where
\[
\Delta'''(a;q):=\sum_{m\sim M''}\alpha'''_m\sum_{n_1\in\mathcal{I}_1''}\sum_{n_2\in\mathcal{I}_2''}\sum_{ n_3\in\mathcal{I}_3''}\Bigl(\mathbf{1}_{m n_1 n_2 n_3\equiv a\Mod{q}}-\frac{\mathbf{1}_{(m n_1n_2n_3,q)=1}}{\phi(q)}\Bigr)
\]
for some intervals $\mathcal{I}_1''\subseteq [N_1',2N_1']$, $\mathcal{I}_2''\subseteq[N_2',2N_2']$ and $\mathcal{I}_3''\subseteq[N_3',2N_3']$ with $N_i'\ll N_i$ for $i\in\{1,2,3\}$ and with $M''\ll x^{2/5+14\delta}(\log{x})^{2C}/\max(N_1',N_2',N_3')$ and $M'' N_1'N_2'N_3'\asymp MN_1N_2N_3\asymp x$. We see that $N_i' M''\le x^{2/5+6\delta}(\log{x})^{2C}$ for all $i$ implies that $M''{}^3N_1'N_2'N_3'\le x^{6/5+18\delta}(\log{x})^{6C}$, which gives $M''\ll x^{1/10+9\delta}(\log{x})^{3C}$ since $M''N_1'N_2'N_3'\asymp x$. In particular, $M''\ll \min\bigl(Q_1 Q_2x^{-4\delta} (\log{x})^{-C},Q_3^{1/2}x^{-2\delta}(\log{x})^{-C}\bigr)$, which is the condition of Proposition \ref{prpstn:Triple} (grouping $Q_1,Q_2$).
Finally, by Lemma \ref{lmm:Partition} we may replace the indicator functions of the intervals $\mathcal{I}_1'',\mathcal{I}_2'',\mathcal{I}_3''$ by suitable smooth functions $\psi_1,\psi_2,\psi_3$ which satisfy $\psi^{(j)}(t)\ll ((j+1)\log{x})^{j C_2}$ for some suitably large constant $C_2=C_2(A)$. The result now follows from Proposition \ref{prpstn:Triple} (grouping $q_1q_2$ together).
\end{proof}
\begin{lmm}[Extended Type II estimate]\label{lmm:ExtendedTypeII}
Let $\delta,A>0$ and $N\in [x^{2/5}/4,4x^{3/5}]$, $M\in[x/(2N),2x/N]$ and let $Q_1,Q_2,Q_3\ge 1$ satisfy $Q_1Q_2Q_3=x^{1/2+\delta}$ with
\[
Q_2<x^{1/16-10\delta-10\epsilon},\qquad \max\Bigl(\frac{x^{1/10+11\delta+10\epsilon}}{Q_2},Q_2 x^{14\delta+10\epsilon}\Bigr)<Q_3<\frac{x^{1/10-3\delta-5\epsilon}}{Q_2^{3/5}}.
\]
Let $\alpha_n,\beta_n$ be complex coefficients satisfying the Siegel-Walfisz condition \eqref{eq:SiegelWalfisz} with $|\alpha_n|,|\beta_n|\le \tau(n)^B$, and set
\[
\Delta(a;q):=\sum_{n\sim N}\alpha_n\sum_{m\sim M}\beta_m\Bigl(\mathbf{1}_{n m\equiv a\Mod{q}}-\frac{\mathbf{1}_{(n m,q)=1}}{\phi(q)}\Bigr).
\]
Then we have that
\begin{equation*}
\sum_{q_1\sim Q_1}\sum_{q_2\sim Q_2}\sup_{(b,q_1q_2)=1}\sum_{q_3\sim Q_3}\sup_{\substack{(a,q_1q_2q_3)=1\\ a\equiv b\Mod{q_1q_2}}}|\Delta(a;q_1q_2q_3)|\ll_{A,B} \frac{x}{(\log{x})^A}.
\end{equation*}
\end{lmm}
\begin{proof}[Proof assuming Proposition \ref{prpstn:SecondProp} and Proposition \ref{prpstn:Zhang}] By symmetry we may assume that $N\le M$ so $N\le 2x^{1/2}$. If
\begin{equation}
Q_3<x^{1/10-3\delta-2\epsilon},
\label{eq:Con1}
\end{equation}
we see that $Q_1^7 Q_2^7 Q_3^{12}=x^{7/2+7\delta}Q_3^5<x^{4-10\epsilon}$. Thus, grouping $q_1 ,q_2$ together, we see that Proposition \ref{prpstn:Zhang} gives the estimate of the lemma for
\[
Q_1 Q_2 x^{2\delta+\epsilon} <N <\frac{x^{1-\epsilon}}{Q_1Q_2}.
\]
Similarly, provided
\begin{align}
Q_2 x^{14\delta+10\epsilon}&\le Q_3, \label{eq:Con2} \\
Q_3Q_2^{3/5}&\le x^{1/10-3\delta-5\epsilon}, \label{eq:Con3}
\end{align}
we see that Proposition \ref{prpstn:SecondProp} gives the result for
\[
\max\Bigl(Q_1 x^{2\delta+5\epsilon},\, Q_2Q_3 x^{1/4+13\delta/2+5\epsilon}\Bigr)<N<\frac{x^{1/2-3\delta-5\epsilon}}{Q_2}.
\]
Together, we see that the ranges for $N$ cover the range $[x^{2/5}/4,2x^{1/2}]$ provided
\begin{align}
Q_1 Q_2&<x^{1/2-2\epsilon}, \label{eq:Con4}\\
Q_1 Q_2^2&<x^{1/2-13\delta-7\epsilon},\label{eq:Con5}\\
Q_1&<x^{2/5-2\delta-10\epsilon},\label{eq:Con6}\\
Q_2Q_3&<x^{3/20-7\delta-10\epsilon}.\label{eq:Con7}
\end{align}
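Concretely, \eqref{eq:Con6} and \eqref{eq:Con7} guarantee that the second range reaches down to $x^{2/5}/4$, since they give
\[
Q_1x^{2\delta+5\epsilon}<x^{2/5-5\epsilon}\quad\text{and}\quad Q_2Q_3x^{1/4+13\delta/2+5\epsilon}<x^{2/5-\delta/2-5\epsilon};
\]
\eqref{eq:Con4} guarantees that the first range reaches up to $2x^{1/2}$, and \eqref{eq:Con5} guarantees that the two ranges overlap, since $Q_1Q_2x^{2\delta+\epsilon}<x^{1/2-11\delta-6\epsilon}/Q_2<x^{1/2-3\delta-5\epsilon}/Q_2$.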
We see that for \eqref{eq:Con2} and \eqref{eq:Con3} to give a non-trivial range for $Q_3$ we must have
\begin{equation}
Q_2<x^{1/16-10\delta-9\epsilon}.\label{eq:Con8}
\end{equation}
We see that \eqref{eq:Con1} and \eqref{eq:Con7} are implied by \eqref{eq:Con3} and \eqref{eq:Con8}. Recalling $Q_1Q_2Q_3=x^{1/2+\delta}$, we see \eqref{eq:Con4} and \eqref{eq:Con5} are implied by \eqref{eq:Con2}. Thus we are left with \eqref{eq:Con2},\eqref{eq:Con3}, \eqref{eq:Con6} and \eqref{eq:Con8}, which give the constraints
\[
Q_2<x^{1/16-10\delta-9\epsilon},\qquad \max\Bigl(\frac{x^{1/10+11\delta+10\epsilon}}{Q_2},Q_2 x^{14\delta+10\epsilon}\Bigr)<Q_3<\frac{x^{1/10-3\delta-5\epsilon}}{Q_2^{3/5}}.
\]
By symmetry, these then cover the range $N\in[x^{2/5},x^{3/5}]$, giving the result.
\end{proof}
We are now in a position to establish Theorems \ref{thrm:WeakEquidistribution}-\ref{thrm:Minorant} assuming Propositions \ref{prpstn:MainProp}-\ref{prpstn:Triple}.
\section{Proof of Theorem \ref{thrm:WeakEquidistribution}}
We now establish Theorem \ref{thrm:WeakEquidistribution} from Proposition \ref{prpstn:MainProp} and Proposition \ref{prpstn:Triple} using the Heath-Brown identity.
\begin{proof}
By partial summation, it suffices to show the result for integers $n$ weighted by $\Lambda(n)\mathbf{1}_{P^-(n)\ge x^\epsilon}/\log{x}$ rather than for primes, and by dyadic dissection it suffices to establish it for $n\sim x$. We apply the Heath-Brown identity (Lemma \ref{lmm:HeathBrown}) with $k=5$, and multiply by $\mathbf{1}_{P^-(n)\ge x^\epsilon}$. This gives
\begin{equation}
\frac{\Lambda(n)\mathbf{1}_{P^-(n)\ge x^\epsilon}}{\log{x}}=\sum_{j=1}^5 \frac{(-1)^j \binom{5}{j}}{\log{x}} \sum_{\substack{n=m_1\cdots m_j n_1\cdots n_{j}\\ m_1,\,\dots,\,m_j\le 2x^{1/5}\\ P^-(m_1),\dots,P^-(n_j)\ge x^\epsilon}}\mu(m_1)\cdots \mu(m_j)\log{n_{1}}.
\label{eq:HeathBrown}
\end{equation}
Define the intervals $\mathcal{B}$ and $\mathcal{G}$ by
\begin{align*}
\mathcal{B}&:=\Bigl[\frac{x^{2/5}}{4},x^{2/5+6\delta}(\log{x})^{2C}\Bigr]\cup\Bigl[\frac{x^{1/2-3\delta}}{(\log{x})^C},2x^{1/2}\Bigr],\\
\mathcal{G}&:=\Bigl[x^{2/5+6\delta}(\log{x})^{2C},\frac{x^{1/2-3\delta}}{(\log{x})^C}\Bigr].
\end{align*}
We split the right hand side of \eqref{eq:HeathBrown} into terms where some sub-product of $n_1,\dots,n_j,$ $m_1,\dots,m_j$ lies in $\mathcal{G}$, terms where no subproduct lies in $\mathcal{G}$ but some subproduct lies in $\mathcal{B}$, and terms with no subproduct in $\mathcal{B}\cup\mathcal{G}$. Explicitly, this gives
\[
\frac{\Lambda(n)\mathbf{1}_{P^-(n)\ge x^\epsilon}}{\log{x}}=\rho_1(n)+\rho_2(n)+\rho_3(n),
\]
where
\begin{align*}
\rho_1(n)&:=\sum_{j=1}^5 \frac{(-1)^j \binom{5}{j}}{\log{x}}\hspace{-1cm}\sum_{\substack{n_1\cdots n_j m_1\cdots m_j=n \\ P^-(n)\ge x^\epsilon \\ m_1,\dots,m_j \le 2x^{1/5}\\ \prod_{i\in\mathcal{I}_1}n_i\prod_{i\in\mathcal{I}_2}m_i\in\mathcal{G}\text{ some }\mathcal{I}_1,\mathcal{I}_2\subseteq \{1,\dots,j\}}}\hspace{-1cm}\mu(m_1)\cdots \mu(m_j)\log{n_1},\\
\rho_2(n)&:=\sum_{j=1}^5 \frac{(-1)^j \binom{5}{j}}{\log{x}}\hspace{-1cm}\sum_{\substack{n_1\cdots n_j m_1\cdots m_j=n \\ P^-(n)\ge x^\epsilon \\ m_1,\dots,m_j \le 2x^{1/5}\\ \prod_{i\in\mathcal{I}_1}n_i\prod_{i\in\mathcal{I}_2}m_i\notin\mathcal{G}\text{ all }\mathcal{I}_1,\mathcal{I}_2\subseteq \{1,\dots,j\}\\ \prod_{i\in\mathcal{I}_1}n_i\prod_{i\in\mathcal{I}_2}m_i\in\mathcal{B}\text{ some }\mathcal{I}_1,\mathcal{I}_2\subseteq \{1,\dots,j\}}}\hspace{-1cm}\mu(m_1)\cdots \mu(m_j)\log{n_1},\\
\rho_3(n)&:=\sum_{j=1}^5 \frac{(-1)^j \binom{5}{j}}{\log{x}}\hspace{-1cm}\sum_{\substack{n_1\cdots n_j m_1\cdots m_j=n \\ P^-(n)\ge x^\epsilon \\ m_1,\dots,m_j \le 2x^{1/5}\\ \prod_{i\in\mathcal{I}_1}n_i\prod_{i\in\mathcal{I}_2}m_i\notin\mathcal{G}\cup\mathcal{B}\text{ all }\mathcal{I}_1,\mathcal{I}_2\subseteq \{1,\dots,j\}}}\hspace{-1cm}\mu(m_1)\cdots \mu(m_j)\log{n_1}.
\end{align*}
By Lemma \ref{lmm:TypeII}, we see that $\rho_1(n)=\rho_{\mathcal{G}}(n)$ satisfies
\[
\sum_{q_1\sim Q_1}\sum_{q_2\sim Q_2}\sup_{(a,q_1 q_2)=1}\Bigl|\sum_{\substack{n\sim x\\ n\equiv a\Mod{q_1 q_2}}}\rho_{1}(n)-\frac{1}{\phi(q_1q_2)}\sum_{\substack{n\sim x\\ (n,q_1q_2)=1}}\rho_{1}(n)\Bigr|\ll_A \frac{x}{(\log{x})^A}.
\]
By Lemma \ref{lmm:Trivial}, we see that $\rho_2(n)$ satisfies $|\rho_2(n)|\ll \rho_\mathcal{B}(n)$, which is equidistributed in arithmetic progressions, and so
\begin{align*}
\sum_{q_1\sim Q_1}\sum_{q_2\sim Q_2}&\sup_{(a,q_1 q_2)=1}\Bigl|\sum_{\substack{n\sim x\\ n\equiv a\Mod{q_1 q_2}}}\rho_{2}(n)-\frac{1}{\phi(q_1q_2)}\sum_{\substack{n\sim x\\ (n,q_1q_2)=1}}\rho_{2}(n)\Bigr|\\
&\ll \sum_{q_1\sim Q_1}\sum_{q_2\sim Q_2}\Bigl(\sup_{(a,q_1 q_2)=1}\sum_{\substack{n\sim x\\ n\equiv a\Mod{q_1 q_2}}}\rho_{\mathcal{B}}(n)+\frac{1}{\phi(q_1q_2)}\sum_{\substack{n\sim x\\ (n,q_1q_2)=1}}\rho_{\mathcal{B}}(n)\Bigr)\\
&\ll \Bigl(\sum_{\substack{n\sim x}}\rho_{\mathcal{B}}(n)\Bigr)\Bigl(\sum_{q_1\sim Q_1}\sum_{q_2\sim Q_2}\frac{1}{\phi(q_1q_2)}\Bigr)\\
&\ll \frac{\delta x}{\log{x}}+\frac{x\log\log{x}}{(\log{x})^2}.
\end{align*}
Thus we are left to consider the contribution of $\rho_3(n)$ when no subproduct lies in $\mathcal{G}\cup \mathcal{B}=[x^{2/5}/4,2x^{1/2}]$. Since $m_1\cdots m_j n_1\cdots n_j\sim x$, this means that no subproduct lies in $[x^{2/5},2x^{3/5}]$. First we consider the case when $n_i<x^{2/5}$ for all $i$.
By relabelling any $n_i<x^{1/5}$, we may assume that $x^{2/5}>n_1\ge n_2\ge \dots \ge n_{j_1}\ge x^{1/5}\ge m_1\ge \dots\ge m_{j_2}$. We see that $j_1\ge 1$, since otherwise all factors would be at most $x^{1/5}$, so some subproduct would lie in $[x^{2/5},x^{3/5}]$. If $n_1m_1\cdots m_j<x^{2/5}$ for some $0\le j< j_2$, then $n_1 m_1\cdots m_{j+1}< x^{3/5}$ since $m_{j+1}\le x^{1/5}$. Since no subproduct lies in $[x^{2/5},x^{3/5}]$, we see $n_1m_1\dots m_{j+1}< x^{2/5}$. By applying this for $j=0,1,\dots,j_2-1$ in turn, we see that $n_1m<x^{2/5}$ where $m=\prod_{i=1}^{j_2}m_i$. Since $x\le n_1\cdots n_{j_1}m\le (n_1 m)^{j_1}\le x^{2j_1/5}$, we see that $j_1\ge 3$. Since $n_i>x^{1/5}$ for all $i\in\{1,\dots,j_1\}$, we see that $n_{1}n_{2}\ge x^{2/5}$, and so $n_{1}n_{2}> 2x^{3/5}$ since there are no subproducts in $[x^{2/5},2x^{3/5}]$. But then $n_1\cdots n_{j_1}> 2x^{1/5+j_1/5}$, so we must have $j_1\le 3$. Therefore $j_1=3$. Finally, since $n_1m<x^{2/5}$, we see that $n_1n_2n_3m^3\le (n_1m)^3\le x^{6/5}$, so $m<x^{1/10}$ since $n_1n_2n_3m \sim x$.
We are almost able to reduce to Proposition \ref{prpstn:Triple} and Proposition \ref{prpstn:MainProp} via Lemma \ref{lmm:Buchstab}, but unfortunately the ranges for $m$ do not quite overlap. To get around this, we use the fact that almost all $q_2\sim Q_2$ have a divisor in $\mathcal{I}_0:=[x^{28\delta} (\log{x})^{5C},x^{1/100}]$ and so we can use Lemma \ref{lmm:Triple}. Indeed, using a sieve upper bound (e.g. Lemma \ref{lmm:Sieve}) we see that
\begin{align}
\sum_{\substack{q_2\sim Q_2\\ \not\exists d|q_2\text{ s.t. }d\in\mathcal{I}_0}}1\ll \sum_{q\le x^{28\delta}(\log{x})^{5C}}\sum_{\substack{r\le 2Q_2/q\\ P^-(r)\ge x^{1/100}}}1&\ll \frac{Q_2}{\log{x}} \sum_{q\le x^{28\delta}(\log{x})^{5C}}\frac{1}{q}\nonumber\\
&\ll Q_2\Bigl(\delta+\frac{\log\log{x}}{\log{x}}\Bigr).
\label{eq:BadQBound}
\end{align}
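The final numerical bound here comes from the elementary estimate $\sum_{q\le Y}1/q\ll \log{Y}$, applied with $Y=x^{28\delta}(\log{x})^{5C}$:
\[
\sum_{q\le x^{28\delta}(\log{x})^{5C}}\frac{1}{q}\ll 28\delta\log{x}+5C\log\log{x}\ll \delta\log{x}+\log\log{x},
\]
and multiplying by $Q_2/\log{x}$ gives \eqref{eq:BadQBound}.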
Another simple sieve upper bound (e.g. Lemma \ref{lmm:Sieve}) gives
\begin{align}
\Bigl|\sum_{\substack{n\sim x\\ n\equiv a\Mod{q_1q_2} }}\rho_3(n)-\frac{1}{\phi(q_1q_2)}\sum_{\substack{n\sim x\\ (n,q_1q_2)=1}}\rho_3(n)\Bigr|&\ll \sum_{\substack{n\sim x\\ n\equiv a\Mod{q_1q_2}\\ P^-(n)\ge x^\epsilon }}1+\frac{1}{\phi(q_1q_2)}\sum_{\substack{n\sim x\\ (n,q_1q_2)=1\\ P^-(n)\ge x^{\epsilon} }}1\nonumber\\
&\ll \frac{x}{\phi(q_1q_2)\log{x}}.
\label{eq:TrivSieveBound}
\end{align}
Combining \eqref{eq:BadQBound} and \eqref{eq:TrivSieveBound}, we see that the contribution from $q_2$ with no factor in $\mathcal{I}_0$ is acceptably small. Thus we only need to consider the contribution when $q_2$ has a factor in $\mathcal{I}_0$, and so it suffices to show that
\begin{align*}
\sum_{q_1\sim Q_1}\sum_{d\sim D}\sum_{q_2'\sim Q_2'}\sup_{(a,q_1d q_2')=1}\Bigl|\sum_{\substack{n\sim x\\ n\equiv a\Mod{q_1d q_2'}}}\widetilde{\rho}_3(n)-\frac{1}{\phi(q_1d q_2')}\sum_{\substack{n\sim x\\ (n,q_1d q_2')=1}}\widetilde{\rho}_3(n)\Bigr|\\
\ll_A\frac{x}{(\log{x})^A}
\end{align*}
over all choices of $D,Q_2'$ with $D Q_2'\asymp Q_2$ and $x^{28\delta}(\log{x})^{5C}\le D\le x^{1/100}$, where
\[
\widetilde{\rho}_3(n):=\sum_{\substack{n=m n_1 n_2 n_3\\ n_i m\le x^{2/5}/4\,\forall i\\ P^-(n)\ge x^\epsilon}}\gamma_m
\]
for some 1-bounded $\gamma_m$ supported on $m\le x^{1/10}$. This now follows from applying Lemma \ref{lmm:Separation} (after splitting according to positive and negative real and imaginary parts) and Lemma \ref{lmm:Triple}.
Finally, we consider the case when $n_i>x^{3/5}$ for some $i$. By Lemma \ref{lmm:Buchstab} these terms are of the form
\[
\sum_{\substack{n=n'm'\\ m'\le x^{2/5+\epsilon} }}\alpha_{m'}+\sum_{\substack{n=m'p_1p_2n'\\ p_2> p_1 \\ m'<x^{2/5+\epsilon}<m'p_1\\ p_1<x^\epsilon}}\beta_{m',p_1}\mathbf{1}_{P^-(n')\ge p_2 \text{ or }n'=1}
\]
for some 1-bounded coefficients $\alpha_{m'}, \beta_{m',p_1}$. It is trivial that the first term above is equidistributed in arithmetic progressions, and (after applying Lemma \ref{lmm:Separation} to remove the dependencies) the second term is also equidistributed by Proposition \ref{prpstn:MainProp} (grouping $m'p_1$ and $n'p_2$ together). This completes the proof.
\end{proof}
\section{Proof of Theorem \ref{thrm:AlmostUniform}}
We now establish Theorem \ref{thrm:AlmostUniform} from Lemma \ref{lmm:ExtendedTypeII} (which relies on Proposition \ref{prpstn:SecondProp} and Proposition \ref{prpstn:Zhang}) and Proposition \ref{prpstn:Triple}. The proof is similar to the proof of Theorem \ref{thrm:WeakEquidistribution}.
\begin{proof}
Since the implied constant depends on $\delta$, we may assume that $\delta>100\epsilon$.
By partial summation and dyadic dissection, it suffices to consider integers $n\sim x$ weighted by $\Lambda(n)/\log{x}$ instead of primes $p\le x$ and moduli $q_1\sim Q_1$, $q_2\sim Q_2$, $q_3\sim Q_3$ with $Q_1Q_2Q_3\le x^{1/2+\delta}$. By the Heath-Brown identity, we have
\begin{equation}
\Lambda(n)=\sum_{j=1}^5 (-1)^j \binom{5}{j} \sum_{\substack{n=m_1\cdots m_j n_1\cdots n_{j}\\ m_1,\,\dots,\,m_j\le 2x^{1/5}\\ P^-(m_1),\dots,P^-(n_j)\ge x^\epsilon}}\mu(m_1)\cdots \mu(m_j)\log{n_{1}}.
\label{eq:HeathBrown2}
\end{equation}
We split the right hand side of \eqref{eq:HeathBrown2} according to whether a subproduct of $n_1,\dots,n_j,$ $m_1,\dots,m_j$ lies in $[x^{2/5}/4,4x^{3/5}]$ or not. Let $\rho_1(n)$ denote the terms with a subproduct in $[x^{2/5}/4,4x^{3/5}]$ and $\rho_2(n)$ denote the terms with no subproduct in $[x^{2/5}/4,4x^{3/5}]$. By Lemma \ref{lmm:Separation} and Lemma \ref{lmm:ExtendedTypeII}, we see that
\begin{align*}
\sum_{q_1\sim Q_1}\sum_{q_2\sim Q_2}\sup_{(b,q_1q_2)=1}\sum_{q_3\sim Q_3}\sup_{\substack{(a,q_1 q_2 q_3)=1\\ a\equiv b\Mod{q_1q_2} }}\Bigl|\sum_{n\sim x}\rho_{1}(n)\Bigl(\mathbf{1}_{ n\equiv a\Mod{q_1 q_2 q_3}}-\frac{\mathbf{1}_{(n,q_1q_2q_3)=1}}{\phi(q_1q_2 q_3)}\Bigr)\Bigr|\\
\ll_A \frac{x}{(\log{x})^A}.
\end{align*}
Thus we are left to consider the contribution of $\rho_2(n)$ when no subproduct lies in $[x^{2/5}/4,4x^{3/5}]$. As in the proof of Theorem \ref{thrm:WeakEquidistribution}, after relabelling we may assume that $\rho_2(n)$ is of the form
\[
\rho_2(n)=\sum_{\substack{n_1n_2n_3m=n\\ n_1,n_2,n_3\in [x^{1/5},x^{2/5}]\\ m\le x^{2/5}/\max(n_1,n_2,n_3)}}\alpha_m
\]
for some coefficients $|\alpha_m|\le \tau(m)^8$. Since $Q_2Q_3\ge x^{1/10+11\delta+10\epsilon}x^{-4\delta}(\log{x})^{-C}>x^{1/10+\epsilon}$, we see that Lemma \ref{lmm:Separation} and Proposition \ref{prpstn:Triple} give
\[
\sum_{q_1\sim Q_1}\sum_{q_2\sim Q_2}\sum_{q_3\sim Q_3}\sup_{(a,q_1 q_2 q_3)=1}\Bigl|\sum_{n\sim x}\rho_{2}(n)\Bigl(\mathbf{1}_{ n\equiv a\Mod{q_1 q_2 q_3}}-\frac{\mathbf{1}_{(n,q_1q_2q_3)=1}}{\phi(q_1q_2 q_3)}\Bigr)\Bigr|\ll_A \frac{x}{(\log{x})^A}.
\]
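The lower bound on $Q_2Q_3$ used above is an elementary exponent computation:
\[
x^{1/10+11\delta+10\epsilon}x^{-4\delta}(\log{x})^{-C}=x^{1/10+7\delta+10\epsilon}(\log{x})^{-C}=x^{1/10+\epsilon}\cdot\frac{x^{7\delta+9\epsilon}}{(\log{x})^{C}}>x^{1/10+\epsilon}
\]
for $x$ sufficiently large, since $x^{7\delta+9\epsilon}$ dominates $(\log{x})^{C}$ for any fixed $\delta,\epsilon>0$.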
This gives the result.
\end{proof}
\section{Proof of Theorem \ref{thrm:Minorant}}
We now establish Theorem \ref{thrm:Minorant} from Proposition \ref{prpstn:MainProp} using Harman's sieve (\cite{Harman}). As mentioned in the introduction, the numerical side of these estimates could be improved considerably.
\begin{proof}
Clearly we may assume that $x$ is sufficiently large in terms of $\delta$. By dyadic dissection it suffices to show the result summing over $n\sim x$ rather than $n\le x$. To simplify notation, set $z_1:=x^{1/7}, z_2:=x^{3/7}, z_3:=x^{4/7}$. Let
\[
\rho(n,z):=\begin{cases}
1,\qquad &P^-(n)> z,\\
0,&\text{otherwise,}
\end{cases}
\]
and
\begin{align*}
\mathcal{G}&:=[x^{3/7+8\delta},x^{1/2-4\delta}]\cup[x^{1/2+4\delta},x^{4/7-8\delta}],\\
\mathcal{B}&:=[x^{3/7},x^{3/7+8\delta}]\cup[x^{1/2-4\delta},x^{1/2+4\delta}]\cup[x^{4/7-8\delta},x^{4/7}].
\end{align*}
For $Q_2\in [x^{2/5+5\delta},x^{3/7}]$, it follows from Proposition \ref{prpstn:MainProp} that terms with a divisor in $\mathcal{G}$ will satisfy suitable equidistribution estimates. Trivially any convolution involving a smooth sequence of length greater than $x^{1/2+\delta}$ also equidistributes suitably. To construct our minorant $\rho$ we wish to decompose $\rho(n,\sqrt{2x})$ (the indicator function of the primes) into various terms which are either equidistributed or have a reasonable lower bound which is equidistributed. We do this following Harman's sieve (see \cite{Harman}). Since $\mathcal{B}$ consists of intervals which are short on a logarithmic scale, terms with a factor in $\mathcal{B}$ will ultimately contribute negligibly.
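We note in passing that both components of $\mathcal{G}$ are nonempty intervals precisely when $\delta$ is small: the conditions $3/7+8\delta<1/2-4\delta$ and $1/2+4\delta<4/7-8\delta$ are each equivalent to
\[
12\delta<\frac{1}{2}-\frac{3}{7}=\frac{1}{14},
\]
that is, to $\delta<1/168$; this is the implicit smallness assumption on $\delta$ here.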
By Buchstab's identity (inclusion-exclusion on the smallest prime factor)
\begin{align*}
\rho(n,\sqrt{2x})&=\rho(n,z_1)-\hspace{-0.2cm}\sum_{\substack{p|n\\ z_1<p\le z_2}}\rho\Bigl(\frac{n}{p},p\Bigr)-\hspace{-0.2cm}\sum_{\substack{p|n\\ z_2< p\le \sqrt{2x}} }\rho\Bigl(\frac{n}{p},p\Bigr)=:S_1(n)-S_2(n)-S_3(n).
\end{align*}
For $n\sim x$ and $p\le \sqrt{2x}$, we see $\rho(n/p,p)=\rho(n/p,\min(p,\sqrt{2x/p}))$ since if $p>(2x)^{1/3}$ this counts primes $n/p$. Decomposing the middle term $S_2$ further gives
\begin{align*}
S_2(n)&=\sum_{\substack{p|n\\ z_1<p\le z_2}}\rho\Bigl(\frac{n}{p},\min\Bigl(p,\sqrt{\frac{2x}{p}}\Bigr)\Bigr)=\sum_{\substack{p|n\\ z_1<p\le z_2}}\rho\Bigl(\frac{n}{p},z_1\Bigr)-\hspace{-0.2cm}\sum_{\substack{p_1p_2|n\\ z_1<p_2\le p_1\le z_2\\ p_1p_2^2\le 2x}}\rho\Bigl(\frac{n}{p_1p_2},p_2\Bigr)\\
&=\sum_{\substack{p|n\\ z_1<p\le z_2}}\rho\Bigl(\frac{n}{p},z_1\Bigr)-\sum_{\substack{p_1p_2|n\\ z_1<p_2\le p_1\le z_2\\ p_1p_2\le z_2\\ p_1 p_2^2\le z_3}}\rho\Bigl(\frac{n}{p_1p_2},p_2\Bigr)-\sum_{\substack{p_1p_2|n\\ z_1<p_2\le p_1\le z_2\\ p_1p_2\le z_2\\ p_1 p_2^2> z_3}}\rho\Bigl(\frac{n}{p_1p_2},p_2\Bigr)\\
&\quad-\sum_{\substack{p_1p_2|n\\ z_1<p_2\le p_1\le z_2\\ z_2<p_1p_2\le z_3}}\rho\Bigl(\frac{n}{p_1p_2},p_2\Bigr)-\sum_{\substack{p_1p_2|n\\ z_1<p_2\le p_1\le z_2\\ p_1p_2^2\le 2x\\ z_3< p_1p_2}}\rho\Bigl(\frac{n}{p_1p_2},p_2\Bigr)\\
&=:S_{2,1}(n)-S_{2,2}(n)-S_{2,3}(n)-S_{2,4}(n)-S_{2,5}(n).
\end{align*}
Furthermore, we see that
\[
S_{2,2}(n)=\hspace{-0.2cm}\sum_{\substack{p_1p_2|n\\ z_1<p_2\le p_1\le z_2\\ p_1p_2\le z_2\\ p_1 p_2^2\le z_3}}\hspace{-0.2cm}\rho\Bigl(\frac{n}{p_1p_2},z_1\Bigr)-\hspace{-0.2cm}\sum_{\substack{p_1p_2p_3|n\\ z_1<p_3\le p_2\le p_1\le z_2\\ p_1p_2\le z_2 \\p_1 p_2^2\le z_3}}\hspace{-0.2cm}\rho\Bigl(\frac{n}{p_1p_2 p_3},p_3\Bigr)=:S_{2,2,1}(n)-S_{2,2,2}(n).
\]
Note that in $S_{2,2,2}(n)$ since $x^{1/7}<p_3\le p_2\le p_1\le x^{3/7}$ and $p_1p_2^2\le x^{4/7}$ we have $p_1p_2p_3\in[x^{3/7},x^{4/7}]$. Thus the $S_{2,2,2}(n)$ terms (and also the $S_3(n),S_{2,4}(n)$ terms) are close to being suitable for our Type II estimate Proposition \ref{prpstn:MainProp}. Specifically,
\begin{align*}
S_3(n)&=\sum_{\substack{p|n\\ z_2< p\le \sqrt{2x} \\ p\in \mathcal{G}}}\rho\Bigl(\frac{n}{p},p\Bigr)+\hspace{-0.1cm}\sum_{\substack{p|n\\ z_2< p\le \sqrt{2x} \\ p\in \mathcal{B}}}\rho\Bigl(\frac{n}{p},p\Bigr)=:S_{3}^\mathcal{G}(n)+S_3^\mathcal{B}(n),\\
S_{2,4}(n)&=\sum_{\substack{p_1p_2|n\\ z_1<p_2\le p_1\le z_2\\ p_1p_2\in\mathcal{G} }}\rho\Bigl(\frac{n}{p_1p_2},p_2\Bigr)+\hspace{-0.1cm}\sum_{\substack{p_1p_2|n\\ z_1<p_2\le p_1\le z_2\\ p_1p_2\in\mathcal{B} }}\rho\Bigl(\frac{n}{p_1p_2},p_2\Bigr)=:S_{2,4}^\mathcal{G}(n)+S_{2,4}^\mathcal{B}(n),\\
S_{2,2,2}(n)&=\sum_{\substack{p_1p_2p_3|n\\ z_1<p_3\le p_2\le p_1\le z_2\\ p_1p_2\le z_2 \\p_1 p_2^2\le z_3\\ p_1p_2p_3\in\mathcal{G}}}\rho\Bigl(\frac{n}{p_1p_2p_3},p_3\Bigr)+\hspace{-0.1cm}\sum_{\substack{p_1p_2p_3|n\\ z_1<p_3\le p_2\le p_1\le z_2\\ p_1p_2\le z_2 \\p_1 p_2^2\le z_3\\ p_1p_2p_3\in\mathcal{B}}}\rho\Bigl(\frac{n}{p_1p_2p_3},p_3\Bigr)\\
&=:S_{2,2,2}^\mathcal{G}(n)+S_{2,2,2}^\mathcal{B}(n).
\end{align*}
Let $z_0:=x^{1/14-4\delta}$, and let $e\le z_2$ be such that $e|n$. By two applications of Lemma \ref{lmm:Buchstab}, we have for some 1-bounded coefficients $\alpha_d,\alpha'_d,\beta_d,\beta'_d$
\begin{align*}
\rho\Bigl(\frac{n}{e},z_1\Bigr)&=\sum_{\substack{d_1|n/e\\ d_1\le z_2/e}}\alpha_{d_1}\rho\Bigl(\frac{n}{d_1 e},z_0\Bigr)+\sum_{\substack{d_1p_1|n/e\\ d_1\le z_2/e< d_1p_1\\ z_0< p_1\le z_1\\ P^-(d_1)\ge p_1}}\beta_{d_1}\rho\Bigl(\frac{n}{d_1 p_1 e},p_1\Bigr)\\
&=\sum_{\substack{d_2d_1|n/e\\ d_2d_1\le z_2/e}}\alpha'_{d_2}\alpha_{d_1}\rho\Bigl(\frac{n}{d_1 d_2 e},1\Bigr)+\sum_{\substack{d_2p_2d_1|n/e \\ d_2d_1\le z_2/e< d_2d_1p_2\\ 1<p_2\le z_0 \\ P^-(d_2)\ge p_2}}\beta'_{d_2}\alpha_{d_1}\rho\Bigl(\frac{n}{d_1 d_2 p_2 e},p_2\Bigr)\\
&\qquad+\sum_{\substack{d_1p_1|n/e\\ d_1\le z_2/e< d_1p_1\\ z_0< p_1\le z_1\\ P^-(d_1)\ge p_1\\ d_1p_1e\in\mathcal{G} }}\beta_{d_1}\rho\Bigl(\frac{n}{d_1 p_1 e},p_1\Bigr)+\sum_{\substack{d_1p_1|n/e\\ d_1\le z_2/e< d_1p_1\\ z_0< p_1\le z_1\\ P^-(d_1)\ge p_1\\ d_1p_1e\in\mathcal{B} }}\beta_{d_1}\rho\Bigl(\frac{n}{d_1 p_1 e},p_1\Bigr)\\
&=:T^{\text{triv}}(n,e)+T^{\mathcal{G}}_1(n,e)+T_2^{\mathcal{G}}(n,e)+T^{\mathcal{B}}(n,e).
\end{align*}
We observe that in $T_1^{\mathcal{G}}(n,e)$ since $p_2\le x^{1/14-4\delta}$ we have $ed_2d_1p_2\le z_2x^{1/14 - 4\delta}=x^{1/2-4\delta}$, and so $ed_2d_1p_2\in\mathcal{G}$.
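For the reader's convenience we record the exponent computation behind this observation:
\[
z_2\,x^{1/14-4\delta}=x^{3/7}\cdot x^{1/14-4\delta}=x^{6/14+1/14-4\delta}=x^{1/2-4\delta},
\]
so the product $ed_2d_1p_2$ lies below the upper endpoint of the first component of $\mathcal{G}$.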
Using this decomposition we see that
\begin{align*}
S_1(n)&=T^{\text{triv}}(n,1)+\Bigl(T^{\mathcal{G}}_1(n,1)+T_2^{\mathcal{G}}(n,1)\Bigr)+T^{\mathcal{B}}(n,1)=:S_{1}^{\text{triv}}(n)+S_{1}^{\mathcal{G}}(n)+S_{1}^{\mathcal{B}}(n),\\
S_{2,1}(n)&=\sum_{\substack{p|n\\ z_1<p\le z_2}}T^{\text{triv}}(n,p)+\sum_{\substack{p|n\\ z_1<p\le z_2}}\Bigl(T_1^{\mathcal{G}}(n,p)+T_2^{\mathcal{G}}(n,p)\Bigr)+\sum_{\substack{p|n\\ z_1<p\le z_2}}T^{\mathcal{B}}(n,p)\\
&=:S_{2,1}^{\text{triv}}(n)+S_{2,1}^{\mathcal{G}}(n)+S_{2,1}^{\mathcal{B}}(n).
\end{align*}
Similarly, we write
\[
S_{2,2,1}(n)=S_{2,2,1}^{\text{triv}}(n)+S_{2,2,1}^{\mathcal{G}}(n)+S_{2,2,1}^{\mathcal{B}}(n).
\]
Thus, putting this all together we find
\begin{align*}
\rho(n,\sqrt{2x})&=S^{\text{triv}}(n)+S^{\mathcal{G}}(n)+S^{\mathcal{B}}(n)+S_{2,3}(n)+S_{2,5}(n),
\end{align*}
where
\begin{align*}
S^{\text{triv}}(n)&:=S_1^\text{triv}(n)-S_{2,1}^\text{triv}(n)+S^\text{triv}_{2,2,1}(n),\\
S^{\mathcal{G}}(n)&:=S_1^\mathcal{G}(n)-S_{2,1}^{\mathcal{G}}(n)+S_{2,2,1}^{\mathcal{G}}(n)-S_{2,2,2}^{\mathcal{G}}(n)+S_{2,4}^{\mathcal{G}}(n)-S_3^\mathcal{G}(n),\\
S^{\mathcal{B}}(n)&:=S_1^\mathcal{B}(n)-S_{2,1}^{\mathcal{B}}(n)+S_{2,2,1}^{\mathcal{B}}(n)-S_{2,2,2}^{\mathcal{B}}(n)+S_{2,4}^{\mathcal{B}}(n)-S_3^\mathcal{B}(n).
\end{align*}
By Lemma \ref{lmm:Separation} (to remove dependencies from inequalities) and Proposition \ref{prpstn:MainProp} since each of $S_1^{\mathcal{G}}(n),S_{2,1}^{\mathcal{G}}(n),S_{2,2,1}^{\mathcal{G}}(n),S_{2,2,2}^{\mathcal{G}}(n),S_{2,4}^{\mathcal{G}}(n),S_3^{\mathcal{G}}(n)$ are supported on $n$ with a factor in $\mathcal{G}$, we have that
\[
\sum_{q_1\le Q_1}\sum_{q_2\le Q_2}\sup_{(a,q_1q_2)=1}\Bigl|\sum_{n\sim x}S^\mathcal{G}(n)\Bigl(\mathbf{1}_{n\equiv a\Mod{q_1q_2}}-\frac{\mathbf{1}_{(n,q_1q_2)=1}}{\phi(q_1q_2)}\Bigr)\Bigr|\ll_A\frac{x}{(\log{x})^A}.
\]
Similarly, since each of $S_1^{\mathcal{B}}(n),S_{2,1}^{\mathcal{B}}(n),S_{2,2,1}^{\mathcal{B}}(n),S_{2,2,2}^{\mathcal{B}}(n),S_{2,4}^{\mathcal{B}}(n),S_3^{\mathcal{B}}(n)$ are supported on $n$ with a factor in $\mathcal{B}$ and $P^-(n)\ge z_0$, we have that
\[
-C\rho_\mathcal{B}(n)\le S^{\mathcal{B}}(n)\le C\rho_\mathcal{B}(n)
\]
for some suitably large absolute constant $C$, where $\rho_{\mathcal{B}}$ is the function defined in Lemma \ref{lmm:Trivial}. We recall from Lemma \ref{lmm:Trivial} that $\rho_{\mathcal{B}}(n)$ satisfies
\[
\sum_{q_1\le Q_1}\sum_{q_2\le Q_2}\sup_{(a,q_1q_2)=1}\Bigl|\sum_{n\sim x}\rho_{\mathcal{B}}(n)\Bigl(\mathbf{1}_{n\equiv a\Mod{q_1q_2}}-\frac{\mathbf{1}_{(n,q_1q_2)=1}}{\phi(q_1q_2)}\Bigr)\Bigr|\ll_A\frac{x}{(\log{x})^A}.
\]
Finally, recalling the definition of $T^{\text{triv}}(n,e)$, we see that for $e<z_2$ and $q<x^{1-\epsilon}/z_2$, we have that
\begin{align*}
\sum_{\substack{n\sim x\\ e|n}}T^{\text{triv}}(n,e)\mathbf{1}_{n\equiv a\Mod{q}}&=\sum_{\substack{d_1d_2\le z_2/e\\ (d_1d_2,q)=1}}\alpha_{d_2}'\alpha_{d_1}\sum_{\substack{n'\sim x/(d_1d_2e)\\ n'\equiv a\overline{d_1d_2e}\Mod{q}}}1\\
&=\sum_{\substack{n\sim x\\ e|n}}T^{\text{triv}}(n,e)\mathbf{1}_{(n,q)=1}+O_A\Bigl(\frac{x}{q e (\log{x})^A}\Bigr).
\end{align*}
Thus
\[
\sum_{q_1\le Q_1}\sum_{q_2\le Q_2}\sup_{(a,q_1q_2)=1}\Bigl|\sum_{n\sim x}S^\text{triv}(n)\Bigl(\mathbf{1}_{n\equiv a\Mod{q_1q_2}}-\frac{\mathbf{1}_{(n,q_1q_2)=1}}{\phi(q_1q_2)}\Bigr)\Bigr|\ll_A\frac{x}{(\log{x})^A}.
\]
With this set-up, we define our minorant $\rho(n)$ by
\begin{equation}
\rho(n):=S^\text{triv}(n)+S^{\mathcal{G}}(n)-C\rho_{\mathcal{B}}(n).
\end{equation}
Since $S_{2,3}(n),S_{2,5}(n)\ge 0$ and $S^{\mathcal{B}}(n)\ge -C\rho_\mathcal{B}(n)$, we see that
\[
\rho(n)\le \rho(n,\sqrt{2x}).
\]
Since $S^{\text{triv}}(n),S^{\mathcal{G}}(n),\rho_{\mathcal{B}}(n)$ are equidistributed in arithmetic progressions, we also have
\[
\sum_{q_1\le Q_1}\sum_{q_2\le Q_2}\sup_{(a,q_1q_2)=1}\Bigl|\sum_{n\sim x}\rho(n)\Bigl(\mathbf{1}_{n\equiv a\Mod{q_1q_2}}-\frac{\mathbf{1}_{(n,q_1q_2)=1}}{\phi(q_1q_2)}\Bigr)\Bigr|\ll_A\frac{x}{(\log{x})^A}.
\]
This gives the first and third claims of Theorem \ref{thrm:Minorant}. We are therefore left to establish the bound $\sum_{n\sim x}\rho(n)\ge \frac{1}{8}\sum_{p\sim x}1$. We note that
\[
\rho(n)\ge \rho(n,\sqrt{2x})-S_{2,3}(n)-S_{2,5}(n)-2C \rho_{\mathcal{B}}(n).
\]
Thus, by partial summation, the prime number theorem, Lemma \ref{lmm:Buchstab2} and Lemma \ref{lmm:Trivial}, we have that
\begin{align*}
\sum_{n\sim x}\rho(n)&\ge \frac{(1+o(1))x}{\log{x}}\Biggl(1-\int_{4/21}^{2/7}\int_{2/7-u/2}^{\min(u,3/7-u)}\omega\Bigl(\frac{1-u-v}{v}\Bigr)\frac{ du dv}{u v^2}\\
&\qquad -\int_{2/7}^{3/7}\int_{4/7-u}^{\min(u,(1-u)/2)}\omega\Bigl(\frac{1-u-v}{v}\Bigr)\frac{ du dv}{u v^2}-O(\delta)\Biggr).
\end{align*}
(Here $\omega(u)$ is the Buchstab function described in Lemma \ref{lmm:Buchstab2}.) Crudely bounding $\omega(v)\le 1$ and then calculating the integrals gives
\[
\sum_{n\sim x}\rho(n)\ge \frac{(1+o(1))x}{\log{x}}\Bigl(\frac{25}{12}-\frac{19}{6}\log{2}+\frac{1}{4}\log{3}-O(\delta)\Bigr)\ge \frac{1}{8}\sum_{p\sim x}1
\]
for $x$ large enough and $\delta$ small enough. This gives the result.
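For concreteness, the numerical value of the constant appearing above is
\[
\frac{25}{12}-\frac{19}{6}\log{2}+\frac{1}{4}\log{3}=2.0833\ldots-2.1949\ldots+0.2746\ldots=0.1630\ldots,
\]
which exceeds $1/8=0.125$ with room to spare for the $O(\delta)$ error.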
\end{proof}
\section{Further preparatory lemmas}
We are left to establish Propositions \ref{prpstn:MainProp}-\ref{prpstn:Triple}. Before embarking upon this, we first collect some basic lemmas for use later on.
\begin{lmm}[Poisson Summation]\label{lmm:Completion}
Let $C>0$ and $f:\mathbb{R}\rightarrow\mathbb{R}$ be a smooth function which is supported on $[-10,10]$ and satisfies $\|f^{(j)}\|_\infty\ll ((j+1)\log{x})^{j C}$ for all $j\ge 0$, and let $M,q\le x$. Then we have
\[
\sum_{m\equiv a\Mod{q}} f\Bigl(\frac{m}{M}\Bigr)=\frac{M}{q}\hat{f}(0)+\frac{M}{q}\sum_{1\le |h|\le H}\hat{f}\Bigl(\frac{h M}{q}\Bigr)e\Bigl(\frac{ah}{q}\Bigr)+O_C(x^{-100}),
\]
for any choice of $H\ge q(\log{x})^{2C+1} /M$.
\end{lmm}
\begin{proof}
This is \cite[Lemma 12.4]{May1}.
\end{proof}
\begin{lmm}[Summation with coprimality constraint]\label{lmm:TrivialCompletion}
Let $C>0$ and $f:\mathbb{R}\rightarrow\mathbb{R}$ be a smooth function which is supported on $[-10,10]$ and satisfies $\|f^{(j)}\|_\infty\ll ((j+1)\log{x})^{j C}$ for all $j\ge 0$. Then we have
\[
\sum_{(m,q)=1}f\Bigl(\frac{m}{M}\Bigr)=\frac{\phi(q)}{q}M+O(\tau(q)(\log{x})^{2C}).
\]
\end{lmm}
\begin{proof}
This is \cite[Lemma 12.6]{May1}.
\end{proof}
\begin{lmm}[Completion of inverses]\label{lmm:InverseCompletion}
Let $C>0$ and $f:\mathbb{R}\rightarrow\mathbb{R}$ be a smooth function which is supported on $[-10,10]$ and satisfies $\|f^{(j)}\|_\infty\ll ((j+1)\log{x})^{j C}$ for all $j\ge 0$. Let $(d,q)=1$. Then we have for any $H\ge (\log{x})^{2C+1} d q/N$
\begin{align*}
\sum_{\substack{(n,q)=1\\ n\equiv n_0\Mod{d}}}&f\Bigl(\frac{n}{N}\Bigr)e\Bigl(\frac{b\overline{n}}{q}\Bigr)=\frac{N}{d q}\sum_{|h|\le H}\hat{f}\Bigl(\frac{h N}{d q}\Bigr)e\Bigl(\frac{n_0\overline{q}h}{d}\Bigr)S(h\overline{d},b;q)+O_C(x^{-100}),
\end{align*}
where $S(m,n;c)$ is the standard Kloosterman sum
\[
S(m,n;c):=\sum_{\substack{b\Mod{c}\\ (b,c)=1}}e\Bigl(\frac{m b+ n \overline{b}}{c}\Bigr).
\]
\end{lmm}
\begin{proof}
This is \cite[Lemma 12.5]{May1}, with a slightly different presentation of the terms.
\end{proof}
\begin{lmm}[Weil bound]\label{lmm:Kloosterman}
Let $S(m,n;c)$ be the standard Kloosterman sum (as given in Lemma \ref{lmm:InverseCompletion}). Then we have that
\[
S(m,n;c)\ll \tau(c)c^{1/2}\gcd(m,n,c)^{1/2}.
\]
\end{lmm}
\begin{proof}
This is \cite[Lemma 12.3]{May1}.
\end{proof}
\begin{lmm}[Properties of $F$ sum]\label{lmm:FSum}
Define
\begin{align*}
F(h_1,h_2,h_3;a;q)&:=\sum_{\substack{b_1,b_2,b_3\in(\mathbb{Z}/q\mathbb{Z})^\times \\ b_1b_2b_3=a}}e\Bigl(\frac{h_1b_1+h_2b_2+h_3b_3}{q}\Bigr),\\
\Kl_3(a;q)&:=\frac{1}{q}\sum_{\substack{b_1,b_2,b_3\in \mathbb{Z}/q\mathbb{Z}\\ b_1b_2b_3=a}}e\Bigl(\frac{b_1+b_2+b_3}{q}\Bigr).
\end{align*}
Then we have the following:
\begin{enumerate}
\item If $(q_1,q_2)=1$ then
\[
F(h_1,h_2,h_3;a;q_1q_2)=F(h_1,h_2,h_3;a \overline{q_1}^3;q_2)F(h_1,h_2,h_3;a\overline{q_2}^3;q_1).
\]
\item If $(b,q)=1$ then
\[
F(h_1,h_2,h_3;a;q)=F(b h_1,b h_2,b h_3;a \overline{b}^3;q).
\]
\item If $(a,q)\ne 1$ then
\[
F(h_1,h_2,h_3;a;q)=0.
\]
\item If $(a,q)=1$ and $\gcd(h_1,h_2,h_3,q)=d$ then
\[
F(h_1,h_2,h_3; a ;q)=\frac{\phi(q)^2}{\phi(q/d)^2} F\Bigl(\frac{h_1}{d},\frac{h_2}{d},\frac{h_3}{d};a;\frac{q}{d}\Bigr).
\]
\item If $(a,q)=1$ and $\gcd(h_1h_2h_3,q)=1$ then
\[
F(h_1,h_2,h_3;a;q)=q\Kl_3(ah_1h_2h_3;q).
\]
\item If $(a,q)=1$ and $\gcd(h_1h_2h_3,q)\ne 1$ and $\gcd(h_1,h_2,h_3,q)=1$ and $\mu^2(q)=0$ then
\[
F(h_1,h_2,h_3;a;q)=0.
\]
\item If $(a,q)=1$ and $q|h_1h_2h_3$ and $\gcd(h_1,h_2,h_3,q)=1$ and $\mu^2(q)=1$ then $F(h_1,h_2,h_3;a;q)$ depends only on $(h_1,q)$, $(h_2,q)$, $(h_3,q)$ and $q$, and satisfies
\[
|F(h_1,h_2,h_3;a;q)|\ll \frac{(h_1,q)(h_2,q)(h_3,q)}{q}.
\]
\end{enumerate}
\end{lmm}
\begin{proof}
This is \cite[Lemma 19.3]{May1}.
\end{proof}
\begin{lmm}[Minkowski-reduced basis]\label{lmm:Basis}
Let $\Lambda\subseteq\mathbb{R}^k$ be a lattice and $\|\cdot\|$ the Euclidean norm on $\mathbb{R}^k$. Then there is a set $\{\mathbf{v}_1,\dots,\mathbf{v}_r\}$ of linearly independent vectors in $\mathbb{R}^k$ such that
\begin{enumerate}
\item $\{\mathbf{v}_1,\dots,\mathbf{v}_r\}$ is a basis:
\[
\Lambda=\mathbf{v}_1\mathbb{Z}+\dots+\mathbf{v}_r\mathbb{Z}.
\]
\item The $\mathbf{v}_i$ are quasi-orthogonal: For any $x_1,\dots,x_r\in\mathbb{R}$ we have
\[
\|x_1\mathbf{v}_1+\dots+x_r\mathbf{v}_r\|\asymp \sum_{i=1}^r\|x_i\mathbf{v}_i\|.
\]
\item The sizes of the $\mathbf{v}_i$ are controlled by successive minima: If $\lambda_1\le \lambda_2\le \dots\le \lambda_r$ are the successive minima of $\Lambda$, then $\|\mathbf{v}_i\|\asymp \lambda_i$ for all $i$. In particular,
\[
\|\mathbf{v}_1\|\cdots\|\mathbf{v}_r\|\asymp \det(\Lambda).
\]
\end{enumerate}
The implied constants above depend only on the ambient dimension $k$. Here $\det(\Lambda)$ is the $r$-dimensional volume of the fundamental parallelepiped, given by
\[
\Bigl\{\sum_{i=1}^r x_i\mathbf{v}_i:\,x_1,\dots,x_r\in[0,1]\Bigr\},
\]
and the $j^{th}$ successive minimum is the smallest quantity $\lambda_j$ such that $\Lambda$ contains $j$ linearly independent vectors of norm at most $\lambda_j$.
\end{lmm}
\begin{proof}
This is \cite[Lemma 4.1]{Polys}.
\end{proof}
\begin{lmm}[Barban-Davenport-Halberstam type estimate]\label{lmm:Barban}
Let $B>0$ and let $\alpha_n$ be a complex sequence which satisfies the Siegel-Walfisz condition \eqref{eq:SiegelWalfisz} and satisfies $|\alpha_n|\le \tau(n)^{B}$. Then for any $A>0$ there is a constant $C=C(A,B)$ such that if $Q<N/(\log{N})^{C}$ we have
\[
\sum_{q\le Q}\tau(q)^{B}\sum_{\substack{b\Mod{q}\\ (b,q)=1}}\Bigl|\sum_{n\sim N}\alpha_n \Bigl(\mathbf{1}_{n\equiv b\Mod{q}}-\frac{\mathbf{1}_{(n,q)=1}}{\phi(q)}\Bigr)\Bigr|^2\ll_{A,B} \frac{N^2}{(\log{N})^A}.
\]
\end{lmm}
\begin{proof}
This is \cite[Lemma 12.7]{May1}. Note that the implied constants in Lemma \ref{lmm:Barban} depend on the implied constants in \eqref{eq:SiegelWalfisz}.
\end{proof}
\begin{lmm}[Splitting into coprime sets]\label{lmm:FouvryDecomposition}
Let $\mathcal{N}\subseteq \mathbb{Z}_{>0}^2$ be a set of pairs $(a,b)$ satisfying:
\begin{enumerate}
\item $a,b\le x^{O(1)}$,
\item $\gcd(a,b)=1$,
\item The number of prime factors of $a$ and of $b$ is $\ll (\log\log{x})^3$.
\end{enumerate}
Then there is a partition $\mathcal{N}=\mathcal{N}_1\sqcup\mathcal{N}_2\sqcup\dots \sqcup\mathcal{N}_J$ into $J$ disjoint subsets with
\[
J\ll \exp\Bigl(O\bigl((\log\log{x})^4\bigr)\Bigr),
\]
and such that if $(a,b)$ and $(a',b')$ both lie in the same subset $\mathcal{N}_j$, then $\gcd(a,b')=\gcd(a',b)=1$.
\end{lmm}
\begin{proof}
This is \cite[Lemma 12.2]{May1}.
\end{proof}
\begin{lmm}[Divisor function bounds]\label{lmm:Divisor}
Let $|b|< x-y$ and $y\ge q x^\epsilon$. Then we have
\[
\sum_{\substack{x-y\le n\le x\\ n\equiv a\Mod{q}}}\tau(n)^C\tau(n-b)^C\ll \frac{y}{q} (\tau(q)\log{x})^{O_{C}(1)}.
\]
\end{lmm}
\begin{proof}
This is \cite[Lemma 7.7]{May1}.
\end{proof}
\begin{lmm}[Most moduli have small square-full part]\label{lmm:Squarefree}
Let $Q<x^{1-\epsilon}$ and $A,B>0$. Let $\gamma_b$ be a complex sequence satisfying $|\gamma_b|\le \tau(b)^{B}$. Let $sq(n)$ denote the square-full part of $n$ (i.e.\ $sq(n)=\prod_{p:p^2|n}p^{\nu_p(n)}$). Then for $C=C(A,B)$ sufficiently large in terms of $A,B$ we have that
\[
\sum_{\substack{q\sim Q\\ sq(q)\ge (\log{x})^C}}\sup_{\substack{a,a'\\ (a a',q)=1}}\Bigl|\sum_{b\le x}\gamma_b\Bigl(\mathbf{1}_{b\equiv a\Mod{q}}-\mathbf{1}_{b\equiv a'\Mod{q}}\Bigr)\Bigr|\ll_{A,B} \frac{x}{(\log{x})^A}.
\]
\end{lmm}
\begin{proof}
This is a slight reformulation of \cite[Lemma 12.8]{May1}, noting that the argument there is actually uniform in the residue class.
\end{proof}
\begin{lmm}[Most moduli have small smooth part]\label{lmm:Smooth}
Let $Q<x^{1-\epsilon}$ and $A,B>0$. Let $\gamma_b$ be a complex sequence with $|\gamma_b|\le \tau(b)^{B}$ and set $z_0:=x^{1/(\log\log{x})^3}$ and $y_0:=x^{1/\log\log{x}}$. Let $sm(n;z)$ denote the $z$-smooth part of $n$ (i.e.\ $sm(n;z)=\prod_{p\le z}p^{\nu_p(n)}$). Then we have that
\[
\sum_{\substack{q\sim Q\\ sm(q;z_0)\ge y_0}}\sup_{\substack{a,a'\\ (aa',q)=1}}\Bigl|\sum_{b\le x}\gamma_b\Bigl(\mathbf{1}_{b\equiv a\Mod{q}}-\mathbf{1}_{b\equiv a'\Mod{q}}\Bigr)\Bigr|\ll_{A,B} \frac{x}{(\log{x})^A}.
\]
\end{lmm}
\begin{proof}
This is a slight reformulation of \cite[Lemma 12.9]{May1}, noting that the argument there is actually uniform in the residue class.
\end{proof}
With these lemmas established, we now prove Propositions \ref{prpstn:MainProp}-\ref{prpstn:Triple} in turn.
\section{Main Type II estimate}\label{sec:MainProp}
In this section we establish Proposition \ref{prpstn:MainProp}, which is the main technical result in this paper.
As remarked in Section \ref{sec:Outline}, our aim is to combine a number of applications of Cauchy-Schwarz to smooth the unknown coefficients and allow for an effective use of completion of sums. By carefully handling suitable side cases to maintain control of the intermediate stages, we eventually arrive at a 4-variable summation which can be handled adequately using the Weil bound for Kloosterman sums. Despite this being a multi-dimensional sum, we do not use the more advanced theory due to Deligne, since the final sums factor into Kloosterman sums.
For Theorem \ref{thrm:WeakEquidistribution} it is vital that we only allow losses of size $x^{O(\delta)}(\log{x})^{O(1)}$ in the bounds on $R,N$ (and that the estimates are uniform in $\delta$), which means some care is required when performing completion of sums. This is despite the fact that we have power-saving estimates in most of the ranges involved; at the endpoints we only just have a suitable log-power saving, and the full range is critical for Theorem \ref{thrm:WeakEquidistribution}.
The key to the proof of Proposition \ref{prpstn:MainProp} is to understand the following sum
\begin{align*}
\mathscr{S}&:=\sum_{q\sim Q}\sum_{r_0\sim R_0}\sum_{\substack{r_1',r_2'\sim R'\\(r_1',r_2')=1}}c_{q,r_0r_1'}\overline{c_{q,r_0r_2'}}\sum_{\substack{n_1,n_2\sim N\\ n_1\overline{a_{q,r_0r_1'}}\equiv n_2\overline{a'_{q,r_0r_2'}}\Mod{qr_0}\\ (n_1,q r_0 r_1')=(n_2,q r_0 r_2')=1\\ \tau(n_1),\tau(n_2)\le (\log{x})^{C_1} }}\alpha_{n_1}\overline{\alpha}_{n_2}\\
&\qquad\qquad \times\sum_{\substack{m\equiv a_{q,r_0r_1'}\overline{n_1}\Mod{qr_0r_1'}\\ m\equiv a'_{q,r_0r_2'}\overline{n_2}\Mod{r_2'}}}\psi\Bigl(\frac{m}{M}\Bigr),
\end{align*}
where $c_{q,r},\alpha_n$ are complex sequences supported on $(q,r)=1$ with $r$ square-free and $\tau(qr)\le (\log{x})^B$, and $a_{q,r},a'_{q,r}$ are integer sequences with $(a_{q,r},qr)=(a'_{q,r},qr)=1$. Here we recall that $\psi$ is a fixed function supported on $[1/2,5/2]$ satisfying $|\psi^{(j)}(x)|\ll (4^j j!)^2$ for all $j\ge 0$, $x\in\mathbb{R}$.
We will estimate this sum over Lemmas \ref{lmm:GCD}-\ref{lmm:OffDiag}, leading to Lemma \ref{lmm:MainConclusion}. We then conclude Proposition \ref{prpstn:MainProp} as a consequence of this estimate. We first remove the possibility that $r_0$ is large.
\begin{lmm}[Bound for terms with large GCD]\label{lmm:GCD}
Let $A>0$ and $C_1=C_1(A)$ be sufficiently large in terms of $A$. Let $QR=x^{1/2+\delta}\ge x^{1/2}(\log{x})^{-A}$, let $R'\asymp R/R_0$, and suppose that
\[
N>Q x^{2\delta}(\log{x})^{3C_1}>x^\epsilon,
\]
and let $a_{q,r}$ and $a'_{q,r}$ be integer sequences coprime to $qr$. Let $\mathscr{S}'$ be given by
\begin{align*}
\mathscr{S}'&:=\sum_{q\sim Q}\sum_{r_0\sim R_0}\sum_{\substack{r_1',r_2'\sim R'\\ (r_1',r_2')=1\\ (r_1'r_2',r_0q)=1}}\sum_{\substack{n_1,n_2\sim N\\ n_1\overline{a_{q,r_0r_1'}}\equiv n_2\overline{a'_{q,r_0r_2'}}\Mod{qr_0}\\ (n_1,q r_0 r_1')=(n_2,q r_0 r_2')=1}}\sum_{\substack{m\sim M\\m\equiv a_{q,r_0r_1'}\overline{n_1}\Mod{qr_0r_1'}\\ m\equiv a'_{q,r_0r_2'}\overline{n_2}\Mod{r_2'}}}1.
\end{align*}
Then if $R_0\ge N/( Q(\log{x})^{C_1})$ we have
\[
\mathscr{S}'\ll_A \frac{MN^2}{(\log{x})^{A} Q}.
\]
\end{lmm}
Thus provided $N$ is a bit larger than $Q$, we only need to consider $R_0<N/((\log{x})^{C_1} Q)$.
\begin{proof}
Let $r_0,r_1',r_2',q$ be given. We see that the congruence $n_1\overline{a_{q,r_0r_1'}}\equiv n_2\overline{a'_{q,r_0r_2'}}\Mod{qr_0}$ on $n_1,n_2$ forces the ordered pair $(n_1,n_2)$ to lie in a lattice $\Lambda\subseteq \mathbb{Z}^2$ of determinant $qr_0$. Let $\{\mathbf{z}_1,\mathbf{z}_2\}$ be a Minkowski-reduced basis for this lattice, as given by Lemma \ref{lmm:Basis}. Then there are constants $L_1,L_2$ (depending only on $r_0,r_1',r_2',q$) such that any pair $n_1,n_2\sim N$ with $n_1\overline{a_{q,r_0r_1'}}\equiv n_2\overline{a'_{q,r_0r_2'}}\Mod{qr_0}$ is given by
\[
\begin{pmatrix}
n_1\\
n_2
\end{pmatrix}=\lambda_1\mathbf{z}_1+\lambda_2\mathbf{z}_2,
\]
for some integers $\lambda_1,\lambda_2$ with $|\lambda_1|\le L_1$ and $|\lambda_2|\le L_2$. Moreover, $L_1,L_2$ satisfy $L_1L_2\asymp N^2/\det(\Lambda)\ll N^2/(QR_0)$ and $L_1,L_2\le N$. Without loss of generality let $L_1\le L_2$.
We first consider the contribution to $\mathscr{S}'$ from all terms with $\lambda_1>0$, so we must have $L_1\ge 1$. In this case there are $O(L_1L_2)\ll N^2/(QR_0)$ choices of $\lambda_1,\lambda_2$. Given a choice of $\lambda_1$ and $\lambda_2$, the congruences $m\equiv a_{q,r_0r_1'}\overline{n_1}\Mod{qr_0r_1'}$ and $m\equiv a'_{q,r_0r_2'}\overline{n_2}\Mod{r_2'}$ force $m$ to lie in a single residue class $\Mod{qr_0r_1'r_2'}$. Thus there are $O(1+M/(QR_0 R'{}^2))$ solutions $m$ for each choice of $n_1,n_2,q,r_0,r_1',r_2'$. Hence the total contribution from all of these terms is
\begin{align}
\ll \sum_{q\sim Q}\sum_{r_0\sim R_0}\sum_{r_1',r_2'\sim R'}\sum_{\substack{\lambda_1\ll L_1\\ \lambda_2\ll L_2\\ L_1>1}}\Bigl(1+\frac{M}{QR_0 R'{}^2}\Bigr)&\ll QR_0 R'{}^2L_1L_2\Bigl(\frac{M}{QR_0 R'{}^2}+1\Bigr)\nonumber\\
&\ll \frac{MN^2}{Q R_0}+N^2 R'{}^2.\label{eq:S'Bound1}
\end{align}
We now consider the contribution to $\mathscr{S}'$ from those terms with $\lambda_1=0$, so $n_1=\lambda_2z_{21}$ and $n_2=\lambda_2z_{22}$ for some integers $z_{21},z_{22}$ depending only on $q,r_0,r_1',r_2'$. Thus the congruences $m\equiv a_{q,r_0r_1'}\overline{n_1}\Mod{qr_0r_1'}$ and $m\equiv a'_{q,r_0r_2'}\overline{n_2}\Mod{r_2'}$ simplify to fix $\lambda_2m$ to lie in a single residue class $\Mod{qr_0r_1'r_2'}$. Thus, for a given choice of $q,r_0,r_1',r_2'$, the number of choices of $\lambda_2,m$ is
\[
\ll \sup_{c\Mod{q r_0r_1'r_2'}}\sum_{\substack{k\ll L_2M\\ k\equiv c\Mod{q r_0r_1'r_2'} }}\tau(k)\ll \frac{M L_2}{Q R_0 R'{}^2}(\log{x})^{O(1)}+x^{o(1)}.
\]
Thus the total contribution from all of these terms is
\begin{align}
\ll \sum_{q\sim Q}\sum_{r_0\sim R_0}\sum_{r_1',r_2'\sim R'}\Bigl(\frac{ML_2(\log{x})^{O(1)}}{QR_0R'{}^2}+x^{o(1)}\Bigr)&\ll Q R_0 R'{}^2 \Bigl(\frac{ML_2}{Q R_0 R'{}^2}(\log{x})^{O(1)}+x^{o(1)}\Bigr)\nonumber\\
&\ll (\log{x})^{O(1)}MN+x^{o(1)}Q R_0 R'{}^2.\label{eq:S'Bound2}
\end{align}
Putting together \eqref{eq:S'Bound1} and \eqref{eq:S'Bound2}, we obtain
\begin{equation}
\mathscr{S}'\ll \frac{MN^2}{Q R_0}+N^2R'{}^2+(\log{x})^{O(1)}MN+x^{o(1)} Q R_0 R'{}^2.
\label{eq:S'Bound}
\end{equation}
We recall that we wish to show $\mathscr{S}'\ll_A MN^2/((\log{x})^{A} Q)$. Recalling that $R'\asymp R/R_0$, we see that \eqref{eq:S'Bound} gives this provided
\begin{align}
R_0&>(\log{x})^A,\label{eq:R0Bound1}\\
R_0&>\frac{(\log{x})^{A/2} Q^{1/2}R}{M^{1/2}}\asymp (\log{x})^{A/2}\Bigl(\frac{QR}{x^{1/2}}\Bigr) \Bigl(\frac{N}{Q}\Bigr)^{1/2},\label{eq:R0Bound2}\\
N&>(\log{x})^{A+O(1)} Q,\label{eq:NBound}\\
R_0&>\frac{x^{\epsilon} Q^2R^2}{M N^2 }.\label{eq:R0Bound3}
\end{align}
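We remark that the second expression in \eqref{eq:R0Bound2} is simply a rewriting of the first: since $MN\asymp x$ we have $M^{1/2}\asymp x^{1/2}/N^{1/2}$, and so
\[
\frac{(\log{x})^{A/2}Q^{1/2}R}{M^{1/2}}\asymp (\log{x})^{A/2}\frac{Q^{1/2}R N^{1/2}}{x^{1/2}}=(\log{x})^{A/2}\Bigl(\frac{QR}{x^{1/2}}\Bigr)\Bigl(\frac{N}{Q}\Bigr)^{1/2}.
\]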
We recall that $MN\asymp x$ and $QR \asymp x^{1/2+\delta}$. In particular, if we have
\begin{equation}
N>Q x^{2\delta}(\log{x})^{3C_1}>x^\epsilon,\label{eq:NBound1}
\end{equation}
for $C_1=C_1(A)$ sufficiently large then \eqref{eq:NBound} is clearly satisfied and all of \eqref{eq:R0Bound1}, \eqref{eq:R0Bound2} and \eqref{eq:R0Bound3} are satisfied provided $R_0\ge N/((\log{x})^{C_1} Q)$. This gives the result.
\end{proof}
\begin{lmm}[Fourier Expansion]\label{lmm:Fourier}
Let $A,B>0$ and let $C_1=C_1(A,B)$ and $B_2=B_2(A,B)$ be sufficiently large in terms of $A,B$. Let $R_0<N/((\log{x})^{C_1} Q)$, $R'\ll R/R_0$ and $QR=x^{1/2+\delta}$. Let $|c_{q,r}|\le 1$ and $|\alpha_n|\le \tau(n)^B$ be complex sequences with $\alpha_n$ satisfying the Siegel-Walfisz condition \eqref{eq:SiegelWalfisz} and $c_{q,r}$ supported on square-free $r$ with $(r,q)=1$ and $\tau(q r)\le (\log{x})^B$. Let $a_{q,r},a'_{q,r}$ be integer sequences with $(a_{q,r},qr)=(a'_{q,r},qr)=1$. Set
\begin{align*}
\mathscr{S}&:=\sum_{q\sim Q}\sum_{r_0\sim R_0}\sum_{\substack{r_1',r_2'\sim R'\\(r_1',r_2')=1}}c_{q,r_0r_1'}\overline{c_{q,r_0r_2'}}\sum_{\substack{n_1,n_2\sim N\\ n_1\overline{a_{q,r_0r_1'}}\equiv n_2\overline{a'_{q,r_0r_2'}}\Mod{qr_0}\\ (n_1,qr_0r_1')=(n_2,qr_0r_2')=1\\ \tau(n_1),\tau(n_2)\le (\log{x})^{B_2} }}\alpha_{n_1}\overline{\alpha}_{n_2}\\
&\qquad\qquad \times\sum_{\substack{m\equiv a_{q,r_0r_1'}\overline{n_1}\Mod{qr_0r_1'}\\ m\equiv a'_{q,r_0r_2'}\overline{n_2}\Mod{r_2'}}}\psi\Bigl(\frac{m}{M}\Bigr),\\
\mathscr{S}_{MT}&:=\sum_{q\sim Q}\sum_{r_0\sim R_0}\sum_{\substack{r_1',r_2'\sim R'\\ (r_1',r_2')=1}}c_{q,r_0r_1'}\overline{c_{q,r_0r_2'}}\sum_{\substack{n_1,n_2\sim N\\ (n_1,qr_0r_1')=(n_2,qr_0r_2')=1\\ \tau(n_1),\tau(n_2)\le (\log{x})^{B_2 }}}\alpha_{n_1}\overline{\alpha}_{n_2}\frac{M\hat{\psi}(0)}{q r_0r_1'r_2'\phi(q r_0)}.
\end{align*}
Then we have
\[
\mathscr{S}=\mathscr{S}_{MT}+\frac{M(\log{x})^{2B B_2} }{Q R_0 R'{}^2}\mathscr{S}_2+O_A\Bigl(\frac{M N^2}{Q (\log{x})^{2A} }\Bigr),
\]
where $H:=Q N R_0 R'{}^2(\log{x})^5/ x$,
\begin{align*}
\mathscr{S}_2&:=\sum_{q\sim Q}\sum_{r_0\sim R_0}\sum_{\substack{r_1,r_2\sim R'\\ (r_1,r_2)=1}}c'_{q,r_0,r_1}\overline{c'_{q,r_0,r_2}}\hspace{-1cm}\sum_{\substack{n_1,n_2\sim N\\ n_1\overline{a_{q,r_0r_1}}=n_2\overline{a'_{q,r_0r_2}}\Mod{q r_0}\\ (n_1,qr_0r_1)=(n_2,qr_0r_2)=1}}\hspace{-1cm}\alpha'_{n_1}\overline{\alpha'_{n_2}}\sum_{1\le |h|\le H}\hat{\psi}\Bigl(\frac{h M}{q r_0r_1r_2}\Bigr)\xi,\\
\xi&:=e\Bigl(\frac{a_{q,r_0r_1}h\overline{n_1 r_1r_2}}{q r_0}\Bigr)e\Bigl(\frac{a_{q,r_0r_1}h\overline{n_1 q r_0 r_2}}{r_1}\Bigr)e\Bigl(\frac{a'_{q,r_0r_2}h\overline{n_2 q r_0 r_1}}{r_2}\Bigr),
\end{align*}
for some $1$-bounded coefficients $\alpha'_n$ and $c'_{q,r_0,r}$ satisfying $|c'_{q,r_0,r}|\le |c_{q,r_0r}|$.
\end{lmm}
A key point is that $\mathscr{S}_{MT}$ is independent of the choice of residue classes $a_{q,r}$, and so to show that $\mathscr{S}$ is approximately independent of the residue classes it suffices to show that $\mathscr{S}_2$ is small.
\begin{proof}
To simplify notation we suppress some of the dependencies on $q,r_0$ by letting $b_{r}:=a_{q,r_0r}$, $b'_r:=a'_{q,r_0r}$, $c'_r:=c_{q,r_0r}R'Q^{1/2}R_0^{1/2}/(rr_0^{1/2}q^{1/2})$ and $q_0:=qr_0$. (We note that for $r\sim R'$ we have $|c'_r|\le 1$.) Similarly, let $\alpha''_n:=\alpha_n\mathbf{1}_{\tau(n)\le (\log{x})^{B_2}}$. By Lemma \ref{lmm:Completion}, for $H:=QR_0 R'{}^2N(\log{x})^5/x$ we have that (noting that $(q r_0 r_1,r_2)=1$)
\begin{equation}
\sum_{\substack{m\equiv b_{r_1}\overline{n_1}\Mod{q_0r_1}\\ m\equiv b'_{r_2}\overline{n_2}\Mod{r_2}}}\psi\Bigl(\frac{m}{M}\Bigr)=\frac{M}{q_0r_1r_2}\sum_{|h|\le H}\hat{\psi}\Bigl(\frac{Mh}{q_0r_1r_2}\Bigr)\xi+O(x^{-100}),
\label{eq:FourierExpansion}
\end{equation}
where, as in the statement of the lemma, we put
\[
\xi:=e\Bigl(\frac{b_{r_1}h\overline{n_1 r_1r_2}}{q_0}\Bigr)e\Bigl(\frac{b_{r_1}h\overline{n_1 q_0 r_2}}{r_1}\Bigr)e\Bigl(\frac{b'_{r_2}h\overline{n_2 q_0 r_1}}{r_2}\Bigr).
\]
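The shape of $\xi$ comes from the standard reciprocity for additive characters to coprime moduli: if $(u,v)=1$ then $u\overline{u}+v\overline{v}\equiv 1\Mod{uv}$ (where $u\overline{u}\equiv 1\Mod{v}$ and $v\overline{v}\equiv 1\Mod{u}$), so for any integer $a$
\[
e\Bigl(\frac{a}{uv}\Bigr)=e\Bigl(\frac{a\overline{v}}{u}\Bigr)e\Bigl(\frac{a\overline{u}}{v}\Bigr).
\]
Applying this twice to the pairwise coprime moduli $q_0$, $r_1$ and $r_2$ splits $e(a/(q_0r_1r_2))$ into the three factors appearing in $\xi$.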
We substitute \eqref{eq:FourierExpansion} into our expression for $\mathscr{S}$, giving
\begin{align*}
\mathscr{S}&=\frac{M}{QR_0 R'{}^2}\sum_{q\sim Q}\sum_{r_0\sim R_0}\sum_{\substack{r_1,r_2\sim R'\\ (r_1,r_2)=1}}c'_{r_1}\overline{c'_{r_2}}\sum_{\substack{n_1,n_2\sim N\\ n_1\overline{b_{r_1}}\equiv n_2\overline{b'_{r_2}}\Mod{q_0}\\ (n_1,q_0r_1)=(n_2,q_0r_2)=1}}\alpha''_{n_1}\overline{\alpha''_{n_2}}\sum_{|h|\le H}\hat{\psi}\Bigl(\frac{M h}{q_0r_1r_2}\Bigr)\xi\\
&\qquad+O(x^{-10}).
\end{align*}
We separate out the $h=0$ term, which contributes to $\mathscr{S}$ a total of
\begin{align*}
\frac{M\hat{\psi}(0)}{Q R_0 R'{}^2}&\sum_{q\sim Q}\sum_{r_0\sim R_0}\sum_{\substack{r_1,r_2\sim R'\\ (r_1,r_2)=1}}c'_{r_1}\overline{c'_{r_2}}\sum_{\substack{n_1,n_2\sim N\\ n_1\overline{b_{r_1}}=n_2\overline{b'_{r_2}}\Mod{q_0}\\ (n_1,q_0r_1)=(n_2,q_0r_2)=1}}\alpha''_{n_1}\overline{\alpha''_{n_2}}\\
&=\frac{M\hat{\psi}(0)}{QR_0 R'{}^2}\sum_{q\sim Q}\sum_{r_0\sim R_0}\sum_{\substack{r_1,r_2\sim R'\\ (r_1,r_2)=1}}c'_{r_1}\overline{c'_{r_2}}\sum_{\substack{n_1,n_2\sim N\\ (n_1,q_0r_1)=(n_2,q_0r_2)=1}}\alpha''_{n_1}\overline{\alpha''_{n_2}}\frac{1}{\phi(q_0)}\\
&\qquad+\frac{M \hat{\psi}(0)}{Q R_0 R'{}^2}\sum_{q\sim Q}\sum_{r_0\sim R_0}\sum_{\substack{r_1,r_2\sim R'\\ (r_1,r_2)=1}}c'_{r_1}\overline{c'_{r_2}}\sum_{\substack{n_1\sim N\\ (n_1,q_0r_1)=1}}\alpha''_{n_1}\\
&\qquad\qquad \times\sum_{\substack{n_2\sim N\\ (n_2,q_0r_2)=1}}\overline{\alpha''_{n_2}}\Bigl(\mathbf{1}_{n_2\equiv b'_{r_2}n_1\overline{b_{r_1}}\Mod{q_0}}-\frac{1}{\phi(q_0)}\Bigr).
\end{align*}
Recalling that $c'_r=c_{q,r_0r}R' Q^{1/2}R_0^{1/2}/(r r_0^{1/2} q^{1/2})$ we see that the first term above is the expression $\mathscr{S}_{MT}$ given by the lemma. By Cauchy-Schwarz, we have
\begin{align*}
&\frac{M\hat{\psi}(0)}{Q R_0 R'{}^2}\sum_{q\sim Q}\sum_{r_0\sim R_0}\sum_{\substack{r_1,r_2\sim R'\\ (r_1,r_2)=1}}c'_{r_1}\overline{c'_{r_2}}\sum_{\substack{n_1\sim N\\ (n_1,q_0r_1)=1}}\alpha''_{n_1}\\
&\qquad \times \sum_{\substack{n_2\sim N\\ (n_2,q_0r_2)=1}}\overline{\alpha''_{n_2}}\Bigl(\mathbf{1}_{n_2\equiv b'_{r_2}n_1\overline{b_{r_1}}\Mod{q_0}}-\frac{1}{\phi(q_0)}\Bigr)\\
&\ll \frac{M N (\log{x})^{O_B(1)}}{Q R_0}\Bigl(\sup_{r_2\sim R'}\sum_{q_0\sim QR_0}\sum_{\substack{b\Mod{q_0}\\ (b,q_0)=1}}\Bigl|\hspace{-0.2cm}\sum_{\substack{n\sim N\\ (n,r_2 q_0)=1 \\ \tau(n)\le (\log{x})^{B_2} }}\hspace{-0.2cm}\alpha_n\Bigl(\mathbf{1}_{n\equiv b\Mod{q_0}}-\frac{1}{\phi(q_0)}\Bigr)\Bigr|^2\Bigr)^{1/2}.
\end{align*}
Here we used the fact that for any choice of $b\Mod{q_0}$ and $r_1,r_2$ there are $O(N/(QR_0))$ choices of $n_1$ such that $b'_{r_2}n_1\overline{b_{r_1}}\equiv b\Mod{q_0}$ since $R_0<N/Q$, and recalled that $|\alpha''_n|\le \tau(n)^B$. Finally we substituted $\alpha''_n=\alpha_n\mathbf{1}_{\tau(n)\le (\log{x})^{B_2}}$.
By Lemma \ref{lmm:Divisor}, we may remove the condition $\tau(n)\le (\log{x})^{B_2}$ at the cost of an error term of size
\begin{align*}
&\frac{MN (\log{x})^{O_B(1)}}{Q R_0}\Bigl(\sum_{q\sim Q}\sum_{r_0\sim R_0}\sum_{\substack{b\Mod{q_0}\\ (b,q_0)=1}}\Bigl|\sum_{\substack{n\sim N\\ n\equiv b\Mod{q_0}}}\frac{\tau(n)^{B+1}}{(\log{x})^{B_2}}\Bigr|^2\Bigr)^{1/2}\\
&\ll \frac{M N^2 (\log{x})^{O_B(1)-B_2}}{Q R_0}.
\end{align*}
This is $O_A(MN^2/(Q(\log{x})^{2A}))$ provided $B_2$ is large enough in terms of $A$ and $B$.
Since $\alpha_n$ satisfies the Siegel-Walfisz condition, we see that $\mathbf{1}_{(n,r_2)=1}\alpha_n$ also does for any choice of $r_2$. Therefore, since $R_0<N/((\log{x})^{C_1} Q)$, by the Barban-Davenport-Halberstam Theorem (Lemma \ref{lmm:Barban}), for $C_1$ sufficiently large in terms of $A,B$ we have
\[
(\log{x})^{O_B(1)}\sum_{q_0\sim Q R_0}\sum_{\substack{b\Mod{q_0}\\ (b,q_0)=1}}\Bigl|\sum_{\substack{n\sim N\\ (n,r_2q_0)=1}}\alpha_n\Bigl(\mathbf{1}_{n\equiv b\Mod{q_0}}-\frac{1}{\phi(q_0)}\Bigr)\Bigr|^2\ll_A \frac{N^2}{(\log{x})^{2A}}.
\]
Thus the contribution to $\mathscr{S}$ from the terms with $h=0$ is $\mathscr{S}_{MT}+O_A(MN^2/(Q(\log{x})^{2A}))$. Letting $\alpha_n':=\alpha''_n/(\log{x})^{B B_2}$ (so that $|\alpha'_n|\le 1$) and letting $\mathscr{S}_2$ denote the terms with $h\ne 0$, we obtain the result.
\end{proof}
We note that $\mathscr{S}_2$ depends on $B$ since $c'_{q,r_0,r}$ is supported on $\tau(q r_0r)\le (\log{x})^B$, but not on $A,B_2$ or $C_1$. We wish to show that for every choice of $A_2>0$ we have
\[
\mathscr{S}_2\ll_{A_2,B} \frac{N^2 R'{}^2 R_0}{(\log{x})^{A_2}}.
\]
\begin{lmm}[Simplify moduli]\label{lmm:Simplify}
Let $\mathscr{S}_2$ be as in Lemma \ref{lmm:Fourier}. Then we have that
\[
\mathscr{S}_2\ll (\log{x})^4\sup_{\substack{D_1\le D_2}}\sum_{d_1\sim D_1}\sum_{d_2\sim D_2}\sum_{q\sim Q}\sum_{r_0\sim R_0}|\mathscr{S}_3|,
\]
where
\[
\mathscr{S}_3:=\sum_{\substack{r_1'\sim R_1'}}c_{r_1'}\sum_{\substack{r_2'\sim R_2'\\ (r_1',r_2')=1}}\overline{c'_{r_2'}}\sum_{\substack{n_1,n_2\sim N\\ n_1\overline{b_{r_1'}}=n_2\overline{b'_{r_2'}}\Mod{q r_0}\\ (n_1,q r_0 d_1 r_1')=(n_2,q r_0 d_2 r_2')=1}}\alpha'_{n_1}\overline{\alpha'_{n_2}}\sum_{\substack{h'\sim H''\\ (h',r_1'r_2')=1}}\xi,
\]
and where $R_1':=R'/ d_1$, $R_2':=R' / d_2$, $H''\le H/(d_1 d_2)$; the $c_{r}$ and $c_r'$ are 1-bounded sequences (depending on $q,r_0,d_1,d_2$) satisfying $|c_{r}|\le |c'_{q,r_0,d_1r}|$ and $|c'_{r}|\le |c'_{q,r_0,d_2r}|$, supported on $\tau(r)\le (\log{x})^B$; the $b_r$, $b_r'$ are integer sequences (depending on $q,r_0,d_1,d_2$) satisfying $(b_r,q r_0 r)=(b'_{r},q r_0 r)=1$; and
\[
\xi:=e\Bigl(\frac{b_{r_1'}h'\overline{n_1 r_1'r_2'}}{q r_0}\Bigr)e\Bigl(\frac{b_{r_1'}h'\overline{n_1 q r_0 r_2'}}{r_1'}\Bigr)e\Bigl(\frac{b'_{r_2'}h'\overline{n_2 q r_0 r_1'}}{r_2'}\Bigr).
\]
\end{lmm}
\begin{proof}
We first wish to separate the dependencies between the $h,r_1,r_2,q,r_0$ variables in the $\hat{\psi}(h M/(qr_0r_1r_2))$ factor, which we do by partial summation. Let $q_0:=qr_0$ as before. Since $\hat{\psi}^{(j)}(x)\ll_{j,k} |x|^{-k}$ for any $j,k\in\mathbb{Z}_{\ge 0}$, we have that
\[
\frac{\partial^{j_1+j_2+j_3+j_4}}{\partial h^{j_1}\partial r_1^{j_2}\partial r_2^{j_3}\partial q_0^{j_4} }\hat{\psi}\Bigl(\frac{h M}{q_0r_1r_2}\Bigr)\ll_{j_1,j_2,j_3,j_4} h^{-j_1}r_1^{-j_2}r_2^{-j_3}q_0^{-j_4}.
\]
Thus, by partial summation
\begin{align*}
\mathscr{S}_2&=\sum_{q\sim Q}\sum_{r_0\sim R_0}\sum_{\substack{r_1,r_2\sim R'\\ (r_1,r_2)=1}}c'_{q,r_0,r_1}\overline{c'_{q,r_0,r_2}}\hspace{-1 cm}\sum_{\substack{n_1,n_2\sim N\\ n_1\overline{a_{q,r_0r_1}}=n_2\overline{a'_{q,r_0r_2}}\Mod{q_0}\\ (n_1,q_0r_1)=(n_2,q_0r_2)=1}}\hspace{-1 cm}\alpha'_{n_1}\overline{\alpha'_{n_2}}\sum_{1\le |h|\le H}\hat{\psi}\Bigl(\frac{Mh}{q_0r_1r_2}\Bigr)\xi\\
&\ll \log{x} \sup_{t_1,t_2,t_3,t_4}\Bigl|\sum_{q\sim Q}\sum_{\substack{r_0\sim R_0\\ qr_0\le t_4}}\sum_{\substack{r_1,r_2\sim R'\\ (r_1,r_2)=1\\ r_1\le t_1\\ r_2\le t_2}}c'_{q,r_0,r_1}\overline{c'_{q,r_0,r_2}}\hspace{-1cm}\sum_{\substack{n_1,n_2\sim N\\ n_1\overline{a_{q,r_0r_1}}=n_2\overline{a'_{q,r_0r_2}}\Mod{q r_0}\\ (n_1,q r_0 r_1)=1\\ (n_2,q r_0 r_2)=1}}\hspace{-1cm}\alpha'_{n_1}\overline{\alpha'_{n_2}}\sum_{1\le |h|\le t_3}\xi\Bigr|.
\end{align*}
Let the supremum occur at $t_1',t_2',t_3',t_4'$. We let $c_{q,r_0,r_1}'':=\mathbf{1}_{r_1\le t_1',qr_0\le t_4'}c_{q,r_0,r_1}'$ and $c'''_{q,r_0,r_2}:=\mathbf{1}_{r_2\le t_2',qr_0\le t_4'}c_{q,r_0,r_2}'$ (noting that these are $1$-bounded). We split $h$ into dyadic ranges, and note that since the terms with $h>0$ are the complex conjugates of the terms with $h<0$, it suffices to just bound the terms with $h>0$. Thus we find for some $H'\le H$
\begin{align*}
\mathscr{S}_2&\ll (\log{x})^2\sum_{q\sim Q}\sum_{r_0\sim R_0}\Bigl|\sum_{\substack{r_1,r_2\sim R'\\ (r_1,r_2)=1}}c''_{q,r_0,r_1}c'''_{q,r_0,r_2}\hspace{-0.5cm}\sum_{\substack{n_1,n_2\sim N\\ n_1\overline{a_{q,r_0r_1}}=n_2\overline{a'_{q,r_0r_2}}\Mod{q r_0}\\ (n_1,q r_0 r_1)=(n_2,q r_0 r_2)=1}}\hspace{-0.5cm}\alpha'_{n_1}\overline{\alpha'_{n_2}}\sum_{h\sim H'} \xi\Bigr|.
\end{align*}
We now wish to remove potential common factors between $h$ and $r_1,r_2$. Let $d_1=(h,r_1)$ and $d_2=(h,r_2)$, and let $r_1=d_1r_1'$, $r_2=d_2r_2'$ and $h=d_1d_2h'$. By putting each of $d_1,d_2$ into dyadic ranges, we see that
\[
\mathscr{S}_2\ll (\log{x})^4\sup_{\substack{D_1,D_2}}\sum_{d_1\sim D_1}\sum_{d_2\sim D_2}\sum_{q\sim Q}\sum_{r_0\sim R_0}|\mathscr{S}_2'|,
\]
where $\mathscr{S}_2'=\mathscr{S}_2'(d_1,d_2,q,r_0)$ is given by
\[
\mathscr{S}_2':=\sum_{\substack{r_1'\sim R_1'}}c_{r_1'}\sum_{\substack{r_2'\sim R_2'\\ (r_1',r_2')=1}}\overline{c'_{r_2'}}\sum_{\substack{n_1,n_2\sim N\\ n_1\overline{b_{r_1'}}=n_2\overline{b'_{r_2'}}\Mod{q r_0}\\ (n_1,q r_0d_1r_1')=(n_2,q r_0d_2r_2')=1}}\alpha'_{n_1}\overline{\alpha'_{n_2}}\sum_{\substack{h'\sim H''\\ (h',r_1'r_2')=1}}\xi,
\]
where $c_{r_1'}:=c''_{q,r_0,d_1 r_1'}$, $c'_{r_2}:=c'''_{q,r_0,d_2 r_2'}$, $b_{r_1'}:=a_{q,r_0d_1 r_1'}$, $b'_{r_2'}:=a'_{q,r_0d_2 r_2'}$ and
\begin{align*}
R_1'&:=\frac{R'}{d_1},\quad R_2':=\frac{R'}{d_2},\quad H'':=\frac{H'}{d_1 d_2},
\end{align*}
and where
\begin{align*}
\xi&=e\Bigl(\frac{a_{q,r_0r_1}h\overline{n_1 r_1r_2}}{q r_0}\Bigr)e\Bigl(\frac{a_{q,r_0r_1}h\overline{n_1 q r_0 r_2}}{r_1}\Bigr)e\Bigl(\frac{a'_{q,r_0r_2}h\overline{n_2 q r_0 r_1}}{r_2}\Bigr)\\
&=e\Bigl(\frac{b_{r_1'}h'\overline{n_1 r_1'r_2'}}{q r_0}\Bigr)e\Bigl(\frac{b_{r_1'}h'\overline{n_1 q r_0 r_2'}}{r_1'}\Bigr)e\Bigl(\frac{b'_{r_2'}h'\overline{n_2 q r_0 r_1'}}{r_2'}\Bigr).
\end{align*}
By symmetry we may assume without loss of generality that $D_1\le D_2$, so $R_1'\gg R_2'$. This gives the result.
\end{proof}
Recalling that we wish to show that $\mathscr{S}_2\ll N^2 R_0 R'{}^2/(\log{x})^{A_2}$, we see that we wish to show for every choice of $A_3>0$
\[
\mathscr{S}_3\ll_{A_3,B} \frac{N^2 R'_1 R_2'}{Q(\log{x})^{A_3} }.
\]
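We note that this bound suffices: summing it over $d_1\sim D_1$, $d_2\sim D_2$, $q\sim Q$ and $r_0\sim R_0$, and recalling that $R_1'R_2'=R'{}^2/(d_1d_2)\asymp R'{}^2/(D_1D_2)$, Lemma \ref{lmm:Simplify} then gives
\[
\mathscr{S}_2\ll (\log{x})^4\, D_1D_2\,QR_0\cdot \frac{N^2R'{}^2}{D_1D_2\,Q(\log{x})^{A_3}}\ll \frac{N^2R_0R'{}^2}{(\log{x})^{A_3-4}},
\]
which is acceptable on taking $A_3:=A_2+4$.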
Our first step is to apply Cauchy-Schwarz to eliminate the $\alpha_{n_1}$ coefficients. We cannot simultaneously eliminate the $\alpha_{n_2}$ coefficients: the modulus would increase too much if we did not keep the $q$ variable on the outside of the summation, but the diagonal terms would contribute too much if the inner sum involved only a subset of the $r_1,r_2,h$ variables.
\begin{lmm}[First Cauchy]\label{lmm:Cauchy}
Let $\mathscr{S}_3$ be as in Lemma \ref{lmm:Simplify}. Then we have
\[
\mathscr{S}_3^2\ll (\log{x})^2 N R_1'\sup_{\substack{E_1,R_1''\\ E_1 R_1''\asymp R_1'}} |\mathscr{S}_4|,
\]
where
\begin{align*}
\mathscr{S}_4&:=\sum_{e_1\sim E_1}\sum_{\substack{r_1'\sim R_1''}}\eta_{e_1,r_1'}\sum_{\substack{r_2,r_3\sim R_2'\\ (e_1 r_1',r_2r_3)=1}}\overline{c'_{r_2}}c'_{r_3}\sum_{\substack{h_2,h_3\sim H''\\ h_2r_3\equiv h_3r_2\Mod{e_1}\\ (h_2,e_1 r_1' r_2)=1\\ (h_3,e_1 r_1' r_3)=1\\ (h_2r_3-h_3r_2,r_1')=1 }}\sum_{(n_1,q_0e_1 r_1')=1}\psi\Bigl(\frac{n_1}{N}\Bigr)\xi_0\xi_1\\
&\qquad\times \Biggl(\sum_{\substack{n_2\sim N\\ n_2\overline{b'_{r_2}}=n_1\overline{b_{e_1 r_1'}}\Mod{q_0}\\ (n_2,q_0d_2r_2)=1}}\overline{\alpha'_{n_2}}\xi_2\Biggr) \Biggl(\sum_{\substack{n_3\sim N\\ n_3\overline{b'_{r_3}}=n_1\overline{b_{e_1 r_1'}}\Mod{q_0}\\ (n_3,q_0d_2r_3)=1}}\alpha'_{n_3}\xi_3\Biggr),
\end{align*}
where $q_0:=q r_0$, and $|\eta_{e_1,r_1'}|\le |c_{e_1 r_1'}|$ is supported on square-free coprime $e_1,r_1'$ with $\tau(e_1 r_1')\le (\log{x})^B$ and $(e_1 r_1',q_0)=1$, and where
\begin{align*}
\xi_0&:=e\Bigl(\frac{b_{e_1 r_1'}(h_2\overline{r_2}-h_3\overline{r_3})\overline{n_1 e_1 r_1'}}{q_0}\Bigr),\qquad &
\xi_1&:=e\Bigl(\frac{b_{e_1 r_1'}(h_2\overline{r_2}-h_3\overline{r_3})\overline{n_1 e_1 q_0}}{r_1'}\Bigr),\\
\xi_2&:=e\Bigl(\frac{b'_{r_2}h_2\overline{n_2 q_0 e_1 r_1'}}{r_2}\Bigr),\qquad &
\xi_3&:=e\Bigl(-\frac{b'_{r_3}h_3\overline{n_3 q_0 e_1 r_1'}}{r_3}\Bigr).
\end{align*}
\end{lmm}
\begin{proof}
To simplify notation we let $q_0:=q r_0$. We apply Cauchy-Schwarz in the $n_1$ and $r_1$ variables. This gives
\[
\mathscr{S}_3^2\ll NR_1' \mathscr{S}_3',
\]
where (dropping the condition $(n_1,d_1)=1$ for an upper bound)
\begin{align*}
\mathscr{S}_3'&:=\sum_{\substack{r_1\sim R_1' }}|c_{r_1}|\sum_{\substack{n_1\sim N\\ (n_1,r_1 q_0)=1}}\Bigl|\sum_{\substack{r_2\sim R_2'\\ (r_2,r_1)=1}}\overline{c'_{r_2}}\sum_{\substack{n_2\sim N\\ n_1\overline{b_{r_1}}=n_2\overline{b'_{r_2}}\Mod{q_0}\\ (n_2,q_0d_2r_2)=1}}\overline{\alpha'_{n_2}}\sum_{\substack{h\sim H''\\ (h,r_1r_2)=1}}\xi\Bigr|^2.
\end{align*}
We insert a smooth majorant for the $n_1$ summation, giving the upper bound
\begin{align*}
\mathscr{S}_3'&\le \sum_{\substack{r_1\sim R_1'}}|c_{r_1}|\sum_{(n_1,r_1 q_0)=1}\psi\Bigl(\frac{n_1}{N}\Bigr)\Bigl|\sum_{\substack{r_2\sim R_2'\\ (r_2,r_1)=1}}\overline{c'_{r_2}}\sum_{\substack{n_2\sim N\\ n_1\overline{b_{r_1}}=n_2\overline{b'_{r_2}}\Mod{q_0}\\ (n_2,q_0d_2r_2)=1}}\overline{\alpha'_{n_2}}\sum_{\substack{h\sim H''\\ (h,r_1r_2)=1}}\xi\Bigr|^2.
\end{align*}
Let $\mathscr{S}_3''$ denote the right hand side above. Expanding the square, we see that
\begin{align*}
\mathscr{S}_3''&=\sum_{\substack{r_1\sim R_1'}}|c_{r_1}|\sum_{\substack{r_2,r_3\sim R_2' \\ (r_2r_3,r_1)=1}}\overline{c'_{r_2}}c'_{r_3}\sum_{\substack{h_2,h_3\sim H''\\ (h_2,r_1r_2)=1\\ (h_3,r_1r_3)=1}}\sum_{(n_1,q_0r_1)=1}\psi\Bigl(\frac{n_1}{N}\Bigr)\xi_0\xi_1\\
&\qquad\times \Biggl(\sum_{\substack{n_2\sim N\\ n_2\overline{b'_{r_2}}=n_1\overline{b_{r_1}}\Mod{q_0}\\ (n_2,q_0d_2r_2)=1}}\overline{\alpha'_{n_2}}\xi_2\Biggr) \Biggl(\sum_{\substack{n_3\sim N\\ n_3\overline{b'_{r_3}}=n_1\overline{b_{r_1}}\Mod{q_0}\\ (n_3,q_0d_2r_3)=1}}\alpha'_{n_3}\xi_3\Biggr),
\end{align*}
where
\begin{align*}
\xi_0&:=e\Bigl(\frac{b_{r_1}(h_2\overline{r_2}-h_3\overline{r_3})\overline{n_1 r_1}}{q_0}\Bigr),\qquad &
\xi_1&:=e\Bigl(\frac{b_{r_1}(h_2\overline{r_2}-h_3\overline{r_3})\overline{n_1q_0}}{r_1}\Bigr),\\
\xi_2&:=e\Bigl(\frac{b'_{r_2}h_2\overline{n_2q_0r_1}}{r_2}\Bigr),\qquad &
\xi_3&:=e\Bigl(-\frac{b'_{r_3}h_3\overline{n_3q_0r_1}}{r_3}\Bigr).
\end{align*}
We now wish to extract possible common factors. Let
\[
e_1:=\gcd(h_2r_3-h_3r_2,r_1),
\]
and let $r_1=e_1 r_1'$. Since $c_r$ is supported on square-free $r$, we only need to consider $(r_1',e_1)=1$. We then see that
\[
\xi_1=e\Bigl(\frac{b_{r_1}(h_2\overline{r_2}-h_3\overline{r_3})\overline{n_1 q_0}}{r_1}\Bigr)=e\Bigl(\frac{b_{e_1r_1'}(h_2\overline{r_2}-h_3\overline{r_3})\overline{n_1 e_1 q_0}}{r_1'}\Bigr).
\]
We now put $e_1,r_1'$ into dyadic ranges. Taking the worst ranges, we see that
\begin{align*}
\mathscr{S}_3''&\ll (\log{x})^2\sup_{\substack{E_1,R_1''\\ E_1R_1''\asymp R_1'}}\mathscr{S}_4,
\end{align*}
where $\mathscr{S}_4$ is given by
\begin{align*}
\mathscr{S}_4&:=\sum_{e_1 \sim E_1}\sum_{\substack{r_1'\sim R_1''}}\eta_{e_1,r_1'}\sum_{\substack{r_2,r_3\sim R_2'\\ (e_1 r_1',r_2 r_3)=1}}\overline{c'_{r_2}}c'_{r_3}\sum_{\substack{h_2,h_3\sim H''\\ h_2r_3\equiv h_3r_2\Mod{e_1}\\ (h_2,e_1 r_1' r_2)=1\\ (h_3,e_1 r_1' r_3)=1\\ (h_2r_3-h_3r_2,r_1')=1 }}\sum_{(n_1,q_0 e_1 r_1')=1}\psi\Bigl(\frac{n_1}{N}\Bigr)\xi_0\xi_1\\
&\qquad\times \Biggl(\sum_{\substack{n_2\sim N\\ n_2\overline{b'_{r_2}}=n_1\overline{b_{e_1 r_1'}}\Mod{q_0}\\ (n_2,q_0 d_2 r_2)=1}}\overline{\alpha'_{n_2}}\xi_2\Biggr) \Biggl(\sum_{\substack{n_3\sim N\\ n_3\overline{b'_{r_3}}=n_1\overline{b_{e_1 r_1'}}\Mod{q_0}\\ (n_3,q_0d_2r_3)=1}}\alpha'_{n_3}\xi_3\Biggr).
\end{align*}
Here $|\eta_{e_1,r_1'}|\le |c_{e_1 r_1'}|$ is supported on $e_1 r_1'\sim R_1'$ with $e_1 r_1'$ square-free and $\tau(e_1 r_1')\le(\log{x})^B$ and $(e_1 r_1',q_0)=1$. This gives the result.
\end{proof}
Recalling that we wish to show that $\mathscr{S}_3\ll N^2 R_1' R_2'/(Q(\log{x})^{A_3})$, we see that we wish to show for every choice of $A_4>0$
\[
\mathscr{S}_4\ll_{A_4,B} \frac{N^3 E_1 R_1''R_2'{}^2}{(\log{x})^{A_4} Q^2 }.
\]
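We note that this bound suffices: by Lemma \ref{lmm:Cauchy} and $E_1R_1''\asymp R_1'$ we would then have
\[
\mathscr{S}_3^2\ll (\log{x})^2NR_1'\cdot\frac{N^3E_1R_1''R_2'{}^2}{(\log{x})^{A_4}Q^2}\ll \frac{N^4R_1'{}^2R_2'{}^2}{(\log{x})^{A_4-2}Q^2},
\]
which gives $\mathscr{S}_3\ll N^2R_1'R_2'/(Q(\log{x})^{A_3})$ on taking $A_4:=2A_3+2$.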
\begin{lmm}[Pseudo-diagonal terms]\label{lmm:Diag1}
Let $A,B>0$, let $R_0\ll N/Q$ and $\mathscr{S}_4$ be as in Lemma \ref{lmm:Cauchy}, where we recall that $|\eta_{e_1,r_1'}|\le 1$ is supported on $\tau(e_1 r_1')\le (\log{x})^B$. Let $C=C(A,B)$ be sufficiently large in terms of $A$ and $B$, and let $R_1''$ and $N$ satisfy
\begin{align*}
R_1''&\ll \Bigl(\frac{N}{Q R_0}\Bigr)^{2/3},\\
N&\ll \frac{x^{1/2-3\delta}}{(\log{x})^C}.
\end{align*}
Then we have that
\[
\mathscr{S}_4\ll_{A,B} \frac{N^3 E_1 R_1'' R_2'{}^2}{ (\log{x})^{A} Q^2}.
\]
\end{lmm}
\begin{proof}
We split the summation in $\mathscr{S}_4$ according to the residue class of $n_1\overline{b_{e_1 r_1'}}\Mod{q_0}$, giving
\begin{align}
\mathscr{S}_4&:=\sum_{e_1 \sim E_1 }\sum_{\substack{r_1'\sim R_1'' }}\eta_{e_1,r_1'}\sum_{\substack{r_2,r_3\sim R_2'\\ (e_1 r_1',r_2r_3)=1}}\overline{c'_{r_2}}c'_{r_3}\sum_{\substack{h_2,h_3\sim H''\\ h_2r_3\equiv h_3r_2\Mod{e_1}\\ (h_2,e_1 r_1' r_2)=1\\ (h_3,e_1 r_1' r_3)=1\\ (h_2r_3-h_3r_2,r_1')=1 }}\sum_{\substack{b\Mod{q_0}\\ (b,q_0)=1}}\xi_0\nonumber\\
&\times \Biggl(\sum_{\substack{n_2\sim N\\ n_2\overline{b_{r_2}'}\equiv b\Mod{q_0} \\ (n_2,d_2r_2)=1}}\overline{\alpha'_{n_2}}\xi_2\Biggr)\Biggl(\sum_{\substack{n_3\sim N\\ n_3\overline{b'_{r_3}}\equiv b\Mod{q_0}\\ (n_3,d_2r_3)=1}}\alpha'_{n_3}\xi_3\Biggr) \Biggl(\sum_{\substack{n_1\\ n_1\overline{b_{e_1 r_1'}}\equiv b\Mod{q_0}\\ (n_1,e_1 r_1')=1}}\psi\Bigl(\frac{n_1}{N}\Bigr)\xi_1\Biggr),\label{eq:S41}
\end{align}
where
\[
\xi_0:=e\Bigl(\frac{(h_2\overline{r_2}-h_3\overline{r_3})\overline{b e_1 r_1'}}{q_0}\Bigr)
\]
doesn't depend on the $n_i$.
We concentrate on the inner sum over $n_1$. Since $R_1''\ll N/(Q R_0)$ the sum is essentially a complete sum, except for the coprimality constraint $(n_1,e_1)=1$. By M\"obius inversion, we have that
\begin{align*}
\sum_{\substack{n_1\\ n_1\overline{b_{e_1 r_1'}}\equiv b\Mod{q_0}\\ (n_1,e_1 r_1')=1}}\psi\Bigl(\frac{n_1}{N}\Bigr)\xi_1=\sum_{f|e_1 }\mu(f)\sum_{\substack{n_1\\ n_1\overline{b_{e_1 r_1'}}\equiv b\Mod{q_0}\\ (n_1,r_1')=1\\ f|n_1}}\psi\Bigl(\frac{n_1}{N}\Bigr)\xi_1.
\end{align*}
By Lemma \ref{lmm:InverseCompletion}, we have that for $L_1=L_1(f):=(\log{x})^5 q_0 r_1' f/N$
\begin{align}
&\sum_{\substack{n_1\overline{b_{e_1 r_1'}}\equiv b \Mod{q_0} \\ f|n_1\\ (n_1,r_1')=1}}\psi\Bigl(\frac{n_1}{N}\Bigr)e\Bigl(\frac{b_{e_1 r_1'}(h_2\overline{r_2}-h_3\overline{r_3})\overline{n_1e_1 q_0}}{r_1'}\Bigr)\nonumber\\
&\qquad=\frac{N}{q_0r_1' f}\sum_{|\ell_1|\le L_1}\hat{\psi}\Bigl(\frac{\ell_1 N}{q_0 r_1' f}\Bigr)S(b_{e_1 r_1'}(h_2\overline{r_2}-h_3\overline{r_3})\overline{f e_1 q_0},\ell_1\overline{q_0};r_1')e\Bigl(\frac{b_{e_1 r_1'}b\overline{f e_1 r_1'}}{q_0}\Bigr)\nonumber\\
&\qquad\qquad+O(x^{-100}).\label{eq:S4N1Sum}
\end{align}
We separate the term with $\ell_1=0$. Since $(b_{e_1 r_1'}(h_2\overline{r_2}-h_3\overline{r_3})\overline{f e_1 q_0},r_1')=1$, we see that $S(b_{e_1 r_1'}(h_2\overline{r_2}-h_3\overline{r_3})\overline{f e_1 q_0},0;r_1')$ is a Ramanujan sum, and therefore equal to $\mu(r_1')$. For the remaining terms we use the standard Kloosterman sum bound of Lemma \ref{lmm:Kloosterman}. If $\tau(e_1 r_1')\le (\log{x})^B$, this gives
\begin{align*}
\sum_{\substack{n_1\\ n_1\overline{b_{e_1 r_1'}}\equiv b\Mod{q_0}\\ (n_1,e_1 r_1')=1}}\psi\Bigl(\frac{n_1}{N}\Bigr)\xi_1&\ll\sum_{f|e_1 }\frac{N}{q_0 r_1' f}\Bigl(1+\sum_{0<|\ell_1|\le L_1}r_1'{}^{1/2}\tau(r_1') (\ell_1,r_1')^{1/2} \Bigr)\\
&\ll (\log{x})^{2B+5}\Bigl(\frac{N}{Q R_0 R_1''}+R_1''{}^{1/2}\Bigr).
\end{align*}
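For completeness, we record the elementary estimate behind the second term here: grouping $\ell_1$ according to $d=(\ell_1,r_1')$,
\[
\sum_{0<|\ell_1|\le L_1}(\ell_1,r_1')^{1/2}\le 2\sum_{d\mid r_1'}d^{1/2}\cdot\frac{L_1}{d}\le 2\tau(r_1')L_1,
\]
so that, recalling $L_1=(\log{x})^5 q_0 r_1' f/N$, each $f\mid e_1$ contributes
\[
\frac{N}{q_0 r_1' f}\sum_{0<|\ell_1|\le L_1}r_1'{}^{1/2}\tau(r_1')(\ell_1,r_1')^{1/2}\ll (\log{x})^{5}\tau(r_1')^2 r_1'{}^{1/2},
\]
and summing over the $\tau(e_1)$ divisors $f\mid e_1$, using $\tau(e_1)\tau(r_1')^2\le \tau(e_1 r_1')^2\le(\log{x})^{2B}$, gives the bound $(\log{x})^{2B+5}R_1''{}^{1/2}$ above.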
By the assumptions of the lemma we have that $R_1''\ll N^{2/3}/ (QR_0)^{2/3}$, and so the first term dominates. Substituting this into \eqref{eq:S41} (recalling that $\eta_{e_1,r_1'}$ is 1-bounded and supported on $\tau(e_1 r_1')\le (\log{x})^B$ and that $c'_{r}$ is supported on $(r,q_0)=1$), we have
\begin{align}
\mathscr{S}_4&\ll \frac{(\log{x})^{2B+5} N}{Q R_0 R_1''}\sum_{e_1\sim E_1}\sum_{\substack{r_1'\sim R_1''\\ \tau(e_1 r_1')\le (\log{x})^B}}\sum_{\substack{r_2,r_3\sim R_2'\\ (q_0e_1 r_1',r_2r_3)=1}}\sum_{\substack{h_2,h_3\sim H''\\ h_2r_3\equiv h_3r_2\Mod{e_1}\\ (h_2,e_1 r_1' r_2)=1\\ (h_3,e_1 r_1' r_3)=1\\ (h_2r_3-h_3r_2,r_1')=1 }}|\mathscr{S}_4'|+O(x^{-10}),\label{eq:S42}
\end{align}
where
\begin{align*}
\mathscr{S}_4'&:=\sum_{\substack{b\Mod{q_0}\\ (b,q_0)=1}} \Bigl|\sum_{\substack{n_2\sim N\\ n_2\overline{b_{r_2}'}\equiv b\Mod{q_0} \\ (n_2,d_2r_2)=1}}\overline{\alpha'_{n_2}}\xi_2\Bigr|\Bigl|\sum_{\substack{n_3\sim N\\ n_3\overline{b'_{r_3}}\equiv b\Mod{q_0}\\ (n_3,d_2r_3)=1}}\alpha'_{n_3}\xi_3\Bigr| .
\end{align*}
We now apply Cauchy-Schwarz, giving
\begin{align}
\mathscr{S}_4'&\ll \sum_{\substack{b\Mod{q_0}\\ (b,q_0)=1}}\Bigl(\Bigl|\sum_{\substack{n_2\sim N\\ n_2\overline{b'_{r_2}}\equiv b\Mod{q_0}\\ (n_2,d_2r_2)=1}}\overline{\alpha'_{n_2}}\xi_2\Bigr|^2+\Bigl|\sum_{\substack{n_3\sim N\\ n_3\overline{b'_{r_3}}=b\Mod{q_0}\\ (n_3,d_3r_3)=1}}\alpha'_{n_3}\xi_3\Bigr|^2\Bigr).\label{eq:S4p}
\end{align}
Substituting \eqref{eq:S4p} into \eqref{eq:S42} and using the symmetry in $d_2,d_3,n_2,r_2,n_3,r_3$, we see that
\begin{align*}
\mathscr{S}_4&\ll \frac{(\log{x})^{2B+5} N}{Q R_0 R_1''}\sum_{e_1\sim E_1}\sum_{\substack{r_1'\sim R_1''\\ \tau(e_1 r_1')\le (\log{x})^B}}\sum_{\substack{r_2,r_3\sim R_2'\\ (q_0 e_1 r_1',r_2r_3)=1}}\sum_{\substack{h_2,h_3\sim H''\\ h_2r_3\equiv h_3r_2\Mod{e_1}\\ (h_2,e_1 r_1' r_2)=1\\ (h_3,e_1 r_1' r_3)=1\\ (h_2r_3-h_3r_2,r_1')=1 }}\\
&\qquad \times\sum_{\substack{b\Mod{q_0}\\ (b,q_0)=1}}\Biggl|\sum_{\substack{n_2\sim N\\ n_2\overline{b'_{r_2}}\equiv b\Mod{q_0}\\ (n_2,d_2r_2)=1}}\overline{\alpha'_{n_2}}\xi_2\Biggr|^2+O(x^{-10}).
\end{align*}
We recall that $\xi_2=e(b'_{r_2} h_2\overline{n_2 e_1 r_1' q_0}/r_2)$. We split the summation according to the residue class of $h_2\overline{r_1' e_1 }\Mod{r_2}$, giving
\begin{align}
\mathscr{S}_4&\ll \frac{(\log{x})^{2B+5} N}{QR_0R_1''}\sum_{\substack{r_2\sim R_2'\\ (r_2,q_0)=1}}\,\sum_{\substack{c\Mod{r_2}\\ (c,r_2)=1}}n_{c;r_2}\sum_{\substack{b\Mod{q_0}\\ (b,q_0)=1}}\Biggl|\sum_{\substack{n_2\sim N\\ n_2\overline{b'_{r_2}}\equiv b\Mod{q_0}\\ (n_2,d_2r_2)=1}}\overline{\alpha'_{n_2}}e\Bigl(\frac{b'_{r_2}c\overline{n_2q_0}}{r_2}\Bigr)\Biggr|^2\nonumber\\
&\qquad +O(x^{-10}),\label{eq:S43}
\end{align}
where
\[
n_{c;r_2}:=\sum_{e_1\sim E_1}\sum_{\substack{r_1'\sim R_1''\\ (e_1 r_1',r_2)=1\\ \tau(e_1 r_1')\le (\log{x})^B}}\sum_{\substack{h_2\sim H''\\ (h_2,r_2)=1\\ h_2\overline{e_1 r_1'}\equiv c\Mod{r_2} }}\sum_{r_3\sim R_2'}\sum_{\substack{h_3\sim H''\\ e_1 | h_3r_2-h_2r_3}}1.
\]
We concentrate on $n_{c;r_2}$. There are $O(H''{}^2)$ choices of $h_2,h_3$. Given a choice of $h_2$, there are $O(E_1 R_1''/R_2')$ choices of $r_1=e_1 r_1'$ satisfying $r_1 \equiv h_2\overline{c}\Mod{r_2}$. Given a choice of $r_1$, there are $\tau(r_1)\le (\log{x})^B$ choices of $e_1,r_1'$ such that $e_1 r_1'=r_1$. Given a choice of $h_2,h_3,e_1$, there are $O(1+R_2'/E_1)$ choices of $r_3\equiv h_3r_2\overline{h_2}\Mod{e_1}$ (recall $(h_2,e_1)=1$). Therefore if $E_1\le R_2'$, we have that $n_{c;r_2}\ll (\log{x})^B H''{}^2 R_1'/E_1\ll (\log{x})^B H''{}^2 R_1''$.
If instead $E_1>R_2'$, we let $h_1:=(h_2r_3-h_3r_2)/e_1$, which is of size $O(H'' R_2'/E_1)$. We then see that $h_1e_1\equiv h_2 r_3\Mod{r_2}$ and $h_2\equiv c e_1 r_1'\Mod{r_2}$, so $h_1\equiv c r_1'r_3\Mod{r_2}$. There are $O(H''{}^2 R_2' R_1''/E_1)$ choices of $h_1,h_2,r_1'$. Given a choice of $h_1,r_1'$ there are $O(1)$ choices of $r_3\equiv h_1\overline{c r_1'}\Mod{r_2}$. Given a choice of $h_2,r_1'$ there are $O(E_1/R_2')$ choices of $e_1\equiv h_2\overline{c r_1'}\Mod{r_2}$. Finally, given a choice of $h_1,h_2,e_1,r_3$, there is at most one choice of $h_3=(h_2r_3-h_1e_1)/r_2$. Thus in total there are $O(H''{}^2 R_1'')$ choices, and so
\begin{equation}
n_{c;r_2}\ll (\log{x})^B H''{}^2 R_1''
\label{eq:S4Count}
\end{equation}
regardless of the size of $E_1$.
Substituting this bound \eqref{eq:S4Count} into \eqref{eq:S43}, and then extending the $b,c$ summations, we find that
\begin{align*}
\mathscr{S}_4&\ll \frac{(\log{x})^{3B+5} N H''{}^2}{QR_0}\sum_{\substack{r_2\sim R_2'\\ (r_2,q_0)=1}}\sum_{\substack{c\Mod{r_2}}}\sum_{b\Mod{q_0}}\Biggl|\sum_{\substack{n_2\sim N\\ n_2\overline{b'_{r_2}}\equiv b\Mod{q_0}\\ (n_2,d_2r_2)=1}}\overline{\alpha'_{n_2}}e\Bigl(\frac{b'_{r_2}c\overline{n_2q_0}}{r_2}\Bigr)\Biggr|^2\\
&\qquad +O(x^{-10})\\
&\ll \frac{(\log{x})^{3B+5} N H''{}^2 R_2'}{QR_0}\sum_{\substack{r_2\sim R_2'\\ (r_2,q_0)=1}}\sum_{\substack{n_2,n_3\sim N\\ n_2\equiv n_3\Mod{r_2 q_0}\\ (n_2n_3,d_2r_2)=1}}\alpha'_{n_3}\overline{\alpha'_{n_2}}+O(x^{-10})\\
&\ll \frac{(\log{x})^{3B+5} N^2 H''{}^2 R_2' R}{QR_0}.
\end{align*}
In the final line we used the fact that $\alpha'_n$ is 1-bounded and that $N\ll x^{1/2}\ll Q R$.
We recall that $H''\ll (\log{x})^5 Q N R_0 R_1'R_2'/x$ and $R_2'\ll R/R_0$, so this gives
\begin{equation}
\mathscr{S}_4\ll (\log{x})^{3B+10}\frac{N^4 Q R_1'{}^2 R_2'{}^2 R^2 }{x^{2}}.
\label{eq:S4DiagBound}
\end{equation}
Thus we obtain $\mathscr{S}_4\ll N^3 E_1 R_1'' R_2'{}^2/((\log{x})^{A} Q^2)$ provided
\begin{equation}
N\ll \frac{x^{2} }{(\log{x})^C R^3Q^3}=\frac{x^{1/2-3\delta}}{(\log{x})^C}
\end{equation}
for $C=C(A,B)$ sufficiently large. This gives the result.
\end{proof}
\begin{lmm}[Off-diagonal terms, second Cauchy]\label{lmm:SecondCauchy}
Let $\mathscr{S}_4$ be as in Lemma \ref{lmm:Cauchy}, let $R_0\ll N/Q$ and
\[
R_1''> \Bigl(\frac{N}{ Q R_0}\Bigr)^{2/3}.
\]
Then we have that
\[
|\mathscr{S}_4|^2\ll (\log{x}) \frac{ N^2 R_2'{}^2}{QR_0}\Bigl(\mathscr{S}_5+\mathscr{S}_6\Bigr),
\]
where $\mathscr{S}_5,\mathscr{S}_6$ are given by
\begin{align*}
\mathscr{S}_5&:=\sum_{e_1,e_1'\sim E_1}\sum_{\substack{r_1,r_1'\sim R_1''\\ e_1 r_1,e_1'r_1'\in\mathcal{R} }}\sum_{\substack{r_2,r_3\sim R_2'\\ (e_1 e_1'r_1 r_1',r_2r_3)=1 \\ r_2,r_3\in\mathcal{R}}}\mathop{\sideset{}{^*}\sum}_{\substack{h_2,h_2',h_3,h_3'\sim H'' \\ e_1'r_1'(h_2r_3-h_3r_2)= e_1r_1(h_2'r_3-h_3'r_2)}}|\mathscr{S}_7|,\\
\mathscr{S}_{6}&:=\sum_{e_1,e_1'\sim E_1}\sum_{\substack{r_1,r_1'\sim R_1''\\ e_1 r_1,e_1'r_1'\in\mathcal{R} }}\sum_{\substack{r_2,r_3\sim R_2'\\ (e_1 e_1' r_1 r_1',r_2r_3)=1\\ r_2,r_3\in\mathcal{R}}}\mathop{\sideset{}{^*}\sum}_{\substack{h_2,h_2',h_3,h_3'\sim H'' \\ e_1'r_1'(h_2r_3-h_3r_2)\ne e_1 r_1(h_2'r_3-h_3'r_2)}}|\mathscr{S}_7|,\\
\mathscr{S}_7&:=\sum_{\substack{n_1, n_1',n_2,n_3\\ n_1'\overline{b_{e_1' r_1'}}\equiv n_1\overline{b_{e_1 r_1}}\Mod{q_0}\\ n_2\overline{b'_{r_2}}\equiv n_1\overline{b_{e_1 r_1}}\Mod{q_0} \\ n_3\overline{b'_{r_3}}\equiv n_1\overline{b_{e_1 r_1}}\Mod{q_0}\\ (n_1,q_0 e_1 r_1)=(n_1',e_1'r_1')=1\\ (n_2,r_2)=(n_3,r_3)=1}}\psi\Bigl(\frac{n_1}{N}\Bigr)\psi\Bigl(\frac{n_1'}{N}\Bigr)\psi\Bigl(\frac{n_2}{N}\Bigr)\psi\Bigl(\frac{n_3}{N}\Bigr)\xi_0'\xi_1'\xi_{1'}'\xi_2'\xi_3',
\end{align*}
and $\mathcal{R}:=\{r:\mu^2(r)=1,\,\tau(r)\le(\log{x})^B,\,(r,q_0)=1\}$,
\begin{align*}
\xi_{0}'&:=e\Bigl(\frac{b_{e_1r_1}\overline{n_1}(\overline{e_1 r_1}(h_2\overline{r_2}-h_3\overline{r_3})-\overline{e_1' r_1'}(h_2'\overline{r_2}-h_3'\overline{r_3}))}{q_0}\Bigr),\\
\xi_{1}'&:=e\Bigl(\frac{b_{e_1r_1}(h_2\overline{r_2}-h_3\overline{r_3})\overline{n_1 e_1 q_0}}{r_1}\Bigr),\\
\xi_{1'}'&:=e\Bigl(\frac{-b_{e_1'r_1'}(h_2'\overline{r_2}-h_3'\overline{r_3})\overline{n_1'e_1'q_0}}{r_1'}\Bigr),\\
\xi_{2}'&:=e\Bigl(\frac{b'_{r_2}\overline{n_2}(h_2\overline{e_1 r_1}-h_2'\overline{e_1' r_1'})\overline{q_0}}{r_2}\Bigr),\\
\xi_{3}'&:=e\Bigl(\frac{-b'_{r_3}\overline{n_3}(h_3\overline{e_1 r_1}-h_3'\overline{e_1' r_1'})\overline{q_0}}{r_3}\Bigr).
\end{align*}
Here we use $\mathop{\sideset{}{^*}\sum}$ to denote the fact we have the additional conditions that
\begin{align*}&(h_2'r_3-h_3'r_2,r_1')=1,\quad(h_2r_3-h_3r_2,r_1)=1,\quad (h_2,e_1 r_1r_2)=1,\quad (h_3,e_1 r_1 r_3)=1,\\
&(h_2',e_1'r_1'r_2)=1, \quad (h_3',e_1'r_1'r_3)=1,\quad h_2r_3-h_3r_2\equiv 0\Mod{e_1}, \quad h_2'r_3-h_3'r_2\equiv 0\Mod{e_1'}.
\end{align*}
\end{lmm}
\begin{proof}
Rearranging the order of summation in $\mathscr{S}_4$, we have that
\begin{align*}
\mathscr{S}_4&:=\sum_{\substack{r_2,r_3\sim R_2'}}\overline{c'_{r_2}}c'_{r_3}\sum_{\substack{n_2\sim N\\ (n_2,q_0d_2 r_2)=1}}\overline{\alpha'_{n_2}}\sum_{\substack{n_3\sim N\\ n_3\overline{b'_{r_3}}\equiv n_2\overline{b'_{r_2}}\Mod{q_0}\\ (n_3, q_0 d_2 r_3)=1}}\alpha'_{n_3}\sum_{\substack{e_1\sim E_1 \\ (e_1,r_2r_3)=1}}\sum_{\substack{h_2,h_3\sim H''\\ h_2r_3\equiv h_3r_2\Mod{e_1}\\ (h_2,e_1 r_2)=(h_3,e_1 r_3)=1 }}\\
&\qquad\times \Biggl(\sum_{\substack{r_1'\sim R_1''\\ (r_1',h_2h_3r_2r_3)=1\\ (h_2r_3-h_3r_2,r_1')=1}}\eta_{e_1,r_1'}\sum_{\substack{n_1\\ n_1\overline{b_{e_1 r_1'}}\equiv n_2\overline{b'_{r_2}}\Mod{q_0}\\ (n_1,e_1 r_1')=1}}\psi\Bigl(\frac{n_1}{N}\Bigr)\xi_0\xi_1\xi_2\xi_3\Biggr).
\end{align*}
We apply the Cauchy--Schwarz inequality in the variables $r_2,r_3,n_2,n_3$, giving (using the fact that $R_0\ll N/Q$)
\begin{equation}
|\mathscr{S}_4|^2\ll \frac{N^2 R_2'{}^2}{QR_0}\mathscr{S}_4'',
\label{eq:S4Off}
\end{equation}
where $\mathscr{S}_4''$ is given by
\begin{align*}
\mathscr{S}_4''&:=\sum_{\substack{r_2,r_3\sim R_2'\\ r_2,r_3\in\mathcal{R}}}\,\sum_{\substack{n_2,n_3\sim N\\ n_2\overline{b'_{r_2}}\equiv n_3\overline{b'_{r_3}}\Mod{q_0}\\ (n_2,q_0r_2)=(n_3,q_0 r_3)=1}}|\mathscr{S}_4'''|^2,\\
\mathscr{S}_4'''&:=\sum_{\substack{r_1'\sim R_1''\\ e_1\sim E_1\\ (e_1r_1',r_2r_3)=1 }}\eta_{e_1,r_1'}\sum_{\substack{h_2,h_3\sim H''\\ h_2r_3\equiv h_3r_2\Mod{e_1}\\ (h_2,e_1 r_1'r_2)=1\\ (h_3,e_1 r_1' r_3)=1\\ (h_2r_3-h_3r_2,r_1')=1}}\sum_{\substack{n_1\\ n_1\overline{b_{e_1 r_1'}}\equiv n_2\overline{b'_{r_2}}\Mod{q_0}\\ (n_1,e_1 r_1')=1}}\psi\Bigl(\frac{n_1}{N}\Bigr)\xi_0\xi_1\xi_2\xi_3.
\end{align*}
Here we have used the fact that $c'_{r}$ is 1-bounded and supported on $r\in \mathcal{R}$, and we dropped the conditions $(d_2,r_2)=(d_2,r_3)=1$ for an upper bound.
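For completeness, here is a sketch of how the Cauchy--Schwarz step produces the factor in \eqref{eq:S4Off}, assuming (as in the surrounding setup) that the coefficients $c'_r,\alpha'_n$ appearing here are 1-bounded.

```latex
% Sketch: with v = (r_2, r_3, n_2, n_3) ranging over the outer summation,
% Cauchy--Schwarz gives |S_4|^2 <= (number of admissible v) * sum_v |S_4'''|^2.
% For fixed r_2, r_3 ~ R_2' and n_2 ~ N there are O(N/q_0 + 1) = O(N/q_0)
% choices of n_3 ~ N with n_2 \overline{b'_{r_2}} = n_3 \overline{b'_{r_3}} (mod q_0),
% since R_0 << N/Q gives N/q_0 >> 1.  Hence
\[
|\mathscr{S}_4|^2
\ll R_2'{}^2\cdot N\cdot\frac{N}{q_0}\cdot \mathscr{S}_4''
\ll \frac{N^2 R_2'{}^2}{Q R_0}\,\mathscr{S}_4''.
\]
```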
We insert a smooth majorant for the $n_2,n_3$ summations and expand the square, giving
\begin{align}
\mathscr{S}_4''&\le\sum_{\substack{r_2,r_3\sim R_2'\\ r_2,r_3\in\mathcal{R}}}\,\sum_{\substack{n_2,n_3\\ n_2\overline{b'_{r_2}}\equiv n_3\overline{b'_{r_3}}\Mod{q_0}\\ (n_2,q_0 r_2)=(n_3,q_0 r_3)=1}}\psi\Bigl(\frac{n_2}{N}\Bigr)\psi\Bigl(\frac{n_3}{N}\Bigr)|\mathscr{S}_4'''|^2\nonumber\\
&=\sum_{e_1,e_1'\sim E_1}\sum_{\substack{r_1,r_1'\sim R_1''}}\eta_{e_1,r_1}\overline{\eta_{e_1',r_1'}}\sum_{\substack{r_2,r_3\sim R_2'\\ (e_1 e_1'r_1r_1',r_2r_3)=1 \\ r_2,r_3\in\mathcal{R}}}\,\mathop{\sideset{}{^*}\sum}_{\substack{h_2,h_2',h_3,h_3'\sim H''}}
\nonumber\\
&\qquad \times \sum_{\substack{n_1, n_1',n_2,n_3\\ n_1'\overline{b_{e_1'r_1'}}\equiv n_1\overline{b_{e_1 r_1}}\Mod{q_0}\\ n_2\overline{b'_{r_2}}\equiv n_1\overline{b_{e_1 r_1}}\Mod{q_0} \\ n_3\overline{b'_{r_3}}\equiv n_1\overline{b_{e_1 r_1}}\Mod{q_0}\\ (n_2,q_0 r_2)=(n_3,q_0 r_3)=1\\ (n_1,e_1 r_1)=(n_1',e_1' r_1')=1}}\psi\Bigl(\frac{n_1}{N}\Bigr)\psi\Bigl(\frac{n_1'}{N}\Bigr)\psi\Bigl(\frac{n_2}{N}\Bigr)\psi\Bigl(\frac{n_3}{N}\Bigr)\xi_0'\xi_1'\xi_{1'}'\xi_2'\xi_3',\label{eq:S4pp}
\end{align}
where
\begin{align*}
\xi_0'&:=e\Bigl(\frac{b_{e_1 r_1}(h_2\overline{r_2}-h_3\overline{r_3})\overline{n_1 e_1 r_1}-b_{e_1'r_1'}(h_2'\overline{r_2}-h_3'\overline{r_3})\overline{n_1' e_1'r_1'}}{q_0}\Bigr),
\end{align*}
and $\xi_1',\xi_{1'}',\xi_2',\xi_3'$ are as given in the statement of the lemma. Here we use $\mathop{\sideset{}{^*}\sum}$ in the summation over $h_2,h_2',h_3,h_3'$ to denote the fact that we have the additional conditions that
\begin{align*}
&(h_2'r_3-h_3'r_2,r_1')=1,\quad(h_2r_3-h_3r_2,r_1)=1,\quad (h_2,e_1r_1 r_2)=1,\quad (h_3,e_1r_1 r_3)=1,\\
&(h_2',e_1'r_1'r_2)=1, \quad (h_3',e_1'r_1'r_3)=1,\quad h_2r_3-h_3r_2\equiv 0\Mod{e_1},\quad h_2'r_3-h_3'r_2\equiv 0\Mod{e_1'}.
\end{align*}
Since we have the condition $n_1'\overline{b_{e_1' r_1'}}\equiv n_1\overline{b_{e_1r_1}}\Mod{q_0}$, so that $\overline{n_1'}\equiv \overline{n_1}\,b_{e_1 r_1}\overline{b_{e_1'r_1'}}\Mod{q_0}$, we see that $\xi_0'$ simplifies to give
\begin{align*}
\xi_{0}'&:=e\Bigl(\frac{b_{e_1 r_1}\overline{n_1}(\overline{e_1 r_1}(h_2\overline{r_2}-h_3\overline{r_3})-\overline{e_1'r_1'}(h_2'\overline{r_2}-h_3'\overline{r_3}))}{q_0}\Bigr),
\end{align*}
which matches the expression given in the statement of the lemma. We insert absolute values around the $n_1,n_1',n_2,n_3$ summation, and recall that $\eta_{e_1,r_1}$ is 1-bounded and supported on $e_1r_1\in\mathcal{R}$. This means \eqref{eq:S4pp} simplifies to give
\begin{equation}
\mathscr{S}_4''\le \sum_{e_1,e_1'\sim E_1}\sum_{\substack{r_1,r_1'\sim R_1''\\ e_1r_1,e_1'r_1'\in\mathcal{R} }}\sum_{\substack{r_2,r_3\sim R_2'\\ (e_1 e_1'r_1r_1',r_2r_3)=1 \\ r_2,r_3\in\mathcal{R}}}\,\mathop{\sideset{}{^*}\sum}_{\substack{h_2,h_2',h_3,h_3'\sim H''}}|\mathscr{S}_7|,
\label{eq:S4pp2}
\end{equation}
where $\mathscr{S}_7$ is as given by the lemma. Finally, we separate our upper bound \eqref{eq:S4pp2} into $\mathscr{S}_5$, the `diagonal' terms with $e_1'r_1'(h_2r_3-h_3r_2)=e_1r_1(h_2'r_3-h_3'r_2)$, and $\mathscr{S}_6$, the `off-diagonal' terms with $e_1'r_1'(h_2r_3-h_3r_2)\ne e_1r_1(h_2'r_3-h_3'r_2)$. Thus we find that
\begin{equation}
\mathscr{S}_4''\le \mathscr{S}_5+\mathscr{S}_6,
\label{eq:S4pp3}
\end{equation}
where $\mathscr{S}_5,\mathscr{S}_6$ are as given by the lemma. Substituting \eqref{eq:S4pp3} into our upper bound \eqref{eq:S4Off} for $\mathscr{S}_4$ then gives the result.
\end{proof}
We are left to show that
\[
\mathscr{S}_5\ll\frac{ (E_1 R_1''R_2')^2 N^4}{(\log{x})^{A} Q^3 R_0}\qquad\text{and}\qquad \sum_{q\sim Q}\sum_{r_0\sim R_0}\mathscr{S}_{6}\ll \frac{ (E_1 R_1'' R_2')^2 N^4}{(\log{x})^{A} Q^2}.
\]
First we consider the contribution from $\mathscr{S}_5$, the terms with $e_1'r_1'(h_2r_3-h_3 r_2)=e_1r_1(h_2'r_3-h_3'r_2)$.
\begin{lmm}[Diagonal terms]\label{lmm:Diag2}
Let $A,B>0$ and let $\mathscr{S}_5=\mathscr{S}_5(B)$ be as in Lemma \ref{lmm:SecondCauchy}. Let $R_1''$ satisfy
\[
R_1''\ge \Bigl(\frac{N}{ Q R_0}\Bigr)^{2/3},
\]
and let $N,R$ satisfy
\begin{align*}
x^{6\delta}(\log{x})^C<R&< N< \frac{x^{1/2-2\delta}}{(\log{x})^C},
\end{align*}
for some $C=C(A,B)$ sufficiently large in terms of $A$ and $B$. Then we have that
\[
\mathscr{S}_5\ll \frac{(E_1 R_1''R_2')^2N^4}{(\log{x})^{A} Q^3 R_0}.
\]
\end{lmm}
\begin{proof}
Since $\mathscr{S}_5$ consists of the terms with $e_1'r_1'(h_2r_3-h_3 r_2)=e_1r_1(h_2'r_3-h_3'r_2)$, we see that
\[
\xi'_0=1.
\]
We recall that the summation in $\mathscr{S}_5$ is restricted to $(h_2r_3-h_3r_2,r_1)=1=(h_2'r_3-h_3'r_2,r_1')$. Thus we see that $e_1'r_1'(h_2r_3-h_3 r_2)=e_1r_1(h_2'r_3-h_3'r_2)$ implies that $r_1=r_1'$ and $r_3(h_2e_1'-h_2'e_1)=r_2(h_3e_1'-h_3'e_1)$.
We now wish to remove possible GCDs. Let $r'=\gcd(r_2,r_3)$ and so $r_2=r' e_2$, $r_3=r' e_3$ with $r',e_2,e_3$ pairwise coprime (recall $r_2,r_3$ are square-free). Then we see that $e_3(h_2e_1'-h_2'e_1)=e_2(h_3e_1'-h_3'e_1)$, so $h_2e_1'-h_2'e_1=e_2\ell$ and $h_3e_1'-h_3'e_1=e_3\ell$ for some $\ell$. In these new variables, the phases $\xi_1',\xi_{1'}',\xi_2',\xi_3'$ simplify to give
\begin{align*}
\xi_1'&=e\Bigl(\frac{b_{e_1 r_1}\overline{n_1}(h_2\overline{e_2}-h_3\overline{e_3})\overline{r' q_0 e_1}}{r_1}\Bigr),\quad &
\xi_{1'}'&=e\Bigl(-\frac{b_{e_1'r_1}\overline{n_1'}(h_2\overline{e_2}-h_3\overline{e_3}) \overline{r' q_0 e_1}}{r_1}\Bigr),\\
\xi_2'&=e\Bigl(\frac{b'_{e_2r'}\overline{n_2}\ell \overline{r_1 q_0 e_1 e_1'}}{r'}\Bigr), &
\xi_3'&=e\Bigl(\frac{-b'_{e_3r'}\overline{n_3}\ell \overline{r_1 q_0 e_1 e_1'}}{r'}\Bigr).
\end{align*}
We recall that the summation is also restricted by the condition $h_2r_3-h_3r_2\equiv 0\Mod{e_1}$. Since $(e_1,r_2)=1$ and $r'|r_2$, we have that $(r',e_1)=1$ so this condition simplifies to $h_2e_3-h_3e_2\equiv 0\Mod{e_1}$. The condition $h_2e_1'-h_2'e_1=e_2\ell$ gives the constraint $h_2e_1'-e_2\ell\equiv 0\Mod{e_1}$. We see that there is at most one choice of $h_2',h_3'$ for any given choice of $h_2,h_3,\ell,e_1,e_1',e_2,e_3$. Finally, we must have that $e_2\asymp e_3$ since $r_2,r_3\sim R_2'$. Thus, putting $r'$ into dyadic ranges, we see that this gives the bound
\begin{align}
\mathscr{S}_5&\ll (\log{x})\sup_{E_2\le R_2'}\sum_{e_1,e_1'\sim E_1}\sum_{\substack{e_2,e_3\asymp E_2\\ (e_2e_3,e_1 e_1')=1\\ (e_2,e_3)=1}}\sum_{\substack{r'\sim R_2'/E_2\\ e_2r'\in\mathcal{R}\\ e_3r'\in\mathcal{R}\\ (r',e_1 e_1')=1}}\sum_{\substack{r_1\sim R_1''\\ e_1r_1\in\mathcal{R}\\ e_1'r_1\in\mathcal{R} \\ (r_1,e_2e_3r')=1 }}\nonumber\\
&\qquad\times \mathop{\sideset{}{^*}\sum}_{\substack{h_2,h_3\sim H''\\ h_2e_3\equiv h_3e_2\Mod{e_1}}}\sum_{\substack{\ell\ll H'' E_1 /E_2\\ h_2 e_1'-e_2\ell\equiv 0\Mod{e_1}}}|\mathscr{S}_5'|,
\label{eq:S91}
\end{align}
where $\mathscr{S}_5'=\mathscr{S}_5'(e_1,e_1',e_2,e_3,r_1,r',\ell,h_2,h_3)$ is given by
\begin{align}
\mathscr{S}_5':=&\sum_{\substack{n_1,n_1',n_2,n_3\\ n_1'\overline{b_{e_1'r_1}}\equiv n_1\overline{b_{e_1r_1}}\Mod{q_0} \\ n_2\overline{b'_{e_2 r'}}\equiv n_1\overline{b_{e_1 r_1}}\Mod{q_0} \\ n_3\overline{b'_{e_3 r'}}\equiv n_1\overline{b_{e_1 r_1}}\Mod{q_0}\\ (n_2,e_2 r')=(n_3,e_3 r')=1 \\ (n_1,q_0 e_1 r_1)=(n_1',e_1' r_1)=1}}\psi\Bigl(\frac{n_1}{N}\Bigr)\psi\Bigl(\frac{n_1'}{N}\Bigr)\psi\Bigl(\frac{n_2}{N}\Bigr)\psi\Bigl(\frac{n_3}{N}\Bigr)\xi_{1}'\xi_{1'}'\xi_{2}'\xi_{3}'.\label{eq:S9'1}
\end{align}
We concentrate on $\mathscr{S}_5'$. To simplify notation, let us define $k_1,k_1'\Mod{r_1}$ and $k_2,k_3\Mod{r'}$ by
\begin{align*}
k_1&:=b_{e_1 r_1}(h_2\overline{e_2}-h_3\overline{e_3})\overline{r' e_1 q_0},\qquad
&k_1'&:=-b_{e_1' r_1}(h_2\overline{e_2}-h_3\overline{e_3})\overline{r' e_1 q_0},\\
k_2&:=b'_{e_2 r'}\ell\overline{r_1 e_1 e_1'q_0},\qquad
&k_3&:=-b'_{e_3 r'}\ell\overline{r_1 e_1 e_1'q_0}.
\end{align*}
We note that each of these is coprime to its respective modulus apart from possible common factors between $\ell$ and $r'$, and that $\xi_1',\xi_{1'}',\xi_2',\xi_3'$ simplify to
\begin{align*}
\xi_1'&=e\Bigl(\frac{\overline{n_1}k_1}{r_1}\Bigr),\qquad
\xi_{1'}'&=e\Bigl(\frac{\overline{n_1'}k_1'}{r_1}\Bigr),\qquad
\xi_2'&=e\Bigl(\frac{\overline{n_2}k_2}{r'}\Bigr), \qquad
\xi_3'&=e\Bigl(\frac{\overline{n_3}k_3}{r'}\Bigr).
\end{align*}
By M\"obius inversion and then Lemma \ref{lmm:InverseCompletion}, for $L_1':=(\log{x})^5 q_0 f_1'r_1/N$ we have that
\begin{align}
& \sum_{\substack{n_1'\\ n_1'\overline{b_{e_1'r_1}}\equiv n_1\overline{b_{e_1 r_1}}\Mod{q_0}\\ (n_1',e_1'r_1)=1}}\psi\Bigl(\frac{n_1'}{N}\Bigr)\xi_{1'}'=\sum_{f_1'|e_1'}\mu(f_1') \sum_{\substack{n_1'\\ n_1'\overline{b_{e_1'r_1}}\equiv n_1\overline{b_{e_1 r_1}}\Mod{q_0}\\ (n_1',r_1)=1\\ f_1'|n_1'}}\psi\Bigl(\frac{n_1'}{N}\Bigr)\xi_{1'}'\nonumber\\
&\qquad=\sum_{f_1'|e_1'} \frac{\mu(f_1')N}{q_0f_1'r_1}\sum_{|\ell_1'|\le L_1' }\hat{\psi}\Bigl(\frac{N\ell_1'}{q_0f_1'r_1}\Bigr)e\Bigl(\frac{\ell_1' n_1 b_{e_1'r_1}\overline{b_{e_1 r_1} f_1'r_1}}{q_0}\Bigr)S(k_1'\overline{f_1'},\ell_1'\overline{q_0};r_1)\nonumber\\
&\qquad\qquad+O(x^{-100}).\label{eq:N1'Sum}
\end{align}
Similarly, we find that for $L_2:=(\log{x})^5 q_0f_2r'/N$ we have
\begin{align}
&\sum_{\substack{n_2\\ n_2\overline{b'_{e_2r'}}\equiv n_1\overline{b_{e_1r_1}}\Mod{q_0}\\ (n_2,e_2r')=1}}\psi\Bigl(\frac{n_2}{N}\Bigr)\xi_{2}'=\sum_{f_2|e_2}\mu(f_2)\sum_{\substack{n_2\\ n_2\overline{b'_{e_2r'}}\equiv n_1\overline{b_{e_1 r_1}}\Mod{q_0}\\ (n_2,r')=1\\ f_2|n_2}}\psi\Bigl(\frac{n_2}{N}\Bigr)\xi_{2}'\nonumber\\
&\qquad=\sum_{f_2|e_2}\frac{\mu(f_2)N}{q_0f_2 r'}\sum_{|\ell_2|\le L_2}\hat{\psi}\Bigl(\frac{N\ell_2}{q_0f_2 r'}\Bigr)e\Bigl(\frac{\ell_2 n_1 b'_{e_2r'} \overline{b_{e_1 r_1}f_2 r'}}{q_0}\Bigr)S(k_2\overline{f_2},\ell_2\overline{q_0};r')\nonumber\\
&\qquad\qquad+O(x^{-100}).\label{eq:N2Sum}
\end{align}
Finally, setting $L_3:=(\log{x})^5 q_0 f_3r'/N$, for the $n_3$ sum we have
\begin{align}
&\sum_{\substack{n_3\\ n_3\overline{b'_{e_3r'}}\equiv n_1\overline{b_{e_1r_1}}\Mod{q_0}\\ (n_3,e_3r')=1}}\psi\Bigl(\frac{n_3}{N}\Bigr)\xi_{3}'=\sum_{f_3|e_3}\mu(f_3)\sum_{\substack{n_3\\ n_3\overline{b'_{e_3r'}}\equiv n_1\overline{b_{e_1r_1}}\Mod{q_0}\\ (n_3,r')=1\\ f_3|n_3}}\psi\Bigl(\frac{n_3}{N}\Bigr)\xi_{3}'\nonumber\\
&\qquad=\sum_{f_3|e_3}\frac{\mu(f_3)N}{q_0 f_3 r'}\sum_{|\ell_3|\le L_3}\hat{\psi}\Bigl(\frac{N\ell_3}{q_0 f_3 r'}\Bigr)e\Bigl(\frac{\ell_3 n_1 b'_{e_3r'} \overline{b_{e_1r_1} f_3 r'}}{q_0}\Bigr)S(k_3\overline{f_3},\ell_3\overline{q_0};r')\nonumber\\
&\qquad\qquad+O(x^{-100}).\label{eq:N3Sum}
\end{align}
Substituting \eqref{eq:N1'Sum}, \eqref{eq:N2Sum} and \eqref{eq:N3Sum} into \eqref{eq:S9'1} and swapping the order of summation then gives
\begin{align}
\mathscr{S}_5'&=\frac{N^3}{q_0^3 r_1 r'{}^2}\sum_{\substack{f_1'|e_1'\\ f_2|e_2 \\ f_3|e_3}}\frac{\mu(f_1')\mu(f_2)\mu(f_3)}{f_1'f_2f_3}\sum_{\substack{|\ell_1'|\le L_1'\\ |\ell_2|\le L_2 \\ |\ell_3|\le L_3}}\kappa_{\ell_1',\ell_2,\ell_3}\mathscr{S}_5''+O(x^{-10}),\label{eq:S5'1}
\end{align}
where $\mathscr{S}_5''=\mathscr{S}_5''(\ell_1',\ell_2,\ell_3)$ and $\kappa_{\ell_1',\ell_2,\ell_3}$ are given by
\begin{align*}
\mathscr{S}_5''&:=\sum_{(n_1,q_0 e_1 r_1)=1}\psi\Bigl(\frac{n_1}{N}\Bigr)\xi_1' e\Bigl(\frac{n_1 \overline{b_{e_1r_1}}(\ell_3 b'_{e_3r'}\overline{f_3r'}+\ell_2 b'_{e_2r'}\overline{f_2 r'}+\ell_1' b_{e_1'r_1}\overline{f_1' r_1})}{q_0}\Bigr),\\
\kappa_{\ell_1',\ell_2,\ell_3}&:=\hat{\psi}\Bigl(\frac{N\ell_1'}{q_0f_1'r_1}\Bigr)\hat{\psi}\Bigl(\frac{N\ell_2}{q_0f_2 r'}\Bigr)\hat{\psi}\Bigl(\frac{N\ell_3}{q_0 f_3 r'}\Bigr)S(k_1'\overline{f_1'},\ell_1'\overline{q_0};r_1)S(k_2\overline{f_2},\ell_2\overline{q_0};r')S(k_3\overline{f_3},\ell_3\overline{q_0};r').
\end{align*}
Since $(k_1',r_1)=1$ and $(k_2,r')=(k_3,r')=(\ell,r')$, and we only consider $\tau(r_i)\le (\log{x})^{B}$ (from the conditions $e_1 r_1,r_2,r_3\in\mathcal{R}$), by the standard Kloosterman sum bound (Lemma \ref{lmm:Kloosterman}) we have
\[
\kappa_{\ell_1',\ell_2,\ell_3}\ll (\log{x})^{3B} r_1^{1/2} r'(\ell,r').
\]
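For reference, the estimate being invoked here is presumably the standard Weil bound (the content of Lemma \ref{lmm:Kloosterman}), stated for the complete Kloosterman sum:

```latex
\[
S(m,n;c):=\sum_{\substack{x\Mod{c}\\ (x,c)=1}}e\Bigl(\frac{mx+n\overline{x}}{c}\Bigr),
\qquad |S(m,n;c)|\le \tau(c)\,\gcd(m,n,c)^{1/2}\,c^{1/2}.
\]
% Applied with (k_1', r_1) = 1 and (k_2, r') = (k_3, r') = (\ell, r'), the three
% Kloosterman factors contribute r_1^{1/2} * (r'^{1/2} (\ell, r')^{1/2})^2
% = r_1^{1/2} r' (\ell, r'), with the \tau-factors absorbed into (\log x)^{3B}.
```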
In the special case when $\ell_1'=0$ we see that $S(k_1'\overline{f_1'},\ell_1'\overline{q_0};r_1)$ is a Ramanujan sum and so of size $O(1)$. Thus we also have the bound
\[
\kappa_{0,\ell_2,\ell_3}\ll (\log{x})^{2B} r'(\ell,r').
\]
Separating the $\ell_1'=0$ term and substituting these bounds into our expression \eqref{eq:S5'1} for $\mathscr{S}_5'$ gives
\begin{align}
\mathscr{S}_5' &\ll \frac{(\log{x})^{3B} N^3}{q_0^3 r_1^{1/2} r'}\sum_{\substack{f_1'|e_1'\\ f_2|e_2\\ f_3|e_3}}\frac{(\ell,r') }{f_1'f_2f_3}\Bigl(\sum_{\substack{0<|\ell_1'|\le L_1'\\ |\ell_2|\le L_2 \\ |\ell_3|\le L_3}}|\mathscr{S}_5''|+ \frac{1}{ r_1^{1/2} }\sum_{\substack{\ell_1'=0\\ |\ell_2|\le L_2 \\ |\ell_3|\le L_3}}|\mathscr{S}_5''|\Bigr)+O(x^{-10}).\label{eq:S9'2}
\end{align}
By Lemma \ref{lmm:InverseCompletion} again, we see that for $L_1:=(\log{x})^5 q_0f_1 r_1/N$ and for any $c\in \mathbb{Z}$
\begin{align*}
&\sum_{(n_1,q_0e_1r_1)=1}\psi\Bigl(\frac{n_1}{N}\Bigr)\xi_1' e\Bigl(\frac{n_1 c }{q_0}\Bigr)=\sum_{f_1|e_1}\mu(f_1)\sum_{\substack{(n_1,q_0r_1)=1\\ f_1|n_1}}\psi\Bigl(\frac{n_1}{N}\Bigr)\xi_1' e\Bigl(\frac{n_1 c }{q_0}\Bigr)\\
&=\sum_{f_1|e_1}\frac{\mu(f_1)N}{q_0f_1 r_1}\sum_{|\ell_1|\le L_1}\hat{\psi}\Bigl(\frac{\ell_1 N}{q_0f_1 r_1}\Bigr)S(k_1\overline{f_1},\ell_1\overline{q_0};r_1)\sum_{\substack{b\Mod{q_0}\\ (b,q_0)=1}}e\Bigl(\frac{b (c f_1+\ell_1\overline{r_1})}{q_0}\Bigr)\\
&\qquad +O(x^{-100})\\
&\ll \frac{N}{q_0 r_1}\sum_{f_1|e_1}\frac{1}{f_1}\sum_{\substack{|\ell_1|\le (\log{x})^5 q_0 f_1 r_1/N}}r_1^{1/2}\tau(r_1)(\ell_1+cf_1r_1,q_0)\\
&\ll \frac{(\log{x})^{2B+5} N}{r_1^{1/2} }\Bigl(\frac{q_0+q_0 r_1/N}{q_0}\Bigr).
\end{align*}
Here we used the standard Kloosterman sum bound (Lemma \ref{lmm:Kloosterman}) in the penultimate line. By assumption of the lemma, we have that $N>R$ so $q_0>q_0 r_1/N$. Thus we find that
\begin{equation}
\mathscr{S}_5''\ll (\log{x})^{2B+5}\frac{N}{r_1^{1/2}}.
\end{equation}
Substituting this into \eqref{eq:S9'2}, and recalling that $r'\sim R_2'/E_2$, $q_0\sim QR_0$ and $r_1\sim R_1''$ gives
\begin{align}
\mathscr{S}_5'&\ll \frac{(\log{x})^{5B+5} N^4}{q_0^3 r_1 r'}\sum_{\substack{f_1'|e_1'\\ f_2|e_2\\ f_3|e_3}}\frac{(\ell,r')(1+L_2)(1+L_3)}{f_1'f_2f_3}\Bigl(L_1'+\frac{1}{r_1^{1/2}}\Bigr)\nonumber\\
&\ll (\ell,r')\frac{(\log{x})^{8B+20}N^4 E_2}{Q^3 R_0^3 R_1'' R_2'}\Bigl(1+\frac{Q R_0 R_2'}{N E_2}\Bigr)^2\Bigl(\frac{Q R_0 R_1''}{N}+\frac{1}{ R_1''{}^{1/2}}\Bigr)\nonumber\\
&\ll (\ell,r') (\log{x})^{8B+20}\Bigl(N R_2'+\frac{N^3}{Q^2 R_0^2}\Bigr).\label{eq:S9'3}
\end{align}
In the final line above we used the fact that $R_1''>N^{2/3}/(Q R_0)^{2/3}$ as assumed in the statement of the lemma to conclude $Q R_0 R_1''/N\gg R_1''{}^{-1/2}$ and the fact that $E_2\ll R_2'$ to simplify one term. Substituting \eqref{eq:S9'3} into \eqref{eq:S91} then gives
\begin{align}
\mathscr{S}_5&\ll (\log{x})^{O_B(1)}\Bigl(N R_2'+\frac{N^3}{Q^2 R_0^2}\Bigr)\sup_{E_2\le R_2'}\sum_{e_1,e_1'\sim E_1}\sum_{\substack{e_2,e_3\asymp E_2\\ (e_2e_3,e_1 e_1')=1\\ (e_2,e_3)=1}}\sum_{\substack{r'\sim R_2'/E_2\\ e_2r'\in\mathcal{R}\\ e_3r'\in\mathcal{R}\\ (r',e_1 e_1')=1}}\nonumber\\
&\qquad\times \sum_{\substack{r_1\sim R_1''\\ e_1r_1, e_1'r_1\in\mathcal{R} \\ (r_1,e_2e_3r')=1 }}\mathop{\sideset{}{^*}\sum}_{\substack{h_2,h_3\sim H\\ h_2e_3\equiv h_3e_2\Mod{e_1}}}\sum_{\substack{\ell\ll H'' E_1 /E_2\\ h_2 e_1'-e_2\ell\equiv 0\Mod{e_1}}}(\ell,r').
\label{eq:S92}
\end{align}
We now consider the summation above. We recall that $(h_2r_3-h_3r_2,r_1)=1$ and that $r_1\sim R_1''>1$, so $h_2r_3-h_3r_2=(h_2e_3-h_3e_2)r'\ne 0$. Thus $h_2e_3\ne h_3e_2$, and so the condition $e_1|h_2e_3-h_3e_2$ is not vacuous. Thus for any choice of $h_2,e_3,h_3,e_2$ there are at most $\tau(h_2e_3-h_3e_2)$ choices of $e_1$. Given a choice of $\ell,h_2,e_2,e_1$ there are $O(1)$ choices of $e_1'$ satisfying $e_1'\equiv e_2\ell \overline{h_2}\Mod{e_1}$ with $e_1'\asymp e_1$. (Recall our summation is restricted to $(h_2,e_1)=1$.) Thus we see that
\begin{align*}
&\sum_{\substack{h_2,h_3\sim H''}}\sum_{\substack{e_2,e_3\asymp E_2\\ h_3e_2\ne h_2e_3}}\sum_{\substack{e_1\sim E_1\\ e_1|h_2e_3-h_3e_2\\ (h_2,e_1)=1}}\sum_{\ell\ll H''E_1/E_2}\sum_{r'\sim R_2'/E_2}(\ell,r')\sum_{\substack{e_1'\sim E_1\\ e_1'\equiv e_2\ell \overline{h_2}\Mod{e_1}}}\sum_{r_1\sim R_1''}1\\
&\ll \sum_{h_2,h_3\sim H''}\sum_{\substack{e_2,e_3\sim E_2\\ h_3e_2\ne h_2e_3}}\tau(h_3e_2-h_2e_3)\frac{H'' E_1}{E_2}(\log{x})^{O(1)} \frac{ R_2'}{E_2} R_1''\\
&\ll (\log{x})^{O(1)} R_1'' R_2' H''{}^3 E_1.
\end{align*}
Substituting this into \eqref{eq:S92}, and using the bound $H''\ll (\log{x})^5 N Q R_0 E_1 R_1''R_2'/x$ (with $E_1 R_1'',R_2'\ll R/R_0$) gives
\begin{align}
\mathscr{S}_5&\ll (\log{x})^{O_B(1)}\Bigl(N R_2'+\frac{N^3}{Q^2 R_0^2}\Bigr)R_1'' R_2' H''{}^3 E_1\nonumber\\
&\ll (\log{x})^{O_B(1)}\frac{N^4 (R_1''R_2'E_1 )^2}{Q^3 R_0^2}\Bigl(\frac{Q^6 R^5}{x^3}+\frac{N^2 Q^4 R^4}{x^3}\Bigr).\label{eq:S93}
\end{align}
We wish to show that $\mathscr{S}_5\ll (R_1'' R_2' E_1 N^2)^2/((\log{x})^{A} Q^3 R_0)$. Recalling that $QR=x^{1/2+\delta}$, \eqref{eq:S93} gives this if
\begin{align}
N&<\frac{x^{1/2-2\delta}}{(\log{x})^C},\\
R&>x^{6\delta}(\log{x})^C,
\end{align}
for $C=C(A,B)$ sufficiently large in terms of $A,B$. This gives the result.
\end{proof}
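As an external sanity check (not part of the proof): writing $N=x^{\nu}$, $R=x^{\rho}$ and using $QR=x^{1/2+\delta}$, the two terms $Q^6R^5/x^3$ and $N^2Q^4R^4/x^3$ appearing in \eqref{eq:S93} can be tracked with exact rational arithmetic, recovering the thresholds $R>x^{6\delta}$ and $N<x^{1/2-2\delta}$ up to logarithmic factors. A minimal stdlib sketch:

```python
from fractions import Fraction as F

# Exponents of x stored as (constant, coefficient-of-delta) pairs of Fractions.
# We use Q*R = x^(1/2 + delta), R = x^rho, N = x^nu, and ignore log factors.
half_plus_delta = (F(1, 2), F(1))

# Term 1: Q^6 R^5 / x^3 = (QR)^6 / (x^3 R) has exponent 6*(1/2 + delta) - 3 - rho,
# which is negative exactly when rho > 6*delta:
rho_threshold = (6 * half_plus_delta[0] - 3, 6 * half_plus_delta[1])
print(rho_threshold)  # (Fraction(0, 1), Fraction(6, 1)), i.e. rho > 6*delta

# Term 2: N^2 Q^4 R^4 / x^3 = N^2 (QR)^4 / x^3 has exponent 2*nu + 4*(1/2 + delta) - 3,
# which is negative exactly when nu < (3 - 4*(1/2 + delta))/2 = 1/2 - 2*delta:
nu_threshold = ((3 - 4 * half_plus_delta[0]) / 2, -4 * half_plus_delta[1] / 2)
print(nu_threshold)  # (Fraction(1, 2), Fraction(-2, 1)), i.e. nu < 1/2 - 2*delta
```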
Finally, we consider $\mathscr{S}_6$.
\begin{lmm}[Off-diagonal terms]\label{lmm:OffDiag}
Let $A,B>0$ and let $\mathscr{S}_{6}=\mathscr{S}_6(B)$ be as in Lemma \ref{lmm:SecondCauchy}. Let $R_1''$ and $R$ satisfy
\begin{align*}
R_1''&\ge \Bigl(\frac{N}{ Q R_0}\Bigr)^{2/3},\\
R&\ll \frac{x^{1/10-3\delta}}{(\log{x})^C}
\end{align*}
for some suitably large constant $C=C(A,B)$. Then we have that
\[
\sum_{q\sim Q}\sum_{r_0\sim R_0}\mathscr{S}_{6}\ll \frac{ (E_1 R_1'' R_2')^2 N^4}{(\log{x})^{A} Q^2}.
\]
\end{lmm}
\begin{proof}
To simplify notation, let us set $k_0\Mod{q_0}$, $k_1\Mod{r_1}$, $k_1'\Mod{r_1'}$, $k_2\Mod{r_2}$ and $k_3\Mod{r_3}$ to be
\begin{align*}
k_0&:=\overline{e_1 r_1}(h_2\overline{r_2}-h_3\overline{r_3})-\overline{e_1' r_1'}(h_2'\overline{r_2}-h_3'\overline{r_3}),\\
k_1&:=b_{e_1r_1}(h_2\overline{r_2}-h_3\overline{r_3})\overline{e_1 q_0},\\
k_1'&:=-b_{e_1'r_1'}(h_2'\overline{r_2}-h_3'\overline{r_3})\overline{e_1'q_0},\\
k_2&:=b'_{r_2}(h_2\overline{e_1 r_1}-h_2'\overline{e_1' r_1'})\overline{q_0},\\
k_3&:=-b'_{r_3}(h_3\overline{e_1 r_1}-h_3'\overline{e_1' r_1'})\overline{q_0}.
\end{align*}
We will detect cancellation in the inner sum over $n_1,n_1',n_2,n_3$, and so we write
\begin{align}
\mathscr{S}_6&\ll\sum_{e_1,e_1'\sim E_1}\sum_{\substack{r_2,r_3\sim R_2'\\ r_2,r_3\in\mathcal{R}}}\sum_{\substack{r_1,r_1'\sim R_1''\\ e_1r_1,e_1'r_1'\in\mathcal{R}\\ (e_1e_1'r_1r_1',r_2r_3)=1}}\mathop{\sideset{}{^*}\sum}_{\substack{h_2,h_3,h_2',h_3'\sim H''\\ e_1'r_1'(h_2r_3-h_3r_2)\ne e_1r_1(h_2'r_3-h_3'r_2)}}|\mathscr{S}_6'|,\label{eq:S61}
\end{align}
where $\mathscr{S}_6'=\mathscr{S}_6'(e_1,e_1',r_1,r_1',r_2,r_3,h_2,h_3,h_2',h_3')$ is given by
\begin{align}
\mathscr{S}_6&':=\sum_{\substack{n_1,n_1',n_2,n_3\\ n_1'\overline{b_{e_1'r_1'}}\equiv n_1\overline{b_{e_1 r_1}}\Mod{q_0} \\ n_2\overline{b'_{r_2}}\equiv n_1\overline{b_{e_1 r_1}}\Mod{q_0} \\ n_3\overline{b'_{r_3}}\equiv n_1\overline{b_{e_1 r_1}}\Mod{q_0} \\ (n_1,q_0 e_1 r_1)=(n_1',e_1' r_1')=1\\ (n_2,r_2)=(n_3,r_3)=1}}\psi\Bigl(\frac{n_1}{N}\Bigr)\psi\Bigl(\frac{n_1'}{N}\Bigr)\psi\Bigl(\frac{n_2}{N}\Bigr)\psi\Bigl(\frac{n_3}{N}\Bigr)\xi'',\label{eq:S6'1}\\
\xi''&:=e\Bigl(\frac{k_0\overline{n_1}b_{e_1 r_1}}{q_0}\Bigr)e\Bigl(\frac{k_1\overline{n_1}}{r_1}\Bigr)e\Bigl(\frac{k_1'\overline{n_1'}}{r_1'}\Bigr)e\Bigl(\frac{k_2\overline{n_2}}{r_2}\Bigr)e\Bigl(\frac{k_3\overline{n_3}}{r_3}\Bigr).\nonumber
\end{align}
We Fourier-complete the summation over $n_1',n_2,n_3$ in turn. As in the proof of Lemma \ref{lmm:Diag2}, Lemma \ref{lmm:InverseCompletion} gives that for $L_1':=(\log{x})^5 q_0 f_1' r_1'/N$
\begin{align}
\sum_{\substack{n_1'\\ n_1'\overline{b_{e_1'r_1'}}\equiv n_1\overline{b_{e_1 r_1}}\Mod{q_0}\\ (n_1',e_1'r_1')=1}}&\psi\Bigl(\frac{n_1'}{N}\Bigr)e\Bigl(\frac{k_1'\overline{n_1'}}{r_1'}\Bigr)=O(x^{-100})\nonumber\\
&\hspace{-2.1cm}+\sum_{f_1'|e_1'}\frac{\mu(f_1')N}{q_0 f_1' r_1'}\sum_{|\ell_1'|\le L_1'}\hat{\psi}\Bigl(\frac{\ell_1' N}{q_0f_1' r_1'}\Bigr)S(k_1'\overline{f_1'},\ell_1'\overline{q_0};r_1')e\Bigl(\frac{\ell_1' b_{e_1'r_1'}\overline{b_{e_1 r_1}f_1' r_1'}n_1}{q_0}\Bigr).
\end{align}
Similarly, we obtain for $L_2:=(\log{x})^5 q_0 r_2/N$ and $L_3:=(\log{x})^5 q_0 r_3/N$
\begin{align}
\sum_{\substack{n_2\\ n_2\overline{b'_{r_2}}\equiv n_1\overline{b_{e_1 r_1}}\Mod{q_0}\\ (n_2,r_2)=1}}&\psi\Bigl(\frac{n_2}{N}\Bigr)e\Bigl(\frac{k_2\overline{n_2}}{r_2}\Bigr)=O(x^{-100})\nonumber\\
&+\frac{N}{q_0 r_2}\sum_{|\ell_2|\le L_2}\hat{\psi}\Bigl(\frac{\ell_2 N}{q_0 r_2}\Bigr)S(k_2,\ell_2\overline{q_0};r_2)e\Bigl(\frac{\ell_2 b'_{r_2}\overline{b_{e_1 r_1}r_2}n_1}{q_0}\Bigr),\\
\sum_{\substack{n_3\\ n_3\overline{b'_{r_3}}\equiv n_1\overline{b_{e_1 r_1}}\Mod{q_0}\\ (n_3,r_3)=1}}&\psi\Bigl(\frac{n_3}{N}\Bigr)e\Bigl(\frac{k_3\overline{n_3}}{r_3}\Bigr)=O(x^{-100})\nonumber\\
&+\frac{N}{q_0 r_3}\sum_{|\ell_3|\le L_3}\hat{\psi}\Bigl(\frac{\ell_3 N}{q_0 r_3}\Bigr)S(k_3,\ell_3\overline{q_0};r_3)e\Bigl(\frac{\ell_3 b'_{r_3}\overline{b_{e_1 r_1}r_3}n_1}{q_0}\Bigr).
\end{align}
We substitute each of these expressions into \eqref{eq:S6'1}. In each case the $O(x^{-100})$ error term contributes negligibly. Thus we obtain
\begin{align}
\mathscr{S}_6'&=\frac{N^3}{q_0^3 r_1'r_2r_3}\sum_{f_1'|e_1'}\frac{\mu(f_1')}{f_1'}\sum_{\substack{|\ell_1'|\le L_1'\\ |\ell_2|\le L_2 \\ |\ell_3|\le L_3}} \kappa'_{\ell_1',\ell_2,\ell_3}\sum_{n_1}\psi\Bigl(\frac{n_1}{N}\Bigr)\xi'''+O(x^{-10}),
\end{align}
where
\begin{align*}
\kappa'_{\ell_1',\ell_2,\ell_3}&:=\hat{\psi}\Bigl(\frac{\ell_1' N}{q_0f_1'r_1'}\Bigr)\hat{\psi}\Bigl(\frac{\ell_2 N}{q_0r_2}\Bigr)\hat{\psi}\Bigl(\frac{\ell_3 N}{q_0r_3}\Bigr)S(k_1'\overline{f_1'},\ell_1'\overline{q_0};r_1')S(k_2,\ell_2\overline{q_0};r_2)S(k_3,\ell_3\overline{q_0};r_3),\\
\xi'''&:=e\Bigl(\frac{b_{e_1 r_1}\overline{n_1}k_0}{q_0}\Bigr) e\Bigl(\frac{n_1\overline{b_{e_1 r_1}}(\ell_2 b'_{r_2}\overline{r_2}+\ell_3 b'_{r_3}\overline{r_3} +\ell_1' b_{e_1'r_1'}\overline{f_1'r_1'} )}{q_0}\Bigr)e\Bigl(\frac{k_1\overline{n_1}}{r_1}\Bigr).
\end{align*}
By Lemma \ref{lmm:InverseCompletion} again with $L_1=L_1(f_1):=(\log{x})^5 f_1 q_0 r_1/N$
\begin{align*}
\sum_{(n_1,q_0 e_1 r_1)=1}\psi\Bigl(\frac{n_1}{N}\Bigr)\xi'''&=\sum_{f_1| e_1 }\frac{\mu(f_1)N}{q_0 f_1 r_1}\sum_{|\ell_1|\le L_1}\hat{\psi}\Bigl(\frac{\ell_1 N}{q_0 f_1 r_1}\Bigr)S(k_1\overline{f_1},\ell_1\overline{q_0};r_1)S(k_0, \widetilde{k_0};q_0)\\
&\qquad +O(x^{-100}),
\end{align*}
where $\widetilde{k_0}=\widetilde{k_0}(\ell_1,\ell_1',\ell_2,\ell_3)\Mod{q_0}$ is given by
\[
\widetilde{k_0}:=\ell_1 b_{e_1 r_1}\overline{f_1r_1}+\ell_2 b'_{r_2}\overline{r_2}+\ell_3 b'_{r_3}\overline{r_3}+\ell_1' b_{e_1'r_1'}\overline{f_1'r_1'}.
\]
Let $k_0',k_2',k_3',d_0,d_2',d_3'\in\mathbb{Z}$ be defined by
\begin{align*}
k_0'&:=e_1' r_1'(h_2r_3-h_3r_2)-e_1 r_1(h_2'r_3-h_3'r_2),\quad &d_0&:=\gcd(k_0',r_0),\\
k_2'&:=h_2e_1'r_1'-h_2'e_1r_1,&d_2'&:=\gcd(k_2',r_2),\\
k_3'&:=h_3e_1'r_1'-h_3'e_1r_1,&d_3'&:=\gcd(k_3',r_3).
\end{align*}
The standard Kloosterman sum bound (Lemma \ref{lmm:Kloosterman}) then gives $S(k_i,\ell_i;r_i)\ll r_i^{1/2}d_i'{}^{1/2}\tau(r_i)$ for $i\in\{2,3\}$, and $S(k_0,\widetilde{k_0};q_0)\ll q_0^{1/2}d_0^{1/2}\tau(q_0)$ (for these terms we ignore potential savings from when $\ell_2,\ell_3\ne 0$). Since $(r_1,h_2r_3-h_3r_2)=1$ and $(r_1',h_2'r_3-h_3'r_2)=1$ and $\tau(r_1),\tau(r_1')\le (\log{x})^B$, we have that $S(k_1,\ell_1;r_1),S(k_1',\ell_1';r_1')\ll R_1''{}^{1/2}(\log{x})^B$ and $S(k_1,0;r_1),S(k_1',0;r_1')\ll 1$. Thus, separating the terms with $\ell_1=0$ or $\ell_1'=0$, we find that
\begin{align}
\mathscr{S}_6'&\ll (\log{x})^{3B}\frac{N^4 R_2' q_0^{1/2}d_2'{}^{1/2}d_3'{}^{1/2} d_0^{1/2}}{q_0^4R_1''{}^2 R_2'{}^2}\Bigl(\sum_{f_1|e_1}\frac{1}{f_1}\sum_{\substack{|\ell_1|\le L_1}}|S(k_1,\ell_1,r_1)|\Bigr)\nonumber\\
&\qquad \times \Bigl(\sum_{f_1'|e_1'}\frac{1}{f_1'}\sum_{\substack{|\ell_1'|\le L_1'}}|S(k_1',\ell_1',r_1')|\Bigr)\Bigl(\sum_{\substack{|\ell_2|\le L_2\\ |\ell_3|\le L_3}}1\Bigr)\nonumber\\
&\ll (\log{x})^{7B+20}\frac{N^4 q_0^{1/2} d_2'{}^{1/2} d_3'{}^{1/2} d_0^{1/2}}{q_0^4R_1''{}^2 R_2'}\Bigl(1+\frac{ q_0 R_2'}{N}\Bigr)^2\Bigl(1+\frac{ q_0 R_1''}{N}R_1''{}^{1/2}\Bigr)^2.
\end{align}
Since $R_1''\gg N^{2/3}/(Q R_0)^{2/3}$ and $R_2'\le R$ with $R>N/(Q R_0)$, this simplifies to
\begin{align}
\mathscr{S}_6'&\ll (\log{x})^{7B+20} \frac{R_1'' R^2 Q^{1/2} R_0^{1/2} d_2'{}^{1/2}d_3'{}^{1/2} d_0^{1/2}}{R_2'}.
\end{align}
Substituting this into \eqref{eq:S61}, we see that
\begin{align}
\sum_{q\sim Q}\sum_{r_0\sim R_0}&\mathscr{S}_{6}\ll (\log{x})^{7B+20} \frac{R_1'' R^2 Q^{1/2} R_0^{1/2}}{R_2'}\sum_{e_1,e_1'\sim E_1} \sum_{\substack{r_1,r_1'\sim R_1''\\ e_1r_1,e_1'r_1'\in\mathcal{R}}}\sum_{\substack{r_2,r_3\sim R_2'\\ r_2,r_3\in\mathcal{R}\\ (r_2r_3,e_1e_1'r_1r_1')=1}}\nonumber\\
&\qquad \times\mathop{\sideset{}{^*}\sum}_{\substack{h_2,h_3,h_2',h_3'\sim H''\\ k_0'\ne 0}} \sum_{q_0\sim Q R_0}\tau(q_0)d_0^{1/2}d_2'{}^{1/2}d_3'{}^{1/2}\nonumber\\
&\ll (\log{x})^{O_B(1)}\frac{ R_1'' R^2 Q^{3/2} R_0^{3/2} }{R_2'}\sum_{e_1,e_1'\sim E_1} \sum_{\substack{r_1,r_1'\sim R_1''\\ r_2,r_3\sim R_2'}}\mathop{\sideset{}{^*}\sum}_{\substack{h_2,h_3,h_2',h_3'\sim H''\\ k_0'\ne 0}}d_2'{}^{1/2}d_3'{}^{1/2}\tau(k_0').\label{eq:S62}
\end{align}
\end{align}
Here we used the fact that $k_0'\ne 0$ for terms counted by $\mathscr{S}_{6}$ to bound the sum over $q_0$. With later estimates in mind, we will work a little bit harder than immediately necessary to produce a bound which is stronger in the $E_1$ aspect than directly required.
We first consider the terms with $k_2',k_3'\ne 0$. By the bound $d_2'{}^{1/2}d_3'{}^{1/2}\ll d_2'+d_3'$ and symmetry in $r_2,r_3$, it suffices to just consider $d_2'$ in place of $d_2'{}^{1/2}d_3'{}^{1/2}$ for these terms. We recall that the summation is constrained by $e_1|h_2r_3-h_3r_2$ and $e_1'|h_2'r_3-h_3'r_2$. Thus $r_3\equiv h_3r_2\overline{h_2}\Mod{e_1}$ and $r_3\equiv h_3'r_2\overline{h_2'}\Mod{e_1'}$. Thus we see that, using Lemma \ref{lmm:Divisor}
\begin{align*}
&\sum_{e_1,e_1'\sim E_1} \sum_{\substack{r_1,r_1'\sim R_1''\\ r_2,r_3\sim R_2'}}\mathop{\sideset{}{^*}\sum}_{\substack{h_2,h_3,h_2',h_3'\sim H''\\ k_2',k_3',k_0'\ne 0}}d_2'{}^{1/2}d_3'{}^{1/2}\tau(k_0')\\
&\ll \sum_{e_1,e_1'\sim E_1}\sum_{r_1,r_1'\sim R_1''} \sum_{\substack{h_2,h_2',h_3,h_3'\ll H''}}\sum_{r_2\sim R_2'}d_2'\sum_{\substack{r_3\sim R_2' \\ r_3\equiv \overline{h_2'}h_3'r_2\Mod{e_1'}\\ r_3\equiv \overline{h_2}h_3r_2\Mod{e_1}\\ k_2',k_3',k_0'\ne 0}}\tau(k_0')\\
&\ll (\log{x})^{O_B(1)}R_2'\sum_{e_1,e_1'\sim E_1}\Bigl(\frac{R_2'(e_1,e_1')}{E_1^2}+x^{o(1)}\Bigr)\sum_{r_1,r_1'\sim R_1''} \sum_{\substack{h_2,h_2',h_3,h_3'\ll H''\\ k_2'\ne 0}}\tau(k_2')\\
&\ll (\log{x})^{O_B(1)}R_2'\Bigl(R_2'+E_1^2 x^{o(1)}\Bigr) R_1''{}^2 H''{}^4.
\end{align*}
Substituting this into \eqref{eq:S62} and using the bound $H''\ll (\log{x})^5 N Q R R_2'/x\ll (\log{x})^5 N Q R^2/(x R_0)$, we see the terms with $k_2',k_3'\ne 0$ contribute to \eqref{eq:S62} a total
\begin{align}
&\ll (\log{x})^{O_B(1)} \frac{R_1'' R^2 Q^{3/2} R_0^{3/2}}{R_2'} R_2'\Bigl(R_2'+E_1^2 x^{o(1)}\Bigr) R_1''{}^2 H''{}^4\nonumber\\
&\ll (\log{x})^{O_B(1)} \frac{R_1''{}^3 R_2'{}^2 Q^{11/2} N^4 R^9}{x^4 }+\frac{R_1''{}^3 R_2'{}^2 Q^{11/2} N^4 R^8 E_1^2}{x^{4-o(1)} }.
\label{eq:S6Bound1}
\end{align}
We now consider the terms with $k_2'=0$ or $k_3'=0$. Since $k_0'=r_3k_2'-r_2k_3'\ne 0$ we cannot have both $k_2'=0$ and $k_3'=0$. By symmetry, it suffices to consider the case when $k_2'\ne 0$ and $k_3'=0$, so $d_3'=r_3\sim R_2'$ and $d_2'\le \gcd(h_2h_3'-h_2'h_3,r_2)$. Given a choice of $h_3,e_1'$ and $r_1'$, since $k'_3=0$ we see that $h_3'e_1r_1=h_3e_1'r_1'$ is fixed, so there are $\tau(h_3 e_1'r_1')$ choices of $h_3',e_1$ and $r_1$. Thus we see that
\begin{align*}
&\sum_{e_1,e_1'\sim E_1} \sum_{\substack{r_1,r_1'\sim R_1''\\ r_2,r_3\sim R_2'}}\mathop{\sideset{}{^*}\sum}_{\substack{h_2,h_3,h_2',h_3'\sim H''\\ k_2',k_0'\ne 0\\ k_3'=0}}d_2'{}^{1/2}d_3'{}^{1/2}\tau(k_0')\\
&\ll R_2'{}^{1/2}\sum_{e_1'\sim E_1}\sum_{r_1'\sim R_1''}\sum_{h_2,h_2',h_3\sim H''}\sum_{e_1r_1h_3'=e_1'r_1'h_3}\sum_{r_2\sim R_2'}d_2'{}^{1/2}\sum_{\substack{r_3\sim R_2'\\ r_3\equiv h_3r_2\overline{h_2}\Mod{e_1}\\k_0',k_2'\ne 0}}\tau(k_0')\\
&\ll \Bigl(\frac{R_2'}{E_1}+x^{o(1)}\Bigr)R_2'{}^{3/2}\sum_{e_1'\sim E_1}\sum_{r_1'\sim R_1''}\sum_{h_3\sim H''}\sum_{e_1r_1h_3'=e_1'r_1'h_3}\sum_{\substack{h_2,h_2'\sim H''\\ h_2h_3'\ne h_3h_2'}}\tau(h_2h_3'-h_3h_2')\\
&\ll \Bigl(\frac{R_2'}{E_1}+x^{o(1)}\Bigr)R_2'{}^{3/2}E_1 R_1'' H''{}^3.
\end{align*}
Substituting this into \eqref{eq:S62} and using the bound $H''\ll (\log{x})^5 N Q R R_2'/x\ll (\log{x})^5 N Q R^2/(x R_0)$, we see the terms with $k_2'k_3'=0$ contribute to \eqref{eq:S62} a total
\begin{align}
&\ll (\log{x})^{O_B(1)} \frac{R_1'' R^2 Q^{3/2} R_0^{3/2}}{R_2'}\Bigl(\frac{R_2'}{E_1}+x^{o(1)}\Bigr)R_2'{}^{3/2}E_1 R_1'' H''{}^3\nonumber\\
&\ll (\log{x})^{O_B(1)}\frac{ R_1''{}^2 R_2'{}^{2}N^3 Q^{9/2} R^{15/2}}{x^3}+\frac{E_1 R_1''{}^2 R_2'{}^{2}N^3 Q^{9/2} R^{13/2}}{x^{3-o(1)}}.
\label{eq:S6Bound2}
\end{align}
We see that together \eqref{eq:S6Bound1} and \eqref{eq:S6Bound2} give
\begin{align}
\sum_{q\sim Q}\sum_{r_0\sim R_0}\mathscr{S}_{6}
&\ll (\log{x})^{O_B(1)}\Bigl( \frac{R_1''{}^3 R_2'{}^2 Q^{11/2} N^4 R^9}{x^4 }+\frac{ R_1''{}^2 R_2'{}^{2}N^3 Q^{9/2} R^{15/2}}{x^3 }\Bigr)\nonumber\\
&\qquad +x^{o(1)}\Bigl(\frac{R_1''{}^3 R_2'{}^2 Q^{11/2} N^4 R^8 E_1^2}{x^{4} }+\frac{E_1 R_1''{}^2 R_2'{}^{2}N^3 Q^{9/2} R^{13/2}}{x^{3} }\Bigr)\label{eq:S6Intermediate}\\
&\ll (\log{x})^{O_{B}(1)}\Bigl(\frac{E_1^2 N^4 Q^{11/2} R_1''{}^2 R_2'{}^2 R^{10}}{x^4 }+ \frac{E_1^2 N^3 Q^{9/2} R_1''{}^2 R_2'{}^2 R^{15/2}}{x^3 }\Bigr)\nonumber\\
&\ll (\log{x})^{O_{B}(1)}\frac{E_1^2 N^4 Q^{11/2} R_1''{}^2 R_2'{}^2 R^{10}}{x^{4} }.\label{eq:S6Bound}
\end{align}
In the penultimate line we used the fact that $R>x^{2\epsilon}$ to see that the first two terms dominate after being multiplied by $E_1^2$, and in the final line we used the fact that $N Q R^2\ge Q^2 R^2\ge x$ to see that the first term dominates.
We recall that we want to show that $\sum_q\sum_{r_0}\mathscr{S}_{6}\ll (E_1R_1''R_2'N^2)^2/((\log{x})^{A} Q^2)$. Recalling that $QR=x^{1/2+\delta}$, we see that \eqref{eq:S6Bound} gives this if we have
\begin{align}
R<\frac{x^{1/10-3\delta}}{(\log{x})^C}
\end{align}
and $C=C(A,B)$ is sufficiently large in terms of $A$ and $B$. This finishes the proof.
\end{proof}
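Similarly for the off-diagonal terms: with $R=x^{\rho}$ and $Q=x^{1/2+\delta-\rho}$, the bound \eqref{eq:S6Bound} exceeds the target by the factor $Q^{15/2}R^{10}/x^4$, and exact rational arithmetic recovers the constraint $\rho<1/10-3\delta$ used in the lemma. A small stdlib check (again external to the argument):

```python
from fractions import Fraction as F

# Exponent of x in Q^(15/2) * R^10 / x^4 with Q = x^(1/2 + delta - rho), R = x^rho,
# written as the linear form a*rho + b*delta + c:
a = 10 - F(15, 2)             # coefficient of rho: 5/2
b = F(15, 2)                  # coefficient of delta: 15/2
c = F(15, 2) * F(1, 2) - 4    # constant term: -1/4

# The factor is a power saving (negative exponent) iff rho < -(c + b*delta)/a:
rho_const, rho_delta = -c / a, -b / a
print(rho_const, rho_delta)   # 1/10 -3, i.e. rho < 1/10 - 3*delta
```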
\begin{lmm}\label{lmm:MainConclusion}
Let $A,B>0$, let $B_2=B_2(A,B)$ be sufficiently large in terms of $A$ and $B$ and let $C=C(A,B,B_2)$ be sufficiently large in terms of $A,B$ and $B_2$. Let $Q,R,M,N\ge 1$ be such that $QR=x^{1/2+\delta}\ge x^{1/2}(\log{x})^{-A}$ and $NM\asymp x$ with
\[
Q x^{2\delta}(\log{x})^{C}< N < \frac{x^{1/2-3\delta}}{(\log{x})^C},\qquad x^{6\delta}(\log{x})^C\le R\le \frac{x^{1/10-3\delta}}{(\log{x})^C}.
\]
Let $\mathscr{S}$ be given by
\begin{align*}
\mathscr{S}&:=\sum_{q\sim Q}\sum_{\substack{r_1,r_2\sim R}}c_{q,r_1}\overline{c_{q,r_2}}\hspace{-0.2cm}\sum_{\substack{n_1,n_2\sim N\\ \tau(n_1),\tau(n_2)\le (\log{x})^{B_2}}}\hspace{-0.2cm}\alpha_{n_1}\overline{\alpha}_{n_2}\hspace{-0.2cm}\sum_{\substack{m\sim M\\mn_1\equiv a_{q,r_1} \Mod{qr_1}\\ m n_2\equiv a'_{q,r_2}\Mod{q r_2}}}\psi\Bigl(\frac{m}{M}\Bigr)
\end{align*}
for some 1-bounded coefficients $c_{q,r}$ supported on square-free $r$ with $(q,r)=1$ and $\tau(qr)\le (\log{x})^{B}$, and some coefficients $\alpha_n$ satisfying $|\alpha_n|\le \tau(n)^{B}$ and the Siegel-Walfisz condition \eqref{eq:SiegelWalfisz}, and some integer sequences $a_{q,r},a'_{q,r}$ satisfying $(a_{q,r},qr)=(a'_{q,r},qr)=1$. Then we have
\[
\mathscr{S}=\mathscr{S}_{MT}+O_{A,B}\Bigl(\frac{MN^2}{Q(\log{x})^{A}}\Bigr).
\]
where for some constant $C_1=C_1(A,B,B_2)$
\begin{align*}
\mathscr{S}_{MT}&:=\sum_{q\sim Q}\sum_{r_0\le N/((\log{x})^{C_1} Q)}\sum_{\substack{r_1',r_2'\sim R/r_0\\ (r_1',r_2')=1}}c_{q,r_0r_1'}\overline{c_{q,r_0r_2'}}\sum_{\substack{n_1,n_2\sim N\\ (n_1,qr_0r_1')=1\\ (n_2,q r_0 r_2')=1}}\frac{\alpha_{n_1}\overline{\alpha}_{n_2}M\hat{\psi}(0)}{q r_0r_1'r_2'\phi(q r_0)}.
\end{align*}
\end{lmm}
\begin{proof}
Let $r_0=(r_1,r_2)$. We see that the congruence conditions $m n_1\equiv a_{q,r_1}\Mod{qr_1}$ and $m n_2\equiv a'_{q,r_2}\Mod{q r_2}$ require that $(m n_1 n_2,q r_1 r_2)=1$ and that $n_1\overline{a_{q,r_1}}\equiv n_2\overline{a'_{q,r_2}}\Mod{q r_0}$. We now split $\mathscr{S}$ by putting $r_0$ into dyadic intervals $r_0\sim R_0$. Thus it suffices to show that for each $R_0\ll x$ we have
\begin{equation}
\mathscr{S}(R_0)=\begin{cases}
\mathscr{S}_{MT}(R_0)+O_A\Bigl(\frac{MN^2}{Q(\log{x})^{A+1}}\Bigr),\qquad & R_0\le N/((\log{x})^{C_1} Q),\\
O_A\Bigl(\frac{MN^2}{Q(\log{x})^{A+1}}\Bigr),& R_0>N/(x^{4\delta}(\log{x})^{C_1} Q),
\end{cases}
\label{eq:STarget}
\end{equation}
where $C_1=C_1(A,B,B_2)$ is a constant we will choose, and
\begin{align*}
\mathscr{S}(R_0)&:=\sum_{q\sim Q}\sum_{r_0\sim R_0}\sum_{\substack{r_1',r_2'\sim R/r_0\\ (r_1',r_2')=1}}c_{q,r_0r_1'}\overline{c_{q,r_0r_2'}}\sum_{\substack{n_1,n_2\sim N\\ n_1\overline{a_{q,r_0r_1'}}\equiv n_2\overline{a'_{q,r_0r_2'} } \Mod{q r_0}}}\alpha_{n_1}\overline{\alpha}_{n_2}\\
&\qquad\times\sum_{\substack{m\sim M\\m n_1\equiv a_{q,r_0r_1'} \Mod{qr_0 r_1'}\\ m n_2 \equiv a'_{q,r_0r_2'}\Mod{ r_2'}}}\psi\Bigl(\frac{m}{M}\Bigr),\\
\mathscr{S}_{MT}(R_0)&:=
\sum_{q\sim Q}\sum_{r_0\sim R_0}\sum_{\substack{r_1',r_2'\sim R/r_0\\ (r_1',r_2')=1}}c_{q,r_0r_1'}\overline{c_{q,r_0r_2'}}\sum_{\substack{n_1,n_2\sim N\\ (n_1,qr_0r_1')=1\\ (n_2,q r_0 r_2')=1}}\alpha_{n_1}\overline{\alpha}_{n_2}\frac{M\hat{\psi}(0)}{q r_0r_1'r_2'\phi(q r_0)}.
\end{align*}
To ease dependencies we restrict the support of $c_{q,r}$ to $r\sim R$, and so we may consider the summation with $r_1',r_2'\sim R'$ independent of $r_0$ for $R'\asymp R/R_0$.
We may assume that $B_2$ is sufficiently large in terms of $A,B$ such that Lemma \ref{lmm:Fourier} applies. We then choose $C_1=C_1(A,B,B_2)$ sufficiently large in terms of $A,B,B_2$ such that Lemma \ref{lmm:Fourier} applies and Lemma \ref{lmm:GCD} gives a bound $\mathscr{S}'\ll MN^2/((\log{x})^{A+2BB_2}Q)$. Then if $R_0>N/((\log{x})^{C_1} Q)$ and $C>3C_1$, \eqref{eq:STarget} follows from Lemma \ref{lmm:GCD}. Indeed we assume $N>Q x^{2\delta}(\log{x})^{C}$ and $|\alpha_n|\le \tau(n)^B\le (\log{x})^{B_2B}$, so Lemma \ref{lmm:GCD} gives the result provided $C>3C_1$. Thus we may assume that $R_0\le N/((\log{x})^{C_1} Q)$, and so by Lemma \ref{lmm:Fourier} it suffices to show that for all choices of $R'\asymp R/R_0$
\[
\mathscr{S}_2\ll \frac{N^2 R^2}{R_0(\log{x})^{2A+2BB_2} },
\]
where $\mathscr{S}_2$ is as given in Lemma \ref{lmm:Fourier}. By Lemma \ref{lmm:Simplify}, Cauchy-Schwarz and Lemma \ref{lmm:Cauchy}, we have
\begin{align*}
\mathscr{S}_2&\ll (\log{x})^4\sup_{D_1\le D_2}\sum_{d_1\sim D_1}\sum_{d_2\sim D_2}\sum_{q\sim Q}\sum_{r_0\sim R_0}|\mathscr{S}_3|\\
&\ll (\log{x})^4 \sup_{D_1\le D_2}(D_1D_2QR_0)^{1/2}\Bigl(\sum_{d_1\sim D_1}\sum_{d_2\sim D_2}\sum_{q\sim Q}\sum_{r_0\sim R_0}|\mathscr{S}_3|^2\Bigr)^{1/2}\\
&\ll (\log{x})^5\sup_{\substack{D_1\le D_2\\ E_1R_1''\asymp R_1}}(D_2 Q N R)^{1/2}\Bigl(\sum_{d_1\sim D_1}\sum_{d_2\sim D_2}\sum_{q\sim Q}\sum_{r_0\sim R_0}|\mathscr{S}_4|\Bigr)^{1/2},
\end{align*}
where $\mathscr{S}_3$ and $\mathscr{S}_4$ are as given in Lemmas \ref{lmm:Simplify} and \ref{lmm:Cauchy} respectively. Thus it suffices to show that for all $d_1\sim D_1$, $d_2\sim D_2$
\begin{equation}
\sum_{q\sim Q}\sum_{r_0\sim R_0}|\mathscr{S}_4|\ll \frac{N^3 R^3}{(\log{x})^{4A+4BB_2+10} Q R_0^3 D_1 D_2^2}\asymp \frac{N^3 E_1 R_1'' R_2'{}^2 R_0}{(\log{x})^{4A+4BB_2+10} Q}.
\label{eq:S4Target}
\end{equation}
Lemma \ref{lmm:Diag1} gives this if $R_1''\asymp R/(R_0D_1 E_1)$ satisfies $R_1''\le N^{2/3}/(Q R_0)^{2/3}$, so we may assume that $R_1''>N^{2/3}/(Q R_0)^{2/3}$. In this case, we may apply Cauchy-Schwarz, Lemma \ref{lmm:SecondCauchy}, Lemma \ref{lmm:Diag2} and Lemma \ref{lmm:OffDiag} in turn to give
\begin{align*}
\sum_{q\sim Q}\sum_{r_0\sim R_0}|\mathscr{S}_4|& \ll (Q R_0)^{1/2}\Bigl(\sum_{q\sim Q}\sum_{r_0\sim R_0}|\mathscr{S}_4|^2\Bigr)^{1/2}\\
&\ll (\log{x})^{1/2} N R_2'\Bigl(\sum_{q\sim Q}\sum_{r_0\sim R_0}(|\mathscr{S}_5|+|\mathscr{S}_6|)\Bigr)^{1/2}\\
&\ll (\log{x})^{1/2} N R_2'\Bigl( \frac{ ( E_1 R_1'' R_2')^2N^4 }{(\log{x})^{A_2} Q^2}\Bigr)^{1/2}\\
&\ll_{A_2,B} \frac{N^3 E_1 R_1'' R_2'{}^2 R_0}{(\log{x})^{A_2/2-1/2} Q}
\end{align*}
provided $C$ is large enough in terms of $A_2$ and $B$. Choosing $A_2=8A+8 B B_2+21$ (so that $A_2/2-1/2=4A+4BB_2+10$, matching the exponent in \eqref{eq:S4Target}) and $C=C(A,B,B_2)\ge 3C_1$ sufficiently large in terms of $A_2,B$ and $C_1$ then gives \eqref{eq:S4Target}, and hence the result.
\end{proof}
\begin{proof}[Proof of Proposition \ref{prpstn:MainProp}]
Let $S$ be the sum of interest
\[
S:=\sum_{q\sim Q}\sum_{r\sim R}\sup_{a\Mod{qr}}\Bigl|\sum_{m\sim M}\sum_{n\sim N}\alpha_n\beta_m\Bigl(\mathbf{1}_{n m\equiv a\Mod{q r}}-\frac{\mathbf{1}_{(nm,qr)=1}}{\phi(q r)}\Bigr)\Bigr|.
\]
First we simplify the moduli appearing. By Lemma \ref{lmm:Divisor} and the trivial bound, the contribution from $q,r$ with $\tau(qr)>(\log{x})^{B}$ is negligible if $B=B(A)$ is sufficiently large, so we may restrict the summation to $\tau(qr)\le (\log{x})^{B}$ for some fixed constant $B=B(A)\ge A$. Given $q,r$, let $q r= s^\square s^{\notsquare}$ be factored into square-full and square-free parts. By Lemma \ref{lmm:Squarefree} we only need to consider $s^\square\ll (\log{x})^{B_1}$ for some fixed constant $B_1=B_1(A)$. We now let $q'=s^\square (q,s^{\notsquare})$ and $r'=(r,s^{\notsquare})$, and so it suffices to show that for all $Q'\in[Q,Q(\log{x})^{B_1}]$, $R'\asymp QR/Q'$ we have
\begin{align*}
\sum_{q'\sim Q'}\tau_3(q')\hspace{-0.5cm}\sum_{\substack{r'\sim R'\\ (r',q')=1\\ \mu^2(r')=1\\ \tau(q' r')\le (\log{x})^{B} }}\hspace{-0.5cm}\sup_{(a,q' r')=1}\Bigl|\sum_{m\sim M}\sum_{\substack{n\sim N}}\alpha_n\beta_m\Bigl(\mathbf{1}_{n m\equiv a\Mod{q' r'}}-\frac{\mathbf{1}_{(m n,q' r')=1}}{\phi(q' r')}\Bigr)\Bigr|\\\ll_{A} \frac{x}{(\log{x})^{A+1} }.
\end{align*}
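As a concrete illustration of the square-full/square-free splitting above (the specific numbers are chosen purely for illustration): taking $q=48=2^4\cdot 3$ and $r=15=3\cdot 5$, we have
\[
qr=720=\underbrace{2^4\cdot 3^2}_{s^{\square}}\cdot \underbrace{5}_{s^{\notsquare}},
\]
so that $q'=s^{\square}(q,s^{\notsquare})=144\cdot 1=144$ and $r'=(r,s^{\notsquare})=5$; indeed $q'r'=qr$ with $r'$ square-free and $(q',r')=1$.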
Since $\tau_3(q')\le \tau(q')^2\le (\log{x})^{2B}$, it suffices to show
\begin{align*}
\sum_{q'\sim Q'}\hspace{-0.2cm}\sum_{\substack{r'\sim R'\\ (r',q')=1\\ \mu^2(r')=1\\ \tau(q'r')\le (\log{x})^{B_1}}}\hspace{-0.2cm}\sup_{(a,q' r')=1}\Bigl|\sum_{m\sim M}\sum_{n\sim N}\alpha_n\beta_m\Bigl(\mathbf{1}_{n m\equiv a\Mod{q' r'}}-\frac{\mathbf{1}_{(m n,q' r')=1}}{\phi(q' r')}\Bigr)\Bigr|\\
\ll_{A}\frac{x}{(\log{x})^{A+2B+1}}.
\end{align*}
By Lemma \ref{lmm:Divisor} and the trivial bound, the contribution from $n$ with $\tau(n)\ge (\log{x})^{B_2}$ is negligible if $B_2$ is sufficiently large in terms of $A$ and $B$. Therefore we may restrict to $\tau(n)\le (\log{x})^{B_2}$, where we will later choose $B_2$ appropriately. Let $\alpha''_n:=\alpha_n\mathbf{1}_{\tau(n)\le(\log{x})^{B_2}}$ be $\alpha_n$ with this restricted support.
Let $a_{q',r'}$ be the residue class achieving the supremum, and let $c_{q',r'}$ be 1-bounded complex numbers chosen to remove the absolute values. We restrict the support of $c_{q',r'}$ to $(r',q')=1$ with $\tau(q'r')\le (\log{x})^{B}$ and $r'$ square-free. Thus we wish to show
\[
\sum_{q'\sim Q'}\sum_{r'\sim R'}c_{q',r'}\sum_{m\sim M}\sum_{n\sim N}\alpha''_n\beta_m\Bigl(\mathbf{1}_{n m\equiv a_{q',r'}\Mod{q' r'}}-\frac{\mathbf{1}_{(m n,q' r')=1}}{\phi(q' r')}\Bigr)\ll_{A}\frac{x}{(\log{x})^{A+2B+1}}.
\]
By considering the average over $b_{q,r} \Mod{qr}$, it suffices to show that for any sequences $a_{q,r},b_{q,r}\Mod{qr}$ with $(a_{q,r}b_{q,r},qr)=1$ we have
\[
\sum_{q\sim Q}\sum_{\substack{r\sim R}}c_{q,r}\sum_{n\sim N}\sum_{m\sim M}\alpha''_n\beta_m\Bigl(\mathbf{1}_{n m\equiv a_{q,r}\Mod{q r}}-\mathbf{1}_{n m\equiv b_{q,r}\Mod{q r}}\Bigr)\ll_{A} \frac{x}{(\log{x})^{A+2B+1}}.
\]
We apply Cauchy-Schwarz in the $m$ and $q$ variables. Recalling that $|\beta_m|\le \tau(m)^A$ and $B=B(A)$, it suffices to show that for a suitable constant $A_2=A_2(A)$
\begin{align*}
\sum_{q\sim Q}\sum_{m\sim M}\Bigl|\sum_{r\sim R}c_{q,r}\sum_{\substack{n\sim N\\ \tau(n)\le (\log{x})^{B_2} }}\alpha_n\Bigl(\mathbf{1}_{n m\equiv a_{q,r}\Mod{q r}}-\mathbf{1}_{n m\equiv b_{q,r}\Mod{q r}}\Bigr)\Bigr|^2\\
\ll_{A_2} \frac{MN^2}{Q(\log{x})^{A_2}}.
\end{align*}
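To spell out the preceding Cauchy-Schwarz step (here $F(q,m)$ is shorthand, introduced only for this remark, for the inner sum over $r$ and $n$):
\[
\Bigl|\sum_{q\sim Q}\sum_{m\sim M}\beta_m F(q,m)\Bigr|\le \Bigl(\sum_{q\sim Q}\sum_{m\sim M}|\beta_m|^2\Bigr)^{1/2}\Bigl(\sum_{q\sim Q}\sum_{m\sim M}|F(q,m)|^2\Bigr)^{1/2},
\]
and $\sum_{m\sim M}|\beta_m|^2\ll M(\log{x})^{O_A(1)}$ since $|\beta_m|\le \tau(m)^A$. As $MN\asymp x$, the displayed bound on the second factor then recovers the required saving of $(\log{x})^{A+2B+1}$ provided $A_2$ is sufficiently large in terms of $A$ and $B$.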
Inserting a smooth majorant for the $m$ summation then expanding the square, we see that it suffices to show that uniformly over all sequences $a_{q,r},a'_{q,r}$ coprime to $qr$ we have
\[
\mathscr{S}=X+O_{A_2}\Bigl(\frac{MN^2}{Q(\log{x})^{2A_2}}\Bigr)
\]
for some quantity $X$ independent of $a_{q,r}$ and $a'_{q,r}$, where
\begin{align*}
\mathscr{S}&:=\sum_{q\sim Q'}\sum_{\substack{r_1,r_2\sim R'}}c_{q,r_1}\overline{c_{q,r_2}}\sum_{\substack{n_1,n_2\sim N\\ \tau(n_1),\tau(n_2)\le(\log{x})^{B_2} }}\alpha_{n_1}\overline{\alpha}_{n_2}\sum_{\substack{m\sim M\\m\equiv a_{q,r_1}\overline{n_1}\Mod{qr_1}\\ m\equiv a'_{q,r_2}\overline{n_2}\Mod{q r_2}}}\psi\Bigl(\frac{m}{M}\Bigr).
\end{align*}
This now follows from Lemma \ref{lmm:MainConclusion} if first $B_2=B_2(A)$ is chosen sufficiently large in terms of $A_2$ and $B$, and then $C=C(A)$ is chosen sufficiently large in terms of $A_2,B$ and $B_2$.
\end{proof}
\section{Second Type II estimate}\label{sec:SecondProp}
We now establish Proposition \ref{prpstn:SecondProp}. The proof is similar to that of Proposition \ref{prpstn:MainProp}, but we change some intermediate manipulations to exploit the additional assumptions on the moduli involved. This ultimately has the effect of reducing the modulus of the final exponential sums appearing, leading to an additional saving. For our applications we no longer need to worry about losing factors of $x^\epsilon$ since we will ultimately have a power-saving estimate.
The key quantity we need to understand for Proposition \ref{prpstn:SecondProp} is a variant of the sum $\mathscr{S}$ with special coefficients $c_{q,r}$. After performing the same initial steps, this leads to estimating $\mathscr{A}_3$ in place of $\mathscr{S}_3$, which is given by the lemma below. We estimate this via Lemmas \ref{lmm:AlternativeCauchy} and \ref{lmm:AltA4}, which leads to Lemma \ref{lmm:SecondConclusion}, our new variant of Lemma \ref{lmm:MainConclusion}. We then deduce Proposition \ref{prpstn:SecondProp} from Lemma \ref{lmm:SecondConclusion} in a similar manner to before.
\begin{lmm}\label{lmm:AlternativeCauchy}
Let $B>0$, $d_1\sim D_1$, $d_2\sim D_2$, $q\sim Q$, $t_0\sim T_0$ and $q_0=qt_0$. Let $\gamma_{qrs}$, $\lambda_t$ and $\alpha'_n$ be 1-bounded complex sequences, and let $\tilde{\gamma}_{t}=\tilde{\gamma}_{t}(q,d_1,t_0)$ satisfy
\[
|\tilde{\gamma}_{t}|\le \frac{\mathbf{1}_{\tau(q d_1 t_0 t)\le (\log{x})^B}}{(\log{x})^B}\sum_{r\sim R}\sum_{\substack{s\sim S\\ rs=d_1 t_0 t\\ (rs,q)=1}}|\gamma_{q r s}|\mu^2(d_1 t_0 t).
\]
Let $H\ll N Q T_0 T_1 T_2/x^{1-\epsilon}$, $T_1\ll T/(T_0 D_1)$, $T_2\ll T/(T_0 D_2)$ and
\begin{align*}
\mathscr{A}_3&:=\sum_{\substack{t_1'\sim T_1\\ (t_1',q_0)=1}}\tilde{\gamma}_{t_1'}\sum_{\substack{t_2'\sim T_2\\ (t_2',q_0 t_1')=1}}\lambda_{t_2'}\sum_{\substack{n_1,n_2\sim N\\ n_1\overline{b_{t_1'}}\equiv n_2\overline{b'_{t_2'}}\Mod{q_0}\\ (n_1,q_0d_1 t_1')=(n_2,q_0d_2t_2')=1}}\alpha'_{n_1}\overline{\alpha'_{n_2}}\sum_{\substack{h\sim H\\ (h,t_1't_2')=1}}\xi,\\
\xi&:=e\Bigl(\frac{b_{ t_1'}h\overline{n_1 t_1't_2'}}{q_0}\Bigr)e\Bigl(\frac{b_{ t_1'}h\overline{n_1 q_0 t_2'}}{t_1'}\Bigr)e\Bigl(\frac{b'_{t_2'}h\overline{n_2 q_0 t_1'}}{t_2'}\Bigr).
\end{align*}
Then we have
\[
|\mathscr{A}_3|^2\ll x^{o(1)} N T_1\sup_{\substack{E_1,S_1\\ E_1 S_1\asymp T_1 \\ E_1\ge R/(T_0 D_1)}}\min(E_1,R)\,| \mathscr{A}_{4}|
\]
where
\begin{align*}
\mathscr{A}_4&:=\sum_{e_1\sim E_1}\sum_{s_1\sim S_1}\eta_{e_1,s_1}\sum_{\substack{t_2,t_3\sim T_2\\ (t_2t_3,q_0 e_1 s_1)=1}}\lambda_{t_2}\overline{\lambda_{t_3}}\sum_{\substack{n_1\\ (n_1,q_0 e_1 s_1)=1}}\psi\Bigl(\frac{n_1}{N}\Bigr)\\
&\qquad \times \sum_{\substack{h_2,h_3\sim H\\ h_2 t_3\equiv h_3 t_2\Mod{e_1} \\ (h_2t_3-h_3t_2, s_1)=1 \\ (h_2,e_1 s_1 t_2)=1\\ (h_3,e_1 s_1 t_3)=1}}\sum_{\substack{n_2,n_3\sim N\\ n_2\overline{b'_{t_2}}\equiv n_1\overline{b_{e_1 s_1}}\Mod{q_0}\\ n_3\overline{b'_{t_3}}\equiv n_1\overline{b_{e_1 s_1 }}\Mod{q_0}\\ (n_2,q_0 d_2 t_2)=1\\ (n_3,q_0 d_2 t_3)=1}}\alpha'_{n_2}\overline{\alpha'_{n_3}}\xi''',\\
\xi'''&:=e\Bigl(\frac{b_{e_1 s_1}\overline{n_1 e_1 }(h_2\overline{t_2}-h_3\overline{t_3})}{q_0 s_1}\Bigr)e\Bigl(\frac{b'_{t_2}h_2\overline{n_2 q_0 e_1 s_1}}{t_2}\Bigr)e\Bigl(\frac{-b'_{t_3} h_3 \overline{n_3 q_0 e_1 s_1}}{t_3}\Bigr),
\end{align*}
and $|\eta_{e_1 ,s_1}|\le 1$ is supported on $e_1 s_1$ square-free with $\tau(e_1 s_1)\le (\log{x})^B$ and $(e_1 s_1,q_0)=1$.
\end{lmm}
\begin{proof}
We first swap the order of summation and use the upper bound for the coefficients $\tilde{\gamma}$. Given $r\sim R$ and $s\sim S$ with $r s=d_1 t_0 t_1'$ and $r s$ square-free, let $r'=(t_1',r)$ and $s'=(t_1',s)$. We see that $r'\in [R/(D_1T_0),2R]$. Given $r',s'$ there are $\tau(d_1 t_0)\le x^{o(1)}$ choices of $r,s$. Thus, noting that $\tilde{\gamma}_{t}$ is supported on $\tau(t)\le(\log{x})^B$ with $t$ square-free and coprime to $q_0$, this gives
\begin{align*}
\mathscr{A}_3&\ll \sum_{t_1'\sim T_1}|\tilde{\gamma}_{t_1'}|\sum_{\substack{n_1\sim N\\ (n_1,q_0t_1')=1}}|\alpha'_{n_1}||\mathscr{A}_3'|\\
&\ll x^{o(1)}\sup_{\substack{R' S'\asymp T_1\\ R/(D_1 T_0)\le R' \le R}}\sum_{r'\sim R'}\sum_{\substack{s'\sim S'\\ r' s'\in\mathcal{R} }}\sum_{\substack{n_1\sim N\\ (n_1,q_0r' s')=1}}|\mathscr{A}_3'|,
\end{align*}
where we recall $\mathcal{R}=\{r:\,\mu^2(r)=1,\tau(r)\le (\log{x})^B,\,(r,q_0)=1\}$ and
\begin{align*}
\mathscr{A}_3'&:=\sum_{\substack{t_2'\sim T_2\\ (t_2',q_0 r' s')=1}}\lambda_{t_2'}\sum_{\substack{n_2\sim N\\ n_2\overline{b'_{t_2'}}\equiv n_1\overline{b}_{r' s'}\Mod{q_0} \\ (n_2,q_0 d_2 t_2')=1}}\overline{\alpha'}_{n_2}\sum_{\substack{h\sim H\\ (h,r' s' t_2')=1}}\xi',\\
\xi'&:=e\Bigl(\frac{b_{r' s'}h\overline{n_1 t_2'}}{q_0 r' s'}\Bigr)e\Bigl(\frac{b'_{t_2'}h\overline{n_2 q_0 r' s'}}{t_2'}\Bigr).
\end{align*}
We now split the summation according to the residue class of $h\overline{t_2'}\Mod{r'}$, and then apply Cauchy-Schwarz and insert a smooth majorant for the $n_1$ summation. Let $\mathscr{A}_3''$ be $\mathscr{A}_3'$ with the summation restricted by the condition $h\overline{t_2'}\equiv c\Mod{r'}$. This gives
\begin{align*}
\mathscr{A}_3&\ll x^{o(1)}\sup_{\substack{R' S'\sim T_1\\ R/(T_0 D_1)\le R'\le R}}\sum_{r'\sim R'}\sum_{\substack{s'\sim S'\\ r' s'\in\mathcal{R}}}\sum_{\substack{n_1\sim N\\ (n_1,q_0 r' s')=1}}\sum_{\substack{c\Mod{r'}\\ (c,r')=1}}|\mathscr{A}_3''|\\
&\ll x^{o(1)}\sup_{\substack{R' S'\sim T_1\\ R/(T_0 D_1)\le R'\le R}}\Bigl(R'{}^2 S' N\sum_{r'\sim R'}\sum_{\substack{s'\sim S'\\ r' s'\in\mathcal{R}}}\sum_{\substack{n_1\sim N\\ (n_1,q_0r' s')=1}}\sum_{\substack{c\Mod{r'}\\ (c,r')=1}}|\mathscr{A}_3''|^2\Bigr)^{1/2}\\
&\ll x^{o(1)}\sup_{\substack{R' S'\sim T_1\\ R/(T_0 D_1)\le R'\le R}}\Bigl(R' T_1 N\sum_{r'\sim R'}\sum_{\substack{s'\sim S'\\ r' s'\in\mathcal{R}}}\sum_{\substack{n_1\\ (n_1,q_0r' s')=1}}\psi\Bigl(\frac{n_1}{N}\Bigr)\sum_{\substack{c\Mod{r'}\\ (c,r')=1}}|\mathscr{A}_3''|^2\Bigr)^{1/2}\\
&\ll x^{o(1)}\sup_{\substack{R' S'\sim T_1\\ R/(T_0 D_1)\le R'\le R}}\Bigl(R' T_1 N|\mathscr{A}_3'''|\Bigr)^{1/2},
\end{align*}
where, expanding the square,
\begin{align*}
\mathscr{A}_3'''&:=\sum_{r'\sim R'}\sum_{\substack{s'\sim S'\\ r' s'\in\mathcal{R} }}\sum_{\substack{t_2,t_3\sim T_2\\ (t_2t_3,q_0 r' s')=1}}\lambda_{t_2}\overline{\lambda_{t_3}}\sum_{\substack{n_1\\ (n_1,q_0r' s')=1}}\psi\Bigl(\frac{n_1}{N}\Bigr)\\
&\qquad \times \sum_{\substack{h_2,h_3\sim H\\ h_2t_3\equiv h_3t_2\Mod{r'} \\ (h_2,r' s' t_2)=1\\ (h_3,r' s' t_3)=1}}\sum_{\substack{n_2,n_3\sim N\\ n_2\overline{b'_{t_2}}\equiv n_1\overline{b_{r' s'}}\Mod{q_0}\\ n_3\overline{b'_{t_3}}\equiv n_1\overline{b_{r' s'}}\Mod{q_0}\\ (n_2,q_0d_2 t_2)=1\\ (n_3,q_0d_2t_3)=1}}\alpha'_{n_2}\overline{\alpha'_{n_3}}\xi''',\\
\xi'''&:=e\Bigl(\frac{b_{r' s'}\overline{n_1}(h_2\overline{t_2}-h_3\overline{t_3})}{q_0 r' s'}\Bigr)e\Bigl(\frac{b'_{t_2}h_2\overline{n_2 q_0 r' s'}}{t_2}\Bigr)e\Bigl(\frac{-b'_{t_3} h_3 \overline{n_3 q_0 r' s'}}{t_3}\Bigr).
\end{align*}
We wish to control some common divisors. Let $e_1=(h_2t_3-h_3t_2,r' s')$ and let $s_1 e_1=r' s'$, so $(s_1,h_2t_3-h_3t_2)=1$ since $r's'$ is square-free. Since $h_2\overline{t_2}\equiv h_3\overline{t_3}\Mod{r'}$, we see that $r'|e_1$. We also see that
\[
e\Bigl(\frac{b_{r' s'}\overline{n_1}(h_2\overline{t_2}-h_3\overline{t_3})}{q_0 r' s'}\Bigr)=e\Bigl(\frac{b_{e_1 s_1}\overline{n_1 e_1}(h_2\overline{t_2}-h_3\overline{t_3})}{q_0 s_1}\Bigr),
\]
and so $\xi'''$ simplifies slightly to give the expression of the lemma. Thus
\begin{align*}
\mathscr{A}_3'''&\ll x^{o(1)}\sup_{\substack{E_1 S_1\asymp T_1 \\ E_1\ge R'}}\sum_{e_1\sim E_1}\sum_{s_1\sim S_1}\eta_{e_1,s_1}\sum_{\substack{t_2,t_3\sim T_2\\ (t_2t_3,q_0 e_1 s_1)=1}}\lambda_{t_2}\overline{\lambda_{t_3}}\sum_{\substack{n_1\\ (n_1,q_0 e_1 s_1)=1}}\psi\Bigl(\frac{n_1}{N}\Bigr)\\
&\qquad \times \sum_{\substack{h_2,h_3\sim H\\ h_2t_3\equiv h_3t_2\Mod{e_1} \\ (h_2t_3-h_3t_2,s_1)=1 \\ (h_2,e_1 s_1 t_2)=1\\ (h_3,e_1 s_1 t_3)=1}}\sum_{\substack{n_2,n_3\sim N\\ n_2\overline{b'_{t_2}}\equiv n_1\overline{b_{e_1 s_1}}\Mod{q_0}\\ n_3\overline{b'_{t_3}}\equiv n_1\overline{b_{e_1 s_1}}\Mod{q_0}\\ (n_2,q_0 d_2 t_2)=1\\ (n_3,q_0 d_2 t_3)=1}}\alpha'_{n_2}\overline{\alpha'_{n_3}}\xi''',\\
\eta_{e_1,s_1}&:=\sum_{\substack{r'\sim R'\\ r'|e_1}}\sum_{\substack{s'\sim S'\\ r' s'=e_1 s_1\in\mathcal{R}}}\frac{1}{(\log{x})^B}\le 1.
\end{align*}
Noting that $R/(D_1T_0)\le R'\ll \min(E_1,R)$, this gives the result.
\end{proof}
\begin{lmm}\label{lmm:AltA4}
Let $\mathscr{A}_4$ be as in Lemma \ref{lmm:AlternativeCauchy}. Let $N,R,T,Q$ satisfy $QT=x^{1/2+\delta}\ge x^{1/2-\epsilon/100}$ and
\begin{align*}
R^2 x^{6\delta+4\epsilon}&\le T\le x^{1/10-3\delta-3\epsilon}R^{2/5},\\
x^{1/4+13\delta/2+3\epsilon}T&\le N\le \frac{x^{1/2-3\delta-4\epsilon}}{R}.
\end{align*}
Then we have that
\[
\sum_{q\sim Q}\sum_{t_0\sim T_0}|\mathscr{A}_4|\ll \frac{N^3 E_1 S_1 T_2^2}{x^\epsilon \min(E_1,R) Q}.
\]
\end{lmm}
\begin{proof}
The key observation is that $\mathscr{A}_4$ is of exactly the same form as $\mathscr{S}_4$ from Lemma \ref{lmm:Cauchy} (with $R_1''$ replaced by $S_1$, $R_2'$ replaced by $T_2$, and so on), and so we can reuse the arguments from Lemmas \ref{lmm:Diag1}-\ref{lmm:OffDiag}. We require a slightly stronger bound on $\mathscr{A}_4$, since we wish to gain an additional factor of $\min(E_1,R)$; the key point that enables this is the bound $E_1\ge R/(T_0D_1)$, which ensures $S_1$ cannot be too large.
Specifically, if $S_1\ll N^{2/3}/(Q T_0)^{2/3}$ then the argument in the proof of Lemma \ref{lmm:Diag1} up to \eqref{eq:S4DiagBound} shows that
\[
\mathscr{A}_4\ll x^{o(1)} \frac{ N^4 Q T_1^2 T_2^2 T^2}{x^2}.
\]
This gives the result provided
\[
N< \frac{x^{2-2\epsilon}}{Q^3 T^3 R}.
\]
Recalling $QT=x^{1/2+\delta}$, this simplifies to
\begin{align}
N< \frac{x^{1/2-3\delta-2\epsilon}}{R}.
\label{eq:AltNBound1}
\end{align}
We now consider the contribution when $S_1\ge N^{2/3}/(Q T_0)^{2/3}$. The argument of Lemma \ref{lmm:SecondCauchy} gives
\[
|\mathscr{A}_4|^2\ll x^{o(1)}\frac{N^2 T_2^2}{Q T_0}(|\mathscr{A}_5|+|\mathscr{A}_6|),
\]
where, $\mathscr{A}_5,\mathscr{A}_6$ are defined analogously to $\mathscr{S}_5,\mathscr{S}_6$. Thus it suffices to show that
\[
\sum_{q\sim Q}\sum_{t_0\sim T_0}(|\mathscr{A}_5|+|\mathscr{A}_6|)\ll \frac{N^4 E_1^2 S_1^2 T_2^2}{x^{3\epsilon}\min(E_1,R)^2 Q^2}.
\]
The argument of the proof of Lemma \ref{lmm:Diag2} up to \eqref{eq:S93} shows that
\[
\mathscr{A}_5\ll \frac{E_1^2 S_1^2 T_2^2 N^4}{Q^3 T_0^2}\Bigl(\frac{Q^6 T^5}{x^{3-\epsilon}}+\frac{N^2 Q^4 T^4}{x^{3-\epsilon}}\Bigr).
\]
Recalling that $Q T=x^{1/2+\delta}$, this gives an acceptably small contribution if we have
\begin{align}
T&> R^2 x^{6\delta+4\epsilon},\label{eq:AltTBound1}\\
N&< \frac{x^{1/2-2\delta-2\epsilon}}{R}.
\label{eq:AltNBound2}
\end{align}
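In more detail (a routine unwinding of the exponents, using only $\min(E_1,R)\le R$, $T_0\ge 1$ and $QT=x^{1/2+\delta}$): the first term contributes acceptably provided $\min(E_1,R)^2Q^5T^5\ll T_0^2x^{3-4\epsilon}$, which holds by \eqref{eq:AltTBound1} since
\[
R^2Q^5T^5< Tx^{-6\delta-4\epsilon}\cdot x^{5/2+5\delta}\le x^{1/2+\delta}\cdot x^{5/2-\delta-4\epsilon}=x^{3-4\epsilon},
\]
while the second term contributes acceptably provided $\min(E_1,R)^2N^2Q^3T^4\ll T_0^2x^{3-4\epsilon}$, which holds since $R^2N^2Q^3T^4= N^2R^2x^{3/2+3\delta}T\le N^2R^2x^{2+4\delta}$ and $N<x^{1/2-2\delta-2\epsilon}/R$.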
We note that \eqref{eq:AltNBound1} implies \eqref{eq:AltNBound2}.
Finally, following the proof of Lemma \ref{lmm:OffDiag} up to \eqref{eq:S6Intermediate} gives
\begin{align*}
&\sum_{q\sim Q}\sum_{t_0\sim T_0}\mathscr{A}_6 \ll (\log{x})^{O_B(1)}\Bigl( \frac{S_1^3 T_2^2 Q^{11/2} N^4 T^9}{x^4}+\frac{ S_1^2 T_2^{2} N^3 Q^{9/2} T^{15/2}}{x^3}\Bigr)\nonumber\\
&\qquad\qquad\qquad\qquad+x^{o(1)}\Bigl(\frac{S_1^3 T_2^2 Q^{11/2} N^4 T^8 E_1^2}{x^{4}}+\frac{E_1 S_1^2 T_2^{2} N^3 Q^{9/2} T^{13/2}}{x^{3} }\Bigr)\\
&\ll \frac{x^{o(1)} N^4 E_1^2 S_1^2 T_2^2}{Q^2 \min(E_1,R)^2}\Bigl( \frac{Q^{15/2} T^{10} }{E_1 D_1 x^4 T_0}+\frac{ Q^{13/2} T^{15/2}}{N x^3}+\frac{ Q^{15/2} T^9 R}{x^{4}}+\frac{R Q^{13/2} T^{13/2}}{N x^{3} }\Bigr).
\end{align*}
In the expression above we used the bound $S_1\ll T/(E_1 D_1T_0)$.
We recall that $E_1\ge R/(T_0D_1)$. Thus we see that this gives the desired bound $O(N^4 E_1^2 S_1^2 T_2^2/(\min(E_1,R)^2 Q^2 x^{2\epsilon}))$ provided we have
\[
\frac{Q^{15/2} T^{10} }{R x^4 }+\frac{ Q^{13/2} T^{15/2}}{N x^3}+\frac{ Q^{15/2} T^9 R}{x^{4}}+\frac{R Q^{13/2} T^{13/2}}{N x^{3} }\ll \frac{1}{x^{3\epsilon}}.
\]
Since $R\ll T$, the second term is larger than the fourth term. Thus, since $QT=x^{1/2+\delta}$ we obtain the desired bound provided we have
\begin{align}
T&<x^{1/10-3\delta-3\epsilon}R^{2/5},\label{eq:AltTBound2}\\
N&>x^{1/4+13\delta/2+3\epsilon}T,\\
T&<\frac{x^{1/6-5\delta-3\epsilon}}{R^{2/3}}.\label{eq:AltTBound3}
\end{align}
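For instance, the middle condition arises from the second term: substituting $Q=x^{1/2+\delta}/T$ gives
\[
\frac{Q^{13/2}T^{15/2}}{Nx^3}=\frac{x^{13/4+13\delta/2}T}{Nx^3}=\frac{x^{1/4+13\delta/2}T}{N},
\]
which is $\ll x^{-3\epsilon}$ precisely when $N\gg x^{1/4+13\delta/2+3\epsilon}T$; the exponent arithmetic for the first and third terms is entirely analogous, yielding \eqref{eq:AltTBound2} and \eqref{eq:AltTBound3}.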
Finally, we note that \eqref{eq:AltTBound1} and \eqref{eq:AltTBound2} imply that
\[
T=\frac{T^{5/3}}{T^{2/3}}<\frac{x^{1/6-5\delta-5\epsilon}R^{2/3}}{R^{4/3} x^{4\delta+8\epsilon/3}}<\frac{x^{1/6-5\delta-3\epsilon}}{R^{2/3}},
\]
so \eqref{eq:AltTBound3} follows from \eqref{eq:AltTBound1} and \eqref{eq:AltTBound2}. This gives the result.
\end{proof}
\begin{lmm}\label{lmm:SecondConclusion}
Let $\delta,A,B>0$ and $B_2=B_2(A,B)$ be sufficiently large in terms of $A,B$. Let $M,N,R,T\ge 1$ satisfy $MN\asymp x$, $QT=x^{1/2+\delta}$ and
\begin{align*}
R^2 x^{6\delta+4\epsilon}&\le T\le x^{1/10-3\delta-3\epsilon}R^{2/5},\\
\max\Bigl(x^{1/4+13\delta/2+3\epsilon}T,\,Q x^{2\delta+3\epsilon}\Bigr)&\le N\le \frac{x^{1/2-3\delta-4\epsilon}}{R}.
\end{align*}
Let $\gamma_{qrs}$ be a 1-bounded complex sequence, and define
\[
\gamma_{q,t}:=\frac{\mathbf{1}_{\tau(qt)\le (\log{x})^C}}{(\log{x})^C}\sum_{r\sim R}\sum_{\substack{s\sim S\\ rs=t\\ (rs,q)=1}}\gamma_{qrs}\mu^2(t).
\]
Let $|\alpha_n|\le \tau(n)^{B}$ be a complex sequence satisfying the Siegel-Walfisz condition \eqref{eq:SiegelWalfisz}, and let $a_{q,t},a'_{q,t}$ be integer sequences satisfying $(a_{q,t},qt)=(a'_{q,t},qt)=1$. Let
\begin{align*}
\mathscr{A}&:=\sum_{q\sim Q}\sum_{\substack{t_1,t_2\sim T}}\gamma_{q,t_1}\overline{\gamma_{q,t_2}}\hspace{-0.3cm}\sum_{\substack{n_1,n_2\sim N\\ \tau(n_1),\tau(n_2)\le (\log{x})^{B_2} }}\hspace{-0.3cm}\alpha_{n_1}\overline{\alpha}_{n_2}\sum_{\substack{m\sim M\\m\equiv a_{q,t_1}\overline{n_1}\Mod{qt_1}\\ m\equiv a'_{q,t_2}\overline{n_2}\Mod{q t_2}}}\psi\Bigl(\frac{m}{M}\Bigr).
\end{align*}
Then we have that
\[
\mathscr{A}=\mathscr{A}_{MT}+O_A\Bigl(\frac{M N^2}{Q (\log{x})^{2A}}\Bigr),
\]
where for some constant $C_1=C_1(A,B,B_2)$
\begin{align*}
\mathscr{A}_{MT}&:=\sum_{q\sim Q}\sum_{t_0\le N/((\log{x})^{C_1} Q)}\sum_{\substack{t_1',t_2'\sim T/t_0\\ (t_1',t_2')=1}}\gamma_{q,t_0t_1'}\overline{\gamma_{q,t_0t_2'}}\sum_{\substack{n_1,n_2\sim N\\ (n_1,q t_0t_1')=1\\ (n_2,q t_0 t_2')=1}}\alpha_{n_1}\overline{\alpha}_{n_2}\frac{M\hat{\psi}(0)}{q t_0 t_1' t_2'\phi(q t_0)}.
\end{align*}
\end{lmm}
\begin{proof}
This is very similar to the proof of Lemma \ref{lmm:MainConclusion}, since our sum $\mathscr{A}$ is a special case of the sum $\mathscr{S}$ considered there, but with the special form of the coefficients $\gamma_{q,t}$. (It is this special form which enables us to use Lemma \ref{lmm:AlternativeCauchy} to get a result when $N\approx x^{2/5}$.) Let $t_0=(t_1,t_2)$, and we consider $t_0\sim T_0$ for different choices of $T_0$. We first assume that $B_2=B_2(A,B)$ is sufficiently large such that Lemma \ref{lmm:Fourier} applies. We then choose a constant $C_1=C_1(A,B,B_2)$ such that Lemma \ref{lmm:GCD} and Lemma \ref{lmm:Fourier} both apply. Thus, by Lemma \ref{lmm:GCD} there is a negligible contribution from $T_0>N/((\log{x})^{C_1}Q)$. By Lemma \ref{lmm:Fourier} and Lemma \ref{lmm:Simplify}, it suffices to show that for some sufficiently large constant $A_2=A_2(A,B)$
\begin{equation}
\sum_{q\sim Q}\sum_{t_0\sim T_0}|\mathscr{A}_3|\ll_{A_2} \frac{N^2 T_0 T_1 T_2}{(\log{x})^{A_2}},
\label{eq:A3Target}
\end{equation}
where $H\ll (\log{x})^5 N Q T_0 T_1 T_2/x$, $T_1\ll T/(D_1 T_0)$, $T_2\ll T/(D_2 T_0)$ and
\begin{align*}
\mathscr{A}_3&:=\sum_{\substack{t_1'\sim T_1}}\gamma_{t_1'}\sum_{\substack{t_2'\sim T_2\\ (t_1',t_2')=1}}\overline{\gamma'_{t_2'}}\sum_{\substack{n_1,n_2\sim N\\ n_1\overline{b_{t_1'}}\equiv n_2\overline{b'_{t_2'}}\Mod{q t_0}\\ (n_1,q t_0d_1 t_1')=(n_2,q t_0d_2t_2')=1}}\alpha'_{n_1}\overline{\alpha'_{n_2}}\sum_{h\sim H}\xi,\\
\xi&:=e\Bigl(\frac{b_{t_1'}h\overline{n_1 t_1't_2'}}{q t_0}\Bigr)e\Bigl(\frac{b_{t_1'}h\overline{n_1 q t_0 t_2'}}{t_1'}\Bigr)e\Bigl(\frac{b'_{t_2'}h\overline{n_2 q t_0 t_1'}}{t_2'}\Bigr),
\end{align*}
for some sequences $\gamma_{t'},\gamma'_{t'}$ (depending on $q,t_0,d_1,d_2$) with $|\gamma_{t_1'}|\le |\gamma_{q,d_1t_0t_1'}|$ and $|\gamma'_{t_2'}|\le |\gamma_{q,d_2t_0t_2'}|$, some integer sequences $b_{t},b'_t$ (depending on $q,t_0,d_1,d_2$) with $(b_t,q t_0 d_1 t)=(b'_t, q t_0 d_2 t)=1$, and some 1-bounded sequence $\alpha_n'$.
Applying Lemmas \ref{lmm:AlternativeCauchy} and \ref{lmm:AltA4} in turn, we see that
\begin{align*}
\sum_{q\sim Q}\sum_{t_0\sim T_0}|\mathscr{A}_3|&\ll (QT_0)^{1/2}\Bigl(\sum_{q\sim Q}\sum_{t_0\sim T_0}|\mathscr{A}_3|^2\Big)^{1/2}\\
&\ll x^{o(1)}(Q N T_0 T_1)^{1/2} \Bigl(\sum_{q\sim Q}\sum_{t_0\sim T_0}\sup_{\substack{E_1,S_1\\ E_1 S_1\asymp T_1\\ E_1\ge R/(T_0D_1)}} \min(E_1,R)|\mathscr{A}_4|\Bigr)^{1/2}\\
&\ll x^{o(1)}(Q N T_0 T_1)^{1/2} \Bigl(\sup_{\substack{E_1,S_1\\ E_1 S_1\asymp T_1}}\frac{N^3 E_1 S_1 T_2^2}{x^\epsilon Q}\Bigr)^{1/2}\\
&\ll \frac{ N^2 T_0 T_1 T_2}{x^{\epsilon/2} Q}.
\end{align*}
This gives \eqref{eq:A3Target}, and hence the result.
\end{proof}
\begin{proof}[Proof of Proposition \ref{prpstn:SecondProp}]
The argument used to show Proposition \ref{prpstn:SecondProp} is very similar to that for Proposition \ref{prpstn:MainProp}. By Lemma \ref{lmm:Divisor} and the trivial bound, the total contribution from $q_1,q_2,q_3$ with $\tau(q_1q_2q_3)\ge (\log{x})^C$ is $\ll x/(\log{x})^A$ provided $C$ is sufficiently large in terms of $A$. Therefore we only need to consider $\tau(q_1 q_2 q_3)\le (\log{x})^C$.
Given $q_1,q_2,q_3$, let $q_1q_2q_3=q^\square q^{\notsquare}$ with $q^\square$ square-full and $q^{\notsquare}$ square-free. Let $q_i'=(q_i,q^{\notsquare})$ for $i\in\{1,2,3\}$. We then see that $q_1',q_2',q_3',q^\square$ are pairwise coprime with $q_1'q_2'q_3'q^\square=q_1q_2q_3$. By Lemma \ref{lmm:Squarefree} we only need to consider $q^\square\le x^{o(1)}$. Let $q:=q^\square q_1'$, $r:=q_2'$ and $s:=q_3'$. It then suffices to show that
\begin{align*}
\sum_{q\sim Q}\sum_{\substack{r\sim R\\ (r,q)=1\\ \mu^2(r)=1}}\sum_{\substack{s\sim S\\ (s,q r)=1\\ \mu^2(s)=1\\ \tau(q r s)\le (\log{x})^C}}\sup_{(a,qrs)=1}|\Delta(a;qrs)|\ll_A \frac{x}{(\log{x})^A},
\end{align*}
for all choices $Q,R,S$ with $Q=Q_1x^{o(1)}$, $R=Q_2x^{o(1)}$ and $S=Q_3x^{o(1)}$. By considering the average over $(a'_{q r s},q r s)=1$ and inserting 1-bounded coefficients $\gamma_{q r s}$ to remove the absolute values (whose support we restrict to $\tau(q r s)\le (\log{x})^C$), it suffices to show for all sequences $a_{q r s},a'_{q r s}$ with $(a_{q r s}a'_{q r s},q r s)=1$ that
\begin{align*}
\sum_{q\sim Q}\sum_{r\sim R}\sum_{\substack{s\sim S\\ (q,rs)=1}}\mu^2(rs)\gamma_{q r s}\tilde{\Delta}(a_{q r s},a'_{q r s};q r s)\ll_A \frac{x}{(\log{x})^A},
\end{align*}
where
\[
\tilde{\Delta}(a,b;q):=\sum_{n\sim N}\alpha_n \sum_{m\sim M}\beta_m \Bigl(\mathbf{1}_{n m\equiv a\Mod{q}}-\mathbf{1}_{n m\equiv b\Mod{q}}\Bigr).
\]
Let
\begin{equation}
\gamma_{q,t}:=\frac{\mathbf{1}_{\tau(qt)\le (\log{x})^C}}{(\log{x})^C}\sum_{r\sim R}\sum_{\substack{s\sim S\\ rs=t\\ (rs,q)=1}}\gamma_{q r s}\mu^2(t).
\label{eq:GammaDef}
\end{equation}
Since $\gamma_{q r s}$ is 1-bounded and we have restricted to $\tau(q r s)\le (\log{x})^C$, we see that $\gamma_{q,t}$ is 1-bounded, and so it suffices to show that for all $T\asymp RS$ and all $A>0$
\[
\sum_{q\sim Q} \sum_{t\sim T}\gamma_{q,t}\tilde{\Delta}(a_{q t},a'_{qt};qt)\ll_A\frac{x}{(\log{x})^A}.
\]
By the trivial bound and Lemma \ref{lmm:Divisor}, there is a negligible contribution from $n$ with $\tau(n)\ge (\log{x})^{B_2}$ if $B_2\ge B_0(A)$. Thus we may restrict to $\tau(n)\le (\log{x})^{B_2}$ for some $B_2$ to be chosen later sufficiently large in terms of $A$.
This is now a special case of the sum considered in the proof of Proposition \ref{prpstn:MainProp}. By applying Cauchy-Schwarz in the $q,m$ variables (and inserting a smooth majorant for the $m$ summation), it suffices to show that for all choices of residue classes $a_{q,t},a'_{q,t}$ and all 1-bounded sequences $\gamma_{q r s}$ defining $\gamma_{q,t}$ in \eqref{eq:GammaDef} we have
\[
\mathscr{A}=X+O_A\Bigl(\frac{M N^2}{Q (\log{x})^{2A}}\Bigr)
\]
for some quantity $X$ independent of $a_{q,t},a'_{q,t}$, where
\begin{align*}
\mathscr{A}&:=\sum_{q\sim Q}\sum_{\substack{t_1,t_2\sim T}}\gamma_{q,t_1}\overline{\gamma_{q,t_2}}\hspace{-0.3cm}\sum_{\substack{n_1,n_2\sim N\\ \tau(n_1),\tau(n_2)\le (\log{x})^{B_2}}}\hspace{-0.3cm}\alpha_{n_1}\overline{\alpha}_{n_2}\sum_{\substack{m\sim M\\m\equiv a_{q,t_1}\overline{n_1}\Mod{qt_1}\\ m\equiv a'_{q,t_2}\overline{n_2}\Mod{q t_2}}}\psi\Bigl(\frac{m}{M}\Bigr).
\end{align*}
This estimate now follows from Lemma \ref{lmm:SecondConclusion}, provided $B_2$ is sufficiently large in terms of $A$ and provided we have
\begin{align*}
R^2 x^{6\delta+4\epsilon}&\le T\le x^{1/10-3\delta-3\epsilon}R^{2/5},\\
\max\Bigl(x^{1/4+13\delta/2+3\epsilon}T,\,Q x^{2\delta+3\epsilon}\Bigr)&\le N\le \frac{x^{1/2-3\delta-4\epsilon}}{R}.
\end{align*}
Recalling that $T\asymp RS$ and that $Q=Q_1x^{o(1)}$, $R=Q_2x^{o(1)}$, $S=Q_3x^{o(1)}$, we see that this gives the result.
\end{proof}
\begin{rmk}
It would be desirable to produce a variant of Proposition \ref{prpstn:MainProp} to cover the range $N\in[x^{1/2-3\delta-3\epsilon},2x^{1/2}]$ in the spirit of Proposition \ref{prpstn:SecondProp}. Unfortunately we have failed to accomplish this; the pseudo-diagonal terms of Lemma \ref{lmm:Diag1} render the first application of Cauchy-Schwarz in Lemma \ref{lmm:Cauchy} irrelevant, since one obtains a subsum which is equivalent to the original. There does not seem to be an alternative to Lemma \ref{lmm:Cauchy} which does not quickly run into serious issues.
\end{rmk}
\section{Zhang-style Type II estimate}\label{sec:Zhang}
We now prove Proposition \ref{prpstn:Zhang}. The proof of this proposition is very similar to the proof of the refined version of Zhang's Type II estimate \cite[\S12]{Zhang} as given by \cite[Proposition 7.2]{May1}. We require some mild generalisations to handle a slightly different setup and some additional uniformity, but the fundamental content is the same. The key estimate is the following lemma.
\begin{lmm}[Zhang exponential sum estimate]\label{lmm:Zhang}
Let $Q,K,R,R_0,M,N,H\le x^{O(1)}$ satisfy
\begin{align*}
H&\ll \frac{Q N R^2 K R_0}{x^{1-\epsilon}},\qquad R_0 K Q<N,\qquad K^5 R_0^3 Q^{7/2} R^{6} < x^{2-10\epsilon},\qquad N<\frac{x^{1-6\epsilon}}{Q K^2}.
\end{align*}
Let $c_{q,k,r}$ and $\alpha'_n$ be $1$-bounded complex sequences with $c_{q,k,r}$ supported on square-free $r$ with $P^-(r)\ge z_0:=x^{1/(\log\log{x})^3}$ and $q,r,k$ pairwise coprime. Let $a_{q,k,r},a'_{q,k,r}$ be two integer sequences satisfying $(a_{q,k,r},q k r)=(a'_{q,k,r},q k r)=1$ and $a_{q,k,r}\equiv a'_{q,k,r}\Mod{q}$. Define
\begin{align*}
\mathscr{Z}&:=\sum_{\substack{q\sim Q}}\sum_{k\sim K}\sum_{r_0\sim R_0}\sum_{\substack{r_1,r_2\sim R \\ (r_1,r_2)=1}}\sum_{\substack{n_1,n_2\sim N\\ n_1\overline{a_{q,k,r_0r_1}}\equiv n_2\overline{a'_{q,k,r_0r_2}}\Mod{q k r_0} \\ (n_1,q k r_0r_1)=(n_2,q k r_0r_2)=1}}\alpha'_{n_1}\overline{\alpha'_{n_2}}c_{q,k,r_0r_1}\overline{c_{q,k,r_0r_2}}\\
&\qquad\times \sum_{1\le |h|\le H}\hat{\psi}\Bigl(\frac{h M}{q k r_0 r_1 r_2}\Bigr)e\Bigl(\frac{a_{q,k,r_0r_1}h\overline{n_1 r_2}}{q k r_0 r_1}\Bigr)e\Bigl(\frac{a'_{q,k,r_0r_2}h\overline{n_2 q k r_0r_1}}{r_2}\Bigr).
\end{align*}
Then we have
\[
\mathscr{Z}\ll \frac{N^2 R^2 R_0}{x^\epsilon}.
\]
\end{lmm}
\begin{proof}
Since we only consider $P^-(r_1),P^-(r_2)\ge z_0$, $r_1$ and $r_2$ have at most $(\log\log{x})^3$ prime factors. Therefore, by Lemma \ref{lmm:FouvryDecomposition}, there are $O(\exp((\log\log{x})^5))$ different sets $\mathcal{N}_1,\mathcal{N}_2,\dots$ which cover all possible pairs $(r_1,r_2)$, and such that if $(r_1,r_2),(r_1',r_2')\in\mathcal{N}_j$ then $\gcd(r_1,r_2')=\gcd(r_1',r_2)=1$. Taking the worst such set $\mathcal{N}$, we see that
\begin{align*}
\mathscr{Z}&\ll \exp((\log\log{x})^5)|\mathscr{Z}_2|,\\
\mathscr{Z}_2&:=\sum_{q\sim Q}\sum_{k\sim K}\sum_{r_0\sim R_0}\sum_{\substack{r_1,r_2\sim R\\ (r_1,r_2)=1\\ (r_1,r_2)\in\mathcal{N} }}\sum_{\substack{n_1,n_2\sim N\\ n_1\overline{a_{q,k,r_0r_1}}\equiv n_2\overline{a'_{q,k,r_0r_2}}\Mod{q k r_0} \\ (n_1,q k r_0 r_1)=(n_2,q k r_0 r_2)=1}}\alpha'_{n_1}\overline{\alpha'_{n_2}}c_{q,k,r_0r_1}\overline{c_{q,k,r_0r_2}}\\
&\qquad\times \sum_{1\le |h|\le H}\hat{\psi}\Bigl(\frac{h M}{q k r_0 r_1 r_2}\Bigr)e\Bigl(\frac{a_{q,k,r_0r_1}h\overline{n_1 r_2}}{q k r_0 r_1}\Bigr)e\Bigl(\frac{a'_{q,k,r_0r_2}h\overline{n_2 q k r_0 r_1}}{r_2}\Bigr).
\end{align*}
Since we wish to show $\mathscr{Z}\ll N^2 R^2 R_0/x^\epsilon$, it suffices to show $\mathscr{Z}_2\ll N^2 R^2 R_0/x^{2\epsilon}$. Since $(q,k r_0)=1$ and $a_{q,k,r_0r_1}\equiv a'_{q,k,r_0r_2}\Mod{q}$, we may split the conditions on the $n_1,n_2$ summation into $n_1\equiv n_2\Mod{q}$ and $n_1\overline{a_{q,k,r_0r_1}}\equiv n_2\overline{a_{q,k,r_0r_2}'}\Mod{k r_0}$. We now apply Cauchy-Schwarz in $n_1,n_2,k,r_0$ and $q$ to eliminate the $\alpha'$-coefficients and insert a smooth majorant for the $n_1$ and $n_2$ summations. This gives
\[
\mathscr{Z}_2^2\ll N Q K R_0\Bigl(1+\frac{N}{Q}\Bigr)|\mathscr{Z}_3|\ll N^2 K R_0|\mathscr{Z}_3|,
\]
where
\begin{align*}
\mathscr{Z}_3&:=\sum_{\substack{q\sim Q}}\sum_{k\sim K}\sum_{r_0\sim R_0}\sum_{\substack{n_1,n_2\sim N\\ n_1\equiv n_2\Mod{q}\\ (n_1n_2,q k r_0)=1}}\psi\Bigl(\frac{n_1}{N}\Bigr)\psi\Bigl(\frac{n_2}{N}\Bigr)|\mathscr{Z}_4|^2,\\
\mathscr{Z}_4&:=\sum_{\substack{r_1,r_2\sim R\\ (r_1,n_1 q k r_0 r_2)=1\\ (r_2,n_2 q k r_0 r_1)=1\\ (r_1,r_2)\in\mathcal{N} \\ n_1\overline{a_{q,k,r_0r_1}}\equiv n_2\overline{a'_{q,k,r_0r_2}}\Mod{r_0 k}}}c_{q,k,r_0r_1}\overline{c_{q,k,r_0r_2}}\sum_{1\le |h|\le H}\hat{\psi}\Bigl(\frac{h M}{q k r_0 r_1 r_2}\Bigr)\xi,\\
\xi&:=e\Bigl(\frac{a_{q,k,r_0r_1}h\overline{n_1 r_2}}{q k r_0 r_1}\Bigr)e\Bigl(\frac{a'_{q,k,r_0r_2}h\overline{n_2 q k r_0 r_1}}{r_2}\Bigr).
\end{align*}
Since we wish to show $\mathscr{Z}_2\ll N^2 R^2 R_0/x^{2\epsilon}$ and $N\gg Q$, it suffices to show that
\begin{equation}
\mathscr{Z}_3\ll\frac{ N^2 R^4 R_0}{K x^{4\epsilon}}.\label{eq:ZhangE4}
\end{equation}
Expanding the square and swapping the order of summation then gives
\begin{align*}
\mathscr{Z}_3&\le \sum_{q\sim Q}\sum_{k\sim K}\sum_{r_0\sim R_0}\sum_{\substack{r_1,r_1',r_2,r_2'\sim R\\ (r_1r_1', r_2 r_2')=1\\ (r_1r_1'r_2r_2',q k r_0)=1\\ a_{q,k,r_0r_1}\overline{a_{q,k,r_0r_1'}}\equiv a'_{q,k,r_0r_2}\overline{a'_{q,k,r_0r_2'}}\Mod{r_0 k}}}\sum_{1\le |h|,|h'|\le H}|\mathscr{Z}_5|,
\end{align*}
where
\begin{align*}
\mathscr{Z}_5&:=\sum_{\substack{n_1,n_2\\ n_1\equiv n_2\Mod{q} \\ n_1\overline{a_{q,k,r_0r_1}}\equiv n_2\overline{a_{q,k,r_0r_2}}\Mod{r_0k }\\ (n_1,q k r_0 r_1r_1')=1\\ (n_2,r_2r_2')=1}}\psi\Bigl(\frac{n_1}{N}\Bigr)\psi\Bigl(\frac{n_2}{N}\Bigr)e\Bigl(\frac{c_1\overline{n_1}}{q k r_0 r_1r_1'}\Bigr)e\Bigl(\frac{c_2\overline{n_2}}{r_2 r_2'}\Bigr),
\end{align*}
and where $c_1\Mod{q k r_0 r_1r_1'}$ and $c_2\Mod{r_2r_2'}$ are given by
\begin{align*}
c_1&=(a_{q,k, r_0r_1}h r_1'r_2'- a_{q,k, r_0 r_1'} h' r_1r_2)\overline{r_2r_2'},\\
c_2&=(a'_{q,k,r_0r_2} h r_1'r_2'- a'_{q,k,r_0r_2'} h' r_1r_2)\overline{q k r_0 r_1r_1'}.
\end{align*}
Here we used the fact that $(r_1,r_2),(r_1',r_2')\in\mathcal{N}$ to conclude that $(r_1r_1',r_2r_2')=1$.
We separate the `diagonal' terms $\mathscr{Z}_{=}$ with $h r_1' r_2'=h' r_1 r_2$ and the `off-diagonal' terms $\mathscr{Z}_{\ne}$ with $h r_1'r_2'\ne h' r_1r_2$.
\begin{equation}
\mathscr{Z}_3\le \mathscr{Z}_{=}+\mathscr{Z}_{\ne}.
\label{eq:Z4Split}
\end{equation}
We first consider the diagonal terms. Given a choice of $h,r_1',r_2'$ there are $x^{o(1)}$ choices of $h',r_1,r_2$ by the divisor bound. Thus, estimating the remaining sums trivially we have
\begin{equation}
\mathscr{Z}_{=}\ll x^{o(1)} Q K R_0 R^2 H N \Bigl(\frac{N}{Q R_0 K }+1\Bigr)\ll \frac{N^3 Q K R_0 R^4}{x^{1-2\epsilon}}.
\label{eq:ZEq}
\end{equation}
(Here we used the assumption that $N>Q K R_0$.)
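As an informal numerical sanity check of the diagonal counting (not part of the formal argument), one can brute-force the equation $h r_1' r_2'=h' r_1 r_2$ over small parameter ranges and confirm that each fixed $(h,r_1',r_2')$ has at most $\tau_3(hr_1'r_2')=x^{o(1)}$ diagonal partners; the helper names below are purely illustrative.

```python
def tau3(n):
    # number of ordered factorisations n = a * b * c (the triple divisor function)
    count = 0
    for a in range(1, n + 1):
        if n % a:
            continue
        m = n // a
        for b in range(1, m + 1):
            if m % b == 0:  # c = m // b is then determined
                count += 1
    return count

def diagonal_choices(n, H, R):
    # choices of (h', r1, r2) with h' * r1 * r2 = n, h' in [1, H], r1, r2 in [R, 2R)
    return sum(1 for hp in range(1, H + 1) if n % hp == 0
               for r1 in range(R, 2 * R) if (n // hp) % r1 == 0
               and R <= (n // hp) // r1 < 2 * R)

H, R = 4, 3
for h in range(1, H + 1):
    for r1p in range(R, 2 * R):
        for r2p in range(R, 2 * R):
            n = h * r1p * r2p
            # the divisor bound: at most tau_3(n) diagonal partners
            assert diagonal_choices(n, H, R) <= tau3(n)
```

Since $\tau_3(n)\le n^{o(1)}$, this is exactly the $x^{o(1)}$-choices count used in the estimate \eqref{eq:ZEq}.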
Now we consider the off-diagonal terms. By Lemma \ref{lmm:InverseCompletion}, for $L:=x^\epsilon Q K R_0 R^2/N$ we have that
\begin{align*}
&\sum_{\substack{n_2\equiv n_1 a_{q,k,r_0r_2}\overline{a_{q,k,r_0r_1}}\Mod{q k r_0} \\ (n_2,r_2r_2')=1}}\psi\Bigl(\frac{n_2}{N}\Bigr)e\Bigl(\frac{c_2\overline{n_2}}{r_2 r_2'}\Bigr)=O(x^{-100})\\
&+\frac{N}{q k r_0 r_2 r_2'}\sum_{|\ell_2|\le L}\hat{\psi}\Bigl(\frac{\ell_2 N}{q k r_0 r_2r_2'}\Bigr)S(c_2,\ell_2\overline{q k r_0};r_2r_2')e\Bigl(\frac{\ell_2 n_1 a_{q,k,r_0r_2}\overline{a_{q,k,r_0r_1}r_2r_2'}}{q k r_0}\Bigr).
\end{align*}
Here $S(m,n;c)$ is the standard Kloosterman sum, and we used the fact that $(q k r_0,r_2r_2')=1$. By Lemma \ref{lmm:InverseCompletion} again, we have that
\begin{align*}
&\sum_{(n_1,q k r_0 r_1 r_1')=1}\psi\Bigl(\frac{n_1}{N}\Bigr)e\Bigl(\frac{c_1\overline{n_1}}{q k r_0 r_1 r_1'}\Bigr)e\Bigl(\frac{\ell_2 n_1 a_{q,k,r_0r_2}\overline{a_{q,k,r_0r_1}r_2r_2'}}{q k r_0}\Bigr)\\
&=\frac{N}{q k r_0 r_1r_1'}\sum_{|\ell_1|\le L}\hat{\psi}\Bigl(\frac{\ell_1 N}{q k r_0 r_1 r_1'}\Bigr)S(c_1,\ell_1+\ell_2c_3;q k r_0 r_1 r_1')+O(x^{-100}),
\end{align*}
where $c_3\Mod{q k r_0r_1r_1'}$ is defined by
\[
c_3:= r_1r_1' a_{q,k,r_0r_2}\overline{a_{q,k,r_0r_1}r_2r_2'}.
\]
Thus, we see that $\mathscr{Z}_5$ is a sum of Kloosterman sums, given explicitly by
\begin{align*}
\mathscr{Z}_5&=\frac{N^2}{q^2 k^2 r_0^2 r_1 r_1'r_2 r_2'}\sum_{\substack{|\ell_1|\le L\\ |\ell_2|\le L}}\hat{\psi}\Bigl(\frac{\ell_2 N}{q k r_0 r_2r_2'}\Bigr)\hat{\psi}\Bigl(\frac{\ell_1 N}{q k r_0 r_1 r_1'}\Bigr)S(c_2,\ell_2\overline{q k r_0};r_2r_2')\\
&\qquad\times S(c_1,\ell_1+\ell_2c_3;q k r_0r_1 r_1')+O(x^{-10}).
\end{align*}
By the standard Kloosterman sum bound $S(m,n;c)\ll \tau(c) c^{1/2}(m,n,c)^{1/2}\ll c^{1/2+o(1)}(m,c)^{1/2}$ (Lemma \ref{lmm:Kloosterman}), we therefore obtain
\begin{align*}
\mathscr{Z}_5&\ll \frac{x^{o(1)}N^2}{Q^2 K^2 R_0^2 R^4}\sum_{\substack{|\ell_1|\le L\\ |\ell_2|\le L}}Q^{1/2}K^{1/2} R_0^{1/2} R^2 (c_2,r_2r_2')^{1/2}(c_1,r_1r_1')^{1/2}(c_1,k r_0 q)^{1/2}\\
&\ll x^{3\epsilon}Q^{1/2}K R_0 R^2 (h,r_1r_2)^{1/2}(h',r_1'r_2')^{1/2}(r_1'r_2',r_1r_2) (hr_1'r_2'-h' r_1r_2,q)^{1/2}.
\end{align*}
In the final line above we used the fact that $a_{q,k,r}\equiv a'_{q,k,r}\Mod{q}$ and $(a_{q,k,r},q k r)=1$ to remove the dependencies on the residue classes. Substituting this into our expression for $\mathscr{Z}_{\ne}$ gives
\begin{align}
\mathscr{Z}_{\ne}&\ll x^{3\epsilon} Q^{1/2} K R_0 R^2\sum_{r_1,r_1'\sim R}\sum_{r_2,r_2'\sim R}(r_1r_1',r_2r_2')\sum_{\substack{1\le |h|,|h'|\le H\\ hr_1'r_2'\ne h'r_1r_2}}(h,r_1r_2)(h',r_1'r_2')\nonumber\\
&\qquad\times\sum_{k\sim K} \sum_{r_0\sim R_0}\sum_{q\sim Q}(hr_1'r_2'-h' r_1r_2,q)\nonumber\\
&\ll x^{4\epsilon} K^2 R_0^2 Q^{3/2} R^6 H^2\nonumber\\
&\ll \frac{N^2 Q^{7/2} R^{10} K^4 R_0^4}{ x^{2-6\epsilon}}.\label{eq:ZNeq}
\end{align}
Substituting \eqref{eq:ZEq} and \eqref{eq:ZNeq} into \eqref{eq:Z4Split} gives
\[
\mathscr{Z}_3\ll \frac{N^3 Q R^4 K R_0}{ x^{1-2\epsilon}}+\frac{N^2 Q^{7/2} R^{10} K^4 R_0^4}{x^{2-6\epsilon}}.
\]
This gives the desired bound \eqref{eq:ZhangE4} provided we have
\begin{align}
N&<\frac{x^{1-6\epsilon} }{Q K^2},\\
K^{5} R_0^3 Q^{7/2} R^{6}&<x^{2-10\epsilon}.
\end{align}
This gives the result.
\end{proof}
\begin{lmm}\label{lmm:ZhangConclusion}
Let $A,B>0$ and let $B_2=B_2(A,B)$ be sufficiently large in terms of $A,B$. Let $Q,K,R,N,M\ll x^{O(1)}$ satisfy
\begin{align*}
Q K R\ll x^{1/2+\delta},\quad MN\asymp x,\quad K=x^{o(1)},\quad x^{2\delta+\epsilon} Q< N <\frac{x^{1-7\epsilon}}{Q},\quad Q^7 R^{12}<x^{4-21\epsilon}.
\end{align*}
Let $b_q$, $a_{q,k,r},a'_{q,k,r}$ be integer sequences with $(b_q,q)=(a_{q,k,r},q k r)=(a'_{q,k,r},q k r)=1$ and $b_q\equiv a_{q,k,r}\equiv a'_{q,k,r}\Mod{q}$. Let $c_{q,k,r}$ be a 1-bounded sequence with $c_{q,k,r}$ supported on square-free $r$ with $P^-(r)\ge z_0:=x^{1/(\log\log{x})^3}$ and $r,q,k$ pairwise coprime and let $|\alpha_n|\le \tau(n)^B$ satisfy the Siegel-Walfisz condition \eqref{eq:SiegelWalfisz}. Let $\mathscr{Z}$ be given by
\[
\mathscr{Z}:=\sum_{q\sim Q}\sum_{k\sim K}\sum_{r_1,r_2\sim R}c_{q,k,r_1}\overline{c_{q,k,r_2}}\hspace{-0.5cm}\sum_{\substack{n_1,n_2\sim N\\ \tau(n_1),\tau(n_2)\le (\log{x})^{B_2}}}\alpha_{n_1}\overline{\alpha_{n_2}}\sum_{\substack{m n_1 \equiv a_{q,k,r_1}\Mod{q k r_1} \\ m n_2\equiv a'_{q,k,r_2}\Mod{q k r_2} }}\psi\Bigl(\frac{m}{M}\Bigr).
\]
Then we have
\[
\mathscr{Z}=\mathscr{Z}_{MT}+O_A\Bigl(\frac{MN^2}{Q K(\log{x})^A}\Bigr),
\]
where for some constant $C_1=C_1(A,B,B_2)$
\[
\mathscr{Z}_{MT}=\sum_{q\sim Q}\sum_{k\sim K}\sum_{r_0\le N/((\log{x})^{C_1} K Q)}\sum_{\substack{r_1',r_2'\sim R/r_0\\ (r_1',r_2')=1}}\sum_{\substack{n_1,n_2\sim N\\ (n_1,q k r_0r_1')=1\\ (n_2,q k r_0r_2')=1\\ \tau(n_1),\tau(n_2)\le (\log{x})^{B_2}}}\frac{\alpha_{n_1}\overline{\alpha_{n_2}}c_{q,k,r_0r_1'}\overline{c_{q,k,r_0r_2'}}M\hat{\psi}(0)}{q\phi(q k r_0) k r_0 r_1' r_2'}.
\]
\end{lmm}
\begin{proof}
We consider $\mathscr{Z}$. Let $r_0=(r_1,r_2)$ and $r_1=r_0r_1'$, $r_2=r_0r_2'$. The congruence conditions on $m$ have no solutions unless $n_1\overline{a_{q,k,r_0r_1'}}\equiv n_2\overline{a'_{q,k,r_0r_2'}}\Mod{q k r_0}$ and $(n_1,q k r_0r_1')=(n_2,q k r_0r_2')=1$. We split the summations of $\mathscr{Z}$ according to the size of $r_0$. Thus we see it suffices to show that for a suitable constant $C_1=C_1(A,B,B_2)$
\[
\mathscr{Z}(R_0)=\begin{cases}
\mathscr{Z}_{MT}(R_0)+O_A\Bigl(\frac{MN^2}{ Q K (\log{x})^{A} }\Bigr),\qquad &R_0\le N/((\log{x})^{C_1} Q K),\\
O_A\Bigl(\frac{MN^2}{ Q K (\log{x})^{A} }\Bigr), &R_0> N/((\log{x})^{C_1} Q K).
\end{cases}
\]
where $\mathscr{Z}_{MT}(R_0)$ is $\mathscr{Z}_{MT}$ with the $r_0$ summation restricted to $r_0\sim R_0$, and $\mathscr{Z}(R_0)$ is given by
\begin{align*}
\mathscr{Z}(R_0)&:=\sum_{q\sim Q}\sum_{k\sim K}\sum_{r_0\sim R_0}\sum_{\substack{r_1',r_2'\sim R/r_0\\ (r_1',r_2')=1}}c_{q,k,r_0r_1'}\overline{c_{q,k,r_0r_2'}}\hspace{-1cm}\sum_{\substack{n_1,n_2\sim N\\ n_1\overline{a_{q,k,r_0r_1'}}\equiv n_2\overline{a'_{q,k,r_0r_2'}} \Mod{q k r_0}\\ (n_1,q k r_0 r_1')=1\\ (n_2,q k r_0 r_2')=1 \\ \tau(n_1),\tau(n_2)\le (\log{x})^{B_2}}}\hspace{-1cm}\alpha_{n_1}\overline{\alpha_{n_2}}\\
&\times \sum_{\substack{m\sim M\\ n_1m\equiv a_{q,k,r_0r_1'}\Mod{q k r_0 r_1'}\\ n_2m\equiv a'_{q,k,r_0r_2'}\Mod{r_2'}}}\psi\Bigl(\frac{m}{M}\Bigr).
\end{align*}
If $R_0> N/((\log{x})^{C_1}Q K)$ for $C_1$ sufficiently large in terms of $A,B,B_2$, then $\mathscr{Z}(R_0)\ll_{A,B,B_2} MN^2/( (\log{x})^A Q K )$ by Lemma \ref{lmm:GCD}, as required. Thus we only need to consider $R_0<N/((\log{x})^{C_1}QK)$ for some $C_1(A,B,B_2)$ sufficiently large. By the same argument as Lemma \ref{lmm:Fourier}, provided $B_2=B_2(A,B)$ is sufficiently large in terms of $A,B$ and $C_1$ is sufficiently large in terms of $A,B$, it suffices to show that for some sufficiently large constant $A_2=A_2(A,B,B_2)$
\[
\mathscr{Z}'\ll_{A_2} \frac{N^2 R'{}^2 R_0}{(\log{x})^{A_2} },
\]
where for some 1-bounded sequence $\alpha'_n$
\begin{align*}
\mathscr{Z}'&:=\sum_{\substack{q\sim Q}}\sum_{k\sim K}\sum_{r_0\sim R_0}\sum_{\substack{r_1',r_2'\sim R'\\ (r_1',r_2')=1}}\sum_{\substack{n_1,n_2\sim N\\ n_1\overline{a_{q,k,r_0r_1'}}\equiv n_2\overline{a'_{q,k,r_0r_2'}}\Mod{q k r_0}}}\hspace{-1cm}\alpha'_{n_1}\overline{\alpha'_{n_2}}c_{q,k,r_0 r_1'}\overline{c_{q,k,r_0r_2'}}\\
&\qquad\times \sum_{1\le |h|\le H}\hat{\psi}\Bigl(\frac{h M}{q k r_0 r_1' r_2'}\Bigr)e\Bigl(\frac{a_{q,k,r_0 r_1'}h\overline{n_1 r_2'}}{q k r_0 r_1'}\Bigr)e\Bigl(\frac{a'_{q,k,r_0 r_2'}h\overline{n_2 q k r_0 r_1'}}{r_2'}\Bigr),
\end{align*}
and $R'\asymp R/R_0$, $H:=N Q K R_0 R'{}^2/x^{1-\epsilon}$. Lemma \ref{lmm:Zhang} then gives the desired result.
\end{proof}
\begin{proof}[Proof of Proposition \ref{prpstn:Zhang}]
By the Bombieri-Vinogradov Theorem for convolutions, since $Q_1<x^{1/2-\epsilon}$ we have
\[
\sum_{q_1\sim Q_1}\sup_{(b,q_1)=1}\sum_{q_2\sim Q_2}\Bigl|\sum_{m\sim M}\beta_m\hspace{-0.2cm}\sum_{\substack{n\sim N\\ (n m,q_1q_2)=1}}\hspace{-0.2cm}\frac{\phi(q_1)\alpha_n}{\phi(q_1q_2)}\Bigl(\mathbf{1}_{\substack{n m\equiv b\Mod{q_1}}}-\frac{1}{\phi(q_1)}\Bigr)\Bigr|\ll_A \frac{x}{(\log{x})^A}.
\]
Thus it suffices to show that
\[
\sum_{q_1\sim Q_1}\sup_{(b,q_1)=1}\sum_{q_2\sim Q_2}\sup_{\substack{(a,q_1q_2)=1\\ a\equiv b\Mod{q_1}}}|\Delta(a,b;q_1,q_2)|\ll_A \frac{x}{(\log{x})^A},
\]
where
\[
\Delta(a,b;q_1,q_2):= \sum_{m\sim M}\beta_m\sum_{\substack{n\sim N\\ (n m,q_1 q_2)=1}}\alpha_n \Bigl(\mathbf{1}_{n m\equiv a\Mod{q_1 q_2}}-\frac{\phi(q_1)\mathbf{1}_{n m\equiv b\Mod{q_1}}}{\phi(q_1 q_2)}\Bigr).
\]
Given $q_1,q_2$, let $q:=q_1q_2$ and factor $q=q^\square q^{\notsquare}$ with $q^\square$ square-full and $q^{\notsquare}$ square-free. Let $q_1':=(q^{\notsquare},q_1)$ and $q_2':=(q^{\notsquare},q_2)$ and $q_1=q_1' q_1''$, $q_2=q_2' q_2''$ for suitable $q_1'',q_2''$ which have $q_1''q_2''$ square-full. Finally, let $q_2'=q_2^-q_2^+$ with $P^+(q_2^-)\le z_0:=x^{1/(\log\log{x})^3}$ and $P^-(q_2^+)>z_0$. Then we see that $(q_2^+,q_1'q_1''q_2''q_2^-)=(q_1',q_2^-q_2^+q_2''q_1'')=1$. Putting each of $q_1',q_1'',q_2^+,q_2^-,q_2''$ into dyadic intervals, and relaxing the condition $a\equiv b\Mod{q_1'q_1''}$ to $a\equiv b\Mod{q_1'}$ for an upper bound, we see it suffices to show that for every $A>0$
\[
\sum_{\substack{q_1'\sim Q_1'\\ \mu^2(q_1')=1}}\sup_{(b,q_1')=1}\sum_{\substack{q_1'' \sim Q_1'' \\ q_2'' \sim Q_2'' \\ q_1''q_2'' \text{square-full}\\ (q_1''q_2'',q_1')=1}}\sum_{\substack{q_2^-\sim Q_2^-\\ P^+(q_2^-)\le z_0\\ (q_2^-,q_1')=1}}\sum_{\substack{q_2^+\sim Q_2^+\\ (q_2^+,q_1'q_1''q_2''q_2^-)=1\\ \mu^2(q_2^+)=1\\ P^-(q_2^+)>z_0}}\sup_{\substack{(a,q)=1\\ a\equiv b\Mod{q_1'}}}|\Delta|\ll_A \frac{x}{(\log{x})^{A}},
\]
for all choices of $Q_1',Q_1'',Q_2'',Q_2^-,Q_2^+$ with $Q_1'Q_1''\asymp Q_1$ and $Q_2'' Q_2^-Q_2^+\asymp Q_2$. Here we have written $q$ to represent $q_1'q_1''q_2''q_2^- q_2^+$ and $\Delta$ to represent $\Delta(a,b;q_1' q_1'',q_2'' q_2^-q_2^+)$. By Lemmas \ref{lmm:Squarefree} and \ref{lmm:Smooth} we see that we only need to consider $Q_1'',Q_2'', Q_2^-\ll x^{o(1)}$. In particular, $Q_1'=Q_1x^{-o(1)}$ and $Q_2^+=Q_2x^{-o(1)}$. Letting $q=q_1'$, $k= q_1'' q_2'' q_2^-$ and $r=q_2^+$, and relaxing the constraint $a\equiv b\Mod{q_1' q_1''}$ to $a\equiv b\Mod{q_1'}$, we see that it suffices to show for all choices of $Q=Q_1x^{-o(1)}, R=Q_2x^{-o(1)}$ and $K=x^{o(1)}$ and $C>0$
\[
\sum_{q\sim Q}\mu^2(q)\sup_{(b,q)=1}\sum_{\substack{k\sim K\\ (k,q)=1}}\sum_{\substack{r\sim R\\ (r,k q)=1 \\ P^-(r)\ge z_0}}\mu^2(r)\sup_{\substack{(a,k r q)=1\\ a\equiv b\Mod{q}}}|\Delta(a,b;q,k r)|\ll_C\frac{x}{(\log{x})^C}.
\]
We see that for $(q,kr)=1$
\[
\Delta(a,b;q,k r)=\frac{1}{\phi(k r)}\sum_{\substack{a'\Mod{q k r}\\ (a',q k r)=1\\ a'\equiv b\Mod{q} }}\tilde{\Delta}(a,a';q k r)\ll\sup_{\substack{(a',q k r)=1\\ a'\equiv b\Mod{q}}}|\tilde{\Delta}(a,a';q k r)|,
\]
where
\[
\tilde{\Delta}(a,a';q k r):= \sum_{m\sim M}\beta_m\sum_{\substack{n\sim N\\ (nm,q k r)=1}}\alpha_n \Bigl(\mathbf{1}_{n m\equiv a\Mod{q k r}}-\mathbf{1}_{n m \equiv a'\Mod{q k r}}\Bigr).
\]
Thus it suffices to show that
\[
\sum_{q\sim Q}\mu^2(q)\sup_{(b,q)=1}\sum_{\substack{k\sim K \\ (k,q)=1}}\sum_{\substack{r\sim R\\ (r, k q)=1 \\ P^-(r)\ge z_0}}\mu^2(r)\sup_{\substack{a,a'\\ (a a',k r q)=1\\ a'\equiv a\equiv b\Mod{q}}}|\tilde{\Delta}(a,a',q k r)|\ll_C\frac{x}{(\log{x})^C}.
\]
Let the suprema occur at $b=b_q$, $a=a_{q,k,r}$, and $a'=a'_{q,k,r}$, and insert 1-bounded coefficients $c_{q,k,r}$ to remove the absolute values. We may restrict the support of $c_{q,k,r}$ to $q,k,r$ pairwise coprime with $q r$ square-free and $P^-(r)\ge z_0$. Thus it suffices to show that
\[
\mathscr{Z}_0:=\sum_{q\sim Q}\sum_{k\sim K}\sum_{r\sim R}c_{q,k,r}\tilde{\Delta}(a_{q,k,r},a'_{q,k,r},q k r)\ll_C\frac{x}{(\log{x})^C}.
\]
By Lemma \ref{lmm:Divisor} and the trivial bound, the contribution from $\tau(n)\ge (\log{x})^{B_2}$ is negligible for $B_2\ge B_0(C)$ suitably large in terms of $C$, and so we may restrict to $\tau(n)\le (\log{x})^{B_2}$.
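The restriction to $\tau(n)\le(\log{x})^{B_2}$ is essentially free because integers with abnormally many divisors are very rare. As an informal numerical illustration (with parameters chosen arbitrarily, not matching any constant in the argument), one can sieve divisor counts, compare $\sum_{n\le X}\tau(n)$ with the Dirichlet asymptotic $X\log X+(2\gamma-1)X$, and check that no $n\le X$ has $\tau(n)>(\log X)^3$:

```python
import math

def divisor_counts(X):
    # sieve tau(n) for 1 <= n <= X in O(X log X) operations
    tau = [0] * (X + 1)
    for d in range(1, X + 1):
        for m in range(d, X + 1, d):
            tau[m] += 1
    return tau

X = 10 ** 5
tau = divisor_counts(X)
total = sum(tau[1:])
gamma = 0.5772156649015329  # Euler-Mascheroni constant
# Dirichlet: sum_{n <= X} tau(n) = X log X + (2 gamma - 1) X + O(sqrt(X))
main_term = X * math.log(X) + (2 * gamma - 1) * X
rel_err = abs(total - main_term) / main_term
threshold = int(math.log(X) ** 3)  # analogue of (log x)^{B_2} with B_2 = 3
exceptional = sum(1 for n in range(1, X + 1) if tau[n] > threshold)
```

By Markov's inequality the number of exceptional $n$ is at most $\sum_{n\le X}\tau(n)/(\log X)^{B_2}$, which is negligible for $B_2$ large; in this small range there are in fact none at all.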
We substitute the definition of $\tilde{\Delta}$ and apply Cauchy-Schwarz in $q,k,m$. Inserting a smooth majorant for the $m$ summation, we see that
\begin{align*}
\mathscr{Z}_0^2&\ll \mathscr{Z}_1:=Q K M \sum_{q\sim Q}\sum_{k\sim K}\sum_{m}\psi\Bigl(\frac{m}{M}\Bigr)|\mathscr{Z}_0'|^2
\end{align*}
where
\begin{align*}
\mathscr{Z}_0'&:=\sum_{r\sim R}c_{q,k,r}\sum_{\substack{n\sim N\\ \tau(n)\le (\log{x})^{B_2}}}\alpha_n\Bigl(\mathbf{1}_{n m\equiv a_{q,k,r}\Mod{q k r}}-\mathbf{1}_{m n\equiv a'_{q,k,r}\Mod{q k r}}\Bigr).
\end{align*}
Thus it suffices to show that $\mathscr{Z}_1\ll M^2 N^2/(\log{x})^{2C}$. Expanding the square and swapping the order of summation, we see that it suffices to show that for any sequences $b_q,a_{q,k,r},a'_{q,k,r}$ with $a_{q,k,r}\equiv a'_{q,k,r}\equiv b_q\Mod{q}$ and $(b_q,q)=(a_{q,k,r},q k r)=(a'_{q,k,r},q k r)=1$ we have
\[
\mathscr{Z}_2= \mathscr{Z}_{MT}+O_C\Bigl(\frac{MN^2}{Q K (\log{x})^{2C}}\Bigr),
\]
where for some $C_1=C_1(A,B,B_2)$
\begin{align*}
\mathscr{Z}_2&:=\sum_{q\sim Q}\sum_{k\sim K}\sum_{r_1,r_2\sim R}c_{q,k,r_1}\overline{c_{q,k,r_2}}\sum_{\substack{n_1,n_2\sim N\\ \tau(n_1),\tau(n_2)\le (\log{x})^{B_2}} }\alpha_{n_1}\overline{\alpha_{n_2}}\sum_{\substack{m\sim M\\ n_1m\equiv a_{q,k,r_1}\Mod{q k r_1}\\ n_2m\equiv a'_{q,k,r_2}\Mod{q k r_2}}}\psi\Bigl(\frac{m}{M}\Bigr),\\
\mathscr{Z}_{MT}&:=\sum_{q\sim Q}\sum_{k\sim K}\sum_{r_0\le N/((\log{x})^{C_1} K Q)}\sum_{\substack{r_1',r_2'\sim R/r_0\\ (r_1',r_2')=1}}\sum_{\substack{n_1,n_2\sim N\\ (n_1,q k r_0r_1')=1\\ (n_2,q k r_0r_2')=1\\ \tau(n_1),\tau(n_2)\le (\log{x})^{B_2}}}\frac{\alpha_{n_1}\overline{\alpha_{n_2}}c_{q,k,r_0r_1'}\overline{c_{q,k,r_0r_2'}}M\hat{\psi}(0)}{q\phi(q k r_0) k r_0 r_1' r_2'}.
\end{align*}
The result now follows from Lemma \ref{lmm:ZhangConclusion} on choosing $B_2$ sufficiently large in terms of $C$.
\end{proof}
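The factorisations of the moduli used in the proof above, into a square-full part $q^\square$ and a square-free part $q^{\notsquare}$, and the further splitting of the latter into $z_0$-smooth and $z_0$-rough pieces, are elementary. The following sketch (with illustrative helper names, and a small smoothness cutoff in place of $z_0$) computes these decompositions and checks the coprimality properties that the argument relies on.

```python
from math import gcd

def factorise(n):
    # trial-division factorisation; fine for the small moduli used here
    f, d = {}, 2
    while d * d <= n:
        while n % d == 0:
            f[d] = f.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        f[n] = f.get(n, 0) + 1
    return f

def squarefull_part(n):
    # q^square: product of p^e over primes p dividing n with e >= 2
    out = 1
    for p, e in factorise(n).items():
        if e >= 2:
            out *= p ** e
    return out

def smooth_rough_split(n, z):
    # n = smooth * rough with P^+(smooth) <= z < P^-(rough)
    smooth = rough = 1
    for p, e in factorise(n).items():
        if p <= z:
            smooth *= p ** e
        else:
            rough *= p ** e
    return smooth, rough

for q in range(2, 3000):
    q_full = squarefull_part(q)
    q_free = q // q_full  # the square-free part
    assert q_full * q_free == q and gcd(q_full, q_free) == 1
    assert all(e == 1 for e in factorise(q_free).values())  # genuinely square-free
    s, r = smooth_rough_split(q_free, 10)
    assert s * r == q_free and gcd(s, r) == 1
```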
\section{Triple divisor function}\label{sec:Triple}
Finally, we establish Proposition \ref{prpstn:Triple}. As mentioned previously, this is essentially an estimate for the triple divisor function convolved with a short rough sequence.
Friedlander-Iwaniec \cite{FIDivisor} were the first to show that the triple divisor function $\tau_3(n)$ is equidistributed in arithmetic progressions to modulus $q=x^{1/2+\delta}$. This was uniform in the residue class and worked for each individual $q$, but would only allow for an additional factor $M<x^c$ for some very small constant $c$. Instead we take an approach which follows that of \cite{Polymath} to allow for a larger value of $M$. (It is vital for our argument that we can almost get to $x^{1/10}$.) There are additional technical complications in our situation because the original argument of \cite{Polymath} was not completely uniform in the residue class. To resolve this we need to rework several of their arguments slightly, going back to the underlying estimates for sums over $\mathbb{F}_p$. We also require an argument that only has logarithmic losses and isn't limited to square-free moduli, which necessitates more technical care at several stages.
As with previous work on the triple divisor function, the key technical ingredient concerns correlations of hyper-Kloosterman sums, which relies on extensions of Deligne's work \cite{DeligneApp}. It is crucial for our argument that we can also handle twists by a suitable additive character, which gives a small additional saving to address issues coming from the uniformity of the residue classes under consideration.
\begin{lmm}[Bound for correlations of Kloosterman sums]\label{lmm:KloostermanCorrelation}
Let
\[
\Kl_3(b;q):=\frac{1}{q}\sum_{\substack{b_1,b_2,b_3\in \mathbb{Z}/q\mathbb{Z}\\ b_1b_2b_3=b\Mod{q}}}e\Bigl(\frac{b_1+b_2+b_3}{q}\Bigr).
\]
We have that for any prime $p$
\[
\sum_{\substack{b\Mod{p}\\ (b,p)=1}}e\Bigl(\frac{c_1 b}{p}\Bigr)\Kl_3(b;p)\overline{\Kl_3(c_2b;p)}\ll p^{1/2}
\]
unless $c_1\equiv 0\Mod{p}$ and $c_2\equiv 1\Mod{p}$.
\end{lmm}
\begin{proof}
This follows from \cite[Proposition 6.11]{Polymath}.
\end{proof}
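As a brute-force sanity check of the normalisation (purely illustrative; the lemma itself rests on Deligne's bounds via \cite[Proposition 6.11]{Polymath}), one can compute $\Kl_3$ directly for a small prime and verify the Deligne bound $|\Kl_3(b;p)|\le 3$, the conjugation symmetry $\overline{\Kl_3(b;p)}=\Kl_3(-b;p)$, and that the degenerate correlation at $(c_1,c_2)=(0,1)$ is the (real, positive) sum $\sum_b|\Kl_3(b;p)|^2$.

```python
import cmath

def e(x):
    return cmath.exp(2j * cmath.pi * x)

def kl3(b, p):
    # Kl_3(b; p) = (1/p) * sum over b1*b2*b3 = b mod p of e((b1 + b2 + b3)/p)
    s = 0.0
    for b1 in range(1, p):
        for b2 in range(1, p):
            b3 = (b * pow(b1 * b2, -1, p)) % p  # b3 is determined mod p
            s += e((b1 + b2 + b3) / p)
    return s / p

p = 13
vals = {b: kl3(b, p) for b in range(1, p)}

def correlation(c1, c2):
    # the sum bounded in the lemma: sum_b e(c1 b / p) Kl_3(b) conj(Kl_3(c2 b))
    return sum(e(c1 * b / p) * vals[b] * vals[(c2 * b) % p].conjugate()
               for b in range(1, p))

degenerate = correlation(0, 1)  # equals sum_b |Kl_3(b; p)|^2
```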
\begin{lmm}[Completion of sums]\label{lmm:TripleCompletion}
Let $B>0$. Let $\alpha_m$ and $c_{q,r,s}$ be 1-bounded complex sequences with $c_{q,r,s}$ supported on $\tau(qr)\ll (\log{x})^B$. Let $a_{t}$ be a sequence of integers satisfying $(a_{t},t)=1$ for all $t$. Let $\psi_1,\psi_2,\psi_3$ be smooth functions supported on $[1,2]$ with $\|\psi^{(j)}_1\|_\infty,\|\psi^{(j)}_2\|_\infty,\|\psi^{(j)}_3\|_\infty \ll((j+1)\log{x})^{B j}$ for all $j\ge 0$. Let $\mathscr{K}$ and $\mathscr{K}_{MT}$ be given by
\begin{align*}
\mathscr{K}&:=\sum_{s\sim S}\sum_{q\sim Q}\sum_{r\sim R}c_{q,r,s}\sum_{m\sim M}\alpha_m\mathop{\sum_{n_1}\sum_{n_2}\sum_{n_3}}\limits_{n_1n_2n_3m\equiv a_{q r s}\Mod{q r s}}\psi_1\Bigl(\frac{n_1}{N_1}\Bigr)\psi_2\Bigl(\frac{n_2}{N_2}\Bigr)\psi_3\Bigl(\frac{n_3}{N_3}\Bigr),\\
\mathscr{K}_{MT}&:=N_1N_2 N_3\hat{\psi}_1(0)\hat{\psi}_2(0)\hat{\psi}_3(0)\sum_{s\sim S}\sum_{q\sim Q}\sum_{r\sim R}c_{q,r,s}\sum_{\substack{m\sim M\\ (m,q r s)=1}}\alpha_m\frac{\phi(q r s)^2}{q^3 r^3 s^3},
\end{align*}
for some quantities $N_1\le N_2\le N_3\le x$ and $Q R S\le x$ with $MN_1N_2N_3\asymp x$. Then we have
\begin{align*}
\mathscr{K}&=\mathscr{K}_{MT}+\frac{N_1N_2N_3}{Q^3 R^3 S^3}\sum_{s\sim S}\mathscr{K}_2+O_B\Bigl(\frac{x(\log{x})^{3B}}{N_1}\Bigr),
\end{align*}
where
\begin{align*}
\mathscr{K}_2:&=\sum_{q\sim Q}\sum_{r\sim R}c'_{q,r,s}\sum_{\substack{m\sim M\\ (m,q r s)=1}}\alpha_m\sum_{\substack{1\le |h_1|\le H_1 \\ 1\le |h_2|\le H_2\\ 1\le |h_3|\le H_3}}\hat{\psi_1}\Bigl(\frac{N_1 h_1}{q r s}\Bigr)\hat{\psi_2}\Bigl(\frac{N_2h_2}{q r s}\Bigr)\hat{\psi_3}\Bigl(\frac{N_3h_3}{q r s}\Bigr)\\
&\qquad\times F(h_1,h_2,h_3;a_{q r s}\overline{m};q r s),\\
H_i&=(\log{x})^{2B+1}\frac{Q R S}{N_i},\qquad (i\in\{1,2,3\})\\
c'_{q,r,s}&=\frac{Q^3 R^3 S^3}{q^3 r^3 s^3}c_{q,r,s}.
\end{align*}
Here $F$ is the function of Lemma \ref{lmm:FSum}.
\end{lmm}
\begin{proof}
Let $t=q r s$. We see that $\mathscr{K}$ is given by
\begin{equation}
\mathscr{K}=\sum_{q\sim Q}\sum_{r\sim R}\sum_{s\sim S}c_{q,r,s}\mathscr{K}'(q r s)
\label{eq:KSum}
\end{equation}
where
\begin{align}
\mathscr{K}'(t):=\sum_{\substack{m\sim M\\ (m,t)=1}}\alpha_m\sum_{(n_2,t)=1}\sum_{(n_3,t)=1}\psi_2\Bigl(\frac{n_2}{N_2}\Bigr)\psi_3\Bigl(\frac{n_3}{N_3}\Bigr)\sum_{n_1\equiv a_{t}\overline{n_2n_3 m}\Mod{t}}\psi_1\Bigl(\frac{n_1}{N_1}\Bigr).\label{eq:K'Sum}
\end{align}
We concentrate on the inner sum. By Lemma \ref{lmm:Completion}, for $(m n_2 n_3,t)=1$ we have
\begin{align}
\sum_{n_1\equiv a_{t}\overline{m n_2 n_3}\Mod{t}}\psi_1\Bigl(\frac{n_1}{N_1}\Bigr)&=\frac{N_1}{t}\hat{\psi_1}(0)+\sum_{1\le |h_1|\le H_1}\hat{\psi}_1\Bigl(\frac{h_1 N_1}{t}\Bigr)e\Bigl(\frac{a_{t} h_1 \overline{m n_2 n_3}}{t}\Bigr)\nonumber\\
&\qquad+O(x^{-100}),\label{eq:Completion1}
\end{align}
where $H_1:=(\log{x})^{2B+1} Q R S/N_1$. We substitute this expression into \eqref{eq:K'Sum}. The final term of \eqref{eq:Completion1} clearly contributes negligibly to $\mathscr{K}'$. The first term of \eqref{eq:Completion1} contributes a total
\[
\frac{N_1\hat{\psi_1}(0)}{t}\sum_{\substack{m\sim M\\ (m,t)=1}}\alpha_m\sum_{(n_2,t)=1}\sum_{(n_3,t)=1}\psi_2\Bigl(\frac{n_2}{N_2}\Bigr)\psi_3\Bigl(\frac{n_3}{N_3}\Bigr).
\]
By Lemma \ref{lmm:TrivialCompletion} we have
\[
\sum_{(n_2,t)=1}\psi_2\Bigl(\frac{n_2}{N_2}\Bigr)=\frac{N_2\phi(t)}{t}\hat{\psi}_2(0)+O(\tau(t)(\log{x})^{2B}),
\]
and similarly for the $n_3$ summation. Since we only consider $\tau(t)\le (\log{x})^B$, we see that the first term of \eqref{eq:Completion1} contributes
\[
\frac{N_1 N_2 N_3\hat{\psi}_1(0)\hat{\psi}_2(0)\hat{\psi}_3(0)\phi(t)^2}{t^3}\sum_{\substack{m\sim M\\ (m,t)=1}}\alpha_m+O\Bigl(\frac{x(\log{x})^{3B}}{t N_2}+\frac{x(\log{x})^{3B}}{t N_3}\Bigr)
\]
to $\mathscr{K}'$. Substituting this into \eqref{eq:KSum}, we see that this term contributes a total
\[
\mathscr{K}_{MT}+O\Bigl(\frac{x(\log{x})^{3B}}{N_2}+\frac{x(\log{x})^{3B} }{N_3}\Bigr)
\]
to $\mathscr{K}$. Thus we are left to show that the second term of \eqref{eq:Completion1} contributes roughly $\mathscr{K}_2$ to $\mathscr{K}$. Lemma \ref{lmm:InverseCompletion} shows that for $H_2:=Q R S (\log{x})^{2B+1}/N_2$ we have
\begin{align*}
\sum_{(n_2,t)=1}\psi_2&\Bigl(\frac{n_2}{N_2}\Bigr)e\Bigl(\frac{a_{t}h_1\overline{m n_2 n_3}}{t}\Bigr)=\frac{N_2\hat{\psi_2}(0)}{t}\sum_{(b_2,t)=1}e\Bigl(\frac{a_{t}h_1\overline{m n_3} b_2}{t}\Bigr)\\
&+\frac{N_2}{t}\sum_{1\le |h_2|\le H_2}\hat{\psi_2}\Bigl(\frac{h_2 N_2}{t}\Bigr)\sum_{\substack{b_2\Mod{t}\\ (b_2,t)=1}} e\Bigl(\frac{a_{t}h_1\overline{m n_3 b_2}+h_2 b_2}{t}\Bigr)+O_B(x^{-10}).
\end{align*}
The first term above is a multiple of a Ramanujan sum, and so of size $O( N_2\gcd(h_1,t)/t)$. Applying Lemma \ref{lmm:InverseCompletion} again to the $n_3$ sum with $H_3:=Q R S (\log{x})^{2B+1}/N_3$, we find
\begin{align*}
&\sum_{\substack{n_2,n_3\\ (n_2n_3,t)=1}}\psi_2\Bigl(\frac{n_2}{N_2}\Bigr)\psi_3\Bigl(\frac{n_3}{N_3}\Bigr)e\Bigl(\frac{a_t h_1\overline{m n_2n_3}}{t}\Bigr)\\
&=\frac{N_2N_3}{t^2}\sum_{\substack{1\le |h_2|\le H_2\\ 1\le |h_3|\le H_3}}\hat{\psi_2}\Bigl(\frac{h_2 N_2}{t}\Bigr)\hat{\psi_3}\Bigl(\frac{h_3 N_3}{t}\Bigr)\sum_{b_2,b_3\in(\mathbb{Z}/t\mathbb{Z})^\times }e\Bigl(\frac{a_t h_1\overline{b_2 b_3 m}+h_2 b_2+h_3 b_3}{t}\Bigr)\\
&\qquad+O_B\Bigl(\frac{N_2N_3}{Q R S}(h_1,t)\Bigr).
\end{align*}
Putting this all together, we obtain
\begin{align*}
\mathscr{K}&=\mathscr{K}_{MT}+\frac{N_1N_2N_3}{Q^3 R^3 S^3}\sum_{s\sim S}\mathscr{K}_2+O_B\Bigl(\frac{x(\log{x})^{3B} }{N_1}\Bigr),
\end{align*}
where
\begin{align*}
\mathscr{K}_2:&=\sum_{q\sim Q}\sum_{r\sim R}c'_{q,r,s}\sum_{\substack{m\sim M\\ (m,q r s)=1}}\alpha_m\sum_{\substack{1\le |h_1|\le H_1 \\ 1\le |h_2|\le H_2\\ 1\le |h_3|\le H_3}}\hat{\psi_1}\Bigl(\frac{N_1 h_1}{q r s}\Bigr)\hat{\psi_2}\Bigl(\frac{N_2h_2}{q r s}\Bigr)\hat{\psi_3}\Bigl(\frac{N_3h_3}{q r s}\Bigr)\\
&\qquad\times\sum_{\substack{b_1,b_2,b_3\in(\mathbb{Z}/q r s\mathbb{Z})^\times\\ b_1b_2b_3\equiv a_{q r s}\overline{m}\Mod{q r s}}}e\Bigl(\frac{h_1b_1+h_2b_2+h_3b_3}{q r s}\Bigr),\\
c'_{q,r,s}&=\frac{Q^3 R^3 S^3}{q^3 r^3 s^3}c_{q,r,s}.
\end{align*}
We see that the final sum over $b_1,b_2,b_3$ is $F(h_1,h_2,h_3;a_{q r s}\overline{m};q r s)$. This gives the result.
\end{proof}
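The completion steps above are all instances of Poisson summation in a residue class: for smooth $\psi$ supported on $[1,2]$ one has $\sum_{n\equiv a\Mod{q}}\psi(n/N)=\frac{N}{q}\sum_{h\in\mathbb{Z}}\hat{\psi}(hN/q)e(ah/q)$. The following numerical sketch (with an arbitrary explicit bump function, chosen only for illustration) verifies this identity; the truncation length is generous since $\hat{\psi}$ decays faster than any polynomial.

```python
import numpy as np

def psi(x):
    # a smooth bump supported on [1, 2]
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)
    inside = (x > 1) & (x < 2)
    xi = x[inside]
    out[inside] = np.exp(-1.0 / ((xi - 1.0) * (2.0 - xi)))
    return out

u = np.linspace(1.0, 2.0, 8001)
w = psi(u)

def psi_hat(t):
    # hat{psi}(t) = int psi(u) e(-ut) du, by the trapezoid rule
    integrand = w * np.exp(-2j * np.pi * u * t)
    return complex(np.sum((integrand[1:] + integrand[:-1]) / 2) * (u[1] - u[0]))

N, q, a = 35, 7, 3
# left side: sum over n = a mod q of psi(n / N)
lhs = sum(psi(np.array([n / N]))[0] for n in range(a, 3 * N, q))
# right side: truncated dual sum from Poisson summation
H = 150
rhs = (N / q) * sum(psi_hat(h * N / q) * np.exp(2j * np.pi * a * h / q)
                    for h in range(-H, H + 1))
```

The trapezoid rule is spectrally accurate here because the integrand vanishes to all orders at the endpoints, so the two sides agree to high precision.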
\begin{lmm}[Dealing with dependencies and GCDs]\label{lmm:TripleGCDs}
Let $a_{t},c_{q,r,s},\alpha_m,\mathscr{K}_2$ be as in Lemma \ref{lmm:TripleCompletion}. Then we have
\begin{align*}
\mathscr{K}_2&\ll (\log{x})^3 S^{11}Q R\sum_{d_0\le 4QR}\,\sum_{\substack{d_1\le 2Q, d_2\le 2R\\ d_0|d_1d_2\\ \mu(d_1d_2)^2=1}}\sum_{\substack{e_1,e_2,e_3| d_1^\infty d_2^\infty\\ d_1d_2|d_0e_1e_2e_3}}\frac{\prod_{i=1}^3 (d_0e_i,d_1d_2)}{d_1^2 d_2^2}|\mathscr{K}_3|,
\end{align*}
where
\begin{align*}
\mathscr{K}_3&:=\sum_{\substack{q'\sim Q'}}\sum_{\substack{r'\sim R'}}c''_{q',r'}\sum_{\substack{m\sim M\\ (m,q' r')=1}}\alpha'_m\sum_{\substack{1\le |h_1'|\le H_1' \\ 1\le |h_2'|\le H_2'\\ 1\le |h_3'|\le H_3'\\ (h_1'h_2'h_3',q' r')=1}} \gamma_{h_1'}\gamma_{h_2'}\gamma_{h_3'} \Kl_3(a'_{q', r'}h_1'h_2'h_3'\overline{m}f;q' r'),
\end{align*}
where $H_i'\le H_i/(d_0e_i)$, $Q':=Q/d_1$, $R':=R/d_2$ and $a'_{q',r'}:=a_{d_1d_2q'r's}$, where $f$ with $(f,q'r')=1$ depends only on $s,d_0,d_1,d_2,e_1,e_2,e_3$, and where $|c''_{q',r'}|\le |c'_{q'd_1,r'd_2,s}|$, $|\alpha'_m|\le |\alpha_m|$ and $|\gamma_{h'}|\le 1$ are some 1-bounded complex sequences (depending on $d_0,d_1,d_2,s$).
\end{lmm}
\begin{proof}
First we wish to remove the dependencies between $q,r,h_1,h_2,h_3$ from the $\hat{\psi}_i$ factors. We note that since $\hat{\psi}^{(j)}(x)\ll_{j,k} |x|^{-k}$, we have (uniformly in $s$)
\[
\frac{\partial^{j_1}\partial^{j_2}\partial^{j_3}}{\partial q^{j_1}\partial r^{j_2}\partial h_i^{j_3}}\hat{\psi}_i\Bigl(\frac{N_i h_i}{q r s}\Bigr)\ll_{j_1,j_2,j_3}q^{-j_1} r^{-j_2} h_i^{-j_3}.
\]
Therefore, by partial summation, we have
\[
\mathscr{K}_2\ll (\log{x})^3 \mathscr{K}_2',
\]
where for some $Q'',R'',H_1'',H_2'',H_3''$ (with $H_i''\le H_i$)
\[
\mathscr{K}_2':=\sum_{\substack{q\sim Q\\ q\le Q''}}\sum_{\substack{r\sim R\\ r\le R''}}c'_{q,r,s}\sum_{\substack{m\sim M\\ (m,q r)=1}}\alpha_m\sum_{\substack{1\le |h_1|\le H_1'' \\ 1\le |h_2|\le H_2''\\ 1\le |h_3|\le H_3''}}F(h_1,h_2,h_3;a_{q r s}\overline{m};q r s).
\]
We now wish to simplify the $F$ sum by extracting GCDs. We recall that we only need to consider $q,r,s$ pairwise coprime with $qr$ square-free. Let $d_1=\gcd(h_1h_2h_3,q)$ and $d_2=\gcd(h_1h_2h_3,r)$ and write $q=d_1q'$, $r=d_2r'$. Since $qr$ is square-free we have $r',q',d_1,d_2$ are pairwise coprime. Thus, by Lemma \ref{lmm:FSum} we have
\begin{align*}
F(h_1,h_2,h_3;a_{q r s}\overline{m};q r s)&=F(h_1,h_2, h_3;a_{qr s}\overline{m q^3 r^3};s)F(h_1,h_2,h_3;a_{q r s}\overline{m s^3 d_1^3 d_2^3};q' r')\\
&\qquad \times F(h_1,h_2,h_3;a_{q r s}\overline{m s^3 q'{}^3r'{}^3};d_1d_2).
\end{align*}
Since $qr$ is square-free we see that $\gcd(h_1h_2h_3,q' r')=1$, so by Lemma \ref{lmm:FSum}
\[
F(h_1,h_2,h_3;a_{q r s}\overline{m s^3 d_1^3 d_2^3};q' r')=q' r'\Kl_3(a_{d_1d_2q'r's}h_1h_2h_3\overline{m s^3 d_1^3 d_2^3};q' r').
\]
Let $d_0=(h_1,h_2,h_3,d_1d_2)$. Then, by Lemma \ref{lmm:FSum}
\begin{align*}
F(h_1,h_2,h_3;a_{q r s}\overline{m s^3q'{}^3r'{}^3};d_1d_2)&=\phi(d_0)^2 F\Bigl(\frac{h_1}{d_0},\frac{h_2}{d_0},\frac{h_3}{d_0};a_{q r s}\overline{m s^3q'{}^3r'{}^3};\frac{d_1d_2}{d_0}\Bigr)\\
&=\phi(d_0)^2 G\Bigl(\frac{h_1}{d_0},\frac{h_2}{d_0},\frac{h_3}{d_0};\frac{d_1d_2}{d_0}\Bigr)
\end{align*}
for some function $G$ which depends only on $\gcd(h_i/d_0,d_1d_2/d_0)$ and satisfies
\[
\phi(d_0)^2 G\Bigl(\frac{h_1}{d_0},\frac{h_2}{d_0},\frac{h_3}{d_0};\frac{d_1d_2}{d_0}\Bigr)\ll \frac{(h_1,d_1d_2)(h_2,d_1d_2)(h_3,d_1d_2)}{d_1d_2}.
\]
To ease notation let $h_i=d_0e_i h_i'$ with $(h_i',d_1d_2)=1$, $e_i|d_1^\infty d_2^\infty$. Finally, we see that
\[
F(h_1,h_2, h_3;a_{q r s}\overline{m q^3 r^3};s)
\]
only depends on the values of $h_1',h_2',h_3',a_{q r s},m,q',r',d_1,d_2,d_0,e_1,e_2,e_3\Mod{s}$ and is always bounded above by $s^2$. Thus we can essentially fix this factor by fixing the $O(S^{13})$ residue classes of these variables. Specifically, let us fix them to lie in the residue classes $s_{h_1'},s_{h_2'},s_{h_3'},s_{a},s_m,s_{q'},s_{r'},s_{d_1},s_{d_2},s_{d_0},s_{e_1},s_{e_2},s_{e_3}\Mod{s}$, so that we have
\[
F(h_1,h_2,h_3;a_{q r s}\overline{m};q r s)=\kappa_s q' r'\phi(d_0)^2 G\Bigl(e_1,e_2,e_3;\frac{d_1d_2}{d_0}\Bigr)\Kl_3(a_{q r s}h_1'h_2'h_3'\overline{m}f;q' r'),
\]
where $f=f(d_0,e_1,e_2,e_3,s,d_1,d_2,q',r')\Mod{q' r'}$ is given by
\[
f= d_0^3 e_1e_2e_3\overline{s^3d_1^3 d_2^3}
\]
and $\kappa_s\ll S^2$ depends only on residue classes $\Mod{s}$ which we have constrained our variables to lie in.
Substituting this into $\mathscr{K}_2'$ and taking the worst choice of residue classes $\Mod{s}$ for an upper bound, we obtain
\begin{align*}
\mathscr{K}_2'&\ll S^{15}\sup_{s_{h_1'},s_{h_2'},s_{h_3'},s_{a},s_m,s_{q'},s_{r'}\Mod{s}}|\mathscr{K}_2''|,\\
\mathscr{K}_2''&:=\sum_{d_0\le 4QR}\,\sum_{\substack{d_1\le 2Q\\ d_2\le 2R\\ d_0|d_1d_2\\ \mu(d_1d_2)^2=1}}\sum_{\substack{e_1,e_2,e_3| d_1^\infty d_2^\infty\\ d_1d_2|d_0e_1e_2e_3}}\frac{(d_0e_1,d_1d_2)(d_0e_2,d_1d_2)(d_0e_3,d_1d_2)}{d_1d_2}Q' R'|\mathscr{K}_3|,\\
\mathscr{K}_3&:=\sum_{\substack{q'\sim Q'}}\sum_{\substack{r'\sim R'}}c''_{q',r'}\sum_{\substack{m\sim M\\ (m,q' r')=1}}\alpha'_m\sum_{\substack{1\le |h_1'|\le H_1' \\ 1\le |h_2'|\le H_2'\\ 1\le |h_3'|\le H_3'\\ (h_1'h_2'h_3',q' r')=1}}\gamma_{h_1'}\gamma_{h_2'}\gamma_{h_3'} \Kl_3(a'_{q',r'}h_1'h_2'h_3'\overline{m}f;q' r'),
\end{align*}
where $H_i':=H_i''/(d_0e_i)\le H_i/(d_0e_i)$, $Q':=Q/d_1$, $R':=R/d_2$, and the coefficients are given by
\begin{align*}
c''_{q',r'}&:=\frac{q' r'}{4 Q' R'}\mathbf{1}_{\substack{ a_{q', r'}\equiv s_{a} \Mod{s} \\ q'\equiv s_{q'}\Mod{s} \\ r'\equiv s_{r'}\Mod{s} }}\mathbf{1}_{q'\le Q''/d_1 }\mathbf{1}_{ r'\le R''/d_2 } c'_{q' d_1,r' d_2,s},\\
\alpha'_m&:=\alpha_m\mathbf{1}_{m\equiv s_m\Mod{s}}\mathbf{1}_{ (m,d_1d_2)=1},\\
\gamma_{h_i'}&:=\mathbf{1}_{h_i'\equiv s_{h_i'}\Mod{s}}\mathbf{1}_{ (h_i',d_1d_2)=1},\\
a'_{q',r'}&:=a_{d_1 d_2 q' r' s}.
\end{align*}
This gives the result.
\end{proof}
\begin{lmm}[Cauchy]\label{lmm:TripleCauchy}
Let $\mathscr{K}_3$ be as in Lemma \ref{lmm:TripleGCDs}. Then we have
\[
|\mathscr{K}_3|^2\ll (\log{x})^{O(1)} Q' H_1'H_2'H_3' \sup_{H\le H_1'H_2'H_3'}\mathscr{K}_4,
\]
where
\begin{align*}
\mathscr{K}_{4}&:=\sum_{q'\sim Q'}\sum_{\substack{r_1,r_2\sim R'}}|c''_{q',r_1}c''_{q',r_2}|\sum_{\substack{m_1,m_2\sim M\\ (m_1,q' r_1)=1\\ (m_2,q' r_2)=1}}\\
&\qquad \times \Bigl|\sum_{(h,q' r_1r_2)=1}\psi\Bigl(\frac{h}{H}\Bigr)\Kl_3(a'_{q',r_1}h\overline{m_1}f;q' r_1)\overline{\Kl_3(a'_{q',r_2}h\overline{m_2}f;q' r_2)}\Bigr|
\end{align*}
\end{lmm}
Here we caution the reader that by $\overline{\Kl_3(a'_{q',r_2}h\overline{m_2}f;q' r_2)}$ we mean the complex conjugate of $\Kl_3(a'_{q',r_2}h\overline{m_2}f;q' r_2)$, where $\overline{m_2}m_2\equiv 1\Mod{q' r_2}$.
\begin{proof}
We swap the order of summation and write $h=h_1'h_2'h_3'$ in $\mathscr{K}_3$. This gives
\begin{align*}
\mathscr{K}_3&:=\sum_{\substack{q'\sim Q'}}\sum_{\substack{1\le h\le H'\\ (h,q')=1}}\Gamma_h\sum_{\substack{r'\sim R'\\ (r',h)=1}}c''_{q',r'}\sum_{\substack{m\sim M\\ (m,q' r')=1}}\alpha'_m\Kl_3(a'_{q',r'}h\overline{m}f;q' r'),
\end{align*}
where $H':=H_1'H_2'H_3'\le (\log{x})^{6B+3} Q^3 R^3 S^3 /(N_1N_2 N_3 d_0^3 e_1e_2e_3)$ and
\[
\Gamma_h:=\sum_{1\le |h_1'|\le H_1'}\sum_{1\le |h_2'|\le H_2'}\sum_{\substack{1\le |h_3'|\le H_3'\\ h=h_1'h_2'h_3'}} \gamma_{h_1'}\gamma_{h_2'}\gamma_{h_3'}\le \tau_3(h).
\]
We now apply the Cauchy-Schwarz inequality in the variables $q',h_1',h_2',h_3'$, put $h$ into dyadic intervals and insert a smooth majorant $\psi$. This gives
\begin{align*}
\mathscr{K}_3^2&\le \sum_{q'\sim Q'}\sum_{\substack{1\le |h|\le H'\\ (h,q')=1}}\Bigl|\sum_{\substack{r'\sim R'\\ (r',h)=1}}c''_{q',r'}\sum_{\substack{m\sim M\\ (m,q' r')=1}}\alpha'_m\Kl_3(a'_{q',r'}h\overline{m}f;q' r')\Bigr|^2\\
&\qquad\times \Bigl(Q'\sum_{1\le |h|\le H'}\tau_3(h)^2\Bigr)\\
&\ll (\log{x})^{O(1)} Q' H' \sup_{H\le H'}\mathscr{K}_3',
\end{align*}
where $\mathscr{K}_3'$ is given by
\begin{align*}
\mathscr{K}_3'&:=\sum_{q'\sim Q'}\sum_{(h,q')=1}\psi\Bigl(\frac{h}{H}\Bigr)\Bigl|\sum_{\substack{r'\sim R'\\ (r',h)=1}}c''_{q',r'}\sum_{\substack{m\sim M\\ (m,q' r')=1}}\alpha'_m\Kl_3(a'_{q',r'}h\overline{m}f;q' r')\Bigr|^2.
\end{align*}
We expand the square, and swap the order of summation, giving
\[
\mathscr{K}_3'\le \mathscr{K}_4,
\]
where
\begin{align*}
\mathscr{K}_{4}&:=\sum_{q'\sim Q'}\sum_{\substack{r_1,r_2\sim R'}}|c''_{q',r_1}c''_{q',r_2}|\sum_{\substack{m_1,m_2\sim M\\ (m_1,q' r_1)=1\\ (m_2,q' r_2)=1}}\\
&\qquad \times\Bigl|\sum_{(h,q' r_1r_2)=1}\psi\Bigl(\frac{h}{H}\Bigr)\Kl_3(a'_{q',r_1}h\overline{m_1}f;q' r_1)\overline{\Kl_3(a'_{q',r_2}h\overline{m_2}f;q' r_2)}\Bigr|.
\end{align*}
This gives the result.
\end{proof}
\begin{lmm}[Bounding $\mathscr{K}_4$]\label{lmm:K4Bound}
Let $\mathscr{K}_4$ be as in Lemma \ref{lmm:TripleCauchy}. Then we have
\[
\mathscr{K}_4\ll (\log{x})^{O_B(1)}\Bigl(Q'{}^{3/2}R'{}^3 M^2+HQ'{}^{1/2}R' M^2+H Q' R' M\Bigr).
\]
\end{lmm}
\begin{proof}
We see that the $\Kl_3$ factors are periodic with period $q'[r_1,r_2]$. (Here we use the notation $[r_1,r_2]:=\lcm(r_1,r_2)$.) Therefore we split the inner sum of $\mathscr{K}_4$ into residue classes $\Mod{q'[r_1,r_2]}$. Denoting this inner sum by $\mathscr{K}_5$, we see
\begin{align}
\mathscr{K}_5&:=\sum_{h}\psi\Bigl(\frac{h}{H}\Bigr)\Kl_3(a'_{q',r_1}h\overline{m_1}f;q' r_1)\overline{\Kl_3(a'_{q',r_2}h\overline{m_2}f;q' r_2)}\nonumber \\
&=\sum_{b\Mod{q'[r_1,r_2]}}\Kl_3(a'_{q',r_1}b\overline{m_1}f;q' r_1)\Kl_3(a'_{q',r_2}b\overline{m_2}f;q' r_2)\sum_{h\equiv b\Mod{q'[r_1,r_2]}}\psi\Bigl(\frac{h}{H}\Bigr).\label{eq:K5}
\end{align}
By completion of sums (Lemma \ref{lmm:Completion}), we have for $L_0:=(\log{x})^5 q' [r_1,r_2]/H$
\[
\sum_{h\equiv b\Mod{q'[r_1,r_2]}}\psi\Bigl(\frac{h}{H}\Bigr)=\frac{H}{q'[r_1,r_2]}\sum_{|\ell|\le L_0}\hat{\psi}\Bigl(\frac{\ell H}{q'[r_1,r_2]}\Bigr)e\Bigl(\frac{\ell b}{q'[r_1,r_2]}\Bigr)+O(x^{-100}).
\]
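This completion identity is Poisson summation applied in the residue class $b\Mod{q'[r_1,r_2]}$. As a numerical sanity check (our own sketch, using the self-dual Gaussian $\psi(x)=e^{-\pi x^2}$, for which $\hat{\psi}=\psi$ under the convention $\hat{\psi}(t)=\int\psi(x)e(-xt)\,dx$):

```python
import math, cmath

def completed_sum(H, c, b, L=50):
    """Right-hand side of the completion identity for psi(x) = exp(-pi x^2),
    whose Fourier transform (with the e(-xt) convention) is itself."""
    psi_hat = lambda t: math.exp(-math.pi * t * t)
    s = sum(psi_hat(l * H / c) * cmath.exp(2j * cmath.pi * l * b / c)
            for l in range(-L, L + 1))
    return (H / c) * s

def direct_sum(H, c, b, N=10**4):
    """Left-hand side: sum of psi(h/H) over integers h = b (mod c)."""
    return sum(math.exp(-math.pi * (h / H) ** 2)
               for h in range(-N, N + 1) if h % c == b % c)

H, c, b = 50.0, 7, 3
lhs = direct_sum(H, c, b)
rhs = completed_sum(H, c, b)
print(abs(lhs - rhs.real))  # essentially zero, as Poisson summation predicts
```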
We substitute this expression into \eqref{eq:K5}. This gives
\begin{align}
\mathscr{K}_5&=O(x^{-10})+\frac{H}{q'[r_1,r_2]}\sum_{|\ell|\le L_0}\hat{\psi}\Bigl(\frac{\ell H}{q'[r_1,r_2]}\Bigr)\nonumber\\
&\qquad \times \sum_{b \Mod{q'[r_1,r_2]}}e\Bigl(\frac{\ell b}{q'[r_1,r_2]}\Bigr)\Kl_3(a'_{q',r_1}b\overline{m_1}f;q' r_1)\Kl_3(a'_{q',r_2}b\overline{m_2}f;q' r_2).\label{eq:K5Completed}
\end{align}
The inner sum over $b$ factors into a product of sums modulo each prime factor of $q'[r_1,r_2]$ by the Chinese Remainder Theorem. Explicitly, using Lemma \ref{lmm:FSum}, it is given by
\begin{align*}
&\prod_{p|q'(r_1,r_2)}\Bigl(\sum_{b \Mod{p}}e\Bigl(\frac{\ell b\overline{(q'[r_1,r_2]/p)}}{p}\Bigr)\Kl_3\Bigl(a'_{q',r_1}b\overline{m_1}\overline{\Bigl(\frac{q' r_1}{p}\Bigr)^3}f;p\Bigr)\Kl_3\Bigl(a'_{q',r_2}b\overline{m_2}\overline{\Bigl(\frac{q'r_2}{p}\Bigr)^3}f;p\Bigr)\Bigr)\\
&\times \prod_{\substack{p|r_1\\ p\nmid r_2}}\Bigl(\sum_{b \Mod{p}}e\Bigl(\frac{\ell b\overline{(q'[r_1,r_2]/p)}}{p}\Bigr)\Kl_3\Bigl(a'_{q',r_1}b\overline{m_1}\overline{\Bigl(\frac{q'r_1}{p}\Bigr)^3}f;p\Bigr)\Bigr)\\
&\times \prod_{\substack{p|r_2\\ p\nmid r_1}}\Bigl(\sum_{b \Mod{p}}e\Bigl(\frac{\ell b\overline{(q'[r_1,r_2]/p)}}{p}\Bigr)\Kl_3\Bigl(a'_{q',r_2}b\overline{m_2}\overline{\Bigl(\frac{q'r_2}{p}\Bigr)^3}f;p\Bigr)\Bigr).
\end{align*}
By Lemma \ref{lmm:KloostermanCorrelation}, each such sum $\Mod{p}$ exhibits square-root cancellation unless $\ell$ vanishes and the arguments of the $\Kl_3$ factors are the same $\Mod{p}$. Thus the sum over $b$ in \eqref{eq:K5Completed} is bounded by
\[
\tau(q'r_1r_2)^{O(1)}q'{}^{1/2}[r_1,r_2]^{1/2}g_1^{1/2} g_2^{1/2},
\]
where the GCDs $g_1$ and $g_2$ are given by
\begin{align*}
g_1&:=(a'_{q',r_1}r_2^3m_2-a'_{q',r_2}r_1^3 m_1,\ell,q'),\\
g_2&:=\Bigl(\frac{m_2 a'_{q',r_1}r_2^3-m_1a'_{q',r_2}r_1^3}{(r_1,r_2)^3},r_1,r_2,\ell\Bigr).
\end{align*}
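The square-root cancellation invoked here ultimately rests on Deligne's bound for hyper-Kloosterman sums. As a hedged numerical illustration (assuming the standard normalization $\Kl_3(a;p)=p^{-1}\sum_{xyz\equiv a\ (p)}e((x+y+z)/p)$, which may differ from the convention of this paper), one can check $|\Kl_3(a;p)|\le 3$ directly for a small prime:

```python
import cmath

def kl3(a, p):
    """Hyper-Kloosterman sum Kl_3(a; p) for p prime and gcd(a, p) = 1,
    under the assumed normalization Kl_3 = p^{-1} sum_{xyz = a (p)} e((x+y+z)/p)."""
    total = 0j
    for x in range(1, p):
        for y in range(1, p):
            # z is determined by x*y*z = a (mod p); pow(., -1, p) needs Python 3.8+
            z = (a * pow(x * y, -1, p)) % p
            total += cmath.exp(2j * cmath.pi * (x + y + z) / p)
    return total / p

# Deligne's bound: |Kl_3(a; p)| <= 3 for every a coprime to p.
p = 53
vals = [abs(kl3(a, p)) for a in range(1, 8)]
print(max(vals))  # stays below 3
```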
We substitute this into our expression \eqref{eq:K5Completed} for $\mathscr{K}_5$. Recalling that $c_{q,r}$ is supported on $\tau(q r)\le (\log{x})^B$, we see that for the terms under consideration $\tau(q'r_1r_2)^{O(1)}\ll (\log{x})^{O_B(1)}$. Separating the term $\ell=0$, this gives a bound
\begin{align}
\mathscr{K}_5&\ll (\log{x})^{O_B(1)}\Bigl(Q'{}^{1/2}R'+\frac{H(a'_{q',r_1}r_2^3m_2-a'_{q',r_2}r_1^3 m_1,q')^{1/2}(r_1,r_2)^{1/2} }{Q'{}^{1/2}[r_1,r_2]^{1/2}}\Bigr)\nonumber\\
&\ll (\log{x})^{O_B(1)}\Bigl(Q'{}^{1/2}R'+\frac{H}{Q'{}^{1/2}R'}(a'_{q',r_1}r_2^3m_2-a'_{q',r_2}r_1^3 m_1,q')^{1/2}(r_1,r_2)\Bigr).\label{eq:K5Bound}
\end{align}
We recall that
\[
\mathscr{K}_4=\sum_{q'\sim Q'}\sum_{\substack{r_1,r_2\sim R'\\ (r_1r_2,q' d_1d_2)=1}}\sum_{\substack{m_1,m_2\sim M\\ (m_1,q' r_1)=1\\ (m_2,q' r_2)=1}}|\mathscr{K}_5|.
\]
The first term of \eqref{eq:K5Bound} contributes to $\mathscr{K}_4$ a total
\[
\ll \sum_{q'\sim Q'}\sum_{r_1,r_2\sim R'}\sum_{m_1,m_2\sim M} (\log{x})^{O_B(1)} Q'{}^{1/2}R'\ll (\log{x})^{O_B(1)} Q'{}^{3/2}R'{}^3 M^2.
\]
We now consider the contribution from the second term of \eqref{eq:K5Bound}. We separate the contribution according to the value of $d:=\gcd(a'_{q',r_1}r_2^3 m_2-a'_{q',r_2}r_1^3 m_1,q')$. Given a choice of $d,q',r_1,r_2$ and $m_1$, we see that $m_2$ is forced to lie in a fixed residue class $\Mod{d}$. Thus there are $O(M/d+1)$ possible choices of $m_2$. Therefore we see that the total contribution from the second term of \eqref{eq:K5Bound} to $\mathscr{K}_4$ is
\begin{align*}
&\ll (\log{x})^{O_B(1)} \frac{H}{Q'{}^{1/2}R'}\sum_{q'\sim Q'} \sum_{d|q'}d^{1/2}\sum_{r_1,r_2\sim R'}(r_1,r_2) \Bigl(\frac{M^2}{d}+M\Bigr)\\
&\ll (\log{x})^{O_B(1)}\Bigl(HQ'{}^{1/2}R' M^2+H Q' R' M\Bigr).
\end{align*}
Putting this together then gives the result.
\end{proof}
\begin{lmm}\label{lmm:TripleConclusion}
Let $A,B>0$, let $Q R S=x^{1/2+\delta}$, $MN_1N_2N_3\asymp x$ with $(\log{x})^{A+3B}\le N_1\le N_2\le N_3$ and $S\le (\log{x})^B$. Let $\mathscr{K}$ and $\mathscr{K}_{MT}$ be as given by Lemma \ref{lmm:TripleCompletion} and let $M$ satisfy
\[
M<\min\Bigl(\frac{R}{x^{4\delta}(\log{x})^C},\frac{Q^{1/2}}{x^{2\delta}(\log{x})^C}\Bigr)
\]
for some constant $C=C(A,B)$ sufficiently large in terms of $A,B$. Then we have
\[
\mathscr{K}=\mathscr{K}_{MT}+O_A\Bigl(\frac{x}{(\log{x})^A}\Bigr).
\]
\end{lmm}
\begin{proof}
Let $\mathscr{K}_2,\mathscr{K}_3,\mathscr{K}_4$ be the quantities defined in Lemma \ref{lmm:TripleCompletion}, Lemma \ref{lmm:TripleGCDs} and Lemma \ref{lmm:TripleCauchy}. By Lemma \ref{lmm:TripleCompletion}, we see that it suffices to show that
\begin{equation}
\mathscr{K}_2\ll \frac{x}{(\log{x})^A}\frac{Q^3R^3}{N_1N_2 N_3}.
\label{eq:K2Target}
\end{equation}
If we can show that
\begin{equation}
\mathscr{K}_3\ll \frac{x}{(\log{x})^{C_1}}\frac{Q^2 R^2}{N_1 N_2 N_3 d_0^{3/2} d_1^{1/2} d_2^{1/2} (e_1e_2e_3)^{1/2}}
\label{eq:K3Target}
\end{equation}
then, by Lemma \ref{lmm:TripleGCDs}, we see that
\begin{align*}
\mathscr{K}_2&\ll \frac{x S^{11} }{(\log{x})^{C_1}}\frac{Q^3 R^3}{ N_1 N_2 N_3 }\sum_{d_1,d_2\le x}\sum_{d_0|d_1d_2}\frac{1}{d_0^{3/2}d_1^{5/2} d_2^{5/2} }\Bigl(\sum_{e|d_1^\infty d_2^\infty}\frac{(d_0e,d_1d_2)}{e^{1/2}}\Bigr)^3\\
&\ll \frac{x}{(\log{x})^{C_1-11B}}\frac{Q^3 R^3}{ N_1 N_2 N_3 }\sum_{d_1,d_2\le x}\sum_{d_0|d_1d_2}\frac{\tau(d_1d_2)(d_0^{1/2}d_1^{1/2}d_2^{1/2})^3}{d_0^{3/2}d_1^{5/2} d_2^{5/2}}\\
&\ll \frac{x}{(\log{x})^{C_1-O_B(1)}}\frac{Q^3 R^3}{N_1 N_2 N_3 }.
\end{align*}
This would give \eqref{eq:K2Target} provided $C_1=C_1(A,B)$ is sufficiently large in terms of $A$ and $B$. Thus it suffices to show \eqref{eq:K3Target}. By Lemma \ref{lmm:TripleCauchy}, recalling that $H_i'\le (\log{x})^{2B+1} Q R S / (d_0e_i N_i)$, $Q'\le Q/d_1$, $S\le (\log{x})^B$ and $MN_1N_2N_3\asymp x$, we see that we have \eqref{eq:K3Target} provided
\begin{equation}
\mathscr{K}_4\ll \frac{x^{2}}{(\log{x})^{C_2}}\frac{ R}{ d_2 N_1 N_2 N_3}\asymp \frac{x R M}{d_2(\log{x})^{C_2}}\label{eq:K4Target}
\end{equation}
for some constant $C_2$ sufficiently large in terms of $C_1$ and $B$ (so we can take $C_2=C_2(A,B)$). By Lemma \ref{lmm:K4Bound}, we have that
\begin{align}
\mathscr{K}_4&\ll (\log{x})^{O_B(1)}\Bigl(Q'{}^{3/2}R'{}^3 M^2+HQ'{}^{1/2}R' M^2+H Q' R' M\Bigr)\nonumber\\
&\ll \frac{(\log{x})^{O_B(1)}}{d_2}\Bigl(Q^{3/2}R^3 M^2+\frac{Q^{7/2} R^4 M^2}{N_1N_2 N_3}+\frac{Q^4 R^4 M}{N_1 N_2 N_3}\Bigr).\label{eq:K4Bound}
\end{align}
Here we used the fact that $H\le H_1H_2H_3\le (\log{x})^{9B+3} Q^3R^3 /(N_1N_2N_3)$, $R'\ll R/d_2$ and $Q'\le Q$ in the final line.
Recalling that $MN_1N_2N_3\asymp x$, we see that \eqref{eq:K4Bound} gives \eqref{eq:K4Target} provided
\[
Q^{3/2}R^3 M^2+\frac{Q^{7/2}R^4 M^3}{x}+\frac{Q^4 R^4 M^2}{x}\ll \frac{x R M}{(\log{x})^{C_3}}
\]
for some constant $C_3$ chosen sufficiently large in terms of $C_2$ and $B$ (so we can take $C_3=C_3(A,B)$). Recalling that $QR=x^{1/2+\delta}$, this is satisfied if we have
\begin{align}
M&<\frac{Q^{1/2} x }{Q^2 R^2(\log{x})^{C_3}}=Q^{1/2} x^{-2\delta}(\log{x})^{-C_3},\label{eq:DivisorCond1}\\
M&< \frac{R^{1/2} Q^{1/4} x}{Q^2 R^2(\log{x})^{C_3/2}}=R^{1/2}Q^{1/4} x^{-2\delta}(\log{x})^{-C_3/2},\label{eq:DivisorCond2}\\
M&<\frac{R x^2}{Q^4 R^4 (\log{x})^{C_3}}=Rx^{-4\delta}(\log{x})^{-C_3}\label{eq:DivisorCond3}.
\end{align}
We see that \eqref{eq:DivisorCond2} is implied by \eqref{eq:DivisorCond1} and \eqref{eq:DivisorCond3}, and so can be dropped. This gives the result on taking $C=C_3$.
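Indeed, if $M$ satisfies \eqref{eq:DivisorCond1} and \eqref{eq:DivisorCond3} then
\[
M=M^{1/2}M^{1/2}<\bigl(Q^{1/2}x^{-2\delta}(\log{x})^{-C_3}\bigr)^{1/2}\bigl(Rx^{-4\delta}(\log{x})^{-C_3}\bigr)^{1/2}=R^{1/2}Q^{1/4}x^{-3\delta}(\log{x})^{-C_3}\le R^{1/2}Q^{1/4}x^{-2\delta}(\log{x})^{-C_3/2},
\]
which is \eqref{eq:DivisorCond2}.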
\end{proof}
\begin{proof}[Proof of Proposition \ref{prpstn:Triple}]
If $\min(N_1,N_2,N_3)\le x^\epsilon$ then the result follows from Lemma \ref{lmm:DoubleDivisor} and partial summation, so we may assume that $N_1,N_2,N_3\ge x^\epsilon$.
By Lemma \ref{lmm:Divisor} and the trivial bound, the contribution from $\tau(qr)\ge (\log{x})^{B}$ is negligible if $B=B(A)$ is sufficiently large in terms of $A$, so we may restrict to $\tau(qr)\le (\log{x})^{B}$. Similarly, the contribution from $|\alpha_m|\ge (\log{x})^{C_1}$ is negligible for $C_1=C_1(A)$ sufficiently large in terms of $A$, and so we may restrict to $\alpha_m$ being 1-bounded after replacing $A$ by $A+C_1$.
Let $qr=t^\square t^{\notsquare}$ be factored into square-full and square-free parts. By Lemma \ref{lmm:Squarefree}, the contribution from $q,r$ with $t^\square\ge (\log{x})^{C_2}$ is negligible if $C_2=C_2(A)$ is sufficiently large in terms of $A$, so we may restrict to $t^\square\le (\log{x})^{C_2}$. We let $q'=(q,t^{\notsquare})$, $r'=(r,t^{\notsquare})$ and $s'=t^\square$, so $q',r',s'$ are pairwise coprime with $q',r'$ square-free. Thus we see it suffices to show that
\[
\sum_{s'\sim S'}\sum_{q'\sim Q'}\sum_{\substack{r'\sim R'\\ (q'r',s')=1\\ \tau(q'r')\le (\log{x})^B}}\mu^2(q' r')\sup_{(a,q' r' s')=1}|\Delta_{\mathscr{K}}(a;q' r' s')|\ll_A \frac{x}{(\log{x})^A}
\]
over all choices of $S'\le (\log{x})^{C_2}$, $Q'=Q(\log{x})^{O(C_2)}$, $R'=R(\log{x})^{O(C_2)}$. Let the supremum occur with the residue class $b_{q' r' s'}\Mod{q' r' s'}$, and insert 1-bounded coefficients $c_{q',r',s'}$ to remove the absolute values. We may restrict the support of $c_{q',r',s'}$ to $q',r',s'$ pairwise coprime with $q' r'$ square-free and $\tau(q' r')\le (\log{x})^{B}$. Thus it suffices to show
\begin{equation}
\sum_{s\sim S'}\sum_{q\sim Q'}\sum_{r\sim R'}c_{q,r,s}\sum_{m\sim M}\alpha_m\sum_{n_1\sim N_1}\sum_{n_2\sim N_2}\sum_{n_3\sim N_3}\Delta_{\mathscr{K}}(b_{q r s};q r s)\ll_A \frac{x}{(\log{x})^A}.
\label{eq:TripleTarget}
\end{equation}
We see that
\[
\mathbf{1}_{\ell \equiv b\Mod{q rs}}-\frac{\mathbf{1}_{(\ell,q r s )=1}}{\phi(q r s)}=\frac{1}{\phi(q r s)}\sum_{(a,q r s)=1}\Bigl(\mathbf{1}_{\ell\equiv b\Mod{q r s}}-\mathbf{1}_{\ell\equiv a\Mod{q r s}}\Bigr).
\]
Using this with $\ell=m n_1n_2n_3$ and $b=b_{q r s}$ in $\Delta_{\mathscr{K}}(q r s)$, we see that \eqref{eq:TripleTarget} follows if uniformly over all residue classes $(a_{q r s},q r s)=1$ we have
\[
\mathscr{K}=\mathscr{K}_{MT}+O_A\Bigl(\frac{x}{(\log{x})^{A}}\Bigr)
\]
where $\mathscr{K}$ and $\mathscr{K}_{MT}$ are as given by Lemma \ref{lmm:TripleCompletion}. This now follows from Lemma \ref{lmm:TripleConclusion} thanks to our assumptions on $M$.
\end{proof}
\bibliographystyle{plain}
\label{intro}
Crude oil price fluctuations have been a concern for the world macroeconomy since the oil crises of 1970, 2008 and 2014. According to the Organization of Petroleum Exporting Countries (OPEC), the oil price dropped from \$145 to \$30 in mid 2008 and reached a low of \$27 in 2014. These sharp downward trends (shocks) influence the economy by disturbing aggregate economic activity and spread to stock market and energy indices \cite{Guntner2014} and \cite{Nusair2016} and \cite{Zhang2016} and \cite{Bastianin2016} and \cite{Wen2012} and \cite{Angelidis2015} and \cite{Niknam2016} and \cite{MoyaMartinez2014}.
Some studies report that oil price shocks have different effects in different economies. For example, oil price shocks affect the U.S. economy and oil-exporting countries differently \cite{Kilian2009} and \cite{Wang2013}.
Although it is expected that a higher oil price leads to higher revenue, cash flow and therefore growth in the economy and financial markets of oil-exporting countries \cite{Arouri2010}, the exact impact of oil price changes on financial markets is still unclear.
Furthermore, different factors such as the source of the oil price shocks \cite{Kilian2009}, political issues, whether the stock market is developed or emerging, and whether the country is an oil exporter or oil importer \cite{Wei2017} make it difficult to draw a clear conclusion on the effect of oil price shocks on financial markets \cite{Basher2006} and \cite{Wang2013}. Hence, understanding the underlying behaviour of the oil price is important for keeping track of changes in the target economy.
On the other hand, oil price shocks and their contagion to other economic indices and prices have made modelling and prediction complicated \cite{Kokabisaghi2018}. Earlier studies used econometric models such as vector autoregressive (VAR) and generalized autoregressive conditional heteroskedasticity (GARCH) models \cite{Park2008} and \cite{wei2010}. However, the complexity and nonlinear behaviour of the oil price and of financial and economic variables have convinced researchers to use artificial intelligence methodologies to deal with unpredictable changes in the oil price and other economic variables \cite{mingming2012} and \cite{Gurusen2011} and \cite{Bissoondeeal2011}. Overall, much research has been devoted to analyzing oil price shocks, their origins and their impacts on economic factors and financial markets. However, there is surprisingly little focus on oil-dependent economies such as Iran.
Since Iran is one of the largest oil-exporting countries, both oil crises and political tensions can impact its economy. During the global financial crisis in 2008, Iran's economic growth decreased to a low of 1.8 percent \cite{WorldBank2018}. Given that a high oil price is beneficial for oil-exporting countries \cite{Korhonen2010} and a lower oil price creates instability in oil-dependent countries \cite{Kitous2016}, it was expected that Iran's economic growth would increase after the global financial crisis as the oil price rose; instead, the growth rate fell to -0.2 percent in 2013. One possible reason is the international sanctions imposed on Iran's industries and banking system. The sanctions targeting oil created many restrictions on oil exports and on foreign investment in the energy industry.
Being largely oil dependent, Iran saw its oil exports drop from 2231.980 barrels/day to 1081.145 barrels/day between 2009 and 2015. Alongside that, foreign investment decreased from \$3773.8 million to \$945 million.
Generally speaking, it is expected that the Iranian stock market is affected by uncertainty in the international oil market and by political tensions. However, \cite{oskooe2012} reports no evidence of an impact of oil price volatility on the Iranian stock market, whereas \cite{Salehi2015} finds strong causality between oil price volatility and stock prices in Iran. Besides, changes in macroeconomic variables affect the stock market.
An overview of the Tehran Stock Exchange shows that the Tehran Stock Exchange Price Index increased from 25035.2 to 78849.3 million units during the heavy sanctions (from 2012 to 2014), declined to 61426.1 million units by 2015 during the global financial crisis, and increased to 99414.5 in 2018 \cite{tes2019}. The growth of the Tehran Stock Exchange during severe sanctions may be explained by \cite{Biglaiser2020}, which finds that sanctions impact the stock market of the targeted country negatively and significantly only if the targeted country was not already subject to multiple sanctions.
So far, no study has compared the effects of the financial crisis and comprehensive sanctions together on Iran's stock and industry indices.
In this paper we model the impact of oil price volatility on the stock market and industry indices in Iran. In particular, we investigate how potential uncertainty in the oil price and in Iran's energy industry, caused by political tensions and economic crises, influences financial and industry indices.
Iran is an ideal case for the purpose of this study because it is a member of OPEC and one of the largest oil-exporting countries, able to influence the supply side of the international oil market. Iran's economy has been under severe international sanctions and has witnessed several oil crises, while more than 60 percent of Iran's revenue comes from the oil market \cite{Farzanegan2009}.
Therefore, knowing the impact of international sanctions and/or financial crises on Iran's economy, and whether international sanctions have met their targets, is a game changer for Iranian policy makers and international politics. Moreover, identifying the source of volatility in the stock market and industry is useful from a trading and practical perspective.
To shed light on the aim of the paper, we take inspiration from several studies that demonstrated the accuracy of artificial intelligence methodologies for modelling the effect of unusual oil price behaviour on stock and industry indices \cite{Ince2019} and \cite{Atsalakis2009} and \cite{Onder2013} and \cite{Svitlana2016}. Additionally, we compare the oil-stock nexus in two periods: international sanctions and post-sanctions.
The remainder of the paper is organized as follows. Section 2 gives an overview of the financial crisis, international sanctions and the methodology; Section 3 presents the data and the architecture of the model; Section 4 reports the empirical results; and Section 5 concludes.
\section{Literature Review}
\label{sec:literature}
\subsection{Overview of Oil price shocks and International Energy Sanction}
\label{sec:event_summary}
Sanctions have been used as a pressure tool by policy makers to change a nation's policies or to achieve certain objectives. Generally speaking, sanctions aim directly at the Achilles' heel of the target.
In 2007, the United Nations Security Council imposed sanctions on Iran to force the country to suspend its nuclear activities and to meet the requirements of the IAEA (United Nations Security Council, Sanction Resolution no. 1747, 2007; Security Council of the United Nations, Resolution no. 1929, 2010). The sanctions continued until 2010, banned Iran from any activities related to ballistic missiles, and blacklisted all entities and individuals involved with this program, restricting, for example, travel and financial services.
In the case of Iran, international sanctions have been imposed on the energy sector and banking system.
Both economic and energy sanctions have put a severe strain on oil exports and on the developments that the offshore supergiant South Pars natural gas fields needed \cite{Sabatini2010}. In November 2011, the US, UK and Canada imposed bilateral restrictions on Iran's oil and petrochemical industries; the UK required all British financial institutions to stop doing business with Iranian counterparts. Furthermore, the US threatened all countries over any deals with Iran. Although investment in Iran's energy industry was beneficial for European countries, they kept to their strategies against Iran; otherwise, the EU would have had to deal with the risk of losing international trade with the US.
In 2012, the European Union banned imports of crude oil and petroleum products from Iran. Before the European Union (EU) sanctions, Iran's oil exports were around 2.2 million barrels/day; when the EU sanctions came into effect in 2012, they dropped to 1 million barrels/day. In addition, Iran lost non-EU buyers (China, India, Japan, South Korea and Turkey), and oil exports declined by more than 50 percent compared with previous years. Sanctions and poor economic health led to high inflation, a high unemployment rate and devaluation of the national currency \cite{yong2013}.
When the US tightened sanctions on Iran's central bank, Iran was disconnected from SWIFT (electronic financial transactions). Sanctions on oil trade not only deprived Iran of foreign investment flows, they also hurt Iran's share of the gas sector by cutting off access to energy technologies such as LNG technology, which is important for competitiveness in the gas market. As a result, Iran was not able to exploit its gas resources. Furthermore, the national currency, the Rial, fell to its lowest value against the US dollar, losing more than 80 percent since 2011. Thus, the government had no choice but to borrow from its Central Bank, which resulted in an increase in the money supply and inflation \cite{Ghorbani2018}. Iran's gross domestic product (GDP) growth was 8.156\% and -7.445\% in 2009 and 2012, respectively \cite{WorldBank2018}. Although the sanctions hit Iran's economy through heavy restrictions on oil production and exports, this impact was temporary: Iran changed its oil contracts and found new export markets through price concessions. Iran's main oil buyers were China (22 percent), India (13 percent) and Japan (14 percent). In addition, Iran's gas exports increased from 5.670 billion cubic meters to 9.307 billion cubic meters \cite{IMF2018}.
In 2014, Iran's GDP growth increased to 4.603\%, driven by optimism about the nuclear deal between Iran and the world powers and the easing of part of the sanctions on Iran's oil exports \cite{WorldBank2018}. Simultaneously, the oil price dropped from \$109.62 to \$41.5 between 2014 and 2015. With the economy largely oil dependent, Iran's economic growth ultimately declined to -1.321\% \cite{WorldBank2018}.
Although Iran's oil exports recovered in 2018 to their pre-sanction level of 2125.000 b/d after the global financial crisis and the lifting of sanctions, Iran's economy still suffers from instability, high inflation, drastic devaluation of the national currency and stock market inefficiency.
\subsection{Overview of methodology}
\label{sec:model_explenation}
As we previously mentioned, the unpredictable behaviour of financial time series such as the crude oil price and stock market indices makes their analysis difficult. Some studies used econometric models to show the correlation between oil price volatility and the stock market; for example, Wei and Guo (2017) applied a VAR (vector autoregressive) model to show the effect of the oil price on the stock market in China (see also \cite{Kang2015} and \cite{Pandey2018} and \cite{Huang2016}). Other researchers noted that real-world systems are often nonlinear, so it is unreasonable to use linear statistical methods built on linear assumptions.
To overcome these limitations, researchers have proposed several classes of nonlinear models, such as the autoregressive conditional heteroscedastic (ARCH) model \cite{Engle1982} and the generalized autoregressive conditional heteroscedastic (GARCH) model \cite{Bollerslev1986}, among others. However, these models perform well only for specific nonlinear patterns and cannot capture other types of nonlinearity in time series.
To capture the nonlinearity of financial time series, studies have used artificial intelligence methodologies (\cite{Lu2011} and \cite{Ticknor2013} and \cite{Kristjanpoller2015} and \cite{Gurusen2011} and \cite{Bissoondeeal2011}). Their results show that artificial neural networks (ANNs) are well suited to simulating unanticipated features of financial time series. One reason is that ANNs are data-driven and non-parametric. In addition, no prior assumption on the model form is required, and an ANN learns from examples to capture the relationships among the data even if the underlying linkage is unknown \cite{Ince2019}.
In addition, ANNs with simple architectures can be applied to different situations in finance and economics \cite{Galeshchuk2016} and \cite{Fahima2018}. Furthermore, ANNs have the ability to capture subtle functional relationships between variables even in time series with unusual features such as shocks \cite{Atsalakis2009} and \cite{Onder2013} and \cite{Svitlana2016}. The universal approximation theorem also suggests that a single-hidden-layer neural network can approximate any input-output structure sufficiently well \cite{Ince2019}.
\subsubsection{The feed-forward architecture}
\label{sec:model_architecture}
The feed-forward neural network in this study is a layered network with fully connected hidden layers and outputs. In particular, a feed-forward network can approximate, arbitrarily precisely, functions with many finite discontinuities as well as their derivatives. Training the neural network is important for optimizing the architecture of the network by modifying the weights. If training is done properly, the neural network updates the connections between neurons and adjusts the weights accordingly.
The main steps in training the network are: first, initialize the network weights and compare the calculated and observed outputs to find the correction vector from the error values; then recalculate the connection weights using this correction vector. Figures \ref{fig:schematic_network} and \ref{fig:hardlimit} represent a feed-forward neural network and the activation function.
\begin{figure} [hbt!]
\includegraphics[width=0.75\textwidth]{network_architecture.png}
\caption{This figure represents a feed-forward neural network architecture: $m$ is the number of inputs $p$, $k$ indexes the layer's neurons, $W$ is the synaptic weight, $b$ is the bias and $a$ is the hard-limit function}
\label{fig:schematic_network}
\end{figure}
The mathematical structure of the network is as follows:
\begin{equation}
u_k= \sum^{m}_{j=1} W_{kj}\, p_j
\label{neuron_architecture_u}
\end{equation}
where $u_k$ is the output of the adder (the sum of the weighted input signals),
\begin{equation}
a=\mbox{hardlim}(Wp+b) \qquad \mbox{(hard-limit transfer function)}
\end{equation}
\begin{equation}
y_k= a (u_k+b_k)
\label{neuron_architecture_y}
\end{equation}
where $y_k$ is the output signal of the neuron.
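Equations \eqref{neuron_architecture_u}--\eqref{neuron_architecture_y} can be sketched in code as follows (a minimal illustration with made-up weights and inputs; not tied to any particular library):

```python
def hardlim(x):
    """Hard-limit activation: outputs 1 when the net input reaches 0, else 0."""
    return 1.0 if x >= 0 else 0.0

def neuron_layer(W, p, b):
    """One layer of hard-limit neurons: y_k = hardlim(sum_j W[k][j]*p[j] + b[k])."""
    outputs = []
    for k in range(len(W)):
        u_k = sum(W[k][j] * p[j] for j in range(len(p)))  # weighted adder
        outputs.append(hardlim(u_k + b[k]))
    return outputs

# Two neurons, three inputs (hypothetical values).
W = [[0.5, -1.0, 0.2],
     [1.0,  1.0, 1.0]]
p = [1.0, 0.5, 2.0]
b = [0.0, -4.0]
print(neuron_layer(W, p, b))  # [1.0, 0.0]
```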
\begin{figure} [hbt!]
\includegraphics[width=0.75\textwidth]{hard_limit_function.png}
\caption{This figure represents the hard-limit transfer function, which classifies net inputs: if the net input reaches the threshold, the neuron is forced to output 1; otherwise it outputs zero}
\label{fig:hardlimit}
\end{figure}
\section{Data and Methodology}
\label{sec:data_methodology}
\subsection{Data}
\label{sec:data_description}
To address the aim of the study, we used information from the Central Bank of Iran, the Tehran Stock Exchange and the Organization of the Petroleum Exporting Countries (OPEC). The data comprise daily prices and values for the oil, gas and gold price, the exchange rate \footnote{The rate of Rial to 1 Dollar}, the stock market index (TEPIX) \footnote{Tehran Price Index}, the industry index and trading volume. The empirical study covers 10 years of daily data from December 2008 to December 2018 (international energy sanctions from December 2008 to 2014; global financial crisis and post-sanction period from 2014 to 2018).
OPEC oil prices are chosen because Iran is a member of OPEC, and international crude oil prices more or less follow the same trends.
Table \ref{tab:description} summarizes the descriptive statistics of the research variables. Most of the time series show heavy-tailed distributions, which can be explained by the fact that these financial and economic variables witnessed the global financial crisis in 2014 and international energy sanctions.
\begin{table}[hbt!]
\caption{Descriptive statistics (Daily data from 2009 to 2018)}
\label{tab:description}
\begin{tabular}{llllll}
\hline\noalign{\smallskip}
& Mean&Median&SD &Skewness&Kurtosis\\
\noalign{\smallskip}\hline\noalign{\smallskip}
Oil price (USD)& 77.2 & 75.06 &27.29 &-\,0.03 &-\, 1.4 \\
Gas price (USD)&3.48& 3.42&0.91&0.56 &0.7\\
Gold price (USD)& 13.7& 1,275&219.6&0.4&-\,0.2\\
Exchange rate (Rial)&25,573&31,345&11,657&-\,0.2&-\,1.57\\
Stock Index (Unit)& 48,779&56,784&28,487& -\,0.02&-\,1.5\\
Industry Index (Unit)& 40,854.5&48,468&24,791&0.01& -\,1.6\\
Trading volume (Million)&626.672193&424.083128&893.369438 & 10.19& 183.3\\
\noalign{\smallskip}\hline
\end{tabular}
\end{table}
\subsection{Model specification, Feed forwards neural network}
\label{feedforwardNN}
In this paper, we use a feed-forward neural network (FFNN) to analyse the aim of our study in the two periods of international energy sanctions and post-sanction. The architecture of our model is the following:
the FFNN has an input layer of five neurons, corresponding to the five inputs (oil price, gas and gold prices, exchange rate and trading volume), and an output layer of two neurons representing the dependent variables, the stock market and industry indices. For every period of the study, the number of neurons in the hidden layer is computed as follows:
\begin{equation}
\mbox{Neurons in Hidden Layer}= \frac{1}{2}(\mbox{Inputs}+\mbox{Outputs})+ \sqrt{\mbox{nr. training patterns}}
\end{equation}
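This rule of thumb can be evaluated directly. The sketch below is ours: the paper does not specify the rounding, so we assume rounding to the nearest integer. Under the further assumption that each index is trained in its own network (one output), roughly 1384 training patterns give the 40 hidden neurons reported for the sanction period.

```python
import math

def hidden_neurons(n_inputs, n_outputs, n_training_patterns):
    """Rule-of-thumb hidden-layer size:
    (inputs + outputs)/2 + sqrt(number of training patterns)."""
    return round(0.5 * (n_inputs + n_outputs) + math.sqrt(n_training_patterns))
```

For instance, `hidden_neurons(5, 1, 1384)` gives 40 and `hidden_neurons(5, 1, 1156)` gives 37, matching the two network sizes quoted later (under the one-output assumption above).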
In order to improve the performance of the FFNN, we scale the data to the interval $[0,1]$ as follows:
\begin{equation}
x_{scaled} = \frac{x - x_{min}}{x_{max} - x_{min}}
\end{equation}
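The min-max scaling above can be implemented directly; a minimal sketch (the function name is ours):

```python
import numpy as np

def minmax_scale(x):
    """Min-max scaling of a series to the unit interval [0, 1]."""
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min())
```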
Finally, for the FFNN estimation, we split the dataset into the two periods of sanctions (2009 to 2014) and post-sanction (2014 to 2018). For each period of the study, we use 75\% of the dataset for training, 20\% for testing and 5\% for validation.
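Because the data are time series, the split should be chronological so that the model is never fitted on observations from the future. The paper does not state this explicitly, so the sketch below reflects our assumption of a leak-free, time-ordered split:

```python
def chronological_split(data, train=0.75, test=0.20):
    """Split a time-ordered dataset into train/test/validation sets without
    shuffling, so training never uses observations from the future."""
    n = len(data)
    i = int(n * train)
    j = int(n * (train + test))
    return data[:i], data[i:j], data[j:]
```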
The activation (transfer) function in the FFNN is the hard-limit function, used to find the relationships between the input and output nodes in the network.
Finally, we compute the RMSE and MAPE to assess the accuracy of the networks, as follows:
\begin{equation}
RMSE=\sqrt{\frac{1}{N}\sum_{t=1}^{N} (s_t-o_t)^2 }
\end{equation}
\begin{equation}
MAPE=100 \, \frac{1}{N} \,\sum_{t=1}^{N} \left|\frac{s_t-o_t}{s_t}\right|
\end{equation}
where $s_t$ and $o_t$ are the actual and predicted values at time $t$, respectively, and $N$ is the number of observations.
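Both error measures follow directly from the definitions above; a minimal sketch (function names are ours):

```python
import math

def rmse(actual, predicted):
    """Root mean square error between actual (s_t) and predicted (o_t) values."""
    n = len(actual)
    return math.sqrt(sum((s - o) ** 2 for s, o in zip(actual, predicted)) / n)

def mape(actual, predicted):
    """Mean absolute percentage error, in percent."""
    n = len(actual)
    return 100.0 / n * sum(abs((s - o) / s) for s, o in zip(actual, predicted))
```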
\section{Empirical results}
\label{sec:emprical_result}
\subsection{Learning the feed-forward neural network}
\subsubsection{The period of international energy sanctions}
\label{sec:Analysis_period_sanction}
The first dataset comprises 1845 observations from December 2008 to 2014, when severe international energy sanctions were being tightened on Iran.
There are five inputs (the oil, gas and gold prices, the exchange rate and the trading volume) as independent variables, and the stock market and industry indices as outputs.
As mentioned in Section \ref{feedforwardNN}, the datasets are normalised and split into 75\% training, 20\% test and 5\% validation sets, and 40 neurons in the hidden layer are computed.
The results of the learned feed-forward network presented in Figure \ref{fig:stock_sanction} (right panel) show that the actual data lie close to the fitted line (near-perfect fit) and that there is no significant deviation between the predictions and the actual values.
Figure \ref{fig:industry_sanction} (right panel) shows an approximately perfect fit for the industry index.
Overall, the learned network achieves 90 percent accuracy for both indices.
\begin{figure} [htbp]
\includegraphics[width=0.5\textwidth]{stock_in_sanction_2.png}
\includegraphics[width=0.5\textwidth]{stock_in_sanction.png}
\caption{The left panel shows the TEPIX prediction and the actual data; the red line is the actual values and the blue line is the feed-forward network prediction. The right panel shows TEPIX and the fitted line (in the period of international energy sanctions).}
\label{fig:stock_sanction}
\end{figure}
\begin{figure} [htbp]
\includegraphics[width=0.5\textwidth]{industry_sanction_2.png}
\includegraphics[width=0.5\textwidth]{industry_sanction_1.png}
\caption{The left panel shows the industry index prediction and the actual data; the red line is the actual values and the blue line is the feed-forward network prediction. The right panel shows the industry index and the fitted line (in the period of international energy sanctions).}
\label{fig:industry_sanction}
\end{figure}
\subsubsection{Post sanction and global financial crisis}
\label{sc:post_sanction}
The second dataset spans 2014 to 2018. In this period, the international energy sanctions on Iran were eased and the oil price dropped drastically because of the global financial crisis. The network again has five inputs (oil, gas and gold prices, exchange rate, trading volume), with TEPIX and the industry index as outputs, each treated separately; the computed number of neurons in the hidden layer is 37. Learning continues until the network converges. The results, with 90 percent accuracy for both TEPIX and the industry index, are presented in Figures \ref{fig:stock_post} and \ref{fig:industry_post}, respectively. The figures show that the feed-forward network can produce a good prediction from a wide range of economic variables. In this period, the model fits the industry index better than in the period of international energy sanctions.
\begin{figure} [hbt!]
\includegraphics[width=0.5 \textwidth]{stock_post_2.png}
\includegraphics[width=0.5 \textwidth]{stock_inpost_1.png}
\caption{The left panel shows the stock market index prediction and the actual data; the red line is the actual values and the blue line is the feed-forward network prediction. The right panel shows the stock market index and the fitted line (perfect fit), in the post-sanction and global financial crisis period.}
\label{fig:stock_post}
\end{figure}
\begin{figure}[hbt!]
\includegraphics[width=0.5 \textwidth]{industry_post_2.png}
\includegraphics[width=0.5 \textwidth]{industry_post_1.png}
\caption{The left panel shows the industry index prediction and the actual data; the red line is the actual values and the blue line is the feed-forward network prediction. The right panel shows the industry index and the fitted line (perfect fit), in the post-sanction and global financial crisis period.}
\label{fig:industry_post}
\end{figure}
The estimated mean absolute error (MAE), root mean square error (RMSE) and mean absolute percentage error (MAPE) from the learned FFNN for both TEPIX and the industry index are listed in Table \ref{tab:criteria}.
Given the acceptable performance of the FFNN in terms of accuracy, we can conclude that in the first period of the study, when international energy sanctions were tightened on Iran, the model performs better for TEPIX (Table \ref{tab:criteria} shows the smaller error compared with the industry index). During this period the oil price was approximately steady, but the industry index, which depends heavily on oil and gas companies, was influenced by the sanctions imposed on foreign investment and oil exports. From 2014 to 2018 the model fits the industry index better, which shows that the industry index is influenced by the oil price shock.
Between 2009 and 2014, the FFNN performs best in predicting the stock index compared with the post-sanction period, which indicates the positive impact of the oil price on the stock market in this period.
\begin{table}[htbp]
\caption{Corresponding values of the evaluation criteria}
\label{tab:criteria}
\begin{tabular}{lllll}
\hline\noalign{\smallskip}
& \multicolumn{2}{c}{International energy sanction}&\multicolumn{2}{c}{Post-sanction}\\
Dependent variables&Stock index&Industry index &Stock index&Industry index\\
\noalign{\smallskip}\hline\noalign{\smallskip}
MAE & 0.107 & 0.116&0.107&0.09 \\
RMSE & 1106 & 1629&1750&1734 \\
MAPE & 0.07 & 0.16&0.23&0.33 \\
\noalign{\smallskip}\hline
\end{tabular}
\end{table}
\section{Conclusion}
In this paper, we model the impact of oil price volatility on Tehran stock and industry indices.
To have a more realistic model, we consider a wide range of economic variables such as gas and gold price, exchange rate (Rial) and trading volume as explanatory variables.
We also analyse the aim of the study in the two periods of international sanctions and post-sanction to provide an overall picture of the impact of both sanctions and oil price shocks on an oil-dependent country such as Iran. We choose Iran as an ideal case for this setting because Iran is one of the largest oil exporters and has been under comprehensive sanctions. The results of the feed-forward neural network, with 90 percent accuracy, indicate a positive impact of the oil price on the stock and industry indices, which is supported by the empirical studies \cite{Fang2018}, \cite{Ewing2016} and \cite{Mezghani2018}. More specifically, the feed-forward neural network performs better in predicting TEPIX in the period of international sanctions.
In the post-sanction and global financial crisis period, the model evaluation criteria show better values for the industry index, which means the industry index is influenced by oil price shocks; as expected, industry index movements are more strongly affected by oil price changes \cite{fang2017}.
We can conclude that the dependency of industry on energy companies makes the industry index more vulnerable to endogenous changes in the oil market, such as oil price shocks.
In this paper, we have addressed the important question of how changes in international markets and politics influence the stock market and industry in Iran.
In future research, we plan to explore the effect of uncertainty in international politics and markets on companies listed on the Tehran Stock Exchange.
Following the impact of uncertainty in politics and international markets on Iran, future work may also pay attention to changes in other important economic factors, such as the unemployment rate, since more than 60 percent of Iran's population are young professionals. An empirical extension of this paper would be to compare the results of the FFNN with other non-linear models.
\section*{Conflict of interest}
The authors declare that they have no conflict of interest.
\section{Reply to Reviewer 1}
\begin{itemize}
\item {\em There have been a large number of papers regarding oil price and stock market at different levels, like the whole market, industry-level or firm level, and I do not think it still has much necessity to continue this kind of work. The paper is not well written, and the authors have submitted many unnecessary files. The results have no new points compared to the previous relevant literature. }
\end{itemize}
We acknowledge that there are several studies on the oil price and the stock market. However, there is no consensus on the exact effect of oil price shocks or political uncertainty on stock markets in different economies.
To the best of our knowledge, there is a lack of studies on Iran's stock market and industry, and no study has compared the effects of the financial crisis and the comprehensive sanctions together on Iran's stock and industry indices.
We adjusted the introduction and highlighted international sanctions and financial crisis in the section on related work and included references to other studies.
Finally, we have carefully proofread the manuscript and hope that the English grammar is now up to par.
\section{Reply to Reviewer 2}
\begin{itemize}
\item {\em The justification of applying feed-forward ANN is poor. The paper says that linear models are not expressive enough to capture complexity and uncertainty in financial time series. But ANN is not the only non-linear machine learning model. For instance, models based on the decision trees can be used.
Moreover, the proposed architecture of ANN does not exploit specific nature
of the time-series data at all (e.g., autoregression features or model structure based on the Recurrent Neural Network like LSTM or GRU). To show the need for ANN in this problem it is necessary to make a comparison of the model with a more “simple” model (e.g., linear).}\\
We agree that there are a number of non-linear models we could use and even compare with the ANN results. However, the purpose of our study is to compare the impact of the oil price on the stock and industry indices in the two periods of international sanctions and financial crisis. We have therefore followed the literature and used the most popular and accurate model in the area of our study.
The comparison of models will be addressed in a forthcoming paper.\\
\item{\em The design of the computational experiments is also unclear. The way
the data was split to the test and train datasets is not described. This is highly
important for the problems related to time series because the model must not
be fitted on the data from the future (also known as data leak). The procedure
of cross-validation for investigating potential generalizations of the proposed
model is also not described. There are also minor points.}\\
Valid point! We have added a section explaining our model specification.
\item{\em Formula (5) for RMSE (page 7) is incorrect.}\\
Agreed! We have corrected the equation.
\item{\em It is unlikely that 40 hidden layers were used for the ANN with only five
inputs and such a small amount of data. It is more likely than 40 neurons
(not layers) were used.}\\
You are right! We have corrected it.\\
\item{\em The article also claims to study how political and economic issues affect
the financial time series, but this issue is discussed very briefly and only in
the conclusion. The paper is mostly dealing with technical issues rather than
discusses the numerical experiments. The main statement about crisis and
sanctions is rather bold and probably requires more careful justification.}\\
\\
\item{\em Overall, the text is rather untidy and has an lot of misprints and typos. References to the literature should be arranged using either Harvard or numerical system but not both. } \\
We adjusted the introduction and highlighted international sanctions and financial crisis in the section on related work and included references to other studies.
We have carefully proofread the manuscript and hope that the English grammar is now up to par.
\end{itemize}
\end{document} |
\section{Introduction}
The experimental demonstration of Bose-Einstein condensates (BEC) \cite{dalfovo} has led to the development of atom lasers by outcoupling atoms from trapped BECs by either a radio frequency transition or a Raman transition to change the internal state of the atom to one that is either untrapped or anti-trapped \cite{andrews, mewes, martin, bloch, hagely, anderson, lecoq, Cennini, Robins}. Atom lasers are coherent matter waves with spectral fluxes many orders of magnitude higher than thermal sources of atoms. The coherence of these sources will enable an increase in the sensitivity of interferometric measurements \cite{atomint}. Although current experiments usually operate in parameter regimes limited by technical noise, the fundamental limit on these measurements will be caused by the shot noise of the atomic field, which will be intrinsic to all interferometers without a non-classical atomic source. Sensitivity is increased in optical interferometry by `squeezing' the quantum state of the optical field, where the quantum fluctuations in one quadrature are reduced compared to a coherent state, while the fluctuations in the conjugate quadrature are increased. In the context of atom optics, it is interesting to ask whether highly squeezed atom optical sources can be produced. There is also great interest in the production of entangled atomic beams for quantum information processing and tests of quantum mechanics with massive particles \cite{entangledinterest}. This paper will describe methods of coupling the quantum statistics from optical fields to produce non-classical atomic sources with high efficiency.
Generation of squeezed atomic beams has been proposed by either utilising the nonlinear atomic interactions to create correlated pairs of atoms via either molecular down conversion or spin exchange collisions \cite{Duan1, Pu, Kheruntsyan}, or by transferring the quantum state of a squeezed optical field to the atomic beam \cite{Moore, Jing, Fleischhauer}. In the first case, it was shown that collisions between two condensate atoms in the $|M_F = 0\rangle$ state can produce one atom in the $|M_F = +1\rangle$ and one atom in the $|M_F = -1\rangle$, with sufficient kinetic energy to escape the trap \cite{Duan1,Pu}. It was shown that this scheme produced pairs of atoms entangled in atomic spin. In the second scheme a BEC of molecules composed of bosonic atoms is disassociated to produce twin atomic beams, analogous to optical down conversion \cite{Kheruntsyan}. It was shown that the beams were entangled in the sense that phase and amplitude measurements on one beam could infer phase and amplitude measurements of the other beam better than the Heisenberg limit. Although each atomic pair is perfectly correlated in direction in each of these schemes, there is very little control of direction of each pair, so the spectral flux would be limited.
The generation of nonclassical light is well established experimentally \cite{Bachor}. This suggests that a nonclassical atom laser output could be generated by transferring the quantum state of an optical mode to an atomic beam. Moore {\it et al.} showed that a quantized probe field could be partially transferred to momentum `side modes' of a condensate consisting of three-level atoms in the presence of a strong pump field \cite{Moore}. Jing {\it et al.} performed a single mode analysis of the atom laser outcoupling process for a two-level atom interacting with a quantized light field, and showed that the squeezing in the light field would oscillate between the light field and the atomic field at the Rabi frequency \cite{Jing}. As this was a single mode analysis, the interaction with the atoms as they left the outcoupling region was not taken into account. Fleischhauer {\it et al.} \cite{Fleischhauer} showed that Raman adiabatic transfer can be used to transfer the quantum statistics of a propagating light field to a continuously propagating beam of atoms by creating a polariton with a spatially dependent mixing angle, such that the output contained the state of the probe beam.
In this paper, we model the dynamics of an atom laser produced by outcoupling three-level atoms from a BEC via a Raman transition, and investigate the transfer of quantum statistics from one of the optical modes to the atomic field. Ideal transfer will occur when the time taken for each atom to leave the outcoupling region is a quarter of a Rabi period. The finite momentum spread of a trapped condensate means that there will be a broadening of the time taken to leave the outcoupling region, and hence ideal transfer will not be possible. To determine the effectiveness of the quantum state transfer, we require a multimode model that takes into account back coupling and the finite momentum spread of the condensate.
In Section II we describe an atom laser beam made by outcoupling from a BEC using a non-trivial optical mode, using the simplest possible model that contains the spatial effects in the output mode. We derive the Heisenberg equations of motion for this system under suitable approximations. Section III introduces the method used to solve these equations and investigates some properties of the outcoupled atoms, showing that complicated spatial behaviour occurs in the output even when the optical and BEC fields are described by a single mode. In section IV we investigate continuous outcoupling with two mode squeezing, and show that it can be used to generate continuous variable entanglement in twin atom laser beams propagating in different directions.
\section{Outcoupling using a nonclassical optical field}
When an atomic and an optical field are coupled, and they can both be described by a single mode, then complete state transfer must occur between them in a Rabi-like cycle. When producing an atom laser beam in this manner, however, the single mode approximation cannot be made for the output field, even though it may be applicable to the optical and BEC fields. In this section we develop such a model, and derive the Heisenberg equations of motion for the output field and the optical field operators.
We model an atom laser in one dimension as a BEC of three-level atoms coupled to free space via a Raman transition, as shown in figure \ref{fig:levels}. State $|1\rangle$ represents the internal state of the trapped condensate, $|3\rangle$ the excited state, and $|2\rangle$ the untrapped atomic mode. $\hat{a}_{13}$ is the annihilation operator for the probe optical mode (transition $|1\rangle \rightarrow |3\rangle$), and $\hat{a}_{23}$ is the annihilation operator for the pump optical mode (transition $|2\rangle \rightarrow |3\rangle$). The pump field is assumed to be a large coherent state, much stronger than the probe field, so it is approximated well by a classical field $g_{23}\hat{a}_{23} = \Omega^{*}_{23}e^{-i(\omega-\Delta_2)t}$.
\begin{figure}
\includegraphics[width=\columnwidth]{fig1.ps}
\caption{\label{fig:levels} Internal energy levels of a three level atom. A condensate of state $|1\rangle$ atoms confined in a trapping potential are coupled to free space via a Raman transition affected by a probe beam (annihilation operator $\hat{a}_{13}$) which is detuned from the excited state ($|3\rangle$) by an amount $\Delta_1$, and a pump field (annihilation operator $\hat{a}_{23}$) which is detuned from the excited state by an amount $\Delta_2$.}
\end{figure}
The Hamiltonian for the system (in the rotating wave approximation) is:
\begin{eqnarray}
\hat{\mathcal H} &=& \hat{\mathcal H}_{atom} + \hat{\mathcal H}_{light} + \hat{\mathcal H}_{atom-light} \\ \nonumber
&=& \int \hat{\psi}^{\dag}_1(k) H_0 \hat{\psi}_1(k) dk +\frac{\hbar^2}{2m} \int k^2 \hat{\psi}^{\dag}_2(k)\hat{\psi}_2(k) dk \\ \nonumber
&+& \int \hat{\psi}^{\dag}_3(k)(\frac{\hbar^2k^2}{2m} +\hbar \omega)\hat{\psi}_3(k) dk
+ \hbar(\omega -\Delta_1)\hat{a}_{13}^{\dag}\hat{a}_{13} \\ \nonumber
&+& \hbar g_{13} \int \hat{\psi}_1(k)\hat{\psi}_3^{\dag}(k+k_{13})\hat{a}_{13} + \hat{\psi}^{\dag}_1(k)\hat{\psi}_3(k+k_{13})\hat{a}_{13}^{\dag} dk \\ \nonumber
&+& \hbar \int \Omega_{23} \hat{\psi}^{\dag}_2(k)\hat{\psi}_3(k+k_{23})e^{i(\omega-\Delta_2)t} \\ \nonumber
&+& \Omega^{*}_{23} \hat{\psi}_2(k)\hat{\psi}^{\dag}_3(k+k_{23})e^{-i(\omega-\Delta_2)t} dk \\ \nonumber
\end{eqnarray}
where $\hat{\psi}_1(k)$ is the k-space annihilation operator the condensate mode (internal state $|1\rangle$), $\hat{\psi}_3(k)$ is the annihilation operator for atoms in the excited atomic state ($|3\rangle$), and $\hat{\psi}_2(k)$ is the annihilation operator for the untrapped free propagating mode ($|2\rangle$). The annihilation operators obey the usual bosonic commutation relations:
\begin{align}
[\hat{\psi}_i(k),\quad \hat{\psi}_j(k')] = [\hat{\psi}^{\dag}_i(k),\quad \hat{\psi}^{\dag}_j(k')] =0, \\ \nonumber
[\hat{\psi}_i(k),\quad \hat{\psi}^{\dag}_j(k')]=\delta_{ij}\delta(k-k')
\end{align}
$H_0$ is the single particle Hamiltonian for the trapped atoms, $m$ is the mass of the atoms, $\Omega_{23}$ is the Rabi frequency for the pump transition, $g_{13}$ is the coupling strength between the atom and the probe field, $\hbar \omega$ is the internal energy of the excited state $|3\rangle$ atoms, and $\hbar k_{13}$ and $\hbar k_{23}$ are the momentum kicks due to the pump and probe light fields respectively. For simplicity we have assumed a laser geometry where the pump and probe fields are counter propagating, to give the maximum possible momentum kick to the untrapped atoms.
The equations of motion for the Heisenberg operators are:
\begin{eqnarray}
i\dot{\hat{\psi}}_1(k) &=& \frac{H_0}{\hbar}\hat{\psi}_1(k) + g_{13} \tilde{\psi}_3(k+k_{13})\hat{a}^{\dag} \\
i\dot{\hat{\psi}}_2(k) &=& \frac{\hbar k^2}{2m}\hat{\psi}_2(k) + \Omega_{23} \tilde{\psi}_3(k+k_{23}) \\
i\dot{\tilde{\psi}}_3(k) &=& ( \frac{\hbar k^2}{2m}+\Delta_2)\tilde{\psi}_3(k) + g_{13} \hat{\psi}_1(k-k_{13})\hat{a}\\ \nonumber
&+& \Omega^{*}_{23}\hat{\psi}_2(k-k_{23}) \\
i\dot{\hat{a}} &=& \delta\hat{a} + g_{13}\int \hat{\psi}^{\dag}_1(k-k_{13})\tilde{\psi}_3(k)dk
\end{eqnarray}
Where $\tilde{\psi}_3 = \hat{\psi}_3 e^{i(\omega-\Delta_2)t}$ and $\hat{a} = \hat{a}_{13}e^{i(\omega-\Delta_2)t}$, and $\delta = (\Delta_2 - \Delta_1)$ is the two-photon detuning.
The population of state $|3\rangle$ will be much less than the other levels when the detunings ($\Delta_1$, $\Delta_2$) are much larger than the other terms in the system (including the kinetic energy of the excited state atoms). Furthermore, most of the dynamics will occur on time-scales much greater than $\frac{1}{\Delta_2}$, so in this regime we can set $\tilde{\psi}_3(k, t) \approx \frac{-1}{\Delta_2}(g_{13} \hat{\psi}_1(k-k_{13}, t)\hat{a} + \Omega_{23}^{*}\hat{\psi}_2(k-k_{23}, t))$. If the condensate has a large number of atoms and is approximately in a coherent state, we can write $\hat{\psi}_1(k, t) \approx \sqrt{N}\phi_0(k)e^{-i\omega_{t}t}$, where $\phi_0(k)$ is the condensate wavefunction (which we will assume is in the ground state of the harmonic oscillator, with $\omega_{t}$ the trapping frequency) and $N$ is the condensate number. We have ignored the atom-atom interactions in our model, which is valid only if the condensate is dilute. Strong atom-atom interactions would introduce complicated evolution to the quantum state of the condensate mode. Inclusion of these effects is not possible with our method, and a more complicated technique such as a phase space method would be required \cite{phase space method}. The approximation of ignoring the back action on the condensate is only valid in the regime where the outcoupling is weak, i.e. the number of photons in the probe field is much less than the number of atoms in the condensate. In an experiment, measuring the quantum noise on the atom laser beam would require small classical noise on the beam, and in practice this is easier to achieve with weak outcoupling. With these approximations our equations of motion for the free propagating atoms and the probe field become
\begin{eqnarray}
i\dot{\hat{\psi}}(k) &=& \omega_0(k)\hat{\psi}(k) - \Omega_0(k)\hat{a} \label{psidot} \\
i\dot{\hat{a}} &=& \omega_a \hat{a} - \int \Omega_0^{*}(k)\hat{\psi}(k)dk \label{adot}
\end{eqnarray}
with $\hat{\psi}(k) = \hat{\psi}_2(k)e^{i\omega_{t}t}$, $\omega_0(k) = (\frac{\hbar k^2}{2m} -\frac{|\Omega_{23}|^2}{\Delta_2} -\omega_{t})$, $\omega_a = (\delta -\frac{g_{13}^2N}{\Delta_2})$, and $\Omega_0(k) = g_{13}\sqrt{N} \frac{\Omega_{23}}{\Delta_2} \phi_0(k+k_{23}-k_{13})$.
In the next section we will discuss the solution to these equations and the properties of the outcoupled atoms.
\section{Properties of the outcoupled atoms}
The solution to equations (\ref{psidot}) and (\ref{adot}) is
\begin{eqnarray}
\hat{\psi}(k,t) &=& \int f(k,k',t)\hat{\psi}_s(k)dk' +g(k,t)\hat{a}_{s} \label{solutionpsi} \\
\hat{a}(t) &=& p(t)\hat{a}_s + \int q(k',t)\hat{\psi}_s(k')dk' \label{solutiona}
\end{eqnarray}
Where $\hat{a}_s = \hat{a}(t=0)$ and $\hat{\psi}_s(k) = \hat{\psi}(k, t=0)$ are the Schr\"{o}dinger picture operators, and $f(k, k', t)$, $g(k, t)$, $p(t)$, $q(k',t)$ are complex functions satisfying:
\begin{eqnarray}
i\dot{f}(k,k') &=& \omega_0(k)f(k,k') -\Omega_0(k)q(k') \label{semiclassical} \\ \nonumber
i\dot{g}(k) &=& \omega_0(k)g(k) -\Omega_0(k)p \\ \nonumber
i\dot{p} &=& \omega_a p -\int\Omega^*_0(k)g(k) dk \\ \nonumber
i\dot{q}(k') &=& \omega_a q(k') -\int\Omega^*_0(k)f(k,k') dk \\ \nonumber
\end{eqnarray}
with initial conditions $f(k, k',t=0)= \delta(k-k')$, $p(t=0) = 1$, and $g(k, t=0) = q(k',t=0) =0$. This ansatz has reduced the field operator equations to a set of coupled partial differential equations. This will only be possible for Heisenberg equations of motion that do not have terms with products of operators, but it allows the possibility of an analytic or numerical solutions to the full quantum problem.
We solved equations (\ref{semiclassical}) numerically using a fourth order Runge Kutta algorithm with the XMDS numerical package \cite{xmds}. We chose parameters realistic for atom optics experiments with Rb$^{87}$ atoms. Unless stated otherwise, we have set $m = 1.4\times 10^{-25}$ kg, $\omega_{t} = 0.25$ rad $s^{-1}$, $|{\bf k}_{23} -{\bf k}_{13}| = 1.6\times10^7$ m$^{-1}$, which corresponds to twice the wave number of the $^2$S$_{\frac{1}{2}}$,F$=2$ $\rightarrow$ $^2$P$_{\frac{3}{2}}$,F$=3$ transition in Rb$^{87}$. $\phi_0(k)$ was chosen to be the (normalized) ground state momentum space wave function of a condensate (ignoring interactions) in a harmonic trap, and we set $\Omega_0(k) = \Omega\phi_0(k + k_{23} -k_{13})$ with $\Omega = 90$ rad s$^{-1}$. The results are reasonably insensitive to the absolute magnitude of $\omega_a$ and $\omega_0$, but they are quite sensitive to the relative values. To maintain resonance between the two fields, we set $\frac{|\Omega_{23}|^2}{\Delta_2} = \frac{\hbar(k_{23} -k_{13})^2}{2m} -\omega_a -\omega_{t}$. These relationships can be obtained with physically realistic parameters and are consistent with the approximations made in this model. We set $\omega_a=20$ rad s$^{-1}$.
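Before turning to the full multimode results, it is instructive to integrate a single-mode analogue of equations (\ref{semiclassical}), in which one amplitude $g$ is coupled to the optical amplitude $p$ with $f$ and $q$ decoupling analogously. The sketch below is ours (it is not the multimode XMDS code used for the figures) and simply illustrates that, on two-photon resonance, state transfer is complete after a quarter Rabi period.

```python
import math

def single_mode_gp(omega0, omega_a, Omega, t, steps=20000):
    """Fourth-order Runge-Kutta integration of the single-mode analogue of
    the coupled amplitude equations:
        i dg/dt = omega0 * g - Omega * p
        i dp/dt = omega_a * p - Omega * g
    with initial conditions g(0) = 0, p(0) = 1."""
    def deriv(g, p):
        return (-1j * (omega0 * g - Omega * p),
                -1j * (omega_a * p - Omega * g))
    dt = t / steps
    g, p = 0.0 + 0.0j, 1.0 + 0.0j
    for _ in range(steps):
        k1g, k1p = deriv(g, p)
        k2g, k2p = deriv(g + 0.5 * dt * k1g, p + 0.5 * dt * k1p)
        k3g, k3p = deriv(g + 0.5 * dt * k2g, p + 0.5 * dt * k2p)
        k4g, k4p = deriv(g + dt * k3g, p + dt * k3p)
        g += dt / 6.0 * (k1g + 2 * k2g + 2 * k3g + k4g)
        p += dt / 6.0 * (k1p + 2 * k2p + 2 * k3p + k4p)
    return g, p
```

On resonance ($\omega_0 = \omega_a = 0$, $\Omega = 90$ rad s$^{-1}$ as above), $|g(t)|^2 = \sin^2(\Omega t)$ and $|p(t)|^2 = \cos^2(\Omega t)$, so the transfer is complete at $t = \pi/(2\Omega)$ while $|g|^2 + |p|^2 = 1$ throughout.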
Figures (\ref{fig:f1}), (\ref{fig:g1}), (\ref{fig:p1}) and (\ref{fig:q1}) show the solutions to equations (\ref{semiclassical}) for the values indicated above.
\begin{figure}
\includegraphics[width=\columnwidth, bb=0 0 600 600]{fig2.ps}
\caption{\label{fig:f1} $|f(k, k', t=0.11 s)|^2$ for the values of parameters indicated in the text. $k_{kick}$ is the kick acquired due to the Raman transition, ie $k_{kick} = k_{23} - k_{13}$. The function was discretized for numerical calculation by replacing $\delta(k-k')$ with $\frac{\delta_{k, k'}}{\Delta k}$, where $\Delta k$ is the grid spacing. The dip in the function near the $k=k_{kick}$ resonance shows how the quantum state of the atoms has been affected by the interaction with the optical fields.}
\end{figure}
\begin{figure}
\includegraphics[width=\columnwidth, bb=0 0 600 600]{fig3.ps}
\caption{\label{fig:g1} $|g(k, t)|^2$ as found numerically for the values of parameters indicated in the text. This shows that atoms are created around $k=k_{kick}$ with a quantum state related to the initial state of the probe field.}
\end{figure}
\begin{figure}
\includegraphics[width=\columnwidth, bb=0 0 600 600]{fig4.ps}
\caption{\label{fig:p1} $|p(t)|^2$ as found numerically for the values of parameters indicated in the text.}
\end{figure}
\begin{figure}
\includegraphics[width=\columnwidth, bb=0 0 600 600]{fig5.ps}
\caption{\label{fig:q1} $|q(k', t)|^2$ as found numerically for the values of parameters indicated in the text.}
\end{figure}
The solution of equations (\ref{semiclassical}) gives us the solution of Eqs.(\ref{solutionpsi}) and (\ref{solutiona}) for all possible initial quantum states of the optical field and the free propagating atomic field. We will assume that the initial state of the field is $|\psi\rangle \equiv |light\rangle \otimes\{|0\rangle\}_k$ \label{initstate}, where $|light\rangle$ represents an arbitrary state for the optical mode, and $\{|0\rangle\}_k$ represents a vacuum mode at all points in $k$ space for the atomic field. The expectation value of the density of outcoupled atoms $\langle \hat{\Psi}^{\dag}(x) \hat{\Psi}(x)\rangle$ with $\hat{\Psi}(x) = \frac{1}{\sqrt{2\pi}} \int \hat{\psi}(k) e^{ikx}dk$ is
\begin{equation}
\langle \hat{\Psi}^{\dag}(x) \hat{\Psi}(x)\rangle = |G(x)|^2 \langle \hat{a}^{\dag}_s\hat{a}_s\rangle, \quad G(x) = \frac{1}{\sqrt{2\pi}} \int g(k) e^{ikx}dk
\end{equation}
It is interesting to note that when the initial state of the untrapped atomic field is the vacuum, the spatial structure of the density of the untrapped atoms at later times depends only on the functional form of $G(x)$, which depends on the efficiency of the outcoupling process.
Figure (\ref{fig:xdens}) shows the density of outcoupled atoms when the expectation value of the initial number of photons is $\langle \hat{a}_s^{\dag}\hat{a}_{s}\rangle = 1000$.
\begin{figure}
\includegraphics[width=\columnwidth, bb=0 0 600 600]{fig6.ps}
\caption{\label{fig:xdens} $\langle\hat{\Psi}^{\dag}(x)\hat{\Psi}(x)\rangle$ for $\langle \hat{a}_s^{\dag}\hat{a}_{s}\rangle = 1000$.}
\end{figure}
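As a numerical aside, the relation between the momentum-space amplitude and the real-space density can be sketched directly. The Gaussian $g(k)$ used below is an illustrative assumption (the actual $g(k,t)$ follows from integrating the mode equations, as in Fig.~\ref{fig:g1}); the sketch checks that the outcoupling efficiency $N_G = \int |g(k)|^2 dk = \int |G(x)|^2 dx$ is the same in either representation, and builds the density $|G(x)|^2\langle \hat{a}_s^{\dag}\hat{a}_s\rangle$.

```python
import numpy as np

# Illustrative sketch only: a Gaussian outcoupling amplitude g(k) centred on
# an assumed kick momentum (arbitrary dimensionless units), standing in for
# the g(k,t) obtained from the coupled mode equations.
k = np.linspace(-50.0, 50.0, 4096)
dk = k[1] - k[0]
k_kick = 10.0
g = 0.5 * np.exp(-0.5 * (k - k_kick) ** 2)      # |g|^2 <= 1/4 everywhere

x = np.linspace(-20.0, 20.0, 1024)
dx = x[1] - x[0]
# G(x) = (1/sqrt(2 pi)) \int g(k) e^{ikx} dk, evaluated as a Riemann sum
G = (np.exp(1j * np.outer(x, k)) @ g) * dk / np.sqrt(2.0 * np.pi)

N_G_k = np.sum(np.abs(g) ** 2) * dk             # efficiency in k space
N_G_x = np.sum(np.abs(G) ** 2) * dx             # efficiency in x space (Parseval)

n_photons = 1000.0
density = n_photons * np.abs(G) ** 2            # <Psi^dag(x) Psi(x)> for vacuum atoms
```

With these (assumed) parameters the integrated density comes out as $N_G \langle \hat{a}^{\dag}_s\hat{a}_s\rangle$, consistent with the expression for $\langle\hat{N}\rangle$ below.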
The number operator is $\hat{N} = \int \hat{\Psi}^{\dag}(x) \hat{\Psi}(x) dx$. Using our solution (Eq. \ref{solutionpsi}) and our initial state, the expectation value is $\langle\hat{N}\rangle =\langle \hat{a}^{\dag}_s \hat{a}_s \rangle \int |G(x)|^2 dx$. The variance of the number operator is
\begin{eqnarray}
V(\hat{N}) &=& \langle \hat{N}^2 \rangle - \langle \hat{N} \rangle^2 \nonumber \\
&=& \int\int \langle \hat{\Psi}^{\dag}(x') \hat{\Psi}(x') \hat{\Psi}^{\dag}(x) \hat{\Psi}(x)\rangle dx dx' \nonumber \\
&-&\Big( \int\langle \hat{\Psi}^{\dag}(x) \hat{\Psi}(x) \rangle dx \Big)^2 \nonumber \\
&=& \int \int \langle \hat{\Psi}^{\dag}(x')\hat{\Psi}^{\dag}(x)\hat{\Psi}(x') \hat{\Psi}(x)\rangle dx dx' \nonumber \\
&+& \int\langle \hat{\Psi}^{\dag}(x) \hat{\Psi}(x) \rangle dx - \Big( \int\langle \hat{\Psi}^{\dag}(x) \hat{\Psi}(x) \rangle dx \Big)^2. \nonumber \\
\end{eqnarray}
Using our solution (Eq. \ref{solutionpsi}) and our initial state, this becomes
\begin{eqnarray}
V(\hat{N}) &=& N_G^2\Big(\langle \hat{a}^{\dag}_s \hat{a}^{\dag}_s \hat{a}_s \hat{a}_s \rangle -\langle \hat{a}^{\dag}_s \hat{a}_s \rangle^2 \Big) + N_G\langle \hat{a}^{\dag}_s \hat{a}_s \rangle \nonumber \\
&=& N_G^2 V(\hat{a}^{\dag}_s \hat{a}_s) + N_G(1-N_G)\langle \hat{a}^{\dag}_s \hat{a}_s \rangle,
\end{eqnarray}
with $N_G = \int |G(x)|^2 dx$. We note that as $N_G \rightarrow 1$, the variance in the number of outcoupled atoms approaches the variance of the initial optical mode, since the quantum statistics of the outcoupled atoms depend only on the initial quantum state of the optical field and the efficiency of the outcoupling process. Figure (\ref{fig:Nvar1}) shows the variance of the outcoupled atom number versus time for different states of the optical mode.
\begin{figure}
\includegraphics[width=\columnwidth, bb=0 0 600 600]{fig7.ps}
\caption{\label{fig:Nvar1} The relative variance $v(\hat{N})=\frac{V(\hat{N})}{\langle \hat{N}\rangle}$ for the outcoupled atoms versus time for different states of the optical field. The solid line represents the initial optical mode in a coherent state $|\alpha\rangle$ with $|\alpha|^2 = 1000$, the dotted line represents a squeezed state $|\alpha, r\rangle$ with $|\alpha|^2 = 1000$, $r=1.38$, and the dashed line represents a Fock state $|n\rangle$ with $n=1000$.}
\end{figure}
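The closed formula for $V(\hat{N})$ can be exercised on its own. The efficiency values below are placeholders (in the text $N_G(t)$ comes from the numerical solution for $g(k,t)$); the sketch confirms that a coherent input always gives exactly Poissonian atom-number statistics, $V(\hat{N}) = N_G\langle\hat{n}\rangle$, at every efficiency, while a Fock input is sub-Poissonian and becomes noiseless as $N_G\rightarrow 1$, matching the behaviour in Fig.~\ref{fig:Nvar1}.

```python
# V(N) = N_G^2 V(n) + N_G (1 - N_G) <n>, with N_G the outcoupling efficiency
# and (<n>, V(n)) the mean and variance of the input photon number.
def atom_number_variance(N_G, n_mean, n_var):
    return N_G**2 * n_var + N_G * (1.0 - N_G) * n_mean

n = 1000.0
for N_G in (0.1, 0.5, 0.9, 1.0):
    v_coherent = atom_number_variance(N_G, n, n)    # coherent light: V(n) = <n>
    v_fock = atom_number_variance(N_G, n, 0.0)      # Fock state: V(n) = 0
    assert abs(v_coherent - N_G * n) < 1e-9 * n     # always exactly shot noise
    assert v_fock <= v_coherent                     # Fock input is sub-Poissonian

assert atom_number_variance(1.0, n, 0.0) == 0.0     # perfect transfer: no noise
```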
A more interesting observable to look at is the flux of the outcoupled atoms, as the spatial structure of the outcoupled beam becomes apparent. The flux operator is:
\begin{equation}
\hat{J}(x) = \frac{i\hbar}{2m}\Bigl( \nabla \hat{\Psi}^{\dag}(x) \hat{\Psi}(x) - \hat{\Psi}^{\dag}(x) \nabla \hat{\Psi}(x) \Bigr)
\end{equation}
which, using our solution for $\hat{\psi}(k)$ becomes
\begin{eqnarray}
\hat{J}(x) &=& \int\int J_f(x,k',k'') \hat{\psi}_s^{\dag}(k')\hat{\psi}_s(k'') dk' dk'' \nonumber \\
&+& J_g(x) \hat{a}_s^{\dag}\hat{a}_s \nonumber \\
&+& \int J_{fg}(x, k') \hat{\psi}_s^{\dag}(k')\hat{a}_s dk' \nonumber \\
&+& \int J_{gf}(x, k'') \hat{\psi}_s(k'')\hat{a}^{\dag}_s dk''
\end{eqnarray}
with
\begin{align*}
J_f(x,k',k'')& = \frac{i\hbar}{2m}\Bigl( \nabla F^{*}(x,k')F(x,k'') - F^{*}(x,k') \nabla F(x,k'') \Bigr) \\
J_g(x)& = \frac{i\hbar}{2m}\Bigl( \nabla G^{*}(x)G(x) - G^{*}(x) \nabla G(x) \Bigr) \\
J_{fg}(x,k')& = \frac{i\hbar}{2m}\Bigl( \nabla F^{*}(x,k')G(x) - F^{*}(x,k') \nabla G(x) \Bigr) \\
J_{gf}(x,k'')& = \frac{i\hbar}{2m}\Bigl( \nabla G^{*}(x)F(x,k'') - G^{*}(x) \nabla F(x,k'') \Bigr)
\end{align*}
and with $F(x, k') = \frac{1}{\sqrt{2 \pi}}\int f(k, k') e^{ikx}dk$.
Using our initial state $|\psi\rangle = |light\rangle\otimes\{|0\rangle\}_k$, the expectation value of the flux operator becomes:
\begin{equation}
\langle \hat{J}(x)\rangle = J_{g}(x)\langle \hat{a}^{\dag}_s \hat{a}_s \rangle
\end{equation}
This shows that the mean atom flux in the output pulse depends only on the details of the coupling process and the mean photon number, and not on the higher-order statistics of the outcoupling optical field. Figure \ref{fig:flux1} shows the flux of outcoupled atoms for $\langle \hat{a}^{\dag}_s \hat{a}_s \rangle = 1000$.
\begin{figure}
\includegraphics[width=\columnwidth, bb=0 0 600 600]{fig8.ps}
\caption{\label{fig:flux1} Flux of outcoupled atoms at a point in the atomic beam ($x=1.5$ mm) for $\langle \hat{a}^{\dag}_s \hat{a}_s \rangle = 1000$.}
\end{figure}
To investigate how the quantum statistics are transferred to the atomic beam, we look at the variance in the flux.
\begin{eqnarray}
V(\hat{J}) &=& \langle \hat{J}^2 \rangle - \langle \hat{J} \rangle^2 \\
&=& J_g^2\langle \hat{a}^{\dag}_s \hat{a}_s \hat{a}^{\dag}_s \hat{a}_s \rangle - J_g^2\langle \hat{a}^{\dag}_s \hat{a}_s\rangle^2 \nonumber \\
&+& \int\int J_{gf}(x,k')J_{fg}(x,k'') \langle \hat{a}^{\dag}_s \hat{\psi}_s(k') \hat{a}_s \hat{\psi}^{\dag}_s(k'') \rangle dk'dk'' \nonumber \\
&=& J_g^2 V(\hat{a}^{\dag}_s \hat{a}_s) + \langle \hat{a}^{\dag}_s \hat{a}_s \rangle \int J_{gf}(x,k')J_{fg}(x,k') dk'
\end{eqnarray}
The variance in the flux has two terms, one proportional to the variance in the photon number, and the other proportional to the photon number itself. For a Fock state photonic input the first of those terms is zero, and the variance is proportional to the function $\int J_{gf}(x,k)J_{fg}(x,k) dk$. This can be contrasted to the case where the optical field is in a coherent state, and the total variance in the flux is simply proportional to the function $J_g^2 + \int J_{gf}(x,k)J_{fg}(x,k) dk$. A reasonable measure of the transfer of the quantum state of the zero-dimensional photon field to the larger space of the output pulse is therefore the function
\begin{eqnarray}
v(\hat{J}) &=& \frac{\int J_{gf}(x,k')J_{fg}(x,k') dk'}{J_g^2 + \int J_{gf}(x,k)J_{fg}(x,k) dk},
\end{eqnarray}
which shows the minimum possible variance in the output flux normalised to the flux variance produced by output with a coherent optical state.
Figure (\ref{fig:fluxfigy}) shows $v(\hat{J})$ for different values of the coupling constant $\Omega$. Even in our simplified model where we have assumed a single mode for the optical beam and the condensate, the outcoupled atoms still display complicated spatio-temporal dynamics. Weak outcoupling gives a steady flux, but very little suppression of the shot noise because the timing of the output of each atom becomes uncertain, making the number statistics uncertain in the transient period. When the outcoupling rate is increased, a significant amount of flux squeezing is displayed in a localised pulse. Further increase of the outcoupling rate shows more complicated dynamics, as some of the outcoupled atoms are coupled back into the condensate. This causes the atoms to come out in a series of pulses, with less flux squeezing than for optimal outcoupling. An interesting sidenote is that the flux variance produced by the coherent optical state (the denominator of $v(\hat{J})$) is simply proportional to the flux itself, with the same proportionality constant for all times, and all values of $\Omega$.
\begin{figure}
\includegraphics[width=\columnwidth, bb=0 0 600 600]{fig9.ps}
\caption{\label{fig:fluxfigy} $v(\hat{J})$ at a point in the path of the atomic beam ($x=1.5$ mm) for $\Omega = 18$ rad s$^{-1}$ (dotted line), $\Omega = 144$ rad s$^{-1}$ (solid line), and $\Omega = 270$ rad s$^{-1}$ (dashed line). Coupling weakly produces a long pulse, but the variance in the flux is almost unaffected by the statistics of the optical state. Coupling too strongly causes significant back coupling from the output field to the photonic state.}
\end{figure}
Although the variance in the number of atoms in the pulse is quite insensitive to the strength of the coupling between the trapped and untrapped fields, this is not true for the variance in the flux. Figure (\ref{fig:maxsqueezy}) shows the maximum suppression of shot noise in $v(\hat{J})$ for different values of $\Omega$. We can estimate the outcoupling that will produce the minimum $v(\hat{J})$ by finding the maximum Rabi frequency that does not cause significant back-coupling to the condensate. To do so, we equate the quarter-period of a Rabi oscillation, $T_{Rabi}/4 = \pi/(2\Omega)$, with the time taken for the kicked atoms to leave the coupling region, $T_{leave} = \sqrt{\frac{8\hbar}{m\omega_{t}}}m/(\hbar|{\bf k}_{23} -{\bf k}_{13}|)$, where $\sqrt{\frac{8\hbar}{m\omega_{t}}}$ is the spatial width of the condensate. From this we estimate that optimum outcoupling will occur when $\Omega \approx \frac{\pi \hbar|{\bf k}_{23} -{\bf k}_{13}|}{4 m \sqrt{\frac{2\hbar}{m\omega_{t}}}} \approx 250$ rad s$^{-1}$ for the parameters used in this paper. This agrees well with the calculated minimum shown in figure (\ref{fig:maxsqueezy}).
\begin{figure}
\includegraphics[width=\columnwidth, bb=0 0 600 600]{fig10.ps}
\caption{\label{fig:maxsqueezy} Minimum value of $v(\hat{J})$ versus $\Omega$. The shot noise is below the vacuum noise for all values of $\Omega$ when using a Fock state to outcouple.}
\end{figure}
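The order of magnitude of this estimate is easy to reproduce. The mass below is that of $^{87}$Rb and the trap frequency is an assumed value (the paper's full parameter list is not reproduced in this excerpt); the momentum difference is the value quoted later for the twin-beam scheme. With these assumptions the estimate lands near the quoted $\approx 250$ rad s$^{-1}$.

```python
import numpy as np

# Omega_opt ~ pi hbar |k23 - k13| / (4 m sqrt(2 hbar / (m omega_t))).
# Assumptions for illustration only: 87Rb mass, a weak trap omega_t = 1 rad/s,
# and |k23 - k13| = 1.6e7 m^-1 (the kick used in the twin-beam section).
hbar = 1.054571e-34            # J s
m = 1.44316e-25                # kg (87Rb)
delta_k = 1.6e7                # m^-1, assumed |k23 - k13|
omega_t = 1.0                  # rad/s, assumed trap frequency
half_width = np.sqrt(2.0 * hbar / (m * omega_t))
omega_opt = np.pi * hbar * delta_k / (4.0 * m * half_width)
```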
We have shown that the quantum statistics of an optical mode can be transferred to an atom laser beam to produce a pulse of atoms with better defined number, or to partially suppress the fluctuations in the flux. Furthermore, we have shown that this transfer works for an arbitrary initial quantum state of the optical mode, which suggests that two-mode optical squeezing could be used to generate spatially separated entangled atomic beams. In the next section we investigate the possibility of using twin optical beams produced from a non-degenerate optical parametric oscillator to generate two entangled atomic beams propagating in different directions.
\section{EPR beams}
Continuous wave generation of correlated atom beams requires a more complicated scheme. We consider a probe field created by a nondegenerate OPO producing twin optical beams (figure (\ref{fig:eprscheme})). These modes have the same wavelength, but travel in different directions and hence they have different momenta. The OPO is driven by a classical, non-depletable driving field. This will produce twin atom laser beams with different momenta. This differs from the previous case in that it allows continuous outcoupling of the atoms, rather than just a pulse. The Hamiltonian for the system is now
\begin{figure}
\includegraphics[width=\columnwidth, bb=0 0 600 600]{fig11.ps}
\caption{\label{fig:eprscheme} Twin atom laser beams produced by outcoupling with two-mode squeezed light. An OPO is driven by a classical, non-depletable driving field ($\beta$). The $\chi^{(2)}$ process produces two optical modes $\hat{a}_1$ and $\hat{a}_2$, which are used to outcouple the atom laser beams. }
\end{figure}
\begin{eqnarray}
\hat{\mathcal H} &=& \hat{\mathcal H}_{atom} \\ \nonumber
&+& \hbar(\omega-\Delta_1)\hat{a}^{\dag}_1\hat{a}_1
+ \hbar(\omega-\Delta_1)\hat{a}^{\dag}_2\hat{a}_2 \\ \nonumber
&+& \hbar\chi(\beta\hat{a}^{\dag}_1\hat{a}^{\dag}_{2} e^{-i\omega_{p}t} + \beta^{*}\hat{a}_1\hat{a}_{2} e^{i\omega_{p}t}) \\ \nonumber
&+& \hbar g_{13} \int \Bigl(\hat{\psi}_1(k)\hat{\psi}_3^{\dag}(k+k_1)\hat{a}_1 + \hat{\psi}^{\dag}_1(k)\hat{\psi}_3(k+k_1)\hat{a}^{\dag}_1\Bigr) dk \\ \nonumber
&+& \hbar g_{13} \int \Bigl(\hat{\psi}_1(k)\hat{\psi}_3^{\dag}(k+k_2)\hat{a}_2 + \hat{\psi}^{\dag}_1(k)\hat{\psi}_3(k+k_2)\hat{a}^{\dag}_2\Bigr) dk \\ \nonumber
&+& \hbar \int \Bigl(\Omega_{23} \hat{\psi}^{\dag}_2(k)\hat{\psi}_3(k+k_0)e^{i(\omega-\Delta_2)t} \\ \nonumber
&+& \Omega^{*}_{23} \hat{\psi}_2(k)\hat{\psi}^{\dag}_3(k+k_0)e^{-i(\omega-\Delta_2)t}\Bigr) dk \\ \nonumber
\end{eqnarray}
where $\hat{a}_1$ and $\hat{a}_2$ represent the annihilation operators for the twin probe fields produced from the down conversion process, both assumed to affect the $|1\rangle \rightarrow |3\rangle$ transition, $\beta$ is the complex amplitude of the pump field, $\omega_p$ is the frequency of the pump, and $\chi$ is the nonlinear coefficient of the down conversion medium. $\hbar k_1$ and $\hbar k_2$ are the magnitudes of the momentum kicks due to absorption of photons in $\hat{a}_1$ and $\hat{a}_2$ respectively. We have assumed that the photons are resonant in an optical resonator with 100\% reflective mirrors. This assumption is valid as the dominant form of loss out of the cavity will be due to atomic absorption. For computational convenience in our one-dimensional model, we have chosen $\mathbf{k_1}-\mathbf{k_0} = -(\mathbf{k_2}-\mathbf{k_0})$, i.e., the resultant momentum kicks that the atoms obtain after being outcoupled are of equal magnitude and opposite direction. By adiabatically eliminating the excited state, and assuming the condensate is a large coherent state as before, we obtain the following equations of motion for the outcoupled atoms and the probe fields:
\begin{eqnarray} \label{OPOEOM}
i\dot{\hat{\psi}}(k) &=& \omega_0(k)\hat{\psi}(k) - \Omega_1(k)\tilde{a}_1 - \Omega_2(k)\tilde{a}_2 \\
i\dot{\tilde{a}}_1 &=& \omega_a \tilde{a}_1 - \int \Omega_1^{*}(k)\hat{\psi}(k)dk \\ \nonumber
&+& \chi\beta\tilde{a}^{\dag}_2 e^{i(2(\omega -\Delta_2)-\omega_{p})t} -\Omega_C \tilde{a}_2 \\
i\dot{\tilde{a}}_2 &=& \omega_a \tilde{a}_2 - \int \Omega_2^{*}(k)\hat{\psi}(k)dk \\ \nonumber
&+& \chi\beta\tilde{a}^{\dag}_1 e^{i(2(\omega -\Delta_2)-\omega_{p})t} -\Omega_C^{*} \tilde{a}_1
\end{eqnarray}
with $\hat{\psi}(k) = \hat{\psi}_2(k)e^{i\omega_{t}t}$, $\tilde{a}_j = \hat{a}_j e^{i(\omega-\Delta_2)t}$, $\Omega_j(k) = g\sqrt{N} \frac{\Omega}{\Delta_2} \phi_0(k+k_0-k_j)$ for $j=1, 2$, and $\Omega_C = \frac{g^2N}{\Delta_2}\int\phi_0^{*}(k-k_1)\phi_0(k-k_2) dk$. The $\Omega_C$ cross coupling term between the two optical modes is due to atoms absorbing a photon from one beam and emitting it into the other beam. This term will be small due to the large momentum difference between the two modes. However, the functional form of $\Omega_C$ is due to our assumption that the condensate remains single mode. Cross coupling between the two optical modes will cause momentum `side bands' on the condensate mode \cite{Moore}, but the effect of this cross coupling will be small if the number of photons in the probe beam is small compared to the number of atoms in the condensate. As the results in this section are calculated in a parameter regime where the chance of an outcoupled atom coupling back into the condensate is low, it is valid to neglect this term in our calculations. The general solution to equations (\ref{OPOEOM}) is
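The claim that $\Omega_C$ is small can be made quantitative with a one-line estimate. For a Gaussian condensate mode of momentum width $\sigma_k$, the overlap $\int\phi_0^{*}(k-k_1)\phi_0(k-k_2)dk = e^{-(k_1-k_2)^2/4\sigma_k^2}$. The numbers below (a momentum width of order $10^4$ m$^{-1}$, i.e. a condensate tens of microns across, against the momentum separation $|k_1-k_2| = 3.2\times10^7$ m$^{-1}$ used later) are illustrative assumptions, and make the cross coupling utterly negligible.

```python
import numpy as np

# Overlap of two unit-normalised Gaussian modes displaced by delta_k in
# momentum space: exp(-delta_k^2 / (4 sigma_k^2)).
def gaussian_overlap(delta_k, sigma_k):
    return np.exp(-delta_k**2 / (4.0 * sigma_k**2))

sigma_k = 2.6e4        # m^-1, assumed condensate momentum width
delta_k = 3.2e7        # m^-1, twin-beam momentum separation |k_1 - k_2|

overlap_same = gaussian_overlap(0.0, sigma_k)     # same mode: overlap 1
overlap_twin = gaussian_overlap(delta_k, sigma_k) # underflows to zero
```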
\begin{eqnarray} \label{oposol}
\hat{\psi}(k,t) &=& \int f_{+}(k,k',t)\hat{\psi}_s(k')dk' \\ \nonumber
&+& \int f_{-}(k,k',t)\hat{\psi}^{\dag}_s(k')dk'+ g_{1+}(k,t)\hat{a}_{1s} \\ \nonumber &+& g_{1-}(k,t)\hat{a}^{\dag}_{1s}
+ g_{2+}(k,t)\hat{a}_{2s} + g_{2-}(k,t)\hat{a}^{\dag}_{2s} \\
\hat{a}_1(t) &=& p_{1+}(t)\hat{a}_{1s} + p_{1-}(t)\hat{a}^{\dag}_{1s} \\ \nonumber
&+& p_{2+}(t)\hat{a}_{2s} + p_{2-}(t)\hat{a}^{\dag}_{2s} \\ \nonumber
&+& \int p_{3+}(k',t)\hat{\psi}_s(k')dk' + \int p_{3-}(k',t)\hat{\psi}^{\dag}_s(k')dk' \\
\hat{a}_2(t) &=& q_{1+}(t)\hat{a}_{1s} + q_{1-}(t)\hat{a}^{\dag}_{1s} \\ \nonumber
&+& q_{2+}(t)\hat{a}_{2s} + q_{2-}(t)\hat{a}^{\dag}_{2s} \\ \nonumber
&+& \int q_{3+}(k',t)\hat{\psi}_s(k')dk' + \int q_{3-}(k',t)\hat{\psi}^{\dag}_s(k')dk'
\end{eqnarray}
where $f_{\pm}(k,k',t)$, $g_{1,2 \pm}(k,t)$, $p_{1,2\pm}(t)$, $p_{3\pm}(k',t)$, $q_{1,2 \pm}(t)$, $q_{3\pm}(k',t)$ are complex functions satisfying differential equations obtained by substituting the solutions (\ref{oposol}) into (\ref{OPOEOM}) (see appendix). From the solution of these equations, we can calculate any observable of the system.
We solved the equations (\ref{opoprop}) numerically for $\chi\beta = 80$s$^{-1}$, $\Omega_j(k) = \Omega\phi_0(k-k_0-k_j)$ with $\Omega = 108$ rad s$^{-1}$ for $j=1, 2$. We set $k_1-k_0 = -(k_2 -k_0) = 1.6\times10^7$ m$^{-1}$, and $\omega_p- 2(\omega-\Delta_2) = 2\omega_a$, with all other parameters as before. If we assume that the two optical modes and the untrapped atomic field are initially in the vacuum state, using equation (\ref{oposol}) the expectation value of the atomic density $\rho(x) = \langle \hat{\Psi}^{\dag}(x)\hat{\Psi}(x) \rangle$ is
\begin{eqnarray}
\rho(x) &=& \langle \hat{\Psi}^{\dag}(x)\hat{\Psi}(x) \rangle \\ \nonumber
&=& \int |F_{-}(x, k')|^2 dk' + |G_{1-}(x)|^2 + |G_{2-}(x)|^2 \nonumber
\end{eqnarray}
where $F_{-}(x, k')=\frac{1}{\sqrt{2 \pi}}\int f_{-}(k, k') e^{ikx}dk$, $G_{1-}(x)=\frac{1}{\sqrt{2 \pi}}\int g_{1-}(k) e^{ikx}dk $ and $G_{2-}(x)=\frac{1}{\sqrt{2 \pi}}\int g_{2-}(k) e^{ikx}dk $. Figure (\ref{fig:opodensity}) shows the atomic density versus time. Two atomic beams propagating in opposite directions are produced, with steady flux.
\begin{figure}
\includegraphics[width=\columnwidth, bb=0 0 600 600]{fig12.ps}
\caption{\label{fig:opodensity} Density of outcoupled atoms versus time. Outcoupling using light from a non-degenerate OPO produces two atomic beams in opposite directions, with steady flux. }
\end{figure}
To check whether there are correlations present in the two atomic beams, the relevant observable is the difference in the flux of the two beams. If the two beams are completely uncorrelated, then we would expect
\begin{equation}
V(\hat{J}(x_0) - \hat{J}(-x_0)) \geq 2V(\hat{J}(x_0)) \label{fluxsqueezecondition}
\end{equation}
where $x_0$ and $-x_0$ are points that lie in the rightward and leftward propagating beams respectively. Figure (\ref{fig:opovariance}) shows the variance of the flux difference $V(\hat{J}(x_0) - \hat{J}(-x_0))$ versus time, and shows that the fluctuations in the difference are approximately eight times smaller than for uncorrelated atoms.
\begin{figure}
\includegraphics[width=\columnwidth, bb=0 0 600 600]{fig13.ps}
\caption{\label{fig:opovariance} Variance in the flux difference. The fluctuations are eight times smaller than for uncorrelated atoms. }
\end{figure}
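A toy model makes the benchmark in Eq.~(\ref{fluxsqueezecondition}) concrete. This is not the field-theoretic calculation above, just a classical caricature: if atoms are emitted strictly in pairs, one into each beam, the count difference has zero variance, whereas two independent beams of the same mean rate give a difference variance of twice the single-beam variance.

```python
import numpy as np

# Caricature of pairwise vs independent emission into the two beams.
rng = np.random.default_rng(seed=0)
rate, samples = 50.0, 200_000

pairs = rng.poisson(rate, samples)              # n pairs -> n atoms in each beam
v_diff_correlated = np.var(pairs - pairs)       # identically zero

left = rng.poisson(rate, samples)               # two independent beams
right = rng.poisson(rate, samples)
v_diff_uncorrelated = np.var(right - left)      # ~ 2 * rate
```

In the OPO scheme the suppression is partial (roughly a factor of eight, Fig.~\ref{fig:opovariance}) rather than the perfect cancellation of this idealised caricature.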
We now investigate whether we can use this process to generate entanglement between the two atomic beams. We quantify our entanglement using the EPR criterion of Reid and Drummond \cite{Reid}, the requirement being that two conjugate variables on one of the beams can be inferred from measurements on the other beam to below the quantum limit. We define four quadratures:
\begin{eqnarray}
\hat{X}_{\pm} &=& \int(L_{\pm}^{*}(x)\hat{\Psi}(x) + L_{\pm}(x)\hat{\Psi}^{\dag}(x))dx \\
\hat{Y}_{\pm} &=& i\int(L_{\pm}^{*}(x)\hat{\Psi}(x) - L_{\pm}(x)\hat{\Psi}^{\dag}(x))dx
\end{eqnarray}
with
\begin{eqnarray}
L_{\pm}(x,t) &=& \frac{e^{i(k_0x-\omega_a t)}}{\sqrt{|x_1-x_2|}}, \quad \mbox{if } \pm x_1 > x > \pm x_2 \nonumber \\
&=& 0 \quad \mbox{otherwise}
\end{eqnarray}
The commutator of the conjugate quadratures gives us the uncertainty relation $V(\hat{X}_{\pm})V(\hat{Y}_{\pm}) \geq 1$, since $\int |L_{\pm}(x)|^2dx =1$. The beams are entangled under the EPR criterion if, by making measurements of quadratures of one beam (e.g. $\hat{X}_{+}$ and $\hat{Y}_{+}$), quadratures of the other beam ($\hat{X}_{-}$ and $\hat{Y}_{-}$) can be inferred to better than this quantum limit. Quantitatively, the requirement for entanglement is $V^{inf}(\hat{X}_{-})V^{inf}(\hat{Y}_{-}) < 1$, where $V^{inf}(\hat{X}_{\pm}) = V(\hat{X}_{\pm}) - \frac{(V(\hat{X}_{\pm}, \hat{Y}_{\mp}))^2}{V(\hat{Y}_{\mp})}$, $V^{inf}(\hat{Y}_{\pm}) = V(\hat{Y}_{\pm}) - \frac{(V(\hat{Y}_{\pm}, \hat{X}_{\mp}))^2}{V(\hat{X}_{\mp})}$ and $V(a,b) = \langle ab\rangle - \langle a\rangle\langle b\rangle$. We note here that the correlations present are between conjugate quadratures of each beam. Figure (\ref{fig:epr1}) shows the product of the inferred variances $V^{inf}(\hat{X}_{-})V^{inf}(\hat{Y}_{-})$ plotted against time. As the beams become more intense and more monochromatic, the product of the inferred variances dips well below the requirement for entanglement. The initial increase is due to the beams initially not approximating plane waves. This could be fixed by an appropriate choice of $L_{\pm}(x)$, to better match the mode shape of the output atom laser beams.
\begin{figure}
\includegraphics[width=\columnwidth, bb=0 0 600 600]{fig14.ps}
\caption{\label{fig:epr1} Product of the inferred variances $V^{inf}(\hat{X}_{-})V^{inf}(\hat{Y}_{-})$ versus time. As the system goes to steady state, the requirement for entanglement is satisfied.}
\end{figure}
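The scale of the effect can be checked against the textbook result for an ideal two-mode squeezed vacuum with squeezing parameter $r$, in a convention where the vacuum quadrature variance is 1 (a beam-local phase rotation maps this onto the convention above, in which the correlated pairs are conjugate quadratures of the two beams). There the inferred variance product is $1/\cosh^2(2r)$, so reaching three orders of magnitude below the classical limit requires $\cosh(2r)\approx 30$. The sketch below is this single-mode-pair algebra, not the paper's multimode calculation.

```python
import numpy as np

# Reid EPR product for an ideal two-mode squeezed vacuum (vacuum variance = 1):
# V(X) = cosh(2r), |cov| = sinh(2r), so V_inf = V - cov^2/V = 1/cosh(2r).
def inferred_variance_product(r):
    v = np.cosh(2.0 * r)
    c = np.sinh(2.0 * r)
    v_inf = v - c**2 / v
    return v_inf * v_inf

assert abs(inferred_variance_product(0.0) - 1.0) < 1e-12  # no squeezing: at the limit
assert inferred_variance_product(1.0) < 1.0               # EPR entangled

# Three orders of magnitude below the classical limit needs cosh(2r) ~ sqrt(1000):
r_needed = 0.5 * np.arccosh(np.sqrt(1000.0))
```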
In the long time limit, the product of the inferred variances is roughly three orders of magnitude below the classical limit, demonstrating that this system produces an almost pure EPR correlated state. Our model uses an ideal OPO, so the squeezing in the optical modes would grow without bound in the absence of the damping due to the atoms. In practice, the entanglement of the atomic beams will not exceed the optical entanglement that can be obtained from a real OPO. The limit to the entanglement between the atomic beams in this model is set by the finite momentum width of the condensate. As the EPR state is expected to be very pure, the dominant noise in an experiment may actually come from some of the effects we have ignored in our model because of their small effect on the dynamics. In particular, there may be a small reduction in fidelity due to the back action of the outcoupling on the condensate wavefunction, which we have ignored in this calculation.
\section{Conclusion}
We have modelled the dynamics of an atom laser produced by outcoupling from a Bose-Einstein condensate with squeezed light. We modelled the multimode dynamics of the output field and showed that a significant amount of squeezing can be transferred from an optical mode to a propagating atom laser beam. We also demonstrated that two-mode squeezing can be used to produce twin atom laser beams with continuous variable entanglement in amplitude and phase.
This research was supported by the Australian Research Council Centre of Excellence for Quantum Atom Optics. We would like to acknowledge useful discussions with M. K. Olsen, A. M. Lance and H. A. Bachor.
\section{Appendix}
$f_{+}(k,k',t)$, $f_{-}(k,k',t)$, $g_{1+}(k,t)$, $g_{1-}(k,t)$, $g_{2+}(k,t)$, $g_{2-}(k,t)$, $p_{1+}(t)$, $p_{1-}(t)$, $p_{2+}(t)$, $p_{2-}(t)$, $p_{3+}(k', t)$, $p_{3-}(k', t)$, $q_{1+}(t)$, $q_{1-}(t)$, $q_{2+}(t)$, $q_{2-}(t)$, $q_{3+}(k', t)$, $q_{3-}(k', t)$, must satisfy:
\begin{eqnarray} \label{opoprop}
i \dot{f}_{+}(k, k') &=& \omega_0(k)f_{+}(k, k') - \Omega_1(k)p_{3+}(k') \\ \nonumber
&-& \Omega_2(k)q_{3+}(k') \\ \nonumber
i \dot{f}_{-}(k, k') &=& \omega_0(k)f_{-}(k, k') - \Omega_1(k)p_{3-}(k') - \Omega_2(k)q_{3-}(k') \\ \nonumber
i \dot{g}_{1+}(k) &=& \omega_0(k)g_{1+}(k) - \Omega_1(k)p_{1+} - \Omega_2(k)q_{1+} \\ \nonumber
i \dot{g}_{1-}(k) &=& \omega_0(k)g_{1-}(k) - \Omega_1(k)p_{1-} - \Omega_2(k)q_{1-} \\ \nonumber
i \dot{g}_{2+}(k) &=& \omega_0(k)g_{2+}(k) - \Omega_1(k)p_{2+} - \Omega_2(k)q_{2+} \\ \nonumber
i \dot{g}_{2-}(k) &=& \omega_0(k)g_{2-}(k) - \Omega_1(k)p_{2-} - \Omega_2(k)q_{2-} \\ \nonumber
i\dot{p}_{1+} &=& \omega_a p_{1+} - \int\Omega_1^{*}(k)g_{1+}(k)dk +\chi_{p}q_{1-}^{*} \\ \nonumber
i\dot{p}_{1-} &=& \omega_a p_{1-} - \int\Omega_1^{*}(k)g_{1-}(k)dk +\chi_{p}q_{1+}^{*} \\ \nonumber
i\dot{p}_{2+} &=& \omega_a p_{2+} - \int\Omega_1^{*}(k)g_{2+}(k)dk +\chi_{p}q_{2-}^{*} \\ \nonumber
i\dot{p}_{2-} &=& \omega_a p_{2-} - \int\Omega_1^{*}(k)g_{2-}(k)dk +\chi_{p}q_{2+}^{*} \\ \nonumber
i\dot{p}_{3+}(k') &=& \omega_a p_{3+}(k') - \int\Omega_1^{*}(k)f_{+}(k, k')dk +\chi_{p}q_{3-}^{*}(k') \\ \nonumber
i\dot{p}_{3-}(k') &=& \omega_a p_{3-}(k') - \int\Omega_1^{*}(k)f_{-}(k, k')dk +\chi_{p}q_{3+}^{*}(k') \\ \nonumber
i\dot{q}_{1+} &=& \omega_a q_{1+} - \int\Omega_1^{*}(k)g_{1+}(k)dk +\chi_{p}p_{1-}^{*} \\ \nonumber
i\dot{q}_{1-} &=& \omega_a q_{1-} - \int\Omega_1^{*}(k)g_{1-}(k)dk +\chi_{p}p_{1+}^{*} \\ \nonumber
i\dot{q}_{2+} &=& \omega_a q_{2+} - \int\Omega_1^{*}(k)g_{2+}(k)dk +\chi_{p}p_{2-}^{*} \\ \nonumber
i\dot{q}_{2-} &=& \omega_a q_{2-} - \int\Omega_1^{*}(k)g_{2-}(k)dk +\chi_{p}p_{2+}^{*} \\ \nonumber
i\dot{q}_{3+}(k') &=& \omega_a q_{3+}(k') - \int\Omega_1^{*}(k)f_{+}(k, k')dk +\chi_{p}p_{3-}^{*}(k') \\ \nonumber
i\dot{q}_{3-}(k') &=& \omega_a q_{3-}(k') - \int\Omega_1^{*}(k)f_{-}(k, k')dk +\chi_{p}p_{3+}^{*}(k') \nonumber
\end{eqnarray}
with $\chi_{p} = \chi\beta e^{i(2(\omega -\Delta_2)-\omega_{p})t}$, and initial conditions $f_{+}(k, k', t=0) = \delta(k-k')$, $p_{1+}(t=0) = 1$, $q_{2+}(t=0) = 1$ with all other fields zero. From the solution of these equations, we can calculate any observable of the system.
\section{Introduction}
Characterising the different forms of correlations shared by the constituents of a composite quantum system is essential for the theoretical understanding and for the operational exploitation of the quantum system itself \cite{Horodecki2009,Modi2012}. Correlations which are not amenable to a classical description, in particular, exhibit a rich variety in mixed states of bipartite and multipartite quantum systems. Nowadays, the notion of quantum correlations refers not only to entanglement, but to more general forms of correlations which are conventionally identified with the quantum discord \cite{Ollivier2001, Henderson2001}, and capture for instance the necessary disturbance induced on quantum states by any local measurement (which is nonzero even in all separable states apart from so-called classical-quantum states). Correspondingly, the portion of correlations which are left in the state after a minimally disturbing local measurement can be identified with the classical correlations originally shared by the subsystems.
Unlike entanglement, for which a resource theory is well established \cite{Horodecki2009}, proposals to quantify quantum and classical correlations in this more general paradigm are still relatively scarce, and the mathematical requirements that any such proposal has to obey to be regarded as a valid measure are still to be completely formalised \cite{Modi2012}. Yet an interesting phenomenology associated with these correlations is being uncovered in different physical contexts. For example, from a foundational perspective the sudden transition from a decay to a plateau regime for classical correlations between a quantum system and its measurement apparatus has been interpreted as characterising the finite-time emergence of the pointer basis in the apparatus~\cite{Cornelio2012,Paula2013}. Moreover, quantum correlations between noninteracting qubits have been shown to dynamically revive despite decoherence thanks to memory effects of the local environment, independently of the quantum or classical nature of the environment~\cite{SaroD1,SaroD3,lofrancoreview,xulofranco2013NatComms}. Quantum correlations of ground states also appear to play an important role in the characterisation of exotic phases of quantum many-body systems~\cite{Jiang2012,Giampaolo2013,Marzolino2013}. More generally, from an operational viewpoint various forms of quantum correlations, including and beyond entanglement, can and typically do provide fundamental resources for quantum technologies~\cite{Horodecki2009,Modi2012}. Consequently, rigorously addressing the quantification of correlations is of paramount importance.
The notion of distance defined on the convex set of states of a quantum system paves the way for several geometric approaches to the quantification of correlations~\cite{Vedral1997,Modi2010,Dakic2010,LuoFu,Monras2011,Bellomo2012a,Nakano2013,tufodiscord,LQU,Roga2014}. We refer in particular to Refs.~\cite{Nakano2013,Roga2014} for a comparison among these approaches with respect to the quantification of quantum correlations. In this paper, we focus on bipartite systems and we follow the approach of Ref.~\cite{Modi2010}, according to which the minimum distance between a state $\rho$ and the set of states that do not possess a particular kind of correlations is a quantifier of that kind of correlations. Hence, the minimum distances between the state $\rho$ of a bipartite system and the sets of product, classical-quantum and separable states represent, respectively, the amount of total correlations, quantum correlations and entanglement of $\rho$.
Furthermore, the minimum distance between the set of closest classical-quantum states to $\rho$ and the set of product states represents the classical correlations of $\rho$.
This geometric approach to the quantification of correlations manifests several appealing features. First, it is unifying, thus allowing for a direct comparison among all the above mentioned notions of correlations \cite{Modi2010}. Second, it readily suggests generalisations to the multipartite setting~\cite{Blasone2008}.
In this paper we use specifically the Bures distance on the set of states to define geometric quantifiers of correlations. The Bures distance is defined as
\begin{equation}
D_{Bu}\left(\rho,\sigma \right) \equiv \sqrt{2\left(1-\sqrt{F(\rho,\sigma)}\right)},
\end{equation}
where $\rho$ and $\sigma$ are two arbitrary states while $F(\rho,\sigma)$ is the Uhlmann fidelity \cite{uhlmann}
\begin{equation}\label{uberman}
F(\rho,\sigma)\equiv{\left[\mathrm{Tr}\left(\sqrt{\sqrt{\rho}\sigma\sqrt{\rho}}\,\right) \right]}^2.
\end{equation}
The reason for choosing this distance instead of others stems from its quite peculiar, but desirable, properties. The Bures distance is at the same time contractive under completely positive trace-preserving maps and locally Riemannian, and its metric coincides with the quantum Fisher information~\cite{Braunstein1994,Petz1996,Sommers2003,Facchi2010}, thus playing a crucial role in high-precision interferometry and quantum metrology. Moreover, the minimum Bures distance between a state $\rho$ and the set of classical-quantum states is simply related to the maximal success probability in the ambiguous quantum state discrimination of a family of states and prior probabilities depending on $\rho$ \cite{Spehner2013,Spehner2014}. The task of minimal error quantum state discrimination plays a fundamental role both in quantum communication and cryptography and has been realised experimentally using polarised light \cite{Huttner1996,Mohseni2004}. By contrast, the Hilbert-Schmidt distance is locally Riemannian but not contractive~\cite{Ozawa2000,Piani2012}, the trace distance is contractive but not locally Riemannian~\cite{Ruskai1994}, and the relative entropy, although widely used in information theory, is technically not even a proper distance, as it is not symmetric \cite{Modi2010}.
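For concreteness, the two defining formulae are easy to evaluate numerically for qubits. The sketch below (plain NumPy, with the matrix square root done by Hermitian eigendecomposition) checks the textbook values $F(|0\rangle\langle 0|, |1\rangle\langle 1|)=0$, giving the maximal Bures distance $\sqrt{2}$, and $F(|0\rangle\langle 0|, \rho_{mix})=1/2$ for the maximally mixed state, together with the symmetry of the fidelity.

```python
import numpy as np

# Uhlmann fidelity F(rho, sigma) = (Tr sqrt( sqrt(rho) sigma sqrt(rho) ))^2
# and Bures distance D = sqrt(2 (1 - sqrt(F))), for Hermitian PSD matrices.
def psd_sqrt(rho):
    vals, vecs = np.linalg.eigh(rho)
    vals = np.clip(vals, 0.0, None)          # guard tiny negative round-off
    return (vecs * np.sqrt(vals)) @ vecs.conj().T

def fidelity(rho, sigma):
    s = psd_sqrt(rho)
    return np.real(np.trace(psd_sqrt(s @ sigma @ s))) ** 2

def bures_distance(rho, sigma):
    f = min(fidelity(rho, sigma), 1.0)       # guard round-off above 1
    return np.sqrt(2.0 * (1.0 - np.sqrt(f)))

rho0 = np.array([[1.0, 0.0], [0.0, 0.0]], dtype=complex)   # |0><0|
rho1 = np.array([[0.0, 0.0], [0.0, 1.0]], dtype=complex)   # |1><1|
mixed = 0.5 * np.eye(2, dtype=complex)                     # maximally mixed
```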
Here we derive closed formulae for classical and total correlations of Bell-diagonal states of two qubits according to the Bures distance. Together with the known corresponding formulae for entanglement~\cite{Streltsov2010} and discord-type quantum correlations~\cite{Aaronson2013a,Spehner2013,Spehner2014}, these allow us to gain a complete and unifying view of Bures correlations for Bell-diagonal states. We then provide two applications of these results. We first report the explicit expressions of the Bures correlations for two special subclasses of Bell-diagonal states, namely Werner states and rank-2 Bell-diagonal states. Finally, we consider a dynamical system made of two independent qubits locally interacting with a bosonic non-dissipative channel and show that both quantum and classical correlations measured by Bures distance can alternatively freeze during the evolution, joining the ranks of other faithful correlation quantifiers \cite{Mazzola,Aaronson2013a,Paula2013}. It is worthwhile to note that the freezing analysis was addressed in Ref.~\cite{Aaronson2013} with similar methods for the trace distance, but with a different definition of classical correlations.
The paper is organised as follows. In Section \ref{sec:quantumcorrelations} we review some known results concerning Bures quantum correlations and entanglement of Bell-diagonal states. In Sections \ref{sec:classicalcorrelations} and \ref{sec:totalcorrelations} we provide, respectively, the closed formulae for Bures classical and total correlations of Bell-diagonal states. In Section \ref{sec:examples} we compute the correlations of two particular classes of Bell-diagonal states, i.e., Werner states and rank-2 Bell-diagonal states. In Section \ref{sec:dynamics} we analyse the dynamics of correlations between two noninteracting qubits initially prepared in a Bell-diagonal state and subject to identical local pure dephasing channels. We conclude in Section \ref{sec:conclusions} with a summary and outlook.
\section{Quantum Correlations}\label{sec:quantumcorrelations}
Quantum correlations stem from two peculiar ingredients of quantum mechanics, the superposition principle and the tensor product structure of the Hilbert space associated to a composite quantum system. They are completely characterised by entanglement in the case of pure states, whereas in the case of mixed states entanglement constitutes only a part of the quantumness of correlations~\cite{Ollivier2001, Henderson2001}. As a result, for any pair of comparable quantifiers of general quantum correlations and entanglement, the quantum correlations of a state should intuitively always be greater than or equal to the corresponding entanglement, with equality if the state is pure \cite{Interplay,PianiAdesso}. This is nicely captured by the aforementioned geometric approach. Specifically, in this paper, the quantum correlations of a state $\rho$ are quantified by the minimum Bures distance of $\rho$ to the set of classical-quantum states, namely
\begin{equation}
Q_{Bu}\left( \rho\right) \equiv \inf_{\chi\in\mathcal{CQ}} D_{Bu}\left(\rho,\chi \right) = D_{Bu}\left(\rho,\chi_\rho \right),
\end{equation}
where $\mathcal{CQ}$ is the set of classical-quantum states, i.e. states of the form $\chi=\sum_i p_i |i^A\rangle\langle i^A|\otimes\rho_i^B$ with ${\left\lbrace p_i \right\rbrace}$ being a probability vector, $\left\lbrace|i^A\rangle \right\rbrace $ an orthonormal basis of qubit $A$ and $\rho_i^B$ any state of qubit $B$, while $\chi_\rho$ is any of the closest classical-quantum states to $\rho$. The entanglement of $\rho$ is measured by the minimum Bures distance of $\rho$ to the set of separable states, namely
\begin{equation}
E_{Bu}\left( \rho\right) \equiv \inf_{\sigma\in\mathcal{S}} D_{Bu}\left(\rho,\sigma \right) = D_{Bu}\left(\rho,\sigma_\rho \right),
\end{equation}
where $\mathcal{S}$ is the set of separable states, i.e. states of the form $\sigma=\sum_i p_i \rho_i^A\otimes\rho_i^B$ with ${\left\lbrace p_i \right\rbrace}$ being a probability vector, $\rho_i^A$ and $\rho_i^B$ any state of qubit $A$ and $B$, respectively, while $\sigma_\rho$ is any of the nearest separable states to $\rho$. As the set of classical-quantum states is contained in the set of separable states, we immediately have that $Q_{Bu}\left( \rho\right)\geq E_{Bu}\left( \rho\right)$ for every $\rho$.
We shall restrict ourselves to the relevant but structurally simple class of Bell-diagonal (BD) states $\rho$ of two qubits, which are diagonal in the ``magic basis'' of the four maximally entangled Bell states. As a result, BD states are represented in the standard computational basis by the following matrix
\begin{equation}
\rho = \frac{1}{4}\left(\mathbb{I}^A\otimes\mathbb{I}^B + \sum_{i=1}^{3} c_i \sigma_i^A\otimes\sigma_i^B \right),
\end{equation}
where $\mathbb{I}$ and $\sigma_i$, $i=1, 2, 3$, are the identity and the Pauli matrices, respectively. The coefficients $c_i=\mathrm{Tr}\left[\rho\left( \sigma_i^A\otimes\sigma_i^B\right) \right] $ are the only correlation matrix elements of a BD state $\rho$ that can be different from zero, in terms of which the eigenvalues of $\rho$ are expressed as follows,
\begin{eqnarray}
\alpha &=& \frac{1}{4}\left(1+c_1-c_2+c_3 \right), \\
\beta &=& \frac{1}{4}\left(1-c_1+c_2+c_3 \right), \nonumber\\
\gamma &=& \frac{1}{4}\left(1+c_1+c_2-c_3 \right), \nonumber\\
\delta &=& \frac{1}{4}\left(1-c_1-c_2-c_3 \right). \nonumber
\end{eqnarray}
BD states are also called states with maximally mixed marginals, due to the fact that their reduced density matrices $\rho_A=\mathrm{Tr}_B\left( \rho\right) $ and $\rho_B=\mathrm{Tr}_A\left( \rho\right)$ are both equal to the maximally mixed state of a qubit, i.e., $\rho_A=\frac{1}{2}\mathbb{I}^A$ and $\rho_B=\frac{1}{2}\mathbb{I}^B$. The class of BD states is particularly interesting: for instance, they include the well-known Bell states and Werner states \cite{Horodecki2009} and constitute a resource for entanglement activation and distribution \cite{PianiAdesso,Sciarrino,kay2012,Fedrizzi2013}.
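As a concrete numerical illustration (not part of the analytical treatment), the parametrisation above can be verified directly. The following Python sketch, with helper names of our own choosing, builds a BD state from $(c_1,c_2,c_3)$, checks its spectrum against the eigenvalues $\alpha,\beta,\gamma,\delta$, and confirms that both marginals are maximally mixed.

```python
import numpy as np

# Minimal sketch (helper names are ours): build a Bell-diagonal state
# from its correlation coefficients and verify its basic properties.

I2 = np.eye(2, dtype=complex)
SIGMA = [np.array([[0, 1], [1, 0]], dtype=complex),     # sigma_1
         np.array([[0, -1j], [1j, 0]], dtype=complex),  # sigma_2
         np.array([[1, 0], [0, -1]], dtype=complex)]    # sigma_3

def bd_state(c):
    """Density matrix of the BD state with correlation coefficients c."""
    rho = np.kron(I2, I2)
    for ci, si in zip(c, SIGMA):
        rho = rho + ci * np.kron(si, si)
    return rho / 4

def bd_eigenvalues(c):
    """(alpha, beta, gamma, delta) as functions of (c1, c2, c3)."""
    c1, c2, c3 = c
    return ((1 + c1 - c2 + c3) / 4, (1 - c1 + c2 + c3) / 4,
            (1 + c1 + c2 - c3) / 4, (1 - c1 - c2 - c3) / 4)

def marginals(rho):
    """Reduced states Tr_B(rho) and Tr_A(rho) of a two-qubit state."""
    r = rho.reshape(2, 2, 2, 2)
    return np.trace(r, axis1=1, axis2=3), np.trace(r, axis1=0, axis2=2)

c = (0.3, 0.2, 0.1)
rho = bd_state(c)
# the spectrum coincides with {alpha, beta, gamma, delta} ...
assert np.allclose(np.sort(np.linalg.eigvalsh(rho)),
                   np.sort(bd_eigenvalues(c)))
# ... and both marginals are maximally mixed
rho_A, rho_B = marginals(rho)
assert np.allclose(rho_A, I2 / 2) and np.allclose(rho_B, I2 / 2)
```

The same helpers are reused, suitably redefined, in the numerical checks below.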
In Refs.~\cite{Aaronson2013a,Spehner2013,Spehner2014} it was proven that, according to the Bures distance, one of the closest classical-quantum states to a BD state $\rho$ is always a BD classical-quantum state of the form
\begin{equation}\label{Eq:BDCQState}
\chi_\rho^{BD} = \frac{1}{4}\left(\mathbb{I}^A\otimes\mathbb{I}^B + s_k \sigma_k^A\otimes\sigma_k^B \right),
\end{equation}
where the index $k$ is such that $\Lambda_k =\Lambda_{max}\equiv\max\left\lbrace\Lambda_1,\Lambda_2,\Lambda_3\right\rbrace$, with
\begin{eqnarray}\label{Lambdas}
\Lambda_1 &\equiv& \sqrt{\alpha\gamma} + \sqrt{\beta\delta}, \nonumber\\
\Lambda_2 &\equiv& \sqrt{\alpha\delta} + \sqrt{\beta\gamma}, \\
\Lambda_3 &\equiv& \sqrt{\alpha\beta} + \sqrt{\gamma\delta},\nonumber
\end{eqnarray}
and
\begin{eqnarray}\label{Eq:theparameters}
s_1 &\equiv& \frac{\alpha+\gamma-\beta-\delta+2\left(\sqrt{\alpha\gamma} - \sqrt{\beta\delta} \right) }{1+2\Lambda_{1}}, \nonumber\\
s_2 &\equiv& \frac{\beta+\gamma-\alpha-\delta+2\left(\sqrt{\beta\gamma}-\sqrt{\alpha\delta} \right) }{1+2\Lambda_{2}}, \\
s_3 &\equiv& \frac{\alpha+\beta-\gamma-\delta+2\left(\sqrt{\alpha\beta} - \sqrt{\gamma\delta} \right) }{1+2\Lambda_{3}}.\nonumber
\end{eqnarray}
As a result, the closed expression for the quantum correlations of an arbitrary BD state, as quantified by the Bures distance, is given by
\begin{equation}\label{Buresdiscord}
Q_{Bu}\left( \rho\right) = \sqrt{2\left(1-\sqrt{ F(\rho,\chi_\rho^{BD})}\right)},
\end{equation}
where
\begin{equation}
F(\rho,\chi_\rho^{BD})=\frac{1}{2}\left(1+2 \Lambda_{max} \right) .
\end{equation}
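The closed formula in Eq.~(\ref{Buresdiscord}) is straightforward to evaluate numerically. The following Python sketch (function names are ours and purely illustrative) computes $Q_{Bu}$ directly from $(c_1,c_2,c_3)$ and checks two limiting cases: a classical-quantum BD state has zero quantum correlations, while a maximally entangled BD state attains the value $\sqrt{2-\sqrt{2}}$.

```python
import numpy as np

# Sketch (our own function names) of the closed formula for the
# Bures quantum correlations of a Bell-diagonal state.

def bd_eigenvalues(c):
    c1, c2, c3 = c
    return ((1 + c1 - c2 + c3) / 4, (1 - c1 + c2 + c3) / 4,
            (1 + c1 + c2 - c3) / 4, (1 - c1 - c2 - c3) / 4)

def lambdas(c):
    a, b, g, d = bd_eigenvalues(c)
    return (np.sqrt(a * g) + np.sqrt(b * d),   # Lambda_1
            np.sqrt(a * d) + np.sqrt(b * g),   # Lambda_2
            np.sqrt(a * b) + np.sqrt(g * d))   # Lambda_3

def bures_quantum_correlations(c):
    fid = (1 + 2 * max(lambdas(c))) / 2        # fidelity to the closest CQ state
    return np.sqrt(2 * (1 - np.sqrt(fid)))

# a classical-quantum BD state (c2 = c3 = 0) has zero discord
assert abs(bures_quantum_correlations((0.5, 0.0, 0.0))) < 1e-12
# a maximally entangled BD state attains sqrt(2 - sqrt(2))
assert abs(bures_quantum_correlations((1.0, -1.0, 1.0))
           - np.sqrt(2 - np.sqrt(2))) < 1e-12
```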
We now make some important remarks. The index $k$ characterising the state $\chi_\rho^{BD}$ in (\ref{Eq:BDCQState}) is such that $|c_k|=\max\{|c_1|,|c_2|,|c_3|\}$. This can be easily proven by considering the expressions of the correlation matrix coefficients $c_i$ in terms of the eigenvalues of the BD state $\rho$ and noting that the condition $c_k^2=\max\{c_1^2,c_2^2,c_3^2\}$ is equivalent to the condition $\Lambda_k^2=\max\{\Lambda_1^2,\Lambda_2^2,\Lambda_3^2\}$.
The closest classical-quantum state $\chi_{\rho}$ to a BD state $\rho$ is unique if, and only if, $\rho$ is within the interior of the tetrahedron of BD states ($\alpha,\beta,\gamma,\delta > 0$) and the index $k$ such that $|c_k|=\max\{|c_1|,|c_2|,|c_3|\}$ is unique. Otherwise, there are infinitely many closest classical-quantum states to a BD state $\rho$ \cite{Spehner2014}.
We finally note that the Bures quantum correlations of $\rho$ as captured by Eq.~(\ref{Buresdiscord}) are different (conceptually and quantitatively) from the ``discord of response'' of $\rho$, where quantumness of correlations is alternatively defined in terms of the minimum (Bures) distance between $\rho$ and the set of states obtained by rotating $\rho$ via local root-of-unity unitary operations on one subsystem only \cite{Roga2014}.
The closed expression for the entanglement of an arbitrary two-qubit state $\rho$, as quantified by the Bures distance, was obtained in terms of the concurrence \cite{Horodecki2009} $\mathrm{Con}(\rho)$ of $\rho$ and is given by \cite{Streltsov2010}
\begin{equation}
E_{Bu}\left( \rho\right) = \sqrt{2\left(1-\sqrt{F(\rho,\sigma_\rho)}\right)},
\end{equation}
where
\begin{equation}
F(\rho,\sigma_\rho) =\frac{1}{2} \left(1+\sqrt{1-\mathrm{Con}^2(\rho)} \right).
\end{equation}
In the case of a BD state $\rho$, the concurrence specialises to
\begin{equation}
\mathrm{Con}(\rho)=\max\left\lbrace 0, \lambda_1 -\lambda_2 - \lambda_3 - \lambda_4 \right\rbrace
\end{equation}
with $\lambda_1\geq\lambda_2\geq \lambda_3\geq\lambda_4$ being the eigenvalues $\alpha,\beta,\gamma,\delta$ in non-increasing order.
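A short Python sketch of the entanglement formula (again with our own function names) makes the comparison with $Q_{Bu}$ immediate: a separable BD state has zero Bures entanglement, while a Bell state attains the same maximal value $\sqrt{2-\sqrt{2}}$ as its quantum correlations.

```python
import numpy as np

# Sketch (our own function names) of the Bures entanglement of a
# Bell-diagonal state via its concurrence.

def bd_eigenvalues(c):
    c1, c2, c3 = c
    return ((1 + c1 - c2 + c3) / 4, (1 - c1 + c2 + c3) / 4,
            (1 + c1 + c2 - c3) / 4, (1 - c1 - c2 - c3) / 4)

def concurrence_bd(c):
    lam = sorted(bd_eigenvalues(c), reverse=True)
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

def bures_entanglement(c):
    fid = (1 + np.sqrt(1 - concurrence_bd(c) ** 2)) / 2
    return np.sqrt(2 * (1 - np.sqrt(fid)))

# a separable BD state has zero entanglement ...
assert bures_entanglement((0.3, 0.2, 0.1)) == 0.0
# ... while a Bell state attains the two-qubit maximum sqrt(2 - sqrt(2))
assert abs(bures_entanglement((1.0, -1.0, 1.0))
           - np.sqrt(2 - np.sqrt(2))) < 1e-12
```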
\section{Classical Correlations}\label{sec:classicalcorrelations}
The classical correlations of a state $\rho$ can be quantified as follows. Given the set of all the closest classical-quantum states to $\rho$, called $\mathcal{CCQ}_{\rho}$, we define the classical correlations of $\rho$ to be
\begin{equation}\label{ClassicalCorrelationsDefinition}
C_{Bu}\left( \rho\right) \equiv \inf_{\chi_{\rho} \in \mathcal{CCQ}_{\rho}} \inf_{\pi\in\mathcal{P}} D_{Bu}\left(\chi_\rho,\pi \right) = \inf_{\chi_{\rho} \in \mathcal{CCQ}_{\rho}} D_{Bu}\left(\chi_\rho,\pi_{\chi_\rho}\right),
\end{equation}
where $\mathcal{P}$ is the set of product states, i.e.~states of the form $\pi=\rho^A\otimes\rho^B$ with $\rho^A$ ($\rho^B$) being an arbitrary state of qubit $A$ ($B$) while $\pi_{\chi_\rho}$ is any of the closest product states to $\chi_\rho$. Notice that the definition in Eq.~(\ref{ClassicalCorrelationsDefinition}) represents an important improvement over previous attempts to quantify classical correlations geometrically \cite{Modi2010,Aaronson2013,PaulaEPL}. In fact, without the inclusion of the additional minimisation over all the closest classical-quantum states to $\rho$, a measure of classical correlations might be ill-defined, as the distances between each closest classical-quantum state and their respective closest product states can generally differ. This issue has been very recently highlighted in an independent work \cite{Sarandy00}.
As we have already mentioned, if $\rho$ is an arbitrary BD state then, within $\mathcal{CCQ}_{\rho}$, there always exists a BD classical-quantum state $\chi_{\rho}^{BD}$ of the form of Eq.~(\ref{Eq:BDCQState}). In the following we shall prove that, for any BD state $\rho$, the BD state $\chi_{\rho}^{BD}$ achieves the infimum over $\mathcal{CCQ}_{\rho}$ in Eq.~(\ref{ClassicalCorrelationsDefinition}) and that one of the product states $\pi_{\chi_\rho^{BD}}$ nearest to the BD classical-quantum state $\chi_\rho^{BD}$ is the tensor product of the marginals of a BD state $\rho$, i.e.
\begin{equation}
\pi_{\chi_\rho^{BD}} = \frac{1}{4}\mathbb{I}^A\otimes\mathbb{I}^B.
\end{equation}
Thus, the Bures classical correlations of any BD state $\rho$ are quantified by
\begin{equation}\label{BuresClassCorr}
C_{Bu}\left( \rho\right) = \sqrt{2\left(1-\sqrt{ F(\chi_\rho^{BD},\pi_{\chi_\rho^{BD}})}\right)},
\end{equation}
with
\begin{equation}
F(\chi_\rho^{BD},\pi_{\chi_\rho^{BD}})=\frac{1+2(\Lambda_1+\Lambda_2+\Lambda_3) }{2(1+2 \Lambda_{max})},
\end{equation}
where the $\Lambda_i$ ($i=1,2,3$) are defined in Eq.~(\ref{Lambdas}).
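Before turning to the proof, we note that Eq.~(\ref{BuresClassCorr}) is as easy to evaluate as the discord formula. A Python sketch (helper names are ours) follows; the checks use the fact that for a Bell state the classical correlations also equal $\sqrt{2-\sqrt{2}}$.

```python
import numpy as np

# Sketch (our own function names) of the closed formula for the
# Bures classical correlations of a Bell-diagonal state.

def bd_eigenvalues(c):
    c1, c2, c3 = c
    return ((1 + c1 - c2 + c3) / 4, (1 - c1 + c2 + c3) / 4,
            (1 + c1 + c2 - c3) / 4, (1 - c1 - c2 - c3) / 4)

def lambdas(c):
    a, b, g, d = bd_eigenvalues(c)
    return (np.sqrt(a * g) + np.sqrt(b * d),
            np.sqrt(a * d) + np.sqrt(b * g),
            np.sqrt(a * b) + np.sqrt(g * d))

def bures_classical_correlations(c):
    L = lambdas(c)
    fid = (1 + 2 * sum(L)) / (2 * (1 + 2 * max(L)))
    return np.sqrt(2 * (1 - np.sqrt(fid)))

# for a Bell state, classical correlations equal sqrt(2 - sqrt(2))
assert abs(bures_classical_correlations((1.0, -1.0, 1.0))
           - np.sqrt(2 - np.sqrt(2))) < 1e-12
# a zero-discord example with purely classical correlations
assert abs(bures_classical_correlations((0.5, 0.0, 0.0)) - 0.2610524) < 1e-6
```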
We now prove the announced result. In \cite{Spehner2014} the authors presented an explicit construction of all the closest classical-quantum states $\chi_\rho$ to any BD state $\rho$. In order to express $\chi_\rho$ in a general mathematical form we need to introduce some notations. Let $p_0=\delta$, $p_1=\beta$, $p_2=\alpha$, $p_3=\gamma$ be the eigenvalues of $\rho$ and $k$ be any index such that $|c_k|=\max\{|c_1|,|c_2|,|c_3|\}$. Let us finally introduce the orthonormal product basis $\{|\alpha_i,\beta_j\rangle=|\alpha_i\rangle\otimes|\beta_j\rangle\}_{i,j=0}^1$ of $\mathbb{C}^2\otimes\mathbb{C}^2$ defined as follows \cite{Spehner2014},
\begin{enumerate}
\item{if $k$ is unique, then $|\alpha_i\rangle=|\beta_i\rangle$ are the eigenvectors of $\sigma_k$};
\item{if $k$ can take two distinct values, then $k$, $|\alpha_i\rangle$ and $|\beta_i\rangle$ are defined by:
\begin{equation}
k=\cases{
1 & if $c_1=\pm c_2, |c_1|>|c_3|$, \\
3 & if $c_1=\pm c_3, |c_1|>|c_2|$,\\
3 & if $c_2=\pm c_3, |c_2|>|c_1|$,}
\end{equation}
\begin{equation}
|\alpha_i\rangle=\cases{
e^{-i\frac{\phi}{2}\sigma_3}\frac{|0\rangle+(-1)^i|1\rangle}{\sqrt{2}} & if $c_1=\pm c_2, |c_1|>|c_3|$, \\
e^{-i\frac{\theta}{2}\sigma_2}|i\rangle & if $c_1=\pm c_3, |c_1|>|c_2|$,\\
e^{i\frac{\theta}{2}\sigma_1}|i\rangle & if $c_2=\pm c_3, |c_2|>|c_1|$,}
\end{equation}
and
\begin{equation}
|\beta_i\rangle=\cases{
e^{\mp i\frac{\phi}{2}\sigma_3}\frac{|0\rangle+(-1)^i|1\rangle}{\sqrt{2}} & if $c_1=\pm c_2, |c_1|>|c_3|$, \\
e^{\mp i\frac{\theta}{2}\sigma_2}|i\rangle & if $c_1=\pm c_3, |c_1|>|c_2|$,\\
e^{\pm i\frac{\theta}{2}\sigma_1}|i\rangle & if $c_2=\pm c_3, |c_2|>|c_1|$;}
\end{equation}
}
where $\phi\in[0,2\pi[$ and $\theta\in[0,2\pi[$ are arbitrary;
\item{if $k$ can take all three different values, i.e. $c_1=\epsilon_2 c_2 = \epsilon_3 c_3$ with $\epsilon_{2,3}\in\{-1,1\}$, then we define
\begin{eqnarray}
k=3, \\
|\alpha_i\rangle=e^{-i\frac{\phi}{2}\sigma_3}e^{-i\frac{\theta}{2}\sigma_2}|i\rangle, \\
|\beta_i\rangle=e^{-i\epsilon_2\frac{\phi}{2}\sigma_3}e^{-i\epsilon_3\frac{\theta}{2}\sigma_2}|i\rangle ;
\end{eqnarray}
}
\end{enumerate}
where $\phi\in[0,2\pi[$ and $\theta\in[0,2\pi[$ are arbitrary.
Now we are ready to write down the general form of any closest classical-quantum states $\chi_\rho$ to an arbitrary BD state $\rho$ \cite{Spehner2014}:
\begin{enumerate}
\item{\label{enum:probcond1} if $p_0p_k=0$ and $p_i p_j>0$ for $i\neq j$, $i\neq k$, $j\neq k$, then
\begin{eqnarray}\label{Eq:generalformofchirhoprobcond1}
\chi_\rho(r)=\frac{1+s_k}{4}\left[|\alpha_0,\beta_0\rangle\langle\alpha_0,\beta_0|+ |\alpha_1,\beta_1\rangle\langle\alpha_1,\beta_1| \right] \\ \nonumber
+ \frac{1-s_k}{4}\left[(1+r)|\alpha_0,\beta_1\rangle\langle\alpha_0,\beta_1|+(1-r)|\alpha_1,\beta_0\rangle\langle\alpha_1,\beta_0| \right],
\end{eqnarray}
where $r$ is a parameter which can take any value in the interval $r\in[-1,1]$ ;}
\item{\label{enum:probcond2} if $p_0p_k>0$ and $p_i p_j=0$ for $i\neq j$, $i\neq k$, $j\neq k$, then
\begin{eqnarray}\label{Eq:generalformofchirhoprobcond2}
\chi_\rho(r)=\frac{1+s_k}{4}\left[(1+r)|\alpha_0,\beta_0\rangle\langle\alpha_0,\beta_0|+ (1-r)|\alpha_1,\beta_1\rangle\langle\alpha_1,\beta_1| \right] \\ \nonumber
+ \frac{1-s_k}{4}\left[|\alpha_0,\beta_1\rangle\langle\alpha_0,\beta_1|+|\alpha_1,\beta_0\rangle\langle\alpha_1,\beta_0| \right],
\end{eqnarray}
where $r$ is a parameter which can take any value in the interval $r\in[-1,1]$ ;}
\item{\label{enum:probcond3} if $p_0p_1 p_2 p_3>0$, then
\begin{eqnarray}\label{Eq:generalformofchirhoprobcond3}
\chi_\rho=\frac{1+s_k}{4}\left[|\alpha_0,\beta_0\rangle\langle\alpha_0,\beta_0|+ |\alpha_1,\beta_1\rangle\langle\alpha_1,\beta_1| \right] \\ \nonumber
+ \frac{1-s_k}{4}\left[|\alpha_0,\beta_1\rangle\langle\alpha_0,\beta_1|+|\alpha_1,\beta_0\rangle\langle\alpha_1,\beta_0| \right];
\end{eqnarray}}
\end{enumerate}
where $s_k$ is given by Eq. (\ref{Eq:theparameters}). Also, in the following it will be useful to note that $p_0p_k=0$ and $p_i p_j>0$ for $i\neq j$, $i\neq k$, $j\neq k$, imply $s_k\in[\frac{3}{5},1]$, whereas $p_0p_k>0$ and $p_i p_j=0$ for $i\neq j$, $i\neq k$, $j\neq k$ imply $s_k\in[-1,-\frac{3}{5}]$, as can be easily seen from Eq. (\ref{Eq:theparameters}).
Now, for the sake of simplicity, let us focus on BD states $\rho$ such that $k$ is unique and equal to $3$ and such that $p_0p_3=0$ and $p_1p_2>0$, which from now on will be referred to as reference BD states. Later, we will generalise the analysis valid for these particular reference BD states to a general BD state. In the case of the reference BD states we have that the set of all closest classical-quantum states to $\rho$ is given by the following $1$-parameter family of states:
\begin{eqnarray}\label{Eq:CCQstatestoparticularBDstates}
\chi_\rho(r)=\frac{1+s_3}{4}\left[|00\rangle\langle 00|+ |11\rangle\langle11| \right] \\ \nonumber
+ \frac{1-s_3}{4}\left[(1+r)|01\rangle\langle 01|+(1-r)|10\rangle\langle10| \right],
\end{eqnarray}
where $r\in[-1,1]$ and $s_3$ is given by Eq.~(\ref{Eq:theparameters}). In particular, for $r=0$ we get the BD classical-quantum state $\chi_{\rho}^{BD}$ of the form of Eq.~(\ref{Eq:BDCQState}). Also, as we have already mentioned, due to the conditions $k=3$, $p_0p_3=0$ and $p_1p_2>0$, we have necessarily $s_3\in[\frac{3}{5},1]$.
Let us consider a general product state of two qubits, $\pi=\rho^A\otimes\rho^B$, where $\rho^A$ and $\rho^B$ are any two states of qubits $A$ and $B$, respectively. Due to the Bloch representations of $\rho^A$ and $\rho^B$, i.e. $\rho^A=\frac{1}{2}\left(\mathbb{I}^A + \sum_{i=1}^3 a_i \sigma_i^A \right) $ and $\rho^B=\frac{1}{2}\left(\mathbb{I}^B + \sum_{i=1}^3 b_i \sigma_i^B \right)$ with $\left|\vec{a}\right|\leqslant 1,\left|\vec{b}\right|\leqslant 1$ , we have that $\pi$ is represented in the standard computational basis by the following matrix
\begin{equation}\label{statoprodotto}\pi = \frac{1}{4}\left(\mathbb{I}^A\otimes\mathbb{I}^B + \sum_{i=1}^3 a_i \sigma_i^A\otimes\mathbb{I}^B + \sum_{i=1}^3 b_i\mathbb{I}^A\otimes\sigma_i^B + \sum_{i,j=1}^3 a_i b_j \sigma_i^A\otimes\sigma_j^B \right ).\end{equation}
A general product state $\pi$ of two qubits is clearly characterised by the Bloch vectors of each qubit, i.e. $\vec{a}\equiv\left\lbrace a_1,a_2,a_3 \right\rbrace $ and $\vec{b}\equiv\left\lbrace b_1,b_2,b_3 \right\rbrace $. We now take into account the product states $\pi'$ and $\pi_0$, where $\pi'=U_3 \pi U_3^\dagger$, with $U_3=\sigma_3^A\otimes\mathbb{I}^B$, is characterised by the Bloch vectors $\vec{a}'=\{-a_1,-a_2,a_3\}$ and $\vec{b}'=\vec{b}=\{b_1,b_2,b_3\}$, whereas $\pi_0\equiv \frac{1}{2}\left(\pi + \pi' \right) $ is characterised by the Bloch vectors $\vec{a}_0=\{0,0,a_3\}$ and $\vec{b}_0=\vec{b}=\{b_1,b_2,b_3\}$. Then, the following holds
\begin{equation}\label{Eq:FidelityEquality}
F\left(\chi_\rho,\pi \right)=F\left(\chi_\rho,\pi' \right),
\end{equation}
where $\chi_\rho$ is any closest classical-quantum state of the form (\ref{Eq:CCQstatestoparticularBDstates}). To prove the above equality it suffices to consider the invariance of the fidelity under general unitaries and the invariance of any $\chi_\rho$ under the action of the particular local unitary $U_3$.
It is known that the fidelity is a concave function on the convex set of states, i.e.
\begin{equation}\label{eq:concavityfidelity}
F(\rho,p\sigma_1 + (1-p)\sigma_2)\geq pF(\rho,\sigma_1) + (1-p)F(\rho,\sigma_2),\ \ \forall p\in\left[0,1 \right]
\end{equation}
for any states $\rho$, $\sigma_1$ and $\sigma_2$. As a result, by substituting $p=\frac{1}{2}$, $\rho=\chi_\rho$, $\sigma_1 = \pi$ and $\sigma_{2} =\pi'$ into (\ref{eq:concavityfidelity}), one obtains
\begin{equation}
F\left(\chi_\rho,\pi_0 \right)\geq F\left(\chi_\rho,\pi \right).
\end{equation}
By symmetry, a similar result holds also by flipping the first two components of the Bloch vector $\vec{b}$.
As a result, in order to maximise the fidelity between any closest classical-quantum state $\chi_\rho$ to a reference BD state $\rho$ and any product state $\pi$, the Bloch vectors that characterise $\pi$ must necessarily be such that $a_i=a \delta_{i3}$ and $b_i=b \delta_{i3}$.
The square root of the fidelity between any $\chi_\rho$ of the form (\ref{Eq:CCQstatestoparticularBDstates}) and any product state $\pi$ with $a_i=a \delta_{i3}$ and $b_i=b \delta_{i3}$ is
\begin{eqnarray}\label{Eq:ClassicalFidelity}
\sqrt{F(\chi_{\rho},\pi)}= \nonumber\\
\frac{1}{4} \sqrt{(1+a) (1-b)(1-r) (1-s_{3})}+\frac{1}{4} \sqrt{(1-a) (1+b) (1+r)(1-s_{3})} \nonumber\\
+\frac{1}{4} \sqrt{(1-a) (1-b) (1+s_{3})}+\frac{1}{4} \sqrt{(1+a) (1+b) (1+s_{3})},
\end{eqnarray}
where we have used the fact that any $\chi_{\rho}$ of the form (\ref{Eq:CCQstatestoparticularBDstates}) commutes with any product state $\pi$ with $a_i=a \delta_{i3}$ and $b_i=b \delta_{i3}$, so that the square root of their fidelity is nothing but the square root of their classical fidelity which, in turn, is given by the sum of the square roots of the products of the corresponding eigenvalues of $\chi_{\rho}$ and $\pi$.
By maximising Eq.~(\ref{Eq:ClassicalFidelity}) with respect to $r$, $a$ and $b$, one obtains that the global maximum is attained at $r=a=b=0$ for any $s_3\in[0, 1]$, so in particular for any $s_3\in[\frac{3}{5}, 1]$. As a consequence, for any reference BD state $\rho$, the infimum over $\mathcal{CCQ}_\rho$ in Eq. (\ref{ClassicalCorrelationsDefinition}) is achieved by $\chi_\rho^{BD}$ and one of the nearest product states to $\chi_\rho^{BD}$ is $\pi_{\chi_\rho^{BD}}= \frac{1}{4}\mathbb{I}^A\otimes\mathbb{I}^B$.
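This maximisation of Eq.~(\ref{Eq:ClassicalFidelity}) can also be probed numerically. The following sketch (entirely our own, with the illustrative value $s_3=0.8$) scans a coarse grid over $(r,a,b)\in[-1,1]^3$ and confirms that no grid point beats the origin.

```python
import numpy as np

# Independent grid check (ours) that the square-root fidelity between
# chi_rho(r) and the axis-aligned product states is maximised at
# r = a = b = 0; s3 = 0.8 is an illustrative value in [3/5, 1].

def sqrt_fidelity(r, a, b, s3):
    return 0.25 * (np.sqrt((1 + a) * (1 - b) * (1 - r) * (1 - s3))
                   + np.sqrt((1 - a) * (1 + b) * (1 + r) * (1 - s3))
                   + np.sqrt((1 - a) * (1 - b) * (1 + s3))
                   + np.sqrt((1 + a) * (1 + b) * (1 + s3)))

s3 = 0.8
grid = np.linspace(-1.0, 1.0, 21)          # coarse grid over [-1, 1]
best = max(sqrt_fidelity(r, a, b, s3)
           for r in grid for a in grid for b in grid)
f0 = sqrt_fidelity(0.0, 0.0, 0.0, s3)      # = (sqrt(1-s3) + sqrt(1+s3)) / 2
assert best <= f0 + 1e-12                  # the origin is the grid maximum
```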
Before generalising the above analysis from the reference BD states to any BD state we need to make the following two remarks. First, for any orthonormal product basis $\{|\alpha_i,\beta_j\rangle\}_{i,j=0}^1$, any classical-quantum state in Eq.~(\ref{Eq:generalformofchirhoprobcond1}) can be transformed through a local unitary into the reference classical-quantum state in Eq.~(\ref{Eq:CCQstatestoparticularBDstates}) with the same value of $r$ and $s_3=s_k$, any classical-quantum state in Eq.~(\ref{Eq:generalformofchirhoprobcond2}) can be transformed through a local unitary into the reference classical-quantum state (\ref{Eq:CCQstatestoparticularBDstates}) with the same value of $r$ and $s_3=-s_k$, and finally any classical-quantum state in Eq.~(\ref{Eq:generalformofchirhoprobcond3}) can be transformed through a local unitary into the reference classical-quantum state (\ref{Eq:CCQstatestoparticularBDstates}) with $r=0$ and $s_3=|s_k|$. Second, we note that the minimal Bures distance from the set of product states is invariant under local unitaries, indeed
\begin{eqnarray}\label{Eq:BuresLocalUnitaries}
\inf_{\pi \in \mathcal{P}} D\left(\chi_{\rho},\pi\right) &=\inf_{\pi \in \mathcal{P}} D\left( (U_{A} \otimes U_{B})\chi_{\rho}(U_{A}^{\dagger} \otimes U_{B}^{\dagger}), (U_{A} \otimes U_{B})\pi(U_{A}^{\dagger} \otimes U_{B}^{\dagger})\right) \nonumber \\
&=\inf_{\pi \in \mathcal{P}} D\left( (U_{A} \otimes U_{B})\chi_{\rho}(U_{A}^{\dagger} \otimes U_{B}^{\dagger}),\pi\right)
\end{eqnarray}
where in the first equality we use the invariance of the Bures distance under unitaries and in the second equality we use the locality and bijectivity of local unitaries.
As promised, we are now ready to generalise the above analysis from the reference BD states to any BD state. Let us start from the BD states $\rho$ satisfying Condition (\ref{enum:probcond1}), i.e. the ones such that $p_0p_k=0$ and $p_i p_j>0$ for $i\neq j$, $i\neq k$, $j\neq k$. As we mentioned, there necessarily exists a reference BD state $\rho'$ with $s_3=s_k$ such that for any pair of $\chi_{\rho}(r) \in \mathcal{CCQ}_{\rho}$ and $\chi_{\rho'}(r) \in \mathcal{CCQ}_{\rho'}$ with the same value of $r$, there always exists a local unitary $U_{A} \otimes U_{B}$ such that $\chi_{\rho'} = (U_{A} \otimes U_{B}) \chi_{\rho} (U_{A}^{\dagger} \otimes U_{B}^{\dagger})$. Also,
\begin{eqnarray}\label{Eq:ChiRhoBDInfimum}
\inf_{\chi_\rho \in \mathcal{CCQ}_\rho}\inf_{\pi \in \mathcal{P}} D\left(\chi_{\rho},\pi\right) &= \min_{r\in[-1,1]}\inf_{\pi \in \mathcal{P}} D\left(\chi_{\rho}(r),\pi\right) \nonumber \\
&= \min_{r\in[-1,1]}\inf_{\pi \in \mathcal{P}} D \left( \chi_{\rho'}(r),\pi\right) \nonumber \\
&= D \left( \chi_{\rho'}^{BD},\frac{1}{4}\mathbb{I}^A\otimes\mathbb{I}^B\right) \nonumber \\
&= D \left( \chi_{\rho}^{BD},\frac{1}{4}\mathbb{I}^A\otimes\mathbb{I}^B\right)
\end{eqnarray}
where in the first equality we use the fact that all the states $\chi_\rho(r)$ with the same value of $r$, that may depend on $\theta$ or $\phi$ or both, are local unitarily equivalent and that the minimal Bures distance from the set of product states is invariant under local unitaries. In the second equality we use the fact that all the states $\chi_\rho(r)$ are local unitarily equivalent to the states $\chi_{\rho'}(r)$ with the same value of $r$ and again that the minimal Bures distance from the set of product states is invariant under local unitaries. In the third equality we use the fact that the state achieving the infimum over $\mathcal{CCQ}_{\rho'}$ is $\chi_{\rho'}(0) =\chi_{\rho'}^{BD}$ and that one of its nearest product states is $\pi_{\chi_{\rho'}^{BD}}= \frac{1}{4}\mathbb{I}^A\otimes\mathbb{I}^B$. In the last equality we use the fact that $\chi_{\rho}^{BD}$ is local unitarily equivalent to $\chi_{\rho'}^{BD}$, both corresponding to $r=0$, and the fact that $\frac{1}{4}\mathbb{I}^A\otimes\mathbb{I}^B$ and the Bures distance are invariant under general unitaries. Equation (\ref{Eq:ChiRhoBDInfimum}) means that, for any BD state $\rho$ satisfying Condition (\ref{enum:probcond1}), i.e. such that $p_0p_k=0$ and $p_i p_j>0$ for $i\neq j$, $i\neq k$, $j\neq k$, the infimum over $\mathcal{CCQ}_{\rho}$ in Eq.~(\ref{ClassicalCorrelationsDefinition}) is achieved by the BD closest classical-quantum state to $\rho$, $\chi_{\rho}^{BD}$, and one of the nearest product states to $\chi_\rho^{BD}$ is $\pi_{\chi_\rho^{BD}}= \frac{1}{4}\mathbb{I}^A\otimes\mathbb{I}^B$. An identical reasoning holds for BD states satisfying Condition (\ref{enum:probcond2}), i.e. such that $p_0p_k>0$ and $p_i p_j=0$ for $i\neq j$, $i\neq k$, $j\neq k$, with the only exception of using a reference BD state $\rho'$ such that $s_3=-s_k$. Finally, for BD states satisfying Condition (\ref{enum:probcond3}), i.e. 
such that $p_0 p_1 p_2 p_3>0$, one simply needs to use the fact that each of the closest classical-quantum states $\chi_\rho$, all obtained by setting $r=0$ in the previous two cases, is local unitarily equivalent to $\chi_{\rho'}^{BD}$ directly.
In summary, we have proven that for any BD state $\rho$, the closest BD classical-quantum state $\chi_{\rho}^{BD}$ and the product of the marginals of $\rho$ quite miraculously achieve the double minimisation in Eq.~(\ref{ClassicalCorrelationsDefinition}). This result is highly nontrivial (and not obvious {\it a priori}) and suggests that the definition of classical correlations proposed here is particularly natural for BD states $\rho$.
\section{Total Correlations}\label{sec:totalcorrelations}
The Bures total correlations of a state $\rho$ are defined by the minimum Bures distance of $\rho$ to the set of product states, namely
\begin{equation}
T_{Bu}\left( \rho\right) \equiv \inf_{\pi\in\mathcal{P}} D_{Bu}\left(\rho,\pi \right) = D_{Bu}\left(\rho,\pi_{\rho}\right),
\end{equation}
where $\mathcal{P}$ is the set of product states, while $\pi_{\rho}$ is any of the product states closest to $\rho$. Therefore, in order to obtain the total correlations of a given BD state $\rho$, we simply need to maximise the fidelity $F(\rho,\pi)$ between $\rho$ and any product state $\pi$.
However, the argument used in the previous section to maximise the fidelity $F(\chi_\rho,\pi)$ between any closest classical-quantum state $\chi_\rho$ and any product state $\pi$, does not apply anymore to the present problem. We note in fact that Eq.~(\ref{Eq:FidelityEquality}) does not hold for a general BD state, i.e.,
\begin{equation}
F(\rho,\pi) \neq F(\rho,\pi').
\end{equation}
The concavity of the fidelity thus cannot be utilised to extend the previous analysis from classical to total correlations.
We then formulate an ansatz for one of the closest product states $\pi_{\rho}=\frac{1}{2}\left(\mathbb{I}^A + \sum_{i=1}^3 a_i \sigma_i^A \right)\otimes\frac{1}{2}\left(\mathbb{I}^B + \sum_{i=1}^3 b_i \sigma_i^B \right)$ to a general BD state $\rho$, namely
\begin{equation}\label{eq:ansatzproductstates}
a_i=a \delta_{il},\ \ b_i=b \delta_{il},\ \ a=\frac{|c_l|}{c_l} b,
\end{equation}
where the index $l$ is such that $|c_l|=\min\{|c_1|,|c_2|,|c_3|\}$. This allows us to accomplish the optimisation of the Bures distance analytically. Interestingly, the ansatz form of one of the closest product states to a BD state using the trace distance, formulated in Ref. \cite{Aaronson2013}, is the same as Eq. (\ref{eq:ansatzproductstates}), but with $l$ given by $|c_l|=\max\{|c_1|,|c_2|,|c_3|\}$.
The ansatz in Eq. (\ref{eq:ansatzproductstates}) is supported and verified by an extensive numerical investigation, which was implemented in the following way. We begin by generating a random set of four normalised probabilities and forming a BD state by setting these probabilities as the eigenvalues $\alpha$, $\beta$, $\gamma$ and $\delta$. We then numerically maximise the fidelity between this random BD state and a general product state. The result of this numerical maximisation is compared with the analytical maximisation of the fidelity between the random BD state and our ansatz product states, Eq. (\ref{eq:ansatzproductstates}). This process was repeated for $10^6$ randomly generated BD states, and in all cases the analytically maximised fidelity between the random BD state and the ansatz product states exceeded or equalled the numerical maximisation over all product states.
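A scaled-down version of this protocol can be sketched in Python as follows. The helper names, the random seed and the modest sample sizes ($20$ random BD states, $200$ random product states each) are our own choices, so this is a far lighter check than the $10^6$ states used above; the loose tolerance accounts for the coarse grid over the ansatz parameter.

```python
import numpy as np

# Miniature version (ours) of the numerical test of the ansatz:
# for random BD states, the best fidelity over the one-parameter
# ansatz family should not be beaten by random general product states.

rng = np.random.default_rng(7)
I2 = np.eye(2, dtype=complex)
SIGMA = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]

def sqrtm_psd(M):
    """Square root of a Hermitian positive semidefinite matrix."""
    w, V = np.linalg.eigh(M)
    return (V * np.sqrt(np.clip(w, 0.0, None))) @ V.conj().T

def fidelity(rho, sigma):
    """Uhlmann fidelity F = (tr |sqrt(rho) sqrt(sigma)|)^2."""
    s = np.linalg.svd(sqrtm_psd(rho) @ sqrtm_psd(sigma), compute_uv=False)
    return s.sum() ** 2

def bd_state(c):
    rho = np.kron(I2, I2)
    for ci, si in zip(c, SIGMA):
        rho = rho + ci * np.kron(si, si)
    return rho / 4

def product_state(avec, bvec):
    qa = (I2 + sum(x * s for x, s in zip(avec, SIGMA))) / 2
    qb = (I2 + sum(x * s for x, s in zip(bvec, SIGMA))) / 2
    return np.kron(qa, qb)

def random_bloch():
    v = rng.normal(size=3)
    return (rng.random() ** (1 / 3)) * v / np.linalg.norm(v)

for _ in range(20):
    alpha, beta, gamma, delta = rng.dirichlet(np.ones(4))
    c = (2 * (alpha + gamma) - 1, 2 * (beta + gamma) - 1,
         2 * (alpha + beta) - 1)
    rho = bd_state(c)
    l = int(np.argmin(np.abs(c)))
    sign = 1.0 if c[l] >= 0 else -1.0
    # best fidelity over the one-parameter ansatz family
    best_ansatz = max(
        fidelity(rho, product_state([a * (i == l) for i in range(3)],
                                    [sign * a * (i == l) for i in range(3)]))
        for a in np.linspace(-1.0, 1.0, 201))
    # best fidelity over randomly sampled general product states
    best_random = max(
        fidelity(rho, product_state(random_bloch(), random_bloch()))
        for _ in range(200))
    assert best_ansatz >= best_random - 1e-3
```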
In order to get the explicit expression of the above coefficient $a$ characterising the Bloch vectors of $\pi_\rho$, in terms of the BD state parameters, let us proceed as follows. We define the auxiliary set of coefficients $\vec{\mu} \equiv \{\mu_1,\mu_2,\mu_3,\mu_4\}$ as given by suitable reorderings of the BD state eigenvalues. Specifically, for $l=1$ we have $\vec{\mu}=\{\alpha,\gamma,\beta,\delta\}$, for $l=2$ we have $\vec{\mu}=\{\beta,\gamma,\alpha,\delta\}$, and for $l=3$ we have $\vec{\mu}=\{\alpha,\beta,\gamma,\delta\}$.
For any BD state $\rho$ with, respectively, $c_{l}\geq 0$ and $c_{l}\leq 0$, the square root of the fidelity between $\rho$ and any product state $\pi$ having the Bloch vectors of Eq.~(\ref{eq:ansatzproductstates}) is
\begin{eqnarray}
\sqrt{F(\rho,\pi)}= \nonumber \\
\frac{1}{2}\left[\sqrt{(\sqrt{\mu_1}+\sqrt{\mu_2})^2+(\sqrt{\mu_1}-\sqrt{\mu_2})^2a^2}+(\sqrt{\mu_3}+\sqrt{\mu_4})\sqrt{1-a^2}\right]
\end{eqnarray}
and
\begin{eqnarray}
\sqrt{F(\rho,\pi)}= \nonumber \\
\frac{1}{2}\left[\sqrt{(\sqrt{\mu_3}+\sqrt{\mu_4})^2+(\sqrt{\mu_3}-\sqrt{\mu_4})^2a^2}+(\sqrt{\mu_1}+\sqrt{\mu_2})\sqrt{1-a^2}\right].
\end{eqnarray}
By maximising now the square root of the fidelity between $\rho$ and $\pi$ with respect to $a$, one obtains after some algebra that:
\begin{enumerate}
\item{\label{Item:Condition1}if $c_l>0$ and the condition
\begin{equation}\label{eq:conditionfortherealityofaplus}
{\left( \sqrt{\mu_1}-\sqrt{\mu_2}\right)}^2>{\left( \sqrt{\mu_3}+\sqrt{\mu_4}\right)}{\left( \sqrt{\mu_1}+\sqrt{\mu_2}\right)}
\end{equation}
is fulfilled, then $b=a$ with
\begin{equation}\label{eq:optimalainthefirstcaseplus}
a=\pm\sqrt{\frac{{\left( \sqrt{\mu_1}-\sqrt{\mu_2}\right)}^4-{\left( \sqrt{\mu_3}+\sqrt{\mu_4}\right)}^2{\left( \sqrt{\mu_1}+\sqrt{\mu_2}\right)}^2 }{{\left( \sqrt{\mu_1}-\sqrt{\mu_2}\right)}^4+{\left( \sqrt{\mu_3}+\sqrt{\mu_4}\right)}^2{\left( \sqrt{\mu_1}-\sqrt{\mu_2}\right)}^2}};
\end{equation}}
\item{if $c_l<0$ and the condition
\begin{equation}\label{eq:conditionfortherealityofaminus}
{\left( \sqrt{\mu_3}-\sqrt{\mu_4}\right)}^2>{\left( \sqrt{\mu_1}+\sqrt{\mu_2}\right)}{\left( \sqrt{\mu_3}+\sqrt{\mu_4}\right)}
\end{equation}
is fulfilled, then $b=-a$ with
\begin{equation}\label{eq:optimalainthefirstcaseminus}
a=\pm\sqrt{\frac{{\left( \sqrt{\mu_3}-\sqrt{\mu_4}\right)}^4-{\left( \sqrt{\mu_1}+\sqrt{\mu_2}\right)}^2{\left( \sqrt{\mu_3}+\sqrt{\mu_4}\right)}^2 }{{\left( \sqrt{\mu_3}-\sqrt{\mu_4}\right)}^4+{\left( \sqrt{\mu_1}+\sqrt{\mu_2}\right)}^2{\left( \sqrt{\mu_3}-\sqrt{\mu_4}\right)}^2}};
\end{equation}}
\item{\label{Item:Condition3}if none of the two conditions above hold, we have $a=b=0$.}
\end{enumerate}
As a result, the Bures total correlations of an arbitrary BD state $\rho$ are measured by
\begin{equation}\label{Eq:TotalCorrelations}
T_{Bu} (\rho) = \sqrt{2 \left(1-\sqrt{F(\rho,\pi_{\rho})}\right)},
\end{equation}
where
\begin{equation}\label{eq:fidelitymax}
F(\rho,\pi_{\rho})=\cases{
F_+ & if (i) holds, \\
F_- & if (ii) holds,\\
F_{0} & if (iii) holds,}
\end{equation}
with
\begin{equation}\label{eq:fidelityplus}
F_+= \frac{(\mu_1 +\mu_2 ) \left(1 -2 \sqrt{\mu_1 \mu_2 } + 2 \sqrt{\mu_3 \mu_4 }\right)}{2 \left(\sqrt{\mu_1} - \sqrt{\mu_2} \right)^2},
\end{equation}
\begin{equation}\label{eq:fidelityminus}
F_-= \frac{(\mu_3 +\mu_4 ) \left(1 -2 \sqrt{\mu_3 \mu_4 } + 2 \sqrt{\mu_1 \mu_2 }\right)}{2 \left(\sqrt{\mu_3} - \sqrt{\mu_4} \right)^2},
\end{equation}
\begin{equation}
F_{0} = \frac{1}{4} \left( \sqrt{\alpha}+\sqrt{\beta}+\sqrt{\gamma}+\sqrt{\delta}\right) ^{2}.
\end{equation}
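The case distinction in Eq.~(\ref{eq:fidelitymax}) can be wrapped in a few lines of Python (function names are ours). The checks below verify that a Bell state gives $T_{Bu}=\sqrt{2-\sqrt{2}}$, coinciding with its quantum and classical correlations, and that a zero-discord example falls into case (iii), where the closest product state is the maximally mixed one.

```python
import numpy as np

# Sketch (our own function names) of the closed formula for the
# Bures total correlations of a Bell-diagonal state.

def bd_eigenvalues(c):
    c1, c2, c3 = c
    return ((1 + c1 - c2 + c3) / 4, (1 - c1 + c2 + c3) / 4,
            (1 + c1 + c2 - c3) / 4, (1 - c1 - c2 - c3) / 4)

def total_fidelity(c):
    alpha, beta, gamma, delta = bd_eigenvalues(c)
    l = int(np.argmin(np.abs(c)))                  # |c_l| = min_i |c_i|
    mu = [(alpha, gamma, beta, delta),             # l = 1
          (beta, gamma, alpha, delta),             # l = 2
          (alpha, beta, gamma, delta)][l]          # l = 3
    r1, r2, r3, r4 = (np.sqrt(m) for m in mu)
    if c[l] > 0 and (r1 - r2) ** 2 > (r3 + r4) * (r1 + r2):      # case (i)
        return (mu[0] + mu[1]) * (1 - 2 * r1 * r2 + 2 * r3 * r4) \
               / (2 * (r1 - r2) ** 2)
    if c[l] < 0 and (r3 - r4) ** 2 > (r1 + r2) * (r3 + r4):      # case (ii)
        return (mu[2] + mu[3]) * (1 - 2 * r3 * r4 + 2 * r1 * r2) \
               / (2 * (r3 - r4) ** 2)
    return (r1 + r2 + r3 + r4) ** 2 / 4                          # case (iii)

def bures_total_correlations(c):
    return np.sqrt(2 * (1 - np.sqrt(total_fidelity(c))))

# Bell state: T equals sqrt(2 - sqrt(2)), the same as Q and C
assert abs(bures_total_correlations((1.0, -1.0, 1.0))
           - np.sqrt(2 - np.sqrt(2))) < 1e-12
# a zero-discord state where case (iii) applies
assert abs(total_fidelity((0.5, 0.0, 0.0)) - 0.9330127) < 1e-6
```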
The contour plot in Figure~\ref{Fig:Perturbations} shows the fidelity between an example BD state obeying Condition (\ref{Item:Condition1}) with $l=3$,
and product states obtained as perturbations of the ansatz from Eq.~(\ref{eq:ansatzproductstates}), by allowing $a$ and $b$ to vary over the interval $[-1,1]$. The values of $a$ and $b$ that maximise this fidelity are shown to coincide with the values given by Eq.~(\ref{eq:optimalainthefirstcaseplus}). Similar plots may be created by perturbing the ansatz states in such a way that the index $l$ no longer satisfies $|c_l|=\min\{|c_1|,|c_2|,|c_3|\}$ (i.e. $l=3$), but is instead $l=1$ or $l=2$. These plots both show two maxima, but the fidelity at these two maxima never exceeds the maximal value corresponding to $l=3$.
\begin{figure}[t]
\centering
\includegraphics[width=0.6\textwidth]{Perturbations.pdf}
\caption{The fidelity between a BD state characterised by $\alpha = 0.874168$, $\beta=0.001239$, $\gamma=0.026908$, $\delta=0.097685$, obeying Condition (\ref{Item:Condition1}) with $l=3$, and product states obtained as perturbations of the ansatz from Eq.~(\ref{eq:ansatzproductstates}), by considering any value of $a$ and $b$ belonging to $[-1,1]$ and not only $a=\frac{|c_l|}{c_l} b$. The red dots indicate the maxima found at $a=b= \pm 0.725398$, in agreement with the values predicted by Eq.~(\ref{eq:optimalainthefirstcaseplus}). It can be seen that the product of the marginals $\frac{1}{4} \mathbb{I}^{A} \otimes \mathbb{I}^{B}$ becomes a saddle point in the case of Condition (\ref{Item:Condition1}).}
\label{Fig:Perturbations}
\end{figure}
It is worth highlighting that the product of the marginals does not represent, in general, the closest product state $\pi_{\rho}$ to an arbitrary BD state $\rho$; indeed, from the above classification, the closest product state coincides with $\frac{1}{4}\mathbb{I}^A\otimes\mathbb{I}^B$ only when Condition (\ref{Item:Condition3}) holds, in which case the maximal fidelity is precisely $F_0$. This apparently counterintuitive feature has also been observed when the trace distance is used to measure total correlations~\cite{Aaronson2013}. By contrast, when the Hilbert-Schmidt~\cite{Bellomo2012a} and relative entropy~\cite{Modi2010} distances are considered, the closest product state to any BD state is always the product of the marginals, i.e.~the maximally mixed state.
Another aspect worth remarking in this context is that, according to a naive intuition, total correlations may be expected to equal the sum of the classical and quantum correlations. Indeed, this is true for BD states when the relative entropy and Hilbert-Schmidt distance measures of correlations are considered~\cite{Bellomo2012a}. However, the triangle inequality, which is a requirement for any metric, only imposes a subadditivity property for geometric quantifiers of correlations, of the form $T \leq Q + C$. This turns out to be in general a strict inequality when correlations are measured either by the trace distance~\cite{Aaronson2013,PaulaEPL} or by the Bures distance, as shall become apparent in the following.
\section{Examples}\label{sec:examples}
In this Section the Bures distance correlations are analysed for two families of one-parameter BD states: Werner states~\cite{Werner1989} and rank-2 BD states.
Werner states $\rho_{W}$ are conventionally defined, for two qubits, as the mixture of a maximally entangled Bell state $\ket{\Phi} = (\ket{00} + \ket{11})/\sqrt{2}$ with the maximally mixed state, namely $\rho_{W} = r \ket{\Phi}\bra{\Phi} + (1-r)(\mathbb{I}^A\otimes\mathbb{I}^B)/4$, with $r \in [0,1]$.
Werner states are therefore a subclass of BD states, with eigenvalues given simply by $\alpha = (1+3r)/4$ and $\beta = \gamma =\delta = (1-r)/4$. Combining the results of \cite{Streltsov2010,Aaronson2013a,Spehner2013,Spehner2014} with the above analysis, the Bures distance based correlations of $\rho_{W}$, as shown in Figure~\ref{Fig:WernerCorrelations} as functions of $r$, are given overall by
\begin{equation}
E_{Bu}^{2}(\rho_{W})=\cases{
0 & $r\leq 1/3$,\\
2-\sqrt{2+\sqrt{3(1+2r-3r^2)}} & $r > 1/3$,}
\end{equation}
\begin{equation}
Q_{Bu}^{2}(\rho_{W}) = 2-\sqrt{3-r+\sqrt{1+2r-3r^2}},
\end{equation}
\begin{equation}
C_{Bu}^{2}(\rho_{W})=2 -\frac{ 3 \sqrt{1-r}+\sqrt{1+3r}}{\sqrt{3-r+\sqrt{1+2r-3r^2}}},
\end{equation}
\begin{equation}
T_{Bu}^{2}(\rho_{W})=\cases{
\Xi_{1}(r) & $r < (1 + \sqrt{5})/4$,\\
\Xi_{2}(r) & $r \geq (1 + \sqrt{5})/4$,} \qquad \mbox{with}
\end{equation}
\begin{equation*}
\Xi_{1}(r) = \frac{1}{2} \left(4-3 \sqrt{1-r}-\sqrt{3 r+1}\right),
\end{equation*}
\begin{equation*}
\Xi_{2}(r) = 2-\sqrt{\frac{(r+1) \left(3-r-\sqrt{1+2r-3 r^2}\right)}{1+r-\sqrt{1+2r-3 r^2}}}.
\end{equation*}
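The closed-form expressions above are straightforward to evaluate numerically; the following sketch (an illustrative helper of ours) also makes it easy to verify the subadditivity $T_{Bu} \leq Q_{Bu} + C_{Bu}$ on a grid of $r$:

```python
import numpy as np

def werner_bures_sq(r):
    """Squared Bures correlations (E^2, Q^2, C^2, T^2) of the Werner state rho_W(r)."""
    root = np.sqrt(1 + 2*r - 3*r**2)
    E2 = 0.0 if r <= 1/3 else 2 - np.sqrt(2 + np.sqrt(3)*root)
    Q2 = 2 - np.sqrt(3 - r + root)
    C2 = 2 - (3*np.sqrt(1 - r) + np.sqrt(1 + 3*r))/np.sqrt(3 - r + root)
    if r < (1 + np.sqrt(5))/4:
        T2 = 0.5*(4 - 3*np.sqrt(1 - r) - np.sqrt(1 + 3*r))   # Xi_1(r)
    else:
        T2 = 2 - np.sqrt((r + 1)*(3 - r - root)/(1 + r - root))  # Xi_2(r)
    return E2, Q2, C2, T2
```

At $r=0$ all four quantities vanish, and entanglement stays zero up to $r=1/3$, in agreement with the expressions above.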
All the correlations vanish for $r=0$, when the state reduces to the maximally mixed (uncorrelated) state, and are monotonically increasing functions of $r$. However, entanglement and total correlations are not smooth functions of $r$, due to non-analyticities at $r=1/3$ and $r=(1 + \sqrt{5})/4$, respectively. For $r > (1 + \sqrt{5})/4$, the closest product state $\pi_{\rho}$ to $\rho$ is not the maximally mixed product of the marginals, and becomes instead increasingly less mixed with increasing $r$, eventually reaching a pure state (e.g.~$\ket{00}\bra{00}$) when $r=1$. We finally note that for every $r$ we have $T_{Bu} \leq Q_{Bu}+C_{Bu} $, saturating the bound only trivially for $r=0$ where $Q_{Bu}=C_{Bu}=T_{Bu}=0$.
\begin{figure}[t]
\centering
\includegraphics[width=0.6\textwidth]{WernerCorrelations.pdf}
\caption{Bures distance based correlations for Werner states as a function of $r$. The figure displays total correlations (solid blue line), classical correlations (dashed red line), quantum correlations (dot-dashed purple line) and entanglement (dot-dot-dashed black line). The sum of classical and quantum correlations (dotted orange line) is also plotted.}
\label{Fig:WernerCorrelations}
\end{figure}
We next consider a class of rank-2 BD states $\rho_{2}$, whose eigenvalues take the form $\alpha=(1-c)/2$, $\beta=(1+c)/2$ and $\gamma=\delta=0$, with $c\in[0,1[$. The Bures distance correlations, shown in Figure~\ref{Fig:Rank2Correlations} as functions of $c$, are in this case
\begin{equation}
E_{Bu}^{2}(\rho_{2})=Q_{Bu}^{2}(\rho_{2})= 2-\sqrt{2}\sqrt{1+\sqrt{1-c^2}},
\end{equation}
\begin{equation}
C_{Bu}^{2}(\rho_{2})=T_{Bu}^{2}(\rho_{2})=2-\sqrt{2}.
\end{equation}
In this case the non-additivity of correlations is evident, due to the fact that classical correlations and total correlations are identically equal even though quantum correlations are non-vanishing.
Furthermore, general quantum correlations of the discord type are equal to entanglement even though $\rho_{2}$ is not pure.
\begin{figure}[t]
\centering
\includegraphics[width=0.6\textwidth]{Rank2Correlations.pdf}
\caption{Bures distance correlations for rank-2 Bell-diagonal states as a function of $c$. The figure displays total correlations (solid blue line), equal to the classical correlations, and quantum correlations (dot-dashed purple line), which are equal to the entanglement. The sum of classical and quantum correlations (dotted orange line) is also plotted.}
\label{Fig:Rank2Correlations}
\end{figure}
\section{Dynamics of Bures distance correlations}\label{sec:dynamics}
In this Section we study the dynamics of Bures distance correlations between two initially correlated but noninteracting qubits, each undergoing local pure dephasing due to the coupling with a bosonic bath at zero temperature and with super-Ohmic spectrum, whose characteristics do not depend on the qubit~\cite{Haikka2013}. Specifically, the joint dynamics of each qubit and its bosonic reservoir is governed by the following Hamiltonian ($\hbar=1$)
\begin{equation}
H=\omega_0\sigma_z + \sum_k\omega_k a_k^\dagger a_k + \sum_k \sigma_z \left(g_k a_k + g_k^*a_k^\dagger \right),
\end{equation}
where $\omega_0$ is the qubit frequency, $\omega_k$ the frequencies of the reservoir modes, $\sigma_z$ the Pauli operator along the $z$-direction, $a_k$ the bosonic annihilation operators, $a_k^\dagger$ the bosonic creation operators and $g_k$ the coupling constants between the qubit and each reservoir mode. Moreover, in the continuum limit we have $\sum_k {\left|g_k \right|}^2\rightarrow \int d\omega J(\omega)\delta(\omega_k - \omega)$, where $J(\omega)$ is the reservoir spectral density, which in the super-Ohmic case is given by
\begin{equation}
J(\omega)=\frac{\omega^s}{\omega_c^{s-1}}e^{-\omega/\omega_c},\ \ \ s>1
\end{equation}
with $\omega_c$ denoting the cut-off reservoir frequency.
If the two-qubit state is initially prepared in BD form with coefficients $c_1(0),c_2(0),c_3(0)$, then the ensuing dynamics preserves the BD form of the state, whose correlation coefficients evolve in time according to the following expressions,
\begin{eqnarray}
c_1(t)=q^2(t)c_1(0), \\
c_2(t)=q^2(t)c_2(0), \\
c_3(t)=c_3(0),
\end{eqnarray}
where the decay characteristic function $q(t)$ at zero temperature is given by $q(t)=e^{-\Upsilon(t)}$,
with dephasing factor
\begin{equation}
\Upsilon(t)=2\int_0^t\gamma(t')dt'
\end{equation}
and dephasing rate
\begin{equation}
\gamma(t)=\omega_c {\left[1+{\left(\omega_c t \right)}^2 \right] }^{-s/2}\Gamma[s]\sin\left[s \arctan(\omega_c t ) \right],
\end{equation}
where $\Gamma[x]$ is the Euler Gamma function.
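For reference, the dephasing factor admits the closed form $\Upsilon(t) = 2\Gamma[s-1]\big(1 - [1+(\omega_c t)^2]^{-(s-1)/2}\cos[(s-1)\arctan(\omega_c t)]\big)$, which follows from integrating $\gamma(t)$ directly (our derivation, easily checked against numerical quadrature). A small numerical sketch in dimensionless time $\nu=\omega_c t$, for the non-Markovian case $s=2.5$ used below:

```python
import numpy as np
from math import gamma as Gamma

def dephasing_rate(nu, s):
    """gamma(t)/omega_c as a function of dimensionless time nu = omega_c*t."""
    return (1 + nu**2)**(-s/2) * Gamma(s) * np.sin(s*np.arctan(nu))

def dephasing_factor(nu, s):
    """Closed form of Upsilon(t) (our integration of gamma, valid for s > 1)."""
    return 2*Gamma(s - 1)*(1 - (1 + nu**2)**(-(s - 1)/2)*np.cos((s - 1)*np.arctan(nu)))

s = 2.5                                   # super-Ohmic, non-Markovian regime (s > 2)
nu = np.linspace(0, 10, 4001)
gam = dephasing_rate(nu, s)
# cumulative trapezoidal quadrature of Upsilon(t) = 2 * int_0^t gamma(t') dt'
ups_num = 2*np.concatenate(([0.0], np.cumsum(0.5*(gam[1:] + gam[:-1])*np.diff(nu))))
q2 = np.exp(-2*dephasing_factor(nu, s))   # q(t)^2, multiplying c_1(0) and c_2(0)
```

For $s=2.5$ the rate $\gamma(t)$ takes negative values at later times (the signature of non-Markovianity), while for $s<2$ it stays non-negative.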
We are interested in generally non-Markovian dynamics since it can prolong the existence of quantum correlations between the qubits. Consequently, in the following we will consider $s>s_{c}=2$, where $s_{c}=2$ represents the zero-temperature crossover between Markovian and non-Markovian dynamics~\cite{Haikka2013}.
\begin{figure}[t]
\centering
\subfigure[]{\includegraphics[width=0.6\textwidth]{Dynamics.pdf}\label{Fig:Dynamics}} \\
\subfigure[]{\includegraphics[width=0.6\textwidth]{Dynamics2.pdf}\label{Fig:Dynamics2}} \\
\caption{Dynamics of Bures distance correlations, plotted versus the dimensionless time $\nu=\omega_c t$, for two noninteracting qubits, each coupled to a super-Ohmic bosonic reservoir with $s=2.5$ and initially prepared in a Bell-diagonal state with coefficients (a) $c_1(0)=1,c_2(0)=-0.6,c_3(0)=0.6$ and (b) $c_1(0)=1,c_2(0)=-0.02,c_3(0)=0.02$. The figures display total correlations (solid blue line), classical correlations (dashed red line), quantum correlations (dot-dashed purple line) and entanglement (dot-dot-dashed black line).}
\label{Fig:DynamicsBoth}
\end{figure}
Figure \ref{Fig:Dynamics} shows the behaviour of Bures distance total, classical and quantum correlations as a function of the dimensionless time $\nu=\omega_c t$, when the two qubits are initially prepared in a rank-$2$ BD state with coefficients $c_1(0)=1$, $c_2(0)=-0.6$, $c_3(0)=0.6$ and each locally interacts with a zero-temperature super-Ohmic bosonic reservoir with $s=2.5$. This particular dynamics leads to paradigmatic {\it freezing} phenomena for both quantum and classical correlations, which can stay constant in certain time intervals despite the state physically changing; this was originally reported by analysing entropic measures of correlations \cite{Mazzola}. Bures measures of correlations are here shown to capture the same phenomenology. In our specific example, quantum correlations are frozen up until a certain time $t^*$ and classical correlations are frozen from that exact time onwards. It is worth noting that classical and quantum correlations exactly coincide at time $t^*$, which is known as the instant of sudden transition from classical to quantum decoherence \cite{Mazzola}. The same property holds for correlations quantified by relative entropy \cite{Haikka2013,xulofranco2013NatComms,lofrancoreview} and Hilbert-Schmidt distance \cite{Bellomo2012a}. For correlations quantified by trace distance, although quantum and classical correlations alternately freeze in the same time intervals as for the other measures \cite{Aaronson2013a,Aaronson2013,Paula2013}, they are generally not coincident at the crossing time $t^*$. The entanglement, instead, decays monotonically and completely disappears at a finite time, exhibiting the well-known sudden death phenomenon \cite{lofrancoreview,yu2009Science}. Total correlations, finally, decay smoothly and asymptotically reach a non-vanishing value.
However, the aforementioned sudden switch between the freezing of quantum and classical correlations may not happen at all if the two qubits are initially prepared in a suitable BD state. This is displayed in Figure~\ref{Fig:Dynamics2} for initial coefficients $c_1(0)=1,c_2(0)=-0.02,c_3(0)=0.02$. In fact, in the latter case, quantum correlations are frozen indefinitely and never coincide with classical correlations. This peculiar time behaviour is in accordance with Ref.~\cite{Haikka2013}, in which this phenomenon was originally investigated by considering the entropic quantum discord \cite{Ollivier2001}. We also point out that the entanglement instead undergoes sudden death again, despite the indefinite freezing of general quantum correlations. This is an interesting point showing once more the very different nature of quantum correlations, as captured by the concept of quantum discord, and conventional entanglement: quantum correlations can in principle be completely unaffected by decoherence in spite of the fast disappearance of entanglement. For a more general study of the sudden change phenomena for quantum correlations see Ref.~\cite{FanchiniNew}.
\section{Conclusions}\label{sec:conclusions}
In this paper we obtained exact expressions for the classical and total correlations of two-qubit Bell-diagonal states according to the Bures distance. These expressions, together with the corresponding known formulae for entanglement and discord-type quantum correlations, allow for the completion of the hierarchy of Bures distance based correlations in the physically relevant family of Bell-diagonal states. We proved that the closest Bell-diagonal classical-quantum state and the tensor product of the marginals of any Bell-diagonal state achieve the infima in the definition of classical correlations. We have shown that the nearest uncorrelated state to every Bell-diagonal state is not always the product of its marginals and that, in general, the total correlations are strictly smaller than the sum of quantum and classical correlations. These results are in agreement with those obtained with the trace distance \cite{Aaronson2013,PaulaEPL} but not with the Hilbert-Schmidt \cite{Bellomo2012a} and relative entropy distances \cite{Modi2010}. Finally, we have shown that, for two independent qubits under local bosonic pure-dephasing non-Markovian channels and suitable initial conditions \cite{Mazzola,Haikka2013}, Bures distance quantifiers of correlations can also manifest: (i) the freezing of quantum correlations before a certain time $t^*$ and the freezing of classical correlations after the same time $t^*$; (ii) the indefinite freezing of quantum correlations notwithstanding the contemporary occurrence of entanglement sudden death. Another peculiar feature of quantum but not classical geometric correlations, namely the possibility of double sudden changes (under different dynamical conditions) as observed theoretically and experimentally for trace distance quantifiers \cite{SarandyNew,PaulaEPL,Paula2013}, has also been theoretically shown to occur for the Bures measure of discord \cite{OrszagNew}.
Overall, with the contributions brought forward by this paper, we have reached a fairly comprehensive understanding of the different components of correlations in generally mixed states of archetypical bipartite systems and their dynamical characteristics. Experimental demonstrations with quantum optics and nuclear magnetic resonance setups \cite{Guo,Cornelio2012,xulofranco2013NatComms,DiogoNMR,Isabela,Paula2013} have also verified the predicted resilience of classical and quantum correlations to particular decoherent evolutions. However, a satisfactory understanding of these phenomena from first principles is still missing. This is perhaps to be traced back to a still incomplete formalisation for the requirements that measures of classical and general quantum correlations have to satisfy \cite{Modi2012,Ollivier2001,Henderson2001}. Future work will be devoted to the theoretical and experimental understanding of the physical origin of the freezing under nondissipative decoherence common to any known \textit{bona fide} measure of quantum correlations before a certain time $t^*$ (including entropic, trace, Bures, and skew-information based measures as studied in \cite{Aaronson2013a}), and to any currently known \textit{bona fide} measure of classical correlations after the same time $t^*$ (counting entropic, trace and now Bures classical correlations). The ultimate implications of such extreme robustness to decoherence, not observable in entanglement, and the ensuing possibilities opened for quantum technologies clearly deserve further investigation.
\ack{
We thank Ben Aaronson, Massimo Blasone, Catalin Catana, Fabrizio Illuminati, Soojoon Lee, Giuseppe Marmo, Sammy Ragy, Marcelo Sarandy, Diogo~O.~Soares-Pinto, Dominique Spehner and Karol \.{Z}yczkowski for fruitful discussions. We acknowledge the
Brazilian funding agency CAPES [Pesquisador Visitante Especial-Grant No.~108/2012] and the Foundational Questions Institute [FQXi-RFP3-1317] for financial support.}
\section*{References}
\providecommand{\newblock}{}
\subsection{LASSO: $\lambda$-path}
\subsubsection{$\lambda$-path: path \wrt $\lambda$ ($\tau$ fixed)}
Since $\tau$ is fixed, we drop it from the notation. The normal equation of the $\lambda$-path can be written as
\begin{align*}
- X^T\left( y - X\beta \right) + \lambda s(\lambda) &= 0
\end{align*}
where \(s(\lambda)\) is an element of the sub-differential, defined componentwise as %
\begin{equation*}\label{eq:sub-diff_lambda_LASSO}
s_j(\lambda) \in \begin{cases}
\{-1, +1\}, \enspace \text{if} \enspace \beta_j(\lambda) \neq 0\\
[-1, +1], \enspace \enspace \text{if} \enspace \beta_j(\lambda) = 0.
\end{cases}
\end{equation*}
Now, if we consider two $\lambda$ values (\(\lambda_t > \lambda_{t+1}\)) at which the active set does not change (i.e. $\mcl{A}_{\lambda_t} = \mcl{A}_{\lambda_{t+1}}$) and the signs of the active coefficients also remain the same (i.e. $s_{\mc{A}_{\lambda_t}}(\lambda_t) = s_{\mc{A}_{\lambda_{t+1}}}(\lambda_{t+1})$), then we can write
\begin{equation}\label{eqn:lasso_lmd_path_beta_piecewise_linear}
\beta_{\mc{A}_{\lambda_{t}}}(\lambda_{t+1}) - \beta_{\mc{A}_{\lambda_t}}(\lambda_t) = - \nu_{\mc{A}_{\lambda_t}}(\lambda_t) (\lambda_{t+1} - \lambda_t)
\end{equation}
where, \(\nu_{\mc{A}_{\lambda_t}}(\lambda_t) = (X^T_{\mc{A}_{\lambda_t}}X_{\mc{A}_{\lambda_t}})^{-1}s_{\mc{A}_{\lambda_t}}(\lambda_t)\). The derivation of \(\nu_{\mc{A}_{\lambda_t}}(\lambda_t)\) is given in \ref{LASSO:direction_vector_lambda_path}. Note that \(\nu_{\mathcal{A}_{\lambda}}(\lambda)\) is constant for all \(\lambda \in (\lambda_{t+1}, \lambda_t]\) and thus, equation (\ref{eqn:lasso_lmd_path_beta_piecewise_linear}) states that \(\beta(\lambda)\) is piece-wise linear in \(\lambda\) for a fixed \(\tau\). To draw the curve of solutions as a function of $\lambda$, we need to check when the active set changes. If $\lambda_{t+1}$ is the next breakpoint of the path, then either of the following two events happens.
$\bullet$ A zero variable becomes non-zero, \ie
\( \exists j \in \mcl{A}^c_{\lambda_t} \text{ s.t. } |x_{j}^{\top}(y - X_{\mcl{A}_{\lambda_t}}\beta_{\mcl{A}_{\lambda_t}}(\lambda_{t+1}))| = \lambda_{t+1}, \text{ or}\)
$\bullet$ A non-zero variable becomes zero, \ie \(\exists j \in \mcl{A}_{\lambda_t} \text{ s.t. } \beta_j(\lambda_t) \neq 0 \text{ but } \beta_j(\lambda_{t+1}) = 0 \enspace.\)
Overall, the next change of the active set happens at \(\lambda_{t+1} = \lambda_t - \Delta_j, \) where
\begin{equation}\label{eq:lasso_lmd_path_step_length}
\Delta_j = \min(\Delta_j^1, \Delta_j^2) = \min\left(\min_{j \in \mc{A}^c_{\lambda_t}} \Big( \frac{(x_k \pm x_j)^Tw(\lambda_t)}{(x_k \pm x_j)^Tv(\lambda_t)} \Big)_{++}, \hst \min_{j \in \mcl{A}_{\lambda_t}} \Big( - \frac{\beta_j(\lambda_t)}{\nu_j(\lambda_t)} \Big)_{++} \right) \enspace.
\end{equation}
where, \(w(\lambda_t) = y - X_{\mcl{A}_{\lambda_t}}\beta_{\mcl{A}_{\lambda_t}}(\lambda_t)\), and \(v(\lambda_t) = X_{\mcl{A}_{\lambda_t}}\nu_{\mcl{A}_{\lambda_t}}(\lambda_t)\). The derivation of the step-size of inclusion ($\Delta_j^1$) is given in \ref{LASSO:step-size_lmd_path_d1}.
\subsubsection{Direction vector ($\lambda$-path)}\label{LASSO:direction_vector_lambda_path}
Let's consider the normal equation at $\lambda_t$ and $\lambda_{t+1}$ for the active components
\begin{equation}\label{eqn:normal_lambda1}
-X^{\top}_{\mathcal{A}_{\lambda_t}}(y - X_{\mathcal{A}_{\lambda_t}}\beta_{\mathcal{A}_{\lambda_t}}(\lambda_{t}))
+ \lambda_t s_{\mathcal{A}_{\lambda_t}}(\lambda_{t}) = 0,
\end{equation}
\begin{equation}\label{eqn:normal_lambda2}
-X^{\top}_{\mathcal{A}_{\lambda_{t}}}(y - X_{\mathcal{A}_{\lambda_{t}}}\beta_{\mathcal{A}_{\lambda_{t}}}(\lambda_{t+1}))
+ \lambda_{t+1} s_{\mathcal{A}_{\lambda_{t}}}(\lambda_{t+1}) = 0.
\end{equation}
Note that \(\mcl{A}_{\lambda_t} = \mcl{A}_{\lambda_{t+1}}\) and \(s_{\mc{A}_{\lambda_t}}(\lambda_{t}) = s_{\mcl{A}_{\lambda_{t}}}(\lambda_{t+1})\). Therefore, subtracting (\ref{eqn:normal_lambda1}) from (\ref{eqn:normal_lambda2}) one can write
\begin{align*}
\beta_{\mathcal{A}_{\lambda_{t}}}(\lambda_{t+1}) - \beta_{\mathcal{A}_{\lambda_t}}(\lambda_{t}) = -\nu_{\mathcal{A}_{\lambda_t}}(\lambda_{t}) (\lambda_{t+1} - \lambda_t),
\end{align*}
where, we defined \(\nu_{\mathcal{A}_{\lambda_t}}(\lambda_{t}) = (X^{\top}_{\mathcal{A}_{\lambda_t}}X_{\mathcal{A}_{\lambda_t}})^{-1}s_{\mathcal{A}_{\lambda_t}}(\lambda_t)\).
\subsubsection{Step-size of inclusion ($\lambda$-path)}\label{LASSO:step-size_lmd_path_d1}
The optimality condition for the active features \(j \in \mathcal{A}_{\lambda_t}\) of the $\lambda$-path of LASSO can be written as
\begin{align*}
- x_j^{\top}\left( y - X_{\mcl{A}_{\lambda_t}}\beta_{\mcl{A}_{\lambda_t}}(\lambda_t) \right) + \lambda_t s_j(\lambda_t) &= 0,\\
x_j^{\top}\left( y - X_{\mcl{A}_{\lambda_t}}\beta_{\mcl{A}_{\lambda_t}}(\lambda_t) \right) &= \lambda_t s_j(\lambda_t),\\
\therefore \hspace{1cm} \left| x_j^{\top}\left( y - X_{\mcl{A}_{\lambda_t}}\beta_{\mcl{A}_{\lambda_t}}(\lambda_t) \right) \right| &= \lambda_t.
\end{align*}
Therefore, at any step \((\lambda_t \rightarrow \lambda_t - \Delta_j) \) any non-active feature \( (j \in \mathcal{A}^c_{\lambda_t} ) \) becomes active when the following condition is satisfied i.e.
\begin{align*}
\left| x_j^{\top}\left( y - X_{\mcl{A}_{\lambda_t}}(\beta_{\mcl{A}_{\lambda_t}}(\lambda_t) + \Delta_j\nu_{\mcl{A}_{\lambda_t}}(\lambda_t)) \right) \right| &= \left| x_k^{\top}\left(y - X_{\mcl{A}_{\lambda_t}}(\beta_{\mcl{A}_{\lambda_t}}(\lambda_t) + \Delta_j\nu_{\mcl{A}_{\lambda_t}}(\lambda_t)) \right) \right|, \hst \forall j \in \mathcal{A}^c_{\lambda_t}, \forall k \in \mathcal{A}_{\lambda_t}\\
\left|x_j^{\top}(w(\lambda_t) - \Delta_j v(\lambda_t)) \right| &= \left|x_k^{\top}(w(\lambda_t) - \Delta_j v(\lambda_t)) \right|,
\end{align*}
where \(w(\lambda_t) = y - X_{\mcl{A}_{\lambda_t}}\beta_{\mcl{A}_{\lambda_t}}(\lambda_t) \) and \(v(\lambda_t) = X_{\mcl{A}_{\lambda_t}}\nu_{\mcl{A}_{\lambda_t}}(\lambda_t) \). Now, considering the positive and negative terms separately one can write the step-size of inclusion as -
\begin{align*}
\Delta_j^1 &= \min_{j \in \mathcal{A}^c_{\lambda_t}} \Bigg( \frac{(x_k \pm x_j)^{\top}w(\lambda_t)}{(x_k \pm x_j)^{\top}v(\lambda_t)} \Bigg)_{++}.
\end{align*}
\subsubsection{Tree pruning ($\lambda$-path)}
The derivation of this pruning condition is also given in \citep{ERP_Tsuda}. However, here we provide the same derivation in our notation to make it self-contained.
Similar to (\ref{eqn:pruning_condn}), the pruning condition of the $\lambda$-path can be written as
\begin{equation}\label{eq:lasso_tree_pruning_lmd_path1}
\lvert \rho_{\ell} (\lambda_t) \lvert + \Delta^{\ast}_{\ell} \lvert \eta_{\ell}(\lambda_t) \lvert < \lvert \rho_k(\lambda_t) \lvert - \Delta^{\ast}_{\ell} \lvert \eta_k(\lambda_t) \lvert,
\end{equation}
where \(\rho_{\ell}(\lambda_t) = x_{\ell}^{\top}w(\lambda_t) \) and \(\eta_{\ell}(\lambda_t) =x_{\ell}^{\top} v(\lambda_t) \quad \forall \ell \in \mathcal{A}^c_{\lambda_t} \), \(\rho_k(\lambda_t) = x_{k}^{\top}w(\lambda_t) \) and \(\eta_k(\lambda_t) = x_k^{\top} v(\lambda_t), \quad \forall k \in \mathcal{A}_{\lambda_t} \) .
Now, similarly to Proposition 3, we can also write
\begin{proposition} \label{proposition:lasso_tree_antimonotonicity_lmd_path}
\textnormal{Using the tree anti-monotonicity property (\ref{eq:tree_anti_monotonicity_prop}) we can easily show that $\forall \ell^{\prime} $ s.t. $\ell^{\prime} \supset \ell$, the following conditions are satisfied i.e.}
\begin{align*}
| \rho_{\ell^{\prime}} (\lambda_t) | = \lvert \sum_i w_i(\lambda_t) x_{i\ell^{\prime}} \rvert &\leq \max \big\{ \sum_{w_i(\lambda_t) < 0} \lvert w_i(\lambda_t) \rvert x_{i\ell}, \sum_{w_i(\lambda_t) > 0} \lvert w_i(\lambda_t) \rvert x_{i\ell} \big\} := b_{w(\lambda_t)},\\
|\eta_{\ell^{\prime}}(\lambda_t)| = \lvert \sum_i v_i(\lambda_t) x_{i\ell^{\prime}} \rvert &\leq \max \big\{ \sum_{v_i(\lambda_t) < 0} \lvert v_i(\lambda_t) \rvert x_{i\ell}, \sum_{v_i(\lambda_t) > 0} \lvert v_i(\lambda_t) \rvert x_{i\ell} \big\} := b_{v(\lambda_t)}.
\end{align*}
\end{proposition}
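The bounds $b_{w(\lambda_t)}$ and $b_{v(\lambda_t)}$ of Proposition \ref{proposition:lasso_tree_antimonotonicity_lmd_path} are easy to check numerically on synthetic binary features obeying the anti-monotonicity $x_{i\ell^{\prime}} \leq x_{i\ell}$ (an illustrative sketch; the helper is ours and the random features stand in for tree-structured patterns):

```python
import numpy as np

def mass_bound(u, x):
    """b_u of the Proposition: max of negative-part and positive-part mass of u on supp(x)."""
    return max(np.abs(u[u < 0]) @ x[u < 0], u[u > 0] @ x[u > 0])

rng = np.random.default_rng(0)
violations = 0
for _ in range(500):
    x_l = rng.integers(0, 2, 60).astype(float)     # binary feature at node l
    x_child = x_l * rng.integers(0, 2, 60)         # descendant l': x_{i,l'} <= x_{i,l}
    u = rng.standard_normal(60)                    # plays the role of w(lambda_t) or v(lambda_t)
    if abs(u @ x_child) > mass_bound(u, x_l) + 1e-12:
        violations += 1
```

No violation is ever found, since $|\sum_i u_i x_{i\ell'}|$ is squeezed between the positive- and negative-part masses on the support of the ancestor feature.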
Therefore, similar to the $\tau$-path (\ref{eq:lemma_3_eq}) the pruning condition of the $\lambda$-path (\ref{eq:lasso_tree_pruning_lmd_path1}) can be written as
\begin{equation}\label{eq:lasso_pruning_cond_lmd_path3}
b_{w(\lambda_t)} + \Delta^{\ast}_{\ell} b_{v(\lambda_t)} < \lvert \rho_k(\lambda_t)| - \Delta^{\ast}_{\ell} \lvert \eta_k(\lambda_t) \lvert.
\end{equation}
The complete algorithm for the selection path ($\lambda$-path) is given in Algorithm~\ref{algo:selection_path}.
\begin{algorithm}[H]
\caption{$\lambda$-path}
\label{algo:selection_path}
\begin{algorithmic}[1]
\State \textbf{Input:} \(X, y, \lambda \).
\State Initialization: \( \lambda_t = \lambda_{max} = ||X^{\top}y||_{\infty}, \hst \mcl{A}_{\lambda_t} = \underset{j}{\textit{arg max}} |X^{\top}y|_j, \quad \beta(\lambda_t)=0,\)
\State \(\nu_{\mc{A}_{\lambda_t}}(\lambda_t) = (X_{\mc{A}_{\lambda_t}}^{\top}X_{\mc{A}_{\lambda_t}})^{-1} sign(\beta_{\mc{A}_{\lambda_t}}(\lambda_t))\), \hst \(\nu_{\mc{A}^c_{\lambda_t}}(\lambda_t) = 0\).
\State \textbf{while} \(\lambda_t \geq \lambda \hs \textbf{do}\).
\State Compute step-length \(\Delta_j \leftarrow \) Equation (\ref{eq:lasso_lmd_path_step_length}).
\State If \(\Delta_j = \Delta_j^1\), add $j$ into \(\mcl{A}_{\lambda_t}\). \Comment{Inclusion}
\State If \(\Delta_j = \Delta_j^2\), remove $j$ from \(\mcl{A}_{\lambda_t}\). \Comment{Deletion}
\State Update: \( \lambda_t \leftarrow \lambda_t - \Delta_j, \beta_{\mcl{A}_{\lambda_t}}(\lambda_{t+1}) \leftarrow \beta_{\mcl{A}_{\lambda_t}}(\lambda_t) + \Delta_j \nu_{\mcl{A}_{\lambda_t}}(\lambda_t), \newline
\nu_{\mc{A}_{\lambda_{t+1}}}(\lambda_{t+1}) = (X_{\mc{A}_{\lambda_{t+1}}}^{\top}X_{\mc{A}_{\lambda_{t+1}}})^{-1} sign(\beta_{\mc{A}_{\lambda_{t+1}}}(\lambda_{t+1})), \quad \nu_{\mc{A}^c_{\lambda_{t+1}}}(\lambda_{t+1}) = 0. \)
\State \textbf{end while}
\State \textbf{Output:} \( \beta(\lambda), \mathcal{A}_{\lambda} \).
\end{algorithmic}
\end{algorithm}
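Algorithm \ref{algo:selection_path} (without the tree-pruning machinery, and with no safeguards against degenerate ties) can be sketched in a few lines of NumPy; the output can then be validated against the optimality conditions, i.e. $|x_j^{\top}(y - X\beta)| \leq \lambda$ for all $j$, with equality and matching sign for the active features. The function below is a didactic sketch of ours, not the authors' implementation:

```python
import numpy as np

def lasso_lambda_path(X, y, lam_stop, tol=1e-10):
    """Didactic lambda-path homotopy: follow the piecewise-linear LASSO path
    from lam_max down to lam_stop, tracking inclusion/deletion events."""
    p = X.shape[1]
    c0 = X.T @ y
    lam = np.abs(c0).max()                         # lambda_max
    A = [int(np.abs(c0).argmax())]
    beta = np.zeros(p)
    while lam > lam_stop + tol:
        XA = X[:, A]
        w = y - XA @ beta[A]                       # residual at lambda_t
        sA = np.sign(XA.T @ w)                     # active signs
        nu = np.linalg.solve(XA.T @ XA, sA)        # direction nu_A(lambda_t)
        cors = X.T @ w
        eta = X.T @ (XA @ nu)
        d_in, j_in = np.inf, None                  # next inclusion event
        for j in set(range(p)) - set(A):
            for num, den in ((lam - cors[j], 1 - eta[j]),
                             (lam + cors[j], 1 + eta[j])):
                if den > tol and tol < num/den < d_in:
                    d_in, j_in = num/den, j
        d_out, j_out = np.inf, None                # next deletion event
        for idx, j in enumerate(A):
            if abs(nu[idx]) > tol and tol < -beta[j]/nu[idx] < d_out:
                d_out, j_out = -beta[j]/nu[idx], j
        d = min(d_in, d_out, lam - lam_stop)
        beta[A] = beta[A] + d*nu                   # piecewise-linear update
        lam -= d
        if d == d_in:
            A.append(j_in)
        elif d == d_out:
            A.remove(j_out)
    return beta
```

On generic data the returned $\beta$ satisfies the KKT conditions at the target $\lambda$ up to numerical tolerance.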
\section{Appendix}
\subsection{LASSO $\tau$-path.}
\subsubsection{Derivations of $\nu_{\cA_{\tau_t}} (\tau_t)$ and $\gamma_{\cA^c_{\tau_t}} (\tau_t)$ in Equations (\ref{eqn:beta_piece_wise_constant}) and (\ref{eqn:gamma_piece_wise_constant}).}\label{LASSO:direction_vector_tau_path}
From the optimality conditions (\ref{eq:opt_condn_shim_homotopy}) of the Lasso at $\tau_t$ and $\tau_{t+1}$, we have following equations for the active components
\begin{align}
-X^{\top}_{\mathcal{A}_{\tau_t}}(y(\tau_t) - X_{\mathcal{A}_{\tau_t}}\beta_{\mathcal{A}_{\tau_t}}(\tau_t))
+ \lambda s_{\mathcal{A}_{\tau_t}}(\tau_{t}) = 0, \label{eqn:lasso_normal_tau1} \\
-X^{\top}_{\mathcal{A}_{\tau_{t}}}(y(\tau_{t+1}) - X_{\mathcal{A}_{\tau_{t}}}\beta_{\mathcal{A}_{\tau_{t}}}(\tau_{t+1}))
+ \lambda s_{\mathcal{A}_{\tau_{t}}}(\tau_{t}) = 0.\label{eqn:lasso_normal_tau2}
\end{align}
Note that \(\mcl{A}_{\tau_t} = \mcl{A}_{\tau_{t+1}}\) and \(s_{\mc{A}_{\tau_t}}(\tau_{t}) = s_{\mcl{A}_{\tau_{t}}}(\tau_{t+1})\). Therefore, subtracting (\ref{eqn:lasso_normal_tau1}) from (\ref{eqn:lasso_normal_tau2}) we can write
\begin{align*}
\beta_{\mcl{A}_{\tau_{t}}}(\tau_{t+1}) - \beta_{\mathcal{A}_{\tau_t}}(\tau_t)
&= (X^{\top}_{\mc{A}_{\tau_t}}X_{\mc{A}_{\tau_t}})^{-1}X^{\top}_{\mc{A}_{\tau_t}}(y(\tau_{t+1}) - y(\tau_t))\\
&= (X^{\top}_{\mc{A}_{\tau_t}}X_{\mc{A}_{\tau_t}})^{-1}X^{\top}_{\mc{A}_{\tau_t}}(q + b\tau_{t+1} - q - b\tau_t) \quad \text{using Equation (\ref{eq:parametrized_response_vector})} \\
&= (X^{\top}_{\mc{A}_{\tau_t}}X_{\mc{A}_{\tau_t}})^{-1}X^{\top}_{\mc{A}_{\tau_t}}b(\tau_{t+1} - \tau_t)\\
&= \nu_{\mathcal{A}_{\tau_t}}(\tau_t)\,(\tau_{t+1} - \tau_t).
\end{align*}
where, we defined \(\nu_{\mathcal{A}_{\tau_t}}(\tau_t) =(X^{\top}_{\mc{A}_{\tau_t}}X_{\mc{A}_{\tau_t}})^{-1}X^{\top}_{\mc{A}_{\tau_t}}b \).
Similarly, for the non-active components, where \(\mcl{A}^c_{\tau_t} = \mcl{A}^c_{\tau_{t+1}}\) but, \(s_{\mathcal{A}^c_{\tau_t}}(\tau_t) \neq s_{\mathcal{A}^c_{\tau_{t}}}(\tau_{t+1})\),
\begin{align}
-X^T_{\mathcal{A}^c_{\tau_t}}(y (\tau_t) - X_{\mathcal{A}_{\tau_t}}\beta_{\mathcal{A}_{\tau_t}}(\tau_{t}))
+ \lambda s_{\mathcal{A}^c_{\tau_t}}(\tau_t) = 0 \label{eqn:lasso_normal_tau1_na}, \\
-X^T_{\mathcal{A}^c_{\tau_{t}}}(y (\tau_{t+1}) - X_{\mathcal{A}_{\tau_{t}}}\beta_{\mathcal{A}_{\tau_{t}}}(\tau_{t+1}))
+ \lambda s_{\mathcal{A}^c_{\tau_{t}}}(\tau_{t+1}) = 0 \label{eqn:lasso_normal_tau2_na}
\end{align}
Therefore, subtracting (\ref{eqn:lasso_normal_tau1_na}) from (\ref{eqn:lasso_normal_tau2_na}) we can write
\begin{align}
\lambda s_{\mathcal{A}^c_{\tau_{t}}}(\tau_{t+1}) - \lambda s_{\mathcal{A}^c_{\tau_t}}(\tau_{t})
&= X^T_{\mathcal{A}^c_{\tau_t}}(b - X_{\mathcal{A}_{\tau_t}} \nu_{\mathcal{A}_{\tau_t}}(\tau_t))(\tau_{t+1} - \tau_{t}) \nonumber \\
&= \gamma_{\mathcal{A}^c_{\tau_t}}(\tau_t) (\tau_{t+1} - \tau_{t}) \label{eq:gamma_appendix}
\end{align}
where we defined \(\gamma_{\mathcal{A}^c_{\tau_t}}(\tau_t) = X^T_{\mathcal{A}^c_{\tau_t}}(b - X_{\mathcal{A}_{\tau_t}} \nu_{\mathcal{A}_{\tau_t}}(\tau_t))\).
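Because $\beta_{\mcl{A}}(\tau)$ and $\lambda s_{\mcl{A}^c}(\tau)$ are affine in $\tau$ while the active set and signs are fixed, the direction vectors $\nu_{\mcl{A}_{\tau_t}}(\tau_t)$ and $\gamma_{\mcl{A}^c_{\tau_t}}(\tau_t)$ can be checked against finite differences of the stationarity conditions (an illustrative sketch with a hypothetical active set and sign pattern):

```python
import numpy as np

rng = np.random.default_rng(1)
n, lam = 40, 0.7
X = rng.standard_normal((n, 6))
q, b = rng.standard_normal(n), rng.standard_normal(n)   # y(tau) = q + b*tau

A, Ac = [0, 2, 3], [1, 4, 5]          # hypothetical fixed active set and complement
sA = np.array([1.0, -1.0, 1.0])       # hypothetical fixed signs on A
XA = X[:, A]
G = XA.T @ XA

def beta_A(tau):
    # stationarity on A: X_A^T (y(tau) - X_A beta) = lam * s_A
    return np.linalg.solve(G, XA.T @ (q + b*tau) - lam*sA)

def c_Ac(tau):
    # lam * s_{A^c}(tau): correlations of the inactive features with the residual
    return X[:, Ac].T @ (q + b*tau - XA @ beta_A(tau))

nu = np.linalg.solve(G, XA.T @ b)             # nu_A(tau_t)
gam = X[:, Ac].T @ (b - XA @ nu)              # gamma_{A^c}(tau_t)

d = 1e-3                                      # everything is exactly affine in tau
fd_beta = (beta_A(0.3 + d) - beta_A(0.3))/d
fd_c = (c_Ac(0.3 + d) - c_Ac(0.3))/d
```

The finite-difference slopes coincide with $\nu$ and $\gamma$ to machine precision, as they must for affine functions.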
\subsubsection{Derivation of step-size $\Delta_j$ in Equation (\ref{eq:step_length})}\label{LASSO:step-size_z_path}
\paragraph{Step-size of inclusion $\Delta_j^1$:}
Let's define \(\forall j \in \mcl{A}^c_{\tau_t}\), \(c_j(\tau_t) = x_j^{\top}\left( y(\tau_t) - X_{\mcl{A}_{\tau_t}}\beta_{\mcl{A}_{\tau_t}}(\tau_t) \right) \), then we can rewrite Equation (\ref{eq:opt_condn_shim_homotopy}) for non-active components as
\begin{equation}\label{eqn:tau_path_normal_eqn}
\begin{split}
-c_j(\tau_t) + \lambda s_j(\tau_t) = 0\\
\therefore \hspace{1cm} c_j(\tau_t) = \lambda s_j(\tau_t).
\end{split}
\end{equation}
Therefore, at any step \((\tau_{t} \rightarrow \tau_t + \Delta_j) \) any non-active feature \( (j \in \mathcal{A}^c_{\tau_t} ) \) becomes active when the following condition is satisfied, i.e.
\begin{equation}\label{eqn:tau_path_equi_corr_lamda}
| c_j(\tau_{t+1}) | = \lambda.
\end{equation}
Now, let's consider a linear approximation of \(c_j(\tau_{t+1})\) by considering the value \(c_j(\tau_t)\) at $\tau_t$ i.e.
\begin{align}
c_j(\tau_{t+1}) &= c_j(\tau_t) + (\tau_{t+1} - \tau_t) \frac{\partial c_j(\tau_t)}{\partial \tau} \nonumber \\
&= c_j(\tau_t) + (\tau_{t+1} - \tau_t) g_j(\tau_t). \label{eq:lin_approx}
\end{align}
where, \(g_j(\tau_t) = \frac{\partial c_j(\tau_t)}{\partial \tau}\).
By plugging (\ref{eq:lin_approx}) into (\ref{eqn:tau_path_equi_corr_lamda})
and expanding (\ref{eqn:tau_path_equi_corr_lamda}) separately for positive and negative terms we can write the step-size of inclusion as
\begin{equation*}
\begin{split}
\Delta_j^1 = \tau_{t+1} - \tau_t &= \underset{j \in \mathcal{A}_{\tau_t}^c}{\min} \Bigg( \frac{\lambda - c_j(\tau_t)}{g_j(\tau_t)}, \frac{-\lambda - c_j(\tau_t)}{g_j(\tau_t)} \Bigg)\\
&= \underset{j \in \mathcal{A}_{\tau_t}^c}{\min} \Bigg( \frac{\pm \lambda - c_j(\tau_t)}{g_j(\tau_t)} \Bigg)\\
&= \underset{j \in \mathcal{A}_{\tau_t}^c}{\min} \Bigg( \frac{\lambda \hspace{0.1cm} \text{sign}(g_j(\tau_t)) - c_j(\tau_t)}{g_j(\tau_t)} \Bigg)\\
&= \underset{j \in \mathcal{A}_{\tau_t}^c}{\min} \Bigg( \lambda \frac{ \text{sign}(\gamma_j(\tau_t)) - s_j(\tau_t)}{\gamma_j(\tau_t)} \Bigg), \enspace \text{using } c_j(\tau_t) = \lambda s_j(\tau_t) \text{ in Equation }(\ref{eqn:tau_path_normal_eqn}).
\end{split}
\end{equation*}
The last equality has been written considering the fact that \(g_j(\tau_t) = \gamma_j(\tau_t)\). The proof of this is given below.
\begin{proof}
\textnormal{We will now show that} \(g_j(\tau_t) = \gamma_j(\tau_t), \enspace \forall j \in \mathcal{A}^c_{\tau_t}\).
\textnormal{We know from (\ref{eq:gamma_appendix}) that \(\gamma_j(\tau_t), \enspace \forall j \in \mathcal{A}^c_{\tau_t}\), is}
\begin{equation*}
\begin{split}
\gamma_j(\tau_t) &= \frac{\lambda s_j(\tau_{t+1}) - \lambda s_j(\tau_t)}{\tau_{t+1} - \tau_t}, \\
&= \frac{c_j(\tau_{t+1}) - c_j(\tau_t)}{\tau_{t+1} - \tau_t}, \textnormal{ using } (\ref{eqn:tau_path_normal_eqn})\\
&= \frac{\partial c_j(\tau_t)}{\partial \tau},\\
&= g_j(\tau_t).
\end{split}
\end{equation*}
\end{proof}
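The inclusion step-size above is straightforward to evaluate numerically. The following sketch (a minimal illustration with synthetic numbers; the function name and inputs are ours, not from the paper's implementation) finds the smallest positive step at which an inactive correlation \(c_j\), moving linearly with slope \(g_j\), first reaches the boundary \(\pm\lambda\):

```python
import numpy as np

def inclusion_step(c, g, lam):
    """Smallest positive step at which c_j + delta * g_j hits +lam or -lam.
    Assumes |c_j| < lam for all inactive j (KKT of the current solution)."""
    c, g = np.asarray(c, float), np.asarray(g, float)
    # a growing correlation (g_j > 0) hits +lam; a shrinking one hits -lam
    deltas = (lam * np.sign(g) - c) / g
    deltas = np.where(deltas > 0, deltas, np.inf)  # guard degenerate slopes
    j = int(np.argmin(deltas))
    return deltas[j], j

# toy example: feature 1 reaches the boundary first
step, j = inclusion_step(c=[0.2, -0.5], g=[0.4, -1.0], lam=1.0)
# c_0 hits +1 at (1 - 0.2)/0.4 = 2.0 ; c_1 hits -1 at (-1 + 0.5)/(-1.0) = 0.5
```

The `np.where` guard mirrors the fact that only positive step sizes are admissible along the path.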
\paragraph{Step-size of deletion $\Delta_j^2$:}
A non-zero coefficient becomes zero, \ie
\(\exists j \in \mcl{A}_{\tau_t} \text{ such that }: \beta_j(\tau_t) \neq 0 \text{ and } \beta_j(\tau_{t+1}) = 0 \enspace.\)
\begin{align}\label{eq:lasso_step_size_tau_d2}
\therefore \quad \beta_j(\tau_{t+1}) &= \beta_j(\tau_t) + \Delta^2_j\nu_j(\tau_t) = 0 \Longrightarrow
\Delta^2_j = \min_{j \in \mcl{A}_{\tau_t}} \Bigg( - \frac{\beta_j(\tau_t)}{\nu_j(\tau_t)} \Bigg)_{++}.
\end{align}
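Equation (\ref{eq:lasso_step_size_tau_d2}) translates directly into code. This sketch (illustrative names and toy numbers, not from the paper's implementation) keeps only positive crossing times, mirroring the \((\cdot)_{++}\) operator:

```python
import numpy as np

def deletion_step(beta, nu):
    """Smallest positive step at which an active coefficient,
    moving as beta_j + delta * nu_j, crosses zero."""
    beta, nu = np.asarray(beta, float), np.asarray(nu, float)
    deltas = -beta / nu
    deltas = np.where(deltas > 0, deltas, np.inf)  # (.)_++ : positive part only
    j = int(np.argmin(deltas))
    return deltas[j], j

step, j = deletion_step(beta=[1.0, -0.3], nu=[-0.5, -0.2])
# beta_0 crosses zero at -1.0/(-0.5) = 2.0 ; beta_1 gives -1.5 -> excluded
```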
\subsubsection{Proofs of Lemmas \ref{lemma:lemma_1} and \ref{lemma:lemma_2} in \S3.2.}
We first prove Lemma \ref{lemma:lemma_1}.
The pruning condition at any node $\ell$ in (\ref{eqn:pruning_condn}) is
\begin{equation}\label{eqn:pruning_cond_node_l}
|\rho_{\ell}(\tau_{t}, \tau_{t+1})| + \Delta_{\ell} |\eta_{\ell}(\tau_t)| < |\rho_k(\tau_{t}, \tau_{t+1})| - \Delta_{\ell} |\eta_k(\tau_t)|.
\end{equation}
Let $\Delta^\ast_\ell$ be the current minimum step-size, i.e. \(\Delta^\ast_\ell = \underset{t \in \{1, 2, \ldots, \ell\}}{\min}\{\Delta_t\} \). If node $\ell$ is to yield the new minimum step-size, then we require $\Delta_{\ell} \leq \Delta^\ast_{\ell}$.
Therefore, by construction, we can write
\begin{equation*}
\begin{split}
|\rho_{\ell}(\tau_{t}, \tau_{t+1})| + \Delta_{\ell} |\eta_{\ell}(\tau_t)| &\leq |\rho_{\ell}(\tau_{t}, \tau_{t+1})| + \Delta^\ast_{\ell} |\eta_{\ell}(\tau_t)| \quad \text{and,}\\
|\rho_k(\tau_{t}, \tau_{t+1})| - \Delta^\ast_{\ell} |\eta_k(\tau_t)| &\leq |\rho_k(\tau_{t}, \tau_{t+1})| - \Delta_{\ell} |\eta_k(\tau_t)|.
\end{split}
\end{equation*}
Therefore, (\ref{eqn:pruning_cond_node_l}) is equivalent to
\begin{equation}\label{eqn:pruning_cond_node_l_opt_step}
|\rho_{\ell}(\tau_t, \tau_{t + 1})| + \Delta^\ast_\ell |\eta_\ell(\tau_t)| < |\rho_k(\tau_t, \tau_{t + 1})| - \Delta^\ast_\ell |\eta_k(\tau_t)|.
\end{equation}
This completes the proof of Lemma \ref{lemma:lemma_1}, which gives the pruning condition in terms of the current minimum step-size $\Delta^\ast_\ell$.
Note that we can further simplify Lemma \ref{lemma:lemma_1} as follows. We can write
\begin{equation}\label{eqn:rho_tau_t_plus_1}
\begin{split}
\rho_{\ell}(\tau_t, \tau_{t + 1}) &= x_{\ell}^{\top}\big(y(\tau_{t+1}) - X_{\mc{A}_{\tau_t}}\beta_{\mc{A}_{\tau_t}}(\tau_t)\big) \\
&= x_{\ell}^{\top}\big(y(\tau_{t}) + \Delta^\ast_{\ell}b - X_{\mc{A}_{\tau_t}}\beta_{\mc{A}_{\tau_t}}(\tau_t)\big) \textnormal{ using (\ref{eq:parametrized_response_vector})}\\
&= x_{\ell}^{\top}\big(y(\tau_{t}) - X_{\mc{A}_{\tau_t}}\beta_{\mc{A}_{\tau_t}}(\tau_t)\big) + \Delta^\ast_{\ell}x_{\ell}^{\top}b\\
&= \rho_{\ell}(\tau_t) + \Delta^\ast_{\ell}\theta_\ell,
\end{split}
\end{equation}
where $\theta_\ell = x_{\ell}^{\top}b$ and $\rho_\ell(\tau_t) = x_{\ell}^{\top}\big(y(\tau_{t}) - X_{\mc{A}_{\tau_t}}\beta_{\mc{A}_{\tau_t}}(\tau_t)\big) $.
We know that
\begin{equation}\label{eqn:traingular_inequality}
\begin{split}
|\rho_{\ell}(\tau_t) + \Delta^\ast_{\ell}\theta_\ell| &\geq |\rho_{\ell}(\tau_t)| - \Delta^\ast_{\ell}|\theta_\ell| \quad \text{and,}\\
|\rho_{\ell}(\tau_t) + \Delta^\ast_{\ell}\theta_\ell| &\leq |\rho_{\ell}(\tau_t)| + \Delta^\ast_{\ell}|\theta_\ell|.
\end{split}
\end{equation}
Now using (\ref{eqn:rho_tau_t_plus_1}) and (\ref{eqn:traingular_inequality}) we can further write (\ref{eqn:pruning_cond_node_l_opt_step}) as
\begin{align}\label{eqn:pruning_cond_node_l_opt_step2}
|\rho_{\ell}(\tau_t)| + \Delta^{\ast}_{\ell} |\theta_\ell| + \Delta^\ast_\ell |\eta_\ell(\tau_t)| &< |\rho_k(\tau_t)| - \Delta^\ast_{\ell} |\theta_k| - \Delta^\ast_{\ell} |\eta_k(\tau_t)|.
\end{align}
Therefore, (\ref{eqn:pruning_cond_node_l_opt_step2}) serves as the simplified form of Lemma \ref{lemma:lemma_1}.
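For concreteness, the simplified pruning test amounts to a single scalar comparison per node. The following is an illustrative transcription (the function name and the toy numbers are ours):

```python
def can_prune(rho_l, theta_l, eta_l, rho_k, theta_k, eta_k, delta_star):
    """Simplified pruning test: if it holds, node l (and, by the
    anti-monotonicity argument, its descendants) cannot attain a
    step size smaller than the current minimum delta_star."""
    lhs = abs(rho_l) + delta_star * abs(theta_l) + delta_star * abs(eta_l)
    rhs = abs(rho_k) - delta_star * abs(theta_k) - delta_star * abs(eta_k)
    return lhs < rhs

# toy numbers: node l has small correlation/slope, active k has a large margin
assert can_prune(rho_l=0.1, theta_l=0.1, eta_l=0.1,
                 rho_k=2.0, theta_k=0.1, eta_k=0.1, delta_star=0.5)
```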
Next, we provide two propositions which we use to prove Lemma \ref{lemma:lemma_2}.
\begin{proposition} \label{prop_1} (Tree anti-monotonicity)
A tree is constructed in such a way that for any pair of nodes (\(\ell, \ell^\prime)\), where $\ell$ is the ancestor of $\ell^\prime$, i.e., \(\ell^\prime \supset \ell \), the following conditions are satisfied
\begin{equation}\label{eq:tree_anti_monotonicity_prop}
\hst x_{i \ell^\prime} = 1 \implies x_{i\ell } = 1 \hst
\text{ and conversely,} \hst
x_{i\ell } = 0 \implies x_{i \ell^\prime} = 0 \quad \forall i \in [n].
\end{equation}
\end{proposition}
\begin{proposition}\label{prop_2}
If Proposition 1 holds, then $\forall \ell^\prime \supset \ell $, we have
\begin{align*}
\lvert \rho_\ell (\tau_t) \lvert &\geq \lvert \rho_{\ell^\prime} (\tau_t) \lvert, \\
\lvert \eta_\ell (\tau_t) \lvert &\geq \lvert \eta_{\ell^\prime} (\tau_t) \lvert, \\
|\theta_\ell | &\geq |\theta_{\ell^\prime}|.
\end{align*}
\end{proposition}
\paragraph{Proof for Proposition 2:}
If Proposition 1 holds, we have
\begin{align*}
\lvert \rho_\ell (\tau_t) \lvert &= \lvert x_\ell^\top (y(\tau_t) - X_{\cA_{\tau_t}} \beta_{\cA_{\tau_t}}(\tau_t)) \lvert, \\
& = \lvert x_\ell^\top w(\tau_t) \lvert, \\
& \geq \lvert x_{\ell^\prime}^\top w(\tau_t) \lvert \quad \text{ if }~ w(\tau_t) \geq 0,\\
& =: \lvert \rho_{\ell^\prime} (\tau_t) \lvert,
\end{align*}
where $w(\tau_t) = y(\tau_t) - X_{\cA_{\tau_t}} \beta_{\cA_{\tau_t}}(\tau_t)$.
Similarly, we also have
\begin{align*}
\lvert \eta_\ell (\tau_t) \lvert &= \lvert x_\ell^\top X_{\cA_{\tau_t}} \nu_{\cA_{\tau_t}} (\tau_t) \lvert, \\
& = \lvert x_\ell^\top v(\tau_t) \lvert, \\
& \geq \lvert x_{\ell^\prime}^\top v(\tau_t) \lvert \quad \text{ if }~ v(\tau_t) \geq 0,\\
& =: \lvert \eta_{\ell^\prime} (\tau_t) \lvert,
\end{align*}
where $v(\tau_t) = X_{\cA_{\tau_t}} \nu_{\cA_{\tau_t}} (\tau_t)$, and
\begin{align*}
\lvert \theta_\ell \lvert &= \lvert x_\ell^\top b \lvert, \\
& \geq \lvert x_{\ell^\prime}^\top b \lvert \quad \text{ if }~ b \geq 0,\\
&=: \lvert \theta_{\ell^\prime} \lvert.
\end{align*}
This completes the proof of Proposition 2.
Proposition 2 will be used to prove Lemma \ref{lemma:lemma_2}.
We will prove Lemma \ref{lemma:lemma_2} by contradiction, i.e. we assume that (\ref{eqn:pruning_cond_node_l_opt_step2}) holds and that there exists \(\ell^{\prime} \supset \ell\) with $\Delta_{\ell^{\prime}} < \Delta_\ell^{\ast}$.
\begin{equation*}
\begin{split}
\therefore \quad |\rho_k(\tau_t)| - \Delta_{\ell^{\prime}} |\theta_k| - \Delta_{\ell^{\prime}} |\eta_k(\tau_t)| &> |\rho_k(\tau_t)| - \Delta^\ast_{\ell} |\theta_k| - \Delta^\ast_{\ell} |\eta_k(\tau_t)|, \hst \because \Delta_{\ell^{\prime}} < \Delta_\ell^{\ast}\\
&> |\rho_{\ell}(\tau_t)| + \Delta^\ast_{\ell} |\theta_\ell| + \Delta^\ast_{\ell} |\eta_{\ell}(\tau_t)|, \hst \text{using } (\ref{eqn:pruning_cond_node_l_opt_step2})\\
&> |\rho_{\ell^{\prime}}(\tau_t)| + \Delta^\ast_{\ell} |\theta_{\ell^\prime}| + \Delta^\ast_{\ell} |\eta_{\ell^{\prime}}(\tau_t)|, \hst \text{using Proposition 2,}\\
&> |\rho_{\ell^{\prime}}(\tau_t)| + \Delta_{\ell^{\prime}} |\theta_{\ell^\prime}| + \Delta_{\ell^{\prime}} |\eta_{\ell^{\prime}}(\tau_t)|, \hst \because \Delta_{\ell^{\prime}} < \Delta_\ell^{\ast}.
\end{split}
\end{equation*}
Therefore, we obtain
\begin{equation*}
\begin{split}
&\therefore \quad |\rho_k(\tau_t)| - \Delta_{\ell^{\prime}} |\theta_k| - \Delta_{\ell^{\prime}} |\eta_k(\tau_t)| > |\rho_{\ell^{\prime}}(\tau_t)| + \Delta_{\ell^{\prime}} |\theta_{\ell^\prime}| + \Delta_{\ell^{\prime}} |\eta_{\ell^{\prime}}(\tau_t)|\\
&\implies \ell^{\prime} \text{ is infeasible} \hst (\text{using } (\ref{eqn:pruning_cond_node_l_opt_step2})) \implies \Delta_{\ell^{\prime}} \nless \Delta_{\ell}^{\ast}.
\end{split}
\end{equation*}
This completes the proof of Lemma \ref{lemma:lemma_2}.
If any of $w(\tau_t)$, $v(\tau_t)$ and $b$ in Proposition 2 contains at least one negative element, then we can no longer use Lemma 2.
Hence, using the following Proposition 3, we can propose Lemma 3 as a general pruning condition.
\begin{proposition} We can write
\begin{align*}
|\rho_\ell (\tau_t)| & \leq b_{\ell, w(\tau_t)}, \\
|\eta_\ell (\tau_t)| & \leq b_{\ell, v(\tau_t)}, \\
|\theta_\ell| & \leq b_{\ell, \theta},
\end{align*}
where
\begin{align*}
b_{\ell, w(\tau_t)} &= \max \big\{ \sum_{w_i(\tau_t) < 0} \lvert w_i(\tau_t) \lvert x_{i\ell}, \sum_{w_i(\tau_t) > 0} \lvert w_i(\tau_t) \lvert x_{i\ell} \big\} \\
b_{\ell, v(\tau_t)} &= \max \big\{ \sum_{v_i(\tau_t) < 0} \lvert v_i(\tau_t) \lvert x_{i\ell}, \sum_{v_i(\tau_t) > 0} \lvert v_i(\tau_t) \lvert x_{i\ell} \big\} \\
b_{\ell, \theta} &= \max \big\{ \sum_{b_i < 0} \lvert b_i \lvert x_{i\ell}, \sum_{b_i > 0} \lvert b_i \lvert x_{i\ell} \big\}.
\end{align*}
\end{proposition}
\paragraph{Proof of Proposition 3:}
We have
\begin{align*}
|\rho_\ell (\tau_t)| &= |x_\ell^\top w(\tau_t)| \\
&= \left | \sum \limits_{i=1}^n w_i(\tau_t) x_{i \ell} \right | \\
&= \left | \sum \limits_{w_i(\tau_t) > 0} |w_i(\tau_t)| x_{i \ell} - \sum \limits_{w_i(\tau_t) < 0} |w_i(\tau_t)| x_{i \ell}\right | \\
&\leq \max \left \{ \sum \limits_{w_i(\tau_t) > 0} |w_i(\tau_t)| x_{i \ell}, \sum \limits_{w_i(\tau_t) < 0} |w_i(\tau_t)| x_{i \ell} \right \} =: b_{\ell, w(\tau_t)}.
\end{align*}
Similarly,
\begin{align*}
|\eta_\ell (\tau_t)| &= |x_\ell^\top v(\tau_t)| \\
&= \left | \sum \limits_{i=1}^n v_i(\tau_t) x_{i \ell} \right | \\
&= \left | \sum \limits_{v_i(\tau_t) > 0} |v_i(\tau_t)| x_{i \ell} - \sum \limits_{v_i(\tau_t) < 0} |v_i(\tau_t)| x_{i \ell}\right | \\
&\leq \max \left \{ \sum \limits_{v_i(\tau_t) > 0} |v_i(\tau_t)| x_{i \ell}, \sum \limits_{v_i(\tau_t) < 0} |v_i(\tau_t)| x_{i \ell} \right \} =: b_{\ell, v(\tau_t)},
\end{align*}
and
\begin{align*}
|\theta_\ell| &= |x_\ell^\top b| \\
&= \left | \sum
\limits_{i=1}^n b_i x_{i \ell} \right | \\
&= \left | \sum
\limits_{b_i > 0} |b_i| x_{i \ell} - \sum \limits_{b_i < 0} |b_i| x_{i \ell}\right | \\
&\leq \max \left \{ \sum
\limits_{b_i > 0} |b_i| x_{i \ell}, \sum \limits_{b_i < 0} |b_i| x_{i \ell} \right \} =: b_{\ell, \theta}.
\end{align*}
This completes the proof of Proposition 3.
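For binary features \(x_{i\ell} \in \{0, 1\}\), the bounds of Proposition 3 are cheap to evaluate. The sketch below (illustrative, with synthetic data) computes \(b_{\ell, w(\tau_t)}\) and checks that it indeed upper-bounds \(|x_\ell^\top w(\tau_t)|\):

```python
import numpy as np

def node_bound(w, x):
    """max{ sum of |w_i| over positive w_i with x_il = 1,
            sum of |w_i| over negative w_i with x_il = 1 };
    upper-bounds |x_l^T w| for a binary column x_l."""
    w, x = np.asarray(w, float), np.asarray(x, float)
    pos = np.sum(np.abs(w[(w > 0) & (x == 1)]))
    neg = np.sum(np.abs(w[(w < 0) & (x == 1)]))
    return max(pos, neg)

rng = np.random.default_rng(0)
w = rng.normal(size=50)          # synthetic residual-like vector
x = rng.integers(0, 2, size=50)  # synthetic binary feature column
assert abs(x @ w) <= node_bound(w, x) + 1e-12
```

Since \(|x_\ell^\top w| = |\text{pos} - \text{neg}| \le \max(\text{pos}, \text{neg})\), the bound holds for any sign pattern of $w$.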
\begin{lemma} Using Proposition 3 we can show that
$\forall \ell^\prime \supset \ell$, if
\begin{align} \label{eq:lemma_3_eq}
b_{\ell, w(\tau_t)} + \Delta_\ell^\ast b_{\ell, \theta} + \Delta_\ell^\ast b_{\ell, v(\tau_t)} < |\rho_k(\tau_t)| - \Delta^\ast_\ell |\theta_k| - \Delta_\ell^\ast |\eta_k(\tau_t)|,
\end{align}
then $\Delta_{\ell^\prime} > \Delta_\ell^\ast$.
\end{lemma}
Before proving Lemma 3, we introduce Proposition 4 which will be used to prove Lemma 3:
\begin{proposition}
If Proposition 1 holds, we have
\begin{align*}
b_{\ell, w(\tau_t)} & \geq b_{\ell^\prime, w(\tau_t)}, \\
b_{\ell, v(\tau_t)} & \geq b_{\ell^\prime, v(\tau_t)}, \\
b_{\ell, \theta} & \geq b_{\ell^\prime, \theta}.
\end{align*}
\end{proposition}
\paragraph{Proof of Proposition 4:}
If Proposition 1 holds, we have
\begin{align*}
b_{\ell, w(\tau_t)} &= \max \big\{ \sum_{w_i(\tau_t) < 0} \lvert w_i(\tau_t) \lvert x_{i\ell}, \sum_{w_i(\tau_t) > 0} \lvert w_i(\tau_t) \lvert x_{i\ell} \big\} \\
&\geq \max \big\{ \sum_{w_i(\tau_t) < 0} \lvert w_i(\tau_t) \lvert x_{i\ell^\prime}, \sum_{w_i(\tau_t) > 0} \lvert w_i(\tau_t) \lvert x_{i\ell^\prime} \big\} =: b_{\ell^\prime, w(\tau_t)}.
\end{align*}
Similarly, we also have
\begin{align*}
b_{\ell, v(\tau_t)} &= \max \big\{ \sum_{v_i(\tau_t) < 0} \lvert v_i(\tau_t) \lvert x_{i\ell}, \sum_{v_i(\tau_t) > 0} \lvert v_i(\tau_t) \lvert x_{i\ell} \big\} \\
&\geq \max \big\{ \sum_{v_i(\tau_t) < 0} \lvert v_i(\tau_t) \lvert x_{i\ell^\prime}, \sum_{v_i(\tau_t) > 0} \lvert v_i(\tau_t) \lvert x_{i\ell^\prime} \big\} =: b_{\ell^\prime, v(\tau_t)},
\end{align*}
and
\begin{align*}
b_{\ell, \theta} &= \max \big\{ \sum_{b_i < 0} \lvert b_i \lvert x_{i\ell}, \sum_{b_i > 0} \lvert b_i \lvert x_{i\ell} \big\} \\
&\geq \max \big\{ \sum_{b_i < 0} \lvert b_i \lvert x_{i\ell^\prime}, \sum_{b_i > 0} \lvert b_i \lvert x_{i\ell^\prime} \big\} =: b_{\ell^\prime, \theta}.
\end{align*}
\paragraph{Proof of Lemma 3:}
We will prove Lemma 3 by contradiction, i.e. we assume that (\ref{eq:lemma_3_eq}) holds and that there exists \(\ell^{\prime} \supset \ell\) with $\Delta_{\ell^{\prime}} < \Delta_\ell^{\ast}$.
\begin{equation*}
\begin{split}
\therefore \quad |\rho_k(\tau_t)| - \Delta_{\ell^{\prime}} |\theta_k| - \Delta_{\ell^{\prime}} |\eta_k(\tau_t)| &> |\rho_k(\tau_t)| - \Delta^\ast_{\ell} |\theta_k| - \Delta^\ast_{\ell} |\eta_k(\tau_t)|, \hst \because \Delta_{\ell^{\prime}} < \Delta_\ell^{\ast}\\
&> b_{\ell, w(\tau_t)} + \Delta_\ell^\ast b_{\ell, \theta} + \Delta_\ell^\ast b_{\ell, v(\tau_t)}, \hst \text{using } (\ref{eq:lemma_3_eq})\\
&> b_{\ell^\prime, w(\tau_t)} + \Delta_{\ell}^\ast b_{\ell^\prime, \theta} + \Delta_\ell^\ast b_{\ell^\prime, v(\tau_t)}, \hst \text{using Proposition 4,}\\
&> b_{\ell^\prime, w(\tau_t)} + \Delta_{\ell^\prime} b_{\ell^\prime, \theta} + \Delta_{\ell^\prime}b_{\ell^\prime, v(\tau_t)}, \hst \because \Delta_{\ell^{\prime}} < \Delta_\ell^{\ast}.
\end{split}
\end{equation*}
Therefore, we obtain
\begin{equation*}
\begin{split}
&|\rho_k(\tau_t)| - \Delta_{\ell^{\prime}} |\theta_k| - \Delta_{\ell^{\prime}} |\eta_k(\tau_t)| > b_{\ell^\prime, w(\tau_t)} + \Delta_{\ell^\prime} b_{\ell^\prime, \theta} + \Delta_{\ell^\prime}b_{\ell^\prime, v(\tau_t)}\\
&\implies \ell^{\prime} \text{ is infeasible} \hst (\text{using } (\ref{eq:lemma_3_eq})) \implies \Delta_{\ell^{\prime}} \nless \Delta_{\ell}^{\ast}.
\end{split}
\end{equation*}
This completes the proof of Lemma 3.
Hence, if the pruning condition in Lemma 3 holds, then we do not need to search the sub-tree rooted at node $\ell$, which increases the efficiency of the search procedure \citep{ERP_Tsuda}.
\section{Extension for Elastic Net (ElNet)}
A common problem of the LASSO is that if the data has correlated features, the LASSO picks only one of them and ignores the rest, which leads to instability. To solve this problem, \cite{zou2005regularization} proposed the Elastic Net (ElNet). This feature-correlation problem is very much evident in SHIM-type problems, and hence we extend our framework to the Elastic Net, which requires solving the following optimization problem.
\begin{equation}\label{obj:primal_elnet}
\beta(\lambda, \tau) \in \argmin_{\beta \in \bbR^p} \frac{1}{2}\norm{y(\tau) - X\beta}_2^2 + \frac{1}{2}\alpha \norm{\beta}_2^2 + \lambda \norm{\beta}_1.
\end{equation}
\subsection{$\lambda$-path: path \wrt to $\lambda$ ($\tau$ fixed)}
Similar to the LASSO, the normal equation can be written as
\begin{align*}\label{eqn:ElNet_lmd_path_normal_eqn}
- X^{\top}\left( y - X\beta(\lambda) \right) + \alpha \beta(\lambda) + \lambda s(\lambda) &= 0,
\end{align*}
where \(s(\lambda)\) is the sub-differential, defined as in the $\lambda$-path of the LASSO (\ref{eq:sub-diff_lambda_LASSO}). Now, if we consider two $\lambda$ values (\(\lambda_t > \lambda_{t+1}\)) at which the active set does not change (i.e. $\mathcal{A}_{\lambda_t} = \mathcal{A}_{\lambda_{t+1}}$) and the signs of the active coefficients also remain the same (i.e. $s_{\mathcal{A}_{\lambda_t}}(\lambda_t) = s_{\mathcal{A}_{\lambda_{t}}}(\lambda_{t+1})$), then we can write
\begin{align}
\beta_{\mathcal{A}_{\lambda_{t}}} (\lambda_{t + 1}) - \beta_{\mathcal{A}_{\lambda_t}} (\lambda_t) &= -\nu_{\mathcal{A}_{\lambda_t}} (\lambda_t) (\lambda_{t+1} - \lambda_t),
\end{align}
where \(\nu_{\mathcal{A}_{\lambda_t}}(\lambda_t) = (X^{\top}_{\mathcal{A}_{\lambda_t}}X_{\mathcal{A}_{\lambda_t}} + \alpha I_{|\mathcal{A}_{\lambda_t}|})^{-1}s_{\mathcal{A}_{\lambda_t}}(\lambda_t)\). Note that the only change in the direction vector compared to the LASSO is the additional \(\alpha I_{|\mathcal{A}_{\lambda_t}|}\) term in the expression of \(\nu_{\mathcal{A}_{\lambda_t}}(\lambda_t)\). Now, similar to the LASSO, we can derive the step-size of deletion ($\Delta_j^2$) using this updated direction vector. However, to derive the step-size of inclusion ($\Delta_j^1$), we need a different approach: the elastic net optimization problem can be formulated as a LASSO optimization problem using augmented data.
If we consider an augmented data defined as \(\Tilde{X} = \begin{pmatrix} X\\ \sqrt{\alpha} I_p \end{pmatrix} \) and \(\Tilde{y} = \begin{pmatrix}y \\ 0 \end{pmatrix}\), then solving the elastic net optimization problem (\ref{obj:primal_elnet}) for a fixed $\tau$, is equivalent to solving the following problem.
\begin{equation}
\beta(\lambda) \in \argmin_{\beta \in \bbR^p} \frac{1}{2}\norm{\Tilde{y} - \Tilde{X}\beta}_2^2 + \lambda \norm{\beta}_1.
\end{equation}
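The equivalence rests on the identity \(\frac{1}{2}\norm{\Tilde{y} - \Tilde{X}\beta}_2^2 = \frac{1}{2}\norm{y - X\beta}_2^2 + \frac{1}{2}\alpha\norm{\beta}_2^2\) (the \(\ell_1\) term is unchanged). A quick numerical check on synthetic data:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, alpha = 30, 5, 0.7
X = rng.normal(size=(n, p))
y = rng.normal(size=n)
beta = rng.normal(size=p)  # arbitrary coefficient vector

# augmented design: sqrt(alpha)*I_p stacked below X, zeros appended to y
X_aug = np.vstack([X, np.sqrt(alpha) * np.eye(p)])
y_aug = np.concatenate([y, np.zeros(p)])

lhs = 0.5 * np.sum((y_aug - X_aug @ beta) ** 2)
rhs = 0.5 * np.sum((y - X @ beta) ** 2) + 0.5 * alpha * np.sum(beta ** 2)
assert np.isclose(lhs, rhs)
```

The augmented rows contribute residuals \(-\sqrt{\alpha}\beta_j\), whose squared sum is exactly the ridge term.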
Now, similar to the LASSO we can write the step-size of inclusion ($\Delta_j^1$) of the $\lambda$-path of ElNet using the augmented data ($\Tilde{X}, \Tilde{y}$) as
\begin{align}\label{eq:step_size_d1_elnet}
\Delta_j^1 &= \underset{j \in \mathcal{A}_{\lambda_t}^c}{\min} \Bigg( \frac{(\Tilde{x}_j - \Tilde{x}_k)^{\top}\Tilde{w}(\lambda_t)}{(\Tilde{x}_j - \Tilde{x}_k)^{\top}\Tilde{v}(\lambda_t)}, \frac{(\Tilde{x}_j + \Tilde{x}_k)^{\top}\Tilde{w}(\lambda_t)}{(\Tilde{x}_j + \Tilde{x}_k)^{\top}\Tilde{v}(\lambda_t)} \Bigg).
\end{align}
However, we cannot simply augment the data by stacking extra rows, as this becomes prohibitively expensive due to the combinatorial number of interaction features. Since we construct the high-order interaction model progressively, we need a different route to the step-size of inclusion ($\Delta_j^1$). We show below that it can be computed very efficiently as
\begin{align}
\Delta_j^1 &= \underset{j \in \mathcal{A}^c_{\lambda_t}}{\min} \Bigg( \frac{(x_j - x_k)^{\top}w(\lambda_t) + \alpha \beta_k}{(x_j - x_k)^{\top}v(\lambda_t) - \alpha \nu_k}, \frac{(x_j + x_k)^{\top}w(\lambda_t) - \alpha \beta_k}{(x_j + x_k)^{\top}v(\lambda_t) + \alpha \nu_k} \Bigg).
\end{align}
The derivation of the above step-size ($\Delta_j^1$) is given below.
\begin{proof}
\textnormal{Let us consider} \(\Tilde{w}(\lambda_t) = \Tilde{y} - \Tilde{X}_{\mcl{A}_{\lambda_t}}\beta_{\mcl{A}_{\lambda_t}}(\lambda_t) \in \mathbb{R}^{n+p}\) \textnormal{and} \(w(\lambda_t) = y - X_{\mcl{A}_{\lambda_t}}\beta_{\mcl{A}_{\lambda_t}}(\lambda_t) \in \mathbb{R}^{n}\), \textnormal{where} \(p = |\mathcal{A}_{\lambda_t}| + |\mathcal{A}^c_{\lambda_t}| \); \textnormal{then we can write}
\begin{equation}\label{eqn:aug_w}
\Tilde{w}_i(\lambda_t) = \begin{cases}
w_i(\lambda_t) \quad \quad \textnormal{if} \quad i \leq n, \\
-\sqrt{\alpha}\beta_j \quad \hspace{0.2cm} \textnormal{if} \quad n < i \leq n + |\mathcal{A}_{\lambda_t}|,\\
0 \quad \quad \quad \hspace{0.45cm} \textnormal{if} \quad n + |\mathcal{A}_{\lambda_t}| < i \leq n + p.
\end{cases}
\end{equation}
\textnormal{Similarly, considering \(\Tilde{v}(\lambda_t) = \Tilde{X}\nu(\lambda_t) \in \mathbb{R}^{n+p}\) and \(v(\lambda_t) = X\nu(\lambda_t) \in \mathbb{R}^{n}\), we can write}
\begin{equation}\label{eqn:aug_v}
\Tilde{v}_i(\lambda_t) = \begin{cases}
v_i(\lambda_t) \quad \quad \hspace{0.08cm} \textnormal{if} \quad i \leq n ,\\
\sqrt{\alpha}\nu_j \quad \quad \hspace{0.1cm} \textnormal{if} \quad n < i \leq n + |\mathcal{A}_{\lambda_t}|,\\
0 \quad \quad \quad \hspace{0.42cm} \textnormal{if} \quad n + |\mathcal{A}_{\lambda_t}| < i \leq n + p.
\end{cases}
\end{equation}
\textnormal{and, considering \(\Tilde{X} \in \mathbb{R}^{(n+p) \times p}\) and \(X \in \mathbb{R}^{n \times p}\), we can write}
\begin{equation}\label{eqn:aug_X}
\Tilde{x}_{ij} = \begin{cases}
x_{ij} \quad \quad \hspace{0.1cm} \textnormal{if} \quad i \leq n, \\
\sqrt{\alpha} \quad \quad \hspace{0.03cm} \textnormal{if} \quad i > n \enspace \textnormal{and} \enspace (i-n)=j,\\
0 \hspace{1.05cm} \textnormal{otherwise}.
\end{cases}
\end{equation}
\textnormal{Therefore, in (\ref{eq:step_size_d1_elnet}) we can write that} \(\forall j \in [p]\)
\begin{align*}
\Tilde{x}_j^{\top} \Tilde{w}(\lambda_t) &= \sum_{i=1}^{n+p} \Tilde{w}_i(\lambda_t) \Tilde{x}_{ij}, \\
&=\sum_{i=1}^n \Tilde{w}_i(\lambda_t) \Tilde{x}_{ij} + \sum_{\substack{i=n+1}}^{n+|\mathcal{A}_{\lambda_t}|} \Tilde{w}_i(\lambda_t) \Tilde{x}_{ij} + \sum_{i=n+|\mathcal{A}_{\lambda_t}|+1}^{n+p} \Tilde{w}_i(\lambda_t) \Tilde{x}_{ij}.
\end{align*}
\textnormal{Now, using (\ref{eqn:aug_w}) and (\ref{eqn:aug_X}), the second and third terms in the above expression can be written as}
\begin{equation*}
\sum_{\substack{i=n+1}}^{n+|\mathcal{A}_{\lambda_t}|} \Tilde{w}_i (\lambda_t) \Tilde{x}_{ij} =\begin{cases}
(-\sqrt{\alpha}\beta_j)(\sqrt{\alpha}) = -\alpha\beta_j, \hspace{0.5cm} \textnormal{if} \enspace j \in \mathcal{A}_{\lambda_t},\\
0 \hspace{3.6cm} \textnormal{otherwise},
\end{cases}
\end{equation*}
\textnormal{and,}
\begin{equation*}
\sum_{i=n+|\mathcal{A}_{\lambda_t}|+1}^{n+p} \Tilde{w}_i(\lambda_t) \Tilde{x}_{ij} = 0.
\end{equation*}
\textnormal{Therefore,}
\begin{equation*}
\Tilde{x}_j^{\top} \Tilde{w}(\lambda_t) = \sum_{i=1}^n w_i(\lambda_t) x_{ij}, \enspace \forall j \in \mathcal{A}^c_{\lambda_t} \enspace \textnormal{and} \enspace \Tilde{x}_k^{\top} \Tilde{w}(\lambda_t) = \sum_{i=1}^n w_i(\lambda_t) x_{ik} - \alpha \beta_k, \enspace \forall k \in \mathcal{A}_{\lambda_t}.
\end{equation*}
\textnormal{Similarly, using (\ref{eqn:aug_v}) and (\ref{eqn:aug_X}) we can write}
\begin{equation*}
\Tilde{x}_j^{\top} \Tilde{v}(\lambda_t)= \sum_{i=1}^n v_i (\lambda_t) x_{ij}, \enspace \forall j \in \mathcal{A}_{\lambda_t}^c \hspace{0.5cm} \textnormal{and} \hspace{0.5cm} \Tilde{x}_k^{\top} \Tilde{v}(\lambda_t)= \sum_{i=1}^n v_i(\lambda_t) x_{ik} + \alpha \nu_k, \enspace \forall k \in \mathcal{A}_{\lambda_t}.
\end{equation*}
\textnormal{Therefore the step-size of inclusion can be written as}
\begin{align}
\Delta_j^1 &= \underset{j \in \mathcal{A}_{\lambda_t}^c}{\min} \Bigg( \frac{(x_j - x_k)^{\top}w(\lambda_t) + \alpha \beta_k}{(x_j - x_k)^{\top}v(\lambda_t) - \alpha \nu_k}, \frac{(x_j + x_k)^{\top}w(\lambda_t) - \alpha \beta_k}{(x_j + x_k)^{\top}v(\lambda_t) + \alpha \nu_k} \Bigg).
\end{align}
\end{proof}
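The correction terms derived above can be sanity-checked numerically. The sketch below (synthetic data; the truncated \(\Tilde{w}\) omits the trailing zero block, which contributes nothing to any inner product) verifies that \(\Tilde{x}_k^{\top}\Tilde{w} = x_k^{\top}w - \alpha\beta_k\) for every active \(k\):

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, alpha = 20, 4, 0.5
A = [0, 2]                                   # illustrative active set
X = rng.normal(size=(n, p))
y = rng.normal(size=n)
beta_A = rng.normal(size=len(A))

w = y - X[:, A] @ beta_A                     # residual on the original data
# augmented residual: rows n..n+|A|-1 equal -sqrt(alpha)*beta_k
w_tilde = np.concatenate([w, -np.sqrt(alpha) * beta_A])

for idx, k in enumerate(A):
    # augmented column k carries sqrt(alpha) only in its own extra row
    x_tilde_k = np.concatenate([X[:, k],
                                np.sqrt(alpha) * (np.arange(len(A)) == idx)])
    assert np.isclose(x_tilde_k @ w_tilde, X[:, k] @ w - alpha * beta_A[idx])
```

Inactive columns hit only zero entries of \(\Tilde{w}\) in the augmented block, so their inner products are unchanged, matching the derivation.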
\subsubsection{Tree pruning ($\lambda$-path)}
Similar to the LASSO (\ref{eq:lasso_tree_pruning_lmd_path1}) we can use the following inequality in augmented data (\(\Tilde{X}, \Tilde{y}\)) as the pruning criteria for the $\lambda$-path of ElNet.
\begin{equation}\label{cond:pruning}
\lvert \Tilde{\rho}_{\ell} \lvert + \Delta^{\ast}_{\ell} \lvert \Tilde{\eta}_{\ell} \lvert < \lvert \Tilde{\rho}_k \lvert - \Delta^{\ast}_{\ell} \lvert\Tilde{\eta}_k \lvert,
\end{equation}
where \(\Tilde{\rho}_{\ell} = \Tilde{x}_{\ell}^{\top}\Tilde{w}(\lambda_t) \) and \(\Tilde{\eta}_{\ell} =\Tilde{x}_{\ell}^{\top} \Tilde{v}(\lambda_t), \enspace \forall \ell \in \mathcal{A}^c_{\lambda_t} \); \(\Tilde{\rho}_k = \Tilde{x}_{k}^{\top}\Tilde{w}(\lambda_t) \) and \(\Tilde{\eta}_k = \Tilde{x}_k^{\top} \Tilde{v}(\lambda_t), \enspace \forall k \in \mathcal{A}_{\lambda_t} \). Now, using (\ref{eqn:aug_w}), (\ref{eqn:aug_v}) and (\ref{eqn:aug_X}) we can show that
\begin{align*}
\Tilde{\rho}_{\ell}(\lambda_t) = \sum_{i=1}^n w_i(\lambda_t) x_{i\ell}, \enspace \Tilde{\eta}_{\ell}(\lambda_t) = \sum_{i=1}^n v_i(\lambda_t) x_{i\ell} \enspace \text{and},\\
\Tilde{\rho}_k(\lambda_t) = \sum_{i=1}^n w_i(\lambda_t) x_{ik} - \alpha \beta_k, \enspace \Tilde{\eta}_k(\lambda_t) = \sum_{i=1}^n v_i(\lambda_t) x_{ik} + \alpha \nu_k.
\end{align*}
Therefore, the pruning condition (\ref{cond:pruning}) can be rewritten as
\begin{multline*}
|\sum_{i=1}^n w_i(\lambda_t) x_{i\ell}| + \Delta^{\ast}_{\ell} |\sum_{i=1}^n v_i(\lambda_t) x_{i\ell}| < |\sum_{i=1}^n w_i(\lambda_t) x_{ik} - \alpha \beta_k| - \Delta^{\ast}_{\ell} |\sum_{i=1}^n v_i(\lambda_t) x_{ik} + \alpha \nu_k|.
\end{multline*}
Now, similar to the LASSO (\ref{eq:lasso_pruning_cond_lmd_path3}) we can also write
\begin{equation}\label{eq:Elnet_pruning_lamda_path}
b_{w(\lambda_t)} + \Delta^{\ast}_{\ell} b_{v(\lambda_t)} < |\Bar{\rho}_k(\lambda_t)| - \Delta^{\ast}_{\ell}|\Bar{\eta}_k(\lambda_t)|,
\end{equation}
where \(\Bar{\rho}_k(\lambda_t) = \sum_{i=1}^n w_i(\lambda_t) x_{ik} - \alpha \beta_k\), \(\Bar{\eta}_k(\lambda_t) = \sum_{i=1}^n v_i(\lambda_t) x_{ik} + \alpha \nu_k\), and
\begin{align*}
b_{w(\lambda_t)} &= \max \big\{ \sum_{w_i(\lambda_t) < 0} |w_i(\lambda_t)| x_{i\ell}, \sum_{w_i(\lambda_t) > 0} |w_i(\lambda_t)| x_{i\ell} \big\}, \\
b_{v(\lambda_t)} &= \max \big\{ \sum_{v_i(\lambda_t) < 0} |v_i(\lambda_t)| x_{i\ell}, \sum_{v_i(\lambda_t) > 0} |v_i(\lambda_t)| x_{i\ell} \big\}.
\end{align*}
Therefore, (\ref{eq:Elnet_pruning_lamda_path}) can be used as the pruning condition for the $\lambda$-path of ElNet.
\subsection{$\tau$-path: path \wrt to $\tau$ ($\lambda$ fixed)}
If we consider two real values $\tau_t$ and $\tau_{t+1}$ (\(\tau_{t+1}>\tau_t\)) at which the active set does not change and the signs of the active coefficients also remain the same, then we can write
\begin{align*}
\beta_{\mathcal{A}_{\tau_{t}}}(\tau_{t+1}) - \beta_{\mathcal{A}_{\tau_t}}(\tau_t) = \nu_{\mathcal{A}_{\tau_t}}(\tau_t)(\tau_{t+1} - \tau_t),\\
\lambda s_{\mathcal{A}^c_{\tau_{t}}}(\tau_{t+1}) - \lambda s_{\mathcal{A}^c_{\tau_t}}(\tau_t) = \gamma_{\mathcal{A}^c_{\tau_t}}(\tau_t)(\tau_{t+1} - \tau_t).
\end{align*}
where \(\nu_{\mathcal{A}_{\tau_t}}(\tau_t) = (X_{\mathcal{A}_{\tau_t}}^{\top}X_{\mathcal{A}_{\tau_t}} + \alpha I_{|\mathcal{A}_{\tau_t}|})^{-1}X_{\mathcal{A}_{\tau_t}}^{\top} b \) and \(\gamma_{\mathcal{A}^c_{\tau_t}}(\tau_t) = X_{\mathcal{A}^c_{\tau_t}}^{\top} b - X_{\mathcal{A}^c_{\tau_t}}^{\top}X_{\mathcal{A}_{\tau_t}}\nu_{\mathcal{A}_{\tau_t}}(\tau_t)\).
Note that here, too, the only change compared to the LASSO (\ref{LASSO:direction_vector_tau_path}) is the additional \(\alpha I_{|\mathcal{A}_{\tau_t}|}\) term in the expression of \(\nu_{\mathcal{A}_{\tau_t}}\). One can derive analogous expressions for the step-sizes of inclusion and deletion, as done for the LASSO (\ref{LASSO:step-size_z_path}), by using the updated expressions of \(\nu_{\mathcal{A}_{\tau_t}}(\tau_t)\) and \(\gamma_{\mathcal{A}^c_{\tau_t}}(\tau_t) \).
\subsubsection{Tree pruning ($\tau$-path)}
Similar to the LASSO (\ref{eq:lemma_3_eq}), by using (\ref{eqn:aug_w}), (\ref{eqn:aug_v}) and (\ref{eqn:aug_X}) the pruning condition for the $\tau$-path of ElNet can be written as
\begin{equation}\label{eqn:elnet_pruning_condn_tau_path3}
b_{\ell, w(\tau_t)} + \Delta_\ell^\ast b_{\ell, \theta} + \Delta_\ell^\ast b_{\ell, v(\tau_t)} < \lvert \Bar{\rho}_k(\tau_t) \lvert - \Delta_{\ell}^{\ast}\lvert \theta_k \lvert - \Delta_{\ell}^{\ast} \lvert \Bar{\eta}_k(\tau_t) \lvert,
\end{equation}
where \(\Bar{\rho}_k(\tau_t) = \sum_{i=1}^n w_i(\tau_t) x_{ik} - \alpha \beta_k\), \(\Bar{\eta}_k(\tau_t) = \sum_{i=1}^n v_i(\tau_t) x_{ik} + \alpha \nu_k\).
\section{Additional Results}
Here we report additional results using real-world HIV-1 sequence data from the Stanford HIV Drug Resistance Database \citep{rhee2003human}. This dataset contains three classes of drug data, NRTIs, NNRTIs and PIs, consisting of 16 drugs in total. Finding virus-induced mutations that lead to drug resistance is crucial to drug development. However, drug resistance is a complex biological phenomenon, and it is often reported in the literature \citep{rhee2006genotypic, ERP_Tsuda, suzumura2017selective} that it is the association of multiple mutations, along with some crucial single mutations, that best describes the phenomenon. Hence, it is important to understand the association of multiple mutations related to drug resistance. In our experiment we used 6 NRTI, 1 NNRTI and 3 PI drugs. We reported the results on 3 NRTI drugs in the main article; here we include the results on the remaining 3 NRTIs (Fig.~\ref{fig:stats_NRTIs2}), 3 PIs (Fig.~\ref{fig:stats_PIs}) and 1 NNRTI (Fig.~\ref{fig:stats_NNRTI}). The continuous drug-resistance values correspond to the response ($y \in \mb{R}$) and the binary mutations correspond to the original features ($z \in \mb{R}^m$) in our experimental settings.
\begin{figure}[h!]
\centering
\includegraphics[width=0.95\linewidth]{figures/NRTs2.pdf}
\caption{Comparison of statistical powers (Homotopy vs Polytope). (a.1-a.3) show the percentage of cases where selection bias corrected p-values and confidence interval lengths of the proposed method (Homotopy) was smaller than that of the existing method (Polytope) in random sub-sampling experiments. (b.1-b.3) show the distributions of the confidence interval lengths of the same experiments.}
\label{fig:stats_NRTIs2}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=0.95\linewidth]{figures/PIs.pdf}
\caption{Comparison of statistical powers (Homotopy vs Polytope). (a.1-a.3) show the percentage of cases where selection bias corrected p-values and confidence interval lengths of the proposed method (Homotopy) was smaller than that of the existing method (Polytope) in random sub-sampling experiments. (b.1-b.3) show the distributions of the confidence interval lengths of the same experiments.}
\label{fig:stats_PIs}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=0.4\linewidth]{figures/NNRTI.pdf}
\caption{Comparison of statistical powers (Homotopy vs Polytope). (a.1-a.3) show the percentage of cases where selection bias corrected p-values and confidence interval lengths of the proposed method (Homotopy) was smaller than that of the existing method (Polytope) in random sub-sampling experiments. (b.1-b.3) show the distributions of the confidence interval lengths of the same experiments.}
\label{fig:stats_NNRTI}
\end{figure}
In Table~\ref{table:comp_eff_kink_numbers} we demonstrate the computational advantage of the proposed homotopy method over the existing method of conditioning on the model \citep{lee2016exact}.
%
In this experiment the $\lambda$-path was constructed until the active set ($\mcl{A}$) contains 20 features and subsequently that active set and the corresponding $\lambda$ value is used for the construction of the $\tau$-path.
%
The method of \cite{lee2016exact} needs to consider the union of all possible sign patterns of the observed active set ($\mcl{A}$) in order to condition on the model. In contrast, our homotopy mining needs to consider only $\sim$120 polytopes (worst case) for the same task.
\begin{table}[h!]
\centering
\begin{tabular}{ |c|c|c| }
\hline
High-order interactions & \shortstack{Homotopy \\ (\# kinks)} & \shortstack{Polytope \\ (\# polytopes)} \\
\hline
$1^{st}$ & $104.15 \pm 10.73$ & $2^{20}$\\
\hline
$2^{nd}$ &$101.0 \pm 4.64$ & $2^{20}$\\
\hline
$3^{rd}$ & $78.33 \pm 24.69$ & $2^{20}$\\
\hline
\end{tabular}
\caption{Comparison of computational efficiencies of the proposed homotopy method against the existing polytope method. The ``\# kinks'' column reports the average number of kinks encountered during the construction of the $\tau$-path for each test-statistic direction, whereas the ``\# polytopes'' column reports the number of all possible sign patterns one needs to consider to condition on the model.}
\label{table:comp_eff_kink_numbers}
\end{table}
We note that, theoretically, the complexity of the homotopy method grows exponentially in the worst case. This is a common issue in homotopy-based methods such as regularization-path computation. Fortunately, it is well recognized \citep{le2021parametric} that this worst case rarely happens in practice, which is also evident from our experimental results.
Similar to the pruning, empirical evidence also demonstrates that the homotopy method is more efficient for high-order interaction terms than for singleton terms, and the efficiency increases with the order of interaction.
We suspect that as the order of interaction increases, the sparsity of the data also increases, which significantly affects the construction of the $\tau$-path, as evident from the effectiveness of both the pruning and the homotopy method. However, more theoretical investigation is required for a clear understanding of this phenomenon, which we believe is worth considering in the future.
\section{Introduction}
Blackbox models such as deep neural network models generally have high predictive performance but are difficult to interpret and hence, often considered unreliable. Therefore, for tasks that require high-stake decision-making, such as medical diagnosis and automated driving, models with higher interpretability and reliability are required. As one of the interpretable and reliable models with good prediction ability, we consider Sparse High-order Interaction Model (SHIM) in this study. Considering a regression problem with a response $y$ and $m$ original covariates $z_1, \ldots, z_m$, an example SHIM up to $4^{th}$ order interactions can be written as
\begin{equation}\label{eq:shim_eg1}
y = \beta_1 z_3 + \beta_2 z_5 + \beta_3 z_2z_6 + \beta_4 z_1z_2z_5z_9 .
\end{equation}
where \(\beta_1, \beta_2, \beta_3, \beta_4\) are the model parameters (or coefficients). Such a SHIM has practical importance, such as identifying complex genotypic features for HIV-1 drug resistance \citep{saigo2007mining}. HIV-1 evolves in the human body, and exposure to certain drugs causes mutations that lead to resistance against those drugs. Structural biological studies show that it is the association of multiple mutations, along with some crucial single mutations, that best describes the complex biological phenomenon of drug resistance
\citep{vivet2006nucleoside, iversen1996multidrug, rhee2006genotypic}.
The goal of this study is to fit a SHIM such as (\ref{eq:shim_eg1}) to the given data and subsequently perform statistical significance tests to judge the reliability of the model parameters. However, unless the original dimension and the order of interactions are small, fitting a high-order interaction model is challenging, and computational tricks are required to avoid the combinatorial explosion.
Another challenge of data-driven modeling is assessing the reliability of the findings, because the model might have cherry-picked strong associations given a particular realization of the data.
This is called the ``cherry-picking'' effect, a.k.a.\ selection bias \citep{taylor2015statistical}.
Traditional statistical inference, which assumes that the statistical model and the target for which inferences are conducted are fixed {\it a priori}, cannot be used for this problem.
Any inference conducted after model selection will suffer from selection bias unless it is corrected.
\textbf{Related works:}
Several approaches have been suggested in the literature to address the above problem \citep{fithian2014optimal, fithian2015selective, choi2017selecting, tian2018selective, chen2020valid, hyun2018post, loftus2014significance, loftus2015selective, panigrahi2016bayesian, tibshirani2016exact, yang2016selective}.
A particularly notable approach is \emph{conditional} SI introduced in the seminal paper by \citet{lee2016exact}. The basic idea of conditional SI is to make inference on a data-driven hypothesis conditional on the selection event that the hypothesis is selected.
\cite{lee2016exact} first proposed conditional SI methods for the selected features by using Lasso.
Their basic idea is to characterize the selection event by a polytope, i.e., a set of linear inequalities, in the sample space.
When a selection event can be characterized by a polytope, practical computational methods developed by these authors can be used for making inferences of the selected hypotheses conditional on the selection events.
However, the polytope-based conditional SI framework has a serious drawback called the \emph{over-conditioning} issue: additional extra events must be introduced to characterize the selection event by a single polytope, which is known to lead to a loss of statistical power, i.e., it is \emph{statistically sub-optimal}~\citep{fithian2014optimal}.
The work by \cite{suzumura2017selective}, who first applied polytope-based SI to high-order interaction models when a high-order interaction feature is sequentially added to the model, also suffers from this problem. As a remedy in the case of the Lasso, \cite{lee2016exact} proposed taking the union over all possible signs of the selected features. However, unless the number of selected features is small, this is computationally expensive and, for SHIM-type problems, impractical due to combinatorial effects.
Recently, \cite{le2021parametric} introduced a homotopy method that resolves the over-conditioning issue and realizes minimally-conditioned SI for the Lasso.
Our basic idea for identifying statistically reliable high-order interaction features in a sparse modeling framework is to employ an \emph{exact homotopy}-based SI method for SHIM. Unfortunately, the computational cost of applying the exact homotopy method to SHIM increases exponentially, and it becomes intractable unless the number of selected features and the maximum order of interactions are fairly small. Several methods have already been proposed for fitting a SHIM \citep{saigo2009gboost, ERP_Tsuda, nakagawa2016safe}.
\textbf{Contribution:}
Our main contribution in this paper is to introduce a ``homotopy mining'' method by exploiting the best of both homotopy and (pattern) mining methods for conditional SI for SHIM.
This approach is motivated by the exact regularization path computation algorithm for graph data~\citep{ERP_Tsuda}, which is considered as a homotopy method with respect to the regularization parameter.
In the algorithm of our proposed method, we use two types of homotopy mining methods, one for fitting a SHIM on the observed dataset (which is essentially the same as the approach in \cite{ERP_Tsuda}) and, another for computing the sampling distribution of the test-statistic conditional on the selection event.
Interestingly, these two types of homotopy mining methods share many common properties, such as branch-and-bound techniques for pruning the high-order interaction tree (see Fig.~\ref{fig:tree}). We applied our proposed method to synthetic and real-world HIV-1 drug resistance data and demonstrate in \S4 that we can quantify the statistical significance of high-order interaction features in the form of $p$-values and confidence intervals without any computational or statistical approximations. In an experimental study of the inference stage, we show that a single traversal of a search space of more than $10^{10}$ high-order interaction terms (sample size $n=625$) took less than 240 sec (worst case) and 78 sec (best case) on average using an Intel Xeon Gold 6230 CPU @ 2.10 GHz. We also extended this framework to the Elastic Net optimization problem, which is non-trivial because we cannot follow the common approach of data augmentation by stacking extra rows, as this can be prohibitively expensive due to combinatorial effects.
\section{Problem Statement}
Consider a regression problem with a response vector $y \in \RR^n$ and
$m$ original covariate vectors $z_1, \ldots, z_m$, where \(z_{j} \in \mb{R}^n\) and $j \in [m] = \{1, ..., m\}$. Then, a high-order interaction model up to $d^{\rm th}$ order is written as
\begin{equation}\label{eq:shim_model}
\begin{split}
\hspace{-2.5mm} y =
\sum_{j_1 \in [m]} \alpha_{j_1} z_{j_1}
+ \sum_{ \substack{ (j_1, j_2) \in [m] \times [m] \\ j_1 \neq j_2}} \hspace{-2.5mm} \alpha_{j_1, j_2} z_{j_1} \circ z_{j_2}
+ \cdots +
\sum_{\substack{(j_1, ..., j_d) \in [m]^d \\ j_1 \neq ... \neq j_d}} \hspace{-2.5mm} \alpha_{j_1, \ldots, j_d} z_{j_1} \circ \cdots \circ z_{j_d},
\end{split}
\end{equation}
where $\circ$ is the element-wise product and scalar $\alpha$s are the coefficients.
In this paper, we assume that each element of the original covariate vectors $z_{j} \in \RR^n$, $j \in [m]$, lies in the interval $[0, 1]$.
To simplify notation, it is convenient to write the high-order interaction model in (\ref{eq:shim_model}) by using the following matrix of concatenated vectors of all high-order interactions:
\[
X = [\underbrace{z_1, \ldots, z_m}_{1\text{\ts{st} order}}, \underbrace{ z_1 \circ z_2, \ldots, z_{m-1} \circ z_m}_{2\text{\ts{nd} order}},
\cdots,
\underbrace{z_1 \circ \cdots \circ z_d, \ldots, z_{m-d+1} \circ \cdots \circ z_m}_{d\text{\ts{th} order}}] \in \RR^{n \times p},
\]
where \(p \coloneqq \sum_{\kappa=1}^d {m \choose \kappa}\).
Similarly, the coefficient vector associated with all possible high-order interaction terms can be written as:
\[
\beta:= [ \underbrace{ \alpha_1, \ldots, \alpha_m}_{1\text{\ts{st} order}}, \underbrace{\alpha_{1,2}, \ldots, \alpha_{m-1, m}}_{2\text{\ts{nd} order}},
\cdots,
\underbrace{\alpha_{1, \ldots ,d}, \ldots, \alpha_{m-d+1, \ldots, m}}_{d\text{\ts{th} order}}]^\top \in \RR^p.
\]
The high-order interaction model (\ref{eq:shim_model}) is then simply written as a linear model
$
y = X \beta.
$
Unfortunately, $p$ can be prohibitively large unless both $m$ and $d$ are fairly small.
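As an illustrative sketch (not part of the paper's algorithmic contribution), the expanded design matrix $X$ can be built by taking element-wise products over all index subsets of size $1$ to $d$; the helper name below is ours, and the construction makes the combinatorial growth of $p = \sum_{\kappa=1}^d \binom{m}{\kappa}$ explicit:

```python
# Sketch: expand m original covariates into all interaction columns up to
# order d via element-wise products. Enumerating every subset is exactly
# what becomes intractable for large m and d.
import itertools
import numpy as np

def expand_interactions(Z, d):
    """Z: (n, m) matrix with entries in [0, 1]; returns the (n, p) matrix X
    whose columns are element-wise products over all index subsets of size
    1..d, with p = sum_{k=1}^{d} C(m, k), together with the subset names."""
    n, m = Z.shape
    cols, names = [], []
    for k in range(1, d + 1):
        for idx in itertools.combinations(range(m), k):
            cols.append(np.prod(Z[:, list(idx)], axis=1))
            names.append(idx)
    return np.column_stack(cols), names

Z = np.random.default_rng(0).random((5, 4))
X, names = expand_interactions(Z, 2)
# p = C(4,1) + C(4,2) = 4 + 6 = 10 columns
```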
In SHIM, we consider a sparse estimation of high-order interaction model.
An example of SHIM looks like
\begin{equation}\label{eq:shim_eg}
y = \alpha_3z_{3} + \alpha_5z_{5} + \alpha_{2,6}z_{2}z_{6} + \alpha_{1,2,5,9}z_{1}z_{2}z_{5}z_{9}.
\end{equation}
The goal of this study is to fit a SHIM such as (\ref{eq:shim_eg}) and test the statistical significance of the coefficients of the selected model (in the above example, \( \alpha_3, \alpha_5, \alpha_{2,6}, \alpha_{1,2,5,9} \)) in order to quantify the reliability.
Unfortunately, both fitting and testing a SHIM are non-trivial because, unless both $m$ and $d$ are very small, a high-order interaction model will have an extremely large number of parameters to be considered.
Several algorithms for fitting a sparse high-order interaction model were proposed in the literature (see \S1).
A common approach taken in these existing works is to exploit the hierarchical structure of high-order interaction features.
In other words, a tree structure as in Fig.~\ref{fig:tree}(a) is considered and a branch-and-bound strategy is employed in order to avoid handling all the exponentially increasing number of high-order interaction features.
Here, we introduce an algorithm for conditional SI to quantify the statistical significance of the fitted SHIM coefficients such as $\alpha_3, \alpha_5, \alpha_{2,6}, \alpha_{1,2,5,9}$ in the form of $p$-values or confidence intervals by using homotopy-based SI.
However, due to the extremely large number of features in \eq{eq:shim_model}, it is intractable to characterize the selection event for homotopy-based SI.
In order to overcome this challenge, we develop a \emph{homotopy mining} method which effectively combines the homotopy method and a branch-and-bound strategy on the cherry tree.
Before delving into our proposed method, we briefly overview conditional SI.
\subsection{Selective Inference and Homotopy Method}
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{figures/homotopy-mining.pdf}
\caption{(a) A cherry tree of patterns constructed by exploiting the hierarchical structure of high-order interaction features. Not all nodes are traversed, due to pruning. (b) The conditional data space is restricted to a line. In the figure it is restricted along the horizontal ``$\tau$-line'', and we need to find the truncation points ($\tau_t, \tau_{t+1}$) along this line.}
\label{fig:tree}
\end{figure}
We present conditional selective inference (SI), which was introduced in \citet{lee2016exact}, and then explain how \emph{optimal (i.e., minimally-conditioned)} conditional SI can be conducted with a homotopy method.
In the conditional SI framework, we assume that the design matrix $X$ is fixed and the response vector $y$ is a realization of a random response vector $Y \sim N(\mu, \Sigma)$, where $\mu \in \RR^{n}$ is an unknown mean vector and $\Sigma \in \RR^{n \times n}$ is a covariance matrix which is known or estimable from external data.
In this framework, we do not assume a ``true'' relationship between $X$ and $\mu$, but consider the case where the data analyst adopts the SHIM as a reasonable approximation model to describe the relationship.
Let $\cA$ be the set of selected features by solving the SHIM fitting problem.
With a slight abuse of notation, we also write this set of features as $\cA(y)$ in order to emphasize that the set of features $\cA$ is obtained when $y$ is observed.
This notation enables us to consider $\cA(y^\prime)$ as the set of features which would be selected when a different response vector $y^\prime$ is observed.
Furthermore, $\cA(Y)$ represents the ``random'' set of features selected from the ``random'' response vector $Y$.
Given the set of selected features $\cA$, consider the best linear approximation of $\mu$ with the selected features.
For $j \in \cA$, let
\begin{align*}
\beta_j^* := \big[ (X_\cA^\top X_\cA)^{-1} X_{\cA}^\top \mu \big]_j
\end{align*}
be the $j^{\rm th}$ population coefficient of the best linear approximation model fitted only with the selected features.
In conditional SI framework, we consider the following hypothesis test:
\begin{align}
\label{eq:statistical_test}
{\rm H}_0: \beta_j^* = 0 ~~~\text{v.s.}~~~ {\rm H}_1: \beta_j^* \neq 0, ~ j \in \cA.
\end{align}
Note that, by defining $\eta := X_{\cA(Y)} (X_{\cA(Y)}^\top X_{\cA(Y)})^{-1} e_j$ with $e_j \in \RR^{|\cA|}$ being the vector with 1 at the $j^{\rm th}$ component and 0 elsewhere, we can write $\beta_j^* = \eta^\top \mu$ when $Y = y$.
Therefore, it is reasonable to use $\eta^\top Y$ as the test statistic for the test \eq{eq:statistical_test}.
The (unconditional) sampling distribution of $\eta^\top Y$ is highly complicated and intractable because $\eta$ also depends on the random response vector $Y$ through the selected features $\cA(Y)$.
The basic idea of conditional SI is to consider the sampling distribution of the test-statistic conditional on the selection event, i.e., $\eta^\top Y \mid \{\cA(Y) = \cA\}$.
By further conditioning on the nuisance component $q(Y) = (I_n - b \eta^\top) Y$ with $b := \Sigma \eta (\eta^\top \Sigma \eta)^{-1}$ which is independent of the test statistic $\eta^\top Y$, \cite{lee2016exact} showed that the conditional sampling distribution of $\eta^\top Y \mid \{\cA(Y) = \cA, q(Y) = q \}$ follows a truncated Normal distribution
\begin{equation}\label{eqn:conditional_sampling_distr}
\eta^\top Y \mid \{\mathcal{A}(Y)=\mathcal{A}, q(Y)=q\} \sim F^{\mathcal{T}}_{\eta^\top\mu,\,\eta^\top\Sigma\eta},
\end{equation}
where $F_{\tilde{\mu}, \tilde{\sigma}^2}^{\cT}$ is the c.d.f. of the truncated Normal distribution with mean $\tilde{\mu}$, variance $\tilde{\sigma}^2$, the truncation region $\cT$, and $q$ is the observed nuisance component defined as $q = (I_n - b \eta^\top) y$. However, identifying the conditional data space \(\{\mathcal{A}(Y)=\mathcal{A}, q(Y)=q\}\) is a challenging problem.
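The decomposition of $y$ into the nuisance component $q$ and the test-statistic direction $b$ can be sketched as follows (a minimal illustration with our own function name; it only verifies the algebra $y = q + b\,\eta^\top y$):

```python
# Sketch: b = Sigma eta (eta^T Sigma eta)^{-1} and q = (I_n - b eta^T) y,
# so the parametrized response y(tau) = q + b*tau recovers the observed y
# at tau = eta^T y, and eta^T q = 0 (q carries no test-statistic component).
import numpy as np

def nuisance_decomposition(y, eta, Sigma):
    b = Sigma @ eta / (eta @ Sigma @ eta)
    q = y - b * (eta @ y)
    return q, b

rng = np.random.default_rng(0)
n = 6
y, eta = rng.normal(size=n), rng.normal(size=n)
Sigma = np.eye(n)
q, b = nuisance_decomposition(y, eta, Sigma)
tau_obs = eta @ y  # observed value of the test statistic
```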
In \cite{lee2016exact}, the authors developed a practical algorithm to compute the truncated Normal distribution by further conditioning on the signs of the selected features in $\cA$.
Although the validity of the inference can be maintained with this additional conditioning on the signs, it turns out that the power of the inference is \emph{suboptimal} with this over-conditioning~\citep{fithian2014optimal}. Recently, \citet{le2021parametric} developed an algorithm to resolve this issue by using homotopy method.
In particular, they considered the parametrized response vector (see Fig. \ref{fig:tree} (b))
\begin{equation}\label{eq:parametrized_response_vector}
y(\tau) := q + b \tau
\end{equation}
for a scalar parameter $\tau \in \RR$, and solve the continuum of optimal solutions when the response vector $y$ is replaced with $y(\tau)$ by using homotopy method. Therefore, we can redefine the conditional data space in (\ref{eqn:conditional_sampling_distr}) as
\begin{equation} \label{eq:def_truncation_region}
\mcl{T} = \{\tau \in \mb{R} \hst | \hst \mcl{A}(y(\tau)) = \mcl{A}(y) \}.
\end{equation}
It enables us to completely identify the truncation region of the truncated Normal sampling distribution and compute the selective $p$-value
\begin{equation}
P_j^{\rm selective} = 2 \min \{ \pi_j, 1 - \pi_j\}, \quad \text{where} \quad \pi_j = 1 - F^{\mathcal{T}}_{0,\,\eta^\top\Sigma\eta} (\eta^\top y).
\end{equation}
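Given the truncation region $\cT$ as a union of disjoint intervals (assumed already found by the $\tau$-path), the selective $p$-value is a ratio of truncated-normal tail masses. A minimal sketch under ${\rm H}_0$ (function names are ours):

```python
# Sketch: selective p-value from a union of truncation intervals under H0
# (beta_j^* = 0), where eta^T Y follows a zero-mean truncated normal with
# variance sigma^2 = eta^T Sigma eta.
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def selective_p_value(stat, sigma, intervals):
    """stat = eta^T y, sigma = sqrt(eta^T Sigma eta),
    intervals = list of (lo, hi) pieces of the truncation region T."""
    total = below = 0.0
    for lo, hi in intervals:
        total += norm_cdf(hi / sigma) - norm_cdf(lo / sigma)
        clipped = min(hi, max(lo, stat))  # mass of the piece at or below stat
        below += norm_cdf(clipped / sigma) - norm_cdf(lo / sigma)
    pi = 1.0 - below / total
    return 2.0 * min(pi, 1.0 - pi)

# with an untruncated region this reduces to the ordinary two-sided z-test
p = selective_p_value(1.96, 1.0, [(-math.inf, math.inf)])
```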
Similarly, one can obtain $1 - \alpha$ confidence interval $\mcl{C}_\alpha$ for any $\alpha \in [0, 1] $ such that
\begin{equation*}
\mb{P}(\beta_j^* \in \mcl{C}_\alpha \big| \{\mathcal{A}(Y)=\mathcal{A}, q(Y)=q \}) = 1 -\alpha.
\end{equation*}
Unfortunately, in the case of SHIM, since the number of high-order interaction features are exponentially large, we cannot use the same homotopy method. In the following section, we present the \emph{homotopy mining algorithm} which enables us to compute the conditional sampling distribution (\ref{eqn:conditional_sampling_distr}) of the fitted SHIM coefficients by effectively combining homotopy method and branch-and-bound method in pattern mining.
\section{Proposed Method}\label{homotopy-mining}
In this study, we propose such a \textit{``homotopy mining''} approach for both model selection and inference. A homotopy method refers to an optimization framework for solving a sequence of parameterized optimization problems. The basic idea of our homotopy mining approach is to consider the following optimization problem with the parameterized response vector $y(\tau)$ in (\ref{eq:parametrized_response_vector})
%
\begin{equation}\label{eq:shim_homotopy_opt}
\beta(\lambda, \tau) = \argmin_{\beta \in \bbR^p} \hspace{0.2cm} \mathcal{F}_{\lambda, \tau}(\beta) := \frac{1}{2}\norm{y(\tau) - X\beta}^2 + \lambda \norm{\beta}_1,
\end{equation}
where $\tau \in \mathbb{R}$ is a scalar parameter, $\lambda$ is the regularization parameter for $L_1$-regularization, and the objective function $\mathcal{F}_{\lambda, \tau}(\beta)$ is parameterized by both $\tau$ and $\lambda$.
The homotopy mining enables us to solve a sequence of parameterized optimization problems in the form of (\ref{eq:shim_homotopy_opt}) by effectively combining homotopy and mining method.
To extend the homotopy selective inference framework for SHIM, we first need to solve (\ref{eq:shim_homotopy_opt}) for a fixed $\tau$ and target $\lambda$ using the observed data and obtain an active set $\mathcal{A}$.
Now, $\forall j \in \mathcal{A}$, we need to construct the exact solution path characterized by $\tau$ and then identify the conditional data space in (\ref{eq:def_truncation_region}) by identifying the intervals of $\tau$ on the solution path.
This exact solution path can be constructed in a similar manner as the LARS-LASSO algorithm by an efficient step size calculation.
Here, we define the exact regularization paths \(\lambda \mapsto \beta(\lambda)\) for a fixed $\tau$ as the ``\(\lambda\)-\textit{path}'' and \(\tau \mapsto \beta(\tau)\) for a fixed $\lambda$ as the ``\(\tau\)-\textit{path}'', respectively.
%
Then, both the selection and inference paths of the SHIM can be constructed in a similar fashion as stated below:
$\bullet$ Model selection of SHIM can be done by using exact regularization path algorithm
\begin{equation}\label{eq:lambda_seq_path}
\lambda_0 > \lambda_1 > \cdots > \lambda_{\rm min} \Rightarrow \{\beta(\lambda_0), \beta(\lambda_1) , \cdots, \beta(\lambda_{\rm min}) \}.
\end{equation}
%
$\bullet$ For inference, we can have similar path algorithm
\begin{equation}\label{eq:z_seq_path}
\tau_0 > \tau_1 > \cdots > \tau_{\rm min } \Rightarrow \{\beta(\tau_0), \beta(\tau_1), \cdots, \beta(\tau_{\rm min}) \},
\end{equation}
where sequences of $\lambda$ and $\tau$ represent the breakpoints of homotopy method.
The Equations (\ref{eq:lambda_seq_path}) and (\ref{eq:z_seq_path}) have similar problem structure, the only difference is that in (\ref{eq:lambda_seq_path}) we find the solution path characterized by the regularization parameter $\lambda$, whereas in (\ref{eq:z_seq_path}) we find the solution path characterized by $\tau$.
%
Basically, what we need to characterize the selection event is to find those breakpoints (e.g. $\tau_0, \tau_3, \tau_8$) along the $\tau$-line where the active set remains the same as the observed one, i.e.,
$
\mathcal{A}(y) = \mathcal{A}(y(\tau_0)) = \mathcal{A}(y(\tau_3)) = \mathcal{A}(y(\tau_8)).
$
%
%
However, computing the exact regularization paths for such a SHIM is a challenging task due to the exponentially expanded feature space.
%
Efficient computational methods are required at both the selection and inference stages.
%
Therefore, we consider a tree structure (see Fig.~\ref{fig:tree}(a)) of the interaction terms (or patterns) and propose a tree pruning strategy for both the selection path ($\lambda$-path) and the inference path ($\tau$-path).
In the next section, we will present the main technical details of characterizing the conditional data space in (\ref{eq:def_truncation_region}) by using homotopy-mining method.
\subsection{Characterization of truncation region in SHIM}
The optimal condition of (\ref{eq:shim_homotopy_opt}) can be written as
\begin{equation}\label{eq:opt_condn_shim_homotopy}
X^\top \big(X\beta(\lambda,\tau) - y(\tau)\big) + \lambda s(\lambda,\tau) = 0, \quad \text{where} \hst s_{j}(\lambda, \tau) \in \begin{cases}
\{ -1, +1 \} \hst \hst \text{if} \hst \beta_j(\lambda, \tau) \neq 0,\\
[-1, +1] \hst \hst \hst \text{if} \hst \beta_j(\lambda, \tau) = 0,
\end{cases}
\end{equation}
where $j \in [p]$.
Let us define the active set of features as
\(\label{eq:active_set}
\mcl{A}(y(\tau)) = \left\{j \in [p]:\; \beta_j(\lambda, \tau) \neq 0 \right\}
\).
\paragraph{The $\tau$-path ($\lambda$ fixed).}
Since $\lambda$ is fixed, we drop it from the notation. Now consider two real values $\tau_t$ and $\tau_{t+1}$ ($\tau_{t+1}>\tau_t$) between which the active set does not change and the signs of the active coefficients also remain the same.
For notational simplicity, we denote $\cA_{\tau_t} = \cA(y(\tau_t))$.
Then, one can write from (\ref{eq:opt_condn_shim_homotopy})
\begin{align}
\beta_{\mcl{A}_{\tau_t}} (\tau_{t + 1}) - \beta_{\mcl{A}_{\tau_t}} (\tau_t) &= \nu_{\mcl{A}_{\tau_t}} (\tau_t) \times (\tau_{t+1} - \tau_t) \label{eqn:beta_piece_wise_constant} \\
\lambda s_{\mcl{A}^c_{\tau_t}} (\tau_{t + 1}) - \lambda s_{\mcl{A}^c_{\tau_t}} (\tau_t) &= \gamma_{\mcl{A}^c_{\tau_t}} (\tau_t) \times (\tau_{t+1} - \tau_t) \label{eqn:gamma_piece_wise_constant}
\end{align}
where \(\nu_{\mcl{A}_{\tau_t}} (\tau) = (X_{\mcl{A}_{\tau_t}}^\top X_{\mcl{A}_{\tau_t}})^{-1}X_{\mcl{A}_{\tau_t}}^\top b \) and \(\gamma_{\mcl{A}^c_{\tau_t}} (\tau) = X_{\mcl{A}^c_{\tau_t}}^\top b - X_{\mcl{A}^c_{\tau_t}}^\top X_{\mcl{A}_{\tau_t}}\nu_{\mcl{A}_{\tau_t}} (\tau) \) remain constant for all real values of \(\tau \in [\tau_t, \tau_{t+1})\).
Thus, Equations (\ref{eqn:beta_piece_wise_constant}) and (\ref{eqn:gamma_piece_wise_constant}) state that \(\beta(\tau)\) and \(\lambda s(\tau)\) are piecewise linear in \(\tau\) for a fixed \(\lambda\).
The derivations of \(\nu_{\mcl{A}_{\tau_t}} (\tau_t) \) and \(\gamma_{\mcl{A}^c_{\tau_t}} (\tau_t) \) are given in Appendix A.
If $\tau_{t+1} > \tau_t$ is the next breakpoint, then one of the following two events happens:
$\bullet$ A zero variable becomes non-zero, i.e., \( \exists j \in \mcl{A}^c_{\tau_t} \text{ s.t. } |x_{j}^{\top}(y(\tau_{t+1}) - X_{\cA_{\tau_{t}}}\beta_{\cA_{\tau_t}}(\tau_{t + 1}))| = \lambda \hst \text{or,}\)
$\bullet$ A non-zero variable becomes zero, i.e.,
\(\exists j \in \mcl{A}_{\tau_t} \text{ s.t. } \beta_j(\tau_t) \neq 0 \text{ and } \beta_j(\tau_{t+1}) = 0 \enspace.\)
Overall, the next change of the active set happens at $\tau_{t + 1} = \tau_t + \Delta_j$, where
\begin{equation}\label{eq:step_length}
\Delta_j = \min(\Delta_j^1, \Delta_j^2) = \min\left(\min_{j \in \mathcal{A}^c_{\tau_t}} \Big( \lambda \frac{ \text{sign}(\gamma_j (\tau_t)) - s_j(\tau_t)}{\gamma_j(\tau_t)} \Big)_{++}, \hst \min_{j \in \mcl{A}_{\tau_t}} \Big( - \frac{\beta_j(\tau_t)}{\nu_j(\tau_t)} \Big)_{++} \right) \enspace.
\end{equation}
Here, we use the convention that for any $a \in \RR$, $(a)_{++} = a$ if $a > 0$ and $\infty$ otherwise.
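The step-size rule in (\ref{eq:step_length}) can be sketched as follows; the inputs are illustrative scalars for a toy active/inactive split, and the function names are ours:

```python
# Sketch of Eq. (step_length): the next breakpoint on the tau-path is the
# smallest positive candidate over inclusion events (an inactive feature
# reaching the lambda boundary) and deletion events (an active coefficient
# hitting zero).
import numpy as np

def positive_part(a):
    """The (a)_{++} convention: keep a where a > 0, replace the rest by +inf."""
    a = np.asarray(a, dtype=float)
    return np.where(a > 0, a, np.inf)

def next_step(lam, gamma, s, beta, nu):
    d_inc = positive_part(lam * (np.sign(gamma) - s) / gamma)  # j in A^c
    d_del = positive_part(-beta / nu)                          # j in A
    delta = min(d_inc.min(), d_del.min())
    event = 'inclusion' if d_inc.min() <= d_del.min() else 'deletion'
    return delta, event

delta, event = next_step(
    lam=1.0,
    gamma=np.array([0.5, -0.2]),  # gamma_j(tau_t) for inactive features
    s=np.array([0.3, 0.1]),       # s_j(tau_t) for inactive features
    beta=np.array([2.0, -1.0]),   # beta_j(tau_t) for active features
    nu=np.array([-0.5, 0.25]),    # nu_j(tau_t) for active features
)
```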
The derivation of the step-size $\Delta_j$ for the $\tau$-path is given in the Appendix A. However, solving the minimization problem to determine the step-size of the $\tau$-path and the $\lambda$-path (the details of $\lambda$-path are given in Appendix A) can be challenging for SHIM type problems.
Hence, we need efficient computational methods to make it practically feasible.
In the following section we present an efficient tree pruning strategy by considering a tree structure of the interaction terms (or patterns).
A similar pruning strategy already exists in the literature for solving the $\lambda$-path of the Lasso in the context of graph mining \citep{ERP_Tsuda}. In the next section we show that the same pruning strategy can be applied to the $\tau$-path of the SHIM.
\subsection{Tree pruning}
A tree is constructed in such a way that for any pair of nodes (\(\ell, \ell^\prime)\), where $\ell$ is the ancestor of $\ell^\prime$, i.e., \(\ell \subseteq \ell^\prime\), the following conditions are satisfied
\begin{equation*}
\hst x_{i \ell^\prime} = 1 \implies x_{i\ell } = 1 \hst
\text{ and conversely,} \hst
x_{i\ell } = 0 \implies x_{i \ell^\prime} = 0 \quad \forall i \in [n].
\end{equation*}
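For 0/1 pattern features this invariant is immediate: a descendant column is the element-wise product over a superset of indices. A small numeric check (the matrix below is illustrative):

```python
# Sketch of the tree invariant for 0/1 pattern features: if ell is an
# ancestor of ell' (ell ⊆ ell'), then x_{i,ell'} = 1 implies x_{i,ell} = 1,
# and conversely x_{i,ell} = 0 implies x_{i,ell'} = 0, for all i.
import numpy as np

Z = np.array([[1, 1, 1],
              [1, 0, 1],
              [0, 1, 1]])
x_ell = Z[:, 0] * Z[:, 1]                   # ancestor pattern {1, 2}
x_ell_prime = Z[:, 0] * Z[:, 1] * Z[:, 2]   # descendant pattern {1, 2, 3}
```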
Now considering the $\tau$-path of the LASSO, the equicorrelation condition for any active feature \(k \in \mathcal{A}_{\tau_{t + 1}}\) at a fixed $\lambda$ can be written as
\begin{equation*}
\left|x_k^\top (y(\tau_{t + 1}) - X\beta(\tau_{t + 1}))\right| = \lambda.
\end{equation*}
Therefore at a fixed \(\lambda\), any non-active feature \(\ell \in \mathcal{A}^c_{\tau_{t} } \) becomes active at $\tau_{t + 1}$ when the following condition is satisfied
\begin{align}\label{eqn:inclusion_condition2}
\big\lvert x_\ell^\top \big(y(\tau_{t + 1}) - X_{\mathcal{A}_{\tau_t}}(\beta_{\mathcal{A}_{\tau_t}} (\tau_t) + \Delta_\ell \nu_{\mathcal{A}_{\tau_t}} (\tau_t)) \big)\big\rvert
&=
\big\lvert x_k^\top \big(y(\tau_{t + 1}) - X_{\mathcal{A}_{\tau_t}}(\beta_{\mathcal{A}_{\tau_t}} (\tau_t) + \Delta_\ell \nu_{\mathcal{A}_{\tau_t}} (\tau_t)) \big)\big\rvert \nonumber \\
\text{or} \quad |\rho_\ell(\tau_t, \tau_{t + 1}) - \Delta_\ell \eta_\ell(\tau_t) | &= |\rho_k(\tau_t, \tau_{t + 1}) - \Delta_\ell \eta_k(\tau_t)|,
\end{align}
where the l.h.s. corresponds to \(\ell \in \mathcal{A}_{\tau_t}^c \) and the r.h.s. corresponds to \(k \in \mathcal{A}_{\tau_t} \).
Here, we define
\(\rho_\ell (\tau_{t}, \tau_{t + 1}) = x_\ell^\top \Big(y(\tau_{t + 1}) - X_{\mathcal{A}_{\tau_t}} \beta_{\mathcal{A}_{\tau_t}} (\tau_t) \Big) \text{ and } \eta_\ell(\tau_t) = x_\ell^\top X_{\mathcal{A}_{\tau_t}} \nu_{\mathcal{A}_{\tau_t}}(\tau_t) \).
The r.h.s. of (\ref{eqn:inclusion_condition2}) has a lower bound, i.e.,
\[
|\rho_k(\tau_t, \tau_{t + 1}) - \Delta_\ell \eta_k(\tau_t)| \geq |\rho_k(\tau_t, \tau_{t+ 1})| - \Delta_\ell |\eta_k(\tau_t)|,
\] and the l.h.s. of (\ref{eqn:inclusion_condition2}) has an upper bound, i.e.,
\[
|\rho_\ell(\tau_{t}, \tau_{t + 1}) - \Delta_\ell \eta_\ell(\tau_t) | \leq |\rho_\ell(\tau_t, \tau_{t+1})| + \Delta_\ell |\eta_\ell(\tau_t)|.
\]
Therefore, for equation (\ref{eqn:inclusion_condition2}) to have a solution, the following condition needs to be satisfied
\begin{equation}\label{eqn:solution_condn}
\quad \quad |\rho_\ell(\tau_t, \tau_{t + 1})| + \Delta_\ell |\eta_\ell(\tau_t)| \geq |\rho_k(\tau_t, \tau_{t + 1})| - \Delta_\ell |\eta_k(\tau_t)|.
\end{equation}
If the above condition (\ref{eqn:solution_condn}) is not satisfied, then equation (\ref{eqn:inclusion_condition2}) will not have any solution, and that can be used as a pruning condition. Therefore, the pruning condition can be written as
\begin{equation}\label{eqn:pruning_condn}
|\rho_\ell(\tau_t, \tau_{t + 1})| + \Delta_\ell |\eta_\ell(\tau_t)| < |\rho_k(\tau_t, \tau_{t + 1})| - \Delta_\ell |\eta_k(\tau_t)|.
\end{equation}
\begin{lemma}\label{lemma:lemma_1}
If $\Delta^\ast_\ell$ is the current minimum step-size, i.e. \(\Delta^\ast_\ell = \underset{t \in \{1, 2, \ldots, \ell\}}{\min}\{\Delta_t\}, \)
(\ref{eqn:pruning_condn}) is equivalent to
\begin{equation*}
|\rho_{\ell}(\tau_t, \tau_{t + 1})| + \Delta^\ast_\ell |\eta_\ell(\tau_t)| < |\rho_k(\tau_t, \tau_{t + 1})| - \Delta^\ast_\ell |\eta_k(\tau_t)|.
\end{equation*}
\end{lemma}
\begin{lemma}\label{lemma:lemma_2}
If Lemma \ref{lemma:lemma_1} holds, then $\forall \ell^\prime \supset \ell $,
\begin{equation} \label{eq:pruning_2}
|\rho_{\ell^\prime}(\tau_t, \tau_{t + 1})| + \Delta^\ast_\ell |\eta_{\ell^\prime}(\tau_t)| < |\rho_k(\tau_t, \tau_{t + 1})| - \Delta^\ast_\ell |\eta_k(\tau_t)|.
\end{equation}
\end{lemma}
If Lemma \ref{lemma:lemma_2} holds, then $\forall \ell^\prime \supset \ell $, $\Delta_{\ell^\prime} > \Delta^\ast_{\ell }$.
Therefore, we can use Lemma \ref{lemma:lemma_2} as the pruning criterion to prune the sub-tree with $\ell^\prime$ as the root node.
The proofs of Lemmas \ref{lemma:lemma_1} and \ref{lemma:lemma_2} are deferred to Appendix A.
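The branch-and-bound test of Lemma \ref{lemma:lemma_2} amounts to a single inequality check per node; a minimal sketch (function name and the scalar values are ours):

```python
# Sketch: if the bound of Lemma 2 holds at node ell with the current
# minimum step size delta_star, every descendant ell' yields a larger
# step, so the whole subtree rooted at ell can be skipped.
def can_prune(rho_ell, eta_ell, rho_k, eta_k, delta_star):
    """Pruning condition: |rho_ell| + delta* |eta_ell|
                        < |rho_k|   - delta* |eta_k|."""
    return (abs(rho_ell) + delta_star * abs(eta_ell)
            < abs(rho_k) - delta_star * abs(eta_k))

# a node whose correlation bound cannot reach the active-set level is pruned
pruned = can_prune(rho_ell=0.1, eta_ell=0.2, rho_k=2.0, eta_k=0.5,
                   delta_star=1.0)
```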
The complete algorithm for the inference path ($\tau$-path) is given in Algorithm \ref{algo:inference_path}.
\begin{algorithm}[h!]
\caption{$\tau$-path}
\label{algo:inference_path}
\begin{algorithmic}[1]
\State \textbf{Input:} \(Z, \lambda, b, q, [\tau_{\rm min}, \tau_{\rm max}]\)
\State Initialization: \( t=0, \tau_t=\tau_{\rm min}, \mathcal{T}= \{ \tau_t \} \), \(\beta(\tau_t)=0\)
\State \(y(\tau_t) = q + b \tau_t \), \( \quad \mathcal{A}_{\tau_t}, \beta_{\mathcal{A}_{\tau_t}} (\tau_t) \leftarrow \lambda\text{-path}(Z, y(\tau_t), \lambda)\) (The algorithm of the $\lambda$-path is given in Appendix A)
\State \(\nu_{\mc{A}_{\tau_{t}}} (\tau_t) = (X_{\mc{A}_{\tau_{t}}}^\top X_{\mc{A}_{\tau_{t}}})^{-1}X_{\mc{A}_{\tau_{t}}}^\top b\), \quad \(\nu_{\mc{A}_{\tau_{t}}^c}(\tau_t) = 0\)
\State \textbf{while} \((\tau_t < \tau_{\rm max})\) \textbf{do}
\State Compute step-length $\Delta_j \leftarrow$ Equation (\ref{eq:step_length})
\State If $\Delta_j = \Delta_j^1$, add \(j\) into \(\mathcal{A}_{\tau_t}\) \Comment{Inclusion}
\State If \(\Delta_j = \Delta_j^2\), remove $j$ from \(\mathcal{A}_{\tau_t}\) \Comment{Deletion}
\State update: \( \tau_{t+1} \leftarrow \tau_{t} + \Delta_j \), \(\mathcal{T}=\mathcal{T} \cup \{\tau_{t+1}\} \), \( \beta_{\mc{A}_{\tau_{t+1}}} (\tau_t) \leftarrow \beta_{\mc{A}_{\tau_t}} (\tau_t) + \Delta_j \nu_{\mc{A}_{\tau_t}} (\tau_t) \), \(y(\tau_{t+1}) = q + b \tau_{t+1} \),
\(\nu_{\mc{A}_{\tau_{t+1}}} (\tau_{t + 1}) = (X_{\mc{A}_{\tau_{t+1}}}^\top X_{\mc{A}_{\tau_{t+1}}})^{-1}X_{\mc{A}_{\tau_{t+1}}}^\top b\), \quad \(\nu_{\mc{A}_{\tau_{t+1}}^c} (\tau_{t + 1}) = 0\)
\State \textbf{end while}
\State \textbf{Output:} \( \mathcal{T}, \{\mathcal{A}_{\tau_t}\}_{\tau_t \in \mathcal{T}} \)
\end{algorithmic}
\end{algorithm}
\subsection{Extension for Elastic Net}
We extended our proposed method to the elastic net optimization problem. However, we could not follow the standard approach of solving the elastic net as a Lasso on augmented data, because simply stacking extra rows onto the design matrix would be prohibitively expensive due to combinatorial effects.
In order to derive the step-sizes for both the $\lambda$-path and the $\tau$-path, we need a different approach, as we construct the high-order interaction model in a progressive manner.
We show that, using a simple trick, the step-size can be computed very efficiently.
A similar trick is also used to derive the pruning condition. See Appendix B for the details.
\section{Experiments}
We only highlight the main results. The details of experimental setup and several additional experimental results are deferred to Appendix C.
\subsection{Comparison of statistical powers}
\textbf{Synthetic data:}
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{figures/stats_synthetic_data.pdf}
\caption{Demonstration of the statistical power of three selection bias correction methods (ds: data splitting, homo: homotopy, poly: polytope) using synthetic data experiments. (a) and (b) show the false positive rates and the true positive rates for different sample sizes and (c) shows the distribution of the confidence interval lengths.}
\label{fig:stat_power_synthetic}
\end{figure}
We generated i.i.d. random samples $(z_i, y_i) \in \{0,1\}^m \times \mb{R}$ in such a way that, on average, $100(1-\zeta)\%$ of the entries of $z_i$ are $1$s.
Here, $\zeta \in [0, 1]$ is the sparsity controlling parameter.
The response $y_i \in \mb{R}$ is randomly generated from a normal distribution $N(0, \sigma^2)$.
For the comparison of false positive rates (FPRs), true positive rates (TPRs) and confidence interval (CI) across different methods, we generated the design matrix for a fixed sparsity parameter $\zeta = 0.95$.
In all experiments, the significance level was set as $\alpha = 0.05$.
For the comparison of TPRs we considered a true model of up to $3^{\rm rd}$-order interactions defined as $\mu (x_i) = 0.5z_1 - 2z_2z_3 + 3z_4z_5z_6$.
The response $y_i$ is accordingly generated from $N(\mu(X), \sigma^2I)$.
For the comparison of FPRs, we set $\beta_j = 0$ for all $j \in \{1, \ldots, p\}$.
We compared both FPRs and TPRs across three different methods ({\tt ds}: data splitting, {\tt homo}: homotopy, {\tt poly}: polytope) for four different sample sizes $n \in \{100, 200, 400, 500\}$.
We generated TPRs and FPRs over 100 trials for all three methods and repeated the experiments $5$ times.
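The synthetic setup above can be sketched as follows (a hedged reconstruction from the description; the function and variable names are ours):

```python
import numpy as np

def make_synthetic(n, m, zeta, sigma=1.0, seed=0):
    """Binary design where, on average, 100*(1 - zeta)% of entries are 1s,
    plus the 3rd-order true model mu = 0.5*z1 - 2*z2*z3 + 3*z4*z5*z6
    used in the TPR experiments."""
    rng = np.random.default_rng(seed)
    Z = (rng.random((n, m)) < (1.0 - zeta)).astype(float)
    mu = 0.5 * Z[:, 0] - 2.0 * Z[:, 1] * Z[:, 2] + 3.0 * Z[:, 3] * Z[:, 4] * Z[:, 5]
    y = mu + sigma * rng.standard_normal(n)
    return Z, y

Z, y = make_synthetic(n=200, m=30, zeta=0.95)
```

For the FPR experiments one would instead draw $y \sim N(0, \sigma^2 I)$, i.e., drop the `mu` term.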
The results are shown in Fig.~\ref{fig:stat_power_synthetic}(a) and Fig.~\ref{fig:stat_power_synthetic}(b), respectively.
It can be seen that all SI methods can properly control the FPRs under $\alpha = 0.05$.
Regarding the TPR comparison, homotopy has the highest power; this is expected, as it is minimally conditioned, whereas polytope suffers from over-conditioning.
Comparing the TPRs of data splitting ({\tt ds}) and homotopy ({\tt homo}), the TPR of {\tt homo} is always greater than that of {\tt ds}.
Note that in {\tt ds}, only half of the data is used for selection and the remaining half is used for inference.
Therefore, compared to {\tt homo}, {\tt ds} has a higher risk of failing to identify truly correlated features in the selection stage and similarly suffers from low statistical power in the inference stage.
The result of CIs is shown in Fig.~\ref{fig:stat_power_synthetic}(c).
Here, we used the same true model as in the TPR experiments and report the average CIs over 100 trials for each method.
The results of CIs are consistent with the findings of TPRs.
\begin{figure}[t]
\centering
\includegraphics[width=0.95\linewidth]{figures/NRTs1.pdf}
\caption{Comparison of statistical powers (Homotopy vs Polytope). (a.1-a.3) show the percentage of cases where selection bias corrected p-values and confidence interval lengths of the proposed method (Homotopy) was smaller than that of the existing method (Polytope) in random sub-sampling experiments. (b.1-b.3) show the distributions of the confidence interval lengths of the same experiments. The numbers inside the brackets represent the average number of intervals along the $\tau$-line considered for the homotopy method. Note that in case of polytope only one such interval is considered.}
\label{fig:stats_NRTIs1}
\end{figure}
\textbf{Real data:}
We obtained HIV-1 sequence data from Stanford HIV Drug Resistance Database \cite{rhee2003human}.
In our experiment we used 6 NRTI, 1 NNRTI, and 3 PI drugs.
We report here only the results for 3 NRTI drugs.
Additional results are included in the Appendix C.
To demonstrate the statistical efficacy of the proposed homotopy method over the existing polytope method, we generated random sub-samples of the 10 drug datasets as follows.
First, we created a dataset consisting of the top 30 mutations from each of the 10 drug datasets.
As most of the columns contain zeros, we sorted the columns by the number of 1s in each column and picked the top 30 columns as our starting set.
Then, from this starting set, we drew random sub-samples of five features for three different sample sizes ($n \in \{100, 200, 300\}$).
Here, both samples and features were drawn without replacement.
We generated 100 samples and repeated the experiment five times; hence, in total we generated 500 samples.
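This sub-sampling procedure can be sketched as follows (a hypothetical helper mirroring the description; not code from the paper):

```python
import numpy as np

def subsample(X, n, k=5, seed=0):
    """Keep the 30 densest columns (top mutations), then draw n rows and
    k features uniformly without replacement, as described in the text."""
    rng = np.random.default_rng(seed)
    top = np.argsort(X.sum(axis=0))[::-1][:30]      # top 30 by count of 1s
    cols = rng.choice(top, size=k, replace=False)   # 5 random features
    rows = rng.choice(X.shape[0], size=n, replace=False)
    return X[np.ix_(rows, cols)]

# toy binary matrix standing in for the mutation data
X = (np.random.default_rng(1).random((400, 100)) < 0.1).astype(int)
S = subsample(X, n=100)
```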
Figure~\ref{fig:stats_NRTIs1} shows the percentage of times homotopy produced smaller $p$-values and CI lengths than polytope.
It also depicts the distributional difference of the CI lengths between homotopy and polytope.
These results clearly demonstrate that homotopy is statistically more powerful than the existing polytope method.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{figures/comp_efficiency.pdf}
\caption{Distribution of the fraction of total nodes traversed against different maximum pattern size ($d$) constraints while applying the proposed pruning method during the construction of the $\tau$-path. (a.1) - (a.3) demonstrate the results for $1^{st}$, $2^{nd}$ and $3^{rd}$ order interaction terms. }
\label{fig:comp_eff_node_counts}
\end{figure}
\begin{table}[t]
\centering
\resizebox{\textwidth}{!}{
\begin{tabular}{ |c|c|c|c|c|c|c|c| }
\hline
\multirow{2}{*}{$d$} & \multirow{2}{*}{\shortstack{Search space \\ (\# nodes)}} & \multicolumn{3}{|c|}{With pruning} & \multicolumn{3}{|c|}{Without pruning}\\\cline{3-8}
& & $1^{st}$ & $2^{nd}$ & $3^{rd}$ & $1^{st}$ & $2^{nd}$ & $3^{rd}$ \\
\hline
5 & 174436 & $14.56 \pm 6.05$ & $6.58 \pm 2.05$ & $6.78 \pm 3.92 $ &$25.29 \pm 2.50$ & $34.80 \pm 1.19$ & $23.34 \pm 1.88$ \\
\hline
6 & 768211 & $ 34.80 \pm 16.10$ & $13.56 \pm 5.75$ & $13.99 \pm 10.08 $ &$126.96 \pm 8.61$ & $125.29 \pm 2.14$ & $127.97 \pm 4.80$ \\
\hline
7 & 2804011 & $ 68.19 \pm 33.17$ & $24.15 \pm 11.83$ & $25.19 \pm 20.50$ & $450.24 \pm 28.50$ & $447.59 \pm 22.15$ & $447.19 \pm 37.69$ \\
\hline
8 & 8656936 & $ 110.25 \pm 55.55$ & $37.70 \pm 19.39$ & $39.45 \pm 33.37$ & > 1 day & > 1 day & > 1 day \\
\hline
9 & 8656936 & $ 151.31 \pm 76.81$ & $51.08 \pm 27.09 $ & $54.06 \pm 47.34$ & > 1 day & > 1 day & > 1 day \\
\hline
10 & 53009101 & $ 188.26 \pm 95.91$ & $63.66 \pm 34.71$ & $65.49 \pm 58.42$ & > 1 day & > 1 day & > 1 day \\
\hline
11 & 107636401 & $ 212.34 \pm 105.54$ & $69.26 \pm 38.49 $ & $74.54 \pm 66.42$ & > 1 day & > 1 day & > 1 day \\
\hline
12 & 194129626 & $ 226.98 \pm 115.71$ & $74.36 \pm 41.20$ & $78.97 \pm 70.33$ & > 1 day & > 1 day & > 1 day \\
\hline
13 & 313889476 & $ 233.88 \pm 117.25$ & $76.86 \pm 43.10$ & $83.09 \pm 75.13$ & > 1 day & > 1 day & > 1 day \\
\hline
14 & 459312151 & $ 240.36 \pm 124.79$ & $78.127 \pm 43.44$ & $82.98 \pm 74.13$ & > 1 day & > 1 day & > 1 day \\
\hline
15 & 614429671 & $ 238.0 \pm $ 120.35 & $79.67 \pm 44.72 $ & $83.31 \pm 75.16$ & > 1 day & > 1 day & > 1 day \\
\hline
None & 1073741823 & $ 240.17 \pm 119.76 $ & $78.08 \pm 43.62$ & $82.98 \pm 74.33$ & > 1 day & > 1 day & > 1 day \\
\hline
\end{tabular}}
\caption{Computation time (in sec) with and without pruning for $1^{\rm st}$, $2^{\rm nd}$ and $3^{\rm rd}$ order interactions. Here, the computation time is measured against different maximum pattern size ($d$) constraints. The last row corresponds to the case where $d$ is not specified and the whole search space is explored. All computation times are measured on an Intel Xeon Gold 6230 CPU @ 2.10GHz.}
\label{table:comp_eff_time_taken}
\end{table}
\subsection{Comparison of computational efficiencies.}
To demonstrate the computational efficiency of the proposed pruning strategy for the $\tau$-path, we applied our homotopy method with and without pruning on HIV NRTI D4T drug resistance data with the same starting set of top 30 mutations as used to demonstrate the statistical power.
Although we varied $d$ from 5 to $m$, only interaction terms up to $3^{\rm rd}$ order appeared in $\mcl{A}$.
We compared both the number of nodes traversed (Fig.~\ref{fig:comp_eff_node_counts}) and the time taken (Table~\ref{table:comp_eff_time_taken}) against different maximum interaction orders $d$ during the construction of the $\tau$-path for each test statistic direction.
Empirically, we found that pruning was more effective for the $\tau$-path of high-order interaction terms than for singleton terms, and that the power of pruning increases with the order of interaction.
Therefore, we reported the average number of nodes and average time taken separately for $1^{\rm st}$, $2^{\rm nd}$ and $3^{\rm rd}$ order interaction terms.
It can be observed that the pruning is more effective at the deeper nodes of the tree and saturates after a certain depth.
This is expected, as the sparsity of the data increases at deeper nodes and the pruning exploits the monotonicity of high-order interaction terms organized as a tree.
For the homotopy method without pruning, we stopped the execution if the $\tau$-path did not finish within one day.
From Table~\ref{table:comp_eff_time_taken}, it can be observed that without pruning the construction of the $\tau$-path is impractical, owing to the exponential number of high-order interaction terms generated as we progress to the deeper nodes of the tree.
The $\tau$-path without pruning took more than a day beyond $d=7$, while the maximum time taken by the $\tau$-path with pruning was around 240 sec on average, even when no $d$ constraint was imposed.
\section{Conclusions}
In this paper, we presented an algorithm for testing a sparse high-order interaction model (SHIM) by using the framework of conditional selective inference (SI). The algorithm is developed by effectively combining the homotopy and branch-and-bound tree mining method to deal with the combinatorial computational burden of the SHIM and also to improve the statistical power. |
\section{Introduction}
\label{section1}
Timely status updates play a key role in networked control and monitoring systems.
Age of Information (AoI) has recently been introduced to quantify the timeliness of information freshness in status update systems \cite{kaul_etal_SMAN11}.
In the general AoI framework outlined in \cite{kosta_etal_survey}, information sources sample a source-specific random process at random epochs and generate information packets containing the sample values as well as the sampling times.
On the other hand, servers gather the information packets from multiple sources so as to be transmitted to a remote monitor using queueing, buffer management, scheduling, etc.
For a given source, AoI is defined as the time elapsed since the generation of the last successfully received update packet. Therefore, AoI is a source-specific random process whose sample paths increase in time with unit slope but are subject to abrupt downward jumps at information packet reception instances. The Peak AoI (PAoI) process is obtained by sampling the AoI process just before these downward jumps, i.e., at the cycle termination instances.
In this paper, we consider a non-preemptive status update system in Fig.~\ref{fig:twosource} with two sources, a server, and a monitor, with random information packet arrivals from the sources.
The server employs Single-Buffer Per-Source queueing (SBPSQ) for which the freshest packet from each source is held in a single buffer. The server is work-conserving, i.e., it does not idle unless the waiting room is empty, and it serves the packets probabilistically (probabilities denoted by $p_i$ in Fig.~\ref{fig:twosource}) when there are two waiting packets. The scheduler is age-agnostic and does not require the server to read the timestamp field in the packets and keep track of the instantaneous AoI values. The scheduling probabilities are to be chosen so as to provide AoI/PAoI differentiation. In this paper, we attempt to provide differentiation through the minimization of the weighted average AoI/PAoI. The motivation behind AoI differentiation is that in a networked control system, the information about certain input processes need to be kept relatively fresher at the control unit since this information will have profound impact on the overall performance of the control system.
The studied model falls in the general framework of status update systems analytically studied in the literature; see the surveys on AoI \cite{kosta_etal_survey},\cite{survey_Yates} and the references therein for a collection of multi-source queueing models for AoI.
\begin{figure}[tb]
\centering
\begin{tikzpicture}[scale=0.32]
\draw[very thick](3,2) circle (2);
\draw (3,4) node[anchor=south] {\small{source-$2$}} ;
\draw[very thick] (3,9) circle (2) ;
\draw (3,11) node[anchor=south] {\small{source-$1$}} ;
\draw[very thick,->] (6,2) -- (10,2) ;
\draw (8,2) node[anchor=south] {$\lambda_2$};
\draw[very thick,->] (6,9) -- (10,9) ;
\draw (8,9) node[anchor=south] {$\lambda_1$};
\filldraw[fill=gray!50, thick] (13,1) rectangle(15,3);
\draw[thick] (11,1) -- (13,1);
\draw[thick] (11,3) -- (13,3);
\filldraw[fill=gray!50, thick] (13,8) rectangle(15,10);
\draw[thick] (11,8) -- (13,8);
\draw[thick] (11,10) -- (13,10);
\draw[thick,->] (15,2) -- (20,5) ;
\draw[thick,->] (15,9) -- (20,6) ;
\draw[thick, ->, dashed] (20,3.5) arc (270:90:2) node[anchor=south east] {$p_1$};
\filldraw (20,3.5) circle (0.01) node[anchor=north east] {$p_2$};
\draw[very thick](24,6) circle (3);
\filldraw (24,6) circle (0.01) node[anchor=center] {server};
\draw[very thick,->] (28,6) -- (32,6) ;
\draw[very thick](33,4) rectangle (40,8);
\filldraw (36.5,6) circle (0.01) node[anchor=center] {monitor};
\end{tikzpicture}
\caption{A single-hop status update system with two sources employing per-source queueing. A single-buffer queue is dedicated to each source to hold the freshest packet from that source.}
\label{fig:twosource}
\end{figure}
Our main contributions are as follows:
\begin{itemize}
\item As the main contribution of this paper, under the assumption of Poisson packet arrivals, and exponentially distributed service times, we obtain the exact distributions of the AoI/PAoI processes for the two sources for the system of interest in Fig.~\ref{fig:twosource} as a function of the scheduling probabilities. The analysis is based on well-known absorbing Continuous-Time Markov Chains (CTMC) and does not employ relatively more specific tools such as Stochastic Hybrid Systems (SHS) that were recently proposed for AoI modeling \cite{yates_kaul_tit19} and used in several works. We believe that the simplicity of the tool we use to derive the AoI/PAoI distributions makes it a convenient tool for researchers and practitioners in this field. Subsequently, for given traffic parameters, the proposed analytical model is used to numerically obtain the Optimum Probabilistic Scheduling (OPS) policy which minimizes the weighted average AoI or PAoI, referred to as OPS-A and OPS-P, respectively.
\item A heavy-traffic analysis is presented to obtain closed-form expressions for the average per-source AoI/PAoI values which has enabled us to write the OPS-P policy in closed-form in heavy-traffic regime. On the other hand, the OPS-A policy for the heavy-traffic regime is shown to be obtainable by solving a quartic equation.
\item On the basis of the heavy-traffic analysis, we propose two age-agnostic heuristic schedulers that are quite easy to implement in comparison with age-aware schedulers and therefore they can be used in more challenging multi-hop scenarios and resource-constrained servers.
\end{itemize}
The paper is organized as follows. Section~\ref{section2} presents the related work. In Section~\ref{section3}, the analytical model is presented. The heavy-traffic regime is addressed in Section~\ref{section4} along with the two heavy-traffic analysis-based heuristic schedulers. Section~\ref{section5} addresses the analytical model and associated closed-form expressions for the Non-Preemptive Bufferless (NPB) variation of the same problem which is used as a benchmark in the numerical examples. In Section~\ref{section6}, we provide numerical examples for comparative evaluation of the age-agnostic schedulers of interest. We conclude in Section~\ref{section7}.
\section{Related Work}
\label{section2}
There has been a great deal of interest on AoI modeling and optimization problems in the context of communication systems since the reference \cite{kaul_etal_infocom12} first introduced the AoI concept in a single-source, single-server queueing system setting. The existing analytical models can be classified according to one or more of the following: (i) existence of one, two, or more information sources, (ii) random access vs. scheduled access, (iii) existence of transmission errors, (iv) performance metrics used, e.g., average AoI/PAoI values, age violation probabilities, etc., (v) buffer management mechanisms, (vi) scheduling algorithms, (vii) arrival and service processes used in the models, (viii) single-hop vs. multi-hop systems, (ix) continuous-time vs. discrete-time systems. The recent references \cite{kosta_etal_survey} and \cite{survey_Yates} present exhaustive surveys on existing work on AoI and moreover describe several open problems.
\subsection{Single-source Queueing Models} The average AoI is obtained for the M/M/1, M/D/1, and D/M/1 queues with infinite buffer capacity and FCFS (First Come First Serve) in \cite{kaul_etal_infocom12}.
The reference \cite{costa_etal_TIT16} obtains the AoI and PAoI distributions for small buffer systems, namely M/M/1/1 and M/M/1/2 queues, as well as the non-preemptive LCFS (Last Come First Serve) M/M/1/2$^{\ast}$ queue for which the packet waiting in the queue is replaced by a fresher packet arrival.
The average AoI and PAoI are obtained in \cite{najm_nasser_isit16} for the preemptive LCFS M/G/1/1 queueing system
where a new arrival preempts the packet in service and the service time distribution is assumed to follow a more general gamma distribution.
Average PAoI expressions are derived for an M/M/1 queueing system with packet transmission errors with various buffer management schemes in \cite{chen_huang_isit16}.
Expressions for the steady-state distributions of AoI and PAoI are derived in \cite{inoue_etal_tit19} for a wide range of single-source systems.
The authors of \cite{akar_etal_tcom20} obtain the exact distributions of AoI and PAoI in bufferless systems with probabilistic preemption and single-buffer systems with probabilistic replacement also allowing general phase type distributions to represent interrarival times and/or service times.
\subsection{Multi-source Queueing Models}
For analytical models involving multiple sources, the average PAoI for M/G/1 FCFS and bufferless M/G/1/1 systems with heterogeneous service time requirements are derived in \cite{huang_modiano} by which one can optimize the information packet generation rates from the sources.
An exact expression for the average AoI for the case of multi-source M/M/1 queueing model under FCFS scheduling is provided in \cite{moltafet2020average} and three approximate expressions are proposed for the average AoI for the more general multi-source M/G/1 queueing model.
The reference \cite{yates_kaul_tit19} investigates the multi-source M/M/1 model with FCFS, preemptive bufferless, and non-preemptive single buffer with replacement, using the theory of Stochastic Hybrid Systems (SHS) and obtain exact expressions for the average AoI. Hyperexponential (H$_2$) service time distribution for each source is considered in \cite{yates_etal_isit19} for an M/H$_2$/1/1 non-preemptive bufferless queue to derive an expression for the average per-source AoI per class.
The authors of \cite{farazi_etal_Asilomar19} study a self-preemptive system in which preemption of a source in service is allowed by a newly arriving packet from the same source and AoI expressions are derived using the SHS technique.
For distributional results, the MGF (Moment Generating Function) of AoI has been derived for a bufferless multi-source status update system using global preemption \cite{moltafet2021moment}. The work in \cite{abdelmagid2021closedform} considers a real-time status update system with an energy harvesting transmitter and derive the MGF of AoI in closed-form under certain queueing disciplines making use of SHS techniques.
The authors of \cite{dogan_akar_tcom21} obtain the exact distributions of AoI/PAoI in a probabilistically preemptive bufferless multi-source M/PH/1/1 queue where non-preemptive, globally preemptive, and self-preemptive systems are investigated using a common unifying framework. In \cite{optimumpreemption}, the optimum packet generation rates are obtained for self-preemptive and global preemptive bufferless systems for weighted AoI minimization, the latter case shown to allow closed-form expressions.
The most relevant existing analytical modeling work to this paper are the ones that study SBPSQ models for status update systems.
The merits of SBPSQ systems are presented in \cite{pappas_etal_ICC15} in terms of lesser transmissions and AoI reduction.
The authors of \cite{moltafet_isit} derive the average AoI expressions for a two-source M/M/1/2 queueing system in which a packet waiting in the queue can be replaced only by a newly arriving packet from the same source using SHS techniques. The per-source MGF of the AoI is also obtained \cite{moltafet_wcomlet} for the two-source system by
using SHS under self-preemptive and non-preemptive policies, the latter being a per-source queueing system.
However, in these works, the order of
packets in the queue does not change based on new arrivals and therefore AoI differentiation is not possible.
\subsection{Scheduling Algorithms for Random Arrivals}
We now review the existing work on AoI scheduling with random arrivals that are related to the scope of the current paper. The authors of \cite{bedewy_etal_tit21} consider the problem of minimizing the age of information in a multi-source system and they show that for any given sampling strategy, the Maximum Age First (MAF) scheduling strategy provides the best age performance among all scheduling strategies. The authors of \cite{joo_eryilmaz_TNET18} propose an age-based scheduler that combines age
with the interarrival times of incoming packets, in its scheduling decisions, to achieve improved information freshness at
the receiver. Although the analytical
results are obtained for only heavy-traffic, their numerical results reveal that the proposed algorithm achieves desirable freshness performance for lighter loads as well.
The authors of \cite{kadota_tn18} and \cite{kadota_tmc21} consider an asymmetric (source weights/service times are different) discrete-time wireless network with a base station serving multiple traffic streams using per-source queueing under the assumption of synchronized and random information packet arrivals, respectively, and propose nearly optimal age-based schedulers and age-agnostic randomized schedulers.
For the particular vase of random arrivals which is more relevant to the current paper, the reference \cite{kadota_tmc21} proposes a non-work-conserving stationary randomized policy for the single-buffer case with optimal scheduling probabilities depending on the source weights and source success probabilities through a square-root relationship and this policy is independent of the arrival rates. Moreover, they propose a work-conserving age-based Max-Weight scheduler for the same system whose performance is better and is close to the lower bound. We also note that similar results had been obtained in \cite{kadota_tn18} for synchronized arrivals. Our focus in this paper is on work-conserving age-agnostic schedulers that are more suitable for resource-constrained environments and multi-hop scenarios for which it is relatively difficult to keep track of per-source AoI information at the server.
\section{Probabilistic Scheduling}
\label{section3}
\subsection{Definitions of AoI and PAoI}
In a very general setting, let $T_j^{(i)}$ and $A_j^{(i)}$ for $j\geq 1$ denote the times
at which the $j$th successful source-$i$ packet is received by the monitor and generated at the source, respectively. We also let $\Psi^{(i)}_j$ denote the system time of the $j$th successful source-$i$ information packet which is the sum of the packet's queue wait time and service times, i.e., $\Psi^{(i)}_j=T_j^{(i)} - A_j^{(i)}$. Fig.~\ref{fig:samplepath} depicts a sample path of the source-$i$ AoI process $\Delta^{(i)}(t)$ which increases with unit slope from the value $\Phi_{j}^{(i)}$ at $t=T_{j}^{(i)}$ until $t=T_{j+1}^{(i)}$ in cycle-$j$. The peak value in cycle-$j$ is denoted by $\Psi_{j}^{(i)}$ which represents the Peak AoI process for source-$i$. These definitions apply to general status update systems. Note that for the specific system of Fig.~\ref{fig:twosource}, successful packets are the ones which are received by the monitor, and those that are replaced by fresher incoming packets while at the waiting room are unsuccessful packets. Let $\Delta^{(i)}$ and $\Phi^{(i)}$ denote the steady-state values for the source-$i$ processes $\Delta^{(i)}(t)$ and $\Phi_j^{(i)}$, respectively. The weighted average AoI, $W_{AoI}$, and the weighted average PAoI, $W_{PAoI}$, of the system are written as
\begin{equation}
W_{AoI} = \sum_{i=1}^2 \omega_i E [ \Delta^{(i)}],
\ W_{PAoI}= \sum_{i=1}^2 \omega_i E [ \Phi^{(i)}], \label{W}
\end{equation} where $\omega_i, i=1,2,$ with $\omega_1+\omega_2=1$ are the (normalized) weighting coefficients.
\subsection{System Model}
In this paper, we consider a non-preemptive status update system in Fig.~\ref{fig:twosource} with two sources, a server, and a monitor. Source-$i$, $i=1,2$ generates information packets (containing time-stamped status update information) according to a Poisson process with intensity $\lambda_i$. The generated packets become immediately available at the server.
The server maintains two single-buffer queues, namely $Q_i, i=1,2$, where $Q_i$ holds the freshest packet from source-$i$. This buffer management is referred to as Single-Buffer Per-Source Queueing (SBPSQ). A newly arriving source-$i$ packet receives immediate service if the server is idle and there are no waiting packets, joins $Q_i$ if it is empty, or replaces the staler source-$i$ packet held at $Q_i$. The server is work-conserving, and consequently an information packet is immediately transmitted unless the system is idle. Thus, when exactly one packet is waiting, at $Q_1$ or $Q_2$, upon the server becoming idle, that packet is immediately served. When both queues hold a waiting packet, the server transmits the packet from $Q_i$ with probability $p_i$, where $p_1 + p_2 = 1$. Therefore, the scheduler is age-agnostic: the server does not need to read the timestamp field in the packets or keep track of the instantaneous AoI values. The probabilities $p_i$ are to be chosen so as to provide AoI/PAoI differentiation.
At the end of a single transmission, positive/negative acknowledgments from the monitor to the server are assumed to be immediate, for the sake of convenience. The channel success probability for source-$i$ is $s_i$ and when a packet's transmission gets to start, it will be retransmitted until it is successfully received by the monitor.
Therefore, if a single transmission attempt is exponentially distributed with parameter $\nu_i$, then the transmission time of successful source-$i$ packets from the server to the monitor is exponentially distributed with parameter $\mu_i = \nu_i s_i$ once the retransmissions are taken into account. In this way, error-prone channels are also covered by the model.
We define the source-$i$ load as $\rho_i = \frac{\lambda_i}{\mu_i}$ and the total load $\rho = \rho_1+\rho_2$.
We also define the traffic mix parameter $r_i$ so that $\rho_i =\rho r_i$, and the traffic mix ratio $r=\frac{r_1}{r_2}$.
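The effective rate $\mu_i = \nu_i s_i$ follows because the number of transmissions of a served packet is geometric with success probability $s_i$, and a geometric sum of independent $\mathrm{Exp}(\nu_i)$ attempt times is again exponential. A quick Monte Carlo check of this identity (the values of $\nu$ and $s$ below are illustrative, not from the paper):

```python
import numpy as np

# Total service time = sum of a Geometric(s) number of Exp(nu) attempts;
# the claim is that this is Exp(nu * s), i.e. effective rate mu = nu * s.
rng = np.random.default_rng(0)
nu, s, n = 2.0, 0.4, 200_000                       # illustrative parameters
attempts = rng.geometric(s, size=n)                # number of (re)transmissions
total = rng.gamma(shape=attempts, scale=1.0 / nu)  # sum of Exp(nu) attempt times
print(total.mean(), 1.0 / (nu * s))                # both close to 1/(nu*s) = 1.25
```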
\begin{figure}[t]
\centering
\begin{tikzpicture}[scale=0.45]
\draw[thick,<->,gray] (9,13) -- (16,13);
\filldraw (12.5,13) circle (0.01) node[anchor=south, thick] {cycle-$j$};
\draw[ultra thick,->] (0,0) -- (23,0) node[anchor=north] {$t$};
\draw[ultra thick,->] (0,0) -- (0,14) node[anchor=west] {$\Delta^{(i)}(t)$};
\draw[ultra thick,red] (4.5,4.5) -- (9,9);
\filldraw[red] (4,4) circle (3pt);
\filldraw[red] (3.5,3.5) circle (3pt) ;
\filldraw[red] (3,3) circle (3pt);
\draw (0,9) node[anchor=east] {$\Phi_{j+1}^{(i)}$};
\draw (0,5) node[anchor=east] {$\Psi_j^{(i)}$};
\draw[dashed,gray] (0,9) -- (23,9);
\draw[dashed,gray] (0,5) -- (23,5);
\draw[dashed,gray] (0,2) -- (23,2);
\draw[dashed,gray] (9,12.5) -- (9,0) node[anchor=north, thick, black] {$T_{j}^{(i)}$};
\draw[dashed,very thick,gray] (9,5) -- (4,0) node[anchor=north, thick, black] {$A_{j}^{(i)}$};
\draw[dashed,very thick, gray] (16,2) -- (14,0) node[anchor=north east, thick, black] {$\quad A_{j+1}^{(i)}$};
\draw[ultra thick,red] (9,9) -- (9,5.2);
\draw[ultra thick,red] (9.1,5.1) -- (16,12);
\draw (0,12) node[anchor=east] {$\Phi_{j}^{(i)}$};
\draw (0,2) node[anchor=east] {$\Psi_{j+1}^{(i)}$};
\draw[dashed,gray] (16,12.5) -- (16,0) node[anchor=north, thick, black] {$T_{j+1}^{(i)}$};
\draw[dashed,gray] (0,12) -- (23,12);
\draw[ultra thick,red] (16,12) -- (16,2.2);
\draw[ultra thick,red] (16.1,2.1) -- (20,6);
\filldraw[red] (20.5,6.5) circle (3pt);
\filldraw[red] (21,7) circle (3pt) ;
\filldraw[red] (21.5,7.5) circle (3pt);
\draw[red] (9,5) circle (6pt);
\draw[red] (16,2) circle (6pt);
\end{tikzpicture}
\caption{Sample path of the AoI process $\Delta^{(i)}(t)$.}
\label{fig:samplepath}
\end{figure}
The analytical method we propose in the next subsection enables us to obtain the distribution of $\Delta^{(1)}$ and $\Phi^{(1)}$. By renumbering the sources, the distribution of $\Delta^{(2)}$ and $\Phi^{(2)}$ can also be obtained using the same method.
\subsection{Queueing Model}
\begin{table}[tb]
\begin{center}
\begin{tabular}{||c|c|c|c||}
\hline
State & Server & $Q_1$ & $Q_2$ \\ [0.5ex]
\hline\hline
1 & I & E & E \\
\hline
2 & B1 & E & E \\
\hline
3 & B1 & F & E \\
\hline
4 & B1 & E & F \\
\hline
5 & B1 & F & F \\
\hline
6 & B2 & E & E \\
\hline
7 & B2 & F & E \\
\hline
8 & B2 & E & F \\
\hline
9 & B2 & F & F \\ [0.1ex]
\hline \hline
\end{tabular}
\end{center}
\caption{Description of the 9 states of the CTMC $\bm{X}(t)$. I, E, and F, stand for idle, empty, and full, respectively. B1 (B2) stands for the server being busy serving a source-1 (source-2) packet.}
\label{step1}
\end{table}
The proposed method consists of two main steps. In the first step, we construct an irreducible Continuous Time Markov Chain (CTMC) denoted by $\bm{X}(t)$ with nine states each of which is described in detail in Table~\ref{step1}. The CTMC $\bm{X}(t)$ has the generator matrix $\bm{P}$ where
\begin{align}
\bm{P_0} & = \begin{pmatrix}
0 & \lambda_1 & 0 & 0 & 0 & \lambda_2 & 0 & 0 & 0 \\
\mu_1 & 0 & 0 & \lambda_1& \lambda_2 & 0 & 0 & 0 & 0 \\
0 & \mu_1 & 0 & 0 & 0 & \lambda_2 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & \lambda_1 & \mu_1 & 0 & 0 & 0 \\
0 & 0 & 0 & \mu_1 p_1 & 0 & 0 & \mu_1 p_2 & 0 & 0 \\
\mu_2 & 0 & 0 & 0 & 0 & 0 &\lambda_1 & \lambda_2 &0\\
0& \mu_2 & 0 & 0 & 0 & 0 & 0 & 0 & \lambda_2 \\
0& 0 & 0 & 0 & 0 & \mu_2 & 0 & 0 & \lambda_1 \\
0& 0 & 0 & \mu_2 p_1 & 0 & 0 & \mu_2 p_2 & 0 & 0
\end{pmatrix},
\end{align}
and $\bm{P}$ is the same as $\bm{P_0}$ except for its diagonal entries which are set to the corresponding row sums with a minus sign so that $\bm{P} \bm{1} =\bm{0}$ where $\bm{1}$ and $\bm{0}$ are column vectors of ones and zeros, respectively, of appropriate size. Let $\bm{\pi}$ be the stationary solution for $\bm{X}(t)$ so that
\begin{align}
\bm{\pi} \bm{P} = \bm{0}, \ \bm{\pi} \bm{1}=1,
\end{align}
with $\bm{\pi_j}$ denoting the steady-state probability of any new packet arrival finding the system in state $j$.
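As an illustration, step 1 can be sketched numerically with plain linear algebra. The following Python sketch (the function name and zero-based state indexing are ours) transcribes the transition rates implied by the state descriptions of Table~\ref{step1} and solves $\bm{\pi} \bm{P} = \bm{0}$, $\bm{\pi} \bm{1} = 1$ by appending the normalization equation:

```python
import numpy as np

def stationary_distribution(lam1, lam2, mu1, mu2, p1, p2):
    """Stationary distribution of the 9-state CTMC X(t) of step 1.

    A minimal sketch: off-diagonal rates follow the state descriptions
    of Table step1 (I/B1/B2 server, E/F queue contents)."""
    P = np.zeros((9, 9))
    # state 1: server idle, both queues empty
    P[0, 1] = lam1; P[0, 5] = lam2
    # state 2: B1, Q1 empty, Q2 empty
    P[1, 0] = mu1;  P[1, 2] = lam1; P[1, 3] = lam2
    # state 3: B1, Q1 full, Q2 empty
    P[2, 1] = mu1;  P[2, 4] = lam2
    # state 4: B1, Q1 empty, Q2 full
    P[3, 4] = lam1; P[3, 5] = mu1
    # state 5: B1, Q1 full, Q2 full (probabilistic pick on completion)
    P[4, 3] = mu1 * p1; P[4, 6] = mu1 * p2
    # state 6: B2, Q1 empty, Q2 empty
    P[5, 0] = mu2;  P[5, 6] = lam1; P[5, 7] = lam2
    # state 7: B2, Q1 full, Q2 empty
    P[6, 1] = mu2;  P[6, 8] = lam2
    # state 8: B2, Q1 empty, Q2 full
    P[7, 5] = mu2;  P[7, 8] = lam1
    # state 9: B2, Q1 full, Q2 full
    P[8, 3] = mu2 * p1; P[8, 6] = mu2 * p2
    np.fill_diagonal(P, -P.sum(axis=1))     # so that P 1 = 0
    # solve pi P = 0 with the normalization pi 1 = 1
    M = np.vstack([P.T, np.ones(9)])
    b = np.zeros(10); b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(M, b, rcond=None)
    return pi
```

The entries of the returned vector are the probabilities $\bm{\pi_j}$ with which an arriving packet finds the system in state $j$, by PASTA.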
In the second step of the proposed method, we construct an absorbing CTMC denoted by $\bm{Y}(t)$ with 14 transient states $1,2,\ldots,14$ and two absorbing states 15 and 16, which starts to evolve with the arrival of a source-1 packet, say packet $n$, into the system. If this packet turns out to be unsuccessful, then we transition to the absorbing state 15. If packet $n$ turns out to be successful, then we evolve until the reception of the next successful packet, say packet $m$, at which point the absorbing state 16 is transitioned to; this is referred to as a successful absorption. The 14 transient states are described in Table~\ref{step2}.
\begin{table}[tb]
\begin{center}
\begin{tabular}{||c|c|c|c|c||}
\hline
State & Server & Packet $n$ & $Q_1$ & $Q_2$ \\
[0.5ex]
\hline\hline
1 & \bf{B1}& N& E & E \\
\hline
2 & \bf{B1} & N & F & E \\
\hline
3 & B1 & N & F & E \\
\hline
4 & \bf{B1} & N & E & F \\
\hline
5 & \bf{B1} & N & F & F \\
\hline
6 & B1 & N & F & F \\
\hline
7 & B2 & N & F & E \\
\hline
8 & B2 & N & F & F \\
\hline
9 & I & Y & E & E \\
\hline
10 & B1 & Y & X & X \\ \hline
11 & B2 & Y & E & E \\ \hline
12 & B2 & Y & F & E \\ \hline
13 & B2 & Y & E & F \\ \hline
14 & B2 & Y & F & F \\
[0.1ex]
\hline \hline
\end{tabular}
\end{center}
\caption{Description of the 14 states of the CTMC $\bm{Y}(t)$. I, E, and F stand for idle, empty, and full, respectively, and B1 (B2) stands for the server being busy serving a source-1 (source-2) packet. The notation {\bf B1} means that the particular packet $n$ is being served. N and Y indicate whether packet $n$ has not yet been successful (N) or has been successful (Y), and X denotes don't care.}
\label{step2}
\end{table}
The generator of the absorbing CTMC, denoted by $\bm{Q}$, is of the form
\begin{align}
\bm{Q}=\begin{pmatrix}
\bm{A} & \bm{u} & \bm{s} \\
\bm{0} & 0 & 0 \\
\bm{0} & 0 & 0
\end{pmatrix},
\end{align}
where
\begin{align}
\bm{A_0} & =
\left(
\begin{array}{cccccccccccccc}
0 & \lambda_1 & 0 & \lambda_2 & 0 & 0 & 0 & 0 & \mu_1 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & \lambda_2 & 0 & 0 & 0 & 0 & \mu_1 & 0 & 0 & 0 & 0 \\
\mu_1 & 0 & 0 & 0 & 0 & \lambda_2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & \lambda_1 & 0 & 0 & 0 & 0 & 0 & \mu_1 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \mu_1 p_1 & 0 & \mu_1 p_2 & 0 & 0 \\
0 & 0 & 0 & \mu_1 p_1 & 0 & 0 & \mu_1 p_2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
\mu_2 & 0 & 0 & 0 & 0 & 0 & 0 & \lambda_2 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & \mu_2 p_1 & 0 & 0 & \mu_2 p_2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \lambda_1 & \lambda_2 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \mu_2 &0 &0 & \lambda_1 & \lambda_2 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \mu_2 & 0 & 0 & 0 & \lambda_2 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \mu_2 & 0 & 0 & \lambda_1 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \mu_2 p_1 & 0 & \mu_2 p_2 & 0 & 0 \\
\end{array}
\right),
\bm{u} = \begin{pmatrix}
0 \\
0 \\
\lambda_1 \\
0 \\
0 \\
\lambda_1 \\
\lambda_1 \\
\lambda_1 \\
0 \\
0 \\
0 \\
0 \\
0 \\
0
\end{pmatrix},
\bm{s} = \begin{pmatrix}
0 \\
0 \\
0 \\
0 \\
0 \\
0 \\
0 \\
0 \\
0 \\
\mu_1 \\
0 \\
0 \\
0 \\
0
\end{pmatrix},
\bm{h} = \begin{pmatrix}
0 \\
0 \\
0 \\
0 \\
0 \\
0 \\
0 \\
0 \\
1 \\
1 \\
1 \\
1 \\
1 \\
1
\end{pmatrix}
\label{big}
\end{align}
$\bm{A}$ is the same as $\bm{A_0}$ in \eqref{big} except for its diagonal entries which are set to the corresponding row sums with a minus sign so that $\bm{A} \bm{1} + \bm{u} + \bm{s}=\bm{0}$. Note that $\bm{A}$ is the sub-generator matrix corresponding to the transient states and $\bm{u}$ and $\bm{s}$
are the transition rate vectors from the transient states to the unsuccessful and successful absorbing states, respectively. The vector $\bm{h}$, which takes the unit value for the indices 9 to 14 and is zero otherwise, will be needed in deriving the AoI distribution.
The initial probability vector of the CTMC $\bm{Y}(t)$ is denoted by $\bm{\alpha}$ which is given as follows:
\begin{align}
\bm{\alpha} & = \begin{pmatrix}
\bm{\pi_1} & 0 & \bm{\pi_{23}} & \bm{0_{1 \times 2}} & \bm{\pi_{45}} & \bm{\pi_{67}} & \bm{\pi_{89}} & \bm{0_{1 \times 6}}
\end{pmatrix},
\end{align}
where $\bm{\pi_{ij}} := \bm{\pi_i} + \bm{\pi_j}$. In order to understand this, a new source-1 packet $n$ will find the system idle (state 1 of ${\bm X(t)}$) with probability $\bm{\pi_1}$ and therefore will be placed in service immediately, i.e., state 1 of ${\bm Y(t)}$. Similarly, packet $n$ will find the system in states 2 and 3 of ${\bm X(t)}$ with probability $\bm{\pi_{23}}$ and in either case this packet will start its journey from state 3 of ${\bm Y(t)}$ and so on. With this step, the two CTMCs ${\bm X(t)}$ and ${\bm Y(t)}$ are linked.
Let us visit Fig.~\ref{fig:samplepath} and relate it to the absorbing CTMC ${\bm Y(t)}$. The instant $A_{j}^{(i)}$ is the arrival time of packet $n$ of ${\bm Y(t)}$ and $T_{j+1}^{(i)}$ is the reception time of packet $m$. Therefore, the distribution of the absorption times of ${\bm Y(t)}$ in successful absorptions enables us to write the steady-state distribution of the PAoI process. In particular,
\begin{align}
\Pr \{ \Phi^{(1)} \leq x \} & = \Pr \{ {\bm Y(x)} =16 \ | \ {\bm Y(\infty)} = 16\} \\
& = \frac{\Pr \{ {\bm Y(x)} =16 \}}{\Pr\{ {\bm Y(\infty)} = 16 \} }.
\end{align}
Differentiating this expression with respect to $x$, we obtain the pdf (probability density function) of $\Phi^{(1)}$, denoted by $f_{ \Phi^{(1)}}(x)$, as follows:
\begin{align}
f_{\Phi^{(1)}}(x) & = \beta \ \bm{\alpha} \mathrm{e}^{\bm{A}x} \bm{s},
\end{align}
where $\beta^{-1} =\Pr\{ {\bm Y(\infty)} = 16 \} = -\bm{\alpha} \bm{A^{-1}} \bm{s} $.
Revisiting Fig.~\ref{fig:samplepath}, the probability $\Pr \{ x < \Delta^{(1)} \leq x + \delta x \}$ is proportional to $\Pr \{ {\bm Y(x)} \in \cal{S} \}$
with the subset $\cal{S}$ containing the six transient states 9 to 14 of ${\bm Y(t)}$ and the proportionality constant being the reciprocal of the mean holding time in $\cal{S}$ in successful absorptions.
Consequently, we write
\begin{align}
f_{ \Delta^{(1)}}(x) & = \kappa \ \bm{\alpha} \mathrm{e}^{\bm{A}x} \bm{h},
\end{align}
where $\kappa^{-1} = -\bm{\alpha} \bm{A^{-1}} \bm{h} $. The $k$th non-central moments of $\Phi^{(1)}$ and $\Delta^{(1)}$ are subsequently very easy to write:
\begin{align}
E \left[(\Phi^{(1)})^k\right] & = \beta \, k! \ \bm{\alpha}( -\bm{A})^{-k-1} \bm{s}, \quad
E \left[ (\Delta^{(1)})^k \right] = \kappa \, k! \ \bm{\alpha}( -\bm{A})^{-k-1} \bm{h}. \label{momentsPA}
\end{align}
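As a numerical sketch, these moment expressions can be evaluated with plain linear algebra. The helper below (the function name is ours) includes the $k!$ factor that arises from $\int_0^\infty x^k \mathrm{e}^{\bm{A}x}\,dx = k!\,(-\bm{A})^{-(k+1)}$, so for $k=1$ it reduces to the mean expressions used in the sequel:

```python
import numpy as np
from math import factorial

def absorbing_moment(alpha, A, v, k=1):
    """k-th non-central moment c * k! * alpha (-A)^{-(k+1)} v, with
    normalization c^{-1} = alpha (-A)^{-1} v; c plays the role of beta
    when v = s (PAoI) and of kappa when v = h (AoI)."""
    alpha = np.asarray(alpha, dtype=float)
    v = np.asarray(v, dtype=float)
    Ainv = np.linalg.inv(-np.asarray(A, dtype=float))
    c = 1.0 / (alpha @ Ainv @ v)
    return c * factorial(k) * alpha @ np.linalg.matrix_power(Ainv, k + 1) @ v
```

A quick sanity check: with a single transient state and $\bm{A}=(-\mu)$, $\bm{s}=(\mu)$, the absorption time is exponential with rate $\mu$, so the first and second moments are $1/\mu$ and $2/\mu^2$.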
\section{Heavy-traffic Regime}
\label{section4}
In this section, we study the so-called heavy-traffic regime, i.e., $\lambda_i \rightarrow \infty$. We first describe the analytical model in this regime along with the closed-form average AoI/PAoI expressions. Subsequently, we propose two heuristic schedulers based on this model that are devised to operate at any load as well as an optimum probabilistic scheduler on the basis of the analytical model of the previous section.
\subsection{Analytical Model}
In this case, the CTMC in step 1 of the proposed method reduces to a single state corresponding to a busy server with both queues full, since in the heavy-traffic regime neither queue can be empty nor can the server be idle. Moreover, the absorbing CTMC with 14 transient and 2 absorbing states reduces to one with 3 transient states and a single (successful) absorbing state. The transient states 1 and 3 indicate that packet $n$ and packet $m$ are in service, respectively, whereas transient state 2 indicates the transmission of a source-2 packet. Consequently, the matrices characterizing this absorbing CTMC take the following simpler form:
\begin{align}
\bm{A} & =
\left(
\begin{array}{ccc}
-\mu_1 & \mu_1 p_2 & \mu_1 p_1 \\
0 & -\mu_2 p_1 & \mu_2 p_1 \\
0 & 0 & -\mu_1
\end{array}
\right),
\bm{s} = \begin{pmatrix}
0 \\
0 \\
\mu_1
\end{pmatrix},
\bm{h} = \begin{pmatrix}
0 \\
1 \\
1
\end{pmatrix}, {\bm \alpha}=\begin{pmatrix}
1 \\
0 \\
0
\end{pmatrix}^T,
\label{small}
\end{align}
and the expressions in \eqref{momentsPA} are valid for the moments of AoI/PAoI in this heavy-traffic regime. Using the upper-triangular nature of the matrix $\bm{A}$ and \eqref{momentsPA}, it is not difficult to show that
\begin{align}
E [ \Phi^{(1)}] & = \frac{2}{\mu_1} + \frac{p_2}{\mu_2 p_1}, \ E [ \Phi^{(2)}] = \frac{2}{\mu_2} + \frac{p_1}{\mu_1 p_2}. \label{nail11}
\end{align}
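The closed forms in \eqref{nail11} can be cross-checked against the matrices in \eqref{small}. Below is a small sketch (the function name is ours) computing the mean successful absorption time of the 3-state chain directly:

```python
import numpy as np

def peak_aoi_heavy(mu1, mu2, p1):
    """Mean source-1 PAoI in heavy traffic from the 3-state absorbing
    CTMC: state 1 = packet n in service, state 2 = a source-2 packet in
    service, state 3 = packet m in service."""
    p2 = 1.0 - p1
    A = np.array([[-mu1, mu1 * p2, mu1 * p1],
                  [0.0, -mu2 * p1, mu2 * p1],
                  [0.0, 0.0, -mu1]])
    s = np.array([0.0, 0.0, mu1])
    alpha = np.array([1.0, 0.0, 0.0])
    Ainv = np.linalg.inv(-A)
    beta = 1.0 / (alpha @ Ainv @ s)   # equals 1: absorption is certain
    return beta * alpha @ Ainv @ Ainv @ s
```

For any parameter choice, the returned value agrees with $2/\mu_1 + p_2/(\mu_2 p_1)$.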
Defining the probability ratio $p=\frac{p_1}{p_2}$ and the weight ratio $\omega=\frac{\omega_1}{\omega_2}$, the weighted average PAoI simplifies to
\begin{align}
W_{PAoI} & = \frac{\omega_2}{\mu_1 \mu_2} \left( 2 \left( \omega \mu_2 + \mu_1 \right) + \omega \mu_1 p^{-1} + \mu_2 p \right).
\end{align}
Employing the Karush–Kuhn–Tucker (KKT) conditions on this expression and defining $\mu = \frac{\mu_1}{\mu_2}$, the optimum probability ratio that yields the minimum $W_{PAoI}$, denoted by $p_{PAoI}^*$, can easily be shown to satisfy the following:
\begin{align}
p_{PAoI}^* & \; {\propto } \; \sqrt{\omega \mu}. \label{HL}
\end{align}
The expression for the average AoI is somewhat more involved:
\begin{align}
E [ \Delta^{(1)}] & = \frac{1}{\mu_1} + \frac{\mu_2 p_1 + \mu_1}{\mu_1 \mu_2 p_1} - \frac{1}{\mu_2 p_1 + \mu_1 p_2}. \label{nail10}
\end{align}
A similar expression for $E [ \Delta^{(2)}]$ follows by symmetry. In this case, however, the KKT conditions for $W_{AoI}$ give rise to a quartic (4th-degree polynomial) equation whose roots do not admit convenient closed-form expressions; numerical techniques can instead be used to find the optimum probability ratio minimizing $W_{AoI}$, denoted by $p_{AoI}^*$.
For the special case $\mu_1 = \mu_2 = u$, the expression \eqref{nail10} reduces to
\begin{align}
E [ \Delta^{(1)}] & = \frac{1}{u} + \frac{1}{u p_1}, \ E [ \Delta^{(2)}] = \frac{1}{u} + \frac{1}{u p_2},
\end{align}
which are identical to the expressions for $E [ \Phi^{(1)}]$ and $E [ \Phi^{(2)}]$ in \eqref{nail11}, respectively, for this special case.
Employing the KKT conditions on $W_{AoI}$, it is straightforward to show that
\begin{align}
p_{AoI}^* & \; {\propto } \; \sqrt{\omega}. \label{HL2}
\end{align}
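For the $\mu_1=\mu_2=u$ case, \eqref{HL2} can be verified by a brute-force search over $p_1$. A sketch (the function name is ours; $\omega_2=1$ is taken without loss of generality since only the ratio $\omega$ matters):

```python
import numpy as np

def best_p_ratio(omega, u=1.0, grid=10**5):
    """Grid search for the p1 minimizing W_AoI when mu1 = mu2 = u,
    using E[Delta^(i)] = 1/u + 1/(u p_i); returns p = p1/p2."""
    p1 = np.linspace(1e-3, 1.0 - 1e-3, grid)
    w = omega * (1 / u + 1 / (u * p1)) + (1 / u + 1 / (u * (1 - p1)))
    best = p1[np.argmin(w)]
    return best / (1 - best)
```

The returned ratio is numerically close to $\sqrt{\omega}$, e.g., $\omega=4$ gives a ratio near 2 ($p_1 \approx 2/3$).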
When $\mu_1 \neq \mu_2$, we use exhaustive search to obtain $p_{AoI}^*$ throughout the numerical examples of this paper.
\subsection{Proposed Heuristic Schedulers}
The focus of this paper is on work-conserving schedulers that are neither age- nor timestamp-aware, i.e., the schedulers make decisions based only on the source indices of the packets in the waiting room, and not on the timestamp information in the packets or the instantaneous ages of the source processes.
This allows us to use simple-to-implement scheduling policies without the server having to process the timestamp information included in the information packets.
Given the traffic parameters $\lambda_i, \mu_i,$ and the weights $\omega_i$, for $i=1,2$, we first introduce the OPS-P (Optimum Probabilistic Scheduling for PAoI) policy that minimizes the weighted average PAoI of the system given in \eqref{W}.
OPS-A (Optimum Probabilistic Scheduling for AoI) is defined similarly so as to minimize the weighted average AoI in \eqref{W}.
We use the analytical model and exhaustive search to obtain OPS-P and OPS-A. Although the analytical model is computationally efficient, simpler heuristics are still beneficial, especially in situations where the traffic parameters vary in time and the server needs to update its scheduling policy without performing extensive computations.
For this purpose, we propose a generic heuristic probabilistic scheduler called H1$(p)$ that employs the probability ratio $p=\frac{p_1}{p_2}, p_1=\frac{p}{1+p}, p_2=\frac{1}{1+p},$ using the information about $\omega$ and $\mu$ only but not the actual arrival rates $\lambda_i, i=1,2$.
The second heuristic scheduler we propose is called H2$(p)$ which is obtained by determinizing the probabilistic policy H1$(p)$ as described below. In H2$(p)$, each source-$i$ maintains a bucket $b_i$ so that $b_1 + b_2=0$ at all times. Initially, $b_i=0, i=1,2$. When there are two packets in the waiting room, the source with the larger bucket value $b_i$ is selected for transmission. Every time a source-$1$ packet is transmitted, $b_1$ is decremented by $(1 - p_1)$ and $b_2$ is incremented by $p_2$. Similarly, when a source-$2$ packet is transmitted, $b_2$ is decremented by $(1 - p_2)$ and $b_1$ is incremented by $p_1$.
In order for the bucket values not to grow to infinity (which may occur if there are no packet arrivals from a specific source for an extended duration of time), we impose a limit on the absolute values of the buckets, i.e., $| b_i | < B$ where $B$ is called the bucket limit.
Note that in the heavy-traffic regime, H2$(p)$ is the determinized version of H1$(p)$. To see this, let $p=1,p_1=p_2=0.5$. In H1$(p)$, a geometrically distributed (with parameter 0.5) number of source-1 packets will be transmitted, followed by the transmission of a geometrically distributed (again with parameter 0.5) number of source-2 packets. On the other hand, in H2$(1)$, an alternating pattern arises where a single source-1 packet transmission is followed by a single source-2 packet transmission, i.e., round-robin scheduling. For both heuristic schedulers, the ratio of source-1 transmissions to source-2 transmissions is kept at $p$ in the heavy-traffic regime, but H2$(p)$ manages to maintain this ratio using deterministic patterns as opposed to being probabilistic. The bucket-based nature of the algorithm enables one to obtain this deterministic pattern for all values of the ratio parameter $p$, which is advantageous especially for the average AoI.
Moreover, in H2$(p)$, we seek to maintain a probability ratio $p$ of transmissions between the two sources throughout the entire operation of the system whereas this probability ratio is maintained in H1$(p)$ only during times when there are two packets in the waiting room.
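The bucket mechanism of H2$(p)$ can be sketched as follows (the class and method names are ours). With $p=1$, the sketch reproduces the round-robin pattern described above:

```python
class H2Scheduler:
    """Sketch of the bucket-based H2(p) policy.

    Since 1 - p1 = p2, every update preserves b1 + b2 = 0; the buckets
    are clipped to the limit B so they cannot drift without bound."""

    def __init__(self, p, B=50.0):
        self.p1 = p / (1.0 + p)
        self.p2 = 1.0 / (1.0 + p)
        self.b = [0.0, 0.0]   # buckets of source 1 and source 2
        self.B = B

    def select(self):
        """Pick the source to serve when both queues hold a packet
        (ties broken in favor of source 1) and update the buckets."""
        src = 0 if self.b[0] >= self.b[1] else 1
        if src == 0:
            self.b[0] -= 1.0 - self.p1
            self.b[1] += self.p2
        else:
            self.b[1] -= 1.0 - self.p2
            self.b[0] += self.p1
        self.b = [max(-self.B, min(self.B, x)) for x in self.b]
        return src + 1
```

In heavy traffic (both queues always full), repeatedly calling `select` keeps the long-run ratio of source-1 to source-2 transmissions at $p$ while producing a deterministic pattern rather than a probabilistic one.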
When we choose $p=p_{PAoI}^*$ with $p_{PAoI}^*$ being the optimum probability ratio in the heavy-traffic regime (see Eqn.~\eqref{HL}), we obtain our two proposed schedulers H1-P (Heuristic Scheduler 1 for PAoI) and H2-P (Heuristic Scheduler 2 for PAoI) for average weighted PAoI minimization, i.e.,
\begin{align}
\text{H1-P} & \equiv \text{H1}(p_{PAoI}^*), \ \text{H2-P} \equiv \text{H2}(p_{PAoI}^*),
\end{align} where the notation $\equiv$ is used to denote equivalence.
Similarly, we propose two schedulers for average weighted AoI minimization, namely H1-A (Heuristic Scheduler 1 for AoI) and H2-A (Heuristic Scheduler 2 for AoI), i.e.,
\begin{align}
\text{H1-A} &\equiv \text{H1}(p_{AoI}^*),
\ \text{H2-A} \equiv \text{H2}(p_{AoI}^*).
\end{align} For two-source networks with $\mu=1$, H1-P $\equiv$ H1-A, and H2-P $\equiv$ H2-A.
\section{Analytical Model for the Non-preemptive Bufferless Server}
\label{section5}
Up to now, we have considered SBPSQ servers with scheduling. In this section, we also study the Non-Preemptive Bufferless (NPB) server for the purpose of using it as a benchmark against the per-source queueing systems of our interest.
In the NPB scenario, a newly arriving source-$i$ packet is served immediately if the server is idle, and is otherwise discarded since there is no waiting room.
An analytical model for AoI and PAoI was recently proposed in \cite{dogan_akar_tcom21} for servers serving a general number of sources with more general phase-type distributed service times, also allowing arbitrary preemption probabilities. In this section, we make use of the model introduced in \cite{dogan_akar_tcom21} to provide closed-form expressions for the average AoI/PAoI for the specific case of two sources, no preemption, and exponentially distributed service times. While doing so, we use absorbing CTMCs as opposed to the Markov Fluid Queues (MFQ) used in \cite{dogan_akar_tcom21}. Both yield the same results, but ordinary CTMCs of absorbing type are more commonly known and established than MFQs.
In this case, the CTMC in step 1 is not needed due to the bufferless nature of the system.
Moreover, the absorbing CTMC with 14 transient and 2 absorbing states reduces to one with 4 transient states and 1 absorbing state. The transient states 1 and 4 indicate that packet $n$ and packet $m$ are in service, respectively, whereas in transient state 2, we wait for a packet arrival, and in transient state 3, a source-2 packet is in service. Consequently, the matrices characterizing this absorbing CTMC are written as:
\begin{align}
\bm{A} & =
\left(
\begin{array}{cccc}
-\mu_1 & \mu_1 & 0 & 0 \\
0 & -(\lambda_1+\lambda_2) & \lambda_2 & \lambda_1 \\
0 & \mu_2 & -\mu_2 & 0 \\
0 & 0 & 0 & -\mu_1
\end{array}
\right), \
\bm{s} = \begin{pmatrix}
0 \\
0 \\
0 \\
\mu_1
\end{pmatrix},\
\bm{h} = \begin{pmatrix}
0 \\
1 \\
1 \\
1
\end{pmatrix}, \
{\bm \alpha} =\begin{pmatrix}
1 \\
0 \\
0 \\
0
\end{pmatrix}^T,
\label{bufferless}
\end{align}
and the expressions \eqref{momentsPA} can be used for obtaining the moments of AoI/PAoI for the bufferless system. Using \eqref{momentsPA}, for the average per-source PAoI, one can easily show that
\begin{align}
E [ \Phi^{(1)}] & = \frac{1}{\mu_1} + \frac{(1+\rho)}{\lambda_1}, \ E [ \Phi^{(2)}] = \frac{1}{\mu_2} + \frac{(1+\rho)}{\lambda_2}.
\end{align}
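These closed forms can be cross-checked against the absorbing CTMC in \eqref{bufferless}. A sketch (the function name is ours) computing the mean absorption time numerically:

```python
import numpy as np

def npb_peak_aoi(lam1, lam2, mu1, mu2):
    """Mean source-1 PAoI for the non-preemptive bufferless server from
    the 4-state absorbing CTMC: state 1 = packet n in service, state 2 =
    idle wait for an arrival, state 3 = a source-2 packet in service,
    state 4 = packet m in service."""
    A = np.array([[-mu1, mu1, 0.0, 0.0],
                  [0.0, -(lam1 + lam2), lam2, lam1],
                  [0.0, mu2, -mu2, 0.0],
                  [0.0, 0.0, 0.0, -mu1]])
    s = np.array([0.0, 0.0, 0.0, mu1])
    alpha = np.array([1.0, 0.0, 0.0, 0.0])
    Ainv = np.linalg.inv(-A)
    return (alpha @ Ainv @ Ainv @ s) / (alpha @ Ainv @ s)
```

The result matches $1/\mu_1 + (1+\rho)/\lambda_1$ with $\rho = \lambda_1/\mu_1 + \lambda_2/\mu_2$.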
Recalling the definition of the traffic mix parameter $r_i$ and the traffic mix ratio $r=\frac{r_1}{r_2}$, the weighted average PAoI can be written in terms of $r_1$ as follows:
\begin{align}
W_{PAoI} & = \frac{\omega_1}{\mu_1} + \frac{\omega_2}{\mu_2} + \frac{\omega_1(1+\rho)}{\rho r_1 \mu_1} + \frac{\omega_2(1+\rho)}{\rho (1-r_1) \mu_2}.
\end{align}
Fixing $\rho$ and employing the KKT conditions for this expression, the optimum traffic mix ratio, denoted by $r_{PAoI}^*$, is given as:
\begin{align}
r_{PAoI}^* & \; {\propto } \; \sqrt{\frac{\omega}{\mu}}. \label{optimum_mix_ratio}
\end{align}
Note that the above ratio does not depend on the load parameter $\rho$.
If we define the arrival rate ratio $\lambda = \frac{\lambda_1}{\lambda_2}$, then the optimum arrival rate ratio, denoted by $\lambda_{PAoI}^*$, can be written as:
\begin{align}
\lambda_{PAoI}^* & \; {\propto } \; \sqrt{{\omega}{\mu}}. \label{optimum_arrivalrate_ratio}
\end{align}
The expression for the average AoI can be written as:
\begin{align}
E [ \Delta^{(1)}] & = \frac{1}{\mu_1 \mu_2} \left( {\mu_2} + \frac{\mu_2}{\rho_1} + \frac{\lambda_2}{\rho_1} + \frac{\mu_2 \rho_1 + \mu_1 \rho_2}{(1+\rho)} \right) .
\end{align}
A similar expression for $E [ \Delta^{(2)}]$ again follows by symmetry. In this case, however, the KKT conditions for $W_{AoI}$ again result in a quartic equation, and numerical techniques can be used to find the optimum traffic mix ratio, denoted by $r_{AoI}^*$.
\section{Numerical Examples}
\label{section6}
\subsection{Heavy-traffic Scenario}
In the first numerical example, we study the heavy-traffic regime and we depict the corresponding optimum probability ratio parameters $p_{PAoI}^*$ and $p_{AoI}^*$ as a function of the square root of the weight ratio parameter, $\sqrt{\omega}$, for three values of the service rate ratio parameter $\mu$ in Fig.~\ref{fig:ornek1}. When the service rates of the two sources are identical, then these probability ratios are the same for both AoI and PAoI. However, when the service rate ratio starts to deviate from unity, then the optimum probability ratio parameters for PAoI and AoI turn out to deviate from each other. More specifically, $p_{AoI}^* < p_{PAoI}^*$ when $\mu < 1$ and $p_{AoI}^* > p_{PAoI}^*$ when $\mu > 1$. Subsequently, we study whether one can use the easily obtainable $p_{PAoI}^*$ in Eqn.~\eqref{HL} in place of $p_{AoI}^*$ when the minimization of weighted AoI is sought.
Fig.~\ref{fig:ornek1b} depicts the ratio of $W_{AoI}$ obtained with the use of the probability ratio $p_{PAoI}^*$ to that obtained using $p_{AoI}^*$ as a function of the weight ratio $\omega$. We observe that $p_{PAoI}^*$ can be used in place of $p_{AoI}^*$ only when the rate ratio $\mu$ and the weight ratio $\omega$ are both close to unity. It is clear that when $\mu=1$, the depicted ratio in Fig.~\ref{fig:ornek1b} is always one irrespective of $\omega$; also see \eqref{HL2}.
\begin{figure}[bth]
\centering
\includegraphics[width=0.7\linewidth]{ornek1.pdf}
\caption{The probability ratio parameters $p_{PAoI}^*$ and $p_{AoI}^*$ as a function of the square root of the weight ratio parameter, $\sqrt{\omega}$, for three values of the service rate ratio parameter $\mu$.
}
\label{fig:ornek1}
\end{figure}
\begin{figure}[bth]
\centering
\includegraphics[width=0.7\linewidth]{ornek1b.pdf}
\caption{The ratio of $W_{AoI}$ obtained with the use of $p_{PAoI}^*$ to that using $p_{AoI}^*$ as a function of the weight ratio parameter, ${\omega}$, for five values of the service rate ratio parameter $\mu$.
}
\label{fig:ornek1b}
\end{figure}
\subsection{Numerical Study of the Proposed Schedulers}
A two-source network is called symmetric when $\omega=1, \ \mu=1$ in \cite{kadota_tmc21} and is called asymmetric otherwise. We first present our numerical results for symmetric networks; subsequently, asymmetric network results are presented, first for weighted average PAoI minimization and then for weighted average AoI minimization. We fix $\mu_2=1$ in all the numerical examples; thus, one time unit is taken as the average service time of source-2 packets. All the results are obtained through the analytical models developed in this paper, except for the bucket-based H2-P and H2-A, for which an analytical model is cumbersome to build for all values of the probability parameter $p$; we therefore resort to simulations.
\subsection{Symmetric Network}
The weighted average PAoI and AoI are depicted in Fig.~\ref{fig:symmetricexample} as a function of the traffic mix parameter $r$ on a log-log scale for two values of the load $\rho$ using the four schedulers OPS-P(A), NPB, H1-P(A), and H2-P(A); note that H1-P $\equiv$ H1-A and H2-P $\equiv$ H2-A for symmetric networks. We have the following observations about symmetric networks:
\begin{itemize}
\item For symmetric networks, the optimum traffic mix ratio is unity due to symmetry. The discrepancy between NPB and the SBPSQ systems is reduced as $r \rightarrow 1$ and vanishes as $r \rightarrow 1$ and $\rho \rightarrow \infty$. However, for moderate loads and when $r$ deviates from unity, SBPSQ has substantial advantages compared to NPB.
\item The proposed heuristic schedulers are developed without knowledge of the load and traffic mix, using only heavy-traffic conditions. However, we observe through numerical results that the heuristic schedulers perform very closely to the computation-intensive optimum probabilistic scheduler. This observation is in line with those made in \cite{joo_eryilmaz_TNET18}.
\item H2-P (H2-A) presents very similar performance to OPS-P (OPS-A) for all the load and traffic mix values we have considered, whereas H1-P and H1-A are slightly outperformed by them except for light and heavy loads. We also note that there are even cases where H2-A outperforms OPS-A in the high-load regime when $r \rightarrow 1$. This stems from the fact that in the heavy-traffic regime, determinized source scheduling strategies perform better than their probabilistic counterparts for AoI. However, this observation does not necessarily apply to PAoI.
\end{itemize}
\begin{figure}
\centering
\begin{subfigure}[b]{0.48\textwidth}
\centering
\caption{$W_{PAoI}$ ($\rho=1$)} \vspace*{-0.3cm}
\includegraphics[width=\textwidth]{SymmetricExample1}
\label{fig:symmetric1}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.48\textwidth}
\centering
\caption{$W_{AoI}$ ($\rho=1$)} \vspace*{-0.3cm}
\includegraphics[width=\textwidth]{SymmetricExample2}
\label{fig:symmetric2}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.48\textwidth}
\centering
\caption{$W_{PAoI}$ ($\rho=10$)} \vspace*{-0.3cm}
\includegraphics[width=\textwidth]{SymmetricExample3}
\label{fig:symmetric3}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.48\textwidth}
\centering
\caption{$W_{AoI}$ ($\rho=10$)} \vspace*{-0.3cm}
\includegraphics[width=\textwidth]{SymmetricExample4}
\label{fig:symmetric4}
\end{subfigure}
\caption{The weighted average PAoI or AoI as a function of the traffic mix parameter $r$ for two values of the load $\rho$.}
\label{fig:symmetricexample}
\end{figure}
\subsection{Asymmetric Network - Weighted Average PAoI Minimization}
In this numerical example, we depict $W_{PAoI}$ as a function of the load $\rho$ (on a log-log scale) employing four different buffer management/scheduling mechanisms, namely OPS-P, NPB, H1-P, and H2-P, in Fig.~\ref{fig:PAoIexample}, for which we fix $\omega=4$ and $\mu=4$. In Fig.~\ref{fig:PAoIexample1}, for a given load $\rho$, we set the traffic mix ratio to $r_{PAoI}^*=1$ as given in \eqref{optimum_mix_ratio}. This choice ensures that the source-$i$ packet generation intensities are chosen such that the NPB performance in terms of $W_{PAoI}$ is optimized.
On the other hand, for Fig.~\ref{fig:PAoIexample2}, we fix $r=1/4$, a choice which is quite different from $r_{PAoI}^*=1$, giving rise to a scenario in which the arrival rate selections are not as consistent with the weights and average service rates as in Fig.~\ref{fig:PAoIexample1}.
The following observations are made for this example.
\begin{itemize}
\item If the per-source packet arrival rates are chosen to optimize NPB as in Fig.~\ref{fig:PAoIexample1}, then the discrepancy between NPB and the three SBPSQ systems is reduced, especially for light and heavy loads. For this scenario, there are moderate load values at which NPB outperforms H1-P, but OPS-P and H2-P outperform NPB in all the cases we investigated.
\item When the arrival rates deviate from the optimum values derived for NPB as in Fig.~\ref{fig:PAoIexample2}, the advantage of using SBPSQ over NPB is magnified.
Therefore, one can conclude that the sensitivity of the performance of SBPSQ systems to the specific choice of the arrival rates is lower than that of NPB.
\item The performance of H2-P is quite similar to that of OPS-P for all the values we tried, and both slightly outperform H1-P. Note that H2-P relies only on knowledge of $\omega$ and $\mu$ and does not use the load or traffic mix; it can therefore safely be used at all loads and all traffic mixes as a simple-to-implement alternative to OPS-P for weighted PAoI minimization.
\end{itemize}
\begin{figure}
\centering
\begin{subfigure}[b]{0.48\textwidth}
\centering
\caption{$\omega=4, \ \mu=4, \ r=1$}\vspace*{-0.2cm}
\includegraphics[width=\textwidth]{PAoIexample1}
\label{fig:PAoIexample1}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.48\textwidth}
\centering
\caption{$\omega=4, \ \mu=4, \ r=1/4$} \vspace*{-0.2cm}
\includegraphics[width=\textwidth]{PAoIexample2}
\label{fig:PAoIexample2}
\end{subfigure}
\caption{$W_{PAoI}$ depicted as a function of the total load $\rho$ obtained with the algorithms OPS-P, NPB, H1-P, and H2-P for two different scenarios.}
\label{fig:PAoIexample}
\end{figure}
\subsection{Asymmetric Network - Weighted Average AoI Minimization}
We continue with the setting of the previous subsection but now focus on $W_{AoI}$, which is plotted as a function of the load $\rho$ (on a log-log scale) under the policies OPS-A, NPB, H1-A, and H2-A in Fig.~\ref{fig:AoIexample} with $\omega=4$ and $\mu=4$. The traffic mix parameter $r$ is fixed to $r=1$ and $r=1/4$ in Fig.~\ref{fig:AoIexample1} and Fig.~\ref{fig:AoIexample2}, respectively.
We have the following observations:
\begin{itemize}
\item The OPS-A curve is not monotonically decreasing with respect to the load $\rho$, unlike OPS-P, for the two values of the traffic mix parameter $r$ we have studied; it first decreases until a certain load threshold is reached but then slightly rises up to its heavy-traffic limit obtained with the probability ratio $p_{AoI}^*$. The corresponding load threshold appears to depend on the traffic mix.
\item The H2-A policy tracks the performance of OPS-A until the load threshold is reached, but when the load ranges between the load threshold and infinity, H2-A outperforms OPS-A. This observation does not pertain to the results obtained for weighted average PAoI minimization.
\end{itemize}
\begin{figure}
\centering
\begin{subfigure}[b]{0.48\textwidth}
\centering
\caption{$\omega=4, \ \mu=4, \ r=1$}\vspace*{-0.2cm}
\includegraphics[width=\textwidth]{AoIexample1}
\label{fig:AoIexample1}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.48\textwidth}
\centering
\caption{$\omega=4, \ \mu=4, \ r=1/4$} \vspace*{-0.2cm}
\includegraphics[width=\textwidth]{AoIexample2}
\label{fig:AoIexample2}
\end{subfigure}
\caption{$W_{AoI}$ depicted as a function of the total load $\rho$ obtained with the algorithms OPS-A, NPB, H1-A, and H2-A for two different scenarios.}
\label{fig:AoIexample}
\end{figure}
\section{Conclusions}
\label{section7}
We studied a two-source SBPSQ-based status update system with probabilistic scheduling and proposed a method to obtain the distributions and moments of AoI and PAoI numerically using absorbing CTMCs. The proposed technique is quite simple to implement, making it amenable to a wider range of analytical modeling problems regarding AoI/PAoI distributions. Moreover, we performed a heavy-traffic analysis for the same scenario to obtain closed-form expressions for the per-source average AoI/PAoI values, from which we proposed two simple-to-implement age-agnostic heuristic schedulers.
The proposed heuristic schedulers are developed without knowledge of the load and traffic mix, using only heavy-traffic conditions. However, we observed through numerical results that the heuristic schedulers perform very closely to their computation-intensive optimum probabilistic scheduler counterparts at all loads and traffic mixes. In particular, for weighted AoI minimization, the performance of our proposed heuristic scheduler H2-A tracked that of the optimum probabilistic scheduler OPS-A except for heavy loads, where it even outperformed OPS-A. For weighted PAoI minimization, the performance of our proposed heuristic scheduler H2-P tracked that of the optimum probabilistic scheduler OPS-P. Therefore, H2-A and H2-P are promising candidates for scheduling in SBPSQ systems, stemming from their performance and age-agnostic nature. Future work will be on extending the results to a general number of sources and non-exponentially distributed service times, and also to discrete time.
\bibliographystyle{unsrtnat}
\section{Introduction}
\label{section1}
Timely status updates play a key role in networked control and monitoring systems.
Age of Information (AoI) has recently been introduced to quantify the timeliness of information freshness in status update systems \cite{kaul_etal_SMAN11}.
In the general AoI framework outlined in \cite{kosta_etal_survey}, information sources sample a source-specific random process at random epochs and generate information packets containing the sample values as well as the sampling times.
On the other hand, servers gather the information packets from multiple sources and transmit them to a remote monitor using queueing, buffer management, scheduling, etc.
For a given source, AoI is defined as the time elapsed since the generation of the last successfully received update packet. Therefore, AoI is a source-specific random process whose sample paths increase in time with unit slope but are subject to abrupt downward jumps at information packet reception instances. The Peak AoI (PAoI) process is obtained by sampling the AoI process just before the cycle termination instances.
In this paper, we consider a non-preemptive status update system in Fig.~\ref{fig:twosource} with two sources, a server, and a monitor, with random information packet arrivals from the sources.
The server employs Single-Buffer Per-Source queueing (SBPSQ) for which the freshest packet from each source is held in a single buffer. The server is work-conserving, i.e., it does not idle unless the waiting room is empty, and it serves the packets probabilistically (probabilities denoted by $p_i$ in Fig.~\ref{fig:twosource}) when there are two waiting packets. The scheduler is age-agnostic and does not require the server to read the timestamp field in the packets or keep track of the instantaneous AoI values. The scheduling probabilities are to be chosen so as to provide AoI/PAoI differentiation. In this paper, we attempt to provide differentiation through the minimization of the weighted average AoI/PAoI. The motivation behind AoI differentiation is that, in a networked control system, the information about certain input processes needs to be kept relatively fresher at the control unit, since this information has a profound impact on the overall performance of the control system.
The studied model falls in the general framework of status update systems analytically studied in the literature; see the surveys on AoI \cite{kosta_etal_survey},\cite{survey_Yates} and the references therein for a collection of multi-source queueing models for AoI.
\begin{figure}[tb]
\centering
\begin{tikzpicture}[scale=0.32]
\draw[very thick](3,2) circle (2);
\draw (3,4) node[anchor=south] {\small{source-$2$}} ;
\draw[very thick] (3,9) circle (2) ;
\draw (3,11) node[anchor=south] {\small{source-$1$}} ;
\draw[very thick,->] (6,2) -- (10,2) ;
\draw (8,2) node[anchor=south] {$\lambda_2$};
\draw[very thick,->] (6,9) -- (10,9) ;
\draw (8,9) node[anchor=south] {$\lambda_1$};
\filldraw[fill=gray!50, thick] (13,1) rectangle(15,3);
\draw[thick] (11,1) -- (13,1);
\draw[thick] (11,3) -- (13,3);
\filldraw[fill=gray!50, thick] (13,8) rectangle(15,10);
\draw[thick] (11,8) -- (13,8);
\draw[thick] (11,10) -- (13,10);
\draw[thick,->] (15,2) -- (20,5) ;
\draw[thick,->] (15,9) -- (20,6) ;
\draw[thick, ->, dashed] (20,3.5) arc (270:90:2) node[anchor=south east] {$p_1$};
\filldraw (20,3.5) circle (0.01) node[anchor=north east] {$p_2$};
\draw[very thick](24,6) circle (3);
\filldraw (24,6) circle (0.01) node[anchor=center] {server};
\draw[very thick,->] (28,6) -- (32,6) ;
\draw[very thick](33,4) rectangle (40,8);
\filldraw (36.5,6) circle (0.01) node[anchor=center] {monitor};
\end{tikzpicture}
\caption{A single-hop status update system with two sources employing per-source queueing. A single-buffer queue is dedicated to each source to hold the freshest packet from that source.}
\label{fig:twosource}
\end{figure}
Our main contributions are as follows:
\begin{itemize}
\item As the main contribution of this paper, under the assumption of Poisson packet arrivals, and exponentially distributed service times, we obtain the exact distributions of the AoI/PAoI processes for the two sources for the system of interest in Fig.~\ref{fig:twosource} as a function of the scheduling probabilities. The analysis is based on well-known absorbing Continuous-Time Markov Chains (CTMC) and does not employ relatively more specific tools such as Stochastic Hybrid Systems (SHS) that were recently proposed for AoI modeling \cite{yates_kaul_tit19} and used in several works. We believe that the simplicity of the tool we use to derive the AoI/PAoI distributions makes it a convenient tool for researchers and practitioners in this field. Subsequently, for given traffic parameters, the proposed analytical model is used to numerically obtain the Optimum Probabilistic Scheduling (OPS) policy which minimizes the weighted average AoI or PAoI, referred to as OPS-A and OPS-P, respectively.
\item A heavy-traffic analysis is presented to obtain closed-form expressions for the average per-source AoI/PAoI values, which has enabled us to write the OPS-P policy in closed form in the heavy-traffic regime. On the other hand, the OPS-A policy for the heavy-traffic regime is shown to be obtainable by solving a quartic equation.
\item On the basis of the heavy-traffic analysis, we propose two age-agnostic heuristic schedulers that are quite easy to implement in comparison with age-aware schedulers and can therefore be used in more challenging multi-hop scenarios and resource-constrained servers.
\end{itemize}
The paper is organized as follows. Section~\ref{section2} presents the related work. In Section~\ref{section3}, the analytical model is presented. The heavy-traffic regime is addressed in Section~\ref{section4} along with the two heavy-traffic analysis-based heuristic schedulers. Section~\ref{section5} addresses the analytical model and associated closed-form expressions for the Non-Preemptive Bufferless (NPB) variation of the same problem which is used as a benchmark in the numerical examples. In Section~\ref{section6}, we provide numerical examples for comparative evaluation of the age-agnostic schedulers of interest. We conclude in Section~\ref{section7}.
\section{Related Work}
\label{section2}
There has been a great deal of interest in AoI modeling and optimization problems in the context of communication systems since the reference \cite{kaul_etal_infocom12} first introduced the AoI concept in a single-source, single-server queueing system setting. The existing analytical models can be classified according to one or more of the following: (i) existence of one, two, or more information sources, (ii) random access vs. scheduled access, (iii) existence of transmission errors, (iv) performance metrics used, e.g., average AoI/PAoI values, age violation probabilities, etc., (v) buffer management mechanisms, (vi) scheduling algorithms, (vii) arrival and service processes used in the models, (viii) single-hop vs. multi-hop systems, (ix) continuous-time vs. discrete-time systems. The recent references \cite{kosta_etal_survey} and \cite{survey_Yates} present exhaustive surveys on existing work on AoI and moreover describe several open problems.
\subsection{Single-source Queueing Models} The average AoI is obtained for the M/M/1, M/D/1, and D/M/1 queues with infinite buffer capacity and FCFS (First Come First Serve) in \cite{kaul_etal_infocom12}.
The reference \cite{costa_etal_TIT16} obtains the AoI and PAoI distributions for small buffer systems, namely M/M/1/1 and M/M/1/2 queues, as well as the non-preemptive LCFS (Last Come First Serve) M/M/1/2$^{\ast}$ queue for which the packet waiting in the queue is replaced by a fresher packet arrival.
The average AoI and PAoI are obtained in \cite{najm_nasser_isit16} for the preemptive LCFS M/G/1/1 queueing system
where a new arrival preempts the packet in service and the service time distribution is assumed to follow a more general gamma distribution.
Average PAoI expressions are derived for an M/M/1 queueing system with packet transmission errors with various buffer management schemes in \cite{chen_huang_isit16}.
Expressions for the steady-state distributions of AoI and PAoI are derived in \cite{inoue_etal_tit19} for a wide range of single-source systems.
The authors of \cite{akar_etal_tcom20} obtain the exact distributions of AoI and PAoI in bufferless systems with probabilistic preemption and single-buffer systems with probabilistic replacement, also allowing general phase type distributions to represent interarrival times and/or service times.
\subsection{Multi-source Queueing Models}
For analytical models involving multiple sources, the average PAoI for M/G/1 FCFS and bufferless M/G/1/1 systems with heterogeneous service time requirements are derived in \cite{huang_modiano} by which one can optimize the information packet generation rates from the sources.
An exact expression for the average AoI for the case of multi-source M/M/1 queueing model under FCFS scheduling is provided in \cite{moltafet2020average} and three approximate expressions are proposed for the average AoI for the more general multi-source M/G/1 queueing model.
The reference \cite{yates_kaul_tit19} investigates the multi-source M/M/1 model with FCFS, preemptive bufferless, and non-preemptive single buffer with replacement, using the theory of Stochastic Hybrid Systems (SHS), and obtains exact expressions for the average AoI. A hyperexponential (H$_2$) service time distribution for each source is considered in \cite{yates_etal_isit19} for an M/H$_2$/1/1 non-preemptive bufferless queue to derive an expression for the average per-source AoI.
The authors of \cite{farazi_etal_Asilomar19} study a self-preemptive system in which preemption of a source in service is allowed by a newly arriving packet from the same source and AoI expressions are derived using the SHS technique.
For distributional results, the MGF (Moment Generating Function) of AoI has been derived for a bufferless multi-source status update system using global preemption in \cite{moltafet2021moment}. The work in \cite{abdelmagid2021closedform} considers a real-time status update system with an energy harvesting transmitter and derives the MGF of AoI in closed form under certain queueing disciplines, making use of SHS techniques.
The authors of \cite{dogan_akar_tcom21} obtain the exact distributions of AoI/PAoI in a probabilistically preemptive bufferless multi-source M/PH/1/1 queue where non-preemptive, globally preemptive, and self-preemptive systems are investigated using a common unifying framework. In \cite{optimumpreemption}, the optimum packet generation rates are obtained for self-preemptive and global preemptive bufferless systems for weighted AoI minimization, the latter case shown to allow closed-form expressions.
The existing analytical modeling works most relevant to this paper are those that study SBPSQ models for status update systems.
The merits of SBPSQ systems are presented in \cite{pappas_etal_ICC15} in terms of lesser transmissions and AoI reduction.
The authors of \cite{moltafet_isit} derive the average AoI expressions for a two-source M/M/1/2 queueing system in which a packet waiting in the queue can be replaced only by a newly arriving packet from the same source using SHS techniques. The per-source MGF of the AoI is also obtained in \cite{moltafet_wcomlet} for the two-source system by
using SHS under self-preemptive and non-preemptive policies, the latter being a per-source queueing system.
However, in these works, the order of
packets in the queue does not change based on new arrivals and therefore AoI differentiation is not possible.
\subsection{Scheduling Algorithms for Random Arrivals}
We now review the existing work on AoI scheduling with random arrivals that is related to the scope of the current paper. The authors of \cite{bedewy_etal_tit21} consider the problem of minimizing the age of information in a multi-source system and show that, for any given sampling strategy, the Maximum Age First (MAF) scheduling strategy provides the best age performance among all scheduling strategies. The authors of \cite{joo_eryilmaz_TNET18} propose an age-based scheduler that combines age with the interarrival times of incoming packets in its scheduling decisions, to achieve improved information freshness at the receiver. Although their analytical results are obtained only for heavy traffic, their numerical results reveal that the proposed algorithm achieves desirable freshness performance for lighter loads as well.
The authors of \cite{kadota_tn18} and \cite{kadota_tmc21} consider an asymmetric (source weights/service times are different) discrete-time wireless network with a base station serving multiple traffic streams using per-source queueing under the assumption of synchronized and random information packet arrivals, respectively, and propose nearly optimal age-based schedulers and age-agnostic randomized schedulers.
For the particular case of random arrivals, which is more relevant to the current paper, the reference \cite{kadota_tmc21} proposes a non-work-conserving stationary randomized policy for the single-buffer case with optimal scheduling probabilities depending on the source weights and source success probabilities through a square-root relationship and this policy is independent of the arrival rates. Moreover, they propose a work-conserving age-based Max-Weight scheduler for the same system whose performance is better and is close to the lower bound. We also note that similar results had been obtained in \cite{kadota_tn18} for synchronized arrivals. Our focus in this paper is on work-conserving age-agnostic schedulers that are more suitable for resource-constrained environments and multi-hop scenarios for which it is relatively difficult to keep track of per-source AoI information at the server.
\section{Probabilistic Scheduling}
\label{section3}
\subsection{Definitions of AoI and PAoI}
In a very general setting, let $T_j^{(i)}$ and $A_j^{(i)}$ for $j\geq 1$ denote the times
at which the $j$th successful source-$i$ packet is received by the monitor and generated at the source, respectively. We also let $\Psi^{(i)}_j$ denote the system time of the $j$th successful source-$i$ information packet, which is the sum of the packet's queue waiting time and service time, i.e., $\Psi^{(i)}_j=T_j^{(i)} - A_j^{(i)}$. Fig.~\ref{fig:samplepath} depicts a sample path of the source-$i$ AoI process $\Delta^{(i)}(t)$, which increases with unit slope from the value $\Psi_{j}^{(i)}$ at $t=T_{j}^{(i)}$ until $t=T_{j+1}^{(i)}$ in cycle-$j$. The peak value in cycle-$j$, attained just before $t=T_{j+1}^{(i)}$, is denoted by $\Phi_{j}^{(i)}$; the sequence $\Phi_{j}^{(i)}$ represents the Peak AoI (PAoI) process for source-$i$. These definitions apply to general status update systems. Note that for the specific system of Fig.~\ref{fig:twosource}, successful packets are the ones which are received by the monitor, and those that are replaced by fresher incoming packets while in the waiting room are unsuccessful packets. Let $\Delta^{(i)}$ and $\Phi^{(i)}$ denote the steady-state values for the source-$i$ processes $\Delta^{(i)}(t)$ and $\Phi_j^{(i)}$, respectively. The weighted average AoI, $W_{AoI}$, and the weighted average PAoI, $W_{PAoI}$, of the system are written as
\begin{equation}
W_{AoI} = \sum_{i=1}^2 \omega_i E [ \Delta^{(i)}],
\ W_{PAoI}= \sum_{i=1}^2 \omega_i E [ \Phi^{(i)}], \label{W}
\end{equation} where $\omega_i, i=1,2,$ with $\omega_1+\omega_2=1$ are the (normalized) weighting coefficients.
\subsection{System Model}
In this paper, we consider a non-preemptive status update system in Fig.~\ref{fig:twosource} with two sources, a server, and a monitor. Source-$i$, $i=1,2$ generates information packets (containing time-stamped status update information) according to a Poisson process with intensity $\lambda_i$. The generated packets become immediately available at the server.
The server maintains two single-buffer queues, namely $Q_i$, $i=1,2$, where $Q_i$ holds the freshest packet from source-$i$. This buffer management is referred to as Single-Buffer Per-Source Queueing (SBPSQ). A newly arriving source-$i$ packet receives immediate service if the server is idle and there are no waiting packets, or joins the empty $Q_i$, or replaces the existing staler source-$i$ packet at $Q_i$. The server is work-conserving, as a result of which an information packet is immediately transmitted unless the system is idle. Consequently, when the system has one packet waiting at $Q_i$ for $i=1$ or $i=2$ upon the server becoming idle, this packet from $Q_i$ is immediately served. When there are two packets waiting at the two queues, the server transmits the packet from $Q_i$ with probability $p_i$, where $p_1 + p_2 =1$. Therefore, the scheduler is age-agnostic and does not require the server to read the timestamp field in the packets or keep track of the instantaneous AoI values. The probabilities $p_i$ are to be chosen so as to provide AoI/PAoI differentiation.
At the end of a single transmission, positive/negative acknowledgments from the monitor to the server are assumed to be immediate, for the sake of convenience. The channel success probability for source-$i$ is $s_i$ and when a packet's transmission gets to start, it will be retransmitted until it is successfully received by the monitor.
Therefore, if a single transmission attempt is exponentially distributed with parameter $\nu_i$, then the transmission time of successful source-$i$ packets from the server to the monitor is exponentially distributed with parameter $\mu_i = \nu_i s_i$, taking the retransmissions into account. In this way, error-prone channels are also captured by the model.
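As a quick sanity check of this reduction (not part of the original analysis), the fact that a Geometric($s_i$) number of independent Exp($\nu_i$) attempt durations sums to an Exp($\nu_i s_i$) overall transmission time can be verified with a short Monte Carlo sketch; the parameter values below are arbitrary:

```python
import numpy as np

# Sketch: total service time of a packet = sum of Geom(s) i.i.d. Exp(nu)
# attempt durations; this sum should again be exponential with rate nu*s.
rng = np.random.default_rng(1)
nu, s_prob = 4.0, 0.25            # per-attempt rate and success probability
n = 200_000                       # number of simulated packets
attempts = rng.geometric(s_prob, size=n)            # attempts until success
durations = rng.exponential(1.0 / nu, size=attempts.sum())
starts = np.concatenate(([0], np.cumsum(attempts)[:-1]))
times = np.add.reduceat(durations, starts)          # per-packet total times
print(times.mean())               # should be close to 1/(nu*s_prob) = 1.0
```

Both the sample mean and the tail probability $\Pr\{T>t\}=\mathrm{e}^{-\nu s t}$ agree with the exponential claim up to Monte Carlo noise.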
We define the source-$i$ load as $\rho_i = \frac{\lambda_i}{\mu_i}$ and the total load $\rho = \rho_1+\rho_2$.
We also define the traffic mix parameter $r_i$ so that $\rho_i =\rho r_i$, and the traffic mix ratio $r=\frac{r_1}{r_2}$.
\begin{figure}[t]
\centering
\begin{tikzpicture}[scale=0.45]
\draw[thick,<->,gray] (9,13) -- (16,13);
\filldraw (12.5,13) circle (0.01) node[anchor=south, thick] {cycle-$j$};
\draw[ultra thick,->] (0,0) -- (23,0) node[anchor=north] {$t$};
\draw[ultra thick,->] (0,0) -- (0,14) node[anchor=west] {$\Delta^{(i)}(t)$};
\draw[ultra thick,red] (4.5,4.5) -- (9,9);
\filldraw[red] (4,4) circle (3pt);
\filldraw[red] (3.5,3.5) circle (3pt) ;
\filldraw[red] (3,3) circle (3pt);
\draw (0,9) node[anchor=east] {$\Phi_{j-1}^{(i)}$};
\draw (0,5) node[anchor=east] {$\Psi_j^{(i)}$};
\draw[dashed,gray] (0,9) -- (23,9);
\draw[dashed,gray] (0,5) -- (23,5);
\draw[dashed,gray] (0,2) -- (23,2);
\draw[dashed,gray] (9,12.5) -- (9,0) node[anchor=north, thick, black] {$T_{j}^{(i)}$};
\draw[dashed,very thick,gray] (9,5) -- (4,0) node[anchor=north, thick, black] {$A_{j}^{(i)}$};
\draw[dashed,very thick, gray] (16,2) -- (14,0) node[anchor=north east, thick, black] {$\quad A_{j+1}^{(i)}$};
\draw[ultra thick,red] (9,9) -- (9,5.2);
\draw[ultra thick,red] (9.1,5.1) -- (16,12);
\draw (0,12) node[anchor=east] {$\Phi_{j}^{(i)}$};
\draw (0,2) node[anchor=east] {$\Psi_{j+1}^{(i)}$};
\draw[dashed,gray] (16,12.5) -- (16,0) node[anchor=north, thick, black] {$T_{j+1}^{(i)}$};
\draw[dashed,gray] (0,12) -- (23,12);
\draw[ultra thick,red] (16,12) -- (16,2.2);
\draw[ultra thick,red] (16.1,2.1) -- (20,6);
\filldraw[red] (20.5,6.5) circle (3pt);
\filldraw[red] (21,7) circle (3pt) ;
\filldraw[red] (21.5,7.5) circle (3pt);
\draw[red] (9,5) circle (6pt);
\draw[red] (16,2) circle (6pt);
\end{tikzpicture}
\caption{Sample path of the AoI process $\Delta^{(i)}(t)$.}
\label{fig:samplepath}
\end{figure}
The analytical method we propose in the next subsection enables us to obtain the distribution of $\Delta^{(1)}$ and $\Phi^{(1)}$. By renumbering the sources, the distribution of $\Delta^{(2)}$ and $\Phi^{(2)}$ can also be obtained using the same method.
\subsection{Queueing Model}
\begin{table}[tb]
\begin{center}
\begin{tabular}{||c|c|c|c||}
\hline
State & Server & $Q_1$ & $Q_2$ \\ [0.5ex]
\hline\hline
1 & I & E & E \\
\hline
2 & B1 & E & E \\
\hline
3 & B1 & F & E \\
\hline
4 & B1 & E & F \\
\hline
5 & B1 & F & F \\
\hline
6 & B2 & E & E \\
\hline
7 & B2 & F & E \\
\hline
8 & B2 & E & F \\
\hline
9 & B2 & F & F \\ [0.1ex]
\hline \hline
\end{tabular}
\end{center}
\caption{Description of the 9 states of the CTMC $\bm{X}(t)$. I, E, and F, stand for idle, empty, and full, respectively. B1 (B2) stands for the server being busy serving a source-1 (source-2) packet.}
\label{step1}
\end{table}
The proposed method consists of two main steps. In the first step, we construct an irreducible Continuous Time Markov Chain (CTMC) denoted by $\bm{X}(t)$ with nine states each of which is described in detail in Table~\ref{step1}. The CTMC $\bm{X}(t)$ has the generator matrix $\bm{P}$ where
\begin{align}
\bm{P_0} & = \begin{pmatrix}
0 & \lambda_1 & 0 & 0 & 0 & \lambda_2 & 0 & 0 & 0 \\
\mu_1 & 0 & \lambda_1 & \lambda_2 & 0 & 0 & 0 & 0 & 0 \\
0 & \mu_1 & 0 & 0 & \lambda_2 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & \lambda_1 & \mu_1 & 0 & 0 & 0 \\
0 & 0 & 0 & \mu_1 p_1 & 0 & 0 & \mu_1 p_2 & 0 & 0 \\
\mu_2 & 0 & 0 & 0 & 0 & 0 &\lambda_1 & \lambda_2 &0\\
0& \mu_2 & 0 & 0 & 0 & 0 & 0 & 0 & \lambda_2 \\
0& 0 & 0 & 0 & 0 & \mu_2 & 0 & 0 & \lambda_1 \\
0& 0 & 0 & \mu_2 p_1 & 0 & 0 & \mu_2 p_2 & 0 & 0
\end{pmatrix},
\end{align}
and $\bm{P}$ is the same as $\bm{P_0}$ except for its diagonal entries which are set to the corresponding row sums with a minus sign so that $\bm{P} \bm{1} =\bm{0}$ where $\bm{1}$ and $\bm{0}$ are column vectors of ones and zeros, respectively, of appropriate size. Let $\bm{\pi}$ be the stationary solution for $\bm{X}(t)$ so that
\begin{align}
\bm{\pi} \bm{P} = 0, \ \bm{\pi} \bm{1}=1,
\end{align}
with $\bm{\pi_j}$ denoting the steady-state probability of any new packet arrival finding the system in state $j$.
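For concreteness, the construction of $\bm{P}$ and the computation of $\bm{\pi}$ can be sketched in Python/NumPy as follows; the transition logic is re-derived from the state semantics of Table~\ref{step1} (work-conserving server, single-buffer replacement) rather than transcribed from $\bm{P_0}$:

```python
import numpy as np

# States of X(t) as in the table: (server, Q1, Q2) with server in {I, B1, B2}
# and queues E (empty) or F (full); the ordering matches states 1..9.
states = [('I','E','E'),
          ('B1','E','E'), ('B1','F','E'), ('B1','E','F'), ('B1','F','F'),
          ('B2','E','E'), ('B2','F','E'), ('B2','E','F'), ('B2','F','F')]
idx = {st: i for i, st in enumerate(states)}

def generator(lam1, lam2, mu1, mu2, p1):
    p2 = 1.0 - p1
    P = np.zeros((9, 9))
    def add(src, dst, rate):
        P[idx[src], idx[dst]] += rate
    for st in states:
        srv, q1, q2 = st
        # arrivals: start service if idle, otherwise fill the own queue
        # (a replacement when the queue is already full keeps the state)
        if srv == 'I':
            add(st, ('B1','E','E'), lam1)
            add(st, ('B2','E','E'), lam2)
        else:
            if q1 == 'E': add(st, (srv,'F',q2), lam1)
            if q2 == 'E': add(st, (srv,q1,'F'), lam2)
            # completion: work-conserving, probabilistic pick if both full
            mu = mu1 if srv == 'B1' else mu2
            if q1 == 'E' and q2 == 'E':
                add(st, ('I','E','E'), mu)
            elif q1 == 'F' and q2 == 'E':
                add(st, ('B1','E','E'), mu)
            elif q1 == 'E' and q2 == 'F':
                add(st, ('B2','E','E'), mu)
            else:
                add(st, ('B1','E','F'), mu * p1)
                add(st, ('B2','F','E'), mu * p2)
    np.fill_diagonal(P, -P.sum(axis=1))   # generator rows sum to zero
    return P

def stationary(P):
    # solve pi P = 0 with pi 1 = 1 (least squares on the augmented system)
    A = np.vstack([P.T, np.ones(P.shape[0])])
    b = np.zeros(P.shape[0] + 1); b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi
```

For instance, `stationary(generator(1.0, 2.0, 3.0, 4.0, 0.5))` returns the nine steady-state probabilities in the state order of Table~\ref{step1}.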
In the second step of the proposed method, we construct an absorbing CTMC denoted by $\bm{Y}(t)$ with 14 transient states $1,2,\ldots,14$ and two absorbing states $15,16$, which starts to evolve with the arrival of a source-1 packet, say packet $n$, into the system. If this packet turns out to be unsuccessful, then we transition to the absorbing state 15. If packet $n$ turns out to be successful, then the chain evolves until the reception of the next successful source-1 packet, say packet $m$, at which point the absorbing state 16 is entered, which is referred to as a successful absorption. The 14 transient states are described in Table~\ref{step2}.
\begin{table}[tb]
\begin{center}
\begin{tabular}{||c|c|c|c|c||}
\hline
State & Server & Packet $n$ & $Q_1$ & $Q_2$ \\
[0.5ex]
\hline\hline
1 & \bf{B1}& N& E & E \\
\hline
2 & \bf{B1} & N & F & E \\
\hline
3 & B1 & N & F & E \\
\hline
4 & \bf{B1} & N & E & F \\
\hline
5 & \bf{B1} & N & F & F \\
\hline
6 & B1 & N & F & F \\
\hline
7 & B2 & N & F & E \\
\hline
8 & B2 & N & F & F \\
\hline
9 & I & Y & E & E \\
\hline
10 & B1 & Y & X & X \\ \hline
11 & B2 & Y & E & E \\ \hline
12 & B2 & Y & F & E \\ \hline
13 & B2 & Y & E & F \\ \hline
14 & B2 & Y & F & F \\
[0.1ex]
\hline \hline
\end{tabular}
\end{center}
\caption{Description of the 14 states of the CTMC $\bm{Y}(t)$. I, E, and F stand for idle, empty, and full, respectively, and B1 (B2) stands for the server being busy serving a source-1 (source-2) packet. The notation {\bf B1} means that the particular packet $n$ is being served. N (Y) indicates that packet $n$ has not (has) been successfully delivered yet, and X stands for don't care.}
\label{step2}
\end{table}
The generator for the absorbing CTMC, denoted by $\bm{Q}$ is in the form
\begin{align}
\bm{Q}=\begin{pmatrix}
\bm{A} & \bm{u} & \bm{s} \\
\bm{0} & 0 & 0 \\
\bm{0} & 0 & 0
\end{pmatrix},
\end{align}
where
\begin{align}
\bm{A_0} & =
\left(
\begin{array}{cccccccccccccc}
0 & \lambda_1 & 0 & \lambda_2 & 0 & 0 & 0 & 0 & \mu_1 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & \lambda_2 & 0 & 0 & 0 & 0 & \mu_1 & 0 & 0 & 0 & 0 \\
\mu_1 & 0 & 0 & 0 & 0 & \lambda_2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & \lambda_1 & 0 & 0 & 0 & 0 & 0 & \mu_1 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \mu_1 p_1 & 0 & \mu_1 p_2 & 0 & 0 \\
0 & 0 & 0 & \mu_1 p_1 & 0 & 0 & \mu_1 p_2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
\mu_2 & 0 & 0 & 0 & 0 & 0 & 0 & \lambda_2 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & \mu_2 p_1 & 0 & 0 & \mu_2 p_2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \lambda_1 & \lambda_2 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \mu_2 &0 &0 & \lambda_1 & \lambda_2 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \mu_2 & 0 & 0 & 0 & \lambda_2 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \mu_2 & 0 & 0 & \lambda_1 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \mu_2 p_1 & 0 & \mu_2 p_2 & 0 & 0 \\
\end{array}
\right),
\bm{u} = \begin{pmatrix}
0 \\
0 \\
\lambda_1 \\
0 \\
0 \\
\lambda_1 \\
\lambda_1 \\
\lambda_1 \\
0 \\
0 \\
0 \\
0 \\
0 \\
0
\end{pmatrix},
\bm{s} = \begin{pmatrix}
0 \\
0 \\
0 \\
0 \\
0 \\
0 \\
0 \\
0 \\
0 \\
\mu_1 \\
0 \\
0 \\
0 \\
0
\end{pmatrix},
\bm{h} = \begin{pmatrix}
0 \\
0 \\
0 \\
0 \\
0 \\
0 \\
0 \\
0 \\
1 \\
1 \\
1 \\
1 \\
1 \\
1
\end{pmatrix}
\label{big}
\end{align}
$\bm{A}$ is the same as $\bm{A_0}$ in \eqref{big} except for its diagonal entries which are set to the corresponding row sums with a minus sign so that $\bm{A} \bm{1} + \bm{u} + \bm{s}=\bm{0}$. Note that $\bm{A}$ is the sub-generator matrix corresponding to the transient states and $\bm{u}$ and $\bm{s}$
are the transition rate vectors from the transient states to the unsuccessful and successful absorbing states, respectively. The vector $\bm{h}$, which takes the value one at indices 9 to 14 and zero otherwise, will be needed in deriving the AoI distribution.
The initial probability vector of the CTMC $\bm{Y}(t)$ is denoted by $\bm{\alpha}$ which is given as follows:
\begin{align}
\bm{\alpha} & = \begin{pmatrix}
\bm{\pi_1} & 0 & \bm{\pi_{23}} & \bm{0_{1 \times 2}} & \bm{\pi_{45}} & \bm{\pi_{67}} & \bm{\pi_{89}} & \bm{0_{1 \times 6}}
\end{pmatrix},
\end{align}
where $\bm{\pi_{ij}} := \bm{\pi_i} + \bm{\pi_j}$. To see this, note that a new source-1 packet $n$ finds the system idle (state 1 of ${\bm X(t)}$) with probability $\bm{\pi_1}$ and is therefore placed in service immediately, i.e., state 1 of ${\bm Y(t)}$. Similarly, packet $n$ finds the system in state 2 or 3 of ${\bm X(t)}$ with probability $\bm{\pi_{23}}$, and in either case this packet starts its journey from state 3 of ${\bm Y(t)}$, and so on. With this step, the two CTMCs ${\bm X(t)}$ and ${\bm Y(t)}$ are linked.
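The mapping from $\bm{\pi}$ to $\bm{\alpha}$ can be sketched as follows (a small helper, with $\bm{\pi}$ indexed 0 to 8 for the nine states of Table~\ref{step1} and the result restricted to the 14 transient states):

```python
import numpy as np

def alpha_from_pi(pi):
    """Initial vector alpha of Y(t) built from the stationary vector pi
    of X(t), following the displayed formula: X-states 2,3 map to Y-state 3,
    X-states 4,5 to Y-state 6, 6,7 to Y-state 7, and 8,9 to Y-state 8."""
    a = np.zeros(14)
    a[0] = pi[0]            # X-state 1 (idle) -> Y-state 1
    a[2] = pi[1] + pi[2]    # X-states 2,3     -> Y-state 3
    a[5] = pi[3] + pi[4]    # X-states 4,5     -> Y-state 6
    a[6] = pi[5] + pi[6]    # X-states 6,7     -> Y-state 7
    a[7] = pi[7] + pi[8]    # X-states 8,9     -> Y-state 8
    return a
```

Since $\bm{\pi}$ is a probability vector, the entries of $\bm{\alpha}$ also sum to one.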
Let us visit Fig.~\ref{fig:samplepath} and relate it to the absorbing CTMC ${\bm Y(t)}$. The instance $A_{j}^{(i)}$ is the arrival time of packet $n$ of ${\bm Y(t)}$ and $T_{j+1}^{(i)}$ is the reception time of packet $m$. Therefore, the distribution of the absorption times of ${\bm Y(t)}$ in successful absorptions enables us to write the steady-state distribution of the PAoI process. In particular,
\begin{align}
\Pr \{ \Phi^{(1)} \leq x \} & = \Pr \{ {\bm Y(x)} =16 \ | \ {\bm Y(\infty)} = 16\} \\
& = \frac{\Pr \{ {\bm Y(x)} =16 \}}{\Pr\{ {\bm Y(\infty)} = 16 \} }
\end{align}
Differentiating this expression with respect to $x$, we obtain the pdf (probability density function) of $\Phi^{(1)}$, denoted by $f_{ \Phi^{(1)}}(x)$, as follows:
\begin{align}
f_{\Phi^{(1)}}(x) & = \beta \ \bm{\alpha} \mathrm{e}^{\bm{A}x} \bm{s},
\end{align}
where $\beta^{-1} =\Pr\{ {\bm Y(\infty)} = 16 \} = -\bm{\alpha} \bm{A^{-1}} \bm{s} $.
Revisiting Fig.~\ref{fig:samplepath}, the probability $\Pr \{ x < \Delta^{(1)} \leq x + \delta x \}$ is proportional to $\Pr \{ {\bm Y(x)} \in \cal{S} \}$
with the subset $\cal{S}$ containing the six transient states 9 to 14 of ${\bm Y(t)}$ and the proportionality constant being the reciprocal of the mean holding time in $\cal{S}$ in successful absorptions.
Consequently, we write
\begin{align}
f_{ \Delta^{(1)}}(x) & = \kappa \ \bm{\alpha} \mathrm{e}^{\bm{A}x} \bm{h},
\end{align}
where $\kappa^{-1} = -\bm{\alpha} \bm{A^{-1}} \bm{h} $. The $k$th non-central moments of $\Phi^{(1)}$ and $\Delta^{(1)}$ are subsequently very easy to write:
\begin{align}
E \left[(\Phi^{(1)})^k\right] & = \beta \ \bm{\alpha}( -\bm{A})^{-k-1} \bm{s}, \quad
E \left[ (\Delta^{(1)})^k \right] = \kappa \ \bm{\alpha}( -\bm{A})^{-k-1} \bm{h}. \label{momentsPA}
\end{align}
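The moment expressions in \eqref{momentsPA} are straightforward to evaluate numerically once $\bm{\alpha}$, $\bm{A}$, $\bm{s}$, and $\bm{h}$ are available; a small sketch follows (the two-phase matrices in the usage example are an arbitrary toy case, not the 14-state matrices above):

```python
import numpy as np

def phase_moments(alpha, A, s, h, k=1):
    """Evaluate E[Phi^k] and E[Delta^k] as in the moment formula above."""
    M = np.linalg.inv(-A)                    # (-A)^{-1}
    Mk1 = np.linalg.matrix_power(M, k + 1)   # (-A)^{-(k+1)}
    beta = 1.0 / (alpha @ M @ s)             # 1/Pr{successful absorption}
    kappa = 1.0 / (alpha @ M @ h)            # 1/(mean holding time in S)
    return beta * (alpha @ Mk1 @ s), kappa * (alpha @ Mk1 @ h)

# toy 2-state example: absorption time is Exp(1) followed by Exp(2)
A = np.array([[-1.0, 1.0],
              [0.0, -2.0]])
s = np.array([0.0, 2.0])
h = np.array([1.0, 1.0])
alpha = np.array([1.0, 0.0])
m_phi, m_delta = phase_moments(alpha, A, s, h, k=1)
```

For this toy case, $E[\Phi] = 1 + 1/2 = 1.5$, which the matrix formula reproduces.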
\section{Heavy-traffic Regime}
\label{section4}
In this section, we study the so-called heavy-traffic regime, i.e., $\lambda_i \rightarrow \infty$. We first describe the analytical model in this regime along with the closed-form average AoI/PAoI expressions. Subsequently, we propose two heuristic schedulers based on this model that are devised to operate at any load as well as an optimum probabilistic scheduler on the basis of the analytical model of the previous section.
\subsection{Analytical Model}
In this case, the CTMC in step 1 of the proposed method reduces to a single state corresponding to a busy server with both queues full, since in the heavy-traffic regime the queues can never be empty, nor can the server be idle. Moreover, the absorbing CTMC with 14 transient and 2 absorbing states reduces to one with 3 transient states and a single successful absorbing state, reached through the rate vector $\bm{s}$. The transient states 1 and 3 indicate that packet $n$ and packet $m$ are in service, respectively, whereas transient state 2 indicates the transmission of a source-2 packet. Consequently, the matrices characterizing this absorbing CTMC take the following simpler form:
\begin{align}
\bm{A} & =
\left(
\begin{array}{ccc}
-\mu_1 & \mu_1 p_2 & \mu_1 p_1 \\
0 & -\mu_2 p_1 & \mu_2 p_1 \\
0 & 0 & -\mu_1
\end{array}
\right),
\bm{s} = \begin{pmatrix}
0 \\
0 \\
\mu_1
\end{pmatrix},
\bm{h} = \begin{pmatrix}
0 \\
1 \\
1
\end{pmatrix}, {\bm \alpha}=\begin{pmatrix}
1 \\
0 \\
0
\end{pmatrix}^T,
\label{small}
\end{align}
and the expressions in \eqref{momentsPA} are valid for the moments of AoI/PAoI in this heavy-traffic regime. Using the upper-triangular nature of the matrix $\bm{A}$ and \eqref{momentsPA}, it is not difficult to show that
\begin{align}
E [ \Phi^{(1)}] & = \frac{2}{\mu_1} + \frac{p_2}{\mu_2 p_1}, \ E [ \Phi^{(2)}] = \frac{2}{\mu_2} + \frac{p_1}{\mu_1 p_2}. \label{nail11}
\end{align}
Defining the probability ratio $p=\frac{p_1}{p_2}$ and the weight ratio $\omega=\frac{\omega_1}{\omega_2}$, the weighted average PAoI simplifies to
\begin{align}
W_{PAoI} & = \frac{\omega_2}{\mu_1 \mu_2} \left( 2 \omega \mu_2 + 2 \mu_1 + \omega \mu_1 p^{-1} + \mu_2 p \right).
\end{align}
Employing the Karush–Kuhn–Tucker (KKT) conditions on this expression and defining $\mu = \frac{\mu_1}{\mu_2}$, the optimum probability ratio that yields the minimum $W_{PAoI}$, denoted by $p_{PAoI}^*$, can easily be shown to satisfy the following:
\begin{align}
p_{PAoI}^* & \; {\propto } \; \sqrt{\omega \mu}. \label{HL}
\end{align}
The expression for the average AoI is somewhat more involved:
\begin{align}
E [ \Delta^{(1)}] & = \frac{1}{\mu_1} + \frac{\mu_2 p_1 + \mu_1}{\mu_1 \mu_2 p_1} - \frac{1}{\mu_2 p_1 + \mu_1 p_2}. \label{nail10}
\end{align}
A similar expression for $E [ \Delta^{(2)}]$ is easy to write due to symmetry. However, in this case, the KKT conditions for $W_{AoI}$ give rise to a quartic equation, i.e., a 4th-degree polynomial equation, whose roots do not admit convenient closed-form expressions. Nevertheless, numerical techniques can be used to find the optimum probability ratio minimizing $W_{AoI}$, denoted by $p_{AoI}^*$.
For the special case $\mu_1 = \mu_2 = u$, the expression \eqref{nail10} reduces to
\begin{align}
E [ \Delta^{(1)}] & = \frac{1}{u} + \frac{1}{u p_1}, \ E [ \Delta^{(2)}] = \frac{1}{u} + \frac{1}{u p_2},
\end{align}
which are identical to the expressions for $E [ \Phi^{(1)}]$ and $E [ \Phi^{(2)}]$ in \eqref{nail11}, respectively, for the special case
$\mu_1 = \mu_2 = u$.
Employing the KKT conditions on $W_{AoI}$, it is straightforward to show that
\begin{align}
p_{AoI}^* & \; {\propto } \; \sqrt{\omega}. \label{HL2}
\end{align}
When $\mu_1 \neq \mu_2$, we use exhaustive search to obtain $p_{AoI}^*$ throughout the numerical examples of this paper.
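As a numerical illustration (with arbitrarily chosen parameters), the closed forms \eqref{nail11}, \eqref{nail10}, and the optimality claim \eqref{HL} can be cross-checked by a brute-force grid search over $p_1$:

```python
import numpy as np

def w_paoi(mu1, mu2, w1, p1):
    # weighted average PAoI assembled from the per-source closed forms
    p2, w2 = 1.0 - p1, 1.0 - w1
    return w1 * (2/mu1 + p2/(mu2*p1)) + w2 * (2/mu2 + p1/(mu1*p2))

def avg_aoi1(mu1, mu2, p1):
    # source-1 average AoI in the heavy-traffic regime
    p2 = 1.0 - p1
    return 1/mu1 + (mu2*p1 + mu1)/(mu1*mu2*p1) - 1/(mu2*p1 + mu1*p2)

mu1, mu2, w1 = 2.0, 3.0, 0.7                # arbitrary test parameters
omega, mu = w1 / (1.0 - w1), mu1 / mu2
p_star = np.sqrt(omega * mu)                # optimum ratio p1/p2
p1_star = p_star / (1.0 + p_star)
grid = np.linspace(0.01, 0.99, 981)
best = grid[np.argmin([w_paoi(mu1, mu2, w1, q) for q in grid])]
```

The grid minimizer lands within the grid resolution of $p_1^* = \frac{\sqrt{\omega\mu}}{1+\sqrt{\omega\mu}}$, and the AoI expression collapses to $\frac{1}{u}+\frac{1}{u p_1}$ when $\mu_1=\mu_2=u$, as stated above.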
\subsection{Proposed Heuristic Schedulers}
The focus of this paper is on work-conserving schedulers that are neither age- nor timestamp-aware, i.e., the schedulers make a decision only on the source indices of packets in the waiting room, and not on the timestamp information in the packets or the instantaneous ages of the source processes.
This allows us to use simple-to-implement scheduling policies without the server having to process the timestamp information included in the information packets.
Given the traffic parameters $\lambda_i, \mu_i,$ and the weights $\omega_i$, for $i=1,2$, we first introduce the OPS-P (Optimum Probabilistic Scheduling for PAoI) policy that minimizes the weighted average PAoI of the system given in \eqref{W}.
OPS-A (Optimum Probabilistic Scheduling for AoI) is defined similarly so as to minimize the weighted average AoI in \eqref{W}.
We use the analytical model and exhaustive search to obtain OPS-P and OPS-A. Although the analytical model is computationally efficient, simpler heuristics may still be beneficial, especially in situations where the traffic parameters vary in time and the server needs to update its scheduling policy without having to perform extensive computations.
For this purpose, we propose a generic heuristic probabilistic scheduler called H1$(p)$ that employs the probability ratio $p=\frac{p_1}{p_2}, p_1=\frac{p}{1+p}, p_2=\frac{1}{1+p},$ using the information about $\omega$ and $\mu$ only but not the actual arrival rates $\lambda_i, i=1,2$.
The second heuristic scheduler we propose is called H2$(p)$ which is obtained by determinizing the probabilistic policy H1$(p)$ as described below. In H2$(p)$, each source-$i$ maintains a bucket $b_i$ so that $b_1 + b_2=0$ at all times. Initially, $b_i=0, i=1,2$. When there are two packets in the waiting room, the source with the larger bucket value $b_i$ is selected for transmission. Every time a source-$1$ packet is transmitted, $b_1$ is decremented by $(1 - p_1)$ and $b_2$ is incremented by $p_2$. Similarly, when a source-$2$ packet is transmitted, $b_2$ is decremented by $(1 - p_2)$ and $b_1$ is incremented by $p_1$.
In order for the bucket values not to grow to infinity (which may occur if there are no packet arrivals from a specific source for an extended duration of time), we impose a limit on the absolute values of the buckets, i.e., $| b_i | < B$ where $B$ is called the bucket limit.
Note that in the heavy-traffic regime, H2$(p)$ is the determinized version of H1$(p)$. To see this, let $p=1,p_1=p_2=0.5$. In H1$(p)$, a geometrically distributed (with parameter 0.5) number of source-1 packets will be transmitted, followed by the transmission of a geometrically distributed (again with parameter 0.5) number of source-2 packets. On the other hand, in H2$(1)$, an alternating pattern arises where a single source-1 packet transmission is followed by a single source-2 packet transmission, i.e., round-robin scheduling. For both heuristic schedulers, the ratio of source-1 transmissions to source-2 transmissions is kept at $p$ in the heavy-traffic regime, but H2$(p)$ manages to maintain this ratio using deterministic patterns as opposed to being probabilistic. The bucket-based nature of the algorithm enables one to obtain this deterministic pattern for all values of the ratio parameter $p$, which is advantageous especially for average AoI.
Moreover, in H2$(p)$, we seek to maintain a probability ratio $p$ of transmissions between the two sources throughout the entire operation of the system whereas this probability ratio is maintained in H1$(p)$ only during times when there are two packets in the waiting room.
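The bucket mechanism of H2$(p)$ described above can be sketched in a few lines; the class and method names below are our own, and the driver assumes the heavy-traffic regime in which both sources always have a packet waiting:

```python
class H2Scheduler:
    """Sketch of the bucket-based scheduler H2(p) for two sources
    (class/method names are ours). p is the target ratio p1/p2 of
    source-1 to source-2 transmissions; B is the bucket limit."""

    def __init__(self, p, B=10.0):
        self.p1 = p / (1.0 + p)
        self.p2 = 1.0 / (1.0 + p)
        self.b = [0.0, 0.0]   # b[0] + b[1] == 0 at all times
        self.B = B

    def select(self):
        """Pick the source (0-indexed) with the larger bucket; called
        when both sources have a packet in the waiting room."""
        return 0 if self.b[0] >= self.b[1] else 1

    def transmit(self, i):
        """Bucket update after transmitting a packet of source i."""
        if i == 0:
            self.b[0] -= 1.0 - self.p1
            self.b[1] += self.p2
        else:
            self.b[1] -= 1.0 - self.p2
            self.b[0] += self.p1
        # enforce the bucket limit |b_i| < B
        self.b = [max(-self.B, min(self.B, v)) for v in self.b]

# Heavy-traffic sketch: both queues always non-empty.
sched = H2Scheduler(p=1.0)
pattern = []
for _ in range(6):
    i = sched.select()
    sched.transmit(i)
    pattern.append(i + 1)
# pattern == [1, 2, 1, 2, 1, 2]: round-robin, as discussed above.
```

For $p=1$ the buckets alternate between $(0,0)$ and $(-0.5,0.5)$, producing the round-robin pattern; for general $p$, the bounded buckets force the long-run ratio of source-1 to source-2 transmissions toward $p$.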
When we choose $p=p_{PAoI}^*$ with $p_{PAoI}^*$ being the optimum probability ratio in the heavy-traffic regime (see Eqn.~\eqref{HL}), we obtain our two proposed schedulers H1-P (Heuristic 1 Scheduler for PAoI) and H2-P (Heuristic 2 Scheduler for PAoI) for average weighted PAoI minimization, i.e.,
\begin{align}
\text{H1-P} & \equiv \text{H1}(p_{PAoI}^*), \ \text{H2-P} \equiv \text{H2}(p_{PAoI}^*),
\end{align} where the notation $\equiv$ is used to denote equivalence.
Similarly, we propose two schedulers for average weighted AoI minimization, namely H1-A (Heuristic 1 Scheduler for AoI) and H2-A (Heuristic 2 Scheduler for AoI), i.e.,
\begin{align}
\text{H1-A} &\equiv \text{H1}(p_{AoI}^*),
\ \text{H2-A} \equiv \text{H2}(p_{AoI}^*).
\end{align} For two-source networks with $\mu=1$, H1-P $\equiv$ H1-A, and H2-P $\equiv$ H2-A.
\section{Analytical Model for the Non-preemptive Bufferless Server}
\label{section5}
Up to now, we have considered SBPSQ servers with scheduling. In this section, we also study the Non-Preemptive Bufferless (NPB) server for the purpose of using it as a benchmark against the per-source queueing systems of our interest.
In the NPB scenario, a newly arriving source-$i$ packet is served immediately if the server is idle, and is otherwise discarded since there is no waiting room.
An analytical model for AoI and PAoI was recently proposed in \cite{dogan_akar_tcom21} for servers serving a general number of sources with more general phase-type distributed service times, also allowing arbitrary preemption probabilities. In this section, we make use of the model introduced in \cite{dogan_akar_tcom21} to provide closed-form expressions for the average AoI/PAoI for the specific case of two sources, no preemption, and exponentially distributed service times. While doing so, we use absorbing CTMCs as opposed to the Markov Fluid Queues (MFQ) used in \cite{dogan_akar_tcom21}. Both yield the same results, but ordinary CTMCs of absorbing type are more commonly known and established than MFQs.
In this case, the CTMC in step 1 is not needed due to the bufferless nature of the system.
Moreover, the absorbing CTMC with 14 transient and 2 absorbing states reduces to one with 4 transient states and 1 absorbing state. The transient states 1 and 4 indicate that packet $n$ and packet $m$ are in service, respectively, whereas in transient state 2, we wait for a packet arrival, and in transient state 3, a source-2 packet is in service. Consequently, the matrices characterizing this absorbing CTMC are written as:
\begin{align}
\bm{A} & =
\left(
\begin{array}{cccc}
-\mu_1 & \mu_1 & 0 & 0 \\
0 & -(\lambda_1+\lambda_2) & \lambda_2 & \lambda_1 \\
0 & \mu_2 & -\mu_2 & 0 \\
0 & 0 & 0 & -\mu_1
\end{array}
\right), \
\bm{s} = \begin{pmatrix}
0 \\
0 \\
0 \\
\mu_1
\end{pmatrix},\
\bm{h} = \begin{pmatrix}
0 \\
1 \\
1 \\
1
\end{pmatrix}, \
{\bm \alpha} =\begin{pmatrix}
1 \\
0 \\
0 \\
0
\end{pmatrix}^T,
\label{bufferless}
\end{align}
and the expressions \eqref{momentsPA} can be used for obtaining the moments of AoI/PAoI for the bufferless system. Using \eqref{momentsPA}, for the average per-source PAoI, one can easily show that
\begin{align}
E [ \Phi^{(1)}] & = \frac{1}{\mu_1} + \frac{(1+\rho)}{\lambda_1}, \ E [ \Phi^{(2)}] = \frac{1}{\mu_2} + \frac{(1+\rho)}{\lambda_2}.
\end{align}
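The moment expressions \eqref{momentsPA} are not reproduced in this excerpt, but the first PAoI moment can be sanity-checked with the standard phase-type identity $E[\Phi^{(1)}] = \bm{\alpha} (-\bm{A})^{-1} \mathbb{1}$ (mean time to absorption from the initial vector), which for the matrices in \eqref{bufferless} reproduces the closed form above; the parameter values below are our own choice:

```python
import numpy as np

lam1, lam2, mu1, mu2 = 0.6, 0.4, 2.0, 1.0
rho = lam1/mu1 + lam2/mu2          # total load

# Transient-state generator A from Eqn. (bufferless)
A = np.array([
    [-mu1,            mu1,   0.0,  0.0],
    [ 0.0, -(lam1 + lam2),  lam2, lam1],
    [ 0.0,            mu2,  -mu2,  0.0],
    [ 0.0,            0.0,   0.0, -mu1],
])
alpha = np.array([1.0, 0.0, 0.0, 0.0])

# Mean time to absorption: solve (-A) T = 1, then E[Phi^(1)] = alpha . T
T = np.linalg.solve(-A, np.ones(4))
peak_aoi_1 = alpha @ T

closed_form = 1.0/mu1 + (1.0 + rho)/lam1   # the display above
```

The two quantities agree to machine precision for any admissible choice of $\lambda_i, \mu_i$.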
Recalling the definition of the traffic mix parameter $r_i$ and the traffic mix ratio $r=\frac{r_1}{r_2}$, the weighted average PAoI can be written in terms of $r_1$ as follows:
\begin{align}
W_{PAoI} & = \frac{\omega_1}{\mu_1} + \frac{\omega_2}{\mu_2} + \frac{\omega_1(1+\rho)}{\rho r_1 \mu_1} + \frac{\omega_2(1+\rho)}{\rho (1-r_1) \mu_2}.
\end{align}
Fixing $\rho$ and employing the KKT conditions for this expression, the optimum traffic mix ratio, denoted by $r_{PAoI}^*$, is given as:
\begin{align}
r_{PAoI}^* & \; {\propto } \; \sqrt{\frac{\omega}{\mu}}. \label{optimum_mix_ratio}
\end{align}
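Since $W_{PAoI}$ is convex in $r_1$, the KKT condition can be checked numerically; equating the derivative to zero in fact gives $r^* = \sqrt{\omega/\mu}$ with proportionality constant one, which the convex search below (illustrative parameter values of our own) confirms:

```python
def w_paoi(r1, w1, w2, mu1, mu2, rho):
    """Weighted average PAoI of the NPB server as a function of the
    source-1 traffic mix r1 (the display above)."""
    return (w1/mu1 + w2/mu2
            + w1*(1.0 + rho)/(rho*r1*mu1)
            + w2*(1.0 + rho)/(rho*(1.0 - r1)*mu2))

def argmin_convex(f, lo, hi, iters=200):
    """Ternary search for the minimizer of a convex function on [lo, hi]."""
    for _ in range(iters):
        m1 = lo + (hi - lo)/3.0
        m2 = hi - (hi - lo)/3.0
        if f(m1) < f(m2):
            hi = m2
        else:
            lo = m1
    return 0.5*(lo + hi)

# omega = 4, mu = 1, rho = 1: the KKT condition predicts
# r* = r1/r2 = sqrt(omega/mu) = 2, independently of rho.
w1, w2, mu1, mu2, rho = 4.0, 1.0, 1.0, 1.0, 1.0
r1_opt = argmin_convex(lambda x: w_paoi(x, w1, w2, mu1, mu2, rho),
                       1e-9, 1.0 - 1e-9)
r_star = r1_opt / (1.0 - r1_opt)   # -> approximately 2.0
```

Rerunning the search with a different load $\rho$ leaves $r^*$ unchanged, consistent with the remark following \eqref{optimum_mix_ratio}.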
Note that the above ratio does not depend on the load parameter $\rho$.
If we define the arrival rate ratio $\lambda = \frac{\lambda_1}{\lambda_2}$, then the optimum arrival rate ratio, denoted by $\lambda_{PAoI}^*$, can be written as:
\begin{align}
\lambda_{PAoI}^* & \; {\propto } \; \sqrt{{\omega}{\mu}}. \label{optimum_arrivalrate_ratio}
\end{align}
The expression for the average AoI can be written as:
\begin{align}
E [ \Delta^{(1)}] & = \frac{1}{\mu_1 \mu_2} \left( {\mu_2} + \frac{\mu_2}{\rho_1} + \frac{\lambda_2}{\rho_1} + \frac{\mu_2 \rho_1 + \mu_1 \rho_2}{(1+\rho)} \right) .
\end{align}
A similar expression for $E [ \Delta^{(2)}]$ again follows by symmetry. However, in this case, the KKT conditions for the expression for $W_{AoI}$ again result in a quartic equation, in which case numerical techniques can be used to find the optimum traffic mix ratio, denoted by $r_{AoI}^*$.
\section{Numerical Examples}
\label{section6}
\subsection{Heavy-traffic Scenario}
In the first numerical example, we study the heavy-traffic regime and we depict the corresponding optimum probability ratio parameters $p_{PAoI}^*$ and $p_{AoI}^*$ as a function of the square root of the weight ratio parameter, $\sqrt{\omega}$, for three values of the service rate ratio parameter $\mu$ in Fig.~\ref{fig:ornek1}. When the service rates of the two sources are identical, then these probability ratios are the same for both AoI and PAoI. However, when the service rate ratio starts to deviate from unity, then the optimum probability ratio parameters for PAoI and AoI turn out to deviate from each other. More specifically, $p_{AoI}^* < p_{PAoI}^*$ when $\mu < 1$ and $p_{AoI}^* > p_{PAoI}^*$ when $\mu > 1$. Subsequently, we study whether one can use the easily obtainable $p_{PAoI}^*$ in Eqn.~\eqref{HL} in place of $p_{AoI}^*$ when the minimization of weighted AoI is sought.
Fig.~\ref{fig:ornek1b} depicts the ratio of $W_{AoI}$ obtained with the use of the probability ratio $p_{PAoI}^*$ to that obtained using $p_{AoI}^*$ as a function of the weight ratio $\omega$. We observe that $p_{PAoI}^*$ can be used in place of $p_{AoI}^*$ only when the rate ratio $\mu$ and the weight ratio $\omega$ are both close to unity. It is clear that when $\mu=1$, the depicted ratio in Fig.~\ref{fig:ornek1b} is always one irrespective of $\omega$; also see \eqref{HL2}.
\begin{figure}[bth]
\centering
\includegraphics[width=0.7\linewidth]{ornek1.pdf}
\caption{The probability ratio parameters $p_{PAoI}^*$ and $p_{AoI}^*$ as a function of the square root of the weight ratio parameter, $\sqrt{\omega}$, for three values of the service rate ratio parameter $\mu$.
}
\label{fig:ornek1}
\end{figure}
\begin{figure}[bth]
\centering
\includegraphics[width=0.7\linewidth]{ornek1b.pdf}
\caption{The ratio of $W_{AoI}$ obtained with the use of $p_{PAoI}^*$ to that using $p_{AoI}^*$ as a function of the weight ratio parameter, ${\omega}$, for five values of the service rate ratio parameter $\mu$.
}
\label{fig:ornek1b}
\end{figure}
\subsection{Numerical Study of the Proposed Schedulers}
A two-source network is called symmetric when $\omega=1, \ \mu=1$ in \cite{kadota_tmc21} and is asymmetric otherwise. We first present our numerical results for symmetric networks; subsequently, asymmetric network results are presented first for weighted average PAoI minimization, and then for weighted average AoI minimization. We fix $\mu_2=1$ in all the numerical examples. Thus, one time unit is taken as the average service time of source-2 packets. All the results are obtained through the analytical models developed in this paper except for the bucket-based H2-P and H2-A, for which an analytical model is cumbersome to build for all values of the probability parameter $p$ and we therefore resorted to simulations.
\subsection{Symmetric Network}
The weighted average PAoI or AoI is depicted in Fig.~\ref{fig:symmetricexample} as a function of the traffic mix parameter $r$ on a log-log scale for two values of the load $\rho$ using the four schedulers OPS-P(A), NPB, H1-P(A), and H2-P(A); note that H1-P $\equiv$ H1-A and H2-P $\equiv$ H2-A for symmetric networks. We have the following observations about symmetric networks:
\begin{itemize}
\item For symmetric networks, the optimum traffic mix should be unity due to symmetry. The discrepancy between NPB and the other SBPSQ systems is reduced as $r \rightarrow 1$ and it vanishes as $r \rightarrow 1$ and $\rho \rightarrow \infty$. However, for moderate loads and when $r$ deviates from unity, SBPSQ has substantial advantages compared to NPB.
\item The proposed heuristic schedulers are developed without the knowledge of load and traffic mix, using only heavy-traffic conditions. However, we observe through numerical results that the heuristic schedulers perform very closely to the computation-intensive optimum probabilistic scheduler. This observation is in line with those made in \cite{joo_eryilmaz_TNET18}.
\item H2-P (H2-A) presents very similar performance to OPS-P (OPS-A) for all the load and traffic mix values we have investigated, whereas H1-P and H1-A are slightly outperformed by them except for light and heavy loads. We also note that there are even cases when H2-A outperforms OPS-A in the high load regime when $r \rightarrow 1$. This stems from the fact that in the heavy-traffic regime, determinized source scheduling strategies perform better than their probabilistic counterparts for AoI. However, this observation does not necessarily apply to PAoI.
\end{itemize}
\begin{figure}
\centering
\begin{subfigure}[b]{0.48\textwidth}
\centering
\caption{$W_{PAoI}$ ($\rho=1$)} \vspace*{-0.3cm}
\includegraphics[width=\textwidth]{SymmetricExample1}
\label{fig:symmetric1}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.48\textwidth}
\centering
\caption{$W_{AoI}$ ($\rho=1$)} \vspace*{-0.3cm}
\includegraphics[width=\textwidth]{SymmetricExample2}
\label{fig:symmetric2}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.48\textwidth}
\centering
\caption{$W_{PAoI}$ ($\rho=10$)} \vspace*{-0.3cm}
\includegraphics[width=\textwidth]{SymmetricExample3}
\label{fig:symmetric3}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.48\textwidth}
\centering
\caption{$W_{AoI}$ ($\rho=10$)} \vspace*{-0.3cm}
\includegraphics[width=\textwidth]{SymmetricExample4}
\label{fig:symmetric4}
\end{subfigure}
\caption{The weighted average PAoI or AoI as a function of the traffic mix parameter $r$ for two values of the load $\rho$.}
\label{fig:symmetricexample}
\end{figure}
\subsection{Asymmetric Network - Weighted Average PAoI Minimization}
In this numerical example, we depict $W_{PAoI}$ as a function of the load $\rho$ (on a log-log scale) employing four different buffer management/scheduling mechanisms, namely OPS-P, NPB, H1-P, and H2-P, in Fig.~\ref{fig:PAoIexample}, for which we fix $\omega=4$ and $\mu=4$. In Fig.~\ref{fig:PAoIexample1}, for a given load $\rho$, we choose $\rho_i = \rho r_i$ with the traffic mix ratio set to $r_{PAoI}^*=1$ as given in \eqref{optimum_mix_ratio}. This choice ensures that the source-$i$ packet generation intensities are chosen such that NPB performance is optimized in terms of $W_{PAoI}$.
On the other hand, for Fig.~\ref{fig:PAoIexample2}, we fix $r=1/4$, a choice quite different from $r_{PAoI}^*=1$, giving rise to a scenario for which the arrival rate selections are not as consistent with the weights and average service rates as in Fig.~\ref{fig:PAoIexample1}.
The following observations are made for this example.
\begin{itemize}
\item If the per-source packet arrival rates are chosen to optimize NPB as in Fig.~\ref{fig:PAoIexample1}, then the discrepancy between NPB and the other three SBPSQ systems is reduced, especially for light and heavy loads. For this scenario, there are moderate load values at which NPB outperformed H1-P, but OPS-P and H2-P always outperformed NPB in all the cases we investigated.
\item When the arrival rates deviate from the optimum values derived for NPB as in Fig.~\ref{fig:PAoIexample2}, then the advantage of using SBPSQ with respect to NPB is magnified.
Therefore, one can conclude that the sensitivity of the performance of SBPSQ systems to the specific choice of the arrival rates is lower than that of NPB.
\item The performance of H2-P is quite similar to that of OPS-P for all the values we tried, both of which slightly outperform H1-P. We note that H2-P depends only on the knowledge of $\omega$ and $\mu$ and does not use the load and traffic mix; nevertheless, H2-P can safely be used at all loads and all traffic mixes as a simple-to-implement alternative to OPS-P for weighted PAoI minimization.
\end{itemize}
\begin{figure}
\centering
\begin{subfigure}[b]{0.48\textwidth}
\centering
\caption{$\omega=4, \ \mu=4, \ r=1$}\vspace*{-0.2cm}
\includegraphics[width=\textwidth]{PAoIexample1}
\label{fig:PAoIexample1}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.48\textwidth}
\centering
\caption{$\omega=4, \ \mu=4, \ r=1/4$} \vspace*{-0.2cm}
\includegraphics[width=\textwidth]{PAoIexample2}
\label{fig:PAoIexample2}
\end{subfigure}
\caption{$W_{PAoI}$ depicted as a function of the total load $\rho$ obtained with the algorithms OPS-P, NPB, H1-P, and H2-P for two different scenarios.}
\label{fig:PAoIexample}
\end{figure}
\subsection{Asymmetric Network - Weighted Average AoI Minimization}
In this example, we continue with the same example of the previous subsection but we focus on $W_{AoI}$ which is plotted as a function of the load $\rho$ (on a log-log scale) under the policies OPS-A, NPB, H1-A, and H2-A, in Fig.~\ref{fig:AoIexample} with $\omega=4$ and $\mu=4$. The traffic mix
parameter $r$ is fixed to $r=1$ and $r=1/4$ in Fig.~\ref{fig:AoIexample1} and Fig.~\ref{fig:AoIexample2}, respectively.
We have the following observations:
\begin{itemize}
\item Unlike the OPS-P curve, the OPS-A curve is not monotonically decreasing with respect to the load $\rho$ for the two values of the traffic mix parameter $r$ we have studied; it first decreases until a certain load threshold is reached, but then it slightly rises up to its heavy-traffic limit obtained with the probability ratio $p_{AoI}^*$. The corresponding load threshold value appears to depend on the traffic mix.
\item The H2-A policy tracks the performance of OPS-A until the load threshold is reached, but when the load ranges between the load threshold and infinity, H2-A outperforms OPS-A. This observation does not pertain to the results obtained for weighted average PAoI minimization.
\end{itemize}
\begin{figure}
\centering
\begin{subfigure}[b]{0.48\textwidth}
\centering
\caption{$\omega=4, \ \mu=4, \ r=1$}\vspace*{-0.2cm}
\includegraphics[width=\textwidth]{AoIexample1}
\label{fig:AoIexample1}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.48\textwidth}
\centering
\caption{$\omega=4, \ \mu=4, \ r=1/4$} \vspace*{-0.2cm}
\includegraphics[width=\textwidth]{AoIexample2}
\label{fig:AoIexample2}
\end{subfigure}
\caption{$W_{AoI}$ depicted as a function of the total load $\rho$ obtained with the algorithms OPS-A, NPB, H1-A, and H2-A for two different scenarios.}
\label{fig:AoIexample}
\end{figure}
\section{Conclusions}
\label{section7}
We studied a two-source SBPSQ-based status update system with probabilistic scheduling and proposed a method to obtain the distributions and moments of AoI and PAoI numerically using CTMCs of absorbing type. The proposed technique is quite simple to implement, making it amenable to use for a wider range of analytical modeling problems regarding AoI/PAoI distributions. Moreover, we performed a heavy-traffic analysis for the same scenario to obtain closed-form expressions for the per-source average AoI/PAoI values, from which we proposed two simple-to-implement age-agnostic heuristic schedulers.
The proposed heuristic schedulers are developed without the knowledge of load and traffic mix, using only heavy-traffic conditions. However, we observed through numerical results that the heuristic schedulers perform very closely to their computation-intensive optimum probabilistic scheduler counterparts at all loads and traffic mixes. In particular, for weighted AoI minimization, our proposed heuristic scheduler H2-A's performance tracked that of the optimum probabilistic scheduler OPS-A except for heavy loads, where it even outperformed OPS-A. For weighted PAoI minimization, our proposed heuristic scheduler H2-P's performance tracked that of the optimum probabilistic scheduler OPS-P. Therefore, H2-A and H2-P are promising candidates for scheduling in SBPSQ systems stemming from their performance and age-agnostic nature. Future work will be on extending the results to a general number of sources and non-exponentially distributed service times, and also to the discrete-time setting.
\bibliographystyle{unsrtnat}
\section{Introduction}
Let $\mathcal{P}$ denote the set of polynomials in the complex variable $z.$ For a compact subset $K$ of the complex plane $\mathbb C,$ let $Rat(K)$ be the set of rational functions with poles off $K.$
For $1 \le t < \infty$ with conjugate exponent $t' = \dfrac{t}{t-1}$ and a finite positive measure $\mu$ supported on $K,$ let $R^t(K, \mu)$ denote the closure in $L^t (\mu )$ of $Rat(K).$ In the case that $K$ is polynomially convex, $R^t(K, \mu) = P^t(\mu ), $ the closure of $\mathcal{P}$ in $L^t(\mu ).$ Multiplication by $z$ defines a bounded linear operator on $R^t(K, \mu)$ which we will denote by $S_\mu.$ A rationally invariant subspace of $R^t(K, \mu)$ is a closed linear subspace $M \subset R^t(K, \mu)$ such that $r(S_\mu) M \subset M$ for $r\in Rat(K).$
For a subset $A \subset \mathbb C,$ we set $\bar A$ or $clos (A)$ for its closure, $A^c$ for its complement, and $\chi _A$ for its characteristic function. For $\lambda\in \mathbb C$ and $\delta > 0,$ we set $B(\lambda, \delta) = \{z: |z - \lambda | <\delta \}$ and $\mathbb D = B(0,1).$ Let $m$ be the normalized Lebesgue measure $\frac{d\theta}{2\pi}$ on $\partial {\mathbb D}.$ For a compactly supported finite measure $\nu$ on $\mathbb {C},$ we denote the support of $\nu$ by $spt(\nu ).$ For a compact subset $K,$ we denote the boundary of $K$ by $\partial K.$ The inner boundary of $K$, denoted by $\partial _i K$, is the set of boundary points
which do not belong to the boundary of any connected component of $\mathbb C\setminus K.$
For $\lambda \in K,$ we denote evaluation on $Rat(K)$ at $\lambda$ by $e_\lambda ,$ i.e. $e_\lambda (r) = r(\lambda )$ for $r\in Rat(K).$
$\lambda$ is a bounded point evaluation (bpe) for $R^t(K, \mu)$ if $e_\lambda$ extends to a bounded linear functional on $R^t(K, \mu),$ which we will also denote by $e_\lambda .$ We denote the set of bounded point evaluations for $R^t(K, \mu)$ by $bpe(R^t(K, \mu))$ and set $M_\lambda = \|e_\lambda\|_{R^t(K, \mu)^*} .$
For $\lambda _0 \in K,$ if there is a neighborhood of $\lambda _0,$ $B(\lambda _0, \delta),$ consisting entirely
of bpe's for $R^t(K, \mu)$ with $\lambda \rightarrow e_\lambda (f)$ analytic in $B(\lambda _0, \delta)$ for all $f \in R^t(K, \mu),$ then we say that $\lambda _0$ is an analytic bounded point evaluation (abpe) for $R^t(K, \mu).$ We denote the
set of abpe's for $R^t(K, \mu)$ by $abpe(R^t(K, \mu)).$ Clearly analytic bounded point evaluations are contained in the interior of $K.$
\cite{thomson} proves a remarkable structural theorem for $P^t(\mu ):$
There is a Borel partition $\{\Delta_i\}_{i=0}^\infty$ of $spt\mu$ such that the space $P^t(\mu |_{\Delta_i})$ contains no nontrivial characteristic functions and
\[
\ P^t(\mu ) = L^t(\mu |_{\Delta_0})\oplus \left \{ \oplus _{i = 1}^\infty P^t(\mu |_{\Delta_i}) \right \}.
\]
Furthermore, if $U_i$ is the open set of analytic bounded point evaluations for
$P^t(\mu |_{\Delta_i})$ for $i \ge 1,$ then $U_i$ is a simply connected region and the closure of $U_i$ contains $\Delta_i.$
Because of Thomson's decomposition, the study of general $P^t(\mu )$ can be reduced to the case where $P^t(\mu )$ is irreducible (contains no nontrivial characteristic functions) and $abpe(P^t(\mu ))$ is a nonempty simply connected open set whose
closure contains $spt \mu.$ \cite{oy95} shows that one can use the Riemann Mapping Theorem to further reduce to the case where $abpe(P^t(\mu )) = \mathbb D.$ In this case, \cite{ars} obtained the following remarkable structural theorem.
\begin{ARSTheorem} \label{ARSTheorem}
Suppose that $\mu$ is supported in $\bar{\mathbb D}$ and is such that
$abpe (P^t(\mu )) = \mathbb D$ and $P^t(\mu )$ is irreducible, and that $\mu (\partial \mathbb D)> 0.$ Then:
\newline
a) If $f \in P^t(\mu )$ then the nontangential limit $f^*(z)$ of f exists for $\mu |_{\partial \mathbb D}$-
almost all $z,$ and $f^* = f |_{\partial \mathbb D}$ as elements of $L^t(\mu |_{\partial \mathbb D}).$
\newline
b) Every nonzero invariant subspace of $P^t(\mu )$ has index 1.
\end{ARSTheorem}
\cite{ce93} extends some results of Thomson's Theorem to the space $R^t(K,\mu )$. \cite{b08} expresses $R^t(K,\mu )$ as a direct sum as follows: with the assumption that the diameters of the components of $\mathbb C\setminus K$ are bounded away from zero, there exists a Borel partition $\{\Delta_i\}_{i=0}^\infty$ of $spt\mu$ and matching compact subsets $\{K_i\}_{i=0}^\infty$ of $K$ such that $\Delta_i \subset K_i$
and
\[
\ R^t(K,\mu ) = L^t(\mu |_{\Delta_0})\oplus \left \{ \oplus _{i = 1}^\infty R^t(K_i, \mu |_{\Delta_i}) \right \}, \tag{1-1}
\]
where for each $i\ge 1$ the corresponding summand $R^t(K_i, \mu |_{\Delta_i})$ is irreducible in the
sense that it contains no non-trivial characteristic function. Furthermore, if $U_i = abpe(R^t(K_i, \mu |_{\Delta_i}))$ for $i \ge 1,$ then $U_i$ is a connected region and the closure of $U_i$ contains $\Delta_i.$
The results includes both Thomson's theorem and results of \cite{ce93}.
It is evident that some restriction on the nature of $\mathbb C \setminus K$ is necessary in order to ensure that (1-1) is valid in general. Because of Brennan's decomposition under some additional conditions on $\mathbb C \setminus K,$ it is reasonable to assume, in the study of general $R^t(K, \mu ),$ that $R^t(K, \mu )$ is irreducible and $abpe(R^t(K,\mu ))$ is a nonempty connected open set whose closure contains $spt \mu.$ It is the purpose of this paper to explore the boundary values of functions and the indices of rationally invariant subspaces for $R^t(K, \mu )$ and to extend Aleman-Richter-Sundberg's Theorem.
Notice that it is possible for two compact sets, $K_1$ and $K_2,$ to contain the support of $\mu $ and satisfy $R^t(K_1, \mu ) = R^t(K_2, \mu ).$ Thus giving conditions on a compact set $K$ is inappropriate unless attention is focused on the smallest compact set which yields the same set of functions. Since $K \supset \sigma (S_\mu ),$ the spectrum of $S_\mu,$ $\sigma (S_\mu )$ is the smallest set. We will always assume that $K = \sigma (S_\mu ).$
For readability purposes, in section 2, we consider an important special case in which the boundary of the unbounded component of $\mathbb C\setminus K$ is the unit circle. Proposition \ref{MProposition1}, which locally estimates the boundary values of the Cauchy transform of an annihilating measure in the sense of capacitary density, plays a key role in proving Theorem \ref{MTheorem1}, which extends Aleman-Richter-Sundberg's Theorem. As a consequence, our approach provides an alternative short proof of Aleman-Richter-Sundberg's Theorem.
The main difficulty in their original proof, in \cite{ars}, is the proof of the following inequality:
\[
\ \underset{\lambda\rightarrow z}{\overline{\lim}} (1 - |\lambda |^2) ^{\frac{1}{t}} M_{\lambda } \le \dfrac{C}{h(z)^{\frac{1}{t}}} \tag{1-2}
\]
nontangentially for $m$-almost all $z \in \partial \mathbb D,$ where $C$ is some constant. Our proof does not depend on the inequality (1-2). However, we will also develop a more general version of (1-2) in section 3 (see Theorem \ref{MTheorem4}). Proposition \ref{MProposition2}, which provides an upper bound for the Cauchy transform of an annihilating measure, is used to prove Theorem \ref{MTheorem2}, which extends Theorem C in \cite{ars}.
To facilitate the discussion of further results for more general $K,$ we provide the following example.
\begin{Example}
Let $ 0 < \epsilon < \frac{1}{8},~ M =\{z: ~ -\frac{1}{2} < Re(z) < \frac{1}{2}, ~ Im(z) =0\},$ $U_n = \{z: ~ -\frac{1}{2} < Re(z) < \frac{1}{2}, ~ \frac{1}{2^n}(\frac{1}{2} - \epsilon )< Im(z) < \frac{1}{2^n}(\frac{1}{2} + \epsilon )\},$ and $L_n = \{z: ~ -\frac{1}{2} < Re(z) < \frac{1}{2}, ~ \frac{1}{2^n}(-\frac{1}{2} - \epsilon )< Im(z) < \frac{1}{2^n}(-\frac{1}{2} + \epsilon )\}.$ Let
\[
\ K_1 = \bar {\mathbb D} \setminus \left ( \cup _{n=1}^\infty U_n \right )
\]
and
\[
\ K_2 = \bar {\mathbb D} \setminus \left ( (\cup _{n=1}^\infty U_n)\cup (\cup _{n=1}^\infty L_n) \right )
\]
Let $\mu$ and $\nu$ be positive finite measures with $spt(\mu )\subset K_1$ and $spt(\nu )\subset K_2$ so that $R^t(K_1,\mu)$ and $R^t(K_2,\nu)$ are irreducible. Suppose that $abpe(R^t(K_1,\mu)) = Int(K_1)$ (for example, $\mu = Area |_{Int(K_1)} + m|_{M},$ where $m|_{M}$ is Lebesgue measure on $M$) and $abpe(R^t(K_2,\nu)) = Int(K_2)$. By the Radon-Nikodym theorem, we can write $\mu = \mu _a + \mu _s $ and $\nu = \nu _a + \nu _s, $ where $\mu _a << m|_{M},$ $\mu _s \perp m|_{M},$ $\nu _a << m|_{M},$ and $\nu _s \perp m|_{M}.$
\end{Example}
In this example, $M$ is the inner boundary of $K_i.$ It is natural to explore nontangential limits of functions of $R^t(K_1, \mu )$ on the inner boundary $M$ (from below) with respect to $\mu_a.$ What can we say about $R^t(K_2, \nu )?$
The purpose of section 3 is to investigate the boundary behaviors of the functions in $R^t(K, \mu )$ for boundaries other than the unit circle of section 2. Theorem \ref{SBTheorem} proves that if $R^t(K, \mu )$ is irreducible and there are 'big parts' of $\mathbb C \setminus K$ near 'both sides' of $E\subset \partial K,$ then $\mu (E) = 0.$ In the above example, the inner boundary $M$ of $K_2$ satisfies this property, so our result implies $\nu_a(E) = 0.$ Therefore, there is no need to investigate the values of functions in $R^t(K_2, \nu )$ on the boundary $M.$ Theorem \ref{SBTheorem} can also be applied to those $K$ for which the diameters of all components of $\mathbb C\setminus K$ are bounded away from zero. For example, if $K$ in Theorem \ref{MTheorem1} or Theorem \ref{MTheorem2} satisfies this property, then the carrier of $\mu |_{\partial \mathbb D}$ is away from $\mathbb D\setminus K.$ In this case, the nontangential limits of functions in $R^t(K, \mu )$ can be defined with respect to $\mu |_{\partial \mathbb D}.$ Theorem \ref{MTheorem3} generalizes Theorem \ref{MTheorem1}. Finally, Theorem \ref{MTheorem4} generalizes the inequality (1-2) ((1.4) in \cite{ars}).
Before closing this section, we mention a few related papers. For a compactly supported complex measure $\nu$ on $\mathbb C,$ by estimating the analytic capacity of the set $\{\lambda: |\mathcal C\nu (\lambda)| \ge c \},$ where $\mathcal C\nu$ is the Cauchy transform of $\nu$ (see section 2 for the definition), \cite{b06}, \cite{ars}, and \cite{ARS10} provide interesting alternative proofs of Thomson's theorem. These proofs rely on X. Tolsa's deep results on analytic capacity. The author refines the estimates of the Cauchy transform, in Lemma 4 of \cite{y17}, to study the bounded point evaluations for rationally multicyclic subnormal operators. See also the work of \cite{a01}, \cite{a02}, \cite{ar97}, \cite{ms90}, \cite{msy99}, \cite{ot80}, \cite{ty95}, \cite{tr79a}, \cite{tr79b}, \cite{wy98}, \cite{y95a}, and \cite{y95b}.
\section{Outer boundary of $K$ is the unit circle}
In this section, we will be concerned with the special case where the outer boundary of $K$ is the unit circle $\partial \mathbb D.$ Consequently, we provide alternative proofs of Theorem A and Theorem C in \cite{ars}.
Let $\nu$ be a compactly supported finite measure on $\mathbb {C}.$ The Cauchy transform
of $\nu$ is defined by
\[
\ \mathcal C\nu (z) = \int \dfrac{1}{w - z} d\nu (w)
\]
for all $z\in\mathbb {C}$ for which
$\int \frac{d|\nu|(w)}{|w-z|} < \infty .$ A standard application of Fubini's
Theorem shows that $\mathcal C\nu \in L^s_{loc}(\mathbb {C} )$ for $ 0 < s < 2,$ in particular, it is
defined for area-almost all $z,$ and clearly $\mathcal C\nu$ is analytic in $\mathbb C_\infty \setminus spt \nu,$ where $\mathbb C_\infty = \mathbb C \cup \{\infty \}.$
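As a concrete worked example of the definition (the computation is ours), the Cauchy transform of the normalized Lebesgue measure $m$ on $\partial \mathbb D$ vanishes on $\mathbb D$ and equals $-1/z$ for $|z|>1$, which a direct discretization of the integral verifies:

```python
import cmath

def cauchy_transform_m(z, n=4096):
    """Cauchy transform of the normalized Lebesgue measure m on the
    unit circle, approximated by an n-point Riemann sum over
    w = exp(2*pi*i*k/n)."""
    total = 0j
    for k in range(n):
        w = cmath.exp(2j * cmath.pi * k / n)
        total += 1.0 / (w - z)
    return total / n

inside = cauchy_transform_m(0.3 + 0.2j)    # -> approximately 0
outside = cauchy_transform_m(2.0 - 1.0j)   # -> approximately -1/z
```

The Riemann sum of a function analytic near the circle converges spectrally fast, so a few thousand points already give the two limiting values to high accuracy.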
For a compact $K \subset \mathbb C$ we
define the analytic capacity of $K$ by
\[
\ \gamma(K) = sup |f'(\infty)|
\]
where the sup is taken over those functions $f$ analytic in $\mathbb C_\infty \setminus K$ for which
$|f(z)| \le 1$ for all $z \in \mathbb C_\infty \setminus K,$ and
$f'(\infty) = \lim _{z \rightarrow \infty} z[f(z) - f(\infty)].$
The analytic capacity of a general $E \subset \mathbb C$ is defined to be
\[
\ \gamma (E) = \sup \{\gamma (K) : K \subset E, ~K~ compact\}.
\]
Good sources for basic information about analytic
capacity are \cite{Gar72}, Chapter VIII of \cite{gamelin}, Chapter V of \cite{conway}, and \cite{tol14}.
A related capacity, $\gamma _+,$ is defined for $E \subset \mathbb C$ by
\[
\ \gamma_+(E) = \sup \|\mu \|
\]
where the sup is taken over positive measures $\mu$ with compact support
contained in $E$ for which $\|\mathcal C\mu \|_{L^\infty (\mathbb C)} \le 1.$ Since $\mathcal C\mu$ is analytic in $\mathbb C_\infty \setminus spt \mu$ and $(\mathcal C \mu)'(\infty) = −\|\mu \|,$ we have
\[
\ \gamma _+(E) \le \gamma (E)
\]
for all $E \subset \mathbb C.$ \cite{Tol03} proves the astounding result (Tolsa's Theorem) that
$\gamma_+$ and $\gamma$ are actually equivalent.
That is, there is an absolute constant $A_T$ such that
\[
\ \gamma (E) \le A_ T \gamma_+(E) \tag{2-1}
\]
for all $E \subset \mathbb C.$ The following semiadditivity of analytic capacity is a conclusion of Tolsa's Theorem.
\[
\ \gamma \left (\bigcup_{i = 1}^m E_i \right ) \le A_T \sum_{i=1}^m \gamma(E_i)\tag{2-2}
\]
where $E_1,E_2,...,E_m \subset \mathbb C.$
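For completeness, the identity $(\mathcal C \mu)'(\infty) = -\|\mu \|$ used in the comparison of $\gamma _+$ and $\gamma$ can be verified directly from the definitions: since $\mathcal C\mu (\infty ) = 0,$
\[
\ (\mathcal C \mu)'(\infty) = \lim _{z \rightarrow \infty} z \, \mathcal C\mu (z) = \lim _{z \rightarrow \infty} \int \dfrac{z}{w - z} d\mu (w) = \int (-1)\, d\mu (w) = -\|\mu \|,
\]
where the interchange of limit and integral is justified by dominated convergence, since $|\frac{z}{w - z}| \le 2$ whenever $|z| \ge 2 \sup _{w \in spt \mu} |w|.$ In particular, every $\mu$ admissible in the definition of $\gamma _+(E)$ produces a competitor $\mathcal C\mu$ for $\gamma (E),$ which is exactly the inequality $\gamma _+ \le \gamma .$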
Let $\nu$ be a compactly supported finite measure on $\mathbb {C}.$ For $\epsilon > 0,$ $\mathcal C_\epsilon \nu$ is defined by
\[
\ \mathcal C_\epsilon \nu (z) = \int _{|w-z| > \epsilon}\dfrac{1}{w - z} d\nu (w),
\]
and the maximal Cauchy transform is defined by
\[
\ \mathcal C_* \nu (z) = \sup _{\epsilon > 0}| \mathcal C_\epsilon \nu (z) |.
\]
The 1-dimensional radial maximal operator of $\nu$ (see also (2.7) in \cite{tol14}) is defined by
\[
\ M_R \nu (z) = \sup _{r > 0} \dfrac{| \nu | (B(z, r))}{r}.
\]
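Although we will apply $M_R$ only to measures supported in $\bar {\mathbb D},$ a unit point mass gives an easy explicit example of this maximal function: if $\nu = \delta _{z_0},$ then $|\nu |(B(z,r)) = 1$ exactly when $|z - z_0| < r,$ so
\[
\ M_R \delta _{z_0} (z) = \sup _{r > |z - z_0|} \dfrac{1}{r} = \dfrac{1}{|z - z_0|}
\]
for $z \ne z_0,$ while $M_R \delta _{z_0} (z_0) = +\infty .$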
\begin{Lemma}\label{TolsaLemma}
There is an absolute positive constant $C_T$ such that, for $a > 0,$ we have
\newline
(1)
\[
\ \gamma(\{\mathcal C_*\nu \geq a\}) \le \dfrac{C_T}{a} \|\nu \|, \tag{2-3}
\]
\newline
(2)
\[
\ m(\{M_R\nu \geq a\}) \le \dfrac{C_T}{a} \|\nu \|.
\]
In this case, if we define
\[
\ MV(\nu ) = \{e^{i\theta}: M_R\nu (e^{i\theta}) = +\infty\}, \tag{2-4}
\]
then $m(MV(\nu )) = 0.$
\end{Lemma}
{\bf Proof:} (1) follows from Proposition 2.1 of \cite{Tol02} and Tolsa's Theorem (2-1) (also see \cite{tol14} Proposition 4.16). Theorem 2.6 in \cite{tol14} implies (2). For the last statement, since $MV(\nu ) \subset \{M_R\nu \geq a\}$ for every $a > 0,$ (2) gives $m(MV(\nu )) \le \frac{C_T}{a}\|\nu \|$ for all $a > 0,$ hence $m(MV(\nu )) = 0.$
For $0 < \sigma < 1$ and $z \in \partial \mathbb D,$ we define the nontangential approach region $\Gamma _\sigma (z)$ to be the interior of the convex hull of $\{z\} \cup B(0,\sigma ).$ It is well known that the existence of nontangential limits on a set $E \subset \partial \mathbb D$ is independent of $\sigma$ up to sets of $m$-measure zero, so we will write $\Gamma (z) = \Gamma _{\frac{1}{2}}(z)$ for a nontangential approach region. The following lemma follows from Lemma 1 in \cite{kt77}.
\begin{Lemma}\label{KTLemma}
Suppose $\nu$ is a finite positive measure supported on $\mathbb D,$ define
\[
\ IV(\nu ) = \{e^{i\theta}:~ \underset{\Gamma (e^{i\theta})\ni \lambda \rightarrow e^{i\theta}}{\overline\lim}\int_{\mathbb D} \dfrac{1 - |\lambda|^2}{|1 - \bar \lambda z|^2} d \nu (z) > 0 \}\tag{2-5}
\]
then $m(IV(\nu )) = 0.$
\end{Lemma}
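The set $IV(\nu )$ reflects how the mass of $\nu$ accumulates at $\partial \mathbb D$; for a point mass it is empty. Indeed, if $\nu = \delta _{z_0}$ with $|z_0| < 1,$ then for every $e^{i\theta},$
\[
\ \lim _{\lambda \rightarrow e^{i\theta}} \dfrac{1 - |\lambda |^2}{|1 - \bar \lambda z_0|^2} = \dfrac{0}{|1 - e^{-i\theta} z_0|^2} = 0
\]
since $|1 - e^{-i\theta} z_0| \ge 1 - |z_0| > 0,$ so $IV(\delta _{z_0}) = \emptyset .$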
For a finite compactly supported measure $\nu ,$ define
\[
\ U(\nu ) = \{\lambda \in \mathbb C: ~ \int \dfrac{1}{|z - \lambda |} d|\nu |(z) < \infty\}.
\]
Then $Area ((U(\nu ))^c) = 0,$ since, by Fubini's Theorem, $\int \frac{d|\nu |(z)}{|z - \lambda |}$ is locally area-integrable as a function of $\lambda .$
\begin{Lemma}\label{CauchyTLemma}
Let $\nu$ be a finite measure supported in $\bar {\mathbb D}$ and $| \nu | (\partial \mathbb D ) = 0.$ Let $1 < p \le \infty ,$ $q = \frac{p}{p-1},$ $f \in C (\bar {\mathbb D}),$ and $g \in L^{q} (| \nu |).$ Define
\[
\ EV(|g|^q|\nu | ) = MV (|g|^q|\nu | ) \cup IV (|g|^q|\nu | ) \tag{2-6}
\]
where $MV (|g|^q|\nu | )$ and $IV (|g|^q|\nu | )$ are defined as in (2-4) and (2-5), respectively.
Suppose that $a > 0$ and $e^{i\theta} \in \partial \mathbb D \setminus EV(|g|^q|\nu | ),$ then there exist $\frac{3}{4} < r_\theta < 1,$ $E_\delta ^f \subset \bar B (e^{i\theta}, \delta ),$ and $\epsilon (\delta ) > 0,$ where $0 < \delta < 1 - r_\theta ,$ such that
\[
\ \lim _{\delta \rightarrow 0} \epsilon (\delta ) = 0,
\]
\[
\ \gamma(E_\delta ^f) <\epsilon (\delta ) \delta ,
\]
and for $|\lambda _0 - e^{i\theta} | = \frac{\delta}{2} $ and $\lambda _0\in \Gamma (e^{i\theta }),$
\[
\ \left |\mathcal C\left (\dfrac{(1 - \bar \lambda _0 z)^{\frac{2}{p}}}{(1 - |\lambda _0|^2)^{\frac{1}{p}}}fg\nu \right )(\lambda) - \mathcal C\left (\dfrac{(1 - \bar \lambda _0 z)^{\frac{2}{p}}}{(1 - |\lambda _0|^2)^{\frac{1}{p}}}fg\nu \right )(\frac{1}{\bar{\lambda} _0}) \right | \le a\|f\|_{L^{p} (| \nu |)}
\]
for all $\lambda\in (B (e^{i\theta}, \delta ) \setminus E_\delta ^f )\cap U(g\nu ) .$ Notice that $E_\delta ^f$ depends on $f$ and all other parameters are independent of $f.$
\end{Lemma}
{\bf Proof:} For $e^{i\theta} \in \partial \mathbb D\setminus EV(|g|^q|\nu | ),$ by Lemmas \ref{TolsaLemma} and \ref{KTLemma}, we conclude that $m(EV(|g|^q|\nu | )) = 0,$ $M_1 = M_R (|g|^{q}|\nu |) (e^{i\theta}) < \infty ,$ and there exists $\frac{3}{4} < r_\theta < 1$ such that
\[
\ \left (\int_{\mathbb D}\dfrac{(1 - |\lambda _0|^2) |g|^q}{|1 - \bar \lambda _0 z|^2}d|\nu | \right )^{\frac{1}{q}} \le \frac{a}{256}\tag{2-7}
\]
for $\delta < 1 - r_\theta.$
Let $\nu_\delta = \frac{\chi _{B (e^{i\theta}, N \delta )}}{(1 - \bar{\lambda} _0 z )^{1-\frac{2}{p}}\delta^{\frac{1}{p}}}fg \nu .$ For $\epsilon < \delta ,$ $N > 2 ,$ and $\lambda \in B (e^{i\theta} , \delta ),$ we get:
\[
\ 2(1 - |\lambda _0|) \le \delta \le 4(1 - |\lambda _0|),
\]
\[
\ \bar B (\lambda , \epsilon) \subset B (e^{i\theta}, 2 \delta ) \subset B (e^{i\theta}, N\delta),
\]
and
\[
\ \begin{aligned}
\ & \left |\mathcal C _\epsilon \left ((1 - \bar \lambda _0 z)^{\frac{2}{p}}\delta^{-\frac{1}{p}}fg\nu \right )(\lambda) - \mathcal C \left ((1 - \bar \lambda _0 z)^{\frac{2}{p}}\delta^{-\frac{1}{p}}fg\nu \right )(\frac{1}{\bar{\lambda} _0}) \right | \\
\ \le & \dfrac{|1 - \bar{\lambda} _0 \lambda |}{\delta^{\frac{1}{p}}} \left | \int _{|z - \lambda| > \epsilon}\dfrac{fgd\nu }{(z - \lambda)(1 - \bar{\lambda}_0 z)^{1-\frac{2}{p}}} \right | + \left | \mathcal C \left (\chi _{\bar B (\lambda, \epsilon)}\dfrac{(1 - \bar \lambda _0 z)^{\frac{2}{p}}}{\delta^{\frac{1}{p}}}fg\nu \right )(\frac{1}{\bar{\lambda} _0}) \right |\\
\ \le & 2\delta^{\frac{1}{q}} \left | \int _{ B (e^{i\theta}, N\delta )^c}\dfrac{fgd\nu}{(z - \lambda)(1 - \bar{\lambda}_0 z)^{1-\frac{2}{p}}} \right | + 2\delta \left |\int _{|z - \lambda| > \epsilon}\dfrac{d\nu_\delta}{(z - \lambda)} \right | \\
\ & + \int_{\bar B (\lambda, \epsilon)}\dfrac{\delta^{-\frac{1}{p}}}{|1 - \bar \lambda _0 z|^{1-\frac{2}{p}}}|fg|d|\nu | \\
\ \le & 2\delta^{\frac{1}{q}} \sum_{k=0}^{\infty}\int _{2^kN\delta \le |z - e^{i\theta} | < 2^{k+1}N\delta} \dfrac{|f||g|d|\nu |}{|z - \lambda ||1 - \bar{\lambda} _0 z |^{1-\frac{2}{p}} } + 2\delta |\mathcal C_\epsilon \nu _\delta (\lambda )| \\
\ & + \int_{ B (e^{i\theta}, 2 \delta)}\dfrac{|1 - \bar \lambda _0 z|\delta^{-\frac{1}{p}}}{|1 - \bar \lambda _0 z|^{\frac{2}{q}}}|fg|d|\nu | \\
\ \le & 2\delta^{\frac{1}{q}} \sum_{k=0}^{\infty} \dfrac{(2^{k+1}N\delta)^{\frac{1}{q}}(2^kN\delta + 2\delta )^{\frac{2}{p}}}{(2^kN\delta - \delta )(2^kN\delta - 2\delta )} \left (\dfrac{\int _{B (e^{i\theta}, 2^{k+1}N\delta)}|g|^qd|\nu |}{2^{k+1}N\delta} \right )^{\frac{1}{q}} \|f\|_{L^{p} (| \nu |)} \\
\ &+ 2\delta \mathcal C_* \nu _\delta (\lambda ) + 4\int_{ B (e^{i\theta}, 2 \delta)}\dfrac{\delta^{\frac{1}{q}}}{|1 - \bar \lambda _0 z|^{\frac{2}{q}}}|fg|d|\nu |\\
\ \le & \dfrac{4(N+2)^{1+\frac{1}{p}}\sum_{k=0}^\infty 2^{\frac{-k}{q}}M_1^{\frac{1}{q}}}{(N-1)(N-2)} \|f\|_{L^{p} (| \nu |)} + 2\delta \mathcal C_* \nu _\delta (\lambda ) \\
\ & + 4 \|f\|_{L^{p} (| \nu |)} \left (\int_{\mathbb D}\dfrac{\delta |g|^q}{|1 - \bar \lambda _0 z|^2}d|\nu | \right )^{\frac{1}{q}}\\
\ \end{aligned}
\]
Let
\[
\ N = 6 + \left (\dfrac{256}{a} \sum_{k=0}^\infty 2^{\frac{-k}{q}} \right)^q M_1,
\]
then together with (2-7), we get
\[
\ \begin{aligned}
\ & \left |\mathcal C _\epsilon \left ((1 - \bar \lambda _0 z)^{\frac{2}{p}}\delta^{-\frac{1}{p}}fg\nu \right )(\lambda) - \mathcal C \left ((1 - \bar \lambda _0 z)^{\frac{2}{p}}\delta^{-\frac{1}{p}}fg\nu \right )(\frac{1}{\bar{\lambda} _0}) \right | \\
\ \le &\dfrac{a}{8} \|f\|_{L^{p} (| \nu |)} + 2\delta \mathcal C_* \nu _\delta (\lambda )
\ \end{aligned} \tag{2-8}
\]
for $\lambda \in B (e^{i\theta} , \delta ).$
Let
\[
\ E_\delta ^f = \{\lambda : \mathcal C_* \nu _\delta (\lambda ) \ge \frac{a\|f\|_{L^{p} (| \nu |)}}{16\delta } \} \cap \bar B (e^{i\theta}, \delta ).
\]
From (2-3) and H\"older's inequality, we get
\[
\ \begin{aligned}
\ \gamma (E_\delta ^f) \le & \dfrac{16C_T\delta }{a\|f\|_{L^{p} (| \nu |)}} \int _{ B (e^{i\theta}, N\delta )}\dfrac{|f||g|d|\nu |}{|1 - \bar{\lambda} _0 z |^{1-\frac{2}{p}}\delta^{\frac{1}{p}}} \\
\ \le & \dfrac{16C_T\delta }{a\|f\|_{L^{p} (| \nu |)}} \int _{ B (e^{i\theta}, N\delta )}\dfrac{|1 - \bar{\lambda} _0 z |\delta^{-\frac{1}{p}}|f||g|d|\nu |}{|1 - \bar{\lambda} _0 z |^{\frac{2}{q}}} \\
\ \le & \dfrac{16C_T(N+2)\delta }{a\|f\|_{L^{p} (| \nu |)}} \int _{ B (e^{i\theta}, N\delta )}\dfrac{\delta^{\frac{1}{q}}|f||g|d|\nu |}{|1 - \bar{\lambda} _0 z |^{\frac{2}{q}}} \\
\ \le & \dfrac{64C_T(N+2) \delta}{a} \left(\int_{\mathbb D} \dfrac{1 - |\lambda _0 |^2}{|1 - \bar {\lambda} _0 z|^2} |g|^qd | \nu | \right ) ^ {\frac{1}{q}}
\ \end{aligned}\tag{2-9}
\]
Set
\[
\ \epsilon (\delta ) = \dfrac{65C_T(N+2)}{a} \left(\int_{\mathbb D} \dfrac{1 - |\lambda _0 |^2}{|1 - \bar {\lambda} _0 z|^2} |g|^qd | \nu | \right ) ^ {\frac{1}{q}},
\]
then $\lim_{\delta \rightarrow 0 } \epsilon (\delta ) = 0$ and
\[
\ \gamma (E_\delta ^f) < \epsilon (\delta ) \delta.
\]
From (2-8) and the definition of $E_\delta ^f,$ for $\lambda \in B (e^{i\theta}, \delta ) \setminus E_\delta ^f$ and $\epsilon < \delta ,$ we conclude that
\[
\ \begin{aligned}
\ & \left |\mathcal C _\epsilon \left (\dfrac{(1 - \bar \lambda _0 z)^{\frac{2}{p}}}{(1 - |\lambda _0|^2)^{\frac{1}{p}}}fg\nu \right )(\lambda) - \mathcal C\left (\dfrac{(1 - \bar \lambda _0 z)^{\frac{2}{p}}}{(1 - |\lambda _0|^2)^{\frac{1}{p}}}fg\nu \right )(\frac{1}{\bar{\lambda} _0}) \right | \\
\ \le &4\left |\mathcal C _\epsilon \left ((1 - \bar \lambda _0 z)^{\frac{2}{p}}\delta^{-\frac{1}{p}}fg\nu \right )(\lambda) - \mathcal C \left ((1 - \bar \lambda _0 z)^{\frac{2}{p}}\delta^{-\frac{1}{p}}fg\nu \right )(\frac{1}{\bar{\lambda} _0}) \right | \\
\ < & a \|f\|_{L^{p} (| \nu |)}
\ \end{aligned}
\]
The lemma follows since the limit of $\mathcal C _\epsilon ,$ as $\epsilon\rightarrow 0,$ exists for $ \lambda\in (B (e^{i\theta}, \delta ) \setminus E_\delta ^f )\cap U(g\nu ).$
\begin{Proposition}\label{MProposition1} Let $\nu$ be a finite complex measure with support in $K \subset \bar {\mathbb D}.$ Suppose that $\nu \perp Rat(K)$ and $\nu | _{\partial \mathbb D} = hm$ ($m = \frac{d\theta}{2\pi}$). Then for $b > 0$ and $m$-almost all $e^{i\theta}\in \partial \mathbb D,$ there exist $\frac{3}{4} < r_\theta < 1,$ $E_{\delta}\subset B(e^{i\theta}, \delta),$ and $\epsilon (\delta ) > 0,$ where $0 < \delta < 1 -r_\theta ,$ such that $\lim_{\delta\rightarrow 0}\epsilon (\delta ) = 0,$ $\gamma(E_\delta) < \epsilon (\delta ) \delta ,$ and
\[
\ \left |\mathcal C\nu (\lambda) - e^{-i\theta}h(e^{i\theta}) \right | \le b \tag{2-10}
\]
for all $\lambda\in (B (e^{i\theta}, \delta ) \cap \Gamma (e^{i\theta})\setminus E_\delta )\cap U(\nu ) .$
\end{Proposition}
{\bf Proof:} Let $\nu _1 = \nu | _{\mathbb D}$ and $\nu _2 = \nu | _{\partial \mathbb D} = hm.$ Using Plemelj's formula (see page 56 of \cite{cmr2006} or Theorem 8.8 in \cite{tol14}), we can find $E_1 \subset \partial \mathbb D$ with $m(E_1) = 0$ such that
\[
\ \lim_{ \Gamma (e^{i\theta})\ni z\rightarrow e^{i\theta}} \mathcal C \nu _2 (z) - \lim_{ \Gamma (e^{i\theta})\ni z\rightarrow e^{i\theta}} \mathcal C \nu _2 (\frac{1}{\bar z}) = e^{-i\theta} h (e^{i\theta}) \tag{2-11}
\]
for $e^{i\theta} \in \partial \mathbb D\setminus E_1.$ Set $E _0 = E_1\cup EV(|\nu_1|),$ where $EV(|\nu_1|)$ is defined as in (2-6) and $m(EV(|\nu_1|)) = 0.$
We now apply Lemma \ref{CauchyTLemma} for $p=\infty , ~ q = 1,~f = 1,~ g = 1,$ and $a=\frac{b}{2}.$ For $e^{i\theta} \in \partial \mathbb D\setminus E_0,$ there exist $\frac{3}{4} < r_\theta < 1,$ $E_{\delta}\subset B(e^{i\theta}, \delta),$ and $\epsilon (\delta ) > 0,$ where $0 < \delta < 1 -r_\theta ,$ such that $\lim_{\delta\rightarrow 0}\epsilon (\delta ) = 0,$ $\gamma(E_\delta) < \epsilon (\delta ) \delta ,$ and for $\lambda _0 \in (\partial B (e^{i\theta}, \frac{\delta}{2} )) \cap \Gamma (e^{i\theta}),$
\[
\ \left | \mathcal C \nu _ 1(\lambda ) - \mathcal C \nu _ 1(\frac{1}{\bar \lambda_0}) \right | \le \frac{b}{2}
\]
for all $\lambda\in (B (e^{i\theta}, \delta )\setminus E_\delta )\cap U(\nu ) .$ Moreover, from (2-11), $r_{\theta}$ can be chosen so that
\[
\ \left | \mathcal C \nu _2 (\lambda ) - \mathcal C \nu _2 (\frac{1}{\bar \lambda_0}) - e^{-i\theta} h (e^{i\theta}) \right | \le \frac{b}{2}
\]
for $\lambda\in B (e^{i\theta}, \delta ) \cap \Gamma (e^{i\theta}).$ Since $\nu \perp Rat(K)$ and $\frac{1}{\bar{\lambda} _0} \notin \bar {\mathbb D},$ we have $\mathcal C\nu (\frac{1}{\bar{\lambda} _0}) = 0,$ and we get
\[
\ \begin{aligned}
\ & \left |\mathcal C\nu (\lambda) - e^{-i\theta}h(e^{i\theta}) \right | \\
\ \le & \left |\mathcal C\nu _1 (\lambda) - \mathcal C\nu _1 (\frac{1}{\bar \lambda _0}) \right | + \left |\mathcal C\nu_2 (\lambda) - \mathcal C\nu _2 (\frac{1}{\bar \lambda _0}) - e^{-i\theta}h(e^{i\theta}) \right | \\
\ \le b
\ \end{aligned}
\]
for all $\lambda\in (B (e^{i\theta}, \delta ) \cap \Gamma (e^{i\theta})\setminus E_\delta )\cap U(\nu ) .$ The proposition is proved.
Let $R = \{ z: -\frac{1}{2} < Re(z) < \frac{1}{2},~ -\frac{1}{2} < Im(z) < \frac{1}{2} \}$ and $Q = \bar{\mathbb D}\setminus R.$ For a bounded Borel set
$E\subset \mathbb C$ and $1\le p \le \infty,$ $L^p(E)$ denotes the $L^p$ space with respect to the area measure $dA$ restricted to $E.$ The following Lemma is a simple application of Thomson's coloring scheme.
\begin{Lemma} \label{lemmaDSet}
There is an absolute constant $\epsilon _1 > 0$ with the
following property. If $\gamma (\mathbb D \setminus K) < \epsilon_1,$ then
\[
\ |f(\lambda ) | \le \|f\|_{L^\infty (Q\cap K)}
\]
for $\lambda \in R$ and $f \in A(\mathbb D),$ the uniform closure of $\mathcal P$ in $C(\bar {\mathbb D}).$
\end{Lemma}
{\bf Proof:} Let $S$ be a closed square whose edges are parallel to the x-axis and y-axis. $S$ is defined to be light if $Area(S \cap K) = 0 ;$ $S$ is heavy if it is not light.
We now sketch our version of Thomson's coloring scheme for $Q$ with a given positive integer $m.$ We refer the reader to \cite{thomson} and section 2 of \cite{thomson3} for details.
For each integer $k > 3$ let $\{S_{kj}\}$ be an enumeration of the closed squares contained in $\mathbb C$ with edges of length $2^{-k}$
parallel to the coordinate axes, and corners at the points whose coordinates
are both integral multiples of $2^{-k}$ (except the starting square $S_{m1}$, see (3) below).
In fact, Thomson's coloring scheme is just needed to be modified slightly as the following:
(1) Use our definition of a light square (in place of Thomson's light $\epsilon$ square).
(2) A path to $\infty$ means a path to any point that is outside of $Q$ (replacing the polynomially convex hull of $\Phi$ by $Q$).
(3) The starting yellow square $S_{m1}$ in the $m$-th generation is $R.$ Notice that the edge length of $S_{m1}$ in the $m$-th generation is $1$ (not $2^{-m}$).
We will borrow the notation used in Thomson's coloring scheme, such as $\{\gamma_n\}_{n\ge m}$ and $\{\Gamma_n\}_{n\ge m}.$ We denote
\[
\ YellowBuffer_m = \sum _{k = m+1}^\infty k^2 2^{-k}.
\]
Suppose the scheme terminates; in our setup, this means Thomson's coloring scheme reaches a square $S$ in the $n$-th generation that is not contained in $Q.$ One can construct a polygonal path $P,$ which connects the centers of adjacent squares, from the center of a square (contained in $Q$) adjacent to $S$ to the center of a square adjacent to $R,$ so that the length of the orange (non-green in Thomson's coloring scheme) part is no more than $YellowBuffer_m.$ Let $GP = \cup S_j,$ where $\{S_j\}$ are all light squares with $P\cap S_j \ne \emptyset .$ By Tolsa's Theorem (2-2), we see
\[
\ \gamma (P) \le A_T (\gamma (Int(GP)) + YellowBuffer_m).
\]
Since $P$ is a connected set, $\gamma (P) \ge \frac{0.1}{4}$ (Theorem 2.1 on page 199 of \cite{gamelin}). We can choose $m $ to be large enough so that
\[
\ \gamma (GP) \ge \dfrac{1}{40A_T} - YellowBuffer_{m} = \epsilon_m > 0.
\]
Now by Lemma 3 in \cite{b06} (or the proof of Case I of Lemma B in \cite{ars} on page 462-464), there exists a constant $\epsilon _0 > 0$ such that
\[
\ \gamma (GP \setminus K) \ge \epsilon _0 \gamma (GP) \ge \epsilon _0 \epsilon _m.\tag{2-13}
\]
So we have proved that if the scheme terminates, then (2-13) holds.
Set $\epsilon_1 = \epsilon_0\epsilon_m.$ By the assumption $\gamma (\mathbb D \setminus K) < \epsilon_1,$ we must have $\gamma (GP \setminus K) \le \gamma (\mathbb D \setminus K) < \epsilon_1. $ Therefore, the scheme does not terminate, since otherwise (2-13) would hold. In this case, one can construct a sequence of heavy barriers inside $Q,$ that is, $\{\gamma_n\}_{n\ge m}$ and $\{\Gamma_n\}_{n\ge m}$ are infinite.
Let $f\in A(\mathbb D),$ by the maximum modulus principle, we can find $z_n\in\gamma_n$ such that $|f(\lambda )| \le |f(z_n)|$ for $\lambda \in R.$ By the definition of $\gamma_n,$ we can find a heavy square $S_n$ with $z_n\in S_n\cap\gamma_n.$ Since $Area(S_n\cap K) > 0,$ we can choose $w_n\in S_n$ with $|f(w_n)| = \|f\|_{L^\infty (S_n\cap K)}.$ $\frac{f(w)-f(z_n)}{w-z_n}$ is analytic in $\mathbb D,$ therefore, by the maximum modulus principle again, we get
\[
\ \left | \dfrac{f(w_n)-f(z_n)}{w_n-z_n} \right | \le \sup_{w \in \gamma_{n+1}} \left | \dfrac{f(w)-f(z_n)}{w-z_n} \right | \le \dfrac{2\|f\|_{L^\infty (\mathbb D)}}{dist (z_n,\gamma_{n+1})}.
\]
Therefore,
\[
\ |f(\lambda )| \le |f(z_n)| \le |f(w_n)| + \dfrac{2|z_n-w_n|\|f\|_{L^\infty (\mathbb D)}}{dist (z_n,\gamma_{n+1})} \le \|f\|_{L^\infty(Q\cap K)} + \dfrac{2\sqrt 2 2^{-n}\|f\|_{L^\infty (\mathbb D)}}{n^2 2^{-n}}
\]
for $\lambda \in R.$
The lemma follows by taking $n\rightarrow \infty .$
\begin{Corollary} \label{CorollaryDSet}
There is an absolute constant $\epsilon _1 > 0$ with the
following property. If $\lambda _0 \in \mathbb C, ~ \delta > 0,$ and $\gamma (B(\lambda _0 , \delta) \setminus K) < \epsilon_1\delta ,$ then
\[
\ |f(\lambda ) | \le \|f\|_{L^\infty (B(\lambda _0 , \delta)\cap K)}
\]
for $\lambda \in B(\lambda _0 , \frac{\delta}{2})$ and $f \in A(B(\lambda _0 , \delta)),$ the uniform closure of $\mathcal P$ in $C(\bar B(\lambda _0 , \delta)).$
\end{Corollary}
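For completeness, we indicate how the corollary follows from Lemma \ref{lemmaDSet} by translation and scaling. The affine map $T(\lambda ) = \lambda _0 + \delta \lambda$ carries $\bar {\mathbb D}$ onto $\bar B(\lambda _0 , \delta )$ and $B(0, \frac{1}{2}) \subset R$ onto $B(\lambda _0 , \frac{\delta}{2}),$ and analytic capacity scales linearly under such maps:
\[
\ \gamma (T(E)) = \delta \gamma (E)
\]
for every $E \subset \mathbb C,$ since $f \mapsto f \circ T^{-1}$ is a bijection between the functions competing in the definition of $\gamma (E)$ and those competing for $\gamma (T(E)),$ and it multiplies $|f'(\infty )|$ by $\delta .$ Thus the hypothesis $\gamma (B(\lambda _0 , \delta ) \setminus K) < \epsilon _1 \delta$ transfers to the hypothesis of Lemma \ref{lemmaDSet}.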
Now we assume that $R^t(K,\mu )$ is irreducible and $\Omega$ is a connected region satisfying:
\[
\ abpe(R^t(K,\mu )) = \Omega ,~ K = \bar \Omega, ~ \Omega \subset \mathbb D,~ \partial \mathbb D \subset \partial\Omega . \tag{2-14}
\]
It is well known that, in this case, $\mu |_{\partial \mathbb D} \ll m. $
So we assume $\mu |_{\partial \mathbb D} = hm.$
For $0 < \delta < 1$ and $e^{i\theta}\in \partial \mathbb D,$ define $\Gamma ^\delta _\sigma (e^{i\theta}) = \Gamma _\sigma (e^{i\theta})\cap B(e^{i\theta}, \delta).$ In order to define a nontangential limit of a function in $R^t(K,\mu )$ at $e^{i\theta} \in \partial \Omega,$ one needs $\Gamma ^\delta _\sigma (e^{i\theta}) \subset \Omega$ for some $\delta.$ Therefore, we define the strong outer boundary of $\Omega$ as the following:
\[
\ \partial _{so, \sigma} \Omega = \{e^{i\theta} \in \partial \Omega: ~\exists 0<\delta<1,~ \Gamma _{\sigma}^\delta (e^{i\theta}) \subset \Omega \}, ~ \partial _{so} \Omega = \partial _{so, \frac{1}{2}} \Omega. \tag{2-15}
\]
It is known that $\partial _{so, \sigma} \Omega$ is a Borel set (see, e.g., Lemma 4 in \cite{ot80}) and $m(\partial _{so, \sigma_1} \Omega \setminus \partial _{so, \sigma _2} \Omega ) = 0$ for $\sigma _1 \ne \sigma _2.$ From Theorem \ref{SBTheorem} in section 3, if $R^t(K,\mu )$ is irreducible and the diameters of all components of $\mathbb C\setminus K$ are bounded away from zero, then $\mu (\partial \mathbb D\setminus \partial _{so} \Omega ) = 0.$ This means that the carrier of $\mu | _{\partial \mathbb D} $ is a subset of $\partial _{so} \Omega ,$ while the nontangential limit of a function at $e^{i\theta}\in \partial \mathbb D\setminus \partial _{so} \Omega $ is not defined.
From Lemma VII.1.7 in \cite{conway}, we find a function $G \in R^t(K,\mu )^\perp \subset L^{t'}(\mu )$ such that $G(z) \ne 0$ for $\mu$-almost every $z.$ Every $f \in R^t(K,\mu )$ is analytic on $\Omega$ and
\[
\ f(\lambda ) \mathcal C(G\mu ) (\lambda ) = \int \dfrac{f(z)}{z - \lambda }G(z) d\mu(z) = \mathcal C(fG\mu ) (\lambda )\tag{2-16}
\]
for $\lambda \in \Omega \cap U(G\mu ).$
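Identity (2-16) is the standard consequence of $G \perp R^t(K,\mu ) \supset Rat(K)$: for a rational function $r \in Rat(K)$ and $\lambda \in \Omega \cap U(G\mu ),$ the function $\frac{r(z) - r(\lambda )}{z - \lambda }$ again belongs to $Rat(K),$ so
\[
\ 0 = \int \dfrac{r(z) - r(\lambda )}{z - \lambda } G(z) d\mu (z) = \mathcal C(rG\mu )(\lambda ) - r(\lambda )\mathcal C(G\mu )(\lambda ),
\]
and the case of a general $f \in R^t(K,\mu )$ follows by approximating $f$ by rational functions and a routine limiting argument.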
\begin{Theorem}\label{MTheorem1}
Suppose that $\mu$ is a finite positive measure supported in $K$ and is such that
$abpe(R^t(K,\mu )) = \Omega $ and $R^t(K,\mu )$ is irreducible, where $\Omega$ is a connected region satisfying (2-14), $\mu |_{\partial \mathbb D} = hm,$ and $\mu (\partial _{so} \Omega ) > 0.$ Then:
\newline
(a) If $f \in R^t(K,\mu )$ then the nontangential limit $f^*(z)$ of $f$ exists for $\mu |_{\partial _{so} \Omega}$-
almost all $z,$ and $f^* = f |_{\partial _{so} \Omega}$ as elements of $L^t(\mu |_{\partial _{so} \Omega}).$
\newline
(b) Every nonzero rationally invariant subspace $M$ of $R^t(K,\mu )$ has index 1, that is, $dim(M / (S_\mu - \lambda _0) M) = 1,$ for $\lambda _0\in \Omega.$
\newline
If the diameters of all components of $\mathbb C \setminus K$ are bounded away from zero, then by Theorem \ref{SBTheorem} (in section 3), the above $\partial _{so} \Omega$ can be replaced by $\partial \mathbb D.$
\end{Theorem}
{\bf Proof:} (a) Let $1 > \epsilon > 0$ and $\epsilon _0 = \frac{\epsilon _1}{32A_T},$ where $\epsilon _1$ is as in Lemma \ref{lemmaDSet} and $A_T$ is from (2-2). For $f \in R^t(K,\mu ),$ from Proposition \ref{MProposition1}, we see that for $\mu$-almost all $e^{i\theta}\in \partial _{so} \Omega$ with $\Gamma ^{r_0}(e^{i\theta}) \subset \Omega$ and $G(e^{i\theta})h(e^{i\theta}) \ne 0,$ and for $b = \frac{|G(e^{i\theta})h(e^{i\theta})|}{2(1+|f(e^{i\theta})|)} \epsilon > 0,$ there exist $\max (r_0, \frac{3}{4} ) < r_\theta < 1,$ $E_{\delta}^1\subset B(e^{i\theta}, \delta),$ $E_{\delta}^2\subset B(e^{i\theta}, \delta),$ and $\epsilon (\delta ) > 0,$ where $0 < \delta < 1 -r_\theta ,$ such that $\lim_{\delta\rightarrow 0}\epsilon (\delta ) = 0,$ $\gamma(E_\delta ^1) < \epsilon (\delta ) \delta ,$ $\gamma(E_\delta ^2) < \epsilon (\delta ) \delta ,$
\[
\ \left |\mathcal C(G\mu ) (\lambda) - G(e^{i\theta})e^{-i\theta}h(e^{i\theta}) \right | \le b
\]
for all $\lambda\in (B (e^{i\theta}, \delta ) \cap \Gamma (e^{i\theta})\setminus E_\delta ^1)\cap U(G\mu ),$ and
\[
\ \left |\mathcal C(fG\mu ) (\lambda) - f(e^{i\theta}) G(e^{i\theta}) e^{-i\theta}h(e^{i\theta}) \right | \le b
\]
for $\lambda\in (B (e^{i\theta}, \delta ) \cap \Gamma (e^{i\theta})\setminus E_\delta ^2)\cap U(G\mu ).$ Now choose $\delta$ small enough so that $\epsilon(\delta ) < \epsilon_0.$ Set $E_\delta = E_\delta ^1 \cup E_\delta ^2,$ then from the semi-additivity (2-2), we get
\[
\ \gamma (E_\delta ) \le A_T (\gamma (E_\delta ^1) + \gamma (E_\delta ^2)) < \epsilon _1 \dfrac{\delta}{16} .
\]
Therefore, by (2-16), for $\lambda\in (B (e^{i\theta}, \delta ) \cap \Gamma (e^{i\theta})\setminus E_\delta )\cap U(G\mu ),$
\[
\ \begin{aligned}
\ & |f (\lambda) - f(e^{i\theta})| \\
\ \le &\left | \dfrac{\mathcal C(fG\mu ) (\lambda) - f(e^{i\theta})\mathcal C(G\mu ) (\lambda)}{\mathcal C(G\mu ) (\lambda)}\right | \\
\ \le & \dfrac{2|\mathcal C(fG\mu ) (\lambda) - f(e^{i\theta}) G(e^{i\theta}) e^{-i\theta}h(e^{i\theta})|}{|G(e^{i\theta})h(e^{i\theta})|} + \dfrac{2|\mathcal C(G\mu ) (\lambda) - G(e^{i\theta}) e^{-i\theta}h(e^{i\theta})| |f(e^{i\theta})|}{|G(e^{i\theta})h(e^{i\theta})|}\\
\ \le & \epsilon .
\ \end{aligned}
\]
For $\lambda _0 \in (\partial B (e^{i\theta}, \frac{\delta}{2} )) \cap \Gamma _{\frac{1}{4}}(e^{i\theta}),$ we see that $B (\lambda _0, \frac{\delta}{16}) \subset B (e^{i\theta}, \delta ) \cap \Gamma (e^{i\theta}).$
Using Corollary \ref{CorollaryDSet} for $f - f(e^{i\theta}),$ we get
\[
\ |f (\lambda) - f(e^{i\theta})| \le \|f - f(e^{i\theta}) \| _{L^\infty (B (\lambda _0, \frac{\delta}{16}) \setminus E_\delta)} \le \epsilon
\]
for every $\lambda \in B (\lambda_0, \frac{\delta }{32}).$ Hence,
\[
\ \lim_{ \Gamma _{\frac{1}{4}}(e^{i\theta})\ni\lambda\rightarrow e^{i\theta}}f(\lambda ) = f(e^{i\theta} ).
\]
We turn to prove (b). Let $M$ be a nonzero rationally invariant subspace of $R^t(K,\mu ).$ Without loss of generality, we assume $\lambda_0 = 0$ and $0\in \Omega.$ We must show that $dim(M/S_\mu M) = 1.$ Let $n$ be the smallest integer such that $f(z) = z^n f_0(z)$ for every $f\in M$ and there exists $g\in M$ with $g(z) = z^n g_0(z)$ and $g_0(0)
\ne 0.$ We only need to show $\frac{f(z) - \frac{f_0(0)}{g_0(0)} g(z)}{z}\in M.$ To do this, it suffices to show that for $\phi\in M^\perp \subset L^{t'} (\mu),$ the function
\[
\ \Phi (\lambda ) = \int \dfrac{g(\lambda )f(z) - f(\lambda )g(z)}{z - \lambda } \phi (z) d\mu (z),
\]
which is analytic in $\Omega,$ is identically zero. In fact, the proof is similar to that of (a). Let $E \subset \partial _{so} \Omega$ so that for $e^{i\theta} \in E,$ $f$ and $g$ have nontangential limits at $e^{i\theta},$ and $h(e^{i\theta}) > 0. $ By Theorem \ref{MTheorem1} (a), $m(E) > 0.$ For $1 > \epsilon > 0$ and $\epsilon _0 = \frac{\epsilon _1}{32A_T},$ applying Proposition \ref{MProposition1} for $f\phi\mu , g\phi\mu$ since $f\phi\mu , g\phi\mu \perp Rat(K)$ and Theorem \ref{MTheorem1} (a) for $f$ and $g,$ we see that for $e^{i\theta} \in E$ with $\Gamma ^{r_0} (e^{i\theta}) \subset \Omega$ and $b = \frac{1}{(1 + |f(e^{i\theta})| + |g(e^{i\theta})| ) (1 + |\phi(e^{i\theta})| h(e^{i\theta}))} \epsilon,$ there exist $\max (r_0, \frac{3}{4} ) < r_\theta < 1,$ $E_{\delta}^1\subset B(e^{i\theta}, \delta),$ $E_{\delta}^2\subset B(e^{i\theta}, \delta),$ and $\epsilon (\delta ) > 0,$ where $0 < \delta < 1 -r_\theta ,$ such that $\lim_{\delta\rightarrow 0}\epsilon (\delta ) = 0,$ $\gamma(E_\delta ^1) < \epsilon (\delta ) \delta ,$ $\gamma(E_\delta ^2) < \epsilon (\delta ) \delta ,$
\[
\ \left |\mathcal C(f\phi\mu ) (\lambda) - f(e^{i\theta})\phi(e^{i\theta})e^{-i\theta}h(e^{i\theta}) \right | \le b
\]
for all $\lambda\in (B (e^{i\theta}, \delta ) \cap \Gamma (e^{i\theta})\setminus E_\delta ^1)\cap U(G\mu ),$
\[
\ \left |\mathcal C(g\phi\mu ) (\lambda) - g(e^{i\theta})\phi(e^{i\theta})e^{-i\theta}h(e^{i\theta}) \right | \le b,
\]
for all $\lambda\in (B (e^{i\theta}, \delta ) \cap \Gamma (e^{i\theta})\setminus E_\delta ^2)\cap U(G\mu ),$
and $|f(\lambda) - f(e^{i\theta})| < b$ and $|g (\lambda) - g(e^{i\theta})| < b$ on $B (e^{i\theta}, \delta ) \cap \Gamma (e^{i\theta}).$
Choose $\delta$ small enough so that $\epsilon (\delta ) < \epsilon_0.$ Set $E_\delta = E_\delta ^1 \cup E_\delta ^2,$ then by the semi-additivity (2-2) again, we have $\gamma (E_\delta ) < \epsilon _1 \frac{\delta}{16} .$
Therefore, for all $\lambda\in (B (e^{i\theta}, \delta ) \cap \Gamma (e^{i\theta})\setminus E_\delta )\cap U(G\mu ),$
\[
\ \begin{aligned}
\ & |\Phi (\lambda) | \\
\ \le & |g(\lambda)| \left |\mathcal C(f\phi\mu ) (\lambda) - f(e^{i\theta})\phi(e^{i\theta})e^{-i\theta}h(e^{i\theta}) \right | + |f(\lambda)| |\mathcal C(g\phi\mu ) (\lambda) \\
\ &- g(e^{i\theta})\phi(e^{i\theta})e^{-i\theta}h(e^{i\theta}) | + |f(\lambda)g(e^{i\theta}) - g(\lambda)f(e^{i\theta})| |\phi(e^{i\theta})| h(e^{i\theta})\\
\ \le & (b + |f(e^{i\theta})| + |g(e^{i\theta})| ) (1 + |\phi(e^{i\theta})| h(e^{i\theta})) b\\
\ \le & \epsilon .
\ \end{aligned}
\]
For $\lambda _0 \in (\partial B (e^{i\theta}, \frac{\delta}{2} )) \cap \Gamma _{\frac{1}{4}}(e^{i\theta}),$ $B (\lambda _0, \frac{\delta}{16}) \subset B (e^{i\theta}, \delta ) \cap \Gamma (e^{i\theta}).$
Using Corollary \ref{CorollaryDSet} for $\Phi,$ we get
\[
\ |\Phi (\lambda )| \le \|\Phi \| _{L^\infty (B (\lambda _0, \frac{\delta}{16}) \setminus E_\delta)} \le \epsilon
\]
for every $\lambda \in B (\lambda_0, \frac{\delta }{32}).$ Hence,
\[
\ \lim_{ \Gamma _{\frac{1}{4}}(e^{i\theta})\ni\lambda\rightarrow e^{i\theta}} \Phi (\lambda ) = 0.
\]
Let $V = \cup _{e^{i\theta}\in E}\Gamma _{\frac{1}{4}}^\delta (e^{i\theta}).$ Since $m(E) > 0,$ there exists a connected component $V_0$ of $V$ with $m(\partial V_0 \cap \partial \mathbb D) > 0.$ $\partial V_0$ is a rectifiable Jordan curve, and $\Phi (\lambda )$ is analytic in $V_0$ with nontangential limit $0$ on a subset of $\partial V_0 \cap \partial \mathbb D$ of positive measure; hence $\Phi = 0$ on $V_0,$ and therefore $\Phi \equiv 0$ since $\Omega$ is a connected region. This completes the proof.
\begin{Proposition}\label{MProposition2} Let $\mu$ be a finite positive measure with support in $ K \subset\bar {\mathbb D}$ and $\mu | _{\partial \mathbb D} = hm.$ Let $1 < p <\infty, ~ q = \frac{p}{p-1}, ~ f\in C(\bar{\mathbb D}), ~ g \in L^q (\mu ),$ and $fg\mu \perp Rat(K).$ Then for $0 < \beta < \frac{1}{16},$ $b > 0,$ and $m$-almost all $e^{i\theta}\in \partial \mathbb D,$ there exist $\frac{3}{4} < r_\theta < 1,$ $E_{\delta}^f \subset B(e^{i\theta}, \delta),$ and $\epsilon (\delta ) > 0,$ where $0 < \delta < 1 -r_\theta ,$ such that $\lim_{\delta\rightarrow 0}\epsilon (\delta ) = 0,$ $\gamma(E_\delta ^f) < \epsilon (\delta ) \delta ,$ and for $\lambda _0 \in (\partial B (e^{i\theta}, \frac{\delta}{2} )) \cap \Gamma _{\frac{1}{4}}(e^{i\theta}),$
\[
\ \left |\mathcal C \left (\dfrac{(1 - \bar \lambda _0 z)^{\frac{2}{p}}}{ (1-|\lambda _0|^2)^{\frac{1}{p}}}fg\mu \right )(\lambda) \right | \le \left(b + \dfrac{1+4\beta}{1-4\beta} \left ( \int _{\partial \mathbb D} \dfrac{1 - |\lambda _0|^2}{|1 - \bar \lambda _0 z |^2} |g|^qd\mu \right )^{\frac{1}{q}} \right ) \|f\|_{L^p(\mu )} \tag{2-17}
\]
for all $\lambda\in (B (\lambda _0, \beta \delta ) \setminus E_\delta^f )\cap U(g\mu ) .$
\end{Proposition}
{\bf Proof:} Let $\nu = \mu | _{\mathbb D}.$ We now apply Lemma \ref{CauchyTLemma} for $p, ~ q,~f,~ g, $ and $a=b.$ For $e^{i\theta} \in \partial \mathbb D\setminus EV(|g|^q\nu)$ (as in (2-6) and $m(EV(|g|^q\nu)) = 0$), there exist $\frac{3}{4} < r_\theta < 1,$ $E_{\delta}^f \subset B(e^{i\theta}, \delta),$ and $\epsilon (\delta ) > 0,$ where $0 < \delta < 1 -r_\theta ,$ such that $\lim_{\delta\rightarrow 0}\epsilon (\delta ) = 0,$ $\gamma(E_\delta ^f) < \epsilon (\delta ) \delta ,$ and for $\lambda _0 \in (\partial B (e^{i\theta}, \frac{\delta}{2} )) \cap \Gamma _{\frac{1}{4}}(e^{i\theta}),$
\[
\ \left | \mathcal C \left (\dfrac{(1 - \bar \lambda _0 z)^{\frac{2}{p}}}{ (1-|\lambda _0|^2)^{\frac{1}{p}}}fg\nu \right )(\lambda ) - \mathcal C \left (\dfrac{(1 - \bar \lambda _0 z)^{\frac{2}{p}}}{ (1-|\lambda _0|^2)^{\frac{1}{p}}}fg\nu \right )(\frac{1}{\bar{\lambda} _0})\right | \le b \|f\|_{L^p(\mu )}
\]
for all $\lambda\in (B (e^{i\theta }, \delta ) \setminus E_\delta^f )\cap U(g\mu ) .$ Moreover,
\[
\ \mathcal C \left (\dfrac{(1 - \bar \lambda _0 z)^{\frac{2}{p}}}{ (1-|\lambda _0|^2)^{\frac{1}{p}}}fg\mu \right )(\frac{1}{\bar{\lambda} _0}) = 0
\]
since $fg\mu \perp Rat(K).$ From Lemma \ref{CauchyTLemma}, for all $\lambda\in (B (\lambda _0, \beta \delta ) \setminus E_\delta^f )\cap U(g\mu ),$ we get
\[
\ \begin{aligned}
\ & \left |\mathcal C \left (\dfrac{(1 - \bar \lambda _0 z)^{\frac{2}{p}}}{ (1-|\lambda _0|^2)^{\frac{1}{p}}}fg\mu \right )(\lambda) \right | \\
\ \le & \left |\mathcal C \left (\dfrac{(1 - \bar \lambda _0 z)^{\frac{2}{p}}}{ (1-|\lambda _0|^2)^{\frac{1}{p}}}fg\nu \right )(\lambda) - \mathcal C \left (\dfrac{(1 - \bar \lambda _0 z)^{\frac{2}{p}}}{ (1-|\lambda _0|^2)^{\frac{1}{p}}}fg\nu \right )(\frac{1}{\bar{\lambda} _0})\right | \\
\ & + \left |\int_{\partial \mathbb D} \left ( \dfrac{1}{z - \lambda} - \dfrac{1}{z - \frac{1}{\bar{\lambda} _0}} \right )\dfrac{(1 - \bar \lambda _0 z)^{\frac{2}{p}}}{ (1-|\lambda _0|^2)^{\frac{1}{p}}}fg\mu\right | \\
\ \le & b \|f\|_{L^p(\mu )} + \int_{\partial \mathbb D} \dfrac{|1 - \lambda \bar{\lambda} _0| }{|z - \lambda |} \dfrac{ (1 - |\lambda _0|^2)^{-\frac{1}{p}} }{ |1 - \bar \lambda _0 z|^{1 - \frac{2}{p}}} |fg| d \mu \\
\ \le & b \|f\|_{L^p(\mu )} + \dfrac{1+4\beta}{1-4\beta} \int_{\partial \mathbb D} \dfrac{ (1 - |\lambda _0|^2)^{\frac{1}{q}}}{ |1 - \bar \lambda _0 z|^{\frac{2}{q}}} |fg| d \mu
\ \end{aligned}
\]
where the last step follows from
\[
\ \dfrac{|1 - \lambda \bar{\lambda} _0| }{|z - \lambda |} \le \dfrac{1 - |\lambda _0|^2 + |\lambda _0||\lambda - \lambda _0| }{|z - \lambda _0 | - |\lambda - \lambda _0|} \le \dfrac{(1+4\beta)(1 - |\lambda _0|^2)}{|z - \lambda _0 | - 4\beta (1 - |\lambda _0|)} \le \dfrac{(1+4\beta)(1 - |\lambda _0|^2)}{(1 - 4\beta )|1 - \bar\lambda _0 z|}
\]
for $z\in\partial \mathbb D.$ The corollary now follows from H\"older's inequality.
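To spell out this last step: applying H\"older's inequality with exponents $p$ and $q$ to the factors $|f|$ and $(1 - |\lambda _0|^2)^{\frac{1}{q}}|1 - \bar \lambda _0 z|^{-\frac{2}{q}}|g|$ gives
\[
\ \int_{\partial \mathbb D} \dfrac{ (1 - |\lambda _0|^2)^{\frac{1}{q}}}{ |1 - \bar \lambda _0 z|^{\frac{2}{q}}} |fg| d \mu \le \|f\|_{L^p(\mu )} \left ( \int _{\partial \mathbb D} \dfrac{1 - |\lambda _0|^2}{|1 - \bar \lambda _0 z |^2} |g|^{q}d\mu \right )^{\frac{1}{q}},
\]
which is the bound appearing in (2-19) below.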
\begin{Theorem}\label{MTheorem2} Suppose that $\mu$ is a finite positive measure supported in $K$ and is such that
$abpe(R^t(K,\mu )) = \Omega $ and $R^t(K,\mu )$ is irreducible, where $\Omega$ is a connected region satisfying (2-14), $\mu |_{\partial \mathbb D} = hm,$ and $\mu (\partial _{so} \Omega ) > 0.$
Then for $t > 1,$
\[
\ \lim_{ \Gamma _{\frac{1}{4}}(e^{i\theta})\ni\lambda \rightarrow e^{i\theta}} (1 - |\lambda |^2)^{\frac{1}{t}} M_\lambda = \dfrac{1}{h(e^{i\theta})^{\frac{1}{t}}}
\]
for $\mu$-almost all $e^{i\theta}\in \partial _{so} \Omega.$ If the diameters of all components of $\mathbb C \setminus K$ are bounded away from zero, then by Theorem \ref{SBTheorem} (in section 3), the above $\partial _{so} \Omega$ can be replaced by $\partial \mathbb D.$
\end{Theorem}
{\bf Proof:} By Propositions \ref{MProposition1} and \ref{MProposition2}, for $\mu$-almost all $e^{i\theta}\in \partial _{so} \Omega$ with $G(e^{i\theta})h(e^{i\theta}) \ne 0$ and $\Gamma ^{r_0}(e^{i\theta}) \subset \Omega ,$ $0 < \beta < \frac{1}{16},$ $b > 0,$ and $f\in Rat(K),$ there exist $\max(\frac{3}{4}, r_0) < r_\theta < 1,$ $E_{\delta} \subset B(e^{i\theta}, \delta),$ $E_{\delta}^f \subset B(e^{i\theta}, \delta),$ and $\epsilon (\delta ) > 0,$ where $0 < \delta < 1 -r_\theta ,$ such that $\lim_{\delta\rightarrow 0}\epsilon (\delta ) = 0,$ $\gamma(E_\delta ) < \epsilon (\delta ) \delta ,$ $\gamma(E_\delta ^f) < \epsilon (\delta ) \delta ,$
\[
\ \left |\mathcal C(G\mu ) (\lambda) - e^{-i\theta}G(e^{i\theta})h(e^{i\theta}) \right | \le b \tag{2-18}
\]
for all $\lambda\in (B (e^{i\theta}, \delta ) \cap \Gamma (e^{i\theta})\setminus E_\delta )\cap U(G\mu ),$
and
\[
\ \left |\mathcal C \left (\dfrac{(1 - \bar \lambda _0 z)^{\frac{2}{t}}}{ (1-|\lambda _0|^2)^{\frac{1}{t}}}fG\mu \right )(\lambda) \right | \le \left(b + \dfrac{1+4\beta}{1-4\beta} \left ( \int _{\partial \mathbb D} \dfrac{1 - |\lambda _0|^2}{|1 - \bar \lambda _0 z |^2} |G|^{t'}d\mu \right )^{\frac{1}{t'}} \right ) \|f\|_{L^t(\mu )} \tag{2-19}
\]
for $\lambda _0 \in \partial B (e^{i\theta}, \frac{\delta}{2} ) \cap \Gamma _{\frac{1}{4}}(e^{i\theta})$ and all $\lambda\in (B (\lambda _0, \beta \delta ) \setminus E_\delta^f )\cap U(G\mu ) .$ From semi-additivity of (2-2), we get
\[
\ \gamma (E_\delta \cup E_\delta^f) \le A_T(\gamma (E_\delta ) + \gamma ( E_\delta^f)) \le 2A_T \epsilon (\delta ) \delta .
\]
Let $\delta $ be small enough so that $\epsilon (\delta ) < \frac{\beta}{2A_T}\epsilon _1,$ where $\epsilon _1$ is as in Corollary \ref{CorollaryDSet}. From (2-16), (2-18), and (2-19), for $\lambda _0 \in \partial B (e^{i\theta}, \frac{\delta }{2}) \cap \Gamma _{\frac{1}{4}}(e^{i\theta})$ and all $\lambda\in (B (\lambda _0, \beta \delta ) \setminus (E_{\delta } \cup E_{\delta }^f) )\cap U(G\mu ) ,$ we have the following calculation:
\[
\ | 1 - \bar \lambda _0 \lambda | \ge 1- |\lambda _0|^2 - |\lambda - \lambda _0||\lambda _0| \ge 1- |\lambda _0|^2 - \beta \delta |\lambda _0|
\]
and
\[
\ \begin{aligned}
\ (1-|\lambda _0|^2)^{\frac{1}{t}} |f(\lambda ) | \le & \dfrac{| (1 - \bar \lambda _0 \lambda )^{\frac{2}{t}}(1-|\lambda _0|^2)^{-\frac{1}{t}}f(\lambda ) |}{(1 - \beta \frac{\delta |\lambda _0|}{1-|\lambda _0|^2})^{\frac{2}{t}}} \\
\ = & \dfrac{1}{(1 - \beta \frac{\delta |\lambda _0|}{1-|\lambda _0|^2})^{\frac{2}{t}}} \left |\dfrac{ \mathcal C \left (\dfrac{(1 - \bar \lambda _0 z)^{\frac{2}{t}}}{ (1-|\lambda _0|^2)^{\frac{1}{t}}}fG\mu \right )(\lambda) }{\mathcal C(G\mu ) (\lambda)} \right | \\
\ \le & \dfrac{ b + \frac{1+4\beta}{1-4\beta} \left ( \int _{\partial \mathbb D} \frac{1 - |\lambda _0|^2}{|1 - \bar \lambda _0 z |^2} |G|^{t'}d\mu \right )^{\frac{1}{t'}} }{(1-4\beta)^{\frac{2}{t}}(|G(e^{i\theta})|h(e^{i\theta}) - b)} \|f\|_{L^t(\mu )}.
\ \end{aligned}
\]
Since $\gamma (E_{\delta }\cup E_{\delta }^f) < \epsilon _1 \delta ,$ from Corollary \ref{CorollaryDSet}, we conclude
\[
\ M_{\lambda _0} \le \sup _{\underset{\|f\|_{L^t(\mu )} = 1}{f\in Rat(K)}}|f(\lambda _0)| \le \sup _{\underset{\|f\|_{L^t(\mu )} = 1}{f\in Rat(K)}}\|f\|_{L^\infty (B (\lambda _0, \beta \delta ) \setminus (E_{\delta } \cup E_{\delta }^f))}
\]
for $\lambda _0 \in \partial B (e^{i\theta}, \frac{\delta}{2} ) \cap \Gamma _{\frac{1}{4}}(e^{i\theta}).$ Hence,
\[
\ \underset{\Gamma _{\frac{1}{4}}(e^{i\theta})\ni \lambda _0 \rightarrow e^{i\theta}}{\overline\lim} (1-|\lambda _0|^2)^{\frac{1}{t}} M_{\lambda _0} \le \dfrac{ b + \frac{1+4\beta}{1-4\beta} |G(e^{i\theta})|(h(e^{i\theta}))^{\frac{1}{t'}} }{(1-4\beta)^{\frac{2}{t}}(|G(e^{i\theta})|h(e^{i\theta}) - b)}
\]
since $\frac{1 - |\lambda _0|^2}{|1 - \bar \lambda _0 z |^2}$ is the Poisson kernel. Taking $b\rightarrow 0$ and $\beta\rightarrow 0, $ we get
\[
\ \lim_{\Gamma _{\frac{1}{4}}(e^{i\theta}) \ni \lambda \rightarrow e^{i\theta}} (1 - |\lambda |^2)^{\frac{1}{t}} M_\lambda \le \dfrac{1}{h(e^{i\theta})^{\frac{1}{t}}}.
\]
The reverse inequality is from \cite{kt77} (applying Lemma \ref{KTLemma} to testing function $(1 - \bar \lambda _0 z )^{-\frac{2}{t}}$). This completes the proof.
\section{Boundary values of $R^t(K,\mu)$ for certain $K$}
In this section, we are concerned with the boundary behavior of functions in $R^t(K,\mu)$ near the boundary of $K$ (not necessarily the outer boundary, as in the last section), in particular, the inner boundary of $K.$ Our approach to estimating the Cauchy transform in section 2 concentrates on the local behavior of the transform. This makes it possible to extend our methodology to more general $K.$ In order to apply our approach, the following requirements are needed.
\newline
(A) Plemelj's formula must hold for the boundary points under consideration;
\newline
(B) Lemma \ref{TolsaLemma} (2) and Lemma \ref{KTLemma} shall be extended.
For (A), it is known that Plemelj's formula holds for a Lipschitz graph (see Theorem 8.8 in \cite{tol14}). So we will restrict our attention to the portion of the boundary of $K$ that is part of a Lipschitz graph, although Plemelj's formula may hold for more general rectifiable curves.
We define the open cone (with vertical axis)
\[
\ \Gamma (\lambda, \alpha ) = \{z \in \mathbb C :~ |Re(z) - Re(\lambda )| < \alpha |Im(z) - Im(\lambda )| \},
\]
and the half open cones
\[
\ \Gamma ^+ (\lambda, \alpha ) = \{z \in \Gamma (\lambda, \alpha ) :~ Im(z) > Im(\lambda )\}
\]
and
\[
\ \Gamma ^- (\lambda, \alpha ) = \{z \in \Gamma (\lambda, \alpha ) :~ Im(z) < Im(\lambda ) \}.
\]
Set $\Gamma _\delta ^+ (\lambda, \alpha ) = B(\lambda , \delta) \cap \Gamma ^+ (\lambda, \alpha )$ and $\Gamma _\delta ^- (\lambda, \alpha ) = B(\lambda , \delta) \cap \Gamma ^- (\lambda, \alpha ).$ $\Gamma ^+ (\lambda, \alpha )$ (or $\Gamma _\delta ^+ (\lambda, \alpha )$) is called the upper cone. $\Gamma ^- (\lambda, \alpha )$ (or $\Gamma _\delta ^- (\lambda, \alpha )$) is called the lower cone.
Let $A : ~ \mathbb R \rightarrow \mathbb R$ be a Lipschitz function and let $LG$ be its graph. Observe that if $\alpha < \frac{1}{\|A'\|_\infty},$ then, for every $\lambda\in LG,$
$\Gamma ^+ (\lambda, \alpha )\subset \{z\in \mathbb C:~ Im(z) > A(Re(z))\}$ and $\Gamma ^- (\lambda, \alpha )\subset \{z\in \mathbb C:~ Im(z) < A(Re(z))\}.$
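This containment can be verified directly. If $z \in \Gamma ^+ (\lambda, \alpha )$ with $\lambda \in LG$ and $x_0 = Re(\lambda ),$ then $Im(z) - Im(\lambda ) > \frac{1}{\alpha}|Re(z) - x_0|$ and $|A(Re(z)) - A(x_0)| \le \|A'\|_\infty |Re(z) - x_0|,$ so
\[
\ Im(z) - A(Re(z)) \ge (Im(z) - Im(\lambda )) - \|A'\|_\infty |Re(z) - x_0| > \left (\dfrac{1}{\alpha} - \|A'\|_\infty \right ) |Re(z) - x_0| \ge 0
\]
since $\alpha \|A'\|_\infty < 1.$ The lower cone is handled symmetrically.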
On the graph $LG$ of $A,$ we consider the usual complex measure
\[
\ dz_{LG} = \dfrac{1 + iA'(Re(z))}{ (1 + A'(Re(z))^2)^{\frac{1}{2}}} d\mathcal H^1 |_{LG} = (L(z))^{-1}d\mathcal H^1 |_{LG}\tag{3-1}
\]
where $\mathcal H^1$ is one dimensional Hausdorff measure. Notice that $|L(z)| = 1.$ For $1 \le p < \infty $ and $f \in L^p(\mathcal H^1 |_{LG}),$ the nontangential limits
\[
\ \mathcal C_+ (f dz_{LG})(\lambda) = \lim_{\Gamma ^+ (\lambda, \alpha ) \ni z\rightarrow \lambda} \mathcal C (f dz_{LG})(z)
\]
and
\[
\ \mathcal C_{-} (f dz_{LG})(\lambda) = \lim_{\Gamma ^{-} (\lambda, \alpha ) \ni z\rightarrow \lambda} \mathcal C (f dz_{LG})(z)
\]
exist $\mathcal H^1 |_{LG}$-almost everywhere. Moreover,
\[
\ \dfrac{1}{2\pi i} \mathcal C_+ (f dz_{LG})(\lambda) - \dfrac{1}{2\pi i}\mathcal C_- (f dz_{LG})(\lambda) = f(\lambda ) \tag{3-2}
\]
(see Theorem 8.8 in \cite{tol14}).
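As a sanity check, in the flat case $A \equiv 0$ we have $LG = \mathbb R$ and $dz_{LG} = d\mathcal H^1 |_{\mathbb R},$ and (3-2) reduces to the classical jump formula: for continuous $f$ with compact support and $y > 0,$
\[
\ \dfrac{1}{2\pi i} \left ( \mathcal C (f dz_{LG})(x + iy) - \mathcal C (f dz_{LG})(x - iy) \right ) = \dfrac{1}{2\pi i}\int \dfrac{2iy}{(t - x)^2 + y^2} f(t) dt = \dfrac{1}{\pi}\int \dfrac{y}{(t - x)^2 + y^2} f(t) dt \rightarrow f(x)
\]
as $y \rightarrow 0^+,$ since the last integrand involves the Poisson kernel of the upper half plane.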
Suppose that $R^t(K,\mu )$ is irreducible and $\Omega$ is a connected region satisfying:
\[
\ abpe(R^t(K,\mu )) = \Omega ,~ K = \bar \Omega . \tag{3-3}
\]
Let $G \in R^t(K,\mu )^\perp \subset L^{t'}(\mu )$ be such that $G(z) \ne 0$ for $\mu$-almost every $z.$
In order to apply our approach, we need to impose some constraints on $K$ and define type I and II boundaries for $K.$
Upper cone $\Gamma ^+(\lambda, \alpha )$ (or lower cone $\Gamma ^-(\lambda, \alpha )$) is outer for $\lambda \in LG\cap \partial K$ if there exist $\delta _\lambda, ~ \epsilon _\lambda > 0$ such that for every $\delta < \delta _\lambda,$ there exists a point $\lambda _\delta$ with
\[
\ B(\lambda _\delta, \epsilon _\lambda \delta) \subset K^c \cap \Gamma _\delta ^+(\lambda, \alpha ) ~(\text{or } B(\lambda _\delta, \epsilon _\lambda \delta) \subset K^c\cap \Gamma _\delta ^-(\lambda, \alpha )). \tag{3-4}
\]
$\lambda \in LG\cap \partial K$ is a type I boundary point of $LG\cap \partial K$ if either upper cone $\Gamma ^+(\lambda, \alpha )$ or lower cone $\Gamma ^-(\lambda, \alpha )$ is outer. The type I boundary $\partial _{I,\alpha}^{LG} K$ is the set of all type I boundary points of $LG\cap \partial K.$ For example, if $V$ is a component of $K$ and $\partial V$ is a Lipschitz graph, then
$\partial V$ is a type I boundary.
Upper cone $\Gamma ^+(\lambda, \alpha )$ (or lower cone $\Gamma ^-(\lambda, \alpha )$) is inner for $\lambda \in LG\cap \partial K$ if there exists $\delta > 0$ such that
\[
\ \Gamma _\delta ^+(\lambda, \alpha ) \subset \Omega ~(\text{or } \Gamma _\delta ^-(\lambda, \alpha ) \subset \Omega ).
\]
$\lambda \in LG\cap \partial K$ is a type II boundary point of $LG\cap \partial K$ if $\lambda$ is type I and either upper cone $\Gamma ^+(\lambda, \alpha )$ or lower cone $\Gamma ^-(\lambda, \alpha )$ is inner.
The type II boundary $\partial _{II,\alpha}^{LG} K$ is the set of all type II boundary points of $LG\cap \partial K.$ The strong outer boundary of $\Omega$ defined in section 2 is a type II boundary of $K.$
Without loss of generality, for type I boundary point $\lambda,$ we usually assume upper cone $\Gamma ^+(\lambda, \alpha )$ is outer, and for type II boundary point $\lambda,$ we usually assume lower cone $\Gamma ^-(\lambda, \alpha )$ is inner.
\begin{Lemma}
Both $\partial _{I,\alpha}^{LG} K$ and $\partial _{II,\alpha}^{LG} K$ are Borel sets.
\end{Lemma}
{\bf Proof:} Let $\epsilon _0 = \frac{1}{n}$ and define $A_{nm}$ to be the set of $\lambda \in LG\cap \partial K$ such that for every $0 < \delta < \frac{1}{m},$ there exists $\lambda _0$ with
\[
\ B(\lambda _0,\epsilon _0\delta ) \subset K^c \cap \Gamma _\delta ^+(\lambda, \alpha).
\]
One sees that $A_{nm}$ is a closed set and $\partial _{I,\alpha}^{LG} K = \cup A_{nm}.$ If we define $B_{nmk}$ to be the set of $\lambda \in A_{nm}$ such that $\Gamma _{\frac{1}{k}} ^- (\lambda, \alpha ) \subset Int(K),$ then it is straightforward to verify that $B_{nmk}$ is a closed set and $\partial _{II,\alpha}^{LG} K = \cup B_{nmk}.$
It is easy to verify that $\mathcal H^1 |_{LG} (\partial _{I,\alpha_1}^{LG} K \setminus \partial _{I,\alpha_2}^{LG} K) = 0$ and $\mathcal H^1 |_{LG} (\partial _{II,\alpha_1}^{LG} K \setminus \partial _{II,\alpha_2}^{LG} K) = 0$ for $\alpha_1 \ne \alpha_2.$ Therefore, we will fix $0 < \alpha < \frac{1}{\|A'\|_\infty}$ and use $\partial _{I}^{LG} K,~ \partial _{II}^{LG} K$ for $\partial _{I,\alpha}^{LG} K,~ \partial _{II,\alpha}^{LG} K,$ respectively.
For (B), Lemma \ref{lemmaBasic} and Corollary \ref{CorollaryKTGen} below extend Lemma \ref{TolsaLemma} (2) and Lemma \ref{KTLemma}. From now on, we use $LG$ for a fixed Lipschitz graph as above.
\begin{Lemma}\label{lemmaBasic}
Let $\nu$ be a finite complex measure with compact support. Suppose $\nu$ is singular to $\mathcal H^1 |_{LG}$ ($|\nu |\perp \mathcal H^1 |_{LG}$). Then
\newline
(1)
\[
\ \mathcal H^1 |_{LG} (\{\lambda : M_R\nu (\lambda ) \geq a\}) \le \dfrac{C}{a} \|\nu \|
\]
where $C$ is an absolute constant. In this case,
\[
\ \mathcal H^1 |_{LG} (\{\lambda : M_R\nu (\lambda ) = \infty\}) = 0. \tag{3-5}
\]
\newline
(2)
\[
\ \mathcal H^1 |_{LG} (\{\lambda :\underset{\delta\rightarrow 0}{\overline\lim} \dfrac{|\nu |(B(\lambda , \delta ))}{\delta} > 0 \}) = 0. \tag{3-6}
\]
\end{Lemma}
{\bf Proof:} As with Lemma \ref{TolsaLemma} (2), (1) follows from Theorem 2.6 in \cite{tol14}.
(2) Let $E_0$ be a Borel set such that $\mathcal H^1 |_{LG} (E_0) = 0$ and $|\nu | (E_0 ^c) = 0$ (since $|\nu |\perp \mathcal H^1 |_{LG}$). Let $\epsilon ,~ \eta > 0$ and let $E \subset \{\lambda :\underset{\delta\rightarrow 0}{\overline\lim} \dfrac{|\nu |(B(\lambda , \delta ))}{\delta} > \frac{1}{N} \} \cap E_0^c$ be a compact subset. Let $O$ be an open set containing $E$ with $|\nu | (O) < \eta .$ For $x\in E,$ there exists $ 0 < \delta _x < \frac{\epsilon}{3}$ such that $|\nu |(B(x, \delta _x )) \ge \frac{1}{N} \delta _x$ and $B(x, \delta _x ) \subset O.$ Since $E \subset \cup_{x\in E} B(x, \delta _x),$ we can choose a finite subset $\{x_i\}_{i=1}^n $ so that $E\subset \cup_{i=1}^n B(x_i, \delta _{x_i}).$ From the $3r$-covering theorem (see Theorem 2.1 in \cite{tol14}), we can further select a subset $\{x_{i_j}\}_{j=1}^m$ such that $\{B(x_{i_j}, \delta _{x_{i_j}})\}$ are disjoint and
\[
\ E \subset \cup_{i=1}^n B(x_i, \delta _{x_i}) \subset \cup _{j=1}^m B(x_{i_j}, 3\delta _{x_{i_j}}).
\]
Therefore,
\[
\ \mathcal H^1_\epsilon (E) \le 3\sum _{j=1}^m \delta _{x_{i_j}} \le \dfrac{ 3}{N} \sum _{j=1}^m |\nu |(B(x_{i_j}, \delta _{x_{i_j}})) = \dfrac{ 3}{N} |\nu |(\cup _{j=1}^mB(x_{i_j}, \delta _{x_{i_j}})) \le \dfrac{ 3}{N} |\nu | (O) < \dfrac{ 3}{N} \eta .
\]
This implies $\mathcal H^1 |_{LG}(E) = 0.$ The lemma is proved.
\begin{Corollary}\label{CorollaryKTGen}
Let $\nu$ be a positive finite compactly supported measure on $\mathbb C$ that is singular to $\mathcal H^1 |_{LG}$ ($\nu \perp \mathcal H^1 |_{LG}$). For $\mathcal H^1 |_{LG}$-almost all $w\in LG,$ if there exist $\delta_w, ~\epsilon _w > 0$ and points $\lambda _\delta$ such that
\[
\ B(\lambda _\delta, \epsilon _w \delta ) \subset (spt(\nu ))^c\cap B(w, \delta )
\]
for $0 < \delta < \delta_w,$ then
\[
\ \lim _{\delta \rightarrow 0} \int \dfrac{\delta}{|z - \lambda _\delta|^2} d \nu (z) = 0.\tag{3-7}
\]
\end{Corollary}
{\bf Proof:} From (3-5) and (3-6), we assume that
\[
\ M_R\nu (w) < \infty,~ \underset{\delta\rightarrow 0}{\lim} \dfrac{\nu (B(w , \delta ))}{\delta} = 0.
\]
Hence, for $N > 2,$
\[
\ \begin{aligned}
\ & \int \dfrac{\delta}{|z - \lambda _\delta|^2} d \nu (z) \\
\ \le & \int _{B(w,N\delta )} \dfrac{\delta}{|z - \lambda _\delta|^2} d \nu (z) + \int _{B(w,N\delta )^c} \dfrac{\delta}{|z - \lambda _\delta|^2} d \nu (z) \\
\ \le & \dfrac{N}{\epsilon _w^2} \dfrac{\nu (B(w , N\delta ))}{N\delta} + \sum _{k = 0}^\infty \int _{2^kN\delta \le |z-w| < 2^{k+1}N\delta } \dfrac{\delta}{|z - \lambda _\delta|^2} d \nu (z) \\
\ \le & \dfrac{N}{\epsilon _w^2} \dfrac{\nu (B(w , N\delta ))}{N\delta} + \sum _{k = 0}^\infty \dfrac{2^{k+1}N\delta ^2}{(2^kN\delta - \delta)^2} \dfrac{\nu (B(w , 2^{k+1}N\delta ))}{2^{k+1}N\delta} \\
\ \le & \dfrac{N}{\epsilon _w^2} \dfrac{\nu (B(w , N\delta ))}{N\delta} + \dfrac{4N}{(N-1)^2}M_R\nu (w).
\ \end{aligned}
\]
The second term is small for $N$ large, and for a given $N,$ the first term is small if $\delta$ is small enough. Therefore, (3-7) holds.
Now we state our generalized version of Lemma \ref{CauchyTLemma} below. Notice that there is no corresponding function $(1-\bar\lambda _0 z)^\frac{2}{p}$ for a boundary point $w$ of an arbitrary $K,$ in particular, for an inner boundary point $w$.
\begin{Lemma}\label{CauchyTLemmaGen}
Let $\nu$ be a finite measure supported in $K$ and $| \nu | \perp \mathcal H^1 |_{\partial _I^{LG} K}.$ Let $1 < p \le \infty ,$ $q = \frac{p}{p-1},$ $f \in C (K),$ and $g \in L^{q} (| \nu |).$ Define
\[
\ EVG(|g|^q|\nu | ) = \{\lambda \in \partial _I^{LG} K: ~ M_R(|g|^q|\nu |)(\lambda ) = \infty\text{ or }\underset{\delta \rightarrow 0} {\overline\lim}\int \dfrac{\delta |g|^q}{|z - \lambda _\delta|^2} d |\nu |(z) > 0 \}
\]
where $\lambda _\delta$ is defined as in (3-4). Then $\mathcal H^1 |_{\partial _I^{LG} K}(EVG(|g|^q|\nu | ))=0$ (by Lemma \ref{lemmaBasic} and
Corollary \ref{CorollaryKTGen}). Suppose that $a > 0,$ $w \in \partial _I^{LG} K \setminus EVG(|g|^q|\nu | ),$ and upper cone $\Gamma _\delta ^+(w, \alpha )$ is outer. Then there exist $\delta_w >0,$ $E_\delta ^f \subset \bar B (w, \delta ),$ and $\epsilon (\delta ) > 0,$ where $0 < \delta < \delta_w ,$ such that
$\lim _{\delta \rightarrow 0} \epsilon (\delta ) = 0,$ $\gamma(E_\delta ^f) <\epsilon (\delta ) \delta ,
$
and
\[
\ \left |\mathcal C\left (fg\nu \right )(\lambda) - \mathcal C\left (fg\nu \right )(\lambda _\delta) \right | \le a \delta ^{-\frac{1}{p}}\|f\|_{L^{p} (| \nu |)}
\]
for all $\lambda\in (B (w, \delta ) \setminus E_\delta ^f )\cap U(g\nu ) .$ Notice that $E_\delta ^f$ depends on $f$ and all other parameters are independent of $f.$
\end{Lemma}
{\bf Proof:} We just need to make the following slight modifications to the proof of Lemma \ref{CauchyTLemma}:
\newline
(1) Replace $\frac{1}{\bar \lambda _0}$ by $\lambda _\delta .$
\newline
(2) Use Lemma \ref{lemmaBasic} (1) instead of Lemma \ref{TolsaLemma} (2) and use Corollary \ref{CorollaryKTGen} instead of Lemma \ref{KTLemma}.
\newline
(3) Replace $\nu_\delta$ by $\nu_\delta = \frac{\delta^{\frac{1}{p}}\chi _ {B(w, N\delta )}}{z - \lambda _\delta}fg\nu .$
\newline
(4) (2-8) becomes
\[
\ \delta^{\frac{1}{p}}|\mathcal C_\epsilon \nu (\lambda ) - \mathcal C \nu (\lambda _\delta ) | \le \dfrac{a}{2} \|f\|_{L^{p} (| \nu |)} + 2\delta C_*\nu _\delta (\lambda ).
\]
\newline
(5) Define
\[
\ E_\delta ^f = \{\lambda : C_*\nu _\delta (\lambda ) > \dfrac{a\|f\|_{L^{p} (| \nu |)}}{4\delta } \} \cap \bar B(w, \delta ).
\]
(2-9) becomes
\[
\ \gamma (E_\delta ^f) \le \dfrac{4C_T\delta }{a\|f\|_{L^{p} (| \nu |)}} \int _{B(w,N \delta )} \dfrac{\delta^{\frac{1}{p}}|fg|d|\nu |}{|z - \lambda _\delta |} < \epsilon(\delta) \delta ,
\]
where $\epsilon_w$ is as in (3-4) and
\[
\ \epsilon(\delta) = \dfrac{5(N+1)C_T}{a\epsilon_w}\left( \int \dfrac{\delta |g|^qd|\nu |}{|z - \lambda _\delta |^2} \right )^\frac{1}{q}.
\]
The proof is completed.
\begin{Proposition}\label{MProposition1Gen} Let $\nu$ be a finite complex measure with support in $K.$ Suppose that $\nu \perp Rat(K)$ and $\nu = \nu_a + \nu_s$ is the Radon-Nikodym decomposition with respect to $\mathcal H^1 |_{\partial _I^{LG} K},$ where $\nu_a = \frac{1}{2\pi} h\mathcal H^1 |_{\partial _I^{LG} K}$ and $\nu_s \perp \mathcal H^1 |_{\partial _I^{LG} K}.$ Suppose upper cone $\Gamma _\delta ^+(w, \alpha )$ is outer for $w\in \partial _I^{LG} K.$
Then for $b > 0$ and $\mathcal H^1 |_{\partial _I^{LG} K}$-almost all $w\in \partial _I^{LG} K,$ there exist $\delta _w > 0,$ $E_{\delta}\subset B(w, \delta),$ and $\epsilon (\delta ) > 0,$ where $0 < \delta < \delta _w,$ such that $\lim_{\delta\rightarrow 0}\epsilon (\delta ) = 0,$ $\gamma(E_\delta) < \epsilon (\delta ) \delta ,$ and
\[
\ \left |\mathcal C\nu (\lambda) - L(w) h(w) \right | \le b
\]
for all $\lambda\in (\Gamma _\delta ^-(w, \alpha )\setminus E_\delta )\cap U(\nu ) .$
\end{Proposition}
{\bf Proof:} We just need to replace Plemelj's formula (2-11) in the proof of Proposition \ref{MProposition1} by (3-2).
The following lemma is Lemma B in \cite{ars} (see also Lemma 3 in \cite{y17}).
\begin{Lemma} \label{lemmaARS}
There are absolute constants $\epsilon _1 > 0$ and $C_1 < \infty$ with the
following property. For $R > 0,$ let $E \subset \bar B(\lambda _0, R)$ with
$\gamma (E) < R\epsilon_1.$ Then
\[
\ |p(\lambda)| \le \dfrac{C_1}{R^2} \int _{\bar B(\lambda _0, R)\setminus E} |p| \frac{dA}{\pi}
\]
for all $\lambda\in B(\lambda _0, \frac{R}{2})$ and $p \in A(B(\lambda _0, R)),$ the uniform closure of $\mathcal P$ in $C(\bar B(\lambda _0, R)).$
\end{Lemma}
Set
\[
\ a(\alpha) = \frac{1}{8}\sin(\frac{\tan^{-1}(\alpha)}{2}). \tag{3-8}
\]
Clearly, for $\lambda _0 \in\Gamma _\delta ^-(w, \frac{\alpha }{2}) \cap (\partial B(w, \frac{\delta}{2})),$
\[
\ B(\lambda _0, 2a(\alpha)\delta) \subset \Gamma _\delta ^-(w, \alpha ).
\]
The following theorem indicates that the carrier of $\mu _a,$ for irreducible $R^t(K,\mu ),$ does not intersect the set of boundary points for which both the upper and lower cones contain a big portion of $\mathbb C\setminus K.$
\begin{Theorem}\label{SBTheorem}
Suppose that $\mu$ is a finite positive measure supported in $K$ and is such that $abpe(R^t(K,\mu )) = \Omega$ and $R^t(K,\mu )$ is irreducible, where $\Omega$ satisfies (3-3). Suppose that upper cone $\Gamma _\delta ^+(w, \alpha )$ is outer for all $w\in \partial _I^{LG} K$ and $\mu = \mu_a + \mu_s$ is the Radon-Nikodym decomposition with respect to $\mathcal H^1 |_{\partial _I^{LG} K},$ where $\mu_a = \frac{1}{2\pi} h\mathcal H^1 |_{\partial _I^{LG} K}$ and $\mu_s \perp \mathcal H^1 |_{\partial _I^{LG} K}.$
\newline
(a) Define
\[
\ E = \{w\in \partial _I^{LG} K : \underset{\delta\rightarrow 0}{\overline{\lim}}\dfrac{\gamma (\Gamma _\delta^- (w,\alpha ) \setminus K)}{\delta} >0 \},
\]
then $\mu _a (E) = 0.$
\newline
(b) If the diameters of all components of $\mathbb C \setminus K$ are bounded away from zero, then
\[
\ \mu _a (\partial _I^{LG} K \setminus \partial _{II}^{LG} K ) = 0.
\]
\end{Theorem}
{\bf Proof:} (a) Let $G \in R^t(K,\mu )^\perp$ with $G(z) \ne 0$ $\mu$-a.e. as above. Suppose $\mu _a (E) > 0;$ then there exists $w\in E$ such that:
(1) $G(w)h(w) \ne 0.$
(2) Proposition \ref{MProposition1Gen} holds for $w,$ that is, for $b = \frac{|G(w)|h(w)}{2},$ there exist $\delta _w > 0,$ $E_{\delta}\subset B(w, \delta),$ and $\epsilon (\delta ) > 0,$ where $0 < \delta < \delta _w,$ such that $\lim_{\delta\rightarrow 0}\epsilon (\delta ) = 0,$ $\gamma(E_\delta) < \epsilon (\delta ) \delta ,$ and
\[
\ \left |\mathcal C(G\mu ) (\lambda) - L(w) G(w) h(w) \right | \le b \tag{3-9}
\]
for all $\lambda\in (\Gamma _\delta ^-(w, \alpha )\setminus E_\delta )\cap U(G\mu ) .$
(3) There exist a sequence $\{\delta_n\}$ with $\delta _n \rightarrow 0$ and $\epsilon _0 > 0$ such that
\[
\ \gamma (\Gamma _{\delta _n}^- (w,\alpha ) \setminus K) \ge \epsilon _0 \delta _n.
\]
Choose $N$ large enough so that $\epsilon (\delta _N ) < \frac{\epsilon _0}{2}.$ For $\lambda \in \Gamma _{\delta _N}^- (w,\alpha ) \setminus K,$ we see that $\lambda \in U(G\mu )$ and (3-9) does not hold since $\mathcal C(G\mu) (\lambda) = 0.$ That implies
\[
\ \Gamma _{\delta _N}^- (w,\alpha ) \setminus K \subset E_{\delta _N}.
\]
Hence,
\[
\ \gamma (\Gamma _{\delta _N}^- (w,\alpha ) \setminus K) \le \gamma (E_{\delta _N}) \le \dfrac{\epsilon_0}{2}\delta _N.
\]
This contradicts (3).
We now turn to the proof of (b). Let $lb > 0$ be less than the diameters of all components of $\mathbb C \setminus K.$
Let $E_1$ be the set of $w\in \partial _I^{LG} K$ such that there exists a sequence $\{\delta_n\}$ with $\delta_n\rightarrow 0$ and $\Gamma _{\delta _n}^- (w,\frac{\alpha}{2} ) \cap \partial K \ne \emptyset .$ For a given $w\in E_1,$ there exists a component $V_n$ of $\mathbb C\setminus K$ such that $\Gamma _{\delta _n}^- (w,\frac{\alpha}{2} ) \cap V_n \ne \emptyset .$
Let $\lambda _n \in \Gamma _{\delta _n}^- (w,\frac{\alpha}{2} ) \cap V_n,$ then
\[
\ B(\lambda _ n, a(\alpha )\delta _n) \cap V_n \subset \Gamma _{2\delta _n}^- (w,\alpha ) \setminus K,
\]
where $a(\alpha )$ is defined as in (3-8).
Hence,
\[
\ \begin{aligned}
\ \dfrac{1}{4}\min (a(\alpha ) \delta _n, lb) \le &\dfrac{1}{4} diameter (B(\lambda _ n, a(\alpha )\delta _n) \cap V_n) \\
\ \le &\gamma(B(\lambda _ n, a(\alpha )\delta _n) \cap V_n) \\
\le &\gamma(\Gamma _{2\delta _n}^- (w,\alpha ) \setminus K),
\end{aligned}
\]
where the second inequality is implied by Theorem 2.1 on page 199 of \cite{gamelin}. This implies
\[
\ \underset{\delta\rightarrow 0}{\overline{\lim}}\dfrac{\gamma (\Gamma _{\delta}^- (w,\alpha ) \setminus K)}{\delta} \ge \dfrac{a(\alpha )}{8}.
\]
So $E_1\subset E$ and, from (a), we conclude $\mu _a (E_1) = 0.$ We have shown that, for $w\in \partial _I^{LG} K\setminus E_1$ and $\delta$ close enough to zero, $\Gamma _{\delta}^- (w,\frac{\alpha}{2} ) \cap \partial K = \emptyset ;$ in this case, $\Gamma _{\delta}^- (w,\frac{\alpha}{2} ) \subset Int(K) .$
Let $w\in \partial _I^{LG} K\setminus E_1$ so that we can apply Proposition \ref{MProposition1Gen} for $w$ and $b = \frac{ |G(w)h(w)|}{2}.$ There exist $\delta _w > 0,$ $E_{\delta}\subset B(w, \delta),$ and $\epsilon (\delta ) > 0,$ where $0 < \delta < \delta _w,$ such that $\lim_{\delta\rightarrow 0}\epsilon (\delta ) = 0,$ $\gamma(E_\delta) < \epsilon (\delta ) \delta ,$ and
\[
\ \left |\mathcal C(G\mu ) (\lambda) \right | \ge \dfrac{ |G(w)h(w)|}{2} \tag{3-10}
\]
for all $\lambda\in (\Gamma _\delta ^-(w, \alpha )\setminus E_\delta )\cap U(G\mu ) .$ Now choose $\delta$ small enough so that $\epsilon (\delta ) < a(\frac{\alpha}{2})\epsilon _1,$ where $\epsilon _1$ is as in Lemma \ref{lemmaARS} and $a(\alpha)$ is defined in (3-8). Let $\lambda _0 \in \Gamma _{\delta} ^-(w, \frac{\alpha}{4} )$ with $|\lambda _0 - w| = \frac{\delta}{2};$ then $B(\lambda _0, a(\frac{\alpha}{2})\delta) \subset \Gamma _{\delta} ^-(w, \frac{\alpha}{2} ) \subset Int(K)$ for $\delta$ small enough. Since $\gamma (B(\lambda _0, a(\frac{\alpha}{2})\delta) \cap E_\delta ) < \epsilon_1 a(\frac{\alpha}{2})\delta ,$ from Lemma \ref{lemmaARS} and (3-10), we conclude that, for $\lambda \in B(\lambda _0, \frac{a(\frac{\alpha}{2})\delta}{2}),$
\[
\begin{aligned}
\ |r(\lambda)| \le & \dfrac{C_1}{\pi (a(\frac{\alpha}{2})\delta)^2} \int _{B(\lambda _0, a(\frac{\alpha}{2})\delta) \setminus E_\delta } |r(z)| dA(z) \\
\ \le & \dfrac{2C_1}{ \pi |G(w)h(w)|a(\frac{\alpha}{2})^2 \delta ^2} \int _{B(\lambda _0, a(\frac{\alpha}{2})\delta) \setminus E_\delta } |\mathcal C(rG\mu ) (z)| dA(z) \\
\ \le & \dfrac{C_1}{ \pi |G(w)h(w)| a(\frac{\alpha}{2})^2 \delta ^2} \int \int _{B(\lambda _0, a(\frac{\alpha}{2})\delta)} \dfrac{1}{|z-\lambda|} dA(z)|rG|d\mu (\lambda) \\
\le & \dfrac{C_2}{\delta} \|G\|_{L^{t'}(\mu)} \|r\|_{L^{t}(\mu)}
\end{aligned}
\]
where $r\in Rat(K)$ and $C_2$ is a constant. Thus, $B(\lambda _0, \frac{a(\frac{\alpha}{2})\delta}{2}) \subset \Omega .$ This implies $\Gamma _{\frac{\delta}{2}} ^-(w, \frac{\alpha}{4} ) \subset \Omega $ for $\delta$ small enough. Let
\[
\ F(\alpha ) = \{z\in \partial _I^{LG} K\setminus E_1: ~ \exists ~\delta > 0, \text{ such that } \Gamma _{\delta} ^-(z,\alpha) \subset \Omega\},
\]
then $w\in F(\frac{\alpha}{4})$ and there exists an $\mathcal H^1 |_{\partial _I^{LG} K}$-zero set $E_0$ such that
\[
\ \partial _I^{LG} K\setminus (E_0\cup E_1) \subset F(\frac{\alpha}{4}).
\]
It is easy to verify $\mathcal H^1 |_{\partial _I^{LG} K} (F(\alpha _1) \setminus F(\alpha _2) ) = 0$ for $\alpha _1 \ne \alpha _2.$ Let $E_2 = F(\frac{\alpha}{4}) \setminus F(\alpha ),$ then $\mathcal H^1 |_{\partial _I^{LG} K} (E_2) = 0$ and
\[
\ \partial _I^{LG} K\setminus (E_0\cup E_1\cup E_2) \subset \partial _{II}^{LG} K.
\]
The theorem is proved.
The following example is an interesting application of the above theorem.
\begin{Example}
A Swiss cheese $K$ can be constructed as
\[
\ K = \bar {\mathbb D} \setminus \cup_{n=1}^\infty B(a_n, r_n),
\]
where $B(a_n, r_n) \subset \mathbb D,$ $\bar B(a_i, r_i) \cap \bar B(a_j, r_j) = \emptyset $ for $i\ne j,$ $\sum_{n=1}^\infty r_n< \infty ,$ and $K$ has no interior points. Let $\mu$ be the sum of the arc length measures of $\partial \mathbb D$ and all $\partial B(a_n, r_n).$ Let $\nu$ be the sum of $dz$ on $\partial \mathbb D$ and all $-dz$ on $\partial B(a_n, r_n).$ For $f\in Rat(K),$ we have
\[
\ \int f d\nu = 0.
\]
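This is just Cauchy's theorem on the multiply connected domain: a function $f \in Rat(K)$ has finitely many poles, each contained in some $B(a_n, r_n)$ or in $\mathbb C \setminus \bar {\mathbb D},$ so the residue theorem yields
\[
\ \int_{\partial \mathbb D} f dz = \sum_{n=1}^\infty \int_{\partial B(a_n, r_n)} f dz,
\]
where only the finitely many circles enclosing poles of $f$ contribute, the remaining integrals vanishing because $f$ is analytic on the corresponding closed disks.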
Clearly $| \frac{d\nu}{d\mu} | > 0$ $\mu$-a.e. and $\overline{(\frac{d\nu}{d\mu})} \perp R^2(K, \mu),$ so $R^2(K, \mu)$ is irreducible. From Theorem \ref{SBTheorem}, we conclude that
\[
\ \underset{\delta\rightarrow 0}{\overline{\lim}}\dfrac{\gamma (\Gamma ^\delta (e^{i\theta}) \setminus K)}{\delta} = 0
\]
for $m$-almost all $e^{i\theta}\in \partial \mathbb D,$ where $\Gamma ^\delta (e^{i\theta})$ is defined in section 2 (right before Theorem \ref{MTheorem1}).
\end{Example}
The example indicates that although the Swiss cheese $K$ has no interior, the portion of $\mathbb D \setminus K$ near $\partial \mathbb D$ is very small.
\begin{Theorem}\label{MTheorem3}
Suppose that $\mu$ is a finite positive measure supported in $K$ and is such that
$abpe(R^t(K,\mu )) = \Omega $ and $R^t(K,\mu )$ is irreducible, where $\Omega$ is a connected region satisfying (3-3). Suppose that upper cone $\Gamma _\delta ^+(w, \alpha )$ is outer for all $w\in \partial _I^{LG} K$ and $\mu = \mu_a + \mu_s$ is the Radon-Nikodym decomposition with respect to $\mathcal H^1 |_{\partial _I^{LG} K},$ where $\mu_a = \frac{1}{2\pi} h\mathcal H^1 |_{\partial _I^{LG} K}$ and $\mu_s \perp \mathcal H^1 |_{\partial _I^{LG} K},$ and $\mu _a(\partial _{II}^{LG} K ) > 0.$ Then:
\newline
(a) If $f \in R^t(K,\mu ),$ then the nontangential limit $f^*(z)$ of $f$ exists for $\mu _a |_{\partial _{II}^{LG} K }$-almost all $z,$ and $f^* = f |_{\partial _{II}^{LG} K }$ as elements of $L^t(\mu |_{\partial _{II}^{LG} K }).$
\newline
(b) Every nonzero rationally invariant subspace $M$ of $R^t(K,\mu )$ has index 1, that is, if $\lambda _0 \in \Omega,$ then $\dim (M/(S_\mu - \lambda _0)M) = 1.$
\newline
If the diameters of all components of $\mathbb C \setminus K$ are bounded away from zero, then by Theorem \ref{SBTheorem}, the above ${\partial _{II}^{LG} K }$ can be replaced by $\partial _{I}^{LG} K .$
\end{Theorem}
{\bf Proof:} The proof is the same as in Theorem \ref{MTheorem1} if we apply Proposition \ref{MProposition1Gen} instead of Proposition \ref{MProposition1}.
The following lemma is an easy exercise.
\begin{Lemma}\label{LemmaEasy}
Let $B(\lambda, \epsilon \delta ) \subset \Gamma _\delta ^-(w, \alpha )$ (or $\Gamma _\delta ^+(w, \alpha )$). Then there are constants $c(\alpha ),~ C(\alpha ) > 0$ that only depend on $\alpha$ and $\|A'\|_\infty$ such that
\[
\ \min (\epsilon, c(\alpha ))(\delta + |Re(z - w)|) \le |z - \lambda | \le C(\alpha )(\delta + |Re(z-w)|)
\]
for $z\in LG.$
\end{Lemma}
{\bf Proof:} In fact, one can take $C(\alpha ) =1 + \sqrt{1+ \|A'\|_\infty^2 }$ and $c(\alpha ) =\frac{1 - \alpha \|A'\|_\infty}{\sqrt{1+ \|A'\|_\infty^2 }\sqrt{1+ \alpha^2 }}.$ We leave the details to the reader.
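For instance, the upper bound follows from the Lipschitz condition alone: for $z, w \in LG$ we have $|z - w|^2 = (Re(z-w))^2 + (A(Re(z)) - A(Re(w)))^2 \le (1 + \|A'\|_\infty^2)(Re(z-w))^2,$ while $|\lambda - w| \le \delta$ since $\lambda \in \Gamma _\delta ^-(w, \alpha ) \subset B(w, \delta );$ hence
\[
\ |z - \lambda | \le |z - w| + |w - \lambda | \le \sqrt{1 + \|A'\|_\infty^2}\, |Re(z - w)| + \delta \le C(\alpha ) (\delta + |Re(z - w)|).
\]
The lower bound is proved similarly, using $\alpha \|A'\|_\infty < 1$ to keep $\lambda$ quantitatively separated from $LG.$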
Because we do not have an analogous testing function (such as $(1-\bar\lambda _0 z)^{-\frac{2}{t}}$ in Proposition \ref{MProposition2}) in general, we are not able to obtain an estimate of the Cauchy transform as in Proposition \ref{MProposition2}. However, the following proposition is enough for us to establish an upper bound as in (1-2) ((1.4) in \cite{ars}).
We define a set
\[
\ B\Gamma _\delta ^-(w, \alpha ) = \underset{\lambda _0 \in\Gamma _\delta ^-(w, \frac{\alpha }{2}) \cap (\partial B(w, \frac{\delta}{2}))}{\cup}B(\lambda _0, a(\alpha)\delta)
\]
where $a(\alpha)$ is defined as in (3-8).
\begin{Proposition}\label{MProposition2Gen} Let $\mu$ be a finite complex measure with support in $K.$ Suppose that $\mu = \mu_a + \mu_s$ is the Radon-Nikodym decomposition with respect to $\mathcal H^1 |_{\partial _I^{LG} K},$ where $\mu_a = \frac{1}{2\pi} h\mathcal H^1 |_{\partial _I^{LG} K}$ and $\mu_s \perp \mathcal H^1 |_{\partial _I^{LG} K}.$ Suppose upper cone $\Gamma _\delta ^+(w, \alpha )$ is outer for $w\in \partial _I^{LG} K.$ Let $1 < p <\infty, ~ q = \frac{p}{p-1}, ~ f\in C(K), ~ g \in L^q (\mu ),$ and $fg\mu \perp Rat(K).$ Then for $b > 0,$ and $\mathcal H^1 |_{\partial _I^{LG} K}$-almost all $w\in \partial _I^{LG} K,$ there exist $\delta _w >0,$ $E_{\delta}^f \subset B(w, \delta),$ and $\epsilon (\delta ) > 0,$ where $0 < \delta < \delta _w,$ such that $\lim_{\delta\rightarrow 0}\epsilon (\delta ) = 0,$ $\gamma(E_\delta ^f) < \epsilon (\delta ) \delta ,$ and for $\lambda _\delta$ as in (3-4),
\[
\ \begin{aligned}
\ &\left |\mathcal C \left (fg\mu \right )(\lambda) \right | \le b \delta ^{-\frac{1}{p}} \|f\|_{L^p(\mu )} \\
+ &\dfrac{2 C(\alpha )^{\frac{2}{q}} \delta ^{-\frac{1}{p}} \|f\|_{L^p(\mu )}}{\epsilon_w ^\alpha c_0(\alpha )} \left ( \int \dfrac{\delta}{|Re(z-w) - (\lambda _\delta -w)|^2} |g|^qd\mu _a \right )^{\frac{1}{q}}
\ \end{aligned}
\]
for all $\lambda\in (B\Gamma _\delta ^-(w, \alpha ) \setminus E_\delta^f )\cap U(g\mu ),$ where $\epsilon_w ^\alpha = \min(\epsilon_w, c(\alpha ))$ and $c_0(\alpha ) = \min(a(\alpha ),c(\alpha )),$ with $\epsilon_w$ as in (3-4), $a(\alpha )$ from (3-8), and $c(\alpha ),~ C(\alpha )$ from Lemma \ref{LemmaEasy}.
\end{Proposition}
{\bf Proof:} Using Lemma \ref{LemmaEasy}, we have the following calculation:
\[
\ \begin{aligned}
\ &\left |\mathcal C (fg\mu _a )(\lambda) - \mathcal C (fg\mu _a )(\lambda _\delta) \right | \\
\ \le & \int \dfrac{|\lambda - \lambda _\delta|}{|z - \lambda ||z - \lambda _\delta|} |fg|d\mu _a \\
\ \le & \dfrac{2\delta}{\epsilon_w ^\alpha c_0(\alpha )}\int \dfrac{1}{(|Re(z - w)| + \delta )^2} |fg|d\mu _a \\
\ \le & \dfrac{2\delta ^{-\frac{1}{p}}}{\epsilon_w ^\alpha c_0(\alpha )}\int \dfrac{\delta ^\frac{1}{q}}{(|Re(z - w)| + \delta )^\frac{2}{q}} |fg|d\mu _a \\
\ \le & \dfrac{2\delta ^{-\frac{1}{p}}\|f\|_{L^p(\mu )}}{\epsilon_w ^\alpha c_0(\alpha )}\left (\int \dfrac{\delta}{(|Re(z - w)| + \delta )^2} |g|^qd\mu _a \right )^\frac{1}{q}\\
\ \le &\dfrac{2C(\alpha )^{\frac{2}{q}}\delta ^{-\frac{1}{p}} \|f\|_{L^p(\mu )}}{\epsilon_w ^\alpha c_0(\alpha )} \left ( \int \dfrac{\delta}{|Re(z-w) - (\lambda _\delta -w)|^2} |g|^qd\mu _a \right )^{\frac{1}{q}},
\ \end{aligned}
\]
where the last step also follows from Lemma \ref{LemmaEasy}. The rest of the proof is the same as that of Proposition \ref{MProposition2}.
\begin{Theorem}\label{MTheorem4}
Suppose that $\mu$ is a finite positive measure supported in $K$ and is such that
$abpe(R^t(K,\mu )) = \Omega $ and $R^t(K,\mu )$ is irreducible, where $\Omega$ is a connected region satisfying (3-3). Suppose that upper cone $\Gamma _\delta ^+(w, \alpha )$ is outer for all $w\in \partial _I^{LG} K$ and $\mu = \mu_a + \mu_s$ is the Radon-Nikodym decomposition with respect to $\mathcal H^1 |_{\partial _{I}^{LG} K},$ where $\mu_a = \frac{1}{2\pi} h\mathcal H^1 |_{\partial _{I}^{LG} K }$ and $\mu_s \perp \mathcal H^1 |_{\partial _{I}^{LG} K},$ and $\mu _a(\partial _{II}^{LG} K ) > 0.$ Then:
\newline
(a) For $t = 1,$ there are constants $C(w) > 0$ (depending on $G$) such that
\[
\ \underset{\Gamma ^-(w, \frac{\alpha}{2}) \ni \lambda \rightarrow w}{\overline\lim} |\lambda -w |M_\lambda \le \dfrac{C(w)}{h(w)}\tag{3-11}
\]
for $\mu_a$-almost all $w\in \partial _{II}^{LG} K.$
\newline
(b) For $t > 1,$ there are constants $C_0(\alpha ) > 0$ (depending on $\alpha $ and $\|A'\|_\infty$) such that
\[
\ \underset{\Gamma ^-(w, \frac{\alpha}{2})\ni \lambda \rightarrow w}{\overline\lim} |\lambda -w |^{\frac{1}{t}} M_\lambda \le \dfrac{C_0(\alpha )/(\epsilon _w \epsilon _w^\alpha )}{h(w)^{\frac{1}{t}}}\tag{3-12}
\]
for $\mu_a$-almost all $w\in \partial _{II}^{LG} K,$ where $\epsilon _w$ is as in (3-4) and $\epsilon _w^\alpha$ is from Proposition \ref{MProposition2Gen}.
\newline
If the diameters of all components of $\mathbb C \setminus K$ are bounded away from zero, then by Theorem \ref{SBTheorem}, the above $\partial _{II}^{LG} K$ can be replaced by $\partial _{I}^{LG} K.$
\end{Theorem}
{\bf Proof:} (a) Let $w\in \partial _{II}^{LG} K$ so that we can apply Proposition \ref{MProposition1Gen} for $w$ and $b = \frac{ |G(w)h(w)|}{2}.$ There exist $\delta _w > 0,$ $E_{\delta}\subset B(w, \delta),$ and $\epsilon (\delta ) > 0,$ where $0 < \delta < \delta _w,$ such that $\lim_{\delta\rightarrow 0}\epsilon (\delta ) = 0,$ $\gamma(E_\delta) < \epsilon (\delta ) \delta ,$ and
\[
\ \left |\mathcal C(G\mu ) (\lambda) \right | \ge \frac{|G(w)h(w)|}{2} \tag{3-13}
\]
for all $\lambda\in (\Gamma _\delta ^-(w, \alpha )\setminus E_\delta )\cap U(G\mu ) ,$ where $\Gamma _\delta ^-(w, \alpha ) \subset \Omega.$
Now choose $\delta$ small enough that $\epsilon (\delta ) < a(\alpha )\epsilon _1,$ where $\epsilon _1$ is as in Lemma \ref{lemmaARS} and $a(\alpha )$ is from (3-8). If $\lambda _0 \in \Gamma _{\delta} ^-(w, \frac{\alpha}{2} )$ and $|\lambda _0 - w| = \frac{\delta}{2},$ then $B(\lambda _0, a(\alpha )\delta) \subset \Gamma _{\delta} ^-(w, \alpha) \subset \Omega$ for $\delta$ small enough. Since $\gamma (B(\lambda _0, a(\alpha )\delta) \cap E_\delta ) < \epsilon_1 a(\alpha )\delta, $ from Lemma \ref{lemmaARS}, (2-16), and (3-13), we conclude that for $\lambda \in B(\lambda _0, \frac{a(\alpha )\delta}{2})$ and $r\in Rat(K),$ we have
\[
\ \begin{aligned}
\ |r (\lambda) | \le & \dfrac{C_1}{(a(\alpha )\delta) ^2} \int _{B (\lambda _0, a(\alpha )\delta ) \setminus E_\delta} |r(z)| \dfrac{dA(z)}{\pi} \\
\ \le & \dfrac{C_1}{\pi a(\alpha )^2\delta ^2} \int _{B (\lambda _0, a(\alpha )\delta ) \setminus E_\delta} \dfrac{|\mathcal C(rG\mu ) (z)|}{|\mathcal C(G\mu ) (z)|} dA(z)\\
\ \le & \dfrac{2C_1}{\pi |G(w)h(w)|a(\alpha )^2\delta ^2} \int \int _{B (\lambda _0, a(\alpha )\delta )} \dfrac{1}{|z-u|} dA(z) |r(u)||G(u)|d\mu (u)\\
\ \le & \dfrac{C_2}{|G(w)h(w)|\delta } \int |r(u)||G(u)|d\mu (u),
\ \end{aligned}
\]
where $C_1,$ $C_2,$ $C_3,...$ stand for absolute constants, and hence,
\[
\ |\lambda - w| |r (\lambda) | \le \dfrac{C_3}{|G(w)h(w)|} \int |r(u)||G(u)|d\mu (u)
\]
for $\lambda \in \Gamma _{\delta} ^-(w, \frac{\alpha}{2} )$ and $|\lambda - w| = \frac{\delta}{2}.$
Letting $C(w) = \dfrac{C_3\|G\|_{L^\infty (\mu )}}{|G(w)|},$ we get
\[
\ \underset{\Gamma ^-(w, \frac{\alpha}{2})\ni \lambda \rightarrow w}{\overline{\lim}}|\lambda - w|M_\lambda \le \dfrac{C(w)}{h(w)} .
\]
(b) By Proposition \ref{MProposition1Gen} and Proposition \ref{MProposition2Gen}, for $b > 0,$ and $\mathcal H^1 |_{\partial _I^{LG} K}$-almost all $w\in \partial _I^{LG} K,$ there exist $\delta _w >0,$ $E_{\delta} \subset B(w, \delta),$ $E_{\delta}^r \subset B(w, \delta),$ and $\epsilon (\delta ) > 0,$ where $0 < \delta < \delta _w,$ such that $\lim_{\delta\rightarrow 0}\epsilon (\delta ) = 0,$ $\gamma(E_\delta) < \epsilon (\delta ) \delta ,$ $\gamma(E_\delta ^r) < \epsilon (\delta ) \delta ,$
\[
\ |\mathcal C(G\mu ) (\lambda) - L(w)G(w)h(w)| \le b,\tag{3-14}
\]
for all $\lambda\in (\Gamma _\delta ^-(w, \alpha )\setminus E_\delta )\cap U(G\mu ) ,$
and for $\lambda _\delta$ as in (3-4),
\[
\ \begin{aligned}
\ &\left |\mathcal C \left (rg\mu \right )(\lambda) \right | \le b \delta ^{-\frac{1}{t}} \|r\|_{L^t(\mu )} \\
+ &\dfrac{2C(\alpha )^{\frac{2}{t'}}\delta ^{-\frac{1}{t}} \|r\|_{L^t(\mu )}}{\epsilon_w ^\alpha c_0(\alpha )} \left ( \int \dfrac{\delta}{|Re(z-w) - (\lambda _\delta -w)|^2} |G|^{t'}d\mu _a \right )^{\frac{1}{t'}}
\ \end{aligned} \tag{3-15}
\]
for all $\lambda\in (B\Gamma _\delta ^-(w, \alpha ) \setminus E_\delta^r )\cap U(|G|^{t'}\mu ).$
From Plemelj's formula (3-2), we have the following calculation:
\[
\ \begin{aligned}
\ & \underset{\delta\rightarrow 0}{\overline\lim}\int \dfrac{\delta}{|Re(z-w) - (\lambda _\delta -w)|^2} |G|^{t'}d\mu _a \\
\ = & \underset{\delta\rightarrow 0}{\overline\lim}\dfrac{i\delta}{2 Im(\lambda _\delta)} (\dfrac{1}{2\pi i} \mathcal C (|G|^{t'}h\sqrt{1 + (A'(x))^2}dx) (\lambda _\delta - w - Re(w)) \\
\ &- \dfrac{1}{2\pi i} \mathcal C (|G|^{t'}h\sqrt{1 + (A'(x))^2}dx) (\bar\lambda _\delta - \bar w - Re(w) )) \\
\ \le &\dfrac{|G(w)|^{t'} h(w)\sqrt{1 + (A'(Re(w)))^2}}{2\epsilon_w}.
\ \end{aligned}\tag{3-16}
\]
Therefore, for $\eta > 0,$ if $\delta$ is small enough, we conclude
\[
\ \int \dfrac{\delta}{|Re(z-w) - (\lambda _\delta -w)|^2} |G|^{t'}d\mu _a < \dfrac{|G(w)|^{t'} h(w)\sqrt{1 + (A'(Re(w)))^2}}{2\epsilon_w} + \eta .\tag{3-17}
\]
Combining (3-14), (3-15), and (3-17), for $\delta$ small enough, $\lambda_0\in (\partial B(w,\frac{\delta}{2}))\cap \Gamma _\delta ^-(w, \frac{\alpha }{2}) ,$ $B(\lambda_0, a(\alpha ) \delta ) \subset \Gamma _\delta ^-(w, \alpha ), $ and $\lambda \in (B(\lambda_0, a(\alpha ) \delta ) \setminus (E_\delta \cup E_\delta^r))\cap U(|G|^{t'}\mu),$ we get
\[
\ \dfrac{|r(\lambda )|}{\|r\|_{L^t(\mu )}} \le \dfrac{b \delta ^{-\frac{1}{t}} + \dfrac{2C(\alpha )^{\frac{2}{t'}}\delta ^{-\frac{1}{t}}}{\epsilon_w ^\alpha c_0(\alpha )} \left ( \dfrac{|G(w)|^{t'} h(w)\sqrt{1 + \|A'\|_\infty^2}}{2\epsilon_w} + \eta \right)^{\frac{1}{t'}}}{|G(w)| h(w) -b }.\tag{3-18}
\]
From semi-additivity of (2-2), we see
\[
\ \gamma (E_\delta \cup E_\delta^r) \le A_T(\gamma (E_\delta ) + \gamma ( E_\delta^r)) \le 2A_T \epsilon (\delta ) \delta .
\]
Let $\delta $ be small enough so that $\epsilon (\delta ) < \frac{a(\alpha)}{2A_T}\epsilon _1,$ where $\epsilon _1$ is as in Corollary \ref{CorollaryDSet}. From Corollary \ref{CorollaryDSet}, we conclude that (3-18) holds for all $\lambda \in B(\lambda_0, \frac{a(\alpha ) \delta}{2}).$ Hence, for $\delta$ small enough and $\lambda\in (\partial B(w,\frac{\delta}{2}))\cap \Gamma _\delta ^-(w, \frac{\alpha }{2}),$
\[
\ |\lambda - w|^{\frac{1}{t}} M_ \lambda \le \dfrac{b + \dfrac{2C(\alpha )^{\frac{2}{t'}}}{\epsilon_w ^\alpha c_0(\alpha )} \left ( \dfrac{|G(w)|^{t'} h(w)\sqrt{1 + (A'(Re(w)))^2}}{2\epsilon_w} + \eta \right)^{\frac{1}{t'}}}{|G(w)| h(w) -b }.
\]
Therefore, there exists a constant $C_0(\alpha ) > 0$ that only depends on $\alpha$ and $\|A'\|_\infty$ so that
\[
\ \underset{ \Gamma ^-(w,\frac{\alpha}{2})\ni \lambda\rightarrow w}{\overline{\lim}} |\lambda - w|^{\frac{1}{t}} M_ \lambda \le \dfrac{C_0(\alpha )}{\epsilon_w \epsilon_w ^\alpha h(w)^\frac{1}{t}}
\]
for $\mu_a$-almost all $w \in \partial_{II}^{LG} K.$
For the lower bound, we do have testing functions $f_1^\delta (z) = (z - \lambda _\delta ) ^{-2}\in R^1(K,\mu)$ and $f_2^\delta (z) = (z - \lambda _\delta ) ^{-1}\in R^2(K,\mu).$ The following proposition estimates their norms.
\begin{Proposition}\label{LemmaKTGen}
Let $\mu$ be a finite positive measure with support in $K.$
Suppose that $\mu = \mu_a + \mu_s$ is the Radon-Nikodym decomposition with respect to $\mathcal H^1 |_{\partial _I^{LG} K},$ where $\mu_a = \frac{1}{2\pi} h\mathcal H^1 |_{\partial _I^{LG} K}$ and $\mu_s \perp \mathcal H^1 |_{\partial _I^{LG} K},$ and $\mu _a(\partial _{II}^{LG} K ) > 0.$ Suppose that $\Gamma _\delta ^+(w,\alpha )$ is outer for $w\in\partial _{II}^{LG} K,$
then there exists a constant $C_1(\alpha ) > 0$ that only depends on $\alpha$ and $\|A'\|_\infty$ such that
\[
\ \underset{\delta \rightarrow 0}{\overline\lim} \int \dfrac{\delta}{|z - \lambda _\delta|^2} d \mu \le \dfrac{C_1(\alpha )}{\epsilon_w(\epsilon_w^\alpha)^2}h(w)
\]
for $\mu _a$-almost all $w\in \partial _{II}^{LG} K,$ where $\lambda _\delta$ and $\epsilon_w$ are from (3-4).
\end{Proposition}
{\bf Proof:} The proposition follows from Corollary \ref{CorollaryKTGen}, Lemma \ref{LemmaEasy}, and the same argument as in (3-16).
So we have lower bounds for $R^1(K,\mu)$ and $R^2(K,\mu)$ as follows:
For $t = 1,$
\[
\ \underset{\Gamma ^-(w, \frac{\alpha}{2}) \ni \lambda \rightarrow w}{\underline\lim} |\lambda -w | M_\lambda \ge \underset{\delta \rightarrow 0}{\underline\lim} \dfrac{|f_1^\delta (\lambda )|}{\|f_1^\delta\|_{L^1(\mu)}} \ge \dfrac{\epsilon_w(\epsilon_w^\alpha)^2}{4C_1(\alpha )h(w)}.
\]
For $t = 2,$
\[
\ \underset{\Gamma ^-(w, \frac{\alpha}{2}) \ni \lambda \rightarrow w}{\underline\lim} |\lambda -w |^\frac{1}{2} M_\lambda \ge \underset{\delta \rightarrow 0}{\underline\lim} \dfrac{|f_2^\delta (\lambda )|}{\|f_2^\delta\|_{L^2(\mu)}} \ge \dfrac{\sqrt{\epsilon_w}\epsilon_w^\alpha}{2\sqrt{C_1(\alpha )h(w)}}.
\]
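To indicate how the $t=2$ bound arises, here is a sketch (ours, with constants suppressed; we only use that $|\lambda - \lambda _\delta|$ and $|\lambda - w|$ are comparable to $\delta$ for $\lambda$ in the cone at height $\frac{\delta}{2}$): by Proposition \ref{LemmaKTGen},
\[
\ \|f_2^\delta\|_{L^2(\mu)}^2 = \int \dfrac{d\mu}{|z - \lambda _\delta|^2} \lesssim \dfrac{C_1(\alpha )h(w)}{\epsilon_w(\epsilon_w^\alpha)^2 \delta},\:\: |f_2^\delta (\lambda )| = \dfrac{1}{|\lambda - \lambda _\delta|}\gtrsim \dfrac{1}{\delta},
\]
so that
\[
\ |\lambda -w |^\frac{1}{2}\dfrac{|f_2^\delta (\lambda )|}{\|f_2^\delta\|_{L^2(\mu)}} \gtrsim \delta^{\frac{1}{2}}\cdot\dfrac{1}{\delta}\cdot\left (\dfrac{\epsilon_w(\epsilon_w^\alpha)^2 \delta}{C_1(\alpha )h(w)}\right )^{\frac{1}{2}} = \dfrac{\sqrt{\epsilon_w}\epsilon_w^\alpha}{\sqrt{C_1(\alpha )h(w)}},
\]
which is the displayed lower bound up to an absolute factor.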
For $t \ne 1$ and $t \ne 2,$ if $w$ is a boundary point of $\mathbb C\setminus K,$ then we can define a similar testing function and obtain corresponding lower bounds. However, if $w$ is an inner boundary point, we do not have such a testing function with which to estimate the lower bounds.
\textwidth6.2in \textheight8.5in \oddsidemargin0.00in
\evensidemargin0.00in
\newtheorem{thm}{Theorem}[section]
\newtheorem{cor}[thm]{Corollary}
\newtheorem{lem}[thm]{Lemma}
\newtheorem{example}[thm]{Example}
\newtheorem{prop}[thm]{Proposition}
\theoremstyle{definition}
\newtheorem{defn}[thm]{Definition}
\theoremstyle{remark}
\newtheorem{rem}[thm]{\bf{Remark}}
\numberwithin{equation}{section}
\theoremstyle{remark}
\newtheorem{exmp}[thm]{Example}
\newcommand{\bes} {\begin{equation*}}
\newcommand{\ees} {\end{equation*}}
\newcommand{\be} {\begin{equation}}
\newcommand{\ee} {\end{equation}}
\newcommand{\bea} {\begin{eqnarray}}
\newcommand{\eea} {\end{eqnarray}}
\newcommand{\ra} {\rightarrow}
\newcommand{\txt} {\textmd}
\newcommand{\ds} {\displaystyle}
\begin{document}
\title[boundary behavior of positive solutions] {boundary behavior of positive solutions of the heat equation on a stratified Lie group}
\author[J. Sarkar]{Jayanta Sarkar}
\address{Stat Math Unit, Indian Statistical Institute, 203 B. T. Road, Calcutta 700108}
\email{[email protected]}
\subjclass[2010]{Primary 43A80, 31B25, 35R03; Secondary 28A15, 44A35}
\keywords{Stratified Lie groups, Heat equation on Carnot group, Fatou-type theorems, Parabolic convergence, Derivative of measures.}
\begin{abstract}
In this article, we are concerned with a certain type of boundary behavior of positive solutions of the heat equation on a stratified Lie group at a given boundary point. We prove that a necessary and sufficient condition for the existence of the parabolic limit of a positive solution $u$ at a point on the boundary is the existence of the strong derivative of the boundary measure of $u$ at that point. Moreover, the parabolic limit and the strong derivative are equal.
\end{abstract}
\maketitle
\section{Introduction}
To motivate our study in this paper, we first consider the heat equation
\begin{equation}\label{euclideanheat}
\Delta u(x,t)=\frac{\partial}{\partial t}u(x,t),
\end{equation}
on the Euclidean upper half space $\mathbb R^{n+1}_+=\{(x,t)\mid x\in\mathbb R^n,t>0\}$, where $\Delta=\sum_{i=1}^{n}\frac{\partial^2}{\partial x_i^2}$ is the Laplace operator on $\mathbb R^n$. The fundamental solution of the heat equation is known as the Gauss-Weierstrass kernel or the heat kernel of $\mathbb R^{n+1}_+$ and is given by
\begin{equation*}
W(x,t)=(4\pi t)^{-\frac{n}{2}}e^{-\frac{\|x\|^2}{4t}},\:(x,t)\in\mathbb R^{n+1}_+.
\end{equation*}
In this article, by a measure $\mu$ we will always mean a complex Borel measure or a signed Borel measure such that the total variation $|\mu|$ is locally finite, that is, $|\mu|(K)$ is finite for all compact sets $K$. If $\mu(E)$ is nonnegative for all Borel measurable sets $E$, then $\mu$ will be called a positive measure. Also, by a positive solution of some partial differential equation, we shall always mean a nonnegative solution. The Gauss-Weierstrass integral of a measure $\mu$ on $\mathbb R^n$ is given by the convolution
\begin{equation*}
W\mu(x,t)=\int_{\mathbb R^n}W(x-y,t)\:d\mu(y),\:\:\:\: x\in\mathbb R^n,\:\: t\in (0,\infty ),
\end{equation*}
whenever the above integral exists. It is known that if $W\mu(x_0,t_0)$ is finite at some point $(x_0,t_0)\in\mathbb R^{n+1}_+$, then $W\mu$ is well defined and is a solution of the heat equation in $\{(x,t): x\in\mathbb R^n, t\in (0,t_0)\}$ \cite[Theorem 4.4]{W1}. In \cite{G}, Gehring initiated the study of Fatou-type theorems and their converse for solutions of the heat equation for $n=1$. This was motivated by an earlier work of Loomis \cite{L} regarding the converse of Fatou's theorem for Poisson integrals of positive measures. To explain Gehring's result, we need the following characterization of positive solutions of the heat equation (\ref{euclideanheat}), for $n=1$, due to Widder \cite[Theorem 6]{Wi}: if $u$ is a positive solution of the heat equation on $\mathbb R^2_+$, then there exists a nondecreasing function $\beta$ defined on $\mathbb R$ such that
\begin{equation*}
u(x,t)=\int_{\mathbb R}W(x-y,t)\:d\beta (y),\:\:\:\:x\in\mathbb R,\:\: t>0,
\end{equation*}
where $d\beta$ is the Lebesgue-Stieltjes measure induced by $\beta$. The following are the results of Gehring \cite[Theorem 2, Theorem 5]{G}.
\begin{thm}\label{g}
Suppose $u$ is a positive solution of the heat equation on $\mathbb R^2_+$, $x_0\in\mathbb R$ and $\beta$ as above.
\begin{enumerate}
\item If $\beta'(x_0)=L$ then for each $\alpha>0$,
\begin{equation*}
\lim_{\substack{(x,t)\to(x_0,0)\\|x-x_0|^2<\alpha t}}u(x,t)=L.
\end{equation*}
\item If for two distinct real numbers $\alpha_1$, $\alpha_2$
\begin{equation*}
\lim_{t\to 0}u(x_0+\alpha_1\sqrt{t},t)=L=\lim_{t\to 0}u(x_0+\alpha_2\sqrt{t},t),
\end{equation*}
then $\beta'(x_0)=L$.
\end{enumerate}
\end{thm}
It is known \cite[P.93-99]{W1} that if $u$ is a positive solution of the heat equation (\ref{euclideanheat}) on $\mathbb R^{n+1}_+$, then there exists a unique positive measure $\mu$ (known as the boundary measure of $u$) on $\mathbb R^n$ such that
\begin{equation*}
u(x,t)=\int_{\mathbb R^n}W(x-\xi,t)\:d\mu(\xi),\:\:\:\:x\in\mathbb R^n,\:\: t>0.
\end{equation*}
However, it is not clear how to interpret the derivative $\beta'$ (appearing in the theorem above) in higher dimensions. Recently, the author \cite{Sar} has shown that one possible way to resolve this problem is to consider the strong derivative of measures. The notion of strong derivative was introduced by Ramey and Ullrich \cite{UR}, which we now recall: a measure $\mu$ on $\mathbb R^n$ is said to have strong derivative $L\in\mathbb C$ at $x_0\in\mathbb R^n$ if
\begin{equation*}
\lim_{r\to 0}\frac{\mu(x_0+rB)}{m(rB)}=L
\end{equation*}
holds for every open ball $B\subset\mathbb R^n$. Here, $rE=\{rx\mid x\in E\}$, $r\in(0,\infty)$, $E\subset\mathbb R^n$.
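For orientation, we record a standard example (not taken from \cite{UR}): if $d\mu=f\:dm$, where $m$ denotes Lebesgue measure and $f$ is continuous at $x_0$, then for every open ball $B\subset\mathbb R^n$,
\begin{equation*}
\frac{\mu(x_0+rB)}{m(rB)}=\frac{1}{m(rB)}\int_{x_0+rB}f(y)\:dy\to f(x_0),\:\:\:\text{as}\:\:r\to 0,
\end{equation*}
so $\mu$ has strong derivative $f(x_0)$ at $x_0$. Note that the limit is required for every open ball $B$, not merely for balls centered at the origin, which makes the strong derivative a more stringent notion than the symmetric derivative.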
\begin{thm}\label{thmparaeucl}
(Theorem 3.2, \cite{Sar}) Suppose that $u$ is a positive solution of the heat equation (\ref{euclideanheat}) and that $x_0\in\mathbb R^n$, $L\in[0,\infty)$. If $\mu$ is the boundary measure of $u$ then the following statements hold.
\begin{enumerate}
\item[i)]If the strong derivative of $\mu$ at $x_0$ is equal to $L$ then for each $\alpha>0$,
\begin{equation*}
\lim_{\substack{(x,t)\to(x_0,0)\\\|x-x_0\|^2<\alpha t}}u(x,t)=L.
\end{equation*}
\item[ii)]If there exists $\eta>0$ such that
\begin{equation*}
\lim_{\substack{(x,t)\to(x_0,0)\\\|x-x_0\|^2<\eta t}}u(x,t)=L,
\end{equation*}
then the strong derivative of $\mu$ at $x_0$ is also equal to $L$.
\end{enumerate}
\end{thm}
Gehring used Wiener's Tauberian theorem to prove Theorem \ref{g}. On the other hand, the proof of Theorem \ref{thmparaeucl} uses a completely different method, inspired by the work of Ramey and Ullrich \cite[Theorem 2.3]{UR} on the nontangential convergence of positive harmonic functions on $\mathbb R^{n+1}_+$. It is worth pointing out that a recent result of Bar \cite[Theorem 4]{B} on a generalization of Montel's theorem and a result of Poon \cite[Theorem 1.2]{P} on unique continuation for parabolic equations play an important role in the proof of Theorem \ref{thmparaeucl}. The generalization of Montel's theorem mentioned above will also play an important role in a forthcoming paper of ours \cite{RS}, which will deal with an analog of \cite[Theorem 2.3]{UR} for positive eigenfunctions of the Laplace-Beltrami operator on harmonic $NA$ groups.
In this article, our aim is to prove an analogue of Theorem \ref{thmparaeucl} for positive solutions of more general partial differential equations. We will consider positive solutions $u$ of the heat equation corresponding to a sub-Laplacian on a stratified Lie group and prove the equivalence between the parabolic convergence of $u$ at the boundary and the existence of the strong derivative of the boundary measure of $u$ (Theorem \ref{mainc}). We refer the reader to Definition \ref{impdefnc} for the relevant definitions. One of the main difficulties in this setting is that we do not have any explicit expression for the fundamental solution or the heat kernel. However, we do have Gaussian estimates for the heat kernels (see Theorem \ref{fundamental}), and using these estimates we have been able to prove our results. This makes the proofs of our main theorem (Theorem \ref{mainc}) and the auxiliary results much more involved than those of their Euclidean counterparts. This paper is organised as follows: in Section 2, we collect some basic information about stratified Lie groups and the heat equation on these groups. The proof of the result about heat maximal functions, and other relevant lemmas, are given in Section 3. The statement and proof of the main theorem (Theorem \ref{mainc}) are given in the last section.
\section{preliminaries about stratified Lie groups}
Stratified Lie groups (also known as Carnot groups) have been introduced by Folland \cite[P.162]{F}. A stratified Lie group $(G,\circ)$ is a connected, simply connected nilpotent Lie group whose Lie algebra $\mathfrak{g}$ admits a vector space decomposition
\begin{equation*}
\mathfrak{g}=V_1\oplus V_2\oplus\cdots\oplus V_l,
\end{equation*}
such that
\begin{equation*}
[V_1,V_j]=V_{j+1},\:\:1\leq j<l,\:\:\:\:\:\:\:\:[V_1,V_l]=0.
\end{equation*}
Here,
\begin{equation*}
[V_1,V_j]=\text{span}~\{[X,Y]\mid X\in V_1, Y\in V_j \}.
\end{equation*}
Therefore, $V_1$ generates $\mathfrak{g}$ as a Lie algebra. We say that $G$ is of step $l$ and has $\text{dim}\:V_1$ many generators. The Lie algebra $\mathfrak{g}$ is equipped with a canonical family of dilations $\{\delta_r\}_{r>0}$, which are Lie algebra automorphisms defined by \cite[P.5]{FS}
\begin{equation*}
\delta_r\left(\sum_{j=1}^{l}X_j\right)=\sum_{j=1}^{l}r^jX_j,\:\:X_j\in V_j.
\end{equation*}
Since $\mathfrak{g}$ is nilpotent, the exponential map $\exp:\mathfrak{g}\to G$ is a diffeomorphism, and the dilations $\delta_r$ lift via the exponential map to give a one-parameter group of automorphisms of $G$ which we still denote by $\delta_r$. We fix once and for all a bi-invariant measure $m$ on $G$ which is the push forward of the Lebesgue measure on $\mathfrak{g}$ via the exponential map. The bi-invariant measure $m$ on $G$ is, in fact, the Lebesgue measure of the underlying Euclidean space. We shall denote by
\begin{equation*}
Q=\sum_{j=1}^lj(\text{dim}\:V_j),
\end{equation*}
the homogeneous dimension of $G$ and by $0$ the identity element of $G$. The importance of homogeneous dimension stems from the following relation
\begin{equation*}
m\left(\delta_r(E)\right)=r^Qm(E),
\end{equation*}
which holds for all measurable sets $E\subset G$ and $r>0$. A homogeneous norm on $G$ is a continuous function $d:G\to[0,\infty)$ satisfying the following
\begin{enumerate}
\item[i)]$d$ is smooth on $G\setminus\{0\}$;
\item[ii)] $d(\delta_r(x))=rd(x)$, for all $r>0,\:x\in G$;
\item [iii)]$d(x^{-1})=d(x)$, for all $x\in G$;
\item[iv)]$d(x)=0$ if and only if $x=0$.
\end{enumerate}
It is known \cite[P.8-10]{FS} that homogeneous norms always exist on stratified Lie groups and for any homogeneous norm $d$ on $G$ there exists a positive constant $C_d$ such that
\begin{equation}\label{quasimorm}
d(x\circ y)\leq C_d\{d(x)+d(y)\},\;\:\:\:x\in G, y\in G.
\end{equation}
Moreover, any two homogeneous norms on $G$ are equivalent (see \cite[P.230]{BLU}): if $d_1$ and $d_2$ are two homogeneous norms on $G$ then there exists a positive constant $B$ such that
\begin{equation*}
B^{-1}d_1(x)\leq d_2(x)\leq Bd_1(x),\:\:\:\text{for all}\:\:x\in G.
\end{equation*}
The homogeneous norm $d$ defines a left invariant quasi-metric on $G$ (again denoted by $d$) as follows:
\begin{equation*}
d(x,y)=d(x^{-1}\circ y),\:\:\:\: x\in G,y\in G.
\end{equation*}
One can then write from (\ref{quasimorm}) that
\begin{equation*}
d(x,y)\leq C_d\left(d(x,z)+d(z,y)\right),\:\:\:\text{for all}\:\:x,\:y,\:z\in G.
\end{equation*}
\begin{rem}\label{topology} (\cite[Proposition 3.5]{Le})
Every homogeneous norm on $G$ induces the Euclidean topology on $G$.
\end{rem}
\begin{rem}(\cite[Proposition 5.15.1]{BLU})
Let $d$ be a homogeneous norm on $G$. Then for every compact set $K\subset G$, there exists a positive constant $c_K$ such that
\begin{equation}\label{bilipschitz}
(c_K)^{-1}\|x-y\|\leq d(y^{-1}\circ x)\leq c_K\|x-y\|^{\frac{1}{l}},\:\:\:\:\text{for all}\:\:x,\:y\in K,
\end{equation}
where $l$ is the step of $G$ and $\|\cdot\|$ is the norm of the underlying Euclidean space.
\end{rem}
For $x\in G$ and $s>0$, the $d$-ball centered at $x$ with radius $s$ is defined as
\begin{equation*}
B_d(x,s)=\{y\in G:d(x,y)<s\}.
\end{equation*}
It follows that $B_d(x,s)$ is the left translate by $x$ of the ball $B_d(0,s)$ which in turn, is the image under $\delta_s$ of the ball $B_d(0,1)$.
\begin{rem}\label{compact}(\cite[Lemma 1.4]{FS})
If $B=B_d(x,s)$ is a $d$-ball then $\overline{B}=\{y\in G:d(x,y)\leq s\}$ is compact with respect to the Euclidean topology of $G$.
\end{rem}
We identify $\mathfrak{g}$ as the Lie algebra of all left invariant vector fields on $G$ and fix once and for all a basis $\{X_1,X_2,\cdots,X_{N_1}\}$ for $V_1$, with $N_1=\text{dim}\:V_1$, which generates $\mathfrak{g}$ as a Lie algebra. The second order differential operator
\begin{equation*}
\mathcal{L}=\sum_{j=1}^{N_1}X_j^2,
\end{equation*}
is called a sub-Laplacian for $G$.
\begin{rem} (\cite[Theorem 2.2]{BLU1})
There exists a homogeneous norm $d_{\mathcal{L}}$ on $G$ such that $d_{\mathcal{L}}(\cdot )^{2-Q}$ is the fundamental solution of $\mathcal{L}$.
\end{rem}
A differential operator $D$ on $G$ is said to be homogeneous of degree $\lambda$, where $\lambda\in\mathbb C$, if
\begin{equation*}
D(f\circ\delta_r)=r^{\lambda}(Df)\circ\delta_r,
\end{equation*}
for all $f\in C_c^{\infty}(G)$, $r>0$. It is evident that $X\in \mathfrak{g}$ is homogeneous of degree $j$ if and only if $X\in V_j$, $1\leq j\leq l$ (see \cite[P.172]{F}). Hence, $\mathcal{L}$ is a left invariant second order differential operator on $G$ which is homogeneous of degree two.
The heat operator $\mathcal{H}$ associated to the sub-Laplacian $\mathcal{L}$ is the differential operator
\begin{equation*}
\mathcal{H}=\mathcal{L}-\frac{\partial}{\partial t}
\end{equation*}
on $G\times(0,\infty)$. Since $X_1,X_2,\cdots,X_{N_1}$ generate the whole of $\mathfrak{g}$ as an algebra, by a celebrated theorem of H\"{o}rmander \cite[Theorem 1.1]{H}, $\mathcal{L}$ and $\mathcal{H}$ are hypoelliptic on $G$ and $G\times (0,\infty)$ respectively. Hypoellipticity of $\mathcal{H}$ plays an important role in the results we prove.
\begin{exmp}
The simplest nontrivial example of a stratified group is given by the Heisenberg group $H^n$. As a set, $H^n$ is $\mathbb C^n\times\mathbb R$. Denoting the points of $H^n$ by $(z,s)$ with $z=(z_1,\cdots,z_n)\in\mathbb C^n,\:s\in\mathbb R$, the group law is given by
\begin{equation*}
(z,s)\circ(z',s')=\left(z+z',s+s'+2\sum_{j=1}^n\Im(z_j\overline{z_j'})\right)
\end{equation*}
With the notation $z_j=x_j+iy_j$, the horizontal space $V_1=\mathbb R^{2n}\times\{0\}$ is spanned by the vector fields
\begin{equation*}
X_j=\frac{\partial}{\partial x_j}+2y_j\frac{\partial}{\partial s},\:\:Y_j=\frac{\partial}{\partial y_j}-2x_j\frac{\partial}{\partial s}
\end{equation*}
The one dimensional centre $V_2=\{0\}\times\mathbb R$ is generated by the vector field
\begin{equation*}
S=\frac{\partial}{\partial s}.
\end{equation*}
The nonzero Lie brackets of the basis elements are given by
\begin{equation*}
[X_j,Y_j]=-4S,\:\:\:\: 1\leq j\leq n.
\end{equation*}
The sub-Laplacian $\mathcal{L}=\sum_{j=1}^n(X_j^2+Y_j^2)$ is known as the Kohn Laplacian in the literature. The corresponding homogeneous norm $d_{\mathcal{L}}$ (known as Koranyi norm) on $H^n$ is given by the following expression
\begin{equation*}
d_{\mathcal{L}}(z,s)=\left(|z|^4+16s^2\right)^{\frac{1}{4}}.
\end{equation*}
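For this example one can also compute the dilations and the homogeneous dimension explicitly (a routine verification): $\delta_r(z,s)=(rz,r^2s)$ for $r>0$, so that
\begin{equation*}
Q=1\cdot\text{dim}\:V_1+2\cdot\text{dim}\:V_2=2n+2,
\end{equation*}
and $d_{\mathcal{L}}(\delta_r(z,s))=\left(r^4|z|^4+16r^4s^2\right)^{\frac{1}{4}}=rd_{\mathcal{L}}(z,s)$, in accordance with property (ii) of a homogeneous norm.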
We refer the reader to \cite{BLU} for more examples of stratified Lie groups.
\end{exmp}
As stated before, in this paper we are interested in the boundary behavior of positive solutions of the heat equation on stratified groups:
\begin{equation}\label{heateq}
\mathcal{H}u(x,t)=0,\:\:\:\:\:\:(x,t)\in G\times (0,\infty).
\end{equation}
We list some properties of the fundamental solution of the heat equation (\ref{heateq}).
\begin{thm}\label{fundamental}
The fundamental solution of $\mathcal{H}$ is given by
\begin{equation*}
\Gamma(x,t;\xi):=\Gamma(\xi^{-1}\circ x,t),\:\:\:\: x\in G,\:\xi\in G,\:t\in(0,\infty),
\end{equation*}where $\Gamma$ is a smooth, strictly positive function on $G\times(0,\infty)$ satisfying the following properties:
\begin{enumerate}
\item[(i)] $\Gamma(x,t+\tau)=\int_{G}\Gamma(\xi^{-1}\circ x,t)\Gamma(\xi,\tau)\:dm(\xi)$,\: \:\:$x\in G,\:\:t\in(0,\infty),\:\:\tau\in(0,\infty)$.
\item[(ii)] $\Gamma(x,t)=\Gamma(x^{-1},t)$,\:\:\:$(x,t)\in G\times(0,\infty)$.
\item[(iii)] $\Gamma(\delta_r(x),r^2t)=r^{-Q}\Gamma(x,t)$,\:\:\:$(x,t)\in G\times(0,\infty)$,\:\:$r>0$.
\item[(iv)] $\int_{G}\Gamma(x,t)\:dm(x)=1$,\:\:\:$t>0$.
\item [(v)] There exists a positive constant $c_0$, depending only on $\mathcal{L}$, such that the following Gaussian estimates hold.
\begin{equation}\label{gaussian}
c_0^{-1}t^{-\frac{Q}{2}}\exp\left(-\frac{c_0d_{\mathcal{L}}(x)^2}{t}\right)\leq\Gamma(x,t)\leq c_0t^{-\frac{Q}{2}}\exp\left(-\frac{d_{\mathcal{L}}(x)^2}{c_0t}\right),
\end{equation} for every $x\in G$ and $t>0$.
\item[(vi)] Given any nonnegative integers $p$, $q$, there exist positive constants $c_1$, $c_{p,q}$ such that for every $i_1,\cdots,i_{p}\in\{1,\cdots,N_1\}$ we have
\begin{equation}\label{derivativeest}
|X_{i_1}\cdots X_{i_p}(\partial_t)^q\Gamma(x,t)|\leq c_{p,q}t^{-\frac{Q+p+2q}{2}}\exp\left(-\frac{d_{\mathcal{L}}(x)^2}{c_1t}\right),
\end{equation}
for every $x\in G$ and $t>0$.
\end{enumerate}
\end{thm}
The proof of (i)-(iv) can be found in \cite[Proposition 1.68, Corollary 8.2]{FS} and the proofs of (v), (vi) are available in \cite[Theorem 5.1, Theorem 5.2, Theorem 5.3]{BLU1}. Property (v) plays an important role in our study and we will frequently
use it throughout this paper.
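As a consistency check (this special case is not used below): when $G=\mathbb R^n$ is abelian, of step one, with $\mathcal{L}=\Delta$, we have $Q=n$ and $\Gamma$ is the Gauss-Weierstrass kernel of the introduction,
\begin{equation*}
\Gamma(x,t)=(4\pi t)^{-\frac{n}{2}}e^{-\frac{\|x\|^2}{4t}},\:\:\:(x,t)\in\mathbb R^{n+1}_+,
\end{equation*}
for which property (iii) reads $\Gamma(rx,r^2t)=r^{-n}\Gamma(x,t)$ and the Gaussian estimates (\ref{gaussian}) hold trivially.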
For a measure $\mu$ on $G$, we define
\begin{equation}\label{gammamu}
\Gamma\mu(x,t)=\int_{G}\Gamma(\xi^{-1}\circ x,t)\:d\mu(\xi),\:\:\:\:\:x\in G,\:\:t>0,
\end{equation}
whenever the integral above exists. We define
\begin{equation*}
\gamma(x):=\Gamma(x,1),\:\:\:\:\:x\in G.
\end{equation*}
Then by Theorem \ref{fundamental}, (iii) we have
\begin{equation*}
\Gamma(x,t)=t^{-\frac{Q}{2}}\gamma\left(\delta_{\frac{1}{\sqrt{t}}}(x)\right).
\end{equation*}
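Indeed, this identity follows from Theorem \ref{fundamental}, (iii) by taking $r=\frac{1}{\sqrt{t}}$ there:
\begin{equation*}
\Gamma\left(\delta_{\frac{1}{\sqrt{t}}}(x),1\right)=t^{\frac{Q}{2}}\Gamma(x,t),\:\:\:\:(x,t)\in G\times(0,\infty),
\end{equation*}
which is the claim after dividing both sides by $t^{\frac{Q}{2}}$.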
For a function $\psi$ defined on $G$, we set for $t>0$,
\begin{equation*}
\psi_t(x)=t^{-Q}\psi\left(\delta_{\frac{1}{t}}(x)\right),\:\:\:\:\:x\in G.
\end{equation*}
Hence, we can rewrite (\ref{gammamu}) as follows.
\begin{equation}\label{gammaconv}
\Gamma\mu(x,t)=\mu\ast\gamma_{\sqrt{t}}(x),\:\:\:\:\:x\in G,\:\:t\in(0,\infty),
\end{equation}
where $\ast$ is the convolution on the group $G$. From now on, unless mentioned explicitly, we will always write $B(x,s)$ instead of $B_{d_{\mathcal{L}}}(x,s)$ to denote the ball centered at $x$ with radius $s>0$ with respect to the homogeneous norm $d_{\mathcal{L}}$. We recall that there exists a constant $C_\mathcal{L}\geq 1$ such that
\begin{equation*}
d_{\mathcal{L}}(y\circ z)\leq C_{\mathcal{L}}\left(d_{\mathcal{L}}(y)+d_{\mathcal{L}}(z)\right),\:\:y,\:z\in G.
\end{equation*}
Using this we get the following simple inequality.
\begin{equation}\label{revtri}
d_{\mathcal{L}}(y,z)\geq\frac{1}{C_{\mathcal{L}}}d_{\mathcal{L}}(u,z)-d_{\mathcal{L}}(u,y),\:\:u,\:y,\:z\in G.
\end{equation}
We next prove a simple proposition regarding convolutions on $G$. To do this, we take a function $\phi:G\rightarrow (0,\infty)$ such that
\begin{equation}\label{lradial}
\phi(x_1)=\phi(x_2),\:\:\text{whenever}\:\:d_{\mathcal{L}}(x_1)=d_{\mathcal{L}}(x_2);
\end{equation}
\begin{equation}\label{ldec}
\phi(x_1)\leq\phi(x_2),\:\:\text{whenever}\:\:d_{\mathcal{L}}(x_1)\geq d_{\mathcal{L}}(x_2).
\end{equation}
Following \cite[P.247]{BLU}, any function satisfying (\ref{lradial}) (resp. (\ref{ldec})) will be called an $\mathcal{L}$-radial (resp. $\mathcal{L}$-radially decreasing) function. If $\phi$ is an $\mathcal{L}$-radial function on $G$, for the sake of simplicity, we shall often write $\phi(r)=\phi(x)$ whenever $r=d_{\mathcal{L}}(x)$.
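A relevant example, which follows directly from the upper bound in (\ref{gaussian}): the Gaussian majorant
\begin{equation*}
\phi(x)=c_0\exp\left(-\frac{d_{\mathcal{L}}(x)^2}{c_0}\right),\:\:\:x\in G,
\end{equation*}
is $\mathcal{L}$-radial and $\mathcal{L}$-radially decreasing, since it depends only on $d_{\mathcal{L}}(x)$ and is nonincreasing in $d_{\mathcal{L}}(x)$; moreover, it dominates $\gamma=\Gamma(\cdot,1)$ pointwise.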
\begin{prop}\label{finiteconv}
Suppose that $\mu$ is a measure on $G$ and that $\phi$ is as above. Then the finiteness of $|\mu|\ast \phi_{t_0}(x_0)$ for some $(x_0,t_0)\in G\times(0,\infty)$ implies the finiteness of $|\mu|\ast\phi_t(x)$ for all $(x,t)\in G\times (0,t_0/C_\mathcal{L})$.
\end{prop}
\begin{proof}
We take $(x,t)\in G\times(0,t_0/C_\mathcal{L})$ and set $\alpha=\frac{t_0}{t_0-tC_\mathcal{L}}$. We write
\begin{eqnarray}\label{intsplit}
|\mu|\ast\phi_t(x)&=&t^{-Q}\int_{\{\xi\in G:\:d_{\mathcal{L}}(\xi,x_0)<\alpha C_\mathcal{L} d_{\mathcal{L}}(x,x_0)\}}\phi\left(\delta_{\frac{1}{t}}(\xi^{-1}\circ x)\right)\:d|\mu|(\xi)\nonumber\\ &&\:\:\:+t^{-Q}\int_{\{\xi\in G:\:d_{\mathcal{L}}(\xi,x_0)\geq\alpha C_\mathcal{L} d_{\mathcal{L}}(x,x_0)\}}\phi\left(\delta_{\frac{1}{t}}(\xi^{-1}\circ x)\right)\:d|\mu|(\xi)\nonumber\\
&\leq& t^{-Q}\phi(0)|\mu|\left(B(x_0,\alpha C_\mathcal{L} d_{\mathcal{L}}(x,x_0))\right)\\\nonumber&&\:\:\:+t^{-Q}\int_{\{\xi\in G:\:d_{\mathcal{L}}(\xi,x_0)\geq\alpha C_\mathcal{L} d_{\mathcal{L}}(x,x_0)\}}\phi\left(\delta_{\frac{1}{t}}(\xi^{-1}\circ x)\right)\:d|\mu|(\xi).
\end{eqnarray}
Using the reverse triangle inequality (\ref{revtri}), we obtain
\begin{equation*}
d_{\mathcal{L}}(\xi,x)\geq\frac{1}{C_{\mathcal{L}}}d_{\mathcal{L}}(\xi,x_0)-d_{\mathcal{L}}(x,x_0)\geq\left(\frac{1}{C_\mathcal{L}}-\frac{1}{\alpha C_\mathcal{L}}\right)d_{\mathcal{L}}(\xi,x_0),
\end{equation*}
whenever $d_{\mathcal{L}}(\xi,x_0)\geq\alpha C_\mathcal{L} d_{\mathcal{L}}(x,x_0)$. Therefore,
\begin{equation*}
d_{\mathcal{L}}\left(\delta_{\frac{1}{t}}(\xi^{-1}\circ x)\right)\geq\frac{1}{t}\left(\frac{1}{C_\mathcal{L}}-\frac{1}{\alpha C_\mathcal{L}}\right)d_{\mathcal{L}}\left(\xi^{-1}\circ x_0\right)=\frac{1}{t_0}d_{\mathcal{L}}\left(\xi^{-1}\circ x_0\right)=d_{\mathcal{L}}\left(\delta_{\frac{1}{t_0}}(\xi^{-1}\circ x_0)\right),
\end{equation*}
whenever $d_{\mathcal{L}}(\xi,x_0)\geq\alpha C_\mathcal{L} d_{\mathcal{L}}(x,x_0)$. Using this observation, and the fact that $\phi$ is ${\mathcal{L}}$-radially decreasing, we get from (\ref{intsplit})
\begin{equation*}
|\mu|\ast\phi_t(x)\leq t^{-Q}\phi(0)|\mu|\left(B(x_0,\alpha C_\mathcal{L} d_{\mathcal{L}}(x,x_0))\right)+ t^{-Q}\int_{G}\phi\left(\delta_{\frac{1}{t_0}}(\xi^{-1}\circ x_0)\right)\:d|\mu|(\xi).
\end{equation*}
By our hypothesis, the integral on the right-hand side is finite and hence $|\mu|\ast\phi_t(x)<\infty$.
\end{proof}
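We record the elementary computation behind the choice of $\alpha$ in the proof above: since $\alpha=\frac{t_0}{t_0-tC_{\mathcal{L}}}$,
\begin{equation*}
\frac{1}{t}\left(\frac{1}{C_{\mathcal{L}}}-\frac{1}{\alpha C_{\mathcal{L}}}\right)=\frac{1}{tC_{\mathcal{L}}}\left(1-\frac{t_0-tC_{\mathcal{L}}}{t_0}\right)=\frac{1}{tC_{\mathcal{L}}}\cdot\frac{tC_{\mathcal{L}}}{t_0}=\frac{1}{t_0},
\end{equation*}
which is precisely the identity used to compare $d_{\mathcal{L}}\left(\delta_{\frac{1}{t}}(\xi^{-1}\circ x)\right)$ with $d_{\mathcal{L}}\left(\delta_{\frac{1}{t_0}}(\xi^{-1}\circ x_0)\right)$. Note also that $t<t_0/C_{\mathcal{L}}$ guarantees $\alpha>1$.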
Using this Proposition and the Gaussian estimates (\ref{gaussian}), (\ref{derivativeest}) we can prove the following.
\begin{cor}\label{finiteonstrip}
Suppose $\mu$ is a measure on $G$. If $\Gamma\mu(x_0,t_0)$ exists for some $(x_0,t_0)\in G\times(0,\infty)$ then $\Gamma\mu$ is well defined on the whole strip $G\times(0,\delta)$, where $\delta=\frac{t_0}{2c_0^2C_{\mathcal{L}}}$. Moreover, $\Gamma\mu$ is a solution of $\mathcal{H}u=0$ in $G\times(0,\delta)$.
\end{cor}
\begin{proof}
As $\Gamma\mu(x_0,t_0)$ exists, using (\ref{gaussian}) we get
\begin{equation}
\int_{G}\exp\left(-\frac{c_0d_{\mathcal{L}}(\xi^{-1}\circ x_0)^2}{t_0}\right)\:d|\mu|(\xi)<\infty.
\end{equation}
Consequently, for all $t\in (0,t_0/c_0^2)$
\begin{equation}\label{int1lemma1}
\int_{G}\exp\left(-\frac{d_{\mathcal{L}}(\xi^{-1}\circ x_0)^2}{c_0t}\right)\:d|\mu|(\xi)<\infty.
\end{equation}
Setting $\phi(x)=\exp\left(\frac{-d_{\mathcal{L}}(x)^2}{c_0}\right)$, we note that $\phi$ satisfies all the requirements of Proposition \ref{finiteconv}. Moreover, by (\ref{int1lemma1})
\begin{equation*}
|\mu|\ast\phi_{\sqrt{t_1}}(x_0)<\infty,
\end{equation*}
where $t_1=\frac{t_0}{2c_0^2}$. Applying Proposition \ref{finiteconv}, we conclude that $|\mu|\ast\phi_{\sqrt{t}}(x)<\infty$, for all $x\in G$, $t\in (0,t_1/C_{\mathcal{L}})$. Consequently, it follows from the Gaussian estimate (\ref{gaussian}) that $\Gamma\mu(x,t)$ is well defined for all $(x,t)\in G\times (0,\frac{t_0}{2c_0^2C_{\mathcal{L}}})$. To prove the second part, we differentiate $\Gamma\mu$ in $G\times(0,\delta)$ along the vector fields $X_1,\cdots,X_{N_1},\frac{\partial}{\partial t}$ and then use the fact that $\Gamma$ is a fundamental solution of $\mathcal{H}$. Differentiation under integral sign is justified because of the estimate (\ref{derivativeest}).
\end{proof}
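We justify the passage from the first display to (\ref{int1lemma1}) in the proof above: if $t\in(0,t_0/c_0^2)$, then $\frac{1}{c_0t}>\frac{c_0}{t_0}$, and hence
\begin{equation*}
\exp\left(-\frac{d_{\mathcal{L}}(\xi^{-1}\circ x_0)^2}{c_0t}\right)\leq\exp\left(-\frac{c_0d_{\mathcal{L}}(\xi^{-1}\circ x_0)^2}{t_0}\right),\:\:\:\:\:\xi\in G,
\end{equation*}
so that (\ref{int1lemma1}) follows by integrating this pointwise bound against $d|\mu|$.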
\begin{rem}
For an alternative proof of the second part of Corollary \ref{finiteonstrip}, which uses the Harnack inequality, we refer to \cite[Lemma 2.5]{BU}.
\end{rem}
It is clear from the Gaussian estimate (\ref{gaussian}) that for each $t>0$, $\Gamma(\cdot,t)\in L^p(G)$, for all $p\in [1,\infty]$. Thus, for $f\in L^p(G)$, $1\leq p\leq\infty$,
\begin{equation*}
\Gamma f(x,t):=\int_G\Gamma(\xi^{-1}\circ x,t)f(\xi)\:dm(\xi)
\end{equation*}
is well defined for all $(x,t)\in G\times(0,\infty)$; indeed, by H\"older's inequality it suffices to know that $\Gamma(\cdot,t)$ belongs to every $L^p(G)$, which follows from the Gaussian estimate together with the formula for integration in "polar coordinates" \cite[Proposition 1.15]{FS}: for all $g\in L^1(G)$,
\begin{equation}\label{polarcordinate}
\int_Gg(x)\:dm(x)=\int_{0}^{\infty}\int_{S}g(\delta_r(\omega))r^{Q-1}\:d\sigma(\omega)\:dr,
\end{equation}
where $S=\{\omega\in G:d_{\mathcal{L}}(\omega)=1\}$ and $\sigma$ is the unique positive Radon measure on $S$ for which (\ref{polarcordinate}) holds.
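For instance, applying (\ref{polarcordinate}) to $g=\chi_{B(0,r)}$ and using the homogeneity of $d_{\mathcal{L}}$, we obtain
\begin{equation*}
m(B(0,r))=\sigma(S)\int_{0}^{r}s^{Q-1}\:ds=\frac{\sigma(S)}{Q}r^Q=r^Qm(B(0,1)),
\end{equation*}
a scaling identity for $d_{\mathcal{L}}$-balls that will be used repeatedly in what follows.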
\begin{rem}\label{appcc} (\cite[Proposition 1.20]{FS})
As $\gamma$ is positive with $\int_{G}\gamma(x)\:dm(x)=1$ and $\Gamma f(\cdot,t)=f\ast\gamma_{\sqrt{t}}$, it can be shown that if $f\in C_c(G)$ then $\Gamma f(\cdot,t)$ converges to $f$ uniformly as $t$ goes to zero.
\end{rem}
However, a stronger result is true.
\begin{prop}\label{unifc}
If $f\in C_c(G)$ then
\begin{equation*}
\lim_{t\to 0}\frac{\Gamma f(\cdot,t)}{\gamma}=\frac{f}{\gamma},
\end{equation*}
uniformly on $G$.
\end{prop}
\begin{proof}
We assume that $\text{supp}\:f\subset B(0,R)$ for some $R>0$. By (\ref{gaussian}), $\gamma$ is bounded below by a positive number on $B(0,2RC_{\mathcal{L}})$. Therefore, Remark \ref{appcc} tells us that
\begin{equation*}
\lim_{t\to 0}\frac{\Gamma f(x,t)}{\gamma(x)}=\frac{f(x)}{\gamma(x)},
\end{equation*}
uniformly for $x\in B(0,2RC_{\mathcal{L}})$. Hence, it suffices to prove that
\begin{equation*}
\lim_{t\to 0}\frac{\Gamma f(x,t)}{\gamma(x)}=0,
\end{equation*}
uniformly for $x\in G\setminus B(0,2RC_{\mathcal{L}})$.
We observe that
\begin{eqnarray}\label{gammaf}
\frac{|\Gamma f(x,t)|}{\gamma(x)}&=&\frac{1}{\gamma(x)}\left|\int_{G}\Gamma(\xi^{-1}\circ x,t)f(\xi)\:dm(\xi)\right|\nonumber\\
&\leq&\frac{1}{\gamma(x)}\int_{B(0,R)}c_0t^{-\frac{Q}{2}}\exp\left(-\frac{d_{\mathcal{L}}(\xi^{-1}\circ x)^2}{c_0t}\right)|f(\xi)|\:dm(\xi),
\end{eqnarray}
where the last inequality follows from the Gaussian estimate (\ref{gaussian}) and the fact that $\text{supp}\:f\subset B(0,R)$. Now, for $x\in G\setminus B(0,2RC_{\mathcal{L}})$ and $\xi\in B(0,R)$, using (\ref{revtri}) we get
\begin{equation}\label{atrineq}
d_{\mathcal{L}}(\xi^{-1}\circ x)\geq \frac{d_{\mathcal{L}}(x)}{C_{\mathcal{L}}}-d_{\mathcal{L}}(\xi)\geq \frac{d_{\mathcal{L}}(x)}{C_{\mathcal{L}}}-\frac{d_{\mathcal{L}}(x)}{2C_{\mathcal{L}}}=\frac{d_{\mathcal{L}}(x)}{2C_{\mathcal{L}}}.
\end{equation}
Using this in (\ref{gammaf}), we obtain
\begin{equation*}
\frac{|\Gamma f(x,t)|}{\gamma(x)}\leq \frac{c_0}{t^{\frac{Q}{2}}\gamma(x)}\int_{B(0,R)}\exp\left(-\frac{d_{\mathcal{L}}(x)^2}{4c_0C_{\mathcal{L}}^2t}\right)|f(\xi)|\:dm(\xi)\leq \frac{c_0\exp\left(-\frac{d_{\mathcal{L}}(x)^2}{4c_0C_{\mathcal{L}}^2t}\right)}{t^{\frac{Q}{2}}\gamma(x)}\|f\|_{L^1(G)}.
\end{equation*}
Hence, it is enough to show that
\begin{equation*}
\lim_{t\to 0}\frac{\exp\left(-\frac{d_{\mathcal{L}}(x)^2}{4c_0C_{\mathcal{L}}^2t}\right)}{t^{\frac{Q}{2}}\gamma(x)}=0,
\end{equation*}
uniformly for $x\in G\setminus B(0,2RC_{\mathcal{L}})$. But
\begin{equation*}
\frac{\exp\left(-\frac{d_{\mathcal{L}}(x)^2}{4c_0C_{\mathcal{L}}^2t}\right)}{t^{\frac{Q}{2}}\gamma(x)}
\leq\frac{\exp\left(-\frac{d_{\mathcal{L}}(x)^2}{4c_0C_{\mathcal{L}}^2t}\right)}{t^{\frac{Q}{2}}c_0^{-1}\exp(-c_0d_{\mathcal{L}}(x)^2)}
=c_0t^{-\frac{Q}{2}}\exp\left(-\left(\frac{1}{4c_0C_{\mathcal{L}}^2t}-c_0\right){d_{\mathcal{L}}(x)^2}\right),
\end{equation*}
where the inequality follows from the Gaussian estimate (\ref{gaussian}). Taking
$0<t<\frac{1}{4c_0^2C_{\mathcal{L}}^2}$, we see that $\frac{1}{4c_0C_{\mathcal{L}}^2t}-c_0$ is positive. Hence, for such $t$ and for all $x\in G\setminus B(0,2RC_{\mathcal{L}})$, that is, for $d_{\mathcal{L}}(x)\geq2RC_{\mathcal{L}}$, the last inequality gives
\begin{equation*}
\frac{\exp\left(-\frac{d_{\mathcal{L}}(x)^2}{4c_0C_{\mathcal{L}}^2t}\right)}{t^{\frac{Q}{2}}\gamma(x)}\leq c_0t^{-\frac{Q}{2}}\exp\left(-\left(\frac{1}{4c_0C_{\mathcal{L}}^2t}-c_0\right){4C_{\mathcal{L}}^2R^2}\right)\leq At^{-\frac{Q}{2}}e^{-\frac{1}{Bt}},
\end{equation*}
for some positive constants $A$ and $B$. The expression on the right-hand side of the inequality above goes to zero as $t$ goes to zero. This completes the proof.
\end{proof}
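In the last step of the proof above we used the standard fact that exponential decay dominates polynomial growth: substituting $s=1/t$,
\begin{equation*}
\lim_{t\to0+}At^{-\frac{Q}{2}}e^{-\frac{1}{Bt}}=\lim_{s\to\infty}As^{\frac{Q}{2}}e^{-\frac{s}{B}}=0.
\end{equation*}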
Let $M$ denote the set of all measures $\mu$ on $G$ such that $\Gamma\mu$ exists on $G\times(0,\infty)$. In view of Corollary \ref{finiteonstrip}, we have
\begin{equation*}
M=\{\mu\mid\mu\:\text{is a measure on}\:G\:\text{and}\:\Gamma\mu(0,t)\text{ exists for all $t\in (0,\infty)$}\}.
\end{equation*}
We note that if $|\mu|(G)<\infty$, then $\mu\in M$. In particular, every complex measure on $G$ belongs to $M$. We have the following observation regarding this class of measures.
\begin{prop}\label{fubinic}
If $\nu\in M$ and $f\in C_c(G)$ then for each fixed $t>0$,
\begin{equation*}
\int_{G}\Gamma f(x,t)\:d\nu(x)=\int_{G}\Gamma\nu(x,t)f(x)\:dm(x).
\end{equation*}
\end{prop}
\begin{proof}
The result will follow by interchanging integrals using Fubini's theorem. In order to apply Fubini's theorem we must prove that
\begin{equation*}
\int_{G}\int_{\text{supp}\:f}\Gamma(\xi^{-1}\circ x,t)|f(\xi)|\:dm(\xi)\:d|\nu|(x)<\infty.
\end{equation*}
We assume that $\text{supp}\:f\subset B(0,R)$, for some $R>0$. Then for each fixed $t>0$,
\begin{eqnarray*}
I&:=&\int_{G}\int_{B(0,R)}\Gamma(\xi^{-1}\circ x,t)|f(\xi)|\:dm(\xi)\:d|\nu|(x)\nonumber\\
&\leq&c_0t^{-\frac{Q}{2}}\int_{G}\int_{B(0,R)}\exp\left(-\frac{d_{\mathcal{L}}(\xi^{-1}\circ x)^2}{c_0t}\right)|f(\xi)|\:dm(\xi)\:d|\nu|(x)\:\:\:\:\:(\text{using}\:(\ref{gaussian}))\nonumber\\
&=&c_0t^{-\frac{Q}{2}}\int_{B(0,2C_{\mathcal{L}}R)}\int_{B(0,R)}\exp\left(-\frac{d_{\mathcal{L}}(\xi^{-1}\circ x)^2}{c_0t}\right)|f(\xi)|\:dm(\xi)\:d|\nu|(x)\nonumber\\
&&\:\:\:+c_0t^{-\frac{Q}{2}}\int_{G\setminus B(0,2C_{\mathcal{L}}R)}\int_{B(0,R)}\exp\left(-\frac{d_{\mathcal{L}}(\xi^{-1}\circ x)^2}{c_0t}\right)|f(\xi)|\:dm(\xi)\:d|\nu|(x)\nonumber\\
&\leq&c_0t^{-\frac{Q}{2}}|\nu|(B(0,2C_{\mathcal{L}}R))\|f\|_{L^1(G)}\nonumber\\
&&\:\:\:+c_0t^{-\frac{Q}{2}}\int_{G\setminus B(0,2C_{\mathcal{L}}R)}\int_{B(0,R)}\exp\left(-\frac{d_{\mathcal{L}}( x)^2}{4c_0C_{\mathcal{L}}^2t}\right)|f(\xi)|\:dm(\xi)\:d|\nu|(x),\nonumber\\
\end{eqnarray*}
where we have used (\ref{atrineq}) in the last integral. Applying Gaussian estimate (\ref{gaussian}) in the last integral, we get
\begin{eqnarray*}
I&\leq&c_0t^{-\frac{Q}{2}}|\nu|(B(0,2C_{\mathcal{L}}R))\|f\|_{L^1(G)}\\
&&\:\:\:+(4c_0^2C_{\mathcal{L}}^2)^{\frac{Q}{2}}c_0^2\int_{G\setminus B(0,2C_{\mathcal{L}}R)}\int_{B(0,R)}\Gamma(x,4c_0^2C_{\mathcal{L}}^2t)|f(\xi)|\:dm(\xi)\:d|\nu|(x)\\
&\leq&c_0t^{-\frac{Q}{2}}|\nu|(B(0,2C_{\mathcal{L}}R))\|f\|_{L^1(G)}+(4c_0^2C_{\mathcal{L}}^2)^{\frac{Q}{2}}c_0^2\|f\|_{L^1(G)}\Gamma|\nu|(0,4c_0^2C_{\mathcal{L}}^2t).
\end{eqnarray*}
As $\nu\in M$, $I$ is finite. This completes the proof.
\end{proof}
We end this section with some definitions that will be used in the sections that follow.
\begin{defn}\label{impdefnc}
\begin{enumerate}
\item[i)] A function $u$ defined on $G\times(0,t_0)$, for some $0<t_0\leq\infty$, is said to have parabolic limit $L\in\mathbb C$ at $x_0\in G$ if for each $\alpha>0$
\begin{equation*}
\lim_{\substack{(x,t)\to(x_0,0)\\(x,t)\in \texttt{P}(x_0,\alpha)}}u(x,t)=L,
\end{equation*}
where $\texttt{P}(x_0,\alpha)=\{(x,t)\in G\times(0,\infty):d_{\mathcal{L}}(x_0,x)<\alpha\sqrt{t}\}$ is the parabolic domain with vertex at $x_0$ and aperture $\alpha$.
\item[ii)] Given a measure $\mu$ on $G$, we say that $\mu$ has strong derivative $L\in[0,\infty)$ at $x_0$ if
\begin{equation*}
\lim_{r\to 0}\frac{\mu(x_0\circ\delta_r(B))}{m(x_0\circ\delta_r(B))}=L
\end{equation*}
holds for every $d_{\mathcal{L}}$-ball $B\subset G$. The strong derivative of $\mu$ at $x_0$, if it exists, is denoted by $D\mu(x_0)$. Note that if $B=B(y,s)$ for some $y\in G$, $s>0$, then $\delta_r(B)=B(\delta_r(y),rs)$, for all $r>0$.
\item[iii)] A sequence of functions $\{u_j\}$ defined on $G\times(0,\infty)$ is said to converge normally to a function $u$ if $\{u_j\}$ converges to $u$ uniformly on compact subsets of $G\times(0,\infty)$.
\item[iv)] A sequence of functions $\{u_j\}$ defined on $G\times(0,\infty)$ is said to be locally bounded if given any compact set $K\subset G\times(0,\infty)$, there exists a positive constant $C_K$ such that
for all $j$ and all $x\in K$
\begin{equation*}
|u_j(x)|\leq C_K.
\end{equation*}
\item[v)] A sequence of positive measures $\{\mu_j\}$ on $G$ is said to converge to a positive measure $\mu$ on $G$ in weak* if
\begin{equation*}
\lim_{j\to\infty}\int_{G}\psi(y)\:d\mu_j(y)=\int_{G}\psi(y)\:d\mu(y),
\end{equation*}
for all $\psi\in C_c(G)$.
\end{enumerate}
\end{defn}
\section{Some auxiliary results}
We start this section with the following result involving normal convergence and weak* convergence.
\begin{lem}\label{normalc}
Suppose $\{\mu_j\mid j\in\mathbb N\}\subset M$ and $\mu\in M$ are positive measures. If $\{\Gamma\mu_j\}$ converges normally to $\Gamma\mu$ then $\{\mu_j\}$ converges to $\mu$ in weak*.
\end{lem}
\begin{proof}
Let $f\in C_c(G)$ with $\text{supp}\:f\subset B(0,R)$ for some $R>0$. For any $t>0$, we write
\begin{eqnarray}
&&\int_{G}f(x)\:d\mu_j(x)-\int_{G}f(x)\:d\mu(x)\nonumber\\
&=&\int_{G}(f(x)-\Gamma f(x,t))\:d\mu_j(x)+\int_{G}\Gamma f(x,t)\:d\mu_j(x)-\int_{G}\Gamma f(x,t)\:d\mu(x)\nonumber\\
&&\:\:\:\:\:+\int_{G}(\Gamma f(x,t)-f(x))\:d\mu(x).\label{ineq1c}
\end{eqnarray}
Given $\epsilon>0$, by Proposition \ref{unifc} we get some $t_0>0$, such that for all $x\in G$
\begin{equation}\label{uniformineq}
\frac{|\Gamma f(x,t_0)-f(x)|}{\gamma(x)}<\epsilon.
\end{equation}
Using Proposition \ref{fubinic}, it follows from (\ref{ineq1c}) that
\begin{eqnarray*}
&&\left|\int_{G}f(x)\:d\mu_j(x)-\int_{G}f(x)\:d\mu(x)\right|\\
&\leq &\int_{G}|f(x)-\Gamma f(x,t_0)|\:d\mu_j(x)+\int_{B(0,R)}|\Gamma\mu_j(x,t_0)-\Gamma\mu(x,t_0)||f(x)|\:dx\\
&&\:\:\:\:\:+\int_{G}|\Gamma f(x,t_0)-f(x)|\:d\mu(x)\\
&=&I_1(j)+I_2(j)+I_3.
\end{eqnarray*}
It follows from (\ref{uniformineq}) that
\begin{equation*}
I_1(j)=\int_{G}\frac{|\Gamma f(x,t_0)-f(x)|}{\gamma(x)}\gamma(x)\:d\mu_j(x)\leq \epsilon\int_{G}\gamma(x)\:d\mu_j(x)=\epsilon \Gamma\mu_j(0,1),
\end{equation*}
for all $j\in\mathbb N$. By the same argument, we also have that
\begin{equation*}
|I_3|\leq\epsilon \Gamma\mu(0,1).
\end{equation*}
Since $\{\Gamma\mu_j\}$ converges to $\Gamma\mu$ normally, the sequence $\{\Gamma\mu_j(0,1)\}$, in particular, is bounded. Hence, taking $A$ to be the supremum of $\{\Gamma\mu_j(0,1):j\in\mathbb N\}\cup\{\Gamma\mu(0,1)\}$, we get that for all $j\in\mathbb N$
\begin{equation*}
I_1(j)+I_3\leq2A\epsilon.
\end{equation*}
Since $\{\Gamma\mu_j\}$ converges normally to $\Gamma\mu$, there exists $j_0\in\mathbb N$ such that for all $j\geq j_0$,
\begin{equation*}
\|\Gamma\mu_j-\Gamma\mu\|_{L^{\infty}(\overline{B(0,R)}\times\{t_0\})}<\epsilon.
\end{equation*}
This implies that for all $j\geq j_0$,
\begin{equation*}
I_2(j)\leq \epsilon \|f\|_{L^1(G)}.
\end{equation*}
Hence, for all $j\geq j_0$,
\begin{equation*}
\left|\int_{G}f(x)d\mu_j(x)-\int_{G}f(x)d\mu(x)\right|\leq\epsilon (2A+\|f\|_{L^1(G)}).
\end{equation*}
This completes the proof.
\end{proof}
It is well-known that if two positive measures on $\mathbb R^n$ agree on all open balls, then they are equal. We are now going to prove that the same conclusion can be drawn when open balls are replaced by $d_{\mathcal{L}}$-balls.
\begin{prop}\label{measuresequal}
Let $\mu$ and $\nu$ be two positive measures on $G$. If
\begin{equation}\label{equalonball}
\mu(B)=\nu(B),
\end{equation}
for every $d_{\mathcal{L}}$-ball $B\subset G$, then $\mu=\nu$.
\end{prop}
\begin{proof}
We set
\begin{equation*}
\phi=m\left(B(0,1)\right)^{-1}\chi_{B(0,1)}.
\end{equation*}
Since translates and dilates of a $d_{\mathcal{L}}$-ball are again $d_{\mathcal{L}}$-balls, it follows that for all $x\in G$ and $r>0$,
\begin{equation}
\mu\ast\phi_r(x)=\nu\ast\phi_r(x)\label{muphiequalnuphi}.
\end{equation}
It follows from \cite[Theorem 2.18]{Rub} that $\mu$, $\nu$ are regular and hence it suffices to show that
\begin{equation*}
\int_{G}g\:d\mu=\int_{G}g\:d\nu,\:\:\:\:\:\text{for all}\:\:g\in C_c(G).
\end{equation*}
We take $f\in C_c(G)$ with $\text{supp}\:f\subset B(0,R)$. We consider, for $x\in G$ and $r>0$,
\begin{eqnarray}\label{fmuphi}
f\ast(\mu\ast\phi_r)(x)&=&\int_{G}f(y)\mu\ast\phi_r(y^{-1}\circ x)\:dm(y)\nonumber\\&=&\int_{G}f(y)\int_{G}\phi_r\left(\xi^{-1}\circ (y^{-1}\circ x)\right)\:d\mu(\xi)\:dm(y)\nonumber\\
&=&\int_{G}\int_{G}f(y_1\circ\xi^{-1})\phi_r(y_1^{-1}\circ x)\:d\mu(\xi)\:dm(y_1)\nonumber\\
&&\:\:\:\:(\text{substituting}\:\:y=y_1\circ\xi^{-1}\:\:\text{and using the translation invariance of}\:m)\nonumber\\&=&\int_{G}f_{\mu}(y_1)\phi_r(y_1^{-1}\circ x)\:dm(y_1)\nonumber\\&=&f_{\mu}\ast\phi_r(x),
\end{eqnarray} where
\begin{equation}\label{fmudefinition}
f_{\mu}(y)=\int_{G}f(y\circ\xi^{-1})\:d\mu(\xi),\:\:\:y\in G.
\end{equation}
We now claim that $f_{\mu}$ is continuous at $0$. To see this, we consider a sequence $\{y_k\}$ converging to $0$. Since the group operation and $d_{\mathcal{L}}$ are continuous, $y_k\circ\xi^{-1}\to\xi^{-1}$, for each $\xi\in G$, and there exists some positive constant $A$ such that $d_{\mathcal{L}}(y_k)\leq A$, for all $k$. Note that for $d_{\mathcal{L}}(\xi)>C_{\mathcal{L}}(R+A)$,
\begin{equation*}
d_{\mathcal{L}}(y_k\circ\xi^{-1})\geq \frac{1}{C_{\mathcal{L}}}d_{\mathcal{L}}(\xi)-d_{\mathcal{L}}(y_k)>\frac{1}{C_{\mathcal{L}}}C_{\mathcal{L}}(R+A)-A=R, \:\:\:\:\text{for all}\:\:k.
\end{equation*}
Therefore, we can write for all $k$,
\begin{equation}\label{anotherfmu}
f_{\mu}(y_k)=\int_{B(0,C_{\mathcal{L}}(R+A))}f(y_k\circ\xi^{-1})\:d\mu(\xi).
\end{equation}
By continuity of $f$, $f(y_k\circ\xi^{-1})\to f(\xi^{-1})$, for each $\xi$, and hence applying the dominated convergence theorem to the right-hand side of (\ref{anotherfmu}), we obtain
\begin{equation*}
f_{\mu}(y_k)\to\int_{B(0,C_{\mathcal{L}}(R+A))}f(\xi^{-1})\:d\mu(\xi)=\int_{G}f(\xi^{-1})\:d\mu(\xi)=f_{\mu}(0),\:\:\:\:\text{as}\:\:k\to\infty.
\end{equation*}
This proves our claim. Let $\epsilon>0$. Using (\ref{bilipschitz}) we choose some $\delta>0$, such that
\begin{equation*}
|f_{\mu}(y)-f_{\mu}(0)|<\epsilon,\:\:\:\:\text{for all}\:\:\:y\in B(0,\delta).
\end{equation*}
Hence, \begin{eqnarray*}
|f_{\mu}\ast\phi_r(0)-f_{\mu}(0)|&=&\left|\int_{G}f_{\mu}(\xi)\phi_r(\xi^{-1})\:dm(\xi)-\int_{G}f_{\mu}(0)\phi_r(\xi^{-1})\:dm(\xi)\right|\\&\leq&\frac{1}{m(B(0,r))}\int_{B(0,r)}|f_{\mu}(\xi)-f_{\mu}(0)|\:dm(\xi)\\&<&\epsilon,\:\:\:\:\text{for all}\:\:0<r<\delta.
\end{eqnarray*}
This together with (\ref{fmuphi}) and (\ref{fmudefinition}), implies that \begin{equation*}
f\ast(\mu\ast\phi_r)(0)\to f_{\mu}(0)=\int_{G}f(\xi^{-1})\:d\mu(\xi), \:\:\:\:\text{as}\:\:r\to 0.
\end{equation*}
Similarly, we can prove that \begin{equation*}
f\ast(\nu\ast\phi_r)(0)\to f_{\nu}(0)=\int_{G}f(\xi^{-1})\:d\nu(\xi), \:\:\:\:\text{as}\:\:r\to 0,
\end{equation*}
where $f_{\nu}$ is defined according to (\ref{fmudefinition}). Equation (\ref{muphiequalnuphi}) now shows that \begin{equation*}
\int_{G}f(\xi^{-1})\:d\mu(\xi)=\int_{G}f(\xi^{-1})\:d\nu(\xi).
\end{equation*}
This completes the proof.
\end{proof}
We now use this proposition to prove the following measure theoretic result that will be needed in the proof of our main theorem.
\begin{lem}\label{mthc}
Suppose $\{\mu_j\}_{j\geq 1}$, $\mu$ are nonnegative measures on $G$ and $\{\mu_j\}$ converges to $\mu$ in weak*. Then for some $L\in[0,\infty)$, $\mu=Lm$ if and only if $\{\mu_j(B)\}$ converges to $Lm(B)$ for every $d_{\mathcal{L}}$-ball $B\subset G$.
\end{lem}
\begin{proof}
Suppose $\mu=Lm$. Fix a $d_{\mathcal{L}}$-ball $B\subset G$ and $\epsilon>0$. As $\overline{B}$ is compact with respect to the Euclidean topology, by regularity of the Lebesgue measure $m$, there exists an open set $V\supset\overline{B}$ such that $m(V\setminus\overline{B})<\epsilon$. Using Urysohn's lemma \cite[Theorem 2.12]{Rub}, we choose $\psi\in C_c(G)$ such that
\begin{equation*}
0\leq\psi(x)\leq 1,\:\:\text{for all}\:x\in G;\: \psi\equiv1\:\text{on}\:\overline{B};\:\:\psi\equiv0\:\text{on}\:G\setminus V.
\end{equation*}
Then
\begin{equation}\label{regular1}
\int_{G}\psi\:dm=\int_{B}\psi\:dm+\int_{V\setminus\overline{B}}\psi\:dm\leq m(B)+m(V\setminus\overline{B})\leq m(B)+\epsilon.
\end{equation}
As $\psi\equiv1\:\text{on}\:\overline{B}$ and $\mu_j\to\mu$ in weak*,
\begin{equation*}
\limsup_{j\to\infty}\mu_j(B)=\limsup_{j\to\infty}\int_{B}\psi\:d\mu_j\leq\limsup_{j\to\infty}\int_{G}\psi\:d\mu_j=\int_{G}\psi\:d\mu.
\end{equation*}
Using our assumption, that is, $\mu=Lm$ and (\ref{regular1}) in the above, we get
\begin{equation*}
\limsup_{j\to\infty}\mu_j(B)\leq L\int_{G}\psi\:dm\leq L(m(B)+\epsilon).
\end{equation*}
Since $\epsilon>0$ is arbitrary,
\begin{equation}\label{limsup}
\limsup_{j\to\infty}\mu_j(B)\leq Lm(B).
\end{equation}
Similarly, by choosing a compact set $K\subset B$ with
\begin{equation*}
m(K)>m(B)-\epsilon\:\:\:\:\:(\text{using Remark \ref{topology}})
\end{equation*}
and a function $g\in C_c(G)$ such that
\begin{equation*}
0\leq g(x)\leq 1,\:\:\text{for all}\:x\in G;\: g\equiv1\:\text{on}\:K;\:\:g\equiv0\:\text{on}\:G\setminus B,
\end{equation*} we observe that
\begin{equation*}
\int_{G}g\:dm\geq \int_{K}g\:dm=m(K)>m(B)-\epsilon.
\end{equation*}
As $0\leq g\leq1$ with $\text{supp}\:g\subset B$ and $\mu_j\to\mu$ in weak*,
\begin{equation*}
\liminf_{j\to\infty}\mu_j(B)\geq\liminf_{j\to\infty}\int_{G}g\:d\mu_j=\int_{G}g\:d\mu=L\int_{G}g\:dm>L(m(B)-\epsilon).
\end{equation*}
Since $\epsilon>0$ is arbitrary,
\begin{equation*}
\liminf_{j\to\infty}\mu_j(B)\geq Lm(B).
\end{equation*}
Combining the above inequality with (\ref{limsup}) we conclude that
\begin{equation*}
\lim_{j\to\infty}\mu_j(B)=Lm(B).
\end{equation*}
Conversely, we suppose that
\begin{equation}\label{limmuj}
\lim_{j\to\infty}\mu_j(B)=Lm(B),
\end{equation}
for every $d_{\mathcal{L}}$-ball $B\subset G$. We need to prove that $\mu=Lm$. In view of Proposition \ref{measuresequal}, it suffices to show that $\mu(B)=Lm(B)$, for every $d_{\mathcal{L}}$-ball $B\subset G$. The proof of this part is similar to that of the previous part. We fix $\epsilon>0$ and a $d_{\mathcal{L}}$-ball $B=B(x_0,r)\subset G$. We denote by $B^{\prime}$ the $d_{\mathcal{L}}$-ball centred at $x_0$ with radius $r+\epsilon$. Taking Remark \ref{compact} into account and applying Urysohn's lemma, we get a function $f\in C_c(G)$ such that
\begin{equation*}
0\leq f(x)\leq 1,\:\:\text{for all}\:x\in G;\: f\equiv1\:\text{on}\:\overline{B};\:\:f\equiv0\:\text{on}\:G\setminus B^{\prime}.
\end{equation*}
Using our hypothesis, namely $\mu_j\to\mu$ in weak*, the above implies that
\begin{equation*}
\mu(B)=\int_{B}f\:d\mu\leq\int_{G}f\:d\mu=\lim_{j\to\infty}\int_{G}f\:d\mu_j\leq\lim_{j\to\infty}\mu_j(B^{\prime})=Lm(B^{\prime})=Lm(B(0,1))(r+\epsilon)^Q.
\end{equation*}
Since $\epsilon>0$ is arbitrary,
\begin{equation*}
\mu(B)\leq Lm(B(0,1))r^Q=Lm(B).
\end{equation*}
Similarly, letting $B^{\prime\prime}=B(x_0,r-\epsilon)$ and choosing a function $f_1\in C_c(G)$ such that
\begin{equation*}
0\leq f_1(x)\leq 1,\:\:\text{for all}\:x\in G;\: f_1\equiv1\:\text{on}\:B^{\prime\prime};\:\:f_1\equiv0\:\text{on}\:G\setminus B,
\end{equation*}
we obtain
\begin{equation*}
\mu(B)\geq\int_{G}f_1\:d\mu=\lim_{j\to\infty}\int_{G}f_1\:d\mu_j\geq\liminf_{j\to\infty}\int_{B^{\prime\prime}}f_1\:d\mu_j=\liminf_{j\to\infty}\mu_j(B^{\prime\prime}).
\end{equation*}
Consequently, (\ref{limmuj}) gives
\begin{equation*}
\mu(B)\geq Lm(B^{\prime\prime})=Lm(B(0,1))(r-\epsilon)^Q.
\end{equation*}
As $\epsilon>0$ is arbitrary,
\begin{equation*}
\mu(B)\geq Lm(B).
\end{equation*}
This completes the proof.
\end{proof}
Next, we shall consider various types of maximal functions on $G$. For a measurable function $\phi$ defined on $G$ and a complex or a signed measure $\mu$, we define the $\alpha$-nontangential maximal function $M_{\phi}^{\alpha}\mu$, where $\alpha>0$, and the radial maximal function $M_{\phi}^0\mu$ of $\mu$ with respect to $\phi$ by
\begin{eqnarray*}
&&M_{\phi}^{\alpha}\mu(x)=\sup_{\substack{(\xi,t)\in G\times(0,\infty)\\d_{\mathcal{L}}(x,\xi)<\alpha t}}|\mu\ast\phi_t(\xi)|,\:x\in G,\\&&M_{\phi}^0\mu(x)=\sup_{0< t<\infty}|\mu\ast\phi_t(x)|,\:x\in G.
\end{eqnarray*}
It is obvious that $M_{\phi}^0\mu$ is pointwise dominated by $M_{\phi}^{\alpha}\mu$ for all $\alpha>0$. In \cite[Corollary 2.5]{FS}, it was proved that if $\phi$ satisfies some polynomial decay, namely
\begin{equation*}
|\phi(x)|\leq A(1+d_{\mathcal{L}}(x))^{-\lambda}
\end{equation*}
for some $A>0$ and $\lambda>Q$, then $M_{\phi}^{\alpha}:L^1(G)\to L^{1,\infty}(G)$ and $M_{\phi}^{\alpha}:L^p(G)\to L^{p}(G)$, for $1< p\leq\infty$. Although Folland and Stein proved these mapping properties of $M_{\phi}^{\alpha}$ only for $\alpha=1$, their proof works for all $\alpha>0$. An important special case of this type of maximal function is the centred Hardy-Littlewood maximal function, obtained by taking $\phi=\chi_{B(0,1)}$ in $M_{\phi}^0\mu$. We shall denote it by $M_{HL}(\mu)$. In other words,
\begin{equation*}
M_{HL}(\mu)(x)=\sup_{r>0}\frac{|\mu(B(x,r))|}{m(B(x,r))},\:\:\:\:x\in G.
\end{equation*}
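This is consistent with the general definition: for $\phi=\chi_{B(0,1)}$ and a nonnegative measure $\mu$, the scaling $m(B(x,t))=t^Qm(B(0,1))$ gives
\begin{equation*}
\mu\ast\phi_t(x)=t^{-Q}\mu\left(\{\xi\in G:d_{\mathcal{L}}(\xi^{-1}\circ x)<t\}\right)=t^{-Q}\mu(B(x,t))=m(B(0,1))\frac{\mu(B(x,t))}{m(B(x,t))},
\end{equation*}
so $M_{\phi}^0\mu$ and $M_{HL}(\mu)$ agree up to the constant factor $m(B(0,1))$.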
In the following, we shall prove a lemma regarding pointwise comparison between the centred Hardy-Littlewood maximal function and other maximal functions introduced above. We then use it to prove the corresponding result for heat maximal functions.
\begin{lem}\label{maximal}
Let $\phi:G\to(0,\infty)$ be an $\mathcal{L}$-radial, $\mathcal{L}$-radially decreasing (see (\ref{lradial}), (\ref{ldec})) and integrable function. If $\mu$ is a nonnegative measure on $G$ and $\alpha>0$, then there exist positive constants $c_{\alpha,\phi}$ and $c_{\phi}$ such that
\begin{equation*}
c_{\phi}M_{HL}(\mu)(x_0)\leq M_{\phi}^0\mu(x_0)\leq M_{\phi}^{\alpha}\mu(x_0)\leq c_{\alpha,\phi}M_{HL}(\mu)(x_0),
\end{equation*}
for all $x_0\in G$. The constants $c_{\phi}$ and $c_{\alpha,\phi}$ are independent of $x_0$.
\end{lem}
\begin{proof}
We have already observed that the second inequality is obvious. To prove the left-most inequality we take $t>0$ and note that
\begin{eqnarray*}
\mu\ast\phi_t(x_0)&\geq&\int_{B(x_0,t)}\phi_t(\xi^{-1}\circ x_0)\:d\mu(\xi)\\
&=&t^{-Q}\int_{B(x_0,t)}\phi\left(\delta_{\frac{1}{t}}(\xi^{-1}\circ x_0)\right)\:d\mu(\xi)\\
&\geq&t^{-Q}\int_{B(x_0,t)}\phi(1)\:d\mu(\xi)\:\:\:\left(\text{as}\:\:d_{\mathcal{L}}\left(\delta_{\frac{1}{t}}(\xi^{-1}\circ x_0)\right)<1\right)\\
&=&\phi(1)m(B(0,1))\frac{\mu(B(x_0,t))}{m(B(x_0,t))}.
\end{eqnarray*}
Taking supremum over $t$ on both sides we get
\begin{equation}\label{phiradial}
c_{\phi}M_{HL}(\mu)(x_0)\leq M_{\phi}^0\mu(x_0),
\end{equation}
where $c_{\phi}=\phi(1)m(B(0,1))$. To prove the right-most inequality, we take $(\xi,t)\in G\times(0,\infty)$ such that $d_{\mathcal{L}}(x_0,\xi)<\alpha t$. Then,
\begin{eqnarray}
\nonumber\mu\ast\phi_t(\xi)&=&\int_{G}\phi_t(x^{-1}\circ \xi)\:d\mu(x)\\\nonumber
&=&t^{-Q}\int_{\{x:d_{\mathcal{L}}(x^{-1}\circ \xi)<\alpha t\}}\phi\left(\delta_{\frac{1}{t}}(x^{-1}\circ\xi)\right)\:d\mu(x)\\\nonumber
&&\:\:\:\:\:+t^{-Q}\sum_{j=1}^{\infty}\int_{\{x:2^{j-1}\alpha t\leq d_{\mathcal{L}}(x^{-1}\circ \xi)<2^j\alpha t\}}\phi\left(\delta_{\frac{1}{t}}(x^{-1}\circ\xi)\right)\:d\mu(x)\\\nonumber
&\leq&\phi(0)t^{-Q}\mu(B(\xi,\alpha t))+t^{-Q}\sum_{j=1}^{\infty}\int_{\{x:d_{\mathcal{L}}(x^{-1}\circ \xi)<2^j\alpha t\}}\phi(2^{j-1}\alpha)\:d\mu(x)\\\label{ntmax}
&=&\phi(0)t^{-Q}\mu(B(\xi,\alpha t))+t^{-Q}\sum_{j=1}^{\infty}\phi(2^{j-1}\alpha)\mu(B(\xi,2^j\alpha t)).
\end{eqnarray}
By the triangle inequality,
\begin{equation*}
d_{\mathcal{L}}(x,x_0)\leq C_{\mathcal{L}}\left(d_{\mathcal{L}}(\xi,x)+d_{\mathcal{L}}(x_0,\xi)\right)\leq C_{\mathcal{L}}(\alpha t+\alpha t)=2C_{\mathcal{L}}\alpha t,
\end{equation*}
whenever $x\in B(\xi,\alpha t)$. Consequently, $B(\xi,\alpha t)\subset B(x_0,2C_{\mathcal{L}}\alpha t)$ and hence
\begin{equation}\label{mumonotone1}
\mu(B(\xi,\alpha t))\leq \mu(B(x_0,2C_{\mathcal{L}}\alpha t)).
\end{equation}
Similarly,
\begin{equation}\label{mumonotone2}
\mu(B(\xi,2^j\alpha t))\leq \mu(B(x_0,C_{\mathcal{L}}(2^j+1)\alpha t)).
\end{equation}
We now use the formula for integration in "polar coordinates" given in (\ref{polarcordinate}) to get
\begin{equation}\label{polar}
\int_{G}\phi(x)\:dm(x)=\sigma(S)\int_{0}^{\infty}\phi(r)r^{Q-1}\:dr\:\:\:\:\:\:(\text{as $\phi$ is $\mathcal{L}$-radial}).
\end{equation}
But $\phi$ is $\mathcal{L}$-radially decreasing and nonnegative. Therefore,
\begin{eqnarray*}
\int_{\alpha}^{\infty}\phi(r)r^{Q-1}\:dr&=&\sum_{j=1}^{\infty}\int_{2^{j-1}\alpha}^{2^j\alpha}\phi(r)r^{Q-1}\:dr\geq\sum_{j=1}^{\infty}\phi(2^j\alpha)\int_{2^{j-1}\alpha}^{2^j\alpha}r^{Q-1}\:dr\\
&=&\frac{\alpha^Q}{Q}\sum_{j=1}^{\infty}\phi(2^j\alpha)(2^{jQ}-2^{(j-1)Q})=\frac{(2^Q-1)\alpha^Q}{2^{2Q}Q}\sum_{j=1}^{\infty}\phi(2^j\alpha)2^{(j+1)Q}.
\end{eqnarray*}
Equation (\ref{polar}) and integrability of $\phi$ now imply that
\begin{equation}\label{series}
\sum_{j=1}^{\infty}\phi(2^j\alpha)2^{(j+1)Q}<\infty.
\end{equation}
Let us get back to the inequality (\ref{ntmax}). We get the following by making use of (\ref{mumonotone1}) and (\ref{mumonotone2}) in (\ref{ntmax}).
\begin{eqnarray}
\nonumber\mu\ast\phi_t(\xi)&\leq&\phi(0)t^{-Q}\mu(B(x_0,2C_{\mathcal{L}}\alpha t))+t^{-Q}\sum_{j=1}^{\infty}\phi(2^{j-1}\alpha)\mu(B(x_0,(2^j+1)C_{\mathcal{L}}\alpha t))\\\nonumber&=&\phi(0)(2C_{\mathcal{L}}\alpha)^Qm(B(0,1))\frac{\mu(B(x_0,2C_{\mathcal{L}}\alpha t))}{m(B(x_0,2C_{\mathcal{L}}\alpha t))}\\
\nonumber&&\:\:\:\:\:\:+\sum_{j=1}^{\infty}\phi(2^{j-1}\alpha)m(B(0,1))(C_{\mathcal{L}}\alpha)^Q(2^j+1)^Q\frac{\mu(B(x_0,(2^j+1)C_{\mathcal{L}}\alpha t))}{m(B(x_0,(2^j+1)C_{\mathcal{L}}\alpha t))}\\
\nonumber&\leq&m(B(0,1))(C_{\mathcal{L}}\alpha)^Q\left(2^Q\phi(0)+\sum_{j=1}^{\infty}\phi(2^{j-1}\alpha)2^{(j+1)Q}\right)M_{HL}(\mu)(x_0).
\end{eqnarray}
In view of (\ref{series}), we see that the series inside the bracket above is finite and so we can write
\begin{equation*}
\mu\ast\phi_t(\xi)\leq c_{\alpha,\phi}M_{HL}(\mu)(x_0),\:\:\:\text{where}
\end{equation*}
\begin{equation*}
c_{\alpha,\phi}=m(B(0,1))(C_{\mathcal{L}}\alpha)^Q\left(2^Q\phi(0)+\sum_{j=1}^{\infty}\phi(2^{j-1}\alpha)2^{(j+1)Q}\right)<\infty.
\end{equation*}
\end{equation*}
Taking supremum over all $(\xi,t)\in G\times(0,\infty)$ with $d_{\mathcal{L}}(x_0,\xi)<\alpha t$, we obtain
\begin{equation}\label{nontangentialmax}
M_{\phi}^{\alpha}\mu(x_0)\leq c_{\alpha,\phi}M_{HL}(\mu)(x_0).
\end{equation}
\end{proof}
\begin{thm}\label{heatmaximal}
Let $\mu\in M$ be a nonnegative measure and let $x_0\in G$. Then for each $\alpha>0$, there exist positive constants $c_n$ and $c_{\alpha}$ (independent of $x_0$) such that
\begin{equation}\label{heatmaxineq}
c_nM_{HL}(\mu)(x_0)\leq\sup_{t>0}\Gamma\mu(x_0,t^2)\leq\sup_{(x,t)\in \texttt{P}(x_0,\alpha)}\Gamma\mu(x,t)\leq c_{\alpha}M_{HL}(\mu)(x_0).
\end{equation}
\end{thm}
\begin{proof}
The second inequality is trivial as $\{(x_0,t^2):t>0\}\subset\texttt{P}(x_0,\beta)$ for every $\beta>0$. To prove the first inequality, we take
\begin{equation*}
\phi(x)=c_0^{-1}\exp\left(-c_0d_{\mathcal{L}}(x)^2\right),\:\:\:x\in G.
\end{equation*}
Clearly, $\phi$ satisfies the hypothesis of Lemma \ref{maximal}. By the first part of the Gaussian estimate (\ref{gaussian}), we have
\begin{equation*}
\mu\ast\phi_t(x)\leq\Gamma\mu(x,t^2),\:\:\:\text{for all}\:(x,t)\in G\times(0,\infty).
\end{equation*}
Applying first inequality of Lemma \ref{maximal}, we obtain
\begin{equation*}
c_nM_{HL}(\mu)(x_0)\leq\sup_{t>0}\mu\ast\phi_t(x_0)\leq\sup_{t>0}\Gamma\mu(x_0,t^2),
\end{equation*}
for some positive constant $c_n$, independent of $x_0$. On the other hand, we consider
\begin{equation*}
\psi(x)=c_0\exp\left(-\frac{d_{\mathcal{L}}(x)^2}{c_0}\right),\:\:\:x\in G.
\end{equation*}
It is obvious that $\psi$ satisfies the hypothesis of Lemma \ref{maximal}. Moreover, by the last part of the Gaussian estimate (\ref{gaussian}), we have
\begin{equation}\label{max1}
\Gamma\mu(x,t)\leq\mu\ast\psi_{\sqrt{t}}(x),\:\:\:\text{for all}\:(x,t)\in G\times(0,\infty).
\end{equation}
But the last inequality of Lemma \ref{maximal} gives us
\begin{equation*}
\sup_{\substack{(\xi,t)\in G\times(0,\infty)\\d_{\mathcal{L}}(x_0,\xi)<\alpha \sqrt{t}}}\mu\ast\psi_{\sqrt{t}}(\xi)\leq c_{\alpha}M_{HL}(\mu)(x_0),
\end{equation*}
for some positive constant $c_{\alpha}$, independent of $x_0$. Using (\ref{max1}) and recalling the definition of $\texttt{P}(x_0,\alpha)$ (see Definition \ref{impdefnc}, i)), we note that
\begin{equation*}
\sup_{(\xi,t)\in \texttt{P}(x_0,\alpha)}\Gamma\mu(\xi,t)=\sup_{\substack{(\xi,t)\in G\times(0,\infty)\\d_{\mathcal{L}}(x_0,\xi)<\alpha \sqrt{t}}}\Gamma\mu(\xi,t)\leq\sup_{\substack{(\xi,t)\in G\times(0,\infty)\\d_{\mathcal{L}}(x_0,\xi)<\alpha \sqrt{t}}}\mu\ast\psi_{\sqrt{t}}(\xi)
\end{equation*}
Hence,
\begin{equation*}
\sup_{(\xi,t)\in \texttt{P}(x_0,\alpha)}\Gamma\mu(\xi,t)\leq c_{\alpha}M_{HL}(\mu)(x_0).
\end{equation*} This completes the proof.
\end{proof}
To prove our main result we will also need an analogue of Montel's theorem for solutions of the heat equation (\ref{heateq}). We have already observed that the heat operator $\mathcal{H}=\frac{\partial}{\partial t}-\mathcal{L}$ is hypoelliptic on $G\times(0,\infty)$. Using this hypoellipticity, one can get a Montel-type result for solutions of the heat equation (\ref{heateq}) from a very general theorem proved in \cite[Theorem 4]{B}.
\begin{lem}\label{montelc}
Let $\{u_j\}$ be a sequence of solutions of the heat equation (\ref{heateq}) on $G\times(0,\infty)$. If $\{u_j\}$ is locally bounded then it has a subsequence which converges normally to a function $v$, defined on $G\times(0,\infty)$, which is also a solution of the heat equation (\ref{heateq}).
\end{lem}
We have already mentioned in the introduction that the positive solutions of the classical heat equation on the Euclidean upper half-space $\mathbb R^{d+1}_+$ are given by convolutions of positive measures with the Euclidean heat kernel. For the heat equation on stratified Lie groups, we have a similar representation formula.
\begin{lem}\label{existence}(\cite[Lemma 2.3]{BU})
Let $u$ be a nonnegative solution of the heat equation $\mathcal{H}u=0$ in the strip $G\times(0,T)$. Then, there exists a unique positive measure $\mu$ on $G$ such that
\begin{equation*}
u(x,t)=\int_G\Gamma(\xi^{-1}\circ x,t)\:d\mu(\xi),\:\:\:(x,t)\in G\times(0,T).
\end{equation*}
In this case we say that $\mu$ is the boundary measure of $u$.
\end{lem}
Bonfiglioli-Uguzzoni proved the above lemma under the implicit assumption that $T\in(0,\infty)$, but the same proof works for the case $T=\infty$. Results of this type are known as Widder-type representation formulas in the literature.
Given a function $F$ on $G\times(0,\infty)$ and $r>0$, we define the parabolic dilation of $F$ as
\begin{equation}\label{dilatedfc}
F_r(x,t)=F(\delta_r(x),r^2t),\:(x,t)\in G\times(0,\infty).
\end{equation}
\begin{rem}\label{dilatefc}
The notion of parabolic dilation is crucial for us primarily because of the following reasons.
\begin{enumerate}
\item[i)] If $F$ is a solution of the heat equation then so is $F_r$
for every $r>0$. Indeed, $\mathcal{L}$ is homogeneous of degree two with respect to the dilations $\{\delta_s\}_{s>0}$. This implies that
\begin{equation*}
\left(\mathcal{L}-\frac{\partial}{\partial t}\right)F(\delta_r(x),r^2t)=r^2\mathcal{L}F(\delta_r(x),r^2t)-r^2\frac{\partial}{\partial t}F(\delta_r(x),r^2t)=0.
\end{equation*}
\item[ii)] $(x,t)\in \texttt{P}(0,\alpha)$ if and only if $(\delta_r(x),r^2t)\in \texttt{P}(0,\alpha)$ for every $r>0$.
\end{enumerate}
\end{rem}
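Both parts of the remark come down to homogeneity. For instance, for ii), since $d_{\mathcal{L}}$ is homogeneous of degree one with respect to $\{\delta_s\}_{s>0}$, we have for every $r>0$
\begin{equation*}
d_{\mathcal{L}}(\delta_r(x))<\alpha\sqrt{r^2t}\iff r\,d_{\mathcal{L}}(x)<\alpha r\sqrt{t}\iff d_{\mathcal{L}}(x)<\alpha\sqrt{t}.
\end{equation*}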
Given $\nu\in M$ and $r>0$, we also define the dilate $\nu_r$ of $\nu$ by
\begin{equation}\label{dilatemc}
\nu_r(E)=r^{-Q}\nu\left(\delta_r(E)\right),
\end{equation}
for every Borel set $E\subset G$. We now prove a simple lemma involving the above dilates which will be used in the proof of our main result.
\begin{lem}\label{dilatec}
If $\nu\in M$, then for every $r>0$, $\Gamma(\nu_r)=(\Gamma\nu)_r$, that is, for all $(x,t)\in G\times(0,\infty)$,
\begin{equation*}
\Gamma(\nu_r)(x,t)=\Gamma\nu\left(\delta_r(x),r^2t\right).
\end{equation*}
\end{lem}
\begin{proof}
For $E\subset G$ a Borel set, using (\ref{dilatemc}) it follows that
\begin{equation*}
\int_{G}\chi_E\:d\nu_r=r^{-Q}\nu\left(\delta_r(E)\right)=r^{-Q}\int_{G}\chi_{\delta_r(E)}(x)\:d\nu(x)=r^{-Q}\int_{G}\chi_{E}\left(\delta_{r^{-1}}(x)\right)\:d\nu(x).
\end{equation*}
Hence, for all nonnegative measurable functions $f$ we have
\begin{equation*}
\int_{G}f(x)\:d\nu_r(x)=r^{-Q}\int_{G}f\left(\delta_{r^{-1}}(x)\right)\:d\nu(x).
\end{equation*}
It now follows from the above relation and from Theorem \ref{fundamental}, (iii) that for all $(x,t)\in G\times(0,\infty)$,
\begin{eqnarray*}
\Gamma(\nu_r)(x,t)&=&\int_G\Gamma(\xi^{-1}\circ x,t)\:d\nu_r(\xi)\\
&=&r^{-Q}\int_G\Gamma\left(\left(\delta_{r^{-1}}(\xi)\right)^{-1}\circ x,t\right)\:d\nu(\xi)\\
&=&r^{-Q}\int_G\Gamma\left(\delta_{r^{-1}}\left(\xi^{-1}\circ\delta_r(x)\right),r^{-2}r^2t\right)\:d\nu(\xi)\\&=&r^{-Q}r^Q\int_G\Gamma(\xi^{-1}\circ \delta_r(x),r^2t)\:d\nu(\xi)\\&=&(\Gamma\nu)_r(x,t).
\end{eqnarray*}
\end{proof}
\section{Main theorem}
We shall first prove a special case of our main result. The proof of the main result will follow by reducing matters to this special case.
\begin{thm}\label{specialthc}
Suppose $u$ is a nonnegative solution of the heat equation (\ref{heateq}), that is,
\begin{equation*}
\mathcal{H}u(x,t)=0,\:(x,t)\in G\times(0,\infty),
\end{equation*}
and $L\in[0,\infty)$. If a finite measure $\mu$ is the boundary measure of $u$ then the following statements are equivalent.
\begin{enumerate}
\item[(i)] $u$ has parabolic limit $L$ at $0$.
\item[(ii)]$\mu$ has strong derivative $L$ at $0$.
\end{enumerate}
\end{thm}
\begin{proof}
We first prove that (i) implies (ii). We choose a $d_{\mathcal{L}}$-ball $B_0\subset G$, a sequence of positive numbers $\{r_j\}$ converging to zero and consider the quotient
\begin{equation*}
L_j=\frac{\mu\left(\delta_{r_j}(B_0)\right)}{m\left(\delta_{r_j}(B_0)\right)}.
\end{equation*}
Assuming (i), we will prove that $\{L_j\}$ is a bounded sequence and every convergent subsequence of $\{L_j\}$ converges to $L$. We first choose a positive real number $s$ such that $B_0$ is contained in the $d_{\mathcal{L}}$-ball $B(0,s)$. Then for all $j\in\mathbb N$,
\begin{equation}\label{lj}
L_j\leq \frac{\mu\left(\delta_{r_j}(B(0,s))\right)}{m\left(\delta_{r_j}(B_0)\right)}=\frac{\mu\left(\delta_{r_j}(B(0,s))\right)}{m\left(\delta_{r_j}(B(0,s))\right)}\times\frac{m(B(0,s))}{m(B_0)}\leq \frac{m(B(0,s))}{m(B_0)}M_{HL}(\mu)(0).
\end{equation}
Since $\mu$ is the boundary measure for $u$ we have that
\begin{equation*}
u(x,t)=\Gamma\mu (x,t),\:\:\:\:\:\text{for all $(x,t)\in G\times(0,\infty)$.}
\end{equation*}
By hypothesis, $u(0,t^2)$ converges to $L$ as $t$ tends to zero which implies, in particular, that there exists a positive number $\beta$ such that
\begin{equation*}
\sup_{t<\beta}\Gamma\mu(0,t^2)<\infty.
\end{equation*}
Since $\mu$ is a finite measure, using (\ref{gaussian}) we also have
\begin{equation*}
\Gamma\mu(0,t^2)\leq c_0t^{-Q}\int_{G}\exp\left(-\frac{d_{\mathcal{L}}(x)^2}{c_0t^2}\right)\:d\mu(x)\leq c_0t^{-Q}\mu(G)\leq c_0\beta^{-Q}\mu(G),
\end{equation*}
for all $t\geq\beta$ and hence
\begin{equation*}
\sup_{0< t<\infty}\Gamma\mu(0,t^2)<\infty.
\end{equation*}
Inequality (\ref{heatmaxineq}) now implies that $M_{HL}(\mu)(0)$ is finite. Boundedness of the sequence $\{L_j\}$ is now a consequence of the inequality (\ref{lj}). We choose a convergent subsequence of $\{L_j\}$ and denote it also, for the sake of simplicity, by $\{L_j\}$. For $j\in\mathbb N$ we define
\begin{equation*}
u_j(x,t)=u\left(\delta_{r_j}(x),r_j^2t\right),\:\:\:\:(x,t)\in G\times(0,\infty).
\end{equation*}
Then by Remark \ref{dilatefc}, i), $\{u_j\}$ is a sequence of solutions of the heat equation in $G\times(0,\infty)$. We claim that $\{u_j\}$ is locally bounded. To see this, we choose a compact set $K\subset G\times(0,\infty)$. Then there exists a positive number $\alpha$ such that $K$ is contained in the parabolic region $\texttt{P}(0,\alpha)$. Indeed, we consider the map
\begin{equation*}
(x,t)\mapsto\frac{d_{\mathcal{L}}(x)}{\sqrt{t}},\:\:\:(x,t)\in G\times(0,\infty),
\end{equation*}
Since $d_{\mathcal{L}}$ is continuous on $G$, this map is also continuous. As $K$ is compact, the image of $K$ under this map is bounded and hence there exists a positive real number $\alpha$ such that
\begin{equation*}
\frac{d_{\mathcal{L}}(x)}{\sqrt{t}}<\alpha,\:\:\:\:\text{for all $(x,t)\in K$,}
\end{equation*}
that is, $K\subset\texttt{P}(0,\alpha)$.
Using the invariance of $\texttt{P}(0,\alpha)$ under the parabolic dilation (see Remark \ref{dilatefc}, ii)) and (\ref{heatmaxineq}), it follows that for all $j\in\mathbb N$
\begin{equation*}
\sup_{(x,t)\in \texttt{P}(0,\alpha)}u_j(x,t)\leq \sup_{(x,t)\in \texttt{P}(0,\alpha)}u(x,t)\leq c_{\alpha}M_{HL}(\mu)(0).
\end{equation*}
Hence, $\{u_j\}$ is locally bounded. Lemma \ref{montelc} now guarantees the existence of a subsequence $\{u_{j_k}\}$ of $\{u_j\}$ which converges normally to a nonnegative solution $v$ of the heat equation in $G\times(0,\infty)$. We claim that for all $(x,t)\in G\times(0,\infty)$
\begin{equation}\label{vlimitmain}
v(x,t)=L.
\end{equation}
To see this, we take $(x_0,t_0)\in G\times(0,\infty)$ and choose $\eta>0$ such that $(x_0,t_0)\in\texttt{P}(0,\eta)$. Since $\{r_{j_k}\}$ converges to zero as $k$ goes to infinity and $u(x,t)$ has limit $L$, as $(x,t)$ tends to $(0,0)$ within $\texttt{P}(0,\eta)$,
\begin{equation*}
v(x_0,t_0)=\lim_{k\to\infty}u_{j_k}(x_0,t_0)=\lim_{k\to\infty}u\left(\delta_{r_{j_k}}(x_0),r_{j_k}^2t_0\right)=L,
\end{equation*}
as $(\delta_{r_{j_k}}(x_0),r_{j_k}^2t_0)\in \texttt{P}(0,\eta)$ for all $k\in\mathbb N$. This settles the claim. On the other hand, by Lemma \ref{dilatec}, we have for all $(x,t)\in G\times(0,\infty)$
\begin{equation}\label{udilatec}
u_{j_k}(x,t)=u\left(\delta_{r_{j_k}}(x),r_{j_k}^2t\right)=\Gamma\mu\left(\delta_{r_{j_k}}(x),r_{j_k}^2t\right)=\Gamma\left(\mu_{r_{j_k}}\right)(x,t).
\end{equation}
It follows from (\ref{vlimitmain}) and (\ref{udilatec}) that $\{\Gamma(\mu_{r_{j_k}})\}$ converges normally to $L\equiv\Gamma(Lm)$. Then, Lemma \ref{normalc} implies that the sequence of measures $\{\mu_{r_{j_k}}\}$ converges to $Lm$ in the weak* topology, and hence by Lemma \ref{mthc}, $\{\mu_{r_{j_k}}(B)\}$ converges to $Lm(B)$ for every $d_{\mathcal{L}}$-ball $B\subset G$.
Using this for $B=B_0$, we get that
\begin{equation*}
Lm(B_0)=\lim_{k\to \infty}\mu_{r_{j_k}}(B_0)=\lim_{k\to\infty}{r_{j_k}}^{-Q}\mu\left(\delta_{{r_{j_k}}}(B_0)\right)=m(B_0)\lim_{k\to\infty}\frac{\mu\left(\delta_{{r_{j_k}}}(B_0)\right)}{m\left(\delta_{{r_{j_k}}}(B_0)\right)}.
\end{equation*}
This implies that the subsequence $\{L_{j_{k}}\}$ converges to $L$ and hence so does $\{L_j\}$, as $\{L_j\}$ is convergent. Thus, every convergent subsequence of the bounded sequence $\{L_j\}$ converges to $L$. This implies that $\{L_j\}$ itself converges to $L$. Since $B_0$ and $\{r_j\}$ are arbitrary, $\mu$ has strong derivative $L$ at $0$.
Now, we prove (ii) implies (i). We suppose that the strong derivative of $\mu$ at zero is equal to $L$ but the parabolic limit of $u$ at zero is not equal to $L$. Then there exist a positive number $\alpha$ and a sequence $\{(x_j,t_j^2)\}\subset \texttt{P}(0,\alpha)$ with $(x_j,t_j^2)$ converging to $(0,0)$ such that $\{u(x_j,t_j^2)\}$ fails to converge to $L$. Since $D\mu(0)$ exists finitely (in fact, equal to $L$), it follows, in particular, that
\begin{equation*}
\sup_{0<r<\delta}\frac{\mu(B(0,r))}{m(B(0,r))}<L+1,
\end{equation*}
for some $\delta>0$. Finiteness of the measure $\mu$ implies that
\begin{equation*}
\frac{\mu(B(0,r))}{m(B(0,r))}\leq\frac{\mu(G)}{m(B(0,1))r^Q}\leq\frac{\mu(G)}{m(B(0,1))\delta^Q},
\end{equation*}
for all $r\geq\delta$. The above two inequalities together with (\ref{heatmaxineq}) shows that
\begin{equation*}
\sup_{(x,t)\in \texttt{P}(0,\alpha)}u(x,t)\leq c_{\alpha}M_{HL}(\mu)(0)<\infty.
\end{equation*}
In particular, $\{u(x_j,t_j^2)\}$ is a bounded sequence. We now consider a convergent subsequence, denoted also, for the sake of simplicity, by $\{u(x_j,t_j^2)\}$, such that
\begin{equation}\label{limitLc}
\lim_{j\to\infty} u(x_j,t_j^2)=L'.
\end{equation}
We will prove that $L'$ is equal to $L$. Using the sequence $\{t_j\}$ we consider the dilates
\begin{equation*}
u_j(x,t)=u\left(\delta_{t_j}(x),t_j^2t\right),\:\:\:\:(x,t)\in G\times(0,\infty).
\end{equation*}
The arguments used in the first part of the proof show that $\{u_j\}$ is a locally bounded sequence of nonnegative solutions of the heat equation in $G\times(0,\infty)$. Hence, by Lemma \ref{montelc}, there exists a subsequence $\{u_{j_k}\}$ of $\{u_j\}$ which converges normally to a nonnegative solution $v$ of the heat equation in $G\times(0,\infty)$. Therefore, Lemma \ref{existence} shows that there exists $\nu\in M$ such that $v$ equals $\Gamma\nu$. We now consider the sequence of dilates $\{\mu_k\}$ of $\mu$ by $\{t_{j_k}\}$ according to (\ref{dilatemc}). An application of Lemma \ref{dilatec} then implies that $\Gamma\mu_k=u_{j_k}$. It follows that the sequence of functions $\{\Gamma\mu_k\}$ converges normally to $\Gamma\nu$. By Lemma \ref{normalc}, we thus obtain weak* convergence of $\{\mu_k\}$ to $\nu$.
Since $D\mu(0)=L$, it follows that for any $d_{\mathcal{L}}$-ball $B\subset G$,
\begin{equation*}
\lim_{k\to\infty}\mu_k(B)=\lim_{k\to\infty}{t_{j_k}}^{-Q}\mu(\delta_{{t_{j_k}}}(B))=\lim_{k\to\infty}\frac{\mu(\delta_{{t_{j_k}}}(B))}{m(\delta_{{t_{j_k}}}(B))}m(B)=Lm(B).
\end{equation*}
Hence by Lemma \ref{mthc}, $\nu=Lm$. As $v=\Gamma\nu$, it follows that
\begin{equation*}
v(x,t)=L,\:\:\:\:\text{for all $(x,t)\in G\times(0,\infty)$}.
\end{equation*}
This, in turn, implies that $\{u_{j_k}\}$ converges to the constant function $L$ normally in $G\times(0,\infty)$. On the other hand, we note that
\begin{equation*}
u(x_{j_k},t_{j_k}^2)=u\left(\delta_{t_{j_k}}\left(\delta_{{t^{-1}_{j_k}}}(x_{j_k})\right),t_{j_k}^2\right)=u_{j_{k}}\left(\delta_{{t^{-1}_{j_k}}}(x_{j_k}),1\right).
\end{equation*}
Since $(x_{j_k},t_{j_k}^2)$ belongs to the parabolic region $\texttt{P}(0,\alpha)$, for all $k\in\mathbb N$, it follows that
\begin{equation*}
\left(\delta_{{t^{-1}_{j_k}}}(x_{j_k}),1\right)\in\overline B(0,\alpha)\times\{1\},
\end{equation*}
which is a compact subset of $G\times(0,\infty)$. Therefore,
\begin{equation*}
\lim_{k\to\infty}u(x_{j_k},t_{j_k}^2)=L.
\end{equation*}
In view of (\ref{limitLc}) we can thus conclude that $L'$ equals $L$. So, every convergent subsequence of the original sequence $\{u(x_j,t_j^2)\}$ converges to $L$. This contradicts our assumption that $\{u(x_j,t_j^2)\}$ fails to converge to $L$. This completes the proof.
\end{proof}
Now, we are in a position to state and prove our main result.
\begin{thm}\label{mainc}
Suppose $u$ is a nonnegative solution of the heat equation $\mathcal{H}u=0$ in $G\times(0,T)$, for some $0<T\leq\infty$ and suppose $x_0\in G$, $L\in[0,\infty)$. If $\mu$ is the boundary measure of $u$ then the following statements are equivalent.
\begin{enumerate}
\item[(i)] $u$ has parabolic limit $L$ at $x_0$.
\item[(ii)]$\mu$ has strong derivative $L$ at $x_0$.
\end{enumerate}
\end{thm}
\begin{proof}
We consider the translated measure $\mu_0=\tau_{x_0}\mu$, where
\begin{equation*}
\tau_{x_0}\mu (E)=\mu(x_0\circ E),
\end{equation*}
for all Borel subsets $E\subset G$. Using translation invariance of the Lebesgue measure $m$, it follows from the definition of strong derivative that $D\mu_0(0)$ and $D\mu(x_0)$ are equal. Since $\Gamma\mu_0$ is given by the convolution of $\mu_0$ with $\gamma_{\sqrt{t}}$ and translation commutes with convolution, it follows that
\begin{equation}\label{trans}
\Gamma\mu_0(x,t)= (\gamma_{\sqrt{t}}\ast \tau_{x_0}\mu )(x)=\tau_{x_0}(\gamma_{\sqrt{t}}\ast\mu )(x)=\Gamma\mu(x_0\circ x,t).
\end{equation}
We fix an arbitrary positive number $\alpha$. As $(x,t)\in \texttt{P}(0,\alpha)$ if and only if $(x_0\circ x,t)\in \texttt{P}(x_0,\alpha)$, one infers from (\ref{trans}) that
\begin{equation*}
\lim_{\substack{(x,t)\to(0,0)\\(x,t)\in \texttt{P}(0,\alpha)}}\Gamma\mu_0(x,t)=\lim_{\substack{(\xi,t)\to(x_0,0)\\(\xi,t)\in \texttt{P}(x_0,\alpha)}}\Gamma\mu(\xi,t).
\end{equation*}
Hence, it suffices to prove the theorem under the assumption that $x_0=0$. We now show that we can even take $\mu$ to be a finite measure. Let $\tilde{\mu}$ be the restriction of $\mu$ on the $d_{\mathcal{L}}$-ball $B(0,C_{\mathcal{L}}^{-1})$. Suppose $B(y,s)$ is any given $d_{\mathcal{L}}$-ball. Then for all $0<r<\left(C_{\mathcal{L}}^2(s+d_{\mathcal{L}}(y))\right)^{-1}$, it follows that whenever $\xi\in\delta_r(B(y,s))=B(\delta_r(y),rs)$, we have
\begin{equation*}
d_{\mathcal{L}}(0,\xi)\leq C_{\mathcal{L}}\left(d_{\mathcal{L}}(0,\delta_r(y))+d_{\mathcal{L}}(\delta_r(y),\xi)\right)\leq C_{\mathcal{L}}\left(rd_{\mathcal{L}}(y)+rs\right)<C_{\mathcal{L}}^{-1}.
\end{equation*}
In other words, $\delta_r(B(y,s))$ is a subset of $B(0,C_{\mathcal{L}}^{-1})$. This in turn implies that $D\mu(0)$ and $D\tilde{\mu}(0)$ are equal. We now claim that
\begin{equation}\label{finaleqc}
\lim_{\substack{(x,t)\to(0,0)\\(x,t)\in \texttt{P}(0,\alpha)}}\Gamma\mu(x,t)=\lim_{\substack{(x,t)\to(0,0)\\(x,t)\in \texttt{P}(0,\alpha)}}\Gamma\tilde{\mu}(x,t).
\end{equation}
In this regard, we first observe that
\begin{equation*}
\lim_{t\to 0}\int_{G\setminus B(0,C_{\mathcal{L}}^{-1})}\Gamma(\xi^{-1}\circ x,t)\:d\mu(\xi)=0,
\end{equation*}uniformly for $x\in B(0,1/(2C_{\mathcal{L}}^2))$. Indeed, for $x\in B(0,1/(2C_{\mathcal{L}}^2))$ and $\xi\in G\setminus B(0,C_{\mathcal{L}}^{-1})$, we have
\begin{equation*}
d_{\mathcal{L}}(\xi^{-1}\circ x)\geq\frac{1}{C_{\mathcal{L}}}d_{\mathcal{L}}(\xi)-d_{\mathcal{L}}(x)\geq\frac{d_{\mathcal{L}}(\xi)}{C_{\mathcal{L}}}-\frac{d_{\mathcal{L}}(\xi)}{2C_{\mathcal{L}}}=\frac{d_{\mathcal{L}}(\xi)}{2C_{\mathcal{L}}}\geq\frac{1}{2C_{\mathcal{L}}^2}.
\end{equation*}
Using the Gaussian estimate (\ref{gaussian}) and the inequality above, we get
\begin{eqnarray*}
&&\int_{G\setminus B(0,C_{\mathcal{L}}^{-1})}\Gamma(\xi^{-1}\circ x,t)\:d\mu(\xi)\\
&\leq& c_0t^{-\frac{Q}{2}}\int_{G\setminus B(0,C_{\mathcal{L}}^{-1})}\exp\left(-\frac{d_{\mathcal{L}}(\xi^{-1}\circ x)^2}{c_0t}\right)\:d\mu(\xi)\\
&\leq&c_0t^{-\frac{Q}{2}}\int_{G\setminus B(0,C_{\mathcal{L}}^{-1})}\exp\left(-\frac{d_{\mathcal{L}}(\xi)^2}{4c_0C_{\mathcal{L}}^2t}\right)\:d\mu(\xi)\\
&\leq& c_0t^{-\frac{Q}{2}}\exp\left(-\frac{1}{8c_0C_{\mathcal{L}}^4t}\right)\int_{G\setminus B(0,C_{\mathcal{L}}^{-1})}\exp\left(-\frac{d_{\mathcal{L}}(\xi)^2}{8c_0C_{\mathcal{L}}^2t}\right)\:d\mu(\xi)\\
&\leq& c_0t^{-\frac{Q}{2}}\exp\left(-\frac{1}{8c_0C_{\mathcal{L}}^4t}\right)\int_{G\setminus B(0,C_{\mathcal{L}}^{-1})}\exp\left(-\frac{2c_0d_{\mathcal{L}}(\xi)^2}{t_0}\right)\:d\mu(\xi),
\end{eqnarray*}
for all $t<(16c_0^2C_{\mathcal{L}}^2)^{-1}t_0$, where $t_0$ is a fixed positive number less than $T$. As $\Gamma\mu(0,t_0/2)$ exists, the Gaussian estimate (\ref{gaussian}) implies that the integral on the right-hand side in the last inequality is finite. Hence, letting $t$ go to zero on the right-hand side in the last inequality, the observation follows. Now,
\begin{eqnarray*}
\Gamma\mu(x,t)&=&\int_{B(0,C_{\mathcal{L}}^{-1})}\Gamma(\xi^{-1}\circ x,t)\:d\mu(\xi)+\int_{G\setminus B(0,C_{\mathcal{L}}^{-1})}\Gamma(\xi^{-1}\circ x,t)\:d\mu(\xi)\\
&=& \Gamma\tilde{\mu}(x,t)+ \int_{G\setminus B(0,C_{\mathcal{L}}^{-1})}\Gamma(\xi^{-1}\circ x,t)\:d\mu(\xi).
\end{eqnarray*}
Given any $\epsilon>0$, we get some $t_1\in(0,(16c_0^2C_{\mathcal{L}}^2)^{-1}t_0)$ such that for all $t\in (0,t_1)$, the integral on the right-hand side of the equality above is smaller than $\epsilon$ for all $x\in B(0,1/(2C_{\mathcal{L}}^2))$. On the other hand, if $(x,t)\in\texttt{P}(0,\alpha)$ and $t<1/(4\alpha^2C_{\mathcal{L}}^4)$, then $d_{\mathcal{L}}(x)<\alpha\sqrt{t}<1/(2C_{\mathcal{L}}^2)$, so that
\begin{equation*}
\texttt{P}(0,\alpha)\cap \{(x,t)\mid t\in (0,1/(4\alpha^2C_{\mathcal{L}}^4)) \}\subset B(0,1/(2C_{\mathcal{L}}^2))\times (0,1/(4\alpha^2C_{\mathcal{L}}^4)).
\end{equation*}
Hence, for all $(x,t)\in \texttt{P}(0,\alpha)$ with $t\in(0,\min\{t_1,1/(4\alpha^2C_{\mathcal{L}}^4)\})$ we have
\begin{equation*}
|\Gamma\mu (x,t)-\Gamma\tilde{\mu}(x,t)|<\epsilon.
\end{equation*}
This proves (\ref{finaleqc}). Therefore, as $\alpha>0$ is arbitrary, we may and do suppose that $\mu$ is a finite measure. Using this, without loss of generality, we may also assume $T=\infty$. The proof now follows from Theorem \ref{specialthc}.
\end{proof}
\begin{rem}
We have mentioned in the introduction that a result of Poon \cite[Theorem 1.2]{P}, regarding uniqueness of solutions of the heat equation in $\mathbb R^{n+1}_+$, plays an important role in the proof of Theorem \ref{thmparaeucl}. The above-cited result of Poon says, in particular, that a solution of the heat equation (\ref{euclideanheat}) in $\mathbb R^{n+1}_+$ cannot vanish on an open subset of $\mathbb R^{n+1}_+$ unless it is identically zero. We have used this result to prove Theorem \ref{thmparaeucl}, ii). It is not known to us whether an analogous result holds true for solutions of the heat equation (\ref{heateq}) on stratified Lie groups. However, if it turns out to be true that a nonzero solution of (\ref{heateq}) cannot vanish on any open set in $G\times(0,\infty)$, then one can actually prove a stronger version of Theorem \ref{mainc} in the sense that if there exists an $\eta>0$ such that
\begin{equation*}
\lim_{\substack{(x,t)\to(x_0,0)\\(x,t)\in \texttt{P}(x_0,\eta)}}u(x,t)=L,
\end{equation*}
then $\mu$ has strong derivative $L$ at $x_0$, where $u$ is a positive solution of the heat equation and $\mu$ is the boundary measure of $u$.
\end{rem}
\section*{Acknowledgements}
The author would like to thank Swagato K. Ray for many
useful discussions during the course of this work. The author is supported by a research fellowship from Indian Statistical Institute.
\section{Introduction}
This note is concerned with the problem of counting independent sets in hypergraphs.
We start with the basic definitions.
A \emph{hypergraph} $H=(W,\mathcal{E})$ is specified by a set of vertices $W$ and a set of hyperedges, where each hyperedge $e\in\mathcal{E}$ is a subset of $W$.
It is said to be $k$-\emph{uniform}, if each hyperedge contains exactly $k$ vertices.
The degree of a vertex is the number of hyperedges in which it appears,
and the \emph{degree} $\Delta$ of the hypergraph is the maximum degree of its vertices.
A set $I\subset W$ is a (weak) \emph{independent set} if $I\cap e\neq e$ holds for all $e\in\mathcal{E}$.
This problem is naturally parameterised by $k$ and $\Delta$.
Unlike many other problems, such as hypergraph colouring or $k$-SAT, where the existence of a solution under such parameterisation is captured by the celebrated Lov\'{a}sz local lemma \cite{EL75},
for hypergraph independent sets even the search problem is trivial: the empty set is always a solution.
However, the stories no longer diverge when it comes to the computational hardness of the relevant approximate counting problems.
Bez\'{a}kov\'{a}, Galanis, Goldberg, Guo and \v{S}tefankovi\v{c} proved that approximating the number of independent sets is intractable when $\Delta\geq 5\cdot 2^{k/2}$ \cite{BGGGS19}, unless $\mathbf{NP}=\mathbf{RP}$.
The exponent $k/2$ coincides with that in the intractability result of Galanis, Guo and Wang for counting hypergraph $q$-colourings, which holds when $\Delta\geq 5\cdot q^{k/2}$ \cite{GGW22} and $q$ is even.
On the other hand, there are several recent breakthroughs from the algorithmic side.
Hermon, Sly and Zhang \cite{HSZ19} first give a Markov-chain-based sampler that outputs an independent set almost uniformly at random in polynomial time when $\Delta\leq c2^{k/2}$ for some absolute constant $c>0$.
This also yields a fully-polynomial randomised approximation scheme (FPRAS) for the number of independent sets due to a standard sampling-to-counting reduction \cite{JVV86}.
A later work by Qiu, Wang and Zhang \cite{qiu2022perfect} provides a perfect sampler (i.e., the output distribution is unbiased) which runs in expected polynomial time when $\Delta\leq c2^{k/2}/k$ for some absolute constant $c>0$.
Very recently, Feng, Guo, Wang, Wang and Yin \cite{FGWWY22} further derandomise the Markov chain Monte Carlo approach and provide a fully-polynomial deterministic approximation scheme (FPTAS) when $\Delta\leq c2^{k/2}/k^2$ for some absolute constant $c>0$.
All these regimes nearly match the hardness bound.
The notion of \emph{linear} hypergraphs (aka. \emph{simple} hypergraphs) also attracts some attention.
We say a hypergraph has \emph{overlap} $b$, if the intersection of each pair of hyperedges contains at most $b$ vertices.
The hypergraph is linear if it has overlap $1$.
The regimes where the above algorithms work go further when the input hypergraph is restricted to be linear.
That is, $\Delta\leq c2^{k}/k^2$ for both the FPRAS and the perfect sampler, and $\Delta\leq 2^{(1-o(1))k}$ for the FPTAS, established in the respective works cited above.
Are these algorithmic regimes asymptotically tight, up to some small factors?
As the main claim of this note, we answer the question affirmatively.
\begin{theorem} \label{thm:main}
For any $k\geq 2$, $1\leq b\leq k/2$ and $\Delta\geq 5\cdot 2^{k-b}+1$, it is $\NP$-hard to approximate the number of independent sets in $k$-uniform hypergraphs of maximum degree at most $\Delta$ and overlap at most $b$.
\end{theorem}
The hardness for linear hypergraphs is then obtained by plugging in $b=1$.
The above theorem also subsumes the general case in \cite{BGGGS19} by setting $b:=\lfloor k/2\rfloor$.
In fact, as we will see soon, the reduction there is a special case of ours.
The phenomenon that a more relaxed algorithmic regime (and thus a more restricted hardness regime) exists for linear hypergraphs is also present in the hypergraph $q$-colouring problem.
The best algorithmic bound to date is $\Delta\lesssim q^{k/3}$ for general hypergraphs \cite{JPV21,HSW21,FGWWY22}, while it goes further to $q^{k/2-o(k)}$ in the linear case \cite{FGW22a}.
From the hardness side, the approximate counting problem is known to be $\NP$-hard when $\Delta\geq 5\cdot q^{k/2}$ \cite{GGW22} in the general case where $q$ is even, but only when $\Delta\geq 2kq^k\log q+2q$ in the linear case \cite{GGW22}.
The reduction here is inspired by the general case \cite{BGGGS19}.
The argument therein reduces from the hard-core model (counting weighted independent sets) on graphs, by replacing each vertex in the graph with $k/2$ copies in the hypergraph.
This naturally requires large overlaps in the resulting hypergraph.
In our case, linearity (or the requirement of small overlaps) is ensured by controlling the number of copies created, followed by filling up each hyperedge to $k$ vertices.
This boils down to a general anti-ferromagnetic $2$-spin system, instead of merely the hard-core model.
The main complication is to establish the so-called ``non-uniqueness'', after which we can invoke a theorem of Sly and Sun \cite{SS14} and show the inapproximability of this $2$-spin system.
We remark that the hardness of linear hypergraph colourings is handled separately in \cite{GGW22}.
However, the approach there is based on the hardness of the searching problem, and thus not applicable in our case because, again, constructing an independent set is trivial.
An open problem is to locate the computational phase transition completely for the linear case, as there is still an $O(k^2)$ gap in between.
Moreover, the so-called uniqueness threshold for independent sets on the hypertree is $\Delta\leq 2^{k}/(2k)$ \cite[Lemma 60]{BGGGS19}.
However, it is not obvious which of these three thresholds, if any, is the ground truth for the computational phase transition point.
\section{Reduction from 2-spin systems}
Our reduction is made from the hardness of approximating the partition function of the \emph{$2$-spin system} on graphs.
A $2$-spin system on a graph $G=(V,E)$ is specified by an interaction matrix $\bm B$ and a vector ${\bm h}$ for the external field:
\begin{equation} \label{equ:2spin-interaction}
{\bm B}=\begin{bmatrix}
\beta & 1 \\ 1 & \gamma
\end{bmatrix}, \qquad
{\bm h}=\begin{bmatrix}
\lambda \\ 1
\end{bmatrix},
\end{equation}
where $\beta,\gamma,\lambda\geq 0$.
The system is called \emph{anti-ferromagnetic}, if $\beta\gamma<1$.
A configuration $\sigma:V\to\{0,1\}$ assigns each vertex $v\in V$ with a spin either $0$ or $1$.
The \emph{weight} of a configuration $\sigma$ is defined by
\[
\wt(\sigma):=\lambda^{n_0(\sigma)}\beta^{m_{00}(\sigma)}\gamma^{m_{11}(\sigma)}
\]
where $n_0(\sigma)$ is the number of vertices assigned $0$ under $\sigma$, and $m_{00}(\sigma)$ (resp. $m_{11}(\sigma)$) is the number of edges whose both endpoints are assigned $0$ (resp. $1$) under $\sigma$.
The \emph{partition function} is defined by
\[
Z_{\beta,\gamma,\lambda}(G):=\sum_{\sigma}\wt(\sigma).
\]
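As a concrete illustration of the definitions, for the single-edge graph $K_2$ the four configurations $(0,0)$, $(0,1)$, $(1,0)$ and $(1,1)$ have weights $\lambda^2\beta$, $\lambda$, $\lambda$ and $\gamma$ respectively, so that
\[
Z_{\beta,\gamma,\lambda}(K_2)=\lambda^{2}\beta+2\lambda+\gamma.
\]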
The $2$-spin system we are interested in is specified by the following choices of parameters:
\[
\beta=1, \qquad \gamma=1-\frac{1}{2^{k-2b}}, \qquad \lambda=2^b-1.
\]
The subscript in $Z_{\beta,\gamma,\lambda}$ is thus omitted as the parameters are now fixed.
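Note that for every $1\leq b\leq k/2$ these parameters satisfy
\[
\beta\gamma=1-\frac{1}{2^{k-2b}}<1,
\]
so the system is indeed anti-ferromagnetic.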
We now state the reduction.
For any given $\Delta$-regular graph $G=(V,E)$, construct the hypergraph $H_G$ according to the following steps.
\begin{itemize}
\item[(T1)] Interpret the graph as a $2$-uniform hypergraph.
\item[(T2)] Replace each vertex with $b$ vertices.
\item[(T3)] For each hyperedge, insert another $k-2b$ vertices independently.
\end{itemize}
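One readily checks the parameters of $H_G$ from (T1)--(T3): each hyperedge consists of the $b$ copies of each of its two original endpoints together with its own $k-2b$ freshly inserted vertices, so every hyperedge has
\[
2b+(k-2b)=k
\]
vertices, that is, $H_G$ is $k$-uniform. Two distinct hyperedges can only share copies of a common endpoint in $G$, so the overlap is at most $b$; and since the copies inherit the degree of the original vertex while the inserted vertices have degree $1$, the maximum degree of $H_G$ is at most $\Delta$.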
Below is an example illustrating the reduction where $k=7$, $b=3$ and $\Delta=3$.
\begin{center}
\tikzset{every picture/.style={line width=0.75pt}}
\begin{tikzpicture}[x=0.75pt,y=0.75pt,yscale=-1,xscale=1]
\draw (28,156) -- (108,156) ;
\draw (28,156) -- (68,124) ;
\draw (108,156) -- (68,124) ;
\draw (68,76) -- (28,156) ;
\draw (68,76) -- (108,156) ;
\draw (68,76) -- (68,124) ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (24,156) .. controls (24,153.79) and (25.79,152) .. (28,152) .. controls (30.21,152) and (32,153.79) .. (32,156) .. controls (32,158.21) and (30.21,160) .. (28,160) .. controls (25.79,160) and (24,158.21) .. (24,156) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (64,76) .. controls (64,73.79) and (65.79,72) .. (68,72) .. controls (70.21,72) and (72,73.79) .. (72,76) .. controls (72,78.21) and (70.21,80) .. (68,80) .. controls (65.79,80) and (64,78.21) .. (64,76) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (64,124) .. controls (64,121.79) and (65.79,120) .. (68,120) .. controls (70.21,120) and (72,121.79) .. (72,124) .. controls (72,126.21) and (70.21,128) .. (68,128) .. controls (65.79,128) and (64,126.21) .. (64,124) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (104,156) .. controls (104,153.79) and (105.79,152) .. (108,152) .. controls (110.21,152) and (112,153.79) .. (112,156) .. controls (112,158.21) and (110.21,160) .. (108,160) .. controls (105.79,160) and (104,158.21) .. (104,156) -- cycle ;
\draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=1 ][fill={rgb, 255:red, 170; green, 170; blue, 170 } ,fill opacity=1 ] (536.98,63.78) .. controls (536.98,61.57) and (538.77,59.78) .. (540.98,59.78) .. controls (543.19,59.78) and (544.98,61.57) .. (544.98,63.78) .. controls (544.98,65.99) and (543.19,67.78) .. (540.98,67.78) .. controls (538.77,67.78) and (536.98,65.99) .. (536.98,63.78) -- cycle ;
\draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=1 ][fill={rgb, 255:red, 170; green, 170; blue, 170 } ,fill opacity=1 ] (536.98,73.78) .. controls (536.98,71.57) and (538.77,69.78) .. (540.98,69.78) .. controls (543.19,69.78) and (544.98,71.57) .. (544.98,73.78) .. controls (544.98,75.99) and (543.19,77.78) .. (540.98,77.78) .. controls (538.77,77.78) and (536.98,75.99) .. (536.98,73.78) -- cycle ;
\draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=1 ][fill={rgb, 255:red, 170; green, 170; blue, 170 } ,fill opacity=1 ] (536.98,83.78) .. controls (536.98,81.57) and (538.77,79.78) .. (540.98,79.78) .. controls (543.19,79.78) and (544.98,81.57) .. (544.98,83.78) .. controls (544.98,85.99) and (543.19,87.78) .. (540.98,87.78) .. controls (538.77,87.78) and (536.98,85.99) .. (536.98,83.78) -- cycle ;
\draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=1 ][fill={rgb, 255:red, 170; green, 170; blue, 170 } ,fill opacity=1 ] (530.98,133.78) .. controls (530.98,131.57) and (532.77,129.78) .. (534.98,129.78) .. controls (537.19,129.78) and (538.98,131.57) .. (538.98,133.78) .. controls (538.98,135.99) and (537.19,137.78) .. (534.98,137.78) .. controls (532.77,137.78) and (530.98,135.99) .. (530.98,133.78) -- cycle ;
\draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=1 ][fill={rgb, 255:red, 170; green, 170; blue, 170 } ,fill opacity=1 ] (542.98,133.78) .. controls (542.98,131.57) and (544.77,129.78) .. (546.98,129.78) .. controls (549.19,129.78) and (550.98,131.57) .. (550.98,133.78) .. controls (550.98,135.99) and (549.19,137.78) .. (546.98,137.78) .. controls (544.77,137.78) and (542.98,135.99) .. (542.98,133.78) -- cycle ;
\draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=1 ][fill={rgb, 255:red, 170; green, 170; blue, 170 } ,fill opacity=1 ] (536.98,123.78) .. controls (536.98,121.57) and (538.77,119.78) .. (540.98,119.78) .. controls (543.19,119.78) and (544.98,121.57) .. (544.98,123.78) .. controls (544.98,125.99) and (543.19,127.78) .. (540.98,127.78) .. controls (538.77,127.78) and (536.98,125.99) .. (536.98,123.78) -- cycle ;
\draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=1 ][fill={rgb, 255:red, 170; green, 170; blue, 170 } ,fill opacity=1 ] (495.98,150.78) .. controls (495.98,148.57) and (497.77,146.78) .. (499.98,146.78) .. controls (502.19,146.78) and (503.98,148.57) .. (503.98,150.78) .. controls (503.98,152.99) and (502.19,154.78) .. (499.98,154.78) .. controls (497.77,154.78) and (495.98,152.99) .. (495.98,150.78) -- cycle ;
\draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=1 ][fill={rgb, 255:red, 170; green, 170; blue, 170 } ,fill opacity=1 ] (487.98,156.78) .. controls (487.98,154.57) and (489.77,152.78) .. (491.98,152.78) .. controls (494.19,152.78) and (495.98,154.57) .. (495.98,156.78) .. controls (495.98,158.99) and (494.19,160.78) .. (491.98,160.78) .. controls (489.77,160.78) and (487.98,158.99) .. (487.98,156.78) -- cycle ;
\draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=1 ][fill={rgb, 255:red, 170; green, 170; blue, 170 } ,fill opacity=1 ] (479.98,162.78) .. controls (479.98,160.57) and (481.77,158.78) .. (483.98,158.78) .. controls (486.19,158.78) and (487.98,160.57) .. (487.98,162.78) .. controls (487.98,164.99) and (486.19,166.78) .. (483.98,166.78) .. controls (481.77,166.78) and (479.98,164.99) .. (479.98,162.78) -- cycle ;
\draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=1 ][fill={rgb, 255:red, 170; green, 170; blue, 170 } ,fill opacity=1 ] (593.98,162.78) .. controls (593.98,160.57) and (595.77,158.78) .. (597.98,158.78) .. controls (600.19,158.78) and (601.98,160.57) .. (601.98,162.78) .. controls (601.98,164.99) and (600.19,166.78) .. (597.98,166.78) .. controls (595.77,166.78) and (593.98,164.99) .. (593.98,162.78) -- cycle ;
\draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=1 ][fill={rgb, 255:red, 170; green, 170; blue, 170 } ,fill opacity=1 ] (577.98,150.78) .. controls (577.98,148.57) and (579.77,146.78) .. (581.98,146.78) .. controls (584.19,146.78) and (585.98,148.57) .. (585.98,150.78) .. controls (585.98,152.99) and (584.19,154.78) .. (581.98,154.78) .. controls (579.77,154.78) and (577.98,152.99) .. (577.98,150.78) -- cycle ;
\draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=1 ][fill={rgb, 255:red, 170; green, 170; blue, 170 } ,fill opacity=1 ] (585.98,156.78) .. controls (585.98,154.57) and (587.77,152.78) .. (589.98,152.78) .. controls (592.19,152.78) and (593.98,154.57) .. (593.98,156.78) .. controls (593.98,158.99) and (592.19,160.78) .. (589.98,160.78) .. controls (587.77,160.78) and (585.98,158.99) .. (585.98,156.78) -- cycle ;
\draw (528.98,69.78) .. controls (528.98,63.15) and (534.35,57.78) .. (540.98,57.78) -- (540.98,57.78) .. controls (547.61,57.78) and (552.98,63.15) .. (552.98,69.78) -- (552.98,129.78) .. controls (552.98,136.41) and (547.61,141.78) .. (540.98,141.78) -- (540.98,141.78) .. controls (534.35,141.78) and (528.98,136.41) .. (528.98,129.78) -- cycle ;
\draw (535.18,119.39) .. controls (540.92,116.08) and (548.26,118.05) .. (551.57,123.79) -- (551.57,123.79) .. controls (554.88,129.53) and (552.92,136.87) .. (547.18,140.18) -- (495.22,170.18) .. controls (489.48,173.49) and (482.14,171.53) .. (478.82,165.79) -- (478.82,165.79) .. controls (475.51,160.05) and (477.48,152.71) .. (483.22,149.39) -- cycle ;
\draw (599.18,149.79) .. controls (604.92,153.11) and (606.88,160.45) .. (603.57,166.19) -- (603.57,166.19) .. controls (600.26,171.93) and (592.92,173.89) .. (587.18,170.58) -- (535.22,140.58) .. controls (529.48,137.27) and (527.51,129.93) .. (530.82,124.19) -- (530.82,124.19) .. controls (534.14,118.45) and (541.48,116.48) .. (547.22,119.79) -- cycle ;
\draw (472.98,160.78) .. controls (472.98,152.5) and (479.7,145.78) .. (487.98,145.78) -- (593.98,145.78) .. controls (602.27,145.78) and (608.98,152.5) .. (608.98,160.78) -- (608.98,160.78) .. controls (608.98,169.06) and (602.27,175.78) .. (593.98,175.78) -- (487.98,175.78) .. controls (479.7,175.78) and (472.98,169.06) .. (472.98,160.78) -- cycle ;
\draw (547.84,54.84) .. controls (554.38,58.61) and (556.62,66.98) .. (552.85,73.51) -- (498.52,167.61) .. controls (494.74,174.15) and (486.38,176.39) .. (479.84,172.62) -- (479.84,172.62) .. controls (473.3,168.84) and (471.06,160.48) .. (474.84,153.94) -- (529.16,59.84) .. controls (532.94,53.3) and (541.3,51.06) .. (547.84,54.84) -- cycle ;
\draw (602.75,172.83) .. controls (596,176.73) and (587.37,174.42) .. (583.48,167.67) -- (529.58,74.32) .. controls (525.69,67.58) and (528,58.95) .. (534.75,55.05) -- (534.75,55.05) .. controls (541.49,51.16) and (550.12,53.47) .. (554.01,60.22) -- (607.91,153.56) .. controls (611.8,160.31) and (609.49,168.94) .. (602.75,172.83) -- cycle ;
\draw (536.98,109.78) .. controls (536.98,107.57) and (538.77,105.78) .. (540.98,105.78) .. controls (543.19,105.78) and (544.98,107.57) .. (544.98,109.78) .. controls (544.98,111.99) and (543.19,113.78) .. (540.98,113.78) .. controls (538.77,113.78) and (536.98,111.99) .. (536.98,109.78) -- cycle ;
\draw (520.98,137.78) .. controls (520.98,135.57) and (522.77,133.78) .. (524.98,133.78) .. controls (527.19,133.78) and (528.98,135.57) .. (528.98,137.78) .. controls (528.98,139.99) and (527.19,141.78) .. (524.98,141.78) .. controls (522.77,141.78) and (520.98,139.99) .. (520.98,137.78) -- cycle ;
\draw (552.98,137.78) .. controls (552.98,135.57) and (554.77,133.78) .. (556.98,133.78) .. controls (559.19,133.78) and (560.98,135.57) .. (560.98,137.78) .. controls (560.98,139.99) and (559.19,141.78) .. (556.98,141.78) .. controls (554.77,141.78) and (552.98,139.99) .. (552.98,137.78) -- cycle ;
\draw (509.84,113.73) .. controls (509.84,111.52) and (511.63,109.73) .. (513.84,109.73) .. controls (516.05,109.73) and (517.84,111.52) .. (517.84,113.73) .. controls (517.84,115.94) and (516.05,117.73) .. (513.84,117.73) .. controls (511.63,117.73) and (509.84,115.94) .. (509.84,113.73) -- cycle ;
\draw (564.98,113.78) .. controls (564.98,111.57) and (566.77,109.78) .. (568.98,109.78) .. controls (571.19,109.78) and (572.98,111.57) .. (572.98,113.78) .. controls (572.98,115.99) and (571.19,117.78) .. (568.98,117.78) .. controls (566.77,117.78) and (564.98,115.99) .. (564.98,113.78) -- cycle ;
\draw (536.98,160.78) .. controls (536.98,158.57) and (538.77,156.78) .. (540.98,156.78) .. controls (543.19,156.78) and (544.98,158.57) .. (544.98,160.78) .. controls (544.98,162.99) and (543.19,164.78) .. (540.98,164.78) .. controls (538.77,164.78) and (536.98,162.99) .. (536.98,160.78) -- cycle ;
\draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=1 ][fill={rgb, 255:red, 170; green, 170; blue, 170 } ,fill opacity=1 ] (372.98,63.89) .. controls (372.98,61.68) and (374.77,59.89) .. (376.98,59.89) .. controls (379.19,59.89) and (380.98,61.68) .. (380.98,63.89) .. controls (380.98,66.1) and (379.19,67.89) .. (376.98,67.89) .. controls (374.77,67.89) and (372.98,66.1) .. (372.98,63.89) -- cycle ;
\draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=1 ][fill={rgb, 255:red, 170; green, 170; blue, 170 } ,fill opacity=1 ] (372.98,73.89) .. controls (372.98,71.68) and (374.77,69.89) .. (376.98,69.89) .. controls (379.19,69.89) and (380.98,71.68) .. (380.98,73.89) .. controls (380.98,76.1) and (379.19,77.89) .. (376.98,77.89) .. controls (374.77,77.89) and (372.98,76.1) .. (372.98,73.89) -- cycle ;
\draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=1 ][fill={rgb, 255:red, 170; green, 170; blue, 170 } ,fill opacity=1 ] (372.98,83.89) .. controls (372.98,81.68) and (374.77,79.89) .. (376.98,79.89) .. controls (379.19,79.89) and (380.98,81.68) .. (380.98,83.89) .. controls (380.98,86.1) and (379.19,87.89) .. (376.98,87.89) .. controls (374.77,87.89) and (372.98,86.1) .. (372.98,83.89) -- cycle ;
\draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=1 ][fill={rgb, 255:red, 170; green, 170; blue, 170 } ,fill opacity=1 ] (366.98,133.89) .. controls (366.98,131.68) and (368.77,129.89) .. (370.98,129.89) .. controls (373.19,129.89) and (374.98,131.68) .. (374.98,133.89) .. controls (374.98,136.1) and (373.19,137.89) .. (370.98,137.89) .. controls (368.77,137.89) and (366.98,136.1) .. (366.98,133.89) -- cycle ;
\draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=1 ][fill={rgb, 255:red, 170; green, 170; blue, 170 } ,fill opacity=1 ] (378.98,133.89) .. controls (378.98,131.68) and (380.77,129.89) .. (382.98,129.89) .. controls (385.19,129.89) and (386.98,131.68) .. (386.98,133.89) .. controls (386.98,136.1) and (385.19,137.89) .. (382.98,137.89) .. controls (380.77,137.89) and (378.98,136.1) .. (378.98,133.89) -- cycle ;
\draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=1 ][fill={rgb, 255:red, 170; green, 170; blue, 170 } ,fill opacity=1 ] (372.98,123.89) .. controls (372.98,121.68) and (374.77,119.89) .. (376.98,119.89) .. controls (379.19,119.89) and (380.98,121.68) .. (380.98,123.89) .. controls (380.98,126.1) and (379.19,127.89) .. (376.98,127.89) .. controls (374.77,127.89) and (372.98,126.1) .. (372.98,123.89) -- cycle ;
\draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=1 ][fill={rgb, 255:red, 170; green, 170; blue, 170 } ,fill opacity=1 ] (331.98,150.89) .. controls (331.98,148.68) and (333.77,146.89) .. (335.98,146.89) .. controls (338.19,146.89) and (339.98,148.68) .. (339.98,150.89) .. controls (339.98,153.1) and (338.19,154.89) .. (335.98,154.89) .. controls (333.77,154.89) and (331.98,153.1) .. (331.98,150.89) -- cycle ;
\draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=1 ][fill={rgb, 255:red, 170; green, 170; blue, 170 } ,fill opacity=1 ] (323.98,156.89) .. controls (323.98,154.68) and (325.77,152.89) .. (327.98,152.89) .. controls (330.19,152.89) and (331.98,154.68) .. (331.98,156.89) .. controls (331.98,159.1) and (330.19,160.89) .. (327.98,160.89) .. controls (325.77,160.89) and (323.98,159.1) .. (323.98,156.89) -- cycle ;
\draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=1 ][fill={rgb, 255:red, 170; green, 170; blue, 170 } ,fill opacity=1 ] (315.98,162.89) .. controls (315.98,160.68) and (317.77,158.89) .. (319.98,158.89) .. controls (322.19,158.89) and (323.98,160.68) .. (323.98,162.89) .. controls (323.98,165.1) and (322.19,166.89) .. (319.98,166.89) .. controls (317.77,166.89) and (315.98,165.1) .. (315.98,162.89) -- cycle ;
\draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=1 ][fill={rgb, 255:red, 170; green, 170; blue, 170 } ,fill opacity=1 ] (429.98,162.89) .. controls (429.98,160.68) and (431.77,158.89) .. (433.98,158.89) .. controls (436.19,158.89) and (437.98,160.68) .. (437.98,162.89) .. controls (437.98,165.1) and (436.19,166.89) .. (433.98,166.89) .. controls (431.77,166.89) and (429.98,165.1) .. (429.98,162.89) -- cycle ;
\draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=1 ][fill={rgb, 255:red, 170; green, 170; blue, 170 } ,fill opacity=1 ] (413.98,150.89) .. controls (413.98,148.68) and (415.77,146.89) .. (417.98,146.89) .. controls (420.19,146.89) and (421.98,148.68) .. (421.98,150.89) .. controls (421.98,153.1) and (420.19,154.89) .. (417.98,154.89) .. controls (415.77,154.89) and (413.98,153.1) .. (413.98,150.89) -- cycle ;
\draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=1 ][fill={rgb, 255:red, 170; green, 170; blue, 170 } ,fill opacity=1 ] (421.98,156.89) .. controls (421.98,154.68) and (423.77,152.89) .. (425.98,152.89) .. controls (428.19,152.89) and (429.98,154.68) .. (429.98,156.89) .. controls (429.98,159.1) and (428.19,160.89) .. (425.98,160.89) .. controls (423.77,160.89) and (421.98,159.1) .. (421.98,156.89) -- cycle ;
\draw (364.98,69.89) .. controls (364.98,63.27) and (370.35,57.89) .. (376.98,57.89) -- (376.98,57.89) .. controls (383.61,57.89) and (388.98,63.27) .. (388.98,69.89) -- (388.98,129.89) .. controls (388.98,136.52) and (383.61,141.89) .. (376.98,141.89) -- (376.98,141.89) .. controls (370.35,141.89) and (364.98,136.52) .. (364.98,129.89) -- cycle ;
\draw (371.18,119.51) .. controls (376.92,116.2) and (384.26,118.16) .. (387.57,123.9) -- (387.57,123.9) .. controls (390.88,129.64) and (388.92,136.98) .. (383.18,140.29) -- (331.22,170.29) .. controls (325.48,173.61) and (318.14,171.64) .. (314.82,165.9) -- (314.82,165.9) .. controls (311.51,160.16) and (313.48,152.82) .. (319.22,149.51) -- cycle ;
\draw (435.18,149.91) .. controls (440.92,153.22) and (442.88,160.56) .. (439.57,166.3) -- (439.57,166.3) .. controls (436.26,172.04) and (428.92,174.01) .. (423.18,170.69) -- (371.22,140.69) .. controls (365.48,137.38) and (363.51,130.04) .. (366.82,124.3) -- (366.82,124.3) .. controls (370.14,118.56) and (377.48,116.6) .. (383.22,119.91) -- cycle ;
\draw (308.98,160.89) .. controls (308.98,152.61) and (315.7,145.89) .. (323.98,145.89) -- (429.98,145.89) .. controls (438.27,145.89) and (444.98,152.61) .. (444.98,160.89) -- (444.98,160.89) .. controls (444.98,169.18) and (438.27,175.89) .. (429.98,175.89) -- (323.98,175.89) .. controls (315.7,175.89) and (308.98,169.18) .. (308.98,160.89) -- cycle ;
\draw (383.84,54.95) .. controls (390.38,58.73) and (392.62,67.09) .. (388.85,73.63) -- (334.52,167.73) .. controls (330.74,174.27) and (322.38,176.51) .. (315.84,172.73) -- (315.84,172.73) .. controls (309.3,168.95) and (307.06,160.59) .. (310.84,154.05) -- (365.16,59.96) .. controls (368.94,53.42) and (377.3,51.18) .. (383.84,54.95) -- cycle ;
\draw (438.75,172.95) .. controls (432,176.84) and (423.37,174.53) .. (419.48,167.78) -- (365.58,74.44) .. controls (361.69,67.69) and (364,59.06) .. (370.75,55.17) -- (370.75,55.17) .. controls (377.49,51.27) and (386.12,53.58) .. (390.01,60.33) -- (443.91,153.68) .. controls (447.8,160.42) and (445.49,169.05) .. (438.75,172.95) -- cycle ;
\draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=1 ][fill={rgb, 255:red, 170; green, 170; blue, 170 } ,fill opacity=1 ] (208.98,73.89) .. controls (208.98,71.68) and (210.77,69.89) .. (212.98,69.89) .. controls (215.19,69.89) and (216.98,71.68) .. (216.98,73.89) .. controls (216.98,76.1) and (215.19,77.89) .. (212.98,77.89) .. controls (210.77,77.89) and (208.98,76.1) .. (208.98,73.89) -- cycle ;
\draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=1 ][fill={rgb, 255:red, 170; green, 170; blue, 170 } ,fill opacity=1 ] (209,130) .. controls (209,127.79) and (210.79,126) .. (213,126) .. controls (215.21,126) and (217,127.79) .. (217,130) .. controls (217,132.21) and (215.21,134) .. (213,134) .. controls (210.79,134) and (209,132.21) .. (209,130) -- cycle ;
\draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=1 ][fill={rgb, 255:red, 170; green, 170; blue, 170 } ,fill opacity=1 ] (159.98,156.89) .. controls (159.98,154.68) and (161.77,152.89) .. (163.98,152.89) .. controls (166.19,152.89) and (167.98,154.68) .. (167.98,156.89) .. controls (167.98,159.1) and (166.19,160.89) .. (163.98,160.89) .. controls (161.77,160.89) and (159.98,159.1) .. (159.98,156.89) -- cycle ;
\draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=1 ][fill={rgb, 255:red, 170; green, 170; blue, 170 } ,fill opacity=1 ] (257.98,156.89) .. controls (257.98,154.68) and (259.77,152.89) .. (261.98,152.89) .. controls (264.19,152.89) and (265.98,154.68) .. (265.98,156.89) .. controls (265.98,159.1) and (264.19,160.89) .. (261.98,160.89) .. controls (259.77,160.89) and (257.98,159.1) .. (257.98,156.89) -- cycle ;
\draw (200.98,69.89) .. controls (200.98,63.27) and (206.35,57.89) .. (212.98,57.89) -- (212.98,57.89) .. controls (219.61,57.89) and (224.98,63.27) .. (224.98,69.89) -- (224.98,129.89) .. controls (224.98,136.52) and (219.61,141.89) .. (212.98,141.89) -- (212.98,141.89) .. controls (206.35,141.89) and (200.98,136.52) .. (200.98,129.89) -- cycle ;
\draw (207.18,119.51) .. controls (212.92,116.2) and (220.26,118.16) .. (223.57,123.9) -- (223.57,123.9) .. controls (226.88,129.64) and (224.92,136.98) .. (219.18,140.29) -- (167.22,170.29) .. controls (161.48,173.61) and (154.14,171.64) .. (150.82,165.9) -- (150.82,165.9) .. controls (147.51,160.16) and (149.48,152.82) .. (155.22,149.51) -- cycle ;
\draw (271.18,149.91) .. controls (276.92,153.22) and (278.88,160.56) .. (275.57,166.3) -- (275.57,166.3) .. controls (272.26,172.04) and (264.92,174.01) .. (259.18,170.69) -- (207.22,140.69) .. controls (201.48,137.38) and (199.51,130.04) .. (202.82,124.3) -- (202.82,124.3) .. controls (206.14,118.56) and (213.48,116.6) .. (219.22,119.91) -- cycle ;
\draw (144.98,160.89) .. controls (144.98,152.61) and (151.7,145.89) .. (159.98,145.89) -- (265.98,145.89) .. controls (274.27,145.89) and (280.98,152.61) .. (280.98,160.89) -- (280.98,160.89) .. controls (280.98,169.18) and (274.27,175.89) .. (265.98,175.89) -- (159.98,175.89) .. controls (151.7,175.89) and (144.98,169.18) .. (144.98,160.89) -- cycle ;
\draw (219.84,54.95) .. controls (226.38,58.73) and (228.62,67.09) .. (224.85,73.63) -- (170.52,167.73) .. controls (166.74,174.27) and (158.38,176.51) .. (151.84,172.73) -- (151.84,172.73) .. controls (145.3,168.95) and (143.06,160.59) .. (146.84,154.05) -- (201.16,59.96) .. controls (204.94,53.42) and (213.3,51.18) .. (219.84,54.95) -- cycle ;
\draw (274.75,172.95) .. controls (268,176.84) and (259.37,174.53) .. (255.48,167.78) -- (201.58,74.44) .. controls (197.69,67.69) and (200,59.06) .. (206.75,55.17) -- (206.75,55.17) .. controls (213.49,51.27) and (222.12,53.58) .. (226.01,60.33) -- (279.91,153.68) .. controls (283.8,160.42) and (281.49,169.05) .. (274.75,172.95) -- cycle ;
\draw (108,109) -- (132,109) -- (132,104) -- (148,114) -- (132,124) -- (132,119) -- (108,119) -- cycle ;
\draw (276,109) -- (300,109) -- (300,104) -- (316,114) -- (300,124) -- (300,119) -- (276,119) -- cycle ;
\draw (444,109) -- (468,109) -- (468,104) -- (484,114) -- (468,124) -- (468,119) -- (444,119) -- cycle ;
\draw (116,86) node [anchor=north west][inner sep=0.75pt] [align=left] {T1};
\draw (284,86) node [anchor=north west][inner sep=0.75pt] [align=left] {T2};
\draw (452,86) node [anchor=north west][inner sep=0.75pt] [align=left] {T3};
\end{tikzpicture}
\end{center}
It is straightforward to verify that $H_G$ is $k$-uniform, has overlap $b$ and maximum degree $\Delta$.
Let $\+{I}(H_G)$ be the set of independent sets of $H_G$.
\begin{lemma} \label{lem:count-equivalence}
For any $\Delta$-regular graph $G=(V,E)$ and the constructed hypergraph $H_G$, it holds that $|\mathcal{I}(H_G)|=2^{|E|(k-2b)}Z(G)$.
\end{lemma}
\begin{proof}
We define the following partition of the set of independent sets of the hypergraph $H_G$, $\mathcal{I}(H_G)
=\biguplus_{\sigma}\mathcal{S}(\sigma)$, where $\sigma$ ranges over all configurations of the $2$-spin system.
For any vertex $v\in V$, let $B_v$ be the set of constructed vertices in $H_G$ corresponding to $v$ as in step (T2) of the construction.
Given an independent set $I\in\mathcal{I}(H_G)$, the part $\mathcal{S}(\sigma)$ that $I$ falls into is determined by setting, for every $v\in V$,
\begin{itemize}
\item $\sigma(v)=0$, if $|B_v\cap I|\leq b-1$;
\item $\sigma(v)=1$, if $|B_v\cap I|=b$ (namely, $B_v\subseteq I$).
\end{itemize}
This is indeed a partition because each $I\in\mathcal{I}(H_G)$ falls into exactly one part.
We now show that $|\mathcal{S}(\sigma)|=2^{|E|(k-2b)}\wt(\sigma)$, from which the lemma follows immediately.
\begin{itemize}
\item Consider the vertices constructed in (T2).
\begin{itemize}
\item For each $v\in V$ such that $\sigma(v)=0$, there are $2^b-1$ feasible partial configurations of $B_v$.
\item For each $v\in V$ such that $\sigma(v)=1$, there is just one feasible partial configuration of $B_v$.
\end{itemize}
\item Consider the vertices constructed in (T3).
\begin{itemize}
\item For each edge $e$ such that both its endpoints take spin $1$, the remaining $k-2b$ vertices of the corresponding hyperedge cannot all be in an independent set together, so there are $2^{k-2b}-1$ feasible partial configurations.
\item For any other edge, the corresponding $k-2b$ vertices are free to be included in an independent set, so there are $2^{k-2b}$ feasible partial configurations.
\end{itemize}
\end{itemize}
Altogether, this gives
\[
|\mathcal{S}(\sigma)|=\left(2^{k-2b}-1\right)^{m_{11}(\sigma)}\left(2^{k-2b}\right)^{|E|-m_{11}(\sigma)}\left(2^b-1\right)^{n_0(\sigma)}=2^{|E|(k-2b)}\wt(\sigma). \qedhere
\]
\end{proof}
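The identity can be sanity-checked by brute force on a tiny instance. The following self-contained Python sketch is our own illustration (it is not part of the argument): it instantiates the construction for $G$ the triangle with $k=3$ and $b=1$ (the lemma applies to any $\Delta$-regular $G$) and compares $|\mathcal{I}(H_G)|$ with $2^{|E|(k-2b)}Z(G)$.

```python
from itertools import combinations, product

# Toy instance of the reduction (illustrative choice): G = triangle, k = 3, b = 1.
k, b = 3, 1
V = [0, 1, 2]
E = [(0, 1), (1, 2), (0, 2)]

# (T2): a block B_v of b fresh vertices per graph vertex v.
blocks = {v: [("B", v, i) for i in range(b)] for v in V}
# (T3): k - 2b additional fresh vertices per edge.
extras = {e: [("X", e, i) for i in range(k - 2 * b)] for e in E}
hyperedges = [blocks[u] + blocks[v] + extras[(u, v)] for (u, v) in E]
vertices = [x for vs in blocks.values() for x in vs] + \
           [x for xs in extras.values() for x in xs]

# |I(H_G)|: subsets of vertices containing no hyperedge entirely.
lhs = sum(
    1
    for r in range(len(vertices) + 1)
    for S in combinations(vertices, r)
    if not any(set(h) <= set(S) for h in hyperedges)
)

# 2^(|E|(k-2b)) * Z(G), with wt(sigma) = (1 - 2^(2b-k))^m11 * (2^b - 1)^n0.
Z = 0.0
for sigma in product([0, 1], repeat=len(V)):
    m11 = sum(1 for (u, v) in E if sigma[u] == sigma[v] == 1)
    n0 = sum(1 for v in V if sigma[v] == 0)
    Z += (1 - 2.0 ** (2 * b - k)) ** m11 * (2 ** b - 1) ** n0
rhs = 2 ** (len(E) * (k - 2 * b)) * Z

print(lhs, rhs)  # prints: 45 45.0
```

Both sides agree on this instance, as the lemma predicts.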
Our goal then boils down to showing the inapproximability of the constructed $2$-spin system.
To establish this, we invoke the following celebrated result by Sly and Sun \cite{SS14}, which connects the so-called non-uniqueness property of any general anti-ferromagnetic $2$-spin system with computational hardness.
Denote by $\mathbb{T}_{\Delta}$ the infinite $\Delta$-regular tree, and by $\hat{\mathbb{T}}_{\Delta}$ the infinite $(\Delta-1)$-ary tree.
\begin{theorem}[\cite{SS14}] \label{lem:ss14}
For any nondegenerate homogeneous anti-ferromagnetic $2$-spin system with interaction matrix ${\bm B}$ on $\Delta$-regular graphs that lies in the $\mathbb{T}_{\Delta}$ non-uniqueness region, the partition function is $\NP$-hard to approximate, even within a factor of $2^{cn}$ for some constant $c({\bm B},\Delta)>0$.
\end{theorem}
We remark that uniqueness/non-uniqueness regions for $\mathbb{T}_{\Delta}$ coincide with those for $\hat{\mathbb{T}}_{\Delta}$.
Since $(\Delta-1)$-ary trees are more convenient to handle, we work with $\hat{\mathbb{T}}_{\Delta}$ from now on.
It is known that $\hat{\mathbb{T}}_{\Delta}$ (non-)uniqueness corresponds to the solutions of the standard tree recursion on the ratio of the Gibbs measure, namely $\mu_v(0)/\mu_v(1)$.
The following lemma, originally due to Martinelli, Sinclair and Weitz \cite[Section 6.2]{martinelli2007fast}, characterises these solutions.
\begin{lemma}[{\cite[Lemma 7]{GSV16}}] \label{lem:msw}
For $\Delta\geq 3$ and an anti-ferromagnetic $2$-spin system specified by (\ref{equ:2spin-interaction}),
consider the system of equations
\begin{equation*}
x=\lambda\left(\frac{\beta y+1}{y+\gamma}\right)^{\Delta-1}, \qquad
y=\lambda\left(\frac{\beta x+1}{x+\gamma}\right)^{\Delta-1}
\end{equation*}
where $x,y\geq 0$. Then,
\begin{itemize}
\item in the $\hat{\mathbb{T}}_{\Delta}$ uniqueness region, the system has a unique solution $(Q^\times,Q^\times)$;
\item in the $\hat{\mathbb{T}}_{\Delta}$ non-uniqueness region, the system has three solutions $(Q^+,Q^-), (Q^\times,Q^\times), (Q^-,Q^+)$ where $Q^+>Q^\times>Q^-$.
\end{itemize}
\end{lemma}
Let $d:=\Delta-1$.
Using the above lemma, it suffices to show that the two-step recursion has three fixed points $Q^+>Q^\times>Q^-$ in order to establish non-uniqueness.
Equivalently, we are to show that the following function has $3$ distinct zeros on $(0,+\infty)$ in the regime of parameters in \Cref{thm:main}.
\begin{equation}
f(z):=(2^b-1)\left(1+\frac{1}{2^{k-2b}(2^b-1)\left(1+\frac{1}{2^{k-2b}z+2^{k-2b}-1}\right)^d+2^{k-2b}-1}\right)^{d}-z.
\end{equation}
Because the solution $Q^\times$ is the unique fixed point of the one-step recursion, it also helps to consider the function
\begin{equation}
g(z):=(2^b-1)\left(1+\frac{1}{2^{k-2b}z+2^{k-2b}-1}\right)^d-z.
\end{equation}
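Before the formal argument, the two recursions can be probed numerically. The sketch below is a non-rigorous illustration for the sample parameters $k=4$, $b=1$ (hence $d=5\cdot 2^{k-b}=40$); the bracketing intervals are our own choices, validated by evaluating the sign of $f$ at their endpoints. It exhibits three zeros of $f$, i.e.\ the three fixed points $Q^-<Q^\times<Q^+$ of \Cref{lem:msw}.

```python
def f(z, k=4, b=1):
    # Two-step recursion f from the text; k = 4, b = 1 are illustrative values.
    d = 5 * 2 ** (k - b)
    c, w = 2 ** (k - 2 * b), 2 ** b - 1
    inner = w * (1 + 1 / (c * z + c - 1)) ** d    # one-step image of z
    return w * (1 + 1 / (c * inner + c - 1)) ** d - z

def bisect_root(lo, hi, fn, iters=200):
    # Plain bisection; assumes fn changes sign on [lo, hi].
    for _ in range(iters):
        mid = (lo + hi) / 2
        if (fn(lo) > 0) == (fn(mid) > 0):
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# f is positive at 0.1 and 8 and negative at 2 and 500, so each bracket
# contains a zero: these are Q^-, Q^x and Q^+ respectively.
roots = [bisect_root(lo, hi, f) for lo, hi in [(0.1, 2), (2, 8), (8, 500)]]
print(roots)  # three increasing values, the middle one being Q^x
```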
The following lemma is sufficient to derive our main theorem.
\begin{lemma} \label{lem:analytic}
Assume integers $k\geq 2$, $1\leq b\leq k/2$ and $d=5\cdot 2^{k-b}$.
Define $z^*:=d/2^{k-2b}=5\cdot 2^b$.
Then $f(z^*)>0$ and $g(z^*)<0$.
\end{lemma}
\begin{proof}[Proof of \Cref{thm:main}]
Note that $g(0)>0$ and $\lim_{z\to +\infty} g(z)=-\infty$.
By $g(z^*)<0$, we know that the unique zero of $g$, which is $Q^\times$, is smaller than $z^*$.
On the other hand, by $f(z^*)>0$ and $\lim_{z\to +\infty} f(z)=-\infty$, there is a zero of $f$ on $(z^*,+\infty)$, and it cannot be $Q^\times$.
This establishes non-uniqueness due to \Cref{lem:msw}.
$\NP$-hardness then follows from \Cref{lem:ss14}.
\end{proof}
In the proof of \Cref{lem:analytic}, the following standard inequality is useful.
\begin{equation} \label{equ:exp}
\exp\{x\}>\left(1+\frac{x}{y}\right)^y>\exp\left\{\frac{xy}{x+y}\right\}\qquad\text{for all }x,y>0.
\end{equation}
\begin{proof}[Proof of \Cref{lem:analytic}]
The claim on $g(z^*)$ follows from a straightforward estimate:
\begin{align*}
g(z^*)
\leq (2^b-1)\left(1+\frac{1}{5\cdot 2^{k-b}}\right)^{5\cdot 2^{k-b}}-5\cdot 2^b < (2^b-1)\mathrm{e}-5\cdot 2^b < 0.
\end{align*}
For the $f(z^*)$ part, first assume $k\geq 3$. Then
\begin{align*}
f(z^*)=&(2^b-1)\left(1+\frac{1}{2^{k-2b}(2^b-1)\left(1+\frac{1}{5\cdot 2^{k-b}+2^{k-2b}-1}\right)^{5\cdot 2^{k-b}}+2^{k-2b}-1}\right)^{5\cdot 2^{k-b}}-5\cdot 2^b\\
\ge &(2^b-1)\left(1+\frac{1}{2^{k-2b}(2^b-1)\left(1+\frac{1}{5\cdot 2^{k-b}}\right)^{5\cdot 2^{k-b}}+2^{k-2b}-1}\right)^{5\cdot 2^{k-b}}-5\cdot 2^b\\
> &(2^b-1)\left(1+\frac{1}{2^{k-2b}(2^b-1)\mathrm{e}+2^{k-2b}-1}\right)^{5\cdot 2^{k-b}}-5\cdot 2^b\\
> &(2^b-1)\left(1+\frac{1}{2^{k-2b}(2^b-1)\mathrm{e}+2^{k-2b}}\right)^{5\cdot 2^{k-b}}-5\cdot 2^b\\
>&(2^b-1)\exp\left\{\frac{5\cdot 2^{b+k}}{2^k+2^{2b}+2^{k}(2^b-1)\mathrm{e}}\right\}-5\cdot 2^b. \tag{By (\ref{equ:exp})}
\end{align*}
Note that the fraction in $\exp\{\cdot\}$ is monotone increasing with respect to $k$.
We proceed by distinguishing two cases.
\begin{itemize}
\item In the case $b\geq 2$, the whole expression is minimised at $k=2b$.
Plugging this in, we further get
\[
f(z^*)>(2^b-1)\exp\left\{\frac{5\cdot 2^b}{2+(2^b-1)\mathrm{e}}\right\}-5\cdot 2^b =: h(b).
\]
Now it suffices to show $h(b)>0$ for $b\geq 2$.
If $b=2$, we have $h(2)>1.5$.
If $b\geq 3$, then
\[
h(b)> (2^b-1)\exp\left\{\frac{5\cdot 2^b}{\mathrm{e}\cdot 2^b}\right\}-5\cdot 2^b>1.29\cdot 2^b-6.29>0.
\]
\item In the case $b=1$, the whole expression is minimised at $k=3$, where it is at least $0.7$.
\end{itemize}
Finally, if $k=2$, then $b$ can only be $1$, in which case $f(z^*)>16.0$.
\end{proof}
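The inequalities above can also be verified numerically. The following self-contained sketch evaluates $f(z^*)$ and $g(z^*)$ directly on a small grid of admissible $(k,b)$; the grid bounds are an arbitrary illustrative choice.

```python
def fg_at_zstar(k, b):
    # Evaluate f and g at z* = 5 * 2^b with d = 5 * 2^(k-b), as in the lemma.
    d = 5 * 2 ** (k - b)
    c, w = 2 ** (k - 2 * b), 2 ** b - 1
    z = 5 * 2 ** b
    inner = w * (1 + 1 / (c * z + c - 1)) ** d    # equals g(z*) + z*
    g = inner - z
    f = w * (1 + 1 / (c * inner + c - 1)) ** d - z
    return f, g

# Check the lemma on a small grid of admissible (k, b) with 1 <= b <= k/2.
for k in range(2, 11):
    for b in range(1, k // 2 + 1):
        f, g = fg_at_zstar(k, b)
        assert f > 0 and g < 0, (k, b, f, g)
print("f(z*) > 0 and g(z*) < 0 on all tested (k, b)")
```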
\bibliographystyle{alpha}
\section{Introduction}
\md{Knowledge Graphs (KG) make it possible to merge and connect heterogeneous data despite their differences, and this flexibility is key to their success. As they are incomplete by design, the drawback is that entities in a collection can have heterogeneous descriptions, potentially producing unreliable query results. Data producers, i.e.\ the people and organizations producing KG data, still need to ensure, as far as possible, the best level of completeness. Completeness is regarded as an essential criterion in most quality methodologies for RDF data, the most used framework to describe KGs}~\cite{mendes2012sieve, harth2012completeness}.
\md{The difficulty is that they have no means to distinguish cases where incomplete entities can and should be fixed. Let us consider the example of a publishing company building a KG with the books they publish and related books, such as those which inspired their books or are quoted in them. The graph merges data coming from several databases in the company, regularly enriched with information gathered from external sources such as Wikidata~\cite{MullerBirn2015} or Geonames~\cite{geonames}. Connecting their data in a KG enables them to power recommendation algorithms, to run analyses of sales, and so on. Sharing this KG also allows researchers in the humanities to analyse the books, and libraries and resellers to reuse the metadata. However, if the data are incomplete, such applications may lead to erroneous results, such as wrong decisions based on an incomplete analysis. While allowing incomplete information makes it possible to merge the various sources, some of the missing data might be fixable, but the various strata of data that were added and modified at different points in time make it difficult to identify them.}
Available tools and methods assign a completeness rate to each property in a collection and produce flat lists of all entities missing each property. Once it is assessed that, for instance, the publication date is missing for 11\% of the books (1346 books), the data manager has to inspect a list of 1346 entities one by one. After countless hours and thousands of clicks, she may find out that certain issues were recurrent, and that she could have fixed them \md{in bulk}. She might realise that more than a hundred books came from the same original database describing the related books and that the date was actually not missing in the original data, but happened to be an uncertain date, expressed by a year followed by a question mark (e.g.\ `1943?'). She might also notice that several of them had been published during war periods, while another significant part of them had been published clandestinely. Finding what those subsets had in common at the start would have given her a very useful hint: it was very unlikely that she would find more precise information by looking for the date in external data sources; she could have spared herself hours of unsuccessful research. Other subsets of interest could include books planned for publication, which can only be fixed later, when the date is known; or all books from a specific source, pointing to a bug in the transformation process, in which case she would rather fix the bug and re-run the transformation script than fix entities one after another; and so on.
She might also never notice those facts, as it is very difficult to find the coherence of scattered items when inspecting them in random order, especially if there are many meaningful subsets.
Our tool, \emph{The Missing Path}, aims at addressing this issue. The map, grouping entities according to their incomplete profile, \md{lets users identify consistent subsets}. Comparing a specific subset with the full collection reveals its distinctive features, giving useful hints to understand the cause of incompleteness, and fix entities \md{in bulk}, saving significant time. \emph{The Missing Path} considers the completeness not only of direct properties (e.g.\ the \md{publisher} of a book) but also of indirect properties (e.g.\ the location of the \md{publisher} of a book), also called \emph{paths} of properties.
The novelty of our approach is 1) to use a map to identify structural similarity of entities in a KG, and 2) to support comparative analysis of the distributions of values at the end of paths of properties in a KG.
Our contributions include:
\begin{itemize}[nosep]
\item A method to transform a collection of entities into a map based on their incompleteness;
\item A visualization tool called \emph{The Missing Path}, based on 3 coordinated views focused on the completeness, to support iterative exploratory analysis;
\item A description of the iterative design process we used to improve and validate our approach while working with 9 Wikidata contributors.
\end{itemize}
We first introduce the basics of RDF and discuss related work regarding the evaluation and the visualisation of completeness. Then, we present the tool: we describe how path-based summaries are extracted and computed, we explain the design rationale and the main parts of the interface, and we illustrate it with a use case featuring a fictional Wikidata contributor. Finally, we describe the iterative design process we used to improve and validate our approach while working with nine Wikidata contributors, following a methodology inspired by the ``Multi-dimensional In-Depth Long-term Case Studies'' (MILCS) of Shneiderman \& Plaisant~\cite{Shneiderman2006}.
The tool is available as open-source at:
\href{https://gitlab.inria.fr/mdestand/the-missing-path/}{gitlab.inria.fr/mdestand/the-missing-path} and can be run online at: \href{https://missingpath.lri.fr/}{missingpath.lri.fr}.
\section{Background and Related Work}
We introduce RDF and we discuss related work regarding the assessment and visualisation of their completeness.
\subsection{Introduction to RDF data}
RDF data are graph data; \md{their power relies on their structure: they} are made of low-level statements, named triples, that can be chained to answer complex queries, possibly over several data sources. \texttt{example:AuthorA schema:author example:BookB} is a triple, stating that Author A is the author of Book B. Triples are composed of a \emph{subject}, a \emph{predicate} and an \emph{object}. \emph{Subjects} are always \emph{entities}, represented by URIs. For readability, URIs can be prefixed: \texttt{example:AuthorA} stands for \texttt{<http://www.example/AuthorA>}. \emph{Predicates}---also named \emph{properties}---are also URIs; they follow rules defined in domain-specific models named \emph{ontologies}. \texttt{Schema.org} is an ontology specialised in the description of web pages, and \texttt{schema:author} is one of the properties defined in it. \emph{Objects} can be \emph{entities} or \emph{literals}. When an object is an entity, it is possible to chain statements, for instance: Author A is the author of Book B, Book B's publisher is Editor C, Editor C's location is City D, City D's name is 'Paris'. The chaining stops when the object is a literal, like \texttt{'Paris'}, since a literal cannot be the subject of another triple. A chain of predicates is named a \emph{path} in the graph.
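To make the chaining concrete, the following Python sketch follows a chain of predicates over a toy set of triples paraphrasing the example above; the data, the \texttt{follow} helper, and the property names are purely illustrative and not taken from any actual dataset.

```python
# Toy triples mirroring the example in the text (all hypothetical).
triples = [
    ("example:AuthorA", "schema:author", "example:BookB"),
    ("example:BookB", "schema:publisher", "example:EditorC"),
    ("example:EditorC", "schema:location", "example:CityD"),
    ("example:CityD", "schema:name", "Paris"),
]

def follow(subject, path, triples):
    """Follow a chain of predicates from `subject`; return terminal objects."""
    frontier = {subject}
    for predicate in path:
        frontier = {o for s, p, o in triples
                    if s in frontier and p == predicate}
    return frontier
```

Following the path \texttt{schema:author / schema:publisher / schema:location / schema:name} from \texttt{example:AuthorA} ends on the literal \texttt{'Paris'}, where the chaining stops.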
The RDF framework is very flexible and allows each entity to be described with different properties. However, to make their data meaningful and usable, data producers need to ensure a minimum of homogeneity.
\begin{figure}[h]
\frame{\includegraphics[width=\columnwidth]{rw1}}
\caption{\md{Screenshot of LD-VOWL, taken on 2020-12-12 at \href{http://vowl.visualdataweb.org/ldvowl/}{vowl.visualdataweb.org/ldvowl}. The user has selected the property `affiliation' (in red) and can see in the top right panel that it is used 747 times. To know the rate of completeness of this property relative to the class Person, she needs to select the node Person, read in the panel that there are 910 instances, and compute that 747/910*100 = 82\% of the persons have an affiliation.
}}
\label{fig:rw1}
\end{figure}
\begin{figure}[h]
\frame{\includegraphics[width=\columnwidth]{rw3}}
\caption{\md{Screenshot of Path Outlines, taken on 2020-12-05 at \href{http://spf.lri.fr/}{spf.lri.fr}. The user can browse the paths for a collection, filtering them on their completeness rate (among other metrics), and inspect the completeness rate of each path.}}
\label{fig:rw3}
\end{figure}
\subsection{Completeness in RDF}
Though quality in RDF can be defined in many ways, most work on the topic mentions completeness as an important criterion~\cite{mendes2012sieve, BIZER20091,radulovic2018comprehensive, zaveri2016quality, BenEllefi_2018}. The rate of completeness of a property is the percentage of entities in a given set that are described by this property.
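As a minimal illustration of this definition, the rate can be computed from the set of entities and the subset of them described by the property (a sketch; the function name is ours):

```python
def completeness_rate(entities, described):
    """Percentage of `entities` that appear in `described`.

    entities:  URIs of all entities in the set under consideration.
    described: URIs of the entities holding the property.
    """
    entities = set(entities)
    if not entities:
        return 0.0
    return 100.0 * len(entities & set(described)) / len(entities)
```

For instance, if 3 out of 4 books have a publication date, the property's completeness rate is 75\%.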
The set of entities can be the dataset or a subset. Technically speaking, approaches considering the whole dataset~\cite{auer2012lodstats} give the most accurate overview. However, from an editorial point of view, and except for some very generic properties like \texttt{rdfs:label} that might apply to any entity in a dataset, it is more reasonable to expect homogeneous descriptions for groups of similar entities, also named collections of entities. Issa et al.~\cite{issa2019revealing} use the class of resources as similarity criterion, for instance \texttt{schema:Person}, \texttt{schema:Organization}, or \texttt{schema:Place}, \md{and display the result as a UML class diagram. They do not support the evaluation of the completeness of paths of properties. A typical use case would be to evaluate the percentage of authors whose place of birth has geocoordinates, to know whether plotting a map would give a representative overview of authors.}
\md{Using a node-link diagram to lay out a summary graph of the dataset makes paths of properties readable~\cite{Troullinou2018, weise2016ld}, as displayed in \autoref{fig:rw1}, with the limitation that counts are given as absolute values for each selected element. Users have to compute the rate themselves for single properties, and cannot access it at all for paths of properties. To address this limitation, Path Outlines lets users browse paths following their completeness rate and other metrics~\cite{destandau2020}, as displayed in \autoref{fig:rw3}.}
\begin{figure}[h]
\frame{\includegraphics[width=\columnwidth]{rw2}}
\caption{\md{Screenshot of Integraality for Wikidata, taken on 2020-12-12 at \href{https://www.wikidata.org/wiki/Wikidata:WikiProject_sum_of_all_paintings/Property_statistics/Sandbox}{wikidata.org/wiki/Wikidata:WikiProject\_sum\_of \_all\_paintings/Property\_statistics/Sandbox}. The color scale helps users compare the completeness rate in the different groups. However, as the table scrolls over more than 5 screen heights, it is actually difficult to read and use. }}
\label{fig:rw2}
\end{figure}
\md{However, considering the rate of completeness of a property or a path of properties relative to the full collection might not always be enough to help data producers fix their datasets.
In RDF, meaningful aggregation can also be achieved through the values of a property. For instance, entities in the collection \texttt{schema:Person} could be considered regarding their profession, encoded in the value of \texttt{schema:hasOccupation}. This allows identifying smaller subsets with similar profiles and needs. Integraality~\cite{InteGraality} lets users select a property to define subsets in the collection, and then evaluate the completeness of other properties relative to those subsets, as displayed in \autoref{fig:rw2}. The limits are that the table can be huge, and thus difficult to read and use, and that a single property might not produce useful groups for analysing the completeness of all properties. PRO-WD~\cite{wisesa2019wikidata} supports crossing several properties, but produces a grid of charts that is very difficult to interpret.}
\md{Our approach, instead of starting from the values at the end of properties to define consistent groups, identifies clusters of entities with a similar structure, in order to automatically reveal meaningful contexts concerning the properties over which completeness is evaluated.}
\begin{figure*}
\frame{\includegraphics[width=9.7em]{fig2_map_d1}}
\frame{\includegraphics[width=9.7em]{fig2_map_d2}}
\frame{\includegraphics[width=9.7em]{fig2_map_d3}}
\frame{\includegraphics[width=9.7em]{fig2_map_d4}}
\frame{\includegraphics[width=9.7em]{fig2_map_d6}}
\caption{
Collections C1, C2, C3, C4 and C6 (see~\autoref{tab:collection}). The number of clusters, their size and distribution provide a visual footprint of the shape of a collection, relative to the set of paths selected to produce the map (highlighted in pink on the right side of each thumbnail).}
\label{fig:maps}
\end{figure*}
\begin{figure}
\frame{\includegraphics[width=4.6em]{fig3_histo_d1}}
\frame{\includegraphics[width=4.6em]{fig3_histo_d2}}
\frame{\includegraphics[width=4.6em]{fig3_histo_d3}}
\frame{\includegraphics[width=4.6em]{fig3_histo_d4}}
\frame{\includegraphics[width=4.6em]{fig3_histo_d6}}
\caption{
Collections C1, C2, C3, C4 and C6 (see~\autoref{tab:collection}). Histogram on the front page: the steepness of the curve gives a visual footprint of the completeness of the most complete paths in the collection. Scrolling down reveals all paths. C1 is our demo collection: it was not curated as a wikiproject, so very few paths are fully complete, and there is a sharp decrease with a long tail of paths with a low completeness rate. C2 is maintained by an active team of 10 contributors: a large number of paths are complete. C3 is a catalog of films curated before it was imported: it is more balanced. C4 has been created and curated over a short time, mostly by one contributor. C6 is a starting project mixing sets of data that were curated separately.
\label{fig:charts}
\end{figure}
\section{\md{Data Representation and Processing}}
\md{The originality of our approach relies on the data representation. We build on the concept of \emph{semantic paths} to summarise the description of a collection, and we use the semantic paths as indexes for vector embeddings to compute a map of completeness as well as detailed summaries.}
\subsection{Paths summaries}
We build on the concept of \emph{semantic paths} to describe a given collection; they encode aggregate information relative to chains of triples. In the article introducing them~\cite{destandauSPaths2020}, their description is limited to the counts of unique and total values at the end of the chain. \md{We use vector embeddings to extend them and offer a} detailed summary of their distribution.
Our API takes as parameters \emph{the URI of a SPARQL endpoint}, \emph{a similarity criterion for the collection}, and \emph{a maximum depth for the chains of properties} to analyse. We extract RDF data and process them into a matrix.
We first retrieve all the path patterns---the combinations of chains of properties that will be analysed---up to the max depth, and their completeness rate, as described in \cite{destandau2020}. The list is ordered by completeness, starting with the most complete path, and stored in a file. We assign an auto-incremented index as an identifier to each path.
\subsection{Retrieval of the entities}
Then we fetch the URIs of all entities in the collection. \md{A specific issue with SPARQL endpoints is that an endpoint cannot return more than a given number of rows per query. This \texttt{quota} is usually set to 10,000 by default. Unlike SQL databases, there is no guarantee of retrieving all the results by repeating the same query with the LIMIT, OFFSET, and ORDER BY modifiers. We therefore use the \emph{semantic paths} to find a path allowing us to split the retrieval into several queries, each returning fewer than \texttt{quota} entities. We initially set a \texttt{maxUniqueValues} variable to 30 in order to keep the number of queries reasonable. We start by checking the best-represented path with fewer than \texttt{maxUniqueValues} values at its end, and retrieve its unique values together with the count of entities associated with each. If the highest count is lower than \texttt{quota}, and the number of entities not represented by this path is also lower than \texttt{quota}, we use this path to retrieve the entities: for each value, a query fetches all the entities having this value at the end of the path; a last query fetches all entities not described by this path. We merge and deduplicate all the entities retrieved, assign an auto-incremented index as an identifier to each entity, and store the list in a file. If the path does not meet the requirements, we consider the next path. If none of the paths meets the requirements, we increase \texttt{maxUniqueValues} and check the list of paths again.}
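The selection of a partitioning path can be sketched as follows; the data structure and function name are ours, and the real implementation issues SPARQL queries instead of reading precomputed counts.

```python
def pick_partition_path(path_stats, quota, total, max_unique=30, step=10):
    """Return the first path usable to split retrieval into sub-quota queries.

    path_stats: {path: {value: entity_count}}, ordered by coverage.
    quota:      maximum number of rows the endpoint returns per query.
    total:      number of entities in the collection.
    """
    while max_unique <= total:
        for path, counts in path_stats.items():
            if not counts or len(counts) > max_unique:
                continue  # too many values: would need too many queries
            covered = sum(counts.values())
            # each per-value query, and the "not described" query,
            # must stay under the quota
            if max(counts.values()) < quota and total - covered < quota:
                return path
        max_unique += step  # relax the limit and scan the list again
    return None
```

For instance, with a quota of 10,000 and a path whose values each cover fewer than 10,000 entities, that path is selected to drive the per-value queries.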
\subsection{\md{Values as vector embeddings}}
Each entity is described as a vector, where each column is a path. The value is either 'null' or a list of descriptors, structured as follows: \texttt{[values, datatypes, languages]}. Each element is itself a list, to account for multiple values, since cardinality is not constrained in RDF. For instance, a cell describing the label of an entity with 2 labels could contain: \texttt{[['À la recherche du temps perdu', 'In Search of Lost Time'], null, ['fr', 'en']]}. The datatype descriptors are filled only if they are expressed in the data. A cell describing the publication date of an entity could contain: \texttt{[[1998], ['xsd:dateTime'], null]}. \md{The vector is stored in a dictionary, associated with the URI of the entity.}
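A minimal sketch of how such a cell could be built from raw (value, datatype, language) tuples, with \texttt{None} standing for 'null' (the helper name is ours):

```python
def make_cell(observations):
    """Aggregate (value, datatype, language) tuples into one vector cell.

    Returns None for a missing path, otherwise
    [values, datatypes, languages], each list or None.
    """
    if not observations:
        return None
    values = [v for v, _, _ in observations]
    # descriptor lists are filled only if expressed in the data
    datatypes = [d for _, d, _ in observations if d] or None
    languages = [l for _, _, l in observations if l] or None
    return [values, datatypes, languages]
```

With the two labels from the example above, the cell becomes \texttt{[['À la recherche du temps perdu', 'In Search of Lost Time'], null, ['fr', 'en']]}.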
\subsection{\md{The completeness matrix}}
\md{The matrix of completeness is created from those vectors, each row is an entity.
The values are transformed as follows: 'null' becomes 1, meaning that a path is missing, and a list becomes 0.
Then, we project the vectors in 2 dimensions, to be later used as coordinates on a map.}
Dimensionality reduction techniques~\cite{Nonato2018} allow computing clusters and laying them out on a map. They usually group entities according to the values of their core attributes (for instance, the topics of a set of books and their publication dates), so that items with similar descriptions are grouped together~\cite{Zaveri2013, hogan2010some}. We instead fill the vectors with the structure of the descriptions: we consider entities similar if they are described by the same paths of properties, even if the values at their ends differ, in order to identify groups of entities missing the same information.
Among the large number of dimensionality reduction techniques available~\cite{Nonato2018}, we opted for UMAP~\cite{umap}; this flexible method accepts both dense and sparse vectors---as we knew that the number of paths to consider, that is, the number of dimensions in a vector, could vary significantly across datasets---and is fast and efficient for clustering.
We use UMAP with the dissimilarity function \emph{Russell-Rao} from the Scipy library~\cite{scipy}. This function computes a dissimilarity that takes into account the indices of the Boolean values in the vector---as opposed to a Jaccard function, for instance. As a result, items that form clusters on the map are those missing the exact same set of paths---whereas Jaccard would have grouped entities missing the same number of paths. To our knowledge, using maps to identify structural similarities in KGs is a novel approach.
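For reference, the Russell-Rao dissimilarity between two Boolean vectors is $(n - c_{TT})/n$, where $n$ is the vector length and $c_{TT}$ counts the positions at which both vectors are true. A pure-Python sketch of the metric (in practice, passing \texttt{metric='russellrao'} to \texttt{umap.UMAP} uses the library's equivalent built-in implementation):

```python
def russellrao(u, v):
    """Russell-Rao dissimilarity between two Boolean vectors: (n - c_TT) / n."""
    n = len(u)
    c_tt = sum(1 for a, b in zip(u, v) if a and b)  # positions true in both
    return (n - c_tt) / n
```

Applied to the completeness matrix, two entities are close (dissimilarity 0) only when they miss the same paths at the same positions.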
\subsection{\md{Advanced summaries}}
\md{To produce the summaries, we construct a matrix with all the vectors, and we transform it into a table (a Pandas DataFrame) to compute the summaries with Python.}
The summaries are based on unique values. All values with a number of occurrences lower than 5\% of the total number of values are merged in an `other' bucket to keep the overview readable. The graphical elements can be used to select entities by clicking on them, as displayed in \autoref{fig:selectionsummary}. The `other' bucket can also be used as a selector, and its values will be detailed as the selection narrows down.
To detect statistically significant differences, the system uses the distributions of the values at the end of a path for the subset, and for the full set, as displayed in the summaries, including the `other' aggregate. It normalises them and compares them against each other, performing a Kolmogorov-Smirnov test, using the \verb|scipy.stats.ks_2samp| Scipy function. It then repeats this operation with the summaries of the datatypes and languages. If there appears to be a significant difference (p-value $< 0.1$) in either values, datatypes, or languages, the path is colored in pink.
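The 5\% bucketing of the summaries can be sketched as follows (the function name is ours; the significance test itself is the \texttt{ks\_2samp} call described above):

```python
from collections import Counter

def summarise(values, threshold=0.05):
    """Summarise a list of values, merging rare ones into an `other' bucket.

    Values whose share is below `threshold` of the total are aggregated,
    to keep the overview readable.
    """
    counts = Counter(values)
    total = sum(counts.values())
    summary, other = {}, 0
    for value, n in counts.most_common():
        if n / total >= threshold:
            summary[value] = n
        else:
            other += n
    if other:
        summary["other"] = other
    return summary
```

As the selection narrows down, re-running the summary on the subset details values that were previously hidden in the `other' bucket.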
\section{\md{User Interface}}
\md{This new data representation allows us to design an interface to analyse the incompleteness of subsets in RDF data~(\autoref{fig:teaser}). We will present the design rationale and detail the main parts of the interface: the map, the histograms with embedded stacked charts, and the selection bar.}
\subsection{Design rationale}
\md{To support the identification and analysis of subsets of entities relative to the completeness of their paths, The Missing Path coordinates an entity-centric visualisation, the map, with a path-centric visualisation, the histogram.
The map represents all the entities in the collection and allows to situate a selected subset through explicit color encoding (in pink) of selected items. The path summaries describe the full collection and the selection and are laid out in mirror.} The combination of \textit{superposition} (on the map) and \textit{juxtaposition} (in mirror) allows the effective support of comparison~\cite{Gleicher2011}. The tight integration of statistics and visualisation is known to support explorative data analysis~\cite{perer2008integrating}, helping users to make sense of the data.
There are two ways to select a subset of items sharing a similar structure: selecting a cluster on the map, or using a combination of graphical elements in the summarised distributions to express logical constraints, e.g.~\emph{all items missing pathA and pathB, but not missing pathC}. The map is intended to guide users in their discovery, while the summarised distributions help them refine a selection, or fully express their own constraints when new ideas come to them~\cite{Perer2008SYF}.
\subsection{2D map of entities}
On the left part of the screen, the map (\autoref{fig:teaser}) displays clusters of items with similar incomplete profiles, offering an overview of the entities in the collection and allowing to select the clusters. It supports the following tasks:
\begin{itemize}[nosep]
\item see the homogeneity of the collection, regarding the completeness of the paths selected to compute the projection;
\item select subsets, through \md{precomputed} groups or using the interactive lasso; and
\item identify entities that are selected.
\end{itemize}
\subsubsection{Overview of the completeness}
\autoref{fig:maps} shows that different collections have different footprints. If a collection were 100\% complete, there would be only one large cluster. The number of clusters, their size, and their distribution form a visual footprint giving the shape of a collection relative to the set of properties selected to produce the map. Users can modify the list of paths taken into account to build the vectors with the projection button \wsv{changeprojection.png} and recompute the map. Our Python API, based on UMAP-learn \cite{umap-learn}, takes between a few seconds and 30 seconds to recompute the map for the collections in \autoref{tab:collection}.
For instance, selecting only 2 properties, P1 and P2, to compute a map could result in 4 clusters: entities missing both P1 and P2, entities missing neither, entities missing only P1, and entities missing only P2. The interest does not lie in the systematic enumeration of all combinations (in which case a table would be as efficient as a map). In reality, when more properties are taken into account, not all combinations occur: some are very frequent, others concern only a few entities, and the map reveals unexpected clusters serving as entry points to explore a collection. Inspecting the profile of a cluster often reveals other similarities, which may relate to the provenance, the history, or the contributors.
\subsubsection{Colors}
\label{sec:map:color}
While the position of the entities is based on missing information, their color is linked to the content of present information. Paths for which the summary of values has more than one value are candidates for color-coding. By default, the most covered candidate path is used. For instance, the default for collection C1 is \texttt{wdt:P31 instance of}, its summary is composed of two values: \texttt{wd:Q1004 Comics} and the aggregate \texttt{Other}. Entities are colored in blue for the former, in green for the latter, or with a gradient if they hold several values. Users can select another path to color the entities with the color button \wsv{changecolor.png} in the top bar.
When a subset of entities is selected, they are colored in pink and others in black.
\subsection{Paths histograms}
Next to the map giving a visual overview of the entities, the histograms (\autoref{fig:teaser}, right) offer an overview of the aggregated completeness of each path, for the full set and the selected subset. Stacked charts embedded in the histograms give access to the distribution of each path. They are laid out in mirror to let users compare the profiles of the subset and the full set, in terms of completeness and distribution. They support the following tasks:
\begin{itemize}[nosep]
\item see and compare the homogeneity of the full set with the selected subset, regarding all properties;
\item see and compare the completeness and distributions of the full set with the selected subset;
\item select entities based on the presence or absence of a property; and
\item select entities based on summarised distributions of the values, languages, and datatypes at the end of the paths.
\end{itemize}
\subsubsection{Overview of the completeness}
The grey bars represent all paths describing the collection, ordered by completeness, giving another visual signature of the completeness and showing at a glance how many paths are fully complete. \autoref{fig:charts} shows path summaries for the collections displayed in \autoref{fig:maps}. The map and the histogram are linked and coordinated.
Each row represents a path; the length of the grey bar is mapped to its percentage of completeness. Clicking on a path opens it, showing a summary as detailed in the next paragraph.
Path labels are displayed on the left of each row. By default, they appear when users hover over a path, when they hover over a predefined zone on the map, as in \autoref{fig:selectionzone}, or when a path is open. Users can toggle them on permanently, as in \autoref{fig:teaser}, with the labels button \wsv{openlabels.png}.
\subsubsection{Summarised distributions characterizing a path}\label{sec:summary:open}
When open, a path displays a summary of the distribution of the values at its end, as well as of their datatypes and languages. To be able to compute summaries on any subset within a time acceptable for interaction, we retrieve the values at the end of each path for all entities and store them in a matrix that can be processed rapidly with Python, as described in the previous section. The summary of the full collection is precomputed; the summaries of subsets are computed on demand.
\begin{figure*}
\centering
\frame{\includegraphics[width=\linewidth]{fig4_values_nonnumerical.png}}
\caption{Summary of values for a path: the whole set is presented on the left, in comparison to the selection on the right. The summary details values representing more than 5\% of the total, and aggregates others: for the whole set, only 3 of the 54 unique values are well represented enough to be detailed; the 51 that remain are merged in the `other' rectangle, represented with a dotted texture. Hovering a rectangle displays the label and count of the value it represents. Each value, including the aggregate, can be clicked to be added as a condition for a selection.}
\label{fig:comparesummary}
\end{figure*}
\begin{figure}
\frame{\includegraphics[width=\columnwidth]{fig5_selectionzone}}
\caption{Hovering a predefined zone on the map highlights it in yellow, and gives access to the $+$ button, to use it as a condition for a selection. It also displays and highlights in yellow the names of the paths missing for the entities in this zone.}
\label{fig:selectionzone}
\end{figure}
\begin{figure}
\frame{\includegraphics[width=\columnwidth]{fig6_selectionsummary}}
\frame{\includegraphics[width=\columnwidth]{fig6_selectionsummary2}}
\caption{The user can click on an element of the summary to add it to the selection (top). Once added, it becomes dark pink, and clicking again will remove it (bottom).}
\label{fig:selectionsummary}
\end{figure}
\begin{figure}
\includegraphics[width=\columnwidth]{fig7_selection}
\caption{The selection bar contains controls to inspect and refine the conditions for a selection and its result. The number of checkboxes in (a) shows how many conditions are pending (here, there is one). Clicking on (a) displays the query in pseudo-code (see \autoref{fig:constraints}). Clicking on (b) retrieves the list of entities matching the conditions and their summary. When a selection has been retrieved, (c) indicates the number of entities in the selection; clicking on it displays the list shown in \autoref{fig:entities}. (d) enables exporting the selection, and (e) clearing it.\bigskip\bigskip}
\label{fig:selectiontools}
\end{figure}
\begin{figure}
\frame{\includegraphics[width=\columnwidth]{fig8_constraints}}
\caption{Conditions for a selection are expressed in pseudo-code, to let users understand how the tool retrieves entities. They can refine them by toggling the underlined elements: `having' can be switched to `not having', resulting in the inverse condition, and `the whole set' to `the current selection'.}
\label{fig:constraints}
\end{figure}
\begin{figure}
\frame{\includegraphics[width=\columnwidth]{fig9_entitieslist}}
\caption{List of entities in the current selection. The label is in the preferred language when available. Clicking on the URI opens it in a new window.\bigskip\bigskip}
\label{fig:entities}
\end{figure}
\subsubsection{Comparison of the full set with the selected subset}
To make sense of a \emph{subset} of entities, users need to identify its distinctive features, what defines it in comparison to the whole collection. The histogram is laid out as a mirror of the histogram for the full collection, to facilitate this comparison.
For instance, in \autoref{fig:teaser}, comparing the two histograms shows that the subset is very homogeneous: although it misses important information (paths with no grey bar in the right column), the 16 paths that are described are complete (full grey bars in the right column), while only 8 of them are fully complete for the full set. Paths missing in the subset are highlighted in yellow, to help users focus on the problem they are trying to solve.
To support users in the comparison task, the tool also draws their attention to which paths to inspect in order to understand the specificity of a selection (how it differs from the full set); it colors them in pink.
\label{sec:distrib:interest}
The yellow color indicates paths that are missing in the subset. It stands out, more intense and luminous than the other colors in the interface, to draw the attention of users to what is not there, and help them make sense of the absence.
\subsection{Selection bar}\label{sec:constraints}
The selection bar supports users in inspecting and refining the conditions for a query.
\emph{Conditions} are selection criteria in the database sense, combined by a conjunction (an ``and'' operator). Hovering over the map highlights predefined zones (\autoref{fig:selectionzone}). The $+$ button in the centre of the zone allows adding the zone as a condition. Clicking on the map switches from region to lasso mode, to let users select zones that are not predefined. Graphical elements in the histograms and the summaries can be added to and removed from the selection.
The selection control bar in \autoref{fig:selectiontools} supports users in understanding what happens when they add a condition, validating the selection, seeing the list of entities selected, and clearing the selection.
\begin{enumerate}[label=\alph*), nosep]
\item \emph{Toggle list of conditions}. Each condition is represented by a checked box.
When at least one condition has been added, (a) and (b) become pink, to indicate that the selection can be queried. Clicking (a) toggles the list of conditions, as shown in~\autoref{fig:constraints}. The query is written in pseudo-code; users can remove conditions from the list, toggle them to their inverse condition, or toggle the scope of the query from `whole collection' to `current subset'.
\item \emph{Inspect selection}. The combination of conditions defines the selection. When users click the inspect button, the query is sent to our Python API. The new list of entities in the selection is retrieved, and \autoref{fig:selectiontools}-c is updated first. Then the summary for the entities is computed and displayed under the selection control bar (\autoref{fig:selectiontools}).
\item \emph{Toggle list of selected entities}. Clicking this button toggles the list in \autoref{fig:entities}. Users can remove entities from the list. Clicking the `Update selection' button at the bottom updates the paths summary for the selection.
\item \emph{Export selection}. This button triggers the download of 3 \texttt{csv} files that can be used to keep track of the query:
\texttt{condition.csv} contains the list of conditions used to get the selection,
\texttt{selection.csv} contains the list of entities in the selection (URI + label), and \texttt{summary.csv} contains the summaries for the subset and full set.
\item \emph{Clear selection}. Clears the current selection and its summary.
\end{enumerate}
\medskip
\section{Scenario of Use}
We designed our tool to help users see what is missing in their dataset and make sense of it. Let us describe the interface from the point of view of a contributor, Alice, who wants to curate Wikidata entities of class \texttt{Q1004 Comics}, describing comic books. She opens the tool and sees the map of entities in \autoref{fig:teaser}. As she moves the mouse, yellow zones delimiting clusters of entities appear, and paths that are missing for the zone are highlighted in yellow. Her attention is caught by a small cluster, which misses many pieces of information that are important to describe comics, such as \texttt{P407 language of work or name}, \texttt{P495 country of origin}, \texttt{P123 publisher}, \texttt{P577 publication date} and \texttt{P136 genre}.
She decides to inspect this group in more detail: she adds this zone to the conditions for selection using the $+$ symbol and validates the selection with the magnifier button.
The selection bar announces a total of 20 entities, and the summary appears under it.
Some of the paths are colored in pink, indicating that their summary for the selection might be significantly different from the full set.
Alice hovers the paths highlighted in pink to see their labels and starts by opening \texttt{rdfs:label}.
She notices that there are 20 distinct labels, all of them in French. Then, she inspects \texttt{schema:description}.
Its summary reveals that a single value is repeated 20 times: ``stripverhaal van Robbedoes en Kwabernoot'' (``comic strip Spirou \& Fantasio'' in Dutch, a popular comic strip originally written in French).
The 20 descriptions are in Dutch.
She inspects \texttt{schema:dateModified} and sees that all 20 entities were last modified on the same day.
The \texttt{P179 part of the series} property indicates that all 20 are part of the same series.
Alice finds that those entities appear to have very similar needs. According to her quality standards, labels and descriptions should be available in similar languages (as opposed to labels being in French only and descriptions in Dutch only). From what she knows, Spirou and Fantasio comics are known enough that it should be easy to find the author, language, publisher, and publication date.
The information can likely be found from the same sources for at least some of the albums. If Alice is lucky, one of the sources might even be the URI of the series that all entities belong to.
It looks like she will be able to save time by fixing those entities at once. Now that she has identified that this cluster needs a certain type of action, she would like to make sure that she will check all the entities belonging to the series, even if they miss slightly different information and are not in the initial cluster.
To do so, she clicks on the value shared by 20 entities to add it to conditions for selection.
She then opens the conditions and reads the query: ``SELECT entities HAVING the value \texttt{wd:Q1130014} at the end of the path \texttt{wdt:P179} among the current selection''.
She toggles the scope definition from ``current selection'' to ``full set'' and validates the selection with the magnifier button.
The selection bar now announces a total of 35 entities, all part of the ``Spirou and Fantasio'' series. She clicks the export button and downloads the files describing this group for fixing it later.
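The condition Alice has just built corresponds to a simple query pattern. The following sketch (a hypothetical helper, not the tool's actual query generator) shows how such a condition could be rendered as SPARQL once its scope is the full set, assuming collection membership is expressed with \texttt{P31 instance of}:

```python
# Sketch of the SPARQL pattern behind a HAVING-value condition (hypothetical
# helper; the tool's actual query generation may differ).
def condition_to_sparql(path, value, entity_class):
    return (
        "SELECT ?entity WHERE {\n"
        f"  ?entity wdt:P31 {entity_class} .\n"   # member of the collection
        f"  ?entity {path} {value} .\n"           # the HAVING condition
        "}"
    )

query = condition_to_sparql("wdt:P179", "wd:Q1130014", "wd:Q1004")
print(query)
```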
She then hovers the next zone. The paths highlighted in yellow indicate that entities in this zone also miss similar important information, the main difference being that they have a \texttt{skos:altLabel}, but no attribute \texttt{wikibase:timeStamp}.
Note that even if the properties discriminating two neighbouring zones are not meaningful in themselves, this structural approach helps detect coherent subsets.
In order to inspect the new cluster, she adds the zone to conditions for selection using the $+$ symbol and validates the selection with the magnifier button.
The new selection replaces the previous one. The selection bar announces 127 entities. 100\% of them have a \texttt{P179 part of the series}, so she opens the summary for this path that is now colored in pink, hoping that she can detect interesting groups.
The summary announces 25 unique values, and 3 values stand out because they are well represented. Those values are URIs, and she hovers them to dereference them in the URI bar above the map; she sees the corresponding labels: ``Sammy'' (25), ``Bobo'' (21), and ``Natacha'' (14). The rest of the values are merged in an `other' group (67). She clicks on the first value to add it to conditions for selection and validates the selection with the magnifier button. She exports this selection. She repeats the same actions with the two other subgroups. Now she can refer to the \texttt{csv} files she has exported to fix each of those 3 groups.
This exploratory approach enables her to quickly detect small groups that are coherent and thus easy to fix.

Let us now see how she can use the tool starting from the summary of paths. She clears the current selection and clicks on the eye pictogram to display all path labels. She figures out at first glance, from the length of the grey bars in the histogram, that less than half of the entities have an author. She decides to make fixing this a priority. She opens the author summary, which confirms a completeness rate of 42\%, and she clicks on the bar to add it to conditions for selection. She opens the conditions to read the query: ``SELECT entities HAVING the path \texttt{wdt:P50} among the whole set''. She toggles the condition from `HAVING' to `NOT HAVING' and validates the selection with the magnifier button. The selection bar displays 1929 entities. The summaries for paths are mainly composed of `other' values. Wondering how to deal with this huge list, she considers refining the selection by combining conditions. She sees the property \texttt{P3589 Grand Comics Database Series ID} in the list. She decides to inspect entities having no author but such an identifier, which might mean that the information about the author is accessible. The subset counts 49 entities, which is indeed more manageable. She exports the selection; the workflow should be easy since the source is the same for all entities; it might even be automatable. There are still 1880 entities without authors. She tries another strategy, looking for entities that have a publisher but no author. The result counts 129 entities.
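The combination of conditions amounts to filtering entities on the presence or absence of paths. A minimal sketch of the `HAVING' / `NOT HAVING' logic (toy data; entities reduced to dicts mapping property paths to value lists):

```python
# Toy illustration of combining HAVING / NOT HAVING conditions
# (not the tool's implementation, which queries an RDF store).
entities = [
    {"wdt:P50": ["Q535"], "wdt:P123": ["Q456"]},   # has an author
    {"wdt:P3589": ["12345"]},                      # GCD series ID, no author
    {"wdt:P123": ["Q456"]},                        # publisher, no author
]

def select(items, having=(), not_having=()):
    """Keep entities matching every HAVING path and no NOT HAVING path."""
    return [e for e in items
            if all(p in e for p in having)
            and all(p not in e for p in not_having)]

no_author = select(entities, not_having=["wdt:P50"])
fixable = select(entities, having=["wdt:P3589"], not_having=["wdt:P50"])
print(len(no_author), len(fixable))   # prints "2 1"
```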
With The Missing Path, incompleteness can be explored starting from the map or from the summary and then switching between them to refine or expand the exploration.
\section{User Study: Iterative Design and Evaluation}
Using a methodology inspired by MILCS~\cite{Shneiderman2006}, we worked with Wikidata contributors to validate our approach and iteratively improve the design of the tool. This methodology is optimised to evaluate creativity support tools, and analysing incompleteness is a task that demands creativity, with no established method or measure to assess its effectiveness.
It relies on acute knowledge of the data and of the workflow underlying their creation and editing.
\subsection{Participants}
We recruited 9 Wikidata contributors (2 female, 7 male) via calls on Wikidata mailing lists and Twitter. 3 were based in France, 1 in Sweden, 1 in Germany, 1 in the Netherlands, 1 in Australia, and 1 in the USA. 4 of them used Wikidata in the context of their work, and 5 as volunteers. They were 30 to 59 years old (avg: 39.89 yo, median: 34 yo). Their experience contributing ranged from 6 months to 7 years (avg: 3.46 years, median: 4 years). They spent between 1 and 165 hours a month contributing (avg: 52.89 hours, median: 24 hours). All participation was voluntary and without compensation.
\subsection{Set-up}
The interviews were led online through a videoconferencing system. We used an online survey form to guide participants through the first interview and to collect demographic information. Our tool was run on a web server hosted by the laboratory, and logs were stored in a database on our server.
\subsection{Procedure}
\subsubsection{First interview}
After participants went through the informed consent form and we collected demographic information, the interview was guided by the following questions:
\begin{em}
\begin{enumerate*}
\item Which Wikidata projects do you contribute to?
\item How do you decide which data you will update in priority?
\item Did it ever happen that you wanted to contribute and didn't know where to start?
\item Can you tell me about the last item you edited?
\item Do you propose items for others to update? How do you select them?
\end{enumerate*}
\end{em}
Then we gave a quick overview of the tool and asked participants if they would be interested in visualising a collection with it.
\subsubsection{Second interview}
We first shared our screen with participants to present the tool and its documentation. We demonstrated basic tasks on the Comics collection in a 5-minute demo.
Then participants took control, sharing their screen so that we were able to observe them. They registered their unique identifier in the tool for logs and performed the same tasks on their own collections.
We explained to them how to give feedback using Gitlab issues. These issues can be of three types: feature, problem, and insight.
We encouraged participants to use any other communication channel if they felt more comfortable with it, explaining that we would transform it into issues ourselves.
At the end of the interview, we created issues to file the reactions we had observed.
\subsubsection{Follow-up}
We communicated with participants by email (and a mix of Twitter direct messages and email for one of them). We conducted an additional video interview with four of them, during which we assisted them with the use of the tool when needed.
\medskip
We name our participants P1 to P9, according to their unique identifier.
We logged a total of 298 actions attributed to our participants, distributed as follows: add a condition (46), remove from condition (20), retrieve subset (74), compute projection (21), clear selection (21), load collection (61), and selectColor (55). P1 had no logs at all --- his web browser privacy settings interfered with our log collection mechanism, although he reported using the tool.
Over 4 months we conducted a total of 22 interviews, with an average of $2.44$ interviews per participant (median 3), and we received a total of 111 emails or Twitter direct messages, with an average of 12.33 messages per participant (median 11). We extracted a total of 78 issues. Only three were filed directly by a participant; we transcribed all others from the interviews (54) and emails (19).
One participant dropped out after the first interview, and one after the second, without giving a reason.
We used a total of 12 collections during the study, as listed in \autoref{tab:collection}.
Comics was our demo collection. Each participant had an initial collection, and three asked for the analysis of an additional collection during the process. The one who dropped out after the first interview had no collection.
\subsection{Data collection and analysis}
We recorded the first interview. For the second and third interviews, we relied on our notes to transcribe issues right after the interview. We also transcribed issues from emails and messages we received.
At the end of the study, we exported the answers to the form and the issues into \texttt{csv} files, and we tagged the type (one of \textit{collection, feature, general comment, insight, problem}) and the status (one of \textit{solved, not relevant, future work}) of issues.
\subsection{Results}
We analyse the results with regard to \md{usability issues and validation of the approach.}
\begin{table}
\centering
\begin{tabular}{llcc}
\textbf{ID} &\textbf{Description} & \makecell{\textbf{number of} \\\textbf{entities}}& \makecell{\textbf{number of} \\\textbf{paths}}\\
\toprule
C1 &Comics & 4567 & 401 \\
C2 &French deputies & 14513 & 1350 \\
C3 &BFI movies & 6666 & 985 \\
C4 &Ice Skating 1 & 2204 & 94 \\
C5 &Ice Skating 2 & 1377 & 70 \\
C6 &Illuminati* & 7938 & 183 \\
C7 &Maps & 142 & 109 \\
C8 &Monuments in France & 48845 & 775 \\
C9 &Monuments in Brittany & 4210 & 367 \\
C10 & Research institutes & 235 & 353\\
C11 &Swedish female sculptors & 292 & 395\\
C12 &Swedish photographers & 760 & 739\\
\bottomrule
\end{tabular}
\caption{Data collections visualised in the tool for the evaluation, available in the demo instance. * The Illuminati collection comes from an instance of Wikibase, Factgrid}\label{tab:collection}
\end{table}
\subsubsection{\md{Usability issues}}
\md{The iterative design process helped us solve usability issues.}
The most critical issue was the understanding of the map. In an earlier version of the tool, the interface emphasised information missing in a subset after its summary was retrieved. P3 stated that he found it difficult to understand which paths were missing. We decided to \md{precompute} default zones on the map and to display missing path names on hover to make the interface self-explanatory. We also added the yellow color to highlight what was missing. After this, users reacted much more positively to the map: ``I understand now'' (P2), ``Now I understand it better'' (P3).
\md{A second issue was the difficulty to identify a distinctive feature in the selection.}
P7 suggested highlighting the paths for which the summary appears to be significantly different in the subset than in the full set. In an earlier version, inspecting a cluster to understand its specificity required looking at each path one by one, which was long and tedious. Participants did not know where to start, and they sometimes repeatedly opened paths for which the summary consisted of `other' aggregates, which was not much help to identify the specificity of a group. We added the automatic detection of significant differences, as described in section \nameref{sec:distrib:interest}, and highlighted them in pink.
This feature saves substantial time and provides guidance.
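One possible criterion for this detection (our simplification; the tool's exact test may differ) is a chi-square statistic comparing the subset's value counts with expectations derived from the full set:

```python
# Flag a path in pink when the subset's value distribution deviates strongly
# from what the full set would predict (chi-square over observed categories).
def chi_square(subset_counts, full_counts):
    n_sub = sum(subset_counts.values())
    n_full = sum(full_counts.values())
    stat = 0.0
    for value, observed in subset_counts.items():
        expected = n_sub * full_counts.get(value, 0) / n_full
        if expected > 0:
            stat += (observed - expected) ** 2 / expected
    return stat

# Toy data: description languages in the full set vs. a 20-entity cluster
# whose descriptions are all in Dutch.
full = {"fr": 500, "nl": 100, "en": 400}
sub = {"nl": 20}
print(chi_square(sub, full))   # 162.0 -- large, so the path gets highlighted
```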
The way we presented summaries also evolved during the process. We had first designed summaries for integers as boxplots, thinking it could be interesting for users to select only outliers or the median. We realised that our users could not read boxplots and ignored those summaries, so we switched to a stacked chart of unique values, similar to the one used for text values. Dates and times, on the other hand, were initially designed as a stacked chart, which most of the time resulted in a single `Other' aggregate. P1 asked if we could group dates, so we implemented binning into hours, days, months, or years. This feature improved the usability of some path summaries, for instance the modification date, as we can see in \autoref{fig:dates}.
When he first used the tool, P1 also tried to select the `other' aggregate as a condition for selection, which was at the time not possible. We also added this feature.
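The binning added after P1's feedback can be sketched by truncating ISO-8601 timestamps (illustration only; the granularity cut-offs below are our assumption):

```python
from collections import Counter

# Bin ISO-8601 timestamps by truncating a string prefix: 4 characters keep
# the year, 7 the month, 10 the day, 13 the hour.
PREFIX = {"year": 4, "month": 7, "day": 10, "hour": 13}

def bin_dates(timestamps, granularity="year"):
    return Counter(t[:PREFIX[granularity]] for t in timestamps)

stamps = ["2018-05-01T10:22:00", "2018-11-12T08:00:00", "2019-01-03T17:45:00"]
print(bin_dates(stamps))   # Counter({'2018': 2, '2019': 1})
```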
\begin{figure}
\frame{\includegraphics[width=\columnwidth]{fig10_dates}}
\caption{Evolution of the layout for dates summaries during the iterative process. This is the summary for the path \texttt{schema:dateModified} on the collection \texttt{C1 Comics}. In the first version (top) the dates were grouped by unique values, which very often resulted in an `other' aggregate, laid out with a dotted texture. After participants' feedback we implemented binning for dates (bottom), which results in 4 groups, from right to left: ``2018'' (4150), ``2019'' (4423), ``2020'' (460), and `other' (100) --- hovering the rectangles reveals the values and counts. Each value can be used as a condition for selection.}
\label{fig:dates}
\end{figure}
\md{All in all, participants suggested 32 new features and reported 15 problems. We implemented 20 of the new features, marked 3 as irrelevant in the context of our work, and kept 9 for future work. We solved 13 problems, marked one as an exception, and one for future work.}
\subsubsection{Validation of the approach}
We were particularly interested in knowing whether users would rely on the map to start the exploration of subsets. P1, P9, and P2 did.
P1 explained: ``I see it as a way to start the exploration, see the outlines''. He had already spent a lot of time curating this set of data and knew them well. However, there are more than 14,000 entities in the set, and he worked more specifically on those related to the French Fifth Republic, so the map was useful to spot problems he was not aware of. For instance, the first cluster he inspected during the second interview was a set of 47 deputies having no place of birth. He commented: ``There should not be entities with no place of birth. This group can easily be fixed, the information is available through the Sycomore French deputies database, and they all have a Sycomore ID'' (\texttt{wdt:P Sycomore ID}).
During the third interview, another cluster showed entities (deputies) with no given name. He explained: ``All deputies should have a given name. This can be fixed easily from the labels.'' He thought that even if the focus might switch from the map to the histogram as you get to know your data better and they become more homogeneous, there can always be new stages when you incorporate new sets of entities and want to bring them to the same level of quality as the rest of the data when the map could prove to be useful again.
P9 also started from the map. He was planning to import and manage his own catalogue of movies in Wikidata. Since he was still at a planning step, we had selected the BFI movie database, which was similar in size and type of information to what his own data would later be. He figured out there was a cluster of 16 entities without titles. He inspected the summary and found out those entities all had a label, which meant the titles would be very easy to fix. A double-check through the histogram showed that there were 125 entities with no title but a label. Another cluster had no directors. This led him to use the histogram to look for all entities having no directors, which amounted to 1380 entities. Looking at the map, he could see they were spread into about 20 different clusters, depending on what else was missing. Hovering the clusters then gave him an overview of the possible combinations of missing attributes. He inspected two of them in more detail. Trying to imagine how he could use the tool later with his own data, he said he would probably want to configure the projection with paths he wished to achieve full completeness for, and then work on the data until there is only one big cluster.
P2 needed to customize the map, using only the paths that were of prior importance to him to compute the projection. This reduced the map to a few clusters that he found meaningful. ``Now I am satisfied. This is the image I wanted when all the irrelevant criteria that complexified the map have been removed.'' Then he started his exploration from the histogram. He used the combination of conditions to find the list of all monuments qualified as churches --- having \texttt{wd:Q16970 church building} as a value for \texttt{wdt:P31 instance of} --- but with no identifier \texttt{wdt:P3963 Clochers de France ID}, specific to churches. He expressed the wish to see the entities highlighted on the map, a feature described in section \nameref{sec:map:color}, which we added following his request. While explaining that \texttt{wdt:P18 images} was not a relevant path for the projection in his opinion, because it was normal that some entities had no images, he exclaimed ``I know what I am going to do this afternoon!'' He had figured out he could select all the entities having no \texttt{wdt:P18 image} but a \texttt{wdt:P373 Commons category}, because if they had a Commons identifier, then he knew he could find an image. He added ``I could have done the same with SPARQL, but I would never have had the idea. The tool gave me the idea.''
P5 preferred to start from the summaries and ignored the map.
\md{She suggested a feature to support better exploration from the summaries: the possibility to combine conditions to refine the selection. Interestingly, this made our tool much more flexible, able to support more diverse tasks.}
\begin{figure}
\frame{\includegraphics[width=\columnwidth]{fig11_datasetcomplaint}}
\caption{Entities highlighted on the map of the collection C6 when all entities having a \texttt{factgrid:prop/P17 Dataset complaint} are selected. The contributor who made those statements explained he worked on small groups of consistent entities, and we can see they appear as such on our map, although P17 is not used to compute the map. This shows that those consistent groups miss the same well represented attributes.}
\label{fig:complaint}
\end{figure}
\md{In total, participants made 16 general comments on the approach and reported 12 insights on their data.}
\medskip
In summary, our study helped us make the tool more flexible and adapt it to different workflows.
We had first thought the map would be the main entry point, and statistical summaries would help refine and analyse the clusters. We realised that looking at the histogram overview also triggered ideas of specific completeness profiles (e.g.\ entities missing a specific path but not missing another one, or entities with a specific feature and missing a path), which is another way to detect coherent clusters. The full list laid flat triggered associations that could be quickly verified.
\section{Conclusion and Future Work}
We have presented \emph{The Missing Path}\xspace, a visualisation tool to support data producers in analysing incompleteness in their data to identify subsets of items that can be fixed. It is based on two novel representations of RDF data; the map provides a structural snapshot of a collection, reflecting its history and allowing users to untangle its various strata; the histograms and stacked charts laid out in mirror allow comparing a subset with the full collection, revealing its distinctive features. The coordination of those new visualisations supports users in the interactive exploration and analysis of incomplete subsets.
Our user study confirmed that Wikidata contributors could gain new insights and identify groups of entities that can be fixed. Participants guided us to make the tool more understandable and usable. In doing so, they also led us to make it more flexible, supporting various workflows, and this pushed our tool in the direction of an exploratory analysis tool.
To our knowledge, there is no such tool for RDF data. In the future, we would like to investigate other analysis scenarios besides incompleteness. We will also address the need to keep track of the exploration within the tool itself, beyond exporting the data, so that users can monitor the evolution of their collection.
Having heard of our tool, Wikidata product managers became intrigued and asked for a demonstration. As one of them told us when we demonstrated the tool, ``One of the big problems our contributors face in keeping the data quality and completeness high is the fact that it is very hard to see the big picture due to Wikidata's modelling being centred around individual entities. Your tool is addressing this issue''. We will continue to interact with the Wikidata community and other RDF data producers to improve our tool and support better quality Knowledge Graphs.
\begin{acks}
The authors wish to thank all the participants in the experiment.
\end{acks}
\bibliographystyle{SageV}
\section{Introduction}
In an interesting paper \cite{MW} the physicists Mehlig and Wilkinson
introduce, in connection with their study of the Gutzwiller semiclassical
trace formula, a class of unitary operators $\widehat{S}:L^{2}(\mathbb{R}%
^{n})\longrightarrow L^{2}(\mathbb{R}^{n})$. These operators are defined as
follows: let $S\in Sp(n)$ have no eigenvalue equal to one; to $S$ one
associates the Weyl operator
\begin{equation}
\widehat{R}(S)=\left( \frac{1}{2\pi }\right) ^{n}\frac{i^{\nu }}{\sqrt{|\det
(S-I)|}}\int e^{\frac{i}{2}\left\langle M_{S}z_{0},z_{0}\right\rangle }%
\widehat{T}(z_{0})d^{2n}z_{0} \label{sf3}
\end{equation}%
where $\widehat{T}(z_{0})$ is the Weyl--Heisenberg operator and%
\begin{equation}
M_{S}=\tfrac{1}{2}J(S+I)(S-I)^{-1} \label{ms}
\end{equation}%
$I$ being the identity and $J$ the standard symplectic matrix (see below).
The index $\nu $ is an integer related to the sign of $\det (S-I)$, which is
not studied in the general case in \cite{MW}. Mehlig and Wilkinson
moreover show that
\begin{equation}
\widehat{R}(SS^{\prime })=\pm \widehat{R}(S)\widehat{R}(S^{\prime })
\label{sf4}
\end{equation}%
for all $S,S^{\prime }$ for which both sides are defined. They claim that
these operators belong to the metaplectic group. This property is however
not quite obvious; what is acceptably \textquotedblleft
obvious\textquotedblright\ is that $\widehat{R}(S)$ is a \textit{multiple}
by a scalar factor of modulus one of either of the two metaplectic operators $%
\pm \widehat{S}\in Mp(n)$ associated with $S$; this is achieved using the
metaplectic covariance of the Heisenberg--Weyl operators (see below). The
purpose of this paper is to make Mehlig and Wilkinson's statement precise by
comparing explicitly the integer $\nu $ in (\ref{sf3}) with the Maslov
indices on the metaplectic group we have studied in a previous work \cite%
{AIF}. This is indeed important --and not just an academic exercise-- since
the ultimate goal in \cite{MW} is to apply formula (\ref{sf3}) to give a new
proof of Gutzwiller's trace formula for chaotic systems. It is well-known
that the calculation of the associated \textquotedblleft Maslov
indices\textquotedblright\ is notoriously difficult: it suffices to have a
look on the impressive bibliography devoted to that embarrassingly subtle
topic. We will, in addition, give a semiclassical interpretation of $%
\widehat{R}(S)$, expressed in terms of the phase space wavefunctions we
introduced in \cite{Bullsci,IHP}.
\begin{remark}
An alternative approach to the results of this paper would be to use Howe's
beautiful \textquotedblleft oscillator group\textquotedblright\ method \cite%
{Howe} (see \cite{Folland} for a review); this would however in our case
lead to unnecessary technical complications.
\end{remark}
\subsection*{Notations}
We denote by $\sigma $ the canonical symplectic form on $\mathbb{R}_{z}^{2n}=%
\mathbb{R}_{x}^{n}\times \mathbb{R}_{p}^{n}$%
\begin{equation*}
\sigma (z,z^{\prime })=\left\langle p,x^{\prime }\right\rangle -\left\langle
p^{\prime },x\right\rangle \text{ \ if \ }z=(x,p)\text{, }z^{\prime
}=(x^{\prime },p^{\prime })
\end{equation*}%
that is%
\begin{equation*}
\sigma (z,z^{\prime })=\left\langle Jz,z^{\prime }\right\rangle \text{ \ \ ,
\ }J=%
\begin{bmatrix}
0 & I \\
-I & 0%
\end{bmatrix}%
\text{.}
\end{equation*}%
The real symplectic group $Sp(n)$ consists of all linear automorphisms $S:%
\mathbb{R}_{z}^{2n}\longrightarrow \mathbb{R}_{z}^{2n}$ such that $\sigma
(Sz,Sz^{\prime })=\sigma (z,z^{\prime })$ for all $z,z^{\prime }$. It is a
connected Lie group. We denote by $\ell _{X}$ and $\ell _{P}$ the Lagrangian
planes $\mathbb{R}_{x}^{n}\times 0$ and $0\times \mathbb{R}_{p}^{n}$,
respectively. $\mathcal{S}(\mathbb{R}^{n})$ is the Schwartz space of rapidly
decreasing functions on $\mathbb{R}^{n}$, and its dual $\mathcal{S}^{\prime
}(\mathbb{R}^{n})$ the space of tempered distributions.
\section{Prerequisites}
\subsection{Standard theory of $Mp(n)$: Review}
The material of this first subsection is quite classical; see for instance
\cite{Folland,AIF} and the references therein.
Every $S\in Mp(n)$ is the product of two \textquotedblleft quadratic Fourier
transforms\textquotedblright , which are operators $S_{W,m}$ defined on $%
\mathcal{S}(X)$ by%
\begin{equation}
S_{W,m}f(x)=\left( \frac{1}{2\pi i}\right) ^{n}i^{m}\sqrt{|\det L|}\int
e^{iW(x,x^{\prime })}f(x^{\prime })d^{n}x^{\prime } \label{swm1}
\end{equation}%
where $W$ is a quadratic form in the variables $x,x^{\prime }$ of the type%
\begin{equation}
W(x,x^{\prime })=\frac{1}{2}\langle Px,x\rangle -\langle Lx,x^{\prime
}\rangle +\frac{1}{2}\langle Qx^{\prime },x^{\prime }\rangle \label{wplq}
\end{equation}%
with $P=P^{T}$, $Q=Q^{T}$, $\det L\neq 0$. The integer $m$ appearing in (\ref%
{swm1}) corresponds to a choice of $\arg \det L$:%
\begin{equation*}
m\pi \equiv \arg \det L\text{ \ }\func{mod}2\pi
\end{equation*}%
and to every $W$ there thus correspond two different choices of $m$ modulo $%
4$: if $m$ is one choice, then $m+2$ is the other (this of course reflects
the fact that $Mp(n)$ is a two-fold covering of $Sp(n)$). The projection $%
\pi :Mp(n)\longrightarrow Sp(n)$ is entirely specified by the datum of each $%
\pi (S_{W,m})$, and we have $\pi (S_{W,m})=S_{W}$ where%
\begin{equation*}
(x,p)=S_{W}(x^{\prime },p^{\prime })\Longleftrightarrow p=\partial
_{x}W(x,x^{\prime })\text{ \ and }p^{\prime }=-\partial _{x^{\prime
}}W(x,x^{\prime })\text{.}
\end{equation*}%
In particular,
\begin{equation}
S_{W}=%
\begin{bmatrix}
L^{-1}Q & L^{-1} \\
PL^{-1}Q-L^{T} & PL^{-1}%
\end{bmatrix}
\label{plq}
\end{equation}%
is the free symplectic automorphism generated by the quadratic form $W$;
observe that $S_{W}\ell _{P}\cap \ell _{P}=0$ for every $W$. The inverse $%
\widehat{S}_{W,m}^{-1}=\widehat{S}_{W,m}^{\ast }$ of $\widehat{S}_{W,m}$ is
the operator $\widehat{S}_{W^{\ast },m^{\ast }}$ where $W^{\ast }(x,x^{\prime
})=-W(x^{\prime },x)$ and $m^{\ast }=n-m$, $\func{mod}4$. Note that if
conversely $S$ is a free symplectic matrix
\begin{equation}
S=%
\begin{bmatrix}
A & B \\
C & D%
\end{bmatrix}%
\in Sp(n)\text{ \ , \ }\det B\neq 0 \label{free}
\end{equation}%
then $S=S_{W}$ with $P=B^{-1}A$, $L=B^{-1}$, $Q=DB^{-1}$.
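As a numerical sanity check (illustration only, $n=1$, so all blocks of (\ref{plq}) are scalars), one can verify that the matrix $S_{W}$ built from symmetric $P,Q$ and invertible $L$ satisfies $S^{T}JS=J$:

```python
# Check that S_W from (plq), n = 1, is symplectic: S^T J S = J.
# Plain Python, illustration only; P, L, Q are arbitrary with L invertible.
P, L, Q = 2.0, 3.0, -1.0
S = [[Q / L, 1 / L],
     [P * Q / L - L, P / L]]
J = [[0.0, 1.0], [-1.0, 0.0]]

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

St = [[S[j][i] for j in range(2)] for i in range(2)]  # transpose of S
StJS = mul(mul(St, J), S)
print(StJS)   # equals J up to rounding
```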
\subsection{Heisenberg--Weyl operators}
For $z_{0}=(x_{0},p_{0})$ we denote by $T(z_{0})$ the translation $%
z\longmapsto z+z_{0}$; it acts on functions by push-forward: $%
T(z_{0})f(z)=f(z-z_{0})$. We denote by $\widehat{T}(z_{0})$ the
corresponding Heisenberg--Weyl operator: for $f\in \mathcal{S}(\mathbb{R}%
^{n})$ we have
\begin{equation*}
\widehat{T}(z_{0})f(x)=e^{i(\left\langle p_{0},x\right\rangle -\tfrac{1}{2}%
\left\langle p_{0},x_{0}\right\rangle )}f(x-x_{0})\text{.}
\end{equation*}%
The operators $\widehat{T}(z_{0})$ satisfy the metaplectic covariance
formula:%
\begin{equation}
\widehat{S}\widehat{T}(z)=\widehat{T}(Sz)\widehat{S}\text{ \ \ }(S=\pi (%
\widehat{S})) \label{meco}
\end{equation}%
for every $\widehat{S}\in Mp(n)$ and $z$. In fact, the metaplectic operators
are the only unitary operators, up to a factor in $S^{1}$, satisfying (\ref%
{meco}):
\begin{quote}
\emph{For every }$S\in Sp(n)$ \emph{there exists a unitary transformation }$%
\widehat{U}$ in $L^{2}(\mathbb{R}^{n})$ \emph{satisfying (\ref{meco}) and }$%
\widehat{U}$ \emph{is uniquely determined apart from a constant factor of
modulus one.}
\end{quote}
The Heisenberg--Weyl operators moreover satisfy the relations
\begin{equation}
\widehat{T}(z_{0})\widehat{T}(z_{1})=e^{-i\sigma (z_{0},z_{1})}\widehat{T}%
(z_{1})\widehat{T}(z_{0}) \label{noco1}
\end{equation}
\begin{equation}
\widehat{T}(z_{0}+z_{1})=e^{-\tfrac{i}{2}\sigma (z_{0},z_{1})}\widehat{T}%
(z_{0})\widehat{T}(z_{1}) \label{noco2}
\end{equation}
as is easily seen from the definition of these operators.
\subsection{Weyl operators}
Let $a^{w}$ be the Weyl operator with symbol $a$:
\begin{equation*}
a^{w}f(x)=\left( \tfrac{1}{2\pi }\right) ^{n}\int e^{i\left\langle
p,x-y\right\rangle }a(\tfrac{1}{2}(x+y),p)f(y)d^{n}yd^{n}p
\end{equation*}%
where $f\in \mathcal{S}(\mathbb{R}^{n})$; equivalently%
\begin{equation*}
a^{w}=\int a_{\sigma }(z_{0})\widehat{T}(z_{0})d^{2n}z_{0}
\end{equation*}%
where $a_{\sigma }$ is the symplectic Fourier transform $F_{\sigma }a$
defined by%
\begin{equation*}
F_{\sigma }a(z)=\left( \tfrac{1}{2\pi }\right) ^{n}\int e^{i\sigma
(z,z^{\prime })}a(z^{\prime })d^{2n}z^{\prime }\text{.}
\end{equation*}%
The kernel of $a^{w}$ is related to $a$ by the formula
\begin{equation*}
a(x,p)=\int e^{-i\left\langle p,y\right\rangle }K(x+\tfrac{1}{2}y,x-\tfrac{1%
}{2}y)d^{n}y\text{.}
\end{equation*}
The Mehlig--Wilkinson operator (\ref{sf3}) is the Weyl operator with twisted
Weyl symbol%
\begin{equation}
a_{\sigma }(z)=\left( \frac{1}{2\pi }\right) ^{n}\frac{i^{\nu }}{\sqrt{|\det
(S-I)|}}e^{\frac{i}{2}\left\langle M_{S}z,z\right\rangle }\text{.}
\label{asig}
\end{equation}
\subsection{Generalized Fresnel Formula}
We will use the following formula, generalizing the usual Fresnel integral
to complex Gaussians. Let $M$ be a real symmetric $n\times n$ matrix. If $M$
is invertible then the Fourier transform of the exponential $\exp
(i\left\langle Mx,x\right\rangle /2)$ is given by the formula%
\begin{equation}
\left( \tfrac{1}{2\pi }\right) ^{n/2}\int e^{-i\left\langle p,x\right\rangle
}e^{\frac{i}{2}\left\langle Mx,x\right\rangle }d^{n}x=|\det M|^{-1/2}e^{%
\frac{i\pi }{4}\limfunc{sgn}M}e^{-\frac{i}{2}\left\langle
M^{-1}p,p\right\rangle } \label{fres}
\end{equation}%
where $\limfunc{sgn}M$, the \textquotedblleft signature\textquotedblright\
of $M$, is the number of positive eigenvalues of $M$ minus the number of
negative eigenvalues.
For a proof see for instance \cite{Folland}, App. A.
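The formula is easy to check numerically in dimension $n=1$. The sketch below (illustration only) regularises the oscillatory integral with a small Gaussian damping $e^{-\varepsilon x^{2}/2}$, a device of ours whose accuracy limits the comparison, and matches it against the closed form:

```python
import cmath
import math

# Numerical check of the generalized Fresnel formula, n = 1, M = (m), p fixed.
# The damping exp(-eps*x^2/2) makes the Riemann sum converge; the error of
# the comparison is of the order of eps.
m, p, eps = 1.0, 1.0, 0.01           # sgn M = 1 for m > 0
h, halfwidth = 0.005, 70.0           # grid step and truncation of the x-axis
N = int(2 * halfwidth / h)
integral = h * sum(
    cmath.exp(-1j * p * x + (1j * m - eps) * x * x / 2)
    for x in (-halfwidth + k * h for k in range(N + 1))
)
lhs = integral / math.sqrt(2 * math.pi)
rhs = (abs(m) ** -0.5
       * cmath.exp(1j * math.pi / 4)          # exp(i*pi*sgn(M)/4)
       * cmath.exp(-1j * p * p / (2 * m)))    # exp(-i<M^{-1}p,p>/2)
print(abs(lhs - rhs))                # small, of the order of eps
```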
\section{Discussion of the Mehlig--Wilkinson Formula}
The Mehlig--Wilkinson operators $\widehat{R}(S)$ are Weyl operators with
twisted Weyl symbol%
\begin{equation*}
a_{\sigma }(z)=\left( \frac{1}{2\pi }\right) ^{n}\frac{i^{\nu }}{\sqrt{|\det
(S-I)|}}e^{\frac{i}{2}\left\langle M_{S}z,z\right\rangle }\text{.}
\end{equation*}%
We begin by giving two straightforward alternative formulations of these
operators.
\subsection{Equivalent formulations}
We begin by remarking that the matrix $M_{S}=\frac{1}{2}J(S+I)(S-I)^{-1}$ is
symmetric; this immediately follows from the conditions%
\begin{equation*}
S\in Sp(n)\Longleftrightarrow S^{T}JS=J\Longleftrightarrow SJS^{T}=J\text{.}
\end{equation*}%
Notice that (\ref{ms}) can be \textquotedblleft solved\textquotedblright\ in
$S$, yielding $S=(2M-J)^{-1}(2M+J)$.
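For instance, for $n=1$ and $S=\limfunc{diag}(\lambda ,\lambda ^{-1})$ with $%
\lambda \neq 1$, a direct computation (with the standard choice $J=%
\begin{bmatrix}
0 & 1 \\
-1 & 0%
\end{bmatrix}%
$) gives
\begin{equation*}
(S+I)(S-I)^{-1}=\limfunc{diag}(c,-c)\text{ \ with \ }c=\frac{\lambda +1}{%
\lambda -1}
\end{equation*}%
and hence
\begin{equation*}
M_{S}=-\frac{c}{2}%
\begin{bmatrix}
0 & 1 \\
1 & 0%
\end{bmatrix}%
\end{equation*}%
which is indeed symmetric.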
\begin{proposition}
The operator
\begin{equation}
\widehat{R}(S)=\left( \frac{1}{2\pi }\right) ^{n}\frac{i^{\nu }}{\sqrt{|\det
(S-I)|}}\int e^{\frac{i}{2}\left\langle M_{S}z_{0},z_{0}\right\rangle }%
\widehat{T}(z_{0})d^{2n}z_{0} \label{alf0}
\end{equation}%
can be written in the following alternative two forms:%
\begin{equation}
\widehat{R}(S)=\left( \frac{1}{2\pi }\right) ^{n}i^{\nu }\sqrt{|\det (S-I)|}%
\int e^{-\frac{i}{2}\sigma (Sz_{0},z_{0})}\widehat{T}%
((S-I)z_{0})d^{2n}z_{0} \label{alf1}
\end{equation}%
\begin{equation}
\widehat{R}(S)=\left( \frac{1}{2\pi }\right) ^{n}i^{\nu }\sqrt{|\det (S-I)|}%
\int \widehat{T}(Sz_{0})\widehat{T}(-z_{0})d^{2n}z_{0} \label{alf2}
\end{equation}
for $\det (S-I)\neq 0$.
\end{proposition}
\begin{proof}
We have
\begin{equation*}
\tfrac{1}{2}J(S+I)(S-I)^{-1}=\tfrac{1}{2}J+J(S-I)^{-1}
\end{equation*}%
hence, in view of the antisymmetry of $J$,%
\begin{equation*}
\left\langle M_{S}z_{0},z_{0}\right\rangle =\left\langle
J(S-I)^{-1}z_{0},z_{0}\right\rangle =\sigma ((S-I)^{-1}z_{0},z_{0})
\end{equation*}%
Performing the change of variables $z_{0}\longmapsto (S-I)^{-1}z_{0}$, which
introduces the Jacobian factor $|\det (S-I)|$, we can
rewrite the integral in the right-hand side of (\ref{alf0}) as%
\begin{eqnarray*}
\int e^{\frac{i}{2}\left\langle M_{S}z_{0},z_{0}\right\rangle }\widehat{T}%
(z_{0})d^{2n}z_{0} &=&|\det (S-I)|\int e^{\frac{i}{2}\sigma
(z_{0},(S-I)z_{0})}\widehat{T}((S-I)z_{0})d^{2n}z_{0} \\
&=&|\det (S-I)|\int e^{-\frac{i}{2}\sigma (Sz_{0},z_{0})}\widehat{T}%
((S-I)z_{0})d^{2n}z_{0}
\end{eqnarray*}%
hence (\ref{alf1}). Taking into account the relation (\ref{noco2}) we have%
\begin{equation*}
\widehat{T}((S-I)z_{0})=e^{\tfrac{i}{2}\sigma (Sz_{0},z_{0})}\widehat{T}%
(Sz_{0})\widehat{T}(-z_{0})
\end{equation*}%
and formula (\ref{alf2}) follows.
\end{proof}
\begin{corollary}
We have $\widehat{R}(S)=c_{S}\widehat{S}_{W,m}$ where $c_{S}$ is a complex
constant with $|c_{S}|=1$.
\end{corollary}
\begin{proof}
We begin by noting that $\widehat{R}(S)$ satisfies the metaplectic
covariance relation
\begin{equation*}
\widehat{R}(S)\widehat{T}(z_{0})=\widehat{T}(Sz_{0})\widehat{R}(S)
\end{equation*}%
as immediately follows from the alternative form (\ref{alf2}) of $\widehat{R}%
(S)$. On the other hand, a straightforward calculation using formula (\ref%
{alf1}) shows that $\widehat{R}(S)$ is unitary, hence the claim.
\end{proof}
\subsection{The case $\widehat{S}=\widehat{S}_{W,m}$}
We are going to show that the Mehlig--Wilkinson operators coincide with the
metaplectic operators $\widehat{S}_{W,m}$ when $S=S_{W}$ and we will
thereafter determine the correct choice for $\nu $; we will see that it is
related by a simple formula to the usual Maslov index as defined in \cite%
{AIF}.
Let us first prove the following technical result:
\begin{lemma}
\label{lemma1}Let $S_{W}$ be a free symplectic matrix (\ref{free}). We have
\begin{equation}
\det (S_{W}-I)=\det B\det (B^{-1}A+DB^{-1}-B^{-1}-(B^{T})^{-1})
\label{bofor1}
\end{equation}%
that is, when $S_{W}$ is written in the form (\ref{plq}):
\begin{equation}
\det (S_{W}-I)=\det (L^{-1})\det (P+Q-L-L^{T})\text{.} \label{bofor2}
\end{equation}
\end{lemma}
\begin{proof}
We begin by noting that since $B$ is invertible we can write $S_{W}-I$ as%
\begin{equation*}
\begin{bmatrix}
A-I & B \\
C & D-I%
\end{bmatrix}%
=%
\begin{bmatrix}
0 & B \\
I & D-I%
\end{bmatrix}%
\begin{bmatrix}
C-(D-I)B^{-1}(A-I) & 0 \\
B^{-1}(A-I) & I%
\end{bmatrix}%
\end{equation*}%
hence%
\begin{equation*}
\det (S_{W}-I)=\det B\det (C-(D-I)B^{-1}(A-I))\text{.}
\end{equation*}%
Since $S$ is symplectic we have $C-DB^{-1}A=-(B^{T})^{-1}$ (use for instance
the fact that $S^{T}JS=SJS^{T}=J$) and hence%
\begin{equation*}
C-(D-I)B^{-1}(A-I)=B^{-1}A+DB^{-1}-B^{-1}-(B^{T})^{-1}\text{;}
\end{equation*}
the Lemma follows.
\end{proof}
\begin{proposition}
Let $S$ be a free symplectic matrix (\ref{free}) and $\widehat{R}(S)$ the
corresponding Mehlig--Wilkinson operator. We have $\widehat{R}(S)=\widehat{S}%
_{W,m}$ provided that $\nu $ is chosen so that%
\begin{equation}
\nu \equiv m-\limfunc{Inert}(P+Q-L-L^{T})\text{ \ }\func{mod}4
\label{Maslov1}
\end{equation}%
(here $\limfunc{Inert}(P+Q-L-L^{T})$ denotes the number of negative
eigenvalues of the symmetric matrix $P+Q-L-L^{T}$).
\end{proposition}
\begin{proof}
Recall that we have shown that $\widehat{R}(S)=c_{S}\widehat{S}_{W,m}$ where
$c_{S}$ is a complex constant with $|c_{S}|=1$. Let us determine that
constant. Let $\delta \in \mathcal{S}^{\prime }(\mathbb{R}^{n})$ be the
Dirac distribution centered at $x=0$; setting%
\begin{equation*}
C=\left( \frac{1}{2\pi }\right) ^{n}\frac{i^{\nu }}{\sqrt{|\det (S_{W}-I)|}}
\end{equation*}%
we have, by definition of $\widehat{R}(S)$,
\begin{eqnarray*}
\widehat{R}(S)\delta (x) &=&C\int e^{\frac{i}{2}\left\langle
M_{S}z_{0},z_{0}\right\rangle }e^{i(\left\langle p_{0},x\right\rangle -\frac{%
1}{2}\left\langle p_{0},x_{0}\right\rangle )}\delta (x-x_{0})d^{2n}z_{0} \\
&=&C\int e^{\frac{i}{2}\left\langle M_{S}(x,p_{0}),(x,p_{0})\right\rangle
}e^{\frac{i}{2}\left\langle p_{0},x\right\rangle }\delta (x-x_{0})d^{2n}z_{0}
\end{eqnarray*}%
hence, setting $x=0$,%
\begin{equation*}
\widehat{R}(S)\delta (0)=C\int e^{\frac{i}{2}\left\langle
M_{S}(0,p_{0}),(0,p_{0})\right\rangle }\delta (-x_{0})d^{2n}z_{0}
\end{equation*}%
that is, since $\int \delta (-x_{0})d^{n}x_{0}=1$,%
\begin{equation}
\widehat{R}(S)\delta (0)=\left( \frac{1}{2\pi }\right) ^{n}\frac{i^{\nu }}{%
\sqrt{|\det (S-I)|}}\int e^{\frac{i}{2}\left\langle
M_{S}(0,p_{0}),(0,p_{0})\right\rangle }d^{n}p_{0}\text{.} \label{sdo}
\end{equation}%
Let us calculate the scalar product
\begin{equation*}
\left\langle M_{S}(0,p_{0}),(0,p_{0})\right\rangle =\sigma
((S-I)^{-1}(0,p_{0}),(0,p_{0}))\text{.}
\end{equation*}%
The relation $(x,p)=(S-I)^{-1}(0,p_{0})$ is equivalent to $%
S(x,p)=(x,p+p_{0}) $ that is to%
\begin{equation*}
p+p_{0}=\partial _{x}W(x,x)\text{ \ and \ }p=-\partial _{x^{\prime }}W(x,x)%
\text{.}
\end{equation*}%
Using the explicit form (\ref{wplq}) of $W$ together with Lemma \ref{lemma1}
these relations yield%
\begin{equation*}
x=(P+Q-L-L^{T})^{-1}p_{0}\text{ \ ; \ }p=(L-Q)(P+Q-L-L^{T})^{-1}p_{0}
\end{equation*}%
and hence%
\begin{equation}
\left\langle M_{S}(0,p_{0}),(0,p_{0})\right\rangle =-\left\langle
(P+Q-L-L^{T})^{-1}p_{0},p_{0}\right\rangle \text{.} \label{bofor3}
\end{equation}%
Applying Fresnel's formula (\ref{fres}) we get%
\begin{equation*}
\left( \frac{1}{2\pi }\right) ^{n}\int e^{\frac{i}{2}\left\langle
M_{S}(0,p_{0}),(0,p_{0})\right\rangle }d^{n}p_{0}=e^{-\frac{i\pi }{4}%
\limfunc{sgn}(P+Q-L-L^{T})}|\det (P+Q-L-L^{T})|^{1/2}\text{;}
\end{equation*}%
since
\begin{equation*}
\frac{1}{\sqrt{|\det (S-I)|}}=|\det L|^{1/2}|\det (P+Q-L-L^{T})|^{-1/2}
\end{equation*}%
in view of (\ref{bofor2}) in Lemma \ref{lemma1} we thus have%
\begin{equation*}
\widehat{R}(S)\delta (0)=\left( \frac{1}{2\pi }\right) ^{n}i^{\nu }e^{-\frac{%
i\pi }{4}\limfunc{sgn}(P+Q-L-L^{T})}|\det L|^{1/2}\text{.}
\end{equation*}%
Now, by definition of $\widehat{S}_{W,m}$ we have%
\begin{equation*}
\widehat{S}_{W,m}\delta (0)=\left( \frac{1}{2\pi }\right) ^{n}i^{m-n/2}|\det
L|^{1/2}
\end{equation*}%
hence%
\begin{equation*}
i^{\nu }e^{-\frac{i\pi }{4}\limfunc{sgn}(P+Q-L-L^{T})}=i^{m-n/2}\text{.}
\end{equation*}%
It follows that we have%
\begin{equation*}
\nu -\frac{1}{2}\limfunc{sgn}(P+Q-L-L^{T})\equiv m-\frac{n}{2}\text{ \ }%
\func{mod}4
\end{equation*}%
which is the same thing as (\ref{Maslov1}) since $P+Q-L-L^{T}$ has rank $n$,
so that $\limfunc{sgn}(P+Q-L-L^{T})=n-2\limfunc{Inert}(P+Q-L-L^{T})$.
\end{proof}
\subsection{The general case}
Recall that we established in Lemma \ref{lemma1} the equality%
\begin{equation}
\det (S_{W}-I)=\det (L^{-1})\det (P+Q-L-L^{T}) \label{splq}
\end{equation}%
which is valid for all free symplectic matrices $S_{W}\in Sp(n)$. Also recall that every $%
\widehat{S}\in Mp(n)$ can be written (in infinitely many ways) as a product $%
\widehat{S}=\widehat{S}_{W,m}\widehat{S}_{W^{\prime },m^{\prime }}$. We are
going to show that $\widehat{S}_{W,m}$ and $\widehat{S}_{W^{\prime
},m^{\prime }}$ can in addition always be chosen such that $\det
(S_{W}-I)\neq 0$ and $\det (S_{W^{\prime }}-I)\neq 0$.
For that purpose we need the following straightforward factorization result
(see \cite{AIF}):
\begin{lemma}
Let $W$ be given by (\ref{wplq}); then
\begin{equation}
\widehat{S}_{W,m}=\widehat{V}_{-P}\widehat{M}_{L,m}\widehat{J}\widehat{V}%
_{-Q} \label{fac1}
\end{equation}%
where
\begin{equation*}
\widehat{V}_{-P}f(x)=e^{\frac{i}{2}\left\langle Px,x\right\rangle }f(x)\text{
\ ; \ }\widehat{M}_{L,m}f(x)=i^{m}\sqrt{|\det L|}f(Lx)
\end{equation*}%
and $\widehat{J}$ is the modified Fourier transform given by%
\begin{equation*}
\widehat{J}f(x)=\left( \frac{1}{2\pi i}\right) ^{n/2}\int e^{-i\left\langle
x,x^{\prime }\right\rangle }f(x^{\prime })d^{n}x^{\prime }\text{.}
\end{equation*}
\end{lemma}
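Composing the three factors in (\ref{fac1}) recovers, at least formally, the
explicit expression of the quadratic Fourier transform associated with $W$:
\begin{equation*}
\widehat{S}_{W,m}f(x)=\left( \frac{1}{2\pi i}\right) ^{n/2}i^{m}\sqrt{|\det
L|}\int e^{iW(x,x^{\prime })}f(x^{\prime })d^{n}x^{\prime }
\end{equation*}%
where $W(x,x^{\prime })=\frac{1}{2}\left\langle Px,x\right\rangle
-\left\langle Lx,x^{\prime }\right\rangle +\frac{1}{2}\left\langle Qx^{\prime
},x^{\prime }\right\rangle $ as in (\ref{wplq}): indeed, $\widehat{V}_{-Q}$
contributes the term $\frac{1}{2}\left\langle Qx^{\prime },x^{\prime
}\right\rangle $, the pair $\widehat{M}_{L,m}\widehat{J}$ contributes the
term $-\left\langle Lx,x^{\prime }\right\rangle $ together with the factor $%
i^{m}\sqrt{|\det L|}$, and $\widehat{V}_{-P}$ contributes the term $\frac{1}{2%
}\left\langle Px,x\right\rangle $.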
Let us now state and prove the main result of this section:
\begin{proposition}
Every $\widehat{S}\in Mp(n)$ is the product of two Mehlig--Wilkinson
operators; these operators thus generate $Mp(n)$.
\end{proposition}
\begin{proof}
Let us write $\widehat{S}=\widehat{S}_{W,m}\widehat{S}_{W^{\prime
},m^{\prime }}$ and apply (\ref{fac1}) to each of the factors; this yields
(with obvious notations)%
\begin{equation}
\widehat{S}=\widehat{V}_{-P}\widehat{M}_{L,m}\widehat{J}\widehat{V}%
_{-(P^{\prime }+Q)}\widehat{M}_{L^{\prime },m^{\prime }}\widehat{J}\widehat{V%
}_{-Q^{\prime }}\text{.} \label{sprod}
\end{equation}%
We claim that $\widehat{S}_{W,m}$ and $\widehat{S}_{W^{\prime },m^{\prime }}$
can be chosen in such a way that $\det (S_{W}-I)\neq 0$ and $\det
(S_{W^{\prime }}-I)\neq 0$, that is,
\begin{equation*}
\det (P+Q-L-L^{T})\neq 0\text{ \ and \ }\det (P^{\prime }+Q^{\prime
}-L^{\prime }-L^{\prime T})\neq 0\text{.}
\end{equation*}
This will prove the assertion in view of (\ref{splq}). We first remark that
the right-hand side of (\ref{sprod}) obviously does not change if we replace
$P^{\prime }$ by $P^{\prime }+\lambda I$ and $Q$ by $Q-\lambda I$ where $%
\lambda \in \mathbb{R}$. Choose now $\lambda $ such that it is not an
eigenvalue of $P+Q-L-L^{T}$ and $-\lambda $ is not an eigenvalue of $%
P^{\prime }+Q^{\prime }-L^{\prime }-L^{\prime T}$; then
\begin{equation*}
\det (P+Q-\lambda I-L-L^{T})\neq 0\text{ \ and \ }\det (P^{\prime }+\lambda
I+Q^{\prime }-L^{\prime }-L^{\prime T})\neq 0\text{.}
\end{equation*}
\end{proof}
\section{Basic concepts and evolutionary regimes}
\label{Intro}
The basics of mathematical population genetics were developed in the 1930's by Fisher, Haldane and Wright
\cite{Fisher,Haldane1927,Haldane1931,Wright1931,Wright1932}. These three names are generally associated with the
'modern synthesis' of evolutionary biology, which unified the discrete nature of Mendelian heredity with the
Darwinian picture of adaptation by small changes
accumulated over long periods of time. Their key insight was that evolution should be viewed as a stochastic
phenomenon, where discrete, random mutational changes in single individuals
give rise to a seemingly deterministic adaptive process on the population level.
In this perspective, evolutionary theory is the statistical mechanics of genes.
The standard model of adaptation on the population level is the Wright-Fisher model, which describes the
evolution of a population of fixed size $N$ in discrete, non-overlapping generations. Mutations occur randomly at rate $U$ per generation,
and selection is incorporated as a bias in the choice of offspring. Mathematically, the Wright-Fisher model can be defined
as a branching process conditioned on a fixed population size
\cite{Park2010}.
An important elementary process is the
\textit{fixation} of a new mutation which initially arises in a single individual. The probability of fixation can be computed
exactly for the branching process \cite{Haldane1927} as well as for the Moran model \cite{Moran}, a continuous time
process where individuals replicate and die one at a time. The most commonly used expression for the fixation probability
was derived by Kimura \cite{Kimura} in a continuum approximation based on a Langevin equation for the mutant frequency.
A new mutation is most likely to go extinct during the early stage of the fixation process, and mutations that
survive this initial stochastic regime are called \textit{established} \cite{Desai2007,MaynardSmith1971}.
Depending on the population parameters $N$, $U$ and the typical selection coefficient $s$ describing the
fitness advantage of the mutant, different evolutionary regimes emerge \cite{Gillespie1984,Park2010}. Selection is strong if $Ns \gg 1$ and weak if
$Ns \ll 1$. Moreover, when the time to fixation $t_\mathrm{fix} \sim s^{-1} \ln N$ is short compared to the time
$t_\mathrm{mut} \sim (sUN)^{-1}$ between subsequent establishment events, mutations fix independently,
whereas for $t_\mathrm{fix} > t_\mathrm{mut}$ they interfere (see Lecture \ref{CI} for further discussion of this regime).
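To fix ideas, for illustrative parameter values $N=10^{6}$, $U=10^{-6}$ and $s=10^{-2}$ one has
$Ns=10^{4}\gg 1$ (strong selection), while $t_\mathrm{fix} \approx s^{-1} \ln N \approx 1.4\times 10^{3}$
generations greatly exceeds $t_\mathrm{mut} \approx (sUN)^{-1} = 10^{2}$ generations, so that such a
population is deep in the interference regime.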
\section{Sequence space and fitness landscapes}
\label{SeqSp}
The genetic information is encoded in linear sequences of symbols drawn from a finite alphabet. On the microscopic level
the symbols stand for nucleotides forming DNA or RNA molecules, or for amino acids forming proteins; on the coarse grained
level of classical population genetics, they stand for different variants (\textit{alleles}) of a gene. For many purposes
it is sufficient to consider binary sequences, where the symbols merely indicate the presence or absence of a mutation
at a given genetic locus. The space of binary sequences of length $L$ is the $L$-dimensional \textit{hypercube} endowed
with the \textit{Hamming distance} as the natural metric; the Hamming distance between two sequences is simply the number of letters in which they differ.
Assuming that the fitness of an individual is completely determined by its genotype, fitness can be viewed
as a function on sequence space. This idea was first introduced by Haldane \cite{Haldane1931} and Wright \cite{Wright1932},
who also pointed out that the existence of multiple peaks in the fitness landscape was a likely scenario that could
obstruct the evolutionary process. Later Maynard Smith envisioned evolutionary trajectories as pathways in the space
of amino acid sequences that are constrained to move from one viable protein to another \cite{MaynardSmith1970}.
Recent years have seen a surge of renewed interest in the concept, triggered primarily by the availability of empirical
data where fitness (or some proxy thereof, such as antibiotic resistance) is measured for all $2^L$ combinations of
$L$ mutations (typically $L=4$--$8$), see
\cite{Carneiro2010,Chou2011,deVisser1997,deVisser2009,Lozovsky2009,Poelwijk2007,Weinreich2006}.
\section{Evolutionary accessibility of fitness landscapes}
\label{Acc}
In population genetic terminology, the notion of \textit{epistasis} refers to interactions between
different mutations in their effect on fitness. Of particular importance is \textit{sign epistasis}, which
implies that a given mutation may be beneficial (increasing fitness) or deleterious (decreasing fitness)
depending on the presence of mutations at other loci. Fitness landscapes without sign epistasis
are simple, in the sense that they possess a unique fitness maximum, and fitness increases monotonically
along any path approaching the maximum \cite{Weinreich2005}. In the presence of sign epistasis at
least some of the paths become inaccessible, in the sense that they include steps of decreasing fitness,
but the existence of multiple fitness maxima requires a specific, stronger form of \textit{reciprocal}
sign epistasis \cite{Poelwijk2011}.
The empirical studies described above in Lecture \ref{SeqSp} show that sign epistasis is prevalent
in nature, and it is therefore important to devise fitness landscape models that allow to quantify
this feature. From the point of view of statistical physics, a natural approach is to consider
random ensembles of fitness landscapes with prescribed statistics. In the simplest case random fitness
values are assigned independently to the genotypes, resulting in the House of Cards (HoC) model
first introduced by Kingman \cite{Kingman1978} and Kauffman and Levin \cite{Kauffman1987} in
the genetic context; in the statistical physics of spin glasses this is known as Derrida's
Random Energy Model (REM) \cite{Derrida1981}.
It is easy to see that the probability for a given genotype
to be a local fitness maximum is simply $1/(L+1)$ in the HoC model, and it can be shown that
the distribution of the number of fitness maxima is asymptotically normal \cite{Baldi1989,Macken1989}.
A simple combinatorial argument can also be applied to the question of evolutionary accessibility,
showing that the expected number of fitness-monotonic paths to the global fitness optimum is
equal to 1 irrespective of $L$ and of the initial distance to the peak \cite{Franke2011}.
However, the full distribution of the number of accessible paths can only be explored by numerical
simulations. It is found to display large sample-to-sample fluctuations, with the majority of
realizations (approaching unity for large $L$) having no accessible path spanning the entire landscape.
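The combinatorial argument is simple enough to be sketched here: from a genotype at Hamming
distance $\ell$ from the peak there are $\ell!$ directed paths; conditional on the peak carrying
the largest fitness value, the $\ell$ fitness values of the remaining genotypes along a given path
are i.i.d., so the path is accessible (i.e., fitness-monotonic) with probability $1/\ell!$, and the
expected number of accessible paths is $\ell! \times 1/\ell! = 1$.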
Real fitness landscapes are not likely to be entirely uncorrelated, and different models with
a tunable degree of fitness correlations have been proposed. A classic example is
the LK-model introduced by Kauffman and Weinberger \cite{Kauffman1989}, in which each of $L$
loci interacts randomly with $K$ other loci. For $K=0$ the landscape is non-epistatic, while for
$K=L-1$ it becomes equivalent to the HoC model. The statistics of local maxima in the LK-model has
been addressed analytically by probabilists \cite{Durrett2003,Limic2004}, but the properties of
accessible mutational pathways have only been studied by simulations so far \cite{Franke2011}. In marked contrast to the HoC model, one finds an increase of evolutionary accessibility with increasing
$L$ (in the sense that the likelihood to find at least one spanning accessible path to the global
fitness maximum increases) when the number of interacting loci $K$ is taken to be proportional to (but smaller than) $L$.
A second example of a tunably rugged fitness landscape is the Rough Mt. Fuji (RMF) model originally
introduced in the context of protein evolution \cite{Aita2000}. In this model random fitness values
(as in the HoC model) are superimposed on an overall fitness gradient of tunable strength
$\theta$; in spin glass language, the model is equivalent to the REM in an external field.
The problem of evolutionary accessibility in the RMF is closely related to the theory of records
in sets of independent random variables with a linear drift \cite{Franke2010}, and by exploiting
this connection analytic results for the expected number of accessible paths can be derived.
One finds an increase of accessibility with increasing $L$ for any $\theta > 0$,
reflecting the fact that the factorial growth in the number of possible pathways
overwhelms the exponential decrease in the probability of any given pathway to be accessible
\cite{Franke2011}.
The quantitative measures of evolutionary accessibility developed in the model studies can be applied
to empirical fitness landscapes, with the aim of testing the models and estimating epistasis parameters
like $K$ and $\theta$. For this purpose it is useful to decompose the landscape into subgraphs
spanned by subsets of the total set of $L$ mutations under consideration, and to study the behavior
of the accessibility measures as a function of subgraph size. Applying this approach to a fitness data
set containing combinations of 8 individually deleterious mutations in the filamentous fungus
\textit{Aspergillus niger} \cite{deVisser1997}, it was found that the data are well described by
an LK-model with $K/L \approx 1/2$, or by an RMF-model with an intermediate value of $\theta$
\cite{Franke2011}.
\section{Clonal interference and the benefits of sex}
\label{CI}
The reason for the emergence and maintenance of sexual reproduction is a long-standing
puzzle in evolutionary biology, and a number of genetic mechanisms that could explain the ubiquity of sex in higher
organisms have been proposed over the past century. A classic example is the Muller-Fisher mechanism
\cite{Fisher,Muller1932}, which is based on the observation that beneficial mutations arising in different
individuals in an asexual population compete for fixation and therefore obstruct each other's incorporation into
the population; in contrast, in sexuals two individuals carrying different beneficial mutations can mate, thus combining the
mutations into a single genome. This phenomenon of \textit{clonal interference} sets in when
the time scale $t_\mathrm{fix}$ of fixation exceeds the time $t_\mathrm{mut}$ between subsequent beneficial mutations,
see Lecture \ref{Intro}, and it is predicted to dramatically slow down the speed of adaptation in large
asexual populations.
Early attempts to quantify the Muller-Fisher mechanism arrived at the conclusion that the speed of adaptation reaches
a finite limit for $N \to \infty$ \cite{Crow1965,Felsenstein1974,MaynardSmith1971}, but recent work has uncovered a more
complex scenario \cite{Park2010}. The standard model used in these studies assumes an unlimited supply of beneficial
mutations with independent fitness effects (no epistatic interactions) and
selection coefficients $s$ drawn from a probability density $f(s)$.
Since beneficial mutations typically constitute a small fraction of all possible mutations, there is little empirical
information on the shape of $f(s)$ \cite{EyreWalker2007}, but theoretical arguments favor an exponential form
\cite{Orr2003}; alternatively, for theoretical convenience it is often assumed that all mutations have the same effect
and $f(s) = \delta(s-s_0)$ \cite{Desai2007}. In the latter case a systematic calculation of the speed of adaptation is possible, based on
the idea that the fitness distribution of the population can be described as a traveling wave of constant shape
moving towards higher fitness \cite{Beerenwinkel2007,Rouzine2003,Rouzine2008,Tsimring1996}.
A key result is that the speed of adaptation is proportional to the logarithm
of population size, in stark contrast to the behavior for small populations where mutations fix independently and
the dependence is linear in $N$.
An approximate treatment applicable to the case of continuous distributions of selection coefficients has been
proposed by Gerrish and Lenski \cite{Gerrish1998}. This theory assumes that only the mutation with largest
selection coefficient among those appearing during a typical fixation time survives. As a consequence, the speed of adaptation depends
on the tail shape of $f(s)$ and is proportional to $\ln N$ for the exponential distribution.
Effects of clonal interference on the speed of adaptation have been observed, at least qualitatively, in evolution experiments
with bacterial populations \cite{deVisser1999,deVisser2005,deVisser2006,Elena2003}. By detecting and analyzing individual beneficial
mutations, such experiments can also be used to determine the parameters of the model, primarily the beneficial mutation rate
and the mean selection coefficient \cite{Perfeito2007}; however these estimates depend strongly on the assumption made regarding the distribution $f(s)$ \cite{Hegreness2006}.
As was noted long ago by Maynard Smith \cite{MaynardSmith1968},
the advantage of recombination due to the Muller-Fisher effect
disappears in infinite populations. In that limit recombination
affects the speed of adaptation only if mutations interact
epistatically. To be precise, recombination aids adaptation if the
effect of a mutation decreases as the number of
mutations increases (\textit{negative epistasis}) \cite{Kondrashov1988} but slows it down
in the opposite case. In the presence of \textit{sign epistasis} (as
introduced in Lecture \ref{Acc}) recombination can be strongly
detrimental, leading to a complete localization of the population at
suboptimal fitness peaks for infinite $N$ \cite{deVisser2009,Park2011} and an
exponential growth of the escape time with $N$ when the population
size is finite \cite{Altland2011}. Thus in general recombination can
be beneficial or deleterious depending on the structure of the fitness
landscape.
\section*{References}
\section{Introduction}
\subsection{Recurrence and Lyapunov Exponents}
\hspace*{1em}\let\thefootnote\relax\footnote{\hspace*{-2em}
Key words: Affine group, Grassmannian,
random walk, recurrence, stationary probability.\\
AMS-MSC : 22E40, 60J20.}
Consider a locally compact group $G$
acting continuously on a locally compact second countable space $X$
and $\mu$ a probability measure on $G$.
The \emph{associated random walk on $X$}
is the Markov chain over $X$
defined by the transition probabilities
$P_x=\mu *\delta_x$ for all $x\in X$.
Our aim is to study the recurrence properties of such a random walk.
We will not focus here on the \emph{almost sure recurrence}
as in \cite{BouBabEl} and \cite{Bru1}
but on the \emph{recurrence in law}
as in \cite{Art}, \cite{BouPic} and \cite{EskMarg}.
\begin{df} \label{defrectrans}
The random walk on $X$ is \emph{recurrent in law} at a point
$x\in X$
if for all $\varepsilon>0$, there exists a compact set $C\subset X$
and $n_0\in\mathbb{N}^*$ such that for all $n \geq n_0$:
\[\mu^{*n}*\delta_x(C)\geq 1-\varepsilon.\]
The random walk on $X$ is \emph{uniformly recurrent in law}
if the same compact set $C$ can be chosen for all the starting points $x$.
A probability measure $\nu$ on $X$ is said to be
\emph{$\mu$-stationary} or \emph{$\mu$-invariant}
if one has $\mu *\nu =\nu $.
\end{df}
Those definitions are tightly linked. Indeed,
there exists a $\mu$-stationary probability measure on $X$
if and only if the random walk on $X$ is recurrent in law
at some point $x\in X$
(see Lemma \ref{urloimesinv} for one implication).
In this paper, $G$ will always be a real algebraic group
acting algebraically on a real algebraic variety $X$;
the measure $\mu$ will be compactly supported and
\emph{Zariski dense}, which means that its support spans a Zariski dense subgroup
in $G$.
When $G$ is a reductive group and $X=G/H$ is an algebraic homogeneous space,
it is proven in \cite{Art} that there exists a $\mu$-stationary probability measure on $X$
if and only if $X$ is compact.
The aim of our article is to focus on situations
where the algebraic group $G$ is not reductive.
In particular, in Corollary \ref{casSL}, we will exhibit
examples of non-compact homogeneous spaces
on which there always exists a $\mu$-stationary
probability measure.
The key tool in our analysis will be to link
the recurrence properties of these random walks to the Lyapunov exponents of $\mu$.
The definition of these Lyapunov exponents
depends on the choice of a linear action
of $G$ on $\mathbb{R}^d$.
\begin{df}\label{deflyap} Given
a linear action of $G$ on $\mathbb{R}^d$,
the \emph{Lyapunov exponents} of $\mu$
are the real numbers $\lambda_1,\,\hdots,\, \lambda_d$
such that,
for all $1\leq p \leq d$, we have
\begin{equation}
\label{eqndeflya}
\lambda_1+\hdots+\lambda_p=
\lim_{n\tend\infty} \frac{1}{n} \int_G \log \n[\Lambda^p\, g] \dd\mu^{*n} (g).
\end{equation}
\end{df}
The sequence of Lyapunov exponents is always decreasing:
$\lambda_1\geq \hdots \geq \lambda_d
$ (see \cite[Prop 1.2]{Led84}).
More properties of these exponents are given in \cite{Ose68}, \cite{Led84};
their use in the context of reductive groups is detailed in
\cite{Furst63}, \cite{GuiRaug85}, \cite{GoldMarg} and \cite{Livre}.
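For instance, if $\mu=\delta_g$ is the Dirac mass at a single element $g$
(a degenerate case, given only for illustration, since such a measure is not Zariski dense),
then $\mu^{*n}=\delta_{g^n}$ and
$\frac{1}{n}\log \n[\Lambda^p\, g^n]$ converges to the sum of the $p$ largest
numbers $\log \ab[\chi_i]$, where $\chi_1,\,\hdots,\,\chi_d$ are the eigenvalues of $g$;
hence in this case $\lambda_p=\log \ab[\chi_p]$ when the eigenvalues are ordered by
decreasing modulus.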
\subsection{Action on the Affine Grassmannians}
We assume now that $G$ is either the affine group $G=\mathrm{GL}(d,\R)\ltimes \R^d$ or
the special affine group $G=\mathrm{SL}(d,\R)\ltimes \R^d$.
For $1\leq p\leq d$,
we denote by $\lambda_{p}$
the $p^{\rm th}$-Lyapunov exponent
corresponding to the linear action of $G$ on $\mathbb{R}^d$.
For instance,
in dimension $d=1$, one has
\[\textstyle\lambda_1 = \int_{\mathbb{R}^*\ltimes\mathbb{R}} \log \ab[a] \dd \mu(a,u)\]
where
$g=(a,u)\in \mathbb{R}^*\ltimes\mathbb{R}$.
For any $d\geq 1$, Bougerol and Picard have shown in \cite{BouPic}
that
there exists a $\mu$-stationary probability measure on $\mathbb{R}^d$
if and only if the first Lyapunov exponent of $\mu$ is strictly negative:
$\lambda_1<0$.
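For instance, for $d=1$ and $\mu=\frac{1}{2}(\delta_{g_1}+\delta_{g_2})$
with $g_1=(2,0)$ and $g_2=(\tfrac{1}{3},1)$ in $\mathbb{R}^*\ltimes\mathbb{R}$,
one gets
\[\textstyle\lambda_1=\frac{1}{2}\log 2+\frac{1}{2}\log \frac{1}{3}=\frac{1}{2}\log \frac{2}{3}<0;\]
since $g_1$ and $g_2$ have distinct fixed points $0$ and $\frac{3}{2}$,
the support of $\mu$ preserves no proper affine subspace, and there indeed exists a
$\mu$-stationary probability measure on $\mathbb{R}$.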
The main result of this paper is the following Theorem \ref{recgrassaff},
which extends this equivalence
to the affine Grassmannians $X_{k,\,d}$ where $0\leq k < d$.
By definition the affine Grassmannian $X_{k,\,d}$ is the space of
$k$-dimensional affine subspaces of $\mathbb{R}^d$. The group $G$ acts transitively on
$X_{k,\,d}$.
\begin{thm}\label{recgrassaff}
Let $G$ be the affine group or the special affine group of $\mathbb{R}^d$, let
$\mu$ be a Zariski dense probability measure with compact support on $G$
and let $0\leq k<d$.\\
a) If $\lambda_{k+1}\geq 0$, then the random walk on $X_{k,\,d}$
is nowhere recurrent in law,
there exists no $\mu$-stationary probability measure on $X_{k,\,d}$, and
for all $x$ in $X_{k,\,d}$ the sequence of means of transition probabilities
weakly converges to $0$:
\[\textstyle\frac{1}{n}\sum_{j=1}^n \mu^{*j}* \delta_x\xrightarrow[n\tend\infty]{} 0.\]
b) If $\lambda_{k+1}<0$, then the random walk on $X_{k,\,d}$
is uniformly recurrent in law,
there exists a unique $\mu$-stationary probability measure $\nu$ on $X_{k,\,d}$,
and for all $x$ in $X_{k,\,d}$ the sequence of means of transition probabilities
weakly converges to $\nu$:
\[\textstyle\frac{1}{n}\sum_{j=1}^n \mu^{*j}* \delta_x\xrightarrow[n\tend\infty]{} \nu.\]
\end{thm}
The result of Bougerol and Picard in \cite{BouPic}
covers the $k=0$ case.
In fact, their proof uses only the weaker assumption
that $\mu$ has a finite first moment and
that its support does not preserve any proper affine
subspace of $\mathbb{R}^d$.
The following Corollary, which is particularly noteworthy insofar
as it does not mention Lyapunov exponents,
is deduced from Theorem \ref{recgrassaff}.
\begin{cor}\label{cassymetrique} Assume $\mu$ is symmetric.
Then there exists a $\mu$-stationary probability measure
$\nu$ on $X_{k,\,d}$ if and only if
$2k\geq d$.
In this case, $\nu$ is unique.
\end{cor}
\begin{proof}
Since $\mu$ is symmetric, for all $1\leq p\leq d$, the Lyapunov exponents
satisfy the equalities
$\lambda_p=-\lambda_{d+1-p}$. Moreover,
since $\mu$ is Zariski dense in $G$,
it follows from the Guivarc'h-Raugi simplicity theorem that
the sequence of Lyapunov exponents is strictly decreasing:
$\lambda_1>\cdots >\lambda_d$ (see \cite[Corol. 10.15]{Livre}).
Therefore one has the equivalence
$\lambda_{k+1}<0 \Longleftrightarrow 2k\geq d$.
\end{proof}
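For instance, in dimension $d=3$, for a symmetric $\mu$, there exists a unique
$\mu$-stationary probability measure on the space $X_{2,\,3}$ of affine planes
of $\mathbb{R}^3$, while there is none on the space of points ($k=0$)
or on the space $X_{1,\,3}$ of affine lines ($k=1$), since the condition
$2k\geq d$ holds only for $k=2$.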
\begin{cor}
\label{casSL}
Let $d\geq 2$.
When $G$ is the special affine group and $k=d-1$,
there exists a unique $\mu$-stationary probability measure on $X_{k,\,d}$.
\end{cor}
\begin{proof}
In this case, the sum of the Lyapunov exponents is zero.
Hence, the simpli\-city of the Lyapunov exponents
implies
$\lambda_d<0$.
\end{proof}
For instance, when $G$ is the special affine group of $\mathbb{R}^2$,
the random walk
on the space of affine lines of $\mathbb{R}^2$
is always uniformly recurrent in law while
the random walk on the space of points of $\mathbb{R}^2$ is nowhere recurrent in law.
\subsection{Action on $X_{V,W}$}
By embedding the affine Grassmannian $X_{k,\,d}$ of $\mathbb{R}^d$ in
the projective space of a suitable exterior power $V$ of $\mathbb{R}^{d+1}$,
we will deduce
Theorem \ref{recgrassaff} from Theorem \ref{thmeq} below.
We first need two definitions.
An algebraic group $G$ is \emph{Zariski connected}
if it is connected for the Zariski topology.
A linear action of $G$ on a vector space $W$
is \emph{proximal}
if there exists a rank $1$ linear endomorphism $\pi$ of $W$
which is a limit of a sequence $\lambda_n\gamma_n$ with
$\lambda_n>0$ and $\gamma_n$ in $G$.
\begin{thm}\label{thmeq} Let $V$ be a finite-dimensional real vector space,
$G$ a Zariski connected algebraic subgroup of $\mathrm{GL}(V)$,
$W$ a $G$-invariant subspace of $V$ such that\\
$(H1)$ $G$ acts irreducibly and proximally on $W$ and on $W':=V/W$.\\
$(H2)$ The representations of $G$ in $W$ and $W'$
are not equivalent.\\
$(H3)$ $W$ has no $G$-invariant complementary subspace in $V$.\\
Let $X_{V,W}:=\Pj[V]\smallsetminus\Pj[W]$, let $\mu$ be a Zariski dense probability measure with compact support on $G$ and
let $\lambda_1=\lambda_{1,W}$ and $\lambda^{\prime}_1=\lambda_{1,W'}$ be
the first Lyapunov exponents of $\mu$ in $W$ and $W'$ respectively. \\
a) If $\lambda_1\geq \lambda^{\prime}_1$, then the random walk\xspace on $X_{V,W}$
is nowhere recurrent in law,
there exists no $\mu$-stationary probability measure on $X_{V,W}$,
and for all $x$ in $X_{V,W}$ one has the weak convergence
$\textstyle\frac{1}{n}\sum_{j=1}^n \mu^{*j}* \delta_x\xrightarrow[n\tend\infty]{} 0.$\\
b) If $\lambda_1<\lambda^{\prime}_1$,
then the random walk\xspace on $X_{V,W}$ is uniformly recurrent in law,
there exists a unique
$\mu$-stationary probability measure $\nu$ on $X_{V,W}$,
and for all $x$ in $X_{V,W}$, one has the weak convergence
$\textstyle\frac{1}{n}\sum_{j=1}^n \mu^{*j}* \delta_x\xrightarrow[n\tend\infty]{} \nu.$
\end{thm}
\subsection{Strategy of the Proof}
In Chapter \ref{passage}, we explain how to embed the affine Grassmannian
$X_{k,d}$ in the variety
$\Pj[\Lambda^{k+1}\mathbb{R}^{d+1}]\smallsetminus\Pj[\Lambda^{k+1}\mathbb{R}^{d}]$
and we deduce
Theorem \ref{recgrassaff} from Theorem \ref{thmeq}.
The last three
chapters
will deal with the proof of Theorem \ref{thmeq}.
In Chapter \ref{thmeqrecloi}, we prove the uniform recurrence in law when $\lambda_1 < \lambda^{\prime}_1$
(Corollary \ref{l1lp1urloi}). The crux of the proof is the construction of a proper function
on $X_{V,W}$ which is contracted by the averaging operator
(Proposition \ref{PmuUCH}).
In Chapter \ref{l1lp1nrecloi},
we prove the non-recurrence in law when $\lambda_1\geq \lambda^{\prime}_1$ (Proposition \ref{l1lp1nonrecloiprop}).
The key point is the study of the ratio of the norms in $W$ and in $W'$ of a random product $b_1\cdots b_n$. On the one hand, the existence of a $\mu$-stationary probability measure
on $X_{V,W}$ would imply that these ratios are bounded
(Lemma \ref{liminfdnanfinie}).
On the other hand, when $\lambda_1\geq \lambda^{\prime}_1$, the Law of Large Numbers and the Law of Iterated Logarithms for these products
prevent these ratios from being bounded
(Lemma \ref{liminfdnaninfinie}).
In Chapter \ref{thmequnicite}, we prove
the uniqueness of the $\mu$-stationary measure on $X_{V,W}$
(Proposition \ref{unicitedirac}).
Indeed, using the joining measure (Corollary \ref{corjoista}) of two distinct $\mu$-stationary probability
measures on $X_{V,W}$,
we construct (Lemma \ref{lemyvw}) a $\mu$-stationary measure
$\overline{\nu}$ on the space $\Pj[W\oplus W']\smallsetminus (\Pj[W]\cup\Pj[W'])$.
This contradicts the classification of stationary measures in \cite{Art}
since this space does not contain compact $G$-orbits
(Lemma \ref{pasdorbitescompactesdansX}).
The weak convergence of the sequence of means of transition probabilities
follows easily (Corollary \ref{corthmeqlunlpun}).
In Appendix \ref{seclimlaw}, we collect known facts on random walks on reductive groups.\vspace{1em}
In this paper, all the vector spaces will be
finite dimensional real vector spaces,
all the measures will be Borel measures and we will not distinguish between a real algebraic group
and its group of real points.
\section{Recurrence on affine Grassmannians}\label{passage}
\begin{quotation}
We explain first how to deduce Theorem \ref{recgrassaff}
from Theorem \ref{thmeq}.
\end{quotation}
We use the notation of Theorem \ref{recgrassaff}.
The group $G$ is the affine group or the special affine group of $\mathbb{R}^d$,
the space $X_{k,\,d}$ is the affine Grassmannian of $\mathbb{R}^d$,
the probability measure $\mu$ on $G$ is Zariski dense and compactly supported.
Let us construct $G$-vector spaces $W\subset V$
to which we will apply Theorem \ref{thmeq}.
We identify the affine space $\mathbb{R}^d$ with
the affine hyperplane of $\mathbb{R}^{d+1}=\mathbb{R}^d\oplus\mathbb{R}$:
$$
\mathcal{A}=\{(w,\,1)\,|\,w\in\mathbb{R}^d\}.
$$
The group $G$ is then a subgroup of
$\mathrm{GL}(d+1,\,\mathbb{R})$, which stabilizes $\mathcal{A}$,
and we have
$$
X_{k,d}=\mathrm{Gr}_{k+1}(d+1)\smallsetminus\mathrm{Gr}_{k+1}(d),
$$
where $\mathrm{Gr}_{k+1}(d\! +\! 1)$ and $\mathrm{Gr}_{k+1}(d)$ are
the Grassmannians of $(k\! +\! 1)$-dimensional vector subspaces of
$\mathbb{R}^{d+1}$ and of $\mathbb{R}^d$ respectively.
Now, let
$$
V:=\Lambda^{k+1}\mathbb{R}^{d+1}
\;\;{\rm and} \;\;
W:=\Lambda^{k+1}\mathbb{R}^{d}.
$$
The group $G$ acts linearly on the vector space $V$
and leaves invariant its vector subspace $W$.
The Pl\"{u}cker map
$$
\varphi\; :\;\mathrm{Gr}_{k+1}(d+1)\longrightarrow \Pj[V]
\;\; ;\;\;
U\longmapsto \Lambda^{k+1}U
$$
is an embedding of the Grassmannian variety in the projective space of $V$.
It induces a $G$-equivariant injection
\[\varphi:X_{k,\,d}\hookrightarrow X_{V,W}:= \Pj[V]\smallsetminus\Pj[W].\]
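To fix ideas, we spell out the simplest instance of this construction, the case $k=0$ (the space of points of $\mathbb{R}^d$); this example is only meant as a sanity check:

```latex
% Case k = 0: X_{0,d} is the space of points of R^d.
% Then V = \Lambda^1 R^{d+1} = R^{d+1} and W = \Lambda^1 R^d = R^d, and a
% point x of R^d is the line R(x,1) of R^{d+1}, so the embedding reads
\[
\varphi(x)\;=\;[\,x_1:\cdots:x_d:1\,]
\;\in\;
X_{V,W}=\Pj[\mathbb{R}^{d+1}]\smallsetminus\Pj[\mathbb{R}^{d}].
\]
```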
\begin{prop}\label{constructionVWWp} With the above notations,\\
a) Hypotheses $(H1)$, $(H2)$, $(H3)$ hold for these $V$, $W$ and $W'=V/W$.\\
b) The $G$-equivariant inclusion
$X_{k,\,d}\hookrightarrow X_{V,W}$ has closed image.\\
c) We have the equality $\lambda_{k+1}=\lambda_{1,\,W}-\lambda_{1,\,W'}$.
\end{prop}
\begin{proof}[Proof of Proposition \ref{constructionVWWp}]
a) $(H1)$ : The representation of ${\rm SL}(d,\mathbb{R})$ in $W=\Lambda^{k+1}\mathbb{R}^d$
is irreducible by \cite[Chap. 8.13.1.4]{BourGALie78}.
This representation is proximal since the image in ${\rm GL}(W)$
of a diagonal element of $G$
with positive distinct eigenvalues is a proximal element of
${\rm GL}(W)$.
The same is true for the representation in $W'\simeq\Lambda^k\mathbb{R}^d$.\\
$(H2)$ : The fact that the representations of ${\rm SL}(d,\mathbb{R})$ in $W$ and $W'$
are not equivalent is also proven in \cite[Chap. 8.13.1.4]{BourGALie78}.\\
$(H3)$ : Since the representation of ${\rm SL}(d,\mathbb{R})$ in $W$ and $W'$ are
irreducible and are not equivalent,
by Schur's lemma, the only
${\rm SL}(d,\mathbb{R})$-invariant complementary subspace of $W$ in
$V\simeq W\oplus W'$
is $W'$. But $W'$ is not invariant under the translations of $G$.\\
b) The image $\varphi(X_{k,d})$ is closed in $X_{V,W}$ since
$\varphi^{-1}(\Pj[W])=\mathrm{Gr}_{k+1}(d)$.
c) This equality is the difference of the equalities
\[
\lambda_{1,\,W}=\lambda_1+\hdots+\lambda_{k+1}
\;\;\; {\rm and}\;\;\;
\lambda_{1,\,W'}=\lambda_1+\hdots+\lambda_{k}
\]
which follow from the very definition \eqref{eqndeflya}
of the Lyapunov exponents.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thmeq} $\Longrightarrow$ Theorem \ref{recgrassaff}]
We use Proposition \ref{constructionVWWp}.
If $\lambda_{k+1}\geq 0$,
then we can apply Theorem \ref{thmeq}
in the case where $\lambda_{1,\,W}\geq\lambda_{1,\,W'}$,
and there can be no $\mu$-stationary probability measure on $X_{k,\,d}$.
Conversely, if $\lambda_{k+1}<0$, we are in the case where
$\lambda_{1,\,W}<\lambda_{1,\,W'}$.
Since $X_{k,\,d}$ is a $G$-invariant closed subset of $X_{V,W}$,
we obtain uniform recurrence in law on $X_{k,\,d}$.
Lemma \ref{urloimesinv} then ensures
the existence of a $\mu$-stationary probability measure on $X_{k,\,d}$,
which, by Theorem \ref{thmeq}, is the unique
$\mu$-stationary probability measure on $X_{V,W}$, hence also on $X_{k,\,d}$.
\end{proof}
\section{Uniform Recurrence When $\lambda_1<\lambda_1'$}\label{thmeqrecloi}
\begin{quotation}
The goal of this Chapter is to show that the random walk on $X_{V,W}$
is uniformly recurrent in law
when $\lambda_1<\lambda_1'$ (Corollary \ref{l1lp1urloi}).
\end{quotation}
\subsection{The Contraction Hypothesis}
\begin{quotation}
We recall in this section the uniform contraction hypothesis
and why this condition implies the uniform recurrence in law.
\end{quotation}
The setting is very general (see \cite{MeyTwe}, \cite{EskMarg} or \cite{BenQuiFinVol}
for more details).
Let $X$ be a locally compact second-countable space
and $P$ a Markov-Feller operator on $X$.
\begin{df}\label{defUCH}
The operator $P$ satisfies the \emph{uniform contraction hypothesis}
(UCH)
if there exists a
proper map $u:X\rightarrow [0,\,\infty[$
and two constants $0<a<1$ and $b>0$ such that, over $X$,
\begin{equation}
\label{eqnpulaub}
Pu\leq au+b.
\end{equation}
\end{df}
We recall that a map is \emph{proper} if the inverse image
of every compact set is relatively compact.
The definition of recurrence in law extends to
Markov chains on $X$.
Uniform recurrence in law is fundamentally linked with (UCH):
\begin{prop}\label{UCHurloi} If $P$ satisfies (UCH),
then the associated Markov chain on $X$ is uniformly recurrent in law.
\end{prop}
\begin{proof}
See \cite[Thm 15.0.1]{MeyTwe}, \cite[Lem 3.1]{EskMarg} or \cite[Lem 2.1]{BenQuiFinVol}.
\end{proof}
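A toy example may clarify the mechanism (it is ours, not taken from the text above): the autoregressive chain $x_{n+1}=a\,x_n+\xi_{n+1}$ on $X=\mathbb{R}$ with $0<a<1$ satisfies (UCH) with $u(x)=|x|$ and $b=\mathbb{E}|\xi|$, since $Pu(x)=\mathbb{E}|a x+\xi|\leq a|x|+\mathbb{E}|\xi|$. Proposition \ref{UCHurloi} then predicts that trajectories spend most of their time in a fixed compact set, which a short simulation confirms:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy chain x_{n+1} = a*x_n + xi_{n+1} with drift function u(x) = |x|:
# P u(x) = E|a*x + xi| <= a|x| + E|xi|, which is exactly (UCH).
a, n_steps = 0.5, 100_000
x = 0.0
traj = np.empty(n_steps)
for i in range(n_steps):
    x = a * x + rng.normal()
    traj[i] = x

# Uniform recurrence in law: most of the time is spent in a compact set.
time_in_compact = float(np.mean(np.abs(traj) <= 5.0))
```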
\begin{lem}
\label{urloimesinv}
If $P$ is recurrent in law at point $x\in X$,
there exists a $P$-invariant probability measure on $X$.
\end{lem}
\begin{proof}
By the Banach-Alaoglu Theorem,
the sequence of means of transition probabilities $\nu_n=\frac{1}{n}\sum_{j=1}^n P^j_x$
has at least one accumulation point $\nu_\infty$
for the weak-$*$ topology.
This finite measure $\nu_{\infty}$ is $P$-invariant.
Since $P$ is recurrent in law at $x$, there is no escape of mass
and $\nu_{\infty}$ is a probability measure.
\end{proof}
The following lemma is a useful tool to check (UCH).
\begin{lem}
\label{lempnouch}
Let $n\geq 1$. If $P^n$ satisfies $(UCH)$
then $P$ satisfies $(UCH)$ too.
\end{lem}
\begin{proof}
Let $u$ be a proper map and $a$, $b$ constants such that
$P^nu\leq au+b$ over $X$.
Let $\alpha_k=a^{-k/n}$ for $0\leq k\leq n-1$,
$a'=a^{1/n}$,
$b'=\frac{a'}{a} b$.
Then the proper map $u':X\rightarrow \mathbb{R}_+$
defined by $u'=\sum_{k=0}^{n-1} \alpha_k P^k u$
satisfies the inequality
$Pu'\leq a'u'+b'$
on $X$,
and thus $P$ satisfies (UCH).
\end{proof}
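The algebra of this proof can be checked mechanically on a finite state space, where $P$ is a row-stochastic matrix and the inequalities are entrywise. The following sanity check is our own; the instance (state count, $n$, $a$, the vector $u$) is arbitrary, and $b$ is chosen so that the hypothesis $P^nu\leq au+b$ holds:

```python
import numpy as np

rng = np.random.default_rng(5)

m, n = 6, 3
P = rng.random((m, m))
P /= P.sum(axis=1, keepdims=True)      # a row-stochastic transition matrix
u = 10.0 * rng.random(m)               # a nonnegative test function on m states

a = 0.9
Pn_u = np.linalg.matrix_power(P, n) @ u
b = float(np.max(Pn_u - a * u)) + 0.1  # chosen so that P^n u <= a u + b holds

# The lemma's construction: u' = sum_k a^{-k/n} P^k u, a' = a^{1/n},
# b' = (a'/a) b; the claim is that P u' <= a' u' + b' entrywise.
u_prime = sum(a ** (-k / n) * (np.linalg.matrix_power(P, k) @ u)
              for k in range(n))
a_prime = a ** (1.0 / n)
b_prime = (a_prime / a) * b
slack = a_prime * u_prime + b_prime - P @ u_prime
```

The telescoping in the proof guarantees that `slack` is nonnegative (up to rounding) for any stochastic $P$ and nonnegative $u$ satisfying the hypothesis.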
\subsection{Finding a Contracted Function}
\begin{quotation}
In this section, we use again
the notations and assumptions of Theorem \ref{thmeq}.
We will prove that the averaging operator
satisfies the uniform contraction hypothesis.
\end{quotation}
We recall that $W\subset V$ are real vector spaces,
$G$ is a Zariski connected algebraic subgroup of $\mathrm{GL}(V)$
preserving $W$ and satisfying $(H1)$, $(H2)$, $(H3)$.
We identify the quotient $W'=V/W$ with a complementary subspace $W_s$ of $W$
in $V$.
Note that this subspace $W_s$ is not $G$-invariant.
We recall also that $\mu$ is a Zariski dense probability measure on $G$
with compact support and that $\lambda_1$ and $\lambda^{\prime}_1$
are the first Lyapunov exponents of $\mu$ in $W$ and $W'$,
and that we are studying the associated random walk on
the $G$-space $X_{V,W}:=\Pj[V]\smallsetminus\Pj[W]$.
The corresponding Markov operator
$P_{\mu}:\cont[X_{V,W}]\longrightarrow\cont[X_{V,W}]$ is given by
$$
\textstyle
P_{\mu}f (x)=\int_G f(gx) \dd\mu(g).
$$
\begin{prop}\label{PmuUCH} Same notations and assumptions
as in Theorem \ref{thmeq}.\\
If $\lambda_1<\lambda^{\prime}_1$, then the Markov operator $P_{\mu}$ satisfies (UCH).
\end{prop}
\begin{proof}
The space $X_{V,W}$
can be seen as the set
$$
X_{V,W}=\{[w,\, w']\,|\, w\in W,\, w'\in W_s\smallsetminus\{0\}\}.
$$
Choose a norm on $V$, and, for $\delta>0$, consider the functions
\[
u_{\delta}\; :\;
X_{V,W}\longrightarrow\mathbb{R}_+
\;\; ;\;\; [w,\,w']\longmapsto \tfrac{\n[w]^{\delta}}{\n[w']^{\delta}}.\]
These functions are well defined (the ratio is homogeneous of degree zero in $(w,\,w')$) and proper.
We want to find
$\delta>0$, $ a\in ]0,\,1[$, $b>0$, $n_0\in\mathbb{N}^*$ such that,
over $X_{V,W}$, one has the inequality
\begin{equation}
\label{ineqUCH}
P_{\mu}^{n_0}u_{\delta}\leq a u_{\delta} +b\, .
\end{equation}
Since $W$ is $G$-invariant, we can write $g\in G$ as
\begin{equation}
\label{eqnagcgdg}
g=\begin{pmatrix}
a_g & c_g \\
0 & d_g
\end{pmatrix}\;
\mbox{\rm with $a_g\in \mathrm{GL}(W),\,d_g\in \mathrm{GL}(W_s),
\, c_g\in \mathcal{L}(W_s,\,W)$.}
\end{equation}
Let $0<\varepsilon < \frac{\lambda^{\prime}_1-\lambda_1}{8}$.
Then, by a lemma due to Furstenberg (cf. \cite[Thm 4.28]{Livre}, \cite{Furst63})
since $G$
acts irreducibly on $W$ and $W'$
there exists $n_0\in\mathbb{N}^*$ such that
for all $n\geq n_0$,
for all non-zero $w\in W,\, w'\in W_s$,
the following inequalities hold:
\begin{align}\label{inegFurstlun}
\lambda_1-\varepsilon &\leq \frac{1}{n}\int_G \log \frac{\n[a_g w]}{\n[w]}\dd \mu^{*n}(g)
\leq \lambda_1+\varepsilon, \\ \label{inegFurstlpun}
\lambda^{\prime}_1-\varepsilon &\leq \frac{1}{n}\int_G \log \frac{\n[d_g w']}{\n[w']}\dd \mu^{*n}(g)
\leq \lambda^{\prime}_1+\varepsilon.
\end{align}
For $\delta >0$ and $x=[w,\,w']\in X_{V,W}$, one computes
\[P^{n_0}_{\mu} u_{\delta}(x)=u_{1}(x)^{\delta} \;
\int_G\frac{u_1(gx)^{\delta}}{u_1(x)^{\delta}}
\dd\mu^{*n_0}(g).\]
We will give an upper bound for the right-hand integral
for all $x$ in the complement of a compact set $K$ of $X$;
since the map $P_{\mu}^{n_0}u_{\delta}$ is bounded on the compact set $K$,
this will give inequality (\ref{ineqUCH}).
Let $c>0$ be the constant defined by
\[c^{-1}= \frac{4}{n_0(\lambda^{\prime}_1-\lambda_1)}\int_G\n[c_g]\,\n[a_g^{-1}] \dd \mu^{*n_0}(g).\]
Let $K$ be the compact subset of $X_{V,W}$ given by
\[K=\{[w,\,w']\,|\, w\in W, \; w'\in W_s,\; \n[w']\geq c\, \n[w] \}.\]
For $\mu^{*n_0}$-almost every $g\in G$,
for all $x\in X\smallsetminus K$,
the following ratio
is bounded:
\[
\frac{u_1(gx)}{u_1(x)}
\; =\;
\frac{\n[a_gw+c_gw']}{\n[w]}\, \frac{\n[w']}{\n[d_gw']}
\;\leq\;
\sup_{g\in \Supp \mu^{*n_0}}
\n[d_g^{-1}](\n[a_g]+c\n[c_g]).
\]
Therefore, we can find some constant $M_{n_0}>0$
such that for all $\delta>0$,
for all $x\in X\smallsetminus K$,
for $\mu^{*n_0}$-almost every $g\in G$,
we can write
\[ \frac{u_{1}(gx)^{\delta}}{u_{1}(x)^{\delta}}=e^{\delta\log \frac{u_1(gx)}{u_1(x)}}
\leq 1+\delta \log \frac{u_1(gx)}{u_1(x)} +\delta^2 M_{n_0}.\]
For all $x\in X\smallsetminus K$, for $\mu^{*n_0}$-almost every $g\in G$,
the following upper bound holds:
\begin{align*}
\log \frac{u_1(gx)}{u_1(x)} &= \log\frac{\n[a_g w]}{\n[w]}
- \log\frac{\n[d_g w']}{\n[w']}+ \log \frac{\n[a_gw+c_gw']}{\n[a_gw]}\\
&\leq \log\frac{\n[a_g w]}{\n[w]}
- \log\frac{\n[d_g w']}{\n[w']}+\frac{\n[c_gw']}{\n[a_gw]} \\
&\leq \log\frac{\n[a_g w]}{\n[w]}
- \log\frac{\n[d_g w']}{\n[w']}+ \n[c_g]\n[a_g^{-1}]c .
\end{align*}
Using inequalities (\ref{inegFurstlun}), (\ref{inegFurstlpun})
and the definition of $c$, we get the inequality
\begin{equation*}
\int_G \log \frac{u_1(gx)}{u_1(x)} \dd \mu^{*n_0}(g)
\;\leq\;
n_0(\lambda_1-\lambda^{\prime}_1+2\varepsilon) + \frac{n_0(\lambda^{\prime}_1-\lambda_1)}{4}
\;\leq\;
\frac{n_0(\lambda_1-\lambda^{\prime}_1)}{2}.
\end{equation*}
Let $\kappa=\frac{n_0(\lambda^{\prime}_1-\lambda_1)}{2} >0$. We then get the upper bound,
for all $x\in X\smallsetminus K$,
\[\int_G \frac{u_{1}(gx)^{\delta}}{u_{1}(x)^{\delta}} \dd \mu^{*n_0}(g)\leq 1-\delta \kappa
+\delta^2 M_{n_0}.\]
Choose $\delta >0$ such that
$a_{n_0,\,\delta}:= 1-\delta \kappa +\delta^2 M_{n_0}$
is strictly between $0$ and $1$.
Therefore, since $K$ is compact, there exists a constant $b_{n_0,\,\delta}$ such that
for all $x\in X$:
\[P^{n_0}_{\mu}u_{\delta} (x) \leq a_{n_0,\,\delta}\, u_{\delta}(x) +b_{n_0,\, \delta},\]
and, by Lemma \ref{lempnouch}, the operator $P_{\mu}$ satisfies (UCH).
\end{proof}
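In the motivating case of affine lines of $\mathbb{R}^2$ ($k=1$, $d=2$, Euclidean norms), the Pl\"ucker coordinates of Chapter \ref{passage} make the contracted function concrete: for the line through two points $p\neq q$ one has $w=p_1q_2-p_2q_1$ and $w'=p-q$, so that $u_1=\n[w]/\n[w']$ is the distance from the origin to the line. The following numerical check of this identity is ours (an illustration, not part of the proof):

```python
import numpy as np

rng = np.random.default_rng(2)

def plucker_u1(p, q):
    # Wedge of the lifts (p,1), (q,1) in R^3: the Lambda^2 R^2 component is
    # w = p1*q2 - p2*q1 and the remaining component is w' = p - q.
    w = p[0] * q[1] - p[1] * q[0]
    return abs(w) / np.linalg.norm(p - q)

def dist_origin_to_line(p, q):
    # Distance from the origin to the affine line through p and q,
    # computed via the orthogonal projection of the origin onto the line.
    d = (q - p) / np.linalg.norm(q - p)
    foot = p - np.dot(p, d) * d
    return np.linalg.norm(foot)

pairs = [(rng.normal(size=2), rng.normal(size=2)) for _ in range(100)]
errors = [abs(plucker_u1(p, q) - dist_origin_to_line(p, q))
          for p, q in pairs]
```

In this picture the drift inequality \eqref{ineqUCH} says that, on average, a random special affine map pulls distant lines back toward the origin.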
\begin{cor}\label{l1lp1urloi} Same notations and assumptions
as in Theorem \ref{thmeq}.\\
If $\lambda_1<\lambda^{\prime}_1$,
then the random walk\xspace on $X_{V,W}$ is uniformly recurrent in law.
\end{cor}
\begin{proof}
This is a direct consequence of Proposition \ref{PmuUCH}:
since $P_{\mu}$ satisfies (UCH),
we only need to apply Proposition \ref{UCHurloi}.
\end{proof}
\section{Non-Recurrence in Law When $\lambda_1 \geq \lambda_1'$}
\label{l1lp1nrecloi}
\begin{quotation}
The goal of this Chapter is to show that the random walk on $X_{V,W}$
is nowhere recurrent in law
when $\lambda_1\geq\lambda_1'$ (Proposition \ref{l1lp1nonrecloiprop}).
\end{quotation}
\subsection{The Limit Measures}
\begin{quotation}
We recall in this section the definition and the properties of the
limit probability measures associated to a stationary measure.
\end{quotation}
The setting is very general.
Let $G$ be a locally compact group acting on a
second countable locally compact space $X$ and
$\mu$ be a probability measure on $G$.
Let $B$ be the product space $B=G^{\mathbb{N}^*}$ and
$\beta$ be the product measure $\beta=\mu^{\otimes\mathbb{N}^*}$.
The following lemma is due to Furstenberg. See \cite[Lem 3.2]{StatI}
or \cite[Lemma 2.17]{Livre}.
\begin{lem}
\label{nub}
Let $\nu$ be a $\mu$-stationary probability measure on $X$.
For $\beta$-almost every $b\in B$, the sequence
$(b_1\cdots b_n)_*\nu$
of probability measures on $X$ has a limit $\nu_b$,
which we will call the \emph{limit probability measure}.
Moreover, we have
$\nu=\int_{B}\nu_b\dd\beta(b).$
\end{lem}
The following construction will be useful
in Chapter \ref{secunista}. See \cite[Cor 3.5]{StatI} for a proof.
\begin{cor}
\label{corjoista}
Let $\nu_1$ and $\nu_2$ be two
$\mu$-stationary probability measures on $X$. Then
the probability measure on $X\times X$
\begin{equation}
\label{eqnjoista}
\nu_1\boxtimes\nu_2 := \int_B \nu_{1,\,b}\otimes\nu_{2,\,b}\dd \beta(b)
\end{equation}
is $\mu$-stationary. It is called the \emph{joining measure} of $\nu_1$
and $\nu_2$.
\end{cor}
This corollary will be used in combination with the following basic lemma.
\begin{lem}\label{diagonalesansatome}
Let $m_1$, $m_2$ be probability measures on a topological space $X$
and let $\Delta_X:=\{ (x,x)\mid x\in X\}$ be the diagonal of $X$.
If $m_1\otimes m_2 (\Delta_X) = 1$,
then $m_1$ and $m_2$ are identical Dirac measures.
\end{lem}
\begin{proof}
By assumption, we have
$m_1\otimes m_2 (\Delta_X) = \int_X m_1(\{x\}) \dd m_2 (x) = 1.$
Hence, for $m_2$-almost every $x\in X$, we have $m_1(\{x\})=1$, i.e. $m_1=\delta_x$.
Since $m_1$ can be the Dirac measure at only one point $x_0$,
$m_2$-almost every $x$ equals $x_0$, so that $m_1=m_2=\delta_{x_0}$.
\end{proof}
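On a finite state space the lemma reduces to the elementary inequality $\sum_x m_1(\{x\})\,m_2(\{x\})\leq 1$, with equality forcing both measures to be equal Dirac masses. A small numerical confirmation (our own illustration, with arbitrary parameters):

```python
import numpy as np

rng = np.random.default_rng(3)

def diagonal_mass(p, q):
    # (m1 x m2)(Delta) = sum_x m1({x}) m2({x}) for measures on a finite set
    return float(np.dot(p, q))

# Fully supported (hence non-Dirac) probability vectors: diagonal mass < 1.
masses = [diagonal_mass(rng.dirichlet(np.ones(5)), rng.dirichlet(np.ones(5)))
          for _ in range(50)]

# A pair of identical Dirac measures: diagonal mass exactly 1.
dirac = np.eye(5)[0]
dirac_mass = diagonal_mass(dirac, dirac)
```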
\subsection{No Stationary Measures on $X_{V,W}$}
\begin{quotation}
In this section, we again use
the same notations and assumptions as in Theorem \ref{thmeq}.
We will prove that
the space $X_{V,W}$ supports no $\mu$-stationary measures.
\end{quotation}
Recall that $W\subset V$ are real vector spaces,
$G$ is a Zariski connected algebraic subgroup of $\mathrm{GL}(V)$
preserving $W$ and satisfying $(H1)$, $(H2)$, $(H3)$.
Also recall that $\mu$ is a Zariski dense probability measure on $G$
with compact support, that $\lambda_1$ and $\lambda^{\prime}_1$
are the first Lyapunov exponents of $\mu$ in $W$ and in $W':=V/W$,
and that we are studying the associated random walk on
the $G$-space $X_{V,W}:=\Pj[V]\smallsetminus\Pj[W]$.
\begin{prop}\label{l1lp1nonrecloiprop} Same notations and assumptions
as in Theorem \ref{thmeq}.\\
If $\lambda_1 \geq \lambda^{\prime}_1$, then the random walk\xspace on $X_{V,W}$
is nowhere recurrent in law,
and there exists no $\mu$-stationary probability measure on $X_{V,W}$.
\end{prop}
\begin{proof} By Lemma \ref{urloimesinv}, the first assertion follows from the second one.
This second assertion is a consequence of the following Lemmas
\ref{liminfdnanfinie} and \ref{liminfdnaninfinie}.
\end{proof}
Let $B=G^{\mathbb{N}^*}$ and
$\beta=\mu^{\otimes\mathbb{N}^*}$. For $b=(b_1,b_2,\ldots)$ in $B$
we write as in \eqref{eqnagcgdg}:
\begin{equation}
\label{eqnancndn}
b_1\cdots b_n=\begin{pmatrix}
a_n & c_n \\
0 & d_n
\end{pmatrix}.
\end{equation}
\begin{lem}\label{liminfdnanfinie}
Same notations and assumptions
as in Theorem \ref{thmeq}.
If there exists a $\mu$-stationary probability measure
on $X_{V,W}$,
then for $\beta$-almost every $b\in B$,
we have
\begin{equation}
\label{eqnlimdan1}
\sup_{n\geq 1}\; \n[a_n]/\n[d_n]\; <\; \infty.
\end{equation}
\end{lem}
The proof of Lemma \ref{liminfdnanfinie} will be given in Section \ref{secaccpoi}.
It relies on the properties of the limit probability measures $\nu_b$.
\begin{lem}\label{liminfdnaninfinie}
Same notations and assumptions
as in Theorem \ref{thmeq}.
If $\lambda_1\geq \lambda^{\prime}_1$,
then for $\beta$-almost every $b\in B$,
one has
\begin{equation}\label{eqnlimdan2}
\sup_{n\geq 1}\; \n[a_n]/\n[d_n]\; =\; \infty.
\end{equation}
\end{lem}
The proof of Lemma \ref{liminfdnaninfinie} will be given
in Section \ref{secprocar}.
It relies on the law of large numbers
and on the law of the iterated logarithm for the random variables
$\log\| a_n\|\!-\!\log\|d_n\|$.
\subsection{Using the Limit Measures}
\label{secaccpoi}
\begin{quotation}
The aim of this section is to prove Lemma \ref{liminfdnanfinie}.
\end{quotation}
We will need the following analog of \cite[Prop. 3.7]{Livre}
for a non-irreducible action.
\begin{lem}\label{pasdemasseausousespace}
Same notations and assumptions
as in Theorem \ref{thmeq}.
Let $\nu$ be
a $\mu$-stationary probability measure
on $\Pj[V]$ such that $\nu(\Pj[W])=0$.
Then
for every proper subspace $U$ of $V$,
we have
$
\nu(\Pj[U])=0.
$
\end{lem}
\begin{proof}
Assume there exists a proper subspace $U$ of $V$
such that $\nu(\Pj[U])>0$. Let $r_0$ be the minimal dimension
of such a subspace $U$.
If $U_1$ and $U_2$ are two distinct vector subspaces of dimension $r_0$,
one has the equality
\[\nu(\Pj[U_1]\cup\Pj[U_2])=\nu(\Pj[U_1])+\nu(\Pj[U_2]).\]
Let $\alpha:=\sup \{\nu(\Pj[U]) \,|\, U\subset V,\, \dim U = r_0\}>0\;\;$
and consider the set
\[F= \{ U\subset V\,|\,\nu(\Pj[U])=\alpha,\, \dim U = r_0\}.\]
This set is finite and non-empty. By $\mu$-stationarity of $\nu$,
for $\mu$-almost every $g\in G$, we have $g^{-1}F=F$.
Therefore, since $\mu$ is Zariski dense in $G$,
this set $F$ is $G$-invariant.
Since $G$ is Zariski connected, all the subspaces $U$ belonging to $F$ are $G$-invariant.
But by $(H1)$, $(H2)$ and $(H3)$, the only proper
$G$-invariant subspace of $V$ is $W$.
This is contradictory since, by assumption, we have $\nu(\Pj[W])=0$.
\end{proof}
\begin{proof}[Proof of Lemma \ref{liminfdnanfinie}]
Assume that there exists a $\mu$-stationary probability measure $\nu$
on $X_{V,W}$.
In order to prove \eqref{eqnlimdan1}, it is enough to check that
for $\beta$-almost every $b\in B$,
for every accumulation point $\pi$ in ${\rm End}(V)$ of the sequence
$p_n:=\frac{b_1\cdots b_n}{\n[b_1\cdots b_n]}$,
the image of $\pi$ is not included in $W$:
\begin{equation}
\label{eqnimpinw}
\Ima \pi \not\subset W.
\end{equation}
Indeed, if \eqref{eqnlimdan1} failed, one would have
$\n[d_n]/\n[b_1\cdots b_n]\leq\n[d_n]/\n[a_n]\longrightarrow 0$
along a subsequence, and any accumulation point $\pi$
of $(p_n)$ along this subsequence would satisfy $\Ima \pi\subset W$.
Lemma \ref{pasdemasseausousespace}
shows that $\nu(\Pj[\Ker \pi])=0$,
hence the image probability measure $\pi_*\nu$ is well-defined
and the sequence ${p_n}_*\nu$ weakly converges to $\pi_*\nu$.
By Lemma \ref{nub} this sequence ${p_n}_*\nu$ also weakly converges to $\nu_b$,
and therefore we have
\[\pi_*\nu=\nu_b.\]
Therefore, for $\beta$-almost all $b$ in $B$ and every accumulation point $\pi$, one has
\[\nu_b(\Pj[\Ima\pi]) =1.\]
Since $\nu(\Pj[W])=0$, one also has, for $\beta$-almost all $b$ in $B$,
\[\nu_b(\Pj[W])=0, \]
and hence the images $\Ima\pi$ are not contained in $W$.
This proves \eqref{eqnimpinw}.
\end{proof}
\subsection{Using the Cartan Projection}
\label{secprocar}
\begin{quotation}
The aim of this section is to prove Lemma \ref{liminfdnaninfinie}.
\end{quotation}
Let $\rho$ be the natural projection
\begin{equation}
\label{eqnrhoggl}
\rho\; :\;
G\longrightarrow \mathrm{GL}(W)\times\mathrm{GL}(W')
\;\; ;\;\;
\begin{pmatrix}
a & c \\
0 & d
\end{pmatrix}
\longmapsto (a,d)\, .
\end{equation}
The image group
$
\overline{G}:=\rho(G)
$
is a reductive subgroup
of $\mathrm{GL}(W)\times\mathrm{GL}(W')$.
The image measure
$
\overline{\mu}:=\rho_*\mu
$
is a Zariski dense probability measure on $\overline{G}$.
The proof of Lemma \ref{liminfdnaninfinie} will use the notations of
Appendix \ref{seclimlaw}
with the reductive group $\overline{G}$
and its probability measure $\overline{\mu}$.
In particular,
$\Lie[g]$ is the Lie algebra of $\overline{G}$,
$\Lie[a]$ is the Lie algebra of a maximal split torus of $\overline{G}$,
$\kappa$ is the Cartan projection,
$\sigma_{\overline{\mu}}$ is the Lyapunov vector,
$\Phi_{\overline{\mu}}$ is the covariance $2$-tensor,
$\Lie[a]_{\overline{\mu}}$ is its linear span, and
$K_{\overline{\mu}}$ is the unit ball of $\Lie[a]_{\overline{\mu}}$.
We will also use the following two lemmas.
We set $r=\dim W$ and $r'=\dim W'$.
\begin{lem}
\label{lemhigdis}
The highest weights $\chi$ and $\chi'$ of the representations
of $\overline{G}$ in $W$ and $W'$ are distinct.
\end{lem}
\begin{proof}
Since $\overline{G}$
is Zariski connected, Condition $(H1)$ tells us that $W$ and $W'$
are irreducible representations of $\Lie[g]$ and that
their highest weight spaces
are one dimensional. Condition $(H2)$ tells us that these
representations of $\Lie[g]$ are not equivalent.
Therefore as in \cite[Chap 8.6.3]{BourGALie78}, the highest weights
$\chi$ and $\chi'$ must be distinct.
\end{proof}
\begin{lem}\label{centreoverG}
The center $Z$ of $\overline{G}$ is equal to $Z=\{(
\alpha I_r ,\beta I_{r'})\in \overline{G}\,|\, \alpha,\,\beta \in\mathbb{R}^*\}.$
\end{lem}
\begin{proof} By Schur's lemma,
the commutant of $\overline{G}$ in ${\rm End}(W)$
is a division algebra.
Since the representation of $\overline{G}$ in $W$ is proximal,
this commutant is the field $\mathbb{R}$ of scalar matrices.
Therefore $Z$ acts on $W$ (and also on $W'$) by scalar matrices.
\end{proof}
\begin{proof}[Proof of Lemma \ref{liminfdnaninfinie}]
Fix norms on $W$ and $W'$ as in Lemma \ref{representationgeometriquecartaniwas},
so that, for any element $g=(a,d)$ in $\overline{G}$
with $a\in\mathrm{GL}(W)$, $d\in\mathrm{GL}(W')$, one has
\begin{equation}
\label{eqnlogadg}
\log \|a\|\; =\; \chi(\kappa(g))
\;\; {\rm and}\;\;
\log \|d\|\; =\; \chi'(\kappa(g)).
\end{equation}
In particular, the first Lyapunov exponents in $W$ and $W'$ are given by
\begin{equation*}
\lambda_1\; =\; \chi(\sigma_{\overline{\mu}})
\;\; {\rm and}\;\;
\lambda^{\prime}_1\; =\;\chi'(\sigma_{\overline{\mu}}).
\end{equation*}
Let $\overline{B}=\overline{G}^{\mathbb{N}^*}$ and $\overline{\beta}=\overline{\mu}^{\otimes \mathbb{N}^*}$.
For $b=(b_1,b_2,\ldots)\in\overline{B}$, we write
$b_1\cdots b_n=(a_n,\,d_n).$
We distinguish three cases:
\vspace{1em}
\noindent {\bf First case:} $\lambda_1 > \lambda^{\prime}_1$.\;\;
In this case one has $(\chi-\chi')(\sigma_{\overline{\mu}}) >0$.
According to \eqref{eqnlogadg} and the Law of Large Numbers \ref{corinv},
for $\overline{\beta}$-almost every $b\in \overline{B}$,
we have
\begin{equation*}
\lim_{n\rightarrow\infty}\log (\n[a_n]/\n[d_n])
\; =\;
\lim_{n\rightarrow\infty}(\chi-\chi')(\kappa(b_1\cdots b_n))
\; =\;
\infty .
\end{equation*}
\noindent {\bf Second case:} $\lambda_1=\lambda^{\prime}_1$ and $ (\chi-\chi')(\Lie[a]_{\overline{\mu}})\neq 0$.\;
In this case, one has $(\chi-\chi')(\sigma_{\overline{\mu}})=0$
and there exists $x$ in the unit ball $K_{\overline{\mu}}$ of
$\Lie[a]_{\overline{\mu}}$
such that $(\chi-\chi')(x)>0.$
According to the Law of the Iterated Logarithm \ref{corinv},
for $\overline{\beta}$-almost every $b\in\overline{B}$,
there exists an increasing sequence of integers $n_i$ such that
\[\lim_{i\rightarrow\infty}\frac{\kappa(b_1\cdots b_{n_i})-n_i\,\sigma_{\overline{\mu}}}{\sqrt{2n_i\log \log n_i}}\; =\; x,\]
and therefore such that
\begin{equation*}
\lim_{i\rightarrow\infty}\log (\n[a_{n_i}]/\n[d_{n_i}])
\; =\;
\lim_{i\rightarrow\infty}(\chi-\chi')(\kappa(b_1\cdots b_{n_i}))
\; =\;
\infty .
\end{equation*}
\noindent {\bf Third case:} $\lambda_1=\lambda^{\prime}_1$ and $(\chi-\chi')(\Lie[a]_{\overline{\mu}})= 0$.\;
Let
$$
S:=\{(a,d)\in \overline{G}\mid |\det a|=|\det d|=1\}.
$$
Since the group $\overline{G}$ is reductive,
by Lemma \ref{centreoverG}, the subgroup $S$ is semisimple.
Let $\Lie[s]$ be the Lie algebra of $S$.
By \cite[Thm 13.19]{Livre},
we have $\Lie[a]\cap \Lie[s]\subset \Lie[a]_{\overline{\mu}}$,
and thus also
\begin{equation}
\label{eqnchias}
(\chi-\chi')(\Lie[a]\cap \Lie[s])=0.
\end{equation}
We introduce the group morphism $\delta$ defined by:
$$
\delta\; :\; \overline{G}\longrightarrow \mathbb{R}
\;\; ;\;\;
(a,d)\longmapsto \frac{1}{r}\log|\det a| -\frac{1}{r'}\log|\det d|.
$$
For every $g=(a,d)$ in $\overline{G}$,
we can write $g=sz$ with $s\in S$ and $z\in Z$.
Using Equations \eqref{eqnlogadg}, \eqref{eqnchias}
and the equality $\kappa(g)=\kappa(s)+\kappa(z)$,
we compute,
\begin{equation}
\label{eqnchichi}
\log\left(\n[a]/\n[d]\right)
\; =\;
(\chi-\chi')(\kappa(g))
\; =\;
(\chi-\chi')(\kappa(z))
\; =\;
\delta(z)
\; =\;
\delta(g).
\end{equation}
We want to describe the behavior of the random variable
$
T_n=\log \left(\n[a_n]/\n[d_n]\right)
$
on $\overline{B}$ where as above $(a_n,\,d_n)= b_1\cdots b_n$.
Using Equation \eqref{eqnchichi}, we see that
$$
T_n =\delta(b_1\cdots b_n)= \delta(b_1)+\cdots +\delta(b_n)
$$
is the sum of $n$ real-valued independent and identically distributed
random variables $\delta(b_i)$.
Note that the law of the variable $\delta(b_1)$ has compact support.
Since $\lambda_1=\lambda^{\prime}_1$, we have $\mathbb{E}(\delta(b_1))=\frac1n\mathbb{E}(T_n)\xrightarrow[n\tend\infty]{} 0$.
Thus the variable $\delta(b_1)$ is centered.
If this random variable $\delta(b_1)$ were almost surely $0$,
it would mean that for $\overline{\mu}$-almost every
$g=(a,\,d)\in \overline{G}$, we have $\delta(g)=0$.
Since $\overline{\mu}$ is Zariski dense in $\overline{G}$,
this would imply that $\delta$ vanishes identically on $\overline{G}$, or, equivalently,
\begin{equation}
\label{eqndegnul}
(\chi-\chi')(\Lie[z])=0,
\end{equation}
where $\Lie[z]\subset\Lie[a]$ is the Lie algebra of $Z$.
Equalities \eqref{eqnchias} and \eqref{eqndegnul}
would tell us that the highest weights $\chi$ and $\chi'$ were equal.
This would contradict Lemma \ref{lemhigdis}.
Therefore this centered variable $\delta(b_1)$ is not almost surely $0$.
Thus the classical recurrence properties of real random walks
(cf.\ e.g.\ \cite[Thm 3.38]{Brei})
tell us that $\sup_{n\geq 1}T_n=\infty$ almost surely.
\vspace{1em}
In each of these three cases, we have checked \eqref{eqnlimdan2}.
\end{proof}
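The recurrence property invoked in the third case, namely that a centered, non-degenerate real random walk with bounded steps satisfies $\sup_{n\geq 1}T_n=\infty$ almost surely, is easy to observe numerically. The sketch below (our illustration; the $\pm 1$ step law is an arbitrary centered, compactly supported choice) tracks the running maximum of such a walk:

```python
import numpy as np

rng = np.random.default_rng(4)

# Centered iid steps with compact support, as for delta(b_1) in the proof.
steps = rng.choice([-1.0, 1.0], size=200_000)
walk = np.cumsum(steps)
running_max = np.maximum.accumulate(walk)

# The running maximum keeps growing, at the scale sqrt(2 n log log n)
# suggested by the law of the iterated logarithm.
final_max = running_max[-1]
```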
\section{Uniqueness of the Stationary Measure}\label{thmequnicite}
\label{secunista}
\begin{quotation}
The main aim of this chapter is to prove the uniqueness of the stationary measure on $X_{V,W}$ (Proposition \ref{unicitedirac}).
\end{quotation}
\subsection{No Stationary Measures on $Y_{V,W}$}
\begin{quotation}
The proof of uniqueness will rely on the following Lemma
\ref{lemyvw}.
\end{quotation}
We keep the notations and assumptions
of Theorem \ref{thmeq}.
Let $p$ be the projection
$$
p\; :\; X_{V,W}\longrightarrow \Pj[W']
\;\; ;\;\; [v]\longmapsto [v+W]
$$
and let $Y_{V,W}$
be the $G$-invariant subvariety of $X_{V,W}^2$
\[Y_{V,W}:= \{(x,\,x')\in X_{V,W}^2\,|\,p(x)=p(x'),\, x\neq x'\}.\]
\begin{lem}\label{lemyvw}
Same notations and assumptions
as in Theorem \ref{thmeq}.\\
There is no $\mu$-stationary probability measure $\widetilde{\nu}$ on $Y_{V,W}$.
\end{lem}
\begin{proof}
Suppose that such a measure $\widetilde{\nu}$ does exist.
Consider again the natural projection
$\rho : G\longrightarrow \mathrm{GL}(W)\times\mathrm{GL}(W')$
introduced in \eqref{eqnrhoggl}.
Let
$
\overline{G}:=\rho(G)
$
be the image of $G$ by $\rho$, a reductive subgroup
of $\mathrm{GL}(W)\times\mathrm{GL}(W')$,
and let
$
\overline{\mu}:=\rho_*\mu
$
be the image of $\mu$ by $\rho$,
a Zariski dense probability measure on $\overline{G}$.
Now consider the map
$$
f\; :\;
Y_{V,W}\longrightarrow \overline{Y}
\;\; ;\;\;
( \lbrack w_1,\, w' \rbrack ,\lbrack w_2,\, w' \rbrack)
\longmapsto
\lbrack w_1-w_2,\, w' \rbrack ,
$$
where $\overline{Y}:=\Pj[W\oplus W']\smallsetminus (\Pj[W]\cup\Pj[W'])$.
Let $\overline{\nu}=f_*\widetilde{\nu}$
be the probability measure on $\overline{Y}$
that is the image of $\widetilde{\nu}$ by $f$.
Since the map $f$ is equivariant, the probability measure
$\overline{\nu}$ is $\overline{\mu}$-stationary.
According to Proposition \ref{BQorbitescompactesmeasures}
such a measure $\overline{\nu}$ is supported by a compact
$\overline{G}$-orbit in $\overline{Y}$. This contradicts the following
Lemma \ref{pasdorbitescompactesdansX}.
\end{proof}
\begin{lem}\label{pasdorbitescompactesdansX}
There are no compact $\overline{G}$-orbits in $\overline{Y}$.
\end{lem}
\begin{proof}
Such a compact orbit would be of the form $\overline{G}/\overline{H}$,
where $\overline{H}$ is an algebraic subgroup of $\overline{G}$
containing a conjugate of the group $\overline{A}\,\overline{N}$
with $\overline{A}$ a maximal split subtorus of $\overline{G}$
and $\overline{N}$ a maximal unipotent subgroup normalized by $\overline{A}$.
Since $W$ and $W'$ are proximal irreducible representations of $\overline{G}$,
there is only one $\overline{N}$-invariant line $\mathbb{R} v$
in $W$ and one $\mathbb{R} v'$ in $W'$.
Hence the $\overline{N}$-invariant lines
in $W\oplus W'$ are included in the plane $\mathbb{R} v\oplus\mathbb{R} v'$.
Since, by Lemma \ref{lemhigdis},
the highest weights $\chi$ and $\chi'$ of $W$ and $W'$
are distinct, the lines $\mathbb{R} v$ and $\mathbb{R} v'$
are the only $\overline{A}$-invariant lines in $\mathbb{R} v \oplus \mathbb{R} v'$.
Therefore, a compact $\overline{G}$-orbit in $\Pj[W\oplus W']$
is contained in $\Pj[W]\cup\Pj[W']$.
\end{proof}
\subsection{Proof of Uniqueness}
\begin{quotation}
We can now show the uniqueness of
the $\mu$-stationary probability measure $\nu$ on $X_{V,W}$.
The same proof will tell us that its
limit probability measures $\nu_b$
are Dirac measures.
\end{quotation}
\begin{prop}
\label{unicitedirac}
Same notations and assumptions
as in Theorem \ref{thmeq}.\\
If $\lambda_1<\lambda^{\prime}_1$, the $\mu$-stationary probability measure
$\nu$ on $X_{V,W}$ is unique.\\
Moreover, the limit measures $\nu_b$ are $\beta$-almost surely Dirac measures.
\end{prop}
\begin{proof}[Proof of Proposition \ref{unicitedirac}]
Let $\nu_1$ and $\nu_2$ be two $\mu$-stationary probability measures on $X_{V,W}$.
By Corollary \ref{corjoista} the joining measure $\nu_1\boxtimes\nu_2$
on $X_{V,W}^2$ is $\mu$-stationary.
Let us show that its support is contained in the subvariety
\[Z_{V,W}:= \{(x,\,x')\in X_{V,W}^2\,| \,p(x)=p(x')\},\]
where $p:X_{V,W}\rightarrow \Pj[W']$ is again the canonical projection.
Since the action of $G$ on $W'$ is irreducible and proximal,
there exists a unique $\mu$-stationary measure $\nu'_0$
on $\Pj[W']$ called the {\it Furstenberg measure}. Its limit probability
measures $\nu'_{0,\,b}$
are $\beta$-almost surely Dirac measures
$\delta_{\xi_b}$
for some $\xi_b\in\Pj[W']$.
See \cite[Prop. 3.7]{Livre} for more detail on the Furstenberg measure.
Since $\nu'_0$ is unique,
we have the equalities
$$
p_*\nu_1=p_*\nu_2=\nu'_0 .
$$
Therefore, for $\beta$-almost every $b\in B$, we have
$$
p_*\nu_{1,b}=p_*\nu_{2,b}=\delta_{\xi_b},
$$
and hence
$$
\nu_{1,b}\otimes \nu_{2,b}(Z_{V,W})=1.
$$
By the very definition \eqref{eqnjoista} of the joining measure, integrating this equality gives
$$
\nu_1\boxtimes\nu_2(Z_{V,W})=1.
$$
By definition, this set $Z_{V,W}$ is the union $Z_{V,W}=Y_{V,W}\cup \Delta_{X_{V,W}}$.
By Lemma \ref{lemyvw},
the $G$-variety $Y_{V,W}$ does not support $\mu$-stationary measures.
Therefore the joining measure
$\nu_1\boxtimes\nu_2$
is supported on the diagonal $\Delta_{X_{V,W}}$.
Hence, for $\beta$-almost every $b$ in $B$, the measure
$\nu_{1,b}\otimes\nu_{2,b}$ is also supported on the diagonal:
$
\nu_{1,b}\otimes\nu_{2,b}(\Delta_{X_{V,W}})=1.
$
Therefore, by Lemma \ref{diagonalesansatome},
the limit probability measures $\nu_{1,b}$ and $\nu_{2,b}$ are
both equal to the same Dirac measure. Hence, by Lemma \ref{nub}, one has $\nu_1=\nu_2$.
\end{proof}
\subsection{Limit of Means of Transition Probabilities}
\label{secconmea}
\begin{quotation}
In this section we prove that the sequence of
means of the transition probabilities $\mu^{*n}*\delta_x$
on $X_{V,W}$ always has a limit.
\end{quotation}
\begin{cor}\label{corthmeqlunlpun}
Same notations and assumptions
as in Theorem \ref{thmeq}.\\
Let $x\in X_{V,W}$.\\
a) When $\lambda_1\geq\lambda^{\prime}_1$,
one has the weak convergence
$\frac{1}{n}\sum_{j=1}^n \mu^{*j}* \delta_x\xrightarrow[n\tend\infty]{} 0.$\\
b) When $\lambda_1<\lambda^{\prime}_1$,
one has the weak convergence
$\frac{1}{n}\sum_{j=1}^n \mu^{*j}* \delta_x\xrightarrow[n\tend\infty]{} \nu.$
\end{cor}
\begin{proof}[Proof of Corollary \ref{corthmeqlunlpun}]
Every accumulation point
of the sequence of probability measures
$\frac{1}{n}\sum_{j=1}^n \mu^{*j}* \delta_x$
is a $\mu$-stationary finite measure.
When $\lambda_1\geq\lambda^{\prime}_1$, by Proposition \ref{l1lp1nonrecloiprop},
such a measure is necessarily $0$.
When $\lambda_1<\lambda^{\prime}_1$, by Corollary \ref{l1lp1urloi},
the corresponding Markov chain is recurrent in law;
hence, no mass is lost
and the accumulation points are thus $\mu$-stationary probability measures.
By Proposition \ref{unicitedirac}, there is only one such measure.
\end{proof}
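As a hedged illustration of the convergence in Corollary \ref{corthmeqlunlpun}~b), the following sketch shows the elementary finite-state analogue: for an irreducible aperiodic Markov chain, the Cesàro means of the transition probabilities started from any state converge to the unique stationary distribution. The chain below is an arbitrary toy example, unrelated to the setting of the theorem:

```python
import numpy as np

# Toy irreducible, aperiodic transition matrix (rows sum to 1).
P = np.array([[0.5, 0.5, 0.0],
              [0.2, 0.3, 0.5],
              [0.3, 0.3, 0.4]])

# Stationary distribution: left eigenvector of P for eigenvalue 1.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
pi = pi / pi.sum()

# Cesaro mean (1/n) sum_{j=1}^n delta_x P^j, started from state 0.
x = np.array([1.0, 0.0, 0.0])
n = 2000
acc = np.zeros(3)
cur = x.copy()
for _ in range(n):
    cur = cur @ P        # one more step of the chain
    acc += cur
cesaro = acc / n

print("Cesaro mean :", cesaro)
print("stationary  :", pi)
```

The two printed vectors agree to good accuracy, mirroring the weak convergence stated in part b) in this much simpler setting.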
\section{Introduction}
ALICE (A Large Ion Collider Experiment) \cite{JINST} is a general-purpose heavy-ion experi\-ment at the CERN LHC. Its main goal is to study the physics properties of the quark-gluon plasma (QGP).
During LHC Run~1 (2009-2013) and Run~2 (2015-2018), the analysis of the data collected in heavy-ion collisions allowed the observation of hot hadronic matter at unprecedented values of temperature, density and volume. These studies confirmed the basic picture, which had emerged from the experimental investigation at lower energies, of the QGP as an almost inviscid liquid, and have, for example, provided clear evidence of the importance of regeneration in the production of J/$\Psi$ mesons in the freeze-out of the QGP, as well as precision measurements of the energy loss of light and heavy quarks, allowing the relevant transport coefficients to be determined.
The study of the strongly-interacting state of matter in the second generation of LHC heavy-ion studies in LHC Run~3 (2022-2024) and Run~4 (2027-2030) will focus on rare processes such as the production of heavy-flavour particles, quarkonium states, real and virtual photons and heavy nuclear states \cite{ALICEupLoI}. The earlier methods of triggering will be limited for many of these measurements, particularly at low $p_{\rm T}$. Therefore, the \mbox{ALICE} collaboration planned to upgrade the LHC Run~1/Run~2 detector by enhancing its low-momentum vertexing and tracking capabilities, allowing data taking at substantially higher rates while preserving the already remarkable particle identification capabilities. A second upgrade phase, to be completed before LHC Run~4, foresees a further upgrade of the inner tracking system and the introduction of a new forward calorimeter detector.
The ALICE Collaboration is defining its future beyond LHC Run~4 (from 2032 onward) by proposing a completely new LHC experiment to be installed in place of the current ALICE detector \cite{ALICE3EoI}. It will be dedicated to the high-statistics study of the production of heavy-flavour hadrons and of soft electromagnetic and hadronic radiation produced in high-energy proton-proton and nuclear collisions, opening a new window that will provide a better understanding of the initial temperature of the QGP produced in the collisions, as well as new insights into the interplay between the thermalisation of heavy flavour and hadron formation, and into the nature of chiral symmetry breaking, beyond what is planned for the next ten years.
\section{ALICE~2: readiness for LHC Run~3}
The LHC long shutdown 2 (LS2, 2019-2021) is going to be completed soon and ALICE is finally preparing for data taking during LHC Run~3.
Here is the list of upgrades ALICE underwent during LS2:
\begin{itemize}
\item Reduction of the beam-pipe radius from \mbox{29.8 mm} to \mbox{19.2 mm}.
\item Installation of two new high-resolution, high-granularity, low material budget silicon trackers:
\begin{itemize}
\item Inner Tracking System (ITS~2) \cite{ITSUPTDR} at central pseudo-rapidity.
\item Muon Forward Tracker (MFT) \cite{MFTTDR} covering forward pseudo-rapidity.
\end{itemize}
\item Replacement of the endcap wire chambers of the Time Projection Chamber by Gas Electron Multiplier (GEM) detectors and installation of new readout electronics allowing continuous readout \cite{TPCUPTDR}.
\item Replacement of the Resistive Plate Chambers of the Muon Identifier that showed ageing effects during LHC Run~2 \cite{RDOUPTDR}.
\item Upgrades of the forward trigger detectors (Fast Interaction Trigger, FIT) \cite{RDOUPTDR}.
\item Upgrades of the readout electronics of the Transition Radiation Detector, Time-Of-Flight, Photon Spectrometer, Muon Spectrometer and Zero Degree Calorimeter for high rate operation \cite{RDOUPTDR}.
\item Upgrades of online and offline systems (O$^{2}$ project) \cite{O2TDR} in order to cope with the expected data volume.
\end{itemize}
\begin{figure}[tb]
\centerline{\includegraphics[width=12.5cm]{./FIG_A2}}
\caption{
Left: example of muon tracks, generated during the SPS to LHC transfer line test stopping the beam at the Target Extraction Dump (TED) close to P2, as reconstructed in the MFT detector. Right: example of cosmic ray track reconstructed in the ITS.}
\label{FigA}
\end{figure}
\begin{figure}[tb]
\centerline{\includegraphics[width=12.5cm]{./FIG_A4}}
\caption{ALICE event display example for an event recorded during the proton-proton collisions provided by LHC during the pilot runs, 27 - 31 October 2021. Left: 3D view. Top right: transverse view. Bottom right: longitudinal view.}
\label{FigA3}
\end{figure}
All the needed interventions were completed by August 2021, when \mbox{ALICE} started global commissioning with most of the detector systems.
The effort in this period focused on the integration of the single detectors into the common readout and detector control systems. During this period a large cosmic-ray campaign was carried out, in order to verify the detector responses, the data-acquisition workflow and the detector control system. An example of a cosmic-ray track reconstructed in the ITS is shown in Fig. \ref{FigA} (right canvas). The first global verification of the detector performance with a large number of tracks took place at the beginning of October 2021, during the verification of the beam transfer procedure between the SPS and the LHC. During these tests, the beam is dumped in the Target Extraction Dump, creating an intense flux of particles (mainly muons) at forward rapidity that can be seen by the ALICE detector. Fig. \ref{FigA} (left canvas) shows the tracking capabilities of the MFT in this beam condition.
The next important commissioning step took place during the LHC pilot runs at the end of October 2021, when real proton--proton collisions were provided by the machine, allowing first physics event reconstruction to be performed (Fig. \ref{FigA3}).
\section{ALICE~2: R$\&$D activities for LHC Run~4}
The physics program of the experiment during LHC Run~4 could be extended and improved thanks to recent innovations in the field of silicon imaging sensor technology that open extraordinary opportunities for new detector concepts. The ALICE Collaboration presented two Letters of Intent for a further upgrade of the Inner Tracking System (ITS~3)\cite{ITS3LoI} and for the installation of a calorimeter in the forward region (FoCal)\cite{FoCalLoI}.
\subsection{ITS~3}
The ITS~2 consists of seven cylindrical detector layers based on CMOS Monolithic Active Pixel Sensors (MAPS), named ALPIDE (ALice PIxel DEtector), covering a 10 m$^{2}$ area with about 12.5 billion pixels. This sensor has dimensions of 3~$\times$~1.5~cm$^{2}$ and is thinned down to 50~$\mu$m (for the three innermost layers, the inner barrel). Each layer is azimuthally segmented in units named staves, consisting of the following main components: a carbon fiber support structure, a carbon plate that embeds the cooling pipes, an assembly of ALPIDE sensors (whose number depends on the radial position of the corresponding layer), a polyimide flexible printed circuit (for chip configuration and data streaming), and (for the four outermost layers) a power bus. The binary readout with zero suppression, implemented in the ALPIDE, keeps the average power density below 40~mW/cm$^{2}$, thus allowing operation at room temperature using water cooling.
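The numbers quoted above allow a quick back-of-envelope cross-check; the derived figures below are illustrative estimates, not official specifications:

```python
# Rough consistency checks on the quoted ITS 2 numbers
# (illustrative back-of-envelope estimates only).
area_m2 = 10.0                 # total sensitive area from the text
n_pixels = 12.5e9              # total pixel count from the text
power_density_mw_cm2 = 40.0    # upper bound quoted in the text

area_cm2 = area_m2 * 1e4
total_power_w = power_density_mw_cm2 * area_cm2 * 1e-3
pixel_pitch_um = (area_cm2 / n_pixels) ** 0.5 * 1e4

print(f"total power budget  ~ {total_power_w:.0f} W")
print(f"implied pixel pitch ~ {pixel_pitch_um:.0f} um")
```

The implied pitch of roughly 28~$\mu$m and a total power budget of a few kW are consistent with a water-cooled, room-temperature operation of a 10 m$^{2}$ pixel detector.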
In parallel with the reduction of the radial position of the first layer, one of the key detector design features to improve the momentum resolution and tracking performance, especially for particles with low transverse momentum ($p_{\rm T}$ $<$ 1~GeV/c), is the reduction of the material budget. The ITS~2 design, with its reduced support structures and sensor thickness, achieved the excellent result of an estimated mean material budget of 0.35\%~X/X$_{0}$ for each of the three innermost layers. The angular distribution of the material budget, and the contribution from each component material, for two adjacent staves in the first layer of the ITS~2 is reported in Fig.~\ref{FigB} (left). The following considerations can be made:
\begin{itemize}
\item the material budget is, to a large extent, due to passive components such as the water cooling, carbon and kapton support structures, and aluminium wires;
\item many irregularities are visible, e.g. due to the overlap of adjacent staves (needed to grant hermeticity) or to the presence of the water cooling pipes;
\item silicon makes up only about 15\% of the total material budget.
\end{itemize}
\begin{figure}[tb]
\centerline{\includegraphics[width=12.5cm]{./FIG_B}}
\caption{Material budget angular distribution of two adjacent staves in the ITS~2 configuration (left) and leaving only the silicon sensor (right). Thickness of the silicon is $\approx$ 0.05\%~X/X$_{0}$.}
\label{FigB}
\end{figure}
The new design of the ITS inner barrel for Run~4 aims to avoid the passive material, keeping essentially just the silicon layer. Such a detector would achieve an unprecedentedly low material budget of about 0.05\%~X/X$_{0}$ per layer (Fig.~\ref{FigB}, right). The construction of the detector requires (i) reducing the power consumption below 20~mW/cm$^{2}$, in order to cool the sensor by airflow only, (ii) integrating the power and data buses on the chip, in order to remove any other flex covering the sensitive region, and (iii) relying on the stiffness of large-size, bent silicon wafers, in order to remove the mechanical support structures. Such a low value of the power consumption can be achieved by moving the sensor periphery, including the serial link, to the edge of the chip and by using the 65~nm CMOS technology. Large-size MAPS with an area of up to 21~cm $\times$ 21~cm can be developed on wafers 300~mm in diameter using a stitching technique. The reduction of the sensor thickness to about 20 -- 40~$\mu$m will open the possibility of exploiting the flexible nature of silicon to implement large-area curved sensors. In this way, it will become possible to build a cylindrical layer of silicon-only sensors, as depicted in Fig.~\ref{FigC} (left). The installation of a new beam pipe with a smaller radius (inner radius 16~mm) and thickness (500~$\mu$m) will allow the first layer to be placed even closer to the interaction point (from 23~mm for the first layer of the ITS~2 to 18~mm).
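The quoted material budget of about 0.05\%~X/X$_{0}$ per silicon-only layer can be cross-checked against the standard radiation length of silicon, X$_{0} \approx 9.37$~cm (a textbook value, not taken from this document):

```python
# Back-of-envelope check of the quoted material budget for a bare
# silicon sensor, using the standard silicon radiation length.
x0_si_cm = 9.37  # radiation length of silicon (standard value)

for thickness_um in (20, 40, 50):
    frac = thickness_um * 1e-4 / x0_si_cm   # thickness in cm over X0
    print(f"{thickness_um} um silicon -> {100 * frac:.3f} % X/X0")
```

A 50~$\mu$m sensor indeed corresponds to about 0.053\%~X/X$_{0}$, matching the value quoted above.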
\begin{figure}[tb]
\centerline{\includegraphics[width=12.5cm]{./FIG_C}}
\caption{Left: proposed design for the inner barrel of the ITS in Run 4. Right: ITS 3 mechanical prototype assembly.}
\label{FigC}
\end{figure}
\begin{figure}[tb]
\centerline{\includegraphics[width=12.5cm]{./FIG_D}}
\caption{Comparison of the tracking performance in Run 3 (blue color) and Run 4 (red color). Left: pointing resolution of tracks in the plane transverse to the beam. Right: tracking efficiency.}
\label{FigD}
\end{figure}
Performance studies made for the ITS~3 indicate a further improvement of the pointing resolution by approximately a factor of 2 and an increase of the stand-alone tracking efficiency by a factor of 2 for $p_{\rm T} <$~100~MeV/c (Fig. \ref{FigD}). The improvement of the vertexing performance and the reduction of the material budget will have a dramatic impact on the measurement of charm and beauty hadrons at low transverse momentum, as well as on the measurement of low-mass and low-$p_{\rm T}$ dielectrons. The impact on various physics channels has been studied, showing, for example, that the significance of the $\Lambda_{c}$ in lead--lead collisions increases by almost a factor of 4.
\begin{figure}[tb]
\centerline{\includegraphics[width=12.5cm]{./FIG_E}}
\caption{Left: hit inefficiency as a function of threshold for different rows and incident angles as measured in a bent ALPIDE. Right: $\mu$ITS3 assembly.}
\label{FigE}
\end{figure}
Current R\&D lines are exploring detector integration, sensor performance and chip design.
The possibility to bend an ALPIDE chip has been explored, successfully reaching the ITS~3 target radii without breaking for silicon thicknesses below 50~$\mu$m. An extensive study established the best commercially available carbon foam material to be used for the mechanical supports, targeting an excellent thermal conductivity with a low density. A full-size mechanical prototype, using a large-dimension, 50~$\mu$m thick, blank wafer, has been assembled, allowing the development of bending tools and the verification of the carbon foam support structures (Fig.~\ref{FigC}, right).
ALPIDE sensors characterised in the laboratory showed no performance degradation due to the bending process: the noise level and the number of dead pixels remained unchanged, and the difference in the pixel threshold distribution over the matrix is negligible. The high detection efficiency of a single ALPIDE sensor has been measured at the DESY test beam facility and found to be preserved also in the bent configuration (Fig.~\ref{FigE}, left): below a threshold of 100~e$^{-}$ (the nominal operating point in the ITS~2), the hit inefficiency is generally lower than 10$^{-4}$, independently of the incident angle or the position on the chip. Additionally, an assembly made of 6 ALPIDE sensors bent around cylinders of the ITS~3 target radii, with open windows in correspondence with the sensors (the $\mu$ITS3, Fig. \ref{FigE}, right), has been tested with beam to verify the tracking and vertexing capabilities.
The new chip design in the 65~nm technology reached a first milestone in June 2021 with the production of the multi-layer reticle 1 (MLR1), including first test structures such as transistor test structures, analog building blocks, various diode matrices and digital test matrices. The characterisation of these structures in the laboratory and with beams is ongoing. The next important milestone will be the first engineering run, including the first implementation of the stitching technique.
\subsection{FoCal}
\begin{figure}[tb]
\centerline{\includegraphics[width=8cm]{./FIG_F}}
\caption{Approximate (x,Q) coverage for measurement of deep inelastic scattering in various experiments.}
\label{FigF}
\end{figure}
The Forward Calorimeter project (FoCal) extends the scope of ALICE by adding new capabilities to explore the small-x parton structure of nucleons and nuclei. In particular, the FoCal provides unique capabilities at the LHC to investigate Parton Distribution Functions in the as-yet unexplored regime of Bjorken-x down to \mbox{x $\sim$ 10$^{-6}$} and low momentum transfer \mbox{Q $\sim$ 4 GeV/c} (Fig.~\ref{FigF}), where it is expected that the hadronic structure evolves non-linearly due to the high gluon densities. Such effects are a necessary consequence of the non-Abelian nature of quantum chromodynamics (QCD), and their observation and characterisation would be a landmark in our understanding of the strong interaction. The main goals of the FoCal physics program are to: (i) quantify the nuclear modification of the gluon density in nuclei at small-x and Q$^{2}$ by measuring isolated photons in proton--proton and proton--lead collisions; (ii) investigate non-linear QCD evolution by measuring azimuthal $\pi^{0}$--$\pi^{0}$ correlations and isolated $\gamma$--$\pi^{0}$ correlations in proton--proton and proton--lead collisions; (iii) investigate the origin of long range flow-like correlations by correlating neutral meson production over a large range in rapidity in proton--proton and proton--lead collisions; (iv) quantify parton energy loss at forward rapidity by measuring high-$p_{\rm T}$ neutral pion production in lead--lead collisions.
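The quoted Bjorken-x reach can be sketched with the standard leading-order kinematic estimate $x \sim (Q/\sqrt{s})\,e^{-y}$; this is an illustrative approximation, and the collision energy used below is an assumption, not a number from the text:

```python
import math

# Leading-order estimate of the probed Bjorken-x at forward rapidity,
# x ~ (Q / sqrt(s)) * exp(-y). Illustrative only: the exact reach
# depends on the process and on the collision system.
sqrt_s_gev = 14000.0   # assumed pp collision energy (assumption)
Q_gev = 4.0            # momentum transfer quoted in the text

for y in (3.4, 5.8):   # edges of the FoCal acceptance
    x = Q_gev / sqrt_s_gev * math.exp(-y)
    print(f"y = {y}: x ~ {x:.1e}")
```

At the upper edge of the acceptance the estimate gives $x \sim 10^{-6}$, consistent with the reach quoted above.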
The FoCal design is presented in Fig.~\ref{FigG}. The detector will be placed outside the ALICE solenoid magnet at 7~m from the interaction point, covering pseudo-rapidity range 3.4 $< \eta <$ 5.8, and will consist of an electromagnetic (FoCal-E) and a hadronic (FoCal-H) calorimeter.
\begin{figure}[tb]
\centerline{\includegraphics[width=12.5cm]{./FIG_G}}
\caption{ Installation of the FoCal at the 7 m location from the interaction point (IP) with FoCal-E and FoCal-H detectors.}
\label{FigG}
\end{figure}
To separate $\gamma$ and $\pi^{0}$ at high energy, it is necessary to minimise occupancy effects and optimise photon shower separation in the FoCal-E. Tungsten is the best choice for the absorber material, having a small Molière radius (R$_{M}$ = 9~mm) and radiation length (X$_{0}$ = 3.5~mm). A fine transverse readout granularity is ensured using silicon pixel sensors. The detector design foresees a 20 X$_{0}$ thick silicon-tungsten sampling calorimeter with 18 layers of tungsten absorbers and silicon sensors of two different granularities (Fig.~\ref{FigH}, left). Low-granularity silicon sensors (pad size 1 cm$^{2}$) with a fast integration time for charge collection will be used in 16 out of the 18 layers and will provide the shower profile and total energy measurement. The 2 remaining layers shall be equipped with high-granularity monolithic active pixel sensors, like the ALPIDE sensor, characterised by a slower integration time (about 5 $\mu$s). The high-granularity layers will be placed at the positions where an electromagnetic shower reaches its maximum, to improve the two-shower separation. A total sensor area of 14.5~m$^{2}$ and 1.5~m$^{2}$ is expected for the low- and high-granularity layers, respectively, with a total of about 150$\cdot10^{3}$ individual pad channels and about 4$\cdot10^{3}$ pixel sensors. Two examples of prototypes assembled as part of the FoCal-E R\&D project can be seen in Fig.~\ref{FigI}.
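The importance of the small Molière radius for $\gamma$/$\pi^{0}$ separation can be illustrated with the standard kinematics of the symmetric $\pi^{0}\rightarrow\gamma\gamma$ decay, whose minimum opening angle is $\theta_{\rm min} \simeq 2 m_{\pi^{0}}/E$. This is a hedged estimate; only L = 7~m and R$_{M}$ = 9~mm are taken from the text:

```python
import math

# Minimum two-photon separation at the FoCal-E face for a symmetric
# pi0 -> gamma gamma decay, theta_min ~ 2 m_pi0 / E (illustrative).
m_pi0_gev = 0.135      # pi0 mass (standard value)
L_m = 7.0              # distance from the interaction point (text)
r_moliere_m = 9e-3     # tungsten Moliere radius quoted in the text

for E_gev in (50.0, 100.0, 200.0):
    sep_mm = L_m * 2 * m_pi0_gev / E_gev * 1e3
    print(f"E = {E_gev:.0f} GeV: min separation ~ {sep_mm:.1f} mm")

# Energy at which the separation shrinks to one Moliere radius:
E_max = 2 * m_pi0_gev * L_m / r_moliere_m
print(f"separation ~ R_M at E ~ {E_max:.0f} GeV")
```

The two photons remain separated by about one Molière radius up to roughly 200 GeV, which is why a tungsten absorber combined with pixel-level granularity allows $\pi^{0}$ rejection at high energy.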
Mini-FoCal is an assembly of 20 layers, each consisting of a 3.5~mm thick tungsten plate followed by a 0.3~mm thick silicon sensor. These are Hamamatsu silicon pad sensors with an 8 $\times$ 8 pad matrix, each pad being 1~cm$^{2}$ in size. A first version of this prototype was tested at the PS and SPS, showing good linearity and an energy resolution $\sigma_{e}/E = 0.17/\sqrt{E} \oplus 0.019$ \cite{FoCalLoI}. A second version (shown in Fig.~\ref{FigI}, left) was placed in the ALICE cavern during the proton--proton 13~TeV physics data taking in 2018 to measure the background.
EPICAL is a small, fully digital silicon-tungsten calorimeter with high granularity, based on MIMOSA sensors in its first version \cite{EPICAL} and on ALPIDE sensors in its second assembly. EPICAL-2 (shown in Fig.~\ref{FigI}, right) has 24 layers, each consisting of a 3~mm thick tungsten absorber followed by 2 ALPIDE sensors, covering a transverse surface of 3~$\times$~3~cm$^{2}$. It was tested with beams at DESY during 2019-2020, showing that MAPS are suitable for such an application.
\begin{figure}[tb]
\centerline{\includegraphics[width=12.5cm]{./FIG_H}}
\caption{(Left) Schematic view of the structure of the FoCal-E detector. (Right top) Longitudinal and transverse profile of two showers produced in the FoCal-E detector by two photons.}
\label{FigH}
\end{figure}
\begin{figure}[tb]
\centerline{\includegraphics[width=12.5cm]{./FIG_I}}
\caption{Pictures of the Mini-FoCal (left) and EPICAL-2 (right) prototypes.}
\label{FigI}
\end{figure}
FoCal-H is needed for photon isolation and jet measurements. It can be built as a conventional sampling hadronic calorimeter with a thickness of $\sim$6 hadron interaction lengths and a total length in the z direction of $\sim$1.1~m. The transverse size will be similar to that of FoCal-E. The present design foresees a total of $\sim$1000 scintillating-fiber-based towers with transverse dimensions in the range 2 -- 5~cm. A copper radiator prototype has been assembled and is undergoing beam tests during 2021. The readout will be based on avalanche photodiodes (APDs) or silicon photomultipliers (SiPMs).
\section{ALICE~3: prospects after LHC Run~4}
The ALICE Collaboration is preparing a Letter of Intent for the construction of a new LHC experiment, to be submitted to the LHC Experiments Committee (LHCC) in 2022. The goal is to extend the LHC heavy-ion program beyond Run~4 with a dedicated experiment that provides a much improved vertex resolution, a larger rapidity coverage and clean electron identification, combined with high-rate capabilities \cite{ALICE3EoI}. In particular, it should be able to measure the production of leptons, photons and identified hadrons down to $p_{\rm T}$ scales of the order of a few tens of MeV/c. Examples of topics that would gain significantly are:
\begin{itemize}
\item The hadronization mechanism of the QGP via the measurement of multi-charm hadrons.
\item The microscopic description of the dynamics of heavy quarks in a QGP via bound-state (quarkonia and exotica) formation and dissociation mechanisms.
\item The description of the early phases of the collision through the measurement of real and virtual photons.
\item Experimental evidence for the restoration of chiral symmetry in the hot and dense phase via the precision measurement of the thermal dilepton continuum, starting from the $\rho$ meson and reaching up to masses of about 1.6~GeV.
\item The verification of Low's theorem through photon measurements at very low $p_{\rm T}$, below 10~MeV/c.
\end{itemize}
\begin{figure}[tb]
\centerline{\includegraphics[width=9cm]{./FIG_L}}
\caption{Schematic view of the ALICE~3 detector proposal.}
\label{FigL}
\end{figure}
The observables mentioned in the previous list determine the requirements on the detector design; a schematic view is shown in Fig. \ref{FigL}. The detector, which covers the pseudorapidity region $|\eta|$~$<$~4 over the full azimuth, has a very compact layout. The main tracking system and the PID detectors, assembled as a central barrel and two end-caps, are surrounded by a superconducting magnet system with an internal radius of 1.5~m and a longitudinal dimension of 4~m. External to the magnet there are only the muon chambers, preceded by the muon absorber.
To provide the required distance-of-closest-approach (DCA) resolution to the primary vertex, a three-layer, ultra-light, bent silicon detector is expected to be placed inside the beam pipe (in a secondary vacuum). A retractable design is considered to provide the required beam aperture during beam injection, with a 5~mm minimum radial distance of the innermost layer to the interaction point. The possibility to reach the appropriate spatial resolution and material budget is explored in the R\&D activities within the ITS~3 project, with the application of the 65~nm CMOS technology and the possibility to bend large-dimension thin silicon sensors with the removal of mechanical support and cooling infrastructures. R\&D challenges concern mechanical supports, cooling and radiation tolerance.
Tracking will be completed with more detector layers equipped with silicon MAPS sensors arranged in modules with water cooling and a carbon-fiber space frame for mechanical support. In this case, the requirements on the spatial resolution and material budget are less stringent, and an appropriate design of the magnetic field is essential to obtain the needed transverse momentum resolution. The total silicon surface is expected to be $\sim$60~m$^2$, so it is crucial to develop cost-effective sensors and to automatise the module production as much as possible.
Hadronic heavy-flavour decay reconstruction requires $\pi$/K/p separation for transverse momenta up to a few GeV/c. This will be provided by a Time-Of-Flight (TOF) detector, complemented by a Cherenkov (RICH) detector at higher particle momenta. The TOF detector will consist of one barrel layer plus two end-caps (one on each side) made with silicon sensors characterised by a timing resolution of $\sim$20~ps. At least three sensor technologies are under consideration for this application: Low Gain Avalanche Diodes (LGAD), MAPS and Single Photon Avalanche Diodes (SPAD). The total silicon surface in this case is $\sim$45~m$^2$, requiring cost-effective sensor development.
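The impact of a $\sim$20~ps timing resolution can be illustrated with the standard time-of-flight difference between pions and kaons; the flight path assumed below is purely illustrative and not a number from the text:

```python
import math

# Illustrative pi/K time-of-flight separation for a ~20 ps timing
# resolution. The flight path L is an assumption for this sketch.
C_M_PER_NS = 0.299792458               # speed of light in m/ns
m_pi, m_K = 0.13957, 0.49368           # masses in GeV
L_m = 1.0                              # assumed flight path
sigma_ps = 20.0                        # resolution quoted in the text

def tof_ps(p_gev, m_gev, L):
    """Time of flight in ps: t = (L/c) * E/p, with E = sqrt(p^2 + m^2)."""
    E = math.hypot(p_gev, m_gev)
    return L / C_M_PER_NS * (E / p_gev) * 1e3   # ns -> ps

for p in (1.0, 2.0, 3.0):
    dt = tof_ps(p, m_K, L_m) - tof_ps(p, m_pi, L_m)
    print(f"p = {p} GeV/c: dt(K-pi) ~ {dt:.0f} ps ({dt / sigma_ps:.1f} sigma)")
```

With such a resolution the K--$\pi$ time difference remains a few sigma up to a few GeV/c, consistent with the separation range stated above.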
In the Cherenkov detector, using aerogel as radiator material, the sensitive $p_{\rm T}$ window can be matched to that of the TOF in order to ensure continuity in the PID capabilities. The present strategy foresees photon collection based on SiPMs; an alternative based on MAPS is also considered. End-cap layers are foreseen for this detector as well. The TOF and RICH are useful for electron identification; a large pion rejection up to a few GeV/c is required for dielectron and quarkonia measurements.
Muon chambers are foreseen as the outermost layers of the detector in the central barrel region. This system is fundamental for quarkonia reconstruction, and in particular for charmonia (J/$\Psi$) down to $p_{\rm T} = 0$, complementing the electron identification capabilities. The baseline detector technology is the Resistive Plate Chamber (RPC), but other options will also be considered.
Photon and jet measurements require a large-acceptance electromagnetic calorimeter. This is expected to be placed between the RICH detector and the magnet cryostat, covering a large pseudorapidity range in the central region, plus a layer in the end-cap.
The measurement of ultra-soft photons requires a dedicated detector in the forward region, namely the Forward Conversion Tracker (FCT), based on silicon pixel sensors. The basic requirement is the minimisation of the material budget in front of the detector.
\section{Conclusions}
The ALICE Collaboration has successfully completed the upgrade program and is ready to carry out the rich scientific program foreseen for LHC Run~3. Intense R\&D activities are ongoing to develop the near-future upgrades, including the installation of two new detectors to extend the physics goals during LHC Run~4. A proposal for the construction of a new LHC experiment, to continue the heavy-ion program after LHC Run~4 and push forward our knowledge of QCD behaviour in dense and hot conditions, has been made by the Collaboration, and a Letter of Intent is in preparation.
\section*{Abstract}
Complex distribution networks are pervasive in biology. Examples include nutrient transport in the slime
mold \emph{Physarum polycephalum} as well as mammalian and plant venation.
Adaptive rules are believed to guide development of these networks and lead to a reticulate,
hierarchically nested topology that is both efficient and resilient against perturbations. However,
as yet, no mechanism is known that can generate such networks on all scales.
We show how hierarchically organized reticulation can be generated and maintained through spatially
collective load fluctuations on a particular length scale.
We demonstrate that the resulting network topologies represent a trade-off between optimizing power dissipation, construction cost, and
damage robustness and identify the Pareto-efficient front that evolution is
expected to favor and select for. We show that the typical fluctuation length scale controls the position of the networks on
the Pareto front and thus on the spectrum of venation phenotypes.
We compare the Pareto archetypes predicted by our model with examples of real leaf networks.
\section*{Author Summary}
A large number of biological systems including plants, animals, and slime molds
use complex distribution networks to transport liquids or nutrients.
Often, these networks are highly redundant, containing many alternative paths between
any two points. So far, it has been an issue of debate how biology produces these
redundancies. Spatially collective flow fluctuations are a common occurrence in these systems.
In this work we show that introducing such fluctuations into general models of vein formation
can explain the observed venation patterns.
We proceed to analyze the space of phenotypes that the model predicts and demonstrate
that the networks that are expected to be selected are
characterized by trade-offs between efficient transport, resilience against damage,
and material cost.
Our model can interpolate across the entire spectrum of resilient, efficient networks.
\section{Introduction}
\begin{figure}
\includegraphics[width=\columnwidth]{Figure1}
\caption{Reticulate hierarchical distribution networks in nature.
A. The leaf veins of \emph{Protium wannigianum} show strong hierarchical ordering
coupled with a large degree of reticulation. Inside the smallest loops, nonreticulate,
treelike networks are found, the freely ending veinlets.
B. Two large hierarchically branching vessels feed the downstream venation of the murine neocortex. Where the tips of the branching vessels meet, anastomoses are formed.
C. Network model of liquid transport. Edges $e$ of length $L_e$ carry currents
$F_e$. At each node $i$, a net current $S_i$ is drawn from the network. The net current $S_i$ is proportional to the area $a_i$ of the tessellation unit supplied by the node.
D. A generic Pareto front in a system where two distinct objective functions are
minimized. The Pareto front (orange) is the set of points out of all possible
phenotypes (grey) for which performance can not be improved at all objectives
simultaneously. For any point not on the Pareto front, e.g., (i), a different point can be
found, e.g., (ii), that has better performance at all objectives. For a point on the
Pareto front, like (iii), this is not possible.
\label{fig:figure1}}
\end{figure}
Complex life would be inconceivable without biological fluid distribution networks such as animal vasculature,
plant xylem and phloem, the network of fungal mycelia or the protoplasmic veins of \emph{Physarum polycephalum}.
These networks distribute oxygen and nutrients, remove waste and serve as important long range biochemical
communication pathways.
Even within a single organism the spectrum of venation network phenotypes can be vast. In mammals, there are
predominantly tree-like networks such as the large veins and arteries that service entire organs, but also
highly reticulate capillaries within the organs such as in the brain or the liver.
In plants, leaf network phenotypic variability within a single organism is large but typically the
hierarchical structure and reticulation are roughly conserved.
However, even within a single family
there is considerable variation (see, e.g., Fig.~\ref{fig:example-leaves}).
It is therefore natural to ask whether there might be a single, simple developmental mechanism at play that
can generate and interpolate between the different archetypes on this phenotypic spectrum of vascular networks. With such a mechanism, evolution would only need to select for a few parameters in order to tune the network phenotype for
its function. This question is the focal point of this work.
At least in plants, where there is a well-preserved fossil record of the venation, there is evidence for a simple and easily tunable mechanism, both because of the fast transitions
between reticulate and
non-reticulate patterns~\cite{Givnish2005,Blonder2016}, and because
single gene knockouts~\cite{Steynen2003,Carland2009} or small changes in the involved phytohormone
concentrations~\cite{Berleth2000} are sufficient to bring such transitions about artificially.
In addition to investigating the mechanism of venation formation, we
are interested in quantifying and assessing the resulting network phenotypes.
Since vascular function is indispensable for the fitness of the organism, vascular networks are typically
considered to be optimized~\cite{DeVisser2014,Sherman1981,Painter2006,McCulloh2003,Hunt2016}, may
exhibit scaling laws (e.g.,~\cite{Newberry2015,Banavar1999,McCulloh2003,McCulloh2009,Kassab2006,Ronellenfitsch2015}),
and contribute to overall organismal economics~\cite{Wright2004,Blonder2011}. However, the building,
maintenance, and function of networks comes at a premium. The organism has to invest energy and material to build the network,
and pay a cost to maintain it (e.g., the metabolic cost of blood cells in animals).
Then, there is the cost associated with the continuing function of the network, most frequently measured by
dissipated power during transport. How much energy is dissipated through viscous friction and how
much is needed to drive the flow determines the network's transport efficiency.
For example, in the animal circulatory system, this decides how strongly the heart must beat to maintain a desired flow.
In addition to cost considerations, the network has to be able to function when damage is present, but also respond
to fluctuating conditions. However, all these costs and constraints cannot be satisfied equally by the same
network. There exists a fundamental evolutionary trade-off between efficient, non-redundant and cheap
transport and the robustness against damage conferred by a redundant network architecture.
Endowing a network with robustness requires redundancy, i.e., building alternate routes connecting the nodes,
which results in a network that is expensive to build and maintain~\cite{Katifori2010}. Similarly, a
network might need to be able to function under conditions of fluctuating load requiring modifications in the
architecture that would increase construction costs~\cite{Corson2010, Katifori2010}. Depending on the specifics of
the network function, location in the organism, and other physiological constraints, the relative importance of each of the
factors efficiency, cost, and resilience, might vary. The optimal transport network can thus be phenotypically quite different for each organ and
organism~\cite{Ronellenfitsch2015b}.
Similar trade-offs exist for the design of individual vessels~\cite{Hacke2006}. In this work, however,
we focus on the network as a whole.
The concept of Pareto optimality is designed to select the subset of phenotypes that represent optimal trade-offs between such objectives: the so-called Pareto front is the set of phenotypes
none of which can be improved in all objectives at the same time (see \cite{Miettinen1999}
and Fig~\ref{fig:figure1}~D). In other words, for such a ``Pareto efficient'' phenotype,
performance at one objective cannot be increased without decreasing performance at some other objective.
Each phenotype on the Pareto front thus represents an optimum trade-off between the objectives. One can assume that the phenotypes observed in nature are all found approximately on the Pareto front because any other trade-off could be improved upon and is therefore evolutionarily selected against, given otherwise fixed conditions.
In mathematical terms, ``improving performance'' of an objective means
either minimization or maximization of some objective function
which maps a phenotype from the phenotypic space to a real number.
Examples of such performance
functions include the ability to digest seeds in birds~\cite{Shoval2012},
being a forager or a soldier in ants~\cite{Shoval2012}, or efficient transport of fluids in vascular
systems~\cite{Ronellenfitsch2016,Katifori2010,Hu2013}.
The Pareto front can be shown to essentially provide phenotypes which
interpolate between ``functional archetypes,'' e.g., the omnivore
phenotype would lie in between the archetypal herbivore and the carnivore~\cite{Shoval2012}.
Thus, since the Pareto front provides a means of selecting
optimal trade-offs of phenotypic performance, analyzing the archetypes yields
information about which particular objectives are
the biologically relevant drivers of evolution.
Apart from this, the Pareto paradigm has amongst other things led to the discovery of phase transitions~\cite{Seoane2015}
and clustering in the morphospace~\cite{Avena-Koenigsberger2014} of non-vascular complex networks.
Here, we propose a generalized model for the development of plant and animal venation networks
and analyze the resulting phenotypes using Pareto concepts.
In the case of the venation networks found in plants and animals,
different weighted topologies, expected to be drawn from the Pareto front, are needed for each specific
evolutionary niche.
Building and tuning these networks so that they acquire the desired architecture is a complicated
process, several aspects of which are not sufficiently understood. Often, in the
case of animals, the positions and dimensions of the largest vessels (such as the aorta)
are genetically predetermined and fixed.
However, smaller vessels are too numerous to be efficiently genetically
encoded and are believed to develop in a self-organized fashion
in both plants and animals~\cite{Feugier2005,Feller2015,LeNoble2005,Kurz2001,Nguyen2006}.
In fact, the abstract mechanisms governing
self-organization of vasculature in plants and animals can be considered equivalent~\cite{Ronellenfitsch2016}.
In general, these mechanisms involve a process that remodels an initial mesh, either of veins
according to the flux of blood (in animals) or of cells connected by carrier proteins according to the flux of a morphogen (in plants).
If the flux is large, vessels adapt by increasing their diameter (or transport capacity);
unused connections die out. This process has been observed directly in animals~\cite{Chen2012}
and indirectly in plants~\cite{Marcos2014}.
The process relies on the local value of flux and can lead to globally optimized
vascular networks when coupled to overall growth
of the organism or organ in which the veins are embedded~\cite{Ronellenfitsch2016}.
In this work we consider vascular development in the presence of flow
fluctuations. In animals, such fluctuations are always present
(e.g., \cite{Drew2011,Mott2007}).
We take into account the fact that regions of fluctuating load
are spatially correlated and have a specific extent, or length scale.
Tuning this scale along with the other model parameters leads to diverse network phenotypes
which we quantify by measuring three objectives: (a) dissipated power during
non-fluctuating operation, (b) network construction cost, and (c) robustness against damage.
We proceed to employ the Pareto front concept to identify those network phenotypes
which provide efficient trade-offs between these objectives and which should be
favored by evolution.
We identify the functional archetypes as ``fragile but cheap'' and
``robust but expensive,'' and show that the parameter regulating spatial correlations of the
fluctuations can be used to interpolate between them.
The lowest power dissipation phenotypes are surprisingly located at medium cost and high robustness,
possibly explaining why optimizing for power dissipation alone while requiring
robustness to single edge deletions~\cite{Katifori2010} has been successful in the past;
high robustness and low power dissipation can be achieved together rather easily
with adaptive methods.
The Pareto front identifies the possible phenotypic spectrum of vascular networks and
the fluctuation length scale gives a single parameter that is highly correlated with
position on the spectrum, providing natural selection with a simple mechanism
which can be easily tuned according to functional needs.
The rest of this article is organized as follows.
In Section~\ref{sect:dynamics} we introduce the general model of dynamical network adaptation
used to describe plant and animal veins, slime molds, and other distribution networks.
In Section~\ref{sect:fluctuations} we consider
collective, correlated fluctuations and discuss the quantitative metrics
used to assess network robustness and efficiency.
In Section~\ref{sect:results} we collect results from simulations of the
augmented dynamical model. We show that we obtain hierarchically nested loops and
investigate the Pareto front of efficient networks, analyzing the trade-offs involved
and comparing the obtained archetypes with examples of real leaf networks.
Finally, we conclude with a discussion of our results in Section~\ref{sect:discussion}.
\begin{figure}
\includegraphics[width=\columnwidth]{Figure2}
\caption{Growth dynamics of vascular networks in animals.
A. At first, the vessel network is tightly meshed, creating a capillary plexus.
B. As the network develops, it grows and at the same time vascular pruning occurs. Some vessels
are removed from the mesh, others are strengthened.
C. Finally, what remains is hierarchically organized, reticulate venation.
\label{fig:figure2}}
\end{figure}
\section{Dynamics and adaptation of flow networks}
\label{sect:dynamics}
Here, we describe a general framework capable of describing potential-driven flow of some
quantity through a network that dynamically adapts its conductivities.
Each node is taken to represent a unit of some subdivision of the underlying
tissue, a basin that is fed by that node,
with edges representing the flow between them either through vessels or via a
facilitated diffusion process (see Fig.~\ref{fig:figure1} C).
The current $F_e$ through each edge $e$ connecting adjacent units $i$ and $j$
is given by $F_e = K_e (p_j - p_i)/L_e$, where $K_e$ is the dynamically adaptive conductivity,
$L_e$ is the length of the edge, and $p_i$ is the potential (e.g., blood pressure or
morphogen concentration) at unit $i$.
In plants, proteins embedded in the plasma membrane are responsible
for transporting auxin~\cite{Blakeslee2005,Kramer2006} with facilitated diffusion constants $K_e$.
In animals, blood flow through vessels can be approximated by Poiseuille's
law $K_e = k R_e^4$ with a constant $k$ and effective vessel radius $R_e$~\cite{Hacking1996,Hu2012}.
Let $\Delta: \mathcal N \rightarrow \mathcal E$ be the network's oriented incidence
matrix which maps the node vector space $\mathcal N$ to the edge vector space $\mathcal E$.
The matrix $\Delta$ acts as a discrete difference operator.
For each edge an arbitrary but fixed orientation is chosen (see Fig.~\ref{fig:figure1} C).
Then the components $\Delta_{e,i}$ read:
\begin{align}
\Delta_{e,i} = \begin{cases}
1, &\text{edge $e$ points towards node $i$} \\
-1, &\text{edge $e$ points away from node $i$} \\
0, &\text{edge $e$ is not connected to node $i$.}
\end{cases}
\end{align}
The current vector $\mathbf F \in \mathcal E$ with entries $F_e$ can be derived from the potentials $\mathbf p \in \mathcal N$ using the formula
\begin{align}
\mathbf F = K L^{-1}\; \Delta \mathbf p,
\label{eq:flow}
\end{align}
where the conductivities and lengths are summarized in the diagonal matrices $K$ and $L$.
The current balance at each node reads in vector form
\begin{align}
\Delta^T \mathbf F = \mathbf S,
\label{eq:aux-balance}
\end{align}
where
$\mathbf S$ is the source (or net current) term.
Equation~\eqref{eq:aux-balance} is Kirchhoff's current law.
In plants, the source $\mathbf S$ describes the production rate of morphogen in each unit;
in animals, it represents the amount of blood perfusing one area unit.
Combining equations~\eqref{eq:flow} and \eqref{eq:aux-balance}, we can solve for the steady state
currents and obtain
\begin{align}
\mathbf{F} = K L^{-1} \Delta (\Delta^T K L^{-1} \Delta)^\dagger \mathbf S,
\label{eq:flow-sln}
\end{align}
where the dagger represents the Moore-Penrose pseudoinverse.
Eq.~\eqref{eq:flow-sln} can be used to compute the currents given all other
properties of the network.
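As a concrete illustration, Eq.~\eqref{eq:flow-sln} amounts to a pseudoinverse solve of the weighted graph Laplacian. The following minimal sketch (our own example, not published code; it assumes \texttt{numpy} and uses a three-node triangle with unit conductivities and lengths) computes the currents:

```python
import numpy as np

def steady_state_flows(Delta, K, L, S):
    """Currents of Eq. (flow-sln): F = K L^{-1} Delta (Delta^T K L^{-1} Delta)^+ S.

    Delta : (n_edges, n_nodes) oriented incidence matrix
    K, L  : per-edge conductivities and lengths
    S     : per-node net currents (must sum to zero)
    """
    KL = np.diag(K / L)                # the matrix K L^{-1}
    laplacian = Delta.T @ KL @ Delta   # weighted graph Laplacian
    p = np.linalg.pinv(laplacian) @ S  # potentials via Moore-Penrose pseudoinverse
    return KL @ Delta @ p              # edge currents F

# Triangle: edges (0->1), (1->2), (0->2); rows follow the sign convention
# Delta[e, i] = +1 if edge e points toward node i, -1 if away.
Delta = np.array([[-1.0,  1.0,  0.0],
                  [ 0.0, -1.0,  1.0],
                  [-1.0,  0.0,  1.0]])
K = np.ones(3)
L = np.ones(3)
S = np.array([-1.0, 0.0, 1.0])  # one unit of current crosses from node 0 to node 2
F = steady_state_flows(Delta, K, L, S)
```

By symmetry, two thirds of the current take the direct edge $(0,2)$ and one third the detour through node 1; Kirchhoff's law $\Delta^T \mathbf F = \mathbf S$ holds exactly.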
\subsection{Optimal transport networks}
\label{sect:optimal}
One of the most widely used optimization functionals
for distribution networks, which has been applied
to leaf venation, mammalian capillaries, and slime molds, is
\begin{align}
E = \sum_e L_e \frac{F_e^2}{K_e} + \lambda \sum_e L_e K_e^\gamma.
\label{eq:energy-functional-general}
\end{align}
In the case of fluid flow, the first term in \eqref{eq:energy-functional-general} is the total dissipated power due to viscous friction.
For diffusive processes, the ``pressures'' are interpreted as concentrations and the ``conductivities'' are diffusion constants.
After adaptation has concluded, the final network, generated from a diffusive process, is turned into a vascular network
for which the energy functional~\eqref{eq:energy-functional-general} applies. In order to interpret the results of a
diffusive adaptation process in terms of~\eqref{eq:energy-functional-general}, we therefore assume a proportionality
between the effective diffusion constants during adaptation and the final vascular conductivities.
Further, $\lambda$ is either interpreted as a Lagrange multiplier, enforcing the constraint
$\sum_e L_e K_e^\gamma \equiv \text{const}$ (as in Refs.~\cite{Katifori2010,Corson2010,Bohn2007}),
or as a general nonlinear coupling (as in Ref.~\cite{Hu2013}).
The nonlinear term can be interpreted as a network cost, where larger vessels (or conduits of higher conductivity) are more expensive but follow an economy of scale, determined by the parameter $\gamma$.
This term is necessary to prevent the unphysical case of infinite conductivities which would trivially minimize the functional.
In the physiologically relevant case where $\gamma<1$, there are many inequivalent local
optima in exact correspondence with the spanning trees of the network~\cite{Banavar2000}.
The case $\gamma = 1/2$ is particularly interesting because, in the case of Poiseuille flow, it corresponds to fixing the total
vessel volume.
\subsection{Adaptation dynamics}
Development of vascular networks is believed to rely on local feedback mechanisms where increased flow
through a vascular segment will result in improved conductivity of the vessel.
For instance, in plant leaves, auxin canalization, involving flow of a chemical morphogen,
is believed to guide development of the network pattern~\cite{Smith2009,Scarpella2006,Verna2015}.
Beyond development, such adaptive mechanisms allow organisms to dynamically modify the network structure
and respond to changing environmental cues. In slime molds, adaptation to flow of nutrients
leads to efficient long-range transport~\cite{Nakagaki2000,Tero2008,Tero2010}.
In animal vasculature, both development and adaptation in the adult organism are controlled
by a response to vessel wall shear stress~\cite{Eichmann2005,Hu2012,Kurz2001,Scianna2013,Hacking1996}.
Despite the fact that the biological details vary considerably between species, it is
interesting to note that the abstract adaptive mechanism, effectively matching vessel size
to vessel flow, is common to all of these organisms.
The dynamics of the conductivities is often modeled by an equation
drawn from the parametrized family
\begin{align}
\frac{d K_e}{dt} = a \frac{F_e^{2\beta}}{K_e^{\alpha-1}} - b K_e + c,
\label{eq:adapt-family}
\end{align}
where $\alpha\geq 1$, $\beta > 0$ (e.g.,~\cite{Hu2013,Hacking1996,Rolland-Lagan2005,VanBerkel2013}).
The steady states $dK_e/dt = 0$ minimize \eqref{eq:energy-functional-general}
for $\gamma = \alpha/\beta - 1$ and $c=0$. In the case of animal
vasculature, vessel wall shear stress adaptation is recovered by setting
$\beta=1$, $\alpha=3/2$ (see Ref.~\cite{Hu2013}).
Models for chemical flow adaptation generally have $\alpha=1$.
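To make this correspondence explicit for $c=0$, hold the currents fixed (at a critical point this is justified because the physical currents already minimize the dissipation at fixed conductivities, by Thomson's principle). The steady state of Eq.~\eqref{eq:adapt-family} then satisfies
\begin{align}
0 = a \frac{F_e^{2\beta}}{K_e^{\alpha-1}} - b K_e
\qquad\Longrightarrow\qquad
K_e = \left(\frac{a}{b}\right)^{1/\alpha} F_e^{2\beta/\alpha},
\end{align}
while extremizing the functional~\eqref{eq:energy-functional-general} gives
\begin{align}
\frac{\partial E}{\partial K_e} = -\frac{L_e F_e^2}{K_e^2} + \lambda \gamma L_e K_e^{\gamma-1} = 0
\qquad\Longrightarrow\qquad
K_e = \left(\frac{F_e^2}{\lambda \gamma}\right)^{1/(\gamma+1)}.
\end{align}
The two scalings, $K_e \propto F_e^{2\beta/\alpha}$ and $K_e \propto F_e^{2/(\gamma+1)}$, coincide exactly when $\gamma + 1 = \alpha/\beta$, i.e., $\gamma = \alpha/\beta - 1$.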
Eq.~\eqref{eq:adapt-family} describes a local positive feedback process.
Conductivities $K_e$ adapt according to a ``use it or lose it'' rule:
they grow as controlled by the magnitude of $a$ when the current $F_e$ through them is large, and they
decay on a characteristic time scale $b^{-1}$ when it is small.
The parameter $c$ may be interpreted as the presence of some growth factor
such as VEGF in the case of mammalian vasculature or background production of auxin
transporting proteins in the case of plant leaves~\cite{Rolland-Lagan2005}.
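Numerically, the feedback rule~\eqref{eq:adapt-family} is straightforward to iterate. The sketch below (our own illustration, assuming \texttt{numpy}; parameter values are arbitrary) uses an explicit Euler step and shows a single conduit carrying a constant current relaxing to the fixed point $K^* = (a F^{2\beta}/b)^{1/\alpha}$ for $c = 0$:

```python
import numpy as np

def adapt_step(K, F, dt, a=1.0, b=1.0, c=0.0, alpha=1.0, beta=2.0 / 3.0):
    """One explicit Euler step of dK/dt = a F^{2 beta} / K^{alpha-1} - b K + c."""
    dK = a * np.abs(F) ** (2.0 * beta) / K ** (alpha - 1.0) - b * K + c
    return K + dt * dK

# A single conduit carrying the constant current F = 2 relaxes toward
# the fixed point K* = (a F^{2 beta} / b)^{1/alpha} = 2^{4/3} for alpha = 1.
K = np.array([0.5])
F = np.array([2.0])
for _ in range(2000):  # integrate up to t = 20, i.e., many relaxation times b^{-1}
    K = adapt_step(K, F, dt=0.01)
```

The same step, applied edge by edge with the currents re-solved at each iteration, is the basis of the network simulations discussed below.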
When development of the network occurs simultaneously with overall growth of
the organism, a co-moving frame equation can be adopted that incorporates the
growth effects.
For simplicity, let the total area of the network grow as $A(t) = e^{rt/2}A(0)$
with growth rate $r$. Then changing into the co-moving frame that grows along with
the network leads to~\cite{Ronellenfitsch2016}
\begin{align}
\frac{d K_e'}{dt} = a \frac{{F_e'}^{2\beta}}{{K_e'}^{\alpha-1}} - b' K_e' + e^{-\frac{2r\beta}{\alpha}t}c,
\label{eq:adapt-family-mod2}
\end{align}
where $K'_e, \mathbf F'$ are now meant to represent the quantities in the co-moving frame.
We see that the effect of background production is exponentially suppressed in time.
In the following, we stay in the co-moving frame and drop the primes from the variables again.
\section{Hierarchical reticulate networks}
\label{sect:fluctuations}
In this section, we address the problem of how local adaptive rules
can produce the types of hierarchically nested reticulation observed in biology, as shown in
Fig.~\ref{fig:figure1} A, B.
In terms of dynamics, in animal vasculature a tightly meshed vascular plexus appears first. The vascular plexus is
subsequently pruned, removing some veins and strengthening others, to produce a network
containing hierarchical reticulation (see \cite{Fleury2007,Eichmann2005,Fruttiger2007,Chen2012}
and Fig.~\ref{fig:figure2}).
It is well known that load fluctuates constantly in animals (e.g., in the neocortex~\cite{Drew2011}).
In plants, a recent study also favors this hypothesis during development~\cite{Marcos2014}.
Thus, fluctuations are a very promising candidate mechanism for the morphogenesis of reticulate venation.
Despite this, current models of fluctuations in adaptive networks
are unable to reproduce most of the diversity seen in biological systems. While they are
able to produce reticulation, it is mostly not hierarchically nested and organized
(see the results in Refs.~\cite{Hu2013,Katifori2010,Corson2010}).
These models were based on the moving-sink approach, in which one averages over
the flows produced by choosing each individual node as the sink while keeping the source
fixed. Here, we propose that instead, collective, spatially correlated fluctuations
involving several nodes are necessary to explain hierarchical organization in the
resulting networks.
\subsection{Fluctuations produce reticulation}
The networks produced with the adaptation laws presented in the preceding section
are all topological trees
as rigorously shown in~\cite{Bernot2009,Banavar2000}. We now introduce a theory
of adaptation to fluctuating sources that can produce hierarchical reticulation.
Assuming that the time scale on which fluctuations occur is much smaller than that of adaptation,
one may replace the squared currents in~\eqref{eq:adapt-family-mod2} by a fluctuation average,
\begin{align}
F_e^2 \rightarrow \langle F_e^2 \rangle = \frac{1}{N}\sum_{\text{state } i} (F_i)_e^2.
\end{align}
Here, the fluctuating states $\mathbf F_i$ represent the currents for a particular set of source terms
$\mathbf S_i$ and the summation performs an ensemble average for a given set of fluctuating states.
This approach has been studied in both optimization~\cite{Corson2010,Katifori2010} and
adaptation models~\cite{Hu2013,Graewer2015} with source terms similar to
\begin{align}
\frac{(\mathbf{S}_i)_j}{\hat S} = \delta_{j0} - (1 - \delta_{j0})\, \delta_{ji},
\label{eq:fluct-old}
\end{align}
which models flow between one node $i$ of the network and a fixed sink (see Fig.~\ref{fig:figure3}~A).
Here, $\hat S$ is the total flow through the network, which acts as a typical scale.
The resulting network features the sought-after reticulation, but does not show significant
hierarchical ordering.
\begin{figure}
\centering
\includegraphics[width=.5\textwidth]{Figure3}
\caption{Modelling collective fluctuations of the network load.
A. An individual point fluctuation. The scale over which the fluctuation decays is too
small to be resolved by the tessellation, which in this case is a hexagonal grid.
B. A collective fluctuation centered at a specific node. The load strength
decays over the spatial scale $\sigma$.
\label{fig:figure3}}
\end{figure}
We generalize \eqref{eq:fluct-old} to include
collectively produced fluctuations, implemented as
\begin{align}
\frac{(\mathbf{S}_i)_j}{\hat S} = \delta_{j0} - (1 - \delta_{j0})\, f\left(\frac{|x_j - x_i|}{\sigma} \right),
\label{eq:fluct-new}
\end{align}
where $x_i$ is the position of node $i$, $\sigma$ is the scale over which the source strength
varies, and
$\sum_j (\mathbf S_i)_j = 0$, i.e., the total outflux is
normalized to balance the unit influx at the source (see Fig.~\ref{fig:figure3}~B).
In the rest of this paper we consider Gaussian distributed sources ($f(x) \sim e^{-x^2/2}$).
Other spatial distributions such as exponential
($f(x) \sim e^{-x}$) and randomized are relegated to the Supplement~\nameref{S1_Appendix},
where we show that an exponential distribution produces results comparable to Gaussian,
and that a randomized distribution (i.e., where there is no spatial correlation) does
not reproduce realistic types of network.
Gaussian collective fluctuations appear to reproduce well the hierarchical structure
seen in real plants and animals for a certain range of the scale $\sigma$.
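The source terms of Eq.~\eqref{eq:fluct-new} are simple to construct numerically. In the following sketch (ours; it assumes \texttt{numpy}, and the normalization of $f$ is our choice, made so that $\sum_j (\mathbf S_i)_j = 0$ holds exactly), each fluctuating state places a unit net current at a central node, balanced by Gaussian-weighted loads centered on node $i$:

```python
import numpy as np

def gaussian_sources(positions, sigma, center=0):
    """One source vector S_i per node i != center, following Eq. (fluct-new):
    +1 at the central node, Gaussian-weighted loads (total -1) around node i."""
    states = []
    for i in range(len(positions)):
        if i == center:
            continue
        d = np.linalg.norm(positions - positions[i], axis=1)
        w = np.exp(-0.5 * (d / sigma) ** 2)  # f(|x_j - x_i| / sigma)
        w[center] = 0.0                      # the central node carries no load term
        S = -w / w.sum()                     # loads normalized to total -1
        S[center] = 1.0                      # unit net current at the central node
        states.append(S)
    return np.array(states)

# Four nodes on a line; fluctuation scale comparable to the node spacing
positions = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0], [3.0, 0.0]])
states = gaussian_sources(positions, sigma=1.0)
```

Plugging each row into the flow solution and averaging the squared currents yields the ensemble average $\langle F_e^2 \rangle$ that drives the adaptation.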
\subsection{Quantifying network traits}
\label{sect:traits}
In order to quantify the network phenotypes, we consider the following three
basic network traits: the power dissipation, the cost, and a percolation penalty.
First, as a measure of network efficiency, we consider the operating cost of the network, quantified by the power dissipation under non-fluctuating conditions,
\begin{align}
E = \sum_e L_e \frac{F_e^2}{K_e},
\end{align}
where the flows are computed for a single source and uniform sinks. The rationale is that fluctuations are expected to be small during nominal operation and large during development.
The network cost
\begin{align}
C = \sum_e L_e K_e^\gamma,
\label{eq:cost}
\end{align}
measures the amount of material investment
that goes into constructing the network.
This should be minimized by any organism that efficiently uses its resources.
Because $E$ neglects effects arising from reticulation, we additionally consider the percolation penalty $\hat A$ as a measure of network robustness.
The percolation penalty quantifies the cost of losing part of the network to
damage. A reasonable penalty function is the expected fraction of perfused area lost
upon removing an edge,
\begin{align}
\hat A = \frac{1}{N_e}\sum_e \frac{A_e}{A_{\text{tot}}},
\label{eq:area-penalty}
\end{align}
where $A_e$ is the area of the network that becomes disconnected from the source
upon removal of edge $e$, $A_\text{tot}$ is the total area of the network,
and $N_e$ is the number of edges.
Optimal resilient networks that are not constrained to remain fully percolated at all
times must minimize the cost~\eqref{eq:cost}, the power
dissipation~\eqref{eq:energy-functional-general} and the area penalty~\eqref{eq:area-penalty}.
It should be noted that observations of real networks, e.g., in leaves, reveal that many treelike components
exist and are important for transport~\cite{Fiorin2015}, and therefore they are likely biologically required.
This means that although the area penalty is minimized,
it is not expected to be perfectly equal to zero.
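The percolation penalty of Eq.~\eqref{eq:area-penalty} can be evaluated by removing one edge at a time and measuring the supplied area that loses its connection to the source. A minimal pure-Python sketch (ours; function and variable names are hypothetical) using breadth-first search:

```python
from collections import defaultdict, deque

def percolation_penalty(edges, areas, source):
    """Expected fraction of supplied area disconnected by a single edge removal
    (Eq. area-penalty). edges: list of (i, j); areas: node -> area; source: fed node."""
    total_area = sum(areas.values())
    penalty = 0.0
    for removed in range(len(edges)):
        adj = defaultdict(list)
        for k, (i, j) in enumerate(edges):
            if k != removed:
                adj[i].append(j)
                adj[j].append(i)
        seen = {source}              # breadth-first search from the source
        queue = deque([source])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        lost = sum(a for node, a in areas.items() if node not in seen)
        penalty += lost / total_area
    return penalty / len(edges)

# A path 0-1-2 with unit areas: every edge is a bridge.
edges = [(0, 1), (1, 2)]
areas = {0: 1.0, 1: 1.0, 2: 1.0}
p_tree = percolation_penalty(edges, areas, source=0)
# Closing the loop with edge (0, 2) removes all bridges.
p_loop = percolation_penalty(edges + [(0, 2)], areas, source=0)
```

For the three-node path, removing either edge strands area and $\hat A = 1/2$, while closing the loop drives the penalty to zero, illustrating how redundancy buys robustness.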
\section{Results}
\label{sect:results}
\begin{figure}
\centering
\includegraphics[width=.8\textwidth]{Figure4}
\caption{Hierarchically nested reticulation in steady state networks with collective,
correlated fluctuations. We show networks obtained from simulations on a
disordered tessellation with one source at the center. The dimensionless growth parameters were
$\kappa=1.0, \rho = 10$. The collective fluctuations were Gaussian.
A. If the correlation length is small compared to the edge length, there is
little hierarchical ordering. B. Hierarchical ordering appears as $\sigma$ becomes
comparable to the mean edge length. At the same time,
loopiness decreases. C. Hierarchical ordering increases further, and more loops are lost.
D. As $\sigma$ becomes significantly greater than the mean edge length, the network is for the most part tree-like
and only the thickest anastomoses remain. The parameter $\sigma$ is measured in units of the average node distance.
\label{fig:topologies}}
\end{figure}
We solve the non-dimensionalized dynamical adaptation equation \eqref{eq:adapt-family-mod2}, which reads
\begin{align}
\frac{d \tilde K_e}{d\tilde t} = \langle \tilde F_e^2\rangle ^{\beta} - \tilde K_e + \kappa\, e^{-\tilde t/\rho}.
\label{eq:dimensionless}
\end{align}
In this equation we have set $\alpha = 1$ for simplicity. Other values of $\alpha$ lead to qualitatively similar
results. The dimensionless control
parameters are the growth strength $\kappa = (c/a) (\hat F/\hat S)^{2\beta}$ and the growth timescale $\rho=b/r$.
Here, the hatted quantities are typical scales for the current and source (see the Supplement
\nameref{S1_Appendix} for a detailed derivation).
The dimensionless source length scale $\sigma$ is measured in units
of the mean edge length.
We further fix the nonlinearity at $\beta = 2/3$, which corresponds to
$\gamma = 1/2$. This value was chosen so as to correspond to a total network volume constraint
in Eq.~\eqref{eq:cost}.
All networks start from the same disordered mesh with 445 nodes and 1255 edges.
The conductivities are initialized with random numbers between 0 and 1.
We consider collective flow fluctuations that are Gaussian with a
scale parameter $\sigma$ that is measured in units of the
mean edge length $\hat L$ at the
start of the dynamics. Results for exponential and randomly distributed sources
are shown in the Supplement~\nameref{S1_Appendix}. Results for exponential sources
do not differ significantly from Gaussian sources.
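Putting the pieces together, Eq.~\eqref{eq:dimensionless} can be integrated with an explicit Euler scheme, re-solving the flow problem at every step. The sketch below is our own illustration of this structure, assuming \texttt{numpy}; a tiny triangle with two alternating load states stands in for the full disordered mesh:

```python
import numpy as np

def simulate(Delta, lengths, states, kappa, rho,
             beta=2.0 / 3.0, dt=0.05, T=60.0, seed=0):
    """Euler integration of the dimensionless law (alpha = 1):
    dK/dt = <F^2>^beta - K + kappa * exp(-t / rho)."""
    rng = np.random.default_rng(seed)
    K = rng.uniform(0.0, 1.0, Delta.shape[0]) + 1e-3  # random initial conductivities
    t = 0.0
    while t < T:
        KL = np.diag(K / lengths)
        p = np.linalg.pinv(Delta.T @ KL @ Delta) @ states.T  # one column per state
        F = KL @ Delta @ p                                   # fluctuating currents
        drive = np.mean(F ** 2, axis=1) ** beta              # ensemble average <F^2>^beta
        K = np.maximum(K + dt * (drive - K + kappa * np.exp(-t / rho)), 1e-12)
        t += dt
    return K

# Triangle network; the load alternates between the two peripheral nodes.
Delta = np.array([[-1.0,  1.0,  0.0],
                  [ 0.0, -1.0,  1.0],
                  [-1.0,  0.0,  1.0]])
lengths = np.ones(3)
states = np.array([[1.0, -1.0, 0.0],
                   [1.0, 0.0, -1.0]])
K = simulate(Delta, lengths, states, kappa=0.1, rho=10.0)

# Residual of the steady-state condition K = <F^2>^beta (background term decayed)
KL = np.diag(K / lengths)
F = KL @ Delta @ (np.linalg.pinv(Delta.T @ KL @ Delta) @ states.T)
residual = np.max(np.abs(np.mean(F ** 2, axis=1) ** (2.0 / 3.0) - K))
```

After the background term has decayed, the conductivities sit close to the self-consistent steady state $K = \langle F^2\rangle^{\beta}$; the residual computed at the end quantifies this.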
The generic dynamics of Eq.~\eqref{eq:dimensionless} has been described in~\cite{Ronellenfitsch2016}
and is characterized by two phases. First, the background production term dominates and
produces a homogeneous network. Then, as background production becomes increasingly
suppressed due to the exponential decay term, vascular adaptation takes over,
generating veins in a hierarchical fashion: thick, main veins first, successively
thinner veins later.
\begin{figure}
\centering
\includegraphics[width=\textwidth]{Figure5}
\caption{Network phenotypic traits as a function of
the correlation length scale $\sigma$. Results for three values of $\rho$ and
fixed $\kappa=0.1$ are shown.
Biologically relevant phenotypic traits
include A. the network cost $\sum_e L_e K_e^\gamma$, B. the
baseline dissipated power $E$ without fluctuations, C. the percolation penalty.
Network cost decreases with increasing correlation length. The baseline
power dissipation shows a shallow minimum close to $\sigma = 1$.
Percolation penalty increases with the fluctuation scale because the networks increasingly
become topological trees. The observed
discontinuous jumps are due to the sudden appearance of single bridges in the graph
whose removal disconnects a large part.\label{fig:results}}
\end{figure}
\begin{figure}
\includegraphics[width=\textwidth]{Figure6}
\caption{Geometry of the Pareto front of adaptive distribution networks. We plot the phenotypic space
of networks obtained from parameter values $\rho \in \{1, 10, 100\}$, $\kappa \in \{ 1, 0.1, 0.01\}$,
$\sigma \in [0.1, 5]$, $\gamma=0.5$ as an example of the phenotypic space that
can be reproduced using the model.
We calculate the Pareto front for simultaneous minimization of power dissipation, network
cost, and percolation penalty. A--C. The whole data set is plotted in 2d slices
where the colors indicate the value of the correlation length $\sigma$ and the Pareto front is in red.
D. 2d embedding of the Pareto front using multidimensional scaling~\cite{Borg1997}, which reduces dimensionality
while preserving the distances between points (arbitrary axes). The geometry
of the Pareto front is a one-dimensional line, parametrized by $\sigma$. The ``functional
archetypes'' are located at the endpoints, see also Fig.~\ref{fig:variety}.
E,F. The Pareto front in the same multidimensional scaling variables as in (D), but colored by the
parameter values for $\kappa$ and $\rho$. We see that unlike $\sigma$,
these cannot be used to parametrize the Pareto front.
\label{fig:pareto}}
\end{figure}
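The low-dimensional embedding in panel D can be sketched with classical (Torgerson) multidimensional scaling: double-center the squared distance matrix and keep the leading eigenvectors. This is a generic sketch, not necessarily the exact variant of~\cite{Borg1997}:

```python
import numpy as np

def classical_mds(D, dim=2):
    """Classical (Torgerson) MDS: embed points given a pairwise distance matrix D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    B = -0.5 * J @ (D**2) @ J             # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)        # ascending eigenvalues
    idx = np.argsort(vals)[::-1][:dim]    # keep the largest `dim`
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))

# Sanity check: points on a line embed isometrically.
X = np.array([[0.0], [1.0], [3.0]])
D = np.abs(X - X.T)
Y = classical_mds(D, dim=2)
```

For exact Euclidean distances the embedding reproduces the original configuration up to rotation and translation, which is why the pairwise distances of `Y` match `D`.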
Because fluctuating sources are included in this model, the steady state network produced
will be topologically non-trivial and feature reticulation instead of being a
topological tree.
Previous work has investigated optimization models with fluctuating sources~\cite{Katifori2010,Corson2010}, but these models
were unable to reproduce hierarchical organization.
In the following we show that correlations in the fluctuations can produce
hierarchically nested loops similar to those found in leaves and animal vasculature.
In order to mimic the disorder found in biological networks we investigate an
irregular Voronoi tessellation with a single source located at the center,
and collective fluctuating states as sinks according to Eq.~\eqref{eq:fluct-new}.
A source on the boundary, as in leaves, along with random, non-correlated fluctuations
are investigated in detail in the Supplement~\nameref{S1_Appendix},
and some steady state networks are shown for comparison purposes here in Fig.~\ref{fig:variety}.
We note that random fluctuations produce much less hierarchy.
In order to gain an insight into the phenotypic space that is attainable
with the model and the associated Pareto front of efficient networks,
we scan an extended portion of the parameter space given by
$\rho, \kappa, \sigma$.
The visual appearance of several phenotypes is shown in Fig.~\ref{fig:topologies}, where
we keep the growth parameters fixed and scan the correlation scale $\sigma$. It can be seen that a short correlation scale $\sigma < 0.8$
(corresponding to a moving single sink) leads to a highly reticulate, completely percolated
but non-hierarchical network structure. Increasing the correlation scale above the average edge
length leads to a gradual loss of percolation but increase in hierarchical organization.
Networks with a realistic degree of hierarchical organization appear around $\sigma = 1.0$.
\begin{figure*}
\includegraphics[width=\textwidth]{Figure7}
\caption{The great variety of network phenotypes that can be produced with a locally adaptive fluctuating load model.
All examples lie approximately equally spaced on the Pareto front of efficient networks,
thus representing different trade-offs between baseline power dissipation, cost, and damage robustness.
The number of loops and thus damage robustness increases to the right.
The Pareto front corresponds to the whole spectrum of reticulate networks, from
highly hierarchical, fragile but cheap to highly robust, expensive networks.
A--E. The source is at the center.
F--J. The source is at the left side.
\label{fig:variety}}
\end{figure*}
\begin{figure}
\centering
\includegraphics[width=.8\textwidth]{Figure8}
\caption{Examples of dicot leaf networks from the database used in~\cite{Ronellenfitsch2015b} that
exhibit features of the Pareto archetypes (all panels are shown on the same scale). Original scans of cleared and stained leaves were digitally
desaturated and contrast enhanced to make the venation network more apparent.
A, B (\emph{Protium subserratum}, \emph{Protium trifoliolatum}) show many small freely ending veinlets
that are non-reticulate topological trees similar to the
``fragile but cheap'' archetype. C, D (\emph{Parkia pendula}, \emph{Schizolobium amazonicum})
show almost no freely ending veinlets and are highly reticulate,
similar to the ``robust but expensive'' archetype.
\label{fig:example-leaves}}
\end{figure}
To quantify the network phenotypic traits we use the power dissipation, the network cost,
and the percolation penalty (see Section~\ref{sect:traits}). We show the dependence of the various phenotypes on the length scale $\sigma$
in Fig.~\ref{fig:results}.
Damage robustness and cost appear to vary together after a certain threshold correlation scale.
For an organism, low cost, efficiency and robustness are all important but competing requirements, so that we expect
that nature strikes a trade-off in the development of vascular networks. From the point of view of optimization,
this means that no single optimal network exists, but instead there is a continuous
Pareto front of equivalent networks which interpolates between the relevant traits.
We show the entire phenotypic space and the Pareto front in Fig.~\ref{fig:pareto}.
Strikingly, the Pareto front is geometrically one-dimensional (consistent with the
general predictions on Pareto optimality in biology in~\cite{Shoval2012}), with functional
archetypes located at the endpoints of the line, and parametrized by the correlation
length scale $\sigma$. The archetypes correspond to completely non-reticulate, cheap networks
and highly reticulate, highly robust, expensive networks
(see also Figs.~\ref{fig:results}, \ref{fig:pareto}).
The most efficient networks in terms of power dissipation produced by the adaptive algorithm appear to lie at a medium
cost (Fig.~\ref{fig:pareto} C) and comparatively high robustness (Fig.~\ref{fig:pareto} A).
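The Pareto front itself can be extracted by plain non-dominated filtering of the trait vectors (power dissipation, cost, percolation penalty); a minimal sketch, minimizing every coordinate, and not necessarily the exact implementation behind the figures:

```python
def pareto_front(points):
    """Return the points not dominated by any other point
    (a point q dominates p if q <= p in every coordinate and q != p)."""
    front = []
    for p in points:
        dominated = any(
            all(q[i] <= p[i] for i in range(len(p))) and q != p
            for q in points
        )
        if not dominated:
            front.append(p)
    return front

# Illustrative 2d trait vectors, e.g. (cost, dissipation):
traits = [(1.0, 3.0), (2.0, 2.0), (3.0, 1.0), (3.0, 3.0)]
print(pareto_front(traits))  # [(1.0, 3.0), (2.0, 2.0), (3.0, 1.0)]
```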
Finally, we highlight the extreme variety of network topologies or phenotypes
that can be reproduced using our model, see Fig.~\ref{fig:variety}.
The interplay between growth parameters, correlation length scale, and
boundary conditions leads to a plethora of highly dissimilar networks. Still, many of them resemble
real organisms such as dicot and fern leaves, or the vasculature of the retina or brain.
To demonstrate the archetypes in real biological systems, we compare the identified archetypes with leaf images from the database
used in~\cite{Ronellenfitsch2015b}. In Fig.~\ref{fig:example-leaves}
we show leaves with a very large number of freely ending veinlets,
corresponding to the ``cheap but fragile'' archetype, and highly reticulate ones with barely any freely ending veinlets, corresponding to the ``robust but expensive'' archetype.
The majority of the samples in the database fall close to the intermediate, partially hierarchical, partially reticulate archetype. We note that several samples deviate from the phenotypic spectrum described in this work, pointing towards additional developmental constraints or functional considerations.
\section{Discussion}
\label{sect:discussion}
We have presented a model of adaptive distribution networks with collective, spatially correlated load fluctuations. This model is able to reproduce for the first time a large number of weighted network topologies that can be found in biological organisms such as dicot and fern leaves, and animal vasculature.
Spatially correlated fluctuations have been observed both in animals and plants before,
and are likely the simplest possible explanation for reticulation in adaptive
networks, because no additional chemical morphogens or fine tuned source dynamics are
required to develop the spectrum of observed morphologies.
The model is able to produce hierarchically organized, reticulate venation patterns
by tuning a correlation length parameter. This parameter interpolates across the entire
spectrum of hierarchical networks that are typically found on different scales of an organism (e.g.,
in leaf venation).
In the main paper, we analyzed a Gaussian model
and in the Supplement~\nameref{S1_Appendix} we showed that an exponential model
produces comparable results whereas a non-correlated, random model does not, demonstrating
that spatial collectivity may be necessary for hierarchical organization.
We showed that realistic networks tend to be produced when the correlation length scale
is on the order of the typical vessel length in the network.
We performed an analysis of network efficiency and robustness and identified dissipated power, network cost, and a penalty for not remaining
percolated in the event of damage as possible relevant observables.
Because nature faces a multi-objective optimization problem, one expects a continuum
of optimal states given by the Pareto frontier.
Thus, we identified the primary trade-offs that networks must consider: low cost
\emph{versus} high efficiency \emph{versus} damage robustness. We showed that
the Pareto front of the states produced by the developmental model is one-dimensional, with functional archetypes corresponding to
non-reticulate, non-robust, cheap networks, and highly reticulate, robust but expensive networks.
Surprisingly, the most highly optimized networks in terms of power dissipation fall in
a region of medium cost and, by visual inspection, are prime candidates for the most ``natural''
phenotypes.
We compared the numerical results with real leaf networks and found examples close to either Pareto archetype.
Thus, we proposed a simple, easily tunable mechanism that is able to produce an entire
spectrum of phenotypic variation in vascular networks. Out of the three control parameters, the growth strength, the growth timescale, and the fluctuation correlation scale,
only the last is highly correlated with the position on the
Pareto front and thus with the position on the spectrum of vascular networks. This likely allows natural
selection to more easily adjust the network for a given required functionality.
In conclusion, our work for the first time provides a parametrization of the
phenotypic space of adaptive distribution networks, identifies the relevant trade-offs for network efficiency,
and proposes correlated fluctuations as the
most promising mechanism by which hierarchically optimized networks can emerge in nature
without genetic pre-encoding of the topology.
Thus, we provide new insights for the experimental study of adaptive distribution networks
in plants and animals, and in particular in the case of plant leaves, where the precise
dynamics of venation development has not yet been elucidated.
\section*{Supporting Information}
\paragraph*{S1 Appendix.}
\label{S1_Appendix}
{\bf Supplemental Material.} Contains the explicit nondimensionalization of the model used in the
paper as well as further results for different lattices and boundary conditions.
\section*{Acknowledgments}
This work was supported by the NSF Award PHY-1554887, and the Burroughs Wellcome Career Award. We thank Patrick J. Drew for the image in Figure~1~B.
\section{Introduction}
\label{intro}
Consider the Navier--Stokes equations (N.S.E.) on a two dimensional torus ${\mathbb T}^2$,
\begin{equation}
\label{E11a}
\begin{aligned}
&\partial_t\vec u(t,x)+ \vec u(t,x)\cdot \nabla_x\vec u(t,x)= \Delta_x\vec u(t,x )-\nabla_x p(t,x)+ \vec F(t,x),\\
& \nabla \cdot \vec u(t,x)= 0,\\
& \vec u(0,x)=\vec u_0(x).
\end{aligned}
\end{equation}
The two dimensional vector field $\vec u(t,x)$ and the scalar field $p(t,x)$, defined over $[0,+\infty)\times{\mathbb T}^2$, are called the Eulerian velocity and pressure, respectively. The forcing $\vec F(t,x)$ is assumed to be a Gaussian noise, white in $t$, homogeneous and sufficiently regular in $x$, defined over a certain probability space $(\Omega,{\mathcal F},{\P})$.
Consider the trajectory of a tracer particle defined as the solution of the ordinary differential equation (o.d.e.)
\begin{equation}
\label{E11b}
\dfrac{d x(t)}{d t}=\vec u(t,x(t)),\quad x(0)=x_0,
\end{equation}
where $x_0\in {\@Bbb R}^2$. Thanks to well known regularity properties of solutions of the N.S.E., see e.g. \cite{MP}, $\vec u(t,x)$ possesses a continuous modification in $x$ for any $t>0$. However, since $\vec u(t,x)$ need not be Lipschitz in $x$, the equation might not define $x(t)$, $t\ge 0$, as a stochastic process over $(\Omega,{\mathcal F},{\P})$, due to possible non-uniqueness of solutions. In our first result we construct a solution process (see Proposition \ref{CTM}) and show (see Corollary \ref{unique-law-traj}) that the law of any process satisfying \eqref{E11b} and adapted to the natural filtration of $\vec u$ is uniquely determined.
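Numerically, a trajectory of \eqref{E11b} can be approximated by a standard o.d.e. scheme once a velocity field is given. The sketch below advects a tracer with a Heun step in an illustrative steady, divergence-free cellular field (not an actual solution of the N.S.E.); for such a steady field the stream function is conserved along the trajectory, which serves as a sanity check:

```python
import math

def velocity(x, y):
    """Illustrative steady, divergence-free field on the torus with stream
    function psi = sin(2*pi*x)*sin(2*pi*y)/(2*pi); a stand-in for u(t, x)."""
    return (math.sin(2 * math.pi * x) * math.cos(2 * math.pi * y),
            -math.cos(2 * math.pi * x) * math.sin(2 * math.pi * y))

def advect(x0, y0, dt=1e-3, steps=1000):
    """Heun (explicit trapezoid) integration of dx/dt = u(x)."""
    x, y = x0, y0
    for _ in range(steps):
        u1, v1 = velocity(x, y)
        u2, v2 = velocity(x + dt * u1, y + dt * v1)
        x += 0.5 * dt * (u1 + u2)
        y += 0.5 * dt * (v1 + v2)
    return x, y

x, y = advect(0.1, 0.2)
```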
The main objective of this paper is to study ergodic properties of the trajectory process. We prove, see part 1) of Theorem \ref{lab3}, the existence of the Stokes drift
\begin{equation}\label{eq1}
v_*:=\lim_{t\to+\infty}\frac{x(t)}{t},
\end{equation}
where the limit above is understood in probability. A similar result for a Markovian and Gaussian velocity field $\vec u$ (that need not be a solution of the N.S.E.) that decorrelates sufficiently fast in time has been considered in \cite{Kps}. Next, we investigate the size of ``typical fluctuations'' of the trajectory around its mean. We prove, see part 3) of the theorem, that
\begin{equation}
\label{012603}
Z(t):=\frac{x(t)-v_*t}{\sqrt{t}}\Rightarrow Z,\quad\mbox{ as } t\to+\infty
\end{equation}
where $Z$ is a random vector with normal distribution ${\mathcal N}(0,D)$ and the convergence is understood in law. Moreover, we show that the asymptotic variance of $Z(t)$, as $t\to +\infty$, exists and coincides with the covariance matrix $D$.
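The two limit statements can be illustrated on a toy ergodic process; the AR(1) chain below is only a stand-in for the Lagrangian dynamics, chosen because its asymptotic variance, $1/(1-a)^2$, is explicit:

```python
import random

random.seed(1)

def ar1_sum(n, a=0.5):
    """Sum of an AR(1) chain x_{k+1} = a*x_k + N(0,1) noise (ergodic, mean zero)."""
    x, s = 0.0, 0.0
    for _ in range(n):
        x = a * x + random.gauss(0.0, 1.0)
        s += x
    return s

n, runs = 2000, 200
norm_sums = [ar1_sum(n) / n**0.5 for _ in range(runs)]
mean_lln = sum(norm_sums) / runs / n**0.5      # time average: tends to the mean, 0
var_clt = sum(z * z for z in norm_sums) / runs # approaches 1/(1-a)^2 = 4
```

The empirical variance of the normalized sums stabilizes near the asymptotic value, mirroring the existence of the limit in part 2) of Theorem~\ref{lab3}.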
In our approach a crucial role is played by the {\em Lagrangian process}
$$
\vec\eta(t,x):=\vec u(t,x(t)+x),\quad t\ge 0, \ x\in {\mathbb T}^2
$$
that describes the environment from the vantage point of the moving particle.
It turns out that its rotation in $x$,
$$
\omega(t,x)={\rm rot}\,\vec\eta(t,x):=\partial_2\eta_1(t,x)-\partial _1\eta_2(t,x),\quad t\ge 0,\ x\in {\mathbb T}^2,
$$
satisfies a stochastic partial differential equation (s.p.d.e.) \eqref{E25a} that is similar to the stochastic N.S.E. in the vorticity formulation, see \eqref{E25a0}. The position $x(t)$ of the particle at time $t$, can be represented as an additive functional of the Lagrangian process, i.e.
$$
x(t)=\int_0^t\psi_*(\omega(s))ds,
$$
see the beginning of Section \ref{S6} for the definition of $\psi_*$. Then, \eqref{eq1} and \eqref{012603} become statements about the law of large numbers and the central limit theorem for an additive functional of the process $\omega(\cdot)$.
Following the ideas of Hairer and Mattingly, see \cite{HM,HM1}, we are able to prove, see Theorem \ref{T1} below, that the transition semigroup of $\omega(\cdot)$ satisfies the spectral gap property in a Wasserstein metric defined over the Hilbert space $H$ of square integrable mean zero functions. If $\psi_*(\cdot)$ were Lipschitz this fact would make the proof of the law of large numbers and central limit theorem standard, in view of \cite{shirikyan} (see also \cite{kowalczuk,kuksin-shirkyan}). However, in our case the observable $\psi_*$ is not Lipschitz. In fact, it is not even defined on the state space $H$ of the process. Nevertheless, it is a bounded linear functional over another Hilbert space $V$ that is compactly embedded in $H$. Adopting the approach of Mattingly and Pardoux from \cite{MP}, see Theorem \ref{T2} below, we are able to prove that the equation for $\omega$ has regularization properties similar to the N.S.E. and that $\omega(t)$ belongs to $V$ for any $t>0$. In consequence, one can show that the transition semigroup can be defined on $\psi_*$ and has the same contractive properties as the semigroup defined on Lipschitz functions on $H$. The law of large numbers can then be shown, Section \ref{sec5.4}, by a modification of the argument of Shirikyan from \cite{shirikyan} (see also \cite{kowalczuk}). To prove the central limit theorem we construct a corrector field $\chi$, see Section \ref{lab21}, over the ``larger'' space $H$. Then, we proceed with the classical martingale proof of the central limit theorem, see Section \ref{sec5.4}. Such an argument has been used to show this type of a theorem for a Lipschitz observable of the solution of a N.S.E. in \cite{shirikyan}. The proof of the existence of the asymptotic variance is done in Section \ref{sec5.3}.
The model of transport in a fluid flow based on \eqref{E11b} is referred to in the literature as the {\em passive tracer model} (see e.g. Chapter V of \cite{yaglom-monin}). The $d$-dimensional vector field $\vec u$ appearing on the right hand side of \eqref{E11b} is usually assumed to be random and stationary, and in principle may have nothing to do with the N.S.E. Since the fluid flow is incompressible, equation \eqref{E11b} is complemented by the condition $\nabla_x\cdot \vec u(t,x)\equiv 0$. This model was introduced by G. Taylor in the 1920s (see \cite{taylor} and also \cite{kraichnan}) and plays an important role in describing transport phenomena in fluids, e.g. in the investigation of ocean currents (see \cite{stewart}). There exists an extensive literature concerning the passive tracer, both from the mathematical and physical points of view, see e.g. \cite{krama} and the references therein. In particular, it can be shown (see \cite{port-stone}) that the incompressibility assumption implies that the Lagrangian process $\vec u(t,x(t))$, $t\ge 0$, is stationary and, if one can prove its ergodicity, the Stokes drift coincides with the mean of the field $\mathbb E\vec u(0,0)$. The weak convergence of $(x(t)-v_*t)/\sqrt{t}$ towards a normal law has been shown for flows possessing good relaxation properties either in time, or both in time and space, see \cite{caxu, fk1,kola, koralov} for the Markovian case, or \cite{fg-1} for the case of non-Markovian, Gaussian fields with finite decorrelation time. To our knowledge, this
is the first time the central limit theorem has been proved for a tracer in a flow given by an actual solution of the two dimensional N.S.E.
\section{Preliminaries}
\subsection{Some function spaces and operators}
Denote by ${\mathbb T}^2$ the two dimensional torus understood as the product of two segments $[-1/2,1/2]$ with identified endpoints. Trigonometric monomials $e_k(x)={\rm e}^{2i\pi k\cdot x}$, $k=(k_1,k_2)\in{\@Bbb Z}^2$, form the orthonormal base in the space $L^2({\mathbb T}^2)$ of all square integrable functions with the standard scalar product $\langle\cdot,\cdot\rangle$ and norm $|\cdot|$. For a given $w\in L^2({\mathbb T}^2)$ let $\hat w_k=\langle w,e_k\rangle$. Let $H$ be the subspace of $L^2({\mathbb T}^2)$ consisting of those functions $w$, for which $\hat w_0=0$. For any $r\in{\@Bbb R}$ let
$$
(-\Delta)^{r/2}w:=\sum_{k\in{\@Bbb Z}^2_*}|k|^r\hat w_ke_k,\quad w\in H^r,
$$
where $H^r$ consists of such $w$, for which $\sum_{k\in{\@Bbb Z}^2_*}|k|^{2r}|\hat w_k|^2<+\infty$ and ${\@Bbb Z}^2_*:={\@Bbb Z}^2\setminus\{(0, 0)\}$. We equip $H^r$ with the graph Hilbert norm $|\cdot |_r:=|(-\Delta )^{r/2}\cdot |$. Let $V:=H^1$ and let $V'$ be the dual to $V$. Then $H$ can be identified with a subspace of $V'$ and $V\hookrightarrow H \hookrightarrow V'$. We shall also denote by $\|\cdot\|$ the respective norm $|\cdot|_1$. It is well known (see e.g. Corollary 7.11 of \cite{gilbarg-trudinger}) that $H^{1+s}$ is continuously embedded in $C({\mathbb T}^2)$ for any $s>0$. Moreover, there exists a constant $C>0$ such that
\begin{equation}
\label{embed}
\|w\|_{\infty}\le C|w|_{1+s},\quad\forall\, w\in C^\infty({\mathbb T}^2).
\end{equation}
Here $\|w\|_{\infty}:=\sup_{x\in{\mathbb T}^2}|w(x)|$. In addition, the following estimate, sometimes referred to as the Gagliardo--Nirenberg inequality, holds (see e.g. p. 27 of \cite{henry}): for any $s>0$ and $\beta\in[0,1]$ there exists $C>0$ such that
\begin{equation}
\label{gagliardo-nirenberg}
|w|_{\beta s}\le C|w|^{1-\beta}|w|_{s}^{\beta},\quad\forall\, w\in C^\infty({\mathbb T}^2).
\end{equation}
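For the spectral norms used here the interpolation inequality in fact holds with $C=1$, by H\"older's inequality applied to the coefficient sums; this is easy to verify numerically on a truncated Fourier series:

```python
import math
import random

random.seed(0)

def sobolev_norm(coeffs, r):
    """Truncated |w|_r for coefficients {(k1, k2): w_hat_k} on Z^2 minus the origin;
    (k1^2 + k2^2)^r equals |k|^{2r}."""
    return math.sqrt(sum((k1 * k1 + k2 * k2) ** r * abs(c) ** 2
                         for (k1, k2), c in coeffs.items()))

# Random mean-zero trigonometric polynomial.
w = {(k1, k2): random.gauss(0.0, 1.0)
     for k1 in range(-4, 5) for k2 in range(-4, 5) if (k1, k2) != (0, 0)}

s, beta = 2.0, 0.3
lhs = sobolev_norm(w, beta * s)
rhs = sobolev_norm(w, 0.0) ** (1 - beta) * sobolev_norm(w, s) ** beta
```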
Define ${\mathcal K}\colon H^r\to H^{r+1}\times H^{r+1}$ by
\begin{equation}
\label{010512}
{\mathcal K}(w)=({\mathcal K}_1(w),{\mathcal K}_2(w)):=\sum_{k\in{\@Bbb Z}^2_*}|k|^{-2}k^\perp \hat w_ke_k.
\end{equation}
We have
\begin{equation}
\label{010512a}
|{\mathcal K}_i(w)|_{r+1}\le |w|_r, \quad w\in H^r.
\end{equation}
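Since ${\mathcal K}$ acts diagonally in Fourier space, both \eqref{010512} and the bound \eqref{010512a} can be checked directly on truncated series; the sketch below assumes the convention $k^\perp=(-k_2,k_1)$, which is not spelled out in the text:

```python
import math
import random

random.seed(2)

def biot_savart(coeffs):
    """Fourier coefficients of K(w) = sum |k|^{-2} k_perp w_hat_k e_k,
    with the assumed convention k_perp = (-k2, k1)."""
    out = {}
    for (k1, k2), c in coeffs.items():
        n2 = k1 * k1 + k2 * k2
        out[(k1, k2)] = (-k2 * c / n2, k1 * c / n2)
    return out

def sobolev(coeffs, r):
    """Truncated Sobolev norm |.|_r of a coefficient dictionary."""
    return math.sqrt(sum((k1 * k1 + k2 * k2) ** r * abs(c) ** 2
                         for (k1, k2), c in coeffs.items()))

w = {(k1, k2): random.gauss(0.0, 1.0)
     for k1 in range(-3, 4) for k2 in range(-3, 4) if (k1, k2) != (0, 0)}
vel = biot_savart(w)
u1 = {k: v[0] for k, v in vel.items()}
u2 = {k: v[1] for k, v in vel.items()}
# The estimate |K_i(w)|_{r+1} <= |w|_r, checked here for r = 0.
```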
For a given $x\in{\@Bbb R}^2$ and $w\in H^r$ we let $\tau_xw\in H^r$ be defined by
$$
\tau_xw:=w(\cdot+x)=\sum_{k\in{\@Bbb Z}^2_*}{\rm e}^{2\pi i k\cdot x}\hat w_ke_k.
$$
\subsection{Homogeneous Wiener process}
Write
$$
\mathbb Z^2_+:=[(k_1,k_2)\in \mathbb Z^2_*\colon k_2>0]\cup [(k_1,k_2)\in \mathbb Z^2_*\colon k_1>0,k_2=0]
$$
and let $\mathbb Z^2_-:=-\mathbb Z^2_+$.
Let $ (B_k(t))_{t\ge 0}$, $k\in \mathbb Z^2_+$, be independent, standard one dimensional Brownian motions defined on a filtered probability space $(\Omega,\mathcal{F},(\mathcal{F}_t)_{t\ge 0}, \mathbb{P})$. Define $B_{-k}(t):=B_k(t)$ for $k\in \mathbb Z^2_+$.
Assume that the function $k\mapsto q_k$ is even, i.e.
$q_{-k}=q_k$, $k\in \mathbb Z^2_*$, and real-valued. A cylindrical Wiener process in $H$, given on a filtered probability space $(\Omega,\mathcal{F},(\mathcal{F}_t)_{t\ge 0}, \mathbb{P})$, can be written as
$$
W(t):=\sum_{k\in{\@Bbb Z}^2_*}B_k(t)e_k,\quad t\ge0.
$$
Let $Q\colon H\to H^r$ be a symmetric, positive-definite, bounded linear operator given by
\begin{equation}
\label{031002}
\widehat {Qw}_k := q_k\widehat w_k,\qquad k\in
{\@Bbb Z}^2_*.
\end{equation}
The Hilbert--Schmidt norm of the operator, see Appendix C of \cite{DaPrato-Zabczyk}, can be computed from the formula
\begin{equation}
\label{011002}
\| Q\| ^2_{L_{(HS)}(H,H^{r})}:= \sum\limits_{k\in{\@Bbb Z}^2_*}\|
Qe_k\| ^2_{H^{r}}= \sum\limits_{k\in{\@Bbb Z}^2_*}|k|^{2r}
q_k^2.
\end{equation}
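Whether \eqref{011002} is finite is easy to probe numerically. For the illustrative diagonal choice $q_k=|k|^{-\alpha}$ (not prescribed by the text), the partial sums over $0<|k|\le N$ stabilize precisely when $2(\alpha-r)>2$:

```python
def hs_partial_sum(alpha, r, N):
    """Partial sum of sum_{0<|k|<=N} |k|^{2r} q_k^2 with q_k = |k|^{-alpha};
    note |k|^{2r} q_k^2 = (k1^2 + k2^2)^(r - alpha)."""
    s = 0.0
    for k1 in range(-N, N + 1):
        for k2 in range(-N, N + 1):
            if (k1, k2) == (0, 0):
                continue
            s += (k1 * k1 + k2 * k2) ** (r - alpha)
    return s

# Convergent case: r = 1, alpha = 3, so 2*(alpha - r) = 4 > 2: the tail is tiny.
tail = hs_partial_sum(3.0, 1.0, 60) - hs_partial_sum(3.0, 1.0, 30)
```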
\begin{proposition}
If $\| Q\| ^2_{L_{(HS)}(H,H^{r})}<+\infty$ then the process $\left(QW(t)\right)_{t\ge0}$ has realizations in $H^r$, $\mathbb{P}$-a.s. Moreover,
the law of the Wiener process $\left(\tau_xQW(t)\right)_{t\ge0}$ does not depend on $x\in{\@Bbb R}^2$.
\end{proposition}
\noindent{\sc Proof} The first part of the proposition follows directly from Proposition 4.2, p. 88 of \cite{DaPrato-Zabczyk}. The second part is a simple consequence of the fact that the processes in question have the same covariance operator as $\left(QW(t)\right)_{t\ge0}$. \mbox{$\square$}
\section{Formulation of the main results}\label{sec2.3}
In this section we make precise what we mean by a solution of \eqref{E11b} with the vector field $\vec u$ given by the solution of the Navier--Stokes equations \eqref{E11a}, and formulate the main results of the paper dealing with the long time, large scale behavior of the trajectory.
As it turns out, if the initial condition $\vec u_0\in V$, then the components of the solution of the N.S.E. belong to $V$, see \cite{MS}. We therefore cannot use equation \eqref{E11b} directly to define the solution, because the point evaluation of the field is not well defined (not to mention the question of the existence and uniqueness of solutions to the o.d.e. in question).
\subsection{Vorticity formulation of the N.S.E}
Note that the rotation
$$
\xi(t):={\rm rot}\, \vec u(t)=\partial_2u_1(t)-\partial_1u_2(t)
$$
of $\vec u(t,x)=(u_1(t,x),u_2(t,x))$, satisfies
\begin{equation}\label{E25a0}
d \xi(t) =[\Delta\xi(t) -B_0(\xi(t))]d t + Qd W(t), \qquad \xi(0)=w\in H,
\end{equation}
with a cylindrical Wiener process $W(t)$, $t\ge 0$, on $H$, non-anticipative with respect to the filtration $\{{\mathcal F}_t,\,t\ge0\}$, a certain Hilbert--Schmidt operator $Q\in L_{(HS)}(H,H)$, and $B_0(\xi):=B_0(\xi,\xi)$, $\xi\in V$, where
$
B_0(h,\xi):=\vec u\cdot\nabla \xi,
$
with $\vec u:={\mathcal K}(h)$.
Let ${\mathcal E}_T:=C([0,T];H)\cap L^2([0,T];V)$.
\begin{definition}\label{D21a}
{\rm A measurable and $({\mathcal F}_{t})$-adapted, $H$-valued process $\xi=\left\{\xi(t),\,t\ge0\right\}$ is a solution to $\eqref{E25a0}$ if for any $T\in (0,+\infty)$, $\xi\in L^2(\Omega, {\mathcal E}_T, \mathbb{P})$ and
\begin{equation}
\label{020512a}
\xi(t) = {\rm e} ^{\Delta t} w - \int_0^t {\rm e} ^{\Delta (t-s)} B_0(\xi(s))d s + \int_0^t {\rm e} ^{\Delta (t-s)} Qd W(s)
\end{equation}
for all $t\ge 0$. }
\end{definition}
The following estimate comes from \cite{MP}, see Lemma A.3, p. 39.
\begin{proposition}
\label{propMP}
For any $T,N>0$ there exists $C>0$ such that
\begin{equation}
\label{012703}
\mathbb E\left[\sup_{t\in[0,T]}(|\xi(t)|^2+t\|\xi(t)\|^2)^N\right]\le C(1+|w|^{4N}),\quad \forall\,w\in H.
\end{equation}
\end{proposition}
Let $\vec u(t):={\mathcal K}(\xi(t))$.
Using the above proposition and \eqref{embed} we conclude the following.
\begin{corollary}
\label{cor012703}
For any $t>0$, $\vec u(t)\in C({\mathbb T}^2)$ and
\begin{equation}
\label{022703}
\int_0^t\|\vec u(s)\|_{\infty}ds<+\infty,\quad \mathbb{P}-\mbox{a.s.}
\end{equation}
\end{corollary}
\noindent{\sc Proof}
The continuity of $\vec u(t,x)$ with respect to $x$ follows from the Sobolev embedding. From \eqref{010512a} we conclude that there exists $C>0$ such that
\begin{equation}
\label{022803}
\|\vec u(s)\|_{\infty}\le C\|\xi(s)\|,\quad\forall\,s\ge 0.
\end{equation}
On the other hand from \eqref{012703} we conclude that for any $t>0$ there exists a random variable $\tilde C$ that is almost surely finite
and such that
$
\|\xi(s)\|\le \tilde Cs^{-1/2}
$ for all $s\in(0,t]$.
Combining this with \eqref{022803} we conclude \eqref{022703}.
\mbox{$\square$}
\subsection{Definition of trajectory process and its ergodic properties}
\begin{definition}
\label{def-1a}
{\rm Let $x_0\in{\@Bbb R}^2$. By a {\em solution to} $\eqref{E11b}$ we mean any $({\mathcal F}_t)$-adapted process $x(t)$, $t\ge0$, with continuous trajectories, such that
\begin{equation}
\label{integral-ode}
x (t) =x_0+\int_0^t\vec u(s,x(s))d s, \quad\forall\,t\ge0,\qquad \text{$\Bbb P$-a.s.}
\end{equation}
}
\end{definition}
For a given $\nu>0$ denote $e_{\nu}(w):=\exp\{\nu|w|^2\}$, $w\in H$.
\begin{theorem}\label{lab3}
Assume that $Q$ in \eqref{E25a} belongs to $L_{(HS)}(H,V)$ and has a trivial null space, i.e. $Qw=0$ implies $w=0$.
Suppose that the initial vorticity is random, distributed on $H$ according to the law $\mu_0$ for which
\begin{equation}
\label{022011}
\int_{H}e_{\nu_0}(w)\mu_0(dw)<+\infty
\end{equation}
with a certain $\nu_0>0$.
Finally, assume that $\{x(t;x_0),\,t\ge0\}$ is a solution of \eqref{E11b} corresponding to the initial data $x_0\in{\@Bbb R}^2$.
Then, the following are true:
\begin{enumerate}
\item[1)] \label{1} (Weak law of large numbers) there exists $v_*=(v_{*,1},v_{*,2})\in{\@Bbb R}^2$ such that
\begin{equation}
\lim_{T\to+\infty}\frac{x(T;x_0)}{T}= v_{*} \label{spwl}
\end{equation}
in probability.
\item[2)] (Existence of the asymptotic variance) there exists $D_{i j}\in[0,+\infty)$ such that
\begin{equation}
\lim_{T\to+\infty}\frac1T\mathbb E\left[(x_i(T;x_0)-v_{*,i}T)(x_j(T;x_0)-v_{*,j}T)\right]=D_{ij},\quad i,j=1,2.\label{D}
\end{equation}
\item[3)] \label{2} (Central limit theorem)
Random vectors $(x(T;x_0)-v_{*}T)/\sqrt{T}$ converge in law, as $T\to +\infty$, to a zero mean normal law whose covariance matrix equals ${\bf D}=[D_{ij}]$.
\end{enumerate}
\end{theorem}
\section{Lagrangian and tracer trajectory processes}
\subsection{Uniqueness in law of the trajectory process}
Define the {\em Lagrangian velocity process} as
$$
\vec \eta(t,x)=(\eta_1(t,x),\eta_2(t,x)):=\vec u(t,x(t)+x),\qquad t\ge 0,\ x\in \mathbb{R}^2.
$$
Suppose that the forcing $\vec F$ is a Gaussian random field, white in time and homogeneous in space.
Using It\^o's formula, we obtain that the vorticity of the Lagrangian velocity, given by
$$
\omega(t,x):={\rm rot}\,
\vec\eta(t,x)=\xi(t,x(t)+x)
$$
satisfies $\omega(0)=\tau_{x_0}w\in H$ and
\begin{equation}\label{E25a}
d \omega(t) =[\Delta\omega(t) -B_0(\omega(t))+B_1(\omega(t))]d t + Qd W(t),
\end{equation}
where $W$ is an $({\mathcal F}_t)$-adapted cylindrical Wiener process on $H$, $Q\in L_{(HS)}(H,H)$ and
$$
B_0(\omega):=B_0(\omega,\omega), \quad B_1(\omega):=B_1(\omega,\omega),
$$
$$
B_0(h,\omega):=\vec \eta\cdot\nabla \omega,\quad B_1(h,\omega):=\vec \eta(0)\cdot\nabla \omega, \quad\omega\in V,
$$
with $\vec\eta:={\mathcal K}(h)$; for more details see \cite{fkp, kp}. Since we have assumed that $\omega\in V$ and, by the Sobolev embedding, ${\mathcal K}(V)$ is embedded into the space $C({\mathbb T}^2;{\@Bbb R}^2)$ of continuous, two dimensional vector fields on ${\mathbb T}^2$, the pointwise evaluation of $\vec \eta$ is well defined, and therefore there is no ambiguity in the definition of
$B_1(\omega)$ for $\omega\in V$.
\begin{definition}\label{D21}
{\rm A measurable, $({\mathcal F}_{t})$-adapted, $H$-valued process $\omega=\left\{\omega(t),\,t\ge0\right\}$ is a solution to $\eqref{E25a}$, with the initial condition $\omega(0)=w$, if for any $T>0$, $\omega\in L^2(\Omega, {\mathcal E}_T,\mathbb{P})$ and
\begin{equation}
\label{020512}
\omega(t) = {\rm e} ^{\Delta t} w - \int_0^t {\rm e} ^{\Delta (t-s)} B_0(\omega(s))d s +
\int_0^t {\rm e} ^{\Delta (t-s)} B_1(\omega(s))d s+ \int_0^t {\rm e} ^{\Delta (t-s)} Qd W(s),
\end{equation}
$\mathbb{P}$-a.s. for all $t\ge 0$.}
\end{definition}
Sometimes, when we wish to highlight the dependence on the initial condition and the Wiener process, we shall write $\omega(t;w,W)$. We shall omit writing one, or both of these parameters when they are obvious from the context.
Using a Galerkin approximation argument, as in Section 3 of \cite{MS}, see also Appendix \ref{secAp1} below for the outline of the argument, we conclude the following.
\begin{theorem}
\label{MF1}
Given an initial condition $w\in H$ and an $({\mathcal F}_t)$-adapted cylindrical Wiener process $(W(t))_{t\ge0}$, there exists a unique solution to \eqref{E25a} in the sense of Definition \ref{D21}. Moreover, processes $\{\omega(t;w),\,t\ge0\}$ form a Markov family with the corresponding transition probability semigroup $\{P_t,\,t\ge0\}$ defined on the space $ C_b(H)$ of continuous and bounded functions on $H$.
\end{theorem}
Using the Yamada--Watanabe result, see e.g. \cite{YW} (Corollary after Theorem 4.1.1), or \cite{IW}, from the above theorem we can conclude the following result, see \cite{kp}.
\begin{corollary}
\label{unique-law}
Solutions of \eqref{E25a} have the uniqueness in law property, i.e. the laws over $C([0,+\infty);H)$ of any two solutions of \eqref{E25a} starting with the same initial data (but possibly based on different cylindrical Wiener processes) coincide.
\end{corollary}
This immediately implies the uniqueness in law property for solutions of \eqref{E11b}.
\begin{corollary}
\label{unique-law-traj}
Suppose that $\xi$ and $\xi'$ are two solutions of \eqref{E25a0} with the identical initial data but possibly based on two cylindrical Wiener processes with the respective filtrations $({\mathcal F}_t)$ and $({\mathcal F}_t')$. Assume also that $x(\cdot)$ and $x'(\cdot)$ are the solutions of \eqref{E11b} corresponding to $\vec u(t)={\mathcal K}(\xi(t))$ and $\vec u'(t)={\mathcal K}(\xi'(t))$, respectively. Then, the laws of the pairs $(x(\cdot),\xi(\cdot))$ and $(x'(\cdot),\xi'(\cdot))$ over $C([0,+\infty), \mathbb R^2)\times C([0,+\infty),H)$ coincide.
\end{corollary}
\noindent{\sc Proof}
Both
$
\omega(t,\cdot)=\xi(t,x(t)+\cdot)
$ and $
\omega'(t,\cdot)=\xi'(t,x'(t)+\cdot)
$ satisfy \eqref{E25a}. According to Corollary \ref{unique-law} they have identical laws on $C([0,+\infty),H)$ with the initial condition $\tau_{x_0}w$. In fact, due to an analogue of Proposition \ref{propMP} that holds for the process $\omega(\cdot)$, see part 1) of Theorem \ref{T2}, this law is actually supported in $L^1_{{\rm loc}}([0,+\infty),V)$. We can therefore write $(x(\cdot),\xi(\cdot))=\Psi(\omega(\cdot))$ and $(x'(\cdot),\xi'(\cdot))=\Psi(\omega'(\cdot))$, where the mapping
$$
\Psi=(\Psi_1,\Psi_2):L^1_{{\rm loc}}([0,+\infty),V)\to C([0,+\infty), \mathbb R^2)\times C([0,+\infty),H)
$$
is defined as
\begin{eqnarray*}
&&\Psi_1(X)(t):=x_0+\int_0^t{\mathcal K}(X(s))(0)ds,\\
&& \Psi_2(X)(t,x):=X(t,x-\Psi_1(X)(t)),\quad\forall X\in L^1_{{\rm loc}}([0,+\infty),V),
\end{eqnarray*}
and the uniqueness claim made in the corollary follows.
\mbox{$\square$}
\subsection{Existence of solution of \eqref{E11b}}
\begin{definition}
\label{def-1}
{\em
Suppose that $(\Omega,{\mathcal F}, ({\mathcal F}_t),{\P})$ is a filtered probability
space. Let $x_0\in{\@Bbb R}^2$. By {\em a weak
solution to} $\eqref{E11b}$ we mean a pair consisting of a continuous, $({\mathcal F}_t)$-adapted process $x(t)$, $t\ge0$, and an $({\mathcal F}_t)$-adapted solution $\xi(t)$, $t\ge0$, to \eqref{E25a0} such that
\eqref{integral-ode} holds.}
\end{definition}
Suppose now that we are given a filtration $({\mathcal F}_t)$ and an ${\mathcal F}_t$-adapted solution $\omega$ of \eqref{E25a} with the initial condition $\omega(0)=\tau_{x_0}w$. Define
$
(x(\cdot),\xi(\cdot)):=\Psi(\omega(\cdot)).
$
One can easily check, using It\^o's formula, that $(x(\cdot),\xi(\cdot))$ is a weak solution in the sense of Definition \ref{def-1}. Therefore we conclude the following.
\begin{proposition}
\label{CTM}
Given a filtered probability space, there exists a weak solution of \eqref{E11b}.
\end{proposition}
\section{Spectral gap and regularity properties of the transition semigroup}
\label{sec3}
Here we present the basic results that shall be instrumental in the proof of Theorem \ref{lab3} formulated in the previous section. In the case of the Navier--Stokes dynamics on a two-dimensional torus, the corresponding results have been shown in \cite{HM1}, see Theorem 5.10, Proposition 5.12 and parts 2, 3 of Lemma A.1 there. The proofs of the analogous results for the Lagrangian dynamics are not much different; some additional care is needed due to the presence of the function $B_1(\cdot)$, but this does not create much trouble. We present the proofs of these results in Section \ref{sec6} of the appendix.
Let us introduce the space $C_0^\infty(H)$ consisting of all functionals $\phi$, for which there exist $n\ge1$, a function $F$ from $C^\infty_0({\@Bbb R}^n)$ and vectors $v_1,\ldots,v_n\in H$ such that
$$
\phi(v)=F\left(\langle v,v_1\rangle,\ldots,\langle v,v_n\rangle\right),\quad \forall\,v\in H.
$$
Given $\nu>0$ define ${\mathcal B}_\nu$ as the completion of $C^\infty_0(H)$ under the norm
$$
\|\phi\|_\nu:=\sup_{w\in H}e_{-\nu}(w)\left(|\phi(w)|+\|D\phi(w)\|\right),
$$
where, as we recall,
$
e_{\nu}(w)=\exp\left\{\nu|w|^2\right\}.
$
Here $\|D\phi(w)\|=\sup_{|\xi|\le 1}|D\phi(w)[\xi]|$, where $D\phi(w)[\xi]$ denotes the Fr\'echet derivative of a function $\phi\colon H\to{\@Bbb R}$ at $w$ in the direction $\xi\in H$.
By $\tilde {\mathcal B}_\nu$ we understand the Banach space of all Fr\'echet differentiable functions $\phi$ such that
$
\|\phi\|_{\nu}<+\infty.
$
Let $\mathcal P(H)$ be the space of all Borel, probability measures on $H$.
Recall also that $\mu_* \in \mathcal P(H)$ is called an \emph{invariant measure} for $(P_t)_{t\ge 0}$ if
$$
\langle \mu_*,P_t\phi\rangle = \langle \mu_*,\phi\rangle, \quad \forall \,\phi\in C_b(H),\,t\ge 0.
$$
Here $\langle \mu,\phi\rangle:=\int_H\phi d\mu$ for any $\mu\in{\mathcal P}(H)$ and $\phi$ that is integrable.
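Equivalently, denoting by $\mu P_t\in{\mathcal P}(H)$ the measure determined by $\langle\mu P_t,\phi\rangle:=\langle\mu,P_t\phi\rangle$ for bounded, measurable $\phi$, the invariance of $\mu_*$ amounts to
$$
\mu_*P_t=\mu_*,\quad\forall\,t\ge0.
$$
This notation shall be used repeatedly in what follows.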
Our first result can be stated as follows.
\begin{theorem}\label{T1}
Under the assumptions of Theorem \ref{lab3} the following are true:
\begin{itemize}
\item[1)]
there exist $\nu_0, C>0$ such that for any $\nu\in(0,\nu_0]$ we have
\begin{equation}
\label{012011}
\mathbb E e_{\nu}(\omega(t;w))\le Ce_{\nu}(w),\quad\forall\,t\ge0,\,w\in H.
\end{equation}
\item[2)] the constant $\nu_0$ can be further adjusted in such a way that for any $\nu\in(0,\nu_0]$ the semigroup $(P_t)$ extends to $\tilde{\mathcal B}_\nu$ and
$$
P_t({\mathcal B}_\nu)\subset {\mathcal B}_\nu, \quad \forall\,t\ge0.
$$
In addition, for any $\nu$ as above
there exist $C,\gamma>0$ such that
\begin{equation}
\label{010811b}
\|P_t\phi-\langle\mu_*, \phi\rangle\|_{\nu}\le C{\rm e}^{-\gamma t}\|\phi\|_{\nu},\quad \forall\,t\ge0,\,\phi\in \tilde {\mathcal B}_\nu.
\end{equation}
\item[3)] there exists a unique Borel probability measure $\mu_*$ that is invariant for $(P_t)$
and such that
\begin{equation}
\label{010811a}
\int_He_{\nu}(w)\mu_*(dw)<+\infty,\quad \forall\,\nu\in(0,\nu_0].
\end{equation}
\end{itemize}
\end{theorem}
The property described in \eqref{010811b} is referred to as {\em the spectral gap} of the transition semigroup. Since we shall use an extension of this property to functions defined on a smaller space than $H$ we introduce the following definition.
For $N>0$ and $\phi\in C^1(V)$ define
$$
\|\!|\phi\|\!|_N:=\sup_{w\in V}\frac{|\phi(w)|+\|D\phi(w)\|}{(1+\|w\|)^N}
$$
and denote by $C^1_N(V)$ the space made of functions, for which $\|\!|\phi\|\!|_N<+\infty$.
\begin{theorem}\label{T2}
Under the assumptions of Theorem \ref{lab3} the following are true:
\begin{itemize}
\item[1)] for any $t, N>0$ there exists $C_{t,N}$ such that
\begin{equation} \label{E27-aa}
\mathbb E\|\omega(t;w)\|^N\le C_{t,N}\left(|w|^{2N}+1\right),\quad\forall\,w\in H,
\end{equation}
\item[2)] the definition of the transition semigroup can be extended to an arbitrary $\phi \in C^1_N(V)$ by letting $P_t\phi(w):=\mathbb E\tilde\phi(\omega(t;w))$, where $\tilde \phi$ is an arbitrary, measurable extension of $\phi$ from $V$ to $H$. Moreover,
for any $t, N>0$ there exists $C_{t,N}$ such that for any $\nu>0$,
\begin{equation} \label{E27a}
\|P_t\phi\|_{\nu}\le C_{t,N}\|\!|\phi\|\!|_N,\quad\forall\,\phi\in C^1_N(V).
\end{equation}
\end{itemize}
\end{theorem}
Combining the above result with part 2) of Theorem \ref{T1} we conclude that the following holds.
\begin{corollary}
\label{cor011011}
For any $N>0$ there exist $C,\nu_0,\gamma>0$ such that for any $\nu\in (0,\nu_0]$ we have
\begin{equation}
\label{010811}
\|P_t\phi-\langle\mu_*, \phi\rangle\|_{\nu}\le C{\rm e}^{-\gamma t}\|\!|\phi\|\!|_N,\quad \forall\,t\ge0,\,\phi\in C^1_N(V).
\end{equation}
\end{corollary}
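For the reader's convenience we sketch how the corollary follows. Given $\phi\in C^1_N(V)$ and $t\ge1$ write $P_t\phi=P_{t-1}(P_1\phi)$. By virtue of \eqref{E27a}, $P_1\phi\in\tilde{\mathcal B}_{\nu}$ and $\|P_1\phi\|_{\nu}\le C_{1,N}\|\!|\phi\|\!|_N$, while the invariance of $\mu_*$ yields $\langle\mu_*,P_1\phi\rangle=\langle\mu_*,\phi\rangle$. Estimate \eqref{010811} for $t\ge1$ is then a consequence of \eqref{010811b} applied to $P_1\phi$, after a suitable adjustment of the constant $C$. For $t\in[0,1)$ the estimate follows directly from \eqref{E27a} and the finiteness of $\langle\mu_*,|\phi|\rangle$ (cf. Corollary \ref{cor011112} below), since then ${\rm e}^{-\gamma t}$ is bounded away from $0$.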
Define
$$
{\frak p}(w):=\left\{
\begin{array}{ll}
\|w\|^2&\mbox{ for }w\in V,\\
+\infty&\mbox{ for }w\in H\setminus V.
\end{array}
\right.
$$
\begin{corollary}
\label{cor011112}
For any $N>0$ we have $\langle \mu_*,{\frak p}^N\rangle<+\infty$. Thus, in particular $\mu_*(V)=1$.
\end{corollary}
\noindent{\sc Proof}
Suppose that $\varphi_R\colon [0,+\infty)\to[0,R+1]$ is a continuous function such that $0\le \varphi_R(u)\le u$ for all $u\ge0$, $\varphi_R(u)=u$ if $u\in[0,R]$, and $\varphi_R$ vanishes for $u\ge R+1$. For a fixed $K>0$ we denote
$$
{\frak p}_K(w):=\sum_{0<|k|\le K}|k|^2|\hat w(k)|^2.
$$
Thanks to part 2) of Theorem \ref{T1} we have $P_t{\frak p}^N\in {\mathcal B}_\nu$ for any $t>0$ and therefore from \eqref{E27-aa} and \eqref{010811a} we get
\begin{equation}\label{071801}
\langle \mu_*,P_t{\frak p}_K^N\rangle\le \langle \mu_*,P_t{\frak p}^N\rangle<+\infty.
\end{equation}
We have therefore
\begin{equation}\label{071801a}
\langle \mu_*,P_t(\varphi_R\circ{\frak p}_K^N)\rangle= \langle \mu_*,\varphi_R\circ{\frak p}_K^N\rangle\le \langle \mu_*,P_t{\frak p}^N\rangle.
\end{equation}
The first equality follows from the fact that $\mu_*$ is invariant.
Letting first $R\to+\infty$ and then subsequently $K\to+\infty$, using Fatou's lemma and the monotone convergence theorem, we conclude the corollary.
\mbox{$\square$}
\section{Proof of Theorem \ref{lab3}}\label{S6}
\label{sec5}
To simplify the notation we assume that $x_0=0$ and drop it from our notation. Let $\psi_*=(\psi_*^{(1)},\psi_*^{(2)})\colon V\to{\@Bbb R}^2$ be defined as $\psi_*(\omega):={\mathcal K}(\omega)(0)$.
Since, for any $s>0$, the space $H_{1+s}$ is embedded into $C({\mathbb T}^2)$, there exists $C>0$ such that
\begin{equation}
\label{H-s}
|\psi_*^{(i)}(w)|\le C|{\mathcal K}_i(w)|_{1+s}\le C|w|_s,\quad\forall\,w\in H_s,\,i=1,2.
\end{equation}
It is clear therefore that the components of $\psi_*$ are bounded linear functionals on $V$ and
$\psi_*\in C^1_1(V)$.
Suppose also that $\omega(t)$ is the solution of \eqref{E25} with the initial data distributed according to $\mu_0$.
\subsection{Proof of part 1)}
\label{sec5.1}
Let $v_*:=(v_{*,1},v_{*,2})$ and
$
v_{*,i}:=\langle \mu_*,\psi_*^{(i)}\rangle,
$
and $\tilde \psi_*:=\psi_*-v_*$.
To prove the weak law of large numbers it suffices to show that for $i=1,2$,
\begin{equation}
\label{h11}
\lim_{T\to+\infty} \frac1T\mathbb{E} \tilde x_i(T)
=0\quad\mbox{and}\quad \lim_{T\to+\infty}
\frac{1}{T^2}\mathbb{E}\tilde x^2_i(T)= 0,
\end{equation}
where
$$
\tilde x(T)=(\tilde x_1(T),\tilde x_2(T)):=\int_{0}^{T} \tilde\psi_*(\omega(s))ds.
$$
Using the Markov property we can write that
\begin{eqnarray}
\label{011712}
&&\frac{1}{T}\mathbb{E}\tilde x_i(T)=\frac{1}{T}\int_{0}^{T}
\langle\mu_{0},P_{s}\tilde \psi_{*}^{(i)}\rangle ds,\quad i=1,2.
\end{eqnarray}
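The identity \eqref{011712} is a consequence of the Markov property: for each $s\ge0$,
$$
\mathbb{E}\,\tilde\psi_*^{(i)}(\omega(s))=\mathbb{E}\left[\mathbb{E}\big[\tilde\psi_*^{(i)}(\omega(s))\,\big|\,{\mathcal F}_0\big]\right]=\mathbb{E}\, P_s\tilde\psi_*^{(i)}(\omega(0))=\langle\mu_0,P_s\tilde\psi_*^{(i)}\rangle,
$$
and it remains to integrate in $s$, using Fubini's theorem.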
Suppose that $\nu_0$ is chosen in such a way that the conclusions of Theorem \ref{T1} and Corollary \ref{cor011011} hold. Assume also that
$\nu\in(0,\nu_0]$. We shall adjust its value later on.
By virtue of \eqref{010811} we conclude that there exists a constant $C>0$ such that
\begin{equation}
\label{032011}
|P_{t}\tilde \psi_*(w)|\le C{\rm e}^{-\gamma t}e_{\nu}(w)\|\!|\tilde \psi_*\|\!|_{1}.
\end{equation}
Hence, the
right hand side of \eqref{011712} converges to $0$, by estimate \eqref{032011}, condition \eqref{022011} and the Lebesgue dominated convergence theorem.
On the other hand
\begin{equation} \label{lab11}
\begin{aligned}
\frac{1}{T^2}\mathbb{E}\tilde x^2_i(T)&=
\frac{1}{T^{2}}\,\mathbb{E}\Big(\int_{0}^{T}\!\!\tilde\psi_{*,i}(\omega(t))dt\int_{0}^{T}\!\!\tilde\psi_{*,i}(\omega(s))ds\Big) \\
&=\frac{2}{T^{2}}\!\!\int_{0}^{T}\!\!\int_{0}^{t}\!\,\mathbb{E}[\tilde\psi_{*,i}(\omega(t))\tilde\psi_{*,i}(\omega(s))]dt ds.
\end{aligned}
\end{equation}
The last expression in \eqref{lab11} equals
\begin{eqnarray}
\label{042011}
&& \frac{2}{T^{2}}\int_{0}^{T}\!\!\int_{0}^{t}\!\,\mathbb{E}\big[\tilde\psi_{*,i}(\omega(s))P_{t-s}\tilde\psi_{*,i}(\omega(s))\big] dtds=\frac{2}{T^{2}}\int_{0}^{T}\!\!\int_{0}^{t}\! \langle\mu_0P_{s},\tilde\psi_{*,i} P_{t-s}\tilde\psi_{*,i}\rangle dt ds.
\end{eqnarray}
Using \eqref{032011} and integrating first in the $t$ variable, we can estimate the right hand side of \eqref{042011} by
\begin{equation}
\frac{C}{T^{2}}\int_{0}^{T}\!\!\int_{0}^{t}{\rm e}^{-\gamma (t-s)} \langle\mu_0P_{s},|\tilde\psi_{*,i}| e_{\nu}\rangle dt ds\le
\frac{C}{\gamma T^{2}}\int_{0}^{T}\langle\mu_0P_{s},|\tilde\psi_{*,i}| e_{\nu}\rangle ds.
\label{lab4}
\end{equation}
Applying H\"older's inequality with $q\in(1,\nu_0/\nu)$ and an even integer $p$ such that $p^{-1}:=1-q^{-1}$, we conclude that the right hand side is smaller than
\begin{eqnarray}
\label{012211}
&&\frac{C}{\gamma T^{2}}\int_{0}^{T}\langle\mu_0,P_{s}|\tilde\psi_*|^p \rangle^{1/p} \langle\mu_0P_{s},e_{q\nu}\rangle^{1/q}ds\le \frac{C_1}{\gamma T^{2}}\int_{0}^{T}\langle\mu_0,P_{s}|\tilde\psi_*|^p \rangle^{1/p} ds
\end{eqnarray}
for some constants $C,C_1$ independent of $T$.
The last inequality follows from \eqref{012011} and \eqref{010811a}. Since $|\tilde\psi_*|^p$ belongs to $C^1_p(V)$ we conclude from Corollaries \ref{cor011011}, \ref{cor011112} and condition \eqref{022011} that the right hand side of the above expression can be estimated by
$ C_2T/(\gamma T^{2})$, with $C_2$ a constant independent of $T$, which tends to $0$,
as $T\to+\infty$. Thus, part 1) follows.
\mbox{$\square$}
\subsection{Definition and basic properties of the corrector}
\label{cor-f}
We start with the following.
\begin{proposition}\label{lab21}
Functions
\begin{equation}
\label{chi-t}
\chi_{t}(w)=(\chi_{t}^{(1)}(w),\chi_{t}^{(2)}(w)):=\int_{0}^{t}P_s\tilde\psi_*(w) ds,\quad w\in H,
\end{equation}
converge uniformly on bounded sets, as $t\rightarrow\infty.$ For any $\nu\in(0,\nu_0]$ there is $C>0$ such that
\begin{equation}
\label{022211}
|\chi_{t}^{(i)}|\le Ce_{\nu},\quad\forall\,t\ge1,\,i=1,2.
\end{equation}
The limit
\begin{equation}
\label{chi}
\chi=(\chi^{(1)},\chi^{(2)}):=\lim_{t\to+\infty}\chi_{t}=\int_{0}^{+\infty}P_s\tilde\psi_*\,ds,
\end{equation} called a {\em corrector}, satisfies
\begin{equation}
\label{032211}
|\chi^{(i)}|\le Ce_{\nu},\quad i=1,2,
\end{equation}
with the same constant as in \eqref{022211}.
\end{proposition}
\noindent{\sc Proof}
As a consequence of Corollary \ref{cor011011} we conclude that the functions
$$
\int_{1}^{t}P_s\tilde\psi_*^{(i)}(w) ds,\quad t\ge1,\,i=1,2,
$$
are well defined on $H$ and converge uniformly on bounded sets. The convergence part of the proposition follows from the fact that there exists a constant $C>0$ such that for $\nu\in(0,\nu_0]$,
\begin{equation}
\label{022002}
\int_0^1\mathbb E\|\omega(s,w)\|^2ds\le Ce_{\nu}(w),\quad\forall\,w\in H,
\end{equation}
see \eqref{010712} below. This estimate, together with \eqref{032011}, implies both \eqref{022211} and \eqref{032211}.
\mbox{$\square$}
\begin{proposition}\label{lab21a}
One can choose $\nu_0>0$ in such a way that
$
\chi^{(i)}\in {\mathcal B}_{\nu}$ for any $\nu\in(0,\nu_0]$, $i=1,2$.
\end{proposition}
\noindent{\sc Proof}
Since $\tilde\psi_*^{(i)}\in C^1_1(V)$, $i=1,2$, from Corollary \ref{cor011011} we conclude that $P_t\tilde\psi_*^{(i)}\in {\mathcal B}_{\nu}$ for $t\ge1$ and there exists $\nu_0>0$ such that for any $\nu\in(0,\nu_0]$ one can find $C,\gamma>0$, for which
$$
\|P_t\tilde\psi_*^{(i)}\|_{\nu}\le C{\rm e}^{-\gamma t}\|\!|\tilde\psi_*^{(i)}\|\!|_{1},\quad\forall\, t\ge1,\,i=1,2.
$$
This guarantees that $\int_1^{+\infty}P_t\tilde\psi_*^{(i)}dt$ belongs to ${\mathcal B}_{\nu}$. Thanks to estimate \eqref{032211}
it suffices to show that
\begin{equation}
\label{031801}
\left|\int_0^{1}DP_t\psi_*^{(i)}(w)[\xi]dt\right|\le Ce_{\nu}(w),\quad\forall\,w,\xi\in H,\,|\xi|\le 1.
\end{equation}
To prove the above estimate note that
$$
\int_0^1 DP_t \psi_*^{(i)}(w)[\xi]dt=\mathbb E\left[ {\mathcal K}(\Xi(w))(0)\right],
$$
where $\Xi(w):= \int_0^1\eta(t;w)dt$ and $\eta(t;w):=D\omega(t;w)[\xi]$ is the derivative of $\omega(t;w)$ with respect to the initial data in the direction $\xi$. We have, from \eqref{H-s} for $s=1$, that there exists $C>0$ such that
$$
\left|{\mathcal K}(\Xi(w))(0)\right|\le C\|\Xi(w)\|,\quad\forall\,w\in H.
$$
Hence, from Proposition \ref{prop021801}, we conclude that for any $\nu>0$ there exists $C>0$ such that
$$
\left|\int_0^{1}DP_t\psi_*^{(i)}(w)[\xi]dt\right|^2\le |\xi|^2\mathbb E\exp\left\{\nu |\omega(1)|^2+\frac{\nu}{2e}\int_0^1\| \omega(s)\|^2ds\right\}
$$
and \eqref{031801} follows from estimate \eqref{010712} formulated below.
\mbox{$\square$}
\subsection{Proof of part 2)}
\label{sec5.3}
After a simple calculation we get
\begin{eqnarray*}
&&D_{ij}(T):=\frac{1}{T}\mathbb{E} \Big[\tilde x_i(T)\tilde x_j(T)\Big]=D_{ij}^1(T)+D_{ij}^2(T),
\end{eqnarray*}
with
\begin{eqnarray*}
&&D_{ij}^1(T):=\frac{1}{T}\int_{0}^{T}\!\! \left\langle \mu_0P_s, \tilde\psi_*^{(i)} \int_{0}^{T-s}P_t\tilde\psi_*^{(j)}\,dt\right\rangle ds,\\
&&
D_{ij}^2(T):=\frac{1}{T}\int_{0}^{T}\!\! \left\langle \mu_0P_s, \tilde\psi_*^{(j)} \int_{0}^{T-s}P_t\tilde\psi_*^{(i)}\,dt\right\rangle ds.
\end{eqnarray*}
It suffices only to deal with the limit of $D_{ij}^1(T)$, the other term can be handled in a similar way. We can write that
\begin{equation*}
\Big|D_{ij}^1(T)-\frac{1}{T}\int_{0}^{T}\! \!\! \left\langle \mu_0 P_{s} ,\tilde\psi_*^{(i)} \chi^{(j)}\right\rangle ds\Big|=\frac{1}{T}\left|\int_{0}^{T}\!\! \left\langle \mu_0P_s, \tilde\psi_*^{(i)}(\chi^{(j)}-\chi_{T-s}^{(j)})\right\rangle ds\right|=R_{ij}(T),
\end{equation*}
where
\begin{equation}
\label{012401}
R_{ij}(T):=\left|\int_{0}^{1}\!\! \left\langle \mu_0P_{sT}, \tilde\psi_*^{(i)}(\chi^{(j)}-\chi^{(j)}_{T(1-s)})\right\rangle ds\right|.
\end{equation}
\begin{lemma}
\label{lm012401}
We have
\begin{equation}
\label{012401c}
\lim_{T\to+\infty}R_{ij}(T)=0.
\end{equation}
\end{lemma}
\noindent{\sc Proof}
Suppose that $p$ is a positive even integer and $q$ is sufficiently close to $1$ so that $q\nu<\nu_0$ and $1/q=1-1/p$, where $\nu$ is as in \eqref{022211} and \eqref{032211}, while $\nu_0$ is such that \eqref{022011} is in force. Then, we can find a constant $C>0$ such that
\begin{equation}
\label{022401}
|\chi^{(j)}(w)-\chi^{(j)}_{T(1-s)}(w)|^q\le Ce_{\nu_0}(w),\quad\forall\,w\in H\quad\forall\,s\in[0,1],\,T>0.
\end{equation}
Using Proposition \ref{lab21} and \eqref{022011} we conclude that
$$
\lim_{T\to+\infty} \langle\mu_0P_{sT},|\chi^{(j)}-\chi^{(j)}_{T(1-s)}|^q\rangle=0,\quad\forall\,s\in [0, 1).
$$
Equality \eqref{012401c} can be concluded, provided we can justify passing to the limit, as $T\to+\infty$, under the integral appearing on the right hand side of \eqref{012401}.
Suppose first that the argument $s$ appearing in the integral satisfies $sT\ge 1$.
Using H\"older's inequality, in the same way as it was done in \eqref{012211}, and estimates \eqref{022211} and \eqref{032211} the expression under the integral can be estimated by
\begin{equation}
\label{052211}
\begin{aligned}
&
\langle\mu_0,P_{sT}|\tilde\psi_*^{(i)}|^p \rangle^{1/p} \langle\mu_0P_{sT},|\chi^{(j)}-\chi^{(j)}_{T(1-s)}|^q\rangle^{1/q} \\
&\qquad
\le \sup_{t\ge1}\langle\mu_0,P_{t}|\tilde\psi_*^{(i)}|^p \rangle^{1/p}\langle\mu_0P_{sT},|\chi^{(j)}-\chi^{(j)}_{T(1-s)}|^q\rangle^{1/q}.
\end{aligned}
\end{equation}
Since $|\tilde\psi_*|^p\in C^1_p(V)$ we have $\sup_{t\ge1}\langle\mu_0,P_{t}|\tilde\psi_*|^p \rangle<+\infty$, thanks to part 2) of Theorem \ref{T2}.
As a result the left hand side of \eqref{052211} is bounded for all $s\in[1/ T,1]$.
From the Lebesgue dominated convergence theorem we conclude therefore that
\begin{equation}
\label{012401a}
\lim_{T\to+\infty}\int_{1/T}^{1}\!\! \left\langle \mu_0P_{sT}, \tilde\psi_*^{(i)}(\chi^{(j)}-\chi^{(j)}_{T(1-s)})\right\rangle ds=0.
\end{equation}
Next we shall prove that there exists $C>0$
such that
\begin{equation}
\label{052401}
\left|\int_{0}^{1/T}\!\! \left\langle \mu_0P_{sT}, \tilde\psi^{(i)}_*(\chi^{(j)}-\chi^{(j)}_{T(1-s)})\right\rangle ds\right|\le \frac{C}{T},
\end{equation}
provided that $T\ge 1$.
Indeed, using first the Cauchy--Schwarz inequality and then \eqref{022211} and \eqref{032211}, we get that the left hand side can be estimated by
\begin{eqnarray*}
&&
C\mathbb E\left\{\left\{ \int_{0}^{1/T}|\tilde\psi_*(\omega(sT))|^2ds\right\}^{1/2}\left\{\int_0^{1/T}e_{2\nu}(\omega(sT)) ds\right\}^{1/2}\right\}.
\end{eqnarray*}
Applying H\"older's inequality with $q\in(1,2)$ and $1/p=1-1/q$ we get that this expression can be estimated by
\begin{eqnarray*}
&&
C \left\{\mathbb E\left\{ \int_{0}^{1/T}|\tilde\psi_*(\omega(sT))|^2ds\right\}^{p/2}\right\}^{1/p}\left\{\mathbb E\left\{\int_0^{1/T}e_{2\nu}(\omega(sT)) ds\right\}^{q/2}\right\}^{1/q}\\
&&
\le C_1\left\{\mathbb E\left\{ \frac1T \int_{0}^{1}\|\omega(s)\|^2ds\right\}^{p/2}\right\}^{1/p}\left\{\mathbb E\left\{\int_0^{1/T}e_{2\nu}(\omega(sT)) ds\right\}\right\}^{1/2}
\\
&&
\le
\frac{C_2}{T}\left\{\mathbb E\exp\left\{ \nu \int_{0}^{1}\|\omega(s)\|^2ds\right\}\right\}^{1/p}\le \frac{C_3}{T},
\end{eqnarray*}
provided $2\nu<\nu_0$. The penultimate inequality follows from \eqref{012011} and assumption \eqref{022011}, while the last estimate is a consequence of \eqref{010712-00} stated below. Thus, \eqref{052401} follows.
\mbox{$\square$}
We are left therefore with the problem of finding
the limit of
\begin{equation}
\label{011012}
S_{ij}(T)=\frac{1}{T}\int_{0}^{T}\! \!\! \left\langle \mu_0 P_{s} ,\tilde\psi_*^{(i)} \chi^{(j)}\right\rangle ds
\end{equation}
as $T\to+\infty$. Let $R\ge1$ be fixed and $\varphi_R\colon {\@Bbb R}\to{\@Bbb R}$ be a smooth mapping such that $\varphi_R(x)=1$ for $|x|\le R$ and $\varphi_R(x)=0$ for $|x|\ge R+1$. Observe that
$$
\hat\chi^{(R)}(w):= \chi^{(j)}(w)\varphi_R(|w|^2)
$$ belongs to $C^1_b(H)$, and thus also to $C^1_b(V)$. Therefore,
$\tilde\psi_*^{(i)}\hat \chi^{(R)}\in C^1_1(V)$. Denote by $S^{(R)}(T)$ the expression in \eqref{011012} with $\chi^{(j)}$ replaced by $\hat\chi^{(R)}$.
Using the same argument as in the proof of Lemma \ref{lm012401} one can show that for any $\varepsilon >0$ there exist a sufficiently large $R\ge 1$ and $T_0>0$ such that
$$
\left|\frac{1}{T}\int_{0}^{T}\! \!\! \left\langle \mu_0 P_{s} ,\tilde\psi_*^{(i)} (\chi^{(j)}-\hat\chi^{(R)})\right\rangle ds\right|<\frac{\varepsilon}{2},\quad\forall\,T\ge T_0.
$$
Likewise, enlarging $R\ge 1$ if necessary, we can guarantee that
$$
\left|\left\langle \mu_* ,\tilde\psi_*^{(i)} (\chi^{(j)}-\hat\chi^{(R)})\right\rangle \right|<\frac{\varepsilon}{2}.
$$
By Corollary \ref{cor011011} we have
$$
\|P_t(\tilde\psi_*^{(i)} \hat\chi^{(R)})-\langle\mu_*, \tilde\psi_*^{(i)}\hat \chi^{(R)}\rangle\|_{\nu}\le Ce^{-\gamma t}\|\!|\tilde\psi_*^{(i)} \hat\chi^{(R)}\|\!|_{2},\quad \forall\,t\ge0.
$$
In consequence we conclude that
$$
\lim_{T\to+\infty}S^{(R)}(T)=\langle\mu_*, \tilde\psi_*^{(i)}\hat \chi^{(R)}\rangle.
$$
Hence,
\begin{eqnarray*}
&&\limsup_{T\to+\infty}|S_{ij}(T)-\langle\mu_*, \tilde\psi_*^{(i)} \chi^{(j)}\rangle|\\
&&\qquad
\le \limsup_{T\to+\infty}|S_{ij}(T)-S^{(R)}(T)|+|\langle\mu_*, \tilde\psi_*^{(i)}\hat \chi^{(R)}\rangle-\langle\mu_*, \tilde\psi_*^{(i)} \chi^{(j)}\rangle|<\varepsilon.
\end{eqnarray*}
This proves that
$$
\lim_{T\to+\infty}S_{ij}(T)=\langle\mu_*, \tilde\psi_*^{(i)} \chi^{(j)}\rangle.
$$
We have therefore shown
part 2) of the theorem with
\begin{equation}
\label{052002}
\lim_{T\to+\infty}D_{ij}(T)=\langle\mu_*, \tilde\psi_*^{(i)} \chi^{(j)}\rangle+\langle\mu_*, \tilde\psi_*^{(j)} \chi^{(i)}\rangle.\qquad \mbox{$\square$}
\end{equation}
\subsection{Proof of part 3)}
\label{sec5.4}
\subsubsection{Reduction to the central limit theorem for martingales}
\label{mart-decomp}
Note that
\begin{eqnarray}
\label{decomp}
&&\frac{1}{\sqrt{T}}\int_{0}^{T}\tilde\psi_*(\omega(s))\,ds
=\frac{1}{\sqrt{T}}M_{T}+R_{T},
\end{eqnarray}
where
\begin{eqnarray}
\label{mart}
M_{T}:=\chi(\omega(T))-\chi(\omega(0))+\int_{0}^{T}\tilde\psi_*(\omega(s))\,ds
\end{eqnarray}
and
$$
R_{T}:= \frac{1}{\sqrt{T}}\left[\chi(\omega(0))-\chi(\omega(T))\right].
$$
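Indeed, \eqref{decomp} can be verified directly: the terms involving $\chi$ cancel,
$$
\frac{M_T}{\sqrt{T}}+R_T=\frac{1}{\sqrt{T}}\left[\chi(\omega(T))-\chi(\omega(0))+\int_0^T\tilde\psi_*(\omega(s))\,ds\right]+\frac{1}{\sqrt{T}}\left[\chi(\omega(0))-\chi(\omega(T))\right]=\frac{1}{\sqrt{T}}\int_{0}^{T}\tilde\psi_*(\omega(s))\,ds.
$$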
\begin{proposition}\label{lab20}
The process
$\{
M_{T},\,T\ge0\}
$ is a square integrable, two-dimensional vector martingale with respect to the filtration $\{{\mathcal F}_T, T\ge 0\}.$ Moreover, the random vectors $R_{T}$ converge to $0$, as $T\to+\infty$, in the $L^{1}$-sense.
\end{proposition}
The proof of this result is quite standard and can be found in \cite{kowalczuk}, see Proposition 5.2 and Lemma 5.3.
\subsubsection{Central limit theorem for martingales}
Assume that $\{{\mathcal M}_n,\,n\ge0\}$ is a zero mean martingale subordinated to a filtration $\{{\frak F}_{n},\,n\ge0\}$ and
${\mathcal Z}_n:={\mathcal M}_n-{\mathcal M}_{n-1}$ for $n\ge1$, is the respective sequence of martingale differences. Recall that the quadratic variation of the martingale is defined
as
$$
\langle {\mathcal M}\rangle_{n}=\sum_{j=1}^{n}\mathbb E\left[{\mathcal Z}_j^2|{\frak F}_{j-1}\right],\quad n\ge1.
$$
The following theorem has been shown in \cite{kowalczuk}, see Theorem 4.1.
\begin{theorem}\label{lab30b}
Suppose that
\begin{itemize}
\item[M1)]
\begin{equation}
\label{sublinear}
\sup_{n\ge1}\mathbb E {\mathcal Z}_n^2<+\infty,
\end{equation}
\item[M2)] \label{lab310} for every $\varepsilon >0$,
$$
\lim_{N\to+\infty}\frac {1}{N}\sum_{j=0}^{N-1} \mathbb{E}\Big[ {\mathcal Z}_{j+1}^2, \,
|{\mathcal Z}_{j+1} | \ge \varepsilon\sqrt{N} \Big]=0,
$$
\item[M3)]
there exists $\sigma\ge0$ such that
\begin{equation}
\label{010212}
\lim_{K\rightarrow\infty}\limsup_{\ell\rightarrow\infty}\frac{1}{\ell}\sum_{m=1}^{\ell}\mathbb{E}\Big|\frac{1}{K}
\mathbb E\left[\langle {\mathcal M}\rangle_{mK}-\langle {\mathcal M}\rangle_{(m-1)K}\Big|{\frak F}_{(m-1)K}\right] -\sigma^{2}\Big|=0,
\end{equation}
\item[M4)] for every $\varepsilon >0$
\begin{equation}
\label{m3}
\lim_{K\rightarrow\infty}\limsup_{\ell\rightarrow\infty}\frac{1}{\ell K}\sum_{m=1}^{\ell}\sum_{j=(m-1)K}^{mK-1}\mathbb{E}[1+{\mathcal Z}_{j+1}^2,\, |{\mathcal M}_{j}-{\mathcal M}_{(m-1)K}|\geq\varepsilon\sqrt{\ell K}]=0.
\end{equation}
\end{itemize}
Then,
\begin{equation}
\label{M2b}
\lim_{N\to+\infty}\frac{\mathbb{E}\langle {\mathcal M}\rangle_{N}}{N}
=\sigma^{2}
\end{equation} and
\begin{equation}
\label{052501}
\lim_{N\to\infty} \, \mathbb{E} e^{i \theta
{\mathcal M}_N/\sqrt N} = e^{-\sigma^2 \theta^2/2},\quad\forall\,\theta\in{\@Bbb R}.
\end{equation}
\end{theorem}
\subsubsection{Proof of the central limit theorem for $M_T/\sqrt{T}$}
We prove that $M_n/\sqrt{n}$, where $n\ge1$ is an integer, converges in law to a Gaussian random vector, as $n\to+\infty$. This suffices to conclude that
in fact $M_T/\sqrt{T}$ satisfies the central limit theorem. Indeed, let $Z_n:=M_n-M_{n-1}$ for $n\ge1$.
Note that
\begin{equation}
\label{vanish}
\lim_{N\rightarrow\infty}\sup_{T\in[N,N+1)}|M_T/\sqrt{T}-M_N/\sqrt{N}|=0,\quad{\P}-{\rm a.s.}
\end{equation}
For a given $\varepsilon_N>0$ we let
$$
A_N:=[\sup_{T\in[N,N+1)}|M_T/\sqrt{T}-M_N/\sqrt{N}|\geq\varepsilon_N].
$$
We have
\begin{eqnarray*}
\mathbb{P}[A_N]&\le& \mathbb{P}[\sup_{T\in[N,N+1)}|M_T-M_N|\ge \varepsilon_N\sqrt{N}/2]
+\mathbb{P}[|M_N|[N^{-1/2}-(N+1)^{-1/2}]\ge \varepsilon_N/2]\\
&\le& \frac{C}{N^2\varepsilon^4_N}\mathbb E|Z_{N+1}|^4+\frac{C}{N^3\varepsilon^2_N}\sum_{j=1}^N\mathbb E|Z_j|^2.
\end{eqnarray*}
The last inequality follows from the Doob and Chebyshev estimates and the elementary inequality $N^{-1/2}-(N+1)^{-1/2}\le CN^{-3/2}$ that holds for all $N\ge1$ and some constant $C>0$. We denote the first and second terms on the right hand side by $I_N$ and $I\!I_N$, respectively. We claim that there exists $C>0$ such that
\begin{equation}
\label{012311}
\mathbb E |Z_{N+1}|^4\le C,\quad\forall\,N\ge0.
\end{equation}
Indeed, we have
\begin{eqnarray*}
&&
\mathbb E |Z_{N+1}|^4 \le C\left\{\mathbb E| \chi(\omega(N+1))|^4+\mathbb E|\chi(\omega(N))|^4+\mathbb E\left|\int_N^{N+1}\tilde\psi_*(\omega(s))ds\right|^4\right\}.
\end{eqnarray*}
To estimate the first two terms appearing on the right hand side we use \eqref{032211} and then subsequently \eqref{010811a}. We conclude that all these terms can be estimated by a constant independent of $N$. The last expectation can be estimated using \eqref{H-s} by
$$
C\mathbb E\left[\int_N^{N+1}\|\omega(s)\|^2ds\right]^2=C\left\langle \mu_0 P_N,\mathbb E\left[\int_0^{1}\|\omega(s;\cdot)\|^2ds\right]^2\right\rangle.
$$ Applying \eqref{010712} and then again \eqref{010811a} we obtain that also this term can be estimated independently of $N$.
Hence
$$
I_N\le \frac{C}{N^2\varepsilon^4_N}.
$$
On the other hand, from \eqref{012311} we conclude also that for some constants $C,C_1>0$ independent of $N$ we have
$$
I\!I_N= \frac{C}{N^3\varepsilon^2_N}\sum_{k=1}^N\mathbb E |Z_k|^2\le \frac{C_1}{N^2\varepsilon^2_N}.
$$
Choosing $\varepsilon_N$ tending to $0$ sufficiently slowly we can guarantee that
$$
\sum_{N\ge1}\mathbb{P}[A_N]<+\infty,
$$
and \eqref{vanish} follows from an application of the Borel--Cantelli lemma.
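The Doob step used above can be spelled out as follows: since $\{M_T-M_N,\,T\in[N,N+1]\}$ is a martingale, $|M_T-M_N|^4$ is a submartingale and, by Doob's maximal inequality,
$$
\mathbb{P}\Big[\sup_{T\in[N,N+1)}|M_T-M_N|\ge \frac{\varepsilon_N\sqrt{N}}{2}\Big]\le \left(\frac{2}{\varepsilon_N\sqrt{N}}\right)^4\mathbb E|M_{N+1}-M_N|^4=\frac{16\,\mathbb E|Z_{N+1}|^4}{N^2\varepsilon_N^4},
$$
which accounts for the term $I_N$.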
Choose $a\in{\@Bbb R}^2$ and let
$
{\mathcal M}_n:=M_n\cdot a.
$
Condition M1) obviously holds in light of \eqref{012311}. Condition M2) also easily follows from \eqref{012311} and the Chebyshev inequality.
Before verifying hypothesis M3) let us introduce
some additional notation. For a given probability measure $\mu$ on $H$ and a Borel event $A$ write
$$
{\P}_{\mu}[A]:=\int_H{\P}[A|\omega(0)=w]\mu(dw).
$$
The respective expectation shall be denoted by $\mathbb E_\mu$. We write ${\P}_w$ and $\mathbb E_w$ in case of $\mu=\delta_w$. We can write that
\begin{eqnarray*}
&&
\frac{1}{K} \mathbb E\left[\langle{\mathcal M}\rangle_{mK}-\langle {\mathcal M}\rangle_{(m-1)K}\Big|{\frak F}_{(m-1)K}\right] =\frac{1}{K}\sum_{j=0}^{K-1}P_{j}\Psi(\omega((m-1)K))
\end{eqnarray*}
with
$
\Psi(w):=\mathbb E_w {\mathcal M}_1^2.
$
Set
$\sigma^2:=\langle \mu_*,\Psi\rangle.$ Let also $
\tilde \Psi(w):=\Psi(w)-\sigma^2,
$
$$
S_K(w):=\frac{1}{K}
\sum_{j=0}^{K-1}P_{j}\tilde\Psi(w)
$$
and
$$
\tilde S_K(w):=|S_K(w)|-\langle \mu_*,|S_K|\rangle,\quad\,w\in H.
$$
We can rewrite the expression under the limit in \eqref{010212} as being equal to
\begin{equation}
\label{010212a}
\frac{1}{\ell}\sum_{m=1}^{\ell}\mathbb{E}\Big|\frac{1}{K}
\sum_{j=0}^{K-1}P_{j}\tilde\Psi(\omega((m-1)K))\Big|= \langle \mu_0Q_\ell^K, |S_K|\rangle,
\end{equation}
where
$$
Q_\ell^K:=\frac{1}{\ell}\sum_{m=1}^{\ell} P_{(m-1)K}.
$$
Writing $|S_K|=\tilde S_K+\langle \mu_*,|S_K|\rangle$, we see that the right hand side of \eqref{010212a} equals $\langle \mu_0Q_\ell^K,\tilde S_K\rangle+\langle\mu_*,|S_K|\rangle$; the second term does not contribute to the limit in hypothesis M3).
We prove that
\begin{equation}
\label{010212b}
\lim_{\ell\to+\infty}\langle \mu_0Q_\ell^K, \tilde S_K\rangle =0.
\end{equation}
Then M3) shall follow upon subsequent applications of \eqref{010212b}, as $\ell\to+\infty$, and Birkhoff's individual ergodic theorem, as $K\to+\infty$.
To prove \eqref{010212b} it suffices to show that the function
$S_K(\cdot)$ is continuous on $H$
and for any $K$ fixed there exists a constant $C>0$ such that
\begin{equation}
\label{030212}
|S_K(w)|\le Ce_{\nu}(w),\quad \forall\,w\in H.
\end{equation}
Equality \eqref{010212b} is then a consequence of the fact that
measures
$
\mu_0Q_\ell^K
$ converge weakly to $\mu_*$ as $\ell\to+\infty$, and
estimate \eqref{012011}.
Continuity of $S_K(\cdot)$ follows from the fact that $\tilde\Psi\in{\mathcal B}_{\nu}$. On the other hand, estimate \eqref{030212} follows from the fact that
for any $j\ge 1$ fixed there exists a constant $C>0$ such that
\begin{equation}
\label{030212a}
P_{j}\Psi(w)\le Ce_{\nu}(w),\quad \,w\in H.
\end{equation}
The last estimate can be seen as follows
\begin{eqnarray}
\label{marta}
&&
\Psi(w)\le |a|^2\mathbb E_w|M_{1}|^2=|a|^2\sum_{i=1}^2\left\{P_1[\chi^{(i)}]^2(w)+[\chi^{(i)}(w)]^2+2\int_{0}^{1}P_s(\tilde\psi_*^{(i)}P_{1-s}\chi^{(i)})(w)\,ds\right.\nonumber\\
&&
\\
&&
\left.+2\int_{0}^{1}ds\int_0^sP_{s'}(\tilde\psi_*^{(i)}P_{s-s'}\tilde\psi_*^{(i)})(w)\,ds'+2(\chi^{(i)}P_1\chi^{(i)})(w)+2\chi^{(i)}(w)\int_{0}^{1}P_s\tilde\psi_*^{(i)}(w)\,ds\right\}.\nonumber
\end{eqnarray}
Using estimates \eqref{012011} and \eqref{032211} we
conclude that for any $\nu>0$ there exists a constant $C>0$ such that
$$
\Psi(w)\le Ce_{\nu}(w),\quad \forall\,w\in H.
$$
Hence, using again \eqref{012011}, we conclude \eqref{030212a}. This ends the proof of hypothesis M3).
Finally we verify condition M4). For that purpose it suffices to prove that
\begin{eqnarray*}
\lim_{K\rightarrow +\infty}\limsup_{\ell\rightarrow +\infty}\frac 1K\sum_{j=0}^{K-1}\langle \mu_{0}Q_{\ell}^K, G_{\ell,j}\rangle=0,
\end{eqnarray*}
where
$$
G_{\ell,j}(w):= \mathbb{E}_w\left[1+|Z_{j+1}|^2,|M_{j}|\geq\varepsilon\sqrt{\ell K}\right].
$$
The latter follows if we show that
\begin{eqnarray}
\label{G-j}
\limsup_{\ell\rightarrow+\infty}\langle \mu_{0}Q_{\ell}^K, G_{\ell,j}\rangle=0,\quad\forall\,j=0,\ldots,K-1.
\end{eqnarray}
From the Markov inequality we obtain
$$
\mathbb{P}_w\Big[|M_{j}|\geq \varepsilon\sqrt{\ell K}\Big]\leq \frac{\mathbb{E}_{w}|M_{j}|}{\varepsilon\sqrt{\ell K}}
\le I_1+I_2,
$$
where
$$
I_1:=
\frac{1}{\varepsilon\sqrt{\ell K}}\sum_{i=1}^2\mathbb{E}_{w}|\chi^{(i)}(\omega(j))-\chi^{(i)}(w)|
$$
and
$$
I_2:=\frac{1}{\varepsilon\sqrt{\ell K}}\sum_{i=1}^2\mathbb{E}_{w}\left|\int_0^j\tilde\psi_*^{(i)}(\omega(s))ds\right|.
$$
Using \eqref{032211}
we conclude that
$$
I_1\leq \frac{C_1e_{\nu}(w)}{\varepsilon\sqrt{\ell K}}.
$$
On the other hand, we have
$$
I_2\le \frac{C_2}{\varepsilon\sqrt{\ell K}}\mathbb E_w\int_0^j\|\omega(s)\|ds
$$
and from \eqref{010712-00} we get that
$$
I_2\leq \frac{C_3e_{\nu}(w)}{\varepsilon\sqrt{\ell K}}.
$$
Summarizing, we have shown that
for any $R>0$,
\begin{eqnarray}
\label{012201-2011}
\sup_{|w|\le R}\mathbb{P}_{w}\Big[|M_{j}|\geq \varepsilon\sqrt{\ell K}\Big]\leq \frac{C}{\sqrt{\ell K}}.
\end{eqnarray}
Furthermore,
\begin{eqnarray}
\label{022611}
&&\sup_{|w|\le R}\mathbb{E}_{w}\left[|Z_{j+1}|^2,| M_{j}|\geq\varepsilon\sqrt{\ell K}\right]\\
&&
\leq\,
2\sum_{i=1}^2\left\{\sup_{|w|\le R}\mathbb{E}_{w}\left\{\left[\chi^{(i)}(\omega(j+1))-\chi^{(i)}(\omega(j))\right]^2,| M_{j}|\geq\varepsilon\sqrt{\ell K}\right\}\right.\nonumber\\
&&+
\left.\sup_{|w|\le R}\mathbb{E}_{w}\left\{\Big[\int^{j+1}_{j}\tilde\psi_*^{(i)} (\omega(s))ds\Big]^{2},| M_{j}|\geq\varepsilon\sqrt{\ell K}\right\}\right\}\nonumber
\\
&&
\le C\sup_{t\in[0,K]}\sup_{|w|\le R}\mathbb{E}_{w}\left[e_\nu(\omega(t)),| M_{j}|\geq\varepsilon\sqrt{\ell K}\right]\nonumber
\end{eqnarray}
for some constant $C$ independent of $\ell$. The above argument shows that
$$
\lim_{\ell\to+\infty}\sup_{|w|\le R}|G_{\ell,j}(w)|= 0.
$$
To obtain \eqref{G-j} it suffices to prove that for $\delta>0$ as in H3) we have
\begin{equation}
\label{032201-2011}
\limsup_{\ell\rightarrow+\infty}\langle \mu_{0}Q_{\ell}^K, G_{\ell,j}^{1+\delta/2} \rangle<+\infty,\quad\forall\,K\ge1,
\,0\le j\le K-1.
\end{equation}
Note that
\begin{eqnarray}
\langle \mu_{0}Q_{\ell}^K, G_{\ell,j}^{1+\delta/2} \rangle\le\mathbb{E}_{\mu_{0}Q_{\ell}^K}(1+|Z_{j+1}|^2 )^{1+\delta/2}.\label{lab48}
\end{eqnarray}
This however is a consequence of \eqref{012011}. Thus condition M4) follows.
Summarizing, we have shown that
$$
\lim_{N\to+\infty}\mathbb E\exp\left\{\frac{ia\cdot M_N}{\sqrt{N}}\right\}=\exp\left\{-\frac{1}{2}\sum_{i,j=1}^2D_{ij}a_ia_j\right\},
$$
where
$$
D_{ij}:=\left\langle\mu_*,\mathbb E\left\{\prod_{p=i,j}\left[\chi^{(p)}(\omega(1;w))-\chi^{(p)}(w)+\int_{0}^{1}\tilde\psi_*^{(p)}(\omega(s;w))\,ds\right]\right\}\right\rangle.
$$
After a somewhat lengthy, but straightforward calculation, using stationarity of $\mu_*$ and the fact that
$$
\left\langle\mu_*,\left[P_s\chi^{(i)}-\chi^{(i)}+\int_{0}^{s}P_{s'}\tilde\psi_*^{(i)}\,ds'\right]\tilde\psi_*^{(j)}\right\rangle=0,\quad\forall\,s\ge0
$$
we conclude that
$D_{ij}$ coincides with the expression on the right hand side of \eqref{052002}. \mbox{$\square$}
\section{Proof of the results from section \ref{sec3}}
\label{sec6}
\subsection{Proof of Theorem \ref{T1}}
Part 3) is a direct consequence of parts 1) and 2).
\subsubsection{Proof of part 1)}
Let $\omega(t):=\omega(t;w)$.
From \eqref{010712} we conclude that for $\nu\in(0,\nu_0]$, where $\nu_0=1/(4\|Q\|)$, there exists a constant $C>0$ such that
\begin{equation}
\label{010712a}
\mathbb E\exp\left\{\nu|\omega(n+1)|^2\right\}\le C\mathbb E\exp\left\{q\nu|\omega(n)|^2\right\},\quad\forall\,n\ge 0.
\end{equation}
Here $q=e^{-1/2}$. The right hand side can be further estimated using Jensen's inequality:
$$
C\mathbb E\exp\left\{q\nu|\omega(n)|^2\right\}\le C\left(\mathbb E\exp\left\{\nu|\omega(n)|^2\right\}\right)^q\le C^{1+q}\left(\mathbb E\exp\left\{q\nu|\omega(n-1)|^2\right\}\right)^q.
$$
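The Jensen step relies on the concavity of $x\mapsto x^{q}$ for $q\in(0,1)$: for any nonnegative random variable $Y$,
$$
\mathbb E\, Y^{q}\le\left(\mathbb E\, Y\right)^{q},
$$
applied to $Y=\exp\left\{\nu|\omega(n)|^2\right\}$, so that
$$
\mathbb E\exp\left\{q\nu|\omega(n)|^2\right\}=\mathbb E\left[\left(\exp\left\{\nu|\omega(n)|^2\right\}\right)^{q}\right]\le\left(\mathbb E\exp\left\{\nu|\omega(n)|^2\right\}\right)^{q}.
$$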
Iterating this procedure we conclude that for any $n\ge 0$
\begin{equation}
\label{010712b}
\begin{aligned}
\mathbb E\exp\left\{\nu|\omega(n+1)|^2\right\}&\le C^{1+q+\ldots+q^n}\left(\exp\left\{\nu|\omega(0)|^2\right\}\right)^{q^{n+1}}
\\
&\le C^{1/(1-q)}\exp\left\{\nu|w|^2\right\}.
\end{aligned}
\end{equation}
Therefore (cf. part 3) of Lemma A.1 of \cite{HM1}) we have the following.
\begin{lemma}
\label{lm011112}
There exists a constant $C>0$ such that
\begin{equation}
\label{010712c}
\mathbb E\exp\left\{\nu|\omega(t;w)|^2\right\}\le C\exp\left\{\nu|w|^2\right\},\quad\forall\,t\ge0,\,\nu\in(0,\nu_0],\,w\in H.
\end{equation}
\end{lemma}
The above lemma obviously implies \eqref{012011}.
\subsubsection{A stability result of Hairer and Mattingly}
In our proof we use Theorems 3.4 and 3.6 of \cite{HM1}, which we recall below.
Suppose that $({\mathcal H},|\cdot|)$ is a separable Hilbert space and $\Phi_t\colon {\mathcal H}\times\Omega \to{\mathcal H}$, $t\ge0$, is a stochastic flow, i.e. a family of $C^1$-class random mappings of ${\mathcal H}$ defined over a probability space $(\Omega,{\mathcal F},{\P})$ that satisfies $\Phi_t(\Phi_s(x;\omega);\omega)=\Phi_{t+s}(x;\omega)$ for all $t,s\ge0$, $x\in{\mathcal H}$ and ${\P}$-a.s. $\omega\in
\Omega$.
We assume that $P_t$ and $P_t(x,\cdot)$, $x\in {\mathcal H}$, are the transition semigroup and the family of transition probabilities corresponding to the flow, i.e.
$$
P_t \phi(x)=\int \phi(y)P_t(x,dy)=\mathbb E \phi(\Phi_t(x)),\quad\forall\phi\in B({\mathcal H}), \,x\in{\mathcal H}.
$$
Here $B({\mathcal H})$ is the space of Borel and bounded functions on ${\mathcal H}$. The dual semigroup acting on a Borel probability measure $\mu$ shall be
denoted by $\mu P_t$.
We adopt the following hypotheses on the flow.
{\bf Assumption 1.} There exists a measurable function $V\colon {\mathcal H}\to[1,+\infty)$ and two increasing continuous functions $V_*,V^*\colon [0,+\infty)\to[1,+\infty)$ that satisfy
\begin{itemize}
\item[1)]
$$
V_*(|x|)\le V(x)\le V^*(|x|),\quad \forall\,x\in{\mathcal H},
$$
and $\lim_{a\to+\infty}V_*(a)=+\infty$,
\item[2)] there exist $C>0$ and $\kappa_1>1$ such that
$$
aV^*(a)\le CV_*^{\kappa_1}(a),\quad\forall\,a\ge0,
$$
\item[3)] there exist $\kappa_0<1$, $C>0$ and a decreasing function $\alpha\colon [0,1]\to[0,1]$ with $\alpha(1)<1$ such that
$$
\mathbb E\left[V^{\kappa}(\Phi_t(x))\left(1+|D\Phi_t(x)[h]|\right)\right]\le CV^{\alpha(t)\kappa}(x),\quad\forall\,x,h\in {\mathcal H},\,|h|=1,
$$
and $t\in[0,1]$, $\kappa\in[\kappa_0,\kappa_1]$.
Here $D\Phi_t(x)[h]$ denotes the Fr\'echet derivative at $x$ in the direction $h$.
\end{itemize}
{\bf Assumption 2.} There exist $C>0$ and $\kappa_2\in[0,1)$ such that for any $\varepsilon \in(0,1)$ one can find $C(\varepsilon),T(\varepsilon)>0$,
for which
\begin{equation}
\label{E26}
|DP_t\phi(x)|\le CV^{\kappa_2}(x)\left\{C(\varepsilon)\left[P_t(|\phi|^2)(x)\right]^{1/2}+\varepsilon \left[P_t(|D\phi|^2)(x)\right]^{1/2}\right\},
\end{equation}
for all $x\in{\mathcal H}$, $t\ge T(\varepsilon)$.
Introduce now the following family of metrics on ${\mathcal H}$. For $\kappa\ge0$ and $x,y\in {\mathcal H}$ we let
$$
{\rm d}_\kappa(x,y):=\inf_{c\in \Pi(x,y)}\int_0^1V^{\kappa}(c(t))|\dot c(t)|dt,
$$
where the infimum extends over the set $\Pi(x,y)$ consisting of all $C^1$ regular paths $c\colon [0,1]\to{\mathcal H}$ such that $c(0)=x$, $c(1)=y$. In the special case of $\kappa=1$ we set ${\rm d}={\rm d}_1$.
For two Borel probability measures $\mu_1,\mu_2$ on ${\mathcal H}$ denote by ${\mathcal C}(\mu_1,\mu_2)$ the family of all Borel measures on ${\mathcal H}\times{\mathcal H}$ whose marginals on the first and second coordinate coincide with $\mu_1,\mu_2$ respectively. We denote also by
$$
{\rm d}(\mu_1 ,\mu_2 ):=\sup\left[\left|\langle \mu_1,\phi\rangle-\langle \mu_2,\phi\rangle\right|\colon {\rm Lip}(\phi)\le 1\right].
$$
Here ${\rm Lip}(\phi)$ is the Lipschitz constant of $\phi\colon {\mathcal H}\to{\@Bbb R}$ in the metric ${\rm d}(\cdot,\cdot)$. By ${\mathcal P}_1({\mathcal H},{\rm d})$ we denote the space of all Borel, probability measures $\mu$ on ${\mathcal H}$ satisfying $\int_{\mathcal H} {\rm d}(x,0)\mu(dx)<+\infty$.
Let $A\subset {\mathcal H}\times{\mathcal H}$ be Borel measurable. For a given $t\ge0$ and $x,y\in{\mathcal H}$ denote
$$
{\mathcal P}_t(x,y;A)=\sup\left[\mu[A]\colon \mu\in {\mathcal C}(P_t(x,\cdot),P_t(y,\cdot))\right].
$$
{\bf Assumption 3.} Given any $\kappa\in(0,1)$ and $\delta,R>0$ there exists $T_0>0$ such that for any $T\ge T_0$ there exists $a>0$ for which
$$
\inf_{|x|,|y|\le R}{\mathcal P}_T(x,y;\Delta_{\delta,\kappa})\ge a.
$$
Here,
$$
\Delta_{\delta,\kappa}:=[(x,y)\in {\mathcal H}\times{\mathcal H}\colon {\rm d}_\kappa(x,y)<\delta],\quad \forall\,\kappa,\delta>0.
$$
\begin{theorem}
\label{hm-result}
Suppose that Assumptions 1, 2, 3 stated above are in force. Then the following are true:
\begin{itemize}
\item[1)]
there exist $C,\gamma>0$ such that
\begin{equation}
\label{010812}
{\rm d}(\mu_1 P_t,\mu_2 P_t)\le C{\rm e}^{-\gamma t}{\rm d}(\mu_1,\mu_2),\quad \forall \mu_1,\mu_2\in {\mathcal P}_1({\mathcal H},{\rm d}),
\end{equation}
\item[2)]
there exists a unique probability measure $\mu_*\in {\mathcal P}_1({\mathcal H},{\rm d})$ invariant under $\{P_t,\,t\ge0\}$, i.e. $\mu_*=\mu_* P_t$ for all $t\ge0$,
\item[3)] we have
\begin{equation}
\label{020812}
\| P_t\phi-\langle\mu_*,\phi\rangle\|_{\rm Lip}\le C{\rm e}^{-\gamma t}\| \phi-\langle\mu_*,\phi\rangle\|_{\rm Lip},\quad \forall\,\phi\in C^1({\mathcal H}),\,t\ge0.
\end{equation}
Here
$$
\|\phi\|_{\rm Lip}:=\sup_{x\not=y}\frac{|\phi(x)-\phi(y)|}{{\rm d}(x,y)}+|\langle \mu_*,\phi\rangle|.
$$
\end{itemize}
\end{theorem}
\subsubsection{Proof of part 2)}
\subsection*{Verification of Assumption 1}
Denote $\Phi_t(w;W):=\omega(t;w,W)$, where $W$ is the cylindrical Wiener process appearing in \eqref{E25a}. Let
\begin{equation}
\label{041912}
\xi(t;w,\xi):=D\Phi_t(w)[\xi], \quad\xi\in H.
\end{equation}
In what follows we suppress $w$ and $\xi$ in our notation when their values are obvious from the context.
Define
$V(w):=V_*(|w|)=V^*(|w|)={\rm e}^{\nu |w|^2}$. Assumption 1 of Theorem \ref{hm-result} is a consequence of the result below and estimate
\eqref{010712} shown in the Appendix \ref{secApB}.
\begin{proposition}
\label{prop021801}
For any $\nu>0$ there exists $C>0$ such that
\begin{equation}
\label{051801}
|\xi(t)|\le |\xi|\exp\left\{\nu\int_0^t\| \omega(s)\|^2ds+Ct\right\}
\end{equation}
and
$$
\left\{ \int_0^t\|\xi(s)\|^2ds\right\}^{1/2}\le |\xi|\exp\left\{\nu\int_0^t\| \omega(s)\|^2ds+Ct\right\},\quad\forall\,t\ge0,\quad{\P}-{\rm a.s.}
$$
\end{proposition}
\noindent{\sc Proof}
Note that $\xi(t)$ satisfies a (non-stochastic) equation
\begin{eqnarray}\label{E25-1}
&&
\partial_t \xi(t) =\Delta\xi(t) -\eta(t)\cdot\nabla \xi(t)-{\mathcal K}(\xi(t))\cdot\nabla \omega(t)\\
&&
+\eta(t,0)\cdot\nabla \xi(t)+{\mathcal K}(\xi(t))(0)\cdot\nabla \omega(t), \qquad \xi(0)=\xi\in H. \nonumber
\end{eqnarray}
Hence,
$$
\partial_t |\xi(t)|^2 =-2\|\xi(t)\|^2-2\langle {\mathcal K}(\xi(t))\cdot\nabla \omega(t),\xi(t)\rangle+2\langle{\mathcal K}(\xi(t))(0)\cdot\nabla \omega(t),\xi(t)\rangle.
$$
Using \eqref{020712} and \eqref{030712} (for $r=1/2$) we conclude that for some deterministic $C>0$,
\begin{eqnarray*}
\partial_t |\xi(t)|^2 &&\le -2\|\xi(t)\|^2+C|\xi(t)|_{1/2}\| \omega(t)\||\xi(t)|
\\
&&
\le -2\|\xi(t)\|^2+\nu\| \omega(t)\|^2|\xi(t)|^2+\frac{C^2}{4\nu}|\xi(t)|_{1/2}^2.
\end{eqnarray*}
An application of the Gagliardo--Nirenberg inequality \eqref{gagliardo-nirenberg} with $s=1$, $\beta=1/2$ yields
$$
|\xi(t)|_{1/2}\le C\|\xi(t)\|^{1/2}|\xi(t)|^{1/2}
$$
for some constant $C>0$.
In consequence, there exist $C,C_1>0$ such that
\begin{equation}\label{051801a}
\begin{aligned}
\partial_t |\xi(t)|^2 &\le -\|\xi(t)\|^2+ \nu\| \omega(t)\|^2|\xi(t)|^2+\frac{C^2}{2\cdot 4^3\nu}|\xi(t)|^2\\
&
\le -\|\xi(t)\|^2+ (\nu\| \omega(t)\|^2+C_1)|\xi(t)|^2.
\end{aligned}
\end{equation}
Estimate \eqref{051801} follows upon an application of Gronwall's inequality.
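In more detail, dropping the nonpositive term $-\|\xi(t)\|^2$ on the right hand side of \eqref{051801a} and applying Gronwall's inequality we obtain
$$
|\xi(t)|^2\le|\xi|^2\exp\left\{\nu\int_0^t\| \omega(s)\|^2ds+C_1t\right\},
$$
which yields \eqref{051801} upon taking square roots (with $\nu/2$ and $C_1/2$ in place of $\nu$ and $C$; since $\nu>0$ is arbitrary, this causes no loss of generality).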
In addition, from \eqref{051801} and \eqref{051801a} we conclude that there exists $C>0$ such that
\begin{eqnarray*}
\int_0^t\|\xi(s)\|^2ds&\le& |\xi|^2+ \int_0^t (\nu\| \omega(s)\|^2+C_1)|\xi(s)|^2ds\\
&\le&
|\xi|^2+ |\xi|^2\int_0^t (\nu\| \omega(s)\|^2+C)\exp\left\{\nu\int_0^s\| \omega(u)\|^2du+Cs\right\}ds\\
&
\le &|\xi|^2\exp\left\{\nu\int_0^t\| \omega(s)\|^2ds+Ct\right\}. \mbox{$\square$}
\end{eqnarray*}
\subsection{Verification of Assumption 2}
Here we follow the ideas of Hairer and Mattingly, see \cite{HM1}. Suppose that $\Psi\colon H\to {\mathcal H}$ is a Borel measurable function. Given an $(\@s F_t)$-adapted process $g\colon [0,\infty) \times \Omega \to H$ satisfying $ {\@Bbb E} \int_0^t |g_s|^2d s <+\infty $ for each $t\ge0$ we denote by $\@s D_g\Psi(\omega(t))$ the Malliavin derivative of $\Psi(\omega(t))$ in the direction of $g$; that is
$$
\@s D_g\Psi(\omega(t;w)):=\lim_{\varepsilon \downarrow
0}\frac{1}{\varepsilon} \left[ \Psi(\omega(t;w,W+\varepsilon g)) -\Psi(
\omega(t;w,W))\right],
$$
where the limit is understood in the $L^2(\Omega,\@s F,\P;{\mathcal H})$ sense. Recall that $\omega_g(t;w):=\omega(t;w,W+g)$ solves the equation
\begin{equation}\label{E25}
\begin{aligned}
d \omega_g(t;w) &=[\Delta\omega_g(t) -B_0(\omega_g(t;w))+B_1(\omega_g(t;w))]d t + Qd W(t)+Qg(t)dt,
\\
\omega_g(0;w)&=w\in H.
\end{aligned}
\end{equation}
The following two facts about the Malliavin
derivative shall be crucial for us in the sequel. Directly from the
definition of the Malliavin derivative we obtain \emph{the chain
rule}: if $\Psi\in C^1_b(H;{\mathcal H})$, then
\begin{equation}\label{083002}
\@s D_g\Psi( \omega(t;w))=D\Psi(\omega(t;w))[D(t)],
\end{equation}
with $D(t;w,g):= \@s D_g\omega(t;w)$, $t\ge 0$.
In addition, the \emph{integration by parts formula} holds, see
Lemma 1.2.1, p. 25 of \cite{Nualart}: if $\Psi\in
C^1_b(H)$, then
\begin{equation}
\label{083003} {\@Bbb E}[\@s D_g\Psi( \omega(t;w))]=
{\@Bbb E}\left[\Psi(\omega(t;w))\int_0^t\langle g(s),d
W(s)\rangle\right].
\end{equation}
In particular, one can easily show that when $H={\mathcal H}$ and $\Psi=I$, where $I$ is the identity operator,
the Malliavin derivative of $\omega(t;w)$ exists and
the process $D(t;w,g)$ (we omit writing $w$ and $g$ when they are obvious from the context), solves the linear equation
\begin{equation}\label{ET28}
\begin{aligned}
\frac{d D}{d t}(t)&=\Delta D(t) -\eta(t)\cdot \nabla D(t)-\delta k(t)\cdot \nabla \omega(t)\\
&+\eta(t,0)\cdot \nabla D(t)+\delta k(t,0)\cdot \nabla \omega(t)
+ Qg(t),\\
&\\
D(0)&=0.
\end{aligned}
\end{equation}
Here
$\delta k(t):={\mathcal K}(D(t))$.
Denote $\rho(t;w,\xi):=\xi(t)- {\@s D}_g\omega(t;w)$. We have the following.
\begin{proposition}
\label{m-lm} For any $\nu,\gamma>0$ there exists a constant $C>0$ such that for any given $w,\xi\in H$ one can find an $(\@s F_t)$-adapted $H$-valued
process $g(t)= g(t,w,\xi)$ that satisfies
\begin{equation}\label{ET210}
\sup_{|\xi|\le 1} {\@Bbb E} \,
|\rho(t;w,\xi)|^2\le Ce_{\nu}(w)e^{-\gamma t},\quad\forall\,t\ge0,
\end{equation}
and
\begin{equation}\label{ET29}
\sup_{|\xi|\le 1} \int_0^\infty {\@Bbb E}\,
|g(s,w,\xi)|^2d s \le Ce_{\nu}(w),\quad \forall\,w\in H.
\end{equation}
\end{proposition}
We prove this proposition shortly. First, however, let us demonstrate
how to use it to complete the verification of Assumption 2. We have
$$
D P_t\phi(w)[\xi]= {\@Bbb E}\left\{\,D\phi(\omega(t;w))[D(t)]\right\}
+{\@Bbb E}\,\left\{ D
\phi(\omega(t;w))[\rho(t;w,\xi)]\right\}.
$$
Using the chain rule, see \eqref{083002}, the right hand side can be rewritten as
\begin{eqnarray*}
&&
{\@Bbb E}\, \left\{{\@s D}_g \phi(\omega(t;w))\right\} +{\@Bbb E}\,\left\{ D
\phi(\omega(t;w))[\rho(t;w,\xi)]\right\}
\\
&&
= {\@Bbb E}\,
\left\{\phi(\omega(t;w))\int_0^t\langle g(s),{\rm d}
W(s)\rangle\right\}+ {\@Bbb E}\,\left\{
D\phi(\omega(t;w))[\rho(t;w,\xi)]\right\}.
\end{eqnarray*}
The last equality follows from the integration by parts formula \eqref{083003}.
We have
$$
\left| {\@Bbb E}\, \left\{\phi(\omega(t;w))\int_0^t\langle g(s),d
W(s)\rangle\right\}\right|\le \left(P_t|\phi|^2(w)\right)^{1/2}\left({\@Bbb E}\,
\int_0^\infty |g(s)|^2d s\right)^{1/2}
$$
and
$$
\left|{\@Bbb E}\,\left\{
D\phi(\omega(t;w))[\rho(t;w,\xi)]\right\} \right|\le \left(P_t|D\phi|^2(w)\right)^{1/2}\left({\@Bbb E}\,
|\rho(t;w,\xi)|^2\right)^{1/2}.
$$
Hence, by \eqref{ET29} and \eqref{ET210}, given $\kappa_2\in(0,1)$, $\nu>0$, the corresponding $V(w)=e_{\nu}(w)$ and $\varepsilon \in(0,1)$, we conclude
estimate \eqref{E26} with $T_0$ and $C(\varepsilon)$ such that
$$
\left({\@Bbb E}\,
\int_0^\infty |g(s)|^2d s\right)^{1/2}\le C(\varepsilon)V^{\kappa_2}(w)$$
and
$$
\sup_{|\xi|\le 1}\sup_{t\ge T_0} \left\{{\@Bbb E} \,
|\rho(t;w,\xi)|^2\right\}^{1/2}\le \varepsilon V^{\kappa_2}(w).
$$
Therefore Assumption 2 will be verified, provided that
we prove Proposition \ref{m-lm}.
\subsubsection*{Proof of Proposition \ref{m-lm}}
We assume first that $q_k\not=0$ for all $k\in{\@Bbb Z}^2_*$, see \eqref{031002}.
Let us denote by $\Pi_{\ge N}$ the orthogonal projection
onto span $\left\{{\rm e}^{{\rm i} kx}\colon |k|\ge N\right\}$ and let
$\Pi_{< N}:=I-\Pi_{\ge N}$. Write
$$
B:=-B_0+B_1, \quad B_s(h,\omega):=B(h,\omega)+B(\omega,h),
$$
$B_{i,s}(\cdot,\cdot)$ for the symmetrized forms corresponding to $B_i$, $i=0,1$,
and
$$
\Delta_N:=\Pi_{\ge N}\Delta,\quad Q_N:=\Pi_{\ge N}B,\quad \Delta_N^\perp:=\Pi_{<
N}\Delta,\quad Q_N^\perp:=\Pi_{< N}B.
$$
Let $(\zeta(t))_{t\ge0}$ be the solution of the problem
\begin{equation}\label{ET211}
\begin{aligned}
\frac{{\rm d}\zeta}{{\rm d} t}(t)&=-\Delta_N\zeta(t)+ \Pi_{\ge N}B_s(\omega(t;w),
\zeta(t)) -\frac12\zeta_{N}(t)|\zeta_{N}(t)|^{-1},\\
&\\
\zeta(0)&=\xi,
\end{aligned}
\end{equation}
for a given integer $N\ge 1$. Here $\zeta_{N}(t):=\Pi_{< N}\zeta(t)$. We adopt the convention that
\begin{equation}\label{ET212}
\zeta_{N}(t)|\zeta_{N}(t)|^{-1}:=0\qquad
\text{if\quad $\zeta_{N}(t)=0$.}
\end{equation}
Let
\begin{equation}\label{083005}
g:=Q^{-1}f,
\end{equation}
where
\begin{equation}\label{ET213}
f(t):=-\Delta_N^{\perp}\zeta(t)+ \Pi_{<N}B_s(\omega(t),\zeta(t)) +
\frac12\zeta_N(t)|\zeta_N(t)|^{-1}.
\end{equation}
Note that $f$ takes values in a finite dimensional space. Recall that $\rho(t)=\xi(t)- D(t)$.
The proof of the proposition shall be accomplished through a sequence of auxiliary facts formulated as lemmas.
\begin{lemma} \label{L2}
We have
\begin{equation}
\rho(t) =
\zeta(t),\qquad\forall\,t\ge0.
\end{equation}
\end{lemma}
\begin{proof}
Adding $f(t)$ to both sides of \eqref{ET211} we obtain
\begin{equation}\label{ET215}
\begin{aligned}
\frac{{\rm d} \zeta}{{\rm d} t}(t) +f(t)&=-\Delta\zeta(t)+ B_s(\omega(t),\zeta(t)),\\
\zeta(0)&=\xi.
\end{aligned}
\end{equation}
Recall that $\xi(t)$ and $D(t)$ satisfy equations
$\eqref{E25-1}$ and \eqref{ET28}, respectively. Hence $\rho(t)$
satisfies
$$
\begin{aligned}
\frac{{\rm d} \rho}{{\rm d} t}(t) &= -\Delta\rho(t) + B_s(\omega(t),\rho(t)) - Qg(t),\\
\rho(0) &= \xi.
\end{aligned}
$$
Since $f(t)=Qg(t)$, we conclude that $\rho(t)$ and $\zeta(t)$ solve the same linear evolution
equation with the same initial value. Thus the assertion of the lemma follows.
\end{proof}
\begin{lemma}\label{L3}
For each $N\ge1$ we have
\begin{equation}
\label{011312}
\zeta_N(t)=0,\quad \forall\,t\ge 2,
\end{equation}
and
\begin{equation}
\label{021312}
|\zeta_N(t)|\le 1,\quad \forall \,t\ge0.
\end{equation}
\end{lemma}
\begin{proof} By Lemma \ref{L2} we have $\rho(\cdot)=\zeta(\cdot)$. Applying $\Pi_{<N}$ to both
sides of \eqref{ET211} we obtain
\begin{equation}\label{ET216}
\begin{aligned}
\frac{{\rm d}~}{{\rm d} t}\zeta_N(t)&=-\frac12 |\zeta_N(t)|^{-1}\zeta_N(t),\\
\zeta_N(0)&=\Pi_{<N}\xi.
\end{aligned}
\end{equation}
Taking the scalar product of both sides of \eqref{ET216} with $\zeta_N(t)$ we
obtain that $z(t):=|\zeta_N(t)|^2$ satisfies
\begin{equation}\label{ode}
\frac{{\rm d} z}{{\rm d} t}(t)=-\sqrt{z(t)},\quad z(0)=|\Pi_{<N}\xi|^2.
\end{equation}
Since $0\le z(0)\le|\xi|^2\le 1$ (recall that in Proposition \ref{m-lm} we consider $|\xi|\le1$), the solution of the o.d.e. \eqref{ode} is given explicitly by
$z(t)=\big(\sqrt{z(0)}-t/2\big)^2$ for $t\le 2\sqrt{z(0)}$ and $z(t)=0$ afterwards, and both
assertions follow.
\end{proof}
Let $\zeta^{(N)}(t):=\Pi_{\ge N}\zeta(t)$. We have
\begin{equation}\label{ET211N}
\frac{{\rm d}}{{\rm d} t}|\zeta^{(N)}(t)|^2=-2\|\zeta^{(N)}(t)\|^2+ 2\langle\Pi_{\ge N}B_s(\omega(t;w),
\zeta(t)),\zeta^{(N)}(t)\rangle,\quad
\zeta^{(N)}(0)=\Pi_{\ge N}\xi.
\end{equation}
We shall use the following estimates, see Proposition 6.1 of \cite{foias}. There exists $C>0$ such that
\begin{equation}
\label{cofo88}
|\langle B_0(h,\omega_1),\omega_2\rangle|\le C|h|_{s_1-1}|\omega_1|_{1+s_2}|\omega_2|_{s_3} ,\quad\forall\, h\in H_{s_1-1},\omega_1\in H_{1+s_2},\omega_2\in H_{s_3},
\end{equation}
for all $s_1,s_2,s_3\ge0$ such that $s_1+s_2+s_3>1$. When, in addition
$s_1>1$ we have
\begin{equation}
\label{cofo88a}
|\langle B_1(h,\omega_1),\omega_2\rangle|\le C|h|_{s_1-1}|\omega_1|_{1+s_2}|\omega_2|_{s_3} ,\quad\forall\, h\in H_{s_1-1},\omega_1\in H_{1+s_2},\omega_2\in H_{s_3}.
\end{equation}
With the help of the above inequalities we can bound the symmetric part of the bilinear form $B(\cdot,\cdot)$ as follows.
\begin{lemma}
\label{lm011412}
For any $\varepsilon\in(0,1)$ and $\nu>0$ there exists a constant $C>0$ such that
\begin{eqnarray}
\label{021412}
&&|\langle B_s(\omega(t;w),
\zeta(t)),\zeta^{(N)}(t)\rangle|\\
&&
\le (\varepsilon N+C+\frac{\nu}{2}\|\omega(t;w)\|^2)|\zeta^{(N)}(t)|^2+\frac{1}{4}\|\zeta^{(N)}(t)\|^2+C\|\omega(t;w)\|^2|\zeta_{N}(t)|^2.\nonumber
\end{eqnarray}
\end{lemma}
\noindent{\sc Proof} From \eqref{cofo88} we have
\begin{eqnarray*}
&&|\langle B_0(\omega(t;w),
\zeta(t)),\zeta^{(N)}(t)\rangle|=|\langle B_0(\omega(t;w),
\zeta^{(N)}(t)), \zeta(t)\rangle|\\
&&
\le C|\omega(t;w)|_{1/2}\|\zeta^{(N)}(t)\||\zeta(t)|\le \frac{1}{16} \|\zeta^{(N)}(t)\|^2+C_1|\omega(t;w)|_{1/2}^2|\zeta(t)|^2.
\end{eqnarray*}
Using the Gagliardo--Nirenberg and Young's inequalities we get
$$
C_1|\omega(t;w)|_{1/2}^2\le \frac{\nu}{8} \|\omega(t;w)\|^2 +C_2|\omega(t;w)|^2
$$
for some $C_2>0$. This yields
$$
|\langle B_0(\omega(t;w),
\zeta(t)),\zeta^{(N)}(t)\rangle|
\le \frac{1}{16} \|\zeta^{(N)}(t)\|^2+ \left(\frac{\nu}{8} \|\omega(t;w)\|^2 +C_2|\omega(t;w)|^2\right)|\zeta(t)|^2.
$$
Likewise,
\begin{eqnarray*}
|\langle B_0(
\zeta(t),\omega(t;w)),\zeta^{(N)}(t)\rangle|&&\le C\|\omega(t;w)\||\zeta^{(N)}(t)|_{1/2}|\zeta(t)|\\
&&
\le \frac{\nu}{8} \|\omega(t;w)\|^2|\zeta(t)|^2+C_1|\zeta^{(N)}(t)|_{1/2}^2\\
&&
\le \frac{\nu}{8} \|\omega(t;w)\|^2|\zeta(t)|^2+ \frac{1}{16}\|\zeta^{(N)}(t)\|^2+C_2|\zeta^{(N)}(t)|^2.
\end{eqnarray*}
On the other hand
$$
|\langle B_1(\omega(t;w),
\zeta(t)),\zeta^{(N)}(t)\rangle|=|\langle B_1(\omega(t;w),
\zeta^{(N)}(t)), \zeta^{(N)}(t)\rangle|=0
$$
and
\begin{equation}
\label{051412}
|\langle B_1(
\zeta(t),\omega(t;w)),\zeta^{(N)}(t)\rangle|\le C\|\omega(t;w)\||\zeta^{(N)}(t)||\zeta(t)|_{1/2}.
\end{equation}
Note that
$$
|\zeta(t)|_{1/2}^2=|\zeta_N(t)|_{1/2}^2+|\zeta^{(N)}(t)|_{1/2}^2\le N|\zeta_N(t)|^2+|\zeta^{(N)}(t)|_{1/2}^2.
$$
With this inequality we can estimate the right hand side of \eqref{051412} by
$$
CN^{1/2}\|\omega(t;w)\||\zeta^{(N)}(t)||\zeta_N(t)|+C\|\omega(t;w)\||\zeta^{(N)}(t)||\zeta^{(N)}(t)|_{1/2}.
$$
The first term can be estimated by
$$
C_1\|\omega(t;w)\|^2|\zeta_N(t)|^2+\varepsilon N|\zeta^{(N)}(t)|^2.
$$
The second term is less than or equal to
\begin{eqnarray*}
&&\frac{\nu}{8}\|\omega(t;w)\|^2|\zeta^{(N)}(t)|^2+C_1|\zeta^{(N)}(t)|_{1/2}^2\\
&&\quad
\le \frac{\nu}{8}\|\omega(t;w)\|^2|\zeta^{(N)}(t)|^2+ \frac{1}{16}\|\zeta^{(N)}(t)\|^2 +C_2|\zeta^{(N)}(t)|^2 .
\end{eqnarray*}
Summarizing the above considerations, we have shown \eqref{021412}.
\mbox{$\square$}
\subsubsection*{Proof of \eqref{ET210}}
Taking the scalar product in $H$ of both sides of \eqref{ET211} with $\zeta^{(N)}(t)$ and using Lemma \ref{lm011412} we conclude that
\begin{equation}\label{ET211b}
\begin{aligned}
\frac{{\rm d}}{{\rm d} t}|\zeta^{(N)}(t)|^2&\le-2\|\zeta^{(N)}(t)\|^2+2 (\varepsilon N+C+\frac{\nu}{2}\|\omega(t;w)\|^2)|\zeta^{(N)}(t)|^2\\
&\quad
+\frac12\|\zeta^{(N)}(t)\|^2+C\|\omega(t;w)\|^2|\zeta_{N}(t)|^2\\
&\le -\frac12\|\zeta^{(N)}(t)\|^2+\left[-N^2+2 (\varepsilon N+C)\right]|\zeta^{(N)}(t)|^2\\
&\quad
+\nu\|\omega(t;w)\|^2|\zeta^{(N)}(t)|^2+C\|\omega(t;w)\|^2|\zeta_{N}(t)|^2, \\
\zeta(0)&=\xi.
\end{aligned}
\end{equation}
Suppose that $N_0$
is such that
\begin{equation}
\label{N0}
N^2_0-2 (\varepsilon N_0+C)\ge \max\{N^2_0/2,\gamma+{\rm tr}\,Q^2\}.
\end{equation}
We then solve \eqref{ET211} with $N=N_0$ and determine $g(t)$ via \eqref{083005}. According to Lemma \ref{L2} the difference $\rho(t)=\xi(t)-D(t)$
equals $\zeta(t)$. From \eqref{ET211b} we conclude via Gronwall's inequality that
\begin{equation}\label{ET211c}
\begin{aligned}
&|\zeta^{(N_0)}(t)|^2\le|\zeta^{(N_0)}(0)|^2\exp\left\{-\gamma t-{\rm tr}\,Q^2t+\nu\int_0^t\|\omega(s;w)\|^2{\rm d} s\right\}\\
&
+C\int_0^t\exp\left\{-\gamma(t-s)-{\rm tr}\,Q^2(t-s)+\nu\int_s^t\|\omega(r;w)\|^2{\rm d} r\right\}\|\omega(s;w)\|^2|\zeta_{N_0}(s)|^2{\rm d} s, \\
&\zeta(0)=\xi.
\end{aligned}
\end{equation}
From Lemma \ref{L3} the second term on the right hand side of \eqref{ET211c} can be estimated by
\begin{eqnarray*}
&&
C\exp\left\{-\gamma(t-2)-{\rm tr}\,Q^2(t-2)\right\}\exp\left\{\nu\int_0^t\|\omega(r;w)\|^2{\rm d} r\right\}\int_0^2\|\omega(s;w)\|^2{\rm d} s\\
&&\quad
\le C_1\exp\left\{-\gamma(t-2)-{\rm tr}\,Q^2(t-2)\right\}\exp\left\{\nu'\int_0^{t\vee 2}\|\omega(r;w)\|^2{\rm d} r\right\},
\end{eqnarray*}
provided that $\nu'>\nu$. Therefore
\begin{equation}\label{ET211d1}
\begin{aligned}
&|\zeta^{(N_0)}(t)|^2\le|\zeta^{(N_0)}(0)|^2\exp\left\{-\gamma t-{\rm tr}\,Q^2t+\nu\int_0^{t\vee 2}\|\omega(s;w)\|^2{\rm d} s\right\}\\
&\quad
+ C_1\exp\left\{-\gamma(t-2)-{\rm tr}\,Q^2(t-2)\right\}\exp\left\{\nu'\int_0^{t\vee 2}\|\omega(r;w)\|^2{\rm d} r\right\},\quad{\P}\,{\rm a.s.} \\
&\zeta(0)=\xi
\end{aligned}
\end{equation}
for all $t>0$.
Estimate \eqref{ET210}, with $e_{\nu'}(w)$ appearing on the right hand side, is then a
consequence of the above bound, Lemma \ref{L2} and estimate \eqref{010712-00} if only $0<\nu<\nu'<\nu_0$.
\subsubsection*{Proof of \eqref{ET29}}
To prove the estimate observe that from \eqref{083005}, \eqref{ET213} and \eqref{011312} it follows that
$$
|g(t)|= |Q^{-1}\Pi_{<{N_0}}B_s(\omega(t),\zeta(t))|\le |g_0(t)|+|g_1(t)|,\quad\forall\,t\ge 0,
$$
with
$$
g_i(t):= Q^{-1}\Pi_{<{N_0}}B_{i,s}(\omega(t),\zeta(t)),\quad i=0,1.
$$
\subsubsection*{Estimates of $|g_1(t)|$}
Note that for $t\ge2$,
$$
|g_1(t)|= |Q^{-1}\Pi_{<{N_0}}B_1(\zeta^{(N_0)}(t),\omega(t))|\le C\|\zeta^{(N_0)}(t)\|\,\|\Pi_{<{N_0}}\omega(t)\|\le CN_0\|\zeta^{(N_0)}(t)\||\omega(t)|.
$$
The last inequality holds because
\begin{equation}
\label{012402}
\|\Pi_{<{N_0}}w\|\le N_0|w|,\quad\forall\, w\in H.
\end{equation}
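Indeed, writing $w=\sum_{k\in{\@Bbb Z}^2_*}\hat w(k)e_k$ we have
$$
\|\Pi_{<{N_0}}w\|^2=\sum_{0<|k|<N_0}|k|^2|\hat w(k)|^2\le N_0^2\sum_{0<|k|<N_0}|\hat w(k)|^2\le N_0^2|w|^2.
$$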
Therefore
$$
\mathbb E\int_2^T|g_1(t)|^2d t\le C J(T),
$$
with
$$
J(T):=\mathbb E\int_2^T\|\zeta^{(N_0)}(t)\|^2|\omega(t)|^2d t.
$$
We use \eqref{ET211b} to get
\begin{equation}\label{ET211d}
\begin{aligned}
&J(T)\le -2\mathbb E\int_2^T\frac{d}{d t}|\zeta^{(N_0)}(t)|^2|\omega(t)|^2d t
+2\nu\mathbb E\int_2^T|\omega(t)|^2\|\omega(t;w)\|^2|\zeta^{(N_0)}(t)|^2d t.
\end{aligned}
\end{equation}
Denote the terms appearing on the right hand side as $J_i(T)$, $i=1,2$, respectively.
We have
\begin{eqnarray*}
&&J_1(T)=2\mathbb E\int_2^T|\zeta^{(N_0)}(t)|^2d|\omega(t)|^2-2\mathbb E|\zeta^{(N_0)}(t)|^2|\omega(t)|^2\Big|_2^T.
\end{eqnarray*}
The boundary term appearing on the right hand side is easily estimated by $Ce_{\nu}(w)$, by virtue of \eqref{ET211d1} and \eqref{010712}. As for the integral term,
using \eqref{051512} and the already proven \eqref{ET210}, we can estimate it by
\begin{eqnarray*}
&&2\mathbb E\int_2^T|\zeta^{(N_0)}(t)|^2\left({\rm tr}\, Q^2-2\|\omega(t)\|^2\right)d t\le Ce_{\nu}(w)\int_2^T{\rm e}^{-\gamma t}d t\le C_1e_{\nu}(w).
\end{eqnarray*}
Next, we can write
\begin{eqnarray*}
&&J_2(T)\le J_{21}(T)+J_{22}(T),
\end{eqnarray*}
where
\begin{eqnarray*}
&&J_{21}(T):=2\nu|\zeta^{(N_0)}(0)|^2\mathbb E\int_2^T|\omega(t)|^2\|\omega(t;w)\|^2{\rm e}^{-(\gamma +{\rm tr}\,Q^2)t}\exp\left\{\nu\int_0^{t}\|\omega(s;w)\|^2d s\right\}d t\\
&&
J_{22}(T):=2\nu C_1\mathbb E\int_2^T|\omega(t)|^2\|\omega(t;w)\|^2{\rm e}^{-(\gamma +{\rm tr}\,Q^2)t}\exp\left\{\nu'\int_0^{t}\|\omega(r;w)\|^2d r\right\}d t.
\end{eqnarray*}
Observe that
\begin{eqnarray*}
&&J_{21}(T)\le 2\mathbb E\int_2^T|\omega(t)|^2{\rm e}^{-(\gamma +{\rm tr}\,Q^2)t}\frac{d}{d t}\exp\left\{\nu\int_0^{t}\|\omega(s;w)\|^2d s\right\}d t=\sum_{i=1}^3J_{21i}(T),
\end{eqnarray*}
where
\begin{eqnarray*}
&&J_{211}(T) :=2{\rm e}^{-(\gamma +{\rm tr}\,Q^2)t}\mathbb E\left\{|\omega(t)|^2\exp\left\{\nu\int_0^{t}\|\omega(s;w)\|^2d s\right\}\right\}\Big|_2^T,\\
&&J_{212}(T) :=2(\gamma +{\rm tr}\,Q^2)\int_2^T{\rm e}^{-(\gamma +{\rm tr}\,Q^2)t}\mathbb E\left\{|\omega(t)|^2\exp\left\{\nu\int_0^{t}\|\omega(s;w)\|^2d s\right\}\right\}dt,\\
&&J_{213}(T):=-2\int_2^T {\rm e}^{-(\gamma +{\rm tr}\,Q^2)t}\mathbb E\left\{\exp\left\{\nu\int_0^{t}\|\omega(s;w)\|^2d s\right\}d |\omega(t)|^2\right\}.
\end{eqnarray*}
We have
$$
J_{211}(T)\le C{\rm e}^{-(\gamma +{\rm tr}\,Q^2)T}\mathbb E \exp\left\{\nu|\omega(T;w)|^2+\nu\int_0^{T}\|\omega(s;w)\|^2d s\right\}\le C_1{\rm e}^{-\gamma T}e_{\nu}(w).
$$
The last inequality follows from \eqref{010712-00}. On the other hand, by the same token
\begin{eqnarray*}
J_{212}(T)&&\le C\mathbb E\int_2^T{\rm e}^{-(\gamma +{\rm tr}\,Q^2)t}\exp\left\{\nu|\omega(t)|^2+\nu\int_0^{t}\|\omega(s;w)\|^2d s\right\}d t\\
&&
\le C_2e_{\nu}(w)\int_2^T{\rm e}^{-\gamma t}d t\le C_3e_{\nu}(w),\quad\forall\,T\ge 2
\end{eqnarray*}
and finally
\begin{eqnarray*}
J_{213}(T)&&\le C\mathbb E\int_2^T\|\omega(t)\|^2 {\rm e}^{-(\gamma +{\rm tr}\,Q^2)t}\exp\left\{\nu\int_0^{t}\|\omega(s;w)\|^2d s\right\}d t\\
&&
\le \frac{C}{\nu}\mathbb E\int_2^T {\rm e}^{-(\gamma +{\rm tr}\,Q^2)t}\frac{d }{d t}\exp\left\{\nu\int_0^{t}\|\omega(s;w)\|^2d s\right\}d t.
\end{eqnarray*}
Repeating the integration by parts argument used before we conclude that also
$$
J_{213}(T)\le Ce_{\nu}(w),\quad\forall\,T\ge2.
$$
Summarizing, we have shown that
$
J_{21}(T)\le Ce_{\nu}(w),
$ for $T\ge2$.
In the same way we can argue that
$
J_{22}(T)\le Ce_{\nu}(w),
$
thus also
$$
J_{2}(T)\le Ce_{\nu}(w),\quad\forall\,T\ge2.
$$
Finally, for $t\in[0,2]$ we use \eqref{012402} to obtain that
\begin{eqnarray*}
|g_1(t)|&&= |Q^{-1}\Pi_{<{N_0}}B_{1,s}(\zeta(t),\omega(t))|\\
&&
\le C\left(\|\zeta^{(N_0)}(t)\|+N_0|\zeta_{N_0}(t)|\right)|\omega(t)|+CN_0\|\omega(t)\||\zeta_{N_0}(t)|.
\end{eqnarray*}
We have therefore
$$
\int_0^2\mathbb E |g_1(t)|^2d t\le J_{31}+J_{32},
$$
with
\begin{eqnarray*}
&&
J_{31}:=C\int_0^2\mathbb E\|\zeta^{(N_0)}(t)\|^2|\omega(t)|^2d t,\\
&&
J_{32}:=C\int_0^2\mathbb E(|\omega(t)|^2+\|\omega(t)\|^2)d t.
\end{eqnarray*}
It is easy to see from \eqref{051512} that
$
J_{32}\le Ce_{\nu}(w).
$
Term $J_{31}$ satisfies an estimate analogous to \eqref{ET211d};
therefore we can write
$$
J_{31}\le J_{311}+J_{312},
$$
where $J_{311}$, $J_{312}$ are defined as the corresponding expressions on the right hand side of \eqref{ET211d},
with the limits of integration replaced by $0$ and $2$, respectively. In the case of $J_{311}$ we proceed in the same way as for $J_1(T)$ and end up with the bound
$
J_{311}\le Ce_{\nu}(w).
$
On the other hand, from \eqref{ET211c} we get
\begin{eqnarray}
\label{011612}
&&
J_{312}\le C\mathbb E\int_0^2\|\omega(t;w)\|^2|\omega(t)|^2\exp\left\{\nu\int_0^t\|\omega(s)\|^2d s\right\}d t\\
&&\qquad
+C\mathbb E\int_0^2\int_0^t\|\omega(s)\|^2|\omega(t)|^2\|\omega(t)\|^2\exp\left\{\nu\int_s^t\|\omega(r)\|^2d r\right\}d td s.\nonumber
\end{eqnarray}
Repeating the integration by parts argument used in the foregoing, we conclude that the first term on the right hand side is estimated by
$Ce_{\nu}(w).$ The second term equals
\begin{eqnarray*}
&&
-\frac{C}{\nu}\mathbb E\int_0^2|\omega(t)|^2\|\omega(t)\|^2d t\int_0^t\frac{d }{d s}\exp\left\{\nu\int_s^t\|\omega(r)\|^2d r\right\}d s\\
&&\qquad
\le \frac{C}{\nu}\mathbb E\int_0^2|\omega(t)|^2\|\omega(t)\|^2\exp\left\{\nu\int_0^t\|\omega(r)\|^2d r\right\} d t.
\end{eqnarray*}
From here on we estimate as in the foregoing and conclude that this term is also less than $Ce_{\nu}(w)$.
Summarizing, we have shown that
$$
J(T)\le Ce_{\nu}(w),\quad\forall\,T\ge2.
$$
\subsubsection*{Estimates of $|g_0(t)|$}
We start with the following.
\begin{lemma}
\label{form1} (cf. Lemma A.1 of \cite{EMS01})
For any $N$ there exists $C_N$ such that
$$
|\Pi_{<N}B_{0}(h,\omega)|\le C_N|h|_{-1}|\omega|,\quad\forall\,h\in H_{-1},\,\omega\in H.
$$
\end{lemma}
\noindent{\sc Proof}
Suppose that
$$
h=\sum_{k\in{\@Bbb Z}^2_*}\hat h(k)e_k,\quad \omega=\sum_{k\in{\@Bbb Z}^2_*}\hat \omega(k)e_k.
$$
We can write that
\begin{eqnarray*}
|\Pi_{<N}B_0(h,\omega)|^2&&=\int_{{\mathbb T}^2}|\Pi_{<N}\nabla\cdot({\mathcal K}(h)(x)\omega(x))|^2d x\\
&&
\le N^2\sum_{0<|k|<N}\left|\sum_{\ell\in{\@Bbb Z}^2_*}\widehat{{\mathcal K}(h)}(\ell)\hat\omega(k-\ell)\right|^2\le N^4|h|^2_{-1}|\omega|^2.\mbox{$\square$}
\end{eqnarray*}
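In the last step we used the Cauchy--Schwarz inequality: assuming, as is the case for the Biot--Savart kernel, that $|\widehat{{\mathcal K}(h)}(\ell)|\le|\ell|^{-1}|\hat h(\ell)|$, we get
$$
\Big|\sum_{\ell\in{\@Bbb Z}^2_*}\widehat{{\mathcal K}(h)}(\ell)\hat\omega(k-\ell)\Big|^2\le\sum_{\ell\in{\@Bbb Z}^2_*}|\ell|^{-2}|\hat h(\ell)|^2\sum_{\ell\in{\@Bbb Z}^2_*}|\hat\omega(\ell)|^2\le|h|_{-1}^2|\omega|^2,
$$
and summing over the $O(N^2)$ modes $0<|k|<N$ yields the stated bound, with a constant that can be absorbed into $C_N$.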
From the above lemma we get that for $T\ge2$,
$$
\mathbb E\int_2^T|g_0(t)|^2d t\le C I(T)
$$
with
\begin{eqnarray*}
I(T)&&:=\mathbb E\int_2^T|\zeta^{(N_0)}(t)|^2|\omega(t)|^2d t\\
&&
\le C\int_2^T \mathbb E\exp\left\{-\gamma t-{\rm tr}\,Q^2t+\nu|\omega(t)|^2+\nu\int_0^t\|\omega(s)\|^2d s\right\}d t\\
&&
+C \mathbb E\int_2^T\exp\left\{-\gamma (t-2)-{\rm tr}\,Q^2(t-2)+\nu'|\omega(t)|^2+\nu'\int_0^t\|\omega(s)\|^2d s\right\}d t
\\
&&
\le C_1e_{\nu'}(w),
\end{eqnarray*}
provided $0<\nu<\nu'<\nu_0$.
The first inequality follows from \eqref{ET211d1}, while the second from \eqref{010712-00}.
This ends the proof of Proposition \ref{m-lm} and, according to our previous remarks, concludes the verification
of Assumption 2.
\subsection{Verification of Assumption 3} To verify this assumption, consider the solution $y(t;w)$, $t\ge 0$, of the deterministic equation
$$
\frac{d y (t)}{d t}=\Delta y(t)+B(y(t)), \qquad t \ge 0,
$$
with the initial condition $y(0)=w$. Then
$$
\lim_{t\to+\infty}\sup_{|w|\le R}|y(t;w)|=0,\quad\forall\,R>0.
$$
Fix $\delta>0$ and $R>0$. Let $T_0>0$ be such that
$$
\sup_{|w|\le R}{\rm d}_{\kappa}(y(T; w),0)\le\delta/4,\quad\forall\,T\ge T_0.
$$
Since
$$W_{\Delta, Q}(t):= \int_0^t {\rm e} ^{\Delta (t-s)} Qd W(s)
$$ is a centered Gaussian random element in the Banach space $C([0, T]; V)$ with the uniform norm
$$
\|f\|_{\infty,T}:=\sup_{t\in[0,T]}\|f(t)\|,\quad f\in C([0, T]; V),
$$ its topological support is a closed linear subspace (see e.g. \cite{VAK}). Thus, in particular, $0$ belongs to the support of its law and for any $\varrho>0$, $\mathbb{P}(F_\varrho)>0$, where
$$
F_{\varrho}=\{\pi\in\Omega\colon \|W_{\Delta, Q}(\pi)\|_{\infty,T}<\varrho\}.
$$
Choose $\varrho_0>0$ such that
$$
{\rm d}_{\kappa}(\omega(T; w)(\pi),y(T; w))\le\delta/4\qquad\text{for all $\pi\in F_{\varrho_0}$ and $|w|\le R$,}
$$
and set $a:=\mathbb{P}(F_{\varrho_0})>0$. For any $|w_1|,|w_2|\le R$ we have
$$
{\mathcal P}_{T}(w_1,w_2;\Delta_{\delta,\kappa})\ge\mathbb{P}\left[\pi\in\Omega\colon {\rm d}_{\kappa}(\omega(T; w_i)(\pi),y(T; w_i))\le\delta/4,\,i=1,2\right]\ge \mathbb{P}(F_{\varrho_0})=a,
$$
and thus we have finished the verification of Assumption 3.
$\mbox{$\square$}$
\subsection{Proof of Theorem \ref{T2}}
\subsubsection{Proof of part 1)}
Let us fix an arbitrary $T>0$ and define
$
\zeta(t):=|\omega(t)|^2+t\|\omega(t)\|^2
$ and ${\rm tr}\,Q_1:=\sum_{k\in{\@Bbb Z}^2_*}|k|^2q_k^2$.
By It\^o's formula we have
\begin{equation}
\label{012502}
d\zeta(t)=\left[{\rm tr}\, Q^2+t{\rm tr}\,Q_1-2t|\omega(t)|^2_2-\|\omega(t)\|^2+2t\langle B(\omega(t)),\Delta\omega(t)\rangle\right]d t
+dM_t
\end{equation}
where
$$
dM_t:=2\langle Qd W(t), (I+t\Delta)\omega(t)\rangle.
$$
According to \eqref{020712} there exist $C,C_1>0$ such that
$$
|\langle B_0(\omega),\Delta\omega\rangle|\le C|\omega|_{1/2}\|\omega\||\omega|_2\le\frac14 |\omega|_2^2+C_1|\omega|^4,\quad\forall\, \omega\in H_2.
$$
Likewise, from \eqref{cofo88a} with $s_1=3/2$, $s_2=s_3=0$, we have
$$
|\langle B_1(\omega),\Delta\omega\rangle|\le C|\omega|_{1/2}\|\omega\||\omega|_2,\quad\forall\, \omega\in H_2.
$$
With these inequalities we conclude that
$$
|\langle B(\omega),\Delta\omega\rangle| \le\frac12 |\omega|_2^2+C_1|\omega|^4,\quad\forall\, \omega\in H_2.
$$
From here on we proceed as in the proof of Lemma A.3 of \cite{MP} and conclude from \eqref{012502} that
\begin{equation}
\label{032502}
\zeta(t)\le |w|^2+t{\rm tr}\, Q^2+\frac{t^2{\rm tr}\,Q_1}{2}+C\int_0^ts|\omega(s)|^4ds+U(t),
\end{equation}
where $U(0)=0$ and
$$
dU(t)=-(t|\omega(t)|^2_2+\|\omega(t)\|^2)dt+dM_t.
$$
Since
$$
U(t)\le M_t-(\alpha/2) \langle M\rangle_t
$$
for some sufficiently small $\alpha>0$, we conclude from the exponential martingale inequality that
$$
{\P}[\sup_{t\in[0,T]}U(t)\ge K]\le {\rm e}^{-\alpha K},\quad\forall\,K>0.
$$
This, of course, implies that $\mathbb E \exp\left\{\alpha' \sup_{t\in[0,T]}U(t)\right\}<+\infty$ for any $\alpha'\in(0,\alpha)$.
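For the reader's convenience, we recall the exponential martingale inequality invoked above: for a continuous local martingale $(M_t)_{t\ge0}$ with $M_0=0$ and any $\alpha,K>0$,

```latex
{\P}\left[\sup_{t\ge0}\left(M_t-\frac{\alpha}{2}\langle M\rangle_t\right)\ge K\right]\le {\rm e}^{-\alpha K},
```

which, combined with $U(t)\le M_t-(\alpha/2)\langle M\rangle_t$, gives the displayed bound.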
From \eqref{010712-00} we get
$$
\mathbb E\exp\left\{\nu\sup_{t\in[0,T]}|\omega(t)|^2\right\}\le Ce_{\nu}(w),
$$
which in turn implies that
$$
\mathbb E\left[ \sup_{t\in[0,T]}|\omega(t)|^{4N}\right]\le C|w|^{4N}.
$$
Summarizing the above considerations, we obtain from \eqref{032502} that
for any $T>0$ and $N\ge0$ there exists a constant $C>0$ such that
\begin{equation}
\label{011912}
\mathbb E\left[\sup_{s\in[0,T]}\zeta^{2N}(s)\right]\le C\left(|w|^{4N}+1\right).
\end{equation}
Thus we conclude the proof of part 1) of Theorem \ref{T2}.
\subsubsection{Proof of part 2)}
First note that $P_t\phi(w)$ is well defined thanks to the already proved estimate \eqref{E27-aa} and the definition of the norm $\|\!|\cdot\|\!|_N$. In addition, we have
\begin{equation}
\label{031912}
e_{-\nu}(w)|P_t\phi(w)|\le \|\!|\phi\|\!|_Ne_{-\nu}(w)(1+\mathbb E\|\omega(t;w)\|^N)\le C\|\!|\phi\|\!|_N,\quad \forall\,w\in H.
\end{equation}
To deal with $DP_t\phi(w)[\xi]$ we first show the following:
\begin{lemma}
\label{lm031912}
Suppose that $\{\xi(t),\,t\ge0\}$ is defined by \eqref{041912}. Then, for any $\nu>0$ there exists $C>0$ such that
\begin{equation}
\label{021912}
\|\xi(t)\|^2\le C\|\xi\|^2\exp\left\{\nu\int_0^t\|\omega(s;w)\|^2d s+Ct\right\},\quad\forall\,t\ge 0,\,w\in H,\,\xi\in V,\,{\P}-{\rm a.s.}
\end{equation}
\end{lemma}
\noindent{\sc Proof}
Let $\zeta(t):=|\xi(t)|^2+\gamma\|\xi(t)\|^2$, with $\gamma>0$ to be chosen later on. We have
$$
\partial_t\zeta(t)=-2\|\xi(t)\|^2-2\gamma|\xi(t)|^2_2+\gamma\langle B_{s}(\xi(t),\omega(t)),\Delta \xi(t)\rangle
+\langle B(\xi(t),\omega(t)),\xi(t)\rangle.
$$
Thanks to \eqref{cofo88} with $s_1=3/2$, $s_2=s_3=0$ we can find constants $C,C_1>0$ such that
\begin{eqnarray*}
\gamma|\langle B_{0}(\xi(t),\omega(t)),\Delta \xi(t)\rangle|&&\le C\gamma|\xi(t)|_2|\xi(t)|_{1/2}\|\omega(t)\|
\\
&&
\le \frac{1}{4}|\xi(t)|_2^2+C_1\gamma^2|\xi(t)|\|\xi(t)\|\|\omega(t)\|^2\\
&&
\le \frac{1}{4}|\xi(t)|_2^2+\nu|\xi(t)|^2\|\omega(t)\|^2+\frac{C_2\gamma^4}{\nu}
\|\xi(t)\|^2\|\omega(t)\|^2.
\end{eqnarray*}
Using again \eqref{cofo88}, this time with $s_1=2$, $s_2=s_3=0$, we obtain
\begin{eqnarray*}
\gamma|\langle B_{0}(\omega(t),\xi(t)),\Delta \xi(t)\rangle|&&\le C\gamma|\xi(t)|_2\|\xi(t)\|\|\omega(t)\|
\\
&&
\le \frac{1}{4}|\xi(t)|_2^2+C_1\gamma^2\|\xi(t)\|^2\|\omega(t)\|^2.
\end{eqnarray*}
Also from \eqref{cofo88a}, used with $s_1=3/2$, $s_2=s_3=0$, we obtain
\begin{eqnarray*}
\gamma|\langle B_1(\xi(t),\omega(t)),\Delta \xi(t)\rangle|&&\le C\gamma|\xi(t)|_2|\xi(t)|_{1/2}\|\omega(t)\|
\\
&&
\le \frac{1}{4}|\xi(t)|_2^2+\nu|\xi(t)|^2\|\omega(t)\|^2+\frac{C_2\gamma^4}{\nu}
\|\xi(t)\|^2\|\omega(t)\|^2.
\end{eqnarray*}
In addition,
$$
\gamma|\langle B_1(\omega(t),\xi(t)),\Delta \xi(t)\rangle|=0.
$$
On the other hand,
\begin{eqnarray*}
|\langle B_{0}(\xi(t),\omega(t)),\xi(t)\rangle|&&\le C|\xi(t)||\xi(t)|_{1/2}\|\omega(t)\|
\\
&&
\le \nu |\xi(t)|^2\|\omega(t)\|^2+C_1|\xi(t)|_{1/2}^2\\
&&
\le \nu |\xi(t)|^2\|\omega(t)\|^2+\frac{1}{4}\|\xi(t)\|^2+C_2|\xi(t)|^2,
\end{eqnarray*}
and
\begin{eqnarray*}
|\langle B_{1}(\xi(t),\omega(t)),\xi(t)\rangle|&&\le C|\xi(t)||\xi(t)|_{1/2}\|\omega(t)\|
\\
&&
\le \nu |\xi(t)|^2\|\omega(t)\|^2+C_1|\xi(t)|_{1/2}^2\\
&&
\le \nu |\xi(t)|^2\|\omega(t)\|^2+\frac{1}{4}\|\xi(t)\|^2+C_2|\xi(t)|^2.
\end{eqnarray*}
Summarizing, for a sufficiently small $\gamma>0$ and some constant $C>0$ we can write that
$$
\partial_t\zeta(t)\le \left(\nu\|\omega(t)\|^2+C\right)\zeta(t)
$$
and \eqref{021912} follows by Gronwall's inequality.
\mbox{$\square$}
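In the form used in the last step, Gronwall's inequality reads as follows: since $\partial_t\zeta(t)\le a(t)\zeta(t)$ with $a(t):=\nu\|\omega(t)\|^2+C\ge0$,

```latex
\zeta(t)\le \zeta(0)\exp\left\{\int_0^t a(s)\,d s\right\}
=\zeta(0)\exp\left\{\nu\int_0^t\|\omega(s)\|^2d s+Ct\right\},
```

and \eqref{021912} follows because $\gamma\|\xi(t)\|^2\le\zeta(t)$ and $\zeta(0)=|\xi|^2+\gamma\|\xi\|^2\le C'\|\xi\|^2$.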
Concerning the estimates of $|DP_t\phi(w)[\xi]|$ we can write that
\begin{eqnarray}
\label{051912}
&&
e_{-\nu}(w)|DP_t\phi(w)[\xi]|=e_{-\nu}(w)\left|\mathbb E\left[(D\phi)(\omega(t;w))[\xi(t)]\right]\right|\\
&&
\le \|\!|\phi\|\!|_Ne_{-\nu}(w)\mathbb E\left[(1+\|\omega(t;w)\|^N)\|\xi(t)\|\right]\nonumber\\
&&
\le C\|\!|\phi\|\!|_Ne_{-\nu}(w)\left\{\mathbb E(1+\|\omega(t;w)\|)^{2N}\right\}^{1/2}\left\{\mathbb E\|\xi(t)\|^2\right\}^{1/2},\quad \forall\,w\in H.
\end{eqnarray}
By the already proved part 1) of the theorem and Lemma \ref{lm031912} we obtain that the right-most side is less than or equal to
$$
C_1\|\xi\|\|\!|\phi\|\!|_Ne_{-\nu}(w)(1+|w|^{4N})\mathbb E\exp\left\{\frac{\nu}{2}\int_0^t\|\omega(s;w)\|^2d s+C_1t\right\}\le C_2\|\xi\|\|\!|\phi\|\!|_N.
$$
Hence
$$
e_{-\nu}(w)\|DP_t\phi(w)\|\le C_2\|\!|\phi\|\!|_N
$$
and thus we have finished the proof of part 2) of Theorem \ref{T2}.
\section{Quaternionic Theories.}
The established formalism of quantum mechanics identifies physical
observables with Hermitian operators on a Hilbert space of states.
The existence of quantum states consisting of superpositions of pure
states is permitted by the Hilbert space structure.
Operators corresponding to different observables need not commute,
but the outcome of a measurement on a system will be an eigenvalue of
the corresponding Hermitian operator and hence real.
It is this last fact which led Birkhoff and von Neumann\cite{BvN} to
conclude that it is possible to consider the states to form a vector space
over the real, complex or quaternionic algebras.
Birkhoff and von Neumann state that these theories cannot be differentiated
by known formalistic criteria.
Stueckelberg \cite{Stk} has shown that the description of polarisation states
in a real number formulation of quantum mechanics (RQM) requires the
introduction of a super selection operator with all of the algebraic properties
of the imaginary unit $i$.
Hence, known phenomena have demanded the enlargement of RQM to be essentially
equivalent to complex quantum mechanics (CQM).
The remaining possibility involves considering Hilbert spaces over the
division algebra of real quaternions ${\bf H}$.
This is generated over the real numbers ${\bf R}$ by a basis of abstract
elements $\{i_0 = 1,\,i_1 ,\,i_2 ,\,i_3 \}$ with the multiplication rule,
$\forall r \in \{1,\,2,\,3\}$,
\begin{equation}
1i_r = i_r 1 = i_r ,\ \ i_{1}^2 = i_{2}^2 = i_{3}^2 = -1\ \ \text{and}%
\ \ i_1 i_2 = -i_2 i_1 = i_3 .
\end{equation}
That is, any quaternion $a$ can be written in the form
\begin{equation}
a = a_0 + a_1 i_1 + a_2 i_2 + a_3 i_3 ,\ a_\mu \in {\bf R} .
\end{equation}
The algebra has the $\ast$-anti-involution $1^{\ast} = 1$, $i^{\ast}_r = -i_r$,
$(ab)^{\ast} = b^{\ast}a^{\ast}$, and a norm $\|a\|^2 = a^{\ast}a \in {\bf R}$.
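For instance, writing $a=a_0+\vec a$ with $\vec a=a_1i_1+a_2i_2+a_3i_3$, the multiplication rules give

```latex
\|a\|^2 = a^{\ast}a = (a_0-\vec a)(a_0+\vec a) = a_0^2-\vec a^{\,2}
= a_0^2+a_1^2+a_2^2+a_3^2 ,
```

since the cross terms in $\vec a^{\,2}$ cancel by anticommutativity, leaving $\vec a^{\,2}=-(a_1^2+a_2^2+a_3^2)$. The norm is multiplicative, $\|ab\|=\|a\|\,\|b\|$, which is the defining property of a normed division algebra.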
If we introduce an additional complex unit $i_4 = \sqrt{-1}$, defined to
commute with each of $i_0$, $i_1$, $i_2$ and $i_3$, then the algebra ceases
to be a division algebra. This complexification of ${\bf H}$ is called the
biquaternionic algebra, and has been used to treat special relativity and
electromagnetism\cite{Lcz}. The Pauli matrices are complex $2\times 2$
matrix representations of the Hermitian biquaternion units $i_4 i_r$.
The Lie algebra of ${\rm SU}(2)$ is ${\bf H}$ with the commutator product
$[a,b]=ab - ba$. Hence, theories of spin-1/2 fermions,
Salam--Weinberg unified electro-weak theory, and Yang--Mills isospin can be
written in a quaternionic form\cite{IZ}. For a review of these and other
uses of the quaternion algebra in standard quantum mechanics and quantum
field theory, see \cite{Gsy} and references therein.
Quaternionic quantum mechanics (QQM) differs from these other
(mainstream) applications of the quaternions, because it considers the
Hilbert space of quantum states to be a vector space over ${\bf H}$.
Hence, eigenvalues of operators on the quaternionic Hilbert space
${\cal H}_{\bf H}$ need not commute, though we retain the identification of
physical observables with Hermitian operators which, as is well known,
necessarily have real eigenvalues only.
The relation between kets of ${\cal H}_{\bf H}$ and quaternionic
wavefunctions follows the usual pattern (where the $\Psi_\mu \in {\bf R}$).
\begin{equation}
\Psi(x_\mu ) \equiv \langle x_\mu |\Psi\rangle %
= \Psi_0 + \Psi_1 i_1 + \Psi_2 i_2 + \Psi_3 i_3 \,.
\end{equation}
Alternatively, we can express $\Psi$ as an ordered pair of $i_1$-complex
numbers, in its so-called symplectic representation,
\begin{equation}
\Psi(x_\mu ) = \psi_{\alpha} + i_2 \psi_{\beta} ,
\end{equation}
where $\psi_{\alpha} = \Psi_0 + i_1 \Psi_1$ and
$\psi_{\beta} = \Psi_2 - i_1 \Psi_3$.
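One checks directly, using $i_2i_1=-i_3$ and keeping the $i_1$-complex coefficients to the left of $i_2$, that the two representations agree:

```latex
\psi_{\alpha}+i_2\psi_{\beta}
= \Psi_0+i_1\Psi_1+i_2(\Psi_2-i_1\Psi_3)
= \Psi_0+\Psi_1 i_1+\Psi_2 i_2+\Psi_3 i_3 = \Psi .
```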
If we define $|\vec{\Psi}| = (\sum_{r=1}^{3}\Psi_{r}^{2})^{1/2} \in {\bf R}$,
then there exists a pure imaginary quaternion of norm unity $\eta$, such that
\begin{equation}
\Psi(x_\mu ) = \Psi_{0}(x_\mu ) + |\vec{\Psi}(x_\mu )|\eta(x_\mu ) .
\end{equation}
From this we see that any two wavefunctions of QQM do not commute unless
their imaginary parts are parallel. This property has confounded the
construction of a completely satisfactory tensor product, i.e., one which
would be quaternion linear in each factor, allow the definition of a tensor
product of operators on each factor, and admit a positive scalar product
for the purpose of second quantisation of the theory
(see \cite{NJ,HzR} for comprehensive discussions).
The obvious and prodigious success of CQM in describing physical systems
demands that QQM must reproduce its results wherever CQM has been successful.
This led Finkelstein {\it et al.} \cite{FJSS} to propose a geometric
generalisation of CQM, producing what is in effect an ``almost complex
quantum mechanics'' \cite{Nak}. Thus QQM is supposed to be
complex in any neighbourhood of an observation, modelled by replacing the
imaginary unit of CQM with a field of pure imaginary unit quaternions
$i_{\bf x}$ over spacetime, and the language of fibre-bundles is used
throughout their work.
Over larger distances, topological properties of the spacetime manifold
are permitted to manifest themselves in quantum mechanical expectation values.
Finkelstein {\it et al.} identify an intermediary limit between full
geometric QQM and standard CQM, which they call the ``electromagnetic limit''.
In this limit the Q--covariant derivative of the imaginary unit vanishes
and the Q--curvature term in the Lagrangian looks like that of EM. The
result is that no extra quaternionic degrees of freedom appear
in the wavefunction, and the theory is equivalent to a (complex) Abelian
gauge field theory.
Adler\cite{Adler} and Horwitz\cite{Hz} have investigated the case where
it is possible, by use of a quaternionic gauge transformation, to
choose all of the energy eigenkets of a system to be in the same complex
subset of ${\cal H}_{\bf H}$. Then QQM effects are dependent upon the
existence of new, hyper-complex components of the fundamental forces.
The search for these hypothetical extra components motivated the
experimental test of QQM undertaken by Kaiser, George and Werner \cite{KGW}
in 1984.
\section{Interferometry test of QQM.}
The experiment of Kaiser {\it et al.} is based on a proposal of Peres
\cite{Peres}, who suggested that if a pair of potential barriers were to
possess additional hyper-complex components, then a particle (in Peres's
proposal, a neutron) traversing the pair will experience a shift in the
phase of its wavefunction which will depend upon the order in which the
barriers are traversed.
Note that even though relativistic formulations of QQM have been proposed
\cite{FJSS,Adler}, this experiment limits itself to the special case of
classical, external fields acting on nonrelativistic matter-waves, as this
is the situation amenable to investigation by particle interferometry.
The problem of how to quantise quaternionic potentials remains open,
while quantisation of the $i_{\bf x}$ field requires the overcoming of
obstacles similar to those attending the construction of a quantised gravity.
Peres's original suggestion contained the idea that the one should use
potential barriers with a large absorptive cross-section, which
in the standard framework of CQM are modelled by potentials with
large, non-vanishing imaginary components. The idea behind this was that
the imaginary number appearing in the potential might depend upon the
material, so potentials corresponding to different barriers would not
commute, while having a large imaginary component might make it more likely
that the hyper-complex contribution to each potential is significant.
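The underlying algebraic point — that two potentials with different hyper-complex parts need not commute, whereas potentials confined to a single complex subalgebra always do — can be illustrated in a few lines of code (a toy sketch; the numerical values of the sample potentials are arbitrary):

```python
def qmul(a, b):
    """Hamilton product of quaternions a = a0 + a1*i1 + a2*i2 + a3*i3,
    represented as 4-tuples, with i1*i2 = i3 and i_r**2 = -1."""
    a0, a1, a2, a3 = a
    b0, b1, b2, b3 = b
    return (a0*b0 - a1*b1 - a2*b2 - a3*b3,
            a0*b1 + a1*b0 + a2*b3 - a3*b2,
            a0*b2 - a1*b3 + a2*b0 + a3*b1,
            a0*b3 + a1*b2 - a2*b1 + a3*b0)

# Multiplication table check: i1*i2 = i3 while i2*i1 = -i3.
assert qmul((0, 1, 0, 0), (0, 0, 1, 0)) == (0, 0, 0, 1)
assert qmul((0, 0, 1, 0), (0, 1, 0, 0)) == (0, 0, 0, -1)

# Potentials confined to one complex subalgebra (here spanned by 1, i1) commute...
V1, V2 = (2.0, 1.0, 0.0, 0.0), (1.0, 0.5, 0.0, 0.0)
assert qmul(V1, V2) == qmul(V2, V1)

# ...but adding hyper-complex (i2) parts breaks commutativity, which is
# what an order-dependent phase shift in the interferometer would probe.
W1, W2 = (2.0, 1.0, 0.3, 0.0), (1.0, 0.5, 0.2, 0.0)
assert qmul(W1, W2) != qmul(W2, W1)
```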
The total phase shift experienced by a neutron of wavelength $\lambda$,
traversing a plate of thickness $D$ and refractive index $n$ is
\begin{equation}
\phi = \frac{2\pi}{\lambda}(n-1)D ,
\end{equation}
where for the thermal neutrons used in the experiment, $n$ is related to the
coherent nuclear scattering length $b$ and the atom density $N$ of the barrier
material, by
\begin{equation}
n = 1 - \frac{\lambda^2 Nb}{2\pi}.
\end{equation}
Kaiser {\it et al.} decided that to achieve very high sensitivity to
differences in the total phase shift caused by traversal of the apparatus
in different directions, the thickness of the slabs should be such that
\begin{equation}
\phi = -\lambda NbD \approx 10\,000^{\circ} .
\end{equation}
This ruled out the possibility of using materials with very high absorption
cross-sections. Instead, they chose aluminium and titanium for their
differing chemical and nuclear properties, aluminium having a positive real
scattering amplitude while titanium has a negative real scattering amplitude,
though both have small absorption cross sections. The experiment, at the
10-MW University of Missouri Research Reactor, used thermal neutrons of
wavelength $\lambda = 1.268\,\text{\AA}$ in a Bonse--Hart three crystal
Laue--Laue--Laue interferometer (see \cite{KW} for a review of the techniques
of neutron interferometry).
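As a rough order-of-magnitude check of these numbers (using tabulated values $N\approx6.0\times10^{22}\,\text{cm}^{-3}$ and $b\approx3.45\,\text{fm}$ for aluminium — our assumed inputs, not figures quoted in the text), the slab thickness required for $\phi\approx10\,000^{\circ}$ comes out at a few millimetres:

```python
import math

# Assumed material constants for aluminium (not quoted in the text):
lam = 1.268e-8   # neutron wavelength in cm (1.268 Angstrom, as in the experiment)
N   = 6.0e22     # atom density of Al, atoms/cm^3 (assumed tabulated value)
b   = 3.45e-13   # coherent scattering length of Al, cm (assumed tabulated value)

phi = 10_000 * math.pi / 180.0   # target phase shift, in radians
D   = phi / (lam * N * b)        # slab thickness from |phi| = lambda*N*b*D

print(f"required Al slab thickness: {D * 10:.1f} mm")
```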
The result of Kaiser {\it et al.} was that, upon changing the order of traversal
of the Al and Ti barriers, no phase difference was observed to 1 part in
$30\,000$.
A theoretical analysis of the two potential barrier experiment was later
undertaken by Davies and McKellar \cite{DMK} using the quaternionic
one dimensional Schr\"{o}dinger equation of Adler \cite{Adler}.
This uses the symplectic
decomposition of the wavefunction to rewrite the quaternionic problem into
one of a pair of complex fields coupled through the hyper-complex components
of the system Hamiltonian. In time independent form, this is the pair of
equations
\begin{equation}
\left(-\frac{{\rm d}^2}{{\rm d}x^2} + V_{\alpha}\right)\psi_{\alpha}%
- V^{\ast}_{\beta}\psi_{\beta} = E\psi_{\alpha}
\end{equation}
\begin{equation}
\left(\frac{{\rm d}^2}{{\rm d}x^2} - V_{\alpha}\right)\psi_{\beta}%
- V_{\beta}\psi_{\alpha} = E\psi_{\beta}
\end{equation}
Davies and McKellar attempted to treat the problem exactly but found the
resulting barrier transmission and reflection coefficients to be too
unwieldy to analyse. Instead, they turned to numerical methods and
confirmed that the transmission coefficient for two quaternionic, square
potential barriers of different heights will change by a phase when the
system is traversed in the opposite sense. This then forces us to
ask how this conclusion can be consistent with the null result
(to one part in $30\,000$) of Kaiser {\it et al.}.
Now the possibility remains that QQM modifies only some of the fundamental
forces.
The experiment of Kaiser {\em et al.} would appear to severely
constrain any quaternionic component of the strong nuclear force, though
this experiment does not rule out the possibility that there might be a
quaternionic component to the underlying chromodynamics. Though QQM was
at one time considered as a possible framework for QCD, Adler has
proposed that QQM might provide a natural framework of preonic physics
\cite{Adler}.
Alternatively, it might be that the experiment of Kaiser {\em et al.}\
is too simplistic, as it only involves a single fundamental force.
One can conceive of a situation in which we associate a different complex
algebra with each of the fundamental forces, so that within each class
of interactions complex quantum theory suffices to describe the experimental
phenomena.
To rule out this possibility, Klein\cite{Klein} has suggested that we should
repeat the experiment with combinations of different fundamental forces
\footnote{Since this article first appeared, this experiment has been
attempted. We refer the reader to a description of the preliminary results
in \cite{ALL}.}.
The simplest extension is to introduce a sequence of metal barriers and
magnetic fields which interact with the neutron intrinsic spin.
Additionally, there are known phenomena associated with the terrestrial
gravitational field which are subject to investigation by neutron
interferometric methods.
The fact that such effects have been successfully taken into account in
previous interferometric experiments means that we can already severely
constrain the contribution of a quaternionic addition to the gravitational
force, or else deduce that the gravitational force reached in the classical
limit from a quaternionic theory of quantised gravity, involves observables and
states constrained to lie in a complex subset of ${\bf H}$ coincident with
that selected by the strong force.
An experiment of the type described in \cite{Klein}, if it were to produce a
null result, would leave only the weak interaction as a possible force with
a quaternionic component. We should then search for new effects in
neutrino physics (already suggested by Peres\cite{Peres}).
The extreme weakness of these interactions, however, renders very unlikely
the possibility of carrying out interferometric experiments of the
type previously considered.
A positive outcome from Klein's experiment would give us some knowledge of the
particular form of QQM preferred by Nature. If there was observed a pure
phase shift introduced by swapping the order of potential barriers, this would
imply that we live in the Q--flat limit of QQM (or that Q--curvature is
negligible at the scale of the experiment, and perhaps also in the vicinity of
the Earth). Any more complicated observational result, in particular the
production of qualitatively different interferometric patterns, would imply
the active presence of a Q--curvature.
Before moving on to consider experiments on correlated few body systems,
we mention that Peres \cite{Peres} in fact suggested three
different types of experiments. Besides the experimentally realised
interferometry test described above, he derived a universal relation between
the scattering cross-sections of three coherent sources, taken singly and
pairwise. Numerical violation of this relation would indicate the failure
of the complex description.
Additionally, he suggested the investigation of K$_s$ meson regeneration
by different media taken singly and pairwise.
To the best of our knowledge, neither possibility has been taken up by
experimental researchers.
\section{Experiments on multi-particle correlated systems.}
Note that in the previous discussion, the presence of quaternionic effects
depends upon the existence of hyper-complex components in the interaction
potentials of a physical system.
Theoretical analysis of this situation requires the adoption
of a particular form for the quaternionic dynamics, which in the
non-relativistic Q--flat limit might be characterised by the assumption of QQM
essentially reducing to CQM in potential free regions of space.
Recently, Brumby, Joshi, and Anderson \cite{BJA} have questioned the need to
demand this integrability of the Q--curvature,
instead proposing a different class of experiments.
To this end, consider the variant due to Greenberger {\it et al.} \cite{GHSZ}
of Bohm's {\em gedankenexperiment}\/ test of the Einstein--Podolsky--Rosen
programme. This envisages the preparation of an entangled four body state
\begin{eqnarray}
|\Psi _{\text{GHSZ}}\rangle & = & \frac{1}{\sqrt{2}}\left(%
|+\rangle_{1}\!\otimes|+\rangle_{2}\!\otimes|-\rangle_{3}\!\otimes%
|-\rangle_{4} \right. \nonumber \\
&& \ \ \ \ \ \:-\ %
\left. |-\rangle_{1}\!\otimes|-\rangle_{2}\!\otimes|+\rangle_{3}\!\otimes%
|+\rangle_{4}\right) ,
\end{eqnarray}
whose constituents are permitted to propagate to space-like separations.
We then carry out simultaneous measurements of a component of spin of each
particle (using, say, sets of Stern--Gerlach magnets) and consider the product
of these values. The quantum mechanical expectation value is thus
\begin{equation}
E_{\text{GHSZ}}(\{{\bf n}_j\}) = \langle \Psi _{\text{GHSZ}}|%
\bigotimes_{j=1}^{4} {\bf n}_j\cdot \bbox{\sigma}_j%
|\Psi _{\text{GHSZ}}\rangle ,
\end{equation}
where ${\bf n}_j$ gives the orientation of the $j^{\text{th}}$
Stern--Gerlach apparatus, and $\sigma_j^s$ is the $s^{\text{th}}$
Pauli matrix at the position ${\bf x}_j$ of that apparatus
(and so is a $2\times 2$, $i_{{\bf x}_j}$-complex matrix).
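For comparison, in standard CQM, if all four analysers are oriented in the $x$--$y$ plane, ${\bf n}_j=(\cos\phi_j,\sin\phi_j,0)$, a short calculation with the state above gives the familiar GHSZ result

```latex
E_{\text{GHSZ}}(\{\phi_j\}) = -\cos(\phi_1+\phi_2-\phi_3-\phi_4) ,
```

which depends only on the relative azimuthal angles; any residual dependence on the absolute orientations, or on the local imaginary units $i_{{\bf x}_j}$, would therefore signal a departure from the complex theory.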
Assuming only that in the nonrelativistic limit the intrinsic spin of a
particle does not affect its propagation in free space,
Brumby {\it et al.} now suggest that this expectation value will be
sensitive to the Q--curvature throughout the 2d region whose boundary
intersects the four observation points. Hence, geometric QQM predicts
experimental results unexplainable within the structure of standard CQM.
Our major difficulty lies in
the adoption of a tensor product of single particle states which adequately
handles the noncommutation of quaternionic wavefunctions and also permits us to
return to an equivalent complex quantum mechanical system through some natural
limiting process (we use the tensor product of quaternionic Hilbert
modules developed by Horwitz and Razon \cite{HzR}). While this question
remains open, investigation of alternative structures has suggested
composition of quaternionic Hilbert spaces could be viewed as a lattice
theoretic problem \cite{NJ}.
It is potentially significant that we show geometric QQM to agree with CQM
in the case of a two body correlated system, essentially due to the
dimensionality of the space within which we align our experimental apparatus,
and the natural requirement that we use only relative angles when constructing
the operator corresponding to the simultaneous observation of components of
intrinsic spin. This special ``hiding'' effect in two body systems is lost
in ${\cal N} \geq 3$ body systems, and hints that the apparent lack of
evidence for QQM may be due to the subtlety of the theory.
Experimental investigations of violations of the Bell inequalities have
almost always used photons rather than electrons (see \cite{CS,Asp}).
Because photons possess a single quantum of angular momentum, the
operator corresponding to a polarisation filter has real components
and QQM has no opportunity to manifest itself.
This situation is changed by the use of circularly polarised photons, in which
case the quantum mechanical prescription for calculating the expectation
values necessarily uses complex numbers, giving the quaternionic entangled
state a probability distribution sensitive to a changing $i_{\bf x}$--field
(hence, not predicted by CQM).
Our prediction of quaternionic terms in multiparticle correlation experiments
provides a further motivation to the reasons given by Greenberger {\em et al.}
to undertake such experiments.
With this type of experiment, we are probing the structure of
geometric QQM's bundle of quaternionic fields over spacetime
(equivalent to determining the integrability of the almost complex structure).
This raises the
possibility that topological defects in the spacetime manifold will have
a new way of affecting the predictions of quantum mechanics in their
vicinity.
Work is in progress on a quaternionic quantum field theory which will allow
us to begin to investigate the consequences of the inclusion of such objects
\cite{BJ}.
\acknowledgements
One of the authors, S.P.B., acknowledges the support of an Australian
Postgraduate Research Award. G.C.J.\ was supported by the Australian
Research Council and the University of Melbourne.
\references
\bibitem{BvN} G.\ Birkhoff and J.\ von Neumann,
Ann. Math. {\bf 37}, 823 (1936).
\bibitem{Stk} E.\ C.\ G.\ Stueckelberg,
Helv. Phys. Acta {\bf 33}, 727 (1960).
\bibitem{Lcz} C.\ Lanczos,
{\it Variational Principles of Mechanics}
(U.\ of Toronto Press, Toronto, 1949).
\bibitem{IZ} C.\ Itzykson and J-B.\ Zuber,
{\it Quantum Field Theory}
(McGraw--Hill, New York, 1980).
\bibitem{Gsy} F.\ G\"{u}rsey,
in {\it Symmetries in Physics (1600--1980) :
Proceedings of the 1st International Meeting on the
History of Scientific Ideas}
(Seminari d'Historia de les Ciencies, Barcelona, 1983),
pp.\ 557--589.
\bibitem{NJ} C.\ G.\ Nash and G.\ C.\ Joshi,
Int.\ J.\ Theor.\ Phys.\ {\bf 31}, 965 (1992);
J.\ Math.\ Phys.\ {\bf 28}, 2883 (1987);
{\bf 28}, 2886 (1987).
\bibitem{HzR} A.\ Razon and L.\ P.\ Horwitz,
Acta Appl.\ Math.\ {\bf 24}, 141 (1991);
{\bf 24}, 179 (1991);
J.\ Math.\ Phys.\ {\bf 33}, 3098 (1992).
\bibitem{FJSS} D.\ Finkelstein, J.\ M.\ Jauch, S.\ Schiminovich, and
D.\ Speiser,
J.\ Math.\ Phys.\ {\bf 3}, 207 (1962);
{\bf 4}, 788 (1963).
\bibitem{Nak} M.\ Nakahara,
{\it Geometry, Topology and Physics}
(Institute of Physics, Bristol, 1990), Chap.\ 8, Sec.\ 7.
\bibitem{Adler} S.\ L.\ Adler,
Phys.\ Rev.\ D {\bf 17}, 3212 (1978);
Phys.\ Lett.\ {\bf 86B}, 203 (1979)
(ERRATUM-{\em ibid.}\ {\bf 87B}, 406 (1979));
Phys.\ Rev.\ Lett.\ {\bf 57}, 167 (1986);
Phys.\ Rev.\ D {\bf 37}, 3654 (1988);
Phys.\ Lett.\ B {\bf 332}, 358 (1994);
{\it Quaternionic quantum mechanics and quantum fields\/}
(Oxford, New York, 1995), and references therein.
\bibitem{Hz} L.\ P.\ Horwitz,
J.\ Math.\ Phys.\ {\bf 34}, 3405 (1993);
L.\ P.\ Horwitz and L.\ C.\ Biedenharn,
Ann.\ Phys.\ {\bf 157}, 432 (1984).
\bibitem{KGW} H.\ Kaiser, E.\ A.\ George and S.\ A.\ Werner,
Phys.\ Rev.\ A {\bf 29}, R2276 (1984).
\bibitem{Peres} A.\ Peres,
Phys.\ Rev.\ Lett.\ {\bf 42}, 683 (1979);
quant-ph/9605024.
\bibitem{KW} A.\ G.\ Klein and S.\ A.\ Werner,
Rep.\ Prog.\ Phys.\ {\bf 46}, 259 (1983).
\bibitem{DMK} A.\ J.\ Davies and B.\ H.\ J.\ McKellar,
Phys.\ Rev.\ A {\bf 40}, 4209 (1989); {\bf 46}, 3671 (1992);
A.\ J.\ Davies, Phys.\ Rev.\ D {\bf 41}, 2628 (1990).
\bibitem{Klein} A.\ G.\ Klein,
Physica B {\bf 151}, 44 (1988).
\bibitem{ALL} B.\ Allman, {\it et al.},
to be published in J.\ Japan Phys.\ Soc.\ (1996).
\bibitem{BJA} S.\ P.\ Brumby, G.\ C.\ Joshi and R.\ Anderson,
Phys.\ Rev.\ A {\bf 51}, 976 (1995).
\bibitem{GHSZ} D.\ M.\ Greenberger, M.\ Horne, and A.\ Zeilinger,
in {\em Bell's Theorem, Quantum Theory, and Conceptions
of the Universe}, edited by M.\ Kafatos
(Kluwer Academic, Dordrecht, The Netherlands, 1989), p.\ 73;
D.\ M.\ Greenberger, M.\ A.\ Horne, A.\ Shimony
and A.\ Zeilinger,
Am.\ J.\ Phys.\ {\bf 58}, 1131 (1990).
\bibitem{CS} J.\ F.\ Clauser and A.\ Shimony,
Rep.\ Prog.\ Phys.\ {\bf 41}, 1881 (1978).
\bibitem{Asp} A.\ Aspect and P.\ Grangier,
in {\it Quantum Concepts in Space and Time},
edited by R.\ Penrose and C.\ J.\ Isham
(Oxford Science Publications, Oxford, 1986).
\bibitem{MoFes} P.\ M.\ Morse and H.\ Feshbach,
{\it Methods of Theoretical Physics}
(McGraw--Hill, New York, 1953), Vol.\ 1.
\bibitem{BJ} S.\ P.\ Brumby and G.\ C.\ Joshi,
hep-th/9610033; to be published in Found. Phys. (1996).
\end{document}
\section{Introduction}
Edge Computing (EC) is an enabling paradigm for developing technologies like the Internet of Things (IoT), 5G, online gaming, augmented reality (AR), vehicle-to-vehicle communications, smart grids, and real-time video analytics. In essence, EC brings the services and utilities of cloud computing closer to the end user, providing low latency, location awareness, and better efficiency for delay-sensitive mobile applications \cite{khan2019edge}.
To accomplish this, computational resources are placed as close as possible to the mobile devices, with servers of moderate capability placed at the edge of the network to meet user-centric requirements \cite{Shakarami2020}.
Many applications, such as smart grids with a large number of sensors, continuously collect data from the surrounding environment across medium to large geographical areas \cite{Yu2011}.
Because most sensors are resource-constrained, they cannot run expensive or data-intensive tasks like machine learning (ML) algorithms. In this case, it is more efficient to send these data to more powerful resources nearby (placed at the edge of the network) for faster processing, i.e., with low latency \cite{Lopes20, Coutinho2020}.
Another example is wearable technologies, which can depend on energy- and resource-constrained mobile devices or low-end servers placed nearby to process their data streams \cite{Jin2021}.
In these scenarios, a remarkable trend is the increasing adoption of ML techniques to execute tasks like data classification, spam filtering, anomaly detection in network traffic for cybersecurity, real-time image classification and segmentation, driving support applications, autonomous driving vehicles, and many others. Among ML techniques, ensembles of classifiers have demonstrated remarkable predictive performance for the classification of data streams~\cite{gomes2017survey}.
Historically, ML research focused on improving predictive performance without constraints on computational resources and energy consumption. However, the need to cope with dynamic environments producing potentially infinite data streams with non-stationary behavior, and to run ML algorithms either on mobile devices or on low-end servers at the edge, creates additional challenges.
On the algorithmic side, the ML field is shifting towards data stream learning to face such challenges where requirements like single pass, response time, and constant memory usage are imposed~\cite{gama2014survey}. Nevertheless, little effort has been made to reduce the energy consumption of such algorithms \cite{energy-albert}.
To address this gap, we build on previous research~\cite{HPCC,IS} to investigate how to optimize the classification of data streams with bagging ensembles with respect to energy efficiency, time performance (i.e., delay and throughput), and predictive performance. The main contributions of this paper can be summarized as follows:
\begin{enumerate}
\item We present the first study that applies {\it mini-batching} (proposed in \cite{HPCC,IS}) to improve the energy efficiency of bagging ensembles;
\item We identify the trade-offs between time performance, energy efficiency, and predictive performance of bagging ensembles for the classification of online data streams;
\item We demonstrate how to balance these trade-offs to obtain significant gains in performance and energy efficiency, at the cost of a negligible loss in predictive performance, on a realistic stream processing testbed with ({\it i}) six state-of-the-art ensemble algorithms, ({\it ii}) five widely used benchmark datasets, ({\it iii}) three computer architectures ranging from micro- to mid-end servers, and ({\it iv}) three levels of stream workload intensity (i.e., 10\%, 50\%, and 90\% of the maximum server capacity);
\item Our results show significant gains in 96\% of the evaluated cases.
\end{enumerate}
The remainder of this article is organized as follows. \autoref{sec:relatedwork} discusses related work. The state-of-the-art bagging ensemble algorithms are described in \autoref{sec:ensembles}. We present our proposal for applying mini-batching in parallel implementations of bagging ensemble algorithms in \autoref{sec:proposal}, followed by the experimental evaluation in \autoref{sec:experiments}. Finally, our conclusions are presented in \autoref{sec:conclusions}.
\section{Related work}
\label{sec:relatedwork}
{
Computing nodes, from the smallest devices to high-end servers, comprise several components, most of which can be optimized to save energy \cite{Orgerie2014}.
Components ranging from a simple functional unit within a chip to CPU cores, disks, network interface cards (NICs), and entire boards can be put into sleep or idle states to save power when not delivering services,
achieving proportional computing \cite{4404806}. Dynamic power management (DPM) encompasses a set of techniques that achieve energy efficiency by selectively turning off (or reducing the performance of) system components when they are idle (or partially unexploited) \cite{benini2000survey}. Dynamic Voltage and Frequency Scaling (DVFS) defines several frequency levels at which a processor can operate, where lower frequencies trade performance for power savings \cite{Orgerie2014, snowdon2005power}. Sensors and wearable devices that use small-sized
batteries can also be optimized to save power while delivering functionalities including sensing, storage, and computation \cite{Jin2021}.
The work in \cite{Coutinho2020} combines DVFS and DPM to optimize the performance and energy savings of single-board computers, using a Pareto frontier for multi-criteria optimization of several compute-intensive kernels. {
Although DVFS can reduce energy consumption, scaling the CPU clock frequency also affects the performance of other applications running on the same node.}}
A modular, scalable, and efficient FPGA-based implementation of kNN for System-on-Chip devices is presented in \cite{Vieira-KNN}. The solution shows improvements of 60X in execution time and 50X in energy efficiency thanks to the low power consumption of FPGAs. Despite its outstanding performance, the contribution is specific to a single algorithm and hardware platform.
The work in \cite{Martin15} emphasizes energy consumption and energy efficiency as important factors to consider during data mining algorithm analysis and evaluation. The work extended the CRISP (Cross Industry Standard Process for Data Mining) framework to include energy consumption analysis, demonstrating how energy consumption and accuracy are affected when varying the parameters of the Very Fast Decision Tree (VFDT) algorithm. The results indicate that energy consumption can be reduced by up to 92.5\% while maintaining accuracy.
In \cite{Amezzane19}, the authors analyze power consumption for both batch and online data stream learning. They experimented with three online and three batch algorithms. Among their conclusions is the finding that the CPU consumes up to 87\% of the total energy.
Although it evaluates online learners, this work tested only single-model classifiers.
{
Aiming to reduce the memory cost, the work in \cite{Costa2018} proposed the Strict VFDT (SVFDT), an algorithm which extends the VFDT. Designed for memory constrained devices,
the algorithm minimizes unnecessary tree growth, substantially reducing memory usage and execution time, while keeping competitive predictive performance. A comparison of the four-way relationship among time efficiency, energy consumption, predictive performance, and memory costs is presented in \cite{Lopes20}. The comparison is made by tuning the hyper-parameters of VFDT, SVFDT, and SVFDT with OLBoost. The work demonstrated that the most complex method delivers the best predictive performance at the expense of worse memory and energy performance.
In \cite{energy-albert}, the authors presented an energy-efficient approach to real-time prediction with high levels of accuracy called \textit{nmin adaptation}. It reduces the energy consumption of Hoeffding Tree ensembles by adapting the number of instances required to create a split. This method can reduce energy consumption by 21\% on average with a small impact on accuracy. They also presented detailed theoretical energy models for ensembles of Hoeffding trees and a generic approach to creating energy models applicable to any class of algorithms. Although these works propose more energy-efficient algorithms, the benefits achieved are limited to the specific algorithms. In contrast, our work focuses on optimizations that can be applied to several bagging ensembles. }
{
In summary, related works are still scarce, and they mainly focus either on designing power-efficient algorithms or on optimizing existing ones. Our purpose is to study the benefits of and trade-offs between time performance (e.g., latency and throughput), energy efficiency, and predictive performance when applying mini-batching, an optimization technique (proposed in \cite{CASSALES2021260}) that can be applied to bagging ensembles.
Mini-batching is orthogonal to any {\it ad hoc} optimization of specific learning algorithms within the ensemble, such as the proposals found in \cite{Costa2018,energy-albert,Martin15,Vieira-KNN}, and can therefore be combined with such optimizations. This combination, however, is out of this work's scope. }
\section{Bagging ensembles for stream processing}
\label{sec:ensembles}
In many applications, learning algorithms have to cope with dynamic environments that collect potentially unlimited data streams.
Formally, a data stream $S$ is a massive sequence of data elements $x_1, x_2, \dots, x_n$, that is, $S={\{x_i\}}_{i=1}^n$, which is potentially unbounded ($n \rightarrow \infty$) \cite{silva2013data}.
As mentioned before, stream processing algorithms have additional requirements, which may be related to memory, response time, or a transient behavior presented by the data stream. In this context, one of the most widely used algorithms is the Hoeffding Tree~\cite{HOEFFDING_TREE}.
The Hoeffding Tree (HT) is an incremental decision tree designed to cope with massive data streams. It can create splits with reasonable confidence about the data distribution while having very few instances available. This is possible because of the Hoeffding Bound (HB), which states that, with probability $1 - \delta$, the true mean of a random variable lies within $\epsilon$ of the average of the observed values:
\begin{equation}
\epsilon = \sqrt{ \frac{R^2\ \ln(1/\delta)}{2n} },
\end{equation}
where $r$ is a real-valued random variable with range $R = r_{max} - r_{min}$ (i.e., the difference between the maximum and minimum values of $r$), considering the $n$ independent observations of $r$.
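For concreteness, the bound is straightforward to evaluate; the following Python sketch (the helper name is ours) shows how $\epsilon$ shrinks as more instances are observed:

```python
import math

def hoeffding_bound(value_range: float, delta: float, n: int) -> float:
    """Epsilon such that, with probability 1 - delta, the observed
    average of n independent samples lies within epsilon of the true mean."""
    return math.sqrt((value_range ** 2) * math.log(1.0 / delta) / (2.0 * n))

# With R = 1 (e.g., a bounded difference of split heuristics) and a
# small delta, a few thousand instances already yield a tight epsilon.
eps_200 = hoeffding_bound(1.0, 1e-7, 200)    # loose bound
eps_5000 = hoeffding_bound(1.0, 1e-7, 5000)  # much tighter
```

In an HT, $\epsilon$ is compared against the difference between the two best split candidates to decide whether a split can be made with confidence.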
\begin{figure*}[ht]
\centering
\begin{tikzpicture}[]
\draw (-2.65,2.75) rectangle (-1.0,1.75) node[pos=0.5] (l1) {...};
\draw (-2.65,1.75) rectangle (-1.0,0.75) node[pos=0.5] {$x^{t-1},y^{t-1}$};
\draw (-2.65,0.75) rectangle (-1.0,-0.25) node[pos=0.5] (data) {$x^t,y^t$};
\draw (-2.65,-0.25) rectangle (-1.0,-1.25) node[pos=0.5] {$x^{t+1},y^{t+1}$};
\draw (-2.65,-1.25) rectangle (-1.0,-2.25) node[pos=0.5] {...};
\node[rotate=90] at (-3.05,0.1) {\large Stream};
\draw (-0.75,3.5) rectangle (10,-3.0);
\draw (-0.5,1) rectangle (1.5,-0.5) node[pos=0.5,align=center] (input) {Poisson\\distribution\\weighting};
\draw (3,3) rectangle (5,2) node[pos=0.5] (tree1) {learner \#1};
\draw (3,1.5) rectangle (5,0.5) node[pos=0.5] (tree2) {learner \#2};
\draw[white] (3,0) rectangle (5,-1) node[black, pos=0.5] {...};
\draw (3,-1.5) rectangle (5,-2.5) node[pos=0.5] (treen) {learner \#n};
\draw (7,0.75) rectangle (9.5,-0.75) node[pos=0.5, align=center] (vote) {Majority vote\\ aggregation};
\draw (11.5,0) ellipse (1cm and 0.5cm) node[anchor=center] {Decision};
\draw[-latex] (data) -- (input);
\draw[-latex] (1.5,0.25) -- (3,2.5);
\draw[-latex] (1.5,0.25) -- (3,1);
\draw[-latex] (1.5,0.25) -- (3,-2);
\draw[-latex] (5,2.5) -- (7,0);
\draw [-latex] (5,1) -- (7,0);
\draw[-latex] (5,-2) -- (7,0);
\draw[-latex] (9.5,0) -- (10.4,0);
\end{tikzpicture}
\caption{Example of a Bagging Ensemble organization.}
\label{fig:ensemble}
\end{figure*}
One of the shortcomings of the HT is that a single tree is considered a `weak' model in the sense that a single tree cannot accurately model complex learning problems. One approach to circumvent such `weakness' is to ensemble several models. A popular strategy to create an ensemble of learners is Bagging~\cite{Breiman1996}.
Although Breiman proposed Bagging and its variants (e.g., Random Forest) more than 20 years ago \cite{Breiman1996}, they are still used to this day as an effective method to reduce error without resorting to intricate models, such as deep neural networks, that are not trivial to train and fine-tune.
In contrast to Boosting~\cite{ADABOOST}, Bagging does not create dependency among the base models, facilitating the parallelization of the method in an online fashion.
Fig. \ref{fig:ensemble} {
depicts the streaming adaptation of Bagging, which calculates different weights using a Poisson distribution to simulate the batch bootstrapping process while preserving independence among the learners.
The Poisson distribution effectively creates several different subsets of the data, allowing some instances to be used repeatedly for training while others are ignored.
}
Besides that, Bagging variants yield higher predictive performance in the streaming setting than Boosting or other ensemble methods that impose dependencies among their base models. This phenomenon has been observed in several empirical studies \cite{OzaBag,ozabagadwin,Levbag,Gomes2017} and can be attributed to the difficulty of effectively modeling the dependencies in a streaming scenario, as noted in \cite{gomes2017survey}.
Next, we present a summary description of six ensemble algorithms that evolved from the online (streaming) adaptation of the original Bagging by Oza and Russell~\cite{OzaBag}.
Although other incremental decision tree algorithms exist~\cite{EFDT}, the HT algorithm is often chosen as the base model for online bagging algorithms. The HT is a common denominator: it may not be the most accurate individually, but it yields reasonable predictive performance without requiring too many computational resources. Our experiments employ the HT as the base model for all six algorithms used to evaluate our mini-batching strategy.
\textbf{Online Bagging (OzaBag - OB)}~\cite{OzaBag} is an incremental adaptation of the original Bagging algorithm. The authors demonstrate how the bootstrapping process can be adapted to an online setting using a Poisson($\lambda=1$) distribution. In essence, instead of sampling with replacement from the original training set, Online Bagging uses Poisson($\lambda=1$) to assign weights to each incoming instance. These weights represent the number of times an instance will be `repeated' to simulate bootstrapping. One concern with using $\lambda=1$ is that about $37\%$ of the instances will receive weight 0 and thus not be used for training, which is desirable for approximating the offline version of Bagging, but may be detrimental in an online learning setting~\cite{gomes2017survey}. Therefore, other works~\cite{Levbag,Gomes2017} increase the number of times an instance is used for training by increasing the $\lambda$ parameter.
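The $37\%$ figure follows from $P(\mathrm{Poisson}(1)=0)=e^{-1}\approx 0.368$ and is easy to verify empirically; a stdlib-only Python sketch (the sampler and names are ours, not MOA code):

```python
import math
import random

def poisson_sample(lam: float, rng: random.Random) -> int:
    """Draw from Poisson(lam) using Knuth's multiplication method."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

rng = random.Random(42)
weights = [poisson_sample(1.0, rng) for _ in range(100_000)]
# Fraction of instances that would be skipped (weight 0) under OzaBag:
zero_fraction = weights.count(0) / len(weights)  # close to e^{-1}
```

Raising $\lambda$ to 6, as LBag and ARF do, drops the zero-weight probability to $e^{-6}\approx 0.25\%$, so almost every instance trains every learner.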
\textbf{OzaBag Adaptive Size Hoeffding Tree (OBagASHT)} \cite{ozabagadwin} combines OzaBag with Adaptive-Size Hoeffding Trees (ASHT). These trees have a maximum number of split nodes and policies to prevent the tree from growing beyond this parameter (i.e., deleting some nodes). This algorithm's objective was to improve predictive performance by enforcing the creation of different trees. Effectively, diversity is created by having trees with different reset speeds in the ensemble, according to their maximum sizes. The intuition is that smaller trees can adapt more quickly to changes, while larger trees can provide better performance on data with little to no change in distribution. Unfortunately, in practice, this algorithm did not outperform variants that relied on other mechanisms for adapting to changes, such as resetting learners periodically or reactively~\cite{gomes2017survey}.
\textbf{Online Bagging ADWIN (OBADWIN)}~\cite{ozabagadwin} combines OzaBag with the ADAptive WINdow (ADWIN)~\cite{ADWIN} change detection algorithm.
When a change is detected, the classifier with the lowest predictive performance is replaced by a new classifier.
ADWIN keeps a variable-length window of recently seen items, with the property that the window has the maximal length statistically consistent with the hypothesis that there has been no change in the average value inside the window. This implies that the average over the existing window can be reliably taken as an estimate of the current average in the stream at any time, except for a very small or very recent change that is still not statistically visible.
\textbf{Leveraging Bagging (LBag)}~\cite{Levbag} extends OBADWIN by increasing the $\lambda$ parameter of the Poisson distribution to $6$, effectively causing each instance to have a higher weight and be used for training more often.
In contrast to OBADWIN, LBag maintains one ADWIN detector per model in the ensemble and independently resets the models.
This approach leverages the predictive performance of OBADWIN by merely training each model more often (higher weight) and resetting them individually. One drawback of LBag compared with OB and OBADWIN is that it requires more memory and processing time since the base models are trained more often, and there are more instances of ADWIN.
In \cite{Levbag}, the authors also attempted to further increase the diversity of LBag by randomizing the output of the ensemble via random output codes. However, this approach was not very successful compared to maintaining a deterministic combination of the models' outputs.
\textbf{Adaptive Random Forest (ARF)} is an adaptation of the original Random Forest algorithm~\cite{RANDOM_FORESTS} to the streaming setting. Random Forest can be seen as an extension of Bagging, where further diversity among the base models (decision trees) is obtained by randomly choosing a subset of features to be used for further splitting leaf nodes.
ARF uses the incremental decision tree algorithm Hoeffding tree~\cite{HOEFFDING_TREE} and simulates resampling as in LBag, i.e., Poisson($\lambda=6$). The Adaptive part of ARF stems from the change detection and recovery strategies based on detecting warnings and drifts per tree in the ensemble. After a warning is signaled, another model is created (namely, a `background tree') and trained without affecting the ensemble predictions. If the warning escalates to a drift signal, then the associated tree is replaced by its background tree. Notice that in the worst case, the number of tree models in ARF can be at most double the total number of trees due to the background trees. However, as noted in \cite{Gomes2017} the co-existence of a tree and its background tree is often short-lived.
\textbf{Streaming Random Patches (SRP)} \cite{SRP} is an ensemble method specially adapted to stream classification, which combines random subspaces and online bagging. Unlike ARF, SRP is not constrained to a specific base learner, since its diversity-inducing mechanisms are not built into the base learner; i.e., SRP uses global randomization while ARF uses local randomization.
Even so, all the experiments in \cite{SRP} focused on Hoeffding trees and showed that SRP can produce deeper trees, which may lead to increased diversity in the ensemble.
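The global randomization can be illustrated by how the random subspaces are drawn: each learner receives its own subset of feature indices, independently of the base learner's internals. A hypothetical Python sketch (ours, not the actual SRP implementation):

```python
import random

def draw_subspaces(n_features: int, subspace_size: int,
                   n_learners: int, rng: random.Random):
    """One random feature subset per learner (the random-subspaces part);
    combined with online bagging weights this yields 'random patches'."""
    return [sorted(rng.sample(range(n_features), subspace_size))
            for _ in range(n_learners)]

rng = random.Random(3)
# e.g., the Covertype dataset used in our experiments has 54 features
subspaces = draw_subspaces(54, 8, 5, rng)
```

Because the subsets are drawn outside the learners, any incremental classifier can be plugged in unchanged.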
\section{Mini-batching for improving the performance of ensembles}
\label{sec:proposal}
Although all the learners that compose an ensemble may be homogeneous in type, each has its own (and different) model. For instance, each learner may implement a Hoeffding Tree that grows into a different shape and changes over time.
One advantage of such methods is that task parallelism can naturally be applied as the underlying classifiers in bagging ensembles execute independently from each other and without communication.
Algorithm \ref{alg:highlevel-new} depicts a task-parallel-based implementation.
This version improves the performance of the current parallel implementation of the ARF algorithm~\cite{Gomes2017}, in the latest version in MOA~\cite{bifet2010moa}, by reusing the data structures and avoiding the costs of allocating new ones for every instance to be processed.
\begin{algorithm}
\caption{High level parallel algorithm}
\label{alg:highlevel-new}
\begin{algorithmic}[1]
\State {\bf Input}: an ensemble $E$, $num\_threads$, a data stream $S$
\State $P \gets Create\_service\_thread\_pool(num\_threads)$
\State $T \gets Create\_trainers\_collection(E)$
\For {each arriving instance $I$ in stream $S$}
\State $E$.classify($I$)
\For {each trainer $T_i$ in trainers $T$}
\State $k \gets poisson(\lambda)$
\State $T_i.update(I, k)$
\EndFor
\For {all trainers $T$} {\bf in parallel}
\State $W\_inst \gets I * k$
\State $Train\_on\_instance(W\_inst)$
\EndFor
\If {change detected}
\State $reset\_classifier$
\EndIf
\If {$ElapsedTime > Timeout$}
\State {\textbf{break}}
\EndIf
\EndFor
\end{algorithmic}
\end{algorithm}
In lines 2-3, we start a thread pool and create one Trainer (runnable) for each ensemble classifier.
For each arriving data instance (lines 4-20), if the program's elapsed time has not surpassed the set timeout, we obtain the votes from all the classifiers (line 5).
Then, we compute the \textit{Poisson} weights and update the data structures for training in lines 6-9. The prediction phase has a low computational cost because the algorithm uses Hoeffding trees~\cite{HOEFFDING_TREE}, and thus, we classify instances sequentially.
On the other hand, the training phase is more expensive. It involves updating many statistics on each tree's nodes, calculating new splits, and detecting data distribution changes (for three methods).
As the training phase dominates the computational cost, parallelism is implemented (in lines 10-13) by simultaneously training many classifiers.
Lines 14-16 represent the global change detector, present only in OBADWIN, where we replace the ensemble's worst classifier with a brand new one.
Finally, lines 17-19 represent the timeout condition, where the ensemble will run for a limited period and then finish processing.
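As a sketch of this task-parallel structure (ours, in Python rather than the MOA/Java implementation), the per-learner Poisson weights are drawn sequentially and the training tasks are then dispatched to a thread pool:

```python
import math
import random
from concurrent.futures import ThreadPoolExecutor

def poisson_sample(lam, rng):
    # Knuth's multiplication method, stdlib-only
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

class CountingLearner:
    """Stand-in for a Hoeffding Tree: records the total training weight."""
    def __init__(self):
        self.trained_weight = 0
    def train_on_instance(self, instance, weight):
        self.trained_weight += weight

rng = random.Random(7)
learners = [CountingLearner() for _ in range(10)]
with ThreadPoolExecutor(max_workers=4) as pool:
    for instance in range(1000):          # the arriving stream
        weights = [poisson_sample(1.0, rng) for _ in learners]
        futures = [pool.submit(l.train_on_instance, instance, w)
                   for l, w in zip(learners, weights)]
        for f in futures:                 # wait before the next instance
            f.result()

total_weight = sum(l.trained_weight for l in learners)
```

Waiting for all futures before fetching the next instance mirrors the per-instance synchronization of the task-parallel algorithm; it also exposes the per-instance overhead that mini-batching later amortizes.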
\subsection{Optimizing the performance and power consumption with mini-batching}
Although task parallelism looks straightforward for implementing ensembles, poor memory usage can severely hinder performance.
For instance, high-frequency access to data structures larger than the cache memories can create performance bottlenecks. Also, algorithms that continuously perform memory allocation/release operations to discard old models and create new ones during the learning/training process may put pressure on the garbage collector.
To mitigate such problems, in previous work we introduced mini-batching \cite{CASSALES2021260}, an optimization strategy that groups several data instances of a stream for processing.
The algorithm assigns a task to each learner. Each task performs the training by iterating uninterruptedly through all instances of a mini-batch instead of processing a single instance at a time.
When a task is invoked, its data structures are loaded into the upper levels of the memory hierarchy (upper-level caches). Once on the upper-level caches, the data structures can be quickly accessed to process the remaining instances of the same mini-batch, reducing cache misses and improving performance.
\begin{algorithm}
\caption{mini-batching algorithm}
\label{alg:batch}
\begin{algorithmic}[1]
\State {\bf Input}: an ensemble $E$, $num\_threads$, a data stream $S$, mini-batch size $L_{mb}$
\State $P \gets Create\_service\_thread\_pool(num\_threads)$
\State $T \gets Create\_trainers\_collection(E)$
\For {each arriving instance $I$ in stream $S$}
\If {$ElapsedTime > Timeout$}
\label{alg:iftimeout}
\State{$E$.process\_minibatch($B$)}
\State{break loop}
\EndIf
\State $B.append(I)$
\label{alg:mbappend}
\If{$B.size() == L_{mb}$}
\label{alg:ifsize}
\State{$E$.process\_minibatch($B$)}
\EndIf
\State sleep();
\Comment{Sleep until next instance arrives}
\label{alg:sleep}
\EndFor
\end{algorithmic}
\end{algorithm}
Algorithm~\ref{alg:batch} shows the mini-batching strategy.
The first difference between the two algorithms appears in lines \ref{alg:iftimeout}-\ref{alg:ifsize} of Algorithm~\ref{alg:batch}, where the ensemble only accumulates instances until the desired mini-batch size is met, the time limit is reached, or the stream ends. If the time limit is reached (line \ref{alg:iftimeout}), the algorithm treats the current mini-batch as if it were complete, processes it, and breaks the execution loop. In the normal case, when the mini-batch reaches the size set as a parameter (line \ref{alg:ifsize}), the algorithm processes the mini-batch.
Line \ref{alg:sleep} shows a {\it sleep} operation, made explicit here only to better illustrate how mini-batching works. In real implementations, such a sleep operation is implicitly implemented by I/O subsystems (e.g., when the algorithm invokes a blocking read operation on a socket to wait for a new incoming data instance). The sleeping period is important for two reasons: ($i$) it releases the CPU to other applications (running on the same edge nodes) while waiting for new data instances to arrive; and ($ii$) it creates the opportunity for DPM strategies to turn off idle subsystems (e.g., CPU components) to save energy.
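The accumulate-then-process structure can be sketched as a small Python generator (ours, not the MOA implementation); the timeout branch flushes a partial batch exactly as described above:

```python
import time

def minibatches(stream, batch_size, timeout_s=None):
    """Group arriving instances into mini-batches; flush a partial
    batch when the timeout fires or the stream ends."""
    start = time.monotonic()
    batch = []
    for instance in stream:
        if timeout_s is not None and time.monotonic() - start > timeout_s:
            if batch:
                yield batch   # treat the partial batch as complete
            return
        batch.append(instance)
        if len(batch) == batch_size:
            yield batch
            batch = []
        # in a real deployment the loop blocks here (the implicit
        # "sleep") until the next instance arrives over the network
    if batch:
        yield batch

batches = list(minibatches(range(25), batch_size=10))  # sizes 10, 10, 5
```

The generator form makes the consumer-side blocking explicit: between batches the thread has nothing to do, which is precisely when DPM can power down idle components.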
\begin{algorithm}
\caption{process\_minibatch routine}
\label{alg:process}
\begin{algorithmic}[1]
\State {\bf Input}: mini-batch $B$
\For {each trainer $T_i$ in trainers $T$} {\bf in parallel}
\label{alg:forclassify}
\State $T_i.instances \gets B$
\label{alg:copymb}
\State $votes_i \gets T_i.classify(T_i.instances)$
\label{alg:votesclassify}
\EndFor
\State $E.compile(votes)$
\label{alg:compilevotes}
\For {each trainer $T_i$ in trainers $T$} {\bf in parallel}
\label{alg:fortrainparallel}
\For {each instance $I$ in $T_i.instances$}
\State $k \gets poisson(\lambda)$
\State $W\_inst \gets I * k$
\State $T_i.train\_on\_instance(W\_inst)$
\EndFor
\label{alg:endfortrain}
\If {change detected}
\label{alg:ifchangedetector}
\State $reset\_classifier$
\EndIf
\label{alg:endifchangedetector}
\EndFor
\label{alg:endfortrainparallel}
\State $B.clear()$
\label{alg:clear}
\end{algorithmic}
\end{algorithm}
Algorithm \ref{alg:process} depicts the routine used to process the mini-batch.
It performs the classification (lines \ref{alg:forclassify}-\ref{alg:compilevotes}) and training (lines \ref{alg:fortrainparallel}-\ref{alg:endfortrainparallel}). In line \ref{alg:copymb}, we copy the whole mini-batch to each trainer. Line \ref{alg:votesclassify} uses the instances to compute votes for each trainer and stores them for later use. The votes are aggregated and compiled in line \ref{alg:compilevotes} to provide the predictions. The loop in line \ref{alg:forclassify} may be sequential or parallel according to the characteristics of the application (e.g., classifiers performing a small number of operations may disable the parallelism and run sequentially). Then, each trainer iterates (sequentially) through all mini-batch instances while calculating the weight, creating the weighted instance, and training the classifier with this instance (lines \ref{alg:fortrainparallel}-\ref{alg:endfortrain}). Only ARF, SRP, and LBag execute lines \ref{alg:ifchangedetector}-\ref{alg:endifchangedetector}, a local change detector for each classifier in the ensemble.
In OBAdwin, lines \ref{alg:ifchangedetector}-\ref{alg:endifchangedetector} would be outside the parallel section, as the change detection is a global operation.
Finally, in line \ref{alg:clear}, the mini-batch is emptied to begin accumulating again.
In essence, grouping instances and reordering operations improves the algorithm's execution time and energy consumption thanks to better access locality.
Among the definitions of access locality provided by Yuan et al. \cite{yuan2019relational}, that of \textit{reuse distance} (RD) can be used to demonstrate how mini-batching improves the access locality of ensemble implementations. RD is defined as ``the number of distinct data accessed since the last access to the same datum, including the reused datum''~\cite{yuan2019relational}.
The RD of a datum's first access is $\infty$.
The minimum RD is one, since it includes the reused datum. The maximum RD, denoted by $m$, is the number of distinct data elements in the problem. In our case, $m$ denotes the number of classifiers in the ensemble (the ensemble size).
For example, assuming a set of only three elements $a,b,c$, the RD sequence would be $\infty\,\infty\,\infty$ $3\,3\,3$ $3\,3\,3$ for the access sequence \textit{abc abc abc}. As another example, the RD sequence would be $\infty\,\infty\,\infty$ $1\,2\,3$ $1\,2\,3$ for the access sequence \textit{abc cba abc}. Therefore, disregarding the $\infty$ accesses, we can estimate the total RD for the sequential approach as follows:
\begin{equation}
\label{eq:RDseq}
RD_{sequential} = \sum_{1}^{n} \sum_{1}^{m}m
\end{equation}
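Reuse distances under this definition can be computed mechanically; a small Python helper (ours) reproduces the access sequences discussed above:

```python
import math

def reuse_distances(accesses):
    """Distinct-datum reuse distance of each access; math.inf marks the
    first access to a datum."""
    last_pos = {}
    rds = []
    for i, x in enumerate(accesses):
        if x in last_pos:
            # distinct data touched since the previous access to x,
            # including x itself
            rds.append(len(set(accesses[last_pos[x] + 1 : i + 1])))
        else:
            rds.append(math.inf)
        last_pos[x] = i
    return rds

sequential = reuse_distances("abcabcabc")   # every reuse has RD = m = 3
batched = reuse_distances("aaabbbccc" * 2)  # pattern m 1 1 per classifier
```

In the sequential order every reuse costs the full ensemble size $m$, which is exactly what the sum above accumulates.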
Using the same three-element set from the previous example, the access sequence when using mini-batching (with $b=3$) would change to \textit{aaa bbb ccc}. In turn, the RD sequence would change to $\infty\,1\,1$ $\infty\,1\,1$ $\infty\,1\,1$ for the first mini-batch and $m\,1\,1$ (per classifier) for all subsequent mini-batches. Again, disregarding the $\infty$ accesses, we can estimate the total RD for the mini-batching strategy as follows:
\begin{equation}
\label{eq:RDmb}
RD_{mini-batching} = \sum_{1}^{\nicefrac{n}{b}} \sum_{1}^{m} (m+b-1)
\end{equation}
As can be noted by comparing Equations \ref{eq:RDseq} and \ref{eq:RDmb}, the most significant contribution of mini-batching to reducing the RD occurs in the outer sum: by dividing its upper limit by the mini-batch size $b$, a mini-batch as small as ten instances can reduce the order of magnitude of the total RD.
At first glance, bigger mini-batches provide a much larger reduction in RD. However, a sufficiently large $n$ (at least $n > bm^2$) is needed for the outer sum to overcome the impact of the inner sum.
For more information on this topic, refer to our previous work~\cite{IS}.
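A quick numerical check of the two totals (the workload sizes below are illustrative, not taken from our experiments):

```python
def rd_sequential(n, m):
    """Total RD of the sequential order: closed form of the double sum."""
    return n * m * m  # each of the n*m accesses has RD = m

def rd_minibatching(n, m, b):
    """Total RD with mini-batches of size b."""
    return (n // b) * m * (m + b - 1)  # per batch: RD m plus (b-1) ones

n, m = 10_000, 25              # instances, ensemble size
seq = rd_sequential(n, m)      # 6,250,000
mb = rd_minibatching(n, m, 10) # 850,000, roughly a 7x reduction
# note that n > b * m**2 here (10,000 > 6,250), so the
# outer-sum reduction dominates the (m + b - 1) inner-sum penalty
```

Larger $b$ reduces the outer sum further, but only pays off while the stream is long enough relative to $bm^2$.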
{
In summary, by reducing the RD, mini-batching reduces the number of CPU cycles (and, ultimately, the energy consumed) to process each data instance. Notice, however, that mini-batching can improve energy efficiency in a second way. The {\it sleep} operation in line \ref{alg:sleep} of Algorithm \ref{alg:batch} releases the CPU while waiting for the next data instance to arrive. Although this semantics can be implicit in many mini-batching implementations (e.g., our implementation blocks while reading data instances from a socket), it was explicitly included in our algorithm description. These sleep periods put system components in an idle state, letting the DPM techniques implemented in modern processors turn off components to save energy. Besides saving energy, sleep periods free the processor, benefiting other applications running on the same physical nodes at the edge.}
\section{Experimental setup}
\label{sec:experiments}
This section describes the experimental evaluation of our proposal.
\subsection{The testbed}
\label{subsec:environment}
{
As EC implementations can encompass different hardware platforms, ranging from small low-end devices to mid-end computing servers, we evaluated the proposed optimizations on three hardware architectures.
The first is a single-board computer Raspberry Pi 3 Model B with a Broadcom BCM2837 processor (4 Cortex-A53 64-bit cores at 1.2GHz) and 1GB of LPDDR2 SDRAM memory, frequently cited as a representative platform for EC implementations. We also included two more powerful hardware platforms: a regular personal computer with an Intel i5-2400 CPU at 3.10GHz and 4GB of memory, and a rack-mountable mid-end server SUPERMICRO SYS-7049GP-TRT with dual Xeon CLX-SP 4208 8C/16T, 128GB RAM, and dual power supply, as described in Table \ref{tab:hard_spec}. We carried out the experiments on a testbed composed of four nodes (as depicted in Fig. \ref{fig:setup-energy}) connected by a dedicated network.}
\begin{table*}[ht]
\centering
\caption{ Hardware specifications}
\label{tab:hard_spec}
\begin{tabular}{r|ccc}
Machine type & mid-end & personal & single-board \\
 & server & computer & computer \\
\hline
Processor & Intel Xeon 4208 & Intel i5-2400 & Broadcom BCM2837 \\
Micro architecture & Cascade Lake & Sandy Bridge & Cortex-A53 \\
Cores/socket & 8 & 4 & 4 \\
Threads/core & 2 & 1 & 1 \\
Clock frequency (GHz) & 2.1 & 3.1 & 1.2 \\
\hline
L1 cache (core) & 32 KB & 128 KB & 32 KB \\
L2 cache (core) & 1024 KB & 1024 KB & 512 KB \\
L3 cache (shared) & 11264 KB & 6144 KB & - \\
\hline
Memory (GB) & 128 & 4 & 1 \\
Memory channels & 6 & 2 & -\\
Maximum bandwidth & 107.3 GiB/s & 21 GB/s & - \\
\hline
TDP & 85 W & 35 W & 4 W \\
\end{tabular}
\end{table*}
\begin{figure*}[ht]
\centering
\begin{tikzpicture}[
roundrect/.style={
rectangle,
rounded corners,
draw=black, very thick,
text width=6.5em,
minimum height=2em,
text centered,
fill=black!5},
data/.style={
->,
thick,
shorten <=2pt,
shorten >=2pt,
},
monitor/.style={
dashed,
very thick,
loosely dashed,
shorten <=2pt,
shorten >=2pt,
->,
}
]
\node[roundrect] (mw) {Sensor};
\node[roundrect, inner sep=5pt,left=4cm of mw] (dlog) {Data Logger};
\node[roundrect, above=3cm of mw] (proc) {Data Stream Processor};
\node[roundrect, above=3cm of dlog] (gen) {Data Stream Generator};
\draw[monitor] (proc) to[out=270,in=90] node[auto]{Power Reading} (mw) ;
\draw[monitor] (gen) to[out=270,in=90] node[left]{Statistics} (dlog) ;
\draw[monitor] (mw) to[out=180,in=0] node[auto]{Energy Consumption} (dlog) ;
\draw[data] (gen) to[out=0,in=180] node[auto]{Data Stream} (proc);
\end{tikzpicture}
\caption{The testbed is composed of four nodes (clockwise order): (a) a data stream (load) generator; (b) a data stream processor (whose architectures are described in Table \ref{tab:hard_spec}); (c) a high precision power meter (sensor); and (d) a data logger which registers all experimental data. }
\label{fig:setup-energy}
\end{figure*}
\subsection{Load generation}
{
The {\it data stream generator}
reads the benchmark dataset and transmits the data samples over the network to the {\it data stream processor} node at controlled transmission rates (e.g., to generate low, moderate, and high workloads for each CPU type).} We used five open-access\footnote{Available at \url{https://github.com/hmgomes/AdaptiveRandomForest}} datasets (whose characteristics are summarized in Table \ref{tab:datasets}) in the experiments:
\begin{table*}[htpb]
\centering
\caption{Summary of dataset statistics}
\label{tab:datasets}
\begin{tabular}{r|ccccc}
Datasets & Airlines & GMSC & Electricity & Covertype & Kyoto\\ \hline
\# Instances & 540k & 150k & 45k & 581k & 725k\\
\# Features & 7 & 10 & 8 & 54 & 12\\
\# Nominal feat & 4 & 0 & 1 & 45 & 0\\
Normalized & No & No & Yes & Yes & Yes\\
\end{tabular}
\end{table*}
\begin{itemize}
\item The Airlines dataset was inspired by the regression dataset from Ikonomovska. The task is to predict whether a given flight will be delayed, given information on the scheduled departure. Thus, it has two possible classes: delayed or not delayed.
\item The Electricity dataset was collected from the Australian New South Wales Electricity Market, where prices are not fixed. These prices are affected by the demand and supply of the market itself and set every 5 min. The Electricity dataset tries to identify the price changes (two possible classes: up or down) relative to a moving average of the last 24h. An essential aspect of this dataset is that it exhibits temporal dependencies.
\item The give me some credit (GMSC) dataset is a credit scoring dataset where the objective is to decide whether a loan should be allowed. This decision is crucial for banks since erroneous loans lead to the risk of default and unnecessary expenses on future lawsuits. The dataset contains historical data on borrowers.
\item The forest covertype dataset represents forest cover type for 30 x 30 m cells obtained from the US Forest Service Region 2 resource information system (RIS) data. Each class corresponds to a different cover type. The numeric attributes are all binary.
Moreover, there are seven imbalanced class labels.
\item The Kyoto dataset is an IDS dataset created by researchers from the University of Kyoto. The task is to predict whether a flow is an attack or regular traffic. They used honeypots composed of many devices such as servers, printers, and IP cameras, among others.
\end{itemize}
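The rate-controlled transmission performed by the {\it data stream generator} can be sketched as follows. This is a minimal illustration: the TCP transport, line-per-instance payload format, and pacing logic are our own assumptions, not the actual implementation used in the experiments.

```python
import socket
import time

def rate_for_load(max_ips, load_fraction):
    """Transmission rate for a target load (e.g., 0.1, 0.5, or 0.9 of
    the maximum capacity measured for a given machine)."""
    return max_ips * load_fraction

def stream_dataset(lines, host, port, rate_ips):
    """Send dataset lines over TCP at a controlled rate (instances/s)."""
    interval = 1.0 / rate_ips  # target inter-arrival time
    with socket.create_connection((host, port)) as sock:
        next_send = time.monotonic()
        for line in lines:
            sock.sendall(line.encode() + b"\n")
            next_send += interval
            delay = next_send - time.monotonic()
            if delay > 0:  # pace the stream; skip the sleep if behind schedule
                time.sleep(delay)
```

For example, a machine with a measured maximum throughput of 1000 instances/s would be driven at 100, 500, and 900 instances/s for the low, moderate, and high workload scenarios.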
\subsection{The ensembles used for benchmarking}
\label{sec:testbed}
{
The {\it data stream processor} implements the optimizations as a wrapper for the six ensemble algorithms described in Section \ref{sec:ensembles}. We implemented this module in the Massive Online Analysis (MOA) framework~\cite{bifet2010moa}\footnote{Available at \url{https://github.com/Waikato/moa}}. We adapted MOA to read from a socket instead of reading from a local ARFF file. We chose MOA because:
($i$) it provides correct and validated implementations of the six ensemble learners used in our experiments,
($ii$) MOA can be easily extended or modified, which allowed us to write a wrapper for a uniform evaluation of the six ensembles with the optimizations; and ($iii$) MOA has been used for many studies in the ML area \cite{bifet2010moa}, so that our results can be easily reproduced and compared. The {\it data stream processor} is executed on each different hardware platform to evaluate its performance and power consumption. }
\subsection{Performance and power consumption measurements}
\label{subsec:evalmeasure}
The {\it data logger} (Fig. \ref{fig:setup-energy}) collects all experimental data regarding the performance (e.g., throughput, processing delay) and the power consumed by the {\it data stream processor} for further analysis. The {\it sensor} is implemented by a high precision power meter (Yokogawa MW-100) which periodically collects information directly from the Power Distribution Unit (PDU) and sends it to the data logger.
Our interest is to measure the amount of Energy (E) consumed to perform classification tasks. However, most electricity consumption monitors operate by collecting an instantaneous rate of Power (P) being supplied. Energy is the product of the average power and Time (t):
\begin{equation}
E = P \times t
\end{equation}
Since power can vary in time, the total amount of energy consumed to perform a task is given by
\[ E=\int_{0}^{t} P(t) \,dt \]
where $t$ is the time to perform one task. In practice, we can obtain an approximation of E by taking several periodic measures:
\begin{equation}
E = \frac{1}{n}\sum_{i=1}^{n}P_i \times t,
\end{equation}
where $n$ is the number of samples taken by the monitor.
Typically, {\it energy efficiency} is defined as the ratio of the energy spent and the amount of computing performed. As energy is expressed in Joules (J), and the work is expressed as the number of data instances processed (I), we can estimate energy efficiency (as Joules per Instance - JPI) by:
\begin{equation}
JPI = \dfrac{E}{I}.
\end{equation}
In some experiments, we use the performance metrics of {\it throughput}, given by the average number of data instances processed per second (IPS), and {\it delay}, which is the average time taken to process a data instance, including its transmission over the network, the composition of mini-batches, and the time to process the whole mini-batch by the {\it data stream processor}.
The previous metrics are related to computational performance. A second dimension of performance we considered is the {\it predictive performance}, which can be hindered by the optimization techniques. {\it Accuracy} is a widely known measure and represents the percentage of correctly classified instances.
{
\section{Experimental results and analysis}
}
\label{subsec:prelim}
As memory constraints of small devices can hinder the execution of large ensembles, our preliminary experiment aimed to determine the influence of the ensemble size (i.e., the number of classifiers in the ensemble) on the accuracy, energy consumption, and throughput. For this experiment, we used the baseline version of the algorithms while measuring energy consumption. Results shown in Fig. \ref{fig:sizeVS3} demonstrate that accuracy (in red) remains almost constant (less than 1\% loss), throughput (in blue) decreases, and energy consumption (in green) increases as the ensemble size (x-axis) grows. Because the accuracy loss is negligible for all datasets and all algorithms, we used small ensembles in the Raspberry Pi experiments.
\begin{figure*}[t!]
\centering
\includegraphics[scale=0.25]{Pi-LBag-sizeVS3.png}
\caption{The influence of the ensemble size in accuracy (red), energy efficiency (green), and throughput (blue) for each dataset executing the LBag algorithm on the Raspberry Pi.}
\label{fig:sizeVS3}
\end{figure*}
The next experiment profiles the power consumption of each machine type as we vary the number of CPU cores running at full load. We used the {\tt stress} application and thread pinning to fully load the cores for 180 seconds.
Results in Fig. \ref{fig:energy_profile} confirm our expectation for the Raspberry Pi and Core i5-2400.
Although the TDP for the Xeon 4208 is only 85 W \footnote{\url{https://ark.intel.com/content/www/us/en/ark/products/193390/intel-xeon-silver-4208-processor-11m-cache-2-10-ghz.html}}, the value measured is higher because it refers to the whole machine (including disk, dual power supply, dual socket, 4 fans, etc.).
\begin{figure*}[t!]
\centering
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[width=\textwidth]{consumption-profile-gpu.png}
\caption{Xeon 4208}
\end{subfigure}
~
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[width=\textwidth]{consumption-profile-vostro.png}
\caption{i5-2400}
\end{subfigure}
~
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[width=\textwidth]{consumption-profile-pi.png}
\caption{Pi 3 model B}
\end{subfigure}
\caption{Energy consumption profile for each machine.}
\label{fig:energy_profile}
\end{figure*}
\subsection{Energy consumption}
\label{subsec:sockets}
The next experiment evaluates the energy efficiency of the algorithms running on the three machines under three workload intensities (i.e., at 10\%, 50\%, and 90\% of the maximum throughput). To accomplish that, we used the adapted MOA, which reads data instances from a socket instead of an ARFF file as described in Section \ref{sec:testbed}.
Then, we tuned the load generator to deliver instances at rates equivalent to 10\%, 50\%, and 90\% of the maximum capacity of the machine for each scenario. Each scenario was executed for at least 3 minutes in order to guarantee that the whole system reached its steady state.
In this experiment we compare the baseline (Sequential) implementation of each algorithm, a parallel (multi-thread) implementation without mini-batching (B1), and three parallel versions with mini-batches of 50, 250, and 500 instances (B50, B250, and B500, respectively). Figures \ref{fig:pi-jpi-delay}, \ref{fig:vostro-jpi-delay}, and \ref{fig:xeon-jpi-delay} present the results for the Raspberry Pi, i5, and Xeon 4208, respectively. Each chart shows the results for one dataset per row, whereas the columns show results for each algorithm. All charts in the same row have the same scale. The energy consumption (in Joules per instance - JPI) appears on the left Y-axis, whereas the average delay (in milliseconds) per instance appears on the right Y-axis.
\begin{figure*}[ht]
\centering
\advance\leftskip-1cm
\includegraphics[width=1.0\linewidth]{Pi-sharey-bars-all-4x1-JPI-delay.png}
\caption{Energy consumption and delay for the Raspberry Pi}
\label{fig:pi-jpi-delay}
\end{figure*}
\begin{figure*}[ht]
\centering
\advance\leftskip-1cm
\includegraphics[width=1.0\linewidth]{Vostro-sharey-bars-all-4x1-JPI-delay.png}
\caption{Energy consumption and delay for the i5-2400}
\label{fig:vostro-jpi-delay}
\end{figure*}
\begin{figure*}[ht]
\centering
\advance\leftskip-1cm
\includegraphics[width=1.0\linewidth]{Xeon-sharey-bars-all-4x1-JPI-delay.png}
\caption{Energy consumption and delay for the Xeon 4208}
\label{fig:xeon-jpi-delay}
\end{figure*}
As a general remark, the energy efficiency of each algorithm varies according to its model complexity. For instance, all three versions of OzaBag show remarkably better energy efficiency (i.e., smaller JPI) than the other algorithms because OzaBag produces smaller decision trees that require fewer operations and allow a faster traversal. On the other hand, more complex algorithms presented (proportionally) higher reductions in energy consumption (JPI) when compared to their best counterparts without mini-batching. This behavior is related to the higher throughput produced by the mini-batching version, which shortens the execution time and allows longer periods of low power consumption. Thus, the energy efficiency gains can be explained as follows:
\begin{itemize}
\item Notice that the mini-batch is processed only in lines 6 and 11 of Algorithm 2, which are executed only when either the time elapsed exceeds the {\it timeout} or the mini-batch is full (i.e., it reaches $L_{mb}$ instances). Otherwise, the loop (line 4) just appends the data instance
to the mini-batch and waits (i.e., it enters a sleep state) for the arrival of the next data instance. Because the thread sleeps for a while, it does not consume CPU cycles, letting the DPM mechanisms implemented by the CPU hardware turn off idle system components and thus save power;
\item Even under high loads (i.e., sleep periods may tend to zero), mini-batching yields power reduction because it reduces cache misses (as demonstrated in \cite{CASSALES2021260}), thus accelerating the stream processing and reducing the number of cycles and memory accesses per instance.
\end{itemize}
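The accumulate-then-process loop discussed above can be sketched as follows. This is a simplified Python illustration of the mechanism, not the MOA implementation: the queue-based interface, sentinel-based shutdown, and timeout handling are our own assumptions.

```python
import queue
import time

def minibatch_loop(in_queue, process_batch, batch_size=50, timeout_s=0.5):
    """Accumulate incoming instances into a mini-batch; process the batch
    when it is full or when the timeout expires.

    While blocked on the queue the thread sleeps, so it consumes no CPU
    cycles -- the opportunity for DPM mechanisms described in the text.
    """
    batch, deadline = [], time.monotonic() + timeout_s
    while True:
        remaining = deadline - time.monotonic()
        try:
            item = in_queue.get(timeout=max(remaining, 0))
            if item is None:            # sentinel: flush and stop
                if batch:
                    process_batch(batch)
                return
            batch.append(item)
        except queue.Empty:
            pass                        # timeout expired with a partial batch
        if len(batch) >= batch_size or time.monotonic() >= deadline:
            if batch:
                process_batch(batch)
            batch, deadline = [], time.monotonic() + timeout_s
```

Under high load the queue is never empty and batches fill quickly; under low load the timeout bounds how long an instance can wait, which is the source of the delay trade-off analyzed below.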
Although mini-batching can improve energy efficiency, the delay resulting from extended and repeated sleep periods may hinder the idea of real-time processing, primarily when the rate of incoming instances is several times smaller than the mini-batch size.
Such a phenomenon appears only in the B250 and B500 experiments.
On the other hand, the B50 experiments present delays similar to the instance-by-instance implementations (Seq and B1) while having a better energy efficiency.
Regardless of the platform and dataset used, SRP has the worst energy efficiency across all the experiments. This happens because this algorithm produces deeper decision trees than the other complex algorithms (e.g., ARF and LBag), thus increasing the computational complexity of the stream processing.
Remarkably, mini-batches of around 50 instances yield the best benefits in terms of delay reduction, energy efficiency, and smaller impact on predictive performance, while larger mini-batches presented diminishing returns. This result is consistent with our previous work in \cite{CASSALES2021260}.
More detailed results on the energy efficiency gains are given in Tables \ref{tab:delta-Pi}, \ref{tab:delta-Vostro}, and
\ref{tab:delta-Xeon}. They compare the energy consumption of the best version without mini-batching (i.e., the best case between the orange and blue bars in the charts) and the mini-batch version. Negative values indicate the (percentage) reduction in energy consumption due to mini-batching. Cases where the mini-batch version consumed more energy have positive values and are shown in bold.
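The table entries are relative differences; assuming the convention stated above (negative means the mini-batch version consumed less energy), they can be computed as:

```python
def pct_difference(jpi_minibatch, jpi_best_baseline):
    """Percentage change in JPI of the mini-batch version relative to the
    best non-mini-batch version (Seq or B1). Negative values indicate an
    energy reduction due to mini-batching."""
    return 100.0 * (jpi_minibatch - jpi_best_baseline) / jpi_best_baseline
```

For example, a mini-batch run at 0.6 J/instance against a best baseline of 1.0 J/instance yields an entry of $-40$.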
\begin{table*}[ht]
\centering
\advance\leftskip-1cm
\setlength\tabcolsep{2.1pt}
\begin{tabular}{c|ccc|ccc|ccc|ccc|ccc}
& \multicolumn{3}{c|}{Airlines} & \multicolumn{3}{c|}{GMSC} & \multicolumn{3}{c|}{Electricity} & \multicolumn{3}{c|}{Covertype} & \multicolumn{3}{c}{Kyoto}\\
Algorithm & 10 & 50 & 90 & 10 & 50 & 90 & 10 & 50 & 90 & 10 & 50 & 90 & 10 & 50 & 90 \\
\hline
ARF & -39.86 & -24.87 & -21.43 & -53.42 & -51.29 & -47.81 & -44.05 & -42.69 & -32.69 & -58.19 & -56.56 & -52.42 & -44.83 & -35.61 & -32.79 \\
LBag & -25.04 & \textbf{ 11.88} & \textbf{ 10.89} & -55.20 & -51.70 & -47.94 & -53.48 & -45.95 & -39.18 & -56.66 & -55.94 & -57.63 & -64.64 & -56.52 & -48.65 \\
SRP & -21.66 & -18.04 & -22.09 & -57.89 & -55.78 & -51.54 & -44.29 & -37.97 & -35.43 & -48.95 & -38.37 & -36.49 & -42.91 & -42.20 & -28.70 \\
OBAd & -60.30 & -51.74 & -48.96 & -61.54 & -57.91 & -52.18 & -53.96 & -50.63 & -44.30 & -55.02 & -56.95 & -64.73 & -54.97 & -53.28 & -54.20 \\
OBASHT & -62.30 & -40.61 & \textbf{ 17.97} & -55.46 & -50.67 & -58.96 & -45.85 & -40.91 & -33.80 & -56.95 & -56.22 & -60.48 & -47.45 & -45.95 & -50.04 \\
OB & -61.23 & -27.51 & -21.63 & -55.14 & -51.26 & -58.31 & -44.83 & -39.42 & -34.47 & -57.53 & -56.68 & -65.04 & -45.84 & -44.35 & -46.95 \\
\end{tabular}
\caption{Percentage difference between the best non-mini-batch version and the mini-batch version on the Raspberry Pi}
\label{tab:delta-Pi}
\end{table*}
\begin{table*}[ht]
\centering
\advance\leftskip-1cm
\setlength\tabcolsep{2pt}
\begin{tabular}{c|ccc|ccc|ccc|ccc|ccc}
& \multicolumn{3}{c|}{Airlines} & \multicolumn{3}{c|}{GMSC} & \multicolumn{3}{c|}{Electricity} & \multicolumn{3}{c|}{Covertype} & \multicolumn{3}{c}{Kyoto}\\
Algorithm & 10 & 50 & 90 & 10 & 50 & 90 & 10 & 50 & 90 & 10 & 50 & 90 & 10 & 50 & 90 \\
\hline
ARF & -26.64 & \textbf{ 112.66} & \textbf{ 27.88} & -55.09 & -45.61 & -39.58 & -48.65 & -40.06 & -34.12 & -55.05 & -48.11 & -42.40 & -53.61 & -34.79 & -36.08 \\
LBag & \textbf{ 47.50} & \textbf{ 105.85} & \textbf{ 164.66} & -62.05 & -52.71 & -48.87 & -60.39 & -50.86 & -49.39 & -54.99 & -48.19 & -43.84 & -55.43 & -38.61 & -43.88 \\
SRP & -14.89 & -25.45 & -8.34 & -54.75 & -44.62 & -34.29 & -40.39 & -28.94 & -21.54 & -52.98 & -42.79 & -36.58 & -33.10 & -32.57 & -3.14 \\
OBAd & -54.64 & \textbf{ 36.33} & \textbf{ 34.65} & -63.66 & -57.72 & -55.74 & -58.95 & -54.26 & -52.44 & -44.37 & -39.91 & -33.65 & -50.78 & -47.40 & -37.44 \\
OBASHT & -60.56 & -58.60 & -67.16 & -63.21 & -59.05 & -51.85 & -54.07 & -47.84 & -45.11 & -40.51 & -39.83 & -35.71 & -50.43 & -46.33 & -36.05 \\
OB & -7.19 & \textbf{ 5.56} & -13.34 & -57.65 & -54.94 & -46.74 & -51.92 & -48.58 & -45.28 & -42.21 & -40.24 & -33.24 & -47.55 & -42.07 & -30.06 \\
\end{tabular}
\caption{Percentage difference between the best non-mini-batch version and the mini-batch version on the i5-2400}
\label{tab:delta-Vostro}
\end{table*}
\begin{table*}[ht]
\centering
\advance\leftskip-1cm
\setlength\tabcolsep{2.5pt}
\begin{tabular}{c|ccc|ccc|ccc|ccc|ccc}
& \multicolumn{3}{c|}{Airlines} & \multicolumn{3}{c|}{GMSC} & \multicolumn{3}{c|}{Electricity} & \multicolumn{3}{c|}{Covertype} & \multicolumn{3}{c}{Kyoto}\\
Algorithm & 10 & 50 & 90 & 10 & 50 & 90 & 10 & 50 & 90 & 10 & 50 & 90 & 10 & 50 & 90 \\
\hline
ARF & -76.81 & -75.08 & -75.30 & -67.09 & -66.98 & -67.38 & -81.74 & -80.47 & -80.30 & -82.64 & -83.06 & -82.33 & -80.12 & -77.36 & -80.25 \\
LBag & -76.71 & -73.22 & -74.66 & -68.16 & -68.30 & -68.92 & -76.63 & -76.37 & -77.18 & -74.41 & -71.04 & -71.24 & -72.75 & -70.85 & -72.61 \\
SRP & -71.57 & -66.02 & -67.11 & -71.23 & -67.91 & -67.84 & -81.06 & -78.40 & -76.87 & -86.37 & -81.20 & -82.01 & -73.70 & -71.24 & -68.50 \\
OBAd & -78.07 & -77.87 & -80.59 & -62.25 & -63.51 & -63.70 & -63.50 & -64.24 & -64.56 & -56.87 & -55.37 & -55.34 & -52.19 & -54.16 & -51.48 \\
OBASHT & -69.59 & -79.84 & -90.91 & -60.48 & -61.67 & -62.04 & -59.17 & -58.30 & -61.89 & -48.61 & -49.38 & -48.26 & -52.69 & -53.35 & -52.64 \\
OB & -44.22 & -77.02 & -81.20 & -51.25 & -52.18 & -52.74 & -53.49 & -54.06 & -57.95 & -50.19 & -48.38 & -49.19 & -44.46 & -45.90 & -43.83 \\
\end{tabular}
\caption{Percentage difference between the best non-mini-batch version and the mini-batch version on the Xeon}
\label{tab:delta-Xeon}
\end{table*}
The tables show that using mini-batches of 50 instances (B50) improves both performance and energy efficiency in 259 out of 270 experiments (i.e., 96\%). Moreover, mini-batching increased the performance (e.g., throughput) in all 270 experiments, as illustrated in Figures \ref{fig:pi-tput}, \ref{fig:vostro-tput}, and \ref{fig:xeon-tput} and in Table \ref{tab:tput-vostro-lbag-airlines}.
\begin{figure*}[ht]
\centering
\advance\leftskip-1cm
\includegraphics[width=1.0\linewidth]{Pi-sharey-all-Tput.png}
\caption{Throughput for the Raspberry Pi}
\label{fig:pi-tput}
\end{figure*}
\begin{figure*}[ht]
\centering
\advance\leftskip-1cm
\includegraphics[width=1.0\linewidth]{Vostro-sharey-all-Tput.png}
\caption{Throughput for the i5-2400}
\label{fig:vostro-tput}
\end{figure*}
\begin{figure*}[ht]
\centering
\advance\leftskip-1cm
\includegraphics[width=1.0\linewidth]{Xeon-sharey-all-Tput.png}
\caption{Throughput for the Xeon 4208}
\label{fig:xeon-tput}
\end{figure*}
\begin{table}[ht]
\centering
\begin{tabular}{r|c|c|c}
Version & 10\% & 50\% & 90\% \\
\hline
Sequential & 9.99 & 37.05 & 46.56\\
B1 & 34.96 & 45.68 & 53.46 \\
B50 & 34.92 & 74.44 & 78.12 \\
B250 & 49.19 & 58.95 & 71.05 \\
B500 & 20.01 & 20.80 & 16.21\\
\end{tabular}
\caption{Throughput for the algorithm LeveragingBag using the Airlines dataset.}
\label{tab:tput-vostro-lbag-airlines}
\end{table}
In summary, mini-batching supports consistent improvements in energy consumption and time performance across the experiments. First, mini-batching improves the reuse distance (RD), which reduces the number of CPU cycles needed to process each data instance, finishing faster and spending less energy. Second, the sleeping periods (while waiting for new data instances to compose a new mini-batch) create opportunities for DPM strategies to turn off idle subsystems (e.g., many processor subsystems) to save energy.
Also, notice that mini-batching creates a trade-off between energy consumption, time metrics (delay and throughput), and accuracy.
While mini-batching increases the throughput, delay, and energy efficiency, larger mini-batches can modify the accuracy of the algorithms.
This result is corroborated by previous studies on mini-batching performance \cite{IS}. However, as demonstrated in our experiments, it is possible to balance this trade-off: a mini-batch size of 50 instances provided a good balance in practically all the experimental scenarios studied.
\section{Conclusion}
\label{sec:conclusions}
Ensemble learning is a fruitful approach to improve the performance of ML models by combining several single models.
Ensembles are also popular in a data stream processing context, where they achieve remarkable predictive performance.
Examples of this class include algorithms such as Adaptive Random Forest, Leveraging Bag, and OzaBag.
Despite their relevance, many aspects of their efficient implementation remained to be studied after their original proposals.
For example, the original Adaptive Random Forest implementation included a simple multi-thread version, but it did not consider energy efficiency or mini-batches to improve the overall run time.
In this paper, we proposed an experimental framework to evaluate the performance and energy efficiency of the mini-batching strategy proposed in \cite{IS} under a realistic scenario where data streams are sent through the network at different load intensities, with six
state-of-the-art ensemble algorithms (OzaBag, OzaBag Adaptive Size Hoeffding Tree, Online Bagging ADWIN,
Leveraging Bagging, Adaptive RandomForest, and Streaming Random Patches) processing five widely used
machine learning benchmark datasets with varied characteristics on three computer platforms.
Our study demonstrated that
mini-batching yields remarkable reductions in energy consumption and improvements in time performance, at the cost of small changes in predictive performance. Larger improvements in time performance and energy savings were observed at low loads, with a trade-off on the average delay to process the instances. Conversely, mini-batching increases throughput when the workload is high, where the energy reduction is present (but not as intense) and the delay is smaller than the baseline.
Despite the trade-offs observed, it is possible to balance them to avoid significant losses in predictive performance.
In future work, we intend to investigate if it is possible to improve the solution by using an adaptive mini-batching size regarding predictive performance, throughput, delay, and energy consumption.
\section{Acknowledgements}
\noindent This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES) - Finance Code 001, and Programa Institucional de Internacionalização – CAPES-PrInt UFSCar (Contract 88887.373234/2019-00). Authors also thank Stic AMSUD (project 20-STIC-09), and FAPESP (contract numbers 2018/22979-2, and 2015/24461-2) for their support. Partially supported by the TAIAO project CONT-64517-SSIFDS-UOW (Time-Evolving Data Science / Artificial Intelligence for Advanced Open Environmental Science) funded by the New Zealand Ministry of Business, Innovation, and Employment (MBIE). URL https://taiao.ai/.
\bibliographystyle{IEEEtran}
\section{Introduction}
\label{sec:introduction}
While communication networks have traditionally been designed as ``bit pipes'' meant to reliably convey information,
the anticipated explosion of device-to-device communications, e.g., as part of the Internet of Things, is creating new challenges.
In fact, more than communication by itself, what is crucial for the next generation of networks is to ensure the \emph{cooperation} and \emph{coordination} of the
constituent devices, viewed as autonomous decision makers.
In the present work, coordination is meant in the broad sense of enforcing a joint behavior of the devices through communication. More specifically, we shall quantify this joint behavior in terms of how well we can approximate a target joint distribution between the actions and signals of the devices. Our main objective in the present work is to characterize the amount of communication that is required to achieve coordination for several networks.
A general information-theoretic framework to study coordination in networks was put forward in~\cite{cuff2010}, related to earlier work on
``Shannon's reverse coding theorem''~\cite{bennet2002entanglement} and the compression of probability distribution sources and mixed quantum states~\cite{Soljanin2002,kramer2007communicating,winter2002compression}.
This framework also relates to the game-theoretic
perspective on coordination~\cite{gossner2006optimal} with applications, for instance, power control~\cite{larrousse2015coordination}. Recent extensions of the
framework have included the possibility of coordination through interactive communication~\cite{yassaee2015channel, haddadpour2017simulation}.
Two information-theoretic metrics have been proposed to measure the level of coordination:
\emph{empirical coordination}, which requires the joint histogram of the devices' actions to approach a target distribution, and \emph{strong coordination}, which requires the joint distribution of sequences of actions to converge to an i.i.d. target distribution, e.g., in variational distance~\cite{cuff2010,cuff2013distributed}. Empirical coordination captures an ``average behavior'' over multiple repeated actions of the devices; in contrast, strong coordination captures the behavior of sequences. A byproduct of strong coordination is that it enforces some level of ``security,'' in the sense of guaranteeing that sequence of actions will be \emph{unpredictable} to an outside observer beyond what is known about the target joint distribution of sequences.
Strong coordination in networks was first studied over error free links~\cite{cuff2010} and later extended to noisy communication links~\cite{haddadpour2017simulation}. In the latter setting, the signals that are transmitted and received
over the physical channel become a part of what can be observed, and one can therefore coordinate the actions of the devices with their communication signals~\cite{cuff2011hybrid, treust2017joint}. From a security standpoint, this joint coordination of actions and signals allows one to control the information about the devices' actions that may be inferred from the observations of the communication signals. This ``secure coordination'' was investigated for error-free links in~\cite{satpathy2016secure}.
In the present paper, we address the problem of strong coordination in a two-node network comprised of an
information source and a noisy channel, in which both nodes have access to a common source of randomness. This scenario presents two conflicting goals: the encoder needs to convey a message to the decoder to coordinate the actions,
while simultaneously coordinating the signals coding the message. As in \cite{treust2014correlation,treust2015empirical,larrousse2015coordinating}
we introduce a random state capturing the effect of the environment, to model actions and channels that change with external factors, and
we consider a general setting in which state information and side information about the source may or may
not be available at the decoder. We derive an inner and an outer bound for the strong coordination region by developing a joint source-channel scheme in which an auxiliary codebook allows us to satisfy both goals.
Since the two bounds do not match, the optimality of our general achievability scheme remains an open question.
We have, however, succeeded in characterizing the strong coordination region exactly in some special cases:
\begin{inparaenum}[i)]
\item when the channel is noiseless;
\item when the decoder is lossless; and
\item when the random variables of the channel are independent from the random variables of the source.
\end{inparaenum}
In all these cases, the set of achievable target distributions is the same as for empirical coordination~\cite{treust2015empirical}, but we show that a positive rate of common randomness is required for strong coordination. We conclude the paper by considering the design of an explicit coordination scheme in this setting. Coding schemes for coordination based on polar codes have already been designed in~\cite{blasco-serrano2012, bloch2012strong, chou2015coordination, obead2017joint}. Inspired by the binning technique using polar codes in \cite{chou2015polar},
we propose an explicit polar coding scheme that achieves the inner bound for the coordination capacity region in \cite{Cervia2017} by extending our coding scheme in \cite{Cervia2016} to strong coordination. We use a chaining construction as in \cite{hassani2014universal,mondelli2015achieving} to ensure proper alignment of the polarized sets.
The remainder of the paper is organized as follows. $\mbox{Section \ref{sec: prel}}$ introduces the notation and some preliminary results.
$\mbox{Section \ref{sec: isit2017}}$ describes a simple model in which there is no state and no side information and derives an inner and an outer bound for the strong coordination region.
The information-theoretic modeling of coordination problems relevant to this work is best illustrated in
this simplified scenario.
$\mbox{Section \ref{sec: innerouter}}$ extends the inner and outer bounds to the general case of a noisy channel with state and side information at the decoder.
In particular, the inner bound is proved by proposing a random binning scheme and a random coding scheme that have the same statistics.
$\mbox{Section \ref{sec: special cases}}$ characterizes the strong coordination region for three special cases and
shows that the separation principle does not hold for strong coordination.
$\mbox{Section \ref{sec: polarcoding}}$ presents an explicit polar coding scheme for the simpler setting where there is no state and no side information.
Finally, $\mbox{Section \ref{sec: conclusions}}$ presents some conclusions and open problems.
\section{Preliminaries}\label{sec: prel}
We define the integer interval $\llbracket a,b \rrbracket$ as the set of integers between $a$ and $b$.
Given a random vector $X^{n}:=(X_1, \ldots, X_{n})$, we denote by $X^{i}$ the first $i$ components of $X^{n}$, by
$X_{\sim i}$ the vector $(X_j)_{j \neq i}$, $j\in \llbracket 1,n \rrbracket $, in which the component $X_i$ has been removed, and by $X[A]$ the vector $(X_j)_{j \in A}$,
$A \subseteq \llbracket 1,n \rrbracket $.
The total variation between two probability mass
functions $P$ and $Q$ on $\mathcal A$ is given by
$$\mathbb V (P, Q):= \frac{1}{2} \sum_{a \in \mathcal{A}} \lvert P(a)-Q(a) \rvert.$$
The Kullback-Leibler
divergence between two discrete distributions $P$ and $Q$ is
$$\mathbb D (P \Arrowvert Q):=\sum_{a} P(a) \log{\frac{P(a)}{Q(a)}}.$$
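For concreteness, the two quantities can be evaluated numerically on distributions over a common finite alphabet (a toy sketch, not part of the formal development; base-2 logarithm for the divergence):

```python
from math import log2

def total_variation(p, q):
    """V(P, Q) = (1/2) * sum_a |P(a) - Q(a)| over a shared alphabet,
    with p and q given as dicts mapping symbols to probabilities."""
    return 0.5 * sum(abs(p[a] - q[a]) for a in p)

def kl_divergence(p, q):
    """D(P || Q) = sum_a P(a) log(P(a)/Q(a)); assumes the support of P
    is contained in the support of Q."""
    return sum(p[a] * log2(p[a] / q[a]) for a in p if p[a] > 0)
```

For example, for $P=(1/2,1/2)$ and $Q=(1/4,3/4)$ the total variation is $1/4$, while the divergence is strictly positive.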
We use the notation $f(\varepsilon)$ to denote a function which tends to zero as $\varepsilon$ does,
and the notation $\delta(n)$ to denote a function which tends to zero exponentially as $n$ goes to infinity.
We now state some useful results.
First, we recall well-known properties of the variational distance and Kullback-Leibler divergence.
\begin{lem}[$\mbox{\cite[Lemma 1]{csiszar1996almost}}$]\label{lem1csi}
Given a pair of random variables $(A,B)$ with joint distribution $P_{AB}$, marginals $P_A$ and $P_B$ and
$\lvert \mathcal A \rvert \geq 4$, we have
\begin{equation*}
\frac{1}{2 \log2} { \mathbb V(P_{AB},P_{A}P_{B}) }^2 \leq I(A;B) \leq \mathbb V(P_{AB},P_{A}P_{B}) \log {\frac{\lvert \mathcal A\rvert }{\mathbb V(P_{AB},P_{A}P_{B})}}.
\end{equation*}
\end{lem}
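Lemma \ref{lem1csi} can be checked numerically on a small joint distribution (a sanity-check sketch only; base-2 logarithms, and recall the hypothesis $\lvert \mathcal A \rvert \geq 4$):

```python
from math import log, log2

def mutual_information(p_joint):
    """I(A;B) in bits from a joint pmf given as {(a, b): prob}."""
    p_a, p_b = {}, {}
    for (a, b), p in p_joint.items():
        p_a[a] = p_a.get(a, 0) + p
        p_b[b] = p_b.get(b, 0) + p
    return sum(p * log2(p / (p_a[a] * p_b[b]))
               for (a, b), p in p_joint.items() if p > 0)

def tv_joint_vs_product(p_joint):
    """V(P_AB, P_A P_B) over the product alphabet."""
    p_a, p_b = {}, {}
    for (a, b), p in p_joint.items():
        p_a[a] = p_a.get(a, 0) + p
        p_b[b] = p_b.get(b, 0) + p
    return 0.5 * sum(abs(p_joint.get((a, b), 0) - p_a[a] * p_b[b])
                     for a in p_a for b in p_b)
```

With $A$ uniform on a 4-symbol alphabet and $B=\mathbb 1\{A \geq 2\}$, one gets $I(A;B)=1$ bit and $V = 1/2$, and both inequalities of the lemma hold strictly.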
\vspace{0.2cm}
\begin{lem}[$\mbox{\cite[Lemma 16]{cuff2009thesis}}$]\label{cuff16}
For any two joint distributions $P_{AB}$ and
$\widehat P_{AB}$, the total variation distance between them can
only be reduced when attention is restricted to $P_{A}$ and $\widehat P_{A}$.
That is,
$$\tv (P_{A}, \widehat P_{A}) \leq \tv (P_{AB}, \widehat P_{AB}).$$
\end{lem}
\vspace{0,2cm}
\begin{lem}[$\mbox{\cite[Lemma 17]{cuff2009thesis}}$]\label{cuff17}
When two random variables are passed through the same channel, the total variation between the resulting input-output joint distributions is the same as the total variation between the input distributions. Specifically,
$$\tv (P_A, \widehat P_A)= \tv (P_AP_{B|A}, \widehat P_A P_{B|A}).$$
\end{lem}
\vspace{0,2cm}
\begin{lem}\label{lemkl}
When two random variables are passed
through the same channel, the Kullback-Leibler divergence between the resulting input-output joint distributions is the same as the Kullback-Leibler divergence between the input distributions. Specifically,
$$\D \left(P_A \Arrowvert \widehat P_A\right)= \D \left(P_AP_{B|A} \Arrowvert \widehat P_AP_{B|A}\right).$$
\end{lem}
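The two invariance lemmas above (for total variation and for Kullback-Leibler divergence through a common channel) can be verified exactly on a toy input pair and channel; all numbers below are invented for illustration:

```python
import math

def through_channel(P_A, P_B_given_A):
    """Joint pmf P_A(a) * P_{B|A}(b|a), returned as {(a, b): prob}."""
    return {(a, b): pa * pba
            for a, pa in P_A.items()
            for b, pba in P_B_given_A[a].items()}

def tv(P, Q):
    keys = set(P) | set(Q)
    return 0.5 * sum(abs(P.get(k, 0.0) - Q.get(k, 0.0)) for k in keys)

def kl(P, Q):
    return sum(p * math.log2(p / Q[k]) for k, p in P.items() if p > 0)

P_A = {0: 0.7, 1: 0.3}
Q_A = {0: 0.4, 1: 0.6}
channel = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.2, 1: 0.8}}  # a P_{B|A}

P_AB = through_channel(P_A, channel)
Q_AB = through_channel(Q_A, channel)
assert abs(tv(P_A, Q_A) - tv(P_AB, Q_AB)) < 1e-12  # total variation is preserved
assert abs(kl(P_A, Q_A) - kl(P_AB, Q_AB)) < 1e-12  # KL divergence is preserved
```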
\vspace{0,2cm}
\begin{lem}[$\mbox{\cite[Lemma 4]{yassaee2014achievability}}$]\label{lem4}
If $ \tv \left(P_{Y^{n}} P_{X^{n}|Y^{n}},P'_{Y^{n}} P'_{X^{n} |Y^{n}} \right) = \varepsilon$ then there exists $\mathbf y \in \mathcal Y^{n}$ such that
\begin{equation*}
\tv \left(P_{X^{n}|Y^{n}= \mathbf y},P'_{X^{n} |Y^{n}= \mathbf y} \right) \leq 2 \varepsilon.
\end{equation*}
\end{lem}
The proofs of the following results are in Appendix \ref{appendix prel}.
The following lemma is in the same spirit as \cite[Lemma VI.3]{cuff2013distributed}.
We state a slightly different version which is more convenient for our proofs.
\vspace{0,2cm}
\begin{lem}\label{lemmit}
Let $P_{A^{n}}$ be such that
$\tv\left(P_{A^{n}}, \bar P_{A}^{\otimes n}\right) \leq \varepsilon,$
then we have
\begin{equation*}
\sum_{t=1}^{n} I(A_t;A_{\sim t}) \leq n f(\varepsilon).
\end{equation*}
In particular, if $P_{AB}$ is such that $\tv\left(P_{AB}, \bar P_{A}\bar P_{B}\right) \leq \varepsilon$,
then $ I(A;B) \leq f(\varepsilon) $.
\end{lem}
\vspace{0,2cm}
\begin{lem}\label{lemab}
Let $P_{A^{n} B^{n}}$ be such that
$\tv\left(P_{A^{n} B^{n}}, \bar P_{AB}^{\otimes n}\right) \leq \varepsilon$.
Then we have
\begin{equation}\label{lemab1}
\sum_{t=1}^{n} I(A_t;A^{t-1} B_{\sim t}|B_t) \leq n f(\varepsilon).
\end{equation}
Moreover, let $T$ be a time index uniformly distributed over $\llbracket 1,n \rrbracket$ and independent of all the other random variables; then for any random variable $C$ we have
\begin{equation}\label{lemab2}
H(C|B^{n}) \geq n I(A_T;CB_{\sim T} T|B_T ) -n I(A_T B_T;T) -n f(\varepsilon).
\end{equation}
\end{lem}
\section{Inner and outer bounds for the strong coordination region} \label{sec: isit2017}
\subsection{System model} \label{sec: sys}
\begin{center}
\begin{figure}[ht]
\centering
\includegraphics[scale=0.21]{./genproblem.pdf}
\caption{Coordination of signals and actions for a two-node network with a noisy channel with non-causal encoder and decoder.}
\label{fig: coordisit}
\end{figure}
\end{center}
\vspace{-0,7cm}
Before we study the general model with a state in detail, it is helpful
to consider a simpler model depicted in Figure \ref{fig: coordisit} to understand the nature of the problem.
Two agents, the encoder and the decoder, wish to coordinate their behaviors:
the stochastic actions of the agents should follow a known and fixed joint distribution.
We suppose that the encoder and the decoder have access to a shared source of uniform randomness $C \in \llbracket 1,2^{nR_0} \rrbracket$.
Let $U^{n} \in \mathcal U^n $ be an i.i.d. source with distribution $\bar P_{U}$.
The encoder observes the sequence $U^{n} \in \mathcal U^n$ and selects a signal
$X^{n}= f_n(U^{n}, C)$, $f_n: \mathcal U^n \times \llbracket 1,2^{nR_0} \rrbracket \rightarrow \mathcal X^n$.
The signal $X^{n}$ is transmitted over a discrete memoryless channel parametrized by the conditional distribution $\bar P_{Y |X}$.
Upon observing $Y^{n}$ and common randomness $C$,
the decoder selects an action $V^{n} = g_n(Y^{n}, C)$, where
$g_n: \mathcal Y^n \times \llbracket 1,2^{nR_0} \rrbracket \rightarrow \mathcal V^n$ is a stochastic map.
For block length $n$, the pair $(f_n , g_n )$ constitutes a code.
We recall the definitions of achievability and of the coordination region for empirical and strong coordination \cite{cuff2010,cuff2009thesis}.
\begin{defi}
A distribution $\bar{P}_{UXYV}$ is \emph{achievable} for \emph{empirical coordination} if for all $\varepsilon>0$ there exists a sequence $(f_n,g_n)$ of encoders-decoders such that
$$ \mathbb P \left\{ \tv \left( T_{U^{n} X^{n} Y^{n} V^{n}}, \bar{P}_{UXYV} \right) > \varepsilon \right\} < \varepsilon$$
where $T_{U^{n} X^{n} Y^{n} V^{n}}$ is the joint histogram of the actions induced by the code.
The \emph{empirical coordination region} $\mathcal R_e$ is the closure of the set of achievable distributions $\bar{P}_{UXYV}$.
\end{defi}
\begin{defi}
A pair $(\bar{P}_{UXYV}, R_0)$ is \emph{achievable} for \emph{strong coordination} if there exists a sequence $(f_n,g_n)$ of encoders-decoders with rate of common randomness $R_0$, such that
$$\lim_{n \to \infty} \tv \left( P_{U^{n} X^{n} Y^{n} V^{n}}, \bar{P}_{UXYV}^{\otimes n} \right)=0$$
where $P_{U^{n} X^{n} Y^{n} V^{n}}$ is the joint distribution induced by the code.
The \emph{strong coordination region} $\mathcal{R}$ is the closure of the set of achievable pairs
$(\bar P_{UXYV}, R_0)$\footnote{As in \cite{cuff2010},
we define the achievable region as the closure of the set of achievable rates and distributions. This definition
allows us to avoid boundary complications.
For a thorough discussion on the boundaries of the achievable region when $\mathcal{R}$ is
defined as the closure of the set of rates for a given distribution, see \cite[Section VI.D]{cuff2013distributed}.}.
\end{defi}
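To make the strong coordination definition concrete, consider the following degenerate sanity check (our own toy construction, not a result from the paper): over a noiseless channel $Y=X$, the deterministic code $X^n = U^n$, $V^n = Y^n$ induces exactly the i.i.d. target distribution, so the total variation is zero for every $n$.

```python
import itertools

bar_P_U = {0: 0.5, 1: 0.5}
n = 3

def target(u, x, y, v):
    """Single-letter target: bar P_U(u) * 1{X=U} * 1{Y=X} * 1{V=Y}."""
    return bar_P_U[u] if (x == u and y == x and v == y) else 0.0

# Distribution induced by the code f(u) = u, g(y) = y over the noiseless channel.
induced = {}
for u_seq in itertools.product(bar_P_U, repeat=n):
    p = 1.0
    for u in u_seq:
        p *= bar_P_U[u]
    x_seq = u_seq   # encoder
    y_seq = x_seq   # noiseless channel Y = X
    v_seq = y_seq   # decoder
    induced[(u_seq, x_seq, y_seq, v_seq)] = p

# Total variation to the i.i.d. target (here the supports coincide exactly).
tv = 0.0
for key in set(induced):
    u_seq, x_seq, y_seq, v_seq = key
    p_target = 1.0
    for u, x, y, v in zip(u_seq, x_seq, y_seq, v_seq):
        p_target *= target(u, x, y, v)
    tv += abs(induced[key] - p_target)
tv *= 0.5
print(tv)  # 0.0
```

Of course the interesting cases are noisy channels, where achieving vanishing total variation requires the binning constructions developed later.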
Our first result is an inner and outer bound for the strong coordination region $\mathcal{R}$ \cite{Cervia2017}.
\begin{teo} \label{teoisit}
Let $\bar P_{U}$ and $\bar P_{Y|X}$ be the given source and channel parameters, then
$\mathcal R'_{\text{in}} \subseteq \mathcal{R} \subseteq \mathcal R'_{\text{out}}$ where:
{\allowdisplaybreaks
\begin{align}
\mathcal R'_{\text{in}} &:= \begin{Bmatrix}[c|l]
& \bar P_{UXYV}= \bar P_{U} \bar P_{X|U} \bar P_{Y|X} \bar P_{V|UXY} \\
&\exists \mbox{ } W \mbox{ taking values in $\mathcal W$}\\
(\bar P_{UXYV}, R_0) & \bar P_{UXYWV}= \bar P_{U} \bar P_{W|U} \bar P_{X|UW} \bar P_{Y|X} \bar P_{V|WY}\\
& I(W;U) \leq I(W;Y) \\
& R_0 \geq I(W;UXV|Y)\\
\end{Bmatrix} \label{eq: regionisit}\\
\quad
\mathcal R'_{\text{out}} &:= \begin{Bmatrix}[c|l]
& \bar P_{UXYV}= \bar P_{U} \bar P_{X|U} \bar P_{Y|X} \bar P_{V|UXY} \\
&\exists \mbox{ } W \mbox{ taking values in $\mathcal W$}\\
(\bar P_{UXYV}, R_0) &\bar P_{UXYWV}= \bar P_{U} \bar P_{W|U} \bar P_{X|UW} \bar P_{Y|X} \bar P_{V|WY}\\
& I(W;U) \leq I(X;Y)\\
& R_0 \geq I(W;UXV|Y)\\
&\lvert \mathcal W \rvert \leq \lvert \mathcal U \times \mathcal X \times \mathcal Y \times {\mathcal V} \rvert+4 \\
\end{Bmatrix}. \label{eq: regionisit2}
\end{align}}
\end{teo}
\begin{oss}
Observe that the decomposition of the joint distributions $\bar P_{UXYV}$ and $\bar P_{UWXYV} $
is equivalently characterized in terms of Markov chains:
{\allowdisplaybreaks
\begin{equation}\label{markov chain isit}
Y-X-U,
\quad \quad
\begin{cases}
Y-X-(U,W)\\
V-(Y,W)-(X,U)
\end{cases}.
\end{equation}
}
\end{oss}
\begin{oss}
The empirical coordination region for the setting of Figure \ref{fig: coordisit} was investigated in \cite{cuff2011hybrid}, in which the authors derived an inner and outer bound.
Note that the information constraint $I(W;U) \leq I(W;Y)$ and
the decomposition of the joint probability distribution $\bar P_{U} \bar P_{W|U} \bar P_{X|UW} \bar P_{Y|X} \bar P_{V|WY}$
are the same for empirical coordination \cite[Theorem 1]{cuff2011hybrid}.
The main difference is that strong coordination requires a rate of common randomness $R_0 \geq I(W;UXV|Y)$.
\end{oss}
\subsection{Proof of Theorem \ref{teoisit}: inner bound}
We postpone the achievability proof because it is a corollary of the inner bound in the general setting of Theorem \ref{teouv} proven in Section \ref{inner}.
A stand-alone proof can be found in the conference version of the present paper \cite{Cervia2017}.
\begin{oss}
With the same random binning techniques, \cite{haddadpour2017simulation} characterizes an inner bound for the strong coordination region
in the slightly different scenario in which only $U^n$ and $V^n$ need to be coordinated.
Given the source and channel parameters $\bar P_{U}$ and $\bar P_{Y|X}$ respectively, the inner bound in \cite{haddadpour2017simulation} is:
{\allowdisplaybreaks
\begin{equation}\label{eq: regionhaddad}
\mathcal{R}_{\text{Hadd,in}}:= \begin{Bmatrix}[c|l]
& \bar P_{UXYV}= \bar P_{U} \bar P_{X|U} \bar P_{Y|X} \bar P_{V|UXY} \\
&\exists \mbox{ } W \mbox{ taking values in $\mathcal W$}\\
(\bar P_{UV}, R_0) & \bar P_{UXYWV}= \bar P_{U} \bar P_{WX|U} \bar P_{Y|X} \bar P_{V|WY}\\
& I(W;U) \leq I(W;Y) \\
& R_0 \geq I(W;UV)-I(W;Y)\\
\end{Bmatrix}.
\end{equation}}
Note that the joint distribution and the information constraints are the same as in \eqref{eq: regionisit}
but in \eqref{eq: regionisit} the rate of common randomness is larger since
\begin{equation*}
I(W;UXV|Y)=I(W;UXYV)-I(W;Y) \geq I(W;UV)-I(W;Y).
\end{equation*}
The difference in common randomness rate, $I(W;XY|UV)$, stems from the weaker requirement in \cite{haddadpour2017simulation},
which coordinates $U^n$ and $V^n$ but not necessarily $(U^{n}, X^{n}, Y^n, V^{n})$.
\end{oss}
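The chain-rule identity and the inequality used in the remark above hold for every joint distribution; here is a brute-force check on a random five-variable pmf (a toy construction of our own, with all mutual informations computed exhaustively):

```python
import itertools, math, random

random.seed(0)
# Random joint pmf over five binary variables (U, X, Y, V, W); purely illustrative.
keys = list(itertools.product([0, 1], repeat=5))
weights = [random.random() for _ in keys]
Z = sum(weights)
P = {k: w / Z for k, w in zip(keys, weights)}
U, X, Y, V, W = range(5)  # coordinate indices

def H(P, idx):
    """Entropy in bits of the marginal on the coordinate tuple idx."""
    marg = {}
    for k, p in P.items():
        key = tuple(k[i] for i in idx)
        marg[key] = marg.get(key, 0.0) + p
    return -sum(p * math.log2(p) for p in marg.values() if p > 0)

def I(P, a, b, cond=()):
    """Conditional mutual information I(a; b | cond) in bits."""
    a, b, cond = tuple(a), tuple(b), tuple(cond)
    return H(P, a + cond) + H(P, b + cond) - H(P, a + b + cond) - H(P, cond)

lhs = I(P, (W,), (U, X, V), cond=(Y,))                # I(W; UXV | Y)
chain = I(P, (W,), (U, X, Y, V)) - I(P, (W,), (Y,))   # I(W; UXYV) - I(W; Y)
rhs = I(P, (W,), (U, V)) - I(P, (W,), (Y,))           # I(W; UV) - I(W; Y)
assert abs(lhs - chain) < 1e-9   # chain-rule identity
assert lhs >= rhs - 1e-9         # the inequality in the remark
```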
\subsection{Proof of Theorem \ref{teoisit}: outer bound}
Consider a code $(f_n,g_n)$ that induces a distribution $P_{U^{n} X^{n} Y^{n} V^{n}}$
that is $\varepsilon$-close in total variational distance to the i.i.d. distribution $\bar P_{U X Y V}^{\otimes n}$.
Let the random variable $T$ be uniformly distributed over the
set $\llbracket 1,n\rrbracket$ and independent of the sequence
$(U^{n}, X^{n}, Y^{n}, V^{n}, C)$. The variable $T$ will serve as a random time index.
The variable $U_T$ is independent of $T$ because $U^{n}$ is an i.i.d. source sequence \cite{cuff2010}.
\subsubsection{Bound on $R_0$}\label{isitconvpart1}
We apply Lemma \ref{lemab} to $A^{n}:= U^{n} X^{n} V^{n}$, $B^{n}:=Y^{n}$ and $C$ and, using \eqref{lemab2}, we have
{\allowdisplaybreaks
\begin{align*}
& nR_0 \geq H(C) \geq H(C|B^{n}) \\
& \overset{(a)}{\geq} n I(A_T;CB_{\sim T} T|B_T ) -n I(A_T B_T;T) -n f(\varepsilon) \stepcounter{equation}\tag{\theequation}\label{boundr0}\\
& \overset{(b)}{\geq} n I(A_T;CB_{\sim T} T|B_T ) -2n f(\varepsilon)=n I(U_T X_T V_T;CY_{\sim T} T|Y_T) -2n f(\varepsilon)
\end{align*}}where $(a)$ follows from Lemma \ref{lemab} and $(b)$ comes from \cite[Lemma VI.3]{cuff2013distributed}.
\subsubsection{Information constraint}\label{isitconvpart2}
We have
{\allowdisplaybreaks
\begin{align*}
& 0 \overset{(a)}{\leq} I(X^{n};Y^{n})-I(C,U^{n};Y^{n}) \\
&\leq I(X^{n};Y^{n})-I(U^{n};Y^{n}|C)\\
& = H(Y^{n})-H(Y^{n}|X^{n}) +H(U^{n}|Y^{n} C)-H(U^{n}|C)\\
& \overset{(b)}{\leq} \sum_{t=1}^{n} H(Y_t) \!-\! \sum_{t=1}^{n} H(Y_t|X_t) + \! \sum_{t=1}^{n} H(U_t|U^{t-1} Y_t Y_{\sim t} C)\!- \! \sum_{t=1}^{n} H(U_t)\\
& \overset{(c)}{\leq} \sum_{t=1}^{n} \left(H(Y_t) -H(Y_t|X_t)+ H(U_t|Y_{\sim t} C)- H(U_t)\right)\\
& \overset{(d)}{\leq} n H(Y_T) - n H(Y_T|X_T T)+ n H(U_T|Y_{\sim T} C T) - n H(U_T|T) \\
& \overset{(e)}{=} n H(Y_T) -n H(Y_T|X_T) + nH(U_T|Y_{\sim T} C T)-n H(U_T) \\
&= n I(X_T; Y_T)-n I(U_T;Y_{\sim T} C T)
\end{align*}}where $(a)$ comes from the Markov chain $Y^{n}-X^{n}-(C,U^{n})$ and $(b)$ comes from the chain rule for the conditional entropy and the fact that
$U^{n}$ is an i.i.d. source independent of $C$. The inequalities $(c)$ and $(d)$ come
from the fact that
conditioning does not increase entropy and
$(e)$ from the memoryless nature of the channel $\bar P_{Y|X}$ and the i.i.d. nature of the source $\bar P_{U}$.
\subsubsection{Identification of the auxiliary random variable}\label{identification}
We identify the auxiliary random variables $W_t$ with $(C,Y_{\sim t})$ for each $t \in \llbracket 1,n\rrbracket$ and $W$ with
$(W_T,T)=(C,Y_{\sim T}, T)$. For each $t \in \llbracket 1,n\rrbracket$ the following two Markov chains hold:
{\allowdisplaybreaks
\begin{align}
Y_t-X_t-(C, Y_{\sim t}, U_t) \quad &\Longleftrightarrow \quad Y_t-X_t-(W_t, U_t) \label{mc1t}\\
V_t-(C,Y_{\sim t},Y_t)-(U_t,X_t) \quad &\Longleftrightarrow \quad V_t-(W_t,Y_t)-(U_t,X_t)\label{mc2t}
\end{align}}where \eqref{mc1t} comes from the fact that the channel is memoryless and \eqref{mc2t} from the fact that the decoder is non-causal
and for each $t \in \llbracket 1,n\rrbracket$ the decoder generates $V_t$ from $Y^{n}$ and common randomness $C$.
Then, we have
{\allowdisplaybreaks
\begin{align}
Y_T-X_T-(C, Y_{\sim T}, U_T, T) \quad &\Longleftrightarrow \quad Y_T-X_T-(W_T, U_T, T) \label{mc1T}\\
V_T-(C,Y_{\sim T},Y_T, T)-(U_T,X_T) \quad &\Longleftrightarrow \quad V_T-(W_T,Y_T, T)-(U_T,X_T) \label{mc2T}
\end{align}}where \eqref{mc1T} holds because
\begin{align*}
\mathbb P \{Y_T=y| X_T=x, Y_{\sim T}=\tilde{\mathbf y} , U_T=u, T=t, C\}= \mathbb P \{Y_T=y| X_T=x\}
\end{align*}
since the channel is memoryless. Then by \eqref{mc2t}, \eqref{mc2T} holds because
{\allowdisplaybreaks
\begin{align*}
I(V_T;U_T X_T| C Y^{n} T)= \frac{1}{n} \sum_{t=1}^n I(V_t;U_t X_t| C Y^{n} T=t)=0.
\end{align*}
}
Since $W=W_t$ when $T=t$, we also have $(U,X)-(W,Y)-V$ and $Y-X-(U,W)$.
The cardinality bound is proved in $\mbox{Appendix \ref{appendix bounds}}$.
\section{Inner and outer bounds for the strong coordination region with state and side information}\label{sec: innerouter}
\begin{center}
\begin{figure}[ht]
\centering
\includegraphics[scale=0.21]{./genproblemsz.pdf}
\caption{Coordination of signals and actions for a two-node network with a noisy channel with state and side information at the decoder.}
\label{fig: generalsetting}
\end{figure}
\end{center}
\vspace{-0,8cm}
In this section we consider the model depicted in Figure~\ref{fig: generalsetting}.
It is a generalization of the simpler setting of Figure~\ref{fig: coordisit}, where
the noisy channel depends on a state $S^n$, and the decoder has access to non-causal side information $Z^n$.
The encoder selects a signal $X^{n}= f_n(U^{n}, C)$, with $f_n: \mathcal U^n \times \llbracket 1,2^{nR_0} \rrbracket \rightarrow \mathcal X^n$
and transmits it over the discrete memoryless channel $\bar P_{Y |XS}$ where $S$ represents the state.
The decoder then selects an action $V^{n} = g_n(Y^{n}, Z^{n}, C)$, where
$g_n: \mathcal Y^n \times \mathcal Z^n \times \llbracket 1,2^{nR_0} \rrbracket \rightarrow \mathcal V^n$ is a stochastic map and $Z^{n}$
represents the side information available at the decoder.
\begin{oss}
Note that the channel state information and side information at the decoder are represented explicitly by the random
variables $S^{n}$ and $Z^{n}$ respectively, but the model is quite general and includes scenarios where
partial or perfect channel state information is available at the encoder as well,
since the variables $U^{n}$ and $S^{n}$ are possibly correlated.
\end{oss}
We recall the notions of achievability and of the coordination region for empirical and strong coordination \cite{cuff2010,cuff2009thesis} in this setting.
\begin{defi}
A distribution $\bar{P}_{USZXYV}$ is \emph{achievable} for \emph{empirical coordination} if for all $\varepsilon>0$ there exists a sequence $(f_n,g_n)$ of encoders-decoders such that
$$ \mathbb P \left\{ \tv \left( T_{U^{n} S^{n} Z^n X^{n} Y^{n} V^{n}}, \bar{P}_{USZXYV} \right) > \varepsilon \right\} < \varepsilon$$
where $T_{U^{n} S^{n} Z^n X^{n} Y^{n} V^{n}}$ is the joint histogram of the actions induced by the code.
The \emph{empirical coordination region} $\mathcal R_e$ is the closure of the set of achievable distributions $\bar{P}_{USZXYV}$.\\
A pair $(\bar{P}_{USZXYV}, R_0)$ is \emph{achievable} for \emph{strong coordination} if there exists a sequence $(f_n,g_n)$ of encoders-decoders with rate of common randomness $R_0$, such that
$$\lim_{n \to \infty} \tv \left( P_{U^{n} S^{n} Z^n X^{n} Y^{n} V^{n}}, \bar{P}_{USZXYV}^{\otimes n} \right)=0$$
where $P_{U^{n} S^{n} Z^n X^{n} Y^{n} V^{n}}$ is the joint distribution induced by the code.
The \emph{strong coordination region} $\mathcal{R}$ is the closure of the set of achievable pairs $(\bar P_{USZXYV}, R_0)$.
\end{defi}
\vspace{0,2cm}
In the case of non-causal encoder and decoder, the problem of characterizing the strong
coordination region for the system model in Figure \ref{fig: generalsetting} is still open, but we establish the following inner and outer bounds.
\begin{teo} \label{teouv}
Let $\bar P_{USZ}$ and $\bar P_{Y|XS}$ be the given source and channel parameters, then
$\mathcal R_{\text{in}} \subseteq \mathcal{R}\subseteq \mathcal R_{\text{out}}$ where:
{\allowdisplaybreaks
\begin{align}
\mathcal R_{\text{in}} &:= \begin{Bmatrix}[c|l]
& \bar P_{USZXYV}= \bar P_{USZ} \bar P_{X|U} \bar P_{Y|XS} \bar P_{V|UXYSZ} \\
&\exists \mbox{ } W \mbox{ taking values in $\mathcal W$}\\
(\bar P_{USZXYV}, R_0)&\bar P_{USZWXYV}=\bar P_{USZ} \bar P_{W|U} \bar P_{X|UW} \bar P_{Y|XS} \bar P_{V|WYZ}\\
&I(W;U) \leq I(W;YZ)\\
& R_0 \geq I(W;USXV|YZ)\\
\end{Bmatrix} \label{eq: region inn} \\
\mathcal R_{\text{out}} &:= \begin{Bmatrix}[c|l]
& \bar P_{USZXYV}= \bar P_{USZ} \bar P_{X|U} \bar P_{Y|XS} \bar P_{V|UXYSZ} \\
&\exists \mbox{ } W \mbox{ taking values in $\mathcal W$}\\
(\bar P_{USZXYV}, R_0) &\bar P_{USZWXYV}=\bar P_{USZ} \bar P_{W|U} \bar P_{X|UW} \bar P_{Y|XS} \bar P_{V|WYZ}\\
& I(W;U) \leq \min \{I(XUS;YZ),I(XS;Y)+I(U;Z)\} \\
& R_0 \geq I(W;USXV|YZ)\\
& \lvert \mathcal W \rvert \leq \lvert \mathcal U \times \mathcal S \times \mathcal Z \times \mathcal X \times \mathcal Y \times {\mathcal V} \rvert+5\\
\end{Bmatrix} \label{eq: region out}
\end{align}
}
\end{teo}
\begin{oss}
As in Theorem \ref{teoisit}, even if the inner and outer bounds do not match, they differ only in the upper bound on $I(W;U)$.
Note that $I(XUS;YZ)$ and $I(XS;Y)+I(U;Z)$ cannot be compared in general (for more details, see the discussion in Appendix \ref{appendix compare}).
Hence, in $\mathcal R_{\text{out}}$ the upper bound on the mutual information $I(W;U)$ is the minimum of the two.
\end{oss}
\begin{oss}
Observe that the decomposition of the joint distributions $\bar P_{USZXYV}$ and $\bar P_{USZWXYV}$
is equivalently characterized in terms of Markov chains:
{\allowdisplaybreaks
\begin{equation}\label{markov chain general}
\begin{cases}
Z-(U,S)-(X,Y)\\
Y-(X,S)-U
\end{cases},\quad
\begin{cases}
Z-(U,S)-(X,Y,W)\\
Y-(X,S)-(U,W)\\
V-(Y,Z,W)-(X,S,U)
\end{cases}.
\end{equation}
}
\end{oss}
\subsection{Proof of Theorem \ref{teouv}: inner bound}\label{inner}
The achievability proof uses the same techniques as in \cite{haddadpour2017simulation}
inspired by \cite{yassaee2014achievability}.
The key idea of the proof is to define a random binning and a random coding scheme, each of
which induces a joint distribution, and to prove that the two schemes have the same statistics.
Before defining the coding schemes, we state the results that we will use to prove the inner bound.
The following lemma is a consequence of the Slepian-Wolf Theorem.
\begin{lem}[Source coding with side information at the decoder $\mbox{\cite[Theorem 10.1]{elgamal2011nit}}$ ]\label{lem1}
Given a discrete memoryless source $(A^{n},B^{n})$, where $B^{n}$ is side information available at the decoder,
we define an encoder $\psi_n: \mathcal A^n \to \llbracket 1,2^{nR} \rrbracket$, where
$C:= \psi_n(A^{n})$ is a binning of $A^{n}$.
If $R>H(A|B)$, the decoder recovers $A^{n}$ from $C$ and $B^{n}$ with arbitrarily small error probability.
\end{lem}
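The lemma can be illustrated with a small Monte-Carlo simulation (our own toy setup with invented parameters, not a construction from the paper): a doubly symmetric binary source, a uniformly random binning at rate $R > H(A|B)$, and a minimum-Hamming-distance decoder inside each bin playing the role of the Slepian-Wolf decoder.

```python
import itertools, random
from collections import defaultdict

random.seed(1)
n, p = 12, 0.1          # DSBS: A ~ Bern(1/2)^n, B = A xor noise, H(A|B) = h(0.1) ~ 0.47
R = 0.75                # binning rate, chosen so that R > H(A|B)
num_bins = 2 ** round(n * R)

seqs = list(itertools.product([0, 1], repeat=n))
binning, bins = {}, defaultdict(list)
for a in seqs:          # uniform and independent random binning of A^n
    c = random.randrange(num_bins)
    binning[a] = c
    bins[c].append(a)

def decode(c, b):
    """Pick the sequence in bin c closest (in Hamming distance) to side info b."""
    return min(bins[c], key=lambda a: sum(x != y for x, y in zip(a, b)))

trials, errors = 200, 0
for _ in range(trials):
    a = tuple(random.randint(0, 1) for _ in range(n))
    b = tuple(x ^ (random.random() < p) for x in a)
    errors += decode(binning[a], b) != a
print(errors / trials)  # a small fraction, which shrinks further as n grows
```

At this very short block length the error probability is still noticeable; the lemma only guarantees that it vanishes as $n \to \infty$.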
\begin{lem}[Channel randomness extraction $\mbox{\cite[Lemma 3.1]{ahlswede1998common}}$ and $\mbox{\cite[Theorem 1]{yassaee2014achievability}}$]\label{cor1}
Given a discrete memoryless source $(A^{n},B^{n})$,
we define a stochastic encoder $\varphi_n: \mathcal B^n \to \llbracket 1, J_n \rrbracket$, where
$K:= \varphi_n(B^{n})$ is a binning of $B^{n}$ with $J_n$ values chosen independently and uniformly at random.
If $R := \frac{1}{n}\log J_n < H(B|A)$, then we have
$$ \lim_{n \to \infty} \mathbb E_{\varphi_n} \left[\mathbb V \left( P_{A^{n} K}^{\varphi}, Q_K P_{A^{n}}\right)\right]=0,$$
where $\mathbb{E}_{\varphi_n}$ denotes the average over the random binnings and $Q_K$ is the uniform distribution on $\llbracket 1,J_n\rrbracket$.
\end{lem}
Although Lemma \ref{cor1} ensures convergence in total variational distance and is therefore enough to prove strong coordination,
it does not provide any insight into the speed of convergence.
For this reason, throughout the proof we will use the following lemma instead.
We omit the proof as it follows directly from the discussion in \cite[Section III.A]{pierrot2013joint}.
\begin{lem}[Channel randomness extraction for discrete memoryless sources and channels]\label{1.4.2}
Let $A^n$ with distribution $P_{A^n}$ be a discrete memoryless source and $P_{B^n|A^n}$ a discrete memoryless channel.
Then for every $\varepsilon >0$, there exists a sequence of $(J_n , n)$ codes
$\varphi_n: \mathcal B^n \to \llbracket 1, J_n \rrbracket$ and a constant $\alpha > 0$ such that for $K:= \varphi_n(B^{n})$ we have
\begin{equation}\label{eq1lem1.4.2}
\liminf_{n \to \infty} \frac{\log{J_n}}{n} \geq (1-\varepsilon)H(B|A) \quad \mbox{and} \quad \mathbb D\left(P_{A^nK} \Arrowvert P_{A^n} Q_K \right) \leq 2^{-\alpha n}.
\end{equation}
\end{lem}
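The extraction phenomenon behind the two lemmas above can be computed exactly for tiny block lengths (again an invented toy setup): when the binning rate is below $H(B|A)$, the extracted index is almost uniform and almost independent of $A^n$, whereas over-extraction at a rate above $H(B|A)$ leaves a clearly visible dependence.

```python
import itertools, random

random.seed(2)
p = 0.25   # DSBS correlation: B = A xor Bern(p), so H(B|A) = h(0.25) ~ 0.81 bits

def extraction_tv(n, R):
    """Exact V(P_{A^n K}, P_{A^n} Q_K) for a uniformly random binning of B^n."""
    J = 2 ** max(1, round(n * R))
    seqs = list(itertools.product([0, 1], repeat=n))
    phi = {b: random.randrange(J) for b in seqs}   # random binning K = phi(B^n)
    tv = 0.0
    for a in seqs:
        pa = 0.5 ** n
        joint_k = [0.0] * J
        for b in seqs:
            d = sum(x != y for x, y in zip(a, b))
            joint_k[phi[b]] += pa * (p ** d) * ((1 - p) ** (n - d))
        tv += 0.5 * sum(abs(jk - pa / J) for jk in joint_k)
    return tv

tv_low = extraction_tv(8, 0.5)    # rate below H(B|A): near-uniform, near-independent K
tv_high = extraction_tv(8, 1.0)   # over-extraction: K retains dependence on A^n
print(tv_low, tv_high)            # the first is noticeably smaller
```

Even at block length $n=8$ the gap between the two regimes is clear; the lemmas assert that in the first regime the distance in fact decays exponentially in $n$.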
\subsubsection{Random binning scheme}\label{rb gen}
Assume that the sequences $U^{n}$, $S^{n}$, $Z^{n}$, $X^{n}$, $W^{n}$, $Y^{n}$ and $V^{n}$
are jointly i.i.d. with distribution
{\allowdisplaybreaks
\begin{align*}
& \bar P_{U^{n}S^{n}Z^{n}} \bar P_{W^{n}| U^{n}} \bar P_{X^{n}| W^{n} U^{n}} \bar P_{Y^{n}|X^{n} S^{n}} \bar P_{V^{n}|W^{n} Y^{n} Z^{n}}.
\end{align*}}
We consider two uniform random binnings for $W^{n}$:
\begin{itemize}
\item first binning $C = \varphi_1(W^{n})$, where $\varphi_1: \mathcal{W}^{n} \to \llbracket 1,2^{nR_0} \rrbracket$ is an encoder which maps each sequence of $\mathcal{W}^{n}$ uniformly and independently to the set $\llbracket 1,2^{nR_0} \rrbracket$;
\item second binning $F = \varphi_2(W^{n})$, where $\varphi_2: \mathcal{W}^{n} \to \llbracket 1,2^{n \tilde R} \rrbracket$ is an encoder which maps each sequence of $\mathcal{W}^{n}$ uniformly and independently to the set $\llbracket 1,2^{n \tilde R} \rrbracket$.
\end{itemize}
\begin{center}
\begin{figure}[h]
\centering
\includegraphics[scale=0.23]{./doublebinning.pdf}
\caption{The square and the circle represent the possible outputs $C$ of the first binning and the dot and the cross the outputs $F$ of the second binning. Given $\mathbf y$ and the realizations of $C$ and $F$, it is possible to recover $\mathbf w$. }
\label{fig: db}
\end{figure}
\end{center}
\vspace*{-0.6cm}
Note that if $\tilde R+R_0 >H(W|YZ)$, by Lemma \ref{lem1}, it is possible to recover $W^{n}$ from $Y^{n}$, $Z^{n}$ and $(C, F)$ with
high probability using a Slepian-Wolf decoder via the conditional distribution $P^{\text{SW}}_{\widehat W^{n}|CF Y^{n} Z^{n}}$.
This defines a joint distribution:
{\allowdisplaybreaks
\begin{align*}
&\bar P_{U^{n} S^{n}Z^{n}W^{n} \widehat W^{n} X^{n} Y^{n} C F V^{n}}= \bar P_{U^{n} S^{n}Z^{n}} \bar P_{W^{n}|U^{n}} \bar P_{X^{n}|W^{n} U^{n}} \bar P_{C|W^{n}} \bar P_{F|W^{n}} \bar P_{Y^{n}|X^{n} S^{n}} \bar P_{V^{n}|W^{n} Y^{n} Z^{n}} P^{\text{SW}}_{\widehat W^{n}|C F Y^{n} Z^{n}}.
\end{align*}
}In particular, $\bar P_{W^{n}|CFU^{n}}$ is well defined.
\subsubsection{Random coding scheme}\label{rc gen}
In this section we follow the approach in \cite[Section IV.E]{yassaee2014achievability}.
Suppose that in the setting of Figure \ref{fig: generalsetting}, encoder and decoder have access not only to common randomness $C$
but also to extra randomness $F$, where $C$ is generated uniformly at random
in $\llbracket 1,2^{nR_0} \rrbracket$ with distribution $Q_C$ and $F$ is generated uniformly at
random in $\llbracket 1,2^{n \tilde R} \rrbracket$ with distribution $Q_F$ independently of $C$.
Then, the encoder generates $W^{n}$ according to $\bar P_{W^{n}|CFU^{n}}$ defined above and $X^{n}$ according to $\bar P_{X^{n}|U^{n} W^{n}}$.
The encoder sends $X^{n}$ through the channel.
The decoder obtains $(Y^{n}, Z^{n})$ and $(C,F)$ and reconstructs $W^{n}$
via the conditional distribution $P^{\text{SW}}_{\widehat W^{n}|CF Y^{n} Z^{n}}$.
The decoder then generates $V^{n}$ letter by letter according to the distribution $P_{V^{n}|\widehat W^{n} Y^{n}Z^{n}}$
(more precisely $\bar P_{V^{n}| W^{n} Y^{n} Z^{n}}(\mathbf v|\widehat{\mathbf w}, \mathbf y ,\mathbf z)$, where $\widehat{\mathbf w}$ is the output of the Slepian-Wolf decoder).
This defines a joint distribution:
{\allowdisplaybreaks
\begin{align*}
& P_{U^{n}S^{n}Z^{n} W^{n} \widehat W^{n} X^{n} Y^{n} C F V^{n}} =Q_C Q_F P_{U^{n}S^{n}Z^{n}} \bar P_{ W^{n}|CFU^{n}} \bar P_{X^{n}|W^{n} U^{n}} \bar P_{Y^{n}|X^{n} S^{n}} P^{\text{SW}}_{\widehat W^{n}|CF Y^{n} Z^{n}} P_{V^{n}|\widehat W^{n} Y^{n} Z^{n}}.
\end{align*}
}
We will show that the distributions induced by the two schemes are close in total variational distance:
{\allowdisplaybreaks
\begin{align*}
& \lim_{n \to \infty} \tv (\bar P_{U^{n} S^{n}Z^{n} X^{n} W^{n} \widehat W^{n} Y^{n} V^{n}}, P_{U^{n}S^{n}Z^{n} X^{n} W^{n} \widehat W^{n} Y^{n} V^{n}} )=0. \stepcounter{equation}\tag{\theequation}\label{shat}
\end{align*}}We prove that the random coding scheme possesses all the properties of the initial source coding
scheme stated in Section \ref{rb gen}. Note that
{\allowdisplaybreaks
\begin{align*}
& \mathbb D (\bar P_{U^{n} S^{n}Z^{n} W^{n} \widehat W^{n} X^{n} Y^{n} CF} \Arrowvert P_{U^{n} S^{n}Z^{n} W^{n} \widehat W^{n} X^{n} Y^{n} C F}) \\
& = \mathbb D (\bar P_{U^{n}S^{n}Z^{n}} \bar P_{W^{n}|U^{n}} \bar P_{X^{n}|W^{n} U^{n}} \bar P_{C|W^{n}} \bar P_{F|W^{n}} \bar P_{Y^{n}|X^{n} S^{n}} P^{\text{SW}}_{\widehat W^{n}|CF Y^{n} Z^{n}} \\
& \phantom{= \mathbb D |} \Arrowvert Q_C Q_F P_{U^{n} S^{n}Z^{n}} \bar P_{ W^{n}|CFU^{n}} \bar P_{X^{n}|W^{n} U^{n}} \bar P_{Y^{n}|X^{n} S^{n}} P^{\text{SW}}_{\widehat W^{n}|CF Y^{n} Z^{n}}) \stepcounter{equation}\tag{\theequation}\label{chain1} \\
& {\overset{{(a)}}{=}} \mathbb D ( \bar P_{U^{n}S^{n}Z^{n}} \bar P_{W^{n}|U^{n}} \bar P_{C|W^{n}} \bar P_{F|W^{n}} \Arrowvert Q_C Q_F P_{U^{n}S^{n}Z^{n}} \bar P_{W^{n}|CFU^{n}} )\\
& {\overset{{(b)}}{=}} \mathbb D (\bar P_{U^{n} S^{n}Z^{n}CF} \Arrowvert P_{U^{n}S^{n}Z^{n}} Q_C Q_F )
\end{align*}
}where $(a)$ comes from Lemma \ref{lemkl}. Note that $(b)$ follows from Lemma \ref{lemkl} as well,
since $W^{n}$ is generated according to $\bar P_{W^{n}|CFU^{n}}$ and, because of the Markov chain $W-U-(Z,S)$, $W^{n}$ is conditionally independent of $(Z^{n},S^{n})$ given $U^{n}$.
Then if $R_0 + \tilde R < H(W|USZ)=H(W|U)$, we apply Lemma \ref{1.4.2} with $A^{n}= (U^{n}, S^{n}, Z^{n})$, $B^{n}=W^{n}$ and $K=(C,F)$,
and claim that there exists a fixed binning $\varphi':=(\varphi'_1, \varphi'_2)$ such that, if we denote with $\bar P^{\varphi'}$ and $P^{\varphi'}$
the distributions $\bar P$ and $P$ with respect to the choice of a binning $\varphi'$, we have
\begin{align*}
\mathbb D \left(\bar P_{U^{n}S^{n}Z^{n}CF}^{\varphi'} \Arrowvert P_{U^{n}S^{n}Z^{n}}Q_C Q_F \right)=\delta(n),
\end{align*}
which by \eqref{chain1} implies
{\allowdisplaybreaks
\begin{align*}
& \mathbb D \left(\bar P^{\varphi'}_{U^{n} S^{n}Z^{n}W^{n} \widehat W^{n} X^{n} Y^{n} CF} \Arrowvert P^{\varphi'}_{U^{n} S^{n}Z^{n} W^{n} \widehat W^{n} X^{n} Y^{n} C F}\right) =\delta(n).
\end{align*}
}
Then, by Lemma \ref{lem1csi} we have
{\allowdisplaybreaks
\begin{align*}
& \mathbb V \left(\bar P^{\varphi'}_{U^{n} S^{n}Z^{n}W^{n} \widehat W^{n} X^{n} Y^{n} CF}, P^{\varphi'}_{U^{n} S^{n}Z^{n} W^{n} \widehat W^{n} X^{n} Y^{n} C F}\right) =\delta(n). \stepcounter{equation}\tag{\theequation}\label{suxy}
\end{align*}
}From now on, we will omit ${\varphi'}$ to simplify the notation.
Now we would like to show that we have strong coordination for $V^{n}$ as well, but
in the second coding scheme $V^{n}$ is generated
using the output of the Slepian-Wolf decoder
$\widehat W^{n}$ and not $W^{n}$ as in the first scheme.
Because of Lemma \ref{lem1}, the inequality $\widetilde{R}+R_0>H(W|YZ)$ implies that $\widehat W^{n}$ is equal to $W^{n}$ with high probability and we will use this fact to show that
the distributions are close in total variational distance.
First, we recall the definition of coupling and the basic coupling inequality for two random variables \cite{Lindvall1992coupling}.
\begin{defi}\label{defcoup}
A coupling of two probability distributions $P_A$ and $P_{A'}$ on the same measurable space $\mathcal A$ is any probability distribution
$\widehat P_{AA'}$ on the product measurable space $\mathcal{A} \times \mathcal{A}$ whose marginals are $P_A$ and $P_{A'}$.
\end{defi}
\begin{prop}[$\mbox{\cite[I.2.6]{Lindvall1992coupling}}$]\label{theocoup}
Given two random variables $A$, $A'$ with probability distributions $P_{A}$, $P_{A'}$, any coupling
$\widehat P_{AA'}$ of $P_{A}$, $P_{A'}$ satisfies
\begin{equation*}
\tv(P_A, P_{A'})\leq 2 \mathbb P_{\widehat P_{AA'}}\{A \neq A'\}.
\end{equation*}
\end{prop}
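The proposition can be checked by explicitly constructing the maximal coupling, which puts as much mass as possible on the diagonal and achieves $\mathbb P\{A \neq A'\} = \mathbb V(P_A, P_{A'})$ (so the stated bound holds with room to spare; the distributions below are our own toy choices):

```python
def maximal_coupling(P, Q):
    """Coupling of P and Q with P(A != A') equal to V(P, Q)."""
    support = sorted(set(P) | set(Q))
    overlap = {a: min(P.get(a, 0.0), Q.get(a, 0.0)) for a in support}
    slack = 1.0 - sum(overlap.values())
    coupling = {(a, a): m for a, m in overlap.items() if m > 0}
    if slack > 0:
        # Distribute the residual mass off the diagonal, proportionally.
        exP = {a: P.get(a, 0.0) - overlap[a] for a in support}
        exQ = {a: Q.get(a, 0.0) - overlap[a] for a in support}
        for a, pa in exP.items():
            for b, qb in exQ.items():
                if pa > 0 and qb > 0:
                    coupling[(a, b)] = coupling.get((a, b), 0.0) + pa * qb / slack
    return coupling

P = {0: 0.5, 1: 0.3, 2: 0.2}
Q = {0: 0.2, 1: 0.3, 2: 0.5}
tv = 0.5 * sum(abs(P.get(a, 0.0) - Q.get(a, 0.0)) for a in set(P) | set(Q))
cpl = maximal_coupling(P, Q)
mismatch = sum(m for (a, b), m in cpl.items() if a != b)
assert abs(sum(cpl.values()) - 1.0) < 1e-12   # it is a probability distribution
assert abs(mismatch - tv) < 1e-12             # maximal coupling: P(A != A') = V(P, Q)
assert tv <= 2 * mismatch                     # the (loose) bound of the proposition
```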
Then, we apply Proposition \ref{theocoup} to
{\allowdisplaybreaks
\begin{align*}
&\begin{matrix}[ll]
A = U^{n} S^{n}Z^{n}W^{n} X^{n} Y^{n} CF & A'= U^{n} S^{n}Z^{n} \widehat W^{n} X^{n} Y^{n} CF\\
P_{A} =\bar P_{U^{n} S^{n}Z^{n}W^{n} X^{n} Y^{n} CF} & P_{A'}=\bar P_{U^{n} S^{n}Z^{n} \widehat W^{n} X^{n} Y^{n} CF} \\
\end{matrix}\\
& \mathcal A= \mathcal U^n \times \mathcal S^n \times \mathcal Z^n \times \mathcal W^n \times \mathcal X^n \times \mathcal Y^n \times \llbracket 1,2^{nR_0} \rrbracket \times \llbracket 1,2^{n \tilde R} \rrbracket.
\end{align*}
}Since $\widehat W^{n}$ is equal to $W^{n}$ with high probability by Lemma \ref{lem1},
and since the probability of error of the Slepian-Wolf decoder vanishes exponentially \cite[Theorem 10.1]{elgamal2011nit},
we find that for the random binning scheme
{\allowdisplaybreaks
\begin{align*}
\tv(\bar P_{U^{n} S^{n}Z^{n}W^{n} X^{n} Y^{n} CF}, \bar P_{U^{n} S^{n}Z^{n} \widehat W^{n} X^{n} Y^{n} CF} )= \delta(n).
\end{align*}
}This implies that:
{\allowdisplaybreaks
\begin{align*}
\tv(\bar P_{U^{n} S^{n}Z^{n}W^{n} \widehat W^{n} X^{n} Y^{n} CF}, \bar P_{U^{n} S^{n}Z^{n}W^{n} X^{n} Y^{n} CF} \mathds 1_{ \widehat W^{n}| W^{n}})=\delta(n). \stepcounter{equation}\tag{\theequation}\label{34yas'}
\end{align*}
}
Similarly, we apply Proposition \ref{theocoup} again to the random coding scheme and we have
{\allowdisplaybreaks
\begin{align*}
&\tv(P_{U^{n} S^{n}Z^{n}W^{n} \widehat W^{n} X^{n} Y^{n} CF}, P_{U^{n} S^{n}Z^{n}W^{n} X^{n} Y^{n} CF} \mathds 1_{ \widehat W^{n}| W^{n}})=\delta(n).\stepcounter{equation}\tag{\theequation}\label{35yas'}
\end{align*}
}
Then using the triangle inequality, we find that
{\allowdisplaybreaks
\begin{align*}
& \tv (\bar P_{U^{n} S^{n}Z^{n}W^{n} \widehat W^{n} X^{n} Y^{n} CF V^{n}},P_{U^{n} S^{n}Z^{n} W^{n} \widehat W^{n} X^{n} Y^{n} C F V^{n}}) \\
& = \tv ( \bar P_{U^{n} S^{n}Z^{n}W^{n} \widehat W^{n} X^{n} Y^{n} CF} \bar P_{V^{n} | W^{n} Y^{n} Z^{n}}, P_{U^{n} S^{n}Z^{n} W^{n} \widehat W^{n} X^{n} Y^{n} C F} P_{V^{n} | \widehat W^{n} Y^{n} Z^{n}} ) \stepcounter{equation}\tag{\theequation}\label{triu} \\
&\leq \tv (\bar P_{U^{n} S^{n}Z^{n}W^{n} \widehat W^{n} X^{n} Y^{n} CF} \bar P_{V^{n} | W^{n} Y^{n} Z^{n}} , \bar P_{U^{n} S^{n}Z^{n}W^{n} X^{n} Y^{n} CF} \mathds 1_{\widehat W^{n}| W^{n}} \bar P_{V^{n} | W^{n} Y^{n} Z^{n}} ) \\
&\quad + \tv (\bar P_{U^{n} S^{n}Z^{n}W^{n} X^{n} Y^{n} CF} \mathds 1_{\widehat W^{n}| W^{n}} \bar P_{V^{n} | W^{n} Y^{n} Z^{n}}, P_{U^{n} S^{n}Z^{n}W^{n} X^{n} Y^{n} CF} \mathds 1_{\widehat W^{n}| W^{n}} P_{V^{n} |\widehat W^{n} Y^{n} Z^{n}})\\
&\quad + \tv (P_{U^{n} S^{n}Z^{n}W^{n} X^{n} Y^{n} CF} \mathds 1_{\widehat W^{n}| W^{n}} P_{V^{n} | W^{n} Y^{n} Z^{n}}, P_{U^{n} S^{n}Z^{n}W^{n} \widehat W^{n} X^{n} Y^{n} CF} P_{V^{n} | \widehat W^{n} Y^{n} Z^{n}}) .
\end{align*}
}The first and third terms go to zero exponentially by applying Lemma \ref{cuff17} to \eqref{34yas'} and \eqref{35yas'}, respectively.
Now observe that
\begin{equation*}
\mathds 1_{\widehat W^{n}| W^{n}} \bar P_{V^{n} | W^{n} Y^{n} Z^{n}}\!\!= \mathds 1_{\widehat W^{n}| W^{n}} P_{V^{n} | \widehat W^{n} Y^{n} Z^{n}}
\end{equation*}by definition of $P_{V^{n} | \widehat W^{n} Y^{n} Z^{n}}$.
Then by using Lemma \ref{cuff17} again the second term is equal to
\begin{equation*}
\tv \left(\bar P_{U^{n} S^{n}Z^{n}W^{n} X^{n} Y^{n} CF}, P_{U^{n} S^{n}Z^{n}W^{n} X^{n} Y^{n} CF}\right)
\end{equation*}and goes to zero by \eqref{suxy} and Lemma \ref{cuff16}.
Hence, we have
{\allowdisplaybreaks
\begin{align*}
\tv (\bar P_{U^{n} S^{n}Z^{n}W^{n} \widehat W^{n} X^{n} Y^{n} CF V^{n}}, P_{U^{n} S^{n}Z^{n} W^{n} \widehat W^{n} X^{n} Y^{n} C F V^{n}}) =\delta(n). \stepcounter{equation}\tag{\theequation}\label{suuxycf}
\end{align*}
}Using Lemma \ref{cuff16}, we conclude that
$$ \tv (\bar P_{U^{n} S^{n}Z^{n} X^{n} W^{n} \widehat W^{n} Y^{n} V^{n}}, P_{U^{n}S^{n}Z^{n} X^{n} W^{n} \widehat W^{n} Y^{n} V^{n}} )=\delta(n).$$
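As an illustrative aside (not part of the formal argument), the marginalization step via Lemma \ref{cuff16} rests on the standard fact that the total variation distance between two joint distributions upper-bounds the distance between their marginals. A minimal numerical sketch, with toy binary distributions chosen purely for illustration:

```python
def tv(p, q):
    # total variation distance: half the L1 distance between two pmfs
    return 0.5 * sum(abs(p.get(a, 0.0) - q.get(a, 0.0)) for a in set(p) | set(q))

def marginal(p, idx):
    # marginal pmf over the coordinates listed in idx
    m = {}
    for k, v in p.items():
        key = tuple(k[i] for i in idx)
        m[key] = m.get(key, 0.0) + v
    return m

# two toy joint pmfs on {0,1}^2
p_xy = {(0, 0): 0.50, (0, 1): 0.10, (1, 0): 0.10, (1, 1): 0.30}
q_xy = {(0, 0): 0.25, (0, 1): 0.25, (1, 0): 0.25, (1, 1): 0.25}

# dropping a coordinate can only decrease the total variation distance
assert tv(marginal(p_xy, (0,)), marginal(q_xy, (0,))) <= tv(p_xy, q_xy) + 1e-12
```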
\subsubsection{Remove the extra randomness F}\label{rf gen}
Even though the extra common randomness $F$ is required to coordinate $\left(U^{n}\right.$, $S^{n}$, $Z^{n}$, ${X}^{n}$, $Y^{n}$, $V^{n}$, $\left.W^{n}\right)$,
we will show that it is not needed in order to coordinate only $(U^{n}, S^{n},Z^{n},{X}^{n},Y^{n},V^{n})$.
Observe that by Lemma \ref{cuff16}, equation \eqref{suuxycf} implies that
{\allowdisplaybreaks
\begin{align*}
\tv (\bar P_{U^{n} S^{n}Z^{n} X^{n} Y^{n} V^{n} F}, P_{U^{n} S^{n}Z^{n} X^{n} Y^{n} V^{n} F}) =\delta(n). \stepcounter{equation}\tag{\theequation}\label{convf2}
\end{align*}}
As in \cite{yassaee2014achievability}, we would like to reduce the amount of common randomness by having the two nodes agree on an instance $F=f$.
To do so, we apply Lemma \ref{1.4.2} again where $B^{n}=W^{n}$,
$K=F$, $\varphi= \varphi''_2$ and $A^{n}= U^{n} S^{n} Z^{n} X^{n} Y^{n} V^{n}$.
If $\tilde R < H(W| SUZXY V)$, there exists a fixed binning such that
{\allowdisplaybreaks
\begin{align*}
\tv (\bar P_{U^{n} S^{n}Z^{n} X^{n} Y^{n} V^{n} F}, Q_F \bar P_{U^{n} S^{n}Z^{n} X^{n} Y^{n} V^{n}})=\delta(n). \stepcounter{equation}\tag{\theequation}\label{bin3}
\end{align*}}
\vspace{-0.3cm}
\begin{oss}\label{binnings2}
Note that in Section \ref{rc gen} we had already chosen a specific binning $\varphi'_2$. In Appendix \ref{appendix bin} we prove that there exists a binning which works for both conditions.
\end{oss}
Because of \eqref{convf2}, \eqref{bin3} implies
{\allowdisplaybreaks
\begin{align*}
& \tv (P_{U^{n} S^{n}Z^{n}X^{n} Y^{n} V^{n} F}, Q_F \bar P_{U^{n} S^{n}Z^{n}X^{n} Y^{n} V^{n}})=\delta(n). \stepcounter{equation}\tag{\theequation}\label{bin4}
\end{align*}}Hence, we can fix $f \in F$ such that $(U^{n},S^{n},Z^{n},X^{n},Y^{n}, V^{n})$ is almost independent of $F$ according to $P$.
To conclude, if $f \in F$ is fixed, the distribution $P_{ U^{n} S^{n}Z^{n} X^{n} Y^{n} V^{n}}$ changes to $P_{U^{n} S^{n}Z^{n} X^{n} Y^{n} V^{n}|F=f}$
and by Lemma \ref{lem4} we have
{\allowdisplaybreaks
\begin{align*}
& \tv (\bar P_{U^{n} S^{n}Z^{n}X^{n} Y^{n} V^{n}|F=f},P_{U^{n} S^{n}Z^{n}X^{n} Y^{n} V^{n}|F=f}) =\delta(n).
\end{align*}}Since $\bar P_{U^{n} S^{n}Z^{n}X^{n} Y^{n} V^{n}|F=f}$ is close to $\bar P_{U^{n}S^{n}Z^{n} X^{n}Y^{n}V^{n}}$ because of \eqref{bin3}, we have
{\allowdisplaybreaks
\begin{equation}\label{finaleq}
\tv (\bar P_{U^{n} S^{n} Z^{n} X^{n} Y^{n} V^{n}}, P_{U^{n} S^{n} Z^{n} X^{n} Y^{n} V^{n}}) =\delta(n).
\end{equation}}
\subsubsection{Rate constraints}
We have imposed the following rate constraints:
{\allowdisplaybreaks
\begin{align*}
H(W|YZ) &< \widetilde R+R_0 < H(W|U),\\
\widetilde R & < H(W|USZXYV).
\end{align*}
}
Therefore we obtain:
{\allowdisplaybreaks
\begin{align*}
& R_0 > H(W|YZ)-H(W|USZXYV) = I(W;USXV|YZ),\\
& I(W;U) < I(W;YZ).
\end{align*}
}
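The first rate constraint rests on the identity $H(W|YZ)-H(W|USZXYV)=I(W;USXV|YZ)$. As an illustrative sketch (with $A$ standing for the collapsed tuple $(U,S,X,V)$, $B$ for $(Y,Z)$, and toy probability values of our own choosing), the underlying identity $H(W|B)-H(W|A,B)=I(W;A|B)$ can be checked numerically:

```python
from itertools import product
from math import log

# toy joint pmf on (W, A, B), all binary; A collapses (U,S,X,V), B collapses (Y,Z)
probs = (0.10, 0.05, 0.15, 0.10, 0.05, 0.20, 0.05, 0.30)
p = dict(zip(product((0, 1), repeat=3), probs))

def marg(idx):
    # marginal pmf over the coordinates listed in idx
    m = {}
    for k, v in p.items():
        key = tuple(k[i] for i in idx)
        m[key] = m.get(key, 0.0) + v
    return m

def H(idx):
    # entropy in nats of the marginal over idx
    return -sum(v * log(v) for v in marg(idx).values() if v > 0)

def cmi():
    # I(W;A|B) computed directly from its definition
    pb, pwb, pab = marg((2,)), marg((0, 2)), marg((1, 2))
    return sum(v * log(v * pb[(b,)] / (pwb[(w, b)] * pab[(a, b)]))
               for (w, a, b), v in p.items() if v > 0)

# the identity used for the rate constraint: H(W|B) - H(W|A,B) = I(W;A|B)
lhs = (H((0, 2)) - H((2,))) - (H((0, 1, 2)) - H((1, 2)))
assert abs(lhs - cmi()) < 1e-12
```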
\vspace{-0.7cm}
\subsection{Proof of Theorem \ref{teouv}: outer bound}\label{outer}
Consider a code $(f_n,g_n)$ that induces a distribution $P_{U^{n} S^{n} Z^{n} X^{n} Y^{n} V^{n}}$ that is $\varepsilon$-close in total variational distance to the i.i.d. distribution $\bar P_{U S Z X Y V}^{\otimes n}$.
Let the random variable $T$ be uniformly distributed over the
set $\llbracket 1,n\rrbracket$ and independent of the sequence
$(U^{n}, S^{n}, Z^{n}, X^{n}, Y^{n}, V^{n}, C)$.
The variable $(U_T, S_T, {Z}_T)$ is independent of $T$ because $(U^{n},S^{n}, Z^{n})$ is an i.i.d. source sequence \cite{cuff2010}.
\subsubsection{Bound on $R_0$}\label{genconvpart1}
The proof is the same as in Section \ref{isitconvpart1}, using $A^{n}:= U^{n} S^{n} X^{n} V^{n}$ and $B^{n}:=Y^{n} Z^{n}$
in \eqref{boundr0}. Then, we obtain $R_0 \geq I(W;USXV|YZ)$.
\subsubsection{Information constraint}\label{genconvpart2}
As proved in Appendix \ref{appendix compare},
in the general case we are not able to compare $I(XUS;YZ)$ and $I(XS;Y)+I(U;Z)$. Then, we show separately that:
\begin{align}
& I(W;U) \leq I(XUS; YZ), \label{outer 1} \\
& I(W;U) \leq I(XS;Y)+I(U;Z).\label{outer 2}
\end{align}
\paragraph{Proof of \eqref{outer 1}}
We have
{\allowdisplaybreaks
\begin{align*}
&0 \overset{(a)}{\leq} I(X^{n} S^{n};Y^{n})-I(C U^{n}; Y^{n})
=H(Y^{n}|CU^{n})-H(Y^{n}|X^{n}S^{n})\\
&\overset{(b)}{=} H(Y^{n}|CU^{n})-H(Y^{n}|CU^{n}X^{n}S^{n})
= I(Y^{n};X^{n} S^{n}|C U^{n} ) \leq I(Y^{n} Z^{n};X^{n} S^{n}|C U^{n} ) \stepcounter{equation}\tag{\theequation}\label{convpart2}\\
&= I(Y^{n} Z^{n};X^{n} S^{n} U^{n}| C)-I(Y^{n} Z^{n};U^{n}| C)\overset{(c)}{\leq} n I(Y_T {Z}_T;X_T S_T U_T|T) -n I( U_T;Y_{\sim T}{Z}_{\sim T} C T )
\end{align*}}where $(a)$ and $(b)$ come from the Markov chain $Y^{n}-(X^{n}, S^{n})-(C, U^{n})$.
To prove $(c)$, we show separately that:
\begin{itemize}
\item[(i)] $I(Y^{n} Z^{n};U^{n}| C) \geq n I( U_T;Y_{\sim T}{Z}_{\sim T} C T )$,
\item[(ii)] $I(Y^{n} Z^{n};X^{n} S^{n} U^{n}| C) \leq n I(Y_T {Z}_T;X_T S_T U_T |T)$.
\end{itemize}
\paragraph*{Proof of (i)}
Observe that
{\allowdisplaybreaks
\begin{align*}
I(Y^{n} Z^{n};U^{n}| C)&=H(U^{n}|C)- H(U^{n}|Y^{n} Z^{n} C) \overset{(d)}{=} H(U^{n})- H(U^{n}|Y^{n} Z^{n} C)\\
& \overset{(e)}{=} \sum_{t=1}^{n} \left( H(U_t)- H(U_t|U^{t-1} Y_t {Z}_t Y_{\sim t}{Z}_{\sim t} C)\right) \geq \sum_{t=1}^{n} \left(H(U_t)- H(U_t|Y_{\sim t}{Z}_{\sim t} C)\right) \\
&= n H(U_T|T)- n H(U_T|Y_{\sim T}{Z}_{\sim T} C T) \overset{(f)}{=} n H(U_T) - n H(U_T|Y_{\sim T}{Z}_{\sim T} C T) \\
&= n I( U_T;Y_{\sim T}{Z}_{\sim T} C T )
\end{align*}}where $(d)$ comes from the independence between $U^{n}$ and $C$ and $(e)$ and $(f)$ follow from the i.i.d. nature of $U^{n}$.
\paragraph*{Proof of (ii)}
First, we need the following result (proved in Appendix \ref{appendix ossmc1}).
\begin{lem}\label{ossmc1}
For every $t\in \llbracket 1,n\rrbracket$ the following Markov chain holds:
\begin{equation}\label{markovc1}
(Y_t,{Z}_t) - (X_t,U_t, S_t) - (C, X_{\sim t},U_{\sim t}, S_{\sim t}, Y_{\sim t},{Z}_{\sim t}).
\end{equation}
\end{lem}
Then, observe that
{\allowdisplaybreaks
\begin{align*}
I(Y^{n} Z^{n};X^{n} S^{n} U^{n}| C)&\leq I(Y^{n} Z^{n};X^{n} S^{n} U^{n} C)\\
&=\sum_{t=1}^{n} I(Y_t {Z}_t ;X^{n} S^{n} U^{n} C|Y^{t-1} Z^{t-1}) \leq \sum_{t=1}^{n} I(Y_t {Z}_t; X^{n} S^{n} U^{n} C Y^{t-1} Z^{t-1})\\
&= \sum_{t=1}^{n} I(Y_t {Z}_t;X_t S_t U_t ) +\sum_{t=1}^{n} I(Y_t {Z}_t ;X_{\sim t} S_{\sim t} U_{\sim t} C Y^{t-1} Z^{t-1}|X_t S_t U_t)\\
& \overset{(g)}{=} \sum_{t=1}^{n} I(Y_t {Z}_t;X_t S_t U_t)=n I(Y_T {Z}_T;X_T S_T U_T |T)
\end{align*}}where $(g)$ follows from Lemma \ref{ossmc1}.
Moreover, since the distributions are $\varepsilon$-close to i.i.d. by hypothesis, the last term is close to $n I(Y {Z};X S U)$.
In fact, we have
{\allowdisplaybreaks
\begin{align*}
I(Y_T {Z}_T;X_T S_T U_T |T)& =H(Y_T {Z}_T|T)+H(X_T S_T U_T |T)-H(Y_T {Z}_T X_T S_T U_T |T)\\
&=\sum_{t=1}^{n} \frac{1}{n} H(Y_t {Z}_t)+ \sum_{t=1}^{n} \frac{1}{n} H(X_t S_t U_t ) -\sum_{t=1}^{n} \frac{1}{n} H(Y_t {Z}_t X_t S_t U_t).
\end{align*}
}Then, as in the proof of \cite[Lemma VI.3]{cuff2013distributed},
{\allowdisplaybreaks
\begin{align*}
&\lvert H(Y_t {Z}_t)-H(Y {Z}) \rvert \leq 2\varepsilon \log{\left(\frac{\lvert \mathcal Y \times \mathcal Z \rvert}{\varepsilon}\right)}:= \varepsilon_1,\\
&\lvert H(X_t S_t U_t)-H(X S U) \rvert \leq 2\varepsilon \log{\left(\frac{\lvert \mathcal X \times \mathcal S \times \mathcal U \rvert}{\varepsilon}\right)}:= \varepsilon_2,\\
&\lvert H(Y_t {Z}_t X_t S_t U_t)-H(Y {Z}X S U) \rvert \leq 2\varepsilon \log{\left(\frac{\lvert \mathcal Y \times \mathcal Z \times \mathcal X \times \mathcal S \times \mathcal U \rvert}{\varepsilon}\right)}:= \varepsilon_3.
\end{align*}
}This implies that
\begin{equation}\label{g epsilon}
\lvert I(Y_T {Z}_T;X_T S_T U_T |T)- I(Y {Z};X S U)\rvert \leq g(\varepsilon),
\end{equation}where $g(\varepsilon):= (\varepsilon_1+ \varepsilon_2+ \varepsilon_3)$.
Then, \eqref{convpart2} becomes $0 \leq n I(Y {Z};X S U) -n I( U_T;Y_{\sim T}{Z}_{\sim T} C T )+g(\varepsilon)$.
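As a numerical illustration of these continuity bounds (illustrative only, not part of the proof), take the binary case with $P=(1/2,1/2)$ and $Q=(1/2+\varepsilon, 1/2-\varepsilon)$, which are at total variation distance $\varepsilon$; the constant $2\varepsilon\log(\lvert \mathcal X \rvert/\varepsilon)$ is the one used above:

```python
from math import log

def entropy(p):
    # Shannon entropy in nats of a pmf given as a list
    return -sum(x * log(x) for x in p if x > 0)

# P = (1/2, 1/2) and Q = (1/2 + eps, 1/2 - eps) are at total variation distance eps
for k in range(1, 25):
    eps = 0.01 * k
    gap = abs(entropy([0.5, 0.5]) - entropy([0.5 + eps, 0.5 - eps]))
    bound = 2 * eps * log(2 / eps)  # 2 * eps * log(|X| / eps) with |X| = 2
    assert gap <= bound
```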
\vspace{0.2cm}
\paragraph{Proof of \eqref{outer 2}}
In this case, for the second part of the converse, we have
{\allowdisplaybreaks
\begin{align*}
0 & \overset{(a)}{\leq} I(X^{n} S^{n};Y^{n})-I(CZ^{n} U^{n};Y^{n}) \overset{(b)}{\leq} I(X^{n} S^{n};Y^{n})-I( U^{n};Y^{n} C|Z^{n}) \\
& = H(Y^{n})-H(Y^{n}|X^{n} S^{n})-H(U^{n}) + I(U^{n};Z^{n} ) + H(U^{n}|Y^{n} Z^{n} C)\\
& \overset{(c)}{\leq} \sum_{t=1}^{n} H(Y_t) - \sum_{t=1}^{n} H(Y_t|X_t S_t) - \sum_{t=1}^{n} H(U_t) + \sum_{t=1}^{n} I(U_t;Z_t)+ \sum_{t=1}^{n} H(U_t|U^{t-1} Y_t {Z}_t Y_{\sim t}{Z}_{\sim t} C)\\
&\overset{(d)}{\leq} n H(Y_T) - n H(Y_T|X_T S_T T)- n H(U_T|T) + n I(U_T;Z_T|T) + n H(U_T|Y_{\sim T}{Z}_{\sim T} C T) \\
& \overset{(e)}{=} n H(Y_T) -n H(Y_T|X_T S_T) - n H(U_T) + n I(U_T;Z_T) + nH(U_T|Y_{\sim T} {Z}_{\sim T} C T)\\
&= n I(X_T, S_T; Y_T)-n I( U_T;Y_{\sim T} {Z}_{\sim T} C T) + n I(U_T;Z_T)
\end{align*}}where $(a)$ comes from the Markov chain $Y^{n}-(X^{n}, S^{n})-(C,Z^{n}, U^{n})$, $(b)$
from the fact that
\begin{align*}
I(CZ^{n} U^{n};Y^{n}) \geq I(Z^{n} U^{n};Y^{n}|C)=I(Z^{n} U^{n};Y^{n} C) \geq I( U^{n};Y^{n} C|Z^{n})
\end{align*}
by the chain rule and the fact that $U^{n}$ and $ Z^{n} $ are independent of $C$.
Then $(c)$ comes from the chain rule for the conditional entropy.
Inequality $(d)$ comes from the fact that conditioning does not increase entropy (in particular $H(Y_T|T) \leq H(Y_T)$) and
$(e)$ from the memoryless channel $\bar P_{Y|XS}$ and the i.i.d. source $\bar P_{UZ}$.
Finally, since the source is i.i.d. the last term is $n I(U;Z)$.
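Step $(b)$ above uses that $I(Z^{n} U^{n};Y^{n}|C)=I(Z^{n} U^{n};Y^{n} C)$ whenever $(Z^{n},U^{n})$ is independent of $C$. A single-letter numerical sketch of this equality, with toy distributions chosen purely for illustration:

```python
from itertools import product
from math import log

# build a joint pmf on (C, Z, U, Y) with (Z,U) independent of C:
# p(c, z, u, y) = p(c) * p(z, u) * p(y | c, z, u)
p_c = {0: 0.4, 1: 0.6}
p_zu = {(0, 0): 0.3, (0, 1): 0.2, (1, 0): 0.1, (1, 1): 0.4}
p_y1 = {(c, z, u): 0.2 + 0.1 * c + 0.3 * z + 0.25 * u
        for c, z, u in product((0, 1), repeat=3)}  # P(Y=1 | c,z,u), arbitrary

p = {}
for c, z, u, y in product((0, 1), repeat=4):
    py = p_y1[(c, z, u)] if y == 1 else 1 - p_y1[(c, z, u)]
    p[(c, z, u, y)] = p_c[c] * p_zu[(z, u)] * py

def marg(idx):
    m = {}
    for k, v in p.items():
        key = tuple(k[i] for i in idx)
        m[key] = m.get(key, 0.0) + v
    return m

def H(idx):
    # entropy in nats of the marginal over idx
    return -sum(v * log(v) for v in marg(idx).values() if v > 0)

# coordinates: 0=C, 1=Z, 2=U, 3=Y
i_zu_y_given_c = H((0, 1, 2)) + H((0, 3)) - H((0,)) - H((0, 1, 2, 3))
i_zu_yc = H((1, 2)) + H((0, 3)) - H((0, 1, 2, 3))
assert abs(i_zu_y_given_c - i_zu_yc) < 1e-12
```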
\begin{oss}
Note that if $U$ is independent of $Z$ the upper bound for $I(U;W)$ is $I(XS;Y)$.
\end{oss}
\vspace*{0.2cm}
\subsubsection{Identification of the auxiliary random variable}\label{identification2}
For each $t \in \llbracket 1,n\rrbracket$ we identify the auxiliary random variables $W_t$ with $(C,Y_{\sim t}, {Z}_{\sim t})$ and $W$ with
$(W_T, T)=(C,Y_{\sim T},{Z}_{\sim T}, T)$.
The following Markov chains hold for each $t \in \llbracket 1,n\rrbracket$:
{\allowdisplaybreaks
\begin{align}
{Z}_t -(U_t, S_t)-(C, X_t, Y_t, Y_{\sim t}, Z_{\sim t}) \quad &\Longleftrightarrow \quad {Z}_t -(U_t, S_t)-(X_t, Y_t, W_t), \label{mc11t}\\
Y_t-(X_t, S_t) -(C, Y_{\sim t},{Z}_{\sim t}, U_t) \quad &\Longleftrightarrow \quad Y_t-(X_t, S_t) -(W_t, U_t), \label{mc21t}\\
V_t-(C,Y_{\sim t},{Z}_{\sim t},Y_t, {Z}_t)-(U_t, {S}_t,X_t) \quad &\Longleftrightarrow \quad V_t-(W_t,Y_t, {Z}_t)-(U_t, {S}_t,X_t)\label{mc31t}.
\end{align}
}
Then we have
{\allowdisplaybreaks
\begin{align}
{Z}_T -(U_T, S_T)-(C, X_T, Y_T, Y_{\sim T}, Z_{\sim T}, T) \quad &\Longleftrightarrow \quad {Z}_T -(U_T, S_T)-(X_T, Y_T, W_T, T), \label{mc11T}\\
Y_T-(X_T, S_T) -(C, Y_{\sim T},{Z}_{\sim T}, U_T, T) \quad &\Longleftrightarrow \quad Y_T-(X_T, S_T) -(W_T, U_T, T), \label{mc21T} \\
V_T-(C,Y_{\sim T},{Z}_{\sim T},Y_T, {Z}_T, T)-(U_T, {S}_T,X_T) \quad &\Longleftrightarrow \quad V_T-(W_T,Y_T, {Z}_T, T)-(U_T, {S}_T,X_T).\label{mc31T}
\end{align}
}
Here, \eqref{mc11T} and \eqref{mc21T} come from the fact that
{\allowdisplaybreaks
\begin{align*}
\mathbb P \{Z_T=z| S_T=s, U_T=u, X_T=x, Y_T=y, Y_{\sim T}=\tilde {\mathbf y} , Z_{\sim T}=\tilde {\mathbf z}, T=t, C\}&= \mathbb P \{Z_T=z| S_T=s, U_T=u\},\\
\mathbb P \{Y_T=y| X_T=x, S_T=s, Y_{\sim T}=\tilde {\mathbf y} ,Z_{\sim T}=\tilde {\mathbf z}, U_T=u, T=t, C\}&= \mathbb P \{Y_T=y| X_T=x, S_T=s\}
\end{align*}}
since the source is i.i.d. and the channel is memoryless.
Then, given \eqref{mc31t}, the chain \eqref{mc31T} holds because
{\allowdisplaybreaks
\begin{align*}
I(V_T;U_T {S}_T X_T| C Y^{n} Z^{n} T)= \sum_{t=1}^n \frac{1}{n} I(V_t;U_t S_t X_t| C Y^{n} Z^{n} T=t)=0.
\end{align*}
}
Since $W=W_t$ when $T=t$, we also have $Z-(U,S)-(X,Y,W)$, $(U,S,X)-(W,Y, Z)-V$ and $Y-(X,S)-(W,U)$.
The cardinality bound is proved in $\mbox{Appendix \ref{appendix bounds}}$.
\section{Strong coordination region for special cases}\label{sec: special cases}
Although the inner and outer bounds in Theorem \ref{teouv} do not match in general, we characterize the strong coordination region exactly in three special cases:
perfect channel, lossless decoding and separation between the channel and the source.
The empirical coordination region for these three settings was derived in \cite{treust2015empirical}.
In this section we recover the same information constraints as in \cite{treust2015empirical},
but we show that for strong coordination a positive rate of common randomness is also necessary.
This reinforces the conjecture, stated in \cite{cuff2010}, that with enough common randomness
the strong coordination capacity region is the same as the empirical coordination capacity region for any specific network setting.
\subsection{Perfect channel}\label{perfect channel}
\begin{center}
\begin{figure}[ht]
\centering
\includegraphics[scale=0.21]{./pcschemez.pdf}
\caption{Coordination of signals and actions for a two-node network with a perfect channel.}
\label{fig: pc}
\end{figure}
\end{center}
\vspace{-0.7cm}
Suppose we have a perfect channel as in Figure \ref{fig: pc}. In this case $X^n=Y^n$ and the variable $Z^n$ plays the role of side information at the decoder.
We characterize the strong coordination region $\mathcal R_{\text{PC}}$.
\begin{teo} \label{teopc}
In the setting of Theorem \ref{teouv}, suppose that $\bar P_{Y|XS}( \mathbf{y}|\mathbf{x},\mathbf{s})={\mathds 1}_{X=Y} \{\mathbf{x}= \mathbf{y} \}$.
Then the strong coordination region is
\begin{equation}\label{eq: regionpc}
\mathcal R_{\text{PC}} := \begin{Bmatrix}[c|l]
& \bar P_{UZXV}= \bar P_{UZ} \bar P_{X|U} \bar P_{V|UXZ} \\
& \exists \mbox{ } W \mbox{ taking values in $\mathcal W$}\\
(\bar P_{UZXV}, R_0) & \bar P_{UZWXV}= \bar P_{UZ} \bar P_{W|U} \bar P_{X|UW} \bar P_{V|WXZ}\\
& I(WX;U) \leq H(X)+I(W;Z|X)\\
& R_0 \geq I(W;UV|XZ)\\
& \lvert \mathcal W \rvert \leq \lvert \mathcal U \times \mathcal Z \times \mathcal X \times {\mathcal V} \rvert+4 \\
\end{Bmatrix}
\end{equation}
\end{teo}
\begin{oss}
Observe that the decomposition of the joint distributions $\bar P_{UZXV}$ and $\bar P_{UZWXV}$
is equivalently characterized in terms of Markov chains:
{\allowdisplaybreaks
\begin{equation}\label{markov chain pc}
Z-U-X,
\quad \quad
\begin{cases}
Z-U-(X,W)\\
V-(X,Z,W)-U
\end{cases}.
\end{equation}
}
\end{oss}
\subsubsection{Achievability}
We show that $\mathcal R_{\text{PC}}$ is contained in the region $\mathcal R_{\text{in}}$ defined in \eqref{eq: region inn} and thus it is achievable.
We denote by $\mathcal R_{\text{in}}(W)$ the subset of $\mathcal R_{\text{in}}$ for a fixed $W \in \mathcal W$ that satisfies:
{\allowdisplaybreaks
\begin{align*}
&\bar P_{USZWXYV}=\bar P_{USZ} \bar P_{W|U} \bar P_{X|UW} \bar P_{Y|XS} \bar P_{V|WYZ}\\
&I(W;U) \leq I(W;YZ) \stepcounter{equation}\tag{\theequation}\label{constrgen} \\
& R_0 \geq I(W;USXV|YZ)
\end{align*}
}Then the set $\mathcal R_{\text{in}}$ is the union over all the possible choices for $W$ that satisfy \eqref{constrgen}. Similarly, $\mathcal R_{\text{PC}}$ is the union of all $\mathcal R_{\text{PC}}(W)$ with $W$ that satisfies
{\allowdisplaybreaks
\begin{align*}
&\bar P_{UZWXV}= \bar P_{UZ} \bar P_{W|U} \bar P_{X|UW} \bar P_{V|WXZ}\\
& I(W,X;U) \leq H(X)+I(W;Z|X) \stepcounter{equation}\tag{\theequation}\label{constrpc} \\
& R_0 \geq I(W;UV|XZ)
\end{align*}
}
Let $(\bar P_{UZXV}, R_0) \in \mathcal R_{\text{PC}}(W)$ for some $W \in \mathcal W$.
Then $W$ verifies the Markov chains
$Z-U-(X,W)$ and $V-(W,X,Z)-U$ and the
information constraints for $ \mathcal R_{\text{PC}}$.
Note that $(\bar P_{UZXV}, R_0) \in \mathcal R_{\text{in}}(W')$, where $W'=(W,X)$.
The Markov chains are still valid and the information constraints in \eqref{constrpc} imply
the information constraints for $\mathcal R_{\text{in}}(W')$ since:
{\allowdisplaybreaks
\begin{align*}
I(W';U)&=I(W,X;U) \leq H(X)+I(W;Z|X)\\
&=I(W,X;X)+I(W,X;Z|X) = I(W,X;XZ) = I(W';XZ), \stepcounter{equation}\tag{\theequation}\label{mael2015pc} \\
R_0& \geq I(W;UV|XZ)=I(WX;UV|XZ).
\end{align*}
}
Then $(\bar P_{UZXV}, R_0) \in \mathcal R_{\text{in}}(W')$ and if we consider the union over all suitable $W$,
we have $$\bigcup_{W} \mathcal R_{\text{PC}}(W) \subseteq \bigcup_{(W,X)} \mathcal R_{\text{in}}(W,X) \subseteq \bigcup_{W} \mathcal R_{\text{in}}(W).$$
Finally, $ \mathcal R_{\text{PC}} \subseteq \mathcal R_{\text{in}}$.
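The chain of equalities in \eqref{mael2015pc} reduces to the chain-rule identity $H(X)+I(W;Z|X)=I(W,X;XZ)$, which holds for any joint distribution. A numerical sanity check on a toy pmf for $(W,X,Z)$ (values chosen purely for illustration; the left side is computed from entropies, the right side from the definition of mutual information):

```python
from itertools import product
from math import log

# toy joint pmf on (W, X, Z), all binary
probs = (0.12, 0.08, 0.10, 0.20, 0.05, 0.15, 0.18, 0.12)
p = dict(zip(product((0, 1), repeat=3), probs))

def marg(idx):
    # marginal pmf over the coordinates listed in idx
    m = {}
    for k, v in p.items():
        key = tuple(k[i] for i in idx)
        m[key] = m.get(key, 0.0) + v
    return m

def H(idx):
    # entropy in nats of the marginal over idx
    return -sum(v * log(v) for v in marg(idx).values() if v > 0)

# H(X) + I(W;Z|X), computed from entropies
lhs = H((1,)) + (H((0, 1)) + H((1, 2)) - H((1,)) - H((0, 1, 2)))

# I(WX;XZ) from the definition of mutual information, with overlapping blocks
pwx, pxz = marg((0, 1)), marg((1, 2))
rhs = sum(v * log(v / (pwx[(w, x)] * pxz[(x, z)]))
          for (w, x, z), v in p.items() if v > 0)

assert abs(lhs - rhs) < 1e-12
```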
\begin{oss}
In the case of perfect channel, \cite[Section IV.A]{treust2015empirical} characterizes the
empirical coordination region and the information constraint is $I(W,X;U) \leq H(X)+I(W;Z|X)$ as in \eqref{eq: regionpc}.
\end{oss}
\vspace{0.2cm}
\subsubsection{Converse}
Consider a code $(f_n,g_n)$ that induces a distribution $P_{U^{n} S^{n} Z^{n} X^{n} V^{n}}$ that is $\varepsilon$-close in total variational distance to the i.i.d. distribution $\bar P_{U S Z X V}^{\otimes n}$.
Let $T$ be the random variable defined in Section \ref{outer}.
We would like to prove that
\begin{equation*}
0 \leq H(X)+I(W;Z|X)-I(W,X;U)=I(W,X;X Z)-I(W,X;U).
\end{equation*}
The following proof is inspired by \cite{treust2015empirical}.
We have
{\allowdisplaybreaks
\begin{align*}
& 0 = H(X^{n} ,Z^{n}) - I(X^{n} Z^{n} ; U^{n} C) - H(X^{n} Z^{n}|U^{n} C)\\
& \overset{(a)}{\leq} \sum_{t=1}^{n} H(X_t ,{Z}_t) - \sum_{t=1}^{n} I(X^{n} Z^{n}; U_t|U_{t+1}^n C) - H(X^{n} Z^{n}|U^{n} C)\\
& \overset{(b)}{=} \sum_{t=1}^{n} I(X^{n} Z^{n} C;X_t {Z}_t) -\sum_{t=1}^{n} I(X^{n} Z^{n} U_{t+1}^n C; U_t) +\sum_{t=1}^{n} I(U_{t+1}^n C; U_t)- H(X^{n} Z^{n}|U^{n} C)\\
& \overset{(c)}{=} \sum_{t=1}^{n} I(X^{n} Z^{n} C;X_t {Z}_t) -\sum_{t=1}^{n} I(X^{n} Z^{n} U_{t+1}^n C; U_t) - H(X^{n} Z^{n}|U^{n} C)\\
& \leq \sum_{t=1}^{n} I(X^{n} Z^{n} C;X_t {Z}_t) -\sum_{t=1}^{n} I(X^{n} Z^{n} C; U_t) - H(X^{n} Z^{n}|U^{n} C)\\
& \overset{(d)}{=} \sum_{t=1}^{n} I(X^{n} {Z}_{\sim t} C;X_t {Z}_t)+ \sum_{t=1}^{n} I (Z_t;X_t {Z}_t| X^{n} {Z}_{\sim t} C) -\sum_{t=1}^{n} I(X^{n} {Z}_{\sim t} C; U_t) \\
& \quad -\sum_{t=1}^{n} I (Z_t; U_t| X^{n} {Z}_{\sim t} C)- H(X^{n} Z^{n}|U^{n} C)\\
& = \sum_{t=1}^{n} I(X^{n} {Z}_{\sim t} C;X_t {Z}_t)-\sum_{t=1}^{n} I(X^{n} {Z}_{\sim t} C; U_t) - H(X^{n} Z^{n}|U^{n} C) + \sum_{t=1}^{n} H(Z_t|X^{n} {Z}_{\sim t} C) \\
&\quad - \sum_{t=1}^{n} H(Z_t| X^{n} Z^{n} C) - \sum_{t=1}^{n} H(Z_t|X^{n} {Z}_{\sim t} C) + \sum_{t=1}^{n} H(Z_t|U_t X^{n} {Z}_{\sim t} C)\\
& = \sum_{t=1}^{n} I(X^{n} {Z}_{\sim t} C;X_t {Z}_t)-\sum_{t=1}^{n} I(X^{n} {Z}_{\sim t} C; U_t) + \sum_{t=1}^{n} H(Z_t|U_t X^{n} {Z}_{\sim t} C)- H(X^{n} Z^{n}|U^{n} C)\\
& \overset{(e)}{\leq} \sum_{t=1}^{n} I(X^{n} {Z}_{\sim t} C;X_t {Z}_t)-\sum_{t=1}^{n} I(X^{n} {Z}_{\sim t} C; U_t)+ \sum_{t=1}^{n} H(Z_t|U_t C)- H(Z^{n}|U^{n} C)\\
& \overset{(f)}{=} \sum_{t=1}^{n} \!I(X^{n} {Z}_{\sim t} C; X_t {Z}_t)\!-\!\sum_{t=1}^{n} I(X^{n} {Z}_{\sim t} C; U_t) \\
& = n I(X^{n} {Z}_{\sim T} C;X_T {Z}_T|T) -n I(X^{n} {Z}_{\sim T} C; U_T|T) \\
& \leq n I(X^{n} {Z}_{\sim T} C T; X_T {Z}_T) -n I(X^{n} {Z}_{\sim T} C T; U_T) + n I(T; U_T)\\
& \overset{(g)}{=} n I(X_T {X}_{\sim T} {Z}_{\sim T} C T; X_T {Z}_T) - n I(X_T {X}_{\sim T} {Z}_{\sim T} C T; U_T)
\end{align*}}where $(a)$ and $(b)$ follow from the properties of the mutual information and $(c)$ comes from
the independence between $U^n$ and $C$ and
the i.i.d. nature of the source.
Then $(d)$ comes from the chain rule, $(e)$ from the properties of conditional entropy, $(f)$ from
the independence between $(U^n,Z^n)$ and $C$ and
the i.i.d. nature of the source.
Finally, $(g)$ comes from the fact that $I(T; U_T)$ is zero due to the i.i.d. nature of the source.
We identify the auxiliary random variable $W_t$ with $(C,X_{\sim t}, {Z}_{\sim t})$ for each $t \in \llbracket 1,n\rrbracket$ and $W$ with $(W_T, T)=$ $(C,X_{\sim T}, {Z}_{\sim T}, T)$.
Observe that with this identification of $W$ the bound for $R_0$ follows from Section \ref{genconvpart1} with the substitution $Y=X$.
Moreover, the following Markov chains are verified for each $t \in \llbracket 1,n\rrbracket$:
{\allowdisplaybreaks
\begin{align*}
& {Z}_t-U_t-(W_t, X_t),\\
& V_t-(W_t, X_t, {Z}_t)-U_t.
\end{align*}}
The first one holds because the source is i.i.d. and $Z_t$ does not belong to $W_t$. The second Markov chain follows from the fact that
$V$ is generated using $C$, $X^{n}$ and $Z^{n}$, which are included in $(W_t, X_t, {Z}_t)= (C,X_{\sim t}, {Z}_{\sim t}, X_t, {Z}_t)$.
Following the same approach as in Section \ref{identification} and Section \ref{identification2}, the Markov chains involving $T$ hold.
Then since $W=W_t$ when $T=t$, we also have $ {Z}-U-(W, X)$ and $V-(W, X, {Z})-U $.
The cardinality bound is proved in $\mbox{Appendix \ref{appendix bounds}}$.
\subsection{Lossless decoding}\label{lossless decoding}
\begin{center}
\begin{figure}[ht]
\centering
\includegraphics[scale=0.21]{./lossless.pdf}
\caption{Coordination of signals and actions for a two-node network with a noisy channel and a lossless decoder.}
\label{fig: ld}
\end{figure}
\end{center}
\vspace{-0.7cm}
Suppose that the decoder wants to reconstruct the source losslessly, i.e., $V=U$ as in Figure \ref{fig: ld}.
Then, we characterize the strong coordination region $\mathcal R_{\text{LD}}$.
\begin{teo} \label{teolossless}
Consider the setting of Theorem \ref{teouv} and suppose that $\bar P_{V|USXYZ}( \mathbf{v}|\mathbf{u},\mathbf{s},\mathbf{x},\mathbf{y}, \mathbf{z})=\mathds 1_{ V =U } \{\mathbf{u} =\mathbf{v} \}$. Then the strong coordination region is
\begin{equation}\label{eq: regionld}
\mathcal R_{\text{LD}} := \begin{Bmatrix}[c|l]
& \bar P_{USZXYV}= \bar P_{USZ} \bar P_{X|U} \bar P_{Y|XS} \mathds 1_{ V =U }\\
&\exists \mbox{ } W \mbox{ taking values in $\mathcal W$}\\
(\bar P_{USZXY}, R_0) &\bar P_{USZWXYV}=\bar P_{USZ} \bar P_{W|U} \bar P_{X|UW} \bar P_{Y|XS} \mathds 1_{ V =U }\\
& I(W;U) \leq I(W;YZ)\\
& R_0 \geq I(W;USX|YZ)\\
& \lvert \mathcal W \rvert \leq \lvert \mathcal U \times \mathcal S \times \mathcal Z \times \mathcal X \times \mathcal Y \rvert+3
\end{Bmatrix}
\end{equation}
\end{teo}
\vspace{0.3cm}
\begin{oss}
Observe that the decomposition of the joint distributions $\bar P_{USZXYV}$ and $\bar P_{USZWXYV}$
is equivalently characterized in terms of Markov chains:
{\allowdisplaybreaks
\begin{equation}\label{markov chain lossless}
\begin{cases}
Z-(U,S)-(X,Y)\\
Y-(X,S)-U
\end{cases},\quad
\begin{cases}
Z-(U,S)-(X,Y,W)\\
Y-(X,S)-(U,W)
\end{cases}.
\end{equation}
}
\end{oss}
\vspace*{0.3cm}
\subsubsection{Achievability}
We show that $\mathcal R_{\text{LD}} \subseteq \mathcal R_{\text{in}}$ and thus it is achievable.
Similarly to the achievability proof of Theorem \ref{teopc},
let $(\bar P_{USZXY}, R_0) \in \mathcal R_{\text{LD}}(W)$ for some $W \in \mathcal W$.
Then, $W$ verifies the Markov chains $Z-(U,S)-(X,Y,W)$ and $Y-(X,S)-(U,Z,W)$ and the
information constraints for $ \mathcal R_{\text{LD}}$.
We want to show that $(\bar P_{USZXY}, R_0) \in \mathcal R_{\text{in}}(W)$.
Observe that the Markov chains are still valid.
Hence, the only difference is the bound on $R_0$, but $I(W;USXV|YZ)=I(W;USX|YZ)$ when $U=V$.
Then, $(\bar P_{USZXY}, R_0) \in \mathcal R_{\text{in}}(W)$ and if we consider the union over all suitable $W$,
we have $$\bigcup_{W} \mathcal R_{\text{LD}}(W) \subseteq \bigcup_{W} \mathcal R_{\text{in}}(W).$$
Finally, $ \mathcal R_{\text{LD}} \subseteq \mathcal R_{\text{in}}$.
\vspace*{0.2cm}
\subsubsection{Converse}
Consider a code $(f_n,g_n)$ that induces a distribution $P_{U^{n} S^{n} Z^{n} X^{n} Y^{n} V^{n}}$ that is $\varepsilon$-close in total variational
distance to the i.i.d. distribution $\bar P_{U S Z X Y}^{\otimes n} \mathds 1_{ V =U }^{\otimes n}$.
Let $T$ be the random variable defined in Section \ref{outer}.
We have
{\allowdisplaybreaks
\begin{align*}
nR_0 & \geq H(C) \geq H(C|Y^{n} Z^{n}) =H(C U^{n}|Y^{n} Z^{n}) - H( U^{n}|C Y^{n} Z^{n}) \\
& \overset{(a)}{\geq} H(C U^{n}|Y^{n} Z^{n}) - n f(\varepsilon)
\geq I(U^{n} S^{n} X^{n};C U^{n}|Y^{n} Z^{n}) - n f(\varepsilon)\\
&=\sum_{t=1}^{n} I(U_t S_t X_t;C U^{n} \!| U^{t-1} S^{t-1} X^{t-1} Y_{\sim t} Z_{\sim t} Y_t Z_t) - n f(\varepsilon)\\
& \overset{(b)}{\geq}\! \sum_{t=1}^{n} I(U_t S_t X_t;C U^{n} Y_{\sim t} Z_{\sim t} U^{t-1} S^{t-1} X^{t-1} | Y_t Z_t) -2nf(\varepsilon)\\
& \geq \sum_{t=1}^{n} I(U_t S_t X_t; C U^{n} Y_{\sim t} Z_{\sim t} | Y_t Z_t) -2nf(\varepsilon)
= n I(U_T S_T X_T; C U^{n} Y_{\sim T} Z_{\sim T} | Y_T Z_T T) -2nf(\varepsilon) \\
& = n I(U_T S_T X_T; C U^{n} Y_{\sim T} Z_{\sim T} T | Y_T Z_T) - n I(U_T S_T X_T; T | Y_T Z_T ) -2nf(\varepsilon)\\
& = n I(U_T S_T X_T; C U^{n} Y_{\sim T} Z_{\sim T} T | Y_T Z_T) - n I(U_T S_T X_T Y_T Z_T; T ) + n I(Y_T Z_T; T )-2nf(\varepsilon)\\
& \overset{(c)}{\geq} n I(U_T S_T X_T; C U^{n} Y_{\sim T} Z_{\sim T} T | Y_T Z_T) -3nf(\varepsilon)
\end{align*}}where $(a)$ follows from Fano's inequality, which implies that
\begin{equation}\label{byfano}
H( U^{n}|C Y^{n} Z^{n}) \leq n f(\varepsilon)
\end{equation}
as proved in Appendix \ref{appendix fano}.
To prove $(b)$ observe that
\begin{align*}
I(U_t S_t X_t;C U^{n} \!| U^{t-1} S^{t-1} X^{t-1} Y_{\sim t} Z_{\sim t} Y_t Z_t) =& I(U_t S_t X_t;C U^{n} Y_{\sim t} Z_{\sim t} U^{t-1} S^{t-1} X^{t-1} | Y_t Z_t)\\
&- I(U_t S_t X_t;Y_{\sim t} Z_{\sim t} U^{t-1} S^{t-1} X^{t-1}| Y_t Z_t)
\end{align*}and $I(U_t S_t X_t;Y_{\sim t} Z_{\sim t} U^{t-1} S^{t-1} X^{t-1}| Y_t Z_t) \leq f(\varepsilon) $ by Lemma \ref{lemab}.
Finally, $(c)$ comes from the fact, proved in \cite[Lemma VI.3]{cuff2013distributed}, that
$I(U_T S_T X_T Y_T Z_T; T )$ vanishes since the distribution is $\varepsilon$-close to i.i.d. by hypothesis.
With the identifications $W_t=(C,U^{n}, Y_{\sim t}, {Z}_{\sim t})$ for each $t \in \llbracket 1,n\rrbracket$ and $W=(W_T, T)=(C,U^{n},Y_{\sim T},{Z}_{\sim T}, T)$, we have
$R_0 \geq I(W;USX|YZ)$.
For the second part of the converse, we have
{\allowdisplaybreaks
\begin{align*}
n I(U; W ) & \leq n H(U)= H(U^{n}) = H(U^{n}|C)= I(U^{n} ; Y^{n} Z^{n} C )+ H(U^{n} | Y^{n} Z^{n} C) \\
&\overset{(d)}{\leq}\sum_{t=1}^n I(U^{n} ; Y_t Z_t | Y^{t-1} Z^{t-1} C) + n f(\varepsilon)
\leq \sum_{t=1}^n I(U^{n} Y^{t-1} Z^{t-1} C ; Y_t Z_t) +n f(\varepsilon) \\
& \leq \sum_{t=1}^n I(U^{n} Y_{\sim t} Z_{\sim t} C; Y_t Z_t) +n f(\varepsilon)
= n I(U^{n} Y_{\sim T} Z_{\sim T} C; Y_T Z_T|T) +n f(\varepsilon)\\
& \leq n I(U^{n} Y_{\sim T} Z_{\sim T} C T; Y_T Z_T) +n f(\varepsilon)
\overset{(e)}{=} n I( W; Y Z)+n f(\varepsilon)
\end{align*}}where $(d)$ comes from Fano's inequality and $(e)$ comes
from the identification $W=(C,U^{n},Y_{\sim T},{Z}_{\sim T}, T)$.
In order to complete the converse, we show that the following Markov chains hold for each $t \in \llbracket 1,n\rrbracket$:
{\allowdisplaybreaks
\begin{align*}
& Y_t-(X_t,S_t)-(U_t,Z_t,W_t),\\
& Z_t-(U_t,S_t)-(X_t,Y_t,W_t).
\end{align*}
}The first one is verified because the channel is memoryless and $Y_t$ does not belong to $W_t$, and the second one holds because
of the i.i.d. nature of the source and because $Z_t$ does not belong to $W_t$.
Following the same approach as in Section \ref{identification} and Section \ref{identification2}, the Markov chains involving $T$ hold.
Then, since $W=W_t$ when $T=t$, we also have $ Y-(X,S)-(U,Z,W)$ and $Z-(U,S)-(X,Y,W) $.
The cardinality bound is proved in $\mbox{Appendix \ref{appendix bounds}}$.
\begin{oss}
An equivalent characterization of the region is:
\begin{equation}\label{eq: regionldmael}
\mathcal R_{\text{LD}} := \begin{Bmatrix}[c|l]
& \bar P_{USZXY}= \bar P_{USZ} \bar P_{X|U} \bar P_{Y|XS} \\
& \exists \mbox{ } W \mbox{ taking values in $\mathcal W$}\\
(\bar P_{USZXY}, R_0) & \bar P_{USZWXY}= \bar P_{USZ} \bar P_{W|U} \bar P_{X|UW} \bar P_{Y|XS} \\
& H(U)\leq I(WU;YZ)\\
& R_0 \geq I(W;USX|YZ)+H(U|WYZ)\\
& \lvert \mathcal W \rvert \leq \lvert \mathcal U \times \mathcal S \times \mathcal Z \times \mathcal X \times \mathcal Y \rvert+1 \\
\end{Bmatrix}
\end{equation}The region in \eqref{eq: regionldmael} is achievable since with the choice of the auxiliary random variable $W''=(W,U)$, the constraints in \eqref{eq: regionld} become
{\allowdisplaybreaks
\begin{align*}
& I(WU;U)=H(U) \leq I(WU;YZ) \stepcounter{equation}\tag{\theequation}\label{mael2015uv}\\
& R_0 \geq I(WU;USX|YZ) =I(W;USX|YZ)+I(U;USX|WYZ) \\
& \phantom{R_0}=I(W;USX|YZ)+H(U|WYZ)-H(U|USXWYZ) \stepcounter{equation}\tag{\theequation}\label{mael2015ro}\\
& \phantom{R_0}=I(W;USX|YZ)+H(U|WYZ).
\end{align*}
}
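The computation in \eqref{mael2015ro} is the chain-rule identity $I(WU;USX|YZ)=I(W;USX|YZ)+H(U|WYZ)$, using $H(U|USXWYZ)=0$. A numerical sketch with collapsed variables $A=(S,X)$ and $B=(Y,Z)$ on a toy pmf (illustrative values only; the left side is computed from the definition of conditional mutual information, the right side from entropies):

```python
from itertools import product
from math import log

# toy joint pmf on (W, U, A, B), all binary; A collapses (S,X), B collapses (Y,Z)
probs = (0.04, 0.06, 0.05, 0.05, 0.07, 0.03, 0.06, 0.04,
         0.08, 0.02, 0.09, 0.06, 0.05, 0.10, 0.11, 0.09)
p = dict(zip(product((0, 1), repeat=4), probs))

def marg(idx):
    # marginal pmf over the coordinates listed in idx
    m = {}
    for k, v in p.items():
        key = tuple(k[i] for i in idx)
        m[key] = m.get(key, 0.0) + v
    return m

def H(idx):
    # entropy in nats of the marginal over idx
    return -sum(v * log(v) for v in marg(idx).values() if v > 0)

# I(WU;UA|B) from the definition, with overlapping blocks (W,U) and (U,A)
pb, pwub, puab = marg((3,)), marg((0, 1, 3)), marg((1, 2, 3))
lhs = sum(v * log(v * pb[(b,)] / (pwub[(w, u, b)] * puab[(u, a, b)]))
          for (w, u, a, b), v in p.items() if v > 0)

# I(W;UA|B) + H(U|W,B), all from entropies
rhs = (H((0, 3)) + H((1, 2, 3)) - H((3,)) - H((0, 1, 2, 3))) \
      + (H((0, 1, 3)) - H((0, 3)))

assert abs(lhs - rhs) < 1e-12
```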
Moreover, the converse in the proof of Theorem \ref{teolossless} is still valid with the identification $W=(C,U_{\sim T}, Y_{\sim T},{Z}_{\sim T}, T)$.
Note that \cite[Section IV.B]{treust2015empirical}
gives a characterization of the empirical coordination region and the constraint for
the mutual information is
\begin{equation*}
0 \leq I(WU;YZ)-H(U)= I(WU;YZ)-H(U)-I(W;S|U)
\end{equation*}
which is the same as in \eqref{mael2015uv} because of the Markov chain $SZ-U-W$.
\end{oss}
\subsection{Separation between source and channel}\label{separation}
Suppose that the channel state $S^n$ is independent of the source and side information $(U^n, Z^n)$, i.e., $P_{S^n U^n Z^n}=P_{S^n} P_{U^n Z^n}$,
and that the target joint distribution is of the form $\bar{P}_{UZV}^{\otimes n}\bar{P}_{SXY}^{\otimes n}$.
For simplicity, we will suppose that the encoder has perfect state information (see Figure \ref{fig: sep}).
Then we characterize the strong coordination region $\mathcal R_{\text{SEP}}$.
Note that in this case the coordination requirements are three-fold: the random variables $(U^n,Z^n,V^n)$
should be coordinated, the random variables $(S^n,X^n,Y^n)$ should be coordinated and finally $(U^n,Z^n,V^n)$ should be independent of $(S^n,X^n,Y^n)$.
We introduce two auxiliary random variables $W_1$ and $W_2$, where $W_2$ is used to accomplish the coordination of $(U^n,Z^n,V^n)$,
while $W_1$ has the double role of ensuring the independence of source and state as well as coordinating $(S^n,X^n,Y^n)$.
\begin{center}
\begin{figure}[ht]
\centering
\includegraphics[scale=0.21]{./separation.pdf}
\caption{Coordination of signals and actions for a two-node network with a noisy channel where the source is separated from the channel.}
\label{fig: sep}
\end{figure}
\end{center}
\vspace{-0.7cm}
\begin{teo} \label{teoseparation}
Consider the setting of Theorem \ref{teouv} and suppose that $\bar P_{USXYZV}= \bar P_{UZV} \bar P_{SXY}$. Then, the strong coordination region is
\begin{equation}\label{eq: regionsep}
\mathcal R_{\text{SEP}} := \begin{Bmatrix}[c|l]
&\bar P_{USZXYV}= \bar P_{UZ} \bar P_{V|UZ} \bar P_{S} \bar P_{X|S} \bar P_{Y|XS} \\
&\exists \mbox{ } (W_1,W_2) \mbox{ taking values in $\mathcal W_1 \times \mathcal W_2$}\\
(\bar P_{USZXY}, R_0) &\bar P_{USZW_1 W_2XYV}=\bar P_{UZ} \bar P_{W_2|U} \bar P_{V|Z W_2} \bar P_{S} \bar P_{X|S} \bar P_{W_1|SX} \bar P_{Y|XS} \\
& I(W_1;S) + I(W_2;U) \leq I(W_1;Y) + I(W_2;Z)\\
&R_0 \geq I(W_1;SX|Y)+I(W_2;UV|Z)\\
&(\lvert \mathcal W_1 \rvert , \lvert \mathcal W_2 \rvert)\leq \lvert \mathcal U \times \mathcal S \times \mathcal Z \times \mathcal X \times \mathcal Y \times \mathcal V \rvert +3 .
\end{Bmatrix}
\end{equation}
\end{teo}
\begin{oss}
Observe that the decomposition of the joint distribution $\bar P_{USZW_1 W_2XYV}$
is equivalently characterized in terms of Markov chains:
{\allowdisplaybreaks
\begin{equation}\label{markov chain separation}
\begin{cases}
Z-U-W_2\\
Y-(X,S)-W_1\\
V-(Z,W_2)-U
\end{cases}.
\end{equation}
}
\end{oss}
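The source-side Markov chains above follow directly from the factorization of the joint distribution. As a quick numerical sanity check (not part of the proof), the following sketch builds a joint distribution over $(U,Z,W_2,V)$ from randomly chosen factors of the form $\bar P_{UZ} \bar P_{W_2|U} \bar P_{V|Z W_2}$ and verifies that the corresponding conditional mutual informations vanish; all component distributions and alphabet sizes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)

def H(p):
    # Shannon entropy in bits of a pmf given as an array
    q = p.ravel()
    q = q[q > 1e-15]
    return float(-(q * np.log2(q)).sum())

def cmi(p, A, B, C):
    """I(A;B|C) from joint pmf p, with A, B, C disjoint tuples of axes."""
    def m(axes):
        drop = tuple(i for i in range(p.ndim) if i not in axes)
        return p.sum(axis=drop)
    return (H(m(A + C)) + H(m(B + C)) - H(m(A + B + C)) - H(m(C)))

# Joint over (U, Z, W2, V), axes 0..3, built from the factorization
# P_UZ P_{W2|U} P_{V|Z W2} with random (hypothetical) components
nU = nZ = nW = nV = 2
P_UZ = rng.random((nU, nZ)); P_UZ /= P_UZ.sum()
P_W_U = rng.random((nU, nW)); P_W_U /= P_W_U.sum(axis=1, keepdims=True)
P_V_ZW = rng.random((nZ, nW, nV)); P_V_ZW /= P_V_ZW.sum(axis=2, keepdims=True)

p = np.einsum('uz,uw,zwv->uzwv', P_UZ, P_W_U, P_V_ZW)

assert abs(cmi(p, (1,), (2,), (0,))) < 1e-9    # Z - U - W2
assert abs(cmi(p, (3,), (0,), (1, 2))) < 1e-9  # V - (Z, W2) - U
```

The same computation, applied to the channel-side factors, would verify the chain $Y-(X,S)-W_1$.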
\subsubsection{Achievability}
We show that $\mathcal R_{\text{SEP}}$ is contained in the achievable region $\mathcal R_{\text{in}}$ in \eqref{eq: region inn} specialized to this specific setting.
In this case we are also supposing that the encoder has perfect state information,
i.e. the input of the encoder is the pair $(U^{n}, S^{n})$ as in Figure \ref{fig: sep} as well as common randomness $C$.
The joint distribution $\bar P_{USZXYV}$ becomes $\bar P_{UZ} \bar P_{V|UZ} \bar P_{S} \bar P_{X|S} \bar P_{Y|XS}$
since $(U,Z,V)$ is independent of $(S,X,Y)$ and the Markov chains are still valid.
Observe that the set $\mathcal R_{\text{in}}$ is the union over all the possible choices for $W$ that satisfy
the joint distribution, rate and information constraints in \eqref{eq: region inn}.
Similarly, $\mathcal R_{\text{SEP}}$ is the union of all $\mathcal R_{\text{SEP}}(W_1,W_2)$ with $(W_1,W_2)$ that satisfies
the joint distribution, rate and information constraints in \eqref{eq: regionsep}.
Let $(\bar P_{USZXY}, R_0) \in \mathcal R_{\text{SEP}}(W_1,W_2)$ for some $(W_1,W_2)$ taking values in $\mathcal W_1 \times \mathcal W_2$.
Then, $(W_1,W_2) $ verifies the Markov chains
$Z-U-W_2$, $V-( W_2, Z)- U$ and $Y-(S,X)-W_1$, and the
information constraints for $ \mathcal R_{\text{SEP}}$.
We will show that $(\bar P_{USZXY}, R_0) \in \mathcal R_{\text{in}}(W')$, where $W'=(W_1,W_2)$.
The information constraints in \eqref{eq: regionsep} imply
the information constraints for $\mathcal R_{\text{in}}(W')$ since:
{\allowdisplaybreaks
\begin{align*}
&I(W_1 W_2;YZ)- I(W_1W_2;US)\\
&= I(W_1; YZ)+ I(W_2;YZ|W_1) - I(W_1;US)- I(W_2; US|W_1)\\
&= I(W_1; Y) + I(W_2;Y Z W_1) - I(W_1;S)- I(W_2;U S W_1)\\
&= I(W_1; Y) + I(W_2;Z) - I(W_1;S)- I(W_2;U) \geq 0\\
& I(W_1W_2;USXV|YZ)\\
& =I(W_1;USXV|YZ) + I(W_2;USXV|YZW_1)\\
& = I(W_1;USXVZ|Y)+ I(W_2;USXVY|ZW_1)\\
&= I(W_1;SX|Y)+ I(W_2;USXVY|Z)\\
&=I(W_1;SX|Y)+ I(W_2;UV|Z) \leq R_0
\end{align*}}because by construction $W_1$ and $W_2$ are independent
of each other, $W_1$ is independent of $(U,Z,V)$, and $W_2$ is independent of $(S,X,Y)$.
Then $(\bar P_{USZXY}, R_0) \in \mathcal R_{\text{in}}(W_1, W_2)$ and if we consider the union over all suitable $(W_1, W_2)$,
we have $$\bigcup_{(W_1, W_2)} \mathcal R_{\text{SEP}}(W_1, W_2) \subseteq \bigcup_{(W_1, W_2)} \mathcal R_{\text{in}}(W_1, W_2) \subseteq \bigcup_{W} \mathcal R_{\text{in}}(W).$$
Finally, $ \mathcal R_{\text{SEP}} \subseteq \mathcal R_{\text{in}}$.
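The key step in the chain of equalities above is that, under the independence structure, $I(W_1W_2;YZ)-I(W_1W_2;US)$ splits into per-block terms. The following sketch (a numerical sanity check on randomly generated pmfs, not part of the proof) builds a product distribution for the blocks $(W_1,S,Y)$ and $(W_2,U,Z)$ and verifies the decomposition.

```python
import numpy as np

rng = np.random.default_rng(0)

def marg(p, axes):
    # Marginalize the joint pmf p onto the given axes
    drop = tuple(i for i in range(p.ndim) if i not in axes)
    return p.sum(axis=drop)

def H(p):
    q = p.ravel()
    q = q[q > 1e-15]
    return float(-(q * np.log2(q)).sum())

def I(p, A, B):
    # Mutual information I(A;B) in bits; A, B disjoint tuples of axes
    return H(marg(p, A)) + H(marg(p, B)) - H(marg(p, A + B))

# Independent blocks: axes (0,1,2) = (W1,S,Y), axes (3,4,5) = (W2,U,Z)
p1 = rng.random((2, 2, 2)); p1 /= p1.sum()
p2 = rng.random((2, 2, 2)); p2 /= p2.sum()
p = np.multiply.outer(p1, p2)  # product distribution of the two blocks

# I(W1 W2; Y Z) - I(W1 W2; U S) equals the per-block decomposition
lhs = I(p, (0, 3), (2, 5)) - I(p, (0, 3), (4, 1))
rhs = (I(p, (0,), (2,)) + I(p, (3,), (5,))
       - I(p, (0,), (1,)) - I(p, (3,), (4,)))
assert abs(lhs - rhs) < 1e-9
```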
\vspace{0.2cm}
\subsubsection{Converse}
Let $T$ be the random variable defined in Section \ref{outer}.
Consider a code $(f_n,g_n)$ that induces a distribution $P_{U^{n} S^{n} Z^{n} X^{n} Y^{n} V^{n}}$ that
is $\varepsilon$-close in total variational distance to the i.i.d. distribution $\bar P_{U Z V}^{\otimes n} \bar P_{SXY}^{\otimes n}$.
Then, we have
\begin{equation*}
\mathbb V (P_{S^{n} X^{n} Y^{n} Z^{n} U^{n} V^{n}}, \bar P_{UZV}^{\otimes n} \bar P_{SXY}^{\otimes n}) < \varepsilon.
\end{equation*}
If we apply Lemma \ref{lemmit} to $A=S^{n} X^{n} Y^{n}$ and $B=Z^{n} U^{n} V^{n}$, we have
\begin{equation}\label{lemmitsep}
I(S^{n} X^{n} Y^{n}; Z^{n} U^{n} V^{n}) < f(\varepsilon).
\end{equation}
Then, we have
{\allowdisplaybreaks
\begin{align*}
nR_0 &\geq H(C) \overset{(a)}{\geq} I(U^{n} S^{n} X^{n} V^{n};C|Y^{n} Z^{n})= I(S^{n} X^{n} ;C|Y^{n} Z^{n} U^{n} V^{n}) + I(U^{n} V^{n};C|Y^{n} Z^{n}) \\
& = I(S^{n} X^{n} ;C Z^{n} U^{n} V^{n} |Y^{n}) - I(S^{n} X^{n} ; Z^{n} U^{n} V^{n} |Y^{n}) + I(U^{n} V^{n};C Y^{n}| Z^{n}) - I(U^{n} V^{n}; Y^{n}| Z^{n}) \\
& \overset{(b)}{\geq} I(S^{n} X^{n} ;C Z^{n} U^{n} V^{n} |Y^{n}) + I(U^{n} V^{n};C Y^{n}| Z^{n})- 2 f(\varepsilon)\\
& \overset{(c)}{=} \sum_{t=1}^n I(S_t X_t ;C Z^{n} U^{n} V^{n} |S_{t+1}^n X_{t+1}^n Y_t Y_{\sim t})+ \sum_{t=1}^n I(U_t V_t ;C Y^{n}| U^{t-1} V^{t-1} Z_t Z_{\sim t}) - 2 f(\varepsilon)\\
& = \sum_{t=1}^n I(S_t X_t ;C Z^{n} U^{n} V^{n} S_{t+1}^n X_{t+1}^n Y_{\sim t} | Y_t) - \sum_{t=1}^n I(S_t X_t ; S_{t+1}^n X_{t+1}^n Y_{\sim t} | Y_t) \\
&\quad + \sum_{t=1}^n I(U_t V_t ;C Y^{n} U^{t-1} V^{t-1} Z_{\sim t} |Z_t) - \sum_{t=1}^n I(U_t V_t ; U^{t-1} V^{t-1} Z_{\sim t} |Z_t) - 2n f(\varepsilon)\\
& \overset{(d)}{\geq} \sum_{t=1}^n I(S_t X_t ;C Z^{n} U^{n} V^{n} S_{t+1}^n X_{t+1}^n Y_{\sim t} | Y_t) + \sum_{t=1}^n I(U_t V_t ;C Y^{n} U^{t-1} V^{t-1} Z_{\sim t} |Z_t) - 2 f(\varepsilon)- 2n f(\varepsilon)\\
& \geq n I(S_T X_T ;C U^{n} S_{T+1}^n Y_{\sim T} | Y_T T) + n I(U_T V_T ; C Y^{n} U^{T-1} V^{T-1} Z_{\sim T} |Z_T T) - 2(n+1) f(\varepsilon)\\
&\geq n I(S_T X_T ;C U^{n} S_{T+1}^n Y^{T-1} | Y_T T) + n I(U_T V_T ; C Y^{n} U^{T-1} Z_{\sim T} |Z_T T) - 2(n+1) f(\varepsilon)\\
& = n I(S_T X_T ;C U^{n} S_{T+1}^n Y^{T-1} T| Y_T ) - n I(S_T X_T ; T| Y_T ) \\
& \quad + n I(U_T V_T ; C Y^{n} U^{T-1} Z_{\sim T} T |Z_T ) - n I(U_T V_T ; T |Z_T ) - 2(n+1) f(\varepsilon)\\
& = n I(S_T X_T ;C U^{n} S_{T+1}^n Y^{T-1} T| Y_T ) - n I(S_T X_T Y_T; T ) + n I( Y_T; T )\\
& \quad + n I(U_T V_T ; C Y^{n} U^{T-1} Z_{\sim T} T |Z_T ) - n I(U_T V_T Z_T ; T ) + n I( Z_T; T ) - 2(n+1) f(\varepsilon)\\
& \overset{(e)}{\geq} n I(S_T X_T ;C U^{n} S_{T+1}^n Y^{T-1} T| Y_T ) + n I(U_T V_T ; C Y^{n} U^{T-1} Z_{\sim T} T |Z_T ) - 2(2n+1) f(\varepsilon)
\end{align*}}where $(a)$ follows from basic properties of entropy and mutual information. To prove $(b)$, note that
{\allowdisplaybreaks
\begin{align*}
I(S^{n} X^{n} ; Z^{n} U^{n} V^{n} |Y^{n}) & \leq I(S^{n} X^{n} Y^{n}; Z^{n} U^{n} V^{n})\\
I(U^{n} V^{n}; Y^{n}| Z^{n}) & \leq I(S^{n} X^{n} Y^{n}; Z^{n} U^{n} V^{n})
\end{align*}}and $I(S^{n} X^{n} Y^{n}; Z^{n} U^{n} V^{n}) < f(\varepsilon)$ by \eqref{lemmitsep}.
Then $(c)$ comes from the chain rule for mutual information, $(d)$ follows from Lemma \ref{lemab} and $(e)$ from \cite[Lemma VI.3]{cuff2013distributed}
since the distributions are close to i.i.d. by hypothesis.
The lower bound on $R_0$ follows from the identifications
{\allowdisplaybreaks
\begin{align*}
& W_{1,t} = (C , U^{n} , S_{t+1}^n, Y^{t-1}) \quad t \in \llbracket 1,n\rrbracket \\
& W_{2,t} = (C, Y^{n}, U^{t-1}, Z_{\sim t} ) \quad t \in \llbracket 1,n\rrbracket\\
& W_1 = (W_{1,T}, T)=(C , U^{n} , S_{T+1}^n, Y^{T-1}, T)\\
& W_2 = (W_{2,T}, T)=(C, Y^{n}, U^{T-1}, Z_{\sim T}, T ).
\end{align*}}
Following the same approach as \cite{treust2015empirical, treusttech}, we divide the second part of the converse into two steps.
First, we have the following upper bound:
{\allowdisplaybreaks
\begin{align*}
I( C U^{n} ;Y^{n}) & =\sum_{t=1}^n I(C U^{n} ; Y_t|Y^{t-1}) \leq \sum_{t=1}^n I(C U^{n} Y^{t-1}; Y_t)\\
& = \sum_{t=1}^n I(C U^{n} Y^{t-1} S_{t+1}^n; Y_t)-\sum_{t=1}^n I(S_{t+1}^n; Y_t| C U^{n} Y^{t-1})\\
& \overset{(f)}{=} \sum_{t=1}^n I(C U^{n} Y^{t-1} S_{t+1}^n; Y_t)-\sum_{t=1}^n I(S_t; Y^{t-1}| C U^{n} S_{t+1}^n) \stepcounter{equation}\tag{\theequation}\label{upb}\\
& = \sum_{t=1}^n I(C U^{n} Y^{t-1} S_{t+1}^n; Y_t)-\sum_{t=1}^n I(S_t; C Y^{t-1} U^{n} S_{t+1}^n) + \sum_{t=1}^n I(S_t; C U^{n} S_{t+1}^n)\\
& \overset{(g)}{=} \sum_{t=1}^n I(C U^{n} Y^{t-1} S_{t+1}^n; Y_t)-\sum_{t=1}^n I(S_t; C Y^{t-1} U^{n} S_{t+1}^n) \\
& \overset{(h)}{=} \sum_{t=1}^n I( Y_t; W_{1,t})-\sum_{t=1}^n I(S_t; W_{1,t}) \\
\end{align*}}where $(f)$ comes from Csisz{\'a}r's Sum Identity \cite{elgamal2011nit}, $(g)$ from the fact that $I(S_t; C U^{n} S_{t+1}^n)$ is zero because the source and the common randomness are independent of the state, which
is i.i.d. by hypothesis. Finally, $(h)$ comes from the identification of the auxiliary random variable $W_{1,t}$ for $t \in \llbracket 1,n\rrbracket$.
Then, we show a lower bound:
{\allowdisplaybreaks
\begin{align*}
I( C U^{n} ;Y^{n}) & \geq I(U^{n} ;Y^{n}|C) \overset{(i)}{=} I(U^{n} ;C Y^{n} ) \overset{(j)}{=} I(U^{n} Z^{n};C Y^{n} ) \\
&\geq I(U^{n};C Y^{n}|Z^{n})= \sum_{t=1}^n I(U_t;C Y^{n}|Z^{n} U^{t-1}) \\
&= \sum_{t=1}^n I(U_t; C Y^{n} Z_{\sim t} U^{t-1}|Z_t) - \sum_{t=1}^n I(U_t; Z_{\sim t} U^{t-1}|Z_t) \\
& \overset{(k)}{=} \sum_{t=1}^n I(U_t; C Y^{n} Z_{\sim t} U^{t-1}|Z_t) \stepcounter{equation}\tag{\theequation}\label{lob}\\
& = \sum_{t=1}^n I(U_t Z_t;C Y^{n} Z_{\sim t} U^{t-1}) - \sum_{t=1}^n I(Z_t;C Y^{n} Z_{\sim t} U^{t-1}) \\
& \overset{(l)}{=} \sum_{t=1}^n I(U_t; C Y^{n} Z_{\sim t} U^{t-1}) - \sum_{t=1}^n I(Z_t;C Y^{n} Z_{\sim t} U^{t-1}) \\
& \overset{(m)}{=} \sum_{t=1}^n I(U_t; W_{2,t}) - \sum_{t=1}^n I(Z_t; W_{2,t})
\end{align*}}where $(i)$ comes from the fact that $I(U^{n};C)$ is zero because $U^{n}$ and $C$ are independent, $(j)$ from
the Markov chain $Z^{n}-U^{n}-(Y^{n}, C)$, $(k)$ from the fact that the pairs $(U_t, Z_t)$ are i.i.d. by hypothesis,
$(l)$ follows from the Markov chain $ Z_t-U_t-(Y^{n},Z_{\sim t}, U^{t-1}, C )$ for $t \in \llbracket 1,n\rrbracket$ and finally
$(m)$ comes from the identification of the auxiliary random variable $W_{2,t}$ for $t \in \llbracket 1,n\rrbracket$.
By combining upper and lower bound, we have
{\allowdisplaybreaks
\begin{align*}
0 & \overset{(n)}{\leq} \sum_{t=1}^n I( Y_t; W_{1,t})-\sum_{t=1}^n I(S_t; W_{1,t}) + \sum_{t=1}^n I(Z_t; W_{2,t}) - \sum_{t=1}^n I(U_t; W_{2,t})\\
& = n I( Y_T; W_{1,T}|T) - n I(S_T; W_{1,T}|T) +n I(Z_T; W_{2,T}|T) - n I(U_T; W_{2,T}|T)\\
& \leq n I( Y_T; W_{1,T} T) - n I(S_T; W_{1,T} T) + n I(S_T; T) + n I(Z_T; W_{2,T} T) - n I(U_T; W_{2,T} T)+n I(U_T; T)\\
& \overset{(o)}{=} n I( Y_T; W_{1,T} T)- n I(S_T; W_{1,T} T) + n I(Z_T; W_{2,T} T) - n I(U_T; W_{2,T} T)\\
& \overset{(p)}{=} n I( Y; W_1) - n I(S; W_1)+ n I(Z; W_2) - n I(U; W_2)
\end{align*}}where $(n)$ comes from \eqref{upb} and \eqref{lob} and $(o)$ follows from the i.i.d. nature of the source and state.
Finally $(p) $ follows from the identifications for $W_1$ and $W_2$.
With the chosen identification, the Markov chains are verified for each $t \in \llbracket 1,n\rrbracket$:
{\allowdisplaybreaks
\begin{align*}
& Y_t- (X_t, S_t)-W_{1,t} \\
& Z_t-U_t- W_{2,t}\\
& V_t-( W_{2,t}, Z_t)- U_t.
\end{align*}
}The first Markov chain holds because the channel is memoryless and $Y_t$ does not belong to $W_{1,t}$. The second one holds because $Z^{n}$
is i.i.d. and $Z_t$ does not belong to $W_{2,t}$.
Finally, the third one is verified because the decoder is non causal and $V_t$ is a function of $(Y^{n}, Z^{n})$ that is included in
$ (W_{2,t}, Z_t)=(Y^{n}, U^{t-1}, Z_{\sim t}, Z_t)$.
With an approach similar to that of Section \ref{identification} and Section \ref{identification2}, the Markov chains involving $T$ also hold.
Then since $W_1=W_{1,t}$ and $W_2=W_{2,t}$ when $T=t$, we also have $ Y- (X, S)-W_1 $, $ Z-U- W_2$ and $V-( W_2, Z)- U$.
The cardinality bound is proved in $\mbox{Appendix \ref{appendix bounds}}$.
\begin{oss}
Note that even though in the converse proof $W_1$ and $W_2$ are correlated, from them
we can define two new variables $W'_1$ and $W'_2$, independent of each other and with the same marginal distributions
$P_{W'_1 SXY} =P_{W_1 SXY}$ and $P_{W'_2 UVZ}=P_{W_2 UVZ}$, such that the joint distribution $P_{W'_1 W'_2 SXYUVZ}$ splits as $P_{W'_1 SXY} P_{W'_2 UVZ}$.
Since we are supposing $(U,V,Z)$ and $(S,X,Y)$ independent of each other and the constraints only depend on the marginal distributions $P_{W_1 SXY}$ and $P_{W_2 UVZ}$,
the converse is still satisfied with the new auxiliary random variables $W'_1$ and $W'_2$.
Moreover, the new variables still satisfy the cardinality bounds since these also depend only on the marginal distributions (as shown in $\mbox{Appendix \ref{appendix bounds}}$).
\end{oss}
\vspace{-0.3cm}
\subsection{Coordination under secrecy constraints }\label{secrecy}
\begin{center}
\begin{figure}[ht]
\centering
\includegraphics[scale=0.21]{./eve.pdf}
\caption{Wiretap channel: strong coordination implies secrecy.}
\label{fig: wtap}
\end{figure}
\end{center}
\vspace{-0.8cm}
In this section we briefly discuss how in the separation setting of Section \ref{separation}, strong coordination offers additional security guarantees \vv{for free}.
In this context, the common randomness is not only useful to coordinate signals and actions of the nodes but plays the role of a secret key shared between the two legitimate users.
For simplicity, we do not consider channel state and side information at the decoder.
Suppose there is an eavesdropper who observes the signals sent over the channel.
We will show that, without knowledge of the common randomness, the eavesdropper cannot infer any information about the actions.
\begin{lem}
In the setting of Theorem \ref{teoseparation}, without state and side information at the decoder,
suppose that there is an eavesdropper that receives the same sequence $Y^n$ as the decoder but has no knowledge of the common randomness.
There exists a sequence $(f_n,g_n)$ of strong coordination codes achieving the pair
$(\bar P_{UV} \bar P_{X}, R_0) \in \mathcal{R}_{SEP}$ such that
the induced joint distribution $P_{U^n V^n X^n Y^n}$ satisfies the \emph{strong secrecy condition} \cite{bloch2011physical}:
\begin{equation}\label{strong secrecy}
\lim_{n \to \infty} \mathbb D(P_{U^nV^nY^n} \Arrowvert P_{U^nV^n} P_{Y^n})=\lim_{n \to \infty} I(U^nV^n;Y^n)=0.
\end{equation}
\begin{IEEEproof}
Observe that in this setting the target joint distribution is of the form $\bar{P}_{UV}^{\otimes n}\bar{P}_{XY}^{\otimes n}$. Therefore achieving strong coordination means that
$\mathbb V(P_{U^nV^nY^n}, \bar{P}_{UV}^{\otimes n}\bar{P}_{Y}^{\otimes n})$ vanishes.
By the upper bound on the mutual information in Lemma \ref{lem1csi}, we have secrecy if
$\mathbb V(P_{U^nV^nY^n}, \bar{P}_{UV}^{\otimes n}\bar{P}_{Y}^{\otimes n})$ goes to zero exponentially.
But we have proved in Section \ref{inner} that there exists a sequence of codes such that
$\mathbb V (\bar P_{U^{n} S^{n} Z^{n} X^{n} Y^{n} V^{n}}, P_{U^{n} S^{n} Z^{n} X^{n} Y^{n} V^{n}}) $
goes to zero exponentially \eqref{finaleq}. Hence, so does $\mathbb V(P_{U^nV^nY^n}, \bar{P}_{UV}^{\otimes n}\bar{P}_{Y}^{\otimes n})$.
\end{IEEEproof}
\end{lem}
\vspace{-0.3cm}
\subsection{Is separation optimal?}
\begin{center}
\begin{figure}[ht]
\centering
\includegraphics[scale=0.21]{./schemecuff.pdf}
\caption{Coordination of the actions $U^n$ and $V^n$ for a two-node network with an error-free link of rate $R$.}
\label{fig: cuff}
\end{figure}
\end{center}
\vspace{-0.7cm}
Strong coordination over error-free channels was investigated in \cite{cuff2010, cuff2008communication}.
When extending this analysis to noisy channels, it is natural to ask whether some form of separation theorem holds between coding for coordination and channel coding.
In this section, we show that unlike the case of empirical coordination, separation does not hold for strong coordination.
If the separation principle were valid for strong coordination, then by concatenating the strong coordination of
the source and its reconstruction with the strong coordination of the input and output of the channel we would retrieve the
same mutual information and rate constraints.
In order to prove that separation does not hold, first we consider the optimal result for coordination
of actions in \cite{cuff2010, cuff2008communication} and then we compare it with our result on joint coordination of signals and actions.
In particular, since we want to compare the result in \cite{cuff2010, cuff2008communication} with an exact region,
we consider the case in which the channel is perfect and the target joint distribution is of the form
$\bar{P}_{UV}^{\otimes n}\bar{P}_{X}^{\otimes n}$.
The choice of a perfect channel might appear counterintuitive but
it is motivated by the fact that we are trying to find a counterexample.
As a matter of fact, if the separation principle holds for any noisy link, it should in particular hold for a perfect one.
We start by considering the two-node network with fixed source $\bar P_U$ and an error-free link of rate $R$
(Figure \ref{fig: cuff}). For this setting, \cite{cuff2010, cuff2008communication}
characterize the strong coordination region as
\begin{equation}\label{region cuff}
\mathcal{R}_{\text{Cuff}}:=\begin{Bmatrix}[c|l]
& \bar P_{UV}= \bar P_{U} \bar P_{V|U} \\
&\exists \mbox{ } W \mbox{ taking values in $\mathcal W$}\\
(\bar P_{UWV}, R, R_0) & \bar P_{UWV}= \bar P_{U} \bar P_{W|U} \bar P_{V|UW}\\
& R \geq I(U;W) \\
& R+R_0 \geq I(UV;W)\\
& \lvert \mathcal W \rvert \leq \lvert \mathcal U \times {\mathcal V} \rvert+1
\end{Bmatrix}.
\end{equation}The result in \cite{cuff2010,cuff2008communication} characterizes the trade-off between the rate $R_0$ of available common randomness and
the required description rate $R$ for simulating a discrete memoryless channel for a fixed input distribution.
We compare this region to our results when the requirement to coordinate the signals $X^n$ and $Y^n$ in addition
to the actions $U^n$ and $V^n$ is relaxed.
We consider, in the simpler scenario with no state and no side information, the intersection $\mathcal{R}_{UV\otimes X}:= \mathcal R_{\text{PC}}\cap \mathcal R_{\text{SEP}}$.
The following result characterizes the strong coordination region (proof in Appendix \ref{appendix UVotimesX}).
\begin{prop} \label{teoUVotimesX}
Consider the setting of Theorem \ref{teoisit} and suppose that
$\bar P_{Y|X}( \mathbf{y}|\mathbf{x})={\mathds 1} \{\mathbf{x}= \mathbf{y} \}$ and
$\bar P_{UXV}=$ $\bar P_{UV} \bar P_{X}$. Then, the strong coordination region is
\begin{equation}\label{UVotimesX}
\mathcal R_{UV\otimes X} := \begin{Bmatrix}[c|l]
&\bar P_{UXV}= \bar P_{U} \bar P_{V|U} \bar P_{X}\\
&\exists \mbox{ } W \mbox{ taking values in $\mathcal W$}\\
(\bar P_{UXV}, R_0) &\bar P_{UW XV}=\bar P_{U} \bar P_{W|U} \bar P_{V| W} \bar P_{X}\\
& I(W;U) \leq H(X)\\
&R_0 \geq I(UV;W)\\
& \lvert \mathcal W \rvert\leq \lvert \mathcal U \times \mathcal V \rvert +1
\end{Bmatrix}
\end{equation}
\end{prop}
To compare $\mathcal{R}_{\text{Cuff}}$ and $\mathcal R_{UV\otimes X}$, suppose that in the setting of Figure \ref{fig: cuff} we use a codebook to send a message to coordinate $U^n$ and $V^n$.
In order to do so, we introduce in the model an i.i.d. source $X^n$ with uniform distribution $P_X$ and we use the entropy-typical sequences of $X^n$ as a codebook.
Note that in the particular case where $X^n$ is generated according to the uniform distribution, all the sequences in $\mathcal X^n$ are entropy-typical
and $P_{X^n}$ coincides with the i.i.d. distribution $\bar P_{X}^{\otimes n}$.
Hence, we identify $R = H(X)$ and we rewrite the information constraints in \eqref{region cuff} as
\begin{equation*}
H(X) \geq I(U;W) \quad R_0 \geq I(UV;W)-H(X).
\end{equation*}Since in \cite{cuff2008communication} the requirement is to induce a joint distribution $P_{U^{n} V^{n}}$
that is $\varepsilon$-close in total variational distance to the i.i.d. distribution $\bar P_{UV}^{\otimes n}$, by
generating $X^n$ according to the uniform distribution we coordinate $X^n$ and $(U^n,V^n)$ separately.
Observe that, while the information constraint is the same in the two regions \eqref{region cuff} and \eqref{UVotimesX}, the rate of common randomness $R_0$ required for the strong coordination region in \eqref{UVotimesX} is larger than the rate of common randomness in \eqref{region cuff}.
In fact, in the setting of Figure \ref{fig: cuff} both $X^n$ and the pair $(U^n,V^n)$ achieve coordination separately
(i.e. $P_{X^n}$ is close to $\bar P_X^{\otimes n}$ and $P_{U^n V^n}$ is close to $\bar P_{UV}^{\otimes n}$ in total variational distance), but there is no extra constraint on the joint distribution $P_{U^{n} X^{n} V^{n}}$.
On the other hand, the structure of our setting in \eqref{UVotimesX} is different and requires the control of the joint distribution $P_{U^{n} X^{n} V^{n}}$
which has to be $\varepsilon$-close in total variational distance to the i.i.d. distribution $\bar P_{UV}^{\otimes n} \bar P_{X}^{\otimes n}$.
Since we impose a more stringent constraint, more common randomness is required.
\begin{oss}
We found $\mathcal R_{UV\otimes X}$ as the intersection of two regions,
but we can give it the following interpretation starting from $\mathcal{R}_{\text{Cuff}}$.
By identifying $R = H(X)$ in $\mathcal{R}_{\text{Cuff}}$, we find that the rate of common randomness has to be greater than $I(UV;W)-H(X)$.
But this is not enough to ensure that $X^n$ is independent of $(U^n,V^n)$. In order to guarantee that, we apply a one-time pad on $X^n$
(which requires an amount of fresh randomness equal to $H(X)$) and we have
$$R_0 \geq I(UV;W)-H(X)+H(X)= I(UV;W)$$
which is the condition on the rate of common randomness in \eqref{UVotimesX}.
\end{oss}
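The one-time-pad argument in the remark relies on the Crypto Lemma: padding with a uniform key, independent of the input, yields an output that is uniform and independent of the input. A minimal numerical illustration (the input pmf is hypothetical):

```python
import numpy as np

pX = np.array([0.9, 0.1])   # arbitrary (hypothetical) input pmf on {0, 1}
pK = np.array([0.5, 0.5])   # uniform one-time-pad key, independent of X

# Joint pmf of (X, Y) with Y = X xor K
pXY = np.zeros((2, 2))
for x in range(2):
    for k in range(2):
        pXY[x, x ^ k] += pX[x] * pK[k]

pY = pXY.sum(axis=0)
assert np.allclose(pY, [0.5, 0.5])         # the padded output is uniform
assert np.allclose(pXY, np.outer(pX, pY))  # and independent of the input
```

The same computation with a vector-valued $X$ would show that padding $X^n$ with $H(X)$ bits of fresh randomness per symbol makes it independent of $(U^n,V^n)$, as claimed.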
The following example shows that, unlike the case of empirical coordination \cite{treust2017joint},
separation does not hold for strong coordination.
\begin{center}
\begin{figure}[ht!]
\centering
\includegraphics[scale=0.44]{./comparisoncuff4.pdf}
\caption{Comparison of the joint coordination region $\mathcal{R}_{UV \otimes X}$ with $\mathcal{R}_{\text{Cuff}}$ \cite{cuff2013distributed, cuff2008communication}:
boundaries of the regions for a binary erasure
channel with erasure probability $p_e = 0.75$ and a Bernoulli-half input.}
\label{fig: plotcuff}
\end{figure}
\end{center}
\vspace{-0.6cm}
\begin{exe}
The difference in terms of rate of common randomness $R_0$ is better shown in an example:
when separately coordinating the two blocks $X^n$ and $(U^n,V^n)$ without imposing a joint behavior
$P_{U^n V^n X^n}$, the same bits of common randomness can be reused for both purposes, and the required rate $R_0$ is lower.
We consider the case, already analyzed in \cite{cuff2013distributed, cuff2008communication}, of a Bernoulli-half source $U$,
and $V$ which is an erasure with probability $p_e$ and is equal to $U$ otherwise.
In \cite{cuff2013distributed} the authors prove that the optimal choice for the joint distribution $\bar P_{UWV}$
is the concatenation of two erasure channels $\bar P_{W|U}$ and $\bar P_{V|W}$
with erasure probability $p_1$ and $p_2$ respectively.
Then we have
\begin{equation*}
p_2 \in [0, \min\{1/2; p_e\}],\quad
p_1= 1-\frac{1-p_e}{1-p_2}
\end{equation*}
and therefore we obtain
{\allowdisplaybreaks
\begin{align*}
& I(U;W)=1-p_1, \qquad I(UV;W)=h(p_e) + (1 - p_1 )(1 - h(p_2 ))
\end{align*}}where $h$ is the binary entropy function.
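The closed-form expressions above can be checked numerically by constructing the joint distribution of $(U,W,V)$ as the concatenation of the two erasure channels. The following sketch (with $p_e=0.75$ and an illustrative choice of $p_2$) verifies $I(U;W)=1-p_1$, $I(UV;W)=h(p_e)+(1-p_1)(1-h(p_2))$, and that $V$ is an erasure with probability $p_e$:

```python
import numpy as np

def H(p):
    q = p.ravel()
    q = q[q > 1e-15]
    return float(-(q * np.log2(q)).sum())

def h(x):
    # binary entropy function
    return 0.0 if x in (0.0, 1.0) else float(-x*np.log2(x) - (1-x)*np.log2(1-x))

pe, p2 = 0.75, 0.3                # p2 in [0, min{1/2, pe}]
p1 = 1 - (1 - pe) / (1 - p2)

E = 2                             # erasure symbol
p = np.zeros((2, 3, 3))           # joint pmf over (U, W, V)
for u in range(2):
    for (w, pw) in [(u, 1 - p1), (E, p1)]:
        if w == E:
            p[u, w, E] += 0.5 * pw                 # second channel maps e to e
        else:
            p[u, w, w] += 0.5 * pw * (1 - p2)
            p[u, w, E] += 0.5 * pw * p2

pU, pW = p.sum(axis=(1, 2)), p.sum(axis=(0, 2))
pUV = p.sum(axis=1)
I_UW  = H(pU) + H(pW) - H(p.sum(axis=2))
I_UVW = H(pUV) + H(pW) - H(p)
assert abs(I_UW - (1 - p1)) < 1e-9
assert abs(I_UVW - (h(pe) + (1 - p1) * (1 - h(p2)))) < 1e-9
assert abs(pUV[:, E].sum() - pe) < 1e-9   # V is an erasure w.p. pe
```

Sweeping $p_2$ over $[0, \min\{1/2, p_e\}]$ with this construction reproduces the boundary curves of Figure \ref{fig: plotcuff}.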
Figure \ref{fig: plotcuff} shows the boundaries of the regions \eqref{region cuff} (blue) and \eqref{UVotimesX} (green) for $p_e = 0.75$ and a Bernoulli-half input.
The dotted bound $R \geq I(U;V)$ comes directly from combining $R \geq I(U;W)$ with the Markov chain $U-W-V$.
At the other extreme, if $R_0 = 0$ in \eqref{region cuff}, $R+R_0 \geq I(UV;W)\geq C(U;V) $ where
$C(U;V):= \min_{U-W-V} I(UV;W)$ is Wyner's common information \cite{cuff2008communication}.
On the other hand, in our setting \eqref{UVotimesX}, $R_0\geq I(UV;W)\geq C(U;V)$ for any value of $R=H(X)$.
Moreover, note that as $R=H(X)$ tends to infinity, there is no constraint on the auxiliary random variable $W$ (aside from the Markov chain $U-W-V$)
and similarly to \cite{lapidoth2016conditional} the minimum rate of common randomness $R_0$ needed for strong coordination is Wyner's common information $C(U;V)$.
In particular, a positive rate of common randomness is required to achieve joint strong coordination of $(U,X,V)$.
The boundaries of the rate regions only coincide on one extreme, and $\mathcal R_{UV\otimes X}$ is strictly contained in $\mathcal{R}_{\text{Cuff}}$.
\end{exe}
\section{Polar coding schemes for strong coordination with no state and side information}\label{sec: polarcoding}
Although our achievability results shed some light on the fundamental limits of coordination over noisy channels,
the problem of designing practical codes for strong coordination in this setting is still open.
In this section we focus on channels without state and side information for simplicity,
and we show that the coordination region of Theorem \ref{teoisit} is achievable using polar codes, if
an error-free channel of negligible rate is available between the encoder and decoder.
We note that polar codes have already been proposed for coordination in other settings:
\cite{blasco-serrano2012} proposes polar coding schemes for point-to-point empirical coordination with error-free links and uniform actions,
while \cite{chou2015coordination} generalizes the polar coding scheme to the case of non-uniform actions.
Polar coding for strong point-to-point coordination has been presented in \cite{bloch2012strong, chou2016soft}.
In \cite{obead2017joint} the authors construct a joint coordination-channel polar coding scheme for strong coordination of actions.
We present a joint source-channel polar coding scheme for strong coordination that jointly coordinates signals and actions over a noisy channel.
For brevity, we only focus on the set of achievable distributions in $\mathcal R'_{\text{in}}$ for which the auxiliary variable $W$ is binary.
The scheme can be extended to the case of a non-binary random variable $W$ using non-binary polar codes \cite{csacsouglu2012polar}.
\begin{teo}\label{polarregion}
The subset of the region $\mathcal R'_{\text{in}}$ defined in \eqref{eq: regionisit} for which
the auxiliary random variable $W$ is binary is achievable using polar codes,
provided there exists an error-free channel of negligible rate between the encoder and decoder.
\end{teo}
\vspace{0.2cm}
To convert the information-theoretic achievability proof of Theorem \ref{teoisit} into a polar coding proof,
we use source polarization \cite{arikan2010source} to induce the desired joint distribution.
Inspired by \cite{chou2015polar}, we want to translate the random binning scheme into a polar coding scheme.
The key idea is that the information constraints and rate conditions found in the random binning proof directly convert into the definition of the polarization sets.
While in the random binning scheme we reduced the amount of common randomness $F$ by having the nodes agree on an instance of $F$,
here we recycle some common randomness using a chaining construction as in \cite{hassani2014universal,mondelli2015achieving}.
Consider random vectors $U^{n}$, $W^{n}$, $X^{n}$, $Y^{n}$ and $V^{n}$
generated i.i.d. according to $\bar P_{UWXYV}$ that satisfies the inner bound of \eqref{eq: regionisit}.
For $n=2^m$, $m \in \mathbb N$, we denote by $G_n:= \begin{footnotesize}
\begin{bmatrix}
1 & 0\\
1 & 1
\end{bmatrix}^{\otimes m}
\end{footnotesize}$ the source polarization transform defined in \cite{arikan2010source}.
Let $R^{n}:=W^{n}G_n$ be the polarization of $W^{n}$.
For some $0<\beta<1/2$, let $\delta_n = 2^{-n^ {\beta}}$ and define the very high entropy and high entropy sets:
{\allowdisplaybreaks
\begin{align}\label{eq: hv}
\begin{split}
\V_{W}: & =\left\{j\in\llbracket 1,n\rrbracket:H(R_j|R^{j-1})>1-\delta_n \right\},\\
\V_{W | U}: & =\left\{j\in\llbracket 1,n\rrbracket:H(R_j|R^{j-1} U^{n})>1-\delta_n \right\}, \\
\V_{W | Y}: & =\left\{j\in\llbracket 1,n\rrbracket : H(R_j|R^{j-1} Y^{n})>1-\delta_n \right\},\\
\h_{W | Y}: & =\left\{j\in\llbracket 1,n\rrbracket:H(R_j|R^{j-1} Y^{n})>\delta_n \right\} .
\end{split}
\end{align}}
Now define the following disjoint sets:
{\allowdisplaybreaks
\begin{align*}
A_1 := \V_{W|U} \cap \h_{W|Y} , \quad & A_2 : = \V_{W|U} \cap \h_{W|Y}^c,\\
A_3 := \V_{W|U}^c \cap \h_{W|Y} ,\quad
& A_4 := \V_{W|U}^c \cap \h_{W|Y}^c.
\end{align*}}
\begin{oss}\label{oss card}
We have:
\begin{itemize}
\item $\V_{W | Y} \subset \h_{W | Y}$ and $ \lim_{n \rightarrow \infty} \frac{\lvert \h_{W | Y} \setminus \V_{W|Y} \rvert}{n} = 0$ \cite{arikan2010source},
\item $\lim_{n \rightarrow \infty} \frac{\lvert \V_{W|U} \rvert}{n} = H(W|U)$ \cite {chou2015secretkey},
\item $ \lim_{n \rightarrow \infty} \frac{\lvert \h_{W | Y} \rvert}{n} = H(W|Y)$ \cite{arikan2010source}.
\end{itemize}
Since $H(W|U)- H(W|Y) = I(W;Y)- I(W;U),$ for sufficiently large $n$
the assumption $I(W;Y) \geq I(W;U)$ directly implies that $\lvert A_2 \rvert \geq \lvert A_3 \rvert$.
\end{oss}
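The counting argument in the remark rests on the identity $H(W|U)-H(W|Y)=I(W;Y)-I(W;U)$, which can be checked numerically on a randomly generated joint pmf (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(1)

def H(p):
    q = p.ravel()
    q = q[q > 1e-15]
    return float(-(q * np.log2(q)).sum())

# Random joint pmf over (W, U, Y); axes 0, 1, 2
p = rng.random((2, 3, 3)); p /= p.sum()
pW, pU, pY = p.sum(axis=(1, 2)), p.sum(axis=(0, 2)), p.sum(axis=(0, 1))

H_W_given_U = H(p.sum(axis=2)) - H(pU)   # H(W,U) - H(U)
H_W_given_Y = H(p.sum(axis=1)) - H(pY)   # H(W,Y) - H(Y)
I_WY = H(pW) + H(pY) - H(p.sum(axis=1))
I_WU = H(pW) + H(pU) - H(p.sum(axis=2))
assert abs((H_W_given_U - H_W_given_Y) - (I_WY - I_WU)) < 1e-9
```

Hence, whenever $I(W;Y) \geq I(W;U)$, the asymptotic sizes of $\V_{W|U}$ and $\h_{W|Y}$ satisfy $\lvert A_2 \rvert \geq \lvert A_3 \rvert$ for $n$ large enough, as stated.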
\vspace{0.2cm}
\paragraph*{Encoding}
The encoder observes $k$ blocks of the source $U^{n}_{(1:k)}:=(U^{n}_{(1)}, \ldots, U^{n}_{(k)})$ and
generates for each block $i\in \llbracket 1,k\rrbracket$ a random variable $\widetilde R^{n}_{(i)}$ following the procedure described in Algorithm \ref{algcnc}. Similarly to \cite{Cervia2016}, the chaining construction proceeds as follows:
\begin{itemize}
\item let $A'_1:=\mathcal V_{W|UXYV}$ and observe that $A'_1$ is a subset of $ A_1$ since
$\mathcal V_{W|UXYV} \subset \mathcal V_{W|U}$ and
$\mathcal V_{W|UXYV} \subset \mathcal V_{W|Y} $ $\subset \mathcal H_{W|Y}$.
The bits in $A'_1 \subset \V_{W|U}$ in block $i \in \llbracket 1,k\rrbracket$ are chosen with uniform probability using a uniform randomness
source $\bar C'$ shared with the decoder, and their
value is reused over all blocks;
\item the bits in $A_1 \setminus A'_1 \subset \V_{W|U}$ in block $i \in \llbracket 1,k\rrbracket$ are chosen with uniform probability using a
uniform randomness source $\bar C_{i}$ shared with the decoder;
\item in the first block the bits in $A_2 \subset \V_{W|U}$ are chosen with uniform probability using a local randomness source $M$;
\item for the following blocks, let $A'_3$ be a subset of $A_2$ such that $\lvert A'_3 \rvert= \lvert A_3 \rvert$. The bits of $A_3$ in block $i$ are sent to $A'_3$ in block $i+1$ using a one-time pad with key $C_{i}$. Thanks to the Crypto Lemma
\cite[Lemma 3.1]{bloch2011physical}, if we choose $C_{i}$ of size $\lvert A_3 \rvert$ to be a uniform random key, the bits in $A'_3$ in block $i+1$ are uniform. The bits in $A_2 \setminus A'_3$ are chosen with uniform probability using the local randomness source $M$;
\item the bits in $A_3$ and in $A_4$ are generated according to the previous bits using successive cancellation
encoding as in \cite{arikan2010source}. Note that it is possible to sample efficiently from
$\bar P_{R_i \mid R^{i-1} U^{n}}$ given $U^{n}$ \cite{arikan2010source}.
\end{itemize}
As in \cite{chou2015polar},
to deal with unaligned indices, chaining also requires transmitting $R_{(k)}[A_{3}]$ to the decoder in the last encoding block.
Hence the coding scheme requires an error-free channel between the encoder and decoder which has negligible rate
since $\lvert R_{(k)}[A_{3}] \rvert \leq \lvert \h_{W | Y} \rvert$ and
\begin{equation*}
\lim_{n \rightarrow \infty \atop k \rightarrow \infty} \frac{\lvert \h_{W|Y}\rvert}{kn} = \lim_{k \rightarrow \infty} \frac{H(W|Y)}{k} =0.
\end{equation*}
The encoder then computes $\widetilde W^{n}_{(i)}= \widetilde R^{n}_{(i)} G_n$ for $i=1, \ldots, k$ and generates $X^{n}_{(i)}$
symbol by symbol from $\widetilde W^{n}_{(i)}$ and $U^{n}_{(i)}$ using the conditional distribution
$$\bar P_{X_{j,(i)} |\widetilde W_{j,(i)} U_{j,(i)}}(x|\widetilde w_{j,(i)}, u_{j,(i)})=\bar P_{X|WU} ( x | w_{j,(i)}, u_{j,(i)})$$ and sends $X^{n}_{(i)}$
over the channel.
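The chaining mechanism can be illustrated with a toy simulation that abstracts away the polar transform: it only tracks the bits in $A_3$ and $A'_3$, assumes every block is decoded correctly, and checks that the decoder, working backwards from $R_{(k)}[A_3]$, recovers the $A_3$ bits of every block by inverting the one-time pads. The block count and set sizes below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
k, m = 4, 5                      # k blocks, |A3| = |A'_3| = m bits (toy sizes)

keys = rng.integers(0, 2, (k - 1, m))           # one-time-pad keys C_1..C_{k-1}
a3 = [rng.integers(0, 2, m) for _ in range(k)]  # bits of A3 in each block

# Encoder: A'_3 of block i+1 carries A3 of block i, padded with key C_i
a3p = [None] + [a3[i] ^ keys[i] for i in range(k - 1)]

# Decoder: receives R_(k)[A3] over the error-free channel, then goes backwards,
# inverting the one-time pad to recover A3 of each earlier block
rec = [None] * k
rec[k - 1] = a3[k - 1]                          # last block: sent directly
for i in range(k - 2, -1, -1):
    rec[i] = a3p[i + 1] ^ keys[i]
assert all((rec[i] == a3[i]).all() for i in range(k))
```

In the actual scheme the recovered $A_3$ bits, together with the shared randomness filling $A_1$, give the decoder access to $\widehat R_{(i)}[\h_{W|Y}]$ in every block.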
\begin{center}\label{fig:kblorecc}
\begin{figure}[ht!]
\centering
\includegraphics[scale=0.9]{./kblocchireccc.pdf}
\caption{Chaining construction for block Markov encoding.}
\end{figure}
\end{center}
\vspace{-0.7cm}
\begin{algorithm}[ht!]\label{algcnc}
\begin{small}
\DontPrintSemicolon
\SetAlgoLined
\SetKwInOut{Input}{Input}
\SetKwInOut{Output}{Output}
\Input{ $(U^{n}_{(1)}, \ldots, U^{n}_{(k)})$, $M$ local randomness (uniform random bits),
common randomness $(\bar C', \bar C_{1:k},C_{1:k-1})$ shared with the decoder, where
$\bar C'$ has size $\lvert A'_1 \rvert$,
$\bar C_{1:k}$ has size $k \lvert A_1 \setminus A'_1 \rvert$,
$C_{1:k-1}$ has size $(k-1) \lvert A_3 \rvert$.}
\Output{$\left( \widetilde R_{(1)}^n, \ldots, \widetilde R_{(k)}^n \right)$}
\If{$i=1$}{
$\widetilde R_{(1)}[A'_1] \longleftarrow \bar C' \qquad \widetilde R_{(1)}[A_1 \setminus A'_1] \longleftarrow \bar C_{1} \qquad \widetilde R_{(1)}[A_2] \longleftarrow M $\;
\For{$j \in A_{3} \cup A_{4}$}{
Given $U^{n}_{(1)}$, successively choose the bits $\widetilde R_{j,(1)}$ according to
\begin{equation} \label{eq: p1}
\bar P_{R_j \mid R^{j-1} U^{n}} \left(\widetilde R_{j,(1)}\mid \widetilde R^{j-1}_{(1)} U^{n}_{(1)} \right)
\end{equation}
}
}
\For{$i=2, \ldots, k$}{
$\widetilde R_{(i)}[A'_1] \longleftarrow \bar C' \qquad \widetilde R_{(i)}[A_1 \setminus A'_1] \longleftarrow \bar C_i $\;
$ \widetilde R_{(i)}[A'_3] \longleftarrow \widetilde R_{(i-1)}[A_3] \oplus C_{i-1} \quad \widetilde R_{(i)}[A_2 \setminus A'_3] \longleftarrow M $ \;
\For{$j \in A_{3} \cup A_{4}$}{
Given $U^{n}_{(i)}$, successively choose the bits $\widetilde R_{j,(i)}$ according to
\begin{equation} \label{eq: p2}
\bar P_{R_j \mid R^{j-1}U^{n}} \left(\widetilde R_{j,(i)} \mid \widetilde R_{(i)}^{j-1} U^{n}_{(i)} \right)
\end{equation}
}
}
\BlankLine
\caption{Encoding}
\end{small}
\end{algorithm}
\paragraph*{Decoding}
The decoding procedure, described in Algorithm \ref{algdecnc}, proceeds as follows.
The decoder observes $(Y^{n}_{(1)}, \ldots, Y^{n}_{(k)})$ and $R_{(k)}[A_3]$
which allows it to decode in reverse order.
We denote by $\widehat R^n_{(i)}$ the estimate of $R^n_{(i)}$ at the decoder, for $i \in \llbracket 1,k\rrbracket$.
In block $i \in \llbracket 1,k\rrbracket$, the decoder has access to $\widehat R_{(i)}[A_{1} \cup A_{3}]= \widehat R_{(i)}[\h_{W \mid Y}]$:
\begin{itemize}
\item the bits in $A_1$ in block $i$ correspond to shared randomness $\bar C'$ and $\bar C_i$ for $A'_1$ and
$A_1 \setminus A'_1$ respectively;
\item in block $i \in \llbracket 1, k-1\rrbracket$, the bits in $A_3$ are obtained from the bits in $A'_3 \subset A_2$ recovered in block $i+1$.
\end{itemize}
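The recovery of the bits in $A_3$ works like a one-time pad: the encoder embeds $R_{(i)}[A_3] \oplus C_i$ in the positions $A'_3$ of block $i+1$, and the decoder cancels the key. A minimal Python sketch with illustrative sizes:

```python
import random

rng = random.Random(1)
n_bits = 8  # illustrative size for the set A_3

# Bits R_(i)[A_3] produced by the encoder in block i.
r_i_A3 = [rng.randrange(2) for _ in range(n_bits)]
# Uniform key C_i of the same size, shared with the decoder.
c_i = [rng.randrange(2) for _ in range(n_bits)]

# Encoder: embed R_(i)[A_3] XOR C_i in the positions A'_3 of block i+1.
r_next = [a ^ b for a, b in zip(r_i_A3, c_i)]

# Decoder: after recovering block i+1, cancel the key to get R_(i)[A_3] back.
recovered = [a ^ b for a, b in zip(r_next, c_i)]
assert recovered == r_i_A3
```

Since $C_i$ is uniform and independent of everything else, the embedded bits are themselves uniform, which is the property invoked at the beginning of this section.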
\paragraph*{Rate of common randomness}
The rate of common randomness is $I(W;UXV|Y)$ since:
{\allowdisplaybreaks
\begin{align*}
&\lim_{n \rightarrow \infty \atop k \rightarrow \infty} \frac{ k \lvert A_1 \rvert - (k-1) \lvert A'_1 \rvert + (k-1) \lvert A_3 \rvert}{kn} = \lim_{n \rightarrow \infty}\frac{ \lvert A_1 \rvert + \lvert A_3 \rvert - \lvert A'_1 \rvert}{n}\\
&= H(W|Y)- H(W|UXYV) =I(W;UXV|Y).
\end{align*}
}
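The last equality uses the chain-rule identity $I(W;UXV|Y)=H(W|Y)-H(W|UXYV)$, which can be sanity-checked numerically on an arbitrary joint distribution; the following Python sketch uses a random joint over binary alphabets (an illustrative choice):

```python
import itertools, math, random

rng = random.Random(0)

# A random joint distribution over (U, W, X, Y, V) with binary alphabets
# (an illustrative choice; the identity holds for any joint distribution).
support = list(itertools.product(range(2), repeat=5))
weights = [rng.random() for _ in support]
total = sum(weights)
p = dict(zip(support, [w / total for w in weights]))

U, W, X, Y, V = range(5)

def entropy(coords):
    """Entropy in bits of the marginal on the given coordinates."""
    marg = {}
    for a, pa in p.items():
        key = tuple(a[i] for i in coords)
        marg[key] = marg.get(key, 0.0) + pa
    return -sum(q * math.log2(q) for q in marg.values() if q > 0)

def H_cond(a, b):  # H(A|B) = H(A,B) - H(B)
    return entropy(a + b) - entropy(b)

# I(W; UXV | Y) written as a difference of conditional entropies...
lhs = H_cond([W], [Y]) - H_cond([W], [U, X, Y, V])
# ...equals the usual expansion of the conditional mutual information.
rhs = H_cond([W], [Y]) + H_cond([U, X, V], [Y]) - H_cond([W, U, X, V], [Y])
assert abs(lhs - rhs) < 1e-12
```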
\begin{algorithm}[ht!]\label{algdecnc}
\begin{small}
\DontPrintSemicolon
\SetAlgoLined
\SetKwInOut{Input}{Input}
\SetKwInOut{Output}{Output}
\Input{$(Y^{n}_{(1)}, \ldots, Y^{n}_{(k)})$, $R_{(k)}[A_3]$ shared with the encoder,
common randomness $(\bar C', \bar C_{1:k},C_{1:k-1})$ shared with the encoder, where $\bar C'$ has size $\lvert A'_1 \rvert$,
$\bar C_{1:k}$ has size $k \lvert A_1 \setminus A'_1 \rvert$ and $C_{1:k-1}$ has size $(k-1) \lvert A_3 \rvert$.}
\Output{$(\widehat R_{(1)}^n, \ldots, \widehat R_{(k)}^n)$}
\For{$i=k, \ldots, 1$}{
$\widehat R_{(i)}[A'_1] \longleftarrow \bar C' \qquad \widehat R_{(i)}[A_1 \setminus A'_1] \longleftarrow \bar C_i$\;
\If{$i=k$}{$\widehat R_{(i)}[A_3] \longleftarrow R_{(k)}[A_3]$ transmitted by the encoder}
\Else{$\widehat R_{(i)}[A_3] \longleftarrow \widehat R_{(i+1)}[A'_3] \oplus C_{i}$\;}
\For{$j \in A_{2} \cup A_{4}$}{ Successively choose the bits according to
$\widehat R_{j,(i)} = \begin{cases}
0 \quad \mbox{if } L_n(Y_{(i)}^n,\widehat R_{(i)}^{j-1}) \geq 1\\
1 \quad \mbox{else}
\end{cases}$\;
where
$$L_n(Y_{(i)}^n,\widehat R_{(i)}^{j-1}) = \frac{\bar P_{R_{j,(i)} \mid R_{(i)}^{j-1} Y_{(i)}^n}\left(0 \mid \widehat R_{(i)}^{j-1} Y_{(i)}^n \right) }{\bar P_{R_{j,(i)} \mid R_{(i)}^{j-1} Y_{(i)}^n}\left(1 \mid \widehat R_{(i)}^{j-1}Y_{(i)}^n \right)}$$
}
}
\BlankLine
\caption{Decoding}
\end{small}
\end{algorithm}
\paragraph*{Proof of Theorem \ref{polarregion}}
We denote by $\widetilde{P}$ the joint distribution induced by the encoding and decoding algorithms of the previous sections.
The proof requires a few steps, here presented as different lemmas. The proofs are in Appendix \ref{appendix polar}.
First, we want to show that we have strong coordination in each block.
\begin{lem}\label{scblock}
In each block $i \in \llbracket 1,k\rrbracket$, we have
\begin{equation}\label{1block}
\tv\left(\widetilde P_{U^{n}_{(i)} \widetilde W_{(i)}^n X_{(i)}^n Y_{(i)}^n V_{(i)}^n}, \bar P_{UWXYV}^{\otimes n}\right) \leq \delta_n^{(1)}
\end{equation}
where $\delta_n^{(1)}:= 2 \mathbb P \left\{\widehat W_{(i)}^n \neq \widetilde W_{(i)}^n \right\}+ \sqrt{2 \log 2} \sqrt{n \delta_n}$.
\end{lem}
Now, we want to show that two consecutive blocks are almost independent.
To simplify the notation, we set
{\allowdisplaybreaks
\begin{equation*}
\begin{array}{ll}
L:=U^{n} X^{n} Y^{n} V^{n} &\\
L_{i}:=U^{n}_{(i)} X_{(i)}^n Y_{(i)}^n V_{(i)}^n & i \in \llbracket 1,k\rrbracket\\
L_{a:b}:=U^{n}_{(a:b)} X^{n}_{(a:b)} Y^{n}_{(a:b)} V^{n}_{(a:b)} & \llbracket a,b\rrbracket \subset \llbracket 1,k\rrbracket
\end{array}
\end{equation*}
}
\begin{lem} \label{step2a'}
For $i \in \llbracket 2,k\rrbracket$, we have
$$\tv \left( \widetilde P_{L_{i-1:i} \bar C'} , \widetilde P_{ L_{i-1}\bar C'} \widetilde P_{L_{i}}\right) \leq \delta_n^{(3)}$$
where $ \delta_n^{(3)}:= \sqrt{2 \log 2} \sqrt{n \delta_n + 2 \delta_n^{(1)} (\log{\lvert \mathcal U \times \mathcal X \times \mathcal W \times \mathcal Y \times \mathcal V \rvert}-\log{\delta_n^{(1)}})}$
and $\delta_n^{(1)}$ is defined in Lemma \ref{scblock}.
\end{lem}
Now that we have proven the asymptotic independence of two consecutive blocks,
we use Lemma \ref{step2a'} to prove the asymptotic independence of all blocks. First, we need an intermediate step.
\vspace*{0.3cm}
\begin{lem}\label{step2b'}
We have
$$ \tv \left( \widetilde P_{ L_{1:k}}, \prod_{i=1}^k \widetilde P_{ L_{i}} \right) \leq \sqrt{k-1} \delta_n^{(3)}$$
where $\delta_n^{(3)} $ is defined in Lemma \ref{step2a'}.
\end{lem}
\vspace*{0.3cm}
Finally, we prove the asymptotic independence of all blocks.
\begin{lem}\label{step2c'}
We have
$$ \tv \left( \widetilde P_{L_{1:k}}, \bar P_{UXYV}^{\otimes nk}\right) \leq \delta_n^{(5)}$$
where $ \delta_n^{(5)}:=\sqrt{k} (\delta_n^{(3)} + \delta_n^{(2)})$ and
$\delta_n^{(2)}$ and $\delta_n^{(3)}$ are defined in \eqref{emp2} and Lemma \ref{step2a'} respectively.
\end{lem}
\section{Conclusions and perspectives}\label{sec: conclusions}
In this paper we have developed an inner and an outer bound for the strong coordination region when the input and output signals
have to be coordinated with the source and reconstruction.
Although we have fully characterized the region in some special cases in Section \ref{sec: special cases},
the inner and outer bounds differ in general in the information constraint. Closing this gap is left for future study.
The polar coding proof in Section \ref{sec: polarcoding}, though it provides an explicit coding scheme, relies on a chaining construction over several blocks,
which is not practical for delay-constrained applications. This is another issue that may be studied further.
Some important questions have not been addressed in this study and are left for future work. By coordinating signals and
actions, the synthesized sequences would appear to be statistically indistinguishable from i.i.d. to an outside observer.
As suggested in the example in Section \ref{secrecy}, this property could be exploited in a more general setting where two legitimate nodes wish to coordinate while concealing their actions from an eavesdropper who observes the signals sent over the channel.
Moreover, our results could be extended to a strategic coordination setting. This represents a scenario where the objectives of the two agents are not necessarily aligned, and has been investigated for empirical coordination in \cite{treust2016information, LeTreustTomala17}.
\appendices
\section{Proof of preliminary results} \label{appendix prel}
\begin{IEEEproof}[Proof of Lemma \ref{lemmit}]
We have $$I(A_t;A_{\sim t})= H(A_t)-H(A) + H(A)-H(A_t|A_{\sim t})$$ and we prove separately that
{\allowdisplaybreaks
\begin{align*}
& H(A)-H(A_t|A_{\sim t}) \leq f(\varepsilon),\\
& H(A_t)-H(A) \leq f(\varepsilon).
\end{align*}}
First, we need two results.
\begin{lem}[$\mbox{\cite[Lemma 2.7]{csiszar2011information}}$]\label{csi2.7}
Let $P$ and $Q$ be two distributions on $\mathcal A $
such that
$\tv(P,Q) = \varepsilon$ and $\varepsilon \leq 1/2$, then
\vspace{-0.3cm}
\begin{equation*}
\lvert H(P)-H(Q) \rvert \leq \varepsilon \log{\frac{\lvert \mathcal A \rvert}{\varepsilon}}.
\end{equation*}
\end{lem}
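The bound of Lemma \ref{csi2.7} can be sanity-checked numerically. In the Python sketch below, $\tv$ is taken to be the unnormalized $L_1$ distance (consistent with the Pinsker constant $\sqrt{2\log 2}$ used elsewhere in the paper), and the alphabet size and perturbation are illustrative choices:

```python
import math, random

rng = random.Random(3)

def entropy(p):
    """Entropy in bits."""
    return -sum(q * math.log2(q) for q in p if q > 0)

def l1(p, q):
    return sum(abs(a - b) for a, b in zip(p, q))

# A random distribution P on an alphabet of size 8 and a small perturbation Q
# of it, keeping the L1 distance below 1/2 (illustrative choices).
size = 8
w = [rng.random() for _ in range(size)]
P = [x / sum(w) for x in w]
v = [x * (1 + 0.2 * (rng.random() - 0.5)) for x in P]
Q = [x / sum(v) for x in v]

eps = l1(P, Q)
assert 0 < eps <= 0.5
# Lemma: |H(P) - H(Q)| <= eps * log(|A| / eps).
bound = eps * math.log2(size / eps)
assert abs(entropy(P) - entropy(Q)) <= bound
```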
\begin{lem}[$\mbox{\cite[Lemma 3.2]{yassaee2014achievability}}$]\label{yas3.2'}
If $\tv(P_{A}P_{B|A}, Q_{A}Q_{B|A})\leq \varepsilon$ then
\begin{equation*}
\mathbb P\{A \in \mathcal A | \tv(P_{B|A=a}, Q_{B|A=a}) \leq \sqrt{\varepsilon}\} \geq 1-2 \sqrt{\varepsilon}.
\end{equation*}
\end{lem}
\vspace{0.3cm}
Now, consider the set
$\mathcal E:=\{ \mathbf a \in \mathcal A^{n-1}| \tv(P_{A_t |A_{\sim t}=\mathbf a}, \bar P_{A}) \leq \sqrt{\varepsilon}\}.$
By Lemma \ref{yas3.2'}, $\mathbb P \{\mathcal E\} \geq 1-2 \sqrt{\varepsilon}$.
Then, we have
{\allowdisplaybreaks
\begin{align*}
& H(A)-H(A_t|A_{\sim t}) = H(A) - \!\!\! \!\!\!\sum_{\mathbf a \in \mathcal A^{n-1}} \!\! \!\! P_{A_{\sim t}}(\mathbf a) H(A_t|A_{\sim t}= \mathbf a) = \!\!\!\! \sum_{\mathbf a \in \mathcal A^{n-1}} \!\!\!\! \left(P_{A_{\sim t}}(\mathbf a) H(A)\! -\! P_{A_{\sim t}}(\mathbf a) H(A_t|A_{\sim t}= \mathbf a) \right)\\
& = \sum_{\mathbf a \in \mathcal E} \left( P_{A_{\sim t}}(\mathbf a) H(A)- P_{A_{\sim t}}(\mathbf a) H(A_t|A_{\sim t}= \mathbf a)\right) + \sum_{\mathbf a \in \mathcal E^c} \left( P_{A_{\sim t}}(\mathbf a) H(A)- P_{A_{\sim t}}(\mathbf a) H(A_t|A_{\sim t}= \mathbf a)\right) \stepcounter{equation}\tag{\theequation}\label{part1}\\
& \overset{(a)}{\leq} \sum_{\mathbf a \in \mathcal E} P_{A_{\sim t}}(\mathbf a) \delta + \mathbb P \{\mathcal E^c\} \left(H(A_t)+H(A)\right) \leq \delta + 2 \sqrt{\varepsilon} \left(2 H(A) + \delta \right)
\end{align*}}where $(a)$ comes from the fact that by Lemma \ref{csi2.7} for $\mathbf a \in \mathcal E$
\begin{equation*}
\lvert H(A_t|A_{\sim t}=\mathbf a)-H(A) \rvert \leq \varepsilon \log{\frac{\lvert \mathcal A \rvert}{\varepsilon}}:=\delta.
\end{equation*}
Lemma \ref{csi2.7} also implies that
\begin{equation}\label{part2}
\lvert H(A_t)-H(A) \rvert \leq \delta.
\end{equation}
Hence by \eqref{part1} and \eqref{part2}, we have $I(A_t;A_{\sim t}) \leq 2 \sqrt{\varepsilon} (2H(A)+\delta)+ 2\delta$.
\end{IEEEproof}
\vspace{0.2cm}
\begin{IEEEproof}[Proof of Lemma \ref{lemab}]
The proof of \eqref{lemab1} comes directly from Lemma \ref{lemmit}:
\begin{equation}\label{eqab}
\sum_{t=1}^{n} I(A_t;A^{t-1} B_{\sim t}|B_t)
\leq \sum_{t=1}^{n} I(A_t;A_{\sim t} B_{\sim t}|B_t)
\leq \sum_{t=1}^{n} I(A_t B_t;A_{\sim t} B_{\sim t}) \leq n f(\varepsilon).
\end{equation}
To prove \eqref{lemab2}, we have
{\allowdisplaybreaks
\begin{align*}
H(C|B^{n}) &\geq I(A^{n}; C|B^{n}) = \sum_{t=1}^{n} I(A_t;C|A^{t-1} B_{\sim t}B_t)\\
& = \sum_{t=1}^{n} I(A_t;CA^{t-1}B_{\sim t}|B_t)- \sum_{t=1}^{n} I(A_t; A^{t-1} B_{\sim t}|B_t) \\
& \geq \sum_{t=1}^{n} I(A_t;CB_{\sim t}|B_t) - \sum_{t=1}^{n} I(A_t;A^{t-1} B_{\sim t}|B_t)\\
& \overset{(a)}{\geq} \sum_{t=1}^{n} I(A_t;CB_{\sim t}|B_t)-n f(\varepsilon)
= n I(A_T;CB_{\sim T}|B_T T) -n f(\varepsilon) \\
&= n I(A_T;CB_{\sim T} T|B_T ) -n I(A_T; T|B_T) -n f(\varepsilon)\\
& \geq n I(A_T;CB_{\sim T} T|B_T ) -n I(A_T B_T;T) -n f(\varepsilon)
\end{align*}}where $(a)$ comes from \eqref{lemab1}.
\end{IEEEproof}
\section{Proof of Remark \ref{binnings2}} \label{appendix bin}
We want to prove that there exists a fixed binning that satisfies both the conditions in Section \ref{rc gen} and Section \ref{rf gen}.
If we denote by $\mathbb E_{\varphi_1 \varphi_2 }$ and $\mathbb E_{\varphi_2}$ the expected values with respect to the random binnings,
for all $\varepsilon$, there exists $\bar n$ such that $\forall n \geq \bar n$
{\allowdisplaybreaks
\begin{align*}
& \mathbb E_{\varphi_1 \varphi_2 } \left[\tv \left(\bar P^{\varphi_1 \varphi_2}_{ U^{n} S^{n} Z^{n} FC}, Q_{F} Q_{C} \bar P_{U^{n} S^{n} Z^{n}} \right)\right] < \frac{\varepsilon}{2}\\
& \mathbb E_{ \varphi_2} [\tv (\bar P_{U^{n} S^{n} Z^{n} X^{n} Y^{n} V^{n} F}^{ \varphi_2}, Q_{F} \bar P_{U^{n} S^{n} Z^{n} X^{n} Y^{n} V^{n} } )] < \frac{\varepsilon}{2}
\end{align*}}which implies by Markov's inequality
{\allowdisplaybreaks
\begin{align*}
&\mathbb P_{\varphi_1 \varphi_2} \left\{ \tv \left(\bar P^{\varphi_1 \varphi_2}_{U^{n} S^{n} Z^{n} FC}, Q_{F} Q_{C} \bar P_{U^{n} S^{n} Z^{n}}\right) < \varepsilon \right\} > \frac{1}{2} \\
& \mathbb P_{\varphi_2}\{\tv (\bar P_{U^{n} S^{n} Z^{n} X^{n} Y^{n} V^{n} F}^{\varphi_2}, Q_{F} \bar P_{U^{n} S^{n} Z^{n} X^{n} Y^{n} V^{n}} ) < \varepsilon \} > \frac{1}{2}. \stepcounter{equation}\tag{\theequation}\label{min122}
\end{align*}
}
In Sections \ref{rc gen} and \ref{rf gen} we have chosen the binnings $(\varphi'_1, \varphi'_2)$ and $\varphi''_2$ respectively such that
{\allowdisplaybreaks
\begin{align*}
&\lim_{n \to \infty} \tv \left(\bar P^{\varphi'_1 \varphi'_2}_{U^{n} S^{n} Z^{n} FC}, Q_{F} Q_{C} \bar P_{U^{n} S^{n} Z^{n}} \right)=0\\
& \lim_{n \to \infty} \tv (\bar P^{\varphi''_2}_{U^{n} S^{n} Z^{n} X^{n} Y^{n} V^{n} F}, Q_{F} \bar P_{U^{n} S^{n} Z^{n} X^{n} Y^{n} V^{n}})=0.
\end{align*}
}
It follows from \eqref{min122} that the intersection of the two sets is non-empty, therefore there exists a binning
$\varphi_2^{*}$ that satisfies both conditions.
\hfill \IEEEQED
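The existence argument only uses the fact that two events, each of probability strictly greater than $1/2$ on the same space, must intersect, since $\mathbb P\{A \cap B\} \geq \mathbb P\{A\} + \mathbb P\{B\} - 1 > 0$. A minimal Python illustration with hypothetical random sets of binnings:

```python
import random

rng = random.Random(7)

# A finite set of candidate "binnings" and two hypothetical properties,
# each holding with empirical probability well above 1/2 (illustrative).
n_binnings = 1000
good_A = {b for b in range(n_binnings) if rng.random() < 0.7}
good_B = {b for b in range(n_binnings) if rng.random() < 0.7}

p_A = len(good_A) / n_binnings
p_B = len(good_B) / n_binnings
assert p_A > 0.5 and p_B > 0.5

# Inclusion-exclusion: P(A and B) >= P(A) + P(B) - 1 > 0, so at least one
# binning satisfies both properties simultaneously.
assert len(good_A & good_B) >= len(good_A) + len(good_B) - n_binnings > 0
```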
\section{Comparison between $I(XUS;YZ)$ and $I(XS;Y)+I(U;Z)$}\label{appendix compare}
Observe that
{\allowdisplaybreaks
\begin{align*}
I(XUS; YZ)& = I(XS;YZ)+I(U; YZ|XS)\\
&\overset{(a)}{=} I(XS;Y) +I(U;Z) + I(SXY;Z) -I(Y;Z) +I(UZ;SXY) - I(SXY;Z)-I(U;XS)\\
& \overset{(b)}{=} I(XS;Y) +I(U;Z) -I(Y;Z) +I(UZ;SX) -I(U;XS)\\
& = I(XS;Y) +I(U;Z) - I(Y;Z) +I(Z;SX|U) +I(U;XS) -I(U;XS)\\
& \overset{(c)}{=} I(XS;Y) +I(U;Z) -I(Y;Z) +I(Z;S|U)
\end{align*}}where $(a)$ follows from basic properties of the mutual information, $(b)$ and $(c)$ from the
Markov chains $ Y-XS-UZ$ and $X-US-Z$ respectively.
If we set $\Delta:=I(Z;S|U)-I(Y;Z)$, then $I(XUS; YZ)= I(XS;Y) +I(U;Z) + \Delta$, where
$\Delta$ may be either positive or negative. For instance:
\begin{itemize}
\item in the special case where $S-U-Z$ holds and $ Y=Z$, $\Delta=-H(Y) \leq 0$,
\item if we suppose $Y$ independent of $Z$, $\Delta=I(Z;S|U) \geq 0$.
\end{itemize}
\hfill \IEEEQED
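The identity $I(XUS;YZ)=I(XS;Y)+I(U;Z)-I(Y;Z)+I(Z;S|U)$ can also be verified numerically on any joint distribution satisfying the two Markov chains. The Python sketch below builds such a joint from random conditionals over binary alphabets (illustrative choices):

```python
import itertools, math, random

rng = random.Random(5)

def rand_dist(n):
    """A random probability vector of length n."""
    w = [rng.random() for _ in range(n)]
    s = sum(w)
    return [x / s for x in w]

# Binary alphabets (illustrative). The joint is built so that the two Markov
# chains used in the derivation hold: X - (U,S) - Z and Y - (X,S) - (U,Z).
p_us = dict(zip(itertools.product(range(2), repeat=2), rand_dist(4)))
p_x_us = {us: rand_dist(2) for us in p_us}
p_z_us = {us: rand_dist(2) for us in p_us}
p_y_xs = {xs: rand_dist(2) for xs in itertools.product(range(2), repeat=2)}

joint = {}
for (u, s), x, z, y in itertools.product(p_us, range(2), range(2), range(2)):
    joint[(u, s, x, z, y)] = (p_us[(u, s)] * p_x_us[(u, s)][x]
                              * p_z_us[(u, s)][z] * p_y_xs[(x, s)][y])

U, S, X, Z, Y = range(5)

def H(coords):
    """Entropy in bits of the marginal on the given coordinates."""
    marg = {}
    for a, pa in joint.items():
        key = tuple(a[i] for i in coords)
        marg[key] = marg.get(key, 0.0) + pa
    return -sum(q * math.log2(q) for q in marg.values() if q > 0)

def I(a, b):
    return H(a) + H(b) - H(a + b)

def I_cond(a, b, c):  # I(A;B|C)
    return H(a + c) + H(b + c) - H(c) - H(a + b + c)

lhs = I([X, U, S], [Y, Z])
delta = I_cond([Z], [S], [U]) - I([Y], [Z])
rhs = I([X, S], [Y]) + I([U], [Z]) + delta
assert abs(lhs - rhs) < 1e-9
```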
\section{Proof of Lemma \ref{ossmc1}.}\label{appendix ossmc1}
To prove that $I(Y_t {Z}_t; C, X_{\sim t} U_{\sim t} S_{\sim t} Y_{\sim t}{Z}_{\sim t}|X_t U_t S_t)=0$, we have
{\allowdisplaybreaks
\begin{align*}
I(Y_t {Z}_t ; C X_{\sim t} S_{\sim t} U_{\sim t}& Y_{\sim t} {Z}_{\sim t} |X_t S_t U_t)\\
& = I({Z}_t; C X_{\sim t} S_{\sim t} U_{\sim t} Y_{\sim t} {Z}_{\sim t}|X_t S_t U_t) + I( Y_t; C X_{\sim t} S_{\sim t} U_{\sim t} Y_{\sim t} {Z}_{\sim t}|X_t S_t U_t {Z}_t )\\
& = I({Z}_t; C X_t X_{\sim t} S_{\sim t} U_{\sim t} Y_{\sim t} {Z}_{\sim t}|S_t U_t)-I({Z}_t; X_t|S_t U_t)\\
&\quad + I(Y_t; C U_t {Z}_t X_{\sim t} S_{\sim t} U_{\sim t} Y_{\sim t} {Z}_{\sim t}|X_t S_t)-I(Y_t;U_t {Z}_t|X_t S_t)\\
& \leq I({Z}_t; C X^{n} S_{\sim t} U_{\sim t} Y_{\sim t} {Z}_{\sim t}|S_t U_t) + I(Y_t; C U^{n} Z^{n} X_{\sim t} S_{\sim t} Y_{\sim t} |X_t S_t)\\
& \leq I({Z}_t; C X^{n} Y^{n} S_{\sim t} U_{\sim t} {Z}_{\sim t}|S_t U_t) + I(Y_t; C U^{n} Z^{n} X_{\sim t} S_{\sim t} Y_{\sim t} |X_t S_t)
\end{align*}}where both $I({Z}_t; C X^{n} Y^{n} S_{\sim t} U_{\sim t} {Z}_{\sim t}|S_t U_t)$ and $I(Y_t; C U^{n} Z^{n} X_{\sim t} S_{\sim t} Y_{\sim t} |X_t S_t)$ are equal to zero because
by \eqref{markov chain general} the following Markov chains hold:
{\allowdisplaybreaks
\begin{align*}
{Z}_t -(U_t, S_t)-(C, X^{n}, Y^{n}, U_{\sim t}, S_{\sim t}, {Z}_{\sim t}), \qquad Y_t-(X_t,S_t)-(C, Z^{n}, U^{n}, X_{\sim t}, S_{\sim t}, Y_{\sim t} ).
\end{align*}
}\hfill \IEEEQED
\vspace{-0.3cm}
\section{Proof of \eqref{byfano}} \label{appendix fano}
Define the event of error $E$ as follows:
\begin{equation*}
E := \begin{cases}
0 \quad \mbox{ if } \quad U^{n} = V^{n}\\
1 \quad \mbox{ if } \quad U^{n} \neq V^{n}
\end{cases}.
\end{equation*}
We denote $p_e:= \mathbb{P} \{U^{n}\neq V^{n}\}$ and recall that by hypothesis the distribution $P_{U^{n} S^{n} Z^{n} X^{n} Y^{n} V^{n}}$ is $\varepsilon$-close
in total variational distance to the i.i.d. distribution
$\bar P_{U S Z X Y V}^{\otimes n}$ for which the decoder is lossless.
Then $\mathbb V (P_{V^{n}}, \mathds 1_{ V =U }^{\otimes n}) < \varepsilon$
and therefore $p_e$ vanishes.
By Fano's inequality \cite{fano1961transmission}, we have
\begin{equation}\label{eqfano}
H( U^{n}|C, Y^{n}, Z^{n}) \leq H_2(p_e)+p_e \log{(\lvert \mathcal U^n\rvert -1)}.
\end{equation}Since $p_e$ vanishes, $H_2(p_e)$ is close to zero and the right-hand side of \eqref{eqfano}, normalized by $n$, goes to zero.
Hence, we have that $H( U^{n}|C, Y^{n}, Z^{n}) \leq n f(\varepsilon)$, where $f(\varepsilon)$
denotes a function which tends to zero as $\varepsilon$ does.
\hfill \IEEEQED
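The vanishing of the normalized Fano bound can be illustrated numerically; the Python sketch below evaluates $(H_2(p_e)+p_e\log(\lvert\mathcal U\rvert^n-1))/n$ for a few illustrative values of $p_e$, $n$ and alphabet size:

```python
import math

def h2(p):
    """Binary entropy in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def fano_rate(p_e, n, alphabet_size):
    """Per-symbol Fano bound (H_2(p_e) + p_e * log(|U|^n - 1)) / n."""
    return (h2(p_e) + p_e * math.log2(alphabet_size ** n - 1)) / n

# As p_e -> 0 the normalized bound vanishes (illustrative values).
n, size = 50, 2
rates = [fano_rate(p, n, size) for p in (1e-1, 1e-2, 1e-3, 1e-4)]
assert all(a > b for a, b in zip(rates, rates[1:]))  # strictly decreasing
assert rates[-1] < 1e-2
```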
\section{Proof of Proposition \ref{teoUVotimesX}} \label{appendix UVotimesX}
\subsection{Achievability}
We show that $\mathcal R_{UV \otimes X}$ is contained in the region $\mathcal R_{\text{PC}}\cap \mathcal R_{\text{SEP}}$ and is thus achievable.
We consider the subset of $\mathcal R_{\text{SEP}}$ corresponding to a perfect channel, i.e., $\bar P_{Y|X}( \mathbf{y}|\mathbf{x})={\mathds 1} \{\mathbf{x}= \mathbf{y} \}$, as the union of all
$\mathcal R_{\text{SEP}}(W)$ with $W=(W_1,W_2)$ that satisfies
{\allowdisplaybreaks
\begin{align*}
&\bar P_{UW_1W_2XV}= \bar P_{U} \bar P_{W_2|U} \bar P_{V| W_2} \bar P_{X} \bar P_{W_1|X},\\
& I(W_1;X) \geq I(W_2;U), \stepcounter{equation}\tag{\theequation}\label{sepmi} \\
& R_0 \geq I(W_2;UV).
\end{align*}
}
\vspace{-0.2cm}
Similarly, $\mathcal R_{\text{PC}}$ is the union of all
$\mathcal R_{\text{PC}}(W)$ with $W$ that satisfies
{\allowdisplaybreaks
\begin{align*}
&\bar P_{UWXV}= \bar P_{U} \bar P_{W|U} \bar P_{X|UW} \bar P_{V|WX},\\
& H(X) \geq I(WX;U), \stepcounter{equation}\tag{\theequation}\label{pcmi} \\
& R_0 \geq I(W;UV|X).
\end{align*}
}If we choose $W=(W_1,W_2)$ and we
add the hypothesis that $(W_2, U,V)$ is independent of $(W_1,X)$, \eqref{pcmi} becomes
{\allowdisplaybreaks
\begin{align*}
&\bar P_{UW_1W_2XV}= \bar P_{U} \bar P_{W_2|U} \bar P_{V| W_2} \bar P_{W_1} \bar P_{X|W_1},\\
& H(X) \geq I(W_1W_2X;U)=I(W_2;U), \stepcounter{equation}\tag{\theequation}\label{pcm2} \\
& R_0 \geq I(W_1W_2;UV|X)=I(W_2;UV).
\end{align*}}
Note that if we identify $X=W_1$, we have
$H(X) = I(W_1;X)$ and $\bar P_{W_1} \bar P_{X|W_1}=\bar P_{X} \bar P_{W_1|X}= \bar P_{X} {\mathds 1}_{X=W_1}$.
Then, there exists a subset of $\mathcal R_{\text{SEP}}$ and $\mathcal R_{\text{PC}}$ defined as
the union over all $W_2$ such that
{\allowdisplaybreaks
\begin{align*}
&\bar P_{UW_2XV}= \bar P_{U} \bar P_{W_2|U} \bar P_{V| W_2} \bar P_{X} ,\\
& H(X) \geq I(W_2;U), \stepcounter{equation}\tag{\theequation}\label{inter} \\
& R_0 \geq I(W_2;UV).
\end{align*}}Finally, observe that,
by definition of the region \eqref{UVotimesX},
$\mathcal R_{UV \otimes X}$ is the union over all the possible choices for $W_2$ that satisfy \eqref{inter} and
therefore $ \mathcal R_{UV \otimes X} \subseteq \mathcal R_{\text{PC}} \cap \mathcal R_{\text{SEP}}$.
\subsection{Converse}
Consider a code $(f_n,g_n)$ that induces a distribution $P_{U^{n} X^{n} V^{n}}$ that is $\varepsilon$-close in total variational
distance to the i.i.d. distribution $\bar P_{U V}^{\otimes n} \bar P_{X}^{\otimes n} $.
Let $T$ be the random variable defined in Section \ref{outer}.
Then, we have
{\allowdisplaybreaks
\begin{align*}
nR_0 &\geq H(C) \overset{(a)}{\geq} I(U^{n} V^{n};C|X^n) = I(U^{n} V^{n};C X^{n}) - I(U^{n} V^{n}; X^{n}) \\
& \overset{(b)}{\geq} I(U^{n} V^{n};C X^{n})- n f(\varepsilon) =
\sum_{t=1}^n I(U_t V_t ;C X^{n}| U^{t-1} V^{t-1} ) - n f(\varepsilon)\\
& = \sum_{t=1}^n I(U_t V_t ; C X^{n} U^{t-1} V^{t-1} ) - \sum_{t=1}^n I(U_t V_t ; U^{t-1} V^{t-1} ) - n f(\varepsilon)\\
& \overset{(c)}{\geq}\sum_{t=1}^n I(U_t V_t ; C X^{n} U^{t-1} V^{t-1} ) - 2nf(\varepsilon)
\geq \sum_{t=1}^n I(U_t V_t ; C X^{n} U^{t-1} ) - 2nf(\varepsilon)\\
&= n I(U_T V_T ; C X^{n} U^{T-1}|T ) - 2nf(\varepsilon)=
n I(U_T V_T ; C X^{n} U^{T-1} T ) - n I(U_T V_T ; T ) - 2nf(\varepsilon)\\
& \overset{(d)}{\geq} n I(U_T V_T ; C X^{n} U^{T-1} T ) - 3nf(\varepsilon)
\end{align*}}where $(a)$ follows from basic properties of entropy and mutual information and $(b)$ from
the upper bound on the mutual information in Lemma \ref{lem1csi} since we assume $\mathbb V(P_{U^nV^nX^n}, \bar{P}_{UV}^{\otimes n}\bar{P}_{X}^{\otimes n}) \leq \varepsilon$
and $ \lvert \mathcal U \times \mathcal V \rvert \geq 4$.
Finally, since the distributions are close to i.i.d. by hypothesis, $(c)$ and $(d)$ come from Lemma \ref{lemmit} and
\cite[Lemma VI.3]{cuff2013distributed} respectively.
For the second part of the converse, observe that
{\allowdisplaybreaks
\begin{align*}
& 0 = H(X^{n}) - I(X^{n}; U^{n} C) - H(X^{n}|U^{n} C) \leq \sum_{t=1}^{n} H(X_t) - \sum_{t=1}^{n} I(X^{n}; U_t|U^{t-1} C) - H(X^{n}|U^{n} C)\\
& \leq \sum_{t=1}^{n} H(X_t) - \sum_{t=1}^{n} I(X^{n}; U_t|U^{t-1} C) = \sum_{t=1}^{n} H(X_t) - \sum_{t=1}^{n} I(X^{n} U^{t-1} C; U_t) + \sum_{t=1}^{n} I(U^{t-1} C; U_t)\\
& \overset{(e)}{=} \sum_{t=1}^{n} H(X_t) - \sum_{t=1}^{n} I(X^{n} U^{t-1} C; U_t) = n H(X_T|T) - n I(X^{n} U^{T-1} C; U_T|T) \\
&\leq n H(X_T) - n I(X^{n} U^{T-1} C T; U_T) + n I(T; U_T) = n H(X_T) - n I(X^{n} U^{T-1} C T; U_T)
\end{align*}
}where $(e)$ follows from the i.i.d. nature of the source $\bar P_{U}$.
Then, we identify the auxiliary random variable $W_t$ with $(C,X^n, U^{t-1})$ for
each $t \in \llbracket 1,n\rrbracket$ and $W$ with $(W_T, T)=$ $(C,X^n, U^{T-1}, T)$.
\hfill \IEEEQED
\section{Proof of cardinality bounds}\label{appendix bounds}
Here we prove the cardinality bounds for all the outer bounds in this paper.
Since the proofs are essentially identical, we give full details for the first case and omit most of them in the other cases.
First, we state the Support Lemma \cite[Appendix C]{elgamal2011nit}.
\begin{lem}\label{support lemma}
Let $\mathcal{A}$ be a finite set and $\mathcal W$ be an arbitrary set. Let $\mathcal P$ be a connected compact subset of probability mass functions on $\mathcal A$ and $P_{A|W}$ be
a collection of conditional probability mass functions on $\mathcal A$. Suppose that $h_i(\pi)$, $i=1, \ldots, d$, are real-valued continuous functions of $\pi \in \mathcal P$. Then for every $W$ defined
on $\mathcal W$ there exists a random variable $W'$ with $\lvert \mathcal W' \rvert \leq d$ and a collection of conditional probability mass functions $P_{A|W'} \in \mathcal P$ such that
\begin{equation*}
\sum_{w \in \mathcal W} P_W(w) h_i(P_{A|W}(a|w))=\sum_{w \in \mathcal W'} P_{W'}(w)h_i(P_{A|W'}(a|w)) \quad i=1, \ldots, d.
\end{equation*}
\end{lem}
\begin{IEEEproof}[Proof of cardinality bound of Theorem \ref{teoisit}]
We consider the probability distribution
$\bar P_{U} \bar P_{W|U} \bar P_{X|UW} \bar P_{Y|X} \bar P_{V|WY}$
that is $\varepsilon$-close in total variational distance to the i.i.d. distribution.
We identify $\mathcal{A}$ with $\{1,\ldots,\lvert \mathcal{A}\rvert \}$
and we consider $\mathcal P$ a connected compact subset of probability mass functions on $\mathcal A=\mathcal U \times \mathcal X \times \mathcal Y \times \mathcal V$.
Similarly to \cite{treust2017joint}, suppose that $h_i(\pi)$, $i=1, \ldots, \lvert \mathcal A \rvert +4$,
are real-valued continuous functions of $\pi \in \mathcal P$ such that:
{\allowdisplaybreaks
\begin{align*}
h_i (\pi) =
\begin{cases}
\pi(i) &\mbox{for } i= 1, \ldots, \lvert \mathcal A \rvert -1 \\
H(U) &\mbox{for } i= \lvert \mathcal A \rvert \\
H(UXV|Y)& \mbox{for } i= \lvert \mathcal A \rvert +1\\
H(Y|UX)& \mbox{for } i= \lvert \mathcal A \rvert +2\\
H(V|Y)& \mbox{for } i= \lvert \mathcal A \rvert +3\\
H(V|UXY)& \mbox{for } i= \lvert \mathcal A \rvert +4\\
\end{cases}.
\end{align*}}
Then by Lemma \ref{support lemma} there exists an auxiliary random variable $W'$ taking at most
$ \lvert \mathcal U \times \mathcal X \times \mathcal Y \times \mathcal V \rvert +4$ values such that:
{\allowdisplaybreaks
\begin{align*}
& H(U|W)= \sum_{w \in \mathcal W} P_W(w) H(U|W\!=\!w)= \sum_{w \in \mathcal W'} P_{W'}(w) H(U|W'\!=\!w)=H(U|W'),\\
& H(UXV|YW)= \sum_{w \in \mathcal W} P_W(w) H(UXV|YW\!=\!w)= \sum_{w \in \mathcal W'} P_{W'}(w) H(UXV|YW'\!=\!w)=H(UXV|YW'),\\
& H(Y|UXW)= \sum_{w \in \mathcal W} P_W(w) H(Y|UXW\!=\!w)= \sum_{w \in \mathcal W'} P_{W'}(w) H(Y|UXW'\!=\!w)=H(Y|UXW'),\\
& H(V|YW)= \sum_{w \in \mathcal W} P_W(w) H(V|YW\!=\!w)= \sum_{w \in \mathcal W'} P_{W'}(w) H(V|YW'\!=\!w)=H(V|YW'),\\
& H(V|UXYW)= \sum_{w \in \mathcal W} P_W(w) H(V|UXYW\!=\!w)= \sum_{w \in \mathcal W'} P_{W'}(w) H(V|UXYW'\!=\!w)=H(V|UXYW').
\end{align*}}
Then the constraints on the conditional distributions, the information constraints and the Markov chains are still verified
since we can rewrite the inequalities in \eqref{eq: regionisit2} and the Markov chains in \eqref{markov chain isit} as
{\allowdisplaybreaks
\begin{align*}
& H(U)-H(U|W) \leq I(X;Y),\\
& R_0 \geq H(UXV|Y)-H(UXV|WY),\\
& I(Y;UW|X)= H(Y|X)- H(Y|UXW)=0,\\
& I(V;UX|YW)= H(V|YW)-H(V|UXYW)=0.
\end{align*}}
Note that we are not forgetting any constraints: to preserve $H(U)-H(U|W) \leq I(X;Y)$ we only need to fix $H(U|W)$ because the other quantities
depend only on the joint distribution $P_{UXYV}$ (which is preserved).
Similarly, once the distribution $\bar P_{UXYV}$ is preserved, the difference $H(UXV|Y)-H(UXV|WY)$
only depends on the conditional entropy $H(UXV|WY)$ and the difference $H(Y|X)- H(Y|UXW)$ only
depends on $H(Y|UXW)$.
\end{IEEEproof}
\vspace{0.3cm}
\begin{IEEEproof}[Proof of cardinality bound of Theorem \ref{teouv}]
Here let $\mathcal A=\mathcal U \times \mathcal S \times \mathcal Z \times \mathcal X \times \mathcal Y \times \mathcal V $
and suppose that $h_i(\pi)$, $i=1, \ldots, \lvert \mathcal A \rvert +5$, are real-valued continuous functions of $\pi \in \mathcal P$ such that:
{\allowdisplaybreaks
\begin{align*}
h_i (\pi) =
\begin{cases}
\pi(i) & \mbox{for } i= 1, \ldots, \lvert \mathcal A \rvert -1 \\
H(U) & \mbox{for } i= \lvert \mathcal A \rvert \\
H(USXV|YZ) & \mbox{for } i= \lvert \mathcal A \rvert +1\\
H(Y|USX) & \mbox{for } i= \lvert \mathcal A \rvert +2\\
H(V|YZ) & \mbox{for } i= \lvert \mathcal A \rvert +3\\
H(V|YZUSX) & \mbox{for } i= \lvert \mathcal A \rvert +4\\
H(Z|YUSX) & \mbox{for } i= \lvert \mathcal A \rvert +5
\end{cases}.
\end{align*}}
By the Markov chain $Z-(U,S)-(X,Y,W)$, the mutual information $I(Z;XYW|US)$ is zero
and once the distribution $\bar P_{USZXYV}$ is preserved, the mutual information
$I(Z;XYW|US)= H(Z|US)- H(Z|USXYW)$ only depends on $H(Z|YUSX)$.
Therefore there exists an auxiliary random variable $W'$ taking at most
$ \lvert \mathcal U \times \mathcal S \times \mathcal Z \times \mathcal X \times \mathcal Y \times \mathcal V \rvert +5 $
values such that the constraints on the conditional distributions and the information constraints are still verified.
\end{IEEEproof}
\vspace{0.3cm}
\begin{IEEEproof}[Proof of cardinality bound of Theorem \ref{teopc}]
Similarly, here let $\mathcal A=\mathcal U \times \mathcal Z \times \mathcal X \times \mathcal V $ and suppose that $h_i(\pi)$, $i=1, \ldots, \lvert \mathcal A \rvert +4$,
are real-valued continuous functions of $\pi \in \mathcal P$ such that:
{\allowdisplaybreaks
\begin{align*}
h_i (\pi) =
\begin{cases}
\pi(i) & \mbox{for } i= 1, \ldots, \lvert \mathcal A \rvert -1 \\
H(U|XZ) &\mbox{for } i= \lvert \mathcal A \rvert \\
H(UV|XZ) & \mbox{for } i= \lvert \mathcal A \rvert +1\\
H(V|XZ) & \mbox{for } i= \lvert \mathcal A \rvert +2\\
H(V|ZUX) & \mbox{for } i= \lvert \mathcal A \rvert +3\\
H(Z|UX) & \mbox{for } i= \lvert \mathcal A \rvert +4
\end{cases}.
\end{align*}}
The information constraint in Theorem \ref{teopc} can be written as
{\allowdisplaybreaks
\begin{align*}
&H(X)+I(W;Z|X) - I(WX;U) = H(X)+I(WX;Z)-I(Z;X)- I(WX;U) \\
&{\overset{{(a)}}{=}} H(X)-I(Z;X)+I(WX;Z)-I(WX;UZ)=H(X)-I(Z;X)+I(WX;U|Z)\\
&=H(X)-I(Z;X)+H(U|Z)-H(U|WXZ) \geq 0
\end{align*}}
where $(a)$ follows from the fact that $I(WX;UZ)=I(WX;U)$ by the Markov chain $Z-U-(W,X)$.
By fixing $H(UV|XZ)$ the constraint on the bound for $R_0$ is satisfied and similarly to the previous cases
the Markov chains are still verified.
Thus there exists an auxiliary random variable $W'$ taking at most
$ \lvert \mathcal U \times \mathcal Z \times \mathcal X \times \mathcal V \rvert +4 $ values.
\end{IEEEproof}
\vspace{0.3cm}
\begin{IEEEproof}[Proof of cardinality bound of Theorem \ref{teolossless}]
For the lossless decoder, we rewrite the constraints in the equivalent characterization of the region \eqref{eq: regionldmael} as:
{\allowdisplaybreaks
\begin{align*}
H(U)&\leq H(YZ)- H(YZ|UW),\\
R_0 &\geq I(W;USX|YZ)+H(U|WYZ)= H(USX|YZ)- H(USX|WYZ)+H(U|WYZ)\\
&=H(USX|YZ)-H(USX)+H(U)+H(SX|U)-H(SX|UWYZ).
\end{align*}}
Then let $\mathcal A=\mathcal U \times \mathcal S \times \mathcal Z \times \mathcal X \times \mathcal Y$
and suppose that $h_i(\pi)$, $i=1, \ldots, \lvert \mathcal A \rvert +3$,
are real-valued continuous functions of $\pi \in \mathcal P$ such that:
{\allowdisplaybreaks
\begin{align*}
h_i (\pi) =
\begin{cases}
\pi(i) & \mbox{for } i= 1, \ldots, \lvert \mathcal A \rvert -1 \\
H(YZ|U) & \mbox{for } i= \lvert \mathcal A \rvert \\
H(SX|UYZ) & \mbox{for } i= \lvert \mathcal A \rvert +1\\
H(Y|USX) & \mbox{for } i= \lvert \mathcal A \rvert +2\\
H(Z|YUSX) & \mbox{for } i= \lvert \mathcal A \rvert +3
\end{cases}.
\end{align*}}and therefore there exists an auxiliary random variable $W'$ taking at most $\lvert \mathcal U \times \mathcal S \times \mathcal Z \times \mathcal X \times \mathcal Y \rvert +3 $ values.
\end{IEEEproof}
\vspace{0.3cm}
\begin{IEEEproof}[Proof of cardinality bound of Theorem \ref{teoseparation}]
For the case of separation between channel and source, we consider the following equivalent characterization of the information constraints:
{\allowdisplaybreaks
\begin{align*}
& 0 \leq H(YZ)-H(YZ|W_1 W_2) -H(US)+H(US|W_1 W_2),\\
& R_0 \geq H(USXV|YZ)-H(USXV|YZ W_1 W_2)
\end{align*}}In this case we have $W=(W_1, W_2)$.
Let $\mathcal A=\mathcal U \times \mathcal S \times \mathcal Z \times \mathcal X \times \mathcal Y \times \mathcal V $
and suppose that $h_i(\pi)$, $i=1, \ldots, \lvert \mathcal A \rvert +3$,
are real-valued continuous functions of $\pi \in \mathcal P$ such that:
{\allowdisplaybreaks
\begin{align*}
h_i (\pi) =
\begin{cases}
\pi(i) & \mbox{for } i= 1, \ldots, \lvert \mathcal A \rvert -1 \\
H(US) &\mbox{for } i= \lvert \mathcal A \rvert \\
H(USXV|YZ) & \mbox{for } i= \lvert \mathcal A \rvert +1\\
H(V|Z)& \mbox{for } i= \lvert \mathcal A \rvert +2\\
H(V|UZ)& \mbox{for } i= \lvert \mathcal A \rvert +3\\
\end{cases}.
\end{align*}}
Then there exists an auxiliary random variable $W'=(W'_1, W'_2)$ taking at most
$ \lvert \mathcal U \times \mathcal S \times \mathcal Z \times \mathcal X \times \mathcal Y \times \mathcal V \rvert +3$ values.
\end{IEEEproof}
\section{Polar coding achievability proofs}\label{appendix polar}
Here we prove the results used in the achievability proof of Theorem \ref{polarregion}.
\begin{IEEEproof}[Proof of Lemma \ref{scblock}]
Similarly to \cite[Lemma 5]{Cervia2016}, we first prove that in each block $i \in \llbracket 1,k\rrbracket$
\begin{equation}\label{emp1}
\D\left( \bar P_{UW}^{\otimes n} \Big\Arrowvert \widetilde P_{U^{n}_{(i)} \widetilde W_{(i)}^n }\right) \leq n \delta_n.
\end{equation}
In fact, we have
{\allowdisplaybreaks
\begin{align*}
\D\left( \bar P_{UW}^{\otimes n} \Big\Arrowvert \widetilde P_{U^{n}_{(i)} \widetilde W_{(i)}^n }\right)
& {\overset{{(a)}}{=}} \mathbb{D}\left( \bar P_{UR}^{\otimes n} \Big\Arrowvert \widetilde P_{U^{n}_{(i)} \widetilde R_{(i)}^n } \right)
{\overset{{(b)}}{=}} \mathbb{D}\left( \bar P_{R^n|U^n} \Big\Arrowvert \widetilde P_{\widetilde R_{(i)}^n |U^{n}_{(i)}} \Big| \bar P_{U^n}\right)\\
& {\overset{{(c)}}{=}} \sum_{j =1}^n \mathbb D \left( \bar P_{R_j|R^{j-1} U^{n} }\Big\Arrowvert \widetilde P_{\widetilde R_{(i),j}| \widetilde R_{(i)}^{j-1} U_{(i)}^{n}} \Big| \bar P_{R^{j-1} U^{n}} \right)\\
& {\overset{{(d)}}{=}}\sum_{j \in A_1 \cup A_2} \mathbb D \left( \bar P_{R_j|R^{j-1} U^{n} }\Big\Arrowvert \widetilde P_{\widetilde R_{(i),j}| \widetilde R_{(i)}^{j-1} U_{(i)}^{n} } \Big| \bar P_{R^{j-1} U^{n}}\right)\\
& {\overset{{(e)}}{=}} \sum_{j \in A_1 \cup A_2} \left( 1- H(R_j \mid R^{j-1} U^{n}) \right) {\overset{{(f)}}{<}} \delta_n \lvert \mathcal V _{W\mid U}\rvert
\leq n \delta_n,
\end{align*}}where $(a)$ comes from the invertibility of $G_n$, $(b)$ and $(c)$ come from the chain rule, $(d)$ comes from \eqref{eq: p1} and \eqref{eq: p2},
$(e)$ comes from the fact that the conditional distribution $\widetilde P_{\widetilde R_{(i),j}| \widetilde R_{(i)}^{j-1} U_{(i)}^{n} } $ is uniform for $j$ in $A_1$ and $A_2$ and $(f)$ from \eqref{eq: hv}.
Therefore, applying Pinsker's inequality to \eqref{emp1} we have
\begin{equation}\label{emp2}
\tv\left(\widetilde P_{U^{n}_{(i)} \widetilde W_{(i)}^n }, \bar P_{UW}^{\otimes n}\right)\leq \sqrt{2 \log 2} \sqrt{n \delta_n}:= \delta_n^{(2)} \to 0.
\end{equation}
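As a quick numerical sanity check (not part of the proof), the form of Pinsker's inequality used here can be verified on random distributions; the sketch below assumes divergences measured in bits and total variation taken as the full $\ell_1$ distance.

```python
import numpy as np

def kl_bits(p, q):
    # Kullback-Leibler divergence in bits; assumes q has full support
    return float(np.sum(p * np.log2(p / q)))

def tv_l1(p, q):
    # total variation written as the full l1 distance (the convention assumed here)
    return float(np.sum(np.abs(p - q)))

def pinsker_ok(p, q):
    # V(P,Q) <= sqrt(2 log 2) * sqrt(D(P||Q)), with D measured in bits
    bound = np.sqrt(2 * np.log(2) * max(kl_bits(p, q), 0.0))
    return tv_l1(p, q) <= bound + 1e-12

rng = np.random.default_rng(0)
assert all(pinsker_ok(rng.dirichlet(np.ones(8)), rng.dirichlet(np.ones(8)))
           for _ in range(1000))
```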
Note that $X_{(i)}^n$ is generated symbol by symbol from $U^{n}_{(i)}$ and $\widetilde W_{(i)}^n$ via the
conditional distribution $\bar P_{X|UW}$ and $Y_{(i)}^n$ is generated symbol by symbol via the channel $\bar P_{Y|X}$.
Applying Lemma \ref{cuff17} twice, first to $X_{(i)}^n$ and then to $Y_{(i)}^n$, we obtain that for each $i \in \llbracket 1,k\rrbracket$,
\begin{equation}\label{oneblockconv}
\tv \!\left(\!\widetilde P_{U^{n}_{(i)} \widetilde W_{(i)}^n X_{(i)}^n Y_{(i)}^n}, \bar P_{UWXY}^{\otimes n}\right)\!=\! \tv\left(\!\widetilde P_{U^{n}_{(i)} \widetilde W_{(i)}^n }, \bar P_{UW}^{\otimes n}\right) \! \leq \! \delta_n^{(2)}
\end{equation}
and therefore the left-hand side of \eqref{oneblockconv} vanishes.
Observe that we cannot use Lemma \ref{cuff17} again because $V_{(i)}^n$ is generated using $\widehat W_{(i)}^n$ (i.e., the estimate of $\widetilde W_{(i)}^n$ at the decoder)
and not $\widetilde W_{(i)}^n$.
By the triangle inequality for all $i \in \llbracket 1,k\rrbracket$
\begin{equation}\label{triangle}
\tv \left( \widetilde P_{U^{n}_{(i)} \widehat W_{(i)}^n X_{(i)}^n Y_{(i)}^n}, \bar P_{UWXY}^{\otimes n} \right) \!\!
\leq \! \tv \left( \widetilde P_{U^{n}_{(i)} \widehat W_{(i)}^n X_{(i)}^n Y_{(i)}^n}, \widetilde P_{U^{n}_{(i)} \widetilde W_{(i)}^n X_{(i)}^n Y_{(i)}^n} \right) \!\! + \! \tv \left( \widetilde P_{U^{n}_{(i)} \widetilde W_{(i)}^n X_{(i)}^n Y_{(i)}^n}, \bar P_{UWXY}^{\otimes n} \right).
\end{equation}We have proved in \eqref{oneblockconv} that the second term on the right-hand side of \eqref{triangle} goes to zero; we now show that the first term tends to zero as well.
To do so, we apply Proposition \ref{theocoup} to
$$\begin{array}{ll}
A = U^{n}_{(i)} \widehat W_{(i)}^n X_{(i)}^n Y_{(i)}^n & A'=U^{n}_{(i)} \widetilde W_{(i)}^n X_{(i)}^n Y_{(i)}^n\\
P =\widetilde P_{U^{n}_{(i)} \widehat W_{(i)}^n X_{(i)}^n Y_{(i)}^n} & P'=\widetilde P_{U^{n}_{(i)} \widetilde W_{(i)}^n X_{(i)}^n Y_{(i)}^n}\\
\end{array}$$
on $\mathcal A= \mathcal U \times \mathcal W \times \mathcal X \times \mathcal Y$.
Since it has been proven in \cite{arikan2010source} that
\begin{equation*}
p_e:= \mathbb P \left\{\widehat W_{(i)}^n \neq \widetilde W_{(i)}^n \right\}=O(\delta_n)
\end{equation*}we find that
$\tv \left( \widetilde P_{U^{n}_{(i)} \widehat W_{(i)}^n X_{(i)}^n Y_{(i)}^n}, \widetilde P_{U^{n}_{(i)} \widetilde W_{(i)}^n X_{(i)}^n Y_{(i)}^n} \right) \leq 2 p_e$
and therefore
{\allowdisplaybreaks
\begin{align*}
&\tv \left( \widetilde P_{U^{n}_{(i)} \widehat W_{(i)}^n X_{(i)}^n Y_{(i)}^n}, \bar P_{UWXY}^{\otimes n} \right)\leq 2p_e+\delta_n^{(2)}=\delta_n^{(1)}\to 0.
\end{align*}}
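The coupling step above can be illustrated numerically. The sketch below builds a toy decoder whose estimate differs from the encoder's variable on an error event of probability $p_e$, chosen adversarially so that the total variation sits close to the bound $2p_e$; all symbols and alphabet sizes are illustrative, not taken from the coding scheme.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p_e = 200_000, 0.05
u = rng.integers(0, 4, n)            # common component shared by both pairs
w_tilde = rng.integers(0, 4, n)      # encoder's variable
err = rng.random(n) < p_e            # decoding-error event of probability ~p_e
w_hat = np.where(err, 4, w_tilde)    # adversarial estimate: always wrong on err

def joint_pmf(a, b, shape):
    h = np.zeros(shape)
    np.add.at(h, (a, b), 1.0)
    return h / len(a)

# l1 total variation between the two joint laws; the coupling bound gives <= 2 p_e
tv = float(np.abs(joint_pmf(u, w_hat, (4, 5))
                  - joint_pmf(u, w_tilde, (4, 5))).sum())
assert tv <= 2 * p_e + 0.01
```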
Since $V_{(i)}^n$ is generated symbol by symbol from $\widehat W_{(i)}^n$ and $Y_{(i)}^n$, we apply Lemma \ref{cuff17} again and find
{\allowdisplaybreaks
\begin{align*}
&\tv\left(\widetilde P_{U^{n}_{(i)} \widehat W_{(i)}^n X_{(i)}^n Y_{(i)}^n V_{(i)}^n}, \bar P_{UWXYV}^{\otimes n}\right)\leq \delta_n^{(1)}\to 0. \stepcounter{equation}\tag{\theequation}\label{convblock}
\end{align*} }
\end{IEEEproof}
\vspace{0.3cm}
\begin{IEEEproof}[Proof of Lemma \ref{step2a'}]
For $i \in \llbracket 2,k\rrbracket$, we have
{\allowdisplaybreaks
\begin{align*}
\D \left(\widetilde P_{ L_{i-1:i} \bar C'} \Big\Arrowvert \widetilde P_{L_{i-1}\bar C'} \widetilde P_{L_{i}} \right) &= I (L_{i-1} \bar C';L_{i}) {\overset{{(a)}}{=}} I (L_{i}; \bar C' ) + I (L_{i-1};L_{i}| \bar C') \\
& {\overset{{(b)}}{=}} I (L_{i}; \bar C' ) = I (L_{i}; \widetilde R_i [A'_1] ) {\overset{{(c)}}{=}} \lvert A'_1 \rvert - H (\widetilde R_i [A'_1] | L_{i}) \\
& {\overset{{(d)}}{\leq}} \lvert A'_1 \rvert - H (R [A'_1] | L) + \delta_n^{(4)}
{\overset{{(e)}}{\leq}} \lvert A'_1 \rvert - \sum_{j \in A'_1} H (R_j | R^{j-1} L) + \delta_n^{(4)} \stepcounter{equation}\tag{\theequation}\label{eqlem10}\\
&{\overset{{(f)}}{\leq}} \lvert A'_1 \rvert - \lvert A'_1 \rvert (1 - \delta_n) + \delta_n^{(4)} \leq n \delta_n + \delta_n^{(4)}
\end{align*}
}where $(a)$ comes from the chain rule, $(b)$ from the Markov chain
$L_{i-1} - \bar C' - L_{i}$, $(c)$ from the fact that the bits in $A'_1$ are uniform.
To prove $(d)$ observe that
{\allowdisplaybreaks
\begin{align*}
H (\widetilde R_i [A'_1] | L_{i}) - H (R [A'_1] | L)&= H (\widetilde R_{(i)}[A'_1] L_{i}) - H (R [A'_1] L) - H (L_{i}) + H (L)\\
&{\overset{{(g)}}{\leq}} \delta_n^{(1)} \log{\frac{\lvert \mathcal U \times \mathcal X \times \mathcal W \times \mathcal Y \times \mathcal V \rvert}{\delta_n^{(1)}}} \! + \!
\delta_n^{(1)} \log{\frac{\lvert \mathcal U \times \mathcal X \times \mathcal Y \times \mathcal V \rvert}{\delta_n^{(1)}}}\\
&\leq 2 \delta_n^{(1)} (\log{\lvert \mathcal U \times \mathcal X \times \mathcal W \times \mathcal Y \times \mathcal V \rvert}-\log{\delta_n^{(1)}}):= \delta_n^{(4)}
\end{align*}}where $(g)$ comes from Lemma \ref{csi2.7} since
\begin{equation*}
\tv \left( \widetilde P_{L_i }, \bar P_{UXYV}^{\otimes n}\right) \leq \tv \left( \widetilde P_{L_i \widetilde W_{(i)}^n},\bar P_{UWXYV}^{\otimes n}\right) \leq \delta_n^{(1)}
\end{equation*}that vanishes as $n$ goes to infinity.
Finally $(e)$ is true because conditioning does not increase entropy and
$(f)$ follows from the definition of the set $A'_1$. Then from Pinsker's inequality
\begin{equation}\label{deltan2}
\tv \left( \widetilde P_{L_{i-1:i} \bar C'} , \widetilde P_{ L_{i-1}\bar C'} \widetilde P_{L_{i}}\right) \leq \sqrt{2 \log 2} \sqrt{n \delta_n + \delta_n^{(4)}}= \delta_n^{(3)} \to 0.
\end{equation}
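As an aside, step $(g)$ can be checked numerically if Lemma \ref{csi2.7} is assumed to be the usual Csisz\'ar--K\"orner continuity bound $\lvert H(P)-H(Q)\rvert \leq d \log (\lvert \mathcal X \rvert / d)$ for $\ell_1$ distance $d \leq 1/2$ (the exact form of the lemma is an assumption here):

```python
import numpy as np

def entropy_bits(p):
    return float(-(p * np.log2(p)).sum())

def ck_gap_ok(p, q, size):
    # Csiszar-Korner-type continuity: if the l1 distance d is at most 1/2,
    # |H(P) - H(Q)| <= d * log2(size / d)
    d = float(np.abs(p - q).sum())
    if not 0 < d <= 0.5:
        return True  # the bound is only claimed for d <= 1/2
    return abs(entropy_bits(p) - entropy_bits(q)) <= d * np.log2(size / d) + 1e-9

rng = np.random.default_rng(2)
assert all(ck_gap_ok(rng.dirichlet(np.ones(6)), rng.dirichlet(np.ones(6)), 6)
           for _ in range(1000))
```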
\end{IEEEproof}
\vspace{0.3cm}
\begin{IEEEproof}[Proof of Lemma \ref{step2b'}]
We have
{\allowdisplaybreaks
\begin{align*}
\D \left( \widetilde P_{ L_{1:k}} \Big\Arrowvert \prod_{i=1}^{k} \widetilde P_{ L_{i}} \right) & {\overset{{(a)}}{=}} \sum_{i=2}^{k} I (L_{i} ;L_{1:i-1}) \leq \sum_{i=2}^{k} I (L_{i} ;L_{1:i-1} \bar C') \\
& = \sum_{i=2}^{k} \left( I (L_{i} ;L_{i-1} \bar C') + \sum_{j=1}^{i-2} I (L_{i} ;L_{i-j-1} | L_{i-j:i-1} \bar C') \right)\\
& \leq \sum_{i=2}^{k} \left( I (L_{i} ;L_{i-1} \bar C') + \sum_{j=1}^{i-2} I (L_{i} ;L_{i-j-1:i-2} |L_{i-1} \bar C') \right)\\
&{\overset{{(b)}}{=}} \sum_{i=2}^{k} I (L_{i} ;L_{i-1} \bar C') {\overset{{(c)}}{\leq}} (k-1) (n \delta_n + \delta_n^{(4)})
\end{align*}}where $(a)$ comes from \cite[Lemma 15]{chou2016soft}, $(b)$ is true because the dependence structure of the blocks gives the Markov chain $L_{i-j-1:i-2}-L_{i-1} \bar C' -L_{i} $ and $(c)$ follows from \eqref{eqlem10}.
We conclude with Pinsker's inequality.
\end{IEEEproof}
\vspace{0.3cm}
\begin{IEEEproof}[Proof of Lemma \ref{step2c'}]
By the triangle inequality
{\allowdisplaybreaks
\begin{align}\label{tri}
\begin{split}
& \tv \left( \widetilde P_{ L_{1:k}}, \bar P_{UXYV}^{\otimes nk}\right) \leq \tv \left( \widetilde P_{ L_{1:k}}, \prod_{i=1}^{k} \widetilde P_{ L_{i}}\right)+ \tv \left( \prod_{i=1}^{k} \widetilde P_{ L_{i}}, \bar P_{UXYV}^{\otimes nk}\right)
\end{split}
\end{align}}where the first term is smaller than $\sqrt{k-1} \delta_n^{(3)}$ by Lemma \ref{step2b'}.
To bound the second term, observe that
{\allowdisplaybreaks
\begin{align}\label{eqdiv}
\begin{split}
& \D \left( \prod_{i=1}^{k} \widetilde P_{ L_{i}} \Big\Arrowvert \bar P_{UXYV}^{\otimes nk} \right) = \D \left(\prod_{i=1}^{k} \widetilde P_{L_{i}} \Big\Arrowvert \prod_{i=1}^{k} \bar P_{UXYV}^{\otimes n} \right) = \sum_{i=1}^{k} \D \left( \widetilde P_{L_{i}} \Big\Arrowvert \bar P_{UXYV}^{\otimes n} \right).
\end{split}
\end{align}}
By the chain rule we have that
$\D \left( \widetilde P_{ L_{i}} \Big\Arrowvert\bar P_{UXYV}^{\otimes n} \right) \leq \D \left( \widetilde P_{ L_{i} \widetilde W_{(i)}^n} \Big\Arrowvert \bar P_{UWXYV}^{\otimes n} \right)$.
Since $X_{(i)}^n$, $Y_{(i)}^n$ and $V_{(i)}^n$ are generated symbol by symbol via the conditional distributions $\bar P_{X|UW}$, $\bar P_{Y|X}$ and $\bar P_{V|WY}$ respectively,
by Lemma \ref{lemkl} we have that
\begin{equation}\label{eq div2}
\D \left( \widetilde P_{ L_{i} \widetilde W_{(i)}^n} \Big\Arrowvert \bar P_{UWXYV}^{\otimes n} \right) = \D \left( \widetilde P_{ U^{n}_{(i)} \widetilde W_{(i)}^n } \Big\Arrowvert \bar P_{UW}^{\otimes n} \right).
\end{equation}
Hence, we have
{\allowdisplaybreaks
\begin{align*}
\D \left( \prod_{i=1}^{k} \widetilde P_{ L_{i}} \Big\Arrowvert \bar P_{UXYV}^{\otimes kn} \right) = \sum_{i=1}^{k} \D \left( \widetilde P_{ L_{i}} \Big\Arrowvert \bar P_{UXYV}^{\otimes n} \right)
{\overset{{(a)}}{\leq}} \sum_{i=1}^{k} \D \left( \widetilde P_{ U^{n}_{(i)} \widetilde W_{(i)}^n} \Big\Arrowvert \bar P_{UW}^{\otimes n} \right){\overset{{(b)}}{\leq}} kn \delta_n
\end{align*}}where $(a)$ follows from the chain rule and \eqref{eq div2} and $(b)$ comes from \eqref{emp1}.
Then, by Pinsker's inequality, \eqref{tri} becomes:
\begin{equation*}
\tv \left( \widetilde P_{ L_{1:k}}, \bar P_{UXYV}^{\otimes nk}\right) \leq \sqrt{k-1} \delta_n^{(3)} +\sqrt{k} \delta_n^{(2)} \leq \sqrt{k} (\delta_n^{(3)} + \delta_n^{(2)})= \delta_n^{(5)} \to 0.
\end{equation*}
\end{IEEEproof}
\begin{small}
\bibliographystyle{IEEEtran}
\section{Introduction}\label{sec:intro}
The hard X-ray source GRS~1758\ensuremath{-}258 was discovered in 1990
\citep{mandrou:90a,sunyaev:91a} during observations of the Galactic
Center region performed with the \textsl{Granat} satellite. Most of
the time the source displays X-ray properties similar to the canonical
hard state of Galactic black hole binaries, i.e., a comparatively hard
spectrum with power law indices of $\Gamma$=1.4--1.9 and an
exponential cutoff above 100\,keV
\citep{kuznetsov:99a,main:99a,lin:00a} as well as strong short term
variability on frequencies up to 10\,Hz which can be characterized by
a flat-topped power spectrum \citep{smith:97a,lin:00a}. Based on these
X-ray properties and on the detection of a radio counterpart \citep[a
point source plus a double-sided jet structure,][]{rodriguez:92a}
GRS~1758\ensuremath{-}258 is considered to be a microquasar.
With the exception of rare dim soft states that can last up to several
months, the X-ray emission of GRS~1758\ensuremath{-}258 is persistent. In contrast to the
canonical picture for persistent black hole binaries, however, GRS~1758\ensuremath{-}258
most likely has a low mass binary companion and is accreting via Roche
lobe overflow. Three faint IR counterpart candidates have been
identified recently, multi-band photometry and near-infrared
spectroscopy characterizing the brightest of them as a probable K0~III
giant and the other two as main sequence A stars
\citep{marti:98a,goldwurm:01a,eikenberry:01a,rothstein:02a,heindl:02b}.
Taking into account the 18.45$\pm$0.10\,day binary orbit
\citep{smith:02a}, the radius of a K giant is consistent with Roche
lobe overflow, while the other counterpart candidates are too small
\citep{rothstein:02a}.
From 1990 to 1998 the source was monitored in the hard X-ray regime
with \textsl{Granat}/SIGMA, establishing it as persistent hard state
black hole binary but also revealing a factor of eight variability in
the 40--150\,keV flux \citep{kuznetsov:99a}. At softer X-rays
monitoring of GRS~1758\ensuremath{-}258 (and its ``sister source''
\object{1E~1740.7$-$2942}) with \textsl{RXTE } started with monthly
observations in 1996 and is still on-going in 2005, with two
observations each week \citep[][this
work]{main:99a,smith:97a,smith:01a,smith:02a,smith:01b}. This campaign
led to the discovery that in GRS~1758\ensuremath{-}258 (and 1E~1740.7$-$2942) the
observed spectral hardness is not anti-correlated with the 2--25\,keV
flux but rather correlated with the flux derivative -- in the sense
that the spectrum is softest when the 2--25\,keV count rate is
dropping. This behavior is different from the prototype hard state
black hole binary \object{Cyg~X-1} which is showing the canonical
anti-correlation of spectral hardness and soft X-ray flux. It has been
interpreted as an indication for the presence of two
\textsl{independent} accretion flows supplied with proportional
amounts of matter at large radii which are then accreted on different
time scales \citep{main:99a,smith:01b}: a hot, e.g., ADAF-type,
accretion flow, reacting quickly to changes in the accretion rate, and
a large accretion disk with a long viscous time scale (consistent with
accretion via Roche lobe overflow). In addition, \citet{lin:00a}
performed a radio to $\gamma$-ray multi-wavelength study of the hard
state as observed in 1997 August, including spectral modeling and high
time resolution analyses. Applying the thermal Comptonization model
\texttt{compTT} \citep{tit:94} they obtained a temperature of 52\,keV
and an optical depth of 3.4 for the hot plasma. \citet{sidoli:02a}
obtained very similar values using \texttt{compTT} to model a broad
band \textsl{BeppoSAX} spectrum of the source obtained in 1997 April.
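As a back-of-the-envelope consistency check (not performed in the papers cited above), the quoted \texttt{compTT} parameters imply a Compton $y$-parameter of order a few, assuming the standard non-relativistic estimate $y = (4kT/m_ec^2)\max(\tau,\tau^2)$:

```python
# Back-of-the-envelope Compton y-parameter for the quoted compTT fit values
# (kT = 52 keV, tau = 3.4); the non-relativistic formula below is a standard
# estimate, not something used in the cited papers themselves.
ELECTRON_REST_KEV = 511.0
kT_keV, tau = 52.0, 3.4
y = 4.0 * kT_keV / ELECTRON_REST_KEV * max(tau, tau**2)
assert 4.5 < y < 5.0   # y of a few, consistent with an efficiently Comptonized,
                       # hard spectrum with a high-energy cutoff
```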
A weak soft excess is sometimes seen in the hard state spectra of
GRS~1758\ensuremath{-}258 \citep{heindl:98a,lin:00a} or cannot be excluded to be present
\citep{mereghetti:97a}. It has also been observed in conjunction with
a slightly reduced hard X-ray flux during an intermediate state in
1993 March/April \citep{mereghetti:94a}. A similar recent episode in
2000 September has been characterized as an intermediate state based
on the \textsl{RXTE } monitoring observations \citep[][and references
therein]{heindl:02b}. Modeling an \textsl{XMM-Newton}/EPIC-MOS
spectrum obtained during this time, \citet{goldwurm:01a} found that in
addition to a comparatively soft power law ($\Gamma\sim$2) a blackbody
component ($kT_{\text {in}}\sim$320\,eV) is required.
On two occasions a much softer and dimmer state than the persistent
hard state with occasional softening has been observed: (i) A sudden
drop of the 2--25\,keV rate between one \textsl{RXTE } monitoring observation
and the next occurred in 2001 February, with an estimated
\textsl{decrease} of $\sim$35\% of the 1.3--200\,keV luminosity within
$\sim$20\,d after the transition \citep{smith:01a}. An extreme case of
the unusual flux derivative/hardness relation described above, this
behavior is different from the canonical black hole state behavior
where the soft state corresponds to a higher accretion rate and a
higher bolometric luminosity. For the 1996 soft state of Cyg~X-1,
e.g., \citet{zdz:02a} find a 3--4 times higher bolometric luminosity
than for the typical hard state, an increase even higher than the
$\sim$50--70\% estimated previously \citep{zhang:97a}. In
Sect.~\ref{sec:discussion} we suggest that the dim soft state can be
better understood when compared to outbursts of BHC transients than to
(focused) wind accretors like Cyg X-1. \citet{smith:01a} also found
that the transition to the 2001 dim soft state of GRS~1758\ensuremath{-}258 was mainly
due to a decreasing and softening power law component
($\Gamma\sim$2.75 in 2001 March), revealing a soft component
($kT_{\text {in}}\sim$460\,eV in 2001 March). As predicted by
\citet{smith:01c} based on the two-flow accretion model, the soft
component decayed more slowly than the hard one, on a time scale of
$\sim$28\,days. Displaying a complex structure, including a partial
recovery of the count rates in 2001 July, the dim soft state lasted
until 2001 December \citep[][see also this
work]{heindl:02b}. \textsl{Chandra}/ACIS-HETGS
\citep{heindl:02a,heindl:02b} and \textsl{XMM-Newton}/RGS
\citep{miller:02a} observations in 2001 March support the picture that
the decaying soft flux is emitted by an accretion disk. (ii) It can be
assumed that \textsl{Granat}/SIGMA exposures in fall 1991 and spring
1992 when the 40--150\,keV flux was below the detection limit
\citep{gilfanov:93a} found the source in a similar dim soft state
\citep{smith:01a,miller:02a}. This is supported by the analysis of a
1992 March \textsl{ROSAT}/PSPC spectrum by \citet{grebenev:97a}
resulting in a power law index of $\Gamma\sim$2.5.
In the following we report results of monitoring observations of
GRS~1758\ensuremath{-}258 with \textsl{INTEGRAL } and \textsl{RXTE } in 2003 and 2004. While the source was
in its usual variable hard state during most of the time, the data
obtained in spring 2003 clearly correspond to another dim soft state,
although it did not progress as far as the 2001 dim soft state before
the hard X-ray emission recovered again. In
Sect.~\ref{sec:observations} of this paper we describe the observing
strategy of the two monitoring programs and explain the data
extractions performed to obtain broad band PCA-ISGRI-SPI spectra and
multi-band PCA and ISGRI light curves. In Sect.~\ref{sec:evol} the
long term light curves and flux changes are described and in
Sect.~\ref{sec:spectra} we present results of modeling the broad band
spectra with phenomenological and thermal Comptonization models. The
results, especially the observation of another weak soft state, are
discussed in the light of current black hole outburst and accretion
models in Sect.~\ref{sec:discussion}. Our conclusions are summarized
in Sect.~\ref{sec:conclusions}.
\section{Observations and Data Reduction}\label{sec:observations}
During 2003 and 2004 the guaranteed time program amounted to 30--35\%
of \textsl{INTEGRAL}'s observing time and was mainly dedicated to the
Galactic Plane Scan (GPS) and Galactic Center Deep Exposure (GCDE)
projects within the \textsl{INTEGRAL } team's Core Programme. Especially the
GCDE\footnote{Concentrating on 5\ensuremath{^\circ} around the Galactic Center but
with pointing positions reaching out to Galactic longitudes and
latitudes of about $\pm30$\ensuremath{^\circ} and $\pm20$\ensuremath{^\circ}, respectively, from
it.} provided a wealth of data on GRS~1758\ensuremath{-}258. All Core Programme data up to
January 2005 as well as all data of the source public at that time
have been included into the analysis presented here. Our \textsl{INTEGRAL } data
are grouped into four data sets observed in spring and fall of 2003
and 2004 which in the following shall be called epoch~1--4. See
Table~\ref{tab:obslog} for more details.
\begin{table}
\caption{ \textsl{INTEGRAL } observing epochs for GRS~1758\ensuremath{-}258, giving the exposure times
of the summed spectra analyzed for each epoch and instrument,
including the \textsl{RXTE}/PCA.}\label{tab:obslog}
\begin{tabular}{ccrrr}
\hline
Epoch$^a$& Start \& End Date & ISGRI & SPI & PCA$^{b,c}$\\
& & [ks] & [ks]& [ks]\\
\hline
1 & 2003/02/28--2003/04/23 & 511& -- & 27\\
2 & 2003/08/19--2003/10/14 & 1889& 1358 & 19\\
& & & 1141$^d$ & \\
3 & 2004/02/17--2004/04/20 & 578& 759 & 23\\
4 & 2004/08/21--2004/10/28 & 467& 615 & 10\\
\hline
\end{tabular}
\vspace{2mm}
$^a$The four epochs correspond to \textsl{INTEGRAL } orbits 46--66, 103--122,
164--185, and 226--249.\\
$^b$The PCA exposure includes only data from PCU~2, see text
for more details.\\
$^c$Data from the following \textsl{RXTE } programs are included: 50107,
80102, and 90102.\\
$^d$SPI spectra for two non-overlapping data sets within this epoch
have been extracted (orbits 103--111 and 112--122), see text for more details.
\end{table}
\begin{figure}
\includegraphics[width=88mm]{4077fig1.eps}
\caption{\textsl{INTEGRAL}/ISGRI and \textsl{RXTE}/PCA light curves of GRS~1758\ensuremath{-}258 in the
indicated energy ranges as observed in 2003 and 2004. The ISGRI
light curves have been rebinned to a resolution of 1\,d. Arrows
indicate non-detections. For the PCA the average count rate of each
monitoring observation is shown in all energy bands. Vertical dashed
and dotted lines denote the dim soft state (``DS State'') and the
four \textsl{INTEGRAL } observation epochs (``E1''--``E4''), respectively.}
\label{fig:lcpanel}
\end{figure}
We used version 4.2 of the Offline Scientific Analysis package for
\textsl{INTEGRAL } to extract spectra and light curves of GRS~1758\ensuremath{-}258 obtained by the
\textsl{INTEGRAL } Soft Gamma Ray Imager \citep[ISGRI;][]{lebrun:03a} detector as
well as spectra from the SPectrometer on \textsl{INTEGRAL }
\citep[SPI;][]{vedrenne:03a}\footnote{At the time of submission of
this paper, OSA~5.0 had just become available. The conclusions
presented in this work do not depend on the OSA version. Since OSA~4.2
extractions of ISGRI spectra for sources as faint as GRS~1758\ensuremath{-}258 require
the use of the special procedure described in this section and since
no experience with using OSA~5.0 for faint sources existed prior to
submission, we decided to continue using OSA~4.2.}. See
\url{http://integral.esac.esa.int/workshops/Jan2005/session1/lubinski_cross.pdf}
for information on instrument cross-calibration issues for OSA
4.2. Due to the grid nature of the observations and the usually hard
source spectrum, the smaller field of view Joint European X-ray
Monitor \citep[JEM-X;][]{lund:03a} did not yield any useful data
covering the epoch time scales. In order to extract the ISGRI data
products, all pointings (``science windows'', ``ScWs'') in which the
offset of the source from the spacecraft pointing direction was
smaller than 10\ensuremath{^\circ} have been taken into account. For offsets in
this range systematic effects in the Crab calibration spectra show the
same general trends as for pointings in the fully coded field of
view. This selection amounts to $\sim$1920 ScWs with an exposure of
approximately 1800\,s each. Average spectra for the four epochs were
built by producing images in 12 energy bands for each ScW in a given
epoch, extracting the GRS~1758\ensuremath{-}258 source flux from each image, and
averaging the source fluxes of all ScWs in a given energy band using
standard weighting techniques. This method is described in the ISGRI
user manual
(\url{http://isdc.unige.ch/doc/tec/um/user_support/ibis/ibis_4.2.pdf})
and is the recommended procedure for all but the brightest
sources. For the coded aperture instrument ISGRI the diffuse Galactic
background is part of the background removed when reconstructing the
sky images out of detector shadowgrams
\citep{goldwurm:03a,terrier:03a}. For the spectral modeling we use the
ancillary response file ``isgr\_arf\_rsp\_0006.fits'' and a rebinned
version of the response matrix ``isgr\_rmf\_grp\_0012.fits''
distributed with OSA~4.2. In addition, light curves with a time
resolution of 1000\,s were produced in three energy bands: 20--60,
60--100, and 100--200\,keV.
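For concreteness, one common reading of the ``standard weighting techniques'' is an inverse-variance weighted mean of the per-ScW fluxes; the sketch below uses made-up flux values, not actual OSA products.

```python
import numpy as np

# Inverse-variance weighted average of per-science-window fluxes
# (illustrative numbers only, not taken from the OSA extraction).
flux = np.array([10.2, 9.8, 11.0, 10.5])   # per-ScW source flux [counts/s]
err = np.array([0.8, 0.9, 1.2, 0.7])       # 1-sigma uncertainties

w = 1.0 / err**2
mean = float(np.sum(w * flux) / np.sum(w))
mean_err = float(1.0 / np.sqrt(np.sum(w)))
assert err.min() > mean_err   # the combined error beats any single ScW
```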
During the first epoch the source was too soft to be detected by the
SPI instrument. For the remaining three epochs the same ScWs as for
ISGRI were used to produce epoch-summed SPI spectra, with the
exception of epoch 2, where successful OSA runs could only be obtained
by splitting the SPI data into two subsets. The difference in the
exposure times given for ISGRI and SPI in Table~\ref{tab:obslog} are
mainly due to ISGRI's dead-time. The SPI spectra were extracted over
an energy range of 20--500\,keV (25 bins) using the SPIROS package
within OSA, applying maximum likelihood optimization statistics
\citep{skinner:03a}. We set the number of pseudo detectors to 84
(i.e., including events located near borders between the physical SPI
detectors and registered in more than one of them) and selected
background correction method 5 (detector averaged count rate
modulation model). The input catalog of sources consisted of the 18
sources seen in the ISGRI 20--60\,keV mosaic images. Alternative
parameter settings were tested, like changing the number of pseudo
detectors to 18 (i.e., including only single events), using background
model 2 (each detector scaled separately), or allowing sources to be
variable (SEL\_FLAG=2). None of these alternatives led to a
significant change in the obtained count rates. Applying an
alternative extraction method optimized for recovering spatially
extended emission, \citet{strong:03a} find that the diffuse Galactic
background spectrum is of roughly power law shape, falling with a
slope of 2.5--3. We verified that adding such a component to only the
SPI data does not change the best fit parameters of the
multi-instrument fits discussed in section~\ref{sec:spectra} and that
the normalization of the new power law is consistent with zero. Note
that according to a study by \citet{dubath:05a} based on observations
and simulations, SPI fluxes of sources \textsl{with known positions}
can be well recovered down to a source separation of at least
0$\fdg{}$5. With an angular distance of 0$\fdg{}$66 from GRS~1758\ensuremath{-}258 the
nearest source, the bright LMXB \object{GX~5$-$1}, should therefore
not contaminate our GRS~1758\ensuremath{-}258 SPI spectra. For the data sets presented
here a careful inspection of the spectra of both sources shows no
indication of contamination with the possible exception of one bin
affected by an overcorrection (undercorrection) of the 66.7\,keV
background line for GRS~1758\ensuremath{-}258 (GX~5$-$1). We find, however, that
excluding this bin from the analysis does not change the results.
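The quoted separation is easy to reproduce from approximate Galactic coordinates of the two sources (rough catalog values quoted from memory, not taken from this paper):

```python
import math

# Rough angular separation between GRS 1758-258 and GX 5-1, to compare with
# the ~0.5 deg SPI source-separation limit discussed in the text.
l1, b1 = 4.51, -1.36   # GRS 1758-258 (l, b) [deg], approximate
l2, b2 = 5.08, -1.02   # GX 5-1 (l, b) [deg], approximate

dl = (l2 - l1) * math.cos(math.radians(0.5 * (b1 + b2)))
sep = math.hypot(dl, b2 - b1)   # flat-sky approximation, fine at this scale
assert 0.6 < sep < 0.7          # ~0.66 deg, above the 0.5 deg limit
```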
In order to characterize the source behavior at softer X-ray energies
we use data from \textsl{RXTE}'s Proportional Counter Array
\citep[PCA;][]{jahoda:96} obtained during the on-going \textsl{RXTE }
monitoring campaign. Under this program 1.5\,ks snapshots of GRS~1758\ensuremath{-}258
have been taken monthly in 1996, weekly from 1997 through 2000, and
approximately twice each week since then. For a description of the
offset observing strategy applied to avoid nearby sources and of the
extra background measurements taken to minimize the influence of the
diffuse Galactic emission see \citet{smith:97a} and
\citet{main:99a}. This procedure has been successfully evaluated using
data from the pronounced dim soft state in 2001. Reduction of the PCA
data was performed using the HEASOFT package version~5.3.1. The
responses were generated using pcarsp version~10.1 (see
\url{http://lheawww.gsfc.nasa.gov/docs/xray/xte/pca/} for more
information on the \textsl{RXTE}/PCA energy calibration under this HEASOFT
version). Average spectra for the four \textsl{INTEGRAL } observing epochs were
produced. In addition, long term light curves consisting of the
average count rates of each PCA monitoring pointing were generated in
three energy bands (2.5--4, 4--10, and 10--25\,keV) and for the
overall 2.5--25\,keV light curve. We only use data from PCA's top
layer to optimize the signal to noise ratio. For the average spectra
we additionally decided to select data from one of PCA's five
Proportional Counter Units (PCUs), namely from PCU~2,
only\footnote{PCU~2 and PCU~3 are the units mainly used for
monitoring. In the data sets discussed here PCU~1 and PCU~4 only
contain $\sim$15--30\% of the exposure of PCU~2 or PCU~3.}. The loss
of additional PCA exposure is acceptable since our aim is to study the
broad band spectral continuum (and not, e.g., to perform deeper iron
line studies) with emphasis on characterizing the hard spectral
component, i.e., on the \textsl{INTEGRAL } data. This strategy also allows us to
further minimize systematic effects due to PCU cross-calibration. See
Table~\ref{tab:obslog} for the total exposure times of the
epoch-summed PCA spectra. Note that the All Sky Monitor (ASM) on
\textsl{RXTE } is not well suited to observe GRS~1758\ensuremath{-}258: the source's daily
1.3--12.2\,keV ASM rate averaged over the time of the \textsl{INTEGRAL } mission
up to 2005 March, e.g., is 2.0$\pm$2.5\,cps. Also, since the absorbed
soft flux does not change much in the dim soft state (see next
section), no change is seen in the ASM light curve around epoch~1.
\section{Data Analysis and Results}\label{sec:analysis}
\begin{figure}
\includegraphics[width=88mm]{4077fig2_color.eps}
\caption{Evolution of the 2.5--25\,keV PCA count rates of GRS~1758\ensuremath{-}258 since
2000 (open triangles) and of the 20--60\,keV ISGRI count rates in
2003/2004 (open circles, see also Fig.~\ref{fig:lcpanel}). The
binning is the same as in Fig.~\ref{fig:lcpanel}. The dim soft
states are denoted by ``DS'' and the four \textsl{INTEGRAL } observation epochs
are again labeled ``E1''--``E4''.}
\label{fig:longlc}
\end{figure}
\subsection{Long Term Light Curves and Flux Changes}\label{sec:evol}
Fig.~\ref{fig:lcpanel} shows the evolution of the \textsl{INTEGRAL } and \textsl{RXTE }
light curves of GRS~1758\ensuremath{-}258 during the monitoring in 2003 and 2004 in
several energy bands, spanning a total energy range of
2.5--200\,keV. The \textsl{INTEGRAL } light curves have been rebinned to a
resolution of one day, for \textsl{RXTE } the average count rate of each PCA
data set is plotted, normalized to one PCU. The count rates during
epoch~1 are significantly lower in all energy bands above
4\,keV. Above 100\,keV the source is not detected in epoch~1. The PCA
measurements during and between epochs~1 and 2 suggest that the former
almost exactly covers the last two months of a $\sim$3 months long
period during which the source was in a dim soft state and that a
transition to the more common hard started with the end of
epoch~1.
This picture is supported by the average flux values determined from
our spectral fits (Table~\ref{tab:fluxes}). The 4--100\,keV flux is
considerably reduced in epoch~1. Consistent with the light curves the
\textsl{absorbed} 2.5--4\,keV flux does not change much between the
soft and hard state epochs. The \textsl{unabsorbed} flux extrapolated
to slightly lower energies, namely 2--4\,keV, reveals an overall
brightening at very low energies in epoch~1, however. A similar
behavior was also observed during the onset of the 2001 dim soft state
\citep{smith:01a}. Different from 2001, though, the 2003 dim soft
state starts with a short peak in the soft 2.5--4\,keV light curve,
coinciding with the decay above 10\,keV. The 4--10\,keV light curve
shows a superposition of both trends, with the flare dominating first
and then the decay.
\begin{table}
\caption{Average model flux for each epoch based on the best fit values
of Table~\ref{tab:cutoffpl}.}\label{tab:fluxes}
\begin{tabular}{lrrrr}
\hline
\hline
& Epoch 1& Epoch 2& Epoch 3& Epoch 4\\
\hline
$F_{2.5-4}^b$ & 1.6& 1.4& 1.4& 1.5\\
$F_{2-4}^a$ & 4.2& 2.3& 2.5& 2.8\\
$F_{3-9}^{a,c}$ & 1.9& 4.7& 4.5& 4.7\\
$F_{4-100}^a$ & 2.5& 22.9& 18.7& 18.6\\
\hline
\end{tabular}
$^a$ Unabsorbed flux in units of $10^{-10}$ erg cm$^{-2}$
s$^{-1}$.\\
\hspace*{0.15cm} The $N_{\text H}$ values of Table~\ref{tab:cutoffpl} have
been used for the flux correction. \\
$^b$ Absorbed flux in units of $10^{-10}$ erg cm$^{-2}$ s$^{-1}$.\\
$^c$ See comparison with flux changes of \object{GX~339$-$4} in
Sect.~\ref{sec:discussion}.
\end{table}
To put the \textsl{INTEGRAL } observing epochs into the broader context of the
source history, Fig.~\ref{fig:longlc} shows the 2.5--25\,keV light
curve from the PCA monitoring since 2000 as well as the 20--60\,keV
ISGRI light curve again. The dim soft state in 2001 is readily
apparent, including two instances within the off phase (2001 May and
2001 July/August) where the source partly turns on again. The
2.5--4\,keV and 10--25\,keV overview light curves shown by
\citet{heindl:02b} include these turn-ons. In their Fig.~1 it can be
seen that the soft emission only reaches its minimum after the second
turn-on. The soft emission decays slower than the hard emission after
each turn-on, consistent with the two-flow scenario. The decrease of
the 2.5--25\,keV rate in 2003 February is slower than the rapid
initial drop in 2001 February, but the 10--25\,keV light curve in
Fig.~\ref{fig:lcpanel} reveals that the hard emission decreases on a
similar time scale as in 2001, i.e., within about a week. The 2003 dim
soft state is shorter, however, and does not decline below
$\sim$4\,cps/PCU in the 2.5--25\,keV band. In addition to the dim soft states there is
considerable long term variability present in the light curves: the
\textsl{INTEGRAL } 20--60\,keV ISGRI rate, e.g., varies by 30--40\%
within each hard state epoch. Furthermore, the 2.5--25\,keV PCA light
curve shows several sudden count rate drops, less severe or shorter
than the dim soft states, e.g., in 2002 March or between epochs~3 and
4 or during epoch~4.
\subsection{Spectral Analysis}\label{sec:spectra}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{4077fig3_color.eps}
\caption{Summed counts spectra for the GRS~1758\ensuremath{-}258 monitoring observations
of 2003 spring (epoch~1) and 2003 fall (epoch~2) with the best fit
\texttt{compTT} model and the corresponding residuals. Small dots
are PCA, filled diamonds ISGRI, and open diamonds SPI data. For
reasons of clarity only the first of the two SPI spectra for epoch~2
is shown but both data sets have been used to derive the plotted
model and residuals as well as the best fit parameters listed in
Tables~\ref{tab:cutoffpl} and~\ref{tab:comptt}. Note that ``count
rates'' delivered by the SPI extraction software SPIROS are not
directly comparable to those of other instruments and that here and
in Fig.~\ref{fig:ldata34} the SPI spectra have additionally been
multiplied by a factor of 100 for display
purposes.}\label{fig:ldata12}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{4077fig4_color.eps}
\caption{The same as Fig.~\ref{fig:ldata12} but for the observations
of 2004 spring (epoch~3) and 2004 fall
(epoch~4).}\label{fig:ldata34}
\end{figure*}
\subsubsection{Models and Data Preparation}
Below 100\,keV GRS~1758\ensuremath{-}258 has often been modeled by an absorbed power law
\citep{sunyaev:91a,mereghetti:97a,main:99a}. \citet{kuznetsov:99a}
found that an exponential cutoff power law fits the 1990--1997
\textsl{GRANAT} data above 100\,keV better than a simple power law and
\citet{lin:00a} obtained good fits to their joint \textsl{RXTE}/PCA,
\textsl{RXTE}/HEXTE, and \textsl{CGRO}/OSSE spectrum of 1997 with a cutoff
power law. Thermal Comptonization has also been shown to provide a
good description of the hard state spectra
\citep{kuznetsov:99a,lin:00a,keck:01a}. As reported in
Sect.~\ref{sec:intro}, an additional weak thermal component can be
present in the hard state \citep{heindl:98a,lin:00a} which is more
clearly revealed in the intermediate
\citep{mereghetti:94a,goldwurm:01a} and especially the dim soft state
\citep{smith:01a,miller:02a,heindl:02b}.
From initial power law fits to our \textsl{INTEGRAL } data alone, we found that
for epochs~2 to 4 a cutoff is required. This is imposed by the ISGRI
data sets. Due to SPI's comparatively small effective area, the SPI
data do not carry enough weight to further constrain the cutoff
energy. Our basic phenomenological model for the simultaneous fits to
the summed \textsl{INTEGRAL}/\textsl{RXTE } spectra of each epoch thus consists of an
absorbed cutoff power law plus a Gaussian Fe~K$\alpha$ line (see
Sect.~\ref{sec:fitting} for a discussion of the need to include the
line), with an additional multicolor disk blackbody component if
required. We also applied a thermal Comptonization model
\citep[\texttt{compTT};][]{tit:94} to all four epochs, again including
absorption, Fe emission, and the optional disk blackbody as well as
allowing for a reflected Comptonized component
\citep{magdziarz:95a}. Normalization differences between the
instruments are taken into account in all fits by a multiplicative
constant, set to 1 for the PCA. The exact model compositions of both
the phenomenological and the physical model can be found in the
captions of Tables~\ref{tab:cutoffpl} and \ref{tab:comptt} for all
epochs.
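For reference, the \texttt{cutoffpl} component of the phenomenological
model has the standard XSPEC form
\[
N(E) = A \left(\frac{E}{1\,\text{keV}}\right)^{-\Gamma}
\exp\left(-\frac{E}{E_{\text{cutoff}}}\right)
\]
in photons\,cm$^{-2}$\,s$^{-1}$\,keV$^{-1}$, so that $\Gamma$ and
$E_{\text{cutoff}}$ in Table~\ref{tab:cutoffpl} directly parametrize
the spectral slope and the high energy curvature.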
We used XSPEC version~11.3.1t to perform the fits. Consistent with the
recommendations of the calibration teams, systematic errors of 0.5\%
and 2\% had to be added to all PCA and ISGRI spectra,
respectively. PCA data from 3--20\,keV, ISGRI data from 20--150\,keV,
and SPI data from 40--200\,keV were taken into account in all fits,
with the exception of epoch~1 where PCA data up to only 18\,keV and
ISGRI data up to only 100\,keV were included. Both modeling approaches
resulted in good descriptions of the data and produced similar
$\chi^2_{\text {red}}$ values for given
epochs. Tables~\ref{tab:cutoffpl} and \ref{tab:comptt} list the best
fit parameters and $\chi^2$ values for the power law and the
Comptonization models, respectively. Single parameter uncertainties
are given on a 90\% confidence level. The results quoted for epoch~2
contain both SPI data sets. Without the 2.5\,Ms of SPI data, the
$\chi^2_{\text {red}}$ values of the epoch~2 fits are in better
agreement with the quality of the other fits, e.g., $\chi^2_{\text
{red}}$=1.1 for the epoch~2 \texttt{cutoffpl} fit (with no significant
changes of the best fit parameters). Figs.~\ref{fig:ldata12}
and~\ref{fig:ldata34} show the counts spectra, best fit models, and
residuals for the \texttt{compTT} fits.
Since the calibration of the \textsl{INTEGRAL } instruments, especially ISGRI, is
a work in progress, we expect that the best fit parameters
characterizing the hard spectrum will be refined and updated in future
iterations of this work. In this iteration we interpret them as
indicators for general trends (e.g., the state change, qualitative
consistency with canonical values, etc.). Modeling the spectra with
Comptonization models that also take non-thermal electron
distributions into account, like \texttt{compPS} \citep{poutanen:96a}
or \texttt{eqpair} \citep{coppi:99a}, will be part of future
work. However, consistency checks have been performed, applying the
\texttt{compPS} model in a form comparable to our \texttt{compTT} fits
(thermal electrons, slab geometry, optional multicolor disk
blackbody). We obtain fits of similar quality, with seed photon
temperatures, plasma temperatures and optical depths consistent with
the \texttt{compTT} results. The reflection fraction obtained with
\texttt{compPS} is systematically higher, though, e.g., 24\% for
epoch~2 compared to 10\% with \texttt{compTT}. A similar trend of
lower reflection fractions obtained with \texttt{compTT} was also
observed between \texttt{eqpair} and \texttt{compTT} fits to Cyg~X-1
\textsl{INTEGRAL}/\textsl{RXTE } spectra \citep{pottschmidt:03a,pottschmidt:04a}. While
the \texttt{compTT}/\texttt{eqpair} discrepancy is likely due to the
omission of relativistic smearing of the reflection spectrum in
\texttt{compTT}, as recently suggested by \citet{wilms:05a} on the
basis of \texttt{compTT} and \texttt{eqpair} fits to several hundred
\textsl{RXTE } monitoring observations of Cyg~X-1, this is not the case here
since our \texttt{compPS} fits do not include relativistic smearing.
\begin{table}
\caption{Best fit parameters for the power law models. The full model
in XSPEC notation for epoch~1 is \texttt{const$\times$phabs
[diskbb+gauss+power]}, for epoch~2 it is
\texttt{const$\times$phabs[gauss+cutoffpl]} and for epochs~3 and 4
\texttt{const$\times$phabs[diskbb+gauss+cutoffpl]}. Parameters shown
are the hydrogen column density $N_{\text H}$, the inner accretion
disk temperature $kT_\text{in}$ and its normalization
$A_\text{disk}=((R_\text{in}/\text{km})/(D/10\,\text{kpc}))^2\cos i$,
the power law index $\Gamma$ and the power law cutoff energy
$E_{\text {cutoff}}$, the energy $E_\text{Fe}$ and equivalent width
$EW_\text{Fe}$ of the Gaussian Fe K$\alpha$ line, and the flux
normalization constants of the individual instruments with respect to
the PCA, $c_\text{ISGRI}$ and $c_\text{SPI}$.}\label{tab:cutoffpl}
\begin{tabular}{lrrrr}
\hline
\hline
& Epoch 1& Epoch 2 & Epoch 3& Epoch 4\\
\hline
$N_{\text {H}}/10^{22}$ [cm$^{-2}$] & 1.66$^{+1.60}_{-0.41}$& 1.11$^{+0.25}_{-0.20}$& 1.50& 1.50$^{+3.18}_{-1.20}$\\
$kT_{\text {in}}$ [eV] & 477$^{+11}_{-27}$& --& 679$^{+145}_{-66}$& 536$^{+250}_{-222}$\\
$A_{\text {disk}}/10^{3}$ & 2.7$^{+0.9}_{-0.1}$& --& 0.04$^{+0.05}_{-0.02}$& 0.16$^{+0.41}_{-0.12}$\\
$\Gamma$ & 2.29$^{+0.10}_{-0.05}$& 1.54$^{+0.01}_{-0.02}$& 1.59$^{+0.03}_{-0.04}$& 1.69$^{+0.05}_{-0.05}$\\
$E_{\text {cutoff}}$ [keV] & --& 185$^{+22}_{-17}$& 136$^{+13}_{-16}$& 246$^{+26}_{-56}$\\
$E_{\text {Fe}}$ [keV] & 6.40$^{+0.12}_{-0.19}$& 6.52$^{+0.21}_{-0.27}$& 6.66$^{+0.15}_{-0.24}$& 6.57$^{+0.13}_{-0.37}$\\
$EW_{\text {Fe}}$ [eV] & 146.0& 59.2& 53.8& 49.0\\
$c_{\text {ISGRI}}$ & 0.70$^{+0.06}_{-0.06}$& 0.85$^{+0.02}_{-0.02}$& 0.83$^{+0.02}_{-0.03}$& 0.88$^{+0.01}_{-0.02}$\\
$c_{\text {SPI}}$ & --& 0.99$^{+0.02}_{-0.04}$& 1.00$^{+0.05}_{-0.10}$& 1.00$^{+0.11}_{-0.11}$\\
$\chi^2_{\text {no Fe}}/{\text {dof}}$& 61.3/39& 88.6/69& 83.3/55& 60.2/55\\
$\chi^2/{\text {dof}}$ & 37.7/35& 55.7/65& 56.7/52& 49.4/51\\
$\chi^2_{\text {red}}$ & 1.08& 0.86& 1.09& 0.97\\
\hline
\end{tabular}
\end{table}
\begin{table}
\caption{Best fit parameters for the \texttt{compTT} model. The full
model in XSPEC notation is
\texttt{const$\times$phabs[diskbb+gauss+compTT+reflect(compTT)]},
where for epoch~2 the \texttt{diskbb} component and for epochs~1 and 4
the \texttt{reflect(compTT)} component turned out not to be
required. The parameters shown are mostly the same as in
Table~\ref{tab:cutoffpl} but instead of the power law related
parameters, the physical parameters associated with Comptonization and
reflection are shown, namely the electron temperature of the
Comptonizing plasma $kT_\text{e}$ and its optical depth $\tau$, and
the covering factor of the cold reflecting medium
$\Omega/2\pi$. $kT_{\text {in}}$ in this table is the
temperature of the \texttt{diskbb} component and/or the seed photon
input for the hot plasma.}\label{tab:comptt}
\begin{tabular}{lrrrr}
\hline
\hline
& Epoch 1& Epoch 2& Epoch 3& Epoch 4\\
\hline
$N_{\text {H}}/10^{22}$ [cm$^{-2}$] & 1.50$^{+0.21}_{-0.26}$& 1.37$^{+0.20}_{-0.15}$& 1.50& 1.50\\
$kT_{\text {in}}$ [eV] & 482$^{+14}_{-16}$& 379$^{+116}_{-378}$& 441$^{+86}_{-55}$& 501$^{+71}_{-90}$\\
$A_{\text {disk}}/10^{3}$ & 2.5$^{+0.4}_{-0.3}$& --& 0.38$^{+0.54}_{-0.04}$& 0.28$^{+0.72}_{-0.07}$\\
$\tau$ & 0.29$^{+0.43}_{-0.13}$& 0.71$^{+0.16}_{-0.07}$& 1.00$^{+0.21}_{-0.21}$& 0.37$^{+0.24}_{-0.12}$\\
$kT_{\text {e}}$ [keV] & 64$^{+4}_{-15}$& 78$^{+34}_{-15}$& 49$^{+29}_{-9}$& 114$^{+32}_{-35}$\\
$E_{\text {Fe}}$ [keV] & 6.41$^{+0.13}_{-0.24}$& 6.62$^{+0.21}_{-0.30}$& 6.74$^{+0.15}_{-0.26}$& 6.47$^{+0.32}_{-0.25}$\\
$EW_{\text {Fe}}$ [eV] & 208.0& 61.0& 42.6& 64.3\\
$c_{\text {ISGRI}}$ & 0.76$^{+0.08}_{-0.08}$& 0.83$^{+0.03}_{-0.02}$& 0.82$^{+0.01}_{-0.02}$& 0.87$^{+0.02}_{-0.02}$\\
$c_{\text {SPI}}$ & --& 0.96$^{+0.05}_{-0.04}$& 0.97$^{+0.10}_{-0.09}$& 0.97$^{+0.12}_{-0.11}$\\
$\Omega/2\pi$ [\%] & --& 10.0$^{+5.6}_{-5.6}$& 13.8$^{+5.0}_{-5.5}$& --\\
$\chi^2/{\text {dof}}$ & 37.2/34& 52.5/63& 57.3/51& 49.8/52\\
$\chi^2_{\text {red}}$ & 1.09& 0.83& 1.12& 0.96\\
\hline
\end{tabular}
\end{table}
\subsubsection{The 3--200\,keV Spectrum}\label{sec:fitting}
As a Galactic Center source, GRS~1758\ensuremath{-}258 is known to be strongly absorbed
and the $N_\text{H}$ value adopted in most studies is
(1.5$\pm$0.1)$\times10^{22}$\,cm$^{-2}$, as derived by
\citet{mereghetti:97a} from \textsl{ASCA} observations. However,
\citet{keck:01a} report (0.98$\pm$0.08)$\times10^{22}$\,cm$^{-2}$ from
\textsl{ROSAT} observations, \citet{lin:00a} find
(0.93--2.0)$\times10^{22}$\,cm$^{-2}$ from \textsl{RXTE } observations, and
\citet{goldwurm:01a} determine
(1.74$\pm$0.07)$\times10^{22}$\,cm$^{-2}$ with \textsl{XMM}. When
modeling PCA data starting at 3\,keV, $N_\text{H}$ and the blackbody
parameters are known to be strongly correlated and not well
constrained. Here, we obtain best fits with $N_\text{H}$ values
generally consistent
with the canonical value of 1.5$\times10^{22}$\,cm$^{-2}$ for epochs~1
and 2 \citep[for the epoch~2 \texttt{cutoffpl} fit, though,
$N_\text{H}$ is closer to the lower value of][]{keck:01a}. In the case
of epoch~1 this includes a \texttt{diskbb} component which is
obviously required, while in the case of epoch~2 no thermal component
is needed. For epochs~3 and 4 the best fits with free $N_\text{H}$
result in values of (0.03--0.7)$\times10^{22}$\,cm$^{-2}$ that are too
low, and freezing $N_\text{H}$ to the canonical value does not produce
acceptable fits. Adding a disk blackbody component, however, allows
for good fits with the canonical $N_\text{H}$ (frozen with exception
of the epoch~4 \texttt{cutoffpl} fit). For the Comptonization fit of
epoch~4 this procedure results in a somewhat higher plasma temperature
and lower optical depth compared to the other hard state observations
than without including the disk component
(Table~\ref{tab:comptt}). The same tendency in presence of a disk
blackbody is seen when holding $N_\text{H}$ at the lower value of
\citet{keck:01a}. For all Comptonization fits the blackbody
temperature has been tied to the seed photon temperature of the
\texttt{compTT} component.
Clear residuals in the 6--7\,keV range are present for all epochs when
no iron line is included. The $\chi^2$ values obtained when removing
the Gaussian iron line from the models are given for reference in
Table~\ref{tab:cutoffpl}. Note that the $F$-test may not be used to
test for the presence of a line \citep{protassov:02a}. In epoch~4 the
improvement of the fit when including the iron line is considerably
smaller than for the other epochs and an acceptable fit can be
achieved without the line ($\chi^2_{\text {red}}$=1.09), although
residuals remain. The iron line is generally narrow, with widths around
or below 0.4\,keV, and consistent with zero. The line energy ranges
from 6.40 to 6.73\,keV and is mostly consistent with 6.4\,keV, i.e.,
neutral Fe. Interestingly, the one exception is epoch~3 where we also
measure the strongest reflection component. A 3--4 times higher line
equivalent width is measured for the soft state epoch~1, consistent
with earlier measurements
\citep{heindl:98a,smith:01a,sidoli:02a}. This mainly reflects the
reduced level of continuum emission during that time, since the line
normalization does not change significantly between the epochs,
including the soft state. It has to be kept in mind, though, that the
Galactic diffuse emission features a strong iron line and that the
iron line parameters obtained from the fits are most likely influenced
by an imperfect correction for this emission.
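For reference, the fit improvements due to the inclusion of the line
follow directly from Table~\ref{tab:cutoffpl}:
\[
\Delta\chi^2 = \chi^2_{\text{no Fe}} - \chi^2 = 23.6,\ 32.9,\ 26.6,\
\text{and}\ 10.8
\]
for epochs~1 to 4, respectively, for 3--4 additional fit parameters,
which quantifies the comparatively small improvement in epoch~4.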
The parameters we are mainly interested in are those characterizing
the broad band continuum. We caution again that calibration
uncertainties prohibit a statistical comparison with earlier
results. Nevertheless we list earlier results for a qualitative
comparison and to illustrate the overall picture. For epochs~2 to 4 we
find values typical for hard state BHC spectra. For the
phenomenological model the power law indices lie between
1.54$^{+0.01}_{-0.02}$ (epoch~2) and 1.69$^{+0.05}_{-0.05}$ (epoch~4)
and the cutoff energies range from 136$^{+13}_{-16}$\,keV (epoch~3) to
246$^{+26}_{-56}$\,keV (epoch~4). \citet{lin:00a} find
$\Gamma\sim$1.40 (uncertainty of the order of 0.04) and $E_{\text
{cutoff}}\sim$200\,keV (uncertainty of the order of 30\,keV) for their
joint \textsl{RXTE } and \textsl{CGRO}/OSSE spectrum of 1997 and
\citet{kuznetsov:99a} find $\Gamma=1.0\pm0.3$ and $E_{\text
{cutoff}}=89^{+40}_{-20}$\,keV for their combined 1990--1997
\textsl{GRANAT} data (no cutoff was fit to shorter data sets). At
246\,keV, the cutoff energy for epoch~4 is at the limit of what can be
measured with these observations. However, no good fit can be obtained
without cutoff ($\chi^2_{\text {red}}$=2.9).
From the Comptonization models we obtain plasma temperatures of
78$^{+34}_{-15}$\,keV and 49$^{+29}_{-9}$\,keV and optical depths of
0.71$^{+0.16}_{-0.07}$ and 1.00$^{+0.21}_{-0.21}$ for epochs~2 and
3. Here the values of \citet{lin:00a} are $kT_{\text {e}}\sim$52\,keV
(uncertainty of the order of 7\,keV) and $\tau\sim$3.4 (uncertainty of
the order of 0.3). Similarly, \citet{sidoli:02a} find values of
$kT_{\text {e}}=44^{+146}_{-7}$\,keV and $\tau=3.6^{+0.4}_{-2.3}$ for
their \textsl{BeppoSAX} data set. In both of these cases the higher
optical depth is probably mainly due to two effects: first, no
reflection component has been included in those models, and second,
the sphere$+$disk geometry has been used. We see a moderate trend
towards higher values of $\tau$ when switching from slab to
sphere$+$disk geometry in our fits.
\citet{kuznetsov:99a} obtain $kT_{\text {e}}=41^{+7}_{-5}$\,keV and
$\tau=1.2\pm0.2$. However, they were using a predecessor to
\texttt{compTT}, namely the model of \citet{sunyaev:80}, therefore
their results cannot be directly compared to ours. Also, these values
reflect the average over a wide range of $kT_{\text {e}}$ and $\tau$
values obtained from their two observation periods each year.
\begin{figure}
\includegraphics[width=88mm]{4077fig5_color.eps}
\caption{Unfolded, unabsorbed model spectra corresponding to the
\texttt{compTT} fits listed in Table~\ref{tab:comptt}. The $N_{\text
H}$ values quoted in that table have been used for the flux
correction. Vertical lines denote the range of the modeled
data.}\label{fig:modelunabs}
\end{figure}
In the Comptonization fits we also allow for reflection of the
Comptonized radiation off a cold accretion disk and find reflection
factors of 10.0$^{+5.6}_{-5.6}$\% and 13.8$^{+5.0}_{-5.5}$\% for
epochs~2 and 3, respectively. No reflection is detected in
epoch~4. From the \texttt{cutoffpl} fits it is also clear that the
epoch 4 spectrum is less curved than the other two hard state spectra.
With $kT_{\text {e}}=$114$^{+32}_{-35}$\,keV and
$\tau=$0.37$^{+0.24}_{-0.12}$ the Comptonization parameters of epoch~4
correspond to the hottest and most transparent plasma among the hard
state observations. While the latter might be an artifact due to the
introduction of the disk blackbody necessary to constrain $N_{\text
H}$ (the Compton-$y$ changes only slightly)\footnote{Note, however,
that apart from the already quoted effect of obtaining a higher
reflection fraction, the best fit parameters obtained with
\texttt{compPS} for epoch~4 show the same tendency ($kT_{\text
{in}}$=521\,eV, $kT_{e}$=102\,keV, $\tau$=0.63, $\Omega/2\pi$=0.05,
and $\chi_{\text {red}}^2$=0.95) although no additional disk blackbody
has been included.}, a possible physical origin for the differences
observed in the epoch~4 spectrum is suggested by the occurrence of one
of the sudden moderate drops in the PCA rate during this time (see
Fig.~\ref{fig:longlc} and Sect.~\ref{sec:evol}).
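The footnote's remark on the Compton parameter can be illustrated with
the non-relativistic, low optical depth estimate, a rough
approximation valid for $\tau\lesssim1$:
\[
y \approx \frac{4kT_{\text{e}}}{m_{\text{e}}c^2}\,\tau,
\]
which gives $y\approx0.43$, 0.38, and 0.33 for the best fit values of
epochs~2, 3, and 4, i.e., variations of only $\sim$25\% despite the
large individual changes in $kT_{\text{e}}$ and $\tau$.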
In general, the range of different results for the hard state
parameters is not too surprising in the light of the considerable
long term variations known to be present even within the hard state
(Fig.~\ref{fig:longlc}); however, it is also clear that \textsl{INTEGRAL }
calibration caveats apply. With $kT_{\text {e}}\sim$50--60\,keV,
$\tau\sim$1.0--1.2, and reflection fractions of 17--24\% the
\texttt{compTT} fits of \citet{pottschmidt:03a} to a set of \textsl{INTEGRAL } and
\textsl{RXTE } observations of Cyg~X-1 result in similar parameters as
observed for epoch~3.
As expected from the long term evolution of the light curves, the
spectrum of epoch~1 differs considerably from the others. In both
models an additional soft component of comparable strength is clearly
present. We obtain a multicolor disk blackbody temperature of
477$^{+11}_{-27}$\,eV from the power law fit and of
482$^{+14}_{-16}$\,eV from the Comptonization fit. This is consistent
with the 2001 dim soft state where \citet{smith:01a} found a disk
blackbody temperature of 464$\pm$7\,eV with the PCA,
\citet{miller:02a} give values of 340$\pm$10\,eV and 600$\pm$10\,eV,
depending on $N_{\text H}$, for \textsl{XMM} observations, and
\citet{heindl:02a} find 505$\pm$7\,eV with \textsl{Chandra}. Based on
these previously measured soft state values and since the soft state
spectrum is dominated by disk emission below $\sim$5\,keV, we believe
that the values quoted above give a realistic measure of the
temperature. Not surprisingly, the seed photon/disk temperature is
not well constrained in the hard state observations. For epoch~3,
e.g., the disk temperatures obtained from the cutoff power law and the
Comptonization fits are formally inconsistent; both fits agree,
however, in that the disk component is needed if
$N_{\text H}$ is assumed to lie within the range of previously
measured values. Where a disk blackbody component was included in the
hard state fits it never dominates the soft spectrum. With
$\Gamma$=2.29$^{+0.10}_{-0.05}$ the power law is significantly steeper
in epoch~1 but does not quite reach the value of
$\Gamma$=2.75$\pm$0.12 observed in 2001 March \citep{smith:01a}. No
cutoff is detected but during this time the high energy flux was
comparatively weak and the spectrum could only be obtained out to
100\,keV. The steepness of the spectrum translates into a small
optical depth of 0.29$^{+0.43}_{-0.13}$ in the Comptonization fit,
while the temperature of the hot plasma is found to be
64$^{+4}_{-15}$\,keV, i.e., not significantly different from the hard
state epochs~2 and 3.
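As a rough consistency estimate (neglecting color corrections and
relativistic effects, and assuming a face-on disk at $D=8.5$\,kpc,
i.e., purely illustrative assumptions), the \texttt{diskbb}
normalization of epoch~1 translates into an apparent inner disk radius
of
\[
R_{\text{in}} \approx \sqrt{\frac{A_{\text{disk}}}{\cos i}}\,
\frac{D}{10\,\text{kpc}}\,\text{km} \approx 40\,\text{km},
\]
of the order of the innermost stable circular orbit radius
$6GM/c^2\approx90$\,km for a 10\,$M_{\sun}$ black hole; the well known
spectral hardening of the disk emission would raise the true radius
further.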
\section{Discussion}\label{sec:discussion}
Is the dim soft state of GRS~1758\ensuremath{-}258 really that different from the soft
states observed in other sources? In Fig.~\ref{fig:modelunabs} the
unfolded, unabsorbed model spectra corresponding to the
\texttt{compTT} fits are shown. The typical pivoting between the soft
state spectrum and the three hard state spectra is seen. At
$\sim$3\,keV the pivot energy lies considerably lower than for Cyg
X-1, where a value of $\sim$10\,keV is observed
\citep{zdz:02a,wilms:05a}. However, taking the nature of GRS~1758\ensuremath{-}258 as a
low mass X-ray binary (LMXB) and Roche lobe accretor into account, its
behavior might be more akin to the state transitions displayed by LMXB
BHC transients than to those of the high mass X-ray binary (HMXB) and
focused wind accretor \mbox{Cyg X-1}, i.e., hysteresis might play an
important role. In the following the term ``hysteresis'' is used to
describe the existence of an ``overlap region'' in luminosity in which
both soft and hard states can occur \citep[see,
e.g.,][]{miyamoto:95a,zdz:04b,meyer:05a}\footnote{The flux
derivative/hardness correlation may represent the extension of this
hysteretic behavior into the hard state \citep{smith:01b}.}. According
to the rough estimate for the bolometric luminosity that can be
derived from our fits (using a distance estimate of 8.5\,kpc based on
the assumption of a near-GC location of GRS~1758\ensuremath{-}258), the 2003 dim soft state
is 0--20\% less luminous than the hard state, depending on the hard
state epoch and spectral model used for comparison. For a
10\,$M_{\sun}$ black hole the hard state luminosities that we measure
correspond to 2--3\% $L_{\text{Edd}}$. The differences between the
states in terms of fluxes in different energy bands have been
presented in Table~\ref{tab:fluxes}.
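The Eddington ratio quoted above follows from a simple estimate (the
distance and mass are assumptions, as stated):
\[
L \approx 4\pi d^2 F_{\text{bol}} \quad\text{and}\quad
L_{\text{Edd}} \approx 1.3\times10^{38}\,\frac{M}{M_{\sun}}\,
\text{erg\,s}^{-1};
\]
for $d=8.5$\,kpc, the epoch~2 flux of
$F_{4-100}\approx2.3\times10^{-9}$\,erg\,cm$^{-2}$\,s$^{-1}$ alone
already gives $L\approx2\times10^{37}$\,erg\,s$^{-1}$, i.e.,
$\sim$1.5\% of $L_{\text{Edd}}$ for $M=10\,M_{\sun}$, with the
bolometric correction accounting for the remainder of the quoted
2--3\%.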
Before comparing the range of hysteretic behavior observed in GRS~1758\ensuremath{-}258
to other sources, we note that another possible reason for observing
reduced soft state luminosities might be a geometric effect introduced
by the inclination $i$ of the system: In the hard state a
geometrically thick hot plasma is present, which can be assumed to
radiate approximately isotropically. In the soft state only the
decaying accretion disk remains which is geometrically thin with a
luminosity $\propto \cos i$ \citep{frank:92}. If the system is viewed
close to edge-on the projected area of the inner disk is comparatively
small, allowing only for a small percentage of the disk luminosity to
reach the observer. In addition, X-rays from the inner disk may be
further obscured due to flaring of the outer disk \citep{narayan:05a}.
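The size of this purely geometric effect is easily quantified (the
inclinations below are illustrative values, since $i$ is not known for
GRS~1758\ensuremath{-}258):
\[
\frac{L_{\text{obs}}}{L_{\text{disk}}} \propto \cos i,
\]
so that an intermediate inclination of $i=60^\circ$ already halves the
observed disk luminosity, while $i=75^\circ$ reduces it by a factor of
$\sim$4.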
\begin{figure}
\includegraphics[width=88mm]{4077fig6.eps}
\caption{Ratio of the 10--25\,keV and 2.5--4\,keV PCA count
rates. Dashed and dotted vertical lines denote the dim soft state of
2003 and times of sudden hardening, respectively.}\label{fig:ratio}
\end{figure}
GX~339$-$4 is the source for which hysteresis in the above sense has
been best studied so far. Depending on the energy range, its lowest
soft state flux can lie a factor of 2.5--10 below the brightest hard
state flux \citep{nowak:01b,zdz:04a,belloni:05a}. \citet{nowak:01b},
e.g., find that the 3--9\,keV soft state flux can be less than half
the hard state flux, similar to what we find for GRS~1758\ensuremath{-}258
(Table~\ref{tab:fluxes}). The bolometric flux of GX~339$-$4 in the
soft state can be up to an order of magnitude lower than in the hard
state \citep{zdz:04a}, an even more extreme behavior than indicated by
our bolometric estimates for GRS~1758\ensuremath{-}258. Accordingly, the schematic picture
which has recently been developed of the ``\textsf{q}-shaped'' tracks
followed by black hole transients in the hardness-intensity diagram
over an outburst includes a large range of soft state intensities
\citep[$\gtrsim {\cal O}(1)$,][]{fender:04a}, not necessarily
exceeding the highest hard state ones. Although we concentrate on
average soft state parameters in this work, we want to note that the
hardness-intensity diagrams for the 2003 soft state that can be
derived from the energy-resolved PCA light curves show a
counterclockwise evolving pattern comparable to transients. Due to the
pronounced soft state it is slightly ``\textsf{p}-shaped'' instead, but
otherwise qualitatively very similar (a detailed quantitative
comparison is beyond the scope of this work). Overall it seems that
GRS~1758\ensuremath{-}258's dim soft state of 2003 -- and also the even dimmer one of 2001
-- are not remarkable states for a non-high-mass BHC \citep[see
also][]{remillard:05a}. This is especially true if part of the
luminosity reduction in the dim soft state is due to the inclination
effect described above. How about the overall state evolution, though?
Can the occurrence of the dim soft states be understood in the frame
of the outburst evolution scheme mentioned above? In the following two
paragraphs we discuss this question on the basis of the light curves
displayed in Figs.~\ref{fig:lcpanel} and~\ref{fig:longlc}. Note,
however, that a detailed spectral analysis of the individual PCA
pointings is beyond the scope of this paper.
The initial phase of the state change consists of a sudden drop of the
$>4$\,keV count rate around JD 2452680 (2003 February) and a
simultaneous moderate brightening of the thermal component, observed
as a $\sim$70$\%$ increase in the 2.5--4\,keV count rate. Only two
monitoring observations find the source in this phase, i.e., it lasted
roughly a week. During the following weeks the dim soft state is
observed: The hard emission does not recover until the end of 2003 April
and after the initial increase the soft emission decays slowly to a
low hard state level. The initial outburst-like phase is similar to a
canonical transition to a soft state but with the system not settling
into a state with a stable thermal component. In this sense the
episode is a ``failed state transition''. The short soft flare may
reflect an actual change in the accretion disk parameters (e.g., a
temperature change and/or a change of the inner disk
radius). Alternatively, the increase in soft photon flux could at
least partly be caused by the disappearance of the Comptonizing
medium, i.e., the soft photons acting as seed photons in the hard
state are now emerging without being Comptonized. While the 2001 dim
soft state showed no initial flaring of the 2.5--4\,keV count rate, a
soft excess compared to the hard state level also became visible in
the unabsorbed spectrum.
In contrast to the weeks long soft X-ray flares of \mbox{Cyg X-1},
however, for which the term ``failed state transition'' was coined
\citep{pottschmidt:00a,pottschmidt:03a}, GRS~1758\ensuremath{-}258 does not settle back
into the hard state after the flaring but the hard component stays
``off''. In the case of the 2001 dim soft state \citet{smith:01a}
suggested that a sudden shutoff of mass transfer from the companion is
responsible for the ``off'' phase. Put into the context of the ``q''
pattern of transient outbursts, the dim soft states of GRS~1758\ensuremath{-}258 could
therefore well represent the thermally dominated outburst phase since
the main decay track proceeds through this state
\citep{remillard:05a}. The hard state of GRS~1758\ensuremath{-}258, also covering a
considerable range of luminosities, would then correspond to phases of
rising and peak luminosities, again consistent with transient
outbursts. As mentioned in Sect.~\ref{sec:intro}, additional
intermediate states -- or failed state transitions -- of GRS~1758\ensuremath{-}258 have
been observed \citep{mereghetti:94a,goldwurm:01a,heindl:02b}, further
completing the outburst picture.
Another interesting property of the dim soft state is the fact that
the decay of the hard and soft spectral components is governed by two
different time scales. This has been studied in detail for the 2001
dim soft state by \citet{smith:01a} who found that while the power law
flux decreased by an order of magnitude from one monitoring
observation to the next, the disk black body flux decayed on a time
scale of $\sim$28\,d. This behavior is also visible in the 2003 light
curves (Fig.~\ref{fig:lcpanel}), especially in the fast decline of the
10--25\,keV rates and the much slower trend in the 2.5--4\,keV rates
after the initial drop down from the ``failed state transition''
level. The source also shows several drops of the hard component for
durations of only a few days or less, e.g., around JD 2451932
\citep[2001 January, see Fig.~\ref{fig:longlc} and][]{smith:01a},
2452364 (2002 March, Fig.~\ref{fig:longlc}), 2452788 (2003 May,
Fig.~\ref{fig:lcpanel}), or 2452855 (2003 August,
Fig.~\ref{fig:lcpanel}). All these quasi-independent changes of the
hard and soft spectral component further support the interpretation of
the behavior of GRS~1758\ensuremath{-}258 in terms of two different accretion flows. As
shown by \citet{smith:01a}, the model of \citet{chakrabarti:95a} can
explain many of the observations. As already mentioned, this model
assumes that proportional accretion rate changes introduced to both
flows at large radii propagate with nearly the free-fall time scale
through the Comptonizing medium and independently on the viscous time
scale through the accretion disk. Different propagation speeds are a
general feature of the model, i.e., they are not restricted to its
high accretion rate soft state associated with bulk motion
Comptonization. For lower accretion rates complicated dependencies of
spectral hardness and accretion rate are possible, covering the
correlation between the flux derivative and the spectral hardness as
well as the dim soft state \citep{chakrabarti:95a,smith:01a}. The
strength of these time delay effects increases for larger accretion
disks and there are indications that such a picture might be generally
applicable for Roche lobe overflow transients: a state transition due
to a sudden change in the power law component during a time when the
disk black body parameters evolved smoothly has recently also been
seen in the black hole transient \object{H1743$-$322}, in this case
marking the transition between the thermally dominant and the
intermediate state \citep{kalemci:05b}.
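The separation of the two time scales can be made plausible with
standard order-of-magnitude estimates (the values of $\alpha$ and
$H/R$ below are generic assumptions, not fit results): for a thin
$\alpha$-disk,
\[
t_{\text{ff}} \sim \left(\frac{R^3}{GM}\right)^{1/2}, \qquad
t_{\text{visc}} \sim \frac{1}{\alpha}\left(\frac{R}{H}\right)^2
t_{\text{ff}},
\]
so that for $\alpha\sim0.1$ and $H/R\sim10^{-2}$ the viscous time
exceeds the free-fall time by a factor of $\sim10^5$, naturally
producing a nearly instantaneous response of the hot flow and a weeks
long response of the disk to the same accretion rate change at large
radii.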
In addition to soft episodes we also observe several occurrences of a
rather sudden hardening (Fig.~\ref{fig:ratio}) mainly due to declines
of the soft component, visible, e.g., in the 2.5--4\,keV rates around
JD 2452797 (2003 June), 2453148 (2004 May), or 2453261 (2004
September). In the first case this is clearly related to a preceding
drop of the hard component. In the second case the drop happens at the
end of a months-long decline of the count rates in all energy bands
which is especially visible in the \textsl{INTEGRAL } range (epoch~3). The
situation is less clear for the third occurrence (during epoch~4) but
the 10--25\,keV light curve also indicates a preceding decline. Given
this overall picture and the probably affected broad band spectrum of
epoch~4, we consider it unlikely that the hard episodes are caused by,
e.g., absorption events; rather, they appear to be another example of
the quasi-independent behavior of the hard and soft component on
different time scales. Interestingly, a similar episode of sudden hardening has
also been observed for the ``two-flow source'' 1E~1740.7$-$2942
\citep{smith:01b}.
Finally, while the hard state parameters have been discussed in
Sect.~\ref{sec:fitting} already, we emphasize again that apart from
small peculiarities which might be caused by spectral variations
within the epochs (epoch~4) the epoch-summed hard state spectra can be
well described by cutoff power law and thermal Comptonization
parameters which are compatible with canonical values found for BHCs
in the hard state, e.g., Cyg~X-1.
\section{Summary}\label{sec:conclusions}
We have presented analyses of \textsl{INTEGRAL } and \textsl{RXTE } monitoring
observations of the Galactic Center BHC GRS~1758\ensuremath{-}258 obtained in 2003 and
2004. Energy-resolved light curves from 2.5 to 200\,keV have been
studied and broad band spectra accumulated over four $\sim$2--3 months
long \textsl{INTEGRAL } observing epochs have been modeled phenomenologically and
with thermal Comptonization. The main results of this work can be
summarized as follows:
\begin{itemize}
\item From 2003 February to April GRS~1758\ensuremath{-}258 entered another dim soft
state (partly covered by epoch~1) similar to but less prolonged than
the one observed in 2001.
\item Phenomenological models (dominated by a power law in the dim
soft state and an exponentially cutoff power law in the hard states)
as well as thermal Comptonization models (\texttt{compTT} and
\texttt{compPS}) allow for a good description of the epoch-summed
spectra.
\item The fit parameters obtained in the hard state are canonical BHC
hard state parameters, similar to those obtained for Cyg~X-1 and
generally consistent with previous results obtained for GRS~1758\ensuremath{-}258. The
hard state in GRS~1758\ensuremath{-}258 is known to be intrinsically variable. Since it
is not clear, however, how much of the spread in fit parameters
between the hard state epochs is due to calibration uncertainties,
we do not interpret these differences further.
\item In the dim soft state the flux is only higher than in the hard
state below 3--4\,keV. In all other energy bands the flux is
considerably lower. A tentative estimate of the bolometric
luminosity, however, shows a reduction of only 0--20\% compared to
the hard state epochs.
\item While the dim soft state is different from the soft state in
persistent HMXBs like Cyg~X-1 or \object{LMC~X-3} where softening is
associated with higher luminosities (mass accretion rates), it is
well within the range of hysteretic behavior displayed by LMXB
transients like GX~339$-$4, where a large range of soft state
intensities, not necessarily exceeding the highest hard state ones,
is observed (``q'' pattern in the hardness-intensity diagram). The
dim soft state would thus correspond to the outburst decay of a
transient. This can be understood since GRS~1758\ensuremath{-}258 most likely has a low
mass companion and is accreting via Roche lobe overflow.
\item As discovered for the 2001 dim soft state by \citet{smith:01a},
the decay of the soft and hard component progresses on different
time scales which can be understood if the accretion flows through
the cool disk and the hot plasma are independent in the sense of the
two-flow model of \citet{chakrabarti:95a}. The evolution of the
light curves, especially comparing the 2.5--4\,keV and 10--25\,keV
PCA bands during the 2003 dim soft state also suggests different
time scales for changes in the two spectral components.
\item In addition, several instances are observed during which
predominantly one of the two spectral components shows a substantial
flux change from one monitoring observation to the next, further
supporting the picture of at least partly independent flows.
\end{itemize}
\begin{acknowledgements}
We thank J\"orn Wilms for helpful discussions. This work has been
partly funded by NASA contract NAS5-30720 (KP) as well as by NASA
grants NAG5-13576 and NNG04GP41G (DMS, NB). AAZ and PL have been
supported by KBN grants 1P03D01827, 1P03D01727, 1P03D01128,
PBZ-KBN-054/P03/2001 and 4T12E04727. This work is based on
observations with \textsl{INTEGRAL}, an ESA project with instruments
and science data centre funded by ESA member states (especially the PI
countries: Denmark, France, Germany, Italy, Switzerland, Spain), Czech
Republic and Poland, and with the participation of Russia and the USA.
We thank the \textsl{RXTE } schedulers for making the years-long
monitoring campaign of GRS~1758\ensuremath{-}258 possible. KP thanks the Aspen Center for
Physics for its hospitality during the final stages of the preparation
of this paper.
\end{acknowledgements}
\section{Learning Feature Interactions}
\label{sec:learning}
\begin{figure*}[th!]
\centering
\scalebox{0.8}{\includegraphics[width=\linewidth]{pictures/Framework_with_specifications.png}}
\caption{Proposed Framework to Detect Unwanted Feature Interactions in a Software Product Line}
\label{fig:framework}
\end{figure*}
In this section we describe the process by which we aim to learn unwanted feature interactions. In Section \ref{sec:learningspecs} we describe how our FINCH approach learns unwanted feature interactions whenever specifications of constraints on permissible feature combinations exist for a software system or product line. In Section \ref{sec:learningnospecs} we describe how GOLDFINCH, our generalized FINCH approach, can detect potential, unwanted feature-relevant dependencies even when feature-constraint specifications are not available.
\subsection{Learning unwanted feature interactions with specifications}
\label{sec:learningspecs}
Figure \ref{fig:framework} shows the steps FINCH takes to predict unwanted feature interactions when specifications of feature-constraints are available for it to use.
In Step 1, the implementations of the existing products, here C code, are input to the symbolic execution engine to extract the stack traces and path constraints related to each product in our repository. Stack traces contain the functions called, and path constraints contain the constraints along a path. Figures \ref{fig_approach_stack_failure}, \ref{fig_approach_path_failure}, \ref{fig_approach_stack_normal} and \ref{fig_approach_normal_path} show the stack traces and path constraints for an unwanted feature-interaction failure path and for a normal path, respectively. In Step 2, we automatically preprocess, clean and extract the functions that are called as well as the constraints from the files. In Step 3 we build a bag-of-words model
\cite{aggarwal2012survey} used as input to the learning algorithms. Therefore, we have two different sources of data, functions in the stack trace and atomic constraints from the path condition. We also investigate a combination of stack trace and path constraints data, which we will refer to as {\em combined data}. In Step 4 the saved learning models are used to answer developers' and product-line engineers' queries about possible unwanted feature interactions in new software product-line products.
We use supervised learning to learn from existing failure paths and success paths related to unwanted feature interactions and label a future path. When a developer combines a set of features to build a new product, the classifier helps developers know early on if this combination of features produces unwanted feature interactions in a new product. We describe in detail the use of our learning algorithms to classify the bag-of-word models in Section \ref{sec:results} below.
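As an illustration of Steps 3 and 4, the following pure-Python sketch builds bag-of-words vectors over stack-trace tokens and trains a tiny multinomial naive Bayes classifier to label a new path. This is not the FINCH implementation; the traces, labels, and token names are invented for the example, and a real system would use a full-featured library classifier.

```python
# Illustrative, pure-Python sketch (not the FINCH implementation) of the
# bag-of-words step: each path becomes a token-count vector over the
# functions in its stack trace, and a tiny multinomial naive Bayes
# classifier labels new paths. All traces and labels are toy data.
from collections import Counter
import math

def bag_of_words(trace):
    """Token counts for one path (functions and/or atomic constraints)."""
    return Counter(trace.split())

# Toy training data: 1 = unwanted-interaction path, 0 = normal path.
train = [
    ("mail deliver verify forward", 1),
    ("mail deliver sign outgoing", 0),
    ("mail encrypt forward outgoing", 1),
    ("mail deliver outgoing", 0),
]

vocab = sorted({t for trace, _ in train for t in trace.split()})
class_tokens = {0: Counter(), 1: Counter()}
class_docs = Counter()
for trace, label in train:
    class_tokens[label].update(bag_of_words(trace))
    class_docs[label] += 1

def predict(trace):
    """Multinomial naive Bayes with Laplace smoothing."""
    scores = {}
    for label in (0, 1):
        total = sum(class_tokens[label].values())
        score = math.log(class_docs[label] / len(train))
        for tok, n in bag_of_words(trace).items():
            if tok not in vocab:
                continue  # ignore tokens never seen in training
            p = (class_tokens[label][tok] + 1) / (total + len(vocab))
            score += n * math.log(p)
        scores[label] = score
    return max(scores, key=scores.get)

print(predict("mail verify forward outgoing"))  # -> 1 on this toy data
```

On this toy data, a query path containing the verify/forward tokens is classified as an unwanted-interaction path, mirroring the developer query of Step 4.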
\subsection{Learning feature interactions without specifications}
\label{sec:learningnospecs}
\begin{figure*}[th!]
\centering
\scalebox{0.8}{ \includegraphics[width=\linewidth]{pictures/framework_without_specc.png}}
\caption{Proposed Framework to Detect Unwanted Feature Interactions in a Highly Configurable System without Specifications}
\label{fig:framework_withou_specs}
\end{figure*}
What if there are no specifications of constraints on the feature combinations, or the specifications that do exist are partial, outdated or inaccurate? This is common in practice for both long-lived software product lines and configurable systems; for example, Fischer et al. reported in 2018 that it was difficult for them to find variability models that matched the implementations \cite{fischer2018predicting}.
This lack of feature-constraint specifications seriously complicates detection of unwanted feature interactions in a new product or configuration.
GOLDFINCH offers an automated way to learn unwanted feature interactions even in the absence of feature-constraint specifications.
Figure \ref{fig:framework_withou_specs} shows how GOLDFINCH detects potential unwanted feature interactions when we do not have any feature-constraint specifications. Steps 1 and 2 were described in Section \ref{sec:modex}.
Features in a C preprocessor-based highly configurable system are defined in the code by \#ifdef blocks \cite{hunsen2016preprocessor}.
We use the
cppstats tool \cite{cppstats}, which measures preprocessor-based variability, to automatically locate features and store this information in the repository, as shown in Figure \ref{fig:framework_withou_specs}.
In Step 1, we apply our program analysis tool PROMPT on C preprocessor-based code of a highly configurable system \cite{yavuz2020analyzing}. PROMPT performs component-level symbolic execution and supports various types of environment modeling \cite{yavuz2020tutorial}. We here extended PROMPT to output the data-flow dependencies of the C preprocessor-based code of a highly configurable system as explained in Section \ref{sec:extraction}. We focus on two types of memory dependencies, \textbf{Store-Load} and \textbf{Store-Store}, obtaining pairs of instructions that access the same memory location.
We use these data-flow dependencies \cite{rhein2018variability} as a useful source of data toward detecting potential unwanted feature interactions. This is because in many cases a \textbf{Feature Dependency} occurs, i.e., one feature reads or writes data at a memory location that another feature has already changed.
\begin{figure}[!h]
\centering
\scalebox{1}{
\includegraphics[width=3.5in]{pictures/example_data_dependency.png}}
\caption{An Explanation of a Store-Store Data Flow Dependency that Occurs in the ls.c File of coreutils in BusyBox 1.32.0.}
\label{fig_source-destination_feature}
\end{figure}
\begin{figure}[th!]
\centering
\includegraphics[width=\linewidth]{pictures/feature_dependency_prompt.png}
\caption{A Snapshot of Feature Dependencies in coreutils of BusyBox 1.32.0}
\label{fig:feature_dependencies}
\end{figure}
In Step 2, GOLDFINCH uses this feature location data to automatically relate each line of a data-flow dependency to its corresponding features. For example, Figure \ref{fig_source-destination_feature} displays a Store-Store data-flow dependency that occurs in lines 1173 and 1181 of the ls.c file of coreutils of BusyBox 1.32.0. In Step 1, PROMPT outputs this pair of lines. In Step 2, we automatically find the corresponding features related to these lines, labeling them as \textbf{Source-Features} and \textbf{Destination-Features}. In this same way, we obtain all feature-dependency pairs of our configurable system.
In a highly configurable system, there may be many feature dependency records.
In Step 3, we learn from the large set of feature dependency pairs extracted in Step 2.
To automate learning from the existing feature dependency records, we use frequent item set mining, i.e., association rule mining, to learn the most frequent dependent features. We apply the Apriori algorithm \cite{agrawal1994fast}, an unsupervised learning method for frequent item set mining and association rule learning over relational databases. Association rule mining enables us to learn which pairs of features occur together based on the large amount of data we have \cite{borgelt2012frequent,rhein2018variability}.
The Apriori algorithm uses two parameters, ``support'' and ``confidence''. Support is the frequency of an item (set) in the dataset; confidence measures how likely one item is to occur given the other. Each record in our use of the Apriori algorithm is a set of two items. An item is a feature, and an item set refers to features that are dependent and control the same memory region. For example, one input to the Apriori algorithm is an item set of two features {\tt ENABLE\_FEATURE\_LS\_RECURSIVE} and {\tt ENABLE\_FEATURE\_LS\_SORTFILE} that control the variable {\tt option\_mask32} and are dependent. In some cases, an ``OR'', ``AND'' or ``NOT'' combination of features controls a memory region. We consider the whole logical expression as an item. For example, the logical expression {\tt ENABLE\_UNEXPAND} \&\& {\tt ENABLE\_UNICODE\_SUPPORT} controls a memory region and thus is considered an item.
Figure \ref{fig:feature_dependencies} shows a snapshot of feature dependencies in coreutils of BusyBox 1.32.0. Figure \ref{fig:set_rep} shows how we convert the feature dependencies shown in Figure \ref{fig:feature_dependencies} to an item set in order to feed it into the Apriori algorithm.
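The pair-mining step can be sketched as follows. This is a minimal, self-contained illustration in the spirit of the Apriori algorithm rather than the production GOLDFINCH code; the feature names, records, and thresholds are invented for the example.

```python
# Minimal sketch of frequent-pair mining in the spirit of Apriori
# (not the production GOLDFINCH code). Each record is the item set of
# one feature-dependency pair; all records below are illustrative.
from itertools import combinations
from collections import Counter

records = [
    {"LS_RECURSIVE", "LS_SORTFILE"},
    {"LS_RECURSIVE", "LS_SORTFILE"},
    {"LS_RECURSIVE", "LS_COLOR"},
    {"LS_SORTFILE", "LS_TIMESTAMPS"},
]
min_support = 0.5      # fraction of records containing the item set
min_confidence = 0.6   # P(second item | first item)

n = len(records)
item_count = Counter(i for r in records for i in r)
# Apriori pruning: only items frequent on their own can appear in pairs.
frequent_items = {i for i, c in item_count.items() if c / n >= min_support}

pair_count = Counter()
for r in records:
    for pair in combinations(sorted(r & frequent_items), 2):
        pair_count[pair] += 1

rules = []
for (a, b), c in pair_count.items():
    if c / n < min_support:
        continue
    conf = c / item_count[a]           # confidence of rule a => b
    if conf >= min_confidence:
        rules.append((a, b, c / n, conf))

for a, b, sup, conf in rules:
    print(f"{a} => {b}  support={sup:.2f} confidence={conf:.2f}")
```

On this toy input the single surviving rule associates {\tt LS\_RECURSIVE} with {\tt LS\_SORTFILE}, the kind of dependent-feature pair GOLDFINCH reports to developers.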
\begin{figure}[th!]
\centering
\scalebox{0.6}{
\includegraphics[width=\linewidth]{pictures/set_rep.png}}
\caption{Set Item Representation of Feature Dependency for Unsupervised Learning}
\label{fig:set_rep}
\end{figure}
{\it Encoding.} The input to the Apriori algorithm is a set representation of items, here the features, used to find the relevant items. The feature-dependency pairs thus must be encoded with the information as to whether the feature is a source feature or a destination feature, as well as the data-flow dependency type. We encode each feature dependency in the frequent item set as follows: (1) each feature can be related to the source or the destination in each data flow dependency; (2) each item is labeled with data-flow dependency type; and (3) each feature controls the source or destination part of the data-flow dependency.
Figure \ref{fig:set_rep} gives an explanatory example of the process of encoding a data-flow dependency into a set representation of feature dependency appropriate for input to the Apriori algorithm. It shows the existence of a
data-flow dependency in lines 1173 and 1181. The data-flow dependency type is Store-Store, meaning that both lines write to the same variable, namely option\_mask32. Line 1173 is in the scope of
feature LS\_RECURSIVE. The memory access type is a Store (S).
We thus encode this feature as $LS\_RECURSIVE\_Source\_\{Store-Store\}\_Store$ for input to the Apriori algorithm.
In this way, we can encode the dependency type, source/destination, and the specific feature combination expression into the set representation used by the Apriori algorithm to learn the features most dependent on each other.
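A minimal sketch of this encoding step follows; the helper function and the record it encodes are illustrative stand-ins for the actual implementation, using the field layout described above.

```python
# Hedged sketch of the encoding step: turning one data-flow dependency
# record into the two item strings fed to the Apriori algorithm.
# The field layout mirrors the paper's example; the record is made up.
def encode(feature, role, dep_type, access):
    """role: 'Source' or 'Destination'; dep_type: e.g. 'Store-Store';
    access: the memory access type on this side ('Store' or 'Load')."""
    return f"{feature}_{role}_{{{dep_type}}}_{access}"

# A Store-Store dependency between lines 1173 and 1181 of ls.c:
src = encode("LS_RECURSIVE", "Source", "Store-Store", "Store")
dst = encode("LS_SORTFILE", "Destination", "Store-Store", "Store")
print(src)  # LS_RECURSIVE_Source_{Store-Store}_Store
print(dst)  # LS_SORTFILE_Destination_{Store-Store}_Store
```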
The output of Step 3 is a set of association rules showing features that are dependent on each other. For example, the following association rule shows that the two features $LS\_RECURSIVE$ and $LS\_SORTFILE$ are dependent on each other: \newline
$LS\_RECURSIVE\_Source\_\{Store\_Store\}\_Store \iff$ \\
$LS\_SORTFILE\_Destination\_\{Store\_Store\}\_Store$. \\
Not all frequent item sets detected will be of interest. We thus retain for developers' consideration only those pairs whose source and destination features differ. That is, if the Source-Feature and Destination-Feature in a pair are identical, we exclude the pair, as in record number 10 of Figure \ref{fig:feature_dependencies}.
In Step 4, the final filtered pairs are stored, and GOLDFINCH reports the association rules to the developers, since these identify feature interactions
to be checked for whether they are unwanted.
The intended usage scenario is for results to be made available to developers, either through system-wide reports or through queries regarding specific feature combinations of concern.
\begin{table*}[th!]
\caption{Description of the three Software Product Lines}
\label{tab:case_studies}
\centering
\begin{footnotesize}
\begin{tabular}{|l|c|c|c|c|c|c|}
\hline
SPL name & \# Features & \# Products & \# Feature & \# Normal & \# Unwanted Feature & \% Failure \\
& & & Interaction & path & Interaction path & Ratio\\
\hline
\hline
Email \cite{apel2013feature} & 9 &36 &10 &13595 & 101 & 0.7\\
Elevator \cite{apel2013feature} & 6 & 20 & 5 & 84 & 49 & 36.9 \\
Mine Pump \cite{apel2013feature} & 7 &64 & 5 &27775 & 96 & 0.34\\
\hline
\end{tabular}
\end{footnotesize}
\end{table*}
\begin{table}[th!]
\caption{Features in the Email Software Product Line, extracted from \cite{apel2011detection}}
\begin{center}
\begin{footnotesize}
\begin{tabular}{|c|c|}
\rowcolor{lightgray}
\hline
Feature name & Short description\\
\hline
Addressbook & manage e-mail contacts
\\
\hline
Autoresponder & respond to e-mails\\
\hline
EmailClient or Base & basic e-mail client\\
\hline
Decrypt & decrypt incoming e-mails\\
\hline
Encrypt & encrypt outgoing e-mails\\
\hline
Forward & forward incoming e-mails\\
\hline
Sign & sign outgoing e-mails\\
\hline
Verify & verify e-mail signatures\\
\hline
\end{tabular}
\end{footnotesize}
\label{tab1}
\end{center}
\end{table}
\begin{table}[th!]
\caption{Known Feature Interactions, with their Identifiers from the Original Sources \cite{hall2005fundamental,apel2011detection}}
\begin{center}
\begin{footnotesize}
\begin{tabular}{|c|c|}
\hline
\rowcolor{lightgray}
Feature Interaction Id & Features Involved\\
\hline
0 & Decrypt, Forward\\
\hline
1 & Addressbook, Encrypt\\
\hline
3 & Sign, Verify\\
\hline
4 &Sign, Forward\\
\hline
6 & Encrypt, Decrypt\\
\hline
7 & Encrypt, Verify\\
\hline
8 & Encrypt, Autoresponder\\
\hline
9 & Encrypt, Forward\\
\hline
11 & Decrypt, Autoresponder\\
\hline
27 & Verify, Forward\\
\hline
\end{tabular}
\end{footnotesize}
\label{tab:known_fi_email}
\end{center}
\end{table}
In summary, we use the Apriori algorithm to learn which features depend upon each other, either in a highly configurable system or in a software product line's systems. This enables automated detection of feature dependencies indicative of possible unwanted feature interactions even when no feature-constraint specifications exist.
\subsection{Case Studies}
Table \ref{tab:case_studies} shows the three software product lines used in our evaluation: the Email product line
\cite{hall2005fundamental}, the Elevator product line \cite{plath2001feature}, and the Mine Pump product line \cite{kramer1983conic}. All three have been used as benchmarks in the software product line literature \cite{apel2011detection, apel2013strategies} and have a variety of features with potential, unwanted interactions causing failed executions if they occur. We use the software product line versions written in C and provided in \cite{apel2013strategies}.
As an example, we look more closely at the Email software product line. Table \ref{tab1} shows the eight available features (units of functionality) in it. The base feature is Email Client, which is shared by all products. There are ten pairwise known unwanted feature interactions as shown in Table \ref{tab:known_fi_email}.
By ``known unwanted feature interactions'', we mean that they are documented in the product line's feature specifications. We refer the reader to \cite{colder2000feature} and \cite{hall2005fundamental} for details of the other interactions shown in the table. The Elevator product line similarly has six features and five unwanted feature interactions specified, and the Mine Pump product line has seven features and five unwanted feature interactions specified.
We also investigated and refined our approach through application on a large, highly configurable system. BusyBox is a collection of many Unix utilities implemented in a single executable file. We used the latest version of BusyBox 1\_32\_0, and
focused on the \textbf{coreutils} section of BusyBox, as it had the most problematic interactions according to a prior study \cite{abal2018variability}.
Coreutils of BusyBox 1\_32\_0 has 139 features and about 19k lines of code. With 139 features, we can have 9591 pairwise options, making it
a very large configurable system \cite{thianniwet2016scaling}.
\section{Acknowledgements}
This work was supported in part by the National Science Foundation under grants CCF-1513717 and CNS-1942235.
\bibliographystyle{plain}
\section{Feature Relevant Model Extraction}
\label{sec:modex}
In this section, we present extraction of various feature relevant models from the source code using program analysis.
Section \ref{sec:featuremodels} presents the types of models we use,
and Section \ref{sec:extraction} explains their extraction using symbolic execution.
\subsection{Source Code Level Feature Models}
\label{sec:featuremodels}
Figure \ref{fig:overview} in Section \ref{sec:introduction} showed an overview of our approach. The first step toward predicting unwanted feature interactions, described here, is to extract the feature-relevant models we need from the code.
\subsubsection{Control-Flow Models}
\label{sec:cfmodels}
The control-flow information on
an execution path can be described in various ways.
One basic way is to describe it in terms of the sequence of executed
instructions. Although this model would be very detailed, it is
difficult to make associations with the features at the level
of instructions without additional context information.
Another way is to describe control-flow in terms of the set of stack traces that get generated. A {\it stack trace} provides a snapshot of a possible way of reaching a program location in terms of a sequence of callsites and provides context information at the level of functions and code locations. Features are often implemented by a set of
functions, or they interact in the context of certain functions.
Therefore, a stack trace can potentially provide information
relevant to a single feature or to the interaction of multiple features.
In this work, we use stack trace information as a control-flow model.
\subsubsection{Data-flow Models}
\label{sec:dfmodels}
A data-flow model provides information about the values of program
variables at various program locations and how these values flow from
one program variable to another.
Variables take different roles depending on whether they are input variables, other variables used internally to carry computations, or output variables.
Input variables may directly or indirectly relate to a feature.
Therefore, values of the input variables may control
how a feature behaves individually as well as
how features behave collectively.
Additionally, the data-flow between two variables
may determine
observable variations in behavior that occur when certain features are
enabled or disabled. We consider two types of dependencies:
{\em store-load} and {\em store-store}.
\begin{definition}[Store-Load]
There is a store-load dependency between a store instruction $s$ and
a load instruction $l$ if they both access the same memory region, $s$ happens before $l$, and $l$ reads the value stored by $s$, i.e., there
is no other intervening store between $s$ and $l$.
\end{definition}
\begin{definition}[Store-Store]
There is a store-store dependency between two store instructions $s_1$
and $s_2$ if they both write to the same memory region without any
other intervening write to that memory region between the first one and the second one.
\end{definition}
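The two definitions above can be checked mechanically over a trace of memory-access events. The following toy Python sketch (instruction ids and addresses are invented) collects both kinds of pairs while tracking the most recent store per address, so no pair has an intervening store between its members:

```python
# Illustrative sketch of the store-load and store-store definitions:
# replay a trace of (instruction-id, op, address) memory events and
# collect dependency pairs, remembering only the most recent store per
# address so each recorded pair has no intervening store.
def dependencies(trace):
    last_store = {}          # address -> instruction id of latest store
    store_load, store_store = [], []
    for inst, op, addr in trace:
        if op == "store":
            if addr in last_store:
                store_store.append((last_store[addr], inst))
            last_store[addr] = inst
        elif op == "load":
            if addr in last_store:
                store_load.append((last_store[addr], inst))
    return store_load, store_store

# Toy trace: two stores to the same address, then a load of it.
trace = [(1, "store", 0x10), (2, "store", 0x10), (3, "load", 0x10)]
sl, ss = dependencies(trace)
print(ss)  # [(1, 2)]   store-store between instructions 1 and 2
print(sl)  # [(2, 3)]   load 3 reads the value stored by instruction 2
```

Note that instruction 1 and load 3 form no store-load pair, since store 2 intervenes, exactly as the definitions require.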
In this work, we thus extract and use two types of data-flow models: 1) data constraints on the feature-related input variables, and 2) the store-load and store-store dependencies of the feature-related variables.
\subsection{Extracting Feature Models}
\label{sec:extraction}
We use \emph{dynamic symbolic execution} to extract our feature models due to its ability to
provide the stack-trace-based control-flow model and
the data-flow models described in Section
\ref{sec:featuremodels}.
We use the PROMPT tool \cite{yavuz2020analyzing}, which is designed to work at the component level. PROMPT extends the KLEE symbolic execution engine \cite{cadar2008klee} using lazy initialization and, therefore, can be applied at the function-level without the need for a
test-driver. This enables extraction of feature relevant models at
the component-level.
In dynamic symbolic execution, inputs are labeled as symbolic to
represent the fact that they can take any values.
The underlying symbolic execution engine keeps the memory as a
map from addresses to values that can be concrete values or
symbolic expressions.
The engine interprets each instruction according to
its operational semantics. If all the operands have concrete values, then
the standard semantics is applied.
If any of the operands has a symbolic expression as its value then the engine either produces another symbolic
expression as a result (for non-branching instructions) or uses the symbolic expression to decide the feasibility of the branch targets (for branching instructions). Branching due to symbolic expressions may also happen in non-branching instructions such as load and store if the address is a symbolic expression. In that case, a separate path is generated for each memory object that may correspond to the symbolic
expression. So, dynamic symbolic execution generates
a symbolic execution tree, where the leaf nodes correspond to
different execution paths
and the internal nodes with multiple children represent branching decisions due to symbolic expression evaluation.
Symbolic execution explores the underlying program for
feasible paths.
However, due to the well-known path explosion problem, symbolic execution may fail to explore all possible paths.
In this work, we apply symbolic execution at the component-level
as implemented in the PROMPT tool \cite{yavuz2020analyzing} to deal with the path explosion problem.
Algorithm \ref{alg:baselinesymex} shows the baseline
symbolic execution algorithm as implemented in KLEE.
It keeps track of the set of active paths $active$, which gets initialized with the initial state of a given program $P$.
As long as there are some active paths and the timeout $\tau$ has
not been reached, it executes
the next instruction $inst$ on the current path $s$ and updates $active$ with the successors of $s$, which is denoted by a call to $\mathit{executeAndUpdate}$.
Branch instructions
and memory access instructions (load/store) with symbolic addresses
may generate multiple successors and lead to the creation of new
paths, which become the children of $s$ in the generated symbolic execution tree.
Another important side-effect of the $\mathit{executeAndUpdate}$ operation is to update the path condition on the current path $s$
as well as on its successors. The path condition, $\mathit{PC}$, is the conjunction
of all the symbolic constraints that have been found to be feasible on
the current path. These constraints originate either from the conditions of the branching instructions or the constraints on the symbolic addresses, e.g., whether a symbolic index expression evaluates to a valid range or not for a given memory region, which is modeled as an array of bytes in KLEE. The feasibility of these constraints
is checked using an SMT solver.
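The worklist loop of Algorithm \ref{alg:baselinesymex} and the growth of path conditions can be illustrated with a toy Python sketch; feasibility checking, done with an SMT solver in a real engine such as KLEE, is deliberately stubbed out here, and the miniature instruction format is invented for the example.

```python
# Toy sketch of the worklist in the baseline algorithm: each path
# carries a path condition (PC); a branch on a symbolic condition forks
# the path and conjoins the condition (resp. its negation). Feasibility
# checking with an SMT solver is stubbed out in this illustration.
def symex(program):
    active = [(0, [])]               # (program counter, PC as a list)
    finished = []
    while active:
        pc, cond = active.pop()
        if pc >= len(program):
            finished.append(cond)    # leaf of the symbolic execution tree
            continue
        kind, arg = program[pc]
        if kind == "branch":         # symbolic branch: fork both targets
            active.append((pc + 1, cond + [arg]))
            active.append((pc + 1, cond + [f"!({arg})"]))
        else:                        # non-branching instruction
            active.append((pc + 1, cond))
    return finished

# Two symbolic branches produce a tree with four leaf paths.
prog = [("branch", "x > 0"), ("assign", "y = x"), ("branch", "y < 10")]
paths = symex(prog)
print(len(paths))  # 4
```

Each returned list is one leaf's path condition, e.g. ["x > 0", "!(y < 10)"]; a real engine would discard leaves whose conjunction the solver reports infeasible.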
For this project we have extended the symbolic execution engine: 1) to record the
stack traces as a sequence of callsites and the path constraints as a set of atomic constraints, 2) to distinguish paths with respect to normal termination versus termination with an error, and 3) to compute store-load and store-store dependencies.
\begin{algorithm}
\caption{Baseline Symbolic Execution Algorithm.}
\label{alg:baselinesymex}
\begin{footnotesize}
\begin{algorithmic}[1]
\State {\bf BaselineSymEx}($P$: $PL$, $\tau$: $\mathcal{Z}$):
\State $active \gets \{ \mathit{initState(P)} \} $
\While{$active \not = \emptyset$ AND $\tau$ not reached}
\State $s \gets \mathit{chooseNextState}(active)$
\State $inst \gets nextInst(s)$
\State $\mathit{executeAndUpdate}(s, inst, active)$
\EndWhile
\end{algorithmic}
\end{footnotesize}
\end{algorithm}
Algorithm \ref{alg:symex} shows how we extend the baseline symbolic execution algorithm to detect feature-relevant interactions. Specifically, we introduce metadata that stores control-flow and
data-flow dependencies and introduce Algorithm \ref{fig:executeTrack} to extend the $\mathit{executeAndUpdate}$ algorithm that performs symbolic execution of an instruction within the context of a
symbolic execution path.
To keep track of the terminated paths according to their termination status, we detect instructions that cause termination or an error and
update the relevant metadata, $\mathit{failTerm}$ or $\mathit{normalTerm}$, which denote the set of paths terminated with an error and those that terminated normally, respectively.
As shown in Algorithm \ref{fig:executeTrack}, at every callsite, we compute the current call sequence and update the
set of call sequences for the current path in the map, $T$.
$SM$ keeps track of the most recent store on a memory object for each state.
For each store instruction, we record the store-store dependency with the most recent instruction that wrote into the same memory region in $SS$, if any. The algorithm also updates the most recent store to the relevant memory region in $SM$.
For each load instruction, it records the store-load dependency
with the most recent instruction that stored into the same memory region in $SL$, if any.
\begin{algorithm}
\caption{Symbolic Execution Based Extraction of Control-flow and Data-flow Dependencies.}
\label{alg:symex}
\begin{footnotesize}
\begin{algorithmic}[1]
\State {\bf ExtractFeatureModels}($P$: $PL$, $\tau$: $\mathcal{Z}$, $L$: $\mathcal{Z}$):
\State $normalPaths, failPaths$:
$\mathcal{P}(\mathcal{P}(Sequence) \times \mathcal{P}(Constraint))$ \Comment{Normal/failure termination path models}
\State $SM$: $State \times Address \mapsto Instruction$ \Comment{Store Map}
\State $SS$, $SL$: $\mathcal{P}(Instruction \times Instruction)$ \Comment{Store-store/Store-load pairs}
\State $T: State \times \mathcal{Z} \mapsto \mathcal{P}(Sequence)$
\State $SM \gets \lambda s.a. \text{undefined}$
\State $SS \gets SL \gets \emptyset$
\State $T = \lambda s.l. \emptyset$
\State $active \gets \{ \mathit{initState}(P) \} $
\State $normalTerm, failTerm \gets \emptyset$
\While{$active \not = \emptyset$ AND $\tau$ not reached}
\State $s \gets \mathit{chooseNextState}(active)$
\State $inst \gets \mathit{nextInst}(s)$
\State $\mathit{executeTrackAndUpdate}(s, inst, active, normalTerm, failTerm)$
\EndWhile
\State $normalPaths, failPaths\gets \emptyset$
\For{$s \in normalTerm$}
\State $normalPaths \gets normalPaths \cup \{(chooseLongest(T(s), L),$ $Atomic(s.PC))\}$
\EndFor
\For{$s \in failTerm$}
\State $failPaths \gets failPaths \cup \{(\{callSeq(s.stack)\}, Atomic(s.PC))\}$
\EndFor
\State $SSPairs \gets \bigcup_{s \in normalTerm \cup failTerm} SS(s)$
\State $SLPairs \gets \bigcup_{s \in normalTerm \cup failTerm} SL(s)$
\State {\bf return} $(normalPaths, failPaths, SSPairs, SLPairs)$
\end{algorithmic}
\end{footnotesize}
\end{algorithm}
\begin{algorithm}
\caption{The Extended Instruction Execution for Tracking Data-flow.}
\label{fig:executeTrack}
\begin{footnotesize}
\begin{algorithmic}[1]
\State {\bf executeTrackAndUpdate}($s$, $inst$, $active$, $normalTerm$, $failTerm$)
\State $\mathit{executeAndUpdate}(s, inst, active)$
\For{each successor $s'$ of $s$}
\If{$inst$ is a callsite for function $F$}
\State $cseq \gets add(callSeq(s'.stack),F)$
\State $l \gets length(cseq)$
\State $T \gets T[s' \rightarrow T(s')[l \rightarrow T(s')(l) \cup \{cseq\}]]$
\EndIf
\If{$s'$ terminates with an error}
\State $failTerm \gets failTerm \cup \{s'\}$
\EndIf
\If{$s'$ terminates normally}
\State $normalTerm \gets normalTerm \cup \{s'\}$
\EndIf
\If{$inst \equiv store \ V \ to \ A$}
\State Let $m$ denote the memory object that the store address $A$ corresponds to on path $s'$
\If{$SM(s',m.baseAddress)$ is defined}
\State $SS(s') \gets SS(s') \cup \{(SM(s',m.baseAddress), inst)\}$
\EndIf
\State $SM(s',m.baseAddress) \gets inst$
\EndIf
\If{$inst \equiv load \ A$}
\State Let $m$ denote the memory object that the load address $A$ corresponds to on path $s'$
\If{$SM(s',m.baseAddress)$ is defined}
\State $SL(s') \gets SL(s') \cup \{(SM(s',m.baseAddress), inst)\}$
\EndIf
\EndIf
\EndFor
\end{algorithmic}
\end{footnotesize}
\end{algorithm}
\subsection{Examples of Extracted Models}
\label{sec:modelexamples}
In this section, we provide examples of the models that we extracted
from three software product-line benchmarks and the BusyBox configurable software system.
Figures \ref{fig_approach_stack_failure} and \ref{fig_approach_stack_normal} show examples of control-flow models in the form of stack traces for a failure path and a normally terminated path, respectively. The red lines in Figure \ref{fig_approach_stack_failure}
indicate post-processing we perform to clean the data: to avoid bias, we exclude function calls that check the feature-constraint specifications in the code, e.g., {\tt \_\_automaton\_fail}.
\begin{figure}[!h]
\centering
\includegraphics[scale=0.6]{pictures/stack_trace.png}
\caption{Stack Trace of a Verify-Forward Feature-Interaction Failure Path}
\label{fig_approach_stack_failure}
\end{figure}
\begin{figure}[!h]
\centering
\includegraphics[scale=0.4]{pictures/verify_forward_normal_stacktraces.png}
\caption{Stack Trace of a Normal Path of Verify-Forward }
\label{fig_approach_stack_normal}
\end{figure}
Figures \ref{fig_approach_path_failure} and \ref{fig_approach_normal_path}
show examples of data-flow models in the form of path constraints for a failure path and a normally terminated path, respectively.
The constraints are shown in the KQuery format used by the KLEE symbolic engine. The formula {\tt Eq 1 (ReadLSB w32 0 isEncryptedRes)} corresponds to the atomic constraint $\mathit{isEncryptedRes} = 1$.
\begin{figure}[!h]
\centering
\includegraphics[scale=0.4]{pictures/verify_forward_failure_pathconstraints_shorter.png}
\caption{Path Constraints of a Verify-Forward Unwanted Feature-Interaction Path in KQuery Format.}
\label{fig_approach_path_failure}
\end{figure}
\begin{figure}[!h]
\centering
\includegraphics[scale=0.4]{pictures/verify_forward_normal_pathconstraints.png}
\caption{Path Constraints of a Normal Path of Verify-Forward }
\label{fig_approach_normal_path}
\end{figure}
Path constraints typically represent control-flow decisions with regard to symbolic inputs. Here, however, we also collect and record certain data values in the path constraints. To do this, we instrument the code that is relevant to checking feature constraints
and introduce symbolic metadata variables.
\begin{figure}[th!]
\centering
\begin{footnotesize}
\begin{verbatim}
void VerifyForward_spec__1(int client , int msg )
{ int pubkey ;
int tmp, tmp___0, tmp___1 ;
puts("before deliver\n");
tmp___1 = isVerified(msg);
if (tmp___1) {
tmp = getEmailFrom(msg);
tmp___0 = findPublicKey(client, tmp);
pubkey = tmp___0;
if (pubkey == 0)
__automaton_fail();
}
return;
}
\end{verbatim}
\end{footnotesize}
\caption{An Example Code that Checks the Specification for the Unwanted
Feature Interaction between the {\tt Verify} and {\tt Forward} features.}
\label{fig:spec}
\end{figure}
Specifically, we identify and annotate functions that return boolean or integer values, as such functions are commonly used to check certain conditions and may be used in feature constraint checking, as shown in Figure \ref{fig:spec}. There, the function {\tt VerifyForward\_spec\_\_1} checks
if the email has been verified and, if so, asserts that the
public key is available for encryption before the message is forwarded.
Figure \ref{fig:isver} shows the annotated version of the {\tt isVerified} function from the Email product line. First, we introduce a symbolic metadata variable, {\tt isVerifiedRes} (line 1), and make it symbolic using the {\tt klee\_make\_symbolic} intrinsic function (line 4). Then we constrain the current symbolic execution path with a generic equality predicate,
{\tt metadata variable == returned value}, using the {\tt klee\_assume} intrinsic function (lines 9, 14, and 18). This annotation has no side effect on the program semantics, as it does not modify any
variable of the original program. Nor does it affect the control flow, since the assumed equality is consistent with the value actually returned. Also, each time the function is called, a new instance of the symbolic metadata variable is created, which ensures that the values returned at different callsites do not interfere with one another. Figure \ref{fig_approach_path_failure} gives an example of multiple instances of these
symbolic metadata variables: there, {\tt findPublicKeyRes} ends up having two instances/versions, {\tt findPublicKeyRes\_1} and {\tt findPublicKeyRes\_2}.
\begin{figure}[th!]
\centering
\begin{footnotesize}
\begin{verbatim}
int isVerifiedRes;
int isVerified(int handle )
{ int retValue_acc ;
klee_make_symbolic(&isVerifiedRes,sizeof(int),
"isVerifiedRes");
{
if (handle == 1) {
retValue_acc = ste_email_isSignatureVerified0;
klee_assume(isVerifiedRes == retValue_acc);
return (retValue_acc);
} else {
if (handle == 2) {
retValue_acc = ste_email_isSignatureVerified1;
klee_assume(isVerifiedRes == retValue_acc);
return (retValue_acc);
} else {
retValue_acc = 0;
klee_assume(isVerifiedRes == retValue_acc);
return (retValue_acc);
}
}
klee_assume(isVerifiedRes == retValue_acc);
return (retValue_acc);
}
}
\end{verbatim}
\end{footnotesize}
\caption{An Annotated Version of the {\tt isVerified} Function that Saves a Symbolic Constraint about the Return Value Using the Metadata Variable {\tt isVerifiedRes}.}
\label{fig:isver}
\end{figure}
Finally, two examples of data-flow dependencies between feature-relevant instructions are given in Figure \ref{fig:ssexample}. They appear in lines 1173 and 1181 (line numbers are inserted as comments here) of coreutils/ls.c in BusyBox 1.32.0.
There are both a feature-relevant store-store dependency and a store-load dependency between lines 1173 and 1181: store instructions
access the {\tt option\_mask32} variable at lines 1173 and 1181, and
a load instruction accesses the same variable at line 1181.
\begin{comment}
\begin{figure}[th!]
\centering
\begin{footnotesize}
\begin{minted}[mathescape,
linenos,numbersep=2pt,autogobble,xleftmargin=0.03\textwidth,,breaklines]
{c}
#if ENABLE_SELINUX
security_context_t scontext = NULL; //Line 593
if (option_mask32 & OPT_SELINUX) {
if ((option_mask32 & OPT_DEREFERENCE
? lgetfilecon(filename, &scontext)
: getfilecon(filename, &scontext)
) < 0
) {
bb_simple_perror_msg(filename);
return 0;
}
}
#endif
...
#if ENABLE_FEATURE_STAT_FORMAT
if (format == NULL) {
...
print_it(format, filename, print_stat, &statbuf IF_SELINUX(, scontext)); // Line 678
#else /* FEATURE_STAT_FORMAT */
...
\end{minted}
\end{footnotesize}
\caption{A Store-Store and a Store-Load dependency between the {\tt ENABLE\_SELINUX} and
{\tt ENABLE\_FEATURE\_STAT\_FORMAT} configuration options}
\label{fig:slexample}
\end{figure}
\end{comment}
\begin{figure}[th!]
\centering
\begin{footnotesize}
\begin{verbatim}
if (ENABLE_FEATURE_LS_RECURSIVE && (opt & OPT_d))
option_mask32 &= ~OPT_R; // Line 1173
if (!(opt & OPT_l)) { /* not -l? */
if (ENABLE_FEATURE_LS_TIMESTAMPS && ENABLE_FEATURE_LS_SORTFILE)
if (opt & (OPT_c|OPT_u)) {
option_mask32 |= OPT_t; // Line 1181
/* choose a display format if one was not already specified by an option */
if (!(option_mask32 & (OPT_l|OPT_1|OPT_x|OPT_C))) // Line 1187
option_mask32 |= (isatty(STDOUT_FILENO) ? OPT_C : OPT_1);
\end{verbatim}
\end{footnotesize}
\caption{A Store-Store and a Store-Load Dependency between {\tt ENABLE\_FEATURE\_LS\_RECURSIVE} and
the Configuration Options {\tt ENABLE\_FEATURE\_LS\_TIMESTAMPS} and
{\tt ENABLE\_FEATURE\_LS\_SORTFILE} within coreutils/ls.c in BusyBox 1.32.0.}
\label{fig:ssexample}
\end{figure}
\begin{comment}
Given a parameter, $L$, representing the number of function call sequences to be extracted for each
symbolic execution state, our implementation dumps the $L$ longest FCSs as long as there are at least that many, or all the FCSs recorded for that state, otherwise.
We get the atomic constraints from the path constraint of each execution path.
We applied our model extraction technique on three software product line benchmarks, the Email, Elevator, and Mine Pump \cite{apel2013feature} implementations.
The KLEE engine checks, for instance, the 36 selected products in the Email system and returns a set of stack traces for each execution path of the selected products. Each execution path can have one of two states: terminated successfully without violation of specifications or failure termination with violation of specifications.
The Email system has ten specifications which must be observed when we combine two features to build a new product. These specifications are used to capture the ten feature interactions described in Table 3. These unwanted feature interactions can happen when we combine features to build new products.
We set a timeout of 500 seconds for running the Extended KLEE engine on each product and set $L$ to 10 to extract up to 10 longest normally terminated execution paths per each product.
As shown in Figure 1, we perform a union of all normal stack traces that belong to each product per specification.
Each stack trace contains a set of function names which are called during the execution of each path. Figure \ref{fig_approach_stack_failure} shows a stack trace output that contains a failure termination, which indicates a violation of the specification with ID of 27, that is, an unwanted feature interaction between Verify and Forward.
\end{comment}
\section{Motivating Examples}
\label{sec:motiv}
In this section, we introduce two examples of unwanted feature interactions that illustrate the problem and challenges that motivated our approach.
\subsection{An unwanted feature interaction in a system with feature-constraint specifications}
The first example is from an electronic email system where a feature-constraint specification exists that prohibits two optional features from both being included in a product. This is done by checking in the code whether the unwanted feature interaction occurs or not.
The Email software product line is a well-known benchmark in the literature \cite{hall2005fundamental,apel2011detection}. As shown in Figure \ref{fig:motivation_email}, the Email product line has a base Email Client shared across all products as well as seven optional features.
There are ten known unwanted feature interactions in the Email case study from \cite{colder2000feature,apel2011detection}.
A constraint specification in the code checks two conditions: whether the second party in the scenario received the email as encrypted, and whether the message in the email is kept encrypted when forwarding it to the third party. A failure occurs when the email is received as encrypted, $in\_encrypted$, but is not kept encrypted when forwarding to the third party, $!isEncrypted(msg)$. This happens because the second party does not have the third party's key and therefore forwards the email's message as plain text to the third party, violating the system's security property.
Our proposed approach to automating unwanted feature interaction detection uses feature-constraint specifications, when they are available,
to learn the failure paths and normal paths in the various products of the product line.
\subsection{An unwanted feature interaction in a system without feature-constraint specifications}
\begin{figure}[th!]
\centering
\includegraphics[width=9cm]{pictures/mot_busybox_recursive_sortfile.png}
\caption{The Vulnerability in the coreutils/ls Example.}
\label{fig:motivation_busybox}
\end{figure}
\begin{figure}[th!]
\begin{center}
\begin{footnotesize}
\begin{verbatim}
#ifdef CONFIG_FEATURE_LS_RECURSIVE
void dfree(struct dnode **arr)
{
cur = arr[0]; // is no longer the head
while (cur != NULL) {
next = cur->next;
free(cur); // Load
cur = next;
}
}
#endif
void showdirs(int **arr)
{
...
#ifdef CONFIG_FEATURE_LS_RECURSIVE
dfree(arr); //ERROR
#endif
}
#ifdef CONFIG_FEATURE_LS_SORTFILES
void sort(int **arr, int size)
{
...
for(i=0;i<size;i++) {
for(j=i;j<size;j++) {
if(*arr[i] > *arr[j]) {
temp=*arr[i];
*arr[i]=*arr[j]; // Store
*arr[j]=temp;
}
}
}
}
#endif
int main(int argc, char **argv)
{
#ifdef CONFIG_FEATURE_LS_SORTFILES
sort(arr, size);
#endif
showdirs(arr);
}
\end{verbatim}
\end{footnotesize}
\end{center}
\caption{A Code Excerpt Demonstrating a Memory Leak Bug Related to Interaction of the Features {\tt LS\_SORTFILES} and {\tt LS\_RECURSIVE}.}
\label{fig:motivls}
\end{figure}
For many real-world software product lines and most configurable systems, disallowed combinations of features or configuration options are not specified or are not available to developers \cite{soares2018exploring}. Our second example, which motivates us to investigate unwanted feature interactions in those cases {\it without} feature-constraint specifications, is an unwanted feature interaction in BusyBox, a large, highly configurable system. BusyBox is a collection of many Unix utilities implemented in a single executable file \cite{wells2000busybox}. It has over 1000 configuration options or features \cite{abal2018variability}.
Figure \ref{fig:motivation_busybox} shows the features in
the ls part of coreutils in BusyBox 1.32.0. The ls utility is used to list files or directories in Linux and other Unix-based operating systems. As shown in Figure \ref{fig:motivation_busybox}, there are nine optional features in ls that a user can enable.
However, an unwanted feature interaction occurs if both features {\tt LS\_SORTFILES} and {\tt LS\_RECURSIVE} are enabled.
Figure \ref{fig:motivls} shows a code excerpt demonstrating how two features, {\tt LS\_SORTFILES} and {\tt LS\_RECURSIVE}, interact in a way that leads to a memory leak. The bug is manifested when
both features are enabled. The root cause of the memory leak is a data-flow dependency between some program statements within the {\tt sort} and {\tt dfree} functions.
The assumption that {\tt arr[0]} refers to
the head of the linked list is invalidated by the {\tt sort} function.
An example data-flow dependency
in this feature-controlled code
is between the store instruction in {\tt sort} (marked {\tt // Store} in Figure \ref{fig:motivls}) and the load instruction in {\tt dfree} (marked {\tt // Load}).
This bug is reported in the BusyBox bug database and classified as a variability bug in \cite{abal2018variability}. Variability bugs are bugs related to features or configuration options in highly configurable systems \cite{abal2018variability}. A variability bug with two or more features contributing to it is a feature interaction.
Since multiple studies have shown that most feature interactions involve two features, our aim is to detect pairwise unwanted feature interactions \cite{williams2000determination,oster2010automated, liebig2010analysis,siegmund2012predicting,siegmund2012spl,soares2018exploring,kolesnikov2019relation,abal2018variability}.
In our proposed approach to detecting unwanted feature interactions in the absence of constraint specifications, we use feature-related data flow dependencies to learn feature dependencies and present that information to developers.
\section{Conclusion}
\label{sec:summary}
In this section we summarize our results and offer concluding remarks regarding the contributions of the paper.
Our proposed approach is shown in the experiments described here to be effective in producing accurate results for unwanted feature interactions both with and without the availability of feature-constraint specifications. When feature-constraint specifications are available, results from FINCH show that we are able to classify the data from both stack traces and path constraints with high balanced accuracy to predict unwanted feature interactions in three small software product lines. Prediction is very fast, and our approach predicts the actual failure paths related to unwanted feature interactions in 0.007-0.6 seconds. The models built using SVM and combined data (both stack traces and path constraints) have high balanced accuracy across all three product lines.
Our evaluation of FINCH shows that it accurately predicts some new feature interactions, based on its learned model of existing feature interactions. Moreover, it detects failure paths related to unwanted feature interactions with only partial data, using only 75\% of functions called and atomic path constraints in three product-line case studies. The finding that we can achieve the same high accuracy with partial data is promising both for faster prediction of unwanted feature interactions in product lines and for steering the symbolic execution engine towards the failure paths.
In real-world applications there are often no feature-constraint specifications available \cite{soares2018exploring}.
For these cases, we extend our approach to seek unwanted feature interactions through automated detection of feature-relevant data-flow dependencies.
Our generalized approach, GOLDFINCH, is able to detect a Store-Store type dependency between two features in BusyBox's coreutils subsystem that previously were involved in an unwanted feature interaction.
While results confirm the advantage of using specifications when they are available, GOLDFINCH detects 17 of the 19 unwanted feature interactions across three software product lines even without access to feature-constraint specifications.
Results reported here indicate that GOLDFINCH is scalable in terms of the number of features and code size it can handle. In experiments comparing our automated method with SVF, a static analysis tool, our method has comparable running-time performance and more optimized memory usage.
In summary, the automated learning-based method we describe here has the potential to provide developers with earlier and better insights into unwanted feature interactions in feature-rich systems such as product lines and highly configurable systems. On projects where specifications of constraints on feature combinations already exist,
we use symbolic execution-guided supervised learning to automatically predict unwanted feature interactions in a new or evolving product. On projects where such specifications are not available,
we use dynamic symbolic-execution-guided program analysis and association rule mining to automatically infer potentially problematic feature dependencies meriting developers' attention.
\section{Discussion}
\label{sec:conclusion}
In this section we describe use cases for FINCH and its generalization in GOLDFINCH,
and discuss threats to validity.
{\it Use Cases.} We have proposed a method to automatically target feature interactions, whether or not unwanted feature interactions are explicitly specified. The envisioned user of our method is a developer implementing a new product in a software product line or new or changed options in a highly configurable system. Feature interactions have been shown to be hard for developers to detect in a new product \cite{kruger2019effects}.
Even where constraints on feature combinations have been documented, that information may not be accessible to developers, may be obsolete, or may not match the code. Moreover, the rationales for avoiding certain feature interactions may be undocumented or subtle, requiring additional domain knowledge beyond the developer's expertise, and thus may not be taken into account in the source code. The features involved in an unwanted interaction often reside in components that are distant from each other and may be the responsibility of different individual developers or teams, further complicating their discovery and analysis.
Moreover, atypical or rare combinations of features may not have been thoroughly analyzed or tested, especially if they were added ad-hoc, for example, in response to an urgent customer need.
There are thus three use cases that motivate our work. The first use case is to answer a developer's query
as to whether a change enables features to interact so as to violate a specified feature constraint. In this case, the developer uses the output from FINCH to check that the planned pairing of
features in a new product still satisfies the relevant specification. This helps validate the introduction of a new feature combination earlier in development.
The second use case is to give developers information that helps them test for unwanted feature interactions. The output from our learning algorithms identifies combinations of features and paths to prioritize in probing for unwanted feature interactions. Our method also gives testers the feature-dependency information they need regarding which configurations to cover in tests.
The third use case is to improve program comprehension. Feature interactions often cause problems for program comprehension \cite{kruger2019effects}. Improved automatic detection of feature interactions improves program understanding toward reducing bugs and speeding bug repair of unwanted interaction behavior.
This includes information both about unwanted feature interactions known to have occurred in prior products and the learning-based detection of new feature dependencies and interactions that our automated method provides to the developers.
An additional potential use of our method is to reduce software aging. Feature-rich software systems such as software product lines and highly configurable software are subject to aging over time \cite {parnas1994software, cotroneo2014survey}, with performance degradations and increased failure rates. A known cause of software aging in these systems is the introduction of new features that interact in unforeseen ways with existing features \cite{johnsson2000quantifying, siegmund2012spl, garvin2013failure}. A strength of GOLDFINCH is that, for the many feature-rich systems lacking feature-constraint specifications, it can help reduce this cause of aging by pinpointing problematic feature dependencies meriting developers’ further attention.
{\it Threats to Validity.} An external threat to validity is that we used only three small product lines from the literature and a large, real-world subsystem to evaluate our approach. However, these software systems are all well-studied and open-sourced, are in different domains, and have a variety of features and unwanted feature interactions. Our results showed similarities in the models' accuracy and performance, and indicated the feasibility of our approach.
An internal threat to validity is that the specifications for the product lines might be incorrect, leading to incorrect results regarding which interactions are unwanted.
However, the product lines we used have been used by other researchers, and we reviewed those papers as well as the code to ensure that the feature interactions considered here to be faulty in fact were problematic. A few papers have added other features and feature interactions to the email product line beyond those that appear in this paper; however, we restricted ourselves to those interactions that are more standard across the literature. Another internal threat to validity is that we only consider feature interaction pairs. However, as previously noted, pairwise interactions have been found to be the most common, and it has been shown that it is exceedingly rare to find a 3-way feature interaction that is not detectable as a 2-way interaction \cite{calder2006feature}.
\section{Introduction}\label{sec:introduction}
Feature-rich software systems are widely used when the goal is to satisfy a broad variety of customers. Both software systems that form part of a product line and highly configurable software systems are feature-based, meaning that various combinations of features (units of functionality) are used to meet different needs \cite{apel3feature, berger2015feature}. The number of features and potential feature combinations tends to increase over time to accommodate the shifting needs of customers and users \cite{pohl2005software,meinicke2016essential,mukelabai2018tackling}.
\begin{figure*}[th]
\centering
\includegraphics[width=\linewidth]{pictures/framework_general.png}
\caption{Overview of Our Proposed Approach to Learn Unwanted Feature Interactions whether or not Specifications of Feature-combination Constraints Are Available.}
\label{fig:overview}
\end{figure*}
A {\it feature interaction} is an interference between two features such that, while each feature behaves as intended in a product with only one of the features present, having both features in a new product creates different, unintended behavior \cite{calder2003feature,apel2014feature}.
An ongoing problem for developers of feature-rich systems is how to detect {\it unwanted feature interactions} in a new product or configuration \cite{apel2013feature}. Unwanted feature interactions may involve new features or combinations of existing features that have not been used previously. Unwanted feature interactions introduce unpredicted behavior, often escape testing, and add risk \cite{Lutz93, apel2011detection}. In safety-critical systems, they have caused multiple accidents
\cite{muscedere2019detecting,lutz2000software, apel2014feature,abdessalem2020automated}.
Specification of constraints on feature combinations takes many forms, including feature models \cite{batory2020}, source code, user guides and operational manuals. However, such documentation is often poorly maintained, partial, inconsistent with the code, and quickly outdated \cite{nadi2014mining,nadi2015configuration, abal2018variability,soares2020feature}. Especially in highly configurable software systems, i.e., ones with features that can be added and removed \cite{northrop2002software, garvin2013failure}, specification of constraints on feature combinations often does not exist outside the code.
We thus focus on specifications of feature constraints in the source code, as these are the specifications of greatest concern to developers and the most likely to be maintained and used.
The problem that this paper investigates is how to automatically identify unwanted feature interactions in a software system whether or not specifications of constraints on feature combinations exist.
To address this problem, \emph{we propose the use of symbolic-execution-guided machine learning to assist developers in identifying unwanted feature interactions} in a product line or feature-rich configurable system. Our approach is implemented and can automatically find problematic feature interactions and dependencies that may have been inadvertently introduced in a software version or new product. We envision that the most beneficial usage of our method will be to pinpoint for developers, prior to testing, feature interactions of concern for investigation and joint testing.
The need to identify unwanted or undetected feature interactions has motivated multiple approaches in the literature
\cite{atlee2015measuring,abal2018variability,soares2018exploring,apel2021}.
Many approaches defer the effort until testing; however, despite important advances in sampling technologies, testing is inherently limited as a solution \cite{nguyen2016igen,temple2016using}
and risks delaying discovery of critical feature interactions until operations.
Other approaches require the creation of formal models or other manually developed artifacts that most projects do not have \cite{atlee2015measuring}. Such problems are compounded in large configurable software systems where it is more likely that products are siloed, that each developer is familiar with only part of the system, and that documentation and configuration guidance are out of date
\cite{nadi2015configuration,Cashman18}.
The limited adoption in practice of existing approaches motivates us to pursue automated learning of unwanted feature interactions from models extracted from source code.
Our goal in the work described in this paper is to provide automated support for learning unwanted feature interactions for use in developing feature-rich systems.
Figure \ref{fig:overview} shows an overview of our approach. We use symbolic-execution-guided extraction of learning models from the code to identify unwanted feature interactions.
As shown in the top half of the figure, our approach takes advantage of the fact that in software product lines, information from earlier products can be exploited to improve later products. Here we describe a way to learn from knowledge of which features interfere with each other in order to identify, for a new product, problematic feature interactions that can occur in it. We call our approach and its implementation FINCH (Feature INteraCtion Help) as, similar to miners' use of finch birds to detect poisonous carbon monoxide in coal mines, developers can use FINCH to detect and warn of unwanted or unknown feature interactions.
However, many software product lines and most highly configurable systems lack constraint specifications or have only partial and/or obsolete specifications of feature constraints. The bottom half of Figure \ref{fig:overview} shows how we generalize our approach to handle feature-rich systems without specifications of feature constraints. We call this extension of our approach and its implementation GOLDFINCH (Generalized FINCH). All artifacts, code, and analysis used in this study are available at https://tinyurl.com/ydbtsc8r.
We evaluated the effectiveness of our approach by using it to learn feature dependencies in three product lines and a highly configurable subsystem. Our evaluation addressed the following research questions:
\begin{itemize}
\item \emph {RQ1: How effective is the proposed approach in producing accurate results for known feature interactions?}
\item \emph{RQ2: How accurately can the proposed approach predict new unwanted feature interactions based on existing feature interactions?}
\item \emph{RQ3: How scalable is the performance of our approach when specifications are unavailable?}
\end{itemize}
The contributions of the paper are:
\begin{itemize}
\item Where constraints on allowable feature combinations are specified, we learn unwanted feature interactions, first using symbolic execution to extract a model from prior products' normal and failed paths and then using it to classify unseen paths in a new product.
We show that this approach is successful in identifying even some unseen feature interactions and even with partial data.
\item We generalize our approach to highly configurable systems, where constraints on allowable feature combinations typically are not specified. We use dynamic symbolic execution to detect feature-related data flow dependencies and association rule mining to infer feature interactions.
We show that our method locates feature interactions of concern to developers.
\item We present evaluation results
showing that our method is fast and effective in detecting unwanted feature interactions. With
specifications, FINCH predicted all 19 known unwanted feature interactions in three product lines within seconds. Without access to specifications, our generalized GOLDFINCH method
still could detect 75-100\% of these unwanted feature interactions
and, in a configurable system with 139 features, identified a data flow dependency between two features that previously were involved in an unwanted feature interaction.
\item We describe how our method supports developers by automatically finding features that interact in unrecognized or unwanted ways, and thus should be analyzed and tested together.
\end{itemize}
\begin{figure}[th!]
\centering
\includegraphics[width=3.5in]{pictures/motivation_email_updated.png}
\caption{Email Product Line Example with Unwanted Feature Interaction Specification.}
\label{fig:motivation_email}
\end{figure}
The remainder of the paper is organized as follows. Section \ref{sec:motiv} introduces two motivating examples. Section \ref{sec:modex} describes our feature-relevant model extraction approach. Section \ref{sec:learning} describes how we learn feature interactions and introduces the case studies used in the experiments. Section \ref{sec:results} describes results from experiments evaluating our approach in light of our research questions. Section \ref{sec:related} describes related work. Section \ref{sec:conclusion} discusses use cases and threats to validity, and Section \ref{sec:summary} offers a brief summary of contributions and concluding remarks.
\section{Related Work}
\label{sec:related}
Previous research on using machine learning to detect feature interactions has aimed primarily at coverage testing of configurable systems \cite{nguyen2016igen}, at finding faulty configurations without incurring the cost of testing \cite{temple2016using}, or on the effect of feature interactions on performance \cite{bacciu2015using, kolesnikov2019relation,velez2019configcrusher}. Recent work has suggested that combining information from learning-aided configuration with information from experts may further improve results \cite{amand2019towards}.
A study of feature interactions in the product-line literature by Soares, et al. \cite{soares2018feature} found that 43\% of the papers aimed to understand feature interaction at early stages of the software life cycle. Among this 43\%, the majority used formal methods, especially feature-aware verification to automate detection of interactions \cite{apel2011detection, apel2013feature}, and model checking to measure behavioral changes when a new feature is added \cite{atlee2015measuring}, as well as to detect conflicts among features \cite{calder2006feature, beidu2019detecting}. However, such formal models are costly to create and typically not available for most real-world systems.
Our approach to dealing with the feature interaction problem is the first we are aware of that uses symbolic-execution-guided machine learning on a model built using data from prior products in a software product line to detect unwanted feature interactions in a new product. Similar to other approaches that use call sequence information \cite{raychev2014code,yessenov2017demomatch},
our approach also learns some type of semantic association among the functions. However,
unlike in these works, we consider systems in which samples of correct usage of the functions and
features may not be available. Our use of symbolic constraints in learning differs from the approach in \cite{fowze2019proxray} as our constraints are not on the input variables and, instead, involve internal state of the computation, which is important for detecting unwanted feature interactions.
Regarding feature interactions in highly configurable systems without specifications, Soares, et al. \cite{soares2018exploring} used dynamic analysis based on the data and control flow execution of Java-based configurable systems. Our work reported here differs in that we focus on analyzing data flow and static analysis of C-based configurable systems and do not require availability of test cases.
Kolesnikov, et al. \cite{kolesnikov2019relation} investigated the relation of control flow and performance feature interactions in predicting the performance of a highly configurable system in terms of timing. Our work differs since we use data flow interaction to predict feature interactions in a highly configurable system.
Rhein, et al. \cite{rhein2018variability} proposed a variability aware static analysis technique based on seven control and data flow analyses. They reported that their technique out-performed sample-based, static-analysis approaches in terms of execution times and potential bugs found. Our study differs in its use of highly accessible and general-purpose static analysis tools including PROMPT \cite{yavuz2020tutorial} and SVF \cite{sui2014detecting,sui2016svf} to capture the data-flow interactions in highly configurable systems.
Our attention to feature interactions is also shared by combinatorial interaction testing (CIT) techniques \cite{yilmaz2014moving}, which have been applied to the testing of both software product lines and highly configurable systems. CIT identifies a subset of features to be considered in combination to achieve, most commonly, pairwise coverage \cite{lopez2015first}. It is a form of sampling, informed by constraints on allowable feature combinations, that reduces the number of tests needed. Our approach differs from CIT in our use of symbolic-execution-guided learning and in identifying interactions whether or not feature-constraint specifications exist.
\section{Evaluation and Results}
\label{sec:results}
In this section we describe the results from our investigation of the following research questions in four case studies. We first evaluate FINCH's discovery of unwanted feature interactions when {\it feature-constraint specifications are available}, as in many product lines. We then evaluate GOLDFINCH's discovery of feature interactions when such {\it specifications are not available}, as in most highly configurable systems. The key research questions are the following.
\begin{itemize}
\item \emph {RQ1: How effective is the proposed approach in producing accurate results for known feature interactions?}
\item \emph{RQ2: How accurately can the proposed approach predict new unwanted feature interactions based on existing feature interactions?}
\item \emph{RQ3: How scalable is the performance of our approach when specifications are unavailable?}
\end{itemize}
\subsection{Detecting unwanted feature interactions with specifications }
\label{sec:eval_with}
In this subsection we describe evaluation results for systems where feature-constraint specifications exist, that is, when we can use FINCH.
To answer the research questions for FINCH, we first built a bag-of-words model \cite{zhang2010understanding,aggarwal2012survey} of the execution traces and path constraints extracted, using our extension to the KLEE
symbolic execution engine \cite{cadar2008klee}, for the Email, Elevator, and Mine Pump software product lines \cite{apel2013feature} described in Table \ref{tab:case_studies}. We used the following setup for our experiments:
\begin{itemize}
\item Balanced Accuracy (BAC), defined as
$\mathrm{BAC} = \frac{1}{2}\left(\frac{TP}{TP + FN} + \frac{TN}{TN + FP}\right)$, i.e., the average of the accuracies obtained on the two classes, is used for reporting the accuracy of the machine-learning models \cite{brodersen2010balanced}.
\item 80\% of the data is used for the training of the models and 20\% for the testing.
\item 5 repeats of 10-Fold Cross Validation are used inside the training.
\item SMOTE sampling \cite{chawla2002smote} is used inside the training since the data is imbalanced, with the number of records with a failure label being very small compared to the number of records with a normal label, as shown in Table \ref{tab:case_studies}.
\end{itemize}
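For concreteness, the balanced-accuracy computation used above can be sketched as follows (the confusion-matrix counts here are illustrative, not taken from our experiments):

```python
def balanced_accuracy(tp, fn, tn, fp):
    """Average of the per-class recalls: sensitivity and specificity."""
    sensitivity = tp / (tp + fn)  # recall on the failure class
    specificity = tn / (tn + fp)  # recall on the normal class
    return 0.5 * (sensitivity + specificity)

# Illustrative counts for an imbalanced data set:
# 9 failure paths (8 caught), 991 normal paths (970 correctly kept).
bac = balanced_accuracy(tp=8, fn=1, tn=970, fp=21)
print(round(bac, 3))  # 0.934
```

Unlike plain accuracy, this measure cannot be inflated by always predicting the majority (normal) class, which is why it suits our imbalanced path data.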
\textit {\textbf{RQ1.F:} How effective is FINCH in classifying paths in new products for known feature interactions?}
This research question investigates whether we can
classify termination paths into two classes, normal termination (success) and interaction-associated failed termination (failure).
The ``F'' suffix in the RQ indicates that its investigation uses FINCH. We selected three well-regarded machine learning classifiers to categorize the text \cite{joachims1998text,aggarwal2012survey}: Support Vector Machine (SVM), Naive Bayes, and Random Forest. We learned from the existing labeled data, with the labels being failure path and success path, built the classifier, and classified previously unseen traces of functions and atomic constraints from the path conditions.
The feature vector produced by our bag-of-words model, as shown in Figure \ref{fig:framework}, was fed into these classifiers. We divided the data to perform 10-fold cross-validation and reported the corresponding Balanced Accuracy, Training Time, and Prediction Time values. Figures \ref{fig:rq1_email}, \ref{fig:rq1_elevator}, and \ref{fig:rq1_minepump} show these values for the three software product lines.
We experimentally tested the three selected ML algorithms on three different sources of data: (1) functions that appear on the stack traces, (2) path constraints, and (3) combinations of both stack traces and path constraints. For each case study, we investigated how accurately we were able to classify failure and normal cases. We also investigated which of these sources of data aided the classifier in distinguishing between the normal and failure classes for a path.
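The bag-of-words encoding of a path can be sketched as follows (a simplified, hand-rolled illustration; the trace tokens are hypothetical, and the actual implementation relies on standard machine-learning tooling rather than this version):

```python
from collections import Counter

def build_vocabulary(traces):
    """Collect every function name / atomic constraint seen in training."""
    vocab = sorted({tok for trace in traces for tok in trace})
    return {tok: i for i, tok in enumerate(vocab)}

def vectorize(trace, vocab):
    """Map one path (stack-trace functions plus atomic path constraints)
    to a fixed-length vector of token counts; unseen tokens are dropped."""
    counts = Counter(tok for tok in trace if tok in vocab)
    return [counts[tok] for tok in sorted(vocab, key=vocab.get)]

# Hypothetical traces: function names and atomic constraints per path.
train = [["deliver", "sendmail", "isencrypted", "key!=0"],
         ["deliver", "sendmail", "key==0"]]
vocab = build_vocabulary(train)
print(vectorize(["deliver", "deliver", "key==0"], vocab))  # [2, 0, 0, 1, 0]
```

The resulting vectors are what the SVM, Naive Bayes, and Random Forest classifiers consume; treating stack traces and path constraints as one token stream is what lets us combine both data sources in a single model.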
Results shown in Figure \ref{fig:rq1_email} for the Email product line indicate that our approach was able to correctly detect failure cases and could accurately classify the data using just the stack trace data. For the Elevator product line, as shown in Figure \ref{fig:rq1_elevator}, we could perfectly classify the failure and normal termination paths using either path constraints or stack traces. In Figure \ref{fig:rq1_minepump}, the results on the Mine Pump data show that the path constraints data led to a slightly more accurate model for SVM.
In summary, our experiments showed that all three learned models had high Balanced Accuracy. In Mine Pump, SVM surpassed Naive Bayes and Random Forest. Using both path constraints and stack traces yielded an accurate model for all three software product-line case studies.
To investigate why stack traces yielded a more accurate model for the Email product line, while path constraints yielded a more accurate model for the Mine Pump product line, we measured how much data each source provided. Table \ref{tab:rq1_length} describes both sources of data in terms of the minimum:maximum path length and number of unique functions in the stack traces, and the minimum:maximum constraint size and number of unique constraints in the path constraints, for each product line.
As shown in Table \ref{tab:rq1_length}, there are 33 unique functions in the stack traces of the Email product line and 11 unique constraints in its path constraints. This indicates that we had more data in the stack traces compared to the path constraints. This appears to lead to a more accurate model using stack traces for the Email product line. For the Mine Pump product line, we speculate that its larger min-to-max range of path lengths for the Path Constraints Data assists in learning a more accurate model using path constraints. However, for the Elevator product line, which also has a large min-to-max path range, learning the model with both sources of data yielded a perfect classifier. This may be due to its having a sufficient number of both unique functions and unique path constraints. This leads us to {\bf recommend learning a model on both stack-trace data and path-constraint data}. It further suggests that study of what constitutes sufficient stack-trace and path-constraint data for learning a product line model would be useful.
Regarding the computation time, SVM took longer to build a model in all three case studies; however, SVM was also the fastest algorithm in predicting an unwanted feature interaction. Overall, SVM displayed better performance compared to Naive Bayes and Random Forest in our experiments.
{\bf Finding: Our approach accurately classified failure paths related to unwanted feature interactions and identified all known unwanted feature interactions for the three product lines.}
To answer RQ2, ``How accurately can the proposed approach predict new unwanted feature interactions based on existing feature interactions?'', we investigated three sub-questions, RQ2.1F, RQ2.2F, and RQ2.3F. We discuss each of these below.
\textit {\textbf{RQ2.1F:} Can FINCH identify a new unwanted feature interaction?}
Recent studies show that a new feature similar to an existing feature involved in an existing feature interaction tends to behave similarly \cite{khoshmanesh2018role,khoshmanesh2019leveraging,9233030}. Therefore, we want to know if having information about existing unwanted feature-interaction paths helps predict new unwanted feature interactions.
To evaluate our classifiers on new unwanted feature interactions, we performed the following steps:
1) select a feature interaction pair; 2) exclude the data related to this feature interaction; 3) build the classifiers on the remaining data with 10-fold cross-validation; 4) test the classifiers on the new data excluded in step 2; and 5) repeat for all 10 feature interaction pairs in the Email benchmark.
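The leave-one-interaction-out protocol above can be sketched as follows (the `train` and `evaluate` stand-ins are deliberately trivial placeholders for the actual classifier training and scoring, and the records are hypothetical):

```python
def leave_one_interaction_out(records, interaction_pairs, train, evaluate):
    """For each known interaction pair, hold out its records, train on
    the rest, and test whether the held-out failures are still caught."""
    results = {}
    for pair in interaction_pairs:
        held_out = [r for r in records if r["interaction"] == pair]
        remaining = [r for r in records if r["interaction"] != pair]
        model = train(remaining)          # cross-validation happens inside
        results[pair] = evaluate(model, held_out)
    return results

# Toy stand-ins: the "model" memorizes tokens that co-occur with failures,
# and evaluation checks that every held-out failure shares such a token.
def train(records):
    return {t for r in records if r["label"] == "failure" for t in r["trace"]}

def evaluate(model, held_out):
    return all(set(r["trace"]) & model for r in held_out)

records = [
    {"interaction": ("Encrypt", "AutoResponder"), "label": "failure",
     "trace": ["isencrypted", "deliver"]},
    {"interaction": ("Encrypt", "Forward"), "label": "failure",
     "trace": ["deliver", "forward"]},
    {"interaction": None, "label": "normal", "trace": ["sendmail"]},
]
pairs = [("Encrypt", "AutoResponder"), ("Encrypt", "Forward")]
print(leave_one_interaction_out(records, pairs, train, evaluate))
```

The point of the protocol is that the classifier never sees any path belonging to the interaction under test, so a correct prediction counts as detecting a genuinely new unwanted feature interaction.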
\begin{table}[th!]
\centering
\caption{Comparing Path Lengths and Unique Functions and Constraints for Each Software Product Line}
\begin{footnotesize}
\begin{tabular}{|l|r|r|r|r|}
\toprule
& \multicolumn{2}{c|}{\textbf{Stack Trace Data}} & \multicolumn{2}{c|}{\textbf{Path Constraints Data}} \\
\midrule
\textbf{SPL name} & \multicolumn{1}{l|}{\textbf{Path length(min:max)}} & \multicolumn{1}{l|}{\textbf{\#Funcs.}} & \multicolumn{1}{l|}{\textbf{Const. size(min:max)}} & \multicolumn{1}{l|}{\textbf{\# Const.}} \\
\midrule
\textbf{Email} & 5:18 & 33 & 1:25 & 11 \\
\midrule
\textbf{Elevator} & 2:16 & 24 &28:223 & 37 \\
\midrule
\textbf{Mine Pump} & 4:15 & 21 & 3:49 & 15 \\
\bottomrule
\end{tabular}
\end{footnotesize}
\label{tab:rq1_length}%
\end{table}%
\begin{figure*}[!h]
\centering
\begin{subfigure}[b]{0.3\textwidth}
\includegraphics[width=\textwidth]{pictures/rq1_email_balancedaccuracy.png}
\caption{Balanced Accuracy}
\end{subfigure}
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{pictures/rq1_email_trainingtime.png}
\caption{Training Time}
\end{subfigure}
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{pictures/rq1_email_predictiontime.png}
\caption{Prediction Time}
\end{subfigure}
\caption{Email: Performance and Time Evaluations}
\label{fig:rq1_email}
\end{figure*}
\begin{figure*}[!h]
\centering
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{pictures/rq1_elevator_balancedaccuracy.png}
\caption{Balanced Accuracy}
\end{subfigure}
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{pictures/rq1_elevator_trainingtime.png}
\caption{Training Time}
\end{subfigure}
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{pictures/rq1_elevator_predictiontime.png}
\caption{Prediction Time}
\end{subfigure}
\caption{Elevator: Performance and Time Evaluations}
\label{fig:rq1_elevator}
\end{figure*}
\begin{figure*}[!h]
\centering
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{pictures/rq1_minepump_balancedaccuracy.png}
\caption{Balanced Accuracy}
\end{subfigure}
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{pictures/rq1_minepump_trainingtime.png}
\caption{Training Time}
\end{subfigure}
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{pictures/rq1_minepump_predictiontime.png}
\caption{Prediction Time}
\end{subfigure}
\caption{Mine Pump: Performance and Time Evaluations}
\label{fig:rq1_minepump}
\end{figure*}
\begin{table}[th!]
\centering
\caption{Percentage of Detected Unwanted Feature Interactions}
\begin{footnotesize}
\begin{tabular}{|l|r|r|r|}
\toprule
\textbf{Across all ML Algorithms} & \multicolumn{1}{l|}{\textbf{Email}} & \multicolumn{1}{l|}{\textbf{Elevator}} & \multicolumn{1}{l|}{\textbf{Mine Pump}} \\
\midrule
Stack traces data & 100 & 100 & 100 \\
\midrule
Path constraints data & 43 & 100 & 100 \\
\midrule
Combined data & 100 & 100 & 100 \\
\midrule
\midrule
\textbf{Machine Learning Algorithms}.& \multicolumn{3}{c|}{ }\\
\midrule
SVM & 100 & 100 & 100 \\
\midrule
Naïve Bayes & 86 & 100 & 100 \\
\midrule
Random Forest & 86 & 100 & 100 \\
\bottomrule
\end{tabular}
\end{footnotesize}
\label{tab:rq2}%
\end{table}%
Table \ref{tab:rq2} shows the percentage of unwanted feature interactions
that we could detect for each product line using different sources of data and different machine learning algorithms. We ran experiments using each of the two sources of data--stack traces and path constraints--as well as with both of them combined. Furthermore, we evaluated three machine learning algorithms as to their accuracy in detection of feature interaction failures.
Table \ref{tab:rq2} shows that SVM had the best performance among the learning algorithms in our study. The table also shows that using the combined data yielded the highest percentage of detected unwanted feature-interaction failures. These data are consistent with our findings from RQ1 above. For the Email product line, using only the path-constraints data was less effective than using the stack traces for detecting new unwanted feature interactions; stack-trace data or combined data resulted in high accuracy in predicting even new feature interactions.
\vspace{1em}
\begin{figure*}[th!]
\centering
\begin{subfigure}[b]{0.47\textwidth}
\includegraphics[width=\textwidth]{pictures/rq3_email.png}
\caption{Email}
\end{subfigure}
\begin{subfigure}[b]{0.47\textwidth}
\includegraphics[width=\textwidth]{pictures/rq3_elevator.png}
\caption{Elevator}
\end{subfigure}
\begin{subfigure}[b]{0.5\textwidth}
\includegraphics[width=\textwidth]{pictures/rq3_minepump.png}
\caption{Mine Pump}
\end{subfigure}
\caption{Balanced Accuracy vs. Partial Feature Elimination in Testing}
\label{pic:rq3}
\end{figure*}
\textit {\textbf{RQ2.2F:} How effective is FINCH in dealing with partial data in testing?}\\
This research question asks whether we can detect new feature interaction failures with partial and unseen data. This is of interest primarily because using only partial data could speed up the computation time needed for the testing (learning-based prediction of feature interactions) for each new product in a product line. It also is of interest because it could enable FINCH to be used earlier for a new product while only partial data is available.
To explore the accuracy of our model in dealing with unseen data, we manipulated the testing data to exclude varying portions of the features. We used the models learned in RQ1 and tested the models as follows: 1) excluded one quarter, two quarters, or three quarters of functions and path constraints from the top of the stack traces and from the path constraints; 2) tested the models with the reduced data; and 3) checked whether and, if so, how much the accuracy of detecting failure cases deteriorates.
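Step 1 above, removing a fraction of the tokens from the top of each test trace, can be sketched as follows (illustrative only):

```python
def truncate_from_top(trace, fraction):
    """Drop the leading `fraction` of a stack trace or path-constraint
    list, simulating a new product for which only partial data exists."""
    cut = int(len(trace) * fraction)
    return trace[cut:]

# Hypothetical stack trace for one path.
trace = ["main", "deliver", "sendmail", "isencrypted"]
print(truncate_from_top(trace, 0.25))  # ['deliver', 'sendmail', 'isencrypted']
print(truncate_from_top(trace, 0.75))  # ['isencrypted']
```

Because the bag-of-words vectors only count tokens, truncation simply zeroes out the counts of the removed functions and constraints; the learned model itself is unchanged.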
Figure \ref{pic:rq3} shows the balanced accuracy for each model when tested on partial data. For the Email software product line, even when one quarter of the features were excluded from the testing data, we still obtained a high balanced accuracy, 0.91, by using SVM on the stack-traces data to detect failure cases. For the Elevator product line, the balanced accuracy also was high, 0.94, when we excluded a quarter of the features from the path constraints. Building the model on the stack-trace data when a quarter of the data was excluded from the top of the path constraints also yielded a highly accurate model, using Naive Bayes or SVM. Even excluding two quarters of the data still achieved a highly accurate model using stack-trace data and SVM.
However, in Mine Pump, we were not able to classify a path with only partial data.
These mixed results suggest that, for at least some product lines, our method can accurately predict many but not all unwanted feature interactions with access to only 75\% of the data related to the new product. This finding may assist with reducing the computation time needed to find unwanted feature interactions.
\textit {\textbf{RQ2.3F:} Which features are more important for FINCH?}\\
To address this research question, we worked to identify which features are more and less important for the Random Forest model described for RQ1. We used the Gini importance index to measure feature importance. Gini importance is defined as the total decrease in node impurity averaged over all trees of the ensemble model \cite{wright2015ranger}.
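The node-impurity decrease that underlies the Gini importance can be sketched as follows (the per-class counts are hypothetical; in the ensemble model this quantity is summed over a feature's splits and averaged over all trees):

```python
def gini(counts):
    """Gini impurity of a node given per-class sample counts."""
    n = sum(counts)
    return 1.0 - sum((c / n) ** 2 for c in counts)

def impurity_decrease(parent, left, right):
    """Weighted decrease in Gini impurity obtained by splitting a node
    on one feature; averaging this over the forest yields its
    Gini importance score."""
    n = sum(parent)
    n_l, n_r = sum(left), sum(right)
    return gini(parent) - (n_l / n) * gini(left) - (n_r / n) * gini(right)

# Hypothetical node: 10 failure vs. 30 normal paths, split on whether
# a given token (e.g. "isencrypted") appears in the path's vector.
print(round(impurity_decrease([10, 30], [9, 3], [1, 27]), 3))  # 0.214
```

A token whose splits keep producing purer children, i.e., children dominated by either failure or normal paths, accumulates a high importance score.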
\begin{figure*}[!h]
\centering
\begin{subfigure}[b]{0.47\textwidth}
\includegraphics[width=\textwidth]{pictures/rq4_email.png}
\caption{Email}
\end{subfigure}
\begin{subfigure}[b]{0.47\textwidth}
\includegraphics[width=\textwidth]{pictures/rq4_elevator.png}
\caption{Elevator}
\label{fig:rq4_elevator}
\end{subfigure}
\begin{subfigure}[b]{0.47\textwidth}
\includegraphics[width=\textwidth]{pictures/rq4_minepump.png}
\caption{Mine Pump}
\end{subfigure}
\begin{subfigure}[b]{0.5\textwidth}
\includegraphics[width=\textwidth]{pictures/rq4_part2.png}
\caption{Diff Feature Importance Scores in Email }
\label{fig:rq4_diff_email}
\end{subfigure}
\caption{Feature Importance Comparison on Stack Traces, Path Constraints, and Combined Data}
\label{fig:rq4_importance}
\end{figure*}
Figure \ref{fig:rq4_importance} shows the feature-importance values for the Email, Elevator, and Mine Pump product lines. The figure shows that for Elevator (Figure \ref{fig:rq4_elevator}), the Random Forest model built on the combined data uses features from both stack traces and path constraints to classify the data. This contrasts with both Email and Mine Pump, where the combined model mostly uses stack-trace features. These findings indicate that the relative importance of the data captured in stack traces and path constraints may vary case by case in learning the model. This again suggests that using both sources of data--stack traces and path constraints--can result in a more accurate model for predicting unwanted feature interactions versus normal terminations.
We also can use the result of this research question to prioritize the paths and guide the symbolic execution engine toward more problematic paths. We give an example from the Email system to show how we can take advantage of the feature-importance results. Figure \ref{fig:rq4_diff_email} shows the differences in feature-importance scores for the stack-traces data when path-constraints data are added to the model. Three functions, ``deliver'', ``getclientautoresponse'', and ``getclientprivatekey'', retain the same scores when path-constraints data are added, while the scores of the ``isencrypted'' and ``sendmail'' functions increase.
\begin{table}[htbp]
\centering
\caption{Comparison of Learning Models' Performance on 5 Selected Features}
\begin{footnotesize}
\begin{tabular}{|l|c|c|c|c|c|}
\toprule
\multicolumn{1}{|c|}{\textbf{ML model}} & \textbf{Balanced Accuracy} & \textbf{Recall} & \textbf{Precision} & \textbf{Training Time} & \textbf{Prediction Time} \\
\midrule
SVM, 5 Features & 0.9974255 & 1 & 0.58 & 21 & 0.007 \\
\midrule
Naïve Bayes, 5 Features & 0.9974255 & 1 & 0.58 & 10 & 0.02 \\
\midrule
Random Forest, 5 Features & 0.9974255 & 1 & 0.58 & 22 & 0.24 \\
\midrule
SVM, All Features & 1 & 1 & 1 & 51 & 0.02 \\
\midrule
Naïve Bayes, All Features & 1 & 1 & 1 & 26 & 0.07 \\
\midrule
Random Forest, All Features & 1 & 1 & 1 & 25 & 0.261 \\
\bottomrule
\end{tabular}
\end{footnotesize}
\label{tab:rq4_feature_importance}%
\end{table}%
\begin{table*}[th!]
\caption{The Data-flow Type Dependencies That Were Detected by GOLDFINCH Within the Sample Code, Explaining the Configuration Bugs in BusyBox as Reported in \cite{abal2018variability,VBDB}.}
\label{tab:knownbugs}
\centering
\begin{footnotesize}
\begin{tabular}{cccc}
{\bf Buggy Configuration} & {\bf Bug Type} & {\bf Dependency Type} & {\bf Dependent Features} \\ \toprule
BB\_MMU \&\& FEATURE\_HTTPD\_GZIP & Behavior & Store-Load & ENABLE\_FEATURE\_HTTPD\_GZIP, \\
\&\& FEATURE\_HTTPD\_BASIC\_AUTH & Violation & & FEATURE\_HTTPD\_BASIC\_AUTH \\ \hline
BB\_FEATURE\_LS\_FILETYPES \&\& & Uninitialized & Store-Load & BB\_FEATURE\_LS\_FILETYPES,\\
!BB\_FEATURE\_LS\_USERNAME & Variable & & BB\_FEATURE\_LS\_USERNAME\\ \hline
FEATURE\_LS\_SORTFILES \&\& & Memory & Store-Load & FEATURE\_LS\_SORTFILES, \\
FEATURE\_LS\_RECURSIVE & Leak & & FEATURE\_LS\_RECURSIVE \\ \hline
FEATURE\_MDEV\_CONF \&\& & Unused & - & - \\
FEATURE\_MDEV\_RENAME \&\& & variable & & \\
!FEATURE\_MDEV\_RENAME\_REGEXP & & & \\
\bottomrule
\end{tabular}
\end{footnotesize}
\end{table*}
Moreover, adding path-constraints data helps some features from the stack traces manifest themselves better in the decision-making process. We built machine-learning models using data containing only the five features mentioned above to check whether a model having only these features could correctly detect all the failure cases. Table \ref{tab:rq4_feature_importance} shows the results of the machine-learning models built on these five features. SVM had the best performance and correctly detected all the failure cases (recall = 1) for the five-feature model very quickly (0.007 secs.).
This result may be important for the use of the symbolic execution engine, helping to prioritize paths and reduce the path-explosion problem.
Such reduced sets of the machine learning features can potentially be leveraged to speed up the search for detecting
unwanted feature interactions when using exhaustive techniques such as symbolic execution.
{\bf Finding: In experiments that excluded data related to each feature interaction, our approach predicted unseen feature interactions with high balanced accuracy. With partial data (75\%) it predicted all but one of the unwanted feature interactions.}
\subsection{Detecting unwanted feature interaction without specifications }
\label{sec:eval_without}
We next describe our results in finding potentially unwanted feature interactions when we do not have access to specifications.
GOLDFINCH uses the data flow dependencies described in Section \ref{sec:dfmodels} to detect unwanted feature interactions in highly configurable systems without access to specifications of constraints on feature interactions.
We first report results from applying GOLDFINCH on a highly configurable software system. We then report results from applying GOLDFINCH on the same three product-line software benchmarks previously used for FINCH; however, {\it without} using their feature-constraint specifications. The research question we investigate in these experiments is the following.
\emph{RQ1.G: How effective is GOLDFINCH in producing accurate results for known feature interactions?}
\subsubsection{Evaluation of GOLDFINCH on a configurable system.}
Since some feature dependencies can cause variability bugs,
we investigated whether there exists any bug or warning related to the feature dependencies that GOLDFINCH finds. We first checked the known variability bugs in \cite{abal2018variability}. A variability bug is a bug that occurs due to enabling or disabling a combination of configuration options or features.
We focused on those 9 recorded variability bugs that involve two or more features.
Filtering out those that are related to compilation/build issues
yielded 4 known variability bugs.
We applied GOLDFINCH on the sample code provided for those four bugs at \cite{VBDB}.
Table \ref{tab:knownbugs} shows our results. For three out of the four known feature-interaction configuration bugs in earlier BusyBox versions, GOLDFINCH detected a Store-Load type dependency in the sample code explaining the bug. This experiment suggests that feature dependencies found by GOLDFINCH can be useful as indicators of unwanted feature interactions.
The known variability bug not found by GOLDFINCH does not involve a data-flow dependency; rather, it is
an unused variable bug, which is about the absence of data-flow rather than its existence. It was later fixed in BusyBox by moving the declaration of
the variable within the scope of {\tt FEATURE\_MDEV\_RENAME\_REGEXP}.
\begin{table}[ht!]
\caption{Timing and Memory Results for Generating Store-Load (SL) and Store-Store (SS) Dependencies for the coreutils Subsystem of BusyBox 1.32.0. Timeout was set to 60 secs.}
\label{tab:busyboxDF}
\centering
\scalebox{0.9}{\begin{tabular}{crrrr} \toprule
{\bf Benchmark} & {\bf SL} & {\bf SS} & {\bf Time (s)} & {\bf Mem (MB)} \\ \toprule
chgrp & 1 & 1 & 0.01 & 15.39 \\
chmod & 11 & 2 & 0.01 & 15.34 \\
chown & 2 & 0 & 0.01 & 15.14 \\
comm & 6 & 5 & 0.01& 15.27\\
cp & 2 & 0 & 330.70 & 219.76 \\
cut & 8 & 1 & 0.01& 15.27 \\
date & 6 & 1 & 0.01& 15.14 \\
df & 7 & 1 & 0.01& 15.36 \\
dos2unix & 1 & 0 & 65.82 & 233.69 \\
du & 25 & 16 & 0.01& 217.25 \\
env & 1 & 0 & 0.01& 15.34 \\
expand & 14 & 9 & 65.23 & 218.26 \\
expr & 13 & 4& 0.01& 15.38\\
fold & 2& 0 & 0.01& 15.27 \\
head & 2& 0& 0.01 & 15.33\\
id & 29& 8 & 0.01 & 15.14 \\
install & 0 & 0 & 324.67 & 223.35 \\
ln & 0 & 0 & 326.94 & 219.57 \\
ls & 8 & 10 & 0.01& 15.21 \\
md5\_sha1\_sum & 16& 14 & 0.01 & 15.30 \\
mkdir & 0 & 0 & 64.58 & 217.73 \\
mknod & 1& 0 & 4.64 & 214.65 \\
mktemp & 1& 1 & 0.01& 15.27 \\
nice & 7& 1 & 0.01 & 15.28 \\
nl & 12 & 9 & 0.01 & 15.31 \\
nproc & 0 & 0 & 0.01& 15.30 \\
od & 11& 10 & 0.01& 15.36 \\
paste & 4 & 1 & 0.01& 15.20\\
printf & 7 & 15 & 0.01& 15.24 \\
rm & 9& 1 & 0.01& 15.14 \\
shred & 2 & 0 & 0.01& 15.20 \\
shuf & 1& 0 & 0.01& 15.20 \\
sleep & 7& 1 & 9.27 & 214.86 \\
split & 3& 0 & 0.01& 15.14 \\
stat & 1& 0 & 0.01& 15.32 \\
stty & 1& 0 & 0.01& 15.20 \\
tail & 18& 2 & 0.01& 15.20 \\
tee & 7& 3 & 0.01& 15.36 \\
timeout & 9& 1 & 0.01 & 15.28 \\
touch & 4& 0 & 0.01 & 15.36 \\
tr & 6& 1 & 0.01& 15.14 \\
tty & 7& 1 & 6.87 & 214.90\\
uname & 3& 0& 6.88 & 214.75 \\
usleep & 7& 1 & 0.01& 15.27 \\
uudecode & 11& 3 & 0.01& 15.36 \\
wc & 4& 11 & 64.92 & 216.60 \\
who & 1& 0& 6.97 & 214.85 \\
yes & 3 & 0 & 0.01 & 15.34\\
\bottomrule
\end{tabular} }
\end{table}
\begin{table*}[t]
\centering
\caption{Comparison of Feature Dependency Results for PROMPT and SVF on Core Utilities of BusyBox 1.32.}
\begin{tabular}{crrrr}
\toprule
{\bf Analysis Tool} & {\bf Store-Load} & {\bf Store-Store} & {\bf Total Time (s)} & {\bf Max Memory (MB)} \\ \hline
PROMPT & 301 & 134 & 1277.85 & 233.69\\
SVF & 402 & 960 & 3603.50 & 10844.05\\
\bottomrule
\end{tabular}%
\label{tab:prompt_svf_busybox32}%
\end{table*}%
\begin{table*}[th!]
\caption{GOLDFINCH: Number of Feature Dependencies Across Two Types of Memory Dependencies in Core utilities of BusyBox 1.32. }
\centering
\begin{footnotesize}
\begin{tabular}{cccccc}
\toprule
{\bf } & {\bf } &{\bf Store-Load}& {\bf Store-Load}& {\bf Store-Store}& {\bf Store-Store}\\
{\bf Source} & {\bf Destination} &{\bf PROMPT}& {\bf SVF}& {\bf PROMPT}& {\bf SVF}\\ \toprule
{\bf Feature Relevant} & {\bf Not Feature Relevant} &9 & 15 & 6 & 10 \\
{\bf Not Feature Relevant} & {\bf Feature Relevant} &8 & 3 & 2 & 4 \\
{\bf Not Feature Relevant} & {\bf Not Feature Relevant} & 281 & 330 & 122 & 927 \\
{\bf Feature Relevant} & {\bf Feature Relevant} & 3 & 54 & 4 & 19 \\ \hline
\multicolumn{2}{c}{\bf Total} & 301 & 402 & 134 & 960 \\
\bottomrule
\end{tabular}
\end{footnotesize}
\label{tab:busybox_feature_dependency_stats}%
\end{table*}%
Next we evaluated our method on its planned target: a large, configurable system. We used the latest version of BusyBox, BusyBox~1.32.0. We selected its coreutils subsystem both because coreutils implements BusyBox's core functionalities and because coreutils had the highest number of reported variability bugs within BusyBox in a recent study \cite{abal2018variability}.
Table \ref{tab:busyboxDF} reports our experimental results for GOLDFINCH's detection of data-flow dependencies. The table presents the number of Store-Load and Store-Store pairs, together with timing and memory information. We discuss the encouraging implications of the results for the scalability of our approach at the end of Section \ref{sec:results}.
We also evaluated our approach in comparison with static analysis. To further assess our use of the dynamic symbolic-execution PROMPT tool for dependency analysis in GOLDFINCH, we compared the results from using PROMPT to those obtained by using a static analyzer, SVF \cite{sui2016svf}.
SVF is a state-of-the-art static analysis tool that provides a variety of pointer analysis for LLVM-based languages \cite{sui2014detecting,sui2016svf}.
Specifically, we configured SVF to use context-sensitive pointer analysis.
We then traversed SVF's internal representations of the Sparse Value-Flow Graph (SVFG) and the Program Assignment Graph (PAG) to find Store-Load and Store-Store dependencies, respectively.
Tables \ref{tab:prompt_svf_busybox32}-\ref{tab:busybox_feature_dependency_stats} thus also compare data-flow dependency results for PROMPT and SVF.
We used the SVFG to capture flow-sensitive Store-Load dependencies, and we used the Program Assignment Graph (PAG) to find store instructions that access the same memory locations.
While we used PROMPT to generate a Store-Store dependency for each pair of {\em consecutive} store instructions that access the same memory location,
we used the PAG created by SVF to generate a Store-Store dependency
for any pair of store instructions that may access the same memory.
Thus, the
set of Store-Store dependencies generated using SVF is in principle a superset of the set of Store-Store dependencies generated using PROMPT and is less precise; this is because, unlike the Store-Load dependencies, SVF does not generate Store-Store dependencies by default.
This explains why the number of Store-Store dependencies found by SVF (960)
is much higher than those found by PROMPT (134).
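The two pairing schemes can be sketched in a few lines (an illustrative Python model over an abstract trace of store locations, not the tools' actual implementations):

```python
from itertools import combinations

def consecutive_store_pairs(trace):
    """PROMPT-style: pair each store only with the *next* store
    to the same abstract memory location in the trace."""
    last = {}   # location -> index of the most recent store
    pairs = []
    for i, loc in enumerate(trace):
        if loc in last:
            pairs.append((last[loc], i))
        last[loc] = i
    return pairs

def all_store_pairs(trace):
    """PAG-style over-approximation: pair *every* two stores that
    may access the same location, consecutive or not."""
    return [(i, j) for i, j in combinations(range(len(trace)), 2)
            if trace[i] == trace[j]]
```

For three stores to the same location, the consecutive variant yields two pairs while the all-pairs variant yields three; this over-approximation is one reason the PAG-based Store-Store count can far exceed the consecutive-pair count.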
Similarly, SVF finds
more Store-Load dependencies (402) than PROMPT (301). This is
because we configured PROMPT to abstract away library functions and to use a symbolized return value for those that return a value \cite{yavuz2020analyzing} as a way to deal with the path explosion problem. Since library function bodies do not involve feature-dependent code, SVF's reporting of Store-Load dependencies within
library functions is something we can avoid by using PROMPT.
As the timing results in Table \ref{tab:prompt_svf_busybox32} show,
GOLDFINCH scales symbolic execution to extract feature relevant models from large-scale configurable systems.
\begin{figure}[th!]
\centering
\begin{footnotesize}
\begin{verbatim}
store i32
br label
; <label>:47 ; preds =
br i1
; <label>:52 ; preds =
store i32
br label
._crit_edge: ; preds =
\end{verbatim}
\end{footnotesize}
\caption{An LLVM Bitcode Excerpt for the Source Code Example from coreutils/ls.c as Given in Figure \ref{fig:ssexample}.}
\label{fig:sourcelocchallenge}
\end{figure}
Both PROMPT and SVF find a true feature-relevant Store-Store dependency within coreutils'
{\tt ls.c} in BusyBox 1.32.0, corresponding to the dependency between lines 1173-1181 shown in Figure \ref{fig:ssexample}.
This dependency is between {\tt ENABLE\_FEATURE\_LS\_RECURSIVE} and \\
{\tt ENABLE\_FEATURE\_LS\_TIMESTAMPS \&\& ENABLE\_FEATURE\_LS\_SORTFILES},
as both program locations update the variable {\tt option\_mask32}.
Although we do not think that this dependency implies a variability
bug in that particular version of BusyBox, future changes may
create issues.
As an example, if a bug were to make the expressions {\tt OPT\_R} and {\tt OPT\_t} evaluate to the same value, then the second store instruction at line 1181 would set the bit that gets cleared by the first store instruction at line 1173.
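This hypothetical failure mode can be sketched as follows (illustrative Python; the helper and bit values are invented for illustration and are not BusyBox's actual definitions):

```python
def update_option_mask(mask, opt_r, opt_t):
    """Mimic the two stores to option_mask32: first clear the OPT_R
    bit (as at line 1173), then set the OPT_t bit (as at line 1181)."""
    mask &= ~opt_r  # first store: clear the recursive-listing bit
    mask |= opt_t   # second store: set the time-sort bit
    return mask

# With distinct option bits, the two updates do not interfere:
OPT_R, OPT_T = 1 << 3, 1 << 5
# But if a bug made the two expressions evaluate to the same value,
# the second store would silently re-set the bit just cleared:
BUGGY_R = BUGGY_T = 1 << 3
```

With distinct bits the cleared bit stays cleared; with coinciding bits the second store masks the effect of the first, which is exactly the interaction the reported dependency flags.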
PROMPT reports an additional dependency that turns out to be a false positive, involving accesses to different fields of the
same {\tt struct} type object. The cause of this false positive is that the KLEE symbolic
execution engine used in PROMPT represents the entire {\tt struct} object as a
single memory object. In future work, we will incorporate offset
information into our analysis to eliminate such false positives.
It is remarkable that both PROMPT and SVF missed the Store-Load
dependency within coreutils'
{\tt ls.c} in BusyBox 1.32.0, corresponding to the dependency between lines 1173-1181 in Figure \ref{fig:ssexample}.
A deeper investigation into why both tools missed this dependency revealed
precision issues related to source line number generation during compilation.
Figure \ref{fig:sourcelocchallenge} demonstrates one source of imprecision in GOLDFINCH due to the optimizations performed by the compiler,
specifically, the load instructions related to the {\tt option\_mask32}
variable on lines 1181 and 1187 in Figure \ref{fig:ssexample}.
These two load instructions are used in a {\tt phi} instruction so that,
depending on the evaluation of the branch instruction on line 1180
in Figure \ref{fig:ssexample},
either the result of the instruction at line 10 or line 14 in Figure
\ref{fig:sourcelocchallenge} gets used. The issue is that
the source line reported for the instruction at line 10 is the same
as that reported for the instruction at line 17, which happens to be 1187 and does not get
included within the scope of any feature. However, the source line
for the instruction at line 10 is actually 1181.
This causes GOLDFINCH to
miss this feature-relevant dependency, as the destination tuple
appears unrelated to any feature.
This illustrates why source line numbers may not always be precisely reflected in the results of GOLDFINCH, which performs dependency analysis at the intermediate-representation level.
\begin{table}[th!]
\caption{Timing and Memory Results for Generating Store-Load (SL) and Store-Store (SS) Dependencies for the Product Lines. Timeout was set to 60 secs.}
\label{tab:prodDF}
\centering
\begin{footnotesize}
\begin{tabular}{crrrr} \toprule
{\bf Benchmark} & {\bf SL} & {\bf SS} & {\bf Time (s)} & {\bf Mem (MB)} \\ \toprule
elevator\_spec1 & 475 & 426 & 100.00 & 66.49\\
elevator\_spec2 & 352 & 178 & 100.00 & 46.78\\
elevator\_spec3 & 437 & 507 & 100.00 & 87.04 \\
elevator\_spec9 & 356 & 238 & 100.00 & 42.15\\
elevator\_spec13 & 350 & 146 & 100.00 & 36.34\\
elevator\_spec14 & 444 & 445 & 100.00 & 50.02\\
email\_spec0 & 275 & 17 & 100.00 & 237.07\\
email\_spec1 & 265 & 17 & 100.00 & 250.10 \\
email\_spec3 & 287 & 25 & 100.00 & 226.56\\
email\_spec4 & 266 & 28 & 100.00 & 208.84\\
email\_spec6 & 258 & 17 & 100.00 & 215.13\\
email\_spec7 & 254 & 16 & 100.00 & 175.22\\
email\_spec8 & 260 & 17 & 100.00 & 234.68\\
email\_spec9 & 253 & 18 & 100.00 & 215.09 \\
email\_spec11 & 293 & 19 & 100.00 & 216.24\\
email\_spec27 & 289 & 23 & 100.00 & 239.80 \\
minepump\_spec1 & 56 & 9 & 100.00 & 389.52\\
minepump\_spec2 & 58 & 16 & 100.00 & 379.85\\
minepump\_spec3 & 65 & 9 & 100.00 & 385.63\\
minepump\_spec4 & 55 & 9 & 100.00 & 373.12\\
minepump\_spec5 & 60 & 12 & 100.00 & 353.97\\
\bottomrule
\end{tabular}
\end{footnotesize}
\end{table}
\begin{table*}[th!]
\caption{GOLDFINCH: Results of Using Data-Flow Dependencies to Find Unwanted Feature Interactions without Specifications in Three Small Software Product Lines}
\centering
\begin{footnotesize}
\begin{tabular}{ccccc} \toprule
{\bf SPL name} &{\bf \# Data Flow }& {\bf \# Unwanted Feature }& {\bf\# Detected Unwanted Feature }& {\bf Time(S)}\\
{} &{\bf Dependency}& {\bf Interactions}& {\bf Interactions}& {}\\
\toprule
{\bf Email} & 2897 & 10 & 10 & 33.52 \\
{\bf Elevator} & 4354 & 5 & 4 & 12.21 \\
{\bf Mine Pump} & 349 & 4 & 3 & 3.03 \\
\bottomrule
\end{tabular}
\end{footnotesize}
\label{tab:productline_feature_dependency}%
\end{table*}%
Although the number of interactions found is small, our results are consistent with prior findings. For example, Garvin, Cohen and Dwyer showed that only a small set of feature combinations in a system's configuration space are associated with failures \cite{garvin2013failure}. The relative rarity of unwanted feature interactions may be part of the reason why constraints on combinations of features often remain unspecified or are not updated as options evolve \cite{Cashman18}.
Here, manual inspection of the dependencies found by GOLDFINCH indicates that either a distinct
set of memory locations is accessed by each feature or
accesses to the same memory location at different program locations
correspond to the same feature combination.
\subsubsection{Evaluation of GOLDFINCH on software product lines}
We also evaluated our feature dependency method GOLDFINCH on the same three small software product lines--Email, Elevator and Mine Pump--that we used for our evaluation of FINCH. This enabled us to compare GOLDFINCH's results \emph{without} access to feature-constraint specifications against FINCH's results \emph{with} access to feature-constraint specifications. To make this comparison, we used the known unwanted feature interactions for the software product lines \cite{apel2013feature} only as our ground truth.
As with our evaluation of GOLDFINCH on BusyBox, we used the framework described in Fig. \ref{fig:framework_withou_specs}.
We again used data flow interactions to discover feature dependencies, that is, features that depend on the same memory location. As shown in Step 1 of Fig. \ref{fig:framework_withou_specs}, the PROMPT tool extracts the feature-relevant model from the product lines' source code.
Table \ref{tab:prodDF} presents the number of Store-Load and Store-Store
pairs and the timing information.
As shown in Step 2 of Fig. \ref{fig:framework_withou_specs}, we automatically locate the feature-relevant data flow by analyzing the feature dependencies of the two memory dependency types, Store-Store and Store-Load for each of the three product lines. Since the implementation of features in these product lines is not based on the \#ifdef C preprocessor, we do not use the cppcheck tool\cite{hunsen2016preprocessor} to locate features, as we had in BusyBox. Instead, we use the names of the functions where the data flow interactions occur to automatically locate the features implemented by those functions.
For example, the following two functions in the Email product line access the same memory region, with ${printMail\_\_role\_\_Sign\_source\_sl\_s}$ storing to the same variable that \\ ${printMail\_\_role\_\_Verify\_dest\_sl\_l}$ loads from. The name of the function identifies its related feature, i.e., ${printMail\_\_role\_\_Sign()}$ belongs to the Sign feature and \\ ${printMail\_\_role\_\_Verify()}$ belongs to the Verify feature. The type of memory dependency here is Store-Load (sl): the Sign feature is the Source, with a store (s) to the variable, and the Verify feature is the Destination, with a load (l) from the variable that the Sign feature stored to.
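Under this naming convention, feature localization reduces to string parsing; a minimal sketch (the helper name is ours, assuming the tag format described above):

```python
def parse_dependency_tag(tag):
    """Split a tag such as 'printMail__role__Sign_source_sl_s' into
    its function, feature, endpoint, dependency type, and access kind."""
    func, rest = tag.split("__role__")
    feature, endpoint, dep_type, access = rest.split("_")
    return {"function": func,
            "feature": feature,    # e.g. 'Sign' or 'Verify'
            "endpoint": endpoint,  # 'source' or 'dest'
            "dep_type": dep_type,  # 'sl' (Store-Load) or 'ss' (Store-Store)
            "access": access}      # 's' (store) or 'l' (load)
```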
Step 3 of GOLDFINCH, as shown in Fig. \ref{fig:framework_withou_specs}, uses association rule mining, namely the Apriori algorithm \cite{borgelt2012frequent}, to learn the frequent item set of features that interact together in the feature dependency data. The feature dependencies extracted by PROMPT, once encoded, serve as the input to the Apriori unsupervised learning in order to find the most relevant features.
Here, GOLDFINCH correctly detects that the Sign and Verify features have a potential feature interaction. This is a true positive since it is a known unwanted feature interaction, as reported in Table \ref{tab:known_fi_email}.
As shown in Step 4 of Fig. \ref{fig:framework_withou_specs}, GOLDFINCH provides a set of association rules detected by the Apriori unsupervised learning model to developers to inform them of possible feature interaction pairs of concern. For our Sign/Verify example the following rule is output: \\
$\{printMail\_\_role\_\_Sign\_source\_sl\_s\}
\iff
\{printMail\_\_role\_\_Verify\_dest\_sl\_l\}$.
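The mining step can be sketched with a minimal frequent-pair pass (a toy stand-in for the Apriori implementation of \cite{borgelt2012frequent}; the transaction encoding is illustrative):

```python
from itertools import combinations

def frequent_pairs(transactions, min_support):
    """Count co-occurring tag pairs across dependency 'transactions'
    and keep those whose support meets the threshold."""
    counts = {}
    for items in transactions:
        for pair in combinations(sorted(set(items)), 2):
            counts[pair] = counts.get(pair, 0) + 1
    n = len(transactions)
    return {pair: c / n for pair, c in counts.items() if c / n >= min_support}
```

Each transaction holds the tags of one extracted dependency; a pair that survives the support threshold becomes a candidate association rule such as the Sign/Verify rule above.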
Table \ref{tab:productline_feature_dependency} summarizes the results from our experimental evaluation of GOLDFINCH for each of the three software product lines, Email, Elevator and Mine Pump. The third column of the table shows the number of known unwanted feature interactions for each product line \cite{apel2013feature}. These specifications are used as the ground truth for our evaluation of the results.
The fourth column shows the number of these known feature interactions that GOLDFINCH finds. For the Email software product line, GOLDFINCH finds all ten of the known unwanted feature interactions (100\%) described in \cite{apel2013feature}. For the Elevator product line, GOLDFINCH finds four out of the five (80\%) unwanted feature interactions in it. For the Mine Pump product line, GOLDFINCH finds three out of its four (75\%) unwanted feature interactions.
As we can see in Table 12, some constraint specifications are not implemented in the code. As a consequence, we could not detect two unwanted feature interactions in Elevator and Mine Pump.
The results in Section \ref{sec:eval_with} clearly show the advantage of using FINCH's supervised learning when unwanted feature interactions {\it are specified and available}. Moreover, the results in Section \ref{sec:eval_without} show that, when unwanted feature {\it specifications are not available}, GOLDFINCH's unsupervised learning successfully discovered many feature dependencies of concern that otherwise could elude developers.
For the many highly configurable, real-world systems without access to feature-constraint specifications, GOLDFINCH offers an effective way to surface and increase understanding of unrecognized feature dependencies.
{\bf Finding: When feature-constraint specifications were not available, our method efficiently discovered
the same feature-relevant data dependency that the static analysis approach discovered, as well as
17 of the 19 unwanted feature interactions in three product lines.}
\emph{RQ3.G: How scalable is the performance of our approach when specifications are unavailable?}
We evaluate the scalability of GOLDFINCH based on its handling of a large number of features in software product lines and of options in a highly configurable system.
As shown in Table \ref{tab:productline_feature_dependency},
GOLDFINCH detects unwanted feature interactions in three small software product lines of six to ten features in 3 to 34 seconds. As shown in Table \ref{tab:prompt_svf_busybox32},
GOLDFINCH performs comparably with SVF, a static analysis tool,
in analyzing the core utilities of a real-world configurable system
with 139 features.
Since SVF is a whole-program analysis tool, the time we report in
Table \ref{tab:prompt_svf_busybox32} is the total time SVF spends
on the BusyBox code base, which is approximately 3 times the total
time spent by PROMPT on the coreutils subsystem of BusyBox.
Similarly, the memory presented in Table \ref{tab:prompt_svf_busybox32} for
SVF is for the analysis of the entire BusyBox code base, which is
50 times the memory used by PROMPT for the analysis of the coreutils subsystem.
This is significant in that, although SVF's static analysis has better coverage than PROMPT's
API modeling based analysis, PROMPT detects the same
feature relevant dependency in the coreutils subsystem of BusyBox 1.32.0 as found by SVF while achieving a performance that is comparable in terms of running time and more optimized in terms of memory usage.
{\bf Finding: Results from our experiments indicate that our GOLDFINCH approach, for use when feature-constraint specifications are not available, is scalable in terms of the number of features and code size it can handle.}
\section{Introduction}
Rodent models are widely used in preclinical studies to evaluate treatments and understand pathophysiology and behavioral changes associated with a variety of brain diseases and injuries, including ischemia \cite{Lo2003-nn} and traumatic injury \cite{cabeen2020computational}. Magnetic Resonance Imaging (MRI) is an emerging, valuable tool in establishing tissue readout measures in such preclinical studies \cite{chamorro2021future}. Compared to histopathology and microscopy, MRI has the advantage of being more readily standardized, acquired at multiple time points, and applied to measure multiple tissue characteristics simultaneously, and it has a more direct translation path to treating patients. Preclinical MRI analysis pipelines often involve a complex sequence of processes, but an essential initial step of most pipelines is brain extraction, which isolates the brain from the skull and head. A successful brain extraction tool should be able to perform on a large data set that results from a variety of acquisition setups and image contrasts. A variety of robust brain extraction tools have been developed for human MRI \cite{ashburner2012spm} \cite{smith2000bet}; however, these tools are not directly applicable to rodent data due to structural differences in the tissue and differences in image acquisition, e.g. field strength and imaging sequences. For rodent models, there is no standardized automated brain extraction tool that performs as well as those built for human models, perhaps due to heterogeneity of rodent imaging approaches and a relatively smaller demand.
Traditional brain extraction tools for rodent MRI have demonstrated successful automation with high accuracy \cite{oguz2014rats} \cite{chou2011robust} \cite{liu2020automatic} \cite{ruan2021automated}; however, there remain practical limitations due to a need for parameter optimization based on the specific sequences used and a high failure rate when applying tools to new datasets, particularly due to pathology in preclinical trials involving brain trauma. State-of-the-art approaches for brain extraction in non-human models now use convolutional neural networks (CNNs) \cite{hsu2020automatic} \cite{de2021automated} \cite{pontes2021deep} \cite{wang2021u}, which employ the U-Net architecture developed by Ronneberger et al. \cite{ronneberger2015u}; this architecture has been shown to provide superior performance across a wide range of biomedical segmentation tasks \cite{isensee2021nnu}. Nearly all previous work has shown that U-Nets applied to rodent models significantly improve the accuracy and reduce the time of rodent brain extraction. However, there remain several practical questions to address regarding how such U-net models generalize. How do they handle multiple image contrasts, data from different scanners, and longitudinal changes due to pathology?
We investigate these questions in the present paper, with the larger goal of developing a brain extraction tool for an image analysis pipeline for the Stroke Preclinical Assessment Network (SPAN) \cite{lyden2022stroke}. SPAN is a multi-center study funded by the National Institutes of Neurological Disorders and Stroke (NINDS) to investigate putative stroke treatments and to address critical issues of rigor, transparency, and reproducibility in preclinical stroke research. The network includes six research universities and a coordinating center (CC) who manage enrollment of animals, experimental stroke, and blinded and randomized treatment with several candidate cerebroprotectants. SPAN is an ideal environment for testing the generalizability of preclinical image analytics, as it provides a diverse dataset varied by site, time, and imaging contrast. SPAN also provides a uniquely large dataset for this evaluation, as our present analysis includes data from 1368 imaging sessions with a total of 5472 image volumes. In comparison, the largest preclinical brain extraction evaluation was performed by De Feo et al. with 1782 image volumes, collected cross-sectionally from a single imaging center with minimal gross pathology \cite{de2021automated}. Our experiments examine the performance of a typical U-net CNN model across the wide range of data found in SPAN, including multiple quantitative imaging contrasts, such as T2 and apparent diffusivity coefficient (ADC) maps; multiple time points after experimental stroke; multiple imaging centers with different hardware configurations. We identify relevant variables that influence the performance of U-net brain extraction models and demonstrate that they can provide excellent performance across these various data conditions.
\section{Methods}
\begin{figure}
\includegraphics[width=\textwidth]{data24.pdf}
\caption{Example data showing imaging data from SPAN, which included four quantitative parameter maps collected at six distinct imaging centers. }
\label{fig:images}
\end{figure}
This section focuses on the data used in our experiments, how we constructed our brain extraction models, and our experimental design. As our work is based around SPAN and ultimately creating a robust image analysis pipeline for the network, our primary goal in this paper is to understand the capabilities and limitations of our brain extraction approach. We initially developed a traditional rule-based approach, and while this worked in some cases, it was not sufficiently robust for the high-throughput design of SPAN, so we explored more state-of-the-art U-net approaches. We describe the data collection scheme of SPAN, the design of these two brain extraction approaches, and how we used SPAN data to comprehensively evaluate the performance of the U-net approach across data types.
{\bf Imaging protocol and data collection:} Data were collected from a mouse model with experimental middle cerebral artery occlusion (MCAO) at day 2 and day 30 after injury. With respective ethics approval, imaging was performed across six imaging centers on Bruker scanners with variable field strengths including 7T, 9.4T, 11.7T, with one site using a surface coil and all others using a volume coil. The multi-parameter imaging protocol included multi-echo T2 and diffusion-weighted MRI (DWI), which were collected at 150 $\mu$m$^2$ coronal in-plane resolution and 500 $\mu$m slice thickness. All sites used three b-values for DWI (0, 500, 1000 s/mm$^2$) and the T2 protocol used either three echoes (0, 45, 75 ms) or ten echoes (equally spaced from 0 to 100 ms). 100 mice were scanned in an initial pilot phase of SPAN to establish SOPs. Following this, SPAN Stage One proceeded to acquire MRI data from 780 animals with a total of 1,368 scanning sessions, accounting for mortality after injury. All data were routinely uploaded by each site in the DICOM format to the LONI Image Database Archive \cite{Crawford2016-yx} for long term storage and analytics. Example imaging data is shown in Fig. \ref{fig:images}.
{\bf Pre-processing and quality assessment:} While the sequences are similar across sites, there are subtle differences in the image data structure that we first reconcile by parsing DICOM tags, sorting by imaging parameters, fixing image coordinates, converting using dcm2nii, and finally producing a set of matching NIfTI files for each case. We applied adaptive non-local means denoising \cite{manjon2010adaptive} with voxelwise noise estimation, and to account for differences in image grids between scans, we also uniformly resample the images at 150 $\mu$m isotropic resolution using tricubic interpolation. We then perform image quality assessment for each modality by first segmenting foreground and background using Otsu thresholding and computing the signal-to-noise ratio, contrast-to-noise ratio, and signal variance-to-noise variance ratio. We then performed relaxometry to derive quantitative parameter maps, which included a signal baseline and rate of decay for the multi-echo T2 scan (T2$_{base}$ and T2$_{rate}$) and DWI (ADC$_{base}$ and ADC$_{rate}$) scans. For simplicity of presentation, we encoded all T2$_{rate}$ values as the inverse relaxation rate (R2). These steps were implemented using the Quantitative Imaging Toolkit (QIT) \cite{cabeen2018quantitative}.
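The relaxometry step fits a mono-exponential decay $S(t) = S_{\mathrm{base}} \, e^{-\mathrm{rate} \cdot t}$ at each voxel; a minimal log-linear version of such a fit (an illustrative sketch, not QIT's implementation) is:

```python
import math

def fit_decay(times, signals):
    """Estimate (base, rate) for S(t) = base * exp(-rate * t) by
    least squares on log(S); assumes strictly positive signals."""
    n = len(times)
    ys = [math.log(s) for s in signals]
    mx, my = sum(times) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(times, ys))
             / sum((x - mx) ** 2 for x in times))
    return math.exp(my - slope * mx), -slope  # (base, rate)
```

For the T2 scans the fitted rate is the relaxation rate R2; the analogous fit over b-values yields the ADC parameters.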
{\bf Traditional Rule-based Segmentation Approach:} Our initial brain extraction stage of the SPAN pipeline used a traditional rule-based approach, in which a series of image processing steps derives a brain mask. This included the following steps: (1) foreground extraction with an Otsu threshold, (2) histogram-based contrast normalization, (3) edge-preserving smoothing with a non-local means filter \cite{manjon2010adaptive}, (4) gradient magnitude estimation with a Sobel filter bank, (5) thresholding and graph-based segmentation \cite{felzenszwalb2004efficient}, (6) mathematical morphology including opening, closing, and filling, and (7) regularization with a Markov random field \cite{felzenszwalb2006efficient}. We applied this procedure to the ADC$_{base}$ contrast because it showed the least lesion contrast. This approach was effective in a large proportion of cases (over 80\%), and most failures were due to partial volume effects that blurred the skull boundary, resulting in either small errors on the superficial surface of the brain or catastrophic errors in which the brain mask ``leaked'' into the rest of the head. We also experimented with the AFNI rodent segmentation method 3dSkullStrip \cite{cox1996afni} as well as BET \cite{smith2000bet} with the input parameters ({\it -shrink\_fac .8 -rat}) and ({\it -f 0.8 -R -m}), respectively; however, after quality control, we did not find these to provide improved performance over the described rule-based approach. These challenges posed a limitation for its use in SPAN, as we needed to preserve as much data as possible, so we used this approach to bootstrap a more accurate and modern approach using U-nets.
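As an illustration of step (1), a histogram-based Otsu threshold can be sketched as follows (a minimal version for integer intensities; not the pipeline's exact code):

```python
def otsu_threshold(values, bins=256):
    """Return the integer threshold maximizing between-class variance."""
    hist = [0] * bins
    for v in values:
        hist[v] += 1
    total = len(values)
    sum_all = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w0 = sum0 = 0
    for t in range(bins):
        w0 += hist[t]          # foreground/background class weights
        if w0 == 0 or w0 == total:
            continue
        sum0 += t * hist[t]
        m0 = sum0 / w0                          # background mean
        m1 = (sum_all - sum0) / (total - w0)    # foreground mean
        var = w0 * (total - w0) * (m0 - m1) ** 2
        if var > best_var:
            best_t, best_var = t, var
    return best_t
```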
{\bf Modern Neural Network Approach:} We developed a brain extraction tool using a CNN with the U-net architecture proposed by Ronneberger et al. \cite{ronneberger2015u}, which has demonstrated superior performance in previous rodent imaging studies \cite{Hsu2020-ku} \cite{De_Feo2021-xy}. Our model was implemented in PyTorch and included five contraction/expansion stages, a kernel size of 64, and an input resolution of 128x128. We trained a single 2D U-net with data from all image planes, and during inference, we applied the 2D U-net three times to include every possible image plane; we then took the average prediction from the three applications at each voxel to obtain the final prediction. We trained our model with the ADAM optimizer with a cross-entropy loss, a learning rate of 0.0001, a batch size of 20, and included albumentations-based data augmentation for translation, rotation, scaling, contrast, and deformations \cite{buslaev2020albumentations}. Our system was a Lambda Labs workstation with Ubuntu 20.04, an AMD Ryzen Threadripper 3970X 32-Core CPU, and an Nvidia 1080 Ti 12GB GPU. We created our training dataset from the best-performing results of our previous rule-based approach. In particular, we selected training examples from 180 cases and hold-out testing and validation examples from 30 cases; each was split roughly evenly between imaging centers and time points. We trained the model for 10 epochs on the training data, selected the best model with the validation data, and finally evaluated hold-out performance with the test data. We report our accuracy estimates with the Dice coefficient \cite{dice1945measures}. This model is the focus of our evaluation experiments, which are described in the next section.
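Two ingredients of this setup, the Dice coefficient and the tri-planar fusion of 2D predictions, can be sketched as follows (illustrative Python over flat lists; the actual implementation operates on image volumes in PyTorch):

```python
def dice(a, b):
    """Dice coefficient between two binary masks given as flat 0/1 lists."""
    inter = sum(x & y for x, y in zip(a, b))
    size = sum(a) + sum(b)
    return 2.0 * inter / size if size else 1.0

def fuse_triplanar(p_axial, p_coronal, p_sagittal, thresh=0.5):
    """Average per-voxel probabilities predicted from the three slicing
    planes and threshold the mean to obtain the final binary mask."""
    return [int((a + c + s) / 3.0 > thresh)
            for a, c, s in zip(p_axial, p_coronal, p_sagittal)]
```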
{\bf Experimental design:} Because CNNs are black-box approaches by nature, a major question is how they perform across different conditions. Substantial previous work has tested how different types of U-net architecture affect segmentation performance \cite{de2021automated} \cite{isensee2021nnu}. In contrast, we sought to understand how U-net performance varies across different types of data. SPAN provides a wide range of data types, varying across imaging contrasts, scanner hardware, and longitudinal time points, and we investigated how a typical U-net architecture performs relative to these different conditions. This is important for SPAN, and it may also provide useful knowledge about how these models generalize to other preclinical studies and when their scope is expanded. Our experiments systematically tested these factors using the SPAN data as follows. We fit eight U-net models on different combinations of data and investigated how different data characteristics relate to segmentation performance. Among these eight models, we trained models that include all four quantitative parameters as multi-channel input (R2$_{base}$, R2$_{rate}$, ADC$_{base}$, and ADC$_{rate}$) as well as a single-channel model trained on the concatenated whole of these image contrasts. We also examined the performance of four models trained separately on each image contrast. Finally, we investigated the ability of the model to generalize to new sites by holding out data from three of the sites. Specifically, for this site-based test, the training and validation data were from three sites (B, D, F) and the testing data was only from the three other distinct sites (A, C, E); thus it is designed to indicate how well the model might generalize to unseen data from sites added to SPAN in the future. We conducted this site-based test twice, for both multi- and single-channel models.
Note that because there are four image contrasts, the single-channel model has four times as much data as the multi-channel model. As a final evaluation step, we applied our brain extraction tool to the entirety of the SPAN Stage One dataset (N = 1368) and qualitatively assessed the results to determine if they pass quality control; we report this outcome as the ``failure rate''. The eight models we trained are referred to as follows: M-full-multi (MFM), M-full-single (MFS), M-half-multi (MHM), M-half-single (MHS), M-contrast-T2-base (MT2B), M-contrast-T2-rate (MT2R), M-contrast-ADC-base (MADCB), and M-contrast-ADC-rate (MADCR).
\section{Results and Discussion}
In this section, we present results from the experimental evaluation of the U-net model across data types. We first examine results from a multi-channel U-Net model trained on the full dataset, and then compare it to a single-channel model where each scan is fed individually into the model. We then report how these same models perform for specific sites and specific time points. Finally, we report how restricted models perform, i.e., models trained on only half of the sites (with the remaining half left for testing), as well as models trained on specific imaging contrasts.
\begin{table*}
\centering
\ra{1.3}
\caption{Quantitative results from our experiments. The top table shows results from two models: one trained using multi-channel input with four contrasts and the other trained with a single channel input taking each contrast separately. The first column shows the overall results and the following columns show results broken down by site and time point. The bottom table shows results for more specific models which were trained on a subset. The first four columns show results from training the model on individual image contrasts. The last two columns show results from training a model on data from sites B, D and F, and then testing it on sites A, C, and E to gauge generalization across sites.}
\label{table:results}
\vspace*{1em}
\begin{tabular}{@{}lcccccccccccc@{}}\toprule
& & \phantom{abc}& \multicolumn{6}{c}{Split by Imaging Site} & \phantom{abc} & \multicolumn{2}{c}{Split by Time}\\
\cmidrule{4-9} \cmidrule{11-12}
Channels & All & & A & B & C & D & E & F && Day 2 & Day 30 \\
\midrule
Multi & 0.964 && 0.974 & 0.976 & 0.980 & 0.972 & 0.965 & 0.914 && 0.953 & 0.974 \\
Single & 0.957 && 0.967 & 0.967 & 0.973 & 0.952 & 0.964 & 0.919 && 0.946 & 0.968 \\
\bottomrule
\end{tabular}
\newline
\vspace*{1em}
\newline
\begin{tabular}{@{}lccccccc@{}}\toprule
& \multicolumn{4}{c}{Models for Each Contrast} & \phantom{abc}& \multicolumn{2}{c}{Half-site Restricted Model} \\
\cmidrule{2-5} \cmidrule{7-8}
& ADC$_{base}$ & ADC$_{rate}$ & R2$_{base}$ & R2$_{rate}$ && Multi-channel & Single-channel \\
\midrule
Dice Score & 0.964 & 0.958 & 0.935 & 0.954 && 0.953 & 0.949 \\
\bottomrule
\end{tabular}
\end{table*}
\begin{figure}[!t]
\includegraphics[width=\textwidth]{results_figures.pdf}
\caption{Results (left to right): 1) Comparison of traditional segmentation and MFM U-Net on a fail and a training case; 2) MFM vs MFS total dice scores; 3) MFM vs MFS broken down by time point (early vs late); 4) MFM and MFS total dice scores; 5) All 4 M-Single Contrast models (MADCB,MADCR,MT2B, MT2R); 6) MFS broken down by site.}
\label{fig:plots}
\end{figure}
Our results are summarized quantitatively in Table 1 and visualized in Fig. \ref{fig:plots}. M-full-multi had the highest performance with a Dice score of 0.964. The M-full-single had only slightly lower performance than the multi-channel model with a Dice of 0.957. Looking at the data qualitatively, both M-full-multi and M-full-single model had a 0\% failure rate. This trend was consistent across sites and time points. When broken down by site, the Dice scores reflected image quality, with Site F having the highest SNR and the lowest dice scores of 0.914 and 0.919 for the multi and single channel models respectively. This site was the sole site with a surface receive coil (the other sites had volume coils), which may also explain the lower performance. Considering the restricted models, training the models on half (B,D,F) of the sites and testing it on the other half (A,C,E) resulted in a mean Dice scores of 0.953 (M-half-multi) and .954 (M-half-single), indicating that both models can perform robustly on data from sites added to SPAN in the future, without being trained on data from those sites. Considering the restricted models for individual contrasts, M-contrast-ADC-base, M-contrast-ADC-rate, M-contrast-T2-base, M-contrast-T2-rate (Table 1) also had robust Dice scores, with T2 baseline being the lowest at 0.935, indicating that the model can be trained on a single contrast and still perform robustly on that contrast. In both M-full-multi and M-full-single, there was an increase in performance between the early and late time points. This is likely because the brain has a more regular appearance at the late time point 30 days post injury and consists of more healthy tissue, while the early time point 2 days post stroke, it has greater injury and morphometric abnormality, leading to slightly lower performance.
We selected a group of 206 cases that failed catastrophically when processed by the rule-based brain extraction approach, and we looked at the qualitative performance of the U-net in these specific cases. The M-full-multi successfully extracted all of these cases, and two independent quality checks of the segmented data from the first stage of the SPAN study (1368 4-channel scans) showed a 4\% failure rate on large multi-site preclinical mouse ADC and T2 datasets with stroke pathology. Some of the cases that failed quality control were excluded due to motion artifact, operator error, etc.; because it includes these other factors, it is more general than only the failures from the brain extraction step.
Considering our findings in relation to previous work, the Dice scores of our models are comparable to an existing study of the U-Net’s performance on large preclinical data conducted by De Feo et al. in 2021 while evaluating network architectures. The model of De Feo et al. achieved a brain mask Dice score of 0.978 when tested on 1782 T2 volumes of healthy and Huntington mice \cite{de2021automated}. It should be noted that while the Huntington mice are a disease model, they lack the morphometric abnormality found in stroke cases, so we could reasonable expect the performance to be higher. Our results show that when the U-Net is evaluated on thicker slices and a variety of contrasts, image channels, time points, and sites, it maintains the robustness demonstrated by De Feo et al.
\noindent {\bf Conclusions:} We evaluated the U-Net neural network model’s ability to segment a large dataset from a multi-site preclinical rodent stroke imaging study, exploring a myriad of factors related to data quality and character. We have demonstrated that the U-Net performs reliable brain extraction on mice data collected from various imaging hardware, multiple time points, with varying contrasts and field strengths where traditional methods failed. Quality control of 1,368 scans segmented by our model has shown that this technique can be successfully used to create a robust pipeline for mice brain extraction in high throughput preclinical imaging studies. We also found that a single-channel model of the U-Net is nearly as robust as the multi-channel version, allowing flexibility in reducing the scanning protocol in subsequent stages SPAN going forward. Future opportunities include testing the U-Net’s generalization to rats, and specimens that are obese, aged, and female. We may further evaluate the speed of this approach on various hardware available to labs, i.e. comparing performance on single GPU machine or CPU-based grid computing environments. Looking forward, our model can help inform the design of preclinical studies and potentially improve their scalability and reliability in the future. We provide an open-source implementation and trained models of our method online at \url{http://github.com/cabeen/neu-net}.\\
\noindent {\bf Acknowledgements:} Work reported here was supported by grant NS U24 NS113452 (PL). RPC is supported by the CZI Imaging Scientist Award Program, under grant number 2020-225670 from the Chan Zuckerberg Initiative DAF, an advised fund of Silicon Valley Community Foundation.
\bibliographystyle{splncs04}
\section{Introduction}
Rodent models are widely used in preclinical studies to evaluate treatments and understand the pathophysiology and behavioral changes associated with a variety of brain diseases and injuries, including ischemia \cite{Lo2003-nn} and traumatic injury \cite{cabeen2020computational}. Magnetic Resonance Imaging (MRI) is an emerging, valuable tool for establishing tissue readout measures in such preclinical studies \cite{chamorro2021future}. Compared to histopathology and microscopy, MRI has the advantage of being more readily standardized, acquired at multiple time points, and applied to measure multiple tissue characteristics simultaneously, and it has a more direct translation path to treating patients. Preclinical MRI analysis pipelines often involve a complex sequence of processes, but an essential initial step of most pipelines is brain extraction, which isolates the brain from the skull and head. A successful brain extraction tool should be able to perform on a large data set that results from a variety of acquisition setups and image contrasts. A variety of robust brain extraction tools have been developed for human MRI \cite{ashburner2012spm} \cite{smith2000bet}; however, these tools are not directly applicable to rodent data due to structural differences in the tissue and differences in image acquisition, e.g. field strength and imaging sequences. For rodent models, there is no standardized automated brain extraction tool that performs as well as those built for humans, perhaps due to the heterogeneity of rodent imaging approaches and a relatively smaller demand.
Traditional brain extraction tools for rodent MRI have demonstrated successful automation with high accuracy \cite{oguz2014rats} \cite{chou2011robust} \cite{liu2020automatic} \cite{ruan2021automated}; however, there remain practical limitations due to a need for parameter optimization based on the specific sequences used and a high failure rate when applying these tools to new datasets, particularly due to pathology in preclinical trials involving brain trauma. State-of-the-art approaches for brain extraction in non-human models now use convolutional neural networks (CNNs) \cite{hsu2020automatic} \cite{de2021automated} \cite{pontes2021deep} \cite{wang2021u}, which employ the U-Net architecture developed by Ronneberger et al. \cite{ronneberger2015u}, which has been shown to provide superior performance across a wide range of biomedical segmentation tasks \cite{isensee2021nnu}. Nearly all previous work has shown that the U-Net, when used on rodent models, significantly improves the accuracy and reduces the time of rodent brain extraction. However, there remain several practical questions to address regarding how such U-net models generalize. How do they handle multiple image contrasts, data from different scanners, and longitudinal changes due to pathology?
We investigate these questions in the present paper, with the larger goal of developing a brain extraction tool for an image analysis pipeline for the Stroke Preclinical Assessment Network (SPAN) \cite{lyden2022stroke}. SPAN is a multi-center study funded by the National Institute of Neurological Disorders and Stroke (NINDS) to investigate putative stroke treatments and to address critical issues of rigor, transparency, and reproducibility in preclinical stroke research. The network includes six research universities and a coordinating center (CC) that manages the enrollment of animals, experimental stroke, and blinded and randomized treatment with several candidate cerebroprotectants. SPAN is an ideal environment for testing the generalizability of preclinical image analytics, as it provides a diverse dataset varied by site, time, and imaging contrast. SPAN also provides a uniquely large dataset for this evaluation, as our present analysis includes data from 1368 imaging sessions with a total of 5472 image volumes. In comparison, the largest preclinical brain extraction evaluation was performed by De Feo et al. with 1782 image volumes, collected cross-sectionally from a single imaging center with minimal gross pathology \cite{de2021automated}. Our experiments examine the performance of a typical U-net CNN model across the wide range of data found in SPAN, including multiple quantitative imaging contrasts, such as T2 and apparent diffusion coefficient (ADC) maps; multiple time points after experimental stroke; and multiple imaging centers with different hardware configurations. We identify relevant variables that influence the performance of U-net brain extraction models and demonstrate that they can provide excellent performance across these various data conditions.
\section{Methods}
\begin{figure}
\includegraphics[width=\textwidth]{data24.pdf}
\caption{Example imaging data from SPAN, which included four quantitative parameter maps collected at six distinct imaging centers. }
\label{fig:images}
\end{figure}
This section describes the data used in our experiments, how we constructed our brain extraction models, and our experimental design. As our work is based around SPAN and ultimately aims to create a robust image analysis pipeline for the network, the primary goal of this paper is to understand the capabilities and limitations of our brain extraction approach. We initially developed a traditional rule-based approach, and while this worked in some cases, it was not sufficiently robust for the high-throughput design of SPAN; we therefore explored more modern U-net approaches. We describe the data collection scheme of SPAN, the design of these two brain extraction approaches, and how we used SPAN data to comprehensively evaluate the performance of the U-net approach across data types.
{\bf Imaging protocol and data collection:} Data were collected from a mouse model with experimental middle cerebral artery occlusion (MCAO) at day 2 and day 30 after injury. With respective ethics approval, imaging was performed across six imaging centers on Bruker scanners with variable field strengths, including 7T, 9.4T, and 11.7T, with one site using a surface coil and all others using a volume coil. The multi-parameter imaging protocol included multi-echo T2 and diffusion-weighted MRI (DWI), which were collected at 150 $\mu$m$^2$ coronal in-plane resolution and 500 $\mu$m slice thickness. All sites used three b-values for DWI (0, 500, 1000 s/mm$^2$), and the T2 protocol used either three echoes (0, 45, 75 ms) or ten echoes (equally spaced from 0 to 100 ms). An initial pilot phase of SPAN scanned 100 mice to establish SOPs. Following this, SPAN Stage One proceeded to acquire MRI data from 780 animals with a total of 1,368 scanning sessions, accounting for mortality after injury. All data were routinely uploaded by each site in the DICOM format to the LONI Image Database Archive \cite{Crawford2016-yx} for long-term storage and analytics. Example imaging data are shown in Fig. \ref{fig:images}.
{\bf Pre-processing and quality assessment:} While the sequences are similar across sites, there are subtle differences in the image data structure, which we first reconciled by parsing DICOM tags, sorting by imaging parameters, fixing image coordinates, converting using dcm2nii, and finally producing a set of matching NIfTI files for each case. We applied adaptive non-local means denoising \cite{manjon2010adaptive} with voxelwise noise estimation, and to account for differences in image grids between scans, we also uniformly resampled the images at 150 $\mu$m isotropic resolution using tricubic interpolation. We then performed image quality assessment for each modality by first segmenting foreground and background using Otsu thresholding and computing the signal-to-noise ratio, contrast-to-noise ratio, and signal variance-to-noise variance ratio. We then performed relaxometry to derive quantitative parameter maps, which included a signal baseline and rate of decay for the multi-echo T2 scan (T2$_{base}$ and T2$_{rate}$) and the DWI scan (ADC$_{base}$ and ADC$_{rate}$). For simplicity of presentation, we encoded all T2$_{rate}$ values as the inverse relaxation rate (R2). These steps were implemented using the Quantitative Imaging Toolkit (QIT) \cite{cabeen2018quantitative}.
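As an illustration of the quality assessment and relaxometry steps, the following is a minimal Python sketch (our own, not the QIT implementation; the function names are ours): an Otsu-based SNR estimate and a log-linear fit of a monoexponential decay of the kind used to derive the baseline and rate maps.

```python
import numpy as np

def otsu_threshold(img, nbins=256):
    """Simple Otsu threshold: maximize between-class variance of the histogram."""
    hist, edges = np.histogram(img.ravel(), bins=nbins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    p = hist.astype(float) / hist.sum()
    omega = np.cumsum(p)                 # class-0 probability
    mu = np.cumsum(p * centers)          # class-0 cumulative mean
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b[~np.isfinite(sigma_b)] = 0.0
    return centers[np.argmax(sigma_b)]

def snr(img):
    """SNR as mean foreground signal over background standard deviation."""
    t = otsu_threshold(img)
    fg, bg = img[img > t], img[img <= t]
    return fg.mean() / bg.std()

def fit_monoexponential(times, signals):
    """Log-linear fit of S(t) = S0 * exp(-r * t); returns (S0, r)."""
    slope, intercept = np.polyfit(times, np.log(signals), 1)  # log S = intercept + slope * t
    return np.exp(intercept), -slope
```

Here the base and rate maps correspond to `S0` and `r` fitted voxelwise at each echo time (for T2) or b-value (for ADC).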
{\bf Traditional Rule-based Segmentation Approach:} Our initial brain extraction stage of the SPAN pipeline used a traditional rule-based approach, in which a series of image processing steps derives a brain mask. This included the following steps: (1) foreground extraction with an Otsu threshold, (2) histogram-based contrast normalization, (3) edge-preserving smoothing with a non-local means filter \cite{manjon2010adaptive}, (4) gradient magnitude estimation with a Sobel filter bank, (5) thresholding and graph-based segmentation \cite{felzenszwalb2004efficient}, (6) mathematical morphology including opening, closing, and filling, and (7) regularization with a Markov random field \cite{felzenszwalb2006efficient}. We applied this procedure to the ADC$_{base}$ contrast because it showed the least lesion contrast. This approach was effective in a large proportion of cases (over 80\%), and most failures were due to partial volume effects that blurred the skull boundary, resulting in either small errors on the superficial surface of the brain or catastrophic errors in which the brain mask ``leaked'' into the rest of the head. We also experimented with the AFNI rodent segmentation method 3dSkullStrip \cite{cox1996afni} as well as BET \cite{smith2000bet} with the input parameters ({\it -shrink\_fac .8 -rat}) and ({\it -f 0.8 -R -m}), respectively; however, after quality control, we did not find these to provide improved performance over the described rule-based approach. These challenges posed a limitation for its use in SPAN, as we needed to preserve as much data as possible, so we used this approach to bootstrap a more accurate and modern approach based on U-nets.
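As a rough sketch of steps (1)-(7), the fragment below strings together thresholding, smoothing, Sobel gradients, and mathematical morphology with SciPy; it is a simplified stand-in (a global mean threshold and Gaussian smoothing replace Otsu and non-local means, and the graph-based and MRF steps are omitted), not the production code.

```python
import numpy as np
from scipy import ndimage as ndi

def rule_based_mask(img, grad_pct=75):
    """Toy rule-based extraction: threshold, smooth, edge, morphology,
    then keep the largest connected component as the brain."""
    fg = img > img.mean()                                        # (1) crude foreground
    smooth = ndi.gaussian_filter(img.astype(float), sigma=1.0)   # (3) smoothing stand-in
    gx, gy = ndi.sobel(smooth, axis=0), ndi.sobel(smooth, axis=1)
    grad = np.hypot(gx, gy)                                      # (4) gradient magnitude
    interior = fg & (grad <= np.percentile(grad, grad_pct))      # (5) drop strong edges
    m = ndi.binary_opening(interior)                             # (6) morphology
    m = ndi.binary_closing(m, iterations=2)
    m = ndi.binary_fill_holes(m)
    lbl, n = ndi.label(m)
    if n == 0:
        return m
    sizes = ndi.sum(m, lbl, index=np.arange(1, n + 1))
    return lbl == (1 + np.argmax(sizes))
```

Even in this toy form, the largest-component step illustrates why such pipelines can ``leak'': if the gradient band at the skull boundary is blurred, the brain and head merge into one component.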
{\bf Modern Neural Network Approach:} We developed a brain extraction tool using a CNN with the U-net architecture proposed by Ronneberger et al. \cite{ronneberger2015u}, which has demonstrated superior performance in previous rodent imaging studies \cite{Hsu2020-ku} \cite{De_Feo2021-xy}. Our model was implemented in PyTorch and included five contraction/expansion stages, a kernel size of 64, and an input resolution of 128x128. We trained a single 2D U-net with data from all image planes; during inference, we applied the 2D U-net three times, once along each image plane, and took the voxelwise average of the three predictions as the final output. We trained our model with the ADAM optimizer with a cross-entropy loss, a learning rate of 0.0001, and a batch size of 20, and included albumentations-based data augmentation for translation, rotation, scaling, contrast, and deformations \cite{buslaev2020albumentations}. Our system was a Lambda Labs workstation with Ubuntu 20.04, an AMD Ryzen Threadripper 3970X 32-Core CPU, and an Nvidia 1080 Ti 12GB GPU. We created our training dataset from the best performing results of our previous rule-based approach. In particular, we selected training examples from 180 cases and hold-out testing and validation examples from 30 cases; each set was split roughly evenly between imaging centers and time points. We trained the model for 10 epochs on the training data, selected the best model with the validation data, and finally evaluated hold-out performance with the test data. We report our accuracy estimates with the Dice coefficient \cite{dice1945measures}. This model is the focus of our evaluation experiments, described in the next section.
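Two pieces of this setup are simple enough to state concretely: the Dice coefficient used for evaluation and the three-plane averaging used at inference. The sketch below is our own minimal version, with a generic `predict_slice` callable standing in for the trained 2D U-net.

```python
import numpy as np

def dice(a, b, eps=1e-8):
    """Dice coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum() + eps)

def three_plane_predict(volume, predict_slice):
    """Apply a 2D slice-wise predictor along each plane of a 3D volume
    and average the three probability maps, as done at inference time."""
    probs = np.zeros_like(volume, dtype=float)
    for axis in range(3):
        moved = np.moveaxis(volume, axis, 0)
        pred = np.stack([predict_slice(s) for s in moved])
        probs += np.moveaxis(pred, 0, axis)
    return probs / 3.0
```

The averaging acts as a small ensemble: a voxel mislabeled in one slicing direction can be outvoted by the other two.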
{\bf Experimental design:} Because CNNs are black-box approaches by nature, a major question is how they perform across different conditions. Substantial previous work has tested how different types of U-net architecture affect segmentation performance \cite{de2021automated} \cite{isensee2021nnu}. In contrast, we sought to understand how U-net performance varies across different types of data. SPAN provides a wide range of data types, varying across imaging contrasts, scanner hardware, and longitudinal time points, and we investigated how a typical U-net architecture performs relative to these different conditions. This is important for SPAN, but it may also provide useful knowledge about how these models generalize to other preclinical studies and to future expansions in scope. Our experiments systematically tested these factors using the SPAN data as follows. We fit eight U-net models on different combinations of data and investigated how different data characteristics relate to segmentation performance. Among these eight models, we trained a model that includes all four quantitative parameters as multi-channel input (R2$_{base}$, R2$_{rate}$, ADC$_{base}$, and ADC$_{rate}$) as well as a single-channel model trained on the concatenated whole of these image contrasts. We also examined the performance of four models trained separately on each image contrast. Finally, we investigated the ability of the model to generalize to new sites by holding out data from three of the sites. Specifically, for this site-based test, the training and validation data were from three sites (B, D, F) and the testing data were only from the three other distinct sites (A, C, E); it is thus designed to indicate how well the model might generalize to unseen data from sites added to SPAN in the future. We conducted this site-based test twice, for both multi- and single-channel models.
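The half-site generalization split described above amounts to partitioning cases by site; a trivial sketch (illustrative only, with hypothetical case identifiers) follows.

```python
def split_by_site(cases, train_sites=("B", "D", "F")):
    """Partition (case_id, site) pairs into training/validation sites
    versus held-out test sites, as in the half-site experiment."""
    train = [cid for cid, site in cases if site in train_sites]
    test = [cid for cid, site in cases if site not in train_sites]
    return train, test
```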
Note that, because there are four image contrasts, the single-channel models have four times the data of the multi-channel models. As a final evaluation step, we applied our brain extraction tool to the entirety of the SPAN Stage One dataset (N = 1368) and qualitatively assessed the results to determine whether they pass quality control; we report this outcome as the ``failure rate''. The eight models we trained will be referred to as follows: M-full-multi (MFM), M-full-single (MFS), M-half-multi (MHM), M-half-single (MHS), M-contrast-T2-base (MT2B), M-contrast-T2-rate (MT2R), M-contrast-ADC-base (MADCB), and M-contrast-ADC-rate (MADCR).
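The relation between the multi-channel and single-channel training sets can be made concrete with array shapes. The following sketch (ours; names are illustrative) shows why the single-channel configuration yields four times as many training examples:

```python
import numpy as np

def make_training_arrays(contrasts):
    """Given a dict of per-contrast slice stacks, each of shape (N, H, W),
    build (a) a multi-channel array of shape (N, C, H, W) and (b) a
    single-channel array of shape (N*C, 1, H, W) in which every contrast
    becomes a separate training example."""
    stack = np.stack(list(contrasts.values()), axis=1)      # (N, C, H, W)
    multi = stack
    single = stack.reshape(-1, *stack.shape[2:])[:, None]   # (N*C, 1, H, W)
    return multi, single
```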
\section{Results and Discussion}
In this section, we present results from experimental evaluation of the U-net model across data types. We first examine results from a multi-channel U-Net model trained on the full dataset, and then compare it to a single-channel model where each scan is fed individually into the model. We then report how these same models perform for specific sites and specific time points. Finally, we report how restricted models perform, i.e. models restricted to training on only half of the sites (with the remaining half held out for testing), as well as models trained on specific imaging contrasts.
\begin{table*}
\centering
\ra{1.3}
\caption{Quantitative results from our experiments. The top table shows results from two models: one trained using multi-channel input with four contrasts and the other trained with a single channel input taking each contrast separately. The first column shows the overall results and the following columns show results broken down by site and time point. The bottom table shows results for more specific models which were trained on a subset. The first four columns show results from training the model on individual image contrasts. The last two columns show results from training a model on data from sites B, D and F, and then testing it on sites A, C, and E to gauge generalization across sites.}
\label{table:results}
\vspace*{1em}
\begin{tabular}{@{}lcccccccccccc@{}}\toprule
& & \phantom{abc}& \multicolumn{6}{c}{Split by Imaging Site} & \phantom{abc} & \multicolumn{2}{c}{Split by Time}\\
\cmidrule{4-9} \cmidrule{11-12}
Channels & All & & A & B & C & D & E & F && Day 2 & Day 30 \\
\midrule
Multi & 0.964 && 0.974 & 0.976 & 0.980 & 0.972 & 0.965 & 0.914 && 0.953 & 0.974 \\
Single & 0.957 && 0.967 & 0.967 & 0.973 & 0.952 & 0.964 & 0.919 && 0.946 & 0.968 \\
\bottomrule
\end{tabular}
\newline
\vspace*{1em}
\newline
\begin{tabular}{@{}lccccccc@{}}\toprule
& \multicolumn{4}{c}{Models for Each Contrast} & \phantom{abc}& \multicolumn{2}{c}{Half-site Restricted Model} \\
\cmidrule{2-5} \cmidrule{7-8}
& ADC$_{base}$ & ADC$_{rate}$ & R2$_{base}$ & R2$_{rate}$ && Multi-channel & Single-channel \\
\midrule
Dice Score & 0.964 & 0.958 & 0.935 & 0.954 && 0.953 & 0.949 \\
\bottomrule
\end{tabular}
\end{table*}
\begin{figure}[!t]
\includegraphics[width=\textwidth]{results_figures.pdf}
\caption{Results (left to right): 1) Comparison of traditional segmentation and the MFM U-Net on a failure case and a training case; 2) MFM vs. MFS total Dice scores; 3) MFM vs. MFS broken down by time point (early vs. late); 4) MFM and MFS total Dice scores; 5) all four single-contrast models (MADCB, MADCR, MT2B, MT2R); 6) MFS broken down by site.}
\label{fig:plots}
\end{figure}
Our results are summarized quantitatively in Table 1 and visualized in Fig. \ref{fig:plots}. M-full-multi had the highest performance, with a Dice score of 0.964. M-full-single had only slightly lower performance than the multi-channel model, with a Dice of 0.957. Looking at the data qualitatively, both the M-full-multi and M-full-single models had a 0\% failure rate. This trend was consistent across sites and time points. When broken down by site, the Dice scores varied with image characteristics: Site F had the highest SNR but the lowest Dice scores of 0.914 and 0.919 for the multi- and single-channel models, respectively. This site was the sole site with a surface receive coil (the other sites had volume coils), which may also explain the lower performance. Considering the restricted models, training the models on half of the sites (B, D, F) and testing them on the other half (A, C, E) resulted in mean Dice scores of 0.953 (M-half-multi) and 0.949 (M-half-single), indicating that both models can perform robustly on data from sites added to SPAN in the future, without being trained on data from those sites. The restricted models for individual contrasts, M-contrast-ADC-base, M-contrast-ADC-rate, M-contrast-T2-base, and M-contrast-T2-rate (Table 1), also had robust Dice scores, with the T2 baseline being the lowest at 0.935, indicating that the model can be trained on a single contrast and still perform robustly on that contrast. In both M-full-multi and M-full-single, there was an increase in performance between the early and late time points. This is likely because the brain has a more regular appearance at the late time point, 30 days post injury, and consists of more healthy tissue, while at the early time point, 2 days post stroke, there is greater injury and morphometric abnormality, leading to slightly lower performance.
We selected a group of 206 cases that failed catastrophically when processed by the rule-based brain extraction approach, and we examined the qualitative performance of the U-net in these specific cases. M-full-multi successfully extracted all of these cases, and two independent quality checks of the segmented data from the first stage of the SPAN study (1368 four-channel scans) showed a 4\% failure rate on this large multi-site preclinical mouse ADC and T2 dataset with stroke pathology. Some of the cases that failed quality control were excluded due to motion artifact, operator error, and similar issues; because this rate includes these other factors, it is a more general measure than the failures from the brain extraction step alone.
Considering our findings in relation to previous work, the Dice scores of our models are comparable to an existing study of the U-Net's performance on large preclinical data conducted by De Feo et al. in 2021 while evaluating network architectures. The model of De Feo et al. achieved a brain mask Dice score of 0.978 when tested on 1782 T2 volumes of healthy and Huntington mice \cite{de2021automated}. It should be noted that while the Huntington mice are a disease model, they lack the morphometric abnormality found in stroke cases, so we could reasonably expect the performance to be higher. Our results show that when the U-Net is evaluated on thicker slices and a variety of contrasts, image channels, time points, and sites, it maintains the robustness demonstrated by De Feo et al.
\noindent {\bf Conclusions:} We evaluated the U-Net neural network model's ability to segment a large dataset from a multi-site preclinical rodent stroke imaging study, exploring a myriad of factors related to data quality and character. We have demonstrated that the U-Net performs reliable brain extraction on mouse data collected from various imaging hardware, at multiple time points, and with varying contrasts and field strengths where traditional methods failed. Quality control of 1,368 scans segmented by our model has shown that this technique can be successfully used to create a robust pipeline for mouse brain extraction in high-throughput preclinical imaging studies. We also found that a single-channel model of the U-Net is nearly as robust as the multi-channel version, allowing flexibility in reducing the scanning protocol in subsequent stages of SPAN going forward. Future opportunities include testing the U-Net's generalization to rats and to specimens that are obese, aged, or female. We may further evaluate the speed of this approach on various hardware available to labs, e.g. comparing performance on a single-GPU machine versus CPU-based grid computing environments. Looking forward, our model can help inform the design of preclinical studies and potentially improve their scalability and reliability in the future. We provide an open-source implementation and trained models of our method online at \url{http://github.com/cabeen/neu-net}.\\
\noindent {\bf Acknowledgements:} Work reported here was supported by grant NS U24 NS113452 (PL). RPC is supported by the CZI Imaging Scientist Award Program, under grant number 2020-225670 from the Chan Zuckerberg Initiative DAF, an advised fund of Silicon Valley Community Foundation.
\bibliographystyle{splncs04}
The first and foremost
aim of this paper is to develop the classical and
quantum field-to-particle transition formalism
for multiscalar field theory in two-dimensional flat spacetime.
Below we will call this formalism ``zero-brane'', in the sense of a
``non-minimal point particle'' rather than in the sense
of supersymmetric string/brane theories.
The study of the field-to-particle transition formalism as such
also lies within the well-known dream program of constructing a
theory which would not contain matter as an externally postulated entity but
would instead consider fields as the sources of matter and particles as
special field configurations.
Such a program was probably first inspired
by Lorentz and Poincar\'e, and since that time many efforts have been
made to bring it into reality; one may recall the attempts of Einstein,
Klein and Heisenberg,
but the discovery of fermionic fields and the success of the Standard Model
decreased interest in such theories.
Nevertheless, the problem of the origin of matter
remains open and important,
especially as concerns the theoretical explanation of
the fundamental properties of observable particles.
Nowadays it seems possible to realize this program for
boson fields (below we will show that
it is easily done for a multiscalar field in two dimensions), but it
is yet unclear how to obtain fermionic matter.
Some hope that it can be done arises from supersymmetry, but
the generalization of the proposed approach to higher-dimensional
theories encounters severe mathematical difficulties.
When thinking about the physical relevance of the presented approach,
however, one should not exclude a second possibility.
Namely, as we will demonstrate below,
the special field solutions can indeed
be correctly regarded as particles which
are sufficiently {\it point-like}
(despite the presence in the action of non-minimal terms
depending on curvature, etc.).
On the other hand, there exist extended models of particles
which suggest that pointness is no more than a scale approximation
\cite{zlo005}.
Thus, one can pose the question of a justified choice
between the ``non-minimal point-particle''
and ``extended particle'' paradigms.
At present we cannot answer this question; we just
point out that this paper is devoted to the former point of view.
Then, by way of an example, we will apply the
field-to-particle approach to a
particular 2D theory of gravity which, at a
certain parametrization, admits a correspondence to a scalar field
theory on flat spacetime.
The Jackiw-Teitelboim (JT) dilaton gravity discovered in 1984 \cite{jt}
can also be obtained as a dimensional reduction of the 3D BTZ black
hole \cite{btz,ao} and of the spherically symmetric solution of 4D
dilaton Einstein-Maxwell gravity used as a model of the
evaporation process of a near-extremal black hole \cite{cm}.
In spite of the fact that the
JT solution is locally diffeomorphic to the
DeSitter space, it has all the global attributes of a black hole.
Besides, it is simple enough that the main results can be obtained in a
non-perturbative way, which seems important for the highly non-linear
setting of general relativity.
Despite the wide literature devoted to the classical and quantum
aspects of the theory
(see Refs. \cite{gk,lgk,dil-qua} and references therein),
it is concerned mainly with standard methods of study, whereas it
is clear that black holes are extended objects and thus should
properly be considered within the framework of brane theory,
where there is no rigid fixation of spatial symmetry.
The quantum aspects have been studied mainly in connection with the
group features of JT dilaton gravity in general,
whereas we will quantize the theory
in the vicinity of a certain non-trivial vacuum induced by a static solution,
emphasizing the corrections to the mass spectrum.
Thus, our purpose is to study 2D dilaton gravity
in the neighbourhood of the
classical and quantum Jackiw-Teitelboim black hole
within the framework of the brane approach, which
consists in
constructing an effective action in which the non-minimal
terms (first of all, those depending on the world-volume curvature)
are induced by field fluctuations.
The effective action then arises after a nonlinear
reparametrization
of the initial theory upon excluding the zero-mode field oscillations.
The paper is arranged as follows.
In Sec. \ref{s-jt} we study the JT solution as a soliton-dilaton
(more correctly, instanton-dilaton) doublet and
its properties at the classical level.
In Sec. \ref{s-ea} we generalize the approach \cite{kpp} for
arbitrary multiscalar fields and apply it for JT dilaton gravity in
fixed-gauge (flat-spacetime) formulation.
Minimizing the action with respect to field fluctuations,
we remove zero modes and obtain the point-particle action
with non-minimal terms corresponding to this theory.
Sec. \ref{s-q} is devoted to quantization of the action as a constrained
theory with higher derivatives.
As a result, we obtain the Schr\"odinger wave equation describing
the wave function and mass spectrum of a point particle with curvature
and apply it to the JT black hole.
Then we calculate the zeroth and first excited levels to get the
mass of the quantum JT black hole with quantum corrections.
Conclusions are made in Sec. \ref{s-c}.
\section{Jackiw-Teitelboim gravity}\lb{s-jt}
Consider the action of Jackiw-Teitelboim dilaton gravity
\begin{equation}
S_{JT}[\tau, g] = \frac{1}{2 G} \int d^2 x \sqrt{-g}
\tau (R+2 m^2), \lb{eq2.1}
\end{equation}
where $G$ is the gravitational coupling constant, dimensionless in
2D case. Extremizing this action with respect to metric and dilaton
field variations we obtain the following equations of motion
\begin{eqnarray}
&&R + 2 m^2= 0, \lb{eq2.2}\\
&&\left(\nabla_\mu\nabla_\nu - m^2 g_{\mu\nu} \right)\tau= 0. \lb{eq2.3}
\end{eqnarray}
Further, if one performs the parametrization of a metric
\begin{equation} \lb{eq2.4}
d s^2 = - \sin{\!^2 (u/2)} d t^2 +
\cos{\!^2 (u/2)} d x^2,
\end{equation}
and inserts this metric ansatz into
eqs. (\ref{eq2.1})-(\ref{eq2.3}), they can be rewritten \cite{gk},
respectively, as
\begin{equation}
S_{JT}[\tau, u] = \frac{1}{2 G} \int d^2 x
\tau (\Delta u - m^2 \sin{u}), \lb{eq2.5}
\end{equation}
\begin{eqnarray}
&&\Delta u - m^2 \sin{u}= 0, \lb{eq2.6} \\
&&\left(\Delta - m^2 \cos{u} \right)\tau= 0,
\end{eqnarray}
where $\Delta$ is the flat Euclidean Laplacian,
$\partial_{t}^2 + \partial_{x}^2$.
If we wish to choose from the solutions of eq. (\ref{eq2.6}) only the
one-instanton ones, we have the following
instanton-dilaton pair:
\begin{eqnarray}
&&u^{(s)} (x,t) = 4 \arctan{\exp{(m\rho)}}, \lb{eq2.8} \\
&&\tau^{(s)} (x,t) = -C_1 \Sech{}{m\rho} +
C_2
\left[
\sinh{(m\rho)} + m\rho \Sech{}{m\rho}
\right], \lb{eq2.9}
\end{eqnarray}
where $C_i$ are arbitrary constants, $\rho = \gamma (x-v t)$,
$1/\gamma = \sqrt{1 + v^2}$. Then the metric (\ref{eq2.4}) after
the transformation $\{x,t\} \to \{R,T\}$, such that
\begin{eqnarray}
&&d T = v \sqrt{\frac{m}{2 G \gamma M}}
\left[
d t - \frac{v/\gamma}{\Cosech{2}{m\rho}- v^2}d \rho
\right], \nonumber \\
&&R = \frac{1}{v} \sqrt{\frac{2 G M}{\gamma m^3}}
\Sech{}{m\rho}, \nonumber
\end{eqnarray}
where $M$ is an arbitrary constant, can be rewritten in the explicit
form representing the JT black hole solution
\begin{equation} \lb{eq2.10}
d s^2 =
-
\left(
m^2 R^2 - \frac{2 G \gamma M}{m}
\right)d T^2
+
\left(
m^2 R^2 - \frac{2 G \gamma M}{m}
\right)^{-1} d R^2,
\end{equation}
having the following energy and event horizon
\begin{equation} \lb{eq2.11}
E_{\text{BH}} = \gamma M, \ \
R_{\text{BH}} = \sqrt{\frac{2 G \gamma M}{m^3}}.
\end{equation}
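As an independent numerical sanity check (an illustrative sketch, not part of the original derivation; the parameter values below are arbitrary), one can verify that the instanton-dilaton pair (\ref{eq2.8}), (\ref{eq2.9}) in the rest frame $v=0$, where $\rho = x$, satisfies $u'' = m^2 \sin u$ and $\tau'' = m^2 \cos(u)\,\tau$ following from eqs. (\ref{eq2.6}) and (2.7):

```python
import numpy as np

# rest frame (v = 0): u'' = m^2 sin u and tau'' = m^2 cos(u) tau
m, C1, C2 = 1.3, 0.7, 1.1                 # arbitrary illustrative values
rho = np.linspace(-3.0, 3.0, 2001)
h = rho[1] - rho[0]

u = 4*np.arctan(np.exp(m*rho))            # instanton, eq. (2.8)
tau = (-C1/np.cosh(m*rho)
       + C2*(np.sinh(m*rho) + m*rho/np.cosh(m*rho)))  # dilaton, eq. (2.9)

def d2(f):                                # central second difference
    return (f[2:] - 2*f[1:-1] + f[:-2])/h**2

res_u = d2(u) - m**2*np.sin(u[1:-1])
res_tau = d2(tau) - m**2*np.cos(u[1:-1])*tau[1:-1]
print(np.max(np.abs(res_u)), np.max(np.abs(res_tau)))  # both O(h^2)
```

Both residuals vanish up to the finite-difference discretization error.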
Our purpose now will be to take into account the field fluctuations
in the neighborhood of this solution and to construct the effective
action of the JT black hole as a zero-brane.
Before going further, we should develop a general approach.
\section{Effective action}\lb{s-ea}
In this section we will construct the nonlinear effective action of
an arbitrary multiscalar 2D theory in the vicinity of
a solitary-wave solution, and then apply it for JT dilaton gravity.
In fact, here we will
describe the procedure of the correct transition from field
to particle degrees of freedom.
Indeed, although the solitary-wave solution resembles a particle
both at the classical and quantum levels,
it still remains a {\it field solution} with an infinite number of
field degrees of freedom,
whereas a true particle has a finite number of degrees of freedom.
Therefore, we are obliged to correctly handle this circumstance
(and several others)
otherwise deep contradictions may appear.
\subsection{General formalism}\lb{s-ea-gf}
Let us consider the action describing $N$ scalar fields
\begin{equation}
S[\varphi] = \int L(\varphi)\, d^2 x, \lb{eq3.1}
\end{equation}
\begin{equation} \lb{eq3.2}
L (\varphi) = \frac{1}{2}
\sum_{a=1}^{N} (\partial_m \varphi_a) (\partial^m \varphi_a) -
U (\varphi).
\end{equation}
The corresponding equations of motion are
\begin{equation} \lb{eq3.3}
\partial^m \partial_m \varphi_a + U_a(\varphi) = 0,
\end{equation}
where we defined
\[
U_a(\varphi) = \Der{U(\varphi)}{\varphi_a},~~
U_{ab}(\varphi) = \Der{^2\, U(\varphi)}{\varphi_a \partial \varphi_b}.
\]
Suppose, we have a solution in the class of solitary waves
\begin{equation} \lb{eq3.4}
\phik{a}{}(\rho) = \phik{a}{}
\left( \gamma ( x-v t)
\right),
~~\gamma= 1/\sqrt{1-v^2},
\end{equation}
having the localized energy density
\begin{equation} \lb{eq3.5}
\varepsilon (\varphi) = \sum_{a}
\Der{L(\varphi)}{(\partial_0\varphi_a)} \partial_0\varphi_a - L(\varphi),
\end{equation}
and finite mass integral
\begin{equation} \lb{eq3.6}
\mu = \int\limits_{-\infty}^{+\infty}
\varepsilon (\phik{}{})\ d \rho =
-\int\limits_{-\infty}^{+\infty}
L (\phik{}{})\ d \rho < \infty,
\end{equation}
coinciding with the total energy up to the Lorentz factor $\gamma$.
Let us change to the set of the collective coordinates
$\{\sigma_0=s,\ \sigma_1=\rho\}$ such that
\begin{equation} \lb{eq3.7}
x^m = x^m(s) + e^m_{(1)}(s) \rho,\ \
\varphi_a(x,t) = \widetilde \varphi_a (\sigma),
\end{equation}
where $x^m(s)$ turn out to be the coordinates of a (1+1)-dimensional point
particle, $e^m_{(1)}(s)$ is the unit spacelike vector orthogonal
to the world line.
Hence, the action (\ref{eq3.1}) can be rewritten in new coordinates as
\begin{equation} \lb{eq3.8}
S[\widetilde \varphi] =
\int L (\widetilde \varphi) \,\Delta \ d^2 \sigma,
\end{equation}
\[
L (\widetilde \varphi) = \frac{1}{2} \sum_{a}
\left[
\frac{(\partial_s \widetilde\varphi_a)^2}{\Delta^2} -
(\partial_\rho \widetilde\varphi_a)^2
\right]
- U (\widetilde \varphi),
\]
where
\[
\Delta = \text{det}
\left|
\left|
\Der{x^m}{\sigma^k}
\right|
\right|
= \sqrt{\dot x^2} (1- \rho k),
\]
and $k$ is the curvature of a particle world line
\begin{equation} \lb{eq3.9}
k = \frac{\varepsilon_{mn} \dot x^m \ddot x^n}{(\sqrt{\dot x^2})^3},
\end{equation}
where $\varepsilon_{m n}$ is the unit antisymmetric tensor.
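As a check of the curvature formula (\ref{eq3.9}) in its Euclidean analogue, where it reduces to the standard planar curvature, one can verify numerically that a circle of radius $R$ traversed with an arbitrary (non-uniform) parameter returns $k = 1/R$ independently of the parametrization (an illustrative sketch with arbitrary numbers):

```python
import numpy as np

# circle of radius R traversed with a non-uniform, hypothetical parameter
R = 2.5
s = np.linspace(0.0, 1.0, 10001)
h = s[1] - s[0]
tau = s**2 + s                        # arbitrary reparametrization, tau' > 0
x = np.stack([R*np.cos(tau), R*np.sin(tau)], axis=1)

xd = np.gradient(x, h, axis=0)        # \dot x^m
xdd = np.gradient(xd, h, axis=0)      # \ddot x^m
# Euclidean analogue of eq. (3.9): k = eps_{mn} xd^m xdd^n / (xd^2)^{3/2}
k = (xd[:, 0]*xdd[:, 1] - xd[:, 1]*xdd[:, 0])/np.sum(xd**2, axis=1)**1.5
err = np.max(np.abs(k[5:-5] - 1.0/R))  # trim edge points of np.gradient
print(err)
```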
This new action contains $N$ redundant degrees of freedom which
eventually
lead to the appearance of the so-called ``zero modes''.
To eliminate them we must constrain the model
by requiring that the functional derivative with
respect to field fluctuations about a chosen static solution vanishes;
as a result we will obtain the required effective action.
So, the fluctuations of the fields $\widetilde\varphi_a (\sigma)$ in the
neighborhood of the static solution $\phik{a}{} (\rho)$
are given by the expression
\begin{equation}
\widetilde\varphi_a (\sigma) =
\phik{a}{} (\rho) + \delta \varphi_a (\sigma).
\end{equation}
Substituting them into eq. (\ref{eq3.8}) and considering the static
equations of motion (\ref{eq3.3}) for $\phik{a}{}(\rho)$ we have
\begin{eqnarray} \lb{eq3.11}
S[\delta \varphi]
&=& \int d^2 \sigma \
\Biggl\{\Delta
\Biggl[L( \phik{}{}) +
\frac{1}{2} \sum_{a}
\Biggl(
\frac{\left(\partial_s \ \delta \varphi_a \right)^2}
{\Delta^2}
-
\Bigl(
\partial_{\rho} \delta \varphi_a
\Bigr)^2 - \nonumber \\
&& \sum_b U_{ab} ( \phik{}{})
\delta \varphi_a\, \delta \varphi_b
\Biggr)
\Biggr]
- k \sqrt{\dot x^2}\sum_a \phik{a}{\prime} \delta \varphi_a
+ O (\delta \varphi^3)
\Biggr\} + \{\text{surf. terms}\},
\end{eqnarray}
\[
L( \phik{}{}) = -
\frac{1}{2} \sum_a \phik{a}{\prime\, 2} - U ( \phik{}{}),
\]
where prime means the derivative with respect to $\rho$.
Extremizing this action with respect to
$\delta \varphi_a$ one can obtain
the system of equations in partial derivatives for field fluctuations:
\begin{equation}
\left(
\partial_s \Delta^{-1} \partial_s -
\partial_{\rho} \Delta \partial_{\rho}
\right) \delta\varphi_a
+\Delta \sum_{b} U_{ab} ( \phik{}{}) \delta\varphi_b
+ \phik{a}{\prime} k\sqrt{\dot{x}^2} =
O(\delta \varphi^2),
\end{equation}
which has to be the constraint removing redundant degrees of
freedom.
Supposing $\delta\varphi_a (s,\rho) = k(s) f_a(\rho)$, in the
linear approximation
$\rho k\ll 1$ (which naturally guarantees also
the smoothness of the world line at $\rho \to 0$)
and $O(\delta\varphi^2)=0$ we obtain the system
of $N+1$ ordinary differential equations
\begin{eqnarray}
&&\frac{1}{\sqrt{\dot{x}^2}} \frac{d}{ds}
\frac{1}{\sqrt{\dot{x}^2}} \frac{dk}{ds} +ck = 0, \lb{eq3.13}\\
&&-f''_a + \sum_b
\left(
U_{ab} ( \phik{}{}) - c \delta_{ab}
\right) f_b + \phik{a}{\prime} = 0, \lb{eq3.14}
\end{eqnarray}
where $c$ is the constant of separation.
Searching for a solution of the last subsystem in the form
\begin{equation} \lb{eq3.15}
f_a = g_a + \frac{1}{c} \phik{a}{\prime},
\end{equation}
we obtain the homogeneous system
\begin{equation}
-g''_a + \sum_b
\left(
U_{ab} ( \phik{}{}) - c \delta_{ab}
\right) g_b = 0. \lb{eq3.16}
\end{equation}
Strictly speaking, the explicit form of $g_a (\rho)$ is not significant
for us, because we can always set the integration constants to zero,
thus restricting ourselves to the special solution.
Nevertheless, the homogeneous system should be considered as the
eigenvalue problem for $c$ (see below).
Substituting the found functions $\delta\varphi_a = k f_a$ back
in the action
(\ref{eq3.11}), we can rewrite it in the explicit zero-brane form
\begin{equation} \lb{eq3.17}
S_{\text{eff}} =
S_{\text{eff}}^{\text{(class)}} + S_{\text{eff}}^{\text{(fluct)}} =
- \int d s \sqrt{\dot x^2}
\left(
\mu + \alpha k^2
\right),
\end{equation}
describing a point particle with curvature,
where $\mu$ was defined in (\ref{eq3.6}), and
\begin{equation} \lb{eq3.18}
\alpha =
\frac{1}{2} \sum_a \int\limits_{-\infty}^{\infty}
f_a \phik{a}{\prime} \ d \rho
+
\frac{1}{2} \sum_a
\int\limits_{-\infty}^{+\infty}
\left(
f_a f_a^{\prime}
\right)^\prime \ d \rho.
\end{equation}
Further, contracting (\ref{eq3.3}) with $\phik{a}{\prime}$, we
obtain the expression
\begin{equation} \lb{eq3.19}
\sum_a
\left(
\phik{a}{\prime\prime} -
U_a(\phik{}{})
\right)
\phik{a}{\prime} = 0,
\end{equation}
which can be rewritten as
\begin{equation} \lb{eq3.20}
\sum_a
\phik{a}{\prime 2}
= 2 U(\phik{}{}(\rho)).
\end{equation}
Considering the Eqs. (\ref{eq3.5}), (\ref{eq3.6}), (\ref{eq3.15}),
(\ref{eq3.19}) and (\ref{eq3.20}), the expression for $\alpha$ can be
written in the simple form (for simplicity here we suppose the
same eigenvalues, $c_a \equiv c$, otherwise
the first integral in Eq. (\ref{eq3.18}) cannot be reduced
to the integral (\ref{eq3.6})
and should be evaluated separately)
\begin{equation}
\alpha = \frac{\mu}{2 c} + \frac{1}{2 c^2}
\int\limits_{-\infty}^{+\infty}
U^{\prime\prime}(\phik{}{}(\rho)) \ d \rho,
\end{equation}
where the second term can be integrated as a full derivative.
Therefore, even if it is non-zero\footnote{It
identically vanishes when $|\phik{a}{}(\rho)| \leq O(1)$ at infinity.},
we can always remove it by including it into the surface terms of
the action (\ref{eq3.11}) or by adding an appropriate
counterterm to the action (\ref{eq3.8}):
\[
S^{\text{(reg)}} [\widetilde \varphi] =
S [\widetilde \varphi] -
\frac{1}{2 c^2}
\int\limits_{-\infty}^{+\infty}\, d^2 \sigma \Delta k^2
U^{\prime\prime}(\rho).
\]
Thus, we obtain the final form of the effective zero-brane action of
the theory
\begin{equation} \lb{eq3.22}
S_{\text{eff}} =
- \mu \int d s \sqrt{\dot x^2}
\left(
1 + \frac{1}{2 c} k^2
\right).
\end{equation}
It is straightforward to derive the corresponding
equation of motion in the Frenet basis
\begin{equation}
\frac{1}{\sqrt{\dot x^2}}
\frac{d}{d s}
\frac{1}{\sqrt{\dot x^2}}
\frac{d k}{d s} +
\left(c - \frac{1}{2} k^2
\right) k = 0,
\end{equation}
hence one can see that eq. (\ref{eq3.13}) was nothing
but this equation in the linear approximation $k \ll 1$,
as was expected.
Thus, the only problem which yet demands
resolving is the determination of eigenvalue $c$.
It turns out to be the Sturm-Liouville problem for the system
(\ref{eq3.16}) under some chosen boundary conditions.
If one supposes, for instance, the finiteness of $g$ at infinity
then the $c$ spectrum turns out to be discrete.
Moreover, it often happens that $c$ has only one or two admissible
values\footnote{For instance, in the works \cite{kpp}
(one-scalar $\varphi^4$ theory) or \cite{zlo006}
($\varphi^3$ and Liouville model),
where the special cases of this formalism were used, $c$ has the form
$\beta m^2$ where $\beta$ is a single positive half-integer or integer;
the cases with $c<0$ do not have, as a rule, an independent physical
sense, because at quantization they either can be interpreted in terms
of antiparticles or appear to be entirely unphysical.}.
Be that as it may, the exact value of $c$ is necessary, hence the system
(\ref{eq3.16}) should be solved as exactly as possible.
Let us consider it more closely.
The main problem is that the functions $g_a$ are coupled between the
equations.
To separate them, let us recall that there exist $N-1$ orbit
equations, whose variation resolves the separation problem.
We consider this for the case $N=2$, i.e., for a biscalar theory,
all the more so as it will be helpful when applying the method to JT gravity.
Considering (\ref{eq3.15}), the variation of a single
orbit equation yields
\begin{equation}
\frac{\delta \varphi_2}{\delta\varphi_1} =
\frac{\phik{2}{\prime}}{\phik{1}{\prime}} =
\frac{g_2}{g_1},
\end{equation}
hence the system (\ref{eq3.16}), $N=2$, can be separated into the
two independent equations
\begin{equation}
-g''_a +
\left(
\frac{\phik{a}{\prime\prime\prime}}
{\phik{a}{\prime}}
- c
\right) g_a = 0, \lb{eq3.25}
\end{equation}
if one uses
$\displaystyle{\varphi_a''' = \sum_b U_{ab} (\varphi) \varphi_b'}$.
In this form it is much easier to resolve the eigenvalue problem.
Therefore, the two independent parameters for the action (\ref{eq3.22}),
$\mu$ and $c$, can be determined immediately by virtue of
eqs. (\ref{eq3.6}) and (\ref{eq3.25}).
Finally, it should be pointed out that the developed method can be
generalized both in the qualitative direction (considering it for
the Yang-Mills and spinor Lagrangians \cite{zlop}) and toward the
increasing of spatial dimensions.
\subsection{Application for JT black hole}\lb{s-ea-af}
For further studies
it is convenient to perform the Wick rotation and to
work in terms of solitons and Lorentzian time rather than in terms
of instantons and Euclidean time, all the more so the main results
of the previous subsection are independent of $v$.
Omitting topological surface terms, we will consider instead of the
action (\ref{eq2.5}) its Lorentzian analogue
\begin{equation}
S_{JT}[\tau, u] = \frac{1}{2 G} \int d^2 x
(\partial_m \tau \partial^m u -
m^2 \tau \sin{u}). \lb{eq3.26}
\end{equation}
The soliton-dilaton doublet (\ref{eq2.8}), (\ref{eq2.9})
has the localized energy density (\ref{eq3.5})
\[
\varepsilon (x,t) = \frac{2 m^2}{G}
\frac{
C_1 \tanh{(m \rho)} +
C_2
\left[
1 - m\rho \tanh{(m \rho)}
\right]
}
{\cosh{\!^2\, (m\rho)}},
\]
and can be interpreted as the relativistic point particle with the
energy
\begin{equation} \lb{eq3.27}
E_{\text{class}} = \int\limits_{-\infty}^{+\infty}
\varepsilon (x,t)\ d x \equiv \gamma \mu =
\frac{2 C_2 \gamma m}{G},
\end{equation}
i.e. the integral (\ref{eq3.6}) is finite and coincides with the
energy (\ref{eq2.11})
\begin{equation} \lb{eq3.28}
\mu = M,
\end{equation}
if one redefines $C_2$.
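As a numerical cross-check of eq. (\ref{eq3.27}) in the rest frame $v=0$ (so $\rho = x$ and $E=\mu$), the integral of the displayed energy density indeed equals $2C_2 m/G$, the odd $C_1$ term integrating to zero (a sketch with arbitrary illustrative constants):

```python
import numpy as np

# arbitrary illustrative constants; rest frame v = 0, rho = x
G, m, C1, C2 = 1.0, 1.2, 0.6, 0.8
x = np.linspace(-40.0, 40.0, 80001)
h = x[1] - x[0]
eps = (2*m**2/G)*(C1*np.tanh(m*x)
                  + C2*(1 - m*x*np.tanh(m*x)))/np.cosh(m*x)**2
mu_num = np.sum(0.5*(eps[1:] + eps[:-1]))*h   # trapezoidal rule
print(mu_num, 2*C2*m/G)                        # should agree
```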
The action (\ref{eq3.26}) always can be linearly rearranged in the form
(\ref{eq3.1}), (\ref{eq3.2}),
if we introduce fields $\varphi_a$ such that
\begin{equation} \lb{eq3.29}
2\theta u = \varphi_1 - i \varphi_2,\quad
\tau/\theta = \varphi_1 + i \varphi_2,
\end{equation}
where $\theta$ is an arbitrary real constant
which (similarly to $C_1$) will not affect the final results.
We will assume the final zero-brane
action in the form (\ref{eq3.22}).
The flat spacetime coordinates $x^\mu$ (\ref{eq3.7}) should be understood
in the sense that we have substituted the initial curved spacetime by a flat
one with an effective potential,
but the meaning of the collective coordinates $\rho$ and $s$
remains unchanged because they describe the internal structure and hence
are independent of whether we are
working in curved or flat space.
Therefore, the main task now is to
specify the parameters of the action (\ref{eq3.22})
for our case.
We have $\mu$ already determined by (\ref{eq3.28}), and
the eigenvalue $c$ remains to be
the only unknown parameter for eq. (\ref{eq3.22}).
For eqs. (\ref{eq3.25}) we will require
the traditional boundary conditions
\begin{equation}
g_a(+\infty) - g_a(-\infty) = O (1),
\end{equation}
whereas, given eqs. (\ref{eq2.8}), (\ref{eq2.9}), (\ref{eq3.29}),
the system (\ref{eq3.25}) has the form
\begin{equation}
-g''_a +
\left(
K_a
- c
\right) g_a = 0, \lb{eq3.31}
\end{equation}
where
\begin{eqnarray}
&&K_1 = \frac{m^2}{\Cm{2}}
\frac{
A_1 (\Cm{2}-2) + C_2 \Cm{5} + A_2 (6-\Cm{2})
}
{
\Cm{} (4\theta^2+ C_2 + C_2 \Cm{2}) - A_2
}, \nonumber \\
&& K_2 = K_1 |_{C_i \to - C_i}, \nonumber \\
&&A_1 = \Cm{} (4\theta^2 + 3 C_2), \quad
A_2 = \Sm{} (C_1 + C_2 m \rho), \nonumber
\end{eqnarray}
hence it is clear that
\begin{equation} \lb{eq3.32}
K_a (0) = -m^2, \quad
K_a (-\infty) = K_a (+\infty) = m^2.
\end{equation}
This eigenvalue equation is evidently hard to solve exactly,
hence we use the method of the approximating potential, which should have
the main properties of $K_a$, especially those presented by (\ref{eq3.32}).
Besides, we will consider the equation for $K_1$ only because both
potentials have approximately the same behaviour.
Thus, omitting the index we will assume the following eigenvalue equation
\begin{equation}
-g'' + m^2
\left(
1- \frac{2}{\cosh{\!^2\, (m\rho)}}
\right)g - c g = 0,
\end{equation}
instead of eq. (\ref{eq3.31}).
Its potential has the main features of $K_a$ but appears to be
exactly solvable:
according to the proven theorem (see Appendix \ref{a-a}),
the only admissible non-zero $c$ is
\begin{equation} \lb{eq3.34}
c=m^2.
\end{equation}
This result is confirmed also by the quasiclassical approximation.
Indeed, the necessary condition
of convergence of the phase integral
\[
\oint p d \rho =
2 \int\limits_{-\infty}^{+\infty} \sqrt{K_a - c} ~ d \rho
\]
appears to be the following one
\[
K_a (\pm \infty) - c =0,
\]
which yields eq. (\ref{eq3.34}) again.
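The value $c=m^2$ can also be checked directly: for the approximating P\"oschl-Teller-type equation the bounded solutions are known in closed form, $g \propto 1/\cosh(m\rho)$ with $c=0$ (the excluded zero mode) and $g \propto \tanh(m\rho)$ at the continuum edge $c=m^2$. A finite-difference verification (illustrative sketch, arbitrary $m$):

```python
import numpy as np

m = 0.9                               # arbitrary
x = np.linspace(-4.0, 4.0, 4001)
h = x[1] - x[0]
V = m**2*(1.0 - 2.0/np.cosh(m*x)**2)  # approximating potential

def residual(g, c):                   # max | -g'' + (V - c) g | on the grid
    lap = (g[2:] - 2*g[1:-1] + g[:-2])/h**2
    return np.max(np.abs(-lap + (V[1:-1] - c)*g[1:-1]))

r0 = residual(1.0/np.cosh(m*x), 0.0)  # zero mode, c = 0 (excluded)
r1 = residual(np.tanh(m*x), m**2)     # bounded solution at c = m^2
print(r0, r1)
```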
Therefore, the effective zero-brane action of the dilaton gravity about
Jackiw-Teitelboim black hole with fluctuational corrections is
\begin{equation} \lb{eq3.35}
S_{\text{eff}} =
- \mu \int d s \sqrt{\dot x^2}
\left(
1 + \frac{k^2}{2 m^2}
\right),\ \ \mu=M=\frac{2 C_2 m}{G}.
\end{equation}
In the next section we will quantize it to obtain the quantum
corrections to the mass of fixed-gauge JT black hole.
\section{Quantization}\lb{s-q}
In the previous section we obtained a classical
effective action for the model in question.
Thus, to quantize it we must consecutively construct the
Hamiltonian structure of dynamics of the point particle with
curvature \cite{ner,ply,dhot}.
\subsection{General formalism}\lb{s-q-gf}
From the brane action (\ref{eq3.22})
and definition of the world-line curvature
one can see that we have the
theory with higher derivatives \cite{ply,dhot}.
Hence, below we will treat the coordinates and momenta as the
canonically independent coordinates of phase space.
Besides, the Hessian matrix constructed
from the derivatives with respect to accelerations,
\[
M_{a b} =
\left|
\left|
\Der{^2 L_{\text{eff}}}{\ddot x^a \partial \ddot x^b}
\right|
\right|,
\]
appears to be singular, which indicates the presence of
constraints on the phase variables of the theory.
As was mentioned, the phase space consists of the two pairs of
canonical variables:
\begin{eqnarray}
&&x_m,\ \ p_m = \Der{L_{\text{eff}}}{q^m} - \dot \Pi_m, \\
&&q_m = \dot x_m,\ \ \Pi_m =\Der{L_{\text{eff}}}{\dot q^m},
\end{eqnarray}
hence we have
\begin{eqnarray}
&&p^n = - e^n_{(0)} \mu
\left[
1- \frac{k^2}{2 c}
\right] +
\frac{\mu}{c}
\frac{e^n_{(1)} }{\sqrt{q^2}} \dot k, \\
&&
\Pi^n = -
\frac{\mu}{c}
\frac{e^n_{(1)}}{\sqrt{q^2}} k,
\end{eqnarray}
where the components of the Frenet basis are
\[
e^m_{(0)} = \frac{\dot x^m}{\sqrt{\dot x^2}},\
e^m_{(1)} = - \frac{1}{\sqrt{\dot x^2}} \frac{\dot e^m_{(0)}}{k}.
\]
There exist two primary first-class constraints
\begin{eqnarray}
&&\Phi_1 = \Pi^m q_m \approx 0, \\
&&\Phi_2 = p^m q_m + \sqrt{q^2}
\left[
\mu + \frac{c}{2 \mu} q^2 \Pi^2
\right] \approx 0,
\end{eqnarray}
besides we should add the proper time gauge condition,
\begin{equation}
G = \sqrt{q^2} - 1 \approx 0,
\end{equation}
to remove the non-physical gauge degree of freedom.
Then, when introducing the new variables,
\begin{equation}
\rho = \sqrt{q^2},\ \ v =
\text{arctanh}
\left(
p_{(1)}/p_{(0)}
\right),
\end{equation}
the constraints can be rewritten in the form
\begin{eqnarray}
&&\Phi_1 = \rho \Pi_\rho, \nonumber \\
&&\Phi_2 = \rho
\left[
-\sqrt{p^2} \cosh{v} + \mu -
\frac{c}{2 \mu}
\left(
\Pi^2_v - \rho^2 \Pi^2_\rho
\right)
\right], \\
&&G=\rho-1, \nonumber
\end{eqnarray}
hence finally we obtain the constraint
\begin{equation}
\Phi_2 =
-\sqrt{p^2} \cosh{v} + \mu -
\frac{c}{2 \mu}
\Pi^2_v \approx 0,
\end{equation}
which in the quantum theory ($\Pi_v = - i \partial/\partial v$)
yields
\[
\widehat\Phi_2 |\Psi\rangle =0.
\]
As was shown in Ref. \cite{kpp} (see also Ref. \cite{ply}),
the constraint $\Phi_2$ on the
quantum level admits several coordinate representations that,
generally speaking, lead to different
nonequivalent theories; therefore,
the choice between the different forms of
$\widehat\Phi_2$ should be based on the physical relevance.
Then the physically admissible
equation determining quantum dynamics of the quantum
kink and bell particles has the form:
\begin{equation} \lb{eq4.11}
[ \widehat H-\varepsilon] \Psi(\zeta) = 0,
\end{equation}
\begin{equation}
\widehat H = -\frac{d^2}{d \zeta^2} +
\frac{B^2}{4}
\sinh{\! ^2 \zeta}
-B
\left(
S+\frac{1}{2}
\right)
\cosh{\zeta},
\end{equation}
where
\begin{eqnarray}
&&\zeta=v/2,\ \sqrt{p^2} = {\cal M}, \nonumber \\
&&B= 8 \sqrt{
\frac{\mu {\cal M}}{c}
}, \lb{eq4.13}\\
&& \varepsilon = \frac{8 \mu^2}{c}
\left(
1 - \frac{{\cal M}}{\mu}
\right), \nonumber
\end{eqnarray}
and $S=0$ in our case.
As was established in the works \cite{raz,zu},
SU(2) has to be the dynamical symmetry
group for this Hamiltonian which can be rewritten in the form of
the spin Hamiltonian
\begin{equation}
\widehat H= -S^2_z - B S_x,
\end{equation}
where the spin operators,
\begin{eqnarray}
&&S_x = S \cosh{\zeta} - \frac{B}{2} \sinh{\!^2 \zeta} - \sinh{\zeta}
\frac{d}{d\zeta}, \nonumber \\
&&S_y = i \left\{
-S \sinh{\zeta} + \frac{B}{2} \sinh{\zeta}\cosh{\zeta} +
\cosh{\zeta} \frac{d}{d\zeta} \right\}, \\
&&S_z = \frac{B}{2} \sinh{\zeta}
+ \frac{d}{d\zeta}, \nonumber
\end{eqnarray}
satisfy the commutation relations
\[
[S_i,~S_j] = i \epsilon_{ijk} S_k,
\]
besides
\[
S_x^2+S_y^2+S_z^2 \equiv S (S+1).
\]
In this connection it should be noted that though the reformulation of
some interaction
concerning the coordinate degrees of freedom in terms of
spin variables is widely used (e.g., in the theories with the
Heisenberg Hamiltonian, see Ref. \cite{lp}), it has to be just
the physical approximation as a rule,
whereas in our case the spin-coordinate correspondence is exact.
Further,
at $S\geq 0$ there exists an irreducible ($2 S+1$)-dimensional
subspace of the representation space of the su(2) Lie algebra, which is
invariant with respect to these operators.
Determining eigenvalues and eigenvectors of the spin Hamiltonian
in the matrix representation
which is realized in this subspace, one can prove
that the solution of eq. (\ref{eq4.11}) is the function
\begin{eqnarray}
\Psi (\zeta) =\exp{
\left(
-\frac{B}{2} \cosh{\zeta}
\right)
}
\sum_{\sigma=-S}^{S}
\frac{c_\sigma}
{
\sqrt{
(S-\sigma)!~
(S+\sigma)!
}
}
\exp{
\left(
\sigma \zeta
\right)
},
\end{eqnarray}
where the coefficients $c_\sigma$ are the solutions of
the system of linear equations
\[
\biggl(
\varepsilon+\sigma^2
\biggr)c_\sigma + \frac{B}{2}
\biggl[
\sqrt{(S-\sigma)(S+\sigma+1)}~ c_{\sigma+1}
+ \sqrt{(S+\sigma)(S-
\sigma+1)}~ c_{\sigma-1}
\biggr] = 0,
\]
\[
c_{S+1} = c_{-S-1}=0,~~\sigma=-S,~-S+1,...,~S.
\]
However, it should be noted that these expressions give only a
finite number of exact solutions, equal to the dimensionality of
the invariant subspace
(this is the so-called QESS, quasi-exactly solvable system).
Therefore, for the spin $S=0$ we can find only the ground state wave
function and eigenvalue:
\begin{equation}
\Psi_0 (\zeta) = C_1
\exp{
\left(
- \frac{B}{2} \cosh{\zeta}
\right)
},\
\varepsilon_0 = 0.
\end{equation}
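One can verify directly that $\Psi_0$ is annihilated by $\widehat H$ at $S=0$: substituting $\Psi_0 = \exp(-\tfrac{B}{2}\cosh\zeta)$ gives $\Psi_0'' = [\tfrac{B^2}{4}\sinh^2\zeta - \tfrac{B}{2}\cosh\zeta]\Psi_0$, so the potential terms cancel exactly. A numerical confirmation (illustrative sketch, arbitrary $B$):

```python
import numpy as np

B = 7.0                                   # arbitrary
z = np.linspace(-1.5, 1.5, 3001)
h = z[1] - z[0]
psi = np.exp(-(B/2)*np.cosh(z))
V = (B/2)**2*np.sinh(z)**2 - (B/2)*np.cosh(z)
Hpsi = -(psi[2:] - 2*psi[1:-1] + psi[:-2])/h**2 + V[1:-1]*psi[1:-1]
rel = np.max(np.abs(Hpsi))/np.max(psi)    # relative residual of H psi = 0
print(rel)
```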
Hence, we obtain that the ground-state mass of
the quantum particle with curvature coincides with the classical one,
\begin{equation} \lb{eq4.18}
{\cal M}_0 = \mu,
\end{equation}
as was expected.
Further, in the absence of exact wave functions for the more excited
levels, one can find
the first (small) quantum correction to the mass
in the quantum harmonic oscillator approximation.
It is easy to see that at $B \geq 1$
the (effective) potential
\begin{equation}
V(\zeta) =
\left(
\frac{B}{2}
\right)^2 \text{sinh}^2 \zeta
-
\frac{B}{2} \cosh{\zeta}
\end{equation}
has the single minimum
\[
V_{\text{min}} = - B/2 \ \ \text{at} \ \ \zeta_{\text{min}}=0.
\]
Then, following the $\hbar$-expansion technique, we shift the origin of
coordinates to the point of minimum (so that
$\varepsilon = \varepsilon_0 = 0$ in the absence
of quantum oscillations), and expand $V$ in a
Taylor series to second order
near the origin thus reducing the model to the oscillator
of the unit mass, energy $\varepsilon/2$ and oscillation frequency
\[
\omega = \frac{1}{2} \sqrt{B(B-1)}.
\]
Therefore, the quantization rules yield the discrete spectrum
\begin{equation}
\varepsilon = \sqrt{B(B-1)} (n + 1/2)
+ O (\hbar^2),\ \ n=0,\ 1,\ 2, ...,
\end{equation}
and the first quantum correction to the particle mass will be
determined by the lowest oscillation energy:
\begin{equation}
\varepsilon = \frac{1}{2} \sqrt{B(B-1)} + O (\hbar^2),
\end{equation}
that gives the algebraic equation for ${\cal M}$ as a function of $m$
and $\mu$.
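The oscillator estimate can be tested by direct grid diagonalization of $\widehat H$ (an illustrative sketch with an arbitrary large $B$): the lowest eigenvalue reproduces the exact $\varepsilon_0 = 0$, and the level spacing is close to $\sqrt{B(B-1)}$ up to anharmonic corrections:

```python
import numpy as np

B = 40.0                      # large B, i.e. small c/mu^2
N = 801
z = np.linspace(-2.0, 2.0, N)
h = z[1] - z[0]
V = (B/2)**2*np.sinh(z)**2 - (B/2)*np.cosh(z)

# H = -d^2/dzeta^2 + V on interior grid points (Dirichlet boundaries)
H = (np.diag(2.0/h**2 + V[1:-1])
     - np.diag(np.ones(N - 3)/h**2, 1)
     - np.diag(np.ones(N - 3)/h**2, -1))
e = np.linalg.eigvalsh(H)
print(e[0], e[1] - e[0], np.sqrt(B*(B - 1)))
```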
We can easily resolve it in the approximation
\begin{equation} \lb{eq4.22}
B \gg 1 \ \Longleftrightarrow \ c/\mu^2 \to 0,
\end{equation}
which is admissible for most physical cases, and obtain
\begin{equation}
\varepsilon = \frac{B}{2} + O (\hbar^2 c/\mu^2),
\end{equation}
that after considering of eqs. (\ref{eq4.13}) and (\ref{eq4.18}) yields
\begin{equation}
({\cal M}-\mu)^2 = \frac{c {\cal M}}{4 \mu} + O (\hbar^2 c/\mu^2).
\end{equation}
Then one can seek the mass in the form ${\cal M}=\mu+\delta$
($\delta \ll \mu$), and
finally we obtain the mass of a particle with curvature (\ref{eq3.22})
with first-order quantum corrections
\begin{equation} \lb{eq4.25}
{\cal M} = \mu \pm \frac{\sqrt{c}}{2} + O(\hbar^2 c/\mu^2).
\end{equation}
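Equation (\ref{eq4.25}) can be confirmed by solving the quadratic equation $({\cal M}-\mu)^2 = c{\cal M}/(4\mu)$ exactly and comparing its two roots with $\mu \pm \sqrt{c}/2$ in the regime $c/\mu^2 \ll 1$ (arbitrary illustrative numbers):

```python
import numpy as np

mu, c = 10.0, 0.4                  # arbitrary, with c/mu^2 << 1
# (M - mu)^2 = c M/(4 mu)  <=>  M^2 - (2 mu + c/(4 mu)) M + mu^2 = 0
roots = np.sort(np.real(np.roots([1.0, -(2*mu + c/(4*mu)), mu**2])))
approx = np.array([mu - np.sqrt(c)/2, mu + np.sqrt(c)/2])
print(roots, approx)               # agree up to O(c/mu)
```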
The justified choice of the root sign before the second term
is not as clear as it seems at first sight,
because there exist two competing arguments.
The first (physical) one is:
if we apply this formalism for the one-scalar $\varphi^4$
model \cite{kpp} and compare the result with that obtained in other
ways \cite{raj}, we should suppose the sign ``$+$''.
However, the second, mathematical, counterargument is as follows:
the known exact spectra of the operators with the QES potentials
like (\ref{eq4.11}) are
split, as a rule, by virtue of radicals; hence the signs ``$\pm$''
can approximately represent such a bifurcation and thus
should be harmless.
If it is really so, quantum fluctuations should divide the
classically unified particle with
curvature into several subtypes with respect to mass.
Finally, comparing the first term (\ref{eq4.25})
and the estimate (\ref{eq4.22}), one
can see that the obtained spectrum is
nonperturbative and cannot be derived via a Taylor
series with respect to $1/\mu$.
\subsection{Mass of quantum JT black hole}\lb{s-q-mo}
Thus, considering eqs. (\ref{eq3.35}) and (\ref{eq4.25}), the mass of
quantum JT black hole as a soliton-dilaton boson
in the first approximation is
\begin{equation}
{\cal M} = M \pm m/2 + O(m^2/M^2),
\end{equation}
therefore, the approximation (\ref{eq4.22}) has to be justified
in this case.
The problem of obtaining further corrections reduces
to the mathematically standard
Sturm-Liouville problem for the Razavi potential, all
the more so as the latter is well-shaped on the whole axis and hence admits
only bound states with a discrete spectrum.
Finally, it should be noted that we quantized the reduced
theory (\ref{eq3.26}) rather than the complete dilaton gravity because in
the general case the latter
has two first-class constraints, which were removed here by the fixed-metric gauge.
Besides, unlike previous works, we quantized the theory about the
static solution rather than in the neighbourhood of the trivial vacuum,
and were interested first of all in obtaining the mass spectrum.
The construction of the corresponding formalism for dilaton
gravity in the general case remains an open question because it requires
a consistent generalization of the
field-to-particle transition procedure to
fields in curved spacetime.
\section{Conclusion}\lb{s-c}
Let us enumerate the main items studied.
It was shown that the Jackiw-Teitelboim dilaton gravity can
be reduced to a biscalar theory admitting a
doublet consisting of instanton and dilaton components,
which can be interpreted as a massive quantum particle.
Further, considering field fluctuations in the neighborhood of
the JT black hole, we derived the action for the JT field
doublet as that of a non-minimal point particle with curvature,
thereby generalizing the procedure of
obtaining brane actions to the multiscalar case.
From the fact that the (1+1)-dimensional dilaton gravity
yields the effective action for the JT black hole as a spatially
zero-dimensional brane (non-minimal point particle),
we can conclude that the ordinary 4D
black hole (in the case of arbitrary symmetry and field
fluctuations in a neighborhood) could be consistently described within
the framework of a five-dimensional field theory.
When quantizing this action as the constrained
theory with higher derivatives,
it was shown that the resulting Schr\"odinger equation is the
special case of that with the Razavi
potential having SU(2) dynamical symmetry group in the ground state.
Finally, we found the first quantum correction to
mass of the Jackiw-Teitelboim black hole which could not be
calculated by means of the perturbation theory.
\section{Notation and Kinematics}
Consider an arc length parametrized, time-dependent, planar curve ${\bf X}(s,t)$ carrying the dyad of unit vectors $(\uvc{t},\uvc{n})$ such that
\begin{equation}
\partial_s \left(\begin{array}{c}
{\bf X}\\
\uvc{t}\\
\uvc{n}\\
\end{array}\right) = \left(\begin{array}{ccc}
0 & 1 & 0\\
0 & 0 & \kappa\\
0 & -\kappa & 0\\
\end{array}\right)
\left(\begin{array}{c}
{\bf X}\\
\uvc{t}\\
\uvc{n}\\
\end{array}\right) \, .
\end{equation}
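The statement that $\kappa$ determines the curve up to rigid motions can be illustrated by integrating the frame system numerically: prescribing $\kappa(s)$, reconstructing $({\bf X},\uvc{t},\uvc{n})$, and checking that $\uvc{t}$ stays of unit length and that $\partial_s\uvc{t}\cdot\uvc{n}$ returns the prescribed curvature (an illustrative sketch with an arbitrary curvature profile):

```python
import numpy as np

kappa = lambda s: 0.5 + 0.3*np.sin(s)     # arbitrary prescribed curvature

def rhs(s, y):                            # y = (X, t, n); X'=t, t'=kappa n, n'=-kappa t
    t, n = y[2:4], y[4:6]
    return np.concatenate([t, kappa(s)*n, -kappa(s)*t])

h, N = 1e-3, 6000
y = np.array([0.0, 0.0, 1.0, 0.0, 0.0, 1.0])
traj = np.empty((N + 1, 6)); traj[0] = y
s = 0.0
for i in range(N):                        # classical RK4 step
    k1 = rhs(s, y); k2 = rhs(s + h/2, y + h/2*k1)
    k3 = rhs(s + h/2, y + h/2*k2); k4 = rhs(s + h, y + h*k3)
    y = y + h/6*(k1 + 2*k2 + 2*k3 + k4); s += h
    traj[i + 1] = y

t_vec, n_vec = traj[:, 2:4], traj[:, 4:6]
unit_err = np.max(np.abs(np.sum(t_vec**2, axis=1) - 1.0))
tp = (t_vec[2:] - t_vec[:-2])/(2*h)       # ds of t by central differences
kap_err = np.max(np.abs(np.sum(tp*n_vec[1:-1], axis=1)
                        - kappa(h*np.arange(1, N))))
print(unit_err, kap_err)
```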
Specification of the curvature $\kappa$, which may take on negative values, defines the curve up to rigid motions. Locally arc length preserving planar motions may be specified by the tangential velocity $T$,
\begin{equation}
\partial_t {\bf X} \equiv T\uvc{t} + \tfrac{\partial_s T}{\kappa}\uvc{n} \, ,
\end{equation}
and we may recast the time evolution in terms of the curvature as \cite{Brower84,GoldsteinPetrich91,Nakayama92}
\begin{equation}\label{kappaevol}
\partial_t \kappa = \partial_s \left(\partial_t\uvc{t}\cdot\uvc{n}\right) = \partial_s\left[ \partial_s \left(\tfrac{\partial_s T}{\kappa}\right) + \kappa T \right] \, .
\end{equation}
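For constant tangential velocity $T$, eq. (\ref{kappaevol}) reduces to pure advection, $\partial_t \kappa = T \partial_s \kappa$, so a curvature profile simply slides along the curve: $\kappa(s,t) = \kappa(s+Tt,0)$. A spectral time-integration check on a periodic profile (an illustrative sketch, arbitrary parameters):

```python
import numpy as np

N, T, tmax = 256, 0.7, 1.0
s = 2*np.pi*np.arange(N)/N
ik = 1j*np.fft.fftfreq(N, d=1.0/N)        # integer wavenumbers times i
ds = lambda f: np.real(np.fft.ifft(ik*np.fft.fft(f)))

kap = 1.0 + 0.3*np.sin(s)                 # initial curvature profile
f = lambda k: ds(T*k)                     # kappa_t = (kappa T)_s for constant T
dt, steps = 2e-3, 500                     # evolve to t = tmax
for _ in range(steps):                    # RK4 time stepping
    k1 = f(kap); k2 = f(kap + dt/2*k1)
    k3 = f(kap + dt/2*k2); k4 = f(kap + dt*k3)
    kap = kap + dt/6*(k1 + 2*k2 + 2*k3 + k4)

err = np.max(np.abs(kap - (1.0 + 0.3*np.sin(s + T*tmax))))
print(err)
```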
\section{Physics}
The physics of a perfectly flexible, inextensible string with uniform mass density $\mu$ is described by the vector wave equation, inextensibility constraint, and boundary condition:
\begin{eqnarray}
\mu\partial^2_t {\bf X} &=& \partial_s\left(\sigma\partial_s {\bf X}\right) \, , \label{vectorwave} \\
\partial_s {\bf X} \cdot \partial_s {\bf X} &=& 1\, , \label{vectorconstraint} \\
\left. \sigma \partial_s{\bf X} \, \right|_{\mathrm{ends}} &=& {\bf{f}}_\mathrm{appl} \, , \label{bc}
\end{eqnarray}
where the stress $\sigma$ is a multiplier field enforcing the constraint \cite{EdwardsGoodyear72,Hinch94,Reeken77,Healey90,Belmonte01,SchagerlBerger02}, and ${\bf{f}}_\mathrm{appl}$ are forces applied to the ends of the chain. We will be considering a situation where ${\bf{f}}_\mathrm{appl} = 0$ at one end. In the absence of bending resistance, the curvature and its first derivative are not required to vanish at a free end.
The projections of the arc length derivative of \eqref{vectorwave} along the tangent and normal vectors are
\begin{eqnarray}
\sigma\kappa^2-\partial^2_s\sigma &=& \mu\partial_t\uvc{t}\cdot\partial_t\uvc{t} \, , \label{tangproj} \\
2\partial_s\sigma\kappa + \sigma\partial_s\kappa &=& \mu\partial_t^2\uvc{t}\cdot\uvc{n} \, , \label{normproj}
\end{eqnarray}
where we have used \eqref{vectorconstraint} to reduce the order of time derivatives in \eqref{tangproj}, thus also displaying the non-negativity of the expression on the left hand side. These equations are well known \cite{Routh55} when written in terms of the tangential angle whose arc length derivative is the curvature. The form of \eqref{tangproj} implies that stresses are screened by curvature and generated by changes in orientation of tangent vectors, whether by centripetal motion of material along the curve, rotation of the curve, or evolution of the curvature.
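A quick concrete check of these projections is a circular loop of radius $R$ rotating rigidly at angular velocity $\omega$: the normal projection vanishes and the tangential projection reduces to the familiar rotating-hoop tension $\sigma = \mu\omega^2R^2$. The finite-difference sketch below (pure Python; the values of $\mu$, $R$, $\omega$, and the sample point are arbitrary choices, not from the text) verifies \eqref{tangproj} for this motion.

```python
import math

# Rigidly rotating circular loop: X(s,t) = R*(cos(s/R + w*t), sin(s/R + w*t)).
# Uniform stress sigma0 = mu * w**2 * R**2 should satisfy the tangential
# projection  sigma*kappa**2 - d2s_sigma = mu*|dt_tangent|**2  with kappa = 1/R.
mu, R, w = 1.3, 2.0, 0.7
s, t, h = 0.4, 0.1, 1e-5

def tangent(s, t):
    th = s / R + w * t
    return (-math.sin(th), math.cos(th))

# central finite difference in time of the unit tangent
tx_p, ty_p = tangent(s, t + h)
tx_m, ty_m = tangent(s, t - h)
dt_tan = ((tx_p - tx_m) / (2 * h), (ty_p - ty_m) / (2 * h))
speed2 = dt_tan[0] ** 2 + dt_tan[1] ** 2   # |dt tangent|^2, should equal w**2

sigma0 = mu * w ** 2 * R ** 2
kappa = 1.0 / R        # sign convention irrelevant here (kappa enters squared)
lhs = sigma0 * kappa ** 2 - 0.0            # d2s_sigma = 0 for uniform stress
rhs = mu * speed2
residual = abs(lhs - rhs)
```

The normal projection \eqref{normproj} is trivially satisfied for this motion, since both sides vanish identically.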
In two dimensions, $\partial_t\uvc{t}\cdot\partial_t\uvc{t} = \left(\partial_t\uvc{t}\cdot\uvc{n}\right)^2$ and $\partial_t^2\uvc{t}\cdot\uvc{n} = \partial_t \left( \partial_t\uvc{t}\cdot\uvc{n}\right)$. Using these relationships along with \eqref{kappaevol}, \eqref{tangproj}, and \eqref{normproj}, we may write
\begin{eqnarray}
\mu^{\frac{1}{2}}\partial_t\kappa &=& \pm\partial_s\left[\left(\sigma\kappa^2-\partial^2_s\sigma\right)^{\frac{1}{2}}\right] \, , \label{first} \\
\pm \mu^{\frac{1}{2}} \partial_t\left[ \left( \sigma\kappa^2-\partial^2_s\sigma \right)^{\frac{1}{2}}\right] &=& 2\partial_s\sigma\kappa + \sigma\partial_s\kappa \, . \label{second}
\end{eqnarray}
Equations \eqref{first} and \eqref{second} are, to our knowledge, new. They are in general quite obnoxious, although for constant stress $\sigma \equiv \sigma_0$ they both reduce to a simple traveling wave expression for the curvature $\kappa$. Such a constant $\sigma_0$ must be non-negative, according to \eqref{tangproj}.
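The constant-stress reduction is easy to confirm numerically: any positive curvature profile advected at speed $(\sigma_0/\mu)^{1/2}$ satisfies \eqref{first} with the plus branch, since then $(\sigma_0\kappa^2)^{1/2} = \sigma_0^{1/2}\kappa$. A minimal sketch (the Gaussian profile and parameter values are arbitrary choices for the check):

```python
import math

# Constant-stress case: a profile advected at c = sqrt(sigma0/mu) solves
# eq. (first), + branch, because (sigma0*kappa^2)^(1/2) = sqrt(sigma0)*kappa
# for a positive curvature profile.
mu, sigma0 = 2.0, 3.0
c = math.sqrt(sigma0 / mu)

def kappa(s, t):
    # positive traveling-wave profile (arbitrary choice for the check)
    return 0.5 * math.exp(-((s + c * t) ** 2))

s, t, h = 0.3, 0.2, 1e-5
dt_kappa = (kappa(s, t + h) - kappa(s, t - h)) / (2 * h)
ds_root = (math.sqrt(sigma0 * kappa(s + h, t) ** 2)
           - math.sqrt(sigma0 * kappa(s - h, t) ** 2)) / (2 * h)
residual = abs(math.sqrt(mu) * dt_kappa - ds_root)
```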
On a closed or infinite string, we may ignore the boundary condition \eqref{bc}. The constant-stress solutions are simply traveling waves:
\begin{eqnarray}
\partial_t \kappa &=& \pm \left( \frac{\sigma_0}{\mu} \right)^{\frac{1}{2}} \partial_s \kappa \, ,\\
T &=& \pm \left( \frac{\sigma_0}{\mu} \right)^{\frac{1}{2}} + c_1 \sin \int^s \!\!\!\!d\tilde{s}\,\kappa(\tilde{s}) + c_2 \cos \int^s \!\!\!\!d\tilde{s}\,\kappa(\tilde{s}) \, ,
\end{eqnarray}
with $c_1$ and $c_2$ arbitrary constants that account for Galilean invariance of the system. Setting these to zero gives us uniform and purely tangential motion with $\sigma_0 = \mu T^2$. The corresponding shape, designated by ${\bf X}$ or $\kappa$, is motionless in the laboratory frame, and also completely arbitrary, as the wave equation \eqref{vectorwave} is nondispersive for uniform $\sigma$. These steady solutions of arbitrary shape, reminiscent of lariats \cite{LariatChain,HealeyPapadopoulos90}, were known to Routh \cite{Routh55} through his wrangling with the 1854 Mathematical Tripos, and have also been discovered independently of their mathematical description \cite{Aitken1878,StringLauncher}.
Our discussion has implicitly presumed generic shapes for which $\kappa$ is not simply zero everywhere. Nonrotating straight lines are special in that their tangent vector fields are also constant vector fields, and thus purely tangential motions of straight lines are also Galilean boosts that leave the stress unaffected. This makes straight strings singular limits of the towing problem discussed below.
Consider a semi-infinite string, or a string for which one distant end is pulled with the appropriate force to sustain some bulk solution. Perhaps this bulk motion is lariat-like, with predominantly tangential motion and nearly uniform stress, but the details are unimportant other than that we assume that the tangents are changing and, thus, stresses are nonzero. We will be concerned with the other, free end $s=0$, where \eqref{bc} tells us that the stress must vanish. If the stress is continuous, it will have spatial gradients. These imply, via \eqref{first}, time evolution in shape. Barring discontinuities, it is impossible to pull an open, curved string so that it moves purely along its tangents. This may be illustrated quite easily by taking a long string or chain, wrapping it around a smooth convex object, and pulling one end \cite{Cambou12,Calkin89}. Invariably, the string tries to squeeze or unwrap off of the surface of the object. Aside from suggesting a nice method for loosening strings tied around frictionless surfaces, this fact implies strong consequences for slender structures in passive locomotion in the absence of guides and obstacles.
We now proceed in search of time-dependent configurations with a free end.
\section{A Solution for Time-Independent Stress}
Hoping for simplification, let's consider stresses $\sigma(s)$ that are independent of time, although still spatially non-uniform. This transforms the equations \eqref{first} and \eqref{second} into a coupled pair of conservation laws (see also Appendix \ref{cons}):
\begin{eqnarray}
\mu^{\frac{1}{2}} \partial_t \kappa &=& \pm \partial_s \left( \left[ \sigma\kappa^2-\partial^2_s\sigma\right]^{\frac{1}{2}} \right) \, , \label{conservation1} \\
\mu^{\frac{1}{2}} \partial_t \left( \sigma \left[ \sigma\kappa^2-\partial^2_s\sigma\right]^{\frac{1}{2}} \right) &=& \pm \partial_s \left( \sigma^2\kappa \right) \, . \label{conservation2}
\end{eqnarray}
These may be combined and integrated to define a function of time $f(t)$,
\begin{equation}\label{relate}
\frac{\left(\sigma\kappa^2-\partial^2_s\sigma\right)^{\frac{1}{2}}}{\sigma^2\kappa} = \pm e^{f(t)} \, ,
\end{equation}
so that we may invert for $\kappa$,
\begin{equation} \label{kappasq}
\kappa^2 = \frac{\partial_s^2\sigma}{\sigma\left(1-\sigma^3e^{2f}\right)} \, ,
\end{equation}
and rewrite the conservation laws as
\begin{eqnarray}
\mu^{\frac{1}{2}} \partial_t \kappa &=& \pm \partial_s \left(e^{f} \sigma^2\kappa \right) \, , \label{cons1} \\
\mu^{\frac{1}{2}} \partial_t \left( e^{f}\sigma^3\kappa \right) &=& \pm \partial_s \left( \sigma^2\kappa \right) \, . \label{cons2}
\end{eqnarray}
From \eqref{kappasq} and the time invariance of $\sigma$ we may derive
\begin{equation}
\partial_t \kappa = -\frac{\kappa}{2} \partial_t \ln\left[ \pm \left(\sigma^3e^{2f}-1\right)\right] = \kappa \frac{\sigma^3\partial_tfe^{2f}}{1-\sigma^3e^{2f}} \, ,
\end{equation}
with the sign chosen to make the expression real (for the physically likely situation $\frac{\partial^2_s\sigma}{\sigma} < 0$, the sign should be positive). As it is derived from what is essentially a compatibility condition for equations \eqref{cons1} and \eqref{cons2}, using this expression to replace $\partial_t \kappa$ in either of those leads to the same thing:
\begin{equation}
\partial_s\kappa = \kappa \left[ \mu^\frac{1}{2} \frac{\sigma\partial_t\left(e^f\right)}{1-\sigma^3e^{2f}} - \frac{\partial_s\left(\sigma^2\right)}{\sigma^2} \right] \, .
\end{equation}
Thus,
\begin{equation}
\partial_s\ln\left(\pm\sigma^2\kappa\right) = \mu^\frac{1}{2} \frac{\sigma\partial_t\left(e^f\right)}{1-\sigma^3e^{2f}} \, ,
\end{equation}
and, using \eqref{kappasq} again,
\begin{equation}
\pm\partial_s\ln\left(\left[ \frac{\sigma^3\partial_s^2\sigma}{\left(1-\sigma^3e^{2f}\right)} \right]^{\frac{1}{2}} \right) = \mu^\frac{1}{2} \frac{\sigma\partial_t\left(e^f\right)}{1-\sigma^3e^{2f}} \, .
\end{equation}
Now expand and boil down the left hand side, using the spatial invariance of $f$ in the process, until its denominator matches that of the right hand side, and equate numerators to find:
\begin{equation}
\pm2\mu^\frac{1}{2}\partial_t\left(e^f\right) + \sigma^2\frac{\partial_s^3\sigma}{\partial_s^2\sigma} e^{2f} - \frac{1}{\sigma}\left( \frac{\partial_s^3\sigma}{\partial_s^2\sigma} + 3\frac{\partial_s\sigma}{\sigma} \right) = 0 \, .
\end{equation}
The only way this equation can hold for non-constant $e^f$ is if the coefficients are all constants. Let
\begin{eqnarray}
\sigma^2\frac{\partial_s^3\sigma}{\partial_s^2\sigma} &\equiv& - B^3 \, , \\
\frac{1}{\sigma}\left( \frac{\partial_s^3\sigma}{\partial_s^2\sigma} + 3\frac{\partial_s\sigma}{\sigma} \right) &\equiv& C^3 \, .
\end{eqnarray}
Combining gives
\begin{equation}
\partial_s\sigma = \frac{C^3\sigma^3 + B^3}{3\sigma} \, ,
\end{equation}
which can be solved implicitly. However, taking derivatives of this equation and reinserting above tells us that $C$ must be zero. This means that
\begin{eqnarray}
\sigma^2 &=& \frac{2B^3}{3}s \, , \\
e^{2f} &=& \frac{4\mu}{B^6 t^2} \, ,
\end{eqnarray}
for $\sigma(0)=0$, ignoring a constant that would shift time. Inserting in \eqref{kappasq}, we have
\begin{equation}
\kappa = \pm \frac{t}{2s} \left[ \frac{1}{\tilde{\mu}s^\frac{3}{2} -t^2} \right]^{\frac{1}{2}} \, ,
\end{equation}
where $\tilde{\mu} \equiv 4\mu\left(\frac{2}{3B}\right)^\frac{3}{2}$. The curvature $\kappa$ will have the same sign as $\pm e^f$. Clearly, when $t$ is nonzero there is imaginary curvature at small $s$, which does not correspond to a real rectifiable space curve. The singularity in curvature is not a problem, as it corresponds to a well-behaved angle. But we are unable to reach the endpoint $s=0$ except at $t=0$. To describe the dynamics of the end of the curve will require additional work.
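As a sanity check, the solution can be verified against the conservation law \eqref{cons1} by finite differences. In the sketch below (pure Python; the parameter values are arbitrary choices inside the region of validity $\tilde{\mu}s^{\frac{3}{2}} > t^2$), we take the positive root $e^f = 2\mu^{\frac{1}{2}}/(B^3t)$ and the positive curvature branch; the minus branch of the $\pm$ in \eqref{cons1} is then the one satisfied.

```python
import math

# Check of the time-independent-stress solution against eq. (cons1):
#   sqrt(mu) * dt(kappa) = -ds( e^f * sigma^2 * kappa )    (minus branch)
# with sigma^2 = (2 B^3 / 3) s,  e^f = 2 sqrt(mu) / (B^3 t),
# kappa = (t/2s) / sqrt(mutilde*s**1.5 - t**2),  mutilde = 4 mu (2/(3B))**1.5.
mu, B = 1.0, 2.0 / 3.0              # B = 2/3 makes mutilde = 4*mu
mutilde = 4 * mu * (2 / (3 * B)) ** 1.5

def kappa(s, t):
    return (t / (2 * s)) / math.sqrt(mutilde * s ** 1.5 - t ** 2)

def flux(s, t):                     # e^f * sigma^2 * kappa
    ef = 2 * math.sqrt(mu) / (B ** 3 * t)
    sigma2 = (2 * B ** 3 / 3) * s
    return ef * sigma2 * kappa(s, t)

s, t, h = 1.0, 0.5, 1e-6            # inside the region mutilde*s^1.5 > t^2
dt_kappa = (kappa(s, t + h) - kappa(s, t - h)) / (2 * h)
ds_flux = (flux(s + h, t) - flux(s - h, t)) / (2 * h)
residual = abs(math.sqrt(mu) * dt_kappa + ds_flux)
```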
\section{A Splice Job}
We have attempted to reach the end of a generically shaped string, but have fallen a bit short. However, we have at our disposal other exact solutions that apply for the trivial geometry of a straight string with a free end. Let us try to splice such an end solution ${\bf X}^e$ to our time-independent-stress bulk solution ${\bf X}^b$:
\begin{equation}
{\bf X} = \left\{ \begin{array}{rc}
{\bf X}^e & \quad\quad\, 0 < s < s_m(t) \\
{\bf X}^b & s_m(t) < s < \infty \quad
\end{array} \right. \, .
\end{equation}
Influenced by observations of chain dynamics \cite{Cambou12, Tomaszewski06}, we let ${\bf X}^e$ be a straight segment of string that does not rotate but may accelerate along itself. The length of this end segment will be a function of time, vanishing at $t=0$ when the splice point $s_m(0)=0$ joining the two solutions hits the end of the string. Unlike that of the bulk piece, the stress in the end segment will be time-dependent. The stress may, in theory, be kept continuous even at $t=0$, an impossibility for the arrival at the free end of a lariat solution with uniform nonzero stress. The curvature singularity of the bulk solution does not preclude continuity of tangents of the spliced solution, but such continuity is not a necessity in a perfectly flexible string.
Remarkably, the bulk solution for curvature falls into the rare category of Ces\`{a}ro equations that may be analytically integrated twice to obtain the embedding vector ${\bf X} \equiv X\uvc{x} + Y\uvc{y}$. Given $\kappa(s,t)$, the tangential angle is $\phi(s,t) = \int^s\!\!ds' \kappa(s',t)$ and the Cartesian coordinates are $X(s,t) = \int^s\!\!ds' \cos \phi(s',t)$ and $Y(s,t) = \int^s\!\!ds' \sin \phi(s',t)$. The bulk solution ${\bf X}^b$, set horizontal at large $s$, is
\begin{eqnarray}
\sigma^b &=& \left(\frac{2B^3s}{3}\right)^\frac{1}{2} \, ,\\
\kappa^b &=& \frac{t}{2s} \frac{1}{\left[ \tilde{\mu}s^\frac{3}{2} -t^2 \right]^{\frac{1}{2}}} \, , \\
\phi^b &=& - \frac{2}{3}\arctan\frac{t}{\left[\tilde{\mu}s^\frac{3}{2} -t^2 \right]^\frac{1}{2}} \, , \\
X^b &=& s\left[ 2\cos\phi^b - \cos\left(2\phi^b \right)\right] \, , \\
Y^b &=& s\left[ 2\sin\phi^b + \sin\left(2\phi^b \right)\right] \, .
\end{eqnarray}
This will be spliced to the end solution ${\bf X}^e$:
\begin{eqnarray}
\sigma^e &=& A(t)s \, , \\
\kappa^e &=& 0 \, , \\
\phi^e &=& \phi^e_0(\mathrm{sgn}(t)) \, , \\
X^e &=& \left[s+ \int^t\!\!\!\!dt' \int^{t'}\!\!\!\!dt'' \frac{A(t'')}{\mu} \right] \cos\phi^e \, , \\
Y^e &=& \left[s+ \int^t\!\!\!\!dt' \int^{t'}\!\!\!\!dt'' \frac{A(t'')}{\mu} \right] \sin\phi^e \, , \end{eqnarray}
a straight segment that moves along its own tangent. When this segment disappears at $t=0$, the otherwise constant angle $\phi^e_0$ may change abruptly. For simplicity of description, we neglect rigid translation and constant terms for the moment. The forms of $\sigma$ ensure that $\sigma^e(0, t) = \sigma^b(0) = 0$, as the free end condition requires.
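The bulk expressions above can be spot-checked by finite differences: $\partial_s\phi^b = \kappa^b$, $\partial_s X^b = \cos\phi^b$, and $\partial_s Y^b = \sin\phi^b$. A sketch in pure Python, using the figure value $\tilde{\mu} = \frac{16}{9}$ and an arbitrary sample point in the region of validity:

```python
import math

# Verify that X^b, Y^b are antiderivatives of (cos phi^b, sin phi^b),
# and that phi^b is an antiderivative of kappa^b.
mutilde = 16.0 / 9.0                # value used in the figures

def phi(s, t):
    return -(2.0 / 3.0) * math.atan(t / math.sqrt(mutilde * s ** 1.5 - t ** 2))

def kappa(s, t):
    return (t / (2 * s)) / math.sqrt(mutilde * s ** 1.5 - t ** 2)

def XY(s, t):
    p = phi(s, t)
    return (s * (2 * math.cos(p) - math.cos(2 * p)),
            s * (2 * math.sin(p) + math.sin(2 * p)))

s, t, h = 1.0, 0.5, 1e-6            # arbitrary point with mutilde*s^1.5 > t^2
ds_phi = (phi(s + h, t) - phi(s - h, t)) / (2 * h)
Xp, Yp = XY(s + h, t)
Xm, Ym = XY(s - h, t)
res_phi = abs(ds_phi - kappa(s, t))
res_X = abs((Xp - Xm) / (2 * h) - math.cos(phi(s, t)))
res_Y = abs((Yp - Ym) / (2 * h) - math.sin(phi(s, t)))
```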
The splice point $s_m(t)$ must be greater than the critical value of $s$ that marks the limit of existence of the bulk solution:
\begin{equation}
s_m(t)^\frac{3}{2} \ge \frac{t^2}{\tilde{\mu}} \, .
\end{equation}
At this location, a jump condition \cite{Antman05} is imposed:
\begin{equation}\label{jump}
\left. \left( \sigma^b\partial_s{\bf X}^b - \sigma^e\partial_s{\bf X}^e \right) + \mu \partial_t s_m(t) \left( \partial_t {\bf X}^b - \partial_t {\bf X}^e \right) \right|_{s_m(t)} = 0 \, .
\end{equation}
Enforcing continuous position leads to the constraints:
\begin{eqnarray}
&&\begin{split}
&0=\left[ s_m(t) + \int^t\!\!\!\!dt' \int^{t'}\!\!\!\!dt'' \frac{A(t'')}{\mu} \right] \cos\phi^e_0 \\
&+ s_m(t)\left[-2\cos\left(\frac{2}{3}\arctan\frac{t}{\left[\tilde{\mu}s^\frac{3}{2}_m(t) -t^2 \right]^\frac{1}{2}}\right) + \cos\left(\frac{4}{3}\arctan\frac{t}{\left[\tilde{\mu}s^\frac{3}{2}_m(t) -t^2 \right]^\frac{1}{2}}\right) \right] \, , \label{errorx}
\end{split}\\
&&\begin{split}
&0 = \left[ s_m(t) + \int^t\!\!\!\!dt' \int^{t'}\!\!\!\!dt'' \frac{A(t'')}{\mu} \right] \sin\phi^e_0 \\
&+s_m(t)\left[2\sin\left(\frac{2}{3}\arctan\frac{t}{\left[\tilde{\mu}s^\frac{3}{2}_m(t) -t^2 \right]^\frac{1}{2}}\right) + \sin\left(\frac{4}{3}\arctan\frac{t}{\left[\tilde{\mu}s^\frac{3}{2}_m(t) -t^2 \right]^\frac{1}{2}}\right) \right] \, . \label{errory}
\end{split}
\end{eqnarray}
These imply that
\begin{eqnarray}
s_m(t)^\frac{3}{2} &=& (c^2+1)\frac{t^2}{\tilde{\mu}} \, , \label{sm} \\
A(t) &=& \frac{4}{9} \mu d^2 \frac{s_m(t)}{t^2} \, , \label{slope}
\end{eqnarray}
for some constants $c^2$ and $d^2$, which relations assure that all the terms have the same $t$ dependence. As this dependence is not linear in $t$, it cannot be absorbed into a rigid translation term. This means that the quantities, obtained after insertion of \eqref{sm} and \eqref{slope} in \eqref{errorx} and \eqref{errory},
\begin{eqnarray}
X^{\mathrm{err}} &\equiv& s_m(t) \left[ \left(d^2 +1\right) \cos\phi^e_0 - 2 \cos\left(\frac{2}{3}\arctan\frac{1}{c}\right) + \cos\left(\frac{4}{3}\arctan\frac{1}{c}\right) \right] \, , \label{bracketx} \\
Y^{\mathrm{err}} &\equiv& s_m(t) \left[ \left(d^2 +1\right) \sin\phi^e_0 + \mathrm{sgn}(t)\left[ 2\sin\left(\frac{2}{3}\arctan\frac{1}{c}\right) + \sin\left(\frac{4}{3}\arctan\frac{1}{c}\right)\right] \right] \, , \label{brackety}
\end{eqnarray}
must both vanish, giving a transcendental relationship between the constants and any possible discontinuity in the tangents. The jump condition \eqref{jump} provides two more quantities that must vanish,
\begin{eqnarray}
&&\frac{5}{4}d^2\left(c^2+1\right)\cos\phi^e_0 - \cos\left(\frac{2}{3}\arctan\frac{1}{c}\right) + \frac{c^2+1}{c}\left[\sin\left(\frac{2}{3}\arctan\frac{1}{c}\right) - \sin\left(\frac{4}{3}\arctan\frac{1}{c}\right)\right] \, , \label{jumpx} \\
&&\frac{5}{4}d^2\left(c^2+1\right)\sin\phi^e_0 + \mathrm{sgn(t)}\left[ \sin\left(\frac{2}{3}\arctan\frac{1}{c}\right) + \frac{c^2+1}{c}\left[\cos\left(\frac{2}{3}\arctan\frac{1}{c}\right) + \cos\left(\frac{4}{3}\arctan\frac{1}{c}\right)\right] \right] \, , \label{jumpy}
\end{eqnarray}
obtained after dividing out a common term proportional to $t^\frac{2}{3}$. This is a total of four equations for three quantities. We empirically observe, by plotting the functions on the computer, that it appears possible to satisfy all equations by approaching the limit $\phi^e_0 \rightarrow 0$, $c \rightarrow \infty$, $d \rightarrow 0$. This corresponds to two straight lines with continuous tangents (\mbox{$\phi^e_0 = - \mathrm{sgn}(t) \frac{2}{3}\arctan\frac{1}{c}$}), one of them having vanishing stress, and an infinite front velocity between the two. Unfortunately, whether or not it may have a formal justification, this is not a useful limit to take.
Hence, splicing these two exact solutions together in any nontrivial way will lead to an error in the solution, which must be kept small. For some choice of end angle $\phi^e_0$, there will in general be a finite $c^2$ and $d^2$ that respect the jump conditions \eqref{jumpx} and \eqref{jumpy}. Using these constants, continuity of position is enforced by subtracting the error terms \eqref{bracketx} and \eqref{brackety} from the end solution ${\bf X}^e$. The error, localized in the end segment of the resulting ``pseudosolution'', will scale with time in a manner proportional to the true quantities. A measure of this proportional error is
\begin{equation}\label{properr}
E \equiv \frac{ \| \mu \partial_t^2 \left(X^\mathrm{err}, Y^\mathrm{err} \right) \| }{ \| \mu \partial_t^2 \left(X^e, Y^e\right) \| } = \frac{1}{d^2}\sqrt{ \left(\frac{X^\mathrm{err}}{s_m(t)}\right)^2 + \left(\frac{Y^\mathrm{err}}{s_m(t)}\right)^2 } \, .
\end{equation}
Neglecting error terms, the magnitude of the acceleration of the end segment is now
\begin{equation}
\| \mu \partial_t^2 \left(X^e, Y^e\right) \| = A(t) = \tfrac{4}{9}\mu d^2\left(c^2+1\right)^{\frac{2}{3}} \left(\tilde{\mu} t\right)^{-\frac{2}{3}} \, ,
\end{equation}
while the length of this segment vanishes as $t^\frac{4}{3}$. The splicing has led to an acceleration of the end piece that goes as $t^{-\frac{2}{3}}$, though the velocity vanishes as $t^\frac{1}{3}$. The zero-time singularity in acceleration appears to have a physical basis. The end segment ${\bf X}^e$ swings, with all the other points, around the horizontal straight configuration adopted by the bulk solution ${\bf X}^b$ at $t=0$. As it vanishes, it experiences a sudden change of angle, as we set $\phi_0^e(-t) = -\phi_0^e(t)$. This change is also the angular phase accumulated by the entire object over the course of the motion. One could imagine that, in a real system, all of this singular business might be absorbed into some brief, slight stretching of the string.
Resigned to a pseudosolution, let us try to minimize its error. It so happens that the relative error $E$ given by \eqref{properr} hovers at a bit more than fifteen percent for jumps in tangential angle of magnitudes less than about $\frac{\pi}{2} -1$, showing little variation in this range but growing large outside it. The jumps are such as to bend the end piece further away from the bulk piece's straight $t=0$ configuration. A shallow minimum in error occurs for one such kinked pseudosolution that has a tangent discontinuity of about $\frac{1}{3}$ radian.
Two types of splicing are presented in the figures: a continuous tangent pseudosolution, and one with a kink of $\frac{4}{9}$ radian, both with $E \approx \frac{1}{6}$. All examples shown consist of a unit length string, and use $\tilde{\mu} = \frac{16}{9}$ for simplicity.
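The quoted parameter values can be recovered numerically. The sketch below (pure Python; the small value of $c$ stands in for the $c = 0$ limit, and $t > 0$ is assumed) takes the continuous-tangent case, solves \eqref{jumpx} for $d^2$, and confirms that \eqref{jumpy} then nearly vanishes and that the relative error \eqref{properr} comes out $E \approx \frac{1}{6}$.

```python
import math

# Continuous-tangent splice at t > 0: phi_e0 = -(2/3) * atan(1/c).
# Solve the x jump condition (jumpx) for d^2, then evaluate the y jump
# condition (jumpy) and the relative error E of eq. (properr).
c = 1e-4                            # stands in for the c -> 0 limit
th = math.atan(1.0 / c)
phi0 = -(2.0 / 3.0) * th            # continuous tangents, sgn(t) = +1
s23, s43 = math.sin(2 * th / 3), math.sin(4 * th / 3)
c23, c43 = math.cos(2 * th / 3), math.cos(4 * th / 3)
f = (c ** 2 + 1) / c

# jumpx = (5/4) d^2 (c^2+1) cos(phi0) - c23 + f*(s23 - s43) = 0   =>   d^2:
d2 = (c23 - f * (s23 - s43)) / ((5.0 / 4.0) * (c ** 2 + 1) * math.cos(phi0))

jumpy = ((5.0 / 4.0) * d2 * (c ** 2 + 1) * math.sin(phi0)
         + s23 + f * (c23 + c43))

xerr = (d2 + 1) * math.cos(phi0) - 2 * c23 + c43      # X_err / s_m
yerr = (d2 + 1) * math.sin(phi0) + (2 * s23 + s43)    # Y_err / s_m
E = math.sqrt(xerr ** 2 + yerr ** 2) / d2
```

As $c \rightarrow 0$ this reproduces $d^2 = 2.4$ and $E = \frac{1}{6}$, matching the figure captions.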
Stresses and curvatures at one instant are shown in Figure \ref{sigmaandkappa}. If tangents are made continuous at the splice point, the curvature there diverges. The stress is continuous at $t=0$, but not at other times.
Earlier, a presumed continuity in stress was invoked to explain why strings with free ends must have time-dependent configurations. Yet our pseudosolution has a stress discontinuity. One might wonder whether we should have simply attempted to splice a constant-nonzero-stress lariat solution to a zero-stress end solution. However, for some ${\bf X}^b$ with time-dependent tangents $\partial_s {\bf X}^b \propto \partial_t {\bf X}^b$, one cannot satisfy the jump condition \eqref{jump} with $\sigma^e=0$ and the corresponding constant $\partial_t {\bf X}^e$. One could, of course, introduce large errors into the end solution, but we have been able to keep the error within a reasonable bound here. One might also wonder why we don't simply splice two straight strings together to find a solution. If such a simple approach worked, perhaps it could be extended to multiple strings and, thus, a piecewise approximation to any curve. In Appendix \ref{straight}, we show that splicing two straight pieces does lead to an exact solution, but a rather unphysical one that also has singularities in velocity and tension at the point of reflection. Using a different jump condition, an inelastic assumption, and even more severe restrictions on the geometry, Bernstein, Hall, and Trent \cite{Bernstein58} also found discontinuous expressions with velocity and tension singularities.
Uniform velocities may be added to any result. As $t \rightarrow 0$, $\partial_t Y^b \approx -\frac{8}{3\sqrt{\tilde{\mu}s^\frac{3}{2}}}$, which motion along $\uvc{y}$ may be subtracted out from a favored point for convenience. Figure \ref{sym} shows snapshots of the configurations, along with the trajectories of the splice points, for the continuous and kinked examples using the pseudosolution
\begin{equation}\label{fisheq}
\left. \begin{array}{rr}
\left(X^e - X^{\mathrm{err}} , Y^e - Y^{\mathrm{err}} \right) & 0 < s < s_m \\
\left(X^b , Y^b \right) & s_m < s < 1 \;\;\,
\end{array} \right\} + \left( 0 , \tfrac{8}{3\sqrt{\tilde{\mu}}} \right) t \, .
\end{equation}
The differences between continuous and kinked are subtle, and the overall angular phase shifts of both curves are $\approx -\frac{2\pi}{3}$.
Adding a velocity along $\uvc{x}$ breaks the symmetry of the patterns. Figure \ref{pull} shows a continuous tangent example in which the end segment is made to move along itself at early times, by using the pseudosolution
\begin{equation}\label{pulleq}
\left. \begin{array}{rr}
\left(X^e - X^{\mathrm{err}} , Y^e - Y^{\mathrm{err}} \right) & 0 < s < s_m \\
\left(X^b , Y^b \right) & s_m < s < 1 \;\;\,
\end{array} \right\} + \tfrac{8}{3\sqrt{\tilde{\mu}} \sin |\phi^e|} \left(\cos |\phi^e| , \sin |\phi^e| \right) t \, .
\end{equation}
This example approximately mimics the twin conditions of horizontal pulling at moderate $s$ and tangential motion of the free end near small $s$, a bit like the physical situation of pulling a chain initially wrapped around an obstacle \cite{Cambou12}.
\begin{figure}[h]
\subfigure{
\begin{overpic}[width=3.2in]{sigmacont.png}\label{sigmacont}
\put(150,100){\Large{\subref{sigmacont}}}
\end{overpic}
}
\subfigure{
\begin{overpic}[width=3.2in]{sigmakink.png}\label{sigmakink}
\put(150,100){\Large{\subref{sigmakink}}}
\end{overpic}
}\\
\subfigure{
\begin{overpic}[width=3.2in]{kappacont.png}\label{kappacont}
\put(150,100){\Large{\subref{kappacont}}}
\end{overpic}
}
\subfigure{
\begin{overpic}[width=3.2in]{kappakink.png}\label{kappakink}
\put(150,100){\Large{\subref{kappakink}}}
\end{overpic}
}
\caption{Snapshot of the stress $\sigma$ and curvature $\kappa$ at $t = \frac{1}{5}\sqrt{\frac{\tilde{\mu}}{c^2+1}}$ with $\tilde{\mu} = \frac{16}{9}$. \subref{sigmacont} and \subref{kappacont} have parameter values $c=0$, $d^2 = 2.4$ and continuous tangents, while \subref{sigmakink} and \subref{kappakink} have $c = 0.714$, $d^2 = 1.76$, and a jump in tangential angle of $\approx 0.\bar{4}$ radians. The discontinuities occur at the splice point, where the curvature also diverges for the continuous tangent pseudosolution. The end segments of the pseudosolutions have a relative error of $\approx \frac{1}{6}$.}
\label{sigmaandkappa}
\end{figure}
\begin{figure}[h]
\subfigure{
\begin{overpic}[width=3.2in]{symcont.png}\label{symcont}
\put(50,250){\Large{\subref{symcont}}}
\end{overpic}
}
\subfigure{
\begin{overpic}[width=3.2in]{symconttraj.png}\label{symconttraj}
\put(50,250){\Large{\subref{symconttraj}}}
\end{overpic}
}
\subfigure{
\begin{overpic}[width=3.2in]{symkink.png}\label{symkink}
\put(50,250){\Large{\subref{symkink}}}
\end{overpic}
}
\subfigure{
\begin{overpic}[width=3.2in]{symkinktraj.png}\label{symkinktraj}
\put(50,250){\Large{\subref{symkinktraj}}}
\end{overpic}
}
\caption{\subref{symcont} and \subref{symkink} Snapshots at eleven evenly spaced times between $t = \pm \sqrt{\frac{\tilde{\mu}}{c^2+1}}$ with $\tilde{\mu} = \frac{16}{9}$, and \subref{symconttraj} and \subref{symkinktraj} clockwise trajectories of the splice point, for the pseudosolution \eqref{fisheq}. The string is of unit length and all images have the same scale. The upper images \subref{symcont} and \subref{symconttraj} have $c=0$, $d^2 = 2.4$ and continuous tangents; they correspond to the stress and curvature in \ref{sigmacont} and \ref{kappacont}. The lower images \subref{symkink} and \subref{symkinktraj} have $c = 0.714$, $d^2 = 1.76$, and a jump in tangential angle of $\approx 0.\bar{4}$ radians; they correspond to \ref{sigmakink} and \ref{kappakink}. The end segments of the pseudosolutions have a relative error of $\approx \frac{1}{6}$.}
\label{sym}
\end{figure}
\clearpage
\begin{figure}[h]
\subfigure{
\begin{overpic}[width=3.2in]{pullcont.png}\label{pullcont}
\put(50,100){\Large{\subref{pullcont}}}
\end{overpic}
}
\subfigure{
\begin{overpic}[width=3.2in]{pullconttraj.png}\label{pullconttraj}
\put(50,100){\Large{\subref{pullconttraj}}}
\end{overpic}
}
\caption{\subref{pullcont} Snapshots at eleven evenly spaced times between $t = \pm \sqrt{\frac{\tilde{\mu}}{c^2+1}}$ with $\tilde{\mu} = \frac{16}{9}$, and \subref{pullconttraj} the trajectory of the splice point, for the pseudosolution \eqref{pulleq}. The parameters are $c=0$, $d^2 = 2.4$, and continuous tangents, corresponding to the stress and curvature in \ref{sigmacont} and \ref{kappacont}. The string is of unit length and both images have the same scale. The splice point begins a bit below the kink in the trajectory and initially moves down and to the left, then reverses course for the remainder of the trajectory.
The end segment of the pseudosolution has a relative error of $\approx \frac{1}{6}$.}
\label{pull}
\end{figure}
\section{Assessment}
Curved--- that is, generic--- solutions to the string equations with one free and one moving end must have a time-dependent shape. Our attempt to find such a solution in two dimensions began with the purely pragmatic assumption of time-independent stress, which led to a conservation law form for the equations. A new solution emerged, exact but with a time-dependent range of validity. This was extended to the free end by $C^0$ splicing to another exact but geometrically trivial solution, the result being an error in the complete pseudosolution which could be kept on the order of 15\% for moderate jumps in angle at the splice point. This led to a $t^{\frac{4}{3}}$ scaling for the propagation of information--- the location of the splice point--- to and from the end of the string, and a corresponding $t^{-\frac{2}{3}}$ singularity in acceleration. The singularity is suggestive of the rapid change in direction of the end segment observed in real systems.
Though it is not a solution, our pseudosolution is less geometrically restricted than prior \cite{Bernstein58} and current (Appendix \ref{straight}) efforts that splice two straight segments together, and avoids the presumably unphysical associated singularities in velocity and tension. Thus, we may hope that it reflects more information about the real physical system. Its construction leaves the bulk portion of the solution exact, and satisfies the jump condition at the internal boundary. However, once the splicing is performed, the formerly exact end portion is no longer a solution, and only approaches a solution in an uninteresting limit. There likely exists an exact end solution, consisting of some time-dependent curve whose form we do not currently know, that could be spliced to the bulk solution without errors. There are likely also solutions that do not require splicing of any sort, for which both tangents and stress are continuous, but our results suggest that these will require time-dependent stress in the bulk.
It is not clear how general the present results are. There is no reason why the stress should be constant in time in the bulk, or why the end of the string should be treated as straight. The time scalings found here may result from the details of these assumptions. An important question is whether the effects of the boundary can be localized near the end, or if information really must propagate out from and back into the bulk. Also open is the question of whether bulk solutions may approach a generic lariat condition of uniform stress, or if the effects of the boundary are truly long-range in this sense as well. At large $s$, the bulk solution found does not saturate to a uniform value and, even worse, the curvature goes to zero. It might also be remarked here that, after all our trouble, the curved bulk solutions are not really all that far from straight, and we were further unable to introduce acute angles at the splice point without significant errors. Future work will consider traveling wave \emph{ans\"{a}tze} for both stress and curvature in an attempt to find a more useful variety of solutions.
\clearpage
\section*{Acknowledgments}
Work on this problem was inspired by observations made by J.-C. G\'{e}minard and E. Hamm and subsequent discussions with them. P.-T. Brun and D. Vella shared their unpublished results on a related problem and provided helpful insights. We also wish to thank S. Arzoumanian and E. de Langre for telling us about relevant literature on towed cylinders. Funding came from National Science Foundation grant DMR 0846582.
\section{Introduction}
Compression of complementary sequences \cite{DK:DCC:2015} has proved to be a valuable tool
for discovering new complementary sequences of various kinds over the past decade.
\section{Legendre Pairs of lengths \ellmodfive}
\noindent We consider Legendre Pairs $(A,B)$ of length \ellmodfive\ and set $m = \ell/5$.
In this case, it is known that there are exact evaluations of $\operatorname{PSD}_A(m)$ and $\operatorname{PSD}_B(m)$, given in \cite{AnnComb_2009_KKS}, that we rewrite here in a slightly different form:
\begin{align}\label{eqn:Koukouvinos}
\begin{split}
\operatorname{PSD}_A(m)& = p_2({\cal A}_m) -\displaystyle\frac{1}{2}e_2({\cal A}_m) + \displaystyle\frac{\sqrt{5}}{2} \left( \operatorname{PAF}_{{\cal A}_m}(1) - \operatorname{PAF}_{{\cal A}_m}(2) \right), \\
\operatorname{PSD}_B(m)& = p_2({\cal B}_m) -\displaystyle\frac{1}{2}e_2({\cal B}_m) + \displaystyle\frac{\sqrt{5}}{2} \left( \operatorname{PAF}_{{\cal B}_m}(1) - \operatorname{PAF}_{{\cal B}_m}(2) \right),
\end{split}
\end{align}
where ${\cal A}_m = [A_1,A_2,A_3,A_4,A_5]$ and
${\cal B}_m = [B_1,B_2,B_3,B_4,B_5]$ are the $m$-compressions of $A$ and $B$ respectively.
In addition, we have the standard relation \[\operatorname{PSD}_A(m) + \operatorname{PSD}_B(m) = 2\ell +2.\]
Since $A_1+\cdots+A_5=B_1+\cdots+B_5=\pm1$ (throughout the paper, WLOG, we are going to assume $A_1+\cdots+A_5=B_1+\cdots+B_5=1$), we must have
\begin{align}\label{eqn:1a2}
\begin{split}
p_2({\cal A}_m) -\displaystyle\frac{1}{2}e_2({\cal A}_m) &=
\displaystyle\frac{1}{4} ( 5 p_2({\cal A}_m) -1), \\
p_2({\cal B}_m) -\displaystyle\frac{1}{2}e_2({\cal B}_m) &=
\displaystyle\frac{1}{4} ( 5 p_2({\cal B}_m) -1).
\end{split}
\end{align}
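The identity behind equations~(\ref{eqn:1a2}) is elementary: $\left(\sum_i A_i\right)^2 = p_2 + 2e_2 = 1$ gives $e_2 = (1-p_2)/2$, hence $p_2 - \frac{1}{2}e_2 = \frac{1}{4}(5p_2 - 1)$. A short check in Python with arbitrary integers summing to $1$:

```python
from itertools import combinations

# For any 5 integers with sum 1:  p2 - e2/2 = (5*p2 - 1)/4,
# since (sum)^2 = p2 + 2*e2 = 1.
A = [3, -2, 4, -1, -3]                  # arbitrary choice; sums to 1
assert sum(A) == 1
p2 = sum(a * a for a in A)
e2 = sum(a * b for a, b in combinations(A, 2))
lhs = p2 - e2 / 2
rhs = (5 * p2 - 1) / 4
```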
Experimental evidence gathered for $\ell = 15,25,35,45,55$ indicates that there are Legendre pairs of these orders such that
\begin{align}\label{eqn:5p2}
\begin{split}
\displaystyle\frac{1}{4} ( 5 p_2({\cal A}_m) -1)& = \ell + 1 ,\\
\displaystyle\frac{1}{4} ( 5 p_2({\cal B}_m) -1)& = \ell + 1.
\end{split}
\end{align}
Equations~(\ref{eqn:5p2}) simplify to
\begin{align}\label{eqn:p2a2}
\begin{split}
p_2({\cal A}_m)& = 4 m + 1,\\
p_2({\cal B}_m) &= 4 m + 1.
\end{split}
\end{align}
The following lemma expresses the $m$th entry of the power spectral density ${\rm PSD}_{A}(m)$ of a sequence $A$ of length $\ell=5m$ in terms of the values in its PAF sequence.
\begin{lemma}\label{lem:Dursun}
Let $m \in \mathbb{Z}^{\geq 1}$, let $w$ be a primitive $\ell$th root of unity with $w'=w^{m}$, and let $A$ be a sequence of length $\ell=5m$. Then
\begin{align*}
\begin{split}
{\rm PSD}_{A}(m)=&\sum_{j=0}^{\ell-1}
{\rm PAF}_A(j)w^{mj}=\sum_{j=0}^{\ell-1}{\rm PAF}_A(j)(w')^{j}=
\sum_{i=0}^4\sum_{j=0}^{\frac{\ell}{5}-1}{\rm PAF}_A(5j+i)(w')^i\\
=&\sum_{j=0}^{\frac{\ell}{5}-1}{\rm PAF}_A(5j)+\sum_{i=1}^2
2\cos \left (\frac{2\pi i}{5}\right)\sum_{j=0}^{\frac{\ell}{5}-1}{\rm PAF}_A(5j+i)\\
=&\sum_{j=0}^{\frac{\ell}{5}-1}
{\rm PAF}_A(5j)+\sum_{j=0}^{\frac{\ell}{5}-1}\sum_{i=1}^2\left(\frac{-1+(-1)^{i+1}\sqrt{5}}{2}\right){\rm PAF}_A(5j+i)\\
=&\sum_{j=0}^{\frac{\ell}{5}-1}
{\rm PAF}_A(5j)-\sum_{j=0}^{\frac{\ell}{5}-1}\sum_{i=1}^2\frac{{\rm PAF}_A(5j+i)}{2}+\frac{\sqrt{5}}{2}
\left(\sum_{j=0}^{\frac{\ell}{5}-1}
\sum_{i=1}^2(-1)^{i+1}{\rm PAF}_A(5j+i)\right).
\end{split}
\end{align*}
\end{lemma}
\begin{proof}
The proof follows from the Wiener--Khinchin theorem together with the symmetry ${\rm PAF}_A(s)={\rm PAF}_A(\ell-s)$ of the periodic autocorrelation.
\end{proof}
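Since the identity in Lemma~\ref{lem:Dursun} is purely algebraic, it holds for any real sequence and can be spot-checked numerically. A short Python sketch (helper names are ours):

```python
import cmath
import math

def paf(a, s):
    """Periodic autocorrelation of a at shift s."""
    n = len(a)
    return sum(a[j] * a[(j + s) % n] for j in range(n))

def psd(a, k):
    """k-th entry of the power spectral density of a."""
    n = len(a)
    dft = sum(a[t] * cmath.exp(2j * math.pi * k * t / n) for t in range(n))
    return abs(dft) ** 2

# an arbitrary {-1, +1} sequence of length ell = 15, so m = 3
a = [1, 1, 1, -1, 1, -1, -1, 1, -1, -1, 1, -1, 1, -1, -1]
m = 3

# the last line of the Lemma: a rational part plus a multiple of sqrt(5)/2
rational_part = (sum(paf(a, 5 * j) for j in range(m))
                 - sum(paf(a, 5 * j + i) for j in range(m) for i in (1, 2)) / 2)
sqrt5_part = sum((-1) ** (i + 1) * paf(a, 5 * j + i)
                 for j in range(m) for i in (1, 2))
assert abs(psd(a, m) - (rational_part + math.sqrt(5) / 2 * sqrt5_part)) < 1e-9
```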
The following proposition follows from Lemma~\ref{lem:Dursun}.
\begin{proposition}\label{prop:Dursun}
For a pair of sequences $(A,B)$ of length
$\ell = 5 m$, the Legendre pair constraint
$${\rm PSD}_{A}(m)+{\rm PSD}_{B}(m)=2\ell+2$$
is satisfied if and only if
$$ \sum_{j=0}^{\frac{\ell}{5}-1}
\left({\rm PAF}_A(5j)+{\rm PAF}_B(5j)\right)-\sum_{j=0}^{\frac{\ell}{5}-1}\sum_{i=1}^2\frac{{\rm PAF}_A(5j+i)+{\rm PAF}_B(5j+i)}{2}=2\ell+2,$$
and
$$ \sum_{j=0}^{\frac{\ell}{5}-1}
\sum_{i=1}^2(-1)^{i+1}\frac{{\rm PAF}_A(5j+i)+{\rm PAF}_B(5j+i)}{2}=0.$$
\end{proposition}
The following proposition follows directly from equations~(\ref{eqn:Koukouvinos}) and the fact that a Legendre pair $(A,B)$ of length $\ell = 5 m$ satisfies ${\rm PSD}_{A}(m)+{\rm PSD}_{B}(m)=2\ell+2$.
\begin{proposition}\label{porp:Dursun}
Let $(A,B)$ be a Legendre pair of length $\ell = 5 m$. Then there exist an even integer $x$ and $n_1,n_2\in \mathbb{Z}$ with $n_1+n_2=2\ell+2$ such that
\begin{align*}
\operatorname{PSD}_A(m) &= n_1 + \displaystyle\frac{\sqrt{5}}{2} \cdot x, \\
\operatorname{PSD}_B(m) &= n_2 - \displaystyle\frac{\sqrt{5}}{2} \cdot x.
\end{align*}
\end{proposition}
\noindent Exhaustive searches for $\ell = 15, 25$ reveal that all Legendre pairs of these lengths have $n_1=n_2=\ell+1$, where $n_1,n_2$ are as in Proposition~\ref{porp:Dursun}.
\begin{comment}
in which the standard relationship $\operatorname{PSD}_A(m) + \operatorname{PSD}_B(m) = 2\ell +2$ is materialized. More specifically:
\begin{enumerate}
\item for $\ell = 15$, we have that $\operatorname{PSD}_A(3) = 15 + 1 + \cdots$ and $\operatorname{PSD}_B(3) = 15 + 1 - \cdots$
\item for $\ell = 25$, we have that $\operatorname{PSD}_A(5) = 25 + 1 + \cdots$ and $\operatorname{PSD}_B(5) = 25 + 1 - \cdots$
\end{enumerate}
\end{comment}
\noindent Non-exhaustive searches for larger odd values of $\ell$ which are multiples of $5$ reveal the same pattern, i.e.
the standard relation $\operatorname{PSD}_{A}(m) + \operatorname{PSD}_{B}(m) = 2\ell +2$ is realized (often, but not always) in this same very specific manner. These observations give rise to the following conjecture.
\begin{conjecture}\label{conj:Ilias}
For every odd positive integer \ellmodfive, there exist Legendre pairs $(A,B)$ of length $\ell = 5 m$, such that
\begin{align}\label{eqn:Ilias}
\begin{split}
\operatorname{PSD}_A(m) &= \ell + 1 + \displaystyle\frac{\sqrt{5}}{2} \cdot x, \\
\operatorname{PSD}_B(m) &= \ell + 1 - \displaystyle\frac{\sqrt{5}}{2} \cdot x
\end{split}
\end{align}
for some even integer $x\geq 0$.
\label{conjecture_1}
\end{conjecture}
Conjecture~\ref{conj:Ilias} says that there exists a Legendre pair $(A,B)$ such that $n_1=n_2=\ell+1$ in Proposition~\ref{porp:Dursun}, i.e.,
the constant $2\ell + 2$ can be
distributed in a ``balanced'' manner, in two equal parts, in $\operatorname{PSD}_{A}(m)$ and $\operatorname{PSD}_{B}(m)$. Table~\ref{tab:Conjecture}
provides computational evidence for Conjecture~\ref{conj:Ilias}.
\begin{table}[ht!]
\centering
\begin{tabular}{c|c|c}
$m$ & $\ell = 5 m$ & $x$ \\
\hline
1 & 5 & 2 \\
3 & 15 & $ \{0,4,8\}$ \\
5 & 25 & 2 \\
7 & 35 & 16 \\
9 & 45 & 6 \\
11 & 55 & 24 \\
13 & 65 & 14 \\
15 & 75 & 48 \\
\hline
17 & 85 & \textcolor{red}{36} \\
\hline
\end{tabular}
\caption{Computationally verified cases for Conjecture~\ref{conj:Ilias} with their corresponding $x$ values}
\label{tab:Conjecture}
\end{table}
The following proposition provides constraints on the $m$-compressed sequences $(\mathcal{A}_{(m)},\mathcal{B}_{(m)})$ of a Legendre pair $(A,B)$ of length $\ell = 5 m$.
\begin{proposition} Let $\ell=5m$ be an odd integer. Then a pair of $m$-compressed sequences $(\mathcal{A}_{(m)},\mathcal{B}_{(m)})$ of a Legendre pair $(A,B)$ of length $\ell = 5 m$
must satisfy
\[{\rm PAF}_{\mathcal{A}_{(m)}}(1) - {\rm PAF}_{\mathcal{A}_{(m)}}(2) = - \left( {\rm PAF}_{\mathcal{B}_{(m)}}(1) - {\rm PAF}_{\mathcal{B}_{(m)}}(2) \right)=x,\]
where $x$ is as in Proposition~\ref{porp:Dursun}.
\label{proposition_1Dursun}
\end{proposition}
\begin{proof}
This result follows directly from equations~(\ref{eqn:Koukouvinos}) and the fact that
a Legendre pair $(A,B)$ of length $\ell = 5 m$ satisfies ${\rm PSD}_{A}(m)+{\rm PSD}_{B}(m)=2\ell+2$.
\end{proof}
\noindent The following are some observations pertaining to Conjecture~\ref{conjecture_1}.
\begin{enumerate}
\item For a fixed value of $\ell$, the value of $x$ in Conjecture \ref{conjecture_1} is not necessarily unique.
\item The value of the even integer $x$ is determined by the $m$-compressed sequences $(\mathcal{A}_{(m)},\mathcal{B}_{(m)})$.
\item
There exist Legendre pairs of length $\ell = 5 m$ that do not satisfy Conjecture \ref{conjecture_1}.
\item A proof of Conjecture \ref{conjecture_1} would entail an infinite class of Legendre pairs, of lengths $\ell = 5 \cdot m$, for every odd $m$.
\end{enumerate}
The following proposition provides valid constraints on the $m$-compressed sequences $\mathcal{A}_{(m)}$ and $\mathcal{B}_{(m)}$ of a Legendre pair $(A,B)$ of length $\ell = 5m$
satisfying equations~(\ref{eqn:Ilias}).
\begin{proposition}\label{proposition_1}
A pair of $m$-compressed sequences $({\mathcal A}_{(m)},{\mathcal B}_{(m)})$ of a Legendre pair $(A,B)$ of length $\ell = 5 m$ satisfying equations~(\ref{eqn:Ilias})
must satisfy
\[p_2(\mathcal{A}_{(m)})={\rm PAF}_{\mathcal{A}_{(m)}}(0) = p_2(\mathcal{B}_{(m)})={\rm PAF}_{\mathcal{B}_{(m)}}(0) = 4m+1.\]
\end{proposition}
\begin{proof}
This result follows directly from equations~(\ref{eqn:1a2}) and~(\ref{eqn:p2a2}).
\end{proof}
\begin{comment}
The $m$-compressed sequences $\mathcal{A}_{(m)}, \mathcal{B}_{(m)}$ of a Legendre pair of length $\ell=5m$ satisfy
$${\rm PSD}_{\mathcal{A}_{(m)}}(1) + {\rm PSD}_{\mathcal{B}_{(m)}}(1)=2\ell+2.$$
Then by applying Proposition~\ref{prop:Dursun} to the pair $(\mathcal{A}_{(m)}, \mathcal{B}_{(m)})$ and taking $m'=1$ we get
$$\sum_{i=1}^2 (-1)^{(i+1)}({\rm PAF}_{\mathcal{A}_{(m)}}(i) + {\rm PAF}_{\mathcal{B}_{(m)}}(i))=0,$$
and this is equivalent to
\[{\rm PAF}_{\mathcal{A}_{(m)}}(1) - {\rm PAF}_{\mathcal{A}_{(m)}}(2) = - ( {\rm PAF}_{\mathcal{B}_{(m)}}(1) - \operatorname{PAF}_{\mathcal{B}_{(m)}}(2) ).\]
The $m$-compressed sequences $A_{(m)}, B_{(m)}$ of a Legendre pair of length $\ell=5m$ satisfy
$${\rm PSD}_{A_{(m)}}(1) + {\rm PSD}_{B_{(m)}}(1)=2\ell+2.$$
Then by applying Proposition~\ref{prop:Dursun} to the pair $(A_{(m)}, B_{(m)})$ and taking $m'=1$ we get
$$ {\rm PAF}_{A_{(m)}}(0) + {\rm PAF}_{B_{(m)}}(0)-\sum_{i=1}^2 \frac{{\rm PAF}_{A_{(m)}}(i) + {\rm PAF}_{B_{(m)}}(i)}{2} =2\ell+2=10m+2$$
Moreover, the $m$-compressed sequences $A_{(m)}, B_{(m)}$ of a Legendre pair of length $\ell=5m$ satisfy
$$ {\rm PAF}_{A_{(m)}}(i) + {\rm PAF}_{B_{(m)}}(i)= -2m \quad \text{ for } i =1,\ldots,5.$$
Hence, by combining equations(\ref{eqn:}) and~(\ref{eqn:})
we get
$$ {\rm PAF}_{A_{(m)}}(0) + {\rm PAF}_{B_{(m)}}(0)= 8m+2.$$
\end{comment}
\noindent
The two conditions of
Proposition~\ref{proposition_1} can be recast as the following sums-of-squares
Diophantine equations
\begin{equation}
\begin{array}{c}
A_1^2 + A_2^2 + A_3^2 + A_4^2 + A_5^2 = 4 m + 1, \\
B_1^2 + B_2^2 + B_3^2 + B_4^2 + B_5^2 = 4 m + 1.
\end{array}
\label{SOS_Diophantine_equations}
\end{equation}
In addition, we are only interested in all-odd solutions,
as $A_i, B_i$ are odd numbers for $i = 1,\ldots,5$.
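For a given $m$, the all-odd solutions of equations~(\ref{SOS_Diophantine_equations}) can be enumerated by a short brute-force search. The following Python sketch (our own helper, not the authors' search code) reproduces, for instance, the solution lists quoted in the next section for $m = 3$ and $m = 17$:

```python
from itertools import combinations_with_replacement

def all_odd_solutions(m):
    """Non-decreasing 5-tuples of positive odd integers whose squares sum to
    4m + 1 (i.e. solutions up to sign changes and permutations of entries)."""
    target = 4 * m + 1
    odds = range(1, int(target ** 0.5) + 2, 2)
    return [t for t in combinations_with_replacement(odds, 5)
            if sum(x * x for x in t) == target]

print(all_odd_solutions(3))   # [(1, 1, 1, 1, 3)]
print(all_odd_solutions(17))  # [(1, 1, 3, 3, 7), (1, 3, 3, 5, 5)]
```

`combinations_with_replacement` yields the tuples in lexicographic order, so the output lists are canonical.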
\noindent Proposition \ref{proposition_1} drastically reduces the search space by truncating the number of possible $m$-compressed sequences $({\mathcal A}_{(m)}, {\mathcal B}_{(m)})$
that can give rise to a Legendre pair of order $\ell = 5 m$.
We illustrate this fact in the next section. \\
\noindent Another consequence of
Proposition~\ref{proposition_1} is that the alphabet of the possible $m$-compressed sequences $({\mathcal A}_{(m)}, {\mathcal B}_{(m)})$ is also significantly truncated. More specifically,
while the full alphabet for candidate $m$-compressed sequences
is $$\{ -m, -(m-2), \ldots, -1, +1, \ldots, (m-2), m \},$$
using Proposition~\ref{proposition_1}, the alphabet for candidate $m$-compressed sequences reduces to
\[\{ -7, -5, -3, -1, 1, 3, 5, 7 \},\] for $m = 3$, $5$, $7$, $9$, $11$, $13$, $15$, $17$, $19$,
i.e., Proposition~\ref{proposition_1} dictates only a mild growth in the size of the alphabet for all odd multiples of $5$ with $1 < \ell < 100$.
\noindent Finally, the linear constraints $A_1 + A_2 + A_3 + A_4 + A_5 = 1$ and $B_1 + B_2 + B_3 + B_4 + B_5 = 1$ are used to rule out some of the solutions of the sums-of-squares Diophantine equations (\ref{SOS_Diophantine_equations}).
Hence, this sometimes provides additional reductions in the number of possible candidate $m$-compressed sequences $({\mathcal A}_{(m)}, {\mathcal B}_{(m)})$.
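Whether a given multiset of absolute values survives these linear constraints can be checked mechanically over all sign patterns. A small Python sketch (the function name is ours):

```python
from itertools import product

def sum_to_one_feasible(abs_values):
    """Can signs be chosen so that the signed entries sum to 1?"""
    return any(sum(s * v for s, v in zip(signs, abs_values)) == 1
               for signs in product([-1, 1], repeat=len(abs_values)))

print(sum_to_one_feasible([3, 3, 3, 3, 5]))  # True
print(sum_to_one_feasible([3, 3, 3, 3, 3]))  # False
```

For instance, this confirms that $[3,3,3,3,3]$ is infeasible, as used in the $\ell = 55$ case below.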
\section{Computational evidence for Conjecture \ref{conjecture_1}}
\noindent In this section, we verify
Conjecture~\ref{conjecture_1} for Legendre pairs of the seven known lengths $\ell = 15, 25, 35, 45, 55, 65, 75$ and the (subsequent) eighth open length $\ell = 85$.
In particular, we exhibit the first known example of a Legendre pair of length $\ell = 85$, which had been the smallest open length for Legendre pairs.
\subsection{$\ell = 15,\, m = 3$}
\noindent All Legendre pairs of length $15$ satisfy:
\begin{align*}
{\rm PSD}_A(3) &= 15 + 1 + \displaystyle\frac{\sqrt{5}}{2} \cdot x, \\
{\rm PSD}_B(3)& = 15 + 1 - \displaystyle\frac{\sqrt{5}}{2} \cdot x.
\end{align*}
Proposition \ref{proposition_1} implies the following two sums-of-squares Diophantine equations
$$
\begin{array}{c}
A_1^2 + A_2^2 + A_3^2 + A_4^2 + A_5^2 = 4 \cdot 3 + 1 = 13, \\
B_1^2 + B_2^2 + B_3^2 + B_4^2 + B_5^2 = 4 \cdot 3 + 1 = 13.
\end{array}
$$
Each of these equations has a unique solution (up to sign changes and permutations of entries) with all-odd values, namely $[ 1,1,1,1,3]$.
\subsection{$\ell = 25,\, m = 5$}
\noindent All Legendre pairs of length $25$ satisfy:
\begin{align*}
{\rm PSD}_A(5) &= 25 + 1 + \displaystyle\frac{\sqrt{5}}{2} \cdot x, \\
{\rm PSD}_B(5) &= 25 + 1 - \displaystyle\frac{\sqrt{5}}{2} \cdot x.
\end{align*}
Proposition \ref{proposition_1} implies the following two sums-of-squares Diophantine equations
$$
\begin{array}{c}
A_1^2 + A_2^2 + A_3^2 + A_4^2 + A_5^2 = 4 \cdot 5 + 1 = 21, \\
B_1^2 + B_2^2 + B_3^2 + B_4^2 + B_5^2 = 4 \cdot 5 + 1 = 21.
\end{array}
$$
Each of these equations has a unique solution (up to sign changes and permutations of entries) with all-odd values, namely $[ 1,1,1,3,3 ]$.
\subsection{$\ell = 35,\, m = 7$}
Proposition \ref{proposition_1} implies the following two sums-of-squares Diophantine equations
$$
\begin{array}{c}
A_1^2 + A_2^2 + A_3^2 + A_4^2 + A_5^2 = 4 \cdot 7 + 1 = 29, \\
B_1^2 + B_2^2 + B_3^2 + B_4^2 + B_5^2 = 4 \cdot 7 + 1 = 29.
\end{array}
$$
Each of these equations has only two solutions (up to sign changes and permutations of entries) with all-odd values, namely $[1,1,1,1,5]$, $[1,1,3,3,3]$.
\subsection{$\ell = 45,\, m = 9$}
Proposition \ref{proposition_1} implies the following two sums-of-squares Diophantine equations
$$
\begin{array}{c}
A_1^2 + A_2^2 + A_3^2 + A_4^2 + A_5^2 = 4 \cdot 9 + 1 = 37, \\
B_1^2 + B_2^2 + B_3^2 + B_4^2 + B_5^2 = 4 \cdot 9 + 1 = 37.
\end{array}
$$
Each of these equations has only two solutions (up to sign changes and permutations of entries) with all-odd values, namely $[ 1,1,1,3,5 ]$, $[ 1,3,3,3,3 ]$.
\subsection{$\ell = 55,\, m = 11$}
Proposition \ref{proposition_1} implies the following two sums-of-squares Diophantine equations
$$
\begin{array}{c}
A_1^2 + A_2^2 + A_3^2 + A_4^2 + A_5^2 = 4 \cdot 11 + 1 = 45, \\
B_1^2 + B_2^2 + B_3^2 + B_4^2 + B_5^2 = 4 \cdot 11 + 1 = 45.
\end{array}
$$
Each of these equations has only two solutions (up to sign changes and permutations of entries) with all-odd values, namely $[ 1,1,3,3,5 ]$, $[ 3,3,3,3,3 ]$.
Since the linear equations $A_1 + A_2 + A_3 + A_4 + A_5 = 1$ and $B_1 + B_2 + B_3 + B_4 + B_5 = 1$ must be satisfied, solutions based on $[3,3,3,3,3]$ are ruled out.
\subsection{$\ell = 65, \, m = 13$}
Proposition \ref{proposition_1} implies the following two sums-of-squares Diophantine equations
$$
\begin{array}{c}
A_1^2 + A_2^2 + A_3^2 + A_4^2 + A_5^2 = 4 \cdot 13 + 1 = 53, \\
B_1^2 + B_2^2 + B_3^2 + B_4^2 + B_5^2 = 4 \cdot 13 + 1 = 53.
\end{array}
$$
Each of these equations has only three solutions (up to sign changes and permutations of entries) with all-odd values, namely $[ 1,1,1,1,7 ]$, $[ 1,1,1,5,5 ]$, $[ 1,3,3,3,5 ]$.
Since the linear equations $A_1 + A_2 + A_3 + A_4 + A_5 = 1$ and $B_1 + B_2 + B_3 + B_4 + B_5 = 1$ must be satisfied, solutions based on $[1,1,1,1,7 ]$ are ruled out.
\subsection{$\ell = 75, \, m = 15$}
Proposition \ref{proposition_1} implies the following two sums-of-squares Diophantine equations
$$
\begin{array}{c}
A_1^2 + A_2^2 + A_3^2 + A_4^2 + A_5^2 = 4 \cdot 15 + 1 = 61, \\
B_1^2 + B_2^2 + B_3^2 + B_4^2 + B_5^2 = 4 \cdot 15 + 1 = 61.
\end{array}
$$
Each of these equations has only three solutions (up to sign changes and permutations of entries) with all-odd values, namely $[ 1,1,1,3,7 ]$, $[ 1,1,3,5,5 ]$, $[ 3,3,3,3,5]$.
\subsection{$\ell = 85,\, m = 17$}
Proposition \ref{proposition_1} implies the following two sums-of-squares Diophantine equations
$$
\begin{array}{c}
A_1^2 + A_2^2 + A_3^2 + A_4^2 + A_5^2 = 4 \cdot 17 + 1 = 69, \\
B_1^2 + B_2^2 + B_3^2 + B_4^2 + B_5^2 = 4 \cdot 17 + 1 = 69.
\end{array}
$$
Each of these equations has only two solutions (up to sign changes and permutations of entries) with all-odd values, namely $[ 1,1,3,3,7 ]$, $[ 1,3,3,5,5 ]$. \\
\noindent We used the subgroup of order~$2$, $H = \{ 1, 69 \}$, which acts on $\mathbb{Z}_{85}^\star$ and yields $16$ cosets of size $1$ and $34$ cosets of size $2$. We chose $12$ cosets of size $1$ and $15$ cosets of size $2$, to make a block of size $12 \cdot 1 + 15 \cdot 2 = 42$. Therefore, the size of the search space is $\binom{16}{12} \cdot \binom{34}{15} = 1820 \cdot 1{,}855{,}967{,}520 = 3{,}377{,}860{,}886{,}400$. In a non-exhaustive search, we found $4$ Legendre pairs of length $\ell =85$, made out of $6$ different sequences. Their LexRank encoding is given as
\begin{align*}
&\{\{12, 1321116338\},\, \{42, 1275934280\}\}, \\
&\{\{12, 1843909851\}, \, \{42, 606586783\}\}, \\
&\{\{42, 1275934280\}, \, \{9, 1555522731\}\}, \\
&\{\{42, 606586783\},\, \{9, 788215097\}\}.
\end{align*}
\noindent For the first Legendre pair $(A,B)$ of length $\ell = 85$ shown above, i.e. $\{\{12, 1321116338\}, \{42, 1275934280\}\}$, in order to decode the LexRank encoding one can proceed as follows.
\begin{itemize}
\item In the space of $\binom{16}{12} = 1820$ 12-subsets of $\{1,\ldots,16\}$, decode
\begin{align*}
& 12 \mbox{ as } A_\mathrm{ones} = \{1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 14, 15\}, \\
& 42 \mbox{ as } B_\mathrm{ones} = \{1, 2, 3, 4, 5, 6, 7, 8, 10, 11, 14, 15\}.
\end{align*}
\item In the space of $\binom{34}{15} = 1{,}855{,}967{,}520$ 15-subsets of $\{1,\ldots,34\}$, decode
\begin{align*}
& 1321116338 \mbox{ as } A_\mathrm{twos} = \{3, 4, 5, 7, 10, 11, 22, 24, 25, 27, 28, 29, 30, 31, 34\}, \\
& 1275934280 \mbox{ as } B_\mathrm{twos} = \{2, 8, 10, 11, 12, 15, 19, 21, 23, 25, 26, 28, 29, 33, 34\}.
\end{align*}
\item Enumerate the $16$ cosets of size $1$ in increasing order as \\
$O_1 = \{ \{5\}, \{10\}, \{15\}, \{20\}, \{25\}, \{30\}, \{35\}, \{40\}, \{45\}, \{50\}, \{55\}, \{60\}, \{65\},$ \\
$\{70\}, \{75\}, \{80\} \}.$
\item Enumerate the $34$ cosets of size $2$ in increasing order of their first element as \\
$O_2 = \{ \{1, 69\}, \{2, 53\}, \{3, 37\},
\{4, 21\}, \{6, 74\}, \{7, 58\}, \{8, 42\}, \{9, 26\},
\{11, 79\}, $ \\ $ \{12, 63\},
\{13, 47\}, \{14, 31\}, \{16, 84\}, \{17, 68\}, \{18, 52\}, \{19, 36\}, \{22, 73\}, \{23, 57\},$ \\ $ \{24, 41\}, \{27, 78\},
\{28, 62\}, \{29, 46\},\{32, 83\},\{33, 67\}, \{34, 51\}, \{38, 72\}, \{39, 56\}, $
\\ $ \{43, 77\}, \{44, 61\}, \{48, 82\}, \{49, 66\},\{54, 71\},\{59, 76\},\{64, 81\} \}.$
\item Make the block of size $42$ of the indices of the positions of the $-1$ elements in $A$, by combining $12$ elements of $O_1$ whose indices are given by $A_\mathrm{ones}$ and $15$ elements of $O_2$ whose indices are given by $A_\mathrm{twos}$. This yields the following $A$-block of size $42$: \\
$\{5, 10, 15, 20, 25, 30, 35, 40, 45, 50, 70, 75, 3,
37, 4, 21, 6, 74, 8, 42, 12, 63, 13, 47, 29, 46, $ \\ $33, 67, 34, 51, 39, 56, 43,
77, 44, 61, 48, 82, 49, 66, 64, 81\}$.
\item Make the block of size $42$ of the indices of the positions of the $-1$ elements in~$B$, by combining $12$ elements of $O_1$ whose indices are given by $B_\mathrm{ones}$ and $15$ elements of $O_2$ whose indices are given by $B_\mathrm{twos}$.
This yields the following $B$-block of size $42$: \\
$\{5, 10, 15, 20, 25, 30, 35, 40, 50, 55, 70, 75, 2,
53, 9, 26, 12, 63, 13, 47, 14,
31, 18, 52, 24, 41,$ $28, 62, 32, 83, 34, 51, 38, 72, 43, 77, 44, 61, 59, 76, 64, 81\}$.
\item It can readily be verified that the above $A$-block and $B$-block (both of size $42$)
give a Legendre pair for $\ell = 85$.
\end{itemize}
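The decoding steps above can be reproduced with standard $k$-subset unranking, assuming the encoding is the $0$-based lexicographic rank of the subset (an assumption that matches both decoded examples). A minimal Python sketch (the function name is ours):

```python
from math import comb

def unrank_lex(n, k, r):
    """0-based lexicographic unranking of k-subsets of {1, ..., n}."""
    subset, x = [], 1
    while k > 0:
        c = comb(n - x, k - 1)  # number of k-subsets whose smallest element is x
        if r < c:
            subset.append(x)    # x belongs to the subset of rank r
            k -= 1
        else:
            r -= c              # skip all subsets whose smallest element is x
        x += 1
    return subset

print(unrank_lex(16, 12, 12))  # [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 14, 15]
print(unrank_lex(16, 12, 42))  # [1, 2, 3, 4, 5, 6, 7, 8, 10, 11, 14, 15]
```

Applied with $(n,k,r) = (16,12,12)$ and $(16,12,42)$ it returns exactly $A_\mathrm{ones}$ and $B_\mathrm{ones}$.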
\noindent For the first Legendre pair $(A,B)$ of length $\ell = 85$ shown above,
we have that the $17$-compressions of $A$, $B$ are
$$
\mathcal{A}_{(17)} = [1, 3, 3, 1, -7], \, \mathcal{B}_{(17)} = [3, 1, 1, 3, -7].
$$
These possess the properties required by Proposition~\ref{proposition_1}, i.e.,
\[{\rm PAF}_{\mathcal{A}_{(17)}}(0) = {\rm PAF}_{\mathcal{B}_{(17)}}(0) = 4 \cdot 17 + 1 = 69.\]
As a consequence, the coefficients of $\displaystyle\frac{\sqrt{5}}{2}$ in ${\rm PSD}_A(17)$ and ${\rm PSD}_B(17)$ cancel out in the sum ${\rm PSD}_A(17) + {\rm PSD}_B(17) = 2 \cdot 85 + 2 = 172$, where
$$
\begin{array}{ccc}
{\rm PSD}_A(17) & = & 85 +1 +\displaystyle\frac{\sqrt{5}}{2} \cdot 36 \approx 126.2492236, \\[2ex]
{\rm PSD}_B(17) & = & 85 +1 -\displaystyle\frac{\sqrt{5}}{2} \cdot 36 \approx 45.75077641.
\end{array}
$$
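These values can be recomputed directly from the $17$-compressions via equations~(\ref{eqn:Koukouvinos}) and~(\ref{eqn:1a2}); a short Python check (helper names are ours):

```python
import math

def paf(a, s):
    """Periodic autocorrelation of a at shift s."""
    n = len(a)
    return sum(a[j] * a[(j + s) % n] for j in range(n))

A17 = [1, 3, 3, 1, -7]  # the 17-compression of A
B17 = [3, 1, 1, 3, -7]  # the 17-compression of B

x = paf(A17, 1) - paf(A17, 2)
assert x == 36
assert paf(B17, 1) - paf(B17, 2) == -x  # the sqrt(5)/2 coefficients cancel

# PSD_A(17) = (5 * p_2 - 1)/4 + (sqrt(5)/2) * (PAF(1) - PAF(2)), with p_2 = PAF(0)
psd_A = (5 * paf(A17, 0) - 1) / 4 + math.sqrt(5) / 2 * (paf(A17, 1) - paf(A17, 2))
psd_B = (5 * paf(B17, 0) - 1) / 4 + math.sqrt(5) / 2 * (paf(B17, 1) - paf(B17, 2))
```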
\subsection{$\ell = 95,\, m = 19$}
Proposition \ref{proposition_1} implies the following two sums-of-squares Diophantine equations
$$
\begin{array}{c}
A_1^2 + A_2^2 + A_3^2 + A_4^2 + A_5^2 = 4 \cdot 19 + 1 = 77, \\
B_1^2 + B_2^2 + B_3^2 + B_4^2 + B_5^2 = 4 \cdot 19 + 1 = 77.
\end{array}
$$
These equations have only four solutions (up to sign changes and permutations of entries) with all-odd values, namely
$$
[1, 1, 1, 5, 7],\, [1, 1, 5, 5, 5],\,
[1, 3, 3, 3, 7],\, [3, 3, 3, 5, 5].
$$
However, solutions based on $[1, 1, 5, 5, 5]$ are ruled out, as the linear equations $A_1 + A_2 + A_3 + A_4 + A_5 = 1$ and $B_1 + B_2 + B_3 + B_4 + B_5 = 1$ cannot be satisfied.
\subsection{$\ell = 115,\, m = 23$}
Proposition \ref{proposition_1} implies the following two sums-of-squares Diophantine equations
$$
\begin{array}{c}
A_1^2 + A_2^2 + A_3^2 + A_4^2 + A_5^2 = 4 \cdot 23 + 1 = 93, \\
B_1^2 + B_2^2 + B_3^2 + B_4^2 + B_5^2 = 4 \cdot 23 + 1 = 93.
\end{array}
$$
These equations have only three solutions (up to sign changes and permutations of entries) with all-odd values, namely
$$
[1, 1, 1, 3, 9],\, [1, 3, 3, 5, 7],\, [3, 3, 5, 5, 5].
$$
However, solutions based on $[1, 1, 1, 3, 9]$ are ruled out, as the linear equations $A_1 + A_2 + A_3 + A_4 + A_5 = 1$ and $B_1 + B_2 + B_3 + B_4 + B_5 = 1$ cannot be satisfied.
\section{Newly discovered Legendre pairs of order $\ell = 87$ with balanced compressions}
By a property of compression of Legendre pairs that can be deduced from a theorem in~\cite{DK:DCC:2015}, a 3-compression of a Legendre pair of order $\ell = 87$
must contain $14$ elements with absolute value equal to $3$ and $2 \cdot 29 - 14 = 44$ elements with absolute value equal to $1$. Experimental evidence for other orders
that are divisible by $3$ indicates that ``balanced'' configurations of candidate $3$-compressions, in which each sequence contains exactly half of the total number of elements with absolute value equal to $3$, are likely to yield Legendre pairs.
Hence, we computed approximately six thousand candidate $3$-compressions
satisfying the following constraints, where only the last constraint is not necessary.
\begin{enumerate}
\item The sequences $\mathcal{A}_3, \mathcal{B}_3 \in \{-3,-1,+1,+3\}^{29}$ together contain $14$ elements with absolute value equal to $3$ and $44$ elements with absolute value equal to $1$.
\item $\operatorname{PAF}_{\mathcal{A}_3}(s) + \operatorname{PAF}_{\mathcal{B}_3}(s) = (-2) \cdot 3 = -6$, for $s = 1, \ldots, 28$.
\item $\operatorname{PSD}_{\mathcal{A}_3}(s) + \operatorname{PSD}_{\mathcal{B}_3}(s) = 2 \cdot 87 + 2 = 176$, for $s = 1, \ldots, 28$.
\item $A_1 + \cdots + A_{29} = 1$.
\item $B_1 + \cdots + B_{29} = 1$.
\item Each of $\mathcal{A}_3$ and $\mathcal{B}_3$ contains $7$ elements with absolute value equal to $3$ and $22$ elements with absolute value equal to $1$.
\end{enumerate}
Subsequently, we ran our {\tt C} 3-uncompression code on approximately two thousand of these candidate 3-compressions and discovered the two Legendre pairs of order $\ell = 87$ below.
{\footnotesize
\begin{align*}
A_{87} =& [-1,-1,-1,-1,1,-1,-1,-1,1,1,-1,1,-1,1,-1,1,1,-1,-1,1,1,-1,-1,1,-1,1,-1,1,1, \\
&-1,-1,1,1,1,-1,-1,-1,-1,-1,1,-1,-1,1,-1,-1,-1,1,1,1,1,-1,-1,-1,1,-1,1,1,\\
&1,-1,1,1,-1,-1,-1,-1,1,1,1,1,1,1,-1,-1,1,1,1,-1,1,1,1,1,1,-1,1,-1,1,-1],\\
B_{87} =& [-1,-1,-1,-1,-1,-1,-1,-1,1,1,1,1,-1,1,1,-1,1,1,-1,1,-1,1,1,1,1,-1,-1,1,1,\\
&-1,1,1,-1,1,-1,-1,1,-1,-1,1,1,1,-1,1,-1,1,1,-1,-1,1,-1,-1,1,1,1,1,1,-1,-1,\\
&-1,1,1,1,-1,-1,1,-1,1,-1,-1,1,-1,1,-1,-1,-1,1,-1,-1,1,1,1,-1,1,-1,1,-1],\\
A_{87} =& [-1,-1,-1,-1,1,-1,-1,-1,1,1,1,1,1,1,-1,1,-1,-1,-1,1,1,1,-1,1,-1,-1,-1,1,\\
&
1,-1,-1,1,1,1,-1,-1,-1,1,-1,-1,1,-1,1,-1,-1,1,1,1,1,1,-1,-1,1,1,1,-1,1,1,-1,1,1, \\
&-1,-1,-1,-1,1,-1,1,1,-1,-1,-1,-1,1,1,1,-1,1,1,-1,1,-1,-1,1,1,1,-1],\\
B_{87} =& [-1,1,-1,1,1,-1,-1,1,1,1,1,1,1,-1,1,-1,1,1,1,-1,-1,1,-1,1,-1,1,-1,1,-1,\\
&-1,-1,1,-1,-1,-1,-1,-1,-1,-1,1,-1,1,1,1,-1,1,-1,-1,1,-1,-1,1,1,1,-1,-1,\\
&1,1,-1,-1,1,-1,1,-1,-1,1,-1,1,-1,1,-1,-1,1,-1,-1,1,-1,-1,1,1,1,1,1,1,1,1,-1].
\end{align*}
}
Both of the above Legendre pairs of length $\ell = 87$ 3-compress to
{\small
$$
\mathcal{A}_{29} = [-3, -1, 1, -1, 1, -3, -3, -1, 1, 1, 1, 1, -1, 1, -3, 1, 1, 1, -1, 3, 3, -1, -1, 1, -1, 1, -1, 3, 1],
$$
$$
\mathcal{B}_{29} = [-3, -1, 1, -1, 1, -3, -3, 1, -1, 1, 1, 1, 1, -1, 3, -3, 1, 1, -1, -1, -1, 1, 1, 3, 1, 1, -1, 3, -1].
$$}
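Constraints (2) and (4)--(6) above can be verified directly on these compressed sequences; constraint (3) then follows, since the nonzero-lag PAF sums are constant. A short Python check (the helper name is ours):

```python
def paf(a, s):
    """Periodic autocorrelation of a at shift s."""
    n = len(a)
    return sum(a[j] * a[(j + s) % n] for j in range(n))

A29 = [-3, -1, 1, -1, 1, -3, -3, -1, 1, 1, 1, 1, -1, 1, -3,
       1, 1, 1, -1, 3, 3, -1, -1, 1, -1, 1, -1, 3, 1]
B29 = [-3, -1, 1, -1, 1, -3, -3, 1, -1, 1, 1, 1, 1, -1, 3,
       -3, 1, 1, -1, -1, -1, 1, 1, 3, 1, 1, -1, 3, -1]

assert sum(A29) == 1 and sum(B29) == 1                     # constraints (4), (5)
assert [abs(v) for v in A29].count(3) == 7                 # constraint (6)
assert [abs(v) for v in B29].count(3) == 7
assert all(paf(A29, s) + paf(B29, s) == -6 for s in range(1, 29))  # constraint (2)
# constraint (3): PSD sums equal PAF_A(0) + PAF_B(0) + 6 = 85 + 85 + 6 = 176
assert paf(A29, 0) + paf(B29, 0) + 6 == 176
```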
\noindent The lengths $\ell = 85, 87$ had been the smallest open lengths $< 100$ for Legendre pairs, see \cite{TKBG:DCC:2021}, \cite{KK:JCD:2021}.
\section{Conclusion}
Recently, a Legendre pair of (the previously open) length $77$ was found in \cite{TKBG:DCC:2021}, and Legendre pairs of lengths $117, 129, 133,$ $147$ were found in \cite{KK:JCD:2021}. In this paper, we find Legendre pairs of (the previously open) lengths $85$ and $87$.
This reduces the list of integers less than $200$ for which the question of existence of Legendre pairs remains open to
$$
115, 145, 147, 159, 161, 169, 175, 177, 185, 187, 195.
$$
\bibliographystyle{plain}
\section{Introduction}
\label{sec:intro}
Image-based brain parcellation is a fundamental tool for understanding brain organization and function, in which the brain is divided into multiple non-overlapping and interacting regions \citep{eickhoff2018imaging}. Modern neuroimaging techniques enable the collection of whole-brain magnetic resonance (MR) scans in large samples of individuals. Several large studies, such as the Human Connectome Project \citep{van2013wu}, have collected such data along with human behavioral and cognitive information, leading to
a surge of interest in
relating
structural brain networks with various human traits \citep{glasser2016human,roine2019reproducibility,lin2020mapping,zhang2019tensor,apkarian2020neural}. A key issue in the understanding of brain connectivity, as noted by~\cite{Park2013}, is the way brain connectivity is measured, represented, and modeled.
In structural connectomics, the brain connectome is typically defined as corresponding to the collection of white matter fiber tracts. Classically, the intricate spatial
locations of all the fibers in the brain
are summarized via an adjacency matrix, with the cells containing summaries of connections between pairs of regions of interest (ROIs) \citep{o2013fiber}. However, as we argue and demonstrate in this paper, there are other ways to summarize the brain connectome that have distinct advantages \citep{sporns2005thehuaman,toga2012mapping}.
A method does not need to focus on an adjacency matrix representation for it to be an approach for analysis of brain connectomes.
This article contributes to the growing literature of structural connectomics by proposing a new representation of brain connectivity.
Existing literature on brain parcellation and structural brain networks typically represents structural brain connectivity via anatomical connectivity that is estimated by tractography on diffusion-weighted images \citep{behrens2003non}. The associated anatomical parcellation analysis (APA)---one of the most popular approaches in connectivity-based parcellation analysis \citep{yao2015review}---obtains the connectivity map by calculating connectivities between all pairs of regions in a predetermined anatomical parcellation scheme. Based on the selected atlas, we may build the connectivity matrix by counting the number of fibers passing between each pair of ROIs after fiber tracking. This connectivity matrix can then be used as a matrix-valued predictor in statistical analyses studying relationships with human traits ~\citep{zhang2018mapping,wang2019symmetric,lin2020mapping,de2013parcellation}.
However, such APA analyses require a particular anatomical ROI definition \citep{eickhoff2015connectivity}, and many different schemes are available involving different numbers and locations of ROIs, such as the automated anatomical labeling (AAL), automatic nonlinear imaging matching and anatomical labeling (ANIMAL) atlases, and many others \citep{tzourio2002automated,he2007small,he2008structural,yao2015review,wang2019symmetric}.
Choosing which scheme to use in practice is challenging. Several studies have reported impacts of different atlases on brain networks \citep{zalesky2010whole,messe2020parcellation}, and evidence suggests that not only the connectivity maps but also the inferences relating connectomes to human traits
are strongly sensitive to the parcellation strategy.
An additional major issue is that APA leads to connectome representations corresponding to high-dimensional adjacency matrices. While there is a growing literature focused on statistical analysis of such replicated graph or network data \citep{schiffler2017cortex,bansal2018personalized}, such methods are under-developed and poorly understood relative to the rich literature on methods for high-dimensional vector-valued predictors, and computationally efficient methods that scale well in practice are lacking. For these reasons, it is common to simply vectorize the upper-triangular part of the adjacency matrix prior to statistical analysis. However, this fails to exploit the network structure in performing dimension reduction \citep{wang2017bayesian,o2008analysis,hochberg2007whom}, and can suffer from substantial loss of efficiency and accuracy
~\citep{wang2019symmetric}.
In this article, we propose a novel tractography-based representation of the connectome, which clusters fiber endpoints to define a data adaptive parcellation targeted to explain variation among individuals and predict human traits. This leads to Principal Parcellation Analysis (PPA), representing individual brain connectomes by compositional vectors building on a basis system of fiber bundles that captures the connectivity at the population level. Unlike APA connectomes, PPA connectomes do not rely on any anatomical atlas or choosing ROIs a priori, thus reducing subjectivity and leading to a substantially different representation of the connectome.
This new representation of structural brain connectomes facilitates statistical analyses using well-established statistical methods designed for vector data. The PPA representation provides an alternative to the current standard ROI-based adjacency matrix representation in analyses studying how structural connectomes vary across individuals, both randomly and in relation to individual traits, and can accomplish these same inference goals at a fraction of the cost for implementing APA-based counterparts.
We illustrate the proposed approach through applications to data from the Human Connectome Project (HCP) and show that PPA connectomes, when combined with standard high-dimensional regression methods, improve power in predicting human traits over state-of-the-art methods based on classical connectomes, while dramatically improving parsimony and maintaining interpretability.
Our proposed method is applicable to any data that consist of a collection of fibers.
Our PPA package is publicly available on GitHub, and can be implemented routinely for diffusion image based parcellation analysis.
The rest of the paper is structured as follows. Section \ref{s:method} introduces
the proposed parcellation approach and PPA formulation.
In Section \ref{s:realdata}, we compare PPA to state-of-the-art APA-based methods using HCP data, and focus on prediction, visualization, and interpretability. Section
\ref{s:discuss} contains a discussion.
\section{Methods}\label{s:method}
\subsection{The PPA framework} \label{sec:PPA}
Suppose that we observe structural MRI, diffusion MRI, and human traits from $n$ individuals. The PPA pipeline, as illustrated in Figure \ref{tppa_pipeline}, consists of three modules: (i) reconstruction of fibers; (ii) representation of fibers and unsupervised clustering; and (iii) high-dimensional supervised learning adaptive to human traits. Each module of PPA encompasses a variety of choices, equipping PPA with easy extensibility.
We first describe each module of PPA using initial default settings,
followed by a discussion on extensions.
\begin{figure}[ht!]
\centering
\includegraphics[width=0.9\linewidth]{fig/HCP-pipeline.pdf}\\
\caption{Pipeline of tractography-based Principal Parcellation Analysis.}\label{tppa_pipeline}
\end{figure}
Let $\mathcal{F}=\{f_{ik}, k=1,\ldots,m_i, i=1,\ldots,n\}$, where $f_{ik}$ is the $k$-th fiber in the $i$-th individual's brain, and $m_i$ is the total number of fibers in the $i$-th subject.
In addition, let $y_i(s)$ denote the $s$th `trait' of the $i$th individual with $\ve{y}(s) = (y_1(s),\ldots,y_n(s))^T$ for $s=1,\ldots,S$; traits can range from demographic characteristics, alcohol and drug exposures, to scores on cognitive, psychological and behavioral assessments.
In Module (i), we reconstruct fibers
using the recently proposed
TractoFlow method \citep{theaud2019tractoflow}; alternative fiber tracking algorithms can be used instead without changing the subsequent steps in the PPA pipeline.
TractoFlow takes raw diffusion weighted images as input, performs 14 processing steps on the diffusion weighted image (DWI) and 8 steps on the T1-weighted image, and outputs classical diffusion imaging measures.
The outlier fiber tracts are detected and removed using the method proposed in \cite{garyfallidis2012quickbundles}.
Module (i) is also a key step in estimating APA connectomes.
In Module (ii), we formulate connectomes at the population level through basis networks in the form of fiber bundles that represent groups/clusters of streamlines, and represent individual connectomes via compositional vectors. Let $\{\ve{a}_{ik}\}_{k=1}^{m_i}$ and $\{\ve{b}_{ik}\}_{k=1}^{m_i}$ be the 3D coordinates of two endpoints for each fiber from the $i$-th subject. Let $Z$ be a $6\times m$ matrix stacking all $(\ve{a}_{ik}^T,\ve{b}_{ik}^T)^T$ as columns for $k=1,\ldots,m_i$ and $i = 1, \ldots, n$, where $m=\sum_i m_i$ is the total number of fibers from all subjects. We perform a cluster analysis at the fiber level using the matrix $Z$, outputting a collection of partitions of $\mathcal{F}$, denoted by $\mathcal{A}_K=\{{A}_K^{(1)},\ldots,{A}_K^{(K)}\}$, where $K$ is the number of clusters and each set ${A}_K^{(k)}$ can be interpreted as a fiber bundle. The enormous number of fibers presents substantial computational challenges in clustering; for example, traditional $K$-Means does not scale well and requires large memory, leading to prohibitive computational cost. As such, we adopt mini-batch $K$-Means \citep{sculley2010web}, which reduces the memory use and converges to the optimal value orders of magnitude faster than the full-batch $K$-Means. For a fiber and cluster center, we define their distance as the minimum of their Euclidean distances considering two orderings of fiber endpoints, accounting for fiber bi-directionality similarly to fiber flipping~\citep{garyfallidis2012quickbundles}.
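A minimal Python sketch of this clustering step follows (assuming NumPy and scikit-learn; the array names and toy data are illustrative, not taken from the PPA package). Since scikit-learn's \texttt{MiniBatchKMeans} uses plain Euclidean distance, fibers are first canonicalized by lexicographically ordering their endpoints, which approximates the flip-invariant distance defined above.

```python
import numpy as np
from sklearn.cluster import MiniBatchKMeans

def flip_distance(f, center):
    # Minimum Euclidean distance over the two orderings of the fiber's
    # endpoints, so a fiber and its flipped copy are equidistant to a center.
    flipped = np.concatenate([f[3:], f[:3]])
    return min(np.linalg.norm(f - center), np.linalg.norm(flipped - center))

def canonical_features(a, b):
    # Stack the two 3D endpoints into one 6-vector, ordering them
    # lexicographically so a fiber and its flip map to the same point;
    # this lets Euclidean mini-batch K-means approximate flip_distance.
    a, b = np.asarray(a, float), np.asarray(b, float)
    if tuple(a) > tuple(b):
        a, b = b, a
    return np.concatenate([a, b])

# Toy endpoints standing in for tractography output (200 "fibers").
rng = np.random.default_rng(0)
A = rng.normal(size=(200, 3))
B = rng.normal(size=(200, 3))
Z = np.array([canonical_features(a, b) for a, b in zip(A, B)])

km = MiniBatchKMeans(n_clusters=5, batch_size=50, n_init=3,
                     random_state=0).fit(Z)
labels = km.labels_  # fiber-bundle membership per fiber
```

In the real pipeline $Z$ would hold billions of endpoint pairs, which is exactly the regime where mini-batch updates are preferable to full-batch $K$-Means.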
The number of clusters $K$ is an important hyperparameter. Our experiments have provided guidance on good values of $K$ in practice. To automate the choice of $K$, we can rely on cross validation as illustrated in Figure \ref{mse1}. Alternatively, we can decrease sensitivity to $K$ by choosing multiple values in a multi-scale representation of the brain connectome.
For a given $K$, an individual's connectome can be represented by the proportions of the individual's fibers belonging to each of the inferred population-level fiber bundles. In particular, the $i$th individual's connectome is represented via the $K$-dimensional compositional vector
$\ve{\omega}_i=(\omega_{i1},\ldots,\omega_{iK})$, with
$\omega_{ik}$ the proportion of fibers belonging to the $k$th fiber bundle $A_K^{(k)}$, for $k=1,\ldots,K$. The connectome data for all $n$ subjects is then contained in the matrix
$\ve{\omega}=(\ve{\omega}_1^T,\cdots,\ve{\omega}_n^T)^T$. This provides a much simpler representation than the adjacency matrix-based APA approach. Note that the framework does not require all $K$ clusters to be present in every subject, since for certain individuals the number of fibers belonging to a particular cluster can be zero.
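The construction of the compositional vectors can be sketched as follows (NumPy-based; the function and variable names are ours, not the PPA package's API): given per-fiber cluster labels and subject IDs, each row is simply the normalized within-subject bundle counts.

```python
import numpy as np

def compositional_vectors(labels, subject_ids, K):
    # Build the n x K matrix of PPA connectomes: row i holds the
    # proportions of subject i's fibers in each of the K population-level
    # fiber bundles (rows sum to one).
    labels = np.asarray(labels)
    subject_ids = np.asarray(subject_ids)
    subjects = np.unique(subject_ids)
    W = np.zeros((subjects.size, K))
    for i, s in enumerate(subjects):
        counts = np.bincount(labels[subject_ids == s], minlength=K)
        W[i] = counts / counts.sum()
    return W

# Toy example: 2 subjects, K = 3 bundles; a bundle absent in a subject
# simply gets proportion zero.
labels = np.array([0, 0, 1, 2, 2, 2])
subject_ids = np.array([1, 1, 1, 2, 2, 2])
W = compositional_vectors(labels, subject_ids, K=3)
```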
In Module (iii), we relate the connectome $\ve{\omega}_i$ to traits $y_i(s)$. For simplicity in interpretation, we initially focus on trait-specific linear regression models:
\begin{equation}
{y}_i(s)=\beta_0(s)+\sum_{k=1}^{K-1}\omega_{ik}\beta_k(s)+\epsilon_i(s),\label{tpaa_lm}
\end{equation}
where $\beta_0(s)$ is a trait-specific intercept, which can be expanded to include non-connectome covariates, and $\beta_k(s)$ ($k=1,\ldots,K-1$) is a regression coefficient characterizing the relationship between the density of connections in the $k$th fiber bundle and the $s$th trait. For a sufficiently flexible specification, one may choose $K$ to be large, in which case many of the $\beta_k(s)$ coefficients are expected to be zero or close to zero. Standard sparse learning methods can be used to estimate the coefficients while learning the sparsity pattern.
This yields a set of estimated non-zero coefficients $\{\hat{\beta}_k: k \in \mathcal{K}(s)\}$, where $\mathcal{K}(s) = \{k_1, \ldots, k_{m(s)}\} \subset \{1, \ldots, K - 1\}$ collects the indices of
the $m(s)$ fiber bundles having non-zero coefficients. We refer to these fiber bundles as ``active'' for the $s$th trait. Active bundles impact the response $y_i(s)$ via equation (\ref{tpaa_lm}), while inactive bundles have no impact on the response.
In our numerical experiments, we use LASSO~\citep{tibshirani1996regression}, one of the most popular high-dimensional regression methods, as a representative example. LASSO has been very widely studied and relatively efficient algorithms are readily available. However, applying LASSO to the vectorized upper triangle portion of APA connectome adjacency matrices produces less reliable estimation and has worse predictive performance than an identical analysis using PPA instead of APA connectomes. This is consistent with previous results motivating complex statistical methods that take into account the graph structure of the APA connectomes
\citep{wang2019symmetric}.
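As an illustration of Module (iii) with LASSO, the following sketch uses synthetic Dirichlet draws as stand-ins for PPA compositional vectors; the signal structure is invented purely for demonstration. One column of $\ve{\omega}$ is dropped because the proportions sum to one, matching the $K-1$ terms in the model.

```python
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(1)
n, K = 200, 30
# Synthetic compositional connectomes: each row sums to one.
W = rng.dirichlet(np.ones(K), size=n)
# Trait driven by bundles 0 and 3 plus small noise (illustrative only).
y = 5.0 * W[:, 0] - 4.0 * W[:, 3] + 0.01 * rng.normal(size=n)

# Drop the last column: only K - 1 proportions are free.
X = W[:, :-1]
fit = LassoCV(cv=5, random_state=0).fit(X, y)
active = np.flatnonzero(fit.coef_)  # indices of "active" fiber bundles
```

On real PPA connectomes, the same call yields the index set $\mathcal{K}(s)$ of active bundles for trait $s$.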
\subsection{Extensions of PPA} \label{sec:extension}
In Module (i), fiber tracking algorithms other than TractoFlow can be considered, such as Euler Delta Crossings (EuDX) in \cite{garyfallidis2012quickbundles} and the Sparse Fascicle Model (SFM) in \cite{rokem2015evaluating}. We will compare various fiber tracking algorithms in our analyses of HCP data. In Module (ii), other clustering or factorization methods, including spectral clustering and non-negative matrix factorization (NMF), can also be adopted. In addition to the endpoints, the length and shape of the fibers may contain useful information~\citep{zhang2018mapping}, which can be incorporated in clustering analyses.
Module (iii) can be modified building on the rich literature on high-dimensional supervised learning methods. Instead of LASSO, other sparse shrinkage methods, such as elastic net \citep{zou2005regularization} and smoothly clipped absolute deviation (SCAD) penalty \citep{fan2001variable}, can be used without complication.
\section{Human Connectome Project Data Analyses}
\label{s:realdata}
In this section we use the HCP dataset to compare PPA-based methods with state-of-the-art APA-based approaches using various human traits, demonstrate how to choose $K$ in a data-driven manner, assess the robustness of PPA with respect to the fiber tracking algorithms and regularization strategies, and illustrate the interpretability of PPA.
\subsection{HCP Data Description}
Data collection and sharing for this project was provided by the MGH-USC Human Connectome Project (HCP; Principal Investigators: Bruce Rosen, M.D., Ph.D., Arthur W. Toga, Ph.D., Van J. Wedeen, M.D.). HCP funding was provided by the National Institute of Dental and Craniofacial Research (NIDCR), the National Institute of Mental Health (NIMH), and the National Institute of Neurological Disorders and Stroke (NINDS). HCP data are disseminated by the Laboratory of Neuro Imaging at the University of Southern California.
We use the same set of 1065 HCP subjects as in \cite{wang2019symmetric}, including dMRI data along with human traits, downloaded from {\it HCP 1200 Subjects Data Release}\footnote{https://www.humanconnectome.org/study/hcp-young-adult/document/1200-subjects-data-release}. Details about the dMRI data acquisition and preprocessing can be found in \cite{van2012human,sotiropoulos2013advances}. For the human traits data, seven different scores were selected: receptive vocabulary, oral reading, list sorting, flanker, picture sequence memory, card sort, and processing speed. These scores can be used to study human cognition. Note that although we use task-based scores in this section, the proposed methods are broadly applicable for any measurement reflecting human traits. All the scores are age-adjusted, and their details can be found on the HCP website\footnote{https://wiki.humanconnectome.org/display/PublicData/}. A brief description of each trait is also included in the Appendix for easy reference.
\subsection{Analysis using PPA and APA}
PPA and APA provide distinct representations of human brain connectomes. Performance in studying relationships between connectomes and traits depends on the downstream analysis methods after the connectomes are obtained. As such, we chose state-of-the-art methods developed under APA connectomes, and adopted one of the most standard analysis methods, LASSO, under PPA connectomes. Such comparisons give APA an advantage. We also implemented LASSO for the vectorized APA connectomes.
We implemented PPA using the default choices in Section~\ref{s:method}. In particular, we used TractoFlow for fiber tracking, which depends on two main technologies: Nextflow and Singularity \citep{kurtzer2017singularity, di2017nextflow, garyfallidis2014dipy, tournier2019mrtrix3, avants2009advanced, jenkinson2012fsl}.
We obtained around 2.8 billion fibers for the HCP subjects. The fiber tracking results for two randomly selected subjects are displayed in Figure \ref{fiber_tracking}. We clustered the fibers using mini-batch $K$-Means \citep{sculley2010web}. We set the batch size to 1000 and varied the number of clusters $K$ from 10 to 500
across $\{10,50,100,200,300,400,500\}$. For each choice of $K$, we conducted analyses relating the PPA connectome to the trait of interest. We can then assess, based on cross validation, which choice of $K$ is best for that particular trait.
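This data-driven choice of $K$ can be sketched as a simple loop (synthetic data; \texttt{Lasso(alpha=1e-3)} and all sizes are placeholder choices, not the settings used in our analysis): cluster once per candidate $K$, build the proportion vectors, and keep the $K$ with the smallest cross-validated MSE.

```python
import numpy as np
from sklearn.cluster import MiniBatchKMeans
from sklearn.linear_model import Lasso
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n, fibers_per_subject = 60, 300
# Toy 6-D endpoint features for all fibers of all subjects.
Z = rng.normal(size=(n * fibers_per_subject, 6))
subj = np.repeat(np.arange(n), fibers_per_subject)
y = rng.normal(size=n)  # placeholder trait

def ppa_cv_mse(K):
    # Cluster fibers into K bundles, form per-subject proportions,
    # and score a sparse regression by 5-fold cross-validated MSE.
    labels = MiniBatchKMeans(n_clusters=K, batch_size=100, n_init=3,
                             random_state=0).fit_predict(Z)
    W = np.zeros((n, K))
    for i in range(n):
        W[i] = np.bincount(labels[subj == i], minlength=K) / fibers_per_subject
    scores = cross_val_score(Lasso(alpha=1e-3), W[:, :-1], y,
                             scoring="neg_mean_squared_error", cv=5)
    return -scores.mean()

mse_by_K = {K: ppa_cv_mse(K) for K in (5, 10, 20)}
best_K = min(mse_by_K, key=mse_by_K.get)
```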
For each $K$, we obtained $K$ fiber bundles $A_K^{(k)}$ for $k = 1, \ldots, K$, leading to PPA connectome $\ve{\omega}_i$ for each individual $i = 1, \ldots, n.$
Figures \ref{clusters} and \ref{10clusters} show examples of the inferred fiber clusters ($K=10$) with each color denoting one cluster.
The same clusters, and corresponding colors, are used for the different subjects, and some heterogeneity is apparent across subjects.
The numbers of fibers in each cluster are shown in
Figure \ref{sizep} for these two subjects. The profile of these counts across clusters is similar for the two subjects, but subject 2 has a considerably greater proportion of fibers in cluster 10. A close inspection of the inferred clusters indicates that the anatomical locations of fiber bundles produced by PPA often correspond to known fascicles in the literature (\cite{gupta_thomopoulos_rashid_thompson_2017,shin_rowley_chowdhury_jolicoeur_klein_grova_rosa-neto_kobayashi_2019, Friederici2013}).
Taking the 10 clusters in Figure \ref{10clusters} as an example, clusters 2 and 10 cover the {\it inferior longitudinal fasciculus}; cluster 5 overlaps with the {\it corticospinal tract}; clusters 3 and 5 contain the {\it corpus callosum} and {\it uncinate fasciculus}; clusters 8 and 9 contain the {\it superior longitudinal fasciculus}, {\it arcuate fasciculus} and {\it inferior fronto-occipital fasciculus} (\cite{Friederici2013}), which are ROIs related to language function.
\begin{figure}[ht!]
\centering
\includegraphics[width=0.9\linewidth]{fig/fiber_tracking.pdf}\\
\caption{Examples of fiber tracking results for two randomly selected HCP subjects.}\label{fiber_tracking}
\end{figure}
\begin{figure}[h!]
\centering
\begin{tabular}{ccc}
\includegraphics[width=0.3\linewidth, height=5cm]{fig/c1.png} &
\includegraphics[width=0.3\linewidth,height=5cm]{fig/c2.png}&
\includegraphics[width=0.3\linewidth,height=5cm]{fig/c3.png}\\
\includegraphics[width=0.3\linewidth, height=5cm]{fig/c4.png} &
\includegraphics[width=0.3\linewidth, height=5cm]{fig/c5.png}&
\includegraphics[width=0.3\linewidth, height=5cm]{fig/c6.png}\\
{\small (a) Axial} & {\small (b) Sagittal} & {\small (c) Coronal}\\
\end{tabular}
\caption{Fiber clusters ($K=10$) of two subjects (each row represents one subject) at three planes: (a) Axial, (b) Sagittal, and (c) Coronal. In each plane, each color denotes one cluster.}
\label{clusters}
\end{figure}
\begin{figure}[h!]
\centering
\begin{tabular}{cccccccccc}
\includegraphics[width=0.09\linewidth, height=10cm]{fig/cc1.png}\hspace*{-1em}&
\includegraphics[width=0.09\linewidth,height=10cm]{fig/cc2.png}\hspace*{-1em}&
\includegraphics[width=0.09\linewidth, height=10cm]{fig/cc3.png}\hspace*{-1em}&
\includegraphics[width=0.09\linewidth,height=10cm]{fig/cc4.png}\hspace*{-1em}&
\includegraphics[width=0.09\linewidth,height=10cm]{fig/cc5.png}\hspace*{-1em}&
\includegraphics[width=0.09\linewidth, height=10cm]{fig/cc6.png}\hspace*{-1em}&
\includegraphics[width=0.09\linewidth,height=10cm]{fig/cc7.png}\hspace*{-1em}&
\includegraphics[width=0.09\linewidth, height=10cm]{fig/cc8.png}\hspace*{-1em}&
\includegraphics[width=0.09\linewidth,height=10cm]{fig/cc9.png}\hspace*{-1em}&
\includegraphics[width=0.09\linewidth,height=10cm]{fig/cc10.png}
\end{tabular}
\caption{An illustration of $K=10$ clusters in one subject from six different viewing angles (each row corresponds to one viewing angle); columns from left to right show clusters 1 to 10.}
\label{10clusters}
\end{figure}
\begin{figure}[h!]
\centering
\begin{tabular}{cc}
\includegraphics[width=0.45\linewidth,height=6cm]{fig/sizep1}&
\includegraphics[width=0.45\linewidth,height=6cm]{fig/sizep2}\\
{\small (a) Subject 1} & {\small (b) Subject 2} \\
\end{tabular}
\caption{The number of fibers in each cluster for the two subjects from Figure \ref{clusters}.}
\label{sizep}
\end{figure}
For APA connectomes, one first chooses brain ROIs according to an atlas template, and then selects a summary of connectivity between each pair of ROIs, such as the number of connections.
APA-based methods represent the $i$th individual's brain connectome as a $p \times p$ matrix $W_i$. Each cell of this matrix contains a summary of the strength of connection between a pair of brain ROIs; here, we use the number of fibers connecting the regions. Different atlas templates lead to different connectivity matrices having different dimensionality $p$. We used the same fiber tracking method TractoFlow, and chose the HCP842 tractography atlas \citep{yeh2018}, which segments the brain into 80 regions; see Tables S1
and S2
in the Appendix for descriptions of these 80 regions. Note that one can also obtain different count values using different tractography algorithms \citep{knosche2015validation} as well as many filtering approaches, e.g., scale-invariant feature transform (SIFT) \citep{burger2016scale} to make streamline counting more quantitative.
For PPA connectomes, we used LASSO to fit Model~\eqref{tpaa_lm}; the hyperparameter in LASSO was selected using cross validation. For APA connectomes, we implemented two recently proposed methods: symmetric bilinear regression (SBL) \citep{wang2019symmetric} and multi-graph principal component analysis (MultiGraphPCA) \citep{winter2020multiscale}. SBL investigates the relationship between human traits and connectivity matrices through a symmetric bilinear regression model, and MultiGraphPCA proposes a tensor network factorization method that links the scale-specific brain structural connectivity matrices through a common set of individual-specific scores, which are further used for human trait prediction. Tuning in SBL and MultiGraphPCA followed the recommendations by the authors. In particular, for SBL we use \texttt{K = 14}, \texttt{gamma = 6.9}, \texttt{fullit = 50}, \texttt{maxit = 10000}, \texttt{tol = 1e-6}, and \texttt{Replicates = 5}. There is a single tuning parameter $K$ in MultiGraphPCA; we compare results for $K \in \{2, 10, 20, 50, 70, 200, 400, 500\}$. We also implemented LASSO on the vectorized (keeping only upper-triangular elements) connectivity matrix $W_i$.
In order to assess the robustness of the proposed method, we tested different versions of PPA with respect to the fiber tracking algorithms and regularization strategies. In particular, we adopted another two fiber tracking algorithms in Module (i), Euler Delta Crossings or EuDX \citep{garyfallidis2014dipy}, and local tracking with Sparse Fascicle Model or SFM \citep{rokem2015evaluating}. For the regularization strategy, we also consider elastic net \citep{zou2005regularization},
which combines the ${L_1}$ penalty of LASSO with the
${L_2}$ penalty of ridge regression \citep{hoerl1970ridge}, with the goal of simultaneous selection of correlated predictors.
Figures \ref{compare1} and \ref{compare2} show the comparison of various versions of PPA using different fiber tracking algorithms and regularization strategies, respectively. The results are similar to those shown above, and demonstrate the robustness of the PPA method to the
fiber tracking algorithm and regularization approach.
\subsection{Predictive performance \& parsimony}
We implemented three methods under APA connectomes, denoted LASSO, SBL, and MultiGraphPCA, and the version of PPA using the TractoFlow fiber tracking algorithm and LASSO regularization to analyze the 1065 HCP subjects. We calculated the 5-fold cross validation mean squared error (MSE) to compare their predictive performance for seven human traits. We also included the performance of a \textit{null model}, which contains only an intercept term and uses the sample average of responses in the training set as the prediction; this model assumes no significance of the connectome in explaining the selected traits and serves as a reference model.
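The null-model baseline is straightforward to reproduce; a sketch using scikit-learn's \texttt{KFold} follows (fold settings are illustrative).

```python
import numpy as np
from sklearn.model_selection import KFold

def null_model_cv_mse(y, n_splits=5, seed=0):
    # Intercept-only model: predict each test fold by the mean of the
    # training fold; report the average MSE across folds.
    y = np.asarray(y, float)
    mses = []
    for train, test in KFold(n_splits, shuffle=True,
                             random_state=seed).split(y):
        pred = y[train].mean()
        mses.append(np.mean((y[test] - pred) ** 2))
    return float(np.mean(mses))
```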
\begin{figure}[h!]
\centering
\begin{tabular}{ccc}
\includegraphics[width=0.3\linewidth, height=4cm]{fig/PicVocab.png} &
\includegraphics[width=0.3\linewidth,height=4cm]{fig/ReadEng.png}&
\includegraphics[width=0.3\linewidth,height=4cm]{fig/ListSort.png}\\
\includegraphics[width=0.3\linewidth, height=4cm]{fig/bar_picv.png} &
\includegraphics[width=0.3\linewidth, height=4cm]{fig/bar_read.png}&
\includegraphics[width=0.3\linewidth, height=4cm]{fig/bar_list.png}\\
{\small (a) Receptive Vocabulary} & {\small (b) Oral Reading} & {\small (c) List Sorting} \\
\end{tabular}
\caption{Comparison of 5-fold cross validation MSE of trait prediction based on PPA and three APA-based methods (LASSO, SBL, and MultiGraphPCA) for three traits: (a) PicVocab, (b) ReadEng, and (c) ListSort. Second row is the bar-plot of MSE for APA and PPA based method for $K=400$. APA methods (SBL, LASSO, and MultiGraphPCA) are in blue and PPA method in cyan. The red horizontal line indicates the MSE of the null model, which is 230.92 in PicVocab; 219.02 in ReadEng; 174.66 in ListSort.}
\label{mse1}
\end{figure}
\begin{figure}[h!]
\centering
\begin{tabular}{cc}
\includegraphics[width=0.45\linewidth,height=5cm]{fig/Flanker.png} &
\includegraphics[width=0.45\linewidth,height=5cm]{fig/PicSeq.png} \\
{\small (a) Flanker} & {\small (b) Picture Sequence Memory} \\
\includegraphics[width=0.45\linewidth,height=5cm]{fig/CardSort.png} &
\includegraphics[width=0.45\linewidth,height=5cm]{fig/ProcSpeed.png} \\
{\small (c) Card Sort} & {\small (d) Processing Speed}\\
\end{tabular}
\caption{Comparison of 5-fold cross validation MSE of trait prediction based on PPA and three APA methods (LASSO, SBL, and MultiGraphPCA) for four traits: (a) Flanker, (b) PicSeq, (c) CardSort, and (d) ProcSpeed. Moreover, the 5-fold cross validation MSE under the null model is 101.0769 in Flanker; 272.0460 in PicSeq; 97.7893 in CardSort; 402.0294 in ProcSpeed.}
\label{mse2}
\end{figure}
Figure \ref{mse1} plots the MSEs for three traits. The upper row of Figure \ref{mse1} shows that the proposed PPA, using the simple LASSO method,
clearly outperforms SBL, LASSO, and MultiGraphPCA in most scenarios, especially as the number of clusters $K$ increases.
For the connectivity matrix-based methods, e.g., SBL, LASSO, and MultiGraphPCA, the performance does not depend on the number of clusters $K$, which is a PPA-specific tuning parameter. The bottom row of Figure \ref{mse1} compares the MSEs of our PPA-based methods at $K = 400$ to the selected three APA methods, while the red horizontal line represents the performance of the null model. Since all APA-based methods use TractoFlow, we first focus on the PPA method using the same fiber tracking algorithm. In this case, the MSEs of PPA are smaller than those of the three APA-based methods, uniformly across the three traits.
In sharp contrast to the excellent performance of LASSO for the PPA connectomes, LASSO predictions based on vectorized APA connectomes did no better than the null model.
SBL and MultiGraphPCA improve the MSEs over LASSO, as a result of a better utilization of the network structure of APA connectomes. The comparison of MSEs suggests that fundamentally changing the connectome representation based on defining population fiber bundles can perhaps lead to even bigger gains.
Figure \ref{mse1} shows that defining large numbers of fiber bundles may lead to predictive gains.
For the other four traits (Flanker, PicSeq, CardSort, ProcSpeed), all methods tend to give an MSE close to that of the null model (Figure \ref{mse2}), indicating limited predictive power of structural connectivity for these traits. It is reassuring that the proposed method is consistent with APA-based methods in these cases. We remark that the lack of predictive power might be caused in part by a weak relationship between these measured traits and actual innate abilities in the test subjects.
While Figure \ref{mse1} shows how MSE varies with $K$ for the three different fiber tracking algorithms, TractoFlow, EuDX and SFM, Figure~\ref{kselect.3traits} plots the MSEs against the number of active fibers to provide additional insight into the impact of $K$. The number of active fibers is defined as the total number of fibers that belong to the active fiber bundles selected by LASSO, i.e., it is
$\sum_{k \in \mathcal{K}(s)} |A_K^{(k)}|,$ where $|A_K^{(k)}|$ is the number of fibers in the bundle $A_K^{(k)}$.
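In code, this count is a one-liner over the per-fiber labels (NumPy sketch; the names are ours):

```python
import numpy as np

def n_active_fibers(labels, active_bundles):
    # sum over k in K(s) of |A_K^(k)|: total number of fibers that
    # belong to the LASSO-selected ("active") bundles.
    return int(np.isin(labels, list(active_bundles)).sum())

# Toy labels: 7 fibers across 4 bundles.
labels = np.array([0, 1, 1, 2, 2, 2, 3])
```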
Taking the picture vocabulary test (PicVocab) as an example, the best MSE is achieved when the number of active fibers is around $0.8\times 10^7$. This optimal number varies from trait to trait, but a U-shaped curve typically emerges as $K$ is varied up to 500. For the other traits in Figure \ref{mse2}, the MSEs do not change much as we vary the number of active fibers, which is expected since the MSE curve is flat when plotted against $K$; we omit these curves here as they are redundant.
\begin{figure}[ht!]
\centering
\begin{tabular}{ccc}
\includegraphics[width=0.3\linewidth]{fig/picv_n.png} &
\includegraphics[width=0.3\linewidth]{fig/read_n.png}&
\includegraphics[width=0.3\linewidth]{fig/list_n.png}
\end{tabular}
\caption{Comparison of 5-fold cross validation MSE of trait prediction based on PPA, plotted against the number of active fibers, for three traits: (a) PicVocab, (b) ReadEng, and (c) ListSort.}
\label{kselect.3traits}
\end{figure}
Table \ref{fp} reports the number of selected parameters in PPA and APA-based methods, which shows parsimony and effectiveness of PPA-based methods compared to LASSO applied to APA connectomes and SBL. In particular, LASSO for APA connectomes selects nearly zero active connections, which explains its poor predictive performance in Figure~\ref{mse1}. PPA selects substantially fewer non-zero parameters than SBL; this combined with the better MSEs in Figure~\ref{mse1} shows the effectiveness of PPA connectomes in representing key features of brain networks predictive of traits. The last four traits show little to no signal for any of the methods and selecting few if any features for these traits seems appropriate.
\begin{table}[h!]
\centering
\begin{tabular}{cccccccc}
\hline \hline
& \small PicVocab & \small ReadEng & \small ListSort & \small Flanker & \small PicSeq &\small CardSort & \small ProcSpeed \\
\small PPA (K=50) & \small 23 & \small 20 & \small 12 & \small 11 & \small 15 & \small 0 & \small 1 \\
\small PPA (K=100) & \small 38 & \small 33 & \small 19 & \small 3 & \small 29 & \small 3 & \small 2 \\
\small PPA (K=200) & \small 64 & \small 47 & \small 12 & \small 7 & \small 6 & \small 0 & \small 0 \\
\small PPA (K=300) & \small 53 & \small 31 & \small 2 & \small 3 & \small 1 & \small 0 & \small 0 \\
\small PPA (K=400) & \small 19 & \small 20 & \small 18 & \small 18 & \small 15 & \small 2 & \small 1 \\
\small PPA (K=500) & \small 56 & \small 39 & \small 22 & \small 1 & \small 3 & \small 0 & \small 1 \\
\small LASSO+HCP842 & \small 2 & \small 1 & \small 0 & \small 0 & \small 11 & \small 0 & \small 1 \\
\small SBL+HCP842 & \small 1134 & \small 1053 & \small 972 & \small 729 & \small 1134 & \small 810 & \small 1134 \\
\hline\hline
\end{tabular}
\caption{Number of selected parameters in different methods. For MultiGraphPCA, the number of parameters is set to be $K$ used in PPA.}
\label{fp}
\end{table}
To evaluate the generalizability of our comparisons between PPA and APA, and to assess the robustness of APA methods, we tested multiple versions of APA, varying the atlases and the summaries of connectivity used in defining the connectivity matrices. As we vary these two aspects when implementing APA methods, the improved predictive performance of PPA over APA shown earlier is consistently observed, indicating that the way connectomes are represented (PPA vs.\ APA) is a more important factor in explaining the performance gain.
In particular, for all APA related methods (SBL, LASSO, MultiGraphPCA), we checked three different atlases: HCP842, AAL2, and FreeSurferDKT. HCP842 is a tractography atlas \citep{yeh2018}, and we choose the settings used previously in obtaining the results in Figures \ref{mse1} and \ref{mse2}. This atlas provides tractography of white matter with expert labeling and examination, complementary to traditional histologically-based and voxel-based white matter atlases. AAL2 stands for automated anatomical labeling atlas 2, providing an alternative parcellation of the orbitofrontal cortex \citep{rolls2015implementation}. Finally, FreeSurferDKT is an atlas manually labeled in the macroscopic anatomy in magnetic resonance images by the Desikan–Killiany–Tourville (DKT) protocol \citep{klein2012101}.
From the results in
Figure \ref{compare3}, we can see that changing the atlas has little impact on the predictive performance of APA methods.
For analyses assessing the sensitivity of the APA results to the summary of connectivity between regions, we focused on the HCP842 atlas and tested three different ways of calculating the connectivity matrix: ``count'', ``ncount'', and ``ncount2''. For each entry in the connectivity matrix, these three summaries correspond to counting the number of tracts that pass through the two ROIs (``count''), which is used in the previous comparison in Figures \ref{mse1} and \ref{mse2}, normalizing the count by the median length of the fibers (``ncount''), and multiplying the count by the sum of the inverse of the length (``ncount2''). Figure \ref{compare4} shows that the performance of APA methods is robust to the summary used in calculating connectivity matrices.
\subsection{Integrating PPA and atlas}
The proposed PPA connectome does not rely on any tractography atlas in defining the connectome or building a regression model for traits. This reduces subjectivity in choosing ROIs and facilitates statistical analyses with improved prediction and parsimony, as shown in the preceding sections. In this section, we demonstrate another feature of PPA in terms of its compatibility with traditional ROIs---as an \textit{ab initio}, tractography-based representation of connectomes, PPA can be integrated with any existing atlas templates to borrow the ROI information encoded therein in a straightforward manner. Such integration allows us to relate the interpretation of PPA results to traditional ROIs. Through visualization, we find the proposed PPA leads to interesting and interpretable findings.
We align active fibers produced by PPA to an atlas. In particular, based on a selected atlas, we build the connectivity matrix at the population level with each matrix entry generated by counting the number of active fibers passing between each pair of ROIs. Note that similarly to deriving the connectivity matrix in APA, one can obtain many different summaries of connectivity between two regions by using various filtering approaches such as SIFT and different ways of normalization, with the count just one simple choice.
We
use the HCP842 tractography atlas \citep{yeh2018} as in our implementation of APA-based methods, which segments the brain into 80 regions. Refer to \url{http://brain.labsolver.org/diffusion-mri-templates/tractography} for more details about the 80 ROIs.
For each human trait, we visualize the PPA-induced connectivity matrix and anatomy of connections through DSI Studio (\url{http://dsi-studio.labsolver.org}), a tractography software tool that maps brain connections and correlates findings with traits. We adopt the default setting in DSI Studio by thresholding matrix entries with a small number of connecting tracks relative to the maximum connecting tracks in the connectivity matrix. We use 0.5 as the threshold for this ratio.
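A sketch of this PPA-to-atlas alignment follows (NumPy; \texttt{endpoint\_rois} is a hypothetical pre-computed lookup giving the atlas ROI of each fiber's two endpoints, and the thresholding mimics, rather than reproduces, the DSI Studio default).

```python
import numpy as np

def ppa_connectivity(endpoint_rois, labels, active_bundles, p, ratio=0.5):
    # Count active fibers between each ROI pair, then zero entries whose
    # count falls below `ratio` times the maximum count.
    C = np.zeros((p, p))
    mask = np.isin(labels, list(active_bundles))
    for r1, r2 in np.asarray(endpoint_rois)[mask]:
        C[r1, r2] += 1
        if r1 != r2:
            C[r2, r1] += 1
    C[C < ratio * C.max()] = 0.0
    return C

# Toy example: 4 fibers, 3 ROIs, bundles {0, 1} active.
endpoint_rois = np.array([[0, 1], [0, 1], [1, 2], [2, 2]])
labels = np.array([0, 0, 1, 1])
C = ppa_connectivity(endpoint_rois, labels, {0, 1}, p=3)
```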
According to the visualization plots in Figure \ref{fig:pv}-\ref{fig:f}, PPA discovers some insightful connections of various anatomical regions that are related to the human traits. Some interesting findings are listed as follows.
For most human traits, the primary pattern in the connectivity matrix does not vary much as the number of clusters (fiber bundles) increases. For the trait PicVocab, the subgraph including connections among ROI 3 (Cortico\_Striatal\_Pathway\_L), ROI 4 (Cortico\_Striatal\_Pathway\_R), ROI 7 (Corticothalamic\_Pathway\_L), ROI 8 (Corticothalamic\_Pathway\_R), and ROI 44 (Corpus\_Callosum) can be found in all four settings of the number of clusters, i.e., $K=50, 100, 200, 400$ (see Figure~\ref{fig:pv}). The trait ListSort, which is related to human working memory (Figure \ref{fig:ls}), also shows a common pattern in the connectivity matrix across cluster settings, leading to a network with shared nodes containing ROI 3 (Cortico\_Striatal\_Pathway\_L), ROI 7 (Corticothalamic\_Pathway\_L), ROI 21 (Arcuate\_Fasciculus\_L), ROI 37 (U\_Fiber\_L) and ROI 44 (Corpus\_Callosum).
For language associated human traits (e.g., the trait PicVocab is related to language and vocabulary comprehension while the trait ReadEng is related to language and reading decoding), the significant regions are mainly located in the left hemisphere (see Figure \ref{fig:pv} and Figure \ref{fig:re}). This finding indicates that the left hemisphere is particularly important for language, which has been consistently verified in clinical and experimental settings \citep{ries2016choosing}.
We find that a particularly important ROI, ROI 44 (Corpus\_Callosum), is detected for most human traits; ROI 44 is consistently a prominent node in the identified subgraph for most values of $K$ (see the second row in Figure \ref{fig:pv}-\ref{fig:f}).
The corpus callosum is a large C-shaped white matter structure, forming the floor of the longitudinal fissure that separates the cerebral cortex into the left and right hemispheres \citep{carpenter1985study}.
This ROI is responsible for transmitting sensory, motor, and cognitive signals between the hemispheres.
The Flanker task measures both a participant's attention and inhibitory control. According to the visualization plots in Figure \ref{fig:f}, significant regions identified are strongly lateralized to the right hemisphere. Right frontal dominance for inhibitory motor control has become a commonly accepted view \citep{swick2008left, garavan1999right}, indicating that our finding is in agreement with existing literature. The visualization plots for the remaining three human traits that we considered are reported in the appendix; see Figures \ref{fig:ps}-\ref{fig:psd}.
\begin{figure}[ht!]
\centering
\begin{tabular}{cccc}
\includegraphics[width=0.225\linewidth, height=3.5cm]{fig/pv_50.png} &
\includegraphics[width=0.225\linewidth,height=3.5cm]{fig/pv_100.png} &
\includegraphics[width=0.225\linewidth, height=3.5cm]{fig/pv_200.png} &
\includegraphics[width=0.225\linewidth,height=3.5cm]{fig/pv_400.png} \\
\includegraphics[width=0.225\linewidth,height=4cm]{fig/pv_50_1.png} &
\includegraphics[width=0.225\linewidth,height=4cm]{fig/pv_100_1.png} &
\includegraphics[width=0.225\linewidth, height=4cm]{fig/pv_200_1.png} &
\includegraphics[width=0.225\linewidth,height=4cm]{fig/pv_400_1.png} \\
\includegraphics[width=0.225\linewidth,height=4cm]{fig/pv_50_2.png} &
\includegraphics[width=0.225\linewidth,height=4cm]{fig/pv_100_2.png} &
\includegraphics[width=0.225\linewidth, height=4cm]{fig/pv_200_2.png} &
\includegraphics[width=0.225\linewidth,height=4cm]{fig/pv_400_2.png} \\
{\small (a) $K$=50} & {\small (b) $K$=100} & {\small (c) $K$=200} & {\small (d) $K$=400}
\end{tabular}
\caption{Visualization for trait PicVocab: each column corresponds to a different number of clusters in PPA ($K$=50; $K$=100; $K$=200; $K$=400). The first row shows the connectivity matrix between any two ROI regions in the HCP842 tractography atlas; the second row shows the anatomy of connections in an axial view; the third row shows the same in a sagittal view.}
\label{fig:pv}
\end{figure}
\begin{figure}[ht!]
\centering
\begin{tabular}{cccc}
\includegraphics[width=0.225\linewidth, height=3.5cm]{fig/ls_50.png} &
\includegraphics[width=0.225\linewidth,height=3.5cm]{fig/ls_100.png} &
\includegraphics[width=0.225\linewidth, height=3.5cm]{fig/ls_200.png} &
\includegraphics[width=0.225\linewidth,height=3.5cm]{fig/ls_400.png} \\
\includegraphics[width=0.225\linewidth,height=4cm]{fig/ls_50_1.png} &
\includegraphics[width=0.225\linewidth,height=4cm]{fig/ls_100_1.png} &
\includegraphics[width=0.225\linewidth, height=4cm]{fig/ls_200_1.png} &
\includegraphics[width=0.225\linewidth,height=4cm]{fig/ls_400_1.png} \\
\includegraphics[width=0.225\linewidth,height=4cm]{fig/ls_50_2.png} &
\includegraphics[width=0.225\linewidth,height=4cm]{fig/ls_100_2.png} &
\includegraphics[width=0.225\linewidth, height=4cm]{fig/ls_200_2.png} &
\includegraphics[width=0.225\linewidth,height=4cm]{fig/ls_400_2.png} \\
{\small (a) $K$=50} & {\small (b) $K$=100} & {\small (c) $K$=200} & {\small (d) $K$=400}
\end{tabular}
\caption{Visualization for trait ListSort: each column corresponds to a different number of clusters in PPA ($K$=50; $K$=100; $K$=200; $K$=400). The first row shows the connectivity matrix between any two ROI regions in the HCP842 tractography atlas; the second row shows the anatomy of connections in an axial view; the third row shows the same in a sagittal view.}
\label{fig:ls}
\end{figure}
\begin{figure}[ht!]
\centering
\begin{tabular}{cccc}
\includegraphics[width=0.225\linewidth, height=3.5cm]{fig/er_50.png} &
\includegraphics[width=0.225\linewidth,height=3.5cm]{fig/er_100.png} &
\includegraphics[width=0.225\linewidth, height=3.5cm]{fig/er_200.png} &
\includegraphics[width=0.225\linewidth,height=3.5cm]{fig/er_400.png} \\
\includegraphics[width=0.225\linewidth,height=4cm]{fig/er_50_1.png} &
\includegraphics[width=0.225\linewidth,height=4cm]{fig/er_100_1.png} &
\includegraphics[width=0.225\linewidth, height=4cm]{fig/er_200_1.png} &
\includegraphics[width=0.225\linewidth,height=4cm]{fig/er_400_1.png} \\
\includegraphics[width=0.225\linewidth,height=4cm]{fig/er_50_2.png} &
\includegraphics[width=0.225\linewidth,height=4cm]{fig/er_100_2.png} &
\includegraphics[width=0.225\linewidth, height=4cm]{fig/er_200_2.png} &
\includegraphics[width=0.225\linewidth,height=4cm]{fig/er_400_2.png} \\
{\small (a) $K$=50} & {\small (b) $K$=100} & {\small (c) $K$=200} & {\small (d) $K$=400}
\end{tabular}
\caption{Visualization for trait ReadEng: each column corresponds to a different number of clusters in PPA ($K$=50; $K$=100; $K$=200; $K$=400). The first row shows the connectivity matrix between any two ROI regions in the HCP842 tractography atlas; the second row shows the anatomy of connections in an axial view; the third row shows the same in a sagittal view.}
\label{fig:re}
\end{figure}
\begin{figure}[ht!]
\centering
\begin{tabular}{cccc}
\includegraphics[width=0.225\linewidth, height=3.5cm]{fig/f_50.png} &
\includegraphics[width=0.225\linewidth,height=3.5cm]{fig/f_100.png} &
\includegraphics[width=0.225\linewidth, height=3.5cm]{fig/f_200.png} &
\includegraphics[width=0.225\linewidth,height=3.5cm]{fig/f_400.png} \\
\includegraphics[width=0.225\linewidth,height=4cm]{fig/f_50_1.png} &
\includegraphics[width=0.225\linewidth,height=4cm]{fig/f_100_1.png} &
\includegraphics[width=0.225\linewidth, height=4cm]{fig/f_200_1.png} &
\includegraphics[width=0.225\linewidth,height=4cm]{fig/f_400_1.png} \\
\includegraphics[width=0.225\linewidth,height=4cm]{fig/f_50_2.png} &
\includegraphics[width=0.225\linewidth,height=4cm]{fig/f_100_2.png} &
\includegraphics[width=0.225\linewidth, height=4cm]{fig/f_200_2.png} &
\includegraphics[width=0.225\linewidth,height=4cm]{fig/f_400_2.png} \\
{\small (a) $K$=50} & {\small (b) $K$=100} & {\small (c) $K$=200} & {\small (d) $K$=400}
\end{tabular}
\caption{Visualization for trait Flanker: each column corresponds to a different number of clusters in PPA ($K$=50; $K$=100; $K$=200; $K$=400). The first row shows the connectivity matrix between any two ROI regions in the HCP842 tractography atlas; the second row shows the anatomy of connections in an axial view; the third row shows the same in a sagittal view.}
\label{fig:f}
\end{figure}
\section{Discussion}
\label{s:discuss}
In this article, we propose a new tractography-based representation of brain connectomes as an alternative to the widely used anatomical connectivity. This representation leads to Principal Parcellation Analysis (PPA), where we represent individual brain connectomes by compositional vectors building on a basis system of fiber bundles that captures the connectivity at the population level.
PPA reduces the subjectivity of classical connectomes by eliminating the need to choose atlases and ROIs a priori. Unlike traditional connectomes, whose data objects are complex, graph-structured, and ultrahigh-dimensional, PPA connectomes can be analyzed using existing statistical tools for high-dimensional vector-valued features.
Our application to HCP data indicates that PPA is robust to the specific choice of fiber tracking algorithm.
We propose an approach to integrate the parcellation produced by PPA with an atlas, so that the results can be visualized and interpreted using traditional ROIs.
There are several interesting next directions building on our initial PPA approach. Firstly, the methods used in each of the three modules can be refined. For example, we can consider different fiber clustering algorithms that take into account more than just the endpoint locations. Also, instead of just applying LASSO within a linear regression model for the trait responses, we can use more elaborate predictive algorithms and inferential approaches. Particularly for large datasets, improved predictive accuracy may be possible with flexible non-linear regression methods.
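As a concrete illustration of the final module, the trait regression on compositional PPA vectors can be run with any off-the-shelf sparse solver. The sketch below is an illustrative stand-in, not the HCP pipeline itself: it implements LASSO by cyclic coordinate descent in plain NumPy on toy compositional features (rows summing to one, like PPA vectors); the feature matrix, trait vector, and penalty value are hypothetical.

```python
import numpy as np

def soft_threshold(rho, alpha):
    """Soft-thresholding operator for the L1 penalty."""
    return np.sign(rho) * max(abs(rho) - alpha, 0.0)

def lasso_cd(X, y, alpha, n_iter=200):
    """Minimize 0.5*||y - X b||^2 + alpha*||b||_1 by cyclic coordinate descent."""
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            # partial residual with feature j removed
            r = y - X @ beta + X[:, j] * beta[j]
            beta[j] = soft_threshold(X[:, j] @ r, alpha) / col_sq[j]
    return beta

# toy compositional features: each row sums to one, like a PPA vector
rng = np.random.default_rng(0)
Z = rng.random((50, 5))
X = Z / Z.sum(axis=1, keepdims=True)
y = X @ np.array([2.0, 0.0, 0.0, -1.0, 0.0]) + 0.01 * rng.standard_normal(50)

beta_hat = lasso_cd(X, y, alpha=0.05)
```

In practice one would tune the penalty by cross-validation and could swap in the non-linear predictive algorithms mentioned above without changing the compositional representation.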
We have focused on using the proposed connectomes to analyze human traits via a regression model, and it is interesting to consider other inference problems beyond explaining and predicting task-based scores. For example, one may consider both structural and functional connectivity \citep{tian2018characterizing} and utilize other dynamical features \citep{kobeleva2021revealing}, with the proposed PPA connectomes serving as a building block to represent structural connectivity.
\section*{Data and code availability statement}
The dMRI and human traits data from the original study are available from HCP: \url{https://www.humanconnectome.org/}. Code for implementing PPA is freely available online at \url{https://github.com/xylimeng/PPA}.
\section*{Declaration of Competing Interest}
The authors declare no conflict of interest.
\section*{Credit authorship contribution statement}
\textbf{Rongjie Liu}: Conceptualization, Methodology, Software, Writing- Original draft preparation. \textbf{Meng Li}: Conceptualization, Methodology, Validation, Supervision, Writing- Reviewing and Editing. \textbf{David Dunson}: Conceptualization, Methodology, Validation, Writing- Reviewing and Editing.
\section*{Acknowledgements}
We thank Lu Wang for helpful discussions on implementing the SBL method, and Yerong Li for helping with numerical experiments in an earlier version of PPA. This work was partially funded by grant DMS-2015569 from the National Science Foundation and grant R01MH118927 of the National Institute of Mental Health of the United States National Institutes of Health.
\bibliographystyle{plainnat}
\section{Introduction and Background}
The advent of large, high-resolution cosmological simulations such as the Renaissance Simulations \citep{2015ApJ...807L..12O,2016ApJ...833...84X} provides an opportunity to glean observables from theoretical and numerically deduced phenomena. However, because radiative transfer is computationally expensive inside a full simulation, post-processing is usually required to better capture the fine features of the spectral energy distribution related to emission lines and dust extinction.
The contemporary frontier of synthetic spectroscopy and photometry therefore lies in the sophistication and physical accuracy of post-processing techniques. As the second paper in the series chronicling the methodological development of the {\sc Caius} pipeline, this work invests more computational power in the modelling of absorption, scattering, and emission of photons and explores the impact of high-energy photons from sources unique to the early universe.
\subsection{Population III stars and X-ray Binaries}
Prior to the formation of the first generation of stars, the primordial medium was essentially metal-free. The lack of metal cooling theoretically increases the Jeans mass of molecular clouds and allows for the formation of stars more massive than those of the observed initial mass function (IMF) in the local universe. However, the primordial IMF is limited at the upper end by the effect of radiation pressure, the nature of the protostar, the effective sound speed, and the propensity of the medium to clump \citep[see reviews by][]{2007ARA&A..45..565M,2015ComAC...2....3G}. These early ``Population III" stars were luminous and hot for their masses and thus contributed to the ionization and heating of the surrounding medium.
Massive Population III stars are likely to end their lives as neutron stars and black holes after short, mass-dependent lifespans \citep{2002A&A...382...28S}. When part of binary systems, compact remnants may thereupon accrete material from a longer-lived stellar partner, converting gravitational potential energy into high-energy photons in their accretion disks. In this scenario, termed an X-ray binary, the flux of X-rays and UV photons ionizes metals in the interstellar medium (ISM) to multiply-ionized states like {\sc C~IV}. Indeed, {\sc C~IV} lines have been confirmed in the spectra of analogous low-redshift systems of low-mass ($M_\star < M_{\rm bh}$) X-ray binaries \citep[e.g.][]{2005ApJ...634L.105D} and high-mass X-ray binaries \citep[HMXB; e.g.][]{2007ASPC..367..459I}.
\subsection{Emission Lines}
While there has been progress on the modelling of emission lines and dust \citep{2013ApJ...777...39Z,2014ApJ...782...32C,2016MNRAS.460.3170W,2017MNRAS.469.4863B}, photoionization calculations are usually left to routines run on isolated scenarios rather than within a cosmological context. This work endeavours to simulate photochemistry as part of an extinction and emission routine that accounts for the fully three-dimensional arrangement of dust, gas, and stars. In Sec. \ref{sec:meth}, we propose a methodology for generating and propagating the intrinsic spectra of metal-enriched stars, metal-free Population III stars, and HMXBs through gas and dust to produce the resultant galactic continuum and emission lines. In Sec. \ref{sec:resul}, diagnostics of observables from our treatment are presented, including emission line strengths and photometry for the forthcoming James Webb Space Telescope (JWST). Finally, the observational and physical implications of our results are discussed in Sec. \ref{sec:diss} and summarized in Sec. \ref{sec:con}.
\section{Methods}
\label{sec:meth}
We use the ``rare-peak" zoom-in region of the Renaissance Simulations \citep{2015ApJ...807L..12O,2016ApJ...833...84X}, which are performed using the hydrodynamic adaptive-mesh refinement (AMR) code {\sc enzo}{} with radiative transfer \citep{2011MNRAS.414.3458W} and focus on the first generations of stars and galaxies early in the Epoch of Reionization. The simulations are run using $\Omega_M = 0.266$, $\Omega_{\Lambda} = 0.734 $, $\Omega_b = 0.0449$, $h = 0.71$, $\sigma_8 = 0.8344$, and $n = 0.9624$ from the 7-year $Wilkinson\ Microwave\ Anisotropy\ Probe$ results \citep[WMAP;][]{2011ApJS..192...16L}, achieving an effective dark matter resolution of $2.9 \times 10^4\ \rm{M_\odot}$ at $z = 15$ and a spatial resolution of 19 comoving parsecs. We identified 1654 galaxies containing stellar clusters within the ``rare-peak" simulation at $z= 15$ using dark matter halo-finding code {\sc Rockstar} \citep{2013ApJ...762..109B}. Of these galaxies, we focus our analysis on 146 that contain metal-free stellar populations.
\subsection{Stellar Spectra}
We use the Flexible Stellar Population Synthesis code \citep[FSPS;][]{2010ApJ...712..833C} to generate source spectra of metal-enriched stellar clusters from the age and metallicities of particles in the simulation. Both FSPS and the simulation routines treat stellar clusters as probabilistic distributions of stars and treat luminosity as a linear function of mass. We exploit this similarity to assign spectral energy distributions (SEDs) to stellar cluster particles for radiative transfer post-processing in a manner consistent with the assumptions used to produce those particles. Using quantities calculated in the simulation, we assign each particle a metallicity isochrone and allow FSPS to interpolate an SED based on the particle's age before weighting our result by the mass of the cluster.
\begin{figure*}
\begin{center}
\includegraphics[scale=.36]{plotfour-0.pdf}
\caption{Top Row: Total extinction profiles for gas and metal emission lines and a gas absorption continuum as a function of wavelength and temperature. Bottom Row: Corresponding emission profiles assuming thermodynamic equilibrium.}
\label{fig:plotfour}
\end{center}
\end{figure*}
For metal-free stars, the Renaissance Simulations generate stellar particles corresponding to individual stars rather than clusters by randomly sampling the distribution
\begin{equation}
f(\log M)dM = M^{-x} \exp\left[-\left(\frac{M_c}{M}\right)^{\beta} \right] dM,
\label{eq:IMF}
\end{equation}
when the appropriate environmental conditions are achieved \citep{2017MNRAS.469.4863B}. This form obeys the \citet{1955ApJ...121..161S} distribution above a characteristic mass, $M_c$, and has an exponential cut off at low mass \citep{2003PASP..115..763C,2012MNRAS.427..311W}. We limit Population III star particle mass to the range $1\ {\rm M}_\odot$ to $300\ {\rm M}_\odot$ and use $x = 1.3$, $\beta = 1.6$, and $M_c = 40\ {\rm M}_\odot$. Generally, the peak of the distribution is given by
\begin{equation}
M_{\rm{peak}} = M_c \left(\frac{\beta}{x}\right)^{1/\beta},
\end{equation}
which corresponds to $\sim 46 \ {\rm M}_\odot$ in our simulation after accounting for the mass cut-offs. As noted by \citet{2011ApJ...740...13Z} in the description of the {\sc Yggdrasil} metal-free stellar SED calculator, the exact form of the IMF, and therefore the spectra of Population III stars, is a matter of great uncertainty. {\sc Yggdrasil} offers three prescriptions with three different IMFs. The PopIII.1 tool assumes a \citet{2002A&A...382...28S} power law with slope $\alpha = -2.35$ and stellar masses from 50 to 500 ${\rm M}_\odot$, modelling the very first generation of stars prior to any radiative feedback and binary formation. The PopIII.2 model assumes a log-normal distribution with a characteristic mass of 10 ${\rm M}_\odot$, modelling an IMF that includes stars from 1 to 500 ${\rm M}_\odot$ with primordial abundances but some impact from prior star formation and radiative processes \citep{2008AIPC..990D..13O}. The third model assumes a metal-enriched IMF \citep{2001MNRAS.322..231K} with a piece-wise power law over different mass regimes.
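The peak-mass expression above can be checked directly for the adopted parameters ($x=1.3$, $\beta=1.6$, $M_c=40\,{\rm M}_\odot$), where it evaluates to $\approx 45.5\,{\rm M}_\odot$, consistent with the quoted $\sim 46\,{\rm M}_\odot$. The sketch below confirms the peak numerically and draws masses from Equation (\ref{eq:IMF}) with a simple rejection sampler; the sampler is an illustrative stand-in, not the simulation's own machinery.

```python
import numpy as np

x, beta, mc = 1.3, 1.6, 40.0          # IMF parameters adopted in the simulation
m_lo, m_hi = 1.0, 300.0               # Pop III mass limits [M_sun]

def imf_density(m):
    """Unnormalized Pop III IMF: m^-x * exp(-(mc/m)^beta)."""
    return m ** -x * np.exp(-((mc / m) ** beta))

# analytic peak of the distribution: M_c * (beta/x)^(1/beta) ~ 45.5 M_sun
m_peak = mc * (beta / x) ** (1.0 / beta)

# numerical cross-check of the peak location on a fine grid
grid = np.linspace(m_lo, m_hi, 200_000)
m_peak_numeric = grid[np.argmax(imf_density(grid))]

# illustrative rejection sampler over the allowed mass range
rng = np.random.default_rng(0)
masses = np.empty(0)
while masses.size < 20_000:
    m = rng.uniform(m_lo, m_hi, 20_000)
    keep = rng.uniform(0.0, imf_density(m_peak), 20_000) < imf_density(m)
    masses = np.concatenate([masses, m[keep]])
masses = masses[:20_000]
```

The exponential cut-off suppresses low-mass stars strongly, so most sampled masses fall near or above the characteristic mass.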
For this work, we determined the PopIII.1 model to be too top-heavy for the lower end of our mass range and the Kroupa model to be inconsistent with the goal of differentiating metal-enriched and metal-free stellar populations, so we generate our SEDs using the PopIII.2 model for stars smaller than 55 ${\rm M}_\odot$ and the PopIII.1 model for larger stars. We generate and match SEDs to the age and mass of the simulation star particles assuming an instantaneous burst. Since we treat the effect of extinction separately in our calculations, we do not include a covering fraction when producing the SED. We note that there are some discrepancies between the flux of ionizing radiation modelled in the simulation and the flux calculated from {\sc Yggdrasil} due to our practice of using an average IMF to model individual stars in post-processing. We expect this to manifest as inconsistencies between the flux of ionizing radiation from the metal-free SEDs and the size of ionized regions in the simulation, so we hereafter give priority to the values of electron fraction and temperature from the simulation in our radiative transfer analysis.
\subsection{Emission Line Extinction}
\begin{figure*}
\begin{center}
\includegraphics[scale=.34]{crossectchivhe.pdf}
\includegraphics[scale=.34]{crossectchivmet.pdf}
\caption{Isothermal cross-sections of extinction profiles. Left: Mass extinction coefficient for hydrogen and helium continuum and lines. Right: Mass extinction coefficient for metal lines with a hydrogen and helium continuum. Both plots are normalized by their maximum value in Fig. \ref{fig:plotfour} and are plotted with thin, medium, and thick lines for gas of temperatures $10^4$, $10^5$, and $10^6$~K, respectively.}
\label{fig:emcross}
\end{center}
\end{figure*}
The antecedent treatment of emission lines from \citet{2017MNRAS.469.4863B} used the photo-ionization solver {\sc Cloudy} \citep{2013RMxAA..49..137F} to determine emission line strength in simulation AMR cells containing stellar populations. Line emissions were therefore limited to a small fraction of the interstellar medium (ISM) and almost none of the circumgalactic medium (CGM). A separate Monte Carlo gas and dust extinction calculation of the continuum using {\sc Hyperion} \citep{2011A&A...536A..79R} was used to attenuate the lines in conjunction with an empirical correction from \citet{2015MNRAS.454..269P}. In this work, we more thoroughly examine the use of {\sc Cloudy} and {\sc Hyperion} as tools for emission line extinction and line transfer in arbitrary stellar, dust, and gas arrangements while eschewing the use of empirical attenuation models.
In a similar manner to our prior method, we use gas densities and metallicities from the cosmological simulation and stellar spectra to stage a {\sc Cloudy} calculation for AMR cells containing interior stellar populations. For cells with more than one stellar particle, which may include combinations of metal-enriched clusters and metal-free stars, spectra are summed into a single source for the calculation. In the simulation, stellar particles are formed within a single cell by design, but may move between cells after their formation. As a result, the highest refinement level within a halo usually contains one, or at most, a few stellar particles. However, beyond the tenth level of refinement, the cells contain too little medium for a meaningful photochemistry calculation in the ISM. We therefore balance this against our desire to limit particle SED summing by allowing refinement up to the ninth level of the simulation AMR grid. We run {\sc Cloudy} until the electron fraction matches the value from the corresponding cell rather than allowing the calculation to come to thermal equilibrium, and we apply Doppler broadening to the lines using the temperature of the cell and the mass of the emitting molecule. We take enough samples of the spectra to produce a discernible Gaussian profile for most lines, which increases the computational load of our pipeline considerably compared to our prior investigation. The line profiles are then redistributed back to the intrinsic spectrum of each particle in proportion to its fraction of the total luminosity of the summed source. The result is usually a relatively small addition to the spectra from diffuse emission, but we include this calculation in our method to capture any unique photochemical interactions in regions with high flux from a local source.
In addition to the lines added to the intrinsic stellar spectra, we model emission and absorption more generally throughout the interstellar and circumstellar medium. To account for a chemically inhomogeneous interstellar medium, we treat our halos as similarly inhomogeneous distributions of metallicity, using the precise emission line wavelengths and source molecules to segregate line opacities and emissions generated by non-metals from those generated by metals. We calculate extinction, albedo, and emission for 400 equally log-spaced temperatures between $10^{2.5}$ and $10^{7}$ K using a flat spectrum in thermodynamic equilibrium and constant density and metallicity. For simplicity, we assume solar abundance patterns \citep{2009ARA&A..47..481A} when the metallicity of the gas exceeds $10^{-6}\ {\rm Z}_\odot$ and turn off the presence of metals entirely below that value. We note that, for example, carbon line absorption toward high-redshift quasars demonstrates a nearly constant carbon column density throughout reionization \citep[e.g.][]{2006ApJ...653..977S}. This implies that early enrichment was biased away from the products of Type Ia supernovae and that the relative contribution from elements like carbon and silicon may be understated in this work.
Extinction of lines is calculated by generating a high-resolution frequency-dependent line opacity map with {\sc Cloudy} for frequencies corresponding to emission lines and adding the result to the absorption profile of the continuum.
As shown in Fig. \ref{fig:plotfour}, this allows us to create one extinction profile that scales with the metal density and one that scales with the non-metal density. Notably, the emission profiles show a clear delineation between thermally ionized hydrogen and neutral species at around 4000 K, which corresponds roughly to 50\% ionization of hydrogen according to the Saha equation. Both the emission and extinction profiles include the bremsstrahlung effect. For the metal profiles, we include the continuum extinction and emission due to hydrogen and helium by necessity, to ensure that every photon has nonzero extinction, but reason that this introduces only a negligible double-counting of gas extinction. Fig. \ref{fig:emcross} shows how Gaussian broadening results in overlapping absorption profiles for species at low wavelengths, allowing metals to absorb X-ray emission efficiently. Additionally, we use a third profile for dust opacities from \citet{2003ARA&A..41..241D} ($\rm{R_V} = 2.1$), which we scale as a fraction of the metal density.
Gas, metal, and dust extinction and emission profiles are used to perform a radiative transfer calculation with {\sc Hyperion} by propagating $5 \times 10^8$ photons at $5 \times 10^4$ log-spaced wavelengths from 10 to 50000~\AA~(rest). These values are chosen to have sufficient resolution to capture line profiles of elements up to the third period and to calculate rest-frame radiative interactions from infrared through X-ray photon energies. While final emission profiles interpolate all values in the temperature range, we are forced to average extinction profiles into ten bands for the Monte Carlo algorithm. As shown in Fig. \ref{fig:plotfour}, extinction for individual species usually extends across a broad range of temperatures, and since the medium within each cell would likely exhibit a range of temperatures at higher resolution, our use of averaged bands remains representative of the physical analogues to our simulations without losing too many features in the profile. Compared to the antecedent method, the inclusion of opacities and emission profiles allows for the treatment of line transfer of photons from external sources, anisotropic absorption and emission lines, and the calculation of emission line strengths from H~{\sc II} regions larger than a single cell.
We estimate the specific radiative power of baryons in a cell by using the equation
\begin{equation}
\mathscr{E} = 4 \sigma T^4 \frac{\int^{\nu_{\rm{max}}}_{\nu_{\rm{min}}} (\kappa_\nu / \rho) B_\nu (T,\nu) d\nu}{\int^{\nu_{\rm{max}}}_{\nu_{\rm{min}}} B_\nu (T,\nu) d\nu},
\end{equation}
in agreement with \citet{1999A&A...344..282L} where $B_\nu (T)$ is the Planck spectral radiance distribution given by
\begin{equation}
B_\nu (T,\nu) = \frac{2h \nu^3}{c^2} \frac{1}{e^{h \nu/ k T}-1}.
\end{equation}
The quantity $\kappa_\nu/\rho$ is the mass absorption coefficient formed by dividing the absorption opacity by the density of the medium. The other variables have their standard physical definitions. The specific power $\mathscr{E}$, determined in units of $\rm{erg\ s^{-1} g^{-1}}$, sets the emitted radiative power per unit mass.
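A useful sanity check on the specific-power expression above is the gray-opacity limit: for a frequency-independent $\kappa_\nu/\rho = \kappa_0$, the Planck-weighted average reduces to $\kappa_0$, so $\mathscr{E} = 4\sigma T^4 \kappa_0$. The sketch below (cgs units, with an illustrative toy opacity) verifies this along with the blackbody normalization $\int B_\nu\,d\nu = \sigma T^4/\pi$.

```python
import numpy as np

# physical constants (cgs)
h, c, kB = 6.62607015e-27, 2.99792458e10, 1.380649e-16
sigma_sb = 5.670374419e-5

def planck_nu(T, nu):
    """Planck spectral radiance B_nu(T) [erg s^-1 cm^-2 Hz^-1 sr^-1]."""
    return 2.0 * h * nu ** 3 / c ** 2 / np.expm1(h * nu / (kB * T))

def trapz(y, x):
    """Simple trapezoidal quadrature on an arbitrary grid."""
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))

T = 1.0e4                            # gas temperature [K]
nu = np.logspace(12, 16, 4000)       # frequency grid [Hz], brackets the peak
B = planck_nu(T, nu)

kappa0 = 0.4                         # toy gray mass absorption coeff [cm^2 g^-1]
# Planck-weighted mean opacity times 4*sigma*T^4 gives the specific power
E_spec = 4.0 * sigma_sb * T ** 4 * trapz(kappa0 * B, nu) / trapz(B, nu)
```

In the pipeline, $\kappa_\nu/\rho$ would instead come from the temperature-dependent extinction profiles of Fig. \ref{fig:plotfour}.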
\begin{figure}
\begin{center}
\includegraphics[scale=.24]{mbinary.pdf}
\caption{Number of Population III stars (green), number of X-ray binaries (blue), fraction of X-ray binaries (black), and fraction of high mass X-ray binaries (red) for a burst of star formation assuming that 10\% (top), 25\% (middle), and 50\% (bottom) of the stars form initially in close binaries and accrete at the Eddington limit. The persistence of an individual particle is largely a function of sampling of the IMF, but the number and fraction of HMXBs is related to the close binary fraction.}
\label{fig:mbinary}
\end{center}
\end{figure}
We find that some lines and scenarios with high scattering opacities prove to be infeasible to model using a Monte Carlo method, so we rescale the scattering and absorption coefficients to produce a physically similar phenomenon with fewer scatterings. We reason that as long as the quantity $L \times \kappa_\nu$ is constant, where $L$ is the path length of an individual photon, the probability of absorption remains fixed. We also reason that as long as a photon scatters at least once within a cell, the final direction of the photon is indistinguishable from a scenario where it scatters many times. Therefore, we note that the dispersion of the radius of a three-dimensional random walk is given by $\langle R^2 \rangle = N \lambda^2$, where $R$ is the radius from the starting position, $N$ is the number of scatterings, and $\lambda = 1/\kappa_s$ is the mean free path. The path length is simply $L = N \lambda$, and the average number of scatterings before crossing a cell is $N_1 = R^2 \kappa_{s,1}^2$. For a chosen maximum feasible number of scatterings $N_2$, we determine the corresponding scattering coefficient to be
\begin{equation}
\kappa_{s,2} = \frac{\sqrt{N_2}}{R}.
\end{equation}
Likewise because we take the path length times the absorption coefficient to be constant
\begin{equation}
\frac{L_1}{L_2} = \frac{\kappa_{\nu,2}}{\kappa_{\nu,1}} = \frac{\kappa_{\rm{s},1}}{\kappa_{\rm{s},2}},
\end{equation}
and we determine the corresponding absorption coefficient to be
\begin{equation}
\kappa_{\nu,2} = \frac{R \kappa_{s,1} \kappa_{\nu,1}}{\sqrt{N_2}}.
\end{equation}
Therefore for a given cell width, an opacity distribution, and a predetermined number of scatterings, we can recreate a physically similar scenario to the true opacities in optically thick regions. We choose $N_2$ to be 1000 to ensure that scatterings occur with an appropriately high resolution within a cell and only apply this correction to situations where $N_1 > N_2$. As an example, $N_2$ = 1000 corresponds to photons with a Lyman-$\alpha$ scattering cross-section of $10^{-16}\ \rm{cm^2}$ in a 10 parsec box and a hydrogen density of $\sim 1.7 \times 10^{-26}\ \rm{ g\ cm^{-3}}$.
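The rescaled coefficients above preserve two invariants by construction: the random-walk dispersion radius $\sqrt{N}\,\lambda$ and the product $L \times \kappa_\nu$ that fixes the absorption probability. A minimal sketch with illustrative opacity values:

```python
import numpy as np

def rescale_opacities(kappa_s, kappa_a, R, N2=1000):
    """Rescale scattering and absorption coefficients [cm^-1] so a photon
    scatters at most ~N2 times while diffusing across a region of size R [cm];
    optically thin inputs are returned unchanged."""
    N1 = (R * kappa_s) ** 2          # expected scatterings to diffuse a distance R
    if N1 <= N2:
        return kappa_s, kappa_a
    return np.sqrt(N2) / R, R * kappa_s * kappa_a / np.sqrt(N2)

R = 3.086e19                             # 10 pc cell width [cm]
kappa_s1, kappa_a1 = 1.0e-17, 2.0e-20    # illustrative coefficients [cm^-1]
kappa_s2, kappa_a2 = rescale_opacities(kappa_s1, kappa_a1, R)

# path lengths L = N / kappa_s in the original and rescaled problems
L1 = (R * kappa_s1) ** 2 / kappa_s1
L2 = 1000.0 / kappa_s2
```

Both invariants can be verified directly: $L_1 \kappa_{a,1} = L_2 \kappa_{a,2}$, and $\sqrt{N_2}/\kappa_{s,2}$ recovers the cell size $R$.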
\subsection{High Mass X-Ray Binaries}
\label{sec:HMXB}
\begin{figure}
\begin{center}
\includegraphics[scale=.37]{2black-40.pdf}
\caption{Multi-color disk (MCD) and power law (PL) components of a SED from a 40 ${\rm M}_\odot$ black hole accreting at the Eddington limit. Absorption at wavelengths above 200~\AA\ attenuates the power law considerably. A re-normalized $z=15.3$ \citet{2013ApJ...776L..31F} intrinsic HMXB spectrum is shown for comparison.}
\label{fig:black3}
\end{center}
\end{figure}
Armed with theoretical spectra of Population III stars and a generalized emission line routine, we extend our investigation to the impact of high mass X-ray binaries.
\subsubsection{Luminous X-Ray Binary Fraction Simulation}
Self-consistent formation of metal-free binary stars is outside the scope of the cosmological simulation, so we attempt to set reasonable bounds on the parameters associated with their population using a semi-analytical treatment. We simulate the life cycle of a burst of Population III stars with masses sampled from the Population III IMF (Equation \ref{eq:IMF}), mass-dependent lifetimes from \citet{2002A&A...382...28S}, and IMF-dependent stellar endpoints \citep{2003ApJ...591..288H}, including the possibility that no remnant is left in the case of a pair-instability supernova. To even out statistical noise, we simulate a metal-free starburst of 36,050 systems assuming scenarios where half, 25\%, and 10\% of systems are formed as ``close binaries." That is, the stars are close enough that should the shortest-lived member form a black hole or neutron star at the end of its life, the longer-lived member will accrete its mass onto the remnant and thus form an X-ray binary. The number of systems comes from the integration of a star formation rate that peaks at one million years and peters out exponentially over the next four million years. We note that population synthesis models of HMXBs \citep{2013ApJ...764...41F} describe an inverse relationship between metallicity and HMXB number, as weaker stellar winds in low-metallicity gas promote the production of larger remnants and less angular momentum loss in tight binary orbits, implying a higher fraction of HMXBs than observed at lower redshift.
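The bookkeeping described above can be sketched as a toy Monte Carlo. Everything below is a deliberately simplified stand-in: the power-law mass-lifetime relation and the pair-instability remnant window are illustrative assumptions, not the Schaerer (2002) and Heger \& Woosley (2003) prescriptions used in the actual calculation, and both binary members are drawn independently from the IMF.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_imf(n, x=1.3, beta=1.6, mc=40.0, m_lo=1.0, m_hi=300.0):
    """Rejection-sample stellar masses from the Pop III IMF."""
    g = lambda m: m ** -x * np.exp(-((mc / m) ** beta))
    gmax = g(mc * (beta / x) ** (1.0 / beta))   # density at the analytic peak
    out = np.empty(0)
    while out.size < n:
        m = rng.uniform(m_lo, m_hi, 4 * n)
        out = np.concatenate([out, m[rng.uniform(0.0, gmax, 4 * n) < g(m)]])
    return out[:n]

def lifetime_myr(m):
    # hypothetical power-law stand-in for the mass-lifetime relation
    return 100.0 * (m / 5.0) ** -1.5

def leaves_remnant(m):
    # no compact remnant inside an illustrative pair-instability window
    return ~((m >= 140.0) & (m <= 260.0))

def n_xrb(m1, m2, close, t_myr):
    """Count close binaries active as X-ray binaries at time t: the primary
    has died leaving a remnant while the secondary still shines."""
    primary = np.maximum(m1, m2)
    secondary = np.minimum(m1, m2)
    active = (lifetime_myr(primary) < t_myr) & (lifetime_myr(secondary) > t_myr)
    return int(np.sum(close & active & leaves_remnant(primary)))

n_pairs = 18025                        # half of the 36,050 sampled systems
m1, m2 = sample_imf(n_pairs), sample_imf(n_pairs)
idx = np.arange(n_pairs)
close_10 = idx < int(0.10 * n_pairs)   # 10% close-binary scenario
close_50 = idx < int(0.50 * n_pairs)   # 50% close-binary scenario
```

With the close-binary flags nested by construction, the 50\% scenario necessarily hosts at least as many X-ray binaries as the 10\% scenario at any epoch, mirroring the scaling seen in Fig. \ref{fig:mbinary}.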
We further assume \citet{1926ics..book.....E} mass accretion rates for luminous compact objects and recalculate the lifetime and remnant type of stars as they lose mass. This assumption places a lower bound on the duration of HMXBs in the calculation. These scenarios are presented in Fig. \ref{fig:mbinary}, which shows that the persistence of X-ray binary systems is largely a function of lucky sampling of the IMF. The maximum number of X-ray binaries in the 50\% calculation is $\sim 6.5$ times that in the 10\% calculation in this example, but this too is subject to the whims of random sampling for any individual halo. For context, target galaxies in the $Chandra$ Deep Field South survey have been shown to emit an X-ray luminosity (2-10 keV) from HMXBs of $\sim 3 \times 10^{39}\ \rm{erg\ s^{-1}}$, or about two continuous 40 $\rm{M_\odot}$ HMXBs per $\rm{M_\odot\ yr^{-1}}$ of star formation \citep{2016ApJ...825....7L}, which grows to $\sim 3 \times 10^{40}\ \rm{erg\ s^{-1}}$ at $z=10$ \citep{2017ApJ...840...39M}. Our Population III bursts typically have 25 Myr averaged star formation rates that peak at $\sim 0.01\ \rm{M_\odot\ yr^{-1}}$, which implies that our average halo would have about one HMXB. The halo with the highest peak star formation rate corresponds to approximately 15 continuous HMXBs, but this is exaggerated by the top-heavy IMF of metal-poor stars.
For this study, we are less interested in the precise global fraction of HMXBs and more interested in whether they plausibly exist and therefore warrant study as a possible source of high-energy photons in the SEDs of galaxies hosting metal-free stars. From our rough calculation, we conclude that HMXBs are plausible in any halo that once contained two or more metal-free stars, from a few million years after the initial burst until about 17 Myr. Since our sample of halos focuses on galaxies with mixed populations of metal-free and metal-enriched stars, we reason that most of our galaxies fall in the region of the predicted HMXB distribution corresponding to 4-17 Myr after the starburst, and indeed most halos have small populations of lower-mass, longer-lived metal-free stars. For that scenario, we convert as many as two of the Population III star particles into HMXBs. If the maximum age of the metal-free star particles in the halo is less than 2 Myr, we do not convert any of them into HMXBs because those stellar systems are too young to contain a compact object. We note that HMXBs are possible in metal-enriched stellar populations, but with lower frequency due to the propensity of metal-enriched gas to fragment and form less massive stars and fewer compact objects. We therefore do not include metal-enriched HMXBs in our study.
\subsubsection{X-Ray Binary Spectra}
\begin{figure}
\begin{center}
\includegraphics[scale=.35]{stellarplt.pdf}
\caption{Plot of total metal-free stellar mass versus total halo mass tinted by metal-free stellar fraction of the total stellar mass.}
\label{fig:stellarplt}
\end{center}
\end{figure}
For simplicity, and because of the uncertainty in the Population III IMF, we assume a black hole mass equal to the simulation's characteristic mass of 40 ${\rm M}_\odot$ when calculating its spectrum. We assume a radiative efficiency of 0.1 and that the emission is equally distributed between a multi-color accretion disk and a power law of the form $\dot{E} \propto E^{-1.7}$, where $E$ is in units of eV.
For the multi-color disk we use the temperature profile from \citet{2003ApJ...597..780E} given by
\begin{equation}
T_{\rm{eff}} = \left[\frac{3GM\dot{M}}{8 \pi \sigma_{\rm{SB}} r^3} \left(1-\sqrt{\frac{r_{\rm{in}}}{r}}\right)\frac{r_{\rm{in}}}{r} \right]^{1/4},
\end{equation}
where $\sigma_{\rm{SB}}$ is the Stefan-Boltzmann constant and the innermost radius, $r_{\rm{in}}$, is set to six gravitational radii. We also apply the correction $T_{\rm{col}} = 1.7\ T_{\rm{eff}}$ due to the Comptonization of the disk and calculate color temperatures out to 5000 gravitational radii. A black body distribution is calculated for each temperature and weighted by the annular area factor $2 \pi r \Delta r$. The resulting distribution is finally normalized to half the Eddington luminosity, which is given by
\begin{equation}
L_{\rm{edd}} = \frac{4 \pi G M m_{\rm{p}} c}{\sigma_{\rm{T}}},
\end{equation}
where $m_{\rm{p}}$ is the mass of a proton and $\sigma_{\rm{T}}$ is the Thomson scattering cross section. We also apply hydrogen and helium absorption to the power law assuming primordial abundances and assign the other half of the black hole's luminosity to the absorbed result. For absorption, we assume a neutral hydrogen column density of $\log[N_{\rm{HI}}/\rm{cm^{-2}}]=20$ due to the accumulation of material in the vicinity of the accretion disk as the star is quickly disrupted and consumed. The assumption of sub-grid neutral hydrogen absorption does not strongly affect the amount of ionizing radiation fed through the medium in most halos due to the presence of strong flux at those wavelengths from Population III stars. The resulting intrinsic spectrum for a 40 ${\rm M}_\odot$ black hole is shown in Fig. \ref{fig:black3}, excluding the spectrum of the companion star. An intrinsic HMXB spectrum from \citet{2013ApJ...776L..31F}, calculated from a stellar population synthesis model for metal-enriched X-ray binaries, shows roughly similar features.
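The disk model described above can be sketched numerically as follows, a minimal implementation assuming SI constants, the Stefan-Boltzmann constant in the temperature profile (the dimensionally consistent choice) and the Thomson cross section in $L_{\rm edd}$. The ring count and logarithmic frequency grid are arbitrary choices for this sketch:

```python
import math

G, C, H_P = 6.674e-11, 2.998e8, 6.626e-34            # SI constants
K_B, SIGMA_SB, SIGMA_T = 1.381e-23, 5.670e-8, 6.652e-29
M_P, MSUN = 1.673e-27, 1.989e30

def eddington_luminosity(m_bh_kg):
    """L_edd = 4 pi G M m_p c / sigma_T, in watts."""
    return 4.0 * math.pi * G * m_bh_kg * M_P * C / SIGMA_T

def disk_teff(r, m_bh_kg, mdot, r_in):
    """Effective temperature profile quoted in the text (SI units)."""
    base = 3.0 * G * m_bh_kg * mdot / (8.0 * math.pi * SIGMA_SB * r ** 3)
    return (base * (1.0 - math.sqrt(r_in / r)) * (r_in / r)) ** 0.25

def disk_spectrum(m_bh_msun=40.0, eta=0.1, n_rings=200, n_freq=200):
    """Sum Planck curves over logarithmic annuli from 6 to 5000
    gravitational radii, weighted by 2 pi r dr, with T_col = 1.7 T_eff.
    Returns (frequencies in Hz, unnormalized spectrum); normalization
    to L_edd / 2 would be applied separately."""
    m_bh = m_bh_msun * MSUN
    mdot = eddington_luminosity(m_bh) / (eta * C ** 2)   # Eddington-rate inflow
    r_g = G * m_bh / C ** 2
    r_in, r_out = 6.0 * r_g, 5000.0 * r_g
    nus = [10 ** (15.0 + 4.0 * i / n_freq) for i in range(n_freq)]
    spec = [0.0] * n_freq
    for j in range(1, n_rings + 1):
        r_prev = r_in * (r_out / r_in) ** ((j - 1) / n_rings)
        r = r_in * (r_out / r_in) ** (j / n_rings)
        t_col = 1.7 * disk_teff(r, m_bh, mdot, r_in)
        w = 2.0 * math.pi * r * (r - r_prev)             # annular area weight
        for k, nu in enumerate(nus):
            x = H_P * nu / (K_B * t_col)
            if x < 500.0:                                # avoid overflow
                spec[k] += w * 2.0 * H_P * nu ** 3 / C ** 2 / math.expm1(x)
    return nus, spec
```

The resulting curve rises from low frequencies and cuts off exponentially above the hottest colour temperature, as expected for a multi-colour disk.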
\subsection{Spectral Energy Distribution Analysis}
The filtering and imaging routines are effectively the same as those discussed in \citet{2017MNRAS.469.4863B}. To summarize, we calculate flux using $JWST$ and $HST$ filter throughputs to integrate the processed SED after applying cosmological corrections as a function of redshift. Images are created by summing photons intersecting a distant plane using {\sc Hyperion} and then applying noise, a Gaussian blur, and the telescope's resolution through an instrument prescription.
For band fluxes, the equations are
\begin{equation}
d_{\rm{L}} \ = \frac{c (1+z)}{H_{\rm{0}}}\int_0^z \frac{dz'}{\sqrt{\Omega_{\rm{M,0}} (1+z')^3 + \Omega_{\rm{\Lambda,0}}}}
\end{equation}
\begin{equation}
f(\nu_{\rm{0}}) = \frac{1}{4 \pi d_{\rm{L}}^2} \int_0^{\infty} \frac{L_{\nu}(\nu_{\rm{e}})}{\nu_{\rm{e}}} R(\nu_e) d\nu_{\rm{e}},
\end{equation}
where $R(\nu_e)$ is the redshift-transformed filter response as a function of the emitted frequency and all other variables take their standard definitions. We leave any further cosmological and instrumental adjustments like aperture and surface brightness to the reader.
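A minimal numerical sketch of these two equations, assuming a flat $\Lambda$CDM cosmology and simple trapezoidal integration on discrete grids (the grid resolutions and toy inputs are illustrative):

```python
import math

def luminosity_distance(z, h0=70.0, omega_m=0.3, omega_l=0.7, n=10000):
    """Luminosity distance in Mpc by trapezoidal integration of the
    flat-LCDM comoving integral in the first equation above."""
    c_km_s = 2.998e5
    def e_inv(zp):
        return 1.0 / math.sqrt(omega_m * (1.0 + zp) ** 3 + omega_l)
    dz = z / n
    integral = 0.5 * (e_inv(0.0) + e_inv(z))
    for i in range(1, n):
        integral += e_inv(i * dz)
    integral *= dz
    return (c_km_s / h0) * (1.0 + z) * integral

def band_flux(nu_e, l_nu, response, d_l_cm):
    """Filter-weighted flux from the second equation above: integrate
    L_nu R(nu)/nu over emitted frequency (trapezoid rule)."""
    total = 0.0
    for i in range(len(nu_e) - 1):
        f0 = l_nu[i] * response[i] / nu_e[i]
        f1 = l_nu[i + 1] * response[i + 1] / nu_e[i + 1]
        total += 0.5 * (f0 + f1) * (nu_e[i + 1] - nu_e[i])
    return total / (4.0 * math.pi * d_l_cm ** 2)
```

For the parameters above, $d_{\rm L}(z=15)$ comes out near 165 Gpc, which sets the extreme faintness of these sources.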
\begin{figure*}
\begin{center}
\includegraphics[scale=.26]{scattering-23951-0.pdf}
\caption{Top row: Integral of density-weighted mean density (left), density-weighted mean temperature (middle), and density-weighted mean metallicity (right). Bottom row: Integrated total emission (left), scattering per unit area (middle), and integrated dust emission (right). The location of HMXBs are shown as diamonds. Circles are metal-enriched stellar clusters. Star markers are metal-free stars. Subhalos A1 and A2 are labelled in the top left plot for reference.}
\label{fig:scat}
\end{center}
\end{figure*}
\section{Results}
\label{sec:resul}
There are 146 halos in the ``rare peak'' zoom-in region of the Renaissance Simulations with active metal-free stellar populations. As shown in Fig. \ref{fig:stellarplt}, most of these halos are small, with a mean halo mass of only $3.40 \times 10^7\ {\rm M}_\odot$, owing to the tendency for these stars to form and die soon after a halo first cools and forms molecular clouds early in its evolution. Unfortunately, small halos imply small clusters of metal-free stars and low luminosity, reducing the chance of a direct observation by any telescope planned for the near future. However, mergers can sometimes mix stellar populations and generate scenarios where larger and brighter halos are influenced by ionizing photons from Population III stars and X-ray binaries, creating an opportunity to indirectly observe these objects sooner. More rarely, a Population III starburst may occur, forming a relatively bright ``Population III galaxy'' \citep[e.g.][]{2009MNRAS.399...37J,2012AIPC.1480..101Z}.
Our simulation shows an example of both scenarios, so we dedicate the first part of our discussion to those two specific halos as well as to the machinery of our radiative transfer pipeline. We then explore the emission line trends and spectra of the full sample before finally presenting our photometric results.
\subsection{Stellar Population Merger Scenario}
\label{sec:merger}
\begin{figure*}
\begin{center}
\includegraphics[scale=.35]{phase-23951-0.pdf}
\caption{Top row: The spectra of Halo A shown before the application of lines (dashed), with lines from gas in close proximity to the star (thick line), and after extinction (thin line). Here the dashed and thick lines appear nearly identical at the chosen scale. The plot on the top right shows the mean emission wavelength and power of the gas as a function of gas density. Bottom row: The left-hand plot shows the ratio of emission to the nearest wavelength in the intrinsic spectrum. Some noise is present due to the relative coarseness of the intrinsic spectrum. The plot on the bottom right is the mean difference between the mean emission wavelength and the mean absorption wavelength for the combination of both gas and dust.}
\label{fig:phase}
\end{center}
\end{figure*}
\begin{figure*}
\begin{center}
\includegraphics[scale=.26]{scattering-22155-0.pdf}
\caption{A compact Population III stellar cluster plotted in the same manner as Fig. \ref{fig:scat}. The location of HMXBs are shown as red circles while individual Population III stars are shown in white. Cyan circles are metal-enriched stellar clusters.}
\label{fig:scat2}
\end{center}
\end{figure*}
\begin{figure*}
\begin{center}
\includegraphics[scale=.35]{phase-22155-0.pdf}
\caption{Halo B plotted in the same manner as Fig. \ref{fig:phase}.}
\label{fig:phase2}
\end{center}
\end{figure*}
As described in Table \ref{tab:Halolist}, Halo A has a total mass of $1.30 \times 10^{8}\ {\rm M}_\odot$, a metal-enriched stellar population of $5.47 \times 10^{5}\ {\rm M}_\odot$, and two Population III stars totalling $6\ \rm{M}_\odot$, which we treat as high-mass X-ray binaries. The halo is composed of a compact, dense ($> 10^{-23}\ \rm{g\ cm^{-3}}$) clump (sub-halo A1) merging with a larger, lower density clump (sub-halo A2), as shown in the density-weighted projections in the top row of Fig. \ref{fig:scat}. Sub-halo A1 hosts cool, metal-enriched gas while sub-halo A2 is on average an order of magnitude hotter and hosts both metal-enriched and metal-free gas. The hottest gas ($T > 3 \times 10^4$ K) is concentrated in the CGM and in a supernova remnant to the right of the sub-halos as viewed in the figure. Sub-halo A2 contains the metal-free stars and by extension the HMXBs.
We integrate emission and scattering per unit volume from dust and gas along the projection axis used to produce the density, temperature, and metallicity figures, resulting in plots of integrated specific luminosity in units of $\rm{erg\ s^{-1}cm^{-2}}$, which is proportional to, but not equivalent to, flux. Under the assumption of local thermodynamic equilibrium, the combination of dense, cool, and metal/dust-rich gas in sub-halo A1 results in thermal emission of the order of $10^{-8}$ to $10^{-7}\ \rm{erg\ s^{-1}cm^{-2}}$. The emission contribution from dust peaks at $\sim 4 \times 10^{-10}\ \rm{erg\ s^{-1}cm^{-2}}$ within a burst of metal-enriched stars in close proximity to the HMXBs. Though this region has a lower metallicity than the peaks found in A1 as well as a lower density of dust, warmer temperatures contribute to a higher overall dust emission.
To estimate scattering energy, we calculate the mean absorption-weighted albedo using the intrinsic stellar spectra and the density fraction of the constituents in the relationship
\begin{equation}
\langle \alpha \rangle_{x} = \frac{\int \kappa_{\nu,x} I_\nu \alpha_{\nu,x} d \nu}{\int \kappa_{\nu,x} I_\nu d \nu}
\end{equation}
\begin{dmath}
\langle \mathscr{E} \rangle_{\rm{scattering}} \approx f_{\rm{gas+metals}}\mathscr{E}_{\rm{gas+metals}} \frac{\langle \alpha \rangle_{\rm{gas+metals}}}{1-\langle \alpha \rangle_{\rm{gas+metals}}} + f_{\rm{dust}}\mathscr{E}_{\rm{dust}} \frac{\langle \alpha \rangle_{\rm{dust}}}{1-\langle \alpha \rangle_{\rm{dust}}},
\end{dmath}
where $I_\nu$ is the incident frequency-dependent flux. We find that the scattering in sub-halo A1 is limited to less than $10^{-9}\ \rm{erg\ s^{-1}cm^{-2}}$ while scattering in sub-halo A2 is of the order of $10^{-9}$ to $10^{-7}\ \rm{erg\ s^{-1}cm^{-2}}$. Taken together, this implies that the reprocessing of the intrinsic spectra in A1 is absorption- and emission-dominated while the reprocessing of the high-mass X-ray binaries in A2 is a roughly equally-weighted combination of gas emission, absorption, and scattering.
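The albedo-weighting and scattering estimate above reduce to simple weighted integrals. A sketch on discrete frequency grids (trapezoid rule; the grids and inputs are illustrative):

```python
def weighted_albedo(nu, kappa, intensity, albedo):
    """Absorption-weighted mean albedo <alpha>_x from the first
    equation above, evaluated on a discrete frequency grid."""
    num = den = 0.0
    for i in range(len(nu) - 1):
        w0 = kappa[i] * intensity[i]
        w1 = kappa[i + 1] * intensity[i + 1]
        dnu = nu[i + 1] - nu[i]
        num += 0.5 * (w0 * albedo[i] + w1 * albedo[i + 1]) * dnu
        den += 0.5 * (w0 + w1) * dnu
    return num / den

def scattering_energy(f_gas, e_gas, a_gas, f_dust, e_dust, a_dust):
    """Approximate scattered energy from the second equation above:
    each absorbed packet scatters a/(1-a) times on average."""
    return (f_gas * e_gas * a_gas / (1.0 - a_gas)
            + f_dust * e_dust * a_dust / (1.0 - a_dust))
```

The $a/(1-a)$ factor is the expectation of a geometric series of scatterings, which is why a highly reflective medium can scatter more energy than it absorbs.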
\begin{figure*}
\begin{center}
\includegraphics[scale=.38]{savelines-4.pdf}
\hfill
\includegraphics[scale=.38]{commonplt.pdf}
\caption{Left: Four most common emission lines amongst halos in our simulation with active Population III stars (left bar) and only metal-enriched stars (right bar). Right: Emission versus gas temperature normalized by the maximum emission of each line calculated for a fixed density and metallicity of $10^{-25}\ \rm{g\ cm^{-3}}$ and 0.1 $\rm{Z_\odot}$ respectively. C IV refers to the $\lambda\lambda1548,1551$ UV doublet and Ca II refers to the $\lambda\lambda\lambda8498,8542,8662$ IR triplet.}
\label{fig:lines}
\end{center}
\end{figure*}
As shown in Fig. \ref{fig:phase} (top left and bottom left), there are few differences between the intrinsic spectrum and the spectrum with local emission lines since the regions used to calculate those lines are compact. However, the impact of extinction through the rest of the ISM is very pronounced. High-energy photons from the HMXB are reprocessed into emission lines by the metal-enriched gas while hydrogen-ionizing radiation is absorbed and reprocessed into IR photons. Lyman-Werner absorption is also pronounced except at Ly$\alpha$, where a high equivalent-width line forms from excitations in neutral hydrogen within sub-halo A1. Thus, the spectrum of Halo A demonstrates the signature of a cool metal-rich halo merging with a warm metal-free halo.
The components contributing to the reddening of the spectra are further demonstrated when we segregate the emission from gas and plot it against density and mean emission wavelength (Fig. \ref{fig:phase}, top right). The hottest gas ($>10^{7}$ K) forms an artificial ridge at $\sim 5.3\ \AA$ at the limit of our calculation of the emission profile (Fig. \ref{fig:plotfour}) and a second ridge at $1.14 \times 10^{3}\ \AA$ at our minimum temperature of 316 K.
Between these two bounds we see that the lowest-density gas tends to be hotter and therefore more luminous per unit mass, as we would expect. These bins correspond to the large number of emission lines seen at the high-energy end of the post-extinction spectra. However, most of the emission comes from the cooler, medium-density gas in A1. As mentioned, dust emission is weak but present throughout the various density and temperature environments in the halo, and therefore we see dust contributing to the spectra over the entire range of wavelengths simulated. The reddening plot (Fig. \ref{fig:phase}, bottom right) shows the relative change in wavelength between absorption and emission. Like the scattering plot (Fig. \ref{fig:scat}, bottom center), the absorption wavelength is an absorption-weighted mean using the intrinsic spectrum. Therefore the mean change is
\begin{equation}
\langle \lambda\rangle_{\rm{absorption},x} = \frac{\int \kappa_{\lambda,x} I_\lambda \lambda d \lambda}{\int \kappa_{\lambda,x} I_\lambda d \lambda},
\end{equation}
\begin{equation}
\langle \lambda\rangle_{\rm{emission},x} = \frac{\int j_{\lambda,x} \lambda d \lambda}{\int j_{\lambda,x} d \lambda},
\end{equation}
\begin{dmath}
\Delta \lambda \approx f_{\rm{gas+metals}}\mathscr{E}_{\rm{gas+metals}} (\langle \lambda \rangle_{\rm{emission},\rm{gas+metals}}- \langle \lambda \rangle_{\rm{absorption},\rm{gas+metals}}) + f_{\rm{dust}}\mathscr{E}_{\rm{dust}} (\langle \lambda \rangle_{\rm{emission},\rm{dust}}- \langle \lambda \rangle_{\rm{absorption},\rm{dust}}).
\end{dmath}
The relative change, $\Delta \lambda/\langle \lambda \rangle_{\rm{absorption}}$, shows three distinct phenomena. At the low-density end, we see that the hot gas has the propensity to either increase or decrease the wavelength of the spectra for mean absorption wavelengths $\sim 600\ \AA$. For moderate densities, all the gas contributes to the reddening of the spectra, and at high densities the contribution from high-density pockets of cool gas is seen to cause low-power reddening. Since the analysis of emission and absorption is drawn analytically from bulk characteristics, it does not paint a complete picture of the reprocessing of the spectra. The impact of scattering and iterative energy balance in the Monte Carlo calculation, as well as the precise three-dimensional distribution of gas, metals, and dust, accounts for the difference between the intuition gained from the phase and projection plots and the fully simulated result in Fig. \ref{fig:phase} (top left). However, we predict the signature of the merger scenario to be markers of very high and relatively low energy gas and metals within the same halo.
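The weighted-mean wavelengths entering $\Delta \lambda$ can be evaluated the same way as the other weighted integrals in this section. `mean_wavelength` below is a hypothetical helper that takes $\kappa_\lambda I_\lambda$ (absorption) or $j_\lambda$ (emission) as its weight array:

```python
def mean_wavelength(lam, weight):
    """Weighted mean wavelength on a discrete grid (trapezoid rule);
    pass kappa*I as the weight for absorption and j for emission,
    following the equations above."""
    num = den = 0.0
    for i in range(len(lam) - 1):
        dlam = lam[i + 1] - lam[i]
        num += 0.5 * (weight[i] * lam[i] + weight[i + 1] * lam[i + 1]) * dlam
        den += 0.5 * (weight[i] + weight[i + 1]) * dlam
    return num / den

def relative_reddening(lam_abs, lam_em):
    """Relative wavelength change Delta_lambda / <lambda>_absorption."""
    return (lam_em - lam_abs) / lam_abs
```

A positive value indicates net reddening of the absorbed energy; a negative value indicates a net shift to shorter wavelengths.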
\subsection{Population III Galaxy Scenario}
\label{sec:pop3}
\begin{figure*}
\begin{center}
\includegraphics[scale=.35]{savelines-2_0.pdf}
\caption{Top row: Total luminosity of Ly$\alpha$ (left) and H-$\alpha$ (right) tinted by the ratio of the mass of Population III stars to the total stellar mass. Bottom row: C IV doublet (left) and Ca II triplet (right) luminosities tinted by mean stellar age, inclusive of both metal-free and metal-enriched stars. C IV is a product of a hot plasma and a high ionizing flux. Ca II lines have been examined in the broad-line regions of AGN and in stellar atmospheres.}
\label{fig:lines2}
\end{center}
\end{figure*}
\begin{figure*}
\begin{center}
\includegraphics[scale=.35]{savelines-0_0.pdf}
\caption{Same as Fig. \ref{fig:lines2} but showing equivalent width rather than luminosity. Ly$\alpha$ equivalent width shows a negative log-log relationship with increasing stellar mass as a product of a nearly flat luminosity-mass relationship.}
\label{fig:lines3}
\end{center}
\end{figure*}
Halo B has a total mass of $2.30 \times 10^{7}\ {\rm M}_\odot$ and 9144 ${\rm M}_\odot$ of metal-enriched stars with an intrinsic bolometric luminosity of $4.38 \times 10^6\ {\rm L}_\odot$. There are an additional 30 Population III stars totalling 712 ${\rm M}_\odot$. Once again, we convert two of those stars into HMXBs with a 40 ${\rm M}_\odot$ compact object accreting at the Eddington rate. Combined, the HMXBs and Population III stars have an intrinsic bolometric luminosity of $4.79 \times 10^6\ {\rm L}_\odot$, of which the HMXBs contribute $2.72 \times 10^6\ {\rm L}_\odot$, mostly in soft and hard X-rays. Thus the intrinsic spectrum is dominated by HMXBs in the X-ray, metal-free stars in the hydrogen-ionizing UV band, and metal-enriched stars at longer wavelengths.
As shown in Fig. \ref{fig:scat2}, all of the metal-free stars reside in the sole clump, a region of high-density ($> 10^{-22}\ \rm{g\ cm^{-3}}$), high-metallicity, and relatively cool ($<10^4$ K) gas. The range of temperatures is particularly remarkable because it implies that this cluster of Population III stars, the second largest in the entire simulation, has yet to undergo significant heating from ionizing radiation or a disruptive supernova. The cool gas results in absorption-dominated reprocessing of the spectra, and low emissive power renders dust essentially irrelevant for our radiative transfer routines in this halo. The metal-enriched stars are embedded within the cluster of metal-free stars and have a mean age of only $3.5 \times 10^5$ yr and a mean metallicity of $0.17\ {\rm Z}_\odot$ after quickly being triggered by the death of a Population III star in the cool medium.
The simple structure of Halo B results in two distinct signatures in the spectra, as seen in Fig. \ref{fig:phase2} (top left and bottom left). While Halo A reprocessed emission from the HMXB into emission lines at roughly the same wavelength, we see that Halo B absorbs those photons and re-emits them as hydrogen-ionizing radiation. Also unlike Halo A, ionizing radiation is a strong component of both the intrinsic and reprocessed spectra, and it is stronger in the reprocessed spectrum than in the intrinsic one ($f_{\rm{esc}} >1$). Fig. \ref{fig:phase2} (top right) confirms that all of the emission is confined to the UV. The estimated mean change in wavelength due to absorption and re-emission (bottom right) is a reddening of as much as a factor of 7 in $\Delta \lambda/\lambda$.
Of the emission lines, Ly$\alpha$ is again the most prominent UV feature and has about the same luminosity as the Ly$\alpha$ emission in Halo A despite a bolometric luminosity lower by about an order of magnitude. This implies that Ly$\alpha$ alone cannot be used to distinguish between these two scenarios, but the presence of H-$\alpha$ and the Ca II $\lambda\lambda\lambda8498,8542,8662$ triplet in the final spectrum, both products of the cooler gas in Halo B, can.
\subsection{Emission Lines}
Since emission lines are generated as part of a Monte Carlo calculation, no \textit{a priori} knowledge of the continuum emission can be used to identify the lines or calculate their equivalent widths. Therefore we subject the SEDs to a series of tests to determine the presence, source, and peak wavelengths of the lines based on the first and second derivatives of the continuum. This includes a check that eliminates line candidates with equivalent widths below 0.75 \AA\ unless the peak is two or more times the continuum, which has the effect of removing weak but present features from our study, so our counts should be interpreted as lower bounds. We also restrict our line catalogues to rest wavelengths greater than the Lyman limit.
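A sketch of a line finder with the equivalent-width and peak-ratio cuts described above. The continuum estimate here (linear interpolation between the local minima bracketing each peak) is an illustrative stand-in for our derivative-based tests, not the pipeline's actual implementation:

```python
def find_emission_lines(lam, flux, ew_min=0.75, peak_ratio=2.0):
    """Scan for local maxima, interpolate a linear pseudo-continuum
    between the bracketing local minima, and keep candidates whose
    equivalent width exceeds ew_min (Angstrom) or whose peak exceeds
    peak_ratio times the continuum."""
    lines = []
    for i in range(1, len(flux) - 1):
        if not (flux[i] > flux[i - 1] and flux[i] >= flux[i + 1]):
            continue
        lo = i
        while lo > 0 and flux[lo - 1] < flux[lo]:   # walk to the left minimum
            lo -= 1
        hi = i
        while hi < len(flux) - 1 and flux[hi + 1] < flux[hi]:  # right minimum
            hi += 1
        span = lam[hi] - lam[lo]
        if span <= 0:
            continue
        ew = 0.0
        for j in range(lo, hi):                     # EW against linear continuum
            cont = flux[lo] + (lam[j] - lam[lo]) / span * (flux[hi] - flux[lo])
            if cont > 0:
                ew += (flux[j] / cont - 1.0) * (lam[j + 1] - lam[j])
        cont_peak = flux[lo] + (lam[i] - lam[lo]) / span * (flux[hi] - flux[lo])
        if ew >= ew_min or (cont_peak > 0 and flux[i] >= peak_ratio * cont_peak):
            lines.append(lam[i])
    return lines
```

Because weak features are culled by the cuts, counts produced this way are lower bounds, matching the caveat in the text.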
The most common emission lines in order of their frequency within our sample are the Ly$\alpha$ $\lambda1216$ line, the C IV $\lambda\lambda1548,1551$ doublet, the Balmer $\alpha$ (H-$\alpha$) line, and the Ca II $\lambda\lambda\lambda8498,8542,8662$ triplet, which appear in the spectra of 90\%, 51\%, 26\%, and 22\% of the 146 halos respectively. To construct a control group, we matched the intrinsic $JWST$ NIRCam F277W wideband flux of a half-sample of halos with metal-free stars to the intrinsic flux of 73 halos without metal-free stars or HMXBs within the ``rare-peak'' simulation at the same redshift. Within that group, Ly$\alpha$ was detectable in 90\%, the C IV $\lambda\lambda1548,1551$ doublet in 41\%, H-$\alpha$ in 7\%, and Ca II $\lambda\lambda\lambda8498,8542,8662$ in 27\%, as shown in Fig. \ref{fig:lines} (left). This implies that H-$\alpha$ is significantly more rare or weak in the spectra of our control group than in our sample of halos with metal-free stars and HMXBs.
\subsubsection{Lyman-$\alpha$}
Ly$\alpha$ emission is prominent in gas with temperatures between $10^4$ and $10^5$ K. Because these temperatures are associated with warm H~{\sc II} regions and Population III stars that are generally short-lived and embedded in their birth clouds, strong Ly$\alpha$ emission is a feature of most of our SEDs. Unlike the other common emission lines, we see fairly consistent Ly$\alpha$ luminosity between $10^{36}$ and $10^{38}$ $\rm{erg\ s^{-1}}$ in halos with high and low fractions of Population III stars as well as high and low masses of metal-enriched stars (Fig. \ref{fig:lines2}, top left). Consequently, there is a mostly well-correlated inverse relationship between equivalent width and both total stellar mass and Population III fraction (Fig. \ref{fig:lines3}, top left). As shown in \cite{2017MNRAS.469.4863B}, the mass-weighted mean age of metal-enriched stars in halos with total stellar masses between $10^5$ and $10^6$ ${\rm M}_\odot$ begins to settle into a narrow range as the H~{\sc II} regions around stars no longer encompass the entire halo. This allows star formation to transition from a series of bursts to a pattern of continuous formation. While emission continues from star-forming regions, the average stellar cluster is older and divorced from its birth molecular cloud, and thus Ly$\alpha$ emission does not scale with stellar mass. Secondarily, the extinction cross section of Ly$\alpha$ photons in neutral hydrogen is high, implying that scattering through dense gas may attenuate emission along any line of sight.
While our prior treatment shows that the inverse relationship between intrinsic Ly$\alpha$ equivalent width (EW) and total stellar mass is generally extensible to metal-enriched stellar populations in similar environments, below $10^4$ ${\rm M}_\odot$ the pattern is exclusive to starbursts, as several halos have had their star formation extinguished and have low intrinsic Ly$\alpha$ EW. For our sample, halos with high ratios of metal-free stellar mass to total stellar mass consistently exhibit the highest Ly$\alpha$ EWs ($>5$ \AA) after extinction.
\subsubsection{C IV $\lambda\lambda1548,1551$}
C IV UV emission lines are an intrinsic feature of active galactic nuclei \citep[AGN; see review by][]{2000A&ARv..10...81V}. Their inclusion in our results demonstrates the consideration of line transfer from high-energy interactions to lower-energy photons in our calculations. Due to the ionization energies of carbon, C IV $\lambda\lambda1548,1551$ emission requires temperatures between $10^5$ and $10^6$ K. In this temperature range, absorption cross sections are highest for X-rays in metal-enriched gas, so the presence of high-energy sources like HMXBs directly impacts the prominence of this line. The occurrence of the C IV UV emission lines in approximately half of the halos in our sample is therefore partially a product of our decision to affix an HMXB to almost all of our halos.
In environments with low-metallicity ISMs and CGMs like those we see in our sample, X-ray escape fractions are generally high, so despite a fixed number of HMXBs, the doublet's luminosity grows proportionally to the mass of the halo, as seen in Fig. \ref{fig:lines2} (bottom left). The doublet is also mostly confined to halos with very young stellar populations ($<24$ Myr) as these halos are more likely to have gas heated either by supernovae or photo-heating from young stars. This is in contrast with Ly$\alpha$ emission, which also implies young stellar populations but requires the persistence of colder star-forming gas.
C IV UV doublet equivalent widths are consistently between about 0.8 and 1.7 \AA\ because their strength is closely tied to absorption of the incident spectrum, which, in conjunction with the availability of metal-enriched gas, caps their equivalent widths.
\subsubsection{H-$\alpha$}
The Balmer series in ionized hydrogen forms from a recombination cascade in a diffuse medium. This is tempered by collisional excitations in warm gas where the density and energy of particles or photons are high enough to ensure continuous re-ionization. Balmer-series emission is more susceptible to this effect than Ly$\alpha$ due to the lower energy of its transitions. Thus the relative luminosity of these lines both peaks and drops off at lower temperatures.
H-$\alpha$ luminosity scales with stellar mass and the fraction of Population III stars in our sample. As shown in Fig. \ref{fig:lines}, H-$\alpha$ emission implies gas temperatures below $5 \times 10^4$ K and by extension the coolest star-forming halos and molecular clouds. However, unlike Ly$\alpha$ emission, H-$\alpha$ emission has a higher escape fraction in neutral hydrogen and is therefore less susceptible to attenuation in dense gas. As shown in Fig. \ref{fig:lines3} (top right), H-$\alpha$ EW also scales weakly with stellar mass and is the only one of the prominent emission lines to do so, in agreement with observations of H$\alpha$-derived specific star formation rates of higher-mass galaxies at $z\sim 2$ by \citet{2006ApJ...647..128E}. Higher fractions of metal-free stars are roughly inversely related to H-$\alpha$ EW, which is a function of both the overall tendency for Population III stars to be a smaller fraction of the stellar mass in larger halos and heating from metal-free stars. With $JWST$, H-$\alpha$ emission for objects at $z=15$ should appear in the MIRI F1000W band. We note that IR Paschen-$\alpha$ transitions at 18750 \AA\ were also present in 33 halos due to the recombination cascade.
\subsubsection{Ca II $\lambda\lambda\lambda8498,8542,8662$}
The ionization potential of Ca$^+$ is $\sim$ 11.9 eV, making it susceptible to ionization by strong Ly$\alpha$ (10.19 eV) because a meta-stable 1.7 eV energy state of Ca$^+$ provides the difference \citep{1989A&A...208...47J}. Thus Ca II NIR triplet emission neatly overlaps the thermal trends of Ly$\alpha$ and is only weakly related to temperature in its absence. Therefore, like Ly$\alpha$, Ca II NIR triplet emission is tied to AGN \citep{1989ApJ...347..656F} and bursts of star formation in metal-enriched gas \citep{1993Ap&SS.205...85G}. However, as shown in Fig. \ref{fig:phase2}, gas metal-enrichment within large Population III stellar clusters is sufficient to generate this line, so its presence does not automatically indicate a metal-enriched stellar population. We observe a well-correlated power-law relationship between the emission of this triplet and the luminosity of the halo and no discernible relationship in equivalent widths. These trends imply that halos are mostly transparent to Ca II emission and that the precise arrangement of the gas, dust, and metals is less important than the incident flux and the temperature of metal-enriched gas. With $JWST$, Ca II emission for objects at $z=15$ should appear in the MIRI F1280W and F1500W bands.
\begin{figure*}
\begin{center}
\includegraphics[scale=.35]{JCC-15.pdf}
\includegraphics[scale=.35]{JCC-15-2.pdf}
\caption{The right-hand panels show a control sample of galaxies without metal-free stars or HMXBs, but with similar intrinsic $J_{277w}$ fluxes to our sample of galaxies with both (left-hand panels). Top: $JWST$ colour-colour plot of the sample of halos with at least 1 $\rm{M_\odot}$ of Population III stars tinted by their total stellar masses at a redshift of $z = 15$. Open circles and lines show changes from the intrinsic stellar and HMXB spectra. Bottom: Mean final spectral energy distribution ($\nu f_\nu$ vs \AA) of the sample shaded by one standard deviation above and below the mean and normalized by the values at 1500 \AA.}
\label{fig:color-clolor}
\end{center}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[scale=.16]{images_5.png}
\caption{Left to right: Mean density, rest-frame optical RGB composite, monochromatic image corresponding to the $\rm{J_{277w}}$ image, $\rm{J_{277w}}$ image, and $\rm{J_{200w}}$ image for Halos A, B, C, and D with a magnification factor of 10 and a 1 Ms exposure time. Markers in the density plots are circles, stars, and diamonds for metal-enriched stellar clusters, metal-free stars, and HMXBs respectively.}
\label{fig:photos}
\end{figure*}
\subsection{Aggregate Spectrographic and Photometric Results}
We produce $JWST$ colours by applying a filter throughput to our SEDs after accounting for the effects of redshifting. For our sample, we find that the intrinsic SEDs of our stars and HMXBs are poor predictors of the final colours produced by our radiative transfer calculations. As shown in Fig. \ref{fig:color-clolor} (top left), halos with final $\rm{J_{200w} - J_{277w}}$ and $\rm{J_{150w} - J_{277w}}$ colours around 0.45 and 0.25 respectively are likely to have changed little or reddened slightly after our calculations. Conversely, many halos with intrinsic colours in this range have their colours change drastically after processing. Shifts to a bluer colour imply reddening of higher-energy photons from metal-free stars into the UV range of the filter and were mostly absent from the analysis of metal-enriched populations in the antecedent work. Drastic changes in $\rm{J_{150w} - J_{277w}}$ occur because the $\rm{J_{150w}}$ filter straddles the Lyman limit at $z =15$, so the colour is extremely sensitive to the production and escape fraction of ionizing radiation. The prominent Ly$\alpha$ lines at $\sim$ 19,450 \AA\ (observer frame) are captured by $JWST$ NIRCam's $\rm{J_{200w}}$ filter with the caveat that they are likely to be subject to extinction in the IGM not captured by our analysis. We expect this extinction to be particularly prominent early during the Epoch of Reionization and will seek to capture this effect in future studies. Here, the strength of this line increased $\rm{J_{200w} - J_{277w}}$ colours in many low stellar mass halos, sometimes dramatically.
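Given band-averaged flux densities, the colour computation itself is simple because the AB zero point cancels in a magnitude difference, and the observed position of a line follows from $(1+z)$ stretching. A minimal sketch:

```python
import math

def ab_colour(f1_jy, f2_jy):
    """Colour m1 - m2 from band-averaged flux densities in Jy; the AB
    zero point (3631 Jy) cancels in the difference."""
    return -2.5 * math.log10(f1_jy / f2_jy)

def observed_wavelength(lam_rest_angstrom, z):
    """Redshifted wavelength: Ly-alpha 1216 A at z = 15 lands near
    19,450 A in the observer frame, inside the F200W bandpass."""
    return lam_rest_angstrom * (1.0 + z)
```

A brighter flux in the first band gives a more negative (bluer) colour, which is the sign convention used in the colour-colour plots.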
\begin{table*}
\centering
\caption{Individual halo properties.}
\begin{tabular*}{0.99\textwidth}{@{\extracolsep{\fill}} cccccccc}
Halo & $\log M_{\rm tot}$ & $\log M_{\rm \star,ME}$ & $\log M_{\rm \star,MF}$ & $\log \rm{J_{277w}}$ & S/N$_{\rm max}$ & $\log \rm{J_{200w}}$ & S/N$_{\rm max}$ \\
& [\ifmmode{M_\odot}\else{$M_\odot$}\fi] & [\ifmmode{M_\odot}\else{$M_\odot$}\fi] & [\ifmmode{M_\odot}\else{$M_\odot$}\fi] & [$\rm{erg\ s^{-1}cm^{-2}}$] & & [$\rm{erg\ s^{-1}cm^{-2}}$] & \\
\hline
A & $8.47$ & $5.74$ & $0.78$ &$-20.22$ & 3.92 & $-20.16$ & 5.08 \\
B & $7.69$ & $3.96$ & $2.85$ &$-21.46$ & 1.03 & $-21.22$ & 1.02 \\
C & $8.65$ & $5.49$ & $2.42$ &$-21.34$ & 2.71 & $-21.09$ & 2.91\\
D & $8.56$ & $5.63$ & $0.48$ &$-20.04$ & 7.59 & $-19.87$ & 9.25\\
\hline
\end{tabular*}
\parbox[t]{0.99\textwidth}{\textit{Notes:}
The columns show halo mass, metal-enriched stellar mass, metal-free stellar mass, and $JWST$ $\rm{J_{200w}}$ and $\rm{J_{277w}}$ filter fluxes at $z=15$. Signal to noise ratios are for the brightest pixels shown in Fig. \ref{fig:photos} with a 1 Ms exposure time and $\mu = 10$.}
\label{tab:Halolist}
\end{table*}
Fig. \ref{fig:color-clolor} (bottom left) shows a composite of all the final SEDs in our sample normalized to emission at 1500 \AA\ (rest frame). In addition to the oft-mentioned Ly$\alpha$ line, we see several lines in the Lyman continuum. We did not explore these features because we expect neutral hydrogen to drastically attenuate ionizing emission in the IGM, but we note that several strong He I and He~{\sc II} emission lines are present in the ISM and CGM of most of the halos in our sample. In general, the fraction of ionizing radiation varied drastically between our halos due to the wide range of Population III stellar mass fractions. At the extremes, ionizing radiation was completely attenuated in some cases and represented the peak emitted energy in others. In the UV, results were more consistent as halos fell into the narrow range of 0.3 to 0.8 in $\rm{J_{200w} - J_{277w}}$ demonstrated in the colour-colour plot with a few outliers.
The control group of 73 halos with similar intrinsic $J_{277w}$ flux, but composed of metal-enriched stars and no HMXBs, is plotted in the right-hand panels of Fig. \ref{fig:color-clolor}. The colours show fewer outliers and are generally redder in both bands. The aggregate spectra show more prominent metal emission lines in the Lyman continuum and shallower UV, visual, and infrared slopes.
We explore photometry for the halos examined in Sections \ref{sec:merger} and \ref{sec:pop3} as well as two more merger scenarios (Halos C and D). The composition and $JWST$ fluxes for all four halos are shown in Table \ref{tab:Halolist}. For our analysis, we take background noise to be Gaussian with a mean and standard deviation given by
\begin{equation}
\langle N \rangle = \frac{5 \times 10^{-8} \rm{Jy}}{\sqrt{t_{\rm exposure}}},
\end{equation}
which approximates the sensitivity of $JWST$'s NIRCam. We choose the lower bound of the pixel colormap to be the maximum of our mean noise value and one standard deviation below the mean pixel flux, which produces tinting of the low-flux pixels in our processed images. We take the signal-to-noise ratio to be the brightest pixel flux divided by 2$\langle N \rangle$.
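As an illustrative sketch (function names and the pure-Python form are ours; the $5\times10^{-8}$ Jy normalization, the $\sqrt{t_{\rm exposure}}$ scaling, and the $2\langle N \rangle$ denominator are taken from the text above), the noise and signal-to-noise prescription amounts to:

```python
import math

def mean_noise_jy(t_exposure_s):
    # <N> = 5e-8 Jy / sqrt(t_exposure), approximating NIRCam sensitivity.
    return 5e-8 / math.sqrt(t_exposure_s)

def signal_to_noise(peak_pixel_flux_jy, t_exposure_s, magnification=1.0):
    # S/N of the brightest pixel divided by 2<N>; a lensing
    # magnification mu simply scales the observed flux.
    return magnification * peak_pixel_flux_jy / (2.0 * mean_noise_jy(t_exposure_s))
```

For a 1 Ms exposure this gives $\langle N \rangle = 5\times10^{-11}$ Jy, so a $\mu = 10$ lensed pixel with $10^{-10}$ Jy intrinsic flux would be detected at S/N of 10.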
Each halo is brighter than the noise as seen by $JWST$ at $z = 15$ assuming a 1 Ms exposure time and a factor of 10 magnification from gravitational lensing. Generally, however, galaxies of this kind are not resolved by $JWST$ and extend over only 1-2 pixels. The stellar population merger scenario (Halo A) occupies four pixels and appears as two distinct sub-halos of two pixels each. The Population III galaxy scenario (Halo B) illuminates a single pixel and produces a stronger flux in the $\rm{J_{200w}}$ filter than the $\rm{J_{277w}}$ filter, but we do not expect it to be directly observable with $JWST$ at this redshift without an extensive exposure time. Halo C, however, also features a high fraction of Population III stars and should be barely observable, with a S/N of 3 in our example. Halo D would be the brightest of the four, but it contains only a single Population III star amidst a much more luminous metal-enriched stellar population.
\section{Discussion}
\label{sec:diss}
To facilitate our study of the spectroscopic impact of Population III stars and HMXBs, we have contributed a few methodological improvements to radiative transfer post-processing of cosmological simulations. At the core of our calculations is the dust radiative transfer code {\sc Hyperion}, which we extend to gas and emission lines by creating two-dimensional arrays of extinction and emission prescriptions with {\sc Cloudy}. Somewhat ironically, our relatively simple treatment of dust leaves the most to be desired, and improvements to our dust models will be a focus of future investigations. However, galaxy dust ratios at high redshift have been shown to vary greatly with the assumed grain accretion timescales \citep{2015MNRAS.451L..70M}, which themselves vary sensitively with ISM density \citep{2016MNRAS.457.1842S} and composition, making dust difficult to constrain.
We also briefly explore the prevalence of high mass X-ray binaries in Section \ref{sec:HMXB}. Since the impetus for those calculations was a desire to physically motivate their inclusion in our study, we were less concerned with the implications of their global fraction on the cosmological environment, but that subject certainly deserves some consideration. The multi-color accretion disk model implies an inverse relationship between black hole mass and peak temperature. This suggests that larger black holes emit more of their radiation as hydrogen-ionizing photons than smaller ones, which emit most of their energy at wavelengths too short to interact with gas in the ISM and CGM, but which contribute to slow heating of the IGM through photons in the 500 eV to 1 keV energy range \citep{2014ApJ...791..110X}. This may have considerable implications for reionization, star formation, and estimates of escape fractions if luminous high mass compact objects are determined to be fairly prevalent. Furthermore, X-ray emission from binaries has been shown to contribute strongly to the cosmic X-ray background \citep{2016ApJ...833...84X}.
Generally, metal-free stars at high redshift remain elusive to direct detection, with their supernovae offering the best chance of detection \citep[e.g.][]{2013ApJ...762L...6W}. Galaxies where Population III starbursts comprise most or all of the stellar mass, like Halo B, were too small and dim to be observed with $JWST$ even with generous exposure times and gravitational lensing. In scenarios with a merger between a halo with a metal-enriched stellar population and one with a metal-free stellar population, the metal-enriched population provides enough of a boost to the luminosity to make the halo discernible, but dense gas in deeper potential wells limits the escape of ionizing and UV radiation. In this case, it may be possible to estimate the temperature profile from the UV slope and deduce the presence of a hot, ionizing source like a HMXB or a metal-free stellar cluster. However, the best and rarest scenario for observation was a merger between two galaxies with metal-free stars (Halo C), but there was only one such configuration in our comoving simulation volume of $133.6\ \rm{Mpc}^3$. Therefore, we predict that direct observation is possible at this redshift, but fairly improbable with the current generation of hardware.
In their analysis of the void region of the Renaissance Simulations, \citet{2016ApJ...823..140X} discover Population III star formation at the terminating redshift of $z = 7.6$ in halos that were generally larger than those hosting these stars in the rare peak volume. Late formation is enabled by strong LW flux from metal-enriched stars suppressing formation in the surrounding pristine gas and may continue to even lower redshift. Their sample includes rather large Population III starbursts, with one in excess of $10^3\ {\rm M}_\odot$. There are twelve halos with active Population III stellar populations at the terminating redshift of the void simulation in a comoving volume of 220.5 $\rm{Mpc}^3$. Given their luminosity and redshift, some of these would likely be detectable with $JWST$.
Our use of averaged metal-free IMF prescriptions likely has little impact on observables like colour or imaging. By maintaining the size of the ionized region and temperatures from our simulation, the effect of this discrepancy is mostly confined to the calculation of the absorbed radiative energy within the Monte Carlo step. However, since observation requires either a large number of metal-free stars or a metal-enriched population, the impact of small changes in the incident spectra of individual stars is vanishingly small, especially when compared to contributions from other factors like morphology and viewing angle when observing irregular galaxies. For a detailed study of astrophysical radiative transfer phenomena, on the other hand, a more robust spectral routine would be desirable.
For objects at high redshift, emission-line diagnostics serve more as a long-term prediction and a theoretical exploration, with the notable exception of the Ly$\alpha$ line, which sits near the center of the $\rm{J_{200w}}$ filter at $z=15$ and is luminous enough to impact color. This is tempered by the tendency for this line to become lost against the continuum in brighter galaxies as starbursts comprise a smaller fraction of the emission.
For HMXBs, the C IV UV doublet is a constant companion, growing in strength proportionally to the overall luminosity of the halo due to our decision to include them in most of our sample. Since HMXBs can form in metal-enriched populations, diagnostics of this emission line are somewhat extensible to observation of these objects in the local Universe. However, both C IV and the Ca II IR triplet are already a well-established feature of the broad-line regions of nearby accreting compact objects. We find them in our control sample of metal-enriched halos as well, which undermines the premise that they are unique to the presence of HMXBs. We will therefore wait until we perform simulations to lower redshift before we attempt to glean more about the emission-line diagnostics of present-day HMXBs. We also note that our use of solar chemical abundances may significantly underestimate the prevalence of C IV if gas in early galaxies is carbon-enhanced due to the lack of Type Ia supernovae. H-$\alpha$ emission was much more prevalent in our sample of halos with metal-free stars and more luminous in galaxies with a higher fraction of these stars. Though this emission is relatively weak compared to the other lines, it may serve as a potential fingerprint for this class of halos.
\section{Conclusion}
\label{sec:con}
We introduce a new radiative transfer post-processing pipeline, {\sc Caius}, for {\sc enzo}\ cosmological simulations, which we apply to explore the observability of metal-free stellar populations and high mass X-ray binaries. Our main findings are:
\begin{enumerate}
\item High mass X-ray binaries would peak at about 20\% of the stellar systems within a Population III starburst if it is generously assumed that half the stars form as close binaries.
\item About six halos in our sample would be discernible with $JWST$ given long exposure times (1-10 Ms) and gravitational lensing ($\mu = 10$).
\item Galaxies with high fractions of metal-free stars tend to have low luminosity at high redshift. Therefore the best scenario for direct observation of a metal-free stellar population might be a merger between two such galaxies though that configuration is rare in our simulations.
\item The youth of metal-free stars implies strong Ly$\alpha$ emission. Ly$\alpha$ equivalent widths (EWs) are inversely proportional to the total stellar mass of the halo. Through filters, high EWs appear as an increase in $\rm{J_{200w} - J_{277w}}$ compared to the intrinsic values from the underlying stellar spectra.
\item The inclusion of Population III stars and HMXBs significantly increased the prevalence of H-$\alpha$ emission versus the control group and H-$\alpha$ further scaled with the fraction of stellar mass that comes from Population III stars.
\item Strong Ly$\alpha$ emission gives rise to the Ca II IR triplet, which suffers less extinction than Ly$\alpha$ while indicating the same physical scenario.
\item Our sample of galaxies with Population III stars and HMXBs was generally bluer than the control sample.
\end{enumerate}
We have shown the impact of ISM and CGM extinction of the gas and dust continuum as well as emission lines on galaxies with high-energy sources in the early Universe. Our prescription treats extinction and photochemistry in both optically thin and optically thick media. With our pipeline, we are able to produce synthetic photometry and further process those results into instrument-relevant data. We will continue to improve our post-processing models as we explore more cosmological scenarios.
\section*{Acknowledgements}
KSSB acknowledges support from the Southern Regional Education Board doctoral fellowship.
JHW acknowledges support from National Science Foundation (NSF) grants
AST-1333360 and AST-1614333 and Hubble theory grants HST-AR-13895 and
HST-AR-14326 and NASA grant NNX17AG23G.
AA acknowledges support from NSF grant AST-1333360.
BWO was supported in part by NSF grants PHY-1430152 and AST-1514700, by NASA grants NNX12AC98G, NNX15AP39G, and by Hubble Theory Grants HST-AR-13261.01-A and HST-AR-14315.001-A.
MLN was supported by NSF grant AST-1109243 and acknowledges partial support from NSF grant AST-1615848. The
simulation was performed using \textsc{Enzo} on the Blue Waters
operated by the National Center for Supercomputing Applications (NCSA)
with PRAC allocation support by the NSF (award number
ACI-0832662). This research is part of the Blue Waters
sustained-petascale computing project, which is supported by the NSF
(award numbers ACI-1238993 and ACI-1514580) and the state of Illinois. Blue Waters is a
joint effort of the University of Illinois at Urbana-Champaign and its
NCSA. This research has made use of NASA's Astrophysics Data System
Bibliographic Services. Analysis was performed on XSEDE's Maverick
resource with XSEDE allocation AST-120046. The majority of the
analysis and plots were done with \textsc{yt} and \textsc{matplotlib}.
\textsc{Enzo} and \textsc{yt} are developed by a large number of
independent researchers from numerous institutions around the
world. Their commitment to open science has helped make this work
possible.
\section{Introduction}
\label{sec:intro}
Parallel workloads are often modeled as directed acyclic task graphs,
or DAGs, where nodes represent tasks and edges represent dependencies
between tasks. Task graphs arise from many scientific domains, such as
image processing, genomics, and geophysical simulations. In this
paper, we focus on task graphs coming from sparse linear algebra, and
especially from the factorization of sparse matrices using the
multifrontal method. Liu~\cite{liu:90} explains that the computational
dependencies and requirements in Cholesky and LU factorization of
sparse matrices using the multifrontal method can be modeled as a task
tree, called the \emph{assembly tree}. We therefore focus on
dependencies that can be modeled as a tree.
In the abundant existing literature, several variants of the task
graph scheduling problem are addressed, depending on the ability to
process a task in parallel: tasks are either \emph{sequential} (not
amenable to parallel processing), \emph{rigid} (requesting a given
number of processors), \emph{moldable} (able to cope with any fixed
number of processors) or even \emph{malleable} (processed on a variable
number of processors) in the terminology of~Drozdowski~\cite[chapter
25]{handbook}. When considering moldable and malleable tasks, one has
to define how the processing time of a task depends on the number of
allocated processors. Under some general
assumptions, Jansen and Zhang~\cite{jansen05} derive a
3.29~approximation algorithm for arbitrary precedence constraints,
which is improved in a 2.62~approximation in the particular case of a
series-parallel precedence graph by Lepere et
al.~\cite{lepere02}. However, although polynomial, these algorithms
rely on complex optimization techniques, which makes them difficult
to implement in a practical setting.
In this study, we consider a special case of malleable tasks, where
the speedup function of each task is $p^\alpha$, where $p$ is the
number of processors allocated to the task, and $0<\alpha\leq 1$ is a
global parameter. In particular, when the share of processors $p_i$
allocated to a task $T_i$ is constant, its processing time is given by
$L_i/ p_i^\alpha$, where $L_i$ is the sequential duration of
$T_i$. The case $\alpha=1$ represents the unrealistic case of a
perfect linear speed-up, and we rather concentrate on the case
$\alpha<1$ which takes into consideration the cost of the
parallelization. In particular $\alpha<1$ accounts for the cost of
intra-task communications, without having to decompose the tasks in
smaller granularity sub-tasks with explicit communications, which
would make the scheduling problem intractable. This model has been
advocated by Prasanna and Musicus~\cite{prasmus2} for matrix
operations, and we present some new motivation for this model in our
context. As in~\cite{prasmus2}, we also assume that it is possible to
allocate non-integer shares of processors to tasks. This amounts to
assuming that processors can share their processing time among
tasks. When task $A$ is allocated 2.6 processors and task $B$ 3.4
processors, one processor dedicates 60\% of its time to $A$ and 40\%
to $B$. Note that this is a realistic assumption, for example, when
using modern task-based runtime systems such as StarPU~\cite{starpu},
KAAPI~\cite{kaapi}, or PaRSEC~\cite{parsec}. This allows to simplify
the scheduling problem and to derive optimal allocation algorithms.
Our objective is to minimize the total processing time of a tree of
malleable tasks. Initially, we consider a homogeneous platform
composed of $p$ identical processors. To achieve our goal, we take
advantage of two sources of parallelism: the \emph{tree parallelism}
which allows tasks independent from each others (such as siblings) to
be processed concurrently, and the \emph{task parallelism} which
allows a task to be processed on several processors. A solution to
this problem describes both in which order tasks are processed and
which share of computing resources is allocated to each task.
In~\cite{prasmus,prasmus2}, the same problem has been addressed by
Prasanna and Musicus for series-parallel graphs (or SP-graphs). Such
graphs are built recursively as series or parallel composition of two
smaller SP-graphs. Trees can be seen as a special-case of
series-parallel graphs, and thus, the optimal algorithm proposed
in~\cite{prasmus,prasmus2} is also valid on trees. They use optimal
control theory to derive general theorems for any strictly increasing
speedup function. For the particular case of the speedup function
$p^\alpha$, Prasanna and Musicus prove some properties of the unique
optimal schedule, which allow it to be computed efficiently. Their results
are powerful (a simple optimal solution is proposed), but to obtain
these results they had to transform the problem in a shape which is
amenable to optimal control theory. Thus, their proofs do not provide
any intuition on the underlying scheduling problem, yet it seems
tractable using classic scheduling arguments.
In this paper, our contributions are the following:
\begin{compactitem}
\item In Sect.~\ref{sec:motiv}, we show that the model of malleable
tasks using the $p^\alpha$ speed-up function is justified in the
context of sparse matrix factorization.
\item In Sect.~\ref{sec:shared}, we propose a new and simpler proof
for the results of ~\cite{prasmus,prasmus2} on series-parallel
graphs, using pure scheduling arguments.
\item In Sect.~\ref{sec:dist}, we extend the previous study on
distributed memory machines, where tasks cannot be distributed
across several distributed nodes. We provide NP-completeness
results and approximation algorithms.
\end{compactitem}
\section{Validation of the Malleable Task Model}
\label{sec:motiv}
In this section, we evaluate the model proposed by
Prasanna and Musicus in~\cite{prasmus,prasmus2} for our target
application. This model states that the instantaneous speedup of a task processed on
$p$ processors is $p^\alpha$. Thus, the processing time of a task
$T_i$ of size $L_i$ which is allocated a share of processors $p_i(t)$
at time $t$ is equal to the smallest value $C_i$ such that
$
{ \int_{0}^{C_i} \left(p_i(t)\right)^{\alpha} dt} \ \geq\ L_i,
$
where $\alpha$ is a task-independent constant. When the share
of processors $p_i$ is constant, $C_i = L_i/p_i^\alpha$. Our goal is
(i) to find whether this formula well describes the evolution of the
task processing time for various shares of processors and (ii) to
check that different tasks of the same application have the same
$\alpha$ parameter. We target a modern multicore platform composed of
a set of nodes each including several multicore processors. For the
purpose of this study we restrict ourselves to the single node case
for which the communication cost will be less dominant. In this
context, $p_i(t)$ denotes the number of \emph{cores} dedicated to task
$T_i$ at time $t$.
We consider applications having a tree-shaped task
graph constituted of parallel tasks. This kind of
execution model can be met in sparse direct solvers where the matrix
is first factorized before the actual solution is computed. For
instance, either the multifrontal method~\cite{dure:83} as implemented
in \texttt{MUMPS}\xspace~\cite{mumps11} or \texttt{qr\_mumps}\xspace~\cite{buttari10}, or the supernodal
approach as implemented in \texttt{SuperLU}\xspace~\cite{superlu} or
in \texttt{PaStiX}\xspace~\cite{pastix}, are based on tree-shaped task graphs (namely the
assembly tree~\cite{aglp:87}). Each task in this tree is a partial
factorization of a dense sub-matrix or of a sparse panel. In order to
reach good performance, these factorizations are performed using tiled
linear algebra routines (BLAS): the sub-matrix is decomposed into 2D
tiles (or blocks), and optimized BLAS kernels are used to perform the
necessary operations on each tile. Thus, each task can be seen as a
task graph of smaller granularity sub-tasks.
As computing platforms evolve quickly and become more complex (e.g.,
because of the increasing use of accelerators such as GPUs or Xeon
Phis), it becomes interesting to rely on an optimized dynamic runtime
system to allocate and schedule tasks on computing resources. These
runtime systems (such as StarPU~\cite{starpu}, KAAPI~\cite{kaapi}, or
PaRSEC~\cite{parsec}) are able to process a task on a prescribed
subset of the computing cores that may evolve over time. This
motivates the use of the malleable task model, where the share of
processors allocated to a task vary with time. This approach has been
recently used and evaluated~\cite{hugo14} in the context of the \texttt{qr\_mumps}\xspace
solver using the StarPU runtime system.
In order to assess whether tasks used within sparse direct
solvers fit the model introduced by Prasanna and Musicus
in~\cite{prasmus2} we conducted an experimental study on several dense
linear algebra tasks. We used a test platform composed of 4 Intel
E7-4870 processors having 10 cores each clocked at 2.40~GHz and having
30~MB of L3 cache for a total of 40 cores. The platform is equipped
with 1~TB of memory with uniform access. We considered dense
operations which are representative of what can be met in sparse
linear algebra computations, namely the standard frontal matrix
factorization kernel used in the \texttt{qr\_mumps}\xspace solver. We used either
block-columns of size 32 (1D partitioning) or square blocks of size
256 (2D partitioning). All experiments were made using the StarPU
runtime.
\begin{figure}[bt]
\centering
\subfigure[\mystrut(1.2em) Timings and model (lines) with 1D partitioning]{\label{fig.qrm1D}%
\includegraphics[width=0.6\linewidth]{qrm1d_model_log_log.fig}%
}
\hspace{0.5cm}
\raisebox{65pt}{%
\subfigure[\mystrut(1.2em)Values of $\alpha$]{\label{fig.qrmalpha}%
\scalebox{0.9}{%
\begin{tabular}[c]{|c|c|c|}
\hline
matrix & ~~1D~~ & ~~2D~~ \\
\hline
5000x1000~~ & 0.78 & 0.93\\
\hline
10000x2500~~ & 0.88 & 0.95 \\
\hline
20000x5000~~ & 0.89 & 0.94 \\
\hline
\end{tabular}}
}
}
\caption{Timings and $\alpha$ values for \texttt{qr\_mumps}\xspace frontal matrix factorization kernel}
\label{fig.qrm}
\end{figure}
Figure~\ref{fig.qrm1D} presents the timings obtained when processing
the \texttt{qr\_mumps}\xspace frontal matrix factorization kernel on a varying number of
processors. The logarithmic scales show that the $p^\alpha$ speedup
function models well the timings, except for small matrices when $p$
is large. In those cases, there is not enough parallelism in tasks to
exploit all available cores. We performed linear regressions on
the portions where $p\leq 10$ to compute $\alpha$ for
different task sizes (Fig.~\ref{fig.qrmalpha}). We performed
the same test for 2D partitioning and computed the corresponding
$\alpha$ values (using $ p\leq 20$). We notice that the value of
$\alpha$ does not vary significantly with the matrix size, which
validates our model. The only notable exception is for the smallest
matrix (5000x1000) with 1D partitioning: it is hard to efficiently use
many cores for such small matrices. In all cases, when the number of
processors exceeds a threshold, the performance deteriorates and
stalls. Our speedup model is only valid below this threshold,
which increases with the matrix size. This is not a
problem as the allocation schemes developed in the next sections
allocate large numbers of processors to large tasks at the top of the
tree and smaller numbers of processors for smaller tasks. In other
words, we produce allocations that always respect the validity
thresholds of the model.
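The linear regressions mentioned above reduce to fitting the slope of $\log T$ against $\log p$, since $T(p) = L/p^\alpha$ implies $\log T = \log L - \alpha \log p$. A minimal sketch with synthetic timings (function names are ours):

```python
import math

def fit_alpha(procs, timings):
    # Least-squares slope of log(T) versus log(p); under the model
    # T = L / p**alpha the slope is -alpha, so we return its negation.
    xs = [math.log(p) for p in procs]
    ys = [math.log(t) for t in timings]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return -slope
```

On synthetic timings generated with $\alpha = 0.9$ the fit recovers $0.9$ exactly; on measured timings the fit must be restricted to the range where the model holds (e.g., $p \leq 10$ for 1D partitioning above).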
Finally, note that the value of $\alpha$ depends on the
parameters of the problem (type of factorization, partitioning, block
size, etc.). It has to be determined for each kernel and each set of
blocking parameters.
\section{Model and Notations}
\label{sec:model}
We assume that the number of available computing resources may vary
with time: $p(t)$ gives the (possibly rational) total number of processors
available at time $t$, also called the {processor profile}\xspace. For the sake of
simplicity, we consider that $p(t)$ is a step function. Although our
study is motivated by an application running on a single multicore
node (as outlined in the previous section), we use the term
\emph{processor} instead of \emph{computing core} in the following
sections for readability and consistency with the scheduling
literature.
We consider an in-tree $G$ of $n$ malleable tasks $T_1, \ldots, T_n$.
$L_i$ denotes the length, that is the sequential processing time, of
task $T_i$. As motivated in the previous section, we assume that the
speedup function for a task allocated $p$ processors is $p^\alpha$,
where $0 < \alpha \leq 1$ is a fixed parameter.
A schedule $S$ is a set of nonnegative piecewise continuous functions
$\big\{ p_i(t)\ \big|\ i\in I\big\}$ representing the time-varying
share of processors allocated to each task. During a time interval
$\Delta$, the task $T_i$ performs an amount of work equal to $ \inte{\Delta\
}{p_i(t)^\alpha}{dt}$. Then, $T_i$ is completed when the total work
performed is equal to its length $L_i$. The completion time of task
$T_i$ is thus the smallest value $C_i$ such that $
\int_0^{C_i}{p_i(t)^\alpha}{dt} \geq L_i$.
We define $w_i(t)$ as the ratio of the
work of the task $T_i$ that is done during the time interval $[0,t]$:
$w_i(t) = \inte[t]{0}{p_i(x)^\alpha}{dx} \big/ L_i $. A schedule is a
valid solution if and only if:
\begin{compactitem}
\item it does not use more processors than available: $\forall t, \sum_{i\in I} p_i(t) \leq p(t)$;
\item it completes all the tasks: $\exists\tau,\ \forall i \in I, \ \ w_i(\tau)=1$;
\item and it respects precedence constraints:
$\forall i\in I, \forall t$, if $p_i(t)>0$ then, $\forall j \in I$,
if $j$ is a child of $i$, $w_j(t)=1$.
\end{compactitem}
The makespan $\tau$ of a schedule is computed as $\min \{t \ | \
\forall i\ w_i(t) = 1\}$. Our objective is to construct a valid
schedule with optimal, i.e., minimal, makespan.
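For step-function allocations, the work and completion-time definitions above can be sketched as follows (a minimal illustration; names are ours):

```python
def work_done(pieces, alpha):
    # Work of a task over piecewise-constant (duration, share) pieces:
    # the integral of p_i(t)**alpha reduces to a sum of d * s**alpha.
    return sum(d * s ** alpha for d, s in pieces)

def completion_time(L, share, alpha):
    # Constant share: smallest C with C * share**alpha >= L.
    return L / share ** alpha
```

For instance, with $\alpha = 0.5$, a task of length $L_i = 10$ on a constant share of 4 processors completes at $C_i = 10/4^{0.5} = 5$.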
Note that because of the speedup function $p^\alpha$, the computations
in the following sections will make a heavy use of the functions
$f:x\mapsto x^\alpha$ and $g:x\mapsto x^{(1/\alpha)}$. We assume that
we have at our disposal a polynomial time algorithm to compute both
$f$ and $g$. We are aware that this assumption is very likely to be
wrong, as soon as $\alpha<1$, since $f$ and $g$ produce irrational
numbers. However, without these functions, it is not even possible to
compute the makespan of a schedule in polynomial time and, hence, the
problem is not in NP. Furthermore, this allows us to avoid the
complexity due to number computations, and to concentrate on the most
interesting combinatorial complexity, when proving NP-completeness
results and providing approximation algorithms. In practice, any
implementation of $f$ and $g$ with a reasonably good accuracy will be
sufficient to perform all computations, including the computation
of makespans.
In the next section, following Prasanna and Musicus, we will not
consider trees but more general graphs: \emph{series-parallel graphs}
(or SP graphs). An SP graph is recursively defined as a single task,
the series composition of two SP graphs, or the parallel composition
of two SP graphs.
A tree can easily be transformed into an SP graph by joining the
leaves according to its structure;
the resulting graph is then called a \emph{pseudo-tree}. We will use
$(\para ij)$ to represent the parallel composition of tasks $T_i$ and
$T_j$ and $(\seri ij)$ to represent their series composition.
Thanks to the construction of pseudo-trees, an algorithm which solves
the previous scheduling problem on SP-graphs also gives an optimal
solution for trees.
\section{Optimal Solution for Shared-Memory Platforms}
\label{sec:shared}
The purpose of this section is to give a simpler proof of the results
of~\cite{prasmus,prasmus2} using only scheduling arguments. We
consider an SP-graph to be scheduled on a shared-memory platform
(each task can be distributed across the whole platform). We assume
that $\alpha<1$ and prove the uniqueness of
the optimal schedule.
Our objective is to prove that any SP graph $G$ is \emph{equivalent}
to a single task $T_G$ of easily computable length: for any {processor profile}\xspace
$p(t)$, graphs $G$ and $T_G$ have the same makespan. We prove that the
ratio of processors allocated to any task $T_i$, defined by
$r_i(t)= p_i(t)/p(t)$, is constant from the
moment at which $T_i$ is initiated to the moment at which it is
terminated. We also prove that in an optimal schedule, the two
subgraphs of a parallel composition terminate at the same
time and each receives a constant total ratio of processors throughout
its execution.
We then prove that these properties imply that the optimal schedule is unique and obeys
a {\it flow conservation} property: the shares of processors
allocated to two subgraphs of a series composition are equal. When
considering a tree, this means that the whole schedule is defined by
the ratios of processors allocated to the leaves. Then, all the children of a node $T_i$ terminate at the
same time, and its ratio is the sum of its children
ratios.
We first need to define the length $\LG{G}$ associated to a graph $G$, which
will be proved to be the length of the task $T_G$. Then, we state a few lemmas
before proving the main theorem. We only present here sketches of the proofs,
the detailed versions can be found in \cite{RR-ipdps-2014}.
\begin{defi}
\label{def.eq-task}
We recursively define the length $\LG{G}$ associated to a SP graph $G$:
\begin{inparaitem}
\item $\LG{T_i} = L_i$ \hfill
\item $\LG{\seri{G_1}{G_2}} = \LG{G_1} + \LG{G_2}$ \hfill
\item $\LG{\para{G_1}{G_2}} = \left(\LG{G_1}^{1/\alpha} + \LG{G_2}^{1/\alpha}\right)^\alpha$
\end{inparaitem}
\end{defi}
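Definition~\ref{def.eq-task} translates directly into code; the sketch below (function names are ours) also includes the constant processor ratio $\pi_1$ of Lemma~\ref{lem:rate} below:

```python
def series(l1, l2):
    # Series composition: equivalent lengths add.
    return l1 + l2

def parallel(l1, l2, alpha):
    # Parallel composition: (l1^(1/a) + l2^(1/a))^a.
    return (l1 ** (1 / alpha) + l2 ** (1 / alpha)) ** alpha

def ratio(l1, l2, alpha):
    # Constant share of processors given to the first branch of a
    # parallel composition: 1 / (1 + (l2/l1)^(1/a)).
    return 1.0 / (1.0 + (l2 / l1) ** (1 / alpha))
```

A pseudo-tree is then evaluated bottom-up by nesting these calls; e.g., for $\alpha = 1/2$ the parallel composition of two unit-length tasks has equivalent length $\sqrt{2}$ and each branch receives a constant ratio of $1/2$.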
\newcommand{\mathcal{Q}}{\mathcal{Q}}
\begin{lemma}
\label{lem:allproc}
An allocation minimizing the makespan uses all the processors at any time.
\end{lemma}
We call a \emph{clean interval} with regard to a schedule $S$ an
interval during which no task is completed in $S$.
\begin{lemma}
\label{lem:esc}
When the number of available processors is constant, any optimal schedule
allocates a constant number of processors per task on any clean
interval.
\end{lemma}
\begin{proof}
By contradiction, we assume that there exists an optimal schedule
$\mathcal{P}$ of makespan $M$, a task $T_j$ and a clean interval
$\Delta=[t_1,t_2]$ such that $T_j$ is not allocated a constant number of
processors on $\Delta$. By definition of clean intervals, no task
completes during $\Delta$. $|\Delta|=t_2-t_1$ denotes the duration of $\Delta$,
$I$ the set of tasks that receive a non-empty share of processors
during $\Delta$, and $p$ the constant number of available processors.
We want to show that there exists a valid schedule with a makespan
smaller than $M$. To achieve this, we define an intermediate and
not necessarily valid schedule $\mathcal{Q}$, which
nevertheless respects the resource constraints (no more than
$p$ processors are used at time $t$). This schedule is
equal to $\mathcal{P}$ except on $\Delta$.
%
The constant share of processors allocated to task $T_i$ on $\Delta$ in
$\mathcal{Q}$ is defined by $q_i = \frac{1}{|\Delta|}\int_\Delta
p_i(t)dt$. For all $t$, we have $\sum_{i\in I} p_i(t) = p$ because
of Lemma~\ref{lem:allproc}. We get $\sum_{i\in I} q_i = p$. So
$\mathcal{Q}$ respects the resource constraints.
Let $W_i^\Delta(\mathcal{P})$ (resp. $W_i^\Delta(\mathcal{Q})$) denote the
work done on $T_i$ during $\Delta$ under schedule $\mathcal{P}$
(resp. $\mathcal{Q}$).
We have
\begin{align*}
W_i^\Delta(\mathcal{P}) &= \int_\Delta p_i(t)^\alpha dt = |\Delta|
\int_{[0,1]} p_i(t_1+ t |\Delta|)^\alpha dt\\
W_i^\Delta(\mathcal{Q}) &= \int_\Delta q_i^\alpha\, dt = |\Delta| \left(\frac{1}{|\Delta|}\int_\Delta p_i(t)\, dt\right) ^\alpha
= |\Delta| \left(\int_{[0,1]} p_i(t_1+t |\Delta|)\, dt\right) ^\alpha
\end{align*}
As $\alpha<1$, the function $x\mapsto x^\alpha$ is concave; hence,
by Jensen's inequality \cite{Hardy}, $W_i^\Delta(\mathcal{P}) \leq
W_i^\Delta(\mathcal{Q})$. Moreover, as $x\mapsto x^\alpha$ is
\emph{strictly} concave, this inequality is an equality if and only
if the function $t\mapsto p_i(t_1+t|\Delta|)$ is constant on
$[0,1[$ except on a subset of $[0,1[$ of null measure \cite{Hardy}.
By assumption, $p_j$ is not constant on $\Delta$, and
cannot be made constant by modifications on a set of null measure.
We thus have $W_j^\Delta(\mathcal{P}) < W_j^\Delta(\mathcal{Q})$.
Therefore, $\mathcal{Q}$ performs strictly more work on $T_j$ than
necessary. It is then possible to distribute this surplus among
the other tasks during $\Delta$, so that the work done during $\Delta$ in
$\mathcal{P}$ can be completed earlier. This implies that
there exists a valid schedule with a makespan smaller than $M$;
hence the contradiction.\qed
\end{proof}
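The role of Jensen's inequality in this proof can be checked numerically: over a clean interval, replacing a non-constant allocation by its average strictly increases the work $\int_\Delta p_i(t)^\alpha\,dt$. A minimal sketch, with assumed values $\alpha=1/2$ and an allocation alternating between 2 and 8 processors on a unit-length interval:

```python
ALPHA = 0.5  # assumed speedup exponent

# Work over a unit-length clean interval for an allocation that uses
# 2 processors on the first half and 8 on the second half...
work_nonconstant = 0.5 * 2 ** ALPHA + 0.5 * 8 ** ALPHA

# ...versus the constant average allocation q = (2 + 8) / 2 = 5,
# which uses exactly the same total number of processor-seconds.
work_constant = 5 ** ALPHA

# Jensen's inequality for the strictly concave x -> x^alpha:
# the constant allocation performs strictly more work.
assert work_constant > work_nonconstant
```

Here `work_nonconstant` $\approx 2.12$ while `work_constant` $\approx 2.24$: the averaged allocation frees a surplus that, as in the proof, can be redistributed to the other tasks.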
We recall that $r_i(t)= p_i(t)/p(t)$ is the instantaneous ratio of
processors allocated to a task $T_i$.
\begin{lemma}
\label{lem:rate}
Let $G$ be the parallel composition of two tasks, $T_1$ and $T_2$. If
$p(t)$ is a step function, in any optimal schedule $r_1(t)$ is
constant and equal to $\pi_1 =
{1}\left/\left({1+\left({L_2}/{L_1}\right)^
{1/\alpha}}\right)\right. = L_1^{1/\alpha} \left/ \LG{\para
12}^{1/\alpha} \right.$ up to the completion of $G$.
\end{lemma}
\begin{proof}
First, we prove that $r_1(t)$ is constant on any optimal
schedule.
We consider an optimal schedule $S$, and two consecutive time intervals $A$ and
$B$ such that $p(t)$ is constant and equal to $p$ on $A$ and $q$ on $B$, and
$S$ does not complete before the end of $B$. Suppose also that $|A|p^\alpha =
|B|q^\alpha$ (shorten one interval otherwise), where $|A|$ and
$|B|$ are the durations of intervals $A$ and $B$. By Lemma \ref{lem:esc},
$r_1(t)$ has constant values $r_1^A$ on $A$ and $r_1^B$ on
$B$. Suppose by contradiction that $r_1^A \neq r_1^B$.
We want to prove that $S$ is not optimal, i.e., that the work done
by $S$ on $A\cup B$ can be completed in a shorter time. We
set $r_1 = \left.\left({r_1^A+r_1^B}\right) \right/{2}$. We define
the schedule $S'$ as equal to $S$ except on $A \cup B$ where the ratio
allocated to $T_1$ is $r_1$ (see Fig. \ref{fig:rarb}).
\begin{figure}[tb]
\centering
\input{fig/figrpi.tex}
\caption{Schedules $S$ and $S'$ on $A\cup B$. The abscissae represent the time and the ordinates the ratio of processing power}
\label{fig:rarb}
\end{figure}
\\
The work $W_1$ on task $T_1$ under $S$ and $W'_1$ under $S'$ during $A\cup B$ are
equal to:
$$W_1 = |A|p^\alpha \left(r_1^A\right)^\alpha + |B|q^\alpha \left(r_1^B\right)^\alpha
\qquad W'_1 = r_1^\alpha \left(|A|p^\alpha + |B|q^\alpha\right)$$
Then, by concavity of $x\mapsto x^\alpha$ and the fact that $|B|q^\alpha =
|A|p^\alpha$, we deduce that $W_1'>W_1$ and, symmetrically, that $W_2'>W_2$.
Therefore, $S'$ performs strictly more work for each task during $A\cup B$ than
$S$. Thus, as in Lemma~\ref{lem:esc}, $S$ is not optimal. So
$r_1(t)$ is constant in optimal schedules.
It remains to prove that in an optimal schedule $S$, $r_1(t) =
\pi_1$; hence, the optimal schedule is unique. As $p(t)$ is a step
function, we define the sequences $\left(A_k\right)$ and
$\left(p_k\right)$ such that $A_k$ is the duration of the $k$-th
step of the function $p(t)$ and $p(t)=p_k>0$ on $A_k$. The sum of the
durations of the $A_k$'s is the makespan of $S$.
Then, denoting $V = \sum_k |A_k| p_k^\alpha$ and $r_1$ the value of $r_1(t)$,
we have:
$$
\begin{array}{c}\displaystyle
L_1 = \sum_k |A_k| r_1^\alpha p_k^\alpha = r_1^\alpha V
\qquad\displaystyle\text{and}\qquad
L_2 = \sum_k |A_k| (1-r_1)^\alpha p_k^\alpha = (1-r_1)^\alpha V
\end{array}
$$
Then, $\ r_1 = {1}\left/\left({1+\left({L_2}/{L_1}\right)^
{1/\alpha}}\right)\right. = \pi_1$.\qed
\end{proof}
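The closed form for $\pi_1$ can be exercised numerically. The sketch below uses assumed values ($\alpha=1/2$, $L_1=3$, $L_2=12$, and an arbitrary step profile) and checks that, with the constant ratios $\pi_1$ and $1-\pi_1$, the works accumulated on the two tasks stay in the ratio $L_1/L_2$, so that both tasks complete at the same instant:

```python
ALPHA = 0.5  # assumed speedup exponent

def pi1(l1, l2, alpha=ALPHA):
    # pi_1 = 1 / (1 + (L2/L1)^(1/alpha))
    return 1.0 / (1.0 + (l2 / l1) ** (1.0 / alpha))

# Step profile: pairs (duration |A_k|, processor count p_k).
profile = [(1.0, 4), (2.0, 9), (0.5, 1)]
l1, l2 = 3.0, 12.0
r1 = pi1(l1, l2)                              # here 1/17

V = sum(d * p ** ALPHA for d, p in profile)   # V = sum |A_k| p_k^alpha
# Work done on each task with constant ratios r1 and 1 - r1:
w1 = r1 ** ALPHA * V
w2 = (1 - r1) ** ALPHA * V
# The accumulated works stay in the ratio L1/L2 at every instant,
# so both tasks reach their lengths simultaneously.
assert abs(w1 / w2 - l1 / l2) < 1e-9
```

With these values, $\pi_1 = 1/(1+4^2) = 1/17$ and $w_1/w_2 = (1/16)^{1/2} = 1/4 = L_1/L_2$, independently of the chosen step profile.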
\begin{lemma}
\label{lem:ratequiv}
Let $G$ be the parallel composition of tasks $T_1$ and $T_2$, with
$p(t)$ a step function, and $S$ an optimal schedule. Then, the
makespan of $G$ under $S$ is equal to the makespan of the task
$T_G$ of length $\LG G = \LG{\para 12}$.
\end{lemma}
\begin{proof}
We characterize $p(t)$ by the sequences $(A_k)$ and $(p_k)$ as in the proof of
Lemma~\ref{lem:rate}. We know by Lemma~\ref{lem:rate} that the share allocated
to $T_1$ is constant and equal to $\pi_1p_k$ on each interval $A_k$.
%
Then, by summing the work done on each interval for both tasks, one can prove
that they are completed simultaneously, and that this completion time is the
same as that of task $T_G$ under the same processor profile.\qed
\end{proof}
\begin{theorem}
\label{th:step}
For every graph $G$, if $p(t)$ is a step function, $G$ has the same
optimal makespan as its equivalent task $T_G$ of length $\LG
G$ (computed as in Definition~\ref{def.eq-task}). Moreover, there is a unique optimal schedule, and it can be
computed in polynomial time.
\end{theorem}
\begin{proof}
In this proof, we only consider optimal schedules. Therefore, when
the makespan of a graph is considered, this is implicitly its
optimal makespan.
%
We first remark that in any optimal schedule, as $p(t)$ is a step
function and because of Lemma \ref{lem:esc}, only step functions are
used to allocate processors to tasks, and so Lemma
\ref{lem:ratequiv} can be applied to any subgraph of $G$ without
checking that the {processor profile}\xspace is also a step function for this
subgraph.
%
We now prove the result by induction on the structure of $G$.
\begin{itemize}
\item $G$ is a single task. The result is immediate.
\item $G$ is the series composition of $G_1$ and $G_2$. By
induction, $G_1$ (resp. $G_2$) has the same makespan as task
$T_{G_1}$ (resp. $T_{G_2}$) of length $\LG {G_1}$ (resp. $\LG {G_2}$) under any
{processor profile}\xspace. Therefore, the makespan of $G$ is equal to $\LG G = \LG {\seri {G_1}{G_2}} =
\LG {G_1} + \LG {G_2}$.
%
The unique optimal schedule of $G$ under $p(t)$ processors is the
concatenation of the optimal schedules of $G_1$ and $G_2$.
\item $G$ is the parallel composition of $G_1$ and $G_2$. By
induction, $G_1$ (resp. $G_2$) has the same makespan as task
$T_{G_1}$ (resp. $T_{G_2}$) of length $\LG {G_1}$ (resp. $\LG {G_2}$) under any
{processor profile}\xspace.
Consider an optimal schedule $S$ of $G$ and let $p_1(t)$ be the
{processor profile}\xspace allocated to $G_1$. Let $\tilde S$ be the schedule of
$(\para{T_{G_1}}{T_{G_2}})$ that allocates $p_1(t)$ processors to
$T_{G_1}$. $\tilde S$ is optimal and achieves the same makespan as $S$
for $G$ because $T_{G_1}$ and $G_1$ (resp. $T_{G_2}$ and $G_2$) have the
same makespan under any {processor profile}\xspace. Then, by Lemma
\ref{lem:ratequiv}, $\tilde S$ (so $S$) achieves the same makespan as
the optimal makespan of the task $T_G$ of length $\LG {\para{G_1}{G_2}} = \LG G$.
Moreover, by Lemma \ref{lem:rate} applied on $(\para{T_{G_1}}{T_{G_2}})$,
we have $p_1(t) = \pi_1 p(t)$. By induction, the unique optimal
schedules of $G_1$ and $G_2$ under respectively $p_1(t)$ and
$(p(t)-p_1(t))$ processors can be computed. Therefore, there is a
unique optimal schedule of $G$ under $p(t)$ processors: the
parallel composition of these two schedules.
\end{itemize}
Therefore, there is a unique optimal schedule for $G$ under $p(t)$. Moreover,
it can be computed in polynomial time. We describe here the algorithm to
compute the optimal schedule of a tree $G$, but
it can be extended to handle SP graphs. The length of the equivalent
task of each subtree of $G$ can be computed in polynomial time by a
depth-first search of the tree (assuming that raising a number to the power
$\alpha$ or $1/\alpha$ can be done in polynomial time). Hence, the ratios
$\pi_1$ and $\pi_2$ for each parallel composition can also be computed in
polynomial time. Finally, these ratios imply the computation in linear time of
the ratios of the processor profile that should be allocated to each task
after its children are completed, which describes the optimal schedule.\qed
\end{proof}
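For the special case of a parallel-only composition, the ratio computation described above can be sketched as a depth-first traversal: each leaf task receives the product of the $\pi$ ratios along its root-to-leaf path. The Python sketch below is a simplified illustration under assumed conventions (tasks encoded as numbers, parallel compositions as tuples, $\alpha=1/2$), not the full tree algorithm:

```python
ALPHA = 0.5  # assumed speedup exponent

def length(g, alpha=ALPHA):
    # L of a parallel-only composition (Definition of equivalent tasks).
    if isinstance(g, (int, float)):
        return float(g)
    _, g1, g2 = g
    return (length(g1) ** (1 / alpha)
            + length(g2) ** (1 / alpha)) ** alpha

def ratios(g, share=1.0, alpha=ALPHA, out=None):
    # Depth-first traversal: at each parallel composition, split the
    # current share with pi_1 = L(G1)^(1/alpha) / L(G)^(1/alpha).
    if out is None:
        out = []
    if isinstance(g, (int, float)):
        out.append(share)          # fraction of p(t) for this leaf
        return out
    _, g1, g2 = g
    p1 = length(g1) ** (1 / alpha) / length(g) ** (1 / alpha)
    ratios(g1, share * p1, alpha, out)
    ratios(g2, share * (1 - p1), alpha, out)
    return out

# Three tasks in parallel: (T1 || (T2 || T3)) with lengths 1, 1, 4.
r = ratios(('p', 1.0, ('p', 1.0, 4.0)))
assert abs(sum(r) - 1.0) < 1e-9    # all processors are used
```

With $\alpha=1/2$ the computed fractions are $1/18$, $1/18$ and $16/18$: the ratio of each leaf equals $L_i^{1/\alpha}$ divided by the sum of the $L_j^{1/\alpha}$, as the closed form of $\pi_1$ suggests.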
\section{Extensions to Distributed Memory}
\label{sec:dist}
The objective of this section is to extend the previous results to the
case where the computing platform is composed of several nodes with
their own private memory. In order to avoid the large communication overhead
of processing a task on cores distributed across several
nodes, we forbid such a multi-node execution: the tasks of the tree can
be distributed on the whole platform but each task has to be processed on a
single node. We prove that this additional constraint, denoted by
\ensuremath{\mathcal{R}}\xspace, renders the problem much more difficult. We concentrate first
on platforms with two homogeneous nodes and then with two heterogeneous
nodes.
\subsection{Two Homogeneous Multicore Nodes}
\label{sec:dist-hom}
In this section, we consider a multicore platform composed of two
equivalent nodes having the same number of computing cores $p$. We
also assume that all the tasks $T_i$ have the same speedup function
$p_i^\alpha$ on both nodes.
We first show that finding a schedule with minimum makespan is weakly
NP-complete, even for independent tasks:
\begin{theorem}
Given two homogeneous nodes of $p$ processors, $n$ independent tasks
of sizes $L_1, ..., L_n$ and a bound $T$, the problem of finding a
schedule of the $n$ tasks on the two nodes that respects \ensuremath{\mathcal{R}}\xspace, and
whose makespan is not greater than $T$, is (weakly) NP-complete for
all values of the $\alpha$ parameter defining the speedup function.
\end{theorem}
The proof relies on the Partition problem, which is known to be weakly
(i.e., binary) NP-complete~\cite{gareyjohnson}, and uses tasks of
length $L_i=a_i^\alpha$, where the $a_i$'s are the numbers from the
instance of the Partition problem. We recall that we assume that
functions $x\mapsto x^\alpha$ and $x\mapsto x^{1/\alpha}$ can be
computed in polynomial time.
\iflong
\else
Details can be found in the companion research report~\cite{RR-ipdps-2014}.
\fi
We also provide a constant ratio approximation algorithm. We recall
that a $\rho$-approximation provides on each instance a solution whose
objective $z$ is such that $z \leq \rho z^*$, where $z^*$ is the optimal
value of the objective on this instance.
\begin{theorem}
\label{th.43approx}
There exists a polynomial time \ensuremath{\left(\frac{4}{3}\right)^\alpha}-approximation algorithm for
the makespan minimization problem when scheduling a tree of malleable tasks on
two homogeneous nodes.
\end{theorem}
\iflong\else Due to lack of space, we refer the interested reader to
the companion research report
for the complete description of the
algorithm and proof~\cite{RR-ipdps-2014}. The proof of the
approximation ratio consists in comparing the proposed solution to the
optimal solution on a single node made of $2p$ processors, denoted
\ensuremath{\mathcal{S}_{\mathrm{PM}}}\xspace. Such an optimal solution can be computed as proposed in the
previous section, and its makespan is a lower bound on the optimal
makespan on two nodes of $p$ processors. The general picture of the proposed
algorithm is the following. First, the root of the tree is arbitrarily
allocated to
the $p$ processors of one of the two nodes. Then, the subtrees $S_i$'s
rooted at the root's children are considered. If none of these
subtrees is allocated more than $p$ processors in \ensuremath{\mathcal{S}_{\mathrm{PM}}}\xspace, then we show
how to ``pack'' the subtrees on the two nodes and bound the slow-down
by \ensuremath{\left(\frac{4}{3}\right)^\alpha}. Otherwise, if one of the $S_i$'s is allocated more
than $p$ processors in \ensuremath{\mathcal{S}_{\mathrm{PM}}}\xspace, then we allocate $p$ processors to its
root, and recursively call the algorithm on its children and on the
remaining subtrees.
\fi
\iflong
The proof of this theorem proceeds by induction on the structure of the
tree and relies on the following lemmas. The approximation algorithm
is summarized in Algorithm~\ref{alg:hybapp}. Two base cases
are distinguished. One of them is easily handled, and the second one is
handled by Lemma \ref{lem:x<1}. The inductive step is proved in the
theorem, and uses Lemmas
\ref{lem:muopt}, \ref{lem:uopt}, and \ref{lem:slb}. Lemma
\ref{lem:chainroot} allows us to restrict to a slightly simpler class
of graphs.
Let \ensuremath{\mathcal{S}_{\mathrm{OPT}}}\xspace be a makespan-optimal schedule, and \ensuremath{{M}_{\mathrm{OPT}}}\xspace be its makespan.
We consider the optimal schedule $\ensuremath{\mathcal{S}_{\mathrm{PM}}}\xspace$ of $G$ on $2p$ processors
without the constraint \ensuremath{\mathcal{R}}\xspace. $\ensuremath{\mathcal{S}_{\mathrm{PM}}}\xspace$ is then a PM schedule, and hence a PFC schedule. Its
makespan is $\ensuremath{M^{\mathrm{PM}}}\xspace_{2p} = \LG G \big/ (2p)^\alpha$, which is thus a
lower bound on the optimal makespan under the restriction \ensuremath{\mathcal{R}}\xspace.
One can observe that a $2^\alpha$ approximation is immediate: a
solution is the PM schedule of $G$ under only $p$ processors, whose
makespan is $\ensuremath{M^{\mathrm{PM}}}\xspace_p = \LG G \big / p^\alpha$. As the optimal makespan
is not smaller than $\ensuremath{M^{\mathrm{PM}}}\xspace_{2p}$, $\ensuremath{M^{\mathrm{PM}}}\xspace_p$ is indeed a
$2^\alpha$-approximation.
Let $\{c_i\}$, for $i\in [1,n_c]$, be the set of children of the root
of $G$, and let $C_i$ be the subtree of $G$ rooted at $c_i$ and
including its descendants. We can suppose that the indices are ordered
such that the $\LG{C_i}$'s are in decreasing order. We denote
$\sigma_c= \sum_{i=1}^{n_c} \LG{C_i}^{1/\alpha}$.
We denote $x=\frac{2\LG{C_1}^{1/\alpha}}{\sigma_c}$, which means that
$xp$ processors are dedicated to $C_1$ in \ensuremath{\mathcal{S}_{\mathrm{PM}}}\xspace.
The following lemma, whose proof is immediate, allows us to restrict the
following discussion to a slightly simpler class of graphs:
\begin{lemma}
\label{lem:chainroot}
We can suppose without loss of generality that the length of the root
of $G$ is $0$ and that the root has at least two children. Otherwise, the
chain starting at the root can be aggregated into a single task of
length $0$ before computing the schedule of this modified graph
$\tilde{G}$. It is then immediate to adapt it to the original graph,
by allocating $p$ processors to each task of this chain.
\end{lemma}
\begin{lemma}
\label{lem:x<1}
If $x \leq 1$, then a \ensuremath{\left(\frac{4}{3}\right)^\alpha}-approximation can be computed in polynomial time.
\end{lemma}
\begin{proof}
Let $p_i$ be the share allocated to $C_i$ in \ensuremath{\mathcal{S}_{\mathrm{PM}}}\xspace. Each $p_i$ is constant because $\ensuremath{\mathcal{S}_{\mathrm{PM}}}\xspace$ is PFC by definition. By hypothesis, we have $p_i\leq p$ for all $i$, as $\LG{C_1}$ is the largest $\LG{C_i}$ and its share is equal to $xp \leq p$.
If $n_c=2$, both $p_1$ and $p_2$ are not larger than $p$; since they sum to $2p$, both are equal to $p$. Therefore, the schedule $\ensuremath{\mathcal{S}_{\mathrm{PM}}}\xspace$ respects restriction $\ensuremath{\mathcal{R}}\xspace$, is then optimal, and is thus a \ensuremath{\left(\frac{4}{3}\right)^\alpha}-approximation.
Otherwise, we have $n_c\geq 3$ and we partition the indices $i$ in three sets $S_1$, $S_2$, $S_3$ such that the sum $\Sigma_k$ of $p_i$'s corresponding to each set $S_k$ is not greater than $p$:
$ \forall k \in \{1,2,3\},\ \Sigma_k = \sum_{i \in S_k} p_i \leq p$
, which is always possible because no $p_i$ is greater than $p$ and the sum of all $p_i$'s is $2p$. Indeed, we just have to iteratively place the largest remaining $p_i$ in the set that has the lowest $\Sigma_k$. If a $\Sigma_k$ exceeded $p$, it was at least equal to $p/2$ at the previous step, and so were both other $\Sigma_k$'s: the sum of all $p_i$'s would then exceed $2p$, which is impossible.
Then, we place the set with the largest $\Sigma_k$, say $S_1$, on one half of the processing power, and aggregate the two smallest, $S_2\cup S_3$ in the other half. We now compute the PM schedule of $S_1$ with $p$ processors and $S_2\cup S_3$ with $p$ processors. The makespan is then $M = \max{\left(\LG{S_1},\LG{\para{S_2}{S_3}}\right)}\big/p^\alpha= \LG{\para{S_2}{S_3}}\big/p^\alpha$.
Indeed, we have $\Sigma_1 \leq p \leq \Sigma_2+\Sigma_3$ and $\LG{S_1}\big/\Sigma_1^\alpha = \LG{\para{S_2}{S_3}}\big/(\Sigma_2+\Sigma_3)^\alpha$, as these quantities represent the makespan of each part of the tree in $\ensuremath{\mathcal{S}_{\mathrm{PM}}}\xspace$, and all subtrees $C_i$ terminate simultaneously in \ensuremath{\mathcal{S}_{\mathrm{PM}}}\xspace. So $\LG{S_1}\leq\LG{\para{S_2}{S_3}}$.
We know that $\Sigma_1 \geq \max\left(\Sigma_2,\Sigma_3\right)$ and $\Sigma_1+\Sigma_2+\Sigma_3 =2p$, so $\Sigma_1\geq \frac 23p$, then $\Sigma_2+\Sigma_3\leq \frac 43 p$.
Therefore, in $\ensuremath{\mathcal{S}_{\mathrm{PM}}}\xspace$, $\Sigma_2+\Sigma_3 \leq \frac 43p$ processors are allocated to $S_2\cup S_3$.
Then, the makespan of $\ensuremath{\mathcal{S}_{\mathrm{PM}}}\xspace$ verifies $\ensuremath{M^{\mathrm{PM}}}\xspace_{2p} \geq \LG{\para{S_2}{S_3}} \big/ \left( \frac 43 p \right)^\alpha$, and so $M/\ensuremath{M^{\mathrm{PM}}}\xspace_{2p} \leq \ensuremath{\left(\frac{4}{3}\right)^\alpha}$. Therefore, the schedule is indeed a \ensuremath{\left(\frac{4}{3}\right)^\alpha} approximation.\qed
\end{proof}
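The packing step of this proof is constructive. The following sketch of the greedy placement (with hypothetical shares summing to $2p$) puts each share, in decreasing order, into the currently least-loaded of the three sets, so that no set exceeds $p$:

```python
def three_partition(shares, p):
    # Greedy placement used in the proof of the lemma: place shares in
    # decreasing order into the set with the currently smallest total.
    # The sums stay <= p whenever max(shares) <= p and
    # sum(shares) == 2 * p, by the counting argument of the proof.
    sets = [[], [], []]
    totals = [0.0, 0.0, 0.0]
    for s in sorted(shares, reverse=True):
        k = totals.index(min(totals))  # least-loaded set
        sets[k].append(s)
        totals[k] += s
    return sets, totals

# Hypothetical shares p_i summing to 2p = 20 for p = 10:
shares = [5.0, 4.0, 4.0, 3.0, 2.0, 2.0]
sets, totals = three_partition(shares, 10.0)
assert max(totals) <= 10.0
```

The set with the largest total then plays the role of $S_1$ on one node, while the two others are aggregated on the second node, as in the proof.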
\begin{defi}
\label{def:Rq}
For any $0<q\leq p$, let $\ensuremath{\mathcal{R}}\xspace_q$ be the constraint that forces $q$ processors to be allocated to $c_1$.
\end{defi}
\begin{figure}[ht]
\centering
\input{fig/figsu.tex}
\caption{A schedule $S_u$, for $u<p$ on the left and the schedule $\ensuremath{\mathcal{S}_{\mathrm{PM}}}\xspace=S_{xp}$ on the right}
\label{fig:hybsu}
\end{figure}
\begin{defi}
\label{def:su}
We denote by $B$ the subgraph $G\setminus \mathit{root} \setminus C_1$.
We define the schedule $S_u$, parametrized by $u\in]0,p]\cup\{xp\}$,
which respects $\ensuremath{\mathcal{R}}\xspace_u$ but not necessarily $\ensuremath{\mathcal{R}}\xspace$. It allocates a constant share
$u\leq p$ of processors to $c_1$ until it is terminated. Meanwhile,
$2p-u$ processors are allocated to schedule a part $B_u$ of $B$ ($B_u$
may contain fractions of tasks). Beforehand, the rest of the graph, which
is composed of $C_1\setminus c_1$ and of the potential remaining part
$\bar B_u$ of $B$, is scheduled on $2p$ processors by a PM schedule,
regardless of the $\ensuremath{\mathcal{R}}\xspace$ constraint. We denote by $v_u$ the share
allocated to $C_1\setminus c_1$ and by $M_u$ the makespan of the
schedule. See Fig.~\ref{fig:hybsu} for an illustration.
Let $G_{u,1}$ be the graph $\para{c_1}{B_{u}}$ and $G_{u,2}$ be the graph $\para{\left(C_1\setminus c_1\right)}{\bar B_u}$.
We denote by $\Delta_{u,1}$ (resp. $\Delta_{u,2}$) the execution time of $G_{u,1}$ (resp. $G_{u,2}$) in $S_u$. Then, $M_u=\Delta_{u,1}+\Delta_{u,2}$.
One can note that the PM schedule $\ensuremath{\mathcal{S}_{\mathrm{PM}}}\xspace$ is equal to $S_{xp}$, where $u=v_u=xp$.
\end{defi}
\begin{lemma}
\label{lem:muopt}
For any $u\in]0,p]$, under the constraint $\ensuremath{\mathcal{R}}\xspace_u$, the makespan-optimal schedule is $S_u$, of makespan $M_u$.
\end{lemma}
\begin{proof}
Let $S$ be the makespan-optimal schedule that respects the constraint $\ensuremath{\mathcal{R}}\xspace_u$. We want to show that $S= S_u$.
First, suppose that $c_1$ terminates before $B$ in $S$. This means that a time range is dedicated to scheduling $B$ at the end of the schedule. We can slightly modify $S$ by moving this time range to the beginning of the schedule. This is possible as there is no precedence constraint between $B$ and $C_1$, and the same tasks of $B$ can be performed in the new processor profile, by a PM schedule on $B$. So we now assume that the schedule terminates with the execution of $c_1$.
Because of $\ensuremath{\mathcal{R}}\xspace_u$, $S$ must allocate $u$ processors to $c_1$ at the end of the schedule. In parallel to $c_1$, only $B$ can be executed, and before the execution of $c_1$, both subgraphs $B$ and $C_1\setminus c_1$ can be executed.
Suppose that $S$ differs from $S_u$ in the execution of $B$ in parallel to $c_1$. Consider the schedule of $C_1$ fixed, and let $p_b(t)$ be the number of processors allocated to $B$ at the time $t$ in $S$. As $B_u$ is scheduled according to PM ratios in $S_u$, and $S$ differs from this schedule, in $S$, $B$ is not scheduled according to PM ratios under the processor profile $p_b(t)$. Then, this schedule can be modified to schedule $B$ in a smaller makespan, and then to schedule the whole graph $G$ in a smaller makespan, which contradicts the makespan-optimality of $S$.
So $S$ and $S_u$ are equal during the time interval $\Delta_{u,1}$. Then, it remains to schedule the graph $G_{u,2}=\para{\left(C_1\setminus c_1\right)}{\bar B_u}$, which has a unique optimal schedule, the PM schedule, that is followed by $S_u$. Therefore, $S=S_u$.\qed
\end{proof}
\begin{lemma}
\label{lem:uopt}
$S_p$ is the makespan-optimal schedule among the $S_w$ for $w\in]0,p]$, i.e., we have $p=\argmin_{w\in]0,p]}\left(M_w\right)$.
\end{lemma}
\begin{proof}
Let $\displaystyle \ensuremath{u_{\mathrm{OPT}}}\xspace=\argmin_{w\in]0,p]}\left(M_w\right)$. We will prove here that $\ensuremath{u_{\mathrm{OPT}}}\xspace=p$.
For simplicity, in this proof we denote $\ensuremath{u_{\mathrm{OPT}}}\xspace$ by $u$, $v_{u}$ by $v$, $\Delta_{u,1}$ by $\Delta_{1}$ and $\Delta_{u,2}$ by $\Delta_{2}$. We then consider the schedule $S_u$, which is makespan-optimal among the $S_w$, for $w\in]0,p]$.
Suppose by contradiction that $u<p$. We will build another schedule $\bar{S}$ following the constraint $\ensuremath{\mathcal{R}}\xspace_{\bar{u}}$ for a certain $\bar{u}$ respecting $u<\bar{u}<p$, which will contradict the optimality of $S_u$.
\bigskip
The following paragraphs prove the inequality $v>p$, which can be intuitively seen by inspecting the schedules.
As we have $x> 1$, we know that $\LG{C_1\setminus c_1} > \LG{\bar B_{xp}} = \LG{B} - \LG{B_{xp}} > \LG{B}-\LG{c_1}$. The first inequality holds because in $S_{xp}$, the subgraphs $C_1\setminus c_1$ and $\bar B_{xp}$ are scheduled in parallel, and each subgraph is scheduled according to the PM ratios. Then, each subgraph can be replaced by its equivalent task. Moreover, $xp$ (resp. $(2-x)p$) processors are allocated to $C_1\setminus c_1$ (resp. $\bar B_{xp}$). As $x>1$, more processors are allocated to $C_1\setminus c_1$, so $\LG{C_1\setminus c_1} > \LG{\bar B_{xp}}$.
By the same reasoning applied to $c_1$ and $B_{xp}$ in $S_{xp}$, we get $\LG{B_{xp}}<\LG{c_1}$ and the second inequality holds. See Fig.~\ref{fig:hybsu} for an illustration.
With similar arguments applied to the subgraphs $c_1$ and $B_u$ in the schedule $S_u$, and using the hypothesis $u<p$, we get $\LG{B_u} > \LG{c_1}$. The difference with the previous case is that the share of processors allocated to both subgraphs is not computed by the PM ratios; but as $B_u$ is scheduled under $(2p-u)$ processors with the PM ratios, it can still be replaced by its equivalent task.
Combining these two inequalities, we have $\LG{\bar B_u} < \LG{B}-\LG{c_1} < \LG{C_1\setminus c_1}$, and by using the same reasoning in the other direction with the parallel execution of $C_1\setminus c_1$ and $\bar B_u$ in $S_u$, we finally prove $v>p$.
\begin{figure}[!ht]
\centering
\input{fig/figsb.tex}
\caption{Schedules $S_u$ (left) and $\barS$ (right), assuming that $B$ begins after $C_1$ in $\barS$}
\label{fig:hybsb}
\end{figure}
\bigskip
Let $\varepsilon>0$ be small enough such that $u+\varepsilon\Delta_{2}<p$ and $v-\varepsilon\Delta_{1}>p$. Let $\bar{u}=u+\varepsilon\Delta_{2}$ and $\bar{v}=v-\varepsilon\Delta_{1}$. One can note that $0<u<\bar{u}<p<\bar{v}<v$.
Let $\bar{S}$ be the schedule allocating $\bar{u}$ processors to $C_1$ during a time $\Delta_{1}$ at the end of the schedule, and $\bar{v}$ processors to $C_1$ before.
The subgraph $B$ is scheduled following PM ratios in parallel to $C_1$, in such a way that it terminates at the same time as $c_1$ and there is no idle time after the beginning of its execution. The subgraph $C_1$ is scheduled in the same way as $B$, following PM ratios as soon as its execution begins.
One can note that $B$ and $C_1$ do not necessarily begin simultaneously. See Fig. \ref{fig:hybsb} for an illustration of the case where $B$ begins after $C_1$. Let $\bar M$ be the makespan of $\bar{S}$.
As $\bar u>u$, $c_1$ is completed in a time smaller than $\Delta_1$ in $\barS$, so $\bar S$ respects the constraint $\ensuremath{\mathcal{R}}\xspace_{\bar u}$.
Then, by Lemma \ref{lem:muopt}, as $\barS \neq S_{\bar u}$, we know that $\bar M> M_{\bar u}$. In addition, by the definition of $M_u$, we get $\bar M> M_{\bar u}\geq M_u$.
We can assume without loss of generality that $C_1$ and $B$ each consist of a single task. This can be achieved by replacing them by their equivalent tasks, which does not change their execution times. We do not lose generality by this transformation here because both subgraphs are scheduled under the PM rules. For example, we could not consider the equivalent task of the whole graph $G$, because constraints of the form $\ensuremath{\mathcal{R}}\xspace_w$, with $w\neq xp$, are followed, and so the PM rules are not respected.
Let $\bar\Delta$ be the interval of time ending when $S_u$ terminates and having a length of $M_u$. See Fig. \ref{fig:hybsb} for an illustration.
Let $\bar W_C$ (resp. $\bar W_B$) be the total length of the task $C_1$ (resp. $B$) that is executed in $\bar{S}$ during $\bar\Delta$. Similarly, we define $W_C$ (resp. $W_B$) the total length of the task $C_1$ (resp. $B$) that is executed in $S_u$. These two last quantities are equal to:
\begin{align*}
W_C &= \Delta_{1}u^\alpha+\Delta_{2}v^\alpha \left( = \LG{C_1}\right)\\
W_B &= \Delta_{1}(2p-u)^\alpha+\Delta_{2}(2p-v)^\alpha \left( = \LG{B}\right)
\end{align*}
As $\bar M> M_u$, we cannot have simultaneously $\bar W_C\geq W_C$ and $\bar W_B\geq W_B$. Indeed, if that were the case, all the tasks of $G$ would be completed by $\barS$ within a makespan of at most $M_u$, which is a contradiction.
For each task, we separate two cases.
If $C_1$ begins in $\barS$ during $\bar\Delta$, then $\bar W_C=\LG{C_1}=W_C$, because the execution of $C_1$ then takes place entirely within $\bar\Delta$.
Otherwise, we have:
$$\bar W_C = \Delta_{1}\bar{u}^\alpha+\Delta_{2}\bar{v}^\alpha$$
We know that $0<u<\bar{u}<\bar{v}<v$.
Therefore, by the concavity of the function $t\mapsto t^\alpha$, we have the inequality below. The plot on the right illustrates it: the slope of the red segment corresponding to $u$ and $\bar u$ is larger than that of the other one.
\vspace{.25em}
\begin{minipage}{.36\columnwidth}
\begin{align*}
\frac{\bar{u}^\alpha-u^\alpha}{\bar u - u} &> \frac{v^\alpha-\bar{v}^\alpha}{ v -\bar v}
\end{align*}
\end{minipage}
\begin{minipage}{.58\columnwidth}
\input{fig/figconcav.tex}
\end{minipage}
\vspace{.5em}
Then, we derive from this inequality the fact that $\bar W_C$ is larger than $W_C$:
\begin{align*}
\frac{\bar{u}^\alpha-u^\alpha}{\bar u - u} &> \frac{v^\alpha-\bar{v}^\alpha}{ v -\bar v}\\
\frac{\bar{u}^\alpha-u^\alpha}{\varepsilon\Delta_{2}} &> \frac{v^\alpha-\bar{v}^\alpha}{\varepsilon\Delta_{1}}\\
{\Delta_{1}}\left(\bar{u}^\alpha-u^\alpha\right) &> \Delta_{2}\left(v^\alpha-\bar{v}^\alpha\right)\\
\Delta_{1}\bar{u}^\alpha+\Delta_{2}\bar{v}^\alpha &> \Delta_{1}u^\alpha+\Delta_{2}v^\alpha\\
\bar W_C &> W_C
\end{align*}
Therefore, in any case, we have $\bar W_C \geq W_C$.
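This chain of inequalities can be spot-checked numerically, e.g., with assumed sample values $\alpha=1/2$, $u=2$, $v=8$, $\Delta_1=\Delta_2=1$ and $\varepsilon=1/2$ (so that $u<\bar u<\bar v<v$):

```python
ALPHA = 0.5  # assumed speedup exponent

# Sample values satisfying 0 < u < u_bar < v_bar < v:
u, v, d1, d2, eps = 2.0, 8.0, 1.0, 1.0, 0.5
u_bar = u + eps * d2   # 2.5
v_bar = v - eps * d1   # 7.5

# Work on C_1 in S-bar versus S_u over the two sub-intervals:
w_bar = d1 * u_bar ** ALPHA + d2 * v_bar ** ALPHA
w = d1 * u ** ALPHA + d2 * v ** ALPHA

# Concavity of x -> x^alpha: shifting processors from the large share
# v to the small share u increases the total work.
assert w_bar > w
```

The same computation with the complements $2p-u$, $2p-\bar u$, $2p-\bar v$, $2p-v$ illustrates the symmetric chain for $\bar W_B$ below.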
\bigskip
Then, we treat similarly the subgraph $B$.
If $B$ begins in $\barS$ during $\bar\Delta$, then $\bar W_B=\LG{B}=W_B$.
Otherwise, we have:
$$\bar W_B = \Delta_{1}(2p-\bar{u})^\alpha+\Delta_{2}(2p-\bar{v})^\alpha$$
Similarly, we know that $2p-u>2p-\bar{u}>2p-\bar{v}>2p-v>0$. Therefore, by the concavity of the function $t\mapsto t^\alpha$, we have:
\begin{align*}
\frac{(2p-{u})^\alpha-(2p-\bar u)^\alpha}{\bar u - u} &< \frac{(2p-\bar v)^\alpha-(2p-{v})^\alpha}{v - \bar v}\\
\frac{(2p-{u})^\alpha-(2p-\bar u)^\alpha}{\varepsilon\Delta_{2}} &< \frac{(2p-\bar v)^\alpha-(2p-{v})^\alpha}{\varepsilon\Delta_{1}}\\
\frac{(2p-\bar{u})^\alpha-(2p-u)^\alpha}{\varepsilon\Delta_{2}} &> \frac{(2p-v)^\alpha-(2p-\bar{v})^\alpha}{\varepsilon\Delta_{1}}\\
{\Delta_{1}}\left((2p-\bar{u})^\alpha-(2p-u)^\alpha\right) &> \Delta_{2}\left((2p-v)^\alpha-(2p-\bar{v})^\alpha\right)\\
\Delta_{1}(2p-\bar{u})^\alpha+\Delta_{2}(2p-\bar{v})^\alpha &> \Delta_{1}(2p-u)^\alpha+\Delta_{2}(2p-v)^\alpha\\
\bar W_B &> W_B
\end{align*}
Then, in any case, we have both $\bar W_C\geq W_C$ and $\bar W_B\geq W_B$, which yields the contradiction.
Therefore, we have $u\geq p$, and so $u=p$.\qed
\end{proof}
\begin{lemma}
\label{lem:slb}
The makespan of $S_p$ is a lower bound of $\ensuremath{{M}_{\mathrm{OPT}}}\xspace$.
\end{lemma}
\begin{proof}
One can note that in $\ensuremath{\mathcal{S}_{\mathrm{OPT}}}\xspace$, a constant share $u_*\leq p$ of processors must be allocated to $c_1$ due to \ensuremath{\mathcal{R}}\xspace, as in $S_{u_*}$. Indeed, if this share were not constant, then, because of the concavity of the function $t\mapsto t^\alpha$, it would be better to always allocate its mean value to $c_1$. This would allow both $c_1$ and the tasks executed in parallel with $c_1$ on the same node to terminate earlier.
Let $\ensuremath{\mathcal{R}}\xspace'$ be the constraint that enforces a schedule to respect one of the constraint $\ensuremath{\mathcal{R}}\xspace_w$, for any $w\in]0,p]$. Otherwise stated, $\ensuremath{\mathcal{R}}\xspace'$ enforces a schedule to allocate a constant share $w$ not larger than $p$ to $c_1$.
Let $\ensuremath{\mathcal{S}_{\mathrm{OPT}}}\xspace'$ be the makespan-optimal schedule respecting $\ensuremath{\mathcal{R}}\xspace'$, and let $\ensuremath{{M}_{\mathrm{OPT}}}\xspace'$ be its makespan. As $\ensuremath{\mathcal{S}_{\mathrm{OPT}}}\xspace$ respects $\ensuremath{\mathcal{R}}\xspace'$, we have $\ensuremath{{M}_{\mathrm{OPT}}}\xspace\geq \ensuremath{{M}_{\mathrm{OPT}}}\xspace'$.
Furthermore, there exists $u'\leq p$ such that $\ensuremath{\mathcal{S}_{\mathrm{OPT}}}\xspace'=S_{u'}$. Therefore, $\ensuremath{{M}_{\mathrm{OPT}}}\xspace'\geq \min_{w\in]0,p]}M_{w}$, and, by Lemma \ref{lem:uopt}, we get $\ensuremath{{M}_{\mathrm{OPT}}}\xspace'\geq M_{p}$.
Finally, we have $\ensuremath{{M}_{\mathrm{OPT}}}\xspace\geq M_p$.\qed
\end{proof}
We are now able to prove Theorem~\ref{th.43approx}, by induction on
the tree structure. The corresponding approximation algorithm is
described in Algorithm~\ref{alg:hybapp}.
\begin{proof}[of Theorem~\ref{th.43approx}]
First, we treat the base case, where $n=1$: an optimal schedule allocates $p$ processors to the unique task.
Then, we treat the cases that do not need the heredity property.
\begin{itemize}
\item if $x\geq 1$ and $c_1$ is a leaf, no schedule can have a makespan smaller than $x^\alpha \ensuremath{M^{\mathrm{PM}}}\xspace_{2p}$, as no more than $p$ processors can be allocated to $c_1$. Then, the schedule that only differs from \ensuremath{\mathcal{S}_{\mathrm{PM}}}\xspace by reducing the share allocated to $c_1$ to $p$ is makespan-optimal.
\item if $x\leq 1$, the result is given by Lemma \ref{lem:x<1}.
\end{itemize}
Now, we suppose the result true for all $m<n$. The remaining case is when $c_1$ is not a leaf and $x>1$. Consider such a graph $G$ of $n$ nodes.
We consider the schedule $S_p$, whose makespan $M_p$ is a lower bound of \ensuremath{{M}_{\mathrm{OPT}}}\xspace as stated in Lemma \ref{lem:slb}.
We now build the schedule $S$, which achieves a $\ensuremath{\left(\frac{4}{3}\right)^\alpha}$-approximation respecting $\ensuremath{\mathcal{R}}\xspace$.
At the end of the schedule, $G_{p,1}$ is scheduled as in $S_p$. At the beginning of the schedule, we use the heredity property to derive from $S_p$ a schedule of $G_{p,2}$ that follows the \ensuremath{\mathcal{R}}\xspace constraint.
More formally, we have $G_{p,2}$, which is the parallel composition $\para{(C_1\setminus c_1)}{\bar B_p}$, composed of at most $n-1$ nodes. So, by induction, a schedule $S^r$ achieving a \ensuremath{\left(\frac{4}{3}\right)^\alpha}-approximation can be computed for $G_{p,2}$. This means that its makespan $M^r$ is at most $\ensuremath{\left(\frac{4}{3}\right)^\alpha} \Delta_{p,2}$, as $S_p$ completes $G_{p,2}$ with PM ratios in a time $\Delta_{p,2}$, which is then the optimal time.
Consider the schedule $S$ of $G$ that schedules $G_{p,2}$ as in $S^r$, then schedules $G_{p,1}$ as in $S_p$. The time necessary to complete $G_{p,1}$ is then equal to $\Delta_{p,1}$.
The makespan $M$ of $S$ then satisfies:
\begin{align*}
M &= \Delta_{p,1} + M^r \leq \Delta_{p,1} + \ensuremath{\left(\frac{4}{3}\right)^\alpha} \Delta_{p,2} \\
&\leq \ensuremath{\left(\frac{4}{3}\right)^\alpha} \left(\Delta_{p,1} + \Delta_{p,2}\right)\leq \ensuremath{\left(\frac{4}{3}\right)^\alpha} M_{p} \leq \ensuremath{\left(\frac{4}{3}\right)^\alpha} \ensuremath{{M}_{\mathrm{OPT}}}\xspace
\end{align*}
Then, $S$ is a $\ensuremath{\left(\frac{4}{3}\right)^\alpha}$-approximation.\qed
\end{proof}
\begin{figure}[!ht]
\begin{algorithmic}[1]
\Require{A graph $G$, the parameter $p$ of the processor platform $\mathcal{P}$}
\Ensure{A schedule $S$ of $G$ on $\mathcal{P}$ that is a \ensuremath{\left(\frac{4}{3}\right)^\alpha}-approximation of the makespan}
\Function{HybApp}{$G$,$p$}
\State $\tilde{G} \gets G$
\State Modify $G$ as in Lemma \ref{lem:chainroot}
\State Compute the PM schedule $\ensuremath{\mathcal{S}_{\mathrm{PM}}}\xspace$ of $G$ on $2p$ processors
\State Compute the $c_i$'s, the $C_i$'s, $B$, and $x$
\If{$x\geq 1$ and $c_1$ is a leaf}
\State Build $S$: shrink from $\ensuremath{\mathcal{S}_{\mathrm{PM}}}\xspace$ the share of processors allocated to $c_1$ to $p$ processors
\ElsIf{$x\leq1$}
\State Build $S$: map the $C_i$'s as in Lemma \ref{lem:x<1}, and compute the PM schedule on each part
\Else
\Comment{we have $x>1$ and $c_1$ is not a leaf}
\State Compute the schedule $S_p$ and partition $G$ in $G_{p,1}$ and $G_{p,2}$ as in Definition \ref{def:su}
\State $S^r \gets\ \Call{HybApp}{G_{p,2},p}$
\State Build $S$: schedule $G_{p,2}$ as in $S^r$ then $G_{p,1}$ as in $S_p$
\EndIf
\State Adapt the schedule $S$ to the original graph $\tilde{G}$ if $G\neq\tilde G$
\State \Return $S$
\EndFunction
\end{algorithmic}
\caption{Approximation algorithm for the hybrid problem}
\label{alg:hybapp}
\end{figure}
\paragraph{Worst case of the algorithm}
Let $G$ be the parallel composition of $3$ identical subtrees $G_1$, $G_2$, $G_3$ (plus the root of length 0). Each subtree $G_i$ is composed of $3$ tasks: a root $T_i$ of length $\epsilon$, and $2$ leaves $T_{i,1}$ and $T_{i,2}$ each of length $L$.
The algorithm follows the second case, and places $2$ of these trees on one part. The makespan obtained is then
$$M_1 = \frac{L}{\left(\frac p4\right)^\alpha} + \frac{\epsilon}{\left(\frac p2\right)^\alpha}$$
Now, consider the schedule that places two roots on one part, but $3$ tasks $T_{i,j}$ on each part at the beginning. Its makespan is then
$$M_2 = \frac{L}{\left(\frac p3\right)^\alpha} + \frac{\epsilon}{\left(\frac p2\right)^\alpha}$$
When $\epsilon$ is close to $0$, we get
$$\frac{M_1}{M_2} = \ensuremath{\left(\frac{4}{3}\right)^\alpha}$$
So the approximation ratio is tight.
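This worst case can be checked numerically. A minimal Python sketch, with illustrative values for $L$, $p$ and $\alpha$ (not taken from the text):

```python
# Numeric check that M_1 / M_2 tends to (4/3)^alpha as epsilon -> 0.
# L, p and alpha are illustrative values, not from the paper.
def makespans(L, p, alpha, eps):
    M1 = L / (p / 4) ** alpha + eps / (p / 2) ** alpha  # schedule found by the algorithm
    M2 = L / (p / 3) ** alpha + eps / (p / 2) ** alpha  # better schedule described above
    return M1, M2

L, p, alpha = 1.0, 12.0, 0.9
for eps in (1e-2, 1e-4, 1e-6):
    M1, M2 = makespans(L, p, alpha, eps)
    print(eps, M1 / M2)
print("limit:", (4 / 3) ** alpha)
```

As $\epsilon$ shrinks, the printed ratio converges to $(4/3)^\alpha$, matching the tightness claim.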
\fi
\subsection{Two Heterogeneous Multicore Nodes}
\label{sec:dist-het}
We suppose here that the computing platform is made of two processors
of different processing capabilities: the first one is made of $p$
cores, while the second one includes $q$ cores. We also assume that
the parameter $\alpha$ of the speedup function is the same on both
processors. As the problem gets more complicated, we concentrate here
on $n$ independent tasks, of lengths $L_1, \ldots, L_n$. Thanks to the
homogeneous case presented above, we already know that scheduling
independent tasks on two nodes is NP-complete.
\iflong \else
This problem is close to the \textsc{Subset Sum} problem. Given $n$
numbers, the optimization version of \textsc{Subset Sum} considers a
target $K$ and aims at finding the subset with maximal sum smaller
than or equal to $K$. There exist many approximation schemes for
this problem. In particular, Kellerer et
al.~\cite{kellerer2003efficient} propose a fully polynomial
approximation scheme (FPTAS). Based on this result, an approximation
scheme can be derived for our problem.
\begin{theorem}
There exists an FPTAS for the problem of scheduling independent
malleable tasks on two heterogeneous nodes, provided that, for each
task, $L_i^{1/\alpha}$ is an integer.
\end{theorem}
The proof is complex and detailed in~\cite{RR-ipdps-2014}. The
assumption on the $L_i^{1/\alpha}$s is needed to apply the FPTAS of
\textsc{Subset Sum}, which is valid only on integers.
\fi
\iflong
\begin{proof}
We reduce the problem to {\sc Subset sum}, which is known to be NP-Complete \cite{gareyjohnson}.
Consider an instance $\mathcal{I}$ of {\sc Subset sum}: we have a set $X=\{x_i\}, i\in[1,n]$, and a number $s$, and we want to know whether there exists a subset of $X$ that sums to $s$, assuming that the total sum is larger.
We construct an instance $\mathcal{J}$ of $(p,q)$-scheduling: let $T_i$ have length $L_i = x_i^{\alpha}$, $p=s$, $q=\sum x_i -p$, and $T=1$. We recall that raising a number to the power $\alpha$ is assumed feasible, so the computation of the $L_i$s is done in polynomial time. Note that $T$ is the optimal makespan without the $(p,q)$ constraint, so only the PM schedule can be a solution.
It now remains to prove that $\mathcal{I}$ is satisfiable if and only if $\mathcal{J}$ is satisfiable.
Let $p_i$ be the share of processors allocated to task $T_i$ in the PM schedule. We have $$\displaystyle p_i = (p+q) \frac{L_i^{1/\alpha}}{ \sum_k L_k^{1/\alpha}} = x_i$$
Then, $\mathcal{J}$ is satisfiable if and only if a subset of the $p_i$ sums to $s$. This is equivalent to saying that a subset of $X$ sums to $s$, i.e., that $\mathcal{I}$ is satisfiable.\qed
\end{proof}
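The reduction can be illustrated in a few lines: starting from a {\sc Subset Sum} instance, the PM shares computed from the lengths $L_i = x_i^{\alpha}$ recover exactly the numbers $x_i$. A Python sketch with made-up instance values:

```python
# Sketch of the reduction: with L_i = x_i^alpha, p = s and q = sum(X) - s,
# the PM shares on p + q processors are exactly the x_i.
# Instance values are illustrative only.
alpha = 0.9
X = [3, 5, 2, 7]                 # Subset Sum numbers
s = 8                            # target, reachable here by {3, 5}
L = [x ** alpha for x in X]      # task lengths of instance J
p, q = s, sum(X) - s
total = sum(Li ** (1 / alpha) for Li in L)
shares = [(p + q) * Li ** (1 / alpha) / total for Li in L]
print(shares)  # recovers X up to floating-point rounding
```

A feasible $(p,q)$-schedule of makespan $T=1$ thus corresponds exactly to a subset of the $x_i$ summing to $s$.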
We denote by $A$ the subset of the indices of the tasks allocated
to the $p$-part, and by $\bar A$ its complement. Then, the
schedule that partitions the tasks according to the subset $A$ and
performs a PM schedule on both parts is denoted by $S_A$.
The $(p,q)$-scheduling problem is strictly equivalent to {\sc Subset sum} only when we restrict ourselves to instances where the PM schedule is compatible with the $(p,q)$-constraint. In the general case, the minimum-makespan schedule is reached when the total idle time is minimized and the schedule is PFC.
This is equivalent to packing all the tasks into two sets $A$ and $\bar A$ such that the absolute difference between the time needed to complete $A$ with $p$ processors and the time needed to complete $\bar A$ with $q$ processors is minimized, i.e.:
$$\mathop{\mathrm{minimize}}\limits_{A} \left|\left(\frac{\sum_{i\in A} L_i^{1/\alpha}}{p}\right)^\alpha - \left(\frac{\sum_{i\in \bar A} L_i^{1/\alpha}}{q}\right)^\alpha\right|$$
$$ \mathop{\mathrm{minimize}}\limits_{A} \left|q^\alpha\left(\sum_{i\in A} x_i\right)^\alpha - p^\alpha\left(\sum_{i\in \bar A} x_i\right)^\alpha\right| \quad \textit{ where $x_i=L_i^{1/\alpha}$}$$
Note that the makespan of the schedule associated to the subset $A$ is:
$$M_A=\max\left(\frac{\ensuremath{\left(\sum_{i\in A} x_i\right)^\alpha}}{p^\alpha}, \frac{\ensuremath{\left(\sum_{i\in \bar A} x_i\right)^\alpha}}{q^\alpha}\right)$$
Consider an instance of $(p,q)$-scheduling. Let $S$ be $\sum_i x_i$, where $x_i=L_i^{1/\alpha}$ and $X$ be the set $\left\{x_i, i\in[1,n]\strut\right\}$.
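For concreteness, the makespan $M_A$ of the schedule induced by a subset $A$ can be computed directly from the formula above. A short sketch (`makespan` and the instance values are illustrative, not from the paper):

```python
# Makespan M_A of the schedule S_A: tasks indexed by A run on the p-part,
# the remaining tasks on the q-part, each part using its PM schedule.
def makespan(x, A, p, q, alpha):
    s_A = sum(x[i] for i in A)
    s_bar = sum(xi for i, xi in enumerate(x) if i not in A)
    return max((s_A / p) ** alpha, (s_bar / q) ** alpha)

x = [3.0, 5.0, 2.0, 7.0]               # x_i = L_i^(1/alpha)
print(makespan(x, {0, 1}, 8, 9, 0.9))  # balanced split: both parts end at time 1.0
```

The minimum-makespan partition is the one that balances the two terms of the `max` as closely as possible.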
\begin{lemma}
A tight lower bound of the optimal makespan is $ \ensuremath{{M}_{\mathrm{ideal}}}\xspace = \puisa{\frac{S}{p+q}}$. In such a schedule, we have $\ensuremath{\sum_{i\in A} x_i} = \frac{pS}{p+q}$. We denote this quantity by $\ensuremath{{\Sigma}_{\mathrm{ideal}}}\xspace$.
\end{lemma}
\begin{proof}
$\ensuremath{{M}_{\mathrm{ideal}}}\xspace$ is the makespan of the PM schedule, which is a lower bound of the optimal makespan, and can be reached, as in the proof of Theorem \ref{th:pqnpc}.\qed
\end{proof}
We define $r =\max\left(\frac qp,\frac pq\right)$.
For $0<\kappa<1$, a $\kappa$-approximation of {\sc Subset sum} returns a subset $A$ of $X$ such that the sum of its elements ranges between $\kappa OPT$ and $OPT$, where $$\displaystyle OPT = \max\limits_{A\ \mid\ \sum_A x_i \leq s} \sum_{A} x_i$$
For $\lambda>1$, a $\lambda$-approximation of the $(p,q)$-scheduling problem returns a schedule such that its makespan is not larger than $\lambda$ times the optimal makespan.
We note $\varepsilon_\lambda=\lambda^{1/\alpha}-1$ and $\varepsilon_\kappa = 1-\kappa$.
An AS \ensuremath{\mathcal A}\xspace solving {\sc Subset Sum} is defined as follows: given an instance $\mathcal I$ of {\sc Subset Sum} and a parameter $0<\kappa<1$, it computes a solution to $\mathcal I$ achieving a $\kappa$-approximation in time $f_\ensuremath{\mathcal A}\xspace(n,\varepsilon_{\kappa})$.
An AS \ensuremath{\mathcal B}\xspace solving $(p,q)$-scheduling is defined as follows: given an instance $\mathcal J$ of the $(p,q)$-scheduling problem and a parameter $\lambda>1$, it computes a solution to $\mathcal J$ achieving a $\lambda$-approximation in time $f_\ensuremath{\mathcal B}\xspace(\mathcal J,\varepsilon_\lambda)$.
\newcommand{\max\left(3,\left\lceil\frac{1}{\ek}-4\right\rceil\right)}{\max\left(3,\left\lceil\frac{1}{\varepsilon_{\kappa}}-4\right\rceil\right)}
\begin{remark}
There exists an FPTAS for {\sc Subset Sum}.
In~\cite{kellerer2003efficient}, an FPTAS of time complexity $O\left(\min\left(n/\varepsilon_{\kappa},n+1/\varepsilon_{\kappa}^2\log(1/\varepsilon_{\kappa})\right)\right)$ and space complexity $O\left(n+1/\varepsilon_{\kappa}\right)$ is proposed.
\end{remark}
\begin{defi}
\label{def:pqrest}
The {\sc $(p,q)$-Scheduling Restricted}\xspace problem is defined from the {\sc $(p,q)$-Scheduling}\xspace problem by replacing the entries $L_i$ by $x_i = L_i^{1/\alpha}$, and is restricted to the case where the $x_i$ are integers.
\end{defi}
\begin{theorem}
\label{th:pqptas}
Given an AS \ensuremath{\mathcal A}\xspace of {\sc Subset Sum} of time complexity $(n,\varepsilon_{\kappa}) \mapsto f_\ensuremath{\mathcal A}\xspace(n,\varepsilon_{\kappa})$, Algorithm \ref{alg:pqapp} yields an AS for {\sc $(p,q)$-Scheduling Restricted}\xspace with time complexity $(n,p,q,\alpha,\lambda) \mapsto O\left(f_\ensuremath{\mathcal A}\xspace\left(n,\frac{\varepsilon_\lambda}{r}\right)\right)$.
\end{theorem}
\begin{coro}
The {\sc $(p,q)$-Scheduling Restricted}\xspace problem admits an AS of time complexity $O\left(\min\left(\frac{nr}{\lambda^{1/\alpha}-1},n+\left(\frac{r}{\lambda^{1/\alpha}-1}\right)^2\log(\frac{r}{\lambda^{1/\alpha}-1})\right)\right)$ and space complexity $O\left(n+\frac{r}{\lambda^{1/\alpha}-1}\right)$.
Indeed, parametrized by the FPTAS of~\cite{kellerer2003efficient}, Algorithm \ref{alg:pqapp} is an AS of the {\sc $(p,q)$-Scheduling Restricted}\xspace problem of such a complexity.
\end{coro}
\begin{proof}
Let $\mathcal{I}$ be an instance of {\sc $(p,q)$-Scheduling Restricted}\xspace, $\lambda>1$ and \ensuremath{\mathcal A}\xspace be an AS of {\sc Subset Sum}.
We recall that raising a number to the power $\alpha$ or $1/\alpha$ is assumed feasible.
If $\lambda \geq (1+r)^{\alpha}$, it suffices to compute the PM schedule on the largest part of the platform.
We assume in the following that $\lambda < (1+r)^\alpha$.
We define $\kappa = \left(1-\frac{1}{r}(\lambda^{1/\alpha}-1)\right)$, so that $\varepsilon_{\kappa}=\frac{\varepsilon_\lambda}{r}$. One can check that $0<\kappa<1$.
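This choice of $\kappa$ can be sanity-checked numerically: for any $1<\lambda<(1+r)^\alpha$ we indeed get $0<\kappa<1$ and $\varepsilon_{\kappa}=\varepsilon_\lambda/r$. A small sketch with illustrative parameter values:

```python
# Check that kappa = 1 - (lambda^(1/alpha) - 1)/r lies in (0, 1)
# whenever 1 < lambda < (1 + r)^alpha. All values are illustrative.
alpha, p, q = 0.9, 8, 9
r = max(p / q, q / p)
lam = 1.2                       # any lambda with 1 < lambda < (1 + r)^alpha
assert 1 < lam < (1 + r) ** alpha
eps_lam = lam ** (1 / alpha) - 1
kappa = 1 - eps_lam / r
eps_kap = 1 - kappa             # equals eps_lam / r by construction
print(kappa, eps_kap, eps_lam / r)
```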
A tight lower bound on the makespan is $\ensuremath{{M}_{\mathrm{ideal}}}\xspace = \puisa{\frac{S}{p+q}}$. Indeed, it represents the makespan of the PM schedule on $p+q$ processors, which can respect the constraint for some values of the $x_i$s.
Let \ensuremath{\mathcal{S}_{\mathrm{OPT}}}\xspace be an optimal schedule of $\mathcal I$. If we denote by $\ensuremath{A_{\mathrm{O}}}$ the subset of the tasks allocated to the $p$-part of the platform, we have either:
\begin{equation}
\label{eq:apq}
\ensuremath{\sum_{i\in \ao} x_i} \leq \ensuremath{{\Sigma}_{\mathrm{ideal}}}\xspace = \frac{p S}{p+q} = p \ensuremath{{M}_{\mathrm{ideal}}}\xspace^{1/\alpha} \quad\text{or}\quad
\ensuremath{\sum_{i\in \bar \ao} x_i} = S - \ensuremath{\sum_{i\in \ao} x_i} \leq \frac{q S}{p+q} = q \ensuremath{{M}_{\mathrm{ideal}}}\xspace^{1/\alpha}
\end{equation}
We first suppose that the left inequality holds. The other case is treated at the end of the proof.
Then,
$$\puisa{\frac{\ensuremath{\sum_{i\in \ao} x_i}}{p}} \leq \ensuremath{{M}_{\mathrm{ideal}}}\xspace$$
So the $p$-part of the schedule terminates before the ideal schedule. Therefore, the $q$-part of the schedule terminates after the $p$-part as $\ensuremath{{M}_{\mathrm{OPT}}}\xspace\geq\ensuremath{{M}_{\mathrm{ideal}}}\xspace$.
We denote $\ensuremath{\mathcal{\Sigma}_{\mathrm{OPT}}}\xspace = \ensuremath{\sum_{i\in \ao} x_i}$.
The makespan $\ensuremath{{M}_{\mathrm{OPT}}}\xspace$ is then equal to the time needed to complete the tasks of $\bar \ensuremath{A_{\mathrm{O}}}$, and we have
$$\ensuremath{{M}_{\mathrm{OPT}}}\xspace = \puisa{\frac{\ensuremath{\sum_{i\in \bar \ao} x_i}}{q}} = \puisa{\frac{S-\ensuremath{\sum_{i\in \ao} x_i}}{q}} $$
Let $\Lambda$ be the following set of subsets of $X$:
$$\Lambda = \left\{ A\subset X \mathrel{}\middle|\mathrel{} (1-\varepsilon_{\kappa}) \ensuremath{\mathcal{\Sigma}_{\mathrm{OPT}}}\xspace \leq \ensuremath{\sum_{i\in A} x_i} \leq \ensuremath{\mathcal{\Sigma}_{\mathrm{OPT}}}\xspace \right\}$$
We now prove in the following paragraphs that a subset $A\in \Lambda$ is computed by the algorithm \ensuremath{\mathcal A}\xspace launched on $X$, $s=\ensuremath{{\Sigma}_{\mathrm{ideal}}}\xspace=\frac{pS}{p+q}$, and $\varepsilon_{\kappa}$. First, we recall that the $x_i$ are assumed to be integers in the formulation of {\sc $(p,q)$-Scheduling Restricted}\xspace.
We know that $\ensuremath{\sum_{i\in \ao} x_i}=\ensuremath{\mathcal{\Sigma}_{\mathrm{OPT}}}\xspace$, so $A_O \in \Lambda$.
Then, there does not exist any subset $A$ of $X$ such that $\ensuremath{\mathcal{\Sigma}_{\mathrm{OPT}}}\xspace < \ensuremath{\sum_{i\in A} x_i} \leq \ensuremath{{\Sigma}_{\mathrm{ideal}}}\xspace$, because the associated schedule $S_A$ would have a makespan smaller than $\ensuremath{{M}_{\mathrm{OPT}}}\xspace$, which contradicts the optimality of $\ensuremath{\mathcal{S}_{\mathrm{OPT}}}\xspace$.
So $A_O$ is an optimal solution to the instance submitted to $\ensuremath{\mathcal A}\xspace$.
Therefore, \ensuremath{\mathcal A}\xspace launched on this instance with the parameter $\varepsilon_{\kappa}$ will return a set $A\in \Lambda$ in time $f_\ensuremath{\mathcal A}\xspace(n,\varepsilon_{\kappa})$, and so the claim is proved.
\bigskip
Let $A$ be an element of $\Lambda$.
We know that the makespan $M_A$ of the corresponding schedule $S_A$ allocating the tasks corresponding to $A$ on the $p$-part is:
$$M_A=\puisa{\max\left(\frac{\ensuremath{\sum_{i\in A} x_i}}{p}, \frac{\ensuremath{\sum_{i\in \bar A} x_i}}{q}\right)}$$
We have
$$ \ensuremath{\sum_{i\in \bar A} x_i} = S - \ensuremath{\sum_{i\in A} x_i}$$
and
$$\ensuremath{\sum_{i\in A} x_i} \geq \kappa \ensuremath{\mathcal{\Sigma}_{\mathrm{OPT}}}\xspace$$
So
$$\frac{\ensuremath{\sum_{i\in \bar A} x_i}}{q} \leq \frac{ S- \kappa \ensuremath{\mathcal{\Sigma}_{\mathrm{OPT}}}\xspace}{q}$$
and
$$ \ensuremath{\sum_{i\in A} x_i} \leq \ensuremath{\mathcal{\Sigma}_{\mathrm{OPT}}}\xspace \leq \ensuremath{{\Sigma}_{\mathrm{ideal}}}\xspace$$
This last inequality implies that the tasks allocated to the $p$-part of the platform are terminated before $\ensuremath{{M}_{\mathrm{ideal}}}\xspace$, and so necessarily before the tasks allocated to the $q$-part. Therefore, we have $$M_A = \puisa{\frac{\ensuremath{\sum_{i\in \bar A} x_i}}{q}}$$
and so
$$
\left(\frac{M_A}{\ensuremath{{M}_{\mathrm{OPT}}}\xspace}\right)^{1/\alpha}
\leq \frac{S-\kappa\ensuremath{\mathcal{\Sigma}_{\mathrm{OPT}}}\xspace}{S-\ensuremath{\mathcal{\Sigma}_{\mathrm{OPT}}}\xspace} $$
Then, as $0<\kappa<1$ and $\ensuremath{\mathcal{\Sigma}_{\mathrm{OPT}}}\xspace\leq\ensuremath{{\Sigma}_{\mathrm{ideal}}}\xspace$, we get
\begin{align*}
\left(\frac{M_A}{\ensuremath{{M}_{\mathrm{OPT}}}\xspace}\right)^{1/\alpha}
&\leq \frac{S-\kappa\ensuremath{{\Sigma}_{\mathrm{ideal}}}\xspace}{S-\ensuremath{{\Sigma}_{\mathrm{ideal}}}\xspace}\\
&\leq \frac{1-\frac{\kappa p}{p+q}}{1-\frac{p}{p+q}}\\
&\leq \frac{p+q-\kappa p}{q}\\
&\leq 1 + \frac{p}{q} (1-\kappa)
\end{align*}
Then, $r$ is an upper bound of $\frac{p}{q}$, so
$$ \frac{M_A}{\ensuremath{{M}_{\mathrm{OPT}}}\xspace} \leq \puisa{1+ r\left(1-\kappa\right)}$$
Finally, by the definition of $\kappa$, we get
$$ \frac{M_A}{\ensuremath{{M}_{\mathrm{OPT}}}\xspace} \leq \lambda$$
\bigskip
We have supposed so far that the left inequality of (\ref{eq:apq}) holds. Otherwise, the second one holds. Note that both hypotheses only differ by an exchange of the roles of $p$ and $q$. Then, as the problem is strictly symmetric in $p$ and $q$, by an analogous reasoning, one can prove that \ensuremath{\mathcal A}\xspace launched on $X$, $\frac{qS}{p+q}$, $\varepsilon_{\kappa}$ returns a set $B$ in
$$\Lambda' = \left\{ B\subset X \mathrel{}\middle|\mathrel{} (1-\varepsilon_{\kappa}) \sum_{i\in\bar \ensuremath{A_{\mathrm{O}}}} x_i \leq \sum_{i\in B} x_i \leq \sum_{i\in\bar \ensuremath{A_{\mathrm{O}}}} x_i\right\}$$
and that the schedule that assigns $B$ to the $q$-part of the processors has a makespan smaller than $\lambda \ensuremath{{M}_{\mathrm{OPT}}}\xspace$. Indeed, to obtain this conclusion we only needed that $r\geq \frac{p}{q}$, and as we also have $r\geq \frac qp$, the same method works in this case.
\bigskip
To conclude, Algorithm \ref{alg:pqapp} launched with the parameter $\lambda$ computes a set $A\in \Lambda$ and a set $B\in \Lambda'$, then returns the schedule that has the minimum makespan between $S_A$ and $S_{\bar B}$. Therefore, regardless of which inequality of (\ref{eq:apq}) holds, the returned schedule has a makespan smaller than $\lambda\ensuremath{{M}_{\mathrm{OPT}}}\xspace$, and so Algorithm \ref{alg:pqapp} achieves a $\lambda$-approximation.\qed
\end{proof}
Formally, the algorithm is the following.
\begin{figure}[!ht]
\SetEndCharOfAlgoLine{;}
\begin{algorithm}[H]
\SetKwFunction{pqapp}{PQApp}
\SetKwBlock{Fct}{Function \pqapp{$G$,$p$,$q$,$\lambda$}}{}
\SetKwInOut{KwOut}{Output}
\SetKwInOut{KwIn}{Input}
\SetKwIF{AFct}{AElseIf}{AElse}{if}{then}{else if}{else}{endif}
\SetVline
\Fct{
\KwIn{A graph $G$ composed of $n$ independent tasks $T_i$ of length $L_i$, the parameters $p$ and $q$ of the processor platform $\mathcal{P}$, and the requested approximation ratio $\lambda$}
\KwOut{A schedule $S$ of $G$ on $\mathcal{P}$ that is a $\lambda$-approximation of the makespan}
\If{$\lambda \geq (1+r)^\alpha$}{
\Return{the PM schedule on the largest part}}
$A \gets \ensuremath{\mathcal A}\xspace\left(X,\frac{pS}{p+q}, \frac{\varepsilon_\lambda}r\right) \quad ; \quad$
$B \gets \ensuremath{\mathcal A}\xspace\left(X,\frac{qS}{p+q}, \frac{\varepsilon_\lambda}r\right)$\;
\Return{the schedule with the minimum makespan between $S_A$ and $S_{\bar B}$}
}
\vspace*{-1em}
\caption{Approximation scheme for the $(p,q)$-scheduling problem \mystrut(1.5em)}
\label{alg:pqapp}
\end{algorithm}
\end{figure}
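A compact executable sketch of Algorithm~\ref{alg:pqapp} follows. For readability it replaces the {\sc Subset Sum} FPTAS by an exact dynamic-programming oracle (valid since the $x_i$ are integers in the restricted problem); all names and instance values are illustrative.

```python
# Sketch of PQApp for the restricted problem (integer x_i).
# best_subset_sum is an exact DP oracle standing in for the FPTAS.
def best_subset_sum(xs, target):
    # largest achievable sum <= target, with a witness subset of indices
    reach = {0: []}
    for i, x in enumerate(xs):
        for sm, idx in list(reach.items()):
            if sm + x <= target and sm + x not in reach:
                reach[sm + x] = idx + [i]
    return set(reach[max(reach)])

def pq_app(xs, p, q, alpha):
    S = sum(xs)
    A = best_subset_sum(xs, p * S // (p + q))   # candidate for the p-part
    B = best_subset_sum(xs, q * S // (p + q))   # candidate for the q-part
    def mk(Ap):  # makespan when the tasks in Ap are mapped to the p-part
        sA = sum(xs[i] for i in Ap)
        return max((sA / p) ** alpha, ((S - sA) / q) ** alpha)
    candidates = [A, set(range(len(xs))) - B]   # schedules S_A and S_{bar B}
    best = min(candidates, key=mk)
    return best, mk(best)

A, M = pq_app([3, 5, 2, 7], 8, 9, 0.9)
print(A, M)
```

On this toy instance the two parts can be balanced exactly, so the returned makespan equals the lower bound $\ensuremath{{M}_{\mathrm{ideal}}}\xspace$.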
\fi
\section{Conclusion}
In this paper, we have studied how to schedule trees of malleable
tasks whose speedup function on multicore platforms is $p^\alpha$. We
have first motivated the use of this model for sparse matrix
factorizations by actual experiments. When using factorization kernels
actually used in sparse solvers, we show that the speedup follows the
$p^\alpha$ model for reasonable allocations. On the machine used for
our tests, $\alpha$ is in the range 0.85--0.95. Then, we proposed a
new proof of the optimal allocation derived by Prasanna and
Musicus~\cite{prasmus,prasmus2} for such trees on single node
multicore platforms. Contrarily to the use of optimal control theory
of the original proofs, our method relies only on pure scheduling
arguments and gives more intuitions on the scheduling problem. Based
on these proofs, we proposed several extensions for two multicore
nodes: we prove the NP-completeness of the scheduling problem and propose
a \ensuremath{\left(\frac{4}{3}\right)^\alpha}-approximation algorithm for a tree of malleable tasks on two
homogeneous nodes, and an FPTAS for independent malleable tasks on two
heterogeneous nodes.
The perspectives to extend this work follow two main
directions. First, it would be interesting to extend the
approximations proposed for the heterogeneous case to a number of nodes
larger than two, and to more heterogeneous nodes, for which the value
of $\alpha$ differs from one node to another. This is a promising
model for the use of accelerators (such as GPU or Xeon Phi). The
second direction concerns an actual implementation of the PM
allocation scheme in a sparse solver.
\bibliographystyle{splncs03}
To find out the quantum diffraction pattern of position-momentum entangled photons from a straight sharp edge, consider the schematic diagram given in Fig.~\ref{fig1}. In the experiment, position-momentum entangled photons are produced by a type-I spontaneous parametric down-conversion (SPDC) process in a second-order nonlinear crystal, such as Beta Barium Borate (BBO). Similar SPDC sources of photon pairs have been utilised in quantum ghost interference experiments \cite{ghost1}. In experiments, the photon-pair creation rate is kept low so that a single photon pair is produced and detected at a time; the probability of having more than one photon pair in one experimental cycle is extremely low. In this paper, the analysis of quantum diffraction is given in two dimensions of space, in accordance with the experimental configuration of the source explained further below.
The type-I SPDC process generally produces photon pairs at a nonzero angle \emph{w.r.t.} the momentum direction of the pump photons, so as to conserve momentum and energy. The momenta of the photons of each pair are opposite to each other in a plane perpendicular to the direction of propagation of the pump photons. For such an extended source, photons from a pair are position-momentum entangled in a transverse two-dimensional plane. Therefore, a two-dimensional configuration is considered in what follows.
For an extended SPDC source, a pair of photons is produced at the same location in the crystal; however, the location of pair production is delocalized, \emph{i.e.}, the amplitude of pair production is delocalized over the extension of the source. Consider an arbitrary point $p'$ in the source such that photons of a pair produced around $p'$ have opposite momentum directions. If the exact location of pair production were determined, the uncertainty in the momentum of each photon would become so large that it would result in a separable quantum state of the photons. Let $|\bf{p}_{1}\rangle_{y'}$ be the momentum quantum state of photon 1, produced around $p'(y')$ and going towards detector $D_{1}$, and let $|\bf{p}_{2}\rangle_{y'}$ be the momentum quantum state of photon 2, produced around $p'(y')$ with momentum opposite to that of photon 1. Each photon has the same energy, and its momentum changes with its direction of propagation. The exact location of origin is uncertain in the vicinity of the arbitrary point $p'(y')$. If the amplitude of pair production at an arbitrary location $y'$ in the source is $\psi(y')$, then the quantum state of both photons of the same polarization can be written as
\begin{equation}
\label{eq:1}
|\Psi\rangle \propto \int \psi(y')|\bf{p}_{1}\rangle_{y'}|\bf{p}_{2}\rangle_{y'} \mathrm{d}y'
\end{equation}
\begin{figure}
\begin{center}
\includegraphics[scale=0.105]{fig1.eps}
\caption{\label{fig1} \emph{Position-momentum entangled photons are produced by an extended source with pair production amplitude $\psi(y')$. Photon 1 is detected by $D_{1}$, photon 2 is partially blocked by a sharp edge and detected by $D_{2}$.}}
\end{center}
\end{figure}
The directions of the momenta $\bf{p}_1$ and $\bf{p}_2$ of the photons are opposite to each other. The quantum state given in Eqn.~\ref{eq:1} is a two-photon position-momentum entangled state, under the consideration that photon 1 is going towards detector 1. The photons are entangled in momentum as well as in the position of their location of origin. The amplitude of pair production is normalized, such that $\int^{\infty}_{-\infty}\psi^{\ast}{(y')}\psi{(y')} \mathrm{d}y'=1$.
Photon 1 can be detected by a single-photon detector $D_{1}$, whereas the path of photon 2 is partially blocked by a straight sharp edge located on the $y$-axis, which is parallel to the $y'$-axis. One end of the sharp edge is positioned at $\Delta y$ and the other end is assumed to extend to infinity on the $y$-axis. After traversing the sharp edge, photon 2 can be detected by another single-photon detector $D_{2}$. Both detectors, $D_{1}$ and $D_{2}$, are positioned at fixed locations $y_{1}$ and $y_{2}$, respectively, parallel to the $y$-axis. The distance along the $x$-axis of detector $D_{1}$ from the sharp edge is $d_{1}+d_{2}$, and the distance of detector $D_{2}$ from the sharp edge is $d_{3}$. Since the extension of the source is large, from a detection of photon 1 at detector $D_{1}$ one cannot find out, even in principle, from which location in the source the photon pair was emitted. Therefore, the detection of a single photon does not reveal which-path information of the other photon. The directions of the momenta of the photons are opposite; therefore, if a pair originates around a point $p'(y')$ and photon 1 is detected at location $y_{1}$, then photon 2 is most likely to be at a point $p(y)$ on the $y$-axis such that $y'=y_{1}+(y-y_{1})d_{1}/(d_{1}+d_{2})$. The detection of photons does not reveal any information about the momenta of the photons emitted by an extended source. Once photon 1 is detected at a known location, due to nonlocal quantum state reduction \cite{epr}, a particular amplitude to find the second photon at $p(y)$ on the $y$-axis containing the sharp edge can be immediately determined. This selection is made by a position-resolved detection of photon 1. Therefore, a particular amplitude of photon 2 can be written as
\begin{equation}
\label{eq:2}
a(y) \propto \frac{\psi(y')}{(2\pi \hbar)^{2}} e^{i p_{1} r_{1}/\hbar} e^{i p_{2} r_{2}/\hbar}
\end{equation}
where $p_{1}$ and $p_{2}$ are the magnitudes of the momenta of photon 1 and photon 2, respectively. The distance between detector $D_{1}$ and an arbitrary point $p'(y')$ is $r_{1}$. The distance of a point $p(y)$ from the arbitrary point $p'(y')$ is $r_{2}$, as shown in Fig.~\ref{fig1}. The amplitude $a(y)$ is a complex function in general; it varies along the $y$-axis containing the sharp edge. Now, due to diffraction from the sharp edge, there are different possible paths via which photon 2 can reach detector $D_{2}$. To find the amplitude of detection of photon 2 at detector $D_{2}$, given that photon 1 is detected by detector $D_{1}$, the amplitude of photon 2 emanating from a point $p$ on the $y$-axis can be considered as a circular wave originating from that point. If photon 1 is not detected by $D_{1}$, then this event is not counted as a conditional event, and in this case the particular amplitude $a(y)$ is not well defined. The paths of photon 2 are partially blocked by the sharp edge on the $y$-axis. The circular-wave amplitudes from different locations on the $y$-axis are quantum superposed once photon 1 is detected at location $y_{1}$ by detector $D_{1}$, as the which-path information of photon 2 is not revealed by the detection of photon 1. The total amplitude of a position-correlated conditional detection, $a_{12}$, of both photons can be evaluated by superposing the two-photon amplitudes in the unblocked region, such that
\begin{equation}
\label{eq:3}
a_{12} \propto \frac{1}{(2\pi\hbar)^{2}}\int^{\Delta y}_{-\infty} \psi(y') e^{i k_{o}r_{1}} e^{i k_{o}r_{2}} \frac{e^{ik_{o}r}}{r^{1/2}} \mathrm{d}y
\end{equation}
where the magnitude of the momentum of each photon is $\hbar k_{o}$, $k_{o}=2 \pi/\lambda$ is the wave-vector magnitude corresponding to wavelength $\lambda$, and $r$ is the distance of detector $D_{2}$ from a point $p$ on the $y$-axis. The integrand is the circular-wave amplitude $e^{ik_{o}r}/r^{1/2}$ multiplied by $a(y)$. In experimental situations, the extension of the source is finite. The first term $\psi(y')$ of the integral in Eqn.~\ref{eq:3} is the amplitude of pair production, which vanishes at infinity. The actual size of the source is determined by the transverse extension of the pump laser beam, which leads to a finite width of $\psi(y')$. Therefore, the finite size of the source is due to the finite extension of $\psi(y')$.
From Fig.~\ref{fig1}, the total distance is $r_{1}+r_{2}=((d_{1}+d_{2})^{2}+(y-y_{1})^{2})^{1/2}$. Since the location of detector 1 is close to the $x$-axis and its distance from the source is much larger than the extension of the source, the total distance can be expressed as $r_{1}+r_{2}\simeq(d_{1}+d_{2})(1+(y-y_{1})^{2}/2(d_{1}+d_{2})^2)$. Similarly, the distance of detector $D_{2}$ from an arbitrary point $p$ on the $y$-axis can be written as $r\simeq d_{3}(1+(y-y_{2})^{2}/2d^{2}_{3})$. Here, the contribution of the term $e^{i k_{o}r}/r^{1/2}$ to the integral in Eqn.~\ref{eq:3} diminishes as the point $p$ moves away from the origin $o$. In addition, the amplitude of pair production in the crystal is determined by the transverse field profile of the pump laser beam. For a Gaussian transverse mode of the pump laser, the amplitude of pair production is real and can be written as $ \psi(y')\propto e^{-y'^{2}/\sigma^{2}}$, where $\sigma$ is the width of the pair-production amplitude. The amplitude $\psi(y')$ becomes smaller and smaller as the magnitude of $y'$ increases and eventually vanishes at infinity. Therefore, the total amplitude of correlated conditional detection of photons can be evaluated as
\begin{equation}
\label{eq:4}
a_{12}=\frac{c_{n}e^{i k_{o} (d_{1}+d_{2}+d_{3})}}{(2\pi\hbar)^{2} d^{1/2}_{3}}\int^{\Delta y}_{-\infty} e^{-\frac{y'^{2}}{\sigma^{2}}} e^{i k_{o}\frac{(y-y_{1})^{2}}{2(d_{1}+d_{2})}} e^{ik_{o}\frac{(y-y_{2})^{2}}{2d_{3}}} \mathrm{d}y
\end{equation}
where $c_{n}$ is a constant and $y'=y_{1}+(y-y_{1})d_{1}/(d_{1}+d_{2})$. The probability of conditional detection of photons can then be calculated as $p_{12}= a^{\ast}_{12} a_{12}$. The probability of conditional detection for different positions $\Delta y$ of the sharp edge produces a quantum diffraction pattern of the sharp edge. In the experimental situation, the probability of correlated conditional detection of photons is proportional to the counts of correlated coincidence detection of photons in a chosen time interval. In the experiment, the position of each detector is stationary while the sharp edge is moved. If either detector is displaced, the complete coincidence diffraction pattern is shifted. The correlated coincidence diffraction pattern given by Eqn.~\ref{eq:4} depends on the distances of the sharp edge from the individual detectors, and it is independent of the location of an infinitely extended source. It is also evident that no diffraction pattern is formed by individual photons, \emph{i.e.}, the photon counts of each detector show no diffraction pattern as the sharp edge is moved. The quantum diffraction pattern is formed in correlated conditional measurements of photon counts only. A quantum diffraction pattern is formed even if the sharp edge is placed in the path of the other quantum entangled photon, and it is independent of the order of detection of the photons.
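Eqn.~\ref{eq:4} can be evaluated numerically by discretizing the integral over $y$. The following Python sketch is only illustrative: the parameter values (distances, pump width, detector positions) are assumptions rather than the experimental ones, and the overall constant is dropped.

```python
# Numerical sketch of |a_12|^2 from Eq. (4): midpoint-rule integration over y
# from a lower cutoff (standing in for -infinity) up to the edge position dy.
# All parameter values are illustrative assumptions.
import cmath, math

lam = 810e-9                      # photon wavelength (m)
k0 = 2 * math.pi / lam
d1, d12, d3 = 0.5, 1.0, 0.3      # d1, d1 + d2, d3 (m)
sigma = 1e-3                      # pair-production amplitude width (m)
y1, y2 = 0.0, 1.32e-3            # detector positions (m)

def p12(dy, ylo=-5e-3, n=4000):
    h = (dy - ylo) / n
    s = 0j
    for j in range(n):
        y = ylo + (j + 0.5) * h
        yp = y1 + (y - y1) * d1 / d12          # source point correlated with y
        phase = k0 * (y - y1) ** 2 / (2 * d12) + k0 * (y - y2) ** 2 / (2 * d3)
        s += cmath.exp(-yp ** 2 / sigma ** 2 + 1j * phase) * h
    return abs(s) ** 2

for dy in (0.5e-3, 1.0e-3, 1.5e-3):           # edge positions
    print(dy, p12(dy))
```

Sweeping $\Delta y$ (`dy`) traces out the fringes of the coincidence diffraction pattern; moving `y2` shifts the whole pattern, as observed in the measurements.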
\section{Experimental realization}
The quantum diffraction experiment is performed with position-momentum entangled photons of the same energy, as shown in Fig.~\ref{fig2}. Quantum entangled photons are produced by SPDC in a second-order nonlinear BBO crystal. The phase-matching angle of the crystal is chosen for type-I phase matching of degenerate SPDC.
\begin{figure}
\begin{center}
\includegraphics[scale=0.082]{fig2.eps}
\caption{\label{fig2} \emph{ A schematic diagram of experiment to measure quantum diffraction pattern of position-momentum entangled photons from a sharp edge.}}
\end{center}
\end{figure}
\begin{figure*}[p]
\begin{center}
\includegraphics[scale=0.72]{plots.eps}
\caption{\label{fig3} \emph{Quantum diffraction patterns of a sharp edge for different detector locations are given in the left column. The diffraction pattern is shifted as detector $D_{2}$ is displaced. The detector location is (a) $y_{2}=1.52~mm$, (b) $y_{2}=1.32~mm$ and (c) $y_{2}= 1.12~mm$. Each single point represents experimentally measured position-correlated coincidence photon counts for 30~sec integration time. Each measured pattern is compared with the quantum diffraction pattern calculated from Eqn.~\ref{eq:4}, shown by a solid line. Plots given in the right column are the corresponding single photon counts of each detector.}}
\end{center}
\end{figure*}
A pump laser beam of wavelength 405~$nm$ is incident on the crystal and pairs of position-momentum entangled photons are produced. These photon pairs are quantum entangled in the two-dimensional transverse plane with respect to the direction of propagation of the pump beam.
It is important to note that the correlated coincidence amplitude depends on the exponents in the integral shown in Eqn.~\ref{eq:4}, which take the form of a ratio of photon momentum to the distances $d_{1}+ d_{2}$ and $d_{3}$. All such quantities are defined in two dimensions. However, in a real experiment, these quantities can be measured in three dimensions and their respective projections onto a transverse plane can be inserted in the integral. The projection factors of momentum and distance cancel each other in the ratio. Therefore, the photon momentum and distances quoted in the remainder of the paper are measured in three dimensions. The wavelength of each down-converted photon is about 810~$nm$. The pump laser beam diameter is expanded to produce position-momentum entangled pairs of photons over a broader transverse extension of the crystal. The width $\sigma$ of the pair-production amplitude $\psi(y')$ increases with the pump laser beam diameter; in this way, the number of observable fringes of the quantum diffraction pattern can also be increased. The quantum diffraction pattern is independent of the rate of pair production, provided the photon pairs are resolved in time. Each photon is detected by a fiber-coupled single photon detector. Photons are coupled to optical fibers, and the fiber couplers are placed on three-dimensional translation stages. A straight sharp edge is placed to partially block photon 2 going towards detector $D_{2}$. Each single photon detector is preceded by an optical bandpass filter ($F_{1}$ and $F_{2}$) of center wavelength 810~$nm$. Single photon counts and correlated conditional single photon counts of both detectors are measured with a two-channel coincidence single photon counter (CSPC). The measurement time window of the CSPC is chosen such that a single pair of entangled photons is resolved in time.
To measure the quantum diffraction pattern of a sharp edge, the position $\Delta y$ of the edge is displaced in steps. For each displacement, single photon counts and coincidence single photon counts are measured. The locations $y_{1}$ and $y_{2}$ of the single photon detectors $D_{1}$ and $D_{2}$ are stationary. The experiment is repeated for three different locations, $y_{2}$, of detector $D_{2}$. The quantum diffraction pattern of a sharp edge is shown in the left column of Fig.~\ref{fig3} for three different locations of detector $D_{2}$. The corresponding single photon counts of each detector are shown in the right column of Fig.~\ref{fig3}. The location $y_{1}=0.15~mm$ of detector $D_{1}$, and the distances $d_{1}=50~cm$, $d_{2}=28~cm$ and $d_{3}=22~cm$, are measured in three dimensions and are the same in all plots. Each experimental data point representing coincidence photon counts $N_{12}$ is acquired for a 30~$sec$ integration time. The solid line in each coincidence count plot corresponds to the quantum diffraction pattern calculated by solving Eqn.~\ref{eq:4}, where $\sigma=0.85~mm$.
The quantum diffraction pattern of a sharp edge is revealed only in correlated coincidence photon counts. It is evident from the experimental and theoretical results shown in Fig.~\ref{fig3} that the quantum diffraction pattern is shifted with the displacement of the position, $y_{2}$, of the single photon detector $D_{2}$. The quantum diffraction pattern also shows a shift with a displacement of the position, $y_{1}$, of the single photon detector $D_{1}$.
In single photon measurements by a detector, there is no way to know at what location the other photon is detected, or whether it has been detected at all. This lack of information suppresses the formation of a diffraction pattern in the local single photon counts. If photon 1 is undetected, then a particular amplitude, Eqn.~\ref{eq:2}, is undefined. In other words, the phase distribution of the photon 2 amplitude in the plane of the sharp edge is not well defined, and this suppresses the formation of a diffraction pattern in single local measurements of photons. Once photon 1 is detected at a known location, the phase of the amplitude of photon 2 in the plane of the sharp edge is immediately known due to nonlocal quantum state reduction, which is a consequence of quantum entanglement. In this way, the quantum state reduction process leads to the formation of a quantum diffraction pattern in correlated coincidence photon detection measurements. In other words, a position correlated coincidence detection selects a particular phase distribution of the two-photon amplitude that results in the formation of a quantum diffraction pattern. On the other hand, if only coincidences are measured without position correlation of photon 1, then no diffraction pattern is formed even in coincidence measurements, because of random phase fluctuations from shot to shot. It has been shown that the quantum diffraction pattern is independent of the location of an infinitely extended source. If the experiment stores the information of detection of photons, then the detection events can be correlated pairwise and the quantum diffraction pattern can be recovered even after the experiment is complete.
{\bf{Contribution of Authors}}: This experiment was conceived by Mandip Singh (MS). MS made the theoretical model of quantum diffraction from a sharp edge. Samridhi Gambhir (SG) took the data and plotted the patterns. MS made the diagrams and explained the quantum diffraction pattern. Both authors discussed the experiment. MS wrote the manuscript.
\section{Introduction}
The Standard Model (SM) based on the gauge symmetry $SU(3)_C\times SU(2)_L\times U(1)_Y$ is a successful description of fundamental particles and interactions. Almost all the predictions of the SM have been verified experimentally. However, there are two important experimental observations (among many others) that compel us to think of the SM as a low energy theory requiring new physics at a high scale. These are the existence of the dark matter in the universe, and the tiny non-zero masses of the neutrinos and their mixings. The SM has no candidate for the dark matter. While the simplest way to generate neutrino masses is to add right-handed neutrino fields to the SM particle content, it is hard to explain their extreme smallness. These issues have led to a plethora of new dynamics beyond the SM. The Large Hadron Collider (LHC) at CERN is aiming to uncover any such dynamics that may be operative at the scale of a few TeVs.
Neutrino masses are at least six orders of magnitude smaller than the next lightest standard model fermion. Such a small mass could be understood if neutrinos are Majorana particles\footnote{It is important to note that the current experimental data (from neutrino oscillation as well as scattering experiments) is inconclusive in determining the Dirac or Majorana nature of neutrinos as well as the mechanism of neutrino mass generation.} and Majorana masses for neutrinos are generated from higher dimensional operators which violate lepton number by two units. The most studied example of such an operator is the dimension-5 Weinberg operator \cite{Weinberg:1979sa}:
\begin{equation}
{\cal O}_5~=~\frac{c_{\alpha\beta}}{\Lambda}\left(\bar{L^C_{\alpha L}}\tilde H^*\right)\left(\tilde H^\dagger L_{\beta L}\right)~+~{\rm h.c.},\nonumber
\end{equation}
where, $\alpha,~\beta$ are the generation indices, $L_L~=~(\nu_L, l_L)^T$ is the left-handed lepton doublet of the SM, $H~=~(h^+, \frac{h+i\eta}{\sqrt 2})^T$ is the Higgs doublet and $\tilde H=i\sigma_2 H^*$. $\Lambda$ is the scale of new physics and $c_{\alpha\beta}$ is a model-dependent coefficient. Weinberg operator gives rise to Majorana masses (suppressed by $\Lambda$) for the neutrinos after electroweak symmetry breaking (EWSB). At tree level, there are only three ways to generate the Weinberg operator, namely, the type-I \cite{Klein:2019iws,Minkowski:1977sc,Yanagida:1979as,GellMann:1980vs,Mohapatra:1979ia}, the type-II \cite{Magg:1980ut,Schechter:1980gr,Wetterich:1981bx,Lazarides:1980nt,Mohapatra:1980yp,Cheng:1980qt,Perez:2008ha} and the type-III \cite{Foot:1988aq} seesaw mechanisms. In the framework of tree-level seesaw models, the smallness of neutrino masses ($m_\nu$s) is explained via new physics at a very high scale of $\Lambda$. For instance, assuming $c_{\alpha \beta} \sim {\cal O}(1)$, $m_\nu \sim 0.1$ eV requires new physics at a scale $\Lambda\sim 10^{14}-10^{15}$ GeV which is impossible to probe in the ongoing as well as proposed collider experiments. However, there are two alternative classes of models in which ${\cal O}_5$ is forbidden at tree level and neutrino masses are generated radiatively \cite{Babu:2013pma,Sierra:2014rxa,Okada:2015vwh,Nomura:2016run,Nomura:2016ask,Nomura:2016dnf,Nomura:2017vzp,Cepedello:2017lyo,Cepedello:2018rfh,Cai:2017jrq,Babu:2019mfe,Gargalionis:2019drk,Cepedello:2020lul,Arbelaez:2020xcg,Ma:2009gu,Ma:2012xj,Kanemura:2010bq,Krauss:2002px,Branco:1988ex,Aoki:2009vf,Aoki:2008av,Ma:2007yx,Ma:2006km,Jana:2019mgj,Cheung:2004xm,Cheung:2017kxb,Cheung:2018itc,Nomura:2018lsx,FileviezPerez:2009ud,Babu:2002uu,Babu:1989pz,Ma:1998dn,Babu:1988ki,Bonnet:2012kz} or from a tree-level effective operator with $d>5$ \cite{Babu:2009aq,Anamiati:2018cuq,Gu:2009hu,Babu:1999me,Giudice:2008uua,Gogoladze:2008wz,Chen:2006hn,Bonnet:2009ej}. 
The additional suppression of the neutrino masses, arising from the loop integrals (in the case of the former) or from higher powers of $\Lambda$ in the denominator (in the case of the latter), brings down the new physics scale $\Lambda$ to the TeV scale and hence makes these models testable at the LHC. Radiative neutrino mass generation scenarios often require a $Z_2$ symmetry to forbid the tree-level contribution(s) to the Weinberg operator. Apart from forbidding the generation of neutrino masses at the tree level, the $Z_2$ symmetry also ensures the stability of the lightest $Z_2$-odd particle, which (if weakly interacting) could be a cosmologically viable candidate for cold dark matter (CDM) \cite{Kajiyama:2013rla,Farzan:2014aca,Restrepo:2015sjs,Kashiwase:2015pra,Ahriche:2016rgf,Nomura:2016vxr,Sierra:2016qfa,Ahriche:2016ixu,Guo:2016dzl,Simoes:2017kqb,Nomura:2017emk,Yao:2017vtm,Esch:2018ccs,CentellesChulia:2019xky,Loualidi:2020jlj,Restrepo:2013aga}. However, in collider experiments, a stable weakly interacting particle remains invisible and hence contributes to the missing transverse energy ($E_T\!\!\!\!\!\!/~$) signature. The presence of $E_T\!\!\!\!\!\!/~$ in the final state poses serious problems in the reconstruction of the new particles' masses as well as in the discovery of the new physics (NP) signatures over the SM backgrounds. On the other hand, the models which do not require a $Z_2$ symmetry to forbid ${\cal O}_5$ at tree level lack the motivation of a candidate for CDM. Such models, however, give rise to smoking-gun signatures at collider experiments and hence are easily testable at the LHC.
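The tree-level seesaw scale quoted above follows from simple arithmetic, $m_\nu \sim c\,\langle H\rangle^2/\Lambda$ with $\langle H\rangle = v/\sqrt 2$ and $v=246$ GeV; a one-line numerical check:

```python
# Seesaw scale needed for m_nu ~ 0.1 eV with an O(1) Wilson coefficient:
# m_nu ~ c <H>^2 / Lambda  =>  Lambda ~ c <H>^2 / m_nu
v = 246.0              # GeV, SM Higgs VEV
m_nu = 0.1e-9          # 0.1 eV expressed in GeV
c = 1.0                # O(1) coefficient
Lambda = c * (v / 2**0.5)**2 / m_nu
print(f"Lambda ~ {Lambda:.1e} GeV")   # ~3e14 GeV, consistent with the text
```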
In this article, we have studied the detailed phenomenology of a model that generates neutrino masses at the 1-loop level. In the framework of the SM gauge symmetry, the model includes two new scalar $SU(2)_L$-doublets ($\Phi_{\frac{5}{2}}$ and $\Phi_\frac{3}{2}$), one scalar singlet ($k$) and at least two\footnote{Neutrino oscillation data indicate non-zero masses for at least two neutrinos.} copies of vector-like fermion singlets ($E$). The aim is to generate the Weinberg operator (${\cal O}_5^{\rm 1 \text - loop}$) at the 1-loop level\footnote{The Weinberg operator at the 1-loop level has already been studied in detail in the literature. In Ref.~\cite{Bonnet:2012kz}, 12 topologies, which contribute to the Weinberg operator at the 1-loop level, have been identified. For each topology, there are several alternatives (models) for assigning different types (scalar or fermion) of fields running in the loop. A complete list of all these models leading to ${\cal O}_5^{\rm 1 \text - loop}$ can also be found in Ref.~\cite{Bonnet:2012kz}.} via the T1-i topology \cite{Bonnet:2012kz}. To ensure that the loop diagram(s) give the leading contribution to the neutrino masses, one needs to forbid the couplings which lead to ${\cal O}_5$ at tree level. For instance, in this model, the Yukawa couplings involving the newly introduced singlet fermions and the SM lepton and Higgs doublets give rise to ${\cal O}_5^{\rm tree}$ via the Type-I seesaw mechanism. Charging matter fields under a $Z_2$ symmetry is enough to forbid such couplings. Alternatively, one can carefully choose the hypercharges\footnote{The electric charge ($Q$) is given by $Q=I_3+Y$, where $I_3$ is the third component of isospin.} of the singlet fermions ($Y_E$) to forbid such couplings. In the context of this particular scenario, $Y_E \pm Y_L \pm Y_H \neq 0$ or equivalently, $Y_E \neq 0~{\rm or}~\pm 1$, where $Y_{L,H}$ is the hypercharge of the SM lepton (Higgs) doublet, is enough to forbid the Type-I seesaw mechanism.
Our choice $Y_E=2$ results in doubly charged singlet fermions ($E^{++}$) in the model. The generation of non-zero neutrino masses at 1-loop via the T1-i topology requires the following hypercharge assignments for the doublet and singlet scalars: $Y_{\Phi_{\frac{3}{2}\left(\frac{5}{2}\right)}}=\frac{3}{2}\left(\frac{5}{2}\right)$ and $Y_k=2$, respectively (see Ref.~\cite{Bonnet:2012kz} for details). These particular hypercharge assignments for the newly introduced doublets and singlets result in TeV scale multi-charged scalars (triply, doubly and singly charged) and fermions (doubly charged) in the model. In this article, we have studied the collider signatures of these multi-charged scalars and fermions at the LHC with 13 TeV center-of-mass energy.
Single production of these multi-charged scalars and fermions at the LHC is suppressed. However, they can be pair produced via quark anti-quark fusion (Drell-Yan production) or photon-photon fusion (photoproduction). The parton densities of quarks and anti-quarks are significantly larger than the photon density\footnote{It is important to note that $\alpha_{EM}$ is of the same order of magnitude as $\alpha_{strong}^2$. Therefore, in the era of precision phenomenology at the LHC, when the PDFs are already determined up to NNLO in QCD, consistency of calculations requires PDFs which are corrected at least up to NLO QED. Next-to-leading order (NLO) QED corrections require the photon as a parton inside the proton, with an associated parton distribution function (PDF).} and thus, the photoproduction of charged scalar/fermion pairs is neglected in phenomenological studies \cite{delAguila:2013mia,Kanemura:2013vxa,Du:2018eaw,Primulando:2019evb,Dev:2018kpa,Dev:2018sel,Alloul:2013raa} as well as by the experimental groups \cite{ATLAS:2012hi,ATLAS:2014kca,Aaboud:2018qcu,Chatrchyan:2012ya,CMS:2017pet,CMS:2016cpz,Aad:2019pfm,Aaboud:2018kbe}. However, it is important to note that at the Born level, the photoproduction cross-section of a multi-charged particle (with charge $Q$) is enhanced by a factor of $Q^4$. Moreover, being a $t(u)$-channel process, photoproduction falls relatively slowly with the parton center-of-mass energy compared to the $s$-channel Drell-Yan (DY) process. The importance of photoproduction has already been pointed out in the recent literature \cite{PhysRevD.50.2335,PhysRevD.76.075013,Babu:2016rcr,Agarwalla:2018xpc,Ghosh:2017jbw,Baines:2018ltl,Kurochkin:2006jr,Acharya:2019vtb,_nan_2012,Danielsson:2016nyy}, where it has been shown that in the larger mass region, the photoproduction of $Q>1$ fermions/scalars could be comparable to or even dominant over the conventional DY production.
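The $Q^4$ charge scaling mentioned above is worth making explicit: each photon couples to the produced particle with strength proportional to $Q$, so the Born-level $\gamma\gamma$ amplitude scales as $Q^2$ and the cross-section as $Q^4$. A trivial numerical illustration (pure scaling factors; kinematics and photon PDFs are not included):

```python
# Relative Born-level photoproduction enhancement from the Q^4 charge scaling.
def q4_enhancement(Q):
    """Cross-section scaling factor relative to a |Q| = 1 particle."""
    return Q**4

print(q4_enhancement(2))   # doubly charged: 16x the Q=1 rate
print(q4_enhancement(3))   # triply charged: 81x the Q=1 rate
```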
Therefore, in this work, we have considered both DY and photoproduction of the (multi-)charged scalars/fermions.
After being produced at the LHC, the multi-charged scalars/fermions decay into the SM leptons and/or bosons ($W^\pm,~Z$, and Higgs). The resulting signature is characterized by multi-lepton final states (including 3, 4, 5, 6 leptons, same-sign dileptons, etc.). The decays of the (multi-)charged scalars/fermions usually proceed through the Yukawa couplings involving these new scalars/fermions and the SM leptons. It is important to note that such Yukawa couplings also contribute to the loop diagram(s) generating the neutrino masses and hence are required to be small. Therefore, depending on the choice of parameters, the total decay width of these multi-charged particles (MCPs) could be small enough ($\Gamma_{TOT}<10^{-16}$ GeV) to ensure that these particles decay outside the detector. The energy loss of a charged particle inside the detector increases quadratically with its charge \cite{Bethe:1930ku} and hence, long-lived MCPs are expected to leave a very characteristic signature of high ionization in the detector (especially in the pixel detector, transition radiation tracker and muon chamber) \cite{Aaboud:2018kbe,Aad:2013pqd,Khachatryan:2016sfv,Veeraraghavan:2013rqa}. Recently, in Ref.~\cite{Aaboud:2018kbe}, the ATLAS collaboration has presented a search for abnormally large ionization signatures to constrain scenarios with long-lived MCPs at the LHC with $\sqrt s~=~13$ TeV and 36.1 fb$^{-1}$ integrated luminosity. Such constraints may also be applicable in our model for MCPs with $\Gamma_{TOT}<10^{-16}$ GeV. On the other hand, if the total decay widths are large enough ($\Gamma_{TOT}>10^{-13}$ GeV) to ensure prompt decays of the scalars/fermions, the model gives rise to multi-lepton signatures at the LHC. We have studied the 4-lepton final state in detail. Several other new physics scenarios also give rise to a 4-lepton final state at the LHC.
This final state is particularly interesting due to negligible background contributions from the SM and hence, has already been studied by the ATLAS \cite{Aaboud:2018zeb,Aad:2014iza,ATLAS:2012kr,Aaboud:2017qph,Aad:2015dha} and CMS \cite{Sirunyan:2017lae,Khachatryan:2016iqn,Chatrchyan:2014aea,Chatrchyan:2013xsw,Chatrchyan:2012mea} collaborations in the context of different new physics scenarios. We have used the existing ATLAS searches \cite{Aaboud:2018zeb,Aaboud:2017qph} for 4-lepton final states to constrain the parameter space of this model. To enhance the reach at the LHC, we have also proposed a set of event selection criteria optimized for our model.
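The decay-width benchmarks quoted above translate into decay lengths through $\tau = \hbar/\Gamma_{TOT}$ and $L=c\tau$; a quick numerical check (boost factors are neglected, so these are proper decay lengths):

```python
# Proper decay length c*tau for the two benchmark total widths in the text.
HBAR = 6.582e-25      # reduced Planck constant in GeV * s
C = 2.998e8           # speed of light in m / s

def decay_length(width_gev):
    """Proper decay length in metres for a given total width in GeV."""
    return C * HBAR / width_gev

L_long = decay_length(1e-16)    # ~2 m: typically decays outside the tracker
L_prompt = decay_length(1e-13)  # ~2 mm: effectively prompt at the LHC
```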
This paper is organized as follows. The next section is devoted to the introduction of the model and a discussion of the generation of the neutrino masses at the 1-loop level. In section \ref{E_phenomenology}, we study the collider phenomenology of the doubly charged fermions at the LHC with 13 TeV center-of-mass energy. The production, decay, and collider signatures of the scalars are presented in section \ref{scalar_phenomenology}. We finally conclude in section \ref{conclusion}.
\section{The Model}
To realize the Weinberg operator at the 1-loop level, the model incorporates new $SU(2)_L$ singlet fermions ($E^{++}_\alpha$, where $\alpha~=~$1, 2 and 3) as well as a singlet scalar ($k^{++}$) and two doublet scalars ($\Phi_\frac{3}{2}$ and $\Phi_\frac{5}{2}$) in the framework of the SM gauge symmetry. The field content along with the gauge quantum numbers is summarized in the following:\\
\begin{table}[h!]
\centering
\begin{tabular}{c c}
\hline\hline
\multicolumn{2} {c}{$G_{\rm SM}~=~SU(3)_C \times SU(2)_L \times U(1)_Y$}\\
\hline\hline
{\bf Fermions:} & $Q_{\alpha L}~=~\begin{pmatrix} u_\alpha \\ d_\alpha \end{pmatrix}_L \sim \left(3,2,\frac{1}{6}\right)$,~~ $L_{\alpha L}~=~\begin{pmatrix} \nu_\alpha \\ e_\alpha \end{pmatrix}_L \sim \left(1,2,-\frac{1}{2}\right)$\\
& \\
& $u_{\alpha R}\sim \left(3,1,\frac{2}{3}\right)$, $d_{\alpha R}\sim \left(3,1,-\frac{1}{3}\right)$, $e_{\alpha R}\sim \left(1,1,-1\right)$\\
& \\
& $E^{++}_{\alpha L(R)}\sim \left(1,1,2\right)$ \\
& \\
{\bf Scalars:} & $H~=~\begin{pmatrix} h^{+} \\ \frac{h + i\eta}{\sqrt 2} \end{pmatrix} \sim \left(1,2,\frac{1}{2}\right)$\\
& \\
& $\Phi_{\frac{3}{2}}~=~\begin{pmatrix} \phi^{++}_{\frac{3}{2}} \\ \phi^+_{\frac{3}{2}} \end{pmatrix} \sim \left(1,2,\frac{3}{2}\right)$, ~~$\Phi_{\frac{5}{2}}~=~\begin{pmatrix} \phi^{+++}_{\frac{5}{2}} \\ \phi^{++}_{\frac{5}{2}} \end{pmatrix} \sim \left(1,2,\frac{5}{2}\right)$\\
& \\
& $k^{++}\sim\left(1,1,2\right)$\\
\hline\hline
\end{tabular}
\caption{Field content of the model along with their gauge quantum numbers: $\left(SU(3)_C,SU(2)_L,U(1)_Y\right)$ is presented where $\alpha=1,~2,~{\rm and}~3$ is the generation index and the electric charges are determined by $Q=T_3+Y.$}
\label{table:1}
\end{table}
The gauge interactions of the newly introduced scalars/fermions with the SM gauge bosons ($W^\pm,~Z$ and photon) can be obtained from the kinetic part of the Lagrangian,
\begin{eqnarray}
{\cal L}_{\rm kin} ~\supset && \left(D_\mu \Phi_{\frac{3}{2}}\right)^\dagger\left(D^\mu \Phi_{\frac{3}{2}}\right)~+~\left(D_\mu \Phi_{\frac{5}{2}}\right)^\dagger\left(D^\mu \Phi_{\frac{5}{2}}\right)~+~\left(D_\mu k^{++}\right)^\dagger\left(D^\mu k^{++}\right)\nonumber\\
&&+~\overline{E^{++}_\alpha}i\gamma^\mu D_\mu E^{++}_\alpha,
\label{lag_kin}
\end{eqnarray}
where, the gauge covariant derivative $D_\mu$ is given by $D_\mu=\partial_\mu-ig\tau^a W_\mu^a-ig^\prime Y B_\mu$ for $SU(2)_L$ doublets and $D_\mu=\partial_\mu-ig^\prime Y B_\mu$ for singlets, with $W^a_\mu$ and $B_\mu$ being the gauge bosons of $SU(2)_L$ and $U(1)_Y$, respectively. Here, $g~{\rm and}~g^\prime$ are the gauge couplings corresponding to $SU(2)_L$ and $U(1)_Y$, respectively, $Y$ is the hypercharge and $\tau^a$s are the generators of the $SU(2)_L$ doublet representation. The assignment of the gauge quantum numbers (see Table~\ref{table:1}) allows Yukawa interactions involving the doubly charged singlet fermions, the SM lepton doublets and the $Y=\frac{3}{2}\left(\frac{5}{2}\right)$ scalar doublet. The couplings involving the doubly charged singlet scalar ($k^{\pm\pm}$) and a pair of SM singlet leptons are also allowed. The relevant parts of the Yukawa Lagrangian are as follows:
\begin{eqnarray}
{\cal L}_{\rm Yukawa} ~\supset && m_E^{\alpha\beta}\overline{E^{++}_\alpha}E^{++}_\beta~+~y_{\frac{3}{2}}^{\alpha\beta}\overline{L_{\alpha L}}{\Phi}_{\frac{3}{2}}\left(E^{++}_{\beta L}\right)^C ~+~y_{\frac{5}{2}}^{\alpha\beta}\overline{L_{\alpha L}}i\sigma_2\Phi_{\frac{5}{2}}^*E^{++}_{\beta R} \nonumber\\
&&+~y_k^{\alpha\beta} \overline{e_{\alpha R}}k^{--}\left(e_{\beta R}\right)^C+{\rm h.c.},
\label{lag_yuk}
\end{eqnarray}
where, $C$ stands for charge conjugation, $\alpha~{\rm and}~\beta$ are the generation indices, $y_{\frac{3}{2}\left(\frac{5}{2}\right)}$ and $y_k$ are Yukawa matrices and $m_E$ is the mass matrix for the vector-like doubly charged fermions. All these matrices are, in general, complex $3\times 3$ matrices. However, one has the freedom to choose a basis for $E^{++}_\alpha$ in which $m_E$ is diagonal with real positive diagonal elements. It is important to note that $y_{\frac{3}{2}\left(\frac{5}{2}\right)}$ contributes to the neutrino masses at the 1-loop level. On the other hand, $y_{k}$ allows the decay of doubly charged scalars into a pair of SM charged leptons and hence determines the phenomenology at the LHC. Apart from the usual quadratic (mass) and quartic terms involving the new doublet ($\Phi_{\frac{3}{2}\left(\frac{5}{2}\right)}$) and singlet ($k^{++}$) scalars, the most general renormalizable gauge invariant scalar potential also includes the following phenomenologically important terms:
\begin{eqnarray}
{\rm V}(H,\Phi_{\frac{3}{2}},\Phi_{\frac{5}{2}},k^{++})~\supset && \mu\left(H^Ti\sigma_2{\Phi}_{\frac{3}{2}}\right)k^{--}~+~\mu^\prime\left(H^\dagger{\Phi}_{\frac{5}{2}}\right)k^{--} \nonumber\\
&&+~\lambda\left(H^Ti\sigma_2{\Phi}_{\frac{3}{2}}\right)\left(H^T\Phi^*_{\frac{5}{2}}\right) +c.c.
\label{lag_scalar}
\end{eqnarray}
The cubic and quartic terms in Eq.~\ref{lag_scalar} are not only important for generating the neutrino masses at the 1-loop level but also determine the collider signatures of this model by controlling the mixings among the doubly charged scalars ($\phi_{\frac{5}{2}}^{++},~\phi_{\frac{3}{2}}^{++}~{\rm and}~k^{++}$). After the electroweak symmetry breaking (EWSB), the mass Lagrangian for the multi-charged scalars (MCS) can be written as,
\begin{eqnarray}
{\cal L}_{\rm MASS}^{\rm MCS}~= && m_{\frac{3}{2}}^2\left(\phi^{+}_{\frac{3}{2}}\right)^\dagger\phi^{+}_{\frac{3}{2}}~+~m_{\frac{5}{2}}^2\left(\phi^{3+}_{\frac{5}{2}}\right)^\dagger\phi^{3+}_{\frac{5}{2}} \nonumber\\
&&+~\left(\begin{array}{ccc} \phi^{++}_\frac{5}{2} &\phi^{++}_\frac{3}{2} & k^{++} \end{array}\right)^*
\left(\begin{array}{ccc} m_{\frac{5}{2}}^2 & -\frac{\lambda v^2}{2} & \frac{\mu^{\prime}v}{\sqrt{2}} \\ -\frac{\lambda v^2}{2} & m_{\frac{3}{2}}^2 & -\frac{\mu v}{\sqrt{2}} \\\frac{\mu^{\prime}v}{\sqrt{2}} & -\frac{\mu v}{\sqrt{2}} & m_k^2\end{array}\right)
\left(\begin{array}{c} \phi^{++}_\frac{5}{2}\\ \phi^{++}_\frac{3}{2} \\ k^{++} \end{array}\right)\,\, ,
\label{lag_mass}
\end{eqnarray}
where $v$ is the vacuum expectation value (VEV) of the SM Higgs doublet, and $m_{\frac{3}{2}\left(\frac{5}{2}\right)}^2$ and $m_k^2$ are the coefficients of the quadratic terms involving the hypercharge $\frac{3}{2}\left(\frac{5}{2}\right)$ doublet and the doubly charged singlet, respectively. The tree-level masses of the singly ($\phi^{+}_{\frac{3}{2}}$) and triply ($\phi^{3+}_{\frac{5}{2}}$) charged scalars are given by $m_{\frac{3}{2}}$ and $m_{\frac{5}{2}}$, respectively, whereas the masses squared of the physical doubly charged scalars (denoted as $H_1^{++},~H_2^{++}~{\rm and}~H_3^{++}$) are given by the eigenvalues ($m_{H_1}^2,~m_{H_2}^2~{\rm and}~m_{H_3}^2$) of the doubly charged scalar mass matrix (denoted by $M_{\rm DCS}$ and given by the matrix in the last term of ${\cal L}_{\rm MASS}^{\rm MCS}$), which is a $3\times 3$ real symmetric matrix and hence can be diagonalized by an orthogonal matrix $O$: $O M_{\rm DCS} O^T~=~{\rm diag}\left(m_{H_1}^2,~m_{H_2}^2,~m_{H_3}^2\right)$. The physical doubly charged scalars are given by,
\begin{equation}
H_a^{++}~=~O_{a1} \phi_{\frac{5}{2}}^{++}+O_{a2} \phi_{\frac{3}{2}}^{++}+O_{a3} k^{++},
\label{mixing}
\end{equation}
where $a \in \{1,~2,~3\}$. A list of the Feynman rules relevant for the rest of the article is presented in Appendix A.
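The diagonalization in Eq.~\ref{mixing} is straightforward to carry out numerically. The sketch below (all input values are illustrative assumptions, not fit results) builds $M_{\rm DCS}$ from ${\cal L}_{\rm MASS}^{\rm MCS}$ and extracts the mixing matrix $O$ and the physical masses:

```python
import numpy as np

# Illustrative inputs (masses and mu, mu' in GeV); all values are assumptions.
m52, m32, mk = 500.0, 480.0, 450.0     # m_{5/2}, m_{3/2}, m_k
lam, mu, mup = 0.1, 50.0, 40.0         # lambda, mu, mu'
v = 246.0                              # SM Higgs VEV

# Doubly charged scalar mass-squared matrix in the (phi_{5/2}, phi_{3/2}, k) basis
M_DCS = np.array([
    [m52**2,               -lam * v**2 / 2,       mup * v / np.sqrt(2)],
    [-lam * v**2 / 2,       m32**2,              -mu  * v / np.sqrt(2)],
    [mup * v / np.sqrt(2), -mu * v / np.sqrt(2),  mk**2],
])

# Real symmetric matrix => diagonalized by an orthogonal matrix O,
# with O @ M_DCS @ O.T = diag(m_{H_1}^2, m_{H_2}^2, m_{H_3}^2).
eigvals, eigvecs = np.linalg.eigh(M_DCS)
O = eigvecs.T                 # rows give H_a^{++} in the (5/2, 3/2, k) basis
masses = np.sqrt(eigvals)     # physical masses m_{H_1} <= m_{H_2} <= m_{H_3}
```

With the mixing terms small compared to the diagonal entries, the physical states remain close to the gauge eigenstates, while larger $\mu$, $\mu'$ or $\lambda$ would induce sizeable mixing.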
\subsection{Neutrino Masses at 1-loop level}
\label{sec:nu_mass}
\begin{figure}[!t]
\centering
\begin{minipage}{0.35\textwidth}
\begin{tikzpicture}[line width=1.4 pt, scale=1.65,every node/.style={scale=1.0}]
\draw[fermion,black] (0.0,0.0) --(-0.7,-0.7);
\draw[fermion,black] (2,0) --(0,0);
\draw[fermion,black] (2,0) --(2.7,-0.7);
\draw[scalar,black] (0,0) --(0.5,2.0);
\draw[scalar,black] (0.5,2.0) --(1.5,2.0);
\draw[scalar,black] (1.5,2.0) --(2.5,2.3);
\draw[scalar,black] (1.5,2.0) --(2,0);
\draw[scalar,black] (0.5,2) --(-0.5,2.3);
\node at (1,-0.2) {$E^{--}$};
\node at (1,1.80) {$k^{--}$};
\node at (-0.1,1.0) {$\phi^{--}_{\frac{3}{2}}$};
\node at (2.2,1.0) {$\phi^{--}_{\frac{5}{2}}$};
\node at (0.5,2.2) {$\mu$};
\node at (1.5,2.2) {$\mu^\prime$};
\node at (-0.5,2.55) {$<H>$};
\node at (2.5,2.55) {$<H>$};
\node at (2.55,-0.3) {$\nu_L$};
\node at (-0.55,-0.3) {$\nu_L$};
\node at (0.1,-0.23) {$ y_{\frac{3}{2}} $};
\node at (1.9,-0.23) {$ y_{\frac{5}{2}} $};
\end{tikzpicture}
\end{minipage}
\hspace{0.2\textwidth}
\begin{minipage}{0.35\textwidth}
\begin{tikzpicture}[line width=1.4 pt, scale=1.65,every node/.style={scale=1.0}]
\draw[fermion,black] (0.0,0.0) --(-0.7,-0.7);
\draw[fermion,black] (2,0) --(0,0);
\draw[fermion,black] (2,0) --(2.7,-0.7);
\draw[scalar,black] (0,0) --(1,2.0);
\draw[scalar,black] (1,2) --(2,0);
\draw[scalar,black] (1,2) --(0.1,2.4);
\draw[scalar,black] (1,2) --(1.9,2.4);
\node at (1,-0.2) {$E^{--}$};
\node at (0.1,1.0) {$\phi^{--}_{\frac{3}{2}}$};
\node at (1.9,1.0) {$\phi^{--}_{\frac{5}{2}}$};
\node at (1,2.2) {$\lambda$};
\node at (0.1,2.55) {$<H>$};
\node at (1.9,2.55) {$<H>$};
\node at (2.55,-0.3) {$\nu_L$};
\node at (-0.55,-0.3) {$\nu_L$};
\node at (0.1,-0.23) {$ y_{\frac{3}{2}} $};
\node at (1.9,-0.23) {$ y_{\frac{5}{2}} $};
\end{tikzpicture}
\end{minipage}
\mycaption{Feynman diagrams generating neutrino masses at 1-loop level.}
\label{fd_nu_mass}
\end{figure}
In the framework of this model, the Weinberg operator is generated at the 1-loop level via the Feynman diagrams depicted in Fig.~\ref{fd_nu_mass}. Neutrinos acquire Majorana masses after the EWSB. The simultaneous presence of the Yukawa couplings $y_{\frac{3}{2}}$ and $y_{\frac{5}{2}}$ violates lepton number conservation in this model.
The contributions of the box (Fig.~\ref{fd_nu_mass} left panel) and triangle (Fig.~\ref{fd_nu_mass} right panel) diagrams to the neutrino mass matrix $m_{\nu}$ can be computed as,
\begin{equation}
\frac{ {m_{\nu}}^{\square}}{{\langle H \rangle}^2}~=~ \frac{\mu \mu^\prime}{16\,\pi^2\,v^2} \left(y_{\frac{5}{2}}^T M_\square^{-1} y_{\frac{3}{2}}+y_{\frac{3}{2}}^T M_\square^{-1} y_{\frac{5}{2}}\right)~{\rm and}~ \frac{{m_{\nu}}^{\Delta}}{{\langle H \rangle}^2}~=~\frac{\lambda}{16\,\pi^2} \left(y_{\frac{5}{2}}^T M_\Delta^{-1} y_{\frac{3}{2}}+y_{\frac{3}{2}}^T M_\Delta^{-1} y_{\frac{5}{2}}\right),
\label{nu_mass}
\end{equation}
respectively, where, $M_\square^{-1}~{\rm and}~M_\Delta^{-1}$ are $3 \times 3$ matrices given by,
\begin{eqnarray}
\left(M_\square^{-1}\right)^{\alpha \beta} & = & \sum_{a,b,c=1}^3\left(O_{a1}O_{b2}O_{c3}\right)^2 m_E^{\alpha \beta} I_4\left({m^{\alpha\beta}_E},m_{H_a},m_{H_b},m_{H_c}\right),\nonumber\\
\left(M_\Delta^{-1}\right)^{\alpha \beta} & = & \sum_{a,b=1}^3 \left(O_{a1}O_{b2}\right)^2 m_E^{\alpha \beta} I_3\left(m^{\alpha\beta}_E,m_{H_a},m_{H_b}\right).
\end{eqnarray}
$m_E^{\alpha\beta}$ is an element of the vector-like doubly charged fermion mass matrix (defined in ${\cal L}_{Yukawa}$), and $I_3(m^{\alpha\beta}_E,m_{H_a},m_{H_b})$ and $I_4(m^{\alpha\beta}_E,m_{H_a},m_{H_b},m_{H_c})$ are the loop integral factors given by,
\begin{eqnarray}
I_3(m_A,m_B,m_C) &=& \frac{m^2_A\ln\left(\frac{m^2_C}{m^2_A}\right)}{\left(m^2_A-m^2_B\right)\left(m^2_A -m^2_C\right)}+\frac{m^2_B\ln\left(\frac{m^2_C}{m^2_B}\right)}{\left(m^2_B-m^2_A\right)\left(m^2_B -m^2_C\right)},\nonumber\\
\frac{1}{v^2}\,I_4(m_A,m_B,m_C,m_D)&=&\frac{m^2_A\ln\left(\frac{m^2_D}{m^2_A}\right)}{\left(m^2_A-m^2_B\right)\left(m^2_A -m^2_C\right)\left(m^2_A -m^2_D\right)}\nonumber\\
&&+\frac{m^2_B\ln\left(\frac{m^2_D}{m^2_B}\right)}{\left(m^2_B-m^2_A\right)\left(m^2_B -m^2_C\right)\left(m^2_B -m^2_D\right)}\nonumber\\
&&+\frac{m^2_C\ln\left(\frac{m^2_D}{m^2_C}\right)}{\left(m^2_C-m^2_A\right)\left(m^2_C -m^2_B\right)\left(m^2_C -m^2_D\right)}.\nonumber
\end{eqnarray}
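For numerical scans over the parameter space, the loop factors above can be coded directly. A minimal sketch (valid only for non-degenerate masses; the degenerate limits require the usual expansions of the logarithms), which also exposes a convenient cross-check, the total symmetry of both functions under permutations of their mass arguments:

```python
import numpy as np

def I3(mA, mB, mC):
    """Triangle loop factor I_3 of the text (non-degenerate masses only)."""
    a, b, c = mA**2, mB**2, mC**2
    return (a * np.log(c / a) / ((a - b) * (a - c))
            + b * np.log(c / b) / ((b - a) * (b - c)))

def I4(mA, mB, mC, mD, v=246.0):
    """Box loop factor I_4; the text defines I_4 / v^2, hence the v^2 factor."""
    a, b, c, d = mA**2, mB**2, mC**2, mD**2
    s = (a * np.log(d / a) / ((a - b) * (a - c) * (a - d))
         + b * np.log(d / b) / ((b - a) * (b - c) * (b - d))
         + c * np.log(d / c) / ((c - a) * (c - b) * (c - d)))
    return v**2 * s
```

Although the closed-form expressions single out one mass inside the logarithms, both functions are symmetric under any permutation of their mass arguments, which provides a quick numerical sanity check of an implementation.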
It is important to note that one can always go to a particular $E^{++}_\alpha$ basis where $m_E$ is diagonal with positive entries. Therefore, in this basis, $M_\square^{-1}~{\rm and}~M_\Delta^{-1}$ are also diagonal. Defining $M_{D}~=~\frac{\langle H \rangle^2}{16\,\pi^2} \left(\frac{\mu \mu^\prime}{v^2}M_\square^{-1}+\lambda M_\Delta^{-1}\right)~=~{\rm diag}\left(M_1,M_2,M_3\right)$, the loop induced neutrino mass matrix is given by,
\begin{equation}
m_\nu~=~ \left(y_{\frac{5}{2}}^T M_D y_{\frac{3}{2}}+y_{\frac{3}{2}}^T M_D y_{\frac{5}{2}}\right)~=~U^*_{MNS}D_\nu U_{MNS}^\dagger,
\label{lowhigh}
\end{equation}
where, $D_\nu~=~{\rm diag}\left(m_1,m_2,m_3\right)$ with $m_1,~m_2~{\rm and}~m_3$ being the masses of the SM neutrinos and $U_{MNS}$ is the Pontecorvo--Maki--Nakagawa--Sakata matrix \cite{Maki:1962mu,Pontecorvo:1967fh} determined by 3 angles and 3 phases (one Dirac phase and two Majorana phases). Note that in the low-energy effective theory, the neutrino mass matrix is determined by 9 parameters (3 neutrino masses, 3 mixing angles and 3 phases), whereas the number of parameters appearing in the high-energy theory is much larger. In the basis where the SM charged lepton mass matrix and $m_E$ are diagonal, $y_{\frac{3}{2}}$ and $y_{\frac{5}{2}}$ contain 33 independent real parameters\footnote{$y_{\frac{3}{2}}$ and $y_{\frac{5}{2}}$ are $3\times 3$ complex matrices and hence, contain 18 real parameters each. However, 3 phases are unphysical and can be eliminated by a phase redefinition of the SM left handed lepton doublet ($L_{\alpha L}$).} and $m_E$ is determined by 3 additional parameters. However, not all the 36 parameters are independent in the context of the particular structure of the neutrino mass matrix in Eq.~\ref{lowhigh}. In particular, 3 more parameters can be eliminated by rescaling the columns of the Yukawa matrices $y_{\frac{3}{2}}$ and $y_{\frac{5}{2}}$. Therefore, the light neutrino mass matrix is determined by 33 effective parameters at the high scale. Since the number of parameters in the high-energy theory is much larger than the number of parameters describing low-energy neutrino phenomenology, a proper parameterization \cite{Casas:2006hf,Casas:2001sr,Cordero-Carrion:2019qtu} of the Yukawa matrices ($y_{\frac{5}{2}}$ and $y_{\frac{3}{2}}$) is required to ensure consistency of the model with the available results from the neutrino oscillation experiments. In this regard, Eq.~\ref{lowhigh} can be re-written as,
\begin{equation}
\left(D_{\sqrt \nu}^{-1}\right)^T U^T_{MNS}y_{\frac{5}{2}}^T M_{\sqrt D}^T M_{\sqrt D} y_{\frac{3}{2}} U_{MNS} D_{\sqrt \nu}^{-1}+\left(D_{\sqrt \nu}^{-1}\right)^T U^T_{MNS} y_{\frac{3}{2}}^T M_{\sqrt D}^T M_{\sqrt D} y_{\frac{5}{2}}U_{MNS} D_{\sqrt \nu}^{-1}~=~{\rm \bf I}_{3\times 3},\nonumber
\end{equation}
where, $M_{\sqrt D}~=~{\rm diag}\left(\sqrt{M_1},\sqrt{M_2},\sqrt{M_3}\right)$ and $D_{\sqrt \nu}^{-1}~=~{\rm diag}\left(m_1^{-\frac{1}{2}},m_2^{-\frac{1}{2}},m_3^{-\frac{1}{2}}\right)$. The most general form of the matrices $y_{\frac{3}{2}}$ and $y_{\frac{5}{2}}$ which is consistent with the physical, low-energy neutrino parameters, namely, the three light neutrino masses ($m_1$, $m_2$ and $m_3$) and the mixing angles as well as the phases (contained in $U_{MNS}$), is given by,
\begin{eqnarray}
M_{\sqrt D} y_{\frac{5}{2}} U_{MNS} D_{\sqrt \nu}^{-1}~=~A~~~&\Rightarrow&~~~y_{\frac{5}{2}}~=~M_{\sqrt D}^{-1}A D_{\sqrt \nu} U_{MNS}^{\dagger},\nonumber\\
M_{\sqrt D} y_{\frac{3}{2}} U_{MNS} D_{\sqrt \nu}^{-1} ~=~B~~~&\Rightarrow&~~~y_{\frac{3}{2}}~=~ M_{\sqrt D}^{-1}B D_{\sqrt \nu} U_{MNS}^{\dagger},
\label{param}
\end{eqnarray}
where, $A$ and $B$ are arbitrary $3 \times 3$ complex matrices subject to the condition $A^TB~+~B^T A~=~{\rm \bf I}_{3\times 3}$. In Eq.~\ref{param}, the 33 parameters (contained in $y_{\frac{3}{2}}$, $y_{\frac{5}{2}}$ and $M^{-1}_{\sqrt D}$) introduced at the high scale can be counted as 9 parameters in the SM neutrino sector (contained in $D_{\sqrt \nu}$ and $U_{MNS}$) plus 24 additional parameters contained in the matrices $A~{\rm and}~B$. It is important to note that in the limit $y_{\frac{3}{2}}~=~y_{\frac{5}{2}}$, the parameterization in Eq.~\ref{param} reduces to the Casas--Ibarra parameterization proposed in Refs.~\cite{Casas:2006hf,Casas:2001sr}. After introducing the model and discussing the phenomenology in the context of neutrino masses and mixings, we are now equipped to discuss the collider phenomenology of this model. We study the collider phenomenology in the context of a simplified version of this model, which is introduced in the next section.
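As a consistency check of this parameterization, one can pick any invertible $A$, set $B=\frac{1}{2}(A^T)^{-1}$ (one simple way to satisfy $A^TB+B^TA={\rm \bf I}$), build $y_{\frac{5}{2}}$ and $y_{\frac{3}{2}}$ from Eq.~\ref{param}, and verify numerically that Eq.~\ref{lowhigh} is reproduced. A minimal sketch, with illustrative numbers and a random unitary matrix standing in for $U_{MNS}$:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative low-energy inputs (units arbitrary).
m_nu_diag = np.diag([0.01, 0.05, 0.06])                  # D_nu
X = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
U, _ = np.linalg.qr(X)                                   # random unitary stand-in for U_MNS

M_sqrtD   = np.diag(np.sqrt([1.0, 2.0, 3.0]))            # sqrt(M_i), illustrative
D_sqrt_nu = np.sqrt(m_nu_diag)

A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
B = 0.5 * np.linalg.inv(A.T)                             # ensures A^T B + B^T A = I
assert np.allclose(A.T @ B + B.T @ A, np.eye(3))

inv = np.linalg.inv
y52 = inv(M_sqrtD) @ A @ D_sqrt_nu @ U.conj().T          # Eq. (param)
y32 = inv(M_sqrtD) @ B @ D_sqrt_nu @ U.conj().T

M_D  = M_sqrtD @ M_sqrtD                                 # diag(M_1, M_2, M_3)
m_nu = y52.T @ M_D @ y32 + y32.T @ M_D @ y52             # Eq. (lowhigh)
assert np.allclose(m_nu, U.conj() @ m_nu_diag @ U.conj().T)
```

The final assertion checks that the constructed Yukawa matrices reproduce $m_\nu = U^*_{MNS}D_\nu U^\dagger_{MNS}$ exactly, for any admissible choice of $A$ and $B$.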
\subsection{Simplified version of the model for collider phenomenology}
In the previous section, we considered 3 generations of doubly charged singlet fermions, which are required to explain the observed data from neutrino oscillation experiments. However, in the context of collider phenomenology, we can easily restrict ourselves to the case of only one doubly charged singlet fermion (denoted by $E^{++}$). Note that in the presence of more than one doubly charged fermion, it is the lightest one which contributes dominantly to the collider signatures. With only one generation of doubly charged fermion in the scenario, the $3 \times 3$ Yukawa matrices $y_{\frac{3}{2}}$ and $y_{\frac{5}{2}}$ in Eq.~\ref{lag_yuk} reduce to $1\times 3$ vectors,
\begin{equation}
y_{\frac{5}{2}}~=~\left(y_{\frac{5}{2}}^{eE},y_{\frac{5}{2}}^{\mu E},y_{\frac{5}{2}}^{\tau E}\right),~~~{\rm and}~~~y_{\frac{3}{2}}~=~\left(y_{\frac{3}{2}}^{eE},y_{\frac{3}{2}}^{\mu E},y_{\frac{3}{2}}^{\tau E}\right), \nonumber
\end{equation}
and the doubly charged fermion mass matrix $m_E$ is now a scalar. We further assume real\footnote{We do not consider the phases of the Yukawa couplings ($f_{\frac{5}{2}\left(\frac{3}{2}\right)}$ and $y_{k}$) nor those of the PMNS matrix, $U_{MNS}$. Note that the phases of the Yukawa couplings do not play any significant role in the context of collider phenomenology.}
\begin{figure}[!t]
\centering
\includegraphics[width=0.7\linewidth]{plots/neutrino/nu_mass.pdf}
\mycaption{The regions of the parameter space ($\mu$--$\lambda$ plane) which are consistent with the upper bound ($\sim 0.2$ eV) on the absolute neutrino mass scale are depicted for three different values of the Yukawa couplings. For simplicity, we have assumed $\mu~=~\mu^\prime$ and $f~=~f_{\frac{5}{2}}~=~f_{\frac{3}{2}}$.}
\label{fig:nu_bound}
\end{figure}
$y_{\frac{5}{2}}$, $y_{\frac{3}{2}}$ and $y_{k}$ which couple to all three generations of the SM leptons with equal coupling strength (lepton flavour universality) {\em i.e.,}
\begin{equation}
y_{\frac{5}{2}\left(\frac{3}{2}\right)}^{eE}~=~y_{\frac{5}{2}\left(\frac{3}{2}\right)}^{\mu E}~=~y_{\frac{5}{2}\left(\frac{3}{2}\right)}^{\tau E}~=~f_{\frac{5}{2}\left(\frac{3}{2}\right)}~~~{\rm and}~~~y_{k}~=~f_{k}\,{\bf \rm I}_{3\times 3}, \nonumber
\end{equation}
where $f_{\frac{5}{2}\left(\frac{3}{2}\right)}$ and $f_{k}$ are real numbers. The loop induced neutrino masses (see Eq.~\ref{nu_mass}) are determined in terms of the Yukawa couplings $f_{\frac{5}{2}\left(\frac{3}{2}\right)}$, the tri-linear ($\mu$ and $\mu^\prime$) and the quartic ($\lambda$) scalar couplings (introduced in the scalar potential in Eq.~\ref{lag_scalar}) as well as the masses of the heavy fermions/scalars. Therefore, the allowed values of $f_{\frac{5}{2}\left(\frac{3}{2}\right)}$, $\mu$, $\mu^\prime$ and $\lambda$ are constrained by the upper limit on the absolute neutrino mass scale ($\sum m_{\nu_{i}}~=~m_1+m_2+m_3$), defined as the sum of the masses of the neutrino mass eigenstates. Cosmological observations provide the strongest upper bound of about 0.2 eV \cite{Lesgourgues:2014zoa,Capozzi:2017ipn} on $\sum m_{\nu_{i}}$. For TeV scale masses of the heavy fermion ($E^{++}$) and scalars ($\phi_{\frac{5}{2}}^{3+}$, $\phi_{\frac{3}{2}}^{+}$ and $H_i^{++}$), $M_{\square}^{-1}$ and $M_{\Delta}^{-1}$ in Eq.~\ref{nu_mass} are of the order of TeV$^{-1}$ and hence, keeping $m_\nu^{\square}$ and $m_\nu^\Delta$ lighter than 0.1 eV requires $\frac{\mu \mu^\prime}{16\,\pi^2\,v^2}f_{\frac{5}{2}}f_{\frac{3}{2}}$ and $\frac{\lambda}{16\,\pi^2} f_{\frac{5}{2}}f_{\frac{3}{2}}$, respectively, to be smaller than $10^{-12}$. Assuming $\mu~=~\mu^{\prime}$ and $f_{\frac{5}{2}}~=~f_{\frac{3}{2}}$, in Fig.~\ref{fig:nu_bound}, we have depicted the region of the parameter space ($\mu$--$\lambda$ plane) which is consistent with the upper bound on the absolute neutrino mass scale. In Fig.~\ref{fig:nu_bound}, we have assumed three different values of $f_{\frac{5}{2}\left(\frac{3}{2}\right)}$ of the same order of magnitude as the SM charged lepton Yukawa couplings. It can be seen from Fig.~\ref{fig:nu_bound} that only small values of $\mu$ and $\lambda$ are allowed. This has important consequences on the scalar mass spectrum and hence, on the collider phenomenology of this model.
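The $10^{-12}$ criterion can be checked with a back-of-the-envelope estimate. The sketch below assumes $\langle H\rangle = 174$ GeV and $M_{\square}^{-1}\sim M_{\Delta}^{-1}\sim 1~{\rm TeV}^{-1}$ (both illustrative assumptions consistent with the TeV-scale spectrum considered here):

```python
vev_H     = 174.0     # <H> in GeV (assumed normalization)
M_inv     = 1.0e-3    # loop factor M^{-1} ~ 1 TeV^{-1}, in GeV^{-1} (assumed)
GEV_TO_EV = 1.0e9

def m_nu_estimate(combo):
    # combo = (mu mu'/(16 pi^2 v^2)) f_{5/2} f_{3/2}, or the lambda analogue;
    # the factor 2 accounts for the two terms in Eq. (nu_mass).
    return 2.0 * combo * vev_H**2 * M_inv * GEV_TO_EV   # result in eV

m_est = m_nu_estimate(1.0e-12)   # boundary value quoted in the text
print(f"m_nu ~ {m_est:.3f} eV")  # below the ~0.1 eV scale, as stated
```

The estimate confirms that dimensionless combinations at the $10^{-12}$ level keep the loop-induced masses below the 0.1 eV scale quoted above.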
\begin{figure}
\begin{center}
\begin{minipage}{.49\textwidth}
\centering
\begin{tikzpicture}[line width=1.4 pt, scale=1.65,every node/.style={scale=1.0}]
\draw[fermion,black] (-1.0,1.0) --(0.0,0.0);
\draw[fermion,black] (-1.0,-1.0) --(0.0,0.0);
\draw[vector,black] (0,0.0) --(1.0,0.0);
\draw[fermion,black] (1,0.0) --(1.5,1.0);
\draw[fermion,black] (1,0.0) --(1.5,-1.0);
\draw[fermion,black] (1.5,1.0) --(2.0, 1.0);
\draw[fermion,black] (1.5,1.0) --(2.0,1.5);
\draw[fermion,black] (1.5,1.0) --(2.0,0.5);
\draw[fermion,black] (1.5,-1.0) --(2.0,-1.0);
\draw[fermion,black] (1.5,-1.0) --(2.0,-1.5);
\draw[fermion,black] (1.5,-1.0) --(2.0,-0.5);
\node at (-0.5,0.8) {q};
\node at (-0.5,-0.8) {$\bar{q}$};
\node at (0.5,0.25) {$ \gamma / Z$};
\node at (1.1,0.6) {$E^{++}$};
\node at (1.1,-0.6) {$E^{--}$};
\node at (1.7,1.4) {$e^{+}$};
\node at (1.7,0.6) {$e^{+}$};
\node at (2.1,0.9) {$\nu$};
\node at (1.7,-1.4) {$e^{-}$};
\node at (1.7,-0.6) {$e^{-}$};
\node at (2.1,-1.1) {$\nu$};
\end{tikzpicture}
\end{minipage}
\begin{minipage}{.49\textwidth}
\centering
\begin{tikzpicture}[line width=1.4 pt, scale=1.65,every node/.style={scale=0.9}]
\draw[vector,black] (-1.0,0.0) --(0.0,0.0);
\draw[vector,black] (-1.0,2.0) --(0.0,2.0);
\draw[fermion,black] (0.0,0.0) --(0.0,2.0);
\draw[fermion,black] (0.0,2.0) --(1.0,2.0);
\draw[fermion,black] (0,0.0) --(1.0,0.0);
\draw[fermion,black] (1.0,2.0) --(1.5,2.5);
\draw[fermion,black] (1.0,2.0) --(1.5,1.5);
\draw[fermion,black] (1.0,2.0) --(1.5,2.0);
\draw[fermion,black] (1.0,0.0) --(1.5,-0.5);
\draw[fermion,black] (1.0,0.0) --(1.5,0.5);
\draw[fermion,black] (1.0,0.0) --(1.5,0.0);
\node at (-0.5,2.2) {$\gamma$};
\node at (-0.5,0.2) {$\gamma$};
\node at (0.3,1.0) {$E^{++}$};
\node at (0.5,2.2) {$E^{++}$};
\node at (0.5,0.2) {$E^{--}$};
\node at (1.2,0.4) {$e^{-}$};
\node at (1.2,-0.4) {$e^{-}$};
\node at (1.55,0.1) {$\nu$};
\node at (1.2,2.4) {$e^{+}$};
\node at (1.2,1.6) {$e^{+}$};
\node at (1.55,1.9) {$\nu$};
\end{tikzpicture}
\end{minipage}
\end{center}
\caption{Feynman diagrams showing the Drell-Yan (left panel) and photo-fusion (right panel) pair production of doubly charged fermion and their subsequent decay into the SM leptons.}
\label{prod}
\end{figure}
At tree level, the mass of the doubly charged fermion ($E^{++}$) is given by the parameter $m_E$. Being charged under the $U(1)_Y$, $m_E$ receives radiative corrections from the loops involving the photon and the $Z$-boson. The Yukawa interactions in Eq.~\ref{lag_yuk} also contribute to the radiative corrections via loops involving a heavy scalar and a SM lepton. However, these corrections are suppressed by the Yukawa couplings. The corrections involving the SM gauge bosons in the loop are also estimated to be small (of the order of a few hundred MeV) \cite{Cirelli:2005uq}. Therefore, one can safely neglect the radiative corrections to $m_E$ in the context of the collider analysis. The tree level masses for the singly ($\phi_{\frac{3}{2}}^\pm$) and triply ($\phi_{\frac{5}{2}}^{3\pm}$) charged scalars are given by $m_{\phi^\pm}~=~m_{\frac{3}{2}}$ and $m_{\phi^{3\pm}}~=~m_{\frac{5}{2}}$, respectively, whereas, the masses of the physical doubly charged ($H_i^{\pm\pm}$) scalars are given by the eigenvalues of the mass matrix in Eq.~\ref{lag_mass}. Since the allowed values of $\lambda$ and $\mu$ (see Fig.~\ref{fig:nu_bound}) are constrained from the upper bound on the absolute neutrino mass scale, the mixing between the doubly charged scalars ($\phi_{\frac{5}{2}\left(\frac{3}{2}\right)}^{\pm\pm}$ and $k^{\pm\pm}$) is small and hence, $H_1^{\pm\pm}$, $H_{2}^{\pm\pm}$ and $H_{3}^{\pm\pm}$ are dominantly $\phi_{\frac{5}{2}}^{\pm\pm}$, $\phi_{\frac{3}{2}}^{\pm\pm}$ and $k^{\pm\pm}$, respectively. Mixing between the doubly charged scalars gives rise to a small mass splitting between the triply (singly) charged scalar $\phi_{\frac{5}{2}}^{3\pm}(\phi_{\frac{3}{2}}^{\pm})$ and the corresponding doubly charged scalar $H_1^{++}(H_2^{++})$. At the leading order in $\lambda$ and $\mu$, the mass splitting can be calculated as,
\begin{equation}
m_{\phi^{3\pm(\pm)}}-m_{H_{1(2)}^{\pm\pm}}~\approx~ \frac{v^2}{8\,m_{\frac{5}{2}\left(\frac{3}{2}\right)}}\,\,\left(\frac{4\mu^2}{m_{k}^2-m_{\frac{5}{2}\left(\frac{3}{2}\right)}^2}\,\, + \,\,\frac{\lambda^2v^2}{m_{\frac{3}{2}\left(\frac{5}{2}\right)}^2-m_{\frac{5}{2}\left(\frac{3}{2}\right)}^2}\right)\,\, ,
\label{splitting}
\end{equation}
for $m_{\frac{5}{2}} ~\neq~m_{\frac{3}{2}}~<~m_k$. For TeV scale masses of the heavy fermion/scalars and $\lambda\,v~\sim~\mu ~\sim~10^2$ GeV (the largest possible values consistent with the bound on the absolute neutrino mass scale), the mass splittings (due to the mixing between doubly charged scalars) between $\phi^{3\pm}$ and $H_{1}^{\pm\pm}$ as well as $\phi^{\pm}$ and $H_{2}^{\pm\pm}$ are estimated to be of the order of a few tens of MeV. Radiative corrections also contribute to mass splittings between different components of the scalar doublets. In the context of the present scenario, the mass splittings due to radiative corrections are estimated to be of the order of GeV~\cite{Cirelli:2005uq}. In particular, the radiative splitting between the triply- and doubly-charged component of the $Y=\frac{5}{2}$ doublet is estimated to be $m_{\phi^{3\pm}}-m_{H_1^{\pm\pm}}~\sim~\frac{5}{2}\alpha_{EM}\, m_Z~\sim~1.8$ GeV. Whereas, the loop induced splitting between the doubly- and singly-charged component of the $Y=\frac{3}{2}$ doublet is given by $m_{H_2^{\pm\pm}}-m_{\phi^{\pm}}~\sim~\frac{3}{2}\alpha_{EM}\, m_Z~\sim~1.1$ GeV. These splittings are small, though they play a crucial role in determining the decays of the (multi-)charged scalars which will be discussed in detail in section~\ref{scalar_phenomenology}. To summarize, the scalar spectrum of this model contains a degenerate pair of triply ($\phi^{3\pm}$) and doubly ($H_{1}^{\pm\pm}$) charged scalars ($m_{\phi^{3\pm}}~\approx~m_{H_{1}^{\pm\pm}}~\approx~m_{\frac{5}{2}}$), another degenerate pair of singly ($\phi^{\pm}$) and doubly ($H_{2}^{\pm\pm}$) charged scalars ($m_{\phi^{\pm}}~\approx~m_{H_{2}^{\pm\pm}}~\approx~m_{\frac{3}{2}}$) and a doubly charged scalar, $H_{3}^{\pm\pm}$ ($m_{H_{3}^{\pm\pm}}~\approx~m_{k}$).
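Both the mixing-induced splitting of Eq.~\ref{splitting} and the radiative estimates are easy to reproduce numerically. In the sketch below, $\alpha_{EM}(m_Z)\approx 1/128$ is assumed for the radiative piece, and illustrative inputs $\mu=\lambda v=30$ GeV (smaller than the largest allowed values) are used for the tree-level piece:

```python
alpha_mZ = 1.0 / 128.0   # EM coupling at the Z pole (assumed value)
mZ = 91.19               # GeV
v  = 246.0               # GeV; assumed normalization of v in Eq. (splitting)

def tree_splitting(m_this, m_other, m_k, mu, lam):
    # Mixing-induced splitting at leading order in lambda and mu, Eq. (splitting), in GeV.
    return (v**2 / (8.0 * m_this)) * (
        4.0 * mu**2 / (m_k**2 - m_this**2)
        + lam**2 * v**2 / (m_other**2 - m_this**2))

# Illustrative inputs: m_{5/2}=1200, m_{3/2}=1400, m_k=1500 GeV, mu = lambda*v = 30 GeV.
dm_tree = tree_splitting(1200.0, 1400.0, 1500.0, 30.0, 30.0 / v)

# Radiative estimates quoted in the text.
dm_52 = 2.5 * alpha_mZ * mZ   # Y=5/2 doublet, ~1.8 GeV
dm_32 = 1.5 * alpha_mZ * mZ   # Y=3/2 doublet, ~1.1 GeV
```

For these inputs the tree-level splitting comes out at the few-tens-of-MeV level, well below the GeV-scale radiative splittings, in line with the discussion above.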
After introducing the phenomenological model, we are now equipped to discuss its signatures at the LHC. However, before going into the discussion of collider signatures, it is important to discuss the low-energy constraints on the parameter space of this model resulting from observables such as lepton-flavor violating decays, the muon $g-2$ and the oblique parameters. It has been shown in Ref.~\cite{Cheung:2017kxb} that the discrepancy between the experimental measurement and the SM prediction of the muon magnetic moment $(g-2)_\mu$ \cite{Davier:2010nc,Aoyama:2012wk,Keshavarzi:2018mgv,Hagiwara:2011af} can be explained for a substantial part of the parameter space while satisfying the experimental upper bounds \cite{Adam:2013mnn,TheMEG:2016wtm,Lindner:2016bgg,Meucci:2019jog} on lepton-flavor violating decays such as $\mu~\to~e \gamma$, $\tau~\to~e \gamma$ and $\tau~\to~\mu \gamma$. The contributions to the oblique parameters \cite{Cheung:2017kxb}, on the other hand, are automatically suppressed due to the degeneracy between $\phi^{3\pm}_{\frac{5}{2}}(\phi^{\pm}_{\frac{3}{2}})$ and $H_{1}^{\pm\pm}(H_{2}^{\pm\pm})$.
\section{Phenomenology of doubly charged fermion}
\label{E_phenomenology}
\begin{figure}[!t]
\centering
\includegraphics[width=0.48 \linewidth]{plots/E/cross_DY_PP_E.pdf}
\includegraphics[width=0.48 \linewidth]{plots/E/cross_tot_E.pdf}
\mycaption{(Left panel) Drell-Yan ($\sigma_{q\bar q}$) and photon-fusion ($\sigma_{\gamma\gamma}$) contributions to the pair production of doubly charged fermions are presented as a function of $m_E$ at the LHC with 13 TeV center-of-mass energy. (Right panel) The model prediction for the total ($\sigma_{q\bar q}+\sigma_{\gamma\gamma}$) production cross-section of $E^{\pm\pm}$-pairs is presented. Inset shows the ratio of photon-fusion and Drell-Yan contribution. The gray solid line corresponds to the ATLAS observed 95\% CL upper limit on the pair production cross-section ($\sigma_{\rm Obs}^{95}$) of long-lived doubly-charged particles (DCPs) \cite{Aaboud:2018kbe}. }
\label{cross_E}
\end{figure}
In this section, we will discuss the collider signatures of the doubly charged fermion ($E^{\pm\pm}$). The Lagrangian (see Eq.~\ref{lag_kin},~\ref{lag_yuk} and \ref{lag_scalar}) of this model does not allow single production\footnote{The doubly charged fermion can be singly produced in association with a SM lepton and a heavy multi-charged scalar via the Yukawa interactions in Eq.~\ref{lag_yuk}. Such single production cross-sections are suppressed by the small Yukawa couplings as well as the additional propagator and $2~\to ~3$ phase-space. However, we do not consider such processes as the single production of $E^{\pm\pm}$ because of the presence of a heavy multi-charged scalar in the final state.} of $E^{\pm\pm}$. However, the doubly charged fermion can be pair produced via the gauge interactions with the SM photon and $Z$-boson (see Appendix~\ref{feyn}). The pair production of $E^{\pm\pm}$ at hadron colliders takes place via the quark anti-quark initiated Drell-Yan (DY) process with a photon ($\gamma$) or $Z$-boson in the $s$-channel as shown in Fig.~\ref{prod} (left panel). Being electrically charged, $E^{\pm\pm}$ can also be pair produced via the photo-fusion ($\gamma \gamma \to E^{++}E^{--}$) process as shown in Fig.~\ref{prod} (right panel). The photo-fusion (PF) of $E^{\pm\pm}$ pairs takes place via the exchange of an $E^{\pm\pm}$ in the $t/u$-channel and hence, is relatively less suppressed by the parton center-of-mass energy as compared to the $s$-channel DY production. Moreover, $E^{\pm\pm}$ being a doubly charged particle, the PF cross-section is enhanced by a factor of $2^4$ at the Born level. However, it is also important to note that photons, being electromagnetically interacting, are associated with a small parton density at the LHC. In fact, the parton density of the photon is so small that most of the older versions of parton distribution functions (PDFs) do not include the photon as a parton.
However, the inclusion of the photon as a parton with an associated parton distribution function is necessary if one wants to include QED corrections to the PDFs. In the era of precision physics at the LHC, when PDFs are determined up to NNLO in QCD, NLO QED corrections are equally important for the consistency of calculations. Moreover, for some processes, PF could become significant (or even dominant in some cases) at high energies. In view of these facts, different groups (NNPDF, MRST, CTEQ, {\em etc.} \cite{Ball:2014uwa,Ball:2013hta,Martin:2004dh,Schmidt:2015zda}) have already included the photon as a parton with an associated parton distribution function in their PDF sets. In this work, we have considered both DY and PF pair production of the doubly charged fermion. The total pair production cross-section at the LHC is given by,
\begin{eqnarray}
\sigma(pp~\to X\bar X)~=&&\int dx_1 dx_2 \, f_{\gamma/p}\left(x_1,\mu_F^2\right)f_{\gamma/p}\left(x_2,\mu_F^2\right)\, \hat{\sigma}_{\gamma\gamma}\nonumber\\
&+&\sum_{q,\bar q}\int dx_1 dx_2 \, f_{q/p}\left(x_1,\mu_F^2\right)f_{\bar q/p}\left(x_2,\mu_F^2\right)\, \hat{\sigma}_{q \bar q},
\label{cross_section}
\end{eqnarray}
where, $f_{i/p}\left(x,\mu_F^2\right)$ denotes the parton distribution function of the i$^{\rm th}$ parton, $\hat \sigma_{\gamma \gamma}$ and $\hat \sigma_{q \bar q}$ are the partonic PF and DY pair production cross-sections, respectively, and $\mu_F$ is the factorization scale. At the leading order, $\hat \sigma_{\gamma\gamma}\left(\gamma \gamma \to E^{++}E^{--}\right)$ and $\hat \sigma_{q\bar q}\left(q \bar q \to E\bar E\right)$ are given by,
\begin{eqnarray}
\frac{d\hat \sigma_{q\bar q}^{E\bar E}}{d\Omega}&=&\frac{\alpha_{EM}^2}{3 \hat s^3}\left[\left(Q_q-\frac{T_{3,q}-Q_q{\rm sin}^2\theta_W}{{\rm cos}^2\theta_W}\frac{\hat s}{\hat s - m_Z^2}\right)^2+Q_q^2\left(1+{\rm tan}^2\theta_W\frac{\hat s}{\hat s - m_Z^2}\right)^2\right]\nonumber\\
&&\times \sqrt{1-\frac{4m_E^2}{\hat s}}\left[\left(m_E^2-\hat u\right)^2+\left(m_E^2-\hat t\right)^2+2m_E^2\hat s\right],\nonumber\\
\frac{d\hat \sigma_{\gamma \gamma}^{E\bar E}}{d\Omega}&=&\frac{8\alpha_{EM}^2}{ \hat s}\sqrt{1-\frac{4m_E^2}{\hat s}}\left[\frac{\hat s\left(\hat s+4m_E^2\right)-8m_E^4}{\left(m_E^2-\hat t\right)\left(m_E^2-\hat u\right)}-\frac{4m_E^4}{\left(m_E^2-\hat t\right)^2}-\frac{4m_E^4}{\left(m_E^2-\hat u\right)^2}-2\right],
\label{cross_section_fermion}
\end{eqnarray}
where, $\hat s$, $\hat t$ and $\hat u$ are the usual Mandelstam variables, $\alpha_{EM}$ and $\theta_W$ are the fine-structure constant and the Weinberg angle, respectively, whereas, $Q_q$ and $T_{3,q}$ refer to the charge and the 3$^{\rm rd}$ component of isospin of the corresponding quarks in the initial state.
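As an illustration, the DY expression above can be integrated over the solid angle to obtain the partonic cross-section. The sketch below uses the formula as written (up to its overall normalization conventions), with $\sin^2\theta_W=0.231$ and $\alpha_{EM}=1/128$ assumed, and exhibits the threshold suppression by the velocity factor:

```python
import math

alpha = 1.0 / 128.0   # assumed EM coupling
sw2   = 0.231         # assumed sin^2(theta_W)
mZ2   = 91.19**2      # Z mass squared, GeV^2

def dy_dsigma_dcos(shat, mE, cth, Qq, T3q):
    # Differential DY cross-section of Eq. (cross_section_fermion);
    # the trivial phi integral supplies the factor 2*pi.
    beta = math.sqrt(max(0.0, 1.0 - 4.0 * mE**2 / shat))
    t = mE**2 - 0.5 * shat * (1.0 - beta * cth)
    u = mE**2 - 0.5 * shat * (1.0 + beta * cth)
    zprop = shat / (shat - mZ2)
    cw2 = 1.0 - sw2
    c1 = (Qq - (T3q - Qq * sw2) / cw2 * zprop)**2
    c2 = Qq**2 * (1.0 + (sw2 / cw2) * zprop)**2
    pref = alpha**2 / (3.0 * shat**3) * (c1 + c2) * beta
    return 2.0 * math.pi * pref * ((mE**2 - u)**2 + (mE**2 - t)**2 + 2.0 * mE**2 * shat)

def sigma_hat_dy(shat, mE, Qq=2.0/3.0, T3q=0.5, n=2000):
    # Midpoint-rule integration over cos(theta); default charges: up-type quark.
    h = 2.0 / n
    return sum(dy_dsigma_dcos(shat, mE, -1.0 + (i + 0.5) * h, Qq, T3q) * h
               for i in range(n))
```

At threshold ($\hat s = 4m_E^2$) the velocity factor vanishes and so does the cross-section, while above threshold it is positive; the $1/\hat s$ fall-off at large $\hat s$ is what lets the $t/u$-channel PF contribution eventually dominate.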
We have developed a parton-level Monte-Carlo computer program for the numerical evaluation of the integration in Eq.~\ref{cross_section}. We have used the {\bf NNPDF23LO} \cite{Ball:2014uwa} parton distribution functions with the factorization ($\mu_F$) and renormalization scales kept fixed at the subprocess center-of-mass energy $\sqrt {\hat s}$. In the left panel of Fig.~\ref{cross_E}, we have presented the DY ($\sigma_{q\bar q}$) and the PF ($\sigma_{\gamma \gamma}$) production cross-section of $E\bar E$-pairs as a function of the doubly charged fermion mass ($m_E$) at the LHC with 13 TeV center-of-mass energy. The ensuing total ($\sigma_{q\bar q}+\sigma_{\gamma \gamma}$) pair production cross-section is presented in the right panel of Fig.~\ref{cross_E}. In the inset of Fig.~\ref{cross_E} (right panel), we have presented the ratio of the PF and the DY production cross-sections as a function of $m_E$. Fig.~\ref{cross_E} (left panel and also the inset of the right panel) shows that the DY production rate is larger than the PF production rate for a doubly charged fermion mass lighter than about 800 GeV. In this region, PF production constitutes a relatively large fraction (about 10\% to 50\% depending on the value of $m_E$) of the total cross-section and hence, cannot be neglected. For large doubly charged fermion masses ($m_E > 800$ GeV), DY production, being an $s$-channel process, suffers larger suppression compared to the $t/u$-channel PF production and hence, $\sigma_{\gamma \gamma}$ dominates over $\sigma_{q\bar q}$. The total pair production cross-section varies from a few pb to 0.1 fb as we vary $m_E$ over the range 100--1400 GeV. Once produced, the doubly charged fermion decays (directly or via a cascade involving heavy multi-charged scalars) into the SM leptons and/or gauge bosons giving rise to multi-lepton final states at the LHC.
\subsection{Decay of $E^{\pm\pm}$}
\label{sec:decay_E}
The decays of the doubly charged fermion, which will be discussed in this section, play a crucial role in determining the signatures of $E^{\pm\pm}$ at the LHC. The Yukawa interactions in Eq.~\ref{lag_yuk} result in couplings involving a doubly charged fermion, a multi-charged scalar and a SM lepton (see Appendix~\ref{feyn}). Therefore, if kinematically allowed ($m_E~>~m_{\frac{5}{2}}$ and/or $m_{\frac{3}{2}}$), $E^{\pm\pm}$ undergoes 2-body decays into a multi-charged scalar in association with a SM lepton: $E^{\pm\pm}~\to~\phi^{\pm} l^{\pm},~\phi^{3\pm} l^{\mp}$ and $H_{a}^{\pm\pm}\nu_l$, where $l$ includes all three generations of the SM leptons namely, electron ($e$), muon ($\mu$) and tau ($\tau$), and $\nu_l$ is the $l$-neutrino. The partial decay widths for the 2-body decay modes of $E^{\pm\pm}$ are given by,
\begin{eqnarray}
&&\Gamma\left({E^{\pm\pm}~\to~ \phi^{\pm(3\pm)}+l^{\pm(\mp)}}\right)~=~\frac{\left|f_{\frac{3}{2}\left(\frac{5}{2}\right)}\right|^2}{32 \pi}\,m_E\, \left(1-\frac{m^2_{\frac{3}{2}\left(\frac{5}{2}\right)}}{m^2_E}\right)^2,\nonumber\\
&&\Gamma\left({E^{\pm\pm}~\to~ H_a^{\pm\pm}+\nu_l}\right)~=~\frac{\left|f_{\frac{5}{2}}O_{a1}\right|^2+\left|f_{\frac{3}{2}}O_{a2}\right|^2}{32 \pi}\, m_E \, \left(1-\frac{m^2_{H_a}}{m_E^2}\right)^2,
\end{eqnarray}
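For the 2-body modes, the partial widths and the corresponding prompt-decay criterion can be evaluated directly. A minimal sketch, with illustrative inputs of the same size as the benchmark used in Fig.~\ref{br_E}:

```python
import math

def gamma_2body(f, mE, m_scalar):
    # Partial width for E^±± -> (multi-)charged scalar + SM lepton, in GeV,
    # following the expression in the text.
    if mE <= m_scalar:
        return 0.0   # kinematically forbidden
    return abs(f)**2 / (32.0 * math.pi) * mE * (1.0 - m_scalar**2 / mE**2)**2

# Illustrative benchmark: f = 2e-4, m_E = 1600 GeV, m_{5/2} = 1200 GeV.
width = gamma_2body(2.0e-4, 1600.0, 1200.0)
prompt = width > 1.0e-13   # prompt-decay criterion used in the text
```

Even for Yukawa couplings as small as $2\times 10^{-4}$, the 2-body width comfortably exceeds the $10^{-13}$ GeV threshold for prompt decays.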
where, $O_{a1}$ and $O_{a2}$ are the elements of doubly charged scalar mixing matrix defined in Eq.~\ref{mixing}.
\begin{figure}
\begin{center}
\begin{tikzpicture}[line width=1.4 pt, scale=1.65,every node/.style={scale=0.9}]
\draw[fermion,black] (-1.5,0.0) --(0.0,0.0);
\draw[fermion,black] (0.0,0.0) --(0.8,0.6);
\draw[scalar,black] (0.0,0.0) --(1.5,-1.0);
\draw[fermion,black] (1.5,-1.0) --(2.5,-0.5);
\draw[fermion,black] (1.5,-1.0) --(2.5,-1.5);
\node at (-0.5,0.2) {$E^{\pm\pm}$};
\node at (0.85,0.5) {$\nu_l$};
\node at (0.5,-0.8) {${H_a}^{\pm\pm}$};
\node at (2.0,-0.42) {$l^{\pm}$};
\node at (2.0,-1.45) {$l^{\pm}$};
\end{tikzpicture}
\end{center}
\caption{Feynman diagram showing the tree-level 3-body decay of the doubly charged fermion ($E^{\pm\pm}$).}
\label{3-body}
\end{figure}
If the 2-body decays of $E^{\pm\pm}$ are kinematically forbidden, {\em i.e.,} $m_E~<~m_{\frac{3}{2}\left(\frac{5}{2}\right)}$, $E^{\pm\pm}$ undergoes tree-level 3-body decays into a neutrino in association with a pair of same-sign SM charged leptons. The 3-body decays proceed through an off-shell doubly charged scalar, as depicted in Fig.~\ref{3-body}. The partial decay width of the 3-body decay is given by,
\begin{equation}
\Gamma\left({E^{\pm\pm}~\to~ \nu_{l^\prime}l^{\pm}l^{\pm}}\right)~=~\frac{f_k^2}{512\pi^3}\, m_E\,\sum_{a=1}^3\,O_{a3}^2\left(\left|f_{\frac{5}{2}}O_{a1}\right|^2+\left|f_{\frac{3}{2}}O_{a2}\right|^2\right)\,I\left(\frac{m_{H_a}^2}{m_E^2}\right),
\label{eq:3-body}
\end{equation}
where,
\begin{equation}
I(x)~=~\int_0^1d\xi_1\int_0^1d\xi_2\,\frac{\xi_2\left(1-\xi_2\right)^2}{\left(\xi_2-x\right)^2}.\nonumber
\end{equation}
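Since the integrand has no $\xi_1$ dependence, $I(x)$ reduces to a one-dimensional integral that is regular for $x>1$ ({\em i.e.,} for an off-shell $H_a$, $m_{H_a}>m_E$). A simple midpoint-rule sketch:

```python
def I_phase_space(x, n=20000):
    # I(x) from the text: the xi_1 integral is trivial (equals 1), leaving a
    # one-dimensional xi_2 integral that is regular for x > 1 (off-shell H_a).
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        xi2 = (i + 0.5) * h
        total += xi2 * (1.0 - xi2)**2 / (xi2 - x)**2 * h
    return total

# Example: x = (m_{H_a}/m_E)^2 for m_{H_a} = 1500 GeV, m_E = 800 GeV.
suppression = I_phase_space((1500.0 / 800.0)**2)
```

As expected, $I(x)$ decreases monotonically with $x$: a heavier off-shell mediator suppresses the 3-body width further.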
The branching ratios for the different decay modes of $E^{\pm\pm}$ are presented in Fig.~\ref{br_E} as a function of $m_E$. We have assumed $f_{\frac{5}{2}}~=~f_{\frac{3}{2}}~=~2\times 10^{-4}$ and the other parameters are given by $m_{\frac{5}{2}\left(\frac{3}{2}\right)}=1.2(1.4)~{\rm TeV},~m_{k}=1.5~{\rm TeV},~\mu=\mu^{\prime}=100~{\rm GeV},~\lambda=5\times 10^{-3}$ and $f_k=1$\footnote{The Yukawa couplings involving a doubly charged singlet scalar and two SM right-handed leptons are not constrained from the upper bound on the absolute neutrino mass scale and hence, could be large. In fact, large $f_k$ is required to ensure prompt decay of $E^{\pm\pm}$ when 2-body decays are forbidden.}. Fig.~\ref{br_E} shows that $E^{\pm\pm}$ dominantly decays into a SM lepton plus a heavy charged scalar when kinematically allowed. The equal branching ratios for the $H_1^{\pm\pm} \nu(H_2^{\pm\pm}\nu)$ and $\phi^{3\pm}l^\mp(\phi^{\pm}l^\pm)$ decay modes are a consequence of the fact that $H_1^{\pm\pm}(H_2^{\pm\pm})$ dominantly belongs to the scalar doublet which also includes $\phi^{3\pm}(\phi^{\pm})$ and hence, both decay widths are determined by the same Yukawa coupling (see Eq.~\ref{lag_yuk}).
\begin{figure}[!ht]
\centering
\includegraphics[width=0.8 \linewidth]{plots/E/decay_E.pdf}
\mycaption{Branching ratios of $E^{++}$ as a function of $m_E$ for $m_{\frac{5}{2}\left(\frac{3}{2}\right)}=1.2(1.4)~{\rm TeV},~m_{k}=1.5~{\rm TeV},~\mu=\mu^{\prime}=100~{\rm GeV},~f_{\frac{5}{2}\left(\frac{3}{2}\right)}=2\times 10^{-4},~\lambda=5\times 10^{-3}$ and $f_k=1.0$. The total decay width ($\Gamma_{TOT}$) of $E^{++}$ is presented in the inset.}
\label{br_E}
\end{figure}
$H_3^{\pm\pm}$ being dominantly the singlet doubly charged scalar ($k^{++}$), the decay $E^{\pm\pm}~\to~H_3^{\pm\pm}\nu$ is suppressed by the mixing ($O_{31}$ and $O_{32}$) in the doubly charged scalar sector and hence, is not visible in Fig.~\ref{br_E}. When the 2-body decays are kinematically forbidden, $E^{\pm\pm}$ undergoes 3-body decay into $l^\pm l^\pm\nu$ with 100\% branching fraction. In the inset of Fig.~\ref{br_E}, the total decay width ($\Gamma_{TOT}$) is presented as a function of $m_E$. When 2-body decays are kinematically allowed, the total decay width ${\cal O}\left({f_{\frac{5}{2}\left(\frac{3}{2}\right)}^2m_E}/{32\pi}\right)$ is large enough ($\Gamma_{TOT}>10^{-13}$ GeV) to ensure the prompt decay of $E\bar E$ pairs produced at the LHC. However, Eq.~\ref{eq:3-body} shows that the 3-body decays are suppressed by small Yukawa couplings ($f_{\frac{5}{2}\left(\frac{3}{2}\right)}$) as well as by one of the off-diagonal elements of the doubly charged scalar mixing matrix, $O$. The inset of Fig.~\ref{br_E} shows that for $m_E~<~m_{\frac{5}{2}\left(\frac{3}{2}\right)}$, where only 3-body decays are kinematically allowed, the total decay width is suppressed, but not enough to ensure a displaced vertex or a highly ionizing track signature at the LHC. However, this conclusion is highly dependent on the choice of parameters\footnote{The large values for $\mu$, $\lambda$ and $f_{k}$ are assumed to ensure prompt decay of $E^{\pm\pm}$.} which are used to produce Fig.~\ref{br_E}. The 3-body decay widths depend on the Yukawa couplings and the doubly charged scalar mixings which are determined by $\mu,~\mu^{\prime}$ and $\lambda$.
To identify the parts of the parameter space ($f_{k}$--$\mu$ plane) which give rise to prompt decay, displaced vertex and abnormally large ionization signatures \cite{Aaboud:2018kbe} at the LHC, in Fig.~\ref{length_E}, we have plotted the decay length (by color gradient) of $E^{\pm\pm}$ as a function of $f_{k}$ and $\mu$ for a fixed value of $m_E~=~800$ GeV. The values of the other parameters are the same as in Fig.~\ref{br_E}. The three regions of the $f_{k}$--$\mu$ plane giving rise to prompt decay, displaced vertex and abnormally large ionization signatures are clearly indicated in Fig.~\ref{length_E}.
\begin{figure}[!ht]
\centering
\includegraphics[width=0.9 \linewidth]{plots/E/E++width.pdf}
\mycaption{The decay length (color gradient) of $E^{\pm\pm}$ is presented as a function of $f_k$ and $\mu$ for $m_E=800$ GeV. Other parameters are the same as in Fig.~\ref{br_E}.}
\label{length_E}
\end{figure}
\subsection{Collider signatures}
After discussing the production and decay of the doubly charged fermion, we are now equipped to study the signatures of $E^{\pm\pm}$ at the LHC with $\sqrt s~=~13$ TeV. The collider signatures of $E\bar E$-pairs at the LHC can be broadly categorized into two classes depending on the total decay width of $E^{\pm\pm}$. If the total decay width is large enough ($\Gamma_{TOT}~>~10^{-13}$ GeV), {\em i.e.,} the decay length is small enough ($<$1 mm), to ensure the decay of $E^{\pm\pm}$ inside the detector, the collider signatures are determined by the SM leptons/jets and missing energy resulting from the decay of $E\bar E$-pairs. However, if the doubly charged fermion is long-lived ({\em i.e.,} $\Gamma_{TOT}~<~10^{-16}$ GeV) and remains stable inside the LHC detectors, {\em i.e.,} the decay length is larger than a few meters, the production of $E\bar E$-pairs gives rise to abnormally large ionization at the LHC detectors \cite{Aaboud:2018kbe}.
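The two regimes can be summarized by converting the total width into a proper decay length, $c\tau=\hbar c/\Gamma_{TOT}$. The sketch below encodes the order-of-magnitude boundaries used here:

```python
HBARC = 1.97327e-16   # hbar*c in GeV*m

def decay_length_m(width_GeV):
    # Proper decay length c*tau = hbar*c / Gamma, in meters.
    return HBARC / width_GeV

def signature(width_GeV):
    # Order-of-magnitude boundaries quoted in the text.
    if width_GeV > 1.0e-13:
        return "prompt multi-lepton"
    if width_GeV < 1.0e-16:
        return "long-lived: large ionization"
    return "displaced vertex"
```

For example, $\Gamma_{TOT}=10^{-16}$ GeV corresponds to $c\tau\approx 2$ m, at the boundary between displaced decays inside the detector and detector-stable tracks.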
\subsubsection{Long-lived $E^{\pm\pm}$: Abnormally large ionization signature}
It has already been discussed in the previous section and shown in Fig.~\ref{length_E} that certain parts of the parameter space of this model give rise to a long-lived $E^{\pm\pm}$ which passes through the entire LHC detector without decaying. Being doubly charged, a long-lived $E^{\pm\pm}$ is highly ionizing, and thus leaves a very characteristic signature of abnormally large ionization in the detector. This particular signature is quite interesting and clean because of the negligible background from the SM. The SM does not have any multi-charged particle and hence, does not give rise to large ionization at the LHC. Such signatures have already been searched for by the ATLAS collaboration \cite{Aaboud:2018kbe} with 36.1 fb$^{-1}$ of integrated luminosity data, and no such events were found. In the absence of any observed events, 95\% confidence level (CL) upper limits on the pair-production cross-sections of long-lived multi-charged particles (MCPs) as a function of MCP masses and for different MCP charges are derived in Ref.~\cite{Aaboud:2018kbe}. In Fig.~\ref{cross_E} (right panel), we have also plotted the 95\% CL upper limits on the pair-production cross-sections of long-lived doubly-charged particles ($\sigma^{95\%}_{DCPs}$) along with the model prediction for the $E\bar E$-pair production cross-section. Fig.~\ref{cross_E} (right panel) shows that for a long-lived $E^{\pm\pm}$, a doubly-charged fermion mass below about 1150 GeV is excluded from the ATLAS search for long-lived MCPs.
\subsubsection{Prompt decay of $E^{\pm\pm}$: Multi-leptons signature}
\label{E_coll}
The signatures of a doubly charged fermion with prompt decay ({\em i.e.,} with a decay width large enough to ensure the decay of $E^{\pm\pm}$ at the production vertex) depend on the allowed decay modes and branching ratios. As discussed in section~\ref{sec:decay_E}, if 2-body decays are kinematically possible ($m_E~>~m_{\frac{5}{2}\left(\frac{3}{2}\right)}$ and/or $m_{k}$), $E^{\pm\pm}$ dominantly decays into a heavy (multi-)charged scalar in association with a SM lepton. The (multi-)charged scalars decay further, and the resulting collider signatures of $E\bar E$-pair production are determined by the subsequent decays of these (multi-)charged scalars, which will be discussed in detail in the next section. If 2-body decays of the doubly charged fermion are kinematically forbidden, {\em i.e.,} $E^{\pm\pm}$ is lighter than the (multi-)charged scalars ($m_E~<~m_{\frac{5}{2}\left(\frac{3}{2}\right)}$ and $m_{k}$), $E^{\pm\pm}$ dominantly decays into a pair of same-sign SM charged leptons in association with a neutrino. Therefore, for $m_E~<~m_{\frac{5}{2}\left(\frac{3}{2}\right)}$ and $m_{k}$, the production of $E\bar E$-pairs at the LHC gives rise to 4 leptons and two neutrinos in the final state. Neutrinos, being weakly interacting, remain elusive in the detector, resulting in a missing transverse energy signature: $pp~\to~E^{\pm\pm}E^{\mp\mp}~\to~4{\rm \text-leptons}~+~E_T\!\!\!\!\!\!/~$. In the context of the LHC with 13 TeV center-of-mass energy, we have studied the 4-leptons plus missing energy final state as a signature of $E\bar E$-pair production. The 4-leptons signature has already been searched for by the ATLAS collaboration \cite{Aaboud:2018zeb} as a signature of electroweakinos in the context of simplified R-parity conserving as well as R-parity violating supersymmetric scenarios. Ref.~\cite{Aaboud:2018zeb} uses 36.1 fb$^{-1}$ of integrated luminosity collected at the LHC running at 13 TeV center-of-mass energy. Data yields are found to be consistent with SM expectations.
The consistency between data and the SM prediction results in a 95\% CL upper limit on the visible 4-leptons cross-section. We have used the ATLAS upper limit on the visible 4-leptons cross-section in the context of our model to constrain the mass of the doubly charged fermion. We have closely followed the object (electrons, muons, jets, missing energy) reconstruction and event selection criteria used by the ATLAS collaboration in Ref.~\cite{Aaboud:2018zeb}.
\paragraph{Signal and Background}
\label{4l_back}
Several SM processes also result in the 4-leptons final state. The leading SM backgrounds for $4l$ arise from the hard-scattering processes (HSP) resulting in four or more leptons and from $ZZ$ production followed by the leptonic decay of both $Z$-bosons. Production of top anti-top ($t\bar t$) pairs in association with a pair of leptons ($pp~\to~t\bar t Z/\gamma^*$) contributes to the $4l$ background when the $t\bar t$-pairs decay leptonically. Production of tri-bosons ($ZZZ,~WZZ$, and $WWZ$) and of the Higgs boson also gives rise to the $4l$ final state. Backgrounds resulting from the production of $t\bar t t\bar t$, $t \bar t t W$, $t\bar t H$, $ZH$, and $WH$ are highly suppressed by the small production cross-sections as well as by the leptonic branching ratios. The production of $t\bar t$, $Z$+jets, $t\bar t W$, $WZ$+jets, $WW$+jets, $WWW$+jets, {\em etc.} may also contribute to the $4l$ background (reducible background) if one or more jets are misidentified as leptons. Since the probability of mistagging a jet as a lepton is small, the reducible $4l$ backgrounds are estimated \cite{Aaboud:2018zeb} to be negligible compared to the irreducible backgrounds. Therefore, in our analysis, we have not calculated these reducible backgrounds.
We have used a parton-level\footnote{At the parton level, the production and subsequent decays of $E\bar E$-pairs give rise to purely leptonic final states without any additional quarks/gluons. Quarks/gluons might result from initial state radiation (ISR). However, the signal selection (which will be discussed in the next section) relies only on leptons in the final state. Therefore, hadronization of quarks/gluons and subsequent decays of the hadrons will not have any significant effect on the calculation of signal and background cross-sections after the acceptance/selection cuts and hence, parton-level Monte-Carlo results can be trusted in this case.} Monte-Carlo program to simulate the production and subsequent decays of $E\bar E$-pairs. The phase space distributions of different kinematic variables for the signal are also computed in the framework of the same Monte-Carlo code. Different SM background processes are simulated at parton level using MadGraph \cite{Alwall:2014hca}. We have used MadAnalysis \cite{Conte:2012fm} to study the MadGraph-generated background events, compute different kinematic distributions, and impose cuts on different kinematic variables. Table~\ref{lob} lists the SM processes which are simulated in MadGraph to estimate the SM contribution to the $4l$-background. One should note that MadGraph at parton level cannot simulate ISR jets, which might be important for some kinematic variables such as the missing energy, the effective mass, {\em etc.} To overcome this particular drawback, we have calculated the SM processes in association with additional jets. For example, in the category of HSP/$ZZ$, we have calculated the SM production cross-sections of two positively charged and two negatively charged leptons in association with 0, 1, 2, and 3 additional quarks or gluons.
The background cross-sections calculated with MadGraph at parton level are found to be consistent with the background estimates obtained by the ATLAS collaboration using sophisticated event simulation and detector-level object ($e,~\mu$, jets, missing energy) reconstruction.
\begin{table}[h]
\centering
\begin{tabular}{c||c}
\hline\hline
Name & Processes generated in MadGraph\\
\hline\hline
HSP/$ZZ$ & $2l^+2l^-$ + up to 3 jets \\
$t\bar{t}ll$ & $t\bar{t}ll$ + up to 1 jet (leptonic decays)\\
$VVZ$ & $WWZ$ + up to 2 jets\\
Higgs & $H$ + up to 2 jets, $WH$ + up to 2 jets, $ZH$ + up to 2 jets, $t\bar{t}H$ \\
Others & $t\bar{t}t\bar{t}$, $t\bar{t}W^+W^-$\\
\hline\hline
\end{tabular}
\caption{List of the SM processes which are calculated at parton level in the MadGraph--MadAnalysis framework.}
\label{lob}
\end{table}
\paragraph{Event Selection}
\label{4l_selection}
Since the SM contributes significantly to final states similar to the one we are interested in, one has to carefully examine and compare the phase space distributions of different kinematic variables for the signal as well as the backgrounds, and identify characteristics of our signal which are distinct from the SM processes. The characteristics of the signal and background distributions will guide us in developing a systematic methodology for suppressing the SM backgrounds without drastically reducing the signal.
However, before going into the details of the signal and background simulation and the phase space distributions of different kinematic variables, it is important to list the basic requirements for jets and isolated leptons to be visible as such. In this context, it should be noted that the LHC detectors have only a finite resolution. For any realistic detector, this applies both to transverse momentum ($p_T$) measurements and to the determination of the angle of motion. In our analysis, we have neglected the latter\footnote{The angular resolution is, generically, far superior to the energy/momentum resolutions and too fine to be of any consequence at the level of sophistication of this analysis.} whereas the former is simulated by smearing the energy with Gaussian functions defined by an energy-dependent width\footnote{In general, the width of the Gaussian smearing is also a function of the detector coordinates. However, we choose to simplify the task by assuming a flat resolution function, equating it to the worst applicable for our range of interest.} as follows:
\begin{equation}
\frac{\sigma_E}{E} = \frac{a}{\sqrt{E}} \oplus b
\label{reso}
\end{equation}
where the errors are to be added in quadrature and
\begin{equation}
\begin{array}{rclcrclcl}
a_\ell &= & 0.05 \ , &\quad& b_\ell &= &5.5 \times 10^{-3} \ & \qquad &
{\rm for ~leptons,}
\\[1ex]
a_j &= &0.80 \ , & & b_j &= &0.05 & & {\rm for~ partons}.
\end{array}
\end{equation}
In order to be visible at the LHC detectors, a jet or a lepton must have an adequately large transverse momentum and should fall well inside the rapidity coverage of the detector. To ensure the visibility of the jets and leptons at the LHC, we demand
\begin{equation}
p_T({\rm jet}) > 20 \, {\rm GeV} \ , \qquad p_T({\rm electron}) > 7 \, {\rm GeV} \ , \qquad p_T({\rm muon}) > 5 \, {\rm GeV} \ ,
\label{cut:pT}
\end{equation}
and
\begin{equation}
|\eta({\rm jet})| \leq 2.8 \ , \qquad |\eta({\rm electron}) | \leq 2.47 \ , \qquad |\eta({\rm muon}) | \leq 2.7 \ .
\label{cut:eta}
\end{equation}
Furthermore, we demand the leptons and jets be well separated from each other by requiring
\begin{equation}
\Delta R_{\ell \ell} \geq 0.4 ~,~ \Delta R_{\ell j} \geq 0.4 ~{\rm and}~ \Delta R_{j \, j} \geq 0.4 \ ,
\label{cut:jj-iso}
\end{equation}
where
$\Delta R \equiv \sqrt{(\Delta \eta)^2 + (\Delta \phi)^2} $. The detection and reconstruction efficiency of tau-leptons is significantly different from that of electrons and muons. Therefore, we have only considered electrons and muons as leptons. Unless specified otherwise, $l$ stands for electron and muon only ($l~\supset~ e,~\mu$) throughout the rest of this article. The requirements summarized by Eqns.~(\ref{cut:pT}--\ref{cut:jj-iso}) constitute our {\it acceptance cuts}. We have tried to follow the acceptance and selection criteria used in the ATLAS search for $4l$~\cite{Aaboud:2018zeb} as closely as possible in the framework of a parton-level Monte-Carlo. With the set of {\em acceptance cuts} and the {\em detector resolution} defined in Eqns.~(\ref{cut:pT}--\ref{cut:jj-iso}) and Eq.~\ref{reso}, respectively, we compute the $4l$\footnote{The signal and background events are required to have at least 4 leptons (electrons and/or muons only) in the final state. We do not impose any condition on the number of jets in the final state.} signal and background cross-sections at the LHC operating with $\sqrt s~=~13$ TeV and display them in Table~\ref{cut_flow}. Clearly, after the {\em acceptance cuts}, the SM backgrounds are orders of magnitude larger than the signal. A detailed analysis of different kinematic distributions is necessary to suppress the huge contributions from the SM background. Before moving to the discussion of different kinematic distributions and, consequently, the event selection, it is important to define two phenomenologically important kinematic variables, namely the missing transverse energy ($E_T\!\!\!\!\!\!/~$) and the effective mass ($M_{\rm eff}$), as follows:
\begin{eqnarray}
\not E_T &\equiv& \sqrt{ \bigg(\sum_{\rm vis.} p_x \bigg)^2
+ \bigg(\sum_{\rm vis.} p_y \bigg)^2 }.\nonumber\\
M_{\rm eff} &\equiv& \not E_T + \sum_{i} p_T(l_i) + \sum_{i} p_T(j_i),
\label{meff}
\end{eqnarray}
where, the summation runs over all visible (consistent with the {\em acceptance cuts} listed in Eqns.~\ref{cut:pT}--\ref{cut:jj-iso}) leptons and jets.
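The two definitions above can be sketched in a few lines, assuming the visible objects are given as transverse-momentum components $(p_x, p_y)$ in GeV (illustrative helper names, not the analysis code):

```python
import math

def missing_et(visible):
    """E_T-slash: magnitude of the vector sum of visible transverse momenta."""
    sx = sum(px for px, _ in visible)
    sy = sum(py for _, py in visible)
    return math.hypot(sx, sy)

def effective_mass(visible):
    """M_eff: E_T-slash plus the scalar sum of visible transverse momenta."""
    return missing_et(visible) + sum(math.hypot(px, py) for px, py in visible)
```

Note the distinction: the missing energy is built from the vector sum (so back-to-back objects cancel), while the scalar-sum term in $M_{\rm eff}$ always adds.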
\begin{figure}[!ht]
\centering
\includegraphics[width=0.48 \linewidth]{plots/E/OSD_invmas_E.pdf}
\includegraphics[width=0.48 \linewidth]{plots/E/34Lep_invmas_E.pdf}
\mycaption{(Left panel) Opposite-sign dilepton invariant mass distributions (after ordering leptons according to their $p_T$ hardness, $p_T(l_{1}^{\pm})>p_T(l_2^\pm)$) are presented after the acceptance cuts for both the signal ($m_E=$ 0.8 and 1 TeV) and the SM background. The top row of the right panel shows the tri-lepton invariant mass ($M_{{\rm SFOS}+l}$) distributions. The four-lepton invariant mass ($M_{{\rm SFOS+SFOS}}$) and effective mass ($M_{\rm eff}$) distributions are presented in the bottom row of the right panel.}
\label{dist_E_m}
\end{figure}
To design an event selection criterion for suppressing the background without significantly reducing the signal, one needs to understand the characteristics of the signal and background distributions. Table~\ref{cut_flow} (2$^{\rm nd}$ column) shows that after the {\em acceptance cuts}, the dominant contribution to the background comes from HSP/$ZZ$ production. The leptonic decays of the $Z$-boson are characterized by a peak at the $Z$-boson mass ($m_Z$) in the same-flavor, opposite-sign (SFOS) dilepton invariant mass distributions. Therefore, it is instructive to study each of the four possible SFOS dilepton invariant mass distributions ($M_{SFOS}$), namely $M_{l_1^+ l_1^-}$, $M_{l_1^+ l_2^-}$, $M_{l_2^+ l_1^-}$ and $M_{l_2^+ l_2^-}$, constructed out of the momenta of the two leading\footnote{We have ordered the leptons according to their $p_T$ hardness. The positively (negatively) charged lepton with the higher $p_T$ is denoted by $l_1^+$ ($l_1^-$) and that with the lower $p_T$ by $l_2^+$ ($l_2^-$).} positively and two leading negatively charged leptons.
\begin{table}[h!]
\centering
\begin{tabular}{ c||c }
\hline\hline
\multicolumn{2}{ c}{\em ATLAS cuts} \\
\hline\hline
Invariant mass cuts & Effective mass cuts \\\hline\hline
$M_{\rm OS}~>~4$ GeV, $M_{\rm SFOS}~<~8.4$ GeV or $M_{\rm SFOS}~>~10.4$ GeV, & \\
$M_{\rm SFOS}~<~81.2$ GeV or $M_{\rm SFOS}~>~101.2$ GeV, & \\
$M_{\rm SFOS+l}~<~81.2$ GeV or $M_{\rm SFOS+l}~>~101.2$ GeV, & $M_{\rm eff}~>~600$ GeV \\
$M_{\rm SFOS+SFOS}~<~81.2$ GeV or $M_{\rm SFOS+SFOS}~>~101.2$ GeV & \\
\hline\hline
\end{tabular}
\caption{Cuts implemented in the framework of our parton-level Monte-Carlo to adhere to the signal selection criteria proposed by ATLAS collaboration in Ref.~\cite{Aaboud:2018zeb}.}
\label{ATLAS_cuts}
\end{table}
In Fig.~\ref{dist_E_m} (four plots in the left panel), we show the SFOS dilepton invariant mass distributions for the SM background as well as for the signal with two different values of $m_E~=~0.8$ and 1.0 TeV. The $Z$-boson peak is clearly visible in the background SFOS invariant mass distributions. Therefore, one can easily suppress the background contributions from $ZZ$ production by imposing a $Z$-veto, {\em i.e.,} excluding the parts of phase-space giving rise to $M_{SFOS}$ satisfying $|m_Z-M_{SFOS}|~<~10$ GeV. In our analysis, we have assumed $m_Z~=~91.2$ GeV and demanded $M_{SFOS}~<~81.2$ GeV or $M_{SFOS}~>~101.2$ GeV. To suppress the contributions from the leptonic decays of hadrons\footnote{Our calculation of signal and backgrounds is limited to the parton level and hence, the production of hadrons and their subsequent leptonic decays, which might result in a $4l$ final state, were not considered. However, we have used the ATLAS-suggested cuts to suppress these contributions, which are definitely present in a collider experiment.}, we further demand the invariant mass of opposite-sign dileptons, $M_{OS}$, to be greater than 4 GeV and $M_{SFOS}$ to be outside the range 8.4--10.4 GeV. The set of cuts on the dilepton invariant mass discussed above is also used by the ATLAS collaboration in Ref.~\cite{Aaboud:2018zeb} and hence falls in the category of {\em ATLAS cuts} defined in Table~\ref{ATLAS_cuts}. Fig.~\ref{dist_E_m} (left panel) shows that the signal SFOS dilepton distributions are shifted towards larger invariant masses. This is a consequence of the fact that the signal leptons come from the 3-body decay of $E^{\pm\pm}$ with a mass of the order of a TeV and hence are usually associated with large transverse momentum. Therefore, one can introduce a lower bound on $M_{SFOS}$ to reduce the background significantly while minimally affecting the signal. In addition to the {\em ATLAS cuts} on $M_{SFOS}$, we demand $M_{SFOS}~>~150$ GeV.
This falls into the category of our {\em proposed cuts} defined in Table~\ref{proposed_cuts}.
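Applied to an SFOS pair mass, the combined invariant-mass vetoes can be sketched as a single predicate (an illustration with hypothetical function names; the window edges are those of Tables~\ref{ATLAS_cuts} and \ref{proposed_cuts}):

```python
def passes_sfos_cuts(m_sfos, proposed=True):
    """True if an SFOS dilepton mass (GeV) survives the invariant-mass vetoes."""
    if m_sfos <= 4.0:                  # low-mass veto against hadron decays
        return False
    if 8.4 <= m_sfos <= 10.4:          # quarkonium window
        return False
    if 81.2 <= m_sfos <= 101.2:        # Z-veto, m_Z +/- 10 GeV
        return False
    if proposed and m_sfos <= 150.0:   # proposed lower bound on M_SFOS
        return False
    return True
```

In an event, the predicate would be evaluated on each of the four SFOS combinations built from the two leading positive and two leading negative leptons.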
\begin{figure}[!t]
\centering
\includegraphics[width=0.8 \linewidth]{plots/E/SSD_invmass_E.pdf}
\mycaption{Same sign dilepton invariant mass ($M_{l_1^+ l_2^+}$) distributions after the acceptance cuts for both signal ($m_E=$ 0.8 and 1 TeV) and the SM background.}
\label{SSD_E}
\end{figure}
\begin{table}[h!]
\centering
\begin{tabular}{ c }
\hline\hline
{\em Proposed Cuts} \\
\hline\hline
$M_{\rm SFOS}~>~150$ GeV,~~ $M_{\rm SFOS+l}~>~150$ GeV~~ and~~ $M_{\rm SFOS+SFOS}~>~200$ GeV \\
\hline\hline
\end{tabular}
\caption{Additional cuts proposed to enhance the signal to background ratio in the context of this model.}
\label{proposed_cuts}
\end{table}
In Fig.~\ref{dist_E_m} (right panel), we display the tri-lepton ($M_{SFOS+l}$) invariant mass distributions (top row), the 4-lepton ($M_{SFOS+SFOS}$) invariant mass distributions (left plot of the bottom row), and the effective mass distributions (right plot of the bottom row). Interestingly, the SM background in the tri-lepton ($M_{l_1^+ l_1^{-} l_2^+}~{\rm and}~M_{l_1^+ l_1^{-} l_2^-}$) and 4-lepton ($M_{l_1^+ l_1^{-} l_2^+ l_2^-}$) invariant mass distributions is also associated with a peak at $m_Z$. These peaks arise due to radiative $Z$-boson decays\footnote{The radiative decay of the $Z$-boson, where a photon radiated from the $Z~\to~l^+l^-$ decay converts into a $l^+ l^-$ pair, is highly suppressed. However, the production cross-section of a single $Z$-boson at the LHC is huge. Therefore, despite being suppressed, the radiative $Z$-boson decays into 4 leptons contribute significantly to the $4l$ final state.} into 4 leptons. To suppress this background, we have applied the $Z$-veto to the tri-lepton and 4-lepton invariant masses as well. The $Z$-veto cuts on $M_{SFOS+l}$ and $M_{SFOS+SFOS}$ are summarized in Table~\ref{ATLAS_cuts}. The Higgs peak as well as the kinematic threshold of $Z$-boson pair-production are also visible in the background $M_{SFOS+SFOS}$ distribution (see Fig.~\ref{dist_E_m}, left plot of the bottom row of the right panel). In view of the signal tri-lepton and 4-lepton invariant mass distributions, we propose additional lower bounds on $M_{SFOS+l}$ and $M_{SFOS+SFOS}$, which are listed in Table~\ref{proposed_cuts}. We present the effective mass ($M_{\rm eff}$) distributions, defined in Eq.~\ref{meff} as the sum of the missing transverse energy and the scalar sum of the transverse momenta of all the visible particles, for the signal and background in Fig.~\ref{dist_E_m} (right plot of the bottom row of the right panel). The signal leptons arise from the decay of TeV-scale particles and hence, the signal $M_{\rm eff}$ distribution is expected to peak around a TeV, as can be seen from the signal effective mass distributions in Fig.~\ref{dist_E_m}.
The background $M_{\rm eff}$, in contrast, tends to have smaller values. As a result, the effective mass is considered to be a powerful discriminator between new physics signals and the SM background. We demand $M_{\rm eff}~>~600$ GeV, which drastically reduces the background with minimal effect on the signal. Fig.~\ref{SSD_E} shows the invariant mass distribution of the same-sign (SS) dilepton pairs ($M_{SS}$) for the signal and background. Since any cut on $M_{SS}$ to suppress the background would also reduce the signal significantly, we have not imposed any cut on $M_{SS}$. Since the same-sign dilepton pairs arise from the decay $E^{\pm\pm}~\to~l^\pm l^\pm \nu$, the signal $M_{SS}$ distribution features a characteristic kinematic endpoint at $m_E$, which could be useful to determine $m_E$ after a discovery. In this work, we restrict ourselves to the discovery potential of this model at the LHC and do not explore the possibilities of determining different parameters using kinematic variables. The ATLAS-suggested signal selection criteria and our proposed cuts on top of the ATLAS cuts are presented in Tables~\ref{ATLAS_cuts} and \ref{proposed_cuts}, respectively.
\begin{table}[t!]
\centering
\begin{tabular}{ c||c|c|c}
\hline\hline
\multicolumn{4}{ c}{The SM background and signal Cross-sections [fb] after different cuts} \\
\hline\hline
The SM background & \multicolumn{3}{ c}{Cuts} \\\cline{2-4}
processes & Acceptance Cuts & ATLAS cuts & ATLAS + Proposed cuts\\\hline\hline
HSP/$ZZ$ & 155.2 & $6.7 \times 10^{-2}$ & $1.9 \times 10^{-3}$\\
$t\bar t l \bar l$ & 3.46 & 0.12 & $9.9 \times 10^{-3}$ \\
Higgs & 1.35 & $9.1\times10^{-3}$& $2.1\times10^{-3}$\\
VVZ & 0.64 & $7.6\times10^{-3}$ & --\\
Others & $2.8\times10^{-2}$ & $1.3\times10^{-2}$ & $4.8 \times10^{-3}$\\\hline
Total & 160.6 & $0.22~\left(\textcolor{red}{ 0.28^{+0.06}_{-0.06}}\right)$ & $1.9 \times10^{-2}$\\\hline\hline
$m_E$ [TeV] & \multicolumn{3}{ c}{Signal Cross-Sections [fb]} \\\hline\hline
0.8 & 0.5 & 0.46 & 0.31\\
1.0 & 0.18 & 0.17 & 0.13\\\hline\hline
\end{tabular}
\caption{The SM background and signal cross-sections are presented after the {\em acceptance}, {\em ATLAS} and {\em ATLAS + proposed cuts}. The bracketed number in the {\em ATLAS cuts} column of the total background row is the total background cross-section after the {\em ATLAS cuts} as estimated by the ATLAS collaboration in Ref.~\cite{Aaboud:2018zeb}.}
\label{cut_flow}
\end{table}
\paragraph{Results}
The signal and the SM background cross-sections after the {\em ATLAS cuts} and {\em proposed cuts}, listed in Tables~\ref{ATLAS_cuts} and \ref{proposed_cuts}, respectively, are presented in Table~\ref{cut_flow}. The {\em ATLAS cuts} significantly reduce the background cross-section. Table~\ref{cut_flow} also shows that after the {\em ATLAS cuts}, the background cross-section estimated in the framework of our parton-level Monte-Carlo is consistent, within the quoted uncertainty, with the ATLAS estimate of the background. This consistency between the ATLAS analysis and ours enables us to constrain the parameters (in this case $m_E$) of this model from the ATLAS model-independent 95\% CL upper limit on the new physics contribution to the $4l$ cross-section ($\sigma(4l)_{\rm vis}^{95}$) \cite{Aaboud:2018zeb} after the cuts in Table~\ref{ATLAS_cuts}. Fig.~\ref{bound_E} (left panel) shows the variation of the signal $4l$ cross-section (after the {\em acceptance cuts} and {\em ATLAS cuts} in Table~\ref{ATLAS_cuts}) as a function of the doubly charged fermion mass, $m_E$. The horizontal line in Fig.~\ref{bound_E} (left panel) corresponds to the ATLAS 95\% CL upper limit on the visible $4l$ cross-section. Fig.~\ref{bound_E} (left panel) clearly shows that for $m_E~\lesssim~870$ GeV, the contribution of $E \bar E$-pair production to the visible $4l$ signal cross-section is larger than $\sigma(4l)_{\rm vis}^{95}$. Therefore, one can set a lower bound of about 870 GeV on the doubly charged fermion mass from the ATLAS search for the $4l$ final state.
\begin{figure}[!ht]
\centering
\includegraphics[width=0.48 \linewidth]{plots/E/ATLAS_Bound_E.pdf}
\includegraphics[width=0.48 \linewidth]{plots/E/lumi_E.pdf}
\mycaption{(Left panel) Four-lepton signal cross-section ($\sigma_{4Lep}$) as a function of the $E^{\pm\pm}$ mass after the cuts (listed in Table~\ref{ATLAS_cuts}) used by the ATLAS collaboration in Ref.~\cite{Aaboud:2018zeb}. The black solid line corresponds to the ATLAS observed 95\% CL upper bound on the visible 4-lepton signal cross-section. (Right panel) Required luminosity for $3\sigma$ and $5\sigma$ discovery as a function of $m_E$ for the proposed event selection criteria (listed in Table~\ref{proposed_cuts}).}
\label{bound_E}
\end{figure}
In the last column of Table~\ref{cut_flow}, we have presented the background as well as the signal $4l$ cross-sections after applying the {\em proposed cuts} (listed in Table~\ref{proposed_cuts}) on top of the {\em ATLAS cuts} (listed in Table~\ref{ATLAS_cuts}). The total background cross-section is reduced by a factor of 10 as a result of applying the cuts in Table~\ref{proposed_cuts}, whereas the signal cross-sections are reduced only by a factor of 1.5 (1.3) for $m_E~=~800~(1000)$ GeV. In Fig.~\ref{bound_E} (right panel), the integrated luminosities required for $3\sigma$ and $5\sigma$ discovery of the doubly charged fermion are presented as a function of $m_E$ at the LHC with $\sqrt s~=~13$ TeV. We define the signal to be observable with more than $S\sigma$ significance for an integrated luminosity ${\cal L}$ if
\begin{equation}
\frac{N_S}{\sqrt{N_S+N_B}} ~\ge~ S,
\end{equation}
where $N_{S(B)}~=~\sigma_{S(B)}{\cal L}$ is the number of signal (background) events for an integrated luminosity ${\cal L}$. Fig.~\ref{bound_E} (right panel) shows that the LHC with 3000 fb$^{-1}$ of integrated luminosity and 13 TeV center-of-mass energy will be able to probe $m_E$ up to about 1800 (1600) GeV at 3$\sigma$ (5$\sigma$) significance. The shaded region of Fig.~\ref{bound_E} (right panel) corresponds to the part of the parameter space which is already excluded by the ATLAS $4l$-search in Ref.~\cite{Aaboud:2018zeb}.
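Solving the significance condition for ${\cal L}$ gives ${\cal L}~\ge~S^2\left(\sigma_S+\sigma_B\right)/\sigma_S^2$, which can be sketched in one line (an illustrative counting-only estimate, ignoring systematic uncertainties):

```python
def required_luminosity(sigma_s_fb, sigma_b_fb, significance):
    """Smallest L (fb^-1) such that N_S/sqrt(N_S + N_B) >= significance,
    with N = sigma * L for both signal and background."""
    return significance**2 * (sigma_s_fb + sigma_b_fb) / sigma_s_fb**2
```

With the post-cut cross-sections of Table~\ref{cut_flow} for $m_E~=~1$ TeV ($\sigma_S~=~0.13$ fb, $\sigma_B~=~1.9\times10^{-2}$ fb), this criterion gives roughly 220 fb$^{-1}$ for a $5\sigma$ observation.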
\subsection{Summary}
To summarize, we have discussed the production, decay, and the resulting collider signatures of the doubly charged fermion at the LHC with $\sqrt s~=~13$ TeV. In the scenario where $E^{\pm\pm}$ is lighter than the (multi-)charged scalars, $E^{\pm\pm}$ undergoes tree-level 3-body decays. Depending on the total decay width of the doubly charged fermion, the collider signatures of $E\bar E$-pair production at the LHC are broadly classified into two categories, namely the abnormally large ionization signature for a long-lived $E^{\pm\pm}$ and the multi-lepton (in particular, 4-lepton) signature for prompt $E^{\pm\pm}$ decay. Using the ATLAS results for the long-lived MCP search, we obtain a lower bound of about 1150 GeV on the mass of a long-lived $E^{\pm\pm}$. For prompt decay of $E^{\pm\pm}$, the ATLAS $4l$ search with 36.1 fb$^{-1}$ of 13 TeV LHC data excludes $m_E$ below 870 GeV at 95\% CL. After investigating different characteristic kinematic distributions for the background as well as the signal, we proposed additional cuts to optimize the signal-to-background ratio. With the proposed cuts, the discovery reach of the LHC with 3000 fb$^{-1}$ of integrated luminosity is estimated to be $m_E\sim$ 1800 (1600) GeV at 3$\sigma$ (5$\sigma$) significance.
\section{Phenomenology of scalars}
\label{scalar_phenomenology}
Being charged under both the SM $SU(2)_L$ and $U(1)_Y$, the (multi-)charged scalars ($\phi^{3\pm},~\phi^\pm$ and $H_a^{\pm\pm}$) have gauge interactions (listed in Appendix~\ref{feyn}) with the SM gauge bosons, namely the photon and the $W/Z$-bosons. Therefore, the (multi-)charged scalars can be pair produced or produced in association with another (multi-)charged scalar (associated production) at the LHC. We have computed the following\footnote{The production cross-sections of $H_a^{\pm\pm} H_b^{\mp\mp}$ with $a~\ne~b$ as well as $H_1^{\pm\pm}\phi^{\mp}$, $H_2^{\pm\pm}\phi^{3\mp}$, $H_3^{\pm\pm}\phi^{\mp}$ and $H_3^{\pm\pm}\phi^{3\mp}$ are suppressed by the mixings in the doubly-charged scalar sector and hence, not considered.} pair and associated production processes of the (multi-)charged scalars at the LHC with $\sqrt s~=~13$ TeV.
\begin{equation}
pp~\to~\phi^{3\pm}\phi^{3\mp},~\phi^{\pm}\phi^{\mp},~H_a^{\pm\pm}H_{a}^{\mp\mp},~H_{1}^{\pm\pm}\phi^{3\mp}~{\rm and}~H_{2}^{\pm\pm}\phi^{\mp}.
\end{equation}
At the LHC, the pair productions are quark anti-quark (photon-photon\footnote{In the context of $E\bar E$-production at the LHC, the importance of photoproduction was discussed in section~\ref{E_phenomenology}; the same holds for the pair production of multi-charged scalars.}) initiated processes, proceeding through a $\gamma/Z$-boson (charged scalar) in the $s(t/u)$-channel. The photoproductions receive an extra contribution from the quartic coupling involving two photons and two (multi-)charged scalars (see Appendix~\ref{feyn}). The associated productions receive contributions from the quark anti-quark initial state only and proceed through a $W^\pm$-boson in the $s$-channel. The different production cross-sections at the LHC have been computed numerically by integrating the following parton-level differential cross-sections over the phase space and parton densities.
\begin{figure}[!t]
\centering
\includegraphics[width=0.48 \linewidth]{plots/scalar/prod_H_merge.pdf}
\includegraphics[width=0.48 \linewidth]{plots/scalar/phi+3_phi+1_prod.pdf}
\mycaption{The total pair production cross-sections for doubly-charged scalar pairs (left panel) as well as for triply-charged and singly-charged scalar pairs (right panel) are presented for the LHC with $\sqrt s~=~13$ TeV. The insets show the ratio of the PF and DY contributions. The grey solid lines (left panel) correspond to the ATLAS observed 95\% CL cross-section upper limit ($\sigma_{\rm Obs}^{95}$) on long-lived doubly-charged particles \cite{Aaboud:2018kbe}.}
\label{scalar_pair_prod}
\end{figure}
\begin{eqnarray}
&& \frac{d\hat \sigma_{q\bar q}^{SS^*}}{d\Omega}~=~\frac{\alpha_{EM}^2}{6 \hat s^3}\,\,\sqrt{1-\frac{4m_S^2}{\hat s}}\,\,\left(\hat u \hat t~-~m_{S}^4\right)\nonumber\\
&&\times~ \left[
\left\{
Q_q Q_S + \frac{2\left(T_{3,q}-2Q_q{\rm sin}^2\theta_W \right)\left(T_{3,S}-Q_S{\rm sin}^2\theta_W \right)}{{\rm sin}^22\theta_W \left(1-\frac{m_Z^2}{\hat s}\right)}
\right\}^2
+
4\left\{
\frac{T_{3,q}\left(T_{3,S}-Q_S{\rm sin}^2\theta_W \right)}{{\rm sin}^22\theta_W \left(1-\frac{m_Z^2}{\hat s}\right)}
\right\}^2
\right]\,\,,\nonumber\\
&&\frac{d \hat \sigma^{SS^*}_{\gamma \gamma}}{d\Omega}~=~\frac{Q_S^4\alpha^2_{EM}}{4\hat s}\,\,\sqrt{1-\frac{4m^2_{S}}{\hat s}}\,\,\left[\frac{(m^2_{S} + \hat u)^2}{(m^2_{S}-\hat u)^2}+ \frac{(m^2_{S}+\hat t)^2}{(m^2_{S} -\hat t)^2}+8\frac{m^4_{S}}{(m^2_{S} - \hat u)(m^2_{S}-\hat t)}\right]\,\,,\nonumber\\
&&\frac{d \hat \sigma^{SS^{\prime}}_{q \bar{q^\prime}}}{d\Omega}~=~\frac{\alpha^2_{EM}}{48\hat s\,\, {\rm sin}^4\theta_W}\,\,\sqrt{1-\frac{4m^2_{S}}{\hat s}}\,\,\frac{\hat u \hat t-m_S^4}{\left(\hat s-m_W^2\right)^2}\,\,,
\label{cross_section}
\end{eqnarray}
where $\hat s$, $\hat t$ and $\hat u$ are the usual Mandelstam variables, $Q_q$ and $Q_S$ are the electric charges of the SM quark $q$ and the charged scalar $S$, respectively, $m_S$ is the mass of $S$, and $T_{3,q}$ as well as $T_{3,S}$ are the weak isospins of the quarks ($T_{3,q}~=~\frac{1}{2}\left(-\frac{1}{2}\right)$ for $q \supset u,~c,~t\left(d,~s,~b\right)$) and of the charged scalars ($T_{3,S}~=~\frac{1}{2}\left(-\frac{1}{2}\right)$ for $S \supset \phi^{3\pm},~H_2^{\pm\pm}\left(\phi^\pm,~H_1^{\pm\pm}\right)$ and $T_{3,S}~=~0$ for $H_3^{\pm\pm}$)\footnote{The physical doubly charged scalars ($H_a^{\pm\pm}$) appear after the EWSB as mixtures of the $T_3~=~\frac{1}{2}~{\rm and}~-\frac{1}{2}$ components of the $Y~=~\frac{3}{2}~{\rm and}~\frac{5}{2}$ doublets, respectively, and the $Y~=~2$ singlet. Since the mixings in the doubly-charged scalar sector are constrained to be small, $H_{1(2)}^{\pm\pm}$ and $H_{3}^{\pm\pm}$ are dominantly the $T_3~=~-\frac{1}{2}\left(\frac{1}{2}\right)$ component of the $Y~=~\frac{5}{2}\left(\frac{3}{2}\right)$ doublet and the $Y~=~2$ singlet, respectively.}.
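Schematically, the hadronic cross-sections follow from convolving these partonic expressions with parton densities. A toy sketch of this convolution (with a placeholder density and a flat partonic cross-section standing in for the formulas above, not the NNPDF set actually used):

```python
def hadronic_xsec(sigma_hat, pdf, s_collider, s_hat_min, n=200):
    """Toy convolution sigma = int dx1 dx2 f(x1) f(x2) sigma_hat(x1*x2*s),
    evaluated by a midpoint rule on the (x1, x2) unit square."""
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        x1 = (i + 0.5) * h
        for j in range(n):
            x2 = (j + 0.5) * h
            s_hat = x1 * x2 * s_collider  # subprocess energy squared
            if s_hat >= s_hat_min:        # production threshold
                total += pdf(x1) * pdf(x2) * sigma_hat(s_hat) * h * h
    return total
```

In the actual computation the partonic cross-section is first integrated over the scattering angle, the densities are flavor-dependent, and the factorization scale is set event by event to $\sqrt{\hat s}$, as described below.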
\begin{figure}[!t]
\centering
\includegraphics[width=0.8 \linewidth]{plots/scalar/assoc_prod_H.pdf}
\mycaption{The production cross-sections of $H_1^{++}$ ($H_1^{--}$) in association with a $\phi^{3-}$ ($\phi^{3+}$) are plotted as a function of $m_{H_1}$ at the LHC with 13 TeV center-of-mass energy.}
\label{cross_asso}
\end{figure}
To evaluate the scalar pair and associated production cross-sections at the LHC with $\sqrt s~=~13$ TeV, we have numerically integrated Eq.~\ref{cross_section} over the {\bf NNPDF23LO} \cite{Ball:2014uwa} parton distribution functions. We fix the factorization ($\mu_F$) and renormalization scales at the subprocess center-of-mass energy $\sqrt {\hat s}$. The resulting scalar production cross-sections are presented in Fig.~\ref{scalar_pair_prod} (pair productions) and Fig.~\ref{cross_asso} (associated productions). The insets in Fig.~\ref{scalar_pair_prod} show the ratio of the contributions from PF and DY. The different doubly charged scalars, being members of different scalar multiplets with different weak isospins, couple differently to the SM $Z$-boson. Therefore, the DY pair production cross-sections differ for the different doubly-charged scalar pairs. However, the production of the different doubly-charged scalar pairs gets the same contribution from PF, which is the dominant contribution in the large scalar mass region. Fig.~\ref{scalar_pair_prod} (right panel) shows that in the large mass region, $\sigma\left(\phi^{3\pm}\phi^{3\mp}\right)$ is more than an order of magnitude bigger than $\sigma\left(\phi^{\pm}\phi^{\mp}\right)$. This can be attributed to the fact that the photo-production of $\phi^{3\pm}\phi^{3\mp}$ pairs is enhanced by a factor of $3^4$ compared to the photo-production of $\phi^{\pm}\phi^{\mp}$. In Fig.~\ref{cross_asso}, we have presented the production cross-sections of $H_1^{++}$ ($H_1^{--}$) in association with a $\phi^{3-}$ ($\phi^{3+}$) at the LHC with $\sqrt s~=~13$ TeV. The difference between $\sigma\left(H_1^{++}\phi^{3-}\right)$ and $\sigma\left(H_1^{--}\phi^{3+}\right)$ arises from the difference in the densities of the initial-state partons. The associated productions of the multi-charged scalars, being mediated by the $W$-boson in the $s$-channel, are completely determined by their $SU(2)_L$ charges.
Therefore, the production cross-sections $\sigma\left(H_2^{++}\phi^{-}\right)$ and $\sigma\left(H_2^{--}\phi^{+}\right)$ are identical to $\sigma\left(H_1^{--}\phi^{3+}\right)$ and $\sigma\left(H_1^{++}\phi^{3-}\right)$, respectively, and hence, are not shown separately. After being produced at the LHC, the multi-charged scalars decay into the SM leptons and/or bosons, giving rise to multi-lepton final states which will be discussed in the following.
\subsection{Decay of multi-charged scalars}
The collider signatures of the multi-charged scalars crucially depend on their decays, which will be discussed in this section.
\subsubsection{Triply- and singly-charged scalar decay}
If kinematically allowed {\em i.e.,} $m_{\frac{5}{2}\left(\frac{3}{2}\right)}~>~m_E$, the triply-(singly-)charged scalar can decay into a doubly-charged fermion in association with a SM charged lepton: $\phi^{3\pm(\pm)}\to E^{\pm\pm} l^{\pm(\mp)}$. Other possible 2-body decays of the triply-(singly-)charged scalar are the decays into one of the doubly-charged scalars and a $W$-boson: $\phi^{3\pm(\pm)}\to H_a^{\pm\pm} W^{\pm(\mp)}$. The partial decay widths for the 2-body $\phi^{3\pm}\left(\phi^{\pm}\right)$ decays are presented in the following:
\begin{eqnarray}
\Gamma\left(\phi^{3\pm(\pm)}\to E^{\pm\pm} l^{\pm(\mp)}\right)&=&\frac{\left|f_{\frac{5}{2}\left(\frac{3}{2}\right)} \right|^2}{16\,\pi}\,m_{\frac{5}{2}\left(\frac{3}{2}\right)}\, \left(1-x_E\right),\nonumber\\
\Gamma\left(\phi^{3\pm(\pm)}\to H_a^{\pm\pm} W^{\pm(\mp)}\right)&=&\frac{\alpha_{EM}\,\left|O_{a1(2)} \right|^2}{8\,{\rm sin}^2\theta_W}\,m_{\frac{5}{2}\left(\frac{3}{2}\right)}\,\,\lambda^{\frac{1}{2}}\left(1,x_{H_a},x_W\right)\,\,\eta\left(x_{H_a},x_W\right)\, ,
\end{eqnarray}
where, $\lambda(x,y,z)~=~x^2+y^2+z^2-2xy-2yz-2xz$, $\eta\left(x,y\right)~=~y-2-2x+\frac{\left(1-x\right)^2}{y}$ and $x_i~=~m_i^2/m_{\frac{5}{2}\left(\frac{3}{2}\right)}^2$. It is important to note that the smallness of neutrino masses implies a close degeneracy between $\phi^{3\pm}\left(\phi^{\pm}\right)$ and $H_1^{\pm\pm}\left(H_2^{\pm\pm}\right)$ {\em i.e.,} $m_{\phi^{3\pm}\left(\phi^{\pm}\right)}~\approx~m_{H_1^{\pm\pm}\left(H_2^{\pm\pm}\right)}~\approx~m_{\frac{5}{2}\left(\frac{3}{2}\right)}$. Therefore, the 2-body decay $\phi^{3\pm}\left(\phi^{\pm}\right)~\to~H_1^{\pm\pm}\left(H_2^{\pm\pm}\right)+W^{\pm}$ is kinematically forbidden. However, $\phi^{3\pm}$ undergoes 2-body decay into $H_{2(3)}^{\pm\pm}+W^{\pm}$ if $m_{\frac{5}{2}}~>~m_{\frac{3}{2}(k)}+m_W$. On the other hand, in the region of parameter space defined by $m_{\frac{3}{2}}~>~m_{\frac{5}{2}(k)}+m_W$, the decay $\phi^{\pm}~\to~H_{1(3)}^{\pm\pm}+W^{\mp}$ is allowed. If all the aforementioned 2-body decays are kinematically forbidden for $\phi^{3\pm}\left(\phi^{\pm}\right)$ {\em i.e.,} $m_{\frac{5}{2}\left(\frac{3}{2}\right)}~<~m_E,~m_{k}~{\rm and}~m_{\frac{3}{2}\left(\frac{5}{2}\right)}$, the triply-(singly-)charged scalar undergoes tree-level 3-body decay into $l^{\pm}l^{\pm}W^{\pm(\mp)}$. The 3-body decay $\phi^{3\pm}\left(\phi^{\pm}\right)~\to l^{\pm}l^{\pm}W^{\pm(\mp)}$ proceeds through an off-shell doubly-charged scalar as depicted in Fig.~\ref{3-body-phi} (left panel).
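The kinematic functions above are simple enough to sanity-check numerically. The following Python sketch is purely illustrative: the values $\alpha_{EM}\approx 1/128$ and $\sin^2\theta_W\approx 0.231$ are assumed weak-scale benchmarks, and the mixing element $O_{a1}$ is an input. It implements $\lambda$, $\eta$ and the 2-body width $\Gamma\left(\phi^{3\pm}\to H_a^{\pm\pm}W^{\pm}\right)$, and verifies the identity $\eta(x,y)=\lambda(1,x,y)/y$.

```python
import math

ALPHA_EM = 1.0 / 128.0  # assumed EM coupling at the weak scale
SIN2_TW = 0.231         # assumed value of sin^2(theta_W)
M_W = 80.4              # W-boson mass [GeV]

def lam(x, y, z):
    """Kallen function: lambda(x,y,z) = x^2 + y^2 + z^2 - 2xy - 2yz - 2zx."""
    return x**2 + y**2 + z**2 - 2.0 * (x * y + y * z + z * x)

def eta(x, y):
    """eta(x,y) = y - 2 - 2x + (1-x)^2/y, which equals lambda(1,x,y)/y."""
    return y - 2.0 - 2.0 * x + (1.0 - x) ** 2 / y

def width_phi3_to_HW(m_phi, m_H, o_mix):
    """Gamma(phi^3 -> H_a W) in GeV for given masses and mixing element O_{a1}."""
    x_h, x_w = (m_H / m_phi) ** 2, (M_W / m_phi) ** 2
    return (ALPHA_EM * o_mix**2 / (8.0 * SIN2_TW) * m_phi
            * math.sqrt(lam(1.0, x_h, x_w)) * eta(x_h, x_w))
```

For instance, with $m_{\phi^{3\pm}}=1$ TeV, $m_{H_a}=700$ GeV and $O_{a1}=1$ the width comes out in the tens of GeV, so this channel is always prompt whenever it is open.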
\begin{figure}
\begin{minipage}[t]{0.4\textwidth}
\begin{tikzpicture}[line width=1.4 pt, scale=1.65,every node/.style={scale=0.9}]
\draw[scalar,black] (-1.5,0.0) --(0.0,0.0);
\draw[vector,black] (0.0,0.0) --(0.8,0.6);
\draw[scalar,black] (0.0,0.0) --(1.5,-1.0);
\draw[fermion,black] (1.5,-1.0) --(2.5,-0.5);
\draw[fermion,black] (1.5,-1.0) --(2.5,-1.5);
\node at (-0.5,0.3) {$\phi^{3\pm (\pm)}$};
\node at (1.2,0.5) {$W^{\pm (\mp)}$};
\node at (0.5,-0.8) {${H_a}^{\pm\pm}$};
\node at (2.0,-0.44) {$l^{\pm}$};
\node at (2.0,-1.47) {$l^{\pm}$};
\end{tikzpicture}
\end{minipage}
\hspace{0.1\textwidth}
\begin{minipage}[t]{0.4\textwidth}
\begin{tikzpicture}[line width=1.4 pt, scale=1.65,every node/.style={scale=0.9}]
\draw[scalar,black] (-1.5,0.0) --(0.0,0.0);
\draw[vector,black] (0.0,0.0) --(1.5,1.0);
\draw[scalar,black] (0.0,0.0) --(1.5,-0.7);
\draw[fermion,black] (1.5,1.0) --(2.5,1.5);
\draw[fermion,black] (1.5,1.0) --(2.5,0.5);
\node at (-0.5,0.3) {$\phi^{3\pm }$};
\node at (0.5,0.8) {$W^{\pm}$};
\node at (0.5,-0.8) {${H_1}^{\pm\pm}$};
\node at (2.0,1.44) {$e^{\pm}$};
\node at (2.0,0.6) {$\nu_e$};
\end{tikzpicture}
\end{minipage}
\caption{Feynman diagrams showing the tree-level 3-body decays of the triply-(singly-)charged scalars ($\phi^{3\pm(\pm)}$).}
\label{3-body-phi}
\end{figure}
The partial width for the decay $\phi^{3\pm}\left(\phi^{\pm}\right)~\to l^{\pm}l^{\pm}W^{\pm(\mp)}$ is given by,
\begin{equation}
\Gamma\left(\phi^{3\pm}\left(\phi^{\pm}\right)~\to l^{\pm}l^{\pm}W^{\pm(\mp)}\right)=\frac{\alpha_{EM}\left|f_k\right|^2}{128\,\pi^2\,{\rm sin}^2\theta_W}\,\,m_{\frac{5}{2}\left(\frac{3}{2}\right)}\,\,\sum_{a,b}\left(O_{a3}O_{b3}O_{a1(2)}O_{b1(2)}\right)\,\,I\left(x_{H_a},x_{H_b}\right),\nonumber
\end{equation}
\begin{equation}
{\rm where},~x_i~=~\frac{m_{i}^2}{m^2_{\frac{5}{2}\left(\frac{3}{2}\right)}},~I\left(x,y\right)=\int_{x_W}^1 d\xi_1 \int_0^{\xi_2^{\rm max}} d\xi_2 \frac{\xi_2\left[\left(x_W-\xi_2\right)^2-2\left(x_W+\xi_2\right)+1\right]}{x_W\left(\xi_2-x\right)\left(\xi_2-y\right)},
\label{3body_1}
\end{equation}
and $\xi_2^{\rm max}~=~(1-\xi_1)(\xi_1-x_W)/\xi_1$. The radiative corrections induce a small (of the order of a GeV) mass splitting between $\phi^{3\pm}$ and $H_{1}^{\pm\pm}$, with the triply-charged scalar being heavier than $H_{1}^{\pm\pm}$. Therefore, the triply-charged scalar can decay into an on-shell $H_{1}^{\pm\pm}$ in association with an $e^{\pm}\nu_e$ or $u\bar d$ pair via an off-shell $W$-boson as depicted in Fig.~\ref{3-body-phi} (right panel). The partial decay width for the decay $\phi^{3\pm}~\to e^{\pm} \nu_e H_1^{\pm\pm}$ is given by,
\begin{equation}
\Gamma\left(\phi^{3\pm}~\to e^{\pm}\nu_e H_1^{\pm\pm}\right)~=~\frac{\alpha_{EM}^2O_{11}^2}{32\,\pi\,{\rm sin}^4\theta_W\,m_W^4}\,\,\left(m_{\phi^{3\pm}}-m_{H_1^{\pm\pm}}\right)^5.
\label{3body_3}
\end{equation}
Note that the 3-body decay of $\phi^{3\pm}$ into $e^{\pm}\nu_e H_1^{\pm\pm}$ is suppressed by the splitting $\left(m_{\phi^{3\pm}}-m_{H_1^{\pm\pm}}\right)$ $\sim~1.8$ GeV and is estimated to be of the order of $10^{-12}$ GeV. Since $H_2^{\pm\pm}$ becomes slightly heavier than $\phi^{\pm}$ after the radiative corrections, the decay of the singly-charged scalar into an on-shell $H_{2}^{\pm\pm}$ is kinematically forbidden.
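The quoted numbers can be reproduced with a short back-of-the-envelope estimate. The sketch below assumes illustrative values $\alpha_{EM}\approx 1/128$, $\sin^2\theta_W\approx 0.231$ and $O_{11}\approx 1$; it first evaluates the radiative splitting $m_{\phi^{3\pm}}-m_{H_1^{\pm\pm}}\approx\frac{5}{2}\alpha_{EM}\,m_Z$ and then the width of Eq.~\ref{3body_3}.

```python
import math

ALPHA_EM = 1.0 / 128.0  # assumed EM coupling at the weak scale
SIN2_TW = 0.231         # assumed sin^2(theta_W)
M_W, M_Z = 80.4, 91.19  # gauge-boson masses [GeV]

# Radiative mass splitting m(phi^3) - m(H_1) ~ (5/2) * alpha_EM * m_Z
delta_m = 2.5 * ALPHA_EM * M_Z  # ~1.8 GeV

# Gamma(phi^3 -> e nu H_1) = alpha^2 O_11^2 / (32 pi sin^4(tw) m_W^4) * dm^5
O11 = 1.0  # small doubly-charged mixing assumed
gamma = (ALPHA_EM**2 * O11**2 / (32.0 * math.pi * SIN2_TW**2 * M_W**4)
         * delta_m**5)
```

With these inputs, `delta_m` evaluates to about 1.8 GeV and `gamma` to a few times $10^{-12}$ GeV, consistent with the estimates quoted above.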
\begin{figure}[!t]
\centering
\includegraphics[width=1 \linewidth]{plots/scalar/phi3+_decay_multi.pdf}
\mycaption{(Left panel) The branching ratios of $\phi^{3+}$ are presented as a function of $m_{\phi^{3+}}$ for $m_{\frac{3}{2}}=1.6~{\rm TeV},~m_{k}=1.2~{\rm TeV},~m_{E}=1.4~{\rm TeV},~\mu=\mu^{\prime}=0.1~{\rm GeV},~f_{\frac{5}{2}\left(\frac{3}{2}\right)}=2\times 10^{-4},~\lambda=5\times 10^{-3}$ and $f_k=1.0$. (Right panel) Branching ratios of the 3-body decays are shown as a function of $f_k$ for $m_{\frac{5}{2}}~=~1$ TeV and two different values of $\mu~=~0.1$ and 1 GeV.}
\label{br_phi3}
\end{figure}
Fig.~\ref{br_phi3} (left panel) shows the branching ratios of $\phi^{3\pm}$ into the different 2- and 3-body decay modes. We have assumed $m_{\frac{3}{2}}=1.6~{\rm TeV},~m_{k}=1.2~{\rm TeV},~m_{E}=1.4~{\rm TeV},~\mu=\mu^{\prime}=0.1~{\rm GeV},~f_{\frac{5}{2}\left(\frac{3}{2}\right)}=2\times 10^{-4},~\lambda=5\times 10^{-3}$ and $f_k=1.0$. Obviously, the 2-body decays dominate over the 3-body decays as long as the former are kinematically allowed {\em i.e.,} $m_{\phi^{3\pm}}~>~m_{E}$ or $m_{\frac{5}{2}(k)}$. Fig.~\ref{br_phi3} (left panel) also shows that for $m_{\phi^{3\pm}}~<~m_{E}$ and $m_{\frac{5}{2}(k)}$, the 3-body decay into a $W$-boson in association with a pair of same-sign leptons dominates over the 3-body decay into a $H^{++}_1$ plus an $e^+\nu_e$ or $u\bar d$ pair. Eq.~\ref{3body_3} shows that the partial decay widths for $\phi^{3+}~\to~H_1^{++} e^+ \nu_e~{\rm and}~H_1^{++} u \bar d$ are completely\footnote{For small mixing in the doubly-charged scalar sector, $O_{11}~\approx~1$ and the radiative mass splitting $m_{\phi^{3\pm}}-m_{H_1^{\pm\pm}}~\approx~\frac{5}{2}\alpha_{EM}\,\,m_Z$.} determined by the SM parameters only, whereas $\Gamma\left(\phi^{3\pm}~\to l^{\pm}l^{\pm}W^{\pm}\right)$ depends on the Yukawa coupling $f_{k}$ as well as on the mixing in the doubly-charged scalar sector. In Fig.~\ref{br_phi3} (right panel), we have presented the 3-body decay branching ratios of $\phi^{3\pm}$ as a function of $f_{k}$ for two different values of $\mu~=~0.1$ and 1 GeV. For smaller mixing ({\em i.e.,} smaller $\mu$) between $\phi^{\pm\pm}_{\frac{5}{2}}$ and $k^{\pm\pm}$, the 3-body decay via an off-shell $W$-boson dominates. The partial decay width $\Gamma\left(\phi^{3\pm}~\to~l^{\pm}l^{\pm}W^{\pm}\right)$ becomes significant for larger $f_{k}$ and $\mu$. The branching ratios for the possible 2-body decays of $\phi^{\pm}$ are similar to those of $\phi^{3\pm}$ and hence, are not shown separately. 
However, $\phi^{\pm}$ being lighter than $H_2^{\pm\pm}$, the 3-body decays of $\phi^{\pm}$ into an on-shell $H_2^{\pm\pm}$ in association with an $e^{-}\bar \nu_e (e^{+}\nu_e)$ or $\bar u d(u\bar d)$ pair are kinematically forbidden. Therefore, for $m_{\frac{3}{2}}~<~m_{\frac{5}{2}},~m_{k}~{\rm or}~m_E$, $\phi^{\pm}$ decays into a $W$-boson in association with a same-sign dilepton pair with 100\% branching ratio.
\begin{figure}[!ht]
\centering
\includegraphics[width= 1.0\linewidth]{plots/scalar/h1_decay_br.pdf}
\mycaption{The branching ratios of $H_{1}^{\pm\pm}$ (left panel) and $H_3^{\pm\pm}$ (right panel) are presented as a function of the doubly-charged scalar mass. To calculate the branching ratios of $H_{1(3)}^{\pm\pm}$ in the left panel (right panel), we assume $m_{\frac{3}{2}}=0.9~{\rm TeV},~m_{k\left(\frac{5}{2}\right)}=1.4~{\rm TeV},~m_{E}=1.5~{\rm TeV}$ and $f_{\frac{3}{2}}~=~f_{\frac{5}{2}}~=~2\times 10^{-4}$, $f_{k}~=~0.1\left(2\times 10^{-3}\right)$, $\lambda=~5\times 10^{-3}$ and $\mu~=~0.1(10)$ GeV. The insets show the total decay width ($\Gamma_{TOT}$).}
\label{h1_br}
\end{figure}
\subsubsection{Doubly-charged scalar decay}
\label{2_decay}
Doubly-charged scalars, being charged under $SU(2)_L$ and $U(1)_Y$, have gauge couplings involving another (multi-)charged scalar and a $W/Z$-boson. If kinematically allowed, the doubly-charged scalars undergo 2-body decays into a lighter (multi-)charged scalar and a $W/Z$-boson: $H_a^{\pm\pm}~\to~\phi^{3\pm(\pm)}W^{\mp(\pm)}$ and $H_a^{\pm\pm}~\to~H_b^{\pm\pm}Z$. After the EWSB, the scalar potential in Eq.~\ref{lag_scalar} gives rise to interactions involving two doubly-charged scalars and a SM Higgs boson (see Appendix~\ref{feyn} for details). Therefore, a doubly-charged scalar can decay into a SM Higgs in association with another doubly-charged scalar: $H_a^{\pm\pm}~\to~H_b^{\pm\pm} h$. The doubly-charged scalars can also decay into $E^{\pm\pm}\nu_l$ or $l^\pm l^\pm$ pairs via the Yukawa interactions in Eq.~\ref{lag_yuk}: $H_a^{\pm\pm}~\to~E^{\pm\pm}\nu_l~{\rm or}~l^\pm l^\pm$. The partial decay widths for the above-mentioned 2-body decays are given by,
\begin{eqnarray}
&& \Gamma\left(H_a^{\pm\pm}\to\phi^{3\pm(\pm)}W^{\mp(\pm)}\right)~=~\frac{\alpha_{EM}\, O_{a1(2)}^2}{8\,{\rm sin^2}\theta_W}\,m_{H_a}\,\lambda^{\frac{1}{2}}\left(1,x_{\frac{5}{2}\left(\frac{3}{2}\right)},x_W\right)\,\eta\left(x_{\frac{5}{2}\left(\frac{3}{2}\right)},x_W\right)\, ,\nonumber\\
&& \Gamma\left(H_a^{\pm\pm}\to H_b^{\pm\pm}Z\right)~=~\frac{\alpha_{EM}\,\left(O_{a1}O_{b1}-O_{a2}O_{b2}\right)^2}{4\,{\rm sin^2}2\theta_W}\,m_{H_a}\,\lambda^{\frac{1}{2}}\left(1,x_{H_b},x_Z\right)\,\eta\left(x_{H_b},x_Z\right),\nonumber\\
&& \Gamma\left(H_a^{\pm\pm}\to H_b^{\pm\pm}h\right)~=~\frac{{\cal C}_{ab}^2}{16\,\pi\,m_{H_a}}\,\lambda^{\frac{1}{2}}\left(1,x_{H_b},x_h\right),\nonumber\\
&& \Gamma\left(H_a^{\pm\pm}\to E^{\pm\pm}\nu\right)~=~\frac{\left|f_{\frac{5}{2}}\right|^2\,O_{a1}^2\,+\,\left|f_{\frac{3}{2}}\right|^2\,O_{a2}^2}{16\,\pi}\,m_{H_a}\,\left(1-x_E\right)^2\, ,\nonumber\\
&& \Gamma\left(H_a^{\pm\pm}\to l^{\pm}l^\pm\right)~=~\frac{\left|f_{k}\right|^2\,O_{a3}^2}{16\,\pi}\,m_{H_a}\, ,
\label{h_w}
\end{eqnarray}
where, $\lambda\left(x,y,z\right)$, $\eta\left(x,y\right)$ are already defined in the previous section and
\begin{equation}
{\cal C}_{ab}~=~\mu\,\left\{\frac{1}{\sqrt 2}\left(O_{a2}-O_{a1}\right)O_{b3}+O_{a3}\left(O_{b2}-O_{b1}\right)\right\}+\lambda v\left(O_{a1}O_{b2}+O_{a2}O_{b1}\right).
\end{equation}
\begin{figure}[t]
\centering
\includegraphics[width= 1.0\linewidth]{plots/scalar/h2_decay_br.pdf}
\mycaption{(Left panel) The branching ratios of $H_2^{\pm\pm}$ are presented as a function of $m_{H_2}$ for $m_{\frac{5}{2}}=1.1~{\rm TeV},~m_{k}=1.4~{\rm TeV},~m_{E}=1.5~{\rm TeV},~\mu=\mu^{\prime}=0.1~{\rm GeV},~f_{\frac{5}{2}\left(\frac{3}{2}\right)}=2\times 10^{-4},~\lambda=5\times 10^{-3}$ and $f_k=5\times 10^{-2}$. (Right panel) Branching ratios of two possible decay modes of $H_2^{\pm\pm}$ for $m_{\frac{3}{2}}~<~m_{\frac{5}{2}},~m_{k}$ and $m_E$ are presented as a function of $f_k$ for $m_{\frac{3}{2}}~=~0.8$ TeV and two different values of $\mu~=~0.1$ and 1 GeV.}
\label{h2_br}
\end{figure}
The left panel (right panel) of Fig.~\ref{h1_br} shows the branching ratios of $H_1^{\pm\pm}\left(H_3^{\pm\pm}\right)$ as a function of the doubly-charged scalar mass. $H_{1}^{\pm\pm}$, being lighter than $\phi^{3\pm}$, can not decay into $\phi^{3\pm}$. However, depending on the choice of mass parameters {\em i.e.,} $m_{\frac{5}{2}\left(\frac{3}{2}\right)},~m_k$ and $m_E$, the decays of $H_1^{\pm\pm}\left(H_3^{\pm\pm}\right)$ into other (multi-)charged scalars and the fermion are allowed. If the decays of $H_{1}^{\pm\pm}\left(H_{3}^{\pm\pm}\right)$ into lighter (multi-)charged scalars or fermions are kinematically forbidden {\em i.e.,} $m_{\frac{5}{2}\left(k\right)}~<~m_{\frac{3}{2}},~m_{k\left(\frac{5}{2}\right)}$ and $m_E$, $H_{1}^{\pm\pm}\left(H_{3}^{\pm\pm}\right)$ dominantly decays into a same-sign dilepton pair. In the insets of Fig.~\ref{h1_br}, we have presented the total decay width. It is important to note that $\Gamma\left(H_{1}^{\pm\pm}~\to~l^\pm l^\pm\right)$ (see Eq.~\ref{h_w}) is suppressed by the Yukawa coupling $f_k$ as well as by the small mixing in the doubly-charged scalar sector, whereas $\Gamma\left(H_{3}^{\pm\pm}~\to~l^\pm l^\pm\right)$ is suppressed only by the Yukawa coupling $f_k$. The insets of Fig.~\ref{h1_br} show that the chosen values of $f_k~=~0.1(2\times 10^{-3}),~\mu~=~0.1(10)$ GeV and $\lambda~=~5\times 10^{-3}$ ensure prompt decay of $H_{1(3)}^{\pm\pm}$ at the LHC. However, it can be easily estimated from Eq.~\ref{h_w} that with the same set of values for $\mu$ and $\lambda$, $f_{k}~<~10^{-3}\left(10^{-9}\right)$ gives rise to a long-lived $H_{1(3)}^{\pm\pm}$ which remains stable inside the detector.
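The prompt-versus-long-lived criterion can be made quantitative via the proper decay length $c\tau=\hbar c/\Gamma$. The sketch below is illustrative only: it keeps just the $l^\pm l^\pm$ channel with an effective coupling $f_{\rm eff}=f_k\,O_{a3}$ (so the mixing suppression relevant for $H_1^{\pm\pm}$ is absorbed into $f_{\rm eff}$, and $O_{33}\approx 1$ is assumed for $H_3^{\pm\pm}$), showing that $f_{\rm eff}\sim 10^{-9}$ indeed corresponds to a decay length of several metres, i.e. a detector-stable particle.

```python
import math

HBARC = 1.9733e-16  # hbar*c in GeV*m

def gamma_ll(f_eff, m_h):
    """Gamma(H -> l l) = |f_eff|^2 m / (16 pi), with f_eff = f_k * O_a3."""
    return f_eff**2 * m_h / (16.0 * math.pi)

def ctau_m(f_eff, m_h):
    """Proper decay length c*tau in metres."""
    return HBARC / gamma_ll(f_eff, m_h)

# f_eff = 2e-3 (e.g. H_3 with f_k = 2e-3, O_33 ~ 1): prompt decay
ctau_prompt = ctau_m(2e-3, 1000.0)
# f_eff = 1e-9: c*tau of several metres -> stable inside the detector
ctau_stable = ctau_m(1e-9, 1000.0)
```

The two benchmarks differ by thirteen orders of magnitude in decay length, which is why the long-lived-particle searches discussed below apply only in the tiny-coupling corner of the parameter space.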
The mass of the doubly-charged scalar $H_2^{\pm\pm}$ is slightly larger than the mass of $\phi^{\pm}$. The radiative mass splitting between $H_2^{\pm\pm}$ and $\phi^{\pm}$ is given by $m_{H_2^{\pm\pm}}-m_{\phi^{\pm}}~\approx~\frac{3}{2}\alpha_{EM}\,\,m_Z~\approx~1.1$ GeV. Therefore, $H_2^{++}$ can decay into an on-shell $\phi^{+}$ in association with a $e^+\nu_e$ or $u \bar d$ pair. The tree-level 3-body decay $H_2^{++}~\to~\phi^{+}e^+\nu_e$ proceeds through an off-shell $W$-boson and the partial decay width is given by,
\begin{equation}
\Gamma\left(H_2^{\pm\pm}~\to~e^\pm\nu_e\phi^{\pm}\right)~=~\frac{\alpha_{EM}^2O_{22}^2}{32\,\pi\,{\rm sin}^4\theta_W\,m_W^4}\,\,\left(m_{H_2^{\pm\pm}}-m_{\phi^{\pm}}\right)^5.
\label{3body_h2}
\end{equation}
The decay $H_2^{++}~\to~\phi^{+}e^+\nu_e$ is suppressed by the small mass splitting between $H_2^{\pm\pm}$ and $\phi^{\pm}$. However, if the above-mentioned 2-body decays of $H_2^{\pm\pm}$ are kinematically forbidden or suppressed (by the Yukawa parameters and/or mixing), the 3-body decay could be important and will have important consequences at the LHC, which will be discussed in section~\ref{coll_scalar}. In Fig.~\ref{h2_br} (left panel), we have presented the branching ratios of $H_2^{\pm\pm}$ as a function of $m_{H_2}$ for $m_{\frac{5}{2}}=1.1~{\rm TeV},~m_{k}=1.4~{\rm TeV},~m_{E}=1.5~{\rm TeV},~\mu=\mu^{\prime}=0.1~{\rm GeV},~f_{\frac{5}{2}\left(\frac{3}{2}\right)}=2\times 10^{-4},~\lambda=5\times 10^{-3}$ and $f_k=5\times 10^{-2}$. Fig.~\ref{h2_br} (left panel) shows that for $m_{\frac{3}{2}\left(k\right)}~<~m_{\frac{5}{2}},~m_{k}$ and $m_E$, the possible decay modes of $H_2^{\pm\pm}$ are $H_2^{++}~\to~l^+ l^+$ and $H_2^{++}~\to~\phi^{+}+e^{+}\nu_e/u \bar d$, with the same-sign dileptonic decay being the dominant one. However, the partial decay widths of the same-sign dileptonic decays of $H_2^{\pm\pm}$ (see Eq.~\ref{h_w}) are proportional to $f_k^2$ as well as $O_{23}^2$. Therefore, suppressed mixing between $\phi^{\pm\pm}_{\frac{3}{2}}$ and $k^{\pm\pm}$ ({\em i.e.,} smaller $\mu$) and/or smaller $f_k$ result in a suppressed $\Gamma\left(H_2^{\pm\pm}\to l^{\pm}l^\pm\right)$ and hence, an enhanced branching ratio for $H_2^{++}~\to~\phi^{+}+e^{+}\nu_e/u \bar d$. In Fig.~\ref{h2_br} (right panel), we have presented the branching ratios of $H_2^{++}~\to~l^+ l^+$ and $H_2^{++}~\to~\phi^{+}+e^{+}\nu_e/u \bar d$ as a function of $f_k$ for $m_{\frac{3}{2}}~=~0.8$ TeV and two different values of $\mu~=~0.1$ and 1 GeV. 
Fig.~\ref{h2_br} (right panel) shows that for smaller values of $f_k$ and/or $\mu$, the kinematically suppressed 3-body decays, $H_2^{++}~\to~\phi^{+}+e^{+}\nu_e/u \bar d$, dominate over the Yukawa- and mixing-suppressed 2-body decays, $H_2^{++}~\to~l^+ l^+$. After this detailed discussion of the possible decay modes and branching ratios of the (multi-)charged scalars, we are now ready to study their collider signatures, which will be discussed in the following.
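The competition between the two $H_2^{\pm\pm}$ decay modes can be illustrated numerically. The sketch below is a rough estimate under stated assumptions: only the $e^+\nu_e$ channel of the 3-body mode is kept, $O_{22}\approx 1$, the radiative splitting is fixed at $\Delta m = m_{H_2^{\pm\pm}}-m_{\phi^{\pm}}\approx 1.1$ GeV, and the product $f_k O_{23}$ is treated as a single effective parameter. It locates the value of $f_k O_{23}$ at which the two partial widths cross.

```python
import math

ALPHA_EM = 1.0 / 128.0  # assumed EM coupling at the weak scale
SIN2_TW = 0.231         # assumed sin^2(theta_W)
M_W = 80.4              # W-boson mass [GeV]

def gamma_ll(f_eff, m_h):
    """Gamma(H_2 -> l l) ~ (f_k O_23)^2 m / (16 pi)."""
    return f_eff**2 * m_h / (16.0 * math.pi)

def gamma_3body(delta_m, o22=1.0):
    """Gamma(H_2 -> e nu phi), e-nu channel only, for splitting delta_m."""
    return (ALPHA_EM**2 * o22**2 / (32.0 * math.pi * SIN2_TW**2 * M_W**4)
            * delta_m**5)

m_h2, dm = 800.0, 1.1
g3 = gamma_3body(dm)
# effective coupling at which the 2-body width equals the 3-body width
f_cross = math.sqrt(g3 * 16.0 * math.pi / m_h2)
```

With these assumptions the crossover lies at $f_k O_{23}\sim 10^{-7}$, consistent with the qualitative picture above: the 3-body mode wins only for very small $f_k$ and/or $\mu$.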
\subsection{Collider signatures}
\label{coll_scalar}
In this section, we will discuss the collider signatures of (multi-)charged scalars at the LHC with $\sqrt s~=~13$ TeV. We will also study the impact of existing LHC searches on the parameter space of our model. In the context of the LHC phenomenology, we are particularly interested in the lightest scalar multiplet because the lightest states will be copiously produced and hence, most easily discovered. Therefore, the signatures of this model at the LHC can be classified according to the hierarchy between the masses of the different scalar multiplets and the doubly-charged fermion. In our analysis, we consider four possible scenarios defined in the following:
\begin{itemize}
\item {\em Scenario I} assumes $m_{\frac{3}{2}}~\ll~m_{\frac{5}{2}(k)},~m_E$ and hence, $\phi^{\pm}$ and $H_2^{\pm\pm}$, being the lightest among the (multi-)charged scalars and fermion, are the most important in the context of the LHC experiment. The collider phenomenology of {\em Scenario I} is determined by $m_{\frac{3}{2}}$, $\mu$ and $f_k$. The pair and associated production of $\phi^{\pm}/H_2^{\pm\pm}$ at the LHC are determined by $m_{\frac{3}{2}}$, whereas $\mu$ determines the $\phi^{\pm\pm}_{\frac{3}{2}}$--$k^{\pm\pm}$ mixing and hence, controls the branching ratios of $H_2^{\pm\pm}$. The branching ratios of $H_2^{\pm\pm}$ also depend on $f_{k}$ (see Fig.~\ref{h2_br}). For our numerical calculations, we have assumed $m_{\frac{5}{2}(k)},~m_E~\sim~2.5$ TeV and varied $m_{\frac{3}{2}}$ over a range between 300--1500 GeV. Although the collider phenomenology of {\em Scenario I} is almost insensitive to the values of $f_{\frac{3}{2}\left(\frac{5}{2}\right)}~{\rm and}~\lambda$, we have assumed $f_{\frac{3}{2}\left(\frac{5}{2}\right)}~=~2\times 10^{-4}$ and $\lambda~=~5\times 10^{-3}$.
\item {\em Scenario II} is defined by the mass hierarchy $m_{\frac{5}{2}}~\ll~m_{\frac{3}{2}(k)},~m_E~\sim~2.5$ TeV. In this case, the signatures at the LHC are governed by the production and decay of $\phi^{3\pm}$ and $H_1^{\pm\pm}$ which are the lightest among the (multi-)charged scalars and fermion. The collider phenomenology of {\em Scenario II} is mainly determined by $m_{\frac{5}{2}}$, $\mu$ and $f_k$. The values of other collider insensitive parameters are chosen same as in the case of {\em Scenario I}.
\item {\em Scenario III} assumes $m_{k}~\ll~m_{\frac{3}{2}\left(\frac{5}{2}\right)},~m_E~\sim~2.5$ TeV and hence, $H_3^{\pm\pm}$ being the lightest one, determines the collider signatures. If $H_3^{\pm\pm}$ is the lightest, it decays into a same-sign dilepton pair with 100\% branching ratio. Therefore, the LHC phenomenology of {\em Scenario III} is completely determined by the mass of $H_3^{\pm\pm}$ and hence, $m_k$.
\item {\em Scenario IV} corresponds to $m_E~\ll~m_{\frac{5}{2}\left(\frac{3}{2}\right)},~m_{k}~\sim~2.5$ TeV and hence, results in the doubly-charged fermion being the lightest state. The phenomenology of {\em Scenario IV} has already been discussed in section~\ref{E_phenomenology}.
\end{itemize}
\begin{figure}[t]
\centering
\includegraphics[width= 0.8\linewidth]{plots/scalar/nlep.pdf}
\mycaption{Distribution of the number of leptons (electrons and muons only) after imposing the {\em acceptance cuts} listed in Eqns.~\ref{cut:pT}--\ref{cut:jj-iso} is presented for {\em Scenario I} with $m_{\frac{3}{2}}~=~0.8$ TeV, $\mu~=~0.1$ GeV and $f_{k}~=~10^{-3}$.}
\label{nlep}
\end{figure}
In this section, we will study the LHC phenomenology of {\em Scenario I, II} and {\em III}. Before going into a detailed discussion of the prompt-decay signatures of the above-mentioned scenarios, we will discuss the possibility of these (multi-)charged scalars being long-lived and hence, giving rise to abnormally large ionization signatures at the LHC. In the presence of the tree-level 3-body decays via an off-shell $W$-boson, $H_2^{\pm\pm}$ and $\phi^{3\pm}$ always undergo prompt decays at the LHC. However, $H_{1(3)}^{\pm\pm}$ can be long-lived in certain parts of the parameter space (see section~\ref{2_decay}). In Fig.~\ref{scalar_pair_prod} (left panel), we have plotted the ATLAS observed 95\% CL upper limits on the pair-production cross-sections of long-lived doubly-charged particles ($\sigma^{95\%}_{DCPs}$) along with the model predictions for the doubly-charged scalar pair production cross-sections. Fig.~\ref{scalar_pair_prod} (left panel) shows that for a long-lived $H_{1(3)}^{\pm\pm}$, $m_{H_{1(3)}}$ below about 800 GeV is excluded by the ATLAS search for long-lived MCPs.
In {\em Scenario I} and {\em II}\footnote{The phenomenology of {\em Scenario III} is straightforward because of the presence of a characteristic same-sign dilepton invariant mass peak and hence, will be discussed separately.}, the prompt decays of the (multi-)charged scalars result in one or more leptons and $W$-bosons in the final state. Therefore, the pair and associated productions of $\phi^{\pm}~{\rm and}~H_2^{\pm\pm}$ (in the case of {\em Scenario I}) as well as $\phi^{3\pm}~{\rm and}~H_1^{\pm\pm}$ (in the case of {\em Scenario II}) give rise to multi-lepton, jets and $E_T\!\!\!\!\!\!/~$ signatures at the LHC. For example, in the context of {\em Scenario I}, $\phi^{\pm}$ dominantly decays into a pair of same-sign leptons in association with a $W$-boson, whereas, depending on the choice of $f_k$ and $\mu$, both same-sign dileptonic decays and 3-body decays into $\phi^{+}e^+\nu_e/u\bar d$ are possible for $H_2^{++}$. Therefore, depending on the subsequent decays of the $W$-bosons, the pair and associated production of $\phi^{\pm}~{\rm and}~H_2^{\pm\pm}$ may result in up to 6 leptons in the final state. For {\em Scenario I} with $m_{\frac{3}{2}}~=~0.8$ TeV, $\mu~=~0.1$ GeV and $f_{k}~=~10^{-3}$, Fig.~\ref{nlep} shows the lepton multiplicity (electrons and muons only) distribution\footnote{In order to impose the cuts and generate the distributions, we have used a parton-level Monte-Carlo computer code. The technical details of our simulation were already discussed in section~\ref{E_coll}.} after the {\em acceptance cuts} listed in Eqns.~\ref{cut:pT}--\ref{cut:jj-iso}. The six-lepton final state arises only from the pair production of $\phi^{\pm}$ followed by the leptonic decay of both $W$-bosons and hence, is highly suppressed. Final states with an odd number of leptons are also suppressed because only the production of $\phi^{\pm}\phi^{\mp}$ and $\phi^{\pm}H_2^{\mp\mp}$ contributes to the odd-lepton final states, when one $W$-boson decays leptonically. 
Whereas, the dilepton and 4-lepton final states get contributions from all possible combinations of $\phi^{\pm}~{\rm and}~H_2^{\pm\pm}$ pair and associated productions and hence, are enhanced. The dilepton signature suffers from a huge SM background and thus, we choose to study 4-lepton final states as a signature of the (multi-)charged scalars in our model. In section~\ref{E_coll}, 4-lepton searches (the existing ATLAS search as well as our proposed search) at the LHC have already been discussed in detail in the context of the doubly-charged fermion. In the next section, we will study the implications of those 4-lepton searches (discussed in section~\ref{E_coll}) in the context of the (multi-)charged scalars.
\subsubsection{Four-lepton signature}
Without going into the details of the 4-lepton search strategies (which have already been discussed in detail in section~\ref{E_coll}), in Table~\ref{select_scalar}, $4l$ signal cross-sections after different cuts are presented for {\em Scenario I} and {\em II} for different values of $m_{\frac{3}{2}}$ and $m_{\frac{5}{2}}$, respectively. The definitions of the {\em acceptance cuts}, the {\em ATLAS cuts} and the {\em proposed cuts} can be found in Eqns.~\ref{cut:pT}--\ref{cut:jj-iso}, Table~\ref{ATLAS_cuts} and Table~\ref{proposed_cuts}, respectively. The potential sources of $4l$ contributions from the SM processes (the SM backgrounds) have been discussed in section~\ref{4l_back}, and the numerical values of the SM background cross-sections after different cuts are presented in Table~\ref{cut_flow}.
\begin{table}[h!]
\centering
\begin{tabular}{ c||c|c}
\hline\hline
\multicolumn{3}{ c}{$4l$ signal cross-sections [fb] after different cuts} \\
\hline\hline
Mass [GeV] & ATLAS cuts & ATLAS + Proposed cuts\\\hline\hline
$m_{\frac{3}{2}}$ & \multicolumn{2}{ c}{\em Scenario I} \\\hline
650 & 0.32 & 0.27\\
750 & 0.16 & 0.15\\\hline\hline
$m_{\frac{5}{2}}$ & \multicolumn{2}{ c}{\em Scenario II} \\\hline
750 & 0.34& 0.27\\
850 & 0.2 & 0.17\\\hline\hline
\end{tabular}
\caption{$4l$ signal cross-sections for {\em Scenario I} and {\em II} after the ATLAS (tabulated in Table~\ref{ATLAS_cuts}) and ATLAS + proposed cuts (tabulated in Table~\ref{proposed_cuts}) are presented for different values of $m_{\frac{3}{2}}$ and $m_{\frac{5}{2}}$, respectively.}
\label{select_scalar}
\end{table}
\begin{figure}[!ht]
\centering
\includegraphics[width=0.48 \linewidth]{plots/scalar/scalar_exc.pdf}
\includegraphics[width=0.48 \linewidth]{plots/scalar/reach_scalar.pdf}
\mycaption{(Left panel) Four lepton signal cross-section ($\sigma_{4Lep}$) as a function of $m_{\frac{3}{2}\left(\frac{5}{2}\right)}$ after the selection cuts (listed in Table~\ref{ATLAS_cuts}) used by the ATLAS collaboration in Ref.~\cite{Aaboud:2018zeb}. The black solid line corresponds to the observed 95\% CL upper bound on the 4-lepton signal cross-section. (Right panel) With the proposed event selection criteria (see Table~\ref{proposed_cuts}), required luminosities for $3\sigma$ and $5\sigma$ discovery of {\em Scenario I}({\em II}) are plotted as a function of $m_{\frac{3}{2}\left(\frac{5}{2}\right)}$.}
\label{bound_scalar}
\end{figure}
In Fig.~\ref{bound_scalar} (left panel), we have presented the $4l$ signal cross-sections (after the {\em acceptance cuts} and {\em ATLAS cuts}) for {\em Scenario I} and {\em II} as a function of $m_{\frac{3}{2}}$ and $m_{\frac{5}{2}}$, respectively. The horizontal line in Fig.~\ref{bound_scalar} (left panel) corresponds to the ATLAS 95\% CL upper limit on the visible $4l$ cross-section ($\sigma(4l)_{\rm vis}^{95}$) \cite{Aaboud:2018zeb}. Fig.~\ref{bound_scalar} (left panel) clearly shows that for $m_{\frac{3}{2}\left(\frac{5}{2}\right)}~\lesssim~650(760)$ GeV, the contribution from {\em Scenario I}({\em II}) to the visible $4l$ signal cross-section is larger than $\sigma(4l)_{\rm vis}^{95}$. Therefore, one can set a lower bound of about 650(760) GeV on $m_{\frac{3}{2}\left(\frac{5}{2}\right)}$ from the ATLAS search for the $4l$ final state in Ref.~\cite{Aaboud:2018zeb}.
In Fig.~\ref{bound_scalar} (right panel), the required integrated luminosities for the $3\sigma$ and $5\sigma$ discovery of {\em Scenario I}({\em II}) are presented as a function of $m_{\frac{3}{2}\left(\frac{5}{2}\right)}$ at the LHC with $\sqrt s~=~13$ TeV. We found that the LHC with 3000 fb$^{-1}$ integrated luminosity and 13 TeV center-of-mass energy will be able to probe $m_{\frac{3}{2}\left(\frac{5}{2}\right)}$ up to about 1250 (1530) GeV at 3$\sigma$ significance. The shaded region of Fig.~\ref{bound_scalar} (right panel) corresponds to the part of the parameter space which is already excluded by the ATLAS $4l$-search in Ref.~\cite{Aaboud:2018zeb}.
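The luminosity reach quoted above follows from a standard counting-experiment estimate. The sketch below is schematic: the significance is taken as $S/\sqrt{S+B}$, the signal value is taken from Table~\ref{select_scalar}, and the background cross-section is an assumed placeholder, chosen purely for illustration rather than taken from Table~\ref{cut_flow}.

```python
def required_luminosity(sig_fb, bkg_fb, n_sigma):
    """Luminosity [fb^-1] such that S/sqrt(S+B) = n_sigma,
    with S = sig_fb * L and B = bkg_fb * L."""
    return n_sigma**2 * (sig_fb + bkg_fb) / sig_fb**2

# Illustrative inputs: 0.27 fb signal after all cuts and an
# assumed 0.1 fb residual SM background (placeholder value).
lumi_3s = required_luminosity(0.27, 0.1, 3.0)
lumi_5s = required_luminosity(0.27, 0.1, 5.0)
```

For fixed cross-sections, a $5\sigma$ discovery always requires $(5/3)^2\approx 2.8$ times the luminosity needed for $3\sigma$ evidence, which is the spacing between the two curves in Fig.~\ref{bound_scalar} (right panel).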
\subsubsection{Search for the doubly-charged scalars}
One of the possible decay modes of the doubly-charged scalars is $H_a^{\pm\pm}~\to~l^\pm l^\pm$. Therefore, some of the 4-lepton signal events, discussed in the previous section, might contain a same-sign pair of leptons resulting from the decay of a doubly-charged scalar. Though the branching ratios of the same-sign dileptonic decays of the doubly-charged scalars depend on the choice of $f_k$ and $\mu$, it is instructive to search for invariant mass peaks in the same-sign dilepton invariant mass distributions\footnote{Final states with a pair of positively charged and a pair of negatively charged leptons fall into the category of 4-lepton final states defined in section~\ref{4l_selection}. Therefore, two same-sign dilepton invariant mass distributions can be constructed.}. For example, in {\em Scenario I}, the doubly-charged scalar, $H_2^{\pm\pm}$, dominantly decays into $l^\pm l^\pm$ for larger $\mu$ and $f_k$ and hence, one expects to observe characteristic same-sign dilepton invariant mass peaks as a signature of $H_2^{\pm\pm}$-pair production. The pair productions $H_1^{\pm \pm}H_1^{\mp \mp}$ and $H_3^{\pm \pm}H_3^{\mp \mp}$ in {\em Scenario II} and {\em III}, respectively, always result in spectacular same-sign dilepton invariant mass peaks (see Fig.~\ref{h1_br} for the branching ratios of $H_{1(3)}^{\pm\pm}$). In {\em Scenario II}, the production of $\phi^{3\pm}\phi^{3\mp}$ and $\phi^{3\pm}H_1^{\pm\pm}$ followed by the decay of $\phi^{3\pm}$ into an on-shell $H_1^{\pm\pm}$ also contributes to the same-sign dilepton invariant mass peaks. Fig.~\ref{SSD_phi} shows the same-flavour, same-sign (positive) dilepton invariant mass ($M_{l_1^+ l_2^+}$) distributions for both the signal ({\em Scenario I} with $\mu~=~1$ GeV and $f_k~=~10^{-2}$, $m_{\frac{3}{2}}=$ 0.6 and 0.8 TeV) and the SM background after imposing the {\em acceptance cuts}. 
The signal $M_{l_1^+ l_2^+}$-distributions are characterized by a spectacular peak at $m_{\frac{3}{2}}$ (resulting from the production and same-sign dileptonic decay of $H_{2}^{\pm\pm}$) followed by a kinematic edge (resulting from the production and decay of $\phi^{\pm}$) at $m_{\frac{3}{2}}-m_W$.
\begin{figure}[t]
\centering
\includegraphics[width= 0.8\linewidth]{plots/scalar/SSD_invmass_phi_3_2.pdf}
\mycaption{Same sign dilepton invariant mass ($M_{l_1^+ l_2^+}$) distributions for both signal ($m_{\frac{3}{2}}=$ 0.6 and 0.8 TeV) and the SM background are presented for {\em Scenario I} after imposing the {\em acceptance cuts} (listed in Eqns.~\ref{cut:pT}--\ref{cut:jj-iso}) on the $4l$ final state (see section~\ref{4l_selection} for details). We have assumed $\mu~=~1$ GeV and $f_k~=~10^{-2}$.}
\label{SSD_phi}
\end{figure}
\begin{figure}[!ht]
\centering
\includegraphics[width= 0.8\linewidth]{plots/scalar/H_pair_cross_bound.pdf}
\mycaption{The pair-production cross-sections of doubly-charged scalars ($H_a^{\pm\pm}$ with $a\in\{1,2,3\}$) decaying into $l^\pm l^\pm$ ($l\in\{e,\mu\}$) are presented as a function of the doubly-charged scalar mass ($m_{H_a}$) for different values of $\mu$ and $f_k$. The grey solid line corresponds to the ATLAS observed \cite{Aaboud:2017qph} model-independent 95\% CL upper limit on the doubly-charged scalar pair production cross-section.}
\label{bound_h_cross}
\end{figure}
Leptonic final states with a pair of prompt, isolated, highly energetic leptons with the same electric charge are very rare in the framework of the SM. However, such final states might have a significant rate in the framework of different BSM scenarios and thus, are extensively studied by the ATLAS~\cite{Aaboud:2017qph,ATLAS:2017iqw,Ucchielli:2017qad,ATLAS:2014kca,Nuti:2014eaa,ATLAS:2012hi} and the CMS~\cite{Chatrchyan:2012ya,CMS:2017pet,CMS:2016cpz} collaborations of the LHC experiments. To constrain the parameter space of our model, we are interested in the existing LHC searches for a doubly-charged scalar decaying into a pair of same-sign leptons. In this work, we consider the ATLAS search \cite{Aaboud:2017qph} for a peak in the observed invariant mass distribution of same-charge lepton pairs with 36.1 fb$^{-1}$ integrated luminosity data of the LHC running at $\sqrt s~=~$13 TeV. The ATLAS analysis in Ref.~\cite{Aaboud:2017qph} is designed to search for a doubly-charged scalar that is not only present in our model but also arises in a large variety of BSM scenarios~\cite{Zee:1985id,Babu:1988ki,Nebot:2007bc,Gunion:1989in,Pati:1974yy,Mohapatra:1974hk,Senjanovic:1975rk,Gunion:1989ci,ArkaniHamed:2002qx,Muhlleitner:2003me,Perez:2008zc,Georgi:1985nv}. However, the observed data is found to be consistent with the SM predictions in all the two, three and four lepton signal regions\footnote{To constrain the parameter space of our model, we have used the ATLAS 95\% CL upper bound (model-independent) on the pair production cross-section of doubly-charged scalars decaying into a pair of same-charge leptons. The definitions of different signal regions and event selection criteria, which lead to the above-mentioned bound, are not discussed here. We refer the interested readers to Ref.~\cite{Aaboud:2017qph} for details.} considered in Ref.~\cite{Aaboud:2017qph}. 
In the absence of evidence for a signal, the pair production cross-section of doubly-charged scalar, which decays into a pair of same-sign electrons/muons with a 100\% branching ratio, is excluded down to about 0.1 fb at 95\% CL. In the context of our model, the ATLAS model-independent bounds on the pair production cross-sections of doubly-charged scalars are utilized to constrain the masses of $H_1^{\pm\pm}$, $H_2^{\pm\pm}$ and $H_3^{\pm\pm}$ in {\em Scenario II, I} and {\em III}, respectively.
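The recast itself amounts to comparing the model prediction $\sigma(pp\to H^{++}H^{--})\times {\rm BR}^2(H^{\pm\pm}\to l^\pm l^\pm)$ with the observed upper limit at each mass point, and reading off the smallest mass at which the prediction falls below the limit. The sketch below shows only the mechanics with placeholder cross-section and limit grids; the numbers are purely illustrative and are {\em not} the ATLAS or model values.

```python
import bisect

# Placeholder grids (fb); NOT the actual ATLAS or model values.
masses      = [300, 500, 700, 900, 1100]        # GeV
sigma_model = [50.0, 5.0, 0.8, 0.15, 0.03]      # pair-production cross section
sigma_limit = [1.0, 0.5, 0.25, 0.12, 0.08]      # observed 95% CL upper limit

def interp(xs, ys, x):
    """Piecewise-linear interpolation on a sorted grid xs."""
    i = bisect.bisect_left(xs, x)
    i = min(max(i, 1), len(xs) - 1)
    x0, x1, y0, y1 = xs[i - 1], xs[i], ys[i - 1], ys[i]
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

def lower_mass_bound(br, step=1.0):
    """Smallest scanned mass at which sigma_model * BR^2 drops below the limit."""
    m = masses[0]
    while m <= masses[-1]:
        if interp(masses, sigma_model, m) * br ** 2 <= interp(masses, sigma_limit, m):
            return m
        m += step
    return masses[-1]
```

For a 100\% dileptonic branching ratio (`br=1.0`) the excluded region extends to wherever the two placeholder curves cross; reducing `br` weakens the bound, mirroring the $\mu$- and $f_k$-dependence discussed above.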
\begin{figure}[!ht]
\centering
\includegraphics[width= 1.0\linewidth]{plots/scalar/marged_H_bound.pdf}
\mycaption{The observed 95\% CL lower limits (color gradient) on the masses of the doubly-charged scalars in {\em Scenario I} (right panel) and {\em Scenario II} (left panel) are presented as a function of $\mu$ (x-axis) and $f_k$ (y-axis).}
\label{bound_h}
\end{figure}
Fig.~\ref{bound_h_cross} shows the model predictions for the doubly-charged scalar pair-production cross-sections for {\em Scenario I}($a=2$), {\em II}($a=1$) and {\em III}($a=3$) as a function of $m_{H_a}$ for different values of $\mu$ and $f_k$. The ATLAS observed 95\% CL upper bound on the pair-production cross-section of the doubly-charged scalars decaying into a pair of same-sign leptons is also depicted in Fig.~\ref{bound_h_cross}. The model predictions for the pair productions of doubly-charged scalars followed by the same-sign dileptonic decays significantly depend on the values of $\mu$ and $f_k$ for {\em Scenario I} and {\em II}. This can be attributed to the fact that in the presence of a second decay mode (namely, $H_2^{\pm\pm}~\to~\phi^{\pm}W^{*\pm}$), the same-sign dileptonic decay branching ratios of $H_{2}^{\pm\pm}$ in {\em Scenario I} depend on $\mu$ and $f_k$. In particular, larger $\mu$ and $f_k$ correspond to larger branching ratios for $H_2^{\pm\pm}~\to ~ l^\pm l^\pm$ and hence, a stronger bound on $m_{H_2}$ from the ATLAS search for a same-sign dilepton invariant mass peak. Fig.~\ref{bound_h_cross} shows that $m_{H_2}$ below $\sim$ 720(430) GeV is excluded for $\mu~=~7$ GeV and $f_k~=~6\times 10^{-3}$($\mu~=~3$ GeV and $f_k~=~3.5\times 10^{-3}$). In {\em Scenario II}, the same-sign dileptonic decays are the only allowed decay modes for $H_{1}^{\pm\pm}$ and hence, $H_1^{\pm\pm}~\to~l^\pm l^\pm$ has a 100\% branching ratio. However, the pair-production of $H_1^{\pm\pm}$ gets extra contributions from the pair and associated productions of $\phi^{3\pm}$ followed by the decay of $\phi^{3\pm}$ into an on-shell $H_1^{\pm\pm}$. The $\mu$ and $f_k$ dependence in the theoretical predictions for the $H_1^{\pm\pm}$ ({\em Scenario II}) pair-productions in Fig.~\ref{bound_h_cross} arises from the $\mu$ and $f_k$ dependence of the $\phi^{3\pm}~\to~H_1^{\pm\pm}W^{*\pm}$ branching ratio (see the left panel of Fig.~\ref{br_phi3}). 
Fig.~\ref{bound_h_cross} shows that the ATLAS bound on the doubly-charged scalar pair production cross-section in Ref.~\cite{Aaboud:2017qph} sets a lower bound of about 780(1130) GeV on $m_{H_1}$ for $\mu~=~10$ GeV and $f_k~=~6\times 10^{-2}$($\mu~=~2$ GeV and $f_k~=~10^{-2}$). It is clear from the previous discussion that for {\em Scenario I} and {\em Scenario II}, the lower bounds on the masses of doubly-charged scalars crucially depend on the values of $\mu$ and $f_k$. Therefore, in Fig.~\ref{bound_h}, we present the lower bounds on $m_{H_1}$ (left panel) and $m_{H_2}$ (right panel) as a function of $\mu$ and $f_k$. Fig.~\ref{bound_h} shows that for {\em Scenario I}({\em Scenario II}), the lower bound on $m_{H_2}(m_{H_1})$ can be as low(high) as 313(1150) GeV for smaller values of $\mu$ and $f_k$.
\subsection{Summary}
The collider phenomenology of (multi-)charged scalars is discussed in this section. At the LHC with $\sqrt s~=~13$ TeV, the pair and associated production rates of (multi-)charged scalars (with masses of the order of a few hundred GeV to a TeV) are large enough to study their signatures. After being produced at the LHC, $\phi^{3\pm}$ and $H_2^{\pm\pm}$ undergo prompt decay, whereas, for smaller values of $\mu,~\lambda$ and $f_k$, the decay lengths of $\phi^{\pm}$, $H_1^{\pm\pm}$ and $H_3^{\pm\pm}$ could be large enough that these scalars decay outside the LHC detector. The ATLAS search for abnormally large ionization signatures, which probes long-lived multi-charged particles, excludes $m_{H_{1(3)}}$ below about 800 GeV for long-lived $H_{1(3)}^{\pm\pm}$. The prompt decays of the (multi-)charged scalars at the LHC give rise to interesting multi-lepton final states with characteristic kinematic distributions. We have studied 4-lepton final states, and the important results are summarized in the following:
\begin{itemize}
\item In {\em Scenario I}({\em II}), $m_{\frac{3}{2}\left(\frac{5}{2}\right)}$ is excluded below 650(760) GeV at 95\% CL from the ATLAS search for 4-lepton signatures in Ref.~\cite{Aaboud:2018zeb}. With our proposed 4-lepton signal selection criteria optimized for this model, the expected reach of the LHC with 3000 fb$^{-1}$ is estimated to be 1250(1530) GeV for $m_{\frac{3}{2}\left(\frac{5}{2}\right)}$ in the context of {\em Scenario I}({\em II}).
\item The ATLAS search \cite{Aaboud:2017qph} for doubly-charged scalars using the observed invariant mass of same-sign lepton pairs at the 13 TeV LHC with 36.1 fb$^{-1}$ integrated luminosity data puts a lower limit of about 730 GeV on $m_{H_3}$ in {\em Scenario III}. The observed lower limits on $m_{H_{2(1)}}$ in {\em Scenario I}({\em II}) vary in the range 313--720 (1150--780) GeV as the values of $\mu$ and $f_k$ are varied from smaller to larger.
\item In the context of {\em Scenario I}, it is important to note the complementarity between the ATLAS searches in Ref.~\cite{Aaboud:2018zeb} and Ref.~\cite{Aaboud:2017qph}. For larger values of $\mu$ and $f_k$, a stronger bound on $m_{H_2}$ (and hence, $m_{\frac{3}{2}}$) results from the ATLAS search for a same-sign dilepton invariant mass peak in Ref.~\cite{Aaboud:2017qph}, whereas the 4-lepton search in Ref.~\cite{Aaboud:2018zeb} yields the most stringent bound on $m_{\frac{3}{2}}$ for smaller values of $\mu$ and $f_k$.
\item For {\em Scenario II}, however, the strongest bound on $m_{\frac{5}{2}}$ always arises from same-sign dilepton invariant mass peak search in Ref.~\cite{Aaboud:2017qph} regardless of the values of $\mu$ and $f_k$.
\end{itemize}
\section{Conclusion and Outlook}
\label{conclusion}
We studied the phenomenology of a model that generates neutrino masses at the 1-loop level. To realize the Weinberg operator at the 1-loop level, the model includes additional scalar doublets and a scalar singlet as well as fermion singlets in the framework of the SM gauge symmetry. Usually, loop-induced neutrino mass models require some additional symmetry to forbid tree-level seesaw contributions to the Weinberg operator. However, in our model, the additional fields and their gauge quantum numbers are chosen in such a way that the couplings which would give rise to the Weinberg operator at tree level are absent and hence, the tree-level contributions to the neutrino masses are forbidden without any additional symmetry. Apart from effortlessly explaining neutrino oscillation data with Yukawa couplings of the order of the SM charged lepton Yukawa couplings, the model can explain the discrepancy between the experimental measurement and the SM prediction of the muon magnetic moment $(g-2)_\mu$ and gives rise to interesting signatures at collider experiments.
In this work, we have studied the neutrino and collider phenomenology of this model. After fitting the neutrino masses and mixings, we studied the constraints resulting from the upper bound on the absolute neutrino mass scale. In this model, the absolute values of the neutrino masses depend on the Yukawa couplings, masses of heavy fermions/scalars and the mixings between the doubly-charged scalars {\em i.e.,} on the cubic ($\mu,~\mu^\prime$) and quartic ($\lambda$) terms in the scalar potential. We found that the lower region of the $\mu$--$\lambda$ plane is consistent with the upper bound on the absolute neutrino mass scale for Yukawa couplings of the order of $10^{-3}$--$10^{-4}$ and TeV scale masses of the newly introduced scalars and fermions. We studied the production, decay, and the resulting collider signatures of these TeV scale fermion/scalars in the context of the LHC experiment.
Being (multi-)charged, the heavy fermion and the scalars are pair-produced via Drell-Yan or photon-fusion processes at hadron colliders. We have shown that at the LHC with $\sqrt s~=~13$ TeV, the photoproduction, being enhanced by a factor of $Q^4$ where $Q$ is the electric charge of the final state fermion/scalars, contributes significantly (even dominantly in some cases) to the total pair-production cross-sections and hence, cannot be neglected. The associated productions of the TeV scale scalars have also been considered. The signatures of this fermion and these scalars at the LHC crucially depend on their subsequent decays. Depending on the total decay widths of these TeV scale particles, we have classified the collider signatures into two categories.\\
{\bf \em Prompt Decay Signatures:} If the particle decay width is large enough to ensure prompt decay at the LHC, we study multi-lepton (in particular, 4-lepton) signatures. Bounds on the masses of the new scalars and the fermion are derived from the ATLAS search for $4l$ final states~\cite{Aaboud:2018zeb} at the LHC with $\sqrt s~=~13$ TeV and 36.1 fb$^{-1}$ integrated luminosity. We found that a doubly-charged fermion mass below 870 GeV is excluded, whereas the bounds on the masses of the multi-charged scalars vary between 650--760 GeV depending on the hypercharges. The ATLAS $4l$ search strategy in Ref.~\cite{Aaboud:2018zeb} is designed to search for electroweakinos in the context of supersymmetric scenarios. We have proposed a new set of kinematic cuts to maximize the $4l$ signal-to-background ratio in the context of our model and shown that the reach of the LHC could be significantly improved with the proposed event selection criteria. The production and subsequent same-sign dileptonic decays of the doubly-charged scalars give rise to a characteristic same-sign dilepton invariant mass peak signature, which has already been studied by the ATLAS collaboration in Ref.~\cite{Aaboud:2017qph}. We recast the ATLAS results in Ref.~\cite{Aaboud:2017qph} in the context of our model and obtain bounds on the masses of the doubly-charged scalars. In the context of our model, we found complementarity between the ATLAS searches in Ref.~\cite{Aaboud:2018zeb} and \cite{Aaboud:2017qph}.\\
{\bf \em Highly Ionizing Charge Tracks:} The 2-body gauge decays of the lightest of the newly introduced TeV scale scalars/fermion are kinematically forbidden. The remaining Yukawa-induced 2-body decays or tree-level 3-body decays are suppressed by several factors, such as the Yukawa couplings, the mixings in the doubly-charged scalar sector, and phase space. Note that the upper bound on the absolute neutrino mass scale gives rise to stringent constraints on the Yukawa couplings and the mixings in the doubly-charged scalar sector ({\em i.e.,} the values of $\mu,~\mu^{\prime}$ and $\lambda$). As a consequence, the doubly-charged fermion or scalars become long-lived in a significant part of the allowed parameter space. A long-lived multi-charged scalar/fermion is highly ionizing, and thus leaves a very characteristic signature of abnormally large ionization at the LHC. Using the results from the ATLAS search \cite{Aaboud:2018kbe} for abnormally large ionization signatures, we obtain lower bounds of about 1150 GeV and 800 GeV on the masses of the long-lived doubly-charged fermion and scalars, respectively.
We mention in closing that we have obtained bounds on the parameter space of a collider-testable radiative seesaw model. Our collider analysis is limited to parton-level Monte Carlo and hence, cannot simulate ISR, FSR, and the subsequent hadronization of the final state quarks and gluons. As a result, we have only considered purely leptonic final states. In this work, we have discussed the collider signatures of the lightest non-SM scalars/fermion and derived the collider bounds on their masses from the LHC with $\sqrt s~=~13$ TeV and 36.1 fb$^{-1}$ integrated luminosity data. It is important to note that the next-to-lightest non-SM scalars/fermion dominantly decay into the lightest non-SM scalars/fermion in association with an SM gauge boson ($W/Z$-boson) or a Higgs boson. Therefore, the production and subsequent decays of the next-to-lightest non-SM scalars/fermion in the framework of this model give rise to interesting di-boson (Higgs, $W/Z$-boson) signatures in association with multiple leptons at the LHC. It is beyond the scope of this article to consider all those possible final states; however, we hope this article will pave the way for such phenomenological studies.
\subsubsection*{Acknowledgments}
We thank Dr. Aruna Kumar Nayak and Saiyad Ashanujjaman for their technical help. The simulations were performed on SAMKHYA, the high performance computing facility at IOP, Bhubaneswar. K.G. acknowledges the support from the DST/INSPIRE Research Grant [DST/INSPIRE/04/2014/002158] and SERB Core Research Grant [CRG/2019/006831].
\section*{Appendix}
In modern smart buildings, various continuous states such as the temperature and humidity, together with discrete states such as the air conditioning modes, make the system a hybrid system. In a hybrid system, the continuous state keeps flowing in a location (also called a mode or discrete state) until an event is triggered. Then it jumps to a target location and flows continuously again according to possibly different dynamics. While there is growing interest in finding ways to accurately determine localized building or room occupancy in real time, traditional methods seldom apply to the multiple dynamics of a hybrid system.
Presently, there are mainly two categories of approaches for occupancy detection or estimation (e.g., detecting or estimating the number of people in a room). The first category relies on learning-based techniques such as decision trees \cite{Hailemariam2011} or support vector regression \cite{Hua2016} to extract features of different occupancy states from data gathered from various sensors. The second category relies on mathematical models of the system, comparing available measurements with information analytically derived from the model. For hybrid systems, the main challenge of model-based occupancy detection is the difficulty in capturing the combined continuous and discrete measurements.
In this paper, we propose an approach that utilizes both the learning-based techniques and the model-based methods for hybrid system occupancy detection. We mainly focus on distinguishing between different occupancy states and observing the location of the modeled hybrid system at any time. For the learning aspect, there has been a growing interest in learning (inferring) dense-time temporal logic formulae from system trajectories \cite{Kong2017,zheletter2,zhe2016,Bombara2016,zhe2015,zhe_ijcai2019,Asarin2012,Yan2019swarm,zhe2019ACCinfo,Hoxha2017,zheCDC2019GTL,Jin13}. Such temporal logic formulae have been used as high-level knowledge or specifications in many applications in robotics \cite{Allerton2019,zheACC2019DF,zhe_advisory,MH2019IFAC}, power systems \cite{zheACCstorageControl,zhe2017cascade,zhe_control,zheACC2018wind}, smart buildings \cite{zheCDCprivacy,zhe2019privacy}, agriculture \cite{cubuktepe2020policy}, etc. We infer dense-time temporal logic formulae from the temperature and humidity sensor data as dense-time temporal logics can effectively capture the time-related features in the transient period when people enter a room. In the meantime, we also utilize the model information so that the MTL formula that classifies the finite trajectories we gathered also classifies the infinite trajectories that differ from the simulated trajectories by a small margin in both space and time. In our previous work in \cite{zheCDCprivacy}, we have performed classification for trajectories generated from switched systems, which have spatial uncertainties due to initial state variations. In this paper, we extend the results to hybrid systems and we classify time-robust tube segments around the trajectories so that the inferred MTL formula can classify different system behaviors when both the spatial and the temporal uncertainties exist due to initial state variations in a hybrid system.
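A concrete ingredient of such inference procedures is the (space) robustness degree of a candidate formula over a sampled signal, which quantifies how strongly the signal satisfies or violates the formula. For the simple fragments $\mathcal{G}_{[a,b]}(x<c)$ ("always") and $\mathcal{F}_{[a,b]}(x<c)$ ("eventually"), it reduces to a min/max of $c-x$ over the time window. The sketch below assumes uniformly sampled signals and is only illustrative; a full inference procedure additionally searches over formula structures and parameters.

```python
def robustness_G(signal, dt, a, b, c):
    """Space robustness of G_[a,b](x < c) on a uniformly sampled signal."""
    lo, hi = int(round(a / dt)), int(round(b / dt))
    return min(c - x for x in signal[lo:hi + 1])

def robustness_F(signal, dt, a, b, c):
    """Space robustness of F_[a,b](x < c) on a uniformly sampled signal."""
    lo, hi = int(round(a / dt)), int(round(b / dt))
    return max(c - x for x in signal[lo:hi + 1])

# Illustrative temperature samples (dt = 1) during a transient after
# people enter a room:
sig = [20.0, 21.0, 23.0, 24.0, 24.5]
r = robustness_G(sig, 1.0, 0.0, 4.0, 25.0)   # positive iff the formula holds
```

A candidate formula that yields positive robustness on one class of trajectories and negative robustness on the other acts as a classifier; the margin between the two classes is what the time-robust tube construction preserves under spatial and temporal uncertainties.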
To identify the current location, useful information usually originates from comprehensive discrete and continuous system outputs, yet both are of limited availability. For instance, lack of event observability makes it difficult to determine the subsequent locations, especially for those non-deterministic unobservable events that may take place anywhere without any output discrete signal. In that case, we can resort to the available continuous state measurements for the purpose of location identification. Relevant approaches include designing residual generation schemes~\cite{Bayoudh2008,Arogeti2010,Vento2012}, and analyzing the derivatives of the continuous outputs~\cite{Collins2004}.
The problem can also be addressed by system abstraction, such as abstracting away the continuous dynamics in exchange for temporal information of the discrete event evolution that helps to track locations~\cite{DiBenedetto2008,Zhao2005,DiBenedetto2011,Deng2015_Verification}.
Another aspect of hybrid state estimation is concerned with continuous state tracking for the underlying multiple modes of continuous dynamical systems. In \cite{Balluchi2002,Alessandri2001}, continuous observers are constructed based on the classical Luenberger's approach, and thus continuous state observability is required. In \cite{Tanwani2013}, the authors propose an observer design for switched systems that does not require the continuous systems to be observable. To that end, \cite{Tanwani2013} presents a characterization of observability over an interval. In \cite{YiCDC}, the authors of the present paper propose a framework for hybrid system state estimation from the perspective of bisimulation theory~\cite{Girard2007}. The framework is based on the robust test idea~\cite{Julius2007}, which extracts the spatial and temporal properties of infinitely many trajectories by a finite number of simulations.
Having inferred a temporal logic formula that classifies different system behaviors, we can further design an observer for determining the location of the hybrid system at any time. Our previous work \cite{YiCDC} results in a hybrid observer that estimates both the discrete and continuous states constantly. The observer only uses the discrete outputs generated by the hybrid system's observable events and their timing information as its input, and is thus referred to as the basic observer in this paper. Based on \cite{YiCDC}, we utilize the MTL formula inferred by the MTL classifier to refine the basic observer, and the resulting observer is referred to as the refined observer. We illustrate the idea with Figure \ref{fig_diagapproach}.
\begin{figure}[H]
\centering
\includegraphics[scale=0.3]{diagram2.pdf}
\caption{The MTL classifier infers an MTL formula that classifies the sensor data in different conditions (such as different room occupancy states). The refined observer utilizes the MTL formula inferred from the MTL classifier to shrink the basic observer's states.}
\label{fig_diagapproach}
\end{figure}
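To illustrate the refinement step in Figure \ref{fig_diagapproach}, the toy sketch below prunes observer state hypotheses whose predicted output signal violates the inferred formula. Here the formula is hard-coded as a simple invariance check standing in for a learned MTL formula; the names, signals, and threshold are illustrative assumptions, not the actual observer construction developed later in the paper.

```python
def satisfies_always_below(signal, threshold):
    """Toy stand-in for an inferred MTL formula G_[0,T](x < c)."""
    return all(x < threshold for x in signal)

def refine(candidates, predicted_signals, threshold):
    """Keep only observer states whose predicted output satisfies the formula."""
    return [q for q in candidates
            if satisfies_always_below(predicted_signals[q], threshold)]

# Hypothetical location hypotheses and their nominal temperature predictions:
candidates = ["empty", "occupied"]
predicted = {
    "empty":    [21.0, 21.2, 21.1],   # temperature stays low
    "occupied": [21.0, 22.5, 23.4],   # temperature rises with occupants
}
remaining = refine(candidates, predicted, 22.0)   # prunes "occupied"
```

The actual refined observer operates on the timed abstraction rather than on raw signals, but the principle is the same: hypotheses inconsistent with the inferred formula are removed from the observer's state estimate.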
\section{Preliminaries}
\label{sec_hybrid}
\subsection{Hybrid Automaton}
A hybrid autonomous system is defined to be a $5$-tuple $\mathcal{H}=(\mathcal{L}\times \mathcal{X}, \mathcal{L}^0\times\mathcal{X}^0,\mathcal{F},\mathcal{E},Inv)$~\cite{Alur1995}:
\begin{itemize}
\item $\mathcal{L}\times \mathcal{X}$ is the set of hybrid states $(\ell,x)$, where $\ell\in \mathcal{L}$ is the discrete state (location), and $x\in \mathcal{X}$ is the continuous state.
\item $\mathcal{L}^0\times \mathcal{X}^0\subset \mathcal{L}\times \mathcal{X}$ is the set of initial states.
\item $\mathcal{F}=\{f_{\ell}\vert \ell\in \mathcal{L}\}$ associates with each location $\ell\in \mathcal{L}$ the autonomous continuous time-invariant dynamics, $f_\ell: \dot{x}=f_\ell(x)$, which is assumed to admit a unique global solution $\xi_\ell(\tau,x^0_\ell)$, where $\xi_\ell$ satisfies $\frac{\partial\xi_\ell(\tau,x_\ell^0) }{\partial \tau}= f_\ell(\xi_\ell(\tau,x_\ell^0))$, and $\xi_\ell(0,x^0_\ell)=x_\ell^0$ is the initial condition in $\ell$.
\item $Inv:\mathcal{L}\rightarrow 2^\mathcal{X}$ associates an invariant set $Inv(\ell)\subset \mathcal{X}$ with each location. Only if the continuous state satisfies $x\in Inv(\ell)$, can the discrete state be at the location $\ell$.
\item $\mathcal{E}$ is a set of events. In each location $\ell$, the system state evolves continuously according to $f_\ell$ until an event $e = (\ell, \ell', g, r), e \in \mathcal{E}$ occurs. The event is guarded by $g \subset Inv(\ell)$. Namely, a necessary condition for the occurrence of $e$ is $x \in g$. After the event, the state is reset from $(\ell,x)$ to $(\ell',r(x))$, where $r(x)$ is the reset initial state of $x$.
\end{itemize}
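To make these semantics concrete, the following sketch forward-simulates a two-location, thermostat-style instance of this definition: the state flows under $f_\ell$ (forward Euler integration) until a guard is reached, after which an identity reset fires and the location switches. The dynamics, guards, and numbers are illustrative assumptions, not a system from this paper.

```python
def simulate(x0, loc0, T, dt=1e-3):
    """Forward-Euler simulation of a two-location thermostat-like automaton."""
    flows  = {"on":  lambda x: 0.5 * (30.0 - x),   # heating toward 30
              "off": lambda x: -0.3 * x}           # cooling toward 0
    guards = {"on":  lambda x: x >= 22.0,          # guard of the on -> off event
              "off": lambda x: x <= 18.0}          # guard of the off -> on event
    target = {"on": "off", "off": "on"}
    x, loc, t, events = x0, loc0, 0.0, []
    while t < T:
        if guards[loc](x):               # event fires; identity reset r(x) = x
            events.append((loc, target[loc], t))
            loc = target[loc]
        x += dt * flows[loc](x)          # continuous flow in the current location
        t += dt
    return x, loc, events

x, loc, events = simulate(20.0, "on", 10.0)   # oscillates between ~18 and ~22
```

Each entry of `events` corresponds to one tuple of a trajectory in the sense of Definition \ref{def_traj}, with the dwell times recoverable from the recorded event times.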
When a hybrid system runs, the system state alternately flows continuously and triggers events in $\mathcal{E}$. For convenience, we also define an initialization event $e^0\not\in \mathcal{E}$. Then a trajectory of the system can be defined as a sequence:
\begin{definition}[Trajectory]
\label{def_traj}
A trajectory of a hybrid system $\mathcal{H}$
is denoted as
\begin{equation}
\rho =\{(e^m,\ell^m,x^0_{\ell^m},\tau^m)\}_{m=0}^N,\nonumber
\end{equation}
where
\begin{itemize}
\item $\forall m\ge 0$, $(\ell^m,x^0_{\ell^m})\in \mathcal{L}\times \mathcal{X}$ are the (reset) initial states;
\item $\forall m\ge 0$, $\tau^m\in\mathbb{R}_{\ge 0}$ (nonnegative real), and $\forall\tau\in [0,\tau^m]$, $\xi_{\ell^m}(\tau,x^0_{\ell^m})\in Inv(\ell^m)$;
\item $\forall m\ge 1$, $e^m=(\ell^{m-1},\ell^m,g^m,r^m)$, $\xi_{\ell^{m-1}}(\tau^{m-1},$ $x^0_{\ell^{m-1}})\in g^m$,
$x^0_{\ell^m}=r^m(\xi_{\ell^{m-1}}(\tau^{m-1},x^0_{\ell^{m-1}}))$, i.e. $(\ell^m,x^0_{\ell^m})$ is the reset initial state for $(\ell^{m-1},\xi_{\ell^{m-1}}(\tau^{m-1},$ $x^0_{\ell^{m-1}}))$.
\end{itemize}
\end{definition}
Each event $e\in \mathcal{E}$ has an output symbol $\psi(e)$ that can be observable or unobservable. An unobservable output symbol $\psi(e)$ is specifically denoted as $\epsilon$.
\subsection{Robust Neighborhood Approach}
\label{sec_robusttest}
In this section, we briefly review the robust neighborhood approach \cite{Julius2007}, which is based on the approximated bisimulation theory \cite{Girard2007}.
The robust neighborhood approach \cite{Julius2007} computes a neighborhood around a simulated initial state, such that any trajectory initiated from the neighborhood triggers the same event sequence as the simulated trajectory, and its continuous state always stays inside a neighborhood around the continuous state of the simulated one.
\begin{definition}
\label{def_bisimfunc}
$\Phi_\ell: Inv(\ell) \times Inv(\ell) \rightarrow \mathbb{R}$ is an autobisimulation function for the dynamics of hybrid system $\mathcal{H}$ at location $\ell$, if for any $x_1,x_2\in Inv(\ell)$,
\[
\begin{split}
\Phi_\ell(x_1,x_2)&\ge0,\\
\frac{\partial{\Phi_\ell(x_1,x_2)}}{\partial{x_1}}f_\ell(x_1)&+\frac{\partial{\Phi_\ell(x_1,x_2)}}{\partial{x_2}}f_\ell(x_2)\le0.
\end{split}
\]
\label{bisim_def}
\end{definition}
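For linear location dynamics $\dot{x}=Ax$ with Hurwitz $A$, a quadratic function $\Phi_\ell(x_1,x_2)=(x_1-x_2)^T M (x_1-x_2)$ satisfies Definition \ref{def_bisimfunc} whenever $A^T M + MA \preceq 0$, and such an $M$ can be obtained from a continuous Lyapunov equation. A sketch follows; the matrices are illustrative, and SciPy's Lyapunov solver is assumed to be available.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[-1.0, 2.0],
              [0.0, -3.0]])   # illustrative Hurwitz matrix
Q = np.eye(2)

# Solve A^T M + M A = -Q; then Phi(x1, x2) = (x1 - x2)^T M (x1 - x2)
# is nonincreasing along any two trajectories of xdot = A x, so it is
# an autobisimulation function in the sense of the definition above.
M = solve_continuous_lyapunov(A.T, -Q)

def phi(x1, x2):
    d = np.asarray(x1, dtype=float) - np.asarray(x2, dtype=float)
    return float(d @ M @ d)
```

For nonlinear dynamics, sum-of-squares programming is a common alternative for certifying the two inequalities of the definition.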
From Definition \ref{bisim_def}, $\Phi_\ell$ can be used to bound the divergence of continuous state trajectories. If we define the level set
\begin{equation}
B_\ell(\gamma_\ell,\xi_\ell(\tau,x^0_\ell))\triangleq\{x\vert \Phi_\ell(x,\xi_\ell(\tau,x^0_\ell))<\gamma_\ell\},
\end{equation}
then we can conclude that the value of $\Phi_\ell$ is nonincreasing along any two trajectories of
the system at location $\ell$, i.e., for any initial state $\tilde{x}^0_\ell\in B_\ell(\gamma_\ell,x^0_\ell)$ and $\tau>0$,
$\xi_\ell(\tau,\tilde{x}^0_\ell)\in B_\ell(\gamma_\ell,\xi_\ell(\tau,x^0_\ell))$.
Let $e=(\ell,\ell',g,r)$ be an event triggered by a trajectory initiated from $x^0_\ell$. If we want all the trajectories initiated from within
$B_\ell(\gamma_\ell,x^0_\ell)$ to avoid triggering a different event $e'=(\ell,\ell'',g',r')$, then we can let
\[
\gamma_\ell \le\inf_{y \in g'}\inf_{\tau\in [0,\bar{\tau}]} \Phi_\ell(\xi_\ell(\tau,x^{0}_\ell),y),
\]
where $\bar{\tau}$ is an upper bound of the time for trajectories initiated from $B_\ell(\gamma_\ell,x^0_\ell)$ to transition out of $\ell$ (for details on methods for estimating $\bar{\tau}$, see \cite{Julius2007}).
Then for any $\tilde{x}^0_\ell\in B_\ell(\gamma_\ell, x^0_\ell)$ and $\tau\in [0,\bar{\tau}]$, we have that $\xi_\ell(\tau,\tilde{x}^{0}_\ell)$ cannot reach $g'$ and thus cannot trigger $e'$.
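In practice, the infimum bounding $\gamma_\ell$ is approximated by sampling the simulated trajectory over $[0,\bar{\tau}]$ and sampling the competing guard set $g'$. A one-dimensional sketch with a quadratic bisimulation function follows; all values are illustrative.

```python
def gamma_bound(traj_samples, guard_samples, phi):
    """Sampled approximation of inf over g' and [0, tau_bar] of Phi."""
    return min(phi(x, y) for x in traj_samples for y in guard_samples)

phi = lambda x, y: (x - y) ** 2              # 1-D quadratic bisimulation function
traj = [1.0 + 0.1 * k for k in range(11)]    # xi(tau, x0) sampled on [0, tau_bar]
guard = [3.0, 3.5, 4.0]                      # samples of the competing guard g'
gamma = gamma_bound(traj, guard, phi)        # balls of radius < gamma avoid g'
```

A sound implementation would additionally account for the sampling gaps, e.g. by shrinking the sampled infimum using Lipschitz bounds on $\Phi_\ell$ and $f_\ell$.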
Let $\rho=\{(e^m,\ell^m,x^0_{\ell^m},\tau^m)\}_{m=0}^N$ denote the simulated trajectory. We can compute robust neighborhoods $B_{\ell^m}(\gamma_{\ell^m},x^0_{\ell^m})$ around the (reset) initial continuous states $x^0_{\ell^m}$ of $\rho$ such that the property below holds.
\begin{proposition}
For any initial state $(\ell^0,\tilde{x}_{\ell^0}^0)\in\{\ell^0\}\times B_{\ell^0}(\gamma_{\ell^0},x_{\ell^0}^0)$ and any trajectory $\tilde{\rho}=\{(e^m,\ell^m,\tilde{x}^0_{\ell^m},\tilde{\tau}^m)\}_{m=0}^{N}$ that triggers the same event sequence with the simulated trajectory $\rho$, there exist $\tau_{lead}^m,\tau_{lag}^m>0~(0\le m\le N-1)$ such that
\begin{itemize}
\item for all $0\le m\le N-1$, $\tilde{x}^0_{\ell^m}\in B_{\ell^m}(\gamma_{\ell^m},x^0_{\ell^m})$, $\tilde{\tau}^m$ $\in [\tau^m-\tau_{lead}^m,\tau^m+\tau_{lag}^m]$, and $\Phi_{\ell^m}(\xi_{\ell^m}(t,x^0_{\ell^m}),$ $\xi_{\ell^m}(t,\tilde{x}^0_{\ell^m}))\le\gamma_{\ell^m}$ for all $t\in [0,\tilde{\tau}^m]$;
\item $\tilde{x}^0_{\ell^N}\in B_{\ell^N}(\gamma_{\ell^N},x^0_{\ell^N})$, and $\Phi_{\ell^N}(\xi_{\ell^N}(t,x^0_{\ell^N}),\xi_{\ell^N}$ $(t,\tilde{x}^0_{\ell^N}))\le\gamma_{\ell^N}$ for all $t\in [0,\min(\tilde{\tau}^N,\tau^N)]$.
\end{itemize}
\end{proposition}
We simulate trajectories from the initial set $\mathcal{L}^0\times \mathcal{X}^0$ and perform robust neighborhood computation. We denote $\rho_k=\{(e^m_k,\ell^m_k,x^0_{\ell^m_k},\tau^m_k)\}_{m=0}^{N_k}$ as the $k$th simulated trajectory ($k=1,2,\dots$). The robust neighborhood around the (reset) initial state for the segment $m$ of $\rho_k$ is the following:
\begin{eqnarray}
B_{\ell^m_k}(\gamma_{\ell^m_k},x^0_{\ell^m_k})
&=&\{x\vert \Phi_{\ell^m_k}(x^0_{\ell^m_k},x)<\gamma_{\ell^m_k}\},
\end{eqnarray}
where $\Phi_{\ell^m_k}$ is the bisimulation function in location $\ell^m_k$, and $\gamma_{\ell^m_k}$ is the radius of the computed robust neighborhood. The initial set can be covered by the robust neighborhoods around the initial states of the finitely simulated trajectories if:
\begin{equation}
\label{eq_coverinit}
\mathcal{L}^0\times\mathcal{X}^0\subset \bigcup\limits_{k} \{\ell_k^0\}\times B_{\ell^0_k}(\gamma_{\ell^0_k},x^0_{\ell^0_k}).
\end{equation}
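The covering condition (\ref{eq_coverinit}) can be checked approximately by gridding the initial set and testing membership of each grid point in at least one robust neighborhood. A one-dimensional sketch with a quadratic $\Phi$ and illustrative numbers:

```python
def covers(grid, balls):
    """True if every gridded initial state lies in some robust neighborhood.

    Each ball is a tuple (center, gamma, phi) with membership test
    phi(center, x) < gamma.
    """
    return all(any(phi(center, g) < gamma for (center, gamma, phi) in balls)
               for g in grid)

quad = lambda c, x: (c - x) ** 2
balls = [(0.0, 1.0, quad), (1.5, 1.0, quad)]   # two robust neighborhoods
grid = [k / 10.0 for k in range(21)]           # initial set [0, 2] gridded
ok = covers(grid, balls)
```

If some grid points remain uncovered, additional trajectories are simulated from those points and new robust neighborhoods are computed until the union covers the initial set.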
\section{Timed Abstraction and Observer Design}
\label{sec_time}
\subsection{Timed Abstraction}
\label{sec_abstraction}
Based on the simulated trajectories $\{\rho_k\}_{k=1}^{\hat{K}}$, we construct a timed automaton $T = (Q, Q^0, C, \tilde{E}, \tilde{Inv})$~\cite{Alur1994}.
\begin{definition}[Timed Abstraction]
$T$ consists of
\begin{itemize}
\item The state space is $Q:=\{(k,n)\vert k\in \{1,2,\ldots, \hat{K}\}, n\in \{0,1,\ldots, N_k\}\}$.
\item The initial set is $Q^0:=\{1,2,\ldots,K\}\times\{0\}$.
\item The set of clocks $C$ is a singleton $\{c\}$.
\item The events $\tilde{e}\in \tilde{E}$ are defined as $\tilde{e}=(q,q',\tilde{g},\tilde{r})$ such that $\tilde{r}(c)=0$, i.e., the only clock is reset after any event, and one of the following cases should be satisfied:
\begin{enumerate}
\item $q=(k,n)$, where $n<N_k$; $q'=(k,n+1)$; and $\tilde{g}=[\tau_k^n-lead_k^n,\tau_k^n+lag_k^n]$; $\tilde{e}$ is associated with the output symbol of $e_k^{n+1}$;
\item $q=(k,N_k)$, where $k\in [1,K]$; $q'=(k',0)$, where $k'\in Cover(k)$; and $\tilde{g}=[\tau_k^{N_k},\tau_k^{N_k}]$; $\tilde{e}$ is associated with the unobservable output symbol $\epsilon$;
\item $q=(k,N_k)$, where $k\in [K+1,\hat{K}]$; $q'=EoS$ (end of simulation); and $\tilde{g}=[\tau_k^{N_k},\tau_k^{N_k}]$; $\tilde{e}$ is associated with the unobservable output symbol $\epsilon$;
\item $q=(k,n)$; $q'=(k',0)$, where $k'\in Ind^f(k,n,e^f)$ for some $e^f\in Feas^f(k,n)$; and $\tilde{g}=[0,\tau_k^n+lag_k^n]$; $\tilde{e}$ is associated with the output symbol of $e^f$.
\end{enumerate}
\item The invariant set is $\tilde{Inv}(q):=[0,\tau_k^n+lag_k^n]$ if $n<N_k$, $\tilde{Inv}(q):=[0,\tau_k^n]$ if $n=N_k$, where $q = (k,n)\in Q$.
\end{itemize}
\end{definition}
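As a sketch of how the case-1 and case-2 edges are assembled from simulated-trajectory data, the snippet below builds guarded edges from the durations $\tau_k^n$, the timing windows $lead_k^n$, $lag_k^n$, and $Cover(k)$, all of which are assumed to be supplied by the robust neighborhood computation; the faulty-event edges (cases 3--4) are omitted. The data mirror the worked example that follows.

```python
def build_edges(durations, leads, lags, cover, outputs):
    """Case 1 and case 2 edges of the timed abstraction T.

    durations[k][n] = tau_k^n, leads/lags give the guard windows,
    cover[k] = Cover(k), and outputs[k][n] is the output symbol of e_k^{n+1}.
    """
    edges = []
    for k, taus in durations.items():
        N = len(taus) - 1
        for n in range(N):                       # case 1: (k, n) -> (k, n+1)
            guard = (taus[n] - leads[k][n], taus[n] + lags[k][n])
            edges.append(((k, n), (k, n + 1), guard, outputs[k][n]))
        for kp in cover.get(k, []):              # case 2: (k, N_k) -> (k', 0)
            edges.append(((k, N), (kp, 0), (taus[N], taus[N]), "eps"))
    return edges

# Data of the normal trajectory rho_1 from the example below:
edges = build_edges({1: [23, 6, 20]}, {1: [6, 1]}, {1: [6, 1]},
                    {1: [1]}, {1: ["eps", "alpha"]})
```

Every edge resets the single clock $c$; the guard intervals encode when, relative to the last reset, the corresponding event of the concrete system may fire.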
\begin{example}
Suppose that for a system $\mathcal{H}$, the following trajectories are simulated in the robust neighborhood approach:
There is one simulated normal trajectory, i.e., $K=1$. The robust neighborhood $Ball(1,0)$ covers the initial set of $\mathcal{H}$.
\[
\rho_1=(e^0_1,\ell^0_1,x^0_1,\tau^0_1),(e^1_1,\ell^1_1,x^1_1,\tau_1^1),(e^2_1,\ell^2_1,x^2_1,\tau^2_1),
\]
where $e^0_1=e^0$ (initialization event), $\tau^0_1=23$, $lead^0_1=lag^0_1=6$; $e^1_1$ is unobservable, $\tau^1_1=6$, $lead^1_1=lag^1_1=1$; $e^2_1$ has the observable output symbol $\alpha$, $\tau^2_1=20$.
The robust neighborhood $Ball(1,0)$ covers the end of robust tube around $\rho_1$, i.e., $Cover(1)=\{(1,0)\}$. An unobservable faulty event $e^f$ may occur in $\ell^1_1$.
There are two simulated faulty trajectories, i.e., $\hat{K}=3$.
\[
\rho_2=(e^0_2,\ell^0_2,x^0_2,\tau^0_2),\quad
\rho_3=(e^0_3,\ell^0_3,x^0_3,\tau^0_3),
\]
where $e^0_2=e^0_3=e^f\in Feas^f(\ell^1_1)$, $\tau_2^0=\tau_3^0=36$. The union of inverse images of the robust neighborhoods $Ball(2,0), Ball(3,0)$ covers the robust tube $Tube(1,1,[0,7])$, i.e., $Ind^f(1,1,e^f)=\{2,3\}$.
The timed abstraction $T$ is constructed as Fig. \ref{fig_ta}.
\end{example}
\begin{figure}
\centering
\includegraphics[scale=0.5]{fig_taf2.pdf}
\caption{There are one normal trajectory $\rho_1=\{(e^n_1,\ell^n_1,x^n_1,\tau^n_1)\}_{n=0}^2$ and two faulty trajectories $\rho_2 = (e^0_2,\ell^0_2,x^0_2,\tau^0_2)$, $\rho_3 = (e^0_3,\ell^0_3,x^0_3,\tau^0_3)$. The faulty event occurs in $\ell^1_1$ and is unobservable.}
\label{fig_ta}
\end{figure}
Since timed automata can be considered a subclass of hybrid automata, trajectories and projected timed output symbol sequences can be defined in the same way as before. By construction, for any normal trajectory $\rho$ of $H$, there is a trajectory $\tilde{\rho}$ of $T$ such that $\Pi(S(\rho))=\Pi(S(\tilde{\rho}))$. For faulty trajectories, a similar property holds over a finite horizon.
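To make the construction concrete, the timed abstraction of the example above can be written down as plain transition tables. This is a hypothetical sketch: the names \texttt{TRANSITIONS}, \texttt{INVARIANTS}, and \texttt{feas} are ours, and \texttt{"eps"} stands for the unobservable symbol $\epsilon$.

```python
# A minimal encoding of the timed abstraction T from the example.
# States are pairs (k, n); guards and outputs follow the definition of T,
# while the dictionary layout itself is our own choice.

EPS = "eps"  # the unobservable output symbol epsilon

# (source, target, guard [a, b], output symbol)
TRANSITIONS = [
    ((1, 0), (1, 1), (17, 29), EPS),     # e_1^1 unobservable, guard [23-6, 23+6]
    ((1, 1), (1, 2), (5, 7), "alpha"),   # e_1^2 outputs alpha, guard [6-1, 6+1]
    ((1, 2), (1, 0), (20, 20), EPS),     # end of rho_1, Cover(1) = {1}
    ((1, 1), (2, 0), (0, 7), EPS),       # faulty event e^f, guard [0, tau+lag]
    ((1, 1), (3, 0), (0, 7), EPS),
    ((2, 0), "EoS", (36, 36), EPS),      # faulty trajectories end in EoS
    ((3, 0), "EoS", (36, 36), EPS),
]

# Invariant sets: [0, tau + lag] if n < N_k, and [0, tau] if n = N_k.
INVARIANTS = {
    (1, 0): (0, 29), (1, 1): (0, 7), (1, 2): (0, 20),
    (2, 0): (0, 36), (3, 0): (0, 36),
}

def feas(q):
    """Feasible events of a state q, as in the function Feas."""
    return [t for t in TRANSITIONS if t[0] == q]
```

With this table, the observer computations of the next subsection can be replayed by hand.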
\subsection{Basic Observer}
\label{sec_observer}
Based on the timed abstraction $T$, we construct for $H$ an observer $O$. By using the history of system output, i.e., a projected timed output symbol sequence, $O$ over-approximates the set of states reached by $H$.
The following definitions are used in the construction of $O$. We illustrate them later with examples.
\begin{definition}
\label{def_closure}
Given $T = (Q, Q^0, C, \tilde{E}, \tilde{Inv})$, for each $(k,n)\in Q$, let $Feas: Q\rightarrow 2^{\tilde{E}}$ be the feasible event function:
\begin{equation}
Feas((k,n)):=\{\tilde{e}\in\tilde{E}\vert \tilde{e}=((k,n),(k',n'),\tilde{g},\tilde{r})\}.
\end{equation}
Let $\bar{a}$ be an integer. Define the $\epsilon[0]$-successors and $\epsilon[0]$-closure of $(k,n)[\bar{a},0]$:
\begin{eqnarray}
& & Succ^{\epsilon[0]}((k,n)[\bar{a},0])\nonumber\\
&:=&\{(k',n')[\bar{a}-\tilde{b},0] \vert \exists \tilde{e}=((k,n),(k',n'),[0,\tilde{b}],\tilde{r})\nonumber\\
& & \in Feas((k,n)), \tilde{e}\text{ outputs }\epsilon\}.\\
& & Cl^{\epsilon[0]}((k,n)[\bar{a},0])\nonumber\\
&:=&\{(k,n)[\bar{a},0]\}\cup\{Succ^{\epsilon[0]}((k,n)[\bar{a}',0])\vert\nonumber\\
& & (k,n)[\bar{a}',0]\in Cl^{\epsilon[0]}((k,n)[\bar{a},0])\}.
\end{eqnarray}
Given integers $\bar{a}, \bar{b}$, and $s$ being a set of $(k,n)[\bar{a},\bar{b}]$, define the $\epsilon[0]$-closure of $s$:
\begin{equation}
Cl^{\epsilon[0]}(s):=s\cup\{Cl^{\epsilon[0]}((k,n)[\bar{a},0])\vert (k,n)[\bar{a},0]\in s\}.
\end{equation}
\end{definition}
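Definition \ref{def_closure} can be read directly as a fixed-point computation. The sketch below uses our own encoding (a state $q[\bar a,0]$ is a pair \texttt{(q, abar)}, and \texttt{"eps"} is $\epsilon$); the transition list is the fragment of the running example that matters here.

```python
# Transitions of the running example relevant to the closure:
# from (1,1), the unobservable faulty event has guard [0, 7].
TRANSITIONS = [
    ((1, 1), (2, 0), (0, 7), "eps"),
    ((1, 1), (3, 0), (0, 7), "eps"),
]

def succ_eps0(state, transitions):
    """eps[0]-successors of q[abar, 0]: follow unobservable events whose
    guard has left end 0, shifting abar down by the guard's right end."""
    q, abar = state
    return [(dst, abar - gb)
            for (src, dst, (ga, gb), sym) in transitions
            if src == q and sym == "eps" and ga == 0]

def closure_eps0(state, transitions):
    """eps[0]-closure: the state together with all iterated
    eps[0]-successors (computed as a fixed point)."""
    seen, frontier = {state}, [state]
    while frontier:
        for s2 in succ_eps0(frontier.pop(), transitions):
            if s2 not in seen:
                seen.add(s2)
                frontier.append(s2)
    return seen
```

For instance, the closure of $(1,1)[-12,0]$ produces the two latent faulty states $(2,0)[-19,0]$ and $(3,0)[-19,0]$, as in the observer example later.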
\begin{figure}
\centering
\includegraphics[scale=0.25]{timer.pdf}
\caption{The state $(k,m)[\bar{a}, \bar{b}]$ of the basic observer and an unobservable event that occurs in the time interval $[\tilde{a}, \tilde{b}]$.}
\label{update}
\end{figure}
In what follows, we construct an observer that over-approximates the state reached by $H$. Each state of the observer can be represented by a subset of the set
\begin{eqnarray*}
\{(k,n)[\bar{a},\bar{b}] &\vert& k,n,\bar{a},\bar{b}\text{ are integers, }\\
& & 1\le k\le\hat{K}, 0\le n\le N_k,\\
& & [\bar{a},\bar{b}]\subset \tilde{Inv}((k,n))\}.
\end{eqnarray*}
When the state of the observer has just been updated to $s$, the system $H$ must be at some state within $\bigcup_{(k,n)[\bar{a},\bar{b}]\in s} Tube(k,n,[\bar{a},\bar{b}])$. For brevity, we say the state of $H$ is within $(k,n)[\bar{a},\bar{b}]$ rather than $Tube(k,n,[\bar{a},\bar{b}])$.
For convenience, given $T = (Q, Q^0, C, \tilde{E}, \tilde{Inv})$, recall the feasible event function $Feas: Q\rightarrow 2^{\tilde{E}}$,
\[
Feas(q):=\{\tilde{e}\in\tilde{E}\vert \tilde{e}=(q,q',\tilde{g},\tilde{r})\}
\]
for each $q\in Q$, and $Feas_{lab}(q)$ as the labels of the feasible events of $q$:
\begin{eqnarray*}
Feas_{lab}(q):=\{\psi[a,b] &\vert& \exists\tilde{e}=(q,q',\tilde{g},\tilde{r})\in Feas(q),\\
&& \tilde{e}\rightarrow\psi, \tilde{g}=[a,b]\},
\end{eqnarray*}
where $\rightarrow$ means ``outputs the symbol''.
Suppose the current state $s$ of the observer only contains $\bar{q}=(k,n)[\bar{a},\bar{b}]$, and $s$ is reached at the time instant $t$. The observer will update the next state based on what is observed after $t$, disregarding what already happened before or at $t$. Consider the feasible events $Feas(q)$ of $q=(k,n)$. The following facts are obvious for a feasible event $\tilde{e}=(q,q',\tilde{g},\tilde{r})\in Feas(q)$ that occurs. Let the label of $\tilde{e}$ be $\psi[a,b]\in Feas_{lab}(q)$.
\begin{itemize}
\item If $\psi=\epsilon, \bar{b}<a$, then $(a-\bar{b})$ time units later without observation of output symbols, the state of $H$ may be within $q'[0,0]$ or $q[\bar{a}+a-\bar{b},a]$.
More generally, given any $\tau\in [a-\bar{b},b-\bar{a}]$, $\tau$ time units later, the state of $H$ may be within $q'[0,\tau-(a-\bar{b})]\cap\tilde{Inv}(q')$ or $q[\bar{a}+\tau,\bar{b}+\tau]\cap\tilde{Inv}(q)$.
For computational convenience, we can equivalently let the state estimate at $\tau=a-\bar{b}$ time units later be $q'[-(b-\bar{a}-a+\bar{b}),0]\cup q[\bar{a}+a-\bar{b},a]$ rather than $q'[0,0]\cup q[\bar{a}+a-\bar{b},a]$. These negatively timed states are regarded as ``latent'' reset initial states. The state of $H$ cannot actually be at a negatively timed state, which should be kept in mind when interpreting the state read from the observer. Using the latent reset initial states lets us avoid separately handling the unobservable event $\tilde{e}$ triggered $\tau\in (a-\bar{b},b-\bar{a}]$ time units later.
\item If $\psi\in\Psi_v, \bar{a}<b$, then one should anticipate an observation of $\psi$ at $\tau$ time units later with $\tau\in [\max\{0,a-\bar{b}\},b-\bar{a}]\setminus\{0\}$; after the observation the state of $H$ must be within $q'[0,0]$.
\item Moreover, letting $\tilde{Inv}(q)=[\tilde{a},\tilde{b}]$, after $(\tilde{b}-\bar{a})$ time units the state of $H$ must be within $q[\tilde{b},\bar{b}+\tilde{b}-\bar{a}]$ or somewhere else through a transition; but it cannot be at $q[\tilde{b},\bar{b}+\tilde{b}-\bar{a}]$ because of the invariant set, so the latter case holds.
\end{itemize}
Given the current observer state $s\ni\bar{q}=(k,n)[\bar{a},\bar{b}]$ and one of the feasible events $\tilde{e}\in Feas(q)$ for $q=(k,n)$, we let the function $Blank(\bar{q},\tilde{e})$ or $Blank^{\tilde{Inv}}(\bar{q})$ (see the explanations below) return the shortest blank interval (i.e., an interval in which no output symbol is observed) that the observer needs to wait before updating its state in order to keep track of the location changes of $H$. Specifically, corresponding to the three cases listed above, let
\begin{itemize}
\item $Blank(\bar{q},\tilde{e})=a-\bar{b}$, if $\psi=\epsilon, \bar{b}<a$, since a new location might be visited by $H$ through $\tilde{e}$ and needs to be incorporated into the state of the observer;
\item $Blank(\bar{q},\tilde{e})=b-\bar{a}$, if $\psi\in\Psi_v, \bar{a}<b$, since up to a blank interval of length $(b-\bar{a})$, the observer is able to determine the absence of $\tilde{e}$.
\item $Blank^{\tilde{Inv}}(\bar{q})=\tilde{b}-\bar{a}$, where $\tilde{Inv}(q)=[\tilde{a},\tilde{b}]$, since the state of $H$ has already transitioned out of the location $\ell_k^n$, which requires an update of the observer state.
\end{itemize}
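The three cases of $Blank$ and $Blank^{\tilde{Inv}}$ can be sketched directly. In this hypothetical encoding (names ours), an entry $q[\bar a,\bar b]$ is a triple \texttt{(q, abar, bbar)} and \texttt{None} marks a case where the event imposes no waiting bound:

```python
EPS = "eps"  # the unobservable output symbol

def blank(entry, trans):
    """Blank(qbar, e): shortest blank interval after which the observer
    must update its state, for one feasible event of q."""
    q, abar, bbar = entry
    src, dst, (a, b), sym = trans
    if sym == EPS and bbar < a:        # unobservable event not yet enabled
        return a - bbar
    if sym != EPS and abar < b:        # observable event still anticipated
        return b - abar
    return None

def blank_inv(entry, invariants):
    """Blank^Inv(qbar): the invariant's right end forces an update."""
    q, abar, bbar = entry
    ia, ib = invariants[q]
    return ib - abar
```

The numbers match the worked example below: $Blank((1,0)[0,0],\tilde e)=17$ for the unobservable transition with guard $[17,29]$, and $Blank^{\tilde{Inv}}((1,0)[17,17])=12$.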
From the discussions above, the state of the observer can be updated from $s$ to $s'$ based on the observation of a blank interval $(t,t']$, where $t$ is the time instant of reaching $s$, and $t'$ is generated through $Blank(\bar{q},\tilde{e})$ or $Blank^{\tilde{Inv}}(\bar{q})$ and becomes the time instant of updating to $s'$. The observer can also update its state to $s''$ by observing $\psi\in\Psi_v$ at some $t''\in (t,t']$. Thus, counting from $t=0$, two types of transition labels should be modeled:
\begin{enumerate}
\item $\epsilon[a_1], a_1>0$, meaning that no symbol is observed during $(0,a_1]$;
\item $\psi\langle a_2,b], \psi\in\Psi_v,a_2<b$, meaning that $\psi$ is observed during $[a_2,b]$ (or $(a_2,b]$ if $a_2=0$).
\end{enumerate}
Besides these two typical transitions, there are additional types of state updates that are not tied to an interval $(t,t']$. Let $t,t',t'',\ldots$ be the time instants at which the observer updates its state. Clearly, each of $(t,t'], (t',t''],\ldots$ is either a blank interval or an interval with an observable event from $H$ at its right end.
However, such traces of the observer cannot model the case where multiple events accumulate at the same time instant, that is, an event occurring at a (reset) initial state. We therefore incorporate two more types of transitions, labeled by $\epsilon[0]$ and $\psi_1\cdots\psi_n\langle a,b]$, $\psi_i\in\Psi_v$.
\begin{definition}
For $q[\bar{a},0]$, if there exists $\tilde{e}\in Feas(q)$, $\tilde{e}=(q,q',[0,\tilde{b}],\tilde{r})\rightarrow\epsilon$ (meaning $\tilde{e}$ outputs $\epsilon$), then we define $\epsilon[0]$-successors of $q[\bar{a},0]$ as the set of all $q'[-(\tilde{b}-\bar{a}),0]$:
\begin{eqnarray}
Succ^{\epsilon[0]}(q[\bar{a},0]) &:=&\nonumber\\
\{q'[\bar{a}-\tilde{b},0] &\vert& \exists\tilde{e}=(q,q',[0,\tilde{b}],\tilde{r})\in Feas(q)\nonumber\\
& & \tilde{e}\rightarrow\epsilon\}.
\end{eqnarray}
\end{definition}
The $\epsilon[0]$-extension of $q[\bar{a},0]$, $Ext^{\epsilon[0]}(q[\bar{a},0])$, is defined as the union of $q[\bar{a},0]$, the $\epsilon[0]$-successors of $q[\bar{a},0]$, and, iteratively, the $\epsilon[0]$-successors of all those successors.
The transition $\epsilon[0]$ is actually ``unobservable'' for the observer, since the state transition of $H$ from within $q[\bar{a},0]$ to one of its $\epsilon[0]$-successors cannot be seen by the observer (the observer only sees observable output symbols and blank intervals), while it does cause the observer's state estimate of $H$ to change. We therefore identify $q[\bar{a},0]$ with $Ext^{\epsilon[0]}(q[\bar{a},0])$ when building an observer state.
Let $s$ be a set of $q[\bar{a},\bar{b}]$; we write
\begin{equation}
Ext^{\epsilon[0]}(s):=s\cup\{Ext^{\epsilon[0]}(q[\bar{a},0])\vert q[\bar{a},0]\in s\}.
\end{equation}
\begin{definition}
We define the $\psi[0,0]$-successors for $q[0,0]$:
\begin{eqnarray}
Succ^{\psi[0,0]}(q[0,0]) &:=&\nonumber\\
\{q'[0,0] &\vert& \exists \tilde{e}=(q,q',[0,\tilde{b}],\tilde{r})\in Feas(q)\nonumber\\
& & \tilde{e}\rightarrow\psi\in\Psi_v\cup\{\epsilon\}\}.
\end{eqnarray}
\end{definition}
Given $q^0[0,0]$, $q^n[0,0]$ is called an extended $\psi[0,0]$-successor of $q^0[0,0]$ if and only if there exist $q^1[0,0],q^2[0,0],\ldots,q^{n-1}[0,0]$ such that
\begin{equation}
q^i[0,0]\in Succ^{\psi[0,0]}(q^{i-1}[0,0]), i\in\{1,\ldots,n\}.
\end{equation}
Let $\tilde{e}^i\rightarrow\psi^i$ be the event that leads to $q^i$; the concatenation $\psi^1\cdots\psi^n$ is projected to $Proj(\psi^1\cdots\psi^n)$ by erasing all $\psi^i=\epsilon$. If $Proj(\psi^1\cdots\psi^n)\not\in\Psi_v$, then it is considered a new observable output symbol.
We write $q^0[0,0]\xrightarrow{\psi_{proj}}q^n[0,0]$ to mean that $q^n[0,0]$ is an extended $\psi[0,0]$-successor of $q^0[0,0]$, and the concatenation of output symbols is projected to $Proj(\psi^1\ldots\psi^n)=\psi_{proj}$. The set of extended $\psi[0,0]$-successors of $q[0,0]$ is denoted by $Succ^{\psi[0,0]}_{ext}(q[0,0])$.
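The projection $Proj$ simply erases the unobservable symbols from a concatenation. A one-line sketch, with \texttt{"eps"} standing for $\epsilon$ and strings for the $\psi^i$:

```python
def proj(symbols):
    """Proj(psi^1 ... psi^n): erase every eps; what remains is the
    observable part of the concatenated output symbols."""
    return "".join(s for s in symbols if s != "eps")
```

An empty result corresponds to $Proj(\psi^1\cdots\psi^n)=\epsilon$, in which case no new observable symbol is created.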
\begin{definition}[Basic Observer]
We construct a basic observer $O=(S,s^0,\bar{\Sigma},f)$ by the following steps, where $S,s^0,\bar{\Sigma},f$ are respectively the state space, initial state, transition labels, and transition function:
\begin{enumerate}
\item Define $s^0:=Ext^{\epsilon[0]}(\{(1,0)[0,0],\ldots,(K,0)[0,0]\})$. Set $S=\{s^0\}$.
\item For each new state $s\in S$, compute
\begin{eqnarray*}
& & Blank_{min}(s)\\
&:=&
\min\{
\min\limits_{\bar{q}\in s, \tilde{e}\in Feas(q)}Blank(\bar{q},\tilde{e}), \min\limits_{\bar{q}\in s}Blank^{\tilde{Inv}}(\bar{q})
\},
\end{eqnarray*}
where $\bar{q}=q[\bar{a},\bar{b}], q=(k,n)$.
Add the transition label $\epsilon[a], a:=Blank_{min}(s)>0$ to $Feas^{obs}_{lab}(s)$.
Define
\begin{eqnarray*}
f'(s,\epsilon[a]) &:=& \{q'[\bar{a}+a-\tilde{b},0]\\
&\vert & q[\bar{a},\bar{b}]\in s,\\
& & \tilde{e}=(q,q',[\tilde{a},\tilde{b}],\tilde{r})\in Feas(q),\\
& & \tilde{e}\rightarrow\epsilon, \bar{b}<\tilde{a}=\bar{b}+a\}.\\
f(s,\epsilon[a]) &:=& \{q[[\bar{a}+a,\bar{b}+a]\cap\tilde{Inv}(q)]\\
&\vert& q[\bar{a},\bar{b}]\in s,\\
& & (\bar{a}+a,\bar{b}+a]\cap\tilde{Inv}(q)\neq\emptyset\}\\
&\cup& Ext^{\epsilon[0]}(f'(s,\epsilon[a]))
\end{eqnarray*}
Add a transition label $Proj(\psi^1\cdots\psi^n)[a,a]$, $a:=Blank_{min}(s)>0$, into $Feas_{lab}^{obs}(s)$, as long as there exist $q'[\bar{a}',0]\in f'(s,\epsilon[a])$ and $q''[0,0]\in Succ^{\psi[0,0]}_{ext}(q'[0,0])$ through the concatenated output symbols $\psi^1\cdots\psi^n$, with $Proj(\psi^1\cdots\psi^n)\neq\epsilon$.
\begin{eqnarray*}
f(s, \psi_{proj}[a,a]) &:=& Ext^{\epsilon[0]}(\{q''[0,0]\\
&\vert& q'[\bar{a}',0]\in f'(s,\epsilon[a]),\\
& & q'[0,0]\xrightarrow{\psi_{proj}}q''[0,0]\}).
\end{eqnarray*}
\item Check if there exist $\bar{q}=q[\bar{a},\bar{b}]\in s, q=(k,n)$, $\tilde{e}=(q,q',[\tilde{a},\tilde{b}],\tilde{r})\in Feas(q), \tilde{e}\rightarrow\psi\in\Psi_v$, such that the anticipated interval for the observation of $\psi$, $[\max\{0,\tilde{a}-\bar{b}\},\tilde{b}-\bar{a}]\setminus\{0\}$, satisfies
\begin{equation*}
([\max\{0,\tilde{a}-\bar{b}\},\tilde{b}-\bar{a}]\setminus\{0\})\cap (0,Blank_{min}(s)]\neq\emptyset.
\end{equation*}
If so, define $\bar{\sigma}':= \psi\langle\max\{0,\tilde{a}-\bar{b}\}, Blank_{min}(s)]$ for each $\tilde{e}$ that meets the above conditions ($\langle a,b]$ stands for $(a,b]$ if $a=0$, $[a,b]$ if $a>0$).
Classify the obtained $\bar{\sigma}'$ according to distinct $\psi$. For each classification $[\bar{\sigma}']_{\psi}=\{\psi\langle a_1,b],\psi\langle a_2,b],\ldots\}$, where $b= Blank_{min}(s)$, order the distinct $a_i$ values increasingly and let the result be $a_{(1)}<\ldots<a_{(m)}$.
Then add to $Feas^{obs}_{lab}(s)$ the transition labels $\{\psi\langle a_{(1)},a_{(2)}), \ldots, \psi\langle a_{(m-1)},a_{(m)}), \psi\langle a_{(m)},b]\}$.
When $m=1$, there is only the single label $\psi\langle a_{(1)},b]$.
The labels of the form $\psi\langle a,b)$ are a variant of the previously mentioned type $\psi\langle a,b]$. We use them to partition a long interval into shorter intervals, which makes the basic observer a deterministic automaton.
For $\bar{\sigma}=\psi\langle a,b]$ or $\psi\langle a,b)$, $\psi\in\Psi_v$, define
\begin{eqnarray*}
f'(s,\bar{\sigma}) &:=&\{q'[0,0]\\
&\vert& q[\bar{a},\bar{b}]\in s, \tilde{e}=(q,q',[\tilde{a},\tilde{b}],\tilde{r})\\
& & \tilde{e}\in Feas(q), \tilde{e}\rightarrow\psi,\\
& & \tilde{b}-\bar{a}\ge b,\tilde{a}-\bar{b}\le a\}.\\
f(s,\bar{\sigma}) &:=& Ext^{\epsilon[0]}(f'(s,\bar{\sigma})).
\end{eqnarray*}
Add into $Feas_{lab}^{obs}(s)$ the label $\psi Proj(\psi^1\cdots\psi^n)\langle a,b]$ (or $\psi Proj(\psi^1\cdots\psi^n)\langle a,b)$) as long as there exist $\bar{\sigma}=\psi\langle a,b]$ (or $\psi\langle a,b)$), $q'[0,0]\in f'(s,\bar{\sigma})$, and $q''[0,0]\in Succ^{\psi[0,0]}_{ext}(q'[0,0])$ through the concatenated output symbols $\psi^1\cdots\psi^n$ with $Proj(\psi^1\cdots\psi^n)\neq\epsilon$.
\begin{eqnarray*}
f(s, \psi\psi_{proj}\langle a,b]) &:=& Ext^{\epsilon[0]}(\{q''[0,0]\\
&\vert& q'[0,0]\in f'(s,\psi\langle a,b]),\\
& & q'[0,0]\xrightarrow{\psi_{proj}}q''[0,0]\});
\end{eqnarray*}
similarly for $\psi\psi_{proj}\langle a,b)$.
\item Include the transition labels $Feas_{lab}^{obs}(s)$ in $\bar{\Sigma}$.
\item If $s':=f(s,\bar{\sigma})\not\in S$, add the new state $s'$ to $S$.
\item Repeat Steps 2-5 until no new states are created.
\end{enumerate}
\end{definition}
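Step 2 of the construction (the update under a blank interval $\epsilon[a]$) can be sketched as follows. This is our own simplified reading: it shifts every entry, intersects with the invariant, and adds the latent reset states produced by $f'$, but it omits the $Ext^{\epsilon[0]}$ closure and glosses over the boundary subtleties at the invariant's right end.

```python
EPS = "eps"

def update_eps(s, a, transitions, invariants):
    """Sketch of f(s, eps[a]): shift each q[abar, bbar] by a and keep it
    if it still meets Inv(q); add a latent state q'[abar + a - gb, 0]
    for each unobservable event whose guard left end equals bbar + a
    (the f' part of the construction).  Boundary handling is a guess."""
    new = set()
    for (q, abar, bbar) in s:
        ia, ib = invariants[q]
        lo, hi = abar + a, bbar + a
        if lo < ib:                            # shifted entry survives Inv(q)
            new.add((q, lo, min(hi, ib)))
        for (src, dst, (ga, gb), sym) in transitions:
            if src == q and sym == EPS and bbar < ga == bbar + a:
                new.add((dst, abar + a - gb, 0))   # latent reset state (f')
    return new

# The first update of the running example: s0 = {(1,0)[0,0]}, a = 17.
TRANSITIONS = [((1, 0), (1, 1), (17, 29), EPS)]
INVARIANTS = {(1, 0): (0, 29), (1, 1): (0, 7)}
s1 = update_eps({((1, 0), 0, 0)}, 17, TRANSITIONS, INVARIANTS)
```

Applying the $Ext^{\epsilon[0]}$ closure to the latent state $(1,1)[-12,0]$ would then add $(2,0)[-19,0]$ and $(3,0)[-19,0]$, yielding the $s^1$ of the example below.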
Consider the previous example, whose timed abstraction is shown in Fig. \ref{fig_ta}. The basic observer is built by the steps below.
\begin{enumerate}
\item $s^0:=\{(1,0)[0,0]\}$.
\item $Blank_{min}(s^0)=17$, $Feas_{lab}^{obs}(s^0)=\{\epsilon[17]\}$,\\
$f'(s^0,\epsilon[17])=\{(1,1)[-12,0]\}$,\\
$f(s^0,\epsilon[17])=\{(1,0)[17,17]\}\cup Ext^{\epsilon[0]}(f'(s^0,\epsilon[17]))$, \\
$s^1:= \{(1,0)[17,17],(1,1)[-12,0],\\ (2,0)[-19,0],(3,0)[-19,0]\}$.
\item $Blank_{min}(s^1)=\min\{12,19,55,55\}=12$,\\
$Feas_{lab}^{obs}(s^1)=\{\epsilon[12],\alpha[5,12]\}$,\\
$f(s^1,\epsilon[12])=\{(1,1)[0,7],(2,0)[-7,0],(3,0)[-7,0]\}$.\\
$s^2:=\{(1,1)[0,7],(2,0)[-7,0],(3,0)[-7,0]\}$.\\
$f(s^1,\alpha[5,12])=\{(1,2)[0,0]\}$.\\
$s^3:=\{(1,2)[0,0]\}$.
\item $Blank_{min}(s^2)=\min\{7,43,43\}=7$,\\
$Feas_{lab}^{obs}(s^2)=\{\epsilon[7],\alpha(0,7]\}$,\\
$f(s^2,\epsilon[7])=\{(2,0)[0,7],(3,0)[0,7]\}$.\\
$s^4:=\{(2,0)[0,7],(3,0)[0,7]\}$.\\
$f(s^2,\alpha(0,7])=\{(1,2)[0,0]\}=s^3$.
\item $Blank_{min}(s^4)=\min\{29,29\}=29$,\\
$Feas_{lab}^{obs}(s^4)=\{\epsilon[29]\}$,\\
$f(s^4,\epsilon[29])=\{(3,0)[0,36],(2,0)[3,36]\}$.\\
$s^5:=\{(3,0)[0,36],(2,0)[3,36]\}$.
\end{enumerate}
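Step 3 of the construction partitions an anticipated observation window into deterministic labels when several left endpoints $a_{(i)}$ coexist (a case not exercised in the small example above). A sketch of that partition, with our own interval encoding; following the $\langle a,b]$ convention, \texttt{'<'} is rendered \texttt{'('} when $a=0$ and \texttt{'['} otherwise:

```python
def partition_labels(psi, a_values, b):
    """Partition the anticipated observation window for symbol psi:
    given left endpoints a_(1) < ... < a_(m) (after deduplication)
    and the common right end b = Blank_min(s), produce the labels
    psi<a(1), a(2)), ..., psi<a(m-1), a(m)), psi<a(m), b]."""
    a_sorted = sorted(set(a_values))
    labels = []
    for i, a in enumerate(a_sorted):
        last = (i + 1 == len(a_sorted))
        right = b if last else a_sorted[i + 1]
        labels.append((psi, "(" if a == 0 else "[", a, right, "]" if last else ")"))
    return labels
```

With two left endpoints $0$ and $5$ and $Blank_{min}(s)=12$, this yields the two labels $\alpha(0,5)$ and $\alpha[5,12]$, which partition the window disjointly.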
\begin{figure}
\centering
\includegraphics[scale=0.36]{fig_obs.pdf}
\caption{Observer constructed for the timed abstraction in Fig. \ref{fig_ta}.}
\label{fig_obs}
\end{figure}
The basic observer for the example is shown in Fig. \ref{fig_obs}. It is constructed as a deterministic finite automaton driven by an external timer and the output symbols observed from $H$.
In summary, given a trajectory simulated from an initial state, the possible discrete state (current location) can be estimated at any time by an observer, for any trajectory initiated from a neighborhood around the simulated initial state. There are two notions of time: one is the \textbf{external time}, which can be read from an external timer and is reset to zero every time the constructed observer updates its state; the other is the \textbf{clock time}, which is associated with each trajectory and is reset to zero every time the trajectory enters a new location. \textbf{In this paper, we use $t$ to denote the external time and $\tau$ to denote the clock time.} As different trajectories may reach the guards or leave an invariant set at different times, the clock time associated with each trajectory has temporal uncertainties. Since the clock time is reset to zero when the trajectory enters a new location, the clock time is also associated with each location the trajectory enters. It can be seen that $\tau$ in $\xi_{\ell^m_k}(\tau,x^0_{\ell^m_k})$ is the clock time associated with location $\ell^m_k$ (the location corresponding to the $m$th segment of the $k$th trajectory $\rho_k$). We denote by $s$ the observer state at the current time. At the external time $t$, we write $(k,m)[\bar{a},\bar{b}]\in s$ if $\ell^m_k$ is possible as the current location and the clock time $\tau$ in location $\ell^m_k$ has temporal uncertainty $\tau\in[t+\bar{a},t+\bar{b}]$. 
For example, suppose the observer states are $s^1=\{(1,0)[17,17],(1,1)[-12,0],(2,0)[-19,0],(3,0)[-19,0]\}$ and $s^2=\{(1,1)[0,7],(2,0)[-7,12],(3,0)[-7,12]\}$, and the observer state update is $s^1\xrightarrow[]{\epsilon[12]}s^2$~(here $\epsilon[12]$ means no event is observed for 12 time units). Then at external time $t\in[0,12)$, the state could be in location $\ell^0_1$, $\ell^1_1$, $\ell^0_2$ or $\ell^0_3$; the clock time $\tau^0_1$ for location $\ell^0_1$ has no temporal uncertainty, $\tau^0_1=t+17$; the clock time $\tau^1_1$ for location $\ell^1_1$ has temporal uncertainty $\tau^1_1\in[t-12,t]$; etc. The external time $t$ is reset to $0$ when $s^1$ is updated to $s^2$, and at the new external time $t$ the state could be in location $\ell^1_1$, $\ell^0_2$ or $\ell^0_3$; the clock time $\tau^1_1$ for location $\ell^1_1$ has temporal uncertainty $\tau^1_1\in[t,t+7]$; the clock time $\tau^0_2$ for location $\ell^0_2$ has temporal uncertainty $\tau^0_2\in[t-7,t+12]$; etc. \textbf{To summarize, after the basic observer is constructed, the external time $t$ has no temporal uncertainties, while the clock time $\tau$ has temporal uncertainties}.
Note that when $\bar{a}$ in $(k,m)[\bar{a},\bar{b}]$ is negative, it actually represents ``latent'' states that are currently in other locations. For example, $(2,0)[-7,12]$ means that at the external time $t$ ($t<7$), the hybrid system state may have already been at location $\ell^0_2$ for $\tau$ time units ($\tau$ is the positive clock time in location $\ell^0_2$, $\tau\in[0,t]$), but may also still
be at location $\ell^1_1$ and enter location $\ell^0_2$ after $(-\tau)$ time units ($\tau$ is the ``virtual'' negative clock time in location $\ell^0_2$, $\tau\in[t-7,0]$). To account for the negative times, we allow $\tau$ in the notation $\xi_{\ell^m}(\tau,x^0_{\ell^m})$ to be negative, representing the ``virtual'' negative clock time when the state is not in location $\ell^m$ at the current time but will enter location $\ell^m$ after time $(-\tau)$.
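The reading of an entry $(k,m)[\bar a,\bar b]$ at external time $t$ thus splits into an actual part (clock time $\ge 0$, really in $\ell^m_k$) and a latent part (negative clock time). A small helper, ours, makes this split explicit:

```python
def clock_interval(entry, t):
    """Split an observer entry (k, m)[abar, bbar] at external time t
    into an 'actual' clock-time interval (>= 0: already in location
    l_k^m) and a 'latent' interval (< 0: will enter l_k^m later).
    Either part may be None when it is empty."""
    (k, m), abar, bbar = entry
    lo, hi = t + abar, t + bbar
    actual = (max(lo, 0), hi) if hi >= 0 else None
    latent = (lo, min(hi, 0)) if lo < 0 else None
    return actual, latent
```

For the entry $(2,0)[-7,12]$ at $t=0$, the actual part is $[0,12]$ and the latent part is $[-7,0]$, exactly as discussed above.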
\begin{definition}
The time-robust tube segment at external time $t$ corresponding to location $\ell^m_k$ and an interval $[t+\bar{a},t+\bar{b}]$, denoted $R_{tube}(k,m,[t+\bar{a},t+\bar{b}])$, is defined as
\begin{align}\nonumber
\begin{split}
& R_{tube}(k,m,[t+\bar{a},t+\bar{b}])=\{\big(\tau,\hat{\xi}_{\ell^m_k}(\tau,\tilde{x}^0_{\ell^m_k})\big)~\vert~\tau\in[t+\bar{a},\\
&t+\bar{b}],~\xi_{\ell^m_k}(\tau,\tilde{x}^0_{\ell^m_k})\in B_{\ell^m_k}(\gamma_{\ell^m_k},\xi_{\ell^m_k}(\tau,x^0_{\ell^m_k}))~\textrm{if}~\tau\ge 0\}.
\end{split}
\end{align}
\label{tube_def}
\end{definition}
As $B_{\ell^m_k}(\gamma_{\ell^m_k},\xi_{\ell^m_k}(\tau,x^0_{\ell^m_k}))$ is obtained through the robust neighborhood approach, the time-robust tube segment can also be expressed as
\begin{eqnarray}\nonumber
& & R_{tube}(k,m,[t+\bar{a},t+\bar{b}]):=\{\big(\tau,\hat{\xi}_{\ell^m_k}(\tau,\tilde{x}^0_{\ell^m_k})\big)~\vert~\\\nonumber& &\tau\in[t+\bar{a},t+\bar{b}],\tilde{x}^0_{\ell^m_k}\in B_{\ell^m_k}(\gamma_{\ell^m_k},x^0_{\ell^m_k})\}.
\label{rtube}
\end{eqnarray}
\begin{proposition}
Let $H$ be a hybrid automaton, and $O$ be the constructed basic observer. Given that the current state of $O$ is $s$, and the external time is $t$, then the clock time and the state of $H$ should be in $\{(\tau_k^m,\ell_k^m,x)\vert (\tau_k^m,x)\in R_{tube}(k,m,[t+\bar{a}, t+\bar{b}]), (k,m)[\bar{a},\bar{b}]\in s\}$.
\begin{proof}
This follows directly from the construction of $O$.
\end{proof}
\end{proposition}
\section{Robust Temporal Logic Inference for Classification with Spatial and Temporal Uncertainties}
\label{sec_stl}
In this section, we present the robust temporal logic inference framework for classification that accounts for both spatial and temporal uncertainties. We first review the metric temporal logic (MTL) \cite{Donze2010}.
The continuous state of the system we study is described by a set of $n$ variables that can be
written as a vector $x = (x_1, x_2, \dots, x_n)$.
The domain of $x$ is denoted by $\mathcal{X}$. A set $AP=\{\mu_1,\mu_2,\dots,\mu_q\}$ is a set of atomic propositions, each mapping $\mathcal{X}$ to $\mathbb{B}$. The
syntax of MTL is defined recursively as follows:
\[
\phi:=\top\mid \mu \mid\neg\phi\mid\phi_{1}\land\phi_{2}\mid\phi_{1}\lor
\phi_{2}\mid\phi_{1}\mathcal{U}_{I}\phi_{2}%
\]
where $\top$ stands for the Boolean constant True,
$\neg$ (negation), $\land$ (conjunction), and $\lor$ (disjunction)
are standard Boolean connectives, $\mathcal{U}$ is a temporal operator
representing ``until'', and $I$ is an interval of the form $I=(i_{1},i_{2}),(i_{1},i_{2}],[i_{1},i_{2})$ or $[i_{1},i_{2}]$, $i_1, i_2\ge 0$. We
can also derive two useful temporal operators from
``until'' ($\mathcal{U}$): ``eventually'' $\Diamond_I\phi=\top\mathcal{U}_I\phi$ and
``always'' $\Box_I\phi=\neg\Diamond_I\neg\phi$.
For a set $S\subseteq \mathcal{X}$, we define the signed distance from $x$ to $S$ as
\begin{equation}
\textbf{Dist$_d(x,S)\triangleq$}
\begin{cases}
-\textrm{inf}\{d(x, y)\vert y\in cl(S)\},& \mbox{if $x$ $\not\in S$},\\
\textrm{inf}\{d(x, y)\vert y\in \mathcal{X}\setminus S\}, & \mbox{if $x$ $\in S$}.
\end{cases}
\label{sign}
\end{equation}
where $d$ is a metric on $\mathcal{X}$ and $cl(S)$ denotes the closure of the set $S$. In this paper, we use the metric $d(x,y)=\norm{x-y}$, where $\left\Vert\cdot\right\Vert $ denotes the 2-norm.
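For the one-dimensional case $S=[lo,hi]\subset\mathbb{R}$ with $d(x,y)=|x-y|$, the signed distance of Eq. (\ref{sign}) reduces to a few lines; the sketch below is for intuition only (function name ours):

```python
def dist_to_interval(x, lo, hi):
    """Signed distance Dist_d(x, S) for S = [lo, hi] on the real line:
    the depth inside S (distance to the complement) when x is in S,
    and minus the distance to S when x is outside."""
    if lo <= x <= hi:
        return min(x - lo, hi - x)
    return -min(abs(x - lo), abs(x - hi))
```

Positive values thus mean robust membership in $S$, negative values robust violation, and the boundary gives zero.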
The robustness degree of an MTL formula $\phi$ with respect to a trajectory $\xi_\ell(\tau,x^{0}_\ell)$
at time $\tau$ is denoted as $r(\xi_\ell(\tau,x^{0}_\ell),\phi)$:
\[
\begin{split}
r(\xi_\ell(\tau,x^{0}_\ell),\top):=&\infty,\\
r(\xi_\ell(\tau,x^{0}_\ell),\mu)
:=& \textbf{Dist$_d(\xi_\ell(\tau,x^{0}_\ell),\mathcal{O}(\mu))$},\\
r(\xi_\ell(\tau,x^{0}_\ell),\neg\phi):=&-r(\xi_\ell(\tau,x^{0}_\ell),\phi),\\
r(\xi_\ell(\tau,x^{0}_\ell),\phi_1\land\phi_2)
:=& \min\{r(\xi_\ell(\tau,x^{0}_\ell),\phi_1), r(\xi_\ell(\tau,x^{0}_\ell),\phi_2)\},\\
r(\xi_\ell(\tau,x^{0}_\ell),\phi_1\mathcal{U}_{I}\phi_2)
:=& \max\limits_{\tau'\in \tau+ I} \min\{r(\xi_\ell(\tau',x^{0}_\ell),\phi_2),\\
&\min\limits_{\tau''\in [\tau,\tau')}r(\xi_\ell(\tau'',x^{0}_\ell),\phi_1)\}.
\end{split}
\]
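For a finite, discrete-time signal this recursion specializes to simple max/min computations. The sketch below implements the $(F,G)$-fragment only (the until case follows the same pattern); the tuple encoding of formulas is our own, and \texttt{("mu",)} stands for an atomic proposition whose signed distance at step $t$ is taken to be the sample value itself.

```python
def robustness(signal, phi):
    """Robustness of an (F,G)-fragment formula over a discrete-time
    signal signal[t]; formulas are nested tuples such as
    ("G", (0, 2), ("mu",)).  Returns a function of the start step t."""
    op = phi[0]
    if op == "mu":
        return lambda t: signal[t]
    if op == "not":
        sub = robustness(signal, phi[1])
        return lambda t: -sub(t)
    if op == "and":
        left, right = robustness(signal, phi[1]), robustness(signal, phi[2])
        return lambda t: min(left(t), right(t))
    if op == "F":  # eventually: max over the shifted interval
        (a, b), sub = phi[1], robustness(signal, phi[2])
        return lambda t: max(sub(t2) for t2 in range(t + a, t + b + 1))
    if op == "G":  # always: min over the shifted interval
        (a, b), sub = phi[1], robustness(signal, phi[2])
        return lambda t: min(sub(t2) for t2 in range(t + a, t + b + 1))
    raise ValueError("unknown operator: %s" % op)
```

The max/min structure mirrors the quantitative semantics above: $F$ takes the best robustness in the window, $G$ the worst.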
We denote all the functions (trajectories) mapping from $\mathbb{T}=\mathbb{R}$ to $\mathbb{X}$ as $\mathbb{X}^{\mathbb{T}}$ and denote all the functions (trajectories) mapping from $\mathbb{T}_H$ to $\mathbb{X}$ as $\mathbb{X}^{\mathbb{T}_H}$, where $\mathbb{T}_H\triangleq[0,H]$.
\begin{definition}
Given a set-valued mapping $\Pi_H:\mathbb{X}^{\mathbb{T}_H}\rightarrow\mathbb{X}^{\mathbb{T}}$, $\hat{\xi}_\ell(\tau,x^{0}_\ell)$ is defined as an extended trajectory of a trajectory segment $\xi_\ell(\tau,x^{0}_\ell)$, denoted $\hat{\xi}_\ell(\cdot,x^{0}_\ell)=\Pi_H(\xi_\ell(\cdot,x^{0}_\ell))$, if
\begin{equation}
\hat{\xi}_\ell(\tau,x^{0}_\ell)=\xi_\ell(\tau,x^{0}_\ell), \forall\tau\ge0.
\end{equation}
\label{projection}
\end{definition}
Next, we introduce the Boolean semantics of an MTL suffix in the strong and the weak view, modified from the literature on temporal logic model checking and monitoring \cite{KupfermanVardi2001}. In the following, $\hat{\xi}_{\ell^m}(\tau,x^0_{\ell^m})\models_{S}\phi$ (resp. $\hat{\xi}_{\ell^m}(\tau,x^0_{\ell^m})\models_{W}\phi$)
means the extended trajectory $\hat{\xi}_{\ell^m}(\tau,x^0_{\ell^m})$ strongly (resp. weakly) satisfies $\phi$ at time $\tau$, and $\hat{\xi}_{\ell^m}(\tau,x^0_{\ell^m})\not\models_{S}\phi$ (resp. $\hat{\xi}_{\ell^m}(\tau,x^0_{\ell^m})\not\models_{W}\phi$)
means the extended trajectory $\hat{\xi}_{\ell^m}(\tau,x^0_{\ell^m})$ fails to strongly (resp. weakly) satisfy $\phi$ at time $\tau$.
\begin{definition}
The Boolean semantics of the (F,G)-fragment MTL for the extended trajectories in the strong view is defined recursively as follows:
\[
\begin{split}
\hat{\xi}_{\ell^m}(\tau,x^0_{\ell^m})\models_{S}\mu\quad\mbox{iff}\quad& \tau\ge 0~\mbox{and}~\\&\textbf{Dist$_d(\xi_{\ell^m}(\tau,x^0_{\ell^m}),\mathcal{O}(\mu))>0$},\\
\hat{\xi}_{\ell^m}(\tau,x^0_{\ell^m})\models_{S}\lnot\phi\quad\mbox{iff}\quad & \hat{\xi}_{\ell^m}(\tau,x^0_{\ell^m})\not\models_{W}\phi,\\
\hat{\xi}_{\ell^m}(\tau,x^0_{\ell^m})\models_{S}\phi_{1}\wedge\phi_{2}\quad\mbox{iff}\quad & \hat{\xi}_{\ell^m}(\tau,x^0_{\ell^m})\models_{S}\phi
_{1}~\mbox{and}~\\&\hat{\xi}_{\ell^m}(\tau,x^0_{\ell^m})\models_{S}\phi_{2},\\
\hat{\xi}_{\ell^m}(\tau,x^0_{\ell^m})\models_{S}\phi_{1}\vee\phi_{2}\quad\mbox{iff}\quad & \hat{\xi}_{\ell^m}(\tau,x^0_{\ell^m})\models_{S}\phi
_{1}~\mbox{or}~\\&\hat{\xi}_{\ell^m}(\tau,x^0_{\ell^m})\models_{S}\phi_{2},\\
\hat{\xi}_{\ell^m}(\tau,x^0_{\ell^m})\models_{S}F_{[\tau_1,\tau_2)}\phi\quad\mbox{iff}\quad & \exists
\tau^{\prime}\in[\tau+\tau_1,\tau+\tau_2),\\
& s.t.~\hat{\xi}_{\ell^m}(\tau',x^{0}_{\ell^m})\models_{S}\phi,\\
\hat{\xi}_{\ell^m}(\tau,x^0_{\ell^m})\models_{S}G_{[\tau_1,\tau_2)}\phi\quad\mbox{iff}\quad & \hat{\xi}_{\ell^m}(\tau',x^{0}_{\ell^m})\models_{S}\phi, \\&\forall
\tau^{\prime}\in[\tau+\tau_1, \tau+\tau_2).
\end{split}
\]
\label{strong}
\end{definition}
\begin{definition}
The Boolean semantics of the (F,G)-fragment MTL for the extended trajectories in the weak view is defined recursively as follows:
\[
\begin{split}
\hat{\xi}_{\ell^m}(\tau,x^0_{\ell^m})\models_{W}\mu\quad\mbox{iff}\quad& \textrm{either of the following holds}:\\
& 1)~\tau\ge0~\mbox{and}~\\
&\textbf{Dist$_d(\xi_{\ell^m}(\tau,x^0_{\ell^m}),\mathcal{O}(\mu))>0$};\\
& 2)~\tau<0,\\
\hat{\xi}_{\ell^m}(\tau,x^0_{\ell^m})\models_{W}\lnot\phi\quad\mbox{iff}\quad & \hat{\xi}_{\ell^m}(\tau,x^0_{\ell^m})\not\models_{S}\phi,\\
\hat{\xi}_{\ell^m}(\tau,x^0_{\ell^m})\models_{W}\phi_{1}\wedge\phi_{2}\quad\mbox{iff}\quad & \hat{\xi}_{\ell^m}(\tau,x^0_{\ell^m})\models_{W}\phi
_{1}~\mbox{and}\\&~\hat{\xi}_{\ell^m}(\tau,x^0_{\ell^m})\models_{W}\phi_{2},\\
\hat{\xi}_{\ell^m}(\tau,x^0_{\ell^m})\models_{W}\phi_{1}\vee\phi_{2}\quad\mbox{iff}\quad & \hat{\xi}_{\ell^m}(\tau,x^0_{\ell^m})\models_{W}\phi
_{1}~\mbox{or}\\&~\hat{\xi}_{\ell^m}(\tau,x^0_{\ell^m})\models_{W}\phi_{2},\\
\hat{\xi}_{\ell^m}(\tau,x^0_{\ell^m})\models_{W}F_{[\tau_1,\tau_2)}\phi\quad\mbox{iff}\quad & \exists
\tau^{\prime}\in[\tau+\tau_1,\tau+\tau_2), \\
& s.t.~\hat{\xi}_{\ell^m}(\tau',x^{0}_{\ell^m})\models_{W}\phi,\\
\hat{\xi}_{\ell^m}(\tau,x^0_{\ell^m})\models_{W}G_{[\tau_1,\tau_2)}\phi\quad\mbox{iff}\quad & \hat{\xi}_{\ell^m}(\tau',x^{0}_{\ell^m})\models_{W}\phi,\\& \forall
\tau^{\prime}\in[\tau+\tau_1, \tau+\tau_2).
\end{split}
\]
\label{weak}
\end{definition}
\begin{figure}[th]
\centering
\includegraphics[width=8cm]{view3.pdf}\caption{Venn diagram of strong (weak) satisfaction and strong (weak) violation.}
\label{view}
\end{figure}
We denote $\hat{r}_S(\hat{\xi}_{\ell^m}(\tau,x^0_{\ell^m}),\phi)$ and $\hat{r}_W(\hat{\xi}_{\ell^m}(\tau,x^0_{\ell^m}),\phi)$ as the extended robustness degrees of an extended trajectory $\hat{\xi}_{\ell^m}(\tau,x^0_{\ell^m})$ with respect to an MTL formula $\phi$, evaluated at the external time corresponding to the clock time $\tau$ of the trajectory ($\tau$ can be positive or negative), in the strong and the weak view, respectively. $\hat{r}_S(\hat{\xi}_{\ell^m}(\tau,x^0_{\ell^m}),\phi)$ can be calculated recursively via the following extended quantitative semantics:
\begin{align}
\begin{split}
& \hat{r}_S(\hat{\xi}_{\ell^m}(\tau,x^0_{\ell^m}),\mu) =
\begin{cases}
r(\xi_{\ell^m}(\tau,x^0_{\ell^m}),\mu),~~\mbox{if $\tau\ge0$},\\
-\infty, ~~\mbox{if $\tau<0$},
\end{cases} \\
&\hat{r}_S(\hat{\xi}_{\ell^m}(\tau,x^0_{\ell^m}),\lnot\phi) =-\hat{r}_W(\hat{\xi}_{\ell^m}(\tau,x^0_{\ell^m}),\phi),\\
&\hat{r}_S(\hat{\xi}_{\ell^m}(\tau,x^0_{\ell^m}),\phi_{1}\wedge\phi_{2}) = \min(\hat{r}_S(\hat{\xi}_{\ell^m}(\tau,x^0_{\ell^m}),\phi_{1}),\\&~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ \hat{r}_S(\hat{\xi}_{\ell^m}(\tau,x^0_{\ell^m}),\phi_{2})),\\
&\hat{r}_S(\hat{\xi}_{\ell^m}(\tau,x^{0}_{\ell^m}),F_{I}\phi)
:= \max\limits_{\tau'\in (\tau+I)} \hat{r}_S(\hat{\xi}_{\ell^m}(\tau',x^{0}_{\ell^m}),\phi),\\
&\hat{r}_S(\hat{\xi}_{\ell^m}(\tau,x^{0}_{\ell^m}),G_{I}\phi)
:= \min\limits_{\tau'\in (\tau+I)} \hat{r}_S(\hat{\xi}_{\ell^m}(\tau',x^{0}_{\ell^m}),\phi).
\end{split}
\label{semantics}
\end{align}
$\hat{r}_W(\hat{\xi}_{\ell^m}(\tau,x^0_{\ell^m}),\phi)$ can be calculated recursively via the following extended quantitative semantics:
\begin{align}
\begin{split}
& \hat{r}_W(\hat{\xi}_{\ell^m}(\tau,x^0_{\ell^m}),\mu) =
\begin{cases}
r(\xi_{\ell^m}(\tau,x^0_{\ell^m}),\mu),~~\mbox{if $\tau\ge0$},\\
\infty, ~~\mbox{if $\tau<0$},
\end{cases} \\
&\hat{r}_W(\hat{\xi}_{\ell^m}(\tau,x^0_{\ell^m}),\lnot\phi) =-\hat{r}_S(\hat{\xi}_{\ell^m}(\tau,x^0_{\ell^m}),\phi),\\
&\hat{r}_W(\hat{\xi}_{\ell^m}(\tau,x^0_{\ell^m}),\phi_{1}\wedge\phi_{2}) = \min(\hat{r}_W(\hat{\xi}_{\ell^m}(\tau,x^0_{\ell^m}),\phi_{1}),\\&~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ \hat{r}_W(\hat{\xi}_{\ell^m}(\tau,x^0_{\ell^m}),\phi_{2})),\\
&\hat{r}_W(\hat{\xi}_{\ell^m}(\tau,x^{0}_{\ell^m}),F_{I}\phi)
:= \max\limits_{\tau'\in (\tau+I)} \hat{r}_W(\hat{\xi}_{\ell^m}(\tau',x^{0}_{\ell^m}),\phi),\\
&\hat{r}_W(\hat{\xi}_{\ell^m}(\tau,x^{0}_{\ell^m}),G_{I}\phi)
:= \min\limits_{\tau'\in (\tau+I)} \hat{r}_W(\hat{\xi}_{\ell^m}(\tau',x^{0}_{\ell^m}),\phi).
\end{split}
\label{semantics2}
\end{align}
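The only place where the strong and the weak extended semantics differ is the atomic case with negative clock time: the strong view resolves the unknown pessimistically to $-\infty$, the weak view optimistically to $+\infty$, and negation swaps the two views. A minimal sketch (names ours):

```python
INF = float("inf")

def r_hat_mu(tau, dist, view):
    """Extended robustness of an atomic proposition mu: dist(tau) is the
    ordinary signed distance r(xi(tau), mu); for a latent state (tau < 0)
    the strong view returns -inf and the weak view returns +inf."""
    if tau < 0:
        return -INF if view == "S" else INF
    return dist(tau)

def r_hat_not(tau, dist, view, r_hat_phi):
    """Negation swaps the views: r^S(not phi) = -r^W(phi), and
    r^W(not phi) = -r^S(phi)."""
    other = "W" if view == "S" else "S"
    return -r_hat_phi(tau, dist, other)
```

This duality is what makes strong satisfaction imply weak satisfaction once the trajectory actually enters the location ($\tau\ge 0$), where both views coincide with the ordinary robustness.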
\begin{definition}
Given a labeled set of extended trajectories $\{(\hat{\xi}_{\ell^{m_i}_{k_i}}(\tau_i,x^{0}_{\ell^{m_i}_{k_i}}),c_i)\}^{N}_{i=1}$ from a hybrid system $\mathcal{H}$, where $c_i=1$ represents desired behavior and $c_i=-1$ represents undesired behavior, an MTL formula $\phi$ evaluated at external time $t$ (corresponding to clock time $\tau_i$ for the $i$th trajectory; each $\tau_i$ can be positive or negative) perfectly classifies the desired behaviors (trajectory segments) and undesired behaviors (trajectory segments) if the following condition is satisfied:\\
$\hat{\xi}_{\ell^{m_i}_{k_i}}(\tau_i,x^{0}_{\ell^{m_i}_{k_i}})\models_{W}\phi$, if $c_i=1$; $\hat{\xi}_{\ell^{m_i}_{k_i}}(\tau_i,x^{0}_{\ell^{m_i}_{k_i}})\models_{W}\lnot\phi$, if $c_i=-1$.
\label{perfect0}
\end{definition}
\begin{problem}
Given a labeled set of time-robust tube segments $\tilde{S}=\{(R_{tube}(k_i,m_i,[t+\bar{a}_i,t+\bar{b}_i]),c_i)\}^{N}_{i=1}$ from a hybrid system $\mathcal{H}$, find an MTL formula $\phi$ such that $\phi$ evaluated at external time $t$ (corresponding to different clock times for different trajectory segments) perfectly classifies the desired behaviors (trajectory segments) and undesired behaviors (trajectory segments) in $\tilde{S}$, i.e. for any $\big(\tau_i,\hat{\xi}_{\ell^{m_i}_{k_i}}(\tau_i,\tilde{x}^0_{\ell^{m_i}_{k_i}})\big)\in R_{tube}(k_i,m_i,[t+\bar{a}_i,t+\bar{b}_i])$, $\hat{r}_W(\hat{\xi}_{\ell^{m_i}_{k_i}}(\tau_i,\tilde{x}^0_{\ell^{m_i}_{k_i}}),\phi)>0$, if $c_i=1$; $\hat{r}_W(\hat{\xi}_{\ell^{m_i}_{k_i}}(\tau_i,\tilde{x}^0_{\ell^{m_i}_{k_i}}),\lnot\phi)>0$, if $c_i=-1$.
\label{Problem}
\end{problem}
If for each location $\ell$ the continuous dynamics are affine and stable, then there exists a quadratic autobisimulation function $\Phi_\ell(\xi_\ell(\tau,x^{0}_\ell), \xi_\ell(\tau,x)) = [\big(\xi_\ell(\tau,x^0_\ell)-\xi_\ell(\tau,x)\big)^TM_\ell\big(\xi_\ell(\tau,x^{0}_\ell)-\xi_\ell(\tau,x)\big)]^{\frac{1}{2}}$, where $M_\ell$ is positive definite. To solve Problem \ref{Problem}, we first give the following three propositions:
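For a concrete $M_\ell$, one standard route is to solve the Lyapunov equation $A_\ell^{T}M_\ell+M_\ell A_\ell=-Q$ for some $Q\succ0$. The sketch below (vectorization approach and function name are our own; in practice a dedicated tool such as the STRONG toolbox used in Section \ref{sec_implementation} computes these) illustrates the idea:

```python
import numpy as np

def lyapunov_M(A, Q=None):
    """Solve A^T M + M A = -Q (Q > 0) by vectorization; for a stable affine
    location this M defines the quadratic autobisimulation function
    Phi(x, y) = ((x - y)^T M (x - y))^(1/2)."""
    n = A.shape[0]
    Q = np.eye(n) if Q is None else Q
    # C-order flattening: vec(A^T M + M A) = (kron(A.T, I) + kron(I, A.T)) vec(M)
    K = np.kron(A.T, np.eye(n)) + np.kron(np.eye(n), A.T)
    M = np.linalg.solve(K, -Q.reshape(-1)).reshape(n, n)
    return 0.5 * (M + M.T)  # symmetrize against round-off
```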
\begin{proposition}
\label{prop_space}
For any MTL formula $\phi$ and $\gamma_{\ell}>0$, if $\Phi_\ell(\xi_{\ell}(\tau,\tilde{x}^0_\ell),\xi_{\ell}(\tau,x^0_\ell))=[\big(\xi_\ell(\tau,x^0_\ell)-\xi_\ell(\tau,\tilde{x}^0_\ell)\big)^TM_\ell$ $\big(\xi_\ell(\tau,x^{0}_\ell)-\xi_\ell(\tau,\tilde{x}^0_\ell)\big)]^{\frac{1}{2}}<\gamma_{\ell}$ for any $\tau\ge0$, then for any $\tau$ (here $\tau$ can be positive or negative), we have
\begin{align}
\begin{split}
&\hat{r}_S(\hat{\xi}_{\ell}(\tau,x^0_\ell),\phi)-\hat{\gamma}_{\ell}\le \hat{r}_S(\hat{\xi}_{\ell}(\tau,\tilde{x}^0_\ell),\phi)\\&\le \hat{r}_S(\hat{\xi}_{\ell}(\tau,x^0_\ell),\phi)+\hat{\gamma}_{\ell},\\
&\hat{r}_W(\hat{\xi}_{\ell}(\tau,x^0_\ell),\phi)-\hat{\gamma}_{\ell}\le \hat{r}_W(\hat{\xi}_{\ell}(\tau,\tilde{x}^0_\ell),\phi)\\&\le \hat{r}_W(\hat{\xi}_{\ell}(\tau,x^0_\ell),\phi)+\hat{\gamma}_{\ell},
\end{split}
\end{align}
where $\hat{\gamma}_{\ell}=\gamma_{\ell}\norm{M_\ell^{-\frac{1}{2}}}$.
\end{proposition}
\begin{proof}
See Appendix.
\end{proof}
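Numerically, the bound in Proposition \ref{prop_space} is easy to evaluate: for symmetric positive definite $M_\ell$, $\norm{M_\ell^{-\frac{1}{2}}}=1/\sqrt{\lambda_{\min}(M_\ell)}$, so (function name is our own):

```python
import numpy as np

def gamma_hat(M, gamma):
    """Euclidean radius implied by the level set Phi < gamma:
    gamma_hat = gamma * ||M^{-1/2}||_2 = gamma / sqrt(lambda_min(M))."""
    return gamma / np.sqrt(np.linalg.eigvalsh(M).min())
```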
\begin{proposition}
For any MTL formula $\phi$ that only contains one variable $x_j$ ($j=1,2,\dots,n$) and any $\gamma_{\ell}>0$, if $\Phi_\ell(\xi_{\ell}(\tau,\tilde{x}^0_\ell),\xi_{\ell}(\tau,x^0_\ell))=[\big(\xi_\ell(\tau,x^0_\ell)-\xi_\ell(\tau,\tilde{x}^0_\ell)\big)^TM_\ell$ $\big(\xi_\ell(\tau,x^{0}_\ell)-\xi_\ell(\tau,\tilde{x}^0_\ell)\big)]^{\frac{1}{2}}<\gamma_{\ell}$ for any $\tau\ge0$, and if there exists $z_{\ell,j}>0$ such that $z_{\ell,j}^2e_{j}e_{j}^{T} \preceq M_{\ell}$, where $e_j$ is the $j$th canonical unit vector, then for any $\tau$ (here $\tau$ can be positive or negative), we have
\begin{align}
\begin{split}
&\hat{r}_S(\hat{\xi}_{\ell}(\tau,x^0_\ell),\phi)-\tilde{\gamma}_{\ell,j}\le \hat{r}_S(\hat{\xi}_{\ell}(\tau,\tilde{x}^0_\ell),\phi)\le\\
& \hat{r}_S(\hat{\xi}_{\ell}(\tau,x^0_\ell),\phi)+\tilde{\gamma}_{\ell,j},\\
&\hat{r}_W(\hat{\xi}_{\ell}(\tau,x^0_\ell),\phi)-\tilde{\gamma}_{\ell,j}\le \hat{r}_W(\hat{\xi}_{\ell}(\tau,\tilde{x}^0_\ell),\phi)\le\\
& \hat{r}_W(\hat{\xi}_{\ell}(\tau,x^0_\ell),\phi)+\tilde{\gamma}_{\ell,j},
\end{split}
\end{align}
where $\tilde{\gamma}_{\ell,j}=\gamma_{\ell}/z_{\ell,j}$.
\label{th1}
\end{proposition}
\begin{proof}
See Appendix.
\end{proof}
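The largest admissible $z_{\ell,j}$ also has a closed form: for $M_\ell\succ0$ and $e_j$ the $j$th canonical unit vector, $M_\ell-z^2e_je_j^{T}\succeq0$ holds iff $z^2\le1/(e_j^{T}M_\ell^{-1}e_j)$ (a standard rank-one Schur complement argument; the function name below is ours):

```python
import numpy as np

def z_max(M, j):
    """Largest z with z^2 * e_j e_j^T <= M, for symmetric positive definite M,
    via the rank-one condition z^2 <= 1 / (e_j^T M^{-1} e_j)."""
    return 1.0 / np.sqrt(np.linalg.inv(M)[j, j])
```

The tightest single-variable bound is then $\tilde{\gamma}^{\ast}_{\ell,j}=\gamma_\ell/z^{\ast}_{\ell,j}$ with $z^{\ast}_{\ell,j}=$ `z_max(M, j)`.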
\begin{remark}
Proposition \ref{th1} provides a possibly tighter bound $\tilde{\gamma}_{\ell,j}$ than $\hat{\gamma}_{\ell}$ when the MTL formula $\phi$ only contains one variable $x_j$. This applies to the smart building occupancy detection case in Section \ref{sec_implementation}, where using fewer sensors (their number corresponds to the number of variables contained in $\phi$) is preferred.
\end{remark}
\begin{proposition}
Given the settings of Problem \ref{Problem}, an MTL formula $\phi$ evaluated at external time $t$ perfectly classifies the desired behaviors and undesired behaviors in $\tilde{S}$ if $MG(k_i,m_i,\bar{a}_i,\bar{b}_i,\phi,c_i)>0$ for each $i$, where $MG(\cdot)$ is a margin function (note that it already incorporates the negation of $\phi$ when $c_i=-1$) defined as follows:
\begin{align}
\begin{split}
MG(k,m,\bar{a},\bar{b},\phi,1)&=\min\limits_{\tau\in t+ [\bar{a},\bar{b}]}\hat{r}_W(\hat{\xi}_{\ell_k^m}(\tau,x^0_{\ell^{m}_{k}}),\phi)-\hat{\gamma}_{\ell^m_k},\\
MG(k,m,\bar{a},\bar{b},\phi,-1)&=\min\limits_{\tau\in t+ [\bar{a},\bar{b}]}\hat{r}_W(\hat{\xi}_{\ell_k^m}(\tau,x^0_{\ell^{m}_{k}}),\lnot\phi)-\hat{\gamma}_{\ell^m_k},
\end{split}
\label{MG}
\end{align}
where $\hat{\gamma}_{\ell_k^m}=\gamma_{\ell_k^m}\norm{M_{\ell_k^m}^{-\frac{1}{2}}}$.
\label{sol_th}
\end{proposition}
\begin{proof}
See Appendix.
\end{proof}
According to Proposition \ref{sol_th}, we can solve Problem \ref{Problem} by minimizing the following cost function:
\begin{align}
\begin{split}
\label{eq_cost}
J(\tilde{S}, \phi)=& \sum_{c_i=1}G(k_i,m_i,\bar{a}_i,\bar{b}_i,\phi,c_i)+\\& \sum_{c_i=-1}G(k_i,m_i,\bar{a}_i,\bar{b}_i,\phi,c_i),
\end{split}
\end{align}
where $G(\cdot)$ is defined as follows:
\begin{eqnarray}\nonumber
G(k,m,\bar{a},\bar{b},\phi,c)&=&
\begin{cases}
0, \text{ if } MG(k,m,\bar{a},\bar{b},\phi,c)>0,\\
\zeta, \text{ otherwise},
\end{cases}
\label{eq_cost1}
\end{eqnarray}
where the margin function $MG(\cdot)$ is defined in (\ref{MG}), $\zeta$ is a positive constant, and the external time $t$ is typically set to $0$. When the MTL formula $\phi$ only contains one variable $x_j$, $\hat{\gamma}_{\ell}$ can be replaced by $\tilde{\gamma}_{\ell,j}$ in (\ref{eq_cost1}).
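Given precomputed weak-robustness traces of the nominal trajectories, the cost (\ref{eq_cost})--(\ref{eq_cost1}) reduces to a few lines (the trace-based representation and names are our illustrative choices):

```python
def margin(rw_trace, gamma_hat):
    """MG over a tube segment: worst-case weak robustness of the nominal
    trajectory over the clock-time window, minus the tube radius gamma_hat."""
    return min(rw_trace) - gamma_hat

def G(mg, zeta=1.0):
    # zero cost only when the margin is strictly positive
    return 0.0 if mg > 0 else zeta

def J(segments, zeta=1.0):
    """segments: list of (rw_trace, gamma_hat) pairs, with each trace already
    computed for phi (label +1) or for its negation (label -1).
    J = 0 iff the formula perfectly classifies every segment."""
    return sum(G(margin(tr, gh), zeta) for tr, gh in segments)
```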
The core of the classification process is a non-convex optimization problem for finding the structure and the parameters that describe the MTL formula $\phi$, which can be solved through Particle Swarm Optimization~\cite{Kennedy1995}. The search starts from a basis of candidate formulae of the form $\Box_{\lbrack \tau_{1},\tau_{2}]}\pi$ or $\Diamond_{\lbrack \tau_{1},\tau_{2}]}\pi$ and adds Boolean connectives until a satisfactory formula is found.
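A minimal particle-swarm loop of the kind used for this search is sketched below (the inertia and acceleration constants, box parameterization, and the name `pso` are our own choices; in our setting the decision variables would be the threshold and interval endpoints of a candidate formula, scored by the cost (\ref{eq_cost})):

```python
import numpy as np

def pso(cost, lb, ub, n_particles=20, iters=50, seed=0):
    """Minimal particle swarm optimization over a box [lb, ub]."""
    rng = np.random.default_rng(seed)
    dim = len(lb)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    x = rng.uniform(lb, ub, size=(n_particles, dim))
    v = np.zeros_like(x)
    pbest, pcost = x.copy(), np.array([cost(p) for p in x])
    g = pbest[pcost.argmin()].copy()           # global best position
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = np.clip(x + v, lb, ub)             # keep particles in the box
        c = np.array([cost(p) for p in x])
        improved = c < pcost
        pbest[improved], pcost[improved] = x[improved], c[improved]
        g = pbest[pcost.argmin()].copy()
    return g, pcost.min()
```

Usage: `best_params, best_cost = pso(J_of_params, lb, ub)`.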
Once the optimization procedure obtains an optimal formula $\phi$, we can use it to refine the basic observer constructed as in \cite{YiCDC}. The refinement shrinks the observer's state (i.e., the state estimate for $\mathcal{H}$) and the subsequent transitions based on the satisfaction or violation of an MTL formula. For a given $\phi$, we can shrink the observer state as soon as $\phi$ is satisfied or violated, without resetting the timer. The subsequent states and transitions can then be modeled in the same way as in the basic observer of \cite{YiCDC}.
\section{Implementation}
\label{sec_implementation}
In this section, we implement our occupancy detection method to distinguish between two cases in the simulation model of a smart building testbed \cite{Okaeme2016}: (i) one person enters an empty room after the door opens; (ii) two people enter an empty room after the door opens. We assume that we can observe the event when the door opens. The air conditioning is programmed to increase the mass flow rate of the cooling air when the temperature reaches certain thresholds (e.g., 290.6~K, 290.7~K).
The system is modeled as a hybrid system $\mathcal{H}$ with 6 locations, as shown in Fig. \ref{automaton}. The state $x=[w, T, \dot{W}_{\rm{gen}}, \dot{Q}_{\rm{gen}}]$ represents the humidity ratio and temperature of the room, and the humidity and heat generation rates within the room (i.e., from the occupants), respectively (we choose the units of $\dot{W}_{\rm{gen}}$ and $\dot{Q}_{\rm{gen}}$ to be mg/s and W, respectively). $\dot{W}_{\rm{gen}}$ and $\dot{Q}_{\rm{gen}}$ are added as two pseudo-states to account for the variations of the humidity and heat generation rates by different people \cite{TenWolde2007}. This ordering ($x_1=w$, $x_2=T$) matches the invariants, guards and initial states below. The continuous dynamics in the 6 locations are given as follows:\\
For location $\ell^0$ (room unoccupied):
\[
\begin{cases}
M\dot{x}_1=\dot{m}_{\ell^0}(w_{\rm{s}}-x_1)-G(x_1-w_{\infty});\\
C\dot{x}_2=\dot{m}_{\ell^0}C_{\rm{p}}(T_{\rm{s}}-x_2)+\beta G(x_1-w_{\infty})-K(x_2-T_{\infty}); \\
\dot{x}_3=0;
\dot{x}_4=0.
\end{cases}
\]
For the other 5 locations $\ell_k^m$ ($\ell^1_1$, $\ell^2_1$, $\ell^1_2$, $\ell^2_2$, $\ell^3_2$, room occupied with one or two people):
\[%
\begin{cases}
M\dot{x}_1=\dot{m}_{\ell_k^m}(w_{\rm{s}}-x_1)-G(x_1-w_{\infty})+x_3;\\
C\dot{x}_2=\dot{m}_{\ell_k^m}C_{\rm{p}}(T_{\rm{s}}-x_2)+\beta G(x_1-w_{\infty})-K(x_2-T_{\infty})\\
~~~~~~~~+x_4-10^{-6}\beta x_3; \\
\dot{x}_3=0;
\dot{x}_4=0.
\end{cases}
\]
where $\dot{m}_{\ell_k^m}$ is the mass flow rate of the air conditioning in location $\ell_k^m$ (we set $\dot{m}_{\ell^0}=\dot{m}_{\ell^1_1}=\dot{m}_{\ell^1_2}=0.5$~kg/s, $\dot{m}_{\ell^2_1}=\dot{m}_{\ell^2_2}=0.6$~kg/s, $\dot{m}_{\ell^3_2}=0.8$~kg/s), $C$ is the thermal capacitance of the room, $M$ is the mass of air in the room, $G$ is the mass transfer conductance between the room
and the ambient, $w_{\rm{s}}$ and
$T_{\rm{s}}$ are the supply air humidity ratio and temperature respectively, $w_{\infty}$ and
$T_{\infty}$ are the ambient humidity ratio and temperature respectively, $C_{\rm{p}}$ is the specific heat of air at constant pressure, $\beta$ is the latent heat of vaporization of water,
and $K$ is the wall thermal conductance.
We set $T_{\infty}=303$~K~(29.85$^{\circ}$C), $T_{\rm{s}}=290$~K~(16.85$^{\circ}$C), $w_{\infty}=0.0105$, $w_{\rm{s}}=0.01$. As shown in Fig. \ref{pic}, since humans generate both heat and moisture, the room temperature and humidity ratio increase towards the new equilibrium after people enter the room. As the mass flow rate of the air conditioning may change between locations, when two people enter the empty room the temperature first increases to 290.7~K and then starts to decrease once the mass flow rate is increased to 0.8~kg/s. The steady-state temperatures in the two cases are almost the same, so a temporal logic formula is needed to distinguish their temporal patterns in the transient period.
\begin{figure}[h]
\centering
\includegraphics[scale=0.25]{automaton3.pdf}
\caption{Locations of hybrid system $\mathcal{H}$ for the smart building model describing the series of events of the two cases.} \label{automaton}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=0.3]{pic3.pdf}
\caption{The temperature state of the two simulated trajectories (blue represents the trajectory when one person enters the empty room, red represents the trajectory when two people enter the empty room) and the corresponding locations.}
\label{pic}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=0.5]{fig_ta_exp5.pdf}
\caption{A timed abstraction of the hybrid automaton $\mathcal{H}$. $\tau$ is the clock time that is associated with each trajectory which is reset to zero every time the trajectory enters a new location. For instance, the transition from $(1,1)$ to $(1,2)$ means that any trajectory of $\mathcal{H}$ initiated from $B_{\ell^1_1}(\gamma_{\ell^1_1},x^0_{\ell^1_1})$ will reach $B_{\ell^2_1}(\gamma_{\ell^2_1},x^0_{\ell^2_1})$ within $29.1$ to $47.6$ time units by triggering an unobservable event.}
\label{fig_ta_exp}
\end{figure}
The invariant sets are
\begin{eqnarray*}
Inv(\ell^0) &=&\mathbb{R}^4,\\
Inv(\ell^1_1) &=&Inv(\ell^1_2)= \{x\vert 290.4\le x_2\le 290.6\},\\
Inv(\ell^2_1) &=&Inv(\ell^2_2)= \{x\vert 290.5\le x_2\le 290.7\},\\
Inv(\ell^3_2) &=& \{x\vert 290.6\le x_2\le 290.8\}.
\end{eqnarray*}
The events are modeled as follows:
\begin{itemize}
\item $e^1_1 =(\ell^0,\ell^1_1,g^1_1,r^1_1)$, where $g^1_1=
\mathbb{R}^4$, $r^1_1(x)=x+[0,0,300,80]$;
\item $e^1_2 =(\ell^0,\ell^1_2,g^1_2,r^1_2)$, where $g^1_2=
\mathbb{R}^4$, $r^1_2(x)=x+[0,0,600,160]$;
\item $e^2_1=(\ell^1_1,\ell^2_1,g^2_1,r^2_1) = e^2_2=(\ell^1_2,\ell^2_2,g^2_2,r^2_2)$, where $g^2_1=g^2_2=\{x\vert
x_2=290.6\}$, $r^2_1(x)=r^2_2(x)=x$;
\item $e^3_2=(\ell^2_2,\ell^3_2,g^3_2,r^3_2)$, where $g^3_2=\{x\vert
x_2=290.7\}$, $r^3_2(x)=x$.
\end{itemize}
The events $e^1_1$ and $e^1_2$ are non-deterministic, i.e. the events can happen anywhere in $Inv(\ell^0)$; the events $e^2_1$, $e^2_2$ and $e^3_2$ are deterministic, i.e. the events are forced to occur whenever the states leave the invariant sets (reach the guards). The output symbols of events $e^1_1$ and $e^1_2$ are observable (door opening) while the output symbols of events $e^2_1$, $e^2_2$ and $e^3_2$ are unobservable.
The reset initial state at location $\ell^1_1$ lies in the following set:
\begin{eqnarray*}
\mathcal{L}^1_1\times \mathcal{X}^1_1=\{\ell^1_1\}\times \{x&\vert& x_1=0.01, x_2=290.4976, \\
& & 280\le x_3\le 320, 60\le x_4\le 100\}.
\end{eqnarray*}
The reset initial state at location $\ell^1_2$ lies in the following set:
\begin{eqnarray*}
\mathcal{L}^1_2\times \mathcal{X}^1_2=\{\ell^1_2\}\times \{x&\vert& x_1=0.01, x_2=290.4976, \\
& &560\le x_3\le 640, 120\le x_4\le 200\}.
\end{eqnarray*}
By using the MATLAB Toolbox STRONG~\cite{Deng2013}, we can verify
that $\mathcal{L}^1_1\times \mathcal{X}^1_1$ is covered by a robust neighborhood
$B_{\ell^1_1}(\gamma_{\ell^1_1},x^0_{\ell^1_1})=\{x\vert \Phi_{\ell^1_1}(x,x^0_{\ell^1_1})<\gamma_{\ell^1_1}=0.098\}$
around the reset initial state $x^0_{\ell^1_1}$ of the trajectory (w.l.o.g., we assume that the door opens at 10 seconds)
\[
\begin{split}
\rho_1 =& (e^0, \ell^0, x^0_{\ell^0}, 10),(e^1_1, \ell^1_1, x^0_{\ell^1_1}, 37.6), (e^2_1, \ell^2_1, x^0_{\ell^2_1}, 262.4),\\
x^0_{\ell^0} =& [0.01, 290.4976,0,0]^T,
x^0_{\ell^1_1} = [0.01,290.4976,300,80]^T,\\
x^0_{\ell^2_1} =& [0.0101,290.6,300,80]^T.
\end{split}
\]
Similarly, we can verify
that $\mathcal{L}^1_2\times \mathcal{X}^1_2$ is covered by a robust neighborhood
$B_{\ell^1_2}(\gamma_{\ell^1_2},x^0_{\ell^1_2})=\{x\vert \Phi_{\ell^1_2}(x,x^0_{\ell^1_2})<\gamma_{\ell^1_2}=0.1\}$
around the reset initial state $x^0_{\ell^1_2}$ of the trajectory
\[
\begin{split}
\rho_2 =& (e^0, \ell^0, x^0_{\ell^0}, 10),(e^1_2, \ell^1_2, x^0_{\ell^1_2}, 16.1), (e^2_2, \ell^2_2, x^0_{\ell^2_2}, 36.6),\\& (e^3_2, \ell^3_2, x^0_{\ell^3_2}, 247.3),\\
x^0_{\ell^0} =& [0.01, 290.4976,0,0]^T,
x^0_{\ell^1_2} = [0.01, 290.4976, 600, 160]^T,\\
x^0_{\ell^2_2} =& [0.0101, 290.6, 600, 160]^T,\\
x^0_{\ell^3_2} =& [0.0102, 290.7, 600, 160]^T.
\end{split}
\]
As the variation range of the temperature is much smaller than those of the humidity and heat generation rates, in order to cover the reset initial sets $\mathcal{L}^1_1\times\mathcal{X}^1_1$ and $\mathcal{L}^1_2\times\mathcal{X}^1_2$, we optimize the matrix $M_{\ell}$ in each location $\ell$ (geometrically changing the shape of the level-set ellipsoid) so that the outer bound of the level-set ellipsoid $B_{\ell}(\gamma_\ell,x^0_\ell)$ along the temperature dimension is much smaller than the outer bounds along the other dimensions. Besides, according to Proposition \ref{th1}, we use the tighter bound $\tilde{\gamma}_{\ell,2}=\gamma_{\ell}/z_{\ell,2}$ in the optimization for MTL classification (we use the data of the simulated room temperature to infer the MTL formula; the case of the room humidity ratio can be handled in a similar manner); by maximizing $z_{\ell,2}$, and thus minimizing $\tilde{\gamma}_{\ell,2}$, we obtain the tightest bound $\tilde{\gamma}^{\ast}_{\ell,2}=\gamma_{\ell}/z^{\ast}_{\ell,2}$. The combined optimization to obtain both $M^{\ast}_{\ell}$ and $z^{\ast}_{\ell,2}$ is as follows:
\begin{align}
\begin{split}
&\text{min.}~ -z_{\ell,2}^2\\
\text{s.t.}~& M_{\ell}\succ 0,~
A_{\ell}^{T}M_{\ell}+M_{\ell}A_{\ell}\prec 0,\\
& e_3^{T}M_{\ell}e_3\leq \eta_3,~ e_4^{T}M_{\ell}e_4\leq \eta_4,\\
& e_2^{T}M_{\ell}e_2\geq \eta_2,~ M_{\ell}-z_{\ell,2}^2e_{2}e_{2}^{T}\succeq 0,
\end{split}
\end{align}
where $A_{\ell}$ is the state (or system) matrix in location $\ell$, $e_2=[0,1,0,0]^T, e_3=[0,0,1,0]^T, e_4=[0,0,0,1]^T$, $\eta_2=30$, $\eta_3=\eta_4=10^{-7}$ ($\eta_2$, $\eta_3$ and $\eta_4$ are tuned manually for covering the reset initial sets $\mathcal{L}^1_1\times\mathcal{X}^1_1$ and $\mathcal{L}^1_2\times\mathcal{X}^1_2$).
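This program is linear in the decision variables $(M_\ell, z_{\ell,2}^2)$ and can be handed to any SDP solver. As a sanity check, feasibility of a candidate $(M_\ell, z_{\ell,2}^2)$ can be verified numerically; the check below, including its example tolerances, is our own sketch, not the solver used in the experiments:

```python
import numpy as np

def check_lmi(M, A, z2, eta2=30.0, eta3=1e-7, eta4=1e-7, tol=1e-12):
    """Check the constraints of the combined optimization for one location:
    M > 0, Lyapunov decrease, bounds on the diagonal weights, and the
    single-variable condition M - z2 * e2 e2^T >= 0 (x2 = temperature)."""
    e2 = np.array([0.0, 1.0, 0.0, 0.0])
    return (np.linalg.eigvalsh(M).min() > 0
            and np.linalg.eigvalsh(A.T @ M + M @ A).max() < 0
            and M[2, 2] <= eta3 and M[3, 3] <= eta4
            and M[1, 1] >= eta2
            and np.linalg.eigvalsh(M - z2 * np.outer(e2, e2)).min() >= -tol)
```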
The optimal solution is computed as $z^{\ast}_{\ell,2}=30$. Based on the two simulated trajectories, we construct a timed abstraction (timed automaton) as shown in Fig. \ref{fig_ta_exp} (for details of the timed-abstraction construction, see \cite{YiCDC}). All the events are unobservable except $\psi$, which represents the door opening. We construct a basic observer as in Fig. \ref{fig_obs} (for details of designing the basic observer, see \cite{YiCDC}), in which the two occupancy cases are never distinguished by the end of the simulation time.
Next we infer an MTL formula that classifies the time-robust tube segments corresponding to the basic observer's states. The observer's initial state $s^1$ contains $(1,1)[0,0]$ and $(2,1)[0,0]$. We first try to classify the time-robust tube segments $R_{tube}(1,1,[t,t])$ and $R_{tube}(2,1,[t,t])$ ($t\in[0,13]$) but do not find any MTL formula that achieves perfect classification. Then we move on to state $s^2$, which contains $(1,1)[13,13]$, $(2,1)[13,13]$ and $(2,2)[-6.5,0]$. We find the following formula that perfectly classifies $R_{tube}(1,1,[t,t])$, $R_{tube}(2,1,[t,t])$ and $R_{tube}(2,2,[t-6.5,t])$ ($t\in[0,6.5]$):
\begin{equation*}
\phi^{\ast}= \Box_{[1.7717,5]}( x_2\ge 290.6006).
\end{equation*}
The optimization takes 36.7 seconds on a laptop with Intel Core i7 and 8GB RAM.
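The inferred formula is cheap to monitor online. Below is a sketch of evaluating the robustness of $\Box_{[a,b]}(x_2\ge c)$ at time $0$ on a uniformly sampled temperature trace (the sampling period and helper name are our own choices):

```python
def always_rob(trace, a, b, c, dt=0.1):
    """Robustness of Box_{[a,b]}(x2 >= c) at time 0 on a trace sampled
    every dt seconds: min over the window of (x2(t) - c)."""
    i0, i1 = int(round(a / dt)), int(round(b / dt))
    return min(x - c for x in trace[i0:i1 + 1])
```

A positive value indicates satisfaction of $\phi^{\ast}$, a negative value violation.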
\begin{figure}
\centering
\includegraphics[scale=0.45]{fig_obs_expnew6.pdf}
\caption{The refined observer shrinks the basic observer's states by adding the inferred MTL formula $\phi^{\ast}$. The satisfaction and violation of $\phi^{\ast}$ are modeled as transition labels $\phi^{\ast}[5]$ and $\neg\phi^{\ast}[5]$, respectively.}
\label{fig_obs1_exp}
\end{figure}
With the inferred MTL formula $\phi^{\ast}$, we construct the refined observer as shown in Fig. \ref{fig_obs1_exp}. It can be seen that once $\phi^{\ast}$ is satisfied, the two cases are distinguished in $18$ seconds. Compared with the basic observer, which can never distinguish the two occupancy cases, the refinement achieves this result in 18 seconds by adding only one temperature sensor.
\section{Conclusion}
We have presented a methodology for occupancy detection in smart buildings modeled as hybrid systems. It compresses the system states into time-robust tube segments according to the trajectory robustness and event occurrence times of the hybrid system, which account for both spatial and temporal uncertainties. Besides occupancy detection, the same methodology can be used in much broader applications such as fault diagnosis and state estimation.
\section*{Acknowledgment}
The authors would like to thank Charles C. Okaeme and Dr. Sandipan Mishra for introducing us to the smart building testbed, and Sayan Saha for helpful discussions. This research was partially supported by the National Science
Foundation through grants CNS-1218109, CNS-1550029 and CNS-1618369.
\section*{APPENDIX}
\textbf{Proof of Proposition \ref{prop_space}}:\\
We use induction to prove Proposition \ref{prop_space}.
(i) We first prove that Proposition \ref{prop_space} holds for atomic predicate $\mu$.
If $\tau<0$, then $\hat{r}_S(\hat{\xi}_{\ell}(\tau,x^0_{\ell}),\mu)=\hat{r}_S(\hat{\xi}_{\ell}(\tau,\tilde{x}^0_{\ell}),\mu)$ as both equal $-\infty$, and $\hat{r}_W(\hat{\xi}_{\ell}(\tau,x^0_{\ell}),\mu)=\hat{r}_W(\hat{\xi}_{\ell}(\tau,\tilde{x}^0_{\ell}),\mu)$ as both equal $\infty$. Therefore, Proposition \ref{prop_space} trivially holds for $\mu$ if $\tau<0$. If $\tau\ge0$, according to (\ref{semantics}) and (\ref{semantics2}), $\hat{r}_S(\hat{\xi}_{\ell}(\tau,x^0_{\ell}),\mu)=\hat{r}_W(\hat{\xi}_{\ell}(\tau,x^0_{\ell}),\mu)=r(\xi_{\ell}(\tau,x^0_{\ell}),\mu)$, so we only need to prove $r(\xi_{\ell}(\tau,x^0_\ell),\mu)-\hat{\gamma}_{\ell}\le r(\xi_{\ell}(\tau,\tilde{x}^0_\ell),\mu)\le r(\xi_{\ell}(\tau,x^0_\ell),\mu)+\hat{\gamma}_{\ell}$.
As the metric $d$ satisfies the triangle inequality, we have
\begin{align}
\begin{split}
&d(\xi_{\ell}(\tau,x_\ell^0),y)-d(\xi_{\ell}(\tau,x_\ell^0),\xi_{\ell}(\tau,\tilde{x}_\ell^0)) \le d(\xi_{\ell}(\tau,\tilde{x}_\ell^0),y)\le \\& d(\xi_{\ell}(\tau,x_\ell^0),y)+d(\xi_{\ell}(\tau,x_\ell^0),\xi_{\ell}(\tau,\tilde{x}_\ell^0)), \forall y\in\mathcal{X}, \tau\in[0,T].
\end{split}
\label{tri0}
\end{align}
As $[\big(\xi_\ell(\tau,x^0_\ell)-\xi_\ell(\tau,\tilde{x}^0_\ell)\big)^TM_\ell\big(\xi_\ell(\tau,x^{0}_\ell)-\xi_\ell(\tau,\tilde{x}^0_\ell)\big)]^{\frac{1}{2}}\le\gamma_{\ell}$, we have $d(\xi_{\ell}(\tau,x_\ell^0),\xi_{\ell}(\tau,\tilde{x}_\ell^0))=\norm{\xi_{\ell}(\tau,x_\ell^0)-\xi_{\ell}(\tau,\tilde{x}_\ell^0)}\le\gamma_{\ell}\norm{M_\ell^{-\frac{1}{2}}}=\hat{\gamma}_{\ell}$, thus we have
\begin{align}
&d(\xi_{\ell}(\tau,x_\ell^0),y)-\hat{\gamma}_{\ell} \le d(\xi_{\ell}(\tau,\tilde{x}_\ell^0),y)\le d(\xi_{\ell}(\tau,x_\ell^0),y)+\hat{\gamma}_{\ell}.
\label{tri}
\end{align}
1) $\xi_{\ell}(\tau,x_\ell^0)\in \mathcal{O}(\pi)$, and $B_{\ell}(\xi_{\ell}(\tau,x_\ell^0),\gamma_{\ell})\subset\mathcal{O}(\pi)$. In this case, for any $\xi_{\ell}(\tau,\tilde{x}_\ell^0 )\in B_{\ell}(\xi_{\ell}(\tau,x_\ell^0),\gamma_{\ell})$, $r(\xi_{\ell}(\tau,\tilde{x}_\ell^0 ), \pi)=\mbox{inf}\{d(\xi_{\ell}(\tau,\tilde{x}_\ell^0),y)|y \in \mathcal{X}\backslash\mathcal{O}(\pi)\}$.
From (\ref{tri}), $r(\xi_{\ell}(\tau,\tilde{x}_\ell^0 ), \pi)\ge\mbox{inf}\{d(\xi_{\ell}(\tau,x_\ell^0),y)-\hat{\gamma}_{\ell}|y \in \mathcal{X}\backslash\mathcal{O}(\pi)\}=\mbox{inf}\{d(\xi_{\ell}(\tau,x_\ell^0),y)|y \in \mathcal{X}\backslash\mathcal{O}(\pi)\}-\hat{\gamma}_{\ell}=r(\xi_{\ell}(\tau,x_\ell^0), \pi)-\hat{\gamma}_{\ell}$.
2) $\xi_{\ell}(\tau,x_\ell^0)\notin \mathcal{O}(\pi)$, and $B_{\ell}(\xi_{\ell}(\tau,x_\ell^0),\gamma_{\ell})\subset\mathcal{X}\backslash\mathcal{O}(\pi)$. In this case, for any $\xi_{\ell}(\tau,\tilde{x}_\ell^0)\in B_{\ell}(\xi_{\ell}(\tau,x_\ell^0),\gamma_{\ell})$, $r(\xi_{\ell}(\tau,\tilde{x}_\ell^0), \pi)=-\mbox{inf}\{d(\xi_{\ell}(\tau,\tilde{x}_\ell^0),y)|y \in cl(\mathcal{O}(\pi))\}$.
From (\ref{tri}), $r(\xi_{\ell}(\tau,\tilde{x}_\ell^0), \pi)\ge-\mbox{inf}\{d(\xi_{\ell}(\tau,x_\ell^0),y)+\hat{\gamma}_{\ell}|y \in cl(\mathcal{O}(\pi))\}=r(\xi_{\ell}(\tau,x_\ell^0), \pi)-\hat{\gamma}_{\ell}$.
3) $\xi_{\ell}(\tau,x_\ell^0)\in \mathcal{O}(\pi)$, but $B_{\ell}(\xi_{\ell}(\tau,x_\ell^0),\gamma_{\ell})\not\subset\mathcal{O}(\pi)$. In this case, we have
\begin{align} \nonumber
\begin{split}
&r(\xi_{\ell}(\tau,\tilde{x}_\ell^0), \pi)\ge\min\limits_{\xi_{\ell}(\tau,\tilde{x}_\ell^0)\in B_{\ell}(\xi_{\ell}(\tau,x_\ell^0),\gamma_{\ell})}r(\xi_{\ell}(\tau,\tilde{x}_\ell^0), \pi)\\
&~~~~~~~~~~~~~~~~~~~~~~=\min\{X_1, X_2\}, \\
& \mbox{where}\\
&X_1=-\max_{\substack{\xi_{\ell}(\tau,\tilde{x}_\ell^0)\in B_{\ell}(\xi_{\ell}(\tau,x_\ell^0),\gamma_{\ell}),\\\xi_{\ell}(\tau,\tilde{x}_\ell^0)\notin \mathcal{O}(\pi)}}\mbox{inf}\{d(\xi_{\ell}(\tau,\tilde{x}_\ell^0),y)|y \in cl(\mathcal{O}(\pi))\},\\
&X_2=\min_{\substack{\xi_{\ell}(\tau,\tilde{x}_\ell^0)\in B_{\ell}(\xi_{\ell}(\tau,x_\ell^0),\gamma_{\ell}),\\\xi_{\ell}(\tau,\tilde{x}_\ell^0)\in \mathcal{O}(\pi)}}\mbox{inf}\{d(\xi_{\ell}(\tau,\tilde{x}_\ell),y)|y \in \mathcal{X}\backslash\mathcal{O}(\pi)\}.
\end{split}
\end{align}
Since $d(\xi_{\ell}(\tau,\tilde{x}_\ell^0),y)\ge0$, we have $X_1\le0$, $X_2\ge0$, and $\min\{X_1, X_2\}=X_1$. For any $\xi_{\ell}(\tau,\tilde{x}_\ell^0)\in B_{\ell}(\xi_{\ell}(\tau,x_\ell^0),\gamma_{\ell})$ and $\xi_{\ell}(\tau,\tilde{x}_\ell^0)\notin \mathcal{O}(\pi)$, there exists $z_c\in B_{\ell}(\xi_{\ell}(\tau,x_\ell^0),\gamma_{\ell})$ and $z_c\in \partial(\mathcal{O}(\pi))$ such that $\xi_{\ell}(\tau,\tilde{x}_\ell^0), z_c$ and $\xi_{\ell}(\tau,x_\ell^0)$ are collinear, i.e. $d(\xi_{\ell}(\tau,x_\ell^0),z_c)+d(z_c,\xi_{\ell}(\tau,\tilde{x}_\ell^0))=d(\xi_{\ell}(\tau,x_\ell^0),\xi_{\ell}(\tau,\tilde{x}_\ell^0))\le\hat{\gamma}_{\ell}$. Therefore, as $r(\xi_{\ell}(\tau,x_\ell^0), \pi)=\mbox{inf}\{d(\xi_{\ell}(\tau,x_\ell^0),y)|y \in \mathcal{X}\backslash\mathcal{O}(\pi)\}\le d(\xi_{\ell}(\tau,x_\ell^0),z_c)$ and $\mbox{inf}\{d(\xi_{\ell}(\tau,\tilde{x}_\ell^0),y)|y \in cl(\mathcal{O}(\pi))\}\le d(\xi_{\ell}(\tau,\tilde{x}_\ell^0),z_c)$, we have $\mbox{inf}\{d(\xi_{\ell}(\tau,\tilde{x}_\ell^0),y)|y \in cl(\mathcal{O}(\pi))\}+r(\xi_{\ell}(\tau,x_\ell^0), \pi)\le\hat{\gamma}_{\ell}$ for any $\xi_{\ell}(\tau,\tilde{x}_\ell^0)\in B_{\ell}(\xi_{\ell}(\tau,x_\ell^0),\gamma_{\ell})$ and $\xi_{\ell}(\tau,\tilde{x}_\ell^0)\notin \mathcal{O}(\pi)$. So $-X_1+r(\xi_{\ell}(\tau,x_\ell^0), \pi)\le\hat{\gamma}_{\ell}$, i.e. $X_1\ge r(\xi_{\ell}(\tau,x_\ell^0), \pi)-\hat{\gamma}_{\ell}$. Therefore, $r(\xi_{\ell}(\tau,\tilde{x}_\ell^0), \pi)\ge\min\{X_1, X_2\}=X_1\ge r(\xi_{\ell}(\tau,x_\ell^0), \pi)-\hat{\gamma}_{\ell}$.
4) $\xi_{\ell}(\tau,x_\ell^0)\notin \mathcal{O}(\pi)$, but $B_{\ell}(\xi_{\ell}(\tau,x_\ell^0),\gamma_{\ell})\not\subset\mathcal{X}\backslash\mathcal{O}(\pi)$. In this case, $r(\xi_{\ell}(\tau,x_\ell^0), \pi)=-\mbox{inf}\{d(\xi_{\ell}(\tau,x_\ell^0),y)|y \in cl(\mathcal{O}(\pi))\}$. For any $\xi_{\ell}(\tau,\tilde{x}_\ell^0)\in B_{\ell}(\xi_{\ell}(\tau,x_\ell^0),\gamma_{\ell})$ with $\xi_{\ell}(\tau,\tilde{x}_\ell^0)\notin\mathcal{O}(\pi)$, $r(\xi_{\ell}(\tau,\tilde{x}_\ell^0), \pi)=-\mbox{inf}\{d(\xi_{\ell}(\tau,\tilde{x}_\ell^0),y)|y \in cl(\mathcal{O}(\pi))\}\ge-\mbox{inf}\{d(\xi_{\ell}(\tau,x_\ell^0),y)+\hat{\gamma}_{\ell}|y \in cl(\mathcal{O}(\pi))\}=r(\xi_{\ell}(\tau,x_\ell^0), \pi)-\hat{\gamma}_{\ell}$. Therefore, following the same argument as in case 3), $r(\xi_{\ell}(\tau,\tilde{x}_\ell^0), \pi)\ge r(\xi_{\ell}(\tau,x_\ell^0), \pi)-\hat{\gamma}_{\ell}$. The upper bounds follow symmetrically by exchanging the roles of $x_\ell^0$ and $\tilde{x}_\ell^0$.
(ii) We assume that Proposition \ref{prop_space} holds for $\phi$ and prove Proposition \ref{prop_space} holds for $\lnot\phi$.
If Proposition \ref{prop_space} holds for $\phi$, then as $\hat{r}_W(\hat{\xi}_{\ell}(\tau,x_\ell^0), \phi)=-\hat{r}_S(\hat{\xi}_{\ell}(\tau,x_\ell^0), \lnot\phi)$, we have $-\hat{r}_S(\hat{\xi}_{\ell}(\tau,x_\ell^0 ), \lnot\phi)-\hat{\gamma}_{\ell}\le -\hat{r}_S(\hat{\xi}_{\ell}(\tau,\tilde{x}_\ell^0), \lnot\phi)\le -\hat{r}_S(\hat{\xi}_{\ell}(\tau,x_\ell^0), \lnot\phi)+\hat{\gamma}_{\ell}$, thus $\hat{r}_S(\hat{\xi}_{\ell}(\tau,x_\ell^0), \lnot\phi)-\hat{\gamma}_{\ell}\le \hat{r}_S(\hat{\xi}_{\ell}(\tau,\tilde{x}_\ell^0), \lnot\phi)\le\hat{r}_S(\hat{\xi}_{\ell}(\tau,x_\ell^0), \lnot\phi)+\hat{\gamma}_{\ell}$. Similarly, as $\hat{r}_S(\hat{\xi}_{\ell}(\tau,x_\ell^0), \phi)=-\hat{r}_W(\hat{\xi}_{\ell}(\tau,x_\ell^0), \lnot\phi)$, we have $-\hat{r}_W(\hat{\xi}_{\ell}(\tau,x_\ell^0 ), \lnot\phi)-\hat{\gamma}_{\ell}\le -\hat{r}_W(\hat{\xi}_{\ell}(\tau,\tilde{x}_\ell^0), \lnot\phi)\le -\hat{r}_W(\hat{\xi}_{\ell}(\tau,x_\ell^0), \lnot\phi)+\hat{\gamma}_{\ell}$, thus $\hat{r}_W(\hat{\xi}_{\ell}(\tau,x_\ell^0), \lnot\phi)-\hat{\gamma}_{\ell}\le \hat{r}_W(\hat{\xi}_{\ell}(\tau,\tilde{x}_\ell^0), \lnot\phi)\le\hat{r}_W(\hat{\xi}_{\ell}(\tau,x_\ell^0), \lnot\phi)+\hat{\gamma}_{\ell}$.
(iii) We assume that Proposition \ref{prop_space} holds for $\phi_1,\phi_2$ and prove Proposition \ref{prop_space} holds for $\phi_1\wedge\phi_2$.
If Proposition \ref{prop_space} holds for $\phi_1$ and $\phi_2$, then $\hat{r}_S(\hat{\xi}_{\ell}(\tau,x_\ell^0 ), \phi_1)-\hat{\gamma}_{\ell}\le \hat{r}_S(\hat{\xi}_{\ell}(\tau,\tilde{x}_\ell^0), \phi_1)\le \hat{r}_S(\hat{\xi}_{\ell}(\tau,x_\ell^0 ), \phi_1)+\hat{\gamma}_{\ell}$, $\hat{r}_S(\hat{\xi}_{\ell}(\tau,x_\ell^0 ), \phi_2)-\hat{\gamma}_{\ell}\le \hat{r}_S(\hat{\xi}_{\ell}(\tau,\tilde{x}_\ell^0), \phi_2)\le \hat{r}_S(\hat{\xi}_{\ell}(\tau,x_\ell^0 ), \phi_2)+\hat{\gamma}_{\ell}$. As $\hat{r}_S(\hat{\xi}_{\ell}(\tau,x_\ell^0 ), \phi_1\wedge\phi_2)=\min(\hat{r}_S(\hat{\xi}_{\ell}(\tau,x_\ell^0 ), \phi_1),\hat{r}_S(\hat{\xi}_{\ell}(\tau,x_\ell^0 ), \phi_2))$, we have
\begin{align}\nonumber
\begin{split}
&\min(\hat{r}_S(\hat{\xi}_{\ell}(\tau,x_\ell^0 ), \phi_1),\hat{r}_S(\hat{\xi}_{\ell}(\tau,x_\ell^0 ), \phi_2))-\hat{\gamma}_{\ell}\le \hat{r}_S(\hat{\xi}_{\ell}(\tau,\tilde{x}_\ell^0), \\&\phi_1\wedge\phi_2)\le \min(\hat{r}_S(\hat{\xi}_{\ell}(\tau,x_\ell^0 ), \phi_1),\hat{r}_S(\hat{\xi}_{\ell}(\tau,x_\ell^0 ), \phi_2))+\hat{\gamma}_{\ell},
\end{split}
\end{align}
therefore $\hat{r}_S(\hat{\xi}_{\ell}(\tau,x_\ell^0 ), \phi_1\wedge\phi_2)-\hat{\gamma}_{\ell}\le \hat{r}_S(\hat{\xi}_{\ell}(\tau,\tilde{x}_\ell^0), \phi_1\wedge\phi_2)\le \hat{r}_S(\hat{\xi}_{\ell}(\tau,x_\ell^0 ), \phi_1\wedge\phi_2)+\hat{\gamma}_{\ell}$.
Similarly, it can be proved that if Proposition \ref{prop_space} holds for $\phi_1$ and $\phi_2$, then $\hat{r}_W(\hat{\xi}_{\ell}(\tau,x_\ell^0 ), \phi_1\wedge\phi_2)-\hat{\gamma}_{\ell}\le \hat{r}_W(\hat{\xi}_{\ell}(\tau,\tilde{x}_\ell^0), \phi_1\wedge\phi_2)\le\hat{r}_W(\hat{\xi}_{\ell}(\tau,x_\ell^0 ), \phi_1\wedge\phi_2)+\hat{\gamma}_{\ell}$.
(iv) We assume that Proposition \ref{prop_space} holds for $\phi$ and prove Proposition \ref{prop_space} holds for $F_{I}\phi$.
As $\hat{r}_S(\hat{\xi}_{\ell}(\tau,x_\ell^0), F_{I}\phi)=\displaystyle\max_{\tau'\in (\tau+I)}\hat{r}_S(\hat{\xi}_{\ell}(\tau',x_\ell^0), \phi)$ and $\hat{r}_W(\hat{\xi}_{\ell}(\tau,x_\ell^0), F_{I}\phi)=\displaystyle\max_{\tau'\in (\tau+I)}\hat{r}_W(\hat{\xi}_{\ell}(\tau',x_\ell^0), \phi)$, if Proposition \ref{prop_space} holds for $\phi$, then for any $\tau'\in (\tau+I)$, $\hat{r}_S(\hat{\xi}_{\ell}(\tau',x_\ell^0), \phi)-\hat{\gamma}_{\ell}\le \hat{r}_S(\hat{\xi}_{\ell}(\tau',\tilde{x}_{\ell}^0), \phi)\le\hat{r}_S(\hat{\xi}_{\ell}(\tau',x_\ell^0), \phi)+\hat{\gamma}_{\ell}$ and $\hat{r}_W(\hat{\xi}_{\ell}(\tau',x_\ell^0), \phi)-\hat{\gamma}_{\ell}\le \hat{r}_W(\hat{\xi}_{\ell}(\tau',\tilde{x}_{\ell}^0), \phi)\le\hat{r}_W(\hat{\xi}_{\ell}(\tau',x_\ell^0), \phi)+\hat{\gamma}_{\ell}$. So we have
\begin{align}\nonumber
\begin{split}
&\max_{\tau'\in (\tau+I)}\hat{r}_S(\hat{\xi}_{\ell}(\tau',x_\ell^0), \phi)-\hat{\gamma}_{\ell}\le\max_{\tau'\in (\tau+I)}\hat{r}_S(\hat{\xi}_{\ell}(\tau',\tilde{x}_{\ell}^0), \phi)\le\\
&\max_{\tau'\in (\tau+I)}\hat{r}_S(\hat{\xi}_{\ell}(\tau',x_\ell^0), \phi)+\hat{\gamma}_{\ell},\\
&\max_{\tau'\in (\tau+I)}\hat{r}_W(\hat{\xi}_{\ell}(\tau',x_\ell^0), \phi)-\hat{\gamma}_{\ell}\le\max_{\tau'\in (\tau+I)}\hat{r}_W(\hat{\xi}_{\ell}(\tau',\tilde{x}_{\ell}^0), \phi)\le\\
&\max_{\tau'\in (\tau+I)}\hat{r}_W(\hat{\xi}_{\ell}(\tau',x_\ell^0), \phi)+\hat{\gamma}_{\ell}.
\end{split}
\end{align}
Thus Proposition \ref{prop_space} holds for $F_{I}\phi$.
Similarly, it can be proved that if Proposition \ref{prop_space} holds for $\phi$, then Proposition \ref{prop_space} holds for $G_{I}\phi$.
Therefore, it is proved that Proposition \ref{prop_space} holds for any $\phi$.
\textbf{Proof of Proposition \ref{th1}}:\\
The proof of Proposition \ref{th1} is similar to that of Proposition \ref{prop_space}, except for the case of the atomic predicate $\mu$ when $\tau\ge0$.
We denote $\Pi_j:\mathbb{X}\rightarrow\mathbb{X}_j$ as a projection map that maps every state $x\in\mathbb{X}$ to its value at the $j$th dimension, i.e. $\Pi_j(x)=x_j=e_j^{T}x$, where $e_j$ is the $j$th canonical unit vector. From (\ref{tri0}), as $[\big(\xi_\ell(\tau,x^0_\ell)-\xi_\ell(\tau,\tilde{x}^0_\ell)\big)^Tz_{\ell,j}^2e_{j}e_{j}^{T}\big(\xi_\ell(\tau,x^{0}_\ell)-\xi_\ell(\tau,\tilde{x}^0_\ell)\big)]^{\frac{1}{2}}\le[\big(\xi_\ell(\tau,x^0_\ell)-\xi_\ell(\tau,\tilde{x}^0_\ell)\big)^TM_\ell\big(\xi_\ell(\tau,x^{0}_\ell)-\xi_\ell(\tau,\tilde{x}^0_\ell)\big)]^{\frac{1}{2}}\le\gamma_{\ell}$, we have $d(\Pi_j(\xi_{\ell}(\tau,x_\ell^0)),\Pi_j(\xi_{\ell}(\tau,\tilde{x}_\ell^0)))=[\big(\xi_\ell(\tau,x^0_\ell)-\xi_\ell(\tau,\tilde{x}^0_\ell)\big)^Te_{j}e_{j}^{T}\big(\xi_\ell(\tau,x^{0}_\ell)-\xi_\ell(\tau,\tilde{x}^0_\ell)\big)]^{\frac{1}{2}}\le\gamma_{\ell}/z_{\ell,j}$, thus we have
\begin{align}
\begin{split}
& d(\Pi_j(\xi_{\ell}(\tau,x_\ell^0)),\Pi_j(y))-\tilde{\gamma}_{\ell,j} \le d(\Pi_j(\xi_{\ell}(\tau,\tilde{x}_\ell^0)),\Pi_j(y))\\&\le d(\Pi_j(\xi_{\ell}(\tau,x_\ell^0)),\Pi_j(y))+\tilde{\gamma}_{\ell,j}.
\end{split}
\label{tri2}
\end{align}
The remaining proof is similar to the proof of Proposition \ref{prop_space}, replacing $d(x,y)$ by $d(\Pi_j(x),\Pi_j(y))$ and $\hat{\gamma}_{\ell}$ by $\tilde{\gamma}_{\ell,j}$.
\textbf{Proof of Proposition \ref{sol_th}}:\\
For $c_i=1$, according to Proposition \ref{prop_space}, for any $\big(\tau_i,\hat{\xi}_{\ell^{m_i}_{k_i}}(\tau_i,\tilde{x}^0_{\ell^{m_i}_{k_i}})\big)\in R_{tube}(k_i,m_i,[t+\bar{a}_i,t+\bar{b}_i])$, we have $\hat{r}_W(\hat{\xi}_{\ell^{m_i}_{k_i}}(\tau_i,x^0_{\ell^{m_i}_{k_i}}),\phi)-\hat{\gamma}_{\ell^{m_i}_{k_i}}\le \hat{r}_W(\hat{\xi}_{\ell^{m_i}_{k_i}}(\tau_i,\tilde{x}^0_{\ell^{m_i}_{k_i}}),\phi)$. If $MG(k_i,m_i,\bar{a}_i,\bar{b}_i,\phi,c_i)=\min\limits_{\tau_i\in t+ [\bar{a}_i,\bar{b}_i]}\hat{r}_W(\hat{\xi}_{\ell^{m_i}_{k_i}}(\tau_i,x^0_{\ell^{m_i}_{k_i}}),\phi)-\hat{\gamma}_{\ell^{m_i}_{k_i}}>0$, then for any $\tau_i\in t+ [\bar{a}_i,\bar{b}_i]$, $\hat{r}_W(\hat{\xi}_{\ell^{m_i}_{k_i}}(\tau_i,\tilde{x}^0_{\ell^{m_i}_{k_i}}),\phi)\ge \hat{r}_W(\hat{\xi}_{\ell^{m_i}_{k_i}}(\tau_i,x^0_{\ell^{m_i}_{k_i}}),\phi)-\hat{\gamma}_{\ell^{m_i}_{k_i}}>0$.
For $c_i=-1$, according to Proposition \ref{prop_space}, for any $\big(\tau_i,\hat{\xi}_{\ell^{m_i}_{k_i}}(\tau_i,\tilde{x}^0_{\ell^{m_i}_{k_i}})\big)\in R_{tube}(k_i,m_i,[t+\bar{a}_i,t+\bar{b}_i])$, we have $\hat{r}_W(\hat{\xi}_{\ell^{m_i}_{k_i}}(\tau_i,x^0_{\ell^{m_i}_{k_i}}),\lnot\phi)-\hat{\gamma}_{\ell^{m_i}_{k_i}}\le \hat{r}_W(\hat{\xi}_{\ell^{m_i}_{k_i}}(\tau_i,\tilde{x}^0_{\ell^{m_i}_{k_i}}),\lnot\phi)$. If $MG(k_i,m_i,\bar{a}_i,\bar{b}_i,\lnot\phi,c_i)=\min\limits_{\tau_i\in t+ [\bar{a}_i,\bar{b}_i]}\hat{r}_W(\hat{\xi}_{\ell^{m_i}_{k_i}}(\tau_i,x^0_{\ell^{m_i}_{k_i}}),\lnot\phi)-\hat{\gamma}_{\ell^{m_i}_{k_i}}>0$, then for any $\tau_i\in t+ [\bar{a}_i,\bar{b}_i]$, $\hat{r}_W(\hat{\xi}_{\ell^{m_i}_{k_i}}(\tau_i,\tilde{x}^0_{\ell^{m_i}_{k_i}}),\lnot\phi)\ge \hat{r}_W(\hat{\xi}_{\ell^{m_i}_{k_i}}(\tau_i,x^0_{\ell^{m_i}_{k_i}}),\lnot\phi)-\hat{\gamma}_{\ell^{m_i}_{k_i}}>0$.
\bibliographystyle{IEEEtran}
\section{Introduction}
On large scales, the observable Universe can be described in simple terms: it is very close to a homogeneous, isotropic (and probably spatially flat, although that is less clear \cite{closedFLRW}) Friedmann--Lema\^{i}tre--Robertson--Walker (FLRW) universe, with nearly scale-invariant small scalar perturbations. The challenge for all approaches to modern cosmology is to find an explanation for this observed structure, and to resolve various puzzles inherent to the currently accepted $\Lambda$CDM model of cosmology. This challenge is often seen as an opportunity for theories of quantum gravity to connect to observations, in particular since the $\Lambda$CDM model features the Big Bang singularity which signals its own fundamental incompleteness. The relative simplicity of our Universe means that one does not need to understand quantum gravity in full generality to say something about cosmology: all that is needed is a formalism powerful enough to deal with approximately homogeneous and isotropic universes. Various quantum-gravity inspired cosmological scenarios can indeed describe the evolution of perturbations across a quantum bounce \cite{BounceReview}.
The description of cosmological perturbations in the standard framework is based on gauge-invariant perturbation variables \cite{cosmopert}, characterised by their invariance under infinitesimal diffeomorphisms, which correspond to physical perturbations.
On top of this notion of gauge invariance, there are gauge-invariant perturbation variables with a particularly direct physical interpretation.
The {\em curvature perturbation on uniform-density hypersurfaces} $\zeta$ is directly related to cosmological observations at late times, in particular of the cosmic microwave background (CMB) \cite{BounceReview}.
The notion of what characterises a `good' cosmological perturbation variable may not extend straightforwardly to quantum gravity, where the notions of gauge invariance and diffeomorphisms may be modified \cite{BojowaldPaily}, or where one may not have access to an effective action for perturbations. The physical mechanism for the generation of cosmological perturbations may also be different from the most commonly assumed framework of inflation, where they arise from quantum fluctuations on an effectively classical background. It is important to be clear about which assumptions from standard perturbation theory are carried over to a particular quantum gravity formalism of interest. \\
In this paper we study gauge-invariant scalar perturbations in quantum gravity bounce scenarios governed by a modified Friedmann equation which reduces to general relativity at low energies but includes high-curvature corrections. This starting point is familiar from the standard effective dynamics of Loop Quantum Cosmology (LQC) \cite{LQC, LQC_2} but clearly more general\footnote{As a basic example, generalisations of LQC can already lead to more complicated Friedmann equations \cite{GLQCnew}.}.
We do not assume an effective spacetime description of perturbations, but ask what properties of the dynamics of perturbations can be obtained from the Friedmann equation and homogeneous matter dynamics alone.
Consequently, we work in the separate universe picture for long-wavelength cosmological perturbations \cite{SepUniv, SepUniv_2, hamSU}, which suggests that such perturbations can be well approximated by a universe consisting of many independent, locally homogeneous patches, each governed by a local Friedmann equation and dynamical equations for matter.
The separate universe approach has already been studied in LQC \cite{LQCsepUniv} and Group Field Theory (GFT) \cite{GFTsepUniv}. Our work extends the results of \cite{LQCsepUniv} to more general modified Friedmann equations such as those appearing in GFT, while also going beyond \cite{GFTsepUniv} in that we study gauge-invariant perturbation variables.
In standard cosmology the quantity $\zeta$ is conserved on super-horizon scales for adiabatic perturbations, $\zeta'=0$ (see, e.g., \cite{conserved}; here and in the following $'$ refers to derivative with respect to an arbitrary time coordinate).
This property is very important: it means one only has to follow the evolution of cosmological perturbations while they are within the Hubble horizon.
Typically, in a cosmological scenario that aims to solve the horizon problem, perturbations are initially generated deep inside the Hubble horizon, then leave the horizon as the Universe expands or contracts, only to re-enter later as amplified, classical perturbations.
In bounce scenarios, the point of exiting the horizon is often in a contracting phase before the bounce and re-entry happens in the subsequent expansion phase \cite{BounceReview}. (A subtle point which we will get back to later is that at the bounce itself the horizon is necessarily infinite so {\em all} modes are sub-horizon.)
We review the derivation of the conservation law for $\zeta$ from which one can readily see that it will continue to hold for adiabatic perturbations in quantum gravity scenarios of interest, where the continuity equation remains unaltered.
We also study a related quantity, the {\em comoving curvature perturbation} $\mathcal{R}$. This quantity usually satisfies a similar conservation law for long-wavelength modes, $\mathcal{R}'=0$, and in general relativity one can show that $-\zeta=\mathcal{R}$ on super-horizon scales (with appropriate sign conventions). This equality is no longer guaranteed if one goes beyond general relativity as we do here.
Indeed the simplest GFT bounce scenario gives an example of quantum-gravity inspired cosmological dynamics for which $\zeta$ is still conserved, but $\mr$ is not.
This article is organised as follows: In sec.\,\ref{sec:cosmPert}, we give a brief introduction to standard cosmological perturbation theory, including the definitions of $\zeta$ and $\mr$, and introduce the separate universe framework.
We then proceed to rederive the well-known conservation law for $\zeta$ in the case of adiabatic perturbations in sec.\,\ref{sec:consZeta}. The main results of this paper are contained in sec.\,\ref{sec:genPert}: we first derive perturbation equations for long-wavelength modes starting from a general modified Friedmann equation without specifying a lapse or choosing a specific gauge for perturbation variables. We then comment on the meaning of different gauge choices in the separate universe framework (sec.\,\ref{sec:gauge}). Working in the comoving gauge, we study a class of modified Friedmann equations, where the modification is a function of the energy density $\rho$ only (a special case of which is LQC), in sec.\,\ref{sec:mfRho}.
We find that for this particular case $\mr' = \zeta' = 0$ continues to hold.
In sec.\,\ref{sec:GFTexample} we consider a modified Friedmann equation obtained from GFT as an example for which $\mr'\neq0$.
Our main results are based on solving first-order equations of motion, but in sec.\,\ref{sec:MS} we compare this strategy to approaches based on a single second order equation of motion for a single perturbation variable, such as the Mukhanov--Sasaki equation.
We finally conclude in sec.\,\ref{sec:conc}.
\section{Cosmological perturbations}\label{sec:cosmPert}
The usual starting point in cosmological perturbation theory is to model the Universe as a flat FLRW universe with inhomogeneous perturbations.
We will follow this assumption even though the observational status of spatial curvature is not fully settled. This is because we are interested in the behaviour of perturbations near a bounce, where spatial curvature would be subdominant, and because the quantum gravity scenarios we are interested in might prefer flat FLRW geometries (see, e.g., \cite{GFTscalarcosmo, GFTscalarcosmo_2} for the situation in GFT cosmology).
The general form of the perturbed line element at linear order for scalar perturbations, following standard conventions \cite{cosmopert,BaumannNotes}, is \footnote{The tilde over $\tilde\Phi$ is used to distinguish this lapse perturbation variable (which is not gauge-invariant) from its gauge-invariant analogue, the Bardeen variable $\Phi$.}
\begin{eqnarray}
{\rm d} s^2 &=& -N^2(t)\left(1+2\tilde\Phi(t,x^i)\right){\rm d} t^2 + 2N(t)a(t)\,\partial_i B(t,x^i)\,{\rm d} t\,{\rm d} x^i \nonumber
\\&&+ a^2(t)\left[\left(1-2\psi(t,x^i)\right)\delta_{ij}+2\partial_i\partial_j E(t,x^i)\right]{\rm d} x^i\,{\rm d} x^j\,.
\label{pertmetric}
\end{eqnarray}
For concreteness we have added the functional dependence of all variables explicitly: we have the background scale factor $a(t)$ and lapse function $N(t)$ (which only depend on time) and perturbations $\tilde\Phi,\psi,B$ and $E$, which in general depend on space and time.
In cosmological bounce scenarios, departures from general relativity are expected to become relevant near the bounce,
where most modes are super-horizon (as mentioned earlier, at the bounce itself all modes are sub-horizon).
We will therefore only be interested in a long-wavelength approximation in which spatial gradients are neglected, so that
\begin{equation}
\tilde\Phi(t,x^i)\rightarrow\tilde\Phi(t)\,,\quad \psi(t,x^i)\rightarrow\psi(t)\,,\quad \partial_i B\rightarrow 0\,,\quad \partial_i\partial_j E\rightarrow 0\,.
\label{longwavel}
\end{equation}
$\tilde\Phi$ and $\psi$ can then be seen as perturbations in the background quantities $N$ and $a$, which is the essence of the separate-universe idea: one can consider a universe composed of many independent, locally homogeneous flat FLRW patches, with local lapse and scale factor
\begin{equation}
N_{{\rm loc}} = N(1 + \tilde\Phi)\,,\quad a_{{\rm loc}} = a(1 - \psi)\,,
\label{flrwperturb}
\end{equation}
where $N$ and $a$ are now considered as averages over many patches, and $\tilde\Phi$ and $\psi$ as small quantities ($O(\epsilon)$ with $\epsilon\ll 1$) characterising the difference between one patch and the average. We see that the metric for each patch would then be given by (\ref{pertmetric}) with (\ref{longwavel}) (here and throughout the paper, quantities of quadratic and higher order in perturbation variables are $O(\epsilon^2)$ and will be dropped, as we work within linear perturbation theory).
The separate universe picture is particularly useful in quantum gravity scenarios that do not allow for a satisfactory description of inhomogeneities.
If matter is described by an energy density $\rho$ and pressure $P$, one can analogously introduce perturbation variables $\delta\rho$ and $\delta P$ with
\begin{equation}
\rho_{{\rm loc}} = \rho + \delta\rho\,,\quad P_{{\rm loc}} = P+\delta P\,.
\label{matterperturb}
\end{equation}
In the case where the matter content of the very early universe is given by a single scalar field $\phi$ with potential $\pot$ (such as in inflation, LQC or GFT),
the energy density and pressure are given by
\begin{align}
\rho = \frac{\phi'^2}{2N^2} + \pot \, , \qquad P = \frac{\phi'^2}{2N^2} - \pot
\label{eq:rhoPscalarField}
\end{align}
and the dynamics of the scalar field are determined by the Klein--Gordon equation \begin{align}
\phi'' - \frac{N'}{N} \phi' + N^2 \frac{{\rm d} \pot}{{\rm d} \phi} + 3 \mh \phi' = 0\, ,
\label{eq:klGordon}
\end{align}
where we define $\mathfrak{H}=a'/a$ (which reduces to the usual Hubble parameter $H$ if $t$ is chosen to be proper time).
For $\phi_{{\rm loc}}=\phi+\delta\phi$ one obtains the perturbations for the energy density and pressure by perturbing \eqref{eq:rhoPscalarField} at linear order:
\begin{align}
\delta \rho = \frac{\phi'^2}{ N^2} \left(\frac{\delta \phi'}{\phi'} - \tPhi\right) +\frac{{\rm d} \pot}{{\rm d} \phi} \delta \phi\, , \qquad \delta P = \frac{\phi'^2}{N^2} \left(\frac{\delta \phi'}{\phi'} - \tPhi\right) -\frac{{\rm d} \pot}{{\rm d} \phi} \delta \phi\,.
\label{eq:pertRhoP}
\end{align}
In cosmological perturbation theory, there exist two notions of gauge invariance,
one related to the choice of the lapse $N$ and another to the choice of the coordinate system of perturbations.
The first allows an arbitrary change of the background time coordinate $t\to f(t)$; the second encodes the invariance under \emph{infinitesimal} diffeomorphisms $x^\mu \to x^\mu + \xi^\mu$, where $\xi^\mu$ is $O(\epsilon)$.
It is the second gauge freedom we refer to when we discuss gauge invariance and different gauge choices in the following, keeping in mind that our separate universe approximation will reduce the relevant gauge transformations to those of the form $t \to t+\xi^0(t)$. For more details, see \cite{cosmopert, BaumannNotes}.
The metric and matter perturbation variables in \eqref{pertmetric} are not gauge-invariant, but have certain transformation properties under infinitesimal diffeomorphisms, such that gauge-invariant variables need to be obtained by combining perturbation variables in a suitable manner.
The two gauge-invariant perturbation variables we have mentioned above are defined by (different sign conventions for $\zeta$ exist in the literature; we follow the one of \cite{conserved}, going back to \cite{BST1983})
\begin{equation}
-\zeta = \psi + \frac{\mathfrak{H}}{\rho'}\delta\rho\,,\quad \mathcal{R} = \psi+\frac{\mathfrak{H}}{\phi'}\delta\phi\,,
\label{gaugeinvperts}
\end{equation}
where the second definition specifically refers to a scalar field.
As such, the variables in \eqref{gaugeinvperts} are invariant under infinitesimal diffeomorphisms acting on the metric and matter fields at $O(\epsilon)$ \cite{cosmopert}. Importantly, we will assume that {\em the same notion of gauge invariance applies in our quantum gravity scenarios of interest} so that (\ref{gaugeinvperts}) are gauge-invariant, and hence potentially observable, also for our modified gravitational dynamics. This is certainly an assumption given that the action of diffeomorphisms might receive quantum corrections as in LQC \cite{LQCanomaly}, but in the absence of a full spacetime picture we need to make such an assumption to proceed. Since we are only interested in long-wavelength perturbations, we only need to assume that diffeomorphisms act as in general relativity in that limit, which is true in LQC and might be seen as an admissible assumption to make: naively, the longest wavelengths should be least sensitive to any quantum gravity corrections.
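In the separate universe picture this invariance can be verified directly. Under an infinitesimal shift of the time coordinate, $t\to t+\xi^0(t)$, the perturbations transform as $\psi\to\psi+\mathfrak{H}\xi^0$, $\delta\rho\to\delta\rho-\rho'\xi^0$ and $\delta\phi\to\delta\phi-\phi'\xi^0$, so that
\begin{equation}
-\zeta \;\to\; \psi+\mathfrak{H}\xi^0+\frac{\mathfrak{H}}{\rho'}\left(\delta\rho-\rho'\xi^0\right)=-\zeta\,,
\end{equation}
and analogously for $\mathcal{R}$.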
Within general relativity coupled to a scalar field, one can show that $-\zeta=\mathcal{R}+O(k^2)$, where $k$ is the wavenumber in a Fourier decomposition, so that in the separate universe limit $k\rightarrow 0$
(or, more accurately, a limit in which the physical wavelength of a mode is much larger than the Hubble horizon: $k\ll\frac{a'}{N}$)
we have $-\zeta=\mathcal{R}$.
For this reason the two quantities are often treated as interchangeable when long-wavelength modes are studied, but again it is not clear whether a similar relation holds in more general cosmological models of the type we are interested in.
We would like to point out that during slow-roll inflation one can approximate
\begin{equation}
\rho=\frac{\phi'^2}{2N^2}+\pot\approx \pot\quad\Rightarrow \;\rho'\approx \phi' \frac{{\rm d} \pot}{{\rm d} \phi}
\end{equation}
and hence in this case $-\zeta\approx\mathcal{R}$ without using any gravitational field equations. We will not be interested in slow-roll inflation, but consider scalar fields with general potentials such that the high-energy regime can be dominated by kinetic energy.\\
\section{Simple conservation law for $\zeta$}\label{sec:consZeta}
We start by deriving the simplest conservation law for the perturbation variables we are considering: $\zeta$ is conserved on large scales for adiabatic matter perturbations if the continuity equation for matter is unchanged.
This is an old result in cosmology \cite{conserved} which extends directly to many quantum bounce scenarios, in particular to LQC \cite{LQC, LQC_2} and GFT \cite{GFTscalarcosmo, GFTscalarcosmo_2,deparamcosmo} which do not introduce any alterations to the dynamics of the matter content of the universe.
Rederiving this result illustrates the philosophy behind the separate-universe approach.
The continuity equation satisfied by the background variables reads
\begin{equation}
\rho' + 3 \mathfrak{H}(\rho+P)=0\,,
\label{continuity}
\end{equation}
and if we introduce locally perturbed variables according to (\ref{flrwperturb}) and (\ref{matterperturb}), assuming that the continuity equation also holds in each local patch, we find the perturbed continuity equation
\begin{equation}
\delta\rho' + 3 \mathfrak{H}(\delta\rho+\delta P)-3\psi'(\rho+P)=0\, ,
\label{pertcontinuity}
\end{equation}
using that $\mathfrak{H}_{{\rm loc}}=\mathfrak{H}-\psi'$. Hence,
\begin{equation}
-\zeta'=\psi' + \left(\frac{\mathfrak{H}}{\rho'}\delta\rho\right)' = -\left(\frac{1}{3(\rho+P)}\right)'\delta\rho+\frac{\mathfrak{H}(\delta\rho+\delta P)}{\rho+P}= \frac{\rho'+P'}{3(\rho+P)^2}\delta\rho+\frac{\mathfrak{H}(\delta\rho+\delta P)}{\rho+P}
\end{equation}
using (\ref{continuity}) and (\ref{pertcontinuity}). The assumption of adiabatic perturbations means that we can write $\frac{\delta P}{\delta\rho}=\frac{P'}{\rho'}$ and one finally obtains, after again using (\ref{continuity}), that $\zeta'=0$ for these perturbations.
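Explicitly, inserting $\delta P=\frac{P'}{\rho'}\delta\rho$ and using $\frac{\mathfrak{H}}{\rho'}=-\frac{1}{3(\rho+P)}$, which follows from (\ref{continuity}), gives
\begin{equation}
-\zeta'=\frac{\rho'+P'}{3(\rho+P)^2}\delta\rho+\frac{\mathfrak{H}}{\rho'}\,\frac{\rho'+P'}{\rho+P}\delta\rho=\frac{\rho'+P'}{3(\rho+P)^2}\delta\rho-\frac{\rho'+P'}{3(\rho+P)^2}\delta\rho=0\,.
\end{equation}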
This argument relies on the long-wavelength limit, since the perturbed continuity equation \eqref{pertcontinuity} would in general contain terms involving spatial derivatives (sec.\,\ref{sec:ddpsi}). It does not, however, use the gravitational dynamics, and hence holds in many scenarios beyond general relativity.
One commonly introduces the equation of state parameter $w = \frac{P}{\rho}$ and the sound speed $c_s^2 = \frac{P'}{\rho'}$ and using these definitions together with the continuity equation (\ref{continuity}), one finds that
\begin{equation}
w' = -3 \mathfrak{H} (w +1) (c_s^2 -w )\,.
\label{eq:dw}
\end{equation}
In the case of a perfect fluid, $w$ is constant, $c_s^2 = w$, and the adiabaticity condition is always satisfied.
A scalar field can mimic a perfect fluid if the potential is chosen such that it has a constant equation of state parameter (at least for the time scales one is interested in). In particular, this is the case for a massless scalar field, where $w=1$.
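For example, for a massless scalar field \eqref{eq:rhoPscalarField} gives $P=\rho=\frac{\phi'^2}{2N^2}$, so that $w=c_s^2=1$ at all times, and the continuity equation (\ref{continuity}) integrates to $\rho\propto a^{-6}$.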
For more general scalar field dynamics, there is exchange between kinetic and potential energy in the scalar field and perturbations cannot in general be assumed to be adiabatic; in general relativity, however, a single scalar field can only produce adiabatic perturbations on large scales, so that $\zeta'=0$ \cite{GRscalarAdiabatic}. As we will show in sec.\,\ref{sec:mfRho}, $\zeta'=0$ still holds for a single scalar field for specific forms of modified gravitational dynamics.
However, for general modifications to the Friedmann equation, one cannot conclude that a single scalar field with an arbitrary potential induces only adiabatic perturbations.
\section{Generalised Friedmann dynamics and their perturbations}\label{sec:genPert}
We now introduce gravitational dynamics given by generalised Friedmann equations of the type expected in many approaches to quantum gravity. For this generalised Friedmann equation, we write
\begin{equation}
\frac{\mathfrak{H}^2}{N^2}=\frac{\kappa}{3}\rho\,\mathcal{F}\, ,
\label{eq:genFried}
\end{equation}
where $N$ is a general choice of lapse, $\kappa=8\pi G$ is a rescaled Newton's constant and the function $\mathcal{F}$ encodes the quantum gravity corrections to the Friedmann equation of general relativity. ($\mf=1$ then corresponds to the general relativistic Friedmann equation.)
The perturbation equations that follow in this section are independent of the specific form of $\mf$, but we would like to give two examples that we will get back to later, namely the modified Friedmann equations of LQC and GFT.
As we will see, these two examples characterise two qualitatively different cases, where $\mathcal{F}$ and its perturbations either depend only on $\rho$ and $\delta \rho$ (sec.\,\ref{sec:mfRho}) or on other variables as well (sec.\,\ref{sec:GFTexample}).
When deriving the effective Friedmann equation in LQC or GFT one assumes that the matter content of the universe is a single massless scalar field, such that the energy density is given by $\rho = \frac{\pi_\phi^2}{2 a^6}$, where the scalar field momentum $\pi_\phi$ is a constant of motion.
In the standard effective dynamics of LQC \cite{effectiveLQC}
\begin{equation}
\mathcal{F}_{\rm LQC}=1-\frac{\rho}{\rho_{{\rm c}}}\,,
\label{eq:mfLQC}
\end{equation}
where $\rho_{{\rm c}}$ is a universal, maximal energy density, which characterises the regime in which quantum gravity corrections become relevant.
For phenomenological applications $\mf_{\rm LQC}$ is sometimes assumed to hold also for massive scalar fields, even if this cannot be directly derived from the quantum theory (see, e.g., \cite{LQCsepUniv, LQC_genRho, LQC_matterBounce}).
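With \eqref{eq:mfLQC} the Friedmann equation \eqref{eq:genFried} becomes $\frac{\mh^2}{N^2}=\frac{\kappa}{3}\rho\left(1-\frac{\rho}{\rho_{{\rm c}}}\right)$, so that $\mh$ vanishes when $\rho=\rho_{{\rm c}}$: the Big Bang singularity of general relativity is replaced by a bounce at this maximal energy density.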
In GFT more general forms appear such as \cite{deparamcosmo}
\begin{equation}
\mathcal{F}_{\rm GFT}=1+\frac{v_0}{a^3}+\frac{\my}{a^6}\,.
\label{eq:mfGFT}
\end{equation}
Here, $v_0$ is a fixed constant with units of volume whose interpretation comes from the underlying quantum theory. $\my$, on the other hand, is a constant of motion rather than a fundamental parameter, and so in the separate universe picture it will vary from one patch to another, $\delta \my \neq 0$. Its value is related to the volume of the universe at the bounce.
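To illustrate this, note that a bounce ($\mh=0$) requires $\mf_{\rm GFT}=0$, since the energy density of a massless scalar field is positive; assuming $v_0>0$, this has a solution at positive $a$ only for $\my<0$,
\begin{equation}
a^6+v_0\,a^3+\my=0\quad\Rightarrow\quad a^3_{\rm bounce}=\frac{\sqrt{v_0^2-4\my}-v_0}{2}\,,
\end{equation}
so that $\my$ directly determines the volume of a given patch at its bounce.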
To obtain the dynamics of perturbations for a modified Friedmann equation while being agnostic about the details of the underlying gravitational theory, we proceed as follows.
An equation of motion for $\mh$ is obtained from the time derivative of \eqref{eq:genFried}\footnote{The equation is written in this form for convenience; note, however, that division by $\mh$ and $\mf$ is not defined at the bounce.},
\begin{align}
\frac{ \mh'}{\mh} = \frac{ N'}{N} + \frac{1}{2}\left( \frac{ \rho'}{\rho} + \frac{ \mf'}{\mf} \right)\,,
\label{eq:dtauH}
\end{align}
and the first and second order equations of motion for the metric perturbation $\psi$ are obtained by perturbing \eqref{eq:genFried} and \eqref{eq:dtauH} at linear order (or equivalently, perturbing \eqref{eq:genFried} and then taking the time derivative). Giving different equivalent forms of these equations, we have
\begin{align}
\begin{split}
\mh \psi' = & -\mh^2\left(\tPhi +\frac{\delta\rho}{2\rho}+\frac{\delta \mf}{2\mf}\right)\\
= & -\frac{\kappa}{3} N^2\left(\rho\mf\,\tPhi +\frac{1}{2}\left(\mf \delta \rho + \rho \delta \mf\right)\right)\, ,
\label{eq:dPsi}
\end{split}\\
\begin{split}
- \psi'' = & \left(-\frac{N'}{N}-\frac{\mf'}{2\mf}+3\mh\frac{\rho+P}{\rho}\right)\psi' + \mh\,\tPhi' + \frac{\mh}{2}\left(\frac{\delta \mf'}{\mf}-\frac{\mf'}{\mf^2}\delta \mf\right)\\
& + \frac{\kappa}{2}N^2 \mf (\rho+P) \left(\frac{\delta\rho}{\rho}-\frac{\delta\rho + \delta P}{\rho + P}\right)\\
= & - \frac{N'}{N} \psi' + \mh\,\tPhi'
- \frac{\kappa}{2} N^2 \mathcal{F} (P+\rho ) \left( \frac{( \delta P +\delta \rho )}{P+\rho}+ \frac{\delta \mathcal{F}}{\mathcal{F}}+ 2 \tPhi \right)\\
& + \frac{\mh}{2}\frac{\mf'}{\mf}\left(-\frac{\delta \mathcal{F}}{2\mathcal{F}}+\frac{\delta \rho}{2 \rho} + \tPhi+\frac{\delta \mathcal{F}'}{\mathcal{F}'}\right)\, .\label{eq:ddPsi}
\end{split}
\end{align}
These forms can be transformed into each other by using the Friedmann equation (\ref{eq:genFried}) and by using (\ref{eq:dPsi}) to rewrite terms proportional to $\psi'$ in (\ref{eq:ddPsi}).
For $\mf =1$ (and hence $\mf'=0\,, \ \delta \mf =0 \,, \ \delta \mf' = 0$) the above reduce to the standard general relativistic equations of motions for perturbations in the long-wavelength limit (spatial gradients vanish) as obtained from the Einstein field equations.
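As a consistency check, setting $\mf=1$ in \eqref{eq:dPsi} and multiplying by three gives
\begin{equation}
3\mh\left(\psi'+\mh\,\tPhi\right)=-\frac{\kappa}{2}N^2\delta\rho\,,
\end{equation}
which is the familiar long-wavelength energy constraint of general relativity.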
So far, we have not assumed a specific form of matter content. Neither have we chosen a specific form of lapse, nor made a gauge choice for perturbation variables.
A popular choice of time coordinate is conformal time, where $N=a$, and certain gauge choices simplify the perturbation equations further.
We comment on the question of gauge in the following subsection, and then discuss a specific class of functions $\mf$ for which the perturbation equations above simplify in an LQC-like fashion.
\subsection{Gauge choices in the separate universe approximation}\label{sec:gauge}
Even though one is ultimately interested in the dynamics of gauge-invariant quantities, it is often useful to carry out calculations in a specific gauge.
A popular gauge choice in cosmology is the Newtonian or longitudinal gauge, in which all anisotropic perturbations of the metric tensor are set to zero ($B=E=0$).
This gauge was also used in previous studies of the separate universe framework in LQC \cite{LQCsepUniv} and GFT \cite{GFTsepUniv}.
Here we will instead work in the comoving gauge, which for scalar matter is defined by $\delta \phi = 0$ and $E=0\,$.\footnote{There exist different conventions for the comoving gauge: in \cite{ Giesel_2018} the comoving gauge is defined by $\delta \phi =0$ and $B=0$ instead. The above convention is used in, e.g., \cite{BaumannNotes}.}
(As the matter content in LQC and GFT is taken to be a scalar field, we focus on this case in the following.)
For scalar matter the lapse perturbation is directly related to the perturbation of the energy density and pressure \eqref{eq:pertRhoP} in comoving gauge,
\begin{align}
\delta \rho = -\frac{\phi'^2}{N^2} \tPhi = - (\rho + P)\tPhi =\delta P \,
\label{eq:tPhiCom}
\end{align}
and one can write, using (\ref{eq:tPhiCom}) and the continuity equation \eqref{continuity},
\begin{align}
-\zeta = \mr + \mh \frac{\delta \rho}{\rho'} = \mr + \frac{\tPhi}{3}\,.
\label{eq:zetaMr}
\end{align}
The reasons we choose the comoving gauge are threefold. Firstly, the comoving curvature perturbation takes a particularly simple form, $\mr = \psi$. Secondly, when working with relational settings such as GFT, where the scalar field takes the role of a physical clock \cite{GFTscalarcosmo, GFThamiltonian}, the comoving gauge is simply the statement that at an instant of time all patches of the separate universe picture have the same clock value.
The third reason is more subtle and is connected to the application of gauge choices in general cosmological perturbation theory to the separate universe picture.
As already pointed out in \cite{hamSU}, fixing the Newtonian gauge $E=B=0$ does not provide an additional prescription in the separate universe picture, as anisotropic degrees of freedom are already absent in this approximation (see \eqref{longwavel}).
In general relativity, a relation between the metric and the lapse perturbation in Newtonian gauge is obtained from the off-diagonal spatial components of the perturbed Einstein field equations ($\delta G_{i\neq j} = \kappa \delta T_{i\neq j}$) \cite{cosmopert}, which for $E=B=0$ reduce to
\begin{align}
\partial_i \partial_j \psi - \partial_i \partial_j \tPhi = 0\,.
\label{eq:EFEij}
\end{align}
They imply that $\psi = \tPhi + x^i h_i(t) + g(t)\,$, where $h_i$ and $g$ are arbitrary functions of time.
One then usually sets $\tPhi = \psi$ arguing that perturbations should average out to zero ($\int {\rm d}^3 x\;\psi = 0 = \int {\rm d}^3 x\;\tPhi $) since any homogeneous contribution to the perturbations could be absorbed in the background \cite{cosmopert}. This argument would forbid any nontrivial $h_i(t)$ or $g(t)\,$. In the separate universe framework all spatial gradients automatically vanish and the off-diagonal components of the Einstein field equations are trivially satisfied, so there is no analogue of \eqref{eq:EFEij}.
Equivalently, requiring that $\psi = \tPhi + g(t)$ is not a constraint as the perturbations $\psi, \, \tPhi$ are in any case only functions of time in the separate universe picture.
Fixing $\tPhi = \psi$ for the Newtonian gauge in the separate universe framework is therefore an additional assumption, somewhat harder to justify than in usual cosmological perturbation theory. This issue is also discussed in \cite{0801Wands}, where the authors introduce a `pseudo-longitudinal gauge', which ensures $\psi = \tPhi$ throughout as long as it is assumed to hold in some limit.\footnote{
It is also discussed for the Hamiltonian framework in \cite{hamSU}.
In the Hamiltonian picture the relation $\psi = \tPhi$ cannot be recovered due to the absence of the diffeomorphism constraint in the separate universe framework.
In \cite{hamSU} the authors recover $\Tilde{\Phi} = \psi$ by redefining the Newtonian gauge, where the redefinition relies on a relation between perturbation variables obtained from the diffeomorphism constraint.} \\
\subsection{A special case: $\mf = \mf(\rho)$}\label{sec:mfRho}
In LQC, the general perturbation equations take a particularly simple form due to the specific form of $\mf$ \eqref{eq:mfLQC}.
In this section we generalise the LQC case by considering a restricted class of corrected Friedmann equations, namely those in which $\mf$ is a function of the energy density $\rho$ only.
In particular, this means that the perturbation of $\mf$ is proportional to the perturbation of the energy density, $\delta\mf = \frac{{\rm d} \mf}{{\rm d} \rho}\delta \rho$. Unlike the case of LQC, the GFT correction given in \eqref{eq:mfGFT} does \emph{not} fall in this category, $\mf_{\rm GFT} \neq \mf(\rho)$, as $\my$ is perturbed as well.
If we define the quantities
\begin{align}
\mfr := & \frac{{\rm d} \mf}{{\rm d} \rho}\,, \qquad \mfrr:= \frac{{\rm d}^2 \mf}{{\rm d} \rho^2}\,, \qquad
\ma := \mf + \mfr\, \rho\,, \label{eq:maDef}
\end{align}
we obtain the following relations for quantities derived from $\mf$, using again the continuity equation \eqref{continuity}:
\begin{align}
\begin{split}
\delta \mf = & \,\mfr \delta \rho\,, \qquad \mf' = - 3 \mh (\rho + P)\mfr\,, \qquad
\ma' = - 3 \mh (\rho + P) (2\mfr + \rho \mfrr )\,,
\\
\frac{\delta \mf'}{\mf'} = & \, \frac{\mfrr \delta \rho}{\mfr}+\frac{\delta\rho + \delta P}{\rho + P}-\frac{\psi'}{\mh}\,.
\label{eq:mfRhoSimp}
\end{split}
\end{align}
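As a cross-check, the background relations in \eqref{eq:mfRhoSimp} follow from the chain rule together with the continuity equation alone. The following minimal symbolic sketch (Python/sympy, with illustrative variable names; `F` plays the role of $\mf$) verifies the expressions for $\mf'$ and $\ma'$:

```python
import sympy as sp

rho, P, H = sp.symbols('rho P H')   # background density, pressure and \mh at one instant
F = sp.Function('F')                # \mf as a function of rho only

rhop = -3*H*(rho + P)               # continuity equation: rho' = -3 \mh (rho + P)
Fr = F(rho).diff(rho)               # \mf_rho
Frr = F(rho).diff(rho, 2)           # \mf_rhorho

Fp = Fr*rhop                        # chain rule: \mf' = \mf_rho rho'
alpha = F(rho) + Fr*rho             # \ma = \mf + \mf_rho rho
alphap = alpha.diff(rho)*rhop       # \ma' = (d\ma/drho) rho'

# \mf' = -3 \mh (rho + P) \mf_rho
assert sp.simplify(Fp + 3*H*(rho + P)*Fr) == 0
# \ma' = -3 \mh (rho + P) (2 \mf_rho + rho \mf_rhorho)
assert sp.simplify(alphap + 3*H*(rho + P)*(2*Fr + rho*Frr)) == 0
```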
We can then write the second Friedmann equation (\ref{eq:dtauH}) and generalised perturbation equations \eqref{eq:dPsi}-\eqref{eq:ddPsi} for the $\mf(\rho)$ class of modified Friedmann equations as
\begin{align}
\mh' - \frac{ N'}{N}\mh & = - \frac{\kappa}{2}N^2 (\rho + P)\ma\,, \label{eq:mh'SU_Frho}\\
\mh \psi' & = -\mh^2\tPhi - \frac{\kappa}{6} N^2 \ma \,\delta \rho\,,
\label{eq:psi'SU_Frho}\\
\begin{split}
-\psi'' & = -\frac{N'}{N}\psi'+ \mh \tPhi' -\frac{\kappa}{2}N^2\left(\ma\,\delta P + 2(\rho+P)\ma\,\tPhi + (\mf+\rho(\rho+P)\mfrr+(2P+3\rho)\mfr)\delta\rho\right) \\
& = -\frac{N'}{N}\psi'+ \mh \tPhi' -\kappa \phi' \ma\, \delta \phi' - \left(\mfr + \frac{\rho}{2} \mfrr\right)\kappa \phi'^2\, \delta \rho \,,
\label{eq:psi''SU_Frho}
\end{split}
\end{align}
where in the last line we used the fact that matter is given by a scalar field with $\rho+P=\frac{\phi'^2}{N^2}$ (the other equations are general and hold for any matter content).
Equations \eqref{eq:mh'SU_Frho}-\eqref{eq:psi''SU_Frho} hold in any gauge.
They correspond to the LQC equations reported in \cite{LQCsepUniv} (which are given in conformal time $N=a$, and for a gauge in which $\psi=\tPhi$) for $\ma = 1 - 2\frac{\rho}{\rho_{{\rm c}}}$ and $\mfr=-\frac{1}{\rho_{{\rm c}}}$.
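The LQC values quoted here can be checked in one line: assuming $\mf_{\rm LQC} = 1 - \rho/\rho_{\rm c}$ (consistent with \eqref{eq:mfLQC}), one indeed recovers $\mfr = -1/\rho_{\rm c}$ and $\ma = 1 - 2\rho/\rho_{\rm c}$. A minimal sympy sketch:

```python
import sympy as sp

rho, rho_c = sp.symbols('rho rho_c', positive=True)
F_lqc = 1 - rho/rho_c               # assumed LQC form of \mf

Fr = sp.diff(F_lqc, rho)            # \mf_rho
alpha = F_lqc + Fr*rho              # \ma = \mf + \mf_rho rho

assert Fr == -1/rho_c
assert sp.expand(alpha - (1 - 2*rho/rho_c)) == 0
```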
To make further progress, another equation is needed.
In general relativity, this is the {\em diffeomorphism constraint} arising from
the mixed time-space components of the perturbed Einstein field equations, $\delta G^0_i = \kappa \delta T^0_i$, or \cite{cosmopert}
\begin{align}
\partial_i \left(\mh \tPhi +\psi' - \frac{\kappa}{2} \phi'\, \delta \phi \right) =: \partial_i D & = 0\,,
\label{eq:0iconstraint}
\end{align}
which implies $D = s(t)$ where $s$ is an arbitrary homogeneous function.
In the usual formalism, one then argues that perturbations are inhomogeneous functions over a homogeneous background, and any homogeneous contribution to $D$ can be absorbed in the homogeneous background dynamics to justify the requirement that the perturbation variables satisfy $D = 0$. This discussion is analogous to the gauge choice $\psi=\tPhi$ in Newtonian gauge that we discussed in sec.\,\ref{sec:gauge}.
As in the discussion in sec.\,\ref{sec:gauge}, there is no analogue of \eqref{eq:0iconstraint} in the separate universe framework due to the absence of spatial gradients.
However, in scenarios of interest such as bouncing cosmologies, one expects to set initial conditions in a regime where general relativity holds and one can consider the case where spatial gradients are small, but not exactly zero. Then, at some initial time $t_0$ where initial conditions are set, which can be any time at which the strict $k \to 0$ limit has not been applied yet, \eqref{eq:0iconstraint} implies $D=0$ for the modes of interest. If one can then show that in general $D'(t) =0$ together with the initial condition $D(t_0) = 0\,$, $D=0$ holds throughout the evolution. This means we obtain another effective constraint equation for perturbations which, as we will see below, can be used to infer conservation laws for gauge-invariant perturbation variables.
For a setting with a modified Friedmann equation, \eqref{eq:0iconstraint} represents the low-curvature limit ($\mf \to 1$) of a possibly modified form of $D$. The modified form of $D$ cannot be derived directly but must be guessed and justified in hindsight, as was done for LQC in \cite{LQCsepUniv}.
In the following, we introduce a differential equation for a suitable $D$ in comoving gauge and show that it holds for any form of $\mf(\rho)\,$.
We work in the comoving gauge, but a similar calculation can be carried out in the Newtonian gauge (see app.\,\ref{app:constrNewton}).
Inspired by \cite{LQCsepUniv}, where the corrections to $D$ for a modified Friedmann equation appear in the $\delta \phi$ term only, we assume that in comoving gauge $D$ takes the same form as in general relativity,
\begin{align}
D = \psi' + \tPhi \mh\,.
\label{eq:constraintCom}
\end{align}
From the equation for $\psi'$ (\ref{eq:psi'SU_Frho}) together with \eqref{eq:tPhiCom} we obtain
\begin{equation}
\psi' = - \mh \tPhi + \frac{\kappa}{6} \ma \frac{\phi'^2}{\mh} \,\tPhi
\label{eq:tPhiPsiCom}
\end{equation}
and it immediately follows that
\begin{align}
D= \frac{\kappa}{6} \frac{\phi'^2}{\mh} \ma \tPhi\,.
\label{eq:Df}
\end{align}
We now show that $D$ satisfies
\begin{align}
\ma D' + W \ma D - \ma' D = 0
\label{eq:constraintDiff}
\end{align}
for a certain form of $W$, and with $\ma$ defined in \eqref{eq:maDef}.
Using the perturbation equation \eqref{eq:psi''SU_Frho} for $\psi''$ in the $\mf(\rho)$ case in comoving gauge (so that $\delta\phi'=0$), $D'$ reduces to
\begin{align}
D' = \frac{N'}{N}\psi' + \mh'\,\tPhi + \kappa \phi'^2 \left(\mfr+\frac{\rho}{2} \mfrr\right) \delta \rho = \frac{N'}{N}\psi' + \mh'\,\tPhi - \kappa \frac{\phi'^4}{N^2} \left(\mfr+\frac{\rho}{2} \mfrr\right) \tPhi\,,
\end{align}
where we used the relation between $\delta \rho$ and $\tPhi$ given in \eqref{eq:tPhiCom}.
It then follows that
\begin{align}
\ma D' + W \ma D - \ma' D
= &\, \ma \tPhi\left(\mh' - \frac{N'}{N}\mh + \frac{\kappa\,\phi'^2}{6\mh }\left(\ma \frac{N'}{N} - \ma' + W\ma \right)-\frac{\kappa \phi'^4}{ N^2}\left(\mfr + \frac{\rho}{2}\mfrr\right)\right)\\
= &\, \kappa\phi'^2\ma\, \tPhi\left(\frac{1}{6\mh }\left(\ma \frac{N'}{N} - \ma' + W\ma -3\mh\ma\right)-\frac{\phi'^2}{ N^2}\left(\mfr + \frac{\rho}{2}\mfrr\right)\right)\\
= &\, \frac{\kappa\phi'^2\ma^2\, \tPhi}{6\mh }\left( \frac{N'}{N} + W -3\mh\right)\,,
\end{align}
using \eqref{eq:tPhiPsiCom}, the equation for $\mh'\,$, (\ref{eq:mh'SU_Frho}), and then eliminating $\ma'$ using (\ref{eq:mfRhoSimp}).
If we now choose $W = 3 \mh - N'/ N\,$, we obtain \eqref{eq:constraintDiff}.
Hence, as long as initial conditions are set in a regime where $D=0$ is satisfied, $\psi' + \tPhi \mh=0$ holds at all times.
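This conclusion can also be read off from the general solution of \eqref{eq:constraintDiff}: for any $W$, the equation is first-order linear and solved by $D \propto \ma\, e^{-\int W {\rm d}t}$ (for $W = 3\mh - N'/N$ the integrating factor gives $D \propto \ma\, N/a^3$), so $D(t_0)=0$ forces $D \equiv 0$. A sympy sketch verifying this by direct substitution, with generic functions:

```python
import sympy as sp

t, D0 = sp.symbols('t D_0')
alpha = sp.Function('alpha')(t)     # \ma(t), generic
W = sp.Function('W')(t)             # W(t), generic

# candidate general solution of  alpha D' + W alpha D - alpha' D = 0
D = D0*alpha*sp.exp(-sp.Integral(W, t))

residual = alpha*sp.diff(D, t) + W*alpha*D - sp.diff(alpha, t)*D
assert sp.simplify(residual) == 0
```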
We can now proceed to study the conservation laws for the gauge-invariant curvature perturbations $\zeta$ and $\mr$ defined in \eqref{gaugeinvperts} for a modified Friedmann equation of the $\mf = \mf(\rho)$ type, which is one of the main results of this paper.
As established in sec.\,\ref{sec:consZeta}, $\zeta$ is conserved whenever the adiabaticity condition $\frac{\delta P}{P'}=\frac{\delta\rho}{\rho'}$ holds.
If we recall that in comoving gauge, $\delta \rho = -(\rho + P)\tPhi = \delta P\,$,
it follows from \eqref{eq:Df} and $D=0$ that $\tPhi =0 $ and hence the adiabaticity equation is always satisfied, irrespective of the explicit form of $\pot\,$.
Therefore, in the $\mf(\rho)$ case (which, to repeat, includes both general relativity and standard LQC), $\zeta$ will always be conserved on super-horizon scales, and a single scalar matter field cannot introduce non-adiabatic perturbations.
Furthermore, from \eqref{eq:zetaMr} it follows that $-\zeta = \mr\,$, as in general relativity.
While the intermediate steps in this argument, like the form of the constraint equation and $\delta \rho=0\,$, are gauge-dependent statements, the implications for the gauge-invariant variables $\zeta$ and $\mr$ hold in any gauge.
\subsection{$\mf \neq \mf(\rho)$ example: GFT}
\label{sec:GFTexample}
We now turn to the more general case of $\mf \neq \mf(\rho)$, where there is no equation analogous to the diffeomorphism constraint of general relativity.
In this case, we cannot exclude non-adiabatic perturbations in general, but if we restrict ourselves to the special case of a scalar field satisfying the adiabaticity condition $\delta P = c_s^2 \delta \rho\,$, it follows that $\zeta'=0\,$, as demonstrated in sec.\,\ref{sec:consZeta}.
The quantity $\mr$ on the other hand is no longer conserved: if we insert $\delta \rho = - (1+w)\rho\, \tPhi$ (see \eqref{eq:tPhiCom}) into the equation of motion for $\psi$ \eqref{eq:dPsi}, we find that in comoving gauge $\mr$ satisfies
\begin{align}
-\frac{\mr'}{\mh} = \tPhi + \frac{\delta \rho}{2 \rho} + \frac{\delta \mf}{2 \mf} = (1-w)\frac{\tPhi}{2} + \frac{\delta \mf}{2 \mf}\,.
\label{eq:mr'Comoving}
\end{align}
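The second equality in \eqref{eq:mr'Comoving} is simple algebra: with $\delta\rho = -(1+w)\rho\,\tPhi$, one has $\tPhi + \frac{\delta\rho}{2\rho} = (1-w)\frac{\tPhi}{2}$. As a one-line sympy check:

```python
import sympy as sp

w, rho, Phi = sp.symbols('w rho Phi')
drho = -(1 + w)*rho*Phi             # comoving-gauge relation for delta rho

assert sp.simplify(Phi + drho/(2*rho) - (1 - w)*Phi/2) == 0
```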
Consequently, for generalised Friedmann equations with general $\mf$, $-\zeta = \mr$ no longer holds: while $\zeta$ remains constant on super-horizon scales, $\mr$ now has non-trivial dynamics. For a massless scalar field ($c_s^2 = w = 1$), which we consider in the following, the dynamics of $\mr$ are determined only by the expression for $\mf\,$.\\
In the remainder of this section, we investigate the dynamics of the comoving curvature perturbation $\mr$ in a GFT toy model as established in \cite{deparamcosmo}, which leads to an effective Friedmann equation specified by \eqref{eq:mfGFT}.
The GFT framework uses a massless scalar field $\phi$ as the only matter content of the universe (or rather, as the dominant matter content in the bounce region of interest when studying quantum gravitational effects), and $\rt$ also serves as a relational matter clock.
In the special case of a massless scalar field, $\pot = 0\,$, the Klein--Gordon equation \eqref{eq:klGordon} can be solved and the expressions for the energy density and its perturbation simplify as
\begin{align}
\phi' = \frac{\pi_\phi N}{a^3} \quad \Rightarrow \quad \rho & = \frac{\pi_\phi^2}{2 a^6}\,, \qquad
\delta \rho = 2\rho \left(\frac{\delta \pi_\phi}{\pi_\phi} + 3 \psi\right)\,, \qquad
\rho' = -3 \mh \frac{\pi_\phi^2}{a^6} = - 6 \mh \rho\,,
\label{eq:masslesScalarField}
\end{align}
where the scalar field momentum $\pi_\phi$ is a constant of motion.
Furthermore, for a massless scalar field, the relation between the lapse perturbation and the energy density perturbation in comoving gauge as given in \eqref{eq:tPhiCom} reduces to $\frac{\delta \rho}{\rho} = - 2 \tPhi\,$.
One also obtains
\begin{align}
\zeta = \frac{1}{3}\frac{\delta \pi_\phi}{\pi_\phi}\,,
\end{align}
so that the conservation of $\zeta$ follows directly from the fact that $\pi_\phi$ and its perturbation $\delta\pi_\phi$ are constants of motion.
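Both the linearised $\delta\rho$ in \eqref{eq:masslesScalarField} and the resulting form of $\zeta$ can be checked symbolically. The sketch below perturbs $\rho = \pi_\phi^2/(2a^6)$ via $a \to a(1-\psi)$ and $\pi_\phi \to \pi_\phi + \delta\pi_\phi$, and assumes the standard uniform-density form $\zeta = -\psi - \mh\,\delta\rho/\rho'$ for the definition \eqref{gaugeinvperts}, which is not reproduced in this section:

```python
import sympy as sp

a, psi, pphi, dpi, eps, H = sp.symbols('a psi pi_phi dpi_phi epsilon H', positive=True)

rho = lambda p, s: p**2/(2*s**6)                    # rho = pi_phi^2 / (2 a^6)
pert = rho(pphi + eps*dpi, a*(1 - eps*psi))         # perturbed energy density
drho = pert.series(eps, 0, 2).removeO().coeff(eps)  # first-order piece

# delta rho = 2 rho (delta pi_phi / pi_phi + 3 psi)
assert sp.simplify(drho - 2*rho(pphi, a)*(dpi/pphi + 3*psi)) == 0

# zeta = -psi - H delta rho / rho'  with  rho' = -6 H rho
zeta = -psi - H*drho/(-6*H*rho(pphi, a))
assert sp.simplify(zeta - dpi/(3*pphi)) == 0
```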
We first consider the evolution of $\mr \ (=\psi$ in comoving gauge) as obtained from the evolution of the GFT volume operator studied separately in each patch of the separate universe picture and then proceed to compare this to the dynamics of $\mr$ obtained by solving the generalised perturbation equation \eqref{eq:mr'Comoving}.
We limit our presentation to the main points; for details, please see app.\,\ref{app:GFT}.
\subsubsection{Evolution of $\mr$ for exact solutions in a GFT model}\label{sec:GFTquantum}
The GFT corrected Friedmann equation originates from the evolution of the expectation value of the GFT volume operator\footnote{Not to be confused with the potential of the scalar field $\pot$.} $V(\phi) := \langle \hat{V} (\phi) \rangle$ taken over a suitable class of semiclassical states with respect to the clock $\phi$.
The analytic solution for the evolution of $V(\phi)$ in a non-interacting GFT and assuming a single dominant field mode is given by \cite{deparamcosmo}
\begin{align}
V(\rt) = v_0 A e^{2 \omega \rt} + v_0 B e^{- 2 \omega \rt} - \frac{v_0}{2}\,,
\label{eq:VGFT}
\end{align}
where $A,\, B \geq 0$ are real parameters determined by the initial conditions (and $v_0$ is a fixed constant).
The effective Friedmann equation is then obtained from $\frac{1}{V(\phi)^2}\left(\frac{{\rm d} V}{{\rm d}\phi}\right)^2$ (which can be related to the usual form using $V = a^3$ and rewriting $\mh$ in relational time $\phi\,$, see \eqref{eq:friedRelational}), and
in order to obtain the correct late-time limit of this Friedmann equation the fundamental parameter $\omega$ is fixed to satisfy $\omega^2 = \frac{3}{8}\kappa\,$.
To obtain an exemplary evolution of $\mr$ directly from the solution to $V(\rt)$ as given in \eqref{eq:VGFT}, we set up an ensemble of separate universe patches labelled by $p$, each with slightly different initial conditions $A_p, \ B_p\,$.
The bounce in each patch happens at $\phi_{p, \rm bounce} = \frac{1}{4\omega}\log \left(\frac{B_p}{A_p}\right)$, such that for generic initial conditions each patch reaches its minimum volume at a different value of $\phi\,$.
We obtain the perturbation $\psi_p$ of each patch from $V_p = (a_p)^3 = (a_{bg})^3 (1 - 3 \psi_p)$ at linear order, such that
\begin{align}
\psi_p & = \frac{1}{3}\left( 1 - \frac{V_p}{V_{bg}} \right)\,, \quad \text{where}\
V_{bg} := \frac{1}{N_{\rm patches}} \sum_p V_p\,, \label{eq:psiGFT}
\end{align}
and $N_{\rm patches}$ is the total number of patches in the ensemble considered.
This gives an analytic expression for the perturbation $\psi_p$ (and hence $\mr_p$) of each patch.
In comoving gauge the value of $\phi$ at a given instant of relational time is (by definition) the same in each patch, and it is therefore straightforward to compare the evolution of $V_p(\phi)$ of different patches (unlike in \cite{GFTsepUniv}, where more general gauge choices were studied).
\subsubsection{Evolution of $\mr$ from separate universe perturbation equations}
We now compare the evolution of $\psi_p$ as given by \eqref{eq:VGFT} and \eqref{eq:psiGFT} to that obtained from the generalised perturbation equations \eqref{eq:dPsi} or \eqref{eq:mr'Comoving} in comoving gauge.
We wish to establish if and for how long these generalised perturbation equations correctly capture the exact evolution of $\mr\,$.
As the matter content is given by a massless scalar field, $w=1$ in \eqref{eq:mr'Comoving} and $\mr' = -\mh \frac{\delta \mf}{2 \mf}\,$.
We fix $\mf$ as given in \eqref{eq:mfGFT}, so that
\begin{equation}
\mf = 1+\frac{v_0}{a^3}+ \frac{\my}{a^6}\,, \quad \delta\mf = 3\frac{v_0}{a^3}\psi + 6 \frac{\my}{a^6}\psi + \frac{\delta \my}{a^6}\, ,
\label{eq:mfDeltaMfGFT}
\end{equation}
and the constant of motion $\my$ is related to the coefficients in \eqref{eq:VGFT} as
\begin{align}
\my = \frac{v_0^2}{4}- 4\,v_0^2 \, A\, B\,.
\end{align}
From the definition of $A$ and $B$ from the underlying quantum theory, it follows that $A\, B \geq \frac{1}{16}$ and hence $\my \leq 0$ (see app.\,\ref{app:GFT}).
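As a cross-check of these identifications, the exact solution \eqref{eq:VGFT} satisfies the relational Friedmann equation $\left(\frac{{\rm d}V}{{\rm d}\phi}\right)^2 = \frac{3}{2}\kappa\left(V^2 + v_0 V + \my\right)$, i.e., $\mf = 1 + v_0/V + \my/V^2$ with $a^3 = V$, precisely for $\my = \frac{v_0^2}{4} - 4 v_0^2 AB$ and $\omega^2 = \frac{3}{8}\kappa\,$. In sympy:

```python
import sympy as sp

phi = sp.symbols('phi', real=True)
kappa, v0, A, B = sp.symbols('kappa v0 A B', positive=True)
omega = sp.sqrt(3*kappa/8)

# exact GFT volume evolution, cf. eq. (VGFT)
V = v0*A*sp.exp(2*omega*phi) + v0*B*sp.exp(-2*omega*phi) - v0/2
y = v0**2/4 - 4*v0**2*A*B           # the conserved quantity \my

residual = sp.diff(V, phi)**2 - sp.Rational(3, 2)*kappa*(V**2 + v0*V + y)
assert sp.simplify(residual) == 0
```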
We can consider at least two inequivalent ways of defining the background quantity $\my = \my_{bg}$, namely $\my = \frac{v_0^2}{4} - 4\,v_0^2\,A_{bg}\,B_{bg}$ or $\my = \frac{1}{N_{\rm patches}}\sum_p \my_p\,$, where we define $A_{bg} := \frac{1}{N_{\rm patches}}\sum_p A_p$ and $B_{bg} := \frac{1}{N_{\rm patches}}\sum_p B_p\,$.
If we average over the volumes of each patch $V_p$, which are given by \eqref{eq:VGFT} with $A,\, B \to A_p, \, B_p\,$, we find that $V_{bg}$ is obtained by
replacing $A,\, B$ with their background values $A,\, B \to A_{bg}, \, B_{bg}\,$
in \eqref{eq:VGFT}.
We will therefore use $\my = \frac{v_0^2}{4} - 4\,v_0^2\,A_{bg}B_{bg}$ in the following. The alternative choice would introduce nonlinear averaging effects in the evolution of $V_{bg}$ around the bounce, but these would have no impact on the qualitative statements made in the remainder of this section.
We define $\delta \my := \my_p - \my_{bg} = -4\, v_0^2 \left( \delta A_p\, B_{bg} + \delta B_p\, A_{bg} + \delta A_p\, \delta B_p \right)\,$.
To solve the equation of motion for $\mr$, we first
obtain an expression for $\mh$ by solving the background Friedmann equation \eqref{eq:genFried},
which in relational time reads
\begin{align}
\mh = & \frac{1}{3}\frac{{\rm d} V}{{\rm d} \rt} \frac{1}{V} \rt'
\qquad \Rightarrow \qquad \left(\frac{{\rm d} V}{{\rm d} \rt} \frac{1}{V}\right)^2 = \frac{3}{2}\kappa \mf\,,
\label{eq:relFried}
\end{align}
and is solved by
\begin{align}
V(\rt) = \frac{\mc}{4}e^{ \sqrt{3 \kappa/2}\, \phi} + \left(- \my + \frac{v_0^2}{4}\right)\mc^{-1} e^{-\sqrt{3 \kappa /2}\, \phi} - \frac{v_0}{2}\,.
\label{eq:Vfried}
\end{align}
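One can check directly that \eqref{eq:Vfried} solves \eqref{eq:relFried} with $\mf = 1 + v_0/V + \my/V^2$ for any value of the integration constant $\mc\,$; a sympy sketch:

```python
import sympy as sp

phi = sp.symbols('phi', real=True)
kappa, v0, c = sp.symbols('kappa v0 c', positive=True)
y = sp.symbols('y', real=True)      # \my (may be negative)
lam = sp.sqrt(3*kappa/2)

# general solution of the relational Friedmann equation, cf. eq. (Vfried)
V = c/4*sp.exp(lam*phi) + (v0**2/4 - y)/c*sp.exp(-lam*phi) - v0/2

residual = sp.diff(V, phi)**2 - sp.Rational(3, 2)*kappa*(V**2 + v0*V + y)
assert sp.simplify(residual) == 0
```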
As an example, we consider again an ensemble of patches that follow \eqref{eq:VGFT} with perturbed initial conditions (different values of $A_p, \ B_p$ for each patch). These determine the value of $\my_{bg}$ and the integration constant $\mc$ is fixed by setting the initial condition from $V_{bg}$ as given by \eqref{eq:psiGFT} in the post-bounce regime.
The solution to the modified Friedmann equation $V(\phi)$ as given in \eqref{eq:Vfried} then agrees with the exact expression for $V_{bg}$ obtained from \eqref{eq:VGFT} and \eqref{eq:psiGFT}.\\
To now obtain the evolution of $\mr = \psi$, we solve \eqref{eq:mr'Comoving}. However, as we are also concerned with the bounce region, where $\mh = 0 = \mf\,$, we use the following form to avoid division by zero and rewrite it in relational time:
\begin{align}
2 \mh \psi' = & -\frac{\kappa}{6} \phi'^2 \delta \mf \qquad \Rightarrow \qquad \frac{{\rm d} V}{{\rm d} \rt} \frac{1}{V} \frac{{\rm d} \psi}{{\rm d} \rt} = - \frac{\kappa}{4} \delta \mf\,.
\label{eq:psi'relational}
\end{align}
Note that this is independent of the explicit form of the lapse $N$, like the relational Friedmann equation \eqref{eq:relFried}.
A solution to \eqref{eq:psi'relational} (inserting the solution \eqref{eq:Vfried} in \eqref{eq:psi'relational}) is given by
\begin{align}
\psi = \frac{ \mc_\psi \left(\mc^2 e^{ \sqrt{6 \kappa }\, \phi }-v_0^2 + 4 \my \right) + \frac{4}{3} \delta \my }{\left(\mc\, e^{ \sqrt{3 \kappa/2 }\, \phi }-v_0\right)^2-4 \my}\,.
\label{eq:psiPertSol}
\end{align}
The integration constant $\mc_\psi$ is fixed by setting the initial condition in the post-bounce regime from the exact solution obtained from \eqref{eq:psiGFT} for a specific patch of the ensemble.
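The solution \eqref{eq:psiPertSol} can be verified by direct substitution: inserting it together with \eqref{eq:Vfried} and $\delta\mf$ from \eqref{eq:mfDeltaMfGFT} (with $a^3 = V$) into \eqref{eq:psi'relational}, the residual vanishes identically. A sympy sketch of this check:

```python
import sympy as sp

phi = sp.symbols('phi', real=True)
kappa, v0, c = sp.symbols('kappa v0 c', positive=True)
y, dy, cpsi = sp.symbols('y dy c_psi', real=True)   # \my, \delta\my, \mc_psi
lam = sp.sqrt(3*kappa/2)

V = c/4*sp.exp(lam*phi) + (v0**2/4 - y)/c*sp.exp(-lam*phi) - v0/2
psi = (cpsi*(c**2*sp.exp(2*lam*phi) - v0**2 + 4*y) + sp.Rational(4, 3)*dy) \
      / ((c*sp.exp(lam*phi) - v0)**2 - 4*y)

# delta f = 3 v0 psi / a^3 + 6 \my psi / a^6 + delta \my / a^6  with  a^3 = V
dF = 3*v0*psi/V + (6*y*psi + dy)/V**2

residual = sp.diff(V, phi)/V*sp.diff(psi, phi) + kappa/4*dF
assert sp.simplify(residual) == 0
```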
Fig.\,\ref{fig:q_vs_SU} shows the exact evolution of $\mr$ as obtained from \eqref{eq:psiGFT} as well as the solution to the perturbation equations \eqref{eq:psiPertSol} for a patch of an exemplary ensemble.
If perturbations are small, the exact and perturbative solutions agree well. The difference in the asymptotic values in the pre-bounce regime is given by $\frac{1}{3}\frac{\delta A_p}{A_{bg}}\frac{\delta B_p}{B_{bg}}\,$.
If $\psi$ is of order $\epsilon$, the discrepancy between the asymptotic values is of order $\epsilon^2$, i.e., a quantity that is assumed negligible in linear perturbation theory. (For details, see app.\,\ref{app:asymptoticVals}.)
We would like to point out, however, that linear perturbation theory breaks down in the bounce region, since perturbations are no longer small relative to their respective background quantity: at the bounce, $\mf =0$ but $\delta \mf\neq 0$, and hence $\frac{\delta \mf}{\mf} \ll 1$ does not hold (fig.\,\ref{fig:deltaMfMf}).
Note that in the special case where all patches reach their minimum volume at the same value of $\rt\,$ ($\frac{B_p}{A_p}$ is the same in all patches), the qualitative evolution of $\psi$ differs from the example in fig.\,\ref{fig:q_vs_SU}.
Then, even though $\psi$ is not constant around the bounce, $\mr$ takes the same value in the semiclassical regimes before and after the bounce.
We can then conclude that, despite the invalidity of linear perturbation theory in the bounce region, the generalised perturbation equations we established accurately capture the non-trivial evolution of $\mr$ introduced by the modified Friedmann equation across the bounce if $\psi$ is sufficiently small.
If perturbations become too large, the perturbation equations reproduce the correct qualitative behaviour of $\psi$, but lead to different values around the bounce and in the post-bounce regime. \\
\begin{figure}
\centering
\begin{subfigure}[t]{0.31\textwidth}
\centering
\includegraphics[width=\textwidth]{img/pert_psi_new2.pdf}
\caption{
Evolution of $\psi_{\rm pert}$ \eqref{eq:psiPertSol} in an exemplary patch. The horizontal dashed lines represent the asymptotic values of the solution \eqref{eq:psiAsympPertApp}.
}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.31\textwidth}
\centering
\includegraphics[width=\textwidth]{img/pert_psi_diff_new2.pdf}
\caption{ The difference between the exact solution \eqref{eq:psiGFT} and the solution obtained from the perturbation equations \eqref{eq:psiPertSol}, $\Delta \psi = \psi_{\rm exact} - \psi_{\rm pert}$.
}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.31\textwidth}
\centering
\includegraphics[width=\textwidth]{img/pert_deltaF_new2.pdf}
\caption{
In the immediate vicinity of the bounce $\frac{\delta \mf_p}{ \mf_{bg}}$ is large, indicating a breakdown of linear perturbation theory.
}
\label{fig:deltaMfMf}
\end{subfigure}
\caption{
$\psi$ for a single patch of an ensemble with $N_{\rm patches} = 16$ (see also fig.\,\ref{fig:exactGFT}) as given in \eqref{eq:psiPertSol} ($\psi_{\rm pert}$) compared to the exact solution \eqref{eq:psiGFT} ($\psi_{\rm exact}$). The difference between the two solutions increases in the bounce region and asymptotically approaches a constant value, but remains small throughout. Initial conditions are set in the post-bounce regime at $\phi = 4$. The asymptotic values of $\psi$ are given by \eqref{eq:psiAsympExactApp} and \eqref{eq:psiAsympPertApp}.
While the qualitative behaviour of the plots and the conclusions we draw in the main text are independent of the specific choice of initial conditions, we quote the numerical values of parameters in the solution of $\psi_{\rm pert}$ for reference:
$A_{bg} =200.226\,, \ B_{bg} = 200.262\,, \ \my = -160390\,, \ \mc = 800.903\,, \ \delta \my = 35.0685\,, \ \mc_\psi = 2.75576\times 10^{-4}\,, \ v_0 =1\,$. The bounce time is $\phi_{\rm bounce} = 1.47351 \times 10^{-5}$ and we set $\kappa = 8 \pi$.}
\label{fig:q_vs_SU}
\end{figure}
We conclude this section with two remarks:
Even though the far pre- and post-bounce regimes follow general relativistic dynamics, the relation $-\zeta = \mr$ can only hold in one of them: $\zeta$ remains conserved even in the GFT case, but $\mr$ has different asymptotic values (see fig.\,\ref{fig:q_vs_SU}).
This can be understood by recalling that the dynamical equivalence of $-\zeta$ and $\mr$ in the $\mf(\rho)$ case follows from initially setting $D=0$ and the conservation law of $D$ (see sec.\,\ref{sec:mfRho}). If the system evolves through a period in which the conservation law for $D$ no longer holds, as is the case here, this can introduce a shift between $-\zeta$ and $\mr$. Hence, in the GFT bouncing scenario where $\mr$ has non-trivial dynamics, the far pre- and post-bounce phases must be treated as independent general relativistic regimes.
Finally, the assumption that the background dynamics satisfy the same Friedmann equation as the individual locally homogeneous patches is not exact, but only holds in a perturbative regime (see app.\,\ref{app:GFT}).
The fact that averaged quantities and their perturbations are inadequate to capture the true evolution is referred to as `the averaging problem' in standard cosmology \cite{avgProb_Zalaletdinov, avgProb_Wiltshire, avgProb_Hossenfelder}.
It can be summarised as follows: the assumption that the Universe (even at present) is homogeneous and isotropic, such that it can be described by the FLRW metric, only holds on average over large scales.
Einstein's equations are highly nonlinear and it is per se unclear whether an average of an exact solution that takes the true matter distribution of the Universe into account will match a solution obtained from perturbations around an exact FLRW universe.
\section{Relation to second order perturbation equations}\label{sec:MS}
In the previous section we studied the dynamics of the comoving curvature perturbation $\mr$ by deriving an analogue to the diffeomorphism constraint and thereby obtaining a conservation law in the $\mf(\rho)$ case, and then considering directly a solution to the first order equation in $\psi$ for a GFT model, where the matter content is given by a massless scalar field and thus takes a specific, simple form.
As established in sec.\,\ref{sec:consZeta}, $\zeta$ is conserved in the separate universe picture as long as the continuity equation holds.
Solving the first order equation \eqref{eq:dPsi} in $\psi$ directly, as we did in the GFT case, works only for specific forms of matter content, where one can eliminate all perturbation variables but one.
In more general cases, one can combine perturbation equations to obtain a single second order equation of motion that only refers to a single perturbation variable (and background quantities).
This could be an equation for $\psi$ (which is equivalent to the Bardeen variable $\Psi$ in longitudinal gauge) as in \cite{0801Wands}, or the Mukhanov--Sasaki equation as in \cite{0112249}.
Notably, these two approaches lead to different results for the evolution of $\zeta$ in the separate universe framework already in general relativity. If one obtains its evolution from a second order equation in $\psi\,$, $\zeta$ remains constant, in agreement with our considerations in sec.\,\ref{sec:consZeta}. On the other hand, if one solves the long-wavelength limit of the Mukhanov--Sasaki equation, the solution for $\zeta$ has a constant and a dynamical part, where the latter is particularly important in the contracting branch.
We will discuss how this discrepancy could be understood as a limitation of the strict separate universe limit.
In order to relate our results to some of the literature, we summarise the above-mentioned two second order approaches of main interest in standard cosmology and comment how they would apply to the more general types of cosmological dynamics we consider.
To simplify comparison, in this section we use conformal time ($N=a$, and we denote the Hubble parameter as $\frac{a'}{a} = \hubb$) and longitudinal gauge ($E=B=0$ and $\psi = \tPhi\,$; the origin of the last relation was discussed in sec.\,\ref{sec:gauge}).
In this gauge, the relevant linearised Einstein equations involving the perturbation variable $\psi$ are (see, e.g., \cite{cosmopert})
\begin{align}
-k^2 \psi -3 \hubb \left(\hubb \psi +\psi ' \right) & = \frac{\kappa}{2} a ^2 \delta \rho\, ,
\label{eq:psi'GR}\\
\psi \left(2 \hubb' +\hubb ^2\right)+3 \hubb \psi ' +\psi '' & =\frac{\kappa}{2} a ^2 \delta P\,.
\label{eq:psi''GR}
\end{align}
\subsection{Second order equation for the metric perturbation $\psi$}
\label{sec:ddpsi}
One can combine the temporal \eqref{eq:psi'GR} and spatial-diagonal \eqref{eq:psi''GR} components of the Einstein equations to obtain an equation of motion for a single perturbation variable only, using that for adiabatic perturbations $\frac{\delta P}{\delta \rho} = \frac{P'}{\rho'} = c_s^2\,$. Together with the background equation $\hubb' = - \frac{1}{2}\hubb^2 (1+ 3w)\,$,
one obtains
\begin{align}
3 \hubb^2\left( c_s^2 - w \right)\psi +3 \hubb (c_s^2 +1) \psi ' +\psi '' = -c_s^2 k^2 \psi \, .
\label{eq:psi''GR_k}
\end{align}
In the separate universe limit $k\rightarrow 0 \,$, one neglects the right-hand side, obtaining an equation that also follows from our earlier general separate universe equations (\ref{eq:dPsi}) and (\ref{eq:ddPsi}) in conformal time and for $\mf=1$. In this limit, one can find the general solution $\psi(\eta) = \frac{ \hubb}{a^2} \left(\frac{3}{2} C_1 \int {\rm d}\eta \left(a^2 \left(w+1\right)\right) + C_2\right)\,$, where $C_1\,$, $C_2$ are constants depending on $k$ in the range of wavenumbers covered by this approximation (see, e.g., \cite{Bertschinger, 0801Wands}\footnote{To see that this expression solves \eqref{eq:psi''GR_k} when $k\to0 \, $, one needs to use the background equations for $\hubb'$ and $\hubb''$ as well as the continuity equation \eqref{eq:dw}.}).
We can obtain an expression of $\zeta$ in terms of $\psi$ by replacing the energy density perturbation \eqref{eq:psi'GR} and inserting the continuity equation \eqref{continuity} in its definition \eqref{gaugeinvperts}:
\begin{align}
-\zeta= \frac{2 k^2}{9 \hubb ^2 (w +1)} \psi +\frac{3 w + 5}{3 (w +1)}\psi +\frac{2 \psi ' }{3 \hubb (w +1)}\,.
\label{eq:zetaPsi}
\end{align}
One can then derive an equation of motion for $\zeta$ which, after using \eqref{eq:psi''GR_k} and the background equations for $\hubb'$ and $w'\,$, reads
\begin{align}
-\zeta' = \frac{2 k^2 \left( \hubb \psi +\psi ' \right)}{9 \hubb ^2 (w +1)}\,,
\label{eq:dZeta}
\end{align}
so that we again find $\zeta'=0$ when spatial gradients can be neglected.
In particular, one can verify explicitly that the long-wavelength solution $\psi(\eta)$ derived above results in
\begin{align}
-\zeta(\eta) = C_1- k^2 \frac{2 C_2 + 3 C_1 \int {\rm d}\eta\left(a ^2 (w +1)\right)}{9 a ^2 \hubb (w +1)}
\end{align}
and so again $\zeta$ is a constant in the separate universe limit $k\rightarrow 0\,$: the second solution $\psi\sim\frac{\hubb}{a^2}$ does not contribute.
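Both statements can be confirmed for an explicit background. For a constant equation of state $w$ one has $c_s^2 = w$ and $a \propto \eta^{2/(1+3w)}$ in conformal time; the sketch below checks that the quoted $\psi(\eta)$ solves \eqref{eq:psi''GR_k} at $k=0$ and that the $k$-independent part of $-\zeta$ in \eqref{eq:zetaPsi} is exactly $C_1$ (a check under the stated power-law assumption, not a general proof):

```python
import sympy as sp

eta, k = sp.symbols('eta k', positive=True)
C1, C2 = sp.symbols('C_1 C_2')
w = sp.symbols('w', positive=True)           # constant equation of state
p = 2/(1 + 3*w)                              # a ~ eta^p in conformal time
a = eta**p
H = sp.diff(a, eta)/a                        # conformal Hubble rate \hubb
cs2 = w                                      # adiabatic sound speed for constant w

# psi = H/a^2 ( (3/2) C1 int a^2 (1+w) d eta + C2 )
I = (1 + w)*eta**(2*p + 1)/(2*p + 1)         # antiderivative of a^2 (1+w)
psi = H/a**2*(sp.Rational(3, 2)*C1*I + C2)

ode = sp.diff(psi, eta, 2) + 3*H*(1 + cs2)*sp.diff(psi, eta) + 3*H**2*(cs2 - w)*psi
assert sp.simplify(ode) == 0                 # k -> 0 limit of eq. (psi''GR_k)

neg_zeta = 2*k**2*psi/(9*H**2*(w + 1)) + (3*w + 5)*psi/(3*(w + 1)) \
           + 2*sp.diff(psi, eta)/(3*H*(w + 1))
assert sp.simplify(neg_zeta.coeff(k, 0) - C1) == 0
```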
\\
We will now investigate to what extent this approach could be illuminating for the general cosmological scenarios we consider in this paper.
Even though it is inherent to the separate universe approach that the form of corrections to the perturbation equations arising from spatial gradients cannot be determined, we include a generic form of possible modifications, assuming they appear in a similar way to $k-$dependent terms in general relativity.
Recall that in order to derive the expression for $\psi''$ in the separate universe picture \eqref{eq:ddPsi} we take the derivative of the first order equation for $\psi$ \eqref{eq:dPsi} and insert an expression for $\delta \rho'$ obtained by perturbing the continuity equation \eqref{pertcontinuity}.
If we follow the same procedure when including inhomogeneities in $\psi'\,$, we also need to include $k-$dependent correction terms in $\delta \rho'$ to ensure consistency with general relativity: inhomogeneous changes to the dynamics of metric perturbations must lead to changes in the dynamics of the matter perturbations (whereas in the separate universe picture we assumed the matter sector remains unaltered).
Specifically, we consider modifications to the first order equation of $\psi$ \eqref{eq:dPsi} and the perturbed continuity equation \eqref{pertcontinuity} of the form
\begin{align}
\hubb \psi' & = - \hubb^2 \left( \psi + \frac{\delta \rho}{2\rho} + \frac{\delta \mf}{2 \mf }\right) + G_k\,, \label{eq:dpsimodK}\\
\delta \rho' & = 3 \psi' (\rho +P) - 3 \hubb (\delta \rho + \delta P) + Z_k\, ,\label{eq:dDeltaRhoModK}
\end{align}
where in the classical limit $G_k \to -\frac{k^2}{3}\psi$ and $Z_k \to -\frac{2 k^2}{\kappa a^2}(\psi' + \hubb \psi)\,$.
One can then compute an expression for $\psi''$ from the derivative of \eqref{eq:dpsimodK} by inserting \eqref{eq:dDeltaRhoModK}, the continuity equation and replacing $\delta \rho$ as given by \eqref{eq:dpsimodK}, as well as making use of the background equation $\hubb' = -\frac{1}{2}\left(\hubb^2 (1+ 3w) - \hubb \frac{\mf'}{\mf} \right)$ and the modified Friedmann equation:
\begin{align}
\begin{split}
3 \mf\hubb^2 (c_s^2-w) \psi + \left(3 \mf \hubb (c_s^2 +1) -\frac{\mf'}{2}\right)\psi' + \mf \psi'' + \left(3 \hubb^2 (c_s^2 -w) - \hubb \frac{\mf'}{\mf}\right)\frac{\delta \mf}{2} + \hubb \frac{\delta \mf'}{2} \\
= -\frac{\kappa a ^2 \mf^2 Z_k }{6 \hubb}+ G_k \left((3 c_s^2+1) \mf -\frac{ \mf'}{ \hubb}\right) +\frac{G_k' }{\hubb} \mf \, .
\label{eq:ddpsimodK}
\end{split}
\end{align}
In the classical limit ($\mf \to 1\,, \ \delta \mf \to 0 \,, \ \mf' \to 0,\, \delta \mf'\to 0$ and $G_k\, , \ Z_k$ as given above) this reduces to \eqref{eq:psi''GR_k}.
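The classical-limit reduction can be confirmed symbolically: substituting $\mf = 1$, $\delta\mf = 0$, $G_k = -\frac{k^2}{3}\psi$ and $Z_k = -\frac{2k^2}{\kappa a^2}(\psi' + \hubb\psi)$ into the right-hand side of \eqref{eq:ddpsimodK} yields $-c_s^2 k^2 \psi$, the right-hand side of \eqref{eq:psi''GR_k}. In sympy:

```python
import sympy as sp

eta, k, kappa = sp.symbols('eta k kappa', positive=True)
H = sp.Function('H')(eta)       # \hubb
a = sp.Function('a')(eta)
psi = sp.Function('psi')(eta)
cs2 = sp.Function('cs2')(eta)   # sound speed squared

Gk = -k**2/3*psi                                       # classical limit of G_k
Zk = -2*k**2/(kappa*a**2)*(sp.diff(psi, eta) + H*psi)  # classical limit of Z_k

# right-hand side of eq. (ddpsimodK) with \mf = 1, \mf' = 0
rhs = -kappa*a**2*Zk/(6*H) + Gk*(3*cs2 + 1) + sp.diff(Gk, eta)/H

assert sp.simplify(rhs + cs2*k**2*psi) == 0            # equals -c_s^2 k^2 psi
```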
In order to recover \eqref{eq:dPsi}, \eqref{eq:ddPsi} and the perturbed continuity equation \eqref{pertcontinuity} in the separate universe limit, we require $Z_k \to 0$ and $G_k \to 0$ for small $k\,$.
However, unlike for \eqref{eq:psi''GR_k}, the long-wavelength limit of \eqref{eq:ddpsimodK} cannot necessarily be solved directly to obtain a solution for $\psi\,$, as $\delta \mf$ can depend on perturbation variables other than $\psi$.
As in the general relativistic case, we can again replace $\delta \rho$ from \eqref{eq:dpsimodK} and insert the continuity and Friedmann equation in the definition of $\zeta$ \eqref{gaugeinvperts} to obtain
\begin{align}
- \zeta = -\frac{2 G_k }{3 \hubb ^2 (w +1)} +\frac{(3 w +5) \psi }{3 (w +1)} +\frac{2 \psi ' }{3 \hubb (w +1)}+ \frac{\delta \mf }{3 \mf (w +1)}\,.
\end{align}
We can again compute its derivative, where we replace $\delta \rho' $ from \eqref{eq:dDeltaRhoModK}, $ \delta \rho$ from \eqref{eq:dpsimodK} and use the background equations:
\begin{align}
-\zeta' = - \frac{Z_k}{3(1+w) \rho} = - \frac{\kappa a^2 \mf Z_k}{9 \hubb^2 (1+w)} \,.
\end{align}
This reduces to \eqref{eq:dZeta} in the classical limit and vanishes when spatial gradients can be neglected.\\
In summary, following the approach used in \cite{0801Wands, Bertschinger}, we found a second order equation in $\psi$ similar to \eqref{eq:psi''GR_k}, but whether it can be written in a form that can be solved directly in the long-wavelength limit depends on the specific form of $\delta \mf\,$.
The analysis above is consistent with the general statement from sec.\,\ref{sec:consZeta} that $\zeta'= 0$ also for a modified Friedmann equation in the separate universe limit as long as we have adiabatic perturbations.
\subsection{Mukhanov--Sasaki equation}
In conventional cosmological perturbation theory, one commonly works with the Mukhanov--Sasaki variable $v $ \cite{MukhanovSasaki, MukhanovSasaki_2}, which has the property of evolving like a canonically normalised scalar field in an expanding background; it appears with the standard kinetic term of a scalar field in its action, and hence can be quantised canonically as a scalar field (see, e.g., \cite{BaumannNotes}).
The dynamics of the Mukhanov--Sasaki variable are governed by the Mukhanov--Sasaki equation which (after a Fourier decomposition) reads \cite{cosmopert}
\begin{align}
v'' + c_s^2k^2v - \frac{z''}{z}v =0\,.
\label{eq:MSeq}
\end{align}
For scalar matter $v = a (\delta \phi + \frac{\phi'}{\hubb}\psi)$ and $z = a \frac{\phi'}{\hubb}$ so that $v=z\mr\,$. In general relativity and in the long-wavelength limit, one then also has $v=-z\zeta\,$.
The Mukhanov--Sasaki equation can be derived by rewriting the matter and gravity action in terms of $v$ \cite{cosmopert} or, in the separate universe approximation, through algebraic manipulation of the perturbation equations \cite{LQCsepUniv}.
The derivation of the Mukhanov--Sasaki equation requires the constraint equation $D = \mh \tPhi +\psi' - \frac{\kappa}{2} \phi' \delta \phi = 0\,$, which as discussed in sec.\,\ref{sec:mfRho} originates from the spatiotemporal component of the Einstein field equations and is generally not available in a separate universe framework. (But as we have seen a modified version can be derived in the $\mf(\rho)$ case.)
Another approach to obtain a solution for $\zeta$ or $\mr$ on super-horizon scales is then to solve the long-wavelength limit of \eqref{eq:MSeq}, which leads to \cite{cosmopert, 0112249}
\begin{align}
\zeta = V + S \int \frac{d \eta}{z^2}\,,
\label{eq:zetaSol}
\end{align}
where $V$ and $S$ are ($k-$dependent) constants.
Here, the dynamical part of the solution has no clear $k-$dependence that disappears in the separate universe limit.
Imposing the long-wavelength limit of \eqref{eq:dZeta} (which is equivalent to requiring that the long-wavelength limit of the first order equation in $\psi$ is satisfied) would however recover a constant solution, by requiring $S=0\,$.
It is then unclear whether it is justified to keep the dynamical part of $\zeta$ in \eqref{eq:zetaSol}, as this amounts to neglecting $k-$dependent terms in \eqref{eq:MSeq}, but not in \eqref{eq:dZeta}.
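A quick symbolic check (our own sketch, using SymPy, not part of the original derivation) confirms that \eqref{eq:zetaSol} solves the $k \to 0$ limit of \eqref{eq:MSeq}: writing $v = -z\zeta$ with $\zeta = V + S\int d\eta/z^2$, the residual $v'' - (z''/z)\,v$ vanishes identically for an arbitrary function $z(\eta)$ (the overall sign of $v$ drops out of the linear equation).

```python
import sympy as sp

eta, V, S = sp.symbols('eta V S')
z = sp.Function('z')(eta)

# zeta = V + S*Integral(1/z**2, eta);  v = -z*zeta (sign is irrelevant here)
v = -z * (V + S * sp.Integral(1 / z**2, eta))

# residual of the long-wavelength Mukhanov-Sasaki equation: v'' - (z''/z) v
residual = sp.simplify(sp.diff(v, eta, 2) - sp.diff(z, eta, 2) / z * v)
print(residual)  # 0
```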
One then implicitly acknowledges that $\zeta' \neq 0$ for small but non-zero $k\,$, which becomes relevant in scenarios such as those discussed in \cite{0112249, LQCsepUniv}, where $\zeta$ contains important information about small, but non-zero $k$ modes in the contracting phase. The authors point out that for the contracting branch in a bouncing universe, $\zeta'$ can increase as one approaches the bounce ($-\eta \to 0$), namely in the cases where $\zeta' \sim k^2(-\eta)^{-|p|}\ (p \in \mathbb{R})$, leading to a growing mode.
To remain in the separate universe regime, one then has to assume that the wavelength of perturbations is always large enough ($k$ sufficiently small) for $\zeta'$ to remain negligible.
However, the separate universe limit cannot be consistently applied at the bounce point, as it arises from the requirement that wavelengths are much larger than the Hubble horizon, $k\ll \hubb\,$.
For a full treatment one would therefore need to understand the finite theory: it seems necessary to verify any statements made about the dynamics of perturbations in the separate universe limit around the bounce region against the full dynamics including gradient terms.
In LQC there exists an effective Hamiltonian that allows one to study perturbations also outside the strict $k \to 0$ limit, and an analogue of the Mukhanov--Sasaki equation was derived \cite{LQCanomaly}, where LQC corrections to \eqref{eq:MSeq} appear only in the $k^2$ term.
This cannot be done in general for model-independent perturbation equations as we consider here.
Furthermore, in absence of the diffeomorphism constraint it is unclear whether a Mukhanov--Sasaki like equation can be derived algebraically as was done in \cite{LQCsepUniv}\footnote{Note that in the $\mf(\rho)$ case, where $\mr' = 0$ in the separate universe framework, a second order equation $(\mr' z)' =0 $ holds independently of the choice of $z$. It is not clear how to justify a particular choice of $z$ as corresponding to the $k\rightarrow 0$ limit of an equation valid more generally.}.
The applicability of \eqref{eq:MSeq} to a scenario with a modified Friedmann equation is therefore far from clear and an analogue needs to be established from the full dynamics for a specific model in question.
\\
In conclusion, second order equations as are used in standard cosmology generally do not provide additional insight into the evolution of gauge-invariant perturbation variables for general theories with a modified Friedmann equation in the separate universe limit.
They may nonetheless be useful for specific theories where additional information is available.
\section{Conclusion}\label{sec:conc}
In this article we investigated the evolution of scalar perturbations in cosmological scenarios with a modified Friedmann equation, such as those that can arise in quantum gravity.
We focused on the gauge-invariant perturbation variables $\zeta$ and $\mr$ which are frequently studied in conventional cosmological perturbation theory, as they have a physical interpretation and are related to the power spectrum of the CMB.
Our starting point is a generic modified Friedmann equation, and the main body of our analysis is agnostic with regards to the underlying theory.
We do however assume an unchanged continuity equation, from which it follows that $\zeta$ is conserved for long wavelengths as long as perturbations are adiabatic, independent of the gravitational dynamics.
We furthermore need to assume that the notion of gauge invariance, which ensures $\zeta$ and $\mr$ are physically meaningful variables to study, remains unchanged.
In cases where the underlying gravitational theory admits an effective description of the modified cosmological dynamics, this could be investigated explicitly (as is the case in LQC \cite{LQCanomaly}).
We work in the separate universe framework, where the Universe is modelled as an ensemble of disconnected patches that each follow the dynamics of an FLRW universe and all spatial gradients vanish. In this framework, the perturbations are homogeneous in each patch and defined with respect to the background values of the entire ensemble.
The perturbation equations are obtained by perturbing the Friedmann equation and its derivative at linear order.
We then focus on a special case, where the modification of the Friedmann equation can be contained in a function that depends on the energy density $\rho$ only, $\mf = \mf(\rho)\,$.
In this case one can show that a relation similar to the diffeomorphism constraint usually obtained from the spatiotemporal components of the Einstein field equations holds, which simplifies the perturbation equations. It then follows that $\mr$ is conserved for these types of models, as in general relativity.
Similar considerations were made for LQC in \cite{LQCsepUniv} and here we show that these results hold in general for this class of modified Friedmann equations.
We then investigate a specific example of a Friedmann equation that does not have this property, namely the GFT Friedmann equation as established in \cite{deparamcosmo}.
In this case, $\mr' \neq 0$ and we compare the evolution of $\mr$ across the bounce as obtained from the expectation value of the GFT volume operator to analytical solutions of the generalised perturbation equations.
The difference between the solutions for $\psi$ obtained from the two procedures is of second order and therefore negligible for small perturbations.
Finally, we consider two common approaches in the literature that use second order equations in perturbation variables and comment on how they relate to our findings. We conclude that neither of them can be used to make further general statements about the evolution of perturbations in scenarios with a general modified Friedmann equation.
In summary, we established that for a general modified Friedmann equation in the separate universe framework, the relation $\mr = -\zeta$ no longer holds, whereas it remains valid for a certain type of modification.
As $\zeta$ remains conserved irrespective of the type of modification, the separate universe framework alone is not suitable to establish possible imprints on the CMB power spectrum from quantum gravitational effects. Inhomogeneous perturbations need to be included in an analysis to obtain alterations to the dynamics of $\zeta\,$.
Also, this is the only way to rigorously establish how sub-horizon dynamics around the bounce influence the evolution of perturbations through the bounce.
How and whether this can be done depends on the underlying theory that generates the modified Friedmann dynamics.
In LQC, techniques have been established \cite{LQCanomaly}, and first investigations have also been initiated in the context of GFT \cite{GFTrelationalPert}.\\
A final comment on the definition of $\zeta$ is in order:
Here we have assumed that the definition of $\zeta$ remains unchanged also in the non-general relativistic regime.
However, a modified Friedmann equation of the form \eqref{eq:genFried} can also be interpreted as a modification to the energy density $\rho_{\rm eff} = \rho \mf\,$, which implies a modified form of the curvature perturbation on uniform density hypersurfaces $\zeta_{\rm eff}$, as was considered in \cite{0801Wands}. The arguments for the conservation of $\zeta$ presented here would no longer apply to $\zeta_{\rm eff}\,$, since in that case $\rho_{\rm eff}' = -3\mh(\rho + P)\mf + \rho \mf'\,$. It is clear that from a Friedmann equation alone, one cannot conclude whether the modification arises in the matter sector (which then also alters the continuity equation) or is limited to gravitational dynamics.
Such an input would originate from the theory that gives the modified Friedmann equation, and in the examples studied here (LQC and GFT) one assumes that the matter sector remains unaltered.
\acknowledgments
The work of SG was funded
by the Royal Society through a University Research Fellowship (UF160622).
\section{The spectral extraction package aXe}
As part of a collaborative project between STScI and the Advanced
Camera for Surveys (ACS) IDT,
the ST-ECF provides comprehensive support for the slitless
spectroscopy modes of the ACS. As
well as support to users of the ACS grism (G800L for the Wide Field
Channel, WFC and High Resolution Channel, HRC) and prisms
(PR200L for the HRC and PR110L and PR130L for the Solar Blind Channel,
SBC) and contributions to the ground and in-orbit calibrations of
the slitless modes, a primary pillar of this project has been
the provision of an extraction package, called aXe. The package consists of
a number of self-contained modules, which perform
the basic steps - defining apertures for extraction of a spectrum,
assigning wavelengths to pixels, flat fielding, extracting
rectified 2D spectra and a 1D spectrum, and applying flux
calibration. The modules are scripted in Python to allow easy
integration into Pyraf/STSDAS (see K\"{u}mmel et al. 2005
for more details). The aXe user manual
({\tt http://www.stecf.org/software/aXe/documentation.html})
provides full details for installing and running the package.
The fundamental aspect of slitless spectroscopy
is that the individual objects define their own `slit' in terms
of position on the detector and object height of the dispersed
spectrum; the object width in the dispersion direction affects the
spectral resolution. aXe uses a catalogue of the observed targets,
which is usually taken from a matched direct image
as the starting point of the reduction process. In the design of
aXe it was decided to make the software as general as possible,
so that in the longer term not only slitless spectra from ACS
could be extracted. This flexibility is engineered by putting
all instrument specific parameters in a configuration file.
Thus the specification of the spectral traces, the dispersion
solutions, the name of the flat field file, the sensitivity file name,
etc are all listed in a single file for each instrument mode. Thus for
ACS, there are six configuration files; three for the G800L with the
WFC (one for each chip) and HRC, one for PR200L, and one each for
the SBC with PR110L and PR130L.
The built-in flexibility has paid off since aXe has also been
used to extract spectra from multi-object spectra taken with
the VLT FORS2 instrument (K\"{u}mmel et al. 2006).
Since the first release of aXe in 2002 ready for installation
of ACS into HST, the package has evolved. In particular in
2004 a major enhancement was added with the use of `drizzle'
(Fruchter \& Hook, 2002) to combine 2D spectra of individual
objects when the data is taken with dithers of the telescope.
This turns out to be a very common observational procedure, at
least for the grism modes, in order to recover the undersampling
of the Point
Spread Function (PSF) and to mitigate the effect of pixel
sensitivity variations and hot pixels. Since the 2004 release (aXe-1.4),
two enhancements have been added which are
here described. The sensitivity of the ACS slitless spectroscopy
modes implies that, despite the small pixel scale and compact PSF,
the surface density of detected spectra in moderately deep exposures
($>$ a few thousand seconds) is high enough that spectra crowd and
overlap, even in high Galactic latitude fields. A high priority was to indicate
to the user which pixels in an extracted spectrum are affected
by overlap with spectra of other objects. This has now been implemented
in a quantitative way, whereby the estimated value
of the contaminating flux contributing to a spectrum pixel is
output. The second enhancement was to apply the well known
technique of weighting by the spatial profile when forming
a 1-D spectrum from the 2D spectrum on the detector (optimal
extraction of Horne 1986). Both these enhancements are described.
\section{Handling spectral contamination in aXe}
The contamination of one spectrum by spectra of neighbouring objects
can be manifest in several ways:
\begin{itemize}
\item overlap of the first order spectrum by the first order spectrum
of a nearby object situated in the projected direction of
dispersion;
\item overlap of first order spectra situated nearby but offset perpendicular
to the dispersion direction;
\item both above cases combined and possibly involving spectral orders
other than the first.
\end{itemize}
A particular case is that of the zeroth order of a bright object
overlying a fainter object spectrum. For the G800L grism,
the zeroth order is similar in size to the dispersing object, but slightly
dispersed, so that it can resemble a broad emission line. If such a
feature is not
recognized as a contaminating line it may lead to erroneous
wavelength, and hence redshift, assignment. Of course the
effect of bright objects on faint object spectra is more serious
than the reverse case, and highlights the need for a warning
of the contamination which provides at least an estimate of the
actual contaminating flux to a given spectrum. In the first release
of aXe, a minimal contamination indicator was
implemented, which recorded only the total number of other object spectra
falling within a pixel in the extraction box of the given object. With
aXe-1.5, a new scheme was implemented which provides
a robust estimate of how much contaminating flux contributes to
a given pixel. The contamination is estimated by making a
simulated slitless image, which is achieved by one of two methods.
The simpler method takes the input catalogue and generates simulated
images as 2D Gaussians based on the image parameters, and is called the
Gaussian Emission Model;
the other method actually uses the fluxes in the pixels of a set
of multi-colour companion direct images, and is called the
Fluxcube Emission Model.
\subsection{Gaussian Emission model}
The input catalogue which drives the object extraction is
usually produced by running SExtractor on a companion direct
image (or several images taken with different filters) and lists
the object position, size parameters and magnitude. Thus for
each object a spectrum over the wavelength extent of the
slitless dispersing element can be formed and converted to
detector count rate using the known sensitivity of the slitless
mode. For a single filter this will be a flat featureless
spectrum, but more filter bands can more closely match
the true object spectrum. From the position and extent of each
object, which are determined from the object parameters of each
image (A\_IMAGE, B\_IMAGE
and THETA\_IMAGE in SExtractor), a simulated spectrum corresponding to
the slitless spectrum can be formed with spatial extent matching
the object size (see Fig.1, left panel, for an overview).
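As an illustration of this step (our own minimal sketch, not aXe code), a catalogue object can be rendered as a 2D elliptical Gaussian from its SExtractor parameters; treating A\_IMAGE and B\_IMAGE directly as Gaussian widths in pixels is a simplifying assumption:

```python
import numpy as np

def gaussian_object_image(shape, x0, y0, a, b, theta_deg, total_counts):
    """Elliptical 2D Gaussian stamp from SExtractor-like parameters
    (a, b: axis widths in pixels; theta_deg: position angle),
    normalised so the stamp sums to total_counts."""
    y, x = np.indices(shape)
    t = np.deg2rad(theta_deg)
    # rotate pixel coordinates into the ellipse frame
    xr = (x - x0) * np.cos(t) + (y - y0) * np.sin(t)
    yr = -(x - x0) * np.sin(t) + (y - y0) * np.cos(t)
    g = np.exp(-0.5 * ((xr / a) ** 2 + (yr / b) ** 2))
    return total_counts * g / g.sum()
```

In the real model the total count rate would come from the catalogue magnitude and the sensitivity of the slitless mode, and the stamp would then be dispersed onto the detector.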
\begin{figure}
\epsscale{0.80}
\plottwo{Walsh_fig1a.eps}{Walsh_fig1b.eps}
\caption{ {\bf Left.} The scheme for generating the contamination model based on
2-D Gaussians and an input catalogue with object shapes. The
position, size (major and minor axes and position angle) and magnitude
of objects in different filter direct images provide simulated
objects which are dispersed onto the detector pixels, converted
to $e^{-}$ s$^{-1}$.
{\bf Right.} As Fig. 1 (left) but showing the procedure for generating the
simulated images directly from the photometric information in the
pixels of the multi-colour direct images.}
\label{Fig1}
\end{figure}
\subsection{Fluxcube Emission model}
An alternative, and preferable, method to produce the model
spectrum for estimation of the contribution
of contaminating spectra is to use the surface brightness
distribution in the pixels of the companion direct image(s).
The assignment of pixels to a given object still has to
be established and the method chosen in aXe-1.5 was to use the Segmentation
image provided by SExtractor. This image has the pixel value
belonging to a given object set to the object number in the
SExtractor catalogue. The data are stored in an intermediate
file which is a cube with planes for the segmentation image
and the filter images in different bands; it is hence called
a Fluxcube (see Fig. 1, right panel). aXe-1.5 provides a
routine to produce such flux cubes which are then read by the
appropriate tasks during the extraction of the spectra.
\subsection{Quantitative contamination}
During the extraction process for a given object spectrum,
then, within the extraction box, the count rate in the pixels
belonging to objects other than the one being extracted are
accumulated from the contamination image (originating in
either the Gaussian or Fluxcube models) and assigned to
the contaminating signal. Applying the sensitivity curve to
the contaminating signal enables the total contaminating
signal to the spectrum to be computed. The result
is written to a separate column of the Extracted Spectra File
by aXe. Fig. 2 shows as an example the extracted spectrum of an
$i \sim$ 24 mag. emission line galaxy; the strong red continuum
is shown to be badly contaminated (squares show the
contamination contribution). However the pair of emission lines at
around 7500\AA\ can be seen to be intrinsic to the object and not
arise from a contaminating spectrum.
It must be emphasized that the quantitative contamination is only
an estimate as it is based on a model (Gaussian or Fluxcube); it
does however lead to an appropriate level of caution being exercised
in quantitatively assessing a given spectrum.
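The accumulation step described above can be sketched as follows (a toy illustration, not aXe code; the integer ownership map standing in for the Gaussian/Fluxcube emission-model assignment is hypothetical):

```python
import numpy as np

def contamination_per_column(model_image, owner_map, obj_id, ylo, yhi):
    """Per-column contaminating count rate inside an extraction box:
    sum the simulated counts of every pixel owned by *other* objects.
    model_image : simulated slitless image (counts/s)
    owner_map   : integer map assigning each model pixel to an object id
    ylo, yhi    : rows bounding the extraction box
    """
    cut = model_image[ylo:yhi]
    others = owner_map[ylo:yhi] != obj_id  # mask of contaminating pixels
    return np.sum(cut * others, axis=0)
```

Applying the sensitivity curve to this per-column signal would then give the contaminating flux written alongside the extracted spectrum.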
The aXe tasks for producing the emission model leading to
contamination estimation deliver simulated images from direct
image(s) or a catalogue based on direct image(s). Thus the
slitless images produced by the Gaussian or Fluxcube models
(....CONT.fits files) are ideally suited to use in observation
planning and exposure time estimation. For example for a complex
field, the contamination images can be used to choose optimal
telescope roll angles that minimize overlap of the spectra of
interest.
\begin{figure}
\epsscale{0.45}
\plotone{Walsh_fig2.ps}
\caption{The extracted ACS WFC 1D spectrum of an emission line galaxy from
the HUDF parallel data (primary instrument NICMOS; PI:
Thompson, programme 9803). The flux of the galaxy (with 1$\sigma$ error bars)
(diamonds) and the sum of the contaminating flux in the
extraction box (boxes) are shown.}
\label{Fig2}
\end{figure}
\section{Weighted spectral extraction}
If the 2D spectrum of an object is extracted applying a set of
weights to the spatial profile, then the resulting
1D spectrum has a higher signal-to-noise than the
simple box-extracted (i.e. summed) spectrum. This
was shown by Horne (1986; see also Robertson 1986) and is
often referred to as optimal extraction. Weighted extraction
has been implemented in aXe-1.5, with the Horne (1986) algorithm, using
weights derived from the contamination
image described in Section 2. The contamination image is well
suited to providing the weights since it is binned in the same way
as the spectrum to be extracted and, as a model, does not suffer from
any systematic statistical effects.
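The weighted sum itself can be sketched as below (a generic implementation of the Horne 1986 estimator, not the aXe-1.5 code); the spatial profile is assumed already normalised per wavelength column, as the contamination model provides:

```python
import numpy as np

def horne_extract(data2d, profile, var2d):
    """Optimal 1D extraction (Horne 1986): for each wavelength column,
    f = sum(P*D/V) / sum(P^2/V), with variance 1/sum(P^2/V).
    data2d, profile, var2d : (n_spatial, n_lambda) arrays, where each
    column of `profile` sums to 1."""
    den = np.sum(profile**2 / var2d, axis=0)
    flux = np.sum(profile * data2d / var2d, axis=0) / den
    return flux, 1.0 / den
```

For constant variance this reduces to profile-weighted averaging; the gain over an unweighted sum grows as the extraction box becomes wider than the object.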
In the examples described by Horne (1986), the weights are
typically determined by fitting the observed spectrum in
the dispersion direction as a function of spatial offset. This
procedure can be prone to failure for weak spectra and where the whole
spectral extent is not occupied by signal; in the case of
overlapping spectra it would provide incorrect weights.
Simulations were performed with a random
star field composed of (spatially) well-sampled star images
in order to test the optimal extraction implementation in
aXe-1.5. The quantitative contamination procedure in aXe using the
Gaussian emission model was employed to make simulated slitless
spectra from a SExtractor catalogue; background and
noise were added and then the spectra were extracted with
and without weighting. The gain of weighted extraction
was quantified as the
ratio of the signal-to-noise of the optimally extracted spectrum to that of the
unweighted spectrum over a given wavelength range. As the signal-to-noise
level decreases, the optimal extraction shows an advantage
over the unweighted extraction depending on the width of the
extraction (see Fig. 3 where the
extraction width is 6 times the object width).
At the lowest signal-to-noise level, the advantage is
around a factor 1.4, equivalent to an increase
in exposure time of 1.9 over unweighted extraction.
As an example of weighted extraction applied to
real ACS data, a crowded
stellar field was selected from archive data resulting from
the APPLES programme
(ACS Pure Parallel Lyman-alpha Emission Survey, PI: J. Rhoads,
Programme 9482). Over 7000 spectra (to $i$=25mag.) were
present on this
ACS Wide Field Channel image so there is often considerable
contamination between adjacent spectra. Using the
quantitative contamination, only those spectra with $<$5\%
contamination were selected, and the advantage in
signal-to-noise of using weighted extraction
was determined. The mean S/N was computed over a range of
1000\AA\ centred at 7500\AA\ and Fig. 4 shows the result
as a point plot $v.$ magnitude and as a histogram $v.$
signal-to-noise. Here 1374 stars are analysed, and
a peak advantage of weighted over unweighted extraction of
about 1.3 is realised. This is
somewhat lower than that achieved in the simulations, probably
on account of the narrow, undersampled PSF
of the WFC data. Nevertheless the theoretical
gain in exposure time is around 60\% at low signal-to-noise.
Weighted extraction can be selected
in the aXe parameter set and both weighted and unweighted 1D spectra
for all sources are output. For sources with complex cross
dispersion profiles, there will probably be little advantage
of weighted extraction but for small objects, such as faint stars
and distant galaxies, a modest gain in signal-to-noise is
achievable with optimal extraction for slitless spectra. \\
The latest release aXe-1.5 provides both quantitative contamination
and weighted extraction for all ACS slitless spectroscopy modes and
is available from November 2005, with Pyraf/STSDAS 3.4. Full details
can be found at \\
{\tt http://www.stecf.org/software/aXe/}
\begin{figure}
\epsscale{0.42}
\plotone{Walsh_fig3.ps}
\caption{The result of a simulation of a star field observed with
the ACS
WFC G800L grism in terms of the signal-to-noise (S/N) advantage
of the weighted over the unweighted extraction (extraction
widths $\pm$3 $\sigma$ of the sources). The lower plot shows
the actual advantage for each star in the simulation (ratio
of weighted S/N divided by unweighted S/N, averaged over a
wavelength range) in terms of the $i$ mag.
The upper plot shows a histogram version of the lower plot where
the plot abscissa is the mean signal-to-noise over a 1000\AA\
range.
}
\label{Fig3}
\end{figure}
\begin{figure}
\epsscale{0.42}
\plotone{Walsh_fig4.ps}
\caption{
An identical plot to Figure 3, but now based on real data taken
with the ACS WFC G800L grism as part of HST APPLES programme 9482.
The field is at low Galactic
latitude and consists of many late type stars. There is considerable
scatter in the weighted v. unweighted extraction advantage at the lowest signal
level, but a demonstrable increase in signal-to-noise at low
flux levels is achieved.
}
\label{Fig4}
\end{figure}
\section{Conclusions}\label{ref:conc}
We have presented two kernel-based tensor representations for action recognition from 3D skeletons, namely the sequence compatibility kernel (SCK) and dynamics compatibility kernel (DCK). SCK captures the higher-order correlations between 3D coordinates of the body-joints and their temporal variations, and factors out the need for expensive operations such as Fourier temporal pyramid matching or dynamic time warping, commonly used for generating sequence-level action representations. Further, our DCK kernel captures the action dynamics by modeling the spatio-temporal co-occurrences of the body-joints. This tensor representation also factors out the temporal variable, whose length depends on each sequence. Our experiments substantiate the effectiveness of our representations, demonstrating state-of-the-art performance on three challenging action recognition datasets.
\section{Experiments}\label{sec:exp}
In this section, we present experiments using our models on three benchmark 3D skeleton based action recognition datasets, namely (i) the UTKinect-Action~\cite{xia_utkinect}, (ii) Florence3D-Action~\cite{seidenari_florence3d}, and (iii) MSR-Action3D~\cite{li_msraction3d}. We also present experiments evaluating the influence of the choice of various hyper-parameters, such as the number of pivots $Z$ used for linearizing the body-joint and temporal kernels, the impact of Eigenvalue Power Normalization, and factor equalization.
\subsection{Datasets}
\label{sec:sets}
\noindent\textbf{UTKinect-Action~\cite{xia_utkinect}} dataset consists of 10 actions performed twice by 10 different subjects, and has 199 action sequences. The dataset provides 3D coordinate annotations of 20 body-joints for every frame. The dataset was captured with a stationary Kinect sensor and contains significant viewpoint and intra-class variations.
\\
\noindent\textbf{Florence3D-Action~\cite{seidenari_florence3d}} dataset consists of 9 actions performed two to three times by 10 different subjects. It comprises 215 action sequences. 3D coordinate annotations of 15 body-joints are provided for every frame. This dataset was also captured with a Kinect sensor and contains significant intra-class variations \emph{i.e.}, the same action may be articulated with the left or right hand. Moreover, some actions such as \emph{drinking}, \emph{performing a phone call}, etc., can be visually ambiguous.
\\
\noindent\textbf{MSR-Action3D~\cite{li_msraction3d}} dataset comprises 20 actions performed two to three times by 10 different subjects. Overall, it consists of 557 action sequences. 3D coordinates of 20 body-joints are provided. This dataset was captured using a Kinect-like depth sensor. It exhibits strong inter-class similarity.
In all experiments we follow the standard protocols for these datasets. We use the cross-subject test setting, in which half of the subjects are used for training and the remaining half for testing. Similarly, we divide the training set into two halves for the purposes of training and validation. Additionally, we use two protocols for MSR-Action3D, following~\cite{wu_actionlets} and~\cite{li_msraction3d}, where the latter protocol uses three subsets grouping related actions together.
\subsection{Experimental Setup}\label{sec:setup}
For the sequence compatibility kernel, we first normalized all body-keypoints with respect to the hip joints across frames, as indicated in Section \ref{sec:ker1}. Moreover, lengths of all body-parts are normalized with respect to a reference skeleton. This setup follows the pre-processing suggested in~\cite{vemulapalli_SE3}. For our dynamics compatibility kernel, we use unnormalized body-joints and assume that the displacements of body-joint coordinates across frames capture their temporal evolution implicitly.
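A rough sketch of this pre-processing (the hip index, bone tree and reference lengths below are placeholders; the actual normalisation follows~\cite{vemulapalli_SE3}): the skeleton is centred on the hip joint, then each bone is rescaled to a reference length while keeping its original direction, rebuilding positions from root to leaves.

```python
import numpy as np

def normalise_skeleton(joints, hip_idx, bones, ref_lengths):
    """joints : (T, J, 3) body-joint coordinates over T frames
    bones  : list of (parent, child) joint-index pairs in root-to-leaf order
    ref_lengths : reference bone lengths, one per bone"""
    centred = joints - joints[:, hip_idx:hip_idx + 1, :]   # hip at origin
    out = centred.copy()
    for (p, c), L in zip(bones, ref_lengths):
        vec = centred[:, c] - centred[:, p]                # original direction
        norm = np.linalg.norm(vec, axis=-1, keepdims=True)
        out[:, c] = out[:, p] + L * vec / np.maximum(norm, 1e-8)
    return out
```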
\noindent{\textbf{Sequence compatibility kernel.}} In this section, we first present experiments evaluating the influence of parameters $\sigma_2$ and $\sigma_3$ of kernels $G_{\sigma_2}$ and $G_{\sigma_3}$ which control the degree of selectivity for the 3D body-joints and temporal shift invariance, respectively. See Section \ref{sec:ker1} for a full definition of these parameters.
\begin{figure}[t]
\centering\hspace*{-0.1cm}
\begin{subfigure}[b]{0.32\linewidth}
\centering\includegraphics[trim=0 3 0 15, clip=true, width=4.35cm]{piv1a.pdf}
\caption{\label{fig:piv1}}
\end{subfigure}
\hspace*{0.05cm}
\begin{subfigure}[b]{0.32\linewidth}
\centering\includegraphics[trim=0 3 0 15, clip=true, width=4.35cm]{piv2.pdf}
\caption{\label{fig:piv2}}
\end{subfigure}
\hspace*{0.05cm}
\begin{subfigure}[b]{0.32\linewidth}
\includegraphics[trim=0 3 0 15, clip=true, width=4.35cm]{piv3.pdf}
\caption{\label{fig:piv3}}
\end{subfigure}
\caption{Figure \ref{fig:piv1} illustrates the classification accuracy on Florence3D-Action for the sequence compatibility kernel when varying radii $\sigma_2$ (body-joints subkernel) and $\sigma_3$ (temporal subkernel). Figure \ref{fig:piv2} evaluates the behavior of SCK w.r.t. the number of pivots $Z_2$ and $Z_3$. Figure \ref{fig:piv3} demonstrates the effectiveness of our slice-wise Eigenvalue Power Normalization in tackling burstiness by varying the parameter $\gamma$.}
\end{figure}
Furthermore, recall that the kernels $G_{\sigma_2}$ and $G_{\sigma_3}$ are approximated via linearizations according to equations \eqref{eq:gauss_lin} and \eqref{eq:gauss_lin2}. The quality of these approximations and the size of our final tensor representations depend on the numbers of pivots $Z_2$ and $Z_3$ chosen. In our experiments, the pivots $\boldsymbol{\zeta}$ are spaced uniformly within the intervals $[-1;1]$ and $[0;1]$ for the kernels $G_{\sigma_2}$ and $G_{\sigma_3}$, respectively.
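The precise linearization is the one given by \eqref{eq:gauss_lin} and \eqref{eq:gauss_lin2}; a standard construction of this kind (used here purely for illustration, with hypothetical values of $\sigma$ and pivot spacing) places one Gaussian bump per pivot, so that the inner product of two feature maps reproduces the Gaussian kernel up to a single global constant once the pivots are dense enough:

```python
import numpy as np

sigma = 0.3
s = sigma / np.sqrt(2.0)                 # bump width for the feature map
pivots = np.linspace(-2.0, 2.0, 101)     # dense, wide pivot grid

def phi(x):
    # feature map: one Gaussian bump per pivot
    return np.exp(-(x - pivots) ** 2 / (2.0 * s ** 2))

xs = np.linspace(-0.5, 0.5, 11)
ratios = []
for x in xs:
    for y in xs:
        k_true = np.exp(-(x - y) ** 2 / (2.0 * sigma ** 2))
        ratios.append(phi(x) @ phi(y) / k_true)
ratios = np.array(ratios)
# near-constant ratio => the inner product matches the kernel up to scale
print(ratios.std() / ratios.mean())
```

In practice far fewer pivots suffice, as the experiments below show, because the approximation only needs to preserve discriminative structure, not the kernel values exactly.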
Figures \ref{fig:piv1} and \ref{fig:piv2} present the results of this experiment on the Florence3D-Action dataset -- we report the results on the test set, as we observed exactly the same trends on the validation set.
Figure \ref{fig:piv1} shows that the body-joint compatibility subkernel $G_{\sigma_2}$ requires a choice of $\sigma_2$ which is not too strict, as specific body-joints (\emph{e.g.}, the elbow) cannot be expected to recur across sequences in exactly the same position. On the one hand, very small $\sigma_2$ leads to poor generalization. On the other hand, very large $\sigma_2$ allows big displacements of the corresponding body-joints between sequences, which results in poor discriminative power of this kernel. Furthermore, Figure \ref{fig:piv1} demonstrates that the range of $\sigma_3$ for the temporal subkernel for which we obtain very good performance is large; however, as $\sigma_3$ becomes very small or very large, extreme temporal selectivity or full temporal invariance, respectively, results in a loss of performance. For instance, $\sigma_3\!=\!4$ results in only $91\%$ accuracy.
In Figure \ref{fig:piv2}, we show the performance of our SCK kernel with respect to the number of pivots used for linearization. For the body-joint compatibility subkernel $G_{\sigma_2}$, we see that $Z_2\!=\!5$ pivots are sufficient to obtain good performance of $92.98\%$ accuracy. We have observed that this is consistent with the results on the validation set. Using more pivots, say $Z_2\!=\!20$, deteriorates the results slightly, suggesting overfitting. We make similar observations for the temporal subkernel $G_{\sigma_3}$ which demonstrates good performance for as few as $Z_3\!=\!2$ pivots. Such a small number of pivots suggests that linearizing 1D variables and generating higher-order co-occurrences, as described in Section~\ref{sec:ker1}, is a simple, robust, and effective linearization strategy.
Further, Figure \ref{fig:piv3} demonstrates the effectiveness of our slice-wise Eigenvalue Power Normalization (EPN) described in Equation \eqref{eq:epn1}. When $\gamma\!=\!1$, the EPN functionality is absent. This results in a drop of performance from $92.98\%$ to $88.7\%$ accuracy, demonstrating that statistically unpredictable bursts of actions described by the body-joints, such as long versus short \emph{hand waving}, are indeed undesirable. In such cases, EPN is very effective, as in practice it accounts for correlated bursts, \emph{e.g.}, a \emph{hand wave} co-occurring with the associated elbow and neck motion. For more details behind this concept, see~\cite{me_tensor}. For our further experiments, we choose $\sigma_2\!=\!0.6$, $\sigma_3\!=\!0.5$, $Z_2\!=\!5$, $Z_3\!=\!6$, and $\gamma\!=\!0.36$, as dictated by cross-validation.
\begin{figure}[t]
\centering\hspace*{-0.22cm}
\begin{subfigure}[b]{0.32\linewidth}
\centering\includegraphics[trim=0 0 0 0, clip=true, width=3.2cm]{stickman5.pdf}
\renewcommand{\arraystretch}{0.95}
{
\scriptsize
\begin{tabular}{ c | c | c | c | c }
\kern-0.5em A\kern-0.5em & \kern-0.5em B\kern-0.5em & \kern-0.5em C\kern-0.5em & \kern-0.5em D\kern-0.5em & \kern-0.5em E\kern-0.5em\\
\hline
\kern-0.5em 6,9\kern-0.5em & \kern-0.5em 1,6,9\kern-0.5em & \kern-0.5em 6,9,12,15\kern-0.5em & \kern-0.5em 4,6,7,9,11,14\kern-0.5em & \kern-0.5em 4,6,7,9,\kern-0.7em\\
\cline{1-4}
F & G & H & I & \kern-0.5em 11,12,\kern-0.7em\\
\kern-0.7em 4-15\kern-0.5em & \kern-0.5em 1,4-15\kern-0.5em & \kern-0.5em 1,2,4-15\kern-0.5em & \kern-0.5em 1-15\kern-0.5em & \kern-0.5em 14,15\kern-0.7em\\
\hline
\end{tabular}
}
\caption{\label{fig:piv4}}
\end{subfigure}
\hspace*{0.16cm}
\begin{subfigure}[b]{0.32\linewidth}
\centering\includegraphics[trim=0 3 0 10, clip=true, width=4.35cm]{piv4c.pdf}
\caption{\label{fig:piv5}}
\end{subfigure}
\hspace*{0.00cm}
\begin{subfigure}[b]{0.32\linewidth}
\centering\includegraphics[trim=0 3 0 10, clip=true, width=4.35cm]{piv5c.pdf}
\caption{\label{fig:piv6}}
\end{subfigure}
\caption{Figure \ref{fig:piv4} enumerates the body-joints in the Florence3D-Action dataset. The table below lists subsets A-I of the body-joints used to build representations evaluated in Figure \ref{fig:piv5}, which demonstrates the performance of our dynamics compatibility kernel w.r.t. these subsets. Figure \ref{fig:piv6} demonstrates effectiveness of equalizing the factors in non-symmetric tensor representation by HOSVD Eigenvalue Power Normalization by varying $\gamma$.}
\end{figure}
\vspace{0.05cm}
\noindent{\textbf{Dynamics compatibility kernel.}}
In this section, we evaluate the influence of choosing parameters for the DCK kernel. Our experiments are based on the
Florence3D-Action dataset. We present the scores on the test set as the results on the validation set match these closely. As this kernel considers all spatio-temporal co-occurrences of body-joints, we first evaluate the impact of the joint subsets we select for generating this representation as not all body-joints need to be used for describing actions.
Figure \ref{fig:piv4} enumerates the body-joints that describe every 3D human skeleton in the Florence3D-Action dataset, whilst the table underneath lists the proposed body-joint subsets A-I which we use for computations of DCK. In Figure \ref{fig:piv5}, we plot the performance of our DCK kernel for each subset. The plot shows that using only the two hand-related body-joints of Configuration-A in the DCK kernel construction already attains $88.32\%$ accuracy, which highlights the informativeness of temporal dynamics. For Configuration-D, which includes six body-joints such as the knees, elbows and hands, our performance reaches $93.03\%$. This suggests that some of the body-joints excluded from this configuration may be noisy and therefore detrimental to classification.
As Configuration-E includes eight body-joints such as the feet, knees, elbows and hands, we choose it for our further experiments as it represents a reasonable trade-off between performance and size of representations. This configuration scores $92.77\%$ accuracy. We see that if we utilize all the body-joints according to Configuration-I, the performance of $91.65\%$ accuracy is still somewhat lower compared to $93.03\%$ accuracy for Configuration-D, highlighting again the issue of noisy body-joints.
In Figure \ref{fig:piv6}, we show the performance of our DCK kernel when HOSVD factors underlying our non-symmetric tensors are equalized by varying the EPN parameter $\gamma$. For $\gamma\!=\!1$, HOSVD EPN is absent which leads to $90.49\%$ accuracy only. For the optimal value of $\gamma\!=\!0.85$, the accuracy rises to $92.77\%$. This again demonstrates the presence of the burstiness effect in temporal representations.
\vspace{0.05cm}
\noindent{\textbf{Comparison to the state of the art.}} In this section, we compare the performance of our representations against the best performing methods on the three datasets. Along with comparing SCK and DCK, we also explore the complementarity of these representations in capturing the action dynamics by combining them.
On the Florence3D-Action dataset, we present our best results in Table \ref{tab:flor}. Note that the model parameters for this evaluation were selected by cross-validation. Linearizing the sequence compatibility kernel with these parameters resulted in a tensor representation with $26,565$ dimensions\footnote{\label{foot:foot1}Note that this is the length of a vector per sequence after unfolding our tensor representation and removing duplicate coefficients from the symmetries in the tensor.}, yielding an accuracy of $92.98\%$. As for the dynamics compatibility kernel (DCK), our model selected Configuration-E (described in Figure~\ref{fig:piv4}), resulting in a representation of dimensionality $16,920$ which achieved $92.77\%$ accuracy. However, somewhat better results were attained by Configuration-D, namely $93.03\%$ accuracy for a representation of size $9,450$. Combining the SCK representation with DCK in Configuration-E results in an accuracy of $95.23\%$. This constitutes a $4.5\%$ improvement over the state of the art on this dataset, as listed in Table \ref{tab:flor}, and demonstrates the complementary nature of SCK and DCK. To the best of our knowledge, this is the highest performance attained on this dataset.
\begin{table}[t]
\renewcommand{\arraystretch}{0.85}
{
\footnotesize
\begin{subtable}[t]{0.48\linewidth}
\centering
\begin{tabular}{ c | c | c | c | c }
& SCK & \multicolumn{2}{|c|}{DCK} & \kern-0.3em SCK+DCK\kern-0.3em\\
\hline
\kern-0.3em accuracy\kern-0.3em & \kern-0.3em $92.98\%$\kern-0.3em & \kern-0.3em $93.03\%$\kern-0.3em & \kern-0.3em $92.77\%$\kern-0.3em & \kern-0.3em $\mathbf{95.23\%}$\kern-0.3em\\%95.47
size & 26,565 & 9,450 & 16,920 & 43,485\\
\hline
\end{tabular}\vspace{0.1cm}\\\begin{tabular}{ c | c}
\hline
\kern-0.3em Bag-of-Poses $82.00\%$ \cite{seidenari_florence3d}\kern-0.3em & \kern-0.3em $SE(3)$ $90.88\%$ \cite{vemulapalli_SE3}\kern-0.3em\\
\hline
\end{tabular}
\caption{\label{tab:flor}}
\end{subtable}
\hspace{0.1cm}
\begin{subtable}[t]{0.48\linewidth}
\centering
\begin{tabular}{ c | c | c | c }
& SCK & DCK & \kern-0.3em SCK+DCK\kern-0.3em\\
\hline
\kern-0.3em accuracy\kern-0.3em & $96.08\%$ & $97.5\%$ & $\mathbf{98.2\%}$\\%97.69
size & 40,480 & 16,920 & 57,400\\
\hline
\end{tabular}\vspace{0.1cm}\\\begin{tabular}{ c | c }
\hline
\kern-0.3em 3D joints. hist. $90.92\%$ \cite{xia_utkinect}\kern-0.3em & \kern-0.3em $SE(3)$ $97.08\%$ \cite{vemulapalli_SE3}\kern-0.3em\\
\hline
\end{tabular}
\caption{\label{tab:utk}}
\end{subtable}
}
\caption{Evaluations of SCK and DCK and comparisons to the state-of-the-art results on \ref{tab:flor} the Florence3D-Action and \ref{tab:utk} the UTKinect-Action datasets.}
\end{table}
Action recognition results on the UTKinect-Action dataset are presented in Table \ref{tab:utk}. For our experiments on this dataset, we kept all the parameters the same as those we used on the Florence3D dataset (described above). On this dataset, the SCK and DCK representations yield $96.08\%$ and $97.5\%$ accuracy, respectively. Combining SCK and DCK yields $98.2\%$ accuracy, marginally outperforming the more complex approach described in~\cite{vemulapalli_SE3}, which uses Lie group algebra on $SE(3)$ matrix descriptors and requires practical extensions such as dynamic time warping and Fourier temporal pyramids to attain this performance, all of which we avoid completely.
\begin{table}[b]
\renewcommand{\arraystretch}{0.95}
{
\footnotesize
\parbox{.48\linewidth}{
\centering
\begin{tabular}{ c | c | c | c }
& SCK & DCK & \kern-0.3em SCK+DCK\kern-0.3em\\
\hline
\kern-0.3em acc., prot.~\cite{wu_actionlets}\kern-0.3em & \kern-0.3em $90.72\%$\kern-0.3em & \kern-0.3em $86.30\%$\kern-0.3em & \kern-0.3em $\mathbf{91.45\%}$\kern-0.3em\\
\kern-0.3em acc., prot.~\cite{li_msraction3d}\kern-0.3em & \kern-0.3em $93.52\%$\kern-0.3em & \kern-0.3em $91.71\%$\kern-0.3em & \kern-0.3em $\mathbf{93.96\%}$\kern-0.3em\\
size & 40,480 & 16,920 & 57,400\\
\hline
\end{tabular}
}
\parbox{.48\linewidth}{
\centering
\begin{tabular}{ c | c }
\kern-0.3em accuracy, protocol~\cite{wu_actionlets}\kern-0.3em & \kern-0.3em accuracy, protocol~\cite{li_msraction3d}\kern-0.3em\\
\hline
\kern-0.3em Actionlets $88.20\%$ \cite{wu_actionlets}\kern-0.3em & \kern-0.3em R. Forests $90.90\%$ \cite{zhu_fusingjoints}\kern-0.3em\\
$SE(3)$ $89.48\%$ \cite{vemulapalli_SE3} & $SE(3)$ $92.46\%$ \cite{vemulapalli_SE3}\\
\kern-0.3em Kin. desc. $91.07\%$ \cite{zanfir_movingpose}\kern-0.3em &\\
\hline
\end{tabular}
}
}
\caption{Results on SCK and DCK and comparisons to the state of the art on MSR-Action3D.}\label{tab:msr}
\end{table}
In Table~\ref{tab:msr}, we present our results on the MSR-Action3D dataset. Again, we kept all the model parameters the same as those used on the Florence3D dataset. Conforming to prior literature, we use two evaluation protocols on this dataset, namely (i) the protocol described in actionlets~\cite{wu_actionlets}, for which the authors utilize the entire dataset with its 20 classes during training and evaluation, and (ii) the approach of~\cite{li_msraction3d}, for which the authors divide the data into three subsets and report the average classification accuracy over these subsets. The SCK representation results in the state-of-the-art accuracy of $90.72\%$ and $93.52\%$ for the two evaluation protocols, respectively. Combining SCK with DCK outperforms other approaches listed in the table and yields $91.45\%$ and $93.96\%$ accuracy for the two protocols, respectively.
\vspace{0.05cm}
\noindent{\textbf{Processing Time.}}
For SCK and DCK, processing a single sequence with unoptimized MATLAB code on a single-core i5 takes 0.2s and 1.2s, respectively. Training on the full MSR Action3D dataset with SCK and DCK takes about 13 min. In comparison, extracting $SE(3)$ features \cite{vemulapalli_SE3} takes 5.3s per sequence, processing the full MSR Action3D dataset takes $\sim$50 min., and with post-processing (time warping, Fourier pyramids, etc.) this grows to about 72 min. Therefore, SCK and DCK are about $5.4\!\times$ faster.
\section{Preliminaries}
\label{sec:framework}
In this section, we review our notations and the necessary background on shift-invariant kernels and their linearizations, which will be useful for deriving kernels on 3D skeletons for action recognition.
\subsection{Tensor Notations}
\label{sec:not}
Let $\vec{\mathcal{V}}\in\mbr{d_1\times d_2\times d_3}$ denote a third-order tensor. Using Matlab style notation, we refer to the $p$-th slice of this tensor as $\vec{\mathcal{V}}_{:,:,p}$, which is a $d_1\times d_2$ matrix. For a matrix $\bm{V}\in\mbr{d_1\times d_2}$ and a vector $\mathbf{v}\in\mbr{d_3}$, the notation $\vec{\mathcal{V}}\!=\!\bm{V}\kronstack\mathbf{v}$ produces a tensor $\vec{\mathcal{V}}\!\in\!\mbr{d_1\times d_2\times d_3}$ where the $p$-th slice of $\vec{\mathcal{V}}$ is given by $\bm{V} v_p$, $v_p$ being the $p$-th dimension of $\mathbf{v}$. Symmetric third-order tensors of rank one are formed by the outer product of a vector $\mathbf{v}\in\mbr{d}$ in modes two and three. That is, a rank-one $\vec{\mathcal{V}}\in\mbr{d\times d\times d}$ is obtained from $\mathbf{v}$ as $\vec{\mathcal{V}}\!=\!({\kronstack}_3\mathbf{v}\!\triangleq\!(\mathbf{v}\vv^T)\kronstack\mathbf{v})$. Concatenation of $n$ tensors in mode $k$ is denoted as $\left[\vec{\mathcal{V}}_i\right]_{i\in\idx{n}}^{\oplus_k}$, where $\idx{n}$ is an index sequence $1,2,\cdots, n$. The Frobenius norm of a tensor is given by $\fnorm{\vec{\mathcal{V}}} = \sqrt{\sum_{i,j,k} \mathcal{V}_{ijk}^2}$, where $\mathcal{V}_{ijk}$ represents the $ijk$-th element of $\vec{\mathcal{V}}$. Similarly, the inner-product between two tensors $\vec{\mathcal{X}}$ and $\vec{\mathcal{Y}}$ is given by $\left\langle\vec{\mathcal{X}},\vec{\mathcal{Y}}\right\rangle=\sum_{ijk}\mathcal{X}_{ijk}\mathcal{Y}_{ijk}$.
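To make the notation concrete, the following is a minimal NumPy sketch; the vector $\mathbf{v}$ and its dimension are arbitrary illustrations, not values used in this work:

```python
import numpy as np

# Rank-one super-symmetric third-order tensor built from a vector v:
# the p-th slice V[:, :, p] equals (v v^T) * v[p].
v = np.array([1.0, 2.0, 3.0])
V = np.einsum('i,j,k->ijk', v, v, v)           # outer product in all three modes
assert np.allclose(V[:, :, 1], np.outer(v, v) * v[1])

# Super-symmetry: invariant under any permutation of the three indices.
assert np.allclose(V, V.transpose(2, 1, 0))

# Frobenius norm and tensor inner product, as defined above.
fro = np.sqrt((V ** 2).sum())                  # equals ||v||^3 for a rank-one tensor
inner = (V * V).sum()                          # <V, V> = ||V||_F^2
```

For a rank-one tensor the Frobenius norm factorizes as $\|\mathbf{v}\|^3$, which the sketch verifies numerically.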
\subsection{Kernel Linearization}
\label{sec:kernel_linearization}
Let $G_{\sigma}(\mathbf{u}-\mathbf{\bar{u}})=\exp(-\enorm{\mathbf{u} - \mathbf{\bar{u}}}^2/{2\sigma^2})$ denote a standard Gaussian RBF kernel centered at $\mathbf{\bar{u}}$ and having a bandwidth $\sigma$. Kernel linearization refers to rewriting this $G_{\sigma}$ as an inner-product of two infinite-dimensional feature maps. To obtain these maps, we use a fast approximation method based on probability product kernels \cite{jebara_prodkers}. Specifically, we employ the inner product of $d'$-dimensional isotropic Gaussians centered at $\mathbf{u},\mathbf{\bar{u}}\!\in\!\mbr{d'}\!$. The resulting approximation can be written as:
\begin{align}
&G_{\sigma}\!\left(\mathbf{u}\!-\!\mathbf{\bar{u}}\right)\!\!=\!\!\left(\frac{2}{\pi\sigma^2}\right)^{\!\!\frac{d'}{2}}\!\!\!\!\!\!\int\limits_{\boldsymbol{\zeta}\in\mbr{d'}}\!\!\!\!G_{\sigma/\sqrt{2}}\!\!\left(\mathbf{u}\!-\!\boldsymbol{\zeta}\right)G_{\sigma/\sqrt{2}}(\mathbf{\bar{u}}\!\!-\!\boldsymbol{\zeta})\,\mathrm{d}\boldsymbol{\zeta}.
\label{eq:gauss_lin}
\end{align}
Equation \eqref{eq:gauss_lin} is then approximated by replacing the integral with the sum over $Z$ pivots $\boldsymbol{\zeta}_1,\cdots,\boldsymbol{\zeta}_Z$, thus writing a feature map $\boldsymbol{\phi}$ as:
\begin{align}
&\boldsymbol{\phi}(\mathbf{u})=\left[{G}_{\sigma/\sqrt{2}}(\mathbf{u}-\boldsymbol{\zeta}_1),\cdots,{G}_{\sigma/\sqrt{2}}(\mathbf{u}-\boldsymbol{\zeta}_Z)\right]^T,\!\!\\
\text{ and } & G_{\sigma}(\mathbf{u}\!-\!\mathbf{\bar{u}})\approx\left<\sqrt{c}\boldsymbol{\phi}(\mathbf{u}), \sqrt{c}\boldsymbol{\phi}(\mathbf{\bar{u}})\right>,
\label{eq:gauss_lin2}
\end{align}
where $c$ represents a constant. We refer to~\eqref{eq:gauss_lin2} as the linearization of the RBF kernel.
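A numerical sketch of this linearization for a 1D input, with uniformly spaced pivots as used later in the paper; the bandwidth $\sigma=0.5$, the $Z=10$ pivots on $[-1;1]$, and the evaluation grid are illustrative assumptions rather than the cross-validated values:

```python
import numpy as np

sigma = 0.5
Z = 10
pivots = np.linspace(-1.0, 1.0, Z)     # uniformly spaced pivots
dz = pivots[1] - pivots[0]

def phi(u):
    """Feature map: RBF responses at the pivots, bandwidth sigma/sqrt(2)."""
    return np.exp(-(u - pivots) ** 2 / sigma ** 2)   # 2*(sigma/sqrt(2))^2 = sigma^2

# The constant c follows from the integral form: the Gaussian prefactor
# times the Riemann weight dz of the pivot discretization.
c = np.sqrt(2.0 / (np.pi * sigma ** 2)) * dz

grid = np.linspace(-0.6, 0.6, 25)      # stay away from the pivot boundary
err = max(abs(c * phi(u) @ phi(ub) - np.exp(-(u - ub) ** 2 / (2 * sigma ** 2)))
          for u in grid for ub in grid)
print(f'max abs. error of the linearized RBF kernel: {err:.4f}')
```

Away from the boundary of the pivot range, the finite sum over pivots closely tracks the exact RBF kernel, which is what makes small pivot counts viable in practice.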
\section{Proposed Approach}
In this section, we first formulate the problem of action recognition from 3D skeleton sequences, which precedes an exposition of our two kernel formulations for describing the actions, followed by their tensor reformulations through kernel linearization.
\subsection{Problem Formulation}
Suppose we are given a set of 3D human pose skeleton sequences, each pose consisting of $J$ body-keypoints. Further, to simplify our notations, we assume each sequence consists of $N$ skeletons, one per frame\footnote{\label{foot:foo0}We assume that all sequences have $N$ frames for simplification of presentation. Our formulations are equally applicable to sequences of arbitrary lengths \emph{e.g.},~$M$ and $N$. Therefore, we apply in practice $G_{\sigma_3}(\frac{s}{M}-\frac{t}{N})$ in Equation \eqref{eq:ker1a}.}. Mathematically, we can define such a pose sequence $\Pi$ as:
\begin{equation}
\Pi = \set{\mathbf{x}_{is}\in\mbr{3},i\in\idx{J}, s\in\idx{N}}.
\end{equation}
Further, let each such sequence $\Pi$ be associated with one of $K$ action class labels $\ell\in\idx{K}$. Our goal is to use the skeleton sequence $\Pi$ and generate an action descriptor for this sequence that can be used in a classifier for recognizing the action class. In the following, we will present two such action descriptors, namely (i) sequence compatibility kernel and (ii) dynamics compatibility kernel, which are formulated using the ideas of kernel linearization and tensor algebra. We present both these kernel formulations next.
\begin{figure}[t]
\centering
\begin{subfigure}[b]{0.210\linewidth}
\centering\includegraphics[trim=0 0 0 0, clip=true, width=2.6cm]{conc1da.pdf}
\caption{\label{fig:ker0a}}
\end{subfigure}
\begin{subfigure}[b]{0.12\linewidth}
\centering\includegraphics[trim=0 0 0 0, clip=true, width=1.42cm]{conc1db.pdf}
\caption{\label{fig:ker0a2}}
\end{subfigure}
\begin{subfigure}[b]{0.65\linewidth}
\centering\includegraphics[trim=0 0 0 0, clip=true, width=8.0cm]{conc1dd.pdf}
\caption{\label{fig:ker0b}}
\end{subfigure}
\caption{Figures \ref{fig:ker0a} and \ref{fig:ker0a2} show how SCK works -- kernel $G_{\sigma_2}$ compares exhaustively \emph{e.g.}~hand-related joint $i$ for every frame in sequence $A$ with every frame in sequence $B$. Kernel $G_{\sigma_3}$ compares exhaustively the frame indexes. Figure \ref{fig:ker0b} shows this burden is avoided by linearization -- third-order statistics on feature maps $\boldsymbol{\phi}(\mathbf{x}_{is})$ and $\mathbf{z}(s)$ for joint $i$ are captured in tensor $\vec{\mathcal{X}}_i$ and whitened by EPN to obtain $\vec{\mathcal{V}}_i$ which are concatenated over $i\!=\!1,\cdots,J$ to represent a sequence.}
\end{figure}
\subsection{Sequence Compatibility Kernel}
\label{sec:ker1}
As alluded to earlier, the main idea of this kernel is to measure the compatibility between two action sequences in terms of the similarity between their skeletons and their temporal order. To this end, we assume each skeleton is centralized with respect to one of the body-joints (say, hip). Suppose we are given two such sequences $\Pi_A$ and $\Pi_B$, each with $J$ joints, and $N$ frames. Further, let $\mathbf{x}_{is}\!\in\!\mbr{3}$ and $\mathbf{y}_{jt}\!\in\!\mbr{3}$ correspond to the body-joint coordinates of $\Pi_A$ and $\Pi_B$, respectively. We define our~\emph{sequence compatibility kernel} (SCK) between $\Pi_A$ and $\Pi_B$ as\footref{foot:foo0}:
\begin{align}
& K_S(\Pi_A,\Pi_B) = \frac{1}{\Lambda}\!\!\! \sum\limits_{(i,s)\in\mathcal{J}}\sum\limits_{(j,t)\in\mathcal{J}}\!G_{\sigma_1}(i\!-\!j)\Big(\beta_1 G_{\sigma_2}\!\left(\mathbf{x}_{is} - \mathbf{y}_{jt}\right) + \beta_2\, G_{\sigma_3}(\frac{s-t}{N})\Big)^r,\label{eq:ker1a}
\end{align}
where $\Lambda$ is a normalization constant and $\mathcal{J}=\idx{J}\times\idx{N}$. As is clear, this kernel involves three different compatibility subkernels, namely (i) $G_{\sigma_1}$, which captures the compatibility between joint-types $i$ and $j$, (ii) $G_{\sigma_2}$, capturing the compatibility between joint locations $\mathbf{x}$ and $\mathbf{y}$, and (iii) $G_{\sigma_3}$, measuring the temporal alignment of two poses in the sequences. We also introduce weighting factors $\beta_1,\beta_2\geq 0$ that adjust the importance of the body-joint compatibility against the temporal alignment, where $\beta_1+\beta_2=1$. Figures \ref{fig:ker0a} and \ref{fig:ker0a2} illustrate how this kernel works. One might wonder why we need the kernel $G_{\sigma_1}$. Note that our skeletons may be noisy, and some of the keypoints may be detected incorrectly (for example, elbows and wrists); this kernel thus allows incorporating some degree of uncertainty into the alignment of such joints. To simplify our formulations, in this paper we assume that such errors are absent from our skeletons, and thus $G_{\sigma_1}(i-j)=\delta(i-j)$. Further, the standard deviations $\sigma_2$ and $\sigma_3$ control the joint-coordinate selectivity and the temporal shift-invariance, respectively. That is, for $\sigma_3\rightarrow 0$, two sequences have to match perfectly in the temporal sense, while for $\sigma_3\rightarrow\infty$ the kernel is invariant to any permutation of the frames. As will be clear in the sequel, the parameter $r$ determines the order statistics of the kernel (we use $r=3$).
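For intuition, the SCK kernel with $G_{\sigma_1}(i-j)=\delta(i-j)$ can be evaluated directly before any linearization; in this sketch the normalization $\Lambda=JN^2$, the bandwidths, and the weights $\beta_1=\beta_2=0.5$ are illustrative assumptions, and the sequences are random stand-ins for hip-centered skeleton data:

```python
import numpy as np

def sck_naive(X, Y, sigma2=0.6, sigma3=0.5, beta=(0.5, 0.5), r=3):
    """Naive evaluation of K_S with G_{sigma1}(i - j) = delta(i - j).
    X, Y: (J, N, 3) arrays of hip-centered body-joint coordinates."""
    J, N, _ = X.shape
    b1, b2 = beta                       # beta1 + beta2 = 1
    k = 0.0
    for i in range(J):
        for s in range(N):
            for t in range(N):
                g2 = np.exp(-np.sum((X[i, s] - Y[i, t]) ** 2) / (2 * sigma2 ** 2))
                g3 = np.exp(-((s - t) / N) ** 2 / (2 * sigma3 ** 2))
                k += (b1 * g2 + b2 * g3) ** r
    return k / (J * N * N)              # illustrative choice of Lambda

rng = np.random.default_rng(0)
A = rng.standard_normal((15, 40, 3))    # J = 15 joints, N = 40 frames
B = rng.standard_normal((15, 40, 3))
kAA, kAB = sck_naive(A, A), sck_naive(A, B)
```

The cost is $O(JN^2)$ per sequence pair; the linearized tensor form below replaces this pairwise computation with a per-sequence descriptor and a single inner product.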
Next, we present linearization of our kernel using the method proposed in Section~\ref{sec:kernel_linearization} and Equation~\eqref{eq:gauss_lin2}
so that kernel $G_{\sigma_2}(\mathbf{x}-\mathbf{y})\approx \phi(\mathbf{x})^T\phi(\mathbf{y})$ (see note\footnote{\label{foot:foob}In practice, we use $G^{'}_{\sigma_2}(\mathbf{x}\!-\!\mathbf{y})\!=\!G_{\sigma_2}(x^{(x)}\!\!-\!y^{(x)})\!+\!G_{\sigma_2}(x^{(y)}\!\!-\!y^{(y)})\!+\!G_{\sigma_2}(x^{(z)}\!\!-\!y^{(z)})$ so the kernel $G^{'}_{\sigma_2}(\mathbf{x}\!-\!\mathbf{y})\approx[\phi(x^{(x)}\!); \phi(x^{(y)}\!); \phi(x^{(z)}\!)]^T\![\phi(y^{(x)}\!); \phi(y^{(y)}\!); \phi(y^{(z)}\!)]$ but for simplicity we write $G_{\sigma_2}(\mathbf{x}\!-\!\mathbf{y})\!\approx\!\phi(\mathbf{x})^T\phi(\mathbf{y})$. Note that $(x)$, $(y)$, $(z)$ are the spatial xyz-components of joints.}) while $G_{\sigma_3}(\frac{s-t}{N})\approx\mathbf{z}(s/N)^T\mathbf{z}(t/N)$. With these approximations and simplification to $G_{\sigma_1}\!$ we described above, we can rewrite our sequence compatibility kernel as:
\begin{align}
K_S(\Pi_A,\Pi_B) &= \!\frac{1}{\Lambda}\!\!\sum\limits_{i\in\idx{J}}\sum\limits_{s\in\idx{N}}\!\sum\limits_{t\in\idx{N}}\!\!\!\left(
\begin{bmatrix}
\sqrt{\beta_1}\,\boldsymbol{\phi}(\mathbf{x}_{is}),\text{ (see note\footref{foot:foob})}\\
\sqrt{\beta_2}\,\mathbf{z}(s/N)\\[3pt]
\end{bmatrix}^T\!\!\!\cdot
\begin{bmatrix}
\sqrt{\beta_1}\boldsymbol{\phi}(\mathbf{y}_{it})\\
\sqrt{\beta_2}\mathbf{z}(t/N)\\[3pt]
\end{bmatrix}\right)^r\\
&=\!\frac{1}{\Lambda}\!\!\sum\limits_{i\in\idx{J}}\sum\limits_{s\in\idx{N}}\!\sum\limits_{t\in\idx{N}}\!\!\!\left<
{\kronstack}_r\!\begin{bmatrix}
\sqrt{\beta_1}\,\boldsymbol{\phi}(\mathbf{x}_{is})\\
\sqrt{\beta_2}\,\mathbf{z}(s/N)\\[3pt]
\end{bmatrix}\!,
{\kronstack}_r\!\begin{bmatrix}
\sqrt{\beta_1}\boldsymbol{\phi}(\mathbf{y}_{it})\\
\sqrt{\beta_2}\mathbf{z}(t/N)\\[3pt]
\end{bmatrix}\right>\\
&=\!\!\!\sum\limits_{i\in\idx{J}}\!\!\left<
\!\frac{1}{\sqrt{\Lambda}}\!\!\sum\limits_{s\in\idx{N}}\!\!{\kronstack}_r\!\begin{bmatrix}
\sqrt{\beta_1}\,\boldsymbol{\phi}(\mathbf{x}_{is})\\
\sqrt{\beta_2}\mathbf{z}(s/N)\\[3pt]
\end{bmatrix}\!,
\frac{1}{\sqrt{\Lambda}}\!\!\sum\limits_{t\in\idx{N}}\!\!{\kronstack}_r\!\begin{bmatrix}
\sqrt{\beta_1}\boldsymbol{\phi}(\mathbf{y}_{it})\\
\sqrt{\beta_2}\mathbf{z}(t/N)\\[3pt]
\end{bmatrix}\right>.
\label{eq:ker1b}
\end{align}
As is clear,~\eqref{eq:ker1b} expresses $K_S(\Pi_A,\Pi_B)$ as a sum of inner-products on third-order tensors ($r=3$), as illustrated in Figure \ref{fig:ker0b}. While using the dot-product as the inner-product is a possibility, there are much richer alternatives for tensors of order $r\!\geq\!2$ that can exploit their structure or manipulate the higher-order statistics inherent in them, thus leading to better representations. An example of such a commonly encountered property is the so-called \emph{burstiness}~\cite{jegou_bursts}, the property that a given feature appears more often in a sequence than a statistically independent model would predict. A robust sequence representation should be invariant to the length of actions, \emph{e.g.}, a prolonged \emph{hand waving} represents the same action as a short \emph{hand wave}; the same is true for short versus repeated \emph{head nodding}. Eigenvalue Power Normalization (EPN)~\cite{me_tensor} is known to suppress burstiness. It acts on the higher-order statistics illustrated in Figure~\ref{fig:ker0b}. Incorporating EPN, we generalize~\eqref{eq:ker1b} as:
\begin{align}
&
\!K_S^{*}({\Pi_A},{\Pi_B})\!=\!\!\!\sum\limits_{i\in\idx{J}}\!\!\left<
\!\mygthree{\frac{1}{\sqrt{\Lambda}}\!\!\sum\limits_{s\in\idx{N}}\!\!\!{\kronstack}_r\!\!\begin{bmatrix}
\!\sqrt{\beta_1}\boldsymbol{\phi}(\mathbf{x}_{is})\\
\sqrt{\beta_2}\mathbf{z}(s/N)\\[3pt]
\end{bmatrix}}\!\!,\mygthree{\frac{1}{\sqrt{\Lambda}}\!\!\sum\limits_{t\in\idx{N}}\!\!\!{\kronstack}_r\!\!\begin{bmatrix}
\sqrt{\beta_1}\boldsymbol{\phi}(\mathbf{y}_{it})\\
\sqrt{\beta_2}\mathbf{z}(t/N)\\[3pt]
\end{bmatrix}}\!\!\right>\!,\label{eq:ker1c}
\end{align}
where the operator $\boldsymbol{\mathcal{G}}$ performs EPN by applying power normalization to the spectrum of the third-order tensor (by taking the higher-order SVD). Note that in general $K_S^{*}({\Pi_A},{\Pi_B})\!\not\approx\!K_S({\Pi_A},{\Pi_B})$ as $\boldsymbol{\mathcal{G}}$ is intended to manipulate the spectrum of $\vec{\mathcal{X}}$. The final representation, for instance for a sequence ${\Pi_A}$, takes the following form:
\begin{align}
& \vec{\mathcal{V}}_i\!=\!\mygthree{\vec{\mathcal{X}}_i}\!,\text{ where } \vec{\mathcal{X}}_i\!=\!\!\frac{1}{\sqrt{\Lambda}}\!\!\!\sum\limits_{s\in\idx{N}}\!\!\!{\kronstack}_r\!\!\begin{bmatrix}
\!\sqrt{\beta_1}\,\boldsymbol{\phi}(\mathbf{x}_{is})\\
\sqrt{\beta_2}\mathbf{z}(s/N)\\[3pt]
\end{bmatrix}\!.\label{eq:ker1d}
\end{align}
We can further replace the summation over the body-joint indexes in \eqref{eq:ker1c} by concatenating $\vec{\mathcal{V}}_i$ in ~\eqref{eq:ker1d} along the fourth tensor mode, thus defining $\vec{\mathcal{V}} = \big[\vec{\mathcal{V}}_i\big]_{i\in\idx{J}}^{\oplus_4}$. Suppose $\vec{\mathcal{V}}_A$ and $\vec{\mathcal{V}}_B$ are the corresponding fourth order tensors for $\Pi_A$ and $\Pi_B$ respectively.
Then, we obtain:
\begin{align}
& K_S^{*}({\Pi_A},{\Pi_B})=\left<\vec{\mathcal{V}}_A, \vec{\mathcal{V}}_B\right>.
\end{align}
Note that the tensors $\vec{\mathcal{X}}$ have the following properties: (i) super-symmetry $\vec{\mathcal{X}}_{i,j,k}\!=\!\vec{\mathcal{X}}_{\pi(i,j,k)}$ for indexes $i,j,k$ and their permutation given by $\pi,\;\forall\pi$, and (ii) positive semi-definiteness of every slice, that is, $\vec{\mathcal{X}}_{:,:,s}\!\in\!\semipd{d},$ for $s\!\in\!\idx{d}$. Therefore, we need to use only the upper-simplex of the tensor which consists of $\binom{d+r-1}{r}$ coefficients (which is the total size of our final representation) rather than $d^r\!$, where $d$ is the side-dimension of $\vec{\mathcal{X}}$ \emph{i.e.}, $d\!=\!3Z_2\!+\!Z_3$ (see note\footref{foot:foob}), and $Z_2$ and $Z_3$ are the numbers of pivots used in the approximation of $G_{\sigma_2}$ (see note\footref{foot:foob}) and $G_{\sigma_3}$ respectively. As we want to preserve the above listed properties in tensors $\vec{\mathcal{V}}$, we employ slice-wise EPN which is induced by the Power-Euclidean distance and involves raising matrices to the power $\gamma$. Finally, we re-stack these slices along the third mode as:
\begin{align}
& \mygthree{\vec{\mathcal{X}}}\!=\![\vec{\mathcal{X}}_{:,:,s}^{\gamma}]_{s\in\idx{d}}^{\oplus_3}, \text{ for } 0\!<\gamma\!\leq\!1.\label{eq:epn1}
\end{align}
This $ \mygthree{\vec{\mathcal{X}}}$ forms our tensor representation for the action sequence.
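A sketch of the slice-wise EPN of Equation \eqref{eq:epn1}; the matrix power of each PSD slice is taken via its eigendecomposition, and the nonnegative feature maps used to build the example tensor are hypothetical stand-ins for $\boldsymbol{\phi}$:

```python
import numpy as np

def slice_epn(X, gamma):
    """Slice-wise EPN: raise every PSD slice X[:, :, s] to the matrix
    power gamma via its eigendecomposition, then re-stack along mode 3."""
    out = np.empty_like(X)
    for s in range(X.shape[2]):
        w, U = np.linalg.eigh(X[:, :, s])
        w = np.maximum(w, 0.0) ** gamma        # clamp tiny negative round-off
        out[:, :, s] = (U * w) @ U.T
    return out

# Build a tensor as a sum of third-order outer products of nonnegative
# feature maps (standing in for phi), so every slice is PSD by construction.
rng = np.random.default_rng(0)
F = rng.random((6, 20))                        # 20 feature maps of dimension 6
X = np.einsum('in,jn,kn->ijk', F, F, F) / 20.0
V = slice_epn(X, gamma=0.36)                   # gamma as selected by cross-validation
```

Raising the slice spectra to a power $0\!<\!\gamma\!<\!1$ compresses large eigenvalues relative to small ones, which is precisely how correlated bursts are dampened, while $\gamma\!=\!1$ leaves the tensor unchanged.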
\begin{figure}[t]
\centering
\begin{subfigure}[b]{0.195\linewidth}
\centering
\includegraphics[trim=0 0 0 0, clip=true, width=2.95cm]{ker2a.pdf}
\caption{\label{fig:ker2a}}
\end{subfigure}
\begin{subfigure}[b]{0.135\linewidth}
\centering
\includegraphics[trim=0 0 0 0, clip=true, width=1.6cm]{ker2b2.pdf}
\caption{\label{fig:ker2b}}
\end{subfigure}
\begin{subfigure}[b]{0.65\textwidth}
\centering
\includegraphics[trim=0 0 0 0, clip=true, width=8.0cm]{ker2c2.pdf}
\caption{\label{fig:ker2c}}
\end{subfigure}
\caption{Figure \ref{fig:ker2a} shows that kernel $G_{\sigma'_2}$ in DCK captures spatio-temporal dynamics by measuring displacement vectors from any given body-joint to remaining joints spatially- and temporally-wise (\emph{i.e.}~see dashed lines). Figure \ref{fig:ker2b} shows that comparisons performed by $G_{\sigma'_2}$ for any selected two joints are performed all-against-all temporally-wise which is computationally expensive. Figure \ref{fig:ker2c} shows the encoding steps in the proposed linearization which overcome this burden.}
\end{figure}
\subsection{Dynamics Compatibility Kernel}
The SCK kernel that we described above captures the inter-sequence alignment, while the intra-sequence spatio-temporal dynamics is lost. In order to capture these temporal dynamics, we propose a novel dynamics compatibility kernel (DCK). To this end, we use the absolute coordinates of the joints in our kernel. Using the notations from the earlier section, for two action sequences $\Pi_A$ and $\Pi_B$, we define this kernel as:
\begin{align}
& \!\!\!K_D({\Pi_A},{\Pi_B})=\frac{1}{\Lambda}\!\!\!\!\!\sum\limits_{\substack{(i,s)\in\mathcal{J}\!,\\(i',s')\in\mathcal{J}\!,\\i'\!\!\neq\!i\!,s'\!\!\neq\!s}}\sum\limits_{\substack{(\!j,t)\in\mathcal{J}\!,\\(\!j'\!\!,t'\!)\in\mathcal{J},\\j'\!\!\neq\!j\!,t'\!\!\neq\!t}}\!\!\!\!G'_{\sigma'_1}(i\!-\!j\!, i'\!\!-\!j'\!)\,G_{\sigma'_2}\left(\left(\mathbf{x}_{is}\!-\!\mathbf{x}_{i's'}\!\right)\!-\!\left(\mathbf{y}_{jt}\!-\!\mathbf{y}_{j't'}\right)\right)\cdot\nonumber\\[-16pt]
& \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\quad\cdot G'_{\sigma'_3}(\frac{s\!-\!t}{N},\!\frac{s'\!\!-\!t'}{N})\,G'_{\sigma'_4}(s\!-\!s'\!,t\!-\!t'\!),\label{eq:ker2a}
\end{align}
where $G'_{\sigma}(\bm{\alpha},\vec{\beta})=G_{\sigma}(\bm{\alpha})G_{\sigma}(\vec{\beta})$. In comparison to the SCK kernel in~\eqref{eq:ker1a}, the DCK kernel uses the intra-sequence joint differences, thus capturing the dynamics. These dynamics are then compared to those in the other sequences. Figures~\ref{fig:ker2a}-\ref{fig:ker2c} depict schematically how this kernel captures co-occurrences. As in SCK, the first kernel $G'_{\sigma'_1}$ is used to capture sensor uncertainty in body-keypoint detection, and is assumed to be a delta function in this paper. The second kernel $G_{\sigma'_2}$ models the spatio-temporal co-occurrences of the body-joints. The temporal alignment kernel $G'_{\sigma'_3}$ encodes the temporal start and end-points $(s,s'\!)$ and $(t,t'\!)$. Finally, $G'_{\sigma'_4}$ limits contributions of dynamics between temporal points if they are distant from each other, \emph{i.e.}, if $s'\!\gg\!s$ or $t'\!\gg\!t$ and $\sigma'_4$ is small. Furthermore, similar to SCK, the standard deviations $\sigma'_2$ and $\sigma'_3$ control the selectivity over the spatio-temporal dynamics of body-joints and their temporal shift-invariance for the start and end points, respectively. As discussed for SCK, the practical extensions described in footnotes\footref{foot:foo0}\textsuperscript{,}\footref{foot:foob} apply to DCK as well.
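Before linearization, the raw DCK of Equation \eqref{eq:ker2a} can be evaluated directly, which makes its cost explicit; in this sketch $G'_{\sigma'_1}$ is a delta function as assumed above, while the bandwidths, the normalization $\Lambda$, and the random sequences are illustrative choices:

```python
import numpy as np

def g(x, sigma):
    """Gaussian RBF response for a scalar or a 3D displacement."""
    return np.exp(-np.sum(np.square(x)) / (2.0 * sigma ** 2))

def dck_naive(X, Y, s2=0.6, s3=0.5, s4=10.0):
    """Naive evaluation of K_D with G'_{sigma'_1} = delta, i.e. only
    matching joint pairs (i, i') are compared across sequences.
    X, Y: (J, N, 3) arrays. The cost is O(J^2 N^4) per sequence pair --
    exactly the burden the linearized tensor form is designed to avoid."""
    J, N, _ = X.shape
    k = 0.0
    for i in range(J):
        for ip in range(J):
            if ip == i:
                continue
            for s in range(N):
                for sp in range(N):
                    if sp == s:
                        continue
                    for t in range(N):
                        for tp in range(N):
                            if tp == t:
                                continue
                            k += (g((X[i, s] - X[ip, sp]) - (Y[i, t] - Y[ip, tp]), s2)
                                  * g((s - t) / N, s3) * g((sp - tp) / N, s3)
                                  * g(s - sp, s4) * g(t - tp, s4))
    return k / (J ** 2 * N ** 4)        # illustrative choice of Lambda

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 6, 3))      # tiny J and N: the naive cost explodes fast
B = rng.standard_normal((3, 6, 3))
kAA, kAB = dck_naive(A, A), dck_naive(A, B)
```

Even for three joints and six frames the sextuple loop is noticeable, which motivates the linearization below that turns each sequence into a fixed-size tensor computed in $O(JN^2)$ time.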
As in the previous section, we apply linearization to this kernel. Following the derivations described above, it can be shown that the linearized kernel takes the following form (see appendices for details):
\begin{align}
&\!K_D(\Pi_A,\Pi_B) =\!\!\!\!\sum\limits_{\substack{i\in\idx{J}\!,\\i'\!\in\idx{J}\!:\\i'\!\neq i}}
\!\!
\left<
\!\frac{1}{\sqrt{\Lambda}}\!\!\sum\limits_{\substack{s\in\idx{N}\!,\\s'\!\!\in\idx{N}\!:\\s'\!\!\neq\!s}}\!\!
G_{\sigma'_4}(s\!-\!s'\!)\left(\boldsymbol{\phi}(\mathbf{x}_{is}\!\!-\!\!\mathbf{x}_{i's'})
\!\cdot\!\mathbf{z}\big(\frac{s}{N}\big)^T\!\right)\!\kronstack\mathbf{z}\big(\frac{s'\!}{N}\big)\!\right.,\label{eq:ker2b}\\[-16pt]
&\qquad\qquad\qquad\qquad\qquad\qquad\;
\left.
\!\frac{1}{\sqrt{\Lambda}}\!\!\sum\limits_{\substack{t\in\idx{N}\!,\\t'\!\!\in\idx{N}\!:\\t'\!\!\neq\!t}}\!\!
G_{\sigma'_4}(t\!-\!t'\!)\Big(
\boldsymbol{\phi}(\mathbf{y}_{it}\!\!-\!\!\mathbf{y}_{i't'})
\!\cdot\!\mathbf{z}\big(\frac{t}{N}\big)^T\!\Big)\!\kronstack\mathbf{z}\big(\frac{t'\!}{N}\big)\!\right>\!.\nonumber
\end{align}
Equation~\eqref{eq:ker2b} expresses $K_D({\Pi_A},{\Pi_B})$ as a sum over inner-products on non-symmetric tensors of third order (c.f. Section \ref{sec:ker1}, where the proposed kernel results in an inner-product between super-symmetric tensors). However, we can decompose each of these tensors with a variant of EPN which involves Higher Order Singular Value Decomposition (HOSVD) into factors stored in the so-called core tensor and equalize the contributions of these factors. Intuitively, this prevents bursts in the statistically captured spatio-temporal co-occurrence dynamics of actions. For example, a long \emph{hand wave} and a short one yield different temporal statistics, that is, the prolonged action results in bursts. However, the representation for action recognition should be invariant to such cases. As in the previous section, we introduce a non-linear operator $\boldsymbol{\mathcal{G}}$ into equation \eqref{eq:ker2b} which handles this. Our final representation, for example for sequence ${\Pi_A}$, can be expressed as:
\begin{align}
& \!\!\!\!\!\vec{\mathcal{V}}_{ii'\!}\!=\!\mygthree{\vec{\mathcal{X}}_{ii'\!}}\!,\!\text{ and }\vec{\mathcal{X}}_{ii'\!}\!=\!\!\frac{1}{\sqrt{\Lambda}}\!\!\sum\limits_{\substack{s\in\idx{N}\!,\\s'\!\!\in\idx{N}\!:\\s'\!\!\neq\!s}}\!\!
G_{\sigma'_4}(s\!-\!s'\!)\left(
\boldsymbol{\phi}(\mathbf{x}_{is}\!\!-\!\!\mathbf{x}_{i's'})
\!\cdot\!\mathbf{z}\big(\frac{s}{N}\big)^T\!\right)\!\kronstack\mathbf{z}\big(\frac{s'\!}{N}\big),\!\!\label{eq:ker2d}
\end{align}
where the summation over the pairs of body-joint indexes in \eqref{eq:ker2b} becomes equivalent to the concatenation of $\vec{\mathcal{V}}_{ii'}\!$ from \eqref{eq:ker2d} along the fourth mode, such that we obtain tensor representations $\big[\vec{\mathcal{V}}_{ii'\!}\big]_{i>i'\!:\,i,i'\in\idx{J}}^{\oplus_4}\!$ for sequence ${\Pi_A}$ and $\big[\vec{\mathcal{\bar{V}}}_{ii'\!}\big]_{i>i'\!:\,i,i'\in\idx{J}}^{\oplus_4}\!$ for sequence ${\Pi_B}$. The dot-product can now be applied between these representations for comparing them. For the operator $\boldsymbol{\mathcal{G}}$, we choose HOSVD-based tensor whitening as proposed in \cite{me_tensor}. However, that approach works with super-symmetric tensors, such as the one we proposed in Section \ref{sec:ker1}. We work with the general non-symmetric case in \eqref{eq:ker2d} and use the following operator $\boldsymbol{\mathcal{G}}$:
\begin{align}
&{\left(\vec{\mathcal{E}}; \vec{A}_1,\cdots,\vec{A}_r\right)}=\hosvd(\vec{\mathcal{X}})\label{eq:rawcod3}\\
&\vec{\mathcal{\hat{E}}}=\sgn\!\left(\vec{\mathcal{E}}\right)\!\,\left|\!\,\vec{\mathcal{E}}\right|^{\gamma}\label{eq:rawcod4}\\
&\vec{\mathcal{\hat{V}}}=((\vec{\mathcal{\hat{E}}}\otimes_{1}\!\vec{A}_1)\,\cdots)\otimes_{r}\!\vec{A}_r\label{eq:rawcod5}\\
&\boldsymbol{\mathcal{G}}(\vec{\mathcal{X}})=\sgn(\vec{\mathcal{\hat{V}}})\,|\!\vec{\mathcal{\hat{V}}}|^{\gamma^{*}}\label{eq:rawcod6}
\end{align}
In the above equations, we distinguish the core tensor $\vec{\mathcal{E}}$ and its power normalized variant $\vec{\mathcal{\hat{E}}}$, whose factors are evened out by raising them to the power $0\!<\!\gamma\!\leq\!1$, the factor matrices $\vec{A}_1,\cdots,\vec{A}_r$, and the operation $\otimes_r$, which represents the so-called tensor-product in mode $r$. We refer the reader to \cite{me_tensor} for a detailed description of the above steps.
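A minimal numpy sketch of the four steps above, assuming the factor matrices $\vec{A}_1,\cdots,\vec{A}_r$ are taken as the left singular vectors of the mode unfoldings (plain, untruncated HOSVD); function names are ours. With $\gamma\!=\!\gamma^{*}\!=\!1$ the operator reduces to the identity, which serves as a sanity check.

```python
import numpy as np

def mode_unfold(X, mode):
    """Matricize tensor X along the given mode."""
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

def mode_product(X, A, mode):
    """Multiply tensor X by matrix A along the given mode."""
    Xm = np.moveaxis(X, mode, 0)
    return np.moveaxis(np.tensordot(A, Xm, axes=(1, 0)), 0, mode)

def epn_operator(X, gamma=0.5, gamma_star=0.5):
    """HOSVD -> power-normalize the core -> reconstruct -> power-normalize."""
    factors = [np.linalg.svd(mode_unfold(X, m), full_matrices=False)[0]
               for m in range(X.ndim)]
    core = X
    for m, U in enumerate(factors):
        core = mode_product(core, U.T, m)           # core tensor
    core = np.sign(core) * np.abs(core) ** gamma    # power-normalized core
    V = core
    for m, U in enumerate(factors):
        V = mode_product(V, U, m)                   # reconstruction
    return np.sign(V) * np.abs(V) ** gamma_star     # final normalization

rng = np.random.default_rng(1)
X = rng.standard_normal((3, 4, 5))
V = epn_operator(X)
```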
\section{Computational Complexity}
Non-linearized SCK with kernel SVM has complexity $\mathcal{O}(JN^2T^\rho)$ given $J$ body joints, $N$ frames per sequence, $T$ sequences, and $2\!<\!\rho\!<\!3$ which concerns the complexity of kernel SVM. Linearized SCK with linear SVM takes $\mathcal{O}(JNTZ_*^r)$ for a total of $Z_*$ pivots and tensor order $r\!=\!3$. Note that $N^2T^\rho\!\gg\!NTZ_*^r$. For $N\!=\!50$ and $Z_*\!=\!20$, this is $3.5\!\times$ (or $32\!\times$) faster than the exact kernel for $T\!=\!557$ (or $T\!=\!5000$) used in our experiments. Non-linearized DCK with kernel SVM has complexity $\mathcal{O}(J^2N^4T^\rho)$ while linearized DCK takes $\mathcal{O}(J^2N^2TZ^3)$ for $Z$ pivots per kernel, \emph{e.g.}~$Z\!=\!Z_2\!=\!Z_3$ given $G_{\sigma'_2}$ and $G_{\sigma'_3}$. As $N^4T^\rho\!\gg\!N^2TZ^3$, the linearization is $\sim\!11000\times$ faster than the exact kernel, for say $Z\!=\!5$. Note that EPN incurs negligible cost (see appendices for details).
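The quoted speed-ups follow directly from the complexity ratios. The sketch below takes $\rho\!=\!2$, the conservative end of the quoted range, so the exact figures may differ slightly for larger $\rho$:

```python
def speedup_sck(N, T, Z, r=3, rho=2.0):
    """Ratio of kernel-SVM cost N^2 T^rho to linearized cost N T Z^r (per joint)."""
    return (N ** 2 * T ** rho) / (N * T * Z ** r)

def speedup_dck(N, T, Z, rho=2.0):
    """Ratio of kernel-SVM cost N^4 T^rho to linearized cost N^2 T Z^3 (per pair)."""
    return (N ** 4 * T ** rho) / (N ** 2 * T * Z ** 3)

print(speedup_sck(50, 557, 20))    # roughly 3.5x
print(speedup_sck(50, 5000, 20))   # roughly 31x; larger for rho > 2
print(speedup_dck(50, 557, 5))     # roughly 1.1e4x
```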
\section{Introduction}
\label{sec:intro}
Human action recognition is a central problem in computer vision with potential impact in surveillance, human-robot interaction, elderly assistance systems, and gaming, to name a few. While there have been significant advancements in this area over the past few years, action recognition in unconstrained settings still remains a challenge. There has been research to simplify the problem by moving from RGB cameras to more sophisticated sensors such as Microsoft Kinect that can localize human body-parts and produce moving 3D skeletons~\cite{shotton2013real}; these skeletons are then used for recognition. Unfortunately, these skeletons are often noisy due to the difficulty in localizing body-parts, self-occlusions, and sensor range errors, thus necessitating higher-order reasoning on these 3D skeletons for action recognition.
There have been several approaches suggested in the recent past to improve recognition performance of actions from such noisy skeletons. These approaches can be mainly divided into two perspectives, namely (i) generative models that assume the skeleton points are produced by a latent dynamic model~\cite{turaga2009locally} corrupted by noise and (ii) discriminative approaches that generate compact representations of sequences on which classifiers are trained~\cite{presti20153d}. Due to the huge configuration space of 3D actions and the unavailability of sufficient training data, discriminative approaches have been the trend in recent years for this problem. In this line of research, the main idea has been to compactly represent the spatio-temporal evolution of 3D skeletons, and later train classifiers on these representations to recognize the actions. Fortunately, there is a definitive structure to motions of 3D joints relative to each other due to the connectivity and length constraints of body-parts. Such constraints have been used to model actions; examples include Lie algebras~\cite{vemulapalli_SE3}, positive definite matrices~\cite{harandi2014bregman,hussein2013human}, torus manifolds~\cite{elgammal2009tracking}, and Hanklet representations~\cite{li2012cross}, among several others. While modeling actions with explicit manifold assumptions can be useful, it is computationally expensive.
In this paper, we present a novel methodology for action representation from 3D skeleton points that avoids any manifold assumptions on the data representation and instead captures the higher-order statistics of how the body-joints relate to each other in a given action sequence. To this end, our scheme combines positive definite kernels and higher-order tensors, with the goal of obtaining rich and compact representations. Our scheme benefits from using non-linear kernels such as radial basis functions (RBF), and it can also capture higher-order data statistics and the complexity of action dynamics.
We present two such kernel-tensor representations for the task. Our first representation, the \emph{sequence compatibility kernel} (SCK), captures the spatio-temporal compatibility of body-joints between two sequences. To this end, we present an RBF kernel formulation that jointly captures the spatial and temporal similarity of each body-pose (normalized with respect to the hip position) in a sequence against those in another. We show that tensors generated from third-order outer-products of the linearizations of these kernels are a simple yet powerful representation capturing higher-order co-occurrence statistics of body-parts and yield high classification confidences.
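The role of the third-order outer products can be illustrated on placeholder features: averaging $\mathbf{v}\otimes\mathbf{v}\otimes\mathbf{v}$ over frames yields a super-symmetric tensor whose inner product between two sequences equals a mean cubed dot-product, i.e., an order-3 polynomial kernel on the underlying features. The dimensions below are arbitrary toy values.

```python
import numpy as np

def sck_tensor(features):
    """Average of third-order outer products v (x) v (x) v over frames."""
    d = features.shape[1]
    V = np.zeros((d, d, d))
    for v in features:
        V += np.einsum('i,j,k->ijk', v, v, v)
    return V / len(features)

rng = np.random.default_rng(0)
fa = rng.standard_normal((10, 4))   # sequence A: 10 frames, 4-dim features
fb = rng.standard_normal((12, 4))   # sequence B: 12 frames
VA, VB = sck_tensor(fa), sck_tensor(fb)
```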
Our second representation, termed the \emph{dynamics compatibility kernel} (DCK), aims at representing the spatio-temporal dynamics of each sequence explicitly.
We present a novel RBF kernel formulation that captures the similarity between a pair of body-poses in a given sequence explicitly, and then compares it against such body-pose pairs in other sequences. As might be expected, such spatio-temporal modeling could be expensive due to the volumetric nature of space and time. However, we show that using an appropriate kernel model can shrink the time-related variable into a small constant-size representation after kernel linearization. With this approach, we can model both spatial and temporal variations in the form of co-occurrences which could otherwise have been prohibitive.
We further show through experiments that the above two representations in fact capture complementary statistics regarding the actions, and combining them leads to significant benefits. We present experiments on three standard datasets for the task, namely (i) UTKinect-Actions~\cite{xia_utkinect}, (ii) Florence3D-Actions~\cite{seidenari_florence3d}, and (iii) MSR-Action3D \cite{li_msraction3d} datasets and demonstrate state-of-the-art accuracy.
To summarize, the main contributions of this paper are (i) introduction of the sequence and dynamics compatibility kernels for capturing spatio-temporal evolution of body-joints for 3D skeleton based action sequences, (ii) derivations of the linearizations of these kernels, and (iii) their tensor reformulations. We review the related literature next.
\section{Related Work}
\label{sec:related_work}
The problem of skeleton based action recognition has received significant attention over the past decades. Interested readers may refer to useful surveys~\cite{presti20153d} on the topic. In the sequel, we will review some of the more recent related approaches to the problem.
In this paper, we focus on action recognition datasets that represent a human body as an articulated set of connected body-joints that evolve in time~\cite{zatsiorsky_body}. A temporal evolution of the human skeleton is very informative for action recognition as shown by Johansson in his seminal experiment involving the moving lights display~\cite{johansson_lights}. At the simplest level, the human body can be represented as a set of 3D points corresponding to body-joints such as elbow, wrist, knee, ankle, \emph{etc}. Action dynamics has been modeled using the motion of such 3D points in~\cite{hussein_action,lv_3daction}, using joint orientations with respect to a reference axis~\cite{parameswaran_viewinvariance} and even relative body-joint positions~\cite{wu_actionlets,yang_eigenjoints}. In contrast, we focus on representing these 3D body-joints by kernels whose linearization results in higher-order tensors capturing complex statistics. Noteworthy are also parts-based approaches that additionally consider the connected body segments~\cite{yacoob_activities,ohn_hog2,ofli_infjoints,vemulapalli_SE3}.
Our work also differs from previous works in the way it handles the temporal domain. 3D joint locations are modeled as a temporal hierarchy of coefficients in \cite{hussein_action}. Pairwise relative positions of joints were modeled in \cite{wu_actionlets} and combined with a hierarchy of Fourier coefficients to capture temporal evolution of actions. Moreover, this approach uses multiple kernel learning to select discriminative joint combinations. In \cite{yang_eigenjoints}, the relative joint positions and their temporal displacements are modeled with respect to the initial frame. In \cite{vemulapalli_SE3}, the displacements and angles between the body parts are represented as a collection of matrices belonging to the special Euclidean group SE(3). The temporal domain is handled by discrete time warping and Fourier temporal pyramid matching on a sequence of such matrices. In contrast, we model the temporal domain with a single RBF kernel providing invariance to local temporal shifts and avoid expensive techniques such as time warping and multiple kernel learning.
Our scheme also differs from prior works such as kernel descriptors \cite{ker_des} that aggregate orientations of gradients for recognition. Their approach exploits sums over the product of at most two RBF kernels handling two cues \emph{e.g.}, gradient orientations and spatial locations, which are later linearized by Kernel PCA and Nystr\"{o}m techniques. Similarly, convolutional kernel networks \cite{ckn} consider stacked layers of a variant of kernel descriptors \cite{ker_des}.
The kernel trick was utilized for action recognition in kernelized covariances \cite{cavazza_kercov}, which are obtained in a Nystr\"{o}m-like process. A time series kernel \cite{gaidon_timekern} between auto-correlation matrices is proposed to capture spatio-temporal auto-correlations.
In contrast, our scheme allows sums over several multiplicative and additive RBF kernels, thus, it allows handling multiple input cues to build a complex representation. We show how to capture higher-order statistics by linearizing a polynomial kernel and avoid evaluating costly kernels directly in contrast to kernel trick.
Third-order tensors have been found to be useful for several other vision tasks. For example, in \cite{tensoraction2007}, spatio-temporal third-order tensors on videos are proposed for action analysis, non-negative tensor factorization is used for image denoising in~\cite{shashua2005non}, tensor textures are proposed for texture rendering in~\cite{vasilescu2004tensortextures}, and higher order tensors are used for face recognition in~\cite{vasilescu2002multilinear}. A survey of multi-linear algebraic methods for tensor subspace learning and applications is available in~\cite{lu2011survey}. These applications use a single tensor, while our goal is to use the tensors as data descriptors similar to~\cite{me_tensor2,me_tensor,sparse_tensor_cvpr,zhao2012comprehensive} for image recognition tasks. However, in contrast to these methods, we explore the possibility of using third-order representations for 3D action recognition, which poses a different set of challenges.
\section{Linearization of Dynamics Compatibility Kernel}
In what follows, we derive the linearization of DCK. Let us remind that $G_{\sigma}(\mathbf{u}-\mathbf{\bar{u}})=\exp(-\enorm{\mathbf{u} - \mathbf{\bar{u}}}^2/{2\sigma^2})$, $G'_{\sigma}(\bm{\alpha},\vec{\beta})=G_{\sigma}(\bm{\alpha})G_{\sigma}(\vec{\beta})$ and $G_{\sigma}(\mathbf{i}-\mathbf{j})=\delta(\mathbf{i}-\mathbf{j})$ if $\sigma\!\rightarrow\!0$, therefore $\delta(\boldsymbol{0})=1$ and $\delta(\mathbf{u})=0$ if $\mathbf{u}\!\neq\!\boldsymbol{0}$. Moreover, $\Lambda=J^2$ is a normalization constant and $\mathcal{J}=\idx{J}\times\idx{N}$. We remind that kernel $G_{\sigma'_2}(\mathbf{x}-\mathbf{y})\approx \phi(\mathbf{x})^T\phi(\mathbf{y})$ while $G_{\sigma'_3}(\frac{s-t}{N})\approx\mathbf{z}(s/N)^T\mathbf{z}(t/N)$. Therefore, we obtain:
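The approximations $G_{\sigma'_2}(\mathbf{x}-\mathbf{y})\approx \phi(\mathbf{x})^T\phi(\mathbf{y})$ can be realized in several ways. As one illustrative option (random Fourier features — not necessarily the pivot-based scheme employed in this paper), a Gaussian RBF kernel is approximated by an explicit finite-dimensional feature map:

```python
import numpy as np

def rff(x, W, b):
    """Random Fourier feature map: z(x)^T z(y) ~ exp(-||x-y||^2 / (2 sigma^2))."""
    return np.sqrt(2.0 / len(b)) * np.cos(W @ x + b)

rng = np.random.default_rng(0)
d, D, sigma = 3, 8192, 0.8
W = rng.standard_normal((D, d)) / sigma      # rows ~ N(0, I / sigma^2)
b = rng.uniform(0.0, 2.0 * np.pi, D)

x, y = rng.standard_normal(d), rng.standard_normal(d)
exact = np.exp(-np.sum((x - y) ** 2) / (2.0 * sigma ** 2))
approx = rff(x, W, b) @ rff(y, W, b)
```

The approximation error shrinks as $1/\sqrt{D}$, so a few thousand features already give a close match in this toy setting.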
{
\begin{align}
& \!K_D({\Pi_A},{\Pi_B})=\nonumber\\
&\!\!=\!\frac{1}{\Lambda}\!\!\!\!\!\sum\limits_{\substack{(i,s)\in\mathcal{J}\!,\\(i',s')\in\mathcal{J}\!,\\i'\!\!\neq\!i\!,s'\!\!\neq\!s}}\sum\limits_{\substack{(\!j,t)\in\mathcal{J}\!,\\(\!j'\!\!,t'\!)\in\mathcal{J},\\j'\!\!\neq\!j\!,t'\!\!\neq\!t}}\!\!\!\!G'_{\sigma'_1}(i\!-\!j\!, i'\!\!-\!j'\!)\,G_{\sigma'_2}\left(\left(\mathbf{x}_{is}\!-\!\mathbf{x}_{i's'}\!\right)\!-\!\left(\mathbf{y}_{jt}-\mathbf{y}_{j't'}\right)\right)G'_{\sigma'_3}(\frac{s-t}{N},\!\frac{s'-t'}{N})\,\cdot\nonumber\\[-24pt]
&\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\;\;\;\cdot G'_{\sigma'_4}(s\!-\!s'\!,t\!-\!t'\!)\nonumber\\[6pt]
&\!\!=\!\frac{1}{\Lambda}\!\!\sum\limits_{\substack{i\in\idx{J}\!,\\i'\!\in\idx{J}\!:\\i'\!\neq i}}\sum\limits_{\substack{s\in\idx{N}\!,\\s'\!\!\in\idx{N}\!:\\s'\!\!\neq\!s}}\sum\limits_{\substack{t\in\idx{N}\!,\\t'\!\!\in\idx{N}\!:\\t'\!\!\neq\!t}}\!G_{\sigma'_2}\big(\!\left(\mathbf{x}_{is}\!-\!\mathbf{x}_{i's'}\!\right)\!-\!\left(\mathbf{y}_{jt}-\mathbf{y}_{j't'}\right)\!\big)\,G_{\sigma'_3}\big(\frac{s-t}{N}\big)G_{\sigma'_3}\big(\frac{s'-t'}{N}\big)\cdot{\RaiseBiggBar{-8pt}{_{\substack{\\[-10pt]j\!=\!i\\j'\!\!=\!i'\!}}}}\nonumber\\[-24pt]
&\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\quad\;\cdot G_{\sigma'_4}(s\!-\!s'\!)\,G_{\sigma'_4}(t\!-\!t'\!)\nonumber\\[6pt]
&\!\!\approx\!\frac{1}{\Lambda}\!\!\sum\limits_{\substack{i\in\idx{J}\!,\\i'\!\in\idx{J}\!:\\i'\!\neq i}}\sum\limits_{\substack{s\in\idx{N}\!,\\s'\!\!\in\idx{N}\!:\\s'\!\!\neq\!s}}\sum\limits_{\substack{t\in\idx{N}\!,\\t'\!\!\in\idx{N}\!:\\t'\!\!\neq\!t}}\!\boldsymbol{\phi}\left(\mathbf{x}_{is}\!-\!\mathbf{x}_{i's'}\!\right)^T\!\boldsymbol{\phi}\left(\mathbf{y}_{it}-\mathbf{y}_{i't'}\right)\!\cdot\!\mathbf{z}\big(\frac{s}{N}\big)^T\!\mathbf{z}\big(\frac{t}{N}\big)\!\cdot\!\mathbf{z}\big(\frac{s'\!}{N}\big)^T\!\mathbf{z}\big(\frac{t'\!}{N}\big)\cdot\nonumber\\[-24pt]
&\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\quad\;\cdot G_{\sigma'_4}(s\!-\!s'\!)\,G_{\sigma'_4}(t\!-\!t'\!)\nonumber\\[6pt]
&\!\!=\!\frac{1}{\Lambda}\!\!\sum\limits_{\substack{i\in\idx{J}\!,\\i'\!\in\idx{J}\!:\\i'\!\neq i}}\sum\limits_{\substack{s\in\idx{N}\!,\\s'\!\!\in\idx{N}\!:\\s'\!\!\neq\!s}}\sum\limits_{\substack{t\in\idx{N}\!,\\t'\!\!\in\idx{N}\!:\\t'\!\!\neq\!t}}\!\!\left<\!G_{\sigma'_4}(s\!-\!s'\!)\left(\boldsymbol{\phi}(\mathbf{x}_{is}\!\!-\!\!\mathbf{x}_{i's'})
\!\cdot\!\mathbf{z}\big(\frac{s}{N}\big)^T\!\right)\!\kronstack\mathbf{z}\big(\frac{s'\!}{N}\big)\!\right.,\nonumber\\[-24pt]
&\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad
\left.
G_{\sigma'_4}(t\!-\!t'\!)\Big(\boldsymbol{\phi}(\mathbf{y}_{it}\!\!-\!\!\mathbf{y}_{i't'})
\!\cdot\!\mathbf{z}\big(\frac{t}{N}\big)^T\!\Big)\!\kronstack\mathbf{z}\big(\frac{t'\!}{N}\big)\!\right>\nonumber\\
&\!\!=\!\!\!\!\sum\limits_{\substack{i\in\idx{J}\!,\\i'\!\in\idx{J}\!:\\i'\!\neq i}}
\!\!
\left<
\!\frac{1}{\sqrt{\Lambda}}\!\!\sum\limits_{\substack{s\in\idx{N}\!,\\s'\!\!\in\idx{N}\!:\\s'\!\!\neq\!s}}\!\!
G_{\sigma'_4}(s\!-\!s'\!)\left(\boldsymbol{\phi}(\mathbf{x}_{is}\!\!-\!\!\mathbf{x}_{i's'})
\!\cdot\!\mathbf{z}\big(\frac{s}{N}\big)^T\!\right)\!\kronstack\mathbf{z}\big(\frac{s'\!}{N}\big)\!\right.,\nonumber\\[-24pt]
&\qquad\qquad\qquad\qquad\qquad\!
\left.
\!\frac{1}{\sqrt{\Lambda}}\!\!\sum\limits_{\substack{t\in\idx{N}\!,\\t'\!\!\in\idx{N}\!:\\t'\!\!\neq\!t}}\!\!
G_{\sigma'_4}(t\!-\!t'\!)\,\Big(\boldsymbol{\phi}(\mathbf{y}_{it}\!\!-\!\!\mathbf{y}_{i't'})
\!\cdot\!\mathbf{z}\big(\frac{t}{N}\big)^T\!\Big)\!\kronstack\mathbf{z}\big(\frac{t'\!}{N}\big)\!\right>\!.
\label{eq:supp1}
\end{align}
Equation \eqref{eq:supp1} expresses $K_D({\Pi_A},{\Pi_B})$ as a sum over dot-products on third-order non-symmetric tensors. We introduce the operator $\boldsymbol{\mathcal{G}}$ into Equation \eqref{eq:supp1} to amend the dot-product with a distance which can handle burstiness, and we obtain a modified kernel:
\begin{align}
& \!\!\!\!\!\!K_D^{*}({\Pi_A},{\Pi_B})\
=\!\!\!\!\sum\limits_{\substack{i\in\idx{J}\!,\\i'\!\in\idx{J}\!:\\i'\!\neq i}}
\!\!
\left<
\!\boldsymbol{\mathcal{G}}\bigg(\!\frac{1}{\sqrt{\Lambda}}\!\!\sum\limits_{\substack{s\in\idx{N}\!,\\s'\!\!\in\idx{N}\!:\\s'\!\!\neq\!s}}\!\!
G_{\sigma'_4}(s\!-\!s'\!)\,\left(\boldsymbol{\phi}(\mathbf{x}_{is}\!\!-\!\!\mathbf{x}_{i's'})
\!\cdot\!\mathbf{z}\big(\frac{s}{N}\big)^T\!\right)\!\kronstack\mathbf{z}\big(\frac{s'\!}{N}\big)\!\right.\!\!\bigg),\nonumber\\[-6pt]
&\qquad\qquad\qquad\qquad\!\!
\boldsymbol{\mathcal{G}}\bigg(\!\left.
\!\frac{1}{\sqrt{\Lambda}}\!\!\sum\limits_{\substack{t\in\idx{N}\!,\\t'\!\!\in\idx{N}\!:\\t'\!\!\neq\!t}}\!\!
G_{\sigma'_4}(t\!-\!t'\!)\,\Big(\boldsymbol{\phi}(\mathbf{y}_{it}\!\!-\!\!\mathbf{y}_{i't'})
\!\cdot\!\mathbf{z}\big(\frac{t}{N}\big)^T\!\Big)\!\kronstack\mathbf{z}\big(\frac{t'\!}{N}\big)\!\bigg)\!\right>\!.\!\!
\label{eq:supp2}
\end{align}
Based on Equation \eqref{eq:supp2}, we introduce the following notation:
\begin{align}
& \!\!\!\!\vec{\mathcal{V}}_{ii'\!}\!=\!\mygthree{\vec{\mathcal{X}}_{ii'\!}}\!,\!\text{ and }\vec{\mathcal{X}}_{ii'\!}\!=\!\!\frac{1}{\sqrt{\Lambda}}\!\!\sum\limits_{\substack{s\in\idx{N}\!,\\s'\!\!\in\idx{N}\!:\\s'\!\!\neq\!s}}\!\!
G_{\sigma'_4}(s\!-\!s'\!)\left(
\boldsymbol{\phi}(\mathbf{x}_{is}\!\!-\!\!\mathbf{x}_{i's'})
\!\cdot\!\mathbf{z}\big(\frac{s}{N}\big)^T\!\right)\!\kronstack\mathbf{z}\big(\frac{s'\!}{N}\big),
\label{eq:supp3}
\end{align}
where the summation over the pairs of body-joints in Equation \eqref{eq:supp2} is replaced by the concatenation along the fourth mode to obtain representations $\big[\vec{\mathcal{V}}_{ii'\!}\big]_{i>i'\!:\,i,i'\in\idx{J}}^{\oplus_4}$ and $\big[\vec{\mathcal{\bar{V}}}_{ii'\!}\big]_{i>i'\!:\,i,i'\in\idx{J}}^{\oplus_4}$ for ${\Pi_A}$ and ${\Pi_B}$. Thus, $K_D^{*}$ becomes:
\begin{align}
& K_D^{*}({\Pi_A},{\Pi_B})=\left<\sqrt{2}\big[\vec{\mathcal{V}}_{ii'\!}\big]_{i>i'\!:\,i,i'\in\idx{J}}^{\oplus_4}\!,\sqrt{2}\big[\vec{\mathcal{\bar{V}}}_{ii'\!}\big]_{i>i'\!:\,i,i'\in\idx{J}}^{\oplus_4}\right>\label{eq:supp4}
\end{align}
As Equation \eqref{eq:supp4} suggests, we avoid repeating the same computation when evaluating our representations, \emph{e.g.}, we stack only unique pairs of body-joints $i\!>\!i'\!$. Moreover, we also ensure we execute computations temporally only for $s\!>\!s'\!$. In practice, we have to evaluate only $\binom{JN}{2}$ unique spatio-temporal pairs in Equation \eqref{eq:supp4} rather than a naive $J^2N^2$ per sequence. The final representation is of size $Z'_2\!\cdot\!\binom{JZ'_3\!}{2}$, where $Z'_2$ and $Z'_3$ are the numbers of pivots used for the approximation of $G_{\sigma'_2}\!$ and $G_{\sigma'_3}$.
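The saving from stacking only unique pairs is easy to quantify; the joint and frame counts below are purely illustrative:

```python
from math import comb

J, N = 15, 40                    # body joints and frames (illustrative values)
naive = J ** 2 * N ** 2          # all ordered spatio-temporal pairs
unique = comb(J * N, 2)          # unique unordered pairs actually evaluated
print(naive, unique)             # the unique count is roughly half the naive one
```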
We assume that all sequences have $N$ frames to simplify the presentation. Our formulations are equally applicable to sequences of arbitrary lengths, \emph{e.g.},~$M$ and $N$. Thus, in practice we apply $G'_{\sigma'_3}(\frac{s}{M}\!-\!\frac{t}{N},\frac{s'}{M}\!-\!\frac{t'}{N})$ and $\Lambda\!=\!J^2MN$ in Equation \eqref{eq:supp1}.
In practice, we use $G^{'}_{\sigma'_2}(\mathbf{x}\!-\!\mathbf{y})\!=\!G_{\sigma'_2}(x^{(x)}\!\!-\!y^{(x)})\!+\!G_{\sigma'_2}(x^{(y)}\!\!-\!y^{(y)})\!+\!G_{\sigma'_2}(x^{(z)}\!\!-\!y^{(z)})$ so the kernel $G^{'}_{\sigma'_2}(\mathbf{x}\!-\!\mathbf{y})\approx[\phi(x^{(x)}\!); \phi(x^{(y)}\!); \phi(x^{(z)}\!)]^T\![\phi(y^{(x)}\!); \phi(y^{(y)}\!); \phi(y^{(z)}\!)]$ but for simplicity we write $G_{\sigma'_2}(\mathbf{x}\!-\!\mathbf{y})\!\approx\!\phi(\mathbf{x})^T\phi(\mathbf{y})$. Note that $(x)$, $(y)$, $(z)$ are the spatial xyz-components of displacement vectors \emph{e.g.}, $\mathbf{x}_{is}\!-\!\mathbf{x}_{i's'}$.
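The identity behind splitting the kernel over the xyz-components — a sum of kernels corresponds to a concatenation of their feature maps — can be verified in one line (the per-component feature vectors below are arbitrary placeholders):

```python
import numpy as np

rng = np.random.default_rng(0)
px = [rng.standard_normal(5) for _ in range(3)]   # phi of x's three components
py = [rng.standard_normal(5) for _ in range(3)]   # phi of y's three components
sum_of_kernels = sum(a @ b for a, b in zip(px, py))
concat_dot = np.concatenate(px) @ np.concatenate(py)
```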
\section{Positive Definiteness of SCK and DCK}
SCK and DCK utilize sums over products of RBF subkernels. It is known from \cite{taylor_kermet} that sums, products and linear combinations (for non-negative weights) of positive definite kernels result in positive definite kernels.
Moreover, subkernel $G_{\sigma'_2}\left(\left(\mathbf{x}_{is}\!-\!\mathbf{x}_{i's'}\!\right)\!-\!\left(\mathbf{y}_{jt}-\mathbf{y}_{j't'}\right)\right)$ employed by DCK in Equation \eqref{eq:supp1} (top) can be rewritten as:
\begin{align}
& G_{\sigma'_2}\left(\mathbf{z}_{isi's'}\!-\!\mathbf{z}'_{jtj't'}\!\right)\text{ where }\mathbf{z}_{isi's'}\!=\!\mathbf{x}_{is}\!-\!\mathbf{x}_{i's'}\text{ and }\mathbf{z}'_{jtj't'}\!=\!\mathbf{y}_{jt}-\mathbf{y}_{j't'}.\label{eq:supp5}
\end{align}
The RBF kernel $G_{\sigma'_2}$ is positive definite by definition and the mappings from $\mathbf{x}_{is}$ and $\mathbf{x}_{i's'}$ to $\mathbf{z}_{isi's'}$ and from $\mathbf{y}_{jt}$ and $\mathbf{y}_{j't'}$ to $\mathbf{z}'_{jtj't'}$, respectively, are unique. Therefore, the entire kernel is positive definite.
Lastly, whitening performed on SCK also results in a positive (semi)definite kernel as we employ the Power-Euclidean kernel: if $\mathbf{X}$ is positive (semi)definite then $\mathbf{X}^\gamma$ remains positive (semi)definite for $0\!<\!\gamma\!\leq\!1$, because $\mathbf{X}^\gamma\!=\!\bm{U}\diag(\bm{\lambda})^\gamma\bm{U}^T$ and element-wise raising of the eigenvalues to the power of $\gamma$ preserves $\diag(\bm{\lambda})^\gamma\!\geq\!0$. Therefore, the sum over dot-products of positive (semi)definite autocorrelation matrices raised to the power of $\gamma$ remains positive (semi)definite.
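This closure under the Power-Euclidean map can be checked numerically. The autocorrelation matrix below is a toy example built as $\mathbf{A}\mathbf{A}^T\!$, and the eigenvalue clipping only guards against floating-point round-off:

```python
import numpy as np

def matrix_power_gamma(X, gamma):
    """Raise a symmetric PSD matrix to the power gamma via its eigenvalues."""
    lam, U = np.linalg.eigh(X)
    lam = np.clip(lam, 0.0, None)       # guard tiny negative round-off
    return (U * lam ** gamma) @ U.T     # U diag(lam^gamma) U^T

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 4))
X = A @ A.T                              # PSD autocorrelation matrix (rank <= 4)
Y = matrix_power_gamma(X, 0.5)
```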
\section{Complexity}
Non-linearized SCK with kernel SVM has complexity $\mathcal{O}(JN^2T^\rho)$ given $J$ body joints, $N$ frames per sequence, $T$ sequences, and $2\!<\!\rho\!<\!3$ which concerns the complexity of kernel SVM. Linearized SCK with linear SVM has complexity $\mathcal{O}(JNTZ_*^r)$ for a total of $Z_*$ pivots and a tensor of order $r\!=\!3$. As $N^2T^\rho\!\gg\!NTZ_*^r$, for $N\!=\!50$ and $Z_*\!=\!20$, \emph{e.g.}~$Z_*\!=\!3Z_2\!+\!Z_3$ given $G_{\sigma'_2}$ and $G_{\sigma'_3}$, linearization is $3.5\!\times$ (or $32\!\times$) faster than the exact kernel if $T\!=\!557$ (or $T\!=\!5000$, respectively).
Non-linearized DCK with kernel SVM has complexity $\mathcal{O}(J^2N^4T^\rho)$. Linearized DCK with SVM enjoys $\mathcal{O}(J^2N^2TZ^3)$ for $Z$ pivots per kernel, \emph{e.g.}~$Z\!=\!Z_2\!=\!Z_3$ given $G_{\sigma'_2}$ and $G_{\sigma'_3}$. As $N^4T^\rho\!\gg\!N^2TZ^3$, the linearization is $\sim\!11000\times$ faster than the exact kernel, for say $Z\!=\!5$.
Slice-wise EPN applied to SCK has negligible cost $\mathcal{O}(JTZ_*^{\omega+1})$, where $2\!<\!\omega\!<\!2.376$ concerns complexity of eigenvalue decomposition applied to each tensor slice.
EPN applied to DCK utilizes HOSVD and results in complexity $\mathcal{O}(J^2TZ^4)$. As HOSVD is performed by truncated SVD on matrices obtained from unfolding $\vec{\mathcal{V}}_{ii'\!}\in\mbr{Z\times Z\times Z}\!$ along a chosen mode, $\mathcal{O}(Z^4)$ represents the complexity of truncated SVD on matrices $\bm{V}_{ii'\!}\in\mbr{Z\times Z^2}\!$ which can attain rank less than or equal to $Z$.
\comment{
\section{Third-order Statistics -- illustration}
\begin{figure}[b]
\centering
\includegraphics[trim=0 3 0 25, clip=true, width=4.7cm]{ker1b.pdf}
\caption{Illustration of third-order statistics in SCK. Linearized components $\boldsymbol{\phi}(x)$, $\boldsymbol{\phi}(y)$, $\boldsymbol{\phi}(z)$ and time $\mathbf{z}(t)$ denoted as ($\circ$, $\scriptscriptstyle\square$, $\scriptstyle\triangledown$, $\scriptscriptstyle+$) are captured in third-order tensor \emph{e.g.}, triplets ($\circ\scriptscriptstyle\square\scriptstyle\triangledown$) and ($\circ\scriptscriptstyle\square\scriptscriptstyle+$). This exposes SVM to rich body-joint statistics.
}
\end{figure}
}
}
\section{Introduction}
\subsection{The Standard Model and New Physics}
Theoretical interest in the rare decay $B \to K^*\gamma$ as a test of
the Standard Model has been renewed by the experimental results of the
CLEO collaboration~\cite{cleo:evidence-for-penguins}. For the first
time, this mode has been positively identified and a preliminary
determination of its branching ratio given.
The radiative decays of the $B$ meson are remarkable for several
reasons. The decay $B \to K^*\gamma$ arises from the
flavour-changing quark-level process $b \to s\gamma$, which occurs
through penguin diagrams at one-loop in the Standard Model. As a
result, the decay is a purely quantum effect and a subtle
test of the Standard Model. The process is also sensitive to new
physics appearing through virtual particles in the internal
loops. Existing bounds on the $b \to s\gamma$ branching ratio have
been used to place constraints on supersymmetry (SUSY)~%
\cite{bertolini:supersymmetry,%
oshimo:supersymmetry,%
barbieri:supersymmetry,%
lopez:supersymmetry,%
garisto:supersymmetry,%
diaz:supersymmetry,%
borzumati:supersymmetry}
and other extensions of the Standard Model (SM)~%
\cite{rizzo:two-higgs-doublet,%
hou:fourth-generation}.
A comprehensive review of these results can be found
in~\cite{hewett:top-ten}. Finally, it is also remarkable that this
rare process has a sufficiently large branching ratio to be detected
experimentally. Thus, {\it accurate\/} experimental measurements and
{\it accurate\/} theoretical calculations of these decays could soon
probe new physics at comparatively low energies.
In order to compare the experimental branching ratio with a
theoretical prediction it is necessary to know the relevant hadronic
matrix elements. These have been estimated using a wide range of
methods, including relativistic and nonrelativistic quark models~%
\cite{deshpande:rel-quark-model,%
odonnel:rel-quark-model,%
altomari:nonrel-quark-model},
two-point and three-point QCD sum rules~%
\cite{dominguez:2pt-sum-rules,%
aliev:2pt-sum-rules,%
ball:3pt-sum-rules,%
colangelo:3pt-sum-rules-plb,%
ali:3pt-sum-rules,%
narison:3pt-sum-rules}
and heavy quark symmetry~\cite{ali:heavy-quark-symmetry}, but there
remains some disagreement between the different results. It is
therefore of interest to perform a direct calculation of the matrix
elements using lattice QCD\@. The viability of the lattice approach
was first demonstrated by the work of Bernard, Hsieh and
Soni~\cite{bhs:lattice-91} in 1991.
Excluding QCD contributions, the free quark decay $b \to s\gamma$ in the
SM proceeds by diagrams similar to that shown in Fig.~\ref{figure:bsgwloop}.
The
charm and top quarks dominate, because the up quark contribution to the
loop is suppressed by the small CKM factor~$|V^{\phantom{*}}_{ub}
V^*_{us}|$.
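The hierarchy of these CKM combinations can be made explicit: unitarity of the three-generation CKM matrix relates the up, charm and top contributions through
\begin{equation}
V^{\phantom{*}}_{ub} V^*_{us} + V^{\phantom{*}}_{cb} V^*_{cs}
 + V^{\phantom{*}}_{tb} V^*_{ts} = 0 ,
\end{equation}
and in the Wolfenstein parametrisation $|V^{\phantom{*}}_{ub} V^*_{us}|$ is of order $\lambda^4$ while the charm and top combinations are of order $\lambda^2$, with $\lambda \simeq 0.22$, so the up-quark loop is suppressed by roughly two powers of $\lambda$.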
If the value of the top mass is assumed, the Standard Model can be
tested by deriving an independent result for $\mbox{\it{BR\,}}(B \to
K^*\gamma)$. Deviations from the expected branching ratio would be an
indication of contributions to the decay from physics beyond the SM,
to which this decay is potentially sensitive.
Research on such contributions can be classified into supersymmetric
and non-supersymmetric extensions of the SM\@. In the latter case, Cho
and Misiak~%
\cite{cho_misiak_left_right}
considered $SU(2)_L \otimes SU(2)_R$ left-right symmetric models and
found considerable variations from the SM result for a wide range of
the free parameters, while Randall and Sundrum~%
\cite{randall_technicolour}
found significant potential deviations from the SM in technicolour
models. Anomalous $WW\gamma$ couplings in $b \to s\gamma$ have been
analysed and the results found to be consistent with the SM\@. The
bounds obtained from this approach can improve on those from direct
searches~%
\cite{%
chia:anomalous-WWg,%
peterson:anomalous-WWg,%
rizzo:anomalous-WWg,%
he:anomalous-WWg}.
The contributions from two Higgs doublet models~%
\cite{glashow:typeII-two-higgs-doublet,%
abbot:typeI-two-higgs-doublet}
have been analysed to obtain bounds on the charged Higgs mass and
$\tan{\beta}$, the ratio of the vacuum expectation values of the
doublets~\cite{buras:review,hewett:two-higgs-doublet}.
SUSY models also involve additional Higgs doublets, but the
contribution of other boson-fermion loops, in particular charginos
($\chi^-$) with up type squarks, and gluinos ($\tilde{g}$) or
neutralinos ($\chi^0$) with down type squarks must also be included~%
\cite{bertolini:supersymmetry,%
oshimo:supersymmetry,%
barbieri:supersymmetry,%
lopez:supersymmetry,%
garisto:supersymmetry,%
diaz:supersymmetry,%
borzumati:supersymmetry,%
diaz:supersymmetry-ii}.
A thorough study of the decay in the Minimal Supersymmetric
Standard model can be found in
reference~\cite{borzumati:supersymmetry}. There are strong
contributions from chargino and gluino loops, especially for large
$\tan{\beta}$, which interfere destructively with the Higgs
contribution and allow SUSY to mimic the SM in some regions of
parameter space. As a result, the current limits on $\tan{\beta}$ and
Higgs masses are weak, but will tighten as more stringent bounds on
superpartner masses are obtained.
For the rest of this paper, we shall use the SM as the appropriate
model, and look for possible deviations from the experimental
branching ratio. It should be noted that the lattice calculation is
needed only to determine the effects of low energy QCD, and these are
independent of new physics. The low energy effect of many extensions
of the SM will be completely contained within the renormalisation
group operator coefficients, and hence it is straightforward to allow
for contributions from different models.
\subsection{Exclusive vs. Inclusive decay modes}
The inclusive decay $B \to X_s \gamma$ is predominantly a short distance
process and can be treated perturbatively in the spectator
approximation. It is also possible to use Heavy Quark Effective
Theory (HQET) to compute the leading $1/m_b^2$
corrections~\cite{falk:B-to-Xs-gamma}.
The experimental inclusive branching ratio has been determined at
CLEO~\cite{cleo:inclusive},
\begin{equation}
\mbox{\it{BR\,}}(B \to X_s \gamma)=
(2.32 \pm 0.51 \pm 0.29 \pm 0.32)\times 10^{-4}.
\end{equation}
The procedure for obtaining this result has a mild model dependence
(the final result is a function of $m_b$).
In addition, the branching ratios of the exclusive decay modes of
$b \to s\gamma$ can also be experimentally determined, and the present
published branching ratio for $B \to K^*\gamma$ from the CLEO
collaboration~\cite{cleo:evidence-for-penguins} is,
\begin{equation}
\label{eq:CLEO_BR}
\mbox{\it{BR\,}}(B \to K^*\gamma) = (4.5 \pm 1.5 \pm 0.9) \times 10^{-5}.
\end{equation}
The advantage of this measurement is that the experimental number is
model independent. Theoretical calculation of the relevant exclusive
matrix element requires the determination of long distance QCD
contributions which cannot be determined perturbatively, but can be
computed using lattice QCD.
\subsection{The Effective Hamiltonian and Hadronic Matrix Elements}
In order to determine the low energy QCD contributions to this decay,
the high energy degrees of freedom must be integrated out, generating
an effective $\Delta B = -1$, $\Delta S = 1$ hamiltonian. Grinstein,
Springer and Wise~\cite{grinstein:b-meson-decay} determined the
hamiltonian ${\cal H}_{\mbox{\scriptsize\it eff}}$, to leading order in
weak matrix elements,
\begin{equation}
{\cal{H}}_{\mbox{\scriptsize\it eff}} =
-\frac{4 G_F}{\sqrt{2}} V_{tb}^{\phantom{*}} V_{ts}^*
\sum_{i=1}^8 C_i(\mu) O_i ,
\end{equation}
where the eight operators $O_i$ are multiplied by the renormalisation
group coefficients, $C_i(\mu)$. Six of the operators are four-quark
operators and two are magnetic moment operators, coupling to the gluon
and photon~\cite{simma:effective-lagrangians}. The operator which
mediates the $b \to s\gamma$ transition is,
\begin{equation}
O_7=\frac{e}{16 \pi^2} m_b \overline{s} \sigma_{\mu\nu}
\frac{1}{2}(1+\gamma_5)b~F^{\mu\nu}.
\end{equation}
The coefficients $C_i(\mu)$ are set by matching to the full theory at
the scale $\mu = M_W$. The coefficient $C_7(m_b)$ is determined using
the renormalisation group to run down to the appropriate physical
scale $\mu = m_b$~\cite{ciuchini:btoxs}, and is approximately given by,
\begin{equation}
\label{eq:csevenmb}
C_7(m_b) = \eta^{-16/23} \left( C_7(M_W) +
\frac{58}{135}(\eta^{10/23} - 1) + \frac{29}{189}(
\eta^{28/23} - 1) \right),
\qquad
\eta=\frac{\alpha_s(m_b)}{\alpha_s(M_W)},
\end{equation}
where, in the Standard Model \cite{cho:weak-hamiltonian},
\begin{equation}
\label{eq:csevenmW}
C^{SM}_7(M_W) = \frac{1}{2} \frac{x}{(x -1)^3} \left(
\frac{2}{3} x^2 + \frac{5}{12}x - \frac{7}{12}
- \frac{x}{2} \frac{(3x - 2)}{(x-1)} \log{x} \right),
\qquad
x = \frac{m_t^2}{M_W^2}.
\end{equation}
The effects of scale uncertainty in the leading order approximation
have been considered by Buras {\it et al.}~\cite{buras:review}.
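As a rough numerical cross-check of~Eq.(\ref{eq:csevenmb})
and~Eq.(\ref{eq:csevenmW}), the sketch below evaluates both formulas
directly. The inputs ($m_t = 174 \, \mbox{GeV}$, $M_W = 80.3 \,
\mbox{GeV}$, $\alpha_s(m_b)=0.21$, $\alpha_s(M_W)=0.12$) are
illustrative assumptions, not values taken from the text.

```python
# Leading-order evaluation of C_7(M_W) and its running to C_7(m_b),
# following the two equations above.  All numerical inputs are
# illustrative assumptions.
import math

def c7_mw(m_t=174.0, m_w=80.3):
    """SM matching coefficient C_7(M_W), Eq. for C^SM_7(M_W)."""
    x = (m_t / m_w) ** 2
    return 0.5 * x / (x - 1.0) ** 3 * (
        (2.0 / 3.0) * x**2 + (5.0 / 12.0) * x - 7.0 / 12.0
        - 0.5 * x * (3.0 * x - 2.0) / (x - 1.0) * math.log(x)
    )

def c7_mb(c7mw, alpha_s_mb=0.21, alpha_s_mw=0.12):
    """Leading-log running of C_7 from M_W down to m_b."""
    eta = alpha_s_mb / alpha_s_mw
    return eta ** (-16.0 / 23.0) * (
        c7mw
        + (58.0 / 135.0) * (eta ** (10.0 / 23.0) - 1.0)
        + (29.0 / 189.0) * (eta ** (28.0 / 23.0) - 1.0)
    )
```

With these inputs one finds $|C_7(m_b)| \approx 0.3$, the familiar
leading-log magnitude.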
To leading order, the on-shell matrix for $B \to K^*\gamma$ is given by,
\begin{equation}
{\cal M}= \frac{e G_F m_b}{2 \sqrt{2} \pi^2}
C_7(m_b) V^{\phantom{*}}_{tb} V_{ts}^* \eta^{\mu*} \langle K^* |
J_\mu
| B \rangle ,
\end{equation}
where,
\begin{equation}
J_\mu = \overline{s} \sigma_{\mu\nu} q^\nu b_R,
\end{equation}
and $\eta$ and $q$ are the polarization and momentum of the emitted
photon. As outlined by Bernard, Hsieh and Soni~\cite{bhs:lattice-91},
the matrix element $\langle K^* | \overline{s} \sigma_{\mu \nu}
q^\nu b_R | B \rangle$ can be parametrised by three form
factors,
\begin{equation}
\langle K^* | J_\mu | B \rangle = \sum_{i=1}^3 C^i_\mu T_i(q^2) ,
\end{equation}
where,
\begin{eqnarray}
C^{1}_\mu & = &
2 \varepsilon_{\mu\nu\lambda\rho} \epsilon^\nu p^\lambda k^\rho, \\
C^{2}_\mu & = &
\epsilon_\mu(m_B^2 - m_{K^*}^2) - \epsilon\cdot q (p+k)_\mu, \\
C^{3}_\mu & = &
\epsilon\cdot q
\left( q_\mu - \frac{q^2}{m_B^2-m_{K^*}^2} (p+k)_\mu \right).
\end{eqnarray}
As the photon emitted is on-shell, the form factors need to be
evaluated at $q^2{=}0$. In this limit,
\begin{equation}
T_2(q^2{=}0) = -i T_1(q^2{=}0) ,
\label{eq:T1_T2_equal}
\end{equation}
and the coefficient of $T_3(q^2{=}0)$ is zero. Hence, the branching
ratio can be expressed in terms of a
single form factor, for example,
\begin{equation}
\label{eq:decay_rate}
\mbox{\it{BR\,}}(B \to K^* \gamma )
= \frac{\alpha}{8 \pi^4} m_b^2 G_F^2
m_B^3 \tau_B \left(1-\frac{m_{K^\ast}^2}{m_B^2}\right)^3
| V^{\phantom{*}}_{tb} V_{ts}^* |^2 |C_7(m_b)|^2 |T_1(q^2{=}0)|^2.
\end{equation}
This paper concerns the evaluation of $T_1(0)$.
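To make Eq.(\ref{eq:decay_rate}) concrete, the following sketch
evaluates the branching ratio for assumed inputs: in particular
$T_1(q^2{=}0)=0.16$ and $C_7(m_b)=0.31$ are placeholder values, and the
lifetime is converted to natural units with $\hbar = 6.582\times10^{-25}
\, \mbox{GeV\,s}$.

```python
# Order-of-magnitude evaluation of the branching-ratio formula above.
# All inputs are illustrative assumptions (GeV units throughout);
# T_1(0) = 0.16 and C_7 = 0.31 are placeholders, not fitted values.
import math

def br_b_to_kstar_gamma(t1=0.16, c7=0.31, m_b=4.4, m_B=5.279,
                        m_Kstar=0.892, tau_B_s=1.5e-12,
                        vtb_vts=0.040, alpha=1.0 / 137.036,
                        g_fermi=1.166e-5):
    tau_B = tau_B_s / 6.582e-25          # lifetime in GeV^-1
    phase = (1.0 - m_Kstar**2 / m_B**2) ** 3
    return (alpha / (8.0 * math.pi**4) * m_b**2 * g_fermi**2
            * m_B**3 * tau_B * phase
            * vtb_vts**2 * c7**2 * t1**2)
```

With these placeholders the result is a few times $10^{-5}$, the same
order as the CLEO measurement in~Eq.(\ref{eq:CLEO_BR}).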
We shall outline how matrix elements of the form $\langle V | J_\mu
| P \rangle$, where $| P \rangle$ is a heavy-light pseudoscalar meson
and $| V \rangle$ is a strange-light vector meson, can be calculated in
lattice QCD and explain the computational details involved. We shall
evaluate the form factors $T_1(q^2{=}0)$ and
$T_2(q^2{=}0)$, make some statements about the
systematic error, and compare the calculated value of
$\mbox{\it{BR\,}}(B \to K^*\gamma)$ with the results from CLEO.
\subsection{Heavy Quark Symmetry}
\label{hqs}
We cannot directly simulate $b$-quarks on the lattice, as will be
explained below. Instead, we calculate with a selection of quark
masses near the charm mass. This means that any results for the form
factors must be extrapolated to the $b$-quark scale. Heavy quark
symmetry~\cite{isgur:form-factors} tells us that,
\begin{equation}\label{eq:hqs-scaling}
\begin{array}{rcl}
T_1(q^2_{max}) &\sim& m_P^{1/2} \\
T_2(q^2_{max}) &\sim& m_P^{-1/2}
\end{array}
\end{equation}
as the heavy quark mass, and hence the pseudoscalar meson mass, $m_P$,
grows infinitely large. Combining this with the relation $T_2(q^2{=}0) =
-iT_1(q^2{=}0)$ constrains the $q^2$ dependence of the form factors.
Pole dominance ideas suggest that,
\begin{equation}
T_i(q^2) = {T_i(0)\over (1 - q^2/m_i^2)^{n_i}}
\end{equation}
for $i=1,2$, where $m_i$ is a mass equal to $m_P$ up to $1/m_P$
corrections, and $n_i$ is an integer power. Since $1-q^2_{max}/m_i^2 \sim 1/m_P$
for large $m_P$, the combination of heavy quark symmetry and the form
factor relation at $q^2=0$ implies that $n_1 = n_2 + 1$. Thus we could
fit $T_2(q^2)$ to a constant and $T_1(q^2)$ to a single pole form or
fit $T_2(q^2)$ to a single pole and $T_1(q^2)$ to a double pole. These
two cases correspond to,
\begin{equation}
T_1(0) \sim \cases{m_P^{-1/2}&single pole\cr
m_P^{-3/2}&double pole\cr}.
\end{equation}
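The power counting behind these two cases can be made explicit.
Evaluating the pole form at $q^2{=}0$ gives $T_i(0) =
T_i(q^2_{max})\,(1-q^2_{max}/m_i^2)^{n_i}$, so that, using
$1-q^2_{max}/m_i^2 \sim 1/m_P$ and~Eq.(\ref{eq:hqs-scaling}),
\begin{equation}
T_1(0) \sim m_P^{1/2}\, m_P^{-n_1}, \qquad
T_2(0) \sim m_P^{-1/2}\, m_P^{-n_2}.
\end{equation}
The relation $T_1(0)=iT_2(0)$ then requires $1/2 - n_1 = -1/2 - n_2$,
i.e.\ $n_1 = n_2 + 1$, and taking $n_1=1$ or $n_1=2$ reproduces the
$m_P^{-1/2}$ and $m_P^{-3/2}$ behaviours quoted above.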
As we will see, our data for $T_2(q^2)$ appear roughly constant in
$q^2$ when $m_P$ is around the charm scale, but have increasing
dependence on $q^2$ as the heavy quark mass increases. We will fit to
both constant and single pole behaviours for $T_2(q^2)$ below.
\section{Lattice Field Theory}
The hadronic matrix element $\langle V | J_\mu | P \rangle$ for the
$b \to s\gamma$ transition can be obtained from the correlator $\langle 0
| J^{V}_\rho(x) T_{\mu\nu}(y) J_P^{\dagger}(0) | 0 \rangle$, where
$J_P$ and $J^{V}_\rho$ are interpolating fields for the $P$ and $V$
mesons, consisting of a heavy quark, $h$, a light quark, $l$, and a
strange quark, $s$;
\begin{eqnarray}
J_{P}(x) &=& \overline{l}(x) \gamma_5 h(x), \\
J^{V}_\rho(x) &=& \overline{l}(x) \gamma_\rho s(x), \\
T_{\mu \nu}(y) &=& \overline{s}(y) \sigma_{\mu\nu} h(y).
\end{eqnarray}
The full matrix element
$ \langle V | \overline{s} \sigma_{\mu \nu} {1\over2}(1 + \gamma_5)
h | P \rangle$ can be derived using the Minkowski space
relation,
\begin{equation}
\gamma^5 \sigma^{\mu \nu} = \frac{i}{2} \varepsilon^{\mu \nu
\rho \lambda} \sigma_{\rho \lambda}.
\end{equation}
In Euclidean space, the correlator
$\langle 0 | J^{V}_\rho(x) T_{\mu\nu}(y) J_P^{\dagger}(0) | 0 \rangle$
can be computed numerically using the functional integral,
\begin{eqnarray}
\label{eq:funcintexpr}
\langle 0 | J^{V}_\rho(x) T_{\mu\nu}(y) J_P^\dagger(0) | 0 \rangle
& = & \frac{1}{Z}
\int {\cal D} A {\cal D} q {\cal D} {\bar q}~J_\rho^{V}(x) T_{\mu\nu}(y)
J_P^\dagger(0)
\exp(-S[A,q,{\bar q}]), \\
& = & \frac{1}{Z}
\int {\cal D} A~\mbox{Tr}
\left(\gamma_5 H(0,y) \sigma_{\mu\nu} S(y,x) \gamma_\rho
L(x,0) \right)
\exp(-S_{\mbox{\scriptsize\it{eff}}}) ,
\label{eq:func-int-trace}
\end{eqnarray}
where $S[A,q,{\bar q}]$ is the QCD action and $S(y,x)$, $H(y,x)$,
$L(y,x)$ are the propagators from $x$ to $y$ for the $s$,
$h$ and $l$ quarks.
Working in momentum space, we calculate the three-point correlator,
\begin{eqnarray}
C^{3pt}_{\rho\mu\nu}(t,t_f,\vec{p},\vec{q}) & = &
\sum_{\vec{x},\vec{y}}
e^{i \vec{p}\cdot \vec{x}}
e^{- i \vec{q}\cdot \vec{y}}
\langle J_{P}(t_f,\vec{x}) T_{\mu\nu}(t,\vec{y})
J^\dagger_{V\rho}(0)\rangle \\
& \mathop{\longrightarrow}\limits_{t,t_f - t,T\to\infty} &
\sum_{\epsilon}
\frac{Z_{P}}{2 E_{P}}
\frac{Z_{V}}{2 E_{V}}
e^{-E_{P} (t_f-t) } e^{-E_V t}
\epsilon_\rho
\langle P(p) | \overline{h} \sigma_{\mu\nu} s |
V(k,\epsilon) \rangle.
\end{eqnarray}
To obtain the matrix element $\langle P(p) | \overline{h}
\sigma_{\mu\nu} s | V(k) \rangle$, we take the ratio,
\begin{equation}
\label{eq:3pt-correlator}
C_{\rho\mu\nu}(t,t_f,\vec{p},\vec{q}) =
\frac{C^{3pt}_{\rho\mu\nu}(t,t_f,\vec{p},\vec{q})
}{C^{2pt}_{P}(t_f-t,\vec{p}) C^{2pt}_{V}(t,\vec{p}-\vec{q})
},
\end{equation}
where the two-point correlators are defined as,
\begin{eqnarray}
C^{2pt}_{P}(t,\vec{p})
& = &
\sum_{\vec{x}} e^{i\vec{p}\cdot\vec{x}}
\langle J^\dagger_{P}(t,\vec{x}) J^{\vphantom{\dagger}}_{P}(0)
\rangle \nonumber \\
& = &
\frac{Z^2_{P}}{2 E_{P}} \left( e^{-E_{P}t}+e^{-E_{P}(T-t)} \right) ,
\label{eq:pseudoscalar-two-point} \\
C^{2pt}_{V}(t,\vec{k})
&= &
-{\displaystyle
\left(\frac{1}{3}\right)}
\sum_{\vec{x}} e^{i\vec{k}\cdot\vec{x}}
\langle J_{V\sigma}^\dagger(t,\vec{x})
J^{\sigma}_{V}(0) \rangle \nonumber \\
& = &
\frac{Z^2_{V}}{2 E_{V}} \left( e^{-E_{V}t}+e^{-E_{V}(T-t)} \right).
\label{eq:vector-two-point}
\end{eqnarray}
By time reversal invariance and
assuming the three points in the correlators of
Eq.(\ref{eq:3pt-correlator}) are sufficiently separated in time,
a term proportional to the required matrix element dominates:
\begin{equation}
\label{eq:3pt-asymptotic}
C_{\rho\mu\nu} \mathop{\longrightarrow}\limits_{t,t_f - t,T\to\infty}
\frac{1}{Z_{P}Z_{V}} \sum_\epsilon
\epsilon_{\rho}
\langle V(k,\epsilon) | \overline{s} \sigma_{\mu\nu} h
| P(p) \rangle + \dots,
\end{equation}
and $C_{\rho\mu\nu}$ approaches a plateau. The three-point correlator
is calculated in its time reversed form to allow the use of
previously calculated light propagators. The factors $Z_{P}$, $Z_V$
and the energies of the pseudoscalar and vector particles are obtained
{}from fits to the two-point Euclidean correlators.
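As an illustration of this step, the sketch below recovers the energy
{}from a noiseless periodic correlator of the form
in~Eq.(\ref{eq:pseudoscalar-two-point}) by solving the standard cosh
effective-mass condition. The values $E=0.5$, $Z=1.3$ and $T=48$ are
assumptions chosen only for the example.

```python
# Extract an effective energy from a periodic ("cosh") correlator by
# solving C(t)/C(t+1) = cosh(E(t - T/2))/cosh(E(t+1 - T/2)) with
# bisection.  Synthetic noiseless data; E = 0.5, Z = 1.3, T = 48 are
# assumptions for illustration only.
import math

T = 48
E_true, Z = 0.5, 1.3

def corr(t):
    return Z**2 / (2 * E_true) * (
        math.exp(-E_true * t) + math.exp(-E_true * (T - t)))

def eff_mass(t, lo=1e-4, hi=5.0, tol=1e-12):
    ratio = corr(t) / corr(t + 1)
    f = lambda E: math.cosh(E * (t - T / 2)) / \
                  math.cosh(E * (t + 1 - T / 2)) - ratio
    for _ in range(200):                 # simple bisection
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)
```

In practice the energies and amplitudes are obtained from a correlated
fit over a window of timeslices rather than timeslice by timeslice.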
In order to simulate this decay on a sufficiently finely spaced
lattice, vacuum polarisation effects were discarded and the gauge
field configurations generated in the quenched approximation. The
decays $D \rightarrow K e \nu$, $D \rightarrow K^* e \nu$, and $D_s
\rightarrow \phi e \nu$ have been calculated in the quenched
approximation~\cite{lubicz:d-decay-ii,bernard:d-decay,UKQCD:d-decay}
and have been found to be in relatively good agreement with
experiment. It is therefore quite plausible to assume that the
systematic error from the quenched approximation for this calculation
would be of a similar size.
The matrix element $\langle K^* | \overline{s} \sigma_{\mu \nu}
q^\nu b_R | B \rangle$ cannot be directly calculated as
realistic light quarks cannot be simulated owing to critical slowing
down in determining the propagator for small masses.
Instead light quarks are simulated at a number of masses approximately
that of the strange quark mass, and any result is extrapolated to the
chiral limit. Furthermore, $b$ quarks cannot be simulated directly as
the $b$-quark mass is greater than the inverse lattice spacing
($2.73(5) \, \mbox{GeV}$), and the variation of the propagator would
occur over lengths smaller than the lattice spacing. As a result,
heavy quarks are simulated with masses around the charm quark mass,
and the results extrapolated to $m_b$. Hence, $\langle V | J_\mu | P
\rangle $ has to be calculated at a number of different light, strange
and heavy quark masses.
\section{Computational Details}
Sixty $SU(3)$ gauge configurations were generated in the quenched
approximation for a $24^3 \times 48$ lattice at $\beta=6.2$. These
configurations were generated with periodic boundary conditions using
the hybrid over-relaxed algorithm, and the standard discretised gluon
action, defined in~\cite{luscher}. The configurations were separated
by $400$ compound sweeps, starting at sweep number $2800$. The
inverse lattice spacing was determined to be $2.73(5) \, \mbox{GeV}$,
by evaluating the string tension~\cite{ukqcd:string-tension}. In
physical units, this corresponds to a spacing of approximately $0.07
\, \mbox{fm}$ and a spatial size of $1.68 \, \mbox{fm}$. In order to
simulate heavy quarks whose masses are approaching the inverse lattice
spacing, the $O(a)$-improved fermion action of Sheikholeslami and
Wohlert~\cite{sw-action} (also referred to as the clover action) was
used. This is defined as,
\begin{equation}
S^C_F = S^W_F - i\frac{\kappa}{2}\sum_{x,\mu,\nu}\bar{q}(x)
F_{\mu\nu}(x)\sigma_{\mu\nu}q(x) ,
\end{equation}
where $S^W_F$ is the standard Wilson fermion
action~\cite{ukqcd:string-tension,wilson-paper} and $F_{\mu\nu}$ is a
lattice definition of the field strength tensor, which we take to be
the sum of the four untraced plaquettes in the $\mu\nu$ plane open at
the point $x$,
\begin{equation}
F_{\mu\nu}(x) =
\frac{1}{4} \sum_{\mathord{%
\hbox{\vrule\vbox{\hrule width0.3em\kern0.3em\hrule}\vrule}}=1}^{4}
\frac{1}{2i}
\biggl[U_{\mathord{%
\hbox{\vrule\vbox{\hrule width0.3em\kern0.3em\hrule}\vrule}}\mu\nu}(x)
- U_{\mathord{%
\hbox{\vrule\vbox{\hrule
width0.3em\kern0.3em\hrule}\vrule}}\mu\nu}^\dagger(x)\biggr].
\end{equation}
In using this action, all observables with fermion fields
$q,\overline{q}$ must be ``rotated'',
\begin{eqnarray}
q(x) \;\; &\rightarrow& \;\; \left(1 - \frac{1}{2}\stackrel{\rightarrow}
{\! \not \!\! \Delta} \right) q(x), \\ \nonumber
\overline{q}(x) \;\; &\rightarrow& \;\; \overline{q}(x)
\left(1 + \frac{1}{2}\stackrel{\leftarrow}
{\! \not \!\! \Delta} \right) ,
\end{eqnarray}
where $\Delta_\mu$ is the discretised covariant derivative, operating
on the quark fields as,
\begin{eqnarray}
\stackrel{\rightarrow}{\Delta_\mu} \, q(x) \;\;& = & \;\;
\frac{1}{2}\left( U_\mu(x) q(x + \mu) \,-\, U^\dagger_\mu(x - \mu) q(x -
\mu) \right) , \\ \nonumber
\overline{q}(x) \, \stackrel{\leftarrow}{\Delta_\mu} \;\;& = & \;\;
\frac{1}{2}\left( \overline{q} (x + \mu) U^\dagger_\mu(x)
\,-\, \overline{q}(x - \mu) U_\mu(x - \mu)
\right).
\end{eqnarray}
This action eliminates the tree level $O(ma)$-error of the Wilson
action~\cite{heatlie:clover-action}, which can be significant for
heavy quark systems~\cite{ukqcd:fP,ukqcd:charmonium}.
For each configuration, quark propagators were calculated using the
over--relaxed minimal residual algorithm with red--black
preconditioning for $\kappa = 0.14144$, $0.14226$ and $0.14262$, using
periodic boundary conditions in the spatial directions and
anti--periodic boundary conditions in the temporal direction.
Smearing was not used in the calculation of these light propagators.
The first two $\kappa$ values can be used to interpolate to the
strange quark mass which corresponds to $\kappa = 0.1419(1)$
\cite{ukqcd:strange-prd}.
The third $\kappa$ value, corresponding to a somewhat lighter quark,
was used in conjunction with the others in order to test the behaviour
of the data in the chiral limit.
Heavy propagators, for $\kappa_h = 0.121$, $0.125$, $0.129$ and
$0.133$, were evaluated using timeslice $24$ of some of the above
propagators as the source.
For $\kappa_h = 0.121$ and $0.129$, the propagators for all
of the light $\kappa$ values were used.
For $\kappa_h = 0.125$ and $0.133$, the propagators
for $\kappa = 0.14144$ and $0.14226$ were used.
To reduce excited state contamination,
these sources were smeared using the gauge invariant Jacobi
algorithm~\cite{ukqcd:smearing}, with an r.m.s.\ smearing radius of~5.2.
Because of memory limitations, these propagators were evaluated
only for timeslices 7~to~16 and 32~to~41.
Using these propagators, the three point correlators
were evaluated.
The spatial momentum ${\bf p}$ was chosen to be
$(0,0,0)$ or $(\pi/12,0,0)$
(the lowest unit of momentum in lattice units that can be injected).
All possible choices of ${\bf q}$ were
calculated such that the magnitude of the spatial momentum of the
vector meson ${\bf k}$ was less than $\sqrt{2} \pi/12$. This is
because the signal of light hadrons degrades rapidly as
the momentum is increased \cite{alamos:light-spectrum}.
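The kinematic cuts above can be counted explicitly. The sketch below,
with momenta in units of $\pi/12$ (the smallest non-zero momentum on a
$24^3$ spatial lattice), enumerates the allowed $({\bf p},{\bf q})$
channels; it reproduces the counting only, not the actual momentum list
used in the simulation.

```python
# Enumerate the momentum channels: on a 24^3 lattice momenta come in
# units of 2*pi/(24a) = pi/12.  p is (0,0,0) or (pi/12,0,0), and q is
# chosen so that |k| = |p - q| < sqrt(2)*pi/12, i.e. |k|^2 < 2 in these
# units.  Counting sketch only.
import itertools

ps = [(0, 0, 0), (1, 0, 0)]              # in units of pi/12
ks = [k for k in itertools.product((-1, 0, 1), repeat=3)
      if sum(c * c for c in k) < 2]      # |k|^2 < 2 in the same units
qs = sorted({(p, tuple(a - b for a, b in zip(p, k)))
             for p in ps for k in ks})
```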
In order to obtain $\langle V | \overline{s} \sigma_{\mu \nu} h | P
\rangle$, the decay constant and energy were
determined for the pseudoscalar of each heavy--light $\kappa$
combination and the vector of each possible light $\kappa$
combination, for all possible momenta used. The process of extracting
these is well understood and has been discussed in detail
elsewhere~\cite{ukqcd:strange-prd}. As the two point functions
are periodic, a correlator at a time $0 \le t \le 24$ was averaged
with the same correlator at $48 - t$ to improve the statistical
sample. This ``folded'' data was fitted
to~Eq.(\ref{eq:pseudoscalar-two-point}) or~Eq.(\ref{eq:vector-two-point}) for
timeslices 15 to 23. For both the two point and three point functions
we utilised the discrete symmetries $C$, $P$ and $T$ (folding)
wherever possible, in addition to averaging over equivalent momenta.
The statistical errors for all correlators were determined by the
bootstrap procedure~\cite{efron:bootstrap}, using 1000 bootstrap
subsamples from the original configurations. The finite
renormalisation needed for the lattice--continuum matching of the
$\sigma_{\mu\nu}$ operator has been calculated
\cite{borrelli:improved-operators} but has a negligible effect here
($O(2\%)$) and was not included. It introduces a small correction to
the branching ratio which is considered in the conclusions.
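A minimal sketch of the bootstrap error estimate, on synthetic data
(Gaussian samples standing in for measurements on the 60
configurations; the seed and widths are arbitrary):

```python
# Bootstrap error on a mean: resample the 60 "configurations" with
# replacement 1000 times and take the spread of the resampled means.
# Synthetic Gaussian data; all numbers are illustrative.
import random
import statistics

random.seed(7)
n_cfg, n_boot = 60, 1000
data = [random.gauss(1.0, 0.1) for _ in range(n_cfg)]

boot_means = []
for _ in range(n_boot):
    sample = [random.choice(data) for _ in range(n_cfg)]
    boot_means.append(statistics.fmean(sample))

boot_err = statistics.stdev(boot_means)          # bootstrap estimate
naive_err = statistics.stdev(data) / n_cfg ** 0.5  # naive error of mean
```

For correlated quantities the same resampling is carried through every
stage of the analysis, so the bootstrap error automatically includes
correlations between fitted parameters.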
As outlined in the previous section, the weak matrix elements
$C_{\rho \mu\nu}$ were
extracted from the three point data and the fits to the two point
data.
Having divided out
the contributions from the two point amplitudes and energies,
the matrix element $\langle V | \overline{s} \sigma_{\mu \nu} h | P
\rangle$ was isolated.
These matrix elements were combined to determine the
form factors $T_1(q^2)$, $T_2(q^2_{max})$ and
$T_2(q^2)$.
Each form factor was extracted by a correlated fit to a constant
for timeslices 11, 12 and 13.
\section{Results}
The data for unphysical masses and off-shell photons must be combined
to isolate the form factors and extrapolate to the physical regime.
It is clear from~Eq.(\ref{eq:T1_T2_equal})
and~Eq.(\ref{eq:decay_rate}) that the branching ratio can be evaluated
{}from $T_1(q^2{=}0;m_B;m_{K^*})$ or $T_2(q^2{=}0;m_B;m_{K^*})$. As
demonstrated in a previous paper~\cite{ukqcd:penguin-prl}, the
evaluation of $T_1(q^2{=}0;m_P;m_{K^*})$ is relatively
straightforward, and $T_2(q^2{=}0;m_P;m_{K^*})$ can be determined in a
similar way. To test heavy quark scaling, we also extracted the form
factor $T_2$ at maximum recoil, where $q^2=q^2_{max}=(m_P-m_V)^2$, in
the same way as Bernard {\it et al.}~\cite{bhs:penguin-prl}. These
form factors were extrapolated to the physical mass $m_P=m_B$, and an
estimate of the systematic errors in the extrapolation was made by
comparing different methods.
\subsection{Extraction of form factors}
\subsubsection{$T_1(q^2)$}
The form factor $T_1$ can be conveniently extracted from the matrix
elements by considering different components of the relation,
\begin{equation}
4( k^\alpha p^\beta - p^\alpha k^\beta) T_1(q^2) =
\varepsilon^{\alpha\beta\rho\mu}
C_{\rho\mu\nu}q^{\nu}.
\end{equation}
We see a plateau in $T_1$ about $t=12$. The use of smeared operators
for the heavy quarks provides a very clean signal, with stable
plateaus forming before timeslice~11. The data for the heaviest of our
light quarks, $\kappa_l=\kappa_s=0.14144$, with the smallest
statistical errors, are shown in~Fig.(\ref{figure:t1-vs-time}).
The form factor is evaluated for each of the five possible values of
$q^2$. We fit $T_1(q^2)$ to a pole or dipole model in order to obtain
the on-shell form factor $T_1(q^2{=}0)$,
\begin{equation}
T_1(q^2)= {T_1(q^2{=}0) \over 1- q^2/m^2},\qquad
T_1(q^2)= {T_1(q^2{=}0) \over (1- q^2/m^2)^2}.
\end{equation}
We allow for correlations between the energies of the vector and
pseudoscalar particles and $T_1$ at each $q^2$. An example of such a
fit, for $\kappa_l=\kappa_s=0.14144$, is shown
in~Fig.(\ref{figure:Tone_vs_qsq})
and the full set of fit parameters and their $\chi^2/\mbox{d.o.f.}$
are shown in tables~\ref{table:qsq_fits} and~\ref{table:qsq_fits_b}.
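The single pole fit can be illustrated simply: inverting $T_1(q^2) =
T_1(q^2{=}0)/(1-q^2/m^2)$ makes $1/T_1$ linear in $q^2$, so an ordinary
least-squares line determines $T_1(q^2{=}0)$ and $m$. The sketch below
uses synthetic noiseless points with assumed parameters, in place of the
correlated $\chi^2$ fit used for the real data.

```python
# Recover T_1(0) and the pole mass from 1/T_1(q^2) = (1 - q^2/m^2)/T_1(0),
# which is linear in q^2.  Synthetic noiseless points; the correlated
# chi^2 fit of the text is replaced by a plain least-squares line.
t1_0_true, m_true = 0.30, 2.0            # assumed "true" parameters
qsqs = [-0.4, -0.2, 0.1, 0.3, 0.5]       # illustrative q^2 values
t1 = [t1_0_true / (1.0 - q / m_true**2) for q in qsqs]

# least-squares line y = a + b*q^2 through the points (q^2, 1/T_1)
ys = [1.0 / t for t in t1]
n = len(qsqs)
qbar = sum(qsqs) / n
ybar = sum(ys) / n
b = sum((q - qbar) * (y - ybar) for q, y in zip(qsqs, ys)) / \
    sum((q - qbar) ** 2 for q in qsqs)
a = ybar - b * qbar

t1_0_fit = 1.0 / a                       # intercept gives 1/T_1(0)
m_fit = (-a / b) ** 0.5                  # slope gives -1/(T_1(0) m^2)
```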
The chiral limit behaviour of $T_1(q^2{=}0;m_P;m_V)$, interpolated
{}from a single pole fit, was explored for $\kappa_h= 0.121$ and
$0.129$, in our earlier work~\cite{ukqcd:penguin-prl}. To
test for approximate spectator quark independence, we compared the
single pole fits of the form factor to the two functions,
\begin{eqnarray}
T_1(q^2{=}0;m_{q,\, light}) &=& a + b m_{l} ,\\
T_1(q^2{=}0;m_{q,\, light}) &=& c ,
\end{eqnarray}
where $m_{l}$ is the lattice pole mass,
\begin{equation}
m_l={1 \over 2}({1\over\kappa} - {1\over\kappa_{crit}}),
\end{equation}
and $\kappa_{crit}=0.14315(2)$~\cite{ukqcd:strange}. The linear
coefficient $b$ was found to be consistent with zero for each
combination of $\kappa_s$ and $\kappa_h$ (see
Fig.\ref{figure:t1-chiral-extrapolation}). From
table~\ref{table:Tone_chiral}, the $\chi^2/\mbox{d.o.f.}$ for both
fits are similar, indicating that for the data available, the
assumption that the form factor is a constant, independent of the
spectator quark mass, is valid. Hence, the data for
$\kappa_l=0.14144$ was used for the chiral limit, and a simple linear
interpolation carried out between $\kappa_s=0.14144$ and $0.14226$ for
the strange quark, in order to obtain $T_1(q^2{=}0;m_P;m_{K^*})$.
These results are listed in the columns labelled (b) and (c) in
table~\ref{table:T1_T2_comparison}.
\subsubsection{$T_2(q^2)$}
The form factor $T_2$ can be extracted from the matrix elements using
the same procedure as $T_1$, by considering the different components
of,
\begin{equation}
(m_P^2 - m_V^2) T_2(q^2;m_P;m_V) = C_{ii\nu} q^\nu,
\end{equation}
for all $i$ (not summed) such that $q^i=0$. A typical plateau for
$T_2$ is shown in~Fig.(\ref{figure:t2-vs-time}). We extract $T_2$ for a
range of $q^2$ as shown in~Fig.(\ref{figure:t2-vs-qsq}).
Fig.(\ref{figure:t2-vs-qsq}) shows that $T_2(q^2)$ is roughly constant
as a function of $q^2$ for our data, with heavy quark masses around
the charm mass. We fit $T_2$ to a constant: we can then compare with
the value of $T_1(q^2=0)$ where $T_1$ is fitted with a single pole
form. We also fit $T_2$ to a single pole form (as shown in the figure)
and compare with $T_1(q^2=0)$ when $T_1$ is fitted with a double pole
form. The results of the fits for $T_2$ are shown in
tables~\ref{table:T2_qsq_fits} and~\ref{table:T2_qsq_fits_b}, and the
chiral extrapolations for the single pole fit in
table~\ref{table:Two_inter_chiral}. The pole mass is found to be
large, and a linear behaviour holds well for all possible $q^2$,
including $q^2_{max}$, as shown in~Fig.(\ref{figure:t2-vs-qsq}). Once
again the data for $\kappa_l = 0.14144$ was used for the chiral limit and
the results are listed in the columns labelled (d) and (e) in
table~\ref{table:T1_T2_comparison}.
The ratio $T_1(q^2{=}0;m_P;m_{K^*})/T_2(q^2{=}0;m_P;m_{K^*})$ is shown
in~Fig.(\ref{figure:T1_over_T2_comparison}). The two sets of points
show $T_1$ fitted to a double pole form and $T_2$ to a single pole or
$T_1$ fitted to a single pole and $T_2$ constant. The ratio should be
1, in accordance with the identity
$T_2(0)=-iT_1(0)$,~Eq.(\ref{eq:T1_T2_equal}). We find greater
consistency from the double-pole/single-pole fit.
\subsubsection{$T_2(q^2_{max})$}
The evaluation of $T_2(q^2_{max};m_P;m_V)$ is also straightforward,
since at zero momentum, ${\bf p}{=}{\bf 0}$, ${\bf
k}{=}{\bf 0}$, the contributions from other form factors vanish,
\begin{eqnarray}
(m_P + m_V) \, T_2(q^2_{max})
& = & C_{110}({\bf p}={\bf 0}, {\bf k}={\bf 0}),
\nonumber\\
& = & C_{220}({\bf p}={\bf 0}, {\bf k}={\bf 0}),
\nonumber\\
& = & C_{330}({\bf p}={\bf 0}, {\bf k}={\bf 0}).
\end{eqnarray}
An example of this data is shown
in~Fig.(\ref{figure:t2-qsq-max-vs-time}). The behaviour of
$T_2(q^2_{max};m_P;m_V)$ as a function of the spectator quark mass was
examined at $\kappa_h = 0.121$ and $0.129$ in the same way as for
$T_1(q^2{=}0)$. It was again found that the linear coefficient $b$
was consistent with zero for each combination of $\kappa_s$ and
$\kappa_h$: see~Fig.(\ref{figure:T2_qsq_max_chiral}) for an
example. From table~\ref{table:Two_chiral}, the $\chi^2/\mbox{d.o.f.}$
for both fits are seen to be similar, indicating that for the data
available, the assumption that the form factor is independent of the
spectator quark mass is valid. Hence, the data for $\kappa_l=0.14144$
was used for the chiral limit, to obtain $T_2(q^2_{max};m_P;m_{K^*})$.
Bernard {\it et al.}~\cite{bhs:penguin-prl} converted this result to
$q^2{=}0$ by assuming single pole dominance,
\begin{equation}
\label{eq:pole_dominance}
T_2^{pole}(q^2) = \frac{T_2(0)}{1 - q^2/m_{P_{s1}}^2}.
\end{equation}
The current $J_\mu$ in the matrix element can be expressed in a $V +
A$ form, with $T_1$ corresponding to the vector component and $T_2$
and $T_3$ to the axial current. Therefore, in a single pole model,
the exchanged particle, $P_{s1}$, for the $T_2$ form factor should be
the lowest $J^P = 1^+$ state with the correct spin, parity and
strangeness quantum numbers. We extracted
$T^{\mbox{\scriptsize\it{pole}}}_2(q^2{=}0;m_P;m_{K^*})$ from
$T_2(q^2_{max})$ using a single pole model, with the mass of the $1^+$
states determined from fits to two-point functions for each heavy
quark mass. The results of these extrapolations are shown in the
column labelled (a) in table~\ref{table:T1_T2_comparison}.
The ratio $T_1(q^2{=}0;m_P;m_{K^*})/ T_2^{pole}(q^2{=}0;m_P;m_{K^*})$
is shown in~Fig.(\ref{figure:T1_on_T2_pole}). We note that using a
fixed pole mass from two-point functions gives a 10-20\% difference in
the ratio (at the heaviest masses) compared with allowing the pole
mass to vary in the fits.
\subsection{Extrapolation to $M_B$}
The appropriate ansatz for extrapolating the on-shell form factor in
the heavy quark mass to $T_1(q^2{=}0;m_B;m_{K^*})$ is not {\it a
priori\/} clear. As we saw in section~\ref{hqs}, one has to model the
$q^2$ dependence of the form factors, maintaining consistency with
known heavy quark scaling results~\cite{isgur:form-factors} at
$q^2_{max}$, from~Eq.(\ref{eq:hqs-scaling}), and the relation
$T_1(0)=iT_2(0)$. Expanding unknown parameters in powers of $1/m_P$,
one obtains scaling laws for the on-shell form factors $T_1(q^2{=}0)$
and $T_2(q^2{=}0)$. Thus, while the scaling behaviour of
$T_2(q^2_{max})$ can be checked directly, the behaviours of $T_1(0)$
and $T_2(0)$ will depend on assumptions made for the $q^2$
dependence. We now address these issues.
Bernard {\it et al.}~\cite{bhs:penguin-prl} used the heavy-quark
scaling law for the off-shell form factor,
$T_2(q^2_{max};m_P;m_{K^*})$ to extrapolate $T_2$ to
$T_2(q^2_{max};m_B;m_{K^*})$, before applying a single pole dominance
model as before to reach the on-shell point
$T_2(q^2{=}0;m_B;m_{K^*})$. They estimated the appropriate pole mass.
The validity of the pole model over the wide range of momentum
transfer from $q^2{=}0$ to $q^2_{max}$ was required, but tests at
heavy quark masses around the charm quark mass showed it to be quite
accurate.
Our results for $T_2(q^2;m_P;m_{K^*})$,
see~Fig.(\ref{figure:t2-vs-qsq}), appear nearly independent of $q^2$
for masses $m_P$ around the charm scale. Hence, we have fitted $T_2$
to both single pole and constant forms, with corresponding behaviour
for $T_1$. This will give us two alternative forms for the heavy mass
dependence of $T_1(q^2{=}0;m_B;m_{K^*})$.
\subsubsection{$T_2(q^2_{max})$}
At $q^2{=}q^2_{max}$, the initial and final hadronic states have zero
spatial momentum and the contributions of form factors other than
$T_2$ vanish,
\begin{equation}
\label{eq:qsqmaxmatrixelement}
\langle K^* | \overline{s} \sigma_{\mu \nu} q^\nu b_R | B \rangle =
\epsilon_\mu ( m_B^2 - m_{K^*}^2 ) T_2(q^2_{max}).
\end{equation}
In the heavy quark limit, the matrix element
of~Eq.(\ref{eq:qsqmaxmatrixelement}) scales as $m_B^{3/2}$, owing to the
normalisation of the heavy quark state ($\sqrt{m_B}$) and the momentum
transfer $q$ ($q^0=m_B-m_{K^*}$). The leading term in the heavy quark
scaling of $T_2(q^2_{max})$ is expected to be $m_B^{-1/2}$, analogous
to the scaling of $f_B$\cite{neubert:HQET,ukqcd:fP}. Higher order
$1/m_B$ and $1/m_B^2$ corrections will also be present, as will
radiative corrections~\cite{shifman:radiative-corrections,%
wise:radiative-corrections}.
Hence, the form factor $T_2(q^2_{max})$ should scale as,
\begin{equation}
\label{eq:T2_scaling_law}
T_2(q^2_{max};m_P;m_{K^*}) \sqrt{m_P}
=
\mbox{const.} \times [ \alpha_s(m_P) ]^{-2/\beta_0}
\left(1 + \frac{a_1}{m_P} + \frac{a_2}{m_P^2} + \dots\right).
\end{equation}
To test heavy quark scaling, we form the quantity,
\begin{equation}
{\hat T}_2=
T_2(q^2_{max})
\sqrt{m_P \over m_B}
\left({\alpha_s(m_P)\over\alpha_s(m_B)}\right)^{2/\beta_0},
\end{equation}
where we approximate $\alpha_s(\mu)$ by,
\begin{equation}
\alpha_s(\mu) = {2 \pi \over \beta_0 \ln( \mu/\Lambda_{QCD} ) } ,
\end{equation}
with $\Lambda_{QCD}=200$ MeV and $\beta_0=11-{2\over3}N_f$. In the
quenched approximation, $N_f$ is taken to be zero.
The normalisation ensures that ${\hat T}_2=T_2(q^2_{max})$ at the
physical mass $m_B$. Linear and quadratic correlated fits
to~Eq.(\ref{eq:T2_scaling_law}) were carried out with the functions,
\begin{eqnarray}
{\hat T}_2(m_P)&=&A\left(1+{B\over m_P}\right), \\
{\hat T}_2(m_P)&=&A\left(1+{B\over m_P}+{C\over m_P^2}\right),
\end{eqnarray}
and are shown in~Fig.(\ref{figure:T2_scaling}).
Taking the quadratic fit of $T_2$ at $m_P = m_B$ as the best
estimate, and the difference between the central values of the
linear and quadratic fits as an estimate of the systematic error, $T_2$
was found to be
\begin{equation}
\label{eq:Ttwo-qsqmax-result}
T_2(q^2_{max};m_B;m_{K^*}) = 0.269^{+17}_{-9}\pm{0.011} .
\end{equation}
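The linear-versus-quadratic comparison can be sketched as follows:
generate points from an assumed quadratic model in $1/m_P$, fit both
forms, and take the spread at $m_P = m_B$ as the systematic estimate.
All parameters below are illustrative, not the paper's data.

```python
# Illustrative extrapolation of T2hat(m_P) = A(1 + B/m_P + C/m_P^2) to
# m_P = m_B: fit polynomials in x = 1/m_P, evaluate at x = 1/m_B, and
# use the linear-vs-quadratic spread as a systematic estimate.
# Synthetic noiseless data with assumed A, B, C.
import numpy as np

A, B, C = 0.24, 0.9, -0.5                # assumed model parameters
m_P = np.array([1.8, 2.1, 2.5, 3.0])     # heavy masses near charm (GeV)
x = 1.0 / m_P
t2hat = A * (1.0 + B * x + C * x**2)

m_B = 5.279
lin = np.polyval(np.polyfit(x, t2hat, 1), 1.0 / m_B)
quad = np.polyval(np.polyfit(x, t2hat, 2), 1.0 / m_B)
syst = abs(quad - lin)                   # systematic estimate
```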
Once $T_2(q^2_{max})$ is extracted, we can obtain $T_2(0)$ in the two
cases, pole model or constant, for the $q^2$ behaviour. If $T_2$ is
constant, then~Eq.(\ref{eq:Ttwo-qsqmax-result}) is the result at $q^2=0$.
In the pole model, the expected exchange particle for $T_2$ is the
$1^+$ $B_{s1}$ state, but experimental data for its mass is not yet
available. However, it is possible to estimate reasonable upper and
lower bounds for the mass from HQET\@. It can be shown
that~\cite{neubert:HQET},
\begin{eqnarray}
\label{eq:mbs1mbsplitting}
m_{B_{s1}} - m_{B} &=& \Delta \overline{\Lambda} + \frac{A}{m_b} +
O(\frac{1}{m_b^2}), \\
m_{D_{s1}} - m_{D} &=& \Delta \overline{\Lambda} + \frac{A}{m_c} +
O(\frac{1}{m_c^2}).
\end{eqnarray}
Neglecting terms of order $1/m_c^2$, the upper and lower bounds
for~Eq.(\ref{eq:mbs1mbsplitting}) are,
\begin{equation}
\frac{m_c}{m_b}(m_{D_{s1}} - m_{D})
< m_{B_{s1}} - m_{B} <
m_{D_{s1}} - m_{D}\,.
\end{equation}
Making the approximation,
\begin{equation}
\frac{m_c}{m_b} \sim \frac{m_D + 3 m_{D^*}}{m_B + 3 m_{B^*}}\,,
\end{equation}
the range of the expected pole mass can be found,
\begin{equation}
m_{B_{s1}} = 5.74\pm{0.21}~\mbox{GeV}.
\end{equation}
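The quoted central value and error follow from averaging the two HQET bounds. A sketch, using illustrative meson masses in GeV (typical values of the era, not taken from the fits above):

```python
# Illustrative meson masses in GeV (not quoted in the text above).
m_D, m_Dstar = 1.869, 2.010
m_B, m_Bstar = 5.279, 5.325
m_Ds1 = 2.535                                       # D_{s1}(2536)

split_D = m_Ds1 - m_D                               # upper bound on m_{B_s1} - m_B
ratio = (m_D + 3 * m_Dstar) / (m_B + 3 * m_Bstar)   # proxy for m_c/m_b
lo, hi = m_B + ratio * split_D, m_B + split_D       # HQET bounds on m_{B_s1}
m_Bs1 = 0.5 * (lo + hi)                             # central estimate, ~5.74 GeV
err = 0.5 * (hi - lo)                               # half-range, ~0.21 GeV
```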
Therefore,
\begin{equation}
\label{eq:T2_calc}
T^{\mbox{\scriptsize\it{pole}}}_2(q^2{=}0;m_B;m_{K^*}) =
0.112^{+7}_{-7}\mbox{}^{+16}_{-15},
\end{equation}
where the first error is statistical and the second is the systematic
error obtained by combining the variation of the pole mass within its
bounds and the systematic error from~Eq.(\ref{eq:Ttwo-qsqmax-result}).
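As a consistency check, the pole-model central value follows from $T_2(0)=T_2(q^2_{max})\,(1-q^2_{max}/m^2_{B_{s1}})$ with $q^2_{max}=(m_B-m_{K^*})^2$. A sketch using central values only (the meson masses in GeV are illustrative):

```python
m_B, m_Kstar = 5.279, 0.892     # illustrative masses, GeV
T2_qmax = 0.269                 # central value of T_2(q^2_max) from above
M_pole = 5.74                   # B_{s1} pole-mass estimate from the text

q2max = (m_B - m_Kstar) ** 2
T2_zero = T2_qmax * (1.0 - q2max / M_pole ** 2)
# T2_zero ~ 0.112, the quoted pole-model central value
```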
There is clearly a significant systematic difference between the
results in~Eq.(\ref{eq:Ttwo-qsqmax-result}) and~Eq.(\ref{eq:T2_calc})
corresponding to the two assumed forms for $T_2(q^2)$.
\subsubsection{$T_1(q^2{=}0)$}
If constant-in-$q^2$ behaviour is assumed for $T_2$, then $T_2(0)$
should satisfy the same scaling law as $T_2(q^2_{max})$
in~Eq.(\ref{eq:T2_scaling_law}). Combining this with the identity
$T_1(0)=iT_2(0)$ leads to a scaling law for $T_1(0)$:
\begin{equation}
\label{eq:T1_one_half_scaling}
T_1(0;m_P;m_{K^*}) \sqrt{m_P}
=
\mbox{const.} \times [ \alpha_s(m_P) ]^{-2/\beta_0}
\left(1 + \frac{a_1}{m_P} + \frac{a_2}{m_P^2} + \dots\right).
\end{equation}
If single pole dominance is assumed for $T_2$ and the mass of
the exchanged $1^+$ particle can be expanded as,
\begin{equation}
\label{eq:mass-expansion}
m_{P_{s1}}=m_P \left(1 + {b_1 \over m_P} + {b_2 \over m_P^2} + \dots
\right),
\end{equation}
then $T_1(q^2{=}0;m_P;m_{K^*})$ should satisfy a modified scaling law,
\begin{equation}
\label{eq:T1_three_halves_scaling}
T_1(0;m_P;m_{K^*}) \, m_P^{3/2}
=
\mbox{const.} \times [ \alpha_s(m_P) ]^{-2/\beta_0}
\left(1 + \frac{c_1}{m_P} + \frac{c_2}{m_P^2} + \dots\right),
\end{equation}
where the unknown coefficients in~Eq.(\ref{eq:mass-expansion}) have
been absorbed into the unknown scaling coefficients of the matrix
element.
A similar scaling relationship has been found by
Ali {\it et al.} \cite{ali:3pt-sum-rules} by the sum rules approach.
The two scaling forms were tested in the same way as for
$T_2(q^2_{max})$, by forming the quantities,
\begin{equation}
{\hat T}_1=
T_1(q^2{=}0)
\left({m_P \over m_B}\right)^{N/2}
\left({\alpha_s(m_P)\over\alpha_s(m_B)}\right)^{2/\beta_0},
\end{equation}
where $N$ is 1 or 3 as appropriate.
Linear and quadratic fits were carried out with the same functions as
for $\hat T_2$, allowing for correlations between masses and form
factors. They are shown
in~Fig.(\ref{figure:T1_hat_extrapolation}). The $\chi^2/\mbox{d.o.f.}$
was approximately 1 for the $m_P^{3/2}$ scaling law, indicating that
the model is statistically valid in the available mass range. For the
$m_P^{1/2}$ scaling law we found a $\chi^2/\mbox{d.o.f.}$ of 0.3.
The correlated quadratic fit with radiative corrections gives,
\begin{equation}
\label{eq:T1_calc}
T_1(q^2{=}0;m_B;m_{K^*}) =
\cases{{0.159}^{+34}_{-33} &$m_P^{1/2}$ scaling\cr
{0.124}^{+20}_{-18}&$m_P^{3/2}$ scaling\cr},
\end{equation}
where the errors quoted are statistical.
All methods of evaluating $T_1(q^2{=}0;m_P;m_{K^*})$ at
intermediate masses are compared in
table~\ref{table:T1_T2_comparison}. We consider the differences
between the methods as a measure of part of the systematic error. The
differences between the methods of determining the form factors at the
computed masses are of a similar size ($\sim 10\%$) to the systematic
error at the physical $B$ mass, as measured by the linear or quadratic
extrapolation of ${\hat T}_1$ in the inverse heavy meson mass.
The final result for $T_1(q^2{=}0;m_B;m_{K^*})$ is taken from the
quadratic fit for $T_1$, with an estimated systematic error in
extrapolation given by the difference between linear and quadratic
fits,
\begin{equation}
T_1(q^2{=}0;m_B;m_{K^*}) =
\cases{{0.159}^{+34}_{-33}\pm{0.067} &$m_P^{1/2}$ scaling\cr
{0.124}^{+20}_{-18}\pm{0.022}&$m_P^{3/2}$ scaling\cr}.
\end{equation}
The extrapolation is shown in
Fig.(\ref{figure:T1_hat_extrapolation}). We note that the value
obtained from $m_P^{3/2}$ scaling is consistent with the corresponding
value from $T_2$ calculated using the single pole $q^2$ behaviour
discussed earlier.
\subsection{$B_s \to \phi \gamma$}
Much of the analysis above can also be applied to the
rare decay $B_s \to \phi \gamma$. ALEPH~\cite{aleph:btophigamma} and
DELPHI~\cite{delphi:btophigamma} have looked for this decay and
obtained 90\% CL upper bounds on its branching ratio of $4.1 \times
10^{-4}$ and $1.9 \times 10^{-3}$ respectively. Future research into
this decay at LEP is planned. The branching ratio for this decay can
be expressed in a form similar to~Eq.(\ref{eq:decay_rate}),
\begin{equation}
\label{eq:Bs-decay_rate}
\mbox{\it{BR\,}}(B_s \to \phi \gamma )
= \frac{\alpha}{8 \pi^4} m_b^2 G_F^2
m_{B_s}^3 \tau_{B_s} \left(1-\frac{m_{\phi}^2}{m_{B_s}^2}\right)^3
| V^{\phantom{*}}_{tb} V_{ts}^* |^2 |C_7(m_b)|^2 |T^s_1(q^2{=}0)|^2,
\end{equation}
where $T^s_1$ is the relevant form factor from the decomposition
of $\langle \phi | J_\mu | B_s \rangle$.
In determining this matrix element numerically, the interpolating
operator $J^V_\rho(x)$ is replaced by the operator $J^\phi_\rho(x)$
defined as,
\begin{equation}
J^\phi_\rho(x) = \overline{s}(x) \gamma_\rho s(x) .
\end{equation}
As a result of the presence of two identical particles in the final state,
there is an extra additive term in the trace of~Eq.(\ref{eq:func-int-trace}),
which corresponds to $\overline{s} s$ creation from purely gluonic
states. It is expected that this process is heavily suppressed by
Zweig's rule~\cite{zweig,okubo,iizuka}, and hence the extra term is
neglected.
As the variation of the form factors with respect to the
spectator quark mass has been neglected, it can be assumed that,
\begin{eqnarray}
T^s_1(q^2{=}0; m_P; m_\phi) &=& T_1(q^2{=}0; m_P; m_{K^*}) , \\
T^s_2(q^2{=}0; m_P; m_\phi) &=& T_2(q^2{=}0; m_P; m_{K^*}) .
\end{eqnarray}
Employing the same {\it ans\"atze\/} for extrapolating $T_1$ and
$T_2$ as in the previous sections, we obtain,
\begin{eqnarray}
T^s_1( q^2{=}0; m_{B_s}; m_\phi ) & = &
\cases{0.165^{+32}_{-30}\pm{0.060}&$m_P^{1/2}$ scaling\cr
0.125^{+20}_{-18}\pm{0.021}&$m_P^{3/2}$ scaling\cr}, \\
T^s_2( q^2_{max}; m_{B_s}; m_\phi ) & = &
0.270^{+17}_{-9}\pm{0.009} , \\
T^{s, pole}_2( q^2{=}0; m_{B_s}; m_\phi ) & = &
0.114^{+7}_{-4}{}^{+16}_{-15}.
\end{eqnarray}
We note that $T^s_1(q^2{=}0)$, with $m_P^{3/2}$ scaling, and
$T^{s,pole}_2(q^2{=}0)$ are consistent with each other.
\section{Conclusions}
In this paper we have reported on an {\it ab initio\/} computation of
the form factor for the decay $B \to K^* \gamma$. The large number of
gauge configurations used in this calculation enables an extrapolation
to the appropriate masses to be made and gives a statistically
meaningful result. To compare this result with experiment we convert
the preliminary branching ratio from CLEO, $\mbox{\it{BR\,}}(B \to
K^*\gamma) = (4.5 \pm 1.5 \pm 0.9) \times 10^{-5}$ based on 13 events
\cite{cleo:evidence-for-penguins}, into its corresponding $T_1$ form
factor, assuming the Standard Model.
We work at the scale $\mu=m_b=4.39\,\mbox{GeV}$, in the
$\overline{MS}$ scheme, using a pole mass of $M_b
=4.95(15)\,\mbox{GeV}$~\cite{morningstar:mb} to determine
$m_b$~\cite{broadhurst:pole-to-ms}. Taking $|V_{ts}V_{tb}| =
0.037(3)$~\cite{stone:CKM}, $\tau_B = 1.5(2)
\,\mbox{ps}$~\cite{ALEPH:tau-b,OPAL:tau-B} and all other values from
the Particle Data Book combined with~Eq.(\ref{eq:decay_rate}), we find
$T^{\mbox{\scriptsize\it{exp}}}_1$ to be $0.23(6)$, $0.21(5)$ and
$0.19(5)$ for top quark masses of $m_t=100$, $150$ and
$200\,\mbox{GeV}$ respectively. We find the calculated value for
$T_1$ consistent with these results to within two standard deviations.
In calculating the branching ratio, we use the perturbative
renormalisation of
$\sigma_{\mu\nu}$~\cite{borrelli:improved-operators} with a boosted
coupling, $g^2=1.7 g_0^2$, and the anomalous dimension, $\gamma_{{\bar
q}\sigma{q}} = -(8/3)(g^2/16\pi^2)$, to match the lattice results to
the continuum at the scale $\mu=m_b$, giving a matching coefficient of
$Z\approx0.95$. We apply a correction of $Z^2=0.90$ in the
calculations below.
Varying the scale of $C_7(\mu)$ from $\mu=m_b/2$ to $\mu=2 m_b$
changes the final branching ratio by $+27\%$ and $-20\%$ respectively.
This is due to the perturbative calculation of $C_7(\mu)$ and future
work on next-to-leading logarithmic order corrections will reduce
this variation significantly~\cite{buras:review}.
These uncertainties cancel in the dimensionless hadronisation ratio,
$R$,
\begin{eqnarray}
R &=& \frac{\mbox{\it{BR\,}}(B \to K^*\gamma)}
{\mbox{\it{BR\,}}(B\to X_s \gamma)} \\
&=& 4 {\left(\frac{m_B}{m_b}\right)}^3
{\left(1-\frac{m^2_{K^\ast}}{m^2_B}\right) }^3
|T_1(q^2{=}0)|^2,
\end{eqnarray}
which we find to be,
\begin{equation}
R=\cases{\left(
14.5^{+62}_{-60}\mbox{\,(stat.)\,}
\pm{6.1}\mbox{\,(sys.)\,}
\pm{1.6}\mbox{\,(exp.)\,}
\right)\% & $m_P^{1/2}$ scaling\cr
\left(
8.8^{+28}_{-25}\mbox{\,(stat.)\,}
\pm{3.0}\mbox{\,(sys.)\,}
\pm{1.0}\mbox{\,(exp.)\,}
\right)\% &$m_P^{3/2}$ scaling\cr}.
\end{equation}
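The central values of $R$ can be reproduced from the fitted form factors. A sketch with illustrative masses in GeV, assuming the $Z^2=0.90$ matching correction quoted earlier is included in $R$:

```python
def hadronisation_ratio(T1, Z2=0.90, m_B=5.279, m_b=4.39, m_Kstar=0.892):
    """R = Z^2 * 4 (m_B/m_b)^3 (1 - m_K*^2/m_B^2)^3 |T_1(0)|^2."""
    return Z2 * 4.0 * (m_B / m_b) ** 3 \
        * (1.0 - m_Kstar ** 2 / m_B ** 2) ** 3 * T1 ** 2

# hadronisation_ratio(0.159) ~ 0.145 and hadronisation_ratio(0.124) ~ 0.088,
# matching the central values above
```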
Assuming the recent tentative result for $m_t$ from
CDF~\cite{fermilab_top_mass}, the lattice results give a branching
ratio for the decay $B \to K^*\gamma$ of,
\begin{equation}
\mbox{\it{BR\,}}(B \to K^*\gamma) = \cases{
\left(
2.5^{+11}_{-11}
\mbox{\,(stat.)\,}
\pm 2.1\mbox{\,(sys.)\,}
\pm 0.6\mbox{\,(exp.)\,}
{}^{+7}_{-5}\mbox{\,(scale)}
\right)
\times 10^{-5} &$m_P^{1/2}$ scaling\cr
\left(
1.5^{+5}_{-4}
\mbox{\,(stat.)\,}
\pm 0.5\mbox{\,(sys.)\,}
\pm 0.3\mbox{\,(exp.)\,}
{}^{+4}_{-3}\mbox{\,(scale)}
\right)
\times 10^{-5}
&$m_P^{3/2}$ scaling\cr},
\end{equation}
where we separate the statistical and systematic errors from the
lattice, experimental and theoretical (scale) uncertainties. Combining
errors to produce an overall result yields,
\begin{equation}
\mbox{\it{BR\,}}(B \to K^*\gamma) = \cases{\left(
2.5
\pm1.3\mbox{\,(stat.)\,}
{}^{+28}_{-26} \mbox{\,(sys.)}
\right)
\times 10^{-5} &$m_P^{1/2}$ scaling\cr
\left(
1.5
\pm0.6\mbox{\,(stat.)\,}
{}^{+9}_{-8} \mbox{\,(sys.)}
\right)
\times 10^{-5}
&$m_P^{3/2}$ scaling\cr}.
\end{equation}
Similarly for $B_s \to \phi \gamma$, using
$m_{B_s}=5.3833(5)\,\mbox{GeV}$~\cite{OPAL:Bsmass,CDF:Bsmass} and
$\tau_{B_s}=1.54(15)\,\mbox{ps}$~\cite{forty:tau-Bs}, we find,
\begin{eqnarray}
\mbox{\it{BR\,}}(B_s \to \phi \gamma) &=&
\cases{\left(
2.8
{}^{+11}_{-10} \mbox{\,(stat.)\,}
\pm 2.1\mbox{\,(sys.)\,}
\pm 0.5\mbox{\,(exp.)\,}
{}^{+7}_{-5}\mbox{\,(scale)}
\right)
\times 10^{-5} &$m_P^{1/2}$ scaling\cr
\left(
1.6
{}^{+5}_{-5} \mbox{\,(stat.)\,}
\pm 0.6\mbox{\,(sys.)\,}
\pm 0.3\mbox{\,(exp.)\,}
{}^{+4}_{-3}\mbox{\,(scale)}
\right)
\times 10^{-5} &$m_P^{3/2}$ scaling\cr}, \\
&=&
\cases{\left(
2.8 \pm 1.2 \mbox{\,(stat.)\,}
{}^{+28}_{-26} \mbox{\,(sys.)}
\right)
\times 10^{-5} &$m_P^{1/2}$ scaling\cr
\left(
1.6 \pm 0.6 \mbox{\,(stat.)\,}
{}^{+10}_{-9} \mbox{\,(sys.)}
\right)
\times 10^{-5} &$m_P^{3/2}$ scaling\cr}.
\end{eqnarray}
In obtaining these results, we have made some assumptions. Since this
calculation is carried out at a single lattice spacing, we cannot explore
discretisation errors. However, the use of an $O(a)$-improved action
is expected to reduce these substantially.
As the form factors and mass ratios evaluated are dimensionless, we
also expect some of the systematic error from setting the scale to
cancel.
The extrapolation of matrix elements to the chiral limit has been
neglected, although the current data indicates a weak dependence on
the spectator quark mass.
Without doing a simulation using dynamical fermions, the error due to
quenching cannot be accurately estimated. However, the good agreement
with experiment for other semileptonic, pseudoscalar to vector meson
decays~\cite{lubicz:d-decay-ii,bernard:d-decay}, that have been
determined using coarser lattices and lower statistics, suggests that
these errors are small. We find our results consistent with previous
calculations~\cite{ukqcd:penguin-prl,bhs:penguin-prl}. With form
factors available over a range of masses, we have been able to
incorporate heavy-quark symmetry into our extrapolation and
investigate phenomenologically motivated pole-dominance models. These
methods supersede the simple linear extrapolation used as a guide in
our earlier preliminary study, where the limited set of two masses
precluded an investigation of different extrapolation
methods~\cite{ukqcd:penguin-prl}.
Whether pole dominance is a valid model for a large range of $q^2$ is
an important question. We have quoted results for two different
possibilities for the $q^2$ dependence of the form factors. Although
the lattice results visually favour $T_2$ constant in $q^2$, at least
for heavy quark masses around the charm mass, our fits favour a single
pole vector dominance form for $T_2$. The difference between the
results indicates the need for a better understanding of the combined
$q^2$ and heavy quark scaling behaviour of the relevant form factors.
We have not applied the constraint $T_1(0) = iT_2(0)$ to our fits in
this paper, using instead the consistency of our results with this
relation as a guide to the fitting method. We find that the single
pole dominance model for the $q^2$ behaviour of $T_2$ (and
corresponding dipole behaviour for $T_1$) gives the most consistent
fit. In this case we have checked the
consistency by comparing $T_1(q^2{=}0;m_B;m_{K^*})$, extracted
using the $m_P^{3/2}$
scaling law, with $T_2(q^{2}{=}0;m_B;m_{K^*})$ assuming pole model
behaviour for $T_2$ and the expected pole mass. It could be argued
that both methods are equivalent. However, in extrapolating the form
factor $T_1(q^2{=}0;m_P;m_{K^*})$ to $m_B$, the coefficients in the
fit are not fixed, which is equivalent to letting the pole mass vary.
We require only that the leading order behaviour of $T_1$ satisfy the
$m_P^{3/2}$ dependence.
We look forward to improved experimental results for the decay
$B \to K^*\gamma$ and observation of $B_s \to \phi \gamma$. We hope future
lattice studies will significantly increase the accuracy of these
calculations.
\acknowledgments
The authors wish to thank G.~Martinelli for emphasising
the consistency requirements on scaling the form factors
$T_1$ and $T_2$.
They also thank
A.~Soni, C.~Bernard, A.~El-Khadra, and members of the UKQCD
collaboration,
including
C.~Allton, L.~Lellouch, J.~Nieves and H.~Wittig for useful discussions
on this topic.
JMF thanks the Nuffield Foundation for support under the scheme
of Awards for Newly Appointed Science Lecturers.
The University of Edinburgh and the Wingate Foundation
are acknowledged for their support of
HPS through a scholarship. DGR (Advanced Fellow) and DSH (Personal Fellow)
acknowledge the support of the Science and Engineering Research
Council.
The authors acknowledge the support of the Particle Physics and Astronomy
Research Council by grant GR/J98202.
\section{Introduction}
Single photon sources \cite{Lounis2005,Eisaman2011} are fundamental building blocks for quantum information protocols. Current realizations based on blockade mechanisms \cite{Paul1982} unavoidably require a strong optical nonlinearity. They are usually engineered in systems such as quantum dots \cite{Michler2000,Santori2001,Ding2016,Somaschi2016,Schlehahn2016}, diamond color centers \cite{Kurtsiefer2000}, superconducting circuits \cite{Lang2011} or trapped atoms \cite{McKeever2004}. Although the degree of control over these systems is steadily improving, they typically operate at cryogenic temperatures and/or imply significant fabrication challenges, particularly with respect to integration and scalability in future photonic platforms. On the other hand, nondeterministic sources relying on heralding protocols now operate at room temperature in Silicon \cite{Davanco2012,Azzini2012,Collins2013,Li2011,Spring2013}, but they require a significant input power to trigger the four wave mixing mechanism.
The unconventional photon blockade (UPB) was proposed as a novel paradigm to produce sub-poissonian light in the presence of a very weak nonlinearity \cite{Liew2010}. The seminal system consists of a pair of coherently coupled cavities embedding a Kerr nonlinear medium, where one of the cavities is driven by a classical source \cite{Liew2010,Bamba2011a,Bamba2011,Flayac2013}. It was then extended to Jaynes-Cummings \cite{Bamba2011a}, optomechanical \cite{Savona2013} or bimodal cavity \cite{Majumdar2012} systems, and should be feasible in superconducting circuits \cite{Eichler2014}. It was shown that UPB essentially originates from a quantum interference mechanism \cite{Liew2010,Bamba2011a}, which arises even when the nonlinear energy per photon $U\ll\kappa$, where $\kappa$ is the cavity photon loss rate. UPB was also shown to be within reach of an optimized silicon photonic crystal platform \cite{Ferretti2013,Flayac2015}, where it could lead to a new class of highly integrable, ultralow-power, passive single photon sources. However, the coherent mode coupling -- with rate $J$ -- results in a detrimental oscillation of the delayed two-photon correlations $g^{(2)}(\tau)$, thus restricting the sub-poissonian behavior to delays shorter than $1/J\ll1/\kappa$ under the required optimal antibunching conditions \cite{Bamba2011a}. As a consequence, UPB is suppressed under pulsed excitation, as the antibunched portion of the time-dependent field contributes minimally to the emitted pulse of duration $1/\kappa$ \cite{Flayac2015}. Filtering the output pulse through a narrow time gate was shown to improve photon antibunching at the expense of a reduced photon rate \cite{Flayac2015}. Alternative schemes \cite{Kyriienko2014} rely on a strong auxiliary driving field, thus departing from the desired low-power operation.
UPB can be understood in terms of Gaussian squeezed states \cite{Lemonde2014}. For any coherent state $\left| \alpha \right\rangle$, there exists an optimal squeezing parameter $\xi$ that minimizes the two-photon correlation $g^{(2)}(0)$, which can be made vanishing for a weak driving field. In the UPB scheme, the two coupled modes bring enough flexibility to tune the $\alpha$ and $\xi$ values of the target mode independently \cite{Bamba2011}. A more effective approach would then consist in pipelining two subsystems, one which provides the squeezing $\xi$ and the other which induces the corresponding optimal displacement $\alpha$.
\begin{figure}[ht]
\includegraphics[width=0.40\textwidth,clip]{Fig1.pdf}\\
\caption{(Color online) Scheme of the proposed system: Two nonlinear and dissipative optical resonators are driven by mutually coherent fields of same frequency $\omega_L$ but with different complex amplitudes $F_{1,2}$. The one-directional dissipative coupling from cavity 1 to cavity 2 occurs at a rate $\chi$.}
\label{Fig1}
\end{figure}
In this paper, we develop such an approach by investigating a scheme where two optical resonators are linked via a dissipative -- i.e. one-directional -- coupling \cite{Carmichael1993,Gardiner1993,Gardiner1994}. The nature of such a coupling allows independent tuning of the field squeezing and displacement in the second cavity. The most significant advance brought by the one-directional coupling, however, is the absence of a normal-mode energy splitting. This removes the oscillations typical of UPB, so that the condition $g^{(2)}(\tau)<1$ is fulfilled for delays longer than the cavity lifetime, and pulsed operation becomes naturally possible, as in the conventional photon blockade.
\section{The Model}
We consider two driven Kerr resonators with dissipative (one-directional) coupling from the first to the second cavity as sketched in Fig.\ref{Fig1}. Our goal is to find a regime of parameters for which cavity 2 -- from now on denoted target cavity -- displays sub-poissonian photon statistics. In the frame rotating at the frequency $\omega_L$ of the driving fields, the system Hamiltonian reads
\begin{equation}\label{H}
\hat {\cal{H}} = \sum\limits_{j = 1,2} {\left[ -{{\Delta_j}\hat a_j^\dag {{\hat a}_j} + U_j\hat a_j^\dag \hat a_j^\dag {{\hat a}_j}{{\hat a}_j}} + F_j^*\hat a_j + F_j\hat a_j^\dag \right]}\,,
\end{equation}
where $F_j$ ($j=1,2$) are the complex driving field amplitudes for each cavity, $\Delta_{j}=\omega_{j}-\omega_L$ are the cavity mode detunings, and $U_{j}$ are the strengths of the Kerr nonlinearities. The open system dynamics obeys the quantum master equation
\begin{equation}\label{rhot}
i\frac{{\partial \hat \rho }}{{\partial t}} = \left[ {\hat {\cal{H}},\hat \rho } \right] - {\frac{{{i}}}{2}\sum\limits_{j = 1,2} \kappa _j\hat {\cal{D}}\left[ {{{\hat a}_j}} \right]\hat \rho} + i\chi \hat {\cal{D}}\left[ {{{\hat a}_1},{{\hat a}_2}} \right]\hat \rho\,,
\end{equation}
where $\hat{\cal{D}}\left[ {{{\hat a}_j}} \right]\hat \rho = \{\hat a_j^\dag {{\hat a}_j},\hat \rho\} - 2{{\hat a}_j}\hat \rho \hat a_j^\dag$ describes the dissipation into the environment at rates $\kappa_j$, and $\hat{\cal{D}}\left[ {{{\hat a}_1},{{\hat a}_2}} \right]\hat \rho = [ {{{\hat a}_1}\hat \rho ,\hat a_2^\dag } ] + [ {{{\hat a}_2},\hat \rho \hat a_1^\dag } ]$ models the dissipative coupling at a rate $\chi = \sqrt {\eta {\kappa _1}{\kappa _2}}$. Here, we have defined a transfer efficiency $\eta \in [0,1]$ \cite{Gardiner2004}, relating the coupling rate to the dissipation rates.
Before directly solving Eq. (\ref{rhot}), it is useful to study the system in the limit of weak driving fields $F_{1,2} \rightarrow 0$. In this limit, analytical expressions for the various expectation values can be obtained by assuming pure states and restricting to the $n\le2$ photon manifold, as was also done in Refs. \onlinecite{Bamba2011a,Savona2013,Flayac2013}. Details of this analysis are reported in Appendix A. In that framework, the steady-state two-photon correlation of the target cavity approximates to
\begin{eqnarray}
g_{2}^{(2)}(0)=\frac{\langle\hat{a}^{\dag}_{2}\hat{a}^{\dag}_{2}\hat{a}_{2}\hat{
a}_{2}\rangle}{\langle\hat{a}^{\dag}_{2}\hat{a}_{2}\rangle^2}\simeq2\frac{|c_{02}|^2}{|c_{01}|^4}\label{g2}\,,
\end{eqnarray}
where $|c_{01}|^2$ and $|c_{02}|^2$ are the probabilities of having zero photons in cavity 1 and, respectively, 1 and 2 photons in the target cavity [see Eqs.(\ref{c01},\ref{c02})]. By requiring $c_{02}=0$, one obtains a condition for an optimal sub-poissonian behavior, $g_2^{(2)}(0) \simeq 0$. This optimal condition can be met, provided that the driving fields fulfill
\begin{equation}\label{F1opt}
{F_1}\left| {_{\rm opt}} \right. = i{F_2}\frac{{\tilde \Delta {{\tilde U}_1} \pm \sqrt { - {U_1}{{\tilde U}_1}\tilde \Delta {{\tilde \Delta }_2}} }}{{\left( {\tilde \Delta + {U_1}} \right)\chi }}\,,
\end{equation}
where we defined $\tilde \Delta_{1,2}=-\Delta_{1,2}-i\kappa_{1,2}/2$, $\tilde \Delta = {{\tilde \Delta }_1} + {{\tilde \Delta }_2}$, ${{\tilde U}_1} = {{\tilde \Delta }_1} + {U_1}$, and assumed $F_2\in {\mathbb{R}^{+}}$ without loss of generality. Eq.\eqref{F1opt} reveals several interesting features: (i) ${F_1}\left| {_{\rm opt}} \right.$ needs to carry the proper magnitude and phase. (ii) ${F_1}\left| {_{\rm opt}} \right.$ depends linearly on $F_2$, which cannot therefore be set to zero. Indeed, an undriven target cavity would simply act as a spectral filter \cite{Flayac2014}, thus essentially recovering the single mode statistics. (iii) The optimal field amplitude does not depend on $U_2$, which can therefore be set arbitrarily small. Under the assumption that the sub-poissonian character originates from the optimal squeezing mechanism described in Ref.~\cite{Lemonde2014}, this feature hints at the fact that cavity 1 is here the main source of squeezing. (iv) In the case where $U_1=0$, an optimal value of the driving field is still well defined, but it results in a vanishing occupation of the target cavity. (v) We made no assumptions on the value of $U_j$, which may be set arbitrarily smaller or larger than the loss rates $\kappa_j$.
\section{Results and discussion}
From now on, we will consider the case of cavities with equal loss rates $\kappa_1=\kappa_2=\kappa$ and nonlinearities $U_1=U_2=U$. We solve numerically Eq. \eqref{rhot} in the stationary limit, on a Hilbert space truncated to include $N_{\rm{max}}$ quanta per mode. As a figure of merit, in Fig.\ref{Fig2}(a), we show the two-photon correlation $g_2^{(2)}(0)$ for the target cavity, as a function of its average occupation $n_2$ (blue line). For this calculation we assumed the most interesting regime of weak nonlinearity, compatible with Silicon photonic crystal cavities, where $U=10^{-3}\kappa$ \cite{Flayac2015}. We additionally assumed $\Delta_1=\Delta_2=0$ for simplicity. Since $U\ll\kappa$, the optimal condition \eqref{F1opt} approximately reduces to
\begin{equation}\label{F1opts}
{F_1}\left| {_{\rm opt}} \right. \simeq i{F_2}\frac{\kappa }{{2\chi }}\,.
\end{equation}
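The reduction of the full condition can be checked numerically. The sketch below uses illustrative units $\kappa_{1,2}=\chi=1$ and verifies only that, for $U_1\ll\kappa$, the magnitude of ${F_1}\left| {_{\rm opt}} \right.$ approaches $F_2\kappa/2\chi$, up to corrections of order $\sqrt{U_1/\kappa}$, for either sign choice in Eq.\eqref{F1opt}:

```python
import cmath

# Illustrative parameters: kappa_1 = kappa_2 = chi = 1, Delta_1 = Delta_2 = 0.
kappa1 = kappa2 = chi = 1.0
U1 = 1e-3
F2 = 1.0
D1t = -1j * kappa1 / 2          # tilde-Delta_1
D2t = -1j * kappa2 / 2          # tilde-Delta_2
Dt = D1t + D2t                  # tilde-Delta
U1t = D1t + U1                  # tilde-U_1

root = cmath.sqrt(-U1 * U1t * Dt * D2t)
F1_plus = 1j * F2 * (Dt * U1t + root) / ((Dt + U1) * chi)
F1_minus = 1j * F2 * (Dt * U1t - root) / ((Dt + U1) * chi)
# Both branches give |F1| close to F2 * kappa / (2 * chi) = 0.5
```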
\begin{figure}[ht]
\includegraphics[width=0.5\textwidth,clip]{Fig2.pdf}\\
\caption{(Color online) (a) Target cavity two-photon correlations $g_2^{(2)}(0)$ versus its occupation $n_2$. Here $U=10^{-3}\kappa$, $F_1=F_1|_{\rm{opt}}$ and $\chi=\kappa$. Blue line: exact master equation. Red line: linearized model. Yellow line: linearized model where $F_1$ was obtained for each value of $n_2$ from numerical minimization of $g_2^{(2)}(0)$. (b) $g_2^{(2)}(0)$ versus $U$ at fixed occupation $n_2=10^{-4}$. The dashed black and red line denote the thermal and pure state Gaussian boundaries set in Ref. \onlinecite{Lemonde2014}.}
\label{Fig2}
\end{figure}
For increasing driving field amplitudes, the value of $N_{\rm{max}}$ required for convergence becomes exceedingly large. To extend the range of accessible $n_2$ values, we linearize with respect to the mean-field solution (see Appendix B). The result is plotted in Fig.\ref{Fig2}(a) (red line) and matches the full quantum treatment perfectly for $n_2>10^{-6}$, where the mean field dominates over fluctuations. The optimal two-photon correlation behaves linearly as a function of $n_2$, except at larger occupancies where a nonlinear increase in $g_2^{(2)}(0)$ appears. This behavior originates from the limited range of validity of Eq.\eqref{F1opt}, which loses accuracy as the 3-photon probability rises. For the largest values of $n_2$ we therefore searched for the optimal parameters numerically, using the amplitude and phase of $F_1$ as free parameters. The value $g_2^{(2)}(0)\leqslant0.5$ -- considered as an upper bound for single-photon emission -- is reached for a remarkably high occupancy $n_2\simeq0.25$ (yellow line) for such a weakly nonlinear system. We note that in the presence of a thermal background, e.g. if microwave photons \cite{Eichler2014} are envisaged, the system would display a value of $g_2^{(2)}(0)=2$ in the limit of vanishing driving fields. The function would therefore present a minimum at finite driving amplitude.
By assuming a linewidth $\kappa=1$ $\mu$eV, typical of state-of-the-art photonic crystal cavities, we can extract a maximum emission rate as high as ${\cal{R}}=n_2\kappa/\hbar=380$ MHz. The corresponding intracavity power at zero detuning for cavity resonances $\hbar\omega_{1,2}=0.8$ eV is $P_{\rm in}=\omega_c (F_{1}+F_{2})/\hbar=15.5$ pW, given $F_2\simeq\sqrt{n_2}\kappa = 2 F_1$. The real input power can be estimated as $50 \times P_{\rm in}=778$ pW when taking into account a conservative value for the in-coupling efficiency \cite{Dharanipathy2014}. This value is about 30 times smaller than the typical input power required for single photon operation with quantum dots \cite{Michler2000}.
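The quoted rate follows directly from $\mathcal{R}=n_2\kappa/\hbar$ (a minimal numerical sketch using central values only):

```python
hbar = 6.582e-16        # eV s
kappa = 1e-6            # cavity linewidth, eV
n2 = 0.25               # target-cavity occupation at g2(0) = 0.5
R = n2 * kappa / hbar   # emitted photons per second
# R ~ 3.8e8 photons/s, i.e. the 380 MHz quoted above
```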
It was shown \cite{Lemonde2014} that, under the assumption that the state is Gaussian, a lower bound on $g^{(2)}(0)$ exists. In particular, for mean occupancies $n \ll 1$, this bound is given by $g^{(2)}(0)|_{\rm{p}}\simeq4|\langle\hat a\rangle|^2$ for a pure displaced-squeezed state, and by $g^{(2)}(0)|_{\rm{th}}\simeq8\sqrt{\bar{n}_{\rm{eff}}}$ for a corresponding thermal (i.e. mixed) state with mean occupation $\bar{n}_{\rm{eff}}$. For a general mixed state, we can define an effective thermal occupation $\bar{n}_{\rm{eff}}=(1/{\rm Tr} {{{\hat \rho }^2}} - 1)/2\ll1$, which then roughly measures the degree of mixedness. We show in Fig.\ref{Fig2}(b) the computed $g_2^{(2)}(0)$ as a function of $U$ (blue line) at constant occupation $n_2=10^{-4}$ (where Eq.(\ref{F1opt}) holds), and compare it to the pure and thermal limits (dashed lines) that are independent of $U$ for given values of $n_2$ and $\kappa$. Photons in the target cavity achieve a value of $g^{(2)}(0)$ lying below the thermal limit and, from $U/\kappa>10^{-2}$, crossing the pure state boundary. In this case, the state departs from a Gaussian state, which was checked by identifying negative Wigner distribution areas (not shown).
\begin{figure}[ht]
\includegraphics[width=0.5\textwidth,clip]{Fig3.pdf}\\
\caption{(Color online) Color maps of (a) the target cavity occupation $n_{2}$ and (b) two-photon correlation $g_{2}^{(2)}(0)$, computed as a function of the cavity detunings. Here $U=10^{-3}\kappa$, $\chi=\kappa$ and we set the optimal condition (\ref{F1opt}) at $\Delta_{1,2}=0$ and $n_2=10^{-4}$.}
\label{Fig3}
\end{figure}
We have studied the impact of variable detunings $\Delta_{1,2}$ when the other parameters are fixed. The results for $n_2$ and $g_2^{(2)}(0)$ are presented in Fig.\ref{Fig3}. Panel (a) shows that the occupation vanishes for small detunings. This is due to destructive interference between the input from cavity 1 and the field driving the target cavity. In particular, under the condition $F_2=-i \chi F_1/\tilde \Delta_1$, the coefficient $c_{01}$ is suppressed, therefore favoring photon pairs (see Appendix A). As shown in Fig.\ref{Fig3}(b), in the region $|\Delta_1|<\kappa/2$, we observe either strong bunching up to $g_2^{(2)}(0)=30$ (red areas) or strong antibunching (blue areas) where the optimal antibunching condition holds. As already discussed, antibunching results from the interplay between the squeezing brought by cavity 1 and the field displacement induced by the driving field on the target cavity \cite{Lemonde2014}. The results are essentially unchanged when $U_2=0$, as dictated by Eq.\eqref{F1opt}.
As discussed above, the UPB scheme displays antibunching only for values of the time delay smaller than $1/J \ll 1/\kappa$ \cite{Liew2010,Bamba2011a,Flayac2015}, thus preventing simple operation under pulsed input. This is ultimately due to the normal-mode energy splitting in the spectrum of the two resonators, of the order of $2J$. The dissipative coupling overcomes this difficulty, as the normal-mode splitting is absent and the emitted photons are characterized by the spectrum of the target cavity (see Fig.A\ref{FigS4}). We show in Fig.\ref{Fig4} the $g_2^{\left( 2 \right)}(\tau)$ function computed at steady state for the optimal parameter values (red line), and compare it to the UPB result (blue line) for the same value of $U$. In the dissipative case, the antibunching survives over $\tau>1/\kappa$ and oscillations are absent. The single photon regime, defined by $g^{(2)}_2(\tau)<0.5$, is preserved over the shaded time frame and behaves similarly to conventional sources \cite{Paul1982}.
\begin{figure}[ht]
\includegraphics[width=0.45\textwidth,clip]{Fig4.pdf}\\
\caption{(Color online) Delayed two-photon correlation function $g_2^{(2)}(\tau)$ (red line) computed in the steady state regime under continuous wave driving, for the target cavity at $n_2=10^{-4}$, $U=10^{-3}\kappa$ and $\chi=\kappa$. The gray area highlights the single photon regime. The blue line shows the oscillating UPB counterpart obtained for the same value of $U$ requiring $J=19.6\kappa$, $\Delta_j=0.29\kappa$ and $F_1=0$.}
\label{Fig4}
\end{figure}
We studied the pulsed regime in more detail by a direct time integration of Eq.\eqref{rhot} where we assumed input Gaussian pulses $F_j\exp[-(t-t_{j0})^2/\sigma_t^2]$. A key quantity in assessing the single-photon emission under pulsed excitation is the two-photon correlation averaged over two times \cite{Flayac2015}
\begin{equation}\label{g2pulse}
g_{\rm{pulse}}^{(2)} = \frac{{\int {G_2^{(2)}\left( {{t_1},{t_2}} \right)d{t_1}d{t_2}} }}{{\int {{n_2}\left( {{t_1}} \right){n_2}\left( {{t_2}} \right)d{t_1}d{t_2}} }}\,,
\end{equation}
where $G_2^{(2)}({t_1},{t_2})=\langle {\hat a_2^\dag(t_1) \hat a_2^\dag(t_2) {{\hat a}_2}(t_2){{\hat a}_2}(t_1)} \rangle$ and $n_2(t)= \langle\hat a_2^\dag(t) {{\hat a}_2}(t)\rangle$. For optimal single-photon operation, the duration of the excitation pulses should be optimized so as to be shorter than the sub-poissonian time window (see Fig.\ref{Fig4}), while allowing enough time for the buildup of the squeezing (see Fig.A\ref{FigS0}). A suitable delay $\Delta t=t_{02}-t_{01}=1.5/\kappa$ between the two pulses (see Appendix C) has also been introduced here to circumvent the onset of strong bunching in the earliest part of the output pulse.
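For reference, Eq.~\eqref{g2pulse} is straightforward to evaluate numerically from sampled two-time data. The sketch below is our own illustration (the arrays stand in for actual master-equation output): it approximates the double and single integrals by Riemann sums on a uniform grid, and checks the Poissonian limit, for which $G_2^{(2)}(t_1,t_2)=n_2(t_1)n_2(t_2)$ and the averaged correlation must equal one.

```python
import numpy as np

def g2_pulse(t, G2, n):
    """Evaluate Eq. (g2pulse) on a uniform time grid by Riemann sums.

    t  : 1-D array of sample times
    G2 : 2-D array with G2[i, j] = G_2^{(2)}(t_i, t_j)
    n  : 1-D array with n[i] = n_2(t_i)
    """
    dt = t[1] - t[0]
    num = G2.sum() * dt * dt        # double integral of G_2^{(2)}
    den = (n.sum() * dt) ** 2       # product of the single integrals of n_2
    return num / den

# Sanity check with a Poissonian (coherent) pulse, for which
# G2(t1, t2) = n(t1)*n(t2) and the averaged correlation must equal 1.
t = np.linspace(0.0, 20.0, 401)                # times in units of 1/kappa
n = 3e-2 * np.exp(-((t - 8.0) / 3.0) ** 2)     # Gaussian occupation pulse
print(g2_pulse(t, np.outer(n, n), n))          # -> 1.0 (up to rounding)
```

A sub-poissonian pulse corresponds to a returned value below one.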
We show in Fig.\ref{Fig5}(a) the computed cavity occupations $n_{1,2}(t)$. Here, the target cavity has an average occupation of $n_{\rm{pulse}}=\int n_2(t)dt\simeq3\times10^{-2}$. Fig.\ref{Fig5}(b) shows the corresponding two-time correlation function $g^{(2)}_2(t_1,t_2)$. The contour plot highlights the occupation of the target cavity $n_2(t_1,t_2)=\sqrt{n_2(t_1) n_2(t_2)}$, which peaks well inside the sub-poissonian portion of the plot. For the present case, we obtained $g_{\rm{pulse}}^{(2)}\simeq0.3$. Single photon operation may be enhanced via optimal pulse shaping (a task beyond the scope of this study). Further enhancement may be obtained through time-gating the output pulse, as already suggested for the UPB \cite{Flayac2015}. If one applies a time gate of duration $\Delta T=5/\kappa$, highlighted by the dashed lines in Fig.\ref{Fig5}(b), the two-photon correlation is reduced to $g_{\rm{pulse}}^{(2)}<0.1$ while preserving an average occupation of $n_{\rm{pulse}}\simeq 10^{-2}$. In line with the steady-state discussion and for the parameters we chose here, which fit the requirements of condition \eqref{F1opt}, we can estimate a single photon rate of ${\cal R}=2.3$ MHz if we assume pulses delayed by $20/\kappa=13$ ns. Note that this rate could easily be increased by one order of magnitude by considering numerically optimized pump amplitudes as in Fig.\ref{Fig2}(a) (yellow curve).
\begin{figure}[ht]
\includegraphics[width=0.5\textwidth,clip]{Fig5.pdf}\\
\caption{(Color online) Pulsed regime: (a) Time dependent cavity occupation. (b) Two-time two-photon correlation function $g^{(2)}_2(t_1,t_2)$. The quantity $n_2(t_1,t_2)$ is displayed as a contour plot. The dashed-white lines denote a time-gate window resulting in $g_{\rm{pulse}}^{(2)}<0.1$. The parameters are $U=10^{-1}\kappa$, $\chi=\kappa$, $\sigma_t=5/\kappa$, $\Delta_{1,2}=0$, $F_2=0.1\kappa$, $F_1=F_1|_{\rm{opt}}$ and $\Delta t=1.5/\kappa$.}
\label{Fig5}
\end{figure}
The dissipative coupling considered so far can be implemented through an intermediate coupling element, which may be a waveguide or a third optical resonator, as investigated in Ref.\cite{Metelmann2015}. The coupling element acts as an engineered reservoir, effectively generating the quantum interference required for the one-directional transmission. In the case of a third cavity, the corresponding Hamiltonian reads
\begin{eqnarray}\label{H3}
\nonumber \hat {\cal{H}} &=& \sum\limits_{j = 1}^3 {\left[ { - {\Delta _j}\hat a_j^\dag {{\hat a}_j} + {U_j}\hat a_j^\dag \hat a_j^\dag {{\hat a}_j}{{\hat a}_j} + F_j^*{{\hat a}_j} + {F_j}\hat a_j^\dag } \right]}\\
&+& \sum\limits_{j \ne k = 1}^3 {\left[ {{J_{jk}}\hat a_j^\dag {{\hat a}_k} + J_{jk}^*\hat a_k^\dag {{\hat a}_j}} \right]}
\end{eqnarray}
where the auxiliary mode is not driven, i.e. $F_3=0$, and is ideally characterized by a large dissipation rate $\kappa_3\gg\kappa_{1,2}$. The coefficients $J_{jk}$ describe coherent photon hopping amplitudes. This system is well approximated by Eq. \eqref{rhot} under the conditions $J_{12} = i{\chi }/{2}$ and $J_{23} = J_{31}=\sqrt{-iJ_{12}\kappa_3/2}$ \cite{Metelmann2015}. It requires a complex-valued $J_{12}$, currently feasible e.g. using waveguide delay lines \cite{Hafezi2011}. The directionality of the coupling can be tested numerically (see Appendix D). In the presence of the auxiliary resonator, the optimal antibunching condition is displaced in the parameter space. We have identified a new condition by running a steady-state optimization with $|F_1|$ and $\phi_1$ as free parameters, and setting $F_2=0.1\kappa$, $U_j=10^{-3}\kappa$, $\Delta_j=0$, and $\kappa_3=10\kappa$. We obtained a value of $g^{(2)}_2(0)=3.8\times10^{-2}$ for $|F_1|=6.58\times10^{-2}\kappa$ and $\phi_1\simeq0$, proving the single-photon operation.
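As a quick consistency check (our own numerical sketch, not part of the derivation), the prescribed coupling amplitudes can be evaluated directly: with $J_{12}=i\chi/2$, the argument of the square root, $-iJ_{12}\kappa_3/2=\chi\kappa_3/4$, is real and positive, so $J_{23}=J_{31}=\sqrt{\chi\kappa_3}/2$ is a purely real hopping amplitude.

```python
import cmath

kappa = 1.0              # target-cavity linewidth (sets the units)
chi = 1.0 * kappa        # dissipative coupling strength
kappa3 = 10.0 * kappa    # auxiliary-cavity dissipation, kappa3 >> kappa_{1,2}

J12 = 1j * chi / 2                         # complex-valued hopping J_12 = i*chi/2
J23 = cmath.sqrt(-1j * J12 * kappa3 / 2)   # prescribed J_23 = J_31
print(J12, J23)
# the square-root argument equals chi*kappa3/4 (real and positive),
# so J23 = sqrt(chi*kappa3)/2 is real: about 1.581 for these values
```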
An efficient single-photon source should be benchmarked against the current state-of-the-art, represented by quantum emitters in resonant cavities \cite{Michler2000,Santori2001,Kurtsiefer2000,McKeever2004,Lang2011,Ding2016,Somaschi2016,Schlehahn2016}, and heralded sources \cite{Li2011,Davanco2012,Azzini2012,Collins2013,Spring2013}. Recent advances have led to close-to-ideal single-photon operation for both schemes. However, quantum emitters and heralded sources respectively require cryogenic temperatures and high input power. The present proposal brings a significant advantage in that it naturally operates at ultra-low power, in the nW range as estimated above for a photonic crystal cavity. This must be compared to the 24 nW of Ref.\cite{Michler2000} and to the mW range of heralded sources \cite{Azzini2012}. Expected single-photon rates are in the MHz range for the three schemes \cite{Azzini2012,Ding2016,Flayac2015}. The photon purity can here be made arbitrarily high (i.e. $g^{(2)}(0)$ arbitrarily small), as seen in Fig.\ref{Fig2}(a).
Moreover, photons are here emitted within the narrow spectrum of the target cavity (see Fig.A\ref{FigS4}(a)). Within the assumptions of our model, the indistinguishability degree amounts to $99.95\%$ (see Appendix E). Such a high value would obviously be reduced in the presence of pure dephasing or fluctuations in the driving fields. This incidentally represents a second advantage -- in addition to pulsed operation -- of the present scheme over the original UPB, where instead photons are emitted over the spectrum of the normal modes of the two cavities. The probabilistic emission character remains a limitation of both UPB and cascaded schemes. We are however confident that this difficulty may be overcome soon by devising new schemes based on the present dissipative coupling paradigm.
\section{Conclusion}
We have proposed a scheme for a single-photon source operating under weak nonlinearity and relying on a dissipative, one-directional coupling between two optical cavities. Such an approach enables single-photon generation under pulsed excitation, thus overcoming the main limitation of the unconventional photon blockade. We have proposed a three-cavity configuration that enables the one-directional coupling and may be realized on several platforms, including weakly nonlinear photonic crystal cavities and coupled ring resonators. The scheme could be generalized to several cascaded optical cavities, aiming at suppressing the $n$-photon probabilities to enhance the single-photon operation or pair production.
\begin{acknowledgments}
The authors acknowledge fruitful discussions with D. Gerace and M. Minkov.
\end{acknowledgments}
\section{Introduction}
\label{intro}
\rev{In this paper, we consider a Markov chain $(x_n,n\in \bN)$ with values in $E = \bR^N$, where $N\geq 1$ is an integer.
We assume that the probability transition kernel $P_\gamma$ is indexed by a scaling factor $\gamma$, which
belongs to some interval $(0,\gamma_0)$.
The aim of the paper is to analyze the long term behavior of the Markov chain in the regime where $\gamma$ is small.
The map
\begin{equation}
\label{eq:drift}
g_\gamma(x) \eqdef \int \frac{y-x}\gamma P_\gamma(x,dy)\,,
\end{equation}
assumed well defined for all $x\in \bR^N$, is called the \emph{drift} or the \emph{mean field}. The Markov chain admits the representation
\begin{equation}
x_{n+1} = x_n + \gamma\,g_\gamma(x_n) + \gamma\,U_{n+1}\,,\label{eq:decomp-markov-drift}
\end{equation}
where $U_{n+1}$ is a martingale increment noise \emph{i.e.,} the conditional expectation of $U_{n+1}$ given the past samples
is equal to zero. A case of interest in the paper is given by iterative models of the form:
\begin{equation}
x_{n+1} = x_n+\gamma \,h_\gamma (\xi_{n+1},x_n)\,,\label{eq:iterative-model}
\end{equation}
where $(\xi_n, n \in \bN^*)$ is a sequence of independent and identically distributed (iid)
random variables defined on a probability space $\Xi$ with probability law
$\mu$, and $\{ h_\gamma \}_{\gamma\in(0,\gamma_0)}$ is a family of maps
on $\Xi\times \bR^N\to\RN$. In this case,
the drift $g_\gamma$ has the form:
\begin{equation}
g_\gamma(x) = \int h_\gamma(s,x)\,\mu(ds)\,.\label{eq:drift-integral}
\end{equation}
Our results are as follows.
\begin{enumerate}
\item {\bf Dynamical behavior.} Assume that the drift $g_\gamma$ has the form~(\ref{eq:drift-integral}).
Assume that for $\mu$-almost all $s$ and for every sequence
$((\gamma_k,z_k)\in (0,\gamma_0)\times \RN, k\in\bN)$ converging to $(0,z)$,
$$
h_{\gamma_k}(s,z_k) \to H(s,z)
$$
where $H(s,z)$ is a subset of $\RN$ (the Euclidean distance between $h_{\gamma_k}(s,z_k)$ and the set $H(s,z)$
tends to zero as $k\to\infty$).
Denote by ${\mathsf x}_\gamma(t)$ the continuous-time stochastic process obtained by a
piecewise linear interpolation of the sequence $x_n$, where
the points $x_n$ are spaced by a fixed time step $\gamma$ on the positive real axis.
As $\gamma\to 0$, and assuming that $H(s,\cdot)$ is a proper and upper semicontinuous (usc) map with closed convex values,
we prove that ${\mathsf x}_\gamma$ converges
narrowly (in the topology of
uniform convergence on compact sets) to the set of solutions of the differential inclusion (DI)
\begin{equation}
\dot {\mathsf x}(t) \in \int H(s, {\mathsf x}(t))\mu(ds)\,,\label{eq:di-intro}
\end{equation}
where for every $x\in \RN$, $\int H(s, x)\mu(ds)$ is the \emph{selection integral} of $H(\,\cdot\,,x)$, which is defined as
the closure of the set of integrals of the form
$\int \varphi d\mu$ where $\varphi$ is any integrable function
such that $\varphi(s) \in H(s, x)$ for $\mu$-almost all $s$.
\smallskip
\item {\bf Tightness.} As the iterates are not \emph{a priori} supposed to be in a compact subset of $\RN$,
we investigate the issue of stability. We posit a verifiable \emph{Pakes-Has'minskii} condition on
the Markov chain $(x_n)$. The condition ensures that the iterates are stable in the sense that the random occupation measures
$$
\Lambda_n \eqdef \frac 1{n+1}\sum_{k=0}^n\delta_{x_k} \qquad (n\in \bN)
$$
(where $\delta_a$ stands for the Dirac measure at point $a$), form a tight family of random variables
on the Polish space of probability measures equipped with the L\'evy-Prokhorov distance.
The same criterion allows us to establish
the existence of invariant measures of the kernels $P_\gamma$, and the tightness of the
family of all invariant measures, for all $\gamma\in (0,\gamma_0)$.
As a consequence of Prokhorov's theorem, these invariant measures admit
cluster points as $\gamma\to 0$. Under a Feller assumption on the kernel $P_\gamma$,
we prove that every such cluster point is an invariant measure for
the DI (\ref{eq:di-intro}). Here, since the
flow generated by the DI is in general
set-valued, the notion of invariant measure is borrowed from \cite{fau-rot-13}.
\smallskip
\item {\bf Long-run convergence.}
Using the above results, we investigate the behavior of the iterates in the
asymptotic regime where $n\to\infty$ and, next, $\gamma\to 0$.
Denoting by $d(a,B)$ the distance between a point $a\in E$ and a subset $B\subset E$,
we prove that for all $\varepsilon>0$,
\begin{equation}
\lim_{\gamma\to 0}\limsup_{n\to\infty} \frac 1{n+1}\sum_{k=0}^{n}\text{Prob}\left(d(x_k,\text{BC})>\varepsilon\right) = 0\,,\label{eq:longrun}
\end{equation}
where $\text{BC}$ is the Birkhoff center of the flow induced by the DI (\ref{eq:di-intro}),
and $\text{Prob}$ stands for the probability.
We also characterize the ergodic behavior of these iterates. Setting $\overline x_n = \frac 1{n+1}\sum_{k=0}^{n}x_k$,
we prove that
\begin{equation}
\lim_{\gamma\to 0}\limsup_{n\to\infty} \text{Prob}\left(d(\overline x_n,\co(L_{\aver}))>\varepsilon\right)=0\,,\label{eq:longrun-ergodic}
\end{equation}
where $\co(L_{\aver})$ is the convex hull of
the limit set of the averaged flow associated with (\ref{eq:di-intro}) (see Section~\ref{subsec-inv-mes}).
\smallskip
\item {\bf Applications.} We investigate several application scenarios. We consider the problem
of non-convex stochastic optimization, and analyze the convergence of a constant step size proximal stochastic gradient algorithm.
The latter finds application in the optimization of deep neural networks \cite{lecun1998gradient}.
We show that the interpolated process converges narrowly to a DI, which we characterize.
We also provide sufficient conditions allowing to characterize the long-run behavior of the algorithm.
Second, we explain that our results apply to the characterization of the fluid limit
of a system of parallel queues. The model is introduced in \cite{ayesta2013scheduling,gas-gau-12}.
Whereas the narrow convergence of the interpolated process was studied in \cite{gas-gau-12}, less is known about the stability
and the long-run convergence of the iterates. We show how our results can be used to address this problem.
As a final example, we explain how our results can be used in the context of monotone operator theory,
in order to analyze a stochastic version of the celebrated proximal point algorithm.
The algorithm consists in replacing the usual monotone operator by an iid sequence of random monotone operators.
The algorithm has been studied in \cite{bia-16,bia-hac-16} in the context of decreasing step size.
Our analysis provides the tools to characterize its behavior in the constant step regime.
\end{enumerate}
\paragraph*{Paper organization.} In Section~\ref{sec:examples}, we introduce
the application examples. In Section~\ref{sec:literature}, we briefly
discuss the literature.}
Section~\ref{sec-bgrd} is devoted to the
mathematical background and to the notations. The main results are given in
Section~\ref{sec-main}. The tightness of the interpolated process as well as
its narrow convergence towards the solution set of the DI
(Th.~\ref{th:SA=wAPT}) are proven in Section~\ref{sec-prf-narrow}. Turning to
the Markov chain characterization, Prop.~\ref{prop:cluster}, who explores the
relations between the cluster points of the Markov chains invariant measures
and the invariant measures of the flow induced by the DI, is proven in
Section~\ref{sec-prf-cluster}. A general result describing the asymptotic
behavior of a functional of the iterates with a prescribed growth is provided
by Th.~\ref{the:CV}, and proven in Section~\ref{sec-prf-CV}. Finally, in
Section~\ref{sec-prf-asymptotics}, we show how the results pertaining to the
ergodic convergence and to the convergence of the iterates (Th.~\ref{cvg-CVSI}
and~\ref{cvg-XY} respectively) can be deduced from Th.~\ref{the:CV}.
\rev{Lastly, Section~\ref{sec:applis} is devoted to the application examples,
for which we prove that our hypotheses are satisfied.
}
\rev{\section{Examples}
\label{sec:examples}
\begin{example} \label{ex:optim}
{\sl Non-convex stochastic optimization.}
Consider the problem
\begin{equation}
\text{minimize } \bE_\xi(\ell(\xi,x)) + r(x)\text{ w.r.t }x\in \bR^N\,,\label{eq:pb-nonCVX}
\end{equation}
where $\ell(\xi,\,.\,)$ is a (possibly non-convex) differentiable function on $\bR^N\to \bR$ indexed by a random variable (r.v.) $\xi$,
$\bE_\xi$ represents the expectation w.r.t. $\xi$, and $r:\bR^N\to \bR$ is a convex function.
The problem typically arises in deep neural networks \cite{yoon2017combined,scardapane2017group}. In the latter case, $x$ represents the collection of weights of
the network, $\xi$ represents a random training example of the database, and $\ell(\xi,x)$ is a risk function which quantifies
the inadequacy between the sample response and the network response. Here, $r(x)$ is a regularization term which prevents
the occurrence of undesired solutions. A typical regularizer used in machine learning is the $\ell_1$-norm $\|x\|_1$, which promotes sparsity, or generalizations like $\|D x\|_1$, where $D$ is a matrix, which promote structured sparsity.
A popular algorithm used to find an approximate solution to Problem~(\ref{eq:pb-nonCVX}) is the proximal stochastic gradient
algorithm, which reads
\begin{equation}
x_{n+1} = \prox_{\gamma r}(x_n - \gamma \nabla \ell(\xi_{n+1},x_n))\,,\label{eq:prox-gradient}
\end{equation}
where $(\xi_n,n\in \bN^*)$ are i.i.d. copies of the r.v. $\xi$,
where $\nabla$ represents the gradient w.r.t. parameter $x$,
and where the proximity operator of $r$ is the mapping on $\bR^N\to\bR^N$ defined by
$$
\prox_{\gamma r} : x\mapsto \arg\min_{y\in \bR^N} \left(\gamma\, r(y) + \frac{\|y-x\|^2}2\right)\,.
$$
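For intuition, the sketch below (our own illustration, not tied to any specific network) runs iteration~\eqref{eq:prox-gradient} on a toy instance with the quadratic loss $\ell(\xi,x)=\frac12\|x-\xi\|^2$ and $r=\lambda\|\cdot\|_1$, whose proximity operator is the componentwise soft-thresholding; the value $\lambda=0.1$ and the data distribution are hypothetical choices.

```python
import numpy as np

def prox_l1(v, tau):
    # prox of tau*||.||_1: componentwise soft-thresholding
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

rng = np.random.default_rng(0)
gamma, lam = 0.05, 0.1
x = np.zeros(3)
for _ in range(4_000):
    xi = rng.normal([1.0, 0.0, -0.02], 0.1)   # sample; loss l(xi,x) = 0.5*||x-xi||^2
    x = prox_l1(x - gamma * (x - xi), gamma * lam)
print(np.round(x, 1))
# with a constant step, the iterates hover near the deterministic minimizer of
# 0.5*||x - m||^2 + lam*||x||_1, m = E[xi] = (1, 0, -0.02),
# i.e. the soft-threshold of m at level lam: (0.9, 0, 0)
```

Note that, in line with the constant-step regime studied here, the iterates do not converge pointwise but fluctuate in a small neighborhood of the minimizer.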
The drift $g_\gamma$ has the form~\eqref{eq:drift-integral} where
$h_\gamma(\xi,x) = \gamma^{-1}(\prox_{\gamma r}(x-\gamma \nabla \ell(\xi,x)) - x)$
and $\mu$ represents the distribution of the r.v. $\xi$.
Under adequate hypotheses, we prove that the interpolated process converges narrowly to the solutions to the DI
$$
\dot {\mathsf x}(t) \in -\nabla_x \bE_\xi(\ell(\xi,{\mathsf x}(t))) - \partial r({\mathsf x}(t))\,,
$$
where $\partial r$ represents the subdifferential of a function $r$, defined by
$$
\partial r(x) \eqdef \left\{u\in \RN\,:\, \forall y\in \RN,\, r(y)\geq r(x)+\ps{u,y-x}\right\}
$$
at every point $x\in \RN$ such that $r(x)<+\infty$, and $\partial r(x)=\emptyset$ elsewhere.
We provide a sufficient condition under which the iterates~(\ref{eq:prox-gradient}) satisfy the
Pakes-Has'minskii criterion, which, in turn, allows us to characterize the long-run behavior of the iterates.
\end{example}
\begin{example} \label{ex:fluid}
{\sl Fluid limit of a system of parallel queues with priority.}
We consider a time slotted queuing system composed of $N$ queues. The
following model is inspired from \cite{ayesta2013scheduling,gas-gau-12}. We
denote by $y^{k}_n$ the number of users in the queue $k$ at time $n$. We
assume that a random number $A^{k}_{n+1} \in \bN$ of users arrive in the queue
$k$ at time $n+1$. The queues are prioritized: the users of Queue $k$ can only
be served if all users of Queues $\ell$ for $\ell < k$ have been served.
Whenever the queue $k$ is non-empty and the queues $\ell$ are empty for all
$\ell<k$, one user leaves Queue $k$ with probability $\eta_k > 0$. Starting
with $y^k_0 \in \bN$, we thus have
\[
y^{k}_{n+1} = y^{k}_n + A^{k}_{n+1} -
B^{k}_{n+1}\mathbbm 1_{\{y^{k}_n>0,\,y_{n}^{k-1} =\cdots = y_{n}^1 = 0\}}\,,
\]
where $B^{k}_{n+1}$ is a Bernoulli r.v.~with parameter $\eta_k$, and where
$\mathbbm 1_S$ denotes the indicator of an event $S$, equal to one on that set and to
zero otherwise. We assume that the process
$((A^{1}_{n},\dots,A^{N}_n,B^{1}_n,\dots,B^{N}_n), {n\in\bN^*})$ is iid, and
that the random variables $A^{k}_{n}$ have finite second moments. We denote by
$\lambda_k\eqdef \bE(A^{k}_{n}) > 0$ the arrival rate in Queue $k$.
Given a scaling parameter $\gamma > 0$ which is assumed to be small, we are
interested in the \emph{fluid-scaled process}, defined as
$x^{k}_n = \gamma y^{k}_n$. This process is subject to the dynamics:
\begin{align}
\label{eq:queue}
x^{k}_{n+1} &= x^{k}_n + \gamma\,A^{k}_{n+1} - \gamma\,B^{k}_{n+1}
\mathbbm 1_{\{x^{k}_n>0,\,x^{k-1}_{n} =\cdots = x^{1}_{n} = 0\}}\,.
\end{align}
The Markov chain $x_n = (x^{1}_{n},\dots,x^{N}_n)$ admits the representation
\eqref{eq:decomp-markov-drift}, where the drift $g_\gamma$ is defined on
$\gamma\bN^N$, and is such that its $k$-th component $g_\gamma^{k}(x)$ is
\begin{equation}
\label{gk-queue}
g_\gamma^{k}(x) =
\lambda_k-\eta_k\mathbbm 1_{\{x^{k}>0,\,x^{k-1} =\cdots = x^{1} = 0\}} \,,
\end{equation}
for every $k\in \{1,\dots,N\}$ and every $x=(x^1,\dots,x^N)$ in $\gamma\bN^N$.
Introduce the vector
$\boldsymbol u_k\eqdef (\lambda_1,\dots,\lambda_{k-1},\lambda_k-\eta_k,\lambda_{k+1},
\dots,\lambda_N)$ for all $k$. Let $\bR_+\eqdef[0,+\infty)$, and define the
set-valued map on $\bR_+^N$
\begin{equation}
\label{Hqueue}
{\mathsf H}(x) \eqdef \left\{
\begin{array}[h]{ll}
\boldsymbol u_1 & \text{ if }x^1>0 \\
\co (\boldsymbol u_1,\dots,\boldsymbol u_k) & \text{ if }x^1 =\cdots = x^{k-1} = 0 \
\text{and} \ x^{k}>0\, ,
\end{array}
\right.
\end{equation}
where $\co$ is the convex hull. Clearly, $g_\gamma(x)\in {\mathsf H}(x)$ for every
$x\in\gamma\bN^N$. In~\cite[\S~3.2]{gas-gau-12}, it is shown that the DI
$\dot {\mathsf x}(t) \in {\mathsf H}({\mathsf x}(t))$ has a unique solution. Our results imply the
narrow convergence of the interpolated process to this solution, hence
recovering a result of \cite{gas-gau-12}. More importantly, if the following
stability condition
\begin{equation}
\label{eq:stability-queue}
\sum_{k=1}^N \frac{\lambda_k}{\eta_k} < 1
\end{equation}
holds, our approach allows us to establish the tightness of the occupation measures
of the iterates $x_n$, and to characterize the long-run behavior of these
iterates. We prove that in the long-run, the sequence $(x_n)$ converges to zero
in the sense of~\eqref{eq:longrun}. The ergodic convergence in the sense
of~\eqref{eq:longrun-ergodic} can be also established with a small extra
effort.
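A direct simulation of the dynamics~\eqref{eq:queue} illustrates this behavior. The sketch below is our own illustration: arrivals are taken Poisson and the rates are chosen so that the stability condition~\eqref{eq:stability-queue} holds ($\sum_k\lambda_k/\eta_k=2/3$); the fluid-scaled state then drains toward the origin.

```python
import numpy as np

rng = np.random.default_rng(1)
lam = np.array([0.2, 0.3])      # arrival rates lambda_k
eta = np.array([0.6, 0.9])      # service probabilities; sum(lam/eta) = 2/3 < 1
gamma = 1e-2

y = np.array([500, 500])        # integer user counts y_n; fluid state x_n = gamma*y_n
for _ in range(200_000):        # simulated horizon: gamma*n = 2000 time units
    B = rng.random(2) < eta     # Bernoulli(eta_k) service opportunities
    if y[0] > 0:                # strict priority: queue 1 is served first
        y[0] -= int(B[0])
    elif y[1] > 0:              # queue 2 is served only when queue 1 is empty
        y[1] -= int(B[1])
    y = y + rng.poisson(lam)    # arrivals A^k_{n+1}

x = gamma * y                   # fluid-scaled state
print(x)                        # both components end up close to the origin
```

In the long run, the unscaled counts $y_n$ remain of order one, so $x_n=\gamma y_n$ is of order $\gamma$, consistently with the convergence to zero announced above.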
\end{example}
\begin{example}
{\sl Random monotone operators.}
As a second application, we consider the problem of finding a zero of a maximal monotone operator
${\mathsf A}:\RN\to 2^\RN$:
\begin{equation}
\label{eq:find-zero}
\text{Find }x\text{ s.t. }0\in A(x)\,.
\end{equation}
We recall that a set-valued map ${\mathsf A}:\RN\to 2^\RN$ is said monotone if for every $x$, $y$ in $\RN$,
and every $u\in {\mathsf A}(x)$, $v\in {\mathsf A}(y)$, $\ps{u-v,x-y}\geq 0$.
The domain and the graph of
${\mathsf A}$ are the respective subsets of $\RN$ and $\RN\times \RN$ defined as
$\dom({\mathsf A}) \eqdef \{ x \in \RN \, : \, {\mathsf A}(x) \neq \emptyset \}$,
and $\graph({\mathsf A}) \eqdef \{ (x,y) \in \RN\times \RN \, : \, y \in {\mathsf A}(x) \}$.
We denote by $\zer({\mathsf A}) \eqdef \{x\in \RN\,:\, 0\in {\mathsf A}(x)\}$ the set of zeroes of ${\mathsf A}$.
The operator ${\mathsf A}$ is proper if $\dom({\mathsf A})\neq\emptyset$.
A proper monotone operator
${\mathsf A}$ is said maximal if its graph $\graph({\mathsf A})$ is a maximal element
in the inclusion ordering.
Denote by $I$ the identity operator, and by ${\mathsf A}^{-1}$ the inverse of the
operator ${\mathsf A}$, defined by the fact that $(x,y) \in \graph({\mathsf A}^{-1})
\Leftrightarrow (y,x) \in \graph({\mathsf A})$. It is well known that ${\mathsf A}$ is maximal monotone if
and only if, for all $\gamma > 0$, the \emph{resolvent}
$( I + \gamma {\mathsf A} )^{-1}$ is a contraction defined on the whole space
(in particular, it is single-valued).
Problem~\eqref{eq:find-zero} arises
in several applications such as convex optimization, variational inequalities,
or game theory.
The celebrated \emph{proximal point algorithm} \cite{rockafellar1976monotone}
generates the sequence $(u_n, n\in \bN)$ defined recursively as
$u_{n+1} = (I+\gamma {\mathsf A})^{-1}(u_n)$. The latter sequence converges to a zero of the operator ${\mathsf A}$, whenever such a zero exists.
Recent works (see \cite{bia-hac-16} and references therein) have been devoted to the special case
where the operator ${\mathsf A}$ is defined as the following selection integral
$$
{\mathsf A}(x) = \int A(s,x)\mu(ds)\,,
$$
where $\mu$ is a probability on $\Xi$ and where $\{A(s, \cdot), s \in \Xi\}$
is a family of maximal monotone operators.
In this context, a natural algorithm for solving~(\ref{eq:find-zero}) is
\begin{equation}
\label{eq:ppa}
x_{n+1} = (I+\gamma A(\xi_{n+1},\,.\,))^{-1}(x_n)
\end{equation}
where $(\xi_n, n\in \bN^*)$ is an iid sequence of r.v. whose law coincides
with $\mu$.
The asymptotic behavior of~\eqref{eq:ppa} is analyzed in \cite{bia-16} under the assumption
that the step size $\gamma$ is decreasing with $n$. On the other hand, the results of the present paper
apply to the case where $\gamma$ is a constant which does not depend on $n$.
Here, the drift $g_\gamma$ has the form \eqref{eq:drift-integral} where the map
$-h_\gamma(s,x) = \gamma^{-1}( x-(I+\gamma A(s, \cdot))^{-1}(x))$ is the so-called \emph{Yosida regularization}
of the operator $A(s,\cdot)$ at $x$. As $\gamma\to 0$, it is well known that
for every $x\in\dom(A(s,\cdot))$, $-h_\gamma(s,x)$ converges to the element of
least norm in $A(s,x)$ \cite{bau-com-livre11}.
Thanks to our results, it can be shown that under some hypotheses, the interpolated process converges narrowly to the unique solution
to the DI
\begin{equation}
\dot {\mathsf x}(t) \in -\int A(s, {\mathsf x}(t))\mu(ds)\,,\label{eq:di-mm}
\end{equation}
and, under the Pakes-Has'minskii condition, that the iterates $x_n$ converge in the long run to the zeroes of ${\mathsf A}$.
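To make this concrete, consider the toy case (our own illustration) $A(s,x)=x-s$ on $\bR$, for which the resolvent is available in closed form, $(I+\gamma A(s,\cdot))^{-1}(x)=(x+\gamma s)/(1+\gamma)$, and ${\mathsf A}(x)=x-\int s\,\mu(ds)$, so that $\zer({\mathsf A})$ reduces to the mean of $\mu$.

```python
import numpy as np

rng = np.random.default_rng(2)
m, gamma = 1.5, 0.05           # here zer(A) = {m}, since A(x) = x - E[xi] = x - m
x, avg = 0.0, 0.0
for n in range(20_000):
    xi = rng.normal(m, 1.0)                # xi_{n+1} drawn from mu = N(m, 1)
    x = (x + gamma * xi) / (1.0 + gamma)   # x_{n+1} = (I + gamma*A(xi_{n+1},.))^{-1}(x_n)
    avg += (x - avg) / (n + 1)             # running Cesaro mean of the iterates
print(x, avg)
# x fluctuates around m with a spread of order sqrt(gamma/2),
# while the Cesaro mean concentrates near m = 1.5
```

This reproduces, on a one-dimensional example, the long-run behavior described above: the iterates hover near $\zer({\mathsf A})$ and their Ces\`aro means converge there as $\gamma\to 0$.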
\end{example}
}
\rev{
\section{About the Literature}
\label{sec:literature}
When the drift $g_\gamma$ does not depend on $\gamma$ and is supposed to be a Lispchitz continuous map,
the long term behavior of the iterates $x_n$ in the small step size regime has been studied in the treatises
\cite{ben-met-pri-livre90,ben-(cours)99,kus-yin-(livre)03,bor-livre08,ben-hir-aap99} among
others. In particular, narrow convergence of the interpolated process to the solution of an
Ordinary Differential Equation (ODE) is established. The authors of \cite{for-pag-99}
introduce a Pakes-Has'minskii criterion to study the long-run behavior of the iterates.
The recent interest in the stochastic approximation when the ODE is replaced
with a differential inclusion dates back to \cite{ben-hof-sor-05}, where
decreasing steps were considered. A similar setting is considered
in~\cite{fau-rot-10}. A Markov noise was considered in the recent manuscript
\cite{yaj-bha-(arxiv)16}. We also mention \cite{fau-rot-13}, where the ergodic
convergence is studied when the so called weak asymptotic pseudo trajectory
property is satisfied.
The case where the DI is built from maximal monotone operators is studied in
\cite{bia-16} and \cite{bia-hac-16}.
Differential inclusions arise in many applications, which include
game theory (see \cite{ben-hof-sor-05,ben-hof-sor-(partII)06},
\cite{rot-san-siam13} and the references therein), convex optimization \cite{bia-hac-16},
queuing theory or wireless communications, where stochastic approximation algorithms
with non continuous drifts are frequently used, and can be modelled by
differential inclusions~\cite{gas-gau-12}.
Differential inclusions with a constant step were studied in \cite{rot-san-siam13}.
The paper \cite{rot-san-siam13} extends previous results of \cite{ben-sch-00}
to the case of a DI. The key result established in \cite{rot-san-siam13} is that
the cluster points of the collection of invariant measures of the Markov chain
are invariant for the flow associated with the DI.
Prop. \ref{prop:cluster} of the present paper restates this result in a more general setting
and using a shorter proof, which we believe is of independent interest.
Moreover, the so-called GASP model studied by \cite{rot-san-siam13}
does not cover certain applications, such as the ones provided in Section~\ref{sec:examples}.
In addition, \cite{rot-san-siam13} focuses on the case where the space is compact, which
circumvents the issue of stability and simplifies the mathematical arguments.
However, in many situations, the compactness assumption does not hold, and sufficient conditions
for stability need to be formulated.
Finally, we characterize the asymptotic behavior of the iterates $(x_n)$
(as well as their Ces\`aro means) in the doubly asymptotic regime where $n\to\infty$ then $\gamma\to 0$.
Such results are not present in \cite{rot-san-siam13}.
}
\section{Background}
\label{sec-bgrd}
\subsection{General Notations}
The notation $C(E,F)$ is used to denote the set of continuous functions from
the topological space $E$ to the topological space $F$. The notation $C_b(E)$
stands for the set of bounded functions in $C(E,\bR)$. We use the conventions
$\sup \emptyset = -\infty$ and $\inf \emptyset = +\infty$. Notation
$\lfloor x\rfloor$ stands for the integer part of $x$.
Let $(E,d)$ be a metric space. For every $x\in E$ and $S\subset E$, we define
$d(x,S)=\inf\{d(x,y):y\in S\}$. We say that a sequence $(x_n, n\in\bN)$ on $E$
converges to $S$, noted $x_n\to_n S$ or simply $x_n\to S$, if $d(x_n,S)$ tends
to zero as $n$ tends to infinity. For $\varepsilon > 0$, we define the
$\varepsilon$-neighborhood of the set $S$ as $S_\varepsilon \eqdef \{x\in
E:d(x,S)<\varepsilon\}$. The closure of $S$ is denoted by $\overline S$, and
its complementary set by $S^c$.
The characteristic function of $S$ is the function $\mathbbm 1_S:E\to\{0,1\}$ equal to
one on $S$ and to zero elsewhere.
Let $E={\mathbb R}^N$ for some integer $N\geq 1$.
We endow the space $C(\bR_+,E)$ with the topology of uniform convergence on
compact sets.
The space $C(\bR_+,E)$ is metrizable by the distance $d$ defined for every $\mathsf x,\mathsf y\in C(\bR_+,E)$ by
\begin{equation}
d(\mathsf x,\mathsf y)\eqdef\sum_{n\in\mathbb N}2^{-n}
\left(1\wedge\!\!
\sup_{t\in [0,n]}\|\mathsf x(t)-\mathsf y(t)\|\right)\,,
\label{eq:d}
\end{equation}
where $\|\cdot\|$ denotes the Euclidean norm in $E$.
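For concreteness, the distance~\eqref{eq:d} can be approximated numerically by truncating the series and sampling each trajectory on a finite grid; the sketch below (our own illustration) does so for trajectories given as callables on $\bR_+\to\bR^N$.

```python
import numpy as np

def dist(x, y, n_max=10, pts_per_unit=50):
    """Truncated approximation of the metric (eq:d): the series is cut at
    n_max and each supremum over [0, n] is taken over a finite grid."""
    total = 0.0
    for n in range(n_max + 1):
        grid = np.linspace(0.0, n, n * pts_per_unit + 1)
        sup = max(np.linalg.norm(x(t) - y(t)) for t in grid)
        total += 2.0 ** (-n) * min(1.0, sup)   # 2^{-n} (1 ∧ sup)
    return total

f = lambda t: np.array([np.sin(t)])
g = lambda t: np.array([0.0])
print(dist(f, f))        # -> 0.0 (identical trajectories)
print(dist(f, g) > 0)    # -> True
```

Since every term of the series is bounded by $2^{-n}$, the truncation error is at most $2^{-n_{\max}}$.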
\subsection{Random Probability Measures}
Let $E$ denote a metric space and let $\mcB(E)$ be its Borel $\sigma$-field.
We denote by $\cM(E)$ the set of probability measures on $(E,\mcB(E))$.
The support $\support(\nu)$ of a measure $\nu\in \cM(E)$
is the smallest closed set $G$ such that $\nu(G) = 1$.
We endow $\cM(E)$ with the topology of narrow convergence:
a sequence $(\nu_n, n\in \bN)$ on $\cM(E)$ converges to
a measure $\nu\in\cM(E)$ (denoted $\nu_n\Rightarrow \nu$) if for every
$f\in C_b(E)$, $\nu_n(f)\to\nu(f)$, where $\nu(f)$ is a shorthand for
$\int f(x) \nu(dx)$.
If $E$ is a Polish space, $\cM(E)$ is metrizable by the L\'evy-Prokhorov distance,
and is a Polish space as well.
A subset $\cG$ of $\cM(E)$ is said tight if for every $\varepsilon>0$, there exists a compact
subset $K$ of $E$ such that for all $\nu\in \cG$, $\nu(K)>1-\varepsilon$.
By Prokhorov's theorem, $\cG$ is tight if and only if it is relatively compact in $\cM(E)$.
We denote by $\delta_a$ the Dirac measure at the point $a\in E$.
If $X$ is a random variable on some measurable space $(\Omega,\mcF)$ into $(E,\mcB(E))$,
we denote by $\delta_X:\Omega\to\cM(E)$ the measurable mapping defined by $\delta_X(\omega)=\delta_{X(\omega)}$.
If $\Lambda:(\Omega,\mcF)\to (\cM(E),\mcB(\cM(E)))$ is a random variable on the set of probability measures,
we denote by $\bE\Lambda$ the probability measure defined by
$
(\bE\Lambda)(f)\eqdef\bE(\Lambda(f))\,,
$
for every $f\in C_b(E)$.
\subsection{Set-Valued Mappings and Differential Inclusions}
A set-valued mapping ${\mathsf H} : E\rightrightarrows F$ is a function on $E$ into the
set $2^F$ of subsets of $F$. The graph of ${\mathsf H}$ is
$\graph({\mathsf H})\eqdef \{(a,b)\in E\times F:b\in {\mathsf H}(a)\}$. The domain of ${\mathsf H}$ is
$\dom({\mathsf H}) \eqdef \{ a \in E \, : \, {\mathsf H}(a) \neq \emptyset \}$. The mapping
${\mathsf H}$ is said to be proper if $\dom({\mathsf H})$ is non-empty. We say that ${\mathsf H}$ is
single-valued if ${\mathsf H}(a)$ is a singleton for every $a\in E$ (in which case we
handle ${\mathsf H}$ simply as a function ${\mathsf H}:E\to F$).
Let ${\mathsf H}: E\rightrightarrows E$ be a set-valued map on $E=\bR^N$, where $N$ is a positive integer.
Consider the
differential inclusion:
\begin{equation}
\dot {\mathsf x}(t)\in {\mathsf H}({\mathsf x}(t))\,.\label{eq:di}
\end{equation}
We say that an absolutely continuous mapping ${\mathsf x}:\bR_+\to E$
is a solution to the differential inclusion
with initial condition $a\in E$
if ${\mathsf x}(0)=a$ and if (\ref{eq:di}) holds for almost every $t\in \bR_+$.
We denote by
$$
\Phi_{{\mathsf H}}:E\rightrightarrows C(\bR_+, E)
$$
the set-valued mapping such that for every $a\in E$, $\Phi_{{\mathsf H}}(a)$
is the set of solutions to~(\ref{eq:di}) with initial condition $a$.
We refer to $\Phi_{\mathsf H}$ as the evolution system induced by ${\mathsf H}$.
For every subset $A\subset E$, we define $\Phi_{\mathsf H}(A) = \bigcup_{a\in A}\Phi_{\mathsf H}(a)$.
A mapping ${\mathsf H}:E\rightrightarrows E$ is said to be \emph{upper
semicontinuous} (usc) at a point $a_0 \in E$ if for every open set $U$
containing ${\mathsf H}(a_0)$, there exists $\eta>0$, such that for every
$a\in E$, $\|a-a_0\|<\eta$ implies ${\mathsf H}(a)\subset U$.
It is said to be usc if it is usc at every point
\cite[Chap.~1.4]{aub-cel-(livre)84}.
In the particular case where ${\mathsf H}$ is usc with nonempty compact convex values
and satisfies the condition
\begin{equation}
\label{eq:lin-growth}
\exists c>0,\ \forall a\in E,\ \sup\{\|b\|\,:b\in {\mathsf H}(a)\}\leq c(1+\|a\|)\ ,
\end{equation}
then, $\dom(\Phi_{\mathsf H})=E$, see \emph{e.g.} \cite{aub-cel-(livre)84},
and moreover, $\Phi_{\mathsf H}(E)$ is closed in the metric space $(C(\bR_+, E), d)$.
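As a simple illustration, which we give for illustrative purposes only and do not use in the sequel, take
$N=1$ and ${\mathsf H}(a)=-\mathrm{sign}(a)$, where $\mathrm{sign}(a)\eqdef\{a/|a|\}$
for $a\neq 0$ and $\mathrm{sign}(0)\eqdef[-1,1]$. This mapping is usc with
nonempty compact convex values and satisfies~\eqref{eq:lin-growth}. For every
$a\in E$, the differential inclusion~(\ref{eq:di}) admits the unique solution
$$
{\mathsf x}(t)=\mathrm{sgn}(a)\max(|a|-t,0)\,,
$$
with the convention $\mathrm{sgn}(0)=0$: the solution reaches zero in finite
time and remains there, which is possible because $0\in {\mathsf H}(0)$.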
\subsection{Invariant Measures of Set-Valued Evolution Systems}
\label{subsec-inv-mes}
Let $(E,d)$ be a metric space.
We define the shift operator $\Theta:C(\bR_+,E)\to
C(\bR_+,C(\bR_+,E))$ s.t. for every $\mathsf x\in C(\bR_+,E)$,
$\Theta(\mathsf x) : t\mapsto \mathsf x(t+\,\cdot\,)$.
Consider a set-valued mapping $\Phi:E\rightrightarrows C(\bR_+,E)$.
When $\Phi$ is single-valued (\emph{i.e.}, for all $a\in E$, $\Phi(a)$
is a continuous function), a measure $\pi\in \cM(E)$ is called an
\emph{invariant measure} for $\Phi$, or $\Phi$-invariant, if for all
$t>0$, $\pi=\pi\Phi_t^{-1}$, where $\Phi_t:E\to E$ is the map defined
by $\Phi_t(a) = \Phi(a)(t)$. For all $t\geq 0$, we define the
projection $p_t:C(\bR_+,E)\to E$ by $p_t({\mathsf x}) = {\mathsf x}(t)$.
The definition can be extended as follows to the case where $\Phi$ is set-valued.
\begin{definition}
\label{def-inv}
A probability measure $\pi\in\cM(E)$ is said to be invariant for $\Phi$
if there exists $\upsilon\in\cM(C(\bR_+,E))$ s.t.
\begin{enumerate}[(i)]
\item\label{inv-support} $\support(\upsilon)\subset \overline{\Phi(E)}$\,;
\item\label{inv-Theta} $\upsilon$ is $\Theta$-invariant\,;
\item\label{inv-margin} $\upsilon p_0^{-1}=\pi$.
\end{enumerate}
\end{definition}
When $\Phi$ is single-valued, both definitions coincide.
The above definition is borrowed from \cite{fau-rot-13} (see also
\cite{mil-aki-99}).
Note that $\overline{\Phi(E)}$ can be replaced by $\Phi(E)$ whenever the latter
set is closed (sufficient conditions for this have been provided above).
The limit set of a function ${\mathsf x} \in C(\bR_+, E)$ is defined as
\[
L_{\mathsf x} \eqdef \bigcap_{t\geq 0}\overline{{\mathsf x}([t,+\infty))} \,.
\]
It coincides with the set of points of the form $\lim_n {\mathsf x}(t_n)$ for some
sequence $t_n\to\infty$. Consider now a set-valued mapping
$\Phi:E\rightrightarrows C(\bR_+,E)$. The limit set $L_{\Phi(a)}$ of a point
$a \in E$ for $\Phi$ is
\[
L_{\Phi(a)} \eqdef \bigcup_{{\mathsf x} \in \Phi(a)} L_{\mathsf x} \, ,
\]
and $L_\Phi \eqdef \bigcup_{a\in E}L_{\Phi(a)}$.
A point $a$ is said to be recurrent for $\Phi$ if $a \in L_{\Phi(a)}$.
The Birkhoff center of $\Phi$ is the closure
of the set of recurrent points
\[
\text{BC}_{\Phi} \eqdef
\overline{\{ a \in E \, : \, a \in L_{\Phi(a)} \}}\, .
\]
The following result, established in \cite{fau-rot-13}
(see also \cite{aub-fra-las-91}), is a consequence of the celebrated
recurrence theorem of Poincar\'e.
\begin{proposition}
\label{poincare}
Let $\Phi:E\rightrightarrows C(\bR_+,E)$. Assume that $\Phi(E)$ is closed. Let $\pi\in \cM(E)$ be an invariant measure for $\Phi$. Then,
$\pi(\text{BC}_\Phi) = 1$.
\end{proposition}
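To illustrate Definition~\ref{def-inv} and Proposition~\ref{poincare} on an
elementary case (given here for illustration purposes only), consider $N=1$ and
the set-valued map ${\mathsf H}(a)=-\mathrm{sign}(a)$, where
$\mathrm{sign}(0)\eqdef[-1,1]$ and $\mathrm{sign}(a)\eqdef\{a/|a|\}$ otherwise;
the set $\Phi_{\mathsf H}(E)$ is closed by the discussion above. Every solution
of the induced differential inclusion reaches zero in finite time and remains
there, so that $L_{\Phi_{\mathsf H}(a)}=\{0\}$ for every $a$ and
$\text{BC}_{\Phi_{\mathsf H}}=\{0\}$. The measure $\pi=\delta_0$ is invariant:
the conditions of Definition~\ref{def-inv} are met with
$\upsilon\eqdef\delta_{\mathsf z}$, where ${\mathsf z}\equiv 0$. Conversely, by
Proposition~\ref{poincare}, every invariant measure $\pi$ satisfies
$\pi(\{0\})=1$, hence $\delta_0$ is the unique invariant measure in this example.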
We denote by $\cI(\Phi)$ the subset of $\cM(E)$ formed by all invariant
measures for $\Phi$. We define
$$
\mcI(\Phi) \eqdef \{ \mathfrak{m}\in\cM(\cM(E))\,:\,\forall A\in \mcB(\cM(E)),\, \cI(\Phi)\subset A\,\Rightarrow\,\mathfrak{m}(A)=1\}\,.
$$
We define the mapping $\aver:C(\bR_+,E)\to C(\bR_+,E)$ by
$$
\aver({\mathsf x}) :t\mapsto \frac 1t \int_0^t {\mathsf x}(s) \, ds \,,
$$
and $\aver({\mathsf x})(0)={\mathsf x}(0)$. Finally, we define
$\aver(\Phi) : E \rightrightarrows C(\bR_+, E)$ by
$\aver(\Phi)(a) = \{ \aver({\mathsf x}) \, : \, {\mathsf x} \in \Phi(a) \}$ for
each $a\in E$.
\subsection{The Selection Integral}
Let $(\Xi, \mcG,\mu)$ denote an arbitrary probability space.
For $1 \leq p < \infty$, we denote by
${\mathcal L}^p(\Xi, {\mcG}, \mu; E)$ the Banach space of the measurable
functions $\varphi : \Xi \to E$ such that $\int \| \varphi \|^p d\mu <
\infty$.
For any set-valued mapping $G:\Xi\rightrightarrows E$, we define the set
\[
\Selec^p_G \eqdef
\{ \varphi \in {\mathcal L}^p(\Xi, {\mcG}, \mu; E) \, : \,
\varphi(\xi) \in G(\xi) \ \mu\text{-a.e.} \} \, .
\]
Any element of $\Selec^1_G$ is referred to as an \emph{integrable selection}.
If $\Selec^1_G \neq \emptyset$, the mapping $G$ is said to be
integrable. The \emph{selection integral} \cite{molchanov2006theory} of $G$ is the set
\[
\int G d\mu \eqdef \overline{\left\{ \int_\Xi \varphi d\mu \ : \
\varphi \in \Selec^1_G \right\}} \,.
\]
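As a simple example (for illustration only), let $E=\bR$ and
$G(\xi)\eqdef[-g(\xi),g(\xi)]$ for some integrable function $g:\Xi\to\bR_+$.
Then $\Selec^1_G$ is the set of integrable functions $\varphi$ such that
$|\varphi|\leq g$ $\mu$-a.e., and a direct computation yields
$$
\int G\, d\mu = \left[-\int g\, d\mu\,,\ \int g\, d\mu\right]\,.
$$
In particular, the selection integral of an integrable set-valued mapping with
convex values is a convex set; it is moreover closed by definition.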
\section{Main Results}
\label{sec-main}
\subsection{Dynamical Behavior}
\label{rm-general}
From now on to the end of this paper, we set $E\eqdef\bR^N$ where $N$
is a positive integer. \rev{Choose $\gamma_0>0$. For every
$\gamma\in (0,\gamma_0)$, we introduce a probability transition
kernel $P_\gamma:E\times\mcB(E)\to [0,1]$.}
Let $(\Xi, \mcG,\mu)$ be an arbitrary probability space.
\begin{assumption*}[RM] There exist a $\mcG\otimes\mcB(E)/\mcB(E)$-measurable map
$h_\gamma: \Xi\times E \to E$
and a set-valued map $H: \Xi\times E\rightrightarrows E$ such that:
\begin{enumerate}[i)]
\item \label{hyp:drift} \rev{For every $x\in E$, $$\int \frac{y-x}\gamma P_\gamma(x,dy) = \int h_\gamma(s,x)\mu(ds)\,.$$}
\item\label{hyp:RM-cvg}
For $\mu$-almost every $s$ and for every converging sequence
$(u_n,\gamma_n)\to (u^\star,0)$ on $E\times (0,\gamma_0)$,
$$h_{\gamma_n}(s,u_n)\to H(s,u^\star)\,.$$
\item For $\mu$-almost every $s$, $H(s,\cdot)$ is proper, usc,
with closed convex values.
\item\label{hyp:RM-integ}
For every $x\in E$, $H(\cdot, x)$ is $\mu$-integrable. We set
${\mathsf H}(x)\eqdef \int H(s, x)\, \mu(ds)$.
\item \label{hyp:flot-borne} For every $T>0$ and every compact set
$K\subset E$,
$$
\sup\{\|{\mathsf x}(t)\|: t\in [0,T], {\mathsf x}\in \Phi_{{\mathsf H}}(a), a\in K\}<\infty\,.
$$
\item \label{hyp:RM-moments} \rev{For every compact set $K\subset E$, there exists $\epsilon_{K}>0$
such that
\begin{align}
&\sup_{x\in K}\sup_{0<\gamma<\gamma_0}
\int\left\|\frac{y-x}\gamma\right\|^{1+\epsilon_K} P_\gamma(x,dy) <\infty\,,
\label{eq:moment}\\
&\sup_{x\in K}\sup_{0<\gamma<\gamma_0} \int \left\|h_\gamma(s,x)\right\|^{1+\epsilon_K}\mu(ds)<\infty\,.
\label{eq:moment-bis}
\end{align}}
\end{enumerate}
\end{assumption*}
\rev{Assumption \ref{hyp:drift} implies that the drift has the form~(\ref{eq:drift}).
As mentioned in the introduction, this is for instance useful in the case of iterative Markov models
such as (\ref{eq:iterative-model}).}
Assumption \ref{hyp:flot-borne} implicitly requires that the set of solutions
$\Phi_{{\mathsf H}}(a)$ is non-empty for any value of $a$. It holds true if,
\emph{e.g.}, the linear growth condition~\eqref{eq:lin-growth} on ${\mathsf H}$ is
satisfied.
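To fix ideas, the following one-dimensional example, which is purely
illustrative and not needed in the sequel, shows the convergence mechanism of
Assumptions~\ref{hyp:RM-cvg}--\ref{hyp:RM-integ}. Assume for this example that
$\Xi=\bR_+$ with $\int s\,\mu(ds)<\infty$, let $N=1$, and set
$$
h_\gamma(s,x)\eqdef -s\tanh(x/\gamma)\,,\qquad
H(s,x)\eqdef -s\,\mathrm{sign}(x)\,,
$$
where $\mathrm{sign}(0)\eqdef[-1,1]$ and $\mathrm{sign}(x)\eqdef\{x/|x|\}$
otherwise. If $(u_n,\gamma_n)\to(u^\star,0)$ with $u^\star\neq 0$, then
$h_{\gamma_n}(s,u_n)\to -s\,\mathrm{sgn}(u^\star)$, while for $u^\star=0$ one
has $h_{\gamma_n}(s,u_n)\in[-s,s]=H(s,0)$ for every $n$; in both cases the set
convergence required by Assumption~\ref{hyp:RM-cvg} holds. Each $H(s,\cdot)$ is
proper and usc with compact convex values, $|h_\gamma(s,x)|\leq s$ uniformly in
$(\gamma,x)$, and ${\mathsf H}(x)=-\bar s\,\mathrm{sign}(x)$ with
$\bar s\eqdef\int s\,\mu(ds)$.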
On the canonical space $\Omega\eqdef E^{\mathbb N}$ equipped with the
$\sigma$-algebra $\mcF\eqdef \mcB(E)^{\otimes \bN}$,
we denote by $X:\Omega\to E^{\bN}$ the canonical process
defined by $X_n(\omega)=\omega_n$ for every $\omega=(\omega_k,k\in \bN)$ and every $n\in \bN$, where
$X_n(\omega)$ is the $n$-th coordinate of $X(\omega)$.
For every $\nu\in \cM(E)$ and $\gamma\in (0,\gamma_0)$,
we denote by $\bP^{\nu,\gamma}$ the unique probability measure on $(\Omega,\mcF)$
such that $X$ is a homogeneous Markov chain with initial distribution $\nu$ and transition kernel $P_\gamma$.
We denote by $\bE^{\nu,\gamma}$ the corresponding expectation. When $\nu=\delta_a$ for some $a\in E$,
we shall prefer the notation $\bP^{a,\gamma}$ to $\bP^{\delta_a,\gamma}$.
The set $C(\bR_+,E)$ is equipped with the topology of uniform convergence
on the compact intervals, which is known to be compatible with the distance
$d$ defined by~\eqref{eq:d}. For every $\gamma>0$,
we introduce the measurable map ${\mathsf X}_\gamma$ from
$(\Omega,\mcF)$ to $(C(\bR_+,E),\mcB(C(\bR_+,E)))$, such that for every
$x=(x_n,n\in \bN)$ in $\Omega$,
$$
{\mathsf X}_\gamma(x)\,:t \mapsto x_{\lfloor \frac t\gamma\rfloor} + (t/\gamma-\lfloor t/\gamma\rfloor)(x_{\lfloor \frac t\gamma\rfloor+1}-x_{\lfloor \frac t\gamma\rfloor})
\,.
$$
The random variable ${\mathsf X}_\gamma$ will be referred to as the
\emph{linearly interpolated process}.
On the space $(C(\bR_+,E),\mcB(C(\bR_+,E)))$, the distribution of the
r.v.~${\mathsf X}_\gamma$ is $\bP^{\nu,\gamma}{\mathsf X}_\gamma^{-1}$.
\begin{theorem}
\label{th:SA=wAPT}
Suppose that Assumption (RM) is satisfied. Then, for every compact set
$K\subset E$, the family
$\{ \bP^{a,\gamma}{\mathsf X}_\gamma^{-1}:a\in K,0<\gamma<\gamma_0 \}$ is tight.
Moreover, for every $\varepsilon>0$,
\[
\sup_{a\in K}\,\bP^{a,\gamma}\left(d({\mathsf X}_\gamma,\Phi_{\mathsf H}(K))>\varepsilon\right)\xrightarrow[\gamma\to 0]{}0\,.
\]
\end{theorem}
\subsection{Convergence Analysis}
For each $\gamma\in (0,\gamma_0)$, we denote by
$$
\cI(P_\gamma) \eqdef \{\pi\in\cM(E)\,:\,\pi = \pi P_\gamma\}
$$
the set of invariant probability measures of $P_\gamma$. Letting
$\cP = \{P_\gamma,0<\gamma<\gamma_0\}$, we define
$\cI(\cP) = \bigcup_{\gamma\in (0,\gamma_0)}\cI(P_\gamma)$.
We say that a measure $\nu\in \cM(E)$ is a cluster point of $\cI(\cP)$
as $\gamma\to 0$, if there exists a sequence $\gamma_j\to 0$ and a sequence
of measures $(\pi_j,j\in \bN)$ s.t. $\pi_j\in \cI(P_{\gamma_j})$ for all $j$,
and $\pi_j\Rightarrow \nu$.
We define
$$
\mcI(P_\gamma) \eqdef \{\mathfrak{m}\in\cM(\cM(E))\,:\, \support(\mathfrak{m})\subset \cI(P_\gamma)\}\,,
$$
and $\mcI(\cP) =\bigcup_{\gamma\in (0,\gamma_0)}\mcI(P_\gamma)$.
We say that a measure $\mm\in \cM(\cM(E))$ is a cluster point of $\mcI(\cP)$
as $\gamma\to 0$, if there exists a sequence $\gamma_j\to 0$ and a sequence
of measures $(\mm_j,j\in \bN)$ s.t. $\mm_j\in \mcI(P_{\gamma_j})$ for all $j$,
and $\mm_j\Rightarrow \mm$.
\begin{proposition}
\label{prop:cluster}
Suppose that Assumption (RM) is satisfied. Then,
\begin{enumerate}[i)]
\item\label{cluster-Idroit}
As $\gamma\to 0$, any cluster point of $\cI(\cP)$ is an element of $\cI(\Phi_{\mathsf H})$;
\item\label{cluster-Icursif}
As $\gamma\to 0$, any cluster point of $\mcI(\cP)$ is an element of
$\mcI(\Phi_{\mathsf H})$.
\end{enumerate}
\end{proposition}
In order to explore the consequences of this proposition, we introduce two
supplementary assumptions. The first is the so-called Pakes-Has'minskii
tightness criterion, which reads as follows \cite{for-pag-99}:
\begin{assumption*}[PH]
There exist measurable mappings $V:E\to[0,+\infty)$, $\psi:E\to[0,+\infty)$
and two functions $\alpha:(0,\gamma_0)\to(0,+\infty)$, $\beta:(0,\gamma_0)\to\bR$, such that
\begin{align*}
& \sup_{\gamma\in (0,\gamma_0)}\frac{\beta(\gamma)}{\alpha(\gamma)}<\infty
\qquad\text{and}\qquad {\lim_{\|x\|\to+\infty}\psi(x)=+\infty}\,,
\end{align*}
and for every $\gamma\in (0,\gamma_0)$,
$$
P_\gamma V\leq V-\alpha(\gamma)\psi+\beta(\gamma)\,.
$$
\end{assumption*}
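As an elementary check of Assumption (PH), given here for illustration purposes
only, assume $\gamma_0<2$ and let $P_\gamma$ be the kernel of the scalar
autoregression $X_{n+1}=(1-\gamma)X_n+\gamma\xi_{n+1}$, where $(\xi_n)$ is an
i.i.d.\ sequence with $\bE\xi_1=0$ and $\bE\xi_1^2=\sigma^2<\infty$. With
$V(x)\eqdef x^2$, one has
$$
P_\gamma V(x) = (1-\gamma)^2 x^2 + \gamma^2\sigma^2
\leq V(x)-\gamma(2-\gamma_0)\,x^2+\gamma^2\sigma^2\,,
$$
since $(1-\gamma)^2=1-\gamma(2-\gamma)\leq 1-\gamma(2-\gamma_0)$ on
$(0,\gamma_0)$. Hence (PH) holds with $\psi\eqdef V$,
$\alpha(\gamma)\eqdef\gamma(2-\gamma_0)$ and
$\beta(\gamma)\eqdef\gamma^2\sigma^2$; in particular,
$\beta(\gamma)/\alpha(\gamma)=\gamma\sigma^2/(2-\gamma_0)$ is bounded on
$(0,\gamma_0)$.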
We recall that a transition kernel $P:E\times\mcB(E)\to [0,1]$
is said to be \emph{Feller} if the mapping $Pf:x\mapsto \int f(y)P(x,dy)$
is continuous for any $f\in C_b(E)$.
If $P$ is Feller, then the set of invariant measures of $P$
is a closed subset of $\cM(E)$.
The following assumption ensures that for all $\gamma\in (0,\gamma_0)$,
$P_\gamma$ is Feller.
\begin{assumption*}[FL]
For every $s\in \Xi$, $\gamma\in (0,\gamma_0)$, the function
$h_\gamma(s,\cdot)$ is continuous.
\end{assumption*}
\begin{theorem}
Let Assumptions (RM), (PH) and (FL) be satisfied. Let $\psi$ and $V$ be the functions
specified in (PH). Let $\nu\in\cM(E)$ s.t. $\nu(V)<\infty$.
Let ${\mathcal U} \eqdef \bigcup_{\pi \in \cI(\Phi)} \support(\pi)$. Then, for all
$\varepsilon > 0$,
\begin{equation}
\limsup_{n\to\infty} \frac 1{n+1}\sum_{k=0}^n \bP^{\nu,\gamma}( d(X_k,{\mathcal U})>\varepsilon)\xrightarrow[\gamma\to 0]{}0\,.
\label{eq:support}
\end{equation}
Let $N'\in\bN^*$ and $f\in C(E,\bR^{N'})$. Assume that there exist $M\geq 0$ and $\varphi:\bR^{N'}\to\bR_+$ such that
$\lim_{\|a\|\to\infty}\varphi(a)/{\|a\|}=\infty$ and
\begin{equation}
\forall a\in E,\ \varphi(f(a))\leq M(1+\psi(a))\,.\label{eq:dVP}
\end{equation}
Then, the set $\cS_f\eqdef\{\pi(f)\,:\, \pi\in\cI(\Phi)\text{
and } \pi(\|f(\cdot)\|)<\infty\}$ is nonempty. For all $n\in\bN$, $\gamma\in (0,\gamma_0)$, the r.v.
$$
F_n \eqdef \frac 1{n+1}\sum_{k=0}^n f(X_k)
$$
is $\bP^{\nu,\gamma}$-integrable, and satisfies for all $\varepsilon>0$,
\begin{align}
&\limsup_{n\to\infty}\ d\left(\bE^{\nu,\gamma}( F_n)\,,\cS_f\right)\xrightarrow[\gamma\to 0]{}0\,,\label{eq:CVSE} \\
&\limsup_{n\to\infty}\ \bP^{\nu,\gamma}\left(d\left(F_n\,,\cS_f\right)\geq \varepsilon\right)\xrightarrow[\gamma\to 0]{}0\,. \label{eq:CVSf}
\end{align}
\label{the:CV}
\end{theorem}
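For instance (this remark is purely illustrative), the choice $f(a)\eqdef a$
and $\varphi(b)\eqdef\|b\|^2$ satisfies~\eqref{eq:dVP} as soon as
$\|a\|^2\leq M(1+\psi(a))$ for some $M\geq 0$, in which case
\eqref{eq:CVSE}--\eqref{eq:CVSf} describe the long-run behavior of the
empirical means $\frac 1{n+1}\sum_{k=0}^n X_k$; the next theorem addresses
these empirical means under a strengthened growth condition on $\psi$.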
\begin{theorem}
\label{cvg-CVSI}
Let Assumptions (RM), (PH) and (FL) be satisfied. Assume that $\Phi_{\mathsf H}(E)$ is closed.
Let $\psi$ and $V$ be the functions specified in (PH). Let $\nu\in\cM(E)$ s.t. $\nu(V)<\infty$.
Assume that
$$
\lim_{\|a\|\to\infty}\frac{\psi(a)}{\|a\|}=+\infty\,.
$$
For all $n\in \bN$, define
$
\overline X_n\eqdef \frac 1{n+1} \sum_{k=0}^n X_k\,.
$
Then, for all $\varepsilon > 0$,
\begin{align*}
& \limsup_{n\to\infty}\ d\left(\bE^{\nu,\gamma}(\overline X_n)\,,\co(L_{\aver(\Phi)})\right)\xrightarrow[\gamma\to 0]{}0\,,\\
&\limsup_{n\to\infty}\ \bP^{\nu,\gamma}\Bigl(d\Bigl(\overline X_n\,,\co(L_{\aver(\Phi)})\Bigr)\geq \varepsilon\Bigr)
\xrightarrow[\gamma\to 0]{}0\, ,
\end{align*}
where $\co(S)$ is the convex hull of the set $S$.
\end{theorem}
\begin{theorem}
\label{cvg-XY}
Let Assumptions (RM), (PH) and (FL) be satisfied. Assume that $\Phi_{\mathsf H}(E)$ is closed. Let $\psi$ and $V$ be the functions
specified in (PH). Let $\nu\in\cM(E)$ s.t. $\nu(V)<\infty$.
Then, for all $\varepsilon > 0$,
$$
\limsup_{n\to\infty}\ \frac 1{n+1}\sum_{k=0}^n
\bP^{\nu,\gamma}\left(d\left(X_k\,,\text{BC}_{\Phi} \right)
\geq \varepsilon\right) \xrightarrow[\gamma\to 0]{}0\,.
$$
\end{theorem}
\section{Proof of Theorem~\ref{th:SA=wAPT}}
\label{sec-prf-narrow}
The first lemma is a straightforward adaptation of the \emph{convergence theorem} \cite[Chap.~1.4, Th.~1, p.~60]{aub-cel-(livre)84}.
Hence, the proof is omitted.
We denote by $\lambda_T$ the Lebesgue measure on $[0,T]$.
\begin{lemma}
\label{lem:cvth}
Let $\{F_\xi:\xi\in\Xi\}$ be a family of set-valued mappings $F_\xi: E\rightrightarrows E$.
Let $T>0$ and for all $n\in\bN$, let
$u_n:[0,T]\to E$, $v_n:\Xi\times[0,T]\to E$ be measurable maps
w.r.t.\ $\mcB([0,T])$ and $\mcG \otimes \mcB([0,T])$ respectively. For simplicity, write
${{\mathcal L}}^1\eqdef {{\mathcal L}}^1(\Xi\times [0,T],\mcG\otimes \mcB([0,T]),\mu\otimes \lambda_T;\bR)$.
Assume the following.
\begin{enumerate}[i)]
\item \label{hyp:cvToGraph} For $\mu\otimes\lambda_T$-almost every $(\xi,t)$,
$(u_n(t),v_n(\xi,t))\to_n \graph(F_\xi)$.
\item $(u_n)$ converges $\lambda_T$-a.e. to a function $u:[0,T]\to E$.
\item \label{hyp:cvL1} For all $n$, $v_n\in {{\mathcal L}}^1$ and converges weakly in ${{\mathcal L}}^1$ to a
function $v:\Xi\times [0,T]\to E$.
\item For $\mu$-almost every $\xi$, $F_\xi$ is proper and upper semicontinuous
with closed convex values.
\end{enumerate}
Then, for all $(\xi,t)$ $\mu\otimes \lambda_T$-a.e., $v(\xi,t)\in F_\xi(u(t))$.
\end{lemma}
Given $T > 0$ and $0<\delta\leq T$, we denote by
\[
w_{\mathsf x}^T(\delta) \eqdef
\sup\{\|{\mathsf x}(t)-{\mathsf x}(s)\| :|t-s|\leq \delta, (t,s)\in [0,T]^2\}
\]
the modulus of continuity on $[0,T]$ of any ${\mathsf x}\in C(\bR_+, E)$.
\begin{lemma}
\label{h-ui}
For all $n\in \bN$, denote by $\mcF_n\subset\mcF$ the $\sigma$-field generated
by the r.v. $\{X_{k},0\leq k\leq n\}$. For all $\gamma\in (0,\gamma_0)$, define
$Z^\gamma_{n+1} \eqdef \gamma^{-1}(X_{n+1}-X_{n})$.
Let $K\subset E$ be compact. Let
$\{\bar \bP^{a,\gamma},a\in K,0<\gamma<\gamma_0\}$ be a family of probability
measures on $(\Omega,\mcF)$ satisfying the following uniform integrability
condition:
\begin{equation}
\sup_{{n\in\bN^*, a\in K, \gamma\in (0,\gamma_0)}} \bar \bE^{a,\gamma}(\|Z^\gamma_{n}\|\mathbbm 1_{\|Z^\gamma_{n}\|>A})\xrightarrow[]{A\to+\infty} 0\,.\label{eq:ui}
\end{equation}
Then, $\{\bar \bP^{a,\gamma}{\mathsf X}_\gamma^{-1}:a\in K,0<\gamma<\gamma_0\}$ is
tight. Moreover, for any $T > 0$,
\begin{equation}
\sup_{a\in K}\bar\bP^{a,\gamma}\left(\max_{0\leq n\leq \lfloor\frac T\gamma\rfloor}\gamma\left\|\sum_{k=0}^n\left(Z^\gamma_{k+1}-\bar\bE^{a,\gamma}(Z^\gamma_{k+1}|\mcF_k)\right)
\right\|>\varepsilon\right)\xrightarrow[]{\gamma\to 0} 0\,.
\label{eq:asym-rate-change}
\end{equation}
\label{lem:tightC}
\end{lemma}
\begin{proof}
We prove the first point. Set $T>0$, let $0<\delta\leq T$, and choose
$0\leq s\leq t\leq T$ s.t. $t-s\leq \delta$. Let $\gamma\in(0,\gamma_0)$ and
set $n\eqdef \lfloor \frac t\gamma\rfloor$,
$m\eqdef \lfloor \frac s\gamma\rfloor$. For any $R>0$,
$$
\|{\mathsf X}_\gamma(t)-{\mathsf X}_\gamma(s)\|\leq \gamma\!\!\sum_{k=m+1}^{n+1}\!\!\|Z_k^{\gamma}\|\leq \gamma(n-m+1)R + \gamma\!\!\sum_{k=m+1}^{n+1}\!\!\|Z_k^{\gamma}\|\mathbbm 1_{\|Z_k^{\gamma}\|>R}\,.
$$
Noting that $n-m+1\leq \frac{\delta}\gamma$ and using Markov's inequality, we obtain
\begin{align*}
\bar\bP^{a,\gamma}{\mathsf X}_\gamma^{-1}(\{x:w_x^T(\delta)>\varepsilon\}) &\leq \bar\bP^{a,\gamma}\left(\gamma\!\!\sum_{k=1}^{ \lfloor \frac T\gamma\rfloor+1}\!\!\|Z_k^{\gamma}\|\mathbbm 1_{\|Z_k^{\gamma}\|>R}>\varepsilon-\delta R\right) \\
&\leq T \frac{\sup_{k\in\bN^*} \bar \bE^{a,\gamma}\left(\|Z_k^{\gamma}\|\mathbbm 1_{\|Z_k^{\gamma}\|>R}\right)}{\varepsilon-\delta R}\,,
\end{align*}
provided that $R\delta<\varepsilon$. Choosing $R=\varepsilon/(2\delta)$ and
using the uniform integrability,
\[
\sup_{a\in K,0<\gamma<\gamma_0}
\bar\bP^{a,\gamma}{\mathsf X}_\gamma^{-1}(\{x:w_x^T(\delta)>\varepsilon\})
\xrightarrow[]{\delta\to 0} 0\, .
\]
As $\{\bar\bP^{a,\gamma}{\mathsf X}_\gamma^{-1}p_0^{-1},0<\gamma<\gamma_0,a\in K\}$
is obviously tight, the tightness of
$\{\bar \bP^{a,\gamma}{\mathsf X}_\gamma^{-1},a\in K,0<\gamma<\gamma_0\}$ follows
from \cite[Theorem 7.3]{bil-(livre)99}.
We prove the second point.
We define $M^{a,\gamma}_{n+1} \eqdef \sum_{k=0}^n\left(Z^\gamma_{k+1}-\bar\bE^{a,\gamma}(Z^\gamma_{k+1}|\mcF_k)\right)$.
We introduce
\begin{align*}
\eta_{n+1}^{a,\gamma,\leq} &\eqdef Z_{n+1}^{\gamma}\mathbbm 1_{\|Z_{n+1}^{\gamma}\|\leq R}-\bar\bE^{a,\gamma}\left(Z_{n+1}^{\gamma}\mathbbm 1_{\|Z_{n+1}^{\gamma}\|\leq R}|\mcF_n\right)
\end{align*}
and we define $\eta_{n+1}^{a,\gamma,>}$ in a similar way, by replacing $\leq$ with $>$ in the right hand side of the above equation.
Clearly, for all $a\in K$, $Z^\gamma_{n+1}-\bar\bE^{a,\gamma}(Z^\gamma_{n+1}|\mcF_n) = \eta_{n+1}^{a,\gamma,\leq}+ \eta_{n+1}^{a,\gamma,>}$. Thus,
$$
\gamma\left\|M^{a,\gamma}_{n+1}\right\|\leq \|S_{n+1}^{a,\gamma,\leq}\| + \|S_{n+1}^{a,\gamma,>}\|
$$
where $S_{n+1}^{a,\gamma,\leq}\eqdef \gamma \sum_{k=0}^n\eta_{k+1}^{a,\gamma,\leq}$ and
$S_{n+1}^{a,\gamma,>}$ is defined similarly.
Under $\bar\bP^{a,\gamma}$, the random processes $S^{a,\gamma,\leq}$ and $S^{a,\gamma,>}$ are $\mcF_n$-adapted martingales.
Defining $q_\gamma\eqdef \lfloor\frac T\gamma\rfloor+1$, we obtain by Doob's
martingale inequality and by the boundedness of the increments of
$S_{n}^{a,\gamma,\leq}$ that
$$
\bar\bP^{a,\gamma}\left(\max_{1\leq n\leq q_\gamma}\|S_{n}^{a,\gamma,\leq}\|
>\varepsilon\right) \leq
\frac{\bar\bE^{a,\gamma} (\|S_{q_\gamma}^{a,\gamma,\leq}\|)}{\varepsilon}
\leq \frac{\bar\bE^{a,\gamma} (\|S_{q_\gamma}^{a,\gamma,\leq}\|^2)^{1/2}}{\varepsilon}
\leq \frac 2\varepsilon \gamma R \sqrt
{q_\gamma}\,,
$$
and the right hand side tends to zero uniformly in $a\in K$ as $\gamma\to 0$. By the same inequality,
$$
\bar\bP^{a,\gamma}\left(\max_{1\leq n\leq q_\gamma}\|S_{n}^{a,\gamma,>}\|>\varepsilon\right) \leq \frac 2\varepsilon q_\gamma \gamma \sup_{k\in\bN^*} \bar\bE^{a,\gamma}\left(\|Z_k^{\gamma}\|\mathbbm 1_{\|Z_k^{\gamma}\|>R}\right)\,.
$$
Choose an arbitrarily small $\delta>0$ and select $R$ large enough
that the supremum in the right hand side is no larger than
$\varepsilon \delta/(2T+2\gamma_0)$. Then the left hand side is no larger than
$\delta$. This concludes the proof.
\end{proof}
For any $R>0$, define
$h_{\gamma,R}(s,a)\eqdef h_{\gamma}(s,a)\mathbbm 1_{\|a\|\leq R}$. Let
$H_R(s,x)\eqdef H(s,x)$ if $\|x\|<R$, $\{0\}$ if $\|x\|>R$, and $E$ otherwise.
Denote the corresponding selection integral as
${\mathsf H}_R(a) = \int H_R(s,a)\, \mu(ds)$. Define
$\tau_R(x)\eqdef\inf\{n\in \bN:\|x_n\|>R\}$ for all $x\in \Omega$. We also
introduce the measurable mapping $B_R:\Omega\to \Omega$, given by
$$
B_R(x) : n\mapsto x_n\mathbbm 1_{n< \tau_R(x)} + x_{\tau_R(x)}\mathbbm 1_{n\geq \tau_R(x)}
$$
for all $x\in \Omega$ and all $n\in \bN$.
\begin{lemma}
\label{lem:aptTrunk}
Suppose that Assumption (RM) is satisfied. Then, for every compact set
$K\subset E$, the family
$\{\bP^{a,\gamma}B_R^{-1}{\mathsf X}_\gamma^{-1},\gamma\in (0,\gamma_0), a\in K\}$
is tight. Moreover, for every $\varepsilon>0$,
\[
\sup_{a\in K}\,\bP^{a,\gamma}B_R^{-1}\left[d({\mathsf X}_\gamma,\Phi_{{\mathsf H}_R}(K))>\varepsilon\right]\xrightarrow[\gamma\to 0]{}0\,.
\]
\end{lemma}
\begin{proof}
We introduce the measurable mapping $\Delta_{\gamma,R}:\Omega\to E^\bN$ s.t. for all $x\in \Omega$,
$\Delta_{\gamma,R}(x)(0)\eqdef 0$ and
$$
\Delta_{\gamma,R}(x)(n)\eqdef (x_n-x_{0}) - \gamma \sum_{k = 0}^{n-1} \int h_{\gamma,R}(s,x_k)\mu(ds)
$$
for all $n\in \bN^*$. We also introduce the measurable mapping ${\mathsf G}_{\gamma,R}:C(\bR_+,E)\to C(\bR_+,E)$
s.t. for all ${\mathsf x}\in C(\bR_+,E)$,
$$
{\mathsf G}_{\gamma,R}({\mathsf x})(t) \eqdef
\rev{\int_0^t\int h_{\gamma,R}(s, {\mathsf x}(\gamma \lfloor u/\gamma\rfloor)) \, \mu(ds)\,du\, .}
$$
We first express the interpolated process in integral form. For every $x\in E^\bN$ and $t\geq 0$,
$$
{\mathsf X}_\gamma(x)(t)= x_0
+ \int_0^t \gamma^{-1}(x_{\lfloor \frac u\gamma\rfloor +1}
-x_{\lfloor \frac u\gamma\rfloor})\, du \,,
$$
from which we obtain the decomposition
\begin{equation}
{\mathsf X}_\gamma(x) = x_0 + {\mathsf G}_{\gamma,R}\circ {\mathsf X}_\gamma(x) + {\mathsf X}_\gamma\circ \Delta_{\gamma,R}(x)\,. \label{eq:expr-itp}
\end{equation}
\rev{ The uniform integrability condition~(\ref{eq:ui})
is satisfied when letting $\bar\bP^{a,\gamma}\eqdef \bP^{a,\gamma}B_R^{-1}$. Indeed,
\begin{align*}
\bar\bE^{a,\gamma}(\|\gamma^{-1}(X_{n+1}-X_n)\|^{1+\epsilon_K}) &= \bE^{a,\gamma}\left(\left\|\gamma^{-1}(X_{n+1}-X_n)\right\|^{1+\epsilon_K} \mathbbm 1_{\tau_R(X)>n}\right) \,,
\end{align*}
and the condition~\eqref{eq:ui} follows from hypothesis~(\ref{eq:moment}).
On the other hand, as the event $\{\tau_R(X)>n\}$ is $\mcF_n$-measurable,
\begin{align*}
\bar \bE^{a,\gamma}(\gamma^{-1}(X_{n+1}-X_n)|\mcF_n) &= \bE^{a,\gamma}(\gamma^{-1}(X_{n+1}-X_n)|\mcF_n) \mathbbm 1_{\tau_R(X)>n} \\
&= \int h_\gamma(s,X_n)\mu(ds) \mathbbm 1_{\tau_R(X)>n} \\
&= \int h_{\gamma,R}(s,X_n)\mu(ds) \,.
\end{align*}
}Thus, Lemma~\ref{lem:tightC} implies that for all $\varepsilon>0$ and
$T>0$,
$$
\sup_{a\in K}\bar\bP^{a,\gamma}\left(
\max_{0\leq n\leq \lfloor\frac T\gamma\rfloor}
\left\|\Delta_{\gamma,R}(X)(n+1)\right\|>\varepsilon\right)
\xrightarrow[]{\gamma\to 0} 0\,.
$$
It is easy to see that for all $x\in \Omega$, the function
${\mathsf X}_\gamma\circ \Delta_{\gamma,R}(x)$ is bounded on every compact interval
$[0,T]$ by $\max_{0\leq n\leq \lfloor T/\gamma\rfloor}\|\Delta_{\gamma,R}(x)(n+1)\|$.
This in turn leads to:
\begin{equation}
\sup_{a\in K}\bar \bP^{a,\gamma}(\|{\mathsf X}_\gamma\circ \Delta_{\gamma,R}\|_{\infty,T}>\varepsilon) \xrightarrow[]{\gamma\to 0}0\,,\label{eq:rate-of-change-continuous}
\end{equation}
where the notation $\|{\mathsf x}\|_{\infty,T}$ stands for the uniform norm of ${\mathsf x}$ on $[0,T]$.
As a second consequence of Lemma~\ref{lem:tightC}, the family
$\{\bar\bP^{a,\gamma}{\mathsf X}_\gamma^{-1},0<\gamma<\gamma_0,a\in K\}$ is tight.
Choose any sequence $(a_n,\gamma_n)$ s.t. $\gamma_n\to 0$ and $a_n\in K$.
Using Prokhorov's theorem and the compactness of $K$, there exists a subsequence
(which we still denote by $(a_n,\gamma_n)$) and there exist some $a^\star\in K$ and some
$\upsilon\in \cM(C(\bR_+,E))$ such that $a_n\to a^\star$ and
$\bar\bP^{a_n,\gamma_n}{\mathsf X}_{\gamma_n}^{-1}$ converges narrowly to~$\upsilon$.
By Skorokhod's representation theorem, we introduce some r.v. ${\mathsf z}$, $\{{\mathsf x}_n,n\in\bN\}$ on $C(\bR_+, E)$
with respective distributions $\upsilon$ and $\bar\bP^{a_n,\gamma_n}{\mathsf X}_{\gamma_n}^{-1}$,
defined on some other probability space $(\Omega',\mcF',\bP')$
and such that $d({\mathsf x}_n(\omega), {\mathsf z}(\omega))\to 0$
for all $\omega\in \Omega'$.
By~\eqref{eq:expr-itp} and~\eqref{eq:rate-of-change-continuous}, the sequence
of r.v.
$$
{\mathsf x}_n - {\mathsf x}_n(0) - {\mathsf G}_{\gamma_n,R}({\mathsf x}_n)
$$
converges in probability to zero in $(\Omega',\mcF',\bP')$, as $n\to\infty$.
One can extract a subsequence under which this convergence holds in the almost sure sense.
Therefore, there exists an event of probability one s.t., everywhere on this event,
$$
{\mathsf z}(t) = {\mathsf z}(0)
+ \lim_{n\to\infty} \int_0^t\int_\Xi
h_{\gamma_n,R}(s, {\mathsf x}_n(\gamma_n\lfloor u/\gamma_n\rfloor))\, \mu(ds) \, du\,
\qquad (\forall t\geq 0) \,,
$$
where the limit is taken along the former subsequence.
We now select an $\omega$ s.t. the above convergence holds, and omit the dependence on $\omega$ in the sequel
(otherwise stated, ${\mathsf z}$ and ${\mathsf x}_n$ are treated as elements of $C(\bR_+,E)$ and no longer as random variables).
Set $T>0$. As $({\mathsf x}_n)$ converges uniformly on $[0,T]$, there exists a compact set $K'$ (which depends on $\omega$) such that
${\mathsf x}_n(\gamma_n\lfloor t/\gamma_n\rfloor)\in K'$ for all $t\in [0,T]$, $n\in \bN$.
Define
$$
v_n(s,t)\eqdef h_{\gamma_n,R}(s, {\mathsf x}_n(\gamma_n\lfloor t/\gamma_n\rfloor))\,.
$$
\rev{By Eq.~\eqref{eq:moment-bis}, the sequence $(v_n,n\in \bN)$
forms a bounded subset of
${{\mathcal L}}^{1+\epsilon_{K'}}\eqdef {{\mathcal L}}^{1+\epsilon_{K'}}( \Xi\times [0,T],
\mcG\otimes\mcB([0,T]), \mu\otimes\lambda_T; E)$.}
By the Banach-Alaoglu theorem, the sequence converges weakly to some mapping $v\in {{\mathcal L}}^{1+\epsilon_{K'}}$ along some subsequence.
This has two consequences. First,
\begin{equation}
\ {\mathsf z}(t)={\mathsf z}(0)+
\int_0^t\int_\Xi v(s,u)\, \mu(ds)\, du\,,\quad(\forall t\in [0,T])\,.
\label{eq:di-int}
\end{equation}
Second, for $\mu\otimes\lambda_T$-almost all $(s,t)$,
$v(s,t)\in H_R(s,{\mathsf z}(t))$.
In order to prove this point, remark that, by Assumption~(RM),
$$
v_n(s,t) \to H_R(s,{\mathsf z}(t))
$$
for almost all $(s,t)$. This implies that the couple
$({\mathsf x}_n(\gamma_n\lfloor t/\gamma_n\rfloor),v_n(s,t))$
converges to $\graph(H_R(s, \cdot))$ and the second point thus follows from Lemma~\ref{lem:cvth}.
By Fubini's theorem, there exists a negligible set of $[0,T]$ s.t.
for all $t$ outside this set, $v(\cdot,t)$ is an integrable selection of
$H_R(\cdot,{\mathsf z}(t))$.
As $H(\cdot,x)$ is integrable for every $x\in E$, the same holds for $H_R$.
Equation~(\ref{eq:di-int}) implies that
${\mathsf z}\in \Phi_{{\mathsf H}_R}(K)\,.$
We have shown that for any sequence $((a_n,\gamma_n),n\in\bN)$ on
$K\times (0,\gamma_0)$ s.t. $\gamma_n\to 0$,
there exists a subsequence along which, for every $\varepsilon>0$, $\bP^{a_n,\gamma_n}B_R^{-1}(d({\mathsf X}_{\gamma_n},\Phi_{{\mathsf H}_R}(K))>\varepsilon)\to 0\,.$
This proves the lemma.
\end{proof}
\noindent {\bf End of the proof of Theorem~\ref{th:SA=wAPT}.}\\
We first prove the second statement.
Set an arbitrary $T>0$. Define $d_T({\mathsf x},{\mathsf y})\eqdef \|{\mathsf x}-{\mathsf y}\|_{\infty,T}$.
It is sufficient to prove that for any sequence $((a_n,\gamma_n),n\in\bN)$
s.t.~$\gamma_n\to 0$, there exists a subsequence along which
$\bP^{a_n,\gamma_n}(d_T({\mathsf X}_{\gamma_n},\Phi_{{\mathsf H}}(K))>\varepsilon)\to 0$.
Choose $R>R_0(T)$, where $R_0(T)\eqdef\sup\{\|{\mathsf x}(t)\|:t\in [0,T], {\mathsf x}\in\Phi_{\mathsf H}(a), a\in K\}$ is finite by
Assumption (RM).
It is easy to show that any ${\mathsf x}\in\Phi_{{\mathsf H}_R}(K)$ must satisfy $\|{\mathsf x} \|_{\infty,T} < R$.
Thus, when $R>R_0(T)$, any ${\mathsf x}\in \Phi_{{\mathsf H}_R}(K)$ is such that there exists ${\mathsf y}\in \Phi_{{\mathsf H}}(K)$ with $d_T({\mathsf x},{\mathsf y})=0$
\emph{i.e.}, the restrictions of ${\mathsf x}$ and ${\mathsf y}$ to $[0,T]$ coincide.
As a consequence of Lemma~\ref{lem:aptTrunk}, each sequence $(a_n,\gamma_n)$ chosen as above admits
a subsequence along which, for all $\varepsilon>0$,
\begin{equation}
\bP^{a_n,\gamma_n}(d_T({\mathsf X}_{\gamma_n}\circ B_R,\Phi_{{\mathsf H}}(K))>\varepsilon)\to 0\,.\label{eq:aptTronquee}
\end{equation}
The event $d_T({\mathsf X}_\gamma\circ B_R,{\mathsf X}_\gamma)>0$ implies the event $\|{\mathsf X}_\gamma\circ B_R\|_{\infty,T}\geq R$,
which in turn implies by the triangle inequality that
$
R_0(T) + d_T({\mathsf X}_\gamma\circ B_R,\Phi_{{\mathsf H}}(K))\geq R\,.
$
Therefore,
\begin{equation}
\bP^{a_n,\gamma_n}( d_T({\mathsf X}_{\gamma_n}\circ B_R,{\mathsf X}_{\gamma_n})>\varepsilon) \leq \bP^{a_n,\gamma_n}( d_T({\mathsf X}_{\gamma_n}\circ B_R,\Phi_{{\mathsf H}}(K))\geq R-R_0(T))\,.\label{eq:XTronq}
\end{equation}
By~(\ref{eq:aptTronquee}), the right hand side converges to zero. Using~\eqref{eq:aptTronquee} again along with the triangle
inequality, it follows that $\bP^{a_n,\gamma_n}(d_T({\mathsf X}_{\gamma_n},\Phi_{{\mathsf H}}(K))>\varepsilon)\to 0$, which proves the second statement of the theorem.
We prove the first statement (tightness). By \cite[Theorem 7.3]{bil-(livre)99},
this is equivalent to showing that for every $T>0$, and for every sequence
$(a_n,\gamma_n)$ on $K\times (0,\gamma_0)$, the sequence
$( \bP^{a_n,\gamma_n} {\mathsf X}_{\gamma_n}^{-1} p_0^{-1} )$ is tight, and
for each positive $\varepsilon$ and $\eta$, there exists $\delta > 0$ such
that $\limsup_n
\bP^{a_n,\gamma_n}{\mathsf X}_{\gamma_n}^{-1}(\{x:w_x^T(\delta)>\varepsilon\}) < \eta$.
First consider the case where $\gamma_n\to 0$. Fixing $T > 0$, letting
$R>R_0(T)$ and using~\eqref{eq:XTronq}, it holds that for all $\varepsilon>0$,
$\bP^{a_n,\gamma_n}( d_T({\mathsf X}_{\gamma_n}\circ B_R,{\mathsf X}_{\gamma_n})>\varepsilon)
\to_n 0$. Moreover, we showed that
$\bP^{a_n,\gamma_n}B_R^{-1}{\mathsf X}_{\gamma_n}^{-1}$ is tight.
The tightness of $( \bP^{a_n,\gamma_n} {\mathsf X}_{\gamma_n}^{-1} p_0^{-1} )$ follows.
In addition, for every ${\mathsf x}, {\mathsf y} \in C(\bR_+,E)$, it holds by the triangle
inequality that $w_{{\mathsf x}}^T(\delta) \leq w_{{\mathsf y}}^T(\delta) + 2 d_T({\mathsf x}, {\mathsf y})$
for every $\delta > 0$. Thus,
\begin{multline*}
\bP^{a_n,\gamma_n}{\mathsf X}_{\gamma_n}^{-1}(\{x:w_x^T(\delta)>\varepsilon\})
\leq
\bP^{a_n,\gamma_n}B_R^{-1}{\mathsf X}_{\gamma_n}^{-1}
(\{x:w_x^T(\delta)>\varepsilon /2\}) \\
+ \bP^{a_n,\gamma_n}( d_T({\mathsf X}_{\gamma_n}\circ B_R,{\mathsf X}_{\gamma_n})
>\varepsilon / 4) ,
\end{multline*}
which leads to the tightness of $(\bP^{a_n,\gamma_n} {\mathsf X}_{\gamma_n}^{-1})$
when $\gamma_n \to 0$.
It remains to establish the tightness when $\liminf_n\gamma_n>\eta$ for some
$\eta>0$. Note that for all $\gamma>\eta$, the interpolated process
${\mathsf X}_\gamma(x)$ is piecewise linear on $[0,T]$ with slope at most
$2\eta^{-1}\max_k\|x_k\|$, so that
$$
w_{{\mathsf X}_{\gamma}(x)}^T(\delta)\leq
\frac{2\delta}{\eta} \max_{k=0\dots\lfloor T/\eta\rfloor+1}\|x_k\|\,.
$$
There exists $n_0$ such that for all $n\geq n_0$, $\gamma_n>\eta$, which implies
by the union bound:
$$
\bP^{a_n,\gamma_n}{\mathsf X}_{\gamma_n}^{-1}(\{{\mathsf x}:w_{\mathsf x}^T(\delta)>\varepsilon\}) \leq \sum_{k=0}^{\lfloor T/\eta\rfloor+1}P_{\gamma_n}^k(a_n,B(0,(2\delta)^{-1}\eta\varepsilon)^c)\,,
$$
where $B(0,r)\subset E$ stands for the ball of radius $r$ and
where $P_\gamma^k$ stands for the iterated kernel, recursively defined by
\begin{equation}
P_\gamma^k(a,\cdot) = \int P_\gamma(a,dy)P_\gamma^{k-1}(y,\cdot)\label{eq:iterated}
\end{equation}
and $P_\gamma^0(a,\cdot)=\delta_a$. Using~\eqref{eq:moment}, it is an easy
exercise to show, by induction, that for every $k\in \bN$,
$P_\gamma^k(a,B(0,r)^c)\to 0$ as $r\to \infty$. By letting $\delta\to 0$ in the
above inequality, the tightness of $(\bP^{a_n,\gamma_n}{\mathsf X}_{\gamma_n}^{-1})$
follows.
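The truncation argument above rests on the decay of the iterated-kernel tails $P_\gamma^k(a,B(0,r)^c)$ as $r\to\infty$. For intuition, these tails can be estimated by Monte Carlo simulation of the chain; the Gaussian kernel $P_\gamma(x,\cdot)=\mathcal N((1-\gamma)x,\gamma)$ used below is a purely illustrative choice, not taken from the text.

```python
import numpy as np

def iterated_kernel_tail(a, gamma, k, r, n_samples=100_000, rng=None):
    """Monte Carlo estimate of P_gamma^k(a, B(0, r)^c): the probability
    that the chain started at `a` lies outside the ball of radius r
    after k steps.  Illustrative kernel: P_gamma(x, .) = N((1-gamma) x, gamma)."""
    rng = rng or np.random.default_rng(0)
    x = np.full(n_samples, float(a))
    for _ in range(k):
        # one transition of the chain: contraction plus Gaussian noise
        x = (1.0 - gamma) * x + np.sqrt(gamma) * rng.standard_normal(n_samples)
    return float(np.mean(np.abs(x) > r))
```

As expected, for fixed $k$ the estimated tail vanishes as $r$ grows, which is the property used in the induction above.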
\section{Proof of Proposition~\ref{prop:cluster}}
\label{sec-prf-cluster}
To establish Prop.~\ref{prop:cluster}--\ref{cluster-Idroit}, we consider a
sequence $((\pi_n,\gamma_n), n\in \bN)$ such that
$\pi_n \in \cI(P_{\gamma_n})$, $\gamma_n \to 0$, and $(\pi_n)$ is tight. We
first show that the sequence
$(\upsilon_n \eqdef \bP^{\pi_n,\gamma_n} {\mathsf X}_{\gamma_n}^{-1}, n\in \bN)$ is
tight, then we show that every cluster point of $(\upsilon_n)$ satisfies
the conditions of Def.~\ref{def-inv}.
Given $\varepsilon > 0$, there exists a compact set $K\subset E$ such
that $\inf_n \pi_n(K) > 1 - \varepsilon / 2$. By Th.~\ref{th:SA=wAPT},
the family
$\{ \bP^{a,\gamma_n} {\mathsf X}_{\gamma_n}^{-1} \, , \, a \in K, n \in \bN \}$ is
tight. Let $\cC$ be a compact set of $C(\bR_+, E)$ such that
$\inf_{a\in K, n\in\bN} \bP^{a,\gamma_n} {\mathsf X}_{\gamma_n}^{-1}(\cC) >
1 - \varepsilon/2$.
By construction of the probability measure $\bP^{\pi_n, \gamma_n}$, it
holds that
$\bP^{\pi_n, \gamma_n}(\cdot) = \int_E \bP^{a, \gamma_n}(\cdot) \, \pi_n(da)$.
Thus,
\[
\upsilon_n(\cC)
\geq \int_K \bP^{a, \gamma_n}{\mathsf X}_{\gamma_n}^{-1}(\cC) \, \pi_n(da)
> (1 - \varepsilon/2)^2 > 1 - \varepsilon \, ,
\]
which shows that $(\upsilon_n)$ is tight.
Since $\pi_n = \upsilon_n p_0^{-1}$, and since the projection $p_0$ is
continuous, it is clear that every cluster point $\pi$ of $\cI(\cP)$ as
$\gamma\to 0$ can be written as $\pi = \upsilon p_0^{-1}$, where $\upsilon$ is
a cluster point of a sequence $(\upsilon_n)$. Thus,
Def.~\ref{def-inv}--\ref{inv-margin} is satisfied by $\pi$ and $\upsilon$.
To establish Prop.~\ref{prop:cluster}--\ref{cluster-Idroit}, we need to verify
the conditions~\ref{inv-support} and~\ref{inv-Theta} of
Definition~\ref{def-inv}. In the remainder of the proof, with a slight abuse of
notation, we denote by $(n)$ a subsequence along which $(\upsilon_n)$ converges
narrowly to $\upsilon$.
To establish the validity of
Def.~\ref{def-inv}--\ref{inv-support}, we prove that for every $\eta > 0$,
$\upsilon_n( (\Phi_{\mathsf H}(E))_\eta ) \to 1$ as $n\to\infty$; the result will
follow from the convergence of $(\upsilon_n)$. Fix $\varepsilon > 0$, and let
$K \subset E$ be a compact set such that
$\inf_n \pi_n(K) > 1 - \varepsilon$. We have
\begin{align*}
\upsilon_n( (\Phi_{\mathsf H}(E))_\eta ) &=
\bP^{\pi_n,\gamma_n}( d({\mathsf X}_{\gamma_n}, \Phi_{{\mathsf H}}(E)) < \eta ) \\
&\geq \bP^{\pi_n,\gamma_n}( d({\mathsf X}_{\gamma_n}, \Phi_{{\mathsf H}}(K)) < \eta ) \\
&\geq \int_K \bP^{a,\gamma_n}( d({\mathsf X}_{\gamma_n}, \Phi_{{\mathsf H}}(K)) < \eta )
\, \pi_n(da) \\
&\geq (1-\varepsilon)
\inf_{a\in K} \bP^{a,\gamma_n}( d({\mathsf X}_{\gamma_n}, \Phi_{{\mathsf H}}(K)) < \eta )
\, .
\end{align*}
By Th.~\ref{th:SA=wAPT}, the infimum on the right-hand side converges to $1$.
Since $\varepsilon > 0$ is arbitrary, we obtain the result.
It remains to establish the $\Theta$-invariance of $\upsilon$
(Condition~\ref{inv-Theta}). Equivalently, we need to show that
\begin{equation}
\int f({\mathsf x}) \, \upsilon(d{\mathsf x}) = \int f \circ \Theta_t ({\mathsf x}) \,
\upsilon(d{\mathsf x})
\label{inv-ups}
\end{equation}
for all $f\in C_b(C(\bR_+, E))$ and all $t > 0$.
We shall work on $(\upsilon_n)$ and make $n\to\infty$. Write
$\eta_n \eqdef t - \gamma_n \lfloor t / \gamma_n \rfloor$.
Note that $\Theta_{t}({\mathsf x}) = \Theta_{\eta_n}({\mathsf x}(\gamma_n \lfloor t / \gamma_n \rfloor+\cdot))$
for every ${\mathsf x}$. Moreover, thanks to the $P_{\gamma_n}$--invariance of $\pi_n$,
${\mathsf x}(\gamma_n \lfloor t / \gamma_n \rfloor+\cdot)$ and ${\mathsf x}$ are equal in law under $\upsilon_n(d{\mathsf x})$. Thus,
\begin{align}
\int f \circ \Theta_t ({\mathsf x}) \, \upsilon_n(d{\mathsf x}) &=
\int
f \circ \Theta_{\eta_n} ({\mathsf x}(\gamma_n \lfloor t / \gamma_n \rfloor+\cdot))
\, \upsilon_n(d{\mathsf x}) \nonumber \\
&= \int f \circ \Theta_{\eta_n} ({\mathsf x}) \, \upsilon_n(d{\mathsf x}) . \label{eq:commentonlappelle}
\end{align}
Using Skorokhod's representation theorem, there exists a probability
space $(\Omega',\mathcal F', \bP')$ and random variables
$(\bar {\mathsf x}_n, n\in \bN)$ and $\bar {\mathsf x}$ over this probability space, with
values in $C(\bR_+, E)$, such that for every $n \in \bN$, the
distribution of $\bar {\mathsf x}_n$ is $\upsilon_n$, the
distribution of $\bar {\mathsf x}$ is $\upsilon$ and
$\bP'$-a.s.,
\[
d(\bar {\mathsf x}_n,\bar {\mathsf x}) \longrightarrow_{n \to +\infty} 0,
\]
\textit{i.e.}, $(\bar{\mathsf x}_n)$ converges to $\bar {\mathsf x}$ as
$n \to +\infty$ uniformly over compact sets of $\bR_+$. Since
$\eta_n \rightarrow_{n \to +\infty} 0$, $\bP'$-a.s.,
$d(\Theta_{\eta_n}(\bar{\mathsf x}_n),\bar{\mathsf x}) \longrightarrow_{n \to
+\infty} 0.$ Hence,
\[
\int f \circ \Theta_{\eta_n} ({\mathsf x}) \, \upsilon_n(d{\mathsf x})
\xrightarrow[n\to\infty]{} \int f({\mathsf x}) \, \upsilon(d{\mathsf x}) \,.
\]
Recalling Eq.~(\ref{eq:commentonlappelle}), we have shown that $\int f\circ \Theta_t ({\mathsf x}) \, \upsilon_n(d{\mathsf x}) \xrightarrow[n\to\infty]{}
\int f({\mathsf x}) \, \upsilon(d{\mathsf x})$. Since $f\circ\Theta_t\in C_b(C(\bR_+,E))$, we also have $\int f\circ \Theta_t ({\mathsf x}) \, \upsilon_n(d{\mathsf x}) \xrightarrow[n\to\infty]{}
\int f\circ\Theta_t({\mathsf x}) \, \upsilon(d{\mathsf x})$, and the identity \eqref{inv-ups} holds true.
Prop.~\ref{prop:cluster}--\ref{cluster-Idroit} is proven.
We now prove Prop.~\ref{prop:cluster}--\ref{cluster-Icursif}. Consider a
sequence $((\mathfrak m_n, \gamma_n), n\in \bN)$ such that
$\mathfrak m_n \in \mcI(P_{\gamma_n})$, $\gamma_n \to 0$, and
$\mathfrak m_n \Rightarrow \mathfrak m$ for some $\mathfrak m \in\cM(\cM(E))$.
Since the space $\cM(E)$ is separable, Skorokhod's representation theorem shows
that there exists a probability space $(\Omega',\mcF',\bP')$, a
sequence of $\Omega' \to \cM(E)$ random variables $(\Lambda_n)$ with
distributions $\mathfrak m_n$, and a $\Omega' \to \cM(E)$ random variable
$\Lambda$ with distribution $\mathfrak m$ such that $\Lambda_n(\omega)
\Rightarrow \Lambda(\omega)$ for each $\omega \in \Omega'$. Moreover, there is a
probability one subset of $\Omega'$ such that $\Lambda_n(\omega)$ is a
$P_{\gamma_n}$--invariant probability measure for all $n$ and for every
$\omega$ in this set. For each of these $\omega$, we can construct on the
space $(E^\bN, \mcF)$ a probability measure
$\bP^{\Lambda_n(\omega), \gamma_n}$ as we did in Sec.~\ref{rm-general}. By the
same argument as in the proof of
Prop.~\ref{prop:cluster}--\ref{cluster-Idroit}, the sequence
$(\bP^{\Lambda_n(\omega), \gamma_n} {\mathsf X}_{\gamma_n}^{-1}, n\in\bN)$ is tight,
and any cluster point $\upsilon$ satisfies the conditions of
Def.~\ref{def-inv} with $\Lambda(\omega) = \upsilon p_0^{-1}$.
Prop.~\ref{prop:cluster} is proven.
\section{Proof of Theorem~\ref{the:CV}}
\label{sec-prf-CV}
\subsection{Technical lemmas}
\label{sec:technical-lemmas}
\begin{lemma}
\label{lem:cpctLevy}
Given a family $\{ K_j , j\in\bN \}$ of compact sets of $E$, the
set
\[
U\eqdef\{ \nu \in \cM(E) \, : \, \forall j \in \bN,
\nu(K_j) \geq 1 - 2^{-j} \}
\]
is a compact set of $\cM(E)$.
\end{lemma}
\begin{proof}
The set $U$ is tight, hence relatively compact by Prokhorov's theorem. It is moreover
closed. Indeed, let $(\nu_n, n\in\bN)$ be a sequence in $U$ s.t.
$\nu_n\Rightarrow \nu$. Then, for all $j\in \bN$,
$\nu(K_j) \geq \limsup_n \nu_n(K_j) \geq 1 - 2^{-j}$ since
$K_j$ is closed.
\end{proof}
\begin{lemma}
\label{lem:01}
Let $X$ be a real random variable such that $X \leq 1$ with probability one,
and $\bE X \geq 1 - \varepsilon$ for some $\varepsilon \geq 0$. Then
$\bP[ X \geq 1 - \sqrt{\varepsilon}] \geq 1 - \sqrt{\varepsilon}$.
\end{lemma}
\begin{proof}
$1-\varepsilon \leq \bE X = \bE X \mathbbm 1_{X < 1-\sqrt{\varepsilon}}
+ \bE X \mathbbm 1_{X \geq 1-\sqrt{\varepsilon}} \leq
(1-\sqrt{\varepsilon})(1-\bP[X\geq 1 - \sqrt{\varepsilon}]) +
\bP[X\geq 1 - \sqrt{\varepsilon}]$. Rearranging yields
$\sqrt{\varepsilon}\,\bP[X\geq 1 - \sqrt{\varepsilon}]\geq \sqrt{\varepsilon}-\varepsilon$, and the result follows.
\end{proof}
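Lemma~\ref{lem:01} can also be sanity-checked numerically on two-point laws, for which both sides of the bound are explicit; the two-point family below is our own choice of test case, not part of the text.

```python
import math

def lemma01_holds(p, c, tol=1e-12):
    """Check Lemma lem:01 for the two-point law X = 1 w.p. (1 - p) and
    X = c w.p. p, where c <= 1.  We take eps := 1 - E[X] = p (1 - c),
    the smallest admissible epsilon, and verify
    P[X >= 1 - sqrt(eps)] >= 1 - sqrt(eps)."""
    eps = p * (1.0 - c)
    s = math.sqrt(eps)
    # P[X >= 1 - s]: the atom at 1 always qualifies; the atom at c
    # qualifies iff c >= 1 - s.
    prob = 1.0 if c >= 1.0 - s else 1.0 - p
    return prob >= 1.0 - s - tol
```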
For any $\mathfrak{m}\in\cM(\cM(E))$, we denote by $e(\mathfrak{m})$ the probability measure in $\cM(E)$ defined,
for every $f\in C_b(E)$, by
$$
e(\mathfrak{m})(f) \eqdef \int\nu(f)\,\mathfrak{m}(d\nu)\,.
$$
Otherwise stated, $e(\mathfrak{m})(f) = \mathfrak{m}(\cT_f)$ where $\cT_f:\nu\mapsto \nu(f)$.
\begin{lemma}
\label{lem:espTight}
Let ${{\mathcal L}}$ be a subset of $\cM(\cM(E))$. If
$\{e(\mathfrak m)\,:\,\mathfrak m\in{{\mathcal L}}\}$ is tight, then ${{\mathcal L}}$ is tight.
\end{lemma}
\begin{proof}
Let $\varepsilon>0$ and choose any integer $k$ s.t. $2^{-k+1}\leq\varepsilon$.
For all $j\in\bN$, choose a compact set $K_j\subset E$ s.t. for all $\mathfrak m\in{{\mathcal L}}$,
$e(\mathfrak m)(K_j) > 1-2^{-2j}\,.$ Define $U$ as the set of measures $\nu\in\cM(E)$
s.t. for all $j\geq k$, $\nu(K_j)\geq 1-2^{-j}$. By Lemma~\ref{lem:cpctLevy}, $U$ is compact.
For all $\mathfrak m\in{{\mathcal L}}$, the union bound implies that
\begin{align*}
\mathfrak m(E\backslash U) &\leq \sum_{j=k}^\infty \mathfrak m\{\nu:\nu(K_j)<1-2^{-j}\}
\end{align*}
By Lemma~\ref{lem:01}, $\mathfrak m\{\nu:\nu(K_j)\geq 1-2^{-j}\}\geq 1- 2^{-j}$. Therefore,
$
\mathfrak m(E\backslash U) \leq \sum_{j=k}^\infty 2^{-j} = 2^{-k+1}\leq \varepsilon\,.
$
This proves that ${{\mathcal L}}$ is tight.
\end{proof}
\begin{lemma}
\label{lem:CVe}
Let $(\mathfrak m_n,n\in \bN)$ be a sequence on $\cM(\cM(E))$,
and consider $\bar{\mathfrak{m}}\in \cM(\cM(E))$.
If $\mathfrak{m}_n\Rightarrow \bar{\mathfrak m}$, then
$e(\mathfrak{m}_n)\Rightarrow e(\bar{\mathfrak m})$.
\end{lemma}
\begin{proof}
For any $f\in C_b(E)$, $\cT_f\in C_b(\cM(E))$. Thus,
$\mathfrak{m}_n(\cT_f)\to \bar{\mathfrak m}(\cT_f)$.
\end{proof}
When a sequence $(\mm_n, n\in\bN)$ of $\cM(\cM(E))$ converges narrowly to
$\mm\in \cM(\cM(E))$, it follows from the above proof that $\mm_n\cT_f^{-1}
\Rightarrow \mm\cT_f^{-1}$ for all bounded continuous $f$. The purpose of the
next lemma is to extend this result to the case where $f$ is not necessarily
bounded, but instead, satisfies some uniform integrability condition. For any
vector-valued function $f$, we use the notation $\|f\|\eqdef\|f(\cdot)\|$.
\begin{lemma}
\label{lem:UIf}
Let $f\in C(E,\bR^{N'})$ where $N'\geq 1$ is an integer.
Define the mapping $\cT_f:\cM(E)\to\bR^{N'}$ by $\cT_f(\nu) \eqdef \nu(f)$ if
$\nu(\|f\|)<\infty$ and $\cT_f(\nu)\eqdef 0$ otherwise.
Let $(\mm_n,n\in \bN)$ be a sequence on $\cM(\cM(E))$ and let $\mm\in\cM(\cM(E))$. Assume that $\mm_n\Rightarrow \mm$
and
\begin{equation}
\lim_{K\to\infty}\sup_n e(\mm_n)(\|f\|\mathbbm 1_{\|f\|>K})=0\,.\label{eq:UI1}
\end{equation}
Then, $\nu(\|f\|)<\infty$ for $\mm$-almost every $\nu$, and $\mm_n\cT_f^{-1}\Rightarrow \mm\cT_f^{-1}$.
\end{lemma}
\begin{proof}
By Eq.~(\ref{eq:UI1}), $e(\mm)(\|f\|)<\infty$.
This implies that $\nu(\|f\|)<\infty$ for $\mm$-almost every $\nu$.
Choose $h\in C_b(\bR^{N'})$ s.t. $h$ is $L$-Lipschitz continuous.
We must prove that $\mm_n\cT_f^{-1}(h)\to \mm\cT_f^{-1}(h)$.
By the above remark, $\mm\cT_f^{-1}(h) = \int h(\nu(f))d\mm(\nu)$, and by Eq.~(\ref{eq:UI1}),
$\mm_n\cT_f^{-1}(h) = \int h(\nu(f))d\mm_n(\nu)$. Choose $\varepsilon>0$.
By Eq.~(\ref{eq:UI1}), there exists $K_0>0$ s.t. for all $K>K_0$,
$\sup_ne(\mm_n)(\|f\|\mathbbm 1_{\|f\|>K})<\varepsilon$.
For every such $K$, define the bounded function $f_K\in C( E,\bR^{N'})$ by $f_K(x) = f(x) (1\wedge K/\|f(x)\|)$.
For all $K>K_0$, and for all $n\in \bN$,
\begin{align*}
|\mm_n\cT_f^{-1}(h) - \mm_n\cT_{f_K}^{-1}(h)| & \leq \int |h(\nu(f))-h(\nu(f_K))|d\mm_n(\nu)\\
&\leq L\,\int \nu(\|f-f_K\|)d\mm_n(\nu)\\
&\leq L\,\int \nu(\|f\|\mathbbm 1_{\|f\|>K})d\mm_n(\nu)\leq L\varepsilon\,.
\end{align*}
By continuity of $\cT_{f_K}$, it holds that $ \mm_n\cT_{f_K}^{-1}(h)\to \mm\cT_{f_K}^{-1}(h)$.
Therefore, for every $K>K_0$, $\limsup_n |\mm_n\cT_f^{-1}(h) - \mm\cT_{f_K}^{-1}(h)|
\leq L\varepsilon\,.$
As $\nu(\|f\|)<\infty$ for $\mm$-almost every $\nu$, the dominated convergence theorem implies that $\nu(f_K)\to\nu(f)$ as $K\to\infty$,
$\mm$-a.e. As $h$ is bounded and continuous, a second application of the dominated convergence theorem
implies that $\int h(\nu(f_K))d\mm(\nu)\to\int h(\nu(f))d\mm(\nu)$, which reads $ \mm\cT_{f_K}^{-1}(h)\to \mm\cT_{f}^{-1}(h)$.
Thus, $\limsup_n |\mm_n\cT_{f}^{-1}(h) - \mm\cT_{f}^{-1}(h)| \leq L\varepsilon\,.$
As a consequence, $\mm_n\cT_{f}^{-1}(h) \to \mm\cT_{f}^{-1}(h)$ as $n\to\infty$, which completes the proof.
\end{proof}
\subsection{Narrow Cluster Points of the Empirical Measures}
Let $P:E\times \mcB(E)\to [0,1]$ be a probability transition kernel.
For $\nu\in \cM(E)$, we denote by
$\bP^{\nu,P}$ the probability on $(\Omega,\mcF)$
such that $X$ is a homogeneous Markov chain with initial distribution $\nu$ and transition kernel $P$.
For every $n\in\bN$, we define the measurable mapping $\Lambda_n:\Omega\to\cM(E)$ as
\begin{equation}
\Lambda_n(x) \eqdef \frac 1{n+1}\sum_{k=0}^n\delta_{x_k}\label{eq:Lambdan}
\end{equation}
for all $x=(x_k:k\in\bN)$. Note that
$$
\bE^{\nu,P}\Lambda_n = \frac 1{n+1} \sum_{k=0}^{n}\nu P^k\,,
$$
where $\bE^{\nu,P}\Lambda_n = e(\bP^{\nu,P}\Lambda_n^{-1})$, and $P^k$ stands for the iterated kernel, recursively defined by
$
P^k(x,\cdot) = \int P(x,dy)P^{k-1}(y,\cdot)
$ and $P^0(x,\cdot)=\delta_x$.
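For intuition on the occupation measures~\eqref{eq:Lambdan}, one can simulate $\Lambda_n(f)$ for a toy Feller chain. The AR(1) kernel below is a hypothetical example chosen for illustration only; its unique invariant law is $\mathcal N(0,\sigma^2/(1-\rho^2))$, so $\Lambda_n(f)$ should approach the corresponding invariant moment, in line with Prop.~\ref{prop:feller}.

```python
import numpy as np

def empirical_measure_moment(f, rho=0.5, sigma=1.0, n=200_000, x0=0.0, seed=0):
    """Return Lambda_n(f) = (n+1)^{-1} sum_{k=0}^n f(x_k) for the AR(1)
    chain x_{k+1} = rho x_k + sigma xi_{k+1}, xi iid N(0, 1), whose
    invariant law is N(0, sigma^2 / (1 - rho^2))."""
    rng = np.random.default_rng(seed)
    x = np.empty(n + 1)
    x[0] = x0
    noise = rng.standard_normal(n)
    for k in range(n):
        x[k + 1] = rho * x[k] + sigma * noise[k]
    return float(f(x).mean())
```

With $f(x)=x^2$, $\rho=0.5$ and $\sigma=1$, the estimate is close to the invariant second moment $4/3$.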
We recall that $\mcI(P)$ represents the subset of $\cM(\cM(E))$ formed by the measures
whose support is included in $\cI(P)$.
\begin{proposition}
\label{prop:feller}
Let $P:E\times \mcB(E)\to [0,1]$ be a Feller probability transition kernel. Let $\nu\in\cM(E)$.
\begin{enumerate}
\item Any cluster point of $\{\bE^{\nu,P}\Lambda_n \,,\,n\in\bN \}$ is an element of
$\cI(P)$.
\item Any cluster point of
$\{\bP^{\nu,P}\Lambda_n^{-1}\,,\,n\in\bN \}$ is an element of $\mcI(P)$.
\end{enumerate}
\end{proposition}
\begin{proof}
We omit the superscript $^{\nu,P}$.
For all $f\in C_b(E)$,
$\bE\Lambda_n(Pf) -\bE\Lambda_n(f) \to 0$. As $P$ is Feller, any cluster point $\pi$ of
$\{\bE\Lambda_n\,,\,n\in\bN\}$ satisfies $\pi(P f)=\pi(f)$.
This proves the first point.
For every $f\in C_b(E)$ and $x\in \Omega$, consider the decomposition:
\begin{align*}
\Lambda_{n}(x)(P f) - \Lambda_{n}(x)( f)
&= \frac {1}{n+1}\sum_{k=0}^{n-1}(Pf(x_k)-f(x_{k+1}))+\frac{Pf(x_n)-f(x_0)}{n+1}\,.
\end{align*}
Using that $f$ is bounded, Doob's martingale convergence theorem implies that the sequence
$
\Bigl( \sum_{k=0}^{n-1}(k+1)^{-1}(Pf(X_k)-f(X_{k+1})) \Bigr)_n
$
converges a.s. when $n$ tends to infinity. By Kronecker's lemma, we deduce that
$\frac {1}{n+1}\sum_{k=0}^{n-1}(Pf(X_k)-f(X_{k+1}))$ tends a.s. to zero. Hence,
\begin{equation}
\label{eq:mtg}
\Lambda_{n}(P f) - \Lambda_{n}( f) \to 0\ \text{a.s.}
\end{equation}
Now consider a subsequence
$(\Lambda_{\varphi_n})$ which converges in distribution to some
r.v. $\Lambda$ as $n$ tends to infinity.
For a fixed $f\in C_b(E)$, the mapping
$\nu \mapsto (\nu(f),\nu(Pf))$ from $\cM(E)$ to $\bR^2$ is continuous.
By the continuous mapping theorem, $\Lambda_{\varphi_n}(f)-\Lambda_{\varphi_n}(Pf)$ converges in distribution to
$\Lambda(f)-\Lambda(Pf)$. By~(\ref{eq:mtg}), it follows that $\Lambda(f)-\Lambda(Pf)=0$
on some event $\cE_f\in \mcF$ of probability one.
Denote by $C_\kappa(E)\subset C_b(E)$ the set of continuous real-valued functions having a compact support,
and let $C_\kappa(E)$ be equipped with the uniform norm $\|\cdot\|_\infty$.
Introduce a dense denumerable subset $S$ of $C_\kappa(E)$. On the
probability-one event $\cE=\cap_{f\in S}\cE_f$, it holds that for all $f\in S$,
$\Lambda(f)=\Lambda P(f)$.
Now consider $g\in C_\kappa(E)$ and let $\varepsilon>0$. Choose $f\in S$ such that $\|f-g\|_\infty\leq \varepsilon$.
Then, on $\cE$, $|\Lambda(g) -\Lambda P(g)| \leq |\Lambda(f)-\Lambda(g)| + |\Lambda P(f)-\Lambda P(g)|\leq 2\varepsilon$.
As $\varepsilon>0$ is arbitrary, $\Lambda(g) -\Lambda P(g) =0$ on $\cE$ for every $g\in C_\kappa(E)$.
Hence, on the probability-one event $\cE$, one has $\Lambda =\Lambda P$.
\end{proof}
\subsection{Tightness of the Empirical Measures}
\begin{proposition}
\label{prop:tight}
Let $\cP$ be a family of transition kernels on $E$. Let $V:E\to[0,+\infty)$,
$\psi:E\to[0,+\infty)$ be measurable. Let $\alpha:\cP\to(0,+\infty)$ and
$\beta:\cP\to\bR$.
Assume that $\sup_{P\in\cP}\frac{\beta(P)}{\alpha(P)}<\infty$ and
$\psi(x) \to \infty$ as $\| x \| \to\infty$. Assume that for every $P\in\cP$,
$$
P V\leq V-\alpha(P)\psi+\beta(P)\,.
$$
Then, the following holds.
\begin{enumerate}[i)]
\item \label{it:itight} The family $\bigcup_{P\in\cP}\cI(P)$ is tight. Moreover, $\sup_{\pi\in\cI(\cP)} \pi(\psi) < +\infty\,.$
\item \label{it:mtight} For every $\nu\in\cM(E)$ s.t. $\nu(V)<\infty$, every $P\in \cP$,
$\{\bE^{\nu,P}\Lambda_n\,,\,n\in\bN \}$ is tight. Moreover, $\sup_{n\in\bN}\bE^{\nu,P}\Lambda_n(\psi)<\infty\,.$
\end{enumerate}
\end{proposition}
\begin{proof}
For each $P\in\cP$, $P V$ is everywhere finite by assumption. Moreover,
$$
\sum_{k=0}^n P^{k+1}V \leq \sum_{k=0}^n P^{k}V -\alpha(P)\sum_{k=0}^n P^k \psi +(n+1) \beta(P)\,.
$$
Using that $V\geq 0$ and $\alpha(P)>0$,
$$
\frac{1}{n+1}\sum_{k=0}^n P^k \psi \leq \frac{V}{\alpha(P)(n+1)} +c\,,
$$
where $c\eqdef\sup_{P\in\cP}\beta(P)/\alpha(P)$ is finite. For any $M>0$,
\begin{align}
\frac{1}{n+1}\sum_{k=0}^n P^k (\psi\wedge M) &\leq
\left(\frac{1}{n+1}\sum_{k=0}^n P^k \psi\right)\wedge M \nonumber \\ &
\leq \left(\frac{V}{\alpha(P)(n+1)} +c\right)\wedge M\,. \label{eq:wedge}
\end{align}
Set $\pi\in\cI(\cP)$, and consider
$P\in\cP$ such that $\pi=\pi P$. Inequality~(\ref{eq:wedge}) implies that
for every $n$,
$$
\pi (\psi\wedge M) \leq
\pi\left(\left(\frac{V}{\alpha(P)(n+1)} +c\right)\wedge M\right)\,.
$$
By Lebesgue's dominated convergence theorem, $\pi (\psi\wedge M) \leq c$.
Letting $M\to\infty$ yields $\pi(\psi)\leq c$. The tightness of $\cI(\cP) $
follows from the convergence of $\psi(x)$ to $\infty$ as $\|x\|\to\infty$.
Setting $M=+\infty$ in~(\ref{eq:wedge}), and integrating w.r.t. $\nu$, we obtain
$$
\bE^{\nu,P}\Lambda_n(\psi) \leq \frac{\nu(V)}{(n+1)\alpha(P)} +c\,,
$$
which proves the second point.
\end{proof}
\begin{proposition}
\label{prop:tight2}
We posit the assumptions of Prop.~\ref{prop:tight}. Then,
\begin{enumerate}
\item The family $\mcI(\cP)\eqdef \bigcup_{P\in\cP}\mcI(P)$ is tight;
\item For every $\nu\in\cM(E)$ s.t. $\nu(V)<\infty$ and every $P\in\cP$, $\{\bP^{\nu,P}\Lambda_n^{-1}\,,\,n\in\bN\}$ is tight.
\end{enumerate}
\end{proposition}
\begin{proof}
For every $\mathfrak{m}\in\mcI(\cP)$, it is easy to see that
$e(\mathfrak{m})\in\cI(\cP)$. Thus,
$\{e(\mathfrak{m}):\mathfrak{m}\in \mcI(\cP)\}$ is tight by
Prop.~\ref{prop:tight}. By Lemma~\ref{lem:espTight}, $\mcI(\cP)$ is tight.
The second point follows from the equality
$\bE^{\nu,P}\Lambda_n=e(\bP^{\nu,P}\Lambda_n^{-1})$ along with Prop.~\ref{prop:tight} and
Lemma~\ref{lem:espTight}.
\end{proof}
\subsection{Main Proof}
By continuity of $h_\gamma(s,\cdot)$ for every $s\in \Xi$, $\gamma\in (0,\gamma_0)$, the transition kernel
$P_\gamma$ is Feller.
By Prop.~\ref{prop:tight} and Eq.~\eqref{eq:dVP}, we have
$\sup_n \bE^{\nu,\gamma}\Lambda_n(\varphi\circ f)<\infty$ which, by de la Vall\'ee-Poussin's criterion for uniform integrability, implies
\begin{equation}
\lim_{K\to\infty}\sup_n \bE^{\nu,\gamma}\Lambda_n(\|f\|\mathbbm 1_{\|f\|>K})=0\,.\label{eq:UI-ELambda}
\end{equation}
In particular, the quantity $\bE^{\nu,\gamma}\Lambda_n(f)=\bE^{\nu,\gamma}(F_n)$ is well-defined.
We now prove the statement (\ref{eq:CVSE}).
By contradiction, assume that for some $\delta>0$, there exists a positive sequence $\gamma_j\to 0$, such that for all $j\in\bN$,
$
\limsup_{n\to\infty}\ d\left(\bE^{\nu,\gamma_j}\Lambda_n(f)\,,\cS_f\right)>\delta\,.
$
For every $j$, there exists an increasing sequence of integers
$(\varphi_n^j,n\in\bN)$ converging to $+\infty$ s.t.
\begin{equation}
\forall n,\ d\left(\bE^{\nu,\gamma_j}\Lambda_{\varphi_n^j}(f)\,,\cS_f\right)>\delta\,.\label{eq:contradiction1}
\end{equation}
By Prop.~\ref{prop:tight}, the sequence
$(\bE^{\nu,\gamma_j}\Lambda_{\varphi_n^j},n\in\bN)$ is tight. By Prokhorov's
theorem and Prop.~\ref{prop:feller}, there exists $\pi_j\in \cI(P_{\gamma_j})$
such that, as $n$ tends to infinity, $\bE^{\nu,\gamma_j}\Lambda_{\varphi_n^j}\Rightarrow \pi_j$ along some subsequence.
By the uniform integrability condition~(\ref{eq:UI-ELambda}), $\pi_j(\|f\|)<\infty$ and
$\bE^{\nu,\gamma_j}\Lambda_{\varphi_n^j}(f)\to \pi_j(f)$ as~$n$ tends to infinity, along the latter subsequence.
By Eq. (\ref{eq:contradiction1}), for all $j\in \bN$,
$d(\pi_j(f),\cS_f)\geq\delta\,.$
By Prop.~\ref{prop:tight}, $\sup_{\pi\in\cI(\cP)} \pi(\psi) < +\infty\,.$ Since $\varphi\circ f\leq M(1+\psi)$,
de la Vall\'ee-Poussin's criterion again implies that
\begin{equation}
\label{eq:UI-j}
\lim_{K\to\infty}\sup_{\pi\in \cI(\cP)} \pi(\|f\|\mathbbm 1_{\|f\|>K}) = 0\,.
\end{equation}
Also by Prop.~\ref{prop:tight}, the sequence $(\pi_j)$ is tight.
Thus $\pi_j\Rightarrow \pi$ along some subsequence, for some measure $\pi$ which, by Prop.~\ref{prop:cluster},
is invariant for $\Phi_{{\mathsf H}}$.
The uniform integrability condition~\eqref{eq:UI-j} implies that
$\pi(\|f\|)<\infty$ (hence, the set $\cS_f$ is non-empty)
and $\pi_j(f)\to \pi(f)$ as $j$ tends to infinity, along the above subsequence.
This shows that $d(\pi(f),\cS_f)\geq\delta$, which is absurd since $\pi(f)\in\cS_f$.
The statement (\ref{eq:CVSE}) holds true (and in particular, $\cS_f$ must be non-empty).
The proof of the statement~(\ref{eq:support}) follows the same lines, replacing $f$ with the
function $\mathbbm 1_{{{\mathcal U}_\epsilon^c}}$. We briefly explain how the proof adapts, without repeating all the arguments.
In this case, $\cS_{\mathbbm 1_{{{\mathcal U}_\epsilon^c}}}$ is the singleton $\{0\}$, and Equation~(\ref{eq:contradiction1})
reads $\bE^{\nu,\gamma_j}\Lambda_{\varphi_n^j}({{\mathcal U}_\epsilon^c})>\delta$.
By the Portmanteau theorem,
$\limsup_n\bE^{\nu,\gamma_j}\Lambda_{\varphi_n^j}({{\mathcal U}_\epsilon^c})\leq \pi_j({{\mathcal U}_\epsilon^c})$
where the $\limsup$ is taken along some subsequence.
The contradiction follows from the fact that $\limsup \pi_j({{\mathcal U}_\epsilon^c})\leq \pi({\overline{{\mathcal U}_\epsilon^c}})=0$
(where the $\limsup$ is again taken along the relevant subsequence).
We prove the statement~(\ref{eq:CVSf}). Assume by contradiction that for some (other) sequence $\gamma_j\to 0$,
$\limsup_{n\to\infty}\ \bP^{\nu,\gamma_j}\left(d\left(\Lambda_{n}(f)\,,\cS_f\right)\geq \varepsilon\right)>\delta\,.$
For every $j$, there exists a sequence $(\varphi_n^j,n\in \bN)$ s.t.
\begin{gather}
\forall n,\ \bP^{\nu,\gamma_j}\left(d\left(\Lambda_{\varphi_n^j}(f)\,,\cS_f\right)\geq \varepsilon\right)>\delta\,.
\label{eq:contradition2}
\end{gather}
By Prop.~\ref{prop:tight2}, $(\bP^{\nu,\gamma_j}\Lambda_{\varphi_n^j}^{-1},n\in\bN)$ is tight, so one can extract a further subsequence
(which we still denote by $(\varphi_n^j)$ for simplicity) s.t. $\bP^{\nu,\gamma_j}\Lambda_{\varphi_n^j}^{-1}$ converges narrowly
to a measure ${\mathfrak m}_j$ as $n$ tends to infinity, which, by Prop.~\ref{prop:feller}, satisfies $\mm_j\in \mcI(P_{\gamma_j})$.
Noting that $e(\bP^{\nu,\gamma_j}\Lambda_{\varphi_n^j}^{-1})=\bE^{\nu,\gamma_j}\Lambda_{\varphi_n^j}$ and recalling Eq.~(\ref{eq:UI-ELambda}),
Lemma~\ref{lem:UIf} implies that $\nu'(\|f\|)<\infty$ for $\mm_j$-almost every $\nu'$, and
$\bP^{\nu,\gamma_j}\Lambda_{\varphi_n^j}^{-1}\cT_f^{-1}\Rightarrow \mm_j\cT_f^{-1}$, where we recall that
$\cT_f(\nu') \eqdef \nu'(f)$ for all $\nu'$ s.t. $\nu'(\|f\|)<\infty$. As $(\cS_f)_\varepsilon^c$ is a closed set,
\begin{align*}
\mm_j\cT_f^{-1}((\cS_f)_\varepsilon^c) &\geq \limsup_n \bP^{\nu,\gamma_j}\Lambda_{\varphi_n^j}^{-1}\cT_f^{-1}((\cS_f)_\varepsilon^c) \\
&= \limsup_n \bP^{\nu,\gamma_j}\left(d\left(\Lambda_{\varphi_n^j}(f)\,,\cS_f\right)\geq \varepsilon\right)>\delta\,.
\end{align*}
By Prop.~\ref{prop:tight}, $(\mm_j)$ is tight, and one can extract a subsequence (still denoted by $(\mm_j)$)
along which $\mm_j\Rightarrow \mm$ for some measure $\mm$ which, by Prop.~\ref{prop:cluster}, belongs to $\mcI(\Phi_{{\mathsf H}})$.
For every $j$, $e(\mm_j)\in \cI(P_{\gamma_j})$. By the uniform integrability condition (\ref{eq:UI-j}), one can
apply Lemma~\ref{lem:UIf} to the sequence $(\mm_j)$. We deduce that $\nu'(\|f\|)<\infty$ for $\mm$-almost every $\nu'$
and $\mm_j\cT_f^{-1}\Rightarrow \mm\cT_f^{-1}$. In particular,
$$
\mm\cT_f^{-1}((\cS_f)_\varepsilon^c)\geq \limsup_j \mm_j\cT_f^{-1}((\cS_f)_\varepsilon^c) >\delta\,.
$$
Since $\mm\in \mcI(\Phi_{{\mathsf H}})$, it holds that $\mm\cT_f^{-1}((\cS_f)_\varepsilon^c)=0$, hence a contradiction.
\section{Proofs of Theorems~\ref{cvg-CVSI} and~\ref{cvg-XY}}
\label{sec-prf-asymptotics}
\subsection{Proof of Theorem~\ref{cvg-CVSI}}
In this proof, we set $L=L_{\aver(\Phi)}$ to simplify the notation.
It is straightforward to show that the identity mapping $f(x)=x$ satisfies the
hypotheses of Th.~\ref{the:CV} with $\varphi = \psi$. Hence, it is sufficient to prove that
$\cS_f$ is a subset of $\overline{\co}(L)$, the closed convex hull of $L$. Choose $q\in \cS_I$ and let
$q=\int xd\pi(x)$ for some $\pi\in\cI(\Phi)$ admitting a first order
moment. There exists a $\Theta$-invariant measure
$\upsilon\in \cM(C(\bR_+,E))$ s.t. $\support(\upsilon)\subset\Phi(E)$ and $\upsilon p_0^{-1}=\pi$.
We remark that for all $t>0$,
\begin{equation}
q = \upsilon(p_0) = \upsilon(p_t) = \upsilon(p_{t}\circ \aver)\,,\label{eq:pnu}
\end{equation}
where the second identity is due to the shift-invariance of $\upsilon$, and the last one uses
Fubini's theorem.
Again by the shift-invariance of $\upsilon$, the family $\{p_t,t>0\}$ is uniformly integrable w.r.t. $\upsilon$.
By Tonelli's theorem, $\sup_{t>0}\upsilon(\|p_t\circ\aver \|\mathbbm 1_S)\leq \sup_{t>0} \upsilon(\|p_t\|\mathbbm 1_S)$
for every $S\in\mcB(C(\bR_+,E))$. Hence, the family $\{p_t\circ \aver,t>0\}$ is $\upsilon$-uniformly integrable as well.
In particular, $\{p_t\circ \aver,t>0\}$ is tight in
$(C(\bR_+,E),\mcB(C(\bR_+,E)),\upsilon)$.
By Prokhorov's theorem, there exists
a sequence $t_n\to\infty$ and a measurable function
$g:C(\bR_+,E)\to E$ such that $p_{t_n}\circ \aver$ converges in distribution to $g$
as $n\to\infty$. By uniform integrability, $\upsilon(p_{t_n}\circ \aver)\to \upsilon(g)$.
Equation~(\ref{eq:pnu}) finally implies that
$$
q=\upsilon(g)\,.
$$
In order to complete the proof, it is sufficient to show that $g({\mathsf x})\in \overline L$ for $\upsilon$-almost every ${\mathsf x}$:
indeed, $\upsilon(g)$ then belongs to $\overline{\co}(\overline L)$, which coincides with $\overline{\co}(L)$.
Set $\varepsilon>0$ and $\delta>0$. By the tightness of the family $(p_{t_n}\circ \aver,n\in \bN)$, choose a compact set
$K$ such that $\upsilon(p_{t_n}\circ \aver)^{-1}(K^c)\leq \delta$ for all $n$. As $\overline{L_\varepsilon}^c$ is an open set,
one has
$$
\upsilon g^{-1}(\overline{L_\varepsilon}^c)\leq \liminf_n \upsilon (p_{t_n}\circ\aver)^{-1}(\overline{L_\varepsilon}^c)\leq \liminf_n \upsilon (p_{t_n}\circ\aver)^{-1}(\overline{L_\varepsilon}^c\cap K) + \delta\,.
$$
Let ${\mathsf x}\in \Phi(E)$ be fixed. By contradiction, suppose that $\mathbbm 1_{\overline{L_\varepsilon}^c\cap K}(p_{t_n}(\aver({\mathsf x})))$ does
not converge to zero. Then, $p_{t_n}(\aver({\mathsf x}))\in \overline{L_\varepsilon}^c\cap K$ for every $n$ along some subsequence.
As $K$ is compact, one can extract a further subsequence, still denoted by $t_{n}$, s.t. $p_{t_{n}}(\aver({\mathsf x}))$ converges.
The corresponding limit must belong to the closed set $L_\varepsilon^c$,
but must also belong to $L$ by definition of ${\mathsf x}$.
This proves that $\mathbbm 1_{\overline{L_\varepsilon}^c\cap K}(p_{t_n}(\aver({\mathsf x})))$ converges to zero for all ${\mathsf x}\in \Phi(E)$.
As $\support(\upsilon)\subset \Phi(E)$,
$\mathbbm 1_{\overline{L_\varepsilon}^c\cap K}(p_{t_n}\circ \aver)$ converges to zero $\upsilon$-a.s.
By the dominated convergence theorem, we obtain that $\upsilon g^{-1}(\overline{L_\varepsilon}^c)\leq \delta$. Letting $\delta\to 0$,
we obtain that $\upsilon g^{-1}(\overline{L_\varepsilon}^c)=0$. Letting $\varepsilon\to 0$ along a countable sequence,
it follows that $g({\mathsf x})\in \overline L$ for $\upsilon$-almost every ${\mathsf x}$. The proof is complete.
\subsection{Proof of Theorem~\ref{cvg-XY}}
Recall the definition ${\mathcal U} \eqdef \bigcup_{\pi \in \cI(\Phi)} \support(\pi)$.
By Th.~\ref{the:CV}, for all $\varepsilon > 0$,
\[
\limsup_{n\to\infty} \bE^{\nu,\gamma} \Lambda_n( {\mathcal U}_\varepsilon^c ) \xrightarrow[\gamma\to 0]{} 0 ,
\]
where $\Lambda_n$ is the random measure given by~(\ref{eq:Lambdan}).
By Theorem~\ref{poincare}, $\support(\pi) \subset \text{BC}_\Phi$ for each
$\pi \in \cI(\Phi)$. Thus, ${\mathcal U}_\varepsilon\subset(\text{BC}_\Phi)_\varepsilon$.
Hence, $\limsup_n \bE^{\nu,\gamma}\Lambda_n( ( (\text{BC}_\Phi)_\varepsilon)^c )
\to 0$ as $\gamma\to 0$. This completes the proof.
\rev{
\section{Applications}
\label{sec:applis}
In this section, we return to the Examples \ref{ex:optim} and \ref{ex:fluid} of Section~\ref{sec:examples}.
\subsection{Non-Convex Optimization}
Consider the algorithm~\eqref{eq:prox-gradient} to solve
problem~\eqref{eq:pb-nonCVX} where $\ell : \Xi \times E \to \bR$, $r :
E \to \bR$ and $\xi$ is a random variable over a probability space
$(\Omega,\mcF, \bP)$ with values in the measurable space $(\Xi,
{\mcG})$ and with distribution $\mu$. Assume that $\ell(\xi,\,.\,)$ is
continuously differentiable for every $\xi \in \Xi$, that
$\ell(\,.\,,x)$ is $\mu$-integrable for every $x \in E$ and that $r$
is a convex and lower semicontinuous function. We assume that for every compact subset $K$ of $E$,
there exists $\epsilon_K > 0$ s.t.
\begin{equation}
\label{eq:moment-l}
\sup_{x \in K} \int \|\nabla \ell(s,x)\|^{1+\epsilon_K} \mu(ds) < \infty\,.
\end{equation}
Define $L(x) \eqdef \bE_\xi(\ell(\xi,x))$. Under Condition (\ref{eq:moment-l}),
it is easy to check that $L$ is differentiable, and that $\nabla L(x) = \int \nabla \ell(s,x) \mu(ds)$.
From now on, we assume moreover that $\nabla L$ is Lipschitz continuous.
Letting $H(s,x)\eqdef -\nabla \ell(s,x) -\partial r(x)$, it holds that $H(\,.\,,x)$ is proper, $\mu$-integrable and usc~\cite{phe-97},
and that the corresponding selection integral ${\mathsf H}(x) \eqdef \int H(s,x)\mu(ds)$ is given by
$$
{\mathsf H}(x) = -\nabla L(x) - \partial r(x)\,.
$$
By \cite[Theorem 3.17, Remark 3.14]{bre-livre73}, for every $a\in E$,
the DI $\dot {\mathsf x}(t)\in {\mathsf H}({\mathsf x}(t))$ admits a unique solution on $[0,+\infty)$
s.t. ${\mathsf x}(0)=a$.
Now consider the iterates $x_n$ given
by~\eqref{eq:prox-gradient}. They satisfy~(\ref{eq:iterative-model})
where
$h_\gamma(s,x) \eqdef \gamma^{-1}(\prox_{\gamma r}(x-\gamma \nabla
\ell(s,x)) - x)$.
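To make the update concrete, the sketch below instantiates $h_\gamma$ for the hypothetical choices $\ell(\xi,x)=\frac12(\ps{a_\xi,x}-b_\xi)^2$ and $r=\lambda\|\cdot\|_1$, for which $\prox_{\gamma r}$ is coordinatewise soft-thresholding; this instance is our own illustration and is not prescribed by the text.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximity operator of t * ||.||_1 (coordinatewise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def h_gamma(x, grad_ell, gamma, lam):
    """h_gamma(s, x) = (prox_{gamma r}(x - gamma grad_ell) - x) / gamma
    for r = lam * ||.||_1, grad_ell being the gradient of ell(s, .) at x."""
    return (soft_threshold(x - gamma * grad_ell, gamma * lam) - x) / gamma

def sgd_prox(A, b, lam=0.1, gamma=0.01, n_iter=20_000, seed=0):
    """Constant-step stochastic proximal gradient for
    min_x E_i [ 0.5 (A[i] @ x - b[i])^2 ] + lam ||x||_1,
    with the index i drawn uniformly at each step."""
    rng = np.random.default_rng(seed)
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        i = rng.integers(len(b))
        grad = (A[i] @ x - b[i]) * A[i]               # gradient of ell(xi_i, .) at x
        x = x + gamma * h_gamma(x, grad, gamma, lam)  # = prox_{gamma r}(x - gamma grad)
    return x
```

On a small problem with a known minimizer, the iterates settle within an $O(\gamma)$ neighborhood of the solution, consistent with the long-run behavior described in Th.~\ref{the:CV}.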
We verify that the map $h_\gamma$ satisfies Assumption (RM). Let us
first recall some known facts about proximity operators. Using
\cite[Prop. 12.29]{bau-com-livre11}, the mapping
$x\mapsto \gamma^{-1}(x-\prox_{\gamma r}(x))$ coincides with the
gradient $\nabla r_\gamma$ of the Moreau envelope
$r_\gamma : x\mapsto \min_y r(y) + \frac{1}{2\gamma}\|y-x\|^2$. By
\cite[Prop. 23.2]{bau-com-livre11}, $\nabla r_\gamma(x)\in \partial r(\prox_{\gamma r}(x))$,
for every $x\in E$. Therefore,
\begin{align}
h_{\gamma}(s,x)
&= -\nabla r_\gamma(x-\gamma \nabla \ell(s,x)) -\nabla\ell(s,x) \label{eq:tmp-b} \\
&\in -\partial r(\prox_{\gamma r}(x-\gamma \nabla \ell(s,x))) -\nabla\ell(s,x) \nonumber \\
&\in -\partial r(x -\gamma h_\gamma(s,x)) -\nabla\ell(s,x) \,. \label{eq:tmp-a}
\end{align}
In order to show that Assumption (RM)-\ref{hyp:RM-cvg}) is satisfied, we need some estimate on $\|h_\gamma(s,x)\|$.
Using Eq.~(\ref{eq:tmp-b}) and the fact that $\nabla r_\gamma$ is $\gamma^{-1}$-Lipschitz continuous
(see \cite[Prop. 12.29]{bau-com-livre11}), we obtain that
\begin{align}
\|h_\gamma(s,x)\|&\leq \| \nabla r_\gamma(x)\| + 2\|\nabla \ell(s,x)\| \nonumber \\
&\leq \|\partial^0 r(x)\|+2\|\nabla \ell(s,x)\| \,,\label{eq:bound-hgamma}
\end{align}
where $\partial^0 r(x)$ denotes the least norm element in $\partial r(x)$ for every $x \in E$,
and where the last inequality is due to \cite[Prop. 23.43]{bau-com-livre11}.
As $\partial^0 r$ is locally bounded and $\partial r$ is usc,
it follows from Eq. (\ref{eq:tmp-a}) that Assumption (RM)-\ref{hyp:RM-cvg}) is satisfied.
The estimate (\ref{eq:bound-hgamma}) also yields Assumption (RM)-\ref{hyp:RM-moments}).
As a conclusion, Assumption (RM) is satisfied.
In particular, the statement of Th.~\ref{th:SA=wAPT} holds.
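The estimate~\eqref{eq:bound-hgamma} can also be checked numerically. The sketch below does so for the hypothetical choice $r = \lambda \|\cdot\|_1$, for which the least norm subgradient $\partial^0 r$ has a simple closed form; the random vectors $g$ stand in for $\nabla \ell(s,x)$.

```python
import numpy as np

rng = np.random.default_rng(1)
lam, gamma, d = 0.7, 0.05, 4

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def least_norm_subgrad(x):
    # minimal norm element of lam*d||.||_1(x): lam*sign(x_i) if x_i != 0, else 0
    return lam * np.sign(x)

bound_holds = True
for _ in range(1000):
    x = rng.normal(size=d)
    g = rng.normal(size=d)  # stands in for the stochastic gradient grad ell(s, x)
    h = (soft_threshold(x - gamma * g, gamma * lam) - x) / gamma
    bound = np.linalg.norm(least_norm_subgrad(x)) + 2.0 * np.linalg.norm(g)
    bound_holds = bound_holds and np.linalg.norm(h) <= bound + 1e-9
```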
\medskip
To show that Assumption (PH) is satisfied, we first recall the Proximal
Polyak-Lojasiewicz (PPL) condition introduced
in~\cite{karimi2016linear}. Assume that $L$ is differentiable with a
$C$-Lipschitz continuous gradient. We say that $L$ and $r$ satisfy the
(PPL) condition with constant $\beta > 0$ if for every $x \in
E$, $$\frac12 D_{L,r}(x,C) \geq \beta \left[(L+r)(x) - \min
(L+r)\right]$$ where $$D_{L,r}(x,C) \eqdef -2 C \min_{y \in E}
\left[\ps{\nabla L(x),y-x} + \frac{C}2 \|y - x\|^2 + r(y) -
r(x)\right].$$ The (PPL) condition helps to prove the convergence of the
(deterministic) proximal gradient algorithm applied to the
(deterministic) problem of minimizing the sum $L+r$. We refer to~\cite{karimi2016linear} for practical
cases where the (PPL) condition is satisfied. In our stochastic setting, we introduce the Stochastic PPL condition (SPPL). We say that $\ell$ and $r$ satisfy the (SPPL) condition if there exists $\beta > 0$ such that for every $x \in E$,
$$\frac12 \int D_{\ell(s,\cdot),r}(x,\frac{1}{\gamma}) \mu(ds) \geq \beta \left[(L+r)(x) - \min (L+r)\right]
$$
for all $\gamma \leq \frac{1}{C}$. Note that (SPPL) is satisfied if for every $s \in \Xi$, $\ell(s,\cdot)$ and $r$ satisfy the (PPL) condition with constant $\beta$. In the sequel, we assume that for every $x \in E$, the random variable $\nabla \ell(\xi,x)$ is square integrable and denote by $W(x)$ its variance.
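As a sanity check, note that when $r \equiv 0$ the inner minimization in $D_{L,r}(x,C)$ is attained at $y = x - \nabla L(x)/C$, so that $D_{L,r}(x,C) = \|\nabla L(x)\|^2$ and (PPL) reduces to the classical Polyak-Lojasiewicz inequality, which a $\mu$-strongly convex function satisfies with $\beta = \mu$. The sketch below verifies this numerically for a hypothetical quadratic $L$; the example is illustrative only.

```python
import numpy as np

rng = np.random.default_rng(2)
M = rng.normal(size=(4, 4))
A = M @ M.T + 0.5 * np.eye(4)        # symmetric positive definite Hessian
beta = np.linalg.eigvalsh(A).min()   # strong convexity constant mu

# L(x) = 0.5 x^T A x has min L = 0 and grad L(x) = A x.  With r = 0, the
# inner minimization in D_{L,r}(x, C) is attained at y = x - grad L(x)/C,
# giving D_{L,r}(x, C) = ||grad L(x)||^2, i.e. the PL inequality.
pl_holds = True
for _ in range(1000):
    x = rng.normal(size=4)
    g = A @ x
    pl_holds = pl_holds and 0.5 * g @ g >= beta * (0.5 * x @ A @ x) - 1e-12
```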
\begin{proposition}
\label{th:PHnoncvx}
Assume that the (SPPL) condition is satisfied, that $\gamma \leq \frac{1}{C}$ and that $$\beta (L(x)+r(x)) - W(x) - \frac{1}4 \|\nabla L(x)\|^2 \longrightarrow_{\|x\| \to +\infty} +\infty.$$ Then (PH) is satisfied.
\end{proposition}
\begin{proof}
Using (sub)differential calculus, it is easy to show that for every $s \in \Xi$ and every $x \in E$, \begin{equation*}
x + \gamma h_\gamma(s,x) = \arg\min_{y \in E} \left[\ps{\nabla \ell(s,x),y-x} + \frac{1}{2\gamma} \|y - x\|^2 + r(y) - r(x)\right].
\end{equation*}
Since $\nabla L$ is $1/\gamma$-Lipschitz continuous,
\begin{align}
\label{eq:PHnoncvx}
(L+r)(x + \gamma h_\gamma(s,x)) &= L(x + \gamma h_\gamma(s,x)) + r(x) + r(x + \gamma h_\gamma(s,x)) - r(x) \nonumber\\
&\leq (L+r)(x) + \ps{\nabla L(x), \gamma h_\gamma(s,x)} + \frac{1}{2\gamma}\|\gamma h_\gamma(s,x)\|^2 \nonumber\\
&\phantom{=} + r(x + \gamma h_\gamma(s,x)) - r(x) \nonumber\\
&\leq (L+r)(x) + \ps{\nabla \ell(s,x), \gamma h_\gamma(s,x)} + \frac{1}{2\gamma}\|\gamma h_\gamma(s,x)\|^2 \nonumber\\
&\phantom{=} + \ps{\nabla L(x) - \nabla \ell(s,x), \gamma h_\gamma(s,x)} + r(x + \gamma h_\gamma(s,x)) - r(x) \nonumber\\
&\leq (L+r)(x) - \frac{\gamma}{2} D_{\ell(s,\cdot),r}(x,1/\gamma) \nonumber\\
&\phantom{=} + \gamma\ps{\nabla \ell(s,x) - \nabla L(x), \nabla \ell(s,x) + \nabla r_\gamma(x - \gamma \nabla \ell(s,x))}
\end{align}
Recall that for every $x,y \in E$,
\begin{align*}
\ps{\nabla r_\gamma(x) - \nabla r_\gamma(y),x-y} & =
\ps{\nabla r_\gamma(x) - \nabla r_\gamma(y),\prox_{\gamma r}(x)-\prox_{\gamma r}(y)} \\
&\phantom{=} + \ps{\nabla r_\gamma(x) - \nabla r_\gamma(y),\gamma \nabla r_\gamma(x) - \gamma \nabla r_\gamma(y)}\\
& \geq \gamma \|\nabla r_\gamma(x) - \nabla r_\gamma(y)\|^2,
\end{align*}
using the monotonicity of $\partial r$. Hence,
\begin{equation*}
\ps{\nabla r_\gamma(x - \gamma \nabla \ell(s,x)) - \nabla r_\gamma(x),\gamma \nabla \ell(s,x)}
\leq -\gamma \|\nabla r_\gamma(x) - \nabla r_\gamma(x - \gamma \nabla \ell(s,x))\|^2.
\end{equation*}
Therefore,
\begin{align*}
&\gamma\ps{\nabla \ell(s,x) - \nabla L(x), \nabla r_\gamma(x - \gamma \nabla \ell(s,x)) - \nabla r_\gamma(x)} \\
\leq & -\gamma \|\nabla r_\gamma(x) - \nabla r_\gamma(x - \gamma \nabla \ell(s,x))\|^2 \\
& + \gamma \|\nabla r_\gamma(x) - \nabla r_\gamma(x - \gamma \nabla \ell(s,x))\|^2 + \frac{\gamma}{4}\|\nabla L(x)\|^2 \\
\leq & \frac{\gamma}{4}\|\nabla L(x)\|^2,
\end{align*}
where we used $\ps{x,y} \leq \|x\|^2 + \frac{1}4 \|y\|^2$.
Plugging into~\eqref{eq:PHnoncvx},
\begin{align*}
(L+r)(x + \gamma h_\gamma(s,x)) &\leq (L+r)(x) - \frac{\gamma}{2} D_{\ell(s,\cdot),r}(x,1/\gamma) \nonumber\\
&\phantom{=} + \gamma\ps{\nabla \ell(s,x) - \nabla L(x), \nabla \ell(s,x)} \nonumber\\
&\phantom{=} + \gamma\ps{\nabla \ell(s,x) - \nabla L(x), \nabla r_\gamma(x - \gamma \nabla \ell(s,x)) - \nabla r_\gamma(x)} \nonumber
\\
&\phantom{=} + \gamma\ps{\nabla \ell(s,x) - \nabla L(x), \nabla r_\gamma(x)} \nonumber\\
&\leq (L+r)(x) - \frac{\gamma}{2} D_{\ell(s,\cdot),r}(x,1/\gamma) \nonumber\\
&\phantom{=} + \gamma\ps{\nabla \ell(s,x) - \nabla L(x), \nabla \ell(s,x)} \nonumber\\
&\phantom{=} + \frac{\gamma}{4}\|\nabla L(x)\|^2 \nonumber
\\
&\phantom{=} + \gamma\ps{\nabla \ell(s,x) - \nabla L(x), \nabla r_\gamma(x)} \nonumber
\end{align*}
Integrating with respect to $\mu$, we obtain
\begin{align*}
\int (L+r)(x + \gamma h_\gamma(s,x)) \mu(ds)&\leq (L+r)(x) - \gamma \beta \left((L+r)(x) - \min(L+r)\right)\\
&\phantom{=} +\gamma W(x) + \frac{\gamma}{4} \|\nabla L(x)\|^2.
\end{align*}
Finally, the condition (PH) is satisfied with $\alpha(\gamma) = \gamma$, $\beta(\gamma) = 0$, $V = L+r - \min(L+r)$ and
$$\psi = \beta V - W - \frac{1}4 \|\nabla L\|^2.$$
\end{proof}
Note that the assumptions of Proposition~\ref{th:PHnoncvx} are satisfied if the (SPPL) condition is satisfied, $L(x) + r(x) \rightarrow_{\|x\| \to +\infty} +\infty$ and the function $x \mapsto \int \|\nabla \ell(s,x)\|^2 \mu(ds)$ is bounded.
\medskip
The condition (FL) is naturally satisfied. Identifying the invariant measures of the DI, we finally obtain a long-run convergence result for the algorithm~\eqref{eq:prox-gradient}. Let $\nu\in\cM(E)$ be s.t. $\nu(L+r)<\infty$, and let $\mZ = \{x \in E \text{ s.t. } 0 \in \nabla L(x) + \partial r(x)\}$. For all
$\varepsilon > 0$,
\begin{equation}
\limsup_{n\to\infty} \frac 1{n+1}\sum_{k=0}^n \bP^{\nu,\gamma}( d(X_k,\mZ)>\varepsilon)\xrightarrow[\gamma\to 0]{}0\,.
\end{equation}
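This ergodic statement can be illustrated numerically. In the hypothetical instance below, $\ell(\xi,x) = \frac12 (a^T x - b)^2$ with $\bE[aa^T] = I$ and $r = \lambda\|\cdot\|_1$, so that the critical set reduces to the single point obtained by soft-thresholding $x_{\mathrm{true}}$; the fraction of iterates found farther than $\varepsilon$ from that point, averaged along the run, is then small for small $\gamma$. All values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
d, lam = 4, 0.2
x_true = np.array([2.0, -1.0, 0.1, 0.0])
# with E[a a^T] = I, the critical set Z reduces to soft_threshold(x_true, lam)
x_star = np.sign(x_true) * np.maximum(np.abs(x_true) - lam, 0.0)

def cesaro_fraction_far(gamma, n_iter, eps):
    # fraction of iterates at distance > eps from Z, averaged along the run
    x, n_far = np.zeros(d), 0
    for _ in range(n_iter):
        a = rng.normal(size=d)
        b = a @ x_true + 0.05 * rng.normal()
        v = x - gamma * a * (a @ x - b)
        x = np.sign(v) * np.maximum(np.abs(v) - gamma * lam, 0.0)
        n_far += np.linalg.norm(x - x_star) > eps
    return n_far / n_iter

frac_far = cesaro_fraction_far(gamma=0.01, n_iter=100000, eps=0.3)
```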
\subsection{Fluid Limit of a System of Parallel Queues}
We now apply the results of this paper to the dynamical system described in
Example~\ref{ex:fluid} above. For a given $\gamma > 0$, the transition
kernel $P_\gamma$ of the Markov chain $(x_n)$ whose entries are given by
Eq.~\eqref{eq:queue} is defined on $\gamma\bN^N \times 2^{\gamma \bN^N}$.
This requires some small adaptations of the statements of the main results,
which we keep confined to this paragraph for readability.
The limit behavior of the interpolated process (see Theorem~\ref{th:SA=wAPT})
is described by the following proposition, which has an analogue
in~\cite{gas-gau-12}:
\begin{proposition}
For every compact set $K\subset \bR^N$, the family
$\{ \bP^{a,\gamma}{\mathsf X}_\gamma^{-1},a\in K\cap\gamma\bN^N ,0<\gamma<\gamma_0 \}$
is tight. Moreover, for every $\varepsilon>0$,
\[
\sup_{a\in K \cap \gamma\bN^N}
\,\bP^{a,\gamma}\left(d({\mathsf X}_\gamma,\Phi_{\mathsf H}(K))>\varepsilon\right)
\xrightarrow[\gamma\to 0]{}0\, ,
\]
where the set-valued map ${\mathsf H}$ is given by \eqref{Hqueue}.
\end{proposition}
\begin{proof}
To prove this proposition, we mainly need to check that Assumption (RM) is
verified. We recall that the Markov chain $(x_n)$ given by
Eq.~\eqref{eq:queue} admits the representation~\eqref{eq:decomp-markov-drift},
where the function $g_\gamma = (g_\gamma^1,\ldots,g_\gamma^N)$ is given
by~\eqref{gk-queue}. If we set $h_\gamma(s,x) = g_\gamma(x)$ (the fact that
$g_\gamma$ is defined on $\gamma\bN^N$ instead of $\bR_+^N$ is irrelevant),
then for each sequence $(u_n, \gamma_n) \to (u^\star, 0)$ with
$u_n \in \gamma_n \bN^N$ and $u^\star \in \bR_+^N$, it holds that
$g_{\gamma_n}(u_n) \to {\mathsf H}(u^\star)$.
Thus, Assumption (RM)--\ref{hyp:drift}) is verified with $H(s,x) = {\mathsf H}(x)$.
Assumptions (RM)--\ref{hyp:RM-cvg}) to (RM)--\ref{hyp:RM-integ}) are obviously
verified. Since the set-valued map ${\mathsf H}$ satisfies the
condition~\eqref{eq:lin-growth}, Assumption (RM)--\ref{hyp:flot-borne}) is
verified. Finally, the finiteness assumption~\eqref{eq:moment} with
$\epsilon_K = 2$ follows from the existence of second moments for the $A^k_n$,
and \eqref{eq:moment-bis} is immediate. The rest of the proof follows word for
word the proof of Theorem~\ref{th:SA=wAPT}.
\end{proof}
The long run behavior of the iterates is provided by the following proposition:
\begin{proposition}
Let $\nu\in\cM(\bR_+^N)$ be such that $\nu(\|\cdot\|^2)<\infty$. For each
$\gamma > 0$, define the probability measure $\nu_\gamma$ on $\gamma \bN^N$ as
\[
\nu_\gamma(\{\gamma i_1, \gamma i_2, \ldots, \gamma i_N\}) =
\nu(\gamma (i_1 - 1/2, i_1+1/2] \times \cdots \times
\gamma (i_N - 1/2, i_N + 1/2]) \, .
\]
If Condition~\eqref{eq:stability-queue} is satisfied, then for all
$\varepsilon > 0$,
$$
\limsup_{n\to\infty}\ \frac 1{n+1}\sum_{k=0}^n
\bP^{\nu_\gamma,\gamma}\left(d\left(X_k\,, 0 \right)
\geq \varepsilon\right) \xrightarrow[\gamma\to 0]{}0\,.
$$
\end{proposition}
To prove this proposition, we essentially show that the assumptions of
Theorem~\ref{cvg-XY} are satisfied. In the course of the proof, we shall
establish the existence of the (PH) criterion with a function $\psi$ having
a linear growth. With some more work, it is possible to obtain a (PH)
criterion with a faster than linear growth for $\psi$, allowing one to
obtain the ergodic convergence shown in Theorem~\ref{cvg-CVSI}.
This point will not be detailed here.
\begin{proof}
Considering the space $\gamma\bN^N$ as a metric space equipped with the
discrete topology, any probability transition kernel on
$\gamma\bN^N \times 2^{\gamma \bN^N}$ is trivially Feller. Thus,
Proposition~\ref{prop:feller} holds when letting $P = P_\gamma$
and $\nu \in \cM(\gamma \bN^N)$. Let us check that Assumption (PH) is verified
if the stability condition~\eqref{eq:stability-queue} is satisfied. Let
\[
V : \bR_+^N \to \bR_+, \quad
x = (x^1,\ldots, x^N) \mapsto \Bigl(\sum_{k=1}^N x^k/\eta^k \Bigr)^2 \, .
\]
Given $1\leq k,\ell \leq N$, define $f(x) = x^k x^\ell$ on $\gamma\bN^N$.
Using Eq.~\eqref{eq:queue}, the iid property of the process $((A^1_n,\ldots,
A^N_n, B^1_n,\ldots, B^N_n), n\in\bN)$ and the finiteness of the second
moments of the $A^k_n$, we obtain
\begin{align*}
(P_\gamma f)(x) &\leq x^k x^\ell
- \gamma x^k \left(
\eta^\ell \mathbbm 1_{\{x^{\ell}>0,\,x^{\ell-1} =\cdots = x^{1} = 0\}}
- \lambda^\ell \right) \\
&\phantom{=} - \gamma x^\ell \left(
\eta^k \mathbbm 1_{\{x^{k}>0,\,x^{k-1} =\cdots = x^{1} = 0\}} - \lambda^k \right)
+ \gamma^2 C \, ,
\end{align*}
where $C$ is a positive constant. Thus, when $x \in \gamma\bN^N$,
\begin{align*}
(P_\gamma V)(x) &\leq V(x) -
2 \gamma \sum_{k=1}^N x^k / \eta^k \sum_{\ell=1}^N
\left( \mathbbm 1_{\{x^{\ell}>0,\,x^{\ell-1} =\cdots = x^{1} = 0\}} -
\lambda^\ell / \eta^\ell \right) + \gamma^2 C ,
\end{align*}
after modifying the constant $C$ if necessary. If $x\neq 0$, then one and only
one of the $\mathbbm 1_{\{x^{\ell}>0,\,x^{\ell-1} =\cdots = x^{1} = 0\}}$ is equal to
one. Therefore,
$(P_\gamma V)(x) \leq V(x) - \gamma \psi(x) + \gamma^2 C$,
where
\[
\psi(x) = 2 \Bigl( 1 - \sum_{\ell=1}^N \lambda^\ell / \eta^\ell \Bigr)
\sum_{k=1}^N x^k / \eta^k \, .
\]
As a consequence, when Condition~\eqref{eq:stability-queue} is satisfied, the
function $\psi$ is coercive, and one can straightforwardly check that the
statements of Proposition~\ref{prop:tight}--\ref{it:itight}) and
Proposition~\ref{prop:tight}--\ref{it:mtight}) hold true under minor
modifications, namely,
$\bigcup_{P \in \cP}\cI(P)$ is tight in $\cM(\bR_+^N)$, since
$\sup_{\pi\in\cI(\cP)} \pi(\psi) < +\infty$, where
$\cP = \{ P_\gamma \}_{\gamma\in(0,\gamma_0)}$. Moreover, for every
$\nu\in\cM(\bR_+^N)$ s.t. $\nu(\|\cdot\|^2)<\infty$ and every $P\in \cP$,
$\{\bE^{\nu_\gamma,P_\gamma}\Lambda_n\,,\,\gamma\in (0,\gamma_0), n\in\bN \}$
is tight, since
$\sup_{\gamma\in(0,\gamma_0),n\in\bN}
\bE^{\nu_\gamma,P_\gamma}\Lambda_n(\psi)<\infty$.
We can now follow the proof of Theorem~\ref{cvg-XY}. Doing so, all that remains
to be shown is that the Birkhoff center of the flow $\Phi_{\mathsf H}$ is reduced to
$\{ 0 \}$. This follows from the fact that when
Condition~\eqref{eq:stability-queue} is satisfied, all the trajectories of
the flow $\Phi_{\mathsf H}$ converge to zero, as shown in \cite[\S~3.2]{gas-gau-12}.
\end{proof}
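As a sanity check of the drift computation, the schematic simulation below evolves a hypothetical realization of the queue system under the assumptions that queue $k$ is served only when queues $1,\dots,k-1$ are empty and that the $A^k_n$ and $B^k_n$ are independent Bernoulli variables with means $\lambda^k$ and $\eta^k$ (an illustrative stand-in for the dynamics of Eq.~\eqref{eq:queue}, which is not restated here). When $\sum_\ell \lambda^\ell/\eta^\ell < 1$, the Lyapunov function $V$ decreases along the run and the scaled queues hover near the origin.

```python
import numpy as np

rng = np.random.default_rng(4)
N, gamma = 3, 0.05
lam = np.array([0.2, 0.15, 0.1])
eta = np.array([0.9, 0.8, 0.7])
assert (lam / eta).sum() < 1.0      # stability condition

def V(x):
    # Lyapunov function from the proof: (sum_k x^k / eta^k)^2
    return (x / eta).sum() ** 2

def step(x):
    A = rng.binomial(1, lam)        # arrivals
    B = rng.binomial(1, eta)        # potential service completions
    served = np.zeros(N)
    nonempty = np.nonzero(x > 1e-9)[0]
    if nonempty.size:               # serve the lowest-index nonempty queue
        served[nonempty[0]] = 1.0
    return np.maximum(x + gamma * (A - B * served), 0.0)

x = gamma * np.full(N, 40.0)        # start away from the origin
v0, s_vals = V(x), []
for _ in range(20000):
    x = step(x)
    s_vals.append((x / eta).sum())
s_tail = float(np.mean(s_vals[10000:]))  # long-run average of sum_k x^k/eta^k
v_final = V(x)
```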
}
\def\cprime{$'$} \def\cdprime{$''$}
Magnetized Liner Inertial Fusion (MagLIF) is a magneto-inertial fusion concept currently being explored at Sandia's Z Pulsed Power Facility.\citep{Slutz2010,Awe2013,Gomez2014,McBride2012} MagLIF produces thermonuclear fusion conditions by driving mega-amps of current through a low-Z conducting liner. The subsequent implosion of the liner containing a preheated and pre-magnetized fuel of deuterium or deuterium-tritium compresses and heats the system, creating a plasma with fusion relevant conditions.
Developing a detailed understanding of how experimental parameters such as axial pre-magnetization, preheat, and liner design mitigate losses and affect performance, as well as evolution of the plasma, is a crucial and ongoing step towards realizing the full potential of MagLIF. To this end, time resolved radiography of the imploding liner, as well as self-emission x-rays from the fuel plasma at stagnation (where thermal pressure of the fuel plasma stalls the liner implosion), have been used to study the evolution of the plasma and its structure at peak fusion conditions. For example, \citet{Awe2013} observed an unexpected feature in radiographs of an axially magnetized imploding liner -- a multi-helix structure not observed in liners that were not axially premagnetized. Additionally, axially bifurcated double helical strands have been observed in the stagnating fuel plasma columns, captured by self-emission x-ray image diagnostics. See Fig.~\ref{fig:helix} for an example image of the x-ray self-emission from the stagnated fuel plasma. Details of the helical structure vary, such as whether there are one or two strands, and may not be resolved in some images since the resolution of the x-ray imager has only recently been improved.
\begin{figure}[ht]
\includegraphics[width=\columnwidth]{fig_01.pdf}
\caption{\label{fig:helix} Self emission x-ray image of fuel plasma stagnation showing double-helix structure (experiment z3236). Axial direction is vertical. Radial direction is horizontal and exaggerated.}
\end{figure}
The underlying physics linking the multi-helix structure of the imploding liner to the bifurcated double-helices in the stagnated plasma is as yet unknown. One working hypothesis is that a helical magnetic Rayleigh-Taylor instability \citep{Seyler2018} (MRT) seeded on the outside liner surface may grow large enough to feed through the liner to seed perturbations on the liner interior. It is thought that these interior perturbations may imprint the double helical structure on the plasma. It has been experimentally demonstrated that the helical structure is dependent on the aspect ratio of the liner (AR $\equiv$ liner's initial outer radius$ / $liner's initial wall thickness). Recent experiments using liners with no dielectric coating appear to demonstrate that for increasing AR, the stagnation column helical radius increases while helical wavelength decreases, which is consistent with MRT feed-through from the liner’s outer surface.\citep{Ampleford2019} Dielectric coatings are sometimes used on the outside of the liner to suppress the electro-thermal instability which can strongly seed the MRT.\citep{peterson2012electrothermal,peterson2014electrothermal} There is another, less developed, working hypothesis that this double helical structure might be an emergent structure of the nonlinear evolution of the MRT that is controlled by conserved magnetic and cross helicities (where cross helicity\citep{perez2009role, glinsky2019helicity} is the cross-correlation between the fluid velocity and magnetic field averaged over an ensemble of random motions) that are injected into the liner. The large-scale self-organization would be the result of a Taylor relaxation,\citep{taylor1986relaxation} that is, an energy minimization under the constraints of the topologically conserved helicities. This is supported by the inverse turbulent cascade in the liner structure seen by \citet{yager2018} on ultra-thin foils driven at less than 1 MA.
However, such inferences remain weak due to the fact that, to date, there has been no systematic way to quantitatively compare stagnation morphology experiment-to-experiment or experiment-to-simulation while accounting for the uncertainty in characterizing features such as the helical wavelength and radius.
In this work, we develop a method which enables such a comparison by applying a cutting-edge Machine Learning (ML) algorithm in image classification known as the Mallat Scattering Transform (MST).\citep{Mallat2012,Bruna2013} Specifically, we are able to use the MST as a quantitative metric of morphology to compare stagnation images, and as a metric to infer morphological features with uncertainty via a regression. We start with two sections that are a thorough description of the theoretical methods and details of the technical approaches that are essential in enabling other researchers to apply this approach to their data. They are quite dense and can be skimmed, paying particular attention to the figures, after reading the main takeaways at the beginning of each of these sections. In Sec.~\ref{sec:theory}, we supply the required theory for the MST, show its connection to Deep Learning, and describe its relationship to causal physics. Section \ref{sec:ml_pipeline} describes the synthetic model used to parametrize the double helix morphology. We then discuss the design of the image morphology metric based on the MST. This metric is then tested in two ways. The first is via a classification of ensembles of synthetic stagnation images, and the second is via performance of a full machine learning pipeline that quantifies the morphological parameters of the stagnation images with uncertainty via regression. Section~\ref{sec:ml_pipeline} concludes with a verification of the metric design. Section~\ref{sec:results} demonstrates the application of the metric of image morphology in quantitatively comparing simulation and experiment, as well as a direct extraction of the morphological parameters with uncertainty from experimental images. We highlight the viability of the method to differentiate between plasmas produced from different experimental designs, and the use of the MST to do a sophisticated background subtraction to enable comparison of experiments to simulations.
\section{\label{sec:theory}Mallat Scattering Transform}
The MST is an iterative transformation that consists of convolutions with a localized oscillatory wavelet, dilated to various scales, whose outputs are nonlinearly rectified with a modulus operation; the procedure is then iterated. After each iteration the transformation is averaged over local patches and output. This gives a nonlinear mapping of the spatial features of the image to its scale features, including multiple scale correlations. This is a particular form of what is called a Convolutional Neural Network (CNN), a form of deep learning. It has a very specific, predetermined form with only a couple of design parameters that need to be determined. This allows it to be trained on very small datasets. It also has very deep connections to physics that make it a very compact and efficacious encoding. It respects the constraints of the physics such as causality, advective continuity, topological conservations, and more traditional conservations generated by group symmetries.
\subsection{\label{sec:DL_MST_theory}Deep learning based definition of MST}
Recently, the use of deep learning methods, combined with availability of large labeled data sets, has enabled a revolution in the field of image classification and analysis. Particularly, CNNs have gained widespread popularity for image analysis problems, such as classification,\citep{LeCun1989} segmentation,\citep{Ning2005} and even image generation.\citep{Goodfellow2014} The ubiquity of this approach is largely based on the ability of CNNs to learn convolutional filters which compute features that are approximately invariant to irrelevant symmetries present in the task (\textit{e.g.} translation or rotational symmetries).\citep{LeCun2010}
However, CNNs require significant expertise to navigate a seemingly arbitrary design space (\textit{e.g.}, number of nodes and layers) and require considerable computing resources to train, even when using transfer learning.\citep{goodfellow2016deep} Additionally, their {\textit{black box}} nature makes CNNs a less attractive framework for scientific applications that seek to bridge the gap between correlation and causation. Alternative kernel classifiers, such as the probabilistic neural network, are based on the Euclidean distance between image features (\textit{e.g.}, pixel information), which is easily broken by translations, rotations and scaling. At the same time, familiar translation invariant feature representations such as the Fourier transform modulus are unstable to deformations (that is, they are not Lipschitz continuous). The wavelet transformation, on the other hand, is Lipschitz continuous to deformation, but is not translation invariant.\citep{Bruna2013} By combining local translation invariance and Lipschitz continuity to deformations in a fixed weight convolutional network, the MST addresses many of the concerns that arise in deep learning.\citep{Mallat2012,Bruna2013}
\begin{figure}[ht]
\includegraphics[width=\columnwidth]{fig_02.pdf}
\caption{\label{fig:mst} The MST may be thought of as a convolutional network with fixed weights. The above network could represent, for example, a 1D MST with 3 scales and no rotations. The network outputs MST coefficients averaged by a Father Wavelet along each path, $S[p]x$. Each node of the network is the set of scattering coefficients before being window averaged by the Father Wavelet, $U[p]x$. The operator $\tilde{W}$ of Eq.~\eqref{eqn:MST} expands the network below a given node and outputs $S[p]x$ at the end of a path, $p$.}
\end{figure}
The MST consists of compositions of wavelet transformations\citep{mallat1999wavelet} coupled with modulus and non-linear smoothing operators which form a deep convolutional network. Unlike traditional deep convolutional neural networks, the filters in the MST are prescribed rather than learned. In fact, the deep convolutional network of the MST has been shown to outperform CNNs for image classification tasks over a broad range of training sample sizes.\citep{Bruna2013} This is most significant when the number of training samples is considerably limited,\citep{Bruna2013} which is often the case with experimental data. Additional benefits of the MST framework over CNNs come in the form of intelligible design -- for example, the depth of an MST network is bounded by a signal's energy propagation through the network, whereas the depth of a CNN is seemingly arbitrary.
The two-dimensional MST uses a set of convolutional filters which are calculated from a Mother Wavelet $\psi$ by applying a rotation $r$ and scaling by $2^j$:
\begin{equation}
\label{eqn:wavelet}
\psi_{\lambda}(u) = 2^{-2j} \psi (2^{-j} r^{-1} u),
\end{equation}
where $\lambda=2^{-j} r$ and $u$ is the spatial position. Let the wavelet transformation of image $x(u)$ be given by $x \star \psi_\lambda$. Given that the spatial resolution is retained in a wavelet transform, this process can be iterated upon, such that the propagated signal along path $p = (\lambda_1, \lambda_2,\dots,\lambda_m)$ is given by:
\begin{eqnarray}
U[p]x &= U[\lambda_m] \cdots U[\lambda_2]\, U[\lambda_1]\, x \nonumber \\
&= | || x \star \psi_{\lambda_1} | \star \psi_{\lambda_2} | \cdots | \star \psi_{\lambda_m} | \label{eqn:wavelet_of_wavelet}
\end{eqnarray}
where the modulus removes the complex phase from the propagated signal. However, the wavelet coefficients are not invariant to translation, but rather translation covariant. Introducing the Father Wavelet (\textit{i.e.}, a spatial window function), $\phi_{2^J}(u)=2^{-2J}\phi(2^{-J}u)$, allows an average pooling operation to be performed by convolution $U[p]x \star \phi_{2^J}(u)$. This operation collapses the spatial dependence of the wavelet coefficients while retaining the dominant amplitude $U[p]x$ at each scale. This results in an effective translation invariance assuming that a given translation is much smaller than the window scale, $2^J$. The windowed scattering transformation is thus given by:
\begin{eqnarray}
S[p]x(u) &=&U[p]x \star \phi_{2^J}(u) \nonumber \\
&=&| || x \star \psi_{\lambda_1} | \star \psi_{\lambda_2} | \cdots | \star \psi_{\lambda_m} | \star \phi_{2^J}(u). \label{eqn:windowed_scattering}
\end{eqnarray}
Now, we may define an operator $\tilde{W}$ which acts upon the non-windowed scattering $U[p]x$ producing
\begin{equation}
\tilde{W} U[p] x = \{S[p]x, U[p + \lambda]x \}_{\lambda \in \mathcal{P}} \label{eqn:MST}.
\end{equation}
$\tilde{W}$ will produce the output scattering coefficient at the current layer for the given path $p$, and will move to the next layer along the path $p+\lambda$ as demonstrated in Fig.~\ref{fig:mst}. With Eqns. \eqref{eqn:wavelet_of_wavelet} and \eqref{eqn:windowed_scattering}, we arrive at a deep scattering convolutional network $\tilde{W}$ in Eq.~\eqref{eqn:MST} with $m$ layers. For 2-D signals (images), the MST coefficients are visualized via log polar plots as depicted in Fig.~\ref{fig:logpolar}.
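The cascade of Eqs.~\eqref{eqn:wavelet_of_wavelet}--\eqref{eqn:MST} is simple to implement. The minimal 1D sketch below uses Gaussian band-pass filters as an illustrative stand-in for the Morlet wavelets used in practice, averages with a global Father Wavelet (so the window scale $2^J$ is the whole signal), and respects the scale ordering of the path. Translation invariance then holds exactly for circular shifts.

```python
import numpy as np

T, J = 256, 4
k = np.arange(T)
# one-sided Gaussian band-pass filters: illustrative stand-ins for dilated
# Mother Wavelets psi_lambda, with centers T/2^(j+1) and widths T/2^(j+3)
psi_hat = [np.exp(-0.5 * ((k - T / 2 ** (j + 1)) / (T / 2 ** (j + 3))) ** 2)
           for j in range(J)]

def scattering(x):
    X = np.fft.fft(x)
    U1 = [np.abs(np.fft.ifft(X * p)) for p in psi_hat]         # U[lambda_1]x
    S1 = np.array([u.mean() for u in U1])                      # first order
    S2 = np.array([np.abs(np.fft.ifft(np.fft.fft(U1[j1]) * psi_hat[j2])).mean()
                   for j1 in range(J) for j2 in range(j1 + 1, J)])  # second order
    return np.concatenate([S1, S2])

t = np.arange(T)
x = np.sin(2 * np.pi * 13 * t / T) * np.exp(-((t - 90) / 40.0) ** 2)
S = scattering(x)
S_shifted = scattering(np.roll(x, 7))   # agrees with S up to round-off
```

For real images the same cascade is applied with rotated and dilated 2D wavelets, and the Father Wavelet average is taken over patches of scale $2^J$ rather than globally.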
\begin{figure}[h]
\includegraphics[width=\columnwidth]{fig_03.pdf}
\caption{\label{fig:logpolar} Coefficients produced by applying MST to 2D images in this work will be displayed on radial plots as shown. Bins are created according to scale (radial positioning, $|\lambda_m|$, and rotation, $\text{arg}(\lambda_m)$) with magnitude (color scale, not shown) representing the size of the coefficient at that scale and rotation.}
\end{figure}
The MST forms a nonlinear mapping from an image's spatial features to its scale features. This mapping is Lipschitz continuous to deformation, meaning that small deformations of the image result in small deformations of the Mallat scattering coefficients. Since we will be concerned with discovering morphology parameters of stagnation column images such as helical wavelength, the MST provides a convenient basis as compared to, for example, a Fourier transform which is not Lipschitz continuous to deformations. The first order MST, $m=1$, can be viewed as an optimal ``local'' Fourier transform. The reader is referred to \citet{Mallat2012} and \citet{Bruna2013} for more details regarding the mathematical properties of MST, such as being a unitary transformation, and having a scale ordering of the path, $p$.
\subsection{\label{sec:physics_MST_theory}Physical foundation of MST}
In the previous Sec.~\ref{sec:DL_MST_theory}, we developed the MST as a deep convolutional network with a very specific form. There was no physical reason given (other than desiring the mathematical properties of Lipschitz continuity and stationarity) for the design choices such as: the use of iterative convolution with a Mother Wavelet, the use of the modulus as an activation function between layers, and the final pooling using convolution with the Father Wavelet. There was also no reason given for: the sparseness of the MST, the need for only the first and second order MST, and the efficacy of the MST as a representation for the Machine Learning of physical systems. As it turns out, there are deep physical foundations for these design choices that explain its compactness and efficacy. These connections were briefly mentioned in \citet{Mallat2012} and are alluded to by the use of the word ``Scattering'' in the name of the transformation.
Important properties that connect the MST to causal dynamics have been noted in the previous section. Fundamental to these connections is the fact that physical dynamics, whether it be fluid dynamics, classical mechanics, or quantum field theory is built upon advection (\textit{i.e.}, deformation) by a vector field, which is also how physical symmetries are generated. This is why having Lipschitz continuity is paramount. Imposing this constraint on the transformation limits the representation to physically realizable systems with the proper symmetries. Furthermore, the construction of the MST also leads to the properties that the transformation is unitary (expressing that probability can not be created or destroyed), and that the path is scale ordered (expressing that the system is causal). Furthermore, the convolution by the Father Wavelet and the use of the modulus can be viewed as an expectation value operator and the evaluation of Gaussian integrals via the method of stationary phase, respectively.
Another seemingly arbitrary choice is the truncation of the MST expansion at second order. A physical system that encodes finite information is fully identified by the first and second order MST. This is because of a statistical realizability theorem\citep{krommes2002} -- either the distribution stops at second order or it must continue to all orders. Since the dynamical information is finite, the distribution must stop at second order. Practically, it is found that there is little signal energy in the MST of third order and higher, and that there is almost no improvement in the classification or regression performance by including the third order MST.
Finally, it is worth noting that, in the context of images, the MST is encoding the static scale structure in the first order transform, and the relationship between structures of different scales in the second order transform. This scale-to-scale correlation is essentially a two-point correlation function between different locations in the image, and the first order transform is the single-point correlation function (essentially a local Fourier transform). In the context of dynamical systems, this is analogous to the single-particle and two-particle distribution functions in the Mayer Cluster expansion of classical kinetic theory and the equivalent constructs of quantum field theory. These quantities encode all the dynamics of the system, meaning that the MST has a profound connection to the underlying physical dynamics of the system, and is a compact, that is sparse, encoding of the dynamics. Another way of looking at this is that the physics of the system is encoded in the two scale correlation function -- the kinetic or quantum transition rates. These transition rates determine how the first order transform evolves. This encoding of the physics in the second order transform is not surprising. In plasmas the electrostatic force is encoded in Debye shielding which is a two-point correlation. Significant theoretical and numerical work is well underway to support these physical foundations of the transformation.\citep{glinsky.et.al.19, glinsky.11}
\section{\label{sec:ml_pipeline}Synthetic model, classification and regression}
An analytic, synthetic model is constructed based on 11 model parameters that describe a 2D image of a double helical stagnation. There are also 6 stochastic parameters that represent the stochastic nature of the stagnation image and the experimental measurement noise. This model is used to generate data sets to design the metric based on the MST, train a classifier, and to develop a regression for the model parameters given an image. The design of the metric consists of choosing: the patch size over which the transformations will be done, which patches to include in the analysis, the prior distributions of the features (whether they are normal or log-normal, essentially whether the log of the features should be taken), how the features should be normalized, the angular resolution of the MST, and the maximum order of the MST to use. These choices are optimized and validated, and the efficacy of the metric demonstrated by the quantified performance of the classification and regression. The metric can be used directly in the analysis, and the regression can be used to estimate the model parameters (of the synthetic model) with uncertainty given an experimental image.
\subsection{Synthetic double helix model}
In order to quantify the morphology of the MagLIF stagnation column, a model with well defined parameters is needed to act as a surrogate for the x-ray self-emission diagnostic images. This model must capture the essential features of the stagnation such as its multi-helical nature, finite axial extent and axial bifurcations. For this purpose, we have constructed a synthetic model complete with 11 descriptive model parameters that capture some features of a fundamentally 3D stagnating plasma projected into a 2D image along with 6 stochastic parameters to represent the {\textit{natural}} experimental variation and signal noise inherent in the x-ray diagnostics fielded on Z.
Analytically, the synthetic model consists of superimposed ``radial'' and axial Gaussians over a pair of axial $\cos^2$ waves. Here, the radial position projected onto the image will be given by $r$, and the axial position of the image will be given by $z$. The model may be specified by the composition of the following functions:
\begin{eqnarray}
\label{eqn:s}
s(z) &=& \theta_6 \cos^2(\theta_7 \theta_3 z+\zeta_5) \nonumber\\
&+&\theta_9 \cos^2(\theta_{10} \theta_3 z+\zeta_6),
\end{eqnarray}
\begin{equation}
\label{eqn:r0}
r_{0,j}(z)= (-1)^{1+\delta_{j,2}}\theta_8 + \theta_5 \sin(\theta_3 z + \zeta_4 + \delta_{j,2} \theta_{11}),
\end{equation}
\begin{equation}
\label{eqn:g}
g_j(r,r_{0,j}(z)) = \frac{1}{\theta_1 \sqrt{2\pi}} \exp\Big\{\frac{-(r-r_{0,j}(z))^2}{2 \theta_1^2}\Big\},
\end{equation}
\begin{equation}
\label{eqn:ell}
\ell(z) = \frac{\zeta_3}{\theta_2 \sqrt{2\pi}} \exp\left\{ -\left( \frac{z^2}{2 \theta_2^2} \right)^{\theta_4} \right\},
\end{equation}
and
\begin{eqnarray}
\label{eqn:h}
h(r,z) &=& A \sum_{j=1}^2\Big[ (1+s(z)) \; g_j(r,r_{0,j}(z)) \; \ell(z) \nonumber\\
&\times& (1-\zeta_2 U(0,1)) + \zeta_1U(0,1)\Big],
\end{eqnarray}
where $h(r,z)$ is the final composition used to generate double helix images, $\ell(z)$ is the axial envelope, $g_j$ is the Gaussian envelope of the helical strand, $r_{0,j}(z)$ is the center of the helical strand, $s(z)$ describes the axial bifurcations of the helical strands, $U(0,1)$ is a uniformly distributed random number on $[0,1]$, $j \in \{1,2\}$ is the strand index, $\delta_{i,j}$ is the Kronecker delta function, $A$ is a constant used to normalize the maximum value of $h(r,z)$ to unity, and the parameters $(\theta_i, \zeta_i)$ are described below.
The $\theta_i$ and $\zeta_i$ parameters are depicted in Fig.~\ref{fig:syn} and summarized in Table \ref{tab:param}. Their interpretations are: $\theta_1$ is the standard deviation of the radial Gaussian or strand thickness, $\theta_2$ is the standard deviation of the axial Gaussian or strand length, $\theta_3$ is the helical wavelength or wavenumber, $\theta_4$ is the order of the axial super Gaussian or how quickly the strand ends, $\theta_5$ is the amplitude of radial perturbations or radius of the helix, $\theta_6$ is the amplitude of the large wavelength or low frequency axial brightness perturbations, $\theta_7$ is the wavelength or mode number of the large wavelength or low frequency axial brightness perturbations, $\theta_8$ is the strand separation, $\theta_9$ is the amplitude of the small wavelength or high frequency axial brightness perturbations, $\theta_{10}$ is the wavelength or mode number of the small wavelength or high frequency axial brightness perturbations, and $\theta_{11}$ is the relative strand phase; $\zeta_1$ is the background noise, $\zeta_2$ is the signal noise, $\zeta_3$ is the amplitude of the signal, $\zeta_4$ is the radial perturbation phase shift, $\zeta_5$ is the phase shift of the large wavelength or low frequency axial brightness perturbations, and $\zeta_6$ is the phase shift of the small wavelength or high frequency axial brightness perturbations.
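As a concrete illustration, the composition of Eqs.~\eqref{eqn:s}--\eqref{eqn:h} can be rendered in a few lines of \texttt{numpy}. This is a minimal sketch, not the production code used for this work; the function name, the image domain $r,z \in [-1,1]$, the per-pixel noise draws, and the parameter values in the usage below are all illustrative assumptions.

```python
import numpy as np

def synthetic_helix(theta, zeta, n=128, seed=0):
    """Render the double-helix image h(r, z) of Eqs. (1)-(5).

    theta[i-1] holds model parameter theta_i and zeta[i-1] holds zeta_i;
    the image domain r, z in [-1, 1] is an illustrative choice."""
    rng = np.random.default_rng(seed)
    r = np.linspace(-1.0, 1.0, n)[None, :]       # radial axis (columns)
    z = np.linspace(-1.0, 1.0, n)[:, None]       # axial axis (rows)

    # Eq. (1): low and high frequency axial brightness perturbations
    s = (theta[5] * np.cos(theta[6] * theta[2] * z + zeta[4]) ** 2
         + theta[8] * np.cos(theta[9] * theta[2] * z + zeta[5]) ** 2)

    # Eq. (4): super-Gaussian axial envelope
    ell = zeta[2] / (theta[1] * np.sqrt(2.0 * np.pi)) * np.exp(
        -(z ** 2 / (2.0 * theta[1] ** 2)) ** theta[3])

    h = np.zeros((n, n))
    for j in (1, 2):
        sign = 1.0 if j == 2 else -1.0           # (-1)**(1 + delta_{j,2})
        phase = theta[10] if j == 2 else 0.0     # delta_{j,2} * theta_11
        # Eq. (2): center of helical strand j
        r0 = sign * theta[7] + theta[4] * np.sin(theta[2] * z + zeta[3] + phase)
        # Eq. (3): radial Gaussian strand profile
        g = np.exp(-(r - r0) ** 2 / (2.0 * theta[0] ** 2)) / (
            theta[0] * np.sqrt(2.0 * np.pi))
        # Eq. (5): signal noise (zeta_2) and background noise (zeta_1)
        h += ((1.0 + s) * g * ell * (1.0 - zeta[1] * rng.uniform(size=(n, n)))
              + zeta[0] * rng.uniform(size=(n, n)))
    return h / h.max()                           # A normalizes max(h) to 1
```

A call such as \texttt{synthetic\_helix(theta, zeta)} with positive $\theta_i$ and small noise amplitudes $\zeta_1, \zeta_2$ produces a two-strand helical image normalized to unit peak brightness.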
\begin{figure}[ht]
\includegraphics[width=\columnwidth]{fig_04.pdf}
\caption{\label{fig:syn} Synthetic Stagnation Model (see Table~\ref{tab:param}).}
\end{figure}
\begin{table}
\caption{\label{tab:param}Synthetic model $\theta_i$ and stochastic $\zeta_i$ parameters (see Fig.~\ref{fig:syn}).}
\begin{tabular}{l}
\textbf{Model Parameters}\\
\hline
$\theta_1$ = strand thickness\\
$\theta_2$ = strand length\\
$\theta_3$ = helical wavenumber\\
$\theta_4$ = order of axial super Gaussian\\
$\theta_5$ = radius of the helix\\
$\theta_6$ = amplitude of low frequency axial brightness\\
perturbations\\
$\theta_7$ = mode number of low frequency axial brightness\\
perturbations\\
$\theta_8$ = strand separation\\
$\theta_9$ = amplitude of high frequency axial brightness\\
perturbations\\
$\theta_{10}$ = mode number of high frequency axial brightness\\
perturbations\\
$\theta_{11}$ = relative strand phase\\
\\
\textbf{Stochastic Parameters}\\
\hline
$\zeta_1$ = background noise\\
$\zeta_2$ = signal noise\\
$\zeta_3$ = amplitude of signal\\
$\zeta_4$ = radial perturbation phase shift\\
$\zeta_5$ = low frequency axial brightness perturbations\\
phase shift\\
$\zeta_6$ = high frequency axial brightness perturbations\\
phase shift
\end{tabular}
\end{table}
\subsection{Metric design}
Given an image of the MagLIF stagnation column, we will calculate the MST coefficients as features of the image, as they are commonly called in the ML literature. We will then use these features as a metric on the space of images. That is, the distance between two images will be calculated as the square root of the sum of the squares of the differences between the features of the two images. There are several design decisions in the calculation of the features.
A majority of the design decisions, including gridding, maximum MST order, variable transformations, normalization, and scale resolution, are inherited from \citet{Bruna2013}'s use of deep scattering transformation networks for handwritten digit recognition on the MNIST database. That said, these design decisions were verified by examining the effect of alternative design decisions on the regression performance. See Sec.~\ref{sec:metric_verify} for the verification.
\begin{figure*}[ht]
\includegraphics[width=2\columnwidth]{fig_05.pdf}
\caption{\label{fig:grid} Gridded MST: (a, left) self-emission image of experiment z3236, (b, center) first and (c, right) second order MST coefficients.}
\end{figure*}
We now discuss how the features were engineered. First, note from Eq.~\eqref{eqn:windowed_scattering} that we must evaluate the scattering coefficients at points $u$ in our image. Because the statistics given by the MST are assumed stationary, that is spatially invariant, below the Father Wavelet window size, evaluating $S[p]x(u)$ at all points $u$ would yield very redundant information. As a result, it is wise to subsample $u$. This is achieved by translating the spatial window by intervals of $2^J$ such that $G_\#=N2^{-J}$, where $N$ is the symmetric pixel count and $G_\#$ is the symmetric grid number. This subsampling segments each image into a $G_\#\times G_\#$ grid.\citep{Bruna2013} We work with images of pixel size $512\times512$, and set $J=7$ giving $G_\#=4$. We thus have a design parameter to choose, $J$, which determines the size of the sub-image, $2^J \times 2^J$, over which the transform will be calculated. This was chosen based on the position, size and characteristics of our double helix (see Fig.~\ref{fig:grid}) and was found to give good regression performance as discussed later. We note, however, that a more rigorous procedure would be to select the $J$ which gives maximum cross validated classifier or regression performance. With this being said, our eyes are very good at recognizing the dynamical space scale of the physics, and the size of the Father Wavelet, $2^J$, should be of this scale. If the size is too small, the MST will not be calculated over the largest area possible and will therefore be noisier and contain less statistical information. If the size is too large, the assumption of stationarity will be violated, leading to a blurring of the statistics and a resulting loss of information. It is therefore expected that there will be an optimal size that could be determined by the aforementioned cross validation over $J$.
This division of the image into sub-images, or patches, does not increase or decrease the resolution of the image. As the image is divided into more and smaller patches, the number of pixels per sub-image decreases to compensate for the increased number of patches. This will also decrease the number of MST coefficients per patch, since coefficients for scales larger than the patch size cannot be calculated.
An added benefit of gridding the images is data reduction via patch selection. From Fig.~\ref{fig:grid}(a) it is apparent that most of the image is background noise. This is echoed in the MST coefficient space. Since our double helix is confined to column $2$, essentially all of the unique information is contained within the MST coefficients evaluated on the four patches in column $2$, so the other columns may be dropped. This is done after the MST, not before, to mitigate boundary effects and because of technical issues in taking the MST that require the domain to be square. We also only calculate the MST to second order, $m=2$. Before computing the MST on our gridded image, we must apply boundary conditions for the convolution. There are many reasonable choices, such as periodic, zero-padded, and mirrored. We chose a mirror boundary condition, which minimizes the influence of the boundary while making the minimum assumption about the signal outside of the domain.
The final step in engineering features for a machine learning algorithm is to perform an appropriate scaling of the input features. This is a common practice in statistical learning, and many different scaling transformations and dimensionality reduction methods are reasonable. Here, we apply a $\log_{10}$ scaling to our scattering coefficients and model parameters (with the exception of $\theta_{11}$ which is a phase shift) used in training the classifier. This choice was made to decrease the dynamic range of the MST coefficients, since before the transformation the feature vector was dominated by only a few large coefficients.
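The gridding, patch selection, and $\log_{10}$ scaling steps can be sketched as follows. This is a schematic, self-contained version: the \texttt{transform} argument stands in for the per-patch MST (in practice a windowed scattering transform), and the default toy statistics exist only so the gridding logic runs on its own; \texttt{keep\_col} is a zero-based column index.

```python
import numpy as np

def mst_features(image, J=7, keep_col=1, transform=None):
    """Grid a square N x N image into 2**J x 2**J patches, keep a single
    column of patches, apply a per-patch transform, and log10-scale.

    `transform` stands in for the per-patch MST; the default toy
    statistics only make the sketch self-contained."""
    if transform is None:
        transform = lambda q: np.array([q.mean(), q.std()]) + 1e-12
    n = image.shape[0]
    g, p = n >> J, 1 << J                   # G_# = N 2**-J, patch size 2**J
    patches = image.reshape(g, p, g, p).swapaxes(1, 2)  # (G_#, G_#, p, p)
    col = patches[:, keep_col]              # drop all but one patch column
    return np.log10(np.concatenate([transform(q) for q in col]))
```

For a $512\times512$ image with $J=7$ this yields a $4\times4$ grid, from which only the chosen column of four patches contributes features.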
\subsection{Classification model}
\begin{figure*}[ht]
\includegraphics[width=2\columnwidth]{fig_06.pdf}
\caption{\label{fig:classes} Classification Training Set construction. Parameter distributions are shown at left while the base classes are represented on the right. Shown is the class separation between class 1 and 2 as the blue line labeled ``Sep'', and the class precision of class 1 as the red line labeled ``Prec''. Units for all $\theta_i$ are arbitrary, but consistent with those to be shown in the cross plots of Fig.~\ref{fig:mstr_performance_param}. Ensemble of synthetic stagnation images for classification (shown via animation in this link to a \href{https://youtu.be/PG4-rQ1tUyw}{Multimedia View}). Shown in this animation are the base case (left), the base case for the class (middle), and the individual members of the ensemble (right).}
\end{figure*}
Studying the ability of the MST to distinguish between different classes of helical morphology will quantify the performance of the MST as a metric of image morphology and will provide reassurance that the regression problem is well-posed. Additionally, it provides access to more easily interpretable results (\textit{e.g.}, classification accuracy as opposed to $R^2$). By considering the classification problem, we are also able to closely follow the approach using the MST for MNIST handwritten digit recognition in \citet{Bruna2013}.
We approach the problem by synthesizing 12 stagnation image classes -- 11 distinct parameter constrained classes constructed from systematic modifications to the synthetic model parameter distributions from a single base class. Each of the distinct parameter classes has a definitive associated synthetic model parameter. For a given parameter class, the distribution of its associated synthetic model parameter is translated by some separation from its corresponding base class distribution. This process is repeated for each of the 11 distinct parameter classes (see Fig.~\ref{fig:classes}). For the classification problem, we generate 340 images. We use $50\%$ of this data set as the training set to train an affine classifier, while the remaining $50\%$ is separated out as the test set to be used for characterizing the trained classifier.
Finally, we apply the classification algorithm. Following \citet{Bruna2013}, we apply a classifier based on an affine space model with the approximate affine space determined by Principal Component Analysis (PCA) of each class. To be specific, let $SX_k$ denote the set of MST coefficients for all of our images belonging to class $k$. $SX_k$ can be organized into a $N_{i,k}\times P$ matrix where $N_{i,k}$ is the number of images available for class $k$ and $P$ is the number of scattering coefficients (\textit{i.e.}, the coefficients have been stacked into a vector of length $P$). The columns of $SX_k$ are transformed to have zero mean for each of the $P$ coefficients, $\Delta_k = SX_k - \mathbb{E}(SX_k)$. We may then perform principal component analysis on $\Delta_k$ by finding the eigenvectors $\{\mathbf{U}_{j,k}\}_{j=1}^P$ and corresponding eigenvalues $\{\Lambda_j\}_{j=1}^P$ of the covariance matrix $\Delta_k^T\Delta_k$. Taking $\mathbf{U}_{j,k}$ to be ordered such that $\Lambda_j > \Lambda_{j+1}$, we keep only the first $d \ll P$ principal vectors $\{\mathbf{U}_{j,k}\}_{j=1}^d$. Letting $\mathbf{V}_k = \text{span}(\{\mathbf{U}_{j,k}\}_{j=1}^d)$, we may construct the affine approximation space for class $k$
\begin{equation}
\label{eqn:affinespace}
\mathbf{A}_k = \mathbb{E}(SX_k) + \mathbf{V}_k.
\end{equation}
Finally, for a new image with scattering coefficients $Sx$, the class assigned to the image is given by
\begin{equation}
\hat{k}(x) = \underset{k}{\operatorname{argmin}} || Sx - P_{\mathbf{A}_k}(Sx)||.
\end{equation}
In order to evaluate how effectively the classes are separated one may define the ratio of the expected value of the distance of class $i$ to the affine space for class $j$ divided by the expected value of the distance of class $i$ to its own affine space,
\begin{equation}
\label{eqn:sep}
R^2_{ij} \equiv \frac{E(|| SX_i - P_{\mathbf{A_j}}(SX_i)||^2)}{E(|| SX_i - P_{\mathbf{A_i}}(SX_i)||^2)}.
\end{equation}
Note that if the classes are well separated, then $R_{ij}^2$ will be very large for $i\neq j$, while $R_{ii}^2=1$. It thus makes sense to define the matrix
\begin{equation}
\label{eqn:sepdec}
\Omega_{ij} =N_j e^{-|R_{ij}|},
\end{equation}
where $N_j$ is a column-wise normalization ensuring that each column of $\Omega_{ij}$ sums to $1$. The off-diagonal elements are indicative of overlap among the tails of the class distributions. Conservative assumptions are made that the distribution is exponential, not Gaussian, and the algebraic geometric factor is ignored; both lead to an overestimate of the distribution overlap. The assumption of an exponential distribution leads to the simplification that the cumulative distributions take the same form. Equation~\eqref{eqn:sepdec} gives a rough probability that members of class $j$ would be classified as class $i$, that is, the confusion matrix $P(C_i | C_j)$. For the case that there is small overlap in the class distributions and there are limited samples, $\Omega_{ij}$ is a high fidelity surrogate for the confusion matrix. This is because this statistic calculates moments of cluster size and separation, rather than the occurrence of the rare cluster overlap events. Figure~\ref{fig:confusion} shows the matrix $\Omega_{ij}$ for our case, demonstrating good class separation as indicated by the fact that the matrix is strongly diagonal. The chance of misclassification is extremely small ($<0.1\%$), the average class precision is $0.00017$, and the average class separation is $10$. Here we have used the definitions of class separation, $R_d^2$, and precision, $r_d^2$, given in \citet{Bruna2013},
\begin{equation}
\label{eqn:sep2}
R^2_d \equiv \frac{1}{N_c} \sum_{i=1}^{N_c} \frac{E(\text{min}_{j\ne i}|| SX_i - P_{\mathbf{A_j}}(SX_i)||^2)}{E(|| SX_i - P_{\mathbf{A_i}}(SX_i)||^2)},
\end{equation}
\begin{equation}
\label{eqn:prec}
r^2_i \equiv \frac{E(|| SX_i - P_{\mathbf{A_i}}(SX_i)||^2)}{E(|| SX_i ||^2)}
\end{equation}
and
\begin{equation}
\label{eqn:prec2}
r^2_d \equiv \frac{1}{N_c} \sum_{i=1}^{N_c} r^2_i.
\end{equation}
Note that the separation is just the average of the separation matrix given in Eq.~\eqref{eqn:sep}. The geometric meanings of the separation and precision are shown in Fig.~\ref{fig:classes}. Also note in Fig.~\ref{fig:classes} that the classes were constructed so that the separation is about 6 to 10 in $\theta_i$-space. This should be the upper limit on the class separation of any classifier. The fact that the realized class separation with an affine classifier is the optimal value of 10 indicates that the MST metric has exposed the model parameters as linear combinations of the MST metric and that no better representation could be found.
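Given the ratio matrix $R^2_{ij}$ of Eq.~\eqref{eqn:sep}, the surrogate confusion matrix and the class separation can be computed directly. In this sketch we take $R_{ij} = \sqrt{R^2_{ij}}$ in Eq.~\eqref{eqn:sepdec} (our reading of the notation), and the function names are illustrative.

```python
import numpy as np

def omega(R2):
    """Surrogate confusion matrix, Eq. (9): Omega_ij = N_j exp(-|R_ij|),
    taking R_ij = sqrt(R2_ij) (an assumption) and normalizing each
    column to sum to one."""
    W = np.exp(-np.sqrt(np.asarray(R2, dtype=float)))
    return W / W.sum(axis=0, keepdims=True)

def class_separation(R2):
    """Eq. (10): average over classes of the nearest competing-class
    ratio, i.e. the minimum off-diagonal R2 in each row."""
    R2 = np.asarray(R2, dtype=float)
    off = np.where(np.eye(len(R2), dtype=bool), np.inf, R2)
    return off.min(axis=1).mean()
```

For a well-separated pair of classes, e.g. $R^2 = [[1, 9], [16, 1]]$, the columns of $\Omega$ are strongly diagonal and the separation is the mean of the off-diagonal ratios.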
\begin{figure}[ht]
\center\includegraphics[width=\columnwidth]{fig_07.pdf}
\caption{\label{fig:confusion} The surrogate confusion matrix $\Omega_{ij}$ defined by Eq.~\eqref{eqn:sepdec} which demonstrates that the constructed double helix classes are well separated in the MST space.}
\end{figure}
Increasing the dimension of the PCA affine approximation space has been shown to make the MST more robust to rotations by effectively reducing the spread of the intra-class affine space while increasing the spread of the inter-class separations for classification problems using an affine classifier and the MST.\citep{Bruna2013} Here we observe a similar effect of affine space dimensionality on performance as demonstrated in Fig.~\ref{fig:affine_opt}. While the performance is maximized for a dimension of 10, there are diminishing returns after a dimension of 4.
\begin{figure}[ht]
\center\includegraphics[width=\columnwidth]{fig_08.pdf}
\caption{\label{fig:affine_opt} Optimization curve for the dimension of the affine space. As the dimension of the affine space increases the error decreases and the average intra-class precision ($r^2_d$) decreases while the inter-class separation ($R^2_d$) increases.}
\end{figure}
\subsection{\label{sec:regression}Regression model and ML pipeline}
\begin{figure*}[ht]
\includegraphics[width=2\columnwidth]{fig_09.pdf}
\caption{\label{fig:mstr} The MST regression pipeline for morphology characterization of experimental stagnation images. Starting at the left of this figure, an ensemble of synthetic stagnation images are formed from the parameterized model. The MST is taken of this ensemble over the four numbered patches, then the MST is regressed to give the synthetic model parameters. This regression is then applied to an experimental diagnostic image to give estimates of the model parameters with uncertainty including correlation. The synthetic image using the Maximum A posteriori Probability (MAP) parameters can then be constructed. Ensemble of synthetic stagnation images used for regression (shown via animation in this link to a \href{https://youtu.be/uqx-ZkV6TxE}{Multimedia View}). Shown in this animation are the individual members of the ensemble (left), and the corresponding synthetic stagnation image constructed from the MAP parameters regressed from the MST of the member of the ensemble (right).}
\end{figure*}
We now consider the regression problem as highlighted in Fig.~\ref{fig:mstr}. The goal of this regression is to estimate the synthetic model parameters with uncertainty given an experimental image. This Machine Learning (ML) pipeline takes as input an image of a plasma stagnation column and outputs a set $\{\theta_i\}_{i=1}^{11}$ characterizing the morphology of the column along with an estimate of the uncertainty of the output. This will be achieved by creating a set of synthetic images from Eq.~\eqref{eqn:h} using a large set of randomly chosen $(\theta_i,\zeta_i)$, computing the MST, and performing a regression from MST coefficients to $\theta_i$. Note the absence of $\zeta_i$ in our output as those are meant to represent unimportant transformations, such as rotating the viewing angle, which does not alter the fundamental morphology. Specifically, image realizations are produced, using the synthetic model, from a random sampling of the log-uniformly distributed model parameters. The statistical properties of these distributions are determined by visually confirming that helices produced encompass what is reasonable to expect from experiment. Additionally, most of the quantities we wish to learn from the helical images (\textit{i.e.} the $\theta_i$'s) are non-negative. As a result, we chose to $\log_{10}$ scale all of the $\theta_i$ values except for the strand phase $\theta_{11}$. The log scaling of the values is equivalent to assuming a log-normal prior distribution. This choice of using a log-normal distribution is verified in Sec.~\ref{sec:metric_verify}.
Before conducting a linear regression from ($\log_{10}$ scaled) MST coefficients to (scaled) helical parameters, we standard normal scale $\theta$ and $\mathbf{S}$. We will henceforth refer to the transformed quantities as $\tilde \theta$ and $\mathbf{\tilde S}$. Principal Component Analysis (PCA) is then employed to find a set of orthonormal basis vectors by which to rotate the MST coefficients and model parameters into a more directly correlated space, prior to linear regression, by applying Singular Value Decomposition (SVD) to the cross-covariance between the MST coefficients and model parameters from the training set, CCOV$( \mathbf{\tilde \theta}, \mathbf{\tilde S}) = \mathbf{\tilde \theta}^T\mathbf{\tilde S}/(N-1)$. Here, $N$ is the number of training samples used to construct the cross-covariance. The SVD factors the cross-covariance matrix into a set of transformation matrices $\mathbf{U}$ and $\mathbf{V}$ bounding a diagonal matrix $\mathbf{\Sigma}$ containing a set of singular values
\begin{equation}
\label{eqn:svd}
\mathbf{U \Sigma V}^T = \frac{\mathbf{\tilde \theta}^T\mathbf{\tilde S}}{N-1}.
\end{equation}
PCA is often used on linear systems for dimensionality reduction; however, for reasons that will be discussed shortly, we retain full dimensionality. This set of transformation matrices provides a set of orthogonal basis vectors along which $\tilde{\theta}$ and $\tilde{\mathbf{S}}$ are most directly correlated, ordered from strongest to weakest correlation (see Fig.~\ref{fig:pcv} and Fig.~\ref{fig:sv}). Most of the correlation, about 91\% of the variation, is contained in the first four dimensions. The model parameters $\tilde{\theta}$ and scattering coefficients $\tilde{\mathbf{S}}$ are rotated into the directly correlated space such that $\mathbf{Y}=\mathbf{\tilde \theta U}$ and $\mathbf{X}=\mathbf{\tilde S V}$ define the rotated variables.
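The rotation into the directly correlated space can be sketched in a few lines of \texttt{numpy}; the function name is ours and the usage below employs illustrative toy data.

```python
import numpy as np

def rotate_to_correlated(theta_t, S_t):
    """Eq. (13): SVD of the cross-covariance of the standardized model
    parameters theta~ (N x k) and MST coefficients S~ (N x P).  Returns
    the rotated variables Y = theta~ U and X = S~ V together with the
    singular values ordering the correlated directions."""
    N = theta_t.shape[0]
    U, sv, Vt = np.linalg.svd(theta_t.T @ S_t / (N - 1),
                              full_matrices=False)
    return theta_t @ U, S_t @ Vt.T, sv
```

By construction the rotated cross-covariance is diagonal, $\mathbf{Y}^T\mathbf{X}/(N-1) = \mathbf{\Sigma}$, which is what makes the subsequent linear regression well aligned with the singular directions.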
Regressing the rotated $\mathbf{Y}$ on the rotated scattering coefficients $\mathbf{X}$ is accomplished using multidimensional linear regression,
\begin{equation}
\label{eqn:reg}
Y_j = b_j + \sum_{i=1}^{p}X_i m_{ij} + \epsilon_j,
\end{equation}
where $\mathbf{m}$ is the map from $\mathbf{X}$ to $\mathbf{Y}$ (\textit{e.g.}, ``slope''), $\mathbf{b}$ is the bias (\textit{e.g.}, intercept), and $\mathbf{\epsilon} \sim \mathcal{N}(0,\mathbf{\Lambda})$ is the error term, assumed to be a zero mean normal random variable with covariance matrix $\mathbf{\Lambda}$. Writing Eq.~\eqref{eqn:reg} in matrix notation, the bias is absorbed into the slope such that $\mathbf{Y}=\mathbf{XM}+\mathbf{\epsilon}$.
Note that Eq.~\eqref{eqn:reg} implies that the prediction for a new input $\mathbf{X}$ is $\mathbf{Y}_\text{pred} =\hat{\mathbf{Y}} = \mathbf{XM}$ since $\overline{\mathbf{\epsilon}}=0$. Importantly, we would also be able to characterize the uncertainty in our prediction if we had an estimate of $\mathbf{\Lambda}$. In order to estimate $\mathbf{M}$ and $\mathbf{\Lambda}$, note that Eq.~\eqref{eqn:reg} specifies a likelihood function
\begin{equation}
\begin{split}
P(\{\mathbf{Y}_i\}|\mathbf{M},\mathbf{\Lambda},\{\mathbf{X}_i\}) = \prod_{i=1}^{N} \frac{1}{\sqrt{(2\pi)^k|\mathbf{\Lambda}|}}\\
\times e^{-\frac{(\mathbf{Y}_i-\mathbf{X}_i\mathbf{M})^T\mathbf{\Lambda}^{-1}(\mathbf{Y}_i-\mathbf{X}_i\mathbf{M})}{2}},
\end{split}
\end{equation}
where the training data are assumed independent and identically distributed (i.i.d.) and $k$ is the dimensionality of our output space (here $k=11$ since there are $11$ theta parameters to which we wish to regress).
A maximum likelihood estimate of the coefficients of the matrix $\mathbf{M}$ and error covariance matrix $\mathbf{\Lambda}$ are determined by finding their values which maximize the likelihood function over our training data. Equivalently, since the logarithm is monotonic, we may maximize the log-likelihood $\mathcal{L}$. The solution is derived in many statistics and machine learning textbooks (see \textit{e.g.} \citet{bishop}) and is given by
\begin{equation}
\label{eqn:mleM}
\mathbf{M}_\text{MLE} = (\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T\mathbf{Y},
\end{equation}
which is the typical ordinary least squares solution where $\mathbf{X}_i$($\mathbf{Y}_i$) have been stacked to create $\mathbf{X}$($\mathbf{Y}$) and the error covariance matrix is
\begin{equation}
\label{eqn:mleL}
\mathbf{\Lambda}_\text{MLE}= \frac{1}{N} \sum_{i=1}^N(\mathbf{Y}_i-\mathbf{X}_i\mathbf{M}_\text{MLE})^T(\mathbf{Y}_i-\mathbf{X}_i\mathbf{M}_\text{MLE}),
\end{equation}
which is just the estimate of the population covariance matrix of the difference $(\mathbf{Y}-\mathbf{Y}_\text{pred})$.
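Equations~\eqref{eqn:mleM} and \eqref{eqn:mleL} amount to ordinary least squares followed by a residual covariance estimate; a minimal sketch (function name ours, bias handled as a leading column of ones) is:

```python
import numpy as np

def fit_linear_mle(X, Y):
    """Maximum likelihood fit of Y = X M + eps: Eq. (16) for M (with the
    bias absorbed as a leading column of ones) and Eq. (17) for the
    error covariance Lambda."""
    Xb = np.hstack([np.ones((len(X), 1)), X])     # absorb the bias b
    M = np.linalg.lstsq(Xb, Y, rcond=None)[0]     # (X^T X)^-1 X^T Y
    R = Y - Xb @ M                                # residuals
    return M, R.T @ R / len(X)                    # M_MLE, Lambda_MLE
```

On noiseless linear data the recovered slope matches the generating map and $\mathbf{\Lambda}_\text{MLE}$ vanishes to machine precision.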
\begin{figure*}[ht]
\includegraphics[width=2\columnwidth]{fig_10.pdf}
\caption{\label{fig:pcv} Orthogonal basis vectors, $\mathbf{U}$ (top panel) and $\mathbf{V}$ (bottom panel), which map the model parameters and scattering coefficients, respectively, into the directly correlated space. For the scattering coefficients, the first order is shown in the top row and the second order in the bottom row. Each column represents one of the orthogonal basis vectors from most to least correlated (left to right). For some of these vectors there is a simple interpretation such as the second from the left -- $\theta_1$ and a vertical striping in the image. For others it is a combination of $\theta_i$ and a rich texture. For these cases further insight to the texture could be given by taking the inverse MST.}
\end{figure*}
\begin{figure}[ht]
\includegraphics[width=\columnwidth]{fig_11.pdf}
\caption{\label{fig:sv} Singular values of the cross-covariance matrix, given by the diagonal elements of $\Sigma$. These give the significance of the orthogonal vectors of Fig.~\ref{fig:pcv}: the greater the singular value of the cross-covariance, the larger the contribution to the regressor's prediction.}
\end{figure}
For a new image, we can now estimate a set of values $\theta$ along with an estimate of the uncertainty on $\theta$ according to the following algorithm. We start with the MST feature extraction:
\begin{enumerate}
\item Compute the first and second order scattering coefficients of the image on a $4\times4$ grid (see Fig.~\ref{fig:grid}).
\item Discard all but the second column from the grid for each of the 2 sets of coefficients.
\item Compute $\log_{10}$ of the scattering coefficients and flatten into a vector to get $\mathbf{S}$.
\item Standard normal scale using the mean and standard deviation estimated on the training set to get $\mathbf{\tilde S}$.
\item Project onto principal components to get $\mathbf{X} = \mathbf{\tilde S} \mathbf{V}$.
\end{enumerate}
We then do the regression:
\begin{enumerate}
\item Compute $\mathbf{Y}_\text{pred} = \mathbf{X}\mathbf{M}_\text{MLE}$.
\item Create a set of values consistent to within the error term
\begin{equation*}
\{\mathbf{Y}_\text{pred,i}\}_{i=1}^{N_\text{resamp}} = \mathbf{Y}_\text{pred} + \{\mathcal{N}_i(0,\mathbf{\Lambda}_\text{MLE})\}_{i=1}^{N_\text{resamp}}.
\end{equation*}
\item Compute $\{\tilde \theta_i\} = \{\mathbf{Y}_{\text{pred},i}\mathbf{U}^{-1}\}$.
\item Compute $\{\theta_i\}$ by inverting standard normal scaling of $\tilde \theta$ using the mean and standard deviations of $\tilde \theta$ computed from the training set and then invert the $\log_{10}$ scaling performed on all but the last component of $\theta$.
\item We now have an estimate of the distribution of $\theta$ consistent with the original image. We may report the prediction and error as means and standard deviations, or as percentiles (\textit{e.g.}, report the $50^{th}$ percentile as the prediction and the $2.5$-percentile and $97.5$-percentile as lower and upper bounds). The estimates of the distribution are subject to the caveats expressed at the end of this section.
\end{enumerate}
In our case, inverting the transformations leads to an asymmetric distribution of $\theta$ values consistent with the original image, so here we will report the $95\%$ confidence interval and the mode of the distribution, rather than the mean and standard deviation, for any predictions.
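The resampling steps of the regression stage can be sketched as follows. The function name is ours, the median is used as the point estimate for simplicity (the text reports the mode), and the phase $\theta_{11}$ is assumed to be the last component so that it alone escapes the inverse $\log_{10}$ scaling.

```python
import numpy as np

def theta_with_uncertainty(Y_pred, Lam, U, mean, std, n_resamp=4000, seed=0):
    """Steps 2-5 of the regression stage: resample the regression error,
    rotate back out of the correlated space with U^-1, undo the standard
    normal scaling, and invert the log10 scaling on all components but
    the phase theta_11 (assumed last)."""
    rng = np.random.default_rng(seed)
    Y = Y_pred + rng.multivariate_normal(np.zeros(len(Lam)), Lam, n_resamp)
    th = (Y @ np.linalg.inv(U)) * std + mean      # tilde-theta -> scaled
    th[:, :-1] = 10.0 ** th[:, :-1]               # invert log10 scaling
    lo, med, hi = np.percentile(th, [2.5, 50.0, 97.5], axis=0)
    return med, lo, hi                            # estimate and 95% interval
```

With an identity rotation and a small diagonal $\mathbf{\Lambda}$, the resampled log-scaled components concentrate around unity after the inverse $\log_{10}$, bracketed by the reported interval.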
Before moving on to discuss results, we note that the cross-covariance matrix computes the set of basis vectors along which the quantities $\tilde{\theta}$ and $\tilde{\mathbf{S}}$ exhibit the strongest linear correlation. As a result, any nonlinear relationships between $\tilde{\theta}$ and $\tilde{\mathbf{S}}$ will not be recoverable by linear regression. First attempts using the linear regression given by Eq.~\eqref{eqn:mleM}, when truncating the dimensionality of the principal components to the number of singular values, showed nonlinear bows. A more generalized model was constructed to capture this nonlinear behavior by including the full SVD. The nonlinear aspects are captured by the nonlinear dependence of the additional SVD components on the reduced set of SVD components.
A slight modification to the predictive model is required to mitigate numerical issues with this more generalized model. This is a repercussion of using the full SVD on the cross-covariance matrix which causes the quantity $\mathbf{X}^T\mathbf{X}$ from Eq.~\eqref{eqn:mleM} to be ill-conditioned. Applying $L2$-regularization to the predictive model is shown to be an effective mitigation procedure. The predictive model with $L2$-regularization is
\begin{equation}
\label{eqn:predictive_model}
\hat{\mathbf{Y}} = \mathbf{X} (\mathbf{X}^T\mathbf{X} + \lambda \mathbf{I})^{-1}(\mathbf{X}^T\mathbf{Y})
\end{equation}
where $\lambda$ is optimized through cross-validation, maximizing $R^2$, where
\begin{equation}
R^2 \equiv 1 - \frac{\sum_i (\mathbf{Y}_i - \hat{\mathbf{Y}}_i)^2}{\sum_i (\mathbf{Y}_i - \bar{\mathbf{Y}})^2}.
\end{equation}
We find that the optimum value of $\lambda_{CV}$ is $0.005614$. Note that the regularization is an assumption on the gradients, or correlation; this does have an effect on the estimate of the uncertainty, as will be discussed at the end of this subsection.
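The predictive model of Eq.~\eqref{eqn:predictive_model} is ordinary ridge regression, whose closed form and cross-validated choice of $\lambda$ can be sketched in a few lines of NumPy. The data below are synthetic, and the $\lambda$ grid (which includes the paper's $0.005614$) is an illustrative assumption.

```python
import numpy as np

def ridge_fit(X, Y, lam):
    """Closed-form L2-regularized least squares: M = (X^T X + lam I)^-1 X^T Y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

def r2_score(Y, Y_hat):
    """Coefficient of determination over all outputs."""
    ss_res = np.sum((Y - Y_hat) ** 2)
    ss_tot = np.sum((Y - Y.mean(axis=0)) ** 2)
    return 1.0 - ss_res / ss_tot

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 10))                      # stand-in MST features
Y = X @ rng.normal(size=(10, 3)) + 0.1 * rng.normal(size=(200, 3))

# Choose lambda on a validation split by maximizing R^2.
X_tr, Y_tr, X_va, Y_va = X[:150], Y[:150], X[150:], Y[150:]
lams = [1e-4, 1e-3, 5.614e-3, 1e-2, 1e-1]
best = max(lams, key=lambda lam: r2_score(Y_va, X_va @ ridge_fit(X_tr, Y_tr, lam)))
val_r2 = r2_score(Y_va, X_va @ ridge_fit(X_tr, Y_tr, best))
print("best lambda:", best, " validation R^2:", round(val_r2, 4))
```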
Image realizations ($N=2048$) are produced with the synthetic model from a random sampling of the log-uniformly distributed model parameters. The statistical properties of these distributions are determined by visually confirming that the helices produced encompass what is reasonable to expect from experiment. We chose the log-uniform distribution over a uniform distribution because we were not certain of the order of magnitude of the model parameters; had we chosen a uniform distribution, we would not only have undersampled the smaller scales, we would have biased the estimation toward the largest scale in the population. For each $\mathbf{\theta}$ realization, a set of features is extracted from its corresponding synthetically generated image using the MST. The data set is randomly separated into training ($50\%$), validation ($25\%$), and test ($25\%$) sets. The training set is used for model training, the validation set is used for cross-validation and model selection, and the test set is set aside to assess the performance of the selected model.
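The sampling and splitting procedure can be sketched as follows; the parameter names and bounds below are illustrative assumptions, not the actual model ranges.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 2048  # number of image realizations

# Log-uniform draw of each model parameter over assumed (hypothetical) bounds.
bounds = {"strand_thickness": (0.01, 1.0), "helical_wavelength": (0.1, 10.0)}
theta = {name: np.exp(rng.uniform(np.log(lo), np.log(hi), size=N))
         for name, (lo, hi) in bounds.items()}

# Random 50/25/25 train/validation/test split of the realization indices.
idx = rng.permutation(N)
n_tr, n_va = N // 2, N // 4
train, val, test = idx[:n_tr], idx[n_tr:n_tr + n_va], idx[n_tr + n_va:]
print(len(train), len(val), len(test))  # 1024 512 512
```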
The scatter plots in Fig.~\ref{fig:mstr_performance_param} and Fig.~\ref{fig:mstr_performance_param_2} show predicted vs. actual morphological parameters of the test set in the $\log_{10}$-scaled MST coefficient space. There is reasonable agreement over a large range of parameter space: the correlation matrix is strongly diagonal with values close to 1, and the regression coefficient is good, $R^2 = 0.91$, indicating that the regression performs well.
\begin{figure*}[ht]
\includegraphics[width=2\columnwidth]{fig_12.pdf}
\caption{\label{fig:mstr_performance_param} MST regressor performance. The scatter plots show predicted vs. actual parameters in $\log_{10}$-space. The first several components demonstrate very good performance, while the later components perform somewhat worse. This may be indicative of nonlinearity that the linear regression cannot explain, or variance caused by the unexplained $\zeta$ parameters. Units for all $\theta_i$ are arbitrary, but consistent with those shown in the histograms of Fig.~\ref{fig:classes}.}
\end{figure*}
\begin{figure}[ht]
\includegraphics[width=\columnwidth]{fig_13.pdf}
\caption{\label{fig:mstr_performance_param_2} A further look at the MST regressor performance: the correlation plot shows that the correlation of the predicted parameters with the actual parameters is strongly diagonal and close to 1.}
\end{figure}
Before moving on, we need to discuss the uncertainty estimated by this regression. The uncertainty in the regression due to uncorrelated experimental noise is captured (if the experimental noise is Gaussian and of the size that was modeled), but uncertainty due to correlated experimental noise is not. While estimation uncertainty due to the stochastic nature of the synthetic images is captured, both the random and systematic error arising from the limited fidelity of these synthetic images relative to the experimental images, and from incorrect statistical assumptions in this analysis, are not. Because of this, the random error could be larger than estimated, and there could be systematic error or bias in the estimate. Interestingly, this leads to an iterative approach to improving the models to reduce the systematic error and correct for the underestimation of the random error. When the results of this regression are applied to experimental or simulation images and compared to a physical theory, a systematic bias and/or greater scatter in the regressed model parameters than predicted by the uncertainty could be observed. This is a symptom that the regression model and/or the physical model should be improved; iterative improvements to the models can be made to remove these pathologies in the error.
\subsection{\label{sec:metric_verify}Metric design verification}
Table \ref{tab:optimization} shows cross-validation results which aided in the design of the MST metric. The \textit{base} metric was constructed using the aforementioned design criteria, most of which were inherited from previous image classification applications of the MST. We found a modest improvement in performance when using a much larger training set (four times larger). From our cross-validation, there was a modest drop in performance when not $\log_{10}$-scaling the MST coefficients, when not using the second order MST ($m=1$ only), when using integrated intensity instead of max value normalization, and when decreasing the number of MST filter rotations.
\begin{table}
\caption{\label{tab:optimization}Cross-Validation results. All models are slight deviations from the Base Model defined by 2048 training images, max normalization, $m$=2, $4 \times 4$-gridding, 8 rotations and $\log_{10}$-features.}
\begin{tabular}{lcc}
\\
\textbf{Model} && \textbf{Validation Set} $\mathbf{R}^2$\\
\hline
Base && $0.9094$\\
\hline
$8192$ training images && $0.9237$\\
$m=1$ && $0.7238$\\
4 rotations && $0.8449$\\
non-$\log_{10}$ on features && $0.8752$\\
integrated intensity normalization && $0.8849$\\
\hline
\end{tabular}
\end{table}
\section{\label{sec:results}Applications and results}
There are two primary cases of interest for applying our method. The first is to quantitatively compare experimental data to simulation. The second is to compare morphology between different experiments and quantify what those differences are. In doing so, we will be able to make statistically sound inferences about discrepancies in morphology, and the method will offer insight into the physical mechanisms causing the differences. To this end, we conduct some initial studies which show how the method will be used.
\subsection{\label{sec:sim_exp}Simulation-to-experiment comparison}
The experimental images are obtained from the Continuum X-ray Imager instrument fielded on Z.\citep{Gomez2014} We include self-emission images from AR4.5, AR6 and AR9 MagLIF experiments fielded on Sandia's Z-Machine -- experiments z3017, z2839 and z3018, respectively.\citep{Ampleford2019} For each of the experimental images, synthetic x-ray self-emission images are taken from 3D radiation magnetohydrodynamic (rad-MHD) \texttt{GORGON} \citep{Chittenden2004} simulations modeling a corresponding experiment. These simulations are run with continuous virtual boundary edges at a height of $5$~mm and the synthetic images are calculated using a ray-tracing algorithm onto a virtual image plate. The experimental images have been vertically cropped down to a $5$~mm height to compare with their simulated counterparts. The experimental images continue to have similar structure over their full height of about $1$~cm. Figure \ref{fig:sim_exp_stag} shows the comparison of simulated and experimental self-emission images at several different liner aspect ratios.
\begin{figure}[ht]
\includegraphics[width=\columnwidth]{fig_14.pdf}
\caption{\label{fig:sim_exp_stag} Stagnation images from selected MagLIF experiments at varying aspect ratios and their corresponding simulated (\texttt{GORGON}/MHD) counterparts.}
\end{figure}
There are distortions to both the experimental and simulation x-ray self-emission images. For the experimental images there are instrumental responses, noises, and calibrations that cannot be explicitly estimated. For the simulations there are approximations to the physics, and numerical error in the calculations. This leads to the ``true'' image being shifted into different \textit{domains} for the experiments and the simulations. In order to address this discrepancy, we have developed a \textit{background subtraction} method. The method works by projecting out a background vector $\mathbf{B}_1$ given by the first principal component of the covariance between simulation and experiment, such that $\tilde{S}^\text{AR}_\text{domain} = S^\text{AR}_\text{domain} - \text{proj}(S^\text{AR}_\text{domain}, \mathbf{B}_1)$, where AR is the aspect ratio, and the domain represents whether the features are from simulated or experimental images. Physically, this assumes that the dominant difference between an experiment and its corresponding simulation is due to this experimental and/or simulation distortion. This method is verified by an increase in the class separation and better precision after the background subtraction is done.
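The projection step amounts to removing the component of each feature vector along $\mathbf{B}_1$. A minimal sketch follows; the background direction and feature vectors are synthetic stand-ins (in the paper, $\mathbf{B}_1$ comes from the first principal component of the simulation/experiment covariance).

```python
import numpy as np

def project_out(S, B1):
    """Remove the component of each row of S along the background direction B1."""
    b = B1 / np.linalg.norm(B1)
    return S - np.outer(S @ b, b)

rng = np.random.default_rng(3)
# Stand-in MST feature vectors (rows), contaminated by a common background.
B1 = rng.normal(size=50)
S_exp = rng.normal(size=(6, 50)) + 3.0 * np.outer(rng.normal(size=6), B1)

S_clean = project_out(S_exp, B1)
# After projection, no residual component remains along B1.
print(np.allclose(S_clean @ (B1 / np.linalg.norm(B1)), 0.0))  # True
```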
This process of background subtraction is demonstrated in Fig.~\ref{fig:bs}. The MST coefficients for the AR$4.5$ case are shown for the simulated and experimental data in the left two columns of the figure. There are apparent qualitative similarities of the MST coefficients between the two cases. However, there looks to be some nontrivial background present in the experimental data, which our approach projects out. Specifically, if we take the first principal component of the covariance between simulation and experiment, we find the center column of Fig.~\ref{fig:bs} -- the background, $\mathbf{B}_1$. After projecting out this background component from the experimental data (the right two columns of Fig.~\ref{fig:bs}), we can observe similarities and differences between the simulation and experimental morphologies by comparing the overall separation of the scattering coefficients, $R_{kl}^2$, computed as pairwise Euclidean distances, $\sigma_{kl}$, normalized by the average intra AR class distance,
\begin{equation}
R_{kl}^2 = \frac{\sigma_{kl}^2}{\text{mean}(\sigma_{ii}^2)},
\end{equation}
where $k$ and $l$ refer to the AR index of simulation and experiment, respectively. We can then visualize how well separated they are by plotting
\begin{equation}
\label{eqn:prob_classification}
\Omega_{kl} = N_l e^{-|R_{kl}|},
\end{equation}
which is analogous to the surrogate confusion matrix, $\Omega_{ij}$, defined in Eq.~\eqref{eqn:sepdec}. Precision and separation can similarly be defined. The effectiveness of the background subtraction is quantified by an improvement in the precision from 0.40 to 0.08, and an increase in the separation from 1.9 to 4.8. There is essentially no cluster separation before the background subtraction is done, and a reasonable separation after it is done. The character of the background, $\mathbf{B}_1$, in Fig.~\ref{fig:bs} indicates long-wavelength vertical striping. This is more likely to be a distortion of the experimental image than a deficiency in the physics of the simulation.
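A sketch of the $R_{kl}^2$ and $\Omega_{kl}$ computation follows. Reading $\sigma_{kl}$ as the centroid distance between simulation class $k$ and experiment class $l$ is our assumption, and the clustered feature vectors are synthetic stand-ins.

```python
import numpy as np

def omega_matrix(sim, exp, N_l=1.0):
    """Omega_kl = N_l * exp(-|R_kl|), with R_kl^2 = sigma_kl^2 / mean(sigma_ii^2).

    sim, exp: dicts mapping AR label -> (n_samples, n_features) arrays.
    sigma_kl is taken here as the centroid distance between the classes.
    """
    labels = sorted(sim)
    c_sim = {k: sim[k].mean(axis=0) for k in labels}
    c_exp = {k: exp[k].mean(axis=0) for k in labels}
    sigma2 = np.array([[np.sum((c_sim[k] - c_exp[l]) ** 2) for l in labels]
                       for k in labels])
    R2 = sigma2 / np.mean(np.diag(sigma2))     # normalize by intra-AR term
    return N_l * np.exp(-np.sqrt(R2))          # |R_kl| = sqrt(R_kl^2)

rng = np.random.default_rng(4)
centers = {4.5: 0.0, 6.0: 2.0, 9.0: 4.0}       # well-separated toy classes
sim = {a: rng.normal(c, 0.3, size=(20, 8)) for a, c in centers.items()}
exp = {a: rng.normal(c, 0.3, size=(20, 8)) for a, c in centers.items()}
Om = omega_matrix(sim, exp)
# Matching sim/exp classes dominate the diagonal of Omega.
```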
\begin{figure}[ht]
\includegraphics[width=\columnwidth]{fig_15.pdf}
\caption{\label{fig:bs} Background Subtraction. The top row are the first order MST coefficients, and the second row are the second order MST coefficients. The two columns on the left are before the background is projected out, the center column is the background derived from the first principal component of the covariance between simulation and experiment, and the right two columns are after the background is projected out. This is for the AR$4.5$ case.}
\end{figure}
The quantification of the similarities and differences between the simulations and experiments is shown in Fig.~\ref{fig:sim_vs_exp}. This demonstrates that the simulations are generally close in MST space to the corresponding experiment, with AR4.5 simulation showing some pairwise similarity to the AR9 experiment.
\begin{figure}[ht]
\includegraphics[width=\columnwidth]{fig_16.pdf}
\caption{\label{fig:sim_vs_exp} The surrogate confusion matrix $\Omega_{kl}$ defined by Eq.~\eqref{eqn:prob_classification} quantifying similarities and differences between the simulations (``sim'') and experiments (``exp''). The probability of classifying as $C_\text{sim}$ given that it is $C_\text{exp}$, $P(C_\text{sim} | C_\text{exp})$, is plotted as the image.}
\end{figure}
\subsection{Experiment-to-experiment comparison and analysis}
Finally, we finish with a discussion of differentiating morphology between experiments. Figure \ref{fig:coat_vs_uncoat_mst} shows the experimental plasma stagnation columns alongside the synthetic model for the mean prediction, together with their first and second order MST coefficients, for two different liner designs. To the left is experiment z3236, which utilized a dielectric-coated AR9 target, while to the right is experiment z3289, which had an uncoated AR6 liner. The dielectric coating on the exterior of the liner is expected to reduce the amount of magnetic Rayleigh-Taylor growth by reducing the electro-thermal instability that is seeding it. There are other significant differences between these two experiments in the amplitude of the current drive, preheating laser pulse profile, applied axial magnetic field, magnetic field axial uniformity, and liner configurations. There are obvious differences between the MST coefficients for the two cases; but what is different? To answer this question, we applied the regression derived in Sec.~\ref{sec:regression}. The regressed synthetic parameters and their uncertainties for MagLIF experiments z3236 and z3289 are shown in Table \ref{tab:coat_vs_uncoat_fit}. The listed uncertainties represent the $95\%$ confidence intervals and are obtained from the multivariate Gaussian distribution of the test data, $\{\mathbf{Y}_\text{pred,i}\}_{i=1}^{N_\text{resamp}}$. The estimates of selected parameters of the synthetic helical model, the $\theta$'s, along with their uncertainties, are plotted for the two cases side by side in Fig. \ref{fig:coat_vs_uncoat_fit_select}. For parameters such as radius of the helix and the amplitude of the low frequency axial brightness perturbations, there are negligible differences. For other parameters such as the strand thickness, there are modest differences.
For yet other parameters, such as strand length, helical wavelength, amplitude of the high frequency axial brightness perturbations, wavelength of the high frequency axial brightness perturbations, and wavelength of the low frequency axial brightness perturbations, there are significant differences. The reader will also note that the synthetic images given by the mean prediction capture a number of physical features, such as the helical wavelength, reasonably well.
\begin{figure*}[ht]
\includegraphics[width=2\columnwidth]{fig_17.pdf}
\caption{\label{fig:coat_vs_uncoat_mst} Comparison of two experiments using the MST. To the left is shot z3236 with a coated AR9 liner. To the right is shot z3289 with an uncoated AR6 liner. Shown, for both cases, are the original stagnation image on the left, the MAP synthetic image, and the MST on the right (both first and second order coefficients). An ensemble of fit synthetics is shown via animation (\href{https://youtu.be/e5Dv1VcrOzc}{Multimedia View} for z3236, and \href{https://youtu.be/UqxQYHLj6BQ}{Multimedia View} for z3289). Shown in these animations are the experimental image (left) and a member of the ensemble of fit synthetics (right).}
\end{figure*}
\begin{table}
\caption{\label{tab:coat_vs_uncoat_fit}MST Regressor determined stagnation column morphological parameters from MagLIF experiments z3236 and z3289. Units for all $\theta_i$ are arbitrary, but consistent with those shown in the histograms of Fig.~\ref{fig:classes} and in the cross plots of Fig.~\ref{fig:mstr_performance_param}.}
\begin{tabular}{lcccc}
\\
&& \textbf{z3236} && \textbf{z3289}\\
&& \textbf{Coated AR9} && \textbf{Uncoated AR6}\\
\hline
$\theta_{1}$ && $0.0720$ + $(-0.0034, 0.0036)$ && $0.0651$ + $(-0.0031, 0.0032)$ \\
$\theta_{2}$ && $2.2335$ + $(-0.1425, 0.1546)$ && $1.4954$ + $(-0.0950, 0.1013)$ \\
$\theta_{3}$ && $2.1677$ + $(-0.3083, 0.3509)$ && $3.9584$ + $(-0.5638, 0.6389)$ \\
$\theta_{4}$ && $1.5327$ + $(-0.2560, 0.3003)$ && $0.3120$ + $(-0.0509, 0.0622)$ \\
$\theta_{5}$ && $0.0988$ + $(-0.0145, 0.0176)$ && $0.1975$ + $(-0.0282, 0.0337)$ \\
$\theta_{6}$ && $0.2293$ + $(-0.0579, 0.0805)$ && $0.1807$ + $(-0.0452, 0.0609)$ \\
$\theta_{7}$ && $5.3522$ + $(-1.3159, 1.7144)$ && $4.8469$ + $(-1.1991, 1.5604)$ \\
$\theta_{8}$ && $0.0286$ + $(-0.0107, 0.0175)$ && $0.0136$ + $(-0.0051, 0.0084)$ \\
$\theta_{9}$ && $0.1756$ + $(-0.0568, 0.0838)$ && $0.0116$ + $(-0.0037, 0.0058)$ \\
$\theta_{10}$ && $6.8597$ + $(-3.1944, 5.8555)$ && $79.247$ + $(-36.568, 68.318)$ \\
$\theta_{11}$ && $0.4819$ + $(-0.4050, 0.4027)$ && $2.0195$ + $(-0.4009, 0.4079)$ \\
\hline
\end{tabular}
\end{table}
\begin{figure*}[ht]
\includegraphics[width=2\columnwidth]{fig_18.pdf}
\caption{\label{fig:coat_vs_uncoat_fit_select} Regressed parameters of the synthetic helical model for the two experiments shown in Fig.~\ref{fig:coat_vs_uncoat_mst}. The values for shot z3236 are shown in blue on the left, and for shot z3289 in green on the right. Plotted are modes with error bars showing the 95\% confidence interval. The error bars are asymmetric because this is not in log space. The parameters are (from left to right): strand thickness (mm), strand length (mm), helical wavelength (mm), radius of the helix (mm), amplitude of the high frequency axial brightness perturbations (arbitrary units), wavelength of the high frequency axial brightness perturbations (mm), amplitude of the low frequency axial brightness perturbations (arbitrary units), and wavelength of the low frequency axial brightness perturbations (mm).}
\end{figure*}
\section{Conclusions}
We have designed and optimized a metric of stagnation morphology using the MST. This was based on both classification of ensembles of synthetic stagnation images, and regression of those synthetic stagnation images to the morphology parameters used to generate them. Excellent performance of both the classifier and regressor was obtained. We demonstrated that the MST provides a convenient basis in which to project out discrepancies between simulated and experimental images.
This metric is then able to be used to test hypotheses, such as if the AR of the liner makes significant changes to the stagnation morphology, and whether the rad-MHD computer simulations predict the changes to the stagnation morphology. For the experimental and simulation data analyzed in Sec.~\ref{sec:sim_exp}, the cluster separation was almost 5 (that is, at a 5 sigma level). The experimental and simulation images in Fig.~\ref{fig:sim_exp_stag} show very subtle differences that could be challenging to quantify without the use of the MST metric, yet show a very significant separation using the metric. We leave conclusions about the significance of systematic changes in the morphology with AR and the use of dielectric coatings and their prediction by simulations for future publications which will analyze larger and more complete datasets.
Finally, the regression enabled the morphology parameters of the stagnation to be estimated with uncertainty. It should be noted that nonlinear aspects of this regression were captured by including more components of the MST SVD vectors in the linear regression of the MST to the morphological parameters. Before the development of this methodology, only a very rough point estimate of some, not all, of the parameters of the stagnation images could be made (i.e., helical wavelength and strand thickness). This was complicated by the stochastic nature of the image, and there was no estimate of the uncertainty. Now, it can be concluded that shot z3236, when compared to shot z3289, has a modest increase in strand thickness, and significant increases in strand length, helical wavelength and the wavelength of the low frequency axial brightness perturbations. This shot has longer and larger physical structures. Unfortunately, causal relationships cannot be inferred (for instance, the effect of the dielectric coating) because of the number of variables that were changed between the shots. Work is underway to acquire and analyze a larger set of shots with more systematic variation in order to establish the causal relationships.
The MST metric space does appear to be a low-dimensional representation of the stagnation images. The affine classifier showed little improvement after about 4 dimensions, and the SVD of the cross-covariance of the morphology parameters of the synthetic model and the MST contained most of the variance within the first four components.
There are several ways that this research can be improved and expanded upon. The model has been trained on an inherently 2D synthetic data set, whereas the experimental images are projections of complex 3D physical systems onto 2D image plates. The synthetic model also has a significant amount of symmetry that is not seen in the experimental images, and the experimental noise model does not include correlations that might exist in the data. This work could be expanded to use a 3D synthetic model with less symmetry, and a more realistic noise model based on the experimental data, to address these issues.
Although the use of more components of the MST SVD vectors did reduce the nonlinear artifacts of the linear regression, one could apply some of the modern nonlinear regressions, both shallow and deep. There are still some experimental images which are too noisy, or exhibit other artifacts, which preclude our ability to get reliable morphology parameter estimates. Additional machine learning and data augmentation methods, as well as a larger database of experimental images, could address this issue.
The connection of the MST to the underlying physics of the rad-MHD, and its emergent behavior, is being explored by our ongoing research.\citep{glinsky.et.al.19, glinsky.11} For example, we are addressing questions such as: what is the formal connection of the MST to physical dynamics of both classical and quantum field systems? Can a surrogate for the rad-MHD evolution be constructed using the MST, and can the fixed point, that is, emergent behavior, be extracted from that surrogate? What is the expression of helicity and energy in the MST space? What is the relationship of the MST to algebraic topology, that is, concepts such as manifold curvature and the Atiyah-Singer index theorem?\citep{atiyah1963index}
Finally, we emphasize that the \textit{background subtraction} in the MST metric space is essential to obtain a quantitative metric which can be used to compare morphology of simulation and experiment. By studying the variation between and within datasets in this metric space, the distortions and system responses can be characterized and removed. The simple case that we presented in Sec.~\ref{sec:sim_exp}, using only six samples, demonstrated its potential, but extensions of this work to much larger datasets in future work will provide further clarification on the usefulness of the background subtraction procedure. It is also not clear what is being compensated by the background subtraction -- distortions to the experimental data, deficiencies in the physics of the simulations, or both. The character of the background, vertical stripes, is suggestive of an experimental distortion, but there is not enough data to draw any conclusions.
\section{Acknowledgments}
We would like to thank St\'ephane Mallat for many useful discussions, suggestions, and providing a computer software implementation of his Scattering Transformation. This research was funded by the Sandia National Laboratories' Laboratory Directed Research and Development (LDRD) program. Sandia National Laboratories is a multimission laboratory managed and operated by National Technology and Engineering Solutions of Sandia LLC (NTESS), a wholly owned subsidiary of Honeywell International Inc., for the U.S. Department of Energy's National Nuclear Security Administration (NNSA) under contract DE-NA0003525. This paper describes objective technical results and analysis. Any subjective views or opinions that might be expressed in the paper do not necessarily represent the views of the U.S. Department of Energy or the United States Government. The data that support the findings of this study are available from the corresponding author upon reasonable request.
This presentation reports an innovative application of data analysis methods derived from
disciplines other than physics, namely economy and ecology, to physics software problems.
They concern the analysis of inequality, trend analysis and the analysis of diversity.
The analysis of inequality exploits statistical methods originating from econometrics;
trend analysis is typical of economics and environmental sciences;
the analysis of diversity is based on concepts derived from ecology and treats software as an ecosystem.
The exploration of these methods is motivated by concrete requirements of ongoing projects
concerning Geant4 \cite{g4nim, g4tns} physics validation and Geant4 maintainability assessment;
nevertheless, their scope of application in physics analysis is wider.
\section{Inequality analysis}
\label{sec_econom}
The need to detect and estimate inequality within a data sample may arise in a variety
of physics scenarios.
We investigated its application in the context of evaluating software quality metrics \cite{ronchieri_chep2015},
where inequality analysis helps aggregate the sparse information associated with
individual elements (files, classes) of a software package into a single variable,
which summarizes the distribution of the measurements.
A set of statistical methods has been developed in the context of econometrics to
identify and quantify inequality
\cite{ineq_handbook}.
In their original context they are usually applied to evaluate the distribution of resources
within a country.
For instance, the most common measure of inequality, the Gini index,
is a measurement of the income distribution of a country's inhabitants.
It is a number between 0 and 1, which is based on residents' net income;
it measures the gap between the rich and the poor, with 0 representing perfect
equality among the inhabitants and 1 representing perfect inequality.
In addition to the Gini index, we evaluated other inequality measures: they include
the Ricci-Schutz coefficient (also known as Pietra index), Theil's entropy measure and Atkinson's measure.
We will discuss their specific characteristics and the role they play in the analysis of
Geant4 maintainability in the full paper.
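These measures have simple closed forms; below is a sketch of standard textbook implementations (using an aversion parameter eps $\ne 1$ for the Atkinson measure), applied to one perfectly equal and one highly unequal toy sample.

```python
import numpy as np

def gini(x):
    """Gini index of a positive sample (0 = perfect equality)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    return 2.0 * np.sum(np.arange(1, n + 1) * x) / (n * x.sum()) - (n + 1) / n

def pietra(x):
    """Ricci-Schutz (Pietra) coefficient: half the relative mean deviation."""
    x = np.asarray(x, dtype=float)
    return np.sum(np.abs(x - x.mean())) / (2.0 * x.sum())

def theil(x):
    """Theil entropy measure."""
    r = np.asarray(x, dtype=float) / np.mean(x)
    return float(np.mean(r * np.log(r)))

def atkinson(x, eps=0.5):
    """Atkinson measure with inequality-aversion parameter eps (eps != 1)."""
    x = np.asarray(x, dtype=float)
    ede = np.mean(x ** (1.0 - eps)) ** (1.0 / (1.0 - eps))
    return 1.0 - ede / x.mean()

equal = np.ones(100)                         # perfect equality
skewed = np.array([1.0] * 99 + [1000.0])     # extreme inequality
print(gini(equal), round(gini(skewed), 3))
```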
An example of inequality analysis is shown in Figs. \ref{fig_gini}-\ref{fig_theil}, which concern the Halstead Mental Effort metric calculated over
the \textit{solids} package of Geant4.
One can observe similarities across the various inequality measures, although their numerical values
are different.
\begin{figure}
\centerline{\includegraphics[angle=0,width=8cm]{Gini_version.eps}}
\caption{The Gini index calculated over Halstead Mental Effort measure in Geant4
solids package, as a function of Geant4 version. This plot is the result of a preliminary analysis and is shown as an example of the outcome
of inequality analysis methods.}
\label{fig_gini}
\end{figure}
\begin{figure}
\centerline{\includegraphics[angle=0,width=8cm]{RS_version.eps}}
\caption{The Pietra index calculated over Halstead Mental Effort measure in Geant4
solids package, as a function of Geant4 version. This plot is the result of a preliminary analysis and is shown as an example of the outcome
of inequality analysis methods.}
\label{fig_RS}
\end{figure}
\begin{figure}
\centerline{\includegraphics[angle=0,width=8cm]{Atkinson_version.eps}}
\caption{The Atkinson index calculated over Halstead Mental Effort measure in Geant4
solids package, as a function of Geant4 version. This plot is the result of a preliminary analysis and is shown as an example of the outcome
of inequality analysis methods.}
\label{fig_atkinson}
\end{figure}
\begin{figure}
\centerline{\includegraphics[angle=0,width=8cm]{Theil_version.eps}}
\caption{The Theil index calculated over Halstead Mental Effort measure in Geant4
solids package, as a function of Geant4 version. This plot is the result of a preliminary analysis and is shown as an example of the outcome
of inequality analysis methods.}
\label{fig_theil}
\end{figure}
Currently, we are evaluating the application of inequality analysis to overcome some
limitations of inductivism in the context of simulation validation.
\section{Trend analysis}
Trend analysis exploits statistical methods to spot an underlying pattern in a
series of data (usually a time series), which can be distinguished from
randomness.
It is widely used in disciplines such as economics, finance and environmental sciences.
The need of performing a trend analysis may also arise in physics software scenarios.
An example is illustrated in Fig. \ref{fig_trend}, which represents the
evolution of compatibility with experiment of a simulated observable (the
fraction of backscattered electrons, in this case), produced with the same user
application code \cite{tns_ebscatter1, tns_ebscatter2, tns_ebscatter3}, but using different Geant4 versions.
If the reference experimental data are unchanged, one expects that compatibility
with experiment would remain unchanged (within statistical fluctuations) over
different Geant4 versions, or at most would improve with time, if
Geant4 itself is improved in later versions.
Otherwise, evolution of the observable towards worse compatibility with
experiment could be attributed to deterioration of the Geant4 kernel.
Fig. \ref{fig_trend} qualitatively hints to some apparent
downward trend.
In such a scenario one requires the ability to discern whether the apparent
degradation of compatibility with experiment is statistically significant, so
that it would justify appropriate actions both by users (for instance, sticking
to an older version of Geant4 for production) and by Geant4 developers
(investigating the introduction of defects into Geant4 code).
\begin{figure}
\centerline{\includegraphics[angle=0,width=8cm]{eff_urban_range8.eps}}
\caption{Evolution of compatibility with experiment (``efficiency'') of a simulation configuration versus
Geant4 version. }
\label{fig_trend}
\end{figure}
Various statistical analysis methods are available for trend analysis.
We investigated the use of the Mann-Kendall test \cite{mann, kendall}, which tests whether to reject the
null hypothesis (H$_0$: no monotonic trend) in favour of the alternative
hypothesis (H$_1$: monotonic trend is present, e.g. downward trend in the case of Fig. \ref{fig_trend}).
In the scenario of Fig. \ref{fig_trend} the p-value resulting from the
Mann-Kendall test is 0.007, which corresponds to rejecting with 0.01
significance the null hypothesis that the observed pattern is consistent with
randomness, in favour of the alternative hypothesis of some degradation in compatibility with experiment.
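For reference, the Mann-Kendall statistic is straightforward to implement directly. The sketch below uses the normal approximation without tie corrections, applied to an illustrative downward-drifting series (not the actual data of Fig.~\ref{fig_trend}); SciPy supplies the normal tail probability.

```python
import numpy as np
from scipy.stats import norm

def mann_kendall(y):
    """Two-sided Mann-Kendall trend test (normal approximation, no tie correction)."""
    y = np.asarray(y, dtype=float)
    n = y.size
    # S statistic: sum of signs of all forward pairwise differences.
    s = int(sum(np.sign(y[j] - y[i])
                for i in range(n - 1) for j in range(i + 1, n)))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    z = 0.0 if s == 0 else (s - np.sign(s)) / np.sqrt(var_s)  # continuity corr.
    return s, z, 2.0 * norm.sf(abs(z))

# Illustrative "efficiency vs. version" series with a downward drift.
y = [0.95, 0.94, 0.96, 0.92, 0.90, 0.91, 0.88, 0.86, 0.87, 0.84]
s, z, p = mann_kendall(y)
print(f"S = {s}, z = {z:.2f}, p-value = {p:.4f}")
```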
It is worthwhile to note that trend analysis over an extended range can identify more subtle effects than the mere comparison of just two scenarios (e.g., in the aforementioned example, comparing the outcome of simulation with only two Monte Carlo code versions), where genuine differences could be hidden by statistical fluctuations.
This analysis technique could be a powerful instrument to complement
validation and regression testing of Monte Carlo simulation systems,
as well as in other experimental application scenarios.
We performed trend analysis also over a set of software quality metrics discussed in section
\ref{sec_econom}.
In this context we studied the evolution of metrics calculated over several Geant4 versions.
For instance, evolution towards greater coupling between objects is observed in Fig. \ref{ckcbo_abstract},
while a decreasing trend is observed in Fig. \ref{ckcbo_child}, which concern abstract base classes in
the \textit{utils} and leaf (derived or non-derived) classes in the \textit{standard} packages of Geant4 electromagnetic physics, respectively.
In both cases the Mann-Kendall test rejects the hypothesis of randomness with 0.01 significance, in favour
of an alternative hypothesis of upward and downward trend, respectively.
Since various classes in the \textit{standard} package derive from abstract base classes in
the \textit{utils} package, caution should be exercised in appraising the apparent decrease in
coupling between objects observed in the \textit{standard} package.
\begin{figure}
\centerline{\includegraphics[angle=0,width=8cm]{ckcbo_abstract.eps}}
\caption{Evolution of the Coupling Between Objects software metric of abstract classes in the \textit{utils} package of Geant4 electromagnetic physics versus
Geant4 version. }
\label{ckcbo_abstract}
\end{figure}
\begin{figure}
\centerline{\includegraphics[angle=0,width=8cm]{ckcbo_child.eps}}
\caption{Evolution of the Coupling Between Objects software metric of leaf classes in the \textit{standard} package of Geant4 electromagnetic physics versus
Geant4 version. }
\label{ckcbo_child}
\end{figure}
\section{Conclusions}\label{sec: con}
Data analysis methods pertinent to disciplines other than physics are powerful
instruments in physics software applications.
Only a brief overview of our ongoing work in this innovative domain is given
in this summary; more extensive results will be discussed in the conference presentation and in the full paper.
\label{------------------------------------------------END-------------------------------}
\section{Introduction}
T~Tauri stars (TTSs) are young and low-mass ($\la 3$~M$_{\sun}$)
pre-main sequence stars with strong and complex magnetic fields and
a surrounding disc that is truncated near the corotation radius by
interaction with the magnetic field. From the observational point
of view, TTSs are split into two main groups: classical TTSs (CTTSs) and
weak-lined TTSs (WTTSs). CTTSs are accreting mass from the disc, whereas
WTTSs show no or only very weak spectral signatures of accretion. The
material in the inner part of the disc is ionized by the stellar
radiation and channelled through the magnetic field lines
\citep{uchida1984,koenigl1991}. The gas from the disc is
accelerated to almost free-fall velocity before it reaches
the stellar surface forming an accretion shock
\citep[see, e.g., the reviews by][]{bouvier2007,aig2013a}.
Detailed simulations of the interaction between the stellar field and
the inner disc show a complex dynamics of the magnetospheric
flow that depends on the field properties and its
stability \citep{romanova2012,kurosawa2013}.
Some analytical expressions for the hotspot shapes and the
magnetospheric radius have been provided by \citet{kulkarni2013}.
The interaction between the star, disc and magnetic field
produces an excess emission at different wavelengths that
affects the evolution of the disc itself and the
circumstellar environment. The atmospheric and magnetospheric
energy output is released mainly in the ultraviolet (UV) spectral
range. Thus, there is a relatively large number of spectral
features in the UV that can be used as potential
tracers of the physical conditions in TTS.
Different emission lines in the UV wavelength range
provide different information about the regions in which
they are formed, the involved physical processes and the
system geometry. For example, the Mg~II resonance
doublet at 2795.5 and 2802.7~\AA\ is produced in the chromospheres of TTSs
and is one of the strongest features in their UV spectra. Mg~II is
sensitive to, and can be used as a good tracer of, the atmosphere and
outflow/wind in TTSs
\citep[][Lopez-Martinez \& G\'omez de Castro, submitted]{ardila2002b,calvet2004,ingleby2013}.
N~V, C~IV, He~II and Si~IV are good tracers of hot gas and
accretion processes in TTSs. The relationship between these
lines and mass accretion in TTSs has been already studied
by different authors \citep{johnskrull2000,ardila2002a,
ingleby2011,yang2012,aig2012,ardila2013,aig2013b}.
The semiforbidden lines of the C~II] quintuplet
(wavelengths: $2324.21$, $2325.4$, $2326.11$, $2327.64$, $2328.83$~\AA)
are not observed in WTTSs; however, they are readily detected
in CTTSs, even in low mass accretors \citep{lamzin2000}.
This multiplet seems to be a very sensitive tracer of accretion
or outflows \citep{calvet2004,aig2005,ingleby2013}.
\citet{calvet2004} and \citet{ingleby2013} analysed these lines in low resolution spectra and found
a relationship between the C~II] luminosity and the accretion
luminosity.
The study of the C~II] flux ratios within a small range of
wavelengths provides a good opportunity to investigate TTS
properties because they are optically thin and their ratios
do not depend on the geometry of the accretion system and are
only slightly affected by the large uncertainties associated
with extinction determination. It is known that the relative
intensities of the emission lines of the C~II]
multiplet are sensitive to the electron density in the
range $10^8 \la n_{\rm e} \la 10^{10}$~cm$^{-3}$
\citep{stencel1981,hayes1984a,hayes1984b,keenan1986}.
Plasma in the magnetospheres and atmospheres of
CTTSs is within this density range. However, line blending makes
it difficult to identify the individual features and to measure the
line ratios \citep[see, e.g.,][observations of RU~Lup and DR~Tau, respectively]{lamzin2000,kravtsova2002}.
In this work, we present for the first time
a study of C~II] line ratios in a sample of 20 CTTSs
using 30 medium-resolution spectra. We found
the best-fitting spectrum to the data using
a grid of simulated profiles computed for a broad range
of electron densities and temperatures.
The log of observations, the characteristics of the
CTTSs sample and the profiles are described in Section~\ref{sample}.
The numerical method used to derive the individual
line fluxes and the properties of the radiating plasma
is presented in Section~\ref{plasma}, which also includes
the limitations of the method and the final results.
In Section~4, we present the plasma properties obtained with our procedure
and compare them with the accretion rates derived by \citet{ingleby2013}.
To conclude, in Section~5, we provide a brief summary of the main results.
\section{The C~II] profile of CTTSs}
\label{sample}
Our sample consists of the 27 CTTSs observed with the Space Telescope Imaging Spectrograph (STIS) on board
the \textit{Hubble Space Telescope} (\textit{HST}); no C~II] emission is detected in WTTSs.
Most of the sources (17 of 27) are located in the Taurus-Auriga molecular cloud. The rest are in $\eta$ Chamaeleontis (2), $\epsilon$ Chamaeleontis (1), Chamaeleon I (2), the TW Hydrae association (2), Orion (1) and Upper Scorpius (1). DK~Tau, HN~Tau \citep{correia2006}, CV~Cha \citep{bary2008} and UX~Tau \citep{nguyen2012} are binaries with companions at distances of $2.304$, $3.109$, $11.4$ and $5.9$ arcsec, respectively, that are resolved by STIS. T~Tau \citep{furlan2006}, FU~Ori \citep{wang2004} and DF~Tau \citep{unruh1998} are close binaries with separations of $0.7$, $0.5$ and $0.09$ arcsec, respectively. CS~Cha is a spectroscopic binary \citep{guenther2007}. Several stars show evidence of transitional discs, but they are still accreting: CS~Cha, DM~Tau, GM~Aur, TW~Hya and UX~Tau \citep{espaillat2010}. Jets/outflows have been detected in some sources of our sample: RY~Tau \citep{stonge2008}, DG~Tau \citep{coffey2008}, T~Tau \citep{furlan2006}, SZ~102 \citep{comeron2011}, AA~Tau, DF~Tau, HN~Tau and SU~Aur \citep{howard2013}.\\
The data set comprises 42 medium-resolution ($R\simeq 30\,000$) spectra obtained with the E230M grating; the log of observations is provided in Table~\ref{tab1}.
\begin{table}
\caption{Log of observations. \label{tab1}}
\begin{tabular}{ccccc}
\hline
Star & Obs date & Data set & Exp time & S/N \\
& (yy/mm/dd) & & (s) & \\ \hline
AA Tau & 11/01/07 & ob6ba7030 & 1462.2 & 3.21 \\ \hline
CS Cha & 11/06/01 & ob6bb6030 & 1785.2 & 1.65 \\ \hline
CV Cha & 11/04/13 & ob6b18020 & 2598.2 & 3.30 \\ \hline
CY Tau & 00/12/06 & o5cf03020 & 738 & 2.54 \\
& 00/12/06 & o5cf03030 & 282 & 1.75 \\ \hline
DE Tau & 10/08/20 & ob6ba8030 & 1388.1 & 3.58 \\ \hline
DF Tau & 99/09/18 & o5kc01020 & 1670.2 & 14.72 \\ \hline
DG Tau & 01/02/20 & o63l03010 & 2345 & 1.87 \\
& 01/02/20 & o63l03020 & 2923 & 2.66 \\
& 01/02/20 & o63l03030 & 2923 & 2.56 \\
& 01/02/20 & o63l03040 & 2923 & 1.91 \\ \hline
DK Tau & 10/02/04 & ob6bb2030 & 854.4 & 0.81 \\ \hline
DM Tau & 10/08/22 & ob6ba2030 & 1330.1 & 1.37 \\ \hline
DN Tau & 11/09/10 & ob6ba4030 & 1441.2 & 1.72 \\ \hline
DR Tau & 00/08/29 & o5cf02020 & 916 & 1.12 \\
& 01/02/09 & o63l04010 & 2327 & 2.04 \\
& 01/02/09 & o63l04020 & 2880 & 2.26 \\
& 10/02/15 & ob6bb4030 & 881.3 & 0.44 \\ \hline
DS Tau & 00/08/24 & o5cf01020 & 878 & 2.06 \\
& 01/02/23 & o63l08010 & 2345 & 2.27 \\
& 01/02/23 & o63l08020 & 2923 & 2.12 \\ \hline
FM Tau & 11/09/21 & ob6ba0030 & 1401.2 & 0.64 \\ \hline
FU Ori & 01/02/22 & o63l07020 & 2880 & 2.54 \\ \hline
GM Aur & 10/08/19 & ob6ba1030 & 1300.5 & 3.61 \\ \hline
HN Tau & 10/02/10 & ob6ba9030 & 807.5 & 1.24 \\ \hline
PDS 66 & 11/05/23 & ob6b23030 & 1725.2 & 11.68 \\ \hline
RECX15 & 10/02/05 & ob6bb7030 & 916.4 & 2.45 \\ \hline
RECX11 & 09/12/12 & ob6bc4030 & 697.8 & 2.32 \\ \hline
RY Tau & 01/02/19 & o63l01010 & 2353 & 7.47 \\
& 01/02/20 & o63l01020 & 2923 & 8.09 \\
& 01/02/20 & o63l01030 & 2923 & 7.92 \\ \hline
SU Aur & 01/02/24 & o63l05010 & 2383 & 5.04 \\
& 01/02/24 & o63l05020 & 2940 & 4.21 \\
& 11/03/25 & ob6bb1030 & 1489.2 & 2.33 \\ \hline
SZ 102 & 11/05/29 & ob6bb9030 & 1469.2 & 3.12 \\ \hline
T Tau & 01/02/21 & o63l02010 & 2331 & 12.57 \\
& 01/02/21 & o63l02020 & 2880 & 13.95 \\
& 01/02/22 & o63l02030 & 2880 & 13.53 \\ \hline
TW Hya & 00/05/07 & o59d01020 & 1675.2 & 21.23 \\ \hline
TWA 3A & 11/03/26 & ob6b22030 & 1107.2 & 6.70 \\ \hline
UX Tau & 11/11/10 & ob6b54030 & 1408.2 & 2.35 \\ \hline
V836 Tau & 11/02/05 & ob6ba6030 & 1396.2 & 0.64 \\ \hline
\end{tabular}
\end{table}
We have selected spectra with signal-to-noise ratio (S/N) $>2$; the S/N has been calculated over the whole
feature as described in Section~3.3.
The spectra are shown in Fig.~\ref{f1}.
\begin{figure*}
\begin{tabular}{cc}
\includegraphics[width=8cm]{cii1.eps} & \includegraphics[width=8cm]{cii2.eps} \\
\includegraphics[width=8cm]{cii3.eps} \\
\end{tabular}
\caption{The C~II] multiplet in the TTSs; only profiles with S/N $>$ 2 are plotted. The fluxes are in units of
$10^{-14}$~erg~ s$^{-1}$~cm$^{-2}$~\AA$^{-1}$. Dashed lines mark the rest wavelengths of the C~II] transitions.
For stars with multiple observations, the
spectrum with the best S/N is shown.}
\label{f1}
\end{figure*}
No significant variations are detected in the spectra of sources with multiple observations, except for DS~Tau (see Appendix A); note that although the C~II] flux of DS~Tau drops by a factor of 2 between two observations, no significant
profile shape variations are noticeable.
In Fig.\ref{f2}, the main spectral features in the 2324-2336~\AA\ range are indicated on the spectrum of TW~Hya, the star with the best S/N in the sample. Note that the C~II] multiplet is resolved.
\begin{figure}
\centering
\includegraphics[width=8cm]{twhya_lines_todas.eps}
\caption{Line identification in the spectral range 2320-2340~\AA\ on TW~Hya spectrum. \label{f2}}
\end{figure}
Additional relevant features in the range are:
\begin{enumerate}
\item The Fe~II] lines at 2328.11 and 2333.52~\AA\ ($3d^6(\ ^5D)4s-3d^6(\ ^5D)4p$). Note that the 2328.11~\AA\ transition is blended
with the C~II] lines in most spectra.
\item The Fe~II] lines at 2332.02 and 2333.52~\AA .
\item The Si~II] multiplet at 2329.23, 2335.12 and 2335.32~\AA.
\end{enumerate}
\section{Measuring the plasma properties}
\label{plasma}
C~II], Fe~II] and Si~II] features are intercombination transitions with very small Einstein coefficients; they are thus optically thin
tracers of the radiating plasma, suitable for measuring its properties directly.
This characteristic of the C~II] lines was already noticed by \cite{stencel1981}, who proposed using them as electron density tracers in the
$10^{7} \leq n_{\rm e} \leq 10^{10.5}$~cm$^{-3}$ range in nebular research. In Fig.~\ref{ratioscii}, we display the sensitivity of the line ratios
to $T_{\rm e}$ and $n_{\rm e}$ for this quintuplet.
\begin{figure}
\centering
\includegraphics[width=8cm]{ratioscii.eps}
\caption{Emissivity ratios of the C~II] lines relative to the 2326.11~\AA\ line, as a function of electron density. The labels 0,1,2,3 and 4 correspond to the C~II] lines 2324, 2325, 2326, 2327 and 2328~\AA, respectively. Solid, dashed and dotted lines correspond to temperatures of $T_{\rm e}=10^4$, $10^{4.5}$ and $T_{\rm e}=10^5$~K, respectively. \label{ratioscii}}
\end{figure}
The plot was made using the Atomic Database for Spectroscopic Diagnostics of Astrophysical Plasmas CHIANTI\footnote{www.chiantidatabase.org} \citep{dere1997,landi2013}. Note that for $n_{\rm e} \la 10^8$~cm$^{-3}$ the ratios are insensitive to the electron density,
except for very diffuse plasmas with $n_{\rm e} \la 10^{2.5}$~cm$^{-3}$. Therefore, other species need to be considered
to constrain the $T_{\rm e}$ of the plasma and the density for $n_{\rm e} \ga 10^{10.5}$~cm$^{-3}$ and $n_{\rm e} \la 10^8$~cm$^{-3}$.
The Fe~II] ratios are sensitive to the electron density for $n_{\rm e} \ga 10^9$~cm$^{-3}$ (see top panel in Fig.\ref{ratios}), the range of densities for which the C~II] quintuplet
ratios are nearly constant.
The Si~II] ratios are more sensitive to the temperature, particularly for $T_{\rm e} \la 10^{4.5}$~K (see bottom panel in Fig.\ref{ratios}).
The combined analysis of all these ratios yields enough information to determine unambiguously the physical properties of the region
where the lines are formed.\\
\begin{figure}
\centering
\includegraphics[width=8cm]{fe2333_c2326_ratios.eps}
\includegraphics[width=8cm]{si2329_c2326_ratios_te.eps}
\caption{Top panel: emissivity ratios of the Fe~II] line relative to the C~II] 2326.11~\AA\ line as a function of density for several temperatures (from $\log T_{\rm e}({\rm K})= 4.0$ to $4.175$ in steps of 0.025).
Bottom panel: emissivity ratios of the Si~II] line relative to the C~II] 2326.11~\AA\ line as a function of temperature for several densities (from $\log n_{\rm e}({\rm cm}^{-3})=0.0$ to $13.0$ in steps of 1.0). \label{ratios}}
\end{figure}
For the calculations, we have assumed that all the lines are optically thin and formed via collisional excitation in a single plasma characterized
by a pair ($n_{\rm e}, T_{\rm e}$). CHIANTI provides the ion emissivities (erg~s$^{-1}$):
$\varepsilon_{ij} = \Delta E \, (n_j({\rm X II})/n({\rm X II})) \, A_{ji}$, where $\Delta E$ is the energy difference between levels $j$ and $i$, $n_j({\rm X II})/n({\rm X II})$ is the fraction of ions lying in state $j$ and $A_{ji}$ is the spontaneous radiative transition probability.
The emissivities per unit volume (erg~s$^{-1}$~cm$^{-3}$) for a given ion X~II have been calculated as:
\begin{eqnarray*}
\epsilon_{ij} & = & \Delta E n_j({\rm X II}) A_{ji} \nonumber \\
& = & \Delta E A_{ji} \left( \frac{n_j({\rm X II})}{n({\rm X II})} \frac{n({\rm X II})}{n({\rm X})} \frac{n({\rm X})}{n({\rm H})} \frac{n({\rm H})}{n_{\rm e}} n_{\rm e} \right) \nonumber \\
& = & \varepsilon_{ij} \left(\frac{n({\rm X II})}{n({\rm X})} \frac{n({\rm X})}{n({\rm H})} \frac{n({\rm H})}{n_{\rm e}} n_{\rm e} \right),
\end{eqnarray*}
\noindent
where $n_j({\rm X II})$ is the number density of the species X~II in the upper level ($j$), $n({\rm X II})/n({\rm X})$ is the ionization fraction of X and $n({\rm X})/n({\rm H})$ is the abundance of element X. Solar metallicity is assumed.
$n({\rm H})/n_{\rm e} = 0.83 $ has been used since $T_{\rm e} > 10^4$~K (see CHIANTI manual).
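The chain of factors above can be sketched as follows; all numerical values are hypothetical placeholders, not CHIANTI output.

```python
def volume_emissivity(eps_ion, ion_fraction, abundance, n_e, nH_over_ne=0.83):
    """Convert a per-ion emissivity (erg s^-1) into an emissivity per
    unit volume (erg s^-1 cm^-3), following
    eps = eps_ion * n(XII)/n(X) * n(X)/n(H) * n(H)/n_e * n_e.
    """
    return eps_ion * ion_fraction * abundance * nH_over_ne * n_e

# Hypothetical numbers for illustration only (not CHIANTI output):
eps = volume_emissivity(eps_ion=1.0e-21, ion_fraction=0.9,
                        abundance=2.7e-4, n_e=1.0e10)
print(eps)
```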
\subsection{The numerical method}
Making use of the emissivities from CHIANTI, we have computed the flux ratios relative to the C~II] (2326.11~\AA) line of the following lines: C~II] (2324.21, 2325.4, 2327.64 and 2328.83~\AA), Fe~II]
(2328.11 and 2333.52~\AA), Fe~II] (2332.02~\AA) and Si~II] (2329.23, 2335.12 and 2335.32~\AA), for a grid of electron temperatures and densities. The grid covers the ranges
$4.0 \leq \log T_{\rm e}({\rm K}) \leq 5.5$ and $0.0 \leq \log n_{\rm e}({\rm cm}^{-3}) \leq 14.5$ with resolutions of 0.025~dex in $\log (T_{\rm e})$ and 0.25~dex in $\log (n_{\rm e})$.
We have assumed that the line profiles are adequately reproduced by Gaussian functions. In this manner, we have built a grid of simulated spectra in the 2323-2338~\AA\ spectral range given by
\begin{equation}
\label{eq3}
F(\lambda) = F_0 \ \displaystyle\sum_{i=0}^{10} R_i \ \exp \left( {\frac{-(\lambda - (\lambda_i+\delta))^2}{2 \sigma^2}} \right) + F_{cont},
\end{equation}
\noindent where $F_0$ is the peak flux of the reference line (C~II]$_{2326}$), $R_i=F_i/F_0$ is the ratio between the peak flux of the \textit{i}th line and $F_0$, $\sigma$ is the standard deviation of the Gaussian functions and $\lambda_i$ is the central wavelength of the \textit{i}th emission line (which can be shifted by $\delta$~\AA\ from its expected position). $F_{cont}$ is computed directly from the observations as the average flux in the 2320-2323~\AA\ range for each spectrum;
this is a featureless window (see Fig.~\ref{f1}). Both the dispersion ($\sigma$) and the shift ($\delta$) are assumed to be the same for all lines.
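The model spectrum of eq.~(\ref{eq3}) can be evaluated as in the following sketch; the line ratios, $F_0$, $\sigma$, $\delta$ and $F_{cont}$ used here are hypothetical placeholder values, not fitted quantities.

```python
from math import exp

# Rest wavelengths (AA) of the 11 lines entering the model: C II] x5,
# Fe II] 2328.11, 2332.02, 2333.52 and Si II] 2329.23, 2335.12, 2335.32.
LINES = [2324.21, 2325.40, 2326.11, 2327.64, 2328.83,
         2328.11, 2332.02, 2333.52, 2329.23, 2335.12, 2335.32]

def model_flux(wavelength, F0, ratios, sigma, delta, F_cont):
    """Evaluate eq. (1) at a single wavelength: a sum of Gaussians with
    common width sigma and common shift delta, scaled by the peak flux
    F0 of the C II] 2326.11 reference line, plus a flat continuum."""
    total = 0.0
    for lam, r in zip(LINES, ratios):
        total += r * exp(-(wavelength - (lam + delta))**2 / (2.0 * sigma**2))
    return F0 * total + F_cont

# Hypothetical parameters for illustration (R = 1 for the reference line).
ratios = [0.4, 0.5, 1.0, 0.6, 0.5, 0.2, 0.1, 0.3, 0.05, 0.3, 0.3]
peak = model_flux(2326.11, F0=1.0, ratios=ratios, sigma=0.1,
                  delta=0.0, F_cont=0.02)
print(peak)  # close to F0 + F_cont, since the blends are weak at sigma = 0.1 AA
```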
We developed an IDL-based code to identify the synthetic spectrum that best fits the data; it consists of two main steps. First, for each
synthetic spectrum - defined by a pair ($n_{\rm e}$, $T_{\rm e}$) - the best fit to the data is found by a least-squares scheme that leaves $F_0$, $\delta$ and $\sigma$ as
free parameters. As a result, for
any given model $i$ ($n_{e,i}$, $T_{e,i}$), the set of parameters that best fits the data ($F_{0,i}, \sigma _i, \delta _i$),
as well as the residuals, $\chi^2_i$, are computed.
This allows plotting the $\chi^2$ surface in the ($n_{\rm e}$, $T_{\rm e}$) space (see Fig.~\ref{chi}). Then, the minimum of the surface
is identified, providing the
($n_{\rm e}, T_{\rm e}$) pair that best fits the data. This minimum corresponds to the optimal fit, i.e. $\chi^2_{opt}={\rm min}(\chi^2)$.
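A minimal sketch of this two-step procedure (in Python rather than IDL) is given below; the models, line list, wavelength range and parameter grids are hypothetical placeholders, reduced to three lines and two models for brevity.

```python
from math import exp

# Three of the C II] rest wavelengths, used as a reduced line list for
# this illustration; the real fit uses all eleven lines of eq. (1).
LINES = [2324.21, 2325.40, 2326.11]

def unit_model(wave, ratios, sigma, delta):
    """Sum of unit-amplitude Gaussians scaled by the line ratios."""
    return [sum(r * exp(-(w - (lam + delta))**2 / (2.0 * sigma**2))
                for lam, r in zip(LINES, ratios)) for w in wave]

def grid_search(wave, obs, F_cont, models,
                sigmas=(0.08, 0.10, 0.12), deltas=(-0.04, 0.0, 0.04)):
    """Two-step scheme: for every (n_e, T_e) model (represented here
    only by its line ratios), the amplitude F0 is optimized analytically
    (the model is linear in F0), sigma and delta are scanned on small
    grids, and the global chi^2 minimum over all models is kept."""
    resid = [o - F_cont for o in obs]
    best = None
    for key, ratios in models.items():
        for sigma in sigmas:
            for delta in deltas:
                m = unit_model(wave, ratios, sigma, delta)
                den = sum(x * x for x in m)
                F0 = sum(r * x for r, x in zip(resid, m)) / den
                chi2 = sum((r - F0 * x)**2 for r, x in zip(resid, m))
                if best is None or chi2 < best[0]:
                    best = (chi2, key, F0, sigma, delta)
    return best

# Synthetic "observation" generated from model A (hypothetical ratios).
models = {"A": (0.5, 0.7, 1.0), "B": (1.0, 1.0, 1.0)}
wave = [2323.0 + 0.04 * i for i in range(100)]
obs = [2.0 * x + 0.1 for x in unit_model(wave, models["A"], 0.10, 0.0)]
chi2, key, F0, sigma, delta = grid_search(wave, obs, 0.1, models)
print(key, round(F0, 3))  # recovers model "A" with F0 = 2.0
```

The real code replaces the discrete scan over $\sigma$ and $\delta$ by a full least-squares minimization and iterates over the whole ($n_{\rm e}$, $T_{\rm e}$) grid of CHIANTI ratio sets.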
\begin{figure}
\centering
\includegraphics[width=9cm]{chi3D.eps}
\caption{$\chi^2$ surfaces and contours for TW~Hya (on the left) and DE~Tau (on the right). At the bottom of the figures, we projected five $\chi^2$ contours starting close to the best solution $\chi^2_{opt}$ (0.08 and 0.48, respectively), with steps of 0.01. The black point at the bottom indicates the $T_{\rm e}$ and $n_{\rm e}$ values finally adopted. \label{chi}}
\centering
\end{figure}
In Table~\ref{tab3}, the $n_{\rm e}, T_{\rm e}, \sigma ,\delta$ values corresponding to the best-fitting model are provided for all the TTSs in the study.
\begin{table*}
\caption{Physical parameters derived from the fitting. \label{tab3}}
\begin{tabular}{cccccccc}
\hline
Star & Data set & $\log(T_{\rm e})$ & $\log(n_{\rm e})$ & $\chi^2_{opt}$ & $\delta$ & $\sigma$ & $F_0$ \\
& & (K) & (cm$^{-3}$) & & (km s$^{-1}$) & (km s$^{-1}$) & (${\rm erg}\ {\rm s}^{-1}\ {\rm cm}^{-2}$~\AA$^{-1}$) \\
\hline
AA Tau & ob6ba7030 & 4.95 & 9.50 & 0.54 & 17.67 & 40.88 & $6.70 \times 10^{-14}$ \\ \hline
CV Cha & ob6b18020 & 4.10 & 10.50 & 0.28 & 18.57 & 82.03 & $1.82\times 10^{-14}$ \\ \hline
CY Tau & o5cf03020 & 4.50 & 11.75 & 0.49 & 15.22 & 23.73 & $6.08\times 10^{-14}$ \\ \hline
DE Tau & ob6ba8030 & 4.15 & 10.00 & 0.48 & 0.90 & 56.36 & $3.52\times 10^{-14}$ \\ \hline
DF Tau & o5kc0102 & 4.45 & 11.50 & 0.01 & 9.29 & 66.42 & $1.12\times 10^{-13}$ \\ \hline
DG Tau & o63l03020 & 4.18 & 10.25 & 0.12 & -67.32 & 104.72 & $9.23\times 10^{-15}$ \\
& o63l03030 & 4.15 & 10.00 & 0.13 & -66.03 & 116.85 & $9.26\times 10^{-15}$ \\ \hline
DR Tau & o63l04010 & 5.48 & 13.75 & 0.42 & -24.76 & 85.38 & $1.62\times 10^{-14}$ \\
& o63l04020 & 5.48 & 13.75 & 0.48 & -26.70 & 71.97 & $2.02\times 10^{-14}$ \\ \hline
DS Tau & o5cf01020 & 4.35 & 9.75 & 0.75 & 27.34 & 62.03 & $3.72\times 10^{-14}$ \\
& o63l08010 & 4.18 & 9.50 & 0.20 & 19.09 & 62.29 & $1.68\times 10^{-14}$ \\
& o63l08020 & 4.18 & 9.50 & 0.15 & 16.77 & 66.03 & $1.74\times 10^{-14}$ \\ \hline
FU Ori & o63l07020 & 4.18 & 10.25 & 0.14 & -45.53 & 93.89 & $9.12\times 10^{-15}$ \\ \hline
GM Aur & ob6ba1030 & 4.38 & 11.50 & 0.47 & 14.83 & 68.48 & $3.26\times 10^{-14}$ \\ \hline
PDS66 & ob6b23030 & 4.30 & 8.75 & 0.01 & 15.35 & 44.88 & $1.79\times 10^{-13}$ \\ \hline
RECX15 & ob6bb7030 & 4.15 & 8.50 & 0.86 & 1.42 & 53.91 & $4.28\times 10^{-14}$ \\ \hline
RECX11 & ob6bc4030 & 4.40 & 9.00 & 1.53 & 30.82 & 61.00 & $4.31\times 10^{-14}$ \\ \hline
RY Tau & o63l01010 & 4.13 & 8.50 & 0.21 & -39.34 & 95.95 & $2.64\times 10^{-14}$ \\
& o63l01020 & 4.18 & 10.75 & 0.16 & -30.57 & 102.14 & $2.59\times 10^{-14}$ \\
& o63l01030 & 4.23 & 11.00 & 0.17 & -25.67 & 110.79 & $2.37\times 10^{-14}$ \\ \hline
SU Aur & o63l05010 & 4.33 & 11.00 & 0.18 & 18.70 & 158.12 & $1.30\times 10^{-14}$ \\
& o63l05020 & 4.18 & 10.25 & 0.20 & -6.84 & 155.80 & $1.38\times 10^{-14}$ \\
& ob6bb1030 & 4.18 & 10.50 & 0.55 & 26.44 & 122.91 & $1.52\times 10^{-14}$ \\ \hline
SZ102 & ob6bb9030 & 4.45 & 1.25 & 0.35 & 25.15 & 51.72 & $2.25\times 10^{-14}$ \\ \hline
T Tau & o63l02010 & 4.15 & 10.50 & 0.01 & 3.74 & 61.52 & $9.15\times 10^{-14}$ \\
& o63l02020 & 4.13 & 10.25 & 0.01 & 3.87 & 62.16 & $8.87\times 10^{-14}$ \\
& o63l02030 & 4.13 & 10.25 & 0.01 & 4.77 & 59.71 & $9.18\times 10^{-14}$ \\ \hline
TWHya & o59d01020 & 4.50 & 12.25 & 0.07 & 15.99 & 20.25 & $7.24\times 10^{-13}$ \\ \hline
TWA3A & ob6b22030 & 4.28 & 9.25 & 0.62 & 17.28 & 49.52 & $7.19\times 10^{-13}$ \\ \hline
UX Tau & ob6b54030 & 4.40 & 8.75 & 0.32 & 23.34 & 35.72 & $3.10\times 10^{-14}$ \\ \hline
\end{tabular}
\end{table*}
Initial conditions for the free parameters are set as follows: $\sigma_0 = 0.1$~\AA\ (approximately equivalent to the combination of the spectral resolution of STIS/E230M and thermal broadening), $F_0$ is set to the peak flux around 2326~\AA\ and $\delta_0$ is such that $F(2326.11-\delta_0)=F_0$ in the observed spectrum. We performed several tests to check the dependence of the results on the initial values of the free parameters. By varying these initial values, the final solution ($\chi^2_{opt}$) never differed by more than one step in the grid of $T_{\rm e}$ and $n_{\rm e}$ values. This means that the steps of the grid represent the internal precision
of the fitting procedure ($\delta \log T_{\rm e}({\rm K}) \simeq 0.025$ and $\delta \log n_{\rm e}({\rm cm}^{-3}) \simeq 0.25$); they are the same for all stars in the sample.
From the fitting procedure, we also estimated the uncertainties associated with $\delta$, $\sigma$ and each line flux. For this,
we selected the eight grid points closest to the best fit (the local minimum)
and calculated the standard deviation from the average value over these eight points.
The standard deviation in $\delta$ is always $\la 5$~km~s$^{-1}$, whereas in
$\sigma$ it is $\la 6$~km~s$^{-1}$. These uncertainties are not provided in Table~\ref{tab3} because they are negligible.
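This neighbour-based estimate amounts to a simple standard deviation over the grid points adjacent to the $\chi^2$ minimum; a sketch with hypothetical $\delta$ values (in km~s$^{-1}$) follows.

```python
def neighbour_uncertainty(values):
    """Standard deviation of a fitted quantity over the grid points
    adjacent to the chi^2 minimum, used as its uncertainty estimate."""
    mean = sum(values) / len(values)
    return (sum((v - mean)**2 for v in values) / len(values)) ** 0.5

# Hypothetical delta values (km/s) at the 8 neighbours of the minimum:
deltas = [15.2, 15.9, 16.4, 15.5, 16.1, 15.0, 16.8, 15.7]
print(round(neighbour_uncertainty(deltas), 2))  # well below 5 km/s
```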
The final simulated fluxes with their associated errors are shown in Table~\ref{tab4}.
The Fe~II]$_{2332.02}$ line has not been considered for the fit. We have found a large discrepancy between the CHIANTI prediction for its
strength ($\epsilon(2332.02) \sim 0.06 \, \epsilon(2333.52)$) and the observations, where both Fe~II] lines have comparable strengths (see Fig.~\ref{f2}).
Fig.~\ref{f6} shows two illustrative examples of the results of the fitting procedure. The selected targets are TW~Hya, with high S/N, and
DE~Tau, with low S/N. The difference in S/N is readily observed in the $\chi^2 $ surface (see Fig.~\ref{chi}); the height of the surface above the ($n_{\rm e}, T_{\rm e}$)
plane increases as the S/N decreases. However, both surfaces share some common characteristics: (1) a steep rise of the $\chi^2$ surface towards low
$T_{\rm e}$ and low $n_{\rm e}$, and (2) a narrow range of ($n_{\rm e},T_{\rm e}$)
that gives the best statistical fits to the original data (see the projected contours of the $\chi^2 $ surfaces on the $n_{\rm e}, T_{\rm e}$ plane in Fig.~\ref{chi}).
\begin{figure}
\includegraphics[width=8cm]{o59d01020_TWHya.eps}
\includegraphics[width=8cm]{ob6ba8030_DETau.eps}
\caption{Original spectra (solid lines) and their best fits (dotted lines) for two example stars: TW~Hya (top) and DE~Tau (bottom). \label{f6}}
\end{figure}
\subsection{($n_{\rm e},T_{\rm e}$) in the line emission region}
Fig.\ref{tene} shows the electron densities and temperatures
corresponding to the optimal fits.
For stars with multiple observations, only the best-fitting results (those with the minimum $\chi_{opt}^2$) are plotted.
The differences among observations are small, with very similar results in most
cases (see Table~\ref{tab3}).
\begin{figure}
\includegraphics[width=8cm]{tevsne.eps}
\caption{Electron densities ($n_{\rm e}$ in cm$^{-3}$) and temperatures ($T_{\rm e}$ in K)
corresponding to the best fit to the observed spectra for
the stars in the sample. Circle radius corresponds to the uncertainties associated with $n_{\rm e}$ and $T_{\rm e}$.
Filled circles indicate stars with
values out of the range where most of the sources in the sample are present. \label{tene}}
\end{figure}
Most sources are grouped in a region with
$4.1\la\log(T_{\rm e})\la 4.5$ and
$8\la\log(n_{\rm e})\la 12$.
There are three stars outside this region:
DR~Tau, AA~Tau and SZ~102. DR~Tau converged to values
lying very close to the limits of the $n_{\rm e}-T_{\rm e}$ grid.
In the case of SZ~102, the low density probably
indicates that the C~II] emission is dominated by an extended ionized envelope.
Something similar might be occurring in AA~Tau,
a CTTS with a warped disc \citep{menard2003} that displays very peculiar
profiles in the UV emission lines \citep{france2012,ardila2013,aig2013b}.
These three stars are
represented in the figure as filled circles.
These ``unusual'' values suggest that the C~II], Fe~II] and Si~II]
lines may not be formed under the same physical conditions as in
the other sources. Therefore, these three stars are excluded
from the following analysis.
\subsection{Consistency tests}
\label{tests}
As a consistency test, we have compared the observed flux
in the C~II] feature with the flux derived from the best-fitting model for each target - including the C~II] quintuplet
and the unresolved Fe~II]$_{2328.1}$ and Si~II]$_{2329.23}$
lines.
The observed flux has been measured in the
2324-2330~\AA\ range as
$F_{obs} = (f-N_{pix} F_{cont}) \Delta\lambda$, where
$F_{cont}$ is the average continuum flux, $N_{pix}$ is
the number of pixels in the selected window (151
pixels), $f$ is the flux density summed over those pixels
and $\Delta\lambda$ is the wavelength step (0.04~\AA).
We also estimated the corresponding flux error as
$\delta F = N_{pix} \cdot \Delta\lambda \cdot \sigma_{cont}$, where $\sigma _{cont}$
is the dispersion of the continuum flux around its average.
The continuum was measured in the 2320-2323~\AA\ spectral
range.
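The flux measurement and its error can be sketched as follows; the spectrum and window indices are toy values in arbitrary units, not STIS data.

```python
def observed_flux(flux_density, cont_window, line_window, dlam=0.04):
    """Continuum-subtracted line flux and its error, following
    F_obs = (f - N_pix * F_cont) * dlam and dF = N_pix * dlam * sigma_cont,
    where f is the flux density summed over the line window."""
    cont = [flux_density[i] for i in cont_window]
    f_cont = sum(cont) / len(cont)
    sigma_cont = (sum((c - f_cont)**2 for c in cont) / len(cont)) ** 0.5
    line = [flux_density[i] for i in line_window]
    n_pix = len(line)
    f_obs = (sum(line) - n_pix * f_cont) * dlam
    df = n_pix * dlam * sigma_cont
    return f_obs, df

# Toy spectrum: flat continuum of 1.0 with 0.5 of extra line flux per
# pixel over a 10-pixel "line" window (hypothetical, arbitrary units).
spec = [1.0] * 20 + [1.5] * 10 + [1.0] * 20
f_obs, df = observed_flux(spec, cont_window=range(0, 20),
                          line_window=range(20, 30))
print(f_obs, df)  # 0.5 extra per pixel x 10 pixels x 0.04 = 0.2; dF = 0
```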
The simulated flux of each line has been calculated as the
integral of the Gaussian function fitting that line.
Table~\ref{tab4} shows the fluxes for each line.
The total flux of the C~II] quintuplet has been calculated from the
best-fitting models as
\begin{equation}
\label{eq4}
F_{sim}(C{\rm II}]) = \sigma \sqrt{2 \pi} F_0 \displaystyle\sum_{i=0}^{4} R_i
\end{equation}
\noindent
The Si~II]$_{2335}$ flux is the sum of the components at 2335.12 and 2335.32~\AA\
since they are not resolved in the \textit{HST}/STIS spectra.
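Equation (\ref{eq4}) follows from the analytic integral of each Gaussian, $\sigma \sqrt{2\pi}$ times its peak flux; a sketch with hypothetical values follows.

```python
from math import sqrt, pi

def quintuplet_flux(F0, ratios, sigma):
    """Total C II] flux of eq. (2): each Gaussian line integrates to
    sigma * sqrt(2*pi) * (peak flux), so the quintuplet flux is the sum
    over the five peak ratios R_i scaled by F0."""
    return sigma * sqrt(2.0 * pi) * F0 * sum(ratios)

# Hypothetical values: sigma in AA, F0 in erg s^-1 cm^-2 AA^-1.
F = quintuplet_flux(F0=7.0e-13, ratios=[0.4, 0.5, 1.0, 0.6, 0.5], sigma=0.16)
print(F)
```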
\begin{table*}
\begin{flushleft}
\caption{Fluxes of the main features derived from the fitting procedure$^{(a)}$ \label{tab4}}
\begin{tabular}{ccccccc}
\hline
Star & Data set & Flux(C II]) & Flux(Fe II]$_{2328}$) & Flux(Si II]$_{2329}$) & Flux(Fe II]$_{2333}$) & Flux(Si II]$_{2335}$) \\
\cline{3-7}
& & \multicolumn{5}{c}{(${\rm erg}\ {\rm s}^{-1}\ {\rm cm}^{-2}$)} \\
\hline
AA Tau & ob6ba7030 & ($ 9.02 \pm 0.17 ) \times 10^{-14}$ & ($ 2.58 \pm 1.12 ) \times 10^{-20}$ & ($ 4.59 \pm 1.42 ) \times 10^{-18}$ & ($ 7.47 \pm 3.22 ) \times 10^{-20}$ & ($ 4.45 \pm 1.42 ) \times 10^{-16}$ \\ \hline
CV Cha & ob6b18020 & ($ 4.84 \pm 1.02 ) \times 10^{-14}$ & ($ 5.09 \pm 1.50 ) \times 10^{-15}$ & ($ 1.99 \pm 0.36 ) \times 10^{-16}$ & ($ 1.46 \pm 0.43 ) \times 10^{-14}$ & ($ 3.52 \pm 0.59 ) \times 10^{-14}$ \\ \hline
CY Tau & o5cf0302 & ($ 4.73 \pm 0.03 ) \times 10^{-14}$ & ($ 6.36 \pm 4.02 ) \times 10^{-16}$ & ($ 1.52 \pm 0.22 ) \times 10^{-17}$ & ($ 1.81 \pm 1.14 ) \times 10^{-15}$ & ($ 3.16 \pm 0.46 ) \times 10^{-15}$ \\ \hline
DE Tau & ob6ba8030 & ($ 6.39 \pm 0.29 ) \times 10^{-14}$ & ($ 1.82 \pm 0.50 ) \times 10^{-15}$ & ($ 1.30 \pm 0.26 ) \times 10^{-16}$ & ($ 5.21 \pm 1.44 ) \times 10^{-15}$ & ($ 1.86 \pm 0.38 ) \times 10^{-14}$ \\ \hline
DF Tau & o5kc0102 & ($ 2.43 \pm 0.03 ) \times 10^{-13}$ & ($ 3.75 \pm 2.34 ) \times 10^{-15}$ & ($ 1.09 \pm 0.17 ) \times 10^{-16}$ & ($ 1.07 \pm 0.67 ) \times 10^{-14}$ & ($ 2.24 \pm 0.35 ) \times 10^{-14}$ \\ \hline
DG Tau & o63l03020 & ($ 3.13 \pm 0.09 ) \times 10^{-14}$ & ($ 8.78 \pm 2.52 ) \times 10^{-16}$ & ($ 5.03 \pm 0.70 ) \times 10^{-17}$ & ($ 2.51 \pm 0.72 ) \times 10^{-15}$ & ($ 8.03 \pm 1.26 ) \times 10^{-15}$ \\
& o63l03030 & ($ 1.48 \pm 0.13 ) \times 10^{-14}$ & ($ 9.91 \pm 2.82 ) \times 10^{-16}$ & ($ 7.07 \pm 1.44 ) \times 10^{-17}$ & ($ 2.84 \pm 0.81 ) \times 10^{-15}$ & ($ 1.01 \pm 0.21 ) \times 10^{-14}$ \\ \hline
DR Tau & o63l04010 & ($ 4.53 \pm 0.01 ) \times 10^{-14}$ & ($ 3.39 \pm 2.17 ) \times 10^{-19}$ & ($ 2.18 \pm 0.13 ) \times 10^{-17}$ & ($ 9.94 \pm 6.35 ) \times 10^{-19}$ & ($ 4.63 \pm 0.27 ) \times 10^{-15}$ \\
& o63l04020 & ($ 4.76 \pm 0.00 ) \times 10^{-14}$ & ($ 3.56 \pm 2.28 ) \times 10^{-19}$ & ($ 2.29 \pm 0.14 ) \times 10^{-17}$ & ($ 1.05 \pm 0.67 ) \times 10^{-18}$ & ($ 4.87 \pm 0.29 ) \times 10^{-15}$ \\ \hline
DS Tau & o5cf01020 & ($ 7.43 \pm 0.06 ) \times 10^{-14}$ & ($ 3.73 \pm 1.13 ) \times 10^{-16}$ & ($ 6.24 \pm 0.79 ) \times 10^{-17}$ & ($ 1.06 \pm 0.32 ) \times 10^{-15}$ & ($ 7.58 \pm 0.98 ) \times 10^{-15}$ \\
& o63l08010 & ($ 6.40 \pm 0.08 ) \times 10^{-14}$ & ($ 5.91 \pm 1.06 ) \times 10^{-16}$ & ($ 6.25 \pm 0.99 ) \times 10^{-17}$ & ($ 1.69 \pm 0.31 ) \times 10^{-15}$ & ($ 7.00 \pm 1.06 ) \times 10^{-15}$ \\
& o63l08020 & ($ 3.74 \pm 0.08 ) \times 10^{-14}$ & ($ 6.49 \pm 1.08 ) \times 10^{-16}$ & ($ 6.87 \pm 1.17 ) \times 10^{-17}$ & ($ 1.86 \pm 0.31 ) \times 10^{-15}$ & ($ 7.47 \pm 1.13 ) \times 10^{-15}$ \\ \hline
FU Ori & o63l07020 & ($ 2.77 \pm 0.06 ) \times 10^{-14}$ & ($ 7.77 \pm 2.31 ) \times 10^{-16}$ & ($ 4.45 \pm 0.66 ) \times 10^{-17}$ & ($ 2.22 \pm 0.66 ) \times 10^{-15}$ & ($ 7.11 \pm 1.18 ) \times 10^{-15}$ \\ \hline
GM Aur & ob6ba1030 & ($ 7.32 \pm 0.29 ) \times 10^{-14}$ & ($ 3.31 \pm 1.84 ) \times 10^{-15}$ & ($ 5.56 \pm 0.63 ) \times 10^{-17}$ & ($ 9.40 \pm 5.24 ) \times 10^{-15}$ & ($ 1.14 \pm 0.13 ) \times 10^{-14}$ \\ \hline
PDS66 & ob6b23030 & ($ 2.85 \pm 0.06 ) \times 10^{-13}$ & ($ 2.00 \pm 0.37 ) \times 10^{-15}$ & ($ 3.59 \pm 0.33 ) \times 10^{-16}$ & ($ 5.68 \pm 1.06 ) \times 10^{-15}$ & ($ 3.15 \pm 0.27 ) \times 10^{-14}$ \\ \hline
RECX15 & ob6bb7030 & ($ 8.39 \pm 0.22 ) \times 10^{-14}$ & ($ 1.60 \pm 0.37 ) \times 10^{-15}$ & ($ 2.19 \pm 0.46 ) \times 10^{-16}$ & ($ 4.57 \pm 1.07 ) \times 10^{-15}$ & ($ 1.88 \pm 0.39 ) \times 10^{-14}$ \\ \hline
RECX11 & ob6bc4030 & ($ 9.06 \pm 0.13 ) \times 10^{-14}$ & ($ 1.87 \pm 0.59 ) \times 10^{-16}$ & ($ 6.39 \pm 0.97 ) \times 10^{-17}$ & ($ 5.26 \pm 1.70 ) \times 10^{-16}$ & ($ 5.83 \pm 0.88 ) \times 10^{-15}$ \\ \hline
RY Tau & o63l01010 & ($ 9.20 \pm 0.50 ) \times 10^{-14}$ & ($ 2.43 \pm 0.69 ) \times 10^{-15}$ & ($ 3.25 \pm 0.84 ) \times 10^{-16}$ & ($ 6.96 \pm 2.00 ) \times 10^{-15}$ & ($ 2.78 \pm 0.72 ) \times 10^{-14}$ \\
& o63l01020 & ($ 8.65 \pm 0.52 ) \times 10^{-14}$ & ($ 4.84 \pm 1.79 ) \times 10^{-15}$ & ($ 1.46 \pm 0.18 ) \times 10^{-16}$ & ($ 1.38 \pm 0.51 ) \times 10^{-14}$ & ($ 2.74 \pm 0.36 ) \times 10^{-14}$ \\
& o63l01030 & ($ 8.58 \pm 0.45 ) \times 10^{-14}$ & ($ 5.48 \pm 2.16 ) \times 10^{-15}$ & ($ 1.17 \pm 0.08 ) \times 10^{-16}$ & ($ 1.56 \pm 0.62 ) \times 10^{-14}$ & ($ 2.30 \pm 0.16 ) \times 10^{-14}$ \\ \hline
SU Aur & o63l05010 & ($ 6.70 \pm 0.13 ) \times 10^{-14}$ & ($ 2.05 \pm 1.03 ) \times 10^{-15}$ & ($ 6.29 \pm 0.59 ) \times 10^{-17}$ & ($ 5.84 \pm 2.92 ) \times 10^{-15}$ & ($ 1.22 \pm 0.13 ) \times 10^{-14}$ \\
& o63l05020 & ($ 6.96 \pm 0.13 ) \times 10^{-14}$ & ($ 1.96 \pm 0.59 ) \times 10^{-15}$ & ($ 1.12 \pm 0.17 ) \times 10^{-16}$ & ($ 5.59 \pm 1.70 ) \times 10^{-15}$ & ($ 1.79 \pm 0.31 ) \times 10^{-14}$ \\
& ob6bb1030 & ($ 6.09 \pm 0.26 ) \times 10^{-14}$ & ($ 2.32 \pm 0.78 ) \times 10^{-15}$ & ($ 9.91 \pm 1.32 ) \times 10^{-17}$ & ($ 6.64 \pm 2.22 ) \times 10^{-15}$ & ($ 1.73 \pm 0.26 ) \times 10^{-14}$ \\ \hline
SZ102 & ob6bb9030 & ($ 5.97 \pm 0.08 ) \times 10^{-14}$ & ($ 2.02 \pm 0.67 ) \times 10^{-18}$ & ($ 3.67 \pm 0.60 ) \times 10^{-17}$ & ($ 1.44 \pm 0.47 ) \times 10^{-17}$ & ($ 1.94 \pm 0.32 ) \times 10^{-15}$ \\ \hline
T Tau & o63l02010 & ($ 1.83 \pm 0.13 ) \times 10^{-13}$ & ($ 8.89 \pm 3.09 ) \times 10^{-15}$ & ($ 3.70 \pm 0.64 ) \times 10^{-16}$ & ($ 2.55 \pm 0.89 ) \times 10^{-14}$ & ($ 6.49 \pm 1.18 ) \times 10^{-14}$ \\
& o63l02020 & ($ 1.78 \pm 0.20 ) \times 10^{-13}$ & ($ 8.89 \pm 2.72 ) \times 10^{-15}$ & ($ 4.81 \pm 0.99 ) \times 10^{-16}$ & ($ 2.55 \pm 0.78 ) \times 10^{-14}$ & ($ 7.75 \pm 1.58 ) \times 10^{-14}$ \\
& o63l02030 & ($ 1.77 \pm 0.20 ) \times 10^{-13}$ & ($ 8.83 \pm 2.70 ) \times 10^{-15}$ & ($ 4.78 \pm 0.98 ) \times 10^{-16}$ & ($ 2.53 \pm 0.78 ) \times 10^{-14}$ & ($ 7.69 \pm 1.56 ) \times 10^{-14}$ \\ \hline
TWA3A & ob6b22030 & ($ 1.18 \pm 0.03 ) \times 10^{-13}$ & ($ 1.07 \pm 0.16 ) \times 10^{-15}$ & ($ 1.53 \pm 0.14 ) \times 10^{-16}$ & ($ 3.04 \pm 0.45 ) \times 10^{-15}$ & ($ 1.52 \pm 0.11 ) \times 10^{-14}$ \\ \hline
TW Hya & o59d01020 & ($ 4.80 \pm 0.19 ) \times 10^{-13}$ & ($ 1.99 \pm 1.14 ) \times 10^{-14}$ & ($ 1.61 \pm 0.20 ) \times 10^{-16}$ & ($ 5.65 \pm 3.25 ) \times 10^{-14}$ & ($ 3.39 \pm 0.43 ) \times 10^{-14}$ \\ \hline
UX Tau & ob6b54030 & ($ 3.94 \pm 0.07 ) \times 10^{-14}$ & ($ 7.86 \pm 2.54 ) \times 10^{-17}$ & ($ 2.83 \pm 0.43 ) \times 10^{-17}$ & ($ 2.23 \pm 0.72 ) \times 10^{-16}$ & ($ 2.47 \pm 0.37 ) \times 10^{-15}$ \\ \hline
\end{tabular}
\begin{tabular}{ll}
$^{(a)}$ & Fluxes are not extinction corrected. \\
\end{tabular}
\end{flushleft}
\end{table*}
The comparison between observed and fitted flux is shown in Fig.\ref{obssimsnr}.
Most of the observed
fluxes are slightly higher than the simulated ones but the discrepancy is well within the
expected value given the S/N of the data. TW~Hya shows the largest discrepancy,
which we attribute to the simplicity of the modelling, i.e.\ the difficulty of fitting the
data with a ``single plasma'' emission. In this sense, we remark that the $(n_{\rm e}, T_{\rm e})$
values in Table~\ref{tab3} should be understood as averages over the plasma emission region.
We have also calculated the contribution of the Fe~II]$_{2328}$ and Si~II]$_{2329}$ fluxes
to the 2326~\AA\ feature, unresolved in most of the TTSs spectra. From the simulated
spectra, we have found that Fe~II]$_{2328}$ emission can account for up to $\sim 15$ per cent
of the flux, whereas Si~II]$_{2329}$ contribution is negligible ($\la 0.5$ per cent ).
\begin{figure}
\includegraphics[width=8cm]{obsvssim.eps}
\caption{The observed flux in the 2326~\AA\ feature
compared with that derived from the best fit. The dashed line marks
the 1:1 relation. \label{obssimsnr}}
\end{figure}
\subsubsection{Line ratios as $T_{\rm e}$ and $n_{\rm e}$ indicators}
\label{secratios}
The C~II]/Si~II] flux ratio is a sensitive tracer of the
electron temperature in the range of interest. As shown in Fig.\ref{ciisiite},
$T_{\rm e}$ is basically derived from this ratio in our code.
The regression line in Fig.\ref{ciisiite} has a Pearson's coefficient of $r=0.91$ with
a $p{\rm -value}$\footnote{
A $p{\rm -value}$ of $p$ means that, for an uncorrelated population, there is a
$100 \cdot p$ per cent probability of obtaining a correlation
coefficient of $|r|$ or larger by chance.
We consider a correlation coefficient
statistically significant if the $p{\rm -value}$ is lower than 5 per cent.}
$=4.8 \times 10^{-7}$.
The regression equation is:
\begin{equation}
\log(F({\rm CII]})/F({\rm SiII]})) = (2.1 \pm 0.3) \, \log(T_{\rm e}) - (8.1 \pm 1.1)
\end{equation}
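For readers wishing to apply this calibration, the regression can be inverted to obtain a rough $T_{\rm e}$ estimate from a measured flux ratio. The following Python sketch is illustrative and not part of the original analysis; it uses only the central fit coefficients and ignores their quoted uncertainties ($\pm 0.3$ and $\pm 1.1$):

```python
import math

# Invert log10(F_CII/F_SiII) = 2.1*log10(Te) - 8.1 (central values of the
# fit above; the quoted uncertainties are ignored in this sketch).
def te_from_ratio(flux_ratio):
    """Rough electron temperature (K) from the C II]/Si II] flux ratio."""
    return 10 ** ((math.log10(flux_ratio) + 8.1) / 2.1)
```

For example, a flux ratio of $\sim 8.5$ corresponds to $\log T_{\rm e} \simeq 4.3$, in the middle of the range found for this sample.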
We have not found any significant correlation
between C~II]/Fe~II]$_{2333}$ flux ratio and the temperature.
\begin{figure}
\includegraphics[width=8cm]{ratiosvste.eps}
\caption{The ratio between C~II] and Si~II] fluxes
$F({\rm CII}])/F({\rm SiII}])$ as a function of the
temperature $T_{\rm e}$ (K). Solid line is the best linear
fit. \label{ciisiite}}
\end{figure}
Regarding electron density, we have recovered the expected relation between $n_{\rm e}$ and the
C~II]/Fe~II]$_{2333}$ and Si~II]/Fe~II]$_{2333}$ ratios.
The regression parameters are:
\begin{itemize}
\item For C~II]/Fe~II]$_{2333}$: $r=-0.6$ and $p{\rm -value}=0.015$
\item For Si~II]/Fe~II]$_{2333}$: $r=-0.9$, $p{\rm -value}=8.34 \times 10^{-7}$ and regression equation:
$\log(F({\rm SiII]})/F({\rm FeII]})) = (-0.25 \pm 0.03) \, \log(n_{\rm e}) + (3.02 \pm 0.32)$, as shown in Fig.\ref{siifeiine}.
\end{itemize}
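Similarly, the Si~II]/Fe~II]$_{2333}$ calibration can be inverted to estimate the electron density. The sketch below is again illustrative only, using the central fit coefficients without their uncertainties:

```python
import math

# Invert log10(F_SiII/F_FeII) = -0.25*log10(ne) + 3.02 (central fit values
# from this section; uncertainties are ignored in this sketch).
def ne_from_ratio(flux_ratio):
    """Rough electron density (cm^-3) from the Si II]/Fe II] flux ratio."""
    return 10 ** ((3.02 - math.log10(flux_ratio)) / 0.25)
```

Because the slope is shallow ($-0.25$), a small error in the measured ratio translates into a large error in $n_{\rm e}$, so this inversion should only be used as an order-of-magnitude indicator.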
\begin{figure}
\includegraphics[width=8cm]{ratiosvsne.eps}
\caption{
$F({\rm SiII}])/F({\rm FeII}])$ as a function of the electron
density $n_{\rm e}$ (cm$^{-3}$). Solid line represents the best linear fit.\label{siifeiine}}
\centering
\end{figure}
\section{{\rm C~II]} as an accretion tracer}
\label{ciimdot}
The C~II] quintuplet has been found to be a good tracer of the accretion rate \citep{calvet2004,ingleby2013}.
In this section, we discuss this point as well as the relationship between the obtained results,
($n_{\rm e}$, $T_{\rm e}$, $\sigma$) and accretion rate ($\dot{M}$).
\subsection{Dispersion versus electron temperature}
Further insight on the source of the profile broadening can be drawn from
Fig.\ref{sigmatene}.
The line dispersions that best fit the observed spectra are shown in Table~\ref{tab3} and
they are in the range $20 \la \sigma \la 160$~km~s$^{-1}$.
TW~Hya and CY~Tau have $\sigma < 25$~km~s$^{-1}$ and high $T_{\rm e}$ values
($\log T_{\rm e}({\rm K}) \simeq 4.4-4.5$). For these stars the line broadening is
consistent with thermal broadening ($v_{th} \sim 22$~km~s$^{-1}$).
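The thermal check can be reproduced with the most probable thermal speed $v_{th}=\sqrt{2kT/m}$. The short sketch below is our illustration; note that the species mass entering the formula is an assumption on our part (with the hydrogen mass, the quoted $\sim 22$~km~s$^{-1}$ is recovered at $\log T_{\rm e} \simeq 4.45$):

```python
import math

K_B = 1.380649e-23       # Boltzmann constant, J/K
M_H = 1.6735575e-27      # hydrogen atom mass, kg (assumed species)

def v_thermal_kms(T, m=M_H):
    # most probable thermal speed sqrt(2kT/m), returned in km/s
    return math.sqrt(2.0 * K_B * T / m) / 1e3
```

For heavier species such as carbon the thermal width scales as $m^{-1/2}$ and is correspondingly smaller.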
SU~Aur is the source with the largest line broadening, $\sigma > 100$~km~s$^{-1}$,
and a temperature of $T_{\rm e} \simeq 10^{4.3}$~K. This star is the fastest rotator
in the sample ($v \, \sin i \sim 60$~km~s$^{-1}$) thus, rotation could be
an important source of line broadening.
The rest of the stars have intermediate $\sigma$ values ($40 \la \sigma \la 100$~km~s$^{-1}$) and
temperatures in the range $\log T_{\rm e}({\rm K})\simeq 4.1-4.45$.
The dispersions are suprathermal and the contribution of
rotational broadening is negligible, since the $v \, \sin i$ values are in the range
$\sim 5-25$~km~s$^{-1}$ (see Table~\ref{biblio}).
There is a mild correlation between $\sigma$ and $T_{\rm e}$,
as shown in Fig.\ref{sigmatene}
($r=-0.6$ and a $p{\rm -value}=0.018$).
\begin{figure}
\includegraphics[width=8cm]{sigmavste.eps}
\caption{Line dispersion $\sigma$~(km~s$^{-1}$) as a function of
temperature $T_{\rm e}$~(K). Solid line is the
best linear fit. The error bars for $\log (\sigma)$ are smaller than the circle size. \label{sigmatene}}
\end{figure}
\subsection{Dispersion versus accretion rate}
We have also examined the relation between dispersion, $\sigma$ and
accretion rate, $\dot{M}$. As shown in Fig.\ref{sigmamacr}, TTSs show a statistically significant
correlation between $\sigma$ and $\dot{M}$: the higher
the accretion rate the wider the line.
Note that there is a small group
of TTSs (TWA~3A, RECX~11, RECX~15 and PDS~66) with $\dot{M} < 10^{-9}$~M$_{\sun}$~yr$^{-1}$
that seem to have accretion rates too low
for the given dispersion. PDS~66 also displays an unusually high C~II] flux for the
accretion rate derived by \citet{ingleby2013}. For this reason
these stars have not been considered to determine the correlation coefficient.
The Pearson's coefficient is $r=0.87$ with a
$p{\rm -value}=0.0002$.
This trend suggests a clear connection between the line formation region and the accretion
process, and agrees with trends recently reported for other UV spectral tracers
\citep{ardila2013,aig2013b}.
\begin{figure}
\includegraphics[width=8cm]{sigmavsmacr.eps}
\caption{The calculated line width $\sigma$~(km~s$^{-1}$) as a
function of the stellar accretion rate $\dot{M}$~(M$_{\sun}$~yr$^{-1}$)
(taken from the literature). Solid line is the
best linear fit for stars with
$\dot{M} \geq 10^{-9}$~M$_{\sun}$~yr$^{-1}$. The error bars for $\log (\sigma)$ are smaller than the circle size. \label{sigmamacr}}
\end{figure}
\subsection{C~II] luminosity versus accretion rate}
Here we re-examine the correlation reported by \citet{ingleby2013} from low-dispersion data
between the accretion rate/luminosity and the C~II] flux.
Fluxes are extinction corrected according
to \citet{valencic2004} assuming $R_V=3.1$
(see Table~\ref{biblio} for a compilation of the
$A_V$ values and distances used in the calculation,
as well as other relevant parameters).
The extinction $A_V$ is one of the
major sources of uncertainty affecting, among
other things, the accretion rate estimates.
For this reason, extinctions have been selected
mainly from the same source as the accretion rates
\citep{ingleby2013}. As a test, we have repeated
the analysis with data from \citet{ardila2013},
and found the same general trend.
\begin{figure}
\includegraphics[width=8cm]{lumvsmacr.eps}
\caption{The C~II] luminosity (in L$_{\sun}$) as a
function of the accretion rate (M$_{\sun}$~yr$^{-1}$). Solid line is the
best linear fit for stars with
$\dot{M} \geq 10^{-9}$~M$_{\sun}$~yr$^{-1}$. \label{ciiavmacr}}
\end{figure}
As shown in Fig.\ref{ciiavmacr}, the C~II] luminosity increases as the accretion rate does:
\begin{equation}
\log(L({\rm CII]})/{\rm L}_{\sun})=(1.24 \pm 0.26) \log{\dot{M}} + (6.27 \pm 2.06)
\end{equation}
\noindent
with a Pearson's correlation coefficient of $r=0.83$ ($p{\rm -value}=0.0008$).
This correlation is for stars with $\dot{M} > 10^{-9}$~M$_{\sun}$~yr$^{-1}$.
For comparison, \citet{ingleby2013} obtain a slope $ \simeq 0.9 \pm 0.2$
from low-dispersion data.
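As a practical note, the correlation above can be used in both directions: to predict the expected C~II] luminosity for a given accretion rate, or to estimate $\dot{M}$ from a measured luminosity. The following sketch uses the central fit coefficients only; the quoted errors and the intrinsic scatter of the relation are ignored:

```python
# Central values of the fit log10(L_CII/Lsun) = 1.24*log10(Mdot) + 6.27,
# reported above for Mdot > 1e-9 Msun/yr; coefficient errors are ignored.
SLOPE, INTERCEPT = 1.24, 6.27

def log_lcii(log_mdot):
    """log10(L_CII/Lsun) predicted from log10(Mdot [Msun/yr])."""
    return SLOPE * log_mdot + INTERCEPT

def log_mdot_from_lcii(log_l):
    """Accretion-rate estimate obtained by inverting the same relation."""
    return (log_l - INTERCEPT) / SLOPE
```

For instance, $\dot{M}=10^{-8}$~M$_{\sun}$~yr$^{-1}$ corresponds to $\log(L({\rm CII]})/{\rm L}_{\sun}) \simeq -3.65$.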
\subsection{Electron density versus accretion rate}
In Fig.\ref{nemacr} we have plotted the electron density as a function of the accretion rate.
TWA~3A, RECX~11, RECX~15 and PDS~66 again show a peculiar behaviour, related
to their apparently too low accretion rates compared with the observed electron density
in the emission region.
There are four stars (TW~Hya, CY~Tau, GM~Aur and DF~Tau)
with $n_{\rm e} > 10^{11}$~cm$^{-3}$. There seems to be a trend for $n_{\rm e}$ to increase as the accretion rate does
($r=0.92$, $p{\rm -value}=0.001$) in sources with
$n_{\rm e} \la 10^{11}$~cm$^{-3}$ and $\dot{M} > 10^{-8}$~M$_{\sun}$~yr$^{-1}$.
\begin{figure}
\includegraphics[width=8cm]{nevsmacr.eps}
\caption{Electron density $n_{\rm e}$ (cm$^{-3}$) as a function of the
accretion rate $\dot{M}$~(M$_{\sun}$~yr$^{-1}$). \label{nemacr}}
\end{figure}
\begin{table*}
\caption{Properties of the sample taken from the literature.}
\label{biblio}
\centering
\begin{tabular}{ccccccccccc}
\hline
Star & SpT & $L$ & $R$ & $M$ & d & $\log(\dot{M})$ & $v\,\sin i$ & $A_V$ & $v_{rad}$ & Ref.\\
& & (L$_{\sun})$ & (R$_{\sun})$ & (M$_{\sun})$ & (pc) & (M$_{\sun} \,{\rm yr}^{-1}$) & (km~s$^{-1}$)& (mag) & (km s$^{-1}$) & \\
\hline
AA Tau & K7 & 1 & 2.1 & 0.8 & 140 & -7.82 & 11 & 1.9 & 16.1 & 1,17 \\
CY Tau & M2 & 0.31 & 1.63 & 0.55 & 140 & -8.86 & 10.6 & 0.03 & 19.1 & 2,5,3,18 \\
CV Cha & G9 & 3.1 & 2 & 1.5 & 160 & -7.23 & 32 & 1.5 & 16.1 & 1,3,10 \\
DE Tau & M2 & 0.8 & 2.4 & 0.4 & 140 & -7.55 & 10 & 0.9 & 14.9 & 1,9,17 \\
DF TauA & M1 & 0.56 & 3.37 & 0.68 & 140 & -8 & 16.1 & 0.15 & 11 & 2,7,3,9 \\
DG Tau & K6 & 1.15 & 1 & 0.88 & 140 & -7.34 & 20 & 1.41 & 15.4 & 2,7,10,18 \\
DR Tau & K5 & 0.4 & 1.1 & 0.9 & 140 & -7.28 & 10 & 1.4 & 27.6 & 1,3,10 \\
DS Tau & K5 & 0.68 & 1.36 & 1.04 & 140 & -7.94 & 10 & 0.9 & 16.3 & 2,7,17 \\
FU Ori & G0 & --- & --- & --- & 450 & --- & --- & -- & 28 & 15,19 \\
GM Aur & K7 & 1.2 & 2.3 & 0.8 & 140 & -8.02 & 12.4 & 0.6 & 15 & 1,9,17 \\
PDS66 & K1 & 0.9 & 1.3 & 1.1 & 86 & -9.89 & 14 & 0.2 & 11.6 & 1,3,14 \\
RECX15 & M3 & 0.1 & 0.9 & 0.3 & 97 & -9.1 & 15.9 & 0 & 15.9 & 1,3,13 \\
RECX11 & K5 & 0.6 & 1.4 & 1 & 97 & -9.77 & 16.4 & 0 & 18 & 1,3,12 \\
RY Tau & G1 & 9.6 & 2.9 & 2 & 140 & -7.17 & 48.7 & 2.2 & 16.5 & 7,17 \\
SU Aur & G1 & 7.8 & 2.6 & 1.7 & 140 & -7.31 & 59 & 0.9 & 16 & 7,17,18 \\
SZ102 & K0 & --- & --- & 0.75 & 200 & -8.1 & --- & 0.32 & 5 & 3,4 \\
T Tau & K0 & 7.29 & 2.9 & 2.11 & 140 & -7.5 & 20.1 & 1.46 & 19.1 & 2,8,17 \\
TW Hya & K7 & 0.3 & 1.1 & 0.8 & 56 & -8.74 & 5.8 & 0 & 13.5 & 1,3,16 \\
TWA3A & M3 & 0.4 & 1.8 & 0.3 & 50 & -10 & 12 & 0 & --- & 1,14 \\
UX TauA & K5 & 0.91 & 2.05 & 1.09 & 140 & -7.96 & 25.4 & 0.26 & 15.6 & 2,3,6,11,17 \\ \hline
\end{tabular}
\begin{tabular}{l}
(1) \citet{ingleby2013}; (2) \citet{white2001}; (3) \citet{ardila2013}; (4) \citet{france2012}; (5) \citet{gullbring1998} \\
(6) \citet{andrews2011}; (7) \citet{salyk2013}; (8) \citet{calvet2004}; (9) \citet{clarke2000}; (10) \citet{johnskrull2000};\\
(11) \citet{preibisch1997}; (12)\citet{jayawardhana2006};(13) \citet{woitke2011}; (14) \citet{dasilva2009};\\
(15) \citet{petrov2008}; (16) \citet{herczeg2006}; (17) \citet{hartmann1986}; (18) \citet{nguyen2012};\\
(19) \citet{malaroda2006}.
\end{tabular}
\end{table*}
\subsection{Blueshifted profiles}
The shift of the lines, $\delta$, obtained from the fitting, was corrected to the stellar rest frame and
is provided in Table~\ref{tab3}; the radial velocities of the TTSs are compiled in Table~\ref{biblio}.
Note that the pointing errors in the STIS data result
in a velocity uncertainty of 3~km~s$^{-1}$, negligible for the purpose of this work.
Most TTSs satisfy $-20 \la \delta \la 20$~km~s$^{-1}$;
however, three stars, namely DG~Tau, FU~Ori and RY~Tau, show clearly blueshifted emission
at velocities of -81.5, -73.5 and -47.1~km~s$^{-1}$, respectively. This blueshift indicates
a contribution from the unresolved base of the jet.
\section{Conclusions}
\label{results}
In this work, we have studied the semiforbidden lines of C~II],
Si~II] and Fe~II] in the 2310-2340~\AA\ spectral range for a
sample of 20 TTSs using 30 medium resolution spectra
obtained with the \textit{HST}/STIS instrument.
As the lines are blended in a broad feature in most sources,
we have developed a numerical method to determine the
properties of the line emission region assuming that the
radiating plasma can be characterized by a
single $T_{\rm e}$ and $n_{\rm e}$ pair, considering solar abundances.
This is the first work in which $n_{\rm e}$ and $T_{\rm e}$ have been determined
for such a large sample of TTSs; previous works dealt with much
smaller samples \citep{aig2001,aig2003}.
In magnetospheric accretion, matter flows from the inner border of the circumstellar disc along the magnetospheric
surface, finally falling onto the star. Near the stellar surface a dense, hot shock is formed, producing hot spots.
The sheared magnetosphere-disc boundary layer is expected to be very prone to the
development of turbulent flows.
Within this overall picture there are four issues worth remarking.
\begin{itemize}
\item In most TTSs, the C~II], Si~II] and Fe~II] radiation seems to be produced in an extended magnetospheric structure
characterized by $10^{8} \la n_{\rm e} \la 10^{12}$~cm$^{-3}$ and $10^{4.1} \la T_{\rm e} \la 10^{4.5}$~K.
The line broadening is suprathermal except for two stars (TW~Hya and CY~Tau).
The dispersion depends on the electron temperature of the radiating plasma and on the
accretion rate, suggesting a connection
between the line formation region and the accretion process.
This is consistent with the line radiation being dominated by the magnetospheric accretion flow, close to the disc.
For TW~Hya and CY~Tau, the densities and temperatures are higher than
for the rest of the stars and similar to those observed in the atmospheres of cool stars \citep{brown1984,brooks2001}.
Also, the line broadening is thermal.
Therefore, the observed emission lines in TW~Hya and CY~Tau are formed in a different region in the magnetospheric accretion flow (likely close to the star).
In good agreement with this picture, the density and temperature in the line formation region are below the theoretical
predictions for the density and temperature in the accretion shock ($n_{\rm e} \simeq 10^{13}$~cm$^{-3}$ and $T_{\rm e} \simeq 10^6$~K)
and about the densities and temperatures
expected in the funnel flow \citep[$n_{\rm e} \simeq 10^9-10^{12}$~cm$^{-3}$ and $T_{\rm e} \simeq 5 \times 10^3-10^{4.5}$~K; see for example][]{calvet1998,muzerolle2001}.
\item There are three sources, DG~Tau, FU~Ori and RY~Tau, with blueshifted line centroids.
DG~Tau and RY~Tau have resolved jets and FU~Ori has a strong wind.
The large blueshifted velocities in these stars can be due to the contribution
of the outflows to the C~II] lines, suggesting that the properties at the base of the outflow are
similar to those at the base of the accretion stream.
The electron densities of the jet sources derived from the C~II],
Si~II] and Fe~II] lines agree well with previous estimates of electron densities at the base of the jet
\citep{aig2001,aig2003,aig2007}. The observations
agree with the predictions of hot disc winds \citep{aig2005}.
From the theoretical point of view, it is expected that both the base of the jet and the foot-point of the
accretion flow share similar physical conditions \citep[see e.g.][]{mohanty2008}.
\item The C~II] quintuplet can be used as a reliable tracer
of the mass accretion rate onto the star. The C~II] luminosity increases
as the accretion rate does, in agreement with previous
results by \citet{calvet2004,ingleby2013}.
\end{itemize}
\section*{Acknowledgements}
The authors acknowledge support from the Spanish Ministry of Economy and Competitiveness through grant AYA2011-29754-C03-01.
We also wish to thank an anonymous referee for her/his useful comments.
\label{intro}
Strongly coupled quantum matter encompasses some of the most interesting problems in modern physics. Monte Carlo methods are commonly used to perform numerical simulations of theories not amenable to perturbative expansions. These methods typically rely on the path integral formulation of the theory in Euclidean space-time to have Boltzmann-like weights, which can be interpreted as probability distributions. This allows the generation of field configurations distributed according to these weights, whence observables can be sampled.
Real-time theories, QCD at finite baryon density, and non-relativistic bosons, amongst others, however, have complex actions and, therefore, complex weights in their path integrals. This forbids a probabilistic interpretation of the path integral measure. Moreover, this poses a big numerical challenge as oscillatory contributions from the generated configurations must cancel precisely in order to give accurate answers. This is known as the \textit{sign problem}.
In this work, we focus on the complex Langevin (CL) method. It is an extension of stochastic quantisation~\cite{Parisi:1980ys}, where real dynamical variables are allowed to take complex values. After some early works~\cite{Karsch:1985cb,Damgaard:1987rr}, it was realised that the method was plagued by runaway solutions and convergence to wrong limits~\cite{Ambjorn:1985iw,Klauder:1985ks,PhysRevB.34.1964,Ambjorn:1986fz}. More recently, complex Langevin simulations experienced a revival~\cite{Berges:2005yt,Berges:2006xc,Berges:2007nr,Pehlevan:2007eq,Aarts:2008rr,Aarts:2008wh,Guralnik:2009pk}, with studies focusing on understanding its properties and why it sometimes failed~\cite{Aarts:2009uq,Aarts:2010aq,Aarts:2010gr}. This progress led to the use of adaptive step size for the numerical integration~\cite{Aarts:2009dg}, which has improved the numerical stability and reduced the problem of runaway solutions. In addition, criteria for correctness~\cite{Aarts:2011ax,Nagata:2016vkn} that allow \textit{a posteriori} checks were formulated. Another byproduct of the resurgence of the complex Langevin method was the invention of the gauge cooling technique~\cite{Seiler:2012wz,Aarts:2013uxa}, inspired by gauge fixing~\cite{Berges:2007nr}, to limit excursions on the complex manifold in a gauge invariant way. Many more studies followed, applying the method to SU($3$) spin models~\cite{Aarts:2011zn}, the Thirring model~\cite{Pawlowski:2013pje,Pawlowski:2013gag}, random matrix theories~\cite{Mollgaard:2013qra,Mollgaard:2014mga,Bloch:2017sex}, and QCD with staggered quarks~\cite{Sexty:2013ica}, with a hopping expansion~\cite{Aarts:2014bwa}, and in the limit of heavy-dense quarks~\cite{Aarts:2016qrv}.
Additional uses of the complex Langevin method outside of QCD related models include superstring-inspired models~\cite{Nishimura:2019qal} and quantum many-body studies: rotating bosons~\cite{Hayata:2014kra,Berger:2018xwy}, spin-orbit coupled bosons~\cite{Attanasio:2019plf}, fermions with repulsive interactions~\cite{Loheac:2017yar}, non-zero polarisation~\cite{Loheac:2018yjh,Rammelmuller:2018hnk}, mass imbalance~\cite{Rammelmuller:2017vqn,Rammelmuller:2020vwc}, and to determine their virial coefficients~\cite{Shill:2018tan}. In addition, the phase structure of complex unitary matrix models \cite{Basu:2018dtm} as well as supersymmetric models and field theories \cite{Joseph:2019sof} has been studied.
\section{The Complex Langevin method}\label{sec:method}
\subsection{Overview}
The goal is to generate field configurations with a complex measure
\begin{equation}
e^{-S} \, \mathcal{D}\phi \equiv \rho\, \mathcal{D}\phi
\end{equation}
where $\phi$ generically represents all fields in the theory, and $S$ is a complex action defined on a real manifold $\mathcal{M}$.
This measure is replaced by a positive one, $P\, \mathcal{D}\phi_R \mathcal{D}\phi_I$, defined on the complexification $\mathcal{M}_c$ of $\mathcal{M}$.
This is the equilibrium measure of the Langevin process on $\mathcal{M}_c$.
When the criteria for convergence, outlined in~\cite{Aarts:2009uq,Aarts:2011ax}, are met observables calculated with either measure have the same expectation value.
The Langevin process is given by
\begin{align}
d \phi_R &= K_R\, dt + dw\,,\\
d \phi_I &= K_I\, dt\,,
\end{align}
where $t$ is known as the Langevin time, and $dw$ is a Wiener process normalised as $\langle dw^2 \rangle = 2 dt$. The drifts are given by
\begin{align}
K_R &= - \text{Re} \left[ \nabla_\phi\, S[\phi_R + i \phi_I] \right] \,,\\
K_I &= - \text{Im} \left[ \nabla_\phi\, S[\phi_R + i \phi_I] \right].
\end{align}
The choice to add the Wiener process only to the real field is arbitrary and can be generalised.
However, studies have shown that it is more beneficial to add noise only to the real part of the fields, see e.g.~\cite{Aarts:2013uza}. The process is said to produce correct results if the expectation value of a generic observable $\mathcal{O}$ for asymptotically long times, $\langle \mathcal{O} \rangle_\infty$, agrees with the `correct' expectation value, calculated with the original complex measure
\begin{equation}
\langle \mathcal{O} \rangle_\infty = \langle \mathcal{O} \rangle_c \equiv \int \mathcal{D}\phi \,\, \mathcal{O}(\phi)\, e^{-S} \,.
\label{eq:long-time-identity}
\end{equation}
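As a concrete illustration of these equations (our addition, not drawn from the works cited here), consider the exactly solvable Gaussian model $S(z)=\sigma z^2/2$ with complex $\sigma$ and ${\rm Re}\,\sigma>0$, for which the analytically continued result is $\langle z^2\rangle = 1/\sigma$. A minimal Python sketch of the discretised process, with noise added to the real part only:

```python
import numpy as np

# Minimal complex Langevin sketch for the solvable Gaussian model
# S(z) = sigma*z**2/2, complex sigma with Re(sigma) > 0.
# Drift K = -dS/dz = -sigma*z; noise acts on the real part of z only.
# The exact, analytically continued result is <z^2> = 1/sigma.
rng = np.random.default_rng(0)
sigma = 1.0 + 1.0j
dt, nsteps, ntraj = 1e-3, 5000, 10000
z = np.zeros(ntraj, dtype=complex)          # ensemble of trajectories
for _ in range(nsteps):                     # total Langevin time t = 5
    noise = np.sqrt(2.0 * dt) * rng.standard_normal(ntraj)  # real noise
    z = z + (-sigma * z) * dt + noise       # explicit Euler step
z2 = np.mean(z * z)                         # late-time ensemble average
exact = 1.0 / sigma                         # = 0.5 - 0.5j for sigma = 1+i
```

With $\sigma = 1+i$ the long-time ensemble average reproduces $\langle z^2\rangle = 0.5 - 0.5\,i$ to within the statistical error, even though the sampling takes place on the complexified manifold.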
The complexification of the original manifold implies making all fields complex-valued.
The extra degrees of freedom can lead to unstable trajectories for the Langevin process.
Those can be stabilised, in some cases, by a change of variables, such as the non-trivial integration measure in the SU($3$) spin model, see ref.~\cite{Aarts:2012ft}.
In the case of gauge theories this means relaxing the unitarity constraint on the gauge links, thus allowing the full space of non-singular matrices to be explored. For QCD this implies that the standard colour group SU($3$) is extended to SL$(3,\mathbb{C})$, which is not a compact group. Group elements can move arbitrarily far away from SU($3$), thus violating the criteria for correctness.
The method of gauge cooling~\cite{Seiler:2012wz,Aarts:2013uxa} was constructed as a means to bring the evolution closer to the unitary manifold in a gauge invariant way.
Gauge cooling consists of a series of gauge transformations constructed to reduce the distance to SU($3$) in a steepest descent fashion.
A recent discussion on how gauge cooling stabilises complex Langevin dynamics can be found in~\cite{Cai:2019vmt}.
In that work, the authors carry out analytical and numerical studies of the effects of gauge cooling on SU($2$) and SU($3$) Polyakov chains.
They point out four main effects of gauge cooling:
(i) the removal of a large number of redundant degrees of freedom,
(ii) some components of the drift no longer have any effect on the dynamics,
(iii) emergence of
additional drift terms towards the unitary manifold supporting the stability of the simulation, and (iv) the introduction of singularities in the drift.
Effects (i)--(iii) stabilise the Langevin dynamics and support the requirements of the correctness criteria, while the implications of (iv) are still unknown.
A justification for the gauge cooling method was provided in~\cite{Nagata:2015uga} for the continuum Langevin time formulation, whereas~\cite{Nagata:2016vkn} provides justification for discrete time.
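To make the notion of `distance to the unitary manifold' concrete, one common choice (a sketch; normalisation conventions vary between works) is $d = N^{-1}\,{\rm Tr}\,[UU^\dagger + (UU^\dagger)^{-1} - 2\,\mathbb{1}]$, which is non-negative and vanishes exactly for unitary $U$:

```python
import numpy as np

def unitarity_norm(U):
    """Distance of U from the unitary manifold (one common convention):
    (1/N) Re Tr[U U^+ + (U U^+)^{-1} - 2*1].
    Each eigenvalue lam > 0 of U U^+ contributes lam + 1/lam - 2 >= 0,
    so the norm is non-negative and vanishes iff U U^+ = 1."""
    N = U.shape[0]
    W = U @ U.conj().T
    return np.real(np.trace(W + np.linalg.inv(W) - 2.0 * np.eye(N))) / N
```

Gauge cooling then amounts to choosing gauge transformations that reduce this quantity in a steepest-descent fashion, which leaves all gauge-invariant observables untouched.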
\subsection{Criteria for Correctness} \label{sec.corr}
Correct convergence of the complex Langevin technique holds if the expectation value of an observable $\mathcal{O}$ over the probability distribution $P(\phi_R, \phi_I; t)$ agrees with that corresponding to the time-evolved complex measure $\rho(\phi; t)$
\begin{equation}
\langle \mathcal{O} \rangle_{P(t)} =
\langle \mathcal{O} \rangle_{\rho(t)}\,.
\label{eq:CL-correctness}
\end{equation}
The important step in the proof of correctness is the introduction of an interpolation function connecting the left and right-hand side of (\ref{eq:CL-correctness})
\begin{equation}
F_{\mathcal{O}}(t,\tau) = \int \mathcal{D}\phi_R \mathcal{D}\phi_I \, P(\phi_R, \phi_I; t-\tau) \mathcal{O}(\phi_R, \phi_I; \tau)\,.
\label{eq:interpolf}
\end{equation}
Here the interpolating parameter $\tau$ is such that $0 \leq \tau \leq t$. For $\tau = 0$ (\ref{eq:interpolf}) reproduces the LHS of (\ref{eq:CL-correctness}) and for $\tau = t$ the RHS is recovered
assuming $P(\phi_R, \phi_I; 0) = \rho(\phi; 0)$, see equation (27) in~\cite{Aarts:2011ax} for a derivation. Correct convergence of the complex Langevin process requires the interpolating function
to be $\tau$-independent. This can be proven
to hold in the
absence of boundary terms arising from the integral (\ref{eq:interpolf}), most prominently in
the $\phi_I$-direction. This will be discussed further in the next section.
The formal argument of correctness relies on the holomorphicity of the action and hence of the drift. However, the justification of correctness
can be extended also to the presence of meromorphicity. For instance, in QCD zeros of the fermion determinant give rise to poles in the drift.
Correctness can be ensured if the distribution $P$
vanishes sufficiently fast
close to poles~\cite{Nishimura:2015pba,Aarts:2017vrv}.
In practice, this can be verified \textit{a posteriori}.
It should be noted, however, that a pole
lying inside the distribution $P$ can lead to a separation
of configuration space and to ergodicity problems.
On the level of the effective action,
complex Langevin studies of chiral random matrix theory~\cite{Mollgaard:2013qra,Splittorff:2014zca}
and of effective Polyakov line models~\cite{Greensite:2014cxa} have identified the branch cut of the logarithm as a source of failure of correct convergence.
The ambiguity of the complex logarithm causes branch cut crossings, i.e.~a winding
of the CL trajectory around a pole in
the drift.
This has been further investigated with regard to the criteria
of correctness in \cite{Nishimura:2015pba} by means of
a model study on an action containing logarithmic
singularities.
Here, it was found that
the multi-valued character of the action is not
the cause of the problem.
The complex Langevin equation can be
formulated in a single-valued fashion by deriving the
drift from the weight $\rho(x)$ instead of the effective action.
The key to correctness, i.e.~the validity of
(\ref{eq:CL-correctness}) lies in the
above mentioned behaviour of the
probability distribution $P$ at
and around the singularity
corresponding to the branch point.
As the authors show,
correct results can be obtained despite the occurrence of
winding. Moreover, it is shown that in the case of a
single-valued action containing non-logarithmic singularities the complex Langevin
method can lead to wrong results. In this case the
probability distribution does not vanish sufficiently fast
close to the pole for (\ref{eq:CL-correctness}) to hold.
Complementary to that, extended analyses of simple models as well as simulations of full QCD
provide further indications that the winding
is not relevant for
the correctness of the method~\cite{Aarts:2017vrv}.
The criteria for correctness have to be checked for every observable. Care has to be taken in the presence of poles. Results are affected by the interplay
of the pole order and the behaviour of the distribution $P$ and the observables around the pole. Recently, poles in the
Complex Langevin equation have been studied in connection with boundary terms in \cite{Seiler:2020mkh}.
In \cite{Nagata:2016vkn, Nagata:2018net} the criteria of correctness are
formulated from a slightly different but equivalent viewpoint. There, the authors point out that the
exponential (power-law) fall-off
behaviour of the probability distribution associated with the drift directly indicates whether CL works (fails).
It is argued that this constitutes a necessary and sufficient criterion. The proof of the
key identity (\ref{eq:long-time-identity}) is
facilitated by two ingredients: (i) formulating the correctness criteria
(\ref{eq:CL-correctness})
in terms of a discretized Langevin time expansion
($\varepsilon$-expansion) of the time-evolved observable and the probability
distribution $P(\phi_R, \phi_I; t)$, and (ii) relaxing the assumption that the radius of convergence $\tau$ in the
corresponding series expansion of the time evolution (\ref{eq:interpolf}) is infinite to being finite.
Correctness of the CL process follows if the
$\varepsilon$-expansion is valid. The latter stands and falls with the exponential
decay of the drift histogram. Moreover, the appearance of boundary terms arising from an integration by parts
in the $\tau$-derivative of the interpolation function is interpreted as related to the breakdown of the
$\varepsilon$-expansion; this point is relevant for the next section.
The criterion is probed in numerical simulations
using simple models \cite{Nagata:2016vkn},
gauge theories \cite{Nagata:2018net} and for full QCD \cite{Tsutsui:2019suq}.
For a comparison of the drift criterion within a study on correctness in terms of
boundary terms see \cite{Scherzer:2018hid}.
Moreover, the validity of the complex Langevin method has
also been investigated recently from a
mathematical perspective \cite{cai2020validity}.
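The drift-magnitude criterion can be probed even in the solvable Gaussian toy model $S(z)=\sigma z^2/2$, where complex Langevin is known to converge: there, the tail of the $|K|$ distribution decays faster than any power law. The sketch below is our illustration, not taken from the cited works:

```python
import numpy as np

# Collect drift magnitudes |K| = |sigma*z| over an equilibrium ensemble of
# complex Langevin trajectories for S(z) = sigma*z**2/2, Re(sigma) > 0.
rng = np.random.default_rng(1)
sigma = 1.0 + 1.0j
dt, nsteps, ntraj = 1e-3, 5000, 10000
z = np.zeros(ntraj, dtype=complex)
for _ in range(nsteps):                     # evolve well past thermalisation
    z = z + (-sigma * z) * dt + np.sqrt(2.0 * dt) * rng.standard_normal(ntraj)
u = np.abs(-sigma * z)                      # drift magnitudes at late time
# a power-law tail would leave a sizeable fraction of samples far out;
# here the extreme tail is essentially empty, consistent with convergence
tail_frac = np.mean(u > 5.0 * u.mean())
```

For models in which complex Langevin fails, the analogous histogram would develop a power-law tail and this tail fraction would remain non-negligible as the cut is pushed outwards.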
\section{Recent developments}\label{sec:recent}
\subsection{Boundary terms}
The condition of (\ref{eq:interpolf}) being $\tau$-independent can be written as
\begin{equation}
\frac{\partial}{\partial \tau} F_{\mathcal{O}}(t, \tau) = \lim_{Y\to\infty} B_{\mathcal{O}}(Y;t,\tau) = 0\,,
\end{equation}
where $Y$ is a cutoff in the non-compact directions and the middle term is a boundary term left from the integration
by parts
\begin{align}
&B_{\mathcal{O}}(Y;t,\tau) =\nonumber\\
&\int \mathcal{D}\phi_R \left[ K_I(\phi_R, Y) P(\phi_R, Y; t-\tau) \mathcal{O}(\phi_R + i Y; \tau) \right.\nonumber\\
&\left. -K_I(\phi_R, -Y) P(\phi_R, -Y; t-\tau) \mathcal{O}(\phi_R - i Y; \tau) \right]\,.
\label{eq:boundary-term}
\end{align}
This term has been recently investigated in~\cite{Scherzer:2018hid} for a simple one plaquette model with U($1$) symmetry.
In that model, (\ref{eq:boundary-term}) is non-zero and spoils the correctness of expectation values. Moreover it has been shown that the boundary term can be determined stochastically from the Langevin process.
Comparisons with numerical solutions of the Fokker-Planck equation (FPE), which describes the evolution of $P$, have been performed.
This is however very difficult in higher dimensions.
It is shown that the addition of a regulator term to the action is capable of reducing the boundary terms.
In~\cite{Scherzer:2019lrh}, studies of boundary terms have been deepened in the U($1$) one plaquette model and for the Polyakov chain.
This has been also extended to field
theories such as the three-dimensional XY model as well as
HDQCD.
In the case of gauge theories, the boundary terms appear to be related to the distance to the unitary manifold, typically measured via the unitarity norm~\cite{Seiler:2012wz}.
Moreover, it has been shown that boundary terms provide an estimate of the error of the Langevin process compared to the correct results.
The quantification of the
latter relies on the assumption that the boundary term
for the observables of interest is
maximal at $\tau = 0$. From the analysis of the FPE this
was shown to hold in the $U(1)$
one-plaquette model for some classes of observables.
The error estimation demands the calculation of `higher order' boundary terms which are numerically more expensive.
The usefulness of this measure is currently seen to depend on the model being simulated and is under further investigation.
There are indications that the aforementioned assumption for the $\tau = 0$ behaviour is not valid for HDQCD.
\begin{table}
\centering
\caption{Boundary terms and the error of the complex Langevin simulations for the spatial plaquette in an HDQCD simulation with volume $6^4$, $\mu=0.85$, $N_F=1$, and $\kappa=0.12$, as shown in~\cite{Scherzer:2019lrh}.}\label{tab.BT}
\begin{tabular}{ccc}
\hline\noalign{\smallskip}
$\beta$ & $B$ & CL error \\
\noalign{\smallskip}\hline\noalign{\smallskip}
$5.1$ & $-0.578(22)$ & $0.056729(28)$ \\
$5.5$ & $-0.2808(99)$ & $0.020075(24)$ \\
$5.8$ & $-0.03058(14)$ & $-0.004869(54)$ \\
$6.0$ & $-0.00378(49)$ & $-0.000639(25)$ \\
\noalign{\smallskip}\hline
\end{tabular}
\end{table}
Table~\ref{tab.BT} summarises results from complex Langevin simulations of HDQCD from~\cite{Scherzer:2019lrh}.
It is worth noticing that the magnitude of $B$ is typically an order of magnitude larger than the error across a wide range of inverse couplings. The errors were computed by comparing the CL results with reweighting simulations. Table~\ref{tab.XY} shows results for the three-dimensional XY model, obtained using the dual worldline formulation~\cite{Banerjee:2010kc} and
with the complex Langevin
method. The CL results were corrected using the boundary term analysis in~\cite{Scherzer:2019lrh}. The explicit computation of the boundary terms may hence serve to correct simulations with non-zero contribution from them.
In state-of-the-art full QCD simulations the boundary terms have to be monitored. In order to keep them small the CL trajectory is cut off when a prescribed threshold of the unitarity norm is reached\footnote{It has been recently shown in a study of 2D U($1$) gauge theory on a torus with a $\theta$-term that correct convergence can be obtained even when the unitarity norm is large~\cite{Hirasawa:2020bnl}. In that study, it was also found that observables only thermalise after the unitarity norm saturates.}~\cite{UBHD-68404032}. Moreover, decreasing $\beta$ can cause an increase in the boundary terms.
\begin{table}
\centering
\caption{Comparison of the worldline and the corrected complex Langevin simulations for the three-dimensional XY model, as shown in~\cite{Scherzer:2019lrh}.}\label{tab.XY}
\begin{tabular}{cccc}
\hline\noalign{\smallskip}
$\beta$ & $\mu^2$ & worldline & corrected CL \\
\noalign{\smallskip}\hline\noalign{\smallskip}
$0.2$ & $10^{-6}$ & $-0.062288(17)$ & $-0.06630(53)$ \\
$0.2$ & $0.1$ & $-0.062295(18) $ & $-0.06716(90)$ \\
$0.2$ & $0.2$ & $-0.062299(11) $ & $-0.0686(17)$ \\
\noalign{\smallskip}\hline\noalign{\smallskip}
$0.7$ & $10^{-6}$ & $-1.48219(35)$ & $-1.482283(34)$ \\
$0.7$ & $0.1$ & $-1.52398(35)$ & $-1.52399(72)$ \\
$0.7$ & $0.2$ & $-1.56641(20)$ & $-1.56476(48)$ \\
\noalign{\smallskip}\hline
\end{tabular}
\end{table}
\subsection{Dynamic stabilisation}
In ref.~\cite{Aarts:2016qrv} it was noticed that for some simulations the Langevin process would initially converge to the correct value (comparing with reweighting, when applicable) and then slowly drift to an incorrect one, despite the use of gauge cooling. The departure from the correct result always coincided with the increase in the distance of the field configurations from the unitary manifold. The method of dynamic stabilisation (DS) was then proposed. The idea is to add a term to the Langevin drift itself, which aims to
(a) be small in comparison to the drift originating from the action;
(b) vanish in the na\"ive continuum limit;
(c) affect only the non-compact directions of fields;
(d) be SU($3$) gauge invariant.
The additional force proposed in~\cite{Attanasio:2018rtq} is
\begin{equation}
K_{x,\mu} \to K_{x,\mu} + i \alpha_{\mathrm{DS}} M_x,
\end{equation}
where $M_x$ is proportional to a power of the unitarity norm.
This choice of force, however, is non-holomorphic and thus violates the criteria of correctness.
The real parameter $\alpha_{\mathrm{DS}}$ controls the strength of the DS term. Given the complexity of gauge theories, it is difficult to predict its effects on the CL simulations \textit{a priori}, except in two limiting cases: when $\alpha_{\mathrm{DS}}$ is very small, the DS term will have a negligible effect and the dynamics should remain essentially unchanged; conversely, for large values of the control parameter the DS force heavily suppresses excursions into the non-unitary directions of SL($3,\mathbb{C}$), thus effectively re-unitarising the theory. The optimal values for $\alpha_{\mathrm{DS}}$ are found in a region where expectation values of the observables are least sensitive to it. Calculating the boundary terms explicitly could provide a non-heuristic way of determining optimal values of $\alpha_{\mathrm{DS}}$.
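The structural effect of the DS force can be sketched on a toy complexified variable. The snippet below is our own illustration, not the SL($3,\mathbb{C}$) gauge force of~\cite{Attanasio:2018rtq}: the stand-in for the unitarity norm is simply the imaginary displacement $z_I$, and the values of $\alpha_{\mathrm{DS}}$ are arbitrary. It shows how a restoring term of the form $K \to K + i\,\alpha_{\mathrm{DS}} M$ suppresses excursions in the non-compact (imaginary) direction.

```python
import numpy as np

def run(alpha_ds, n_steps=100_000, eps=1e-3, seed=1):
    """Toy complex Langevin for S(z) = 0.5*(1+1j)*z**2 with an optional
    stabilising force acting only on the imaginary direction (an
    illustrative analogue of K -> K + i*alpha_DS*M, with M = -z_I)."""
    rng = np.random.default_rng(seed)
    z = 0.0 + 0.0j
    mean_abs_im = 0.0
    for _ in range(n_steps):
        K = -(1.0 + 1.0j) * z - 1j * alpha_ds * z.imag
        z = z + eps * K + np.sqrt(2.0 * eps) * rng.standard_normal()
        mean_abs_im += abs(z.imag)
    return mean_abs_im / n_steps

free = run(0.0)
stabilised = run(10.0)
# The DS-like term reduces the average imaginary excursion.
print(free, stabilised)
```

Scanning \texttt{alpha\_ds} in such a toy mimics the search for a plateau where observables are least sensitive to the control parameter.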
\begin{table}[]
\centering
\caption{Data comparing HMC vs. complex
Langevin (CL) simulations at vanishing chemical potential for two
observables: plaquette and chiral condensate $\overline{\psi} \psi$.
These simulations used four flavours of na\"ive staggered fermions with
$\beta = 5.6$ and quark mass $a\,m_q= 0.025$ as shown in~\cite{Attanasio:2018rtq}.} \label{tab.DS}
\begin{tabular}{ccccc}
\hline\noalign{\smallskip}
& \multicolumn{2}{c}{plaquette} &
\multicolumn{2}{c}{$\overline{\psi} \psi $} \\
Volume & HMC & CL & HMC & CL \\
\noalign{\smallskip}\hline\noalign{\smallskip}
$6^4$ & $0.58246(8)$ & $0.58245(1)$ & $0.1203(3)$ & $0.1204(2)$ \\
$8^4$ & $0.58219(4)$ & $0.58220(1)$ & $0.1316(3)$ & $0.1319(2)$ \\
$10^4$ & $0.58200(5)$ & $0.58201(4)$ & $0.1372(3)$ & $0.1370(6)$ \\
$12^4$ & $0.58196(6)$ & $0.58195(2)$ & $0.1414(4)$ & $0.1409(3)$ \\
\noalign{\smallskip}\hline
\end{tabular}
\end{table}
Agreement between the complex
Langevin method
and HMC at zero chemical potential can be found when dynamic stabilisation is used. Table~\ref{tab.DS} shows the accuracy that can be achieved. At vanishing chemical potential there is no sign problem, so that CL and HMC simulations should agree. However, the bi-linear noise scheme (see, e.g., \cite{Sexty:2013ica}) provides real drifts only on average. Thus a small source of complexity is always present, even though the theory itself is real. Without dynamic stabilisation this eventually leads to the failure of CL.
\subsection{Deformation technique}
When the quark chemical potential is sufficient for the formation of bound states, the fermion determinant exhibits zeros.
When the Langevin process explores regions around those zeros the drift becomes near-singular, leading to unstable simulations.
In ref.~\cite{Nagata:2018mkb} the fermion matrix is changed by the addition of a term $i\alpha \overline{\psi}(x)(\gamma_4 \otimes \gamma_4) \psi(x)$ to the Lagrangian density, where the $\gamma_4$'s act on spinor and flavour indices, respectively.
This deformation, for $\alpha$ large enough, moves the eigenvalue distribution of the fermion matrix away from zero.
Studies have been performed using a lattice of volume $4^3 \times 8$, coupling $\beta=5.7$, quark mass $am=0.05$, and chemical potential $0.4 \leq a\mu \leq 0.7$.
Histograms of the Langevin drift show a power-law behaviour for $\alpha < 0.3$, while the baryon number density and chiral condensate show a phase transition at $\alpha \sim 0.6$.
These observations indicate that an extrapolation from the deformed to the original manifolds should only use points simulated with $0.3 < \alpha < 0.6$.
Extrapolated values for the number density and chiral condensate simulated with CL were compared to RHMC simulations in the phase-quenched ensemble as a function of the chemical potential.
Both observables also show a steeper dependence on $\mu$ in the CL simulations, which is qualitatively consistent with the expectation that, in the thermodynamic limit at zero temperature, physical observables are independent of $\mu$ for $\mu < m_N/3$, for full QCD, or $\mu < m_\pi / 2$, in the phase-quenched case.
\subsection{QCD phase diagram}
Simulations of the QCD phase diagram with fully dynamical quarks are underway. Results have been reported for high~\cite{Sexty:2019vqx} and low~\cite{Tsutsui:2019suq,Ito:2018jpo,Kogut:2019qmi,Ito:2020mys} temperature regimes. The former two studies used four flavours of staggered quarks, while the latter used two flavours.
Additionally, a study of the deconfinement transition in QCD with heavy pions can be found in~\cite{Scherzer:2020kiu}, where two flavours of Wilson quarks have been considered.
All works have employed gauge cooling to reduce large explorations of non-unitary directions.
\subsubsection{High temperature}
In ref.~\cite{Sexty:2019vqx} the complex Langevin simulations were enhanced by stout smearing~\cite{Morningstar:2003gk} to smooth the gauge configurations. To achieve this, the smearing procedure had to be generalised to SL$(3,\mathbb{C})$ link variables. In addition, a tree-level Symanzik-improved gauge action was used for a volume of $16^3\times 8$. This allows a comparison between the standard Wilson plaquette action and the improved setup. The quarks are heavier than in nature to keep the simulations numerically feasible, resulting in a pion mass in the range of $500$ to $700\,$MeV. A comparison between a standard Taylor expansion and the complex Langevin simulation provides an additional consistency check. Figure~\ref{fig:Full} (left panel of figure 7 in~\cite{Sexty:2019vqx}) shows the pressure difference using a Taylor expansion up to 6th order as well as direct results from complex Langevin simulations.
\begin{figure}[]
\centering
\includegraphics[width=0.45\textwidth]{stoutDenes.pdf}
\caption{\label{plot.Full} The pressure difference as a function of the chemical potential as shown in~\cite{Sexty:2019vqx}. The simulation parameters are listed in the figure.}\label{fig:Full}
\end{figure}
The agreement is remarkably good, so that complex Langevin simulations can be used to determine thermodynamic quantities, especially at high temperatures.
\subsubsection{Low temperature}
In refs.~\cite{Ito:2018jpo,Tsutsui:2019suq,Ito:2020mys} the authors study a lower temperature setup with $\beta=5.7$ and $am=0.01$ for two lattice volumes: $8^3 \times 16$ and $16^3 \times 32$.
In order to ensure reliability of their results,
they employed the criterion based on the distribution of the Langevin drift~\cite{Nagata:2016vkn}, described at the end of section \ref{sec.corr}.
After checking the gauge and fermion drifts in both lattices for different values of the chemical potential, it has been determined that CL is expected to give correct results for $5.2 \leq \mu/T \leq 7.2$ for the smaller volume and $1.6 \leq \mu/T \leq 9.6$ for the larger one.
The observed plateau was understood qualitatively in terms of a picture of free fermions at zero temperature: due to the discreteness of the lattice momenta, $N_f N_c N_s$ is the maximum number of zero momentum quarks that can exist until the chemical potential is large enough to excite the first non-zero momentum states.
This interpretation is possible due to the smallness of the gauge coupling considered, making a picture of free fermions valid.
For a larger volume, they found that the plateau shifts to smaller values of $\mu$.
This is expected, as the discrete momentum states become closer.
In a complementary study of QCD at low temperature and finite chemical potential the authors of ref.~\cite{Kogut:2019qmi} used a setup of $\beta=5.6$ at volume $12^4$ and $\beta=5.7$ with volume $16^4$, both cases with quark mass $am=0.025$ and two flavours of staggered quarks for their CL simulations, augmented with gauge cooling.
They found that, in general, simulations at smaller coupling ($\beta=5.7$) produced results closer to the correct ones, when those were available, also reporting that the average unitarity norm decreases as the continuum limit is approached, in accordance with the findings of~\cite{Aarts:2016qrv}.
However, the expected transition from hadronic to nuclear matter at $\mu \approx m_N/3$, where $m_N$ is the nucleon mass, is not seen.
No signs of new, exotic phases of matter, such as colour superconductors, were found for $\mu \geq m_N/3$, and there is some indication of differences between the full theory and its phase-quenched approximation for $a\mu \geq 0.5$, although these differences are not large before saturation becomes dominant.
It is argued in ref.~\cite{Kogut:2019qmi} that the discrepancy between simulation results
and physical expectations could be due to a number of issues complex Langevin simulations face.
One possibility is that CL is known to converge to phase-quenched results in some random matrix theories~\cite{Bloch:2017sex}, although this has not been observed in HDQCD.
Another is that CL produces correct results for non-singular observables, but those used in ref.~\cite{Kogut:2019qmi} do have poles.
This work uses two flavours of rooted staggered quarks. In~\cite{Kogut:2007mz} it has been argued that rooting is only applicable in a small vicinity of the continuum limit.
Lastly, for small temperatures and finite $\mu$, the CL faces severe problems because the eigenvalues of the Dirac matrix approach zero. Therefore the matrix becomes ill-conditioned and standard algorithms such as the conjugate gradient method break down. The latter represents the backbone of calculating the fermionic drift force.
In ref.~\cite{Bloch:2017jzi} the method of selected inversion has been suggested. It is a purely algebraic technique based on the sparse LU decomposition where a subset of the elements of the inverse matrix is computed, thus making it cheaper than a full inversion.
\subsubsection{Deconfinement transition}
In ref.~\cite{Scherzer:2020kiu} the phase diagram of QCD in the $T$-$\mu$ plane was investigated with two flavours of Wilson quarks, for chemical potentials up to $\mu \sim 5T$ using CL.
The study was carried out with a relatively high pion mass of $m_\pi \approx 1.3$ GeV and different lattice sizes, and focused on determining the deconfinement phase transition line.
The traditional parametrisation of the critical temperature for small chemical potentials by a polynomial was used.
By analysing the Binder cumulant of the Polyakov loop and of its fluctuations, it was possible to determine the curvature $\kappa_2$ of the transition line.
It has also been noticed that $\kappa_2$ has a non-monotonic behaviour as a function of the quark mass.
Despite the transition being a smooth crossover for the parameters considered
the critical temperature can be determined relatively well using the Binder cumulant.
A summary of the results is shown in Table \ref{tab:deconf}.
\begin{table}[]
\centering
\caption{
The fitted curvature and $T_c(0)$ for two flavour Wilson fermions with $N_s = 12$ and $16$ using two different methods taken from~\cite{Scherzer:2020kiu}. The curvature has the form $T_c(\mu) = T_c(0) - \kappa_2\, \frac{9\mu^2}{T_c(0)}$. More details on the two methods can be found in section IIIA and IIIB of~\cite{Scherzer:2020kiu}.} \label{tab.Deconf}
\begin{tabular}{cccc}
\hline\noalign{\smallskip}
Method & $N_s$ & $\kappa_2$ & $T_c(0)$/MeV\\
\noalign{\smallskip}\hline\noalign{\smallskip}
fit B3 & $12$ & $0.001002(96)$ & $303(2)$ \\
shift B3 & $12$ & $0.001167(55) $& $297(3)$ \\
\noalign{\smallskip}\hline\noalign{\smallskip}
fit B3 & $16$ & $8.1(2.4)\times 10^{-4}$& $270(10)$ \\
shift B3 & $16$ & $0.001042(53)$ & $279(3)$\\
\noalign{\smallskip}\hline
\end{tabular}
\label{tab:deconf}
\end{table}
\section{Summary}
Recent developments on the complex Langevin method have shown promising progress. An important step has been to go beyond the analysis of the correctness criteria in simple models to a practical approach, which is applicable to field theories. With the determination of the boundary terms the error on the complex Langevin process can be estimated and, in some cases, compensated for. Challenges however remain in the development of this novel approach for full QCD.
From a different angle, dynamic stabilisation
provides a viable technique for applying CL in the
regions of lower temperatures and medium densities.
Additionally, analysis of boundary terms, both numerically and analytically, may be able to provide a consistent way of optimising the control parameter of DS.
First results for CL simulations of the phase diagram of full QCD have recently appeared.
Despite the lack of evidence for the expected transition from hadronic to nuclear matter at zero temperature, results at low, but finite, temperature are encouraging and can be understood physically.
The agreement with Taylor expansion at high temperatures is a major step towards the ultimate goal of simulating the QCD phase diagram using the complex Langevin method.
\section{Acknowledgments}
We are grateful for discussions and collaboration with Gert Aarts and Ion-Olimpiu Stamatescu. We thank D\'enes Sexty for providing one of the figures in this manuscript.
The work of F.A. was supported by US DOE Grant No. DE-FG02-97ER41014 MOD27.
\bibliographystyle{epj}
The study of the relation between the average degree of a graph and the existence of certain substructures, like minors or topological minors, has a long history. For example, in $1996$, Bollob\'as and Thomason \cite{bollobas1996highly} and, independently, Koml\'os and Szemer\'edi \cite{komlos1996topological} proved that any graph with average degree at least $ct^2$ (for a suitable constant $c$) must contain a subdivision of a clique on $t$ vertices. This is tight up to a constant factor; see K\"uhn and Osthus \cite{kuhn2006extremal} for a sharper bound on the required average degree (the best known lower bound is due to an observation of {\L}uczak).
The analogous statement for digraphs is false. Indeed, there are digraphs with arbitrarily large minimum in- and out-degree which do not contain a subdivision of a complete directed graph on three vertices (see discussion below). Here, a \emph{complete digraph} on $k$ vertices, denoted $\overrightarrow{K}_k$, is a digraph on $k$ vertices where between every two vertices there is an edge in both directions. Mader \cite{mader1996topological} asked whether large minimum out-degree guarantees the existence of a subdivision of a transitive tournament of given size, but it is still not known if this is true.
In this note, we consider a weakening of the concept of subdivisions, namely that of an \textit{immersion}. An \emph{immersion} of a (di)graph $H$ into a (di)graph $G$ is an injective mapping $f : V(H) \to V(G)$ and a collection of pairwise edge-disjoint (directed) paths $P_e$, one for each edge $e$ of $H$, such that the path corresponding to an edge $e = uv$ starts at $f(u)$ and ends at $f(v)$.
For undirected graphs, DeVos, Dvo{\v{r}}{\'a}k, Fox, McDonald, Mohar, and Scheide \cite{devos2014minimum} proved that average degree $200t$ guarantees an immersion of a clique on $t$ vertices. This was improved by Dvo\v{r}\'ak and Yepremyan \cite{dvovrak2018complete} to $11t + 7$ and by Liu, Wang, and Yang \cite{liu2020clique} to $(1 + o(1))t$ for $H$-free graphs, where $H$ is any fixed bipartite graph.
Recently, Lochet \cite{lochet2019immersion} proved that a digraph with high enough minimum out-degree contains an immersion of a transitive tournament. Nevertheless, there are digraphs with arbitrarily large minimum in- and out-degree which do not contain an immersion of $\overrightarrow{K}_3$ (see Mader \cite{mader1985degree}, who used a construction of Thomassen \cite{thomassen1985even} of a family of digraphs with arbitrarily large minimum out-degree with no even directed cycle; see also \cite{devos2012} for a different construction).
An \emph{Eulerian} digraph is a digraph where the in-degree of each vertex $u$ equals the out-degree of $u$.
In \cite{devos2012,devos2013note}, DeVos, McDonald, Mohar, and Scheide showed that every Eulerian digraph with minimum out-degree at least $t^2$ contains an immersion of a $\overrightarrow{K}_t$, and asked whether a linear lower bound on the minimum out-degree would suffice.
Our main theorem confirms their belief.
\begin{theorem} \label{thm:main}
There is a constant $\alpha > 0$ such that for every integer $t \ge 1$, every Eulerian digraph with minimum in-degree at least $\alpha t$ contains an immersion of $\overrightarrow{K}_t$.
\end{theorem}
\subsection{Overview of the proof}
Our proof consists of three key steps. First, we use a notion of sublinear expansion introduced by Koml\'os and Szemer\'edi \cite{komlos1994topological,komlos1996topological}, which played a key role in recent progress on several long-standing conjectures (see, e.g.\ \cite{fernandez2022nested,haslegrave2021extremal,kim2017proof,liu2017proof,liu2020solution}). Our proof is somewhat unusual in that it applies this notion of expansion to digraphs.
To do so, we adapt the notion to our setting and prove that under appropriate assumptions, we can find an immersion of an Eulerian multi-digraph which is an expander with suitable properties; see \Cref{sec:expanders}.
Next, we show that every such expander, with minimum in-degree $t$, immerses a simple digraph on $O(t)$ vertices with $\Omega(t^2)$ edges. Our proof of this is split into two lemmas, depending on the number of vertices in the expander; see \Cref{sec:find-dense-immersion}.
Putting these two steps together, along with an additional observation, implies that every Eulerian digraph with minimum in-degree $t$ immerses an Eulerian multi-digraph on $O(t)$ vertices which has $\Omega(t^2)$ edges, ignoring multiplicities.
The third and final step shows that such a digraph immerses a complete digraph on $\Omega(t)$ vertices. For this, we use the aforementioned result from \cite{devos2014minimum}, which shows that every graph with average degree $t$ immerses a complete graph on $\Omega(t)$ vertices; see \Cref{sec:immerse-clique}.
We leave the proof of Theorem~\ref{thm:main} to Section~\ref{sec: proofmain}, and mention a few open problems in \Cref{sec:conclusion}.
\section{Preliminary lemmas} \label{sec:prelims}
Recall that $\overrightarrow{K}_k$ is the complete digraph on $k$ vertices. We define $\overrightarrow{K}_{k,k}$ to be the digraph on $2k$ vertices that consists of two disjoint independent sets $A$ and $B$ of size $k$ and all edges from $A$ to $B$.
We denote the in-degree of a vertex $u$ by $d^-(u)$ and its out-degree by $d^+(u)$. We will often consider multi-digraphs, in which case $d^-(u)$ counts the number of in-neighbours of $u$ \emph{with multiplicities}. We denote the in-neighbourhood of $u$ by $N^-(u)$ and the out-neighbourhood of $u$ by $N^+(u)$. Note that $N^-(u)$ and $N^+(u)$ are sets, not multi-sets, so $|N^-(u)|$ counts the number of in-neighbours of $u$ after \emph{ignoring multiplicities}.
Given (multi-)(di)graphs $G$ and $H$, we say that $G$ \emph{immerses} $H$ if there is an injective mapping $f : V(H) \to V(G)$ and a collection of pairwise edge-disjoint paths $P_e$, for $e \in E(H)$, such that for an edge $e = uv$, the path $P_e$ starts at $u$ and ends at $v$. Equivalently, $G$ immerses $H$ if $H$ can be obtained from $G$ by performing a sequence of operations which take a (directed) path $xyz$ and replace its edges by the edge $xz$ (this operation is referred to as a \emph{split} and it is said that $y$ is \emph{split off}).
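The split operation can be made concrete with a short sketch (our own illustration, using a plain edge-multiplicity map rather than code from any cited work): the edges of a directed path $xyz$ are replaced by the single edge $xz$, and one can check that splits preserve the Eulerian property, which is used repeatedly below.

```python
from collections import Counter

def split(edges, x, y, z):
    """Replace the edges of the directed path x -> y -> z by x -> z.
    Edge multiplicities of the multi-digraph are stored in a Counter."""
    assert edges[(x, y)] > 0 and edges[(y, z)] > 0, "path xyz must exist"
    edges[(x, y)] -= 1
    edges[(y, z)] -= 1
    edges[(x, z)] += 1
    return +edges  # unary '+' drops entries whose multiplicity reached zero

def is_eulerian(edges):
    """Check that every vertex has equal in- and out-degree."""
    deg = Counter()
    for (u, v), m in edges.items():
        deg[u] += m   # out-degree counted positively,
        deg[v] -= m   # in-degree negatively
    return all(d == 0 for d in deg.values())

# Example: the directed triangle a -> b -> c -> a is Eulerian, and
# splitting off b leaves the Eulerian multi-digraph with edges a -> c -> a.
D = Counter({("a", "b"): 1, ("b", "c"): 1, ("c", "a"): 1})
assert is_eulerian(D)
D = split(D, "a", "b", "c")
```

Repeating such splits is exactly the second characterisation of immersion given above.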
We emphasise that when talking about a simple digraph, we mean a digraph in which there is at most one copy of $xy$ for every two vertices $x$ and $y$; in particular, a simple digraph may contain edges in both directions between a given pair of vertices.
Logarithms are always taken in base $2$. We drop rounding signs whenever they are not crucial.
We recall the following result, due to DeVos, Dvo{\v{r}}{\'a}k, Fox, McDonald, Mohar, and Scheide \cite{devos2014minimum}, about immersions of complete graphs in graphs with large minimum degree.
\begin{theorem}[\cite{devos2014minimum}] \label{thm:devos-et-al}
Every simple graph with minimum degree at least $200t$ contains an immersion of $K_t$.
\end{theorem}
The following is a simple lemma that allows us to restrict our attention to Eulerian multi-digraphs, even after taking immersions.
\begin{lemma} \label{lem:complete-to-eulerian}
Let $D$ be an Eulerian digraph that immerses a digraph $D'$. Then, $D$ immerses an Eulerian multi-digraph $D''$ on the same vertex set as $D'$ that contains $D'$ as a subdigraph.
\end{lemma}
\begin{proof}
For each edge $e = xy$ in $D'$ let $P(e)$ be a directed path from $x$ to $y$ in $D$ such that the paths $P(e)$ with $e \in E(D')$ are pairwise edge-disjoint; such paths exist by definition of immersion. Let $D_0$ be the multi-digraph obtained from $D$ by replacing $P(e)$ by $e$ for each edge $e$ in $D'$; then $D_0$ is Eulerian. Write $D_0' = D'$.
We define multi-digraphs $D_i$ and $D_i'$, for $i \ge 1$, such that $D_i$ is Eulerian and $D_{i-1}'\subseteq D_i' \subseteq D_i$, as follows.
Suppose that $D_1, \ldots, D_i$ have been defined.
If $D_i'$ is Eulerian, stop. Otherwise, let $x$ be a vertex in $D_i'$ whose out-degree is larger than its in-degree. Because $D_i$ is Eulerian and $D_i' \subseteq D_i$, there is a path $P$ in $D_i \setminus D_i'$ that ends at $x$ and starts at a vertex of $D'_i$ whose in-degree in $D_i'$ is larger than its out-degree. Let $P_i$ be a minimal subpath of $P$ that starts at a vertex with in-degree larger than out-degree, and ends at a vertex with out-degree larger than in-degree in $D_i'$; denote its start and end vertices by $x_i$ and $y_i$. Form $D_{i+1}$ by replacing $P_i$ by the edge $x_i y_i$ and form $D_{i+1}'$ by adding $x_iy_i$ to $D_i'$. Then $D_i' \subseteq D_{i+1}' \subseteq D_{i+1}$ and $D_{i+1}$ is Eulerian.
Note that the sum $\sum_{u \in V(D_i)} |d^+(u) - d^-(u)|$ decreases as $i$ increases. Thus for some $i$ this sum will be $0$ and the process will stop. Then $D_i'$ is an Eulerian multi-digraph that contains $D'$ as a subdigraph and is contained in $D$ as an immersion.
\end{proof}
The next lemma will allow us to focus on \emph{regular} Eulerian digraphs.
\begin{lemma} \label{lem:get-regular-Eulerian}
Let $D$ be a simple Eulerian digraph with minimum in-degree at least $2d$. Then either $D$ immerses a simple $2d$-regular Eulerian digraph, or it immerses $\overrightarrow{K}_{d,d}$.
\end{lemma}
\begin{proof}
Let $D'$ be a minimal (in terms of the number of edges) simple Eulerian digraph with minimum in-degree at least $2d$ which is immersed by $D$. If $D'$ is $2d$-regular, we are done, so suppose there is a vertex $u$ with in- and out-degree at least $2d+1$. Let $N^+ \subseteq N^+(u)$ and $N^- \subseteq N^-(u)$ be disjoint sets of size $d$. Suppose that there exist $x \in N^+$ and $y \in N^-$ such that $xy$ is not an edge in $D'$. Form $D''$ by removing the edges $xu$ and $uy$ from $D'$ and adding $xy$.
Then $D''$ is Eulerian, it has minimum in-degree at least $2d$, and it is immersed by $D'$, implying that it is immersed by $D$. Since $D''$ has fewer edges than $D'$, this is a contradiction to the minimality of $D'$.
It follows that $xy \in E(D')$ for every $x \in N^+$ and $y \in N^-$. Hence, $D'$ contains a copy of $\overrightarrow{K}_{d,d}$, implying that $D$ immerses $\overrightarrow{K}_{d,d}$, as required.
\end{proof}
\section{Expanders} \label{sec:expanders}
In this section we introduce several notions of expanders. These are variants of the notions of `sparse expanders' introduced by Koml\'os and Szemer\'edi \cite{komlos1994topological,komlos1996topological} and `robust expanders' introduced by Haslegrave, Kim and Liu \cite{haslegrave2021extremal}.
\subsection{Expanders in undirected graphs} \label{subsec:expanders-undirected}
For $t > 0$ let $\rho_{t}$ be the function defined as follows (when $t$ is clear from the context, we often omit the subscript).
\begin{equation*}
\rho_t(x) =
\left\{
\begin{array}{ll}
0 & \text{if $x < t$} \\
\frac{1}{256(\log (4x/t))^2} & \text{if $x \ge t$}.
\end{array}
\right.
\end{equation*}
Denote the average degree of a graph $G$ by $d(G)$.
A graph $G$ is called a \emph{$t$-edge-expander} if every subset $X \subseteq V(G)$ with $|X| \le |G|/2$ satisfies $e_G(X, X^c) \ge 32d(G) \cdot \rho_t(|X|) |X|$. Similarly, $G$ is called a \emph{robust $t$-vertex-expander} if every subset $X \subseteq V(G)$ with $|X| \le |G|/2$ and subgraph $F \subseteq G$ with $e(F) \le d(G)\rho_t(|X|)|X|$ satisfy $|N_{G \setminus F}(X)| \ge 2\rho_t(|X|)|X|$.
Haslegrave, Kim, and Liu \cite{haslegrave2021extremal} use a similar notion to the latter one (up to a different choice of constants); the former notion is more convenient for our application.
The following lemma is a variant of similar lemmas such as Lemma 2.3 in \cite{komlos1996topological} and Lemma 3.2 in \cite{haslegrave2021extremal}. We prove it in \Cref{appendix} for completeness. Our proof is similar to the proofs of the aforementioned lemmas and also draws inspiration from the proof of Lemma 2.7 in \cite{jiang2021rainbow}. We note that we do not require the third item, but we keep it for future reference.
\begin{lemma} \label{lem:find-expander}
Let $t > 0$ and let $G$ be a graph. Then there is a subgraph $H \subseteq G$ such that
\begin{itemize}
\item
$H$ has average degree at least $d(G)/2$ and minimum degree at least $d(G)/4$,
\item
$H$ is a $t$-edge-expander,
\item
$H$ is a robust $t$-vertex-expander.
\end{itemize}
\end{lemma}
\subsection{Expanders in Eulerian digraphs} \label{subsec:expanders-eulerian}
We now introduce analogous notions of expansion for digraphs (albeit with slightly different parameters).
For a (multi-)digraph $D$, denote by $d(D)$ the average degree of $D$, counting multiplicities and ignoring directions. In other words, $d(D) =
e(D)/|D|$.
Say that a multi-digraph $D$ is a \emph{directed $t$-edge-expander} if every subset $X \subseteq V(D)$ with $|X| \le |D|/2$ satisfies $e(X^c, X), e(X, X^c) \ge 4d(D)\rho_t(|X|)|X|$. Similarly, $D$ is a \emph{robust directed $t$-vertex-expander} if every subset $X \subseteq V(D)$ and subgraph $F$ with $e(F) \le d(D)\rho_t(|X|)|X|$ satisfy $|N^-_{D\setminus F}(X)|, |N^+_{D\setminus F}(X)| \ge \rho_t(|X|)|X|$.
The following lemma is an analogue of \Cref{lem:find-expander} for simple directed Eulerian graphs.
\begin{lemma} \label{lem:find-directed-expander}
Let $t > 0$ and let $D$ be a simple $d$-regular Eulerian oriented graph (so every vertex has in- and out-degree $d$). Then $D$ immerses an Eulerian multi-digraph $D'$ with the following properties.
\begin{itemize}
\item
The simple undirected graph obtained from $D'$ by ignoring directions and multiplicities has average degree at least $d/2$,
\item
$D'$ has minimum in- and out-degree at least $d/8$ (taking into account multiplicities),
\item
$D'$ is a directed $t$-edge-expander,
\item
$D'$ is a robust directed $t$-vertex-expander.
\end{itemize}
\end{lemma}
\begin{proof}
Let $G$ be the undirected graph obtained from $D$ by ignoring directions; so $d(G) \ge d$ (if $xy$ and $yx$ are both in $D$, then we count $xy$ only once in $G$). By \Cref{lem:find-expander}, there is a subgraph $G' \subseteq G$ with average degree at least $d(G)/2$ and minimum degree at least $d(G)/4$, which is a $t$-edge-expander. Let $D'$ be a subgraph of $D$ which is an orientation of $G'$. Apply \Cref{lem:complete-to-eulerian} to find an Eulerian multi-digraph $D''$ which contains $D'$ as a subgraph and is contained in $D$ as an immersion. Observe that $D''$ has maximum in- and out-degree at most $d$, implying $d(D'') \le 2d \le 2d(G)$. We show that $D''$ satisfies the required conditions.
The first item follows from $G'$ having average degree at least $d(G)/2 \ge d/2$.
Note that every vertex has either in- or out-degree at least $d(G)/8 \ge d/8$ in $D'$. Since $D''$ is an Eulerian digraph containing $D'$, the second item holds.
Let $X \subseteq V(D'')$ with $|X| \le |D''|/2$. Then $e_{G'}(X, X^c) \ge 32d(G')\rho(|X|)|X|$, so one of $e_{D'}(X, X^c)$ and $e_{D'}(X^c, X)$ is at least $16d(G')\rho(|X|)|X|$. Since $D''$ is an Eulerian digraph that contains $D'$, we have $e_{D''}(X, X^c) = e_{D''}(X^c, X) \ge 16d(G')\rho(|X|)|X| \ge 8d(G)\rho(|X|)|X| \ge 4d(D'')\rho(|X|)|X|$. This shows that $D''$ is a directed $t$-edge-expander, as required for the third item.
Let $F$ be a subgraph of $D''$ with $e(F) \le d(D'')\rho(|X|)|X|$ ($F$ can be a multi-digraph). Let $N = N^+_{D'' \setminus F}(X)$. Then $e_{D''}(X, N) \ge e_{D''} (X, X^c) - e(F) \ge 3d(D'')\rho(|X|)|X| > d \cdot \rho(|X|)|X|$, using $d(D'') \ge d(G') \ge d(G)/2 \ge d/2$.
Since $D''$ has maximum in-degree at most $d$, it follows that
\begin{equation*}
|N^+_{D'' \setminus F}(X)|
= |N|
\ge \frac{e_{D''}(X, N)}{d}
\ge \rho(|X|)|X|.
\end{equation*}
A symmetric argument shows $|N^-_{D'' \setminus F}(X)| \ge \rho(|X|)|X|$. This establishes vertex-expansion, as required for the fourth item.
\end{proof}
\subsection{Connecting sets in directed expanders} \label{subsec:expanders-connecting}
The following lemma proves that robust directed vertex-expanders possess the following property, similarly to their undirected versions: for every two relatively large sets, there is a short directed path joining the two sets and avoiding a small set of `forbidden' edges. The proof is simple and similar to its undirected analogue (see, e.g.\ \cite{komlos1996topological}). We include the proof in \Cref{appendix} for completeness.
\begin{lemma} \label{lem:short-paths-diam}
Let $D$ be a multi-digraph on $n$ vertices which is a robust directed $t$-vertex-expander, where $n \ge 2^8t$. Let $X$ and $Y$ be two sets of vertices, each of size at least $x$, where $x \ge t$, and let $F$ be a subgraph of $D$ with at most $d(D) \rho(x)x$ edges. Then there is a directed path from $X$ to $Y$ of length at most $1600 (\log (n/t))^3$ avoiding $F$.
\end{lemma}
\section{Immersions in Eulerian digraphs with high degree} \label{sec:find-dense-immersion}
Our aim in this section is to prove \Cref{thm:find-dense-immersion}, which states that simple Eulerian digraphs with minimum degree at least $ck$, for a large constant $k$, immerse a dense simple digraph on at least $k$ vertices. The main work goes into showing that directed expanders with suitable properties immerse a dense subgraph of $\overrightarrow{K}_{k,k}$. This is achieved in \Cref{lem:immersion-large-n}, which will be applied to relatively large expanders, and \Cref{lem:immersion-small-n}, which will be applied to smaller expanders.
\subsection{Immersions in large directed expanders} \label{subsec:immersion-large-expanders}
\begin{lemma} \label{lem:immersion-large-n}
Let $k \ge 1$ be a sufficiently large integer and let $n > 4 k(100k)^{(\log \log n)^6}$. Let $D$ be a multi-digraph with the following properties: it is a robust directed $k$-vertex-expander on $n$ vertices; it has maximum in- and out-degree at most $100k$; and the graph obtained from $D$ by ignoring directions and multiplicities has at least $100kn$ edges. Then $D$ immerses $\overrightarrow{K}_{k,k}$.
\end{lemma}
\begin{proof}
Let $G$ be the simple graph obtained from $D$ by ignoring directions and multiplicities, and let $D'$ be a subgraph of $D$ which is an orientation of $G$; then $e(D') = e(G) \ge 100kn$. We claim that $D'$ has at least $n/2$ vertices with out-degree at least $3k$. Indeed, otherwise $e(D') < 3k \cdot n + 100k \cdot \frac{n}{2} < 100kn$, a contradiction. Similarly, $D'$ has at least $n/2$ vertices with in-degree at least $3k$. Let $V^+$ be the set of vertices with out-degree at least $3k$ in $D'$ and let $V^-$ be the set of vertices in $D'$ with in-degree at least $3k$.
Set $r = (\log \log n)^6$. We claim that there are disjoint sets of vertices $X \subseteq V^+$ and $Y \subseteq V^-$ of size $k$ each such that any two distinct vertices in $X \cup Y$ are at distance at least $2r+1$ from each other in $G$. To see this we define vertices $x_1, \ldots, x_k \in V^+$ and $y_1, \ldots, y_k \in V^-$, as follows. Let $x_1$ be any vertex in $V^+$. Having defined $x_1, \ldots, x_{i-1}$, set $V^+_i = V^+ \setminus (B_G(x_1, 2r) \cup \ldots \cup B_G(x_{i-1}, 2r))$. Since $|B_G(x_j, 2r)| \le (100k)^{2r}$ for $j \in [i-1]$, we have $|V^+_i| \ge |V^+| - k(100k)^{2r} \ge \frac{n}{2} - k(100k)^{2r} > 0$. Let $x_i$ be any vertex in $V^+_i$. Define $y_1, \ldots, y_k$ similarly (we will need the inequality $\frac{n}{2} - 2k(100k)^{2r} > 0$).
Take $X = \{x_1, \ldots, x_k\}$ and $Y = \{y_1, \ldots, y_k\}$; then $X$ and $Y$ have the required property.
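The greedy choice of $x_1, \ldots, x_k, y_1, \ldots, y_k$ is a simple ball-discarding procedure. As an illustration (our own Python sketch, not part of the formal argument), the same procedure in an arbitrary graph reads:

```python
from collections import deque

def ball(adj, src, radius):
    """Vertices at graph distance <= radius from src (BFS in an undirected graph)."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        if dist[u] == radius:
            continue
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return set(dist)

def pick_separated(adj, candidates, k, r):
    """Greedily pick up to k candidates pairwise at distance >= 2r+1:
    after choosing x, discard everything in its ball of radius 2r."""
    chosen, available = [], set(candidates)
    while len(chosen) < k and available:
        x = min(available)  # any available candidate works
        chosen.append(x)
        available -= ball(adj, x, 2 * r)
    return chosen
```

On the path $0\text{--}1\text{--}\cdots\text{--}9$ with $r = 1$, the procedure returns $0, 3, 6$, which are pairwise at distance at least $3$.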
For $x \in X \cup Y$, denote $B(x) = B_G(x, r)$; so the sets $B(x)$ with $x \in X \cup Y$ are pairwise disjoint.
\begin{claim} \label{claim:expand-in-ball}
Let $x \in X$ and let $F$ be a subgraph of $D$ with maximum in- and out-degree at most $\frac{k}{(\log \log n)^3}$. Let $F'$ be another subgraph of $D$, with the following property: write $D' = D \setminus F$ and let $X_i$ be the set of vertices $u$ for which there is a directed path in $D'$ of length at most $i$ from $x$ to $u$; then $F'$ contains at most $ki$ edges with both ends in $X_i$.
Then $|X_r| \ge k(\log n)^6$.
\end{claim}
\begin{proof}
First note that $|X_1| \ge 3k - \frac{k}{(\log \log n)^3} - k \ge k$ (using $x \in V^+$).
We will prove the following.
\begin{equation} \label{eqn:statement}
\text{If $i \in [2,r]$ and $|X_i| \le k(\log n)^6$, then $|X_{i+1}| \ge |X_i|(1 + \rho(|X_i|))$.}
\end{equation}
Assuming \eqref{eqn:statement}, suppose for contradiction that $|X_r| \le k(\log n)^6$; then $|X_{i+1}| \ge |X_i|(1 + \rho(|X_i|))$ for $i \in [r]$. Using that $|X_1| \ge k$ and $\rho(|X_i|) \ge \rho(|X_r|) \ge \frac{1}{256(\log(4(\log n)^6))^2} \ge \frac{1}{(\log \log n)^3}$, we find that
\begin{equation*}
|X_r|
\ge k\left(1 + \frac{1}{(\log \log n)^3}\right)^r
\ge k\cdot \exp\left(\frac{r}{2(\log \log n)^3}\right)
> k \cdot \exp\left((\log \log n)^2\right)
> k(\log n)^6,
\end{equation*}
(recall that $r = (\log \log n)^6$), a contradiction.
We now turn to the proof of \eqref{eqn:statement}, which we prove by induction. Suppose that $i \in [2,r]$ and that $|X_{j+1}| \ge |X_j|(1 + \rho(|X_j|))$ for $j \in [2, i-1]$.
Let $F_i$ be the subgraph of $F$ consisting of edges that are incident with $X_i$; similarly, let $F_i'$ be the subgraph of $F'$ consisting of edges incident with $X_i$. Note that
\begin{equation*}
e(F_i)
\le \frac{k}{(\log \log n)^3} \cdot |X_i|
\le \frac{k \cdot \rho(k (\log n)^6)}{2} \cdot |X_i|
\le \frac{k \cdot \rho(|X_i|)|X_i|}{2}.
\end{equation*}
We now prove that $e(F_i') \le \frac{1}{2} k \rho(|X_i|)|X_i|$. By the assumption on $F'$, we have $e(F_i') \le k(i+1) \le 2ki$. It thus suffices to show $4i \le \rho(|X_i|)|X_i|$. Write $s = |X_i|/k$. Then, using that $|X_1| \ge k$ and that $|X_{j+1}| \ge |X_j|(1 + \rho(|X_j|))$ for $j \in [2, i-1]$,
\begin{equation*}
s = \frac{|X_i|}{k} \ge \frac{|X_i|}{|X_1|}
\ge \left(1 + \rho(|X_i|)\right)^{i-1}
\ge \left(1 + \frac{1}{256(\log (4s))^2}\right)^{i/2}
\ge \exp\left(\frac{i}{1024(\log (4s))^2}\right)
\end{equation*}
It follows that $4i \le 4096(\log (4s))^3$, implying that
\begin{equation*}
\rho(|X_i|)|X_i|
= \frac{sk}{256(\log(4s))^2}
\ge 4096(\log(4s))^3
\ge 4i,
\end{equation*}
using that $k$ is large.
Hence, indeed, $4i \le \rho(|X_i|)|X_i|$, as claimed. We thus have $e(F_i \cup F_i') \le k\cdot \rho(|X_i|)|X_i|$, so, by expansion, $|X_{i+1}| \ge |X_i|(1 + \rho(|X_i|))$, proving \eqref{eqn:statement}.
\end{proof}
Write $a = k^2$ and let $(x_1, y_1), \ldots, (x_a, y_a)$ be an ordering of the ordered pairs $(x, y)$ with $x \in X$ and $y \in Y$. We pick paths $P_1, \ldots, P_a$ as follows.
Suppose that $P_1, \ldots, P_{i-1}$ are defined.
We define subgraphs $F_{i, x_i}, F_{i, y_i}, F_{i, 0}, F_{i,1}, F_i \subseteq D$ as follows.
The edges of $F_{i, x_i}$ are those that appeared in a path $P_j$ with $j < i$ that starts at $x_i$. Similarly, the edges of $F_{i, y_i}$ are those that appeared in a path $P_j$ with $j < i$ that ends at $y_i$. The edges of $F_{i, 0}$ are those appearing in a path $P_j$ with $j < i$. Form $F_{i,1}$ by including all edges in $D$ that are incident to a vertex $u \notin B(x_i) \cup B(y_i)$ which has in- or out-degree at least $\frac{k}{(\log \log n)^3}$ in $F_{i,0}$. Finally, set $F_i = F_{i, x_i} \cup F_{i, y_i} \cup F_{i, 0} \cup F_{i, 1}$. We take $P_i$ to be a shortest directed path from $x_i$ to $y_i$ in $D \setminus F_i$; we will show that such a path exists and has length at most $(\log n)^4$.
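In essence, this is a greedy edge-disjoint linkage: each pair is joined by a shortest path avoiding all previously used edges. The following simplified Python sketch (ours; it omits the additional forbidden sets $F_{i,1}$, $F_{i,x_i}$, $F_{i,y_i}$, which the proof needs for the degree and expansion bookkeeping) illustrates the basic mechanism.

```python
from collections import deque
from itertools import product

def shortest_path_avoiding(arcs, forbidden, s, t):
    """BFS over directed arcs not in `forbidden`; returns the arc list of a
    shortest s -> t path, or None if no such path exists."""
    parent = {s: None}
    queue = deque([s])
    while queue:
        u = queue.popleft()
        if u == t:
            path = []
            while parent[u] is not None:
                path.append(parent[u])
                u = parent[u][0]
            return path[::-1]
        for (a, b) in arcs:
            if a == u and (a, b) not in forbidden and b not in parent:
                parent[b] = (a, b)
                queue.append(b)
    return None

def edge_disjoint_linkage(arcs, X, Y):
    """Greedily join every pair in X x Y by pairwise edge-disjoint directed
    paths, forbidding previously used arcs; when all pairs succeed, the
    union of the paths immerses the complete bipartite digraph."""
    used, paths = set(), {}
    for x, y in product(X, Y):
        p = shortest_path_avoiding(arcs, used, x, y)
        if p is None:
            return None
        used.update(p)
        paths[(x, y)] = p
    return paths
```

In a digraph with the arcs $0\to1\to3$ and $0\to2\to4$, the pairs $(0,3)$ and $(0,4)$ are joined by the two edge-disjoint paths, as expected.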
Suppose that $P_1, \ldots, P_{i-1}$ were chosen according to the above procedure, and have length at most $(\log n)^4$.
Write $x = x_i$, $y = y_i$, $F' = F_{i, x_i}$ and $F = F_i \setminus F'$. For $s \in [r]$ let $X_s$ be the set of vertices $u$ for which there is a path of length at most $s$ from $x$ to $u$ in $D \setminus F_i$. We claim that
\begin{itemize}
\item
the number of edges in $F'[X_s]$ is at most $ks$, for $s \in [r]$,
\item
the maximum in- and out-degree of $F[X_r]$ is at most $\frac{k}{(\log \log n)^3}$.
\end{itemize}
To prove the first item, fix $s \in [r]$ and consider a pair $(x_j, y_j)$, with $j < i$, such that $x_j = x$. Note that $F_j[X_s] \subseteq F_i[X_s]$. Indeed, since $X_s \subseteq B(x)$ we have $F_j[X_s] = (F_{j, x} \cup F_{j, 0})[X_s] \subseteq (F_{i, x} \cup F_{i, 0})[X_s] = F_i[X_s]$.
Let $u$ be the last vertex of $P_j$ in $X_s$ and let $P'$ be the subpath of $P_j$ that starts at $x$ and ends at $u$. Then $P'$ is a shortest path in $(D \setminus F_j)[X_s]$ from $x$ to $u$ (otherwise $P_j$ could be replaced by a shorter path, contrary to its choice). As $F_j[X_s] \subseteq F_i[X_s]$ and thus $(D \setminus F_i)[X_s] \subseteq (D \setminus F_j)[X_s]$, it follows that $P'$ is a shortest path in $((D \setminus F_i) \cup P')[X_s]$, implying that $P_j$ contains at most $s$ edges with both ends in $X_s$. Since there are at most $k$ values of $j$ with $j < i$ and $x_j = x$, the first item above holds.
We now prove the second item. Fix $u \in X_r$. Consider the largest $j$, with $j < i$, such that $u$ is in $P_j$ and $x_j \neq x$. By definition of $P_j$ and $F_{j,1}$ and the fact that $u \notin B(x_j) \cup B(y_j)$ this means that the in- and out-degrees of $u$ in $F_{j,0}$ are smaller than $\frac{k}{(\log \log n)^3}$. It follows that $u$ has in- and out-degree at most $\frac{k}{(\log \log n)^3}$ in $F[X_r]$, as required.
Having proved the two items above, \Cref{claim:expand-in-ball} implies that $|X_r| \ge k(\log n)^6$. A symmetric argument implies that the set $Y_r$, of vertices $u$ for which there is a directed path in $D \setminus F_i$ from $u$ to $y$ of length at most $r$, has size at least $k(\log n)^6$.
To complete the proof we need an upper bound on $e(F_i)$. Recall that the paths $P_1, \ldots, P_{i-1}$ have length at most $(\log n)^4$ each. Thus $e(F_{i, 0}) \le k^2(\log n)^4$. By choice of $F_{i,1}$ and the assumption that the maximum in- and out-degrees of $D$ are at most $100k$, it follows that
\begin{equation*}
e(F_i)
\,\le\, \frac{e(F_{i,0}) \cdot 200k}{k/(\log \log n)^3}
\le k^2(\log n)^4 \cdot 200(\log \log n)^3
\le k^2 (\log n)^5,
\end{equation*}
using that $n$ is large. Let $X_r'$ and $Y_r'$ be subsets of $X_r$ and $Y_r$, respectively, of size exactly $k(\log n)^6$.
Then
\begin{equation*}
d(D) \cdot \rho(|X_r'|)|X_r'|
\ge k \cdot \frac{1}{256(\log(4(\log n)^6))^2} \cdot k(\log n)^6
\ge k^2(\log n)^5 \ge e(F_i).
\end{equation*}
By \Cref{lem:short-paths-diam}, there is a path in $D \setminus F_i$ from $X_r'$ to $Y_r'$ of length at most $1600(\log n)^3$. It follows that there is a path from $x$ to $y$ in $D \setminus F_i$ whose length is at most $1600(\log n)^3 + 2r \le (\log n)^4$. This means that $P_i$ can be chosen appropriately and has length at most $(\log n)^4$, as claimed, for $i \in [a]$.
The paths $P_1, \ldots, P_a$ are pairwise edge-disjoint and they join each of the pairs $(x, y)$ with $x \in X$ and $y \in Y$. In particular, the union $P_1 \cup \ldots \cup P_a$ is an immersion of $\overrightarrow{K}_{k,k}$.
\end{proof}
\subsection{Immersions in small expanders} \label{subsec:immersion-small-expanders}
\begin{lemma} \label{lem:immersion-small-n}
Let $n, k \ge 1$ be integers such that $k$ and $n/k$ are sufficiently large and $k \ge (\log (n/k))^7$. Let $D$ be a multi-digraph with the following properties:
it is a robust directed $k$-vertex-expander on $n$ vertices; it has maximum in- and out-degree at most $100k$; and the graph obtained from $D$ by ignoring directions and multiplicities has at least $100kn$ edges. Then $D$ immerses a subgraph of $\overrightarrow{K}_{2k,2k}$ with at least $k^2/2$ edges.
\end{lemma}
\begin{proof}
Write $\ell = n/k$; so $\ell$ is large and $k \ge (\log \ell)^7$.
Write $a = \min\{k, \frac{\ell}{(\log \ell)^8}\}$ and $b = \ceil{k/a}$; so $k \le ab \le 2k$.
\begin{claim}
There are sets of vertices $U^+_1, \ldots, U^+_b, U^-_1, \ldots, U^-_b$, and $W(u)$ for $u \in \bigcup_i(U^+_i \cup U^-_i)$, with the following properties.
\begin{itemize}
\item
The sets $U^+_1, \ldots, U^+_b, U^-_1, \ldots, U^-_b$ are pairwise disjoint sets of size $a$ each. Set $U^{\sigma} := U^{\sigma}_1 \cup \ldots \cup U^{\sigma}_b$ for $\sigma \in \{+, -\}$ and $U := U^+ \cup U^-$.
\item
The sets $W(u)$, with $u \in U^{\sigma}_i$, are pairwise disjoint, for $i \in [b]$ and $\sigma \in \{+, -\}$.
\item
$W(u)$ is a subset of $N^{\sigma}(u)$ of size $20k$, for $u \in U^{\sigma}$.
\end{itemize}
\end{claim}
\begin{proof}
For some $i \in [b]$, suppose that $U^+_1, \ldots, U^+_{i-1}, U^-_1, \ldots, U^-_{i-1}$ and $W(u)$, for $u \in U_{< i}$, satisfy the above properties, where $U_{< i} = \bigcup_{j < i}(U^+_j \cup U^-_j)$. We will show how to obtain sets $U^+_i, U^-_i$ and $W(u)$, for $u \in U^+_i \cup U^-_i$, that together with the previously defined sets satisfy the above properties.
Let $D' = D \setminus U_{< i}$. We will pick distinct vertices $u_1, \ldots, u_a$ in $D'$ and sets of vertices $W_1, \ldots, W_a$ that are pairwise disjoint sets of size $20k$, such that $W_j$ is a set of out-neighbours of $u_j$ in $D'$, for $j \in [a]$.
Suppose that $u_1, \ldots, u_{j-1}$ and $W_1, \ldots, W_{j-1}$ are defined and satisfy the requirements, for some $j \in [a]$. We will show that a vertex $u_j$ and set $W_j$ with the required properties can be found. To see this, set $W_{< j} = \bigcup_{s < j} W_s$ and $D'' = D' \setminus (\{u_1, \ldots, u_{j-1}\} \cup W_{< j})$.
Note that $|U_{< i} \cup \{u_1, \ldots, u_{j-1}\}| \le 2ab \le 4k \le n/8$ and $|W_{< j}| \le a \cdot 20k \le \frac{20n}{(\log \ell)^8} \le n/8$. It follows that $D''$ is obtained from $D$ by the removal of at most $n/4$ vertices.
Denote by $G$ the simple graph obtained from $D$ by ignoring directions and multiplicities, and let $G''$ be obtained similarly from $D''$. By assumption, $G$ has at least $100kn$ edges and maximum degree at most $200k$. It follows that $e(G'') \ge e(G) - \frac{n}{4} \cdot 200k \ge 50 k n$. Let $H''$ be an orientation of $G''$ which is a subdigraph of $D''$. Then $H''$ has average out-degree at least $50k$, implying that there is a vertex $u_j$ in $D''$ whose out-degree in $H''$ is at least $50k$. Let $W_j$ be a subset of the out-neighbourhood of $u_j$ in $H''$ of size $20k$. This shows that vertices $u_1, \ldots, u_a$ and sets $W_1, \ldots, W_a$ as above exist.
A similar argument shows that there exist vertices $u_1', \ldots, u_a'$ and sets $W_1', \ldots, W_a'$, all in $D' \setminus (\{u_1, \ldots, u_a\} \cup W_{\le a})$, such that the sets $W_1, \ldots, W_a, W_1', \ldots, W_a'$ are pairwise disjoint sets of size $20k$ and $W_j'$ is a set of in-neighbours of $u_j'$ for $j \in [a]$. Take $U^+_i = \{u_1, \ldots, u_a\}$ and $U^-_i = \{u_1', \ldots, u_a'\}$, and set $W(u_j) = W_j$ and $W(u_j') = W_j'$ for $j \in [a]$.
\end{proof}
Let $U^+_1, \ldots, U^+_b, U^-_1, \ldots, U^-_b$ and $W(u)$, for $u \in U$, where $U = \bigcup_{i \in [b]}(U^+_i \cup U^-_i)$, be as in the claim above.
Note that $|U| = 2ab \le 4k$.
For $u \in U$, let $W'(u)$ be a subset of $W(u) \setminus U$ of size $10k$.
For $u \in U^+$, let $F(u)$ be the set of edges in $D$ that touch $u$ but are not of the form $uv$ with $v \in W(u)$, and for $u \in U^-$, let $F(u)$ be the set of edges in $D$ that touch $u$ but are not of the form $vu$ with $v \in W(u)$. Let $F_0$ be the union of the sets $F(u)$ with $u \in U$. Then $|F_0| \le 200k \cdot 2ab \le 800k^2$.
Let $M_1, \ldots, M_{ab}$ be a collection of perfect matchings in $U^+ \times U^-$ that partition $U^+ \times U^-$ and such that $M_i[U^+_j, U^-_{j'}]$ is either empty or a perfect matching for every $j, j' \in [b]$. We will find collections of paths $\mathcal{P}_1, \ldots, \mathcal{P}_{ab}$ as follows. Let $F_i$ be the union of $F_0$ with the edges that appear in one of the paths in $\mathcal{P}_j$ with $j < i$. Take $\mathcal{P}_i$ to be a maximal collection of pairwise edge-disjoint paths of length at most $(\log \ell)^5$ joining pairs of vertices in $M_i$, each joining a different pair. We claim that $|\mathcal{P}_i| \ge k/2$ for $i \in [ab]$. Observe that this would complete the proof of the lemma: the paths in $\mathcal{P}_1, \ldots, \mathcal{P}_{ab}$ are pairwise edge-disjoint and join at least $ab \cdot k/2 \ge k^2/2$ distinct pairs in $U^+ \times U^-$, so their union immerses a subgraph of $\overrightarrow{K}_{2k,2k}$ with at least $k^2/2$ edges (recall that $|U^+| = |U^-| = ab \le 2k$).
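Such matchings $M_1, \ldots, M_{ab}$ indeed exist: shift by $p$ inside the blocks and by $q$ across the blocks, over all $p \in [a]$ and $q \in [b]$. A Python sketch of this construction (illustrative only, with $0$-based indices):

```python
def block_matchings(a, b):
    """Partition the index pairs [ab] x [ab] into ab perfect matchings
    M_{p,q} such that the restriction of each matching to any pair of
    blocks (consecutive intervals of size a) is empty or a perfect
    matching: shift by p inside blocks and by q across blocks."""
    out = []
    for q in range(b):
        for p in range(a):
            out.append([(j * a + t, ((j + q) % b) * a + (t + p) % a)
                        for j in range(b) for t in range(a)])
    return out
```

Every pair $(x, y)$ is covered exactly once, by the matching with $q \equiv \lfloor y/a \rfloor - \lfloor x/a \rfloor$ and $p \equiv y - x \pmod a$.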
To see that $|\mathcal{P}_i| \ge k/2$, fix $i \in [ab]$ and suppose for contradiction that $|\mathcal{P}_i| < k/2$. Let $M_i'$ be the submatching of $M_i$ consisting of pairs that are not joined by a path in $\mathcal{P}_i$.
Then $|M_i'| \ge k/2$, hence there exist $j, j' \in [b]$ such that $M_i[U^+_j, U^-_{j'}]$ is a perfect matching and $|M_i'[U^+_j, U^-_{j'}]| \ge a/2$. Denote $M' = M_i'[U^+_j, U^-_{j'}]$, let $X = V(M') \cap U^+_j$ and $Y = V(M') \cap U^-_{j'}$, and set $D' = D \setminus F_{i+1}$. For a vertex $u$ in $D'$ and integer $i$, let $B^+_i(u)$ be the set of vertices $v$ for which there is a directed path of length at most $i$ from $u$ to $v$ in $D'$, and let $B^-_i(u)$ be the set of vertices $v$ for which there is a directed path of length at most $i$ from $v$ to $u$ in $D'$.
Set $r = (\log \ell)^4$.
We claim that for every $(x, y) \in M'$ either $|B^+_r(x)| \le k (\log \ell)^7$ or $|B^-_r(y)| \le k (\log \ell)^7$. Indeed, suppose that $|B^+_r(x)|, |B^-_r(y)| \ge k(\log \ell)^7$ and let $X'$ and $Y'$ be subsets of $B^+_r(x)$ and $B^-_r(y)$, respectively, of size $k(\log \ell)^7$. Observe that $|F_{i+1}| \le d(D) \rho(|X'|) |X'|$. Indeed, this follows from the next two inequalities (using that the paths in $\mathcal{P}_j$ have length at most $(\log \ell)^5$).
\begin{align} \label{eqn:F}
\begin{split}
& |F_{i+1}| \le |F_0| + (ab)^2 \cdot (\log \ell)^5 \le 800k^2 + 4k^2 (\log \ell)^5 \le k^2(\log \ell)^6, \\
& d(D) \rho(|X'|) |X'|
\ge k \cdot \frac{1}{256(\log(4 (\log \ell)^7))^2} \cdot k (\log \ell)^7 \ge k^2 (\log \ell)^6.
\end{split}
\end{align}
So, by \Cref{lem:short-paths-diam}, there is a directed path of length at most $1600(\log \ell)^3$ from $X'$ to $Y'$ in $D'$, showing that there is a path from $x$ to $y$ in $D'$ whose length is at most $1600(\log \ell)^3 + 2r \le (\log \ell)^5$, a contradiction to the maximality of $\mathcal{P}_i$.
Hence, either $|B^+_r(x)| \le k (\log \ell)^7$ for at least $a/4$ values of $x$ in $X$ or $|B^-_r(y)| \le k (\log \ell)^7$ for at least $a/4$ values of $y$ in $Y$. Without loss of generality, we assume the former.
Let $X_0$ be the set of vertices $x$ in $X$ for which $|B^+_r(x)| \le k (\log \ell)^7$; so $|X_0| \ge a/4$. Define $X_i = \bigcup_{x \in X_0} B^+_i(x)$. Since each $x \in X_0$ has no in-neighbours in $D \setminus F_0$, at most $ab$ of the paths in $\mathcal{P}_1 \cup \ldots \cup \mathcal{P}_i$ contain an edge touching $x$, implying that all but at most $ab \le 2k$ vertices of $W(x)$ are in $X_1$, for $x \in X_0$. Since the sets $W(x)$ are pairwise disjoint, we have $|X_1| \ge (a/4) \cdot (|W(x)| - 2k) \ge 2ak \ge \min\{k^2, \frac{2n}{(\log \ell)^8}\} \ge k (\log \ell)^{7}$ (using that $k \ge (\log \ell)^7$ and that $\ell$ is large). As in \eqref{eqn:F}, we have $|F_{i+1}| \le d(D) \rho(|X_1|)|X_1| \le d(D) \rho(|X_j|)|X_j|$ for $j \ge 1$. Thus, by expansion, if $|X_j| \le n/2$ then $|X_{j+1}| \ge (1 + \rho(|X_j|))|X_j| \ge (1 + \rho(n))|X_j|$. Suppose that $|X_r| \le n/2$. Then
\begin{align*}
|X_{r+1}| \ge (1 + \rho(n))^r |X_1|
& = \left(1 + \frac{1}{256(\log (4n/k))^2}\right)^r \cdot |X_1| \\
& \ge \left(1 + \frac{2}{(\log \ell)^3}\right)^r \cdot |X_1|
\ge \exp\left( \frac{r}{(\log \ell)^3} \right) k > k\ell = n,
\end{align*}
(recalling that $r = (\log \ell)^4)$), a contradiction. Hence $|X_r| \ge n/2 \ge a \cdot k(\log \ell)^7$ (using $ak \le \frac{n}{(\log \ell)^8}$ and that $\ell$ is large). By definition of $X_r$, it follows that $|B^+_r(x)| \ge k (\log \ell)^7$ for some $x \in X_0$, a contradiction.
\end{proof}
\subsection{Immersions in Eulerian digraphs with high degree} \label{subsec:find-dense-immersion}
\begin{theorem} \label{thm:find-dense-immersion}
There exists a constant $\beta > 0$ such that for every large enough integer $k$ the following holds: every (simple) Eulerian digraph with minimum in- (and out-) degree at least $100 k$ immerses a simple digraph with at most $\beta k$ vertices and at least $k^2/2$ edges.
\end{theorem}
\begin{proof}
Let $\beta \ge 100$ be a sufficiently large constant so that \Cref{lem:immersion-small-n} holds when $n/k \ge \beta$ (and $k$ is large and $k \ge (\log(n/k))^7$). Let $k$ be a large enough integer, and let $D$ be an Eulerian digraph with minimum in-degree at least $100k$. By
\Cref{lem:get-regular-Eulerian}, $D$ immerses either $\overrightarrow{K}_{50k, 50k}$ or a simple $100k$-regular Eulerian digraph (meaning that all in- and out-degrees are $100k$). If the former holds, we are done, so suppose that the latter holds, and let $D'$ be a simple Eulerian $100k$-regular digraph immersed by $D$. Now apply \Cref{lem:find-directed-expander} to find a multi-digraph $D''$ immersed by $D'$ with the following properties: $D''$ is a robust directed $k$-vertex-expander; and the simple graph obtained from $D''$ by ignoring directions and multiplicities has at least $100kn$ edges. Observe that, by virtue of being an immersion of a $100k$-regular digraph, $D''$ has maximum in- and out-degree at most $100k$. We consider three cases: $n \le \beta k$; $n \ge \beta k$ and $k \ge (\log(n/k))^7$; and $n \ge 4k(100k)^{(\log \log n)^6}$. It is not hard to see that at least one of these three cases holds. Indeed, it suffices to show that if $k \le (\log (n/k))^7$ then $n \ge 4k(100k)^{(\log \log n)^6}$. The condition on $k$ implies $k \le (\log n)^7$, showing
\begin{equation*}
\log\left(4k(100k)^{(\log \log n)^6}\right)
\le 2 + \log k + (\log \log n)^6 \cdot \log (100k)
\le (\log \log n)^8
\le \log n,
\end{equation*}
(using that $n$ is large, which follows from $k$ being large), as required.
If $n \le \beta k$, then $D''$ is a graph on at most $\beta k$ vertices with at least $100kn \ge 5000 k^2$ edges, ignoring directions and multiplicities (using that $n \ge 50k$, which follows as the simple graph obtained by removing directions and multiplicities has average degree at least $50k$). If $n \ge \beta k$ and $k \ge (\log (n/k))^7$, by \Cref{lem:immersion-small-n}, $D''$ immerses a subgraph of $\overrightarrow{K}_{2k,2k}$ with at least $k^2/2$ edges. Finally, if $n \ge 4k(100k)^{(\log \log n)^6}$, then by \Cref{lem:immersion-large-n}, $D''$ immerses $\overrightarrow{K}_{k,k}$. Either way, $D''$ immerses a simple digraph on at most $\beta k$ vertices and with at least $k^2/2$ edges.
\end{proof}
\section{Immersing a large complete digraph} \label{sec:immerse-clique}
Our aim in this section is to prove the following theorem, which shows that a dense Eulerian multi-digraph immerses a large complete digraph.
\begin{theorem} \label{thm:from-dense-to-complete}
Let $D$ be an Eulerian multi-digraph on $n$ vertices whose underlying simple graph, obtained from $D$ by ignoring directions and multiplicities, has minimum degree at least $\alpha n$. Then $D$ immerses $\overrightarrow{K}_s$, where $s = 10^{-9} \alpha^4 n$.
\end{theorem}
An important step in the proof of \Cref{thm:from-dense-to-complete} is to find short directed cycles in a graph with large minimum out-degree. To realise this step, we use the next lemma which finds short directed cycles in simple weighted digraphs with large minimum degree.
A \emph{weighted digraph} is a digraph $D$ equipped with a weight function $\omega : V(D) \to \mathbb{R}^{\ge 0}$. Given a weighted digraph $D$ with weight function $\omega$ and a subset $U \subseteq V(D)$, denote $\omega(U) := \sum_{u \in U} \omega(u)$.
\begin{lemma} \label{lem:short-cycle-weighted}
Let $D$ be a weighted simple digraph (bi-directed edges are allowed) with weight function $\omega$, satisfying $\omega(N^+(u)) \ge \alpha \cdot \omega(V(D))$ for every vertex $u$. Then, there is a directed cycle of length at most $4\alpha^{-1}$.
\end{lemma}
\begin{proof}
Let $U \subseteq V(D)$ be a minimal non-empty set satisfying that $\omega(N^+(u) \cap U) \ge \alpha \cdot \omega(U)$ for every $u \in U$; such a set exists, as $U = V(D)$ satisfies the condition. By possibly re-scaling $\omega$, we may assume that $\omega(U) = 1$.
Write $D' = D[U]$. For a vertex $u \in U$ denote by $N^+_i(u)$ the set of vertices reachable from $u$ by a directed path of length at most $i$ in $D'$; define $N^-_i(u)$ similarly.
Write $\ell = 2 \alpha^{-1}$.
We claim that $\omega(N^+_{\ell}(u)) \ge 2/3$ for every $u \in U$. Indeed, suppose that $\omega(N^+_{\ell}(u)) < 2/3$. Then some $i \in [\ell-1]$ satisfies $\omega(N^+_{i+1}(u) \setminus N^+_i(u)) \le \frac{2}{3\ell} \le \frac{\alpha}{3}$. This implies that for every vertex $v$ in $N^+_i(u)$ the following holds: $\omega(N^+(v) \cap N^+_i(u)) \ge \frac{2\alpha}{3} \ge \alpha \cdot \omega(N^+_i(u))$, contradicting the minimality of $U$.
Next, we claim that there is a vertex $u$ for which $\omega(N^-_{\ell}(u)) \ge 2/3$. Indeed, let $H$ be the weighted digraph on $U$ with weight function $\omega$, where $xy$ is an edge whenever there is a directed path of length at most $\ell$ in $D'$ from $x$ to $y$. Then $\omega(N^+_H(u)) \ge 2/3$ for every $u \in H$. We do a double counting as follows.
\begin{equation*}
\sum_{x \in U} \omega(x) \omega(N^-_H(x))
= \sum_{yx \in E(H)} \omega(x) \omega(y)
= \sum_{y \in U} \omega(y) \omega(N^+_H(y))
\ge \frac{2}{3} \cdot \sum_{y \in U}\omega(y)
= 2/3.
\end{equation*}
It follows that there is a vertex $u$ with $\omega(N^-_H(u)) \ge 2/3$; equivalently, $\omega(N^-_{\ell}(u)) \ge 2/3$, as claimed.
Let $u$ satisfy $\omega(N^-_{\ell}(u)) \ge 2/3$. Since $\omega(N^+_{\ell}(u)) \ge 2/3$, there is a vertex $v$ such that $v \in N^+_{\ell}(u) \cap N^-_{\ell}(u)$. Hence, there is a closed directed walk of length at most $2\ell$, implying the existence of a directed cycle of length at most $2\ell = 4\alpha^{-1}$.
\end{proof}
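At its core, the proof finds a vertex $u$ whose forward and backward balls of radius $\ell$ intersect outside $u$, yielding a closed walk, and hence a cycle, of length at most $2\ell$. The following Python sketch (ours, ignoring the weights and the degenerate cases) illustrates this test.

```python
from collections import deque

def bounded_ball(adj, src, depth):
    """Vertices reachable from src by a directed path of length <= depth (BFS)."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        if dist[u] == depth:
            continue
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return set(dist)

def has_short_cycle_through(adj, radj, u, ell):
    """True iff some v != u lies in both N^+_ell(u) and N^-_ell(u); then u lies
    on a closed directed walk of length <= 2*ell, which contains a directed
    cycle of length <= 2*ell. `radj` is the reversed adjacency of `adj`."""
    fwd = bounded_ball(adj, u, ell)   # forward ball N^+_ell(u)
    bwd = bounded_ball(radj, u, ell)  # backward ball N^-_ell(u)
    return len((fwd & bwd) - {u}) > 0
```

A directed triangle passes the test with $\ell = 2$, while a directed path (acyclic) does not.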
Next, we leverage \Cref{lem:short-cycle-weighted} to find directed cycles with few simple edges in digraphs with large minimum out-degree.
\begin{lemma} \label{lem:short-cycle}
Let $\alpha \in (0,1)$ and let $D$ be a multi-digraph (with no loops) on $n$ vertices with minimum out-degree at least $\alpha n$. Then there is a directed cycle with at most $4\alpha^{-1}$ simple edges.
\end{lemma}
\begin{proof}
Let $D'$ be the simple digraph on $V(D)$ where $xy$ is an edge whenever there are at least two directed edges in $D$ from $x$ to $y$. Let $X$ be the set of vertices with out-degree $0$ in $D'$.
\begin{claim}
Either $D'$ contains a cycle or there is a partition $\{U(x) : x \in X\}$ of $V(D)$ such that $x \in U(x)$ and $x$ can be reached from each vertex in $U(x)$ in $D'$, for every $x \in X$.
\end{claim}
\begin{proof}
If $D'$ contains a directed cycle, we are done. We may therefore assume that this is not the case.
Write $X = \{x_1, \ldots, x_m\}$.
Define subsets $U_1, \ldots, U_m \subseteq V(D')$ as follows. Suppose that $U_1, \ldots, U_{i-1}$ are defined. Let $D_i' = D' \setminus (U_1 \cup \ldots \cup U_{i-1})$ and let $U_i$ be the set of vertices $u$ for which there is a directed path from $u$ to $x_i$ in $D_i'$. Set $U(x_i) = U_i$. We claim that the collection $\{U(x): x \in X\}$ satisfies the requirements of the claim.
First note that $x_i \in U_i$ for $i \in [m]$ (because there is no directed path in $D'$ between two distinct vertices in $X$, by choice of $X$).
Next, we show that $\{U_1, \ldots, U_m\}$ is a partition of $V(D')$. Indeed, clearly the sets $U_1, \ldots, U_m$ are pairwise disjoint. Now consider $u \in V(D')$. It is easy to see that there is a directed path from $u$ to $X$ (consider a maximal path from $u$ in $D'$: it must end in a vertex of out-degree $0$, since there are no directed cycles). Let $i$ be minimal such that there is a path $P$ from $u$ to $x_i$. Then none of the vertices in $P$ are in $U_1 \cup \ldots \cup U_{i-1}$ (by minimality of $i$). It follows that $x_i$ can be reached from $u$ in $D_i'$, implying that $u \in U_i$. So $V(D) = U_1 \cup \ldots \cup U_m$.
\end{proof}
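The construction of the partition $\{U(x) : x \in X\}$ is an iterated backward-reachability computation over the sinks. The following Python sketch (with our own, $0$-based conventions; illustrative only) implements it for an acyclic digraph.

```python
def sink_partition(vertices, arcs):
    """Partition the vertex set of an acyclic digraph into sets U(x), one per
    sink x: processing the sinks in a fixed order, U(x) collects the vertices
    that can still reach x after the previously assigned sets were deleted."""
    radj = {v: [] for v in vertices}
    for (a, b) in arcs:
        radj[b].append(a)
    sinks = [v for v in vertices if all(a != v for (a, _) in arcs)]
    assigned, parts = set(), {}
    for x in sinks:
        # Backward DFS from x, never entering already-assigned vertices.
        stack, reach = [x], set()
        while stack:
            v = stack.pop()
            if v in reach or v in assigned:
                continue
            reach.add(v)
            stack.extend(radj[v])
        parts[x] = reach
        assigned |= reach
    return parts
```

For the DAG with arcs $0\to2$, $1\to2$, $1\to3$ the sinks are $2$ and $3$; processing $2$ first absorbs $0$ and $1$, leaving $\{3\}$ for the second sink.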
We assume that $D'$ is acyclic (otherwise $D$ has a cycle with no simple edges, as required).
Note that $X$ is non-empty, as otherwise $D'$ contains a cycle. Let $\{U(x): x \in X\}$ be a partition of $V(D)$ as in the above claim.
If there is an edge in $D$ from $x$ to $U(x)$ for some $x \in X$ then there is a directed cycle in $D$ with exactly one simple edge, as required. So suppose that there are no edges from $x$ to $U(x)$ for $x \in X$.
Let $H$ be an auxiliary weighted simple digraph on $X$, with weight function $\omega$ defined by $\omega(x) = |U(x)|$, and where $x y$ is an edge in $H$ whenever there is an edge in $D$ from $x$ to $U(y)$. By choice of $X$, every edge from $x$ to $U(y)$ in $D$ is a simple edge. By choice of $X$ and minimum out-degree assumption on $D$, we have $|N^+(x)| \ge \alpha n$ for $x \in X$. Thus
\begin{equation*}
\omega(N^+(x)) = \sum_{y \in X :\,\, U(y) \cap N^+(x) \neq \emptyset}|U(y)| \ge |N^+(x)| \ge \alpha n.
\end{equation*}
Since $\omega(X) = n$, it follows from \Cref{lem:short-cycle-weighted} that there is a cycle $C$ in $H$ of length at most $4\alpha^{-1}$. Write $C = (x_1 \ldots x_{\ell})$. For $i \in [\ell]$ let $y_{i+1} \in U(x_{i+1})$ be such that $x_i y_{i+1}$ is an edge in $D$ (addition of indices is taken modulo $\ell$; such $y_{i+1}$ exists by definition of $H$). Let $P_i$ be a directed path in $D'$ from $y_i$ to $x_i$. Then $C' = (x_1 y_2 P_2 \ldots x_{\ell} y_1 P_1)$ is a closed directed walk in $D$ with $\ell$ simple edges, and hence $D$ contains a directed cycle with at most $\ell$ simple edges. Since $\ell \le 4\alpha^{-1}$, this completes the proof.
\end{proof}
Finally, we prove \Cref{thm:from-dense-to-complete}.
\begin{proof}[Proof of \Cref{thm:from-dense-to-complete}]
We first modify $D$ as follows. If there are three vertices $x, y, z$ such that both $xy$ and $yz$ are multiple edges, then remove a copy of $xy$ and $yz$ and add a copy of $xz$. Continue doing so until it is no longer possible and denote the resulting digraph by $D'$, and let $G'$ be the graph obtained from $D'$ by ignoring directions and multiplicities. Then $D'$ is an Eulerian multi-digraph which is immersed by $D$, and $G'$ has minimum degree at least $\alpha n$. Moreover, no vertex in $D'$ is incident to both multiple in-edges and multiple out-edges. Denote by $V^-$ and $V^+$ the vertices in $D'$ incident to multiple in-edges and multiple out-edges, respectively; so $V^+$ and $V^-$ are disjoint.
\begin{claim}
Let $\ell = 2^{-16} \alpha^4 n^2$. Then, there is a collection of $\ell$ pairwise edge-disjoint directed cycles in $D'$.
\end{claim}
\begin{proof}
We define directed cycles $C_1, \ldots, C_{\ell}$ as follows. Suppose that $C_1, \ldots, C_{i-1}$ are chosen. Set $D_i = D' \setminus (C_1 \cup \ldots \cup C_{i-1})$ (here we take into account multiplicities), and let $C_i$ be a shortest directed cycle in $D_i$.
We claim that there is such a cycle $C_i$ and that $C_i$ has length at most $64 \alpha^{-1}$, for $i \in [\ell]$.
Suppose that this is the case for $j < i$, where $i \in [\ell]$. Let $X_i$ be the set of vertices that appear in at least $\alpha n / 4$ of the cycles $C_1, \ldots, C_{i-1}$. Then
\begin{equation*}
|X_i|
\le \frac{\ell \cdot 64 \alpha^{-1}}{\alpha n / 4}
\le \frac{\alpha^2 n}{256}.
\end{equation*}
Let $Y_i$ be the set of vertices with out-degree at least $\alpha n / 16$ in $X_i$.
Then $Y_i \subseteq V^+$, because vertices outside of $V^+$ send at most $|X_i|$ out-edges to $X_i$.
Note that the maximum in-degree in $D'$ is at most $n$: every vertex of $D'$ is incident only to simple in-edges or only to simple out-edges, so it has in-degree at most $n$ or out-degree at most $n$, and since $D'$ is Eulerian these two quantities coincide. It follows that
\begin{equation*}
|Y_i| \le \frac{|X_i|n}{\alpha n / 16} \le \frac{\alpha n}{16}.
\end{equation*}
Set $D_i' = D_i \setminus (X_i \cup Y_i)$.
We claim that $D_i'$ has minimum out-degree at least $\alpha n / 8$. Indeed, let $u \in V(D_i')$. Observe that $u$ has out-degree at least $\alpha n/2$ in $D'$; this follows from $G'$ having minimum degree at least $\alpha n$.
Thus, since $u$ is not in $X_i$, it has out-degree at least $\alpha n / 4$ in $D_i$. Moreover, since it is not in $Y_i$, it sends at most $\alpha n / 16$ out-edges to $X_i$. Additionally, $u$ sends at most $|Y_i|$ out-edges to $Y_i$, because $Y_i \subseteq V^+$. It follows that $u$ has out-degree at least $\alpha n / 8$ in $D_i'$, as required.
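The out-degree accounting in the previous sentences is elementary but easy to slip on; as a sanity check (a hypothetical snippet, not part of the proof), the coefficients of $\alpha n$ can be verified with exact rational arithmetic:

```python
from fractions import Fraction

# Coefficients of alpha*n in the out-degree accounting for u in D_i':
out_deg_in_Dprime = Fraction(1, 2)   # u has out-degree >= alpha*n/2 in D'
lost_to_cycles    = Fraction(1, 4)   # u lies on fewer than alpha*n/4 of the cycles (u not in X_i)
lost_to_X         = Fraction(1, 16)  # u sends at most alpha*n/16 out-edges to X_i (u not in Y_i)
lost_to_Y         = Fraction(1, 16)  # |Y_i| <= alpha*n/16

remaining = out_deg_in_Dprime - lost_to_cycles - lost_to_X - lost_to_Y
assert remaining == Fraction(1, 8)   # minimum out-degree alpha*n/8 in D_i'
```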
By \Cref{lem:short-cycle}, there is a directed cycle $C$ in $D_i'$ with at most $32\alpha^{-1}$ simple edges. By the structure of $D'$, there cannot be two consecutive multiple edges in $C$, so the number of multiple edges in $C$ is at most the number of simple edges. It follows that $C$ has length at most $64\alpha^{-1}$. This implies that the cycle $C_i$ exists and has length at most $64 \alpha^{-1}$, as required.
\end{proof}
Let $\mathcal{C}$ be a collection of at least $\ell$ pairwise edge-disjoint directed cycles in $D'$. For each $C \in \mathcal{C}$, let $e(C)$ be an edge of $C$ which is simple in $D'$ (the structure of $D'$ implies that such an edge exists).
We define a graph $H$ on $V(D')$ as follows. For each $C \in \mathcal{C}$, add the edge $e(C)$ to $H$, ignoring its direction. Then $e(H) \ge \ell = 2^{-16} \alpha^4 n^2$, so $H$ has a subgraph with minimum degree at least $2^{-16} \alpha^4 n$.
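The step from $e(H) \ge \ell$ to a subgraph of minimum degree at least $e(H)/|H|$ is the standard peeling argument: repeatedly delete a vertex of degree below the threshold; each deletion removes fewer than $e(H)/n$ edges, so the process cannot exhaust all edges and must stop at a nonempty subgraph. A minimal illustrative sketch (a hypothetical helper, not from the paper):

```python
def min_degree_subgraph(edges, d):
    """Peel vertices of degree < d; whenever d <= e/n the remainder
    is nonempty and has minimum degree >= d."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    changed = True
    while changed:
        changed = False
        for u in list(adj):
            if u in adj and len(adj[u]) < d:
                for v in adj.pop(u):
                    if v in adj:
                        adj[v].discard(u)
                changed = True
    return adj

# K4 with a pendant vertex: e = 7, n = 5, so a nonempty subgraph of
# minimum degree >= 7/5 (hence >= 2) survives the peeling.
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3), (3, 4)]
core = min_degree_subgraph(edges, 7 / 5)
assert set(core) == {0, 1, 2, 3}
assert min(len(nbrs) for nbrs in core.values()) >= 2
```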
By \Cref{thm:devos-et-al}, the graph $H$ immerses $K_s$, where $s = 10^{-9} \alpha^4 n$. This means that there is a set $X$ of $s$ vertices and a path $P_{xy}$ from $x$ to $y$ for every two vertices $x, y \in X$, such that these paths are pairwise edge-disjoint. We show that $D'$ immerses $\overrightarrow{K}_s$.
Fix two vertices $x, y \in X$. Write $P_{xy} = (x_0, \ldots, x_r)$. For each $i \in [r]$, let $C_i$ be the cycle in $\mathcal{C}$ for which $e(C_i) = x_{i-1} x_i$, let $Q_i$ be the subpath of $C_i$ from $x_{i-1}$ to $x_i$ and let $Q_i'$ be the subpath of $C_i$ from $x_i$ to $x_{i-1}$ (one of $Q_i$ and $Q_i'$ is an edge). Define $D_{xy} = (Q_1 \ldots Q_r)$ and $D_{yx} = (Q_r' \ldots Q_1')$. Then $D_{xy}$ is a directed path from $x$ to $y$ and $D_{yx}$ is a directed path from $y$ to $x$. It is easy to see that the paths $D_{xy}$, with $x,y \in X$ and $x \neq y$, are pairwise edge-disjoint. The union of these paths yields an immersion of $\overrightarrow{K}_s$ in $D'$, as required.
\end{proof}
\section{Proof of Theorem~\ref{thm:main}}\label{sec: proofmain}
\begin{proof}
We may assume $t$ is large by taking $\alpha$ to be a sufficiently large constant and using that an Eulerian simple digraph with minimum out-degree $t^2$ contains an immersion of $\overrightarrow{K}_t$, as proved in \cite{devoss2010immersing}.
Let $\beta > 0$ be a constant as in \Cref{thm:find-dense-immersion}, and write $k = 10^{11} \beta^8 t$. We will show that every simple Eulerian digraph with minimum in-degree at least $100k$ immerses $\overrightarrow{K}_t$, showing that the statement holds with $\alpha = 10^{13} \beta^8$.
Let $D$ be a simple Eulerian digraph with minimum in-degree at least $100k$.
By \Cref{thm:find-dense-immersion}, $D$ immerses a simple digraph $D'$ on at most $\beta k$ vertices and at least $k^2/2$ edges. Let $G'$ be the graph obtained from $D'$ by ignoring directions, let $G''$ be a subgraph of $G'$ with minimum degree at least $d(G')/2$, and let $D''$ be an orientation of $G''$ which is a subgraph of $D'$. Write $n = |G''|$ and $\alpha = 1/2\beta^2$. Note that $d(G') = 2e(D')/|D'| \ge k/\beta$, showing $\delta(G'') \ge k/2\beta \ge n / 2\beta^2 = \alpha n$. Applying Lemma~\ref{lem:complete-to-eulerian}, with $D''$ playing the role of $D'$, we obtain an Eulerian multi-digraph $D'''$ on the same vertex set as $D''$ which contains $D''$ as a subdigraph and is immersed by $D$.
By \Cref{thm:from-dense-to-complete}, $D'''$ immerses $\overrightarrow{K}_s$, where $s = 10^{-9} \alpha^4 n \ge \frac{k}{10^9 2^4 \beta^8} \ge t$, as claimed.
\end{proof}
\section{Concluding remarks} \label{sec:conclusion}
As stated in the introduction, Lochet proved that for every positive integer $k$, there exists $f(k)$ such that any digraph with minimum out-degree at least $f(k)$ contains an immersion of a transitive tournament on $k$ vertices. This is essentially best possible since, as we already pointed out, there are digraphs with arbitrarily large minimum out-degree which do not contain an immersion of $\overrightarrow{K}_3$ (see \cite{mader1985degree,devos2012}).
Lochet's proof allowed him to show $f(k)\leq O(k^3)$. We suspect that a linear bound would suffice.
\begin{conjecture}
There exists an absolute constant $C>0$ such that for any positive integer $k$ the following holds. Let $D$ be a digraph with $\delta^{+}(D)\geq Ck$. Then $D$ immerses a transitive tournament on $k$ vertices.
\end{conjecture}
To conclude, we reiterate Mader's question about subdivisions of transitive tournaments in digraphs with large minimum out-degree.
\begin{question}[Mader \cite{mader1996topological}]
Is there a function $f$ such that, for every integer $k \ge 1$, every digraph with minimum out-degree at least $f(k)$ contains a subdivision of a transitive tournament on $k$ vertices?
\end{question}
\section{Acknowledgements}
We would like to thank Paul Wollan for an insightful conversation on the topic of immersions.
\section{Introduction}
Since the 1960s, investigations on the invariant structures of statistical models have led to various approaches to incorporate geometric structures, of which notably the contributions of \textsc{N. N. Chenzow} \cite{Chenzow1965} and \textsc{S. Amari} \cite{Amari1987} eventually encouraged the notion of \emph{Statistical Manifolds}. Thereby Amari's approach to incorporate a differential structure emphasizes the \emph{Fisher Information Metric}, which is obtained as a partial derivative of the Kullback-Leibler divergence. The induced geometry became known as \emph{Information Geometry}. Unfortunately the supplementary degree of abstraction, compared to its initially low applicability, caused further research in this direction to lose ``momentum''. Nevertheless, in the first decade of the $21^{\text{st}}$ century, the increasing availability of large collections of complex natural data demanded a theoretic underpinning of complex structural assumptions, in particular with respect to the rising theory of deep learning.
\section{\label{sec:Statistical-manifolds}Statistical Manifolds}
\emph{Topological Statistical Models} \cite{Michl2019} provide the ability to characterize statistical inference without the necessity of an underlying sample space. Thereby the statistical equivalence of statistical models is provided by Kolmogorov equivalence. The topologies of those Kolmogorov quotients in turn are obtained by countable coverings of Borel sets in $\mathbb{R}$ and therefore enrich the underlying model spaces to second countable Hausdorff spaces. It is therefore straightforward to transfer the concept of topological manifolds to statistical models by the Kolmogorov quotients of their induced topological statistical models: Let $(S,\,\Sigma,\,\mathcal{M},\,\mathcal{T})$ be a topological statistical model and $n\in\mathbb{N}$. Then a \emph{coordinate chart} $(U,\,\phi)$ within $\mathrm{KQ}(\mathcal{M},\,\mathcal{T})$ is constituted by an open set $U\in\bigslant{\mathcal{T}}{\mathrm{id}}$ and a homeomorphism $\phi:U\rightarrow\mathbb{R}^{n}$. This allows the definition of an \emph{atlas} $\mathcal{A}$ for $\mathcal{M}$ by a family of charts $\{(U_{i},\,\phi_{i})\}_{i\in I}$ that covers $\bigslant{\mathcal{M}}{\mathrm{id}}$, such that $\bigslant{\mathcal{M}}{\mathrm{id}}=\bigcup_{i\in I}U_{i}$. In order to extend the local Euclidean structure of the individual coordinate charts to a global structure over the model space, the transitions within overlapping charts are required to preserve the structure. Let therefore $(U_{a},\,\phi_{a})$ and $(U_{b},\,\phi_{b})$ be coordinate charts in $\mathcal{A}$ with a nonempty intersection $U_{a\cap b}=U_{a}\cap U_{b}$, then $\phi_{a}(U_{a\cap b})$ and $\phi_{b}(U_{a\cap b})$ generally denote different representations of $U_{a\cap b}$ in $\mathbb{R}^{n}$. 
In this case, for a given $k\in\mathbb{N}_{0}\cup\{\infty,\,\omega\}$, the charts are regarded as $C^{k}$-compatible iff their \emph{transition maps} $\phi_{a} \circ\phi_{b}^{-1}$ and $\phi_{b}\circ\phi_{a}^{-1}$ are $k$-times continuously differentiable.
If all charts of an atlas $\mathcal{A}$ are pairwise $C^{k}$-compatible, then $\mathcal{A}$ is termed a $C^{k}$-atlas. Let $\mathcal{A}'$ be a further $C^{k}$-atlas of $\mathcal{M}$; then $\mathcal{A}$ and $\mathcal{A}'$ are termed $C^{k}$-equivalent if also $\mathcal{A}\cup\mathcal{A}'$ is a $C^{k}$-atlas of $\mathcal{M}$. This equivalence relationship may be used to derive a maximal atlas by completion. Let therefore $\mathcal{A}_{\max}$ be the union of all $C^{k}$-atlases of $\mathcal{M}$ that are $C^{k}$-equivalent to $\mathcal{A}$; then $\mathcal{A}_{\max}$ is unique for the $C^{k}$-equivalence class of $\mathcal{A}$ and does not depend on the choice of $\mathcal{A}$ within this class. Then any $C^{k}$-differentiable function that is defined within the image of a chart in $\mathcal{A}_{\max}$ has a unique $C^{k}$-differentiable extension within its neighbourhood in $\mathrm{KQ}(\mathcal{M},\,\mathcal{T})$.
The crux in the definition of a $C^{k}$-atlas $\mathcal{A}$, however, is that due to the Hausdorff property of $\mathrm{KQ}(\mathcal{M},\,\mathcal{T})$ and the completion of $\mathcal{A}$ by $\mathcal{A}_{\max}$, the requirement that the transition functions be $C^{k}$-diffeomorphisms in $\mathbb{R}^{n}$ induces a differential structure on $\mathrm{KQ}(\mathcal{M},\,\mathcal{T})$. Then not only the transition maps, but any coordinate chart by itself may be regarded as a $C^{k}$-diffeomorphism into $\mathbb{R}^{n}$. This defines the structure of a \emph{statistical manifold}.
\begin{defn*}[Statistical manifold]
\label{def:Statistical-manifold} \emph{Let $(S,\,\Sigma,\,\mathcal{M},\,\mathcal{T})$
be a topological statistical model and $\mathcal{A}$ an $n$-dimensional
$C^{k}$-atlas for $\mathrm{KQ}(\mathcal{M},\,\mathcal{T})$. Then
the tuple $(S,\,\Sigma,\,\mathcal{M},\,\mathcal{A})$ is termed a
statistical manifold. }\textbf{Remark}:\emph{ The category of $k$-differentiable
statistical manifolds is denoted by $\mathbf{StatMan}^{k}$.}
\end{defn*}
Since the atlas $\mathcal{A}$ has to be defined with regard to the
Kolmogorov quotient $\mathrm{KQ}(\mathcal{M},\,\mathcal{T})$ to assure
the Hausdorff property, statistical manifolds have technically to
be regarded as non-Hausdorff manifolds. Since the atlas $\mathcal{A}$
conversely induces a topology that equals $\bigslant{\mathcal{T}}{\mathrm{id}}$,
the original topological statistical model $(S,\,\Sigma,\,\mathcal{M},\,\mathcal{T})$
cannot be recovered from \emph{$(S,\,\Sigma,\,\mathcal{M},\,\mathcal{A})$}.
Nevertheless by the extension of the Kolmogorov quotient to the atlas $\mathrm{KQ}(\mathcal{M},\,\mathcal{A})$, it follows that $\mathrm{KQ}(\mathcal{M},\,\mathcal{T})$ and $\mathrm{KQ}(\mathcal{M},\,\mathcal{A})$ are Kolmogorov equivalent and therefore that $(S,\,\Sigma,\,\mathcal{M},\,\mathcal{T})$ and $(S,\,\Sigma,\,\mathcal{M},\,\mathcal{A})$ are induced by statistically equivalent models. With regard to observation-based statistical inference this ``irregularity'' however usually has no impact, since for any \emph{identifiable statistical manifold} $(S,\,\Sigma,\,\mathcal{M},\,\mathcal{A})$, whose model space $\mathcal{M}$ is identical to a parametric family $\mathcal{M}_{\theta}$, it holds that $\bigslant{\mathcal{M}}{\mathrm{id}}=\mathcal{M}_{\theta}=\mathcal{M}$ and therefore that $\mathrm{KQ}(\mathcal{M},\,\mathcal{A})=(\mathcal{M},\,\mathcal{A})$.
Without loss of generality in the following therefore $(S,\,\Sigma,\,\mathcal{M},\,\mathcal{A})$
is assumed to be an identifiable statistical manifold and therefore
a manifold in the usual context. Nevertheless, in order to provide
higher structures, it is reasonable to recapitulate the usual concepts
and the vocabulary of manifolds. First of all by assuming $\mathcal{A}$
to be a $C^{0}$-atlas, the transition maps, that define the structure
of $(S,\,\Sigma,\,\mathcal{M},\,\mathcal{A})$ are only required to
be continuous and $(S,\,\Sigma,\,\mathcal{M},\,\mathcal{A})$ is termed
\emph{topological}. For the case, that $\mathcal{A}$ is a $C^{k}$-atlas
with $k>0$ however, the transition functions at least have to be
differentiable and therefore $(S,\,\Sigma,\,\mathcal{M},\,\mathcal{A})$
is termed \emph{differentiable}. Let now be $(\mathcal{N},\,\mathcal{B})$
a further identifiable statistical manifold, where $\mathcal{B}$
is an $m$-dimensional $C^{k}$-atlas of $\mathcal{N}$ and $f\colon\mathcal{M}\to\mathcal{N}$
a function that is continuous w.r.t. the induced topologies. Then
$f$ is \emph{$C^{k}$-differentiable} and written as $f\in C^{k}(\mathcal{M},\,\mathcal{N})$,
if for arbitrary coordinate charts $(U,\,\phi)\in\mathcal{A}$ and
$(V,\,\nu)\in\mathcal{B}$ with $f(U)\subseteq V$ it holds, that
$\nu\circ f\circ\phi^{-1}$ is $k$-times continuously differentiable.
Thereby the case $(\mathcal{N},\,\mathcal{B})=(\mathbb{R},\,\mathcal{B}(\mathbb{R}))$
occupies an exceptional position, which is abbreviated by the notation
$C^{k}(\mathcal{M})$. At this point it is important to notice, that
although the definitions of $C^{k}$-atlases and $C^{k}$-differentiable
functions depend on the choice of $k$, this does essentially not
apply for the underlying differentiable structures. The reason for
this ``peculiarity'' may be found in the property, that for any
$k>0$ any $C^{k}$-atlas uniquely admits a ``smoothing'', given
by a $C^{k}$-equivalent $C^{\infty}$-atlas. Therefore the set of
smooth functions $C^{\infty}(\mathcal{M})$ is well defined, independent
of the underlying differentiable structure. Thereby $C^{\infty}(\mathcal{M})$
constitutes an associative algebra w.r.t. the pointwise product ``$\cdot$'',
the addition ``$+$'' and the scalar multiplication. This allows
a formal definition of \emph{derivations} at points $P\in\mathcal{M}$
by $\mathbb{R}$-linear functions $D\colon C^{\infty}(\mathcal{M})\to\mathbb{R}$,
that satisfy the Leibniz rule $D(\varphi\cdot\psi)=D(\varphi)\psi(P)+\varphi(P)D(\psi)$
for all $\varphi,\,\psi\in C^{\infty}(\mathcal{M})$. Let now be $T_{P}\mathcal{M}$
the set of all derivations at $P$, then $T_{P}\mathcal{M}$ defines
an $n$-dimensional $\mathbb{R}$-vector space, by the operations
$(v+w)(\varphi)\coloneqq v(\varphi)+w(\varphi)$ and $(\lambda v)(\varphi)\coloneqq\lambda v(\varphi)$,
for $v,\,w\in T_{P}\mathcal{M}$, $\varphi\in C^{\infty}(\mathcal{M})$
and $\lambda\in\mathbb{R}$. As for any given coordinate chart $(U,\,\phi)$
that contains $P$, any derivation $v\in T_{P}\mathcal{M}$ uniquely
corresponds to a directional derivative in $\mathbb{R}^{n}$ at the
point $\phi(P)$, the elements of $T_{P}\mathcal{M}$ are termed\emph{
tangent vectors} and $T_{P}\mathcal{M}$ the \emph{tangent space}
at $P$. Then the\emph{ partial derivatives }at $P$, given by $\{\partial_{i}\}_{P}$
with $\partial_{i}\colon P\mapsto\partial/\partial\phi^{i}\mid_{P}$
provide a basis of $T_{P}\mathcal{M}$, such that any $v\in T_{P}\mathcal{M}$
has a local \emph{representation} by a vector $\boldsymbol{\xi}\in\mathbb{R}^{n}$
with $v=\xi^{i}\partial_{i}$. Let now be $f\in C^{\infty}(\mathcal{M},\,\mathcal{N})$,
then the \emph{differential} of $f$ at $P\in\mathcal{M}$ is a linear
mapping $\mathrm{d}f_{P}\colon T_{P}\mathcal{M}\to T_{f(P)}\mathcal{N}$,
which for all $v\in T_{P}\mathcal{M}$ and $\varphi\in C^{\infty}(\mathcal{N})$
is defined by $(\mathrm{d}f_{P}v)(\varphi)\coloneqq v(\varphi\circ f)$.
Then $f$ is an \emph{immersion}, if for all $P\in\mathcal{M}$ the
differential $\mathrm{d}f_{P}$ is injective. If furthermore $f$
is injective and continuous w.r.t. the respectively induced topologies,
then $f$ is a \emph{smooth embedding} and the image of $f$ a \emph{smooth
submanifold} of $\mathcal{N}$ w.r.t. the atlas, which is restricted
to the image. This allows the definition of \emph{smooth parametrisation}s
for statistical manifolds.
\begin{defn*}[Smooth parametrisation]
\label{def:Differentiable-parametrisation} \emph{Let $(\mathcal{M},\,\mathcal{A})$
be a differentiable statistical manifold, $(V,\,\mathcal{B})$ a differentiable
manifold over a vector space $V$ and $\theta$ a parametrisation
for $\mathcal{M}$ over $V$. Then $\theta$ is termed a smooth parametrisation
for $(\mathcal{M},\,\mathcal{A})$, iff $\theta^{-1}\colon\mathrm{KQ}(\mathcal{M},\,\mathcal{A})\hookrightarrow(V,\,\mathcal{B})$
is a smooth embedding.}
\end{defn*}
Since for any smooth $n$-manifold the \emph{Whitney embedding theorem}
guarantees the existence of a smooth embedding within $\mathbb{R}^{2n}$,
any smooth statistical $n$-manifold \emph{$(\mathcal{M},\,\mathcal{A})$}
has a smooth parametrisation $\theta$ over $\mathbb{R}^{2n}$. It
is therefore convenient to introduce the notation ``$(\mathcal{M}_{\theta},\,\mathcal{A})$''
for a differentiable statistical manifold with a smooth parametrisation
$\theta$. Since the tuple $(\mathcal{M},\,\theta^{-1})$ is a $C^{\infty}$-chart
that covers $\mathcal{M}$, it provides a \emph{smooth representation}
of \emph{$(\mathcal{M},\,\mathcal{A})$} by the parameter space $\Theta=\theta^{-1}(\mathcal{M})$.
Thereby for any coordinate chart $(U,\,\phi)\in\mathcal{A}$, the
mapping $\phi\colon U\to\mathbb{R}^{n}$ provides an $n$-dimensional
basis for the tangent space $T_{P}\mathcal{M}$ by the partial derivatives
$\{\partial_{i}\}_{P}$. Since the smooth parametrisation $\theta$
however is an immersion, the differentials $\mathrm{d}\theta^{-1}(\partial_{i})$
also provide an $n$-dimensional basis of the space $T_{\theta^{-1}(P)}\Theta$.
Therefore any tangent space may intuitively be identified with an
$n$-dimensional affine subspace of $\Theta$ and any tangent vector
by a directional derivative within this subspace. A smooth parametrisation
then in particular allows the identification of \emph{smooth curves}
$\gamma\colon I\to\mathcal{M}$ in $\mathcal{M}$ by smooth \emph{parametric
curves} $\gamma_{\theta}\colon I\to\Theta$ in $\Theta$, such that
$\gamma=(\theta\circ\gamma_{\theta})$. Then for any $k\in\mathbb{N}_{0}\cup\{\infty,\,\omega\}$,
it holds that $\gamma\in C^{k}(I,\,\mathcal{M})$ iff $\gamma_{\theta}\in C^{k}(I,\,\Theta)$.
Therefore smooth parametric curves provide the foundation for a traversal
of $\mathcal{M}$. Thereby the traversal along the parametric curve
$\gamma_{\theta}(t)$ is described by the unique directional derivatives
$\dot{\gamma}_{\theta}(t)$ in $\mathcal{\Theta}$. Consequentially
due to the unique correspondence between directional derivatives in
$T_{\gamma_{\theta}(t)}\mathcal{\Theta}$ and tangent vectors in $T_{\gamma(t)}\mathcal{M}$
the traversal along $\gamma$ in $\mathcal{M}$ is also described
by unique tangent vectors $\dot{\gamma}(t)\in T_{\gamma(t)}\mathcal{M}$.
This uniqueness however only applies to the direction in $\Theta$
but not to its representation in $T_{\gamma_{\theta}(t)}\mathcal{\Theta}$
in terms of the chosen basis. With regard to a coordinate chart $(U,\,\phi)$
in $\mathcal{M}$ the local basis $\{\partial_{i}\}_{P}$ at $P\in U$
naturally extends over $U$, by regarding $\partial_{i}\colon P\mapsto\partial/\partial\phi^{i}$
as an ordered basis, termed a \emph{local frame}, which localized
at $P$ provides $\partial_{i}\colon P\mapsto\partial/\partial\phi^{i}\mid_{P}$.
Then the differentials $\mathrm{d}\theta^{-1}(\partial_{i})$ provide
a local basis of $T_{\gamma_{\theta}(t)}\Theta\mid_{\theta^{-1}(U)}$
and the directional derivatives $\dot{\gamma}_{\theta}(t)\in T_{\gamma_{\theta}(t)}\Theta\mid_{\theta^{-1}(U)}$
may uniquely be identified with tangent vectors $\dot{\gamma}(t)\in T_{\gamma(t)}\mathcal{M}\mid_{U}$
by $\dot{\gamma}(t)=\mathrm{d}\theta(\dot{\gamma}_{\theta}(t))\mid_{U}$.
In order to continue the traversal however it is required to ``connect''
the basis vectors of the affine spaces along the curve by an unambiguous
notation, which is independent of the chosen coordinate charts. This
provides the notation of an \emph{affine connection}. At a global
scale the disjoint union of all tangent spaces constitutes the \emph{tangent
bundle} $T\mathcal{M}$, which by itself is diffeomorphic to $\mathcal{M}\times\mathbb{R}^{n}$
and therefore in particular a differentiable manifold. This property
allows to define \emph{smooth vector fields} on $\mathcal{M}$ by
smooth functions $X\in C^{\infty}(\mathcal{M},\,T\mathcal{M})$, or
w.r.t. the sequence $\mathcal{M}\stackrel{X}{\hookrightarrow}T\mathcal{M}\twoheadrightarrow\mathcal{M}$
by smooth sections $X\in\Gamma(T\mathcal{M})$. Intuitively smooth
vector fields assign tangent vectors to the points of the manifold,
such that ``small'' movements on the manifold are accompanied by
``small'' changes within the tangent spaces. With regard to a coordinate
chart $(U,\,\phi)$ a local frame $\{\partial_{i}\}$ may also be
regarded as a localized ordered basis of the vector fields $\Gamma(T\mathcal{M}\mid_{U})$.
Therefore the transition of local frames may be described by derivatives
of vector fields, which provides the notion of \emph{covariant derivatives}.
A covariant derivative $\nabla$ on $\mathcal{M}$ formally defines
a mapping $\nabla\colon\Gamma(T\mathcal{M})^{2}\to\Gamma(T\mathcal{M})$
with $(X,\,Y)\mapsto\nabla_{X}Y$, which satisfies: (i) $\nabla$
is $\mathbb{R}$-linear in both arguments, (ii) $\nabla$ is $C^{\infty}(\mathcal{M})$-linear
in the first argument and (iii) $\nabla$ is a derivation in the second
argument, such that $\nabla_{X}(\varphi\cdot Y)=X(\varphi)Y+\varphi\nabla_{X}Y$
for arbitrary $\varphi\in C^{\infty}(\mathcal{M})$ and $X,\,Y\in\Gamma(T\mathcal{M})$.
An affine connection is then completely described by the specification
of a covariant derivative which in turn endows a differentiable manifold
with an additional structure $\nabla$. In particular however the
choice of an affine connection for any curve $\gamma$ completely
determines its derivative $\dot{\gamma}\in\Gamma(T\mathcal{M})\mid_{\gamma}$
along the curve, as well as those vector fields $X\in\Gamma(T\mathcal{M})$
which are covariantly constant along $\gamma$, such that $\nabla_{\dot{\gamma}}X=0$.
As this property however may also be applied w.r.t. the derivative
along the curve itself, the choice of an affine connection $\nabla$
in particular determines those curves $\gamma$ whose derivative
is covariantly constant along their traversal. These curves, known as
\emph{geodesic}s, therefore generalize straight lines to differentiable
manifolds.
\begin{defn*}[Geodesic]
\label{def:Geodesic}\emph{ Let $(\mathcal{M},\,\mathcal{A})$ be
a smooth statistical manifold and $\nabla$ an affine connection on
$\mathrm{KQ}(\mathcal{M},\,\mathcal{A})$. Then a smooth curve $\gamma\colon I\to\mathcal{M}$
is termed a geodesic w.r.t. $\nabla$, iff it satisfies the geodesic
equation:
\[
\nabla_{\dot{\gamma}}\dot{\gamma}=0
\]
}
\end{defn*}
Over and above geodesics, the choice of an affine connection $\nabla$
admits two fundamental invariants to the differentiable structure
by the \emph{curvature} and the \emph{torsion}. Thereby the curvature
$R\colon\Gamma(T\mathcal{M})^{3}\to\Gamma(T\mathcal{M})$ with $R(X,\,Y)Z\coloneqq\nabla_{X}\nabla_{Y}Z-\nabla_{Y}\nabla_{X}Z-\nabla_{[X,\,Y]}Z$
intuitively provides a description, of how tangent spaces ``roll''
along smooth curves under parallel transport, whereas the torsion
$T\colon\Gamma(T\mathcal{M})^{2}\to\Gamma(T\mathcal{M})$ with $T(X,\,Y)\coloneqq\nabla_{X}Y-\nabla_{Y}X-[X,\,Y]$
describes their ``twist'' about the curve. Notwithstanding these
invariants however, an affine connection $\nabla$ only adjusts local
tangent spaces, but does not provide a notion of ``length'' or
``angle''. The mandatory next step therefore regards the incorporation
of local geometries within the tangent spaces, that eventually extend
to a global geometry over the differentiable structure.
\section{Pseudo-Riemannian Structure}
Since tangent spaces are vector spaces, it is natural to obtain the
local geometry by an inner product. More generally however it suffices
to provide a mapping $g_{P}\colon T_{P}\mathcal{M}^{2}\to\mathbb{R}$
that satisfies (i) $g_{P}$ is $C^{\infty}(\mathcal{M})$-bilinear,
(ii) $g_{P}$ is symmetric and (iii) $g_{P}$ is non-degenerate. In
order to extend the local geometries to a global geometry, however,
it additionally has to be required that the local geometries vary
smoothly w.r.t. smooth vector fields. This localization requirement
yields the notion of a \emph{pseudo-Riemannian metric} $g$ on $(\mathcal{M},\,\mathcal{A})$,
which endows each point $P\in\mathcal{M}$ with a symmetric non-degenerate
form $g_{P}$, such that the mapping
\[
g(X,\,Y)\colon P\mapsto g_{P}(X_{P},\,Y_{P})
\]
is smooth, i.e. $g(X,\,Y)\in C^{\infty}(\mathcal{M})$, for arbitrary
$X,\,Y\in\Gamma(T\mathcal{M})$. With regard to a coordinate chart
$(U,\,\phi)$ and a local frame $\{\partial_{i}\}$ the pseudo-Riemannian
metric $g$ has a coordinate representation by \emph{metric coefficients}
$g_{ij}\colon U\to\mathbb{R}$, with $g_{ij}=g(\partial_{i},\,\partial_{j})$
and therefore by a matrix $G_{P}=(g_{ij})$, termed a \emph{fundamental
matrix}. For $P\in U$ and $v,\,w\in T_{P}\mathcal{M}$, with $v=\xi^{i}\partial_{i}$
and $w=\zeta^{i}\partial_{i}$ it then follows, that $g(v,\,w)$ has
a local representation $\langle\boldsymbol{\xi},\,\boldsymbol{\zeta}\rangle_{P}\coloneqq\boldsymbol{\xi}^{T}G_{P}\boldsymbol{\zeta}$.
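As a classical illustration (a standard example, not developed further in the text), consider the family of univariate normal distributions $\mathcal{N}(\mu,\,\sigma^{2})$ with parameters $(\mu,\,\sigma)$; under the Fisher information metric mentioned in the introduction, its fundamental matrix and the induced local inner product read:

```latex
G_{(\mu,\,\sigma)}=\begin{pmatrix}\frac{1}{\sigma^{2}} & 0\\
0 & \frac{2}{\sigma^{2}}
\end{pmatrix},\qquad\langle\boldsymbol{\xi},\,\boldsymbol{\zeta}\rangle_{(\mu,\,\sigma)}=\frac{\xi^{1}\zeta^{1}+2\,\xi^{2}\zeta^{2}}{\sigma^{2}}.
```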
In order to extend the local geometry to a global geometry, an
affine connection $\nabla$ has to be defined, which is compatible
with $g$, such that $Xg(Y,\,Z)=g(\nabla_{X}Y,\,Z)+g(Y,\,\nabla_{X}Z)$,
for all $X,\,Y,\,Z\in\Gamma(T\mathcal{M})$. In this case $\nabla$
is termed a \emph{metric connection} and has a coordinate representation
by \emph{connection coefficients} $\Gamma_{ij}^{k}\colon U\to\mathbb{R}$,
with $\nabla_{\partial_{i}}\partial_{j}=\sum_{k=1}^{n}\Gamma_{ij}^{k}\partial_{k}$,
known as the \emph{Christoffel symbols}. Then the geodesic equation
over a parametric curve $\gamma_{\theta}$ is locally expressed by
a second order ODE:
\begin{equation}
\nabla_{\dot{\gamma}_{\theta}}\dot{\gamma}_{\theta}=0\Longleftrightarrow\ddot{\gamma}_{\theta}^{k}+\sum_{i,j}\Gamma_{ij}^{k}\dot{\gamma}_{\theta}^{i}\dot{\gamma}_{\theta}^{j}=0,\,\forall k\label{eq:geodesic-equation}
\end{equation}
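To make the second order ODE concrete, it can be integrated numerically for an explicitly given metric. The following sketch (hypothetical, not from the text) uses the Poincaré upper half-plane metric $g=\operatorname{diag}(1/y^{2},\,1/y^{2})$ as a stand-in example, whose nonzero Christoffel symbols are $\Gamma_{xy}^{x}=\Gamma_{yx}^{x}=-1/y$, $\Gamma_{xx}^{y}=1/y$ and $\Gamma_{yy}^{y}=-1/y$, and checks that the computed trajectory stays on the well-known semicircular geodesic:

```python
def rhs(state):
    # Geodesic equations of the half-plane metric g = diag(1/y^2, 1/y^2):
    #   x'' = (2/y) x' y',   y'' = (y'^2 - x'^2) / y
    x, y, vx, vy = state
    return (vx, vy, 2.0 * vx * vy / y, (vy * vy - vx * vx) / y)

def rk4(state, h):
    # One classical Runge-Kutta step for the first-order system above.
    add = lambda s, k, c: tuple(si + c * ki for si, ki in zip(s, k))
    k1 = rhs(state)
    k2 = rhs(add(state, k1, h / 2))
    k3 = rhs(add(state, k2, h / 2))
    k4 = rhs(add(state, k3, h))
    return tuple(si + h / 6 * (a + 2 * b + 2 * c + d)
                 for si, a, b, c, d in zip(state, k1, k2, k3, k4))

def speed_sq(state):          # g(gamma', gamma') along the curve
    _, y, vx, vy = state
    return (vx * vx + vy * vy) / (y * y)

state = (0.0, 1.0, 1.0, 0.5)  # gamma(0) = (0, 1), gamma'(0) = (1, 0.5)
e0 = speed_sq(state)
for _ in range(2000):
    state = rk4(state, 1e-3)

# The speed g(gamma', gamma') is constant along a geodesic, and this
# trajectory lies on the semicircle (x - 1/2)^2 + y^2 = 5/4.
assert abs(speed_sq(state) - e0) < 1e-6
assert abs((state[0] - 0.5) ** 2 + state[1] ** 2 - 1.25) < 1e-6
```

The conserved speed along the flow anticipates the invariance of the kinetic term discussed below for general pseudo-Riemannian manifolds.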
With little effort, the \emph{Picard--Lindel\"of theorem} then assures,
that for any $(P,\,v)\in T\mathcal{M}$ there exists a locally unique
geodesic $\gamma_{P,v}\colon I\to\mathcal{M}$, that satisfies the
initial conditions $\gamma_{P,v}(0)=P$ and $\dot{\gamma}_{P,v}(0)=v$.
Thereby the local uniqueness extends to a maximal open interval
$I=(a,\,b)$ in $\mathbb{R}$. If $\nabla$ is furthermore \emph{torsion
free} i.e. $T(X,\,Y)=0$, for all $X,\,Y\in\Gamma(T\mathcal{M})$,
then $\nabla$ is termed a \emph{Levi-Civita connection} and the Christoffel-symbols
may explicitly be derived by the equation $\Gamma_{ij}^{k}=\frac{1}{2}\sum_{l}g^{kl}(\partial_{i}g_{jl}+\partial_{j}g_{il}-\partial_{l}g_{ij})$,
where $g^{kl}$ denote the coefficients of the inverse fundamental
matrix $G_{P}^{-1}$, whose existence is assured by the properties
of $g_{P}$. Therefore it follows, that any pseudo-Riemannian metric
$g$ uniquely admits a Levi-Civita connection $\nabla^{g}$. For this
reason the choice of a pseudo-Riemannian metric naturally induces
a global geometry to a differentiable manifold and therefore w.r.t.
statistical manifolds justifies the definition of \emph{pseudo-Riemannian
statistical manifolds}.
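The closed-form expression $\Gamma_{ij}^{k}=\frac{1}{2}\sum_{l}g^{kl}(\partial_{i}g_{jl}+\partial_{j}g_{il}-\partial_{l}g_{ij})$ can be assembled numerically from any coordinate expression of the metric coefficients by approximating the partial derivatives with central differences. A minimal sketch (assuming, as a hypothetical stand-in example, the half-plane metric $g=\operatorname{diag}(1/y^{2},\,1/y^{2})$ with known symbols $-1/y$ and $1/y$):

```python
def metric(p):
    # Stand-in example: half-plane metric g = diag(1/y^2, 1/y^2).
    _, y = p
    return [[1.0 / y ** 2, 0.0], [0.0, 1.0 / y ** 2]]

def inv2(g):
    # Inverse of a 2x2 fundamental matrix G_P.
    det = g[0][0] * g[1][1] - g[0][1] * g[1][0]
    return [[g[1][1] / det, -g[0][1] / det],
            [-g[1][0] / det, g[0][0] / det]]

def dmetric(p, i, h=1e-6):
    # Central-difference approximation of the derivative of g_ab
    # with respect to coordinate i.
    up = list(p); up[i] += h
    dn = list(p); dn[i] -= h
    gp, gm = metric(up), metric(dn)
    return [[(gp[a][b] - gm[a][b]) / (2 * h) for b in range(2)]
            for a in range(2)]

def christoffel(p):
    # Gamma^k_ij = 1/2 * sum_l g^{kl} (d_i g_jl + d_j g_il - d_l g_ij)
    g_inv = inv2(metric(p))
    dg = [dmetric(p, i) for i in range(2)]   # dg[i][a][b] = d_i g_ab
    return [[[0.5 * sum(g_inv[k][l] * (dg[i][j][l] + dg[j][i][l] - dg[l][i][j])
                        for l in range(2))
              for j in range(2)] for i in range(2)] for k in range(2)]

gamma = christoffel((0.3, 2.0))
# Known values at y = 2: Gamma^x_xy = -1/y = -0.5,
# Gamma^y_xx = 1/y = 0.5, Gamma^y_yy = -1/y = -0.5.
assert abs(gamma[0][0][1] + 0.5) < 1e-4
assert abs(gamma[1][0][0] - 0.5) < 1e-4
assert abs(gamma[1][1][1] + 0.5) < 1e-4
```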
\begin{defn*}[Pseudo-Riemannian statistical manifold]
\label{def:Riemannian-statistical-manifold} \emph{Let $(\mathcal{M},\,\mathcal{A})$
be a differentiable statistical manifold and $g$ a Pseudo-Riemannian
metric on $\mathrm{KQ}(\mathcal{M},\,\mathcal{A})$. Then the tuple
$(\mathcal{M},\,g)$ is termed a Pseudo-Riemannian statistical manifold.
}\textbf{Remark}:\emph{ The category of $k$-differentiable Pseudo-Riemannian
statistical manifolds is denoted by $\mathbf{StatMan}_{\text{R}}^{k}$.}
\end{defn*}
Generally Pseudo-Riemannian manifolds endow the notion of geodesics
with an intuitive meaning as the trajectories of free particles. Thereby
the equations of motion obey the principle of stationary action, whereat
the \emph{action functional} $\mathcal{S}(\gamma)\coloneqq\int_{a}^{b}\mathcal{L}\mathrm{d}t$
is defined over the Lagrangian\emph{ }$\mathcal{L}\coloneqq\frac{1}{2}g(\dot{\gamma},\,\dot{\gamma})$.
The properties of the Levi-Civita connection then allow the transformation
of the local geodesic equation \ref{eq:geodesic-equation} to \emph{Euler-Lagrange
equations}, over the local Lagrangian $\mathcal{L}_{\theta}\coloneqq\sum_{i,j}\frac{1}{2}g_{ij}\dot{\gamma}_{\theta}^{i}\dot{\gamma}_{\theta}^{j}$,
such that:
\begin{equation}
\nabla_{\dot{\gamma}_{\theta}}^{g}\dot{\gamma}_{\theta}=0\Longleftrightarrow\frac{\mathrm{d}}{\mathrm{d}t}\left(\frac{\partial\mathcal{L}_{\theta}}{\partial\dot{\gamma}_{\theta}^{k}}\right)-\frac{\partial\mathcal{L}_{\theta}}{\partial\gamma_{\theta}^{k}}=0,\,\forall k\label{eq:Euler-Lagrange-equations}
\end{equation}
Consequentially the geodesics of Pseudo-Riemannian manifolds are
stationary solutions of the action functional $\mathcal{S}(\gamma)$,
i.e. $\delta\mathcal{S}=0$. By regarding the tangent bundle $T\mathcal{M}$
as the \emph{configuration space} of a moving particle and its elements
$(q,\,\dot{q})\in T\mathcal{M}$ as the\emph{ generalized coordinates},
the Lagrangian equals its \emph{kinetic term}. With regard to equation
\ref{eq:Euler-Lagrange-equations} the geodesics of $(\mathcal{M},\,g)$
then coincide with the trajectories of free particles. This encourages
the interpretation of $(\mathcal{M},\,g)$ as a dynamical system,
where the temporal evolution is determined by the \emph{geodesic flow
}$\Phi^{t}:T\mathcal{M}\to T\mathcal{M}$ with $\Phi^{t}(q,\,\dot{q})=(\gamma_{q,\dot{q}}(t),\,\dot{\gamma}_{q,\dot{q}}(t))$,
where $\gamma_{q,\dot{q}}(t)$ is the locally unique geodesic, that
satisfies the initial conditions $\gamma_{q,\dot{q}}(0)=q$ and $\dot{\gamma}_{q,\dot{q}}(0)=\dot{q}$.\emph{
}Then due to $\frac{\mathrm{d}}{\mathrm{d}t}g(\dot{q},\,\dot{q})=g(\nabla_{\dot{q}}^{g}\dot{q},\,\dot{q})=0$
the geodesic flow preserves the kinetic term along its trajectories,
and therefore generalizes Newton's first law of motion to curvilinear
and pseudo-Euclidean spaces. In appreciation of its origins in the
conceptualization of spacetime, a geodesic $\gamma$ is therefore
termed \emph{spacelike} if $g(\dot{q},\,\dot{q})>0$, \emph{lightlike}
if $g(\dot{q},\,\dot{q})=0$ and \emph{timelike} if $g(\dot{q},\,\dot{q})<0$.
Moreover the Pseudo-Riemannian metric $g$ induces a canonical isomorphism
between the tangent spaces $T_{q}\mathcal{M}$ and their respective
dual spaces $T_{q}^{*}\mathcal{M}$, the \emph{cotangent spaces},
which assigns a \emph{cotangent vector} $p\in T_{q}^{*}\mathcal{M}$
to each tangent vector $\dot{q}\in T_{q}\mathcal{M}$ by $p(v)\coloneqq g_{q}(\dot{q},\,v)$.
Then also the choice of a local frame $\{\partial_{i}\}$ uniquely
induces a \emph{local coframe} $\{\mathrm{d}q^{i}\}$ by $\mathrm{d}q^{i}\coloneqq\partial_{i}^{T}G_{q}$,
such that any $p\in T_{q}^{*}\mathcal{M}$ has a local representation
$p=p_{i}\mathrm{d}q^{i}$. As the geodesic flow however preserves
the kinetic term it holds, that $\frac{\mathrm{d}}{\mathrm{d}t}p(\dot{q})=\frac{\mathrm{d}}{\mathrm{d}t}g(\dot{q},\,\dot{q})=0$,
such that $p$ equals the \emph{conjugate momentum} of $\dot{q}$.
Consequently the disjoint union of all cotangent spaces, given
by the\emph{ cotangent bundle} $T^{*}\mathcal{M}$ equals the \emph{phase
space} of the dynamical system. Finally by the definition of the \emph{Hamiltonian}
$\mathcal{H}(q,\,p)\coloneqq\frac{1}{2}g^{ij}p_{i}p_{j}$ with $(g^{ij})=G_{q}^{-1}$
it follows, that $(\mathcal{M},\,g)$ uniquely corresponds to a \emph{Hamiltonian
system}, since (i) $\dot{q}^{i}=g^{ij}p_{j}=\frac{\partial\mathcal{H}}{\partial p_{i}}$
and (ii) $\dot{p}_{i}=-\frac{\partial}{\partial q^{i}}\left(\frac{1}{2}g^{jk}p_{j}p_{k}\right)=-\frac{\partial\mathcal{H}}{\partial q^{i}}$.
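The Hamiltonian formulation can be sketched numerically. The following minimal example assumes an arbitrarily chosen constant metric $G$ on $\mathbb{R}^{2}$ (an illustrative choice; for a constant metric the force term $-\partial\mathcal{H}/\partial q$ vanishes, so the geodesics are straight lines), integrates Hamilton's equations and checks that the kinetic term is preserved along the geodesic flow:

```python
import numpy as np

# Sketch of the geodesic flow as a Hamiltonian system, assuming a
# constant (flat) metric G on R^2, so that H(q, p) = 1/2 p^T G^{-1} p.
# The metric, step size and initial data are illustrative choices.
G = np.array([[2.0, 0.5], [0.5, 1.0]])  # positive definite metric tensor
G_inv = np.linalg.inv(G)

def hamiltonian(q, p):
    return 0.5 * p @ G_inv @ p

# Hamilton's equations: dq/dt = dH/dp = G^{-1} p and dp/dt = -dH/dq = 0,
# since G does not depend on q.
q = np.array([0.0, 0.0])
p = np.array([1.0, -0.5])
dt, steps = 0.01, 1000
H0 = hamiltonian(q, p)
for _ in range(steps):
    q = q + dt * (G_inv @ p)  # p stays constant along the flow
# the kinetic term is preserved and the trajectory is a straight line
assert abs(hamiltonian(q, p) - H0) < 1e-12
assert np.allclose(q, dt * steps * (G_inv @ p))
```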
This representation allows to reformulate the principle of stationary
action in \emph{canonical coordinates} $(q,\,p)\in T^{*}\mathcal{M}$
by the \emph{curve integral }$\mathcal{S}(q)=\int_{a}^{b}\mathcal{L}\mathrm{d}t=\frac{1}{2}\int_{q}p$.
In particular this formulation, known as \emph{Maupertuis' principle}
then describes the trajectory of a free particle by its geometric
shape instead of its temporal evolution. Due to this geometric interpretation
the action functional of a curve may also be defined with regard to
a given vector field of conjugate momenta $p\in\Gamma(T^{*}\mathcal{M})$,
by $\mathcal{S}_{p}(q)\coloneqq\frac{1}{2}\int_{q}p$. Then for arbitrary
smooth curves $q\colon[a,\,b]\to\mathcal{M}$ the action $\mathcal{S}_{p}(q)$
is completely determined by its boundary values localized at $q_{a}$
and $q_{b}$, such that:
\[
\mathcal{S}_{p}(q)=\frac{1}{2}\int_{q}p=\mathcal{S}_{p}(q_{b})-\mathcal{S}_{p}(q_{a})
\]
Although w.r.t. the fundamental theorem of calculus this insight seems
rather trite, it provides a far-reaching generalisation. Thereby in
the very same manner as the smooth sections of $T^{*}\mathcal{M}$
constitute the smooth linear forms over $T\mathcal{M}$, the smooth
alternating multilinear forms over $T\mathcal{M}^{k}$, termed \emph{differential
$k$-forms} are given by smooth sections of the \emph{exterior power}
$\Lambda^{k}(T^{*}\mathcal{M})$. These $k$-forms then provide the
natural integrands over curves, surfaces, volumes or higher-dimensional
$k$-manifolds and therefore may be thought as measures of the flux
through infinitesimal $k$-parallelotopes. In this sense the smooth
functions over $\mathcal{M}$ are \emph{$0$-forms} and the smooth
covector fields over $\mathcal{M}$, i.e. the smooth sections of $T^{*}\mathcal{M}$,
are \emph{$1$-forms}. In particular, since for any smooth function $f\in C^{\infty}(\mathcal{M})$
the differential $\mathrm{d}f$ is a smooth covector field, it appears
that $\mathrm{d}$ by itself defines an $\mathbb{R}$-linear mapping
$\mathrm{d}\colon\Omega^{0}(\mathcal{M})\to\Omega^{1}(\mathcal{M})$,
where $\Omega^{k}(\mathcal{M})\coloneqq\Gamma(\Lambda^{k}(T^{*}\mathcal{M}))$.
This encourages to extend the notion of the differential to arbitrary
$k$-forms by the \emph{exterior derivative}, given by an $\mathbb{R}$-linear
mapping $\mathrm{d}_{k}\colon\Omega^{k}(\mathcal{M})\to\Omega^{k+1}(\mathcal{M})$,
that satisfies (i) $\mathrm{d}_{k}$ is an antiderivation for any
$k\in\mathbb{N}_{0}$, (ii) $\mathrm{d}_{k+1}\circ\mathrm{d}_{k}=0$
for any $k\in\mathbb{N}_{0}$ and (iii) $\mathrm{d}_{0}$ is the differential.
In more detail (i) claims, that for any $\alpha\in\Omega^{k}(\mathcal{M})$
and $\beta\in\Omega^{l}(\mathcal{M})$ it follows, that $\mathrm{d}_{k+l}(\alpha\wedge\beta)=\mathrm{d}_{k}\alpha\wedge\beta+(-1)^{k}(\alpha\wedge\mathrm{d}_{l}\beta)$.
This provides the property, that infinitesimal changes of the volume
$\alpha\wedge\beta$ are expressible as the sum of infinitesimal changes
in their orthocomplemented constituent volumes. Then the additional
claim (ii) assures the symmetry of second derivatives and (iii) the
compatibility with the differential. In order to provide a measure
of length however the Pseudo-Riemannian metric $g$ is additionally
required to be positive definite, i.e. such that $g_{P}$ is positive
definite for all $P\in\mathcal{M}$. Then $g$ is termed a \emph{Riemannian
metric} and a statistical manifold $(\mathcal{M},\,g)$ a \emph{Riemannian
statistical manifold}. In this case the Riemannian metric defines
an inner product $\langle\cdot,\,\cdot\rangle_{g}\colon T_{P}\mathcal{M}{}^{2}\to\mathbb{R}$
by $(v,\,w)\mapsto g_{P}(v,\,w)$ and therefore induces a norm $\|\cdot\|_{g}\colon T_{P}\mathcal{M}\to\mathbb{R}$
by $\|v\|_{g}\coloneqq\sqrt{\langle v,\,v\rangle_{g}}$. This allows
the definition of the \emph{length} functional of a piecewise smooth
curve.
\begin{defn*}[Arc length]
\label{def:Length-of-a-curve} \emph{Let $(\mathcal{M},\,g)$ be
a Riemannian statistical manifold and $\gamma\colon[a,\,b]\to\mathcal{M}$
a piecewise smooth curve in $\mathrm{KQ}(\mathcal{M},\,g)$. Then
the arc length of $\gamma$ is given by:
\begin{equation}
L(\gamma)\coloneqq\int_{a}^{b}\|\dot{\gamma}(t)\|_{g}\mathrm{d}t\label{eq:manifold:metric:geodesic}
\end{equation}
}
\end{defn*}
Analogously to the action functional, the length functional may be written
by a Lagrangian, which is given by $\mathcal{L}_{L}(\gamma,\,\dot{\gamma},\,t)\coloneqq\sqrt{g(\dot{\gamma},\,\dot{\gamma})}$,
such that $\mathcal{L}_{L}=\sqrt{2\mathcal{L}}$. Then the Euler-Lagrange
equations for the length functional are equivalent to the Euler-Lagrange
equations for the action functional, such that the stationary solutions
of the length and action functional coincide. This property allows
to equip Riemannian statistical manifolds with a \emph{distance.}
\begin{defn*}[Distance]
\label{def:Distance} \emph{Let $(\mathcal{M},\,g)$ be a Riemannian
statistical manifold, then the distance }$d\colon\mathcal{M}^{2}\to\mathbb{R}$\emph{
of $P,\,Q\in\mathcal{M}$ is defined by:
\begin{equation}
d(P,\,Q)\coloneqq\begin{cases}
\infty & \text{, if \ensuremath{P} and \ensuremath{Q} are not}\\
& \text{ path connected in \ensuremath{\mathcal{M}}}\\
\inf L(\gamma) & \text{, where \ensuremath{\gamma\colon[a,\,b]\to\mathcal{M}} }\\
& \text{ with \ensuremath{\gamma(a)=P,\,\gamma(b)=Q}}
\end{cases}\label{eq:manifold:metric:distance-1}
\end{equation}
}
\end{defn*}
Due to its definition the distance $d$ of a Riemannian statistical
manifold for arbitrary $P,\,Q,\,R\in\mathcal{M}$ satisfies, that:
(i) $d(P,\,P)=0$, (ii) $d(P,\,Q)=d(Q,\,P)$ and (iii) $d(P,\,R)\leq d(P,\,Q)+d(Q,\,R)$.
In order to show, that $(\mathcal{M},\,d)$ is a metric space it therefore
suffices to prove, that $d(P,\,Q)>0$ for $P\ne Q$. Let $(\mathcal{M},\,g)$
and $(\mathcal{N},\,g^{'})$ be Pseudo-Riemannian manifolds and $f\in C^{\infty}(\mathcal{M},\,\mathcal{N})$.
Then $f$ is an isometry, iff $g_{P}(v,\,w)=g_{f(P)}^{'}(\mathrm{d}f_{P}(v),\,\mathrm{d}f_{P}(w))$
for all $P\in\mathcal{M}$ and $v,\,w\in T_{P}\mathcal{M}$.
In statistical manifolds such local scalar products are usually induced
by the derivatives of locally linear divergences.
\begin{defn*}[Statistical divergence]
\label{def:Divergence} \emph{Let
\[
(X,\,\mathcal{P})\in\mathrm{ob}(\mathbf{Stat})
\]
Then a mapping $D(\cdot\parallel\cdot)\colon\mathcal{P}\times\mathcal{P}\to\mathbb{R}^{+}$
is termed a (statistical) divergence over $\mathcal{P}$, iff for
all $P,\,Q\in\mathcal{P}$ it holds, that
\begin{align}
& D(P\parallel Q)\geq0,\,\forall P,\,Q\in\mathcal{P}\\
& D(P\parallel Q)=0\Leftrightarrow P=Q,\,\forall P,\,Q\in\mathcal{P}
\end{align}
}
\end{defn*}
\begin{defn*}[Locally linear divergence]
\label{def:Locally-linear-divergence}\emph{ Let
\[
(X,\,\mathcal{P_{\xi}})\in\mathrm{ob}(\mathbf{StatMan}^{k})
\]
and let $D$ be a statistical divergence over $(X,\,\mathcal{P})$.
Then $D$ is termed locally linear, iff for all $P\in\mathcal{P}$
the linearisation of $D$ at $P$ is given by a positive definite
matrix $G_{\xi}(P)$ , such that:
\begin{equation}
D[P_{\xi}\parallel P_{\xi}+\mathrm{d}P]=\frac{1}{2}\mathrm{d}\boldsymbol{\xi}^{T}G_{\xi}(P_{\xi})\mathrm{d}\boldsymbol{\xi}+O(\|\mathrm{d}\boldsymbol{\xi}\|^{3})\label{eq:divergence_taylor}
\end{equation}
}
\end{defn*}
\begin{example*}[Kullback-Leibler divergence]
\label{exa:Kullback-Leibler-divergence} \emph{Let
\[
(X,\,\mathcal{P})\in\mathrm{ob}(\mathbf{Stat})
\]
Then for $P,\,Q\in\mathcal{P}$ the Kullback-Leibler divergence $D_{KL}[\cdot\parallel\cdot]:\mathcal{P}\times\mathcal{P}\to\mathbb{R}^{+}$
is defined by:
\begin{equation}
D_{\mathrm{KL}}[P\parallel Q]\coloneqq\int_{X}\mathrm{d}_{\mu}P(x)\log\frac{\mathrm{d}_{\mu}P(x)}{\mathrm{d}_{\mu}Q(x)}\mathrm{d}\mu(x)\label{eq:divergence:kl}
\end{equation}
}\textbf{Remark}:\emph{ The Kullback-Leibler divergence measures the
amount of information, which is gained when one revises one's beliefs
from the prior probability distribution $Q$ to the posterior probability
distribution $P$.}
\end{example*}
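As a minimal numeric sketch of the example above, restricted to discrete distributions on a finite sample space (the distributions below are illustrative choices):

```python
import numpy as np

# Kullback-Leibler divergence for discrete distributions with full
# support; the general integral form of the text reduces to this sum
# when mu is the counting measure.
def kl_divergence(P, Q):
    P, Q = np.asarray(P, dtype=float), np.asarray(Q, dtype=float)
    return float(np.sum(P * np.log(P / Q)))

P = np.array([0.5, 0.3, 0.2])
Q = np.array([0.4, 0.4, 0.2])
assert kl_divergence(P, Q) > 0            # divergence property
assert abs(kl_divergence(P, P)) < 1e-15   # zero iff both arguments agree
assert kl_divergence(P, Q) != kl_divergence(Q, P)  # generally asymmetric
```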
Riemannian statistical manifolds \emph{$(X,\,\mathcal{P},\,D)$} are
identified with Riemannian manifolds $(M,\,g)$, where $M$ is given
by $(X,\,\mathcal{P})$ and the Riemannian metric $g$ by the linearisation
of $D$. This identification is well-defined, since $(X,\,\mathcal{P})$
is by definition a smooth manifold and the linearisation of a locally
linear divergence for any $P\in\mathcal{P}$ yields a positive definite
matrix $G(P)$. Let $\xi$ be a differentiable parametrisation of
$(X,\,\mathcal{P})$, then the line element $\mathrm{d}s^{2}$ has
a local representation:
\begin{equation}
\mathrm{d}s_{P}^{2}=2D[P\parallel P+\mathrm{d}P]=\mathrm{d}\boldsymbol{\xi}^{T}G_{\xi}(P_{\xi})\mathrm{d}\boldsymbol{\xi}\label{eq:manifold:metric:idistance}
\end{equation}
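The local representation of the line element can be checked numerically. The following sketch uses the Bernoulli model with parameter $t$ (an illustrative choice), for which the linearisation of the Kullback-Leibler divergence is the Fisher information $G(t)=1/(t(1-t))$:

```python
import numpy as np

# ds^2 = 2 D[P || P + dP]: for the Bernoulli model the quadratic term of
# the KL divergence is given by the Fisher information 1/(t(1-t)).
def kl_bernoulli(t, s):
    return t * np.log(t / s) + (1 - t) * np.log((1 - t) / (1 - s))

t, dt = 0.3, 1e-4
fisher = 1.0 / (t * (1 - t))
ds2 = 2 * kl_bernoulli(t, t + dt)  # line element from the divergence
assert abs(ds2 - fisher * dt**2) < 1e-10
```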
This allows the determination of distances in \emph{$(X,\,\mathcal{P})$}
by the length of continuously differentiable curves.
\begin{lem}
\label{lem:3.6}Let $(X,\,\mathcal{P},\,D)$ be a Riemannian statistical
manifold and $P,\,Q\in\mathcal{P}$. Then the length of continuously
differentiable curves $\gamma_{P,Q}:[a,\,b]\to(X,\,\mathcal{P})$
with $\gamma(a)=P$ and $\gamma(b)=Q$ has a unique infimum $d_{P,Q}$,
such that $d_{P,Q}\geq0$ and $d_{P,Q}=0\Leftrightarrow P=Q$.
\end{lem}
\begin{proof}
Since $D$ is locally linear, it follows that for any curve $\gamma$
from $P$ to $Q$ it holds, that:
\begin{equation}
L(\gamma_{P,Q})\geq D[\gamma(a)\parallel\gamma(b)]\geq0\label{eq:manifold:metric:geodesic_vs_divergence}
\end{equation}
Therefore the length of all continuously differentiable curves from
$P$ to $Q$ has a unique infimum $d_{P,Q}\geq0$. Let $P=Q$, then
the continuously differentiable curves connecting $P$ and $Q$ may
be contracted at $P$ such that $d_{P,Q}=\inf L(\gamma_{P,Q})=\lim\inf_{Q\to P}D[P\parallel Q]=D[P\parallel P]=0$.
Conversely let $P\ne Q$, then $L(\gamma_{P,Q})\geq D[P\parallel Q]>0$.
\end{proof}
Then for $P,\,Q\in\mathcal{P}$ a geodesic from $P$ to $Q$ is given
by a continuous differentiable curve $\gamma_{P,Q}:[a,\,b]\to(X,\,\mathcal{P})$
with $\gamma_{P,Q}(a)=P$ and $\gamma_{P,Q}(b)=Q$, such that $\gamma_{P,Q}$
minimizes the length among all continuously differentiable curves
from $P$ to $Q$. In order to preserve the distance of the
Riemannian structure, the parametrisation of $(X,\,\mathcal{P},\,D)$
is required to preserve the property of a curve being a geodesic within
the parameter space. These preserved geodesics are termed affine geodesics.
If a differentiable parametrisation globally preserves the distances,
then it is given by an isometric embedding of \emph{$(X,\,\mathcal{P},\,D)$}
and termed an affine parametrisation.
\begin{defn*}[Affine parametrisation]
\label{def:Affine-parametrisation}\emph{ Let $(X,\,\mathcal{P},\,D)$
be a Riemannian statistical manifold. Then a parametrisation $\xi$
is termed an affine parametrisation for $(X,\,\mathcal{P},\,D)$,
iff the geodesics in $(X,\,\mathcal{P},\,D)$ are $\xi$-affine geodesics.
}\textbf{Remark}:\emph{ A Riemannian statistical manifold, given
by the notation $(X,\,\mathcal{P}_{\xi},\,D)$, implies an affine
parametrisation $\xi$.}
\end{defn*}
The embedding of a smooth statistical manifold $(X,\,\mathcal{Q})$
within a Riemannian statistical manifold $(X,\,\mathcal{P},\,D)$
naturally induces the Riemannian metric to the submanifold $(X,\,\mathcal{Q})$,
such that also $(X,\,\mathcal{Q},\,D)$ is a Riemannian statistical
manifold. This is of particular importance for the approximation
of high dimensional statistical models by lower dimensional submanifolds.
For this purpose the Riemannian metric is fundamental to obtain a
projection from probability distributions in $\mathcal{P}$ to their
closest approximation in $\mathcal{Q}$. This projection is a geodesic
projection:
\begin{defn*}[Geodesic projection]
\label{def:Geodesic-projection} \emph{Let $(X,\,\mathcal{P},\,D)$
be a Riemannian statistical manifold and $(X,\,\mathcal{Q})$ a smooth
submanifold. Then a mapping $\pi:\mathcal{P}\longrightarrow\mathcal{Q}$
is termed a geodesic projection, iff any point $P\in\mathcal{P}$
is mapped to a point $\pi(P)\in\mathcal{Q}$, that minimizes the distance
$d(P,\,\pi(P))$. }\textbf{Remark:}\emph{ By its definition $d(P,\,\pi(P))<\infty$
iff $P$ and $\pi(P)$ are path-connected. Therefore geodesic projections
are by convention restricted to the common topological components
of $\mathcal{P}$ and $\mathcal{Q}$.}
\end{defn*}
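A minimal numeric sketch of a geodesic projection, assuming the self-dual Euclidean case where geodesics are straight lines and taking the unit circle as the submanifold (both illustrative choices):

```python
import numpy as np

# Geodesic projection in the Euclidean plane: pi maps P to the point of
# the submanifold Q (here: a discretised unit circle) minimising the
# distance d(P, pi(P)).
theta = np.linspace(0.0, 2.0 * np.pi, 100_000, endpoint=False)
Q = np.stack([np.cos(theta), np.sin(theta)], axis=1)

P = np.array([3.0, 4.0])
pi_P = Q[np.argmin(np.linalg.norm(Q - P, axis=1))]  # minimise d(P, pi(P))
# for the circle the projection is the radial one, P / |P|
assert np.allclose(pi_P, P / np.linalg.norm(P), atol=1e-4)
```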
\section{Dually flat Structure}
In Riemannian manifolds the problem to determine geodesic projections
to submanifolds is generally hard to solve, since the distance has
to be minimized over all continuously differentiable curves that connect
points to the submanifold. A particularly convenient geometry however
arises from a flat Riemannian metric, whereat the flatness of the metric
is related to the direction of curves. The claim for a further flat
structure, which is given by the dual Riemannian metric allows a generalization
of the Pythagorean theorem and therefore an explicit calculation rule
for geodesic projections by dual affine linear projections.
\begin{defn*}[Dual Riemannian metric]
\label{def:Dual-Riemannian-metric} \emph{Let $(M,\,g)$ be a Riemannian
manifold, then the Riemannian metric tensor $g$ is given by a family
of positive definite matrices $\{g_{P}\}_{P\in M}$. The metric $g^{*}$,
which is dual to $g$, is given by the family of the inverse Riemannian
metric tensors, such that $g_{P}^{*}=g_{P}^{-1},\,\forall P\in M$.}
\end{defn*}
The dual Riemannian metric $g^{*}$ may be regarded as the Riemannian
metric with a locally inverse direction. In Riemannian statistical
manifolds this definition corresponds to the dual divergence.
\begin{defn*}[Dual divergence]
\label{def:Dual-divergence} \emph{Let
\[
(X,\,\mathcal{P})\in\mathrm{ob}(\mathbf{Stat})
\]
and $D$ be a locally linear divergence over $(X,\,\mathcal{P})$.
Then the dual divergence $D^{*}$ w.r.t. $D$ is given by:
\begin{equation}
D^{*}[P\parallel Q]=D[Q\parallel P],\,\forall P,\,Q\in\mathcal{P}
\end{equation}
}
\end{defn*}
In its most simple case the Riemannian metric $g$, induced by the
divergence $D$ equals the dual Riemannian metric $g^{*}$, induced
by the dual divergence $D^{*}$. In this case the Riemannian metric
and the divergence are termed self-dual.
\begin{defn*}[Self-dual Riemannian metric]
\label{def:Self-dual-Riemannian-metric} \emph{Let $(M,\,g)$ be
a Riemannian manifold, then the Riemannian metric tensor $g$ is termed
self-dual, iff $g^{*}=g$.}
\end{defn*}
\subsection{Dual parametrisation and Legendre transformation}
By considering a differentiable statistical manifold \emph{$(X,\,\mathcal{P}_{\xi})$}
and a real valued differentiable convex function $\psi:\mathrm{dom}\,\xi\rightarrow\mathbb{R}$,
the differentiability of $\psi$ may be used to introduce a further
differentiable parametrisation of $(X,\,\mathcal{P})$ by $\boldsymbol{\xi}_{P}^{*}\coloneqq\nabla_{\xi}\psi(\boldsymbol{\xi}_{P})$.
Furthermore, since $\psi$ is convex with a positive definite Hessian,
the Jacobian determinant of the transformation $\xi\to\xi^{*}$ is positive
for any $\boldsymbol{\xi}_{P}\in\mathrm{dom}\,\xi$ and therefore the
transformation is globally invertible. This defines
a bijective relationship between parameter vectors $\boldsymbol{\xi}_{P}\in\mathrm{dom}\,\xi$
and their respective gradient vectors $\boldsymbol{\xi}_{P}^{*}\in\mathrm{dom}\,\xi^{*}$. Since $\xi$
is an identifiable parametrisation and the transformation $\xi\to\xi^{*}$
is globally invertible, also $\xi^{*}$ is an identifiable parametrisation.
This justifies the following definition:
\begin{defn*}[Dual parametrisation]
\label{def:Dual-parametrisation} \emph{Let $(X,\,\mathcal{P}_{\xi})$
be a differentiable statistical manifold and $\psi:\mathrm{dom}\xi\rightarrow\mathbb{R}$
a sufficiently differentiable convex function. Then the dual parametrisation
for $(X,\,\mathcal{P})$ w.r.t. $\psi$ is given by:
\begin{align}
\boldsymbol{\xi}_{P}^{*} & =\nabla_{\xi}\psi(\boldsymbol{\xi}_{P}),\,\forall P\in\mathcal{P}\label{eq:def:dualparametrisation:1}
\end{align}
}
\end{defn*}
Due to the convexity of $\psi$, also the inverse transformation $\xi^{*}\to\xi$
may be represented by the gradient of a dual function $\psi^{*}:\mathrm{dom}\,\xi^{*}\rightarrow\mathbb{R}$.
This yields a transformation $(\xi,\,\psi)\to(\xi^{*},\,\psi^{*})$,
which is defined by a dualistic relationship between $(\xi,\,\psi)$
and $(\xi^{*},\,\psi^{*})$, such that additional to equation \ref{eq:def:dualparametrisation:1}
also:
\begin{align}
\boldsymbol{\xi}_{P} & =\nabla_{\xi^{*}}\psi^{*}(\boldsymbol{\xi}_{P}^{*}),\,\forall P\in\mathcal{P}\label{eq:def:legendretransformation:1}
\end{align}
This transformation is known as the \emph{Legendre transformation}
and the function $\psi^{*}$ as the \emph{Legendre dual function}
of $\psi$.
\begin{lem}
\label{lem:3.7}Let $(X,\,\mathcal{P}_{\xi})$ be a differentiable
statistical manifold, $\psi:\mathrm{dom}\xi\rightarrow\mathbb{R}$
a differentiable convex function and $(\xi,\,\psi)\to(\xi^{*},\,\psi^{*})$
a Legendre transformation of $(\xi,\,\psi)$, then the Legendre dual
function $\psi^{*}:\mathrm{dom}\,\xi^{*}\rightarrow\mathbb{R}$
is given by:
\begin{equation}
\psi^{*}(\boldsymbol{\xi}_{P}^{*})=\max_{\boldsymbol{\xi}\in\mathrm{dom}\,\xi}\left(\boldsymbol{\xi}\cdot\boldsymbol{\xi}_{P}^{*}-\psi(\boldsymbol{\xi})\right),\,\forall P\in\mathcal{P}\label{eq:dualfunc}
\end{equation}
\end{lem}
\begin{proof}
Since $\psi$ is convex, the maximum in equation \ref{eq:dualfunc} is
attained at $\boldsymbol{\xi}=\boldsymbol{\xi}_{P}$, where the gradient
of $\boldsymbol{\xi}\cdot\boldsymbol{\xi}_{P}^{*}-\psi(\boldsymbol{\xi})$
vanishes, such that $\psi^{*}(\boldsymbol{\xi}_{P}^{*})=\boldsymbol{\xi}_{P}\cdot\boldsymbol{\xi}_{P}^{*}-\psi(\boldsymbol{\xi}_{P})$.
By applying the definition of the dual parametrisation $\xi^{*}$
it therefore only has to be proved, that the function $\psi^{*}$, given by
equation \ref{eq:dualfunc}, satisfies the condition of the Legendre dual
function, given by equation \ref{eq:def:legendretransformation:1}.
Let $P\in\mathcal{P}$, then:
\begin{eqnarray*}
& & \nabla_{\xi^{*}}\psi^{*}(\boldsymbol{\xi}_{P}^{*})\\
& & \stackrel{\ref{eq:dualfunc}}{=}\boldsymbol{\xi}_{P}+\left(\partial_{\xi^{*}}\boldsymbol{\xi}_{P}\right)\cdot\boldsymbol{\xi}_{P}^{*}-\nabla_{\xi}\psi(\boldsymbol{\xi}_{P})\cdot\left(\partial_{\xi^{*}}\boldsymbol{\xi}_{P}\right)\\
& & =\boldsymbol{\xi}_{P}+\left(\partial_{\xi^{*}}\boldsymbol{\xi}_{P}\right)\cdot\nabla_{\xi}\psi(\boldsymbol{\xi}_{P})-\nabla_{\xi}\psi(\boldsymbol{\xi}_{P})\cdot\left(\partial_{\xi^{*}}\boldsymbol{\xi}_{P}\right)\\
& & =\boldsymbol{\xi}_{P}+\left(\nabla_{\xi}\psi(\boldsymbol{\xi}_{P})-\nabla_{\xi}\psi(\boldsymbol{\xi}_{P})\right)\cdot\left(\partial_{\xi^{*}}\boldsymbol{\xi}_{P}\right)=\boldsymbol{\xi}_{P}
\end{eqnarray*}
\end{proof}
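The lemma can be checked numerically. The following sketch uses the illustrative convex function $\psi(\xi)=e^{\xi}$ on the real line, whose Legendre dual is $\psi^{*}(\xi^{*})=\xi^{*}\log\xi^{*}-\xi^{*}$ with $\xi^{*}=e^{\xi}$:

```python
import numpy as np

# Legendre dual function psi*(x*) = max_x (x * x* - psi(x)), evaluated
# on a fine grid and compared with the closed form for psi(x) = exp(x).
def psi(x):
    return np.exp(x)

def psi_dual_closed_form(x_star):
    return x_star * np.log(x_star) - x_star

x_star = 2.0
grid = np.linspace(-5.0, 5.0, 200_001)
psi_dual_numeric = np.max(grid * x_star - psi(grid))
assert abs(psi_dual_numeric - psi_dual_closed_form(x_star)) < 1e-6
```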
\subsection{Bregman divergence}
The dualistic relationship between dual parametrisations, given by
the Legendre transformation shall be extended to Riemannian metrics.
To this end a family of locally linear divergences is introduced,
that generates a dualistic relationship structure:
\begin{defn*}[Bregman divergence]
\label{def:Bregman-divergence} \emph{Let $(X,\,\mathcal{P}_{\xi})$
be a differentiable statistical manifold and $\psi:\mathrm{dom}\xi\rightarrow\mathbb{R}$
a differentiable convex function. Then for $P,\,Q\in\mathcal{P}$
the }\textbf{\emph{Bregman divergence}}\emph{ $D_{\psi}$ w.r.t. the
differentiable parametrisation $\xi$ is given by:
\begin{equation}
D_{\psi}[P\parallel Q]=\psi(\boldsymbol{\xi}_{P})-\psi(\boldsymbol{\xi}_{Q})-\nabla_{\xi}\psi(\boldsymbol{\xi}_{Q})\cdot(\boldsymbol{\xi}_{P}-\boldsymbol{\xi}_{Q})\label{eq:def:Bregman-divergence:1}
\end{equation}
}
\end{defn*}
\begin{lem}
\label{lem:3.8}Let \emph{$D_{\psi}$ }be a Bregman divergence with
regard to a sufficiently differentiable parametrisation $\xi$, then
\emph{$D_{\psi}$} is locally linear and the Riemannian metric, induced
by \emph{$D_{\psi}$,} is given by:
\begin{equation}
g_{P}=\mathrm{\nabla_{\xi}^{2}}\psi(\boldsymbol{\xi}_{P})\label{eq:lem:3.8:1}
\end{equation}
\end{lem}
\begin{proof}
By applying the definition of a differentiable convex function it
follows, that $D_{\psi}$ is locally linear and the linearisation
term of the Taylor expansion yields $G_{\xi}(P_{\xi})=\nabla_{\xi}^{2}\psi(\boldsymbol{\xi}_{P})$.
Since by the definition of a Riemannian statistical manifold $g_{P}=G_{\xi}(P_{\xi})$,
it follows that $g_{P}=\nabla_{\xi}^{2}\psi(\boldsymbol{\xi}_{P})$.
\end{proof}
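As a numeric sketch of the lemma, the following example uses the illustrative convex function $\psi(\boldsymbol{\xi})=\sum_{i}\xi^{i}\log\xi^{i}$ on the positive orthant, whose Hessian is $\mathrm{diag}(1/\xi^{i})$:

```python
import numpy as np

# Bregman divergence D_psi[P || Q] = psi(xi_P) - psi(xi_Q)
#                                    - grad psi(xi_Q) . (xi_P - xi_Q)
# and its local quadratic behaviour, governed by the Hessian of psi.
def psi(xi):
    return float(np.sum(xi * np.log(xi)))

def grad_psi(xi):
    return np.log(xi) + 1.0

def bregman(xi_p, xi_q):
    return psi(xi_p) - psi(xi_q) - grad_psi(xi_q) @ (xi_p - xi_q)

xi_q = np.array([0.5, 1.5, 2.0])
d_xi = 1e-4 * np.array([1.0, -2.0, 1.0])
hessian = np.diag(1.0 / xi_q)            # induced metric g = Hessian of psi
quad = 0.5 * d_xi @ hessian @ d_xi
assert bregman(xi_q + d_xi, xi_q) >= 0   # divergence property
assert abs(bregman(xi_q + d_xi, xi_q) - quad) < 1e-10  # local linearity
```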
\begin{lem}
\label{lem:3.9}Let $D_{\psi}$ be a Bregman divergence, then the
dual divergence $D_{\psi}^{*}$ is given by the Bregman divergence
of the Legendre dual function $\psi^{*}$, such that:
\begin{equation}
D_{\psi}^{*}[P\parallel Q]=D_{\psi^{*}}[P\parallel Q]\label{eq:divergence_dual_divergence}
\end{equation}
\end{lem}
\begin{proof}
Let $G_{\xi}(P_{\xi})$ be the linearisation of $D_{\psi}$ at $P\in\mathcal{P}$,
then:
\begin{eqnarray*}
& & G_{\xi}(P_{\xi})\\
& & =\nabla_{\xi}^{2}\psi(\boldsymbol{\xi}_{P})=\nabla_{\xi}\boldsymbol{\xi}_{P}^{*}\\
& & =(\nabla_{\xi^{*}}\boldsymbol{\xi}_{P})^{-1}=(\nabla_{\xi^{*}}^{2}\psi^{*}(\boldsymbol{\xi}_{P}^{*}))^{-1}\\
& & =G_{\xi^{*}}^{-1}(P_{\xi^{*}})
\end{eqnarray*}
And therefore:
\begin{equation}
G_{\xi}(P_{\xi})=G_{\xi^{*}}^{-1}(P_{\xi^{*}}),\,\forall P\in\mathcal{P}\label{eq:divergence_dual_hessian}
\end{equation}
Since $D_{\psi}$ is locally linear $G_{\xi}(P_{\xi})$ is positive
definite $\forall P\in\mathcal{P}$. From equation \ref{eq:divergence_dual_hessian}
it therefore follows, that also $G_{\xi^{*}}^{-1}(P_{\xi^{*}})$ is
positive definite $\forall P\in\mathcal{P}$ and since the inverse
matrix of a positive definite matrix is also positive definite it
follows, that $G_{\xi^{*}}(P_{\xi^{*}})$ is positive definite $\forall P\in\mathcal{P}$.
Furthermore by the definition of the Legendre transformation $G_{\xi^{*}}(P_{\xi^{*}})$
is the Hessian matrix of $\psi^{*}(\boldsymbol{\xi}_{P}^{*})$ and
therefore $\psi^{*}$ is a convex function of $\boldsymbol{\xi}_{P}^{*}\in\mathrm{dom}\xi^{*}$.
Therefore $\psi^{*}$ satisfies the requirement for the definition
of a Bregman divergence. Let $P,\,Q\in\mathcal{P}$, then:
\begin{eqnarray*}
& & D_{\psi^{*}}[P\parallel Q]\\
& & \stackrel{\ref{eq:def:Bregman-divergence:1}}{=}\psi^{*}(\boldsymbol{\xi}_{P}^{*})-\psi^{*}(\boldsymbol{\xi}_{Q}^{*})-\nabla_{\xi^{*}}\psi^{*}(\boldsymbol{\xi}_{Q}^{*})\cdot(\boldsymbol{\xi}_{P}^{*}-\boldsymbol{\xi}_{Q}^{*})\\
& & \stackrel{}{=}\psi(\boldsymbol{\xi}_{Q})-\psi(\boldsymbol{\xi}_{P})-\nabla_{\xi}\psi(\boldsymbol{\xi}_{P})\cdot(\boldsymbol{\xi}_{Q}-\boldsymbol{\xi}_{P})\\
& & \stackrel{\mathrm{def}}{=}D_{\psi}[Q\parallel P]\\
& & \stackrel{\mathrm{def}}{=}D_{\psi}^{*}[P\parallel Q]
\end{eqnarray*}
\end{proof}
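The lemma can be verified numerically for the illustrative quadratic $\psi(\boldsymbol{\xi})=\frac{1}{2}\boldsymbol{\xi}^{T}A\boldsymbol{\xi}$, whose dual parametrisation is $\boldsymbol{\xi}^{*}=A\boldsymbol{\xi}$ and whose Legendre dual is $\psi^{*}(\boldsymbol{\xi}^{*})=\frac{1}{2}\boldsymbol{\xi}^{*T}A^{-1}\boldsymbol{\xi}^{*}$:

```python
import numpy as np

# Check of D_psi*[P || Q] = D_psi[Q || P] for a quadratic generator.
A = np.array([[3.0, 1.0], [1.0, 2.0]])   # symmetric positive definite
A_inv = np.linalg.inv(A)

def bregman(f, grad_f, a, b):
    # D_f[a || b] = f(a) - f(b) - grad f(b) . (a - b)
    return f(a) - f(b) - grad_f(b) @ (a - b)

psi = lambda x: 0.5 * x @ A @ x
grad_psi = lambda x: A @ x
psi_star = lambda y: 0.5 * y @ A_inv @ y
grad_psi_star = lambda y: A_inv @ y

xi_p = np.array([1.0, -0.5])
xi_q = np.array([0.2, 0.7])
lhs = bregman(psi_star, grad_psi_star, A @ xi_p, A @ xi_q)  # D_psi*[P || Q]
rhs = bregman(psi, grad_psi, xi_q, xi_p)                    # D_psi[Q || P]
assert abs(lhs - rhs) < 1e-12
```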
\begin{prop}
\label{prop:3.3}Let $(X,\,\mathcal{P}_{\xi},\,D_{\psi})$ be a Riemannian
statistical manifold with a Bregman divergence $D_{\psi}$. Then the
dual Riemannian metric $g^{*}$ is induced by the Bregman divergence
$D_{\psi^{*}}$ of the Legendre dual function $\psi^{*}$.
\end{prop}
\begin{proof}
By applying the definition of the dual Riemannian metric for $P\in\mathcal{P}$
it follows, that:
\[
g_{P}^{*}\stackrel{\mathrm{def}}{=}g_{P}^{-1}\stackrel{\mathrm{def}}{=}G_{\xi}^{-1}(P_{\xi})\stackrel{\ref{eq:divergence_dual_hessian}}{=}G_{\xi^{*}}(P_{\xi^{*}})\stackrel{\ref{eq:lem:3.8:1}}{=}\nabla_{\xi^{*}}^{2}\psi^{*}(\boldsymbol{\xi}_{P}^{*})
\]
This is the linearisation of the Bregman divergence $D_{\psi^{*}}$.
\end{proof}
\subsection{Dually flat statistical manifolds}
\begin{defn*}[Dually flat manifold]
\label{def:Dually-flat-manifold} \emph{Let $(M,\,g)$ be a Riemannian
manifold. Then $(M,\,g)$ is termed a dually flat (Riemannian) manifold,
iff:}
\begin{enumerate}
\item[(1)] \emph{ $g$ is a flat Riemannian metric of $M$}
\item[(2)] \emph{$g^{*}$ is a flat Riemannian metric of $M$}
\end{enumerate}
\end{defn*}
\begin{example*}[Euclidean space]
\label{exa:Euclidean-space} An example for a dually flat manifold
is given by Euclidean spaces. Let $\xi$ be the Cartesian coordinates
of an Euclidean space $E$, then $\xi$ is an affine parametrisation
of $E$ and for $P,\,Q\in E$ the Euclidean metric of $E$ is induced
by the Euclidean divergence $D[P\parallel Q]=\frac{1}{2}|\boldsymbol{\xi}_{P}-\boldsymbol{\xi}_{Q}|^{2}$.
In this case the divergence is symmetric and therefore the Euclidean
metric is self-dual. Since $E$ is flat with regard to the Euclidean
metric, $E$ is also flat with regard to the dual Euclidean metric
and therefore $E$ is a dually flat manifold.
\end{example*}
\begin{lem}
\label{lem:3.10}Let $(X,\,\mathcal{P}_{\xi},\,D_{\psi})$ be a Riemannian
statistical manifold with a Bregman divergence $D_{\psi}$. Then $(X,\,\mathcal{P_{\xi}},\,D_{\psi})$
is a dually flat statistical manifold, iff:
\begin{enumerate}
\item[(1)] The $\xi$-affine geodesics are flat with regard to the Riemannian
metric, induced by $D_{\psi}$
\item[(2)] The $\xi^{*}$-affine geodesics are flat with regard to the
Riemannian metric, induced by $D_{\psi^{*}}$
\end{enumerate}
\end{lem}
\begin{proof}
Since the Legendre transformation generally does not preserve the
Riemannian metric, the flatness of $(X,\,\mathcal{P},\,D_{\psi})$
and $(X,\,\mathcal{P},\,D_{\psi^{*}})$ are indeed independent properties.
Let the $\xi$-affine geodesics be flat with regard to the Riemannian
metric $g$, induced by $D_{\psi}$, then also $(X,\,\mathcal{P})$
is flat w.r.t. $g$. Let further the $\xi^{*}$-affine geodesics be
flat with regard to the Riemannian metric $\tilde{g}$, induced by
$D_{\psi^{*}}$, then by \prettyref{prop:3.3} it follows, that $\tilde{g}=g^{*}$
and therefore $g^{*}$ is a flat Riemannian metric of $(X,\,\mathcal{P})$.
Conversely let $(X,\,\mathcal{P_{\xi}},\,D_{\psi})$ be a dually flat
statistical manifold with a Bregman divergence $D_{\psi}$. Then by
convention $\xi$ is an affine parametrisation of $(X,\,\mathcal{P},\,D_{\psi})$
and the geodesics in $(X,\,\mathcal{P},\,D_{\psi})$ are $\xi$-affine
geodesics and flat with regard to the Riemannian metric, induced by
$D_{\psi}$. Furthermore the dual Riemannian metric $g^{*}$ induced
by $D_{\psi}^{*}$ is a flat Riemannian metric of $(X,\,\mathcal{P})$
and since $D_{\psi}$ is a Bregman divergence it follows that $D_{\psi}^{*}=D_{\psi^{*}}$.
Then the dual parametrisation $\xi^{*}$ is an affine parametrisation
of $(X,\,\mathcal{P},\,D_{\psi}^{*})$ and the geodesics in $(X,\,\mathcal{P},\,D_{\psi})$
are $\xi^{*}$-affine geodesics and flat with regard to the Riemannian
metric, induced by $D_{\psi^{*}}$ .
\end{proof}
\begin{defn*}[Dual geodesic projection]
\label{def:Dual-geodesic-projection} \emph{Let $(X,\,\mathcal{P},\,D)$
be a Riemannian statistical manifold with a smooth submanifold $(X,\,\mathcal{Q})$.
Then a mapping $\pi^{*}:\mathcal{P}\longrightarrow\mathcal{Q}$ is
termed a }\textbf{\emph{dual geodesic projection}}\emph{, iff any
point $P\in\mathcal{P}$ is mapped to a point $\pi^{*}(P)\in\mathcal{Q}$,
that minimizes the distance $d(P,\,\pi^{*}(P))$ w.r.t. the dual Riemannian
metric, which is induced by $D^{*}$.}
\end{defn*}
In the case of a dually flat statistical manifold, the dual affine
structure induces a correspondence relationship between the Riemannian
metrics, induced by $D$ and $D^{*}$.
\begin{lem}
\label{lem:3.11}Let $(X,\,\mathcal{P}_{\xi},\,D_{\psi})$ be a Riemannian
statistical manifold with a Bregman divergence $D_{\psi}$. Then $D_{\psi}$
has a mixed representation in the parametrisations $\xi$ and $\xi^{*}$,
which is given by:
\begin{equation}
D_{\psi}[P\parallel Q]=\psi(\boldsymbol{\xi}_{P})+\psi^{*}(\boldsymbol{\xi}_{Q}^{*})-\boldsymbol{\xi}_{P}\cdot\boldsymbol{\xi}_{Q}^{*}\label{eq:divergence_bregman_mixed}
\end{equation}
\end{lem}
\begin{proof}
By applying the definition of the dual divergence and \prettyref{lem:3.9}
it follows, that:
\[
D_{\psi}[P\parallel Q]\stackrel{\ref{eq:divergence_dual_divergence}}{=}D_{\psi^{*}}[Q\parallel P]
\]
The right side of the equation is calculated by the definition of
the Bregman divergence and the Legendre dual function, such that:
\begin{eqnarray*}
& & D_{\psi^{*}}[Q\parallel P]\\
& & \stackrel{\ref{eq:def:Bregman-divergence:1}}{=}\psi^{*}(\boldsymbol{\xi}_{Q}^{*})-\psi^{*}(\boldsymbol{\xi}_{P}^{*})-\nabla\psi^{*}(\boldsymbol{\xi}_{P}^{*})(\boldsymbol{\xi}_{P}^{*}-\boldsymbol{\xi}_{Q}^{*})\\
& & \stackrel{}{=}\psi(\boldsymbol{\xi}_{P})+\psi^{*}(\boldsymbol{\xi}_{Q}^{*})-\boldsymbol{\xi}_{P}\cdot\boldsymbol{\xi}_{Q}^{*}
\end{eqnarray*}
\end{proof}
\begin{thm}[\emph{Amari Pythagorean Theorem}]
\emph{\label{thm:Pythagorean-theorem}} Let $(X,\,\mathcal{P_{\xi}},\,D_{\psi})$
be a dually flat statistical manifold, which is given by a Bregman
divergence $D_{\psi}$ and let $P,\,Q,\,R\in\mathcal{P}$ form an orthogonal
triangle in the sense, that the $\xi$-affine geodesic $\gamma_{P,Q}$
from $P$ to $Q$ is orthogonal to the $\xi^{*}$-affine geodesic $\gamma_{Q,R}^{*}$
from $Q$ to $R$, then:
\begin{equation}
D_{\psi}[P\parallel R]=D_{\psi}[P\parallel Q]+D_{\psi}[Q\parallel R]\label{eq:thm:Pythagorean_theorem:1}
\end{equation}
\begin{figure}[h]
\begin{centering}
\def\svgwidth{\columnwidth}
\input{figures/pythagoras.pdf_tex}
\par\end{centering}
\caption{\label{fig:thm:Pythagorean-theorem}Pythagorean theorem for dually
flat manifolds}
\end{figure}
\end{thm}
\begin{proof}
The $\xi$-affine geodesic $\gamma_{P,Q}:[0,\,1]\to(X,\,\mathcal{P})$,
with $\gamma_{P,Q}(0)=P$ and $\gamma_{P,Q}(1)=Q$ is parametrized
by:
\[
\boldsymbol{\xi}_{P,Q}(t)=t\boldsymbol{\xi}_{Q}+(1-t)\boldsymbol{\xi}_{P},\,t\in[0,\,1]
\]
and the $\xi^{*}$-affine geodesic $\gamma_{Q,R}^{*}:[0,\,1]\to(X,\,\mathcal{P})$,
with $\gamma_{Q,R}^{*}(0)=Q$ and $\gamma_{Q,R}^{*}(1)=R$ by:
\[
\boldsymbol{\xi}_{Q,R}^{*}(t)=t\boldsymbol{\xi}_{R}^{*}+(1-t)\boldsymbol{\xi}_{Q}^{*},\,t\in[0,\,1]
\]
Let $\langle\cdot,\,\cdot\rangle_{g}$ denote the local scalar product,
which is induced by the Bregman divergence $D_{\psi}$. Since $\mathrm{d}\boldsymbol{\xi}^{*}=G_{\xi}(P_{\xi})\,\mathrm{d}\boldsymbol{\xi}$,
the scalar product of a $\xi$-tangent vector and a $\xi^{*}$-tangent
vector is given by the Euclidean dot product of their component vectors,
such that at the point $Q$:
\begin{eqnarray*}
 &  & \langle\frac{\mathrm{d}}{\mathrm{d}t}\gamma_{P,Q}(t)|_{t=1},\,\frac{\mathrm{d}}{\mathrm{d}t}\gamma_{Q,R}^{*}(t)|_{t=0}\rangle_{g}\\
 &  & \stackrel{}{=}(\boldsymbol{\xi}_{Q}-\boldsymbol{\xi}_{P})\cdot(\boldsymbol{\xi}_{R}^{*}-\boldsymbol{\xi}_{Q}^{*})\\
 &  & \stackrel{\ref{eq:divergence_bregman_mixed}}{=}\boldsymbol{\xi}_{Q}\cdot\boldsymbol{\xi}_{R}^{*}-\boldsymbol{\xi}_{P}\cdot\boldsymbol{\xi}_{R}^{*}+\boldsymbol{\xi}_{P}\cdot\boldsymbol{\xi}_{Q}^{*}-\psi(\boldsymbol{\xi}_{Q})-\psi^{*}(\boldsymbol{\xi}_{Q}^{*})\\
 &  & \stackrel{\ref{eq:divergence_bregman_mixed}}{=}D_{\psi}[P\parallel R]-D_{\psi}[P\parallel Q]-D_{\psi}[Q\parallel R]
\end{eqnarray*}
Since $\gamma_{P,Q}$ and $\gamma_{Q,R}^{*}$ are required to be orthogonal
in the point $Q$, the left side of the equation equals zero and therefore
it follows, that:
\[
D_{\psi}[P\parallel R]-D_{\psi}[P\parallel Q]-D_{\psi}[Q\parallel R]=0
\]
\end{proof}
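The theorem can also be illustrated numerically. In the sketch below (with the toy potential $\psi(\theta)=\sum_{i}e^{\theta_{i}}$ and illustrative numerical values, both assumptions of this example), the leg from $Q$ to $P$ is a straight line in primal coordinates with direction $u$, the leg from $Q$ to $R$ is a straight line in dual coordinates with direction $w$, and $u\cdot w=0$ implements the orthogonality at $Q$:

```python
import numpy as np

psi = lambda th: np.sum(np.exp(th))   # toy convex potential
grad_psi = lambda th: np.exp(th)      # maps primal to dual coordinates

def D(th_a, th_b):
    # Bregman divergence D_psi[A||B], gradient taken at the second point
    return psi(th_a) - psi(th_b) - grad_psi(th_b) @ (th_a - th_b)

th_q = np.array([0.2, -0.1])
u = np.array([1.0, 1.0])    # primal direction of the leg Q -> P
w = np.array([0.3, -0.3])   # dual direction of the leg Q -> R; u.w = 0

th_p = th_q + u                   # P shifted along a primal geodesic
eta_r = grad_psi(th_q) + w        # R shifted along a dual geodesic
th_r = np.log(eta_r)              # back to primal coordinates

lhs = D(th_p, th_r)
rhs = D(th_p, th_q) + D(th_q, th_r)
assert np.isclose(lhs, rhs)
```

Replacing $w$ by a direction with $u\cdot w\neq0$ breaks the equality, as the proof predicts.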
Due to the generic asymmetry of Bregman divergences the generalized
Pythagorean theorem has a dual counterpart, which mutatis mutandis
is given by:
\begin{equation}
D_{\psi}^{*}[P\parallel R]=D_{\psi}^{*}[P\parallel Q]+D_{\psi}^{*}[Q\parallel R]\label{eq:dual_flat_pythagoras_dual}
\end{equation}
If $\psi$ is chosen such that $D_{\psi}$ is symmetric, the Riemannian
metric induced by $D_{\psi^{*}}$ is identical to that of $D_{\psi}$,
since:
\[
D_{\psi}[P\parallel Q]=D_{\psi}[Q\parallel P]=D_{\psi^{*}}[P\parallel Q]
\]
In this case the generalized Pythagorean theorem and its dual counterpart
are equivalent, and the induced Riemannian metric is self-dual.
\begin{defn*}[Affine projection]
\label{def:affine_projection} \emph{Let $(X,\,\mathcal{P_{\xi}},\,D)$
be a Riemannian statistical manifold and $(X,\,\mathcal{Q})$ a smooth
submanifold. Then a projection $\pi_{\xi}^{\perp}:\mathcal{P}\to\mathcal{Q}$
is termed a }\textbf{\emph{$\xi$-affine projection}}\emph{ from
}$(X,\,\mathcal{P},\,D)$\emph{ to $(X,\,\mathcal{Q},\,D)$, iff for
any $P\in\mathcal{P}$ the $\xi$-affine geodesic from $P$ to $\pi_{\xi}^{\perp}(P)$
is orthogonal to $\mathcal{Q}.$}
\end{defn*}
\begin{lem}
\label{lem:3.12}Let $(X,\,\mathcal{P}_{\xi},\,D_{\psi})$ be a dually
flat statistical manifold with a smooth submanifold $(X,\,\mathcal{Q})$.
Then there exists a \emph{$\xi$-}affine projection as well as a
\emph{$\xi^{*}$-}affine projection from $(X,\,\mathcal{P},\,D)$\emph{
to $(X,\,\mathcal{Q})$}.
\end{lem}
\begin{proof}
Since $(X,\,\mathcal{P}_{\xi},\,D_{\psi})$ is a Riemannian statistical
manifold, by convention $\xi$ is an affine parametrisation, and therefore
by definition any $P,\,Q\in\mathcal{P}$ are connected by a \emph{$\xi$-}affine
geodesic $\gamma_{P,Q}:[0,\,1]\to(X,\,\mathcal{P})$ with $\gamma_{P,Q}(0)=P$
and $\gamma_{P,Q}(1)=Q$. Assume that for a given $P\in\mathcal{P}$
there is no $Q\in\mathcal{Q}$ such that $\gamma_{P,Q}\bot\mathcal{Q}$.
Then, due to the \emph{mean value theorem}, $\mathcal{Q}$ is not differentiable
with regard to the affine parametrisation $\xi$, and since $\xi$
is a homeomorphism, $\mathcal{Q}$ is also not differentiable in $\mathcal{P}$.
This contradicts the assumption that $\mathcal{Q}$ is a smooth submanifold,
so there exists a $Q\in\mathcal{Q}$ with $\gamma_{P,Q}\bot\mathcal{Q}$.
The argument holds for any $P\in\mathcal{P}$ and therefore proves
the existence of a \emph{$\xi$-}affine projection. Since $(X,\,\mathcal{P}_{\xi},\,D_{\psi})$
is dually flat, $\xi^{*}$ is an affine parametrisation as well, and
the argument given for the \emph{$\xi$-}affine projection mutatis
mutandis proves the existence of a \emph{$\xi^{*}$-}affine projection.
\end{proof}
\begin{thm}[\emph{Amari Projection theorem}]
\label{thm:Projection-theorem} Let $(X,\,\mathcal{P}_{\xi},\,D_{\psi})$
be a dually flat statistical manifold and $(X,\,\mathcal{Q})$ a smooth
submanifold. Then the geodesic projection $\pi:\mathcal{P}\to\mathcal{Q}$
is a \emph{$\xi^{*}$}-affine projection and the dual geodesic projection
$\pi^{*}:\mathcal{P}\to\mathcal{Q}$ is a $\xi$-affine projection.
\begin{figure}[h]
\begin{centering}
\def\svgwidth{\columnwidth}
\input{figures/projection.pdf_tex}
\par\end{centering}
\caption{\label{fig:thm:Projection-theorem}Projection theorem for dually flat
manifolds}
\end{figure}
\end{thm}
\begin{proof}
Let $P\in\mathcal{P}$. Due to \prettyref{lem:3.12} a dual affine
projection $\pi_{\xi^{*}}^{\perp}$ may be chosen, such that the dual
affine curve from $P$ to $\pi_{\xi^{*}}^{\perp}(P)$ is orthogonal
to $\mathcal{Q}$. Let $Q=\pi_{\xi^{*}}^{\perp}(P)$. Then for any
sufficiently close $R=Q+\mathrm{d}Q\in\mathcal{Q}$ with
$\boldsymbol{\xi}_{R}=\boldsymbol{\xi}_{Q}+\mathrm{d}\boldsymbol{\xi}$
and $\mathrm{d}\boldsymbol{\xi}\ne0$, the triangle $P,\,Q,\,R\in\mathcal{P}$
is orthogonal at $Q$, and \prettyref{thm:Pythagorean-theorem}
gives the relation $D[P\parallel R]>D[P\parallel Q]$. This shows
that $Q$ is a local minimum of the divergence $D[P\parallel\cdot]$
on $\mathcal{Q}$. Conversely, since $\mathcal{Q}$ is a smooth submanifold,
the mean value theorem shows that for any critical point $Q\in\mathcal{Q}$
of the divergence $D[P\parallel\cdot]$ a dual affine projection from $P$
to $Q$ exists, in particular for the point $\hat{Q}\in\mathcal{Q}$
that minimizes the divergence. From equation \ref{eq:manifold:metric:geodesic_vs_divergence}
we obtain for the distance that $D[P\parallel\hat{Q}]\leq d(P,\,\hat{Q})$.
Furthermore, by definition $d(P,\,\hat{Q})$ is the minimal length
of a curve from $P$ to $\hat{Q}$; but since there exists a dual
affine projection from $P$ to $\hat{Q}$, whose geodesic has the length
$D[P\parallel\hat{Q}]$, it follows that $d(P,\,\hat{Q})=D[P\parallel\hat{Q}]$,
and therefore the geodesic projection is a dual affine projection. By
applying the dual version of \prettyref{thm:Pythagorean-theorem}, this
argument mutatis mutandis also holds for the dual geodesic projection
w.r.t. the affine projection.
\end{proof}
\begin{cor}
\label{cor:3.3}Let $(X,\,\mathcal{P}_{\xi},\,D_{\psi})$ be a dually
flat statistical manifold and let $(X,\,\mathcal{Q})$ and $(X,\,\mathcal{S})$
be smooth submanifolds. Let furthermore $(X,\,\mathcal{Q})$ be flat w.r.t.
$D_{\psi^{*}}$ and $(X,\,\mathcal{S})$ be flat w.r.t. $D_{\psi}$.
Then the geodesic projection $\pi:\mathcal{P}\to\mathcal{Q}$ is uniquely
given by a \emph{$\xi^{*}$}-affine projection and the dual geodesic
projection $\pi^{*}:\mathcal{P}\to\mathcal{S}$ is uniquely given
by a \emph{$\xi$}-affine projection.
\begin{figure}[h]
\begin{centering}
\def\svgwidth{\columnwidth}
\input{figures/projection_corollary.pdf_tex}
\par\end{centering}
\caption{\label{fig:cor:3.2}Unique projections in dually flat manifolds}
\end{figure}
\end{cor}
\begin{proof}
By virtue of \prettyref{thm:Projection-theorem} it suffices to prove
the uniqueness of the affine projection and the dual affine projection.
Let $P\in\mathcal{P}$. Then \prettyref{lem:3.12} asserts the existence
of a dual affine projection of $P$ to a point $\pi(P)=\hat{Q}\in\mathcal{Q}$,
and since $(X,\,\mathcal{Q})$ is flat it follows that $\mathcal{Q}\subseteq T_{\hat{Q}}\mathcal{Q}$,
such that for any $R\in\mathcal{Q}$ \prettyref{thm:Pythagorean-theorem}
shows that:
\[
D[P\parallel R]=D[P\parallel\hat{Q}]+D[\hat{Q}\parallel R]\geq D[P\parallel\hat{Q}]
\]
Therefore $\hat{Q}$ is the global minimum and $\pi(P)$ is unique. By
applying the dual version of \prettyref{thm:Pythagorean-theorem}
to the submanifold $(X,\,\mathcal{S})$, the argument mutatis mutandis
also proves that $\pi^{*}(P)=\hat{S}\in\mathcal{S}$ is unique.
\end{proof}
\bibliographystyle{unsrt}
\label{sec_intro}
Coherent spin transport across a device is a central goal
of spintronics.\cite{Stotz2005,Awschalom2007} In this context the enhancement of the spin lifetime
is a critical issue, and recent experiments have demonstrated the effectiveness
of using Surface Acoustic Waves (SAWs) for this purpose.
\cite{Sogawa2001,Stotz2005,Couto2007,Couto2008,Sanada2011}
In such experiments the spin density in semiconducting quantum wells is optically generated by laser beams
and transported by a SAW over distances of several tens of micrometers.
The current understanding\cite{Stotz2005,Couto2008} is that these long distances are possible
due to the suppression of both Bir-Aronov-Pikus\cite{Bir1975}
and Dyakonov-Perel'\cite{Dyakonov1972} spin-relaxation mechanisms:
the piezoelectric SAW potential spatially separates electrons and holes,
thus inhibiting Bir-Aronov-Pikus relaxation, and at the same time
confines them to narrow (moving) wires/dots, which causes motional narrowing and thus a suppression
of Dyakonov-Perel' relaxation. However, motional narrowing in a 2-Dimensional Electron Gas (2DEG)
ceases to be relevant for strong \textit{static} confinements, when spin-dependent scattering at the boundaries
takes over, as recently observed\cite{Holleitner2006} and theoretically explained.\cite{Schwab2006}
In this work we address the question of \textit{dynamic} confinement.
In particular, we will investigate how intrinsic (Dyakonov-Perel')
spin relaxation mechanisms affect the spin dynamics of pockets of photoexcited electrons
driven by SAWs.
We will also briefly comment on the role of extrinsic (Elliot-Yafet) spin relaxation.\cite{Raimondi2009}
Spin relaxation due to the hyperfine interaction between the carriers and the background nuclei
may be an important issue in strongly confined, static geometries \cite{Merkulov2002,Braun2005},
but was recently shown\cite{Echeverria2013} to be irrelevant for a pocket of mobile electrons
carried by a SAW, and hence will not be considered here.
We will start in Secs.~\ref{sec_model} and \ref{sec_diff} by defining the model
and introducing the diffusive limit, respectively. In Sec.~\ref{sec_charge} charge dynamics will be discussed,
and in Sec.~\ref{sec_spin} the central issue of spin dynamics. For the sake of clarity, the latter will be studied by specializing to a specific geometry, and by retaining only the dominant spin-orbit interactions.
In Sec.~\ref{sec_superplus} we will comment on different geometries and additional spin-orbit terms. A short summary is given in Sec.~\ref{sec_conclusions}.
\section{The model}
\label{sec_model}
We consider an electron gas in the $x$-$y$--plane described by the Hamiltonian
\begin{equation}
H = \frac{p^2}{2m}+H_{\rm so}+V({\bf r}).
\end{equation}
Here $m$ is the effective mass, $H_{\rm so}$ describes intrinsic spin-orbit coupling,
and $V({\bf r})$ is the random impurity potential. For the latter, we assume the standard ``white noise''
disorder, i.e., we assume that the average of the potential is zero, and its correlations are given by
\begin{eqnarray}
\langle V({\bf r})V({\bf r}')\rangle = (2\pi N_0\tau)^{-1}\delta({\bf r}-{\bf r}').
\end{eqnarray}
Here $N_0 = m/(2\pi)$ is the density of states at the Fermi energy per spin, $\tau$ is the elastic
momentum scattering time, and we have chosen $\hbar=1$.
For $H_{\rm so}$ we consider general linear-in-momentum couplings,
which arise in 2DEGs because of broken structural (Rashba\cite{Bychkov1984}) or bulk
(Dresselhaus\cite{Dresselhaus1955}) inversion symmetry, or of strain;\cite{Bernevig2005}
linear couplings are dominant with respect to cubic ones in a wide range of parameters.\cite{Studer2010, Walser2012}
Linear-in-momentum couplings can be written in terms of a non-Abelian vector potential
$\boldsymbol{\mathcal A}$,\cite{Mathur1992,Frohlich1993,Tokatly2008,Gorini2010}
which for spin $1/2$ carriers becomes a $SU(2)$ field with three components
in the Pauli matrices basis ($a=x,y,z$), and two components in real space ($i=x,y$):
\begin{equation}
\label{su2}
H_{\rm so} = p_i{\mathcal A}_i^a\sigma^a/2m.
\end{equation}
Unless otherwise specified, upper (lower) indices will refer to spin (real space) components
throughout.
Our treatment is based on the general approach described in Refs.~\onlinecite{Gorini2010,Raimondi2012}.
However, for definiteness we will start by considering quantum wells grown
in the $\hat{\bf z}\parallel [001]$ direction.
With the in-plane basis vectors $\hat{\bf x}\parallel [100]$ and $\hat{\bf y}\parallel [010]$
the linear Rashba and Dresselhaus spin-orbit Hamiltonians read
\begin{eqnarray}
\label{eq_hR}
H^R_{\rm so} &=& \alpha (p_y\sigma^x-p_x\sigma^y),
\\
\label{eq_hD}
H^D_{\rm so} &=& \beta (p_y\sigma^y-p_x\sigma^x),
\end{eqnarray}
with $\alpha, \beta$ the respective coupling constants.
These spin-orbit terms can be rewritten according to \eqref{su2} with the following $SU(2)$ potentials:
\begin{eqnarray}
\label{eq_AR}
&({\mathcal A}_R)^x_y = - ({\mathcal A}_R)^y_x = 2m\alpha ,&
\\
\label{eq_AD}
&({\mathcal A}_D)^y_y = - ({\mathcal A}_D)^x_x = 2m\beta&,
\end{eqnarray}
all other components being zero.
The spin-orbit interaction depends on the electron direction of motion;
thus, in order to examine the effect of a SAW, we will consider the latter
to be propagating either in the $[110]$ or in the $[\bar{1}10]$ direction.
In both cases the driving field can be written as
\begin{equation}
\label{drivingSAW}
{\bf E}({\bf r},t)=E\,\hat{\mathbf{k}}\cos\left(\mathbf{k}\cdot\mathbf{r} -\omega t\right),
\end{equation}
where $\omega=v \vert\mathbf{k}\vert$; $v$ is the sound velocity in the medium, and $ \hat{\mathbf k} $
is the unit vector pointing in the SAW propagation direction.
The SAW is generated in a piezoelectric material by applying a time-modulated voltage
to interdigital transducers in contact with it, and the in-plane field \eqref{drivingSAW}
is accompanied by a component in the $z$ direction and by strain.\cite{Mamishev2004,Morgan2007}
The latter are both sources of additional non-homogeneous and time dependent spin-orbit terms
in the Hamiltonian.\cite{Sanada2011} We will at first neglect these complications, and start by taking into account
only the driving SAW field \eqref{drivingSAW}.
\section{Diffusive limit}
\label{sec_diff}
Within the $SU(2)$ ``color'' approach,\cite{Tokatly2008,Gorini2010,Raimondi2012}
the charge and spin dynamics can be described by the $SU(2)$-covariant continuity equation
\begin{equation}
\label{eq_cont}
\frac{\partial \rho}{\partial t}+\tilde{\nabla}\cdot{\bf j} =0,
\end{equation}
with the density and current given by
\begin{equation}
\rho=\rho^{0}+s^{a}\sigma^{a},\quad {\bf j}={\bf j}^{0}+{\bf j}^{a}\sigma^{a}.
\end{equation}
Here $ \rho^0 $ and $s^a $ are, respectively, the charge and spin ($a$-th component) density.
The covariant derivative
\begin{equation}
\tilde{\nabla} =\nabla + \mathrm{i}\left[\boldsymbol{\mathcal A},\,\cdot\,\right],
\end{equation}
where
\begin{equation}
\boldsymbol{\mathcal A}= \left(\boldsymbol{\mathcal A}^x \sigma^x +\boldsymbol{\mathcal A}^y \sigma^y+\boldsymbol{\mathcal A}^z \sigma^z\right)/2
\end{equation}
is defined according to Ref.~\onlinecite{Gorini2010}, consists of two
terms: the spatial derivative $\nabla$ and the commutator with the vector potential,
which describes spin precession around the spin-orbit field. In this work, we consider the diffusive regime, i.e., we assume that the mean free path, $ l= v_F \tau $, is much smaller than the wavelength of the SAW, $ 2\pi/k $.
In this limit, the electric field $ \textbf{E} $ of the SAW enters the charge-spin current as follows:\cite{Gorini2010}
\begin{equation}
\label{eq_current}
{\bf j}=-D \tilde{\nabla}\rho +\mu \textbf{E}\rho,
\end{equation}
where $ D $ is the diffusion constant, and $ \mu $ the mobility.
This simple structure is due to the fact that we are dealing with linear-in-momentum spin-orbit interactions.
Substituting \eqref{eq_current} into the continuity equation \eqref{eq_cont} leads
to a drift-diffusion equation for the charge density $\rho^0$, and to Bloch-type equations for the spin densities $s^a$.
\section{Charge dynamics}
\label{sec_charge}
As discussed above, the drift-diffusion equation
for the charge carriers in the diffusive limit has the well known form:
\begin{equation}
\frac{\partial \rho^{0}}{\partial t}+\mu\nabla\cdot(\textbf{E}\rho^{0}) -D\nabla^{2}\rho^{0}=0.
\label{dyn}
\end{equation}
In the following we assume the $ x $ axis to be parallel
to the SAW propagation direction. Since there is no drift of the carriers in the direction perpendicular
to the SAW ($ y $ axis), the solution of the drift-diffusion equation factorizes,
$ \rho^0(\textbf{r},t)=a_0 X(x,t) Y(y,t) $.
Here $a_0$ is a constant fixed by the initial conditions, irrelevant for the dynamics
and thus neglected in the following unless otherwise specified.
The motion in the $ y $ direction is governed by the solution of the diffusion equation,
\begin{equation}
Y(y,t)=\frac{1}{\sqrt{4\pi D t}}\int_{-\infty}^{\infty}\,\mathrm{d}y'\exp\left[-\frac{(y-y')^2}{4 D t}\right]
Y(y',0).
\label{ydens}
\end{equation}
For the dynamics in $x$ direction one has to discriminate between two cases,
depending on the SAW velocity $ v $ being larger or smaller than the carrier velocity $ \mu E $.
In the first case, $ v>\mu E $, the carriers are too slow to follow the SAW, but
move from one minimum to the next.
Considering in addition not too small $E$ such that $Dk\ll\mu E$, cf.\ Eq.~(\ref{dyn}),
diffusion can be neglected and the dynamics is governed by the drift. In typical non-degenerate
semiconductors the Einstein relation $D = \mu k_B T/e$ can be employed to estimate the diffusion
constant,\cite{Cameron1996, Wang2013, Garcia-Cristobal2004} implying that the condition
$Dk\ll\mu E$ becomes independent of the mobility, namely reduces to $k_B T \cdot k \ll eE$, or
$k_B T\ll eE/k$. This requirement is easily met at low temperatures, $T \sim 20$ K or
lower.\cite{Garcia-Cristobal2004} Though diffusion acquires importance with increasing temperature,
the experimental data of Ref.~\cite{Couto2008} (see Fig.\ 4(b) therein), where $k\approx 1.12 \times
10^4 {\rm cm}^{-1}$ and $eE \approx 3.4 \times 10^3$ eV/cm, show that drift can be dominant even at
room temperature. In this case, the differential equation (\ref{dyn}) simplifies,
and $ X(x,t) $ is found to be given by
\begin{equation}
X(x,t)=\frac{v-\mu E\cos\left[k \,\xi(x,t)\right]}{v-\mu E \cos\left(k x-\omega t\right)}X\left(\xi(x,t),0\right),\label{dde}
\end{equation}
with
\begin{widetext}
\begin{equation}
\xi(x,t)=\frac{2}{k} \arctan
\left\lbrace\sqrt{\frac{v-\mu E }{v+\mu E }}
\tan\left[\arctan\left(\sqrt{\frac{v+\mu E }{v-\mu E}}\tan\left(\frac{k x-\omega t}{2}\right)\right)
+\frac{\sqrt{v^{2}-(\mu E )^{2}}}{2v}\,\omega t \right]
\right\rbrace.
\end{equation}
\end{widetext}
Note that $ \xi(x,t=0)=x. $
Care is needed because of the periodicity of $ \tan\left[\left(k x-\omega t\right)/2\right] $, since for an arbitrary initial condition one has to choose the right branch in order to obtain the solution with the correct initial distribution. One can circumvent this difficulty by choosing an initial condition with all carriers within one period. In Fig. \ref{fig_charge} we therefore assumed a Gaussian initial distribution with a standard deviation much smaller than the SAW wavelength. Although the carriers are not fast enough to follow the SAW, they flow from one minimum to the next, with
the average velocity
\begin{equation}
\overline{v}=v-\sqrt{v^{2}-(\mu E)^{2}}, \; \mu E < v.
\end{equation}
\begin{figure}
\includegraphics[width=0.45\textwidth]{Chargenosurf-red.png}
\caption{Motion of the charge carriers $ X(x,t) $ in $ x $ direction with $ \mu E/v=0.5 $.}
\label{fig_charge}
\end{figure}
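The average velocity can be checked against the characteristics of the drift equation, $\dot{x}=\mu E\cos(kx-\omega t)$. A minimal numerical sketch in dimensionless units ($v=k=1$, $\mu E/v=0.5$ as in Fig.~\ref{fig_charge}; integrator and run time are illustrative choices):

```python
import numpy as np

# Characteristics of the drift equation dx/dt = muE*cos(k*x - w*t) in the
# non-surfing regime, integrated with RK4; dimensionless units v = k = 1.
muE, v, k = 0.5, 1.0, 1.0
w = v * k

def rhs(x, t):
    return muE * np.cos(k * x - w * t)

x, t = 0.0, 0.0
dt, T = 2e-3, 500.0
for _ in range(int(T / dt)):
    k1 = rhs(x, t)
    k2 = rhs(x + 0.5 * dt * k1, t + 0.5 * dt)
    k3 = rhs(x + 0.5 * dt * k2, t + 0.5 * dt)
    k4 = rhs(x + dt * k3, t + dt)
    x += dt * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0
    t += dt

# Average drift velocity vs. the closed-form result v - sqrt(v^2 - (muE)^2)
v_mean = x / T
v_pred = v - np.sqrt(v**2 - muE**2)
assert abs(v_mean - v_pred) < 2e-2
```

The residual deviation stems from averaging over a non-integer number of periods and shrinks with the integration time.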
The situation is quite different for $ \mu E > v $, when the charge carriers are fast enough to follow the SAW, i.e., they are ``surfing''. This means that they are subjected to a stationary potential in a reference frame moving with the SAW, and at the point $ x_0 =\arccos\left(v/\mu E\right)/k $ in this frame they move with its velocity.\footnote{Note that $ k x_0 $ lies in the range $ 0\ldots\pi $. This is also apparent from Eq.~(\ref{eq_carrier}).}
Since the potential is periodic, there is such a point in every period.
Independent of the initial distribution $ X(x,0) $, the carriers flow towards
the point $ x_0 $ corresponding to their period, until they reach a stationary distribution.
Thus, for $ \mu E > v $ the solution (\ref{dde}) converges to $ X(x,t) \sim\delta (k (x-x_0)-\omega t) $,
and the carrier density $ X(x,t) $ is concentrated in an infinitely small wire parallel
to the wave front. This implies that the diffusion term cannot be neglected anymore.
Since the charge density distribution becomes stationary, the charge current vanishes in the moving frame,
leading to
\begin{equation}
X(x,t)=\exp\left[\frac{\mu E \sin(kx-\omega t)-v (k x-\omega t) }{D k}\right],
\label{eq_carrier}
\end{equation}
which is sharply peaked at $ kx -\omega t=kx_0 $. Hence for $ \vert kx - \omega t -k x_0 \vert \ll 1 $, $ X(x,t) $
can be approximated by a Gaussian distribution,
\begin{equation}
X(x,t)\approx\frac {e^{-\frac {(kx-\omega t-k x_0 )^2} {2\sigma^2}}} {\sqrt {2\pi}\sigma},
\end{equation}
with standard deviation $ \sigma^2= D k /\sqrt{(\mu E)^{2}-v^{2}} $. Note that the exact solution
(\ref{eq_carrier}) of the continuity equation (\ref{dyn}) does not depend on the sign of $ \mu $.
In other words, this solution describes the dynamics of electrons as well as that of holes,
provided both are in the surfing regime, i.e., $\mu_{\rm e}E$, $\mu_{\rm h}E>v$.
In this case the spatial separation of the two pockets of carriers is
\begin{equation}
\Delta x_0=\frac{\arccos\left(v/\mu_{\mathrm{h}} E\right)-\arccos\left(v/\mu_{\mathrm{e}} E\right)}{k}.\label{distance}
\end{equation}
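That Eq.~(\ref{eq_carrier}) indeed makes the current vanish in the co-moving frame, and that its peak sits at $kx_{0}=\arccos(v/\mu E)$, can be verified directly. A small numerical sketch (all parameter values are illustrative assumptions):

```python
import numpy as np

# Stationary carrier profile in the surfing regime, Eq. (eq_carrier),
# written in the co-moving variable u = k*x - omega*t.
muE, v, D, k = 2.0, 1.0, 0.5, 1.0

u = np.linspace(-1.0, 3.0, 20001)
X = np.exp((muE * np.sin(u) - v * u) / (D * k))

# In the frame moving with the SAW the total current vanishes:
# (muE*cos(u) - v) * X - D*k * dX/du = 0.
dXdu = np.gradient(X, u)
residual = (muE * np.cos(u) - v) * X - D * k * dXdu
assert np.max(np.abs(residual)) < 1e-3 * np.max(X)

# The peak sits at u0 = arccos(v/muE).
u0 = np.arccos(v / muE)
assert np.isclose(u[np.argmax(X)], u0, atol=1e-3)
```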
\section{Spin dynamics}
\label{sec_spin}
In this section, we examine
the influence of a SAW on the spin density.
The spin-orbit Hamiltonians can be written as
\begin{eqnarray}
\label{newcoord1}
H^R_{\rm so}+H^D_{\rm so}=&-(\alpha+\beta)\frac{p_x-p_y}{\sqrt{2}}\,\frac{\sigma^x+\sigma^y}{\sqrt{2}}\\
&+(\alpha-\beta)\frac{p_x+p_y}{\sqrt{2}}\,\frac{\sigma^x-\sigma^y}{\sqrt{2}}\nonumber\\
\label{newcoord2}
=:&\alpha_{+} p_{x'}\sigma^{y'}+\alpha_{-} p_{y'}\sigma^{x'}
\end{eqnarray}
where the primed coordinates correspond to the two directions, $ [110] $ and $ [\bar{1}10] $.
In the following, we will perform all our calculations in this rotated reference frame
(both real space and spin components rotated by $ \pi/4 $ around the $ z $ axis)
with $ \hat{\mathbf{x}}\parallel [110] $ and $ \hat{\mathbf{y}}\parallel [\bar{1}10] $, but drop the prime (except for the closing of Sec.~\ref{sec_superplus} where we will revert back to non-rotated coordinates). For the vector potential $ \boldsymbol{\mathcal A} $ one finds
\begin{eqnarray}
&({\mathcal A})_x^y=-2m(\alpha+\beta):=\,2m\alpha_{+},&\\
&({\mathcal A})_y^x=2m(\alpha-\beta):=\,2m\alpha_{-},&\\
&({\mathcal A})_x^x=({\mathcal A})_y^y=0.&
\label{vecpot}
\end{eqnarray}
The Bloch equations describing the dynamics of the spin density read
\begin{eqnarray}
\partial_{t} s^a+\mu\nabla \cdot\textbf{E}s^a - D \nabla^{2}s^a & = & -2 D\,
\epsilon_{abc}\,\boldsymbol{\mathcal{A}}^b \cdot\nabla s^c - \Gamma^{ab} s^b+\nonumber\\
& & + \epsilon_{abc}\,\mu\textbf{E}\cdot\boldsymbol{\mathcal A}^b s^c .
\label{eq_bloch}
\end{eqnarray}
This set of equations is obtained by taking the spin $a$-component
of the continuity equation (\ref{eq_cont}), after expressing the current in the diffusive regime
according to \eqref{eq_current}. Without a SAW and for a homogeneous spin distribution,
one can immediately determine the spin lifetimes from the eigenvalues
of the inverse spin relaxation matrix $ {\hat\Gamma}^{-1} $. In fact
$ \hat\Gamma $ in (\ref{eq_bloch}) is diagonal, and its eigenvalues are
\begin{eqnarray}
\Gamma_x &=&\, 4Dm^2 \alpha_{+}^2 \,, \label{Gamma1} \\
\Gamma_y &=&\, 4Dm^2 \alpha_{-}^2 \,, \label{Gamma2} \\
\Gamma_z &=&\, 4Dm^2 \left(\alpha_{+}^2+\alpha_{-}^2\right)\,. \label{Gamma3}
\end{eqnarray}
From \eqref{Gamma2} one sees that for $y$-polarized spins there is no relaxation
if $ \alpha=\beta $. Although this limit can be realized in experiments,\cite{Koralek2009,Kohda2012}
we here consider the more general case $ \alpha\neq \beta $.
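The rates \eqref{Gamma1}--\eqref{Gamma3} follow from the Dyakonov--Perel' relaxation tensor $\Gamma^{ab}=D\,(\sum_{i,c}\mathcal{A}_{i}^{c}\mathcal{A}_{i}^{c}\,\delta^{ab}-\sum_{i}\mathcal{A}_{i}^{a}\mathcal{A}_{i}^{b})$ evaluated with the rotated-frame vector potential; a minimal numerical sketch (parameter values are illustrative):

```python
import numpy as np

# Dyakonov-Perel' relaxation tensor
#   Gamma^{ab} = D * (sum_{i,c} A_i^c A_i^c delta^{ab} - sum_i A_i^a A_i^b)
# for the rotated-frame vector potential A_x^y = 2m*a_p, A_y^x = 2m*a_m.
D, m, a_p, a_m = 1.0, 1.0, 0.7, 0.3   # illustrative parameter values

A = np.zeros((2, 3))   # A[i, a]: i = real-space index, a = spin index
A[0, 1] = 2.0 * m * a_p
A[1, 0] = 2.0 * m * a_m

Gamma = D * (np.trace(A @ A.T) * np.eye(3) - A.T @ A)

# Diagonal rates of Eqs. (Gamma1)-(Gamma3)
expected = 4.0 * D * m**2 * np.diag([a_p**2, a_m**2, a_p**2 + a_m**2])
assert np.allclose(Gamma, expected)
```

Setting `a_p == a_m` (i.e. $\alpha_{-}=0$ in the notation of the text) makes the $y$-rate vanish, in line with the remark above.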
\subsection{Homogeneous initial conditions}
\label{subsec_homo}
The spin dynamics depends strongly on the initial conditions.
In this subsection we consider an experimental setup where a short laser pulse homogeneously polarizes
the complete surface. In the surfing regime electrons and holes are strongly localized
and effectively spatially separated, see Eq.~\eqref{distance},
and are transported---along with their spins---across the sample.
The description of the spin dynamics is considerably simplified by switching to a reference frame
co-moving with the SAW. A change to such a reference frame leads to an additional term
in the continuity equation which acts like an internal magnetic field,
\begin{equation}
\partial_{t}\rho+\tilde{\nabla}\cdot\textbf{j}+\mathrm{i}[\textbf{v}\cdot\boldsymbol{\mathcal{A}},\rho]=0,
\end{equation}
where $ \textbf{v} = v \hat{\mathbf{k}}$.
A further simplification can be achieved by applying the following $SU(2)$ gauge transformation:
\begin{equation}
\boldsymbol{\mathcal{A}}\rightarrow U^{\dagger}\boldsymbol{\mathcal{A}}U+iU^{\dagger}\nabla U, \;
U=\exp\left(\mathrm{i}x\mathcal{A}_{x} \right).
\end{equation}
In this gauge, the covariant derivative $ \tilde{\partial}_x \rightarrow \partial_x $ is diagonal in
spin space, but the transformation leads to an $ x $-dependent vector potential $ \mathcal{A}_{y}(x) $. However, since
the charge carriers are Gaussian-distributed at the origin in the co-moving system with
$ \sigma \ll 1/2m \alpha_{+} $, one can neglect the $ x $-dependence of the vector potential,
hence $ \mathcal{A}_{y}(x)\approx\mathcal{A}_{y}(0) $.
The time-dependence of the spin density in the presence of the SAW is governed by an effective
relaxation matrix $\hat\gamma$, whose (complex) eigenvalues are given by
\begin{eqnarray}
\gamma_{x,z}& = & 2Dm^2 \alpha_{-}^2\pm2\,\mathrm{i}\sqrt{v^2 m^2 \alpha_{+}^2-D^2 m^4 \alpha_{-}^4}\label{eig110-1},\\
\gamma_y & = & \, 4Dm^2 \alpha_{-}^2.
\label{eig110-2}
\end{eqnarray}
Since all carriers move with the same velocity $ v $, the real part of these eigenvalues
is related to the spin decay length,
\begin{equation}
L_s=\frac{v}{\Re (\gamma)},\label{decaylength}
\end{equation}
whereas the imaginary part determines the spatial precession length,
\begin{equation}
\lambda=\frac{v}{\Im(\gamma)}.
\label{precessionlength}
\end{equation}
For a SAW moving in the $ y $ direction we proceed in the same way.
The carriers are then concentrated in a small wire parallel to the $ x $ axis.
In this case one finds
\begin{eqnarray}
\gamma_x & = & \, 4Dm^2 \alpha_{+}^2,\\
\gamma_{y,z} & = & 2Dm^2 \alpha_{+}^2\pm2\,\mathrm{i}\sqrt{v^2 m^2 \alpha_{-}^2-D^2 m^4\alpha_{+}^4}
\end{eqnarray}
which is obtained from Eqs.~\eqref{eig110-1} and \eqref{eig110-2} by interchanging $ x $ and $ y $
as well as $+$ and $-$.
Comparing the real parts with Eqs.~\eqref{Gamma1} and \eqref{Gamma2},
one finds a maximal enhancement of the spin lifetime by a factor of $ 2 (\alpha_{+}/\alpha_{-})^2 $ for the $ x $ direction,
and $ 2 (\alpha_{-}/\alpha_{+})^2 $ for the $ y $ direction (``motional narrowing''). Note that the real parts of $ \gamma_{x/z} $ and $ \gamma_{y/z} $
are by a factor of two smaller than their perpendicular counterparts, $ \gamma_y $ and $ \gamma_x $,
respectively. These perpendicular counterparts, describing the relaxation of spins parallel
to the SAW wave front, are not affected by the SAW in the simple case of a homogeneous spin density.
Specifically we numerically calculated the $x$-spin density for a SAW
traveling in the $ x $ direction and for different $E$ values.
For simplicity, we set $ \alpha_{+}=\alpha_{-} $, which in the surfing regime implies a spin lifetime increase
by a factor of two. Not being interested in the spatial variation of the spin density,
we consider the spin polarization $ P_s=\vert\textbf{P}_s \vert $, by integrating
the spin density over the whole surface. From the Bloch equations \eqref{eq_bloch}
one sees that, without a SAW, the spin polarization decays exponentially with the spin scattering rate
\eqref{Gamma1}. Hence we define the average spin lifetime by
\begin{equation}
\langle\tau\rangle=\frac{\int_{0}^{\infty}tP_s\,\mathrm{d}t }{\int_{0}^{\infty}P_s\,\mathrm{d}t }.
\end{equation}
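As a sanity check of this definition: for a purely exponential decay $P_{s}(t)=e^{-t/\tau_{s}}$ the average reduces to $\tau_{s}$ itself. A short numerical sketch (grid and cutoff are illustrative choices):

```python
import numpy as np

# For a purely exponential decay P_s(t) = exp(-t/tau_s) the average
# lifetime defined above reduces to tau_s itself.
tau_s = 2.0
t = np.linspace(0.0, 200.0, 400001)   # integrate far into the tail
P = np.exp(-t / tau_s)

tau_avg = np.sum(t * P) / np.sum(P)   # uniform grid: the spacing cancels
assert np.isclose(tau_avg, tau_s, rtol=1e-3)
```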
For the numerical analysis, we started at $ t=0 $ with a Gaussian distribution in $ x $ direction,
with a standard deviation $ \sigma \gg 1/2m\alpha_{+} $, polarized in $ x $ direction.
The spin lifetime as a function of the ratio $ \mu E/v $ is shown in Fig.~\ref{figspin},
where the expectation value $ \langle\tau\rangle $ is normalized to the corresponding spin lifetime
$ \tau_s $ without SAW, cf.~Eq.~\eqref{Gamma1}.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.45\textwidth]{tau.png}
\caption{Numerical results for the increase of the spin lifetime $ \langle\tau\rangle $ due to a SAW.
For the calculation we assumed $ \alpha_{+}=\alpha_{-} $.
The spin lifetime is normalized by $ \tau_s=\Gamma^{-1}_x $, cf.~Eq.~\eqref{Gamma1}.}
\label{figspin}
\end{center}
\end{figure}
In the regime $ \mu E/v <1 $, when the carriers are not surfing, the spin lifetime depends strongly
on the form of the initial spin distribution; in particular, for our choice its
$E$-dependence is non-monotonic.
As one approaches the surfing regime $ \mu E>v $ the spin lifetime converges
to the expected value $ 2 \tau_s $.
\subsection{Inhomogeneous initial conditions}
\label{subsec_inhomo}
So far we have discussed the spin dynamics of an initially homogeneous spin distribution, for which
case there is no spin current parallel to the SAW wave front. However this assumption is not justified
in experiments where the initial spin distribution is created by, say, a focused laser beam. Again,
without loss of generality, we consider a SAW moving in $x$ direction.
While in the homogeneous case the spins precess
only around the axis parallel to the SAW wave front, there is now diffusion along the wave front,
and hence the spins also rotate around the SAW propagation direction. As a consequence the spins
along the narrow moving wire will not have the same orientation. In order to deal with this additional
precession we employ the following ansatz for the spin density:
\begin{equation}
s^a=\rho^0(r,\varphi,t)\,\eta^{a}(\varphi,t),
\label{ansatz}
\end{equation}
where $ r=2m\sqrt{\alpha_{+}^2 x^2+\alpha_{-}^2 y^2} $ denotes the renormalized (dimensionless) radius,
and $ \varphi=\arctan[\alpha_{-} y/(\alpha_{+} x)] $. The carrier density in the surfing regime, $ \rho^0(r,\varphi,t) $,
was already determined in Sec.~\ref{sec_charge}, with $ X(x,t) $ given in \eqref{eq_carrier};
according to Eq.~\eqref{ydens} the carrier density along the $ y $ axis
for an initial Gaussian distribution with standard deviation $ y_0 $ reads
\begin{equation}
Y(y,t)=\frac{1}{\sqrt{2\pi(2 D t+y_0^2)}}\exp\left[-\frac{y^2}{2 (2 D t+y_0^2)}\right].
\end{equation}
Instead of switching to the SAW co-moving reference frame
as in the homogeneous case, we stay in the laboratory frame but perform again
a gauge transformation,
\begin{equation}
\boldsymbol{\mathcal{A}}\rightarrow U^{\dagger}\boldsymbol{\mathcal{A}}U+iU^{\dagger}\nabla U,
\; U=\exp\left[\mathrm{i}(x-x_0-vt)\mathcal{A}_{x} \right],
\end{equation}
since as above all relevant spin dynamics takes place in a small wire parallel
to the SAW wave front. With the ansatz \eqref{ansatz}, and by neglecting terms $ \mathcal{O}(r^{-1}) $,
the continuity equation \eqref{eq_cont} reads
\begin{eqnarray}
\partial_{t}\eta-\mathrm{i}\,\frac{v}{\cos\varphi}\left[\mathcal{A}_x(\varphi),\eta\right]+
D\left[\mathcal{A}_y,\left[\mathcal{A}_y,\eta\right]\right]=0,
\label{2ddyn}
\end{eqnarray}
where
$ \mathcal{A}_x(\varphi)=\exp\left(-\mathrm{i}\frac{\varphi}{2}\sigma^z\right)\mathcal{A}_x
\exp\left(\mathrm{i}\frac{\varphi}{2}\sigma^z \right) $
is the vector potential rotated around the $ z $ axis.
The second term in Eq.~\eqref{2ddyn} leads to spin precession around the $ \varphi $-dependent
vector potential $ \mathcal{A}_x(\varphi) $, whereas the third term is responsible
for the relaxation of the spin components perpendicular to the $ x $ axis.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.45\textwidth]{110a002b17.png}
\caption{Time-integrated spin density, $ \overline{s}^z $, for a SAW moving in $ [110] $ direction.}\label{2D110}
\label{fig_110}
\end{center}
\end{figure}
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.45\textwidth]{m110a002b17.png}
\caption{Time-integrated spin density, $ \overline{s}^z $, for a SAW moving in $ [\bar{1}10] $ direction.}\label{2D-110}
\label{fig_bar110}
\end{center}
\end{figure}
The Bloch equations now read
\begin{equation}
\partial_{t}\eta^a=- \gamma(\varphi)^{ab}\eta^b,
\end{equation}
with the $\varphi$-dependent effective relaxation matrix
\begin{equation}
\hat\gamma(\varphi)=\left(\begin{array}{ccc}
0 & 0 & 2m v \alpha_{+} \\
0 & 4 Dm^2 \alpha_{-}^2 & 2m v \alpha_{+} \tan\varphi \\
- 2m v \alpha_{+} & -2m v \alpha_{+} \tan\varphi & 4 Dm^2 \alpha_{-}^2
\end{array} \right).
\end{equation}
Assuming that the temporal resolution is not high enough to measure
the time dependence of the spin density directly (see, e.g., Ref.~\onlinecite{Sanada2011}), we characterize
the additional rotation of the spins due to the diffusion parallel to the SAW wave front by the
time-integrated spin density: note that all spins are confined within a narrow wire, and the spin
density vanishes everywhere but for $ x-x_0 \approx vt $. For the time-integrated
spin density we therefore obtain
\begin{equation}
\overline{s}^a=\int_{0}^{\infty} s^a\,\mathrm{d}t\simeq a_0 Y\left(y,(x-x_0)/v\right)\eta^a\left(r,\varphi\right).
\end{equation}
The results presented in Figs.~\ref{2D110} and \ref{2D-110} were obtained by calculating
numerically the time-dependence of the spin density $ s^z $,
assuming at $t=0$ a Gaussian distribution with standard deviation of 1 $\mathrm{\mu m}$.
Specifically, Figs.~\ref{2D110} and \ref{2D-110} show the time-integrated spin density for a SAW moving
in $x$ and $y$ direction, respectively. We have chosen parameters comparable
to the experimental ones\cite{Sanada2011} (we restore temporarily $\hbar$), namely
$ 2 m \alpha/\hbar^2 = 0.02 \,\mu \mathrm{m}^{-1} $, $ 2 m \beta/\hbar^2 = 0.17\,
\mu \mathrm{m}^{-1} $, $ D=30\, \mathrm{cm}^2/\mathrm{s} $, and $ v = 2.9 \times 10^5\,\mathrm{cm}/\mathrm{s} $.
The elliptical shape of the time-integrated spin density,
which is a consequence of the $ \varphi $-dependence of $ \mathcal{A}_x(\varphi) $,
is clearly visible in both figures, in remarkable agreement with the observed behavior.\cite{Sanada2011}
The time-integrated $ s^z $ takes a very simple form
along certain directions. For example, along the $x$ direction for $y=0$ (recall that our coordinate
choice means
$\hat{\bf x}\parallel[110]$, $\hat{\bf y}\parallel[\bar{1}10]$) we find
\begin{equation}
\overline{s}^z= a_0 \frac{\exp\left(-(x-x_0)/L_{s,110}\right)}{\sqrt{y_0^2+2D(x-x_0)/v}}
\cos\left[\frac{2\pi (x-x_0)}{\lambda_{110}}\right],
\label{polz}
\end{equation}
where
\begin{equation}
\label{L110}
L_{s,110}= {v}/{2Dm^2\alpha_{-}^2},\; \lambda_{110}= {v}/{2\sqrt{v^2m^2\alpha_{+}^2-D^2m^4\alpha_{-}^4}};
\end{equation}
this is plotted in Fig.~\ref{fig_zspin}, upper panel (solid black line).
The constant $a_0$ is fixed by fitting the numerical data, as discussed below.
For a SAW propagating in $y$ direction (for $ x=0 $), one finds a similar expression, with the substitutions
$x, L_{s,110}, \lambda_{110} \rightarrow y, L_{s,\bar{1}10}, \lambda_{\bar{1}10}$:
\begin{equation}
\label{Lbar110}
L_{s,\bar{1}10}= {v}/{2Dm^2\alpha_{+}^2},\; \lambda_{\bar{1}10}= {v}/{2\sqrt{v^2m^2\alpha_{-}^2-D^2m^4\alpha_{+}^4}},
\end{equation}
compare Fig.~\ref{fig_zspin}, lower panel (solid black line).
In both propagation directions the numerical and analytical data are in good agreement for
$x,\; y\gtrsim 3\, \mu \rm m $. The reason for the deviation near the origin is that for the chosen
parameters, the standard deviation 1 $ \mathrm{\mu m}$ of the initial Gaussian is only marginally
smaller than the SAW wavelength $ 2\pi/k=2.55\,\mathrm{\mu m} $, leading to two small wires instead
of one. This causes the peak for $x,\; y$ close to this value. The spin dynamics is, however, in
both wires the same. We emphasize that the dependence of the spin precession length on the direction
of motion of the SAW is in very good agreement with the experimental observations.\cite{Sanada2011}
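For a rough order-of-magnitude estimate (a sketch only), Eq.~\eqref{L110} can be evaluated with the parameter values quoted above. We assume here $\alpha_{\pm}=\alpha\pm\beta$; this definition is not restated in the present excerpt, so it should be checked against the original notation:

```python
import numpy as np

# Estimate of L_{s,110} and lambda_110 with the parameters quoted in the text.
# ASSUMPTION: alpha_pm = alpha +/- beta, so m*alpha_+ = (2m alpha + 2m beta)/2, etc.
two_m_alpha = 0.02   # 2 m alpha / hbar^2 in um^-1
two_m_beta = 0.17    # 2 m beta / hbar^2 in um^-1
D = 30e8             # um^2/s (= 30 cm^2/s)
v = 2.9e9            # um/s   (= 2.9e5 cm/s)

m_ap = (two_m_alpha + two_m_beta) / 2.0       # m*alpha_+
m_am = abs(two_m_alpha - two_m_beta) / 2.0    # m*alpha_-

L_110 = v / (2.0 * D * m_am**2)                                   # relaxation length, um
lam_110 = v / (2.0 * np.sqrt((v * m_ap)**2 - (D * m_am**2)**2))   # precession wavelength, um
print(L_110, lam_110)
```

Under the assumed definition of $\alpha_\pm$, this gives a relaxation length of tens of $\mu\mathrm{m}$ and a precession wavelength of a few $\mu\mathrm{m}$; only the orders of magnitude should be taken from this sketch.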
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.45\textwidth]{Sza002b17.png}
\caption{Time-integrated spin density, $ \overline{s}^z $, along the [110] and [$\overline{1}$10] directions.
The red circles represent the numerical solution of Eq.~\eqref{eq_cont}. The black solid line
shows the analytical expression \eqref{polz}.}
\label{fig_zspin}
\end{center}
\end{figure}
\section{Miscellaneous}
\label{sec_superplus}
\subsection{Other growth directions}
Our treatment is based on the general $SU(2)$-covariant equations \eqref{eq_cont} and \eqref{eq_current}.
The latter require as their only input the specific form of the spin-orbit interaction, i.e., of the non-Abelian
vector potential $\boldsymbol{\mathcal A}$, and yield at once the spin diffusion (Bloch) equations \eqref{eq_bloch}.
Therefore any linear-in-momentum spin-orbit term can be handled straightforwardly.
Let us consider, as another example, the $[110]$-grown GaAs quantum well experimentally studied
in Refs.~\onlinecite{Couto2007,Couto2008}. The Rashba interaction is unchanged, compare
Eqs.~\eqref{eq_hR} and \eqref{eq_AR}, whereas the Dresselhaus term points out-of-plane,\cite{Sih2005}
\begin{equation}
H_{so}^D = \beta p_y \sigma^z,
\end{equation}
i.e., the only non-zero component of the vector potential $\boldsymbol{\mathcal A}_D$ is $({\mathcal A}_D)^z_y=2m\beta$.
If only the $[110]$ Dresselhaus term were present, $s^z$ would be a conserved quantity,
\cite{Hankiewicz2006, Raimondi2009} and confinement along the $x$ direction would be inconsequential.
This changes when the Rashba interaction is also taken into account.
The eigenvalues of the $\hat\Gamma$ matrix become \cite{Raimondi2009}
\begin{eqnarray}
\Gamma_1 &=&\, 4Dm^2 \alpha^2 \,, \\
\Gamma_2 &=&\, 4Dm^2 \left(\alpha^2+\beta^2\right) \,, \\
\Gamma_3 &=&\, 4Dm^2 \left(2\alpha^2+\beta^2\right)\,,
\end{eqnarray}
with two eigenmode directions depending on the relative strength of the Rashba and Dresselhaus
interactions:
\begin{eqnarray}
\hat{e}_1 & \parallel & (-\alpha,0,\beta)\,,\\
\hat{e}_2 & \parallel & (0,1,0)\,,\\
\hat{e}_3 & \parallel & (\beta,0,\alpha)\,.
\end{eqnarray}
The influence of a SAW on the spin lifetimes now crucially depends on the propagation direction.
For an $x$-propagating SAW, in the co-moving frame and after gauging away $({\mathcal A})^y_x$ as before,
we find the eigenvalues of the $\hat\gamma$ matrix to be given by
\begin{eqnarray}
\gamma_{1,3}& = & 2Dm^2 \left(3\alpha^2+\beta^2\right)\nonumber \\
& &\pm 2\,\mathrm{i}\sqrt{v^2 m^2 \alpha^2-D^2 m^4 \left(\alpha^2+\beta^2\right)}\,,\\
\gamma_2 & = & \, 4Dm^2 \left(\alpha^2+\beta^2\right)\,,
\end{eqnarray}
with the eigenmode directions
\begin{eqnarray}
\hat{e}'_1 & \parallel & (-8Dm^2\alpha^2 + \gamma_1, 0, 2m\alpha v +4Dm^2\alpha\beta)\,,\\
\hat{e}'_2 & \parallel & (0,1,0)\,, \\
\hat{e}'_3 & \parallel & (-8Dm^2\alpha^2 - \gamma_3, 0, 2m\alpha v +4Dm^2\alpha\beta)\,.
\end{eqnarray}
The $y$-polarized spin eigenmode keeps its direction, $\hat{e}'_2=\hat{e}_2$,
and its lifetime, $\gamma_2=\Gamma_2$, as in the case of a [001]-grown quantum well
(see \eqref{Gamma2} and \eqref{eig110-2}). On the other hand, the $\Gamma_1$- and $\Gamma_3$-modes
are mixed by the SAW-induced dynamics.
By comparing $ \Gamma_{1,3} $ with the real parts of $ \gamma_{1,3} $, one sees that
${\Re (\gamma_1)} > \Gamma_1$, i.e., the new $\gamma_1$ eigenmode has actually a shorter lifetime compared to the old one.
On the other hand, ${\Re (\gamma_3)} < \Gamma_3$, with the eigenmode lifetime increasing by a factor of two
for strong Dresselhaus interaction, $\beta\gg\alpha$.
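These statements can be checked numerically against the eigenvalues quoted above; in particular, the total relaxation rate (the trace of the relaxation matrix) is unchanged by the SAW:

```python
import numpy as np

# Consistency check of the x-propagating-SAW eigenvalues for the [110]-grown
# well: the total relaxation rate (the trace) is unchanged by the SAW, and the
# mixed modes satisfy Gamma_1 < Re(gamma_1) = Re(gamma_3) < Gamma_3.
rng = np.random.default_rng(0)
for _ in range(100):
    D, m, alpha, beta = rng.uniform(0.1, 2.0, size=4)
    G1 = 4 * D * m**2 * alpha**2
    G2 = 4 * D * m**2 * (alpha**2 + beta**2)
    G3 = 4 * D * m**2 * (2 * alpha**2 + beta**2)
    re_g13 = 2 * D * m**2 * (3 * alpha**2 + beta**2)  # Re(gamma_1) = Re(gamma_3)
    g2 = 4 * D * m**2 * (alpha**2 + beta**2)          # gamma_2 = Gamma_2
    assert np.isclose(2 * re_g13 + g2, G1 + G2 + G3)  # trace conserved
    assert G1 < re_g13 < G3                           # 1-mode faster, 3-mode slower
print("checks passed")
```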
Even more interestingly, for a $y$-propagating SAW the relaxation is independent of $\beta$.
The eigenvalues of the $ \hat\gamma $ matrix, in the moving frame and after the usual gauge transformation,
read
\begin{eqnarray}
\gamma_1 &=&\, 4Dm^2 \alpha^2\,,\\
\gamma_{2,3} &=&\, 2Dm^2 \alpha^2\nonumber \\
& &\pm 2\,\mathrm{i}\sqrt{v^2 m^2 \left(\alpha^2+\beta^2\right)-D^2 m^4 \alpha^4}\,,
\end{eqnarray}
whereas the eigenmode directions are
\begin{eqnarray}
\hat{e}'_1 & \parallel & (-\alpha,0,\beta)\,,\\
\hat{e}'_2 & \parallel & (2mv\beta, \gamma_2 , 2mv \alpha)\,,\\
\hat{e}'_3 & \parallel & (2mv\beta, -\gamma_3 , 2mv \alpha)\,.
\end{eqnarray}
Now the $\Gamma_1$-mode keeps both its lifetime, $\gamma_1=\Gamma_1$,
and its direction, $\hat{e}'_1=\hat{e}_1$, while the other two modes are strongly influenced
by the presence of the SAW.
In particular, compared to the eigenmodes $\Gamma_{2,3}$, the new eigenmodes $\gamma_{2,3}$
have a spin lifetime enhanced by a factor of
$2\beta^2/\alpha^2$ if the Dresselhaus interaction dominates, $\beta\gg\alpha$.
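The quoted enhancement factor follows directly from the rates above and is easily verified numerically (arbitrary units):

```python
import numpy as np

# Check of the lifetime enhancement for a y-propagating SAW in a [110]-grown
# well: Gamma_{2,3} / Re(gamma_{2,3}) -> 2*beta^2/alpha^2 for beta >> alpha.
D, m, alpha, beta = 1.0, 1.0, 1.0, 100.0      # beta >> alpha (arbitrary units)
Gamma2 = 4 * D * m**2 * (alpha**2 + beta**2)
Gamma3 = 4 * D * m**2 * (2 * alpha**2 + beta**2)
re_gamma23 = 2 * D * m**2 * alpha**2          # real part of gamma_{2,3}
enh2 = Gamma2 / re_gamma23                    # lifetime enhancement of the 2-mode
enh3 = Gamma3 / re_gamma23                    # lifetime enhancement of the 3-mode
target = 2 * beta**2 / alpha**2
print(enh2, enh3, target)                     # both enhancements close to target
```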
\subsection{Additional spin-orbit interactions}
Additional sources of Rashba or Dresselhaus-like spin-orbit terms are the out-of-plane
(i.e., parallel to the quantum well growth direction) SAW field and strain.
Experimental observations suggest these dynamical contributions
to be subleading compared to the static ones, though not completely negligible,
especially for very strong SAW power.\cite{Sanada2011}
For this discussion we consider the non-rotated coordinates, cf. Sec.~\ref{sec_model}.
In the laboratory reference frame the additional spin-orbit interactions appear as time- and space-dependent
Rashba or Dresselhaus terms. For example, considering a SAW propagating along the $x$ direction, Eq.~\eqref{eq_hR} is modified to
\begin{equation}
\label{eq_hR_new}
H^R_{\rm so} = \left[\alpha+\alpha_{\rm piezo}(x,t)+\alpha_{\rm strain}(x,t)\right]
(p_y\sigma^x-p_x\sigma^y),
\end{equation}
and similarly for the Dresselhaus terms. In the color language this means
that we deal with a space- and time-dependent vector potential $\boldsymbol{\mathcal A}(x,t)$. Nevertheless,
as long as the spatial variations of the spin-orbit fields are slow on the scale of the
Fermi wavelength, the $SU(2)$ approach can be employed directly, as it treats homogeneous/static
spin-orbit terms on the same footing as inhomogeneous/time-dependent ones.\cite{Gorini2010}
The dynamical nature of these additional spin-orbit interactions substantially complicates
the problem, but once more a change to the SAW co-moving reference frame offers a great simplification:
when all disturbances, i.e., in- or out-of-plane fields, either piezoelectric or
due to strain, propagate approximately with the same sound velocity $v$,
all their contributions become static in the SAW co-moving frame,
$\boldsymbol{\mathcal A}(x,t)\rightarrow\boldsymbol{\mathcal A}(x)$. Moreover, in the surfing regime
when the carriers are confined, the vector potential can be approximated by its value at
$x_0=\arccos\left(v/\mu E\right)/k$ (see Sec.~\ref{sec_charge}),
$\boldsymbol{\mathcal A}(x)\approx\boldsymbol{\mathcal A}(x_0)$. Hence we are back to the situation discussed
in Sec.~\ref{sec_spin}, with the following modifications:
\begin{eqnarray}
\boldsymbol{\mathcal A}_R &\rightarrow& \boldsymbol{\mathcal A}_R+\boldsymbol{\mathcal A}^{\rm piezo}_R(x_0)+\boldsymbol{\mathcal A}^{\rm strain}_R(x_0)
\\
\boldsymbol{\mathcal A}_D &\rightarrow& \boldsymbol{\mathcal A}_D+\boldsymbol{\mathcal A}^{\rm strain}_D(x_0).
\end{eqnarray}
This corroborates and fully justifies the intuition behind the estimations of
$\alpha_{\rm piezo}$, $\alpha_{\rm strain}$, and $\beta_{\rm piezo}$ described in Ref.~\onlinecite{Sanada2011}.
Finally, we briefly discuss extrinsic spin relaxation, i.e., due to spin-orbit interaction
with the disorder potential $V({\bf r})$. Extrinsic mechanisms can be included
in the color approach,\cite{Raimondi2012} and in the present case they lead to an additional
(diagonal) term $\hat\Gamma_{\rm extr}$ in the relaxation matrix $\hat\Gamma$ of Eq.~\eqref{eq_bloch},
\begin{equation}
\hat\Gamma_{\rm extr}=\frac{1}{\tau_{EY}} \, {\rm diag}(1,1,0) \, .
\end{equation}
The Elliott-Yafet spin-flip rate $1/\tau_{EY}$ is typically negligible compared to the Dyakonov-Perel rate
(see Ref.~\onlinecite{Raimondi2009} for details), and independent of the presence of SAWs
or of confinement. Nevertheless, a discussion focused on its role in a moving quantum dot in the presence
of a Zeeman field is given in Ref.~\onlinecite{Huang2013}.
Note that in case the impurity potential $V({\bf r})$
fluctuates also out-of-plane,\cite{Dugaev2010} an Elliott-Yafet relaxation rate
for the $z$ spin component will appear.
\section{Conclusion}
\label{sec_conclusions}
Based on a microscopic model of a disordered two-dimensional electron gas,
we studied the effects of surface acoustic waves on the charge and spin dynamics of
photo-excited carriers, focusing on intrinsic spin-orbit mechanisms (Dyakonov-Perel relaxation).
A SAW has to be strong enough ($ \mu E>v $) in order to transport the carriers
at the speed of sound $v$ across the sample. In this surfing regime,
the spin lifetime is considerably increased due to motional narrowing, up to a factor of two
in (001) quantum wells. The dynamics can be most conveniently described in a reference
frame co-moving with the SAW.
In particular, we determined the SAW-induced modifications of the spin relaxation and
precession lengths. Considering also diffusion along the SAW wave front, we obtained
very good agreement with recent experimental observations.\cite{Sanada2011}
Additional dynamical sources of spin-orbit relaxation (out-of-plane SAW field, strain)
were also shown to be most conveniently handled in the SAW co-moving frame. These effects are expected to be relevant for the ``moving quantum dots'' produced by the interference of two orthogonal SAW beams. \cite{Stotz2005,Sanada2011}
\acknowledgments{We acknowledge useful discussions with H.~Krenner and A.~Wixforth,
as well as financial support from the German Research Foundation (DFG) through TRR 80,
and from the CEA through the DSM-Energy Program (project E112-7-Meso-Therm-DSM).}
In 2012, Menzel \textit{et al.}~reported on the results of a fundamental experiment raising questions regarding the simultaneous observation of wave-like and particle-like properties in a given quantum system. While the general applicability of the duality principle to entangled subsystems is an open question, we bring the current understanding of the duality principle a step forward by theoretically deriving the strongest relations between the visibility of an interference pattern and the which-way information in a two-way interferometer such as Young's double-slit. This formalism successfully describes tests of duality where post-selection on a subset of the interference pattern is applied. Our analysis even reconciles the surprising results of Menzel \textit{et al.}~with the duality principle in its standard form.
\section{Introduction}
{I}n his famous analysis of the two-slit experiment, Bohr arrived at the conclusion that one cannot obtain both complete which-way information and interference effects in a single experimental configuration \cite{bohr:49}. Since then, numerous studies have reinforced and refined Bohr's result \cite{greenberger:89, englert:96, wootters:79, jaegger:95, bergou:00}. Notably, the close connection between duality and the concept of the quantum eraser was established in the seminal paper by Scully, Englert and Walther \cite{scully:91}. Later, the duality principle was confirmed by experimental evidence with massive particles such as neutrons \cite{greenberger:89}, atoms \cite{carnal:91} and even C$_{60}$ molecules of picometer-size de Broglie wavelength \cite{arndt:99}. Having passed every test, duality has indubitably become a solid fundamental and universal principle of quantum mechanics.
Recently, however, Menzel \textit{et al.}~reported a surprising result in the context of the duality principle \cite{menzel:11, menzel:13}. They implemented Young's two-slit experiment with photons entangled in position and momentum generated through spontaneous parametric downconversion (SPDC), and measured both an interference pattern with high visibility and high which-way information in a single experimental configuration. Motivated by this unexpected result, we analyze duality from a ``fair sampling" perspective.
The concept of ``fair sampling" has received much attention in the context of tests of the Bell inequalities and non-locality \cite{giustina:13,rowe:01,matsukevich:08}. In order to rule out local theories completely, one should avoid any assumption, including the fair-sampling assumption, which states that the set of measurement results is representative of the entire ensemble under study. To achieve freedom from this assumption one could make sure that the detection efficiency be equal for all the states in the ensemble and that the overall detection efficiency be above a particular threshold \cite{giustina:13}, which depends on the type of Bell inequality. ``Fair sampling" also requires that all measurement settings be chosen without bias. In other words, all relevant subsets of an ensemble must be sampled with equal probability. However, the result of a test of fundamental quantum mechanics performed with biased sampling can still bear meaning if all the properties of the measurement settings are taken into account.
In this work, we derive the tightest possible relation between which-alternative knowledge and average visibility of the corresponding interference pattern in the presence of an environment, an improvement on the bound of the known inequalities. We then show how biased sampling can cause an apparent violation of the duality principle. We finally study the effect of biased sampling on actual tests of the duality principle by applying our duality relation to a thought experiment, inspired by the work of Menzel \textit{et al.}.
\section{The duality relations}
A duality relation bounds the visibility of an interference pattern and the corresponding available which-alternative information in an interferometer. Young's two-slit experiment is one of many ways to produce the experimental conditions in which an interference pattern and which-way knowledge can be obtained. Here, we restrict ourselves to a two-alternative system, where the alternatives can correspond to any degree of freedom: the arms of an interferometer, two slits, orthogonal polarizations, two orbital angular momentum states, to give a few examples. Without specifying any degree of freedom, we consider a pure normalized two-alternative quantum state of the form $\ket{\psi} =\lambda_{1}\ket{1}+\lambda_{2}\ket{2}$, where $\lambda_{1}$ and $\lambda_{2}$ are the complex amplitudes of alternatives 1 and 2.
There are two distinct ways of gaining which-alternative information: by prediction and by retrodiction, an educated guess about the outcome of an event that occurred in the past. We review the former, and then derive a new duality relation for the latter. One can \textit{predict}, though not necessarily with certainty of being correct, the outcome of a which-alternative measurement if a state is prepared such that a particular alternative is more likely than the other. Greenberger and Yasin, in \cite{greenberger:89}, quantify this fraction with the positive difference between the probabilities of observing the alternatives: $\PP=||\lambda_{1}|^2-|\lambda_{2}|^2|$, a quantity now known as \textit{predictability}. It corresponds to one's ability to predict the outcome of a which-alternative measurement in the basis $\{\ket{1},\ket{2}\}$. The fact that only one outcome is possible for any measurement is usually interpreted as particle-like behavior. The complementary quantity that brings to light the wave-like behavior of the quantum state is the contrast, or \textit{visibility} of the interference pattern. The visibility is obtained by projecting $\ket{\psi}$ onto the superposition state $(\ket{1}+\text{e}^{\imath\phi}\ket{2})/\sqrt{2}$, where $\phi$ is a phase that is scanned to produce the interference pattern. The visibility of the resulting interference pattern is given by $\V=2|\lambda_{1}\lambda_{2}|$. For a pure two-alternative state, we have the equality \cite{greenberger:89},
\begin{equation}\label{eq:GY}
\PP^2+\V^2=1.
\end{equation}
In the presence of noise or a statistical mixture of two alternatives, the coherence is reduced and the above relation becomes an inequality: $\PP^2+\V^2\leq1$.
The presence of decoherence can be modeled very effectively by considering an {auxiliary system} \cite{jaegger:95}, often called the {environment} \cite{englert:96}, in addition to the two-alternative system. If the two-alternative system is coupled to an environment, the latter may carry information about the former, and the amount of which-alternative information carried by the environment depends on the strength of the coupling. This concept is concisely explained through an example. Notably, Schwindt \textit{et al.}~have experimentally coupled each path of a Mach-Zehnder interferometer to arbitrary polarization states, making the which-way information accessible through a measurement of the polarization \cite{schwindt:99}. In this experiment, the arms of the Mach-Zehnder interferometer played the role of the two alternatives and the polarization degree of freedom played the role of the auxiliary system. If polarizations of the light in the two paths are orthogonal, a measurement of the polarization of a photon at the output of the interferometer yields complete which-alternative information by retrodiction. The term ``retrodiction'' refers to the fact that the measurement outcome, which is obtained after a photon traversed the interferometer, contains the relevant information \cite{jaegger:95}. Note that for each possible outcome of a measurement on the auxiliary system there corresponds a conditional state of the two-alternative system that will display a particular predictability and a particular visibility; see Fig.~1 of reference \cite{bergou:00} for a pictorial description.
In an arbitrary basis $\{\ket{a_i}\}$ of dimension $D$ for the auxiliary system, the composite state is written $\ket{\Psi} =\sum_{i=1}^D\alpha_i\ket{\psi_i,a_i}$, where the complex amplitudes $\alpha_i$ are normalized and $\ket{\psi_i}=\lambda_{1,i}\ket{1}+\lambda_{2,i}\ket{2}$ are the conditional states. The which-alternative knowledge
associated with the composite system is given by the statistical average of the predictabilities, after sorting the auxiliary states $\ket{a_i}$: $\brakett{\PP}=\sum_{i=1}^D p_i\PP_i$, where $p_i=|\alpha_i|^2$ is the probability of occurrence of the $i$-th auxiliary state and $\PP_i$ is the predictability associated with this same auxiliary state $\ket{a_i}$. The quantities $\brakett{\PP^2}=\sum_{i=1}^D p_i \PP_i^2$ and $\brakett{\V^2}=\sum_{i=1}^Dp_i\V_i^2$ sum to unity, in virtue of Eq.~\ref{eq:GY},
\begin{equation}\label{eq:B1}
\brakett{\PP^2}+\brakett{\V^2}=1.
\end{equation}
In the case where the auxiliary system is parametrized by a continuous variable, the sums are replaced by integrals. The Englert-Bergou inequality between the which-alternative knowledge and the average visibility is given by \cite{bergou:00}: $\brakett{\PP}^2+\brakett{\V}^2\leq 1$, which holds even in the case of partly or completely mixed states. We have used the fact that, for any distribution, the following inequalities are true: $\brakett{\PP}^2\leq \brakett{\PP^2}$ and $\brakett{\V}^2\leq \brakett{\V^2}$.
In order to find an \textit{equality} for the physically relevant quantities, the which-alternative knowledge and the average visibility, we use the variances of each distribution: $\sigma_\PP^2= \sum_{i=1}^D p_i(\PP_i-\brakett{\PP})^2$ and $\sigma_\V^2= \sum_{i=1}^D p_i(\V_i-\brakett{\V})^2$. From Eq.~\ref{eq:B1} and the identities $\sigma_\PP^2=\brakett{\PP^2}-\brakett{\PP}^2$ and $\sigma_\V^2=\brakett{\V^2}-\brakett{\V}^2$, it follows that
\begin{equation}\label{eq:B2}
\brakett{\PP}^2+\brakett{\V}^2= 1-\sigma_\PP^2-\sigma_\V^2.
\end{equation}
Since predictability and visibility are bounded between 0 and 1, each variance can take a maximum value of 1/4. The RHS of Eq.~\ref{eq:B2} is thus inherently greater or equal to 1/2. In the presence of noise or uncontrolled coupling to the environment, the equality becomes an inequality, $\brakett{\PP}^2+\brakett{\V}^2\leq 1-\sigma_\PP^2-\sigma_\V^2$, and the RHS of Eq.~\ref{eq:B2} bounds the LHS in the tightest way possible.
Eq.~\ref{eq:B2} holds only when all states of the environment $\{\ket{a_i}\}$ are sampled with equal probability. Since the environment is comprised of $D$ states, the sampling probability for any state $\ket{a_i}$ should be $1/D$. When this no longer holds true, the statistics do not reflect the state at hand and the RHS of Eq.~\ref{eq:B2} no longer bounds the LHS. In particular, this occurs when selecting only a subset of the auxiliary system while rejecting the rest. For instance, one could only measure the subset of the environment corresponding to the highest predictability $\PP_\text{max}$ and also the one corresponding to the highest visibility $\V_\text{max}$. In general, these subsets are different states of the environment, $\ket{a_j}$ and $\ket{a_k}$ with $j\neq k$. For non-zero variances, the maximum value in each distribution is greater than its respective average value: $\PP_\text{max}>\brakett{\PP}$ and $\V_\text{max}>\brakett{\V}$. Since the quantity $(\PP_\text{max}^2+\V_\text{max}^2)$ can in principle approach 2, it is possible to observe both high predictability and high visibility in a single experiment. This can appear to be a violation of the duality principle, but it is simply a consequence of biased sampling in the measurements of which-alternative information and visibility, i.e., different samplings in the measurements of predictability and visibility.
\section{An example of an apparent violation of duality}
Through a thought experiment inspired by the work of Menzel \textit{et al.}, we now show the details of how to achieve an apparent violation of duality. Starting from a two-photon state generated through spontaneous parametric down-conversion, one of the photons traverses a two-slit mask, while the other is used to measure the which-slit information. We then calculate the two-dimensional interference pattern in the far-field of the mask given that partial which-slit information is acquired. In the two-dimensional interference pattern, the transverse axis in the direction parallel to the long side of the slits acts as the auxiliary system. We calculate the quantities appearing in Eq.~\ref{eq:B2} for a given set of experimental parameters and show the impact of biased sampling on the outcome of the thought experiment.
\subsection{The theory of SPDC} For our purposes, it suffices to consider the SPDC process with a type I crystal, whose theoretical description is simpler than that of a type II crystal. In the limit of very little walk-off due to birefringence, the two-photon transverse spatial mode function of degenerate SPDC has a simple analytical form \cite{monken:98, vanexter2:09}. As a function of the transverse wavevectors of the signal and idler photons, $\bold{p_s}$ and $\bold{p_i}$ with $\bold{p}=p_x\bold{\hat{x}}+p_y\bold{\hat{y}}$, the single-frequency two-photon mode function is given by
\begin{align} \label{eq:SPDC}
\Phi(\bold{p_s},\bold{p_i})=N\hspace{2pt}\tilde{E}\hspace{2pt}(\bold{p_s}+\bold{p_i})\hspace{4pt}\tilde{F}\left(\frac{\bold{p_s}-\bold{p_i}}{2}\right),
\end{align}
where $N$ is a normalization constant, $\tilde{E}(\bold{p})$ is the angular spectrum of the pump laser, and $\tilde{F}$ is the phase-matching function. In the paraxial wave approximation, the phase-matching function is of the form $\tilde{F}(\bold{p})=\text{sinc}(\varphi+{L}\hspace{2pt}|\bold{p}|^2/{k_p})$, where $\varphi$ is the phase mismatch parameter, $L$ is the thickness of the crystal and $k_p$ is the wavevector of the pump inside the crystal.
Because of momentum conservation, the signal and idler photons are anti-correlated in transverse wavevector space. The momentum correlations are mostly determined by the angular spectrum of the pump, while the phase-matching function dictates the general shape of the two-dimensional probability distribution of the individual photons, which we shall refer to as ``the singles". If the pump beam is collimated and has infinite width at the crystal, its angular spread approaches the Dirac distribution $\delta (\bold{p_s}-\bold{p_i})$. In this limit, the intensity profile of the singles in the far-field of the crystal is exactly given by $|\tilde{F}(\bold{p_{s,i}})|^2$, where $\bold{p_{s,i}}$ is the transverse wavevector of either the signal or the idler photon.
In order to describe the position correlations in coordinate space, we perform a 4-dimensional Fourier transform on the two-photon mode function: $\Psi(\bold{r_s},\bold{r_i})=\text{FT}[\Phi(\bold{p_s},\bold{p_i})]$, where $\bold{r}=r_x\bold{\hat{x}}+r_y\bold{\hat{y}}$ is the transverse coordinate in the plane of the crystal. Since the mode function in wavevector space is separable in $(\bold{p_s}+\bold{p_i})$ and $(\bold{p_s}-\bold{p_i})$, the mode function at the output facet of the crystal is written \cite{peeters:09,fonseca:99,chan:07,vanexter:09}
\begin{align} \label{eq:NFSPDC}
\Psi(\bold{r_s},\bold{r_i}) = N' {E}\hspace{-1pt}\left(\bold{r_s} + \bold{r_i}\right) {F}(\bold{r_s}-\bold{r_i}),
\end{align}
where $N'$ is a normalization constant, $E(2\bold{r})$ is the transverse spatial mode of the pump at the crystal and ${F(\bold{r})}$ is the phase-matching function in coordinate space: $F(\bold{r})=(2 \pi)^{-1} \int \text{sinc}(\varphi+{L}\hspace{2pt}|\bold{p}|^2/{k_p}) \text{e}^{-\imath\bold{p}\cdot\bold{r}}\hspace{3pt}d\bold{p}$. If the phase mismatch parameter is different from zero, $\varphi\neq 0$, this integral has no known analytical solution and has to be performed numerically. See supplementary information for the details of the calculations.
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{SetupSmall.pdf}
\caption{Our thought experiment. Photon pairs entangled in position and momentum are generated through degenerate SPDC with a type I crystal and a wide Gaussian pump mode. The signal and idler photons are separated by a 50/50 beam-splitter. On the path of the signal photon, the plane of the crystal is imaged with unit magnification to the plane of a two-slit mask made of slit T at $r_{s,y}=d/2$ and slit B at $r_{s,y}=-d/2$. The signal photon traverses the mask, and the idler photon is collected by an optical fiber (MMF), whose input facet is in the image plane of the crystal and centered at $r_{i,y}=d/2$ and $r_{i,x}=0$. Through position correlations, we gain which-slit information of the signal photon upon detection of the idler photon. We collect the signal photons in the far-field of the mask with a scanning point detector (SPD). All measurements are performed in coincidence, such that the interference pattern of the signal photons is conditional on the detection of idler photons. In a real experiment, interference filters would be placed before the detectors to ensure degenerate SPDC.}
\label{fig:setup}
\end{figure}
\subsection{Modelling the two-photon thought experiment}
~~In our thought experiment, we use a two-slit mask with a slit separation $d$ in the image plane of the output facet of the crystal on the signal photon side. Upon measurement of the idler photon position, the correlations allow one to gain knowledge about which slit the signal photon traverses \textit{while} measuring the interference pattern in the far-field of the two-slit mask. We model the mask with the transmission function ${W(r_{s,y})}={T(r_{s,y})}+{B(r_{s,y})}$, where ${T}$ and ${B}$ stand for the ``top" and ``bottom" slits and correspond to rectangle functions of width $\Delta$ at positions $d/2$ and $-d/2$, respectively. We chose the letter $W$ for the two-slit mask because it looks like what it represents: two slits with light diffracting out. The unnormalized two-photon mode function after one of the three masks is given by $\Psi_S(\bold{r_s},\bold{r_i})=\Psi(\bold{r_s},\bold{r_i})S(r_{s,y}) $, where $S$ can be replaced by $W$, $T$ or $B$. The single-slit amplitudes $\Psi_T(\bold{r_s},\bold{r_i})$ and $\Psi_B(\bold{r_s},\bold{r_i})$ are needed in the thorough analysis of the test of the duality principle and are physically obtainable by blocking the bottom slit or the top slit, respectively. As we are interested in the joint probability of the signal photon being detected in the far-field of the mask and the idler photon in the near-field of the crystal, we perform a Fourier transform on the signal photon only: $\widetilde{\Psi}_S(\bold{p_s},\bold{r_i})= ({2\pi})^{-1}\int d\bold{r} \hspace{2pt} \Psi_S(\bold{r},\bold{r_i}) \hspace{2pt} \text{e}^{\imath \bold{r}\cdot \bold{p_s}}$.
The idler photon is detected with a multimode fiber of width $w_f$ at position $(r_{i,x}=0,r_{i,y}=d/2)$. The mode of this fiber is modeled by a Gaussian function:
\begin{equation}
f(\bold{r_i})=\text{exp}\{-[r_{i,x}^2+(r_{i,y}-d/2)^2]/(2 w_f^2)\}.
\end{equation}
Upon detection of an idler photon, the conditional distributions of the signal photon after any of the masks in coordinate space and wavevector space are respectively written
\begin{align}
P_S(\bold{r_s}|f_i)&= N_P\int d\bold{r_i} \left| \Psi_S(\bold{r_s},\bold{r_i}) \hspace{2pt} f_i(\bold{r_i}) \right|^2~~~\text{and} \label{eq:sig}\\
\widetilde{P}_S(\bold{p_s}|f_i)&= N_P\int d\bold{r_i} \left| \widetilde{\Psi}_S(\bold{p_s},\bold{r_i})\hspace{2pt} f_i(\bold{r_i}) \right|^2 , \label{eq:sigff}
\end{align}
where the normalization constant is given by $N_P^{-1}= \int\int d\bold{r_s} d\bold{r_i} \left| \Psi_W(\bold{r_s},\bold{r_i}) \hspace{2pt} f_i(\bold{r_i}) \right|^2$. We find Eq.~\ref{eq:sig} and \ref{eq:sigff} through conditional probabilities. For instance, in the near-field of the two-slit mask, the conditioned signal photon distribution is given by $P_S(\bold{r_s}|f_i)=P_S(\bold{r_s},f_i)/P_S(f_i)$, where $P_S(f_i)=N_P^{-1}$ and $P_S(\bold{r_s},f_i)$ is equal to the remaining integral in Eq.~\ref{eq:sig}.
In view of the duality relations, the probability distribution $ \widetilde{P}_W(\bold{p_s}|f_i)$ comprises one main degree of freedom and one that belongs to the environment: the vertical and horizontal directions, respectively. In general, the visibility of the interference pattern depends on the degree of freedom of the environment and can thus vary as a function of $p_{s,x}$.
The average predictability can be calculated either in coordinate space or momentum space. In our formalism, the average predictability in coordinate space is expressed as
\begin{equation}\label{eq:p1}
\brakett{\PP}=\int d\bold{r_s} \hspace{2pt} | P_T(\bold{r_s}|f_i) - P_B(\bold{r_s}|f_i) |.
\end{equation}
Instead, we calculate the average predictability in wavevector space, which allows us to retrieve the which-alternative knowledge in the same basis as the visibility. We retrieve $ \widetilde{P}_T(\bold{p_s}|f_i)$ and $ \widetilde{P}_B(\bold{p_s}|f_i)$ by blocking slit B or slit T, respectively. We then integrate the distributions in wavevector space over the main degree of freedom, $p_y$, and obtain the marginal probability distributions $M_T(p_{s,x})=\int dp_{s,y} \widetilde{P}_T(p_{s,x},p_{s,y}|f_i)$ and $M_B(p_{s,x})=\int dp_{s,y} \widetilde{P}_B(p_{s,x},p_{s,y}|f_i)$. For brevity, we henceforth omit writing the argument $p_{s,x}$. The marginal signal probability distribution for the two slits simultaneously in the same basis is $M_W=M_T+M_B$. Predictability and visibility can both be expressed as a function of $p_{s,x}$: $\PP=|M_T-M_B|/M_W$ and $\V=2\sqrt{|M_TM_B|}/M_W$. The average predictability and average visibility are respectively given by
\begin{align}
\brakett{\PP}&= \int dp_{s,x} \hspace{2pt} |M_T-M_B| ~~~\text{and} \label{eq:p2} \\
\brakett{\V}&=\int dp_{s,x} \hspace{2pt} 2 \sqrt{|M_TM_B|}\hspace{2pt}. \label{eq:v2}
\end{align}
The last quantities left to find are the following variances:
\begin{align}
\sigma_\PP^2&=\int dp_{s,x} \hspace{2pt} M_W \hspace{2pt} (\PP-\brakett{\PP})^2~~~\text{and} \label{eq:varp2} \\
\sigma_\V^2&=\int dp_{s,x} \hspace{2pt} M_W \hspace{2pt} (\V-\brakett{\V})^2. \label{eq:varv2}
\end{align}
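Once the marginals are tabulated on a discrete $p_{s,x}$ grid, the averages and variances above reduce to weighted sums. The sketch below implements Eqs.~\ref{eq:p2}--\ref{eq:varv2}; the function and variable names are ours, and the demonstration marginals in the usage note are synthetic, not the SPDC ones.

```python
import numpy as np

def duality_quantities(px, M_T, M_B):
    """Evaluate pointwise and average predictability/visibility and their
    variances (Eqs. 5-8) from single-slit marginals tabulated on a grid."""
    dx = px[1] - px[0]
    norm = np.sum(M_T + M_B) * dx              # enforce normalization of M_W
    M_T, M_B = M_T / norm, M_B / norm
    M_W = M_T + M_B
    P = np.abs(M_T - M_B) / M_W                # pointwise predictability
    V = 2.0 * np.sqrt(np.abs(M_T * M_B)) / M_W # pointwise visibility
    P_avg = np.sum(np.abs(M_T - M_B)) * dx
    V_avg = np.sum(2.0 * np.sqrt(np.abs(M_T * M_B))) * dx
    var_P = np.sum(M_W * (P - P_avg) ** 2) * dx
    var_V = np.sum(M_W * (V - V_avg) ** 2) * dx
    return P, V, P_avg, V_avg, var_P, var_V
```

Since $\PP^2+\V^2=1$ pointwise wherever $M_W>0$, one also has $\brakett{\PP}^2+\brakett{\V}^2=1-\sigma_\PP^2-\sigma_\V^2$, which this discretization reproduces to numerical precision.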
Using Eq.~\ref{eq:NFSPDC} to \ref{eq:varv2}, we check that Eq.~\ref{eq:B2} is satisfied by means of a numerical example. In our model, the pump spatial transverse mode does not play a key role and need not be of any special kind. We thus consider a plane wave, which is a very good approximation to a wide collimated Gaussian beam at the crystal. The pump term in Eq.~\ref{eq:NFSPDC} can then be ignored, making the SPDC mode function completely determined by the phase-matching function. For the numerical calculations, the set of parameters that we use is $\{\varphi=-19~\text{rad},~L=2~\text{mm},~d=70~\mu \text{m},~\Delta=d/4,~w_f=10~\mu\text{m},~n=1.65,~\lambda_p=405~\text{nm}\}$, with $k_p= 2\pi n/\lambda_p$. Since there is no known analytical form for the phase-matching function, we compute Eq.~\ref{eq:sig} and \ref{eq:sigff}, for $S=\{W,T~\text{and}~B\}$, numerically. The two-dimensional interference pattern $\widetilde{P}_W(\bold{p_s}|f_i)$ is shown in Fig.~2.
\begin{figure}[h!]
\centering
\includegraphics[width=0.4\textwidth]{fringes3.pdf}
\caption{Theoretically predicted interference pattern of the signal photons in the far-field of the two-slit mask conditioned on the detection of idler photons: $\widetilde{P}_W(\bold{p_s}|f_i)$. Postselection on the state of the environment corresponding to the highest visibility, $p_{s,x}=0$, leads to an apparent violation of Eq. \ref{eq:B2}.}
\label{fig:fringes}
\end{figure}
The visibility of the interference pattern is strongly dependent on the degree of freedom of the environment, $p_{s,x}$. This strong dependence is explained by the fact that the sinc term in the phase-matching function is non-separable in $p_{x}$ and $p_{y}$. This effect can be fully described with classical optics. For instance, consider a two-dimensional classical transverse spatial mode $\Omega(p_x,p_y)$, which is sent to the two-slit mask $W(r_y)$. Through the convolution theorem, the resulting two-dimensional interference pattern $I(p_x,p_y)$ is determined by the convolution of the input mode in wavevector space with the Fourier transform of the two-slit mask: $I(p_x,p_y)\propto|\Omega(p_x,p_y)\ast \widetilde{W}(p_y)|^2$. Hence, the resulting interference pattern at a given value of $p_x$ only depends on the input distribution at the same value of $p_x$. If the input mode is non-separable in its two arguments, the input distribution along $p_y$ depends on $p_x$ and so does the interference pattern.
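This classical picture can be sketched directly. Below, the mask transform for two rect slits of width $\Delta$ at $\pm d/2$ is the familiar sinc-times-cosine; the input mode, grid, and names are illustrative. Because the convolution acts along $p_y$ only, columns at different $p_x$ never mix.

```python
import numpy as np

def classical_pattern(omega, p_y, d, delta):
    """Far-field pattern I(p_x, p_y) ~ |omega * W~|^2, where the convolution
    with the mask transform W~(p_y) acts along p_y only: each p_x column of
    the input produces its own fringe pattern, independently of the others."""
    # FT of two rect slits of width delta centred at +/- d/2:
    # W~(p_y) ~ sinc(p_y*delta/2) * cos(p_y*d/2), with sinc(x) = sin(x)/x
    W_ft = np.sinc(p_y * delta / (2 * np.pi)) * np.cos(p_y * d / 2)
    out = np.array([np.convolve(row, W_ft, mode="same") for row in omega])
    return np.abs(out) ** 2
```

If two rows of `omega` (two values of $p_x$) carry the same $p_y$ distribution, their fringe patterns coincide; if the mode is non-separable, the patterns differ, which is exactly the $p_x$-dependent visibility described above.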
\begin{figure}[h!]
\centering
\includegraphics[width=0.4\textwidth]{MVP2.pdf}
\caption{Plot of (blue) the marginal probability distribution $M_W(p_{s,x})$ of the signal photons in wavevector space conditioned on the detection of idler photons. The marginal probability distribution $M_W(p_{s,x})$ is the one-dimensional distribution found by integrating the two-dimensional interference pattern over $p_{s,y}$. The scale for $M_W(p_{s,x})$ has been modified to fit the distribution on the same graph as the two other curves, which correspond to (red) the predictability $\PP$ and (green) the visibility $\V$ as a function of the degree of freedom of the environment, $p_{s,x}$. These quantities satisfy the equality $\PP^2+\V^2=1$ for all values of $p_{s,x}$.}
\label{fig:MVP}
\end{figure}
\subsection{The biased sampling relation} We can now compute the relevant quantities: $\{\brakett{\PP}=0.816,~\brakett{\V}=0.331,~\V_\text{max}=0.982,~\sigma_\PP^2=0.077,~\sigma_\V^2=0.148\}$. The total marginal probability, the predictability and the visibility as functions of $p_{s,x}$ are shown in Fig.~3. In our example, we have $\brakett{\PP}^2+\brakett{\V}^2=1-\sigma_\PP^2-\sigma_\V^2=0.775$, which is consistent with Eq.~\ref{eq:B2}. The apparent violation occurs only when we consider the visibility at $p_{s,x}=0$ instead of the average visibility. Here, the biased sampling relation, that we define as $\mathcal{B}= \brakett{\PP}^2+\V_\text{max}^2$, reaches a value of 1.630, which is more than twice as large as the limit for the averages, thus showing high which-alternative information and high visibility in a single experiment. The apparent violation of the duality principle is due to the fact that we favor one specific subset of the environment, $p_{s,x}=0$, which corresponds to the maximum visibility $\V_\text{max}$ in the distribution. This is a form of biased sampling, or break-down of the ``fair sampling" assumption.
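The quoted figures can be verified with a line of arithmetic:

```python
# Values quoted in the numerical example above
P_avg, V_avg, V_max = 0.816, 0.331, 0.982
var_P, var_V = 0.077, 0.148

lhs = P_avg**2 + V_avg**2        # average-based duality quantity
rhs = 1.0 - var_P - var_V        # variance-corrected bound
B = P_avg**2 + V_max**2          # biased sampling relation
```

Both sides agree at 0.775 (to rounding), while $\mathcal{B}$ evaluates to 1.630, well above the bound for the averages.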
\section{Discussion}
In our thought experiment, we have control over the apparent violation, or the biased sampling relation $\mathcal B$, by varying the degree of non-separability between $p_y$ and $p_x$, which is controlled with the phase-mismatch parameter $\varphi$. For a vanishing phase-mismatch parameter, $\varphi=0$ rad, the phase-matching function resembles a two-dimensional Gaussian profile and becomes nearly separable. In this case, the visibility of the interference pattern is nearly constant over the whole range of $p_x$, and postselection of one particular value of $p_x$ does not lead to an apparent violation of duality; see the supplementary material. Our choice of a negative value for the phase-mismatch parameter, $\varphi=-19$ rad, makes the phase-matching function non-separable and is therefore crucial to the observation of an apparent violation of duality.
The subset selected for the measurement of the visibility must have a low probability of occurrence for $\mathcal{B}$ to exceed the bound of Eq.~\ref{eq:B2} by a large amount. Notably, in the ideal case where i) a single state of the environment has vanishing probability and a corresponding visibility $\V_\text{max}=1$, and ii) all other visibilities are zero, $\mathcal{B}$ approaches the value of 2. As indicated in Fig.~3, the probability of finding a signal photon where the visibility is the highest, the region around $p_{s,x}=0$, is indeed low albeit non-zero. This low probability of occurrence is an important factor contributing to the apparent violation of the duality principle.
One might ask whether this apparent violation implies information transfer faster than the speed of light. It does not, and this can be justified in general terms. Our current understanding of quantum physics implies that any measurement on a subsystem of a larger quantum system is affected by the possibility of performing measurements on the remaining system, not by whether such a measurement has actually been performed. In more formal language, any measurement on the subsystem is perfectly described by the reduced density matrix of the subsystem, which one obtains by tracing over the remaining part of the total quantum system. In the case of entanglement between the subsystem and the remainder, this unavoidably leads to a mixed-state density matrix. This implies that there can be no such entanglement if the measurement shows the subsystem to be in a pure quantum state. Within this constraint, the measurements on the subsystem and on the remaining part can of course be correlated, but this information is not accessible by looking at only one subsystem. This is essentially the message of the no-signaling theorem \cite{ghirardi:80}: it is impossible to detect whether or not a measurement has been performed on one of two entangled subsystems by looking exclusively at the other subsystem. All experiments so far comply with this interpretation. Nevertheless, it is important to check such predictions again and again as novel experimental techniques become available.
\section{Conclusions}
We have derived the tightest possible relation, Eq.~\ref{eq:B2}, between the average predictability and the average visibility of a two-alternative system in the presence of an environment. This duality relation proved useful in the analysis of an apparent violation of the duality principle. Selection of one particular subset of the environment for the measurement of the visibility is the key to understanding this apparent violation. A high degree of non-separability between the main system and the environment is crucial to the observation of an apparent violation. According to our analysis, the duality principle in its standard form is safe and sound, but our new duality relation remains to be thoroughly tested.
\begin{acknowledgments}
This work was supported by the Canada Excellence Research Chairs (CERC) Program. E.~B.~acknowledges the financial support of the FQRNT, grant number 149713.
\end{acknowledgments}
\section{Introduction}
VANETs are envisioned to enable a range of applications, spanning from enhanced
transportation safety and efficiency to mobile infotainment,
while security and privacy enhancing technologies have been broadly accepted as prerequisites
for the deployment of such systems. A number of on-going efforts have yielded a
multitude of proposed schemes, including coordinated efforts such as those of
the IEEE 1609 working group, the Car-to-Car Communication Consortium, the
CAMP/VSC-2 project, and the SeVeCom project, which produced a full-fledged
security architecture for vehicle-to-vehicle and vehicle-to-infrastructure
communications.
Many aspects of security and privacy have already been addressed (e.g.,
in~\cite{Shen1,Shen2,Shen3}) but no solution has been yet proposed for the secure
discovery of the position of other nodes, in particular those within direct
communication range. This is an important problem because vehicular nodes are
location-aware, and location information is embedded in many VANET messages to
support various applications; transportation safety and geographical forwarding
(or GeoCast) are characteristic examples, while traffic monitoring and
management, as well as access to location-based services are also closely
related. In all such cases, nodes are required to reliably identify neighboring
nodes and determine their positions. Nonetheless, adversarial or faulty nodes
can falsify or alter such information, resulting in the disruption of system
operations.
Secure discovery of the positions of neighbors cannot be achieved by any of the
solutions in the literature. Secure localization techniques, which allow a
reliable determination of own location, are a building block but not the
solution to the problem at hand. Simply put, the reason is that an adversary
could advertise a false position in any discovery protocol. The presence of
trusted nodes would make the problem easier to solve: road-side infrastructure
or trustworthy specialized vehicles could help to securely localize other
vehicles. In such case, techniques in the literature, designed for mobile ad-hoc
networks, could be employed. However, this approach has severe limitations when
applied to vehicular environments: the presence of road-side infrastructure is
envisioned to be rather sparse and the presence of trustworthy nodes cannot be
guaranteed at all times, whereas position discovery is needed at any time and
location among any two or more vehicles.
To address this problem, we propose our Secure Neighbor Position Discovery
(SNPD) protocol, which enables any node (i) to discover the position of its
neighbors on-demand and in real-time; and (ii) to detect and discard faulty positions
and, thus, ignore their originators. SNPD therefore allows any vehicular node to
autonomously obtain a set of verified neighbor positions, leveraging the
contributions of its peers to weed out wrong-doers, without any prior
assumption about their trustworthiness.
In the rest of the paper, we first discuss related work and introduce
the system and adversary model we adopt, then we describe our SNPD protocol in
detail. A security analysis of SNPD follows, along with a performance
evaluation based on realistic vehicular mobility traces.
\section{Related Work\label{sec:related}}
Secure neighbor position discovery for vehicular environments is,
to the best of our knowledge, an open problem. Nevertheless, it relates to a
number of other problems that have instead been addressed before,
as discussed next. We emphasize that
our SNPD protocol is compatible with state-of-the-art security architectures
for vehicular networks, including those proposed by IEEE 1609.2~\cite{Ieee05:_IEEE_P1609.2}
and SeVeCom~\cite{Papadimitratos08:_secure_veh_comm_arch}.
\textbf{Securing own location and time information} is orthogonal to our problem,
as adversaries can acquire their own locations in a reliable manner, but then
advertise false positions to their neighbors. Own positioning and time synchronization
is thus a building block for SNPD, as it is for secure vehicular networking.
In vehicular environments, self-localization is mainly
achieved through Global Navigation Satellite Systems, e.g., GPS, whose security
can be provided by cryptographic and non-cryptographic defense
mechanisms~\cite{milcom2008}; alternatively, other terrestrial special-purpose
infrastructure (beacons) could be used~\cite{poovendran07}, along with techniques
to deal with non-honest beacons~\cite{zhong:theoryrobust}. In the rest of this
paper, we assume that devices can determine securely their own position and time
reference.
\textbf{Secure neighbor discovery (SND)}, that is, the discovery of directly
reachable nodes (communicating neighbors) or nodes within a distance (physical
neighbors)~\cite{snd-mag}, is only a step towards the solution we are after. To
put it simply, an adversarial node could be securely discovered as neighbor and
be indeed a neighbor (within some SND range), but it could still cheat about its
position within the same range. SND is a subset of the SNPD problem, since it
lets a node assess whether another node is an actual neighbor but it does not
verify the location it claims to be at. Nonetheless, properties of SND
protocols with proven secure solutions~\cite{asiaccs08,fmse-ccs} are useful
in our context: as an example, signal Time of Flight-based and other distance
measurements between two nodes can prevent relay attacks (i.e., malicious nodes
relaying, stealthily and verbatim, messages of other correct nodes).
\textbf{Neighbor position verification} was investigated in the context of
ad-hoc networks, with solutions relying on dedicated mobile or hidden base
stations~\cite{capkun08}, or on the availability of a number of trustworthy
devices~\cite{capkun:secpos}. Our SNPD protocol, instead, is a fully
distributed solution that does not require the presence of any particular
infrastructure or a-priori trusted neighbors. Also, unlike previous works, our
solution targets highly mobile environments and it only assumes RF
communication; indeed, non-RF communication, e.g., infra-red or ultra-sound, is
unfeasible in VANETs, where non-line-of-sight conditions are frequent and
car-to-car distances often are in the order of tens or hundreds of meters.
\section{System and adversary model}
\label{sec:model}
We consider a vehicular network whose nodes communicate over a high-bit-rate
data link through an RF interface.
We assume that each node knows its own location with
some maximum error $\epsilon_p$, and that it shares a common time reference
with the other nodes in the network: both requirements can be met by equipping
vehicles with GPS receivers, already a major trend in today's car
manufacturing\footnote{With the help of GPS, time synchronization,
fine time granularity, and relatively precise
location information are available. Currently, small-footprint,
low-cost GPS receivers are commercially available, achieving low
synchronization and localization errors.}.
Also, nodes can perform Time of Flight (ToF)-based RF ranging using one message
transmission, with a maximum error equal to $\epsilon_r$: as discussed
in~\cite{capkun:secpos,techrep}, this is a reasonable assumption, although it requires
modifications to the current off-the-shelf radio interfaces; $\epsilon_p$
and $\epsilon_r$ are assumed to be equal for all nodes.
Each node has a unique identity, and carries cryptographic keys that allow it
to authenticate messages from other nodes in the network.
Although there are various ways to enable authentication, here
we only require that message authentication is done locally
and we assume that each node $X$ holds its own pair of private and public keys,
$k_X$ and $K_X$, respectively, as well as a set of one-time use keys
\{$k'_X, K'_X$\}. $X$ can encrypt and decrypt data with
its key(s) and the public keys of other nodes; also, it can
produce digital signatures with its private key.
We assume that the binding between $X$ and $K_X$ can be validated by any node,
as in state-of-the-art vehicular communication architectures.
Nodes either comply with the SNPD protocol (\emph{correct}) or they deviate from
it (\emph{faulty} or \emph{adversarial}). Adversarial nodes can advertise
arbitrarily erroneous positions in messages they inject, to mislead other nodes
about their position.
Adversaries are \emph{external} or \emph{internal}, depending on whether they
lack or possess the cryptographic keys and credentials of system nodes,
respectively. External adversaries can only relay or replay messages without
changes, or jam the communication. Internal adversaries are more powerful in
that they can fully participate in the protocol execution, forging arbitrary
messages with faked own positions. Recall though that each adversary can inject
messages only according to the cryptographic keys it possesses; it cannot forge
messages on behalf of other nodes whose keys it does not have.
Another classification of adversaries that is of interest to us is between
\emph{independent} and \emph{colluding}
adversaries: the former act without knowledge of
other adversaries in the neighborhood, while the latter, by far the most
dangerous, coordinate their actions by exchanging information.
In this work, we focus primarily on internal adversaries with standard
equipment (e.g., omnidirectional antennas, standard--compliant wireless
cards, etc.). We distinguish them into (i) {\em knowledgeable}, i.e., adversaries
that at any point in time know the exact positions of all their communication
neighbors, and (ii) {\em unknowledgeable}, otherwise. In
Section~\ref{sec:analysis}, we will outline the threats which can be posed
by both independent and colluding adversaries, and discuss possible additional
threats carried out by adversaries using non-standard equipment (e.g.,
directional antennas).
\section{Secure neighbor position discovery protocol}
\label{sec:protocol}
The SNPD protocol we propose allows any node in the network to discover and
verify the position of its communication neighbors participating in the
protocol message exchange. SNPD can be initiated in a reactive manner by any
node, which we refer to as the {\em verifier}. Our solution is based on a
\emph{best-effort, cooperative approach} that leverages information collected by
neighboring nodes thanks to the broadcast nature of the wireless medium. With
such information, the verifier can compute, via ToF-based ranging, distances
between pairs of neighbors, and then perform a sequence of tests that allow it
to classify its communication neighbors as:
\begin{itemize}
\item {\em Verified}, i.e., nodes the verifier deems to be at the claimed
position;
\item {\em Faulty}, i.e., nodes the verifier deems to have announced an
incorrect position;
\item {\em Unverifiable}, i.e., nodes the verifier cannot prove to be either
correct or faulty, due to insufficient information on these nodes or to
an inconclusive test outcome.
\end{itemize}
The objective of our SNPD protocol is to be robust to adversarial nodes, i.e.,
to correctly identify and reject false positions and ignore their originators.
In other words,
it is necessary to minimize false negative and false positive outcomes, i.e.,
adversaries with positions deemed verified and correct nodes with positions
deemed faulty, as well as the number of unverifiable nodes.
We stress that the SNPD protocol only verifies the position of those neighbors
with which the message exchange takes place successfully. It therefore
disregards nodes for which the protocol exchange prematurely ends, e.g., due to
message loss or communication neighbors that refuse to take part in the
protocol. SNPD assumes that node positions do not vary significantly
during one protocol execution, which is realistic if we consider that a complete
message exchange takes no more than a few hundred milliseconds.
Also, SNPD does not aim at building a consistent map of verified nodes, as
every verifier autonomously tags its neighbors as verified, faulty or
unverifiable.
Next, we
detail the message exchange between the verifier and its communication
neighbors, followed by a description of the security tests run by the verifier.
Table~\ref{tab:notation} summarizes the notations used throughout the protocol
description.
\subsection{Message exchange}
We denote by $t_X$ the time at which a node $X$ starts a broadcast
transmission and by $t_{XY}$ the time at which a node $Y$ starts
receiving that same transmission; $p_X$ is the current position of $X$,
and $\mathbb{N}_X$ is the current set of its communication neighbors.
Consider a verifier $S$ that initiates the SNPD protocol.
The message exchange procedure is outlined in Algorithm~\ref{alg:exchange_init}
for $S$, and in Algorithm~\ref{alg:exchange_neigh} for
any of $S$'s communication neighbors.
The verifier starts the protocol by broadcasting a {\sc poll} whose transmission
time $t_S$ is stored locally (Alg.~\ref{alg:exchange_init}, lines 2-3). Such
message is anonymous, since (i) it does not contain the verifier's identity,
(ii) it is transmitted employing a fresh MAC address, and (iii) it contains a
public key $K_{S}'$ from a one-time use private/public key pair $k_{S}',K_{S}'$,
taken from a pool of anonymous keys which do not allow neighbors to map them
onto a specific node. Including a one-time key in the {\sc poll} also
ensures that the message is fresh (i.e., the key acts as a nonce).
A communication neighbor $X \in \mathbb{N}_S$ that receives the {\sc poll}
stores its reception time $t_{SX}$, and extracts a random wait interval $T_X \in
[0,T_{max}]$ (Alg.~\ref{alg:exchange_neigh}, lines 2-5). After $T_X$ has
elapsed, $X$ broadcasts a {\sc reply} message using a fresh MAC address, and
records the corresponding transmission time $t_X$
(Alg.~\ref{alg:exchange_neigh}, lines 6-10). The {\sc reply} contains encrypted
information for $S$, namely the signed neighbor identity, $Sig_X$, and the {\sc
poll} reception time: we refer to these data as $X$'s {\it commitment},
$\mathbb{c}_X$. The hash $h_{K'_{S}}$, derived from the verifier's public key,
$K'_{S}$, is also included to bind {\sc poll} and {\sc reply} belonging to the
same message exchange.
Upon reception of a {\sc reply} message from a communication neighbor $Y$,
the verifier $S$ stores the reception time $t_{YS}$ and the
commitment $\mathbb{c}_Y$ (Alg.~\ref{alg:exchange_init}, lines 4-6).
A different communication neighbor of $S$, e.g., $X$, receives the {\sc reply}
message broadcast by $Y$,
if $Y$ is a communication neighbor of both $S$ and $X$, i.e.,
$Y \in \mathbb{N}_S \cap \mathbb{N}_X$. In such case, $X$ too stores
the reception time $t_{YX}$ and the commitment $\mathbb{c}_Y$
(Alg.~\ref{alg:exchange_neigh}, lines 11-13).
Note that {\sc reply} messages are anonymous too; hence, a node
records all commitments it receives without knowing their origin.
After a time $T_{max}+\Delta+T_{jitter}$, $S$ broadcasts a {\sc reveal} message;
$\Delta$ accounts for the propagation and contention lag of {\sc reply} messages
scheduled at time $T_{max}$, and $T_{jitter}$ is a random time added to thwart
jamming efforts on this message. Through the {\sc reveal}, the verifier $S$ (i) unveils its
identity by including its signature and its public key to decrypt it, and (ii) proves
to be the author of the original {\sc poll}. The latter is achieved by
attaching the encrypted hash $E_{k_{S}'}\{h_{K_{S}'}\}$
(Alg.~\ref{alg:exchange_init}, lines 7-9).
Once the identity of the verifier is known, each neighbor $X$, which received
$S$'s original {\sc poll}, unicasts to $S$ an encrypted and signed {\sc report}
message containing its own position, the transmission time of its {\sc reply},
and the list of pairs of reception times and commitments referring to the {\sc
reply} broadcasts it received (Alg.~\ref{alg:exchange_neigh}, lines 14-17).
Commitments are included `as they are', since only $S$ can decrypt them and
match the identity of the nodes that created the commitments with the reported
reception times.
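For concreteness, the payload of the four message types can be summarized as follows. This is an illustrative sketch only: the field names are ours, and actual encodings, encryption primitives, and MAC-layer details are abstracted away.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Poll:                  # anonymous: fresh MAC address, no sender identity
    K_prime_S: bytes         # one-time public key; also acts as a nonce

@dataclass
class Reply:                 # anonymous broadcast from a neighbor X
    commitment: bytes        # c_X = E_{K'_S}{Sig_X, t_SX}; only S can open it
    h_K_prime_S: bytes       # hash of K'_S, binds the reply to the poll

@dataclass
class Reveal:                # the verifier unveils its identity
    K_S: bytes               # long-term public key
    sig_S: bytes             # signature over the message
    proof: bytes             # E_{k'_S}{h_{K'_S}}: proves authorship of the poll

@dataclass
class Report:                # unicast to S, encrypted and signed
    p_X: Tuple[float, float]             # claimed position of X
    t_X: float                           # transmission time of X's reply
    heard: List[Tuple[float, bytes]] = field(default_factory=list)
    # each entry: (reception time t_YX, commitment c_Y) of an overheard reply
```

The `heard` list is what lets the verifier later compute distances between pairs of its neighbors, not just to itself.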
\subsection{Position verification\label{subsec:verification}}
Once the message exchange is concluded, $S$ decrypts
the received data and acquires the position of all
neighbors that participated in the protocol, i.e.,
$\{p_X, \forall X \in \mathbb{N}_S\}$.
$S$ also knows the transmission time of its
{\sc poll} and learns the transmission time of all subsequent {\sc reply}
messages, as well as the corresponding reception times recorded by
the recipients of such broadcasts.
Applying a ToF-based technique, $S$ can thus compute its
distance from each communication neighbor, as well as
the distances between
pairs of communication neighbors that happen to share a link.
In particular, denoting by $c$ the speed of
light, we define $d_{XY}=(t_{XY}-t_X) \cdot c$, i.e., the distance that $S$ computes
from the timing information it collected about the broadcast message sent by $X$.
Similarly, we define $d_{YX}=(t_{YX}-t_Y) \cdot c$, i.e., the distance that $S$ computes
using the information related to the broadcast by $Y$.
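Concretely, each such distance is one multiplication away from the collected timestamps; a minimal sketch (timestamps in seconds, names ours):

```python
C = 299_792_458.0  # speed of light [m/s]

def tof_distance(t_tx: float, t_rx: float) -> float:
    """Distance inferred from one broadcast, e.g. d_XY = (t_XY - t_X) * c."""
    return (t_rx - t_tx) * C
```

A one-microsecond flight time thus corresponds to roughly 300 m, which shows why the ranging error $\epsilon_r$ hinges on sub-nanosecond timestamping accuracy.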
Exploiting its knowledge, the verifier can run
verification tests to fill
the set $\mathbb{F}_S$ of faulty communication neighbors,
the set $\mathbb{V}_S$ of verified nodes, and the unverifiable set $\mathbb{U}_S$.
The first verification is carried out through the {\bf Direct Symmetry (DS)} test,
detailed in Algorithm~\ref{alg:vt_ds}, where $|x|$ denotes the modulus of $x$ and
$\left\|p_X-p_Y\right\|$ is the Euclidean distance between locations $p_X$ and $p_Y$.
For direct links between the verifier and each of its
communication neighbors, $S$ checks whether reciprocal ToF-derived distances
are consistent (i) with each other, (ii) with the position
advertised by the neighbor, and (iii) with a proximity range $R$.
The proximity range $R$ upper bounds the distance at which two nodes can
communicate, or, in other words, corresponds to the maximum nominal transmission range.
The first check is performed by comparing the distances $d_{SX}$ and $d_{XS}$
obtained from ranging, which shall not differ by more than twice the ranging
error (Alg.~\ref{alg:vt_ds}, line 4). The second check verifies that the position advertised by the
neighbor is consistent with such distances, within an error margin equal to
$2\epsilon_p+\epsilon_r$ (Alg.~\ref{alg:vt_ds}, line 5). This check is trivial but fundamental, since
it correlates positions to verified distances: without it, an attacker could fool
the verifier by simply advertising an arbitrary position along with correct
broadcast transmission and reception timings. Finally, $S$ verifies that
$d_{SX}$ is not larger than $R$ (Alg.~\ref{alg:vt_ds}, line 6), and declares a neighbor as faulty if a
mismatch surfaced in any of these checks\footnote{
The latter two checks are performed on both $d_{SX}$ and $d_{XS}$, however
in Algorithm~\ref{alg:vt_ds} they are done on $d_{SX}$ only, for clarity of presentation.}.
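The three checks of the {\bf DS} test can be sketched in Python as follows; the helper and parameter names are ours and only illustrate the logic of Algorithm~\ref{alg:vt_ds}:

```python
# Hedged sketch of the DS-test checks: (i) symmetry of the reciprocal
# ToF distances, (ii) consistency with the advertised position, and
# (iii) feasibility w.r.t. the proximity range R. Names are illustrative.
import math

def euclidean(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def ds_test(d_SX, d_XS, p_S, p_X, eps_r, eps_p, R):
    """Return True if neighbor X passes the Direct Symmetry test."""
    if abs(d_SX - d_XS) > 2 * eps_r:                          # check (i)
        return False
    if abs(euclidean(p_S, p_X) - d_SX) > 2 * eps_p + eps_r:   # check (ii)
        return False
    if d_SX > R:                                              # check (iii)
        return False
    return True
```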
The {\bf DS} test implies {\it direct} verifications
that compare trusted information collected by the verifier
against data advertised by each neighbor.
The content of the messages received by $S$, however, allows also
{\it cross}-verifications, i.e., checks on the information mutually
gathered by each pair of communicating neighbors. Such checks are done in
the {\bf Cross-Symmetry (CS)} test, in Algorithm~\ref{alg:vt_cs}.
The {\bf CS} test ignores nodes already declared as faulty by the {\bf DS} test
(Alg.~\ref{alg:vt_cs}, line 6) and only considers nodes that proved to be
communication neighbors between each other, i.e., for which ToF-derived mutual
distances are available (Alg.~\ref{alg:vt_cs}, line 7). Then, it verifies the
symmetry of such distances (Alg.~\ref{alg:vt_cs}, line 9), their consistency
with the positions declared by the nodes (Alg.~\ref{alg:vt_cs}, line 10), and
their feasibility with respect to the proximity range (Alg.~\ref{alg:vt_cs},
line 11). For each communication neighbor $X$, a link counter ${\sc l}_X$ and a
mismatch counter ${\sc m}_X$ are maintained. The former is incremented at every
new cross-verification on $X$, and records the number of links between $X$ and
other communication neighbors of $S$ (Alg.~\ref{alg:vt_cs}, line 8). The latter
is incremented every time at least one of the cross-checks on distances and
positions fails (Alg.~\ref{alg:vt_cs}, line 12), and identifies the potential
for $X$ being faulty.
Once all neighbor pairs have been processed, a node $X$ is added
to the unverifiable set $\mathbb{U}_S$ if it shares less than two
neighbors with $S$
(Alg.~\ref{alg:vt_cs}, line 17).
Indeed, in this case the available information is considered
insufficient to tag the node as verified or
faulty (see Sec.~\ref{sec:analysis} for more details).
Otherwise, if $S$ and $X$ have two or more common neighbors, $X$ is declared as
faulty, unverifiable, or verified, depending on the percentage of mismatches
in the cross-checks it was involved
(Alg.~\ref{alg:vt_cs}, lines 18-22). More precisely, $X$ is added to $\mathbb{F}_S$,
$\mathbb{U}_S$ or $\mathbb{V}_S$, depending on whether
the ratio of the number of mismatches
to the number of checks is greater than, equal to, or less than
a threshold~$\delta$.
We point out that the lower the $\delta$, the
fewer the failed cross-checks needed to declare a node as faulty,
while the higher the $\delta$, the higher the probability of false negatives.
In the following, we set $\delta=0.5$ so that a majority rule is enforced:
the verifier makes a decision on the correctness of a node
by relying on the opinion of the majority of shared communication neighbors.
If not enough common neighbors
are available to build a reliable majority, the node is unverifiable.
As shown in the next section, this choice
makes our SNPD protocol robust to attacks in many different situations.
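The final classification step of the {\bf CS} test, with the majority threshold $\delta=0.5$, can be summarized by the following sketch (function and tag names are ours):

```python
# Hedged sketch of the CS-test classification: a neighbor sharing fewer
# than two neighbors with the verifier is unverifiable; otherwise the
# mismatch-to-links ratio is compared against delta (0.5 = majority rule).

def classify(links: int, mismatches: int, delta: float = 0.5) -> str:
    if links < 2:
        return "unverifiable"     # not enough shared neighbors
    ratio = mismatches / links
    if ratio > delta:
        return "faulty"           # majority of cross-checks failed
    if ratio == delta:
        return "unverifiable"     # no reliable majority
    return "verified"
```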
The third verification, the {\bf Multilateration (ML)} test, is
detailed in Algorithm~\ref{alg:vt_ml}.
The {\bf ML} test searches the verified set determined
through the {\bf DS} and {\bf CS} algorithms
for suspicious situations, in which nodes in $\mathbb{V}_S$ declare a high
number of asymmetric links.
When a suspect node is found, the {\bf ML} test exploits as anchors
other nodes in $\mathbb{V}_S$, and multilaterates the actual position
of the node under verification.
The {\bf ML} test looks for any verified
neighbor $X$ of the initiator $S$ that failed to notify
a link reported instead by another party $Y$ (Alg.~\ref{alg:vt_ml}, line 7).
When such a node is found, it is added to a {\it waiting set}
$\mathbb{W}_S$ (Alg.~\ref{alg:vt_ml}, line 8) and a curve $L_X(S,Y)$ is computed.
Such a curve is the locus of points that can generate a transmission
whose Time Difference of Arrival (TDoA) at $S$ and $Y$ matches
that measured by the two nodes, i.e., $\left|t_{XS}-t_{XY}\right|$.
It is easy to verify that the curve is a hyperbola, which is
added to the set $\mathbb{L}_X$ (Alg.~\ref{alg:vt_ml}, line 9).
Once all couples of verified nodes have been checked, $\mathbb{W}_S$ is filled
with suspect neighbors. For each node $X$ in $\mathbb{W}_S$, $S$ exploits the
hyperbolae in $\mathbb{L}_X$ to multilaterate the position of $X$, referred to
as $p_X^{ML}$, similarly to what is done in~\cite{capkun:secpos}
(Alg.~\ref{alg:vt_ml}, line 14). Note that $\mathbb{L}_X$ must include at least
two hyperbolae for $S$ to be able to compute the position of $X$ through
multilateration, and this implies the presence of at least two shared neighbors
between $S$ and $X$ (Alg.~\ref{alg:vt_ml}, line 13). The resulting position
$p_X^{ML}$ is then compared against that advertised by $X$, $p_X$. If the
difference exceeds a given error margin, neighbor $X$ is moved from the verified
set to the faulty one (Alg.~\ref{alg:vt_ml}, lines 15-17).
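The geometric consistency exploited by the {\bf ML} test can be illustrated by the sketch below, which checks whether a candidate position lies on the TDoA hyperbola $L_X(S,Y)$; the tolerance handling (in meters) is our assumption:

```python
# Hedged sketch: a position p is consistent with the hyperbola L_X(S, Y)
# if the difference of its distances to S and Y matches the measured
# TDoA |t_XS - t_XY| converted to meters. tol is illustrative.
import math

C = 299_792_458.0  # speed of light in m/s

def euclidean(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def on_tdoa_hyperbola(p, p_S, p_Y, t_XS, t_XY, tol=1.0):
    measured = abs(t_XS - t_XY) * C
    predicted = abs(euclidean(p, p_S) - euclidean(p, p_Y))
    return abs(measured - predicted) <= tol
```

Multilaterating $p_X^{ML}$ then amounts to intersecting at least two such hyperbolae.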
\begin{comment}
After all nodes that took part in the message exchange have been declared as unverifiable, faulty,
or verified, trusted neighbors from the $\mathbb{V}_S$ set are
exploited to ascertain the nature of unverifiable nodes, where possible.
Nodes in $\mathbb{U}_S$ and $\mathbb{V}_S$ are checked for
direct links: if a link is found~\footnote{Note
that a node in $\mathbb{U}_S$ necessarily has zero or one common neighbor
with $S$. Thus, at most one link to a node in $\mathbb{V}_S$ is going
to be found for each unverified node.},
similar tests to those employed in the {\bf DS} test are run, and
previously unverified nodes can be declared either faulty or verified,
leveraging the opinion of the trusted common neighbor
(Alg.~\ref{alg:vt_ml}, lines 25-35).
\end{comment}
\section{Security analysis}
\label{sec:analysis}
\begin{comment}
We now discuss the security properties of the SNPD protocol
and of the DS and CS tests, and present a detailed analysis
in presence of attacks led by one or multiple adversarial nodes.
For clarity in the following analysis, we remind the reader that (i)
in all cases, the goal of an adversary is to make the
verifier believe that the fake advertised position
is correct; (ii) if the DS
test on a communication neighbor fails, the verifier declares the neighbor
faulty, otherwise the verifier runs the CS test; (iii) the output of
the CS test classifies a communication neighbor as either faulty, unverifiable or
verified.
\subsection{Remarks on the SNPD protocol and security tests}
\end{comment}
We analyze the security properties of the proposed
scheme in presence of adversarial nodes, whose objective is to make
the verifier believe that the fake positions they advertise are correct.
We consider scenarios of increasing complexity: we start by discussing
the basic workings of the SNPD protocol
in presence of a single adversary and different shared neighborhoods;
we then move to the case of multiple adversaries, at first assuming they act
independently and, then, that they cooperate to perform the attack;
finally, we examine the resilience of the scheme to a number of well-known attacks.
\subsection{Single adversary, no common neighbors}
\label{subsec:1a0n}
Consider a verifier $S$ that starts the SNPD protocol in presence
of an adversary $M$, with which it shares no common neighbor.
In order to mount a successful attack, $M$ must
tamper with the data $S$ uses for ranging,
so that the resulting distance confirms its fake advertised position.
To this end, $M$ can forge at its convenience the time information
in the messages it generates. In particular, let $p'_M$ be
the fake position that $M$ wants to advertise; we denote
by $t'_{SM}$ the fake timing that $M$ introduces in its {\sc reply},
and by $t'_M$ the fake timing inserted in its {\sc report} (in addition
to $p'_M$).
The {\bf DS} test (Alg.~\ref{alg:vt_ds}) run
by $S$ on $M$ checks the consistency between distances, by verifying that
$\left|d_{SM}-d_{MS}\right|\leq 2\epsilon_r$, or:
\begin{equation}
\left| (t'_{SM}-t_S)\cdot c - (t_{MS}-t'_M)\cdot c \right| \leq 2\epsilon_r
\label{eq:vt1a}
\end{equation}
and that positions are also coherent with
the distances, i.e.,
$\left| \left\|p_S-p'_M\right\| - d_{SM}\right| \leq 2\epsilon_p+\epsilon_r$, or,
equivalently:
\begin{equation}
\left| \left\|p_S-p'_M\right\| - (t'_{SM}-t_S)\cdot c \right| \leq 2\epsilon_p+\epsilon_r
\label{eq:vt2}
\end{equation}
Thus, the adversary must forge $t'_M$ and $t'_{SM}$, so that
(\ref{eq:vt1a})--(\ref{eq:vt2}) still hold
after its real position $p_M$ is replaced with $p'_M$.
Solving the equation system obtained by setting
the error margin to zero
in (\ref{eq:vt1a})--(\ref{eq:vt2}),
we obtain:
\begin{equation}
\label{eq:fake_tM}
t'_M = t_{MS} - \frac{\left\|p_S-p'_M\right\|}{c}
= t_M + \frac{\left\|p_S-p_M\right\|}{c} - \frac{\left\|p_S-p'_M\right\|}{c}
\end{equation}
\begin{equation}
\label{eq:fake_tSM}
t'_{SM} = t_S + \frac{\left\|p_S-p'_M\right\|}{c}
= t_{SM} - \frac{\left\|p_S-p_M\right\|}{c} + \frac{\left\|p_S-p'_M\right\|}{c}
\end{equation}
Note that $p'_M$ is chosen by $M$, and that $M$ knows
$t_{M}$ in (\ref{eq:fake_tM}) (since this is the actual transmission
time of its own {\sc reply}) and
$t_{SM}$ in (\ref{eq:fake_tSM}) (since this is
the time at which it actually received the {\sc poll} from $S$).
We therefore have a system of two equations that $M$ can solve, in the
two unknowns $t'_M$ and $t'_{SM}$, only if it is aware of $p_S$,
i.e., it is a knowledgeable adversary.
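A numerical sketch of (\ref{eq:fake_tM})--(\ref{eq:fake_tSM}): a knowledgeable adversary simply shifts its timestamps by the time-of-flight difference between its real and fake distances to the verifier (variable names follow the text; the code is ours):

```python
# Forged timings: shift = ToF(real distance) - ToF(fake distance),
# added to t_M and subtracted from t_SM, per the equations above.
import math

C = 299_792_458.0  # speed of light in m/s

def euclidean(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def forge_timings(t_M, t_SM, p_S, p_M, p_M_fake):
    """Return (t'_M, t'_SM) making ranging consistent with p_M_fake."""
    shift = (euclidean(p_S, p_M) - euclidean(p_S, p_M_fake)) / C
    return t_M + shift, t_SM - shift
```

With these values, both $d_{SM}$ and $d_{MS}$ evaluate to $\left\|p_S-p'_M\right\|$, so the {\bf DS} checks pass.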
We stress that, for $M$ to be knowledgeable, two conditions must hold:
first, $M$ must have previously run the SNPD protocol to discover
the identity and position of its neighbors; second,
the verifier's position must have not changed since such discovery procedure.
Clearly, as $M$ cannot foresee when $S$ starts the SNPD protocol, such conditions are extremely
hard to fulfill, especially in a highly dynamic environment such as the vehicular one.
Nevertheless, if $M$ is aware of $S$'s location, the advertised position
$p'_M$ will pass the {\bf DS} test provided that it is
within the proximity range $R$, as shown in Fig.~\ref{fig:security_1a0n}.
Given such potential weakness,
the SNPD protocol marks isolated neighbors as unverifiable in the {\bf CS} test,
even if they pass the {\bf DS} test.
\subsection{Single adversary, one common neighbor}
\label{subsec:1a1n}
We now add to the previous scenario a node $X$, which is a correct
neighbor, common to $S$ and $M$.
Recall that, in carrying out
its attack, $M$ can forge messages with altered information,
but it cannot modify the content of messages sent by other nodes,
since they are all encrypted and signed.
The discussion in Sec.~\ref{subsec:1a0n}
applies again, since the fake position advertised by $M$
needs to pass the {\bf DS} test: $M$ must be aware of $S$'s current position and
must forge $t'_{M}$ and $t'_{SM}$ according to $p_S$ and $p'_{M}$.
However, the presence of the common neighbor introduces two additional
levels of security.
First, the {\sc poll} and {\sc reply} messages
are anonymous, hence $M$ does not know if the verifier is $S$ or $X$
upon reception of such messages.
However, if it wants to take part in the protocol, $M$ is forced
to advertise the fake {\sc poll} reception time $t'_{SM}$ in its
{\sc reply} message, before receiving the {\sc reveal} and discovering
the verifier's identity.
The only option for $M$ is then to randomly guess who the
verifier is, and properly change $t_{SM}$ into $t'_{SM}$, as
in (\ref{eq:fake_tSM}), and this implies a 0.5 probability of
failure in the attack.
Second, the {\bf CS} test on the pair $(M,X)$ requires that
$\left|d_{XM}-d_{MX}\right|\leq 2\epsilon_r$ and
$\left| \left\|p_X-p_M\right\| - d_{XM}\right| \leq 2\epsilon_p+\epsilon_r$.
Exactly as before, to pass these checks, $M$ is forced to advertise the fake timings:
\begin{eqnarray}
\label{eq:fake_tM2}
t'_M & = & t_M + \frac{\left\|p_X-p_M\right\|}{c} - \frac{\left\|p_X-p'_M\right\|}{c}\\
\label{eq:fake_tXM}
t'_{XM} & = & t_{XM} - \frac{\left\|p_X-p_M\right\|}{c} + \frac{\left\|p_X-p'_M\right\|}{c}
\end{eqnarray}
If $M$ knows $X$'s current position $p_X$, it can solve (\ref{eq:fake_tXM})
and announce the forged $t'_{XM}$ in its {\sc report} to $S$.
However, (\ref{eq:fake_tM2}) introduces a second expression for $t'_M$,
whereas $M$ can only advertise one single $t'_M$. In order to pass
both {\bf DS} and {\bf CS} tests, $M$ needs to announce a $t'_M$ that satisfies (\ref{eq:fake_tM})
and (\ref{eq:fake_tM2}), which implies:
\begin{equation}
\label{eq:conditionA_1a1n}
\left\|p_S-p_M\right\| - \left\|p_S-p'_M\right\| =
\left\|p_X-p_M\right\| - \left\|p_X-p'_M\right\|
\end{equation}
In other words, $M$ is constrained to choose locations with the
same distance increment (or decrement) from
$S$ and $X$. In~(\ref{eq:conditionA_1a1n}),
$p_S$, $p_X$, and $p_M$ are fixed and known,
hence distances between $p_S$ and $p_M$, and between $p_X$ and $p_M$
can be considered as constant. Since $p'_M$ is variable
over the plane, we rewrite (\ref{eq:conditionA_1a1n}) as
$\left\|p_X-p'_M\right\| - \left\|p_S-p'_M\right\| = k$,
which is the equation describing a hyperbola with foci in $p_S$ and $p_X$,
and passing through $p_M$.
It follows that only positions on such hyperbola satisfy the
four constraints in (\ref{eq:fake_tM}), (\ref{eq:fake_tSM}),
(\ref{eq:fake_tM2}), and (\ref{eq:fake_tXM}), and
$p'_M$
must lie on that curve in order to pass all tests.
Examples of this condition are shown in Fig.~\ref{fig:security_1a1n}.
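Condition (\ref{eq:conditionA_1a1n}) can also be verified numerically; in the sketch below (coordinates are arbitrary examples of ours), only fake positions keeping the distance difference to the two foci equal to the constant $k$ pass the check:

```python
# Hedged numerical check of the attack hyperbola with foci p_S, p_X,
# passing through the adversary's real position p_M.
import math

def euclidean(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

p_S, p_X, p_M = (0.0, 0.0), (400.0, 0.0), (100.0, 50.0)
k = euclidean(p_X, p_M) - euclidean(p_S, p_M)  # hyperbola constant

def on_attack_hyperbola(p_fake, tol=1e-9):
    return abs(euclidean(p_X, p_fake) - euclidean(p_S, p_fake) - k) <= tol
```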
Summarizing, the presence of a common neighbor $X$ drastically reduces
the vulnerability of the verifier to attacks, since $M$ is
now required (i) to be knowledgeable,
(ii) to correctly guess the verifier's identity,
and (iii) to advertise a fake position only along a specific curve.
However, since some space for successful attacks remains,
the {\bf CS} test marks as unverifiable nodes that passed the {\bf DS} test
but share only one neighbor with the verifier.
We also stress that, if $M$ tweaks the timings so as to pass
the {\bf DS} test and does not care about the matching with $X$, it will still be
tagged as unverifiable.
\subsection{Single adversary, two or more common neighbors}
\label{subsec:1a2+n}
In the case of two or more common neighbors, we split the discussion
into the two following cases: (i) a generic network topology and (ii)
collinear nodes.
{\em (i) Generic network topology}.
When a second correct neighbor $Y$ is shared
between $S$ and $M$~\footnote{Note that we do not make any assumption
on the connectivity between $X$ and $Y$.}, the discussion in Sec.~\ref{subsec:1a1n}
can be extended as follows. We note that, as before,
the adversary $M$ has to be knowledgeable;
however, a
second common neighbor reduces to 0.33 the probability that $M$ correctly
guesses the verifier's identity. More importantly, by applying the same
reasoning as in Sec.~\ref{subsec:1a1n}, $M$ has now to forge four time values,
i.e., $t'_M$, $t'_{SM}$, $t'_{XM}$, and $t'_{YM}$, so that six equations are
satisfied, i.e., (\ref{eq:fake_tM}), (\ref{eq:fake_tSM}), (\ref{eq:fake_tM2}),
(\ref{eq:fake_tXM}), and the two equations corresponding to the cross-check
with the second common neighbor $Y$~\footnote{The latter two equations can be
obtained from (\ref{eq:fake_tM2})--(\ref{eq:fake_tXM}) by replacing $p_X$,
$t_{XM}$ and $t'_{XM}$, respectively, with $p_Y$, $t_{YM}$ and $t'_{YM}$.}.
To fulfill the constraints on $t'_M$, now $M$ has
to announce a position $p'_M$ that is equally farther from
(or closer to) $S$, $X$ and $Y$ with respect to its actual
location $p_M$.
The point satisfying such condition lies at the
intersection of three hyperbolae with foci in $p_S$ and $p_X$,
$p_S$ and $p_Y$, $p_X$ and $p_Y$, respectively,
and this single point actually corresponds to the real position of the adversary, $p_M$.
Accordingly, in presence of two common neighbors,
the {\bf CS} test marks a node with no mismatches as verified.
The majority rule (i.e., $\delta=0.5$) results
instead in the adversary being tagged as faulty
when mismatches are recorded with both common neighbors.
Finally, the adversary is added to the unverifiable set if
it is capable of fooling $S$ and either $X$ or $Y$, since that leads
to one mismatch over two links checked.
We stress that deceiving $S$ and one of the common neighbors
requires, beside the knowledge of their current positions and a correct guess
on the verifier's identity, also the pinning of which {\sc reply}
comes from which neighbor
(i.e., $M$ must randomly map $t_{XM}$ onto $p_X$ and
$t_{YM}$ onto $p_Y$ for the computations on the hyperbolae to work).
Thus, the guess taken by $M$ in the hope of being
marked as unverifiable has a success probability of 0.165, jointly given by
the probability of guessing the right
verifier (0.33) and the probability of guessing the right
mapping (0.5) of {\sc reply} reception times onto neighbor positions.
When three or more common neighbors are present between $S$ and $M$,
the chances of a successful attack drop to zero.
Indeed, not only the probability of guessing
the right originators of the different messages shrinks as
the size of the common neighborhood grows, but the majority rule
dooms the adversary to insertion in the faulty set,
even when all random guesses are exact.
By extending the above analysis on the hyperbolae, we observe that,
with a threshold $\delta=0.5$, when $S$ and $M$ share $n \geq 3$
communication neighbors, the mismatch-to-links ratio is $\frac{n-1}{n} > \delta$.
A summary of the security of the SNPD protocol, in presence of a single
adversary and in a generic network topology, is presented in
Tab.~\ref{tab:security_1a}, where different rows identify different behaviors of
the neighbor $X$ under verification by $S$. The columns represent the number of
correct neighbors shared by $S$ and $X$. For each combination, we report the set
to which $X$ is assigned by $S$, possibly with a probability value due to the
adversary's random guessing on the roles of neighbors.
{\em (ii) Collinear nodes}.
When the majority of common neighbors
is collinear to $S$ and an adversary $M$, and lies on the same side as $S$ with
respect to $p_M$, a degree of freedom exists for the attacker.
Indeed, $M$ is verified if it announces a fake position that is
collinear with $p_M$ and $p_S$, within a distance $R$ from
$S$, and such that the majority of the common neighbors still
lies on the same side as $S$ with respect to $p'_M$.
This case, however, hardly leads to an advantage for the adversary, since $p'_M$
must remain aligned with the positions of the other nodes, must respect the ordering
with the majority of them, and cannot exceed $S$'s proximity range.
\subsection{Multiple independent adversaries}
\label{subsec:ma}
We now consider the presence of multiple uncoordinated adversaries.
It is easy to see that independent attackers
damage each other, by announcing false positions that
reciprocally spoil the time computations discussed in the
previous sections.
Cross checks on couples
of non-colluding adversaries will always result in mismatches
in the {\bf CS} test, increasing the chances that such nodes are tagged as
faulty by the initiator.
Where multiple independent attackers can harm the system is in
the verification of correct neighbors. As a matter of fact, a
node is ruled verified if it passes the strict majority
of the cross-checks it undergoes. A correct node surrounded by
several adversarial neighbors could thus be marked as faulty (unverifiable),
if it shares with the initiator a number of adversarial nodes greater than (equal to)
the number of correct nodes.
An example is provided in Fig.~\ref{fig:security_ma_independent}.
However, under the assumption that the percentage of attackers
among all nodes in the network is small, situations where a correct
node shares mostly uncoordinated adversarial neighbors with the initiator
are very unlikely to occur.
\subsection{Multiple colluding adversaries, basic attack}
\label{subsec:ma_colluding_basic}
Coordinated attacks carried out by colluding adversaries
are obviously harder to counter than those independently
led by individual adversarial nodes. The SNPD protocol is
resistant to coordinated attacks, unless the presence of
colluding adversaries in the neighborhood of the initiator
node is overwhelming.
The goal of adversarial nodes remains that of inducing the initiator $S$
into trusting the fake positions they announce. The basic way they can
cooperate to that end is by mutually validating the false information they
generate. Indeed, colluding adversaries can advertise to $S$
reception times (of reciprocal {\sc reply} messages) forged so that
the values derived through ToF-based ranging confirm
the positions they made up in the {\bf CS} test.
In other words, a perfect cooperation results in the colluding
adversaries' capability of ``moving'' all links among them without
being noticed by the initiator.
Our SNPD protocol can counter the basic attack from colluders, as long
as 50\% {\em plus one} of the neighbors in common to the verifier and an
adversary are correct. Indeed, a strict majority of correct shared
neighbors allows the identification of attackers through the {\bf CS} test.
An example with three colluding attackers
is provided in Fig.~\ref{fig:security_ma_colluding}.
\subsection{Multiple colluding adversaries, hyperbolae-based attack}
\label{sec:ma_colluding_hyperbola}
A more sophisticated version of the basic coordinated attack can be
organized by colluding adversaries as follows.
Having received the {\sc poll} message, the
attackers not only agree on the identity of the initiator $S$, but also
pick a common neighbor $X$ that they share with $S$: each colluder
determines the hyperbola with foci $S$, $X$, and passing through its
own actual position, and announces a fake position on such curve.
This allows the adversaries to announce correct links (i) with the
initiator $S$, (ii) with the selected neighbor $X$, and (iii) among
themselves. Node $X$ becomes an involuntary ally in the attack:
in order to work properly, the {\bf CS} test, based on the majority rule,
needs that more than
50\% {\it plus three} of the common neighbors between the initiator
and communicating node are correct. The two additional correct
neighbors are required to counter the effect of $X$ becoming an
unintentional colluder during the cross verification.
\subsection{Multiple colluding adversaries, {\sc reply}-disregard attack}
\label{subsec:ma_colluding_disregard}
A second variation to the attack presented in
Sec.~\ref{subsec:ma_colluding_basic} relies on a coordinated action
against {\sc reply} messages received from correct nodes.
As a matter of fact, the {\bf CS} test can control the symmetry
of links between couples of neighbors only if ToF-based ranging
is performed in both directions. Thus, by intentionally excluding
from their {\sc report} the commitments received from
correct nodes while including all those received by colluding nodes,
adversaries can selectively avoid cross symmetry tests with correct
nodes, so that no mismatches are found. We refer to this as a
{\sc reply}-disregard attack and stress that it requires at least
three colluding nodes forming a clique,
otherwise the adversaries would appear unverifiable to the initiator, since
they would share less than two (bidirectional) neighbors with it.
The SNPD protocol is robust to {\sc reply}-disregard attacks, thanks to the
controls run in the {\bf ML} test. More precisely, an adversary carrying out a
disregard attack together with $N$ colluders can safely advertise up to $N-1$
wrong reception times from correct nodes, being still tagged as verified by the
majority rule. This means that there must be at least $N+1$ correct neighbors,
shared by an adversary and the initiator, for the adversary to be forced to
disregard one or more {\sc reply} messages, and for two correct shared neighbors
to be able to participate in the {\bf ML} test and identify the colluder.
This means that 50\% {\em plus two} of the shared neighbors must be correct
for our SNPD protocol to work properly.
\begin{comment}
\subsection{Multiple adversaries}\label{subsec:ma}
Let us now consider that multiple adversarial nodes,
$M_1,\ldots,M_n$, are communication neighbors of
the verifier.
If the adversaries are \emph{independent}, they cannot be of any benefit to each
other. Indeed, each of the $M_i$ nodes announces a false own position but it is
unaware of the presence of the other adversaries, let alone of their false
position claims and their replies to $S$. As a result, the reception times that
$M_i$ provides to $S$ are inconsistent with the faulty claims of $M_j$, $j \neq
i$. This increases the mismatch count in the {\bf CS} test, for each adversary. In
short, multiple independent attackers add to the mismatch count due to correct
nodes' announcements and make it even more likely that the $M_i$ nodes are tagged
as faulty. Of course, multiple independent adversaries may also affect the
classification of correct nodes: due to the majority rule, a correct node is
tagged as faulty (unverifiable) if it shares with $S$ a number of adversaries
greater than (equal to) the number of correct nodes. An example of faulty marking
of a correct node is shown in Fig.~\ref{fig:security_ma_independent}.
Unlike independent adversaries, multiple \emph{colluding} adversaries
can share information before and while mounting an attack. This makes them in
principle significantly harder to thwart: coordinating attackers could jointly
choose their false position claims and responses (i.e., the content of their
{\sc reply}).
However, a node $M_i$ will be tagged as verified only if it can
coordinate with the claims of the other adversaries and if, in the {\bf CS} test, the
majority of cross-checks involving $M_i$ is done on other colluders. On
the contrary, if more than half of the common neighbors of $S$, $M_i$ are
correct, the verifier will tag $M_i$ as faulty. In
short, the outcome of the {\bf CS} test is only partially under the control of the
colluder group and mostly depends on the number of correct neighbors shared by
$M_i$ and $S$. An example with three colluders is shown in
Fig.~\ref{fig:security_ma_colluding}.
Once a node is tagged as faulty and thus discarded, then a possible course of
action would be to reiterate the tests in order to identify some of the
(colluding) neighbors of $M_i$ as unverifiable or faulty. However, if the node
declared as faulty were correct, this would alter the majority already achieved
and the tradeoff would be the slapping of the unverifiable tag onto other
correct nodes previously verified, as well as the promotion of unverifiable
(adversarial) nodes to the status of verified.
For this reason, in this work we do not consider the reiteration of the tests.
\emph{Coordinated attack complexity.} A collusion attack is by far non
trivial: colluders would need (i) to have a fast out-of-band, private
communication to exchange positions, false claims, false reception times, (ii)
to approximate the location of the verifier (which requires at least three
adversaries that multilaterate $S$, or the output of a recent instance of the
SNPD protocol combined with the correct guess of the verifier's identity),
(iii) to exchange the fake position they will announce, and the estimated
transmission time of their {\sc reply}: this way, each adversary can recognize
the {\sc reply} of a colluder and compute a reception time that is consistent
with the fake position advertised by such colluder. We stress that the
multilateration of $S$ requires at least three adversaries. Furthermore, to
discover the neighbors' position, SNPD should be run proactively and frequently
by the adversary.
Due to the lack of room, other types of attacks brought about by colluders
are discussed in a technical report~\cite{techrep}; there, we also cover
the robustness of our scheme to adversarial nodes equipped with multiple radio
transceivers and directional antennas, as well as to Sybil attacks.
\end{comment}
As a final remark on coordinated attacks, we comment on the significant
resources and a strong effort they require from the colluding adversaries.
Colluders have to share out-of-band links through which
they can exchange information to coordinate the attack, upon
reception of the {\sc poll} message. Exploiting such links,
they first have to agree on the initiator's identity, either
by a shared random guess or by employing a multilateration
technique to disclose it.
Then, colluders have to inform each other about the fake positions they
will announce, and about the estimated transmission time of their
{\sc reply} messages: this way, each cooperating adversary is able
to recognize the anonymous {\sc reply} of a colluder node and to compute a
reception time that is consistent with the fake position advertised
by such colluder. Finally, this exchange of information must occur in a
very limited time interval after the {\sc poll} message has been
broadcast, so that colluders can transmit their {\sc reply} messages
well before the $T_{max}$ deadline.
\subsection{Denial of Service (DoS) attacks}\label{subsec:dos}
{\bf Jamming.} An adversary $M$ may jam the channel and erase {\sc reply} or
{\sc report} messages. To successfully perform such an attack, $M$ should jam
the medium continuously for a long time, since it cannot know when exactly each
of the nodes will transmit its {\sc reply} or {\sc report} message. Or, $M$
could erase the {\sc reveal} message, but, again, jamming should cover the
entire $T_{jitter}$ time; jamming a specific {\sc reply} transmission is not
straightforward either as the {\sc reply} transmission time is randomly chosen
by each node. Overall, there is no easy point to target: a jammer would
basically have to jam throughout the SNPD execution, an attack that can be
mounted against any wireless protocol and is thus orthogonal to our problem.
{\bf Clogging.} An adversary could induce SNPD traffic in an attempt to
congest the wireless channel, e.g., by initiating the protocol multiple times in
a short period and getting repeated {\sc reply} and {\sc report} messages from
other nodes.
{\sc report} messages are large and unicast, and generated in a
short period after the reception of the {\sc reveal} message.
They are thus likely to cause the most damage.
However, SNPD has a way of preventing that:
the initiator must unveil its identity before such messages are
transmitted by neighbors. An exceedingly frequent initiator can be
identified and rate-limited, its excessive {\sc reveal} messages ignored.
Conversely, {\sc reply} messages are small in size, they are broadcast
(and thus require no ACK) and they are
spread over the time interval $T_{max}$. Their damage is somewhat limited,
but their unnecessary transmission is much harder to thwart.
Indeed, {\sc reply} messages should be sent following an anonymous
{\sc poll} message; such anonymity is a requirement that is hard to dismiss,
since it is instrumental to keeping adversaries unknowledgeable.
As a general rule, correct nodes can reasonably
self-limit their responses if {\sc poll}s arrive at excessive rates. Overall,
clogging DoS attacks have only a local effect, within the neighborhood of the adversary,
which could anyway resort to jamming and obtain the same effect.
\subsection{Adversarial use of directional antennas}
Assume that adversarial nodes are equipped with directional antennas and
multiple radio interfaces. Then, as a correct node $S$ starts the SNPD protocol,
a knowledgeable adversary $M$ can send {\sc reply} messages through the
different interfaces at different time instants, so as to fool the communication
neighbors shared by $M$ and $S$: a correct neighbor $X$ would record a time
$t'_{MX}$, which is compliant with the fake position, $p'_M$, announced by $M$
and, thus, can pass the corresponding cross check in the {\bf CS} test. If the
adversary is able to fool a sufficient number of neighbors, it succeeds and is
tagged as verified; however, we stress that the adversary needs as many
directional antennas and radio interfaces as the number of neighbors it wants to
fool. Moreover, it must hope that no two such neighbors are within the beam of
the same antenna. The complexity, cost, and chances of failure make this attack
hardly viable.
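To see why one beam per fooled neighbor is needed, assume time-of-flight ranging: the adversary at true position $p_M$ announcing $p'_M$ must shift its transmission towards each neighbor $X$ by the difference of the two propagation delays, and this shift differs from neighbor to neighbor. The sketch below is a back-of-the-envelope illustration; function and variable names are ours, not the protocol's.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def forged_tx_offset(p_true, p_fake, p_neighbor):
    """Time shift (seconds) an adversary at p_true must apply on the
    antenna beam pointed at p_neighbor so that the neighbor's recorded
    time of flight is consistent with the fake position p_fake."""
    d_true = math.dist(p_true, p_neighbor)
    d_fake = math.dist(p_fake, p_neighbor)
    return (d_fake - d_true) / C

# One distinct offset per fooled neighbor: hence one steerable beam
# and one radio interface each.
offsets = [forged_tx_offset((0, 0), (40, 0), p) for p in [(100, 0), (0, 100)]]
```

Since the required offsets differ across neighbors, a single omnidirectional transmission cannot satisfy them all, which is precisely why the attack cost grows with the number of neighbors to fool.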
\section{Performance evaluation}
\label{sec:performance}
To test our SNPD protocol, we selected a real-world road topology
that consists of a 5$\times$5 km$^2$ portion of the urban
area of the city of Zurich~\cite{zurichtraces}. These traces describe
the individual movement of cars through a queue-based model
calibrated on real data:
they thus provide a realistic representation of vehicular mobility at
both microscopic and macroscopic levels.
We extracted 3 hours of vehicular
mobility, in the presence of mild to heavy traffic conditions;
the average number
of cars in the area at a given time is 1200.
Traces have a time discretization of 1 s. Thus, given a trace, every second
we randomly select 1\% of the nodes
as verifiers. For each node, we consider that all devices within the proximity
range $R$ are communication neighbors of the node.
Clearly, the larger $R$, the higher the number of neighbors taking part
in the same instance of the SNPD protocol: for example,
for $R$ equal to 50~m and 500~m,
the average node degree is 8 and 104.8, with variance 5.9 and 71.8,
respectively.
Also, we set $\epsilon_r$ to 6.8~m and
$\epsilon_p$ to 5~m~\cite{techrep}.
Since {\em unknowledgeable} adversaries are always tagged as faulty
in the {\bf DS} test, in the following we present results considering
that all adversaries are always {\em knowledgeable}. We stress that this is
a very hard condition to meet in dynamic networks, hence all results
are to be considered an upper bound on the success probability of
an attack.
When independent adversaries are
considered, we randomly select a ratio (a varying parameter in our
analysis) of the nodes as attackers. In case of colluders, instead, we
randomly select some nodes as adversaries, and for each we further randomly
identify neighbors who will collude with it so as to form an attackers group of
size $\sigma$ (or up to the number of neighbors available).
We assume that colluding adversaries perform hyperbolae-based
attacks, which, as previously discussed, are the hardest to counter.
For every scenario under
study, we statistically quantify the outcome of the verification test and
compare it to the actual behavioral model of the nodes (namely, correct or
adversary).
\begin{comment}
Before discussing the results, we remark a fundamental aspect of the
scenarios under study: the platooning of vehicles on a road
gives adversaries the chance to deploy themselves on a straight line
where the verifier and two common neighbors also lie; as explained in
Sec.~\ref{subsec:1a2+n}, this condition preludes to a successful attack.
\end{comment}
We first report results in terms of probabilities
that the tests return false positives and false negatives
(Figs.~\ref{fig:zurich_malw} and~\ref{fig:zurich_Rw}) as well as of probability
that a (correct or adversary) node is tagged as unverifiable
(Figs.~\ref{fig:zurich_malu} and~\ref{fig:zurich_Ru}). The former gauge the
reliability of our scheme, while the latter is a mark of the protocol accuracy.
The plots showing the false positives and false negatives, when the ratio of
adversaries varies and $R$=250~m, confirm that our scheme errs on the side of
caution: indeed, as the number of adversaries increases, it is more likely for a
correct node to be mislabeled than for an adversary to be verified (the latter
probability amounting to less than 0.02). Instead, widening the proximity range
with a fixed adversary ratio, namely 0.05,
only plays into the verifier's hands, thanks to the
greater number of nodes (the majority of which are correct) that can be tested.
As for the probability that a node is unverifiable, while little sensitivity to
the ratio of adversaries is observed, a small $R$ (hence fewer neighbors) affects
the protocol capability to reach a conclusive verdict on either correct or
adversary nodes. We also estimated that the degree of freedom that a successful
adversary has in setting its fake position, for $R$=250 m and a ratio of 0.05
attackers, is such that, on average, the fake and actual positions of a verified
adversary are collinear and differ by 40~m.
We then fix the adversaries ratio to 0.05 and $R$ to 250~m
and we consider the presence of colluders.
Figs.~\ref{fig:coll_zurich_w}~and~\ref{fig:coll_zurich_u} show the excellent
performance of our scheme as the colluder group size $\sigma$ varies.
The impact of colluders on the results appears to be negligible,
mainly thanks to the large number of neighbors, which defeats even
large groups of colluders.
Finally, we comment on the overhead introduced by SNPD, in terms of number and
size of messages. SNPD generates at most $2N+2$ messages for one execution
initiated by a verifier with $N$ communication neighbors. This is twice the
cost of an unsecured NPD protocol that would consist of one poll and $N$
position replies from neighbors. Moreover, SNPD messages are relatively small in
size: with SHA-1 hashing and ECDSA-160 signatures~\cite{ieee1363},
the length of
signatures is 21 bytes (with coordinates compression). Assuming that messages
include headers with 4-byte source and destination identifiers and 1-byte
message type field, {\sc poll}, {\sc reply}, and {\sc reveal} are all less than
100 bytes in size (to be precise, 26, 71, and 67 bytes, respectively). The {\sc
report} length is variable, depending on the number of commitments it carries:
e.g., for 5 commitments, its size is only 295 bytes, and up to 28 commitments
can fit in a single 1500-byte IP packet.
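The sizes quoted above can be sanity-checked with simple bookkeeping. In the sketch below, the fixed base length and per-commitment length of a {\sc report} (35 and 52 bytes) are our assumptions, chosen to be consistent with the figures in the text (295 bytes for 5 commitments, up to 28 commitments in 1500 bytes); they are not a wire-format definition.

```python
def report_size(n_commitments, base=35, commitment=52):
    """REPORT size in bytes: assumed fixed base plus n commitments."""
    return base + n_commitments * commitment

def max_commitments(mtu=1500, base=35, commitment=52):
    """Largest number of commitments fitting one IP packet of `mtu` bytes."""
    return (mtu - base) // commitment

def messages_per_execution(n_neighbors):
    """SNPD message count: 1 POLL + 1 REVEAL + N REPLYs + N REPORTs."""
    return 2 * n_neighbors + 2
```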
Obviously, the on-demand nature of
the protocol makes it best suited to event-triggered
applications, such as safety and tolling ones; in these scenarios,
SNPD induces very low overhead in the network. The limited number
and small size of the messages also make proactive use of the
protocol feasible, at relatively low execution rates,
e.g., once every few tens of seconds.
\section{Conclusion}
We proposed a lightweight, distributed scheme for securely discovering the
position of communication neighbors in vehicular ad hoc networks. Our solution
does not require a priori trustworthy nodes; rather, it leverages the
information exchanged between neighbors. Our
analysis showed the scheme to be very effective in identifying independent
as well as colluding adversaries. Results derived using realistic vehicular
traces confirmed such ability and highlighted the good performance of
our solution in terms of both false negatives/positives
and uncertain neighbor classifications.
Future work will aim at assessing the performance of the proposed secure
neighbor position discovery protocol when adversaries have partial or
out-of-date knowledge of the other nodes' positions, and at adapting our scheme
to a high-frequency proactive utilization.
In many unstructured mesh applications, for example those approximating the solution of partial differential equations (PDEs) using the finite volume or the finite element method, sequences of numerical operators accessing common fields need to be evaluated. Usually, these operators are implemented by iterating over sets of mesh elements and computing a kernel in each element. In languages such as C or Fortran, the resulting sequence of loops is typically characterized by heterogeneous iteration spaces and accesses to shared datasets (reads, writes, increments) through indirect pointers, like {\tt A[map[i]]}. One notable example of such operators/loops arises in discontinuous-Galerkin finite element methods, in which numerical integrals over different domains (e.g., cells, facets) are evaluated; here, {\tt A} could represent a discrete function, whereas {\tt map} could store connectivity information (e.g., from mesh elements to degrees of freedom). In this article, we devise compiler theory and technology to automate a sophisticated version of {\it sparse tiling}, a technique to maximize data locality when accessing shared fields (like the {\tt A} and {\tt map} arrays in the earlier example), which consists of fusing a sequence of loops by grouping iterations such that all data dependencies are honored. The goal is to improve the overall application performance with minimal disruption (none, if possible) to the source code.
Three motivating real-world applications for this work are Hydra, Volna and Seigen. Hydra~\citep{hydra-op2} is a finite-volume computational fluid dynamics application used at Rolls Royce for the simulation of next-generation components of jet engines. Volna~\citep{ST-volna} is a finite-volume computational fluid dynamics application for the modelling of tsunami waves. Seigen~\citep{Seigen-paper} aims to solve the elastic wave equation using the discontinuous Galerkin finite element method for seismic exploration purposes. All these applications are characterized by the presence of a time-stepping loop, in which several loops over the mesh (thirty-three in Hydra, ten in Volna, twenty-five in Seigen) are repeatedly executed. These loops are characterized by the irregular dependence structure mentioned earlier, with for example indirect increments in one loop (e.g., {\tt A[m[i]] += f(...)}) followed by indirect reads in one of the subsequent loops (e.g., {\tt b = g(A[n[j]])}). The performance achievable by Seigen through sparse tiling will extensively be studied in Section~\ref{sec:performance}.
Although our work is general in nature, we are particularly interested in supporting increasingly sophisticated seismological problems that will be developed on top of Seigen. This has led to the following strategic decisions:
\begin{description}
\item[Automation, but no interest in legacy codes] Sparse tiling is an ``extreme optimization''. An implementation in a low level language requires a great deal of effort, as a thoughtful restructuring of the application is necessary. In common with many other low level transformations, it also makes the source code impenetrable, affecting maintenance and extensibility. We therefore aim for a fully automated system based on domain-specific languages (DSLs), which abstracts sparse tiling through a simple interface (i.e., a single construct to define a scope of fusible loops) and a tiny set of parameters for performance tuning (e.g., the tile size). We are not interested in automating sparse tiling in legacy codes, in which the key computational aspects (e.g., mesh iteration, distributed-memory parallelism) are usually hidden for software modularity, thus making such a transformation almost impossible.
\item[Unstructured meshes require mixed static/dynamic analysis] Unstructured meshes are often used to discretize the computational domain, since they allow for an accurate representation of complex geometries. Their connectivity is stored by means of adjacency lists (or equivalent data structure), which leads to indirect memory accesses within the loop nests. Indirections break static analysis, thus making purely compiler-based approaches insufficient. Runtime data dependence analysis is essential for sparse tiling, so integration of compiler and run-time tracking algorithms becomes necessary.
\item[Realistic datasets not fitting in a single node] Real-world simulations often operate on terabytes of data, hence execution on multi-node systems is often required. We have extended the original sparse tiling algorithm to enable distributed-memory parallelism.
\end{description}
Sparse tiling does {\it not} change the semantics of a numerical method -- only the order in which some iterations are executed. Therefore, if most sections of a PDE solver suffer from computational boundedness and standard optimizations such as vectorization have already been applied, then sparse tiling, which targets memory-boundedness, will only provide marginal benefits (if any). Likewise, if a global reduction is present in between two loops, then there is no way for sparse tiling to be applied, unless the numerical method itself is rethought. This is regardless of whether the reduction is explicit (e.g., the first loop updates a global variable that is read by the second loop) or implicit (i.e., within an external function, as occurs for example in most implicit finite element solvers). These are probably the two greatest limitations of the technique; otherwise, sparse tiling may provide substantial performance benefits.
The rest of the article is structured as follows: in Section~\ref{sec:tiling:lc} we present the abstraction on which sparse tiling relies. We then show, in Section~\ref{sec:examples}, examples of how the algorithm works on shared- and distributed-memory systems. This is followed by the formalization of the algorithms (Sections~\ref{sec:data-dep-analysis},~\ref{sec:algorithm}) and the implementation of the compiler that automates sparse tiling (Section~\ref{sec:implementation}). The experimentation is described in Section~\ref{sec:performance}. A discussion on the limitations of the algorithms and the future work that we expect to carry out in the years to come conclude the article.
\section{The Loop Chain Abstraction for Unstructured Mesh Applications}
\label{sec:tiling:lc}
The {\em loop chain} is an abstraction introduced in~\cite{ST-KriegerHIPS2013}. Informally, a loop chain is a sequence of loops with no global synchronization points, enriched with information to enable run-time data dependence analysis -- necessary since indirect memory accesses inhibit common static approaches to loop optimization. The idea is to replace static with dynamic analysis, exploiting the information carried by a loop chain. Loop chains must somehow be added to or automatically derived (e.g., exploiting a DSL) from the input code. A loop chain will then be used by an {\em inspector/executor} scheme~\citep{ST-Saltz91}. The {\em inspector} is an algorithm performing data dependence analysis using the information carried by the loop chain, which eventually produces a {\em sparse tiling schedule}. This schedule is used by the {\em executor}, a piece of code semantically equivalent to the original sequence of loops (i.e., computing the same result) executing the various loop iterations in a different order.
Before diving into the description of the loop chain abstraction, it is worth observing two aspects.
\begin{itemize}
\item The inspection phase introduces an overhead. In many scientific computations, the data dependence pattern is static -- or, more informally, ``the topology does not change over time''. This means that the inspection cost may be amortized over multiple iterations of the executor. If instead the mesh changes over time (e.g., in case of adaptive mesh refinement), a new inspection must be performed.
\item To adopt sparse tiling in a code there are two options. One possibility is to provide a library and leave the application specialists with the burden of carrying out the implementation (re-implementation in case of legacy code). A more promising alternative consists of raising the level of abstraction: programs can be written using a DSL; loop chain, inspector, and executor can then be automatically derived at the level of the intermediate representation. As we shall see in Section~\ref{sec:implementation}, the tools developed in this article enable both approaches, though our primary interest is in the automated approach (i.e., via DSLs).
\end{itemize}
These points will be further elaborated in later sections.
The loop chain abstraction was originally defined as follows:
\begin{itemize}
\item A loop chain $\mathbb{L} = [L_0, L_1, ..., L_{n-1}]$ is an ordered sequence of $n$ loops. There are no global synchronization points in between the loops. Although there may be dependencies between successive loops in the chain, the execution order of a loop's iterations does not influence the result.
\item $\mathbb{D} = \lbrace D_0, D_1, ..., D_{m-1} \rbrace$ is a collection of $m$ disjoint data spaces. Each loop accesses (reads from, writes to) a subset of these data spaces. An access can be either direct (e.g., {\tt A[i]}) or indirect (e.g., {\tt A[map(i)]}).
\item $R_{L_l\rightarrow D_d}(i)$ and $W_{L_l\rightarrow D_d}(i)$ are access relations for a loop $L_l$ over a data space $D_d \in \mathbb{D}$. They indicate which locations in the data space $D_d$ an iteration $i \in L_l$ reads from and writes to, respectively. A loop chain must provide all access relations for all loops.
\end{itemize}
\begin{figure}
\begin{CenteredBox}
\includegraphics[scale=0.6]{figures/mpi_mesh}
\end{CenteredBox}
\caption{Two partitions of a mesh distributed to two neighboring processes, $P_0$ and $P_1$. The {\em core} region includes all iterations that can be processed without reading halo data. The {\em owned} iterations can be processed only by reading halo data. {\em Exec} is the set of iterations that must be executed because they indirectly write (increment) the owned iterations. The union of the {\em owned} and {\em exec} regions is referred to as the {\em boundary}. The {\em non-exec} region includes halo data which is indirectly read during the {\em exec} computation. The iterations in $P_0$'s ($P_1$'s) {\em exec} region are, logically, the same iterations in $P_1$'s ($P_0$'s) {\em owned} region; thus, we say that these iterations are ``redundantly executed''. Matching colors across the two processes represent identical subsets of iterations in the non-partitioned mesh. The image was inspired by an example in~\cite{florian-thesis}.}
\label{fig:sets}
\end{figure}
We here refine this definition, and specialize it for unstructured mesh applications. This allows the introduction of new concepts, necessary to extend the sparse tiling algorithm presented in~\cite{st-paper}. Some terminology and ideas are inspired by the programming model of OP2, a library for unstructured mesh applications~\citep{op2-main} used to implement the already mentioned Hydra code.
\begin{itemize}
\item A loop chain $\mathbb{L} = [L_0, L_1, ..., L_{n-1}]$ is an ordered sequence of $n$ loops. There are no global synchronization points in between the loops. Although there may be dependencies between successive loops in the chain, the execution order of a loop's iterations does not influence the result.
\item $\mathbb{S} = \lbrace S_0, S_1, ..., S_{m-1} \rbrace$ is a collection of $m$ disjoint iteration spaces. Possible iteration spaces are the topological entities of the mesh (e.g., cells, vertices) or the degrees of freedom associated with a function.
When using distributed-memory parallelism, an iteration space $S$ is logically split into three contiguous regions: {\em core}, {\em boundary}, and {\em non-exec} (see also Figure~\ref{fig:sets}). Given a generic process $P$ executing a loop over $S$, these regions represent:
\begin{description}
\item[core] the subset of iterations computed by $P$ that does not depend on halo exchanges. In other words, these are $P$'s local iterations.
\item[boundary] the union of two sub-regions, {\em owned} and {\em exec}, which are defined next. The {\em boundary} region requires up-to-date halo data. Like {\em core}, {\em owned} contains iterations owned by $P$; the data produced by {\em owned} are sent out through a halo exchange. The {\em exec} iterations, instead, are executed because they indirectly write (increment) data in $P$'s {\em owned} sub-region.
\item[non-exec] the subset of iterations not computed by $P$ mapping read-only data sent over to $P$ during a halo exchange.
\end{description}
An iteration space is uniquely identified by a name and the sizes of its three regions.
\item The {\em depth} is an integer indicating the extent of the boundary region. This is constant across all iteration spaces in $\mathbb{S}$.
\item $\mathbb{M} = \lbrace M_0, M_1, ..., M_{o-1} \rbrace$ is a set of $o$ maps. A map of arity $a$ is a vector-valued function $M : S_i \rightarrow S_j^a$ connecting elements in different iteration spaces. For example, we can express the mapping of a triangular cell $c$ to three vertices $v_0,v_1,v_2$ as $M(c) = [v_0,\ v_1,\ v_2]$; here cells and vertices are iteration spaces, while $c, v_0, v_1, v_2$ are iteration identifiers (i.e., natural numbers).
\item A loop $L_i$ over the iteration space $S$ is associated with one or more descriptors. A descriptor is a 2-tuple ${<}M,\ {\tt mode}{>}$. $M$ is either a map from $S$ to some other iteration spaces or the special placeholder $\perp$. In the former case, $L_i$ is accessing data associated with $M(S)$ indirectly; in the latter case, the data accesses are direct. ${\tt mode}$ is one of $[r,\ w,\ i]$, indicating whether a memory access is a read, write or increment.
\end{itemize}
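In practice, a map of arity $a$ is commonly stored as a flat array of $a \cdot |S_i|$ entries, with the images of iteration $i$ found at offsets $a \cdot i$ through $a \cdot i + a - 1$. The sketch below illustrates this layout; the mesh values are made up for illustration.

```python
class Map:
    """Flat storage of a map M : S_i -> S_j^a between iteration spaces."""
    def __init__(self, name, arity, values):
        assert len(values) % arity == 0
        self.name, self.arity, self.values = name, arity, values

    def __call__(self, i):
        """Images of iteration i, e.g. the three vertices of cell i."""
        return self.values[self.arity * i : self.arity * (i + 1)]

# A toy map from triangular cells to vertices: cell 1 -> [3, 4, 1]
cells2vertices = Map("c2v", 3, [0, 1, 3,   3, 4, 1,   1, 4, 2])
```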
There are a few crucial differences in this refined definition for the unstructured mesh case. One of them is the presence of iteration spaces in place of data spaces. In unstructured mesh applications, loops tend to access multiple data spaces associated with the same iteration space. A key observation is that if a loop is writing to some data spaces, then it is extremely likely that at least a subset of them will be accessed by the subsequent loop in the chain. The idea, therefore, is to rely on iteration spaces, rather than data spaces, to perform dependence analysis. This can substantially reduce the inspection cost, since typically $|\mathbb{S}| \ll |\mathbb{D}|$. Obviously, this relaxation might also create ``false dependences'', thus potentially affecting data communication. This would be the case if, for example, two consecutive, independent loops accessed different data fields associated with the same iteration space (e.g., {\it pressure} and {\it velocity} defined over the same set of degrees of freedom). In our experience, however, this rarely happens in practice (never in the case of the already mentioned Volna, Hydra and Seigen).
Another fundamental addition is the characterization of iteration spaces into the three regions core, boundary and non-exec. As we shall see, this separation is essential to enable distributed-memory parallelism. The extent of the boundary regions is captured by the {\em depth} of the loop chain. Informally, the {\em depth} tells how many extra ``strips'' of elements are provided by the neighboring processes. This allows some redundant computation along the partition boundary and also limits the depth of the loop chain (i.e., how many loops can be fused). The role of the parameter {\em depth} will be clear by the end of Section~\ref{sec:algorithm}.
\begin{figure}
\begin{CenteredBox}
\begin{subfigure}{0.46\textwidth}
\centering
\begin{lstlisting}[basicstyle=\tiny\ttfamily,keywordstyle=\ttfamily]
for t = 0 to T {
// $L_0$: loop over edges, increment vertices
for e = 0 to E {
x = X + e;
tmp_0 = edges2vertices[2*e + 0];
tmp_1 = edges2vertices[2*e + 1];
kernel1(x, tmp_0, tmp_1);
}
// $L_1$: loop over cells, increment vertices
for c = 0 to C {
res = R + c;
tmp_0 = cells2vertices[3*c + 0];
tmp_1 = cells2vertices[3*c + 1];
tmp_2 = cells2vertices[3*c + 2];
kernel2(res, tmp_0, tmp_1, tmp_2);
}
// $L_2$: loop over edges, read vertices
for e = 0 to E {
tmp_0 = edges2vertices[2*e + 0];
tmp_1 = edges2vertices[2*e + 1];
kernel3(tmp_0, tmp_1);
}
}
\end{lstlisting}
\subcaption{Example sequence of sparse-tilable loops.}
\label{code:tiling-runningexample}
\end{subfigure}
\hspace{8mm}%
\begin{subfigure}{0.47\textwidth}
\centering
\begin{lstlisting}[basicstyle=\tiny\ttfamily,keywordstyle=\ttfamily]
inspector = init_inspector(...);
// Three sets, edges, cells, and vertices
E = set(inspector, "edges", core_edges, boundary_edges, nonexec_edges, ...);
C = set(inspector, "cells", core_cells, boundary_cells, nonexec_cells, ...);
V = set(inspector, "verts", core_verts, boundary_verts, nonexec_verts, ...);
// Two maps, from edges to vertices and from cells to vertices
e2vMap = map(inspector, E, V, edges2verts, ...);
c2vMap = map(inspector, C, V, cells2verts, ...);
// The loop chain comprises three loops
// Each loop has some descriptors
loop(inspector, E, { $\perp$, "r"}, {e2vMap, "i"});
loop(inspector, C, { $\perp$, "r"}, {c2vMap, "i"});
loop(inspector, E, { $\perp$, "w"}, {e2vMap, "r"});
// Now can run the inspector
return inspection(mode, inspector, tile_size, ...);
\end{lstlisting}
\subcaption{Loop chain for the example program.}
\label{code:tiling-inspector}
\end{subfigure}
\end{CenteredBox}
\caption{On the left, a ``toy'' program used as running example in Section~\ref{sec:examples} to illustrate the loop chain abstraction and show how the sparse tiling algorithms (inspection, execution) work. Note that all parameters passed to the kernels are pointers. On the right, a code snippet showing the loop chain corresponding to the program on the left. The syntax is very close to the actual API of SLOPE, the sparse tiling library that we have implemented, described in Section~\ref{sec:implementation}.}
\end{figure}
\section{Loop Chain, Inspection and Execution Examples}
\label{sec:examples}
Using the example in Figure~\ref{code:tiling-runningexample}, we describe the actions performed by our sparse tiling inspector. The inspector takes as input the loop chain illustrated in Figure~\ref{code:tiling-inspector}. We show two variants, for shared-memory and distributed-memory parallelism. The value of the variable {\tt mode} in Figure~\ref{code:tiling-inspector} determines the variant to be executed.
\subsection{Overview}
The inspector starts with partitioning the iteration space of a {\it seed loop}, for example $L_0$. Partitions are used to initialize tiles: the iterations of $L_0$ falling in $P_i$ -- or, in other words, the edges in partition $P_i$ -- are assigned to the tile $T_i$. Figure~\ref{fig:st-initial-part-sm} displays the situation after the initial partitioning of $L_0$ for a given input mesh. There are four partitions, two of which ($P_0$ and $P_3$) are not connected through any edge or cell. These four partitions correspond to four tiles, $[T_0,\ T_1,\ T_2,\ T_3]$, with $P_i = T_i$.
\begin{figure}
\centering
\includegraphics[scale=0.43]{figures/partitioned.pdf}
\caption{Partitioning of the seed loop over edges. The vertices are illustrated to make the connectivity of the mesh clear, although they do not belong to any partition yet.}
\label{fig:st-initial-part-sm}
\end{figure}
As detailed in the next two sections, the inspection proceeds by populating $T_i$ with iterations from $L_1$ and $L_2$. The challenge of this task is guaranteeing that all data dependencies -- read after write, write after read, write after write -- are honored. The output of the inspector is eventually passed to the executor. The inspection carries sufficient information for computing sets of tiles in parallel. $T_i$ is always executed by a single thread/process and the execution is atomic; that is, it does not require communication with other threads/processes. When executing $T_i$, first all iterations from $L_0$ are executed, then all iterations from $L_1$ and finally those from $L_2$.
\subsection{Inspection for Shared-Memory Parallelism}
\label{sec:examples:shm}
Similarly to OP2, to achieve shared-memory parallelism we use coloring. Two tiles that are given the same color can be executed in parallel by different threads. Two tiles can have the same color if they are not connected, because this ensures the absence of race conditions through indirect memory accesses during parallel execution. In the example we can use three colors: red (R), green (G), and blue (B). $T_0$ and $T_3$ are not connected, so they are assigned the same color. The colored tiles are shown in Figure~\ref{fig:st-loop-0}. In the following, with the notation $T_i^c$ we indicate that the $i$-th tile has color $c$.
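The tile coloring can be obtained with a standard greedy algorithm over the tile adjacency graph: each tile receives the smallest color not taken by its already colored neighbors, so that non-adjacent tiles such as $T_0$ and $T_3$ may share a color. The sketch below is our own illustration, not SLOPE's actual implementation.

```python
def color_tiles(adjacency):
    """Greedy coloring: adjacency[i] lists the tiles sharing mesh
    elements with tile i; adjacent tiles receive distinct colors."""
    colors = {}
    for tile in sorted(adjacency):
        taken = {colors[n] for n in adjacency[tile] if n in colors}
        c = 0
        while c in taken:
            c += 1  # smallest color unused by the neighbors
        colors[tile] = c
    return colors

# Mirroring the running example: T0 and T3 are not connected
adjacency = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
colors = color_tiles(adjacency)
```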
\begin{figure}[t]
\begin{subfigure}{0.48\textwidth}
\centering
\includegraphics[scale=0.33]{figures/loop_0.pdf}
\caption{A snapshot of the mesh after tiling $L_0$.}
\label{fig:st-loop-0}
\end{subfigure}%
~
\begin{subfigure}{0.48\textwidth}
\centering
\includegraphics[scale=0.33]{figures/loop_0_with_vertices.pdf}
\caption{The vertices are written by $L_0$, so a projection must be computed before tiling $L_1$. Here, the projection is represented as colored vertices.}
\label{fig:st-loop-0-proj}
\end{subfigure}
~\\~\\
\begin{subfigure}{0.48\textwidth}
\centering
\includegraphics[scale=0.33]{figures/loop_1.pdf}
\caption{A snapshot of the mesh after tiling $L_1$.}
\label{fig:st-loop-1}
\end{subfigure}
~
\begin{subfigure}{0.48\textwidth}
\centering
\includegraphics[scale=0.33]{figures/loop_2.pdf}
\caption{A snapshot of the mesh after tiling $L_2$.}
\label{fig:st-loop-2}
\end{subfigure}
\caption{Four passes of the inspection algorithm for shared-memory parallelism.}
\label{fig:inspection-example}
\end{figure}
To populate $[T_0^G,\ T_1^B,\ T_2^R,\ T_3^G]$ with iterations from $L_1$ and $L_2$, we first have to establish a total ordering for the execution of partitions with different colors. Here, we assume the following order: green (G), blue (B), and red (R). This implies, for instance, that {\it all iterations} assigned to $T_1^B$ must be executed {\it before all iterations} assigned to $T_2^R$. By ``all iterations'' we mean the iterations from $L_0$ (determined by the seed partitioning) as well as the iterations that will later be assigned from tiling $L_1$ and $L_2$. We assign integer positive numbers to colors to reflect their ordering, where a smaller number means higher execution priority. We can assign, for example, 0 to green, 1 to blue, and 2 to red.
To schedule the iterations of $L_1$ to $[T_0^G,\ T_1^B,\ T_2^R,\ T_3^G]$, we need to compute a {\it projection} for any write or local reduction performed by $L_0$. The projection required by $L_0$ is a function $\phi : V \rightarrow \mathbb{T}$ mapping the vertices in $V$ -- as indirectly incremented during the execution of $L_0$, see Figure~\ref{code:tiling-runningexample} -- to a tile $T_i^c \in \mathbb{T}$. Consider the vertex $v_0$ in Figure~\ref{fig:st-loop-0-proj}. $v_0$ has 7 incident edges, 2 of which belong to $T_0^G$, while the remaining 5 to $T_1^B$. Since we established that $G \prec B$, $v_0$ can only be read after $T_1^B$ has finished executing the iterations from $L_0$ (i.e., the 5 incident blue edges). We express this condition by setting $\phi(v_0) = T_1^B$. Observe that we can compute $\phi$ by iterating over $V$ and, for each vertex, applying the maximum function ($\operatorname{MAX}$) to the color of the adjacent edges.
We now use $\phi$ to schedule $L_1$, a loop over cells, to the tiles. Consider again $v_0$ and the adjacent cells $[c_0,\ c_1,\ c_2]$ in Figure~\ref{fig:st-loop-0-proj}. These three cells have in common the fact that they are adjacent to both green and blue vertices. For $c_1$, and similarly for the other cells, we compute $\operatorname{MAX}(\phi(v_0),\ \phi(v_1),\ \phi(v_2)) = \operatorname{MAX}(B, G, G) = B = 1$. This establishes that $c_1$ must be assigned to $T_1^B$, because otherwise ($c_1$ assigned instead to $T_0^G$) a read to $v_0$ would occur before the last increment from $T_1^B$ took place. Indeed, we recall that the execution order, for correctness, must be ``all iterations from $[L_0, L_1, L_2]$ in the green tiles before all iterations from $[L_0, L_1, L_2]$ in the blue tiles''. The scheduling of $L_1$ to tiles is displayed in Figure~\ref{fig:st-loop-1}.
To schedule $L_2$ to $[T_0^G,\ T_1^B,\ T_2^R,\ T_3^G]$ we employ a similar process. Vertices are again written by $L_1$, so a new projection over $V$ will be necessary. Figure~\ref{fig:st-loop-2} shows the output of this last phase.
\paragraph{Conflicting Colors}
\begin{figure}[hbtp]
\centering
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{figures/loop_0_conflicts.pdf}
\caption{After tiling $L_0$}
\label{fig:st-conflicts-a}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/loop_1_conflicts.pdf}
\caption{After tiling $L_1$}
\label{fig:st-conflicts-b}
\end{subfigure}%
~
\begin{subfigure}[b]{0.34\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/loop_2_conflicts.pdf}
\caption{After tiling $L_2$}
\label{fig:st-conflicts-c}
\end{subfigure}%
\caption{Tiling the program in Figure~\ref{code:tiling-runningexample} for shared-memory parallelism can lead to conflicts. Here, the two green tiles eventually become adjacent, creating race conditions.}
\label{fig:st-conflicts}
\end{figure}
It is worth noting how $T_2^R$ ``consumed'' the frontier elements of all other tiles every time a new loop was scheduled. Tiling a loop chain consisting of $k$ loops has the effect of expanding the frontier of a tile by at most $k$ vertices. With this in mind, we re-inspect the loop chain of the running example, although this time employing a different execution order -- blue (B), red (R), and green (G) -- and a different seed partitioning. Figure~\ref{fig:st-conflicts} shows that, by applying the same procedure described in this section, $T_0^G$ and $T_3^G$ will eventually become adjacent. This violates the precondition that {\it tiles can be given the same color, and thus run in parallel, as long as they are not adjacent}. Race conditions during the execution of iterations belonging to $L_2$ are now possible. This problem will be solved in Section~\ref{sec:algorithm}.
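A minimal sketch of the conflict check implied by this precondition (names are hypothetical): once the whole chain has been tiled, any pair of adjacent tiles sharing a color constitutes a conflict.

```python
# Sketch: detecting conflicting colors once the whole loop chain has been
# tiled. Hypothetical layout: adjacency lists pairs of tiles that became
# adjacent through tile expansion; color[t] is the color of tile t.

def find_conflicts(adjacency, color):
    """Return the pairs of adjacent tiles sharing a color; these would
    race if executed in parallel, so the inspector must connect them and
    re-color."""
    return [(a, b) for a, b in adjacency if color[a] == color[b]]

# T_0 and T_3 are both green (color 0) and became adjacent after tiling.
conflicts = find_conflicts([(0, 1), (1, 2), (2, 3), (0, 3)],
                           {0: 0, 1: 1, 2: 2, 3: 0})
```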
\subsection{Inspection for Distributed-Memory Parallelism}
\label{sec:examples:execution}
In the case of distributed-memory parallelism, the mesh is partitioned and distributed to a set of processes. Neighboring processes typically exchange (MPI) messages before executing a loop $L_j$. A message includes all ``dirty'' dataset values required by $L_j$ that were modified by any $L_k$ with $L_k \prec L_j$. In the running example, $L_0$ writes to vertices, so a subset of values associated with border vertices must be communicated prior to the execution of $L_1$. To apply sparse tiling, the idea is to push all communications to the beginning of the loop chain: as we shall see, this increases the amount of data to be communicated at a time, but also reduces the number of synchronizations (only 1 synchronization between each pair of neighboring processes per loop chain execution).
From Section~\ref{sec:tiling:lc} it is known that, in a loop chain, a set is logically split into three regions, {\it core}, {\it boundary}, and {\it non-exec}. The boundary tiles, which originate from the seed partitioning of the boundary region, will include all iterations that cannot be executed until the communications have terminated. The procedure described for shared-memory parallelism -- now performed individually by each process on a partition of the input mesh -- is modified as follows:
\begin{enumerate}
\item The core region of the seed loop $L_0$ is partitioned into tiles. Unless aiming for a mixed distributed/shared-memory scheme, there is no need to assign identical colors to unconnected tiles, as a process will execute its own tiles sequentially. Colors are assigned increasingly, with $T_i$ given color $i$. As long as tiles with contiguous IDs are also physically contiguous in the mesh, this assignment retains memory access locality when ``jumping'' from executing $T_i$ to $T_{i+1}$.
\item The same process is applied to the boundary region. Thus, a situation in which a tile includes iterations from both the core and the boundary regions is prevented by construction. Further, all tiles within the boundary region are assigned colors higher than those used for the core tiles. This constrains the execution order: no boundary tiles will be executed until all core tiles are computed.
\item We map the whole non-exec region of $L_0$ to a single special tile, $T_{ne}$. This tile has the highest color and will actually never be executed.
\end{enumerate}
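The color assignment of these three steps can be sketched as follows (a simplified illustration, not the library's implementation):

```python
# Sketch of the color assignment for distributed-memory inspection, given
# m core tiles and k boundary tiles on a process.

def assign_colors(num_core, num_boundary):
    """Core tiles get the lowest colors, boundary tiles strictly higher
    ones, and the non-exec tile T_ne the highest color of all."""
    core = list(range(num_core))
    boundary = list(range(num_core, num_core + num_boundary))
    non_exec = num_core + num_boundary
    return core, boundary, non_exec

# Two core and two boundary tiles per process.
core, boundary, non_exec = assign_colors(2, 2)
```

The ordering guarantees that no boundary tile can ever be scheduled before a core tile, and that the non-exec tile is never reached.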
\begin{figure}[h]
\centering
\includegraphics[scale=0.45]{figures/mpi_loop0.pdf}
\caption{A snapshot of the two mesh partitions on {\tt Process 0} and {\tt Process 1} after inspecting the seed loop $L_0$ for distributed-memory parallelism. On each process, there are five tiles in total: two in the core region (green and violet), two in the boundary region (red and light blue), and $T_{ne}$. The boundary tiles can safely cross the owned and exec sub-regions (i.e., the local iterations and the iterations to be redundantly computed, respectively). However, no tile can include iterations from both the core and the boundary regions. }
\label{fig:st-mpi-init}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[scale=0.45]{figures/mpi_loop2.pdf}
\caption{A snapshot of the two mesh partitions on {\tt Process 0} and {\tt Process 1} at the end of the inspection for distributed-memory parallelism. $T_{ne}$ expands over the boundary region, which minimizes the amount of redundant computation to be performed. At the end of the execution phase, the orange edges will contain ``dirty values'', but correctness is not affected as the exec region only includes off-process data. The boundary tiles expand over the core region: this is essential for correctness since none of the red and blue entities from $[L_0,\ L_1,\ L_2]$ can be executed until the MPI communications have terminated.}
\label{fig:st-mpi-growth}
\end{figure}
From this point on, the inspection proceeds as in the case of shared-memory parallelism. The application of the $\operatorname{MAX}$ function when scheduling $L_1$ and $L_2$ makes higher color tiles (i.e., those having lower priority) ``expand over'' lower color ones.
In Figure~\ref{fig:st-mpi-init}, a mesh is partitioned over two processes and a possible seed partitioning and tiling of $L_0$ is illustrated. We observe that the two boundary tiles (the red and light blue ones) will expand over the core tiles as $L_1$ and $L_2$ are tiled, which eventually results in the scheduling illustrated in Figure~\ref{fig:st-mpi-growth}. Roughly speaking, if a loop chain consists of $n$ loops and, on each process, $n-1$ extra layers of iterations are provided (the exec regions in Figure~\ref{fig:st-mpi-init}), then all boundary tiles are correctly computed.
The schedule produced by the inspector is subsequently used by the executor. On each process, the executor starts by triggering the MPI communications required for the computation of boundary tiles. All core tiles are then computed, since they need no data from the boundary region; computation is thus overlapped with communication. Once all core tiles have been computed and the MPI communications have terminated, the boundary tiles can finally be computed.
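These phases can be sketched as follows, with placeholder callbacks standing in for the actual non-blocking MPI halo exchanges and tile executions (the names are illustrative, not the executor's real interface):

```python
# Sketch of the executor's four phases: start halo exchange, run core
# tiles (overlapping communication), wait, then run boundary tiles.

def execute_loop_chain(core_tiles, boundary_tiles,
                       start_comm, end_comm, execute_tile):
    req = start_comm()                         # (i) trigger halo exchange
    for t in sorted(core_tiles, key=lambda t: t["color"]):
        execute_tile(t)                        # (ii) core tiles: no halo data needed
    end_comm(req)                              # (iii) wait for the halos
    for t in sorted(boundary_tiles, key=lambda t: t["color"]):
        execute_tile(t)                        # (iv) boundary tiles

# Record the order of operations with a simple trace.
trace = []
execute_loop_chain(
    [{"id": "c0", "color": 0}, {"id": "c1", "color": 1}],
    [{"id": "b0", "color": 2}],
    start_comm=lambda: trace.append("comm_start"),
    end_comm=lambda req: trace.append("comm_end"),
    execute_tile=lambda t: trace.append(t["id"]))
```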
\paragraph{Efficiency considerations}
The underlying hypothesis is that the increase in data locality will outweigh the overhead induced by the redundant computation and by the larger volume of data exchanged. This expectation is motivated by several facts: (i) the loops are memory-bound; (ii) the core region is much larger than the boundary region; (iii) the amount of redundant computation is minimized through the special tile $T_{ne}$, which progressively expands over the boundary region, thus avoiding unnecessary calculations.
\section{Data Dependency Analysis}
\label{sec:data-dep-analysis}
The loop chain abstraction, described in Section~\ref{sec:tiling:lc}, provides the information to construct an inspector capable of analyzing data dependences and thus building a legal sparse tiling schedule. The dependences between two different loops may be of type flow (read after write), anti (write after read), or output (write after write). Further, there may be ``reduction dependences'' between iterations of the same loop.
\subsection{Cross-Loop Dependences}
Assume that loop $L_x$, having iteration space $S_x$, precedes loop $L_y$, having iteration space $S_y$, in the loop chain. Let $e_x$ be a generic iteration in $S_x$. Let $M$ be a map of arity $a$ between two iteration spaces. Let $\texttt{mode} \in \lbrace w, r, i\rbrace$ indicate whether an iteration is written, read, or incremented. We represent direct and indirect accesses as follows.
\begin{description}
\item[Direct access] $\perp_{S_x}^{\mathrm{mode}}(e_x) \rightarrow \lbrace \lbrace e_x \rbrace, \emptyset \rbrace$. In particular, if $\perp_{S_x}^{\mathrm{mode}}(e_x) = \emptyset$, then no direct write/read/increment is performed by $e_x$ when computing $L_x$.
\item[Indirect access] $M_{S_x \rightarrow S_y}^{\mathrm{mode}}(e_x) \rightarrow \lbrace \lbrace e_{y_0}, ..., e_{y_{a-1}} \rbrace, \emptyset \rbrace$. As per direct accesses, $M_{S_x \rightarrow S_y}^{\mathrm{mode}}(e_x) = \emptyset$ means that $e_x$ does not indirectly write/read/increment $S_y$ when computing $L_x$.
\end{description}
A direct access is a special case of indirect access when $y=x$ and $M_{S_x \rightarrow S_y}$ is the identity mapping. However, we here keep the distinction between the two types of access explicit due to their relevance in the sparse tiling algorithms, as explained in Section~\ref{sec:algorithm}.
By considering pairs of points $(e_x, e_y)$ in the iteration spaces of the two loops $L_x$ and $L_y$, namely $e_x \in S_x$ and $e_y \in S_y$, we can enumerate all possible dependences. For brevity, we do not distinguish between increments and writes; we also assume that at least one of the two loops accesses data indirectly. Let $S_z$ be a generic iteration space in the loop chain. Hence, the flow dependences are:
\begin{align*}
& \{ e_x \rightarrow e_y \; |
\underbrace{(\perp_{S_x}^{w}(e_x) \cap M_{S_y \rightarrow S_x}^{r}(e_y))}_\text{\ding{192} direct w, indirect r} \cup
\underbrace{(M_{S_x \rightarrow S_y}^{w}(e_x) \cap \perp_{S_y}^{r}(e_y))}_\text{\ding{193} indirect w, direct r} \cup
\underbrace{(M_{S_x\rightarrow S_z}^{w}(e_x) \cap M_{S_y \rightarrow S_z}^{r}(e_y))}_\text{\ding{194} indirect w, indirect r} \ne \emptyset
\}.
\end{align*}
In essence, there is a flow dependence between two iterations from different loops when one of those iterations writes to an element and the other iteration reads from the same element, directly or indirectly. To capture all these flow dependences, the inspection algorithm builds {\it projections} from one loop to another. We saw an example in Section~\ref{sec:examples:shm}: the loop over cells ($S_x$) performed an indirect increment to a dataset associated with vertices ($S_z$), which was then read by the subsequent loop over edges ($S_y$). Such flow dependence was of type \ding{194} (see definition above). For each vertex $e_z \in S_z$, the projection (illustrated in Figure~\ref{fig:st-loop-0-proj}) captured the {\it last tile} indirectly writing to (incrementing) $e_z$, exploiting the color (i.e., the scheduling priority) of the source iterations. A flow dependence of type \ding{192} would be even simpler to deal with, as it would not require the use of the indirect map $M_{S_x \rightarrow S_z}$ to update the color of the iterations.
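As a toy illustration of a dependence of type \ding{194}, the emptiness test above amounts to a set intersection; the maps below are hypothetical Python dicts from an iteration to the tuple of $S_z$ elements it accesses:

```python
# Toy illustration: an indirect write and an indirect read meet on a
# common set S_z (flow dependence of type 3 in the enumeration above).

def flow_dependent(e_x, e_y, write_map, read_map):
    """True iff e_x indirectly writes an element that e_y indirectly reads."""
    return bool(set(write_map.get(e_x, ())) & set(read_map.get(e_y, ())))

# Cell 1 increments vertices (3, 7, 9); edge 5 reads vertices (9, 12):
# they meet on vertex 9, hence a flow dependence.
dep = flow_dependent(1, 5, {1: (3, 7, 9)}, {5: (9, 12), 6: (10, 12)})
```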
Likewise, we can enumerate the anti and output dependences:
\begin{align*}
& \{ e_x \rightarrow e_y \; |
(\perp_{S_x}^{r}(e_x) \cap M_{S_y \rightarrow S_x}^{w}(e_y)) \cup
(M_{S_x \rightarrow S_y}^{r}(e_x) \cap \perp_{S_y}^{w}(e_y)) \cup
(M_{S_x\rightarrow S_z}^{r}(e_x) \cap M_{S_y \rightarrow S_z}^{w}(e_y)) \ne \emptyset \}.\\
& \{ e_x \rightarrow e_y \; |
(\perp_{S_x}^{w}(e_x) \cap M_{S_y \rightarrow S_x}^{w}(e_y)) \cup
(M_{S_x \rightarrow S_y}^{w}(e_x) \cap \perp_{S_y}^{w}(e_y)) \cup
(M_{S_x\rightarrow S_z}^{w}(e_x) \cap M_{S_y \rightarrow S_z}^{w}(e_y)) \ne \emptyset \}.
\end{align*}
Projections for these types of dependences are built analogously to those described above.
The inspection algorithm building projections for all flow, anti, and output dependences is provided in Algorithm~\ref{algo:st-projection} and discussed in Section~\ref{sec:inspector:proj}. How the inspector leverages data dependence analysis (i.e., projections) to schedule iterations to tiles (i.e., the tiling function) is formalized in Algorithm~\ref{algo:st-tiling} and commented in Section~\ref{sec:inspector:tiling}.
\subsection{Intra-Loop Dependences}
There are also local reductions, or ``reduction dependences'', between two or more iterations of the same loop when those iterations increment the same location(s); that is, when they read, modify with a commutative and associative operator, and write to the same location(s). The reduction dependences in $L_x$ are:
\begin{align*}
\{ e_{x_1} \rightarrow e_{x_2} \; | M_{S_x\rightarrow S_z}^{i}(e_{x_1}) \cap M_{S_x \rightarrow S_z}^{i}(e_{x_2}) \ne \emptyset \}.
\end{align*}
A reduction dependence between two iterations within the same loop indicates that those two iterations must be executed atomically with respect to each other. As we explained in Section~\ref{sec:examples:shm}, the inspection algorithm uses coloring to ensure atomic increments.
\section{Algorithms}
\label{sec:algorithm}
\begin{table}[h]
\tiny
\centering
\begin{tabulary}{1.0\columnwidth}{C|C}
\hline
Symbol & Meaning \\
\hlineB{4}
$\mathbb{L}$ & The loop chain \\
$L_j$ & The $j$-th loop in $\mathbb{L}$ \\
$\mathrm{seed}$ & The index of the seed loop\\
$S_j$ & The iteration space of $L_j$ \\
$S_j^{c}$, $S_j^{b}$, $S_j^{\mathrm{ne}}$ & The core, boundary, and non-exec regions of $S_j$ \\
$D$ & A descriptor of a loop \\
$\mathbb{T}$ & The collection of tiles \\
$\mathbb{T}[i]$ & Accessing the $i$-th tile, or $T_i$ \\
$T_i^{c}$, $T_i^{b}$, $T_i^{\mathrm{ne}}$ & The core, boundary, and non-exec regions of $T_i$\\
$\phi_S$ & A projection $\phi_S : S \rightarrow \mathbb{T}$ \\
$\Phi$ & The collection of projections \\
$\sigma_j$ & A tiling function $\sigma_j : S_j \rightarrow \mathbb{T}$ for $L_j$ \\
$\mathrm{ts}$ & The seed tile size \\
$C$ & The matrix of conflicting colors \\
{\tt Ax:y} & Algorithm x, line y \\
\hline
\end{tabulary}
\caption{Summary of the notation used throughout Section~\ref{sec:algorithm}.}
\label{table:st-summary-notation}
\end{table}
The pseudo-code for the sparse tiling inspector is shown in Algorithm~\ref{algo:st-inspector}. Given a loop chain and a seed tile size, the algorithm produces a schedule suitable for mixed distributed/shared-memory parallelism. This schedule -- in essence, a set of populated tiles -- is used by the executor to perform the sparse-tiled computation. The executor pseudo-code is displayed in Algorithm~\ref{algo:st-executor}. The next two sections elaborate on the main steps of these algorithms. The notation is summarized in Table~\ref{table:st-summary-notation}; the syntax {\tt Ax:y} is a shortcut for ``Algorithm x, line y''. The implementation is discussed in Section~\ref{sec:implementation}.
\begin{minipage}[t]{6.7cm}
\IncMargin{1em}
\begin{algorithm}[H]
\SetAlgoLined
\caption{The inspection algorithm}\label{algo:st-inspector}
\SetKwData{SeedMap}{seed$\_$map}
\SetKwData{Conflicts}{conflicts}
\SetKwData{C}{C}
\SetKwFunction{AFC}{add$\_$fake$\_$connection}
\SetKwFunction{IFC}{has$\_$conflicts}
\SetKwFunction{CLM}{compute$\_$local$\_$maps}
\SetKwFunction{Color}{color}
\SetKwFunction{Partition}{partition}
\SetKwFunction{FindMap}{find$\_$map}
\SetKwFunction{Project}{project}
\SetKwFunction{Assign}{assign}
\SetKwFunction{Tile}{tile}
\tiny
\kwInput{The loop chain $\mathbb{L} = [L_0,\ L_1,\ ...,\ L_{n-1}]$, a seed tile size $\mathrm{ts}$}
\kwOutput{A collection of tiles $\mathbb{T}$, populated with iterations from $\mathbb{L}$}
$\mathrm{seed} \gets 0$, $\Phi \gets \emptyset$, $\C \gets \perp$\; \label{algo:insp-empty-projs}
$\sigma_{\mathrm{seed}}$, $\mathbb{T} \gets$ \Partition{$S_{\mathrm{seed}}$, $\mathrm{ts}$}\; \label{algo:insp-partition}
\SeedMap $\gets$ \FindMap{$S_{\mathrm{seed}}$, $\mathbb{L}$}\;
\Do{\Conflicts}{
\Conflicts $\gets$ \False\;
\Color{$\mathbb{T}$, \SeedMap}\; \label{algo:insp-color}
\For{$j=1$ \KwTo $n-1$}{ \label{algo:st-tiling-loop}
\Project{$L_{j-1}$, $\sigma_{j-1}$, $\Phi$, \C}\;
$\sigma_j \gets$ \Tile{$L_j$, $\Phi$}\;
\Assign{$\sigma_j$, $\mathbb{T}$}\; \label{algo:insp-assign}
}
\If{\IFC{\C}}{
\Conflicts $\gets$ \True\;
\AFC{\SeedMap, \C}\;
}
}
\CLM{$\mathbb{T}$}\;
\Return{$\mathbb{T}$}
\end{algorithm}
\end{minipage}%
\begin{minipage}[t]{6.7cm}
\IncMargin{1em}
\begin{algorithm}[H]
\caption{The executor algorithm}\label{algo:st-executor}
\SetKwData{Color}{color}
\SetKwData{T}{$T$}
\SetKwFunction{SMC}{start$\_$MPI$\_$comm}
\SetKwFunction{EMC}{end$\_$MPI$\_$comm}
\SetKwFunction{GTBR}{group$\_$tiles$\_$by$\_$region}
\SetKwFunction{ET}{execute$\_$tile}
\tiny
\kwInput{A collection of tiles $\mathbb{T}$}
\KwResult{Execute the loop chain}
$\mathbb{T}^{c}$, $\mathbb{T}^{b}$ $\gets$ \GTBR{$\mathbb{T}$}\;
\nonl ~\\
\SMC{}\;
\nonl ~\\
\ForEach{\Color}{
\ForEach{$\T \in \mathbb{T}^{c}$ s.t. \T .\Color $==$ \Color}{ \label{algo:st-executor:parallel1}
\ET{\T}\;
}
}
\nonl ~\\
\EMC{}\;
\nonl ~\\
\ForEach{\Color}{
\ForEach{$\T \in \mathbb{T}^{b}$ s.t. \T .\Color $==$ \Color}{ \label{algo:st-executor:parallel2}
\ET{\T}\;
}
}
\end{algorithm}
\end{minipage}
\subsection{Inspector}
\label{sec:inspector}
\subsubsection{Choice of the seed loop}
The seed loop $L_{\mathrm{seed}}$ is used to initialize the tiles. Theoretically, any loop in the chain can be chosen as seed. Supporting distributed-memory parallelism, however, is cumbersome if $L_{\mathrm{seed}} \neq L_0$. This is because more general schemes for partitioning and coloring would be needed to ensure that no iterations in any $S_j^{b}$ are assigned to a core tile. A limitation of our inspector algorithm in the case of distributed-memory parallelism is that $L_{\mathrm{seed}} = L_0$ is required.
In the special case in which there is no need to distinguish between core and boundary tiles (because a program is executed on a single shared-memory system), $L_{\mathrm{seed}}$ can be chosen arbitrarily. If we however pick $L_{\mathrm{seed}}$ in the middle of the loop chain, that is $L_0 \prec ... \prec L_{\mathrm{seed}} \prec ... \prec L_{n-1}$, a mechanism for constructing tiles in the reverse direction (``backwards''), from $L_{\mathrm{seed}}$ towards $L_0$, is necessary. In~\cite{st-paper}, we propose two ``symmetric'' algorithms to solve this problem, \textit{forward tiling} and \textit{backward tiling}, with the latter using the $\operatorname{MIN}$ function in place of $\operatorname{MAX}$ when computing projections. For ease of exposition, and since in the fundamental case of distributed-memory parallelism we are imposing $L_{\mathrm{seed}} = L_0$, we here neglect this distinction\footnote{The algorithm implemented in SLOPE, the library presented in Section~\ref{sec:tiling:impl-slope}, supports backwards tiling for shared-memory parallelism.}.
\subsubsection{Tiles initialization}
\label{sec:inspector:init}
Let $\mathrm{ts}$ be the user-specified seed tile size. The algorithm starts by partitioning $S_{\mathrm{seed}}^{c}$ into $m$ subsets $\lbrace P_0, P_1, ..., P_{m-1}\rbrace$ such that $|P_i| = \mathrm{ts}$ (except possibly for $P_{m-1}$), $P_i \cap P_j = \emptyset$, and $\cup_{i = 0}^{m-1} P_i = S_{\mathrm{seed}}^{c}$. Among all possible legal partitionings, we choose the one that splits $S_{\mathrm{seed}}^c$ into blocks of $\mathrm{ts}$ contiguous iterations, with $P_0 = \lbrace 0, ..., \mathrm{ts}-1\rbrace$, $P_1 = \lbrace \mathrm{ts}, ..., 2\,\mathrm{ts} - 1\rbrace$, and so on. We analogously partition $S_{\mathrm{seed}}^{b}$ into $k$ subsets. We create $m+k+1$ tiles, one for each of these partitions and one extra tile for $S_{\mathrm{seed}}^{\mathrm{ne}}$, namely $\mathbb{T} = [T_0^c, ..., T_{m-1}^c, T_m^{b}, ..., T_{m+k-1}^b, T_{m+k}^{\mathrm{ne}}]$. At this point we have an assignment of iterations to tiles for $L_{\mathrm{seed}}$; that is, a tiling function $\sigma_{\mathrm{seed}} : S_{\mathrm{seed}} \rightarrow \mathbb{T}$. This initial partitioning phase occurs at {\tt A\ref{algo:st-inspector}:\ref{algo:insp-partition}}.
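A sketch of this contiguous-block partitioning (illustrative only):

```python
# Sketch of the seed partitioning into blocks of ts contiguous
# iterations; the last block may be smaller than ts.

def partition_seed(num_iterations, ts):
    """Return the seed tiling function as a list: iteration i -> tile ID."""
    return [i // ts for i in range(num_iterations)]

# 10 core iterations, ts = 4: P0 = {0..3}, P1 = {4..7}, P2 = {8, 9}.
sigma_seed = partition_seed(10, 4)
```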
A tile $T_i$ has four fields, as summarized in Table~\ref{table:st-tile-structure}.
\begin{itemize}
\item The {\em region} is used by the executor to schedule tiles in a given order. This field is set right after the partitioning of $L_{\mathrm{seed}}$, as a tile (by construction) exclusively belongs to $S_{\mathrm{seed}}^c$, $S_{\mathrm{seed}}^b$, or $S_{\mathrm{seed}}^{\mathrm{ne}}$.
\item The {\em iteration lists} contain the iterations in $\mathbb{L}$ that $T_i$ will have to execute. There is one {\em iteration list} for each $L_j \in \mathbb{L}$, indicated as $[T_i]_j$. At this stage of the inspection we have $[T_i]_{\mathrm{seed}} = [T_i]_0 = P_i$, whereas still $[T_i]_j = \emptyset$ for $j=1,...,n-1$.
\item {\em Local maps} may be used for performance optimization by the executor in place of the global maps provided through the loop chain; this will be discussed in more detail in Section~\ref{sec:performance} and in the Supplementary Materials.
\item The {\em color} provides a tile with a scheduling priority. If shared-memory parallelism is requested, adjacent tiles are given different colors (the adjacency relation is determined through the maps available in $\mathbb{L}$). Otherwise, colors are assigned in increasing order (i.e., $T_i$ is given color $i$). The boundary tiles are always given colors higher than those of the core tiles; the non-exec tile has the highest color. The assignment of colors is carried out by {\tt A\ref{algo:st-inspector}:\ref{algo:insp-color}}.
\end{itemize}
\begin{table}[h]
\tiny
\centering
\begin{tabulary}{1.0\columnwidth}{P{2.7cm} | P{8.5cm}}
\hline
Field & Possible values \\
\hlineB{4}
{\em region} & core, boundary, non-exec \\
{\em iterations lists} & one list of iterations $[T_i]_j$ for each $L_j \in \mathbb{L}$\\
{\em local maps} & one list of local maps for each $L_j \in \mathbb{L}$; one local map for each map used in $L_j$\\
{\em color} & an integer representing the execution priority \\
\hline
\end{tabulary}
\caption{The tile data structure.}
\label{table:st-tile-structure}
\end{table}
\subsubsection{The inspection loop}
The inspection loop, starting at {\tt A\ref{algo:st-inspector}:\ref{algo:st-tiling-loop}}, schedules the remaining $L_j \in \mathbb{L}$ by alternating dependence analysis and construction of tiling functions. The input is $\sigma_{\mathrm{seed}}$. As seen in the previous sections, a projection is a function $\phi_S : S \rightarrow \mathbb{T}$ that captures data dependences across loops. Initially, the projections set $\Phi$ is empty ({\tt A\ref{algo:st-inspector}:\ref{algo:insp-empty-projs}}). Once a new loop is tiled, $\Phi$ is updated by adding new projections or changing existing ones (see Section~\ref{sec:inspector:proj}). Using $\Phi$, a new tiling function $\sigma_j$ for $L_j$ is derived (see Section~\ref{sec:inspector:tiling}).
\subsubsection{Deriving a projection from a tiling function}
\label{sec:inspector:proj}
Algorithm~\ref{algo:st-projection} takes as input (the descriptors of) an $L_{j}$ and its tiling function $\sigma_{j} :S_j \rightarrow \mathbb{T}$ to update $\Phi$. The algorithm also updates the conflicts matrix $C \in \mathbb{N}^{m \times m}$, which indicates whether two tiles having the same color will become adjacent.
A new projection $\phi_{S}$ is needed if $S$ is written by $L_{j}$. As explained in Section~\ref{sec:data-dep-analysis}, $\phi_{S}$ carries the necessary information to tile a subsequent loop accessing $S$. Let us consider the non-trivial case in which $L_j$ writes indirectly to $S$ through a map $M : S_j \rightarrow S^a$. To compute $\phi_{S}$, we first determine the inverse map $M^{-1}$ ({\tt A\ref{algo:st-projection}:\ref{algo:st-projection-invert}}; an example is shown in Figure~\ref{fig:st-inverse-map}). Then, we iterate over all elements $e \in S$ and we set $\phi_{S}[e]$ to the last tile writing to $e$. This is accomplished by applying the $\operatorname{MAX}$ function over the color of the tiles accessing $e$ (see {\tt A\ref{algo:st-projection}:\ref{algo:st-projection-max}}), obtained through $M^{-1}$. This procedure was used, for example, to compute the projection in Figure~\ref{fig:st-loop-0-proj}.
\begin{minipage}[t]{6.7cm}
\IncMargin{1em}
\begin{algorithm}[H]
\caption{Projection of a tiled loop}\label{algo:st-projection}
\tiny
\SetKwData{Descriptors}{descriptors}
\SetKwData{Arity}{arity}
\SetKwData{T}{$T$}
\SetKwData{AT}{$T_{\mathrm{last}}$}
\SetKwData{MC}{max}
\SetKwData{IM}{$M^{-1}$}
\SetKwData{Map}{map}
\SetKwData{Mode}{mode}
\SetKwData{D}{D}
\SetKwData{C}{C}
\SetKwData{Values}{values}
\SetKwData{Offset}{offset}
\SetKwData{Color}{color}
\SetKwFunction{MapInvert}{map$\_$invert}
\SetKwFunction{Update}{update$\_$color$\_$conflicts}
\SetKwFunction{Unpack}{unpack}
\kwInput{A loop $L_j$, a tiling function $\sigma_j$, the projections set $\Phi$, the conflicts matrix \C}
\KwResult{Update $\Phi$ and \C}
\ForEach{\D $\in$ $L_j$.\Descriptors}{
\eIf{\D .\Map $==$ $\perp$}{
$\Phi[S_j] \gets \sigma_{j}$\;
}{
\IM $\gets$ \MapInvert{\D .\Map}\; \label{algo:st-projection-invert}
$S$, $S_j$, \Values, \Offset $\gets$ \IM.\Unpack()\;
$\phi_{S} \gets \perp$\;
\For{$e$ \In $S$}{ \label{algo:st-projection-parallel}
\For{$k= \Offset[e]$ \KwTo $\Offset[e+1]$}{
\AT = $\mathbb{T}[\Values[k]]$\;
\MC $\gets$ MAX($\phi_{S}[e]$.\Color, \AT .\Color)\; \label{algo:st-projection-max}
\If{\MC $\neq$ $\phi_{S}[e]$.\Color}{
$\phi_{S}[e] \gets$ \AT\;
}
}
}
\Update{\C, $\mathbb{T}$, $\phi_{S}$}\;
$\Phi[S] = \phi_{S}$\;
}
}
\end{algorithm}
\end{minipage}%
\begin{minipage}[t]{6.7cm}
\IncMargin{1em}
\begin{algorithm}[H]
\caption{Building a tiling function}\label{algo:st-tiling}
\tiny
\SetKwData{Descriptors}{descriptors}
\SetKwData{Arity}{arity}
\SetKwData{AT}{$T$}
\SetKwData{MC}{max}
\SetKwData{Size}{size}
\SetKwData{Map}{map}
\SetKwData{D}{D}
\SetKwData{Values}{values}
\SetKwData{Color}{color}
\kwInput{A loop $L_{j}$, the projections set $\Phi$}
\kwOutput{The tiling function $\sigma_{j}$}
$\sigma_j \gets \perp$\;
\ForEach{\D $\in$ $L_j$.\Descriptors}{
\eIf{\D .\Map $==$ $\perp$}{
$\phi_{S_j} \gets \Phi[S_j]$\;
\For{$e$ \In $S_j$}{
\MC $\gets$ MAX($\sigma_j[e]$.\Color, $\phi_{S_j}[e]$.\Color)\;
\If{\MC $\neq$ $\sigma_j[e]$.\Color}{
$\sigma_j[e] \gets$ $\phi_{S_j}[e]$\;
}
}
}{
\Arity $\gets$ D.\Map .\Arity\;
$\phi_{S} \gets \Phi[\D.\Map.S]$\;
\For{$e$ \In $S_j$}{ \label{algo:st-tiling-parallel}
$\sigma_j[e] \gets \AT_{\perp}$\;
\For{$k=0$ \KwTo \Arity}{
\AT $\gets \phi_{S}[\D.\Map .\Values[e*\Arity + k]]$\;
\MC $\gets$ MAX($\sigma_j[e]$.\Color, \AT .\Color)\;
\If{\MC $\neq$ $\sigma_j[e]$.\Color}{
$\sigma_j[e] \gets$ \AT\;
}
}
}
}
}
\Return{$\sigma_j$}
\end{algorithm}
\end{minipage}
\begin{figure}[h]
\begin{CenteredBox}
\includegraphics[scale=0.40]{figures/inverse_map}
\end{CenteredBox}
\caption{Representation of an inverse map. The original map shows that the triangular cell $1$ is adjacent to three vertices, namely $3$, $7$, and $9$. The inverse map associates vertices to cells. Since the mesh is unstructured, different vertices can be incident to a different number of cells. The array {\tt offset} provides the distance between two consecutive vertices in the inverse map. For instance, all entries in the inverse map between {\tt offset[3]} and {\tt offset[4]} are cells incident to vertex $3$.}
\label{fig:st-inverse-map}
\end{figure}
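The inverse map illustrated above can be built in two linear passes: one to count incidences and derive the {\tt offset} array, and one to scatter the source iteration IDs. A Python sketch (illustrative, not SLOPE's implementation):

```python
# Sketch: inverting a cells-to-vertices map into the (offset, values)
# layout, where values[offset[v]:offset[v+1]] lists the cells incident
# to vertex v.

def invert_map(cells_to_vertices, num_vertices):
    # First pass: count how many cells touch each vertex.
    counts = [0] * num_vertices
    for vs in cells_to_vertices:
        for v in vs:
            counts[v] += 1
    # An exclusive prefix sum over the counts gives the offset array.
    offset = [0] * (num_vertices + 1)
    for v in range(num_vertices):
        offset[v + 1] = offset[v] + counts[v]
    # Second pass: scatter each cell ID into the slots of its vertices.
    values = [0] * offset[-1]
    cursor = offset[:-1]            # per-vertex write position (a copy)
    for c, vs in enumerate(cells_to_vertices):
        for v in vs:
            values[cursor[v]] = c
            cursor[v] += 1
    return offset, values

# Two triangles sharing the edge (1, 2).
offset, values = invert_map([(0, 1, 2), (1, 2, 3)], 4)
```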
\subsubsection{Deriving a tiling function from the available projections}
\label{sec:inspector:tiling}
Using $\Phi$, we compute $\sigma_j$ as described in Algorithm~\ref{algo:st-tiling}. The algorithm is similar to the projection of a tiled loop (e.g., maps are used to access the neighborhood of a target iteration). The main difference is that now the projections in $\Phi$ are used, rather than computed, to schedule iterations to tiles so that data dependences are honored. Finally, $\sigma_j$ is used to populate the iteration lists $[T_i]_j$, for all $T_i \in \mathbb{T}$ (see {\tt A\ref{algo:st-inspector}:\ref{algo:insp-assign}}).
\subsubsection{Detection of conflicts}
If $C$ indicates the presence of at least one conflict, say between $T_{i_1}$ and $T_{i_2}$, we add a ``fake connection'' between these two tiles and loop back to the coloring stage. $T_{i_1}$ and $T_{i_2}$ are now connected, so they will be assigned different colors.
\subsubsection{On the history of the algorithm}
The first algorithm for generalized sparse tiling inspection was introduced in~\cite{st-paper}. In this section, a new, enhanced version of that algorithm has been presented. In essence, the major differences are: (i) support for distributed-memory parallelism; (ii) use of coloring instead of a task graph for tile scheduling; (iii) speculative inspection with backtracking if a coloring conflict is detected; (iv) use of sets, instead of datasets, for data dependency analysis; (v) use of inverse maps for parallelization of the projection and tiling routines; (vi) computation of local maps. Most of these changes contributed to reducing the inspection cost, as discussed later.
\subsection{Executor}
\label{sec:executor}
The sparse tiling executor is illustrated in Algorithm~\ref{algo:st-executor}. It consists of four main phases: (i) exchange of halo regions amongst neighboring processes through non-blocking communications; (ii) execution of core tiles (in overlap with communication); (iii) wait for the termination of the communications; (iv) execution of boundary tiles.
As explained in Section~\ref{sec:examples:execution}, a sufficiently deep halo region enables correct computation of the boundary tiles. Further, tiles are executed atomically, meaning that all iterations in a tile are computed without ever synchronizing with other processes. The depth of the boundary region, which affects the amount of off-process data to be redundantly computed, increases with the number $n$ of loops to be fused. In the example in Figure~\ref{fig:st-mpi-init}, there are $n=3$ loops, and three ``strips'' of extra vertices are necessary for correctly computing the fused loops without tile-to-tile synchronizations.
We recall from Section~\ref{sec:tiling:lc} that the {\em depth} of the loop chain indicates the extent of the boundary region. This parameter imposes a limit on the number of fusible loops. If $\mathbb{L}$ includes more loops than the available boundary region -- that is, if $n > \text{{\em depth}}$ -- then $\mathbb{L}$ will have to be split into shorter loop chains, to be fused individually. As we shall see (Section~\ref{sec:implementation}), in our inspector/executor implementation the {\em depth} is controlled by Firedrake's DMPlex module.
\section{Implementation: SLOPE, PyOP2, and Firedrake}
\label{sec:implementation}
The implementation of automated sparse tiling is distributed over three open-source software modules.
\begin{description}
\item[Firedrake] An established framework for the automated solution of PDEs through the finite element method~\citep{firedrake}.
\item[PyOP2] A module used by Firedrake to apply numerical kernels over sets of mesh components. Parallelism is handled at this level.
\item[SLOPE] A library for writing inspector/executor schemes, with primary focus on sparse tiling. PyOP2 uses SLOPE to apply sparse tiling to loop chains.
\end{description}
The SLOPE library is an open source embodiment of the algorithms presented in this article. The interplay amongst Firedrake, PyOP2 and SLOPE is outlined in Figure~\ref{fig:st-implementation} and discussed in more detail in the following sections.
\begin{figure}[htpb]
\centering
\includegraphics[scale=0.6]{figures/firedrake-pyop2-slope.pdf}
\caption{Sparse tiling in the Firedrake-PyOP2-SLOPE framework. There are three ways of sparse tiling a loop chain: decorating a Firedrake program (1A), decorating a sequence of loops in a PyOP2 program (1B), writing both the loop chain and the inspector/executor codes explicitly in C through calls to SLOPE (1C). Both (1A) and (1B) use the {\em loop$\_$chain} interface (details in Section~\ref{sec:tiling:lcinterface}). PyOP2 derives the loop chain (2), essentially sets, maps and loops (see Section~\ref{sec:tiling:lc} and example in Figure~\ref{code:tiling-inspector}), from the kernels produced within the {\em loop$\_$chain} context. The loop chain is provided to SLOPE through its Python interface (3). SLOPE performs the inspection and returns its output, a tiles schedule, to PyOP2 (4). Eventually, the executor is generated and run by PyOP2.}
\label{fig:st-implementation}
\end{figure}
\subsection{SLOPE: a Library for Sparse Tiling Irregular Computations}
\label{sec:tiling:impl-slope}
SLOPE is an open-source library that provides an interface to build loop chains and to express inspector/executor schemes for sparse tiling\footnote{SLOPE is available at \url{https://github.com/coneoproject/SLOPE}}.
The loop chain abstraction implemented by SLOPE has been formalized in Section~\ref{sec:tiling:lc}. In essence, a loop chain comprises some sets (including the separation into core, boundary, and non-exec regions), maps between sets, and a sequence of loops. Each loop has one or more descriptors specifying what and how different sets are accessed. The example in Figure~\ref{code:tiling-inspector} illustrates the interface exposed by the library. SLOPE implements Algorithms~\ref{algo:st-inspector},~\ref{algo:st-projection} and~\ref{algo:st-tiling} from Section~\ref{sec:algorithm}. Further, it provides additional features to estimate the effectiveness and to verify the correctness of sparse tiling, including generation of VTK files (suitable for Paraview~\citep{paraview}), to visualize the partitioning of the mesh into colored tiles, as well as insightful inspection summaries (showing, for example, number and average size of tiles, total number of colors used, time spent in critical inspection phases).
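To fix ideas, the ingredients of a loop chain can be sketched with plain Python data structures; the names, arities, and region sizes below are invented, and SLOPE's real interface is the one illustrated in Figure~\ref{code:tiling-inspector}:

```python
# Illustrative only: a loop chain comprises sets (split into core,
# boundary, and non-exec regions), maps between sets, and a sequence
# of loops, each with descriptors stating how sets are accessed.
sets = {
    "cells":    {"core": 1000, "boundary": 40, "non_exec": 20},
    "vertices": {"core": 600,  "boundary": 30, "non_exec": 15},
}
maps = {
    "cells2verts": {"from": "cells", "to": "vertices", "arity": 3},
}
loops = [
    {"kernel": "assemble_vector", "iterates_over": "cells",
     "access": [("cells2verts", "READ"), ("direct", "WRITE")]},
    {"kernel": "solve_block",     "iterates_over": "cells",
     "access": [("direct", "RW")]},
]
loop_chain = {"sets": sets, "maps": maps, "loops": loops}
```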
In the case of shared-memory parallelism, the following sections of code are parallelized through OpenMP:
\begin{description}
\item[Inspection] The projection and tiling algorithms; in particular, the loops over set elements in Algorithm~\ref{algo:st-projection} and Algorithm~\ref{algo:st-tiling}.
\item[Execution] The computation of same colored tiles; that is, the two loops over tiles in Algorithm~\ref{algo:st-executor}.
\end{description}
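The execution schedule can be mimicked sequentially as follows. This is a sketch of the color-by-color order of Algorithm~\ref{algo:st-executor}; in the real OpenMP executor, the inner loop over same-colored tiles is the parallel one, since such tiles carry no mutual dependencies:

```python
# Colors are executed in order; tiles of the same color are
# independent, so the inner loop is safe to parallelize (here it is
# run sequentially for illustration).
def execute(tiles_by_color, run_tile):
    for color in sorted(tiles_by_color):
        for tile in tiles_by_color[color]:   # parallel loop in OpenMP
            run_tile(tile)

executed = []
execute({0: ["t0", "t2"], 1: ["t1"]}, executed.append)
```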
\subsection{PyOP2: Lazy Evaluation and Interfaces}
\label{sec:tiling:lcinterface}
PyOP2 is a Python library offering abstractions to model an unstructured mesh -- in terms of {\em sets} (e.g. vertices, edges), {\em maps} between sets (e.g., a map from edges to vertices to express the mesh topology), and {\em datasets} associating data to each set element (e.g. 3D coordinates to each vertex) -- and to apply numerical kernels to sets of entities~\citep{firedrake}. In this section, we focus on the three relevant contributions to PyOP2 made through our work: (i) the interface to identify loop chains; (ii) the lazy evaluation mechanism that allows loop chains to be built; (iii) the interaction with SLOPE to automatically build and execute inspector/executor schemes.
To apply sparse tiling to a sequence of loops, the {\em loop$\_$chain} interface was added to PyOP2. This interface, exemplified in Figure~\ref{code:loop-chain-interface}, is also exposed to the higher layers, for example in Firedrake. In the listing, the {\tt name} uniquely identifies a loop chain. Other parameters (most of them optional) are useful for performance evaluation and performance tuning. Amongst them, the most important are the {\tt tile$\_$size} and the {\tt fusion$\_$scheme}. The {\tt tile$\_$size} specifies the initial average size for the seed partitions. The {\tt fusion$\_$scheme} allows the user to specify how to break a long sequence of loops into smaller loop chains, which makes it possible to experiment with a full set of sparse tiling strategies without having to modify the source code.
\begin{figure}[htpb]
\begin{CenteredBox}
\begin{lstlisting}[basicstyle=\scriptsize\ttfamily,morekeywords={with}]
with loop_chain(name, tile_size, fusion_scheme, ...):
<some PyOP2 parallel loops are expressed/generated here>
\end{lstlisting}
\end{CenteredBox}
\caption{The {\em loop$\_$chain} interface in PyOP2.}
\label{code:loop-chain-interface}
\end{figure}
PyOP2 exploits lazy evaluation of parallel loops to generate an inspector/executor scheme. The parallel loops encountered during the program execution -- or, analogously, those generated through Firedrake -- are pushed into a queue, instead of being executed immediately. The sequence of parallel loops in the queue is called the {\em trace}. If a dataset $f$ needs to be read, for example because a user wants to inspect its values or a global linear algebra operation is performed, then the trace is traversed -- from the most recent parallel loop to the oldest one -- and a new sub-trace produced. The sub-trace includes all parallel loops that must be executed to evaluate $f$ correctly. The sub-trace can then be executed or further pre-processed.
All loops in a trace that were created within a {\em loop$\_$chain} scope are sparse tiling candidates. In detail, the interaction between PyOP2 and SLOPE is as follows:
\begin{enumerate}
\item Figure~\ref{code:loop-chain-interface} shows that a {\em loop$\_$chain} defines a new scope. As this scope is entered, a stamp $s_1$ of the trace is generated. This happens ``behind the scenes'', because the {\em loop$\_$chain} is a Python context manager, which can execute pre-specified routines before and after the execution of the body. As the {\em loop$\_$chain}'s scope is exited, a new stamp $s_2$ of the trace is computed. All parallel loops in the trace generated between $s_1$ and $s_2$ are placed into a sub-trace for pre-processing.
\item The pre-processing consists of two steps: (i) ``simple'' fusion -- consecutive parallel loops iterating over the same iteration space that do not present indirect data dependencies are merged; (ii) generation of a loop chain representation for SLOPE.
\item In (ii), PyOP2 inspects the sequence of parallel loops and translates their metadata (sets, maps, loops) into a format suitable for the SLOPE's Python interface. SLOPE performs an inspection for the received loop chain and returns a tiles schedule to PyOP2 (i.e., it runs Algorithm~\ref{algo:st-inspector}).
\item A ``software cache'' mapping {\em loop$\_$chain}s to {\em inspection}s is used. The whole process therefore needs to be executed only once for a given {\em loop$\_$chain}.
\item The executor is generated, compiled and run directly by PyOP2, with the help of an API provided by SLOPE. To run the executor, the tiles schedule produced in (3) is used.
\end{enumerate}
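The trace-stamping behavior in step (1) can be sketched with a toy context manager (hypothetical names; PyOP2's internals are considerably more involved):

```python
from contextlib import contextmanager

trace = []   # queue of lazily evaluated parallel loops
fused = []   # sub-traces captured by loop_chain scopes

def par_loop(kernel):
    trace.append(kernel)  # deferred: enqueued, not executed

@contextmanager
def loop_chain(name):
    s1 = len(trace)             # stamp on scope entry
    yield
    s2 = len(trace)             # stamp on scope exit
    fused.append(trace[s1:s2])  # sparse tiling candidates

with loop_chain("timestep"):
    par_loop("velocity_assembly")
    par_loop("stress_assembly")
```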
\subsection{Firedrake/DMPlex: the S-depth Mechanism for Extended Halo Regions}
\label{sec:tiling:impl-firedrake}
Firedrake uses DMPlex~\citep{dmplex-cite} to handle meshes. DMPlex is responsible for partitioning, distributing over multiple processes, and locally reordering a mesh. The MPI parallelization is therefore managed through Firedrake/DMPlex.
During the start-up phase, each MPI process receives a contiguous partition of the original mesh from DMPlex. The required PyOP2 sets, which can represent either topological components (e.g., cells, vertices) or function spaces, are created. As intuitively shown in Figure~\ref{fig:sets}, these sets distinguish between multiple regions: core, owned, exec, and non-exec. Firedrake initializes the four regions exploiting the information provided by DMPlex.
To support the loop chain abstraction, Firedrake must be able to allocate arbitrarily deep halo regions. Both Firedrake and DMPlex have been extended to support this feature~\citep{Knepley2015}. A parameter called {\em S-depth} (the name has historical origins, see for instance~\cite{s-depth-paper}) regulates the extent of the halo regions. A value {\em S-depth} $=n$ indicates the presence of $n$ strips of off-process data elements in each set. The default value is {\em S-depth} $=1$, which enables computation-communication overlap when executing a single loop at the price of a small amount of redundant computation along partition boundaries. This is the default execution model in Firedrake.
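The effect of {\em S-depth} $=n$ can be illustrated by growing $n$ strips of off-process elements around a local partition via breadth-first expansion over a toy adjacency (this is a sketch, not the DMPlex implementation):

```python
def halo_layers(owned, adjacency, depth):
    """Return `depth` successive strips of off-process elements."""
    seen, frontier, layers = set(owned), set(owned), []
    for _ in range(depth):
        strip = {n for e in frontier for n in adjacency.get(e, ())} - seen
        layers.append(sorted(strip))
        seen |= strip
        frontier = strip
    return layers

# A 1D chain of five cells, of which this process owns the first two:
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
```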
\section{Performance Evaluation}
\label{sec:performance}
\subsection{The Seigen Computational Framework}
Seigen is a seismological modelling framework capable of solving the elastic wave equation on unstructured meshes. Exploiting the well-known velocity-stress formulation~\citep{Seigen-3}, the seismic model is expressible as two first-order linear PDEs, which we refer to as {\tt velocity} and {\tt stress}. These governing equations are discretized in space through the discontinuous-Galerkin finite element method. The evolution over time is obtained by using a fourth-order explicit leapfrog scheme based on a truncated Taylor series expansion of the velocity and stress fields. The particular choice of spatial and temporal discretizations has been shown to be non-dissipative~\citep{Seigen-1}. More details can be found in~\cite{Seigen-paper}. Seigen, which is built on top of Firedrake, is part of OPESCI, an ecosystem of software for seismic imaging based on automated code generation~\citep{opesci-project}.
Seigen has a set of test cases, which differ in various aspects, such as the initial conditions of the system and the propagation of waves. However, they are all based upon the same seismological model; from a computational viewpoint, this means that, in a time step, the same sequence of loops is executed. In the following, we focus on the {\tt explosive$\_$source} test case (see the work by \cite{Garvin1956} for background details).
\subsection{Implementation and Validation}
In a time loop iteration, eight linear systems need to be solved, four from {\tt velocity} and four from {\tt stress}. Each solve consists of three macro-steps: assembling a global matrix $A$; assembling a global vector $b$; computing $x$ in the system $Ax = b$. There are two global ``mass'' matrices, one for {\tt velocity} and one for {\tt stress}. Both matrices are time invariant, so they are assembled before entering the time loop, and block-diagonal, as a consequence of the spatial discretization employed (a block belongs to an element in the mesh). The inverse of a block-diagonal matrix is again block-diagonal and is determined by computing the inverse of each block. The solution of the linear system $Ax = b$, expressible as $x = A^{-1}b$, can therefore be evaluated by looping over the mesh and computing a ``small'' matrix-vector product in each element, where the matrix is a block in $A^{-1}$. Assembling the global vectors boils down to executing a set of loops over mesh entities, particularly over cells, interior facets, and exterior facets. Overall, twenty-five loops are executed in a time loop iteration. Thanks to the hierarchy of ``software caches'' employed by Firedrake, the translation from mathematical syntax into loops is only performed once.
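The element-wise solve can be sketched with NumPy; the block size and element count below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n_elements, bs = 4, 3  # illustrative sizes

# One (diagonally dominant, hence invertible) block per mesh element;
# the blocks are inverted once, before entering the time loop.
blocks = [rng.random((bs, bs)) + bs * np.eye(bs) for _ in range(n_elements)]
inv_blocks = [np.linalg.inv(B) for B in blocks]

b = rng.random(n_elements * bs)
x = np.empty_like(b)
for e in range(n_elements):            # loop over the mesh
    s = slice(e * bs, (e + 1) * bs)
    x[s] = inv_blocks[e] @ b[s]        # small matvec per element

# Assemble the full block-diagonal system A for a correctness check.
A = np.zeros((n_elements * bs, n_elements * bs))
for e, B in enumerate(blocks):
    s = slice(e * bs, (e + 1) * bs)
    A[s, s] = B
```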
Introducing sparse tiling into Seigen was relatively straightforward. Three steps were required: (i) embedding the time stepping loop in a {\em loop$\_$chain} context (see Section~\ref{sec:tiling:lcinterface}), (ii) propagating user input relevant for performance tuning, (iii) creating a set of {\em fusion schemes}. A fusion scheme establishes which sub-sequences of loops within a {\em loop$\_$chain} will be fused and the respective seed tile sizes. If no fusion schemes were specified, all of the twenty-five loops would be fused using a default tile size. As we shall see, operating with a set of small loop chains and heterogeneous tile sizes is often more effective than fusing long sequences of loops.
\begin{figure}[t]
\centering
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{figures/expl-source01.png}
\caption{Right after the explosion.}
\end{subfigure}%
~~~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{figures/expl-source02.png}
\caption{Wave propagation.}
\end{subfigure}%
~~~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{figures/expl-source04.png}
\caption{Final snapshot.}
\end{subfigure}%
\caption{Three phases of the {\tt explosive$\_$source} test case in Seigen, following an explosion at a point source.}
\label{fig:seigen-output}
\end{figure}
Seigen has several mechanisms to validate the correctness of the seismological model and the test cases. The numerical results of all code versions (with and without tiling) were checked and compared. Paraview was also used to verify the simulation output. Snapshots of the simulation output are displayed in Figure~\ref{fig:seigen-output}.
\subsection{Computational Analysis of the Loops}
We here discuss computational aspects of the twenty-five fusible loops. The following considerations derive from an analytical study of the data movement in the loop chain, extensive profiling through the Intel VTune Amplifier tool~\citep{vtune}, and roofline models (available in~\cite{Seigen-paper}).
Our initial hypothesis was that Seigen would benefit from sparse tiling. Not only does data reuse arise within single loops (e.g., by accessing vertex coordinates from adjacent cells), but also across consecutive loops, through indirect data dependencies. This seemed to make Seigen a natural fit for sparse tiling. The eight ``solver'' loops perform matrix-vector multiplications in each mesh element. It is well established that linear algebra operations of this kind are memory-bound. Four of these loops arise from {\tt velocity}, the others from {\tt stress}. There is significant data reuse amongst the four {\tt velocity} loops and amongst the four {\tt stress} loops, since the same blocks in the global inverse matrices are accessed. We therefore hypothesized performance gains if these loops were fused through sparse tiling.
We also observed that because of the particular mesh employed, the exterior facet loops, which implement the boundary conditions of the variational problems, have negligible cost. However, the cells and facets loops do have significant cost, and data reuse across them arises. Six loops iterate over the interior mesh facets to evaluate facet integrals, which ensure the propagation of information between adjacent cells in discontinuous-Galerkin methods. The operational intensity of these loops is much lower than that of cell loops, and memory-boundedness is generally expected. Consecutive facet and cell integral loops share fields, which creates cross-loop data reuse opportunities, thus strengthening the hypothesis about the potential of sparse tiling in Seigen.
All computational kernels generated in Seigen are optimized through COFFEE~\citep{Luporini-coffee}, a system used in Firedrake that, in essence, (i) minimizes the operation count by restructuring expressions and loop nests, and (ii) maximizes auto-vectorization opportunities by applying transformations such as array padding and by enforcing data alignment.
\subsection{Setup and Reproducibility}
\label{sec:performance:setup}
There are two parameters that we can vary in {\tt explosive$\_$source}: the polynomial order of the method, $q$, and the input mesh. We test the range $q \in \lbrace 1, 2, 3, 4 \rbrace$. To test higher polynomial orders, changes to both the spatial and temporal discretizations would be necessary. For the spatial discretization, the most obvious choice would be tensor product function spaces, which at the moment of writing is still unavailable in Firedrake. We use as input a two-dimensional rectangular domain of fixed size 300$\times$150 tessellated with unstructured triangles ({\tt explosive$\_$source} only supports two-dimensional domains); to increase the number of elements in the domain, thus enabling weak scaling, we vary the mesh spacing $h$.
The generality of the sparse tiling algorithms, the flexibility of the {\em loop$\_$chain} interface, and the {\em S-depth} mechanism made it possible to experiment with a variety of fusion schemes without changes to the source code. Five fusion schemes were devised, based on the following criteria: (i) amount of data reuse, (ii) amount of redundant computation over the boundary region, (iii) memory footprint (the larger, the smaller the tile size to fit in cache). The fusion schemes are summarized in Table~\ref{table:seigen-fusion-schemes}. The full specification, along with the seed tile size for each sub loop chain, is available at~\cite{fabio_luporini_2017_840000}.
The experimentation was conducted on two platforms, whose specification is reported in Table~\ref{table:seigen-setup}: Erebus (the ``development machine'') and Helen (a Broadwell-based system in the Helen cluster~\citep{cx2-helen}). Exclusive access to both platforms was obtained for the runs, and the machines were otherwise idle. Support for shared-memory parallelism is discontinued in Firedrake, so only distributed-memory parallelism with 1 MPI process per physical core was tested. The MPI processes were pinned to cores. The hyperthreading technology was tried, but found to be generally non-profitable. The code used for running the experiments was archived with the Zenodo data repository service: Firedrake~\citep{lawrence_mitchell_2017_836680}, PETSc~\citep{barry_smith_2017_836685}, PETSc4py~\citep{lisandro_dalcin_2017_836684}, FIAT~\citep{miklos_homolya_2017_836679}, UFL~\citep{martin_sandve_alnaes_2017_836683}, TSFC~\citep{miklos_homolya_2017_836677}, PyOP2~\citep{florian_rathgeber_2017_836688}, COFFEE~\citep{fabio_luporini_2017_836678}, SLOPE~\citep{fabio_luporini_2017_836738}, and Seigen~\citep{fabio_luporini_2017_840000}.
\begin{table}[htpb]
\tiny
\centering
\begin{minipage}[t]{.60\textwidth}
{
\begin{tabulary}{1.2\textwidth}{M{1.75cm} | M{2.65cm} | M{3.0cm} N}
\hline
System & Erebus & Helen \\
\hlineB{4}
Node & \shortstack{1x4-core\\Intel I7-2600 3.4GHz} & \shortstack{2x14-core\\Intel Xeon E5-2680 v4 2.40GHz} & \\[6pt] \hline
DRAM & 16 GB & 128 GB (node) &\\[6pt] \hline
Cache hierarchy & L1=32KB, L2=256KB, L3=8MB & L1=32KB, L2=256KB, L3/socket=35MB & \\[6pt] \hline
Compilers & Intel {\tt icc} 16.0.2 & Intel {\tt icc} 16.0.3 &\\[6pt] \hline
Compiler flags & {\tt -O3 -xHost -ip} & {\tt -O3 -xHost -ip} & \\[6pt] \hline
MPI version & Open MPI 1.6.5 & SGI MPT 2.14 &\\ \hline
\end{tabulary}
}
\caption{Systems specification.}
\label{table:seigen-setup}
\end{minipage}\hfill
\begin{minipage}[t]{.38\textwidth}
\makebox[\textwidth][c]
{
\begin{tabulary}{1.0\columnwidth}{M{0.5cm} | M{1.1cm} | M{1.6cm} | M{0.65cm} N}
\hline
Fusion scheme & Number of loop chains & Criterion & {\em S-depth} \\
\hlineB{4}
{\tt fs1} & 3 & Fuse costly cells and facets loops & 2 & \\[5pt] \hline
{\tt fs2} & 8 & More aggressive than {\tt fs1} & 2 & \\[5pt] \hline
{\tt fs3} & 6 & {\tt fs2}, include all solver loops & 2 & \\[5pt] \hline
{\tt fs4} & 3 & More aggressive than {\tt fs3} & 3 & \\[5pt] \hline
{\tt fs5} & 2 & {\tt velocity}, {\tt stress} & 4 & \\[5pt] \hline
\end{tabulary}
}
\caption{Fusion schemes summary.}
\label{table:seigen-fusion-schemes}
\end{minipage}\hfill
\end{table}
In the following, for each experiment, we collect three measurements.
\begin{description}
\item[Overall completion time -- OT] Used to compute the maximum application speed-up when sparse tiling is applied.
\item[Average compute time -- ACT] Sparse tiling impacts the kernel execution time by increasing data locality. Communication is also influenced, especially in aggressive fusion schemes: the rounds of communication decrease, while the data volume exchanged may increase. ACT isolates the gain due to increased data locality from (i) the communication cost and (ii) any other action performed in Firedrake (executing Python code) between the invocation of kernels. This value is averaged across the processes.
\item[Average compute and communication time -- ACCT] As opposed to ACT, the communication cost is also included in ACCT. By comparing ACCT and ACT, the communication overhead can be derived.
\end{description}
As we shall see, all of these metrics will be essential for a complete understanding of the sparse tiling performance.
To collect OT, ACT and ACCT, the following configuration was adopted. All experiments were executed with ``warm cache''; that is, with all kernels retrieved directly from the Firedrake's software cache of compiled kernels, so code generation and compilation times are not counted. All of the non-tiled {\tt explosive$\_$source} tests were repeated three times. The minimum times are reported (negligible variance). The cost of global matrix assembly -- an operation that takes place before entering the time loop -- {\it is not} included in OT. Firedrake needs to be extended to assemble block-diagonal matrices directly into vectors (an entry in the vector would represent a matrix block). Currently, this is instead obtained in two steps: first, by assembling into a CSR matrix; then, by explicitly copying the diagonal into a vector (a Python operation). The assembly per se never takes more than 3 seconds, so it was reasonable to exclude this temporary overhead from our timing. The inspection cost due to sparse tiling {\it is} included in OT, and its overhead will be discussed appropriately in a later section. Extra costs were minimized: no check-pointing, only two I/O sessions (at the beginning and at the end of the computation), and minimal logging. The time loop has a fixed duration, while the time step depends on the mesh spacing $h$ to satisfy the Courant-Friedrichs-Lewy necessary condition (i.e., CFL limit) for the numerical convergence. In essence, finer meshes require proportionately smaller time steps to ensure convergence.
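The CFL-driven relation between $h$ and the number of time steps can be illustrated with made-up constants:

```python
# Illustrative only: for an explicit scheme the stable time step
# scales linearly with the mesh spacing, so halving h doubles the
# number of steps needed to cover a fixed simulated duration T.
def time_step(h, wave_speed=1.0, courant=0.5):
    return courant * h / wave_speed

T = 2.5
steps = {h: round(T / time_step(h)) for h in (1.0, 0.5)}
```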
We now proceed with discussing the achieved results. Below, a generic instance of {\tt explosive$\_$source} optimized with sparse tiling will be referred to as a ``tiled version'', otherwise the term ``original version'' will be used.
\subsection{Single-node Experimentation}
\label{sec:performance:singlenode}
Hundreds of single-node experiments were executed on Erebus, which was easily accessible in exclusive mode and much quicker to use for VTune profiling. The rationale of these experiments was to assess how sparse tiling impacts the application performance by improving data locality.
We only show parallel runs at maximum capacity (1 MPI process per physical core); the benefits of sparse tiling in sequential runs tend to be negligible (if any), because (i) the memory access latency is only marginally affected when a large proportion of bandwidth is available to a single process, (ii) hardware prefetching is impaired by the small iteration space of tiles, (iii) translation lookaside buffer (TLB) misses are more frequent due to tile expansion. Points (ii) and (iii) will be elaborated upon.
Table~\ref{table:seigen-speedups} reports execution times and speed-ups, indicated with the symbol $\Omega$, of the tiled version over the original version for the three metrics OT, ACT and ACCT. We report the best speed-up obtained after varying a number of parameters (tile size, fusion scheme and other optimizations discussed below).
\begin{table}
\tiny
\makebox[\textwidth][c]
{
\begin{tabulary}{1.0\columnwidth}{c V{4} c V{4} M{2.0cm} | M{2.0cm} V{4} M{1.2cm} | M{1.2cm} | M{1.2cm} N}
\hline
$h$ & $q$ & OT original (s) & OT tiled (s) & $\Omega^{OT}$ & $\Omega^{ACT}$ & $\Omega^{ACCT}$ \\
\hlineB{4}
\multirow{4}{*}{$1.0$} & 1 & 687 & 596 & 1.15 & 1.17 & 1.16 \\
& 2 & 1453 & 1200 & 1.21 & 1.25 & 1.25 \\
& 3 & 3570 & 2847 & 1.25 & 1.28 & 1.27 \\
& 4 & 7057 & 5827 & 1.21 & 1.22 & 1.22 \\
\hline
\multirow{4}{*}{$1.2$} & 1 & 419 & 365 & 1.15 & 1.17 & 1.16 \\
& 2 & 870 & 715 & 1.22 & 1.26 & 1.26 \\
& 3 & 1937 & 1549 & 1.25 & 1.28 & 1.27\\
& 4 & 4110 & 3390 & 1.21 & 1.23 & 1.22 \\
\end{tabulary}
}
\caption{OT, ACT and ACCT on Erebus, with 1 MPI process per core.}
\label{table:seigen-speedups}
\end{table}
The parameters that we empirically varied to obtain these results were: (i) the fusion scheme, {\tt fs$X$, $X \in \lbrace 1, 2, 3, 4, 5\rbrace$}, see Table~\ref{table:seigen-fusion-schemes}; (ii) the seed tile size -- for each {\tt fs} and $q$, up to four different values, chosen to maximize the likelihood of fitting in the L2 or L3 cache, were tried\footnote{A less extensive experimentation with ``more adverse'' tile sizes showed that: (i) a very small value causes dramatic slow-downs (up to 8$\times$ slower than the original versions); (ii) larger values cause proportionately greater performance drops.}.
Further, a smaller set of experiments varying (i) the type of indirection maps (local or global, see Section~\ref{sec:algorithm}) and (ii) the tile shape (through different mesh partitioning algorithms), as well as introducing (iii) software prefetching and (iv) an extended boundary region (to minimize redundant computation), led to the conclusion that these optimizations may either improve or worsen the execution time, in ways that are too difficult to predict beforehand. We therefore decided to exclude these parameters from the full search space. This simplified the analysis that follows and also allowed the whole test suite to run in less than five days. The interested reader is invited to refer to the Supplementary Materials attached to this article for more information.
In Figure~\ref{fig:st-erebus-expl}, we summarize the results of the search space exploration. A plot shows the ACT speed-ups achieved by multiple tiled versions over the non-tiled version for given $q$ and $h$. In these single-node experiments, the ACCT trend was basically identical (as one can infer from Table~\ref{table:seigen-speedups}), with variations in speed-up smaller than 1$\%$.
\begin{figure}[t]
\centering
\begin{subfigure}[b]{0.25\textwidth}
\includegraphics[width=\textwidth]{figures/single-node/erebus_plexmesh_h10_1}
\end{subfigure}%
~~
\begin{subfigure}[b]{0.25\textwidth}
\includegraphics[width=\textwidth]{figures/single-node/erebus_plexmesh_h10_2}
\end{subfigure}%
~~
\begin{subfigure}[b]{0.25\textwidth}
\includegraphics[width=\textwidth]{figures/single-node/erebus_plexmesh_h10_3}
\end{subfigure}%
~~
\begin{subfigure}[b]{0.25\textwidth}
\includegraphics[width=\textwidth]{figures/single-node/erebus_plexmesh_h10_4}
\end{subfigure}%
~\\
~\\
\begin{subfigure}[b]{0.25\textwidth}
\includegraphics[width=\textwidth]{figures/single-node/erebus_plexmesh_h12_1}
\end{subfigure}%
~~
\begin{subfigure}[b]{0.25\textwidth}
\includegraphics[width=\textwidth]{figures/single-node/erebus_plexmesh_h12_2}
\end{subfigure}%
~~
\begin{subfigure}[b]{0.25\textwidth}
\includegraphics[width=\textwidth]{figures/single-node/erebus_plexmesh_h12_3}
\end{subfigure}%
~~
\begin{subfigure}[b]{0.25\textwidth}
\includegraphics[width=\textwidth]{figures/single-node/erebus_plexmesh_h12_4}
\end{subfigure}%
\caption{Search space exploration on Erebus, with $h \in \lbrace 1.0, 1.2 \rbrace$ and $q \in \lbrace 1, 2, 3 ,4\rbrace$. Each plot shows the average compute time (ACT) speed-up achieved by multiple tiled versions over the original (non-tiled) version. The seed tile size of a loop chain in an {\tt fs}, in terms of seed loop iterations, is the product of the ``tile size factor'' (x-axis) and a pre-established multiplier (an integer in the range $[1, 4]$; full list available at~\cite{fabio_luporini_2017_840000}).}
\label{fig:st-erebus-expl}
\end{figure}
PyOP2 was enhanced with a \textit{loop chain analyzer} (LCA) capable of estimating the best- and worst-case tile memory footprint, as well as the percentage of data reuse ideally achievable\footnote{It is part of our future plans to integrate this tool with SLOPE, where the effects of tile expansion can be taken into account to provide better estimates.}. We use this tool, as well as Intel VTune, to explain the results shown in Figure~\ref{fig:st-erebus-expl}. We make the following observations.
\begin{itemize}
\item {\tt fs}, unsurprisingly, is the parameter with the largest impact on the ACT. By influencing the fraction of data reuse convertible into data locality, the amount of redundant computation and the data volume exchanged, fusion schemes play a fundamental role in sparse tiling. This makes automation much more than a desired feature: without any changes to the source code, multiple sparse tiling strategies could be studied and tuned. Automation is one of our major contributions, and this performance exploration justifies the implementation effort.
\item There is a non-trivial relationship between ACT, $q$ and {\tt fs}. The aggressive fusion schemes are more effective with high $q$ -- that is, with larger memory footprints -- while they tend to be less efficient, or even deleterious, when $q$ is low. The extreme case is {\tt fs5}, which fuses two long sequences of loops (twelve and thirteen loops each). In Figure~\ref{fig:st-erebus-expl} (Erebus), {\tt fs5} is never a winning choice, although the difference between {\tt fs3}/{\tt fs4} and {\tt fs5} decreases as $q$ grows. If this trend continued with $q > 4$, then the gain from sparse tiling could become even larger.
\item A non-aggressive scheme fuses only a few small subsets of loops. As discussed in later sections, these fusion schemes, despite affording larger tile sizes than the more aggressive ones (due to the smaller memory footprint), suffer from limited cross-loop data reuse. For {\tt fs1}, LCA determines that the percentage of reusable data in the three fused loop chains decreases from 25$\%$ ($q=1$) to 13$\%$ ($q=4$). The drop is exacerbated by the fact that no reuse can be exploited for the maps. Not only are these ideal values, but also a significant number of loops are left outside of loop chains. The combination of these factors explains the lack of substantial speed-ups. With {\tt fs2}, a larger proportion of loops are fused and the amount of shared data increases. The peak ideal reuse in a loop chain reaches 54$\%$, which translates into better ACTs. A similar growth in data reuse can be appreciated in more aggressive fusion schemes, with a peak of 61$\%$ in one of the {\tt fs5}'s loop chains. Nevertheless, the performance of {\tt fs5} is usually worse than {\tt fs4}. As we shall clarify in Section~\ref{sec:performance:limit}, this is mainly due to the excessive memory footprint, which in turn leads to very small tiles. Speculatively, we tried running a sixth fusion scheme: a single loop chain including all of the 25 loops in a time iteration. In spite of an ideal data reuse of about 70$\%$, the ACT was always significantly higher than all other schemes.
\item Figure~\ref{fig:st-erebus-expl} shows the ACT for a ``good selection'' of tile size candidates. Our approach was as follows. We took a very small set of problem instances and we tried a large range of seed tile sizes. Very small tile sizes caused dramatic slow-downs, mostly because of ineffective hardware prefetching and TLB misses. Tile sizes larger than a certain fraction of L3 cache (usually slightly larger than what a core should ideally own) also led to increasingly higher ACTs. If we consider $q=4$ in Figure~\ref{fig:st-erebus-expl}, we observe that the ACT of {\tt fs2}, {\tt fs3}, and {\tt fs4} grows when the initial number of iterations in a tile is as large as 70. Here, LCA shows that the tile memory footprint is, in multiple loop chains, higher than 3MB, with a peak of 6MB in {\tt fs3}. This exceeds the proportion of L3 cache that a process owns (on average), which explains the performance drop.
\end{itemize}
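The footprint reasoning in the last point can be made concrete with a rough estimate in the spirit of LCA; the per-iteration data volumes below are invented for illustration:

```python
# Worst-case bytes touched by one tile: iterations times bytes per
# iteration for each accessed dataset, ignoring cross-loop reuse and
# tile expansion.
def tile_footprint(tile_iterations, datasets):
    """datasets: iterable of (values_per_iteration, bytes_per_value)."""
    return tile_iterations * sum(n * b for n, b in datasets)

# e.g. a 70-iteration tile touching 6 coordinate doubles and two
# 12-double fields per iteration:
fp = tile_footprint(70, [(6, 8), (12, 8), (12, 8)])
```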
\subsection{Multi-node Experimentation}
\label{sec:performance:multinode}
Weak-scaling experiments were carried out on thirty-two nodes split over two racks in the Helen cluster at Imperial College London~\citep{cx2-helen,helen-top}. The node architecture as well as the employed software stack are summarized in Table~\ref{table:seigen-setup}. The rationale of these experiments was to assess whether the change in communication pattern and amount of redundant computation caused by sparse tiling could affect the run-time performance.
\begin{figure}[htpb]
\centering
\includegraphics[scale=0.3]{figures/multi-node/helen_scalability}
\captionof{figure}{Weak scaling performance of Seigen's {\tt explosive$\_$source} on the Helen cluster. The ACCT speed-up is relative to a single process. Results for $q \in \lbrace 1, 2, 3 ,4\rbrace$ are displayed. The simulations ran for 1000 timesteps.}
\label{fig:st-helen}
\end{figure}
For each $q$ in the usual range $[1-4]$, {\tt fs3} generally resulted in the best performance improvements, due to its trade-off between gained data locality and restrained redundant computation (whose effect obviously worsens as $q$ grows). Figure~\ref{fig:st-helen} summarizes the ACCT speed-ups achieved by the best tiled version (i.e., the one found by empirically varying the tile size, with the same tile size factors as in Figure~\ref{fig:st-erebus-expl}) over the original version. The weak scaling trend is remarkable, with only a small drop in the case $q=1$ when switching from one rack (448 cores) to two racks (896 cores), which disappears as soon as the per-process workload becomes significant. For instance, with $q = 2$, each process already computes over more than 150k degrees of freedom for the velocity and stress fields. The peak speed-up over the original version, 1.28$\times$, was obtained in the test case $q = 3$ when running with 448 processes. The performance achieved was 1.84 TFLOPs/s (158 GFLOPs required at each time step, for a total of 1000 time steps and an ACCT of 86 seconds); this corresponds to roughly 15$\%$ of the theoretical machine peak (assuming AVX base frequency; the architecture ideally performs 16 double-precision FLOPs per cycle).
\subsection{Negligible Inspection Overhead}
As explained in Section~\ref{sec:performance:setup}, the inspection cost was included in OT. In this section, we quantify this overhead in a representative problem instance. In all other problem instances, the overhead was either significantly smaller than or essentially identical (for reasons discussed below) to that reported here. We consider {\tt explosive$\_$source} on Erebus with $h=1.0$, $q=4$, and {\tt fs5}. With this configuration, the time step was $\Delta t=481 \cdot 10^{-6}$ (we recall from Section~\ref{sec:performance:setup} that $\Delta t$ is a function of $h$). Given the simulation duration $T=2.5$, in this test case 5198 time steps were performed. A time step took on average 1.15 seconds. In each time step, twenty-five loops, fused as specified by {\tt fs5}, are executed. We know that in {\tt fs5} there are two sub loop chains, which respectively consist of thirteen and twelve loops. To inspect these two loop chains, 1.4 and 1.3 seconds were needed (average across the four MPI ranks, with negligible variance). Roughly 98$\%$ of the inspection time was due to the projection and tiling functions, while only 0.2$\%$ was spent on the tile initialization phase (see Section~\ref{sec:algorithm}). These proportions are consistent across other fusion schemes and test cases. After 200 time steps (less than 4$\%$ of the total) the inspection overhead was already close to 1$\%$. Consequently, the inspection cost, in this test case, was eventually negligible. The inspection cost increases with the number of fused loops, which motivates the choice of {\tt fs5} for this analysis.
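The amortization argument can be made concrete with the figures above (a small sketch; the per-step and inspection times are those measured in this test case):

```python
# Amortization of the inspection overhead for the fs5 test case above.
inspection = 1.4 + 1.3   # seconds to inspect the two sub loop chains
step_time = 1.15         # average seconds per time step
total_steps = 5198

def overhead_after(n_steps):
    """Inspection cost as a fraction of the execution time of n_steps."""
    return inspection / (n_steps * step_time)

print(f"{overhead_after(200):.1%}")          # close to 1% after 200 steps
print(f"{overhead_after(total_steps):.2%}")  # negligible at the end
```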
\subsection{On the Main Performance Limiting Factor}
\label{sec:performance:limit}
The tile structure, namely its shape and size, is the key factor affecting sparse tiling in Seigen.
The seed loop partitioning and the mesh numbering determine the tile structure. The simplest way of creating tiles consists of ``chunking'' the seed iteration space every $\mathrm{ts}$ iterations, with $\mathrm{ts}$ being the initial tile size (see Section~\ref{sec:algorithm}). Hence, the {\tt chunk} partitioning inherits the original mesh numbering. In Firedrake, and therefore in Seigen, meshes are renumbered during initialization applying the reverse Cuthill-McKee (RCM) algorithm. Using {\tt chunk} partitioning on top of an RCM-renumbered mesh has the effect of producing thin, rectangular tiles, as displayed in Figure~\ref{fig:tile-shapes:chunk}. This dramatically affects tile expansion, as a large proportion of elements will lie on the tile border. There are potential solutions to this problem. The most promising would consist of using a Hilbert curve, rather than RCM, to renumber the mesh. This would lead to more regular polygonal tiles when applying {\tt chunk} partitioning. Figures~\ref{fig:tile-shapes:hilbert} and~\ref{fig:tile-shapes:hilbert-zoomed} show the tile structure that we would ideally want in Seigen, from a Hilbert-renumbered mesh produced outside of Firedrake. As later discussed, introducing support for Hilbert curves in Firedrake is part of our future work.
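For concreteness, {\tt chunk} partitioning amounts to the following (a minimal sketch; the function and variable names are ours, not SLOPE's):

```python
def chunk_partition(n_iterations, ts):
    """Assign a tile id to each iteration of the seed loop by 'chunking'
    the iteration space every ts iterations. Tiles thus inherit the
    underlying mesh numbering (RCM in Firedrake), which is what produces
    the thin, rectangular tiles discussed above."""
    return [i // ts for i in range(n_iterations)]

tiles = chunk_partition(10, 4)   # [0, 0, 0, 0, 1, 1, 1, 1, 2, 2]
```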
The memory footprint of a tile grows quite rapidly with the number of fused loops. In particular, the matrices accessed in the {\tt velocity} and {\tt stress} solver loops have considerable size. The larger the memory footprint, the smaller $\mathrm{ts}$ must be for the tile to fit in a given level of cache. Allocating small tiles unfortunately has multiple implications. First, the proportion of iterations lying on the border grows, which worsens the tile expansion phenomenon discussed above, thus harming data locality. Secondly, a small $\mathrm{ts}$ impairs hardware prefetching, since the virtual address streams become more irregular. Finally, using small tiles implies that a proportionately larger number of tiles is needed to ``cover'' the iteration space; in the case of shared-memory parallelism, this increases the probability of coloring conflicts, which result in higher inspection costs.
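The relationship between footprint and tile size can be sketched as follows (the cache size and per-iteration footprints are hypothetical numbers, chosen only to illustrate the trend):

```python
def max_tile_size(cache_bytes, bytes_per_iteration):
    """Largest initial tile size ts whose working set fits in a given
    cache level (a crude estimate: ignores associativity and sharing)."""
    return cache_bytes // bytes_per_iteration

l2 = 256 * 1024  # hypothetical 256 kB L2 cache
print(max_tile_size(l2, 2048))   # few fused loops, small per-iteration footprint
print(max_tile_size(l2, 8192))   # many fused loops -> much smaller ts
```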
As reiterated in the previous sections, the maximum length of a loop chain is dictated by the extent of the boundary region. Unfortunately, a long loop chain has two undesirable effects. First, the redundant computation overhead tends to be non-negligible if some of the fused loops are compute-bound. Sometimes, the overhead could be even larger than the gain due to increased data locality. Second, the size of an S-level (a ``strip'' of boundary elements) grows with the depth of the boundary region, as outer levels include more elements than inner levels. This increases not only the amount of redundant computation, but also the volume of data to be communicated. Overall, this suggests that multiple, short loop chains may guarantee the best performance improvement, as indicated by the results in Sections~\ref{sec:performance:singlenode} and~\ref{sec:performance:multinode}.
\begin{figure}[t]
\centering
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{figures/chunk}
\caption{{\tt chunk} partitioning in Seigen.}
\label{fig:tile-shapes:chunk}
\end{subfigure}%
~~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{figures/sfc}
\caption{Ideal {\tt Hilbert} partitioning.}
\label{fig:tile-shapes:hilbert}
\end{subfigure}%
~~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{figures/sfc-zoomed}
\caption{Zoomed in {\tt Hilbert} partitioning.}
\label{fig:tile-shapes:hilbert-zoomed}
\end{subfigure}%
\caption{Representation of seed iteration space partitioning strategies via SLOPE and Paraview.}
\label{fig:tile-shapes}
\end{figure}
\section{Related Work}
\label{sec:related-work}
The data dependence analysis that we have developed in this article is based on the \textit{loop chain} abstraction, which was originally presented in~\cite{ST-KriegerHIPS2013}. This abstraction is sufficiently general to capture data dependencies in programs structured as arbitrary sequences of loops, and in particular to create inspector/executor schemes for many unstructured mesh applications. Inspector/executor strategies were first formalized by~\cite{ST-Saltz91}. They have been used to exploit data reuse and to expose shared-memory parallelism in several studies~\citep{ST-dimeEtna00,ST-StroutLCPC2002,ST-Demmel08,ST-KriegerIAAA2012}.
Sparse tiling is a technique based upon the inspector/executor strategy. The term was coined by~\cite{ST-StroutLCPC2002,ST-StroutIJHPCA} in the context of the Gauss-Seidel algorithm and also used in~\cite{ST-StroutPLDI03} for the Moldyn benchmark. However, the technique was initially proposed by~\cite{ST-dimeEtna00} to parallelize computations over unstructured meshes, under the name of \textit{unstructured cache blocking}. In this work, the mesh was initially partitioned; the partitioning represented the tiling in the first sweep over the mesh. Tiles would then shrink by one layer of vertices for each iteration of the loop. This shrinking represented what parts of the mesh could be accessed in later iterations of the outer loop without communicating with the processes executing other tiles. The unstructured cache blocking technique also needed to execute a serial clean-up tile at the end of the computation.~\cite{ST-Adams99c} also developed an algorithm very similar to sparse tiling, to parallelize Gauss-Seidel computations. The main difference between~\cite{ST-StroutLCPC2002,ST-StroutIJHPCA} and~\cite{ST-dimeEtna00} was that in the former work the tiles fully covered the iteration space, so a sequential clean-up phase at the end could be avoided. All these approaches were either specific to individual benchmarks or incapable of scheduling across heterogeneous loops (e.g., one over cells and another over degrees of freedom). These limitations were addressed in~\cite{st-paper}.
The automated code generation technique presented in \cite{ST-OhioStateMPICodeGen} examines the data affinity among loops and performs partitioning with the goal of minimizing inter-process communication, while maintaining load balancing. This technique supports unstructured mesh applications (being based on an inspector/executor strategy) and targets distributed memory systems, although it does not exploit the loop chain abstraction and does not introduce any sort of loop reordering transformation.
Automated code generation techniques based on polyhedral compilers have been applied to structured grid benchmarks or proxy applications~\cite{pluto}. However, there has been very little effort in providing evidence that these tools can be effective in real-world applications. Time-loop diamond tiling was applied in~\cite{cohen-timetiling} to a proxy application, but experimentation was limited to shared-memory parallelism. In~\cite{reguly-ops-tiling}, instead, an approach based on raising the level of abstraction, similar to the one presented in this paper, is described and evaluated. The experimentation is conducted using realistic stencil codes ported to the OPS library. The main difference with respect to our work is the focus on structured grids (i.e., different types of numerical methods are targeted).
In structured codes, multiple layers of halo, or ``ghost'' elements, are often used to reduce communication~\citep{Bassetti98}. Overlapped tiling exploits the very same idea: trading communication for redundant computation along the boundary~\citep{Zhou12}. Several works tackle overlapped tiling within single regular loop nests (mostly stencil-based computations), for example~\cite{Meng09,Krishnamoorthy07,Chen02}. Techniques known as ``communication avoiding''~\citep{ST-Demmel08,ST-commAvoidingSparse2009} also fall in this category. To the best of our knowledge, overlapped tiling for unstructured mesh applications has only been studied analytically, by~\cite{gihan-overlapped}. Further, we are not aware of any prior techniques for automation.
\section{Future Work}
Using a Hilbert curve numbering will lead to dramatically better tile shapes, thus mitigating the performance penalties due to tile expansion, TLB misses and hardware prefetching described in Section~\ref{sec:performance}. This extension is at the top of our future work priorities.
Shared-memory parallelism was not as carefully tested as distributed-memory parallelism. First of all, we would like to replace the current OpenMP implementation in SLOPE with the MPI Shared Memory (SHM) model introduced in MPI-3. Not only does a unified programming model provide significant benefits in terms of maintainability and complexity, but the performance may also be greater as suggested by recent developments in the PETSc community. Secondly, some extra work would be required for a fair comparison of this new hybrid MPI+MPI programming model with and without sparse tiling.
The experimentation was carried out on a number of ``conventional'' Intel Xeon architectures; we aim to repeat it on Intel's Knights Landing soon.
Finally, a cost model for automatic derivation of fusion schemes and tile sizes is still missing.
\section{Conclusions}
Sparse tiling aims to turn the data reuse in a sequence of loops into data locality. In this article, three main problems have been addressed: the specialization of sparse tiling to unstructured mesh applications via a revisited loop chain abstraction, automation via DSLs, and effective support for shared- and distributed-memory parallelism. The major strength of this work lies in the fact that all algorithmic and technological contributions presented derive from an in-depth study of realistic application needs. The performance issues we found through Seigen would never have been exposed by a set of simplistic benchmarks, as often used in the literature. Further experimentation will be necessary once 3D domains and high-order discretizations are supported by Seigen. In essence, the performance experimentation shows systematic speed-ups, in the range 1.10$\times$-1.30$\times$. This is hopefully improvable by switching to Hilbert curve numberings and by exploiting shared memory through a suitable paradigm. Finally, our opinion is that sparse tiling is an ``extreme'' optimization: at least for unstructured mesh applications, it is unlikely to lead to speed-ups of orders of magnitude. However, through automation via DSLs, and with suitable optimization and tuning, it may still play a key role in improving the performance of real-world computations.
\section{Supplementary materials}
\subsection{Summary of the Optimizations Attempted in Seigen}
\label{app:summary-opts}
A number of potential optimizations were attempted when applying sparse tiling to Seigen. The experimentation with these optimizations was, however, inconclusive. Although performance improvements were observed in various problem instances, in a significant number of other cases either a detrimental effect or no benefits were noticed at all. Below we briefly discuss the impact of four different execution strategies: (i) use of global maps, (ii) variation in tile shape, (iii) software prefetching, (iv) use of an extended boundary region to minimize redundant computation.
\begin{description}
\item[Global and local maps] Algorithm~\ref{algo:st-inspector} computes so called local maps to avoid an extra level of indirection in the executor. Although no data reuse is available for the local maps (each fused loop has its own local maps), there might be benefits from improved hardware prefetching and memory latency. We compared the use of global and local maps (i.e., the former are normally constructed by Firedrake and provided in the loop chain specification, the latter are computed by SLOPE), but no definitive conclusion could be drawn, as both performance improvements and deteriorations within a 5$\%$ range were observed.
\item[Tile shape] In Section~\ref{sec:performance:limit} we have explained that a Hilbert-renumbered mesh might substantially improve the tile shape quality. Hilbert curves, however, are not supported in Firedrake yet. An alternative consists of partitioning the seed iteration space with a library like METIS~\citep{METIS} before applying RCM. We experimented and discovered that this approach too was not exempt from side effects. The main cause was increased translation lookaside buffer (TLB) misses, which occur whenever the CPU cannot retrieve the mapping to the physical page corresponding to a given virtual page. Since the page table has a hierarchical structure, handling a TLB miss usually requires multiple accesses to memory. Hence, TLB misses are much more costly than cache misses. Sparse tiling increases the TLB miss/hit ratio because of the fragmented streams of virtual addresses. This is evident (and more pronounced) when the tile size is small, in which case a TLB miss is quite likely to occur when jumping to executing a new loop. This problem is exacerbated by the {\tt metis} partitioning (in contrast to {\tt chunk}), which leads to irregular tile shapes. Here, tile expansion may eventually incorporate iterations living in completely different virtual pages. VTune experimentation with $q=1$ and $q=2$ versions of {\tt explosive$\_$source} showed that {\tt chunk}- and {\tt metis}-based sparse tiling suffer from an increase in TLB misses of roughly 16$\%$ and 35$\%$, respectively. To mitigate this issue, we explored the possibility of using larger virtual pages through Linux's Transparent Huge Pages mechanism, which was enabled to automatically allocate memory in virtual pages of 2MB (instead of the default 4KB) -- as long as the base array addresses were properly aligned. However, no significant differences were observed, and a deeper investigation is still necessary.
\item[Software prefetching] In a loop, there is usually more than a single stream of memory accesses amenable to hardware prefetching (e.g., accesses to the indirection maps; direct accesses to data values; indirect accesses to data values if the mesh has a good numbering). Sparse tiling, unfortunately, impairs hardware prefetching for two reasons: (i) the virtual address streams are considerably shorter; (ii) tile expansion introduces irregularities in these streams. Software prefetching can be used together with hardware prefetching to minimize memory stalls. PyOP2 and SLOPE have been extended to emit intrinsic instructions to prefetch iteration $i$'s maps and data values while executing iteration $i-k$, for a given prefetch distance $k$. No compelling evidence was found that this further transformation could systematically improve the performance.
\item[Extended boundary region] The special non-exec tile $T_{ne}$ (see Sections~\ref{sec:examples} and~\ref{sec:algorithm}) reduces the amount of redundant computation in long loop chains by expanding over boundary tiles. There are two ways of creating $T_{ne}$: either an extra layer of data is added to the boundary region (e.g., see Figure~\ref{fig:st-mpi-init}), or $T_{ne}$ is derived during inspection, by searching for the mesh boundaries. The current implementation only supports the first option. Deriving $T_{ne}$ during inspection would be not only algorithmically complex, but also potentially very expensive.
\end{description}
\begin{printonly}
See the Supplementary Materials in the online version.
\end{printonly}
\bibliographystyle{ACM-Reference-Format}
\label{sec:intro}
The MOSE project (MOdeling ESO Sites) aims at proving the feasibility of the forecast of the optical turbulence OT ($\mbox{$C_N^2 \ $}$ profiles) and of all the main integrated astro-climatic parameters derived from the $\mbox{$C_N^2 \ $}$, i.e. the seeing ($\varepsilon$), the isoplanatic angle ($\mbox{$\theta_{0} \ $}$) and the wavefront coherence time ($\mbox{$\tau_{0} \ $}$), above the two ESO sites of Cerro Paranal (site of the Very Large Telescope - VLT) and Cerro Armazones (site selected for the European Extremely Large Telescope - E-ELT). The OT forecast is a crucial cornerstone for the feasibility of the ELTs: it is fundamental for supporting all kinds of AO facilities in an astronomical observatory and for the flexible scheduling of scientific programs and instrumentation through the Service Mode. The MOSE project aims at overcoming two major limitations normally encountered in studies focused on optical turbulence forecasts with atmospheric models: {\bf (1)} the difficulty in having independent samples of measurements for the model calibration and model validation, so as to estimate if and how the correlation between measurements and predictions decreases as the number of nights used for the calibration increases; {\bf (2)} the difficulty in having a large number of simultaneous measurements taken with different and independent instruments for the OT estimates (in particular vertical profilers). The project employs the non-hydrostatic mesoscale atmospheric model Meso-Nh\cite{lafore98}, coupled with the Astro-Meso-Nh package for the calculation of the optical turbulence\cite{masciadri99a,masciadri99b}, to perform the OT forecasts. An extended data-set of observations (meteorological parameters and optical turbulence) has been considered in the project. In this contribution we focus our attention on the model performances in reconstructing the meteorological parameters near the surface as well as in the free atmosphere.
\section{WHOLE OBSERVATIONS DATA-SET}
\label{sec:obs}
At Paranal, observations of meteorological parameters near the surface come from an automated weather station (AWS) and a 30~m high mast including a number of sensors at different heights. Both instruments are part of the VLT Astronomical Site Monitor \cite{vlt99}. Absolute temperature data are available at 2~m and 30~m above the ground.
Wind speed data are available at 10~m and 30~m above the ground. At Armazones, observations of the meteorological parameters near the surface come from the Site Testing Database \cite{Schoeck2009}, more precisely from an AWS and a 30~m tower (with temperature sensors and sonic anemometers). Data on temperature and wind speed are available at 2~m, 11~m, 20~m and 28~m above the ground. At 2~m (Armazones) temperature measurements from the AWS and the sonic anemometers are both available, but we considered only those from the tower (accuracy of 0.1$^{\circ}$C)\cite{Skidmore2007}. Those from the AWS are not reliable because of some drift effects (T. Travouillon, private communication). Wind speed observations are taken from the AWS (at 2~m) and from the sonic anemometers of the tower (at 11~m, 20~m and 28~m). The outputs are sampled with a temporal frequency of 1 minute.
At Paranal we also have access to 50 radio-soundings (vertical distribution of the meteorological parameters in the $\sim$ 20~km above the ground) launched above this site in the context of an intense site testing campaign for water vapor estimates\cite{chacon2011}, covering 23 nights in 2009: 11 nights in summer and 12 in winter time. In a subsample of these nights (16), a few radio-soundings (two or three) have been launched at different times in the same night.
Observations of the optical turbulence at Paranal, related to the Site Testing Campaign of November-December 2007\cite{Dali2010}, come from a Generalized Scidar, a DIMM and a MASS. The Generalized Scidar measurements have been recently re-calibrated\cite{masciadri2012}. Optical turbulence measurements at Armazones come from a DIMM and a MASS\cite{Schoeck2009} that have been used for the TMT site selection campaign.
\section{MODEL CONFIGURATION}
\label{sec:mod_conf}
The Meso-Nh atmospheric mesoscale model can simulate the temporal evolution of three-dimensional meteorological parameters over a selected finite area of the globe.
The system of hydrodynamic equations is based upon an anelastic formulation allowing for an effective filtering of acoustic waves.
It uses the Gal-Chen and Sommerville\cite{Gal75} coordinates system on the vertical and the C-grid in the formulation of
Arakawa and Mesinger\cite{Arakawa76} for the spatial discretization.
It employs an explicit three-time-level leap-frog temporal scheme with a time filter \cite{Asselin72}.
For this study we use a 1D mixing length proposed by Bougeault and Lacarr\`ere \cite{Bougeault89} with a one-dimensional 1.5 turbulence closure scheme \cite{Cuxart00}. The surface exchanges are computed by the Interaction Soil Biosphere Atmosphere - ISBA scheme\cite{Noilhan89}.
The grid-nesting technique \cite{Stein00}, employed in our study, consists of using different imbricated domains of the Digital Elevation Models (DEM i.e orography) extended on smaller and smaller surfaces, with increasing horizontal
resolution but with the same vertical grid. The standard configuration of this study includes three domains (Fig.\ref{pgd} and Table \ref{tab_orog}) with the lowest horizontal resolution equal to 10~km and the highest horizontal resolution equal to 0.5~km. The orographic DEMs that we used for this project are the GTOPO\footnote{$http://www1.gsi.go.jp/geowww/globalmap-gsi/gtopo30/gtopo30.html$} with an intrinsic horizontal resolutions of 1~km (used for the domains 1 and 2) and the ISTAR\footnote{Bought by ESO at the ISTAR Company - Nice-Sophia Antipolis, France} with an intrinsic horizontal resolution of 0.5~km (used for the domain 3). Along the z-axis we have 62 levels distributed as the following: the first vertical grid point equal to 5~m, a logarithmic stretching of 20~$\%$ up to 3.5~km above the ground, and an almost constant vertical grid size of $\sim$600~m up to 23.8~km. The model has been parallelized using OPEN-MPI-1.4.3 and it run on local workstations and on the HPCF cluster of the European Centre for Medium weather Forecasts (ECMWF). The second solution permitted us to achieve relatively rich statistical estimates of these analysis.
\begin{figure}
\centering
\includegraphics[width=\textwidth]{zs_3mod-crop}
\caption{\label{pgd} Orography of the region of interest as seen by the Meso-Nh model (polar stereographic projection) for all the imbricated domains of the grid-nesting configuration: (a) Domain 1, (b) Domain 2, (c) Domain 3. The black dots report the position of Cerro Paranal and Cerro Armazones. See Table \ref{tab_orog} for specifications of the domains (number of grid-points, domain extension, horizontal resolution). }
\end{figure}
\begin{table*}
\begin{center}
\caption{\label{tab_orog} Orography: grid-nesting configuration conceived as three imbricated domains (1, 2 and 3) with a horizontal resolution from a minimum of 10~km to a maximum of 0.5~km (column 2) extended on smaller and smaller domains (column 4) with a different number of grid points (column 3). See Fig.\ref{pgd}.}
\vskip 0.2cm
\begin{tabular}{|c|c|c|c|}
\hline
Domain & $\Delta$X & Grid Points & Surface \\
& (km) & & (km$\times$km) \\
\hline
Domain 1 & 10& 80$\times$80& 800$\times$800\\
Domain 2 & 2.5 & 64$\times$64& 160$\times$160 \\
Domain 3 & 0.5 & 150$\times$100& 75$\times$50 \\
\hline
\end{tabular}
\end{center}
\end{table*}
\section{COMPARISON MESO-NH VERSUS OBSERVATIONS}
\label{sec:res}
To estimate the statistical model reliability in reconstructing the main meteorological parameters we used the averaged values plus two statistical operators: the bias and the root mean square error (RMSE) defined as:
\begin{equation}
BIAS = \sum\limits_{i = 1}^N \frac{Y_i - X_i}{N}
\label{eq1}
\end{equation}
\begin{equation}
RMSE = \sqrt{\sum\limits_{i = 1}^N \frac{(Y_i - X_i)^2}{N}}
\label{eq2}
\end{equation}
where $X_{i}$ are the individual observations, $Y_{i}$ the individual simulations calculated at the same time, and N is the number of times at which a couple ($X_{i}$, $Y_{i}$) is available and different from zero. At the same time, since we are interested in investigating the model ability in forecasting a parameter and not only in characterizing it, it is important to investigate also the correlation between observations and simulations calculated night by night, and not only in statistical terms. This further detailed analysis has been performed for the potential temperature and the wind speed only, i.e. the two most important parameters on which the optical turbulence depends. In relation to this last issue, in this contribution we present only the results obtained with the temperature, for reasons of space.
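Eq.~(\ref{eq1}) and Eq.~(\ref{eq2}) translate directly into code; the sketch below also applies the filtering described above (a pair is used only if both values are available and different from zero):

```python
import math

def valid_pairs(obs, sim):
    # keep only times at which both values are available and non-zero
    return [(x, y) for x, y in zip(obs, sim)
            if x is not None and y is not None and x != 0 and y != 0]

def bias(obs, sim):
    """Eq. (1): mean of (Y_i - X_i) over the N valid pairs."""
    pairs = valid_pairs(obs, sim)
    return sum(y - x for x, y in pairs) / len(pairs)

def rmse(obs, sim):
    """Eq. (2): root mean square of (Y_i - X_i) over the N valid pairs."""
    pairs = valid_pairs(obs, sim)
    return math.sqrt(sum((y - x) ** 2 for x, y in pairs) / len(pairs))

# bias([10, 12, 0], [11, 12, 5]) -> 0.5  (the zero pair is discarded)
```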
\subsection{Vertical distribution of the meteorological parameters}
\label{sec:vert_dist}
To quantify the statistical model reliability in reconstructing the vertical distribution of the meteorological parameters we compare observations from the radio-soundings ({\bf 50 flights}) with the simulations performed by the model in the range [3~km - 21~km]. The first 400-500 meters above the ground (h=2634~m - Paranal summit) constitute, indeed, a sort of 'gray zone' in which it can be meaningless to retrieve any quantitative estimate from the radio-soundings, for several reasons. Among others: (a) the orographic map of the innermost domain has an intrinsic $\Delta$h$\sim$156~m with respect to the real summit, due to the natural smoothing effect of the model horizontal interpolation of the DEM; (b) the radio-soundings have been launched at around 50~m below the summit, and it would be meaningless to compare the observed and simulated values at the summit ground height because in one case we resolve the friction of the atmospheric flow near the ground and in the other we do not; (c) we have to take into account an uncertainty $\Delta$h of around 50~m in the identification of the zero point (h$_{0}$), probably due to an unlucky procedure performed on the zero point setting during the radio-sounding launches. This uncertainty has basically no effect above a few hundred meters from the ground, because there the parameter values are affected by phenomena evolving at larger spatial scales. We decided therefore to treat data only above roughly 500~m from the summit. Two different treatments of the model outputs have been analyzed: {\bf (1)} we take the vertical profile calculated by the model at the exact instant of the launch time; {\bf (2)} we take the vertical profiles simulated by the model averaged between the time at which the radio-sounding has been launched and one hour later.
We considered that the balloon is an in-situ measurement and that a balloon needs around 1 hour to cover the 20~km from the ground, moving up in the atmosphere with a typical vertical velocity of $\sim$6~m$\cdot$s$^{-1}$. The balloons have been launched close to the synoptic hours (00:00 UT, 06:00 UT, 12:00 UT).
The model outputs as well as the balloon observations are re-interpolated onto a constant vertical grid with a spacing equal to the first vertical grid point (5~m) of the model before being compared. Fig.\ref{average} reports the averaged values calculated on the sample of 50 flights. Fig.\ref{wind_and_co} shows the bias and RMSE calculated on the same sample of flights. Results obtained with approaches (1) and (2) are substantially the same, therefore we report the results of just one case. The bias contains information on systematic model off-sets. The RMSE tells us the maximum dispersion achieved by the model on the whole sample.
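The re-interpolation onto the constant 5~m grid can be sketched as follows (function and variable names are ours):

```python
import numpy as np

def regrid(z, values, dz=5.0, z_top=21000.0):
    """Re-interpolate a vertical profile onto a constant grid with a 5 m
    spacing (the model's first vertical grid point), as done before
    comparing model outputs with the radio-soundings."""
    z_new = np.arange(z[0], z_top + dz, dz)
    return z_new, np.interp(z_new, z, values)

# example: a piecewise-linear temperature profile is reproduced exactly
z = np.array([3000.0, 10000.0, 21000.0])
t = np.array([280.0, 230.0, 210.0])
z5, t5 = regrid(z, t)
```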
Looking at Fig.\ref{average} and Fig.\ref{wind_and_co} we find that the Meso-Nh model shows very good performances in reconstructing the wind direction over the whole 20~km. Between 5~km and 18~km the model reconstruction is statistically almost perfect. Below 5~km (where the orographic effects are most evident) the bias is of the order of $\sim$20$^{\circ}$, with an RMSE that can reach a few tens of degrees. This means that, night by night, in the very proximity of the surface we can have a discrepancy of the order of a few tens of degrees. It is worth highlighting that the accuracy in observing the wind direction can hardly be better than $\sim$20$^{\circ}$. The model performances are therefore very satisfactory. The model also shows very good performances in reconstructing the relative humidity: the bias is basically never larger than 10$\%$ along the whole 20~km. The largest discrepancy (10$\%$) of simulations with respect to measurements is observed at the jet stream level. Such a satisfactory result has been obtained in spite of the fact that we used a cheap scheme (in terms of CPU cost) for the relative humidity. That was possible because of the dryness of the region, and such a solution permits faster simulations. The small bump at the jet-stream level (Fig.\ref{average}) is most probably due to the humidity coming from the nearby ocean. The model shows very good performances also in reconstructing the potential temperature. We observe a very small bias of $\sim$ 2$^{\circ}$C from the ground up to around 13~km. Above 13~km, where the potential temperature slope becomes steeper and steeper, the bias can reach up to 4$^{\circ}$C. The wind speed intensity is very well reconstructed: we have a bias of around 1~m$\cdot$s$^{-1}$ in the [5~km - 15~km] range. Above 15~km and in the [3~km - 5~km] range the bias reaches a value of 2~m$\cdot$s$^{-1}$.
Comparing results (not shown here) obtained with the Meso-Nh model and the ECMWF analyses coming from the General Circulation Model (GCM), we could conclude that most of the residual biases and RMSEs we described so far are generated by the initial conditions and not by the mesoscale model itself.
\begin{figure}
\centering
\includegraphics[width=14cm]{average-crop}
\caption{\label{average} Averaged profiles observed by radio-soundings and simulated by the Meso-Nh mesoscale model on the sample of {\bf 50 flights}. The dashed line indicates the standard deviation. The wind direction is reported modulo 360$^{\circ}$, therefore the large $\sigma$ in proximity of the ground (at $\sim$4~km) and in the very high part of the atmosphere (at $\sim$19~km) is just fictitious. }
\end{figure}
\begin{figure}
\centering
\includegraphics[width=14cm]{wind_thm_wd_rh-crop}
\caption{\label{wind_and_co} {\bf BIAS} (simulations minus observations) and {\bf RMSE} calculated for the vertical profiles of potential temperature, wind speed, wind direction and relative humidity related to a sample of {\bf 50 flights}.}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=14cm]{wind_instant_mnh_rs_2009_08_01-crop}
\caption{\label{radio_mnh} Comparison of wind speed intensity observed (radio-soundings: thick line) and simulated (Meso-Nh model: thin line) at three different instants (00:00, 06:15, 12:00 UT) during the same night: 1/8/2009 above Paranal.}
\end{figure}
A comparison (observations versus simulations) performed night by night, at each instant for which a radio-sounding was available, revealed an excellent agreement in basically all of the 50 cases studied. This analysis has been performed for the wind speed and the potential temperature, the two main parameters on which the optical turbulence depends. As an example, Fig.\ref{radio_mnh} shows a comparison of the wind speed observed and simulated at three different instants (00:00, 06:15 and 12:00 UT) of the same night. This case clearly highlights the excellent ability of the model to adapt itself to the evolution of the wind speed over time. In spite of the fact that the observed wind speed changes its features substantially during the night at different heights, the model reconstructs the observed wind-speed features at the three different instants almost perfectly. A similar behavior is observed in basically all of the 50 cases studied (details in a forthcoming report of the MOSE Project).
This definitely guarantees the reliability of the Meso-Nh mesoscale model as a tool to reconstruct the temporal evolution of the vertical distribution of the wind speed (V(h,t)) during a whole night. This is a fundamental ingredient (besides the vertical profiles of the optical turbulence $\mbox{$C_N^2 \ $}$(h,t)) for the calculation of the temporal evolution of the wavefront coherence time $\mbox{$\tau_{0} \ $}$(t):
\begin{equation}
\tau _0 (t) = 0.057 \, \lambda ^{6/5} \left( \int\limits_0^\infty V(h,t)^{5/3} \, C_N^2 (h,t)\, dh \right)^{ - 3/5}
\end{equation}
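As an illustration of how this integral can be evaluated numerically from model output profiles, here is a minimal Python sketch; the profiles, the 500~nm wavelength and the trapezoidal integration are illustrative assumptions, not MOSE outputs:

```python
import numpy as np

def tau0(h, V, Cn2, lam=500e-9):
    """Wavefront coherence time tau_0 (s) from the equation above.

    h   : heights (m); V : wind-speed profile V(h) (m/s)
    Cn2 : optical-turbulence profile C_N^2(h) (m^-2/3)
    lam : wavelength (m); 500 nm is an assumed convention here
    """
    f = V ** (5.0 / 3.0) * Cn2
    # trapezoidal estimate of the integral of V^{5/3} C_N^2 over height
    J = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(h))
    return 0.057 * lam ** 1.2 * J ** (-0.6)

# purely illustrative profiles: a jet-stream-like wind maximum near 12 km
# and an exponentially decaying turbulence profile
h = np.linspace(0.0, 20e3, 400)
V = 10.0 + 20.0 * np.exp(-((h - 12e3) / 3e3) ** 2)
Cn2 = 1e-16 * np.exp(-h / 5e3)
print("tau_0 = %.2f ms" % (1e3 * tau0(h, V, Cn2)))
```

For profiles of this order of magnitude the result is a few milliseconds, as typically measured at good astronomical sites; the $\lambda^{6/5}$ scaling of the formula provides a quick sanity check of any implementation.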
We stress that the analyses and the forecasts of the meteorological parameters retrieved from the General Circulation Models (ECMWF or NOAA) are calculated only at the synoptic hours (00:00, 06:00, 12:00 and 18:00 UT).
\begin{figure}
\centering
\includegraphics[width=14cm]{temp_evol-crop}
\caption{\label{temp_evol} Temporal evolution of the wind speed (top) and potential temperature (bottom) vertical distribution, calculated on the grid point of Paranal and extended
along 21~km (left) and 2~km (right) from the ground. The simulation starts at t$_{0}$=18 UT and lasts 20 hours. The local night (20:00 - 05:00 LT) corresponds to the interval (6 - 15) on the x-axis.}
\end{figure}
Fig.\ref{temp_evol} shows the temporal evolution of the wind speed and of the potential temperature provided by the Meso-Nh model during two different nights. In this example we can appreciate the intrinsic level of temporal variability of both parameters at different heights above the ground during the night, which is far from negligible. In other words, the mesoscale predictions provide complete information (the temporal evolution of the meteorological parameters), whereas the estimates coming from the General Circulation Models are available only at the synoptic hours and cannot fill in the missing information in between. This proves the invaluable utility of a mesoscale model for the prediction of the astro-climatic parameters. In particular, our results indicate that the wind speed retrieved from the mesoscale model is, at present, certainly the best (and the most practical) solution for calculating the temporal evolution of the wavefront coherence time $\mbox{$\tau_{0} \ $}$(t). The temporal evolutions of the potential temperature and of the wind speed both contribute to determining the prediction of the optical turbulence $\mbox{$C_N^2 \ $}$(h,t).
A few more words are needed to comment on the wind speed in the vertical slab [3~km - 5~km] a.s.l., i.e. [0.5~km - 2.5~km] a.g.l. The discrepancy between observations and simulations of around 1-2~m$\cdot$s$^{-1}$ is weak but, differently from the other discrepancies, it seems to be the only one generated directly by the mesoscale model rather than by the initial conditions. Even if the discrepancy is weak, it is therefore worth analyzing it in more detail. We counted a roughly comparable number of cases in which the model overestimates and correctly estimates the wind in this vertical slab; the model almost never underestimates the observed values. We did not observe any clear correlation between the discrepancy and the absolute value of the wind speed. We tested the sensitivity to the grid-point selection, to check whether an imprecise selection of the grid point of the summit could create some anomalous effect on the wind in the low atmosphere: for all the cases in which an overestimate had been observed we calculated the same bias at four different grid points around the summit, but no substantial difference was found. We note that the radio-sounding is an in-situ measurement and that the balloon moves horizontally in the (x,y) plane during its ascent through the atmosphere. During the ascent it therefore senses a volume of atmosphere shifted with respect to the zenithal direction. The radio-sounding takes around 4 minutes, with V$_{z}$=6~m$\cdot$s$^{-1}$, to reach the altitude of 4~km a.s.l. In this time interval the balloon can move (depending on the wind direction) anywhere within a circle with a radius of $\sim$2.4~km. If we calculate the maximum variation of the wind speed ($\Delta$V) inside such a circle we see that, at 4~km a.s.l., $\Delta$V is of the order of 1.5-2~m$\cdot$s$^{-1}$. Table \ref{3_5_wind_speed} reports these values for a few flights. The inhomogeneity decreases with height and disappears in the high part of the atmosphere.
In other words, since the horizontal distribution of the wind speed in the (x,y) plane in the low part of the atmosphere is not necessarily homogeneous, this could explain the discrepancy with the simulations in the [3~km - 5~km] range. This argument tells us that the radio-sounding is not an optimal reference for comparisons with simulations in the low part of the atmosphere; a preferable choice would be an instrument based on an optical remote-sensing principle. To support this argument we recall that in a recent study\cite{hagelin2010} a similar comparison of simulated versus measured wind speed, obtained with a remote-sensing instrument (a Generalized SCIDAR used for the wind-speed measurements), showed an agreement better than 1~m$\cdot$s$^{-1}$ in the [0.5~km - 1~km] a.g.l. vertical range.
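The order-of-magnitude argument above can be sketched in a few lines of Python; the site altitude of 2.64~km a.s.l. and the 10~m$\cdot$s$^{-1}$ mean horizontal wind are assumed, illustrative values:

```python
def ascent_time(h_target, h_site=2.64e3, v_z=6.0):
    """Time (s) for the balloon to climb from the site altitude to h_target (a.s.l.)."""
    return (h_target - h_site) / v_z

def drift_radius(h_target, v_wind=10.0, **kwargs):
    """Maximum horizontal displacement (m) of the balloon at h_target,
    assuming it is advected by a mean horizontal wind v_wind."""
    return v_wind * ascent_time(h_target, **kwargs)

print("ascent to 4 km a.s.l.: %.1f min" % (ascent_time(4e3) / 60.0))
print("drift radius: %.1f km" % (drift_radius(4e3) / 1e3))
```

With these values the ascent takes about 4 minutes and the balloon can drift a little over 2~km, consistent with the $\sim$2.4~km radius quoted above.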
\begin{table}
\begin{center}
\caption{\label{3_5_wind_speed} Maximum variability of the wind speed calculated at 4~km a.s.l. inside a circle having a radius equal to the wind speed observed at 4~km a.s.l. multiplied by 4 minutes.}
\vskip 0.2cm
\begin{tabular}{|c|c|c|c|}
\hline
Date & Hour (UT) & V$_{4km}$ (m$\cdot$s$^{-1}$)& $\Delta$V (m$\cdot$s$^{-1}$)\\
\hline
1/8/2009 & 00:00 & 5 & 1.6 \\
11/11/2009 & 12:00 & 10 & 2 \\
19/11/2009 & 06:00& 5 & 1.4 \\
19/11/2009 & 12:00 & 10 & 1.6 \\
14/11/2009 & 12:00& 5 & 2 \\
\hline
\end{tabular}
\end{center}
\end{table}
\subsection{Wind speed and absolute temperature in the surface layer}
\label{sec:surface}
To quantify the statistical reliability of the model in reconstructing the meteorological surface parameters, we estimate the bias and the RMSE of the nightly temporal evolution of the temperature and of the wind speed at each level for which observations are available at Cerro Paranal and Cerro Armazones. We perform the analysis on a statistical sample of {\bf 20 nights}. We selected all the nights of the Paranal site-testing campaign of November/December 2007 on which observations (temperature and wind speed) near the surface are available at both sites, plus a few others in which measurements above both sites were available. The bias and the RMSE are calculated, for each instant, with respect to N $\leq$ n values (where n is the number of nights): observed values are not always available at every minute of every night, and in those cases the statistics are performed on a number of points smaller than the total number of nights. Fig.\ref{temp_surf_temp} shows the temporal evolution of the average, the bias and the RMSE (at different levels, above Cerro Paranal and Cerro Armazones) of the absolute temperature. Table \ref{par_temp} reports the corresponding statistical bias and RMSE calculated with respect to the values observed all through the night. All simulations were initialized the day before at 18 UT and forced every 6 hours with the analyses from the ECMWF. Simulations finished at 09 UT of the simulated day (for a total duration of 15 hours). The statistics are computed only during night time, from 00 UT to 09 UT. We discard the first 6 hours, related to day time, also because some spurious effects due to the model spin-up are present (in the first part of a simulation the model adapts itself to the orography). Looking at Fig.\ref{temp_surf_temp} and Table \ref{par_temp} we conclude that, above both Cerro Paranal and Cerro Armazones, we obtain excellent bias and RMSE values: the bias is well below 1$^{\circ}$C (at some heights well below 0.2$^{\circ}$C) and, even more impressive, the RMSE is basically always below 1$^{\circ}$C. The largest discrepancy is observed above Armazones at 2~m during the night (a model overestimation of 0.76$^{\circ}$C).
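The per-instant statistics over N $\leq$ n valid nights can be computed as in the following sketch, where a missing observation is marked by NaN; the array shapes are illustrative:

```python
import numpy as np

def per_instant_stats(sim, obs):
    """BIAS and RMSE at each instant over a stack of nights.

    sim, obs : arrays of shape (n_nights, n_instants); missing
    observations are NaN, so each instant uses only the N <= n
    nights with a valid (simulation, observation) pair.
    """
    diff = sim - obs                          # NaN propagates through the gaps
    bias = np.nanmean(diff, axis=0)
    rmse = np.sqrt(np.nanmean(diff ** 2, axis=0))
    return bias, rmse
```

The NaN-aware reductions automatically shrink the sample from n to N at the instants with missing data, which is exactly the convention described above.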
Looking at the temporal evolution of the temperature we note that, for observations as well as simulations and above both sites, we can detect the inversion of the temperature gradient ($\frac{\partial T}{\partial h}$) near the surface when passing from day to night time: $\frac{\partial T}{\partial h}$ $<$ 0 during the day time, typical of convective regimes, and $\frac{\partial T}{\partial h}$ $>$ 0 during the night time, typical of stable regimes. Above Armazones we note that only one level (11~m) presents a wrong gradient tendency with respect to the 2~m level during both the day-time and night-time periods, but the quantitative effect is minimal. The same model effect is also observed above Cerro Paranal. We conclude therefore that the model, in general, reconstructs the thermal structure near the surface well, even if there is still room for some improvement. As a further output of our analysis we note (Fig.\ref{temp_surf_temp}, left side) that, on the sample of 20 nights, the temperature observed at 2~m at Cerro Armazones is typically almost 4$^{\circ}$C colder than at Cerro Paranal, with a stronger gradient in the first 30 meters. This deserves a deeper investigation because, should it be confirmed on a climatological scale, it would be an indication that at Cerro Armazones the turbulent surface layer is typically thinner than at Cerro Paranal but with a stronger turbulence strength. This correlation between the thermal gradient of the surface layer, the thickness of the surface layer and the optical turbulence inside the surface layer was put in evidence in one of our previous studies (Lascaux et al. 2011\cite{Lascaux2011}) above the internal Antarctic plateau.
\begin{table}
\begin{center}
\caption{\label{par_temp} {\bf BIAS} (simulations minus observations) and {\bf RMSE} estimations of the {\bf TEMPERATURE} between
Meso-NH simulations and observations, at Cerro Paranal (left) and Cerro Armazones (right).
Units in $^o$C. The statistical sample consists of each individual pair of values, at every instant for which simulations can be compared with observations, for each of the 20 nights.}
\vskip 0.2cm
\begin{tabular}{|c|c|c|c||c|c|c|c|c|c|}
\hline
\multicolumn{2}{|c|}{PARANAL} & 2~m & 30~m & \multicolumn{2}{|c|}{ARMAZONES} & 2~m & 11~m & 20~m & 28~m \\
\hline
\multirow{2}{*}{3dom} & BIAS & 0.13 & -0.19 & \multirow{2}{*}{3dom} & BIAS & 0.76 & 0.04 & 0.03 & 0.01 \\
& RMSE & 0.97 & 0.84 & & RMSE & 1.10 & 0.85 & 0.89 & 0.92 \\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{figure}
\centering
\includegraphics[width=14cm]{evol_temp_temp_all_levels_20nights}
\caption{\label{temp_surf_temp} Temporal evolution of the {\bf average} (left: (a),(d)), {\bf BIAS} (centre: (b),(e)) and {\bf RMSE} (right: (c),(f)) of the {\bf TEMPERATURE} observed and simulated on a statistical sample of {\bf 20 nights} and calculated above Cerro Paranal (top) and Cerro Armazones (bottom). In (a) and (d), simulations are in bold-line style, observations in thin-line style. The x-axis is in UT time. The local night-time [20:00-05:00] LT is included in the [00:00-09:00] UT time.}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=14cm]{evol_temp_wind_all_levels_20nights}
\caption{\label{temp_surf_wind} Temporal evolution of the {\bf average} (left: (a),(d)), {\bf BIAS} (centre: (b),(e)) and {\bf RMSE} (right: (c),(f)) of the {\bf WIND SPEED} observed and simulated on a statistical sample of {\bf 20 nights} and calculated above Cerro Paranal (top) and Cerro Armazones (bottom). In (a) and (d), simulations are in bold-line style, observations in thin-line style. The x-axis is in UT time. The local night-time [20:00-05:00] LT is included in the [00:00-09:00] UT time.}
\end{figure}
\begin{table}
\begin{center}
\caption{\label{par_wind} {\bf BIAS} (simulations minus observations) and {\bf RMSE} estimations of the {\bf WIND SPEED} between
Meso-NH simulations and observations, at Cerro Paranal (left) and Cerro Armazones (right).
Units in m$\cdot$s$^{-1}$. The statistical sample consists of each individual pair of values, at every instant for which simulations can be compared with observations, for each of the 20 nights.}
\vskip 0.2cm
\begin{tabular}{|c|c|c|c||c|c|c|c|c|c|}
\hline
\multicolumn{2}{|c|}{PARANAL} & 2~m & 30~m & \multicolumn{2}{|c|}{ARMAZONES} & 2~m & 11~m & 20~m & 28~m \\
\hline
\multirow{2}{*}{3dom} & BIAS & -2.23 & -1.07 & \multirow{2}{*}{3dom} & BIAS & -3.59 & -3.33 &-2.38 & -2.06 \\
& RMSE & 3.24 & 2.68 & & RMSE & 4.61 & 4.81 & 4.29 & 4.25 \\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{figure}
\centering
\includegraphics[width=13cm]{cum_dist_bias_temp_par-crop}
\includegraphics[width=13cm]{cum_dist_rmse_temp_par-crop}
\caption{\label{par_bias_rmse_ind_nights} Cumulative distributions of the BIAS (top) and the RMSE (bottom) of the temperature at 2~m and 30~m at Cerro Paranal. The BIAS and the RMSE are calculated for each individual night, i.e. the statistical sample is 20 nights.}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=13cm]{cum_dist_bias_temp_arm-crop}
\caption{\label{arm_bias_ind_nights} Cumulative distributions of the BIAS of the temperature at 2~m, 11~m, 20~m and 28~m at Cerro Armazones. The BIAS is calculated for each individual night, i.e. the statistical sample is 20 nights.}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=13cm]{cum_dist_rmse_temp_arm-crop}
\caption{\label{arm_rmse_ind_nights_rmse} Cumulative distributions of the RMSE of the temperature at 2~m, 11~m, 20~m and 28~m at Cerro Armazones. The RMSE is calculated for each individual night, i.e. the statistical sample is 20 nights.}
\end{figure}
Fig.\ref{temp_surf_wind} shows the temporal evolution of the average, the bias and the RMSE (at different levels, above Cerro Paranal and Cerro Armazones) of the wind speed. Table \ref{par_wind} reports the corresponding statistical bias and RMSE calculated with respect to the values observed all through the night. At Cerro Paranal the maximum bias is 2.23~m$\cdot$s$^{-1}$, while at Armazones it is 3.59~m$\cdot$s$^{-1}$. The RMSE has maximum values of 3.24 and 4.81~m$\cdot$s$^{-1}$ above Cerro Paranal and Cerro Armazones, respectively. We note a general tendency of the model to underestimate the wind speed all through the night, particularly at the first level (2~m), even if the model seems to behave better above Cerro Paranal than above Cerro Armazones. Even if the bias and the RMSE are still small in absolute terms, considering the typical absolute values of the wind speed at these heights the relative error can be non-negligible. In a further study (Lascaux \& Masciadri\cite{lascaux2012}), presented at this conference, we performed an analogous statistical analysis with a different model configuration (five nested domains with a highest horizontal resolution of 100~m) to check whether the model performance in reconstructing the wind speed near the surface of these summits could be improved. We found that such a new configuration definitely and substantially improves the model performance in reconstructing the wind speed near the surface, by around 50$\%$.
How does the model perform in reconstructing these parameters night by night? Fig.\ref{par_bias_rmse_ind_nights} shows the cumulative distributions of the BIAS and the RMSE of simulations versus observations for the temperature at 2~m and 30~m achieved by the model at Cerro Paranal night by night. Fig.\ref{arm_bias_ind_nights} and Fig.\ref{arm_rmse_ind_nights_rmse} show, respectively, the bias and the RMSE obtained at Cerro Armazones at each height: 2~m, 11~m, 20~m and 28~m. We note that the level of accuracy of the model is very satisfactory above both sites. The median value of the bias is always within 0.60$^{\circ}$C, with a maximum absolute value of the quartiles equal to 1.07$^{\circ}$C. The median value of the RMSE is within 0.93$^{\circ}$C taking into account all the model levels analyzed, with a maximum third quartile of 1.23$^{\circ}$C.
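A minimal sketch of the night-by-night statistics quoted above: one bias value per night, then the median and quartiles over the sample of nights. The arrays are illustrative, with NaN marking a missing observation:

```python
import numpy as np

def per_night_bias(sim, obs):
    """One BIAS value per night from (n_nights, n_instants) arrays,
    ignoring instants with a missing (NaN) observation."""
    return np.nanmean(sim - obs, axis=1)

def quartile_summary(values):
    """(first quartile, median, third quartile) of the nightly values."""
    q1, med, q3 = np.percentile(values, [25, 50, 75])
    return q1, med, q3
```

The same two functions, applied to the per-night RMSE instead of the bias, give the RMSE medians and quartiles quoted in the text.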
\section{CONCLUSIONS}
\label{sec:concl}
In this contribution we present the preliminary results obtained in the context of the MOSE project, an extended feasibility study for the forecast of the optical turbulence and of the atmospheric parameters above Cerro Paranal and Cerro Armazones. We focus here on the atmospheric parameters in the free atmosphere as well as at the surface. We proved that the Meso-Nh mesoscale model, using a configuration made of three nested domains (horizontal resolutions of 10~km, 2.5~km and 0.5~km), provides vertical distributions (from the ground up to $\sim$20~km) of the main classical atmospheric parameters (wind speed and direction, temperature and relative humidity) with excellent levels of correlation with observations. The residual discrepancies are mainly due to the initial conditions. We showed that, at present, the Meso-Nh mesoscale model can be a very useful tool to calculate and predict the temporal evolution of the wind speed for the calculation of the wavefront coherence time ($\mbox{$\tau_{0} \ $}$). This method offers a very advantageous solution in terms of accuracy and temporal coverage.
Near the ground, in proximity of the surface, the forecast of the temperature calculated at different heights in the first 30 meters shows excellent performance above both sites (Paranal and Armazones), with a statistical bias with respect to observations whose median value is well below 1$^{\circ}$C (at some heights well below 0.2$^{\circ}$C) and a RMSE always below 1$^{\circ}$C. The statistical sample is very rich, consisting of pairs of values sampled at a frequency of one per minute, extended over 9 hours (night time), for a total of 20 nights. A similar analysis performed for the wind speed indicated that, in this case, the model shows a tendency to underestimate the wind speed, particularly at the first level. This effect is more evident for strong wind speeds. The biases are not important in absolute terms: of the order of 2.23 and 3.59~m$\cdot$s$^{-1}$ at 2~m and a little weaker at the higher levels of 20~m, 28~m and 30~m (of the order of 2-2.5~m$\cdot$s$^{-1}$). However, the model tendency is systematic and it is therefore reasonable to investigate how to improve the model behavior in this context. In a further dedicated study (Lascaux \& Masciadri\cite{lascaux2012}), presented at this conference, we show how to overcome this limitation.
Besides the statistical analysis performed on rich statistical samples, we presented in this contribution a detailed analysis of the model performance in forecasting the atmospheric parameters night by night. As far as the vertical distribution is concerned, the level of accuracy shown by the model is impressive: in basically all the cases studied the model is able to reconstruct the different features of the wind speed observed at three different instants during the night (example in Fig.\ref{radio_mnh}). As far as the state of the atmosphere in proximity of the surface is concerned, we proved that the median value of the biases (calculated for each single night at different heights: 2~m, 11~m, 20~m, 28~m, 30~m) is within 0.60$^{\circ}$C above Cerro Paranal and Cerro Armazones, with a maximum absolute value of the quartiles of 1.07$^{\circ}$C. The median value of the RMSEs is within 0.93$^{\circ}$C taking into account all the model levels analyzed, with a maximum third quartile of 1.23$^{\circ}$C. We conclude, therefore, that the Meso-Nh model appears to be an extremely useful system to predict the temperature near the ground with excellent levels of accuracy. Such a system can play a fundamental role in monitoring and forecasting the thermal stratification of the first few tens of meters above the ground and in inferring the status of thermal stability, i.e. the value of the $\mbox{$C_N^2 \ $}$ in the shallow surface layer. This can definitely play a fundamental role in the service-mode operation of scientific programs.
\acknowledgments
Meteorological data-set from the Automatic Weather Station (AWS) and mast at Cerro Armazones are from the Thirty Meter Telescope Site Testing - Public Database Server\cite{Schoeck2009}.
Meteorological data-sets from the AWS and mast at Cerro Paranal are from the ESO Astronomical Site Monitor (ASM - Doc.N. VLT-MAN-ESO-17440-1773). We are very grateful to the whole staff of the TMT Site Testing Working Group for providing information about their data-set, as well as to Marc Sarazin for his constant support of this study and for providing us with the ESO data-set used in this study, and to Florian Kerber for his valuable support in accessing the radio-sounding data-set. Simulations were run partially on the HPCF cluster of the European Centre for Medium-Range Weather Forecasts (ECMWF) - Project SPITFOT. This study is co-funded by the ESO contract: E-SOW-ESO-245-0933.
\section{Introduction}
The effects of geometrical frustration on the magnetic properties of materials are rich and often unexpected~\cite{books}.
One of the most spectacular effects is the absence of a long-range order (LRO) down to the lowest temperatures despite the presence of relatively strong magnetic interactions.
In some cases magnetic order can be induced by an applied magnetic field, but the number of materials where such an effect has been detected is very limited~\cite{caution}, with perhaps most notable examples found among the garnets~\cite{GGG} and the pyrochlores~\cite{pyros}.
\sdo\ reported here constitutes an interesting addition to this short list.
\sdo\ belongs to the family of rare-earth strontium oxides, \slo, which crystallise in the form of calcium ferrite, with the space group $Pnam$.
The crystal structure of these materials (see Fig.~\ref{Fig1}) can be viewed as a network of linked hexagons and triangles~\cite{Karunadasa_2005}, and their magnetic properties have been studied within the context of geometrically frustrated magnetism.
Several members of the family have been the subject of recent investigations~\cite{Ghosh_2011,Quintero_2012,Petrenko_2008,Hayes_2011,Young_2012}.
\syo\ is reported to order magnetically at $T_N=0.9$~K into a noncollinear structure~\cite{Quintero_2012}, with a significantly reduced ordered spin moment on the magnetic Yb$^{3+}$ sites compared to the full ionic moment of Yb$^{3+}$.
The magnetic $H-T$ phase diagram of \syo\ consists of a complicated series of states, which are attributed to the competition of the various magnetic interactions with each other as well as with the relatively strong single-ion anisotropy~\cite{Quintero_2012}.
With the help of powder neutron diffraction (PND) data, it has been established~\cite{Petrenko_2008} that \seo\ orders magnetically at $T_N=0.75$~K.
However, single-crystal neutron diffraction~\cite{Hayes_2011} has revealed two distinct components to the magnetic ordering in this compound: one component is a LRO {\bf k}~=~0 structure which appears below $T_N=0.75$~K, and the other component is a short-range incommensurate structure which is responsible for the appearance of a strong diffuse scattering signal.
The diffuse scattering in this compound is observed to form undulating planes of intensity at the positions $(h,k,\frac{1}{2}+\delta)$ and $(h,k,\frac{3}{2}-\delta)$, with the incommensuration parameter $\delta$ varying from 0.2 to 0.6, depending on the temperature.
The partially ordered component does not undergo a pronounced phase transition at any temperature down to 60~mK~\cite{Hayes_2011}.
\begin{figure}
\includegraphics[width={0.5\columnwidth}]{Fig1_structure.eps}
\caption{(Colour online) Magnetic sublattice of \sdo, with the two crystallographically inequivalent positions of Dy ions shown in different colours.
When viewed along the c axis, honeycombs of Dy$^{3+}$ ions are visible.
Zigzag ladders running along the c axis connect the honeycomb layers and give rise to geometric frustration.}
\label{Fig1}
\end{figure}
PND data obtained on \sho\ \cite{Young_2012} seem to indicate a magnetic structure very similar to the one observed in \seo, in that a LRO {\bf k}~=~0 structure coexists with a short-range diffuse component.
However, more precise single-crystal neutron diffraction measurements~\cite{Young_2013} suggest that even the {\bf k}~=~0 component appearing below $T_N=0.68$~K is short-range in \sho.
It is also observed~\cite{Young_2013} that the planes of scattering intensity formed in reciprocal space by the diffuse component are not as undulated in their position as in \seo, which suggests that \sho\ should be regarded as a collection of almost perfect one-dimensional spin chains.
Surprisingly, despite the absence of the LRO, the $T_N$ of 0.68~K is still marked by a cusp in the susceptibility as well as by a noticeable difference between the field-cooled and the zero-field-cooled data below this temperature~\cite{Hayes_2012}.
PND data for \sdo\ (SDO) show no signs of any long-range magnetic order down to 20~mK, as the scattering pattern in zero field is dominated by broad diffuse scattering peaks~\cite{Petrenko_unpublished}.
Previous powder~\cite{Karunadasa_2005} and single-crystal~\cite{Hayes_2012} susceptibility measurements have shown that the magnetic moment on the Dy$^{3+}$ sites is rather large, $\gtrsim 10 \; \mu_B$, approaching the maximum expected value of 10.63~$\mu_B$.
With a reported~\cite{Karunadasa_2005} Curie-Weiss temperature of -23~K for SDO compared to -17~K for \sho\ and -13.5~K for \seo, the lack of magnetic ordering is rather surprising.
In this paper we report on the magnetic ordering processes in SDO induced by an applied magnetic field.
The ordering is investigated by measuring the temperature and field dependence of the heat capacity of single crystal samples for $H \parallel [010]$ and $H \parallel [001]$.
\section{Experimental procedures}
Single crystal samples of \sdo\ were prepared as described previously~\cite{Balakrishnan_2009}.
The principal axes of the samples were determined using the Laue x-ray diffraction technique; the crystals were aligned to within an accuracy of 3 degrees.
Specific heat measurements were performed in the temperature range 0.39 to 5.0~K, in fields of up to 50~kOe using a Quantum Design Physical Property Measurement System (PPMS) calorimeter equipped with a $^3$He option.
We have measured the specific heat as a function of temperature in a constant magnetic field and as a function of the applied field at a constant temperature.
For field-scans, the temperature stability was better than~5~mK.
The measurements were carried out on small (typically less than 1~mg) platelike samples for which the demagnetising factors were small.
The measurements were restricted to the $H \parallel [010]$ and $H \parallel [001]$ directions, as applying a magnetic field along $a$, the ``hard-axis'' for magnetisation~\cite{Hayes_2012}, would cause a significant torque on the heat capacity sample platform.
For the same reason, the measurements for $H \parallel [001]$ are limited to a maximum field of 30~kOe, as above this field the difference between $M_{H \parallel [001]}$ and $M_{H \parallel [010]}$ becomes substantial~\cite{Hayes_2012}.
From previous measurements~\cite{Petrenko_2008} of the heat capacity of SrLu$_2$O$_4$ and SrY$_2$O$_4$, two nonmagnetic compounds isostructural with SDO, it may be concluded that the lattice contribution to the specific heat is negligibly small compared to the magnetic contribution for all temperatures below 5~K; therefore, all the heat capacity data shown below should be considered to be magnetic in origin.
\section{Experimental results}
\subsection{Zero-field data}
\begin{figure}
\includegraphics[width={0.5\columnwidth}]{Zero_field.eps}
\caption{Temperature dependence of the specific heat divided by temperature of \sdo\ in zero field.
The inset shows the temperature dependence of the entropy, $S$ (solid line), calculated as the area under the $C(T)/T$ curve which has been extended linearly down to $T=0$~K.
The dashed line indicates the position of 2R$\ln(2)$, which corresponds to the magnetic contribution for a system with an effective $s=1/2$.}
\label{Fig2}
\end{figure}
The temperature dependence of the heat capacity divided by temperature of SDO is shown in Fig.~\ref{Fig2}.
The $C(T)/T$ curve shows a very broad maximum at 0.77~K and a nearly linear temperature dependence below this peak, implying a quadratic, $T^2$, dependence for $C(T)$ at low temperatures.
There are no sharp features in the heat capacity curve which could be attributed to a phase transition to a magnetically ordered state.
The inset shows the temperature dependence of the magnetic entropy obtained from the $C(T)/T$ curve extrapolated to $T=0$, as well as the entropy value expected for an effective spin-$1/2$ system.
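The entropy construction described in the caption of Fig.~\ref{Fig2} (the area under $C(T)/T$, with the curve extended linearly down to $T=0$~K) can be sketched numerically as follows; the illustrative $C/T$ values are not the measured data:

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def entropy(T, C_over_T):
    """S(T') = integral of C/T' from 0 to T', trapezoidal rule.

    C/T is extended linearly from the two lowest data points down
    to T = 0, as done for the inset. Returns S at [0, T[0], T[1], ...].
    """
    slope = (C_over_T[1] - C_over_T[0]) / (T[1] - T[0])
    c0 = C_over_T[0] - slope * T[0]          # linear extrapolation to T = 0
    T_ext = np.concatenate(([0.0], T))
    y_ext = np.concatenate(([c0], C_over_T))
    dS = 0.5 * (y_ext[1:] + y_ext[:-1]) * np.diff(T_ext)
    return np.concatenate(([0.0], np.cumsum(dS)))

S_max = 2 * R * np.log(2)   # 2R ln 2, the dashed line in the inset
```

The saturation value $2R\ln(2) \approx 11.5$~J/(mol~K) follows from the two Dy$^{3+}$ ions per formula unit, each behaving as an effective $s=1/2$.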
\subsection{$H \parallel [010]$}
\begin{figure}
\includegraphics[width={0.5\columnwidth}]{CoverT_vs_H_010.eps}
\caption{(Colour online) Field dependence of the heat capacity divided by temperature, $C(T)/T$, of single crystal \sdo\ measured at different temperatures for $H \parallel [010]$.
The curves are consecutively offset by 2.5 J/(mol K$^2$) for clarity.}
\label{Fig3}
\end{figure}
The field dependence of the heat capacity divided by temperature of SDO measured at different temperatures for $H \parallel [010]$ is shown in Fig.~\ref{Fig3}.
At the lowest temperature of 0.39~K, a sharp double peak at about 20~kOe dominates the $C(H)$ curve.
On warming the sample, the peaks in the $C(H)$ curve become more rounded while the field separation between them increases.
Above 1~K, the peaks (especially the one at lower-field) are rather broad and separated by approximately 6~kOe, while by 1.5~K the lower-field peak is too broad and too weak to be clearly distinguishable.
Apart from these two peaks, a low-field ($< 4$~kOe) feature with a pronounced temperature dependence is also clearly visible on the $C(H)$ curves.
The $C(H)/T$ curves shown in Fig.~\ref{Fig3} have been combined with similar field scans performed at higher temperatures to form the magnetic $H-T$ phase diagram shown in Fig.~\ref{Fig4}, where the value of the heat capacity divided by temperature is represented by the colour scale shown on the right of the figure.
\begin{figure}
\includegraphics[width={0.5\columnwidth}]{PhD_010.eps}
\caption{(Colour online) Magnetic $H-T$ phase diagram of \sdo\ for $H \parallel [010]$ obtained from the heat capacity measurements.
Colour represents the heat capacity divided by temperature in the units of J/(mol K$^2$).}
\label{Fig4}
\end{figure}
\subsection{$H \parallel [001]$}
The field dependence of the heat capacity divided by temperature of SDO measured at different temperatures for $H \parallel [001]$ is shown in Fig.~\ref{Fig5}.
In contrast to the case for a field applied along the $b$ axis, the application of a magnetic field along the $c$ direction does not seem to result in any features in the $C(H)$ curves sharp enough to be indicative of a phase transition.
At the lowest temperature, a broad peak in $C(H)$ is present at around~11~kOe.
On warming the sample this peak initially increases in intensity, but then decreases rapidly and splits into two very broad peaks.
By~0.93~K the same field of 11~kOe corresponds to a local {\it minimum}, rather than a {\it maximum} in the heat capacity.
This minimum remains visible (albeit less pronounced) at higher temperatures and seems to shift to slightly higher fields.
\begin{figure}
\includegraphics[width={0.5\columnwidth}]{CoverT_vs_H_001.eps}
\caption{(Colour online) Field dependence of the heat capacity divided by temperature, $C(H)/T$, of single crystal \sdo\ measured at different temperatures for $H \parallel [001]$.
The curves are consecutively offset by~2.5~J/(mol K$^2$) for clarity.}
\label{Fig5}
\end{figure}
The $C(H)/T$ curves shown in Fig.~\ref{Fig5} are combined with similar field scans performed at higher temperatures to form the magnetic $H-T$ phase diagram shown in Fig.~\ref{Fig6}, where the variation in the heat capacity divided by temperature is represented by the colour scale.
Since for this direction of applied field there are no sharp features in the $C(H)/T$ curves, we have performed additional measurements of the temperature dependence of the specific heat in constant fields; the corresponding data are shown in Fig.~\ref{Fig7}.
Again, no sharp peaks are observed in the $C(T)$ curves.
A broad peak centred at 0.77~K in zero field shifts to 0.5~K and becomes more intense and well defined in an applied field of 10~kOe.
With further increases in field, the peak shifts to higher temperatures and becomes significantly less intense.
\begin{figure}
\includegraphics[width={0.5\columnwidth}]{PhD_001.eps}
\caption{(Colour online) Magnetic $H-T$ phase diagram of \sdo\ for $H \parallel [001]$ as obtained from the heat capacity measurements.
Colour represents the heat capacity divided by temperature in the units of J/(mol K$^2$). }
\label{Fig6}
\end{figure}
\begin{figure}
\includegraphics[width={0.5\columnwidth}]{CoverT_vs_T_001.eps}
\caption{(Colour online) Temperature dependence of the heat capacity divided by temperature, $C(T)/T$, of single crystal \sdo\ measured in different applied fields for $H \parallel [001]$.
The curves are consecutively offset by~2.0~J/(mol K$^2$) for clarity.}
\label{Fig7}
\end{figure}
\section{Discussion and Conclusions}
The zero-field heat capacity data shown in Fig.~\ref{Fig2} imply the absence of long-range magnetic order in SDO down to at least 0.39~K, in full agreement with our PND data~\cite{Petrenko_unpublished}, which show only broad diffuse scattering peaks down to 20~mK.
The broad peak in $C(T)$ at 0.77~K is very likely to be associated with the formation of extended short-range order, as it seems to coincide with the broad maxima in the $\chi(T)$ curves, below which a small difference between zero-field-cooled and field-cooled susceptibility data has been reported~\cite{Hayes_2012}.
The observed temperature dependence of heat capacity $C(T) \sim T^2$ is not unusual in frustrated magnetism and has been seen previously, for example, in some Gd pyrochlores~\cite{Gd_pyros}.
The magnetic entropy recovered by $T=5$~K is nearly 2R$\ln(2)$, which suggests that at least at low temperature SDO may be treated as an effective $s=1/2$ system.
This behaviour is expected and commonly observed for Dy magnetic moments in the presence of strong Ising-like magnetic anisotropy, for example in dysprosium pyrogermanate $\rm Dy_2Ge_2O_7$ \cite{Ke_2008} and dysprosium pyrochlore $\rm Dy_2Ti_2O_7$ \cite{Dy_pyros}.
In the case of SDO, high-temperature susceptibility data suggest that the $b$ axis is the easy direction for magnetisation~\cite{Hayes_2012}.
According to the inelastic neutron scattering data~\cite{CFE_Kenzelman}, the first well-defined crystal-field level for SDO is located at~4~meV (although a less pronounced peak at~2~meV, which is partially masked by the quasi-elastic signal, cannot be ruled out).
This observation implies that above approximately 20~K, SDO will no longer behave as an effective spin-1/2 system.
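As a consistency check (our arithmetic, not taken from the source, assuming SDO denotes $\rm SrDy_2O_4$ with two Dy$^{3+}$ ions per formula unit, each contributing $R\ln 2$ in the effective spin-1/2 picture), the expected saturation value of the magnetic entropy is

```latex
% Expected magnetic entropy for two effective spin-1/2 Dy ions per formula unit
% (assumption: two Dy^{3+} per formula unit, each contributing R ln 2):
\[
S_{\mathrm{mag}} = 2R\ln 2
\approx 2 \times 8.314 \times 0.693~\mathrm{J\,mol^{-1}\,K^{-1}}
\approx 11.5~\mathrm{J\,mol^{-1}\,K^{-1}},
\]
```

which is the value against which the entropy recovered by $T=5$~K is compared.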
From the data presented here (as well as from the previous magnetisation measurements~\cite{Hayes_2012}) it is obvious that
SDO behaves rather anisotropically in an applied field.
For $H \parallel [010]$, the field dependence of heat capacity measured at the base temperature (see Fig.~\ref{Fig3}) returns a small, but still well-defined peak at 2~kOe, as well as a much sharper and more pronounced double peak around 20~kOe.
Both the 2 and 20~kOe peaks correspond to sharp increases in the magnetisation (and therefore to peaks in $dM/dH$) shown in Fig.~5 of ref.~\cite{Hayes_2012}.
As the value of the magnetisation does not change significantly between these peaks and amounts to approximately a third of the total magnetisation for this direction of the applied field, it has been suggested~\cite{Hayes_2012} that the observed magnetisation plateau corresponds to the appearance of a collinear up-up-down ($uud$) structure, in which two thirds of magnetic moments are aligned along the field while the remaining third are aligned in the opposite direction.
The observed field dependence of the specific heat adds weight to this conjecture, with the local maximum in $C(T,H)$ defining a region of stability of the collinear phase (see Fig.~\ref{Fig4}), which at the lowest temperature seems to extend from 2 to 19~kOe and to persist up to almost 1.5~K in a field of 10~kOe.
At the lowest temperature the higher-field part of the phase diagram is separated from the lower-field part by a double phase transition, clearly seen in the $C(H)$ curves.
Remarkably, there is no indication in the magnetisation data~\cite{Hayes_2012} that the higher-field transition is actually split into two, or that the separation between the two components increases with increasing temperature.
Having observed such a splitting in the heat capacity data, we checked that it is not caused by a slight misalignment of the sample or by the presence of several crystallographic domains: two independently prepared and aligned samples of SDO return practically identical sets of specific heat data.
The sharp peaks in the $C(H)$ curves indicate multiple magnetic field-induced transitions in SDO for $H \parallel [010]$.
It is, however, impossible from the bulk-property measurements alone to state whether or not any of the field-induced phases are long-range in nature.
Careful neutron diffraction experiments are required to answer this question~\cite{further_neutrons}.
For $H \parallel [010]$, the field dependence of heat capacity differs significantly from the $H \parallel [001]$ case in that the latter does not show any signs of a field-induced phase transition.
A broad maximum in the lower-temperature $C(H)$ curves shown in Fig.~\ref{Fig5} corresponds to a similarly broad maximum in the $dM/dH$ curves for this direction of an applied field~\cite{Hayes_2012}.
In order to establish the reasons for such a dramatic difference in the magnetic behaviour of SDO for two different directions of an applied field, the crystal-field parameters must be established.
This exercise is far from trivial, as there are 8 Dy$^{3+}$ ions on the two distinct crystallographic sites in the unit cell (see Fig.~\ref{Fig1}).
The symmetry is rather low, so the number of crystal-field levels is expected to be large; perhaps even more importantly, the positions of the levels at lower temperature can be influenced by the development of short-range magnetic order.
The base temperature of the $^3$He cryostat used in our experiment has limited the lowest sample temperature to approximately 0.38~K.
In the context of quantum-critical transitions, it would be very interesting to extend the $C(H)$ measurements reported here to lower temperatures, particularly for $H \parallel [010]$ where the difference between the two field-induced transitions around 20~kOe seems to be decreasing with decreasing temperature.
However, this could prove to be experimentally challenging, as below 0.3~K (and extending down to at least 50~mK) there is a significant increase in the heat capacity of SDO~\cite{Hayes_2013}.
The origin of this upturn of the $C(T)$ curve below 0.3~K is presently unknown (it could potentially be caused by the hyperfine interactions); it results in a very rapid increase in $\tau$, the time constant of the measurements using the relaxation technique, even for very small samples, making the measurement times prohibitively long.
Another possible extension of the work presented here would be to measure the heat capacity of SDO for $H \parallel [100]$.
Given the difficulties associated with unfavourable sample geometry and also the featureless magnetisation curve for this direction of the applied field~\cite{Hayes_2012}, this research avenue does not seem to be particularly promising.
To summarise, we have investigated the low-temperature behaviour of \sdo\ in an applied magnetic field by measuring its specific heat for $H \parallel [010]$ and $H \parallel [001]$.
The collected data indicate that (i) in zero field, the material remains disordered down to 0.39~K; (ii) for $H \parallel [010]$, a sequence of magnetically ordered phases is induced; (iii) for $H \parallel [001]$, no transition to an ordered state occurs.
The results call for further neutron scattering experiments to clarify the nature of the field-induced phases of this geometrically frustrated compound.
\ack The authors acknowledge financial support from the EPSRC, UK under the grant EP/E011802/1.
Some of the equipment used in this research was obtained through the Science City Advanced Materials project: Creating and Characterising Next Generation Advanced Materials, with support from Advantage West Midlands (AWM) and was part funded by the European Regional Development Fund (ERDF).
\section*{References}
\section{Introduction}
\label{S:intro}
Local smoothing for the linear Schr\"odinger equation has a long and
rich history. First observed by Kato \cite{Kato} for the KdV equation
and later studied by Constantin-Saut \cite{ConSau}, Sj\"olin
\cite{Sjolin}, Vega \cite{Vega}, and Kato-Yajima \cite{KaYa-smooth} for the Schr\"odinger equation, the
local smoothing estimate expresses that, on average in time and
locally in space, solutions to a linear homogeneous dispersive
equation gain some regularity compared to the initial data. Since
dispersive equations are time-reversible, the propagator at time $t$
preserves the initial energy, but local smoothing shows there is
greater regularity if we also integrate in time.
For the linear Schr\"odinger equation in ${\mathbb R}^n$, it is well known
(see, for example, \cite{Tao-book})
that for any $\epsilon>0$, there exists $C>0$ such that one has the estimate
\[
\int_0^T \| \left\langle x \right\rangle^{-1/2 - \epsilon} e^{it \Delta} u_0
\|_{H^{1/2}}^2 dt \leq C \| u_0 \|_{L^2}^2.
\]
There are several ways to prove this estimate, the simplest of which
is a positive commutator method (see below), although estimates on the
cutoff free resolvent also imply this estimate (this argument seems to
have its origin in the work of Kato \cite{Ka-wos};
see also, for example,
\cite{Bur-sm,Chr-disp-1,Chr-sch2}).
However, for the Schr\"odinger equation on a non-compact manifold, the
situation is not so simple. A remarkable result of Doi \cite{Doi}
states that one has the sharp $H^{1/2}$ local smoothing effect on an
asymptotically Euclidean manifold if and
only if the geodesic flow is non-trapping. That is, the presence of
geodesics which do not ``escape to infinity'' causes a loss in how much
regularity the solution can gain. This has been generalized to
boundary value problems in \cite{Bur-sm}. In the case of sufficiently
``thin'' hyperbolic trapped sets it has been demonstrated
in \cite{Bur-sm,Chr-disp-1,Chr-sch2,Dat-sm} that one has only a
``trivial'' loss of $\epsilon>0$ derivatives.\footnote{In fact, with
some care in definitions, the loss is only logarithmic.} These examples include
Ikawa's examples \cite{Ika-2-wd,Ika-3-wd,Bur-sm}, a single periodic
hyperbolic geodesic (with or without boundary reflections)
\cite{Chr-disp-1}, very general fractal trapped sets without
boundary \cite{NoZw-res,Chr-sch2,Dat-sm}, and normally hyperbolic
trapped sets \cite{WuZw10}. That is, in all of these cases,
the authors prove that for any $\epsilon>0$, there exists a constant
$C>0$ such that
\[
\int_0^T \| \left\langle x \right\rangle^{-1/2 - \epsilon} e^{it \Delta} u_0
\|_{H^{1/2-\epsilon}}^2 dt \leq C \| u_0 \|_{L^2}^2.
\]
In this case, we call the loss due to trapping ``trivial''.
By contrast, if a manifold admits an elliptic trapped set, the
existence of resonances converging exponentially to the real axis and
of infinite-order quasimodes prevents polynomial gain in regularity.
The
purpose of this note is to exhibit a class of manifolds with only one
periodic geodesic which is weakly hyperbolic, and prove a (sharp)
local smoothing effect with loss that lies somewhere between the
complete loss of an elliptic trapped set and the trivial loss of a
strictly hyperbolic trapped set.
We consider the manifold $X = {\mathbb R}_x \times {\mathbb R}_\theta / 2 \pi
{\mathbb Z}$, equipped with a metric of the form
\[
ds^2 = d x^2 + A^2(x) d \theta^2,
\]
where $A \in {\mathcal C}^\infty$ is a smooth function, $A \geq \epsilon>0.$
From this metric, we get the volume form
\[
d \text{Vol} = A(x) dx d \theta,
\]
and the Laplace-Beltrami operator acting on $0$-forms
\[
\Delta f = (\partial_x^2 + A^{-2} \partial_\theta^2 + A^{-1}
A' \partial_x) f.
\]
We observe that we can conjugate
$\Delta$ by an isometry of metric spaces and separate variables so
that spectral analysis of $\Delta$ is equivalent to a one-variable
semiclassical problem with potential. That is, let $T : L^2(X, d
\text{Vol}) \to L^2(X, dx d \theta)$ be the isometry given by
\[
Tu(x, \theta) = A^{1/2}(x) u(x, \theta).
\]
Then $\widetilde{\Delta} = T \Delta T^{-1}$ is essentially self-adjoint on $L^2 (
X, dx d \theta)$ with mild assumptions on $A$ (for example in this
paper $X$ has two ends which are short range perturbations of ${\mathbb R}^2$). A simple calculation gives
\[
-\widetilde{\Delta} f = (- \partial_x^2 - A^{-2}(x) \partial_\theta^2 + V_1(x) )
f,
\]
where the potential
\[
V_1(x) = \frac{1}{2} A'' A^{-1} - \frac{1}{4} (A')^2 A^{-2}.
\]
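For the reader's convenience, here is a sketch of the routine computation behind $V_1$ (our check, using the conjugation $\widetilde{\Delta} = T \Delta T^{-1}$, $Tu = A^{1/2}u$, defined above). Only the $x$-part of $\Delta$ is affected; writing $w = A^{-1/2}$, so that $w' = -\tfrac12 A' A^{-3/2}$ and $w'' = -\tfrac12 A'' A^{-3/2} + \tfrac34 (A')^2 A^{-5/2}$, one finds

```latex
% Conjugating the x-part of \Delta by Tu = A^{1/2} u, with w = A^{-1/2}:
\begin{align*}
A^{1/2}\bigl(\partial_x^2 + A^{-1}A'\partial_x\bigr)\bigl(A^{-1/2} g\bigr)
&= g'' + \bigl(-A'A^{-1} + A'A^{-1}\bigr) g'
 + \Bigl(-\tfrac12 A''A^{-1} + \tfrac34 (A')^2A^{-2}
 - \tfrac12 (A')^2A^{-2}\Bigr) g \\
&= g'' - \Bigl(\tfrac12 A''A^{-1} - \tfrac14 (A')^2A^{-2}\Bigr) g
 = g'' - V_1\, g,
\end{align*}
```

recovering the stated formula $V_1 = \tfrac12 A'' A^{-1} - \tfrac14 (A')^2 A^{-2}$.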
If we now separate variables and write $\psi(x, \theta) = \sum_k
\varphi_k(x) e^{ik \theta}$, we see that
\[
(-\widetilde{\Delta}- \lambda^2) \psi = \sum_k e^{ik \theta} (P_k -\lambda^2)\varphi_k(x),
\]
where
\[
(P_k -\lambda^2) \varphi_k(x) = (-\frac{d^2}{dx^2} + k^2 A^{-2}(x) + V_1(x) - \lambda^2)
\varphi_k(x).
\]
Setting $h = k^{-1}$, we have the semiclassical operator
\[
P(z,h) \varphi(x) = (-h^2 \frac{d^2}{dx^2} + V(x) -z) \varphi(x),
\]
where the potential is
\[
V(x) = A^{-2}(x) + h^2 V_1(x)
\]
and the spectral parameter is $z = h^2 \lambda^2$.
In this paper, we are primarily interested in the case $A(x) = (1 + x^{2m})^{1/2m}$, $m
\in {\mathbb Z}_+$. If $m \geq 2$, then $X$ is asymptotically Euclidean
(with two ends), and the subpotential $h^2 V_1(x)$ is seen to be
lower order in both the semiclassical and the scattering sense. If $m
= 1$, a trivial modification must be made to make the metric a
short-range perturbation, but we completely ignore this issue here. The
point is that for $m \geq 2$, the principal part of the potential $V(x)$
is $A^{-2}(x)$ which has a degenerate maximum at $x = 0$. The
corresponding periodic geodesic $\gamma \subset X$ is {\it weakly} hyperbolic in
the sense that it is unstable and isolated, but degenerate (see Figure
\ref{fig:fig1}).
\begin{figure}
\hfill
\centerline{\input{fig1}}
\caption{\label{fig:fig1} A piece of the manifold $X$ and the periodic
geodesic $\gamma$. The Gaussian curvature $K = -A''/A =
-(2m-1)x^{2m-2} (1 + x^{2m})^{-2}$ vanishes to order $2m-2$ at $x =
0$ and is asymptotically flat as $|x| \to \infty$.}
\hfill
\end{figure}
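To make the degeneracy explicit (a short expansion of our own, using the stated choice of $A$): near $x = 0$ the principal part of the potential satisfies

```latex
% Leading behaviour of the effective potential near the trapped geodesic x = 0:
\[
A^{-2}(x) = \bigl(1 + x^{2m}\bigr)^{-1/m}
          = 1 - \frac{x^{2m}}{m} + O\!\bigl(x^{4m}\bigr),
\]
```

so the maximum at $x = 0$ is non-degenerate (quadratic) only for $m = 1$; for $m \geq 2$ it is degenerate of order $2m$, which is the source of the $m$-dependent losses in Theorems \ref{T:smoothing} and \ref{T:resolvent}.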
Our main result is the following theorem, which says that for every $m \geq 2$, there is still some
local smoothing, but with a polynomial loss depending on $m$.
\begin{theorem}[Local Smoothing]
\label{T:smoothing}
Suppose $X$ is as above for $m \geq 2$, and assume $u$ solves
\[
\begin{cases} (D_t -\Delta ) u = 0 \text{ in } {\mathbb R} \times X , \\
u|_{t=0} = u_0 \in H^{s}
\end{cases}
\]
for some $s \geq m/(m+1)$.
Then for any $T<\infty$, there exists a constant
$C>0$ such that
\[
\int_0^T \| \left\langle x \right\rangle^{-3/2} u \|_{H^1(X)}^2 \, dt \leq C (\| \left\langle D_\theta
\right\rangle^{m/(m+1)} u_0 \|_{L^2}^2 + \| \left\langle D_x \right\rangle^{1/2} u_0 \|_{L^2}^2).
\]
\end{theorem}
\begin{remark}
Observe that there is no polynomial local smoothing effect
in the limit $m \to \infty$. In Theorem \ref{T:sharp} below, we show
Theorem \ref{T:smoothing} is sharp, and that in fact the estimate is
saturated on a weak semiclassical time scale.
\end{remark}
We are also able to prove, using the same techniques, a polynomial
bound on the resolvent of the Laplacian in the same geometric
setting. We now assume for simplicity that our surface of
revolution is Euclidean at infinity, i.e.\ that $A(x) =
x$ for $\abs{x}\gg 0.$ (More generally, we could merely require
dilation analyticity at infinity; this would allow us to
include asymptotically conic spaces as treated in \cite{WZ}.)
We denote by $R(\lambda)$
the resolvent on $X$
\[
R(\lambda) = (-\Delta_g - \lambda^2)^{-1},
\]
where it exists. If we take $\,\mathrm{Im}\, \lambda < 0$ as our physical sheet,
then, since $X$ is Euclidean near infinity, there is a meromorphic
continuation of $\chi R(\lambda) \chi$ to the logarithmic covering
space, for any $\chi \in {\mathcal C}^\infty_c(X)$ (see, e.g., \cite{SjZw}).
In particular, by choosing an
appropriate branch cut, $\chi R(\lambda) \chi$ continues
meromorphically to $\{ \lambda \in {\mathbb R}, \lambda \gg 0 \}$.
\begin{theorem}
\label{T:resolvent}
Fix $m \geq 2$. For any $\chi \in {\mathcal C}^\infty_c(X)$, there exists a constant
$C= C_{m, \chi} >0$ such that
for $\lambda \gg 0$,
\[
\| \chi R(\lambda-i0) \chi \|_{L^2 \to L^2} \leq C \lambda^{-2/(m+1)}.
\]
Moreover, this estimate is sharp, in the sense that no better
polynomial rate of decay holds true.
\end{theorem}
\begin{remark}
The estimate in this theorem represents a loss over the non-trapping
case, when generally $\lambda^{-1}$ order bounds are known. In the
non-degenerate hyperbolic trapping case ($m=1$), most known estimates
are of the order $\lambda^{-1} \log (\lambda)$, and in the elliptic
trapping cases, generally one expects at best exponential bounds.
Hence Theorem \ref{T:resolvent} represents a family of estimates with
a sharp polynomial loss. To the best of our knowledge, no other such
examples are known.
\end{remark}
\subsection*{Acknowledgements}
The authors would like to thank J.~Marzuola, R.~Melrose, J.~Metcalfe,
and M.~Zworski for helpful conversations. We are especially grateful
to N.~Burq for suggesting the model problem treated here as one which
might exhibit a finite loss of local smoothing. We would also like to
thank Kiril Datchev for suggesting we write up the resolvent estimate
with loss. Finally, we would like to thank the anonymous referee
whose careful reading of this manuscript has greatly helped improve
the presentation.
\section{Positive commutators}
\subsection{The smoothing estimate on Euclidean space}
In this section we write out the standard positive commutator proof of local smoothing for
the Schr\"odinger equation in polar coordinates. We then try to mimic
the proof in the case of degenerate hyperbolic orbits ($m \geq 2$ above)
to see where the proof fails.
In polar coordinates, the homogeneous Schr\"odinger equation on
$\mathbb{R}_t \times \mathbb{R}^2$ is
\[
\begin{cases}
(D_t - \partial_r^2 - r^{-1} \partial_r - r^{-2} \partial_\theta^2)u =
0, \\
u|_{t = 0} = u_0;
\end{cases}
\]
we will of course write
$$
\Delta = {\partial}_r^2 + r^{-1} {\partial}_r +r^{-2} {\partial}_\theta^2.
$$
We recall that in polar coordinates the radial, or scaling, vector
field is $x
\cdot \partial_x = r \partial_r$. By scaling, we immediately compute
$$
[r{\partial}_r, \Delta] =2 \Delta;
$$
however, as $r {\partial}_r$ is not a bounded map between Sobolev spaces, we
change the weight and
employ the commutant $B = r \left\langle r
\right\rangle^{-1} \partial_r$.
The function $a(r) = r \left\langle r \right\rangle^{-1}$ is
non-negative and bounded, and satisfies $a'(r) = \left\langle r \right\rangle^{-3}$.
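For completeness, the stated derivative identity is a one-line check (our verification):

```latex
% With a(r) = r (1+r^2)^{-1/2}:
\[
a'(r) = (1+r^2)^{-1/2} - r^2 (1+r^2)^{-3/2}
      = \frac{(1+r^2) - r^2}{(1+r^2)^{3/2}}
      = \left\langle r \right\rangle^{-3}.
\]
```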
Thus, we compute
\begin{align}
[B, \Delta] & =
2 a' \partial_r^2 + (a''+ a' r^{-1} + a r^{-2}) \partial_r + 2 a
r^{-3} \partial_\theta^2 \label{foobar}\\
& = 2 \left\langle r \right\rangle^{-3} \partial_r^2 + 2 \left\langle r \right\rangle^{-1}
r^{-2} \partial_\theta^2+O(r^{-1} \ang{r}^{-1}) {\partial}_r. \notag
\end{align}
Using the Schr\"odinger equation, we write
\begin{align*}
0 = & 2 i \,\mathrm{Im}\, \int_0^T \left\langle B (D_t -\Delta)u, u \right\rangle dt \\
= & \int_0^T \left\langle B (D_t - \partial_r^2 -
r^{-1} \partial_r - r^{-2} \partial_\theta^2)u, u \right\rangle \\
& - \left\langle u, B (D_t - \partial_r^2 -
r^{-1} \partial_r - r^{-2} \partial_\theta^2)u \right\rangle dt \\
= & \int_0^T \left\langle [B, ( - \partial_r^2 -
r^{-1} \partial_r - r^{-2} \partial_\theta^2)]u, u \right\rangle dt + i
\left. \left\langle
B u , u \right\rangle \right|_0^T.
\end{align*}
The last term is bounded using energy estimates by
\[
\left| \left. \left\langle
B u , u \right\rangle \right|_0^T \right| \leq \| u_0 \|_{H^{1/2}}^2.
\]
Rearranging, we thus obtain
$$
\int_0^T \left\langle [B, \Delta]u, u \right\rangle dt\leq C_T \norm{u_0}_{H^{1/2}}^2.
$$
Employing \eqref{foobar} and integrating by parts thus yields
$$
\int_0^T \norm{\ang{r}^{-3/2}{\partial}_r u}^2 + \norm{\ang{r}^{-1/2} r^{-1}{\partial}_\theta
u}^2\, dt
\leq C_T \norm{u_0}_{H^{1/2}}^2,
$$
where we have absorbed on the right the term involving $\int_0^T
\ang{{\partial}_r u,u} \, dt$ as well as the similar error terms from
commuting ${\partial}_r$ with a multiplier. This is the local smoothing
estimate on the manifold ${\mathbb R}^2$.
\subsection{Degenerate hyperbolic trapping}
In this section, we prove our main local smoothing estimate.
Let us begin by reproducing the positive commutator computation in the
previous section for the degenerate case. Let $A(x) = (1 +
x^{2m})^{1/2m}$, the metric $ds^2 = dx^2 + A^2 d \theta^2$ as before,
and conjugate the Laplacian to Euclidean space:
\[
-\widetilde{\Delta} f = (- \partial_x^2 - A^{-2}(x) \partial_\theta^2 + V_1(x) )
f,
\]
where the potential
\[
V_1(x) = \frac{1}{2} A'' A^{-1} - \frac{1}{4} (A')^2 A^{-2}.
\]
The following proposition is the statement of local smoothing for the
conjugated equation, and evidently implies Theorem \ref{T:smoothing}
by conjugating back.
\begin{proposition}
\label{P:smoothing}
Suppose $m \geq 2$ and $u$ solves
\begin{equation}
\label{E:tDelta-Sch}
\begin{cases} (D_t -\widetilde{\Delta} ) u = 0, \\
u(0,x, \theta) = u_0.
\end{cases}
\end{equation}
Then for any $T<\infty$ there exists a constant
$C>0$ such that
\begin{align*}
\int_0^T & ( \| \left\langle x \right\rangle^{-1} \partial_x u \|_{L^2}^2 + \| \left\langle x
\right\rangle^{-3/2} \partial_\theta u \|_{L^2}^2 ) \, dt \\
& \leq C (\| \left\langle D_\theta
\right\rangle^{m/(m+1) } u_0 \|_{L^2}^2 + \| \left\langle D_x \right\rangle^{1/2} u_0 \|_{L^2}^2).
\end{align*}
\end{proposition}
\subsection{Proof of Proposition \ref{P:smoothing}}
Let us summarize briefly the strategy of the proof. Using a positive
commutator argument similar to the previous section, we prove local
smoothing except at the periodic orbit $\gamma = \{x = 0 \}$.
Moreover, solutions to \eqref{E:tDelta-Sch} exhibit perfect local
smoothing in the $x$ direction and only lose smoothing in the
directions tangential to $\gamma$ (that is, only in the $\theta$
direction). Thus it suffices to prove local smoothing with a loss for
$\theta$ derivatives, in a neighbourhood of $x = 0$. We separate
variables in the $\theta$ direction (Fourier series decomposition) and
prove estimates uniform in each Fourier mode. To do this, we further
decompose, say, the $k$th Fourier mode into a low-frequency part where $|k| \leq
|D_x|$ and a high-frequency part where $|D_x| \leq |k|$. The low
frequency part is estimated using the positive commutator technique
modulo a term which is localized to high-frequencies, so it suffices
to estimate a solution cut off to high frequencies. For this, we
introduce a semiclassical rescaling, and
reduce the estimate to a cutoff semiclassical resolvent estimate,
which implies local smoothing via \cite[Theorem 1]{Chr-sch2}.
\noindent {\bf Step 1: Positive commutators and the estimate away from $x=0$.}
If $B = \arctan(x) \partial_x$, we have
\[
[\widetilde{\Delta}, B] = 2 \left\langle x \right\rangle^{-2} \partial_x^2 - 2 x \left\langle x
\right\rangle^{-4} \partial_x + 2 A' A^{-3} \arctan(x) \partial_\theta^2 + V_1'
\arctan(x).
\]
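Each term can be checked from elementary identities (our verification): with $a(x) = \arctan x$, so that $a' = \left\langle x \right\rangle^{-2}$ and $a'' = -2x \left\langle x \right\rangle^{-4}$,

```latex
% For B = a(x)\partial_x with a = \arctan x, and
% \widetilde{\Delta} = \partial_x^2 + A^{-2}\partial_\theta^2 - V_1:
\begin{align*}
[\partial_x^2,\, a\partial_x] &= 2a'\partial_x^2 + a''\partial_x
 = 2\left\langle x\right\rangle^{-2}\partial_x^2
 - 2x\left\langle x\right\rangle^{-4}\partial_x,\\
[A^{-2}\partial_\theta^2,\, a\partial_x] &= -a\,(A^{-2})'\,\partial_\theta^2
 = 2 A' A^{-3}\, a\, \partial_\theta^2,\\
[-V_1,\, a\partial_x] &= a\, V_1',
\end{align*}
```

and summing the three lines reproduces the displayed formula for $[\widetilde{\Delta}, B]$.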
Now
\[
iB- (iB)^* = i[\arctan(x) , \partial_x]
\]
is $L^2$ bounded, so
\begin{align*}
0 =& \int_0^T \int u \overline{ ( \arctan(x) D_x (D_t - \widetilde{\Delta}) u)} dx
d \theta \, dt \\
=& \int_0^T \int \arctan(x) D_x u \overline{ ( (D_t - \widetilde{\Delta}) u)} dx
d \theta \, dt
\\
& + \int_0^T \int (iB-(iB)^*) u \overline{ ( (D_t - \widetilde{\Delta}) u)} dx
d \theta \, dt \\
= & i \left\langle \arctan(x) D_x u , u \right\rangle|_0^T + \int_0^T \left\langle (D_t -
\widetilde{\Delta}) i^{-1} B u, u \right\rangle \, dt.
\end{align*}
Hence, using the notation $P = D_t - \widetilde{\Delta}$,
\begin{align*}
0 & = 2 i \,\mathrm{Im}\, \int_0^T \left\langle i^{-1} B P u, u \right\rangle \, dt \\
& = \int_0^T \left\langle i^{-1} B P u, u \right\rangle \, dt - \int_0^T \left\langle u, i^{-1} B
P u \right\rangle \, dt \\
& = \int_0^T \left\langle [i^{-1}B,P] u, u \right\rangle \, dt - i \left\langle \arctan(x) D_x u ,
u \right\rangle|_0^T ,
\end{align*}
or
\[
\int_0^T \left\langle [B,-\widetilde{\Delta}] u, u \right\rangle \, dt = - \left\langle \arctan(x) D_x u ,
u \right\rangle|_0^T,
\]
since $B$ does not depend on $t$. By writing $\partial_x = \left\langle D_x
\right\rangle^{1/2} \left\langle D_x \right\rangle^{-1/2} \partial_x$, and using energy
estimates, we can control the right hand side by $\| u_0
\|_{H^{1/2}}^2$. The left hand side is computed as above:
\begin{align*}
\int_0^T & \left\langle [B,-\widetilde{\Delta}] u, u \right\rangle \, dt \\
= & \int_0^T \Big\langle ( 2 \left\langle x \right\rangle^{-2} \partial_x^2 - 2 x \left\langle x
\right\rangle^{-4} \partial_x + 2 A' A^{-3} \arctan(x) \partial_\theta^2 \\
& + V_1'
\arctan(x)) u, u \Big\rangle \, dt.
\end{align*}
Using the energy estimates,
\begin{align}
\Big| \int_0^T & \left\langle ( - 2 x \left\langle x
\right\rangle^{-4} \partial_x + V_1'
\arctan(x)) u, u \right\rangle \, dt \Big| \leq C T\sup_{0 \leq t \leq T } \|
u(t) \|_{H^{1/2}}^2 \notag \\
& \leq C_T \| u_0 \|_{H^{1/2}}^2. \label{E:energy-RHS}
\end{align}
Integrating by parts in $x$ and $\theta$ and adding
the lower order terms into the right hand side as in
\eqref{E:energy-RHS} yields the estimate
\begin{equation*}
\int_0^T (\| \left\langle x \right\rangle ^{-1} \partial_x u \|_{L^2}^2 + \|\sqrt{A' A^{-3} \arctan(x) } \partial_\theta u \|_{L^2}^2 ) \, dt \leq C \| u_0
\|_{H^{1/2}}^2.
\end{equation*}
We observe that
\[
A' A^{-3} \arctan(x) = \arctan(x) x^{2m-1} (1 + x^{2m})^{-1/m -1}
\]
is even, non-negative, bounded below by $C|x|^{2m}$ for $|x| \leq 1$ and
$C'|x|^{-3}$ for $|x| \geq 1$. Hence
\[
|x|^{2m} \left\langle x \right\rangle^{-2m-3} \leq CA' A^{-3} \arctan(x) ,
\]
and hence,
\[
\left\langle |x|^{2m} \left\langle x \right\rangle^{-2m-3} \partial_\theta u, \partial_\theta u
\right\rangle \leq C \left\langle A' A^{-3} \arctan(x) \partial_\theta
u, \partial_\theta u \right\rangle
\]
plus terms which can be absorbed into the energy,
so up to lower order terms,
\[
\| |x|^{m} \left\langle x \right\rangle^{-m-3/2} \partial_\theta u \| \leq C \|
\sqrt{A' A^{-3} \arctan(x) } \partial_\theta u \|.
\]
Hence we have the estimate
\begin{equation}
\label{E:est-away-0}
\int_0^T (\| \left\langle x \right\rangle ^{-1} \partial_x u \|_{L^2}^2 + \| |x|^{m}
\left\langle x \right\rangle^{-m-3/2} \partial_\theta u \|_{L^2}^2 ) \, dt \leq C \| u_0
\|_{H^{1/2}}^2.
\end{equation}
\noindent {\bf Step 2: Low frequency estimate.}
The estimate \eqref{E:est-away-0} shows we have perfect local
smoothing away from the periodic geodesic at $x=0$, and moreover shows
we have perfect local smoothing in the $x$ direction. That is, the
only loss is in the direction tangential to the periodic geodesic,
which is expected since a point in $T^*X$ which is transversal to the
periodic geodesic will flow out to infinity, and only the tangential
directions stay localized for long times. In this subsection we show
how to get an estimate in the tangential directions with a loss.
Let us decompose
\[
u(t,x,\theta) = \sum_k e^{ik \theta} u_k(t, x),
\]
and
\[
u_0(x, \theta) = \sum_k e^{ik \theta} u_{0,k}(x).
\]
By orthogonality it suffices to prove Proposition \ref{P:smoothing}
for each mode.
Observe the zero mode $u_0(t,x)$ satisfies
\[
\begin{cases}
(D_t -\widetilde{\Delta}) u_0(t,x) = 0, \\
u_0(0,x) = u_{0,0}(x),
\end{cases}
\]
and $\partial_\theta u_0(t,x) = 0$, so that, from Step 1, we have
\begin{align*}
\int_0^T & ( \| \left\langle x \right\rangle^{-1} \partial_x u_0 \|_{L^2}^2 + \| \left\langle x
\right\rangle^{-3/2} \partial_\theta u_0 \|_{L^2}^2 ) \, dt = \int_0^T \| \left\langle
x \right\rangle^{-1} \partial_x u_0 \|_{L^2}^2 \, dt \\
& \leq \int_0^T (\| \left\langle x \right\rangle ^{-1} \partial_x u_0 \|_{L^2}^2 + \| |x|^{m}
\left\langle x \right\rangle^{-m-3/2} \partial_\theta u_0 \|_{L^2}^2 ) \, dt \\
& \leq C \| u_{0,0}
\|_{H^{1/2}}^2\\
& \leq C (\| \left\langle D_\theta
\right\rangle^{m/(m+1) } u_0 \|_{L^2}^2 + \| \left\langle D_x \right\rangle^{1/2} u_0 \|_{L^2}^2).
\end{align*}
To prove Proposition \ref{P:smoothing} for the nonzero modes, we
show
\[
\int_0^T \| \chi(x) k u_k \|_{L^2({\mathbb R})}^2 \, dt \leq C (\| \left\langle k \right\rangle^{m/(m+1)}
u_{0,k} \|_{L^2}^2 + \| u_{0,k} \|_{H^{1/2}}^2 )
\]
for some $\chi \in {\mathcal C}^\infty_c( {\mathbb R})$ with $\chi (x) \equiv 1$ near $x =
0$, for $| k | \geq 1$.
For simplicity in exposition, let us drop the $k$ notation for $u$ and
$u_0$, and just observe that now the time-dependent Schr\"odinger
operator depends on $k$:
\[
D_t + P_k = D_t -\widetilde{\Delta} = D_t + D_x^2 + A^{-2}(x) k^2 + V_1(x) .
\]
The idea is to use $k^{-1}$ as a semiclassical parameter, and
decompose $u$ into a part where $|k| \leq |D_x|$ and a part where $k$
is not controlled by $D_x$. It turns out that then the part not
controlled by $D_x$ can be handled with a second commutator argument
plus a $T T^*$ argument (see Step 3 below
and \S \ref{SS:ml-res-est-pf-ssect}).
Let $\psi \in {\mathcal C}^\infty_c( {\mathbb R})$ be an even function satisfying $\psi (r)\equiv 1$ for $| r |
\leq 1$ and $\psi(r) \equiv 0$ for $| r | \geq 2$. Let
\[
u = u_{\text{hi}} + u_{\text{lo}},
\]
where
\[
u_{\text{hi}} = \psi(D_x/k) u, \,\,\, u_{\text{lo}} = (1 - \psi(D_x/k)) u.
\]
Observe $u_{\text{lo}}$ satisfies the equation
\[
(D_t + P_k )u_{\text{lo}} = -[P_k, \psi(D_x/k)] u = k \left\langle x \right\rangle^{-3} L \tilde{\psi}(D_x/k) u,
\]
where $L$ is $L^2$ bounded and $\tilde{\psi} \in {\mathcal C}^\infty_c$ is $1$ on $\mathrm{supp}\,
\psi$. If we try to apply the positive commutator argument from the previous
step to $u_{\text{lo}}$, we now have
\begin{align*}
2 i \,\mathrm{Im}\, &\int_0^T \left\langle i^{-1} B k \left\langle x \right\rangle^{-3} L \tilde{\psi}(D_x/k) u,
u_{\text{lo}} \right\rangle \, dt \\
= & 2 i
\,\mathrm{Im}\, \int_0^T \left\langle i^{-1} B (D_t + P_k) u_{\text{lo}}, u_{\text{lo}} \right\rangle \, dt \\
= & \int_0^T \left\langle i^{-1} B (D_t + P_k )u_{\text{lo}}, u_{\text{lo}} \right\rangle \, dt - \int_0^T \left\langle u_{\text{lo}}, i^{-1} B
(D_t + P_k) u_{\text{lo}} \right\rangle \, dt \\
& - \int_0^T \int (iB-(iB)^*) u_{\text{lo}} \overline{ (D_t + P_k) u_{\text{lo}} } dx
d \theta \, dt \\
= & \int_0^T \left\langle [i^{-1}B,P_k] u, u \right\rangle \, dt - i \left\langle \arctan(x) D_x u_{\text{lo}} ,
u_{\text{lo}} \right\rangle|_0^T \\
& - i \int_0^T \left\langle \left\langle x \right\rangle^{-2} u_{\text{lo}}, k \left\langle x
\right\rangle^{-3} L \tilde{\psi}(D_x/k) u \right\rangle \, dt,
\end{align*}
or
\begin{align*}
& \left| \int_0^T \left\langle [i^{-1}B,P_k] u, u \right\rangle \, dt \right| \\ &
\quad \leq C \Bigg(
\left| \int_0^T \left\langle i^{-1} B k \left\langle x \right\rangle^{-3} L \tilde{\psi}(D_x/k) u,
u_{\text{lo}} \right\rangle \, dt \right| \\
& \quad \quad + \left| \left\langle \arctan(x) D_x u_{\text{lo}} ,
u_{\text{lo}} \right\rangle|_0^T\right| + \left| \int_0^T \left\langle \left\langle x \right\rangle^{-2} u_{\text{lo}}, k \left\langle x
\right\rangle^{-3} L \tilde{\psi}(D_x/k) u \right\rangle \, dt \right| \Bigg) \\
& \quad \leq C \Big( \| k \left\langle x \right\rangle^{-3/2} \tilde{\psi}(D_x/k) u \|^2 + \| \left\langle
x \right\rangle^{-3/2} D_x u \|^2 + \| u_0 \|_{H^{1/2}}^2 \Big),
\end{align*}
where we have again used the energy estimate where appropriate.
The right hand side is now
controlled by
\eqref{E:est-away-0} except near $x=0$. What we have gained is the
cutoff in frequency $\tilde{\psi}(D_x/k)$. Hence, if we can show that for any
$\chi \in {\mathcal C}^\infty_c$ with $\chi \equiv 1$ near $x=0$,
\[
\int_0^T \| \chi k \tilde{\psi}(D_x/k) u \|_{L^2}^2\, dt \leq C \| k^{m/(m+1)} u_0 \|_{L^2}^2,
\]
then we can control the remaining term in the estimate on $u_{\text{lo}}$, as well
as obtain the estimate for $u_{\text{hi}}$.
\noindent {\bf Step 3: The high frequency estimate.}
Let us now try to estimate $u_{\text{hi}}$ near $x=0$, or more generally a
solution to $(D_t + P_k) u =0$ microlocalized near $(0,0)$.
For some $0 \leq r \leq 1/2$ to be determined, let $F(t)$ be defined by
\[
F(t) g = \chi(x) \psi(D_x/k) k^r e^{-itP_k} g,
\]
where $e^{-itP_k}$ is the free propagator. Our goal is to determine for
what values of $r$ we have a mapping $F: L^2_x \to L^2([0,T]) L^2_x$,
since then
\begin{equation}
\label{E:l-sm-r}
\| k^{1-r} F(t) u_0 \|_{L^2([0,T];L^2)} \leq C \| k^{1-r} u_0 \|_{L^2}
\end{equation}
is the desired local smoothing estimate.
We have such a mapping if and only if $F F^* : L^2 L^2 \to L^2 L^2$ is bounded.
We compute
\[
F F^* f(x,t) = \psi(D_x/k) \chi (x) k^{2r} \int_0^T e^{-i(t-s)P_k} \chi(x)
\psi(D_x/k) f(x,s)ds,
\]
and it suffices to estimate $\| FF^* f \|_{L^2 L^2} \leq C \| f
\|_{L^2 L^2}.$
We write $F F^* f(x,t) = \psi \chi (v_1 + v_2)$, where
\[
v_1 = k^{2r} \int_0^t e^{-i(t-s)P_k} \chi(x)
\psi(D_x/k) f(x,s)ds,
\]
and
\[
v_2 = k^{2r} \int_t^T e^{-i(t-s)P_k} \chi(x)
\psi(D_x/k) f(x,s)ds,
\]
so that
\[
(D_t + P_k)v_j = \pm i k^{2r} \chi \psi f,
\]
and it suffices to estimate
\[
\| \psi \chi v_j \|_{L^2 L^2} \leq C \| f \|_{L^2 L^2}.
\]
Since the Fourier transform in time is an $L^2$ isometry, it suffices
to estimate
\[
\|\psi \chi \hat{v}_j \|_{L^2 L^2} \leq C \| \hat{f} \|_{L^2 L^2},
\]
but this is the same as estimating
\[
\| \psi \chi k^{2r}(\tau \pm i 0+ P_k)^{-1} \chi \psi \|_{L^2_x \to L^2_x} \leq
C.
\]
Let us factor out the $k^2$ in $P_k$ to get the operator
\[
k^{-2r} (\tau \pm i 0+ P_k) = k^{2(1-r)}(-z \pm i 0+ k^{-2}D_x^2 + A^{-2}(x) + k^{-2}
V_1(x))
\]
for $-z = \tau k^{-2}$,
and if we let $h = k^{-1}$, we are left with the task of finding $r$
so that
\[
\| \psi(hD_x) \chi(x) (-z \pm i 0+ (hD_x)^2 + V)^{-1} \chi(x) \psi(hD_x)
\|_{L^2 \to L^2} \leq C h^{-2(1-r)},
\]
where $V = A^{-2}(x) + h^2 V_1(x)$. Let $$\widetilde{Q} = (hD_x)^2 + V-z.$$
We observe that the cutoff $\psi(hD_x)\chi(x)$ shows we only need to estimate
this for $z$ in a bounded interval near $z = 1$. Indeed, $\psi \chi$
cuts off to a neighbourhood of $(0,0)$, and $V(0) = 1$, so for $|
z-1|$ sufficiently large, we have elliptic regularity. The cutoff estimate
on $\widetilde{Q}$ is the content of the following Proposition, which
is proved in the next subsection.
\begin{proposition}
\label{P:ml-res-est}
Let $\varphi \in \Phi^0$ have wavefront set sufficiently close to
$(0,0)$. Then for each $\epsilon>0$ sufficiently small, there exists
a constant $C>0$ such that
\[
\| \varphi (\widetilde{Q} \pm i 0 )^{-1} \varphi \|_{L^2 \to L^2} \leq C h^{-2m/(m+1)}, \,\,\, z
\in [1-\epsilon, 1 + \epsilon].
\]
\end{proposition}
With Proposition \ref{P:ml-res-est} in hand, we observe
\[
\| \psi \chi k^{2r}(\tau \pm i 0 + P_k)^{-1} \chi \psi \|_{L^2_x \to L^2_x} \leq
C
\]
holds if
\[
k^{2(r-1)} = k^{-2m/(m+1)},
\]
or
\[
r = \frac{1}{m+1}.
\]
From \eqref{E:l-sm-r}, this implies Proposition \ref{P:smoothing} (see
also \cite[Theorem 1]{Chr-sch2}).
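Unwinding the exponents for the reader's convenience: with $r = 1/(m+1)$ we have
$k^{1-r} k^{r} = k$, so that \eqref{E:l-sm-r} reads
\[
\int_0^T \| \chi(x) \psi(D_x/k) k \, e^{-itP_k} u_0 \|_{L^2}^2 \, dt \leq C \| k^{m/(m+1)} u_0 \|_{L^2}^2 ,
\]
which (with $\tilde{\psi}$ in place of $\psi$) is precisely the high frequency
bound required at the end of Step 2.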
\subsection{Proof of Proposition \ref{P:ml-res-est}}
\label{SS:ml-res-est-pf-ssect}
The technique of proof is to prove an invertibility estimate
microlocally near $(0,0)$ in Lemma \ref{L:ml-inv} below. From this, one easily obtains a resolvent
estimate with complex absorbing potential, and then the gluing techniques of
\cite[Proposition 2.2]{Chr-disp-1} imply the Proposition (see also the
recent paper of Datchev-Vasy \cite{DaVa-gluing}).
The proof of the microlocal invertibility estimate proceeds through
several steps. First, we rescale the principal symbol of $\widetilde{Q}$ to
introduce a calculus of two parameters. We then quantize in the
second parameter which eventually will be fixed as a constant in the
problem. This technique has been used in
\cite{SjZw-mono,SjZw-frac,Chr-NC,Chr-QMNC}.
Our central result to achieve microlocal invertibility is a lower
bound for the resolvent of the semiclassical operator $\widetilde{Q},$ whose potential
has a degenerate barrier top. This result is of independent interest,
and is used to prove the sharp resolvent estimate in Theorem \ref{T:resolvent}.
\begin{lemma}
\label{L:ml-inv}
For $\epsilon>0$ sufficiently small, let $\varphi \in {\mathcal S}(T^* {\mathbb R})$
have compact support in $\{ |(x,\xi) |\leq \epsilon\}$. Then there
exists $C_\epsilon>0$ such that
\begin{equation}
\label{E:ml-inv}
\| \widetilde{Q} \varphi^w u \| \geq C_\epsilon h^{2m/(m+1)} \|
\varphi^w u \|, \,\,\, z \in [1-\epsilon, 1 + \epsilon].
\end{equation}
\end{lemma}
\subsection{The two-parameter calculus}
Following Sj\"ostrand-Zworski \cite[\S3.3]{SjZw-frac}, we introduce a
calculus with two parameters, designed to enable symbolic computations
in the $h^{-1/2}$ calculus which would otherwise involve global
considerations rather than a local Moyal product of symbols. We use a
somewhat more general version of this calculus than in
\cite{SjZw-frac}, involving inhomogeneous powers of $h.$
For $\alpha\in [0,1]$ and $\beta\leq 1-\alpha,$ we let
\begin{eqnarray*}
\lefteqn{{\mathcal S}_{\alpha,\beta}^{k,m, \widetilde{m}} \left(T^*(\mathbb{R}^n) \right):= } \\
& & \Bigg\{ a \in {\mathcal C}^\infty \left(\mathbb{R}^n \times (\mathbb{R}^n)^* \times (0,1]^2 \right): \\
&& \quad \quad \left| \partial_x^\rho \partial_\xi^\gamma a(x, \xi; h, \tilde{h}) \right|
\leq C_{\rho \gamma}h^{-m}\tilde{h}^{-\widetilde{m}} \left(
\frac{\tilde{h}}{h} \right)^{\alpha |\rho| + \beta |\gamma|}
\langle \xi \rangle^{k - |\gamma|} \Bigg\}.
\end{eqnarray*}
Throughout this work we will always assume $\tilde{h} \geq h$.
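As a simple example of membership in these classes, not needed in the sequel:
if $b \in {\mathcal S}(T^* \mathbb{R}^n)$ is a fixed Schwartz function, then
\[
a(x, \xi; h, \tilde{h}) = b \big( (\tilde{h}/h)^{\alpha} x, (\tilde{h}/h)^{\beta} \xi \big) \in {\mathcal S}_{\alpha,\beta}^{0,0,0},
\]
since each $x$-derivative produces a factor $(\tilde{h}/h)^{\alpha}$ and each
$\xi$-derivative a factor $(\tilde{h}/h)^{\beta}$, while the decay of $b$ in its
second argument absorbs the weight $\langle \xi \rangle^{-|\gamma|}$ (here
$(\tilde{h}/h)^{\beta} \geq 1$).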
We let $\Psi_{\alpha,\beta}^{k, m, \widetilde{m}}$ denote the
corresponding spaces of semiclassical pseudodifferential operators
obtained by Weyl quantization of these symbols. We will sometimes add a
subscript of $h$ or $\tilde{h}$ to indicate which parameter is used in
the quantization; in the absence of such a parameter, the quantization
is assumed to be in $h.$ The class ${\mathcal S}_{\alpha,\beta}$ (with no
superscripts) will denote ${\mathcal S}_{\alpha,\beta}^{0,0,0}$ for brevity.
In \cite{SjZw-frac}, it is observed that in the special case
$\alpha=\beta=1/2,$ the composition in the
calculus can be computed in terms of a symbol product that converges
in the sense that terms improve in $\tilde{h}$ and $\xi$ orders, but
not in $h$ orders (owing to the marginality of the $h^{-1/2}$
calculus, which is what the introduction of the second parameter
$\tilde{h}$ mitigates). We will restrict our attention in what
follows to a generalization of this marginal case:
$$
\alpha+\beta=1.
$$
By the same arguments employed in \cite{SjZw-frac}, we may easily
verify that the calculus $\Psi_{\alpha,\beta}$ is closed under composition: if
$ a \in {\mathcal S}^{k,m , {\widetilde m} }_{\alpha,\beta}$ and
$ b \in {\mathcal S}^{k',m', \widetilde m'}_{\alpha,\beta} $ then
\[ \mathrm{Op}\,_h^w (a) \circ \mathrm{Op}\,_h^w(b) = \mathrm{Op}\,_h^w (c)
\ \text{ with } \
c \in {\mathcal S}^{k+k',m +m', {\widetilde m}+ {\widetilde m}' }_{\alpha,\beta}\,.
\]
The presence of the additional parameter $ \tilde h $ allows us to
conclude that
\[ c \equiv \sum_{ |\rho | < M } \frac{1}{\rho !}
\partial_\xi^\rho a D_x^\rho b \ \mod {\mathcal S}^{ k + k' - M , m + m ' , {\widetilde m} + {\widetilde m}'
- M }_{\alpha,\beta} \,, \]
that is, we have a symbolic expansion in powers of $ \tilde h $.
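The gain may be seen by direct bookkeeping: the term of order $|\rho|$ in the
expansion carries an explicit factor $h^{|\rho|}$, while the derivatives
$\partial_\xi^\rho a$ and $D_x^\rho b$ lose at most $(\tilde{h}/h)^{\beta |\rho|}$
and $(\tilde{h}/h)^{\alpha |\rho|}$ respectively, so that in the marginal case
$\alpha + \beta = 1$ the net size of the term is
\[
h^{|\rho|} (\tilde{h}/h)^{(\alpha + \beta)|\rho|} = \tilde{h}^{|\rho|}.
\]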
We also note that a more general version of \cite[Lemma 3.6]{SjZw-frac} holds, giving
error estimates on remainders:
\begin{lemma}
\label{l:err}
Suppose that
$ a, b \in {\mathcal S}_{\alpha,\beta}$,
and that $ c^w = a^w \circ b^w $.
Then
\begin{equation}
\label{eq:weylc} c ( x, \xi) = \sum_{k=0}^N \frac{1}{k!} \left(
\frac{i h}{2} \sigma ( D_x , D_\xi; D_y , D_\eta) \right)^k a ( x , \xi)
b( y , \eta) |_{ x = y , \xi = \eta} + e_N ( x, \xi ) \,,
\end{equation}
where for some $ M $
\begin{equation}
\label{eq:new1}
\begin{split}
& | \partial^{\gamma} e_N | \leq C_N h^{N+1}
\\
& \ \
\times \sum_{ \gamma_1 + \gamma_2 = \gamma }
\sup_{
{{( x, \xi) \in T^* \mathbb{R}^n }
\atop{ ( y , \eta) \in T^* \mathbb{R}^n }}} \sup_{
|\rho | \leq M \,, \rho \in {\mathbb N}^{4n} }
\left|
\Gamma_{\alpha, \beta, \rho,\gamma }(D)
( \sigma ( D) ) ^{N+1} a ( x , \xi)
b ( y, \eta )
\right| \,,
\end{split}
\end{equation}
where $ \sigma ( D) =
\sigma ( D_x , D_\xi; D_y, D_\eta ) $ as usual,
and
\[
\Gamma_{\alpha, \beta, \rho,\gamma }(D) = ( h^\alpha {\partial}_{(x,y)},
h^\beta {\partial}_{(\xi,\eta)})^\rho {\partial}_{(x,\xi)}^{\gamma_1}
{\partial}_{(y,\eta)}^{\gamma_2}.
\]
\end{lemma}
\begin{proof}
Following \cite[Lemma 3.6]{SjZw-frac} we recall that
$$
c(x,\xi) = \exp (i h \sigma(D)/2) a(x,\xi) b(y,\eta)|_{x=y,\eta=\xi}
$$
and hence by Taylor's theorem the remainder may be expressed as
\begin{equation}\label{eN}
e_N(x,\xi) = \frac{1}{N!} \int_0^1 (1-t)^N \exp(it h \sigma(D)/2) (ih \sigma(D)/2)^{N+1}\big(a(x,\xi)b(y,\eta)\big)\, dt\big\rvert_{x=y,\eta=\xi} .
\end{equation}
Likewise ${\partial}^\gamma e_N$ is a sum of terms of the form
\begin{equation}\label{eNderiv}
\text{const} \times \int_0^1 (1-t)^N {\partial}_{(x,\xi)}^{\gamma_1} {\partial}_{(y,\eta)}^{\gamma_2} \exp(it h \sigma(D)/2) (ih \sigma(D)/2)^{N+1}\big(a(x,\xi)b(y,\eta)\big)\, dt\big\rvert_{x=y,\eta=\xi}
\end{equation}
where $\gamma_1+\gamma_2=\gamma.$
We further recall that for any non-degenerate real quadratic form $A$ there exists $M$ such that for all $f$,
$$
\abs{{\partial}^\gamma \exp(iA(D)/2) f} \leq C \sum_{\abs{\rho}<M} \sup
\abs{{\partial}^{\gamma+\rho} f}
$$
(where the $\sup$ is over all phase and base variables---in our case,
$(x,\xi,y,\eta)$). Now we take $A(D)=th\sigma(D)$ and note that
$$
h\sigma(D) =\sigma(h^\alpha D_x, h^\beta D_\xi; h^\alpha D_y, h^\beta D_\eta).
$$
Rescaling the $x,y$ variables by $h^\alpha$ and $\xi,\eta$ by
$h^\beta$ shows that we may estimate
$$
\abs{{\partial}^{\gamma_1}{\partial}^{\gamma_2} \exp(ith \sigma(D)/2) f} \leq C \sum_{\abs{\rho}<M} \sup
\abs{{\partial}^{\gamma_1}{\partial}^{\gamma_2} (h^\alpha{\partial}_{(x,y)},h^\beta{\partial}_{(\xi,\eta)})^\rho f}.
$$
Thus we may estimate the integrand of \eqref{eNderiv} uniformly by
a constant times
$$
h^{N+1}\sup \sum_{\abs{\rho}<M}
\abs{{\partial}^{\gamma_1} {\partial}^{\gamma_2} (h^\alpha{\partial}_{x,y},h^\beta{\partial}_{\xi,\eta})^\rho
\sigma(D)^{N+1}a(x,\xi) b(y,\eta)},
$$
and the result follows.
\end{proof}
As a particular consequence we notice that if $ a \in
{\mathcal S}_{\alpha,\beta} ( T^* \mathbb{R}^n) $ and $ b \in {\mathcal S} ( T^* \mathbb{R}^n ) $ then
\begin{align}
\label{eq:abc}
c ( x, \xi) = &
\sum_{k=0}^N \frac{1}{k!} \left(
\frac{i h}{2} \sigma ( D_x , D_\xi; D_y , D_\eta) \right)^k a ( x , \xi)
b( y , \eta) |_{ x = y , \xi = \eta} \\
& + {\mathcal O}_{{\mathcal S}_{\alpha,\beta}}
\big( h^{N+1} \max \{ (\tilde{h}/h)^{(N+1) \alpha}, (\tilde{h}/h)^{(N+1) \beta} \} \big)
\,. \notag
\end{align}
We will let $\mathcal{B}$ denote the ``blowdown map''\footnote{We
remark that introducing the coordinates $(X,\Xi)$ is tantamount to
performing an anisotropic blowup centered at $x=\xi=h=0$.}
\begin{equation}\label{blowdown}
(x,\xi)=\mathcal{B}(X,\Xi)=((h/\tilde{h})^\alpha X, (h/\tilde{h})^\beta \Xi).
\end{equation}
The spaces of operators $\Psi_h$ and
$\Psi_{\tilde{h}}$ are related via a unitary rescaling in the
following fashion.
Let $a \in {\mathcal S}_{\alpha,\beta}^{k,m,\tilde{m}}$, and consider the
rescaled symbol
\begin{eqnarray*}
a\left({\left( h/\tilde{h} \right)}^{\alpha}X, {\left( h/\tilde{h} \right)}^{\beta} \Xi
\right)= a \circ \mathcal{B} \in {\mathcal S}_{0,0}^{k,m,\tilde{m}}.
\end{eqnarray*}
Define the unitary operator $T_{h, \tilde{h}} u(X) = {\left( h/\tilde{h} \right)}^{\frac{n\alpha}{2}}u\left(
{\left( h/\tilde{h} \right)}^{\alpha} X \right)$,
so that
\begin{equation}\label{rescaledquantization}
\mathrm{Op}\,_{\tilde{h}}^w(a\circ \mathcal{B}) T_{h, \tilde{h}} u= T_{h, \tilde{h}} \mathrm{Op}\,_h^w(a) u.
\end{equation}
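Unitarity of $T_{h, \tilde{h}}$ is immediate from the change of variables
$x = (h/\tilde{h})^{\alpha} X$:
\[
\| T_{h, \tilde{h}} u \|_{L^2}^2 = {\left( h/\tilde{h} \right)}^{n \alpha} \int \big| u \big( {( h/\tilde{h} )}^{\alpha} X \big) \big|^2 \, dX = \| u \|_{L^2}^2.
\]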
\subsection{Proof of Lemma \ref{L:ml-inv}}
By virtue of the cutoff $\varphi^w$, to begin we are
working microlocally in $\{|(x,\xi)| \leq \epsilon \}$. We observe
that since $2m/(m+1)<2$, if we can show the estimate \eqref{E:ml-inv}
for $Q_1 = \widetilde{Q} - h^2 V_1$, the estimate follows also for $\widetilde{Q}$.
Let
\[
q_1 = \xi^2 + A^{-2} -z
\]
be the principal symbol of $Q_1$. The function $A^{-2} = (1 +
x^{2m})^{-1/m}$ is analytic near $x = 0$, and since $|x| \leq \epsilon$ is
small, we expand $A^{-2}$ in a Taylor series about $x = 0$ and write
\[
q_1 = \xi^2 - \frac{1}{m} x^{2m}(1 + a(x)) -z_1,
\]
where $z_1 = z - 1 \in [-\epsilon, \epsilon],$ and $a(x) = {\mathcal O}(x^{2m}).$
The Hamilton vector field ${\textsf{H}}$ associated to the symbol $q_1$ is given by
$$
{\textsf{H}} = 2\xi{\partial}_{x} +(2 x^{2m-1}+ {\mathcal O}(x^{4m-1})) {\partial}_{\xi}.
$$
We will consider a commutant localizing in this region and singular at the
origin in a controlled way: as above we introduce new variables
$$
\Xi=\frac{\xi}{(h/\tilde{h})^{m\alpha}},\quad X = \frac{x}{(h/\tilde{h})^\alpha},
$$
with $$\alpha = \frac 1{m+1}.$$ (When we wish to be more precise
below, we will explicitly use the map $(x,\xi) =\mathcal{B}(X,\Xi)$ in this
coordinate change; for the moment, we simply abuse notation.) As
$m\alpha +\alpha = 1,$ we note that quantizations of symbolic
functions of $X,\Xi$ lie in the pseudodifferential calculus, hence the
symbol of the composition of two such operators depends
\emph{globally} on the symbols of the two operators. It is in order
to cope with this issue that we employ the two parameter calculus.
We remark that in the new ``blown-up'' coordinates $\Xi,X,$
\begin{equation}\label{blownupvf}
{\textsf{H}}= (h/\tilde{h})^{\frac{m-1}{m+1}}\big(\Xi {\partial}_X+
X^{2m-1}{\partial}_\Xi+{\mathcal O}((h/\tilde{h})^{2m\alpha} X^{2m}){\partial}_\Xi\big).
\end{equation}
\end{equation}
Now fix a small ${\epsilon}_0>0$ and set
$$
\Lambda(s) = \int_0^s \ang{s'}^{-1-{\epsilon}_0} \, ds';
$$
$\Lambda$ is of course a symbol of order $0,$ with $\Lambda(s)
\sim s$ near $s=0.$
We introduce the singular symbol
$$
a(x,\xi;h) = \Lambda(\Xi)\Lambda(X)\chi(x)\chi(\xi)= \Lambda(\xi/(h/\tilde{h})^{m\alpha}) \Lambda(x/(h/\tilde{h})^\alpha)\chi(x)\chi(\xi),
$$
where $\chi(s)$ is a cutoff function equal to $1$ for $\abs{s}<\delta_1$
and $0$ for $s>2\delta_1$ ($\delta_1$ will be chosen shortly).
Then $a$ is bounded, and a symbol of order $0$ in $X,\Xi$:
$$
\abs{{\partial}_X^\rho {\partial}_\Xi^\gamma a}\leq C_{\rho\gamma}.
$$
(Recall that $x=(h/\tilde{h})^\alpha X$ and $\xi=(h/\tilde{h})^{m\alpha}\Xi.$)
Using \eqref{blownupvf}, it is simple to
compute
\begin{equation}\label{gdefn}
\begin{aligned}
{\textsf{H}} (a) = & (h/\tilde{h})^{\frac{m-1}{m+1}}\chi(x)\chi(\xi)\big(
\Lambda(\Xi)
\ang{X}^{-1-{\epsilon}_0}\Xi \\
& +X^{2m-1}\ang{\Xi}^{-1-{\epsilon}_0}
\Lambda(X) (1+{\mathcal O}(x^{2m}))\big)+r\\
= &
(h/\tilde{h})^{\frac{m-1}{m+1}}\chi(x)\chi(\xi)\bigg(
(h/\tilde{h})^{-m\alpha}\xi \Lambda(\xi/(h/\tilde{h})^{m\alpha})
\ang{x/(h/\tilde{h})^\alpha}^{-1-{\epsilon}_0}\\ & +(h/\tilde{h})^{-(2m+1)\alpha} x^{2m-1}
\Lambda(x/(h/\tilde{h})^\alpha) \ang{\xi/(h/\tilde{h})^{m\alpha}}^{-1-{\epsilon}_0}
(1+{\mathcal O}(x^{2m}))\bigg)+r\\
\equiv & (h/\tilde{h})^{\frac{m-1}{m+1}} g+r
\end{aligned}
\end{equation}
with $$\mathrm{supp}\, r\subset \{\abs{x}>\delta_1\} \cup \{\abs{\xi}>\delta_1\}$$
($r$ comes from terms involving derivatives of $\chi(x)\chi(\xi)$).
Note that near $X=\Xi=0,$ since $\Lambda(s)\sim s$ for $s\sim 0,$ the term
\begin{equation}\label{g}
g=\Lambda(\Xi)
\ang{X}^{-1-{\epsilon}_0}\Xi +\ang{\Xi}^{-1-{\epsilon}_0}
\Lambda(X) X^{2m-1} (1+{\mathcal O}(x^{2m}))
\end{equation}
in ${\textsf{H}}(a)$ is bounded below by a multiple of
$\Xi^2+X^{2m}.$ Provided $\delta_1$ is chosen small enough (so we can
absorb the ${\mathcal O}(x^{2m})$ error term), $g$ is in fact strictly positive
away from $X=\Xi=0,$ while in the region $\abs{(X,\Xi)}\geq 1,$ we find that
since $\mathrm{sgn}\, \Lambda(s)=\mathrm{sgn}\, (s),$ when $\abs{\Xi}\geq
\max(\abs{X}^{1+{\epsilon}_0}, 1)$
then
$$
g\geq \Lambda(\Xi)
\ang{X}^{-1-{\epsilon}_0}\Xi\gtrsim \frac{\abs{\Xi}}{\ang{\Xi}}\geq C>0,
$$
while for $\abs{X}^{1+{\epsilon}_0}\geq \max(\abs{\Xi},1),$ we have (providing
$\delta_1\ll 1$)
$$
g \geq (1/2) \ang{\Xi}^{-1-{\epsilon}_0} \Lambda(X) X^{2m-1} \gtrsim
\abs{X}^{-2(1+{\epsilon}_0)}\abs{X}^{2m-1} \geq C>0,
$$
provided $2(1+{\epsilon}_0)<2m-1.$ Thus, since the larger of $\abs{\Xi}$ and
$\abs{X}^{1+{\epsilon}_0}$ is assuredly greater than $1$ in the region of
interest, we have in fact shown that
$$
g \geq C>0\quad \text{ in } \{\Xi^2+X^2>1\}.
$$
Thus, we find
$$
{\textsf{H}}(a)=(h/\tilde{h})^{\frac{m-1}{m+1}}g+r
$$
with
$$r = {\mathcal O}_{{\mathcal S}_{\alpha, \beta}}\big((h/\tilde{h})^{(m-1)/(m+1)}( (h/\tilde{h})^\alpha |
\Xi| + (h/\tilde{h})^\beta | X^{2m-1} |)\big)$$
supported as above and
$$
g(X,\Xi;h) = \begin{cases} c (\Xi^2 + X^{2m}) (1 + r_2), & \Xi^2 +X^2\leq 1\\
b, & \Xi^2 +X^2\geq 1,
\end{cases}
$$
where $c >0$ is a constant, $r_2 = {\mathcal O}_{{\mathcal S}_{\alpha, \beta}}( \delta_1)$, and $b>0$ is elliptic.
We will require a positivity result dealing with operators satisfying
estimates of this type:
\begin{lemma}\label{lemma:positivity0}
Let a real-valued symbol $\tilde{g}(x,\xi;h)$ satisfy
$$
\tilde{g}(x,\xi;h) = \begin{cases} c (\xi^2 + x^{2m}) (1 + r_2), & \xi^2 +x^2\leq 1\\
b, & \xi^2 +x^2\geq 1,
\end{cases}
$$
where $c >0$ is constant, $r_2 = {\mathcal O}_{{\mathcal S}_{\alpha, \beta}}( \delta_1),$ and $b>0$ is elliptic.
Then there exists $c_0>0$ such that
$$
\left\langle\mathrm{Op}\,_h^w(\tilde{g})u, u \right\rangle \geq c_0h^{2m/(m+1)} \| u \|^2
$$
for $h$ sufficiently small.
\end{lemma}
\begin{proof}
Since $b>0$ is elliptic, there exists $\sigma >0$ sufficiently small
and independent of $h>0$
so that if $\left\langle \mathrm{Op}\,_h^w(\tilde{g}) u, u \right\rangle \leq \sigma \| u \|^2$, then $u$
has semiclassical wavefront set contained in the set $\{ | x |^2 + |
\xi |^2 \leq 1/2 \}$.
On this set, we may write
$$
\tilde{g}=(\xi^2+x^{2m})K^2
$$
with $K$ a strictly positive symbol. The Weyl quantization has the
convenient feature that we thus have
$$
\mathrm{Op}\,_h^w(\tilde{g}) = \mathrm{Op}\,_h^w(K)^* (h^2D_x^2+ x^{2m}) \mathrm{Op}\,_h^w(K)+{\mathcal O}(h^2),
$$
and $\mathrm{Op}\,_h^w(K)\geq {\epsilon}_1>0.$
Then for $u$ microsupported in $\{ | x |^2 + |
\xi |^2 \leq 1/2 \}$ we thus compute
\[
\left\langle \mathrm{Op}\,_h^w(\tilde{g}) u, u \right\rangle \geq {\epsilon}_1\left\langle \mathrm{Op}\,_h^w(\xi^2 + x^{2m} ) u, u \right\rangle -
{\mathcal O}(h^2) \| u \|^2.
\]
The lower bound follows, for $h>0$ sufficiently small, from
Lemma \ref{L:Q-lower-bound}.
\end{proof}
We now employ this result to estimate $\mathrm{Op}\,_h^w ({\textsf{H}}(a)).$
\begin{lemma}\label{lemma:positivity1}
For $\tilde{h}>0$ sufficiently small, there exists $c>0$ such that
$\mathrm{Op}\,_h^w(g)>c\tilde{h}^{2m/(m+1)},$ uniformly as $h \downarrow 0,$ where $g$
is given by \eqref{g}.
\end{lemma}
\begin{proof}
Note that we have written $g$ as a function of $X,\Xi,$ so in changing
variables to $x,\xi$ we are tacitly employing the blowdown map $\mathcal{B}.$
In particular, we are interested in estimating $\mathrm{Op}\,_h^w(g\circ
\mathcal{B}^{-1})$ from below.
By \eqref{rescaledquantization},
$$
\mathrm{Op}\,_{\tilde{h}}^w(g) T_{h, \tilde{h}} u=T_{h, \tilde{h}} \mathrm{Op}\,_h^w(g
\circ \mathcal{B}^{-1}) u,
$$
hence
$$
\left\langle \mathrm{Op}\,_h^w(g \circ \mathcal{B}^{-1}) u, u \right\rangle = \left\langle \mathrm{Op}\,_{\tilde{h}}^w(g)
T_{h, \tilde{h}} u, T_{h, \tilde{h}} u \right\rangle \geq c \tilde{h}^{2m/(m+1)} \| u \|^2
$$
for $\tilde{h}$ sufficiently small, by unitarity of $T_{h,\tilde{h}}$
and Lemma~\ref{lemma:positivity0}, with $\tilde{h}$ replacing $h.$
This establishes the Lemma.
\end{proof}
Before completing the proof of Lemma \ref{L:ml-inv}, we need the
following lemma about the lower order terms in the expansion of the
commutator of $Q_1$ and $a^w$.
\begin{lemma}
\label{L:Q-comm-error}
The symbol expansion of $[Q_1, a^w]$ in the $h$-Weyl calculus is of
the form
\begin{align*}
[Q_1, a^w] = & \mathrm{Op}\,_h^w \Bigg( \Big(
\frac{i h}{2} \sigma ( D_x , D_\xi; D_y , D_\eta) \Big) (q_1(x, \xi)
a(y, \eta) - q_1(y, \eta) a ( x , \xi) ) |_{ x = y , \xi = \eta} \\
& + e (
x, \xi ) + r_3(x, \xi)\Bigg) ,
\end{align*}
where $e$ satisfies
\[
\mathrm{Op}\,_h^w(e) \leq C \tilde{h}^{-(m-3)/(m+1)} h^{2m/(m+1)} \mathrm{Op}\,_h^w(g),
\]
with $g$ given by \eqref{g} and $r_3$ supported in $\{ | (x, \xi) | \geq \delta_1 \}$.
\end{lemma}
\begin{proof}
Since everything is in the Weyl calculus, only the odd terms in the
exponential composition expansion are non-zero. Hence the $h^2$ term
is zero in the Weyl expansion.
Now according to Lemma \ref{l:err} and the standard $L^2$ continuity
theorem for $h$-pseudodifferential operators, we need to estimate a
finite number of
derivatives of the error:
\[
| \partial^{\gamma} e_2 | \leq C h^{3}
\sum_{ \gamma_1 + \gamma_2 = \gamma }
\sup_{
{{( x, \xi) \in T^* \mathbb{R} }
\atop{ ( y , \eta) \in T^* \mathbb{R} }}} \sup_{
|\rho | \leq M \,, \rho \in {\mathbb N}^{4} }
\left|
\Gamma_{\alpha, \beta, \rho, \gamma}(D)
( \sigma ( D) ) ^{3} q_1 ( x , \xi)
a ( y, \eta )
\right|
.
\]
However, since $q_1(x, \xi) = \xi^2 - \frac{1}{m} x^{2m}(1 + a(x)) - z_1$, we have
\[
D_x D_\xi q_1 = D_{\xi}^3 q_1 = 0,
\]
so that
\begin{align*}
\sigma(D)^3 & q_1(x, \xi) a(y, \eta) |_{ x = y , \xi = \eta} \\
& = D_x^3 q_1 D_\eta^3 a |_{ x = y , \xi = \eta} \\
& = c x^{2m-3}(1 + {\mathcal O}(x^{2m})) (\tilde{h}/h)^{3m/(m+1)} \Lambda'''
((\tilde{h}/h)^{m/(m+1)} \eta) \\
& \quad \times \Lambda((\tilde{h}/h)^{1/(m+1)} y) \chi(y)
\chi(\eta)
+ r_3,
\end{align*}
where $r_3$ is supported in $\{ | (x, \xi) | \geq \delta_1 \}$.
Owing to the cutoffs $\chi(y) \chi(\eta)$ in the definition of $a$
(and the corresponding implicit cutoffs in $q_1$), we only need to
estimate this error in compact sets. The derivatives
$h^{\beta} \partial_\eta$ and $h^\alpha \partial_y$
preserve the order of $e_2$ in $h$ and increase the order in $\tilde{h}$, while the other derivatives lead
to higher powers in $h/\tilde{h}$ in the symbol expansion. Hence we need only estimate $e_2$,
as the derivatives satisfy similar estimates.
In order to estimate $e_2$, we again use conjugation to
the $2$-parameter calculus. We have
\[
\| \mathrm{Op}\,_h^w(e_2) u \| = \| T_{h, \tilde{h}} \mathrm{Op}\,_h^w(e_2) T_{h, \tilde{h}}^{-1}
T_{h, \tilde{h}} u \| \leq \| T_{h, \tilde{h}} \mathrm{Op}\,_h^w(e_2) T_{h, \tilde{h}}^{-1}
\|_{L^2 \to L^2} \| u \|,
\]
by unitarity of $T_{h, \tilde{h}}$. But $T_{h, \tilde{h}} \mathrm{Op}\,_h^w(e_2) T_{h,
\tilde{h}}^{-1} = \mathrm{Op}\,_{\tilde{h}}^w(e_2 \circ \mathcal{B} )$ and
\begin{align*}
e_2 \circ \mathcal{B} & = h^3 (h/\tilde{h})^{(2m-3)\alpha} X^{2m-3}(1 + {\mathcal O}(x^{2m})) (\tilde{h}/h)^{3m/(m+1)} \Lambda'''
(\Xi) \\
& \quad \times \Lambda(X) \chi(x)
\chi(\xi) + r_3 \circ \mathcal{B} ,
\end{align*}
and we may estimate the first term above by
$$
C h^{2m/(m+1)} \tilde{h}^{(m+3)/(m+1)}X^{2m-3} \Lambda'''(\Xi) \chi(x)
\chi(\xi),
$$
which in turn is bounded above by
\begin{equation}\label{gkineq}
\begin{cases}
C h^{2m/(m+1)} \tilde{h}^{(m+3)/(m+1)} , \,\, |X| \leq 1, \\
C h^{2m/(m+1)} \tilde{h}^{(m+3)/(m+1)} g, \,\, | X | \geq 1.
\end{cases}
\end{equation}
It now suffices to verify that for
$$
k=X^{2m-3} \Lambda'''(\Xi) \chi(x)
\chi(\xi),
$$
$$
\mathrm{Op}\,_{h}^w(k\circ \mathcal{B}^{-1}) \leq C \tilde{h}^{-2m/(m+1)}\mathrm{Op}\,_{h}^w (g\circ \mathcal{B}^{-1}),
$$
i.e., that for all $u(X),$
$$
\ang{\mathrm{Op}\,_{h}^w(k\circ \mathcal{B}^{-1})u,u} \leq C
\tilde{h}^{-2m/(m+1)}\ang{\mathrm{Op}\,_{h}^w (g\circ \mathcal{B}^{-1})u,u}.
$$
We now rescale and return to performing $\tilde{h}$ quantization in the $X$
variable. For $u(X)$ microsupported away from the origin in
$(X,\Xi),$ the desired estimate follows from the second inequality in
\eqref{gkineq} (indeed the $\tilde{h}^{-2m/(m+1)}$ factor on the RHS may be
omitted), while for $u$ microsupported near the origin, it follows
from the lower bound of Lemma~\ref{lemma:positivity0}.
\end{proof}
We are now able to prove the resolvent estimate Lemma \ref{L:ml-inv}.
Let $v=\varphi^w u,$ with $\varphi$ chosen to have support inside the
set where $\chi(x)\chi(\xi)=1;$ thus the terms $r$ and $r_3$ above are supported
away from the support of $\varphi.$ Then
Lemmas \ref{lemma:positivity1} and \ref{L:Q-comm-error} yield
\begin{align*}
i\ang{[Q_1-z,a^w]v,v}&=h\ang{\mathrm{Op}\,_h^w({\textsf{H}}(a))v,v}+\ang{\mathrm{Op}\,_h^w(e_2)v,v}
\\
&= h (h/\tilde{h})^{(m-1)/(m+1)} \ang{\mathrm{Op}\,_h^w(g)v,v}+\ang{\mathrm{Op}\,_h^w(e_2)v,v}
\\
&= h^{2m/(m+1)}\big( \tilde{h}^{-(m-1)/(m+1)}+{\mathcal O}(\tilde{h}^{-(m-3)/(m+1)})\big)\ang{\mathrm{Op}\,_h^w(g)v,v}
\\
&\geq C
h^{2m/(m+1)} \tilde{h} \norm{v}^2,
\end{align*}
for
$\tilde{h}$ sufficiently small.
On the other hand, we certainly have
$$
\big\lvert \ang{[Q_1-z,a^w]v,v}\big\rvert \leq C \norm{(Q_1-z)v}\norm{v},
$$
hence the desired bound follows once we fix $\tilde{h}>0$. \qed
\section{Resonances and Quasimodes}
In this section, we construct quasimodes for the model
operator near $(0,0)$ in phase space.
Let
\[
\widetilde{P} = -h^2 \partial_x^2 - m^{-1} x^{2m}
\]
locally near $x = 0$. We will construct quasimodes which are
localized very close to $x = 0$, so this should be a decent
approximation.
Complex scaling $(x, \xi ) \mapsto (e^{i \pi/(2m+2)} x, e^{-i \pi /(2m+2)}
\xi)$ sends $\widetilde{P}$ to a multiple of the quantum anharmonic oscillator.
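Explicitly, under this scaling $\xi^2 \mapsto e^{-i\pi/(m+1)} \xi^2$ and
$x^{2m} \mapsto e^{i \pi m/(m+1)} x^{2m} = - e^{-i\pi/(m+1)} x^{2m}$, so that on
the level of symbols
\[
\xi^2 - m^{-1} x^{2m} \mapsto e^{-i\pi/(m+1)} \left( \xi^2 + m^{-1} x^{2m} \right).
\]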
As in
the appendix, we find there is a Schwartz class function $v(x) = v_0 ( x
h^{-1/(m+1)})$ which is an un-normalized ground state for the equation
\[
(-h^2 \partial_x^2 + m^{-1} x^{2m} ) v = h^{2m/(m+1)} \lambda_0 v.
\]
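The powers of $h$ here follow from the rescaling $x = h^{1/(m+1)} X$, under which
\[
-h^2 \partial_x^2 + m^{-1} x^{2m} = h^{2m/(m+1)} \left( -\partial_X^2 + m^{-1} X^{2m} \right),
\]
so that $v_0$ is a ground state of the anharmonic oscillator
$-\partial_X^2 + m^{-1} X^{2m}$ with eigenvalue $\lambda_0$.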
This suggests there are resonances for the operator
$\widetilde{P}$ with imaginary part to leading order $c_0
h^{2m/(m+1)}$, although this is only a heuristic. We use a complex WKB approximation to get an explicit
formula for a localized approximate resonant state.
Let $E = (\alpha + i \beta)h^{2m/(m+1)} $, $\alpha, \beta>0$
independent of $h$. Let the phase function
\[
\varpi(x) = \int_0^x (E + m^{-1} y^{2m})^{1/2} dy,
\]
where the branch of the square root is chosen to have positive
imaginary part. Let
\[
u(x) = (\varpi')^{-1/2} e^{i \varpi / h},
\]
so that
\[
(hD)^2 u = (\varpi')^2 u + f u,
\]
where
\begin{align*}
f & = (\varpi')^{1/2} (hD)^2 (\varpi')^{-1/2} \\
& = -h^2 \left( \frac{3}{4} (\varpi')^{-2} (\varpi'')^2 - \frac{1}{2}
(\varpi')^{-1} \varpi ''' \right).
\end{align*}
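The absence of an ${\mathcal O}(h)$ term is the usual WKB transport cancellation:
writing $u = g e^{i \varpi/h}$ with $g = (\varpi')^{-1/2}$,
\[
(hD)^2 u = \left( (\varpi')^2 g - ih \left( 2 g' \varpi' + g \varpi'' \right) - h^2 g'' \right) e^{i\varpi/h},
\]
and this choice of $g$ makes $2 g' \varpi' + g \varpi'' = 0$, leaving exactly
$f = -h^2 g''/g$.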
\begin{lemma}
The phase function $\varpi$ satisfies the following properties:
\begin{description}
\item[(i)] There exists $C>0$ independent of $h$ such that
\[
| \,\mathrm{Im}\, \varpi | \leq C\begin{cases} h(1 + \log(x/h^{1/2} )), \quad m =
1, \\
h, \quad m \geq 2.
\end{cases}
\]
In particular, if $| x | \leq C h^{1/(m+1)}$, $| \,\mathrm{Im}\, \varpi| \leq C'$
for some $C'>0$ independent of $h$.
\item[(ii)] There exists $C>0$ independent of $h$ such that
\[
C^{-1} \sqrt{ h^{2m/(m+1)} + m^{-1} x^{2m} } \leq | \varpi'(x) | \leq C \sqrt{
h^{2m/(m+1)} + m^{-1} x^{2m} }
\]
\item[(iii)]
\[
\begin{cases}
\varpi' = (E + m^{-1} x^{2m})^{1/2}, \\
\varpi'' = x^{2m-1} (\varpi')^{-1}, \\
\varpi''' = \left( (1-1/m) x^{4m-2} + E (2m-1) x^{2m-2}
\right) ( \varpi')^{-3},
\end{cases}
\]
In particular,
\[
f = -h^2 x^{2m-2} \left( \left( \frac{1}{4} + \frac{1}{2m}
\right) x^{2m} - \left( m - \frac{1}{2} \right) E \right) (\varpi'
)^{-4}.
\]
\end{description}
\end{lemma}
\begin{proof}
For (i) we write $\varpi' = s + it$ for $s$ and $t$ real valued, and then
\[
E + m^{-1} x^{2m} = s^2 - t^2 + 2 i st.
\]
Hence
\[
s^2 \geq s^2 - t^2 = \alpha h^{2m/(m+1)} + m^{-1} x^{2m} ,
\]
so that
\[
t = \frac{\beta h^{2m/(m+1)}}{2s} \leq \frac{\beta
h^{2m/(m+1)}}{2\sqrt{h^{2m/(m+1)} \alpha + m^{-1} x^{2m}}}.
\]
Then
\begin{align*}
| \,\mathrm{Im}\, \varpi (x) | & \leq \int_0^{|x|} | \,\mathrm{Im}\, \varpi'(y)| dy \\
& \leq C \int_0^{h^{1/(m+1)}} h^{m/(m+1)} dy + C \int_{h^{1/(m+1)}}^x
h^{2m/(m+1)} y^{-m} dy \\
& = \begin{cases} {\mathcal O} ( h(1 + \log (x/h^{1/2}))), \quad m = 1, \\
{\mathcal O}(h), \quad m >1. \end{cases}
\end{align*}
Parts (ii) and (iii) are simple computations.
\end{proof}
In light of this lemma, $| u (x) |$ is comparable to $| \varpi'
|^{-1/2}$ for all $x$ for $m \geq 2$, and provided $| x | \leq C h^{1/2}$ when
$m=1$. We are only interested in sharply localized quasimodes, so let
$$\gamma = h^{1/(m+1)},$$ and choose $\chi(s) \in {\mathcal C}^\infty_c( {\mathbb R})$ such that
$\chi \equiv 1$ for $| s | \leq 1$ and $\mathrm{supp}\, \chi \subset [-2,2]$.
Let
\[
\tilde{u}(x) = \chi(x/\gamma) u(x),
\]
and, since $| \varpi'(x) | \sim h^{m/(m+1)}$ for $| x | \leq 2
h^{1/(m+1)}$, we compute:
\begin{align*}
\| \tilde{u} \|_{L^2}^2 & = \int_{| x | \leq 2 \gamma} \chi(x/\gamma)^2 | u
|^2 dx \\
& \sim \int_{|x| \leq 2\gamma} \chi(x/\gamma)^2 | \varpi' |^{-1} dx \\
& \sim h^{1/(m+1)} h^{ -m/(m+1)} \\
& \sim h^{(1-m)/(1+m)}.
\end{align*}
Further, $\tilde{u}$ satisfies the following equation:
\begin{align*}
(hD)^2 \tilde{u} & = \chi(x/\gamma) (hD)^2 u + [(hD)^2, \chi(x/\gamma)] u \\
& = (\varpi')^2 \tilde{u} + f \tilde{u} + [(hD)^2, \chi(x/\gamma)] u \\
& = (\varpi')^2 \tilde{u} + R,
\end{align*}
where
\[
R = f \tilde{u} + [(hD)^2, \chi(x/\gamma)] u.
\]
\begin{lemma}
The remainder $R$ satisfies
\begin{equation}
\label{E:R-remainder}
\| R \|_{L^2} = {\mathcal O} (h^{2m/(m+1)}) \| \tilde{u} \|_{L^2}.
\end{equation}
\end{lemma}
\begin{proof}
We have already computed the function $f$, which is readily seen to
satisfy
\[
\| f \|_{L^\infty(\mathrm{supp}\, (\tilde{u} ))} = {\mathcal O}(h^{2m/(m+1)}),
\]
since $\mathrm{supp}\, (\tilde{u}) \subset \{ | x | \leq 2 h^{1/(m+1)} \}$.
On the other hand, since $\| \tilde{u} \|_{L^2} \sim h^{(1-m)/2(1+m)}$, we need only show that
\[
\| [(hD)^2, \chi(x/\gamma)] u\|_{L^2} \leq C h^{(3m+1)/2(m+1)}.
\]
We compute:
\begin{align*}
[(hD)^2, \chi(x/\gamma)] u & = -h^2 \gamma^{-2} \chi'' u + 2\frac{h}{i}
\gamma^{-1} \chi' hD u \\
& = -h^2 \gamma^{-2} \chi'' u + 2\frac{h}{i}
\gamma^{-1} \chi' \left(-\frac{h}{2i} \frac{\varpi''}{\varpi'} + \varpi'
\right) u \\
& = -h^2 \gamma^{-2} \chi'' u + 2\frac{h}{i}
\gamma^{-1} \chi' \left( -\frac{h}{2i} \frac{ x^{2m-1}}{(\varpi')^2} +
\varpi' \right) u.
\end{align*}
The first term is estimated:
\[
\| h^2 \gamma^{-2} \chi'' u \|_{L^2} = {\mathcal O}(h^{2m/(m+1)}) \| u
\|_{L^2(\mathrm{supp}\, ( \tilde{u}))} ={\mathcal O}(h^{(3m+1)/2(m+1)}).
\]
Similarly, the remaining two terms are estimated:
\begin{align*}
& \left\| 2\frac{h}{i}
\gamma^{-1} \chi' \left( -\frac{h}{2i} \frac{ x^{2m-1}}{(\varpi')^2} +
\varpi' \right) u \right\|_{L^2} \\
& \quad = {\mathcal O}(h^{m/(m+1)} h^1 h^{(2m-1)/(m+1)}
h^{-2m/(m+1)}) \| u
\|_{L^2(\mathrm{supp}\, ( \tilde{u}))} \\
& \quad \quad + {\mathcal O}(h^{m/(m+1)} h^{2m/(m+1)} ) \| u
\|_{L^2(\mathrm{supp}\, ( \tilde{u}))} \\
& \quad = {\mathcal O}(h^{(3m+1)/2(m+1)}).
\end{align*}
\end{proof}
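The exponent identities used in this proof can likewise be spot-checked in exact rational arithmetic; the snippet below is an illustrative aside (the sampled values of $m$ are arbitrary), not part of the proof.

```python
from fractions import Fraction as F

for m in range(1, 13):
    # h^2 gamma^{-2}, with gamma = h^{1/(m+1)}:
    assert 2 - F(2, m + 1) == F(2 * m, m + 1)
    # h gamma^{-1} * h x^{2m-1} (varpi')^{-2} on supp(chi'), where
    # x ~ h^{1/(m+1)} and varpi' ~ h^{m/(m+1)}:
    assert F(m, m + 1) + 1 + F(2 * m - 1, m + 1) - F(2 * m, m + 1) == F(2 * m, m + 1)
    # multiplied by ||u||_{L^2(supp(tilde u))} ~ h^{(1-m)/2(m+1)}:
    assert F(2 * m, m + 1) + F(1 - m, 2 * (m + 1)) == F(3 * m + 1, 2 * (m + 1))
```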
\subsection{Sharp local smoothing of quasimodes}
In this subsection, we show that the quasimode $\tilde{u}$ constructed above
can be used to
saturate the local smoothing estimate of Theorem \ref{T:smoothing}.
We observe that due to the estimate \eqref{E:est-away-0}, we have
perfect local smoothing in the ``radial'' direction $x$. Hence we
only consider $\theta$ regularity.
\begin{theorem}
\label{T:sharp}
Let $\varphi_0(x, \theta) = e^{ik \theta} \tilde{u}(x)$, where $\tilde{u} \in {\mathcal C}^\infty_c (
{\mathbb R})$ was constructed in the previous section, with $h=\abs{k}^{-1},$ where $|k|$ is
taken sufficiently large, and the parameter $m \geq 2$ as usual. Suppose $\psi$
solves
\[
\begin{cases}
(D_t + \widetilde{\Delta}) \psi = 0, \\
\psi|_{t=0} = \varphi_0.
\end{cases}
\]
Then for any $\chi
\in {\mathcal C}^\infty_c( {\mathbb R})$ such that $\chi \equiv 1$ on $\mathrm{supp}\, \tilde{u}$, and
$A>0$ sufficiently large, independent of $k$, there exists a constant
$C_0>0$ independent of $k$ such that
\begin{equation}\label{saturated}
\int_0^{|k|^{-2/(m+1)}/A} \| \left\langle D_\theta \right\rangle \chi \psi \|_{L^2}^2
dt \geq C_0^{-1} \| \left\langle D_\theta \right\rangle^{m/(m+1)} \varphi_0 \|_{L^2}^2.
\end{equation}
\end{theorem}
\begin{remark}
The theorem states that on the {\it weak semiclassical} time scale
$| t | \lesssim |k|^{-2/(m+1)}$, the
local smoothing estimate in this paper is sharp. Evidently this
implies that on any fixed finite time scale the theorem is sharp;
the theorem stated above gives more information. That is, it
demonstrates that even on a semiclassical time scale, the local
smoothing estimate really cannot be improved.
In addition, as the proof will indicate, no weight $\chi$ is
necessary, because on the semiclassical time scale we have essentially
finite propagation speed.
\end{remark}
\begin{remark}
The analogue of Theorem \ref{T:sharp} when the parameter $m = 1$,
where a log loss is expected, is contained in the work of
Bony-Burq-Ramond \cite{BBT-lower}.
\end{remark}
\begin{proof}
The technique of proof is to simply evolve the stationary quasimode as
if the equation were separated. The advantage is that this separated
stationary ``solution'' remains compactly supported for all time. Of
course this is not an exact solution, and generates an
inhomogeneous error which must be estimated using energy estimates.
It is here that we use the semiclassical time scale. Combining the
two estimates yields the theorem.
Let $\tilde{u}$ be as above for $h =
| k |^{-1}$ (after a suitable $L^2$ normalization), and let
\[
\varphi_0 (x, \theta) = \tilde{u} e^{ik \theta},
\]
as in the statement of the theorem.
Let $$\varphi(t, x, \theta) = e^{it \tau} \varphi_0$$ for some $\tau \in {\mathbb C}$
to be determined. Since the support of $\tilde{u}$ is very small, contained
in $\{ | x | \leq 2 h^{1/(m+1)} \}$, we have
\[
A^{-2} = (1 + x^{2m} )^{-1/m} = 1 -\frac{1}{m} x^{2m} +
{\mathcal O}(h^{4m/(m+1)})
\]
on $\mathrm{supp}\, \tilde{u}$. Then
\begin{align*}
(D_t + \widetilde{\Delta}) \varphi & = (D_t + P_k )\varphi \\
& = ( \tau - D_x^2 - A^{-2} k^2 - V_1(x) ) \varphi \\
& = k^2 e^{it \tau} e^{i k \theta} \left[\left( \tau k^{-2} - (k^{-2}D_x^2 + 1 - \frac{1}{m}
x^{2m} ) \right) \tilde{u} + {\mathcal O}( k^{-2}) \tilde{u} \right] \\
& = k^2 e^{it \tau} e^{i k \theta} \left[ \left( \tau k^{-2} - 1 -
E_0 \right) \tilde{u} + R + {\mathcal O}( k^{-2}) \tilde{u} \right],
\end{align*}
where $R$ satisfies the remainder estimate \eqref{E:R-remainder},
i.e.,
$$
\|R\|_{L^2} = {\mathcal O} ( h^{2m/(m+1)}).
$$ Set
\[
\tau = k^2 + k^2 E_0 = k^2(1 + \alpha k^{-2m/(m+1)}) + i \beta
k^{2/(m+1)}, \,\,\, \alpha, \beta >0
\]
so that we have
\[
\begin{cases}
(D_t + \widetilde{\Delta}) \varphi = \tilde{R}, \\
\varphi(0, x, \theta) = \varphi_0
\end{cases}
\]
with
\begin{equation}
\label{E:tR}
\tilde{R} = k^2 e^{i t \tau } e^{i k \theta} (R(x, k) + {\mathcal O}(k^{-2}) \tilde{u} ).
\end{equation}
Thus, we obtain
\begin{equation}\label{tRestimates}
\|\tilde{R}\| \leq C k^2 \lvert e^{it\tau}\rvert k^{-2m/(m+1)} \|\tilde{u}\|
\end{equation}
and since on every function in question, $\left\langle D_\theta \right\rangle = \left\langle k
\right\rangle,$ we furthermore have
\begin{equation}\label{tRestimates2}
\|\left\langle D_\theta\right\rangle \tilde{R}\| \leq C k^2 \lvert e^{it\tau}\rvert k^{-2m/(m+1)} \|\left\langle D_\theta\right\rangle\varphi_0\|
\end{equation}
We can readily verify that $\varphi$ saturates the local smoothing
estimate of Theorem \ref{T:smoothing} {\it on any time scale}:
\begin{align}
\| \left\langle D_\theta \right\rangle \varphi \|_{L^2([0,T])L^2}^2 & = \int_0^T \| e^{it \tau} \left\langle D_\theta \right\rangle\varphi_0
\|^2_{L^2} dt \notag \\
& = \int_0^T e^{-2t\beta |k|^{2/(m+1)}} \| \left\langle D_\theta \right\rangle \varphi_0 \|_{L^2}^2 dt \notag
\\
& = \frac{1 - e^{-2 T\beta | k |^{2/(m+1)}}}{2 \beta |k|^{2/(m+1)}} \|
\left\langle D_\theta \right\rangle \varphi_0
\|_{L^2}^2 \notag \\
& = \frac{1 - e^{-2 T\beta | k |^{2/(m+1)}}}{ 2 \beta } \| |
D_\theta|^{-1/(m+1)} \left\langle D_\theta \right\rangle
\varphi_0 \|_{L^2}^2 \notag \\
& = B \| |
D_\theta|^{-1/(m+1)} \left\langle D_\theta \right\rangle
\varphi_0 \|_{L^2}^2,
\label{E:sharp-smoothing}
\end{align}
where we let
\begin{equation}\label{Bdefn}
B= \frac{1 - e^{-2 T\beta | k |^{2/(m+1)}}}{ 2 \beta }.
\end{equation}
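The closed form for $B$ comes from the elementary time integral $\int_0^T e^{-at}\, dt = (1 - e^{-aT})/a$ with $a = 2\beta |k|^{2/(m+1)}$. As an illustrative numerical aside (the values of $\beta$, $k$, $m$, $T$ below are arbitrary), a midpoint-rule quadrature reproduces it:

```python
from math import exp

# Spot check of  int_0^T e^{-2 t beta |k|^{2/(m+1)}} dt
#   = (1 - e^{-2 T beta |k|^{2/(m+1)}}) / (2 beta |k|^{2/(m+1)}),
# with arbitrary illustrative parameter values.
beta, k, m, T = 0.7, 50.0, 2, 0.3
a = 2 * beta * k ** (2 / (m + 1))
N = 200_000
riemann = sum(exp(-a * (i + 0.5) * T / N) for i in range(N)) * (T / N)
closed = (1 - exp(-a * T)) / a
assert abs(riemann - closed) < 1e-6
```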
What we aim to do, then, is show that $\varphi$ is close enough to a
solution of the Schr\"odinger equation that the same form of estimate
holds for the nearby solution with initial data $\varphi_0.$
Thus, let $L(t)$ be the unitary Schr\"odinger propagator:
\[
\begin{cases}
(D_t + \widetilde{\Delta}) L = 0, \\
L(0) = \,\mathrm{id}\,,
\end{cases}
\]
and write using Duhamel's formula:
\[
\varphi(t) = L(t) \varphi_0 + i \int_0^t L(t) L^*(s) \tilde{R}(s) ds =: \varphi_{\text{h}} + \varphi_{\text{ih}},
\]
where $\varphi_{\text{h}}$ and $\varphi_{\text{ih}}$ are the homogeneous and
inhomogeneous parts respectively. We want to show the
homogeneous smoothing effect is saturated, for which we need to show
the
inhomogeneous term is small and can be absorbed into the homogeneous term.
For this, we use an energy estimate, and localize in time to a scale
depending on $k$. That is, if $$E(t) = \| \left\langle D_\theta \right\rangle \varphi_{\text{ih}} \|^2_{L^2},$$ we
have
\begin{align*}
E' & = 2 \,\mathrm{Re}\, \left\langle \left\langle D_\theta \right\rangle \partial_t \varphi_{\text{ih}}, \left\langle
D_\theta \right\rangle \varphi_{\text{ih}}
\right\rangle_{L^2} \\
& = 2 \,\mathrm{Re}\, \left\langle \left\langle D_\theta \right\rangle ( -i \widetilde{\Delta} \varphi_{\text{ih}} + i
\tilde{R}) , \left\langle D_\theta \right\rangle\varphi_{\text{ih}}
\right\rangle_{L^2} \\
& = 2 \,\mathrm{Re}\, \left\langle i \left\langle D_\theta \right\rangle \tilde{R}, \left\langle D_\theta \right\rangle\varphi_{\text{ih}}
\right\rangle_{L^2} \\
& \leq \nu \| \left\langle D_\theta \right\rangle \tilde{R} \|_{L^2}^2 + \nu^{-1} E,
\end{align*}
for $\nu>0$ to be determined, so that
\begin{align*}
E(t) & \leq e^{\nu^{-1}t} \int_0^t e^{-\nu^{-1} s} \left( \nu \|
\left\langle D_\theta \right\rangle \tilde{R}(s) \|_{L^2}^2 \right) ds \\
& \leq e^{\nu^{-1} t} \left( \nu \| \left\langle D_\theta \right\rangle \tilde{R}
\|_{L^2([0,T]) L^2 }^2 \right) \\
& \leq C \nu e^{\nu^{-1}t} \left( \int_0^T | e^{2is \tau} | ds \right)(k^2
k^{-2m/(m+1)})^2 \| \left\langle D_\theta \right\rangle \varphi_0 \|_{L^2}^2 \\
& \leq C (\nu e^{\nu^{-1}t}) \left( \frac{ 1 - e^{-2 \beta |k|^{2/(m+1)} T}}{2
\beta} \right) \\
& \quad \times |k|^{-2/(m+1)} (k^2
k^{-2m/(m+1)})^2 \| \left\langle D_\theta \right\rangle \varphi_0 \|_{L^2}^2.
\end{align*}
Here we have used the estimate \eqref{tRestimates2} in the middle inequality.
Integrating in $0 \leq t \leq T$, we get
\begin{align*}
\| \left\langle D_\theta \right\rangle \varphi_{\text{ih}} \|^2_{L^2([0,T]) L^2} & \leq C
\nu \left( \int_0^T
e^{\nu^{-1}t} dt \right) B |k|^{2/(m+1)} \| \left\langle D_\theta \right\rangle \varphi_0 \|_{L^2}^2 \\
& \leq C\nu^2 (e^{\nu^{-1} T} - 1) B |k|^{2/(m+1)} \| \left\langle
D_\theta \right\rangle \varphi_0
\|_{L^2}^2,
\end{align*}
where $B$ is given by \eqref{Bdefn}.
If $\nu = |k|^{-2/(m+1)}/A$ for some large $A>0$ independent of
$k$, and if we take $T = \nu$, we have
\begin{align}
\| \left\langle D_\theta \right\rangle \varphi_{\text{ih}} \|^2_{L^2([0,T]) L^2} & \leq \frac{C}{A^2} (e-1) B
|k|^{-2/(m+1)} \| \left\langle D_\theta \right\rangle \varphi_0
\|_{L^2}^2 \notag \\
& = c_0 B \| |D_\theta|^{-1/(m+1)}\left\langle D_\theta \right\rangle \varphi_0 \|^2_{L^2}, \label{E:phi-ih}
\end{align}
where
\[
c_0 = \frac{C}{A^2} (e-1) \leq \frac{1}{4}
\]
if we take $A$ sufficiently large.
Now, since $T$ depends on $k$, so does $B$ in principle, but
\[
B = \left( \frac{ 1 - e^{-2 \beta |k|^{2/(m+1)} T}}{2
\beta} \right) = \left( \frac{ 1 - e^{-2 \beta /A}}{2
\beta} \right) >0
\]
independent of $k$. Hence, combining \eqref{E:sharp-smoothing} with
\eqref{E:phi-ih}, we have for this choice of $T$ and $A$, and $\chi$
as in the statement of the theorem,
\begin{align*}
\| \left\langle D_\theta \right\rangle \chi L(t) \varphi_0 \|_{L^2([0,T]) L^2} & \geq \| \left\langle
D_\theta \right\rangle \chi \varphi \|_{L^2([0,T]) L^2}
- \| \left\langle D_\theta \right\rangle \chi \varphi_{\text{ih}} \|_{L^2([0,T]) L^2} \\
& \geq \| \left\langle
D_\theta \right\rangle \varphi \|_{L^2([0,T]) L^2}
- \| \left\langle D_\theta \right\rangle \varphi_{\text{ih}} \|_{L^2([0,T]) L^2} \\
& \geq \frac{1}{2} B^{1/2} \| | D_\theta|^{-1/(m+1)} \left\langle D_\theta
\right\rangle \varphi_0 \|_{L^2}
\end{align*}
(using $\mathrm{supp}\, \varphi\subset \{\chi=1\}$), with $B>0$ independent
of $k$ as above.
Thus, $\psi=\varphi_{\text{h}}$ satisfies \eqref{saturated}, since
$\varphi$ satisfies the estimate, while $\varphi_{\text{ih}}$ is small
enough to be absorbed.
\end{proof}
\section{Sharp resolvent estimates with loss}
We now prove Theorem~\ref{T:resolvent}.
We begin with the sharpness. By the $TT^*$ argument (see, for
example, \cite{Chr-sch2}), if a better resolvent estimate held true, then a better
local smoothing estimate would also hold true. But Theorem \ref{T:sharp}
shows the local smoothing estimate in Theorem \ref{T:smoothing} is
sharp. Hence no better polynomial rate of decay in the resolvent
estimate can hold true.
In order to show the estimate is true in the first place, we conjugate
the Laplacian on $X$ and decompose in Fourier modes as usual
\[
(-\widetilde{\Delta} -\lambda^2) = \bigoplus_{k = - \infty}^\infty (L_k -
\lambda^2).
\]
We break this sum into two pieces where either $k^2 \leq \lambda^2/2$
or not. If $k^2 \leq \lambda^2/2$, then we use $h = \lambda^{-1}$ as
our semiclassical parameter, while if $k^2 > \lambda^2 / 2$, we use $h
= |k|^{-1}$. We get
\[
(-\widetilde{\Delta} - \lambda^2) = \bigoplus_{k^2 \leq \lambda^2/2} \lambda^2 (
\tilde{L}_k - z_k) \oplus \bigoplus_{k^2 > \lambda^2/2} k^2 (\tilde{L}_k - z_k ).
\]
Here
\[
\tilde{L}_k = \begin{cases}
-h^2 \partial_x^2 + \frac{k^2}{\lambda^2} A^{-2}(x) + h^2 V_1(x),
\text{ if } k^2 \leq \lambda^2/2, \\
-h^2 \partial_x^2 + A^{-2}(x) + h^2 V_1(x), \text{ if } k^2 >
\lambda^2 / 2,
\end{cases}
\]
and
\[
z_k = \begin{cases} 1, \text{ if } k^2 \leq \lambda^2/2, \\
k^{-2} \lambda^2 , \text{ if } k^2 > \lambda^2 /2.
\end{cases}
\]
In the case $k^2 \leq \lambda^2/2$, the energy level $z_k$ is
non-trapping, so by Proposition~\ref{P:ml-res-est} the operator $\tilde{L}_k$
satisfies the estimate
\[
\| \chi(\tilde{L}_k - z_k )^{-1} \chi \|_{L^2 \to L^2} \leq C h^{-1} = C
\lambda,
\]
with constants uniform as $|k|$ and $\lambda$ both go to infinity. In
the case $k^2 > \lambda^2/2$, the operator $\tilde{L}_k$ satisfies the
estimate
\[
\| \chi (\tilde{L}_k - z_k )^{-1} \chi \|_{L^2 \to L^2} \leq C h^{-2m/(m+1)}
= C |k|^{2m/(m+1)},
\]
with constants independent of $\lambda$ as $|k| \to \infty$. Hence by
orthogonality of the Fourier eigenspaces,
\begin{align*}
\| \chi R(\lambda) \chi \|_{L^2 \to L^2} & \leq \max \{ \max_{|k|^2 \leq
\lambda^2/2} \lambda^{-2} \| \chi(\tilde{L}_k - z_k)^{-1} \chi \|, \\
& \quad \quad \quad \quad
\sup_{|k|^2 > \lambda^2/2} |k|^{-2} \| \chi(\tilde{L}_k - z_k)^{-1} \chi \| \}
\\
& \leq \max \{ C \lambda^{-1}, C \sup_{|k|^2 > \lambda^2/2}
|k|^{-2/(m+1)} \} \\
& \leq C \lambda^{-2/(m+1)}.\qed
\end{align*}
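The exponent arithmetic in the trapped regime, $|k|^{-2}\, |k|^{2m/(m+1)} = |k|^{-2/(m+1)}$, can be spot-checked in exact rational arithmetic (an illustrative aside):

```python
from fractions import Fraction as F

# Exponent bookkeeping for the trapped regime |k|^2 > lambda^2/2:
#   |k|^{-2} * |k|^{2m/(m+1)} = |k|^{-2/(m+1)}.
for m in range(1, 13):
    assert F(2 * m, m + 1) - 2 == F(-2, m + 1)
```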
\section{Introduction} \label{intro}
One of the most important practical questions which arises when we
are designing or using an information transmission or processing
system is: How much information can this system transmit or process
in a given time? Information theory, developed by Claude E. Shannon
during World War II, defines the notion of channel capacity and
provides a mathematical model by which one can compute it.
Basically, Shannon's coding theorem and all newer versions of it treat
the question of how much data can be reliably communicated from one
point, or set of points, to another point or set of points.
The class of channels to be considered includes multiple transmitters
and a single receiver. The received signal is corrupted both by
noise and by mutual interference between the transmitters. Each of the
transmitters is fed by an information source, and each information
source generates a sequence of messages. More specifically, a
two-user DM-MAC is defined by a stochastic matrix\footnote{We use
the following notation throughout this work. Script capitals
$\mathcal{U}$, $\mathcal{X}$, $\mathcal{Y}$, $\mathcal{Z}$,$\ldots$
denote finite, nonempty sets. To show the cardinality of a set
$\mathcal{X}$, we use $|\mathcal{X}|$. We also use the letters $P$,
$Q$,$\ldots$ for probability distributions on finite sets, and $U$,
$X$, $Y$,$\ldots$ for random variables.}
$W:\X \times \Y \rightarrow \Z$, where the input alphabets, $\X$,
$\Y$, and the output alphabet, $\Z$, are finite sets. The channel
transition probability for sequences of length $n$ is given by
\begin{align}
W^n\left(\mathbf{z}|\mathbf{x},\mathbf{y}\right) \triangleq
\prod_{i=1}^n W\left(z_i|x_i,y_i\right)
\end{align}
\indent where
\begin{align*}
\mathbf{x}\triangleq \left(x_1,...,x_n\right) \in \mathcal{X}^n,
\mathbf{y}\triangleq\left(y_1,...,y_n\right) \in \mathcal{Y}^n
\end{align*}
\indent and
\begin{align*}
\mathbf{z}\triangleq \left(z_1,...,z_n\right) \in \mathcal{Z}^n.
\end{align*}
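As a small illustrative sketch, the memoryless product formula above can be evaluated directly; the particular per-letter channel below (a noisy XOR with crossover probability $0.1$) is a hypothetical choice, while the product structure itself is channel-independent.

```python
import itertools

# Hypothetical per-letter channel on binary input alphabets:
# Z is the XOR of the inputs, flipped with probability 0.1.
def W(z, x, y, eps=0.1):
    return 1 - eps if z == (x ^ y) else eps

def W_n(z_seq, x_seq, y_seq):
    """Memoryless extension W^n: product of per-letter probabilities."""
    p = 1.0
    for z, x, y in zip(z_seq, x_seq, y_seq):
        p *= W(z, x, y)
    return p

x_seq, y_seq = (0, 1, 1), (1, 1, 0)
# Sanity check: W^n(. | x, y) is a probability distribution on Z^n.
total = sum(W_n(z, x_seq, y_seq)
            for z in itertools.product((0, 1), repeat=3))
assert abs(total - 1.0) < 1e-12
```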
It has been proven, by Ahlswede~\cite{Ahlswede71} and
Liao's~\cite{Liao} coding theorem, that for any
$\left(R_X,R_Y\right)$ in the interior of a certain set
$\mathcal{C}$, and for all sufficiently large $n$, there exists a
multiuser code with an arbitrary small average probability of error.
Conversely, for any $\left(R_X,R_Y\right)$ outside of $\mathcal{C}$,
the average probability of error is bounded away from 0. The set
$\mathcal{C}$, called \emph{capacity region} for $W$, is the closure
of the set of all rate pairs $\left(R_X,R_Y\right)$
satisfying~\cite{SlWo73}
\begin{subequations}
\begin{align}
0 &\leq R_X \leq I\left(X \wedge Z|Y,U\right)\\
0 &\leq R_Y \leq I\left(Y \wedge Z|X,U\right)\\
0 &\leq R_X+R_Y \leq I\left(XY \wedge Z|U\right),
\end{align}
\end{subequations}
for all choices of joint distributions over the random variables
$U,\ X,\ Y,\ Z$ of the form
$p\left(u\right)p\left(x|u\right)p\left(y|u\right)W\left(z|x,y\right)$
with $U \in \U$ and $|\mathcal{U}| \leq 4$. As we can see, this
theorem is asymptotic in nature, i.e., it proves
that the error probability of a channel code can go to zero as the
block length goes to infinity. It does not tell us how large the
block length must be in order to achieve a specific error
probability. On the other hand, in practical situations, there are
limitations on the delay of the communication. Additionally, the
block length of the code cannot go to infinity. Therefore, it is
important to study how the probability of error drops as the block
length goes to infinity. A partial answer to this question is
provided by examining the error exponent of the channel.
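As a concrete illustration of the rate constraints above, the three mutual informations can be computed numerically for the noiseless binary adder MAC $Z = X + Y$ with independent uniform inputs and trivial time sharing ($|\mathcal{U}| = 1$); this standard textbook channel is used here purely for illustration.

```python
from math import log2
from itertools import product

def H(dist):
    """Entropy (in bits) of a distribution given as a dict of probabilities."""
    return -sum(p * log2(p) for p in dist.values() if p > 0)

# Noiseless binary adder MAC, Z = X + Y, independent uniform inputs.
joint = {}
for x, y in product((0, 1), repeat=2):
    joint[(x, y, x + y)] = joint.get((x, y, x + y), 0) + 0.25

def marginal(keep):
    out = {}
    for (x, y, z), p in joint.items():
        key = tuple({'x': x, 'y': y, 'z': z}[v] for v in keep)
        out[key] = out.get(key, 0) + p
    return out

# I(X;Z|Y) = H(X,Y) + H(Y,Z) - H(Y) - H(X,Y,Z), and similarly for the rest.
I_X_Z_given_Y = H(marginal('xy')) + H(marginal('yz')) - H(marginal('y')) - H(joint)
I_Y_Z_given_X = H(marginal('xy')) + H(marginal('xz')) - H(marginal('x')) - H(joint)
I_XY_Z = H(marginal('xy')) + H(marginal('z')) - H(joint)

print(I_X_Z_given_Y, I_Y_Z_given_X, I_XY_Z)   # → 1.0 1.0 1.5
```

The three values, $1.0$, $1.0$, and $1.5$ bits, recover the familiar pentagonal capacity region of the binary adder channel.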
Error exponents have been studied for discrete memoryless
multiple-access channels over the past thirty years. Lower and upper
bounds are known on the error exponent of these channels. The random
coding bound in information theory provides a well-known lower bound
for the reliability function of the best code, of a given rate and
block length. This bound is constructed by upper-bounding the
average error probability over an ensemble of codes. Slepian and
Wolf~\cite{SlWo73}, Dyachkov~\cite{Dyachkov},
Gallager~\cite{Gallager-Multiaccess}, Pokorny and
Wallmeier~\cite{Pokorney}, and Liu and
Hughes~\cite{Liu-RandomCoding} have all studied the random coding
bound for discrete memoryless multiple access channels. Nazari et
al.~\cite{aransspaa09} investigated two different upper bounds on
the average probability of error, called the typical random coding
bound and the partial expurgated bound. The typical bound is
basically the typical performance of the ensemble. By this, we mean
that almost all random codes exhibit this performance. In addition,
they have shown that the typical random code performs better than
the average performance over the random coding ensemble, at least,
at low rates. The random coding exponent may be improved at low
rates by a process called ``partial expurgation'' which yields a new
bound that exceeds the random coding bound at low rates.
Haroutunian~\cite{Haroutunian} and
Nazari~\cite{nazari08,nazaridistance09} studied upper bounds on the
error exponent of multiple access channels. In multi-user
information theory, the sphere packing bound provides a well-known
upper bound on the reliability function of the multiple access channel.
The sphere packing bound that Haroutunian~\cite{Haroutunian} derived
on the average error exponent for DM-MAC is potentially loose, as it
does not capture the separation of the encoders in the MAC. Nazari
et al.~\cite{nazari08} derived another sphere packing bound which
takes into account separation of the encoders. The bound
in~\cite{nazari08} turns out to be at least as good as the bound
derived in~\cite{Haroutunian}, however it is a valid bound only for
the maximal error exponent and not the average. The sphere packing
bound is a good bound in the high rate regime. Nevertheless, it tends to
be a loose bound in the low rate regime. It can be shown that in the low
rate regime, the minimum distance of the code dominates the
probability of error. Using the minimum distance of the code,
Nazari et al.~\cite{nazaridistance09} derived another upper bound for the
maximal error exponent of DM-MAC. To derive the minimum distance
bound, they established a connection between the minimum distance of
the code and the maximum probability of error; then, by obtaining an
upper bound on the minimum distance of all codes with certain rates,
they derived a lower bound on the maximal error probability that can
be obtained by a code with a certain rate pair.
The paper is organized as follows. Some preliminaries are introduced
in section~\ref{prelim}. The main result of the paper, which is an
upper bound on the reliability function of the channel, is obtained
in section~\ref{SpherePack}. In section~\ref{MaxvsAvg}, by using a
known upper bound on the maximum error exponent function, we derive
an upper bound on the average error exponent function. The proofs of
some of these results are given in the Appendix.
\section{Preliminaries}\label{prelim}
For any alphabet $\mathcal{X}$, $\mathcal{P}\left(\mathcal{X}\right)$ denotes
the set of all probability distributions on $\mathcal{X}$. The
\emph{type} of a sequence $\mathbf{x}=\left(x_1,...,x_n\right) \in
\mathcal{X}^n$ is the distributions $P_{\mathbf{x}}$, on
$\mathcal{X}$, defined by:
\begin{align}
P_{\mathbf{x}}\left(x\right)\triangleq
\frac{1}{n}N\left(x|\mathbf{x}\right),\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;
x \in \mathcal{X},
\end{align}
where $N\left(x|\mathbf{x}\right)$ denotes the number of occurrences
of $x$ in $\mathbf{x}$. Let $\mathcal{P}_n \left(\mathcal{X}\right)$
denote the set of all types in $\mathcal{X}^n$, and define the set
of all sequences in $\mathcal{X}^n$ of type $P$ as
\begin{align}
T_P \triangleq \{\mathbf{x} \in \mathcal{X}^n: P_{\mathbf{x}}=P\}.
\end{align}
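The type of a sequence, and the size of a type class, are easy to compute directly. The snippet below is an illustrative sketch (the alphabet and sequence are arbitrary); it uses the standard fact that $|T_P|$ is a multinomial coefficient.

```python
from collections import Counter
from math import factorial, prod

def type_of(seq):
    """The empirical distribution (type) P_x of a sequence."""
    n = len(seq)
    return {a: c / n for a, c in Counter(seq).items()}

def type_class_size(n, counts):
    """|T_P| = n! / prod_x N(x|x)!, a multinomial coefficient."""
    return factorial(n) // prod(factorial(c) for c in counts)

x = ('a', 'b', 'a', 'a')
print(type_of(x))                  # → {'a': 0.75, 'b': 0.25}
print(type_class_size(4, (3, 1)))  # → 4
```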
The joint type of a pair $\left(\mathbf{x},\mathbf{y}\right) \in
\mathcal{X}^n \times \mathcal{Y}^n$ is the probability distribution
$P_{\mathbf{x},\mathbf{y}}$ on $\mathcal{X} \times \mathcal{Y}$
defined by:
\begin{align}
P_{\mathbf{x},\mathbf{y}}\left(x,y\right)\triangleq
\frac{1}{n}N\left(x,y|\mathbf{x},\mathbf{y}\right),\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;
\left(x,y\right) \in \mathcal{X} \times \mathcal{Y},
\end{align}
where $N\left(x,y|\mathbf{x},\mathbf{y}\right)$ is the number of
occurrences of $\left(x,y\right)$ in ($\mathbf{x},\mathbf{y}$). The
relative entropy or \emph{Kullback-Leibler} distance between two
probability distributions $P, Q \in \mathcal{P}\left(\mathcal{X}\right)$ is
defined as
\begin{align}
D\left(P||Q\right) \triangleq \sum_{x \in
\mathcal{X}}P\left(x\right)\log\frac{P\left(x\right)}{Q\left(x\right)}.
\end{align}
Let $\mathcal{W}\left(\mathcal{Y}|\mathcal{X}\right)$ denote the set of all stochastic
matrices with input alphabet $\mathcal{X}$ and output alphabet
$\mathcal{Y}$. Then, given stochastic matrices $V,\ W \in
\mathcal{W}\left(\mathcal{Y}|\mathcal{X}\right)$, the conditional \emph{I-divergence} is
defined by
\begin{align}
D\left(V||W|P\right) \triangleq \sum_{x \in
\mathcal{X}}P\left(x\right)D\left(V\left(\cdot|x\right)||W\left(\cdot|x\right)\right).
\end{align}
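Both divergences can be computed in a few lines; the distributions below are hypothetical numbers chosen only for illustration.

```python
from math import log2

def kl(P, Q):
    """D(P||Q) in bits; distributions given as dicts over a common alphabet."""
    return sum(p * log2(p / Q[a]) for a, p in P.items() if p > 0)

def cond_kl(V, W, P):
    """Conditional I-divergence D(V||W|P) = sum_x P(x) D(V(.|x)||W(.|x))."""
    return sum(P[x] * kl(V[x], W[x]) for x in P)

# Hypothetical numbers, for illustration only:
P = {0: 0.5, 1: 0.5}
V = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.2, 1: 0.8}}
W = {0: {0: 0.8, 1: 0.2}, 1: {0: 0.3, 1: 0.7}}

assert cond_kl(V, V, P) == 0.0   # D(V||V|P) = 0
assert cond_kl(V, W, P) > 0.0    # positive whenever V differs from W on supp(P)
```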
\begin{definition}
An $\left(n,M,N\right)$ multi-user code is a set
$\{\left(\mathbf{x}_i,\mathbf{y}_j,D_{ij}\right): 1 \leq i \leq M, 1
\leq j \leq N\}$ with
\begin{itemize}
\item $\mathbf{x}_i \in \mathcal{X}^n$, $\mathbf{y}_j \in
\mathcal{Y}^n$, $D_{ij} \subset \mathcal{Z}^n$
\item $D_{ij} \cap D_{i'j'}=\varnothing$ for $\left(i,j\right) \neq
\left(i',j'\right)$.
\end{itemize}
The average error probability of this code for the MAC, $W:\X \times
\Y \rightarrow \Z$, is defined as
\begin{align}
e\left(\C,W\right) \triangleq \frac{1}{M
N}\sum_{i=1}^{M}\sum_{j=1}^{N}W^n\left(D_{i,j}^c|\mathbf{x}_i,\mathbf{y}_j\right).
\end{align}
Similarly, the maximal error probability of this code for $W$ is
defined as
\begin{align}
e_m\left(\C,W\right) \triangleq \max_{\substack{\left(i,j\right)}}
W^n\left(D_{i,j}^c|\mathbf{x}_i,\mathbf{y}_j\right).
\end{align}
\end{definition}
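A minimal sketch of these definitions follows; the length-2 code, the noisy binary adder channel, and the maximum-likelihood choice of the decoding sets $D_{ij}$ are all illustrative assumptions.

```python
import itertools

EPS = 0.1
def W(z, x, y):
    """Hypothetical noisy adder: z = x + y with prob 1 - EPS, else EPS/2."""
    return 1 - EPS if z == x + y else EPS / 2

def Wn(zs, xs, ys):
    p = 1.0
    for z, x, y in zip(zs, xs, ys):
        p *= W(z, x, y)
    return p

code_X = [(0, 0), (1, 1)]
code_Y = [(0, 1), (1, 0)]

# Disjoint decoding sets D_ij induced by maximum-likelihood decoding of
# the pair (i, j); ties are broken by iteration order.
decoder = {}
for zs in itertools.product((0, 1, 2), repeat=2):
    decoder[zs] = max(((i, j) for i in range(2) for j in range(2)),
                      key=lambda ij: Wn(zs, code_X[ij[0]], code_Y[ij[1]]))

errs = [sum(Wn(zs, code_X[i], code_Y[j]) for zs in decoder if decoder[zs] != (i, j))
        for i in range(2) for j in range(2)]
avg_err = sum(errs) / len(errs)   # e(C, W)
max_err = max(errs)               # e_m(C, W)
```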
\begin{definition}
For the MAC, $W:\X \times \Y \rightarrow \Z$, the average and
maximal error reliability functions, at rate pair
$\left(R_X,R_Y\right)$, are defined as:
\begin{align}
&E^*_{av}\left(R_X,R_Y\right) \triangleq \lim_{n\rightarrow \infty}
\max_{\substack{\C}} -\frac{1}{n}
\log{e\left(\C,W\right)}\\
&E^*_{m}\left(R_X,R_Y\right) \triangleq \lim_{n\rightarrow \infty}
\max_{\substack{\C}} -\frac{1}{n} \log{e_m\left(\C,W\right)},
\end{align}
where the maximum is over all codes of length $n$ and rate pair
$\left(R_X,R_Y\right)$.
\end{definition}
\begin{definition}
A code $\C_X =\{ \mathbf{x}_i \in \X^n
:\;\; i=1,...,M_X \}$, for some $P_X$, is called a bad codebook, if
\begin{eqnarray}
\exists \;\left(i,j\right),\; i \neq j, \;\;\; &
\mathbf{x}_i=\mathbf{x}_j.
\end{eqnarray}
A codebook which is not bad is called a good one.
\end{definition}
\begin{definition}
A multi user code $\C=\C_X \times \C_Y$ is
called a good multi user code if both individual codebooks $\C_X$
and $\C_Y$ are good codes.
\end{definition}
\begin{definition}
For a good multi user code $\C=\C_X \times \C_Y$, and for a
particular type $P_{XY} \in \mathcal{P}_n\left(\mathcal{X}\times
\mathcal{Y}\right)$, we define
\begin{eqnarray}
R\left(\C,P_{XY}\right) \triangleq \frac{1}{n} \log|\C \cap
T_{P_{XY}}|
\end{eqnarray}
\end{definition}
\begin{definition}
For a sequence of joint types $P^n_{XY} \in \P_n\left(\X \times
\Y\right)$, with marginal types $P^n_X$ and $P^n_Y$, the sequence of
type graphs, $G_n$, is defined as follows: For every $n$, $G_n$ is a
bipartite graph, with its left vertices consisting of all $x^n \in
T_{P^n_X}$ and the right vertices consisting of all $y^n \in
T_{P^n_Y}$. A vertex on the left (say $\tilde{x}^n$) is connected to
a vertex on the right (say $\tilde{y}^n$) if and only if
$\left(\tilde{x}^n,\tilde{y}^n\right) \in T_{P^n_{XY}}$.
\end{definition}
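A toy instance of the type graph can be enumerated by brute force; the joint type below (on binary alphabets, with $n = 4$) is a hypothetical choice made only for illustration.

```python
from collections import Counter
from itertools import product

def type_of(seq):
    n = len(seq)
    return {a: c / n for a, c in Counter(seq).items()}

def joint_type(xs, ys):
    n = len(xs)
    return {k: c / n for k, c in Counter(zip(xs, ys)).items()}

n = 4
P_X = {0: 0.5, 1: 0.5}
P_Y = {0: 0.5, 1: 0.5}
P_XY = {(0, 0): 0.5, (1, 1): 0.5}   # hypothetical joint type with these marginals

left  = [xs for xs in product((0, 1), repeat=n) if type_of(xs) == P_X]
right = [ys for ys in product((0, 1), repeat=n) if type_of(ys) == P_Y]
edges = [(xs, ys) for xs in left for ys in right if joint_type(xs, ys) == P_XY]

# For this P_XY a pair is connected iff the two sequences coincide, so the
# type graph is a perfect matching on the 6 + 6 vertices.
assert len(left) == len(right) == 6
assert len(edges) == 6 and all(xs == ys for xs, ys in edges)
```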
\section{main result}\label{SpherePack}
The main result of this section is a new sphere packing bound for
the average error probability for a discrete memoryless multiple
access channel. The idea behind the derivation of this bound is
based on the property that is common among all good multi user codes
with certain rate pair. In the following, we first derive a sphere
packing bound for a good multiuser code. Next, we show that for any
bad multiuser code, there exists a good code with the same rate pair
and smaller average probability of error. Therefore, to obtain a
lower bound for the average error probability for the best code, we
only need to study good codes (codes without any repeated
codewords).
Now, consider a good multiuser code with blocklength $n$. Suppose
the number of messages of the first source is $M_X=2^{nR_X}$ and the
number of messages of the second source is $M_Y=2^{nR_Y}$. Assume
that all the messages of any source are equiprobable and the sources
are sending data independently. Under these assumptions, all
$M_XM_Y$ message pairs occur with equal probability. Thus, at the
input of the channel, we can see all possible
$2^{n\left(R_X+R_Y\right)}$ (an exponentially increasing function of
$n$) pairs of input sequences. However, we also know that the number
of possible types is a polynomial function of $n$. Thus, for at
least one joint type, the number of pairs of sequences in the multi
user code sharing that particular type must be an exponential
function of $n$, with rate arbitrarily close to the rate of the
multi user code. We will view these pairs of sequences as a
subcode, and then find a lower bound on the average error
probability of this subcode. In what follows, we show that this bound
is a valid lower bound on the average probability of error for the
original code.
\begin{lemma}~\cite{nazaridistance09}
For any $\delta > 0$, for any sufficiently large $n$, and for any
good $\left(n,2^{nR_X},2^{nR_Y}\right)$ multi user code $\C$, as
defined above, there exists $P_{XY} \in
\mathcal{P}_n\left(\mathcal{X}\times \mathcal{Y}\right)$ such that
\begin{eqnarray*}
R\left(\C,P_{XY}\right) \geq
R_X+R_Y-\delta\;\;\;\;\;\;\;\;\;\;\;\;\; \text{for sufficiently
large n},
\end{eqnarray*}
$P_{XY}$ is called a dominant type of $\C$.
\end{lemma}
Hence, for any good code, there must exist at least a joint type
which dominates the codebook. We can ask the following question: for
a multiuser code, with rate $\left(R_X,R_Y\right)$, can any joint
type potentially be its dominant type? As shown later, the answer to
this question helps us characterize a tighter sphere packing bound.
In response to this question, Nazari et al.~\cite{nazaridistance09}
studied the type graphs for different joint types and proved the
following result:
\begin{lemma}\cite{nazaridistance09} \label{U-Lemma}
For all sequences of nearly complete subgraphs of a particular type
graph $T_{P_{XY}}$, the rates of the subgraph $\left(R_X,R_Y\right)$
must satisfy
\begin{eqnarray}
R_X \leq H\left(X|U\right), \: R_Y \leq H\left(Y|U\right)
\end{eqnarray}
for some $P_{U|XY}$ such that $X-U-Y$.
\end{lemma}
Now consider a particular joint type $P^n_{XY}$. By the previous
lemma, if there does not exist any $P_{U|XY}$ satisfying the
constraint mentioned in lemma~\ref{U-Lemma}, the type graph
corresponding to this joint type cannot contain an almost fully
connected subgraph with rate $\left(R_X,R_Y\right)$. Consequently,
it cannot be the dominant type of a good multiuser code with rate
$\left(R_X,R_Y\right)$.
\begin{fact} \label{U2-Lemma}
Consider a good multiuser code $\C$ with parameter
$\left(n,2^{nR_X},2^{nR_Y}\right)$. A joint type $P^n_{XY} \in
\P_n\left(\X \times \Y\right)$ can be the dominant type of $\C$ if
there exists a $P_{U|XY}$, $X-U-Y$, such that
\begin{eqnarray}
R_X \leq H\left(X|U\right), \: R_Y \leq H\left(Y|U\right),
\end{eqnarray}
conversely, if no such conditional distribution exists,
then $P^n_{XY}$ cannot be the dominant type of any good multiuser
code with parameter $\left(n,2^{nR_X},2^{nR_Y}\right)$.
\end{fact}
\begin{theorem}\label{fixedtypepacking}
Fix any $R_X \geq 0$, $R_Y \geq 0$, $\delta > 0$ and a sufficiently
large $n$. Consider a good multiuser code $\C$ with parameter
$\left(n,2^{nR_X},2^{nR_Y}\right)$ which has a dominant type
$P_{XY}^* \in \mathcal{P}_n \left(\mathcal{X} \times
\mathcal{Y}\right)$. The average error exponent of such a code is
bounded above by
\begin{eqnarray}
E_{sp}\left(R_X,R_Y,W\right) \triangleq \min_{V_{Z|XY}}
D\left(V_{Z|XY}||W|P^*_{XY}\right).
\end{eqnarray}
Here, the minimization is over all possible conditional
distributions $V_{Z|XY}:\X \times \Y \rightarrow \Z$, which satisfy
at least one of the following conditions
\begin{eqnarray}
I_V\left(X \wedge Z|Y\right) &\leq& R_X\\
I_V\left(Y \wedge Z|X\right) &\leq& R_Y\\
I_V\left(XY \wedge Z\right) &\leq& R_X+R_Y.
\end{eqnarray}
\end{theorem}
\begin{proof}
The proof is provided in Appendix~A.1.
\end{proof}
In theorem~\ref{fixedtypepacking}, we obtained a sphere packing
bound on the average error exponent for a good multiuser code with a
certain dominant type. For a more general code, we do not know the
dominant type of the code. However, we do have the condition for a
joint type to be the potential dominant type of a code with certain
parameter. By combining the result of theorem~\ref{fixedtypepacking}
and fact~\ref{U2-Lemma}, we can obtain the following sphere packing
bound for any good multiuser code:
\begin{theorem}\label{packing}
For any given multiple access channel $W$ and any good multi user
code with rate pair ($R_X$,$R_Y$), the reliability function,
$E\left(R_X,R_Y,W\right)$, is bounded above by
\begin{eqnarray}
E_{sp}\left(R_X,R_Y,W\right) \triangleq \max_{P_{UXY}}
\min_{V_{Z|XY}} D\left(V_{Z|XY}||W|P_{XY}\right).
\end{eqnarray}
Here, the maximum is taken over all possible joint distributions
satisfying $X-U-Y$ and
\begin{eqnarray}
R_X \leq H\left(X|U\right), \: R_Y \leq H\left(Y|U\right),
\end{eqnarray}
and the minimum over all channels $V_{Z|XY}$ that satisfy at
least one of the following conditions
\begin{eqnarray}
I_V\left(X \wedge Z|Y\right) &\leq& R_X\\
I_V\left(Y \wedge Z|X\right) &\leq& R_Y\\
I_V\left(XY \wedge Z\right) &\leq& R_X+R_Y.
\end{eqnarray}
\end{theorem}
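To get a concrete feel for the bound, the inner minimization can be approximated by brute force on a toy instance. Everything concrete below is an illustrative assumption: a noisy-XOR channel, a fixed product-uniform joint type $P_{XY}$ (so the outer maximization over $P_{UXY}$ is not performed), and a coarse grid over test channels $V_{Z|XY}$; since the grid explores only finitely many $V$, the result overestimates the true minimum.

```python
from math import log2
from itertools import product

W1 = {(0, 0): 0.1, (0, 1): 0.9, (1, 0): 0.9, (1, 1): 0.1}  # W(Z=1|x,y): noisy XOR
P_XY = {(x, y): 0.25 for x in (0, 1) for y in (0, 1)}      # fixed joint type

def H(probs):
    return -sum(p * log2(p) for p in probs if p > 0)

def stats(V1):
    """D(V||W|P_XY) and the three mutual informations under P_XY x V."""
    pxyz = {}
    for (x, y), pxy in P_XY.items():
        pxyz[(x, y, 1)] = pxy * V1[(x, y)]
        pxyz[(x, y, 0)] = pxy * (1 - V1[(x, y)])
    def marg(keys):
        out = {}
        for (x, y, z), p in pxyz.items():
            key = tuple({'x': x, 'y': y, 'z': z}[v] for v in keys)
            out[key] = out.get(key, 0.0) + p
        return out
    Hxyz, Hxy = H(pxyz.values()), H(marg('xy').values())
    I_x = Hxy + H(marg('yz').values()) - H(marg('y').values()) - Hxyz
    I_y = Hxy + H(marg('xz').values()) - H(marg('x').values()) - Hxyz
    I_xy = Hxy + H(marg('z').values()) - Hxyz
    D = sum(P_XY[xy] * (v * log2(v / W1[xy])
                        + (1 - v) * log2((1 - v) / (1 - W1[xy])))
            for xy, v in V1.items())
    return D, I_x, I_y, I_xy

def E_sp(R_x, R_y, grid=tuple(i / 10 for i in range(1, 10))):
    """Grid approximation of min D(V||W|P_XY) over V meeting a rate condition."""
    best = float('inf')
    for vs in product(grid, repeat=4):
        D, I_x, I_y, I_xy = stats(dict(zip(sorted(P_XY), vs)))
        if I_x <= R_x or I_y <= R_y or I_xy <= R_x + R_y:
            best = min(best, D)
    return best
```

As expected, the approximation vanishes once the rates exceed the channel's mutual informations (the true channel $V = W$ then satisfies a rate condition with $D = 0$), and it is strictly positive at low rates.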
Thus far, we have obtained a lower bound on the average error
probability for all good multiuser codes with a given rate pair.
We now show that the result of the previous theorem is in fact a
valid bound for any multiuser code, regardless of whether it is
good or bad. The idea is that for any bad code there exists a good
code with the same number of codewords and no worse performance.
Therefore, to obtain a lower bound on the error probability of the
best code, it suffices to consider codes without repeated
codewords. In lemma~\ref{P2P}, we prove this result for a
single-user code; by applying lemma~\ref{P2P} repeatedly, we then
extend it to the multiuser scenario.
\begin{lemma}\label{P2P}
Suppose $C_X$ is a codebook of size $M_X$ whose codewords are all
selected from $T_{P_X}$. Moreover, suppose $\mathbf{x}_i$ is
repeated $N_i$ times in the codebook and $M_X=N_1+N_2+...+N_M$,
where $M$ is the number of distinct sequences in $C_X$, with $N_M
\geq 2$. If $M_X \leq |T_{P_X}|-1$, there exists another code
$C'_X$ with no larger probability of error, such that
\begin{eqnarray}
|C_X|&=&|C'_X|\nonumber\\
N'_i&=&N_i\;\;\;\;\;\;\; i=1,...,M-1\nonumber\\
N'_M&=&N_M-1\nonumber\\
N'_{M+1}&=&1
\end{eqnarray}
Here, $N'_{M+1}=1$ is the number of occurrences of a new sequence
$\mathbf{x} \in T_{P_X}$ that does not belong to $C_X$.
\end{lemma}
\begin{proof}
The proof is provided in Appendix~A.2.
\end{proof}
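As a small illustration with hypothetical numbers: let $M_X=4$, $M=2$, and $C_X=\{\mathbf{x}_1,\mathbf{x}_1,\mathbf{x}_2,\mathbf{x}_2\}$, and suppose $|T_{P_X}|\geq 5$ so that a fresh sequence always exists. Two applications of the lemma successively replace one copy of each repeated codeword:

```latex
\[
\{\mathbf{x}_1,\mathbf{x}_1,\mathbf{x}_2,\mathbf{x}_2\}
\;\longrightarrow\;
\{\mathbf{x}_1,\mathbf{x}_1,\mathbf{x}_2,\mathbf{x}_3\}
\;\longrightarrow\;
\{\mathbf{x}_1,\mathbf{x}_4,\mathbf{x}_2,\mathbf{x}_3\},
\]
```

with each step leaving the probability of error no larger; the resulting code has four distinct codewords, i.e., it is good. This repeated application is exactly the mechanism used in lemma~\ref{MAC} below.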
\begin{lemma}\label{MAC}
For any bad multiuser code with rate pair ($R_X$,$R_Y$) whose
codewords belong to $T_{P_X}$ and $T_{P_Y}$, there exists a good
multiuser code with the same rate pair and no larger probability
of error.
\end{lemma}
\begin{proof}
For a bad multiuser code, at least one of the individual codebooks
is bad. If we apply lemma~\ref{P2P} repeatedly to each of the bad
single-user codebooks, for which the cardinality condition of the
lemma holds, we end up with a good multiuser code and no larger
probability of decoding error.
\end{proof}
Finally, by combining the result of lemma~\ref{MAC} and the result
of theorem~\ref{packing}, we deduce an upper bound on the
reliability function for all multiuser codes.
\begin{theorem}
For any given multiple access channel $W$ and any multiuser
code with rate pair ($R_X$,$R_Y$), the reliability function,
$E\left(R_X,R_Y,W\right)$, is bounded above by
\begin{eqnarray}
E_{sp}\left(R_X,R_Y,W\right) \triangleq \max_{P_{XY}}
\min_{V_{Z|XY}} D\left(V_{Z|XY}||W|P_{XY}\right).
\end{eqnarray}
Here, the maximum is taken over all possible joint
distributions $P_{XY}$, and the minimum over all channels
$V_{Z|XY}$ which satisfy at least one of the following conditions
\begin{eqnarray}
I_V\left(X \wedge Z|Y\right) &\leq& R_X\nonumber\\
I_V\left(Y \wedge Z|X\right) &\leq& R_Y\nonumber\\
I_V\left(XY \wedge Z\right) &\leq& R_X+R_Y
\end{eqnarray}
\end{theorem}
\section{Another Sphere Packing Bound}\label{MaxvsAvg}
In point-to-point communication systems, one can show that a lower
bound on the maximal error probability of the best code is also a
lower bound on its average probability of error. In multiuser
communications, however, this is not the case: it has been shown
that for multiuser channels the maximal error capacity region is,
in general, smaller than the average error capacity
region~\cite{Dueck2}. Therefore, we cannot expect a sphere packing
bound for the maximal error probability to coincide with one for
the average probability of error. In the following, we show how to
derive an upper bound on the average error exponent from a known
upper bound on the maximal error exponent.
\begin{lemma}
Fix any DM-MAC $W:\X \times \Y \rightarrow \Z$, $R_X \geq 0$, $R_Y
\geq 0$. Assume that the maximal reliability function is bounded as
follows:
\begin{equation}
E^L_{m}\left(R_X,R_Y\right) \leq E^*_{m}\left(R_X,R_Y\right) \leq
E^U_{m}\left(R_X,R_Y\right),\label{number1}
\end{equation}
therefore, the average reliability function can be bounded by
\begin{equation}
E^L_{m}\left(R_X,R_Y\right) \leq E^*_{av}\left(R_X,R_Y\right) \leq
E^U_{m}\left(R_X,R_Y\right) + R,\label{number2}
\end{equation}
where $R = \min\{R_X,R_Y\}$. Similarly, if the average reliability
function is bounded as follows:
\begin{equation}
E^L_{av}\left(R_X,R_Y\right) \leq E^*_{av}\left(R_X,R_Y\right) \leq
E^U_{av}\left(R_X,R_Y\right),\label{number3}
\end{equation}
it can be concluded that the maximal reliability function satisfies
the following constraint
\begin{equation}
E^L_{av}\left(R_X,R_Y\right) -R \leq E^*_{m}\left(R_X,R_Y\right)
\leq E^U_{av}\left(R_X,R_Y\right).\label{number4}
\end{equation}
\end{lemma}
\begin{proof}
The proof is provided in Appendix~A.3.
\end{proof}
In~\cite{nazari08}, the authors derived a sphere packing bound on
the maximal reliability function of the DM-MAC. This result is a
valid upper bound only for the maximal error reliability function,
not for the average one. Using the previous lemma, we can derive a
new upper bound on the average error reliability function of the
DM-MAC.
\begin{theorem}
For any $R_X,R_Y
>0$, $\delta > 0$ and any DM-MAC, $W:\mathcal{X} \times \mathcal{Y}\rightarrow
\mathcal{Z}$, every $(n,M_X,M_Y)$ code, $\C$ with
\begin{subequations}
\begin{align}
\frac{1}{n} \log{M_X} &\geq R_X + \delta \\
\frac{1}{n} \log{M_Y} &\geq R_Y + \delta,
\end{align}
\end{subequations}
has average probability of error
\begin{equation}
e(\C,W) \geq \frac{1}{2}\exp{\left(-n
\left(E^m_{sp}(R_X,R_Y,W)+R\right)\left(1+\delta\right)\right)},
\end{equation}
where $E^m_{sp}$ is the sphere packing bound derived
in~\cite{nazari08}, and $R=\min\{R_X,R_Y\}$.
\end{theorem}
\section{Appendix}
\subsection{Appendix A.1}
For a given MAC $W:\mathcal{X} \times \mathcal{Y} \rightarrow
\mathcal{Z}$ and a good multiuser code $C=C_X \times C_Y$, where
$C_X =\{\textbf{x}_i \in \X^n :\;\; i=1,...,M_X\}$ and $C_Y =\{
\textbf{y}_j \in \Y^n :\;\; j=1,...,M_Y\}$, with decoding sets
$D_{i,j} \subset \mathcal{Z}^n$, we have
\begin{eqnarray}
e\left(C,W\right)&=&\frac{1}{M_XM_Y}\sum_{i=1}^{M_X}\sum_{j=1}^{M_Y}W\left(D_{i,j}^c|\mathbf{x}_i,\mathbf{y}_j\right)\\
&=&\frac{1}{M_XM_Y}\sum_{P_{XY}}
\frac{M_{XY}}{M_{XY}}\sum_{\left(i,j\right)\in
C_{XY}}W\left(D_{i,j}^c|\mathbf{x}_i,\mathbf{y}_j\right)
\end{eqnarray}
where $\C_{XY}$ is the set of all pairs in $\C_X\times
\C_Y$ that have the same joint type $P_{XY}$, $M_{XY}$ denotes the
cardinality of this set, and $R_{XY}=\frac{1}{n} \log{M_{XY}}$. For
a fixed $\left(i,j\right)$, the sets
$T_V\left(\mathbf{x}_i,\mathbf{y}_j\right)$ are disjoint subsets of
$\Z^n$ for different conditional types $V: \X \times \Y \rightarrow
\Z$. Therefore,
\begin{eqnarray}
e\left(C,W\right)&=&\frac{1}{M_XM_Y}\sum_{P_{XY}}\frac{M_{XY}}{M_{XY}}\sum_{\left(i,j\right)\in
C_{XY}}\sum_{V}W\left(D_{i,j}^c \cap
T_V\left(\mathbf{x}_i,\mathbf{y}_j\right)|\mathbf{x}_i,\mathbf{y}_j\right)\\
&=&\frac{1}{M_XM_Y}\sum_{P_{XY}}
M_{XY}\sum_{V}\frac{1}{M_{XY}}\sum_{\left(i,j\right)\in
C_{XY}}W\left(T_V\left(\mathbf{x}_i,\mathbf{y}_j\right)|\mathbf{x}_i,\mathbf{y}_j\right)\frac{|D_{i,j}^c \cap T_V\left(\mathbf{x}_i,\mathbf{y}_j\right)|}{|T_V\left(\mathbf{x}_i,\mathbf{y}_j\right)|}\\
&=& \frac{1}{M_XM_Y}\sum_{P_{XY}} M_{XY}\sum_{V}
2^{-nD\left(V||W|P_{XY}\right)}[1-\frac{1}{M_{XY}}\sum_{\left(i,j\right)\in
C_{XY}}\frac{|D_{i,j} \cap
T_V\left(\mathbf{x}_i,\mathbf{y}_j\right)|}{|T_V\left(\mathbf{x}_i,\mathbf{y}_j\right)|}]\\
&\geq& \frac{1}{M_XM_Y}\sum_{P_{XY}} M_{XY}\sum_{V}
2^{-nD\left(V||W|P_{XY}\right)}[1-\frac{1}{M_{XY}}\sum_{\left(i,j\right)\in
C_{XY}}\frac{|D_{i,j} \cap
T_V\left(\mathbf{x}_i,\mathbf{y}_j\right)|}{2^{nH\left(Z|X,Y\right)}}]\\
&=& \frac{1}{M_XM_Y}\sum_{P_{XY}} M_{XY}\sum_{V}
2^{-nD\left(V||W|P_{XY}\right)}[1-\frac{1}{M_{XY}}
\frac{|\bigcup_{\left(i,j\right)
\in C_{XY}} D_{i,j}\cap T_V\left(\mathbf{x}_i,\mathbf{y}_j\right)|}{2^{nH\left(Z|X,Y\right)}}]\\
&\geq& \frac{1}{M_XM_Y}\sum_{P_{XY}} M_{XY}\sum_{V}
2^{-nD\left(V||W|P_{XY}\right)}[1-\frac{1}{M_{XY}}
\frac{|T_Z|}{2^{nH\left(Z|X,Y\right)}}]\\
&\geq&\frac{1}{M_XM_Y}\sum_{P_{XY}} M_{XY}\sum_{V}
2^{-nD\left(V||W|P_{XY}\right)}[1-\frac{1}{M_{XY}}
\frac{2^{nH\left(Z\right)}}{2^{nH\left(Z|X,Y\right)}}]\\
&=& \frac{1}{M_XM_Y}\sum_{P_{XY}} M_{XY}\sum_{V}
2^{-nD\left(V||W|P_{XY}\right)}[1-2^{-n[R_{XY}-I_V\left(XY \wedge
Z\right)]}]
\end{eqnarray}
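The chain of inequalities above repeatedly uses the standard cardinality bounds for (conditional) type classes, recalled here as a reminder (the polynomial factors, which do not affect the exponent, are omitted in the derivation), together with $M_{XY}=2^{nR_{XY}}$:

```latex
\begin{eqnarray*}
(n+1)^{-|\X||\Y||\Z|}\,2^{nH\left(Z|X,Y\right)} &\leq&
\left|T_V\left(\mathbf{x}_i,\mathbf{y}_j\right)\right| \;\leq\; 2^{nH\left(Z|X,Y\right)},\\
\left|T_Z\right| &\leq& 2^{nH\left(Z\right)},
\end{eqnarray*}
```

where the entropies are computed with respect to the joint distribution induced by $P_{XY}$ and $V$.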
We define
\begin{eqnarray}
V_{bad}^{XY}=\{ V : R_{XY}\geq I_V\left(XY \wedge Z\right)\}
\end{eqnarray}
So, from the last inequality,
\begin{eqnarray}
e\left(C,W\right) &\geq& \frac{1}{M_X M_Y} \sum_{P_{XY}}M_{XY}
\sum_{V \in
V_{bad}^{XY}} 2^{-n D\left(V||W|P_{XY}\right)}\\
&\geq& \frac{1}{M_X M_Y} \sum_{P_{XY}}M_{XY} 2^{-n[\min_{V \in
V_{bad}^{XY}}D\left(V||W|P_{XY}\right)]}\\
&=& \frac{1}{M_X M_Y} \sum_{P_{XY}} 2^{-n[\min_{V \in
V_{bad}^{XY}}D\left(V||W|P_{XY}\right)-R_{XY}]}\\
&\geq& \frac{1}{M_X M_Y} 2^{-n[\min_{P_{XY}}\min_{V \in
V_{bad}^{XY}}D\left(V||W|P_{XY}\right)-R_{XY}]}
\end{eqnarray}
Thus,
\begin{equation}
e\left(C,W\right) \geq 2^{-n[\min_{P_{XY}}\min_{V \in
V_{bad}^{XY}}D\left(V||W|P_{XY}\right)+R_X+R_Y-R_{XY}]}
\end{equation}
On the other hand, since the decoding sets are disjoint, we have
$D_{i,j}^c \supseteq \bigcup_{j'}\bigcup_{i'\neq i} D_{i',j'}$, and
we can conclude
\begin{eqnarray}
e\left(C,W\right)&=&\frac{1}{M_XM_Y}\sum_{i=1}^{M_X}\sum_{j=1}^{M_Y}W\left(D_{i,j}^c|\mathbf{x}_i,\mathbf{y}_j\right)\\
&\geq& \frac{1}{M_Y}\sum_{j=1}^{M_Y} \frac{1}{M_X}\sum_{i=1}^{M_X}
W\left(\bigcup_{j'}\bigcup_{i'\neq i} D_{i',j'}|\mathbf{x}_i,\mathbf{y}_j\right).
\end{eqnarray}
Define $D_i^c\triangleq \bigcup_{j'}\bigcup_{i'\neq i} D_{i',j'}$.
Continuing from the last bound,
\begin{eqnarray}
e\left(C,W\right)&\geq& \frac{1}{M_XM_Y}\sum_{P_{XY}}\sum_{i}\sum_{j:\left(i,j\right)\in
C_{XY}}\sum_{V}W\left(D_i^c \cap T_V\left(\mathbf{x}_i,\mathbf{y}_j\right)|\mathbf{x}_i,\mathbf{y}_j\right)\\
&=& \frac{1}{M_XM_Y}\sum_{P_{XY}}\sum_{V} 2^{-n
D\left(V||W|P_{XY}\right) }\sum_{i}\sum_{j:\left(i,j\right)\in
C_{XY}} \frac{|D_i^c \cap T_V\left(\mathbf{x}_i,\mathbf{y}_j\right)|}{|T_V\left(\mathbf{x}_i,\mathbf{y}_j\right)|}\\
&=& \sum_{P_{XY}}\sum_{V} 2^{-n D\left(V||W|P_{XY}\right)}
\frac{1}{M_XM_Y} \sum_{i}\sum_{j:\left(i,j\right)\in
C_{XY}}[1-\frac{|D_i \cap
T_V\left(\mathbf{x}_i,\mathbf{y}_j\right)|}{|T_V\left(\mathbf{x}_i,\mathbf{y}_j\right)|}]\\
&=& \sum_{P_{XY}}\sum_{V} 2^{-n D\left(V||W|P_{XY}\right)}
\frac{M_{XY}}{M_XM_Y} [1-\frac{1}{M_{XY}}
\sum_{i}\sum_{j:\left(i,j\right)\in
C_{XY}} \frac{|D_i \cap T_V\left(\mathbf{x}_i,\mathbf{y}_j\right)|}{|T_V\left(\mathbf{x}_i,\mathbf{y}_j\right)|}]\;\;\;\;\;\;\;\;\;\\
&\geq& \sum_{P_{XY}}\sum_{V} 2^{-n D\left(V||W|P_{XY}\right)}
\frac{M_{XY}}{M_XM_Y} [1-\frac{1}{M_{XY}}\sum_{i=1}^{M_X}
\sum_{j=1}^{M_Y} \frac{|D_i \cap T_V\left(\mathbf{x}_i,\mathbf{y}_j\right)|}{|T_V\left(\mathbf{x}_i,\mathbf{y}_j\right)|}]\\
&\geq&\sum_{P_{XY}}\sum_{V} 2^{-n D\left(V||W|P_{XY}\right)}
\frac{M_{XY}}{M_XM_Y} [1-\frac{1}{M_{XY}} \sum_{j=1}^{M_Y}
\sum_{i=1}^{M_X} \frac{|D_i
\cap T_V\left(\mathbf{x}_i,\mathbf{y}_j\right)|}{2^{nH\left(Z|X,Y\right)}}]\\
&\geq& \sum_{P_{XY}}\sum_{V} 2^{-n D\left(V||W|P_{XY}\right)}
\frac{M_{XY}}{M_XM_Y} [1-\frac{1}{M_{XY}} \sum_{j=1}^{M_Y}
\frac{2^{nH\left(Z,X|Y\right)}}{2^{nH\left(Z|X,Y\right)}}]\\
&\geq&\sum_{P_{XY}}\sum_{V} 2^{-n D\left(V||W|P_{XY}\right)}
\frac{M_{XY}}{M_XM_Y} [1-2^{-n[R_{XY}-R_Y-I_V\left(Z\wedge
X|Y\right)]}]
\end{eqnarray}
Now, let us define
\begin{eqnarray}
V_{bad}^{X}\triangleq \{ V : R_{XY}-R_Y\geq I_V\left(Z\wedge
X|Y\right)\}.
\end{eqnarray}
Hence, it can easily be seen that
\begin{eqnarray}
e\left(C,W\right) &\geq& \frac{1}{M_X M_Y} \sum_{P_{XY}}M_{XY}
\sum_{V \in
V_{bad}^{X}} 2^{-n D\left(V||W|P_{XY}\right)}\\
&\geq&\frac{1}{M_X M_Y} \sum_{P_{XY}}M_{XY} 2^{-n[\min_{V \in
V_{bad}^{X}}D\left(V||W|P_{XY}\right)]}\\
&=& \frac{1}{M_X M_Y} \sum_{P_{XY}} 2^{-n[\min_{V \in
V_{bad}^{X}}D\left(V||W|P_{XY}\right)-R_{XY}]}\\
&\geq& \frac{1}{M_X M_Y} 2^{-n[\min_{P_{XY}}\min_{V \in
V_{bad}^{X}}D\left(V||W|P_{XY}\right)-R_{XY}]}
\end{eqnarray}
So,
\begin{equation}
e\left(C,W\right) \geq 2^{-n[\min_{P_{XY}}\min_{V \in
V_{bad}^{X}}D\left(V||W|P_{XY}\right)+R_X+R_Y-R_{XY}]}
\end{equation}
Using the same idea for $Y$ and defining $D_j^c\triangleq
\bigcup_{i'}\bigcup_{j'\neq j} D_{i',j'}$, we can easily see
\begin{eqnarray}
e\left(C,W\right) &\geq& \frac{1}{M_X M_Y} \sum_{P_{XY}}M_{XY}
\sum_{V \in
V_{bad}^{Y}} 2^{-n D\left(V||W|P_{XY}\right)}\\
&\geq& \frac{1}{M_X M_Y} \sum_{P_{XY}}M_{XY} 2^{-n[\min_{V \in
V_{bad}^{Y}}D\left(V||W|P_{XY}\right)]}\\
&=& \frac{1}{M_X M_Y} \sum_{P_{XY}} 2^{-n[\min_{V \in
V_{bad}^{Y}}D\left(V||W|P_{XY}\right)-R_{XY}]}\\
&\geq& \frac{1}{M_X M_Y} 2^{-n[\min_{P_{XY}}\min_{V \in
V_{bad}^{Y}}D\left(V||W|P_{XY}\right)-R_{XY}]}
\end{eqnarray}
So,
\begin{equation}
e\left(C,W\right) \geq 2^{-n[\min_{P_{XY}}\min_{V \in
V_{bad}^{Y}}D\left(V||W|P_{XY}\right)+R_X+R_Y-R_{XY}]}
\end{equation}
where
\begin{eqnarray}
V_{bad}^{Y}=\{ V : R_{XY}-R_X\geq I_V\left(Z\wedge Y|X\right)\}
\end{eqnarray}
Combining the three lower bounds on $e\left(C,W\right)$ derived above,
\begin{eqnarray}
e\left(C,W\right) \geq 2^{-n[ \min_{P_{XY}} \min_{V \in V_{bad}^{X}
\cup V_{bad}^{Y}\cup V_{bad}^{XY}
}D\left(V||W|P_{XY}\right)+R_X+R_Y-R_{XY}]}.
\end{eqnarray}
Equivalently, for the exponent of $e\left(C,W\right)$
\begin{equation}
E\left(C,W\right) \leq \min_{P_{XY}} \min_{V \in V_{bad}^{X} \cup
V_{bad}^{Y}\cup V_{bad}^{XY}} D\left(V \|W
|P_{XY}\right)+R_X+R_Y-R_{XY}
\end{equation}
If we define $V_{bad}=V_{bad}^{X} \cup V_{bad}^{Y}\cup V_{bad}^{XY}
$, for every code $C$, we have
\begin{eqnarray}
E\left(C,W\right) &\leq& \max_C \;\;\;\;\;\min_{P_{XY}} \min_{V \in
V_{bad}} D\left(V \|W
|P_{XY}\right)+R_X+R_Y-R_{XY}\\
&=& \max_{\underline{R} \in \mathcal{R}} \;\;\;\;\;\min_{P_{XY}}
\min_{V \in V_{bad}} D\left(V \|W |P_{XY}\right)+R_X+R_Y-R_{XY}
\end{eqnarray}
where $\underline{R}$ is the vector with components
$R_{XY}=R\left(C,P_{XY}\right)$ and $\mathcal{R}$ is the set of all
possible such vectors. The last equality follows from the fact that
the right hand side depends on the code only through the $R_{XY}$.
Since $P_{XY}^*$ is the dominant type of the code, we conclude that
\begin{eqnarray}
E\left(C,W\right) &\leq& \max_{\underline{R} \in \mathcal{R}} \;\;
\min_{V \in
V_{bad}} D\left(V \|W |P_{XY}^*\right)+R_X+R_Y-R_{XY}^*\\
&=& \max_{\underline{R} \in \mathcal{R}} \;\; \min_{V \in V_{bad}}
D\left(V \|W |P_{XY}^*\right).
\end{eqnarray}
However, this expression does not depend on $\underline{R}$.
Therefore
\begin{eqnarray}
E\left(C,W\right) &\leq& \min_{V \in V_{bad}} D\left(V \|W
|P_{XY}^*\right),
\end{eqnarray}
where $V_{bad}=\{V:I_V\left(XY\wedge Z\right) \leq R_X+R_Y
\;\text{ or }\; I_V\left(Y\wedge Z| X\right) \leq R_Y
\;\text{ or }\; I_V\left(X\wedge Z |Y\right) \leq
R_X\}$.\\
\subsection{Appendix A.2}
Suppose the decoding regions for $C_X$ are $D_1,D_2,\ldots,D_M$. Hence,
\begin{eqnarray}
e\left(C_X,W\right)&=&\frac{1}{M_X}\sum_{i=1}^{M_X}W\left(D_{i}^c|i\right)\nonumber\\
&=&\frac{1}{M_X}[\sum_{i=1}^{M}
\left(N_iW\left(D_i^c|\textbf{x}_i\right)+\left(N_i-1\right)W\left(D_i|\textbf{x}_i\right)\right)]\nonumber\\
&=&
\frac{1}{M_X}\left(M_X-M+\sum_{i=1}^{M}W\left(D_i^c|\textbf{x}_i\right)\right).
\end{eqnarray}
Let us randomly choose $\textbf{x} \in T_{P_X}$ that does not belong
to $C_X$. Define
\begin{eqnarray}
V_0 \triangleq
\arg\min_{V}\{D\left(V||W|P_X\right)+H\left(V|P_X\right)\}
\end{eqnarray}
It can be shown that if $\textbf{y} \in T_{V_0}\left(\textbf{x}\right)$, then
\begin{eqnarray}
W^n\left(\textbf{y}|\textbf{x}\right)&=&2^{-n[\min_{V}\{D\left(V||W|P_X\right)+H\left(V|P_X\right)\}]} \nonumber\\
&\geq& 2^{-n[D\left(V||W|P_X\right)+H\left(V|P_X\right)]} \qquad \text{for any } V \nonumber\\
&=& W^n\left(\textbf{y}|\textbf{x}'\right)
\end{eqnarray}
for some $\textbf{x}'$ such that $\textbf{y}\in
T_V\left(\textbf{x}'\right)$. Thus,
\begin{eqnarray}
W^n\left(\textbf{y}|\textbf{x}\right) \geq
W^n\left(\textbf{y}|\textbf{x}_i\right) \qquad \text{for any } i=1,\ldots,M.
\end{eqnarray}
Choose $\textbf{y} \in T_{V_0}\left(\textbf{x}\right)\cap D_k$ for
some $k$ with $|D_k| \geq 2$. Now consider $\C'_X$, which
contains all codewords in $\C_X$ except that one of the repeated
ones, i.e., one of the copies of $\textbf{x}_M$, is replaced with
$\textbf{x}$, and define the decoding sets
\begin{eqnarray}
D'_i&=&D_i \qquad\qquad\; i\neq k\\
D'_k&=&D_k -\{\textbf{y}\}\\
D'_{M+1}&=& \{\textbf{y}\}, \qquad\; \text{where }\textbf{x}'_{M+1} \triangleq \textbf{x}.
\end{eqnarray}
By following a similar approach, we conclude that
\begin{align}
e\left(C'_X,W\right) &= \frac{1}{M_X}\Big(M_X-M+\sum_{i=1,i\neq
k}^{M}W(D_i^c|\textbf{x}_i)+W(D_k^{'c}|\textbf{x}_k)-W(\textbf{y}|\textbf{x})\Big)\nonumber\\
&=
\frac{1}{M_X}\Big(M_X-M+\sum_{i=1}^{M}W(D_i^c|\textbf{x}_i)+W(D_k^{'c}|\textbf{x}_k)-W(\textbf{y}|\textbf{x})-W(D_k^c|\textbf{x}_k)\Big)\nonumber\\
&=e(C_X,W)+\frac{1}{M_X}\Big(W(\textbf{y}|\textbf{x}_k)-W(\textbf{y}|\textbf{x})\Big)\nonumber\\
&\leq e(C_X,W),
\end{align}
where the last inequality follows from the fact that
$W\left(\textbf{y}|\textbf{x}_k\right)\leq
W\left(\textbf{y}|\textbf{x}\right)$.
\subsection{Appendix A.3}
The left hand side of~\eqref{number2} is straightforward, since for
all multiuser codes, $\C$, $e_m(\C,W) \geq e(\C,W)$.
By~\eqref{number1}, for all multiuser codes with rate pair
$(R_X,R_Y)$, we can conclude that
\begin{eqnarray}
e_m(\C,W) \geq 2^{-nE^U_{m}(R_X,R_Y)}.\label{assump}
\end{eqnarray}
Let us assume that there exists a code $\C$ with rate pair
$(R_X,R_Y)$ for which the right hand side of~\eqref{number2} does
not hold. Without loss of generality, we assume $R_X \leq R_Y$.
Therefore,
\begin{eqnarray*}
e(\C,W) < \frac{1}{2} 2^{-n \left(E^U_{m}(R_X,R_Y) + R_X \right)},
\end{eqnarray*}
which is equivalent to
\begin{eqnarray}
\frac{1}{M_XM_Y}\sum_{i=1}^{M_X}\sum_{j=1}^{M_Y}W\left(D_{i,j}^c|\mathbf{x}_i,\mathbf{y}_j\right)
< \frac{1}{2} 2^{-n \left(E^U_{m}(R_X,R_Y) + R_X \right)},
\end{eqnarray}
which can be written as
\begin{eqnarray}
\frac{1}{M_Y}\sum_{j=1}^{M_Y}\frac{1}{M_X}\sum_{i=1}^{M_X}W\left(D_{i,j}^c|\mathbf{x}_i,\mathbf{y}_j\right)
< \frac{1}{2} 2^{-n \left(E^U_{m}(R_X,R_Y) + R_X \right)},
\end{eqnarray}
therefore, there exist $M^1_Y \geq \frac{M_Y}{2}$ codewords in
$\C_Y$ that satisfy
\begin{eqnarray}
\frac{1}{M_X}\sum_{i=1}^{M_X}W\left(D_{i,j}^c|\mathbf{x}_i,\mathbf{y}_j\right)
< 2^{-n \left(E^U_{m}(R_X,R_Y) + R_X \right)}.\label{multeq}
\end{eqnarray}
Call this set of codewords $\C^1_Y$. By multiplying both
sides of~\eqref{multeq} by $M_X$, and using the fact that
all terms in the summation are non-negative, we conclude that
for every $\mathbf{x}_i \in \C_X$ and $\mathbf{y}_j \in \C^1_Y$,
\begin{eqnarray}
W\left(D_{i,j}^c|\mathbf{x}_i,\mathbf{y}_j\right) < 2^{-n
\left(E^U_{m}(R_X,R_Y) \right)}.
\end{eqnarray}
Therefore, the new multiuser code $\C^1=\C_X \times \C^1_Y$ has a
rate pair very close to that of the original code, and its maximal
probability of error satisfies
\begin{eqnarray}
e_m(\C^1,W) < 2^{-n \left(E^U_{m}(R_X,R_Y)
\right)}.\label{contradiction}
\end{eqnarray}
Inequality~\eqref{contradiction} contradicts our assumption
in~\eqref{assump}; hence no such code $\C$ can exist, and the right
hand side of~\eqref{number2} must hold. Similarly, the bounds
in~\eqref{number4} follow from the assumption in~\eqref{number3}.
\bibliographystyle{plain}
\section{Introduction}
\label{sec1}
The recent economic crisis once again drew attention to the
insufficient ability of modern economic theory to properly account
for uncertainty and imperfect knowledge: neglect of these issues
is argued to be one of the reasons for the failure of the economic
profession in the difficult times of 2007--2009; cf.~{\em The
Economist\/} (2007) \cite{eco2007}, Colander {\em et al\/} (2009)
\cite{coletal2009}, Taleb (2010) \cite{tal2010}, Akerlof and Shiller
(2009) \cite{akeshi2009}, and Svetlova and Fiedler (2011)
\cite{svefie2011}. Next to the voices from the inside of the
profession, there is the related criticism from neighbouring
disciplines such as, e.g., economic sociology; cf.~Beckert (1996)
\cite{bec1996}, and Esposito (2007, 2010) \cite{esp2007, esp2010}. The
impression arises that economists are utterly ignorant: they
supposedly do not pay (enough) attention to the issues which the
rest of the world consider to be most crucial for economic life.
We asked ourselves whether this ignorance is indeed part of
scientific practice in economics. Is it correct that nobody has
properly tackled the issue of true uncertainty and imperfect
knowledge since Knight (1921) \cite{kni1921} and Keynes (1921)
\cite{key1921}, throughout the post-WW~I twentieth century?
In this article, we aim to arrive at a more differentiated
judgement. Based on a
review of the literature, we classify the developments in
economics and decision theory that refer to uncertainty and
imperfect knowledge. We identify three major directions that deal
with these issues in economics, specifically {\em risk\/}, {\em
uncertainty as ambiguity\/}, and {\em uncertainty as
unawareness\/}. However, it should be stressed that our goal is
not a detailed classification of
approaches {\em per se\/}, but answering the question of how
{\em non-knowledge\/} has been represented formally in
economic theory to date. This task requires, however, some
detailed detection work, because {\em non-knowledge\/} has not
been an explicit issue in economics yet.
Surely, there is knowledge economy, cf.~Rooney {\em et al\/}
(2008) \cite{roo2008}, where knowledge is treated as a resource or a
desirable asset. Also, knowledge is an important topic in
information economics, as pioneered by Stigler (1961)
\cite{sti1961}, Akerlof (1970) \cite{ake1970}, Spence (1973)
\cite{spe1973}, and Stiglitz (1975, 2002) \cite{sti1975, sti2002},
where it is considered to be one of the tools to maximise profit.
Generally, in economics, knowledge is considered as a good that is
commonly available in principle (and should be used); the opposite
--- non-knowledge ---
is treated implicitly as a lack of information. In philosophy and
the social sciences, the situation is not very different, though
there are interesting recent attempts to overcome ``theoretical
preoccupations that underlie the study of knowledge accumulation,''
McGoey (2012) \cite[p~1]{mcg2012}, and to develop an agenda for the
social and cultural study of ignorance; cf.~McGoey (2012)
\cite{mcg2012} and Proctor (2008) \cite{pro2008}. Ignorance should be
treated ``as more than `not yet known' or the steadily retreating
frontier,'' Proctor (2008) \cite[p~3]{pro2008}, and should be
separately accounted for as a strategic resource and the source of
economic profit and progress; cf.~Knight (1921) \cite{kni1921} and
Esposito (2010) \cite{esp2010}. In economic theory, there have been
occasional voices pleading for more attention to ``true
uncertainty'', understood as the impossibility, in principle, of
foreseeing all future events that may occur in the exogenous
world, cf.~Davidson (1991) \cite{dav1991} and Dequech (2006)
\cite{deq2006}, and to ``unknown unknowns'', cf.~Taleb (2007)
\cite{tal2007} and Diebold {\em et al\/} (2010) \cite{dieetal2010}.
However, non-knowledge has not become an independent issue of any
significant interest or importance for economists so far. Thus, to
find out how ignorance is formalised in the approaches considered
here, we have to uncover first which aspects of decision-making
are treated (often indirectly) as unknown, and which mathematical
instruments are used to represent them.
Our focus is on the fundamental non-knowledge of future events in
the exogenous world, which is the primary source of uncertainty.
After providing, in Section \ref{sec2}, a brief historical
overview to position the approaches considered within the ongoing
debate on uncertainty, we are concerned with the formal
mathematical representation of {\em ambiguity\/} in Section
\ref{sec3}, and of {\em unawareness\/} in Section \ref{sec4}.
Accordingly, we identify and review two approaches to the
formalisation of non-knowledge in the literature: one based on
economic agents' decision-making in the context of a state space
representing the exogenous world, as in Savage's (1954)
\cite{sav1954} axiomatisation and some successor concepts (ambiguity
as situations with unknown probabilities), and one based on
decision-making over a set of menus of potential future
opportunities, providing the possibility of derivation of agents'
subjective state spaces (unawareness as situations with imperfect
subjective knowledge of all future events). Due to the large
number of papers written on this topic, we have to be selective
and, hence, cannot provide an exhaustive overview. We particularly
draw attention to the last-mentioned line of research, namely
uncertainty as unawareness, as it represents an exciting attempt
to formalise ``unknown unknowns'' by radically departing from the
mainstream paradigm of Savage's axiomatisation. Finally, in
Section \ref{sec5}, we discuss the impending challenges and tasks
of formalisation of non-knowledge in economics. We believe that
without a detailed understanding of how non-knowledge has been
represented in economics so far, no serious research agenda for
studying ignorance as an independent part of economic theory can
be developed. We hope that this article provides one of the first
useful steps towards such an agenda.
\section{Historical developments}
\label{sec2}
Though there has not been an explicit discussion on non-knowledge
in economic theory, this issue permanently turns up in relation to
the topic of uncertainty. We identified three branches in the
literature on decision-making of economic agents under conditions
of uncertainty --- {\em risk, ambiguity\/} and {\em
unawareness\/} --- and, in what follows, present those three
directions and discuss the issue of knowledge versus ignorance in
relation to each of them:
\begin{itemize}
\item[(i)] {\em risk\/}: in formal representations, possible
states and events regarding the exogenous world and their
respective probabilities are known to all economic agents; they
agree on the probability measure to be employed in calculations of
individual utility,
\item[(ii)] {\em uncertainty I -- ambiguity\/}: in formal
representations, possible states and events are known but their
respective probabilities are not known to the agents; each of them
employs their own subjective (prior) probability measure in
calculations of individual utility,
\item[(iii)] {\em uncertainty II -- unawareness\/}: in formal
representations, possible states and events are known only
incompletely to the agents; there is ignorance among them as
regards relevant probability measures for calculations of
individual utility.
\end{itemize}
This classification goes back to the work on uncertainty by Knight
(1921) \cite{kni1921}, Keynes (1921, 1937) \cite{key1921, key1937},
Shackle (1949, 1955) \cite{sha1949, sha1955}, and Hayek (1945)
\cite{hay1945}, who tightly connected
the discussion of uncertainty with two kinds of knowledge, or
rather ignorance: specifically, with imperfect knowledge of future
events (uncertainty II), and with knowledge or non-knowledge of
probability measures relating to future events (uncertainty I).
Though the detailed depiction of the historical development of
those concepts would go far beyond the scope of this paper, we
consider it important to highlight the main ideas in this
development in order to provide a topical frame for our discussion
on the conceptualisation of non-knowledge in contemporary economic
theory.
Generally, the authors mentioned differentiate between
{\em epistemological\/} and {\em ontological uncertainty\/}.
{\em Epistemological uncertainty\/} is related to situations where
economic agents lack the knowledge necessary to construct adequate
probability measures. According to Knight (1921) \cite{kni1921},
e.g., theoretical, i.e., {\em a priori\/} probabilities on the one
hand, and statistical probabilities on the other,
are based on a valid foundation of knowledge: the law of large
numbers, or statistical grouping. The {\em a priori\/} probability
can be predicted using counting principles and a completely
homogeneous classification of instances (e.g., by rolling dice),
the statistical probability describes the frequency of an outcome
based on a classification of empirical events or instances, given
repeated trials. Knowledge is understood
in both cases as (empirical) information that allows for the
classification of possible outcomes. These two kinds of
probability ({\em a priori\/} and statistical) can be measured,
and in this sense are known and unanimously agreed upon by all
agents involved in decision-making processes (the situation of
{\em risk\/}). Hence, such probability measures can be reasonably
referred to as objective.
However, Knight suggests that these two categories do not exhaust
all possibilities for defining a probability measure; he
adds ``estimates'', or subjective probabilities. Quoting Knight
(1921) \cite[p~225]{kni1921}: ``The distinction
here is that there is no valid basis of any kind for classifying
instances. This form of probability is involved in the greatest
logical difficulties of all \ldots.'' Knight refers to this last
situation as a situation of {\em uncertainty\/} (ibid
\cite[p~233]{kni1921}); uncertainty can be defined as absence of
probable knowledge. In the situation of risk, probabilities
represent the measurable degree of non-knowledge; in the
uncertainty situation, this degree is immeasurable, and in this
sense probabilities are not known. Keynes (1921) \cite{key1921} also
suggested a concept of immeasurable probabilities as logical
relationships, and argued in his 1937 paper --- in unison with
Knight --- that economic agents lack a valid basis to devise
probability measures. In his definition, uncertainty exists, e.g.,
in the case of predicting the price of copper or the interest rate
20 years hence (Keynes (1937) \cite[p~113]{key1937}): ``About these
matters there is no scientific basis on which to form any
calculable probability whatever. We simply do not know.''
Probabilities are used by economic agents as a convention that
enables them to act (ibid \cite[p~114]{key1937}); at the same time,
though probabilities are widely applied, they represent the
agents' ignorance rather than their (scientific) knowledge.
Interestingly, in the later literature this issue was taken up by
Ellsberg (1961) \cite{ell1961}, who, in his experiments,
distinguished between situations with known probability measures
over some event space (when the color and number of the balls in
an urn are known to agents; thus, they can form probabilities),
and situations with unknown probability measures (agents know only
the colors of balls but not the exact number of balls of each
color; thus, they deal with the ignorance of probability).
Ellsberg demonstrated empirically that people tend to prefer
situations with known probability measures over situations with
unknown probability measures; he explicitly referred to situations
with unknown probability measures as {\em ambiguous\/} and named
the phenomenon of avoiding such situations ``ambiguity aversion''
(corresponding to the term ``uncertainty aversion'' coined by
Knight (1921) \cite{kni1921}).
It must be noted that the discussion about measurability of
probabilities in economic life, as well as about their objective
vs subjective character, was strongly influenced and pulled in one
particular, long-uncontested direction by the line of
argumentation due to Ramsey (1931) \cite{ram1931}, de Finetti
(1937) \cite{fin1937}, and Savage (1954) \cite{sav1954}. Ramsey
and de Finetti reacted to Knight's and Keynes' concepts of
uncertainty as situations with immeasurable probabilities with the
axiomatisation of subjective probabilities: they demonstrated that
subjective probabilities can always be derived from the observed
betting behaviour of economic agents, rendering the whole
discussion about measurability and objectivity of probabilities
seemingly obsolete. Adopting these results, Savage generalised the
theory of decision under risk, i.e., the expected utility theory
as conceived of originally by Bernoulli (1738) \cite{ber1738} and
von Neumann and Morgenstern (1944) \cite{neumor1944}. While the
expected utility concept as an element of risk theory was based on
objective probability measures, Savage combined expected utility
theory and the subjective probability approach of Ramsey and de
Finetti to deliver a new variant of an
axiomatisation of decision under conditions of uncertainty ---
subjective expected utility theory. This concept was perfectly
compatible with the Bayes--Laplace approach to probability theory
and statistics where subjective {\em prior\/} probabilities can
always be assumed to exist and adjusted in the process of
learning. The crucial feature of Savage's probabilistic
sophistication is the in-principle neglect of the Knightian
distinction between risk and uncertainty, as Savage's concept
presupposes that even if an objective probability measure for
future events is not known, it can always be assumed that
economic agents behave {\em as if\/} they apply an individual
subjective (prior) probability measure to estimating the
likelihood of future events; and these probability measures can in
principle be derived {\em a posteriori\/} from an axiomatic model
on the basis of empirical data on agents' choice behaviour. By
this theoretical move, the immeasurability (and thus the
knowability) issue is eliminated. The question of the validity of
the subjective degrees of beliefs foundation, or of the origin of
subjective probabilities, is beyond Savage's model, as these are
built into the {\em as-if\/}-construction from the outset.
However, the Knightian distinction continued to bother economists
and --- especially after Ellsberg's (1961) \cite{ell1961} paper
--- a new branch of research appeared in the literature that
endeavoured to re-introduce uncertainty, understood as absence of
perfect knowledge of relevant probability measures, into economic
theory. The most prominent attempt was delivered by Gilboa and
Schmeidler (1989) \cite{gilsch1989}.
In the next section, we will introduce the basic elements of their
axiomatisation of decision under uncertainty in terms of
non-unique probability measures,
and contemplate how non-knowledge is represented in this concept.
At the same time, the attentive reading of Knight, Keynes and
Shackle suggests that the issue of uncertainty is not restricted
to the question whether probabilities can be meaningfully defined
or measured. There is a more fundamental issue of {\em ontological
uncertainty\/} which is concerned with the in-principle unknowability
of what is going on in an economic system; it goes beyond the
scope of epistemic uncertainty.
Note that in the framework of epistemic uncertainty, knowledge
that is relevant for the derivation of a meaningful probability
measure is generally treated as information; compare the
respective definition by Epstein and Wang (1994)
\cite[p~283]{epswan1994}, who define risk as a situation ``where
probabilities are available to guide choice, and uncertainty,
where information is too imprecise to be summarized adequately by
probabilities.'' Interestingly, even beyond the borders of
economic theory --- in the IPCC (2007) \cite{ipcc2007} report
--- the Knightian distinction between risk and uncertainty is
understood as an epistemic one: ``The fundamental distinction
between `risk' and `uncertainty' is as introduced by economist
Frank Knight (1921), that risk refers to cases for which the
probability of outcomes can be ascertained through
well-established theories with reliable complete data, while
uncertainty refers to situations in which the appropriate data
might be fragmentary or unavailable.'' (\ldots) The clear relation
``information (empirical data) -- probabilities'' is presupposed.
The lack of knowledge, in this case, can be theoretically removed
by becoming more skillful in calculating, or by collecting more
information.
However, it should be stressed that Knight (as well as Keynes and
Shackle) did not conceive of ignorance as lack of information but
rather as ontological indeterminacy, the ``inherent unknowability
in the factors'', see Knight (1921) \cite[p~219]{kni1921}. Shackle
(1955) \cite{sha1955} relates the genuinely imperfect knowledge
about future events to the
absence of an exhaustive list of possible consequences of choices.
Traditional probability theory assumes that the list of
consequences over which probability is distributed is an
exhaustive list of possible outcomes, or, in Shackle's terms,
hypotheses. However, Shackle argues, if there is a residual
hypothesis, that is, if the list of possible consequences is
incomplete, the probability model runs into trouble. By adding a
hypothesis to the list of possible hypotheses, each corresponding
probability of the previously known hypotheses has to be revised
downwards; see Shackle (1955) \cite[p~27]{sha1955}. If five possible
hypotheses are considered and a sixth hypothesis is added, and
additivity of probabilities is assumed, the probability of each of
the initial five hypotheses is subsequently lower.
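Shackle's renormalisation argument is simple arithmetic; a minimal sketch (uniform probabilities are assumed here purely for illustration):

```python
# Shackle's residual-hypothesis objection: under additivity, admitting a
# new hypothesis forces all previously assigned probabilities downwards.

def uniform_probabilities(n_hypotheses):
    """Equal-weight probabilities over an exhaustive list of hypotheses."""
    return [1 / n_hypotheses] * n_hypotheses

five = uniform_probabilities(5)  # each hypothesis gets 0.2
six = uniform_probabilities(6)   # after adding one: each gets ~0.1667

assert abs(sum(five) - 1) < 1e-12 and abs(sum(six) - 1) < 1e-12  # additivity
assert all(p6 < p5 for p5, p6 in zip(five, six))  # every probability drops
print(five[0], six[0])  # -> 0.2 0.16666666666666666
```

The same downward revision occurs for any non-uniform assignment, as long as the new hypothesis receives positive probability and additivity is preserved.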
This objection applies to both approaches, namely the frequentist
approach to probability theory on the one hand, and the
Bayes--Laplace approach which deals with belief-type subjective
(prior) probability measures on the other, because neither can
incorporate a residual hypothesis, or the in-principle non-knowledge
of future states. Thus, referring to the genuinely imperfect
knowledge about future events, Shackle (but also Knight and
Keynes) expressed doubts whether probability theory in general is
sufficient to account for decision under uncertainty, and whether
it should be the central issue after all.
By far more important than the issue of devising suitable
probability measures seems to be the non-knowledge of possible
future states of the exogenous world and of related outcomes. Only
if we manage to account properly for this imperfect knowledge, can
we conceptualise properly human decision-making, or, in the words
of Shackle (1959) \cite[p~291]{sha1959}, a non-empty decision.
Crocco (2002) \cite{cro2002} explains: ``An empty decision is the
mere account of a formal solution to a formal problem. It is that
situation where a person has a complete and certain knowledge
about all possible choices and all possible outcomes of each
choice. It is a mechanical and inevitable action,'' or, in the
words of Heinz von F\"orster (1993) \cite[p~153]{foe1993}, every
decidable (or perfectly known) problem is already decided; true
decisions always presuppose genuine undecidability.
In this sense, Savage's concept is rather concerned with empty
decisions, because it presupposes situations with full knowledge
of possible events, acts and outcomes, rendering agents' choices
just a mechanical application of the personal utility-maximisation
rule.
Genuine undecidability should thus enter economic theory.
Most economic decisions are truly undecidable because they take
place under conditions of imperfect knowledge of the situation to
be faced, which is in the sense of the American pragmatist
philosopher John Dewey (1915) \cite[p~506]{dew1915} a genuinely
``incomplete situation'': ``something is `there', but what is
there does not constitute the entire objective situation.'' This
``means that the decision-maker does not have complete knowledge
of the following: (a)~the genesis of the present situation,
(b)~the present situation itself, or (c)~the future outcomes that
remain contingent on the decisions that are made in the present
situation;'' see Nash (2003) \cite[p~259]{nas2003}. According to
Dewey (1915) \cite{dew1915}, the situation is underdetermined,
unfinished, or not wholly given.
This in-principle non-knowledge can be explained, according to
Shackle (1949, 1955) \cite{sha1949, sha1955}, by the character of
economic decisions, which he considers to be non-divisible,
non-seriable, and crucial experiments. {\em Non-divisible
experiments\/} imply
only a single trial; {\em non-seriable experiments\/} are not
statistically important even in the aggregate; an example of a
seriable experiment is fire insurance: although no reasonable
probability can be assigned to an individual house burning down,
if there are sufficiently many events, a (statistical)
probability will emerge. Most importantly, economic decisions are
{\em crucial experiments\/}: they inevitably alter the conditions
under which they were performed (this definition applies to all
strategic situations, e.g., chess play, but also financial
markets). Within
the genuinely social context of economic life, economic events are
rather {\em endogenous\/} to the decision processes of agents and
are dependent on the actions and thinking of other market
participants. There are path dependencies and reflexivity; cf.
Soros (1998) \cite{sor1998}. In general, a meaningful approach to
decision-making should take into account that the future is
principally unknowable, due to ontological features of the
exogenous world such as openness, organic unity, and
underdeterminacy. These are features which are typically
attributed to complex systems; cf.~Keynes {\em et
al\/} (1926) \cite[p~150]{keyetal1926}: ``We are faced at every turn
with the problems of Organic Unity, of Discreteness, of
Discontinuity --- the whole is not equal to the sum of the parts,
comparisons of quantity fail us, small changes produce large
effects, the assumptions of a uniform and homogeneous continuum
are not satisfied.''
In such a system, not all constituent
variables and structural relationships connecting them are known
or knowable. Thus, in an open and organic system, some information
is not available at the time of decision-making, and cannot be
searched, obtained or processed in principle. Surprises, or
unforeseen events, are normal, not exceptional. The list of
possible events or states is not predetermined and very little, or
nothing at all, can be known about the adequate probability
measure for this radically incomplete set of future events.
These considerations require a more sophisticated distinction of
decision-making configurations, namely a distinction that goes
beyond the usual {\em risk\/} vs {\em uncertainty as ambiguity\/}
debate. As Dequech (2006) \cite[p~112]{deq2006} puts it:
``Even though the decision-maker under ambiguity does not know
with full reliability the probability that each event (or state of
the world) will obtain, he/she usually knows all the possible
events \ldots. Fundamental uncertainty, in contrast, is
characterized by the possibility of creativity and
non-predetermined structural change. The list of possible events
is not predetermined or knowable ex ante, as the future is yet to
be created.'' What Dequech calls ``fundamental uncertainty'' (or
``true uncertainty'' in terms of some post-Keynesians (e.g.,
Davidson (1991) \cite{dav1991})) enters the recent debate in the
economic literature under the label of ``unawareness''.
The {\em unawareness\/} concept, as introduced by Kreps (1979)
\cite{kre1979}, Dekel {\em et al\/} (1998, 2001)
\cite{deketal1998,deketal2001}, and Epstein {\em et al\/} (2007)
\cite{epsetal2007}, presupposes a coarse (imperfect) subjective
knowledge of all possible future events. This concept criticises
Savage's (1954) \cite{sav1954} axiomatisation and suggests a radical
departure from it. Savage's axiomatisation is characterised by the
in-principle observability and knowability of all possible future
events. These events belong to the primitives of the model
and are assumed to be exogenous and known to
all economic agents. In Savage's model, the
(compact) state space representing the exogenous world the
agents are continually interacting with is ``a space of
mutually exclusive and exhaustive states of nature, representing
all possible alternative unfoldings of the world''; see Machina
(2003) \cite[p~26]{mac2003}. The exhaustiveness criterion is very
restrictive and basically precludes non-knowledge of future states
on the part of the agents. Machina (2003) \cite[p~31]{mac2003}
continues: ``When the decision maker has reason to `expect the
unexpected' [or the residual hypothesis in terms of Shackle ---
the authors], the exhaustivity requirement cannot necessarily be
achieved, and the best one can do is specify a final, catch-all
state, with a label like `none of the above', and a very
ill-defined consequence.'' Obviously, true uncertainty as
imperfect knowledge of possible future states of the exogenous
world is not an element of Savage's model.
The pioneers of the {\em unawareness\/} concept depart from
Savage's axiomatisation by replacing the state space in the list
of primitives by a set of menus over actions which are the objects
of choice. This theoretical move allows for dealing with
unforeseen contingencies, i.e., an inability of economic agents to
list all possible future states of the exogenous world.
We now turn to give a more formal presentation of the two main
concepts of uncertainty we discussed so far: uncertainty as
ambiguity and uncertainty as unawareness.
\section{Uncertainty as ambiguity: non-knowledge of probability
measures}
\label{sec3}
All decision-theoretical approaches to modelling an economic
agent's state of knowledge regarding future developments of the
exogenous world,
the ensuing prospects for an individual's opportunities, and the
agent's consequential choice behaviour under conditions of
uncertainty employ an axiomatic description of the characteristic
properties of observable choice behaviour and derive a
quantitative representation of an agent's preferences in
decision-making. Uncertainty in this context is generally
interpreted as ambiguity perceived by an agent with respect to
unknown probabilities by which future states of the exogenous
world will be realised. These approaches maintain the standard
neoclassical assumption of an agent whose choices are fully
rational. The main issue
of modelling here is to put forward a set of primitives which can
be observed in principle in real-life settings, as well as a
minimal set of axioms describing exhaustively the interconnections
between these primitives, to provide the conceptual basis for (in
general highly technically demanding) mathematical proofs of
representation theorems.
Most approaches in the literature propose an expected utility (EU)
representation of an agent's preferences in terms of a
real-valued personal utility function which is an unobservable
theoretical construct, thus following the quantitative
game-theoretical tradition of von Neumann and Morgenstern (1944)
\cite{neumor1944}. A related issue is the question to what extent an
agent's choice behaviour can be reasonably viewed as influenced by
a set of personal subjective probabilities regarding the (unknown)
future states of the exogenous world. We begin by briefly
reviewing the central aspects of the axiomatic approach taken by
Savage (1954) \cite{sav1954} to describe one-shot choice situations
--- the subjective expected utility (SEU) framework, which
attained the prominent status of a standard model in decision
theory.
The primitives in Savage (1954) \cite{sav1954} are
\begin{itemize}
\item[(i)] an exhaustive set of mutually exclusive future states
$\omega$ of the exogenous world which an agent cannot actively
take an influence on; these constitute a state space
$\boldsymbol{\Omega}$ which is assumed to be continuous and
compact, and which can be partitioned into a finite number of pairwise
disjoint events; possible events $A, B, \ldots$ are considered
subsets of $\boldsymbol{\Omega}$, with $2^{\boldsymbol{\Omega}}$
the set of all such subsets of $\boldsymbol{\Omega}$,
\item[(ii)] a finite or infinite set of outcomes $x$ contingent on
future states $\omega$, forming an outcome space
$\boldsymbol{X}$, and
\item[(iii)] a weak binary preference order $\succeq$ (``prefers
at least as much as'') defined over the agent's objects of choice
--- a set of potential individual acts $f$ an agent may
consciously take in reaction to realised future states~$\omega$ of
the exogenous world, yielding predetermined outcomes $x$ ---,
describing their personal ranking of available options; these acts
form a space $\boldsymbol{F}$.
\end{itemize}
In more detail, an act is defined as a (not necessarily
real-valued, continuous) mapping $f: \boldsymbol{\Omega}
\rightarrow \boldsymbol{X}$ from the set of future states
$\boldsymbol{\Omega}$ to the set of possible outcomes
$\boldsymbol{X}$, so the set of acts available to an agent at
a given instant in time, in view of known future states $\omega$
but of unknown probabilities, is
$\boldsymbol{F} = \boldsymbol{X}^{\boldsymbol{\Omega}}$.
There is no additional structure needed in this model regarding
measures or topology on either space $\boldsymbol{\Omega}$ or
$\boldsymbol{X}$, except for continuity and compactness of
$\boldsymbol{\Omega}$. An observable weak binary preference order
over the set of acts is given by $\succeq \subset \boldsymbol{F}
\times \boldsymbol{F}$, intended to reflect an agent's subjective
beliefs regarding future states $\omega$, and the usefulness of
acts the agent may take in response to ensuing states.
Savage introduces a minimal set
of seven axioms (P1 to P7) to characterise the theoretical nature
of this preference order over acts (and, by implication,
related outcomes), which are commonly referred to in the
literature as weak order (completeness), sure-thing principle,
state-independence, comparative probability, non-triviality,
Archimedean, and finitely additive probability measures; cf.
Nehring (1999) \cite[p~105]{ner1999} and Gilboa (2009)
\cite[p~97ff]{gil2009}. These
axioms constitute the foundation of a representation theorem
proved by Savage which states that an agent's (one-shot) choice
behaviour
under conditions of uncertainty may be viewed as if it was guided
by (i)~a real-valued personal utility function $U: \boldsymbol{X}
\rightarrow \mathbb{R}$ that assigns subjective value to specific
outcomes $x \in \boldsymbol{X}$, and (ii)~a single finitely
additive subjective probability measure $\mu:
2^{\boldsymbol{\Omega}} \rightarrow [0,1]$ on the space of all
possible future events $2^{\boldsymbol{\Omega}}$. In particular,
an agent's choice behaviour may be modelled as if for the acts $f$
available to them they strive to maximise a real-valued EU
preference function $V: \boldsymbol{F} \rightarrow
\mathbb{R}$, defined by
\begin{equation}
\label{eq:savagerepr}
V(f) := \int_{\boldsymbol{\Omega}}U(f(\omega))\,\mu({\rm d}\omega)
\ .
\end{equation}
Hence, in this setting an act $f \in \boldsymbol{F}$ is weakly
preferred by an agent to an act $g \in \boldsymbol{F}$, iff
$V(f) \geq V(g)$.
The elements of Savage's SEU model may be schematically summarised
in terms of a decision matrix of the following structure (here for
a partition of the continuous and compact $\boldsymbol{\Omega}$
into a finite number $n$ of pairwise disjoint events):
\begin{equation}
\begin{array}{c|cccc|c}
\text{probability measure}\ \mu & P(\omega_{1}) & P(\omega_{2}) &
\ldots & P(\omega_{n}) & \\
\hline
\text{acts}\ \boldsymbol{F}\ \backslash\ \text{states}
\ \boldsymbol{\Omega} & \omega_{1} &
\omega_{2} & \ldots & \omega_{n} & \\
\hline
f_{1} & x_{11} & x_{12} & \ldots & x_{1n} & \\
f_{2} & x_{21} & x_{22} & \ldots & x_{2n} & \text{outcomes}
\ \boldsymbol{X} \\
\vdots & \vdots & \vdots & \ddots & \vdots &
\end{array} \ ,
\end{equation}
where $0 \leq P(\omega_{i}) \leq 1$ and $\sum_{i}P(\omega_{i})=1$
(and generally: $\mu \geq 0$ and
$\int_{\boldsymbol{\Omega}}\mu({\rm d}\omega) = 1$). Note that,
formally, Savage's framework reduces an
agent's situation of decision under uncertainty, in the Knightian
sense of not knowing the probability measure associated with
$(\boldsymbol{\Omega}, 2^{\boldsymbol{\Omega}})$ {\em a priori\/},
to a manageable situation of decision under risk by introducing a
{\em single\/} subjective Bayesian prior probability measure as a
substitute. This is to say, every single economic agent possesses
for themselves a unique probability measure which they employ in
their individual calculations of utility; a probability measure is
thus {\em known\/} to every individual from the outset, but there
is no reason whatsoever that these measures should coincide
between agents.
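A discrete instance of the decision matrix and of the representation~(\ref{eq:savagerepr}) can be sketched as follows (state names, payoffs, the prior, and the square-root utility are all hypothetical choices for illustration):

```python
# Discrete Savage-style SEU: the agent ranks acts by the expectation of a
# personal utility function under a single subjective prior over states.
states = ["omega1", "omega2", "omega3"]
prior = {"omega1": 0.5, "omega2": 0.3, "omega3": 0.2}  # subjective, agent-specific

# Acts map states to monetary outcomes (rows of the decision matrix).
acts = {
    "f1": {"omega1": 100, "omega2": 0,  "omega3": 50},
    "f2": {"omega1": 40,  "omega2": 60, "omega3": 40},
}

def utility(x):
    """Concave personal utility; the square root is an arbitrary choice."""
    return x ** 0.5

def seu(act):
    """V(f): sum over states of U(f(omega)) weighted by the prior."""
    return sum(utility(act[w]) * prior[w] for w in states)

# The agent behaves as if maximising V over the available acts.
ranking = sorted(acts, key=lambda name: seu(acts[name]), reverse=True)
print({name: round(seu(acts[name]), 3) for name in acts}, ranking)
```

Note that a second agent with a different prior over the same matrix may rank the acts differently: the measure is known to each individual, but nothing forces the measures to coincide.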
Savage's main claim is that his framework can be used to
explicitly derive for an arbitrary economic agent who makes
rational choices in parallel (i)~a unique subjective probability
measure $\mu$ over $(\boldsymbol{\Omega},
2^{\boldsymbol{\Omega}})$, and (ii)~a personal utility function
$U$ over $\boldsymbol{F}$ (unique up to positive linear
transformations), from observation of their choice behaviour
in practice. For the sequel it is worth mentioning that Savage's
numerical SEU representation~(\ref{eq:savagerepr}) can be
interpreted to fall into either of the categories of ordinal or
additive EU representations.
\medskip
Various authors have criticised Savage's SEU model for different
reasons, where in particular the claim is that one or more of his
axioms are regularly being violated in real-life situations of
(one-shot) choice. Bewley (1986, 2002) \cite{bew1986, bew2002},
for example, points the finger at the completeness axiom P1, as
he considers it unrealistic
to assume that all agents have a clear-cut ranking of all the acts
available to them, when it need not necessarily be clear from the
outset which acts comprise the complete set $\boldsymbol{F}$. In
his work he therefore proposes an axiomatic alternative to
Savage's SEU model which discards the completeness axiom in favour
of an inertia assumption regarding the status quo of an agent's
personal situation.
More prominent still is Ellsberg's (1961) \cite{ell1961} empirical
observation that in situations of choice under uncertainty
rational agents need not necessarily act as
subjective expected utility maximisers: given the choice between a
game of chance with known probabilities of the possible outcomes
and the identical game of chance where the probabilities are
unknown, the majority of persons tested exhibited the phenomenon
of uncertainty aversion by opting for the former game. Ellsberg
showed that this kind of behaviour corresponds to a violation of
Savage's sure-thing principle axiom P2.
\medskip
A possible resolution of
this conflict was suggested in the multiple priors maxmin expected
utility (MMEU) model due to Gilboa and Schmeidler (1989)
\cite{gilsch1989}, which takes uncertainty aversion explicitly into
account by stating that under conditions of uncertainty an agent
need not have a unique subjective prior probability
measure $\mu$, but rather an {\em entire set\/} $\Pi$ worth of such
measures $\pi$ from which they select in making decisions
according to the maxmin principle. In this sense, Gilboa and
Schmeidler take an explicit attempt at formalising Knightian
uncertainty in problems of decision-making, interpreted as
situations with in principle unknowable probability measures over
$(\boldsymbol{\Omega},2^{\boldsymbol{\Omega}})$. The degree of an
agent's ignorance is encoded in the generically unconstrained
cardinality of the set of Bayesian priors~$\Pi$: no criteria are
formulated according to which an agent assesses the relevance of
any particular probability measure that is conceivable for a given
situation of decision-making. Non-knowledge regarding the
likelihood of future events here is linked to the number of
elements included in the individual set $\Pi$ that is employed in
an agent's individual calculation of utility and so is represented
in a more comprehensible fashion than in Savage's framework.
Nevertheless, the primitives of the MMEU model are unchanged with
respect to Savage's SEU model. Based on a minimal set of six
axioms (A1 to A6), referred to respectively as weak order,
certainty-independence, continuity, monotonicity,
uncertainty aversion and non-degeneracy, the representation
theorem Gilboa and Schmeidler (1989) \cite{gilsch1989} prove employs
a real-valued preference function $V: \boldsymbol{F} \rightarrow
\mathbb{R}$ defined by the minimum expected utility relation
\begin{equation}
\label{eq:gilschrepr}
V(f) := \min_{\pi\in \Pi}
\int_{\boldsymbol{\Omega}}(E_{f(\omega)}U)\,{\rm d}\pi \ ,
\end{equation}
with $\Pi \subset \Delta(\boldsymbol{\Omega})$ a non-empty,
closed and convex set of finitely additive probability measures
over $(\boldsymbol{\Omega},2^{\boldsymbol{\Omega}})$, and $U:
\boldsymbol{X} \rightarrow \mathbb{R}$ a non-constant real-valued
personal utility function. Again, an act $f \in \boldsymbol{F}$ is
then weakly preferred by an agent to an act $g \in
\boldsymbol{F}$, iff $V(f) \geq V(g)$.
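The maxmin evaluation~(\ref{eq:gilschrepr}) can be sketched numerically for a finite set of priors (all numbers hypothetical; utility is taken as the identity for simplicity):

```python
# Gilboa-Schmeidler MMEU: each act is evaluated by its worst-case expected
# utility over an entire set of candidate priors Pi, not a single prior.
states = ["omega1", "omega2"]

# A set of priors: the agent only knows P(omega1) lies in [0.3, 0.7].
priors = [{"omega1": p / 10, "omega2": 1 - p / 10} for p in range(3, 8)]

acts = {
    "ambiguous_bet": {"omega1": 1.0, "omega2": 0.0},  # pays only in omega1
    "sure_thing":    {"omega1": 0.4, "omega2": 0.4},  # constant payoff
}

def mmeu(act):
    """V(f): minimum over the priors in Pi of the expected utility of f."""
    return min(sum(act[w] * pi[w] for w in states) for pi in priors)

values = {name: mmeu(act) for name, act in acts.items()}
print(values)  # the sure thing (0.4) beats the ambiguous bet (worst case 0.3)
```

With a single prior of, say, $P(\omega_{1}) = 0.5$, the ambiguous bet would be preferred; the maxmin rule reverses this ranking, which is exactly the uncertainty-averse Ellsberg pattern the model is designed to accommodate.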
Since its inception, Gilboa and Schmeidler's MMEU model has
enjoyed a number of applications in the econometrical literature;
e.g. in Epstein and Wang (1994) \cite{epswan1994} on intertemporal
asset pricing; Hansen {\em et al\/} (1999) \cite{hanetal1999} on
savings behaviour; Hansen and Sargent (2001, 2003)
\cite{hansar2001,hansar2003} on macroeconomic situations;
Nishimura and Ozaki (2004) \cite{nisoza2004} on a job search model;
and Epstein and Schneider (2010) \cite{epssch2010} on implications
for portfolio choice and asset pricing. Rigotti and Shannon (2005)
\cite{rigsha2005}, who propose an approach to formalising
uncertainty in financial markets on the basis of Bewley's
(1986, 2002) \cite{bew1986,bew2002} idea of discarding Savage's
completeness axiom P1, contrast their findings on the impact of
uncertainty on equilibrium configurations in decision-making
processes with corresponding consequences arising from an MMEU
perspective.
\medskip
The strongest criticism to date of Savage-type state space models
of decision-making under conditions of uncertainty was voiced at
the end of the 1990s by Dekel {\em
et al\/} (1998) \cite{deketal1998}. They showed that, if one
considers it unrealistic for an economic agent to be aware of all
possible future states $\omega$ of the exogenous world, then a standard
state space model is incapable of consistently incorporating the
dimension of an agent's unawareness of future contingencies.
The formal treatment of the issue at hand is based on information
structures referred to as possibility correspondences.
A possibility correspondence amounts to a function $P:
\boldsymbol{\Omega} \rightarrow
2^{\boldsymbol{\Omega}}$ that maps elements~$\omega$ in some state
space $\boldsymbol{\Omega}$ to subsets thereof, so that
$P(\omega)$ is interpreted as the set of states an agent considers
possible when the realised state is~$\omega$. In this picture, an
agent ``knows'' an event $E \in
2^{\boldsymbol{\Omega}}$ at a state~$\omega$ provided $P(\omega)
\subseteq E$. Hence, given a possibility correspondence $P$, a
knowledge operator $K: 2^{\boldsymbol{\Omega}} \rightarrow
2^{\boldsymbol{\Omega}}$ is determined by
\begin{equation}
K(E) := \{\omega \in \boldsymbol{\Omega}|P(\omega) \subseteq E\}
\quad\text{for all}\quad
E \in 2^{\boldsymbol{\Omega}} \ ;
\end{equation}
$K(E)$ represents the set of states in $\boldsymbol{\Omega}$ for
which an agent knows that event $E$ must have occurred.
According to Dekel {\em et al\/}, it is commonplace to assume that
such a knowledge operator features the properties of
(i)~necessitation, meaning
$K(\boldsymbol{\Omega})=\boldsymbol{\Omega}$, and
(ii)~monotonicity, meaning $E \subseteq F \Rightarrow K(E)
\subseteq K(F)$. In addition, an unawareness operator may be
defined as a mapping $U: 2^{\boldsymbol{\Omega}} \rightarrow
2^{\boldsymbol{\Omega}}$, so that $U(E)$ is to be regarded as the
set of states in $\boldsymbol{\Omega}$ where an agent is unaware
of the possibility that event $E$ may occur. With these structures
in place, a standard state space model is represented by a triplet
$(\boldsymbol{\Omega},K,U)$.
To obtain their central result, Dekel {\em et al\/} require a
minimal set of only three axioms which characterise the nature of
the operators $K$ and $U$: these demand that for every event $E
\in 2^{\boldsymbol{\Omega}}$, (i)~$U(E) \subseteq \neg K(E) \cap
\neg K(\neg K(E))$, called plausibility,\footnote{The symbol
$\neg$ denotes complementation.} (ii)~$K(U(E)) =
\emptyset$, called KU introspection, and (iii)~$U(E) \subseteq
U(U(E))$, called AU introspection. Provided that a standard state
space model $(\boldsymbol{\Omega},K,U)$ satisfies these three axioms,
the theorem proven by Dekel {\em et al\/} (1998)
\cite[p~166]{deketal1998} states that in such a setting
(a)~``the agent is never unaware of anything,'' provided $K$
satisfies the necessitation property, and (b)~``if the agent is
unaware of anything, he knows nothing,'' provided $K$ satisfies
the monotonicity property. This result renders standard state
space models void as regards the intention of formally capturing
an agent's unawareness of subjective contingencies in a
non-trivial way.
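The triviality result can be checked mechanically on a toy state space (a sketch; the state labels and the possibility correspondence below are arbitrary illustrative choices, and the enumeration applies the axioms exactly as stated above):

```python
from itertools import chain, combinations

# Toy state space and a possibility correspondence P: Omega -> 2^Omega.
OMEGA = frozenset({"w1", "w2", "w3"})
P = {"w1": frozenset({"w1", "w2"}),
     "w2": frozenset({"w2"}),
     "w3": frozenset({"w2", "w3"})}

def events(universe):
    """All subsets of the state space, i.e. 2^Omega."""
    xs = list(universe)
    return [frozenset(c) for c in
            chain.from_iterable(combinations(xs, r)
                                for r in range(len(xs) + 1))]

def K(E):
    """Knowledge operator: states omega at which P(omega) lies inside E."""
    return frozenset(w for w in OMEGA if P[w] <= E)

# K derived from a possibility correspondence satisfies necessitation
# and monotonicity.
assert K(OMEGA) == OMEGA
assert all(K(E) <= K(F)
           for E in events(OMEGA) for F in events(OMEGA) if E <= F)

# Dekel et al.'s argument: KU introspection forces K(U(E)) to be empty,
# and AU introspection plus plausibility force U(E) to lie inside
# notK(notK(U(E))). Enumerate every event S that could serve as a value
# of U under these constraints:
candidates = [S for S in events(OMEGA)
              if K(S) == frozenset()              # KU introspection
              and S <= OMEGA - K(OMEGA - K(S))]   # plausibility chain
print(candidates)  # only the empty set survives: unawareness is trivial
```

Since $K(\boldsymbol{\Omega}) = \boldsymbol{\Omega}$ by necessitation, the plausibility bound collapses to the empty set, so no non-trivial unawareness operator fits, matching part~(a) of the theorem.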
\medskip
The work by Dekel {\em et al\/} (1998) \cite{deketal1998}, in
particular, triggered a series of papers written during the last
decade, which aspire to include an agent's unawareness of future
subjective contingencies in a coherent model that continues to
employ a kind of EU representation of an agent's manifested
preferences in situations of choice under conditions of
uncertainty. We turn to highlight the, in our view, most important
papers of this development next.
\section{Uncertainty as unawareness: non-knowledge of complete
state spaces}
\label{sec4}
Since the status of possible future states $\omega$ of the
exogenous world as a primitive in a decision-theoretical model on
an agent's choice behaviour under conditions of uncertainty is
questionable due to the lack of a convincing operational
instruction for observation of such states, a number of authors
have dropped the state space $\boldsymbol{\Omega}$ from the set of
primitives altogether and turned to focus instead on the
description of an agent's preferences when they are unaware of
some future subjective contingencies which take a direct influence
on future outcomes such as the pay-offs of certain actions. In the
papers to be considered in the following, which pursue a
conceptual line of thought originating in the work by Kreps (1979)
\cite{kre1979}, the primitives underlying this alternative approach
comprise in general
\begin{itemize}
\item[(i)] a (typically finite) set $\boldsymbol{B}$ of
alternative opportunities, actions, or options; a generic element
in this set will be denoted by $b$,
\item[(ii)] a (typically finite) set $\boldsymbol{X}$ of all
conceivable non-trivial menus compiled from elements in
$\boldsymbol{B}$, with a generic element denoted by $x$; note that
$\boldsymbol{X} = 2^{\boldsymbol{B}}\backslash\{\emptyset\}$,
\item[(iii)] a weak binary preference order $\succeq$ defined over
the agent's objects of choice, presently menus in $\boldsymbol{X}$.
\end{itemize}
The setting conceived of in this approach considers a two-stage
choice process in which an agent will initially (``now'') choose a
particular menu $x$, from which, contingent on subsequently
ensuing states $\omega$ of the exogenous world, they will choose a
specific element $b$ at an unmodelled later stage
(``then'').\footnote{As will be described in the following, in
some of the works to be reviewed the elements of choice at stage
``then'' can be more complex objects than simply elements $b \in
\boldsymbol{B}$.} Hence, two kinds of (weak) binary preference
orders need to be introduced: an ``ex ante preference''
(preference ``now'') over the set $\boldsymbol{X}$, $\succeq
\subset \boldsymbol{X} \times \boldsymbol{X}$, and an ``ex post
preference'' (preference ``then'') over $\boldsymbol{B}$
contingent on a realised state $\omega$,
$\succeq^{*}_{\omega} \subset \boldsymbol{B} \times
\boldsymbol{B}$; cf.~Dekel {\em et al\/} (2001) \cite{deketal2001}.
Generally, authors then proceed to formulate
minimal sets of axioms for the ex ante preference order $\succeq$,
on the basis of which they prove representation theorems for
modelling an agent's choice behaviour
under conditions of uncertainty in the sense that the agent is
unaware of some future subjective contingencies. A particularly
interesting feature of some of the works to be discussed in the
sequel is the possibility to derive in principle an agent's
subjective state space regarding future subjective contingencies
from observed choice behaviour, provided some form of
EU representation of the agent's preference relation is
employed. This aspect is key to a meaningful representation of
non-knowledge in economic theory. It is also seen as an
intermediate step towards derivation of an agent's subjective
probability measure regarding choice behaviour under conditions of
uncertainty on the basis of empirical data.
\medskip
Kreps (1979) \cite{kre1979}, in his pioneering paper, considers an
agent with a ``desire for flexibility'' as regards
decision-making, whose choice behaviour, however, may {\em not\/}
satisfy ``revealed preference''. He formalises these
properties of an agent's envisaged choice behaviour in terms of
the following two axioms: for all $x, x^{\prime}, x^{\prime\prime} \in
\boldsymbol{X}$,
\begin{equation}
\label{eq:flexibility}
x \supseteq x^{\prime}
\quad\Rightarrow\quad
x \succeq x^{\prime} \ ,
\end{equation}
and
\begin{equation}
\label{eq:revpref}
x \sim x \cup x^{\prime}
\quad\Rightarrow\quad
x \cup x^{\prime\prime} \sim x \cup x^{\prime} \cup
x^{\prime\prime} \ ,
\end{equation}
with $\sim$ denoting the indifference relation on $\boldsymbol{X}$.
Note that in the literature the axiom (\ref{eq:flexibility}) is
often referred to as the monotonicity axiom. Kreps, in his
discussion, does {\em not\/} make explicit an agent's uncertainty
regarding unawareness of (some) future subjective contingencies.
Rather, it is implied by the agent's ``desire for flexibility''.
He continues to prove that, given a ``dominance relation'' on
$\boldsymbol{X}$ defined by
\begin{equation}
x \geq x^{\prime}
\quad\text{if}\quad
x \sim x \cup x^{\prime} \ ,
\end{equation}
and the axioms stated before, an agent's
preferences on $\boldsymbol{X}$ can be sensibly described as if
they were ``maximizing a `state dependent utility function of
subsequent consumption'{}'' in terms of a formal real-valued
preference function $V: \boldsymbol{X} \rightarrow \mathbb{R}$,
defined by
\begin{equation}
V(x) := \sum_{s \in \boldsymbol{S}}\max_{b \in x} U(b,s) \ .
\end{equation}
Here $\boldsymbol{S}$ denotes the unobservable finite subjective
state space of an agent's personal tastes, with generic element
$s$, and $U: \boldsymbol{B} \times \boldsymbol{S} \rightarrow
\mathbb{R}$ is the agent's unobservable state-dependent
real-valued utility function of alternative opportunities
available in the finite set $\boldsymbol{B}$. Kreps points out
that this representation is principally ordinal in character.
The bottom-line of Kreps' approach is that the set of
state-dependent ex post utilities $\{U(\cdot,s)|s \in
\boldsymbol{S}\}$, expressing the agent's beliefs on potential
future pay-offs, can be interpreted as an agent's implicitly
given coherent subjective state space which describes their
uncertainty regarding ex post choices over the set
$\boldsymbol{B}$, and so can be legitimately used as a model of
unforeseen contingencies (cf.~Kreps (1992)~\cite{kre1992}).
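Kreps' representation lends itself to a small numerical illustration. The following Python sketch uses an entirely hypothetical option set and two made-up taste states (none of these numbers appear in the paper) to evaluate $V$ and to confirm that any such $V$ satisfies the monotonicity axiom (\ref{eq:flexibility}):

```python
from itertools import combinations

# Hypothetical finite setting: options B = {'a', 'b', 'c'} and two
# subjective taste states s1, s2 with assumed state-dependent utilities.
U = {
    ('a', 's1'): 3.0, ('b', 's1'): 1.0, ('c', 's1'): 0.0,
    ('a', 's2'): 0.0, ('b', 's2'): 1.0, ('c', 's2'): 2.0,
}
STATES = ('s1', 's2')

def V(menu):
    """Kreps' preference function: sum over states of the best option in the menu."""
    return sum(max(U[(b, s)] for b in menu) for s in STATES)

def menus(options):
    """All non-empty menus (subsets) of the option set."""
    opts = list(options)
    return [frozenset(c) for r in range(1, len(opts) + 1)
            for c in combinations(opts, r)]

# Monotonicity ("desire for flexibility"): x ⊇ x' implies V(x) >= V(x').
all_menus = menus('abc')
monotone = all(V(x) >= V(xp)
               for x in all_menus for xp in all_menus if x >= xp)
print(monotone)                 # the representation satisfies the flexibility axiom
print(V({'a'}), V({'a', 'c'}))  # adding 'c' strictly helps in state s2
```

Since enlarging a menu can only raise the within-state maxima, monotonicity holds by construction for every choice of $U$; this is the formal content of the agent's ``desire for flexibility''.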
\medskip
However, as Dekel {\em et al\/} (2001) \cite[p~892,
p~896f]{deketal2001} emphasise, Kreps' implied subjective state
space $\{U(\cdot,s)|s \in \boldsymbol{S}\}$ of an agent is far
from being determined uniquely, since
the axioms he proposed prove not to be sufficiently restrictive
for this purpose. It is this feature in particular, which these
authors set out to overcome in their own work. To accomplish this
goal, Dekel {\em et al\/} (2001) \cite{deketal2001} extend Kreps'
analysis in two respects. On the one hand, here the agent's
objects of choice are, in the spirit of von Neumann
and Morgenstern (1944) \cite{neumor1944}, sets of lotteries
$\Delta(\boldsymbol{B})$ defined over finite sets of future
alternative opportunities $\boldsymbol{B}$; on the other hand, the
assumption of an agent's strict preference for flexibility is
relaxed to also allow for a preference for commitment in instances
when this appears valuable. The latter feature introduces the
possibility of an agent's view ``ex ante'' to differ from their
view ``ex post''. To continue with the primitives: Dekel
{\em et al\/} take the set $\Delta(\boldsymbol{B})$ to correspond
to a set of probability measures over $\boldsymbol{B}$; a generic
lottery in $\Delta(\boldsymbol{B})$ is denoted by $\beta$. Subsets
of $\Delta(\boldsymbol{B})$ are referred to as menus $x$, with
$\boldsymbol{X}$ denoting the set of all non-empty subsets of
$\Delta(\boldsymbol{B})$. $\boldsymbol{X}$ is endowed with a
Hausdorff topology and constitutes the formal basis of an agent's
binary ex ante preference order, $\succeq \subset
\boldsymbol{X} \times \boldsymbol{X}$. The two-stage choice
process of Kreps (1979) \cite{kre1979} remains qualitatively
unchanged: the agent chooses a menu $x \in \boldsymbol{X}$
``now'', and a lottery $\beta \in x$ ``then''.
Dekel {\em et al\/}'s different kinds of representations of an
agent's ex ante preference order $\succeq$ over menus $x$ of
lotteries correspond to triplets $(\boldsymbol{\Omega},U,u)$,
comprising the following three common elements: a non-empty
(exogenous) state space $\boldsymbol{\Omega}$
serving merely as an index set to label ex post preferences
over $\Delta(\boldsymbol{B})$, a state-dependent real-valued
personal utility function $U: \Delta(\boldsymbol{B}) \times
\boldsymbol{\Omega} \rightarrow \mathbb{R}$, and a real-valued
personal aggregator function $u: \mathbb{R}^{\boldsymbol{\Omega}}
\rightarrow \mathbb{R}$. The aggregator function is a rather
special feature in Dekel {\em et al\/}'s analysis. It is given the
role of translating an agent's ex post utility levels of menus
$x$ into corresponding ex ante
values, making the strong assumption that, in the model proposed,
an agent has a coherent view of all future utility possibilities
of menus $x$ available to them. The ex post preference order
$\succeq^{*}_{\omega}$ over $\Delta(\boldsymbol{B})$, given a
state $\omega \in \boldsymbol{\Omega}$, can be viewed as being
encoded in the utility function $U(\cdot,\omega)$. In consequence,
Dekel {\em et al\/} define an agent's subjective state space as
the set $\boldsymbol{P}(\boldsymbol{\Omega},U) :=
\{U(\cdot,\omega)|\omega \in \boldsymbol{\Omega}\}$.
On the basis of an ex ante preference-characterising minimal
set of seven axioms (A1 to A7), referred to as weak order,
continuity, non-triviality, indifference to randomisation (IR),
independence, weak independence and monotonicity, resp., Dekel
{\em et al\/} (2001) \cite{deketal2001} prove existence
theorems for three kinds of EU representations,
all of which can be cast in the form of a real-valued preference
function $V: \boldsymbol{X} \rightarrow \mathbb{R}$ defined by
\begin{equation}
V(x) := u\left(\left(\sup_{\beta \in x}
U(\beta,\omega)\right)_{\omega \in \boldsymbol{\Omega}}\right) \ ,
\end{equation}
with $U(\cdot,\omega)$ an EU affine function in line with von
Neumann and Morgenstern (1944) \cite{neumor1944}, i.e.,
for all $\beta \in \Delta(\boldsymbol{B})$ and $\omega \in
\boldsymbol{\Omega}$,
\begin{equation}
U(\beta,\omega) := \sum_{b \in \boldsymbol{B}}\beta(b)
U(b,\omega) \ .
\end{equation}
The main results following from the proofs of the EU
representation theorems for the binary ex ante preference
order $\succeq$ are: (i)~uniqueness of an agent's subjective state
space $\boldsymbol{P}(\boldsymbol{\Omega},U)$ related to
their binary ex post preference order, as well as essential
uniqueness of the associated aggregator function $u$, (ii)~the
size of an agent's subjective state space
$\boldsymbol{P}(\boldsymbol{\Omega},U)$ can be interpreted as a
measure of their uncertainty about future subjective
contingencies, while the associated aggregator $u$ indicates
whether such contingencies trigger a preference for commitment
or rather for flexibility, (iii)~ordinal EU representations offer
the smallest subjective state space
$\boldsymbol{P}(\boldsymbol{\Omega},U)$ possible for any ordinal
representation, and (iv)~existence of an additive EU
representation when in particular the standard independence axiom
due to von Neumann and Morgenstern (1944) \cite{neumor1944} holds;
the former is given by a real-valued preference function
$V: \boldsymbol{X} \rightarrow \mathbb{R}$ such that (up to
monotone transformations)
\begin{equation}
V(x) = \int_{\boldsymbol{\Omega}}\sup_{\beta \in x}
U(\beta,\omega)\,\mu({\rm d}\omega) \ ,
\end{equation}
with $\mu$ a (non-unique) finitely additive probability measure on
$(\boldsymbol{\Omega}, 2^{\boldsymbol{\Omega}})$.
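For a finite state space the additive EU representation can be sketched directly. The example below is a toy instance of our own making (two labelled states, assumed utilities and a uniform measure $\mu$); the numbers are illustrative and not taken from Dekel {\em et al\/}:

```python
# Additive representation on a finite Ω: V(x) = Σ_ω μ(ω) max_{β∈x} U(β, ω),
# with lotteries over B = {'a', 'b'} given as probability dicts.
U_b = {                       # assumed state-dependent utilities of pure options
    'ω1': {'a': 2.0, 'b': 0.0},
    'ω2': {'a': 0.0, 'b': 1.0},
}
mu = {'ω1': 0.5, 'ω2': 0.5}   # an assumed probability measure on Ω

def U(beta, omega):
    """EU-affine utility of a lottery beta (dict: option -> probability)."""
    return sum(p * U_b[omega][b] for b, p in beta.items())

def V(menu):
    """Expected best ex post utility over the menu."""
    return sum(mu[w] * max(U(beta, w) for beta in menu) for w in mu)

beta_a = {'a': 1.0}           # degenerate lottery on 'a'
beta_b = {'b': 1.0}
beta_mix = {'a': 0.5, 'b': 0.5}

print(V([beta_a]))            # commitment to 'a'
print(V([beta_a, beta_b]))    # flexibility: keep both options
```

The gap between the two printed values quantifies, under this particular $\mu$, the value of flexibility built into the additive representation.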
This last result, providing a representation in line with
``standard'' approaches, raises ideas on the possibility of
identification of an agent's probability measure over their
subjective state space $\boldsymbol{P}(\boldsymbol{\Omega},U)$,
analogous to one of the central outcomes of Savage's (1954)
\cite{sav1954} SEU model. However, it is the state-dependence of an
agent's ex post preference which renders this objective
currently quite unrealistic.
Dekel {\em et al\/}'s (2001) \cite[p~894]{deketal2001} approach
contains an inherent interpretational difficulty, which these
authors briefly address: the model represents an agent with an
at least partially incomplete concept of future subjective
contingencies ``now'' by an agent with complete knowledge
of all utility possibilities of menus ``then''; does the
model, nevertheless, deal consistently with an agent's
non-knowledge of (some) future subjective contingencies? Dekel
{\em et al\/} do not see a need for full commitment to this issue,
but leave this point by resorting to the idea of an ``{\em as
if\/}'' representation of their model. However, to put their model
to the test, they call for the identification of a concrete
Ellsberg-type example of an agent's choice behaviour which is in
contradiction with (some of) their axioms; cf.~Dekel {\em et al\/}
(2001) \cite[p~920]{deketal2001}.
\medskip
This challenge was met in the work by Epstein {\em et al\/}
(2007) \cite{epsetal2007}, in which they focus on criticising Dekel
{\em et al\/}'s additive EU representation in particular. The main
argument Epstein {\em et al\/} give states that an economic agent
who is aware of their incomplete knowledge of future subjective
contingencies, and in particular is averse to this personal state
of affairs, will feel a need to hedge against this uncertainty by
randomisation over options available to them, thus providing a
case of violation of Dekel {\em et al\/}'s independence axiom. In
addition, these authors argue that the impossibility of fully
describing all future contingencies relevant to an agent may lead
to the failure of quantifying an agent's uncertainty about
utilities
``then'' in terms of just a single probability measure, as Dekel
{\em et al\/} do in their additive EU representation. In Epstein
{\em et al\/}'s (2007) \cite[p~359]{epsetal2007} view, Dekel
{\em et al\/}'s model therefore precludes a consistent
representation of incompletely known future subjective
contingencies and ambiguity about an agent's preferences ``then''.
To overcome the conceptual problems of Dekel {\em et al\/}'s
model --- in particular, to capture the ambiguity due to an
agent's incomplete knowledge of future subjective contingencies,
and their induced tendency for hedging against it ---, Epstein
{\em et al\/} (2007) \cite{epsetal2007} propose two alternative
axiomatic models of an agent's ex ante choice behaviour. These can
be considered modifications of Dekel {\em et al\/}'s approach in
the following sense. The first model maintains the assumption of
the IR axiom to hold, while the independence axiom is
being relaxed; in the second model both the IR and the
independence axioms are dropped, and the primitives of the model
are extended to include random menus. The two models exhibit a
qualitative difference as regards the status of ex post ambiguity
that an agent finds themself exposed to ``then''. In the first
model, an agent ``now'' expects to gain complete knowledge
``then'' of a state realised in the meantime, i.e., before they
choose a lottery $\beta$ from the ex-ante-preferred menu $x$;
hence, ex post ambiguity is resolved. However, ``now'' the agent
is uncertain about their actual preferences ``then''. In the
second model, on the other hand, an agent ``now'' reckons that
even ``then'' their knowledge of all relevant contingencies will
remain incomplete, leaving their preferences ``then'' somewhat
vague due to the lack of a complete view of all of the options
available to them. In the present work this circumstance is
modelled in terms of a restricted set of utility functions (over
lotteries) with unknown likelihoods. Ex post ambiguity persists,
making hedging against uncertainty ``then'' (and related
potentially unfavourable outcomes) a valuable tool.
In model I, Epstein {\em et al\/} (2007) \cite{epsetal2007}
implement an agent's need for
hedging by following the ideas of Gilboa and Schmeidler (1989)
\cite{gilsch1989} on uncertainty aversion in that, via introducing a
mixing operation defined over menus, an axiomatisation
of a multiple-priors utility representation of an agent's ex ante
preferences is proposed. Starting from a minimal set of eight
axioms requiring (weak) order, monotonicity, IR, non-degeneracy,\footnote{In the literature the
names non-degeneracy and non-triviality are used synonymously for
one of the axioms.} preference convexity, worst,
certainty-independence, and mild continuity, the corresponding
representation theorem states that an agent's ex ante choice
behaviour amounts to maximising a real-valued preference
functional $V_{MP}: \boldsymbol{X} \rightarrow \mathbb{R}$, given
by
\begin{equation}
V_{MP}(x) := \min_{\pi \in \Pi}
\int_{\boldsymbol{N}}\max_{\beta \in x}
U(\beta)\,{\rm d}\pi(U) \ ,
\end{equation}
with $\Pi$ a (non-unique) convex and compact set of Borel
probability measures on the space $\boldsymbol{N}$ of
specifically normalised ex post utility functions; cf. Epstein
{\em et al\/} (2007) \cite[p~365]{epsetal2007}.
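The multiple-priors functional of model I can likewise be sketched for a finite set of normalised ex post utilities and a finite set $\Pi$ of priors over them; all numbers below are illustrative assumptions, not values from the paper:

```python
# V_MP(x) = min_{π∈Π} Σ_U π(U) max_{β∈x} U(β): the outer min implements
# the agent's hedging against ambiguity about ex post tastes.
utils = {                     # two candidate ex post utility functions over B
    'U1': {'a': 1.0, 'b': 0.0},
    'U2': {'a': 0.0, 'b': 1.0},
}
Pi = [                        # an assumed convex-hull sample of priors over {U1, U2}
    {'U1': 0.8, 'U2': 0.2},
    {'U1': 0.2, 'U2': 0.8},
]

def EU(beta, u):
    return sum(p * utils[u][b] for b, p in beta.items())

def V_MP(menu):
    return min(sum(pi[u] * max(EU(beta, u) for beta in menu) for u in pi)
               for pi in Pi)

commit_a = [{'a': 1.0}]
flexible = [{'a': 1.0}, {'b': 1.0}]
print(V_MP(commit_a))   # the worst-case prior punishes commitment
print(V_MP(flexible))   # flexibility hedges against the ambiguous prior
```

Committing to a single option is evaluated under its least favourable prior, whereas the flexible menu does well under every prior; this is the uncertainty-averse logic borrowed from Gilboa and Schmeidler.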
For their model II, in order to provide a formal basis for dealing
with persistent coarseness ``then'' of an agent's perception of
future subjective contingencies, Epstein {\em et al\/} (2007)
\cite{epsetal2007} enlarge the set of an agent's objects of choice
to also include random menus of lotteries. This yields the set
of Borel probability measures $\Delta(\boldsymbol{X})$ defined
over menus in $\boldsymbol{X}$. A generic element in
$\Delta(\boldsymbol{X})$ is denoted by $P$. Proposing a minimal
set of six axioms comprising (weak) order, continuity,
non-degeneracy, first-stage independence, dominance,
and certainty reversal of order to hold, a representation theorem
is proved for an agent's binary ex ante preference order $\succeq$
over random menus in $\Delta(\boldsymbol{X})$ to the extent that,
in this model, an agent's choice behaviour corresponds to
maximising a real-valued preference functional ${\cal V}_{PC}:
\Delta(\boldsymbol{X}) \rightarrow \mathbb{R}$, given by
\begin{equation}
{\cal V}_{PC}(P) := \int_{\boldsymbol{X}}\left[\,
\int_{{\cal K}^{cc}(\boldsymbol{N}^{*})}\max_{\beta \in x}\min_{U
\in {\cal U}}U(\beta)\,\mu({\rm d}U)\,\right]\,{\rm d}P(x) \ ,
\end{equation}
with $\mu \in \Delta({\cal K}^{cc}(\boldsymbol{N}^{*}))$ a Borel
probability measure over the set ${\cal K}^{cc}(\boldsymbol{N}^{*})$ of closed,
convex and comprehensive Hausdorff-topology subsets of the compact
space of specifically normalised ex post utility functions
$\boldsymbol{N}^{*}$ (cf. Epstein {\em et al\/} (2007)
\cite[p~366]{epsetal2007}), which is unique up to linear
transformations. ${\cal U} \subset \boldsymbol{N}^{*}$
denotes the subset of normalised ex post utility functions
conceived of by an agent ``now'', which, however, to them in that
instance have unknown likelihoods as regards realisation ``then''.
In this respect, $\boldsymbol{N}^{*} \backslash {\cal U}$ may
be interpreted as relating to an agent's unawareness (or
non-knowledge) ``now'' of possible subjective contingencies
``then''.
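The inner $\max_{\beta \in x}\min_{U \in {\cal U}} U(\beta)$ term of model II illustrates why hedging by randomisation is valuable when likelihoods over ex post utilities are unknown. A hypothetical two-utility sketch (our own toy numbers):

```python
# For a menu x and a set 𝒰 of conceivable ex post utilities with unknown
# likelihoods, the agent's robust value of x is max_{β∈x} min_{U∈𝒰} U(β).
utils = {
    'U1': {'a': 1.0, 'b': 0.0},
    'U2': {'a': 0.0, 'b': 1.0},
}

def EU(beta, u):
    return sum(p * utils[u][b] for b, p in beta.items())

def maxmin_value(menu, Uset):
    return max(min(EU(beta, u) for u in Uset) for beta in menu)

pure = [{'a': 1.0}, {'b': 1.0}]
with_mix = pure + [{'a': 0.5, 'b': 0.5}]
print(maxmin_value(pure, utils))      # every pure option has a bad utility
print(maxmin_value(with_mix, utils))  # the 50/50 lottery hedges the ambiguity
```

Each pure option is worthless under one of the conceivable utilities, while the mixture guarantees an intermediate pay-off regardless of which $U$ is realised; this is the hedging behaviour that violates the standard independence axiom.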
\section{Discussion and outlook}
\label{sec5}
Now, having discussed the state of the art in the literature, we
should ask whether the formal representation of non-knowledge in
economic theory has been satisfactory so far. Moreover, in what
follows we sketch some promising directions of research which we
did not address in detail in this paper.
As highlighted in section \ref{sec2}, true uncertainty and,
thus, genuine non-knowledge about the future, are features of
the situation in which an agent is unaware of all future
contingencies, not (just) due to their limited ability to
calculate, or to search for information, but due to the very
nature of any economic system. The major insight of Knight,
Keynes, Shackle, and
some Post-Keynesians, was that economic systems are open and
organic unities that are genuinely indeterminate; every decision
situation is incomplete because it undergoes a constant change
{\em while\/} people decide and act and, by doing so, influence
the set of relevant variables; hence, the major characteristics of
the decision situation --- first of all, the future states that
are possible and conceivable --- cannot be sufficiently
determined; they are unknown.
We already mentioned the following concrete reasons for the
indeterminacy of decision situations: (i)~the {\em big world
issue\/} (i.e., the indefinite, non-exhaustive, number of possible
future states), (ii)~the {\em endogeneity\/} of the decision
situations, i.e., the dependence of future outcomes on
decisions which are prepared and made in the present, and
(iii)~the {\em social contingency\/} which is typical for economic
systems, where the indeterminacy increases due to the dependence
of an agent's decisions on what other agents decide. Are those
issues adequately reflected in the {\em ambiguity\/} and {\em
unawareness\/} approaches, which we discussed in this paper?
\subsection{Big world issue}
Savage's (1954) \cite{sav1954} axiomatisation was often criticised
for its restrictive assumption of the ``small'' world: the list of
possible events is presupposed to be exhaustive (though Savage
\cite[p~16]{sav1954} himself referred to such an assumption as
``preposterous''). Some of the follow-up concepts discussed in our
paper differ in their treatment of this issue.
The uncertainty as ambiguity approaches we mentioned continue to
employ Savage-type state spaces as primitives, which are
continuous, compact, and can be partitioned into a finite number
of mutually exclusive events, while there is an uncountable number
of different states. Although in principle no additional structure
is needed, some authors like Epstein and Wang (1994)
\cite[p~206]{epswan1994} assume on the state space the existence of
a metric and a particular (``weak convergence'') topology,
suggesting that one can construct an indefinite
number of different subsets of the state space; the boundaries of
such subsets are not entirely clear. The question arises whether, in
the end, such mathematical structures make everything possible and
thinkable, thus offering a loophole for the assumption that
the list of possible events is not exhaustive. It is worthwhile
mentioning here that the formal handling, but even
more so providing compelling interpretations, of a potential
infinitude of possibilities or states regularly proves a delicate
issue in most (if not all) areas of applied mathematics and
statistics; see, e.g., Hawking and Ellis (1973) \cite{hawell1973}.
In the uncertainty as unawareness models, in contrast, the big
world issue, which relates to the state space representing the
exogenous world, is of less importance, since here the focus of
the analyses is on an agent's subjective state space. This
concept, however, does not belong to the set of the primitives of
the theory. This issue is closely related to the exogeneity vs
endogeneity topic which we turn to discuss next.
\subsection{Endogeneity of state space}
In our view, the question of Machina (2003) \cite[p~18]{mac2003}:
``Do individuals making choices under uncertainty face
states of nature, or do they create them?'' remains one of the
most crucial and controversial in decision theory. In
Savage's concept, the state space represents nature's
exogenous states, i.e., their emergence cannot be influenced by
agents' decisions and actions; an agent just observes the states
and is not an active part of the decision situation.
In economics, as well as in the social sciences, however,
increasing attention is being paid to the issue of the creation of
economic reality by the actions and decisions of economic agents;
e.g., one considers the notions of endogenous risk, performativity
and reflexivity; cf. Danielsson and Shin (2003) \cite{danshi2003},
Danielsson {\em et al\/} (2009) \cite{danetal2009}, Callon (1998)
\cite{cal1998}, MacKenzie (2006) \cite{mac2006}, and Soros (1998)
\cite{sor1998}. There is also an interesting movement in the
direction of constructive decision theory, where decision-relevant
states are not given but ``constructed by the DM
(decision maker) in the course of deliberating about questions
such as `How is choice A different from choice B?' and `In what
circumstances will choice A turn out better than choice B?'''; see
Blume {\em et al\/} (2009) \cite[p~1f]{bluetal2009}.
Concerning the papers discussed in the article at hand, the
works on uncertainty as ambiguity focus on the non-knowledge
of probabilities; here, the state space remains exogenous.
However, the unawareness literature makes an interesting and
important move towards the conceptualisation of an endogenous
state space: future outcomes (``then'') are contingent on
decisions made at present (``now''); the
subjective state space is not directly observable but can be
derived from the only variable observable: an agent's behaviour,
or an agent's preferences (e.g., for flexibility).
However, an important question remains whether, in the
unawareness literature, we really deal with a truly endogenous
state space, as understood in the concepts of endogenous risk,
performativity and reflexivity. Some criticism has already been
raised in the literature, e.g.~by Sagi (2006) \cite{sag2006}, who
is concerned about the static nature of the theoretical
construction in
the unawareness approaches: ``The decision maker chooses among
menus, uncertainty over her subjective states is assumed to
resolve and then the decision
maker selects from the menu. However, there is no explicit
modelling of ex-post choice and no role for consistency between
realized tastes and tastes inferred from ex-ante preferences'',
see Sagi (2006) \cite[p~307]{sag2006}. We also think that the
representation of true endogeneity --- as a central determinant of
non-knowledge --- should be dynamic: uncertainty cannot be
resolved, as a situation constantly changes, so that the ex post
choice should not be modelled as a mechanical, or empty
(i.e., predetermined), decision. The other important dynamic aspect
is the modelling of the {\em evolution\/} of the state space
itself (how it expands and changes); the genesis of a decision
situation should be taken into consideration. There are some
interesting ideas aiming at these issues, e.g. by Hayashi (2012)
\cite{hay2012}, and Grant and Quiggin (2007) \cite{graqui2007}. The
latter authors model the notion of discovery of the principally
unknown state space by decision-makers. We consider this issue to
be crucial for making further progress in the unawareness and
non-knowledge literature.
\subsection{Social contingency}
The idea of an endogenous state space (as well as the notions of
endogenous risk, performativity and reflexivity) goes beyond the
subjective level of decision-making: future states are unknown
because they are contingent on thinking, deciding and acting of
all interconnected economic agents. This is an issue which was
neglected in the unawareness papers presented here. However, there
are interesting attempts to account for the social construction of
the (subjectively perceived) state space. Here we refer to the
work on interactive unawareness by Heifetz, Meier and Schipper
(2006, 2008) \cite{heietal2006, heietal2008}, and on epistemic game
theory; cf.~Brandenburger (2008) \cite{bra2008}.
\medskip
Finally, it may be noted that only a small number of authors
proposing theoretical models of agents' choice behaviour under
conditions of uncertainty are committed to making testable
predictions that may be refuted in principle. This state of
affairs conflicts with
critical rationalism's view that the falsification of hypotheses by
means of observation and/or experiment is the primary method for
attaining meaningful progress in any empirical scientific
discipline; see, e.g., Popper (2002) \cite{pop2002}. Ideally,
future research in economics and decision theory will address this
problem more carefully.
\addcontentsline{toc}{section}{References}
\section{Introduction}
\label{sec:intro}
Propagation of information has been widely studied in various domains. Despite different applications \cite{guille2013information}, all of the diffusion models have three main components in common: {\em Nodes}, i.e., the set of separate agents; {\em Infection}, i.e., the change in the state of a node that can be
transferred from one node to the other; and {\em Causality}, i.e., the
underlying structure based on which the infection is transferred between nodes. The term {\em cascade} is usually used to refer to the temporal traces left by a diffusion process. One of the main goals of studying the diffusion process is to infer the causality component using observations related to the cascades. While the majority of studies (e.g.~\cite{ gomez2011uncovering, gomez2013structure, embar2014bayesian, farajtabar2015coevolve}) assume that the cascades are perfectly observed, some studies (\cite{sefer2015convex, sadikov2011correcting, amin2014learning, lokhov2015efficient,farajtabar2015back}) investigate scenarios in which the cascade trace is not directly observable or is at least partially missing. Two important examples of such scenarios are the outbreak of a contagious disease (with nodes being geographical regions) and the impact of external events on the stock returns of different assets. These studies either assume that some portion of the cascade data is observable and try to infer the causality structure from this portion (e.g.~\cite{amin2014learning, lokhov2015efficient, farajtabar2015back}) or infer the structure using some other observable property of the cascade without inferring the cascade trace itself (e.g.~\cite{sefer2015convex}). In this paper, we assume that the cascade or more precisely the infection times cannot be directly observed. Instead, we observe time series with statistics that change as a function of the true infection times.
As opposed to the previous works, we intend to not only infer the causality structure but also estimate the unobserved infection times.
Another related body of literature addresses time series segmentation and investigates the techniques of detecting single (\cite{eckley2011analysis}) and multiple (\cite{killick2012optimal})
changepoints in univariate (\cite{fearnhead2005exact,fearnhead2006exact}) and multivariate (\cite{xuan2007modeling, matteson2014nonparametric}) time series. Some of these methods involve an underlying graphical model that captures the correlation structure between the time series, but there is no notion of an infection network. In this paper, we develop a framework in which the infection times, parental relationships and link strengths can be estimated by simultaneously modeling the network structure and performing time series segmentation.
The paper is organized as follows. In the rest of this section, we
briefly review related work. In Section \ref{sec:system_model}, we describe
our system model and formulate the diffusion problem. We present our
proposed inference approach and discuss its modeling assumptions. We
evaluate the performance of our suggested approach using both
synthetic and real world datasets and present the simulation results
in Section \ref{sec:res}. The concluding remarks are made in Section
\ref{sec:conc}.
{\em \bf Related Work:} Most of the earlier work exploring techniques for inferring the structure of an infection or diffusion network assumed that cascades were perfectly observed.
\cite{gomez2011uncovering} proposes a
generative probabilistic model of diffusion that aims to realistically
describe how infections occur over time in a static network. The
infection network and transmission rates are inferred by maximizing the likelihood of an observed set of
infection times. \cite{gomez2013structure}
investigates the diffusion problem in an unobserved dynamic network
for the case when the dynamic process spreading over the edges of the
network is observed. Stochastic convex optimization is employed to infer the dynamic network. \cite{embar2014bayesian} proposes a Bayesian
framework to estimate the posterior distribution of connection
strengths and information diffusion paths given the observed infection
times. \cite{farajtabar2015coevolve} studies the creation of new links in the diffusion network and proposes a joint continuous-time model of information diffusion and network evolution.
In contrast to the work described above, some studies focus on the problem where cascades are not perfectly observed. In~\cite{sefer2015convex}, it is assumed that partially observed probabilistic information about the state of each node is provided, but the exact state transition times are not observed. These transition times are related to the observed trace via the noise dynamics function. The underlying network is inferred by minimizing the expected loss over all realizations of the observable trace. \cite{amin2014learning} studies the theoretical learnability of tree-like graphs in a setting where only the initial and final states are observed. The goal in~\cite{lokhov2015efficient} is to reconstruct the so-called node couplings using dynamic message-passing equations when the cascade observations are only partially available. \cite{farajtabar2015back} develops a two-stage framework to identify the infection source when the node infections are only partially observed and the diffusion trace is incomplete. This paper belongs to this second group of studies, proposing an approach to simultaneously infer the structure and cascade trace of a diffusion process.
\section{System Model and Inference Procedure} \label{sec:system_model} We consider a set of
$N$ nodes ${\mathcal N}=\{1,\dots,N\}$ and assume that node $s \in {\mathcal N}$ is the
source of a contagion $C$ which is transmissible to other nodes of the
network. When $C$ is transferred from node $j$ to node $i$ ($i,j \in
{\mathcal N}$), we say node $i$ is infected by node $j$. In this case, we refer
to node $j$ as the parent of node $i$, and denote it by $z_i$. We
model this infection process by a directed, weighted graph
$G=({\mathcal N},{\mathcal E},{\boldsymbol\alpha}_{N \times N})$ where ${\mathcal E}$ is the set of weighted
edges, and ${\boldsymbol\alpha}$ is an $N\times N$ link strength matrix. Component
$\alpha_{ij}$ of this matrix denotes the strength of the link between
two nodes $i$ and $j$. A directed edge $j\rightarrow i$ exists if and
only if $z_i=j$. The set of potential parents for node $i$ is denoted
by $\pi_i$ (i.e. $z_i \in \pi_i$). The definitions of parents and
candidate parents simply implies that $\forall j \in \pi_i : t_j <
t_i$ and $\forall j \notin \pi_i : \alpha_{ij}=0$.
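As a sanity check, the structural constraints above (every candidate parent is infected strictly earlier, and $\alpha_{ij}=0$ for $j \notin \pi_i$) can be verified on a toy instance; the times, parent sets and strengths below are made up for illustration:

```python
# Assumed toy instance of the infection model's structural constraints.
t = {1: 0.0, 2: 1.3, 3: 2.1}                      # assumed infection times
parents = {2: {1}, 3: {1, 2}}                     # candidate parent sets π_i
alpha = {(2, 1): 0.7, (3, 1): 0.2, (3, 2): 0.5}   # nonzero link strengths α_ij

def consistent(t, parents, alpha):
    """Check the π_i timing constraint and the support restriction on α."""
    timing_ok = all(t[j] < t[i] for i, pi in parents.items() for j in pi)
    support_ok = all(j in parents.get(i, set()) for (i, j) in alpha)
    return timing_ok and support_ok

print(consistent(t, parents, alpha))   # True for this toy instance
```

Any inference procedure over $({\mathbf z},{\mathbf t},{\boldsymbol\alpha})$ must restrict its search to configurations passing such a check.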
As mentioned in Section \ref{sec:intro}, we focus on the scenarios
where none of the main infection parameters (link strengths, parents,
and infection times) are directly observed. We assume that the only
observation we get from an arbitrary node $i \in {\mathcal N}$ is a discrete
time signal of length $T$ denoted by ${\mathbf d}_i=\{d_i^n\}_{n=1:T}$. We
denote the set of all observed time signals by
${\mathbf d}=({\mathbf d}_1,\dots,{\mathbf d}_N)$. The goal is to infer the infection parameters
$({\mathbf z},{\mathbf t},{\boldsymbol\alpha})$ that best explain the received signal vector ${\mathbf d}$
where ${\mathbf z}=(z_1,\dots,z_N)$ and ${\mathbf t}=(t_1,\dots,t_N)$. More precisely,
we aim to find the most probable set of parameters
$({\mathbf z}^*,{\mathbf t}^*,{\boldsymbol\alpha}^*)$ conditioned on the received signals ${\mathbf d}$,
i.e.
\begin{equation}\label{mainopt}
({\mathbf z}^*,{\mathbf t}^*,{\boldsymbol\alpha}^*)= \underset{({\mathbf z},{\mathbf t},{\boldsymbol\alpha})}{\arg \max} \quad f({\mathbf z},{\mathbf t},{\boldsymbol\alpha}|{\mathbf d})
\end{equation}
In order to solve \eqref{mainopt}, we need to first derive the joint conditional distribution $f({\mathbf z},{\mathbf t},{\boldsymbol\alpha}|{\mathbf d})$. Using Bayes' rule we have,
\begin{equation}\label{Bayes}
f({\mathbf z},{\mathbf t},{\boldsymbol\alpha}|{\mathbf d})=\frac{f({\mathbf d}|{\mathbf t},{\mathbf z},{\boldsymbol\alpha})f({\mathbf t}|{\mathbf z},{\boldsymbol\alpha})f({\mathbf z}|{\boldsymbol\alpha})f({\boldsymbol\alpha})}{f({\mathbf d})}
\end{equation}
We consider proper prior distributions for components of equation \eqref{Bayes}. As justified in \cite{embar2014bayesian}, we assume that link strengths $\alpha_{ij}$ are independent and model their probability distribution by a Gamma distribution with parameters $a_{ij}$ and $b_{ij}$ i.e. $\alpha_{ij} \sim \Gamma(a_{ij},b_{ij})$. Therefore,
\begin{equation}\label{prior_alpha}
f({\boldsymbol\alpha})=\prod_{i\in {\mathcal N} , j \in \pi_i} f(\alpha_{ij})= \prod_{i\in {\mathcal N} , j \in \pi_i} \frac{\alpha_{ij}^{a_{ij}-1}e^{-\frac{\alpha_{ij}}{b_{ij}}}}{\Gamma(a_{ij})b_{ij}^{a_{ij}}}
\end{equation}
We also assume that conditioned on the link strengths, the nodes' parents are independent and follow multinomial distributions i.e.
\begin{equation}\label{prior_z}
f({\mathbf z}|{\boldsymbol\alpha})=\prod_{i \in {\mathcal N}} f(z_i|\alpha_{ij_{j \in \pi_i}})=\prod_{i \in {\mathcal N}} \frac{\alpha_{iz_i}}{\sum_{j\in\pi_i}\alpha_{ij}}
\end{equation}
The next step is to consider a proper prior conditional distribution for infection times. As proposed in \cite{gomez2012inferring}, we assume $t_i$ follows an exponential distribution with parameter $\alpha_{iz_i}$. Without loss of generality, we can assume that $t_1 \geq t_2 \geq \dots\geq t_N$. Therefore,
\begin{equation}\label{prior_tc}
\begin{aligned}
f({\mathbf t}|{\mathbf z},{\boldsymbol\alpha})&=\prod_{i \in {\mathcal N}}f(t_i|{\mathbf z},{\boldsymbol\alpha},t_{i+1:N})=\prod_{i \in {\mathcal N}} \alpha_{iz_i} e^{-\alpha_{iz_i}(t_i-t_{z_i})}
\end{aligned}
\end{equation}
Finally, we assume that node $i$'s observed data, ${\mathbf d}_i$, is independent of the observations from other nodes and that it follows two different distributions before and after being infected at $t_i$. Hence,
\begin{equation}\label{prior_d1}
f({\mathbf d}|{\mathbf z},{\mathbf t},{\boldsymbol\alpha})=\prod_{i \in {\mathcal N}}f({\mathbf d}_i|t_i)
\end{equation}
With the proposed distributions in
\eqref{prior_alpha}-\eqref{prior_d1}, we can calculate the probability
of any arbitrary set $({\mathbf z}_0,{\mathbf t}_0,{\boldsymbol\alpha}_0)$ up to a constant
$\frac{1}{f({\mathbf d})}$ using \eqref{Bayes}. Since the optimization
problem \eqref{mainopt} cannot be solved directly, we use Markov chain
Monte Carlo (MCMC) methods to sample from the posterior distribution
$f({\mathbf z},{\mathbf t},{\boldsymbol\alpha}|{\mathbf d})$. Specifically, we use
Gibbs sampling, i.e., we use the full conditional distributions
for each of the infection parameters $t_i$, $z_i$, $\alpha_{ij}$ ($i,j
\in {\mathcal N}$) to generate samples. We denote the parents and infection
times of all the nodes in the network except node $i$ respectively by
${\mathbf z}_{\overline{i}}$, ${\mathbf t}_{\overline{i}}$. Also, the link strength of
all the possible links except the link between nodes $i$ and $j$ is
denoted by ${\boldsymbol\alpha}_{\overline{ij}}$. Using Bayes' rule, the full
conditional probabilities for Gibbs sampling are:\\
(a) For the parent of a node $i$,
\begin{equation}\label{lem1_1}
f(z_i|{\mathbf d},{\mathbf z}_{\overline{i}},{\mathbf t},{\boldsymbol\alpha})
\propto f(t_i|z_i,\alpha_{iz_i},t_{z_i})f(z_i|\alpha_{ij_{j \in \pi_i}})
\end{equation}
(b) For the infection time of a node $i$,
\begin{equation}\label{lem1_2}
f(t_i|{\mathbf d},{\mathbf z},{\mathbf t}_{\overline{i}},{\boldsymbol\alpha})
\propto f({\mathbf d}_i|t_i)f(t_i|z_i,\alpha_{iz_i},t_{z_i})\prod_{k\in C_i}f(t_k|\alpha_{ki},t_i)
\end{equation}
where $C_i$ denotes the set of children of node $i$, i.e., the nodes whose parent is $i$.\\
(c) For the link strength between nodes $i$ and $j \in \pi_i$,
\begin{equation}\label{lem1_3}
f(\alpha_{ij_{ j \in \pi_i}}|{\mathbf d},{\mathbf z},{\mathbf t},{\boldsymbol\alpha}_{\overline{ij}})\propto f(t_i|z_i,\alpha_{iz_i},t_{z_i})f(z_i|\alpha_{ij_{j \in \pi_i}})f(\alpha_{ij})
\end{equation}
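As an illustration, the parent update (a) is a discrete sampling step that can be implemented directly. The following Python sketch (our own, not from the paper; names are illustrative) evaluates the unnormalized conditional $f(t_i|z_i,\alpha_{iz_i},t_{z_i})f(z_i|\alpha_{ij_{j\in\pi_i}})$ over the candidate parents, normalizes it, and draws a sample:

```python
import numpy as np

def parent_conditional(i, t, alpha, pi_i):
    """Full conditional of z_i up to a constant (eq. lem1_1):
    p(z_i = j) ∝ alpha_ij e^{-alpha_ij (t_i - t_j)} * alpha_ij / sum_k alpha_ik."""
    cand = [j for j in pi_i if t[j] < t[i]]      # a parent must be infected earlier
    norm = sum(alpha[i, k] for k in cand)        # normalizer of the multinomial prior
    w = np.array([alpha[i, j] * np.exp(-alpha[i, j] * (t[i] - t[j]))
                  * alpha[i, j] / norm for j in cand])
    return cand, w / w.sum()

def gibbs_sample_parent(i, t, alpha, pi_i, rng):
    """One Gibbs step for z_i: sample from the normalized conditional."""
    cand, p = parent_conditional(i, t, alpha, pi_i)
    return cand[rng.choice(len(cand), p=p)]
```

The updates for $t_i$ and $\alpha_{ij}$ follow the same pattern but their conditionals are not standard distributions, so they would be sampled, e.g., on a discrete time grid or via a Metropolis-within-Gibbs step.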
We evaluate the proficiency of the proposed inference approach in Section \ref{sec:res}.
\section{Simulation Results}\label{sec:res}
\subsection{Synthetic Data}
We generate a dataset based on the model
\eqref{prior_alpha}-\eqref{prior_d1}. We first randomly choose
$\pi_i$s (for all $i\in{\mathcal N}$) and an underlying directed tree ${\mathcal T}$
with adjacency matrix $\mathbf{A}=[A_{ij}]$, where $A_{ij}=1$ if and only if
there is a directed edge from $i$ to $j$. The link strength value
${\alpha_{ij}}$ (${j \in \pi_i}$) is generated using the gamma
distribution $\Gamma (a_1,b_1)$ if $A_{i,j}=1$ and $\Gamma (a_2,b_2)$
if $A_{i,j}=0$. We refer to these $\alpha$ values as \textit{true
alphas} and denote them by ${\boldsymbol\alpha}^R=[\alpha_{ij}^R]_{N \times N}$.
Then, we choose the parent of node $i$ i.e. $z_i$ from all the nodes
$j \in \pi_i$ based on a random sampling with weights
$\alpha_{ij}$. These parents are called \textit{true parents} and are
denoted by ${\mathbf z}^R=(z_1^R,\dots,z_N^R)$. Knowing the values of $z_i$
and $\alpha_{iz_i}$, we then generate the \textit{true
infection times} ${\mathbf t}^R=(t_1^R,\dots,t_N^R)$ based on the exponential
distributions described in \eqref{prior_tc}. Finally, we generate the observations ${\mathbf d}_i$ based on two different Gaussian distributions with
parameters $(\mu_{1i},\sigma_{1i})$ and $(\mu_{2i},\sigma_{2i})$ for
all nodes $i \in {\mathcal N}$, i.e.
\begin{equation}\label{prior_d2}
\begin{aligned}
f(d_i|t_i)=\frac{e^{-[\frac{\sum_{n=1}^{t_i-1} (d_i^n-\mu_{1i})^2}{2\sigma_{1i}^2}+\frac{\sum_{n=t_i}^{T} (d_i^n-\mu_{2i})^2}{2\sigma_{2i}^2}]}}{\sqrt{2\pi}^T \sigma_{1i}^{t_i}\sigma_{2i}^{T-t_i}}
\end{aligned}
\end{equation}
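A minimal sketch of this generative pipeline (our own code, assuming for concreteness that the candidate parents of node $i$ are the lower-indexed nodes, that node $0$ is the source, and that continuous infection times are rounded up onto the discrete grid $1,\dots,T$; the gamma parameters are shape/scale as in \eqref{prior_alpha} and the exponential waiting time has rate $\alpha_{iz_i}$):

```python
import numpy as np

def generate_cascade(N, T, A, a=(9, 10), b=(0.5, 0.9),
                     mu=(10.0, 11.0), sigma=(1.0, 1.0), seed=0):
    """Sketch: alpha ~ Gamma, z ~ weighted choice, t ~ exponential
    increments, d ~ Gaussian before/after infection."""
    rng = np.random.default_rng(seed)
    alpha = np.zeros((N, N))
    for i in range(1, N):
        for j in range(i):                     # pi_i = {0, ..., i-1}
            if A[i, j] == 1:                   # true edge  -> Gamma(a1, b1)
                alpha[i, j] = rng.gamma(a[0], b[0])
            else:                              # non-edge   -> Gamma(a2, b2)
                alpha[i, j] = rng.gamma(a[1], b[1])
    z = np.full(N, -1)                         # node 0 is the source (no parent)
    t = np.zeros(N)
    for i in range(1, N):
        p = alpha[i, :i] / alpha[i, :i].sum()  # multinomial parent prior
        z[i] = rng.choice(i, p=p)
        t[i] = t[z[i]] + rng.exponential(1.0 / alpha[i, z[i]])
    # round infection times onto the grid 1..T and emit observations
    tk = np.clip(np.ceil(t).astype(int), 1, T)
    d = np.empty((N, T))
    for i in range(N):
        d[i, :tk[i] - 1] = rng.normal(mu[0], sigma[0], tk[i] - 1)
        d[i, tk[i] - 1:] = rng.normal(mu[1], sigma[1], T - tk[i] + 1)
    return alpha, z, t, d
```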
We generate $M$ samples using full conditional distributions of
equations \eqref{lem1_1}-\eqref{lem1_3} to infer the network
parameters $({\mathbf z}^R,{\mathbf t}^R,{\boldsymbol\alpha}^R)$. We denote the set of all
generated samples by $\mathcal{M}$ and refer to the $m$th sample as
$S^m$. The parent vector, infection time vector, and strength matrix
of the $m$th sample are respectively denoted by $S_{{\mathbf z}}^m$,
$S_{{\mathbf t}}^m$, and $S_{{\boldsymbol\alpha}}^m$. Denoting the most frequently observed
parent--infection time pair (i.e., the pair that occurs most often
among the $M$ generated samples) by $(\hat{{\mathbf z}},\hat{{\mathbf t}})$, we
estimate the components of the link strength
$\hat{{\boldsymbol\alpha}}=[\hat{\alpha}_{ij}]_{N\times N}$ by
$\hat{\alpha}_{ij}=\frac{1}{|\mathcal{S}|} \sum_{k \in \mathcal{S}}
[S_{{\boldsymbol\alpha}}^k]_{ij}$ where $\mathcal{S}=\{m\in
\mathcal{M}|S_{{\mathbf z}}^m=\hat{{\mathbf z}},S_{{\mathbf t}}^m=\hat{{\mathbf t}}\}$.
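This summarization step can be sketched as follows (illustrative only; `samples` is a hypothetical list of dicts holding $S_{{\mathbf z}}^m$, $S_{{\mathbf t}}^m$, and $S_{{\boldsymbol\alpha}}^m$):

```python
from collections import Counter
import numpy as np

def summarize_samples(samples):
    """Pick the modal (z, t) pair among the Gibbs samples and average
    alpha over the samples that match it (the set S in the text)."""
    keys = [(tuple(s['z']), tuple(s['t'])) for s in samples]
    (z_hat, t_hat), _ = Counter(keys).most_common(1)[0]
    match = [s['alpha'] for s, k in zip(samples, keys)
             if k == (z_hat, t_hat)]
    alpha_hat = np.mean(match, axis=0)
    return np.array(z_hat), np.array(t_hat), alpha_hat
```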
In order to
evaluate the performance of our proposed inference approach, two main
questions should be answered: (1) Does the network structure improve
detection of infection times? (2) How much accuracy is lost in terms
of detecting the parents and estimating link strengths when time
series are observed instead of the actual infection times?
The first question can be answered by comparing the accuracy of
infection time estimates in two cases. In the first case, we detect the
infection time of each node independently (i.e., $\hat{t}_i'=\arg
\max_{t_i} f(t_i|{\mathbf d}_i)$), while in the second case we exploit the
network structure to find the infection times as explained in Section
\ref{sec:system_model}. We denote the vector of all $\hat{t}_i'$s by
$\hat{{\mathbf t}}'$ and define the infection time deviation function
$D_t({\mathbf t}_x^1,{\mathbf t}_x^2)$ as the average absolute difference
between the components of two arbitrary infection time vectors
${\mathbf t}_x^1=({t_x^1}_1,\dots,{t_x^1}_N)$ and
${\mathbf t}_x^2=({t_x^2}_1,\dots,{t_x^2}_N)$, i.e.,
$\forall {\mathbf t}_x^1 , {\mathbf t}_x^2 \in \mathcal{R}^{1 \times N} : \quad D_t({\mathbf t}_x^1,{\mathbf t}_x^2) \triangleq \frac{1}{N} \sum_{i=1}^N |{t_x}^1_i-{t_x}^2_i|$.
Figure \ref{fig:fig1} shows the average and $95\% $ confidence intervals of deviation values for
${\mathbf t}_x^1=\hat{{\mathbf t}},\hat{{\mathbf t}}'$ and ${\mathbf t}_x^2={{\mathbf t}}^R$ in $100$
networks of $N=20$ nodes using four extreme sets of parameters
described in Table \ref{table:scenarios}. In all these scenarios, $\mu_{1i}=10$, $\mu_{2i}=\mu_2$, $\sigma_{1i}=\sigma_{2i}=1$ for all $i\in \mathcal{N}$, and $a_1=9$, $b_1=0.5$, $a_2=10$. $M=10^5$ samples are
generated and the first $10^3$ samples are discarded as burn-in. As we see in Figure \ref{fig:fig1}, in scenarios {\em A}
and {\em B}, the infection times can be detected with high confidence, and
both deviation metrics are zero. However, in scenarios {\em
C} and {\em D}, exploiting the network structure
results in smaller deviations from the true values: the infection time
estimates are on average more accurate.
\begin{figure}
\centering
\begin{minipage}{.15\textwidth}
\centering
\vspace*{0.3cm}
\small
\begin{tabular}{|c|c|c|}
\hline
\begin{tabular}{@{}c@{}}Sce- \\nario\end{tabular}
& $\mu_2$ & $b_2$ \\
\hline
{\em A} & $100$ & $0.9$ \\
\hline
{\em B} & $100$ & $0.6$ \\
\hline
{\em C} & $11$ & $0.9$ \\
\hline
{\em D} & $11$ & $0.6$ \\
\hline
\end{tabular}
\captionof{table}{\\Test Scenarios}
\label{table:scenarios}
\end{minipage}
\hspace{0.4 cm}
\begin{minipage}{.29\textwidth}
\centering
\includegraphics[width=\textwidth]{t.pdf}
\caption{Infection Time Deviations}
\label{fig:fig1}
\end{minipage}
\end{figure}
We now compare our proposed framework with a more idealized situation
in which the infection times $t_i$ are known. We denote the parents and link
strengths estimated with knowledge of the infection times by $\hat{{\mathbf z}}'$ and
$\hat{{\boldsymbol\alpha}}'$ and define deviation functions
$D_z({\mathbf z}_x^1,{\mathbf z}_x^2)$ and $D_{\alpha}({\boldsymbol\alpha}_x^1,{\boldsymbol\alpha}_x^2)$. The
parent deviation function $D_z({\mathbf z}_x^1,{\mathbf z}_x^2)$ is defined as the
number of nodes whose parents are different in
${\mathbf z}_x^1=({z_x^1}_1,\dots,{z_x^1}_N)$ and
${\mathbf z}_x^2=({z_x^2}_1,\dots,{z_x^2}_N)$ i.e.
$\forall {\mathbf z}_x^1 , {\mathbf z}_x^2 \in \mathcal{R}^{1 \times N} : \quad D_z({\mathbf z}_x^1,{\mathbf z}_x^2) \triangleq \sum_{i=1}^N I({z_x^1}_i-{z_x^2}_i)$
, where $I(x)=1$ if $x\neq0$ and $I(x)=0$ otherwise. Finally, for the deviation of link strengths we have,
$\forall {\boldsymbol\alpha}_x^1 , {\boldsymbol\alpha}_x^2 \in \mathcal{R}^{N \times N}: D_{\alpha}({\boldsymbol\alpha}_x^1,{\boldsymbol\alpha}_x^2) \triangleq \frac{1}{N}\sum_{\hat{\alpha}_{ij}>0} |{\alpha_x^1}_{ij}-{\alpha_x^2}_{ij}|$.
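The three deviation metrics translate directly into code; a small sketch with our own helper names:

```python
import numpy as np

def D_t(t1, t2):
    """Mean absolute infection-time deviation."""
    return np.mean(np.abs(np.asarray(t1) - np.asarray(t2)))

def D_z(z1, z2):
    """Number of nodes whose inferred parent differs."""
    return int(np.sum(np.asarray(z1) != np.asarray(z2)))

def D_alpha(a1, a2, a_hat, N):
    """Mean absolute link-strength deviation over the links with
    positive estimated strength (the sum over alpha_hat_ij > 0)."""
    mask = np.asarray(a_hat) > 0
    return np.sum(np.abs(np.asarray(a1) - np.asarray(a2))[mask]) / N
```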
\begin{figure}[!hbt]
\centering
\includegraphics[width=0.45 \textwidth , height=0.2 \textwidth]{z_a.pdf}
\caption{Deviations in Detection of ${\mathbf z}$ and ${\boldsymbol\alpha}$}
\label{fig:fig2}
\end{figure}
Figure \ref{fig:fig2} shows the values of the defined performance
metrics for ${\mathbf z}_x^1=\hat{{\mathbf z}},\hat{{\mathbf z}}'$ and
${\mathbf z}_x^2={\mathbf z}^R$. We see that in scenarios {\em C} and {\em D}
(where the noise is greater and infection times are more difficult to estimate), not knowing the exact infection times results in larger deviations in estimating the network parameters. Overall, however, the deterioration
in estimation accuracy is not dramatic.
\subsection{Real Data}
We study the outbreak of Avian Influenza (H5N1 HPAI)
\cite{EMPRES}. Figure \ref{fig:fig3} shows the observed locations of
reported infections for both domestic and wild bird species for the
period of January 2004 to February 2016. We divide the observation
points into eight main regions using K-means clustering and generate a
time series ${\mathbf d}_i$ for the $i$th region. The value of this time series on
day $n$ ($d_i^n$) denotes the number of separate locations within region
$i$ in which the disease was reported on that day.
\begin{figure}[!htb]
\includegraphics[width=0.47 \textwidth]{maps.pdf}
\caption{H5N1 HPAI outbreak in 2004-2016}
\label{fig:fig3}
\end{figure}
We model the number of observations in each region by a Poisson distribution:
\begin{equation}
f({\mathbf d}_i|t_i)=\prod_{n=1}^{t_i-1} \frac{{\lambda_{1i}}^{d_i^n} e^{-\lambda_{1i}}}{d_i^n!}\prod_{n=t_i}^T \frac{{\lambda_{2i}}^{d_i^n} e^{-\lambda_{2i}}}{d_i^n!}
\end{equation}
where $\lambda_{1i}=\frac{\sum_{n=1}^{t_i-1}d_i^n}{t_i-1}$ and
$\lambda_{2i}=\frac{\sum_{n=t_i}^T d_i^n}{T-t_i+1}$ are the rate estimates before and after infection. The link
strength parameters $a_{ij}$ and $b_{ij}$ of equation
\eqref{prior_alpha} are derived by fitting a gamma distribution to the
inverse of distances between observation points of regions $i$ and
$j$. Figure \ref{fig:fig4} shows the time series for the eight
regions. Regions R5 and R8 are the first regions in which the disease
is observed. The first infections for these regions were reported on
the same day, so we assume that they were both sources of the infection.
We infer the infection parameters for the period 2004-2007
by generating $M=10^6$ samples and discarding the first $10^4$ as burn-in. The green line in Figure
\ref{fig:fig4} shows the end of the study period. Region R4 has almost
no reported infections for this period so we exclude it when estimating the
underlying infection graph. The detected infection times are shown in Figure
\ref{fig:fig4} by red vertical lines.
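The per-region rates and the resulting two-regime Poisson log-likelihood can be sketched as follows (our own helper names, assuming days are 1-indexed and the rates are the maximum-likelihood estimates implied by the Poisson observation model above):

```python
import numpy as np
from math import lgamma, log

def poisson_rates(d, t):
    """Rate MLEs around a candidate changepoint t (1-indexed, 1 < t <= T):
    lambda_1 over days 1..t-1, lambda_2 over days t..T."""
    d = np.asarray(d, dtype=float)
    T = len(d)
    lam1 = d[:t - 1].sum() / (t - 1)
    lam2 = d[t - 1:].sum() / (T - t + 1)
    return lam1, lam2

def loglik(d, t):
    """log f(d_i | t_i) for the two-regime Poisson observation model."""
    lam1, lam2 = poisson_rates(d, t)
    ll = 0.0
    for n, x in enumerate(d, start=1):
        lam = lam1 if n < t else lam2
        ll += x * log(max(lam, 1e-300)) - lam - lgamma(x + 1)
    return ll
```

Within the Gibbs step for $t_i$, this log-likelihood is what the data term $f({\mathbf d}_i|t_i)$ contributes at each candidate changepoint.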
\begin{figure}[!htb]
\centering
\includegraphics[width=0.49 \textwidth]{series.pdf}
\caption{Observed Time Series in the Impacted Regions}
\label{fig:fig4}
\end{figure}
Figure \ref{fig:fig5} shows the four most probable configurations of
the infection network and the percentage of generated samples in which each occurs. The
edge weights in these graphs are the estimated link strengths.
\begin{figure}[!htb]
\centering
\subfloat[Configuration 1, Weight= $48 \%$]{\includegraphics[trim={0mm 1mm 1mm 1mm},clip,width= 0.22 \textwidth, height=0.22 \textwidth]{C1.pdf}\label{fig:fig5_a}}\hspace*{0.05in}
\subfloat[Configuration 2, Weight= $23 \%$]{\includegraphics[trim={0mm 1mm 1mm 1mm},clip,width= 0.22 \textwidth, height=0.22 \textwidth]{C2.pdf}\label{fig:fig5_b}}\\
\subfloat[Configuration 3, Weight= $17 \%$]{\includegraphics[trim={0mm 1mm 1mm 1mm},clip,width= 0.22 \textwidth, height=0.22 \textwidth]{C3.pdf}\label{fig:fig5_c}}\hspace*{0.05in}
\subfloat[Configuration 4, Weight= $10 \%$]{\includegraphics[trim={0mm 1mm 1mm 1mm},clip,width= 0.22 \textwidth, height=0.22 \textwidth]{C4.pdf}\label{fig:fig5_d}}
\caption{Most Probable Network Configurations}
\label{fig:fig5}
\end{figure}
\section{CONCLUSION}
\label{sec:conc}
In this paper, we have proposed a framework for inferring the
underlying graph over which an infection diffuses through a
network. We designed the model to address scenarios where the
infection times are unknown. We evaluated the performance on
synthetic datasets, demonstrating that (i) incorporating the
model can improve the estimation of infection times compared to
univariate changepoint estimation when the data match the model; and
(ii) the absence of exact knowledge of infection times does not lead
to a significant deterioration in performance. We illustrated how the model and inference methodology can be applied to analyze the
outbreak of a virus. Incorporating multiple changepoint detection approaches is left as future work.
\bibliographystyle{IEEEtran}
\section{Introduction}
In power systems terminology, a \emph{preventively} secure system state refers to a situation in which the power system remains secure after any credible contingency (typically defined as an N-1 situation) without any additional control action. A \emph{correctively} secure state refers to a situation where additional, post-disturbance controls might be required.
The use of corrective security has increased over the past years \cite{panciatici2014}. On the one hand, the use of corrective actions is driven by increased uncertainty from renewable electricity generation, economic considerations due to market liberalization and operation of the system closer to its limits. On the other hand, better control systems, more situational awareness and the installation of devices such as phase-shifting transformers (PSTs) and high-voltage direct current (HVDC) connections provide the system operator with new possibilities to control power flows and react to changes in the system in real-time.
While corrective control can reduce the operational cost \cite{chatzivasileiadis2011} and is applied routinely
in real-time system operation,
the real-time set-point changes of HVDC and PSTs are typically chosen in an ad-hoc fashion. This is partially due to the difficulty of planning corrective control actions in response to forecast uncertainty, as it requires the consideration of a large number of possible uncertainty scenarios in addition to contingencies. However, the need to ensure that corrective control will be sufficient in real time calls for efficient ways of modelling both the possible system states and the corresponding corrective control actions.
Power system operational planning under uncertainty has been approached in many different ways, e.g.,
\cite{bouffard2008, sjodin2012, vrakopoulou2012, roald2013, bienstock2014, warrington2013, lorca2015, li2015, summers2014}, but only a few works have considered the application of corrective control actions or the existence of power flow control devices such as PSTs or HVDC.
In \cite{panciatici2010, capitanescu2012}, a three-stage optimal power flow (OPF) framework where corrective control actions are used to ensure feasibility during worst-case combinations of contingencies and uncertainty was proposed, and was extended to the definition of optimal corrective control actions in \cite{fliscounakis2013}.
The OPF formulation in \cite{mueller2014} accounts for uncertainty and includes corrective control for a limited number of pre-selected, critical uncertainty scenarios.
In \cite{vrakopoulou2013ISGT, thakurta2015}, \emph{post-contingency} corrective control of power flow control devices such as HVDC and PSTs is applied within a chance constrained OPF (CC-OPF) framework, which ensures that the constraints will hold with a desired probability. However, corrective control actions \emph{in reaction to the uncertainty realizations} have not been considered in any of these works.
In this paper, we extend previously proposed CC-OPFs to account for corrective control in reaction to \emph{both contingencies and uncertainty}.
Corrective control for uncertainty differs from post-contingency corrective control in several ways. While contingencies are typically low probability, discrete events that induce large and sudden impacts on the system, uncertainty realizations occur frequently and develop in a more continuous fashion (although ramping due to, e.g., sunset or fog can be fast).
These differences impact the time available for implementation, as well as the type and modelling of control reactions.
However, we refer to both as \emph{corrective control}, since they act to mitigate the impact of already occurred events (as opposed to preventive measures).
We focus on corrective control of HVDC and PSTs, which are typically controlled by the transmission system operator and have low cost, as opposed to the use of generation redispatch, which interferes with market operation and incurs significant cost.
Further, we choose to work with analytically reformulated chance constraints to account for the impact of uncertainty, as these offer a transparent and scalable way of ensuring security with a large number of uncertainty sources. Chance constraints also align well with several methods applied in industry, such as the probabilistic reserve dimensioning applied in ENTSO-E
\cite{ENTSOE2013supportingLFCR} or the definitions of reliability margins for flow-based market coupling in parts of Europe \cite{CWE2011}.
The contributions of the paper can be summarized in the following points:
\begin{enumerate}
\item We propose a framework with combined corrective control in reaction to both uncertainty and contingencies. Since uncertainties from, e.g., renewable energy production are naturally characterized as continuous random variables, we propose to model the corrective control through a continuous, affine control policy.
\item We formulate a chance constrained optimal power flow with security constraints (CC-SCOPF) based on a combination of the formulations developed in \cite{roald2013, bienstock2014}, and extend it to include the proposed corrective control framework.
\item The CC-SCOPF is reformulated into an optimization problem with second-order cone (SOC) constraints, for which we develop an efficient solution algorithm. The proposed algorithm is based on solving a sequence of second-order cone programs (SOCPs), which in our case outperforms the cutting-plane algorithm proposed in \cite{bienstock2014}.
\end{enumerate}
The benefits of the proposed CC-SCOPF are demonstrated in a case study based on the IEEE 118 bus system. The results show that corrective control in reaction to uncertainty can reduce operational cost, while maintaining system security. Further, we demonstrate and discuss the scalability of the method in a case study including both the IEEE 300 bus test system and the large-scale Polish test case with 2383 buses.
The remainder of the paper is organized as follows. Section \ref{sec:corrcontrol} describes the general framework of corrective control, based on a generic power flow controller. The power system modelling, including uncertainty and corrective control, is described in Section \ref{sec:modelling}. Section \ref{sec:CCOPF} provides the complete CC-SCOPF formulation and discusses the reformulation of the chance constraints. Details of the sequential SOCP algorithm are given in Section \ref{sec:implementation}.
The case study for the IEEE 118 bus system is included in Section \ref{sec:Case},
while Section \ref{sec:Polish} demonstrates and discusses scalability of the method on larger test cases. Section \ref{sec:Conclusion} summarizes and concludes the paper.
\section{Modelling Framework for Corrective Control}
\label{sec:corrcontrol}
In this paper, we consider corrective control as a means to handle transmission line congestion. We distinguish between two types of corrective control: Corrective control to handle contingencies and corrective control to handle forecast uncertainty.
In the following, we present how we model corrective control for a generic power flow controller, which changes set-point in response to transmission line outages or fluctuations in the power injections.
Corrective control for generation outages is not considered here, but can be incorporated within the proposed modelling framework.
Modelling considerations related specifically to HVDC and PSTs, as well as how the corrective control influences the power flows on the transmission lines, are given in Section \ref{sec:modelling}.
We denote vectors by lower case letters, e.g., $p_G,~\omega$. The components of the vectors are denoted by using lower case subscripts, i.e, the $i$th component of $p_G$ is denoted by $p_{G,i}$.
Matrices are denoted by upper/lower bold case letters, $\boldsymbol{\alpha}, \mathbf{M}$, and $\boldsymbol{\alpha}_{(i,\cdot)}, \boldsymbol{\alpha}_{(\cdot,i)}$ denote the $i^{th}$ row and column of $\boldsymbol{\alpha}$ respectively.
Index $i$ refers to generators, while index $ij$ refers to lines.
\subsection{Modelling Corrective Control for Contingencies}
Post-contingency control is modeled as a change in the set-points of power flow control devices (such as the flow across HVDC connection or the phase angles of PSTs) following an outage.
Since outages can be characterized as a set of discrete events, we model the set-point changes by introducing additional optimization variables $\delta^{ij}$ for every outage as in \cite{chatzivasileiadis2011, vrakopoulou2013ISGT}.
The set-point of a power flow controller following the outage of line $ij$ is thus given by
\begin{equation}
\tilde{\pi}(ij) = \pi + \delta^{ij},
\label{policy}
\end{equation}
where $\pi,~\tilde\pi(ij)$ represents the pre- and post-contingency set-points, respectively, and $\delta^{ij}$ represents the set-point change following the contingency.
\subsection{Modelling Corrective Control for Forecast Uncertainty}
While post-contingency corrective control describes reactions of the controllers to an outage \emph{after the outage happens}, corrective control for uncertainty describes reactions of the controllers \emph{after the uncertainty has realized}.
We model these corrective control actions through affine control policies, where the controllers adjust their set-point proportional to the forecast error. The controller set-points, including corrective control for forecast uncertainty, are given by
\begin{equation}
\tilde{\pi}(\omega) = \pi + \boldsymbol{\alpha} \omega,
\label{policyUnc}
\end{equation}
where $\pi,~\tilde\pi(\omega)$ represents the scheduled and real-time controller set-points, and the vector $\omega$ denotes the random fluctuations. The matrix $\boldsymbol{\alpha}$ defines the response of the controllers, and control parameters $\boldsymbol{\alpha}$ are subject to optimization.
Restricting the corrective action to follow a linear response leads to solutions that have higher cost than if we use, e.g., an arbitrary function or define optimal set-points based on specific uncertainty scenarios. However, representing arbitrary functions or a reasonable set of uncertainty scenarios within an optimization problem is challenging, and negatively affects scalability to large systems.
Therefore, while an affine response policy
restricts the ability to react,
it also has several advantages. First, it allows us to optimize over the corrective control actions without compromising computational tractability.
Second, a policy-based response allows us to treat the fluctuations as continuous variables (as opposed to a representation through a finite number of scenarios). Third, it provides a control policy which can be easily implemented by the TSO.
Fourth, even if the policy is not directly implemented, it guarantees that a feasible solution will exist.
Affine policies have been utilized within chance constrained and robust models for the optimal power flow problem with uncertainty, e.g. \cite{vrakopoulou2012, roald2013, warrington2013, bienstock2014, lorca2015}, to model the continuous activation of reserves through a mechanism similar to the automatic generation control (AGC).
The approach taken in this paper differs from previous work in two important aspects.
First, the affine control policies defined previously were used to \emph{balance} the grid (by controlling generation), while we focus on corrective power flow control for \emph{congestion management} (by controlling HVDC and PSTs).
Second, previously proposed policies focused on balancing the overall power mismatch without consideration of the location of the fluctuations in the grid.
Since we apply corrective control to manage power flows and congestion, the specific location of both the fluctuations and the control reactions in the grid matters. We therefore define control parameters $\boldsymbol{\alpha}_{(i,j)}$ for each pair of controller $i$ and uncertainty source $j$, such that the controllers can react to fluctuation $\omega_j$ based on its location in the grid.
\subsection{Combined Corrective Control}
Assuming that each controller is able to provide corrective control for both contingencies and forecast uncertainty, we obtain the combined corrective control policy,
\begin{equation}
\tilde{\pi}(\omega,ij) = \pi + \boldsymbol{\alpha} \omega + \delta^{ij}.
\label{policy_combined}
\end{equation}
Note that the reaction to forecast uncertainty is decoupled from the reaction to contingencies, i.e., $\boldsymbol{\alpha}$ does not depend on whether the system is in post- or pre-contingency state.
This is because the number of additional control variables would otherwise be very significant and difficult to handle, both for the optimization and for implementation in real-time operation.
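A minimal sketch of evaluating the combined policy \eqref{policy_combined} in real time, with the post-contingency corrections $\delta^{ij}$ stored as a look-up table keyed by the outaged line (all names are illustrative, not part of the formulation):

```python
import numpy as np

def realtime_setpoint(pi, alpha, omega, delta, outage=None):
    """Real-time controller set-points pi + alpha @ omega + delta^{ij}.
    `outage` is the key of the outaged line in the delta look-up table;
    None means the pre-contingency state."""
    p = np.asarray(pi) + np.asarray(alpha) @ np.asarray(omega)
    if outage is not None:
        p = p + np.asarray(delta[outage])
    return p
```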
\subsection{Use of Corrective Control in Real-Time Operation}
The corrective control actions $\delta^{ij},\boldsymbol{\alpha}$ are necessary to ensure secure system operation in real time.
The post-contingency corrective control actions $\delta^{ij}$ can be made available to the operator in form of a look-up table, such that they can be implemented fast following a contingency.
The corrective control actions due to uncertainty, defined by a combination of the real time realization of the fluctuations $\omega$ and the control parameters $\boldsymbol{\alpha}$, can be implemented
either as continuous, automatic adjustments (e.g., similar to AGC control)
or as a manual change in set-points which is implemented only when a constraint violation occurs in the system.
While restricting the response to an affine policy allows for a tractable day-ahead OPF, we can implement any reaction in real-time operation. In real time the uncertainty $\omega$ is known, and the operator can rerun the SCOPF as a deterministic problem. The set-points of the HVDC and PSTs can then be optimally chosen based on the current operating point. Since the real-time control is more general than the affine policy, we solve an SCOPF with less restrictive constraints, extending the feasible space. In this regard, applying an affine policy in operational planning guarantees feasibility in real time operation.
\section{Power System Modelling} \label{sec:modelling}
In this section, we present details about the power system modelling. We consider a power system where $\mathcal{N},~\mathcal{L}$ is the set of nodes and lines, respectively, and the number of nodes and lines are given by $|\mathcal{N}| = m$ and $|\mathcal{L}| = l$.
The set of nodes with uncertain demand or production of energy is given by $\mathcal{U}\subseteq\mathcal{N}$. The fluctuations in the demand or production at any given node can be due to various sources, such as load fluctuations, forecast errors for wind or PV or intra-day electricity trading.
The set of conventional generators is denoted by $\mathcal{G} \subseteq \mathcal{N}$, and are assumed to be controllable within their limits.
To simplify notation, we assume that there is one conventional generator $p_{G,i}$, one composite uncertainty source $u_i$ and one demand $d_i$ per node, such that $|\mathcal{G}|=|\mathcal{U}|=|\mathcal{N}| = m$. Nodes without generation or load can be handled by setting the respective entries to zero, and nodes with multiple entries can be handled through a summation.
The considered power flow control devices used for corrective control are given by the set of HVDC connections $\mathcal{H}$ and PSTs $\mathcal{S}$, with $|\mathcal{H}| = h$ and $|\mathcal{S}| = s$, respectively.
The modelling is based on a DC power flow approximation. We consider the outage of any single line for the N-1 security constraints, but leave out lines that lead to islanding of the system. Generation outages are handled in a simplified way through a pre-determined reserve requirement.
\subsection{Uncertainty Sources}
The uncertainty sources $u\in\mathbb{R}^m$ are modeled as the sum of the expected production of active power (e.g., from wind or solar PV) $\mu$ and a zero mean fluctuating component $\omega$:
\begin{equation}
u = \mu + \omega~.
\end{equation}
The covariance matrix of the fluctuations is denoted by $\Sigma_W \in \mathbb{R}^{m\times m}$. Further, the total power mismatch arising from the fluctuations is given by $\Omega=\sum_{i\in\mathcal{U}}\omega_i$, with corresponding standard deviation $\sigma_{\Omega} = \sqrt{\mathbf{1}^{T}\Sigma_W \mathbf{1}}$.
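As a concrete illustration, $\sigma_{\Omega}$ can be computed directly from the covariance matrix. The following Python sketch uses purely illustrative numbers (three uncertain nodes with hypothetical standard deviations and a common pairwise correlation):

```python
import numpy as np

# Purely illustrative numbers: three uncertain nodes with given standard
# deviations and a common pairwise correlation coefficient rho.
sigma = np.array([10.0, 20.0, 30.0])     # per-node std deviations (MW)
rho = 0.3
corr = np.full((3, 3), rho)
np.fill_diagonal(corr, 1.0)
Sigma_W = corr * np.outer(sigma, sigma)  # covariance matrix of omega

# Standard deviation of the total mismatch Omega = sum_i omega_i:
ones = np.ones(3)
sigma_Omega = float(np.sqrt(ones @ Sigma_W @ ones))

# With rho = 0 this would be sqrt(10^2 + 20^2 + 30^2) ~ 37.4 MW; the
# positive correlation increases it.
print(sigma_Omega)
```

Note that for positively correlated fluctuations, $\sigma_\Omega$ exceeds the uncorrelated value $\sqrt{\sum_i \sigma_i^2}$.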
\subsection{HVDC and PST Set-Points With Corrective Control}
We assume that all HVDC installations are point-to-point connections
such that the power flow $p_{DC}$ is controllable within the limits $\underline{p}_{DC}$, $\overline{p}_{DC}$.
The PST tap steps are assumed to be fine enough for the angles $\gamma$ to be well approximated as continuous variables, and the lower and upper limits on the PST angles are given by $\underline{\gamma}, \overline{\gamma}$.
As outlined in Section \ref{sec:corrcontrol}, the post-contingency corrective control of HVDC and PSTs are modelled through additional variables $\delta_{DC}^{ij},~\delta_{\gamma}^{ij}$, while the corrective control for uncertainties are modelled through the matrices $\boldsymbol{\alpha}_{DC}\in\mathbb{R}^{h \times m},~\boldsymbol{\alpha}_{\gamma}\in\mathbb{R}^{s \times m}$, respectively. The set-points of the HVDC and the PSTs with corrective control are thus given by
\begin{align}
&\tilde{p}_{DC,i}(\omega,ij)=p_{DC,i} + \delta_{DC,i}^{ij} - \boldsymbol{\alpha}_{DC(i,\cdot)}\omega, \quad \forall_{i\in\mathcal{H}, ij\in \{0,\mathcal{L}\}}, \label{HVDCcontrol}\\
&\tilde{\gamma}_{i}(\omega,ij)=\gamma_{i} + \delta_{\gamma,i}^{ij} - \boldsymbol{\alpha}_{\gamma(i,\cdot)}\omega, \quad \forall_{i\in\mathcal{S}, ij\in \{0,\mathcal{L}\}}, \label{PSTcontrol}
\end{align}
where $ij=0$ refers to the set-points in the pre-contingency state where $\delta_{DC,i}^{0}=\delta_{\gamma,i}^{0}=0$.
\subsection{Power Balance and Generation Control}
One of the main tasks in power system operation is to ensure balance between consumed and produced power at all times. Here, we split the modelling of the power balance into the case with and without deviations $\omega$ from the schedule. For the base case (with $\omega=0$ and no outages), we enforce the nodal power balance constraints,
\begin{equation}
p_G - d + \mu + \mathbf{C}_{DC}p_{DC}= \mathbf{B}_{Bus}\cdot\theta + \mathbf{B}_{\gamma}\cdot\gamma~. \label{nodal_bal_first}
\end{equation}
Here, $\mathbf{B}_{Bus}\in\mathbb{R}^{m\times m}$ is the bus susceptance matrix of the system and $\theta\in\mathbb{R}^m$ refers to the voltage angles at each bus.
$\mathbf{C}_{DC}$ is the incidence matrix of the HVDC connections, with -1 at the bus where the connection is leaving and +1 at the bus where it enters.
The matrix $\mathbf{B}_{\gamma}$ describes the influence of the PSTs and has non-zero entries only for buses connected to lines with PSTs. For PST $k$ located at line $ij$ (which leaves from bus $i$ and enters bus $j$), we have the following non-zero elements,
\begin{equation}
\mathbf{B}_{\gamma(i,k)}=\frac{1}{x_{ij}},\quad \mathbf{B}_{\gamma(j,k)}=-\frac{1}{x_{ij}}.
\end{equation}
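The construction of $\mathbf{B}_{\gamma}$ (and of the line-side mapping $b_{\gamma}$ used further below) can be sketched as follows; the 4-bus network data and PST placement are hypothetical:

```python
import numpy as np

# Hypothetical 4-bus network: lines given as (from_bus, to_bus, reactance),
# with PSTs installed on lines 0 and 2 (all data illustrative).
lines = [(0, 1, 0.1), (1, 2, 0.2), (2, 3, 0.1), (0, 3, 0.25)]
pst_lines = [0, 2]                 # indices into `lines` carrying a PST
m, l, s = 4, len(lines), len(pst_lines)

B_gamma = np.zeros((m, s))         # bus-side PST injection matrix
b_gamma = np.zeros((l, s))         # line-side PST mapping
for k, line_idx in enumerate(pst_lines):
    i, j, x = lines[line_idx]
    B_gamma[i, k] = 1.0 / x        # +1/x_ij at the "from" bus
    B_gamma[j, k] = -1.0 / x       # -1/x_ij at the "to" bus
    b_gamma[line_idx, k] = 1.0 / x

# Each PST shifts power between its two terminal buses, so every column
# of B_gamma sums to zero:
print(B_gamma.sum(axis=0))
```

The zero column sums reflect that a PST only redistributes flow; it does not inject net power into the system.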
To balance fluctuations $\omega$, we assume that the generators adjust their in-feeds according to an automatic generation control (AGC) signal.
We base the activation of reserves on the total power mismatch $\Omega$ as in \cite{vrakopoulou2012, roald2013, warrington2013, bienstock2014, lorca2015}. In this case, the generator set-points after adjustment for the fluctuations are given by
\begin{equation}
\tilde{p}_G(\omega) = p_G - \alpha_G \Omega, \label{global}
\end{equation}
where $\alpha_G \in \mathbb{R}^m$ is a vector distributing the power mismatch among the generators, and is subject to optimization.
To ensure active power balance during fluctuations, and to guarantee that the reserve activation counteracts the deviation (i.e., a positive $\Omega$ induces a decrease in generation),
we additionally enforce the following constraints on $\alpha_G$:
\begin{equation}
\sum_{i\in\mathcal{G}} \alpha_{G,i} = 1, \quad \alpha_G \geq 0. \label{windbal}
\end{equation}
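A minimal numerical sketch of the balancing scheme \eqref{global}, \eqref{windbal}, with illustrative set-points and participation factors:

```python
import numpy as np

# Illustrative participation factors (non-negative, summing to one, as
# required by the balancing constraints) and scheduled set-points:
alpha_G = np.array([0.5, 0.3, 0.2])
p_G = np.array([100.0, 80.0, 60.0])   # MW

Omega = 30.0                          # total mismatch (here: a surplus)
p_G_tilde = p_G - alpha_G * Omega     # AGC-adjusted set-points

# Because alpha_G sums to one, the adjustment absorbs Omega exactly:
print(p_G_tilde, p_G.sum() - p_G_tilde.sum())
```
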
Note that we do not include corrective control by generators for either contingencies or uncertainties. While the formulation itself can easily be extended with variables denoting post-contingency redispatch $d_G^{ij}$ or a more general matrix $\boldsymbol{\alpha}_G$, generation control is expensive and requires careful consideration of how redispatch and balancing are priced. Allowing a more general matrix $\boldsymbol{\alpha}_G$ can, however, significantly impact the handling of uncertainty, as shown in \cite{roald2016EnergyCon}.\\
Since line outages do not change the power injections and power balance is maintained during fluctuations due to \eqref{windbal}, the above modelling enforces power balance during all considered system conditions.
\subsection{Power flow modelling}
We compute the pre-contingency power flows as the sum of the base case flow $p_{ij}$ (with $\omega=0$ and no contingencies) and changes due to fluctuations $\Delta p_{ij}^{\omega}$:
\begin{equation}
p_{ij}^{\omega} = p_{ij} + \Delta p_{ij}^{\omega}.
\end{equation}
The base case power flow $p_{ij}$ is given by
\begin{equation}
p_{ij} = \frac{1}{x_{ij}}( \theta_i - \theta_j) + b_{\gamma(ij,\cdot)}\gamma,
\end{equation}
where $x_{ij}$ is the reactance of line $ij$. The matrix $b_{\gamma}\in\mathbb{R}^{l\times s}$ maps the PST angles to the lines where the PSTs are located, with entries $b_{\gamma(ij,s)}=\frac{1}{x_{ij}}$ if the $s^{th}$ PST is located at line $ij$ and $b_{\gamma(ij,s)}=0$ otherwise.
The power flow change $\Delta p_{ij}^\omega$ is a result of both the fluctuations $\omega$ themselves, the corrective control from HVDC and PSTs and the balancing by the generators. It is given by
\begin{align}
& \Delta p_{ij}^{\omega} = \mathbf{M}_{(ij,\cdot)}\left(-\alpha_G 1_{1\times m} + \mathbf{I} + \mathbf{C}_{DC} \boldsymbol{\alpha}_{DC} - \mathbf{B}_{\gamma} \boldsymbol{\alpha}_{\gamma}\right)\omega~ \nonumber \\
& \quad \quad \quad ~ + b_{\gamma(ij,\cdot)}\boldsymbol{\alpha}_{\gamma}\omega = \mathbf{A}_{(ij,\cdot)}\omega~, \label{eq:lineflows1}
\end{align}
where $\mathbf{I}\in \mathbb{R}^{m \times m}$ is the identity matrix.
The matrix $\mathbf{M}\in\mathbb{R}^{l \times m}$ is the matrix of power transfer distribution factors (PTDFs) \cite{christie2000}.
The term in the brackets is the effective change in the power injections due to fluctuations $\omega$, and the last term is the direct influence of the changes in the PST set-points on the lines where they are located.
Note that the matrix $\mathbf{A} = \mathbf{A}(\alpha_G, \boldsymbol{\alpha}_{DC}, \boldsymbol{\alpha}_{\gamma})$ is a linear function of the policies for balancing and corrective control. By allowing corrective control of HVDC and PSTs in reaction to uncertainties, it is possible to influence the change in the power flow and reduce line congestion.
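The assembly of $\mathbf{A}$ from \eqref{eq:lineflows1} can be sketched as follows; all matrices below are random placeholders standing in for the PTDF matrix, the incidence/mapping matrices and the (optimized) response policies:

```python
import numpy as np

# Dimensions of a hypothetical system; all matrices below are random
# placeholders for the PTDF matrix M, the incidence/mapping matrices and
# the response policies (which are decision variables in the OPF).
m, l, h, s = 5, 7, 2, 2
rng = np.random.default_rng(0)
M = rng.normal(size=(l, m))           # PTDF matrix
C_DC = rng.normal(size=(m, h))        # HVDC incidence matrix
B_gamma = rng.normal(size=(m, s))     # PST bus-injection matrix
b_gamma = rng.normal(size=(l, s))     # PST line mapping
alpha_G = rng.random(m); alpha_G /= alpha_G.sum()
alpha_DC = rng.normal(size=(h, m))
alpha_gam = rng.normal(size=(s, m))

# Effective injection response to omega, mapped to line flow changes:
inj_response = (-np.outer(alpha_G, np.ones(m)) + np.eye(m)
                + C_DC @ alpha_DC - B_gamma @ alpha_gam)
A = M @ inj_response + b_gamma @ alpha_gam   # shape (l, m)

omega = rng.normal(size=m)
dp = A @ omega                               # flow change on every line
print(A.shape, dp.shape)
```

The sketch illustrates that a single dense matrix $\mathbf{A}$ summarizes the combined effect of balancing, HVDC and PST response on every line flow.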
With line outages, the power flow changes both due to corrective control actions from HVDC and PSTs, and due to the change in system topology.
The change in the power flow due to the corrective control actions alone (including the change that would have occurred on the outaged line itself) can be modelled as
\begin{align}
p_{ij}^{\delta_{kl}} &= p_{ij} + \mathbf{M}_{(ij,\cdot)}\mathbf{C}_{DC} \delta_{DC}^{kl} + \left(b_{\gamma(ij,\cdot)} - \mathbf{M}_{(ij,\cdot)}\mathbf{B}_{\gamma}\right) \delta_{\gamma}^{kl}. \nonumber
\end{align}
The effect of change in system topology can be accounted for using line outage distribution factors (LODFs) \cite{christie2000}, with $LF_{ij}^{kl}$ denoting the fraction of the power flow on the line $kl$ that is shifted to line $ij$ when line $kl$ is outaged.
The flow on a line $ij$ with outage $kl$ and fluctuation $\omega$ is given by
\begin{align}
p_{ij}^{kl,\omega} &= p_{ij}^{\delta_{kl}} + \Delta p_{ij}^{\omega} + LF_{ij}^{kl}\left( p_{kl}^{\delta_{kl}} + \Delta p_{kl}^{\omega}\right) \nonumber\\
&= p_{ij}^{\delta_{kl}} + \mathbf{A}_{(ij,\cdot)}\omega + LF_{ij}^{kl}\left( p_{kl}^{\delta_{kl}} + \mathbf{A}_{(kl,\cdot)}\omega \right) \nonumber\\
&= p_{ij}^{\delta_{kl}} + LF_{ij}^{kl} p_{kl}^{\delta_{kl}} + \left(\mathbf{A}_{(ij,\cdot)}+LF_{ij}^{kl}\mathbf{A}_{(kl,\cdot)}\right) \omega. \nonumber
\end{align}
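A small numerical sketch of the last expression, with illustrative flows, LODF value and sensitivity rows:

```python
import numpy as np

# Illustrative values: flows on lines ij and kl after the post-contingency
# corrective terms, the LODF of outage kl onto line ij, and rows of A.
p_ij_delta = 120.0     # MW
p_kl_delta = 80.0      # MW
LF = 0.35              # fraction of kl's flow shifted onto ij

A_ij = np.array([0.2, -0.1, 0.05])    # placeholder sensitivity rows
A_kl = np.array([0.1, 0.3, -0.2])
omega = np.array([10.0, -5.0, 20.0])  # a fluctuation realization

# Flow on line ij with outage kl and fluctuation omega:
p_post = (p_ij_delta + LF * p_kl_delta
          + (A_ij + LF * A_kl) @ omega)
print(p_post)
```
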
\section{Chance-Constrained Optimal Power Flow} \label{sec:CCOPF}
In this section, we formulate the chance constrained optimal power flow problem based on the modelling considerations described in Section \ref{sec:modelling}.
\subsection{Objective and Constraints}
\subsubsection{Objective}
The objective of the CC-OPF is to minimize the total cost of energy and reserves,
\begin{equation} \label{eq:objective}
\min \sum_{i \in \mathcal{G}}\left(c_i p_{G,i} + c_i^+ r_i^+ + c_i^- r_i^-\right)
\end{equation}
Here, $p_{G,i}$ are the scheduled generation set-points, $r^+, r^-$ represent the up- and down regulation reserves and the cost coefficients $c_i, c_i^+, c_i^-$ represent the bids of the generators for providing energy and reserves.
\subsubsection{Power Balance and Generator Constraints}
The power balance and generator constraints are given by
\begin{align}
& p_G - d + \mu + \mathbf{C}_{DC}p_{DC}= \mathbf{B}_{Bus}\cdot\theta + \mathbf{B}_{\gamma}\cdot\gamma~, \label{nodalbal}\\
& \sum_{i\in\mathcal{G}} \alpha_{G,i} = 1, \quad \alpha_G \geq 0, \label{windbal1}\\
& p_G + r^+ \leq \bar{p}_G, \quad p_G - r^- \geq \underline{p}_G, \label{gen_down}\\
& \sum_{i\in\mathcal{G}} r_i^+ \geq R^+, ~~\sum_{i\in\mathcal{G}} r_i^- \geq R^-, \label{res_cont1}\\
& 0\leq r^+ \leq \bar{r}^+,~~ 0\leq r^-\leq \bar{r}^-, \label{res_cont}\\
& \mathbb{P}\left[- \alpha_{G,i}\Omega \leq r^{+}_i \right] \geq 1-\epsilon_G,~~ \forall_{i \in \mathcal{G}} \label{chance_gen_min} \\
& \mathbb{P}\left[- \alpha_{G,i}\Omega \geq -r^{-}_i \right] \geq 1-\epsilon_G~~ \forall_{i \in \mathcal{G}} \label{chance_gen_max}
\end{align}
Here, \eqref{nodalbal} defines the power balance in normal operation, while \eqref{windbal1} ensures active power balance during wind power fluctuations and non-negativity of $\alpha_G$.
The generator constraints \eqref{gen_down} ensure that the initial generation set-points in combination with scheduled reserve capacities $r^+,r^-$ remain within the technical generation limits $\bar{p}_G,~\underline{p}_G$.
Eq. \eqref{res_cont1} enforces that the total amount of reserves fulfills a predetermined reserve requirement $R^+, R^-$. Eq. \eqref{res_cont} ensures that the generators are not scheduled to provide more reserves than their ramping capabilities allow, by enforcing an upper bound on the reserve capacities $\bar{r}^+,~\bar{r}^-$ for each generator.
Eqs. \eqref{chance_gen_min}, \eqref{chance_gen_max} enforce that the reserve activation requested from each generator
can be covered by the corresponding reserves $r^+,~r^-$. Since the reserve activation depends on the fluctuations $\Omega$, these constraints are formulated as chance constraints, which require the probability of constraint violation to remain below an acceptable level $\epsilon_G$. \\
Note that all reserve chance constraints depend only on the total mismatch $\Omega$, which is a scalar random variable. Assuming that we enforce all generator constraints with the same acceptable violation probability $\epsilon_G$, the reserve constraints \eqref{chance_gen_min}, \eqref{chance_gen_max} are jointly enforced, i.e., the probability that \emph{none} of the generation constraints will be violated is $1-\epsilon_G$.
To see this, consider the up-reserve constraints \eqref{chance_gen_min}; the argument for the down-reserves is symmetric. The up-reserves of all generators are fully utilized exactly when $-\Omega$ reaches its $1-\epsilon_G$ quantile $q_{1-\epsilon_G}$. For $-\Omega<q_{1-\epsilon_G}$, \emph{none} of the constraints \eqref{chance_gen_min} are violated, while for $-\Omega>q_{1-\epsilon_G}$, \emph{all} of them are violated. Using this observation we have
\begin{align*}
&\mathbb{P}\left[- \alpha_{G,i}\Omega \leq r^{+}_i \right] = \mathbb{P}\left[- \Omega \leq \frac{r^{+}_i}{\alpha_{G,i}} \right] \geq 1-\epsilon_G.
\end{align*}
Since there is a non-zero cost $c_i^{+}$ associated with $r_i^{+}$ in \eqref{eq:objective}, at optimality we will have
\begin{align*}
\frac{r^{+}_i}{\alpha_{G,i}} = q_{1-\epsilon_G}~~ \forall_{i \in \mathcal{G}}.
\end{align*}
Hence the \emph{joint} probability of reserve insufficiency can be simplified as
\begin{align*}
\mathbb{P}\left[- \alpha_{G,i}\Omega \leq r^{+}_i, ~~ \forall_{i \in \mathcal{G}} \right]
& = \mathbb{P}\left[ -\Omega \leq q_{1-\epsilon_G} \right] = 1-\epsilon_G.
\end{align*}
The violation probability $\epsilon_G$ thus has a direct interpretation as the probability of having insufficient reserves, which is a commonly used risk measure in real power systems. In the Swiss power system, for example, the acceptable probability of reserve deficiency is 0.2\% \cite{abbaspourtorbati2016}, which corresponds to $\epsilon_G = 0.001$ for each of the up- and down-reserves.
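The quantile argument above can be checked empirically. The following sketch sizes the reserves exactly at the analytic limit and verifies by Monte Carlo that the per-generator violations occur jointly, with probability close to $\epsilon_G$ (a larger, illustrative $\epsilon_G=0.1$ is used here so that violations appear in a moderate sample):

```python
import numpy as np
from statistics import NormalDist

# Illustrative setting: eps_G deliberately large (0.1 instead of 0.001)
# so that violations show up in a moderate Monte Carlo sample.
eps_G = 0.1
sigma_Omega = 50.0                    # std of the total mismatch Omega
alpha_G = np.array([0.5, 0.3, 0.2])  # participation factors (sum to 1)

# Size the up-reserves exactly at the analytic limit r_i^+ = alpha_i * q:
q = NormalDist().inv_cdf(1 - eps_G) * sigma_Omega
r_plus = alpha_G * q

rng = np.random.default_rng(1)
Omega = rng.normal(0.0, sigma_Omega, size=200_000)

# Up-reserve i is exceeded when -alpha_i * Omega > r_i^+; since r_i^+
# scales with alpha_i, this reduces to -Omega > q for every generator,
# so either all generators run short simultaneously, or none do.
viol = (-alpha_G[:, None] * Omega[None, :]) > r_plus[:, None]
any_viol = viol.any(axis=0).mean()
all_viol = viol.all(axis=0).mean()
print(any_viol, all_viol)
```

The two empirical frequencies coincide, confirming that the per-generator constraints are enforced jointly.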
\subsubsection{HVDC and PST Constraints}
The constraints enforcing the upper and lower bounds on HVDC and PSTs, after activation of corrective control, are given by \eqref{chance_dc_min} - \eqref{chance_pst_max}, while \eqref{max_corr} allows the operator to limit the amount of post-contingency control:
\begin{align}
& \mathbb{P}\left[p_{DC,i} + \delta_{DC,i}^{kl} - \boldsymbol{\alpha}_{DC(i,\cdot)}\omega \leq \bar{p}_{DC,i} \right] \geq 1-\epsilon, \label{chance_dc_min} \\
& \mathbb{P}\left[p_{DC,i} + \delta_{DC,i}^{kl} - \boldsymbol{\alpha}_{DC(i,\cdot)}\omega \geq \underline{p}_{DC,i} \right] \geq 1-\epsilon, \label{chance_dc_max} \\
& \mathbb{P}\left[\gamma_{j} + \delta_{\gamma,j}^{kl} - \boldsymbol{\alpha}_{\gamma(j,\cdot)}\omega \leq \bar{\gamma}_{j} \right] \geq 1-\epsilon, \label{chance_pst_min} \\
& \mathbb{P}\left[\gamma_{j} + \delta_{\gamma,j}^{kl} - \boldsymbol{\alpha}_{\gamma(j,\cdot)}\omega \geq \underline{\gamma}_{j} \right] \geq 1-\epsilon, \label{chance_pst_max} \\
& -\overline{\delta}_{DC,i}^{kl} \leq \delta_{DC,i}^{kl} \leq \overline{\delta}_{DC,i}^{kl},\quad
-\overline{\delta}_{\gamma,j}^{kl} \leq \delta_{\gamma,j}^{kl} \leq \overline{\delta}_{\gamma,j}^{kl} \label{max_corr}\\
& \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \forall_{i \in \mathcal{H}, ~j \in \mathcal{S},~ kl \in \{0,\mathcal{L}\}}~. \nonumber
\end{align}
The constraints \eqref{chance_dc_min} - \eqref{chance_pst_max} depend on the uncertainty $\omega$, and are thus formulated as chance constraints with acceptable violation probability $\epsilon$.
Since $\epsilon > 0$, there will be cases in which the HVDC and PSTs reach their limits and are not able to continue providing corrective control according to \eqref{HVDCcontrol}, \eqref{PSTcontrol}. This control saturation can be expressed as (shown for the upper limits; the lower limits are analogous)
\begin{align}
\tilde{p}_{DC,i} &= \min \left\{ p_{DC,i} + \delta_{DC,i}^{kl} - \boldsymbol{\alpha}_{DC(i,\cdot)}\omega , ~~\bar{p}_{DC,i}\right\} , \\
\tilde{\gamma}_{j} &= \min \left\{ \gamma_{j} + \delta_{\gamma,j}^{kl} - \boldsymbol{\alpha}_{\gamma(j,\cdot)}\omega , ~~\bar{\gamma}_{j}\right\} .
\end{align}
When such saturation occurs, we get different power flows than expected, and there may be overloading on the transmission lines, requiring further corrective action. However, our chance constraints ensure that the need for such additional corrective action occurs with probability smaller than $\epsilon$.
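The saturation logic can be sketched as a simple clipping of the affine response at the device limits (one HVDC set-point, with illustrative numbers; the lower limit is handled analogously):

```python
import numpy as np

def hvdc_setpoint(p_dc, delta, alpha_row, omega, p_min, p_max):
    """Affine corrective response, saturated at the HVDC limits."""
    raw = p_dc + delta - alpha_row @ omega
    return min(max(raw, p_min), p_max)

# Illustrative numbers: one HVDC link with limits of +/- 500 MW.
alpha_row = np.array([0.4, -0.2])

# Within the limits, the affine policy is followed exactly:
inside = hvdc_setpoint(300.0, 20.0, alpha_row, np.array([50.0, 0.0]),
                       -500.0, 500.0)
assert abs(inside - 300.0) < 1e-9

# A large fluctuation drives the raw response to 640 MW, which
# saturates at the upper bound:
sat = hvdc_setpoint(300.0, 20.0, alpha_row, np.array([-800.0, 0.0]),
                    -500.0, 500.0)
print(sat)
```
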
\subsubsection{Power Flow Constraints}
The power flow constraints can be expressed as
\begin{align}
& \mathbb{P}\left[ p_{ij}^{\delta_{kl}} +\! LF_{ij}^{kl} p_{kl}^{\delta_{kl}} +\! \left(\mathbf{A}_{(ij,\cdot)}\! + LF_{ij}^{kl}\mathbf{A}_{(kl,\cdot)}\right) \omega \leq \bar{p}_{ij}\right]\! \geq\! 1\!-\!\epsilon_l, \nonumber\\%\label{chance_line_min} \\
& \mathbb{P}\left[ p_{ij}^{\delta_{kl}} +\! LF_{ij}^{kl} p_{kl}^{\delta_{kl}} +\! \left(\mathbf{A}_{(ij,\cdot)}\! + LF_{ij}^{kl}\mathbf{A}_{(kl,\cdot)}\right) \omega \geq \underline{p}_{ij} \right]\! \geq\! 1\!-\!\epsilon_l, \nonumber\\%\label{chance_line_max} \\
& \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \forall_{ij \in \mathcal{L}, ~ kl \in \{0,\mathcal{L}\}} \label{chance_line_min}
\end{align}
where $\bar{p}_{ij}, \underline{p}_{ij}$ are the upper and lower limits on the power flow, enforced as chance constraints with acceptable violation probability $\epsilon_l$. Index $kl=0$ refers to the pre-contingency constraint with $LF_{ij}^{kl}=0$.
Different from the limits on generators, HVDC and PSTs, the transmission limits are soft constraints from a physical perspective and might experience violations. The violation probability $\epsilon_l$ is thus interpreted as the probability of observing an overload or an N-1 violation. Such transmission line violations can either be accepted (e.g., if the overload is small or related to an unlikely outage) or handled through additional measures in real time.
Note that the problem does not include any consideration of the intermediate post-contingency, pre-corrective control system state, as described in e.g. \cite{capitanescu2007}. Constraints to ensure feasibility of this state can however be added without any conceptual change to the method.
\subsection{Reformulation of Chance Constraints}
The chance constraints in Eq.~\eqref{chance_gen_min}-\eqref{chance_line_min} need to be reformulated into tractable constraints. Here, we choose to use an analytic reformulation based on the assumption that the forecast uncertainty vector $\omega$ follows a normal distribution, similar to the one used in \cite{roald2013, bienstock2014}.
This choice of distribution is motivated by the fact that the chance constraints are marginal and one dimensional (e.g., one constraint for each line flow), each depending on a weighted sum of the high dimensional random vector $\omega$. With the projection of a high dimensional vector onto a one dimensional constraint, arguments similar to the central limit theorem can be invoked \cite{dasgupta2006}, even if the individual uncertainty sources $\omega$ are not normally distributed.
\subsubsection{Reformulated Constraints}
With the normality assumption, Eqs.~\eqref{chance_gen_min}-\eqref{chance_line_min} can be reformulated into the following constraints:
\begin{align}
&\alpha_{G,i}\Phi^{-1}(1-\epsilon_G)\sigma_{\Omega} \leq r_i^{+}, \label{lin_gen_max}\\
&-\alpha_{G,i}\Phi^{-1}(1-\epsilon_G)\sigma_{\Omega} \geq -r_i^{-}, \label{lin_gen_min}
\\
&p_{DC,i} + \delta_{DC,i}^{kl} + \Phi^{-1}(1-\epsilon)\parallel\Sigma_W^{1/2} \boldsymbol{\alpha}_{DC(i,\cdot)}^T \parallel_2 \leq \bar{p}_{DC,i} \label{SOC_dc_max}\\
&p_{DC,i} + \delta_{DC,i}^{kl} - \Phi^{-1}(1-\epsilon)\parallel\Sigma_W^{1/2} \boldsymbol{\alpha}_{DC(i,\cdot)}^T \parallel_2 \geq \underline{p}_{DC,i} \label{SOC_dc_min} \\
&\gamma_{j} + \delta_{\gamma,j}^{kl} + \Phi^{-1}(1-\epsilon)\parallel \Sigma_W^{1/2} \boldsymbol{\alpha}_{\gamma(j,\cdot)}^{T}\parallel_2 \leq \bar{\gamma}_{j} \label{SOC_pst_max}\\
&\gamma_{j} + \delta_{\gamma,j}^{kl} - \Phi^{-1}(1-\epsilon)\parallel \Sigma_W^{1/2} \boldsymbol{\alpha}_{\gamma(j,\cdot)}^{T}\parallel_2 \geq \underline{\gamma}_{j}\label{SOC_pst_min} \\
& p_{ij}^{\delta_{kl}} + LF_{ij}^{kl} p_{kl}^{\delta_{kl}} ~+ \label{SOC_line_max}\\
&\quad\quad\Phi^{-1}(1-\epsilon_l) \parallel \Sigma_W^{1/2} \left(\mathbf{A}_{(ij,\cdot)} + LF_{ij}^{kl}\mathbf{A}_{(kl,\cdot)}\right)^{T} \parallel_2 \leq \bar{p}_{ij} \nonumber\\
& p_{ij}^{\delta_{kl}} + LF_{ij}^{kl} p_{kl}^{\delta_{kl}} ~- \label{SOC_line_min}\\
&\quad\quad\Phi^{-1}(1-\epsilon_l) \parallel \Sigma_W^{1/2} \left(\mathbf{A}_{(ij,\cdot)} + LF_{ij}^{kl}\mathbf{A}_{(kl,\cdot)}\right)^{T} \parallel_2 \geq -\bar{p}_{ij} \nonumber
\end{align}
The generator constraints are linear since they only depend on the total power mismatch $\Omega$ with standard deviation $\sigma_{\Omega}$ \cite{li2015}. The remaining constraints \eqref{SOC_dc_max} - \eqref{SOC_line_min} are Second Order Cone (SOC) constraints, which are convex for $\epsilon\leq0.5$ \cite{bienstock2014}.
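The tightening term in the SOC constraints can be validated against an empirical quantile. The sketch below uses a random, illustrative covariance and constraint row, and checks that $\Phi^{-1}(1-\epsilon)\parallel\Sigma_W^{1/2} a^T\parallel_2$ matches the $(1-\epsilon)$ quantile of $a\,\omega$ for normally distributed $\omega$:

```python
import numpy as np
from statistics import NormalDist

eps = 0.05
rng = np.random.default_rng(2)

# Random illustrative covariance (made positive definite) and a
# constraint row `a`, standing in for one row of the matrix A:
L = rng.normal(size=(4, 4))
Sigma_W = L @ L.T + 0.1 * np.eye(4)
a = rng.normal(size=4)

# Analytic tightening term: Phi^{-1}(1-eps) * || Sigma_W^{1/2} a ||_2,
# using the Cholesky factor Sigma_W = Sigma_half @ Sigma_half.T.
Sigma_half = np.linalg.cholesky(Sigma_W)
tight = NormalDist().inv_cdf(1 - eps) * np.linalg.norm(Sigma_half.T @ a)

# Empirical (1-eps) quantile of a @ omega for omega ~ N(0, Sigma_W):
omega = rng.multivariate_normal(np.zeros(4), Sigma_W, size=200_000)
emp = np.quantile(omega @ a, 1 - eps)
print(tight, emp)   # the two values should nearly coincide
```
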
\subsubsection{Exploiting Symmetry of SOCs}
Each pair of upper and lower SOC constraints in \eqref{SOC_dc_max}-\eqref{SOC_line_min} can be reduced to a pair of linear constraints and a single SOC constraint by exploiting the symmetry of the normal distribution \cite{bienstock2014}:
\begin{align}
& p_{DC} + \delta_{DC}^{kl} + s_{DC} \leq p_{DC}^{max} \label{final_DC_max}\\
& p_{DC} + \delta_{DC}^{kl} - s_{DC} \geq p_{DC}^{min} \label{final_DC_min}\\
& s_{DC,i} \geq \Phi^{-1}(1-\epsilon)\parallel\Sigma_W^{1/2} \boldsymbol{\alpha}_{DC(i,\cdot)}^T \parallel_2, ~~\forall_{i\in \mathcal{H}} \label{final_DC_SOC}
\end{align}
\begin{align}
& \gamma + \delta_{\gamma}^{kl} + s_{\gamma} \leq \gamma^{max} \label{final_PST_max} \\
& \gamma + \delta_{\gamma}^{kl} - s_{\gamma} \geq \gamma^{min} \label{final_PST_min} \\
& s_{\gamma,j} \geq \Phi^{-1}(1-\epsilon)\parallel \Sigma_W^{1/2} \boldsymbol{\alpha}_{\gamma(j,\cdot)}^{T}\parallel_2, ~~~\forall_{j\in \mathcal{S}} \label{final_PST_SOC}\\
&p_{ij}^{\delta_{kl}} + LF_{ij}^{kl} p_{kl}^{\delta_{kl}} + s_{ij}^{kl} \leq p_{ij}^{max}, \quad\quad~~~ \forall_{ij \in \mathcal{L}, ~ kl \in \{0,\mathcal{L}\}} \label{final_line_max}\\
&p_{ij}^{\delta_{kl}} + LF_{ij}^{kl} p_{kl}^{\delta_{kl}} - s_{ij}^{kl} \geq -p_{ij}^{max}, \quad~~~~ \forall_{ij \in \mathcal{L}, ~ kl \in \{0,\mathcal{L}\}}\label{final_line_min}\\
& s_{ij}^{kl} \geq \Phi^{-1}(1-\epsilon_l) \parallel \Sigma_W^{1/2} \left(\mathbf{A}_{(ij,\cdot)} + LF_{ij}^{kl}\mathbf{A}_{(kl,\cdot)}\right)^{T} \parallel_2 \label{final_line_SOC}
\end{align}
The above reformulation cuts the number of SOC constraints in half and thus improves efficiency.
Note that the SOC terms \eqref{final_DC_SOC}, \eqref{final_PST_SOC}, \eqref{final_line_SOC} are always positive, and introduce a tightening of the original, deterministic constraints.
\subsubsection{Non-Normal Uncertainty} For cases where the assumption of a normal distribution is not justified, an analytical reformulation can still be obtained even when only limited knowledge of the distribution is available \cite{roaldArxiv, summers2014}. For example, a reformulation based on the Chebyshev inequality only requires knowledge of the mean and covariance of $\omega$. These reformulations result in SOC constraints similar to \eqref{final_DC_max}-\eqref{final_line_SOC},
and can be easily used within the suggested framework. However, the distributionally robust reformulations can be overly conservative \cite{roaldArxiv}, leading to unnecessarily high cost or infeasibility.
Instead, we suggest to use the normal assumption (as described above) combined with out-of-sample testing.
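Such an out-of-sample test can be as simple as the following sketch: a constraint is tightened under the normal assumption and then evaluated against samples from a non-normal (here uniform) distribution with matching second moments. With many independent sources, the empirical violation probability stays close to $\epsilon$, in line with the central-limit argument:

```python
import numpy as np
from statistics import NormalDist

eps = 0.05
rng = np.random.default_rng(3)
m = 40                                # many independent uncertainty sources

# Constraint a @ omega <= b, with b tightened under the normal assumption:
a = rng.normal(size=m)
sigma = np.full(m, 5.0)               # per-source standard deviations
b = NormalDist().inv_cdf(1 - eps) * np.linalg.norm(a * sigma)

# Out-of-sample test with uniform (non-normal) fluctuations that have the
# same per-source standard deviation: uniform on [-w, w] has std w/sqrt(3).
half_width = np.sqrt(3.0) * sigma
omega = rng.uniform(-half_width, half_width, size=(200_000, m))
emp_viol = (omega @ a > b).mean()
print(emp_viol)   # close to eps, in line with the central-limit argument
```
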
\section{Solution Algorithm}
\label{sec:implementation}
The full OPF problem with security and chance constraints in Section \ref{sec:CCOPF} is a Second Order Cone Program (SOCP), with the SOC constraints given by Eq.~\eqref{final_DC_SOC}, \eqref{final_PST_SOC} and \eqref{final_line_SOC}.
The problem has a large number of linear and SOC constraints, due to consideration of both the pre- and post-contingency situations.
Although the SOC constraints are convex, it has been observed in the literature \cite{bienstock2014} that attempting to solve the entire optimization problem at once using a non-linear SOCP solver can result in unacceptably long convergence times and numerical difficulties. The state-of-the-art method in the literature for solving such chance-constrained OPF problems is the sequential outer-approximation cutting-plane algorithm \cite{bienstock2014}.
In this algorithm, a relaxed version of the problem is solved by eliminating all the SOC constraints. Then the relaxation is successively tightened using a sequence of polyhedral outer approximations obtained by adding separating hyperplanes (cutting-planes) to the violated SOC constraints until the desired accuracy is reached. The success of the algorithm relies on exploiting three salient properties prevalent in the CC-OPF \cite{bienstock2014}: (i) only a small fraction of the non-linear chance constraints (SOCs) are active at optimality, and these correspond to critical/congested lines, (ii) each SOC term depends only on a limited number of decision variables, and (iii) linear programming solvers have better speed and stability compared to non-linear solvers.
However, there are some critical differences in the features of the CC-SCOPF considered in this paper that makes the cutting-plane algorithm unsuitable:
\emph{Feature 1:} Due to the SOC constraints for the HVDC and PSTs, which are often tight at optimality, as well as the security constraints for the lines, more constraints must be approximated through polyhedral constraints.
\emph{Feature 2:} Each SOC term, in particular each SOC term for the lines, depends on a large number of decision variables, since $\alpha_G,~\boldsymbol{\alpha}_{DC},~\boldsymbol{\alpha}_{\gamma}$ are potentially large matrices.
For those constraints, we found that the cutting-plane algorithm requires a large number of iterations to obtain a good polyhedral approximation and a feasible solution within acceptable tolerance.
This implies that the optimization problem needs to be solved many times, and that a significant amount of memory is required to store the large and increasing number of linear constraints.
\emph{Feature 3:} Due to security constraints, there is a very large number of SOCs present in the problem. Evaluating the SOC constraints at any given candidate solution is therefore time consuming. Since the cutting-plane algorithm requires a large number of iterations with an SOC evaluation in each iteration, the problem solves very slowly.
To overcome these difficulties and obtain an efficient implementation, we develop a sequential SOCP algorithm based on a constraint generation process customized to our problem.
As in \cite{bienstock2014}, we first solve a relaxed version of the problem involving only linear constraints. Instead of adding cutting-planes, we then add the full SOC terms for the most violated constraints. We continue solving a sequence of SOCPs until all constraints are satisfied, reaching the globally optimal solution of the original problem. The sequence in which the violated constraints are added has a significant impact on the solving time, and needs to be carefully chosen.
In the following, we describe the details of the algorithm and the reasoning behind it.\\
\emph{\underline{Preprocessing:} Solving the SCOPF Without Uncertainty}\\
As a pre-processing step, we solve the SCOPF without consideration of uncertainty. This allows us to obtain a fast, first estimate of the active constraints.\\
\emph{\underline{Step 1:} Solving the CC-OPF Without Security Constraints}
\begin{itemize}
\item[(a)] We first solve a base case problem consisting of the power balance and generator constraints \eqref{nodalbal}-\eqref{res_cont}, \eqref{lin_gen_max}, \eqref{lin_gen_min}, the full PST and HVDC constraints \eqref{final_DC_max} - \eqref{final_PST_SOC} as well as the linear pre-contingency line constraints \eqref{final_line_max}, \eqref{final_line_min}. Since most of the SOC constraints for HVDC and PST are tight and there are few of them (i.e., no additional SOC terms for the security constraints), adding the full SOC upfront eliminates unnecessary iterations.
In addition, the SOC terms belonging to the pre-contingency constraints violated in the SCOPF are added.
\item[(b)] The line SOC constraints for the base case, i.e., \eqref{final_line_SOC} with $kl = 0$ are then checked for violations, and the most violated ones are added to the problem with a warm start from the previous iteration. This process is repeated until all the base case constraints are satisfied.
\end{itemize}
\emph{\underline{Step 2:} Solving the Full CC-SCOPF With Security Constraints}\\
We check for violation of the security constraints and add them sequentially using warm start. This part of the algorithm runs in three phases:
\begin{itemize}
\item[(c)] In the first iteration, we add all post-contingency constraints that were active in the SCOPF without uncertainty, as these most likely will be active in the CC-SCOPF as well.
We include both the linear constraints \eqref{final_line_max}, \eqref{final_line_min} and the corresponding SOC constraint \eqref{final_line_SOC} to the problem.
\item[(d)] After the first iteration, we check for violation of only the linear security constraints \eqref{final_line_max}, \eqref{final_line_min}, which is much faster than evaluating the full SOC constraints.
However, since the SOC constraint implies a tightening of the linear constraint, a violation of the linear constraint almost always means that the corresponding SOC is violated as well.
For the most violated constraints, we therefore add both the linear constraints \eqref{final_line_max}, \eqref{final_line_min} and the corresponding SOC constraint \eqref{final_line_SOC} to the problem.
This process is repeated until all the linear security constraints are satisfied.
\item[(e)] We then check for the violated SOC terms for the line security constraints \eqref{final_line_SOC}, and add the most violated ones to the problem. As mentioned in \emph{Feature 3}, the number of post-contingency SOCs are rather large, and hence inefficient to evaluate. However, we observe that this stage of the algorithm only requires few iterations until all constraints are satisfied, since most violated security constraints were added in (d).\\
To reduce the computational time involved in each iteration of (d), we perform a pre-screening of the SOC terms based on the LODF matrix. Even though most $LF_{ij}^{kl}$ are non-zero, the majority are very small ($<10^{-3}$).
By evaluating the post-contingency SOC constraints only for those pairs $ij$ and $kl$ where $LF_{ij}^{kl}$ exceeds a certain threshold, the number of evaluations can be significantly reduced while maintaining acceptable accuracy.
\end{itemize}
Finally, we check whether the current solution violates any of the pre-contingency constraints. If it does, we restart from (b). Otherwise, the algorithm terminates, and a globally optimal solution which satisfies all the constraints of the full problem has been found. In all encountered cases, only one pass of the algorithm was required to find an optimal solution, without needing to return to (b).
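The overall constraint-generation pattern of the algorithm can be illustrated with a toy stand-in for the SOCP, where the solver only sees the constraints added so far and the most violated constraints are added in batches:

```python
# Toy 1-D stand-in for the sequential SOCP: minimize x subject to
# x >= a_i for a set of bounds a_i. The "solver" only sees the
# constraints added so far; the most violated ones are added in batches.
def solve_relaxed(active):
    """Stand-in for the SOCP solver: optimum of the relaxed problem."""
    return max(active) if active else 0.0

def constraint_generation(all_bounds, batch=3, tol=1e-9):
    active, iterations = [], 0
    while True:
        iterations += 1
        x = solve_relaxed(active)
        # Identify violated constraints, most violated first:
        violated = sorted((a for a in all_bounds if a > x + tol),
                          reverse=True)
        if not violated:
            return x, iterations      # all constraints satisfied
        active.extend(violated[:batch])

bounds = [0.2, 1.5, 3.7, 0.9, 3.6, 2.8, 3.65]
x_opt, iters = constraint_generation(bounds)
print(x_opt, iters)
```

As in the full algorithm, only a small subset of the constraints ever enters the solver, and the loop terminates once the relaxed optimum satisfies all remaining constraints.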
\section{Case study - IEEE 118 Bus System}
\label{sec:Case}
In this section, we analyze the benefits of corrective control based on a case study for the IEEE 118 bus system. The scalability of the proposed method and the sequential SOCP algorithm is demonstrated in the next section.
\subsection{IEEE 118 Bus Test System}
We use the IEEE 118 bus test system as defined in \cite{118busdata}, with the following modifications. Both load and maximum generation capacity are scaled by a factor of 1.25, and the minimum generation capacity is set to zero.
The system loads are interpreted as a mix between load and renewable energy sources connected at a lower voltage level. Instead of adding uncertain in-feeds to the system, we assume that all 99 loads fluctuate around their forecasted consumption. The standard deviation $\sigma$ of each load is equal to 10\% of the forecasted consumption. As shown in Fig. \ref{system}, the system is divided into three zones. We assume that fluctuations within a zone are correlated with $\rho=0.3$, and that fluctuations in different zones are uncorrelated.
For the chance constraints, we apply $\epsilon_l=\epsilon=0.01$ for transmission line, HVDC and PST constraints.
For the generator constraints, we use $\epsilon_G=0.001$, which corresponds to the acceptable probability of having insufficient reserves in the Swiss power system.
\begin{figure}
\includegraphics[width=0.8\columnwidth]{system_v2.pdf}
\centering
\caption{IEEE 118 bus system with 3 Zones. The lines with PSTs are marked in red, while HVDC connections are drawn in green.}
\label{system}
\end{figure}
We assume that there are 3 PSTs and 3 HVDC connections installed in the system, as shown in Fig. \ref{system}. Details for each HVDC connection are listed in Table \ref{table_HVDC_PST}. The PSTs are installed on lines 41, 167 and 54, with maximum tap positions $\pm 30^{\circ}$. The upper bounds on the post-contingency control for HVDC and PSTs are set to $\overline{\delta}_{DC}^{ij}=0.25~\overline{p}_{DC}$ and $\overline{\delta}_{\gamma}^{ij}=0.25~\overline{\gamma}$.
\begin{table}
\caption{IEEE 118 Bus System - HVDC Connections}
\label{table_HVDC_PST}
\centering
\begin{tabular}{|l|l|l|l|}
\hline
HVDC connection & HVDC 1 & HVDC 2 & HVDC 3\\
\hline
From - To Bus & 38 - 65 & 104 - 62 & 12 - 17\\
Replaced Line & 96 & - & 20\\
Capacity [MW] & 500 & 200 & 175\\
\hline
\end{tabular}
\end{table}
The total required down-reserves are set to $R^-=\Phi^{-1}(1-\epsilon_G)\sigma_\Omega$, which ensures that the total fluctuation $\Omega$ will be covered with probability $1-\epsilon_G$. The total amount of up-reserves $R^+$ is required to cover the same fluctuation or the maximum generation outage, whichever is larger: $R^+=\max\{\overline{p}_G, R^-\}$. This reserve dimensioning ensures that a similar amount of reserves is procured for the chance-constrained and deterministic formulations we will compare.
To respect generator ramping limits, the upper bound on the provision of reserves from each generator is set to 20\% of its total generation capacity, i.e., $\overline{r}^+=\overline{r}^-=0.2\overline{p}_G$.
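The reserve dimensioning rule can be written out directly; the numbers below are hypothetical and only illustrate $R^-=\Phi^{-1}(1-\epsilon_G)\sigma_\Omega$ and $R^+=\max\{\overline{p}_G, R^-\}$.

```python
from statistics import NormalDist

def dimension_reserves(sigma_omega, eps_G, gen_capacities):
    """Down-reserves cover the total fluctuation with probability
    1 - eps_G; up-reserves must additionally cover the largest single
    generation outage, whichever requirement is larger."""
    R_minus = NormalDist().inv_cdf(1.0 - eps_G) * sigma_omega
    R_plus = max(max(gen_capacities), R_minus)
    return R_plus, R_minus

# Hypothetical system: total fluctuation std 100 MW, three generators.
R_plus, R_minus = dimension_reserves(100.0, 0.001, [500.0, 200.0, 175.0])
print(round(R_minus, 1), round(R_plus, 1))  # 309.0 500.0
```

With $\epsilon_G=0.001$, the quantile $\Phi^{-1}(0.999)\approx3.09$, so the down-reserve is roughly three standard deviations of the total fluctuation.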
To analyze the performance of the method, we use a Monte Carlo simulation. We sample the uncertainty realization $\omega$, calculate the power flows and record the number of constraint violations. To test the algorithm for different types of uncertainty, we run two simulations with 2000 samples of $\omega$ each:
First, we use uncertainty data which correspond to our assumptions, and draw samples from a multivariate normal distribution with mean $\mu=0$ and covariance matrix $\Sigma_W$.
Second, we run out-of-sample tests based on 1 year of real system data from the Austrian Power Grid (APG).
The deviations are defined based on the difference between the so-called DACF (Day-Ahead Congestion Forecast) and the snapshot (the real-time power injections) for all hours and buses with available data (8492 data points for 28 buses). Splitting the data into four three-month periods, we obtain 2000 data samples for each of the 99 buses. The samples are then scaled to fit the assumed covariance matrix $\Sigma_W$ and mean $\mu=0$. This corresponds to the case where we have a good estimate of the mean and covariance, but do not know the full underlying distribution.
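This moment-matching rescaling can be sketched as follows (center, whiten with the empirical covariance, then recolor with $\Sigma_W$); the skewed synthetic data below stand in for the APG samples.

```python
import numpy as np

def rescale_samples(samples, target_mean, target_cov):
    """Affinely transform empirical samples so that their sample mean and
    covariance match the assumed mu and Sigma_W: center, whiten with the
    empirical covariance, then recolor with the target covariance. The
    shape of the underlying distribution is preserved."""
    centered = samples - samples.mean(axis=0)
    L_emp = np.linalg.cholesky(np.cov(centered, rowvar=False, bias=True))
    L_tgt = np.linalg.cholesky(target_cov)
    return centered @ np.linalg.inv(L_emp).T @ L_tgt.T + target_mean

rng = np.random.default_rng(0)
raw = rng.exponential(scale=2.0, size=(2000, 2))   # skewed "unknown" data
sigma_w = np.array([[4.0, 1.2], [1.2, 9.0]])
scaled = rescale_samples(raw, np.zeros(2), sigma_w)
print(np.allclose(np.cov(scaled, rowvar=False, bias=True), sigma_w))  # True
```

Because the transform is affine, the first two moments match $\mu$ and $\Sigma_W$ exactly, while higher moments (e.g., the skewness of the real data) are retained for the out-of-sample test.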
We implement the sequential SOCP algorithm in Julia with JuMP \cite{jump}, and solve the problem using MOSEK \cite{mosek}.
With this set-up, the solution is obtained within a few minutes on a laptop.
\subsection{Numerical Results}
To demonstrate the benefits of corrective control, we compare five different solutions:
\begin{itemize}
\item[(a)] The \emph{OPF} does not consider uncertainty (i.e., $\Sigma_W=0$) or constraints related to outages.
\item[(b)] The \emph{SCOPF} does not consider uncertainty (i.e., $\Sigma_W=0$) or post-contingency corrective control ($\delta_{DC}^{ij}=\delta_{\gamma}^{ij}=0$).
\item[(c)] The \emph{post-contingency corrective SCOPF} does not consider uncertainty (i.e., $\Sigma_W=0$).
\item[(d)] The \emph{post-contingency corrective CC-SCOPF} does not consider corrective control in reaction to fluctuations (i.e., $\alpha_{DC}=\alpha_{\gamma}=0$).
\item[(e)] The \emph{full CC-SCOPF} considers corrective control for both fluctuations and contingencies.
\end{itemize}
We first compare the cost of the generation dispatch and reserves (as obtained directly from the optimization), and analyze the differences.
Second, we look into the empirical number of violations observed in the two Monte Carlo simulations for the normally distributed and out-of-sample data.
Finally, we investigate the impact of corrective control by comparing the solution of (d) and (e) for different acceptable violation probabilities $\epsilon$ and varying uncertainty levels $\sigma$.
\subsubsection{Comparison of operation cost}
The costs of the problems (a)-(e) are listed in Table \ref{tableII} and are shown graphically in Fig. \ref{cost}, where the costs are normalized by the cost of the standard OPF.
The cost increase is analyzed with respect to the following criteria \cite{roald2013}:
\emph{Cost of Security} is defined as the cost increase between the OPF and the SCOPF, due to the cost of enforcing N-1 constraints.
\emph{Cost of Uncertainty} is defined as the cost increase between the SCOPF and the CC-SCOPF, due to the cost of enforcing chance constraints instead of deterministic constraints.
We observe that the cost of security can be reduced by 0.9\% from (b) to (c) by introducing post-contingency corrective control. Similarly, the cost of uncertainty is reduced by 0.35\% from (d) to (e) by introducing corrective control for uncertainty.
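These two percentages follow directly from the dispatch costs reported in Table \ref{tableII}, normalized by the OPF cost as in Fig. \ref{cost}:

```python
# Dispatch costs from Table II, in $:
cost = {"a": 84924, "b": 87880, "c": 87108, "d": 88713, "e": 88418}

base = cost["a"]  # normalization by the OPF cost
saving_security = (cost["b"] - cost["c"]) / base * 100     # (b) -> (c)
saving_uncertainty = (cost["d"] - cost["e"]) / base * 100  # (d) -> (e)
print(round(saving_security, 2), round(saving_uncertainty, 2))  # 0.91 0.35
```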
\begin{figure}
\includegraphics[width=0.98\columnwidth]{Cost_eps099.eps}
\centering
\caption{Breakdown of the cost for the five OPF formulations (a) - (e). The costs are normalized by the cost of the OPF (a).}
\label{cost}
\end{figure}
\begin{figure}
\includegraphics[width=0.98\columnwidth]{HVDC_LineFlow_eps099_v2.eps}
\centering
\caption{Power Flow on Line 119 after outage of Line 126, plotted against the set-point of HVDC 3 for the CC-SCOPF without (left) and with (right) corrective control for uncertainty. The red marker shows the forecasted operating point, the blue points are the realized power flows obtained through the Monte Carlo simulation and the black lines show the line and HVDC limits.}
\label{lineflow}
\end{figure}
\begin{figure}
\includegraphics[width=0.95\columnwidth]{aDC.eps}
\includegraphics[width=0.95\columnwidth]{aPST.eps}
\centering
\caption{Values of the corrective control parameters $\boldsymbol{\alpha}_{DC}$ (top) and $\boldsymbol{\alpha}_{\gamma}$ (bottom).}
\label{DC_PST}
\end{figure}
The reduction in cost of uncertainty can be explained by the better ability to react to the fluctuations in the power injections.
In Fig. \ref{lineflow}, the power flow on line 119 after the outage of line 126 is plotted against the set-point of HVDC 3 for the CC-SCOPF problems without (left) and with (right) corrective control. The red marker shows the forecasted operating point, and the blue points correspond to the actual, realized conditions as simulated for 2000 samples of $\omega$. In the case without corrective control (left), the HVDC set-point remains constant for all uncertainty samples, while the set-point changes depending on the wind condition with corrective control (right).
The post-contingency constraint on line 119 is one of the active constraints in the CC-SCOPF problems (d) and (e), and it is clearly seen how the line flow limit is violated in some cases.
However, we also observe that by introducing the corrective control of HVDC and PSTs, it is possible to reduce the standard deviation of the line flow, in this case from 35.2 MW to 19.7 MW. With reduced variance, a higher nominal power flow can be accepted, which leads to a reduction in cost.
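The variance-reduction mechanism can be illustrated with a one-dimensional toy model (all sensitivities below are hypothetical, not actual PTDF values of the test case): the line flow responds to a fluctuation $\omega$ through a fixed sensitivity, and a corrective HVDC response $\alpha\omega$ adds a second, partially cancelling term.

```python
import numpy as np

rng = np.random.default_rng(7)
omega = rng.normal(0.0, 50.0, size=2000)   # injection fluctuations [MW]
a, b = 0.7, -0.5    # hypothetical flow sensitivities to omega and to HVDC
alpha = 0.6         # corrective response parameter (a decision variable)

std_fixed = np.std(a * omega)               # HVDC set-point held constant
std_corr = np.std((a + b * alpha) * omega)  # HVDC responds as alpha*omega
print(std_corr < std_fixed)  # True: corrective control shrinks the spread
```

Here the effective sensitivity drops from $0.7$ to $0.4$, so the flow standard deviation shrinks proportionally, and the nominal flow can be pushed closer to the line limit.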
The response of the HVDC and PSTs to uncertainty is determined by the corrective control parameters $\boldsymbol{\alpha}_{DC},~\boldsymbol{\alpha}_{\gamma}$. The values of those variables as obtained from (e) are shown in Fig. \ref{DC_PST}. We observe that the PST and HVDC react strongly to fluctuations that are geographically close to where they are located, and have close to zero entries for most other fluctuations.
\subsubsection{Number of Violations and Out-of-Sample Testing}
We run two Monte Carlo simulations based on the normally distributed and APG data sets, and compute the number of constraint violations.
To investigate how well our solutions perform in the out-of-sample test with real APG data, we first look at the empirical violation probability $\epsilon_{emp}$ for each separate constraint and compare it to the accepted violation probability $\epsilon_l$.
Fig. \ref{ViolPerLine} shows the empirical violation probability per constraint for the full CC-SCOPF (e), with normal (green bar) and APG (yellow bar) samples. Only the 19 constraints with $\epsilon_{emp}>0$ are shown.
We observe that there are four tight constraints (3, 7, 8 and 10) for which $\epsilon_{emp}\approx0.01$ with normally distributed samples.
For these constraints, the APG samples lead to $\epsilon_{emp}>0.01$, indicating that the chance constraint is violated. However, the empirical violation probability remains below 2\% for all constraints, and below the acceptable 1\% value for the majority of constraints. While the empirical violation probability will vary depending on the uncertainty data, this result indicates that a chance constraint approach based on the normal distribution still significantly reduces the risk of violations.
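The per-constraint statistic $\epsilon_{emp}$ is simply the fraction of Monte Carlo samples that violate a given limit; a minimal sketch (the flows and limits below are synthetic, not test-case data):

```python
import numpy as np

def empirical_violation_prob(flows, limits):
    """Fraction of Monte Carlo samples in which the flow magnitude
    exceeds the corresponding line limit, computed per constraint."""
    return (np.abs(flows) > limits).mean(axis=0)

rng = np.random.default_rng(42)
limits = np.array([100.0, 80.0])
# 2000 samples for two monitored lines: line 0 tight, line 1 slack.
flows = rng.normal(loc=[77.0, 20.0], scale=[10.0, 10.0], size=(2000, 2))
eps_emp = empirical_violation_prob(flows, limits)
print(eps_emp)  # line 0 close to the 1% level, line 1 essentially zero
```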
In a second investigation, we calculate the joint violation probability, i.e., the probability that a sample will exhibit a violation of \emph{any} constraint.
Since each chance constraint in our problem is enforced individually with a violation probability of $\epsilon$, and the uncertainty realizations that lead to a violation differ between constraints, the joint violation probability might be significantly higher than $\epsilon$.
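A toy Monte Carlo calculation makes this concrete: if, purely for illustration, $m$ constraints were violated independently with probability $\epsilon$ each, the joint violation probability would approach $1-(1-\epsilon)^m \gg \epsilon$. (The actual constraints are correlated, so this independence assumption is only an illustrative extreme.)

```python
import random

random.seed(1)
eps, m, n = 0.01, 10, 200_000
# Count samples in which at least one of m "constraints" is violated.
joint = sum(any(random.random() < eps for _ in range(m))
            for _ in range(n)) / n
print(round(joint, 3), round(1 - (1 - eps) ** m, 3))  # both near 0.096
```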
\setlength{\extrarowheight}{2pt}
\begin{table}
\caption{Cost and Number of Violations for (a) - (e)}
\label{tableII}
\centering
\begin{tabular}{l c c c c c c }
& (a) & (b) & (c) & (d) & (e)\\[2pt]
\hline
Cost [\$] & 84924 & 87880 & 87108 & 88713 & 88418\\
\hline
& & & & & \\[-2pt]
Violations & (a) & (b) & (c) & (d) & (e)\\[+2pt]
\hline
Normal & 2000 & 1729 & 1712 & 98 & 87 \\[-2pt]
Samples & (100\%) & (86.5\%) & (85.6\%) & (4.9\%) & (4.4\%) \\[+2pt]
\hline
APG & 2000 & 1697 & 1655 & 114 & 95 \\[-2pt]
Samples & (100\%) & (84.9\%) & (82.8\%) & (5.7\%) & (4.8\%) \\[+2pt]
\hline
\end{tabular}
\end{table}
\begin{figure}
\includegraphics[width=1.0\columnwidth]{ViolPerLine_v2.eps}
\centering
\caption{Empirical violation probability $\epsilon_{emp}$ per constraint with normal samples (green) and APG samples (yellow) for the 19 violated constraints, based on the solution for the full CC-SCOPF (e). The black dotted line corresponds to $\epsilon_{emp}=\epsilon_l=0.01$.}
\label{ViolPerLine}
\end{figure}
In Table \ref{tableII}, the number of Monte Carlo samples which lead to either pre- or post-contingency constraint violations are listed for both the normally distributed and the APG samples.
The OPF (a) leads to violations in all scenarios, as it accounts for neither outages nor uncertainty. The two SCOPF formulations without (b) and with (c) corrective control have violations due to uncertainty in around 1700 of the samples. This corresponds to a joint violation probability of around 85\%, which is unacceptably high and clearly shows the need to account for uncertainty.
Both CC-SCOPF formulations (d) and (e) have violations in around 100 samples, indicating a joint violation probability of around 5\%. As expected, this is above the acceptable violation probability $\epsilon_l=0.01$ for each separate constraint. However, it is still in the acceptable range and shows that a CC-SCOPF based on separate chance constraints also effectively reduces the joint violation probability.
The Monte Carlo simulation based on APG data increases the number of violated scenarios by less than 1\% compared to the normally distributed samples for the CC-SCOPF cases (d) and (e), indicating that the method also performs well in out-of-sample tests.
We further conclude that the CC-SCOPF with corrective control (e) is able to provide a similarly low violation probability as the CC-SCOPF without corrective control (d), but at lower cost.
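The out-of-sample degradation quoted above follows directly from the violation counts in Table \ref{tableII}:

```python
# Joint violation counts out of 2000 samples, from Table II:
normal = {"d": 98, "e": 87}
apg    = {"d": 114, "e": 95}
increase = {k: (apg[k] - normal[k]) * 100 / 2000 for k in normal}
print(increase)  # {'d': 0.8, 'e': 0.4}: both below one percentage point
```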
\subsubsection{Influence of the confidence level} Table \ref{tableIII} shows the cost of the generation dispatch without (d) and with (e) corrective control for uncertainty, for different acceptable violation probabilities $\epsilon$ and different standard deviations (in \% of forecasted load). We observe that the possibility to react to uncertainty becomes increasingly important (i.e., the reduction in cost is larger) if we want to enforce smaller violation probabilities or if the level of uncertainty increases. With standard deviations of 14.25\%, we even observe that the CC-SCOPF with corrective control (e) is feasible, whereas the CC-SCOPF without (d) is not.
\setlength{\extrarowheight}{2pt}
\begin{table}
\caption{Cost of CC-SCOPF without (d) and with (e) corrective control for uncertainty, for different values of $\epsilon$ (top) and different standard deviations $\sigma$ as \% of forecasted load (bottom).}
\label{tableIII}
\centering
\begin{tabular}{l c c c c c c }
& $\epsilon\!=\!0.1$ & $\epsilon\!=\!0.05$ & $\epsilon\!=\!0.02$ & $\epsilon\!=\!0.01$ & $\epsilon\!=\!0.001$\\
\hline
(d) & 87898 & 88136 & 88466 & 88713 & 89483\\
(e) & 87720 & 87941 & 88201 & 88418 & 89070 \\
\hline
\\
& $\sigma\!=\!5\%$ & $\sigma\!=\!7.5\%$ & $\sigma\!=\!10\%$ & $\sigma\!=\!12.5\%$ & $\sigma\!=\!14.25\%$\\
\hline
(d) & 84171 & 86334 & 88136 & 91513 & Infeasible\\
(e) & 83991 & 86119 & 87941 & 91022 & 93063\\
\hline
\end{tabular}
\end{table}
\section{Case study - Scalability}
\label{sec:Polish}
In this case study, we discuss the scalability of the CC-SCOPF and the proposed solution algorithm based on the run times for three test cases: the IEEE 118 bus system, the IEEE 300 bus system, and the Polish test case with 2383 buses.
\subsection{Test System Data}
\subsubsection{IEEE 118 bus test system} For the IEEE 118 bus system, we use the same data as in the previous case study.
\subsubsection{IEEE 300 bus test system}
We use a modified version of the IEEE 300 bus test system found in the NICTA Energy System Test Archive \cite{coffrin2014}. To obtain a feasible SCOPF case, we increase the line limits by a factor of 5.0 and the generation limits by a factor of 2.0.
All loads above 50 MW, corresponding to 109 loads and 68\% of the system load, are assumed to be uncertain. Their standard deviations are set to 5\% of the forecasted consumption, and the correlation to zero.
We assume that there are 3 HVDC connections installed in the system, with corresponding system data listed in Table \ref{table_HVDC_PST_300}. The PSTs are installed on lines 91, 140 and 174, with maximum tap positions $\pm 30^{\circ}$.
\begin{table}
\caption{IEEE 300 Bus System - HVDC Connections}
\label{table_HVDC_PST_300}
\centering
\begin{tabular}{|l|l|l|l|}
\hline
HVDC connection & HVDC 1 & HVDC 2 & HVDC 3\\
\hline
From - To Bus & 68 - 198 & 8 - 18 & 126 - 145\\
Replaced Line & - & - & -\\
Capacity [MW] & 900 & 600 & 600\\
\hline
\end{tabular}
\end{table}
\subsubsection{Polish Test System}
We base our case study on the Polish Winter Peak test case with 2383 buses as provided with Matpower 5.1 \cite{zimmermann2011}. To obtain a feasible case with active transmission constraints, the maximum generation capacity is scaled by a factor of 2.0 and the transmission line limits by a factor of 2.5.
All loads above 25 MW, corresponding to 157 loads and 29\% of the system load, are assumed to be uncertain. Their standard deviations are set to 20\% of the forecasted consumption, and the correlation between loads to zero.
We assume that there are 3 HVDC connections installed in the system, with corresponding system data listed in Table \ref{table_HVDC_PST_Polish}. The PSTs are installed on lines 130, 240 and 1381, with maximum tap positions $\pm 30^{\circ}$.
\begin{table}
\caption{Polish Test Case - HVDC Connections}
\label{table_HVDC_PST_Polish}
\centering
\begin{tabular}{|l|l|l|l|}
\hline
HVDC connection & HVDC 1 & HVDC 2 & HVDC 3\\
\hline
From - To Bus & 32 - 18 & 184 - 105 & 67 - 138\\
Replaced Line & - & - & 169\\
Capacity [MW] & 500 & 500 & 1000\\
\hline
\end{tabular}
\end{table}
For both of the above test systems, we choose similar settings as for the IEEE 118 bus test case in the previous section. We apply $\epsilon_l=\epsilon=0.01$ for transmission line, HVDC and PST constraints and use $\epsilon_G=0.001$ for generation constraints. We determine the total required reserve capacities $R^-,~R^+$ and the reserve provision bounds $\overline{r}^+,~\overline{r}^-$ as described above. We do not include post-contingency corrective control in either of the test cases, but solve the CC-SCOPF with corrective control for uncertainties.
The sequential SOCP algorithm is implemented in Julia with JuMP \cite{jump}, and solved
using Ipopt \cite{ipopt}.
\subsection{Results}
The size of each test case is listed in Table \ref{table_size}.
Due to the large number of security constraints, even the IEEE 118 bus test case has more SOC constraints than the Polish test case solved in \cite{bienstock2014}. The problem sizes for the IEEE 300 bus and Polish grid test cases are one and two orders of magnitude larger than the IEEE 118 bus system, as measured in the number of constraints, with the Polish test case featuring more than 8 million variables and 8 million SOC constraints.
Table \ref{table_times} shows the run times of the algorithm based on the current implementation. We have included both the total solution time, as well as an overview of the number of SOC evaluations and the total time spent evaluating the SOC constraints.
We observe that a significant part of the solution time is spent checking the SOC constraints. The pre-screening of the security constraints based on the LODF matrix reduces the number of security constraints that we need to check by a factor of 3. However, for the largest Polish test case, we still need to evaluate close to 1.4 million SOC constraints.
By using the sequential SOCP algorithm, we are able to keep the number of SOC evaluations very low, with only 2-3 evaluations.
While the cutting plane algorithm \cite{bienstock2014} would solve the problem faster and more reliably in each iteration, it would require a much larger number of SOC constraint evaluations, leading to prohibitively large run times. However, the time required for the SOC check can be significantly reduced by more efficient coding and parallelization, which is left as a topic for future work. This would improve the solution times for both our solution algorithm and the cutting plane algorithm in \cite{bienstock2014}.
\begin{table}
\caption{Size of the CC-SCOPF for the different test cases}
\label{table_size}
\centering
\begin{tabular}{l c c c}
& IEEE 118 & IEEE 300 & Polish\\[+1pt]
\hline
& & & \\[-8pt]
Variables & $\sim$54\,000 & $\sim$173\,000 & $>$8 mill.\\
Linear Constraints & $\sim$70\,000 & $\sim$343\,000 & $>$16 mill.\\
SOC Constraints & $\sim$34\,000 & $\sim$169\,000 & $>$8 mill.\\[+1pt]
\hline
\end{tabular}
\end{table}
\begin{table}
\caption{Run times for CC-SCOPF for the different test cases}
\label{table_times}
\centering
\begin{tabular}{l l l l}
& IEEE 118 & IEEE 300 & Polish\\[+1pt]
\hline
& & & \\[-8pt]
Total Run Time & 2 min 15 s & 4 min 44 s & 4 h 20 min\\
Number of SOC Evaluations & 2 & 3 & 2\\
Time per SOC Evaluation & 8 s & 27 s & 1 h 36 min\\
\hline
\end{tabular}
\end{table}
\section{Conclusions}
\label{sec:Conclusion}
In this paper, we have proposed a framework for corrective control which involves corrective actions in response to both contingencies and different uncertainty realizations. \\
The combined corrective control was incorporated in a CC-SCOPF with security and chance constraints, to ensure that the probability of constraint violations does not exceed a desired level. The chance constraints were reformulated using an analytical approach, which leads to an optimization problem with SOC constraints. To be able to handle the problem computationally, we developed a sequential SOCP algorithm.\\
With this solution algorithm, we are able to solve the IEEE 118 bus case including contingency constraints and 99 uncertain loads within a few minutes on a laptop. The solutions obtained for the IEEE 118 bus system show that the use of corrective control actions in reaction to uncertainty reduces operational cost, without reducing the level of security in the system.
The cost reduction is more significant for cases where we want to achieve a low violation probability, or where the level of uncertainty is higher.
We also demonstrate the scalability of the method by presenting results for the IEEE 300 bus test system and the Polish grid with 2383 buses. One main bottleneck of the current implementation is the evaluation of the SOC terms, but we believe a better implementation and parallelization will allow for significant speed up of this part.
In the case study, we observed that the PSTs and HVDC react similarly to fluctuations that are located close to each other, implying that geographically close fluctuations could be aggregated.
With this type of aggregation, coupled with the fact that even large transmission systems have only a few active constraints, we believe that the proposed CC-SCOPF is scalable to even larger systems with hundreds of uncertain variables. In addition to further scalability tests and improvements of the current implementation, future work will explore the extension to a full AC power flow, with approximate chance constraints based on a linearization of the power flow around the forecasted operating point \cite{qu15}.
\section{Acknowledgements}
The research work described in this paper has been partially carried out within the scope of the project ``Innovative tools for future coordinated and stable operation of the pan-European electricity transmission system (UMBRELLA)'', supported under the 7th Framework Programme of the European Union, grant agreement 282775. We thank our partners for useful discussions and feedback, and Austrian Power Grid for providing historical data.
\bibliographystyle{IEEEtran}
In this section, we review two alternative learning rules that tackle the ML problem \eqref{eq:ml} by leveraging $K$ independent samples from the hidden neurons. We also present a performance comparison on the MNIST-DVS classification task studied in Sec.~\ref{sec:experiments}.
\subsection{{\MBSNN}: Mini-Batch Online Learning for SNNs}
The first scheme is a direct mini-batch extension of the online learning rule for SNNs developed in \cite{brea2013matching, jimenez2014stochastic, jang19:spm}, which we term {\MBSNN}. The learning algorithm for SNNs introduced in \cite{brea2013matching, jimenez2014stochastic, jang19:spm} aims at minimizing the upper bound of the log-loss in \eqref{eq:general-elbo} obtained via Jensen's inequality as
\begin{align} \label{eq:elbo-snn}
- \log p_{\bm{\theta}}({\bm x}_{\leq T}) &\leq -\mathbb{E}_{ p_{\bm{\theta}^\text{H}}({\bm h}_{\leq T}||{\bm x}_{\leq T})} \Big[ \log \frac{p_{\bm{\theta}}({\bm x}_{\leq T}, {\bm h}_{\leq T})}{p_{\bm{\theta}^\text{H}}({\bm h}_{\leq T}||{\bm x}_{\leq T})} \Big] \cr
&= \mathbb{E}_{p_{\bm{\theta}^\text{H}}({\bm h}_{\leq T}||{\bm x}_{\leq T})}\Big[ \underbrace{ \sum_{t=1}^T \sum_{i \in \set{X}} \ell\big( x_{i,t}, \sigma(u_{i,t})\big)}_{=~ -v_{\bm{\theta}^\text{X},T}:~ \text{learning signal} } \Big] := \set{L}_{{\bm x}_{\leq T}}(\bm{\theta}),
\end{align}
where we have recalled the notation $v_{\bm{\theta}^\text{X},T} = \log p_{\bm{\theta}^\text{X}}({\bm x}_{\leq T}||{\bm h}_{\leq T-1})$ in \eqref{eq:gem-log-weight-ls}. The minimization of the bound $\set{L}_{{\bm x}_{\leq T}}(\bm{\theta})$ via SGD in the direction of a stochastic estimate of the gradient $-\grad_{\bm{\theta}} \set{L}_{{\bm x}_{\leq T}}(\bm{\theta})$ results in the update rule $\bm{\theta} \leftarrow \bm{\theta} - \eta \cdot \grad_{\bm{\theta}} \set{L}_{{\bm x}_{\leq T}}(\bm{\theta})$, with $\eta > 0$ being the learning rate. The gradient is estimated in \cite{jang19:spm} by using the REINFORCE gradient \cite{simeone2018brief}
\begin{align}
&\grad_{\bm{\theta}} \set{L}_{{\bm x}_{\leq T}}(\bm{\theta}) = \mathbb{E}_{p_{\bm{\theta}^\text{H}}({\bm h}_{\leq T}|| {\bm x}_{\leq T})}\Big[ -\grad_{\bm{\theta}}v_{\bm{\theta}^\text{X},T} - v_{\bm{\theta}^\text{X},T} \cdot \grad_{\bm{\theta}} \log p_{\bm{\theta}^\text{H}}({\bm h}_{\leq T}||{\bm x}_{\leq T}) \Big] \cr
&\quad = \mathbb{E}_{p_{\bm{\theta}^\text{H}}({\bm h}_{\leq T}|| {\bm x}_{\leq T})}\Big[ \sum_{t=1}^T \sum_{i \in \set{X}} \grad_{\bm{\theta}}\ell\big( x_{i,t},\sigma(u_{i,t})\big) + v_{\bm{\theta}^\text{X},T} \cdot \sum_{t=1}^T \sum_{i \in \set{H}} \grad_{\bm{\theta}} \ell\big(h_{i,t},\sigma(u_{i,t})\big) \Big].
\end{align}
An MC estimate of the gradient can be obtained by drawing a mini-batch of $K$ independent spiking signals ${\bm h}_{\leq T}^{1:K}=\{{\bm h}_{\leq T}^k\}_{k=1}^K$ of the hidden neurons via $K$ independent forward passes through the SNN, i.e., ${\bm h}_{\leq T}^k \sim p_{\bm{\theta}^\text{H}}({\bm h}_{\leq T}^k||{\bm x}_{\leq T})$ for $k=1,\ldots,K$, and by evaluating for visible neuron $i \in \set{X}$,
\begin{subequations} \label{eq:mc-grad-batch}
\begin{align} \label{eq:mc-grad-batch-vis}
\grad_{\bm{\theta}_i} \set{L}^K_{{\bm x}_{\leq T}}(\bm{\theta}) := \frac{1}{K} \sum_{k=1}^K \sum_{t=1}^T \grad_{\bm{\theta}_i} \ell\big( x_{i,t},\sigma(u_{i,t}^k)\big),
\end{align}
and for hidden neuron $i \in \set{H}$,
\begin{align} \label{eq:mc-grad-batch-hid}
&\grad_{\bm{\theta}_i} \set{L}^K_{{\bm x}_{\leq T}}(\bm{\theta}) := \frac{1}{K} \sum_{k=1}^K \bigg( \big( v_{\bm{\theta}^\text{X},T}^k - {\bm b}_{i,T}^k\big) \cdot \sum_{t=1}^T \grad_{\bm{\theta}_i} \ell\big( h_{i,t}^k,\sigma(u_{i,t}^k)\big) \bigg).
\end{align}
\end{subequations}
In \eqref{eq:mc-grad-batch}, we have introduced baseline signals ${\bm b}_{i,T}^k$, also known as control variates, for each $k$th sample of the mini-batch as a means to reduce the variance of the gradient estimator. Following the approach in \cite{peters2008reinforcement}, an optimized baseline can be evaluated as
\begin{align} \label{eq:mb-baseline}
{\bm b}_{i,T}^k = \frac{\mathbb{E}\Big[ v_{\bm{\theta}^\text{X},T}^k \cdot \Big(\sum_{t=1}^T \grad_{\bm{\theta}_i} \ell\big( h_{i,t}^k,\sigma(u_{i,t}^k)\big)\Big)^2 \Big]}{ \mathbb{E} \Big[ \Big(\sum_{t=1}^T \grad_{\bm{\theta}_i} \ell\big( h_{i,t}^k,\sigma(u_{i,t}^k)\big)\Big)^2 \Big]}.
\end{align}
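In practice, the expectations in the optimized baseline are replaced by sample means; a minimal sketch, in which the learning signals $v$ and summed eligibilities $S$ are synthetic draws rather than outputs of an actual SNN:

```python
import random

random.seed(0)
K = 1000
v = [random.gauss(0.0, 1.0) for _ in range(K)]   # learning signals
S = [random.gauss(0.0, 1.0) for _ in range(K)]   # summed eligibility traces

# Optimized baseline b = E[v * S^2] / E[S^2], estimated by sample means.
b = sum(vi * si * si for vi, si in zip(v, S)) / sum(si * si for si in S)

# b minimizes the empirical second moment of the estimator (v - b) * S:
m_plain = sum((vi * si) ** 2 for vi, si in zip(v, S)) / K
m_base = sum(((vi - b) * si) ** 2 for vi, si in zip(v, S)) / K
print(m_base <= m_plain)  # True
```

Since the second moment is a quadratic function of $b$, the ratio of the two sample means is exactly its minimizer, which is why subtracting the baseline can only shrink the estimator's magnitude on average.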
In order to obtain an online rule, the model parameters $\bm{\theta}$ are updated at each time $t$ based on the data ${\bm x}_{\leq t}$ via SGD by using the discounted version of the gradient estimator $\grad_{\bm{\theta}} \set{L}^K_{{\bm x}_{\leq T}}(\bm{\theta})$ in \eqref{eq:mc-grad-batch}, which is given as for visible neuron $i \in \set{X}$,
\begin{align*}
\grad_{\bm{\theta}_i} \set{L}_{{\bm x}_{\leq t}}^{K,\gamma}(\bm{\theta}) := \frac{1}{K} \sum_{k=1}^K \sum_{t'=0}^{t-1} \gamma^{t'} \grad_{\bm{\theta}_i} \ell\big( x_{i,t-t'}, \sigma(u_{i,t-t'}^k)\big),
\end{align*}
and for hidden neuron $i \in \set{H}$,
\begin{align*}
&\grad_{\bm{\theta}_i} \set{L}_{{\bm x}_{\leq t}}^{K,\gamma}(\bm{\theta}) := \frac{1}{K} \sum_{k=1}^K \bigg( \big( v_{\bm{\theta}^\text{X},t}^{k,\gamma} - {\bm b}_{i,t}^{k,\gamma}\big) \cdot \sum_{t'=0}^{t-1} \gamma^{t'} \grad_{\bm{\theta}_i} \ell\big( h_{i,t-t'}^k, \sigma(u_{i,t-t'}^k)\big) \bigg),
\end{align*}
where $\gamma \in (0,1)$ is a discount factor.
In a similar manner to {\GEMSNN}, the batch processing required for the learning signal $v_{\bm{\theta}^\text{X},T}^k$ and the baseline ${\bm b}_{i,T}^k$ can be addressed by computing the discounted values $v_{\bm{\theta}^\text{X},t}^{k,\gamma}$ from \eqref{eq:gem-log-weight} and the corresponding baseline ${\bm b}_{i,t}^{k,\gamma}$ using the temporal average operator. The resulting online learning rule, which we refer to as Mini-Batch online learning for SNNs ({\MBSNN}), updates the parameters $\bm{\theta}_i$ of each neuron $i$ in the direction of the MC estimate of the gradient $-\grad_{\bm{\theta}_i} \set{L}_{{\bm x}_{\leq t}}^{K,\gamma}(\bm{\theta})$, yielding the update rule at time $t$ as follows: for visible neuron $i \in \set{X}$,
\begin{subequations} \label{eq:mb-update}
\begin{align} \label{eq:mb-update-vis}
\Delta \bm{\theta}_i = \frac{1}{K} \sum_{k=1}^K \Big\langle \grad_{\bm{\theta}_i} \ell\big( x_{i,t},\sigma(u_{i,t}^k)\big) \Big\rangle_{\gamma},
\end{align}
and for hidden neuron $i \in \set{H}$,
\begin{align} \label{eq:mb-update-hid}
\Delta \bm{\theta}_i = \frac{1}{K} \sum_{k=1}^K \Big( v_{\bm{\theta}^\text{X},t}^{k,\gamma} - {\bm b}_{i,t}^{k,\gamma} \Big) \cdot \Big\langle \grad_{\bm{\theta}_i} \ell\big( h_{i,t}^k,\sigma(u_{i,t}^k)\big) \Big\rangle_{\gamma},
\end{align}
\end{subequations}
We note that learning rules based on the conventional single-sample estimator in \cite{jimenez2014stochastic, brea2013matching, jang19:spm} rely on a single stochastic sample ${\bm h}_{\leq t}$ for the hidden neurons. Such an estimator generally has high variance, which is only partially reduced by the baseline control variates. By leveraging a mini-batch of $K$ samples, {\MBSNN} with $K>1$ can potentially improve the learning performance by reducing the variance of the single-sample estimator.
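Both update rules rely on the discounted temporal average $\langle \cdot \rangle_{\gamma}$, which can be maintained online as an eligibility-trace-style recursion instead of a sum over the whole past; a minimal sketch:

```python
def discounted_average(values, gamma):
    """Online discounted temporal average: trace <- gamma * trace + f_t,
    so no batch processing over past time steps is needed."""
    trace = 0.0
    for f in values:
        trace = gamma * trace + f
    return trace

# The recursion reproduces the explicit sum  sum_{t'} gamma^{t'} f_{t-t'}:
vals, gamma = [1.0, 2.0, 3.0], 0.5
explicit = sum(gamma ** tp * f for tp, f in zip(range(len(vals)), reversed(vals)))
print(discounted_average(vals, gamma), explicit)  # 4.25 4.25
```

One such trace per synapse (and one per learning signal and baseline) suffices, so the memory cost of the online rule is constant in $T$.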
\subsection{{\IWSNN}: Importance-Weighted Multi-Sample Online Learning for SNNs}
The second alternative approach uses the available $K$ independent samples to obtain an increasingly more accurate bound on the log-loss $\log p_{\bm{\theta}}({\bm x}_{\leq T})$ in \eqref{eq:ml}. The approach adapts the principles of the importance weighting method, introduced in \cite{mnih2016variational, burda2015importance, domke2018iwvi} for conventional probabilistic models, to probabilistic SNNs.
We start by considering an unbiased, importance-weighted estimator of the marginal $p_{\bm{\theta}}({\bm x}_{\leq T})$ of the form
\begin{align} \label{eq:iw-estimator}
p_{\bm{\theta}}({\bm x}_{\leq T}) \approx \frac{1}{K} \sum_{k=1}^K \frac{p_{\bm{\theta}}({\bm x}_{\leq T},{\bm h}_{\leq T}^k)}{ p_{\bm{\theta}^\text{H}}({\bm h}_{\leq T}^k||{\bm x}_{\leq T})} &= \frac{1}{K} \sum_{k=1}^K p_{\bm{\theta}^\text{X}}({\bm x}_{\leq T}||{\bm h}_{\leq T-1}^k) \cr
&= \frac{1}{K}\sum_{k=1}^K \exp\big( v_{\bm{\theta}^\text{X},T}^k\big) := R_{\bm{\theta}^\text{X},T}^K,
\end{align}
where ${\bm h}_{\leq T}^{1:K}$ are $K$ independent spiking signals of the hidden neurons drawn from the causally conditioned distribution $p_{\bm{\theta}^\text{H}}({\bm h}_{\leq T}^{1:K}||{\bm x}_{\leq T}) = \prod_{k=1}^K p_{\bm{\theta}^\text{H}}({\bm h}_{\leq T}^k||{\bm x}_{\leq T})$ in \eqref{eq:causally-cond}, and we have used the notation $v_{\bm{\theta}^\text{X},T}^k = \log p_{\bm{\theta}^\text{X}}({\bm x}_{\leq T}||{\bm h}_{\leq T-1}^k)$ in \eqref{eq:gem-log-weight-ls}. The estimator $R_{\bm{\theta}^\text{X},T}^K$ in \eqref{eq:iw-estimator} for the marginal is unbiased, i.e., $\mathbb{E}_{p_{\bm{\theta}^\text{H}}({\bm h}_{\leq T}^{1:K}||{\bm x}_{\leq T})}[R_{\bm{\theta}^\text{X},T}^K] = p_{\bm{\theta}}({\bm x}_{\leq T})$, and can be used to obtain the upper bound of the log-loss as
\begin{align} \label{eq:iw-elbo-T}
-\log p_{\bm{\theta}}({\bm x}_{\leq T}) &= -\log \mathbb{E}_{p_{\bm{\theta}^\text{H}}({\bm h}_{\leq T}^{1:K}||{\bm x}_{\leq T})} \big[ R_{\bm{\theta}^\text{X},T}^K \big] \cr
&\leq - \mathbb{E}_{p_{\bm{\theta}^\text{H}}({\bm h}_{\leq T}^{1:K}||{\bm x}_{\leq T})}\Big[ \log R_{\bm{\theta}^\text{X},T}^K \Big] := \set{L}_{{\bm x}_{\leq T}}^K(\bm{\theta}).
\end{align}
We note that for $K \rightarrow \infty$, the upper bound $\set{L}_{{\bm x}_{\leq T}}^K(\bm{\theta})$ in \eqref{eq:iw-elbo-T} converges to the exact log-loss $-\log p_{\bm{\theta}}({\bm x}_{\leq T})$ \cite{burda2015importance}.
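Numerically, $\log R_{\bm{\theta}^\text{X},T}^K$ and the softmax weights over the $K$ samples are best evaluated in the log domain via the log-sum-exp trick, since the weights $\exp(v_{\bm{\theta}^\text{X},T}^k)$ underflow for long sequences; a sketch with hypothetical log-weights:

```python
import math

def iw_bound_and_weights(log_w):
    """log R^K = logsumexp(v^1..v^K) - log K, computed stably, together
    with the normalized (softmax) weights over the K samples."""
    m = max(log_w)
    exps = [math.exp(v - m) for v in log_w]
    z = sum(exps)
    log_bound = m + math.log(z) - math.log(len(log_w))
    return log_bound, [e / z for e in exps]

v = [-10.0, -12.0, -9.5]   # hypothetical log-weights v^k
log_bound, w = iw_bound_and_weights(v)
print(round(log_bound, 2), [round(x, 3) for x in w])  # -10.07 [0.359, 0.049, 0.592]
```

The sample with the largest log-weight dominates the mixture, which is exactly how the softmax weighting concentrates the gradient estimate on the most plausible hidden spiking signals.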
The minimization of the bound $\set{L}_{{\bm x}_{\leq T}}^K(\bm{\theta})$ in \eqref{eq:iw-elbo-T} via SGD in the direction of the gradient $-\grad_{\bm{\theta}} \set{L}_{{\bm x}_{\leq T}}^K(\bm{\theta})$ yields the update rule $\bm{\theta} \leftarrow \bm{\theta} - \eta \cdot \grad_{\bm{\theta}} \set{L}_{{\bm x}_{\leq T}}^K(\bm{\theta})$ with $\eta>0$ the learning rate. The gradient is obtained from the REINFORCE gradient method as
\begin{align} \label{eq:iw-grad-T}
&\grad_{\bm{\theta}} \set{L}_{{\bm x}_{\leq T}}^K(\bm{\theta}) = - \mathbb{E}_{p_{\bm{\theta}^{\text{H}}}({\bm h}_{\leq T}^{1:K}||{\bm x}_{\leq T})} \Big[ \grad_{\bm{\theta}} \log R_{\bm{\theta}^\text{X},T}^K + \log R_{\bm{\theta}^\text{X},T}^K \cdot \grad_{\bm{\theta}} \log p_{\bm{\theta}^\text{H}}({\bm h}_{\leq T}^{1:K}||{\bm x}_{\leq T}) \Big],
\end{align}
where the first gradient in \eqref{eq:iw-grad-T} can be computed as
\begin{align}
\grad_{\bm{\theta}} \log R_{\bm{\theta}^\text{X},T}^K &= \frac{1}{R_{\bm{\theta}^\text{X},T}^K} \cdot \grad_{\bm{\theta}}R_{\bm{\theta}^\text{X},T}^K \cr
&= \frac{1}{ \sum_{k'=1}^K \exp\big( v_{\bm{\theta}^\text{X},T}^{k'}\big) } \cdot \sum_{k=1}^K \exp\big( v_{\bm{\theta}^\text{X},T}^k\big) \cdot \grad_{\bm{\theta}} v_{\bm{\theta}^\text{X},T}^k \cr
&= -\sum_{k=1}^K \bm{\sigma}_{\text{SM}}^k\Big( {\bm v}_{\bm{\theta}^\text{X},T}\Big) \cdot \sum_{t=1}^T \sum_{i \in \set{X}} \grad_{\bm{\theta}} \ell\big( x_{i,t},\sigma(u_{i,t}^k)\big)
\end{align}
by using the chain rule, while the second gradient in \eqref{eq:iw-grad-T} can be computed as
\begin{align}
\grad_{\bm{\theta}} \log p_{\bm{\theta}^\text{H}}({\bm h}_{\leq T}^{1:K}||{\bm x}_{\leq T}) = - \sum_{k=1}^K \sum_{t=1}^T \sum_{i \in \set{H}} \grad_{\bm{\theta}} \ell\big( h_{i,t}^k,\sigma(u_{i,t}^k)\big).
\end{align}
As a result, we have the MC estimate of the gradient using the $K$ samples ${\bm h}_{\leq T}^{1:K}$ as: for visible neuron $i \in \set{X}$,
\begin{align} \label{eq:iw-grad-T-vis}
\grad_{\bm{\theta}_i} \set{L}_{{\bm x}_{\leq T}}^K(\bm{\theta}) = \sum_{k=1}^K \bigg( \bm{\sigma}_{\text{SM}}^k\Big( {\bm v}_{\bm{\theta}^\text{X},T}\Big) \cdot \sum_{t=1}^T \grad_{\bm{\theta}_i} \ell\big(x_{i,t},\sigma(u_{i,t}^k)\big) \bigg),
\end{align}
and for hidden neuron $i \in \set{H}$,
\begin{align} \label{eq:iw-grad-T-hid}
&\grad_{\bm{\theta}_i} \set{L}_{{\bm x}_{\leq T}}^K(\bm{\theta}) = \Big( \log R_{\bm{\theta}^\text{X},T}^K - {\bm b}_{i,T} \Big) \cdot \sum_{k=1}^K \sum_{t=1}^T \grad_{\bm{\theta}_i} \ell\big(h_{i,t}^k,\sigma(u_{i,t}^k)\big),
\end{align}
where we have introduced baseline signals ${\bm b}_{i,T}$ to reduce the variance of the gradient estimator, following the same approach as in \eqref{eq:mb-baseline}.
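To make the structure of \eqref{eq:iw-grad-T-vis} and \eqref{eq:iw-grad-T-hid} concrete, the following Python sketch assembles the two gradient estimates from per-sample cross-entropy gradients; the array shapes and names are our assumptions, and the per-sample gradients are assumed to have already been summed over time steps and neurons:

```python
import numpy as np

def softmax(v):
    """Importance weights sigma_SM^k(v) across the K samples."""
    e = np.exp(v - np.max(v))
    return e / e.sum()

def visible_grad(v, grad_ell_x):
    """Visible-neuron gradient: softmax-weighted combination of
    per-sample gradients. v: (K,) log-weights; grad_ell_x: (K, P)."""
    return softmax(v) @ grad_ell_x

def hidden_grad(v, grad_ell_h, baseline=0.0):
    """Hidden-neuron gradient: scalar learning signal (log R - b)
    multiplying the per-sample gradients summed over the K samples."""
    m = np.max(v)
    log_R = m + np.log(np.mean(np.exp(v - m)))
    return (log_R - baseline) * grad_ell_h.sum(axis=0)
```

With equal log-weights the visible gradient reduces to the plain average over samples, and the hidden gradient vanishes when the baseline matches $\log R$, which is the variance-reduction role of the baseline.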
In order to obtain an online rule, the model parameters $\bm{\theta}$ are updated at each time $t$ via SGD by using the discounted version of the gradient estimator $\grad_{\bm{\theta}} \set{L}_{{\bm x}_{\leq T}}^K(\bm{\theta})$ in \eqref{eq:iw-grad-T}, given as: for visible neuron $i \in \set{X}$,
\begin{subequations} \label{eq:iw-grad}
\begin{align} \label{eq:iw-grad-vis}
\grad_{\bm{\theta}_i} \set{L}_{{\bm x}_{\leq t}}^{K,\gamma}(\bm{\theta}) &= \sum_{k=1}^K \bigg( \bm{\sigma}_{\text{SM}}^k\Big( {\bm v}_{\bm{\theta}^\text{X},t}^{\gamma}\Big) \cdot \sum_{t'=0}^{t-1} \gamma^{t'} \grad_{\bm{\theta}_i} \ell\big( x_{i,t-t'},\sigma(u_{i,t-t'}^k)\big) \bigg),
\end{align}
and for hidden neuron $i \in \set{H}$,
\begin{align} \label{eq:iw-grad-hid}
\grad_{\bm{\theta}_i} \set{L}_{{\bm x}_{\leq t}}^{K,\gamma}(\bm{\theta}) &= \Big( \log R_{\bm{\theta}^\text{X},t}^{K,\gamma} - {\bm b}_{i,t}^{\gamma}\Big) \cdot \sum_{k=1}^K \sum_{t'=0}^{t-1} \gamma^{t'} \grad_{\bm{\theta}_i} \ell\big( h_{i,t-t'}^k,\sigma(u_{i,t-t'}^k)\big),
\end{align}
\end{subequations}
where $\gamma \in (0,1)$ is a discount factor. In a similar manner to {\GEMSNN} and {\MBSNN}, in order to avoid the batch processing required by the importance weights $\bm{\sigma}_{\text{SM}}^k\big({\bm v}_{\bm{\theta}^\text{X},T}\big)$ and by the importance-weighted estimator $R_{\bm{\theta}^\text{X},T}^K$, we use the discounted values $v_{\bm{\theta}^\text{X},t}^{k,\gamma}$ in \eqref{eq:gem-log-weight} to compute the discounted version of the estimator $R_{\bm{\theta}^\text{X},t}^{K,\gamma} = \frac{1}{K} \sum_{k=1}^K \exp(v_{\bm{\theta}^\text{X},t}^{k,\gamma})$ and the corresponding baseline ${\bm b}_{i,t}^{\gamma}$.
The resulting SGD-based learning rule, which we refer to as Importance-Weighted online learning for SNNs ({\IWSNN}), updates the parameters $\bm{\theta}_i$ of each neuron $i$ in the direction of the MC estimate of the gradient $-\grad_{\bm{\theta}_i} \set{L}_{{\bm x}_{\leq t}}^{K,\gamma}(\bm{\theta})$ at time $t$ as follows: for visible neuron $i \in \set{X}$,
\begin{subequations} \label{eq:iw-update}
\begin{align} \label{eq:iw-update-vis}
\Delta \bm{\theta}_i = \sum_{k=1}^K \bm{\sigma}_{\text{SM}}^k\Big( {\bm v}_{\bm{\theta}^\text{X},t}^{\gamma} \Big) \cdot \Big\langle \grad_{\bm{\theta}_i} \ell\big( x_{i,t},\sigma(u_{i,t}^k)\big) \Big\rangle_{\gamma},
\end{align}
and for hidden neuron $i \in \set{H}$,
\begin{align} \label{eq:iw-update-hid}
\Delta \bm{\theta}_i = \Big( \log R_{\bm{\theta}^\text{X},t}^{K,\gamma} - {\bm b}_{i,t}^{\gamma}\Big) \cdot \sum_{k=1}^K \Big\langle \grad_{\bm{\theta}_i} \ell\big( h_{i,t}^k, \sigma(u_{i,t}^k)\big) \Big\rangle_{\gamma}.
\end{align}
\end{subequations}
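The discounted sums $\langle \cdot \rangle_\gamma$ appearing in \eqref{eq:iw-update} can be maintained online by the first-order recursion $e_t = \gamma\, e_{t-1} + g_t$, which avoids storing past gradients. A minimal sketch (the class name is ours):

```python
import numpy as np

class DiscountedTrace:
    """Online accumulator for sum_{t'=0}^{t-1} gamma^{t'} g_{t-t'},
    i.e. the <.>_gamma operator, via e_t = gamma * e_{t-1} + g_t."""

    def __init__(self, shape, gamma=0.9):
        self.gamma = gamma
        self.e = np.zeros(shape)

    def step(self, g):
        # fold in the newest gradient term and decay the history
        self.e = self.gamma * self.e + g
        return self.e
```

One such trace per parameter group suffices, so the memory cost of the online rule is independent of the sequence length $t$.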
\subsection{Interpreting {\MBSNN} and {\IWSNN}}
Following the discussion around \eqref{eq:three-factor}, the update rules {\MBSNN} and {\IWSNN} follow a standard three-factor format and are local, with the exception of the learning signals used for the update of the hidden neurons' parameters.
In {\MBSNN}, for each neuron $i \in \set{V}$, the gradient $\grad_{\bm{\theta}_i} \set{L}_{{\bm x}_{\leq t}}^{K,\gamma}(\cdot)$ in \eqref{eq:mb-update} contains the derivative of the cross-entropy loss $\grad_{\bm{\theta}_i} \ell\big( s_{i,t}^k,\sigma(u_{i,t}^k)\big)$, with $s_{i,t}^k = x_{i,t}$ for visible neuron $i \in \set{X}$ and $s_{i,t}^k = h_{i,t}^k$ for hidden neuron $i \in \set{H}$. As detailed in \eqref{eq:gem-update}, the derivative includes the synaptic filtered trace $\overrightarrow{s}_{j,t}^k$ of pre-synaptic neuron $j$; the somatic filtered trace $\overleftarrow{s}_{i,t}^k$ of post-synaptic neuron $i$; and the post-synaptic error $s_{i,t}^k - \sigma(u_{i,t}^k)$. Furthermore, for hidden neurons $\set{H}$, the per-sample global learning signals $\{v_{\bm{\theta}^\text{X},t}^{k,\gamma}\}_{k=1}^K$ in \eqref{eq:gem-log-weight} are commonly used to guide the update in \eqref{eq:mb-update-hid}, while the visible neurons $\set{X}$ do not use a learning signal. The common per-sample learning signals $\{v_{\bm{\theta}^\text{X},t}^{k,\gamma}\}_{k=1}^K$ can be interpreted as indicating how effective the current, randomly sampled, behavior ${\bm h}_{\leq t}^k$ of the hidden neurons is in ensuring the minimization of the log-loss of the desired observation ${\bm x}_{\leq t}$ for the visible neurons.
\begin{table}[t]
\caption{Learning Communication Loads (in real numbers)}
\label{tab:comparison}
\vspace{-0.3cm}
\begin{center}
\begin{small}
\begin{sc}
\begin{tabular}{lcc}
\toprule
Scheme & Unicast $\mathsf{C}_{\text{N} \rightarrow \text{CP}}$ & Broadcast $\mathsf{C}_{\text{CP} \rightarrow \text{N}}$\\
\midrule
{\GEMSNN} & $K|\set{X}|$ & $K(|\set{X}| + |\set{H}|)$ \\
{\MBSNN} & $K|\set{X}|$ & $K|\set{H}|$ \\
{\IWSNN} & $K|\set{X}|$ & $K|\set{X}| + |\set{H}|$ \\
\bottomrule
\end{tabular}
\end{sc}
\end{small}
\end{center}
\vspace{-0.3cm}
\end{table}
In {\IWSNN}, for each neuron $i \in \set{V}$, the gradient $\grad_{\bm{\theta}_i} \set{L}_{{\bm x}_{\leq t}}^{K,\gamma}(\cdot)$ in \eqref{eq:iw-update} also contains the derivative of the cross-entropy loss $\grad_{\bm{\theta}_i} \ell\big( s_{i,t}^k,\sigma(u_{i,t}^k)\big)$. The dependence on the pre-synaptic filtered trace, post-synaptic somatic filtered trace, and post-synaptic error applies to each sample $k$ as for {\GEMSNN} and {\MBSNN}, while the learning signal takes a different form for visible and hidden neurons. As in \eqref{eq:iw-update-vis}, for visible neurons $\set{X}$, the importance weights $\{\bm{\sigma}_{\text{SM}}^k\big( {\bm v}_{\bm{\theta}^\text{X},t}^{\gamma}\big)\}_{k=1}^K$ computed using the SoftMax function serve as the common learning signals, with the contribution of each sample being weighted by $\bm{\sigma}_{\text{SM}}^k\big( {\bm v}_{\bm{\theta}^\text{X},t}^{\gamma}\big)$. As for {\GEMSNN}, the importance weight measures the relative effectiveness of the $k$th random realization ${\bm h}_{\leq t}^k$ of the hidden neurons in reproducing the desired behavior ${\bm x}_{\leq t}$ of the visible neurons. In contrast, for hidden neurons $\set{H}$, the scalar $\log R_{\bm{\theta}^\text{X},t}^{K,\gamma} = \log \big( \frac{1}{K} \sum_{k=1}^K \exp\big( v_{\bm{\theta}^\text{X},t}^{k,\gamma}\big) \big)$ is commonly used in \eqref{eq:iw-update-hid} as a global learning signal, indicating how effective the overall behavior of the hidden neurons across the $K$ samples is in ensuring the minimization of the log-loss of the observation ${\bm x}_{\leq t}$.
\begin{figure*}[ht!]
\centering
\hspace{-0.4cm}
\subfigure[]{
\includegraphics[height=0.28\columnwidth]{fig/general_comparison_ll_v4} \label{fig:general-comp-ll}
}
\hspace{-0.4cm}
\subfigure[]{
\includegraphics[height=0.28\columnwidth]{fig/general_comparison_acc_v4} \label{fig:general-comp-acc}
}
\hspace{-0.4cm}
\subfigure[]{
\includegraphics[height=0.28\columnwidth]{fig/general_comparison_comm_v4} \label{fig:general-comp-comm}
}
\hspace{-0.4cm}
\subfigure[]{
\includegraphics[height=0.28\columnwidth]{fig/general_comparison_spike_v4} \label{fig:general-comp-spike}
}
\vspace{-0.1cm}
\caption{Classification task trained on MNIST-DVS data set using {\MBSNN}, {\IWSNN} and {\GEMSNN} versus $K$, with error bars representing $95\%$ confidence intervals: (a) (estimated) log-loss for the desired output on the test data set; (b) classification accuracy on the test data set; (c) broadcast communication load $\mathsf{C}_{\text{CP} \rightarrow \text{N}}$ from CP to neurons in Table~\ref{tab:comparison}; and (d) number of spikes emitted by the hidden neurons per unit time during training.
}
\label{fig:general-comparison}
\vspace{-0.4cm}
\end{figure*}
\subsection{Communication Load of {\MBSNN} and {\IWSNN}}
As described above, both {\MBSNN} and {\IWSNN} require bi-directional communication. For both rules, at each time $t$, unicast communication from the neurons to the CP is required to collect the information $\{\{\ell\big(x_{i,t},\sigma(u_{i,t}^k)\big)\}_{k=1}^K\}_{i \in \set{X}}$ from all visible neurons. This entails a unicast communication load of $\mathsf{C}_{\text{N} \rightarrow \text{CP}} = K|\set{X}|$ real numbers, as for {\GEMSNN}. For {\MBSNN}, the collected information is used to compute the per-sample learning signals $\{v_{\bm{\theta}^\text{X},t}^{k,\gamma}\}_{k=1}^K$, which are then sent back to all hidden neurons, resulting in a broadcast communication load from the CP to the neurons equal to $\mathsf{C}_{\text{CP} \rightarrow \text{N}} = K|\set{H}|$. In contrast, for {\IWSNN}, the collected information is used {\em (i)} to compute the importance weights $\{\bm{\sigma}_{\text{SM}}^k\big( {\bm v}_{\bm{\theta}^\text{X},t}^{\gamma}\big)\}_{k=1}^K$ of the samples, which are sent back to all visible neurons; and {\em (ii)} to compute the scalar learning signal $\log R_{\bm{\theta}^\text{X},t}^{K,\gamma}$ that is fed back to all hidden neurons. The resulting broadcast communication load is $\mathsf{C}_{\text{CP} \rightarrow \text{N}} = K|\set{X}|+|\set{H}|$. As compared to {\GEMSNN}, whose broadcast load is $K(|\set{X}|+|\set{H}|)$, {\MBSNN} and {\IWSNN} have smaller broadcast communication loads (see Table~\ref{tab:comparison} for details).
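The loads listed in Table~\ref{tab:comparison} follow directly from the counts above; the small helper below tabulates them (the scheme names used as dictionary keys are our convention):

```python
def comm_loads(K, n_visible, n_hidden):
    """Per-time-step (unicast, broadcast) communication loads, in real
    numbers, for the three rules; n_visible = |X|, n_hidden = |H|."""
    unicast = K * n_visible  # same for all three schemes
    return {
        "GEM-SNN": (unicast, K * (n_visible + n_hidden)),
        "MB-SNN": (unicast, K * n_hidden),
        "IW-SNN": (unicast, K * n_visible + n_hidden),
    }
```

For example, with $K=5$, $|\set{X}|=2$ and $|\set{H}|=4$, the broadcast loads are $30$, $20$ and $14$, respectively, illustrating the savings of {\MBSNN} and {\IWSNN} over {\GEMSNN}.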
\subsection{Additional Experiments on Classification Task}
We provide a comparison among the multi-sample training schemes, including the proposed {\GEMSNN} and two alternatives {\MBSNN} and {\IWSNN}.
We trained an SNN with $K$ samples by using {\GEMSNN}, {\MBSNN} and {\IWSNN}, and tested the SNN with $K_I=K$ samples; the corresponding estimated log-loss, accuracy, broadcast communication load, and number of spikes emitted by the hidden neurons during training are illustrated as a function of $K$ in Fig.~\ref{fig:general-comparison}. We recall that the multi-sample learning rules use learning signals for visible and hidden neurons in different ways.
For visible neurons, {\MBSNN} does not differentiate among the contributions of different samples, while {\GEMSNN} and {\IWSNN} weigh them according to importance weights. For hidden neurons, {\IWSNN} uses a scalar learning signal, while both {\GEMSNN} and {\MBSNN} apply {\em per-sample} learning signals in different ways -- {\GEMSNN} applies the importance weights of the samples, normalized by using the SoftMax function, while {\MBSNN} uses the unnormalized values.
From Fig.~\ref{fig:general-comparison}, we first observe that {\GEMSNN} outperforms the other methods for $K > 1$ in terms of test log-loss, accuracy, and number of spikes, while requiring the largest broadcast communication load $\mathsf{C}_{\text{CP} \rightarrow \text{N}}$. Focusing on the impact of the learning signals used for the visible neurons, it is seen that using the importance weights in {\IWSNN} and {\GEMSNN} improves the test performance. For this classification task, where the model consists of a small number of read-out visible neurons, the difference in performance among the proposed training schemes is largely due to their different use of learning signals for the hidden neurons.
Specifically, in terms of the estimated test log-loss, the per-sample importance weights used in {\GEMSNN}, which are normalized using the SoftMax function across the $K$ samples, outperform the global scalar learning signal in {\IWSNN} and the per-sample (unnormalized) learning signals in {\MBSNN}; the use of multiple samples in {\GEMSNN} is also seen to enhance the performance for large $K$. We note that the other two schemes, {\MBSNN} and {\IWSNN}, can be applied with an arbitrary variational posterior parameterized by learnable parameters, which can be further optimized during training; with our choice of the causally conditioned distribution \eqref{eq:causally-cond}, this is not feasible.
Finally, it is observed that the proposed schemes provide different trade-offs between costs (in terms of communication load and energy consumption) and learning performance (in terms of test log-loss and accuracy).
\section{Conclusions} \label{sec:conclusion}
In this paper, we have explored the implications for inference and learning of one of the key unique properties of probabilistic spiking neural models as opposed to standard deterministic models, namely their capacity to generate multiple independent spiking outputs for any input.
We have introduced inference and online learning rules that can leverage multiple samples through majority rule and generalized expectation-maximization (GEM), respectively.
The multi-sample inference rule has the advantage of robustifying decisions and of quantifying uncertainty.
The GEM-based learning rule is derived from an estimation of the log-loss via importance sampling.
Experiments on the neuromorphic data set MNIST-DVS have demonstrated that multi-sample inference and learning rules, as compared to conventional single-sample schemes, can improve training and test performance in terms of accuracy and calibration.
While this work considered the log-loss of specific desired output spiking signals as the learning criterion, similar rules can be derived by considering other reward functions, such as the van Rossum (VR) distance \cite{zenke2018superspike, rossum2001novel}. The multi-sample algorithms for probabilistic SNN models proposed in this paper can also be extended to networks of spiking Winner-Take-All (WTA) circuits \cite{jang20:vowel, mostafa2018learning}, which process multi-valued spikes.
\section{Experimental Comparison} \label{sec:experiments-comp}
\section{Experiments} \label{sec:experiments}
In this section, we evaluate the performance of the proposed inference and learning schemes on memorization and classification tasks defined on the neuromorphic data set MNIST-DVS \cite{serrano2015poker}. We conduct experiments by varying the number $K_I$ and $K$ of samples used for inference and training, and evaluate the performance in terms of robust decision making, uncertainty quantification, log-likelihood, classification accuracy, number of spikes \cite{merolla2014million}, communication load, and calibration \cite{guo2017calibration}.
The MNIST-DVS data set \cite{serrano2015poker} was generated by displaying slowly-moving handwritten digit images from the MNIST data set on an LCD monitor to a DVS neuromorphic camera \cite{lichtsteiner2006128}. For each pixel of an image, positive or negative events are recorded by the camera when the pixel's luminosity respectively increases or decreases by more than a given amount, and no event is recorded otherwise. In this experiment, images are cropped to $26 \times 26$ pixels, and uniform downsampling over time is carried out to obtain $T = 80$ time samples per image as in \cite{zhao2014feedforward, henderson2015spike}. The training data set is composed of $900$ examples per digit, from $0$ to $9$, and the test data set is composed of $100$ examples per digit. The signs of the spiking signals are discarded, producing a binary signal per pixel as in, e.g., \cite{zhao2014feedforward}.
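A minimal sketch of the temporal downsampling and sign-removal steps, assuming per-pixel event counts have already been binned into raw time steps (the array layout and function name are our assumptions, not from the original pipeline):

```python
import numpy as np

def downsample_binarize(events, T=80):
    """Uniformly downsample event counts over time to T samples and
    binarize, discarding event magnitudes.

    events: (T_raw, 26, 26) array of nonnegative per-pixel event counts
    (signs already discarded); returns a (T, 26, 26) binary array.
    """
    T_raw = events.shape[0]
    edges = np.linspace(0, T_raw, T + 1).astype(int)
    # sum event counts within each of the T uniform windows
    frames = np.add.reduceat(events, edges[:-1], axis=0)
    return (frames > 0).astype(np.uint8)
```

Each of the $T$ windows thus yields a binary spike per pixel, matching the binary-signal preprocessing described above.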
Throughout this section, as illustrated in Fig.~\ref{fig:model-inference}, we consider a generic, non-optimized, network architecture characterized by a set of $|\set{H}|$ fully connected hidden neurons, all receiving the exogenous inputs as pre-synaptic signals, and a read-out visible layer, directly receiving pre-synaptic signals from all exogenous inputs and all hidden neurons, without recurrent connections between visible neurons. For the synaptic and somatic filters, following the approach of \cite{pillow2008spatio}, we choose a set of two or three raised cosine functions with a duration of $10$ time steps as synaptic filters, and a single raised cosine function with a duration of $10$ time steps as somatic filter.
Since we aim at demonstrating the advantages of using multiple samples for inference and learning, we only provide comparisons with the counterpart probabilistic methods that use a single sample in \cite{brea2013matching, jimenez2014stochastic, jang19:spm}. The advantages of probabilistic SNN models trained with conventional single-sample strategies over deterministic SNN models trained with the state-of-the-art method DECOLLE \cite{kaiser2020decolle} have been investigated in \cite{jang20:vowel, skatchkovsky2020reviewpt2}. It is shown therein that probabilistic methods can have significant gain in terms of accuracy in resource-constrained environments characterized by a small number of neurons and/or a short presentation time, even with a single sample. We do not repeat these experiments here.
\iffalse
Following \cite{rossum2001novel}, the VR distance between the desired output sequence ${\bm x}_{\leq T}$ and the sequence ${\bm x}_{\leq T}^k$ generated by the compartment $k$ of the visible neurons is defined as the squared error between the corresponding filtered traces, i.e.,
\begin{align} \label{eq:distance-eval}
D({\bm x}_{\leq T}, {\bm x}_{\leq T}^k) := \frac{1}{|\set{X}|} \sum_{t=1}^T \sum_{i \in \set{X}} \big( \tilde{x}_{i,t} - \tilde{x}_{i,t}^k \big)^2,
\end{align}
where $\tilde{x}_{i,t} = c_t \ast x_t$ for some filter $c_t$. In all experiments, we use the self-memory filter $b_t$ to evaluate the VR distance.
\fi
\iffalse
While all compartments are used during training, inference can in principle use any subset of compartments. For example, using a single compartment $k$ for inference, the test performance can be measured by the VR distance $D({\bm x}_{\leq T}, {\bm x}_{\leq T}^k)$ in \eqref{eq:distance-eval}. A more accurate inference is expected when all $K$ outputs $\{{\bm x}_{\leq T}^k\}_{k=1}^K$ generated by $K$ compartments in parallel are used to estimate the lower-half spiking signals. In our experiments, we have considered majority rule, whereby the output $x_{i,t}^o$ is set to $1$ if $\sum_{k=1}^K x_{i,t}^k > K/2$ and $x_{i,t}^o = 0$ otherwise. In this case, we evaluated the VR distance $D({\bm x}_{\leq T}, {\bm x}_{\leq T}^o)$ using \eqref{eq:distance-eval}.
\fi
\vspace{-0.15cm}
\subsection{Multi-Sample Learning on Memorization Task}
\begin{figure*}[t!]
\centering
\hspace{-0.3cm}
\subfigure[]{
\includegraphics[height=0.28\columnwidth]{fig/single_ll_gem_training_v2} \label{fig:single-ll-train}
}
\hspace{-0.4cm}
\subfigure[]{
\includegraphics[height=0.28\columnwidth]{fig/single_ll_gem_v2} \label{fig:single-ll-K}
}
\hspace{-0.4cm}
\subfigure[]{
\includegraphics[height=0.28\columnwidth]{fig/single_comm_gem_v2} \label{fig:single-comm-K}
}
\hspace{-0.4cm}
\subfigure[]{
\includegraphics[height=0.28\columnwidth]{fig/single_spikenum_gem_v2} \label{fig:single-spike-K}
}
\vspace{-0.2cm}
\caption{Structured output memorization task trained on a single MNIST-DVS data point using {\GEMSNN}: (a) (Estimated) log-loss of the desired output as a function of the number of processed time samples for different values of the number $K = 1,2,5,10,20$ of samples with $|\set{H}| = 20$. (b) (Estimated) log-loss for the desired output; (c) broadcast communication load $\mathsf{C}_{\text{CP} \rightarrow \text{N}}$ from CP to neurons; and (d) number of spikes emitted by the hidden neurons per unit time during training as a function of the number $K$ of samples in training for different values of $|\set{H}| = 20, 500$. Also shown for reference is the performance with $|\set{H}| = 0$ and shaded areas represent $95\%$ confidence intervals. }
\label{fig:single-training}
\vspace{-0.4cm}
\end{figure*}
We first focus on a structured output memorization task, which involves predicting the $13 \times 26$ spiking signals encoding the lower half of an MNIST-DVS digit from the $13 \times 26$ spiking signals encoding its top half. The top-half spiking signals are given as exogenous inputs to the SNN, while the lower-half spiking signals are assigned as the desired outputs ${\bm x}_{\leq T}$ of the $|\set{X}| = 13 \times 26 = 338$ visible neurons.
For this memorization task, the accuracy of an SNN model with parameter $\bm{\theta}$ on a desired output signal ${\bm x}_{\leq T}$ is measured by the marginal log-loss $-\log p_{\bm{\theta}}({\bm x}_{\leq T})$ obtained from \eqref{eq:ml-marginal}. The log-loss is estimated via the empirical average of the cross-entropy loss of the visible neurons over $20$ independent realizations of the hidden neurons \cite{simeone2018brief}.
As a first example, we train an SNN model using {\GEMSNN} by presenting a single MNIST-DVS training data point ${\bm x}_{\leq T}$ $200$ consecutive times, yielding a sequence of $200 \times T = 16000$ time samples. The trained SNN is tested on the same image, hence evaluating the capability of the SNN for memorization \cite{brea2013matching}. For training, hyperparameters such as the learning rate $\eta$ and the time constant $\gamma$ have been selected after a non-exhaustive manual search and are shared among all experiments. The initial learning rate $\eta = 5 \times 10^{-4}$ is decayed as $\eta = \eta/(1+0.2)$ every $40$ presentations of the data point, and we set $\gamma = 0.9$.
To start, Fig.~\ref{fig:single-ll-train} shows the evolution of the estimated log-loss of the desired output (the lower-half image) as more samples are processed by the proposed {\GEMSNN} rule for different values of the number $K = 1,2,5,10,20$ of samples and for $|\set{H}| = 0, 20, 500$ hidden neurons. The corresponding estimated log-loss at the end of training is illustrated as a function of $K$ in Fig.~\ref{fig:single-ll-K}. It can be observed that using more samples $K$ improves the training performance due to the optimization of an increasingly more accurate learning criterion. Improvements are also observed with an increasing number $|\set{H}|$ of hidden neurons. However, it should be emphasized that increasing the number of hidden neurons increases proportionally the number of weights in the SNN, while a larger $K$ does not increase the complexity of the model in terms of number of weights.
We now turn our attention to the number of spikes produced by the hidden neurons during training and to the requirements of the SNN trained with {\GEMSNN} in terms of the communication load.
A larger $K$ implies a proportionally larger number of spikes emitted by the hidden neurons during training, as seen in Fig.~\ref{fig:single-spike-K}. We recall that the number of spikes is a proxy for the energy consumption of the SNN. Furthermore, Fig.~\ref{fig:single-comm-K} shows the broadcast communication load $\mathsf{C}_{\text{CP} \rightarrow \text{N}}$ from CP to neurons as a function of $K$ for different values of $|\set{H}|$. The communication load of an SNN trained using {\GEMSNN} increases linearly with $K$ and $|\set{H}|$.
From Fig.~\ref{fig:single-training}, the proposed {\GEMSNN} is seen to enable a flexible trade-off between communication load and energy consumption, on the one hand, and training performance, on the other, by leveraging the availability of $K$ samples.
\vspace{-0.15cm}
\subsection{Multi-Sample Inference on Classification Task}
Next, we consider a handwritten digit classification task based on the MNIST-DVS data set. In order to evaluate the advantage of the multi-sample inference rule proposed in Sec.~\ref{sec:multi-inference}, we focus on a binary classification task. To this end, we consider a probabilistic SNN with $|\set{X}| = 2$ visible output neurons in the read-out layer, one for each of the two digit classes `$0$' and `$1$'. The $26 \times 26$ spiking signals encoding an MNIST-DVS image are given as exogenous inputs to the SNN, while the digit labels $\{0,1\}$ are encoded by the neurons in the read-out layer. The output neuron $c \in \set{X}$ corresponding to the correct label is assigned the desired output spike signals $x_{c,t} = 1$ for $t=1,\ldots,T$, while the other neuron $c' \neq c$ is assigned $x_{c',t} = 0$ for $t=1,\ldots,T$.
We train an SNN model with $|\set{H}|=4$ hidden neurons on the $1800$ training data points for digits $\{0,1\}$ by using the proposed learning rule {\GEMSNN} with $K=5$ samples. We set the constant learning rate $\eta = 10^{-4}$ and time constant $\kappa = 0.2$, which have been selected after a non-exhaustive manual search. For testing on the $200$ test data points, we implement a majority rule with $K_I$ spiking signals $\{{\bm x}_{\leq T}^k\}_{k=1}^{K_I}$ for inference. The $K_I$ samples are used to obtain the final classification decisions $\hat{c}$ using \eqref{eq:rate-class}-\eqref{eq:majority}; and to quantify the aleatoric uncertainty of the decisions using \eqref{eq:majority-prob}. In particular, the probability of each class is estimated with the empirical average as in \eqref{eq:majority-prob}.
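The majority rule and the empirical class probabilities in \eqref{eq:majority} and \eqref{eq:majority-prob} amount to a vote count over the $K_I$ per-sample decisions, as in the following sketch (function and variable names are ours):

```python
import numpy as np

def majority_decision(votes, num_classes):
    """Majority rule over K_I per-sample class decisions.

    votes: length-K_I sequence of class indices; returns the final
    decision and the empirical class probabilities used for uncertainty
    quantification.
    """
    counts = np.bincount(np.asarray(votes), minlength=num_classes)
    probs = counts / counts.sum()
    return int(np.argmax(counts)), probs
```

The empirical probabilities returned here are exactly the per-class vote fractions, which are used below to quantify aleatoric uncertainty.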
\begin{figure*}[t!]
\centering
\hspace{-0.3cm}
\subfigure[]{
\includegraphics[height=0.31\columnwidth]{fig/inference_acc_v2} \label{fig:inference-acc}
}
\hspace{-0.4cm}
\subfigure[]{
\includegraphics[height=0.31\columnwidth]{fig/inference_entropy_v1}
\label{fig:inference-entropy}
}
\vspace{-0.2cm}
\caption{Classification task trained on MNIST-DVS data points of two digits $\{0,1\}$ using {\GEMSNN} with $K=5$ samples: (a) Classification accuracy; and (b) Average entropy of classes measured for correct and incorrect decisions as a function of processed time samples for different values $K_I=1,5,10,20,30$ of samples in inference. The shaded areas represent $95\%$ confidence intervals.}
\label{fig:multi-inference}
\vspace{-0.4cm}
\end{figure*}
To start, we plot in Fig.~\ref{fig:inference-acc} the classification accuracy on the test data set as a function of the number of training iterations. A larger $K_I$ is seen to improve the test accuracy thanks to the more robust final decisions obtained by the majority rule \eqref{eq:majority}. As discussed around \eqref{eq:majority-error}, the probability of error of the majority rule decreases exponentially with $K_I$. In particular, after training on the $100$ data points, yielding a sequence of $8000$ time samples, the test accuracy obtained by single-sample ($K_I = 1$) inference is $90.9\%$, which is improved to $97.2\%$ with $K_I = 20$ samples.
Next, we evaluate the uncertainty of the predictions as a function of the number $K_I$ of samples used for inference. Using $K_I$ decisions, the probability assigned to each class by the trained model is given by \eqref{eq:majority-prob}. Accordingly, each class is assigned a degree of confidence that depends on the fraction of the $K_I$ decisions assigned to it.
In order to evaluate the uncertainty in this decision, we evaluate the entropy of the distribution \eqref{eq:majority-prob} across the classes. More precisely, we separately average the entropy for correct and incorrect decisions. In this way, we can assess the capacity of the model to quantify uncertainty for both correct and incorrect decisions.
As seen in Fig.~\ref{fig:inference-entropy}, when $K_I=1$, the entropy is zero for both classes, since the model is unable to quantify uncertainty. In contrast, with $K_I > 1$, the entropy of correct decisions decreases as a function of training iterations, while the entropy of incorrect decisions remains large, i.e., close to $1$ bit. This indicates that the model reports well calibrated decisions.
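The uncertainty measure used in Fig.~\ref{fig:inference-entropy} is the entropy of the empirical class distribution; a short sketch (in bits, with the convention $0 \log 0 = 0$; the function name is ours):

```python
import numpy as np

def decision_entropy(probs):
    """Entropy (in bits) of the empirical class distribution: zero when
    all K_I samples agree, one bit for an even two-class split."""
    p = np.asarray(probs, dtype=float)
    p = p[p > 0.0]                      # convention: 0 * log(0) = 0
    return float(-(p * np.log2(p)).sum())
```

With $K_I = 1$, the empirical distribution is degenerate and the entropy is identically zero, which is why single-sample inference cannot quantify uncertainty.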
\subsection{Multi-Sample Learning on Classification Task}
\begin{figure}[t!]
\centering
\includegraphics[height=0.36\columnwidth]{fig/general_calibration_gem_Nh200_seed5_v5}
\vspace{-0.2cm}
\caption{Classification task trained on MNIST-DVS data points of digits $\{0,1,2\}$:
Estimated log-likelihood, classification accuracy and expected calibration error (ECE) \eqref{eq:ece} of test data points as a function of processed time samples for different values $K$ of samples in training using {\GEMSNN}. The accuracy and ECE are measured using $K_I = 2$ samples. The shaded areas represent $95\%$ confidence intervals.
}
\label{fig:general-ll-ece-gem}
\vspace{-0.4cm}
\end{figure}
Finally, we investigate the advantage of using multiple samples for training in a classification task. We consider an SNN with $|\set{X}| = 3$ visible output neurons in the read-out layer, one for each of the three digit classes $\{0,1,2\}$. We train the SNN with $|\set{H}| = 200$ hidden neurons on the $2700$ training data points of digits $\{0,1,2\}$ by using {\GEMSNN}, and the SNN is tested on the $300$ test data points.
For testing, we adopt the majority rule \eqref{eq:rate-class}-\eqref{eq:majority} to evaluate accuracy. We also consider {\em calibration} as a performance metric following \cite{guo2017calibration}. To this end, the prediction probability $\hat{p}$, or confidence, of a decision is derived from the vote count ${\bm z} = (z_c : c \in \set{X})$, with $z_c = \sum_{k=1}^{K_I} \mathds{1}({\bm x}_{\leq T}^k \in \mathds{X}_c)$, using the SoftMax function, i.e.,
\begin{align*}
\hat{p} = \sigma_{\text{SM}}^{\hat{c}}\big( {\bm z} \big).
\end{align*}
The expected calibration error (ECE) measures the difference in expectation between confidence and accuracy, i.e., \cite{guo2017calibration}
\begin{align} \label{eq:ece}
\text{ECE} = \mathbb{E}_{\hat{p}} \Big[ \big| \mathbb{P}\big( \hat{c} = c | \hat{p} = p \big) - p \big| \Big].
\end{align}
In \eqref{eq:ece}, the probability $\mathbb{P}\big( \hat{c} = c| \hat{p}=p\big)$ is the probability that $\hat{c}$ is the correct decision for inputs that yield confidence $\hat{p} = p$. The ECE can be estimated by using quantization and empirical averages as detailed in \cite{guo2017calibration}.
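A sketch of the standard binned ECE estimator follows; the bin count and names are our choices, while the quantization procedure itself is the one detailed in \cite{guo2017calibration}:

```python
import numpy as np

def ece(confidences, correct, n_bins=10):
    """Binned ECE estimate: weighted average over confidence bins of
    |empirical accuracy - mean confidence|.

    confidences: per-input prediction probabilities in (0, 1];
    correct: per-input 0/1 indicators of a correct decision.
    """
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    n = len(confidences)
    total = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            total += (mask.sum() / n) * gap
    return total
```

A perfectly calibrated model yields a zero gap in every bin, and the bin weights make the estimate robust to unevenly populated confidence ranges.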
We plot in Fig.~\ref{fig:general-ll-ece-gem} the test accuracy, ECE, and estimated log-likelihood of the desired output spikes as a function of the number of training iterations. A larger $K$ is seen to improve the test log-likelihood thanks to the optimization of a more accurate training criterion. The larger log-likelihood also translates into a model that more accurately reproduces the conditional probability of the outputs given the inputs \cite{guo2017calibration, bishop1994mixture}, which in turn enhances calibration. In contrast, accuracy can be improved with a larger $K$, but only if regularization via early stopping is carried out. This points to the fact that the goal of minimizing the log-loss of specific desired output spiking signals is not equivalent to minimizing the classification error \cite{guo2017calibration}. More results can be found in the Appendix, where we compare the performance with several alternative training schemes based on multiple samples.
\section{Generalized EM Multi-Sample Online Learning for SNNs}
\label{sec:multi-learning}
In this section, we discuss how the capacity of probabilistic SNNs to generate multiple independent samples at the output can be leveraged to improve the training performance in terms of convergence and accuracy.
To this end, we first provide necessary background by reviewing the standard expectation-maximization (EM) algorithm and its Monte Carlo approximation known as generalized EM (GEM).
These schemes are recalled for general probabilistic models.
Next, we propose a novel learning rule that applies GEM to probabilistic SNNs.
The novel rule, referred to as {\GEMSNN}, addresses the challenge of obtaining {\em online local} updates that can be implemented at each neuron, as data is streamed through the SNN, based on locally available information and limited global feedback.
As in \cite{jimenez2014stochastic, brea2013matching, jang19:spm}, we aim at maximizing the likelihood of obtaining a set of desired output spiking signals at the read-out layer.
The desired output sequences are specified by the training data as spiking sequences ${\bm x}_{\leq T}$ of some duration $T > 0$ in response to given exogenous inputs. In contrast, the spiking behavior of the hidden neurons is left unspecified by the data, and it should be adapted to ensure the desired outputs of the visible neurons.
Mathematically, we focus on the maximum likelihood (ML) problem
\begin{align} \label{eq:ml}
\min_{\bm{\theta}}~ - \log p_{\bm{\theta}}({\bm x}_{\leq T}),
\end{align}
where ${\bm x}_{\leq T}$ is a sequence of desired outputs for the visible neurons.
The likelihood of the data, $p_{\bm{\theta}}({\bm x}_{\leq T})$, is obtained via marginalization over the hidden spiking signals as
\begin{align} \label{eq:ml-marginal}
\log p_{\bm{\theta}}({\bm x}_{\leq T}) = \log \sum_{{\bm h}_{\leq T}} p_{\bm{\theta}}({\bm x}_{\leq T}, {\bm h}_{\leq T}).
\end{align}
This marginalization requires summing over the $2^{|\set{H}|T}$ possible values of the hidden neurons, which is practically infeasible.
As we will discuss, we propose a multi-sample online learning rule that estimates the expectation over the hidden neurons' outputs by using $K$ independent realizations of the outputs $\{{\bm h}_{\leq T}^k\}_{k=1}^K$ of the hidden neurons, which are obtained via $K$ independent forward passes through the SNN for a given input.
\subsection{Expectation-Maximization Algorithm} \label{sec:EM}
We start by recalling the operation of the standard EM algorithm \cite{dempster1977maximum} for general probabilistic models of the form $p_{\bm{\theta}}({\bm x},{\bm h})$, where ${\bm h}$ is a discrete latent random vector.
In a manner analogous to \eqref{eq:ml}-\eqref{eq:ml-marginal}, in general probabilistic models, one aims at minimizing the log-loss, i.e., $\min_{\bm{\theta}} \big(- \log p_{\bm{\theta}}({\bm x})\big)$, where the log-likelihood of the data is obtained via marginalization over the hidden vector ${\bm h}$ as $\log p_{\bm{\theta}}({\bm x}) = \log \sum_{{\bm h}} p_{\bm{\theta}}({\bm x},{\bm h})$. The EM algorithm introduces an auxiliary distribution $q({\bm h})$, known as {\em variational posterior}, and focuses on the minimization of the upper bound
\begin{align} \label{eq:general-elbo}
-\log p_{\bm{\theta}}({\bm x})
&= -\sum_{{\bm h}} p_{\bm{\theta}}({\bm h}|{\bm x}) \log \frac{p_{\bm{\theta}}({\bm x},{\bm h})}{p_{\bm{\theta}}({\bm h}|{\bm x})} \cr
&\leq - \sum_{{\bm h}} q({\bm h}) \log \frac{p_{\bm{\theta}}({\bm x},{\bm h})}{q({\bm h})} = -\mathbb{E}_{q({\bm h})}\Big[ \log p_{\bm{\theta}}({\bm x},{\bm h}) \Big] - \text{H}(q({\bm h})),
\end{align}
where $\text{H}(q({\bm h}))$ is the entropy of the distribution $q({\bm h})$. The upper bound \eqref{eq:general-elbo} is known as {\em variational free energy} (or negative evidence lower bound, ELBO), and is tight when the auxiliary distribution $q({\bm h})$ equals the exact posterior $p_{\bm{\theta}}({\bm h}|{\bm x})$.
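For intuition, the bound \eqref{eq:general-elbo} and its tightness at the exact posterior can be checked numerically on a toy discrete model; the joint table below is an arbitrary example of ours:

```python
import numpy as np

# arbitrary joint p(x, h) over binary x and binary h (rows: h, columns: x)
p_joint = np.array([[0.10, 0.30],
                    [0.25, 0.35]])
x = 0                              # observed value of x
p_x = p_joint[:, x].sum()          # marginal p(x)
posterior = p_joint[:, x] / p_x    # exact p(h | x)

def free_energy(q):
    """Variational free energy -E_q[log p(x,h)] - H(q), i.e., the
    upper bound on -log p(x) in Eq. (general-elbo)."""
    q = np.asarray(q, dtype=float)
    return -(q * np.log(p_joint[:, x] / q)).sum()

# the bound holds for any q, and is tight at the exact posterior
assert free_energy([0.5, 0.5]) >= -np.log(p_x) - 1e-12
assert np.isclose(free_energy(posterior), -np.log(p_x))
```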
Based on this observation, the EM algorithm iterates between optimizing over $q({\bm h})$ and over $\bm{\theta}$ in separate phases known as E and M steps.
To elaborate, for a fixed current iterate $\bm{\theta}_\text{old}$ of the model parameters, the E step minimizes the bound \eqref{eq:general-elbo} over $q({\bm h})$, obtaining $q({\bm h}) = p_{\bm{\theta}_\text{old}}({\bm h}|{\bm x})$. Then, it computes the expected log-loss
\begin{subequations} \label{eq:EM}
\begin{align} \label{eq:EM-estep}
\set{L}_{{\bm x}}(\bm{\theta},\bm{\theta}_\text{old}) := -\mathbb{E}_{p_{\bm{\theta}_\text{old}}({\bm h}|{\bm x})}\Big[ \log p_{\bm{\theta}}({\bm x},{\bm h})\Big].
\end{align}
The M step finds the next iterate for parameters $\bm{\theta}$ by minimizing the bound \eqref{eq:general-elbo} for fixed $q({\bm h}) = p_{\bm{\theta}_\text{old}}({\bm h}|{\bm x})$, which corresponds to the problem
\begin{align} \label{eq:EM-mstep}
\bm{\theta}_\text{new} = \arg \min_{\bm{\theta}}~ \set{L}_{{\bm x}}(\bm{\theta},\bm{\theta}_\text{old}).
\end{align}
\end{subequations}
The implementation of the EM algorithm \eqref{eq:EM} is computationally demanding for problems of practical size. In fact, the E step in \eqref{eq:EM-estep} entails the computation of the posterior $p_{\bm{\theta}_\text{old}}({\bm h}|{\bm x})$ of the hidden vector at the current iterate $\bm{\theta}_\text{old}$; and the M step requires solving the stochastic optimization problem \eqref{eq:EM-mstep}.
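For concreteness, the exact iterations \eqref{eq:EM} admit closed-form E and M steps for a toy Bernoulli mixture; the following sketch (our toy example, unrelated to the SNN model) verifies the well-known property that EM never decreases the log-likelihood:

```python
import numpy as np

rng = np.random.default_rng(0)
# toy data: mixture of two Bernoulli components (latent h = component index)
x = np.array(rng.binomial(1, 0.8, 300).tolist()
             + rng.binomial(1, 0.2, 700).tolist())

pi, mu = 0.5, np.array([0.6, 0.3])   # initial parameters theta

def loglik(pi, mu):
    px = (pi * mu[0]**x * (1 - mu[0])**(1 - x)
          + (1 - pi) * mu[1]**x * (1 - mu[1])**(1 - x))
    return np.log(px).sum()

lls = []
for _ in range(20):
    # E step: posterior responsibility p(h=0 | x) at current parameters
    p0 = pi * mu[0]**x * (1 - mu[0])**(1 - x)
    p1 = (1 - pi) * mu[1]**x * (1 - mu[1])**(1 - x)
    r = p0 / (p0 + p1)
    # M step: closed-form minimizer of the expected log-loss (eq:EM-mstep)
    pi = r.mean()
    mu = np.array([(r * x).sum() / r.sum(),
                   ((1 - r) * x).sum() / (1 - r).sum()])
    lls.append(loglik(pi, mu))

# EM monotonically improves the log-likelihood
assert all(b >= a - 1e-9 for a, b in zip(lls, lls[1:]))
```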
\subsection{Generalized Expectation-Maximization Algorithm} \label{sec:GEM}
We now review a Monte Carlo approximation of the EM algorithm, known as GEM \cite{neal1998view, tang2013sfnn}, which tackles the two computational issues identified above.
First, the expected log-loss $\set{L}_{{\bm x}}(\bm{\theta},\bm{\theta}_\text{old})$ in \eqref{eq:EM-estep} is estimated via importance sampling by using $K$ independent samples $\{{\bm h}^k\}_{k=1}^K$ of the hidden variables drawn from the current prior distribution, i.e., ${\bm h}^k \sim p_{\bm{\theta}_\text{old}}({\bm h})$ for $k=1,\ldots,K$. Specifically, GEM considers the unbiased estimate
\begin{align} \label{eq:gem-estep}
\set{L}_{{\bm x}}(\bm{\theta},\bm{\theta}_\text{old}) &= - \mathbb{E}_{p_{\bm{\theta}_\text{old}}({\bm h})} \bigg[ \frac{p_{\bm{\theta}_\text{old}}({\bm h}|{\bm x})}{p_{\bm{\theta}_\text{old}}({\bm h})} \log p_{\bm{\theta}}({\bm x},{\bm h}) \bigg] \cr
&\approx -\sum_{k=1}^K w^k \cdot \log p_{\bm{\theta}}({\bm x},{\bm h}^k) := \set{L}^K_{{\bm x}}(\bm{\theta},\bm{\theta}_\text{old}),
\end{align}
where the {\em importance weight} $w^k$ of the sample ${\bm h}^k$ is defined as
\begin{align} \label{eq:importance-weight-gem}
w^k:= \frac{1}{K} \cdot \frac{p_{\bm{\theta}_\text{old}}({\bm h}^k|{\bm x})}{p_{\bm{\theta}_\text{old}}({\bm h}^k)} = \frac{1}{K} \cdot \frac{p_{\bm{\theta}_\text{old}}({\bm x}|{\bm h}^k)}{p_{\bm{\theta}_\text{old}}({\bm x})}.
\end{align}
The marginal $p_{\bm{\theta}_\text{old}}({\bm x})$ in \eqref{eq:importance-weight-gem} also requires averaging over the hidden variables and is further approximated using the same samples $\{{\bm h}^k\}_{k=1}^K$ as
\begin{align}
p_{\bm{\theta}_\text{old}}({\bm x}) \approx \frac{1}{K} \sum_{k'=1}^K p_{\bm{\theta}_\text{old}}({\bm x}|{\bm h}^{k'}).
\end{align}
Accordingly, the weights in \eqref{eq:importance-weight-gem} are finally written as
\begin{align} \label{eq:importance-weight-gem-sm}
w^k = \frac{p_{\bm{\theta}_\text{old}}({\bm x}|{\bm h}^k)}{\sum_{k'=1}^K p_{\bm{\theta}_\text{old}}({\bm x}|{\bm h}^{k'})} = \bm{\sigma}_\text{SM}^k\big( {\bm v} \big).
\end{align}
In \eqref{eq:importance-weight-gem-sm}, we have defined the SoftMax function
\begin{align} \label{eq:softmax}
\bm{\sigma}_\text{SM}^k\big( {\bm v} \big) = \frac{\exp \big( v^k \big)}{\sum_{k'=1}^K \exp \big( v^{k'} \big) }, ~\text{for}~ k=1,\ldots,K
\end{align}
and denoted as $v^k:= \log p_{\bm{\theta}_\text{old}}({\bm x}|{\bm h}^k)$ the log-probability of producing the observation ${\bm x}$ given the $k$th realization ${\bm h}^k$ of the hidden vector. We have also introduced the vector ${\bm v} = (v^1,\ldots,v^K)$. Note that the importance weights sum to $1$ by definition.
The variance of the estimate given by \eqref{eq:gem-estep} and \eqref{eq:importance-weight-gem-sm} generally depends on how well the current latent prior $p_{\bm{\theta}_\text{old}}({\bm h})$ represents the posterior $p_{\bm{\theta}_\text{old}}({\bm h}|{\bm x})$ \cite{neal1998view, tang2013sfnn}.
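In an implementation, the weights \eqref{eq:importance-weight-gem-sm} are computed from the log-probabilities $v^k$ with the usual max-subtraction trick for numerical stability; a minimal sketch:

```python
import numpy as np

def importance_weights(log_probs):
    """SoftMax (eq:softmax) of v^k = log p(x | h^k), computed stably
    by subtracting the maximum before exponentiating."""
    v = np.asarray(log_probs, dtype=float)
    w = np.exp(v - v.max())
    return w / w.sum()

# with very negative log-probabilities a naive exp() would underflow to 0/0
v = np.array([-1000.0, -1001.0, -1002.0])
w = importance_weights(v)
assert np.isclose(w.sum(), 1.0)          # weights sum to 1 by definition
assert w[0] > w[1] > w[2]                # larger v^k, larger weight
```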
In the M step, GEM relies on numerical optimization of $\set{L}_{{\bm x}}^K(\bm{\theta},\bm{\theta}_\text{old})$ in \eqref{eq:gem-estep} with \eqref{eq:importance-weight-gem-sm} via gradient descent yielding the update
\begin{align} \label{eq:gem-mstep}
\bm{\theta} &\leftarrow \bm{\theta} - \eta \cdot \grad_{\bm{\theta}} \set{L}^K_{{\bm x}}(\bm{\theta},\bm{\theta}_\text{old}) \big|_{\bm{\theta} = \bm{\theta}_\text{old}} \cr
&= \bm{\theta} + \eta \cdot \sum_{k=1}^K \bm{\sigma}_\text{SM}^k({\bm v}) \grad_{\bm{\theta}} \log p_{\bm{\theta}}({\bm x},{\bm h}^k) \big|_{\bm{\theta} = \bm{\theta}_\text{old}}, \quad
\end{align}
where $\eta > 0$ is the learning rate. From \eqref{eq:gem-mstep}, the gradient is a weighted sum of the $K$ partial derivatives $\{\grad_{\bm{\theta}} \log p_{\bm{\theta}}({\bm x},{\bm h}^k)\}_{k=1}^K$, with the importance weight $\bm{\sigma}_\text{SM}^k({\bm v})$ measuring the relative effectiveness of the $k$th realization ${\bm h}^k$ of the hidden variables in generating the observation ${\bm x}$.
\subsection{Derivation of {\GEMSNN}} \label{sec:GEM-SNN}
In this section, we propose an online learning rule for SNNs that leverages $K$ independent samples from the hidden neurons by adapting the GEM algorithm introduced in Sec.~\ref{sec:GEM} to probabilistic SNNs.
To start, we note that a direct application of GEM to the learning problem \eqref{eq:ml} for SNNs is undesirable for two reasons. First, the use of the prior $p_{\bm{\theta}_\text{old}}({\bm h}_{\leq T})$ as proposal distribution in \eqref{eq:gem-estep} is bound to cause the variance of the estimate \eqref{eq:gem-estep} to be large. This is because the prior distribution $p_{\bm{\theta}_\text{old}}({\bm h}_{\leq T})$ is likely to be significantly different from the posterior $p_{\bm{\theta}_\text{old}}({\bm h}_{\leq T}|{\bm x}_{\leq T})$, due to the temporal dependence of the outputs of visible neurons and hidden neurons.
Second, a direct application of GEM would yield a batch update rule, rather than the desired online update. We now tackle these two problems in turn.
{\em 1) Sampling distribution.} To address the first issue, following \cite{jimenez2014stochastic, brea2013matching, jang19:spm}, we propose to use as proposal distribution the {\em causally conditioned distribution} $p_{\bm{\theta}_\text{old}}({\bm h}_{\leq T}||{\bm x}_{\leq T})$, where we have used the notation \cite{kramer1998directed}
\begin{align} \label{eq:causally-cond}
p_{\bm{\theta}^\text{H}}({\bm h}_{\leq T} || {\bm x}_{\leq T}) &= \prod_{t=1}^T p_{\bm{\theta}}({\bm h}_{t} | {\bm x}_{\leq t}, {\bm h}_{\leq t-1}) = \prod_{t=1}^T \prod_{i \in \set{H}} p_{\bm{\theta}_i}(h_{i,t} | u_{i,t}).
\end{align}
Furthermore, the distribution \eqref{eq:causally-cond} is easy to sample from: samples can be obtained directly by running the SNN in standard forward mode.
In contrast to the true posterior distribution $p_{\bm{\theta}}({\bm h}_{\leq T}|{\bm x}_{\leq T}) = \prod_{t=1}^T p_{\bm{\theta}}({\bm h}_t |{\bm x}_{\leq T},{\bm h}_{\leq t-1})$, the causally conditioned distribution \eqref{eq:causally-cond} ignores the stochastic dependence of the hidden neurons' signals ${\bm h}_t$ at time $t$ on the future values of the visible neurons' signals ${\bm x}_{> t}$. Nevertheless, it does capture the dependence of ${\bm h}_t$ on past signals ${\bm x}_{\leq t}$, and is therefore generally a much better approximation of the posterior than the prior.
With this choice as the proposal distribution, during training, we obtain the $K$ independent realizations of the outputs ${\bm h}_{\leq T}^{1:K} = \{{\bm h}_{\leq T}^k\}_{k=1}^K$ of the hidden neurons via $K$ independent forward passes through the SNN at the current parameters $\bm{\theta}_\text{old}$.
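The $K$ forward passes can be sketched as follows for a single hidden GLM neuron with exponentially filtered traces; the weights, bias, and filter constants below are illustrative assumptions of ours, not the paper's model:

```python
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def forward_passes(x, K, w_in=2.0, w_self=-1.0, bias=-0.5,
                   alpha=0.5, beta=0.5, rng=None):
    """K independent forward passes for one hidden GLM neuron driven
    by a binary input train x: at each t the neuron spikes with
    probability sigma(u_t), where u_t combines an exponentially
    filtered input trace (weight w_in), a filtered trace of its own
    past spikes (weight w_self), and a bias."""
    rng = np.random.default_rng() if rng is None else rng
    T = len(x)
    h = np.zeros((K, T), dtype=int)
    trace_in = np.zeros(K)      # synaptic filtered trace of the input
    trace_self = np.zeros(K)    # somatic filtered trace of own spikes
    for t in range(T):
        u = w_in * trace_in + w_self * trace_self + bias
        h[:, t] = rng.random(K) < sigmoid(u)   # sample hidden spikes
        trace_in = alpha * trace_in + x[t]     # traces use signals up to t
        trace_self = beta * trace_self + h[:, t]
    return h

x = np.array([1, 0, 1, 1, 0, 0, 1, 0])
h = forward_passes(x, K=5, rng=np.random.default_rng(0))
assert h.shape == (5, 8)
```

Each row of `h` is one independent realization ${\bm h}_{\leq T}^k$ drawn causally, i.e., using only signals up to the current time step.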
Denoting by $\bm{\theta}^\text{X} = \{\bm{\theta}_i\}_{i \in \set{X}}$ and $\bm{\theta}^\text{H} = \{\bm{\theta}_i\}_{i \in \set{H}}$ the collection of model parameters for visible $\set{X}$ and hidden $\set{H}$ neurons, respectively, we note that we have the decomposition
\begin{align}
p_{\bm{\theta}}({\bm x}_{\leq T},{\bm h}_{\leq T}) = p_{\bm{\theta}^\text{H}}({\bm h}_{\leq T}||{\bm x}_{\leq T})p_{\bm{\theta}^\text{X}}({\bm x}_{\leq T}||{\bm h}_{\leq T-1}),
\end{align}
with $p_{\bm{\theta}^\text{X}}({\bm x}_{\leq T}||{\bm h}_{\leq T-1}) = \prod_{t=1}^T \prod_{i \in \set{X}} p_{\bm{\theta}_i}(x_{i,t}|u_{i,t})$ \cite{kramer1998directed}.
By leveraging the $K$ samples ${\bm h}_{\leq T}^{1:K}$ via importance sampling as in \eqref{eq:gem-estep}-\eqref{eq:importance-weight-gem}, we accordingly compute the approximate importance weight \eqref{eq:importance-weight-gem-sm} as
\begin{align} \label{eq:posterior-approx}
\frac{1}{K} \cdot \frac{ p_{\bm{\theta}_\text{old}}({\bm h}_{\leq T}^k | {\bm x}_{\leq T}) }{ p_{\bm{\theta}_\text{old}^\text{H}}({\bm h}_{\leq T}^k || {\bm x}_{\leq T})} &= \frac{1}{K} \cdot \frac{p_{\bm{\theta}_\text{old}}({\bm x}_{\leq T}, {\bm h}_{\leq T}^k)}{p_{\bm{\theta}_\text{old}}({\bm x}_{\leq T}) \cdot p_{\bm{\theta}_\text{old}^\text{H}}({\bm h}_{\leq T}^k || {\bm x}_{\leq T})} \cr
&\approx \frac{ p_{\bm{\theta}_\text{old}^\text{X}} ({\bm x}_{\leq T} || {\bm h}_{\leq T-1}^k)}{ \sum_{k'=1}^K p_{\bm{\theta}_\text{old}^\text{X}}({\bm x}_{\leq T} || {\bm h}_{\leq T-1}^{k'}) }
= \bm{\sigma}_\text{SM}^k\Big( {\bm v}_{\bm{\theta}_\text{old}^\text{X},T} \Big). \qquad
\end{align}
In \eqref{eq:posterior-approx}, $v_{\bm{\theta}_\text{old}^\text{X},T}^k$ represents the log-probability of producing the visible neurons' target signals ${\bm x}_{\leq T}$ causally conditioned on ${\bm h}_{\leq T-1}^k$, i.e.,
\begin{align} \label{eq:gem-log-weight-ls}
v_{\bm{\theta}_\text{old}^\text{X},T}^k &:= \log p_{\bm{\theta}_\text{old}^\text{X}}({\bm x}_{\leq T}||{\bm h}_{\leq T-1}^k) = -\sum_{t=1}^T \sum_{i \in \set{X}} \ell\big( x_{i,t},\sigma(u_{i,t}^k)\big).
\end{align}
These probabilities are collected in vector ${\bm v}_{\bm{\theta}_\text{old}^\text{X},T} = \big(v_{\bm{\theta}_\text{old}^\text{X},T}^1, \ldots, v_{\bm{\theta}_\text{old}^\text{X},T}^K\big)$.
As a result, writing $s_{i,t}^k = x_{i,t}$ for visible neurons $i \in \set{X}$ and $s_{i,t}^k = h_{i,t}^k$ for hidden neurons $i \in \set{H}$, we have the MC estimate $\set{L}_{{\bm x}_{\leq T}}^K(\bm{\theta},\bm{\theta}_\text{old})$ of the upper bound
\begin{align} \label{eq:gem-snn-elbo-T}
&\set{L}_{{\bm x}_{\leq T}}^K(\bm{\theta},\bm{\theta}_\text{old}) = \sum_{k=1}^K \bigg( \bm{\sigma}_\text{SM}^k \Big( {\bm v}_{\bm{\theta}_\text{old}^\text{X},T} \Big) \cdot \sum_{t=1}^T \sum_{i \in \set{V}} \ell\big( s_{i,t}^k, \sigma(u_{i,t}^k)\big) \bigg).
\end{align}
{\em 2) Online learning rule.}
We now consider the second problem, namely that of obtaining an online rule from the minimization of \eqref{eq:gem-snn-elbo-T}. In online learning, the model parameters $\bm{\theta}$ are updated at each time $t$ based on the data ${\bm x}_{\leq t}$ observed so far.
To this end, we propose to minimize at each time $t$ the discounted version of the upper bound $\set{L}_{{\bm x}_{\leq T}}^K(\bm{\theta},\bm{\theta}_\text{old})$ in \eqref{eq:gem-snn-elbo-T} given as
\begin{align}
\label{eq:gem-snn-elbo}
\set{L}^{K,\gamma}_{{\bm x}_{\leq t}}(\bm{\theta}, \bm{\theta}_\text{old}) &:= \sum_{k=1}^K \bigg( \bm{\sigma}_\text{SM}^k\Big( {\bm v}_{\bm{\theta}_\text{old}^\text{X},t}^{\gamma} \Big) \cdot \sum_{t'=0}^{t-1} \gamma^{t'} \sum_{i \in \set{V}} \ell\big( s_{i,t-t'}^k, \sigma(u_{i,t-t'}^k)\big) \bigg),
\end{align}
where $\gamma \in (0,1)$ is a discount factor. Furthermore, we similarly compute a discounted version of the log-probability $v_{\bm{\theta}_\text{old}^\text{X},t}^k$ at time $t$ by using the temporal average operator as
\begin{align} \label{eq:gem-log-weight}
v_{\bm{\theta}_\text{old}^\text{X},t}^{k,\gamma} &:= -\sum_{t'=0}^{t-1} \gamma^{t'} \sum_{i \in \set{X}} \ell\big( x_{i,t-t'},\sigma(u_{i,t-t'}^k)\big) =
\Big\langle -\sum_{i \in \set{X}} \ell\big( x_{i,t},\sigma(u_{i,t}^k)\big) \Big\rangle_\gamma.
\end{align}
In \eqref{eq:gem-log-weight}, we denoted as $\langle f_t\rangle_\kappa$ the temporal average operator of a time sequence $\{f_t\}_{t \geq 1}$ with some constant $\kappa \in (0,1)$ as $\langle f_t\rangle_\kappa = \kappa \cdot \langle f_{t-1}\rangle_\kappa + f_t$ with $\langle f_0\rangle_\kappa = 0$.
Similarly, the discounted log-loss in \eqref{eq:gem-snn-elbo} can be computed online as $\big\langle \sum_{i \in \set{V}} \ell\big( s_{i,t}^k, \sigma(u_{i,t}^k)\big) \big\rangle_\gamma$, with $s_{i,t}^k = x_{i,t}$ for visible neuron $i \in \set{X}$ and $s_{i,t}^k = h_{i,t}^k$ for hidden neuron $i \in \set{H}$.
We note that $\set{L}_{{\bm x}_{\leq T}}^{K,\gamma}(\bm{\theta},\bm{\theta}_\text{old}) \rightarrow \set{L}_{{\bm x}_{\leq T}}^K(\bm{\theta},\bm{\theta}_\text{old})$ as $\gamma \rightarrow 1$.
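The temporal average operator admits a constant-memory online implementation, which is what makes the discounted quantities above computable as data is streamed; a minimal sketch:

```python
class TemporalAverage:
    """Online operator <f_t>_kappa = kappa * <f_{t-1}>_kappa + f_t,
    with <f_0>_kappa = 0, as used for the discounted quantities above."""
    def __init__(self, kappa):
        self.kappa = kappa
        self.value = 0.0

    def update(self, f_t):
        self.value = self.kappa * self.value + f_t
        return self.value

avg = TemporalAverage(kappa=0.5)
# feeding f = 1, 1, 1 gives the partial geometric sums 1, 1.5, 1.75
assert avg.update(1.0) == 1.0
assert avg.update(1.0) == 1.5
assert avg.update(1.0) == 1.75
```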
The minimization of the bound $\set{L}_{{\bm x}_{\leq t}}^{K,\gamma}(\bm{\theta},\bm{\theta}_\text{old})$ in \eqref{eq:gem-snn-elbo} via stochastic gradient descent (SGD) corresponds to the generalized M step.
The resulting online learning rule, which we refer to as {\GEMSNN}, updates the parameters $\bm{\theta}$ in the direction of the gradient $-\grad_{\bm{\theta}} \set{L}_{{\bm x}_{\leq t}}^{K,\gamma}(\bm{\theta},\bm{\theta}_\text{old})$, yielding the update rule at time $t$
\begin{align} \label{eq:gem-snn-update-simple}
\bm{\theta} \leftarrow \bm{\theta} - \eta \cdot \sum_{k=1}^K & \bm{\sigma}_\text{SM}^k\Big( {\bm v}_{\bm{\theta}_\text{old}^\text{X},t}^\gamma \Big) \cdot \Big\langle \sum_{i \in \set{V}} \grad_{\bm{\theta}} \ell\big( s_{i,t}^k, \sigma(u_{i,t}^k)\big) \Big|_{\bm{\theta} = \bm{\theta}_\text{old}} \Big\rangle_\gamma,
\end{align}
with a learning rate $\eta$.
\subsection{Summary of {\GEMSNN}}
\begin{figure}[t!]
\centering
\includegraphics[height=0.43\columnwidth]{fig/gem-snn-cp_v4}
\vspace{-0.3cm}
\caption{Illustration of the proposed multi-sample online learning scheme for SNNs. In {\GEMSNN}, learnable model parameters $\bm{\theta}$ are adapted based on data fed into the visible neurons $\set{X}$ with the help of the stochastic spiking behavior of hidden neurons $\set{H}$. A central processor (CP) collects information from all $K$ samples of the visible neurons $\set{X}$, entailing a communication load $\mathsf{C}_{\text{N} \rightarrow \text{CP}} = K|\set{X}|$; computes the importance weights of the samples; and sends them to all neurons, entailing a broadcast communication load $\mathsf{C}_{\text{CP} \rightarrow \text{N}} = K(|\set{X}|+|\set{H}|)$, in order to guide the update of model parameters $\bm{\theta}$. }
\label{fig:model-learning}
\vspace{-0.3cm}
\end{figure}
As illustrated in Fig.~\ref{fig:model-learning}, the rule {\GEMSNN} can be summarized as follows. At each time $t=1,2,\ldots,$ a central processor (CP) collects the binary cross-entropy values $\{\{\ell\big( x_{i,t}, \sigma(u_{i,t}^k)\big)\}_{k=1}^K\}_{i \in \set{X}}$ from all visible neurons $i \in \set{X}$ across $K$ parallel independent runs of the SNN.
It then computes the log-probabilities of the samples ${\bm v}_{\bm{\theta}^\text{X},t}^\gamma$ with a constant $\gamma \in (0,1)$ as in \eqref{eq:gem-log-weight}, as well as the corresponding importance weights of the samples using the SoftMax function $\bm{\sigma}_\text{SM}(\cdot)$ in \eqref{eq:softmax}. Then, the SoftMax values $\{\bm{\sigma}_\text{SM}^k\big( {\bm v}_{\bm{\theta}^\text{X},t}^\gamma \big)\}_{k=1}^K$ are fed back from the CP to all neurons. Finally, each neuron $i \in \set{V}$ updates the local model parameters $\bm{\theta}_i$ as $\bm{\theta}_i \leftarrow \bm{\theta}_i + \eta \cdot \Delta \bm{\theta}_i$, where each component of $\Delta \bm{\theta}_i$ is given as
\begin{align} \label{eq:gem-update}
\Delta w_{j,i} &= \sum_{k=1}^K \bm{\sigma}_\text{SM}^k\Big( {\bm v}_{\bm{\theta}^\text{X},t}^\gamma \Big) \cdot \Big\langle \big( s_{i,t}^k - \sigma(u_{i,t}^k) \big) \cdot \overrightarrow{s}_{j,t-1}^{k} \Big\rangle_\gamma,
\cr
\Delta w_i &= \sum_{k=1}^K \bm{\sigma}_\text{SM}^k\Big( {\bm v}_{\bm{\theta}^\text{X},t}^\gamma \Big) \cdot \Big\langle \big( s_{i,t}^k - \sigma(u_{i,t}^k) \big) \cdot \overleftarrow{s}_{i,t-1}^k \Big\rangle_\gamma,
\cr
\Delta \vartheta_i &= \sum_{k=1}^K \bm{\sigma}_\text{SM}^k\Big( {\bm v}_{\bm{\theta}^\text{X},t}^\gamma \Big) \cdot \Big\langle s_{i,t}^k - \sigma(u_{i,t}^k) \Big\rangle_\gamma,
\end{align}
where we have used standard expressions for the derivatives of the cross-entropy loss \cite{jang19:spm,jang19:def}, and we set $s_{i,t}^k = x_{i,t}$ for visible neuron $i \in \set{X}$ and $s_{i,t}^k = h_{i,t}^k$ for hidden neuron $i \in \set{H}$. As we will explain next, the update \eqref{eq:gem-update} requires only local information at each neuron, except for the $K$ SoftMax values broadcast by the CP. The overall algorithm is detailed in Algorithm~\ref{alg:gem-snn}.
\DecMargin{2em}
\begin{algorithm}[t]
\caption{{\GEMSNN}}
\label{alg:gem-snn}
\begin{algorithmic}[1]
\STATE {\bfseries Input:}
Training data ${\bm x}_{\leq t}$, number of samples for training $K$, discount factor $\gamma$, and learning rate $\eta$
\STATE{\bfseries Output:} learned model parameters $\bm{\theta}$
\vspace{-0.2cm}
\hrulefill
\STATE {\bf initialize} parameters $\bm{\theta}$
\FOR{each time $t=1,2,\ldots$}
\smallskip
\STATE \texttt{$K$ independent forward passes:}
\smallskip
\FOR{$k=1,\ldots,K$}
\STATE each neuron $i \in \set{V}$ computes the filtered traces $\{\overrightarrow{s}_{i,t-1}^k, \overleftarrow{s}_{i,t-1}^k\}$ and computes the membrane potential $u_{i,t}^k$ from \eqref{eq:membrane-potential}
\smallskip
\STATE each hidden neuron $i \in \set{H}$ emits a spike $h_{i,t}^k = 1$ with probability $\sigma(u_{i,t}^k)$ as in \eqref{eq:spike-prob-ind}
\ENDFOR
\smallskip
\STATE a central processor (CP) collects the binary cross-entropy values $\big\{\ell\big(x_{i,t},\sigma(u_{i,t}^k)\big)\big\}_{k=1}^K$ in \eqref{eq:log-prob-ind} from all visible neurons $i \in \set{X}$, and computes a discounted version of the log-probability ${\bm v}_{\bm{\theta}^\text{X},t}^\gamma$ from \eqref{eq:gem-log-weight}
\smallskip
\STATE the CP computes the approximate importance weights $\big\{\bm{\sigma}_\text{SM}^k\big( {\bm v}_{\bm{\theta}^\text{X},t}^\gamma \big)\big\}_{k=1}^K$ using the SoftMax function, and feeds them back to all neurons $\set{V}$
\smallskip
\STATE \texttt{parameter update:}
\STATE each neuron $i \in \set{V}$ updates the local model parameters $\bm{\theta}_i$ as
\begin{align*}
\bm{\theta}_i \leftarrow \bm{\theta}_i + \eta \cdot \sum_{k=1}^K \bm{\sigma}_\text{SM}^k\Big( {\bm v}_{\bm{\theta}^\text{X},t}^\gamma\Big) \cdot \grad_{\bm{\theta}_i} \ell\big( s_{i,t}^k, \sigma(u_{i,t}^k)\big),
\end{align*}
with $s_{i,t}^k = x_{i,t}$ for $i \in \set{X}$ and $s_{i,t}^k = h_{i,t}^k$ for $i \in \set{H}$
\ENDFOR
\end{algorithmic}
\end{algorithm}
\IncMargin{2em}
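As a complement to Algorithm~\ref{alg:gem-snn}, the per-time-step operations can be sketched end-to-end for a toy chain of one hidden and one visible neuron; all numerical constants below (weights, bias, filter and discount factors, task) are illustrative assumptions of ours:

```python
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def bce(s, p):
    """Binary cross-entropy loss ell(s, p)."""
    return -(s * np.log(p) + (1 - s) * np.log(1 - p))

rng = np.random.default_rng(1)
K, T, gamma, eta = 4, 50, 0.9, 0.05
w_ih, w_hv = 0.5, 0.5            # learnable feedforward weights (toy chain)
inp = rng.integers(0, 2, T)      # exogenous input spikes
x = np.roll(inp, 1)              # desired visible output: input delayed by one

tr_in = np.zeros(K)              # filtered trace of the input, per sample k
tr_h = np.zeros(K)               # filtered trace of hidden spikes
v = np.zeros(K)                  # discounted log-prob v^{k,gamma} (eq:gem-log-weight)
g_ih = np.zeros(K)               # discounted eligibility traces <grad ell>_gamma
g_hv = np.zeros(K)
for t in range(T):
    # K independent forward passes
    u_h = w_ih * tr_in - 0.5
    h = (rng.random(K) < sigmoid(u_h)).astype(float)
    u_v = w_hv * tr_h - 0.5
    p_v = sigmoid(u_v)
    # CP: discounted log-probability of the desired visible output
    v = gamma * v - bce(x[t], p_v)
    # per-sample eligibility traces in three-factor form (eq:gem-update)
    g_hv = gamma * g_hv + (x[t] - p_v) * tr_h
    g_ih = gamma * g_ih + (h - sigmoid(u_h)) * tr_in
    # CP broadcasts SoftMax importance weights; neurons apply weighted updates
    w_sm = np.exp(v - v.max()); w_sm /= w_sm.sum()
    w_hv += eta * np.sum(w_sm * g_hv)
    w_ih += eta * np.sum(w_sm * g_ih)
    # trace updates for the next time step
    tr_in = 0.5 * tr_in + inp[t]
    tr_h = 0.5 * tr_h + h

assert np.isfinite([w_ih, w_hv]).all()
```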
\subsection{Interpreting {\GEMSNN}}
The update rule {\GEMSNN} \eqref{eq:gem-update} follows a standard three-factor format implemented by most neuromorphic hardware \cite{davies2018loihi, fremaux2016neuromodulated}: A synaptic weight $w_{j,i}$ from pre-synaptic neuron $j$ to a post-synaptic neuron $i$ is updated as
\begin{align} \label{eq:three-factor}
w_{j,i} \leftarrow w_{j,i} + \eta \cdot \sum_{k=1}^K \textsf{learning signal}^k \cdot \big\langle \textsf{pre}_j^k \cdot \textsf{post}_i^k \big\rangle,
\end{align}
where $\eta$ is the learning rate. The three-factor rule \eqref{eq:three-factor} sums the contributions from the $K$ samples, with each contribution depending on three factors: $\textsf{pre}_j^k$ and $\textsf{post}_i^k$ represent the activity of the pre-synaptic neuron $j$ and of the post-synaptic neuron $i$, respectively; and $\textsf{learning signal}^k$ determines the sign and magnitude of the contribution of the $k$th sample to the learning update. The update rule \eqref{eq:three-factor} is hence local, with the exception of the global learning signals.
In particular, in \eqref{eq:gem-update}, for each neuron $i \in \set{V}$, the gradients $\grad_{\bm{\theta}_i} \set{L}_{{\bm x}_{\leq t}}^{K,\gamma}(\cdot)$ contain the synaptic filtered trace $\overrightarrow{s}_{j,t}^k$ of {\em pre-synaptic} neuron $j$; the somatic filtered trace $\overleftarrow{s}_{i,t}^k$ of {\em post-synaptic} neuron $i$ and the {\em post-synaptic} error $s_{i,t}^k - \sigma(u_{i,t}^k)$; and the {\em global learning signals} $\big\{\bm{\sigma}_\text{SM}^k\big({\bm v}_{\bm{\theta}^\text{X},t}^\gamma \big)\big\}_{k=1}^K$.
Accordingly, the importance weights computed using the SoftMax function can be interpreted as common learning signals for all neurons, with the contribution of each sample $k$ being weighted by $\bm{\sigma}_\text{SM}^k\big({\bm v}_{\bm{\theta}^\text{X},t}^\gamma\big)$. The importance weight $\bm{\sigma}_\text{SM}^k\big({\bm v}_{\bm{\theta}^\text{X},t}^\gamma\big)$ measures the relative effectiveness of the $k$th random realization ${\bm h}_{\leq t}^k$ of the hidden neurons in reproducing the desired behavior ${\bm x}_{\leq t}$ of the visible neurons.
\subsection{Communication Load}
As discussed, {\GEMSNN} requires bi-directional communication between all neurons and a CP. As seen in Fig.~\ref{fig:model-learning}, at each time $t$, unicast communication from neurons to CP is required in order to compute the importance weights by collecting information $\{\{\ell\big( x_{i,t}, \sigma(u_{i,t}^k)\big)\}_{k=1}^K\}_{i \in \set{X}}$ from all visible neurons. The resulting unicast communication load is $\mathsf{C}_{\text{N} \rightarrow \text{CP}} = K|\set{X}|$ real numbers.
The importance weights $\big\{\bm{\sigma}_\text{SM}^k\big( {\bm v}_{\bm{\theta}^\text{X},t}^\gamma\big)\big\}_{k=1}^K$ are then sent back to all neurons, resulting in a broadcast communication load from the CP to the neurons equal to $\mathsf{C}_{\text{CP} \rightarrow \text{N}} = K (|\set{X}|+|\set{H}|)$ real numbers.
As {\GEMSNN} requires the computation of $K$ importance weights at the CP, both communication loads increase linearly with the number $K$ of samples used for training.
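The two loads can be tabulated directly; a minimal sketch illustrating the linear scaling in $K$:

```python
def comm_loads(K, n_visible, n_hidden):
    """Per-time-step communication of GEM-SNN, in real numbers:
    unicast neurons -> CP and broadcast CP -> neurons."""
    c_up = K * n_visible                  # cross-entropy values from visible neurons
    c_down = K * (n_visible + n_hidden)   # K SoftMax weights to all neurons
    return c_up, c_down

assert comm_loads(1, 10, 20) == (10, 30)
assert comm_loads(4, 10, 20) == (40, 120)   # both loads scale linearly with K
```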
\section{Importance-Weighted Multi-Sample Online Learning for SNNs}
\begin{comment}
In order to obtain an online rule, we aim at minimizing at each time $t$ the discounted version of the upper bound $\set{L}_{{\bm x}_{\leq T}}^K(\bm{\theta})$ in \eqref{eq:iw-elbo-T} given as
\begin{align} \label{eq:iw-elbo}
\set{L}_{{\bm x}_{\leq t}}^{K,\gamma}(\bm{\theta}) := - \mathbb{E}_{p_{\bm{\theta}^\text{H}}({\bm h}_{\leq t}^{1:K}||{\bm x}_{\leq t})}\Big[ \log R_{\bm{\theta}^\text{X},t}^{K,\gamma}\Big] =
- \mathbb{E}_{p_{\bm{\theta}^\text{H}}({\bm h}_{\leq t}^{1:K}||{\bm x}_{\leq t})}\Big[ \log \frac{1}{K} \sum_{k=1}^K \exp\Big( v_{\bm{\theta}^\text{X},t}^{k,\gamma} \Big) \Big],
\end{align}
where a discount factor $\gamma \in (0,1)$ is introduced to address the batch processing required in $R_{\bm{\theta}^\text{X},T}^K$. The discounted version of the importance-weighted estimator $R_{\bm{\theta}^\text{X},t}^{K,\gamma}$ can be obtained by using the discounted values $v_{\bm{\theta}^\text{X},t}^{k,\gamma}$ in \eqref{eq:gem-log-weight}.
\end{comment}
The {\MBVLSNN} rule introduced above leverages the $K$ compartments to improve the MC estimate of the gradient $\grad_{\bm{\theta}} L_{{\bm x}_{\leq T}}(\bm{\theta})$ of the lower bound \eqref{eq:single-elbo-batch} of the log-likelihood $\log p_{\bm{\theta}}({\bm x}_{\leq T})$. In this section, we consider an alternative approach that uses multiple compartments to obtain an increasingly accurate bound on the log-likelihood as $K$ grows. The approach follows the principles of the importance weighting method introduced in \cite{mnih2016variational, burda2015importance, domke2018iwvi} for conventional probabilistic models.
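To give a concrete feel for the importance weighting principle referenced here, the following sketch (a toy Gaussian latent-variable model, not the SNN model itself; all names are illustrative) compares the single-sample bound with the $K$-sample importance-weighted bound, which is tighter on average and approaches the true log-likelihood as $K$ grows:

```python
import math, random

random.seed(0)

def log_joint(x, h):
    # toy model: h ~ N(0,1), x | h ~ N(h,1)
    return -0.5 * (h * h + (x - h) ** 2) - math.log(2 * math.pi)

def log_proposal(h):
    # proposal equals the prior: h ~ N(0,1)
    return -0.5 * h * h - 0.5 * math.log(2 * math.pi)

def iw_bound(x, K, n_runs=3000):
    """MC average of log((1/K) * sum_k w_k), with w_k = p(x,h_k)/q(h_k)."""
    total = 0.0
    for _ in range(n_runs):
        logw = []
        for _ in range(K):
            h = random.gauss(0.0, 1.0)
            logw.append(log_joint(x, h) - log_proposal(h))
        m = max(logw)  # stabilized log-mean-exp over the K weights
        total += m + math.log(sum(math.exp(v - m) for v in logw) / K)
    return total / n_runs

x = 1.0
b1, b10 = iw_bound(x, 1), iw_bound(x, 10)
# exact log p(x): marginally x ~ N(0, 2)
log_px = -0.25 * x * x - 0.5 * math.log(4 * math.pi)
```

With $K=1$ the bound reduces to the standard single-sample ELBO; with $K=10$ the estimate moves visibly closer to the exact log-likelihood.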
\begin{comment}
\subsection{{\IWVLSNN}: Importance-Weighted VLSNN}
In this section, we develop an online local learning rule that maximizes the importance-weighted lower bound $L_{{\bm x}_{\leq t}}^K(\bm{\theta})$ in \eqref{eq:multi-elbo}. We refer to the rule as Importance-Weighted Variational online Learning for SNNs ({\IWVLSNN}). {\IWVLSNN} updates the model parameters $\bm{\theta}$ via SGD in the direction of the gradient $\grad_{\bm{\theta}} L_{{\bm x}_{\leq t}}^K(\bm{\theta})$, which is obtained using REINFORCE as
\begin{align} \label{eq:multi-elbo-grad}
\grad_{\bm{\theta}}& L_{{\bm x}_{\leq t}}^K(\bm{\theta}) = \mathbb{E}_{p_{\bm{\theta}^\text{H}}({\bm h}_{\leq t}^{1:K} || {\bm x}_{\leq t-1})} \bigg[ \cr
&\sum_{k=1}^K \tilde{v}_{\bm{\theta}^\text{X},t}^k \cdot \sum_{t'=0}^{t-1} \gamma^{t'} \sum_{i \in \set{X}} \grad_{\bm{\theta}} \bar{H}\big( x_{i,t-t'}, \sigma(u_{i,t-t'}^k) \big) \cr
&+ \ell_{\bm{\theta}^\text{X},t} \cdot \sum_{k=1}^K \sum_{t'=1}^t \sum_{i \in \set{H}} \grad_{\bm{\theta}} \bar{H} \big( h_{i,t'}^k, \sigma(u_{i,t'}^k) \big) \bigg].
\end{align}
In \eqref{eq:multi-elbo-grad}, we have defined ${\bm v}_{\bm{\theta}^\text{X},t} = (v_{\bm{\theta}^\text{X},t}^1, \ldots, v_{\bm{\theta}^\text{X},t}^K)$ as the vector of unnormalized log-weights of the samples at time $t$ in \eqref{eq:multi-elbo} using temporal average operator as
\begin{subequations} \label{eq:iw-quantities}
\begin{align} \label{eq:unnorm-weights-online}
v_{\bm{\theta}^\text{X},t}^k = \big\langle w_{\bm{\theta}^\text{X},t}^k \big\rangle_\gamma = \big\langle \sum_{i \in \set{X}} \bar{H}\big( x_{i,t}, \sigma(u_{i,t}^k) \big) \big\rangle_\gamma,
\end{align}
and the normalized weights $\tilde{{\bm v}}_{\bm{\theta}^\text{X},t} = (\tilde{v}_{\bm{\theta}^\text{X},t}^1, \ldots, \tilde{v}_{\bm{\theta}^\text{X},t}^K)$ are defined using the SoftMax function over $K$ samples as
\begin{align} \label{eq:norm-weights-online}
\tilde{{\bm v}}_{\bm{\theta}^\text{X},t} = \text{SoftMax} \big( {\bm v}_{\bm{\theta}^\text{X},t} \big),
\end{align}
where
\begin{align*}
\tilde{v}_{\bm{\theta}^\text{X},t}^k = \text{SoftMax}^k \big( {\bm v}_{\bm{\theta}^\text{X},t} \big) = \frac{ \exp\big( v_{\bm{\theta}^\text{X},t}^k \big)}{\sum_{k'=1}^K \exp\big( v_{\bm{\theta}^\text{X},t}^{k'} \big) }.
\end{align*}
The learning signal $\ell_{\bm{\theta}^\text{X},t}$ at time $t$ in \eqref{eq:multi-elbo} can be expressed using the LogSumExp function as
\begin{align} \label{eq:iw-ls-online}
\ell_{\bm{\theta}^\text{X},t} = \text{LogSumExp}\big( {\bm v}_{\bm{\theta}^\text{X},t} \big) - \log K,
\end{align}
where
\begin{align*}
\text{LogSumExp}\big( {\bm v}_{\bm{\theta}^\text{X},t} \big) = \log \sum_{k=1}^K \exp\big( v_{\bm{\theta}^\text{X},t}^k \big).
\end{align*}
\end{subequations}
The resulting learning algorithm {\IWVLSNN} based on the MC estimate of the gradient \eqref{eq:multi-elbo-grad} operates as follows.
In a $K$-compartment SNN, at each time $t=1,2,\ldots,$ all compartments within visible neurons are clamped to the data ${\bm x}_{\leq t}$, while each hidden neuron $i \in \set{H}$ emits a spike $h_{i,t}^k = 1$ with probability $\sigma(u_{i,t}^k)$ at each compartment $k=1,\ldots,K$. Then, the model parameters are updated as follows. As illustrated in Fig.~\ref{fig:model-learning}, the CP collects the binary negative cross-entropy values $\{ \bar{H}\big( x_{i,t}, \sigma(u_{i,t}^k) \big) \}_{k=1}^K$ of all compartments from all visible neurons $i \in \set{X}$. These are used by the CP to compute the importance weights ${\bm v}_{\bm{\theta}^\text{X},t}$ from \eqref{eq:unnorm-weights-online}; the normalized importance weights $\tilde{{\bm v}}_{\bm{\theta}^\text{X},t}$ in \eqref{eq:norm-weights-online}; and the learning signal $\ell_{\bm{\theta}^\text{X},t}$ from \eqref{eq:iw-ls-online}. Then, the normalized weights $\{\tilde{v}_{\bm{\theta}^\text{X},t}^k\}_{k=1}^K$ are fed back from the CP to all visible neurons $\set{X}$, while a common learning signal $\ell_{\bm{\theta}^\text{X},t}$ is fed back to all hidden neurons $\set{H}$. Finally, for each neuron $i$, {\IWVLSNN} updates the local model parameters $\bm{\theta}_i$ as
\begin{subequations} \label{eq:iw-update}
\begin{align}
& \Delta \vartheta_i =
\begin{cases}
\sum_{k=1}^K \tilde{v}_{\bm{\theta}^\text{X},t}^k \cdot \big\langle x_{i,t} - \sigma(u_{i,t}^k) \big\rangle_\gamma, ~\text{if}~ i \in \set{X} \\
\big( \ell_{\bm{\theta}^\text{X},t} - b_{t}^{\vartheta_i} \big) \cdot \sum_{k=1}^K \big\langle h_{i,t}^k - \sigma(u_{i,t}^k) \big\rangle_\kappa, ~\text{if}~ i \in \set{H}
\end{cases} \label{eq:iw-update-bias} \\
& \Delta w_{j,i} =
\begin{cases}
\sum_{k=1}^K \tilde{v}_{\bm{\theta}^\text{X},t}^k \cdot \big\langle \big( x_{i,t} - \sigma(u_{i,t}^k) \big) \cdot \overrightarrow{s}_{j,t-1}^{k} \big\rangle_\gamma, \\%~~\text{if}~~ i \in \set{X} \\
\big( \ell_{\bm{\theta}^\text{X},t} - b_{t}^{w_{j,i}} \big) \cdot \sum_{k=1}^K \big\langle \big( h_{i,t}^k - \sigma(u_{i,t}^k) \big) \cdot \overrightarrow{s}_{j,t-1}^{k} \big\rangle_\kappa,
\end{cases} \label{eq:iw-update-synaptic} \\
& \Delta w_i =
\begin{cases}
\sum_{k=1}^K \tilde{v}_{\bm{\theta}^\text{X},t}^k \cdot \big\langle \big( x_{i,t} - \sigma(u_{i,t}^k) \big) \cdot \overleftarrow{x}_{i,t-1} \big\rangle_\gamma, \\%~~\text{if}~~ i \in \set{X} \\
\big( \ell_{\bm{\theta}^\text{X},t} - b_{t}^{w_i} \big) \cdot \sum_{k=1}^K \big\langle \big( h_{i,t}^k - \sigma(u_{i,t}^k) \big) \cdot \overleftarrow{h}_{i,t-1}^k \big\rangle_\kappa,
\end{cases} \label{eq:iw-update-feedback}
\end{align}
\end{subequations}
with constants $\gamma, \kappa \in (0,1)$. For each hidden neuron $i \in \set{H}$, the baseline ${\bm b}_{i,t}$ can be introduced to minimize the variance of the gradient estimator \eqref{eq:multi-elbo-grad} as \cite{peters2008reinforcement}
\begin{align} \label{eq:iw-baseline}
{\bm b}_{i,t} = \frac{ \big\langle \ell_{\bm{\theta}^\text{X},t} \cdot \sum_{k=1}^K \big( {\bm e}_{i,t}^k \big)^2 \big\rangle_{\kappa_b} }{\big\langle \sum_{k=1}^K \big( {\bm e}_{i,t}^k \big)^2 \big\rangle_{\kappa_b}},
\end{align}
for some constant $\kappa_b \in (0,1)$.
\end{comment}
\iffalse
\begin{align*}
\Delta \bm{\theta}_i = \eta \cdot
\begin{cases}
\sum_{k=1}^K \tilde{v}_{\bm{\theta}^\text{X},t}^k \cdot \big\langle \grad_{\bm{\theta}_i} \bar{H}\big( x_{i,t}, \sigma(u_{i,t}^k) \big) \big\rangle, ~~\text{if}~~ i \in \set{X} \\
\big( \ell_{\bm{\theta}^\text{X},t} - {\bm b}_{i,t} \big) \cdot \sum_{k=1}^K \big\langle \grad_{\bm{\theta}_i} \bar{H} \big( h_{i,t}^k, \sigma(u_{i,t}^k) \big) \big\rangle, ~~\text{if}~~ i \in \set{H}
\end{cases}
\end{align*}
\fi
\subsection{Interpreting {\IWVLSNN}}
The update rule \eqref{eq:iw-update} follows the three-factor format in \eqref{eq:three-factor}. The dependence on the post-synaptic error, post-synaptic feedback trace, and pre-synaptic synaptic trace is applied for each compartment as for {\VLSNN} and {\MBVLSNN}, while the learning signal takes a different form for visible and hidden neurons. For visible neurons, the normalized importance weights $\{ \tilde{v}_{\bm{\theta}^\text{X},t}^k \}_{k=1}^K$ are given as learning signals, with the contribution of each compartment being weighted by $\tilde{v}_{\bm{\theta}^\text{X},t}^k$. From \eqref{eq:norm-weights-online}, the normalized weight $\tilde{v}_{\bm{\theta}^\text{X},t}^k$ measures the relative effectiveness of the random realizations of the hidden neurons within the $k$th compartment in reproducing the desired behavior of the visible neurons. In contrast, for the hidden neurons, the learning signal $\ell_{\bm{\theta}^\text{X},t}$ in \eqref{eq:iw-ls-online} is given as a global feedback, indicating how effective their current overall behavior across all compartments is in ensuring the maximization of the likelihood of the observation ${\bm x}_{\leq t}$. Note that this signal is shared across all compartments.
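The two feedback quantities described above can be sketched numerically (variable names are illustrative stand-ins for the paper's notation): the normalized weights are a SoftMax over the $K$ per-compartment log-weights, while the common learning signal is their LogSumExp minus $\log K$, i.e., a log-mean-exp:

```python
import math

def softmax(v):
    # numerically stable SoftMax over a list of log-weights
    m = max(v)
    e = [math.exp(x - m) for x in v]
    s = sum(e)
    return [x / s for x in e]

def log_sum_exp(v):
    # numerically stable LogSumExp
    m = max(v)
    return m + math.log(sum(math.exp(x - m) for x in v))

# hypothetical unnormalized log-weights v^k for K = 4 compartments
v = [-2.0, -0.5, -3.0, -1.0]

tilde_v = softmax(v)                     # per-compartment feedback to visible neurons
ell = log_sum_exp(v) - math.log(len(v))  # common learning signal to hidden neurons
```

The normalized weights sum to one and concentrate on the compartment whose hidden spikes best explain the observed data, while the common signal lies between the smallest and largest log-weight.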
\vspace{-0.1cm}
\subsection{Communication Load}
From the description of {\IWVLSNN} given above, as illustrated in Fig.~\ref{fig:model-learning}, the unicast communication load of {\IWVLSNN} from neuron to CP is $\mathsf{C}_{\text{N} \rightarrow \text{CP}} = K \mathsf{C}_{\text{N} \rightarrow \text{CP}}^{(K=1)} = K|\set{X}|$ real numbers; while the broadcast communication load from CP to neurons is $\mathsf{C}_{\text{CP} \rightarrow \text{N}} = K |\set{X}| + |\set{H}|$. As compared to {\MBVLSNN} whose broadcast load is $K |\set{H}|$, the broadcast communication load of {\IWVLSNN} can be smaller if the number of hidden neurons is large.
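The load comparison above can be checked with a quick arithmetic sketch (illustrative network sizes):

```python
def loads_iwvlsnn(K, n_x, n_h):
    """Unicast (neurons -> CP) and broadcast (CP -> neurons) loads
    for IW-VLSNN: K|X| up, K|X| + |H| down (in real numbers)."""
    return K * n_x, K * n_x + n_h

def broadcast_mbvlsnn(K, n_h):
    # MB-VLSNN broadcasts K per-compartment learning signals to hidden neurons
    return K * n_h

K, n_x, n_h = 5, 10, 100
up, down_iw = loads_iwvlsnn(K, n_x, n_h)
down_mb = broadcast_mbvlsnn(K, n_h)
```

With $K=5$, $|\set{X}|=10$, and $|\set{H}|=100$, the IW-VLSNN broadcast load is $150$ versus $500$ for MB-VLSNN, confirming the advantage when hidden neurons dominate.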
\iffalse
As investigated in \cite{domke2018iwvi}, maximizing the bound $L_{{\bm x}_{\leq T}}^K(\bm{\theta})$ minimizes the KL divergence between the self-normalized importance sampling (NIS) distribution over a batch of $K$ samples, i.e., obtained from the SoftMax function, and the true posterior. After learning, for inference at each time $t$, the expectation over the true posterior can be approximated as
\begin{align} \label{eq:multi-vi}
\mathbb{E}_{p_{\bm{\theta}}({\bm h}_{\leq t}| {\bm x}_{\leq t})} \big[ f({\bm h}_{\leq t})\big] &\approx \mathbb{E}_{p_{\bm{\theta}^\text{H}}({\bm h}_{\leq t}^{1:K} || {\bm x}_{\leq t-1})} \bigg[ \sum_{k=1}^K \text{SoftMax}^k\Big( \sum_{t'=1}^t {\bm w}_{\bm{\theta}^\text{X},t'} \Big) \cdot f({\bm h}_{\leq t}^k) \bigg]
\end{align}
where the normalized log-weights at time $t$ are computed using the SoftMax function, without temporal average operator.
\fi
\begin{comment}
\subsection{{\IWVLSNN} with Per-Compartment Learning Signal}
As per \eqref{eq:iw-update}, {\IWVLSNN} applies the same baseline \eqref{eq:iw-baseline} to all compartments of the hidden neurons. Hence, unlike the differentiated per-compartment feedback signals sent to the visible neurons, {\IWVLSNN} does not implement relative credit assignment across the $K$ compartments for hidden neurons. The common learning signal applied to each compartment may not be sufficiently specific and, as a result, it may suffer from high variance. In order to address this issue, we introduce per-compartment learning signals denoted by $\ell_{\bm{\theta}^\text{X},t}^k$ for all compartments $k=1,\ldots,K$ of the hidden neurons. Following \cite{mnih2016variational}, a per-compartment learning signal can be defined by subtracting a term dependent on the contribution of the other compartments to the learning signal in \eqref{eq:multi-elbo} as
\begin{align*}
\ell_{\bm{\theta}^\text{X},t}^k = \ell_{\bm{\theta}^\text{X},t} - \log \frac{1}{K} \Big(& \sum_{k' \neq k} \exp\Big( v_{\bm{\theta}^\text{X},t}^{k'} \Big) + \exp\Big( \frac{\sum_{k' \neq k} v_{\bm{\theta}^\text{X},t}^{k'}}{K-1} \Big) \Big).
\end{align*}
With this approach, the CP needs to compute the per-compartment learning signals $\{\ell_{\bm{\theta}^\text{X},t}^k\}_{k=1}^K$ with information collected from the visible neurons $\set{X}$, which are then fed back to all hidden neurons $\set{H}$. Instead of the update rule \eqref{eq:iw-update}, this alternative rule, referred to as {\IWbVLSNN}, updates the local model parameters $\bm{\theta}_i$ of each neuron $i$ as
\begin{align} \label{iw-update-b}
\Delta \bm{\theta}_i =
\begin{cases}
\sum_{k=1}^K \tilde{v}_{\bm{\theta}^\text{X},t}^k \cdot \big\langle \grad_{\bm{\theta}_i} \bar{H}\big( x_{i,t}, \sigma(u_{i,t}^k)\big) \big\rangle_\gamma, ~\text{if}~ i \in \set{X} \\
\sum_{k=1}^K \ell_{\bm{\theta}^\text{X},t}^k \cdot \big\langle \grad_{\bm{\theta}_i} \bar{H}\big( h_{i,t}^k, \sigma(u_{i,t}^k) \big) \big\rangle_\kappa, ~\text{if}~ i \in \set{H}.
\end{cases}
\end{align}
As a result, as illustrated in Fig.~\ref{fig:model-learning}, the broadcast communication load from CP to neurons includes the normalized weights of the compartments $\{ \tilde{v}_{\bm{\theta}^\text{X},t}^k \}_{k=1}^K$ to visible neurons and $\{ \ell_{\bm{\theta}^\text{X},t}^k \}_{k=1}^K$ per-compartment learning signals to hidden neurons, entailing the load $\mathsf{C}_{\text{CP} \rightarrow \text{N}} = K(|\set{X}|+|\set{H}|)$.
\end{comment}
\section{Introduction}
\label{sec:intro}
\subsection{Background}
\label{sec:background}
Much of the recent progress towards solving pattern recognition tasks in complex domains, such as natural images, audio, and text, has relied on parametric models based on Artificial Neural Networks (ANNs).
It has been widely reported that ANNs often yield learning and inference algorithms with prohibitive energy and time requirements \cite{hao2019training, strubell2019energy}.
This has motivated a renewed interest in exploring novel computational paradigms, such as Spiking Neural Networks (SNNs), that can capture some of the efficiency of biological brains for information encoding and processing.
Neuromorphic computing platforms, including IBM's TrueNorth \cite{merolla2014million}, Intel's Loihi \cite{davies2018loihi}, and BrainChip's Akida \cite{brainchipakida}, implement SNNs instead of ANNs.
Experimental evidence has confirmed the potential of SNNs implemented on neuromorphic chips in yielding energy consumption savings in many tasks of interest.
For example, for a keyword spotting application, a Loihi-based implementation was reported to offer a $38.6 \times$ improvement in energy cost per inference as compared to state-of-the-art ANNs (see Fig.~$1$ in \cite{blouw2019benchmarking}).
Other related experimental findings concern the identification of odorant samples under contaminants \cite{imam2020rapid} using SNNs via Loihi.
Inspired by biological brains, SNNs consist of neural units that process and communicate via sparse spiking signals over time, rather than via real numbers, over recurrent computing graphs \cite{neftci2019surrogate, jang2020reviewpt1, skatchkovsky2020reviewpt2, skatchkovsky2020reviewpt3}. Spiking neurons store and update a state variable, the membrane potential, that evolves over time as a function of past spike signals from pre-synaptic neurons. Most implementations are based on deterministic spiking neuron models that emit a spike when the membrane potential crosses a threshold. The design of training and inference algorithms for deterministic models needs to address the non-differentiable threshold activation and the locality of the update permitted by neuromorphic chips \cite{neftci2019surrogate, davies2018loihi}.
A training rule is said to be local if each neuron can update its synaptic weights based on locally available information during the causal operation of the network and based on scalar feedback signals.
The non-differentiability problem is tackled by smoothing out the activation function \cite{huh2018gradient} or its derivative \cite{neftci2019surrogate}. In \cite{kaiser2020decolle, neftci2019surrogate, bohte2002error}, credit assignment is typically implemented via approximations of backpropagation through time (BPTT), such as random backpropagation, feedback alignment, or local randomized targets, that ensure the locality of the resulting update rule via {\em per-neuron scalar feedback signals}.
As an alternative, probabilistic spiking neural models based on the generalized linear model (GLM) \cite{nelder1972generalized} can be trained by directly maximizing a lower bound on the likelihood of the SNN producing desired outputs \cite{jimenez2014stochastic, brea2013matching, jang19:spm, zenke2018superspike}.
The resulting learning algorithm implements an online learning rule that is local and only leverages global, rather than per-neuron, feedback signals. The rule leverages randomness for exploration and it was shown to be effective for resource-constrained systems \cite{jang20:vowel, skatchkovsky2020reviewpt2}.
\subsection{Main Contributions}
This paper investigates for the first time another potential advantage of probabilistic SNNs, namely their capability to generate multiple independent spiking signals when queried over the same input.
This is clearly not possible with deterministic models, whose outputs are deterministic functions of the input.
We specifically consider two potential benefits arising from this property of probabilistic SNNs: {\em (i)} during {\em inference}, the multiple generated samples can be used to robustify the model's decisions and to quantify uncertainty; and {\em (ii)} during {\em training}, the independent samples can be used to evaluate more accurate statistical estimates of the training criterion, such as the log-loss, as well as of its gradient, hence reducing the variance of the updates and improving convergence speed.
Our main contributions are summarized as follows.
\begin{itemize}[$\bullet$]
\item During inference, we propose to apply independent runs of the SNNs for any given input in order to obtain multiple decisions. For the example of a classification task, the decisions correspond to independent draws of the class index under the trained model. Using the multiple decisions allows more robust predictions to be made, and it enables a quantification of uncertainty. For classification, a majority rule can be implemented and uncertainty can be quantified via the histogram of the multiple decisions.
\item We introduce a multi-sample online learning rule that leverages generalized expectation-maximization (GEM) \cite{bishop2006pattern, simeone2018brief, tang2013sfnn} using multiple independent samples to approximate the log-loss bound via importance sampling. This yields a better statistical estimate of the log-loss and of its gradient as compared to conventional single-sample estimators \cite{tang2013sfnn}. The rule, referred to as {\GEMSNN}, follows a three-factor form with global per-sample learning signals computed by a central processor (CP) that communicates with all neurons (see Fig.~\ref{fig:model-learning}). We provide derivations of the rule and a study of communication loads between neurons and the CP.
\item We also derive alternative learning rules based on a mini-batch variant of the online learning rules in \cite{jimenez2014stochastic, brea2013matching, jang19:spm}, and on the principle of importance weighting \cite{burda2015importance, mnih2016variational,domke2018iwvi} in the Appendix.
\item Finally, experimental results on structured output memorization and classification with a standard neuromorphic data set demonstrate the advantage of inference and learning rules for probabilistic SNNs that leverage multiple samples in terms of robust decision making, uncertainty quantification, log-likelihood, accuracy and calibration when increasing the number of samples used for inference and training. These benefits come at the cost of an increased communication load between neurons and the CP, as well as of the increased time and computational complexity required to generate the multiple samples.
\end{itemize}
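The multi-sample inference idea in the first bullet can be sketched as follows (toy class draws standing in for actual SNN outputs; the helper name is illustrative): a majority rule over independent decisions, with the empirical histogram serving as a simple uncertainty estimate.

```python
from collections import Counter

def majority_with_uncertainty(decisions):
    """Majority vote over independent per-sample decisions, plus the
    empirical histogram of the decisions as an uncertainty estimate."""
    counts = Counter(decisions)
    label, _ = counts.most_common(1)[0]
    hist = {c: n / len(decisions) for c, n in counts.items()}
    return label, hist

# hypothetical class indices drawn from 8 independent runs of a trained SNN
draws = [2, 2, 1, 2, 0, 2, 1, 2]
label, hist = majority_with_uncertainty(draws)
```

A sharply peaked histogram signals a confident prediction, while a flat histogram flags inputs on which the model is uncertain.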
\section{Mini-Batch VLSNN}
It is anticipated that using a mini-batch of $K$ samples can reduce the variance of the single-sample estimator.
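The anticipated variance reduction can be checked on a toy MC estimator (illustrative, not the SNN gradient itself): averaging $K$ i.i.d. samples scales the estimator's variance by $1/K$.

```python
import math, random

random.seed(1)

def estimator_variance(K, n_trials=4000):
    # empirical variance of the mean of K i.i.d. N(0,1) samples
    means = [sum(random.gauss(0.0, 1.0) for _ in range(K)) / K
             for _ in range(n_trials)]
    mu = sum(means) / n_trials
    return sum((m - mu) ** 2 for m in means) / n_trials

v1, v8 = estimator_variance(1), estimator_variance(8)
# expected: v1 close to 1, v8 close to 1/8
```

The same $1/K$ scaling is what motivates averaging the per-compartment updates over the $K$ generated samples.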
\vspace{2cm}
\begin{comment}
\subsection{{\VLSNN}: Variational Online Learning for SNNs}
In online learning, the model parameters $\bm{\theta}$ are updated at each time $t$ based on the data ${\bm x}_{\leq t}$ observed so far. To this end, at each time $t=1,2,\ldots$, the visible neurons are clamped to the data ${\bm x}_t$, and each hidden neuron $i \in \set{H}$ emits a spike $h_{i,t} = 1$ with probability $\sigma(u_{i,t})$ from \eqref{eq:spike-prob-ind}. Weights updates are carried out using a three-factor rule of the form \eqref{eq:three-factor} as follows. A central processor (CP) collects the binary negative cross-entropy values $\bar{H}\big( x_{i,t}, \sigma(u_{i,t}) \big)$ in \eqref{eq:log-prob-ind} from all visible neurons $i \in \set{X}$, and it computes the learning signal
\begin{align} \label{eq:single-ls}
\ell_{\bm{\theta}^\text{X},t} = \Big\langle \sum_{i \in \set{X}} \bar{H}\big( x_{i,t}, \sigma(u_{i,t}) \big) \Big\rangle_\gamma,
\end{align}
for some constant $\gamma \in (0,1)$. The learning signal is then fed back from the CP to all hidden neurons $\set{H}$. Finally, each neuron $i$ updates the local model parameters $\bm{\theta}_i$ as $\bm{\theta}_i \leftarrow \bm{\theta}_i + \eta \cdot \Delta \bm{\theta}_i$ with a learning rate $\eta$, where each component of $\Delta \bm{\theta}_i$ is given as
\begin{subequations} \label{eq:single-update}
\begin{align}
&\Delta \vartheta_i =
\begin{cases}
\big\langle x_{i,t} - \sigma(u_{i,t}) \big\rangle_\gamma, ~\text{if}~ i \in \set{X} \\
\big( \ell_{\bm{\theta}^\text{X},t} - b_{t}^{\vartheta_i} \big) \cdot \big\langle h_{i,t} - \sigma(u_{i,t}) \big\rangle_\kappa, ~\text{if}~ i \in \set{H}
\end{cases} \label{eq:single-update-bias} \\
&\Delta w_{j,i} =
\begin{cases}
\big\langle \big( x_{i,t} - \sigma(u_{i,t}) \big) \cdot \overrightarrow{s}_{j,t-1} \big\rangle_\gamma, \\%~~\text{if}~~ i \in \set{X} \\
\big( \ell_{\bm{\theta}^\text{X},t} - b_{t}^{w_{j,i}} \big) \cdot \big\langle \big( h_{i,t} - \sigma(u_{i,t}) \big) \cdot \overrightarrow{s}_{j,t-1} \big\rangle_\kappa,
\end{cases} \label{eq:single-update-synaptic} \\
&\Delta w_i =
\begin{cases}
\big\langle \big( x_{i,t} - \sigma(u_{i,t}) \big) \cdot \overleftarrow{x}_{i,t-1} \big\rangle_\gamma, \\% ~~\text{if}~~ i \in \set{X} \\
\big( \ell_{\bm{\theta}^\text{X},t} - b_{t}^{w_i} \big) \cdot \big\langle \big( h_{i,t} - \sigma(u_{i,t}) \big) \cdot \overleftarrow{h}_{i,t-1} \big\rangle_\kappa,
\end{cases} \label{eq:single-update-feedback}
\end{align}
\end{subequations}
with a constant $\kappa \in (0,1)$. The baselines ${\bm b}_{i,t} = \{ \{b_t^{w_{j,i}} \}_{j \in \set{P}_i}, b_t^{w_i}, b_t^{\vartheta_i} \}$ are control variates introduced as means to minimize the variance of the gradient estimator for hidden neuron $i \in \set{H}$. Following the approach in \cite{peters2008reinforcement}, the optimized baseline can be evaluated as
\begin{align*}
{\bm b}_{i,t} = \frac{ \big\langle \ell_{\bm{\theta}^\text{X},t} \cdot {\bm e}_{i,t}^2 \big\rangle_{\kappa_b} }{ \big\langle {\bm e}_{i,t}^2 \big\rangle_{\kappa_b} },
\end{align*}
for some constant $\kappa_b \in (0,1)$, with ${\bm e}_{i,t} = \big\langle \grad_{\bm{\theta}_i} \bar{H}\big( h_{i,t}, \sigma(u_{i,t})\big) \big\rangle_\kappa$ denoting the {\em eligibility trace} of neuron $i$ at time $t$. We refer to the online rule \eqref{eq:single-update} as Variational online Learning for SNNs ({\VLSNN}).
\iffalse
\begin{align*}
\Delta \bm{\theta}_i = \eta \cdot
\begin{cases}
\big\langle \grad_{\bm{\theta}_i} \bar{H}\big( x_{i,t}, \sigma(u_{i,t}) \big) \big\rangle, ~~\text{if}~~ i \in \set{X} \\
\big( \ell_{\bm{\theta}^\text{X},t} - {\bm b}_{i,t} \big) \cdot \big\langle \grad_{\bm{\theta}_i} \bar{H} \big( h_{i,t}, \sigma(u_{i,t}) \big) \big\rangle, ~~\text{if}~~ i \in \set{H}
\end{cases}
\end{align*}
\fi
\vspace{-0.1cm}
\subsection{Derivation of {\VLSNN}}
In order to make the paper self-contained and to facilitate the derivation of multi-compartment learning schemes, we present here a brief derivation of {\VLSNN}. To address the ML problem \eqref{eq:ml} in an online manner, {\VLSNN} aims at maximizing at each time $t$ a discounted version of a lower bound $L_{{\bm x}_{\leq T}}(\bm{\theta})$ on the log-likelihood $\log p_{\bm{\theta}}({\bm x}_{\leq T})$ of the observation ${\bm x}_{\leq T}$. Using Jensen's inequality, the lower bound is obtained as
\begin{align} \label{eq:single-elbo-batch}
&\log p_{\bm{\theta}}({\bm x}_{\leq T}) = \log \sum_{{\bm h}_{\leq T}} p_{\bm{\theta}}({\bm x}_{\leq T}, {\bm h}_{\leq T}) \cr
&~~ \geq \mathbb{E}_{ {\bm h}_{\leq T} \sim p_{\bm{\theta}^\text{H}}({\bm h}_{\leq T} || {\bm x}_{\leq T-1}) } \Big[ \sum_{t=1}^T \sum_{i \in \set{X}} \bar{H}\big( x_{i,t}, \sigma(u_{i,t}) \big) \Big] \cr
&~~ := L_{{\bm x}_{\leq T}}(\bm{\theta}),
\end{align}
where we have used the notation
\begin{align}
p_{\bm{\theta}^\text{H}}({\bm h}_{\leq T} || {\bm x}_{\leq T-1}) &= \prod_{t=1}^T p_{\bm{\theta}}({\bm h}_{t} | {\bm x}_{\leq t-1}, {\bm h}_{\leq t-1}) \cr
&= \prod_{t=1}^T \prod_{i \in \set{H}} p_{\bm{\theta}_i}(h_{i,t} | u_{i,t})
\end{align}
to indicate the causally conditioned distribution of the hidden neurons' signals given the visible neurons' signals \cite{kramer1998directed}. We note that we have the decomposition
\begin{align*}
p_{\bm{\theta}}({\bm x}_{\leq T}, {\bm h}_{\leq T}) = p_{\bm{\theta}^\text{H}}({\bm h}_{\leq T} || {\bm x}_{\leq T-1}) p_{\bm{\theta}^\text{X}}({\bm x}_{\leq T} || {\bm h}_{\leq T-1}),
\end{align*}
where the causally conditioned distribution of the visible neurons' signals given the hidden neurons' signals is similarly defined as $p_{\bm{\theta}^\text{X}}({\bm x}_{\leq T} || {\bm h}_{\leq T-1}) = \prod_{t=1}^T \prod_{i \in \set{X}} p_{\bm{\theta}_i}(x_{i,t}|u_{i,t})$ \cite{kramer1998directed}. We denote as $\bm{\theta}^\text{X} = \{ \bm{\theta}_i \}_{i \in \set{X}}$ and $\bm{\theta}^\text{H} = \{ \bm{\theta}_i \}_{i \in \set{H}}$ the collections of model parameters for visible $\set{X}$ and hidden $\set{H}$ neurons, respectively. Finally, we define the time-discounted version of the lower bound $L_{{\bm x}_{\leq T}}(\bm{\theta})$ in \eqref{eq:single-elbo-batch} at time $t$ as
\begin{align} \label{eq:single-elbo}
L_{{\bm x}_{\leq t}}(\bm{\theta}) &= \mathbb{E}_{p_{\bm{\theta}^\text{H}}({\bm h}_{\leq t} || {\bm x}_{\leq t-1})} \Big[ \cr
&~~ \underbrace{ \sum_{t'=0}^{t-1} \gamma^{t'} \sum_{i \in \set{X}} \bar{H}\big( x_{i,t-t'}, \sigma(u_{i,t-t'}) \big) }_{:=~ \ell_{\bm{\theta}^\text{X},t} } \Big],
\end{align}
where $0 < \gamma < 1$ is a discount factor.
The online local update rule {\VLSNN} in \eqref{eq:single-update} can be obtained by maximizing the lower bound $L_{{\bm x}_{\leq t}}(\bm{\theta})$ in \eqref{eq:single-elbo} via SGD in the direction of a stochastic estimate of the gradient $\grad_{\bm{\theta}} L_{{\bm x}_{\leq t}}(\bm{\theta})$. The gradient of \eqref{eq:single-elbo} is obtained from the standard REINFORCE gradient, which is estimated by drawing a single spiking signal ${\bm h}_{\leq t} \sim p_{\bm{\theta}^\text{H}}({\bm h}_{\leq t} || {\bm x}_{\leq t-1})$ for the hidden neurons from the causally conditioned distribution \eqref{eq:causally-cond}. Note that, as explained in Sec.~\ref{sec:intro}, the limitation of a single sample is dictated by the presence of single-compartment neurons, i.e., $K = 1$. The resulting MC estimate of the gradient can be computed as \cite{jang19:spm}
\begin{eqnarray} \label{eq:single-elbo-grad}
&&\grad_{\bm{\theta}} L_{{\bm x}_{\leq t}}(\bm{\theta}) = \sum_{t'=0}^{t-1} \gamma^{t'} \sum_{i \in \set{X}} \grad_{\bm{\theta}} \bar{H}\big( x_{i,t-t'}, \sigma(u_{i,t-t'})\big) \cr
&&\quad \qquad+ \ell_{\bm{\theta}^\text{X},t} \cdot \sum_{t'=1}^t \sum_{i \in \set{H}} \grad_{\bm{\theta}} \bar{H}\big( h_{i,t'}, \sigma(u_{i,t'}) \big),
\end{eqnarray}
which yields the SGD-based learning rule as $\bm{\theta} \leftarrow \bm{\theta} + \eta \cdot \grad_{\bm{\theta}} L_{{\bm x}_{\leq t}}(\bm{\theta})$ with a learning rate $\eta$. This can be seen to coincide with \eqref{eq:single-update} \cite{brea2013matching, jimenez2014stochastic, jang19:spm}.
\end{comment}
\subsection{Interpreting {\VLSNN}}
Following the discussion around \eqref{eq:three-factor}, the update rule \eqref{eq:single-update} has the form of a three-factor rule \cite{fremaux2016neuromodulated} and is local, with the exception of the learning signal $\ell_{\bm{\theta}^\text{X},t}$ used for the update of hidden neurons' parameters. For each neuron $i$, the gradients $\grad_{\bm{\theta}_i} L_{{\bm x}_{\leq t}}(\bm{\theta})$ with respect to the local model parameters $\bm{\theta}_i$ from \eqref{eq:single-elbo-grad} contain three types of terms, namely the post-synaptic error $s_{i,t} - \sigma(u_{i,t})$ and post-synaptic feedback trace $\overleftarrow{s}_{i,t-1}$; pre-synaptic synaptic trace $\overrightarrow{s}_{j,t-1}$; and the common global learning signal $\ell_{\bm{\theta}^\text{X},t}$ in \eqref{eq:single-ls}. For hidden neurons $\set{H}$, the learning signal is used to guide the update, while the visible neurons $\set{X}$ do not use the learning signal. The common learning signal term $\ell_{\bm{\theta}^\text{X},t}$ in \eqref{eq:single-ls} can then be interpreted as indicating how effective the current, randomly sampled, behavior of hidden neurons ${\bm h}_{\leq t}$ is in ensuring the maximization of the likelihood of the desired observation ${\bm x}_{\leq t}$ for the visible neurons.
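The three factors just described can be sketched for a single hidden synapse (toy scalars standing in for the traces and the learning signal; function and parameter names are illustrative):

```python
import math

def sigmoid(u):
    return 1.0 / (1.0 + math.exp(-u))

def three_factor_update(ell, baseline, h, u, presyn_trace, lr=0.1):
    """Three-factor rule: (global learning signal - baseline)
    x post-synaptic error x pre-synaptic trace."""
    post_error = h - sigmoid(u)   # post-synaptic factor
    return lr * (ell - baseline) * post_error * presyn_trace

# hypothetical values: the neuron spiked (h=1) with membrane potential u=0.2,
# and the global learning signal exceeds its running baseline
dw = three_factor_update(ell=-0.4, baseline=-0.6, h=1, u=0.2,
                         presyn_trace=0.7, lr=0.1)
```

Here the spike was under-predicted ($h > \sigma(u)$) and the learning signal exceeds the baseline, so the synapse is potentiated; flipping the sign of either factor would reverse the update.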
\vspace{-0.1cm}
\subsection{Communication Load}
As discussed, {\VLSNN} requires bi-directional communication. As seen in Fig.~\ref{fig:model-learning}, at each step $t$, first, unicast communication from visible neurons to CP is required in order to compute the learning signal $\ell_{\bm{\theta}^\text{X},t}$ by collecting information $\{ \bar{H}\big( x_{i,t}, \sigma(u_{i,t}) \big)\}_{i \in \set{X}}$ from all visible neurons. The resulting unicast communication load, from neurons to CP, is $\mathsf{C}_{\text{N} \rightarrow \text{CP}}^{(K=1)} = |\set{X}|$ real numbers. The learning signal is then sent back to all hidden neurons, resulting in a broadcast communication load from CP to neurons equal to $\mathsf{C}_{\text{CP} \rightarrow \text{N}}^{(K=1)} = |\set{H}|$ real numbers.
As discussed, {\VLSNN} is based on a MC estimate of the gradient $\grad_{\bm{\theta}} L_{{\bm x}_{\leq t}}(\bm{\theta})$ of the likelihood function that relies on a single stochastic sample ${\bm h}_{\leq t}$ for the hidden neurons. This constraint is dictated by the availability of a single compartment. The gradient used by {\VLSNN} has a generally high variance, only partially decreased by the presence of the baseline control variates in \eqref{eq:single-update}. In this section, we introduce a first learning rule that leverages multiple compartments to potentially improve the learning performance of {\VLSNN} by reducing the variance of the gradient-based updates.
\begin{comment}
\subsection{{\MBVLSNN}: Mini-batch VLSNN}
As anticipated in Sec.~\ref{sec:intro}, in order to reduce the variance of the MC estimate in \eqref{eq:single-update}, it is possible to use a mini-batch of $K$ samples from the causally conditioned distribution \eqref{eq:causally-cond} by leveraging the availability of $K$ compartments. The resulting mini-batch variational online learning for SNNs ({\MBVLSNN}) operates as follows.
In a $K$-compartment SNN, at each time $t=1,2,\ldots$, each compartment $k$ of the visible neurons is clamped to the corresponding entry of the data ${\bm x}_{\leq t}^k$. Each hidden neuron $i \in \set{H}$ outputs $K$ binary values $\{ h_{i,t}^k \}_{k=1}^K$, with the $k$th compartment emitting a spike $h_{i,t}^k = 1$ with probability $\sigma(u_{i,t}^k)$ in \eqref{eq:spike-prob-ind}. The model parameters are updated in an online manner as follows. The CP collects the binary negative cross-entropy values $\{ \bar{H}\big( x_{i,t}^k, \sigma(u_{i,t}^k) \big)\}_{k=1}^K$ of all compartments from all visible neurons $i \in \set{X}$ in order to compute the {\em per-compartment} learning signal $\ell_{\bm{\theta}^\text{X},t}^k$ as in \eqref{eq:single-ls}, with $u_{i,t} = u_{i,t}^k$, for the $k$th compartment. The learning signals $\{ \ell_{\bm{\theta}^\text{X},t}^k \}_{k=1}^K$ are then fed back from the CP to all hidden neurons $\set{H}$. The communication paths are illustrated in Fig.~\ref{fig:model-learning}.
Finally, for each neuron $i$, {\MBVLSNN} updates the local model parameters $\bm{\theta}_i$ by averaging the {\VLSNN} updates in \eqref{eq:single-update} over the generated $K$ samples as
\begin{subequations} \label{eq:mb-update}
\begin{align}
&\Delta \vartheta_i =
\begin{cases}
\frac{1}{K} \sum_{k=1}^K \big\langle x_{i,t}^k - \sigma(u_{i,t}^k) \big\rangle_\gamma, ~\text{if}~ i \in \set{X} \\
\frac{1}{K} \sum_{k=1}^K \big( \ell_{\bm{\theta}^\text{X},t}^k - b_{t}^{\vartheta_i, k} \big) \cdot \big\langle h_{i,t}^k - \sigma(u_{i,t}^k) \big\rangle_\kappa, ~\text{if}~ i \in \set{H}
\end{cases} \label{eq:mb-update-bias} \\
&\Delta w_{j,i} =
\begin{cases}
\frac{1}{K} \sum_{k=1}^K \big\langle \big( x_{i,t}^k - \sigma(u_{i,t}^k) \big) \cdot \overrightarrow{s}_{j,t-1}^{k} \big\rangle_\gamma, \\%~~\text{if}~~ i \in \set{X} \\
\frac{1}{K} \sum_{k=1}^K \big( \ell_{\bm{\theta}^\text{X},t}^k - b_{t}^{w_{j,i}, k} \big) \cdot \big\langle \big( h_{i,t}^k - \sigma(u_{i,t}^k) \big) \cdot \overrightarrow{s}_{j,t-1}^{k} \big\rangle_\kappa,
\end{cases} \label{eq:mb-update-synaptic} \\
&\Delta w_i =
\begin{cases}
\frac{1}{K} \sum_{k=1}^K \big\langle \big( x_{i,t}^k - \sigma(u_{i,t}^k) \big) \cdot \overleftarrow{x}_{i,t-1} \big\rangle_\gamma, \\%~~\text{if}~~ i \in \set{X} \\
\frac{1}{K} \sum_{k=1}^K \big( \ell_{\bm{\theta}^\text{X},t}^k - b_{t}^{w_i, k} \big) \cdot \big\langle \big( h_{i,t}^k - \sigma(u_{i,t}^k) \big) \cdot \overleftarrow{h}_{i,t-1}^k \big\rangle_\kappa,
\end{cases} \label{eq:mb-update-feedback}
\end{align}
\end{subequations}
with time-averaging constants $\gamma, \kappa \in (0,1)$. We note that, in the update rule \eqref{eq:mb-update}, the relevance of each sample ${\bm h}_{\leq t}^k$ of the hidden neurons depends on the $k$th learning signal $\ell_{\bm{\theta}^\text{X},t}^k$. In a similar manner to {\VLSNN}, for each hidden neuron $i \in \set{H}$, a per-compartment baseline ${\bm b}_{i,t}^k$ can be optimized to minimize the variance (due to the magnitude of the per-compartment learning signal) as \cite{peters2008reinforcement}
\begin{align*}
{\bm b}_{i,t}^k = \frac{\big\langle \ell_{\bm{\theta}^\text{X},t}^k \cdot \big({\bm e}_{i,t}^k\big)^2 \big\rangle_{\kappa_b} }{\big\langle \big({\bm e}_{i,t}^k\big)^2 \big\rangle_{\kappa_b} },
\end{align*}
for some constant $\kappa_b \in (0,1)$, with ${\bm e}_{i,t}^k = \langle \grad_{\bm{\theta}_i} \bar{H}\big( h_{i,t}^k, \sigma(u_{i,t}^k)\big) \rangle_\kappa$ being the eligibility trace at the $k$th compartment of neuron $i$ at time $t$.
\iffalse
\begin{align*}
\Delta \bm{\theta}_i = \eta \cdot
\begin{cases}
\frac{1}{K} \cdot \sum_{k=1}^K \big\langle \grad_{\bm{\theta}_i} \bar{H}\big( x_{i,t}, \sigma(u_{i,t}^k) \big) \big\rangle, ~~\text{if}~~ i \in \set{X} \\
\frac{1}{K} \cdot \sum_{k=1}^K \big( \ell_{\bm{\theta}^\text{X},t}^k - {\bm b}_{i,t}^k \big) \cdot \big\langle \grad_{\bm{\theta}_i} \bar{H} \big( h_{i,t}^k, \sigma(u_{i,t}^k) \big) \big\rangle, ~~\text{if}~~ i \in \set{H}
\end{cases}
\end{align*}
\fi
\end{comment}
\vspace{-0.1cm}
\subsection{Communication Load}
As seen in Fig.~\ref{fig:model-learning}, {\MBVLSNN} requires the computation of $K$ learning signals at the CP, and the resulting communication loads increase linearly with $K$. Specifically, the unicast communication load from neurons to CP is $\mathsf{C}_{\text{N} \rightarrow \text{CP}} = K \mathsf{C}_{\text{N} \rightarrow \text{CP}}^{(K=1)} = K |\set{X}|$ real numbers; while the broadcast communication load from CP to neurons is $\mathsf{C}_{\text{CP} \rightarrow \text{N}} = K \mathsf{C}_{\text{CP} \rightarrow \text{N}}^{(K=1)} = K |\set{H}|$ real numbers.
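As a concrete illustration, the per-step communication loads above can be tabulated with a short helper function (a sketch for illustration only; the function name and the example network sizes are ours, not part of the scheme):

```python
def communication_loads(num_visible: int, num_hidden: int, K: int = 1):
    """Per-time-step communication loads, in real numbers, for K compartments.

    Unicast: each of the |X| visible neurons sends one cross-entropy value
    per compartment to the CP, i.e., C_{N->CP} = K * |X|.
    Broadcast: the CP feeds one learning signal per compartment back to each
    of the |H| hidden neurons, i.e., C_{CP->N} = K * |H|.
    """
    return K * num_visible, K * num_hidden

# Example with |X| = 4 visible and |H| = 5 hidden neurons, as in Fig. 1.
print(communication_loads(4, 5, K=1))  # -> (4, 5)
print(communication_loads(4, 5, K=8))  # -> (32, 40)
```

Setting $K=1$ recovers the {\VLSNN} loads of the previous subsection.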
\section{Probabilistic SNN Model}
\label{sec:model}
\begin{figure}[t!]
\centering
\includegraphics[height=0.22\columnwidth]{fig/snn-architecture}
\caption{Architecture of an SNN with exogeneous inputs and $|\set{X}| = 4$ visible and $|\set{H}| = 5$ hidden spiking neurons -- the directed links between two neurons represent synaptic dependencies, while the self-loop links represent self-memory. The directed graph may have loops, indicating recurrent behavior.}
\label{fig:model-topology}
\vspace{-0.4cm}
\end{figure}
An SNN model is defined as a network connecting a set $\set{V}$ of spiking neurons via an arbitrary directed graph, which may have cycles.
Following a discrete-time implementation of the probabilistic generalized linear model (GLM) for spiking neurons \cite{pillow2008spatio, jang19:spm}, at any time $t=1,2,\ldots$, each spiking neuron $i$ outputs a binary value $s_{i,t} \in \{0,1\}$, with ``$1$'' denoting the emission of a spike. We collect in vector ${\bm s}_t = (s_{i,t}: i \in \set{V})$ the spiking signals emitted by all neurons $\set{V}$ at time $t$, and denote by ${\bm s}_{\leq t} = ({\bm s}_1, \ldots, {\bm s}_t)$ the spike sequences of all neurons up to time $t$. As illustrated in Fig.~\ref{fig:model-topology}, each post-synaptic neuron $i$ receives past input spike signals ${\bm s}_{\set{P}_i, \leq t-1}$ from the set $\set{P}_i$ of pre-synaptic neurons connected to it through directed links, known as synapses. With some abuse of notation, we include exogeneous inputs to a neuron $i$ (see Fig.~\ref{fig:model-topology}) in the set $\set{P}_i$ of pre-synaptic neurons.
As illustrated in Fig.~\ref{fig:model-inference}, the spiking probability of neuron $i$ at time $t$ conditioned on the value of the {\em membrane potential} $u_{i,t}$ is defined as
\begin{align} \label{eq:spike-prob-ind}
p_{\bm{\theta}_i}(s_{i,t} = 1 | {\bm s}_{\set{P}_i \cup i, \leq t-1}) = p_{\bm{\theta}_i}(s_{i,t} = 1 | u_{i,t}) = \sigma(u_{i,t}),
\end{align}
with $\sigma(x) = (1+\exp(-x))^{-1}$ being the sigmoid function. The membrane potential $u_{i,t}$ summarizes the effect of the past spike signals ${\bm s}_{\set{P}_i,\leq t-1}$ from pre-synaptic neurons and of its past activity ${\bm s}_{i, \leq t-1}$. From \eqref{eq:spike-prob-ind}, the negative log-probability of the output $s_{i,t}$ corresponds to the {\em binary cross-entropy loss}
\begin{align} \label{eq:log-prob-ind}
-\log p_{\bm{\theta}_i}(s_{i,t}| u_{i,t}) = \ell \big( s_{i,t}, \sigma(u_{i,t})\big) := -s_{i,t} \log \sigma(u_{i,t}) - (1-s_{i,t}) \log (1-\sigma(u_{i,t})).
\end{align}
The joint probability of the spike signals ${\bm s}_{\leq T}$ up to time $T$ is defined using the chain rule as $p_{\bm{\theta}}({\bm s}_{\leq T}) = \prod_{t=1}^T \prod_{i \in \set{V}} p_{\bm{\theta}_i}(s_{i,t}|u_{i,t})$, where $\bm{\theta} = \{\bm{\theta}_i\}_{i \in \set{V}}$ denotes the set of model parameters, with $\bm{\theta}_i$ being the local model parameters of neuron $i$ as detailed below.
The membrane potential $u_{i,t}$ is obtained as the output of a spatio-temporal synaptic filter, or kernel, $a_t$ and of a somatic filter $b_t$ \cite{doya2007bayesian, pillow2008spatio}. Specifically, each synapse $(j,i)$ from a pre-synaptic neuron $j \in \set{P}_i$ to a post-synaptic neuron $i$ computes the synaptic filtered trace
\begin{align} \label{eq:synaptic-filtered-trace}
\overrightarrow{s}_{j,t} = a_t \ast s_{j,t},
\end{align}
while the somatic filtered trace of neuron $i$ is computed as $\overleftarrow{s}_{i,t} = b_t \ast s_{i,t}$, where we denote by $f_t \ast g_t = \sum_{\delta > 0} f_{\delta} g_{t-\delta}$ the convolution operator. If the kernels are impulse responses of autoregressive filters, e.g., $\alpha$-functions with parameters, the filtered traces can be computed recursively without keeping track of windows of past spiking signals \cite{osogami15:DyBM}. It is also possible to assign multiple kernels to each synapse \cite{pillow2008spatio}. The membrane potential $u_{i,t}$ of neuron $i$ at time $t$ is finally given as the weighted sum
\begin{align} \label{eq:membrane-potential}
u_{i,t} = \sum_{j \in \set{P}_i} w_{j,i} \overrightarrow{s}_{j,t-1} + w_i \overleftarrow{s}_{i,t-1} + \vartheta_i,
\end{align}
where $w_{j,i}$ is a synaptic weight of the synapse $(j,i)$; $w_i$ is the self-memory weight; and $\vartheta_i$ is a bias, with $\bm{\theta}_i = \{ \{w_{j,i}\}_{j \in \set{P}_i}, w_i, \vartheta_i\}$ being the local model parameters of neuron $i$.
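To make the dynamics concrete, the recursions defined by \eqref{eq:spike-prob-ind}--\eqref{eq:membrane-potential} can be simulated as follows for exponential kernels, for which the filtered traces admit one-step autoregressive updates as noted above (a minimal sketch; the weights, decay rates, and the Bernoulli pre-synaptic input are illustrative choices of ours):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# One post-synaptic neuron i with |P_i| = 3 pre-synaptic inputs.
num_pre, T = 3, 100
w_syn = rng.normal(0.0, 0.5, size=num_pre)  # synaptic weights w_{j,i}
w_self = -0.3                               # self-memory weight w_i
bias = -1.0                                 # bias vartheta_i
alpha, beta = 0.8, 0.7                      # exponential-kernel decay rates

syn_trace = np.zeros(num_pre)  # synaptic filtered traces at time t-1
som_trace = 0.0                # somatic filtered trace at time t-1
spikes = []
for t in range(T):
    # Membrane potential from *past* filtered traces, as in the weighted sum.
    u = w_syn @ syn_trace + w_self * som_trace + bias
    s_i = rng.random() < sigmoid(u)      # spike with probability sigma(u)
    spikes.append(int(s_i))
    s_pre = rng.random(num_pre) < 0.2    # placeholder pre-synaptic spikes
    # One-step autoregressive updates of the filtered traces.
    syn_trace = alpha * syn_trace + s_pre
    som_trace = beta * som_trace + s_i

rate = np.mean(spikes)
print(f"empirical firing rate: {rate:.2f}")
```

The recursive trace updates avoid storing windows of past spiking signals, as noted above for autoregressive kernels.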
\begin{figure}[t!]
\centering
\includegraphics[height=0.45\columnwidth]{fig/snn-potential}
\vspace{-0.2cm}
\caption{Illustration of the membrane potential $u_{i,t}$ model for a probabilistic SNN, with exponential synaptic and somatic filters. At each neuron $i$, the contribution of the synaptic trace from a pre-synaptic neuron $j \in \set{P}_i$ through synaptic filter $a_t$ are multiplied by the corresponding synaptic weights $w_{j,i}$, and the contribution of the somatic trace of a post-synaptic neuron $i$ through self-memory filter $b_t$ is multiplied by a weight $w_i$. The bias parameter $\vartheta_i$ is summed to obtain the membrane potential $u_{i,t}$, which is used to determine the spiking probability through the sigmoid function $\sigma(\cdot)$. }
\label{fig:model-inference}
\vspace{-0.5cm}
\end{figure}
\iffalse
To summarize, given exogeneous inputs, all spiking neurons store and produce $K_I$ spiking signals $\{{\bm s}_t^k\}_{k=1}^{K_I}$ independently over time $t$, with the $k$th spiking signal $s_{i,t}^k$ of neuron $i$ at time $t$ being spiked with probability \eqref{eq:spike-prob-ind} from the associated $k$th membrane potential $u_{i,t}^k$. In more detail, as illustrated in Fig.~\ref{fig:model-inference}, at each time $t$, given the $K_I$ incoming signals $\{{\bm s}_{\set{P}_i, \leq t-1}^k\}_{k=1}^{K_I}$ of pre-synaptic neurons $\set{P}_i$ (including the exogeneous inputs) and given the local model parameters $\bm{\theta}_i$ of neuron $i$, each synapse $(j,i)$ computes the $K_I$ synaptic filtered traces $\{\overrightarrow{s}_{j,t-1}^k\}_{k=1}^{K_I}$, while neuron $i$ computes the $K_I$ somatic filtered traces $\{\overleftarrow{s}_{i,t-1}^k\}_{k=1}^{K_I}$. Then, the associated membrane potential $u_{i,t}^k$ is evolved using the filtered traces $\{\{\overrightarrow{s}_{j,t-1}^k\}_{j \in \set{P}_i}, \overleftarrow{s}_{i,t-1}^k\}$ as in \eqref{eq:membrane-potential}, a spike $s_{i,t}^k$ is emitted with probability $\sigma(u_{i,t}^k)$
as in \eqref{eq:spike-prob-ind}.
\fi
As illustrated in Fig.~\ref{fig:model-topology}, we divide the neurons of the SNN into the disjoint subsets of visible neurons $\set{X}$ and hidden, or latent, neurons $\set{H}$, hence setting $\set{V} = \set{X} \cup \set{H}$. Using the notation above, we denote by $s_{i,t} = x_{i,t}$ the output at time $t$ of a visible neuron $i \in \set{X}$ and by $s_{i,t} = h_{i,t}$ that of a hidden neuron $i \in \set{H}$, as well as ${\bm s}_t = ({\bm x}_t, {\bm h}_t)$ for the overall set of spike signals of all neurons $\set{V}$ at time $t$.
The visible neurons represent the read-out layer that interfaces with end users or actuators.
We will use the terms read-out layer and visible neurons interchangeably.
The exogeneous input signals can either be recorded from neuromorphic sensors or they can instead be converted from a natural signal to a set of spiking signals by following different encoding rules, e.g., rate, time, or population encoding \cite{jang19:spm}. In a similar manner, output spiking signals of the visible neurons can either be fed directly to a neuromorphic actuator or they can be converted from spiking signals to natural signals by following pre-defined decoding principles, e.g., rate, time, or population decoding rules, in order to make the model's decision.
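As an illustration of rate encoding and decoding, the following sketch converts a natural signal into Bernoulli spike trains and reads a decision off the read-out layer by spike counts (the encoding probabilities, signal values, and function names are illustrative assumptions of ours):

```python
import numpy as np

rng = np.random.default_rng(1)

def rate_encode(values, T, max_rate=0.5):
    """Bernoulli rate encoding: entries of `values` in [0, 1] are mapped to
    independent spike trains of length T, with per-step spiking probability
    max_rate * value."""
    values = np.clip(np.asarray(values, dtype=float), 0.0, 1.0)
    probs = max_rate * values
    return (rng.random((len(values), T)) < probs[:, None]).astype(int)

def rate_decode(spike_trains):
    """Rate decoding: the decision is the neuron with the most spikes."""
    return int(np.argmax(spike_trains.sum(axis=1)))

x = [0.1, 0.9, 0.3]         # natural signal, one entry per read-out neuron
spikes = rate_encode(x, T=200)
print(rate_decode(spikes))  # index of the most active neuron
```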
\section{Multi-Sample Inference}
\label{sec:multi-inference}
\iffalse
\note{
\begin{enumerate}
\item Different outputs can be used to estimate the posterior of the inference decision given the input
\item example of classification: decisions from $K_I$ sample can be used to approximate the posterior over classes
\item it allows us to obtain a quantification of uncertainty via the approximate posterior + to take robust decision via approximate use of MAP by implementing a majority rule
\item another example (already we have, or a new toy example): show approximate posterior obtained for some of the inputs
\item there can be significant residual uncertainty on some of the inputs, while on others the SNN may be more confident: figures for different inputs, with different confidence
\end{enumerate}
}
\fi
In this section, we describe ways in which multiple independent output samples produced by the SNN in response to a given input can be used to robustify decision making and quantify the uncertainty.
During inference, the SNN generally acts as a probabilistic sequence-to-sequence mapping between exogeneous inputs and outputs that is defined by the model parameters $\bm{\theta}$.
While the output of the read-out layer for deterministic SNN models is a function of the exogeneous input, in probabilistic models, multiple applications of the SNN to the same input produce independent spiking outputs.
Let us define as $K_I$ the number of independent spiking signals $\{{\bm x}_{\leq T}^k\}_{k=1}^{K_I}$ recorded at the read-out layer of visible neurons $\set{X}$ up to time $T$ for a given input.
To elaborate on the use of the $K_I$ spiking signals $\{{\bm x}_{\leq T}^k\}_{k=1}^{K_I}$ for inference, we will focus in this section on classification tasks.
Consider a classification task, in which each class $c \in \{1,\ldots,C\}$ is associated with a disjoint subset $\mathds{X}_c$ of output spiking signals at the read-out layer.
Accordingly, the classifier chooses class $c$ if we have ${\bm x}_{\leq T} \in \mathds{X}_c$.
For instance, with rate decoding, each class $c$ is assigned one neuron in the read-out layer of $C$ neurons, and the set $\mathds{X}_c$ contains all spiking signals ${\bm x}_{\leq T}$ such that
\begin{align} \label{eq:rate-class}
c = \arg \max_{c' \in \set{X}} \sum_{t=1}^T x_{c',t},
\end{align}
where $x_{c',t}$ is the output spike signal at time $t$ of the visible neuron $c' \in \set{X}$ corresponding to the class $c'$.
This implies that the decoder chooses a class $c$ associated with the visible neuron producing the largest number of spikes.
The $K_I$ spiking signals $\{{\bm x}_{\leq T}^k\}_{k=1}^{K_I}$ of the visible neurons can be leveraged for classification by combining the $K_I$ decisions
\begin{align} \label{eq:multiple-class}
\{\hat{c}^k: {\bm x}_{\leq T}^k \in \mathds{X}_{\hat{c}^k}\}_{k=1}^{K_I}.
\end{align}
In this example, one can adopt a majority decision rule whereby the final decision is obtained by selecting the class $\hat{c}$ that has received the most ``votes'', i.e.,
\begin{align} \label{eq:majority}
\hat{c} = \arg \max_{c \in \set{X}} \sum_{k=1}^{K_I} \mathds{1}(c = \hat{c}^k),
\end{align}
with $\mathds{1}(E)$ being the indicator function of the event $E$.
The final decision of majority rule \eqref{eq:majority} is generally more robust, being less sensitive to noise in the inference process.
As an example, consider the problem of binary classification. If each decision fails with probability $P_e < 1/2$, the majority rule decodes erroneously with probability
\begin{align} \label{eq:majority-error}
P_{e,K_I} &= \sum_{k= \frac{K_I}{2} }^{K_I} \binom{K_I}{k} P_e^k (1-P_e)^{K_I-k} \leq \exp\Big( - 2 K_I \Big(\frac{1}{2} - P_e\Big)^2 \Big),
\end{align}
where the inequality follows from the Hoeffding bound. It follows that the probability of error of the majority rule decreases exponentially with $K_I$. The same conclusion applies to classification problems with any (finite) number of classes (see, e.g., \cite{moon2005error}). This will be demonstrated in Sec.~\ref{sec:experiments}.
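The exact majority-vote error probability in \eqref{eq:majority-error} and its Hoeffding upper bound can be checked numerically (a sketch; ties, which can occur for even $K_I$, are counted as errors, consistently with the lower summation limit above):

```python
from math import ceil, comb, exp

def majority_error(Pe, K):
    """Exact error probability of a K-sample majority vote when each
    individual decision fails independently with probability Pe (ties,
    possible for even K, are counted as errors)."""
    return sum(comb(K, k) * Pe**k * (1.0 - Pe)**(K - k)
               for k in range(ceil(K / 2), K + 1))

def hoeffding_bound(Pe, K):
    """Upper bound exp(-2 K (1/2 - Pe)^2) from the text."""
    return exp(-2.0 * K * (0.5 - Pe) ** 2)

Pe = 0.3
for K in (1, 5, 11, 21):
    print(K, round(majority_error(Pe, K), 4), round(hoeffding_bound(Pe, K), 4))
```

The exact error decays quickly with $K$ and always stays below the Hoeffding bound.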
Moreover, the $K_I$ spiking signals can be used to quantify the uncertainty of the decision. Focusing again on classification, for a given input, an ideal quantification of aleatoric uncertainty under the model requires computing the marginal distribution $p_{\bm{\theta}}({\bm x}_{\leq T} \in \mathds{X}_c)$ for all classes $c \in \{1,\ldots,C\}$ (see, e.g., \cite{abdar2020uncertainty}).
Probability $p_{\bm{\theta}}({\bm x}_{\leq T} \in \mathds{X}_c)$ quantifies the level of confidence assigned by the model to each class $c$.
Using the $K_I$ spiking signals $\{{\bm x}_{\leq T}^k \}_{k=1}^{K_I}$ produced by the visible neurons $\set{X}$ during $K_I$ independent runs, this probability can be estimated with the empirical average
\begin{align} \label{eq:majority-prob}
p_{\bm{\theta}}({\bm x}_{\leq T} \in \mathds{X}_c) \approx \frac{1}{K_I} \sum_{k=1}^{K_I} \mathds{1}({\bm x}_{\leq T}^k \in \mathds{X}_c)
\end{align}
for each class $c$. This implies that each class is assigned a degree of confidence that depends on the fraction of the $K_I$ decisions that are made in its favour.
We note that this approach is not feasible with deterministic models, which only provide individual decisions.
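The empirical confidence estimate \eqref{eq:majority-prob} amounts to a normalized histogram of the $K_I$ per-run decisions, as in the following sketch (the simulated decision distribution is an illustrative stand-in for actual SNN outputs):

```python
import numpy as np

rng = np.random.default_rng(2)

def class_confidences(decisions, num_classes):
    """Empirical class probabilities: the fraction of the K_I per-run
    decisions falling in each class."""
    decisions = np.asarray(decisions)
    return np.bincount(decisions, minlength=num_classes) / len(decisions)

# K_I = 20 independent runs of a hypothetical 3-class classifier.
decisions = rng.choice(3, size=20, p=[0.7, 0.2, 0.1])
conf = class_confidences(decisions, num_classes=3)
c_hat = int(np.argmax(conf))  # the majority decision
print(conf, c_hat)
```

The vector `conf` approximates $p_{\bm{\theta}}({\bm x}_{\leq T} \in \mathds{X}_c)$ for each class $c$, while `c_hat` recovers the majority rule.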
\section{Multi-Sample Learning}
\label{sec:multi-learning}
In general, probabilistic machine learning models define a parameterized joint probability $p({\bm x},{\bm h})$ for observed variables ${\bm x}$ and latent variables ${\bm h}$. Recent advances in variational inference (VI) have made it possible to train such models efficiently despite the presence of possibly high-dimensional hidden variables ${\bm h}$. VI techniques tackle the maximization of the marginal log-likelihood of the data, namely $\log p({\bm x})$, through the maximization of the evidence lower bound (ELBO). The ELBO can be introduced via the following identity
\begin{align}
\log p({\bm x}) = \underbrace{ \mathbb{E}_{q({\bm h})} \Big[ \log \frac{p({\bm x},{\bm h})}{q({\bm h})} \Big] }_{ \text{ELBO} } + \text{KL}\big( q({\bm h}) || p({\bm h}|{\bm x}) \big),
\end{align}
where $q({\bm h})$ is an arbitrary distribution, known as variational posterior, and the term $\text{KL}\big( p||q \big) = \sum_x p(x)\log \frac{p(x)}{q(x)} \geq 0$ represents the Kullback-Leibler divergence between distributions $p$ and $q$. State-of-the-art VI methods tackle the maximization of the ELBO by iteratively optimizing over the variational posterior $q({\bm h})$ and over the model parameters defining the complete likelihood $p({\bm x},{\bm h})$, through Monte Carlo (MC) sampling averages obtained by drawing samples ${\bm h} \sim q({\bm h})$ to evaluate expectation over $q({\bm h})$. In recent studies for ANNs, alternative approaches leveraging $K > 1$ samples from the variational posterior have been developed that optimize a more accurate lower bound on the marginal log-likelihood than the ELBO \cite{burda2015importance, tang2013sfnn, mnih2016variational, domke2018iwvi}. These lower bounds become tighter as the number $K$ of samples increases, converging to the exact value $\log p({\bm x})$ as $K \rightarrow \infty$. The resulting methods produce training algorithms based on importance weighting (IW) \cite{burda2015importance, mnih2016variational, domke2018iwvi} or generalized EM (GEM) \cite{tang2013sfnn}.
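The tightening of such multi-sample bounds with $K$ can be illustrated on a toy discrete latent-variable model, estimating the importance-weighted objective of \cite{burda2015importance} by Monte Carlo (all probabilities below are illustrative; this is a sketch of the bound itself, not of the SNN model):

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy latent-variable model: binary latent h, single fixed observation x.
p_h = np.array([0.5, 0.5])          # prior p(h)
p_x_given_h = np.array([0.9, 0.2])  # likelihood p(x | h)
q_h = np.array([0.5, 0.5])          # variational posterior q(h)

log_px = np.log(np.sum(p_h * p_x_given_h))  # exact log p(x)

def iw_bound(K, num_runs=20000):
    """MC estimate of E_q[log (1/K) sum_k p(x, h_k) / q(h_k)], h_k ~ q."""
    h = rng.choice(2, size=(num_runs, K), p=q_h)
    w = p_h[h] * p_x_given_h[h] / q_h[h]  # importance weights
    return float(np.mean(np.log(np.mean(w, axis=1))))

for K in (1, 5, 50):
    print(K, round(iw_bound(K), 4), "log p(x) =", round(float(log_px), 4))
```

For $K=1$ the estimate recovers the ELBO, while for larger $K$ it approaches $\log p({\bm x})$ from below.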
\vspace{1cm}
References \cite{jimenez2014stochastic, brea2013matching, jang19:spm, zenke2018superspike} derived learning rules for SNNs that optimize the ELBO for given observed spiking signals ${\bm x}$, with a variational posterior equal to the forward distribution of the model. These techniques are based on an MC estimate of the ELBO that uses a single sample ${\bm h} \sim q({\bm h})$ for the hidden neurons. This allows the learning rules to be implemented in an online manner by simply running the network in feedforward mode over time.
This paper is concerned with the derivation of online local learning rules that tackle the ML problem \eqref{eq:ml}. The general form of the desired online training rule for multi-compartment SNNs follows the standard three-factor format implemented by most neuromorphic hardware \cite{fremaux2016neuromodulated, davies2018loihi}: A synaptic weight $w_{j,i}$ from pre-synaptic neuron $j$ to a post-synaptic neuron $i$ is updated as
\begin{align} \label{eq:three-factor}
w_{j,i} \leftarrow w_{j,i} + \eta \cdot \sum_{k=1}^K \ell^k \cdot \big\langle \text{pre}_j^k \cdot \text{post}_i^k \big\rangle,
\end{align}
where $\eta$ is the learning rate and $\langle \cdot \rangle$ denotes a time-averaging operator. The update \eqref{eq:three-factor} sums the contributions from the $K$ compartments, with each contribution depending on three different factors. The term $\text{pre}_j^k$ is a function of the filtered traces \eqref{eq:synaptic-filtered-trace} for the $(j,i)$ synapse, hence depending only on the spiking signals of the pre-synaptic neuron $j$, while the factor $\text{post}_i^k$ depends on the activity of the post-synaptic neuron $i$ processed by the $k$th compartment. Finally, $\ell^k$ is a scalar {\em learning signal} that determines the sign and magnitude of the contribution of the $k$th compartment to the learning update. The learning signal can generally be evaluated by a central processor that has access to the current outputs $({\bm x}_t^k, {\bm h}_t^k)$ of the SNN. The learning signal $\ell^k$ may be missing in some learning rules for given subset of neurons (see Fig.~\ref{fig:model-learning} for a preview).
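A generic software rendering of the $K$-compartment three-factor update \eqref{eq:three-factor} is sketched below, with the time-averaging operator $\langle \cdot \rangle$ implemented as an exponential moving average (the learning signals and traces are random placeholders, and the function and parameter names are ours):

```python
import numpy as np

rng = np.random.default_rng(4)

def three_factor_update(w, ell, pre, post, avg, eta=0.01, gamma=0.9):
    """One step of the generic K-compartment three-factor rule; the
    time-averaging operator <.> is realized as an exponential moving
    average with rate gamma."""
    avg = gamma * avg + (1.0 - gamma) * pre * post  # <pre_j^k * post_i^k>
    w = w + eta * np.sum(ell * avg)                 # sum over compartments
    return w, avg

K = 4
w = 0.1                 # synaptic weight w_{j,i}
avg = np.zeros(K)       # running averages, one per compartment
for t in range(10):
    ell = rng.normal(size=K)      # per-compartment learning signals
    pre = rng.random(K)           # pre-synaptic factors (filtered traces)
    post = rng.random(K) - 0.5    # post-synaptic factors
    w, avg = three_factor_update(w, ell, pre, post, avg)
print(w)
```

Each compartment contributes the product of its three factors, weighted by its scalar learning signal, exactly as in \eqref{eq:three-factor}.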
\section{Related Work}
\label{sec:related}
As discussed, most of the algorithmic solutions for SNNs have been studied by assuming that each spiking neuron emits individual spikes, i.e., single-sample models \cite{jimenez2014stochastic, brea2013matching, jang19:spm, zenke2018superspike}. Several modelling and algorithmic approaches have been proposed to introduce and leverage multiple samples per neuron. A line of work including \cite{mostafa2018learning, guo2017hierarchical, jang20:vowel} considers spiking Winner-Take-All (WTA) circuits. Like the multi-sample model investigated here, WTA circuits maintain multiple membrane potentials per circuit. Unlike in the multi-sample model, however, each potential is assigned to one of a group of spatially correlated spiking units in the circuit, and the potentials define a competitive process across the units through which at most one of the elements in the circuit spikes. To this end, in a WTA circuit, the weights used to compute each potential are different. This is essential in order to ensure that each element is sensitive to different spatio-temporal patterns. Therefore, increasing the number of WTA circuits entails a model with potentially more parameters, while increasing the number of samples in our model does not affect the model complexity.
\iffalse
Other related models assume more general interconnection among compartments in order to capture the dynamic behavior of branchy dendrites \cite{ermentrout2010mathematical, drix2018sparse}. In these models, dendritic structures are treated as multiple compartments coupled with each other. In \cite{drix2018sparse}, a network of spiking neurons, each with dendritic and somatic compartments, was studied to regulate both the average firing rates and the population sparseness via separate learning rules for the compartments. Compartmentalized structures can also be effectively simulated by neuromorphic hardware with support for multi-compartment dendrites on the SpiNNaker chip \cite{furber2012spinnaker} and on Intel's Loihi \cite{davies2018loihi}.
\fi
The idea of using such multi-sample objectives as a better proxy for the log-likelihood has been proposed for ANN-based variational autoencoders with continuous latent variables in \cite{burda2015importance}. This work used the reparameterization trick in order to reduce the variance of the estimator. In \cite{tang2013sfnn}, the authors proposed an importance-sampling-based estimator for a generalized EM bound on the log-likelihood. Focusing on the model with discrete latent variables, an unbiased gradient estimator for a multi-sample objective with per-sample learning signals was derived in \cite{mnih2016variational}.
A theoretical connection between the multi-sample objective and the log-likelihood was discussed in \cite{domke2018iwvi}.
\section{Introduction}
Eigenvalues of large random matrices
have been attracting much interest in theoretical
physics since the 1950's
\cite{Port,Mehta,Haake_book,Bohigas,AlSi,Bee,Guhr}.
Until recently only the real eigenvalues were seen as
physically relevant, hence most of the studies
ignored matrices with complex eigenvalues. Powerful techniques to deal
with real eigenvalues were developed and their statistical properties
are well understood nowadays \cite{Mehta}. Microscopic justifications of the use of random matrices
for describing the universal spectral properties of quantum chaotic systems
have been
provided by several groups recently, based both on traditional
semiclassical
periodic orbit expansions \cite{per,bogomol} and on advanced
field-theoretical methods \cite{MK,aaas}.
These facts make the theory of random Hermitian
matrices a powerful and versatile
tool of research in different branches of modern theoretical physics,
see e.g.\cite{Bohigas,Bee,Guhr}.
Recent studies of dissipative quantum maps \cite{diss,reichl},
asymmetric neural networks \cite{neural,n}, and open quantum
systems \cite{Sok,nils,dittes,FS,FSR} stimulated interest in complex eigenvalues of random matrices. The most obvious motivation comes from studies of resonances in
open quantum systems,
i.e.\ systems whose fragments can escape to or come from
infinity. The resonances are determined as poles of the
scattering matrix (S-matrix), as a function of energy of incoming waves,
in the complex energy plane. The real part of the pole is the resonance
energy and the imaginary part is the resonance half-width. Finite width
implies
finite life-time of the corresponding states. In the chaotic regime
the resonances are dense and placed irregularly in the complex plane.
Recently, the progress in numerical techniques and
computational facilities made available resonance patterns of
high accuracy for realistic open quantum chaotic systems
like atoms and molecules \cite{Blumel}.
Due to the irregularity in the resonance widths and positions, the S-matrix shows irregular fluctuations with energy, and the main goal of the theory of chaotic scattering is to provide an adequate statistical description of such behavior.
The so-called ``Heidelberg approach''
to this problem suggested in \cite{VWZ} makes use of random matrices.
The starting point is
a representation of the S-matrix in terms of an
effective non-Hermitian Hamiltonian
${\cal H}_{eff}=\hat{H}-i\hat{\Gamma}$. The Hermitian $N\times N$ matrix
$\hat{H}$ describes the closed counterpart of the open
system and the skew-Hermitian $i\hat{\Gamma}=i\hat W \hat W^T$
arises due to coupling to open scattering channels $a=1, \ldots , M$,
the matrix elements $W_{ja}$ being the amplitudes of direct transitions
from ``internal'' states $i=1,\ldots , N$ to one of the open channels.
The poles of the S-matrix coincide
with the eigenvalues of ${\cal H}_{eff}$. In the chaotic regime
one replaces
$\hat{H}$ with an ensemble of random matrices of an
appropriate symmetry. This step is usually
``justified'' by the common belief according
to which the universal features of the
chaotic quantum systems survive such a replacement \cite{Bohigas,AlSi,Bee,Guhr}. As a result, various features of
chaotic quantum scattering
can be efficiently studied by performing the ensemble averaging.
The approach has proved to be very fruitful (for an account of
recent developments see \cite{FSR}). In particular,
it allowed one to
obtain explicitly the distribution of the resonances
in the complex plane for chaotic quantum systems
with broken time-reversal invariance \cite{FS,FSR} and,
in its turn, this distribution was used to clarify some
aspects of the relaxation
processes in quantum chaotic systems\cite{Sav}.
A very recent outburst of interest in non-Hermitian problems
\cite{Nels,NS,pass,Efnonh,freevs,nhloc,zee,NS1,QCD}
deserves to be mentioned separately. During the last several
years complex spectra of random matrices and operators
emerged in a diversity of problems. Hatano and Nelson
described depinning of flux lines from columnar defects
in superconductors in terms
of a localization-delocalization transition
in non-Hermitian quantum mechanics \cite{Nels}. Their work
motivated a series of studies of the
corresponding non-Hermitian
Schr\"odinger operator \cite{Efnonh,freevs,nhloc,zee,NS1}
and, surprisingly, random matrices appeared to be relevant
in this context \cite{Efnonh,freevs}. Complex eigenvalues
were also discussed in the context of lattice
QCD. The lattice Dirac operator entering the QCD
partition function is non-Hermitian at nonzero
chemical potential and proves to be difficult to deal with
both numerically and
analytically. Recent studies of chiral symmetry breaking used
a non-Hermitian random matrix substitute for the
Dirac operator
\cite{QCD}. There exist also interesting
links between complex eigenvalues of random matrices and
systems of interacting particles in one and two spatial dimensions
\cite{cal}. And, finally, we have to mention that
random matrices can be used for visualization of
the pseudospectra of non-random
convection-diffusion operators \cite{Tref}, and for the description of two-level systems coupled to a noise reservoir \cite{ewa}.
Traditional mathematical treatment of random matrices
with no symmetry conditions imposed
goes back to the pioneering work by Ginibre\cite{Gin}
who determined all the eigenvalue correlation functions
in an ensemble of complex matrices with Gaussian entries.
The progress in the field was rather slow but steady
\cite{Mehta,Lehm1,Edel,Forr,Oas,FKS2}, see also \cite{Gir,Bai}.
In addition to the traditional approach
other approaches have been developed and tested on new classes of
non-Hermitian random matrices
\cite{n,nils,FS,Khor,FKS1,free,zee1,Kus}.
However, our knowledge of the statistical properties of
complex eigenvalues of random matrices is still far from being
complete,
in particular little is known about the universality classes of
the obtained eigenvalue statistics.
When speaking about universality one has to specify
the energy scale, for the degree of universality
usually depends upon the chosen scale.
There exist two characteristic scales in the random matrix
spectra: the global one and the local one.
The global scale is aimed at describing the distribution of the eigenvalues in the bulk. The local one is aimed at describing the statistical properties of small sets of eigenvalues. For real spectra,
the global scale is that on which
a spectral interval of
unit length contains on average a large, proportional to
the matrix dimension, number of eigenvalues. If
the spectrum is supported in a finite interval $[a,b]$
the global scale is simply given by the length of this interval.
In contrast, the local scale
is that
determined by the mean distance $\Delta $ between two
neighbouring eigenvalues.
Loosely speaking, the local scale is $N$ times smaller than
the global one sufficiently far from the spectrum edges,
$N$ being the matrix dimension.
Universality in the real spectra is well
established. The global scale universality is specific
to random matrices with independent entries
and does not extend to other classes of random matrices.
The best known example of such universality is provided
by the Wigner semicircle law \cite{Wig}:
\begin{equation}\label{semi}
\langle\rho(X)\rangle=\frac{N}{2\pi J^2 }
\sqrt{4\,J^2-X^2}=N\nu(X)=\frac{1}{\Delta},
\end{equation}
which holds
for random matrices whose entries satisfy a Lindeberg type condition
\cite{pastur}. In this expression the parameter $J$ just sets the global scale
in a sense as defined above. It is determined by the expectation value
$J^2=\langle \frac{1}{N}\mathop{\mathrm{Tr}}\hat{H}^2\rangle$.
It is customary to scale the
entries in such a way that $J$ stays finite when $N\to \infty$,
the local spacing between eigenvalues in the neighbourhood of the
point $X$ being therefore $\Delta\propto 1/N$.
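For the Gaussian case the semicircle law is easy to check numerically. The sketch below (the helper names are ours and not part of the derivation) samples a GUE-type matrix normalized so that $J^2=\langle \frac{1}{N}\mathop{\mathrm{Tr}}\hat{H}^2\rangle$ stays finite, and compares the empirical counting measure with the semicircle prediction.

```python
import numpy as np

def gue_eigenvalues(n, j=1.0, seed=0):
    """Eigenvalues of an n x n GUE-type matrix scaled so that
    (1/n) <Tr H^2> -> j^2, i.e. the spectrum fills [-2J, 2J]."""
    rng = np.random.default_rng(seed)
    a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    h = (a + a.conj().T) / 2            # Hermitian part of a complex Gaussian
    return np.linalg.eigvalsh(h * j / np.sqrt(n))

def semicircle_density(x, j=1.0):
    """Limiting density nu(X) = sqrt(4 J^2 - X^2) / (2 pi J^2)."""
    return np.sqrt(np.clip(4.0 * j**2 - x**2, 0.0, None)) / (2.0 * np.pi * j**2)

if __name__ == "__main__":
    lam = gue_eigenvalues(400)
    # Fraction of eigenvalues in [-1, 1]: the semicircle predicts ~0.61 for J = 1
    x = np.linspace(-1.0, 1.0, 2001)
    predicted = semicircle_density(x).mean() * 2.0   # simple quadrature
    print(np.mean(np.abs(lam) < 1.0), predicted)
```

Already at $N=400$ the empirical fraction agrees with the semicircle integral to within a few percent, illustrating how quickly the global-scale density sets in.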
Similar universality is also known for
complex spectra \cite{Gir,Bai}.
From the point of view of
universality the semicircular eigenvalue density
is not extremely robust.
It is most easily violated by considering the
important class of so-called ``invariant ensembles''
characterized by a probability density
of the form ${\cal P}(\hat{H})\propto
\exp{(-N\mathop{\mathrm{Tr}} V(\hat{H}))}$, with $V(\hat{H})$ being an even polynomial.
The corresponding eigenvalue density turns out to be highly nonuniversal
and determined by the particular form of the potential $V(\hat{H})$ \cite{BIPZ,MPS}.
Only for $V(\hat{H})=\hat{H}^2$ it is given by the
semicircular law, Eq.(\ref{semi}). Moreover, one can easily
have a non-semicircular
eigenvalue density even for real symmetric matrices
$\hat{S};\quad S_{ij}=S_{ji}$
with i.i.d. entries, if one keeps
the mean number of non-zero entries $p$ per column of the order
of unity when taking the limit $N\to\infty$.
This is a characteristic feature of the
so-called {\it sparse} random matrices\cite{spa,MF,FSC}.
Much more profound universality emerges on the local scale in
the real spectra. The statistical behavior of eigenvalues
separated by distance $S=s\Delta$ measured in units
of the mean eigenvalue spacing $\Delta $ is dictated by the
global matrix symmetries (e.g. if they are complex Hermitian or real
symmetric
\cite{Mehta}), being the same for all random matrix
ensembles within
a fixed symmetry class.
All ensemble specific information is encoded
in $\Delta $. On different levels of rigor, this
universality was established for ``invariant'' ensembles (i.e.\
matrices with invariant probability distributions)
\cite{Pas1,bz,HW} and for matrices with i.i.d.\ entries, including
sparse matrices \cite{MF,note}. Similar universality
holds on a larger scale $S \gg \Delta $ \cite{univ1,kkp} and in the
vicinity of the spectrum edges \cite{bb,sosh}.
It turns out that it is the {\it local scale} universality
that is most relevant for real physical systems \cite{Bohigas}.
Namely, statistics of highly excited
bound states of {\it closed} quantum chaotic systems
of quite different microscopic nature turn out to be independent of
the microscopic details when sampled on the energy intervals large in
comparison with the mean level separation, but smaller
than the energy scale related by the Heisenberg uncertainty
principle to the
relaxation time necessary for a classically chaotic system to reach
equilibrium in phase space \cite{AlSi}.
Moreover,
these statistics
turn out to be identical to
those of
large random matrices on the {\it local} scale,
with different symmetry classes
corresponding to the presence or absence of time-reversal symmetry.
One of the aims of the present paper is to demonstrate that
complex spectra of weakly non-Hermitian random matrices
possess a universality property
which is as robust as the above mentioned local scale
universality in the real spectra of Hermitian matrices.
Weakly non-Hermitian matrices
appear naturally when one uses the Heidelberg approach to describe
few-channel chaotic scattering \cite{FS}.
When the number
$M$ of open channels is small in comparison with the number
$N$ of the relevant resonances, the majority of the S-matrix poles
(resonances) are situated close to the real axis.
This is well captured within the Heidelberg approach.
With a proper normalization of $\hat H$ and $\hat W$, the imaginary part
of typical eigenvalues of the effective
Hamiltonian ${\cal H}_{eff}$ is of the order of the mean
separation between neighboring eigenvalues along the real axis.
This latter property is a characteristic feature of the regime of weak non-Hermiticity.
Motivated by this example we introduced in \cite{FKS1} another
ensemble of weakly non-Hermitian random matrices. This ensemble consists of
almost-Hermitian matrices which interpolate
between the Gaussian ensemble of Hermitian matrices (GUE) and the
Gaussian ensemble of complex matrices studied by Ginibre. It turned out that
the eigenvalue distribution for almost-Hermitian random matrices
is described by a formula \cite{FKS1} containing as two opposite
limit cases both the Wigner semicircular distribution of real
eigenvalues and the uniform distribution of complex eigenvalues obtained
by Ginibre. Further studies of almost-Hermitian random
matrices \cite{FKS2} showed that actually all their
local scale eigenvalues statistics
describe crossover between those of the GUE and Ginibre ensembles.
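As a point of reference for this crossover, the Ginibre limit itself is easy to reproduce numerically: the eigenvalues of a matrix with i.i.d. complex Gaussian entries of variance $1/N$ fill the unit disk with asymptotically uniform density (the circular law). The following sketch, with our own function naming, checks this behavior.

```python
import numpy as np

def ginibre_eigenvalues(n, seed=0):
    """Eigenvalues of an n x n complex Ginibre matrix with entry
    variance 1/n; for large n they fill the unit disk uniformly."""
    rng = np.random.default_rng(seed)
    g = (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))) / np.sqrt(2 * n)
    return np.linalg.eigvals(g)

if __name__ == "__main__":
    z = ginibre_eigenvalues(1000)
    # Uniform density on the disk => P(|z| < r) = r^2
    for r in (0.5, 0.8):
        print(r, np.mean(np.abs(z) < r))   # close to r**2
```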
Later on Efetov, in his studies of directed localization
\cite{Efnonh}, discovered that weakly non-Hermitian matrices
are relevant to the problem of motion of flux lines in
superconductors with columnar defects. Efetov's matrices
are real almost-symmetric. They
interpolate between the Gaussian ensemble of real symmetric
matrices (GOE) and the Gaussian ensemble of real asymmetric
matrices. This development clearly shows that, apart from being
a rich and largely unexplored mathematical object, weakly
non-Hermitian random matrices enjoy direct physical applications
and deserve a detailed study.
The present paper consists of two parts. In the first
part we study a three parameter family of
random matrix ensembles which contains the above
mentioned ensembles of almost-Hermitian and
almost-symmetric matrices.
Our random matrices are of the form
\[
\hat H =
(\hat S_1 + iu\hat A_1) + iv (\hat S_2 + iw\hat A_2),
\]
where the four matrices on the right-hand side are
mutually independent, with $\hat S_{1,2}$ being real
symmetric and $\hat A_{1,2}$ being
real skew-symmetric. By choosing matrix distributions
and varying the parameter values one obtains different
ensembles of non-Hermitian matrices. We use that
normalization of matrix elements which ensures that
\[
\lim_{N\to \infty } \frac{1}{N} \langle \mathop{\mathrm{Tr}} S_j^2 \rangle =
\lim_{N\to \infty } \frac{1}{N} \langle \mathop{\mathrm{Tr}} A_jA_j^T \rangle =1,
\;\;\; j=1,2
\]
$N$ being the matrix dimension. The parameters $v$ and $u$
are scaled with the matrix dimension:
\[
v=\frac{\alpha }{2\sqrt{N}},\;\;\;\; u=\frac{\phi }{2\sqrt{N}},
\]
and $\alpha$, $\phi $, and $w$ are
assumed to be of the order of unity
in the limit $N\to \infty $. The above scaling of $v$
provides access to
the regime of weak non-Hermiticity, while the scaling of
$u$ describes the crossover between the GOE and GUE
types of behavior of the eigenvalues of the
Hermitian part of $\hat H$.
A simple argument \cite{FKS1} based on
the perturbation theory shows that
for our random matrices
the eigenvalue deviations
from the real axis
are of the order of $1/N$ when $N$ is
large, i.e. of the same order as the typical separation between
the real eigenvalues of the Hermitian matrix $\hat{S}_1+iu \hat{A}_1$. Hence,
in order to obtain a nontrivial eigenvalue distribution in the limit $N\to \infty $
one has to magnify the imaginary parts by scaling them with
the matrix dimension.
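The stated $1/N$ scaling of the imaginary parts is easy to observe numerically. The sketch below (Gaussian entries are chosen purely for convenience; this part of the paper allows general i.i.d. distributions) builds $\hat H=(\hat S_1+iu\hat A_1)+iv(\hat S_2+iw\hat A_2)$ with the normalization above and the scalings $v=\alpha/(2\sqrt{N})$, $u=\phi/(2\sqrt{N})$, and verifies that $N\cdot\overline{|\mathop{\mathrm{Im}} Z_k|}$ stays of order one as $N$ grows.

```python
import numpy as np

def weakly_nonhermitian(n, alpha=1.0, phi=1.0, w=1.0, seed=0):
    """Eigenvalues of H = (S1 + i*u*A1) + i*v*(S2 + i*w*A2) with Gaussian
    entries, normalized so (1/n) Tr S^2 = (1/n) Tr A A^T = 1 on average,
    and v = alpha/(2 sqrt(n)), u = phi/(2 sqrt(n))."""
    rng = np.random.default_rng(seed)

    def sym():
        x = rng.normal(size=(n, n))
        return (x + x.T) / np.sqrt(2 * n)     # real symmetric

    def asym():
        x = rng.normal(size=(n, n))
        return (x - x.T) / np.sqrt(2 * n)     # real antisymmetric

    u = phi / (2 * np.sqrt(n))
    v = alpha / (2 * np.sqrt(n))
    h = (sym() + 1j * u * asym()) + 1j * v * (sym() + 1j * w * asym())
    return np.linalg.eigvals(h)

if __name__ == "__main__":
    for n in (100, 400):
        z = weakly_nonhermitian(n)
        print(n, n * np.mean(np.abs(z.imag)))   # roughly constant in n
```

The quantity $N\cdot\overline{|\mathop{\mathrm{Im}} Z_k|}$ is essentially the same for $N=100$ and $N=400$, which is precisely the weak non-Hermiticity regime described above.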
Our study of the scaled eigenvalues of $\hat H$
is based on the supersymmetry technique. We express
the density of the scaled eigenvalues in
the form of
a correlation function of a certain zero-dimensional
non-linear $\sigma$-model. The obtained correlation function
is given by a supersymmetric integral which involves only
the density of the limit eigenvalue
distribution of the Hermitian part
of $\hat H$ and the parameters $\alpha$, $\phi$, $w$.
In two particular cases this supersymmetric integral can be
explicitly evaluated yielding the earlier obtained distributions
of complex eigenvalues for almost-Hermitian \cite{FKS1,FKS2}
and almost-symmetric matrices \cite{Efnonh}.
The supersymmetric $\sigma$-model
was invented long ago by Efetov in the context of the theory of disordered
metals and Anderson localization,
and since then has been successfully applied to
diverse problems \cite{Efbook,my}.
Application of this
technique to the calculation of the mean density of complex
eigenvalues of non-Hermitian random matrices
was done for the first time in our earlier works \cite{FS,FSR,FKS1}
and further advanced by Efetov \cite{Efnonh} in the context of
description of flux line motion in a disordered
superconductor with columnar defects.
A detailed account of our calculations is given
for sparse matrices \cite{spa,MF,FSC} with i.i.d.\ entries,
although our results are
extended to ``invariant'' ensembles
and conventional
random matrices with i.i.d.\ entries.
We assume that
matrix entries of
$\hat S_k$ and $\hat A_k$ are distributed on the
real axis with the density
\begin{equation}\label{distr}
{\cal P}(x)=\left(1-\frac{p}{N}\right)\delta (x)+\frac{p}{N}h(x),
\end{equation}
where $h(x)$ is an arbitrary
symmetric density function,
$h(x)=h(-x)$, having no delta function singularity
at $x=0$ and satisfying the condition $\int x^2 h(x) dx< \infty $.
We also assume that the mean number of nonzero entries $p$
exceeds some threshold value: $p>p_l$, see \cite{note}.
We want to stress that Eq.\ (\ref{distr}) describes the most general class of
random matrices whose entries are i.i.d. variables with a finite second
moment \cite{MF}. In particular, in the first part of our paper we do not assume
the matrix entries to be Gaussian.
We believe that here the power of the supersymmetry method is most evident,
as we are not aware of any other analytical technique allowing one to treat
this general case non-perturbatively.
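A sampler for the entry distribution (\ref{distr}) is straightforward. The sketch below uses a standard normal for $h(x)$ purely as an example choice (any symmetric density with finite second moment and no delta function at zero qualifies), and checks that the mean number of nonzero entries per column is indeed close to $p$.

```python
import numpy as np

def sparse_symmetric(n, p, seed=0):
    """Real symmetric matrix whose entries follow the sparse density:
    zero with probability 1 - p/n, otherwise drawn from h(x).
    Here h(x) is taken to be the standard normal density (an example)."""
    rng = np.random.default_rng(seed)
    mask = rng.random(size=(n, n)) < p / n       # keep an entry w.p. p/n
    vals = rng.normal(size=(n, n))               # h(x): symmetric, finite var
    x = np.where(mask, vals, 0.0)
    return np.triu(x) + np.triu(x, 1).T          # symmetrize

if __name__ == "__main__":
    s = sparse_symmetric(1000, p=10.0)
    print(np.mean(np.count_nonzero(s, axis=0)))  # close to p = 10
```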
Although giving an important insight into the problem, the supersymmetry
non-linear $\sigma-$model technique
suffers from at least two deficiencies. The most essential one is
that the present state of the art in the application of the
supersymmetry technique gives little hope
of access to quantities describing {\it correlations}
between different eigenvalues in the complex plane
due to insurmountable technical difficulties.
At the same time, conventional theory of random Hermitian matrices
suggests
that these {\it universal} correlations are the most interesting
features.
The second drawback is conceptual: the supersymmetry technique
itself is not
a rigorous mathematical tool at the moment and should be considered
as a
heuristic one from the point of view of a mathematician.
In the second part of the present paper we
develop the rigorous mathematical theory of weakly non-Hermitian
random matrices of a particular type:
almost-Hermitian Gaussian. Our consideration is based on the method of
orthogonal polynomials. Such a method is free from the above mentioned
problem and allows us to study
correlation properties of complex spectra to the same degree as
is typical for earlier studied classes of random matrices.
The results were reported earlier in the form of a
Letter-style communication \cite{FKS2}. Unfortunately, the paper
\cite{FKS2} contains a number of misleading misprints. For this
reason we
indicate those misprints in the present text by using footnotes.
\section{Regime of Weak non-Hermiticity:
Universal Density of Complex Eigenvalues}
To begin with, any $N\times N$ matrix $\hat{J}$ can be decomposed
into a sum of its Hermitian and skew-Hermitian parts:
$
\hat{J}=\hat{H}_1+i\hat{H}_2,
$
where $\hat{H}_1=(\hat{J}+\hat{J}^\dagger )/2 $ and $\hat{H}_2
=(\hat{J}-\hat{J}^\dagger )/(2i)$. Accordingly, we
consider an ensemble of random $N\times N$ complex matrices
$\hat{J}=\hat{H}_1+iv\hat{H}_2$
where $\hat{H}_p;\,\, p=1,2$ are both Hermitian:
$\hat{H}^{\dagger}_p=\hat{H}_p$. The parameter $v$ is used to
control the
degree of non-Hermiticity.
In turn, complex Hermitian matrices $\hat{H}_p$
can always be represented as $\hat{H}_1=\hat{S}_1+iu\hat{A}_1$ and
$\hat{H}_2=\hat{S}_2+iw\hat{A}_2$, where $\hat{S}_p=\hat{S}_p^T$
is a real symmetric matrix, and $\hat{A}_p=-\hat{A}_p^T$
is a real antisymmetric one. From this point of view the parameters
$u,w$ control the degree of asymmetry.
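This decomposition is elementary to verify numerically. The following sketch (helper names are ours) splits a generic complex matrix into the four real pieces and reconstructs it, here with $u=v=w=1$.

```python
import numpy as np

def decompose(j):
    """Split J = (S1 + i*A1) + i*(S2 + i*A2), with S real symmetric and
    A real antisymmetric, via the Hermitian/skew-Hermitian parts
    H1 = (J + J^+)/2 and H2 = (J - J^+)/(2i)."""
    h1 = (j + j.conj().T) / 2
    h2 = (j - j.conj().T) / 2j
    s1, a1 = h1.real, h1.imag          # H1 = S1 + i A1 (H1 Hermitian)
    s2, a2 = h2.real, h2.imag          # H2 = S2 + i A2 (H2 Hermitian)
    return s1, a1, s2, a2

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    j = rng.normal(size=(5, 5)) + 1j * rng.normal(size=(5, 5))
    s1, a1, s2, a2 = decompose(j)
    assert np.allclose(s1, s1.T) and np.allclose(a1, -a1.T)
    assert np.allclose(s2, s2.T) and np.allclose(a2, -a2.T)
    assert np.allclose(j, (s1 + 1j * a1) + 1j * (s2 + 1j * a2))
```

Because a Hermitian matrix has a symmetric real part and an antisymmetric imaginary part, the four pieces come out with the required symmetries automatically.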
Throughout the paper we consider the matrices
$\hat{S}_1,\hat{S}_2,\hat{A}_1,
\hat{A}_2$ to be mutually statistically independent, with i.i.d. entries
normalized in such a way that:
\begin{equation}\label{norm}
\lim_{N\to\infty}\frac{1}{N}\mathop{\mathrm{Tr}}\hat{S}_p^2=
\lim_{N\to\infty}\frac{1}{N}\mathop{\mathrm{Tr}}\hat{A}_p\hat{A}_p^T=1
\end{equation}
As is well known \cite{Bohigas}, this normalization ensures
that for any value of the parameter $u\ne 0$ such that $u=O(1)$
when $N\to \infty$, the statistics
of real eigenvalues of the Hermitian matrix of the form
$\hat{H}=\hat{S}+iu\hat{A}$ is identical (up to a trivial rescaling) to
that of $u=1$, the latter case known as the Gaussian Unitary
Ensemble (GUE).
On the other hand, for $u\equiv 0$ the real eigenvalues of a real
symmetric matrix
$\hat{S}$
follow another pattern of the so-called Gaussian Orthogonal
Ensemble (GOE).
The non-trivial crossover between GUE and GOE types of statistical
behaviour happens on a scale $u\propto 1/N^{1/2}$\cite{cross}.
This scaling can be easily
understood by purely perturbative arguments\cite{Alt}.
Namely, for $u\propto 1/N^{1/2}$ the typical shift
$\delta \lambda$ of the eigenvalues of the symmetric matrix $\hat{S}$ due to
the antisymmetric perturbation $iu\hat{A}$ is of the same
order as the mean spacing $\Delta $ between unperturbed eigenvalues:
$\delta\lambda\sim \Delta\sim 1/N$.
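The GOE-GUE crossover can also be probed numerically through local spacing statistics. A convenient diagnostic is the consecutive-spacing ratio $r_k=\min(s_k,s_{k+1})/\max(s_k,s_{k+1})$, whose mean is known to be $\approx 0.536$ for the GOE and $\approx 0.603$ for the GUE. The sketch below (Gaussian entries and our own naming) contrasts $u=0$ with $u=O(1)$.

```python
import numpy as np

def mean_spacing_ratio(n, u, seed=0):
    """Mean of r_k = min(s_k, s_{k+1}) / max(s_k, s_{k+1}) over the central
    half of the spectrum of H = S + i*u*A, where S is real symmetric and
    A real antisymmetric with Gaussian entries (so H is Hermitian)."""
    rng = np.random.default_rng(seed)
    x = rng.normal(size=(n, n))
    s = (x + x.T) / np.sqrt(2 * n)
    y = rng.normal(size=(n, n))
    a = (y - y.T) / np.sqrt(2 * n)
    lam = np.linalg.eigvalsh(s + 1j * u * a)
    lam = lam[n // 4: 3 * n // 4]          # stay away from the spectrum edges
    d = np.diff(lam)
    r = np.minimum(d[:-1], d[1:]) / np.maximum(d[:-1], d[1:])
    return r.mean()

if __name__ == "__main__":
    n = 1000
    print("u = 0 (GOE-like):", mean_spacing_ratio(n, 0.0))   # near 0.536
    print("u = 1 (GUE-like):", mean_spacing_ratio(n, 1.0))   # near 0.603
```

For $u$ in between, scaled as $u\propto 1/N^{1/2}$, the mean ratio interpolates between the two values, consistent with the crossover scale discussed above.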
Similar perturbative arguments show \cite{FKS1} that the
most interesting behaviour
of {\it complex} eigenvalues of non-Hermitian matrices
should be expected for the parameter $v$ being scaled in
a similar way: $v\propto 1/N^{1/2}$.
This is just the regime in which the {\it imaginary} part
$\mathop{\mathrm{Im}} Z_k $ of a typical eigenvalue $Z_k$ due to
the non-Hermitian perturbation is of the same order as the
mean spacing $\Delta $ between unperturbed real eigenvalues:
$\mathop{\mathrm{Im}} Z_k \sim \Delta\sim 1/N$.
Under these conditions a non-Hermitian matrix $\hat{J}$ still
``remembers'' the statistics of its Hermitian part $\hat{H}_1$.
As will be clear afterwards, the parameter $w$
should be kept of the order of unity in order to influence the statistics
of the complex eigenvalues.
This is just the regime of {\it weak non-Hermiticity} that we are
interested in.
Correspondingly, we scale the parameters as
\footnote{In the Letter \cite{FKS2} there is a misprint in the
definition of the parameter $\alpha$.}:
\begin{equation}\label{scale}
v=\frac{\alpha}{2\sqrt{N}};\quad u=\frac{\phi}{2\sqrt{N}}
\end{equation}
and consider $\alpha,\phi,w$ fixed and of order $O(1)$ when $N\to \infty$.
One can recover the spectral density
\begin{equation}\label{defden}
\rho(Z)=\sum_{k=1}^N\delta^{(2)}(Z-Z_k)=\sum_{k=1}^N
\delta(X-X_k)\delta(Y-Y_k)=\rho(X,Y)
\end{equation}
of complex eigenvalues $Z_k=X_k+iY_k, \quad k=1,2,...,N$
from the generating function (cf.\cite{neural,FSR})
\begin{equation}\label{genf}
{\Large \cal Z}=\frac{\mathop{\mathrm{Det}}\left[(Z-J)(Z-J)^{\dagger}+\kappa^2\right]}
{\mathop{\mathrm{Det}}\left[(Z_b-J)(Z_b-J)^{\dagger}+\kappa^2\right]}
\end{equation}
as
\[
\rho(Z)=-\frac{1}{\pi}\lim_{\kappa\to 0}\frac{\partial}{\partial Z^*}
\lim_{Z_b\to Z}\frac{\partial}{\partial Z_b}{\Large\cal Z}.
\]
To facilitate the ensemble averaging we first represent the ratio
of the two
determinants in Eq.(\ref{genf}) as the Gaussian integral
\begin{equation}\label{genf1}
{\Large \cal Z}=\int \prod_{i=1}^{N}[d \Phi_{i}] \exp\{{\cal
L}_1(\Phi)+{\cal L}_2(\Phi)\}
\end{equation}
over 8-component supervectors $\Phi_{i}$,
\[
\Phi_{i}=
\left(
\begin{array}{c}
\Psi_{i}(+)\\
\Psi_{i}(-)
\end{array}
\right)
,
\Psi_{i}(\pm)=
\left(
\begin{array}{c}
\vec{R}_{i}(\pm)\\
\vec{\eta}_{i}(\pm)
\end{array}
\right),
\vec{R}_{i}(\pm)=
\left(
\begin{array}{c}
r_{i}(\pm)\\
r_{i}^{*}(\pm)
\end{array}
\right),
\vec{\eta}_{i}(\pm)=
\left(
\begin{array}{c}
\chi_{i}(\pm)\\
\chi_{i}^{*}(\pm)
\end{array}
\right)
\]
with components $r_{i}(+),r_{i}(-);\quad i=1,2,...,N$
being complex commuting
variables and $\chi_{i}(+),\chi_{i}(-)$ forming the
corresponding Grassmannian
parts of the supervectors $\Psi_{i}(\pm)$.
The terms in the exponent of Eq.(\ref{genf1}) are of the following form:
\begin{eqnarray}\label{l2}
{\cal L}_1(\Phi)&=&-\frac{i}{2}\sum_{i} \Phi_{i}^{\dagger}
\left\{S_{1,{ii}}\hat{\Lambda}
\right\}\Phi_{i}-i\sum_{i<j}
\Phi_{i}^{\dagger}\left\{S_{1,ij}\hat{\Lambda}\right\}\Phi_{j} \\
\label{l1}
{\cal L}_2(\Phi)&=&\frac{i}{2}\sum_{i} \Phi_{i}^{\dagger}
\left\{ X_b\hat{\Lambda}_b +X\hat{\Lambda}_f
-i\kappa \hat{I}+iY_b\hat{\Sigma}_{\tau,b}+iY
\hat{\Sigma}_{\tau,f}\right\}\Phi_{i}-\\
\nonumber
& & i\sum_{i<j}\Phi_{i}^{\dagger}\left\{ivS_{2,ij}
\hat{\Sigma}_{\tau}+iu A_{1,ij}
\hat{\Lambda}_{\tau}+vw A_{2,ij}\hat{\Sigma}\right\}\Phi_{j}.
\end{eqnarray}
Here
$\hat{I}_2=\mathop{\mathrm{diag}} (1,1)$,
$\hat{I}_4= \mathop{\mathrm{diag}} (\hat{I}_2,\hat{I}_2)$,
$\hat{\Lambda}= \mathop{\mathrm{diag}} (\hat{I}_4,-\hat{I}_4)$,
$\hat{\Lambda}_b= \mathop{\mathrm{diag}} (\hat{I}_2,\hat{0}_2,-\hat{I}_2,\hat{0}_2)$,
$\hat{\Lambda}_f=\hat{\Lambda}-\hat{\Lambda}_b$,
$\hat{\Sigma}=\hat{\Sigma}_b+\hat{\Sigma}_f$,
and
\[
\hat{\Sigma}_b=
\left(
\begin{array}{cc}
\hat{0}_4 &
\left(
\begin{array}{cc}
\hat{I}_2&\hat{0}_2\\
\hat{0}_2&\hat{0}_2
\end{array}\right)
\\
\left(
\begin{array}{cc}
-\hat{I}_2&\hat{0}_2\\
\hat{0}_2&\hat{0}_2
\end{array}
\right)&{\hat{0}_4}
\end{array}
\right); \quad \quad
\hat{\Sigma}_f=
\left(
\begin{array}{cc}
\hat{0}_4 & \left(
\begin{array}{cc}
\hat{0}_2&\hat{0}_2\\
\hat{0}_2&\hat{I}_2
\end{array}\right)\\
\left(
\begin{array}{cc}
\hat{0}_2&\hat{0}_2\\
\hat{0}_2&-\hat{I}_2
\end{array}
\right)
& \hat{0}_4
\end{array}
\right).
\]
and the matrices $\hat{\Sigma}_{\tau,b},\hat{\Sigma}_{\tau,f},
\hat{\Sigma}_{\tau},\hat{\Lambda}_{\tau}$ are obtained from the
corresponding matrices without subindex $\tau$ by replacing
all $\hat{I}_2$ blocks with the matrices
$ \hat{\tau}=\mathop{\mathrm{diag}} (1,-1)$.
We also use $\vec{\eta}_{i}^{\dagger}(\pm)=
\left(\chi_{i}^{*}(\pm);- \chi_{i}(\pm)\right)$.
When writing ${\cal L}_2(\Phi)$ in Eq.(\ref{l1})
we have used the fact that
diagonal matrix elements $S_{2,ii}$
for $ i=1,..,N$ give total contribution
of the order of $O(1/N)$ with respect to
the total contribution of the
off-diagonal ones and can be safely disregarded.
Now we should perform the ensemble averaging of the generating function.
We find it to be convenient to average first over the distribution
of matrix
elements of the real symmetric matrix $\hat{S}_1$.
These elements are
assumed to be distributed according to Eq.(\ref{distr}).
Before presenting the derivation for our case, let us recall the general
strategy. The procedure consists of three steps. The first step is the
averaging of the generating function over the disorder. It can be done
trivially, owing to the statistical independence of the matrix elements,
because the integrand is a product of exponents, each depending only
on a particular matrix element $H_{ij}$. Once this averaging is performed,
the integrand ceases to be a simple Gaussian, and the integration over
the supervectors can no longer be carried out without further tricks.
When the matrix elements are Gaussian-distributed, this difficulty is
circumvented in a standard way by exploiting the so-called
Hubbard-Stratonovich transformation. That transformation amounts to making
the integrand Gaussian with respect to the components of the supervectors
by introducing new auxiliary integrations.
After that the integral over the supervectors can be performed exactly, and
the remaining auxiliary degrees of freedom are integrated out in the
saddle-point approximation justified by the large parameter $N$.
As is shown in the paper \cite{MF}, there exists an analogue of the
Hubbard-Stratonovich transformation allowing one to perform the steps above
also in the case of an arbitrary non-Gaussian distribution. The main
difference from the Gaussian case is that the auxiliary integration has to
be chosen in the form of a {\it functional integral}.
Our presentation follows the procedure suggested in \cite{MF},
and presented also in some detail in \cite{FSC}
\footnote{Similarly to the paper \cite{MF} we first disregard the
necessity of compactification and use the matrix $\hat{\Lambda}$
rather than two different
matrices $\hat{L}$ and $\hat{\Lambda}$, see the discussion in \cite{FSC}}.
Exploiting the large
parameter $N\gg 1$ one can write:
\begin{equation}\label{ave}
\left\langle \exp{{\cal L}_1(\Phi)}\right\rangle\mid_{N\gg 1}\approx
\exp{\left[\frac{p}{2N}\sum_{i,j} h_{F}
(\Phi_{i}^{\dagger}\hat{\Lambda}\Phi_{j})\right]}
;\quad h_{F}(z)=\int\limits_{-\infty}^{\infty}ds h(s)e^{-isz}-1
\end{equation}
In order to proceed further we employ the functional Hubbard-Stratonovich
transformation introduced in \cite{MF}:
\begin{eqnarray}\label{HS}
\lefteqn{
\exp{
\left[
\frac{p}{2N}\sum_{i,j} h_{F}(\Phi_{i}^{\dagger}
\hat{\Lambda}\Phi_{j})
\right]
}
=} \\
\nonumber
& & \int {\cal D}g \
\exp{
\left[
-\frac{pN}{2}\int d\Theta d\tilde{\Theta} g(\Theta)
C(\Theta,\tilde{\Theta})g(\tilde{\Theta})+p\sum_{i=1}^{N}
g(\Phi_{i})
\right]
}
\end{eqnarray}
where the kernel $C(\Theta,\tilde{\Theta})$ is determined by the
relation:
\begin{equation}\label{ker}
\int d\tilde{\Theta}C(\Theta,\tilde{\Theta})
h_{F}(\tilde{\Theta}\hat{\Lambda}\Phi)=
\delta(\Theta,\Phi)\end{equation}
with the right-hand side of the eq.(\ref{ker}) being the
$\delta-$function
in the space of supervectors.
Substituting eq.(\ref{HS}) into averaged eq.(\ref{genf1}) and
changing the order of
integrations over $[d\Phi]_{i}$ and ${\cal D} g$ one obtains the averaged
generating function in the form:
\begin{equation}\label{gav}
\left\langle{\cal Z}\right\rangle=\int {\cal D}g\exp{\left[-N{\cal L}(g)+
\delta {\cal L}(g)\right]}
\end{equation}
where
\begin{eqnarray}\nonumber
{\cal L}(g)&=&\frac{p}{2}\int [d\Phi][d\tilde{\Phi}]\
g(\tilde{\Phi})C(\tilde{\Phi},
\Phi)g(\Phi)-\ln{\int [d\Phi]\ e^{{\cal F}(\Phi)}}\\
\label{calg}
\delta {\cal L}(g)&=&\ln \
\frac{\int\prod_{i=1}^{N}[d\Phi_{i}]\
\exp{\left[
\sum_{i}{\cal F}(\Phi_{i})+R\{\Phi\}\right]}}{\int
\prod_{i=1}^{N}[d\Phi_{i}]\
\exp{\left[\sum_{i}{\cal F}(\Phi_{i})\right]}}
\end{eqnarray}
with ${\cal F}(\Phi)
=\frac{i}{2}X\Phi^{\dagger}\hat{\Lambda}\Phi+pg(\Phi)$,
\[
R\{\Phi\}=\frac{1}{2}\sum_{i=1}^N\Phi_i^{\dagger}\hat{f}\Phi_i
-i\sum_{i<j}\Phi_{i}^{\dagger}\left\{ivS_{2,ij}\hat{\Sigma}_{\tau}+iu
A_{1,ij}
\hat{\Lambda}_{\tau}+
vw A_{2,ij}\hat{\Sigma}\right\}\Phi_{j},
\]
and $\hat{f}=\kappa\hat{I}-i(X-X_b)\hat{\Lambda}_b-
Y_b\hat{\Sigma}_{\tau,b} -Y\hat{\Sigma}_{\tau,f}$.
We are interested in evaluating the functional integral over ${\cal D}g$
in the limit $N\to \infty$ and $\kappa\to 0; X\to X_b$. Moreover, we
expect the eigenvalues of weakly non-Hermitian
matrices to have imaginary parts $Y$
of the order of $1/N$. Remembering also the chosen scaling
(\ref{scale}), we conclude that the argument of the logarithm in
Eq.(\ref{calg}) is close to unity and the term $\delta{\cal
L}(g)$ in
Eq.(\ref{gav})
should be treated
as a small perturbation to the first one. Then the functional integral
of the type $\int {\cal D}g (...) \exp{-N{\cal L}(g)}$ can be
evaluated by the saddle-point method. Varying the ``action''
${\cal L}(g)$ and using the relation
eq.(\ref{ker}) one obtains the following saddle point equation
$\delta {\cal
L}(g)/\delta g=0$ for the function
$g(\Phi)$:
\begin{equation}\label{sp}
g(\Phi)=\frac{\int [d\Theta] h_{F}(\Phi^{\dagger}\hat{\Lambda}\Theta)
\exp{{\cal F}(\Theta)}}{\int [d\Theta]\exp{{\cal F}(\Theta)}}
\end{equation}
A quite detailed investigation of the
properties of this equation was performed
in \cite{MF,FSC}. Below we give a summary of the main features of
eq.(\ref{sp}) following from such an analysis.
First, the solution $g(\Phi)$ to this equation can be sought in
the form
of a function $g_{s}(\Phi)=g_{0}(x,y)$ of two superinvariants:
$x=\Phi^{\dagger}\Phi$ and $y=\Phi^{\dagger}\hat{\Lambda}\Phi$.
As a result, the denominator in eq.(\ref{sp}) is equal to $1$ due to
the identity $\int [d\Phi] F(x,y)=F(0,0)$ which is a particular case of
the so-called Parisi-Sourlas-Efetov-Wegner (PSEW) theorem, see
e.g.\cite{my} and
references therein.
However, the form of the function $g_{0}(x,y)$ is essentially different
for the number of nonzero elements $p$ per matrix column
exceeding the threshold value $p=p_{l}$ and for $p<p_{l}$
\cite{MF,note}. Namely, for $p>p_{l}$ the function $g_{0}(x,y)$
is an {\it analytic} function of {\it both} arguments $x$ and $y$,
whereas
for $p<p_{l}$ such a function is dependent only on the second argument
$y=\Phi^{\dagger}\hat{\Lambda}\Phi$. At the same time, the
saddle-point equation
eq.(\ref{sp}) is {\it always} invariant w.r.t. any transformation
$g(\Phi)\to g(\tilde{T}\Phi)$ with supermatrices $\tilde{T}$ satisfying
the condition $\tilde{T}^{\dagger}\hat{\Lambda}\tilde{T}=\hat{\Lambda}$.
Combining all these facts together, one finds
that for $p>p_{l}$ a saddle-point
solution $g_{s}(\Phi)$ gives rise to the whole continuous manifold of
saddle-point solutions of the form: $g_{T}(\Phi)\equiv
g_{s}(\tilde{T}\Phi)=
g_{0}(\Phi^{\dagger}\tilde{T}^{\dagger}\tilde{T}\Phi,\Phi^{\dagger}\hat{\Lambda}
\Phi)$, so that all the manifold gives a nonvanishing contribution to the
functional integral eq.(\ref{gav}). It is the existence of the
saddle-point
manifold that is actually responsible for the universal random-matrix
correlations\cite{MF}.
We see that the saddle-point manifold is parametrized by the
supermatrices
$\tilde{T}$. It turns out, however, that one has to ``compactify''
the manifold
of $\tilde{T}$ matrices with respect to the ``fermion-fermion''
block in order to ensure
convergence of the integrals over the saddle-point manifolds \cite{VWZ}.
The resulting ``compactified'' matrices $\hat{T}$ form a graded Lie group
$\mbox{UOSP}(2,2/4)$. Properties of such matrices can
be found in \cite{VWZ} together with the integration measure $d\mu(T)$.
From now on we are going to consider only the case $p>p_{l}$. The
program of
calculation is as follows: (i) To find the expression for the term
$\delta {\cal L}(g)$ on the saddle-point manifold $g=g_{T}(\Phi)$
in the limit $N\to \infty$ and (ii) to calculate the integral
over the saddle-point manifold exactly.
Expanding the expression in Eq.\ (\ref{calg}) to the
first non-vanishing
order in $\hat{f},\hat{S}_2,\hat{A}_1,\hat{A}_{2}$, introducing the
notation ${\cal F}_T(\Phi)=
\frac{i}{2}X\Phi^{\dagger}\hat{\Lambda}\Phi+pg_{T}(\Phi)$
and using the relations
\begin{equation}\begin{array}{c}
\int [d\Phi] \exp{{\cal F}_{T}(\Phi)}=1;
\quad \int \prod_{k=1}^{N}[d\Phi_{k}]
\left(\Phi^{\dagger}_{i}\hat{B}\Phi_{j}\right)|_{i<j}
\exp{\sum_{k=1}^{N}{\cal F}_{T}(\Phi_{k})}=0;\\ \\
\int \prod_{k=1}^{N}[d\Phi_{k}]
\left(\Phi^{\dagger}_{i_{1}}\hat{B}\Phi_{j_{1}}\right)|_{i_{1}<j_{1}}
\left(\Phi_{i_{2}}^{\dagger}\hat{C}\Phi_{j_{2}}\right)|_{i_{2}<j_{2}}
\exp{\sum_{k=1}^{N}{\cal F}_{T}(\Phi_{k})}\\=
\delta_{i_{1}i_{2}}\delta_{j_{1}j_{2}}
\int [d\Phi_{i_{1}}][d\Phi_{j_{1}}]
\left(\Phi^{\dagger}_{i_{1}}\hat{B}\Phi_{j_{1}}\right)
\left(\Phi_{i_{1}}^{\dagger}\hat{C}\Phi_{j_{1}}\right)_{i_{1}<j_{1}}
\exp{\left[{\cal F}_{T}(\Phi_{i_{1}})+{\cal F}_{T}(\Phi_{j_{1}})\right]}
\end{array}\end{equation}
which hold for arbitrary $8\times 8$ supermatrices $\hat{B},\hat{C}$,
one finds that
\begin{equation}\label{expan}
\delta {\cal L}(g_{T}) =\frac{1}{2}\langle\Phi^{\dagger}
\hat{f}\Phi\rangle_T+\delta{\cal L}_R+\delta{\cal L}_{IR},
\end{equation}
where
\begin{eqnarray*}
\delta{\cal L}_R &=& \frac{v^2}{2}
\sum_{i<j}\left(\hat{S}_2\right)^2_{ij}
\left\langle\left[\Phi_1^{\dagger}
\hat{\Sigma}_{\tau}\Phi_2\right]
\left[\Phi_1^{\dagger}\hat{\Sigma}_{\tau}
\Phi_2\right]\right\rangle_T \\
&+& \frac{u^2}{2}
\sum_{i<j} \left(\hat{A}_1\right)^2_{ij}
\left\langle\left[\Phi_1^{\dagger}
\hat{\Lambda}_{\tau}
\Phi_2\right]\left[\Phi_1^{\dagger}\hat{\Lambda}_{\tau}
\Phi_2\right]\right\rangle_T - \frac{v^2w^2}{2}
\sum_{i<j} \left(\hat{A}_2\right)^2_{ij}
\left\langle\left[\Phi_1^{\dagger}
\hat{\Sigma}\Phi_2\right]\left[\Phi_1^{\dagger}\hat{\Sigma}
\Phi_2\right]\right\rangle_T,\\
\delta{\cal L}_{IR} &=& uv
\sum_{i<j} \left(\hat{S}_2\right)_{ij}\left(\hat{A}_1\right)_{ij}
\left\langle\left[\Phi_1^{\dagger}
\hat{\Sigma}_{\tau}\Phi_2\right]
\left[\Phi_1^{\dagger}\hat{\Lambda}_{\tau}
\Phi_2\right]\right\rangle_T - iv^2w
\sum_{i<j}\left(\hat{S}_2\right)_{ij}\left(\hat{A}_2\right)_{ij}
\left\langle\left[\Phi_1^{\dagger}\hat{\Sigma}_{\tau}
\Phi_2\right]\left[\Phi_1^{\dagger}\hat{\Sigma}
\Phi_2\right]\right\rangle_T \\
&-& iuvw
\sum_{i<j}\left(\hat{A}_1\right)_{ij}\left(\hat{A}_2\right)_{ij}
\left\langle\left[\Phi_1^{\dagger}
\hat{\Lambda}_{\tau}\Phi_2\right]\left[\Phi_1^{\dagger}
\hat{\Sigma}\Phi_2\right]\right\rangle_T
\end{eqnarray*}
and we used the notations:
\begin{eqnarray*}
\left\langle\left[\Phi^{\dagger}\hat{B}\Phi\right]\right\rangle_T
&=&
\int [d\Phi]\left(\Phi^{\dagger}\hat{B}\Phi\right)e^{{\cal F}_T(\Phi)} \\
\left\langle\left[\Phi_1^{\dagger}\hat{B}
\Phi_2\right]\left[\Phi_1^{\dagger}\hat{C}\Phi_2\right]\right\rangle_T
&=&
\int [d\Phi_{1}][d\Phi_2]
\left(\Phi^{\dagger}_{1}\hat{B}\Phi_{2}\right)
\left(\Phi_1^{\dagger}\hat{C}\Phi_{2}\right)
\exp{\left[{\cal F}_{T}(\Phi_{1})+{\cal F}_{T}(\Phi_{2})\right]}.
\end{eqnarray*}
It is clear that with the chosen normalization [see
Eqs.\ (\ref{norm}) -- (\ref{scale})] we have
\begin{equation}
v^2\sum_{i<j}
\left(\hat{S}_2\right)^2_{ij}\to\frac{\alpha^2}{8};\quad
u^2\sum_{i<j}
\left(\hat{A}_1\right)^2_{ij}\to\frac{\phi^2}{8};\quad
w^2v^2\sum_{i<j}
\left(\hat{A}_2\right)^2_{ij}\to\frac{w^2\alpha^2}{8}
\end{equation}
when $N\to \infty$. On the other hand, it is easy to see that:
\begin{equation}
uv\sum_{i<j}
\left(\hat{S}_2\right)_{ij}\left(\hat{A}_1\right)_{ij}
\sim wv^2\sum_{i<j}\left(\hat{S}_2\right)_{ij}
\left(\hat{A}_2\right)_{ij}\sim
wuv\sum_{i<j}\left(\hat{A}_1\right)_{ij}
\left(\hat{A}_2\right)_{ij}=O(\frac{1}{N})
\end{equation}
because of the statistical independence of
$\hat{S}_2,\hat{A}_1,\hat{A}_2$ and the chosen normalization of
matrix elements.
Therefore, the part $\delta{\cal L}_{IR}$ can be safely neglected
in the limit of large $N$.
To proceed further it is convenient to introduce the $8\times 8$
supermatrix $W$ with elements
\begin{equation}\label{w}
W_{\alpha\beta}\equiv \int [d\Phi]\
\Phi_{\alpha}\Phi_{\beta}^{\dagger}
e^{{\cal F}_{T}(\Phi)}\end{equation}
Exploiting the saddle-point equation for the function $g_T(\Phi)=
g_0\left(\Phi^{\dagger}\hat{T}^{\dagger}\hat{T}\Phi,\Phi^{\dagger}
\Lambda\Phi\right)$ one can show (details can be found in
\cite{FSC}, Eqs.(67-70)) that the supermatrix $\hat{W}$ whose
elements are
$W_{\alpha\beta}$ can be written as:
\begin{equation}\label{ww}
\hat{W}=\frac{2}{B}
\left[g_{0y}\hat{\Lambda}+ig_{0x}\hat{\Lambda}\hat{Q}
\right]
\end{equation}
where $B$ is the second moment of the distribution $h(s)$:
$B=\int h(s) s^2 ds$
and $g_{0x}=\partial g_{0}/\partial x|_{x=y=0};\quad g_{0y}=
\partial g_{0}/\partial y|_{x=y=0}$.
In the expression above we introduced a new supermatrix:
$\hat{Q}=-i\hat{T}
^{-1}\hat{\Lambda}\hat{T}$.
Using the definition of the matrix $W$, one can rewrite the part
$\delta{\cal L}_R$ as follows (cf.\ \cite{FSC}, Eqs.\ (71)--(73)):
\begin{eqnarray}
\delta{\cal
L}_R=-\frac{1}{16}\left[\alpha^2 \mathop{\mathrm{Str}} \hat{W}\hat{\Sigma}_{\tau}
\hat{W}\hat{\Sigma}_{\tau}
-\phi^2 \mathop{\mathrm{Str}} \hat{W}\hat{\Lambda}_{\tau}\hat{W}\hat{\Lambda}_\tau+
w^2\phi^2 \mathop{\mathrm{Str}} \hat{W}\hat{\Sigma}\hat{W}\hat{\Sigma}\right]
\end{eqnarray}
Now one can use Eq.(\ref{ww}) together with the properties
$ \mathop{\mathrm{Str}} \hat{Q}= \mathop{\mathrm{Str}} \hat{\Lambda}= \mathop{\mathrm{Str}} \hat{I}=0;\quad
\hat{Q}^2=-\hat{I}$ to show that:
\[
\delta{\cal L}_R
=\frac{g_{0x}^2}{4B^2}\left[\alpha^2 \mathop{\mathrm{Str}} \hat{Q}\hat{\sigma}_{\tau}
\hat{Q}\hat{\sigma}_{\tau}-\phi^2
\mathop{\mathrm{Str}} \hat{Q}\hat{\tau}_2\hat{Q}\hat{\tau}_2+w^2\phi^2
\mathop{\mathrm{Str}} \hat{Q}\hat{\sigma}\hat{Q}\hat{\sigma}\right]
\]
where the $8\times 8$
supermatrices entering these expressions are as follows:
\[
\hat{\tau}_2= \mathop{\mathrm{diag}} \{\hat{\tau}_3,\hat{\tau}_3\};\quad
\hat{\sigma}_{\tau}=\left(\begin{array}{cc}0&\hat{\tau}_3\\
\hat{\tau}_3&0\end{array}\right);\quad
\hat{\sigma}=
\left(
\begin{array}{cc}
0&\hat{I}_4\\
\hat{I}_4&0
\end{array}
\right)
\]
and $\hat{\tau}_3$ is $4\times 4$ diagonal,
$\hat{\tau}_3= \mathop{\mathrm{diag}} \{\hat{\tau},\hat{\tau}\}$.
In the same way one finds:
\begin{eqnarray*}
\left\langle\Phi^{\dagger}\hat{f}\Phi\right\rangle_T
&=&
-\frac{4ig_{0y}}{B}(X-X_b)+\\
& &
\frac{2ig_{0x}}{B}
\left[
\kappa \mathop{\mathrm{Str}} \hat{Q}\hat{\Lambda}-i(X-X_b) \mathop{\mathrm{Str}}
\hat{K}_B\hat{Q}
-Y_B \mathop{\mathrm{Str}} \hat{\sigma}_\tau^{(B)}\hat{Q}-Y
\mathop{\mathrm{Str}} \hat{\sigma}_\tau^{(F)}\hat{Q}
\right],
\end{eqnarray*}
where
\[
\hat{K}_B= \mathop{\mathrm{diag}} \{\hat{I}_2,\hat{0}_2,\hat{I}_2,\hat{0}_2\};\quad
\hat{\sigma}_{\tau}^{(B,F)}=
\left(
\begin{array}{cc}
\hat{0}_4&\hat{\tau}_3^{(B,F)}\\
\hat{\tau}_3^{(B,F)}&\hat{0}_4
\end{array}\right)
\]
and $\hat{\tau}_3^{B,F}$ are $4\times 4$ diagonal supermatrices:
$\hat{\tau}_3^{(B)}= \mathop{\mathrm{diag}} \{\hat{\tau},\hat{0}_2\}$ and
$\hat{\tau}_3^{(F)}= \mathop{\mathrm{diag}} \{\hat{0}_2,\hat{\tau}\}$.
Finally, we use the relation between $g_0(x,y)$ and the mean eigenvalue
density for a sparse
symmetric matrix $\hat{S}_1$ at the point $X$ on the
real axis derived in \cite{MF}:
\begin{equation}\label{meanden}
\nu(X)\equiv\frac{1}{N}\langle \rho(X)\rangle=-\frac{2}{\pi B}g_{0x}
\end{equation}
Substituting the expressions for $\delta{\cal L}_R$ and
$\left\langle\Phi^{\dagger}\hat{f}\Phi\right\rangle_T$ into the
generating
function ${\cal Z}$, represented as an integral over
the saddle-point manifold
parametrized by the supermatrices $\hat{T}$ (or,
equivalently, by the supermatrices $\hat{Q}=-i\hat{T}^{-1}
\hat{\Lambda}\hat{T}$), and
taking the appropriate limits, we finally obtain:
\begin{equation}\label{unires}
\langle\rho(X,Y)\rangle=\frac{\pi[N\nu(X)]^2}{16}
\int d\mu(\hat{Q}) \mathop{\mathrm{Str}} \left(\hat{\sigma}_\tau^{(F)}\hat{Q}\right)
\mathop{\mathrm{Str}} \left(\hat{\sigma}_\tau\hat{Q}\right)\exp{\left[-S(\hat{Q})\right]}
\end{equation}
\[
S(\hat{Q})=-\frac{i}{2}y
\mathop{\mathrm{Str}}
\left(\hat{\sigma}_\tau\hat{Q}\right)-
\frac{a^2}{16}
\mathop{\mathrm{Str}} \left(\hat{\sigma}_\tau\hat{Q}\right)^2+
\frac{b^2}{16}
\mathop{\mathrm{Str}}
\left(\hat{\tau}_2\hat{Q}\right)^2
-\frac{c^2}{16}
\mathop{\mathrm{Str}}
\left(\hat{\sigma}\hat{Q}\right)^2
\]
where we introduced the scaled imaginary part $y=\pi\nu(X)NY$
and used the notation $a^2=\left(\pi\nu(X)\alpha\right)^2$,
$b^2=\left(\pi\nu(X)\phi\right)^2$, $c^2=\left(\pi\nu(X)\alpha
w\right)^2$.
The expression (\ref{unires}) is just the universal $\sigma$-model
representation of the mean density of complex eigenvalues in the
regime of weak non-Hermiticity we were looking for.
The universality is clearly manifest: all the particular details
of the
ensembles enter only through the mean density of
real eigenvalues $\nu(X)$. The density of complex
eigenvalues turns out to depend on three parameters:
$a,b$ and $c$, controlling the degree of non-Hermiticity
($a$) and the symmetry properties of the
Hermitian part ($b$) and of the non-Hermitian part ($c$).
The following comment is appropriate here. The derivation above
was done for ensembles with i.i.d. entries. However, one can
convince oneself
that the same expression would result if one started
instead from any "rotationally invariant" ensemble of
real symmetric matrices $\hat{S}_1$.
To do so one can employ the procedure invented by Hackenbroich and
Weidenm\"{u}ller \cite{HW} allowing one to map the
correlation functions of the invariant
ensembles (plus perturbations) onto those of Efetov's $\sigma$-model.
Still, in order to get an explicit expression for the
density of complex eigenvalues one has to
evaluate the integral over the set of supermatrices
$\hat{Q}$. In general, this is an elaborate
task due to the complexity of that manifold.
At present such an evaluation has been successfully performed
for two
important cases: those of almost-Hermitian matrices and
real almost-symmetric
matrices. The first case (which is technically the simplest one)
corresponds to $\phi\to\infty$, that is $b\to \infty$.
Under this condition only the part of the matrix
$\hat{Q}$ that commutes with
$\hat{\tau}_2$ provides a nonvanishing contribution. As a result,
$ \mathop{\mathrm{Str}} \left(\hat{\sigma}\hat{Q}\right)^2= \mathop{\mathrm{Str}}
\left(\hat{\sigma}_{\tau}\hat{Q}\right)^2$
so that the second and fourth terms in Eq.\ (\ref{unires})
can be combined together. Evaluating the resulting integral,
and introducing the notation $\tilde{a}^2=a^2+c^2$ one finds \cite{FKS1}:
\begin{eqnarray}\label{13}
\rho_X(y)=\sqrt{\frac{2}{\pi}}\frac{1}{\tilde{a}}\exp
\left( -\frac{2y^2}{\tilde{a}^2} \right)
\int\limits_0^1 dt \cosh (2ty) \exp{(-\tilde{a}^2t^2/2)},
\end{eqnarray}
where $\rho_X(y)$ is the density of the scaled imaginary parts $y$
for those eigenvalues, whose real parts are situated around the point
$X$ of the spectrum. It is related to the two-dimensional density
as $\rho_X(y)=\rho(X,Y)/\pi(N\nu(X))^2$.
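As a consistency check, the density in Eq.\ (\ref{13}) is properly normalized: using $\int_{-\infty}^{\infty} dy\, e^{-2y^2/\tilde{a}^2}\cosh(2ty)=\tilde{a}\sqrt{\pi/2}\,e^{\tilde{a}^2t^2/2}$ one finds $\int \rho_X(y)\,dy=1$ for any $\tilde{a}$. A minimal quadrature sketch confirming this (the grid sizes are arbitrary choices, not from the text):

```python
import numpy as np

def trap(f, x):
    """Composite trapezoidal rule along the last axis."""
    return np.sum(0.5 * (f[..., 1:] + f[..., :-1]) * np.diff(x), axis=-1)

def rho_x(y, a_tilde):
    """Density of scaled imaginary parts y, Eq. (13), via quadrature over t."""
    t = np.linspace(0.0, 1.0, 801)
    inner = trap(np.cosh(2.0 * t * y[:, None]) * np.exp(-0.5 * (a_tilde * t)**2), t)
    return np.sqrt(2.0 / np.pi) / a_tilde * np.exp(-2.0 * y**2 / a_tilde**2) * inner

norms = {}
for a_tilde in (0.5, 2.0, 5.0):
    # support is essentially |y| < a_tilde^2/2 plus Gaussian tails
    y = np.linspace(-0.75 * a_tilde**2 - 10.0, 0.75 * a_tilde**2 + 10.0, 4001)
    norms[a_tilde] = float(trap(rho_x(y, a_tilde), y))
print(norms)   # each value close to 1
```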
It is easy to see that when $\tilde{a}$ is large one can effectively
put the upper boundary of integration in Eq.(\ref{13}) to be infinity
due to the Gaussian cut-off of the integrand. This immediately
results in the uniform density $\rho_X(y)=(\tilde{a}^2)^{-1}$ inside the
interval $|y|<\tilde{a}^2/2$ and zero otherwise. Translating this
result to the
two-dimensional density of the original variables $X,Y$, we get:
\begin{equation}\label{girko}
\rho(X,Y)=
\left\{
\begin{array}{ll}
\displaystyle{
\frac{N}{4\pi v^2(1+w^2)}
}& \mbox{if}\,\,
\displaystyle{
|Y|\le 2\pi\nu(X)v^2(1+w^2)
}\\
0&\mbox{otherwise}\end{array}
\right.
\end{equation}
This result is a natural generalisation of the so-called "elliptic
law" known
for strongly non-Hermitian random matrices\cite{Gin,neural}.
Indeed, the curve encircling the domain of the uniform eigenvalue
density is an ellipse: $\frac{Y^2}{2v^2(1+w^2)}+\frac{X^2}{4}=1$ as
long as the
mean eigenvalue density of the Hermitian counterpart is given by the
semicircular law, Eq.(\ref{semi}) (with the parameter $J=1$). The
semicircular density is known to be
shared by ensembles with i.i.d. entries, provided the mean
number $p$ of
non-zero elements per row grows with the matrix size as $p\propto
N^{\alpha}; \,\, \alpha>0$, see \cite{MF}. In the general case of
sparse or "rotationally invariant" ensembles the function $\nu(X)$
might be quite different from the semicircular law.
Under these conditions Eq.(\ref{girko})
still provides us with the corresponding density of complex eigenvalues.
The second nontrivial case for
which the result is known explicitly is due to Efetov\cite{Efnonh}.
It is the limit
of slightly asymmetric real
matrices, corresponding in the present notation to
$\phi\to 0$, $w\to \infty$ in such a way that the product
$\phi w=\tilde{c}$ is kept fixed.
The density of complex eigenvalues turns out to be given by:
\begin{eqnarray}\label{Efres}
\rho_X(y)&=&\delta(y)\int\limits_0^1 dt\exp{(-\tilde{c}^2t^2/2)} \\
\nonumber
&+2\sqrt{\frac{2} {\pi}}&\frac{|y|}{\tilde{c}}
\int\limits_1^{\infty}du \exp \left(-\frac{2y^2u^2}{\tilde{c}^2} \right)
\int\limits_0^1 dt\ t \sinh (2t|y|) \exp{(-\tilde{c}^2t^2/2)}.
\end{eqnarray}
The first term in this expression shows that everywhere in the regime of
"weak asymmetry" $\tilde{c}<\infty$ a finite fraction of
eigenvalues remains on the real axis.
Such a behaviour is qualitatively
different from that typical for the
case of "weak non-Hermiticity" $\tilde{a}<\infty$, where eigenvalues
acquire a nonzero imaginary part with probability one.
In the limit $\tilde{c}\gg 1$ the fraction of real eigenvalues
behaves like $\tilde{c}^{-1}$. Remembering the normalisation
of the parameter $v$, Eq.(\ref{norm}), it is easy to see that
for the case of $v=O(1)$ the number of real eigenvalues should scale
as $\sqrt{N}$.
Indeed, as
was first noticed by
Sommers et al.\ \cite{neural,Lehm1},
the number of {\em real} eigenvalues of
strongly asymmetric
{\em real} matrices is proportional to
$\sqrt{N}$. This and the fact that the
mean density of real eigenvalues is constant
was later proved by Edelman et al.\ \cite{Edel}.
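This $\sqrt{N}$ scaling is easy to observe by direct sampling. The sketch below counts the real eigenvalues of real Gaussian matrices and compares the sample mean with the asymptotics $\sqrt{2N/\pi}$ of Edelman et al.; the matrix size, sample count and seed are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
N, samples = 100, 50
counts = []
for _ in range(samples):
    J = rng.standard_normal((N, N))      # real Ginibre matrix, i.i.d. entries
    lam = np.linalg.eigvals(J)
    # eigenvalues of a real matrix are real or come in conjugate pairs;
    # the real ones are returned with (essentially) zero imaginary part
    counts.append(int(np.sum(np.abs(lam.imag) < 1e-8 * np.sqrt(N))))
mean_real = float(np.mean(counts))
print(mean_real, np.sqrt(2 * N / np.pi))   # both around 8 for N = 100
```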
\section{Gaussian almost-Hermitian matrices: from Wigner-Dyson to
Ginibre eigenvalue statistics}
In the previous section we obtained the eigenvalue distribution
in the regime of weak non-Hermiticity for the random matrices
of the form $\hat J =\hat H_1 +iv \hat H_2$, with
$\hat H_1 $ and $\hat H_2$ being mutually independent Hermitian
random matrices with i.i.d.\ matrix entries. The obtained eigenvalue
distribution appeared to be universal, i.e.\ independent of the
probability
distribution of the Hermitian matrices $\hat H_1 $ and $\hat H_2$.
In the present section we reexamine a particular case of $\hat J$ when
both $\hat H_1 $ and $\hat H_2$ are taken to be Gaussian. In this
special case
not only the mean eigenvalue density but also the eigenvalue
correlation functions can be obtained and studied in great detail.
The ensemble of random matrices that will be considered in this section
is specified by the probability measure
$d\mu (\hat J)={\cal P}(\hat J) d \hat J$,
\begin{eqnarray}\label{P(J)}
{\cal P}(\hat{J})
=\left( \frac{N}{\pi \sqrt{1-\tau^2} }\right)^{N^2}
\exp \left[
-\frac{N}{(1-\tau^2)} \mathop{\mathrm{Tr}}
\left(
\hat{J}\hat{J}^\dagger - \tau \mathop{\mathrm{Re}} \hat{J}^2
\right)
\right]
\end{eqnarray}
on the set ${\cal M}$ of complex $N\times N$ matrices
with the matrix volume element
\[
d\hat J = \prod_{j,k=1}^{N}d^2 J_{jk}, \;\;
d^2 J_{jk}\equiv d \mathop{\mathrm{Re}} J_{jk} d\mathop{\mathrm{Im}} J_{jk}.
\]
If the Hermitian $\hat H_1$ and $\hat H_2$
are taken independently from the GUE, the probability
distribution of $\hat J =\hat H_1 +iv \hat H_2$
is described by the above-given measure $d\mu (\hat{J})$
with
\[
\tau =\frac{1-v^2}{1+v^2}
\]
provided that $\hat H_1$ and $\hat H_2$ are normalized to
satisfy $\langle \mathop{\mathrm{Tr}} \hat H_p^2 \rangle = N(1+\tau )/2$,
$p=1,2$.
The parameter $\tau$, $0 \le \tau \le 1$,
controls the magnitude of the
correlation between
$J_{jk}$ and $J_{kj}$: $\langle J_{jk}J_{kj}
\rangle = \tau /N $, hence
the degree of non-Hermiticity.
All $J_{jk}$ have zero mean and variance
$\langle |J_{jk}|^2 \rangle =1/N$ and only
$J_{jk}$ and $J_{kj}$ are pairwise correlated.
If $\tau =0$ all $J_{jk}$
are mutually independent and
we have maximum non-Hermiticity.
When $\tau $ approaches unity, $J_{jk}$ and
$J_{kj}^*$ are related via $J_{jk}=J_{kj}^*$
and we are back to the ensemble of Hermitian matrices.
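These first and second moments are easy to verify by direct sampling. The sketch below builds $\hat J=\hat H_1+iv\hat H_2$ from two GUE matrices with off-diagonal variance $(1+\tau)/2N$ (so that $\langle \mathop{\mathrm{Tr}}\hat H_p^2\rangle\approx N(1+\tau)/2$) and estimates $\langle|J_{jk}|^2\rangle$ and $\langle J_{jk}J_{kj}\rangle$; sizes, seed and tolerances are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(1)
N, v, samples = 40, 0.5, 400
tau = (1.0 - v**2) / (1.0 + v**2)
var = (1.0 + tau) / (2.0 * N)        # off-diagonal variance <|H_jk|^2>

def gue():
    g = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) * np.sqrt(var)
    return (g + g.conj().T) / 2.0     # Hermitian, <|H_jk|^2> = var for j != k

mask = ~np.eye(N, dtype=bool)         # off-diagonal pairs only
abs2 = 0.0
cross = 0.0
for _ in range(samples):
    J = gue() + 1j * v * gue()
    abs2 += np.mean(np.abs(J[mask])**2)
    cross += np.mean((J * J.T)[mask])  # elementwise J_jk * J_kj
abs2 /= samples
cross /= samples
print(N * abs2, N * cross, tau)   # N<|J_jk|^2> -> 1,  N<J_jk J_kj> -> tau
```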
Our first goal is to obtain the density of the joint distribution
of eigenvalues in the random matrix ensemble
specified by Eq.\ (\ref{P(J)}). First of all, one can disregard
the matrices whose characteristic polynomial has multiple roots.
For, the set of such matrices forms a surface in ${\cal M}$,
hence has zero volume. Every matrix off this surface has $N$
distinct eigenvalues and we label them $Z_1, \ldots , Z_N$
ordering them in such a way that
\begin{equation}\label{b0}
|Z_1| \le |Z_2| \le \ldots \le |Z_N|
\end{equation}
and if $ |Z_j| = |Z_{j+1}|$ for some $j$ then $\arg Z_j < \arg Z_{j+1}$.
Given $\hat J$, one can always find a unitary matrix $\hat U$
and a triangular $\hat T$ ($T_{jk}=0$ if $j>k$) such that
\begin{equation}\label{b1}
\hat J =\hat U \hat T \hat U^{-1}
\end{equation}
and $T_{jj}=Z_j$ for every $j$ \cite{Wilkinson}.
The choice of $\hat U$ and $\hat T$
is not unique. Indeed, multiplying $\hat U$ on the right by a
unitary diagonal matrix $\hat \Phi $ one can also write $\hat J =\hat V
\hat S \hat V^{-1}$, where $\hat V=\hat U \hat \Phi $ is unitary,
$\hat S$ is triangular, and again $S_{jj}=Z_j$ for every $j$.
It is natural, therefore, to impose a restriction on $\hat U$
requiring, for instance, the first non-zero element in each column
of $\hat U$ to be real positive. Then the correspondence (\ref{b1})
between $\hat J$ and $(\hat U, \hat T)$ is one-to-one.
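Numerically, the decomposition (\ref{b1}) is the complex Schur factorization; for instance, `scipy.linalg.schur` with `output='complex'` returns a unitary $\hat U$ and an upper-triangular $\hat T$ with the eigenvalues on the diagonal. A quick sketch (the matrix and seed are arbitrary):

```python
import numpy as np
from scipy.linalg import schur

rng = np.random.default_rng(2)
N = 6
J = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(N)

T, U = schur(J, output='complex')                # J = U T U^dagger
ok_factor = np.allclose(U @ T @ U.conj().T, J)
ok_triang = np.allclose(np.tril(T, -1), 0.0)     # strictly lower part vanishes
ok_eigs = np.allclose(np.sort_complex(np.diag(T)),
                      np.sort_complex(np.linalg.eigvals(J)))
print(ok_factor, ok_triang, ok_eigs)             # True True True
```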
The idea of using the decomposition (\ref{b1}) (which is often called
the Schur decomposition) for the derivation of the joint distribution
of eigenvalues goes back to Dyson \cite{Dyson}, and we simply follow
his argument. To obtain the density of the joint distribution
one integrates (\ref{P(J)}) over the set of matrices whose
eigenvalues are $Z_1, \ldots , Z_N$. To perform the integration,
one changes the variables from $\hat J$ to $(\hat U , \hat T)$
and integrates over $\hat U$ and the off-diagonal elements of $\hat T$.
The Jacobian of the transformation $\hat J \to (\hat U , \hat T)$
depends only on the eigenvalues and is given by the squared modulus
of the Vandermonde determinant of the $\{Z_j\}$ \cite{Dyson}.
Since $\hat U$ is unitary,
\begin{equation}\label{b2}
\mathop{\mathrm{Tr}} (\hat J \hat J^{\dagger} -\tau \mathop{\mathrm{Re}} \hat J^2)=
\mathop{\mathrm{Tr}} (\hat T \hat T^{\dagger} -\tau \mathop{\mathrm{Re}} \hat T^2).
\end{equation}
Therefore, the integral over $\hat U$ yields
\[
\frac{\mathop{\mathrm{Vol}} [U(N)]}{(2\pi
)^N}=\frac{\pi^{N(N-1)/2}}{\prod_{n=1}^{N-1} n!},
\]
where $\mathop{\mathrm{Vol}} [U(N)]$ is the volume of the unitary group $U(N)$.
Since $\hat T$ is triangular, the integration over the
off-diagonal entries of $\hat T$ reduces, in view of
(\ref{b2}), to the Gaussian integral
\[
\prod_{1\le j<k \le N} \int d^2 T_{jk}
\exp \left[ -\frac{N}{1-\tau^2}
\left( |T_{jk}|^2 -\tau \mathop{\mathrm{Re}} T_{jk}^2 \right)
\right]=
\left[
\frac{\pi (1-\tau^2)}{N}
\right]^{N(N-1)/2}.
\]
Collecting the constants one obtains the desired density.
Obviously, it is symmetric in the eigenvalues $\{Z_j\}$. Therefore,
the above restriction of Eq.\ (\ref{b0})
on the eigenvalues can be removed by
dividing the obtained density by $N!$.
Thus finally, the density of the
joint distribution of (unlabelled) eigenvalues in the random matrix
ensemble specified by Eq.\ (\ref{P(J)}) is given by
\begin{equation}\label{P(Z)}
{\cal P}_N(Z_1, \ldots, Z_N)= \frac{N^{N(N+1)/2}}{\pi^N 1!
\cdots N! (1-\tau^2)^{N/2} }\
\prod_{j=1}^N w^2(Z_j)\
\prod_{j<k}|Z_j-Z_k|^2,
\end{equation}
where
\begin{equation}\label{b3}
w^2(Z)=\exp \left\{-\frac{N}{1-\tau^2}
\left[|Z|^2 - \frac{\tau}{2}\left(Z^2+{Z^*}^2\right) \right] \right\}.
\end{equation}
The form of the distribution
Eq.\ (\ref{P(Z)}) allows one to employ
the powerful method of orthogonal polynomials \cite{Mehta}.
Let $H_n(z)$ denote the
$n$th Hermite polynomial,
\begin{equation} \label{H}
H_n(z)=\frac{(\pm i)^n}
{\sqrt{2\pi}}\! \exp{\left(\! \frac{z^2}{2}\! \right)}
\int\limits_{-\infty}^{\infty}\!
dt\ t^n \exp{\left(-\frac{t^2}{2}\mp izt\right)}.
\end{equation}
These Hermite polynomials
are orthogonal on the real axis with the weight function
$\exp (-x^2 /2) $ and are determined by the following
generating function
\begin{equation}\label{b4}
\exp \left(
zt - \frac{t^2}{2}
\right)
= \sum_{n=0}^\infty H_n(z) \frac{t^n}{n!}.
\end{equation}
It is convenient to rescale Hermite polynomials in the following way:
\begin{eqnarray}\label{p_n}
p_n(Z)=\frac{\tau^{n/2} \sqrt{N} }
{\sqrt{\pi }\sqrt{ n!}(1-\tau^2 )^{1/4}}
H_n\left( \sqrt{ \frac{N}{\tau}}Z\right).
\end{eqnarray}
The main reason for this rescaling is
that the new polynomials $p_n(Z)$,
$n=0,1,2, \ldots $,
are orthogonal in the {\it complex plane}
$Z=X+iY$
with the weight function $w^2(Z)$ of Eq.\ (\ref{b3}):
\begin{equation}
\int d^2 Z\ p_n(Z)p_m(Z^*)w^2(Z) = \delta_{nm}.
\end{equation}
(Recall that $d^2 Z = dX dY$.)
We borrowed this observation, which is crucial for our analysis,
from the paper \cite{orth}
(see also the related paper \cite{FJ}).
A quick check of the orthogonality relations
is possible with the help of the generating function
(\ref{b4}).
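The check is also easy to do numerically. In real coordinates the weight is Gaussian, $w^2(Z)=\exp\{-N[X^2/(1+\tau)+Y^2/(1-\tau)]\}$, since $|Z|^2-\frac{\tau}{2}(Z^2+{Z^*}^2)=(1-\tau)X^2+(1+\tau)Y^2$; the $H_n$ with weight $e^{-x^2/2}$ are the probabilists' Hermite polynomials, available as `numpy.polynomial.hermite_e`. A grid-quadrature sketch (the values of $N$, $\tau$ and the grid are arbitrary choices):

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval
from math import factorial

N, tau, nmax = 1, 0.5, 3
x = np.linspace(-8.0, 8.0, 801)
X, Y = np.meshgrid(x, x, indexing='ij')
Z = X + 1j * Y
dA = (x[1] - x[0])**2
w2 = np.exp(-N * (X**2 / (1 + tau) + Y**2 / (1 - tau)))   # weight, Eq. (b3)

def p(n, z):
    """Rescaled Hermite polynomial p_n of Eq. (p_n)."""
    c = np.zeros(n + 1); c[n] = 1.0
    pref = tau**(n / 2.0) * np.sqrt(N) / (np.sqrt(np.pi * factorial(n))
                                          * (1 - tau**2)**0.25)
    return pref * hermeval(np.sqrt(N / tau) * z, c)

G = np.array([[np.sum(p(n, Z) * p(m, Z.conj()) * w2) * dA
               for m in range(nmax + 1)] for n in range(nmax + 1)])
print(np.round(G.real, 3))   # close to the identity matrix
```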
With these orthogonal polynomials in hand,
the standard machinery of the method of
orthogonal polynomials\cite{Mehta} yields
the $n$-eigenvalue correlation functions
\begin{equation}
R_n(Z_1,...,Z_n)=\frac{N!}{(N-n)!}\int d^2Z_{n+1}...d^2Z_N\
{\cal P}_N\{Z\}
\label{R_n}
\end{equation}
in the form
\begin{equation}\label{b4a}
R_n(Z_1,...,Z_n)=\det \left[ K_N(Z_j,Z_k)\right]_{j,k=1}^n,
\end{equation}
where the kernel $K_N(Z_1,Z_2)$ is given by
\begin{equation}
K_N(Z_1,Z_2)=
w(Z_1)w(Z_2^*)
\sum_{n=0}^{N-1}p_n(Z_1)p_n(Z_2^*).
\label{K}
\end{equation}
In particular, define the density of eigenvalues as in
Eq.(\ref{defden}),
so that the number of eigenvalues in domain $A$ of the complex plane
is given by the integral
\begin{equation}\label{b6}
n(A)=\int\limits_A d^2 Z\ \rho(Z).
\end{equation}
Notice that the averaged density of eigenvalues
$\langle \rho (Z) \rangle$
is simply $R_1(Z)$. From
Eqs.\ (\ref{b4a}), (\ref{K}), and (\ref{p_n}) one infers that
\begin{equation}
\label{den}
R_1(Z)=
\frac{N}{\pi \sqrt{1-\tau^2}}
\exp \left\{-\frac{N}{1-\tau^2}
\left[|Z|^2 - \frac{\tau}{2}\left(Z^2+{Z^*}^2\right) \right] \right\}
\sum_{n=0}^{N-1} \frac{\tau^n}{n!}
\left| H_n \left( \sqrt{\frac{N}{\tau}}Z \right)\right|^2 .
\end{equation}
This exact result is valid for every finite $N$.
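As a check on the overall normalization, the density (\ref{den}) must integrate to $N$ over the complex plane, since each orthonormal term of the kernel (\ref{K}) contributes one unit. A quadrature sketch (parameters are arbitrary choices):

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval
from math import factorial

N, tau = 3, 0.4
x = np.linspace(-4.0, 4.0, 801)
X, Y = np.meshgrid(x, x, indexing='ij')
Z = X + 1j * Y
dA = (x[1] - x[0])**2
# w^2(Z) in real coordinates: |Z|^2 - (tau/2)(Z^2+Z*^2) = (1-tau)X^2 + (1+tau)Y^2
w2 = np.exp(-N * (X**2 / (1 + tau) + Y**2 / (1 - tau)))

s = np.zeros_like(X)
for n in range(N):                      # n = 0, ..., N-1
    c = np.zeros(n + 1); c[n] = 1.0
    Hn = hermeval(np.sqrt(N / tau) * Z, c)
    s += tau**n / factorial(n) * np.abs(Hn)**2
R1 = N / (np.pi * np.sqrt(1 - tau**2)) * w2 * s
total = float(np.sum(R1) * dA)
print(total)   # close to N = 3
```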
The rest of this section is devoted to a sampling of the
statistical information that can be obtained from
Eqs.\ (\ref{b4a}) -- (\ref{K}) for large matrix dimensions $N\gg 1$.
First we briefly examine the regime of strong non-Hermiticity
when the real and imaginary parts of a typical eigenvalue
are of the same order of magnitude when $N \to \infty $.
This regime is realized when $0\le\lim_{N\to\infty}\tau<1 $
(recall that for $\tau =1$ our
matrices are Hermitian). We will show that in this case $\tau
$-dependence of the eigenvalue correlations
on the {\em local} scale becomes essentially trivial
and the correlations become identical to those
found by Ginibre in the case of maximum non-Hermiticity ($\tau = 0$).
Then we will examine the regime of weak non-Hermiticity
when the imaginary part of typical eigenvalue is of the
order of the mean separation between the nearest eigenvalues
along the real axis.
This regime is realized when
\begin{equation}\label{wnh}
\tau = 1-\frac{\alpha^2}{2N}, \;\; \alpha > 0 .
\end{equation}
We will show that by varying the parameter $\alpha $ one
can describe the crossover from the Wigner-Dyson eigenvalue statistics
typical of Hermitian random matrices to the Ginibre
eigenvalue statistics typical of non-Hermitian random matrices.
To begin with the regime of strong non-Hermiticity
we first recall that in this regime the eigenvalues in bulk
are confined to an ellipse in the complex plane and they
are distributed there with constant density (cf. Eq.(\ref{girko})) :
\[
\lim_{N\to \infty} \frac{1}{N} R_1(Z)=
\left\{
\begin{array}{ll}
\displaystyle{
\frac{1}{\pi (1-\tau^2)}
}, & \mbox{if} \;\;
\displaystyle{
\frac{X^2}{(1+\tau)^2}
+ \frac{Y^2}{(1-\tau)^2} \le 1
}\\
0, & \mbox{otherwise}.
\end{array}
\right.
\]
This fact can be inferred from Eq.\ (\ref{den}). Inside the ellipse
every domain of linear dimension of the order of $1/\sqrt{N}$
contains, on average, a finite number of eigenvalues. Thus
the eigenvalue statistics on the local scale are determined
by the correlation functions
\[
\tilde R_n (z_1, \ldots , z_n)
\equiv N^{-n} R_n (\sqrt{N}Z_1, \ldots , \sqrt{N}Z_n)
\]
of the rescaled eigenvalues $z=\sqrt{N}Z$.
This rescaling is effectively equivalent to
the particular normalization of the distribution
(\ref{P(J)}) which yields $\langle \mathop{\mathrm{Tr}} JJ^{\dagger } \rangle
=N^2$, the normalization used in \cite{Gin}.
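The elliptic support is easy to see by direct sampling. The construction below is a sketch: $\hat J=\sqrt{(1+\tau)/2}\,\hat H_1+i\sqrt{(1-\tau)/2}\,\hat H_2$ with $\hat H_{1,2}$ independent GUE matrices of entry variance $1/N$ reproduces the moments $\langle |J_{jk}|^2\rangle=1/N$ and $\langle J_{jk}J_{kj}\rangle=\tau/N$ of the measure (\ref{P(J)}); the size, seed and tolerances are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(3)
N, tau = 500, 0.5

def gue():
    """GUE matrix with <|H_jk|^2> = 1/N."""
    g = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(N)
    return (g + g.conj().T) / 2.0

J = np.sqrt((1 + tau) / 2) * gue() + 1j * np.sqrt((1 - tau) / 2) * gue()
lam = np.linalg.eigvals(J)
# ellipse coordinate: semi-axes (1+tau) along X and (1-tau) along Y
r = (lam.real / (1 + tau))**2 + (lam.imag / (1 - tau))**2
frac_inside = float(np.mean(r <= 1.2))    # inflated to absorb edge fluctuations
frac_quarter = float(np.mean(r <= 0.25))  # uniform density => area fraction 1/4
print(frac_inside, frac_quarter)
```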
One can easily evaluate the rescaled correlation functions
$\tilde R_n$ exploiting Mehler's formula\footnote{Mehler's
formula can be derived
by using the integral representation
(\ref{H}) for Hermite polynomials in the
l.h.s. of Eq.\ (\ref{Mehler}).} \cite{S}:
\begin{equation}\label{Mehler}
\sum_{n=0}^{\infty} \frac{\tau^n}{n!}
H_n \left( \frac{z_1}{\sqrt{\tau}}\right)
H_n \left( \frac{z_2^*}{\sqrt{\tau}}\right)=
\frac{1}{\sqrt{1-\tau^2}}
\exp
\left\{
\frac{1}{1-\tau^2}
\left[
z_1z_2^* - \frac{\tau}{2}
\left(
z_1^2+{z_2^*}^2
\right)
\right]
\right\}.
\end{equation}
Indeed, denote $\tilde K_N(z_1,z_2)=N^{-1}K_N(\sqrt{N}z_1,
\sqrt{N}z_2)$.
Then, by Mehler's formula
\[
\lim_{N\to \infty}
\tilde K_N(z_1,z_2)=
\frac{1}{\pi (1-\tau^2)}
\exp
\left[
\frac{2z_1z_2^* -|z_1|^2 - |z_2|^2}{2(1-\tau^2)}
\right]
\exp
\left[
\frac{\tau ({z_1^*}^2-z_1^2 + z_2^2 -{z_2^*}^2 )}{4(1-\tau^2 )}
\right]
\]
and in view of the relationship
\[
\tilde R_n(z_1,...,z_n)=\det \left[ \tilde K_N (z_j,z_k)
\right]_{j,k=1}^n,
\]
one obtains that
\begin{equation}\label{Gin}
\lim_{N\to \infty} \tilde R_n (z_1, \ldots , z_n)=
\left[
\frac{1}{ \pi (1-\tau^2) }
\right]^n
\exp
\left(
- \frac{1}{1-\tau^2}\sum_{j=1}^n |z_j|^2
\right)
\det
\left[
\exp
\left(
\frac{z_jz_k^*}{1-\tau^2}
\right)
\right]_{j,k=1}^n.
\end{equation}
In particular
\begin{eqnarray}
\label{Gin1}
\lim_{N\to \infty} \tilde R_1 (z)&=& \frac{1}{ \pi (1-\tau^2) }\\
\label{Gin2}
\lim_{N\to \infty} \tilde R_2 (z_1, z_2)&=&
\left[
\frac{1}{ \pi (1-\tau^2) }
\right]^2
\exp
\left[
-\frac{|z_1-z_2|^2}{1-\tau^2}
\right].
\end{eqnarray}
After the natural additional rescaling
$z/\sqrt{1-\tau^2} \to z $
Eqs.\ (\ref{Gin}) -- (\ref{Gin2}) become identical to those
found by Ginibre \cite{Gin} for the case $\tau =0$.
Now we move on to the regime of weak non-Hermiticity [see
Eq.\ (\ref{wnh})]. We will show that in this regime
new non-trivial correlations occur
on the scale:
$\mathop{\mathrm{Im}} Z_{1,2}=O(1/N)$, $\mathop{\mathrm{Re}} Z_1 -\mathop{\mathrm{Re}} Z_2=O(1/N)$.
To find the density of complex
eigenvalues and describe their correlations,
let us define new variables
$y_1$, $y_2$, $\omega $:
\begin{equation}\label{b8}
Z_1=X+\frac{\omega }{2N}+i\frac{y_1}{N}, \; \;
Z_2=X-\frac{\omega }{2N}+i\frac{y_2}{N}.
\end{equation}
Our first goal is to evaluate the kernel
$K_N (Z_1, Z_2)$ in the limit
\begin{equation}\label{b9}
N\to\infty , N(1-\tau) \to \alpha^2/2,
\; \; X, \omega , y_{1,2}\, \mbox{are}\,\, \mbox{fixed}.
\end{equation}
Using the integral representation for Hermite polynomials,
Eq.\ (\ref{H}), we can rewrite $K_N (Z_1, Z_2)$ in the form
\begin{eqnarray*}
K_N (Z_1, Z_2)&=&\frac{N}{\pi \sqrt{1-\tau^2}}
\exp
\left[
\frac{N(Z_1^2+{Z_2^*}^2)}{2\tau }
-
\frac{N(|Z_1|^2+|Z_2^*|^2)}{2(1-\tau^2) }
+
\frac{N\tau (Z_1^2+Z_2^2+{Z_1^*}^2+{Z_2^*}^2)}{4(1-\tau^2)}
\right]\times \\
& &
\sum_{n=0}^{N-1}\frac{\tau^n}{n!} \frac{1}{2\pi}
\int\limits_{-\infty}^{+\infty}\! dt\
\int\limits_{-\infty}^{+\infty}\! ds\
(ts)^n \exp
\left[
-\frac{t^2+s^2}{2}+\sqrt{\frac{N}{\tau}} \left(
itZ_1-isZ_2^* \right)
\right]
\end{eqnarray*}
Using the new variables (\ref{b8}) in the equation above
and making the substitution
\[
u=(t+s)\sqrt{\frac{N}{\tau}} \ \mbox{and}\
v=(t-s)\sqrt{\frac{N}{\tau}}
\]
in the integrals, we obtain
\begin{eqnarray*}
K_N (Z_1, Z_2)&=&
\frac{N^2}{\tau \sqrt{1-\tau^2}}
\exp
\left[
\frac{\omega^2}{4N\tau (1+\tau )}
-
\frac{y_1^2+y_2^2}{2N\tau (1-\tau )}
+
\frac{iX(y_1-y_2)}{\tau}
+
\frac{i\omega (y_1+y_2)}{2N\tau}
\right]\times \\
& &
\frac{1}{2\pi}
\int\limits_{-\infty}^{+\infty}\! dv\
\exp
\left\{
\left[
-\frac{Nv^2}{4}
\left(
1+\frac{1}{\tau}
\right)
+ \frac{iXvN}{\tau} +
\frac{NX^2}{\tau (1+\tau )}
\right]
- \frac{v(y_1-y_2)}{2\tau}
\right\} \times \\
& &
\frac{1}{2\pi}
\int\limits_{-\infty}^{+\infty}\! du\
\exp
\left\{
-\frac{Nu^2}{4}
\left(
\frac{1}{\tau}-1
\right) + \frac{iu\omega}{2\tau} -\frac{u(y_1+y_2)}{2\tau}
\right\}
\Theta_N \left[ \frac{N}{4}(u^2-v^2) \right],
\end{eqnarray*}
where we have introduced the notation
\[
\Theta_N (x) =e^{-x} \sum_{n=0}^{N-1} \frac{x^n}{n!}, \;\; x\ge 0.
\]
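Note that $\Theta_N(x)$ is the probability that a Poisson random variable of mean $x$ takes a value below $N$; for $x=N\xi$ it therefore tends, by the law of large numbers, to a sharp step at $\xi=1$ as $N\to\infty$. A quick numeric check (sizes are arbitrary; the sum is taken in log space to avoid overflow):

```python
import math

def theta(N, x):
    """Theta_N(x) = e^{-x} sum_{n=0}^{N-1} x^n / n!, summed via logs."""
    return sum(math.exp(n * math.log(x) - x - math.lgamma(n + 1))
               for n in range(N))

N = 200
below, above = theta(N, 0.5 * N), theta(N, 1.5 * N)
print(below, above)   # close to 1 and close to 0, respectively
```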
Now we are in a position to evaluate $K_N(Z_1, Z_2)$ in
the regime defined by Eq.(\ref{b9}).
Indeed, in this regime, to the leading order,
\begin{eqnarray*}
\left[
-\frac{Nv^2}{4}
\left(
1+\frac{1}{\tau}
\right)
+ \frac{iXvN}{\tau} +
\frac{NX^2}{\tau (1+\tau )}
\right]
& = & -\frac{N(v-iX)^2}{2}\\
\left\{
-\frac{Nu^2}{4}
\left(
\frac{1}{\tau}-1
\right) + \frac{iu\omega}{2\tau} -\frac{u(y_1+y_2)}{2\tau}
\right\}
& = &
-\frac{u^2\alpha^2}{8} +\frac{iu\omega}{2} -\frac{u(y_1+y_2)}{2}.
\end{eqnarray*}
From these relations one obtains that
\begin{eqnarray*}
K_N (Z_1, Z_2)& = &
\frac{1}{2\pi}
\frac{N^2}{\alpha }
\exp
\left[
-\frac{y_1^2+y_2^2}{\alpha^2}+iX(y_1-y_2)
\right]\times \\
& &
\int\limits_{-\infty}^{+\infty}\! \frac{du}{\sqrt{2\pi}}
\exp
\left[
-\frac{u^2\alpha^2}{8}+\frac{iu\omega}{2} -\frac{u(y_1+y_2)}{2}
\right] \times \\
& &
\frac{N}{\sqrt{2\pi}}
\int\limits_{-\infty}^{+\infty}\! dv\
\exp
\left[
-\frac{N(v-iX)^2}{2}-\frac{v(y_1-y_2)}{2}
\right]
\Theta_N\left[ \frac{N}{4} (u^2-v^2)\right].
\end{eqnarray*}
Taking into account that
\begin{equation}\label{b10}
\lim_{N\to \infty } \Theta_N (Nx)=
\left\{
\begin{array}{ll}
1, & 0\le x < 1 \\
0, & x > 1
\end{array}
\right.
\end{equation}
and evaluating
the integral over $v$ by the saddle point method
we finally obtain that in the regime
Eq. (\ref{b9}),
to the leading order,
\begin{eqnarray}\label{kern}
K_N
\left(
X+\frac{\omega}{2N}+\frac{iy_1}{N},
X-\frac{\omega}{2N}+\frac{iy_2}{N}
\right) =
\frac{N^2}{\pi\alpha}
\exp{\left\{- \frac{y_1^2+y_2^2}{\alpha^2}+\frac{iX (y_1-y_2)}{2}
\right\}}g_{\alpha}\left(y-\frac{i\omega}{2}\right),
\end{eqnarray}
where $ y=(y_1+y_2)/2$ and
\begin{equation}\label{g}
g_{\alpha}(y)=\int\limits_{-\pi\nu(X)}^{\pi\nu(X)}\frac{du}{\sqrt{2\pi}}
\exp{\left(-\frac{\alpha^2u^2}{2}-2uy\right)},
\end{equation}
with
$\nu(X)=\frac{1}{2\pi}\sqrt{4-X^2}$ standing for the Wigner
semicircular density of
real eigenvalues of the Hermitian part $\hat{H_1}$ of the matrices
$\hat{J}$.
Equation (\ref{kern})
constitutes the most important result in this section.
The kernel $K_N$ given by Eq.\ (\ref{kern})
determines all the properties of complex eigenvalues in the regime of
weak non-Hermiticity. For instance,
the mean value of the density
$\rho(Z)= \sum_{i=1}^N\delta^{(2)}(Z-Z_i)$
of complex eigenvalues $Z=X+iY$ is given by $
\langle\rho(Z)\rangle= K_N(Z,Z)$.
Putting $y_1=y_2$ and $\omega=0$ in Eqs.\ (\ref{kern})--(\ref{g})
we immediately recover the density
Eq.(\ref{13}) found by the supersymmetry approach
\footnote{In the present section
we normalized $\hat{H}_2$ in such a way that for weak
non-Hermiticity regime
we have
$\lim_{N\to\infty} \mbox{Tr}\hat{H}_2^2=N$, whereas
the normalization Eq.(\ref{norm}) gives
$\lim_{N\to\infty} \mbox{Tr}\hat{H}_2^2=N(1+w^2)$.
It is precisely because of this difference that the parameter
$\tilde{a}$ entering Eq.(\ref{13})
contains an extra factor $1+w^2$ as compared to the present case.}.
One of the most informative statistical measures of the spectral
correlations
is the `connected' part of the two-point correlation function of
eigenvalue
densities:
\begin{equation}
\label{defy}
\left\langle \rho(Z_1)\rho(Z_2)\right\rangle_c
=\left\langle
\rho(Z_1)\right\rangle \delta^{(2)}(Z_1-Z_2)-{\cal Y}_2(Z_1,Z_2).
\end{equation}
In particular, it determines the variance $\Sigma^2(D)=
\langle n(D)^2\rangle-\langle n(D)\rangle^2$ of the number
$n=\int\limits_D d^2Z\rho(Z)$
of complex eigenvalues in any domain $D$ in the complex plane,
see the Appendix for a detailed exposition.
Comparing with the definitions, Eqs.\ (\ref{R_n}) and (\ref{b4a})
we see that
the {\it cluster function} ${\cal Y}_2(Z_1,Z_2)$
is expressed in terms of the kernel $K_N$
as ${\cal Y}_2(Z_1,Z_2)=\left|K_N(Z_1,Z_2)\right|^2$.
It is evident that in the limit of weak non-Hermiticity
the kernel $K_N$ in Eq.(\ref{kern})
depends on $X$ only via the semicircular density $\nu(X)$.
Thus, it does not change with $X$ on the local scale comparable with
the mean spacing along the real axis $\Delta\sim 1/N$.
The cluster function ${\cal Y}_2$ is given by the following
explicit expression:
\begin{equation}\label{clexp}
{\cal Y}_2(Z_1,Z_2)=
\frac{N^4}{\pi^2 \alpha^2}
\exp
\left(
-\frac{2(y_1^2+y_2^2)}{\alpha^2}
\right)
\left|
\int\limits_{-\pi\nu(X)}^{\pi\nu(X)}\!
\frac{du}{\sqrt{2\pi}}\
\exp
\left[
-\frac{\alpha^2 u^2}{2}-u(y_1+y_2)+iu\omega
\right]
\right|^2 .
\end{equation}
The parameter
$a=\pi\nu(X)\alpha$ controls the deviation from Hermiticity.
When
$a\gg 1$ the limits of integration in Eq.(\ref{clexp})
can be effectively put to $\pm\infty$ due to the Gaussian cutoff
of the integrand. The corresponding Gaussian
integration is trivially performed
yielding in the original variables $Z_1,Z_2$
the expression equivalent (up to a trivial rescaling) to
that found by Ginibre \cite{Gin}:
${\cal Y}_2(Z_1,Z_2)=(N^2/\pi \alpha^2)^{2}
\exp\{-N^2|Z_1-Z_2|^2/{\alpha}^2\}$.
In the opposite case $a\to 0$ the cluster function tends to GUE form
${\cal Y}_2(\omega,y_1,y_2)=\frac{N^4}{\pi^2}\delta(y_1)
\delta(y_2)\frac{\sin^2{\pi\nu(X)\omega}}{\omega^2}$.
One can also define a
renormalized cluster function:
\[
Y_2(Z_1,Z_2)=\frac{{\cal Y}_2(Z_1,Z_2)}{R_1(Z_1)R_1(Z_2)}.
\]
Introduce the notation
\[
Y_2(\omega , y_1, y_2) \equiv
\lim_{N\to \infty} Y_2
\left(
X+ \frac{\omega}{2N}+\frac{iy_1}{N}, X-\frac{\omega}{2N}+\frac{iy_2}{N}
\right) .
\]
Then
\begin{equation}\label{my_Y2}
Y_2(\omega , y_1, y_2)=
\frac
{
\displaystyle{
\left|
\int_0^1\!dt \ \exp
\left(
-\frac{a^2t^2}{2}
\right)
\cosh \left[
t(\tilde y_1+\tilde y_2+i\tilde\omega )
\right]
\right|^2
}
}
{
\displaystyle{
\int_0^1\!dt \ \exp
\left(
-\frac{a^2t^2}{2}
\right)
\cosh ( 2t\tilde y_1 )
\int_0^1\!dt \ \exp
\left(
-\frac{a^2t^2}{2}
\right)
\cosh ( 2t\tilde y_2 )
}
},
\end{equation}
where
\[
\tilde y_{1,2}=y_{1,2}\pi \nu (X), \qquad
\tilde \omega = \omega\pi \nu (X).
\]
It has the advantage of being non-singular in the Hermitian limit
$a\to 0$ and of coinciding with the usual GUE cluster function
$\tilde \omega^{-2}\sin^2{\tilde \omega}$ on the real axis: $y_1=y_2=0$.
We plotted this function for different values of the parameter $a$ in
Fig.\ \ref{figyan1}.
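On the real axis ($y_1=y_2=0$) Eq.\ (\ref{my_Y2}) reduces to $\left|\int_0^1 dt\, e^{-a^2t^2/2}\cos(t\tilde\omega)\right|^2/\left(\int_0^1 dt\, e^{-a^2t^2/2}\right)^2$, which indeed tends to $\sin^2\tilde\omega/\tilde\omega^2$ as $a\to 0$. A quadrature sketch of this limit (grid sizes are arbitrary choices):

```python
import numpy as np

def trap(f, x):
    """Composite trapezoidal rule."""
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x)))

def y2_axis(omega, a, nt=20001):
    """Eq. (my_Y2) evaluated at y_1 = y_2 = 0."""
    t = np.linspace(0.0, 1.0, nt)
    g = np.exp(-0.5 * (a * t)**2)
    return trap(g * np.cos(omega * t), t)**2 / trap(g, t)**2

# at nearly Hermitian a the function approaches sin^2(w)/w^2
vals = {w: (y2_axis(w, 1e-4), (np.sin(w) / w)**2) for w in (0.5, 2.0, 5.0)}
print(vals)   # the two numbers in each pair nearly coincide
```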
Calculating the Fourier transform of the cluster
function
with respect to its arguments $\omega,y_1,y_2$ amounts to
simple Gaussian and exponential integrations. Performing them,
one finds
the following expression for the {\it spectral form-factor}:
\begin{eqnarray}\label{for}
b(q_1,q_2,k)&=&\int\limits_{-\infty}^{\infty}\! d\omega \
\int\limits_{-\infty}^{\infty}\! dy_1\
\int\limits_{-\infty}^{\infty}\! dy_2 \
{\cal Y}_2(Z_1,Z_2)\exp [2\pi i(\omega k+y_1q_1+y_2q_2)]\\
\nonumber
&=& N^4
\exp
\left[
-\frac{\alpha^2 (q_1^2+q_2^2+2k^2)}{2}
\right]
\frac{\sin{\left[\pi^2 \alpha^2(q_1+q_2)(\nu(X)-|k|)\right]}}
{\pi^2 \alpha^2(q_1+q_2)}\theta(\nu(X)-|k|),
\end{eqnarray}
where $Z_1$ and $Z_2$ are given by Eq.\ (\ref{b8}), and
$\theta(u)=1$ for $u>0$ and zero otherwise.
We see that everywhere in the regime of weak non-Hermiticity
$0<\alpha<\infty$ the form-factor shows a kink-like behaviour
at $|k|=\nu(X)$. This feature is inherited from the corresponding
Hermitian counterpart, the Gaussian Unitary Ensemble. It
reflects the oscillations of the cluster function with
$\omega$, which are a manifestation of the long-ranged
order in eigenvalue positions along the real axis \cite{Bohigas}.
When the non-Hermiticity increases,
the oscillations become more and more damped,
as is evident from Fig.\ \ref{figyan1}.
As is well-known\cite{Mehta,Bohigas}, the knowledge of the formfactor
allows one to determine the variance $\Sigma_2$ of a number of
eigenvalues in any domain $D$ of the complex plane.
A small $\Sigma_2$ is a signature of a tendency of the levels to form
a crystal-like structure with long-range correlations;
in contrast, a growing number variance signals
progressive decorrelation of the eigenvalues.
For the sake of completeness
we derive the corresponding relation in the Appendix,
see Eq.(\ref{formf1}). In the general case
this expression is not very transparent, however. For this reason we
restrict ourselves to the simplest case,
choosing the domain $D$ to be the infinite strip of width $L_x$
(in units of mean spacing along the real axis $\Delta=(\nu(0)N)^{-1}$)
oriented perpendicular to the real axis:
$0<\mbox{Re}Z<~L_x\Delta;\quad -\infty
<\mbox{Im}Z<\infty$.
Such a choice means that we look only at real parts of complex
eigenvalues irrespective of their imaginary parts.
It is motivated, in particular, by
the possibility of a direct comparison with the GUE case, for which the function
$\Sigma_2(L_x)$ behaves at large $L_x$ logarithmically:
$\Sigma_2(L_x)\propto \ln{L_x}$ \cite{Bohigas}.
After simple calculations (see Appendix) one finds
\footnote{In our earlier Letter \cite{FKS2} the expression
Eq.(\ref{var}) and formulae derived from it
erroneously contained $\pi a$ instead of $a$.}
\begin{equation}
\label{var}
\Sigma_2(L_x)=
L_x
\left\{
1-2
\int\limits_0^{L_x}\! dk \
\left(
1-\frac{k}{L_x}
\right)
\frac{\sin^2 (\pi k)}{(\pi k)^2}
\exp
\left[
-\left(\frac{a k}{L_x}\right)^2
\right]
\right\}.
\end{equation}
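Eq.(\ref{var}) is amenable to elementary numerical quadrature. The following Python sketch (an illustration added here, not part of the original derivation; the function name and grid size are our own choices) evaluates it by the midpoint rule:

```python
import math

def sigma2(L_x, a, n=200_000):
    """Number variance Sigma_2(L_x) of Eq. (var), by midpoint quadrature.

    The integrand oscillates with period 1 in k, so the grid step
    L_x / n must be kept much smaller than 1.
    """
    h = L_x / n
    acc = 0.0
    for i in range(n):
        k = (i + 0.5) * h
        sinc2 = (math.sin(math.pi * k) / (math.pi * k)) ** 2
        acc += (1.0 - k / L_x) * sinc2 * math.exp(-((a * k / L_x) ** 2))
    return L_x * (1.0 - 2.0 * h * acc)
```

Since the Gaussian factor is strictly decreasing in $a$ and the rest of the integrand is positive, this expression is manifestly increasing in $a$ at fixed $L_x$, and always satisfies $0<\Sigma_2<L_x$.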
First of all, it is evident that $\Sigma_2$ grows systematically
with increasing degree of non-Hermiticity $a=\pi\nu(0)\alpha$,
see Fig.\ \ref{figyan2}.
This fact signals the gradual decorrelation of the {\it real}
parts $\mathop{\mathrm{Re}} Z_i$ of the complex eigenvalues. It is easily understood:
with growing non-Hermiticity the eigenvalues have an increasing possibility to avoid one
another along the $Y=\mathop{\mathrm{Im}} Z$ direction, which makes their projections onto the
real axis $X$ more independent.
In order to study the difference from
the Hermitian case in more detail let us consider again the large-$L_x$
behaviour.
It is evident that
the number variance is only slightly modified by non-Hermiticity
as long as $a\ll L_x$. We therefore consider the case
$a\gg 1$, where we expect essential differences from the Hermitian case.
For doing this it is convenient to rewrite Eq.(\ref{var})
as a sum of three contributions:
\begin{eqnarray}\label{var1}
\Sigma_2(L_x)&=&\Sigma_2^{(1)}+\Sigma_2^{(2)}+
\Sigma_2^{(3)} \\
\nonumber
\Sigma_2^{(1)}&=& L_x
\left\{
1-\frac{2}{\pi^2}\! \int\limits_0^{\infty}\!
\frac{dk}{k^2}\sin^2{(\pi k)}
\exp \left[-\left( \frac{a k}{L_x}\right)^2 \right]
\right\}\\
\nonumber
\Sigma_2^{(2)}&=&\frac{2}{\pi^2}\! \int\limits_0^{\infty}\!
\frac{dk}{k}\sin^2{(\pi k)}
\exp \left[-\left( \frac{a k}{L_x}\right)^2 \right]\\
\nonumber
\Sigma_2^{(3)}&=&\frac{2}{\pi^2}\! \int\limits_1^{\infty}\!
\frac{dz}{z^2}(1-z)\sin^2{\left(\pi z L_x \right)}
\exp [-(a z)^2 ]
\end{eqnarray}
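As a consistency check (ours, not the authors'), one can verify numerically that the three contributions of Eq.(\ref{var1}) indeed sum to the original expression Eq.(\ref{var}). In the Python sketch below the semi-infinite integrals are truncated at points where the Gaussian factors make the tails negligible:

```python
import math

def _mid(f, lo, hi, n):
    # midpoint rule on [lo, hi] with n points
    h = (hi - lo) / n
    return h * sum(f(lo + (i + 0.5) * h) for i in range(n))

def sigma2_direct(L_x, a, n=200_000):
    # Eq. (var): single integral over [0, L_x]
    g = lambda k: (1 - k / L_x) * (math.sin(math.pi * k) / (math.pi * k)) ** 2 \
        * math.exp(-((a * k / L_x) ** 2))
    return L_x * (1 - 2 * _mid(g, 0.0, L_x, n))

def sigma2_split(L_x, a, n=200_000):
    # Eq. (var1): the three contributions; tails are cut where the
    # Gaussian factors are ~ exp(-100)
    kcut = 10.0 * L_x / a
    s1 = L_x * (1 - (2 / math.pi ** 2) * _mid(
        lambda k: math.sin(math.pi * k) ** 2 / k ** 2
        * math.exp(-((a * k / L_x) ** 2)), 0.0, kcut, n))
    s2 = (2 / math.pi ** 2) * _mid(
        lambda k: math.sin(math.pi * k) ** 2 / k
        * math.exp(-((a * k / L_x) ** 2)), 0.0, kcut, n)
    s3 = (2 / math.pi ** 2) * _mid(
        lambda z: (1 - z) / z ** 2 * math.sin(math.pi * z * L_x) ** 2
        * math.exp(-((a * z) ** 2)), 1.0, max(2.0, 10.0 / a), n)
    return s1 + s2 + s3
```

The decomposition is an exact identity, so the two evaluations agree to within quadrature accuracy.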
First of all we notice that
for large $a$ the third contribution $\Sigma_2^{(3)}$
is always of order $O(e^{-a^2})$ and
can be neglected.
The relative order of the first and second terms
depends on the ratio $L_x/a$. In a large domain $1\ll L_x\sim a$,
the second term $\Sigma_2^{(2)}$ is much smaller
than $\Sigma_2^{(1)}$. This implies that
the number variance grows like
$\Sigma_2(L_x)=L_xf(L_x/a)$. We find it more transparent to rewrite
the function $f(u)$ in an equivalent form:
$$
f(u)=1+\frac{2}{\sqrt{\pi}}\left\{\frac{1}{2\pi
u}\left(1-e^{-\pi^2u^2}\right)-
\int\limits_{0}^{\pi u}dt\, e^{-t^2}\right\},
$$
which can be obtained from Eq.(\ref{var1}) after a simple
transformation.
For $u=L_x/a\ll 1$ we have simply $f\approx 1$
and hence a linear growth of the number variance. For $u\gg 1$ we have
$f\approx(\pi^{3/2}u)^{-1}$. Thus, the growth of $\Sigma_2(L_x)$ slows down and saturates:
$\Sigma_2(L_x)\approx \frac{a}{\pi^{3/2}}$.
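Both limits can be checked directly from the closed form of $f(u)$. A minimal Python sketch (ours; it uses the error function for the Gaussian integral $\int_0^{\pi u}e^{-t^2}dt=\frac{\sqrt{\pi}}{2}\,\mathrm{erf}(\pi u)$):

```python
import math

def f(u):
    """Scaling function of the number variance, Sigma_2 = L_x * f(L_x / a)."""
    # int_0^{pi u} exp(-t^2) dt = (sqrt(pi)/2) * erf(pi u)
    gauss_int = 0.5 * math.sqrt(math.pi) * math.erf(math.pi * u)
    brace = (1.0 - math.exp(-math.pi**2 * u**2)) / (2.0 * math.pi * u) - gauss_int
    return 1.0 + (2.0 / math.sqrt(math.pi)) * brace
```

Numerically $f(u)\to 1$ as $u\to 0$, while $\pi^{3/2}u\,f(u)\to 1$ for large $u$, in accordance with the two regimes just described.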
Only for exponentially large $L_x$, such that
$\ln{(L_x/a)}\gtrsim a$,
does the term $\Sigma_2^{(2)}$ produce a contribution
comparable with $\Sigma_2^{(1)}$. To make this fact evident
we rewrite
$\Sigma_2^{(2)}$ as:
\begin{equation}\label{sig3}
\Sigma_2^{(2)}=-\frac{2}{\pi^2}\! \int\limits_0^{\infty}\! dk
\ln{\left(\frac{L_x k}{ a}\right)}\frac{\partial}{\partial k}\left\{
\sin^2{\left(\frac{\pi L_x k}{a}\right)}e^{-k^2}\right\}
\end{equation}
For $L_x/a\gg 1$ we can neglect the oscillatory part of the integrand,
effectively substituting $1/2$ for
$\sin^2{\frac{\pi L_x k}{a}}$ in Eq.(\ref{sig3}). The resulting
integral can be evaluated explicitly. Remembering
that $\Sigma_2^{(1)}\mid_{L_x\gg a}\approx a/(2\pi^{3/2})$ we
finally find:
\[
\Sigma_2(L_x\gg a)= \frac{a}{\pi^{3/2}}+\frac{1}{\pi^2}
\left(\ln{\left(\frac{L_x}{ a}\right)}-\frac{\gamma}{2}\right)
\]
where $\gamma$ is Euler's constant. This logarithmic growth
of the number variance is reminiscent of that
typical for real eigenvalues of the Hermitian matrices.
Another important spectral characteristic that
can be simply expressed in terms of the cluster function is
the small-distance
behaviour of the nearest-neighbour distance distribution
\cite{Mehta,Bohigas,Oas}. We present the derivation of
the corresponding relationship in the Appendix.
Substituting the expressions Eqs.(\ref{13},\ref{clexp}) for the mean
density and the cluster function into Eq.(\ref{small}),
one arrives, after
simple algebra, at the probability density of having one
eigenvalue at the point $Z_0=X+iy_0\Delta$ and its
closest neighbour at the distance $|z_1-z_0|=s\Delta,\,\,
\Delta=(\nu(X)N)^{-1}$, such that
$s\ll 1$:
\begin{eqnarray} \label{nns}
\nonumber
p_{\alpha}(X+iy_0\Delta,s\Delta)|_{s\ll 1}&=&
\frac{1}{2\pi}\left[
g_{\alpha}(y_0)\frac{\partial^2}{\partial y_0^2}g_{\alpha}(y_0)-\left(
\frac{\partial}{\partial y_0}g_{\alpha}(y_0)\right)^2\right]
\exp
\left(
-4\frac{y_0^2}{a^2}
\right)
\frac{s^3}{a^2}\times \\
& &
\int\limits_0^{\pi}d\theta
\exp
\left[
-\frac{2}{a^2}(s^2\cos^2{\theta}-2y_0s\cos{\theta})
\right]
\end{eqnarray}
where $g_{\alpha}$ is given by Eq.(\ref{g}).
First of all it is easy to see that in the limit $a\gg 1$
one has: $p_{a\gg 1}(Z_0,s\ll 1)
=\frac{2}{\pi} (s/a^2)^3$ in agreement with the cubic repulsion generic
for strongly non-Hermitian random matrices\cite{Gin,diss,Oas}.
On the other hand, one can readily verify that in
the limit $a\to 0$ we recover the
familiar GUE quadratic level repulsion: $p_{a\to 0}(Z_0,s\ll 1)
\propto \delta(y_0) s^2$.
In general, the expression Eq.(\ref{nns})
describes a smooth crossover between the two regimes, although
for any $a\ne 0$ the repulsion is always cubic for $s\to 0$.
Furthermore, an interesting situation may occur when
deviations from Hermiticity are very weak, $a\ll \sqrt{2}$, and the
`observation points' $Z_0$ are situated sufficiently far
from the real axis: $2|y_0|/a\gg 2^{-1/2}$.
Under this condition the following three regions for the
parameter $s$ should be distinguished:
i) $\frac{s}{a}\ll \frac{a}{4|y_0|}$;
ii) $\frac{a}{4|y_0|}\ll \frac{s}{a}\ll 2\frac{|y_0|}{a}$;
and finally iii) $ 2^{-1/2}\ll 2\frac{|y_0|}{a}\ll \frac{s}{a}\ll
a^{-1}$.
In the regimes i) and ii) the term linear in $\cos{\theta}$
in the exponent of Eq.(\ref{nns}) dominates, so that the angular
integration yields the modified Bessel function
$\pi I_0\left(\frac{4y_0s}{a^2}\right)$. In the regime iii)
the term quadratic in $\cos{\theta}$ dominates producing
$2\pi e^{-(s/a)^2}I_0\left[(s/a)^2\right]\approx
\left(2\pi a/s\right)^{1/2}$.
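When the linear term dominates, the angular integral reduces to the exact identity $\int_0^{\pi}e^{b\cos\theta}\,d\theta=\pi I_0(b)$ with $b=4y_0s/a^2$. This can be confirmed numerically with the following Python sketch (added by us for illustration; the parameter values are chosen to satisfy the regime-i) conditions $a\ll\sqrt{2}$, $2|y_0|/a\gg 2^{-1/2}$, $s/a\ll a/(4|y_0|)$):

```python
import math

def bessel_i0(x):
    """Modified Bessel function I_0(x) via its power series."""
    term, total = 1.0, 1.0
    for m in range(1, 60):
        term *= (x / (2.0 * m)) ** 2
        total += term
    return total

def angular_integral(s, y0, a, n=20_000):
    """Midpoint evaluation of the theta-integral in Eq. (nns)."""
    h = math.pi / n
    acc = 0.0
    for i in range(n):
        c = math.cos((i + 0.5) * h)
        acc += math.exp(-(2.0 / a**2) * (s**2 * c**2 - 2.0 * y0 * s * c))
    return h * acc
```

For, e.g., $a=0.1$, $y_0=0.2$, $s=10^{-3}$ the quadratic term in the exponent is a relative correction of order $s^2/a^2\sim 10^{-4}$, and the numerical integral matches $\pi I_0(4y_0s/a^2)$ to that accuracy.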
As a result, the distribution
$p(Z_0,s)$ displays the following behaviour:
\begin{eqnarray}
\label{5/2}
p_{\alpha}(Z_0,s)&=&
\left\{
\begin{array}{cl}
\displaystyle{
\frac{s^3}{a^2},
}& \mbox{ for} \quad
\displaystyle{
\frac{s}{a}\ll \frac{a}{4|y_0|}
}\\
\displaystyle{
\frac{s^{5/2}}{2a\sqrt{2\pi|y_0|}},
} &\mbox{ for} \quad
\displaystyle{
\frac{a}{4|y_0|}\ll
\frac{s}{a}\ll 2\frac{|y_0|}{a},
}\\
\displaystyle{
\sqrt{\frac{2}{\pi}}
\frac{s^2}{a},
}
&\mbox{ for} \quad
\displaystyle{
2\frac{|y_0|}{a}\ll \frac{s}{a}\ll
a^{-1}
}
\end{array}
\right\}
\times \\
\nonumber
& &
\frac{1}{2}
\left[
g_{0}(y_0)\frac{\partial^2}{\partial y_0^2}g_{0}(y_0)-
\left(
\frac{\partial}{\partial y_0}g_{0}(y_0)
\right)^2
\right]
\exp \left(
-\frac{4y_0^2}{a^2}
\right)
\end{eqnarray}
with $g_0(y)\equiv g_{\alpha}(y)|_{\alpha=0}$.
Unfortunately, the unusual power law $p(s)\propto s^{5/2}$
may be very difficult to detect numerically because of
the low density of complex eigenvalues at
the observation points, reflected by the presence of the
Gaussian factor in the expression Eq.(\ref{5/2}).
\section{Conclusion}
In the present paper we addressed the issue of eigenvalue statistics
of large weakly non-Hermitian matrices. The regime of weak
non-Hermiticity is defined as that for which the imaginary
part $\mathop{\mathrm{Im}} Z$ of a typical complex eigenvalue is of the same order
as the mean eigenvalue separation $\Delta$ of the Hermitian
counterpart.
Exploiting a mapping to the non-linear $\sigma$-model we were able
to show that there are three different ``pure'' classes of weakly
non-Hermitian matrices: i) almost Hermitian with complex entries,
ii) almost symmetric with real entries, and iii) complex symmetric
ones. Within each of these classes the eigenvalue statistics
is {\it universal} in the sense that it is the same irrespective of the
particular distribution of matrix entries, up to
an appropriate rescaling. There are also crossover regimes between
all three classes.
Our demonstration of universality was done explicitly for
the density of complex eigenvalues of matrices
with independent entries. Within the non-linear $\sigma$-model formalism
one can easily provide a heuristic proof of such universality
for higher correlation functions, as well as for ``rotationally
invariant'' matrix ensembles, see \cite{HW}.
The above feature is a great advantage of the supersymmetry technique.
A weak point of that method is a very complicated representation of
the ensuing quantities. It seems that the explicit evaluation of the
higher correlation functions is beyond our reach at the moment, and
even a calculation of the mean density requires considerable effort, see
\cite{FKS1,Efnonh}. As a result, at present
the mean density is known explicitly only
for the cases i) and ii).
Fortunately, because of the mentioned universality
another strategy can be pursued. Namely, one can concentrate
on the particular case of matrices with independent, Gaussian
distributed entries for which alternative analytical techniques might
be available. Such a strategy turned out to be successful for
the simplest case of complex almost-Hermitian matrices, where
we found the problem to be exactly solvable by the method of
orthogonal polynomials. This fact allowed us to extract all the
correlation functions in a mathematically rigorous way\cite{FKS2}.
One might hope that combining the supersymmetric method
and the method of orthogonal polynomials one
will be able to elevate our understanding
of properties of almost-Hermitian
random matrices to the level typical for their Hermitian counterparts.
From this point of view a detailed numerical investigation of
different types of almost-Hermitian random matrices is highly
desirable. Recently, an interesting work in this direction appeared
motivated by the theory of chaotic scattering \cite{reso}.
Weakly non-Hermitian matrices emerging in that theory are
different from the matrices considered in the present paper because
of the
specific form of the skew Hermitian perturbation, see e.g. \cite{FS}.
This fact makes a quantitative comparison of our results
with those
obtained in \cite{reso} impossible. Nevertheless, the qualitative observation that the
number variance increases with increasing non-Hermiticity agrees well with our findings.
Let us finally mention
that the knowledge of the {\it time-delay} correlations \cite{FSR}
allows one to make a plausible conjecture about the form of the number variance for scattering systems with broken time-reversal symmetry.
These results will be published elsewhere \cite{new}.
The financial support
by SFB-237 (Y.V.F. and H.-J.S.) and EPSRC Research Grant GR/L31913
(Y.V.F. and B.A.K.) is acknowledged with thanks.
Y.V.F. is grateful to the School of Mathematical Sciences, Queen Mary\&
Westfield
College, University of London and to the Newton Institute, Cambridge
for the warm hospitality extended to him during
his visits.